Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-4.8/core' of git://git.kernel.dk/linux-block

Pull core block updates from Jens Axboe:

- the big change is the cleanup from Mike Christie, cleaning up our
uses of command types and modified flags. This is what will throw
some merge conflicts

- regression fix for the above for btrfs, from Vincent

- following up on the above, better packing of struct request from
Christoph

- a 2038 fix for blktrace from Arnd

- a few trivial/spelling fixes from Bart Van Assche

- a front merge check fix from Damien, which could cause issues on
SMR drives

- Atari partition fix from Gabriel

- convert cfq to highres timers, since jiffies isn't granular enough
for some devices these days. From Jan and Jeff

- CFQ priority boost fix for idle classes, from me

- cleanup series from Ming, improving our bio/bvec iteration

- a direct issue fix for blk-mq from Omar

- fix for plug merging not involving the IO scheduler, like we do for
other types of merges. From Tahsin

- expose DAX type internally and through sysfs. From Toshi and Yigal

* 'for-4.8/core' of git://git.kernel.dk/linux-block: (76 commits)
block: Fix front merge check
block: do not merge requests without consulting with io scheduler
block: Fix spelling in a source code comment
block: expose QUEUE_FLAG_DAX in sysfs
block: add QUEUE_FLAG_DAX for devices to advertise their DAX support
Btrfs: fix comparison in __btrfs_map_block()
block: atari: Return early for unsupported sector size
Doc: block: Fix a typo in queue-sysfs.txt
cfq-iosched: Charge at least 1 jiffie instead of 1 ns
cfq-iosched: Fix regression in bonnie++ rewrite performance
cfq-iosched: Convert slice_resid from u64 to s64
block: Convert fifo_time from ulong to u64
blktrace: avoid using timespec
block/blk-cgroup.c: Declare local symbols static
block/bio-integrity.c: Add #include "blk.h"
block/partition-generic.c: Remove a set-but-not-used variable
block: bio: kill BIO_MAX_SIZE
cfq-iosched: temporarily boost queue priority for idle classes
block: drbd: avoid to use BIO_MAX_SIZE
block: bio: remove BIO_MAX_SECTORS
...

+1919 -1524
+1 -1
Documentation/block/queue-sysfs.txt
@@ -53,7 +53,7 @@
 
 logical_block_size (RO)
 -----------------------
-This is the logcal block size of the device, in bytes.
+This is the logical block size of the device, in bytes.
 
 max_hw_sectors_kb (RO)
 ----------------------
+14 -14
Documentation/block/writeback_cache_control.txt
@@ -20,11 +20,11 @@
 Explicit cache flushes
 ----------------------
 
-The REQ_FLUSH flag can be OR ed into the r/w flags of a bio submitted from
+The REQ_PREFLUSH flag can be OR ed into the r/w flags of a bio submitted from
 the filesystem and will make sure the volatile cache of the storage device
 has been flushed before the actual I/O operation is started.  This explicitly
 guarantees that previously completed write requests are on non-volatile
-storage before the flagged bio starts. In addition the REQ_FLUSH flag can be
+storage before the flagged bio starts. In addition the REQ_PREFLUSH flag can be
 set on an otherwise empty bio structure, which causes only an explicit cache
 flush without any dependent I/O.  It is recommend to use
 the blkdev_issue_flush() helper for a pure cache flush.
@@ -41,21 +41,21 @@
 Implementation details for filesystems
 --------------------------------------
 
-Filesystems can simply set the REQ_FLUSH and REQ_FUA bits and do not have to
+Filesystems can simply set the REQ_PREFLUSH and REQ_FUA bits and do not have to
 worry if the underlying devices need any explicit cache flushing and how
-the Forced Unit Access is implemented.  The REQ_FLUSH and REQ_FUA flags
+the Forced Unit Access is implemented.  The REQ_PREFLUSH and REQ_FUA flags
 may both be set on a single bio.
 
 
 Implementation details for make_request_fn based block drivers
 --------------------------------------------------------------
 
-These drivers will always see the REQ_FLUSH and REQ_FUA bits as they sit
+These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
 directly below the submit_bio interface.  For remapping drivers the REQ_FUA
 bits need to be propagated to underlying devices, and a global flush needs
-to be implemented for bios with the REQ_FLUSH bit set.  For real device
-drivers that do not have a volatile cache the REQ_FLUSH and REQ_FUA bits
-on non-empty bios can simply be ignored, and REQ_FLUSH requests without
+to be implemented for bios with the REQ_PREFLUSH bit set.  For real device
+drivers that do not have a volatile cache the REQ_PREFLUSH and REQ_FUA bits
+on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
 data can be completed successfully without doing any work.  Drivers for
 devices with volatile caches need to implement the support for these
 flags themselves without any help from the block layer.
@@ -65,17 +65,17 @@
 --------------------------------------------------------------
 
 For devices that do not support volatile write caches there is no driver
-support required, the block layer completes empty REQ_FLUSH requests before
-entering the driver and strips off the REQ_FLUSH and REQ_FUA bits from
+support required, the block layer completes empty REQ_PREFLUSH requests before
+entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
 requests that have a payload.  For devices with volatile write caches the
 driver needs to tell the block layer that it supports flushing caches by
 doing:
 
 	blk_queue_write_cache(sdkp->disk->queue, true, false);
 
-and handle empty REQ_FLUSH requests in its prep_fn/request_fn.  Note that
-REQ_FLUSH requests with a payload are automatically turned into a sequence
-of an empty REQ_FLUSH request followed by the actual write by the block
+and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn.  Note that
+REQ_PREFLUSH requests with a payload are automatically turned into a sequence
+of an empty REQ_OP_FLUSH request followed by the actual write by the block
 layer.  For devices that also support the FUA bit the block layer needs
 to be told to pass through the REQ_FUA bit using:
 
@@ -83,4 +83,4 @@
 
 and the driver must handle write requests that have the REQ_FUA bit set
 in prep_fn/request_fn.  If the FUA bit is not natively supported the block
-layer turns it into an empty REQ_FLUSH request after the actual write.
+layer turns it into an empty REQ_OP_FLUSH request after the actual write.
+5 -5
Documentation/device-mapper/log-writes.txt
@@ -14,14 +14,14 @@
 
 We log things in order of completion once we are sure the write is no longer in
 cache.  This means that normal WRITE requests are not actually logged until the
-next REQ_FLUSH request.  This is to make it easier for userspace to replay the
-log in a way that correlates to what is on disk and not what is in cache, to
-make it easier to detect improper waiting/flushing.
+next REQ_PREFLUSH request.  This is to make it easier for userspace to replay
+the log in a way that correlates to what is on disk and not what is in cache,
+to make it easier to detect improper waiting/flushing.
 
 This works by attaching all WRITE requests to a list once the write completes.
-Once we see a REQ_FLUSH request we splice this list onto the request and once
+Once we see a REQ_PREFLUSH request we splice this list onto the request and once
 the FLUSH request completes we log all of the WRITEs and then the FLUSH.  Only
-completed WRITEs, at the time the REQ_FLUSH is issued, are added in order to
+completed WRITEs, at the time the REQ_PREFLUSH is issued, are added in order to
 simulate the worst case scenario with regard to power failures.  Consider the
 following example (W means write, C means complete):
 
+1 -1
arch/um/drivers/ubd_kern.c
@@ -1286,7 +1286,7 @@
 
	req = dev->request;
 
-	if (req->cmd_flags & REQ_FLUSH) {
+	if (req_op(req) == REQ_OP_FLUSH) {
		io_req = kmalloc(sizeof(struct io_thread_req),
				 GFP_ATOMIC);
		if (io_req == NULL) {
+1
block/bio-integrity.c
@@ -26,6 +26,7 @@
 #include <linux/bio.h>
 #include <linux/workqueue.h>
 #include <linux/slab.h>
+#include "blk.h"
 
 #define BIP_INLINE_VECS	4
 
+9 -11
block/bio.c
@@ -656,16 +656,15 @@
	bio = bio_alloc_bioset(gfp_mask, bio_segments(bio_src), bs);
	if (!bio)
		return NULL;
-
	bio->bi_bdev		= bio_src->bi_bdev;
	bio->bi_rw		= bio_src->bi_rw;
	bio->bi_iter.bi_sector	= bio_src->bi_iter.bi_sector;
	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
 
-	if (bio->bi_rw & REQ_DISCARD)
+	if (bio_op(bio) == REQ_OP_DISCARD)
		goto integrity_clone;
 
-	if (bio->bi_rw & REQ_WRITE_SAME) {
+	if (bio_op(bio) == REQ_OP_WRITE_SAME) {
		bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0];
		goto integrity_clone;
	}
@@ -853,21 +854,20 @@
 
 /**
  * submit_bio_wait - submit a bio, and wait until it completes
- * @rw: whether to %READ or %WRITE, or maybe to %READA (read ahead)
  * @bio: The &struct bio which describes the I/O
  *
  * Simple wrapper around submit_bio(). Returns 0 on success, or the error from
  * bio_endio() on failure.
  */
-int submit_bio_wait(int rw, struct bio *bio)
+int submit_bio_wait(struct bio *bio)
 {
	struct submit_bio_ret ret;
 
-	rw |= REQ_SYNC;
	init_completion(&ret.event);
	bio->bi_private = &ret;
	bio->bi_end_io = submit_bio_wait_endio;
-	submit_bio(rw, bio);
+	bio->bi_rw |= REQ_SYNC;
+	submit_bio(bio);
	wait_for_completion_io(&ret.event);
 
	return ret.error;
@@ -1165,7 +1167,7 @@
		goto out_bmd;
 
	if (iter->type & WRITE)
-		bio->bi_rw |= REQ_WRITE;
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
	ret = 0;
 
@@ -1335,7 +1337,7 @@
	 * set data direction, and check if mapped pages need bouncing
	 */
	if (iter->type & WRITE)
-		bio->bi_rw |= REQ_WRITE;
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
	bio_set_flag(bio, BIO_USER_MAPPED);
 
@@ -1528,7 +1530,7 @@
		bio->bi_private = data;
	} else {
		bio->bi_end_io = bio_copy_kern_endio;
-		bio->bi_rw |= REQ_WRITE;
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
	}
 
	return bio;
@@ -1783,7 +1785,7 @@
	 * Discards need a mutable bio_vec to accommodate the payload
	 * required by the DSM TRIM and UNMAP commands.
	 */
-	if (bio->bi_rw & REQ_DISCARD)
+	if (bio_op(bio) == REQ_OP_DISCARD)
		split = bio_clone_bioset(bio, gfp, bs);
	else
		split = bio_clone_fast(bio, gfp, bs);
+2 -2
block/blk-cgroup.c
@@ -905,7 +905,7 @@
	return 0;
 }
 
-struct cftype blkcg_files[] = {
+static struct cftype blkcg_files[] = {
	{
		.name = "stat",
		.flags = CFTYPE_NOT_ON_ROOT,
@@ -914,7 +914,7 @@
	{ }	/* terminate */
 };
 
-struct cftype blkcg_legacy_files[] = {
+static struct cftype blkcg_legacy_files[] = {
	{
		.name = "reset_stats",
		.write_u64 = blkcg_reset_stats,
+51 -45
block/blk-core.c
@@ -959,10 +959,10 @@
  * A request has just been released.  Account for it, update the full and
  * congestion status, wake up any waiters.  Called under q->queue_lock.
  */
-static void freed_request(struct request_list *rl, unsigned int flags)
+static void freed_request(struct request_list *rl, int op, unsigned int flags)
 {
	struct request_queue *q = rl->q;
-	int sync = rw_is_sync(flags);
+	int sync = rw_is_sync(op, flags);
 
	q->nr_rqs[sync]--;
	rl->count[sync]--;
@@ -1029,7 +1029,7 @@
	 * Flush requests do not use the elevator so skip initialization.
	 * This allows a request to share the flush and elevator data.
	 */
-	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA))
+	if (bio->bi_rw & (REQ_PREFLUSH | REQ_FUA))
		return false;
 
	return true;
@@ -1054,7 +1054,8 @@
 /**
  * __get_request - get a free request
  * @rl: request list to allocate from
- * @rw_flags: RW and SYNC flags
+ * @op: REQ_OP_READ/REQ_OP_WRITE
+ * @op_flags: rq_flag_bits
  * @bio: bio to allocate request for (can be %NULL)
  * @gfp_mask: allocation mask
  *
@@ -1066,21 +1065,22 @@
  * Returns ERR_PTR on failure, with @q->queue_lock held.
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
-static struct request *__get_request(struct request_list *rl, int rw_flags,
-				     struct bio *bio, gfp_t gfp_mask)
+static struct request *__get_request(struct request_list *rl, int op,
+				     int op_flags, struct bio *bio,
+				     gfp_t gfp_mask)
 {
	struct request_queue *q = rl->q;
	struct request *rq;
	struct elevator_type *et = q->elevator->type;
	struct io_context *ioc = rq_ioc(bio);
	struct io_cq *icq = NULL;
-	const bool is_sync = rw_is_sync(rw_flags) != 0;
+	const bool is_sync = rw_is_sync(op, op_flags) != 0;
	int may_queue;
 
	if (unlikely(blk_queue_dying(q)))
		return ERR_PTR(-ENODEV);
 
-	may_queue = elv_may_queue(q, rw_flags);
+	may_queue = elv_may_queue(q, op, op_flags);
	if (may_queue == ELV_MQUEUE_NO)
		goto rq_starved;
 
@@ -1125,7 +1123,7 @@
 
	/*
	 * Decide whether the new request will be managed by elevator.  If
-	 * so, mark @rw_flags and increment elvpriv.  Non-zero elvpriv will
+	 * so, mark @op_flags and increment elvpriv.  Non-zero elvpriv will
	 * prevent the current elevator from being destroyed until the new
	 * request is freed.  This guarantees icq's won't be destroyed and
	 * makes creating new ones safe.
@@ -1134,14 +1132,14 @@
	 * it will be created after releasing queue_lock.
	 */
	if (blk_rq_should_init_elevator(bio) && !blk_queue_bypass(q)) {
-		rw_flags |= REQ_ELVPRIV;
+		op_flags |= REQ_ELVPRIV;
		q->nr_rqs_elvpriv++;
		if (et->icq_cache && ioc)
			icq = ioc_lookup_icq(ioc, q);
	}
 
	if (blk_queue_io_stat(q))
-		rw_flags |= REQ_IO_STAT;
+		op_flags |= REQ_IO_STAT;
	spin_unlock_irq(q->queue_lock);
 
	/* allocate and init request */
@@ -1151,10 +1149,10 @@
 
	blk_rq_init(q, rq);
	blk_rq_set_rl(rq, rl);
-	rq->cmd_flags = rw_flags | REQ_ALLOCED;
+	req_set_op_attrs(rq, op, op_flags | REQ_ALLOCED);
 
	/* init elvpriv */
-	if (rw_flags & REQ_ELVPRIV) {
+	if (op_flags & REQ_ELVPRIV) {
		if (unlikely(et->icq_cache && !icq)) {
			if (ioc)
				icq = ioc_create_icq(ioc, q, gfp_mask);
@@ -1180,7 +1178,7 @@
	if (ioc_batching(q, ioc))
		ioc->nr_batch_requests--;
 
-	trace_block_getrq(q, bio, rw_flags & 1);
+	trace_block_getrq(q, bio, op);
	return rq;
 
 fail_elvpriv:
@@ -1210,7 +1208,7 @@
	 * queue, but this is pretty rare.
	 */
	spin_lock_irq(q->queue_lock);
-	freed_request(rl, rw_flags);
+	freed_request(rl, op, op_flags);
 
	/*
	 * in the very unlikely event that allocation failed and no
@@ -1228,7 +1226,8 @@
 /**
  * get_request - get a free request
  * @q: request_queue to allocate request from
- * @rw_flags: RW and SYNC flags
+ * @op: REQ_OP_READ/REQ_OP_WRITE
+ * @op_flags: rq_flag_bits
  * @bio: bio to allocate request for (can be %NULL)
  * @gfp_mask: allocation mask
  *
@@ -1240,17 +1237,18 @@
  * Returns ERR_PTR on failure, with @q->queue_lock held.
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
-static struct request *get_request(struct request_queue *q, int rw_flags,
-				   struct bio *bio, gfp_t gfp_mask)
+static struct request *get_request(struct request_queue *q, int op,
+				   int op_flags, struct bio *bio,
+				   gfp_t gfp_mask)
 {
-	const bool is_sync = rw_is_sync(rw_flags) != 0;
+	const bool is_sync = rw_is_sync(op, op_flags) != 0;
	DEFINE_WAIT(wait);
	struct request_list *rl;
	struct request *rq;
 
	rl = blk_get_rl(q, bio);	/* transferred to @rq on success */
 retry:
-	rq = __get_request(rl, rw_flags, bio, gfp_mask);
+	rq = __get_request(rl, op, op_flags, bio, gfp_mask);
	if (!IS_ERR(rq))
		return rq;
 
@@ -1264,7 +1260,7 @@
	prepare_to_wait_exclusive(&rl->wait[is_sync], &wait,
				  TASK_UNINTERRUPTIBLE);
 
-	trace_block_sleeprq(q, bio, rw_flags & 1);
+	trace_block_sleeprq(q, bio, op);
 
	spin_unlock_irq(q->queue_lock);
	io_schedule();
@@ -1293,7 +1289,7 @@
	create_io_context(gfp_mask, q->node);
 
	spin_lock_irq(q->queue_lock);
-	rq = get_request(q, rw, NULL, gfp_mask);
+	rq = get_request(q, rw, 0, NULL, gfp_mask);
	if (IS_ERR(rq))
		spin_unlock_irq(q->queue_lock);
	/* q->queue_lock is unlocked at this point */
@@ -1495,13 +1491,14 @@
	 */
	if (req->cmd_flags & REQ_ALLOCED) {
		unsigned int flags = req->cmd_flags;
+		int op = req_op(req);
		struct request_list *rl = blk_rq_rl(req);
 
		BUG_ON(!list_empty(&req->queuelist));
		BUG_ON(ELV_ON_HASH(req));
 
		blk_free_request(rl, req);
-		freed_request(rl, flags);
+		freed_request(rl, op, flags);
		blk_put_rl(rl);
	}
 }
@@ -1717,7 +1712,7 @@
 {
	const bool sync = !!(bio->bi_rw & REQ_SYNC);
	struct blk_plug *plug;
-	int el_ret, rw_flags, where = ELEVATOR_INSERT_SORT;
+	int el_ret, rw_flags = 0, where = ELEVATOR_INSERT_SORT;
	struct request *req;
	unsigned int request_count = 0;
 
@@ -1736,7 +1731,7 @@
		return BLK_QC_T_NONE;
	}
 
-	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
+	if (bio->bi_rw & (REQ_PREFLUSH | REQ_FUA)) {
		spin_lock_irq(q->queue_lock);
		where = ELEVATOR_INSERT_FLUSH;
		goto get_rq;
@@ -1777,15 +1772,19 @@
	 * but we need to set it earlier to expose the sync flag to the
	 * rq allocator and io schedulers.
	 */
-	rw_flags = bio_data_dir(bio);
	if (sync)
		rw_flags |= REQ_SYNC;
+
+	/*
+	 * Add in META/PRIO flags, if set, before we get to the IO scheduler
+	 */
+	rw_flags |= (bio->bi_rw & (REQ_META | REQ_PRIO));
 
	/*
	 * Grab a free request. This is might sleep but can not fail.
	 * Returns with the queue unlocked.
	 */
-	req = get_request(q, rw_flags, bio, GFP_NOIO);
+	req = get_request(q, bio_data_dir(bio), rw_flags, bio, GFP_NOIO);
	if (IS_ERR(req)) {
		bio->bi_error = PTR_ERR(req);
		bio_endio(bio);
@@ -1858,7 +1849,7 @@
	char b[BDEVNAME_SIZE];
 
	printk(KERN_INFO "attempt to access beyond end of device\n");
-	printk(KERN_INFO "%s: rw=%ld, want=%Lu, limit=%Lu\n",
+	printk(KERN_INFO "%s: rw=%d, want=%Lu, limit=%Lu\n",
			bdevname(bio->bi_bdev, b),
			bio->bi_rw,
			(unsigned long long)bio_end_sector(bio),
@@ -1973,23 +1964,23 @@
	 * drivers without flush support don't have to worry
	 * about them.
	 */
-	if ((bio->bi_rw & (REQ_FLUSH | REQ_FUA)) &&
+	if ((bio->bi_rw & (REQ_PREFLUSH | REQ_FUA)) &&
	    !test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
-		bio->bi_rw &= ~(REQ_FLUSH | REQ_FUA);
+		bio->bi_rw &= ~(REQ_PREFLUSH | REQ_FUA);
		if (!nr_sectors) {
			err = 0;
			goto end_io;
		}
	}
 
-	if ((bio->bi_rw & REQ_DISCARD) &&
+	if ((bio_op(bio) == REQ_OP_DISCARD) &&
	    (!blk_queue_discard(q) ||
	     ((bio->bi_rw & REQ_SECURE) && !blk_queue_secdiscard(q)))) {
		err = -EOPNOTSUPP;
		goto end_io;
	}
 
-	if (bio->bi_rw & REQ_WRITE_SAME && !bdev_write_same(bio->bi_bdev)) {
+	if (bio_op(bio) == REQ_OP_WRITE_SAME && !bdev_write_same(bio->bi_bdev)) {
		err = -EOPNOTSUPP;
		goto end_io;
	}
@@ -2103,7 +2094,6 @@
 
 /**
  * submit_bio - submit a bio to the block device layer for I/O
- * @rw: whether to %READ or %WRITE, or maybe to %READA (read ahead)
  * @bio: The &struct bio which describes the I/O
  *
  * submit_bio() is very similar in purpose to generic_make_request(), and
@@ -2110,10 +2102,8 @@
  * interfaces; @bio must be presetup and ready for I/O.
  *
  */
-blk_qc_t submit_bio(int rw, struct bio *bio)
+blk_qc_t submit_bio(struct bio *bio)
 {
-	bio->bi_rw |= rw;
-
	/*
	 * If it's a regular read/write or a barrier with data attached,
	 * go through the normal accounting stuff before submission.
@@ -2120,12 +2114,12 @@
	if (bio_has_data(bio)) {
		unsigned int count;
 
-		if (unlikely(rw & REQ_WRITE_SAME))
+		if (unlikely(bio_op(bio) == REQ_OP_WRITE_SAME))
			count = bdev_logical_block_size(bio->bi_bdev) >> 9;
		else
			count = bio_sectors(bio);
 
-		if (rw & WRITE) {
+		if (op_is_write(bio_op(bio))) {
			count_vm_events(PGPGOUT, count);
		} else {
			task_io_account_read(bio->bi_iter.bi_size);
@@ -2135,7 +2129,7 @@
		char b[BDEVNAME_SIZE];
		printk(KERN_DEBUG "%s(%d): %s block %Lu on %s (%u sectors)\n",
			current->comm, task_pid_nr(current),
-			(rw & WRITE) ? "WRITE" : "READ",
+			op_is_write(bio_op(bio)) ? "WRITE" : "READ",
			(unsigned long long)bio->bi_iter.bi_sector,
			bdevname(bio->bi_bdev, b),
			count);
@@ -2166,7 +2160,7 @@
 static int blk_cloned_rq_check_limits(struct request_queue *q,
				      struct request *rq)
 {
-	if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, rq->cmd_flags)) {
+	if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, req_op(rq))) {
		printk(KERN_ERR "%s: over max size limit.\n", __func__);
		return -EIO;
	}
@@ -2222,7 +2216,7 @@
	 */
	BUG_ON(blk_queued_rq(rq));
 
-	if (rq->cmd_flags & (REQ_FLUSH|REQ_FUA))
+	if (rq->cmd_flags & (REQ_PREFLUSH | REQ_FUA))
		where = ELEVATOR_INSERT_FLUSH;
 
	add_acct_request(q, rq, where);
@@ -2985,8 +2979,7 @@
 void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
		     struct bio *bio)
 {
-	/* Bit 0 (R/W) is identical in rq->cmd_flags and bio->bi_rw */
-	rq->cmd_flags |= bio->bi_rw & REQ_WRITE;
+	req_set_op(rq, bio_op(bio));
 
	if (bio_has_data(bio))
		rq->nr_phys_segments = bio_phys_segments(q, bio);
@@ -3070,7 +3065,8 @@
 static void __blk_rq_prep_clone(struct request *dst, struct request *src)
 {
	dst->cpu = src->cpu;
-	dst->cmd_flags |= (src->cmd_flags & REQ_CLONE_MASK) | REQ_NOMERGE;
+	req_set_op_attrs(dst, req_op(src),
			 (src->cmd_flags & REQ_CLONE_MASK) | REQ_NOMERGE);
	dst->cmd_type = src->cmd_type;
	dst->__sector = blk_rq_pos(src);
	dst->__data_len = blk_rq_bytes(src);
@@ -3316,7 +3310,7 @@
	/*
	 * rq is already accounted, so use raw insert
	 */
-	if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA))
+	if (rq->cmd_flags & (REQ_PREFLUSH | REQ_FUA))
		__elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);
	else
		__elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);
+1 -1
block/blk-exec.c
@@ -62,7 +62,7 @@
 
	/*
	 * don't check dying flag for MQ because the request won't
-	 * be resued after dying flag is set
+	 * be reused after dying flag is set
	 */
	if (q->mq_ops) {
		blk_mq_insert_request(rq, at_head, true, false);
+12 -11
block/blk-flush.c
@@ -10,8 +10,8 @@
  * optional steps - PREFLUSH, DATA and POSTFLUSH - according to the request
  * properties and hardware capability.
  *
- * If a request doesn't have data, only REQ_FLUSH makes sense, which
- * indicates a simple flush request.  If there is data, REQ_FLUSH indicates
+ * If a request doesn't have data, only REQ_PREFLUSH makes sense, which
+ * indicates a simple flush request.  If there is data, REQ_PREFLUSH indicates
  * that the device cache should be flushed before the data is executed, and
  * REQ_FUA means that the data must be on non-volatile media on request
  * completion.
@@ -20,16 +20,16 @@
  * difference.  The requests are either completed immediately if there's no
  * data or executed as normal requests otherwise.
  *
- * If the device has writeback cache and supports FUA, REQ_FLUSH is
+ * If the device has writeback cache and supports FUA, REQ_PREFLUSH is
  * translated to PREFLUSH but REQ_FUA is passed down directly with DATA.
  *
- * If the device has writeback cache and doesn't support FUA, REQ_FLUSH is
- * translated to PREFLUSH and REQ_FUA to POSTFLUSH.
+ * If the device has writeback cache and doesn't support FUA, REQ_PREFLUSH
+ * is translated to PREFLUSH and REQ_FUA to POSTFLUSH.
  *
  * The actual execution of flush is double buffered.  Whenever a request
  * needs to execute PRE or POSTFLUSH, it queues at
  * fq->flush_queue[fq->flush_pending_idx].  Once certain criteria are met, a
- * flush is issued and the pending_idx is toggled.  When the flush
+ * REQ_OP_FLUSH is issued and the pending_idx is toggled.  When the flush
  * completes, all the requests which were pending are proceeded to the next
  * step.  This allows arbitrary merging of different types of FLUSH/FUA
  * requests.
@@ -103,7 +103,7 @@
		policy |= REQ_FSEQ_DATA;
 
	if (fflags & (1UL << QUEUE_FLAG_WC)) {
-		if (rq->cmd_flags & REQ_FLUSH)
+		if (rq->cmd_flags & REQ_PREFLUSH)
			policy |= REQ_FSEQ_PREFLUSH;
		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
		    (rq->cmd_flags & REQ_FUA))
@@ -330,7 +330,7 @@
	}
 
	flush_rq->cmd_type = REQ_TYPE_FS;
-	flush_rq->cmd_flags = WRITE_FLUSH | REQ_FLUSH_SEQ;
+	req_set_op_attrs(flush_rq, REQ_OP_FLUSH, WRITE_FLUSH | REQ_FLUSH_SEQ);
	flush_rq->rq_disk = first_rq->rq_disk;
	flush_rq->end_io = flush_end_io;
 
@@ -391,9 +391,9 @@
 
	/*
	 * @policy now records what operations need to be done.  Adjust
-	 * REQ_FLUSH and FUA for the driver.
+	 * REQ_PREFLUSH and FUA for the driver.
	 */
-	rq->cmd_flags &= ~REQ_FLUSH;
+	rq->cmd_flags &= ~REQ_PREFLUSH;
	if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
		rq->cmd_flags &= ~REQ_FUA;
 
@@ -485,8 +485,9 @@
 
	bio = bio_alloc(gfp_mask, 0);
	bio->bi_bdev = bdev;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH);
 
-	ret = submit_bio_wait(WRITE_FLUSH, bio);
+	ret = submit_bio_wait(bio);
 
	/*
	 * The driver must store the error location in ->bi_sector, if
+17 -14
block/blk-lib.c
@@ -9,21 +9,22 @@
 
 #include "blk.h"
 
-static struct bio *next_bio(struct bio *bio, int rw, unsigned int nr_pages,
+static struct bio *next_bio(struct bio *bio, unsigned int nr_pages,
		gfp_t gfp)
 {
	struct bio *new = bio_alloc(gfp, nr_pages);
 
	if (bio) {
		bio_chain(bio, new);
-		submit_bio(rw, bio);
+		submit_bio(bio);
	}
 
	return new;
 }
 
 int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
-		sector_t nr_sects, gfp_t gfp_mask, int type, struct bio **biop)
+		sector_t nr_sects, gfp_t gfp_mask, int op_flags,
+		struct bio **biop)
 {
	struct request_queue *q = bdev_get_queue(bdev);
	struct bio *bio = *biop;
@@ -35,7 +34,7 @@
		return -ENXIO;
	if (!blk_queue_discard(q))
		return -EOPNOTSUPP;
-	if ((type & REQ_SECURE) && !blk_queue_secdiscard(q))
+	if ((op_flags & REQ_SECURE) && !blk_queue_secdiscard(q))
		return -EOPNOTSUPP;
 
	/* Zero-sector (unknown) and one-sector granularities are the same. */
@@ -63,9 +62,10 @@
			req_sects = end_sect - sector;
		}
 
-		bio = next_bio(bio, type, 1, gfp_mask);
+		bio = next_bio(bio, 1, gfp_mask);
		bio->bi_iter.bi_sector = sector;
		bio->bi_bdev = bdev;
+		bio_set_op_attrs(bio, REQ_OP_DISCARD, op_flags);
 
		bio->bi_iter.bi_size = req_sects << 9;
		nr_sects -= req_sects;
@@ -100,19 +98,19 @@
 int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
		sector_t nr_sects, gfp_t gfp_mask, unsigned long flags)
 {
-	int type = REQ_WRITE | REQ_DISCARD;
+	int op_flags = 0;
	struct bio *bio = NULL;
	struct blk_plug plug;
	int ret;
 
	if (flags & BLKDEV_DISCARD_SECURE)
-		type |= REQ_SECURE;
+		op_flags |= REQ_SECURE;
 
	blk_start_plug(&plug);
-	ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask, type,
+	ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask, op_flags,
			&bio);
	if (!ret && bio) {
-		ret = submit_bio_wait(type, bio);
+		ret = submit_bio_wait(bio);
		if (ret == -EOPNOTSUPP)
			ret = 0;
		bio_put(bio);
@@ -150,13 +148,14 @@
		max_write_same_sectors = UINT_MAX >> 9;
 
	while (nr_sects) {
-		bio = next_bio(bio, REQ_WRITE | REQ_WRITE_SAME, 1, gfp_mask);
+		bio = next_bio(bio, 1, gfp_mask);
		bio->bi_iter.bi_sector = sector;
		bio->bi_bdev = bdev;
		bio->bi_vcnt = 1;
		bio->bi_io_vec->bv_page = page;
		bio->bi_io_vec->bv_offset = 0;
		bio->bi_io_vec->bv_len = bdev_logical_block_size(bdev);
+		bio_set_op_attrs(bio, REQ_OP_WRITE_SAME, 0);
 
		if (nr_sects > max_write_same_sectors) {
			bio->bi_iter.bi_size = max_write_same_sectors << 9;
@@ -170,7 +167,7 @@
	}
 
	if (bio) {
		ret = submit_bio_wait(bio);
		bio_put(bio);
	}
	return ret != -EOPNOTSUPP ? ret : 0;
@@ -196,11 +193,11 @@
	unsigned int sz;
 
	while (nr_sects != 0) {
-		bio = next_bio(bio, WRITE,
-				min(nr_sects, (sector_t)BIO_MAX_PAGES),
+		bio = next_bio(bio, min(nr_sects, (sector_t)BIO_MAX_PAGES),
				gfp_mask);
		bio->bi_iter.bi_sector = sector;
		bio->bi_bdev = bdev;
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
		while (nr_sects != 0) {
			sz = min((sector_t) PAGE_SIZE >> 9 , nr_sects);
@@ -213,7 +210,7 @@
	}
 
	if (bio) {
-		ret = submit_bio_wait(WRITE, bio);
+		ret = submit_bio_wait(bio);
		bio_put(bio);
		return ret;
	}
+1 -1
block/blk-map.c
@@ -224,7 +224,7 @@
		return PTR_ERR(bio);
 
	if (!reading)
-		bio->bi_rw |= REQ_WRITE;
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
	if (do_copy)
		rq->cmd_flags |= REQ_COPY_USER;
+22 -14
block/blk-merge.c
@@ -172,9 +172,9 @@
	struct bio *split, *res;
	unsigned nsegs;
 
-	if ((*bio)->bi_rw & REQ_DISCARD)
+	if (bio_op(*bio) == REQ_OP_DISCARD)
		split = blk_bio_discard_split(q, *bio, bs, &nsegs);
-	else if ((*bio)->bi_rw & REQ_WRITE_SAME)
+	else if (bio_op(*bio) == REQ_OP_WRITE_SAME)
		split = blk_bio_write_same_split(q, *bio, bs, &nsegs);
	else
		split = blk_bio_segment_split(q, *bio, q->bio_split, &nsegs);
@@ -213,10 +213,10 @@
	 * This should probably be returning 0, but blk_add_request_payload()
	 * (Christoph!!!!)
	 */
-	if (bio->bi_rw & REQ_DISCARD)
+	if (bio_op(bio) == REQ_OP_DISCARD)
		return 1;
 
-	if (bio->bi_rw & REQ_WRITE_SAME)
+	if (bio_op(bio) == REQ_OP_WRITE_SAME)
		return 1;
 
	fbio = bio;
@@ -385,7 +385,7 @@
	nsegs = 0;
	cluster = blk_queue_cluster(q);
 
-	if (bio->bi_rw & REQ_DISCARD) {
+	if (bio_op(bio) == REQ_OP_DISCARD) {
		/*
		 * This is a hack - drivers should be neither modifying the
		 * biovec, nor relying on bi_vcnt - but because of
@@ -400,7 +400,7 @@
		return 0;
	}
 
-	if (bio->bi_rw & REQ_WRITE_SAME) {
+	if (bio_op(bio) == REQ_OP_WRITE_SAME) {
 single_segment:
		*sg = sglist;
		bvec = bio_iovec(bio);
@@ -439,7 +439,7 @@
	}
 
	if (q->dma_drain_size && q->dma_drain_needed(rq)) {
-		if (rq->cmd_flags & REQ_WRITE)
+		if (op_is_write(req_op(rq)))
			memset(q->dma_drain_buffer, 0, q->dma_drain_size);
 
		sg_unmark_end(sg);
@@ -500,7 +500,7 @@
	    integrity_req_gap_back_merge(req, bio))
		return 0;
	if (blk_rq_sectors(req) + bio_sectors(bio) >
-	    blk_rq_get_max_sectors(req)) {
+	    blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
		req->cmd_flags |= REQ_NOMERGE;
		if (req == q->last_merge)
			q->last_merge = NULL;
@@ -524,7 +524,7 @@
	    integrity_req_gap_front_merge(req, bio))
		return 0;
	if (blk_rq_sectors(req) + bio_sectors(bio) >
-	    blk_rq_get_max_sectors(req)) {
+	    blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
		req->cmd_flags |= REQ_NOMERGE;
		if (req == q->last_merge)
			q->last_merge = NULL;
@@ -570,7 +570,7 @@
	 * Will it become too large?
	 */
	if ((blk_rq_sectors(req) + blk_rq_sectors(next)) >
-	    blk_rq_get_max_sectors(req))
+	    blk_rq_get_max_sectors(req, blk_rq_pos(req)))
		return 0;
 
	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
@@ -649,7 +649,8 @@
	if (!rq_mergeable(req) || !rq_mergeable(next))
		return 0;
 
-	if (!blk_check_merge_flags(req->cmd_flags, next->cmd_flags))
+	if (!blk_check_merge_flags(req->cmd_flags, req_op(req), next->cmd_flags,
+				   req_op(next)))
		return 0;
 
	/*
@@ -664,7 +663,7 @@
	    || req_no_special_merge(next))
		return 0;
 
-	if (req->cmd_flags & REQ_WRITE_SAME &&
+	if (req_op(req) == REQ_OP_WRITE_SAME &&
	    !blk_write_same_mergeable(req->bio, next->bio))
		return 0;
 
@@ -744,6 +743,12 @@
 int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
			  struct request *next)
 {
+	struct elevator_queue *e = q->elevator;
+
+	if (e->type->ops.elevator_allow_rq_merge_fn)
+		if (!e->type->ops.elevator_allow_rq_merge_fn(q, rq, next))
+			return 0;
+
	return attempt_merge(q, rq, next);
 }
 
@@ -758,7 +751,8 @@
	if (!rq_mergeable(rq) || !bio_mergeable(bio))
		return false;
 
-	if (!blk_check_merge_flags(rq->cmd_flags, bio->bi_rw))
+	if (!blk_check_merge_flags(rq->cmd_flags, req_op(rq), bio->bi_rw,
+				   bio_op(bio)))
		return false;
 
	/* different data direction or already started, don't merge */
@@ -775,7 +767,7 @@
		return false;
 
	/* must be using the same buffer */
-	if (rq->cmd_flags & REQ_WRITE_SAME &&
+	if (req_op(rq) == REQ_OP_WRITE_SAME &&
	    !blk_write_same_mergeable(rq->bio, bio))
		return false;
 
block/blk-mq.c (+22 -20)
···
 EXPORT_SYMBOL(blk_mq_can_queue);
 
 static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
-			       struct request *rq, unsigned int rw_flags)
+			       struct request *rq, int op,
+			       unsigned int op_flags)
 {
 	if (blk_queue_io_stat(q))
-		rw_flags |= REQ_IO_STAT;
+		op_flags |= REQ_IO_STAT;
 
 	INIT_LIST_HEAD(&rq->queuelist);
 	/* csd/requeue_work/fifo_time is initialized before use */
 	rq->q = q;
 	rq->mq_ctx = ctx;
-	rq->cmd_flags |= rw_flags;
+	req_set_op_attrs(rq, op, op_flags);
 	/* do not touch atomic flags, it needs atomic ops against the timer */
 	rq->cpu = -1;
 	INIT_HLIST_NODE(&rq->hash);
···
 	rq->end_io_data = NULL;
 	rq->next_rq = NULL;
 
-	ctx->rq_dispatched[rw_is_sync(rw_flags)]++;
+	ctx->rq_dispatched[rw_is_sync(op, op_flags)]++;
 }
 
 static struct request *
-__blk_mq_alloc_request(struct blk_mq_alloc_data *data, int rw)
+__blk_mq_alloc_request(struct blk_mq_alloc_data *data, int op, int op_flags)
 {
 	struct request *rq;
 	unsigned int tag;
···
 	}
 
 	rq->tag = tag;
-	blk_mq_rq_ctx_init(data->q, data->ctx, rq, rw);
+	blk_mq_rq_ctx_init(data->q, data->ctx, rq, op, op_flags);
 	return rq;
 }
 
···
 	hctx = q->mq_ops->map_queue(q, ctx->cpu);
 	blk_mq_set_alloc_data(&alloc_data, q, flags, ctx, hctx);
 
-	rq = __blk_mq_alloc_request(&alloc_data, rw);
+	rq = __blk_mq_alloc_request(&alloc_data, rw, 0);
 	if (!rq && !(flags & BLK_MQ_REQ_NOWAIT)) {
 		__blk_mq_run_hw_queue(hctx);
 		blk_mq_put_ctx(ctx);
···
 		ctx = blk_mq_get_ctx(q);
 		hctx = q->mq_ops->map_queue(q, ctx->cpu);
 		blk_mq_set_alloc_data(&alloc_data, q, flags, ctx, hctx);
-		rq = __blk_mq_alloc_request(&alloc_data, rw);
+		rq = __blk_mq_alloc_request(&alloc_data, rw, 0);
 		ctx = alloc_data.ctx;
 	}
 	blk_mq_put_ctx(ctx);
···
 		switch (ret) {
 		case BLK_MQ_RQ_QUEUE_OK:
 			queued++;
-			continue;
+			break;
 		case BLK_MQ_RQ_QUEUE_BUSY:
 			list_add(&rq->queuelist, &rq_list);
 			__blk_mq_requeue_request(rq);
···
 	struct blk_mq_hw_ctx *hctx;
 	struct blk_mq_ctx *ctx;
 	struct request *rq;
-	int rw = bio_data_dir(bio);
+	int op = bio_data_dir(bio);
+	int op_flags = 0;
 	struct blk_mq_alloc_data alloc_data;
 
 	blk_queue_enter_live(q);
 	ctx = blk_mq_get_ctx(q);
 	hctx = q->mq_ops->map_queue(q, ctx->cpu);
 
-	if (rw_is_sync(bio->bi_rw))
-		rw |= REQ_SYNC;
+	if (rw_is_sync(bio_op(bio), bio->bi_rw))
+		op_flags |= REQ_SYNC;
 
-	trace_block_getrq(q, bio, rw);
+	trace_block_getrq(q, bio, op);
 	blk_mq_set_alloc_data(&alloc_data, q, BLK_MQ_REQ_NOWAIT, ctx, hctx);
-	rq = __blk_mq_alloc_request(&alloc_data, rw);
+	rq = __blk_mq_alloc_request(&alloc_data, op, op_flags);
 	if (unlikely(!rq)) {
 		__blk_mq_run_hw_queue(hctx);
 		blk_mq_put_ctx(ctx);
-		trace_block_sleeprq(q, bio, rw);
+		trace_block_sleeprq(q, bio, op);
 
 		ctx = blk_mq_get_ctx(q);
 		hctx = q->mq_ops->map_queue(q, ctx->cpu);
 		blk_mq_set_alloc_data(&alloc_data, q, 0, ctx, hctx);
-		rq = __blk_mq_alloc_request(&alloc_data, rw);
+		rq = __blk_mq_alloc_request(&alloc_data, op, op_flags);
 		ctx = alloc_data.ctx;
 		hctx = alloc_data.hctx;
 	}
···
  */
 static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 {
-	const int is_sync = rw_is_sync(bio->bi_rw);
-	const int is_flush_fua = bio->bi_rw & (REQ_FLUSH | REQ_FUA);
+	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_rw);
+	const int is_flush_fua = bio->bi_rw & (REQ_PREFLUSH | REQ_FUA);
 	struct blk_map_ctx data;
 	struct request *rq;
 	unsigned int request_count = 0;
···
  */
 static blk_qc_t blk_sq_make_request(struct request_queue *q, struct bio *bio)
 {
-	const int is_sync = rw_is_sync(bio->bi_rw);
-	const int is_flush_fua = bio->bi_rw & (REQ_FLUSH | REQ_FUA);
+	const int is_sync = rw_is_sync(bio_op(bio), bio->bi_rw);
+	const int is_flush_fua = bio->bi_rw & (REQ_PREFLUSH | REQ_FUA);
 	struct blk_plug *plug;
 	unsigned int request_count = 0;
 	struct blk_map_ctx data;
block/blk-sysfs.c (+11 -0)
···
 	return count;
 }
 
+static ssize_t queue_dax_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(blk_queue_dax(q), page);
+}
+
 static struct queue_sysfs_entry queue_requests_entry = {
 	.attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
 	.show = queue_requests_show,
···
 	.store = queue_wc_store,
 };
 
+static struct queue_sysfs_entry queue_dax_entry = {
+	.attr = {.name = "dax", .mode = S_IRUGO },
+	.show = queue_dax_show,
+};
+
 static struct attribute *default_attrs[] = {
 	&queue_requests_entry.attr,
 	&queue_ra_entry.attr,
···
 	&queue_random_entry.attr,
 	&queue_poll_entry.attr,
 	&queue_wc_entry.attr,
+	&queue_dax_entry.attr,
 	NULL,
 };
 
block/cfq-iosched.c (+260 -172)
···
 #include <linux/slab.h>
 #include <linux/blkdev.h>
 #include <linux/elevator.h>
-#include <linux/jiffies.h>
+#include <linux/ktime.h>
 #include <linux/rbtree.h>
 #include <linux/ioprio.h>
 #include <linux/blktrace_api.h>
···
  */
 /* max queue in one round of service */
 static const int cfq_quantum = 8;
-static const int cfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
+static const u64 cfq_fifo_expire[2] = { NSEC_PER_SEC / 4, NSEC_PER_SEC / 8 };
 /* maximum backwards seek, in KiB */
 static const int cfq_back_max = 16 * 1024;
 /* penalty of a backwards seek */
 static const int cfq_back_penalty = 2;
-static const int cfq_slice_sync = HZ / 10;
-static int cfq_slice_async = HZ / 25;
+static const u64 cfq_slice_sync = NSEC_PER_SEC / 10;
+static u64 cfq_slice_async = NSEC_PER_SEC / 25;
 static const int cfq_slice_async_rq = 2;
-static int cfq_slice_idle = HZ / 125;
-static int cfq_group_idle = HZ / 125;
-static const int cfq_target_latency = HZ * 3/10; /* 300 ms */
+static u64 cfq_slice_idle = NSEC_PER_SEC / 125;
+static u64 cfq_group_idle = NSEC_PER_SEC / 125;
+static const u64 cfq_target_latency = (u64)NSEC_PER_SEC * 3/10; /* 300 ms */
 static const int cfq_hist_divisor = 4;
 
 /*
  * offset from end of service tree
  */
-#define CFQ_IDLE_DELAY		(HZ / 5)
+#define CFQ_IDLE_DELAY		(NSEC_PER_SEC / 5)
 
 /*
  * below this threshold, we consider thinktime immediate
  */
-#define CFQ_MIN_TT		(2)
+#define CFQ_MIN_TT		(2 * NSEC_PER_SEC / HZ)
 
 #define CFQ_SLICE_SCALE		(5)
 #define CFQ_HW_QUEUE_MIN	(5)
···
 #define CFQ_WEIGHT_LEGACY_MAX	1000
 
 struct cfq_ttime {
-	unsigned long last_end_request;
+	u64 last_end_request;
 
-	unsigned long ttime_total;
+	u64 ttime_total;
+	u64 ttime_mean;
 	unsigned long ttime_samples;
-	unsigned long ttime_mean;
 };
 
 /*
···
 	struct cfq_ttime ttime;
 };
 #define CFQ_RB_ROOT	(struct cfq_rb_root) { .rb = RB_ROOT, \
-			.ttime = {.last_end_request = jiffies,},}
+			.ttime = {.last_end_request = ktime_get_ns(),},}
 
 /*
  * Per process-grouping structure
···
 	/* service_tree member */
 	struct rb_node rb_node;
 	/* service_tree key */
-	unsigned long rb_key;
+	u64 rb_key;
 	/* prio tree member */
 	struct rb_node p_node;
 	/* prio tree root we belong to, if any */
···
 	struct list_head fifo;
 
 	/* time when queue got scheduled in to dispatch first request. */
-	unsigned long dispatch_start;
-	unsigned int allocated_slice;
-	unsigned int slice_dispatch;
+	u64 dispatch_start;
+	u64 allocated_slice;
+	u64 slice_dispatch;
 	/* time when first request from queue completed and slice started. */
-	unsigned long slice_start;
-	unsigned long slice_end;
-	long slice_resid;
+	u64 slice_start;
+	u64 slice_end;
+	s64 slice_resid;
 
 	/* pending priority requests */
 	int prio_pending;
···
 
 	/* io prio of this group */
 	unsigned short ioprio, org_ioprio;
-	unsigned short ioprio_class;
+	unsigned short ioprio_class, org_ioprio_class;
 
 	pid_t pid;
 
···
 	struct cfq_rb_root service_trees[2][3];
 	struct cfq_rb_root service_tree_idle;
 
-	unsigned long saved_wl_slice;
+	u64 saved_wl_slice;
 	enum wl_type_t saved_wl_type;
 	enum wl_class_t saved_wl_class;
 
···
 	 */
 	enum wl_class_t serving_wl_class;
 	enum wl_type_t serving_wl_type;
-	unsigned long workload_expires;
+	u64 workload_expires;
 	struct cfq_group *serving_group;
 
 	/*
···
 	/*
 	 * idle window management
 	 */
-	struct timer_list idle_slice_timer;
+	struct hrtimer idle_slice_timer;
 	struct work_struct unplug_work;
 
 	struct cfq_queue *active_queue;
···
 	 * tunables, see top of file
 	 */
 	unsigned int cfq_quantum;
-	unsigned int cfq_fifo_expire[2];
 	unsigned int cfq_back_penalty;
 	unsigned int cfq_back_max;
-	unsigned int cfq_slice[2];
 	unsigned int cfq_slice_async_rq;
-	unsigned int cfq_slice_idle;
-	unsigned int cfq_group_idle;
 	unsigned int cfq_latency;
-	unsigned int cfq_target_latency;
+	u64 cfq_fifo_expire[2];
+	u64 cfq_slice[2];
+	u64 cfq_slice_idle;
+	u64 cfq_group_idle;
+	u64 cfq_target_latency;
 
 	/*
 	 * Fallback dummy cfqq for extreme OOM conditions
 	 */
 	struct cfq_queue oom_cfqq;
 
-	unsigned long last_delayed_sync;
+	u64 last_delayed_sync;
 };
 
 static struct cfq_group *cfq_get_next_cfqg(struct cfq_data *cfqd);
···
 } while (0)
 
 static inline void cfqg_stats_update_io_add(struct cfq_group *cfqg,
-					    struct cfq_group *curr_cfqg, int rw)
+					    struct cfq_group *curr_cfqg, int op,
+					    int op_flags)
 {
-	blkg_rwstat_add(&cfqg->stats.queued, rw, 1);
+	blkg_rwstat_add(&cfqg->stats.queued, op, op_flags, 1);
 	cfqg_stats_end_empty_time(&cfqg->stats);
 	cfqg_stats_set_start_group_wait_time(cfqg, curr_cfqg);
 }
 
 static inline void cfqg_stats_update_timeslice_used(struct cfq_group *cfqg,
-			unsigned long time, unsigned long unaccounted_time)
+			uint64_t time, unsigned long unaccounted_time)
 {
 	blkg_stat_add(&cfqg->stats.time, time);
 #ifdef CONFIG_DEBUG_BLK_CGROUP
···
 #endif
 }
 
-static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg, int rw)
+static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg, int op,
+					       int op_flags)
 {
-	blkg_rwstat_add(&cfqg->stats.queued, rw, -1);
+	blkg_rwstat_add(&cfqg->stats.queued, op, op_flags, -1);
 }
 
-static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg, int rw)
+static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg, int op,
+					       int op_flags)
 {
-	blkg_rwstat_add(&cfqg->stats.merged, rw, 1);
+	blkg_rwstat_add(&cfqg->stats.merged, op, op_flags, 1);
 }
 
 static inline void cfqg_stats_update_completion(struct cfq_group *cfqg,
-			uint64_t start_time, uint64_t io_start_time, int rw)
+			uint64_t start_time, uint64_t io_start_time, int op,
+			int op_flags)
 {
 	struct cfqg_stats *stats = &cfqg->stats;
 	unsigned long long now = sched_clock();
 
 	if (time_after64(now, io_start_time))
-		blkg_rwstat_add(&stats->service_time, rw, now - io_start_time);
+		blkg_rwstat_add(&stats->service_time, op, op_flags,
+				now - io_start_time);
 	if (time_after64(io_start_time, start_time))
-		blkg_rwstat_add(&stats->wait_time, rw,
+		blkg_rwstat_add(&stats->wait_time, op, op_flags,
 				io_start_time - start_time);
 }
 
···
 #define cfq_log_cfqg(cfqd, cfqg, fmt, args...)		do {} while (0)
 
 static inline void cfqg_stats_update_io_add(struct cfq_group *cfqg,
-			struct cfq_group *curr_cfqg, int rw) { }
+			struct cfq_group *curr_cfqg, int op, int op_flags) { }
 static inline void cfqg_stats_update_timeslice_used(struct cfq_group *cfqg,
-			unsigned long time, unsigned long unaccounted_time) { }
-static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg, int rw) { }
-static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg, int rw) { }
+			uint64_t time, unsigned long unaccounted_time) { }
+static inline void cfqg_stats_update_io_remove(struct cfq_group *cfqg, int op,
+			int op_flags) { }
+static inline void cfqg_stats_update_io_merged(struct cfq_group *cfqg, int op,
+			int op_flags) { }
 static inline void cfqg_stats_update_completion(struct cfq_group *cfqg,
-			uint64_t start_time, uint64_t io_start_time, int rw) { }
+			uint64_t start_time, uint64_t io_start_time, int op,
+			int op_flags) { }
 
 #endif	/* CONFIG_CFQ_GROUP_IOSCHED */
 
···
 static inline bool cfq_io_thinktime_big(struct cfq_data *cfqd,
 	struct cfq_ttime *ttime, bool group_idle)
 {
-	unsigned long slice;
+	u64 slice;
 	if (!sample_valid(ttime->ttime_samples))
 		return false;
 	if (group_idle)
···
  * if a queue is marked sync and has sync io queued. A sync queue with async
  * io only, should not get full sync slice length.
  */
-static inline int cfq_prio_slice(struct cfq_data *cfqd, bool sync,
+static inline u64 cfq_prio_slice(struct cfq_data *cfqd, bool sync,
 				 unsigned short prio)
 {
-	const int base_slice = cfqd->cfq_slice[sync];
+	u64 base_slice = cfqd->cfq_slice[sync];
+	u64 slice = div_u64(base_slice, CFQ_SLICE_SCALE);
 
 	WARN_ON(prio >= IOPRIO_BE_NR);
 
-	return base_slice + (base_slice/CFQ_SLICE_SCALE * (4 - prio));
+	return base_slice + (slice * (4 - prio));
 }
 
-static inline int
+static inline u64
 cfq_prio_to_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
 	return cfq_prio_slice(cfqd, cfq_cfqq_sync(cfqq), cfqq->ioprio);
···
  *
  * The result is also in fixed point w/ CFQ_SERVICE_SHIFT.
  */
-static inline u64 cfqg_scale_charge(unsigned long charge,
+static inline u64 cfqg_scale_charge(u64 charge,
 				    unsigned int vfraction)
 {
 	u64 c = charge << CFQ_SERVICE_SHIFT;	/* make it fixed point */
 
 	/* charge / vfraction */
 	c <<= CFQ_SERVICE_SHIFT;
-	do_div(c, vfraction);
-	return c;
+	return div_u64(c, vfraction);
 }
 
 static inline u64 max_vdisktime(u64 min_vdisktime, u64 vdisktime)
···
 	return cfqg->busy_queues_avg[rt];
 }
 
-static inline unsigned
+static inline u64
 cfq_group_slice(struct cfq_data *cfqd, struct cfq_group *cfqg)
 {
 	return cfqd->cfq_target_latency * cfqg->vfraction >> CFQ_SERVICE_SHIFT;
 }
 
-static inline unsigned
+static inline u64
 cfq_scaled_cfqq_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	unsigned slice = cfq_prio_to_slice(cfqd, cfqq);
+	u64 slice = cfq_prio_to_slice(cfqd, cfqq);
 	if (cfqd->cfq_latency) {
 		/*
 		 * interested queues (we consider only the ones with the same
···
 		 */
 		unsigned iq = cfq_group_get_avg_queues(cfqd, cfqq->cfqg,
 						cfq_class_rt(cfqq));
-		unsigned sync_slice = cfqd->cfq_slice[1];
-		unsigned expect_latency = sync_slice * iq;
-		unsigned group_slice = cfq_group_slice(cfqd, cfqq->cfqg);
+		u64 sync_slice = cfqd->cfq_slice[1];
+		u64 expect_latency = sync_slice * iq;
+		u64 group_slice = cfq_group_slice(cfqd, cfqq->cfqg);
 
 		if (expect_latency > group_slice) {
-			unsigned base_low_slice = 2 * cfqd->cfq_slice_idle;
+			u64 base_low_slice = 2 * cfqd->cfq_slice_idle;
+			u64 low_slice;
+
 			/* scale low_slice according to IO priority
 			 * and sync vs async */
-			unsigned low_slice =
-				min(slice, base_low_slice * slice / sync_slice);
+			low_slice = div64_u64(base_low_slice*slice, sync_slice);
+			low_slice = min(slice, low_slice);
 			/* the adapted slice value is scaled to fit all iqs
 			 * into the target latency */
-			slice = max(slice * group_slice / expect_latency,
-				    low_slice);
+			slice = div64_u64(slice*group_slice, expect_latency);
+			slice = max(slice, low_slice);
 		}
 	}
 	return slice;
···
 static inline void
 cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	unsigned slice = cfq_scaled_cfqq_slice(cfqd, cfqq);
+	u64 slice = cfq_scaled_cfqq_slice(cfqd, cfqq);
+	u64 now = ktime_get_ns();
 
-	cfqq->slice_start = jiffies;
-	cfqq->slice_end = jiffies + slice;
+	cfqq->slice_start = now;
+	cfqq->slice_end = now + slice;
 	cfqq->allocated_slice = slice;
-	cfq_log_cfqq(cfqd, cfqq, "set_slice=%lu", cfqq->slice_end - jiffies);
+	cfq_log_cfqq(cfqd, cfqq, "set_slice=%llu", cfqq->slice_end - now);
 }
 
 /*
···
 {
 	if (cfq_cfqq_slice_new(cfqq))
 		return false;
-	if (time_before(jiffies, cfqq->slice_end))
+	if (ktime_get_ns() < cfqq->slice_end)
 		return false;
 
 	return true;
···
 	return cfq_choose_req(cfqd, next, prev, blk_rq_pos(last));
 }
 
-static unsigned long cfq_slice_offset(struct cfq_data *cfqd,
-				      struct cfq_queue *cfqq)
+static u64 cfq_slice_offset(struct cfq_data *cfqd,
+			    struct cfq_queue *cfqq)
 {
 	/*
 	 * just an approximation, should be ok.
···
 	cfqg_stats_update_dequeue(cfqg);
 }
 
-static inline unsigned int cfq_cfqq_slice_usage(struct cfq_queue *cfqq,
-						unsigned int *unaccounted_time)
+static inline u64 cfq_cfqq_slice_usage(struct cfq_queue *cfqq,
+				       u64 *unaccounted_time)
 {
-	unsigned int slice_used;
+	u64 slice_used;
+	u64 now = ktime_get_ns();
 
 	/*
 	 * Queue got expired before even a single request completed or
 	 * got expired immediately after first request completion.
 	 */
-	if (!cfqq->slice_start || cfqq->slice_start == jiffies) {
+	if (!cfqq->slice_start || cfqq->slice_start == now) {
 		/*
 		 * Also charge the seek time incurred to the group, otherwise
 		 * if there are mutiple queues in the group, each can dispatch
 		 * a single request on seeky media and cause lots of seek time
 		 * and group will never know it.
 		 */
-		slice_used = max_t(unsigned, (jiffies - cfqq->dispatch_start),
-					1);
+		slice_used = max_t(u64, (now - cfqq->dispatch_start),
+					jiffies_to_nsecs(1));
 	} else {
-		slice_used = jiffies - cfqq->slice_start;
+		slice_used = now - cfqq->slice_start;
 		if (slice_used > cfqq->allocated_slice) {
 			*unaccounted_time = slice_used - cfqq->allocated_slice;
 			slice_used = cfqq->allocated_slice;
 		}
-		if (time_after(cfqq->slice_start, cfqq->dispatch_start))
+		if (cfqq->slice_start > cfqq->dispatch_start)
 			*unaccounted_time += cfqq->slice_start -
 					cfqq->dispatch_start;
 	}
···
 			struct cfq_queue *cfqq)
 {
 	struct cfq_rb_root *st = &cfqd->grp_service_tree;
-	unsigned int used_sl, charge, unaccounted_sl = 0;
+	u64 used_sl, charge, unaccounted_sl = 0;
 	int nr_sync = cfqg->nr_cfqq - cfqg_busy_async_queues(cfqd, cfqg)
 			- cfqg->service_tree_idle.count;
 	unsigned int vfr;
+	u64 now = ktime_get_ns();
 
 	BUG_ON(nr_sync < 0);
 	used_sl = charge = cfq_cfqq_slice_usage(cfqq, &unaccounted_sl);
···
 	cfq_group_service_tree_add(st, cfqg);
 
 	/* This group is being expired. Save the context */
-	if (time_after(cfqd->workload_expires, jiffies)) {
-		cfqg->saved_wl_slice = cfqd->workload_expires
-						- jiffies;
+	if (cfqd->workload_expires > now) {
+		cfqg->saved_wl_slice = cfqd->workload_expires - now;
 		cfqg->saved_wl_type = cfqd->serving_wl_type;
 		cfqg->saved_wl_class = cfqd->serving_wl_class;
 	} else
···
 	cfq_log_cfqg(cfqd, cfqg, "served: vt=%llu min_vt=%llu", cfqg->vdisktime,
 			st->min_vdisktime);
 	cfq_log_cfqq(cfqq->cfqd, cfqq,
-		     "sl_used=%u disp=%u charge=%u iops=%u sect=%lu",
+		     "sl_used=%llu disp=%llu charge=%llu iops=%u sect=%lu",
 		     used_sl, cfqq->slice_dispatch, charge,
 		     iops_mode(cfqd), cfqq->nr_sectors);
 	cfqg_stats_update_timeslice_used(cfqg, used_sl, unaccounted_sl);
···
 		*st = CFQ_RB_ROOT;
 	RB_CLEAR_NODE(&cfqg->rb_node);
 
-	cfqg->ttime.last_end_request = jiffies;
+	cfqg->ttime.last_end_request = ktime_get_ns();
 }
 
 #ifdef CONFIG_CFQ_GROUP_IOSCHED
···
 {
 	struct rb_node **p, *parent;
 	struct cfq_queue *__cfqq;
-	unsigned long rb_key;
+	u64 rb_key;
 	struct cfq_rb_root *st;
 	int left;
 	int new_cfqq = 1;
+	u64 now = ktime_get_ns();
 
 	st = st_for(cfqq->cfqg, cfqq_class(cfqq), cfqq_type(cfqq));
 	if (cfq_class_idle(cfqq)) {
···
 			__cfqq = rb_entry(parent, struct cfq_queue, rb_node);
 			rb_key += __cfqq->rb_key;
 		} else
-			rb_key += jiffies;
+			rb_key += now;
 	} else if (!add_front) {
 		/*
 		 * Get our rb key offset. Subtract any residual slice
···
 		 * count indicates slice overrun, and this should position
 		 * the next service time further away in the tree.
 		 */
-		rb_key = cfq_slice_offset(cfqd, cfqq) + jiffies;
+		rb_key = cfq_slice_offset(cfqd, cfqq) + now;
 		rb_key -= cfqq->slice_resid;
 		cfqq->slice_resid = 0;
 	} else {
-		rb_key = -HZ;
+		rb_key = -NSEC_PER_SEC;
 		__cfqq = cfq_rb_first(st);
-		rb_key += __cfqq ? __cfqq->rb_key : jiffies;
+		rb_key += __cfqq ? __cfqq->rb_key : now;
 	}
 
 	if (!RB_EMPTY_NODE(&cfqq->rb_node)) {
···
 		/*
 		 * sort by key, that represents service time.
 		 */
-		if (time_before(rb_key, __cfqq->rb_key))
+		if (rb_key < __cfqq->rb_key)
 			p = &parent->rb_left;
 		else {
 			p = &parent->rb_right;
···
 {
 	elv_rb_del(&cfqq->sort_list, rq);
 	cfqq->queued[rq_is_sync(rq)]--;
-	cfqg_stats_update_io_remove(RQ_CFQG(rq), rq->cmd_flags);
+	cfqg_stats_update_io_remove(RQ_CFQG(rq), req_op(rq), rq->cmd_flags);
 	cfq_add_rq_rb(rq);
 	cfqg_stats_update_io_add(RQ_CFQG(rq), cfqq->cfqd->serving_group,
-				 rq->cmd_flags);
+				 req_op(rq), rq->cmd_flags);
 }
 
 static struct request *
···
 	cfq_del_rq_rb(rq);
 
 	cfqq->cfqd->rq_queued--;
-	cfqg_stats_update_io_remove(RQ_CFQG(rq), rq->cmd_flags);
+	cfqg_stats_update_io_remove(RQ_CFQG(rq), req_op(rq), rq->cmd_flags);
 	if (rq->cmd_flags & REQ_PRIO) {
 		WARN_ON(!cfqq->prio_pending);
 		cfqq->prio_pending--;
···
 	struct request *__rq;
 
 	__rq = cfq_find_rq_fmerge(cfqd, bio);
-	if (__rq && elv_rq_merge_ok(__rq, bio)) {
+	if (__rq && elv_bio_merge_ok(__rq, bio)) {
 		*req = __rq;
 		return ELEVATOR_FRONT_MERGE;
 	}
···
 static void cfq_bio_merged(struct request_queue *q, struct request *req,
 			   struct bio *bio)
 {
-	cfqg_stats_update_io_merged(RQ_CFQG(req), bio->bi_rw);
+	cfqg_stats_update_io_merged(RQ_CFQG(req), bio_op(bio), bio->bi_rw);
 }
 
 static void
···
 	 * reposition in fifo if next is older than rq
 	 */
 	if (!list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
-	    time_before(next->fifo_time, rq->fifo_time) &&
+	    next->fifo_time < rq->fifo_time &&
 	    cfqq == RQ_CFQQ(next)) {
 		list_move(&rq->queuelist, &next->queuelist);
 		rq->fifo_time = next->fifo_time;
···
 	if (cfqq->next_rq == next)
 		cfqq->next_rq = rq;
 	cfq_remove_request(next);
-	cfqg_stats_update_io_merged(RQ_CFQG(rq), next->cmd_flags);
+	cfqg_stats_update_io_merged(RQ_CFQG(rq), req_op(next), next->cmd_flags);
 
 	cfqq = RQ_CFQQ(next);
 	/*
···
 		cfq_del_cfqq_rr(cfqd, cfqq);
 }
 
-static int cfq_allow_merge(struct request_queue *q, struct request *rq,
-			   struct bio *bio)
+static int cfq_allow_bio_merge(struct request_queue *q, struct request *rq,
+			       struct bio *bio)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct cfq_io_cq *cic;
···
 	return cfqq == RQ_CFQQ(rq);
 }
 
+static int cfq_allow_rq_merge(struct request_queue *q, struct request *rq,
+			      struct request *next)
+{
+	return RQ_CFQQ(rq) == RQ_CFQQ(next);
+}
+
 static inline void cfq_del_timer(struct cfq_data *cfqd, struct cfq_queue *cfqq)
 {
-	del_timer(&cfqd->idle_slice_timer);
+	hrtimer_try_to_cancel(&cfqd->idle_slice_timer);
 	cfqg_stats_update_idle_time(cfqq->cfqg);
 }
 
···
 				cfqd->serving_wl_class, cfqd->serving_wl_type);
 	cfqg_stats_update_avg_queue_size(cfqq->cfqg);
 	cfqq->slice_start = 0;
-	cfqq->dispatch_start = jiffies;
+	cfqq->dispatch_start = ktime_get_ns();
 	cfqq->allocated_slice = 0;
 	cfqq->slice_end = 0;
 	cfqq->slice_dispatch = 0;
···
 		if (cfq_cfqq_slice_new(cfqq))
 			cfqq->slice_resid = cfq_scaled_cfqq_slice(cfqd, cfqq);
 		else
-			cfqq->slice_resid = cfqq->slice_end - jiffies;
-		cfq_log_cfqq(cfqd, cfqq, "resid=%ld", cfqq->slice_resid);
+			cfqq->slice_resid = cfqq->slice_end - ktime_get_ns();
+		cfq_log_cfqq(cfqd, cfqq, "resid=%lld", cfqq->slice_resid);
 	}
 
 	cfq_group_served(cfqd, cfqq->cfqg, cfqq);
···
 	struct cfq_queue *cfqq = cfqd->active_queue;
 	struct cfq_rb_root *st = cfqq->service_tree;
 	struct cfq_io_cq *cic;
-	unsigned long sl, group_idle = 0;
+	u64 sl, group_idle = 0;
+	u64 now = ktime_get_ns();
 
 	/*
 	 * SSD device without seek penalty, disable idling. But only do so
···
 	 * time slice.
 	 */
 	if (sample_valid(cic->ttime.ttime_samples) &&
-	    (cfqq->slice_end - jiffies < cic->ttime.ttime_mean)) {
-		cfq_log_cfqq(cfqd, cfqq, "Not idling. think_time:%lu",
+	    (cfqq->slice_end - now < cic->ttime.ttime_mean)) {
+		cfq_log_cfqq(cfqd, cfqq, "Not idling. think_time:%llu",
 			     cic->ttime.ttime_mean);
 		return;
 	}
···
 	else
 		sl = cfqd->cfq_slice_idle;
 
-	mod_timer(&cfqd->idle_slice_timer, jiffies + sl);
+	hrtimer_start(&cfqd->idle_slice_timer, ns_to_ktime(sl),
+		      HRTIMER_MODE_REL);
 	cfqg_stats_set_start_idle_time(cfqq->cfqg);
-	cfq_log_cfqq(cfqd, cfqq, "arm_idle: %lu group_idle: %d", sl,
+	cfq_log_cfqq(cfqd, cfqq, "arm_idle: %llu group_idle: %d", sl,
 			group_idle ? 1 : 0);
 }
 
···
 		return NULL;
 
 	rq = rq_entry_fifo(cfqq->fifo.next);
-	if (time_before(jiffies, rq->fifo_time))
+	if (ktime_get_ns() < rq->fifo_time)
 		rq = NULL;
 
 	cfq_log_cfqq(cfqq->cfqd, cfqq, "fifo=%p", rq);
···
 	struct cfq_queue *queue;
 	int i;
 	bool key_valid = false;
-	unsigned long lowest_key = 0;
+	u64 lowest_key = 0;
 	enum wl_type_t cur_best = SYNC_NOIDLE_WORKLOAD;
 
 	for (i = 0; i <= SYNC_WORKLOAD; ++i) {
 		/* select the one with lowest rb_key */
 		queue = cfq_rb_first(st_for(cfqg, wl_class, i));
 		if (queue &&
-		    (!key_valid || time_before(queue->rb_key, lowest_key))) {
+		    (!key_valid || queue->rb_key < lowest_key)) {
 			lowest_key = queue->rb_key;
 			cur_best = i;
 			key_valid = true;
···
 static void
 choose_wl_class_and_type(struct cfq_data *cfqd, struct cfq_group *cfqg)
 {
-	unsigned slice;
+	u64 slice;
 	unsigned count;
 	struct cfq_rb_root *st;
-	unsigned group_slice;
+	u64 group_slice;
 	enum wl_class_t original_class = cfqd->serving_wl_class;
+	u64 now = ktime_get_ns();
 
 	/* Choose next priority. RT > BE > IDLE */
 	if (cfq_group_busy_queues_wl(RT_WORKLOAD, cfqd, cfqg))
···
 		cfqd->serving_wl_class = BE_WORKLOAD;
 	else {
 		cfqd->serving_wl_class = IDLE_WORKLOAD;
-		cfqd->workload_expires = jiffies + 1;
+		cfqd->workload_expires = now + jiffies_to_nsecs(1);
 		return;
 	}
 
···
 	/*
 	 * check workload expiration, and that we still have other queues ready
 	 */
-	if (count && !time_after(jiffies, cfqd->workload_expires))
+	if (count && !(now > cfqd->workload_expires))
 		return;
 
 new_workload:
···
 	 */
 	group_slice = cfq_group_slice(cfqd, cfqg);
 
-	slice = group_slice * count /
+	slice = div_u64(group_slice * count,
 		max_t(unsigned, cfqg->busy_queues_avg[cfqd->serving_wl_class],
 		      cfq_group_busy_queues_wl(cfqd->serving_wl_class, cfqd,
-					cfqg));
+					cfqg)));
 
 	if (cfqd->serving_wl_type == ASYNC_WORKLOAD) {
-		unsigned int tmp;
+		u64 tmp;
 
 		/*
 		 * Async queues are currently system wide. Just taking
···
 		 */
 		tmp = cfqd->cfq_target_latency *
 			cfqg_busy_async_queues(cfqd, cfqg);
-		tmp = tmp/cfqd->busy_queues;
-		slice = min_t(unsigned, slice, tmp);
+		tmp = div_u64(tmp, cfqd->busy_queues);
+		slice = min_t(u64, slice, tmp);
 
 		/* async workload slice is scaled down according to
 		 * the sync/async slice ratio. */
-		slice = slice * cfqd->cfq_slice[0] / cfqd->cfq_slice[1];
+		slice = div64_u64(slice*cfqd->cfq_slice[0], cfqd->cfq_slice[1]);
 	} else
 		/* sync workload slice is at least 2 * cfq_slice_idle */
 		slice = max(slice, 2 * cfqd->cfq_slice_idle);
 
-	slice = max_t(unsigned, slice, CFQ_MIN_TT);
-	cfq_log(cfqd, "workload slice:%d", slice);
-	cfqd->workload_expires = jiffies + slice;
+	slice = max_t(u64, slice, CFQ_MIN_TT);
+	cfq_log(cfqd, "workload slice:%llu", slice);
+	cfqd->workload_expires = now + slice;
 }
 
 static struct cfq_group *cfq_get_next_cfqg(struct cfq_data *cfqd)
···
 static void cfq_choose_cfqg(struct cfq_data *cfqd)
 {
 	struct cfq_group *cfqg = cfq_get_next_cfqg(cfqd);
+	u64 now = ktime_get_ns();
 
 	cfqd->serving_group = cfqg;
 
 	/* Restore the workload type data */
 	if (cfqg->saved_wl_slice) {
-		cfqd->workload_expires = jiffies + cfqg->saved_wl_slice;
+		cfqd->workload_expires = now + cfqg->saved_wl_slice;
 		cfqd->serving_wl_type = cfqg->saved_wl_type;
 		cfqd->serving_wl_class = cfqg->saved_wl_class;
 	} else
-		cfqd->workload_expires = jiffies - 1;
+		cfqd->workload_expires = now - 1;
 
 	choose_wl_class_and_type(cfqd, cfqg);
 }
···
 static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
 {
 	struct cfq_queue *cfqq, *new_cfqq = NULL;
+	u64 now = ktime_get_ns();
 
 	cfqq = cfqd->active_queue;
 	if (!cfqq)
···
 	 * flight or is idling for a new request, allow either of these
 	 * conditions to happen (or time out) before selecting a new queue.
 	 */
-	if (timer_pending(&cfqd->idle_slice_timer)) {
+	if (hrtimer_active(&cfqd->idle_slice_timer)) {
 		cfqq = NULL;
 		goto keep_queue;
 	}
···
 	 **/
 	if (CFQQ_SEEKY(cfqq) && cfq_cfqq_idle_window(cfqq) &&
 	    (cfq_cfqq_slice_new(cfqq) ||
-	    (cfqq->slice_end - jiffies > jiffies - cfqq->slice_start))) {
+	    (cfqq->slice_end - now > now - cfqq->slice_start))) {
 		cfq_clear_cfqq_deep(cfqq);
 		cfq_clear_cfqq_idle_window(cfqq);
 	}
···
 static inline bool cfq_slice_used_soon(struct cfq_data *cfqd,
 	struct cfq_queue *cfqq)
 {
+	u64 now = ktime_get_ns();
+
 	/* the queue hasn't finished any request, can't estimate */
 	if (cfq_cfqq_slice_new(cfqq))
 		return true;
-	if (time_after(jiffies + cfqd->cfq_slice_idle * cfqq->dispatched,
-		cfqq->slice_end))
+	if (now + cfqd->cfq_slice_idle * cfqq->dispatched > cfqq->slice_end)
 		return true;
 
 	return false;
···
 	 * based on the last sync IO we serviced
 	 */
 	if (!cfq_cfqq_sync(cfqq) && cfqd->cfq_latency) {
-		unsigned long last_sync = jiffies - cfqd->last_delayed_sync;
+		u64 last_sync = ktime_get_ns() - cfqd->last_delayed_sync;
 		unsigned int depth;
 
-		depth = last_sync / cfqd->cfq_slice[1];
+		depth = div64_u64(last_sync, cfqd->cfq_slice[1]);
 		if (!depth && !cfqq->dispatched)
 			depth = 1;
 		if (depth < max_dispatch)
···
 	if (cfqd->busy_queues > 1 && ((!cfq_cfqq_sync(cfqq) &&
 	    cfqq->slice_dispatch >= cfq_prio_to_maxrq(cfqd, cfqq)) ||
 	    cfq_class_idle(cfqq))) {
-		cfqq->slice_end = jiffies + 1;
+		cfqq->slice_end = ktime_get_ns() + 1;
 		cfq_slice_expired(cfqd, 0);
 	}
 
···
 {
 	struct cfq_io_cq *cic = icq_to_cic(icq);
 
-	cic->ttime.last_end_request = jiffies;
cic->ttime.last_end_request = ktime_get_ns(); 3653 3628 } 3654 3629 3655 3630 static void cfq_exit_icq(struct io_cq *icq) ··· 3707 3682 * elevate the priority of this queue 3708 3683 */ 3709 3684 cfqq->org_ioprio = cfqq->ioprio; 3685 + cfqq->org_ioprio_class = cfqq->ioprio_class; 3710 3686 cfq_clear_cfqq_prio_changed(cfqq); 3711 3687 } 3712 3688 ··· 3871 3845 } 3872 3846 3873 3847 static void 3874 - __cfq_update_io_thinktime(struct cfq_ttime *ttime, unsigned long slice_idle) 3848 + __cfq_update_io_thinktime(struct cfq_ttime *ttime, u64 slice_idle) 3875 3849 { 3876 - unsigned long elapsed = jiffies - ttime->last_end_request; 3850 + u64 elapsed = ktime_get_ns() - ttime->last_end_request; 3877 3851 elapsed = min(elapsed, 2UL * slice_idle); 3878 3852 3879 3853 ttime->ttime_samples = (7*ttime->ttime_samples + 256) / 8; 3880 - ttime->ttime_total = (7*ttime->ttime_total + 256*elapsed) / 8; 3881 - ttime->ttime_mean = (ttime->ttime_total + 128) / ttime->ttime_samples; 3854 + ttime->ttime_total = div_u64(7*ttime->ttime_total + 256*elapsed, 8); 3855 + ttime->ttime_mean = div64_ul(ttime->ttime_total + 128, 3856 + ttime->ttime_samples); 3882 3857 } 3883 3858 3884 3859 static void ··· 4132 4105 cfq_log_cfqq(cfqd, cfqq, "insert_request"); 4133 4106 cfq_init_prio_data(cfqq, RQ_CIC(rq)); 4134 4107 4135 - rq->fifo_time = jiffies + cfqd->cfq_fifo_expire[rq_is_sync(rq)]; 4108 + rq->fifo_time = ktime_get_ns() + cfqd->cfq_fifo_expire[rq_is_sync(rq)]; 4136 4109 list_add_tail(&rq->queuelist, &cfqq->fifo); 4137 4110 cfq_add_rq_rb(rq); 4138 - cfqg_stats_update_io_add(RQ_CFQG(rq), cfqd->serving_group, 4111 + cfqg_stats_update_io_add(RQ_CFQG(rq), cfqd->serving_group, req_op(rq), 4139 4112 rq->cmd_flags); 4140 4113 cfq_rq_enqueued(cfqd, cfqq, rq); 4141 4114 } ··· 4180 4153 static bool cfq_should_wait_busy(struct cfq_data *cfqd, struct cfq_queue *cfqq) 4181 4154 { 4182 4155 struct cfq_io_cq *cic = cfqd->active_cic; 4156 + u64 now = ktime_get_ns(); 4183 4157 4184 4158 /* If the queue already has 
requests, don't wait */ 4185 4159 if (!RB_EMPTY_ROOT(&cfqq->sort_list)) ··· 4199 4171 4200 4172 /* if slice left is less than think time, wait busy */ 4201 4173 if (cic && sample_valid(cic->ttime.ttime_samples) 4202 - && (cfqq->slice_end - jiffies < cic->ttime.ttime_mean)) 4174 + && (cfqq->slice_end - now < cic->ttime.ttime_mean)) 4203 4175 return true; 4204 4176 4205 4177 /* ··· 4209 4181 * case where think time is less than a jiffy, mark the queue wait 4210 4182 * busy if only 1 jiffy is left in the slice. 4211 4183 */ 4212 - if (cfqq->slice_end - jiffies == 1) 4184 + if (cfqq->slice_end - now <= jiffies_to_nsecs(1)) 4213 4185 return true; 4214 4186 4215 4187 return false; ··· 4220 4192 struct cfq_queue *cfqq = RQ_CFQQ(rq); 4221 4193 struct cfq_data *cfqd = cfqq->cfqd; 4222 4194 const int sync = rq_is_sync(rq); 4223 - unsigned long now; 4195 + u64 now = ktime_get_ns(); 4224 4196 4225 - now = jiffies; 4226 4197 cfq_log_cfqq(cfqd, cfqq, "complete rqnoidle %d", 4227 4198 !!(rq->cmd_flags & REQ_NOIDLE)); 4228 4199 ··· 4233 4206 cfqq->dispatched--; 4234 4207 (RQ_CFQG(rq))->dispatched--; 4235 4208 cfqg_stats_update_completion(cfqq->cfqg, rq_start_time_ns(rq), 4236 - rq_io_start_time_ns(rq), rq->cmd_flags); 4209 + rq_io_start_time_ns(rq), req_op(rq), 4210 + rq->cmd_flags); 4237 4211 4238 4212 cfqd->rq_in_flight[cfq_cfqq_sync(cfqq)]--; 4239 4213 ··· 4250 4222 cfqq_type(cfqq)); 4251 4223 4252 4224 st->ttime.last_end_request = now; 4253 - if (!time_after(rq->start_time + cfqd->cfq_fifo_expire[1], now)) 4225 + /* 4226 + * We have to do this check in jiffies since start_time is in 4227 + * jiffies and it is not trivial to convert to ns. If 4228 + * cfq_fifo_expire[1] ever comes close to 1 jiffie, this test 4229 + * will become problematic but so far we are fine (the default 4230 + * is 128 ms). 
4231 + */ 4232 + if (!time_after(rq->start_time + 4233 + nsecs_to_jiffies(cfqd->cfq_fifo_expire[1]), 4234 + jiffies)) 4254 4235 cfqd->last_delayed_sync = now; 4255 4236 } 4256 4237 ··· 4284 4247 * the queue. 4285 4248 */ 4286 4249 if (cfq_should_wait_busy(cfqd, cfqq)) { 4287 - unsigned long extend_sl = cfqd->cfq_slice_idle; 4250 + u64 extend_sl = cfqd->cfq_slice_idle; 4288 4251 if (!cfqd->cfq_slice_idle) 4289 4252 extend_sl = cfqd->cfq_group_idle; 4290 - cfqq->slice_end = jiffies + extend_sl; 4253 + cfqq->slice_end = now + extend_sl; 4291 4254 cfq_mark_cfqq_wait_busy(cfqq); 4292 4255 cfq_log_cfqq(cfqd, cfqq, "will busy wait"); 4293 4256 } ··· 4312 4275 cfq_schedule_dispatch(cfqd); 4313 4276 } 4314 4277 4278 + static void cfqq_boost_on_prio(struct cfq_queue *cfqq, int op_flags) 4279 + { 4280 + /* 4281 + * If REQ_PRIO is set, boost class and prio level, if it's below 4282 + * BE/NORM. If prio is not set, restore the potentially boosted 4283 + * class/prio level. 4284 + */ 4285 + if (!(op_flags & REQ_PRIO)) { 4286 + cfqq->ioprio_class = cfqq->org_ioprio_class; 4287 + cfqq->ioprio = cfqq->org_ioprio; 4288 + } else { 4289 + if (cfq_class_idle(cfqq)) 4290 + cfqq->ioprio_class = IOPRIO_CLASS_BE; 4291 + if (cfqq->ioprio > IOPRIO_NORM) 4292 + cfqq->ioprio = IOPRIO_NORM; 4293 + } 4294 + } 4295 + 4315 4296 static inline int __cfq_may_queue(struct cfq_queue *cfqq) 4316 4297 { 4317 4298 if (cfq_cfqq_wait_request(cfqq) && !cfq_cfqq_must_alloc_slice(cfqq)) { ··· 4340 4285 return ELV_MQUEUE_MAY; 4341 4286 } 4342 4287 4343 - static int cfq_may_queue(struct request_queue *q, int rw) 4288 + static int cfq_may_queue(struct request_queue *q, int op, int op_flags) 4344 4289 { 4345 4290 struct cfq_data *cfqd = q->elevator->elevator_data; 4346 4291 struct task_struct *tsk = current; ··· 4357 4302 if (!cic) 4358 4303 return ELV_MQUEUE_MAY; 4359 4304 4360 - cfqq = cic_to_cfqq(cic, rw_is_sync(rw)); 4305 + cfqq = cic_to_cfqq(cic, rw_is_sync(op, op_flags)); 4361 4306 if (cfqq) { 4362 4307 
cfq_init_prio_data(cfqq, cic); 4308 + cfqq_boost_on_prio(cfqq, op_flags); 4363 4309 4364 4310 return __cfq_may_queue(cfqq); 4365 4311 } ··· 4491 4435 /* 4492 4436 * Timer running if the active_queue is currently idling inside its time slice 4493 4437 */ 4494 - static void cfq_idle_slice_timer(unsigned long data) 4438 + static enum hrtimer_restart cfq_idle_slice_timer(struct hrtimer *timer) 4495 4439 { 4496 - struct cfq_data *cfqd = (struct cfq_data *) data; 4440 + struct cfq_data *cfqd = container_of(timer, struct cfq_data, 4441 + idle_slice_timer); 4497 4442 struct cfq_queue *cfqq; 4498 4443 unsigned long flags; 4499 4444 int timed_out = 1; ··· 4543 4486 cfq_schedule_dispatch(cfqd); 4544 4487 out_cont: 4545 4488 spin_unlock_irqrestore(cfqd->queue->queue_lock, flags); 4489 + return HRTIMER_NORESTART; 4546 4490 } 4547 4491 4548 4492 static void cfq_shutdown_timer_wq(struct cfq_data *cfqd) 4549 4493 { 4550 - del_timer_sync(&cfqd->idle_slice_timer); 4494 + hrtimer_cancel(&cfqd->idle_slice_timer); 4551 4495 cancel_work_sync(&cfqd->unplug_work); 4552 4496 } 4553 4497 ··· 4644 4586 cfqg_put(cfqd->root_group); 4645 4587 spin_unlock_irq(q->queue_lock); 4646 4588 4647 - init_timer(&cfqd->idle_slice_timer); 4589 + hrtimer_init(&cfqd->idle_slice_timer, CLOCK_MONOTONIC, 4590 + HRTIMER_MODE_REL); 4648 4591 cfqd->idle_slice_timer.function = cfq_idle_slice_timer; 4649 - cfqd->idle_slice_timer.data = (unsigned long) cfqd; 4650 4592 4651 4593 INIT_WORK(&cfqd->unplug_work, cfq_kick_queue); 4652 4594 ··· 4667 4609 * we optimistically start assuming sync ops weren't delayed in last 4668 4610 * second, in order to have larger depth for async operations. 
4669 4611 */ 4670 - cfqd->last_delayed_sync = jiffies - HZ; 4612 + cfqd->last_delayed_sync = ktime_get_ns() - NSEC_PER_SEC; 4671 4613 return 0; 4672 4614 4673 4615 out_free: ··· 4710 4652 static ssize_t __FUNC(struct elevator_queue *e, char *page) \ 4711 4653 { \ 4712 4654 struct cfq_data *cfqd = e->elevator_data; \ 4713 - unsigned int __data = __VAR; \ 4655 + u64 __data = __VAR; \ 4714 4656 if (__CONV) \ 4715 - __data = jiffies_to_msecs(__data); \ 4657 + __data = div_u64(__data, NSEC_PER_MSEC); \ 4716 4658 return cfq_var_show(__data, (page)); \ 4717 4659 } 4718 4660 SHOW_FUNCTION(cfq_quantum_show, cfqd->cfq_quantum, 0); ··· 4729 4671 SHOW_FUNCTION(cfq_target_latency_show, cfqd->cfq_target_latency, 1); 4730 4672 #undef SHOW_FUNCTION 4731 4673 4674 + #define USEC_SHOW_FUNCTION(__FUNC, __VAR) \ 4675 + static ssize_t __FUNC(struct elevator_queue *e, char *page) \ 4676 + { \ 4677 + struct cfq_data *cfqd = e->elevator_data; \ 4678 + u64 __data = __VAR; \ 4679 + __data = div_u64(__data, NSEC_PER_USEC); \ 4680 + return cfq_var_show(__data, (page)); \ 4681 + } 4682 + USEC_SHOW_FUNCTION(cfq_slice_idle_us_show, cfqd->cfq_slice_idle); 4683 + USEC_SHOW_FUNCTION(cfq_group_idle_us_show, cfqd->cfq_group_idle); 4684 + USEC_SHOW_FUNCTION(cfq_slice_sync_us_show, cfqd->cfq_slice[1]); 4685 + USEC_SHOW_FUNCTION(cfq_slice_async_us_show, cfqd->cfq_slice[0]); 4686 + USEC_SHOW_FUNCTION(cfq_target_latency_us_show, cfqd->cfq_target_latency); 4687 + #undef USEC_SHOW_FUNCTION 4688 + 4732 4689 #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \ 4733 4690 static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count) \ 4734 4691 { \ ··· 4755 4682 else if (__data > (MAX)) \ 4756 4683 __data = (MAX); \ 4757 4684 if (__CONV) \ 4758 - *(__PTR) = msecs_to_jiffies(__data); \ 4685 + *(__PTR) = (u64)__data * NSEC_PER_MSEC; \ 4759 4686 else \ 4760 4687 *(__PTR) = __data; \ 4761 4688 return ret; \ ··· 4778 4705 STORE_FUNCTION(cfq_target_latency_store, &cfqd->cfq_target_latency, 1, 
UINT_MAX, 1); 4779 4706 #undef STORE_FUNCTION 4780 4707 4708 + #define USEC_STORE_FUNCTION(__FUNC, __PTR, MIN, MAX) \ 4709 + static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count) \ 4710 + { \ 4711 + struct cfq_data *cfqd = e->elevator_data; \ 4712 + unsigned int __data; \ 4713 + int ret = cfq_var_store(&__data, (page), count); \ 4714 + if (__data < (MIN)) \ 4715 + __data = (MIN); \ 4716 + else if (__data > (MAX)) \ 4717 + __data = (MAX); \ 4718 + *(__PTR) = (u64)__data * NSEC_PER_USEC; \ 4719 + return ret; \ 4720 + } 4721 + USEC_STORE_FUNCTION(cfq_slice_idle_us_store, &cfqd->cfq_slice_idle, 0, UINT_MAX); 4722 + USEC_STORE_FUNCTION(cfq_group_idle_us_store, &cfqd->cfq_group_idle, 0, UINT_MAX); 4723 + USEC_STORE_FUNCTION(cfq_slice_sync_us_store, &cfqd->cfq_slice[1], 1, UINT_MAX); 4724 + USEC_STORE_FUNCTION(cfq_slice_async_us_store, &cfqd->cfq_slice[0], 1, UINT_MAX); 4725 + USEC_STORE_FUNCTION(cfq_target_latency_us_store, &cfqd->cfq_target_latency, 1, UINT_MAX); 4726 + #undef USEC_STORE_FUNCTION 4727 + 4781 4728 #define CFQ_ATTR(name) \ 4782 4729 __ATTR(name, S_IRUGO|S_IWUSR, cfq_##name##_show, cfq_##name##_store) 4783 4730 ··· 4808 4715 CFQ_ATTR(back_seek_max), 4809 4716 CFQ_ATTR(back_seek_penalty), 4810 4717 CFQ_ATTR(slice_sync), 4718 + CFQ_ATTR(slice_sync_us), 4811 4719 CFQ_ATTR(slice_async), 4720 + CFQ_ATTR(slice_async_us), 4812 4721 CFQ_ATTR(slice_async_rq), 4813 4722 CFQ_ATTR(slice_idle), 4723 + CFQ_ATTR(slice_idle_us), 4814 4724 CFQ_ATTR(group_idle), 4725 + CFQ_ATTR(group_idle_us), 4815 4726 CFQ_ATTR(low_latency), 4816 4727 CFQ_ATTR(target_latency), 4728 + CFQ_ATTR(target_latency_us), 4817 4729 __ATTR_NULL 4818 4730 }; 4819 4731 ··· 4827 4729 .elevator_merge_fn = cfq_merge, 4828 4730 .elevator_merged_fn = cfq_merged_request, 4829 4731 .elevator_merge_req_fn = cfq_merged_requests, 4830 - .elevator_allow_merge_fn = cfq_allow_merge, 4732 + .elevator_allow_bio_merge_fn = cfq_allow_bio_merge, 4733 + .elevator_allow_rq_merge_fn = 
cfq_allow_rq_merge, 4831 4734 .elevator_bio_merged_fn = cfq_bio_merged, 4832 4735 .elevator_dispatch_fn = cfq_dispatch_requests, 4833 4736 .elevator_add_req_fn = cfq_insert_request, ··· 4875 4776 { 4876 4777 int ret; 4877 4778 4878 - /* 4879 - * could be 0 on HZ < 1000 setups 4880 - */ 4881 - if (!cfq_slice_async) 4882 - cfq_slice_async = 1; 4883 - if (!cfq_slice_idle) 4884 - cfq_slice_idle = 1; 4885 - 4886 4779 #ifdef CONFIG_CFQ_GROUP_IOSCHED 4887 - if (!cfq_group_idle) 4888 - cfq_group_idle = 1; 4889 - 4890 4780 ret = blkcg_policy_register(&blkcg_policy_cfq); 4891 4781 if (ret) 4892 4782 return ret;
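The think-time estimator converted above keeps a decayed mean of inter-request gaps, now in nanoseconds. A minimal userspace sketch of that update, with `div_u64()`/`div64_ul()` replaced by the plain 64-bit division they compute (field names mirror `struct cfq_ttime`; this is an illustration, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

struct ttime {
	uint64_t ttime_total;        /* decayed sum of samples, ns */
	uint64_t ttime_mean;         /* current mean think time, ns */
	unsigned long ttime_samples; /* decayed sample weight */
};

static void update_io_thinktime(struct ttime *t, uint64_t elapsed_ns,
				uint64_t slice_idle_ns)
{
	/* cap outliers at twice the idle window, as the kernel does */
	if (elapsed_ns > 2 * slice_idle_ns)
		elapsed_ns = 2 * slice_idle_ns;

	/* 7/8 exponential decay with a fixed-point weight of 256 */
	t->ttime_samples = (7 * t->ttime_samples + 256) / 8;
	t->ttime_total = (7 * t->ttime_total + 256 * elapsed_ns) / 8;
	t->ttime_mean = (t->ttime_total + 128) / t->ttime_samples;
}
```

After one update from a zeroed state the mean converges toward the observed gap; repeated updates with a constant gap settle at that gap, which is what `cfq_should_wait_busy()` compares against the remaining slice.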
block/deadline-iosched.c (+4 -3)
···
	if (__rq) {
		BUG_ON(sector != blk_rq_pos(__rq));

-		if (elv_rq_merge_ok(__rq, bio)) {
+		if (elv_bio_merge_ok(__rq, bio)) {
			ret = ELEVATOR_FRONT_MERGE;
			goto out;
		}
···
	 * and move into next position (next will be deleted) in fifo
	 */
	if (!list_empty(&req->queuelist) && !list_empty(&next->queuelist)) {
-		if (time_before(next->fifo_time, req->fifo_time)) {
+		if (time_before((unsigned long)next->fifo_time,
+				(unsigned long)req->fifo_time)) {
			list_move(&req->queuelist, &next->queuelist);
			req->fifo_time = next->fifo_time;
		}
···
	/*
	 * rq is expired!
	 */
-	if (time_after_eq(jiffies, rq->fifo_time))
+	if (time_after_eq(jiffies, (unsigned long)rq->fifo_time))
		return 1;

	return 0;
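The casts added in deadline-iosched above exist because `fifo_time` became a `u64` (for cfq) while deadline still measures it in jiffies: `time_before()`/`time_after_eq()` depend on wraparound-safe signed subtraction of two same-width unsigned counters. A userspace sketch of that idiom (illustrative, not the kernel macro):

```c
#include <assert.h>
#include <limits.h>

/* "a is before b" for a free-running unsigned tick counter; correct
 * across wraparound because the difference is interpreted as signed */
static int time_before_ul(unsigned long a, unsigned long b)
{
	return (long)(a - b) < 0;
}
```

Without the cast back to `unsigned long`, mixing a 64-bit `fifo_time` with 32-bit jiffies on 32-bit machines would defeat this wraparound property.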
block/elevator.c (+14 -15)
···
/*
 * Query io scheduler to see if the current process issuing bio may be
 * merged with rq.
 */
-static int elv_iosched_allow_merge(struct request *rq, struct bio *bio)
+static int elv_iosched_allow_bio_merge(struct request *rq, struct bio *bio)
{
	struct request_queue *q = rq->q;
	struct elevator_queue *e = q->elevator;

-	if (e->type->ops.elevator_allow_merge_fn)
-		return e->type->ops.elevator_allow_merge_fn(q, rq, bio);
+	if (e->type->ops.elevator_allow_bio_merge_fn)
+		return e->type->ops.elevator_allow_bio_merge_fn(q, rq, bio);

	return 1;
}
···
/*
 * can we safely merge with this request?
 */
-bool elv_rq_merge_ok(struct request *rq, struct bio *bio)
+bool elv_bio_merge_ok(struct request *rq, struct bio *bio)
{
	if (!blk_rq_merge_ok(rq, bio))
-		return 0;
+		return false;

-	if (!elv_iosched_allow_merge(rq, bio))
-		return 0;
+	if (!elv_iosched_allow_bio_merge(rq, bio))
+		return false;

-	return 1;
+	return true;
}
-EXPORT_SYMBOL(elv_rq_merge_ok);
+EXPORT_SYMBOL(elv_bio_merge_ok);

static struct elevator_type *elevator_find(const char *name)
{
···
	list_for_each_prev(entry, &q->queue_head) {
		struct request *pos = list_entry_rq(entry);

-		if ((rq->cmd_flags & REQ_DISCARD) !=
-		    (pos->cmd_flags & REQ_DISCARD))
+		if ((req_op(rq) == REQ_OP_DISCARD) != (req_op(pos) == REQ_OP_DISCARD))
			break;
		if (rq_data_dir(rq) != rq_data_dir(pos))
			break;
···
	/*
	 * First try one-hit cache.
	 */
-	if (q->last_merge && elv_rq_merge_ok(q->last_merge, bio)) {
+	if (q->last_merge && elv_bio_merge_ok(q->last_merge, bio)) {
		ret = blk_try_merge(q->last_merge, bio);
		if (ret != ELEVATOR_NO_MERGE) {
			*req = q->last_merge;
···
	 * See if our hash lookup can find a potential backmerge.
	 */
	__rq = elv_rqhash_find(q, bio->bi_iter.bi_sector);
-	if (__rq && elv_rq_merge_ok(__rq, bio)) {
+	if (__rq && elv_bio_merge_ok(__rq, bio)) {
		*req = __rq;
		return ELEVATOR_BACK_MERGE;
	}
···
	e->type->ops.elevator_put_req_fn(rq);
}

-int elv_may_queue(struct request_queue *q, int rw)
+int elv_may_queue(struct request_queue *q, int op, int op_flags)
{
	struct elevator_queue *e = q->elevator;

	if (e->type->ops.elevator_may_queue_fn)
-		return e->type->ops.elevator_may_queue_fn(q, rw);
+		return e->type->ops.elevator_may_queue_fn(q, op, op_flags);

	return ELV_MQUEUE_MAY;
}
block/partition-generic.c (-3)
···
	/* add partitions */
	for (p = 1; p < state->limit; p++) {
		sector_t size, from;
-		struct partition_meta_info *info = NULL;

		size = state->parts[p].size;
		if (!size)
···
		}
	}

-	if (state->parts[p].has_info)
-		info = &state->parts[p].info;
	part = add_partition(disk, p, from, size,
			     state->parts[p].flags,
			     &state->parts[p].info);
+7
block/partitions/atari.c
··· 42 42 int part_fmt = 0; /* 0:unknown, 1:AHDI, 2:ICD/Supra */ 43 43 #endif 44 44 45 + /* 46 + * ATARI partition scheme supports 512 lba only. If this is not 47 + * the case, bail early to avoid miscalculating hd_size. 48 + */ 49 + if (bdev_logical_block_size(state->bdev) != 512) 50 + return 0; 51 + 45 52 rs = read_part_sector(state, 0, &sect); 46 53 if (!rs) 47 54 return -1;
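The Atari fix above bails out on non-512-byte sectors to avoid miscalculating `hd_size`: the AHDI rootsector counts 512-byte units, so reading those counts as device-sector counts on, say, a 4k-sector disk inflates the result eightfold. Illustrative arithmetic only (not the kernel's parsing code):

```c
#include <assert.h>
#include <stdint.h>

/* size in bytes if an AHDI 512-byte-unit count were wrongly scaled by
 * the device's logical block size */
static uint64_t bytes_if_misread(uint32_t count_512_units, uint32_t lbs)
{
	return (uint64_t)count_512_units * lbs;
}
```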
drivers/ata/libata-scsi.c (+1 -1)
···
	if (likely(rq->cmd_type != REQ_TYPE_BLOCK_PC))
		return 0;

-	if (!blk_rq_bytes(rq) || (rq->cmd_flags & REQ_WRITE))
+	if (!blk_rq_bytes(rq) || op_is_write(req_op(rq)))
		return 0;

	return atapi_cmd_type(rq->cmd[0]) == ATAPI_MISC;
drivers/block/brd.c (+4 -2)
···
	if (bio_end_sector(bio) > get_capacity(bdev->bd_disk))
		goto io_error;

-	if (unlikely(bio->bi_rw & REQ_DISCARD)) {
+	if (unlikely(bio_op(bio) == REQ_OP_DISCARD)) {
		if (sector & ((PAGE_SIZE >> SECTOR_SHIFT) - 1) ||
		    bio->bi_iter.bi_size & ~PAGE_MASK)
			goto io_error;
···
	blk_queue_max_discard_sectors(brd->brd_queue, UINT_MAX);
	brd->brd_queue->limits.discard_zeroes_data = 1;
	queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, brd->brd_queue);
-
+#ifdef CONFIG_BLK_DEV_RAM_DAX
+	queue_flag_set_unlocked(QUEUE_FLAG_DAX, brd->brd_queue);
+#endif
	disk = brd->brd_disk = alloc_disk(max_part);
	if (!disk)
		goto out_free_queue;
drivers/block/drbd/drbd_actlog.c (+17 -15)
···
static int _drbd_md_sync_page_io(struct drbd_device *device,
				 struct drbd_backing_dev *bdev,
-				 sector_t sector, int rw)
+				 sector_t sector, int op)
{
	struct bio *bio;
	/* we do all our meta data IO in aligned 4k blocks. */
	const int size = 4096;
-	int err;
+	int err, op_flags = 0;

	device->md_io.done = 0;
	device->md_io.error = -ENODEV;

-	if ((rw & WRITE) && !test_bit(MD_NO_FUA, &device->flags))
-		rw |= REQ_FUA | REQ_FLUSH;
-	rw |= REQ_SYNC | REQ_NOIDLE;
+	if ((op == REQ_OP_WRITE) && !test_bit(MD_NO_FUA, &device->flags))
+		op_flags |= REQ_FUA | REQ_PREFLUSH;
+	op_flags |= REQ_SYNC | REQ_NOIDLE;

	bio = bio_alloc_drbd(GFP_NOIO);
	bio->bi_bdev = bdev->md_bdev;
···
		goto out;
	bio->bi_private = device;
	bio->bi_end_io = drbd_md_endio;
-	bio->bi_rw = rw;
+	bio_set_op_attrs(bio, op, op_flags);

-	if (!(rw & WRITE) && device->state.disk == D_DISKLESS && device->ldev == NULL)
+	if (op != REQ_OP_WRITE && device->state.disk == D_DISKLESS && device->ldev == NULL)
		/* special case, drbd_md_read() during drbd_adm_attach(): no get_ldev */
		;
	else if (!get_ldev_if_state(device, D_ATTACHING)) {
···
	bio_get(bio); /* one bio_put() is in the completion handler */
	atomic_inc(&device->md_io.in_use); /* drbd_md_put_buffer() is in the completion handler */
	device->md_io.submit_jif = jiffies;
-	if (drbd_insert_fault(device, (rw & WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD))
+	if (drbd_insert_fault(device, (op == REQ_OP_WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD))
		bio_io_error(bio);
	else
-		submit_bio(rw, bio);
+		submit_bio(bio);
	wait_until_done_or_force_detached(device, bdev, &device->md_io.done);
	if (!bio->bi_error)
		err = device->md_io.error;
···
}

int drbd_md_sync_page_io(struct drbd_device *device, struct drbd_backing_dev *bdev,
-			 sector_t sector, int rw)
+			 sector_t sector, int op)
{
	int err;
	D_ASSERT(device, atomic_read(&device->md_io.in_use) == 1);
···
	dynamic_drbd_dbg(device, "meta_data io: %s [%d]:%s(,%llus,%s) %pS\n",
	     current->comm, current->pid, __func__,
-	     (unsigned long long)sector, (rw & WRITE) ? "WRITE" : "READ",
+	     (unsigned long long)sector, (op == REQ_OP_WRITE) ? "WRITE" : "READ",
	     (void*)_RET_IP_ );

	if (sector < drbd_md_first_sector(bdev) ||
	    sector + 7 > drbd_md_last_sector(bdev))
		drbd_alert(device, "%s [%d]:%s(,%llus,%s) out of range md access!\n",
		     current->comm, current->pid, __func__,
-		     (unsigned long long)sector, (rw & WRITE) ? "WRITE" : "READ");
+		     (unsigned long long)sector,
+		     (op == REQ_OP_WRITE) ? "WRITE" : "READ");

-	err = _drbd_md_sync_page_io(device, bdev, sector, rw);
+	err = _drbd_md_sync_page_io(device, bdev, sector, op);
	if (err) {
		drbd_err(device, "drbd_md_sync_page_io(,%llus,%s) failed with error %d\n",
-			(unsigned long long)sector, (rw & WRITE) ? "WRITE" : "READ", err);
+			(unsigned long long)sector,
+			(op == REQ_OP_WRITE) ? "WRITE" : "READ", err);
	}
	return err;
}
···
	unsigned long count = 0;
	sector_t esector, nr_sectors;

-	/* This would be an empty REQ_FLUSH, be silent. */
+	/* This would be an empty REQ_PREFLUSH, be silent. */
	if ((mode == SET_OUT_OF_SYNC) && size == 0)
		return 0;
drivers/block/drbd/drbd_bitmap.c (+4 -4)
···
	struct drbd_bitmap *b = device->bitmap;
	struct page *page;
	unsigned int len;
-	unsigned int rw = (ctx->flags & BM_AIO_READ) ? READ : WRITE;
+	unsigned int op = (ctx->flags & BM_AIO_READ) ? REQ_OP_READ : REQ_OP_WRITE;

	sector_t on_disk_sector =
		device->ldev->md.md_offset + device->ldev->md.bm_offset;
···
	bio_add_page(bio, page, len, 0);
	bio->bi_private = ctx;
	bio->bi_end_io = drbd_bm_endio;
+	bio_set_op_attrs(bio, op, 0);

-	if (drbd_insert_fault(device, (rw & WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) {
-		bio->bi_rw |= rw;
+	if (drbd_insert_fault(device, (op == REQ_OP_WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) {
		bio_io_error(bio);
	} else {
-		submit_bio(rw, bio);
+		submit_bio(bio);
		/* this should not count as user activity and cause the
		 * resync to throttle -- see drbd_rs_should_slow_down(). */
		atomic_add(len >> 9, &device->rs_sect_ev);
drivers/block/drbd/drbd_int.h (+4 -4)
···
#endif
#endif

-/* BIO_MAX_SIZE is 256 * PAGE_SIZE,
+/* Estimate max bio size as 256 * PAGE_SIZE,
 * so for typical PAGE_SIZE of 4k, that is (1<<20) Byte.
 * Since we may live in a mixed-platform cluster,
 * we limit us to a platform agnostic constant here for now.
 * A followup commit may allow even bigger BIO sizes,
 * once we thought that through. */
#define DRBD_MAX_BIO_SIZE (1U << 20)
-#if DRBD_MAX_BIO_SIZE > BIO_MAX_SIZE
+#if DRBD_MAX_BIO_SIZE > (BIO_MAX_PAGES << PAGE_SHIFT)
#error Architecture not supported: DRBD_MAX_BIO_SIZE > BIO_MAX_SIZE
#endif
#define DRBD_MAX_BIO_SIZE_SAFE (1U << 12) /* Works always = 4k */
···
extern void *drbd_md_get_buffer(struct drbd_device *device, const char *intent);
extern void drbd_md_put_buffer(struct drbd_device *device);
extern int drbd_md_sync_page_io(struct drbd_device *device,
		struct drbd_backing_dev *bdev, sector_t sector, int op);
extern void drbd_ov_out_of_sync_found(struct drbd_device *, sector_t, int);
extern void wait_until_done_or_force_detached(struct drbd_device *device,
		struct drbd_backing_dev *bdev, unsigned int *done);
···
		bool throttle_if_app_is_waiting);
extern int drbd_submit_peer_request(struct drbd_device *,
				    struct drbd_peer_request *, const unsigned,
-				    const int);
+				    const unsigned, const int);
extern int drbd_free_peer_reqs(struct drbd_device *, struct list_head *);
extern struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *, u64,
						     sector_t, unsigned int,
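The drbd_int.h hunk above replaces the removed `BIO_MAX_SIZE` constant with `BIO_MAX_PAGES << PAGE_SHIFT` in the build-time bound on `DRBD_MAX_BIO_SIZE`. With the common 4k-page configuration the two sides are exactly equal, so the check passes with no slack. A sketch of that arithmetic with illustrative stand-in constants (the `_X` names are not the kernel's):

```c
#include <assert.h>

#define BIO_MAX_PAGES_X 256           /* assumed, matches the comment above */
#define PAGE_SHIFT_X 12               /* 4k pages */
#define DRBD_MAX_BIO_SIZE_X (1U << 20)

/* mirror of the preprocessor check: the constant must not exceed what a
 * single bio can carry */
static int max_bio_size_ok(void)
{
	return !(DRBD_MAX_BIO_SIZE_X > (BIO_MAX_PAGES_X << PAGE_SHIFT_X));
}
```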
drivers/block/drbd/drbd_main.c (+11 -9)
···
	return 0;
}

-static u32 bio_flags_to_wire(struct drbd_connection *connection, unsigned long bi_rw)
+static u32 bio_flags_to_wire(struct drbd_connection *connection,
+			     struct bio *bio)
{
	if (connection->agreed_pro_version >= 95)
-		return (bi_rw & REQ_SYNC ? DP_RW_SYNC : 0) |
-			(bi_rw & REQ_FUA ? DP_FUA : 0) |
-			(bi_rw & REQ_FLUSH ? DP_FLUSH : 0) |
-			(bi_rw & REQ_DISCARD ? DP_DISCARD : 0);
+		return (bio->bi_rw & REQ_SYNC ? DP_RW_SYNC : 0) |
+			(bio->bi_rw & REQ_FUA ? DP_FUA : 0) |
+			(bio->bi_rw & REQ_PREFLUSH ? DP_FLUSH : 0) |
+			(bio_op(bio) == REQ_OP_DISCARD ? DP_DISCARD : 0);
	else
-		return bi_rw & REQ_SYNC ? DP_RW_SYNC : 0;
+		return bio->bi_rw & REQ_SYNC ? DP_RW_SYNC : 0;
}

/* Used to send write or TRIM aka REQ_DISCARD requests
···
	p->sector = cpu_to_be64(req->i.sector);
	p->block_id = (unsigned long)req;
	p->seq_num = cpu_to_be32(atomic_inc_return(&device->packet_seq));
-	dp_flags = bio_flags_to_wire(peer_device->connection, req->master_bio->bi_rw);
+	dp_flags = bio_flags_to_wire(peer_device->connection, req->master_bio);
	if (device->state.conn >= C_SYNC_SOURCE &&
	    device->state.conn <= C_PAUSED_SYNC_T)
		dp_flags |= DP_MAY_SET_IN_SYNC;
···
	D_ASSERT(device, drbd_md_ss(device->ldev) == device->ldev->md.md_offset);
	sector = device->ldev->md.md_offset;

-	if (drbd_md_sync_page_io(device, device->ldev, sector, WRITE)) {
+	if (drbd_md_sync_page_io(device, device->ldev, sector, REQ_OP_WRITE)) {
		/* this was a try anyways ... */
		drbd_err(device, "meta data update failed!\n");
		drbd_chk_io_error(device, 1, DRBD_META_IO_ERROR);
···
	 * Affects the paranoia out-of-range access check in drbd_md_sync_page_io(). */
	bdev->md.md_size_sect = 8;

-	if (drbd_md_sync_page_io(device, bdev, bdev->md.md_offset, READ)) {
+	if (drbd_md_sync_page_io(device, bdev, bdev->md.md_offset,
+				 REQ_OP_READ)) {
		/* NOTE: can't do normal error processing here as this is
		   called BEFORE disk is attached */
		drbd_err(device, "Error while reading metadata.\n");
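`bio_flags_to_wire()` above now takes the whole bio because the discard property moved out of `bi_rw` into the bio's op; the wire format itself is unchanged: each property still maps to one `DP_*` bit. A sketch of that encoding, using the `DP_FUA`/`DP_FLUSH`/`DP_DISCARD` values visible in drbd_protocol.h and an assumed value for `DP_RW_SYNC` (not shown in this diff):

```c
#include <assert.h>
#include <stdint.h>

#define DP_RW_SYNC_X   2   /* assumed value for illustration */
#define DP_FUA_X      16
#define DP_FLUSH_X    32
#define DP_DISCARD_X  64

/* encode the four bio properties into the data-packet flag word */
static uint32_t flags_to_wire(int sync, int fua, int flush, int discard)
{
	return (sync ? DP_RW_SYNC_X : 0) | (fua ? DP_FUA_X : 0) |
	       (flush ? DP_FLUSH_X : 0) | (discard ? DP_DISCARD_X : 0);
}
```

Because each property is an independent bit, the peer can decode them separately, which is exactly what the receiver-side split into op and flags relies on.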
drivers/block/drbd/drbd_protocol.h (+1 -1)
···
#define DP_MAY_SET_IN_SYNC    4
#define DP_UNPLUG             8 /* not used anymore */
#define DP_FUA               16 /* equals REQ_FUA */
-#define DP_FLUSH             32 /* equals REQ_FLUSH */
+#define DP_FLUSH             32 /* equals REQ_PREFLUSH */
#define DP_DISCARD           64 /* equals REQ_DISCARD */
#define DP_SEND_RECEIVE_ACK 128 /* This is a proto B write request */
#define DP_SEND_WRITE_ACK   256 /* This is a proto C write request */
drivers/block/drbd/drbd_receiver.c (+24 -12)
···
/* TODO allocate from our own bio_set. */
int drbd_submit_peer_request(struct drbd_device *device,
			     struct drbd_peer_request *peer_req,
-			     const unsigned rw, const int fault_type)
+			     const unsigned op, const unsigned op_flags,
+			     const int fault_type)
{
	struct bio *bios = NULL;
	struct bio *bio;
···
	/* > peer_req->i.sector, unless this is the first bio */
	bio->bi_iter.bi_sector = sector;
	bio->bi_bdev = device->ldev->backing_bdev;
-	bio->bi_rw = rw;
+	bio_set_op_attrs(bio, op, op_flags);
	bio->bi_private = peer_req;
	bio->bi_end_io = drbd_peer_request_endio;
···
	bios = bio;
	++n_bios;

-	if (rw & REQ_DISCARD) {
+	if (op == REQ_OP_DISCARD) {
		bio->bi_iter.bi_size = data_size;
		goto submit;
	}
···
	spin_unlock_irq(&device->resource->req_lock);

	atomic_add(pi->size >> 9, &device->rs_sect_ev);
-	if (drbd_submit_peer_request(device, peer_req, WRITE, DRBD_FAULT_RS_WR) == 0)
+	if (drbd_submit_peer_request(device, peer_req, REQ_OP_WRITE, 0,
+				     DRBD_FAULT_RS_WR) == 0)
		return 0;

	/* don't care for the reason here */
···
/* see also bio_flags_to_wire()
 * DRBD_REQ_*, because we need to semantically map the flags to data packet
 * flags and back. We may replicate to other kernel versions. */
-static unsigned long wire_flags_to_bio(u32 dpf)
+static unsigned long wire_flags_to_bio_flags(u32 dpf)
{
	return (dpf & DP_RW_SYNC ? REQ_SYNC : 0) |
		(dpf & DP_FUA ? REQ_FUA : 0) |
-		(dpf & DP_FLUSH ? REQ_FLUSH : 0) |
-		(dpf & DP_DISCARD ? REQ_DISCARD : 0);
+		(dpf & DP_FLUSH ? REQ_PREFLUSH : 0);
+}
+
+static unsigned long wire_flags_to_bio_op(u32 dpf)
+{
+	if (dpf & DP_DISCARD)
+		return REQ_OP_DISCARD;
+	else
+		return REQ_OP_WRITE;
}

static void fail_postponed_requests(struct drbd_device *device, sector_t sector,
···
	struct drbd_peer_request *peer_req;
	struct p_data *p = pi->data;
	u32 peer_seq = be32_to_cpu(p->seq_num);
-	int rw = WRITE;
+	int op, op_flags;
	u32 dp_flags;
	int err, tp;
···
	peer_req->flags |= EE_APPLICATION;

	dp_flags = be32_to_cpu(p->dp_flags);
-	rw |= wire_flags_to_bio(dp_flags);
+	op = wire_flags_to_bio_op(dp_flags);
+	op_flags = wire_flags_to_bio_flags(dp_flags);
	if (pi->cmd == P_TRIM) {
		struct request_queue *q = bdev_get_queue(device->ldev->backing_bdev);
		peer_req->flags |= EE_IS_TRIM;
		if (!blk_queue_discard(q))
			peer_req->flags |= EE_IS_TRIM_USE_ZEROOUT;
		D_ASSERT(peer_device, peer_req->i.size > 0);
-		D_ASSERT(peer_device, rw & REQ_DISCARD);
+		D_ASSERT(peer_device, op == REQ_OP_DISCARD);
		D_ASSERT(peer_device, peer_req->pages == NULL);
	} else if (peer_req->pages == NULL) {
		D_ASSERT(device, peer_req->i.size == 0);
···
		peer_req->flags |= EE_CALL_AL_COMPLETE_IO;
	}

-	err = drbd_submit_peer_request(device, peer_req, rw, DRBD_FAULT_DT_WR);
+	err = drbd_submit_peer_request(device, peer_req, op, op_flags,
+				       DRBD_FAULT_DT_WR);
	if (!err)
		return 0;
···
submit:
	update_receiver_timing_details(connection, drbd_submit_peer_request);
	inc_unacked(device);
-	if (drbd_submit_peer_request(device, peer_req, READ, fault_type) == 0)
+	if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ, 0,
+				     fault_type) == 0)
		return 0;

	/* don't care for the reason here */
+1 -1
drivers/block/drbd/drbd_req.c
··· 1132 1132 * replicating, in which case there is no point. */ 1133 1133 if (unlikely(req->i.size == 0)) { 1134 1134 /* The only size==0 bios we expect are empty flushes. */ 1135 - D_ASSERT(device, req->master_bio->bi_rw & REQ_FLUSH); 1135 + D_ASSERT(device, req->master_bio->bi_rw & REQ_PREFLUSH); 1136 1136 if (remote) 1137 1137 _req_mod(req, QUEUE_AS_DRBD_BARRIER); 1138 1138 return remote;
+4 -3
drivers/block/drbd/drbd_worker.c
··· 174 174 struct drbd_peer_request *peer_req = bio->bi_private; 175 175 struct drbd_device *device = peer_req->peer_device->device; 176 176 int is_write = bio_data_dir(bio) == WRITE; 177 - int is_discard = !!(bio->bi_rw & REQ_DISCARD); 177 + int is_discard = !!(bio_op(bio) == REQ_OP_DISCARD); 178 178 179 179 if (bio->bi_error && __ratelimit(&drbd_ratelimit_state)) 180 180 drbd_warn(device, "%s: error=%d s=%llus\n", ··· 248 248 249 249 /* to avoid recursion in __req_mod */ 250 250 if (unlikely(bio->bi_error)) { 251 - if (bio->bi_rw & REQ_DISCARD) 251 + if (bio_op(bio) == REQ_OP_DISCARD) 252 252 what = (bio->bi_error == -EOPNOTSUPP) 253 253 ? DISCARD_COMPLETED_NOTSUPP 254 254 : DISCARD_COMPLETED_WITH_ERROR; ··· 397 397 spin_unlock_irq(&device->resource->req_lock); 398 398 399 399 atomic_add(size >> 9, &device->rs_sect_ev); 400 - if (drbd_submit_peer_request(device, peer_req, READ, DRBD_FAULT_RS_RD) == 0) 400 + if (drbd_submit_peer_request(device, peer_req, REQ_OP_READ, 0, 401 + DRBD_FAULT_RS_RD) == 0) 401 402 return 0; 402 403 403 404 /* If it failed because of ENOMEM, retry should help. If it failed
+2 -1
drivers/block/floppy.c
··· 3822 3822 bio.bi_flags |= (1 << BIO_QUIET); 3823 3823 bio.bi_private = &cbdata; 3824 3824 bio.bi_end_io = floppy_rb0_cb; 3825 + bio_set_op_attrs(&bio, REQ_OP_READ, 0); 3825 3826 3826 - submit_bio(READ, &bio); 3827 + submit_bio(&bio); 3827 3828 process_fd_request(); 3828 3829 3829 3830 init_completion(&cbdata.complete);
+7 -7
drivers/block/loop.c
··· 447 447 448 448 static inline void handle_partial_read(struct loop_cmd *cmd, long bytes) 449 449 { 450 - if (bytes < 0 || (cmd->rq->cmd_flags & REQ_WRITE)) 450 + if (bytes < 0 || op_is_write(req_op(cmd->rq))) 451 451 return; 452 452 453 453 if (unlikely(bytes < blk_rq_bytes(cmd->rq))) { ··· 541 541 542 542 pos = ((loff_t) blk_rq_pos(rq) << 9) + lo->lo_offset; 543 543 544 - if (rq->cmd_flags & REQ_WRITE) { 545 - if (rq->cmd_flags & REQ_FLUSH) 544 + if (op_is_write(req_op(rq))) { 545 + if (req_op(rq) == REQ_OP_FLUSH) 546 546 ret = lo_req_flush(lo, rq); 547 - else if (rq->cmd_flags & REQ_DISCARD) 547 + else if (req_op(rq) == REQ_OP_DISCARD) 548 548 ret = lo_discard(lo, rq, pos); 549 549 else if (lo->transfer) 550 550 ret = lo_write_transfer(lo, rq, pos); ··· 1659 1659 if (lo->lo_state != Lo_bound) 1660 1660 return -EIO; 1661 1661 1662 - if (lo->use_dio && !(cmd->rq->cmd_flags & (REQ_FLUSH | 1663 - REQ_DISCARD))) 1662 + if (lo->use_dio && (req_op(cmd->rq) != REQ_OP_FLUSH || 1663 + req_op(cmd->rq) == REQ_OP_DISCARD)) 1664 1664 cmd->use_aio = true; 1665 1665 else 1666 1666 cmd->use_aio = false; ··· 1672 1672 1673 1673 static void loop_handle_cmd(struct loop_cmd *cmd) 1674 1674 { 1675 - const bool write = cmd->rq->cmd_flags & REQ_WRITE; 1675 + const bool write = op_is_write(req_op(cmd->rq)); 1676 1676 struct loop_device *lo = cmd->rq->q->queuedata; 1677 1677 int ret = 0; 1678 1678
+1 -1
drivers/block/mtip32xx/mtip32xx.c
··· 3765 3765 return -ENODATA; 3766 3766 } 3767 3767 3768 - if (rq->cmd_flags & REQ_DISCARD) { 3768 + if (req_op(rq) == REQ_OP_DISCARD) { 3769 3769 int err; 3770 3770 3771 3771 err = mtip_send_trim(dd, blk_rq_pos(rq), blk_rq_sectors(rq));
+2 -2
drivers/block/nbd.c
··· 282 282 283 283 if (req->cmd_type == REQ_TYPE_DRV_PRIV) 284 284 type = NBD_CMD_DISC; 285 - else if (req->cmd_flags & REQ_DISCARD) 285 + else if (req_op(req) == REQ_OP_DISCARD) 286 286 type = NBD_CMD_TRIM; 287 - else if (req->cmd_flags & REQ_FLUSH) 287 + else if (req_op(req) == REQ_OP_FLUSH) 288 288 type = NBD_CMD_FLUSH; 289 289 else if (rq_data_dir(req) == WRITE) 290 290 type = NBD_CMD_WRITE;
+1 -1
drivers/block/osdblk.c
··· 321 321 * driver-specific, etc. 322 322 */ 323 323 324 - do_flush = rq->cmd_flags & REQ_FLUSH; 324 + do_flush = (req_op(rq) == REQ_OP_FLUSH); 325 325 do_write = (rq_data_dir(rq) == WRITE); 326 326 327 327 if (!do_flush) { /* osd_flush does not use a bio */
+2 -2
drivers/block/pktcdvd.c
··· 1074 1074 BUG(); 1075 1075 1076 1076 atomic_inc(&pkt->io_wait); 1077 - bio->bi_rw = READ; 1077 + bio_set_op_attrs(bio, REQ_OP_READ, 0); 1078 1078 pkt_queue_bio(pd, bio); 1079 1079 frames_read++; 1080 1080 } ··· 1336 1336 1337 1337 /* Start the write request */ 1338 1338 atomic_set(&pkt->io_wait, 1); 1339 - pkt->w_bio->bi_rw = WRITE; 1339 + bio_set_op_attrs(pkt->w_bio, REQ_OP_WRITE, 0); 1340 1340 pkt_queue_bio(pd, pkt->w_bio); 1341 1341 } 1342 1342
+2 -2
drivers/block/ps3disk.c
··· 196 196 dev_dbg(&dev->sbd.core, "%s:%u\n", __func__, __LINE__); 197 197 198 198 while ((req = blk_fetch_request(q))) { 199 - if (req->cmd_flags & REQ_FLUSH) { 199 + if (req_op(req) == REQ_OP_FLUSH) { 200 200 if (ps3disk_submit_flush_request(dev, req)) 201 201 break; 202 202 } else if (req->cmd_type == REQ_TYPE_FS) { ··· 256 256 return IRQ_HANDLED; 257 257 } 258 258 259 - if (req->cmd_flags & REQ_FLUSH) { 259 + if (req_op(req) == REQ_OP_FLUSH) { 260 260 read = 0; 261 261 op = "flush"; 262 262 } else {
+2 -2
drivers/block/rbd.c
··· 3286 3286 goto err; 3287 3287 } 3288 3288 3289 - if (rq->cmd_flags & REQ_DISCARD) 3289 + if (req_op(rq) == REQ_OP_DISCARD) 3290 3290 op_type = OBJ_OP_DISCARD; 3291 - else if (rq->cmd_flags & REQ_WRITE) 3291 + else if (req_op(rq) == REQ_OP_WRITE) 3292 3292 op_type = OBJ_OP_WRITE; 3293 3293 else 3294 3294 op_type = OBJ_OP_READ;
+1 -1
drivers/block/rsxx/dma.c
··· 705 705 dma_cnt[i] = 0; 706 706 } 707 707 708 - if (bio->bi_rw & REQ_DISCARD) { 708 + if (bio_op(bio) == REQ_OP_DISCARD) { 709 709 bv_len = bio->bi_iter.bi_size; 710 710 711 711 while (bv_len > 0) {
+1 -1
drivers/block/skd_main.c
··· 597 597 data_dir = rq_data_dir(req); 598 598 io_flags = req->cmd_flags; 599 599 600 - if (io_flags & REQ_FLUSH) 600 + if (req_op(req) == REQ_OP_FLUSH) 601 601 flush++; 602 602 603 603 if (io_flags & REQ_FUA)
+1 -1
drivers/block/umem.c
··· 462 462 le32_to_cpu(desc->local_addr)>>9, 463 463 le32_to_cpu(desc->transfer_size)); 464 464 dump_dmastat(card, control); 465 - } else if ((bio->bi_rw & REQ_WRITE) && 465 + } else if (op_is_write(bio_op(bio)) && 466 466 le32_to_cpu(desc->local_addr) >> 9 == 467 467 card->init_size) { 468 468 card->init_size += le32_to_cpu(desc->transfer_size) >> 9;
+1 -1
drivers/block/virtio_blk.c
··· 172 172 BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems); 173 173 174 174 vbr->req = req; 175 - if (req->cmd_flags & REQ_FLUSH) { 175 + if (req_op(req) == REQ_OP_FLUSH) { 176 176 vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_FLUSH); 177 177 vbr->out_hdr.sector = 0; 178 178 vbr->out_hdr.ioprio = cpu_to_virtio32(vblk->vdev, req_get_ioprio(vbr->req));
+16 -11
drivers/block/xen-blkback/blkback.c
··· 501 501 struct xen_vbd *vbd = &blkif->vbd; 502 502 int rc = -EACCES; 503 503 504 - if ((operation != READ) && vbd->readonly) 504 + if ((operation != REQ_OP_READ) && vbd->readonly) 505 505 goto out; 506 506 507 507 if (likely(req->nr_sects)) { ··· 1014 1014 preq.sector_number = req->u.discard.sector_number; 1015 1015 preq.nr_sects = req->u.discard.nr_sectors; 1016 1016 1017 - err = xen_vbd_translate(&preq, blkif, WRITE); 1017 + err = xen_vbd_translate(&preq, blkif, REQ_OP_WRITE); 1018 1018 if (err) { 1019 1019 pr_warn("access denied: DISCARD [%llu->%llu] on dev=%04x\n", 1020 1020 preq.sector_number, ··· 1229 1229 struct bio **biolist = pending_req->biolist; 1230 1230 int i, nbio = 0; 1231 1231 int operation; 1232 + int operation_flags = 0; 1232 1233 struct blk_plug plug; 1233 1234 bool drain = false; 1234 1235 struct grant_page **pages = pending_req->segments; ··· 1248 1247 switch (req_operation) { 1249 1248 case BLKIF_OP_READ: 1250 1249 ring->st_rd_req++; 1251 - operation = READ; 1250 + operation = REQ_OP_READ; 1252 1251 break; 1253 1252 case BLKIF_OP_WRITE: 1254 1253 ring->st_wr_req++; 1255 - operation = WRITE_ODIRECT; 1254 + operation = REQ_OP_WRITE; 1255 + operation_flags = WRITE_ODIRECT; 1256 1256 break; 1257 1257 case BLKIF_OP_WRITE_BARRIER: 1258 1258 drain = true; 1259 1259 case BLKIF_OP_FLUSH_DISKCACHE: 1260 1260 ring->st_f_req++; 1261 - operation = WRITE_FLUSH; 1261 + operation = REQ_OP_WRITE; 1262 + operation_flags = WRITE_FLUSH; 1262 1263 break; 1263 1264 default: 1264 1265 operation = 0; /* make gcc happy */ ··· 1272 1269 nseg = req->operation == BLKIF_OP_INDIRECT ? 
1273 1270 req->u.indirect.nr_segments : req->u.rw.nr_segments; 1274 1271 1275 - if (unlikely(nseg == 0 && operation != WRITE_FLUSH) || 1272 + if (unlikely(nseg == 0 && operation_flags != WRITE_FLUSH) || 1276 1273 unlikely((req->operation != BLKIF_OP_INDIRECT) && 1277 1274 (nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) || 1278 1275 unlikely((req->operation == BLKIF_OP_INDIRECT) && ··· 1313 1310 1314 1311 if (xen_vbd_translate(&preq, ring->blkif, operation) != 0) { 1315 1312 pr_debug("access denied: %s of [%llu,%llu] on dev=%04x\n", 1316 - operation == READ ? "read" : "write", 1313 + operation == REQ_OP_READ ? "read" : "write", 1317 1314 preq.sector_number, 1318 1315 preq.sector_number + preq.nr_sects, 1319 1316 ring->blkif->vbd.pdevice); ··· 1372 1369 bio->bi_private = pending_req; 1373 1370 bio->bi_end_io = end_block_io_op; 1374 1371 bio->bi_iter.bi_sector = preq.sector_number; 1372 + bio_set_op_attrs(bio, operation, operation_flags); 1375 1373 } 1376 1374 1377 1375 preq.sector_number += seg[i].nsec; ··· 1380 1376 1381 1377 /* This will be hit if the operation was a flush or discard. */ 1382 1378 if (!bio) { 1383 - BUG_ON(operation != WRITE_FLUSH); 1379 + BUG_ON(operation_flags != WRITE_FLUSH); 1384 1380 1385 1381 bio = bio_alloc(GFP_KERNEL, 0); 1386 1382 if (unlikely(bio == NULL)) ··· 1390 1386 bio->bi_bdev = preq.bdev; 1391 1387 bio->bi_private = pending_req; 1392 1388 bio->bi_end_io = end_block_io_op; 1389 + bio_set_op_attrs(bio, operation, operation_flags); 1393 1390 } 1394 1391 1395 1392 atomic_set(&pending_req->pendcnt, nbio); 1396 1393 blk_start_plug(&plug); 1397 1394 1398 1395 for (i = 0; i < nbio; i++) 1399 - submit_bio(operation, biolist[i]); 1396 + submit_bio(biolist[i]); 1400 1397 1401 1398 /* Let the I/Os go.. 
*/ 1402 1399 blk_finish_plug(&plug); 1403 1400 1404 - if (operation == READ) 1401 + if (operation == REQ_OP_READ) 1405 1402 ring->st_rd_sect += preq.nr_sects; 1406 - else if (operation & WRITE) 1403 + else if (operation == REQ_OP_WRITE) 1407 1404 ring->st_wr_sect += preq.nr_sects; 1408 1405 1409 1406 return 0;
+35 -31
drivers/block/xen-blkfront.c
··· 196 196 unsigned int nr_ring_pages; 197 197 struct request_queue *rq; 198 198 unsigned int feature_flush; 199 + unsigned int feature_fua; 199 200 unsigned int feature_discard:1; 200 201 unsigned int feature_secdiscard:1; 201 202 unsigned int discard_granularity; ··· 747 746 * The indirect operation can only be a BLKIF_OP_READ or 748 747 * BLKIF_OP_WRITE 749 748 */ 750 - BUG_ON(req->cmd_flags & (REQ_FLUSH | REQ_FUA)); 749 + BUG_ON(req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA); 751 750 ring_req->operation = BLKIF_OP_INDIRECT; 752 751 ring_req->u.indirect.indirect_op = rq_data_dir(req) ? 753 752 BLKIF_OP_WRITE : BLKIF_OP_READ; ··· 759 758 ring_req->u.rw.handle = info->handle; 760 759 ring_req->operation = rq_data_dir(req) ? 761 760 BLKIF_OP_WRITE : BLKIF_OP_READ; 762 - if (req->cmd_flags & (REQ_FLUSH | REQ_FUA)) { 761 + if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) { 763 762 /* 764 763 * Ideally we can do an unordered flush-to-disk. 765 764 * In case the backend onlysupports barriers, use that. ··· 767 766 * implement it the same way. (It's also a FLUSH+FUA, 768 767 * since it is guaranteed ordered WRT previous writes.) 
769 768 */ 770 - switch (info->feature_flush & 771 - ((REQ_FLUSH|REQ_FUA))) { 772 - case REQ_FLUSH|REQ_FUA: 769 + if (info->feature_flush && info->feature_fua) 773 770 ring_req->operation = 774 771 BLKIF_OP_WRITE_BARRIER; 775 - break; 776 - case REQ_FLUSH: 772 + else if (info->feature_flush) 777 773 ring_req->operation = 778 774 BLKIF_OP_FLUSH_DISKCACHE; 779 - break; 780 - default: 775 + else 781 776 ring_req->operation = 0; 782 - } 783 777 } 784 778 ring_req->u.rw.nr_segments = num_grant; 785 779 if (unlikely(require_extra_req)) { ··· 843 847 if (unlikely(rinfo->dev_info->connected != BLKIF_STATE_CONNECTED)) 844 848 return 1; 845 849 846 - if (unlikely(req->cmd_flags & (REQ_DISCARD | REQ_SECURE))) 850 + if (unlikely(req_op(req) == REQ_OP_DISCARD || 851 + req->cmd_flags & REQ_SECURE)) 847 852 return blkif_queue_discard_req(req, rinfo); 848 853 else 849 854 return blkif_queue_rw_req(req, rinfo); ··· 864 867 struct blkfront_info *info) 865 868 { 866 869 return ((req->cmd_type != REQ_TYPE_FS) || 867 - ((req->cmd_flags & REQ_FLUSH) && 868 - !(info->feature_flush & REQ_FLUSH)) || 870 + ((req_op(req) == REQ_OP_FLUSH) && 871 + !info->feature_flush) || 869 872 ((req->cmd_flags & REQ_FUA) && 870 - !(info->feature_flush & REQ_FUA))); 873 + !info->feature_fua)); 871 874 } 872 875 873 876 static int blkif_queue_rq(struct blk_mq_hw_ctx *hctx, ··· 978 981 return 0; 979 982 } 980 983 981 - static const char *flush_info(unsigned int feature_flush) 984 + static const char *flush_info(struct blkfront_info *info) 982 985 { 983 - switch (feature_flush & ((REQ_FLUSH | REQ_FUA))) { 984 - case REQ_FLUSH|REQ_FUA: 986 + if (info->feature_flush && info->feature_fua) 985 987 return "barrier: enabled;"; 986 - case REQ_FLUSH: 988 + else if (info->feature_flush) 987 989 return "flush diskcache: enabled;"; 988 - default: 990 + else 989 991 return "barrier or flush: disabled;"; 990 - } 991 992 } 992 993 993 994 static void xlvbd_flush(struct blkfront_info *info) 994 995 { 995 - 
blk_queue_write_cache(info->rq, info->feature_flush & REQ_FLUSH, 996 - info->feature_flush & REQ_FUA); 996 + blk_queue_write_cache(info->rq, info->feature_flush ? true : false, 997 + info->feature_fua ? true : false); 997 998 pr_info("blkfront: %s: %s %s %s %s %s\n", 998 - info->gd->disk_name, flush_info(info->feature_flush), 999 + info->gd->disk_name, flush_info(info), 999 1000 "persistent grants:", info->feature_persistent ? 1000 1001 "enabled;" : "disabled;", "indirect descriptors:", 1001 1002 info->max_indirect_segments ? "enabled;" : "disabled;"); ··· 1612 1617 if (unlikely(error)) { 1613 1618 if (error == -EOPNOTSUPP) 1614 1619 error = 0; 1620 + info->feature_fua = 0; 1615 1621 info->feature_flush = 0; 1616 1622 xlvbd_flush(info); 1617 1623 } ··· 2060 2064 bio_trim(cloned_bio, offset, size); 2061 2065 cloned_bio->bi_private = split_bio; 2062 2066 cloned_bio->bi_end_io = split_bio_end; 2063 - submit_bio(cloned_bio->bi_rw, cloned_bio); 2067 + submit_bio(cloned_bio); 2064 2068 } 2065 2069 /* 2066 2070 * Now we have to wait for all those smaller bios to ··· 2069 2073 continue; 2070 2074 } 2071 2075 /* We don't need to split this bio */ 2072 - submit_bio(bio->bi_rw, bio); 2076 + submit_bio(bio); 2073 2077 } 2074 2078 2075 2079 return 0; ··· 2104 2108 /* 2105 2109 * Get the bios in the request so we can re-queue them. 
2106 2110 */ 2107 - if (shadow[j].request->cmd_flags & 2108 - (REQ_FLUSH | REQ_FUA | REQ_DISCARD | REQ_SECURE)) { 2111 + if (req_op(shadow[i].request) == REQ_OP_FLUSH || 2112 + req_op(shadow[i].request) == REQ_OP_DISCARD || 2113 + shadow[j].request->cmd_flags & (REQ_FUA | REQ_SECURE)) { 2114 + 2109 2115 /* 2110 2116 * Flush operations don't contain bios, so 2111 2117 * we need to requeue the whole request ··· 2296 2298 unsigned int indirect_segments; 2297 2299 2298 2300 info->feature_flush = 0; 2301 + info->feature_fua = 0; 2299 2302 2300 2303 err = xenbus_gather(XBT_NIL, info->xbdev->otherend, 2301 2304 "feature-barrier", "%d", &barrier, ··· 2309 2310 * 2310 2311 * If there are barriers, then we use flush. 2311 2312 */ 2312 - if (!err && barrier) 2313 - info->feature_flush = REQ_FLUSH | REQ_FUA; 2313 + if (!err && barrier) { 2314 + info->feature_flush = 1; 2315 + info->feature_fua = 1; 2316 + } 2317 + 2314 2318 /* 2315 2319 * And if there is "feature-flush-cache" use that above 2316 2320 * barriers. ··· 2322 2320 "feature-flush-cache", "%d", &flush, 2323 2321 NULL); 2324 2322 2325 - if (!err && flush) 2326 - info->feature_flush = REQ_FLUSH; 2323 + if (!err && flush) { 2324 + info->feature_flush = 1; 2325 + info->feature_fua = 0; 2326 + } 2327 2327 2328 2328 err = xenbus_gather(XBT_NIL, info->xbdev->otherend, 2329 2329 "feature-discard", "%d", &discard,
+1 -1
drivers/block/zram/zram_drv.c
··· 874 874 offset = (bio->bi_iter.bi_sector & 875 875 (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT; 876 876 877 - if (unlikely(bio->bi_rw & REQ_DISCARD)) { 877 + if (unlikely(bio_op(bio) == REQ_OP_DISCARD)) { 878 878 zram_bio_discard(zram, index, offset, bio); 879 879 bio_endio(bio); 880 880 return;
-3
drivers/ide/ide-cd_ioctl.c
··· 459 459 layer. the packet must be complete, as we do not 460 460 touch it at all. */ 461 461 462 - if (cgc->data_direction == CGC_DATA_WRITE) 463 - flags |= REQ_WRITE; 464 - 465 462 if (cgc->sense) 466 463 memset(cgc->sense, 0, sizeof(struct request_sense)); 467 464
+1 -1
drivers/ide/ide-disk.c
··· 431 431 ide_drive_t *drive = q->queuedata; 432 432 struct ide_cmd *cmd; 433 433 434 - if (!(rq->cmd_flags & REQ_FLUSH)) 434 + if (req_op(rq) != REQ_OP_FLUSH) 435 435 return BLKPREP_OK; 436 436 437 437 if (rq->special) {
+1 -1
drivers/ide/ide-floppy.c
··· 206 206 memcpy(rq->cmd, pc->c, 12); 207 207 208 208 pc->rq = rq; 209 - if (rq->cmd_flags & REQ_WRITE) 209 + if (cmd == WRITE) 210 210 pc->flags |= PC_FLAG_WRITING; 211 211 212 212 pc->flags |= PC_FLAG_DMA_OK;
+3 -3
drivers/lightnvm/rrpc.c
··· 342 342 343 343 /* Perform read to do GC */ 344 344 bio->bi_iter.bi_sector = rrpc_get_sector(rev->addr); 345 - bio->bi_rw = READ; 345 + bio_set_op_attrs(bio, REQ_OP_READ, 0); 346 346 bio->bi_private = &wait; 347 347 bio->bi_end_io = rrpc_end_sync_bio; 348 348 ··· 364 364 reinit_completion(&wait); 365 365 366 366 bio->bi_iter.bi_sector = rrpc_get_sector(rev->addr); 367 - bio->bi_rw = WRITE; 367 + bio_set_op_attrs(bio, REQ_OP_WRITE, 0); 368 368 bio->bi_private = &wait; 369 369 bio->bi_end_io = rrpc_end_sync_bio; 370 370 ··· 908 908 struct nvm_rq *rqd; 909 909 int err; 910 910 911 - if (bio->bi_rw & REQ_DISCARD) { 911 + if (bio_op(bio) == REQ_OP_DISCARD) { 912 912 rrpc_discard(rrpc, bio); 913 913 return BLK_QC_T_NONE; 914 914 }
+2 -2
drivers/md/bcache/btree.c
··· 294 294 closure_init_stack(&cl); 295 295 296 296 bio = bch_bbio_alloc(b->c); 297 - bio->bi_rw = REQ_META|READ_SYNC; 298 297 bio->bi_iter.bi_size = KEY_SIZE(&b->key) << 9; 299 298 bio->bi_end_io = btree_node_read_endio; 300 299 bio->bi_private = &cl; 300 + bio_set_op_attrs(bio, REQ_OP_READ, REQ_META|READ_SYNC); 301 301 302 302 bch_bio_map(bio, b->keys.set[0].data); 303 303 ··· 396 396 397 397 b->bio->bi_end_io = btree_node_write_endio; 398 398 b->bio->bi_private = cl; 399 - b->bio->bi_rw = REQ_META|WRITE_SYNC|REQ_FUA; 400 399 b->bio->bi_iter.bi_size = roundup(set_bytes(i), block_bytes(b->c)); 400 + bio_set_op_attrs(b->bio, REQ_OP_WRITE, REQ_META|WRITE_SYNC|REQ_FUA); 401 401 bch_bio_map(b->bio, i); 402 402 403 403 /*
+4 -2
drivers/md/bcache/debug.c
··· 52 52 bio->bi_bdev = PTR_CACHE(b->c, &b->key, 0)->bdev; 53 53 bio->bi_iter.bi_sector = PTR_OFFSET(&b->key, 0); 54 54 bio->bi_iter.bi_size = KEY_SIZE(&v->key) << 9; 55 + bio_set_op_attrs(bio, REQ_OP_READ, REQ_META|READ_SYNC); 55 56 bch_bio_map(bio, sorted); 56 57 57 - submit_bio_wait(REQ_META|READ_SYNC, bio); 58 + submit_bio_wait(bio); 58 59 bch_bbio_free(bio, b->c); 59 60 60 61 memcpy(ondisk, sorted, KEY_SIZE(&v->key) << 9); ··· 114 113 check = bio_clone(bio, GFP_NOIO); 115 114 if (!check) 116 115 return; 116 + bio_set_op_attrs(check, REQ_OP_READ, READ_SYNC); 117 117 118 118 if (bio_alloc_pages(check, GFP_NOIO)) 119 119 goto out_put; 120 120 121 - submit_bio_wait(READ_SYNC, check); 121 + submit_bio_wait(check); 122 122 123 123 bio_for_each_segment(bv, bio, iter) { 124 124 void *p1 = kmap_atomic(bv.bv_page);
+1 -1
drivers/md/bcache/io.c
··· 111 111 struct bbio *b = container_of(bio, struct bbio, bio); 112 112 struct cache *ca = PTR_CACHE(c, &b->key, 0); 113 113 114 - unsigned threshold = bio->bi_rw & REQ_WRITE 114 + unsigned threshold = op_is_write(bio_op(bio)) 115 115 ? c->congested_write_threshold_us 116 116 : c->congested_read_threshold_us; 117 117
+5 -4
drivers/md/bcache/journal.c
··· 54 54 bio_reset(bio); 55 55 bio->bi_iter.bi_sector = bucket + offset; 56 56 bio->bi_bdev = ca->bdev; 57 - bio->bi_rw = READ; 58 57 bio->bi_iter.bi_size = len << 9; 59 58 60 59 bio->bi_end_io = journal_read_endio; 61 60 bio->bi_private = &cl; 61 + bio_set_op_attrs(bio, REQ_OP_READ, 0); 62 62 bch_bio_map(bio, data); 63 63 64 64 closure_bio_submit(bio, &cl); ··· 418 418 struct journal_device *ja = 419 419 container_of(work, struct journal_device, discard_work); 420 420 421 - submit_bio(0, &ja->discard_bio); 421 + submit_bio(&ja->discard_bio); 422 422 } 423 423 424 424 static void do_journal_discard(struct cache *ca) ··· 449 449 atomic_set(&ja->discard_in_flight, DISCARD_IN_FLIGHT); 450 450 451 451 bio_init(bio); 452 + bio_set_op_attrs(bio, REQ_OP_DISCARD, 0); 452 453 bio->bi_iter.bi_sector = bucket_to_sector(ca->set, 453 454 ca->sb.d[ja->discard_idx]); 454 455 bio->bi_bdev = ca->bdev; 455 - bio->bi_rw = REQ_WRITE|REQ_DISCARD; 456 456 bio->bi_max_vecs = 1; 457 457 bio->bi_io_vec = bio->bi_inline_vecs; 458 458 bio->bi_iter.bi_size = bucket_bytes(ca); ··· 626 626 bio_reset(bio); 627 627 bio->bi_iter.bi_sector = PTR_OFFSET(k, i); 628 628 bio->bi_bdev = ca->bdev; 629 - bio->bi_rw = REQ_WRITE|REQ_SYNC|REQ_META|REQ_FLUSH|REQ_FUA; 630 629 bio->bi_iter.bi_size = sectors << 9; 631 630 632 631 bio->bi_end_io = journal_write_endio; 633 632 bio->bi_private = w; 633 + bio_set_op_attrs(bio, REQ_OP_WRITE, 634 + REQ_SYNC|REQ_META|REQ_PREFLUSH|REQ_FUA); 634 635 bch_bio_map(bio, w->data); 635 636 636 637 trace_bcache_journal_write(bio);
+1 -1
drivers/md/bcache/movinggc.c
··· 163 163 moving_init(io); 164 164 bio = &io->bio.bio; 165 165 166 - bio->bi_rw = READ; 166 + bio_set_op_attrs(bio, REQ_OP_READ, 0); 167 167 bio->bi_end_io = read_moving_endio; 168 168 169 169 if (bio_alloc_pages(bio, GFP_KERNEL))
+14 -14
drivers/md/bcache/request.c
··· 205 205 return bch_data_invalidate(cl); 206 206 207 207 /* 208 - * Journal writes are marked REQ_FLUSH; if the original write was a 208 + * Journal writes are marked REQ_PREFLUSH; if the original write was a 209 209 * flush, it'll wait on the journal write. 210 210 */ 211 - bio->bi_rw &= ~(REQ_FLUSH|REQ_FUA); 211 + bio->bi_rw &= ~(REQ_PREFLUSH|REQ_FUA); 212 212 213 213 do { 214 214 unsigned i; ··· 253 253 trace_bcache_cache_insert(k); 254 254 bch_keylist_push(&op->insert_keys); 255 255 256 - n->bi_rw |= REQ_WRITE; 256 + bio_set_op_attrs(n, REQ_OP_WRITE, 0); 257 257 bch_submit_bbio(n, op->c, k, 0); 258 258 } while (n != bio); 259 259 ··· 378 378 379 379 if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) || 380 380 c->gc_stats.in_use > CUTOFF_CACHE_ADD || 381 - (bio->bi_rw & REQ_DISCARD)) 381 + (bio_op(bio) == REQ_OP_DISCARD)) 382 382 goto skip; 383 383 384 384 if (mode == CACHE_MODE_NONE || 385 385 (mode == CACHE_MODE_WRITEAROUND && 386 - (bio->bi_rw & REQ_WRITE))) 386 + op_is_write(bio_op(bio)))) 387 387 goto skip; 388 388 389 389 if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) || ··· 404 404 405 405 if (!congested && 406 406 mode == CACHE_MODE_WRITEBACK && 407 - (bio->bi_rw & REQ_WRITE) && 407 + op_is_write(bio_op(bio)) && 408 408 (bio->bi_rw & REQ_SYNC)) 409 409 goto rescale; 410 410 ··· 657 657 s->cache_miss = NULL; 658 658 s->d = d; 659 659 s->recoverable = 1; 660 - s->write = (bio->bi_rw & REQ_WRITE) != 0; 660 + s->write = op_is_write(bio_op(bio)); 661 661 s->read_dirty_data = 0; 662 662 s->start_time = jiffies; 663 663 ··· 668 668 s->iop.write_prio = 0; 669 669 s->iop.error = 0; 670 670 s->iop.flags = 0; 671 - s->iop.flush_journal = (bio->bi_rw & (REQ_FLUSH|REQ_FUA)) != 0; 671 + s->iop.flush_journal = (bio->bi_rw & (REQ_PREFLUSH|REQ_FUA)) != 0; 672 672 s->iop.wq = bcache_wq; 673 673 674 674 return s; ··· 899 899 * But check_overlapping drops dirty keys for which io hasn't started, 900 900 * so we still want to call it. 
901 901 */ 902 - if (bio->bi_rw & REQ_DISCARD) 902 + if (bio_op(bio) == REQ_OP_DISCARD) 903 903 s->iop.bypass = true; 904 904 905 905 if (should_writeback(dc, s->orig_bio, ··· 913 913 s->iop.bio = s->orig_bio; 914 914 bio_get(s->iop.bio); 915 915 916 - if (!(bio->bi_rw & REQ_DISCARD) || 916 + if ((bio_op(bio) != REQ_OP_DISCARD) || 917 917 blk_queue_discard(bdev_get_queue(dc->bdev))) 918 918 closure_bio_submit(bio, cl); 919 919 } else if (s->iop.writeback) { 920 920 bch_writeback_add(dc); 921 921 s->iop.bio = bio; 922 922 923 - if (bio->bi_rw & REQ_FLUSH) { 923 + if (bio->bi_rw & REQ_PREFLUSH) { 924 924 /* Also need to send a flush to the backing device */ 925 925 struct bio *flush = bio_alloc_bioset(GFP_NOIO, 0, 926 926 dc->disk.bio_split); 927 927 928 - flush->bi_rw = WRITE_FLUSH; 929 928 flush->bi_bdev = bio->bi_bdev; 930 929 flush->bi_end_io = request_endio; 931 930 flush->bi_private = cl; 931 + bio_set_op_attrs(flush, REQ_OP_WRITE, WRITE_FLUSH); 932 932 933 933 closure_bio_submit(flush, cl); 934 934 } ··· 992 992 cached_dev_read(dc, s); 993 993 } 994 994 } else { 995 - if ((bio->bi_rw & REQ_DISCARD) && 995 + if ((bio_op(bio) == REQ_OP_DISCARD) && 996 996 !blk_queue_discard(bdev_get_queue(dc->bdev))) 997 997 bio_endio(bio); 998 998 else ··· 1103 1103 &KEY(d->id, bio->bi_iter.bi_sector, 0), 1104 1104 &KEY(d->id, bio_end_sector(bio), 0)); 1105 1105 1106 - s->iop.bypass = (bio->bi_rw & REQ_DISCARD) != 0; 1106 + s->iop.bypass = (bio_op(bio) == REQ_OP_DISCARD) != 0; 1107 1107 s->iop.writeback = true; 1108 1108 s->iop.bio = bio; 1109 1109
+14 -12
drivers/md/bcache/super.c
··· 212 212 unsigned i; 213 213 214 214 bio->bi_iter.bi_sector = SB_SECTOR; 215 - bio->bi_rw = REQ_SYNC|REQ_META; 216 215 bio->bi_iter.bi_size = SB_SIZE; 216 + bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_SYNC|REQ_META); 217 217 bch_bio_map(bio, NULL); 218 218 219 219 out->offset = cpu_to_le64(sb->offset); ··· 238 238 pr_debug("ver %llu, flags %llu, seq %llu", 239 239 sb->version, sb->flags, sb->seq); 240 240 241 - submit_bio(REQ_WRITE, bio); 241 + submit_bio(bio); 242 242 } 243 243 244 244 static void bch_write_bdev_super_unlock(struct closure *cl) ··· 333 333 up(&c->uuid_write_mutex); 334 334 } 335 335 336 - static void uuid_io(struct cache_set *c, unsigned long rw, 336 + static void uuid_io(struct cache_set *c, int op, unsigned long op_flags, 337 337 struct bkey *k, struct closure *parent) 338 338 { 339 339 struct closure *cl = &c->uuid_write; ··· 348 348 for (i = 0; i < KEY_PTRS(k); i++) { 349 349 struct bio *bio = bch_bbio_alloc(c); 350 350 351 - bio->bi_rw = REQ_SYNC|REQ_META|rw; 351 + bio->bi_rw = REQ_SYNC|REQ_META|op_flags; 352 352 bio->bi_iter.bi_size = KEY_SIZE(k) << 9; 353 353 354 354 bio->bi_end_io = uuid_endio; 355 355 bio->bi_private = cl; 356 + bio_set_op_attrs(bio, op, REQ_SYNC|REQ_META|op_flags); 356 357 bch_bio_map(bio, c->uuids); 357 358 358 359 bch_submit_bbio(bio, c, k, i); 359 360 360 - if (!(rw & WRITE)) 361 + if (op != REQ_OP_WRITE) 361 362 break; 362 363 } 363 364 364 365 bch_extent_to_text(buf, sizeof(buf), k); 365 - pr_debug("%s UUIDs at %s", rw & REQ_WRITE ? "wrote" : "read", buf); 366 + pr_debug("%s UUIDs at %s", op == REQ_OP_WRITE ? 
"wrote" : "read", buf); 366 367 367 368 for (u = c->uuids; u < c->uuids + c->nr_uuids; u++) 368 369 if (!bch_is_zero(u->uuid, 16)) ··· 382 381 return "bad uuid pointer"; 383 382 384 383 bkey_copy(&c->uuid_bucket, k); 385 - uuid_io(c, READ_SYNC, k, cl); 384 + uuid_io(c, REQ_OP_READ, READ_SYNC, k, cl); 386 385 387 386 if (j->version < BCACHE_JSET_VERSION_UUIDv1) { 388 387 struct uuid_entry_v0 *u0 = (void *) c->uuids; ··· 427 426 return 1; 428 427 429 428 SET_KEY_SIZE(&k.key, c->sb.bucket_size); 430 - uuid_io(c, REQ_WRITE, &k.key, &cl); 429 + uuid_io(c, REQ_OP_WRITE, 0, &k.key, &cl); 431 430 closure_sync(&cl); 432 431 433 432 bkey_copy(&c->uuid_bucket, &k.key); ··· 499 498 closure_put(&ca->prio); 500 499 } 501 500 502 - static void prio_io(struct cache *ca, uint64_t bucket, unsigned long rw) 501 + static void prio_io(struct cache *ca, uint64_t bucket, int op, 502 + unsigned long op_flags) 503 503 { 504 504 struct closure *cl = &ca->prio; 505 505 struct bio *bio = bch_bbio_alloc(ca->set); ··· 509 507 510 508 bio->bi_iter.bi_sector = bucket * ca->sb.bucket_size; 511 509 bio->bi_bdev = ca->bdev; 512 - bio->bi_rw = REQ_SYNC|REQ_META|rw; 513 510 bio->bi_iter.bi_size = bucket_bytes(ca); 514 511 515 512 bio->bi_end_io = prio_endio; 516 513 bio->bi_private = ca; 514 + bio_set_op_attrs(bio, op, REQ_SYNC|REQ_META|op_flags); 517 515 bch_bio_map(bio, ca->disk_buckets); 518 516 519 517 closure_bio_submit(bio, &ca->prio); ··· 559 557 BUG_ON(bucket == -1); 560 558 561 559 mutex_unlock(&ca->set->bucket_lock); 562 - prio_io(ca, bucket, REQ_WRITE); 560 + prio_io(ca, bucket, REQ_OP_WRITE, 0); 563 561 mutex_lock(&ca->set->bucket_lock); 564 562 565 563 ca->prio_buckets[i] = bucket; ··· 601 599 ca->prio_last_buckets[bucket_nr] = bucket; 602 600 bucket_nr++; 603 601 604 - prio_io(ca, bucket, READ_SYNC); 602 + prio_io(ca, bucket, REQ_OP_READ, READ_SYNC); 605 603 606 604 if (p->csum != bch_crc64(&p->magic, bucket_bytes(ca) - 8)) 607 605 pr_warn("bad csum reading priorities");
+2 -2
drivers/md/bcache/writeback.c
··· 182 182 struct keybuf_key *w = io->bio.bi_private; 183 183 184 184 dirty_init(w); 185 - io->bio.bi_rw = WRITE; 185 + bio_set_op_attrs(&io->bio, REQ_OP_WRITE, 0); 186 186 io->bio.bi_iter.bi_sector = KEY_START(&w->key); 187 187 io->bio.bi_bdev = io->dc->bdev; 188 188 io->bio.bi_end_io = dirty_endio; ··· 251 251 io->dc = dc; 252 252 253 253 dirty_init(w); 254 + bio_set_op_attrs(&io->bio, REQ_OP_READ, 0); 254 255 io->bio.bi_iter.bi_sector = PTR_OFFSET(&w->key, 0); 255 256 io->bio.bi_bdev = PTR_CACHE(dc->disk.c, 256 257 &w->key, 0)->bdev; 257 - io->bio.bi_rw = READ; 258 258 io->bio.bi_end_io = read_dirty_endio; 259 259 260 260 if (bio_alloc_pages(&io->bio, GFP_KERNEL))
drivers/md/bitmap.c (+3 -3)

···
 
 	if (sync_page_io(rdev, target,
 			 roundup(size, bdev_logical_block_size(rdev->bdev)),
-			 page, READ, true)) {
+			 page, REQ_OP_READ, 0, true)) {
 		page->index = index;
 		return 0;
 	}
···
 		atomic_inc(&bitmap->pending_writes);
 		set_buffer_locked(bh);
 		set_buffer_mapped(bh);
-		submit_bh(WRITE | REQ_SYNC, bh);
+		submit_bh(REQ_OP_WRITE, REQ_SYNC, bh);
 		bh = bh->b_this_page;
 	}
 
···
 			atomic_inc(&bitmap->pending_writes);
 			set_buffer_locked(bh);
 			set_buffer_mapped(bh);
-			submit_bh(READ, bh);
+			submit_bh(REQ_OP_READ, 0, bh);
 		}
 		block++;
 		bh = bh->b_this_page;
drivers/md/dm-bufio.c (+6 -3)

···
 {
 	int r;
 	struct dm_io_request io_req = {
-		.bi_rw = rw,
+		.bi_op = rw,
+		.bi_op_flags = 0,
 		.notify.fn = dmio_complete,
 		.notify.context = b,
 		.client = b->c->dm_io,
···
 	 * the dm_buffer's inline bio is local to bufio.
 	 */
 	b->bio.bi_private = end_io;
+	bio_set_op_attrs(&b->bio, rw, 0);
 
 	/*
 	 * We assume that if len >= PAGE_SIZE ptr is page-aligned.
···
 		ptr += PAGE_SIZE;
 	} while (len > 0);
 
-	submit_bio(rw, &b->bio);
+	submit_bio(&b->bio);
 }
 
 static void submit_io(struct dm_buffer *b, int rw, sector_t block,
···
 int dm_bufio_issue_flush(struct dm_bufio_client *c)
 {
 	struct dm_io_request io_req = {
-		.bi_rw = WRITE_FLUSH,
+		.bi_op = REQ_OP_WRITE,
+		.bi_op_flags = WRITE_FLUSH,
 		.mem.type = DM_IO_KMEM,
 		.mem.ptr.addr = NULL,
 		.client = c->dm_io,
drivers/md/dm-cache-target.c (+10 -8)

···
 
 	spin_lock_irqsave(&cache->lock, flags);
 	if (cache->need_tick_bio &&
-	    !(bio->bi_rw & (REQ_FUA | REQ_FLUSH | REQ_DISCARD))) {
+	    !(bio->bi_rw & (REQ_FUA | REQ_PREFLUSH)) &&
+	    bio_op(bio) != REQ_OP_DISCARD) {
 		pb->tick = true;
 		cache->need_tick_bio = false;
 	}
···
 
 static int bio_triggers_commit(struct cache *cache, struct bio *bio)
 {
-	return bio->bi_rw & (REQ_FLUSH | REQ_FUA);
+	return bio->bi_rw & (REQ_PREFLUSH | REQ_FUA);
 }
 
 /*
···
 static bool accountable_bio(struct cache *cache, struct bio *bio)
 {
 	return ((bio->bi_bdev == cache->origin_dev->bdev) &&
-		!(bio->bi_rw & REQ_DISCARD));
+		bio_op(bio) != REQ_OP_DISCARD);
 }
 
 static void accounted_begin(struct cache *cache, struct bio *bio)
···
 
 static bool discard_or_flush(struct bio *bio)
 {
-	return bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD);
+	return bio_op(bio) == REQ_OP_DISCARD ||
+	       bio->bi_rw & (REQ_PREFLUSH | REQ_FUA);
 }
 
 static void __cell_defer(struct cache *cache, struct dm_bio_prison_cell *cell)
···
 	remap_to_cache(cache, bio, 0);
 
 	/*
-	 * REQ_FLUSH is not directed at any particular block so we don't
-	 * need to inc_ds().  REQ_FUA's are split into a write + REQ_FLUSH
+	 * REQ_PREFLUSH is not directed at any particular block so we don't
+	 * need to inc_ds().  REQ_FUA's are split into a write + REQ_PREFLUSH
 	 * by dm-core.
 	 */
 	issue(cache, bio);
···
 
 		bio = bio_list_pop(&bios);
 
-		if (bio->bi_rw & REQ_FLUSH)
+		if (bio->bi_rw & REQ_PREFLUSH)
 			process_flush_bio(cache, bio);
-		else if (bio->bi_rw & REQ_DISCARD)
+		else if (bio_op(bio) == REQ_OP_DISCARD)
 			process_discard_bio(cache, &structs, bio);
 		else
 			process_bio(cache, &structs, bio);
drivers/md/dm-crypt.c (+6 -5)

···
 	clone->bi_private = io;
 	clone->bi_end_io  = crypt_endio;
 	clone->bi_bdev    = cc->dev->bdev;
-	clone->bi_rw      = io->base_bio->bi_rw;
+	bio_set_op_attrs(clone, bio_op(io->base_bio), io->base_bio->bi_rw);
 }
 
 static int kcryptd_io_read(struct dm_crypt_io *io, gfp_t gfp)
···
 	struct crypt_config *cc = ti->private;
 
 	/*
-	 * If bio is REQ_FLUSH or REQ_DISCARD, just bypass crypt queues.
-	 * - for REQ_FLUSH device-mapper core ensures that no IO is in-flight
-	 * - for REQ_DISCARD caller must use flush if IO ordering matters
+	 * If bio is REQ_PREFLUSH or REQ_OP_DISCARD, just bypass crypt queues.
+	 * - for REQ_PREFLUSH device-mapper core ensures that no IO is in-flight
+	 * - for REQ_OP_DISCARD caller must use flush if IO ordering matters
 	 */
-	if (unlikely(bio->bi_rw & (REQ_FLUSH | REQ_DISCARD))) {
+	if (unlikely(bio->bi_rw & REQ_PREFLUSH ||
+	    bio_op(bio) == REQ_OP_DISCARD)) {
 		bio->bi_bdev = cc->dev->bdev;
 		if (bio_sectors(bio))
 			bio->bi_iter.bi_sector = cc->start +
drivers/md/dm-era-target.c (+2 -2)

···
 	remap_to_origin(era, bio);
 
 	/*
-	 * REQ_FLUSH bios carry no data, so we're not interested in them.
+	 * REQ_PREFLUSH bios carry no data, so we're not interested in them.
 	 */
-	if (!(bio->bi_rw & REQ_FLUSH) &&
+	if (!(bio->bi_rw & REQ_PREFLUSH) &&
 	    (bio_data_dir(bio) == WRITE) &&
 	    !metadata_current_marked(era->md, block)) {
 		defer_bio(era, bio);
drivers/md/dm-flakey.c (+1 -1)

···
 		data[fc->corrupt_bio_byte - 1] = fc->corrupt_bio_value;
 
 		DMDEBUG("Corrupting data bio=%p by writing %u to byte %u "
-			"(rw=%c bi_rw=%lu bi_sector=%llu cur_bytes=%u)\n",
+			"(rw=%c bi_rw=%u bi_sector=%llu cur_bytes=%u)\n",
 			bio, fc->corrupt_bio_value, fc->corrupt_bio_byte,
 			(bio_data_dir(bio) == WRITE) ? 'w' : 'r', bio->bi_rw,
 			(unsigned long long)bio->bi_iter.bi_sector, bio_bytes);
drivers/md/dm-io.c (+31 -26)

···
 /*-----------------------------------------------------------------
  * IO routines that accept a list of pages.
  *---------------------------------------------------------------*/
-static void do_region(int rw, unsigned region, struct dm_io_region *where,
-		      struct dpages *dp, struct io *io)
+static void do_region(int op, int op_flags, unsigned region,
+		      struct dm_io_region *where, struct dpages *dp,
+		      struct io *io)
 {
 	struct bio *bio;
 	struct page *page;
···
 	/*
 	 * Reject unsupported discard and write same requests.
 	 */
-	if (rw & REQ_DISCARD)
+	if (op == REQ_OP_DISCARD)
 		special_cmd_max_sectors = q->limits.max_discard_sectors;
-	else if (rw & REQ_WRITE_SAME)
+	else if (op == REQ_OP_WRITE_SAME)
 		special_cmd_max_sectors = q->limits.max_write_same_sectors;
-	if ((rw & (REQ_DISCARD | REQ_WRITE_SAME)) && special_cmd_max_sectors == 0) {
+	if ((op == REQ_OP_DISCARD || op == REQ_OP_WRITE_SAME) &&
+	    special_cmd_max_sectors == 0) {
 		dec_count(io, region, -EOPNOTSUPP);
 		return;
 	}
 
 	/*
-	 * where->count may be zero if rw holds a flush and we need to
+	 * where->count may be zero if op holds a flush and we need to
 	 * send a zero-sized flush.
 	 */
 	do {
 		/*
 		 * Allocate a suitably sized-bio.
 		 */
-		if ((rw & REQ_DISCARD) || (rw & REQ_WRITE_SAME))
+		if ((op == REQ_OP_DISCARD) || (op == REQ_OP_WRITE_SAME))
 			num_bvecs = 1;
 		else
 			num_bvecs = min_t(int, BIO_MAX_PAGES,
···
 		bio->bi_iter.bi_sector = where->sector + (where->count - remaining);
 		bio->bi_bdev = where->bdev;
 		bio->bi_end_io = endio;
+		bio_set_op_attrs(bio, op, op_flags);
 		store_io_and_region_in_bio(bio, io, region);
 
-		if (rw & REQ_DISCARD) {
+		if (op == REQ_OP_DISCARD) {
 			num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining);
 			bio->bi_iter.bi_size = num_sectors << SECTOR_SHIFT;
 			remaining -= num_sectors;
-		} else if (rw & REQ_WRITE_SAME) {
+		} else if (op == REQ_OP_WRITE_SAME) {
 			/*
 			 * WRITE SAME only uses a single page.
 			 */
···
 		}
 
 		atomic_inc(&io->count);
-		submit_bio(rw, bio);
+		submit_bio(bio);
 	} while (remaining);
 }
 
-static void dispatch_io(int rw, unsigned int num_regions,
+static void dispatch_io(int op, int op_flags, unsigned int num_regions,
 			struct dm_io_region *where, struct dpages *dp,
 			struct io *io, int sync)
 {
···
 	BUG_ON(num_regions > DM_IO_MAX_REGIONS);
 
 	if (sync)
-		rw |= REQ_SYNC;
+		op_flags |= REQ_SYNC;
 
 	/*
 	 * For multiple regions we need to be careful to rewind
···
 	 */
 	for (i = 0; i < num_regions; i++) {
 		*dp = old_pages;
-		if (where[i].count || (rw & REQ_FLUSH))
-			do_region(rw, i, where + i, dp, io);
+		if (where[i].count || (op_flags & REQ_PREFLUSH))
+			do_region(op, op_flags, i, where + i, dp, io);
 	}
 
 	/*
···
 }
 
 static int sync_io(struct dm_io_client *client, unsigned int num_regions,
-		   struct dm_io_region *where, int rw, struct dpages *dp,
-		   unsigned long *error_bits)
+		   struct dm_io_region *where, int op, int op_flags,
+		   struct dpages *dp, unsigned long *error_bits)
 {
 	struct io *io;
 	struct sync_io sio;
 
-	if (num_regions > 1 && (rw & RW_MASK) != WRITE) {
+	if (num_regions > 1 && !op_is_write(op)) {
 		WARN_ON(1);
 		return -EIO;
 	}
···
 	io->vma_invalidate_address = dp->vma_invalidate_address;
 	io->vma_invalidate_size = dp->vma_invalidate_size;
 
-	dispatch_io(rw, num_regions, where, dp, io, 1);
+	dispatch_io(op, op_flags, num_regions, where, dp, io, 1);
 
 	wait_for_completion_io(&sio.wait);
 
···
 }
 
 static int async_io(struct dm_io_client *client, unsigned int num_regions,
-		    struct dm_io_region *where, int rw, struct dpages *dp,
-		    io_notify_fn fn, void *context)
+		    struct dm_io_region *where, int op, int op_flags,
+		    struct dpages *dp, io_notify_fn fn, void *context)
 {
 	struct io *io;
 
-	if (num_regions > 1 && (rw & RW_MASK) != WRITE) {
+	if (num_regions > 1 && !op_is_write(op)) {
 		WARN_ON(1);
 		fn(1, context);
 		return -EIO;
···
 	io->vma_invalidate_address = dp->vma_invalidate_address;
 	io->vma_invalidate_size = dp->vma_invalidate_size;
 
-	dispatch_io(rw, num_regions, where, dp, io, 0);
+	dispatch_io(op, op_flags, num_regions, where, dp, io, 0);
 	return 0;
 }
···
 
 	case DM_IO_VMA:
 		flush_kernel_vmap_range(io_req->mem.ptr.vma, size);
-		if ((io_req->bi_rw & RW_MASK) == READ) {
+		if (io_req->bi_op == REQ_OP_READ) {
 			dp->vma_invalidate_address = io_req->mem.ptr.vma;
 			dp->vma_invalidate_size = size;
 		}
···
 
 	if (!io_req->notify.fn)
 		return sync_io(io_req->client, num_regions, where,
-			       io_req->bi_rw, &dp, sync_error_bits);
+			       io_req->bi_op, io_req->bi_op_flags, &dp,
+			       sync_error_bits);
 
-	return async_io(io_req->client, num_regions, where, io_req->bi_rw,
-			&dp, io_req->notify.fn, io_req->notify.context);
+	return async_io(io_req->client, num_regions, where, io_req->bi_op,
+			io_req->bi_op_flags, &dp, io_req->notify.fn,
+			io_req->notify.context);
 }
 EXPORT_SYMBOL(dm_io);
drivers/md/dm-kcopyd.c (+6 -5)

···
 	io_job_finish(kc->throttle);
 
 	if (error) {
-		if (job->rw & WRITE)
+		if (op_is_write(job->rw))
 			job->write_err |= error;
 		else
 			job->read_err = 1;
···
 		}
 	}
 
-	if (job->rw & WRITE)
+	if (op_is_write(job->rw))
 		push(&kc->complete_jobs, job);
 
 	else {
···
 {
 	int r;
 	struct dm_io_request io_req = {
-		.bi_rw = job->rw,
+		.bi_op = job->rw,
+		.bi_op_flags = 0,
 		.mem.type = DM_IO_PAGE_LIST,
 		.mem.ptr.pl = job->pages,
 		.mem.offset = 0,
···
 
 	if (r < 0) {
 		/* error this rogue job */
-		if (job->rw & WRITE)
+		if (op_is_write(job->rw))
 			job->write_err = (unsigned long) -1L;
 		else
 			job->read_err = 1;
···
 	/*
 	 * Use WRITE SAME to optimize zeroing if all dests support it.
 	 */
-	job->rw = WRITE | REQ_WRITE_SAME;
+	job->rw = REQ_OP_WRITE_SAME;
 	for (i = 0; i < job->num_dests; i++)
 		if (!bdev_write_same(job->dests[i].bdev)) {
 			job->rw = WRITE;
drivers/md/dm-log-writes.c (+8 -5)

···
 	bio->bi_bdev = lc->logdev->bdev;
 	bio->bi_end_io = log_end_io;
 	bio->bi_private = lc;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
 	page = alloc_page(GFP_KERNEL);
 	if (!page) {
···
 		DMERR("Couldn't add page to the log block");
 		goto error_bio;
 	}
-	submit_bio(WRITE, bio);
+	submit_bio(bio);
 	return 0;
 error_bio:
 	bio_put(bio);
···
 	bio->bi_bdev = lc->logdev->bdev;
 	bio->bi_end_io = log_end_io;
 	bio->bi_private = lc;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
 	for (i = 0; i < block->vec_cnt; i++) {
 		/*
···
 				   block->vecs[i].bv_len, 0);
 		if (ret != block->vecs[i].bv_len) {
 			atomic_inc(&lc->io_blocks);
-			submit_bio(WRITE, bio);
+			submit_bio(bio);
 			bio = bio_alloc(GFP_KERNEL, block->vec_cnt - i);
 			if (!bio) {
 				DMERR("Couldn't alloc log bio");
···
 			bio->bi_bdev = lc->logdev->bdev;
 			bio->bi_end_io = log_end_io;
 			bio->bi_private = lc;
+			bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
 			ret = bio_add_page(bio, block->vecs[i].bv_page,
 					   block->vecs[i].bv_len, 0);
···
 		}
 		sector += block->vecs[i].bv_len >> SECTOR_SHIFT;
 	}
-	submit_bio(WRITE, bio);
+	submit_bio(bio);
 out:
 	kfree(block->data);
 	kfree(block);
···
 	struct bio_vec bv;
 	size_t alloc_size;
 	int i = 0;
-	bool flush_bio = (bio->bi_rw & REQ_FLUSH);
+	bool flush_bio = (bio->bi_rw & REQ_PREFLUSH);
 	bool fua_bio = (bio->bi_rw & REQ_FUA);
-	bool discard_bio = (bio->bi_rw & REQ_DISCARD);
+	bool discard_bio = (bio_op(bio) == REQ_OP_DISCARD);
 
 	pb->block = NULL;
drivers/md/dm-log.c (+3 -2)

···
 
 static int rw_header(struct log_c *lc, int rw)
 {
-	lc->io_req.bi_rw = rw;
+	lc->io_req.bi_op = rw;
 
 	return dm_io(&lc->io_req, 1, &lc->header_location, NULL);
 }
···
 		.count = 0,
 	};
 
-	lc->io_req.bi_rw = WRITE_FLUSH;
+	lc->io_req.bi_op = REQ_OP_WRITE;
+	lc->io_req.bi_op_flags = WRITE_FLUSH;
 
 	return dm_io(&lc->io_req, 1, &null_location, NULL);
 }
drivers/md/dm-raid.c (+3 -2)

···
 	if (rdev->sb_loaded)
 		return 0;
 
-	if (!sync_page_io(rdev, 0, size, rdev->sb_page, READ, 1)) {
+	if (!sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, 0, 1)) {
 		DMERR("Failed to read superblock of device at position %d",
 		      rdev->raid_disk);
 		md_error(rdev->mddev, rdev);
···
 	for (i = 0; i < rs->md.raid_disks; i++) {
 		r = &rs->dev[i].rdev;
 		if (test_bit(Faulty, &r->flags) && r->sb_page &&
-		    sync_page_io(r, 0, r->sb_size, r->sb_page, READ, 1)) {
+		    sync_page_io(r, 0, r->sb_size, r->sb_page, REQ_OP_READ, 0,
+				 1)) {
 			DMINFO("Faulty %s device #%d has readable super block."
 			       "  Attempting to revive it.",
 			       rs->raid_type->name, i);
drivers/md/dm-raid1.c (+13 -9)

···
 	struct dm_io_region io[ms->nr_mirrors];
 	struct mirror *m;
 	struct dm_io_request io_req = {
-		.bi_rw = WRITE_FLUSH,
+		.bi_op = REQ_OP_WRITE,
+		.bi_op_flags = WRITE_FLUSH,
 		.mem.type = DM_IO_KMEM,
 		.mem.ptr.addr = NULL,
 		.client = ms->io_client,
···
 {
 	struct dm_io_region io;
 	struct dm_io_request io_req = {
-		.bi_rw = READ,
+		.bi_op = REQ_OP_READ,
+		.bi_op_flags = 0,
 		.mem.type = DM_IO_BIO,
 		.mem.ptr.bio = bio,
 		.notify.fn = read_callback,
···
 	 * If the bio is discard, return an error, but do not
 	 * degrade the array.
 	 */
-	if (bio->bi_rw & REQ_DISCARD) {
+	if (bio_op(bio) == REQ_OP_DISCARD) {
 		bio->bi_error = -EOPNOTSUPP;
 		bio_endio(bio);
 		return;
···
 	struct dm_io_region io[ms->nr_mirrors], *dest = io;
 	struct mirror *m;
 	struct dm_io_request io_req = {
-		.bi_rw = WRITE | (bio->bi_rw & WRITE_FLUSH_FUA),
+		.bi_op = REQ_OP_WRITE,
+		.bi_op_flags = bio->bi_rw & WRITE_FLUSH_FUA,
 		.mem.type = DM_IO_BIO,
 		.mem.ptr.bio = bio,
 		.notify.fn = write_callback,
···
 		.client = ms->io_client,
 	};
 
-	if (bio->bi_rw & REQ_DISCARD) {
-		io_req.bi_rw |= REQ_DISCARD;
+	if (bio_op(bio) == REQ_OP_DISCARD) {
+		io_req.bi_op = REQ_OP_DISCARD;
 		io_req.mem.type = DM_IO_KMEM;
 		io_req.mem.ptr.addr = NULL;
 	}
···
 	bio_list_init(&requeue);
 
 	while ((bio = bio_list_pop(writes))) {
-		if ((bio->bi_rw & REQ_FLUSH) ||
-		    (bio->bi_rw & REQ_DISCARD)) {
+		if ((bio->bi_rw & REQ_PREFLUSH) ||
+		    (bio_op(bio) == REQ_OP_DISCARD)) {
 			bio_list_add(&sync, bio);
 			continue;
 		}
···
 	 * We need to dec pending if this was a write.
 	 */
 	if (rw == WRITE) {
-		if (!(bio->bi_rw & (REQ_FLUSH | REQ_DISCARD)))
+		if (!(bio->bi_rw & REQ_PREFLUSH) &&
+		    bio_op(bio) != REQ_OP_DISCARD)
 			dm_rh_dec(ms->rh, bio_record->write_region);
 		return error;
 	}
drivers/md/dm-region-hash.c (+3 -3)

···
 	region_t region = dm_rh_bio_to_region(rh, bio);
 	int recovering = 0;
 
-	if (bio->bi_rw & REQ_FLUSH) {
+	if (bio->bi_rw & REQ_PREFLUSH) {
 		rh->flush_failure = 1;
 		return;
 	}
 
-	if (bio->bi_rw & REQ_DISCARD)
+	if (bio_op(bio) == REQ_OP_DISCARD)
 		return;
 
 	/* We must inform the log that the sync count has changed. */
···
 	struct bio *bio;
 
 	for (bio = bios->head; bio; bio = bio->bi_next) {
-		if (bio->bi_rw & (REQ_FLUSH | REQ_DISCARD))
+		if (bio->bi_rw & REQ_PREFLUSH || bio_op(bio) == REQ_OP_DISCARD)
 			continue;
 		rh_inc(rh, dm_rh_bio_to_region(rh, bio));
 	}
drivers/md/dm-snap-persistent.c (+13 -11)

···
 /*
  * Read or write a chunk aligned and sized block of data from a device.
  */
-static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int rw,
-		    int metadata)
+static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int op,
+		    int op_flags, int metadata)
 {
 	struct dm_io_region where = {
 		.bdev = dm_snap_cow(ps->store->snap)->bdev,
···
 		.count = ps->store->chunk_size,
 	};
 	struct dm_io_request io_req = {
-		.bi_rw = rw,
+		.bi_op = op,
+		.bi_op_flags = op_flags,
 		.mem.type = DM_IO_VMA,
 		.mem.ptr.vma = area,
 		.client = ps->io_client,
···
 /*
  * Read or write a metadata area.  Remembering to skip the first
  * chunk which holds the header.
  */
-static int area_io(struct pstore *ps, int rw)
+static int area_io(struct pstore *ps, int op, int op_flags)
 {
 	int r;
 	chunk_t chunk;
 
 	chunk = area_location(ps, ps->current_area);
 
-	r = chunk_io(ps, ps->area, chunk, rw, 0);
+	r = chunk_io(ps, ps->area, chunk, op, op_flags, 0);
 	if (r)
 		return r;
···
 
 static int zero_disk_area(struct pstore *ps, chunk_t area)
 {
-	return chunk_io(ps, ps->zero_area, area_location(ps, area), WRITE, 0);
+	return chunk_io(ps, ps->zero_area, area_location(ps, area),
+			REQ_OP_WRITE, 0, 0);
 }
 
 static int read_header(struct pstore *ps, int *new_snapshot)
···
 	if (r)
 		return r;
 
-	r = chunk_io(ps, ps->header_area, 0, READ, 1);
+	r = chunk_io(ps, ps->header_area, 0, REQ_OP_READ, 0, 1);
 	if (r)
 		goto bad;
···
 	dh->version = cpu_to_le32(ps->version);
 	dh->chunk_size = cpu_to_le32(ps->store->chunk_size);
 
-	return chunk_io(ps, ps->header_area, 0, WRITE, 1);
+	return chunk_io(ps, ps->header_area, 0, REQ_OP_WRITE, 0, 1);
 }
 
 /*
···
 	/*
 	 * Commit exceptions to disk.
 	 */
-	if (ps->valid && area_io(ps, WRITE_FLUSH_FUA))
+	if (ps->valid && area_io(ps, REQ_OP_WRITE, WRITE_FLUSH_FUA))
 		ps->valid = 0;
 
 	/*
···
 		return 0;
 
 	ps->current_area--;
-	r = area_io(ps, READ);
+	r = area_io(ps, REQ_OP_READ, 0);
 	if (r < 0)
 		return r;
 	ps->current_committed = ps->exceptions_per_area;
···
 	for (i = 0; i < nr_merged; i++)
 		clear_exception(ps, ps->current_committed - 1 - i);
 
-	r = area_io(ps, WRITE_FLUSH_FUA);
+	r = area_io(ps, REQ_OP_WRITE, WRITE_FLUSH_FUA);
 	if (r < 0)
 		return r;
drivers/md/dm-snap.c (+3 -3)

···
 
 	init_tracked_chunk(bio);
 
-	if (bio->bi_rw & REQ_FLUSH) {
+	if (bio->bi_rw & REQ_PREFLUSH) {
 		bio->bi_bdev = s->cow->bdev;
 		return DM_MAPIO_REMAPPED;
 	}
···
 
 	init_tracked_chunk(bio);
 
-	if (bio->bi_rw & REQ_FLUSH) {
+	if (bio->bi_rw & REQ_PREFLUSH) {
 		if (!dm_bio_get_target_bio_nr(bio))
 			bio->bi_bdev = s->origin->bdev;
 		else
···
 
 	bio->bi_bdev = o->dev->bdev;
 
-	if (unlikely(bio->bi_rw & REQ_FLUSH))
+	if (unlikely(bio->bi_rw & REQ_PREFLUSH))
 		return DM_MAPIO_REMAPPED;
 
 	if (bio_rw(bio) != WRITE)
drivers/md/dm-stats.c (+4 -5)

···
 }
 
 static void dm_stat_for_entry(struct dm_stat *s, size_t entry,
-			      unsigned long bi_rw, sector_t len,
+			      int idx, sector_t len,
 			      struct dm_stats_aux *stats_aux, bool end,
 			      unsigned long duration_jiffies)
 {
-	unsigned long idx = bi_rw & REQ_WRITE;
 	struct dm_stat_shared *shared = &s->stat_shared[entry];
 	struct dm_stat_percpu *p;
···
 #endif
 }
 
-static void __dm_stat_bio(struct dm_stat *s, unsigned long bi_rw,
+static void __dm_stat_bio(struct dm_stat *s, int bi_rw,
 			  sector_t bi_sector, sector_t end_sector,
 			  bool end, unsigned long duration_jiffies,
 			  struct dm_stats_aux *stats_aux)
···
 		last = raw_cpu_ptr(stats->last);
 		stats_aux->merged =
 			(bi_sector == (ACCESS_ONCE(last->last_sector) &&
-				       ((bi_rw & (REQ_WRITE | REQ_DISCARD)) ==
-					(ACCESS_ONCE(last->last_rw) & (REQ_WRITE | REQ_DISCARD)))
+				       ((bi_rw == WRITE) ==
+					(ACCESS_ONCE(last->last_rw) == WRITE))
 				       ));
 		ACCESS_ONCE(last->last_sector) = end_sector;
 		ACCESS_ONCE(last->last_rw) = bi_rw;
drivers/md/dm-stripe.c (+3 -3)

···
 	uint32_t stripe;
 	unsigned target_bio_nr;
 
-	if (bio->bi_rw & REQ_FLUSH) {
+	if (bio->bi_rw & REQ_PREFLUSH) {
 		target_bio_nr = dm_bio_get_target_bio_nr(bio);
 		BUG_ON(target_bio_nr >= sc->stripes);
 		bio->bi_bdev = sc->stripe[target_bio_nr].dev->bdev;
 		return DM_MAPIO_REMAPPED;
 	}
-	if (unlikely(bio->bi_rw & REQ_DISCARD) ||
-	    unlikely(bio->bi_rw & REQ_WRITE_SAME)) {
+	if (unlikely(bio_op(bio) == REQ_OP_DISCARD) ||
+	    unlikely(bio_op(bio) == REQ_OP_WRITE_SAME)) {
 		target_bio_nr = dm_bio_get_target_bio_nr(bio);
 		BUG_ON(target_bio_nr >= sc->stripes);
 		return stripe_map_range(sc, bio, target_bio_nr);
drivers/md/dm-thin.c (+13 -9)

···
 	sector_t len = block_to_sectors(tc->pool, data_e - data_b);
 
 	return __blkdev_issue_discard(tc->pool_dev->bdev, s, len,
-				      GFP_NOWAIT, REQ_WRITE | REQ_DISCARD, &op->bio);
+				      GFP_NOWAIT, 0, &op->bio);
 }
 
 static void end_discard(struct discard_op *op, int r)
···
 		 * need to wait for the chain to complete.
 		 */
 		bio_chain(op->bio, op->parent_bio);
-		submit_bio(REQ_WRITE | REQ_DISCARD, op->bio);
+		bio_set_op_attrs(op->bio, REQ_OP_DISCARD, 0);
+		submit_bio(op->bio);
 	}
 
 	blk_finish_plug(&op->plug);
···
 
 static int bio_triggers_commit(struct thin_c *tc, struct bio *bio)
 {
-	return (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) &&
+	return (bio->bi_rw & (REQ_PREFLUSH | REQ_FUA)) &&
 		dm_thin_changed_this_transaction(tc->td);
 }
···
 {
 	struct dm_thin_endio_hook *h;
 
-	if (bio->bi_rw & REQ_DISCARD)
+	if (bio_op(bio) == REQ_OP_DISCARD)
 		return;
 
 	h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook));
···
 	struct bio *bio;
 
 	while ((bio = bio_list_pop(&cell->bios))) {
-		if (bio->bi_rw & (REQ_DISCARD | REQ_FLUSH | REQ_FUA))
+		if (bio->bi_rw & (REQ_PREFLUSH | REQ_FUA) ||
+		    bio_op(bio) == REQ_OP_DISCARD)
 			bio_list_add(&info->defer_bios, bio);
 		else {
 			inc_all_io_entry(info->tc->pool, bio);
···
 	while ((bio = bio_list_pop(&cell->bios))) {
 		if ((bio_data_dir(bio) == WRITE) ||
-		    (bio->bi_rw & (REQ_DISCARD | REQ_FLUSH | REQ_FUA)))
+		    (bio->bi_rw & (REQ_PREFLUSH | REQ_FUA) ||
+		     bio_op(bio) == REQ_OP_DISCARD))
 			bio_list_add(&info->defer_bios, bio);
 		else {
 			struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook));
···
 			break;
 		}
 
-		if (bio->bi_rw & REQ_DISCARD)
+		if (bio_op(bio) == REQ_OP_DISCARD)
 			pool->process_discard(tc, bio);
 		else
 			pool->process_bio(tc, bio);
···
 		return;
 	}
 
-	if (cell->holder->bi_rw & REQ_DISCARD)
+	if (bio_op(cell->holder) == REQ_OP_DISCARD)
 		pool->process_discard_cell(tc, cell);
 	else
 		pool->process_cell(tc, cell);
···
 		return DM_MAPIO_SUBMITTED;
 	}
 
-	if (bio->bi_rw & (REQ_DISCARD | REQ_FLUSH | REQ_FUA)) {
+	if (bio->bi_rw & (REQ_PREFLUSH | REQ_FUA) ||
+	    bio_op(bio) == REQ_OP_DISCARD) {
 		thin_defer_bio_with_throttle(tc, bio);
 		return DM_MAPIO_SUBMITTED;
 	}
drivers/md/dm.c (+24 -21)

···
 		    atomic_inc_return(&md->pending[rw]));
 
 	if (unlikely(dm_stats_used(&md->stats)))
-		dm_stats_account_io(&md->stats, bio->bi_rw, bio->bi_iter.bi_sector,
-				    bio_sectors(bio), false, 0, &io->stats_aux);
+		dm_stats_account_io(&md->stats, bio_data_dir(bio),
+				    bio->bi_iter.bi_sector, bio_sectors(bio),
+				    false, 0, &io->stats_aux);
 }
 
 static void end_io_acct(struct dm_io *io)
···
 	generic_end_io_acct(rw, &dm_disk(md)->part0, io->start_time);
 
 	if (unlikely(dm_stats_used(&md->stats)))
-		dm_stats_account_io(&md->stats, bio->bi_rw, bio->bi_iter.bi_sector,
-				    bio_sectors(bio), true, duration, &io->stats_aux);
+		dm_stats_account_io(&md->stats, bio_data_dir(bio),
+				    bio->bi_iter.bi_sector, bio_sectors(bio),
+				    true, duration, &io->stats_aux);
 
 	/*
 	 * After this is decremented the bio must not be touched if it is
···
 		if (io_error == DM_ENDIO_REQUEUE)
 			return;
 
-		if ((bio->bi_rw & REQ_FLUSH) && bio->bi_iter.bi_size) {
+		if ((bio->bi_rw & REQ_PREFLUSH) && bio->bi_iter.bi_size) {
 			/*
 			 * Preflush done for flush with data, reissue
-			 * without REQ_FLUSH.
+			 * without REQ_PREFLUSH.
 			 */
-			bio->bi_rw &= ~REQ_FLUSH;
+			bio->bi_rw &= ~REQ_PREFLUSH;
 			queue_io(md, bio);
 		} else {
 			/* done with normal IO or empty flush */
···
 		}
 	}
 
-	if (unlikely(r == -EREMOTEIO && (bio->bi_rw & REQ_WRITE_SAME) &&
+	if (unlikely(r == -EREMOTEIO && (bio_op(bio) == REQ_OP_WRITE_SAME) &&
 		     !bdev_get_queue(bio->bi_bdev)->limits.max_write_same_sectors))
 		disable_write_same(md);
···
 	if (unlikely(dm_stats_used(&md->stats))) {
 		struct dm_rq_target_io *tio = tio_from_request(orig);
 		tio->duration_jiffies = jiffies - tio->duration_jiffies;
-		dm_stats_account_io(&md->stats, orig->cmd_flags, blk_rq_pos(orig),
-				    tio->n_sectors, true, tio->duration_jiffies,
-				    &tio->stats_aux);
+		dm_stats_account_io(&md->stats, rq_data_dir(orig),
+				    blk_rq_pos(orig), tio->n_sectors, true,
+				    tio->duration_jiffies, &tio->stats_aux);
 	}
 }
···
 		r = rq_end_io(tio->ti, clone, error, &tio->info);
 	}
 
-	if (unlikely(r == -EREMOTEIO && (clone->cmd_flags & REQ_WRITE_SAME) &&
+	if (unlikely(r == -EREMOTEIO && (req_op(clone) == REQ_OP_WRITE_SAME) &&
 		     !clone->q->limits.max_write_same_sectors))
 		disable_write_same(tio->md);
···
 
 /*
  * A target may call dm_accept_partial_bio only from the map routine.  It is
- * allowed for all bio types except REQ_FLUSH.
+ * allowed for all bio types except REQ_PREFLUSH.
  *
  * dm_accept_partial_bio informs the dm that the target only wants to process
  * additional n_sectors sectors of the bio and the rest of the data should be
···
 {
 	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
 	unsigned bi_size = bio->bi_iter.bi_size >> SECTOR_SHIFT;
-	BUG_ON(bio->bi_rw & REQ_FLUSH);
+	BUG_ON(bio->bi_rw & REQ_PREFLUSH);
 	BUG_ON(bi_size > *tio->len_ptr);
 	BUG_ON(n_sectors > bi_size);
 	*tio->len_ptr -= bi_size - n_sectors;
···
 	unsigned len;
 	int r;
 
-	if (unlikely(bio->bi_rw & REQ_DISCARD))
+	if (unlikely(bio_op(bio) == REQ_OP_DISCARD))
 		return __send_discard(ci);
-	else if (unlikely(bio->bi_rw & REQ_WRITE_SAME))
+	else if (unlikely(bio_op(bio) == REQ_OP_WRITE_SAME))
 		return __send_write_same(ci);
 
 	ti = dm_table_find_target(ci->map, ci->sector);
···
 
 	start_io_acct(ci.io);
 
-	if (bio->bi_rw & REQ_FLUSH) {
+	if (bio->bi_rw & REQ_PREFLUSH) {
 		ci.bio = &ci.md->flush_bio;
 		ci.sector_count = 0;
 		error = __send_empty_flush(&ci);
···
 		struct dm_rq_target_io *tio = tio_from_request(orig);
 		tio->duration_jiffies = jiffies;
 		tio->n_sectors = blk_rq_sectors(orig);
-		dm_stats_account_io(&md->stats, orig->cmd_flags, blk_rq_pos(orig),
-				    tio->n_sectors, false, 0, &tio->stats_aux);
+		dm_stats_account_io(&md->stats, rq_data_dir(orig),
+				    blk_rq_pos(orig), tio->n_sectors, false, 0,
+				    &tio->stats_aux);
 	}
 
 /*
···
 
 	/* always use block 0 to find the target for flushes for now */
 	pos = 0;
-	if (!(rq->cmd_flags & REQ_FLUSH))
+	if (req_op(rq) != REQ_OP_FLUSH)
 		pos = blk_rq_pos(rq);
 
 	if ((dm_request_peeked_before_merge_deadline(md) &&
···
 	bio_init(&md->flush_bio);
 	md->flush_bio.bi_bdev = md->bdev;
-	md->flush_bio.bi_rw = WRITE_FLUSH;
+	bio_set_op_attrs(&md->flush_bio, REQ_OP_WRITE, WRITE_FLUSH);
 
 	dm_stats_init(&md->stats);
drivers/md/linear.c (+2 -2)

···
 	struct bio *split;
 	sector_t start_sector, end_sector, data_offset;
 
-	if (unlikely(bio->bi_rw & REQ_FLUSH)) {
+	if (unlikely(bio->bi_rw & REQ_PREFLUSH)) {
 		md_flush_request(mddev, bio);
 		return;
 	}
···
 		split->bi_iter.bi_sector = split->bi_iter.bi_sector -
 			start_sector + data_offset;
 
-		if (unlikely((split->bi_rw & REQ_DISCARD) &&
+		if (unlikely((bio_op(split) == REQ_OP_DISCARD) &&
 			     !blk_queue_discard(bdev_get_queue(split->bi_bdev)))) {
 			/* Just ignore it */
 			bio_endio(split);
+11 -7
drivers/md/md.c
···
 	bi->bi_end_io = md_end_flush;
 	bi->bi_private = rdev;
 	bi->bi_bdev = rdev->bdev;
+	bio_set_op_attrs(bi, REQ_OP_WRITE, WRITE_FLUSH);
 	atomic_inc(&mddev->flush_pending);
-	submit_bio(WRITE_FLUSH, bi);
+	submit_bio(bi);
 	rcu_read_lock();
 	rdev_dec_pending(rdev, mddev);
 }
···
 		/* an empty barrier - all done */
 		bio_endio(bio);
 	else {
-		bio->bi_rw &= ~REQ_FLUSH;
+		bio->bi_rw &= ~REQ_PREFLUSH;
 		mddev->pers->make_request(mddev, bio);
 	}
···
 	bio_add_page(bio, page, size, 0);
 	bio->bi_private = rdev;
 	bio->bi_end_io = super_written;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH_FUA);

 	atomic_inc(&mddev->pending_writes);
-	submit_bio(WRITE_FLUSH_FUA, bio);
+	submit_bio(bio);
 }

 void md_super_wait(struct mddev *mddev)
···
 }

 int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
-		 struct page *page, int rw, bool metadata_op)
+		 struct page *page, int op, int op_flags, bool metadata_op)
 {
 	struct bio *bio = bio_alloc_mddev(GFP_NOIO, 1, rdev->mddev);
 	int ret;

 	bio->bi_bdev = (metadata_op && rdev->meta_bdev) ?
 		rdev->meta_bdev : rdev->bdev;
+	bio_set_op_attrs(bio, op, op_flags);
 	if (metadata_op)
 		bio->bi_iter.bi_sector = sector + rdev->sb_start;
 	else if (rdev->mddev->reshape_position != MaxSector &&
···
 	else
 		bio->bi_iter.bi_sector = sector + rdev->data_offset;
 	bio_add_page(bio, page, size, 0);
-	submit_bio_wait(rw, bio);
+
+	submit_bio_wait(bio);

 	ret = !bio->bi_error;
 	bio_put(bio);
···
 	if (rdev->sb_loaded)
 		return 0;

-	if (!sync_page_io(rdev, 0, size, rdev->sb_page, READ, true))
+	if (!sync_page_io(rdev, 0, size, rdev->sb_page, REQ_OP_READ, 0, true))
 		goto fail;
 	rdev->sb_loaded = 1;
 	return 0;
···
 		return -EINVAL;
 	bb_sector = (long long)offset;
 	if (!sync_page_io(rdev, bb_sector, sectors << 9,
-			  rdev->bb_page, READ, true))
+			  rdev->bb_page, REQ_OP_READ, 0, true))
 		return -EIO;
 	bbp = (u64 *)page_address(rdev->bb_page);
 	rdev->badblocks.shift = sb->bblog_shift;
+3 -2
drivers/md/md.h
···
 /* Generic flush handling.
  * The last to finish preflush schedules a worker to submit
- * the rest of the request (without the REQ_FLUSH flag).
+ * the rest of the request (without the REQ_PREFLUSH flag).
  */
 struct bio *flush_bio;
 atomic_t flush_pending;
···
 			   sector_t sector, int size, struct page *page);
 extern void md_super_wait(struct mddev *mddev);
 extern int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
-			struct page *page, int rw, bool metadata_op);
+			struct page *page, int op, int op_flags,
+			bool metadata_op);
 extern void md_do_sync(struct md_thread *thread);
 extern void md_new_event(struct mddev *mddev);
 extern int md_allow_write(struct mddev *mddev);
+1 -1
drivers/md/multipath.c
···
 	struct multipath_bh * mp_bh;
 	struct multipath_info *multipath;

-	if (unlikely(bio->bi_rw & REQ_FLUSH)) {
+	if (unlikely(bio->bi_rw & REQ_PREFLUSH)) {
 		md_flush_request(mddev, bio);
 		return;
 	}
+2 -2
drivers/md/raid0.c
···
 	struct md_rdev *tmp_dev;
 	struct bio *split;

-	if (unlikely(bio->bi_rw & REQ_FLUSH)) {
+	if (unlikely(bio->bi_rw & REQ_PREFLUSH)) {
 		md_flush_request(mddev, bio);
 		return;
 	}
···
 	split->bi_iter.bi_sector = sector + zone->dev_start +
 		tmp_dev->data_offset;

-	if (unlikely((split->bi_rw & REQ_DISCARD) &&
+	if (unlikely((bio_op(split) == REQ_OP_DISCARD) &&
 	    !blk_queue_discard(bdev_get_queue(split->bi_bdev)))) {
 		/* Just ignore it */
 		bio_endio(split);
+19 -19
drivers/md/raid1.c
···
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
 		bio->bi_next = NULL;
-		if (unlikely((bio->bi_rw & REQ_DISCARD) &&
+		if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
 		    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
 			/* Just ignore it */
 			bio_endio(bio);
···
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
 		bio->bi_next = NULL;
-		if (unlikely((bio->bi_rw & REQ_DISCARD) &&
+		if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
 		    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
 			/* Just ignore it */
 			bio_endio(bio);
···
 	int i, disks;
 	struct bitmap *bitmap;
 	unsigned long flags;
+	const int op = bio_op(bio);
 	const int rw = bio_data_dir(bio);
 	const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);
-	const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA));
-	const unsigned long do_discard = (bio->bi_rw
-					 & (REQ_DISCARD | REQ_SECURE));
-	const unsigned long do_same = (bio->bi_rw & REQ_WRITE_SAME);
+	const unsigned long do_flush_fua = (bio->bi_rw &
+					    (REQ_PREFLUSH | REQ_FUA));
+	const unsigned long do_sec = (bio->bi_rw & REQ_SECURE);
 	struct md_rdev *blocked_rdev;
 	struct blk_plug_cb *cb;
 	struct raid1_plug_cb *plug = NULL;
···
 		mirror->rdev->data_offset;
 	read_bio->bi_bdev = mirror->rdev->bdev;
 	read_bio->bi_end_io = raid1_end_read_request;
-	read_bio->bi_rw = READ | do_sync;
+	bio_set_op_attrs(read_bio, op, do_sync);
 	read_bio->bi_private = r1_bio;

 	if (max_sectors < r1_bio->sectors) {
···
 			   conf->mirrors[i].rdev->data_offset);
 		mbio->bi_bdev = conf->mirrors[i].rdev->bdev;
 		mbio->bi_end_io = raid1_end_write_request;
-		mbio->bi_rw =
-			WRITE | do_flush_fua | do_sync | do_discard | do_same;
+		bio_set_op_attrs(mbio, op, do_flush_fua | do_sync | do_sec);
 		mbio->bi_private = r1_bio;

 		atomic_inc(&r1_bio->remaining);
···
 static int r1_sync_page_io(struct md_rdev *rdev, sector_t sector,
 			   int sectors, struct page *page, int rw)
 {
-	if (sync_page_io(rdev, sector, sectors << 9, page, rw, false))
+	if (sync_page_io(rdev, sector, sectors << 9, page, rw, 0, false))
 		/* success */
 		return 1;
 	if (rw == WRITE) {
···
 		rdev = conf->mirrors[d].rdev;
 		if (sync_page_io(rdev, sect, s<<9,
 				 bio->bi_io_vec[idx].bv_page,
-				 READ, false)) {
+				 REQ_OP_READ, 0, false)) {
 			success = 1;
 			break;
 		}
···
 		      !test_bit(MD_RECOVERY_SYNC, &mddev->recovery))))
 			continue;

-		wbio->bi_rw = WRITE;
+		bio_set_op_attrs(wbio, REQ_OP_WRITE, 0);
 		wbio->bi_end_io = end_sync_write;
 		atomic_inc(&r1_bio->remaining);
 		md_sync_acct(conf->mirrors[i].rdev->bdev, bio_sectors(wbio));
···
 		    is_badblock(rdev, sect, s,
 				&first_bad, &bad_sectors) == 0 &&
 		    sync_page_io(rdev, sect, s<<9,
-				 conf->tmppage, READ, false))
+				 conf->tmppage, REQ_OP_READ, 0, false))
 			success = 1;
 		else {
 			d++;
···
 		wbio = bio_clone_mddev(r1_bio->master_bio, GFP_NOIO, mddev);
 	}

-	wbio->bi_rw = WRITE;
+	bio_set_op_attrs(wbio, REQ_OP_WRITE, 0);
 	wbio->bi_iter.bi_sector = r1_bio->sector;
 	wbio->bi_iter.bi_size = r1_bio->sectors << 9;

 	bio_trim(wbio, sector - r1_bio->sector, sectors);
 	wbio->bi_iter.bi_sector += rdev->data_offset;
 	wbio->bi_bdev = rdev->bdev;
-	if (submit_bio_wait(WRITE, wbio) < 0)
+
+	if (submit_bio_wait(wbio) < 0)
 		/* failure! */
 		ok = rdev_set_badblocks(rdev, sector,
 					sectors, 0)
···
 	bio->bi_iter.bi_sector = r1_bio->sector + rdev->data_offset;
 	bio->bi_bdev = rdev->bdev;
 	bio->bi_end_io = raid1_end_read_request;
-	bio->bi_rw = READ | do_sync;
+	bio_set_op_attrs(bio, REQ_OP_READ, do_sync);
 	bio->bi_private = r1_bio;
 	if (max_sectors < r1_bio->sectors) {
 		/* Drat - have to split this up more */
···
 			if (i < conf->raid_disks)
 				still_degraded = 1;
 		} else if (!test_bit(In_sync, &rdev->flags)) {
-			bio->bi_rw = WRITE;
+			bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 			bio->bi_end_io = end_sync_write;
 			write_targets ++;
 		} else {
···
 				if (disk < 0)
 					disk = i;
 			}
-			bio->bi_rw = READ;
+			bio_set_op_attrs(bio, REQ_OP_READ, 0);
 			bio->bi_end_io = end_sync_read;
 			read_targets++;
 		} else if (!test_bit(WriteErrorSeen, &rdev->flags) &&
···
 			 * if we are doing resync or repair. Otherwise, leave
 			 * this device alone for this sync request.
 			 */
-			bio->bi_rw = WRITE;
+			bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 			bio->bi_end_io = end_sync_write;
 			write_targets++;
 		}
+26 -26
drivers/md/raid10.c
···
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
 		bio->bi_next = NULL;
-		if (unlikely((bio->bi_rw & REQ_DISCARD) &&
+		if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
 		    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
 			/* Just ignore it */
 			bio_endio(bio);
···
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
 		bio->bi_next = NULL;
-		if (unlikely((bio->bi_rw & REQ_DISCARD) &&
+		if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
 		    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
 			/* Just ignore it */
 			bio_endio(bio);
···
 	struct r10bio *r10_bio;
 	struct bio *read_bio;
 	int i;
+	const int op = bio_op(bio);
 	const int rw = bio_data_dir(bio);
 	const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);
 	const unsigned long do_fua = (bio->bi_rw & REQ_FUA);
-	const unsigned long do_discard = (bio->bi_rw
-					 & (REQ_DISCARD | REQ_SECURE));
-	const unsigned long do_same = (bio->bi_rw & REQ_WRITE_SAME);
+	const unsigned long do_sec = (bio->bi_rw & REQ_SECURE);
 	unsigned long flags;
 	struct md_rdev *blocked_rdev;
 	struct blk_plug_cb *cb;
···
 		choose_data_offset(r10_bio, rdev);
 	read_bio->bi_bdev = rdev->bdev;
 	read_bio->bi_end_io = raid10_end_read_request;
-	read_bio->bi_rw = READ | do_sync;
+	bio_set_op_attrs(read_bio, op, do_sync);
 	read_bio->bi_private = r10_bio;

 	if (max_sectors < r10_bio->sectors) {
···
 					  rdev));
 		mbio->bi_bdev = rdev->bdev;
 		mbio->bi_end_io = raid10_end_write_request;
-		mbio->bi_rw =
-			WRITE | do_sync | do_fua | do_discard | do_same;
+		bio_set_op_attrs(mbio, op, do_sync | do_fua | do_sec);
 		mbio->bi_private = r10_bio;

 		atomic_inc(&r10_bio->remaining);
···
 				   r10_bio, rdev));
 		mbio->bi_bdev = rdev->bdev;
 		mbio->bi_end_io = raid10_end_write_request;
-		mbio->bi_rw =
-			WRITE | do_sync | do_fua | do_discard | do_same;
+		bio_set_op_attrs(mbio, op, do_sync | do_fua | do_sec);
 		mbio->bi_private = r10_bio;

 		atomic_inc(&r10_bio->remaining);
···
 	struct bio *split;

-	if (unlikely(bio->bi_rw & REQ_FLUSH)) {
+	if (unlikely(bio->bi_rw & REQ_PREFLUSH)) {
 		md_flush_request(mddev, bio);
 		return;
 	}
···
 		tbio->bi_vcnt = vcnt;
 		tbio->bi_iter.bi_size = fbio->bi_iter.bi_size;
-		tbio->bi_rw = WRITE;
 		tbio->bi_private = r10_bio;
 		tbio->bi_iter.bi_sector = r10_bio->devs[i].addr;
 		tbio->bi_end_io = end_sync_write;
+		bio_set_op_attrs(tbio, REQ_OP_WRITE, 0);

 		bio_copy_data(tbio, fbio);
···
 				  addr,
 				  s << 9,
 				  bio->bi_io_vec[idx].bv_page,
-				  READ, false);
+				  REQ_OP_READ, 0, false);
 		if (ok) {
 			rdev = conf->mirrors[dw].rdev;
 			addr = r10_bio->devs[1].addr + sect;
···
 				  addr,
 				  s << 9,
 				  bio->bi_io_vec[idx].bv_page,
-				  WRITE, false);
+				  REQ_OP_WRITE, 0, false);
 			if (!ok) {
 				set_bit(WriteErrorSeen, &rdev->flags);
 				if (!test_and_set_bit(WantReplacement,
···
 	if (is_badblock(rdev, sector, sectors, &first_bad, &bad_sectors)
 	    && (rw == READ || test_bit(WriteErrorSeen, &rdev->flags)))
 		return -1;
-	if (sync_page_io(rdev, sector, sectors << 9, page, rw, false))
+	if (sync_page_io(rdev, sector, sectors << 9, page, rw, 0, false))
 		/* success */
 		return 1;
 	if (rw == WRITE) {
···
 				       r10_bio->devs[sl].addr +
 				       sect,
 				       s<<9,
-				       conf->tmppage, READ, false);
+				       conf->tmppage,
+				       REQ_OP_READ, 0, false);
 		rdev_dec_pending(rdev, mddev);
 		rcu_read_lock();
 		if (success)
···
 			   choose_data_offset(r10_bio, rdev) +
 			   (sector - r10_bio->sector));
 	wbio->bi_bdev = rdev->bdev;
-	if (submit_bio_wait(WRITE, wbio) < 0)
+	bio_set_op_attrs(wbio, REQ_OP_WRITE, 0);
+
+	if (submit_bio_wait(wbio) < 0)
 		/* Failure! */
 		ok = rdev_set_badblocks(rdev, sector,
 					sectors, 0)
···
 	bio->bi_iter.bi_sector = r10_bio->devs[slot].addr
 		+ choose_data_offset(r10_bio, rdev);
 	bio->bi_bdev = rdev->bdev;
-	bio->bi_rw = READ | do_sync;
+	bio_set_op_attrs(bio, REQ_OP_READ, do_sync);
 	bio->bi_private = r10_bio;
 	bio->bi_end_io = raid10_end_read_request;
 	if (max_sectors < r10_bio->sectors) {
···
 	biolist = bio;
 	bio->bi_private = r10_bio;
 	bio->bi_end_io = end_sync_read;
-	bio->bi_rw = READ;
+	bio_set_op_attrs(bio, REQ_OP_READ, 0);
 	from_addr = r10_bio->devs[j].addr;
 	bio->bi_iter.bi_sector = from_addr +
 		rdev->data_offset;
···
 	biolist = bio;
 	bio->bi_private = r10_bio;
 	bio->bi_end_io = end_sync_write;
-	bio->bi_rw = WRITE;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	bio->bi_iter.bi_sector = to_addr
 		+ rdev->data_offset;
 	bio->bi_bdev = rdev->bdev;
···
 	biolist = bio;
 	bio->bi_private = r10_bio;
 	bio->bi_end_io = end_sync_write;
-	bio->bi_rw = WRITE;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	bio->bi_iter.bi_sector = to_addr +
 		rdev->data_offset;
 	bio->bi_bdev = rdev->bdev;
···
 	biolist = bio;
 	bio->bi_private = r10_bio;
 	bio->bi_end_io = end_sync_read;
-	bio->bi_rw = READ;
+	bio_set_op_attrs(bio, REQ_OP_READ, 0);
 	bio->bi_iter.bi_sector = sector +
 		conf->mirrors[d].rdev->data_offset;
 	bio->bi_bdev = conf->mirrors[d].rdev->bdev;
···
 	biolist = bio;
 	bio->bi_private = r10_bio;
 	bio->bi_end_io = end_sync_write;
-	bio->bi_rw = WRITE;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	bio->bi_iter.bi_sector = sector +
 		conf->mirrors[d].replacement->data_offset;
 	bio->bi_bdev = conf->mirrors[d].replacement->bdev;
···
 			     + rdev->data_offset);
 	read_bio->bi_private = r10_bio;
 	read_bio->bi_end_io = end_sync_read;
-	read_bio->bi_rw = READ;
+	bio_set_op_attrs(read_bio, REQ_OP_READ, 0);
 	read_bio->bi_flags &= (~0UL << BIO_RESET_BITS);
 	read_bio->bi_error = 0;
 	read_bio->bi_vcnt = 0;
···
 			rdev2->new_data_offset;
 		b->bi_private = r10_bio;
 		b->bi_end_io = end_reshape_write;
-		b->bi_rw = WRITE;
+		bio_set_op_attrs(b, REQ_OP_WRITE, 0);
 		b->bi_next = blist;
 		blist = b;
 	}
···
 				  addr,
 				  s << 9,
 				  bvec[idx].bv_page,
-				  READ, false);
+				  REQ_OP_READ, 0, false);
 		if (success)
 			break;
 failed:
+20 -13
drivers/md/raid5-cache.c
···
 	__r5l_set_io_unit_state(io, IO_UNIT_IO_START);
 	spin_unlock_irqrestore(&log->io_list_lock, flags);

-	submit_bio(WRITE, io->current_bio);
+	submit_bio(io->current_bio);
 }

 static struct bio *r5l_bio_alloc(struct r5l_log *log)
 {
 	struct bio *bio = bio_alloc_bioset(GFP_NOIO, BIO_MAX_PAGES, log->bs);

-	bio->bi_rw = WRITE;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	bio->bi_bdev = log->rdev->bdev;
 	bio->bi_iter.bi_sector = log->rdev->data_offset + log->log_start;

···
 		io->current_bio = r5l_bio_alloc(log);
 		bio_chain(io->current_bio, prev);

-		submit_bio(WRITE, prev);
+		submit_bio(prev);
 	}

 	if (!bio_add_page(io->current_bio, page, PAGE_SIZE, 0))
···
 		bio_endio(bio);
 		return 0;
 	}
-	bio->bi_rw &= ~REQ_FLUSH;
+	bio->bi_rw &= ~REQ_PREFLUSH;
 	return -EAGAIN;
 }

···
 	bio_reset(&log->flush_bio);
 	log->flush_bio.bi_bdev = log->rdev->bdev;
 	log->flush_bio.bi_end_io = r5l_log_flush_endio;
-	submit_bio(WRITE_FLUSH, &log->flush_bio);
+	bio_set_op_attrs(&log->flush_bio, REQ_OP_WRITE, WRITE_FLUSH);
+	submit_bio(&log->flush_bio);
 }

 static void r5l_write_super(struct r5l_log *log, sector_t cp);
···
 	struct r5l_meta_block *mb;
 	u32 crc, stored_crc;

-	if (!sync_page_io(log->rdev, ctx->pos, PAGE_SIZE, page, READ, false))
+	if (!sync_page_io(log->rdev, ctx->pos, PAGE_SIZE, page, REQ_OP_READ, 0,
+			  false))
 		return -EIO;

 	mb = page_address(page);
···
 					     &disk_index, sh);

 			sync_page_io(log->rdev, *log_offset, PAGE_SIZE,
-				     sh->dev[disk_index].page, READ, false);
+				     sh->dev[disk_index].page, REQ_OP_READ, 0,
+				     false);
 			sh->dev[disk_index].log_checksum =
 				le32_to_cpu(payload->checksum[0]);
 			set_bit(R5_Wantwrite, &sh->dev[disk_index].flags);
···
 		} else {
 			disk_index = sh->pd_idx;
 			sync_page_io(log->rdev, *log_offset, PAGE_SIZE,
-				     sh->dev[disk_index].page, READ, false);
+				     sh->dev[disk_index].page, REQ_OP_READ, 0,
+				     false);
 			sh->dev[disk_index].log_checksum =
 				le32_to_cpu(payload->checksum[0]);
 			set_bit(R5_Wantwrite, &sh->dev[disk_index].flags);
···
 			sync_page_io(log->rdev,
 				     r5l_ring_add(log, *log_offset, BLOCK_SECTORS),
 				     PAGE_SIZE, sh->dev[disk_index].page,
-				     READ, false);
+				     REQ_OP_READ, 0, false);
 			sh->dev[disk_index].log_checksum =
 				le32_to_cpu(payload->checksum[1]);
 			set_bit(R5_Wantwrite,
···
 		rdev = rcu_dereference(conf->disks[disk_index].rdev);
 		if (rdev)
 			sync_page_io(rdev, stripe_sect, PAGE_SIZE,
-				     sh->dev[disk_index].page, WRITE, false);
+				     sh->dev[disk_index].page, REQ_OP_WRITE, 0,
+				     false);
 		rrdev = rcu_dereference(conf->disks[disk_index].replacement);
 		if (rrdev)
 			sync_page_io(rrdev, stripe_sect, PAGE_SIZE,
-				     sh->dev[disk_index].page, WRITE, false);
+				     sh->dev[disk_index].page, REQ_OP_WRITE, 0,
+				     false);
 	}
 	raid5_release_stripe(sh);
 	return 0;
···
 	crc = crc32c_le(log->uuid_checksum, mb, PAGE_SIZE);
 	mb->checksum = cpu_to_le32(crc);

-	if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, WRITE_FUA, false)) {
+	if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, REQ_OP_WRITE,
+			  WRITE_FUA, false)) {
 		__free_page(page);
 		return -EIO;
 	}
···
 	if (!page)
 		return -ENOMEM;

-	if (!sync_page_io(rdev, cp, PAGE_SIZE, page, READ, false)) {
+	if (!sync_page_io(rdev, cp, PAGE_SIZE, page, REQ_OP_READ, 0, false)) {
 		ret = -EIO;
 		goto ioerr;
 	}
+24 -24
drivers/md/raid5.c
···
 	dd_idx = 0;
 	while (dd_idx == sh->pd_idx || dd_idx == sh->qd_idx)
 		dd_idx++;
-	if (head->dev[dd_idx].towrite->bi_rw != sh->dev[dd_idx].towrite->bi_rw)
+	if (head->dev[dd_idx].towrite->bi_rw != sh->dev[dd_idx].towrite->bi_rw ||
+	    bio_op(head->dev[dd_idx].towrite) != bio_op(sh->dev[dd_idx].towrite))
 		goto unlock_out;

 	if (head->batch_head) {
···
 	if (r5l_write_stripe(conf->log, sh) == 0)
 		return;
 	for (i = disks; i--; ) {
-		int rw;
+		int op, op_flags = 0;
 		int replace_only = 0;
 		struct bio *bi, *rbi;
 		struct md_rdev *rdev, *rrdev = NULL;

 		sh = head_sh;
 		if (test_and_clear_bit(R5_Wantwrite, &sh->dev[i].flags)) {
+			op = REQ_OP_WRITE;
 			if (test_and_clear_bit(R5_WantFUA, &sh->dev[i].flags))
-				rw = WRITE_FUA;
-			else
-				rw = WRITE;
+				op_flags = WRITE_FUA;
 			if (test_bit(R5_Discard, &sh->dev[i].flags))
-				rw |= REQ_DISCARD;
+				op = REQ_OP_DISCARD;
 		} else if (test_and_clear_bit(R5_Wantread, &sh->dev[i].flags))
-			rw = READ;
+			op = REQ_OP_READ;
 		else if (test_and_clear_bit(R5_WantReplace,
 			    &sh->dev[i].flags)) {
-			rw = WRITE;
+			op = REQ_OP_WRITE;
 			replace_only = 1;
 		} else
 			continue;
 		if (test_and_clear_bit(R5_SyncIO, &sh->dev[i].flags))
-			rw |= REQ_SYNC;
+			op_flags |= REQ_SYNC;

again:
 		bi = &sh->dev[i].req;
···
 			rdev = rrdev;
 			rrdev = NULL;
 		}
-		if (rw & WRITE) {
+		if (op_is_write(op)) {
 			if (replace_only)
 				rdev = NULL;
 			if (rdev == rrdev)
···
 		 * need to check for writes. We never accept write errors
 		 * on the replacement, so we don't to check rrdev.
 		 */
-		while ((rw & WRITE) && rdev &&
+		while (op_is_write(op) && rdev &&
 		       test_bit(WriteErrorSeen, &rdev->flags)) {
 			sector_t first_bad;
 			int bad_sectors;
···
 			bio_reset(bi);
 			bi->bi_bdev = rdev->bdev;
-			bi->bi_rw = rw;
-			bi->bi_end_io = (rw & WRITE)
+			bio_set_op_attrs(bi, op, op_flags);
+			bi->bi_end_io = op_is_write(op)
 				? raid5_end_write_request
 				: raid5_end_read_request;
 			bi->bi_private = sh;

-			pr_debug("%s: for %llu schedule op %ld on disc %d\n",
+			pr_debug("%s: for %llu schedule op %d on disc %d\n",
 				__func__, (unsigned long long)sh->sector,
 				bi->bi_rw, i);
 			atomic_inc(&sh->count);
···
 			 * If this is discard request, set bi_vcnt 0. We don't
 			 * want to confuse SCSI because SCSI will replace payload
 			 */
-			if (rw & REQ_DISCARD)
+			if (op == REQ_OP_DISCARD)
 				bi->bi_vcnt = 0;
 			if (rrdev)
 				set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags);
···
 			bio_reset(rbi);
 			rbi->bi_bdev = rrdev->bdev;
-			rbi->bi_rw = rw;
-			BUG_ON(!(rw & WRITE));
+			bio_set_op_attrs(rbi, op, op_flags);
+			BUG_ON(!op_is_write(op));
 			rbi->bi_end_io = raid5_end_write_request;
 			rbi->bi_private = sh;

-			pr_debug("%s: for %llu schedule op %ld on "
+			pr_debug("%s: for %llu schedule op %d on "
 				 "replacement disc %d\n",
 				__func__, (unsigned long long)sh->sector,
 				rbi->bi_rw, i);
···
 			 * If this is discard request, set bi_vcnt 0. We don't
 			 * want to confuse SCSI because SCSI will replace payload
 			 */
-			if (rw & REQ_DISCARD)
+			if (op == REQ_OP_DISCARD)
 				rbi->bi_vcnt = 0;
 			if (conf->mddev->gendisk)
 				trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
···
 			generic_make_request(rbi);
 		}
 		if (!rdev && !rrdev) {
-			if (rw & WRITE)
+			if (op_is_write(op))
 				set_bit(STRIPE_DEGRADED, &sh->state);
-			pr_debug("skip op %ld on disc %d for sector %llu\n",
+			pr_debug("skip op %d on disc %d for sector %llu\n",
 				bi->bi_rw, i, (unsigned long long)sh->sector);
 			clear_bit(R5_LOCKED, &sh->dev[i].flags);
 			set_bit(STRIPE_HANDLE, &sh->state);
···
 			set_bit(R5_WantFUA, &dev->flags);
 		if (wbi->bi_rw & REQ_SYNC)
 			set_bit(R5_SyncIO, &dev->flags);
-		if (wbi->bi_rw & REQ_DISCARD)
+		if (bio_op(wbi) == REQ_OP_DISCARD)
 			set_bit(R5_Discard, &dev->flags);
 		else {
 			tx = async_copy_data(1, wbi, &dev->page,
···
 	DEFINE_WAIT(w);
 	bool do_prepare;

-	if (unlikely(bi->bi_rw & REQ_FLUSH)) {
+	if (unlikely(bi->bi_rw & REQ_PREFLUSH)) {
 		int ret = r5l_handle_flush_request(conf->log, bi);

 		if (ret == 0)
···
 		return;
 	}

-	if (unlikely(bi->bi_rw & REQ_DISCARD)) {
+	if (unlikely(bio_op(bi) == REQ_OP_DISCARD)) {
 		make_discard_request(mddev, bi);
 		return;
 	}
+5 -6
drivers/mmc/card/block.c
···
 		    !IS_ALIGNED(blk_rq_sectors(next), 8))
 			break;

-		if (next->cmd_flags & REQ_DISCARD ||
-		    next->cmd_flags & REQ_FLUSH)
+		if (req_op(next) == REQ_OP_DISCARD ||
+		    req_op(next) == REQ_OP_FLUSH)
 			break;

 		if (rq_data_dir(cur) != rq_data_dir(next))
···
 	struct mmc_card *card = md->queue.card;
 	struct mmc_host *host = card->host;
 	unsigned long flags;
-	unsigned int cmd_flags = req ? req->cmd_flags : 0;

 	if (req && !mq->mqrq_prev->req)
 		/* claim host only for the first request */
···
 	}

 	mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
-	if (cmd_flags & REQ_DISCARD) {
+	if (req && req_op(req) == REQ_OP_DISCARD) {
 		/* complete ongoing async transfer before issuing discard */
 		if (card->host->areq)
 			mmc_blk_issue_rw_rq(mq, NULL);
···
 			ret = mmc_blk_issue_secdiscard_rq(mq, req);
 		else
 			ret = mmc_blk_issue_discard_rq(mq, req);
-	} else if (cmd_flags & REQ_FLUSH) {
+	} else if (req && req_op(req) == REQ_OP_FLUSH) {
 		/* complete ongoing async transfer before issuing flush */
 		if (card->host->areq)
 			mmc_blk_issue_rw_rq(mq, NULL);
···
 out:
 	if ((!req && !(mq->flags & MMC_QUEUE_NEW_REQUEST)) ||
-	    (cmd_flags & MMC_REQ_SPECIAL_MASK))
+	    mmc_req_is_special(req))
 		/*
 		 * Release host when there are no more requests
 		 * and after special request(discard, flush) is done.
+2 -4
drivers/mmc/card/queue.c
···
 	/*
 	 * We only like normal block requests and discards.
 	 */
-	if (req->cmd_type != REQ_TYPE_FS && !(req->cmd_flags & REQ_DISCARD)) {
+	if (req->cmd_type != REQ_TYPE_FS && req_op(req) != REQ_OP_DISCARD) {
 		blk_dump_rq_flags(req, "MMC bad request");
 		return BLKPREP_KILL;
 	}
···
 	down(&mq->thread_sem);
 	do {
 		struct request *req = NULL;
-		unsigned int cmd_flags = 0;

 		spin_lock_irq(q->queue_lock);
 		set_current_state(TASK_INTERRUPTIBLE);
···
 		if (req || mq->mqrq_prev->req) {
 			set_current_state(TASK_RUNNING);
-			cmd_flags = req ? req->cmd_flags : 0;
 			mq->issue_fn(mq, req);
 			cond_resched();
 			if (mq->flags & MMC_QUEUE_NEW_REQUEST) {
···
 			 * has been finished. Do not assign it to previous
 			 * request.
 			 */
-			if (cmd_flags & MMC_REQ_SPECIAL_MASK)
+			if (mmc_req_is_special(req))
 				mq->mqrq_cur->req = NULL;

 			mq->mqrq_prev->brq.mrq.data = NULL;
+5 -1
drivers/mmc/card/queue.h
···
 #ifndef MMC_QUEUE_H
 #define MMC_QUEUE_H

-#define MMC_REQ_SPECIAL_MASK	(REQ_DISCARD | REQ_FLUSH)
+static inline bool mmc_req_is_special(struct request *req)
+{
+	return req &&
+		(req_op(req) == REQ_OP_FLUSH || req_op(req) == REQ_OP_DISCARD);
+}

 struct request;
 struct task_struct;
+2 -2
drivers/mtd/mtd_blkdevs.c
···
 	if (req->cmd_type != REQ_TYPE_FS)
 		return -EIO;

-	if (req->cmd_flags & REQ_FLUSH)
+	if (req_op(req) == REQ_OP_FLUSH)
 		return tr->flush(dev);

 	if (blk_rq_pos(req) + blk_rq_cur_sectors(req) >
 	    get_capacity(req->rq_disk))
 		return -EIO;

-	if (req->cmd_flags & REQ_DISCARD)
+	if (req_op(req) == REQ_OP_DISCARD)
 		return tr->discard(dev, block, nsect);

 	if (rq_data_dir(req) == READ) {
+1
drivers/nvdimm/pmem.c
···
 	blk_queue_max_hw_sectors(q, UINT_MAX);
 	blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
+	queue_flag_set_unlocked(QUEUE_FLAG_DAX, q);
 	q->queuedata = pmem;

 	disk = alloc_disk_node(0, nid);
+2 -2
drivers/nvme/host/core.c
···
 	if (req->cmd_type == REQ_TYPE_DRV_PRIV)
 		memcpy(cmd, req->cmd, sizeof(*cmd));
-	else if (req->cmd_flags & REQ_FLUSH)
+	else if (req_op(req) == REQ_OP_FLUSH)
 		nvme_setup_flush(ns, cmd);
-	else if (req->cmd_flags & REQ_DISCARD)
+	else if (req_op(req) == REQ_OP_DISCARD)
 		ret = nvme_setup_discard(ns, req, cmd);
 	else
 		nvme_setup_rw(ns, req, cmd);
+2 -2
drivers/nvme/host/nvme.h
···
 static inline unsigned nvme_map_len(struct request *rq)
 {
-	if (rq->cmd_flags & REQ_DISCARD)
+	if (req_op(rq) == REQ_OP_DISCARD)
 		return sizeof(struct nvme_dsm_range);
 	else
 		return blk_rq_bytes(rq);
···
 static inline void nvme_cleanup_cmd(struct request *req)
 {
-	if (req->cmd_flags & REQ_DISCARD)
+	if (req_op(req) == REQ_OP_DISCARD)
 		kfree(req->completion_data);
 }

+1
drivers/s390/block/dcssblk.c
···
 	dev_info->gd->driverfs_dev = &dev_info->dev;
 	blk_queue_make_request(dev_info->dcssblk_queue, dcssblk_make_request);
 	blk_queue_logical_block_size(dev_info->dcssblk_queue, 4096);
+	queue_flag_set_unlocked(QUEUE_FLAG_DAX, dev_info->dcssblk_queue);

 	seg_byte_size = (dev_info->end - dev_info->start + 1);
 	set_capacity(dev_info->gd, seg_byte_size >> 9); // size in sectors
+6 -6
drivers/scsi/osd/osd_initiator.c
···
 		return PTR_ERR(bio);
 	}

-	bio->bi_rw &= ~REQ_WRITE;
+	bio_set_op_attrs(bio, REQ_OP_READ, 0);
 	or->in.bio = bio;
 	or->in.total_bytes = bio->bi_iter.bi_size;
 	return 0;
···
 {
 	_osd_req_encode_common(or, OSD_ACT_WRITE, obj, offset, len);
 	WARN_ON(or->out.bio || or->out.total_bytes);
-	WARN_ON(0 == (bio->bi_rw & REQ_WRITE));
+	WARN_ON(!op_is_write(bio_op(bio)));
 	or->out.bio = bio;
 	or->out.total_bytes = len;
 }
···
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);

-	bio->bi_rw |= REQ_WRITE; /* FIXME: bio_set_dir() */
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	osd_req_write(or, obj, offset, bio, len);
 	return 0;
 }
···
 {
 	_osd_req_encode_common(or, OSD_ACT_READ, obj, offset, len);
 	WARN_ON(or->in.bio || or->in.total_bytes);
-	WARN_ON(bio->bi_rw & REQ_WRITE);
+	WARN_ON(op_is_write(bio_op(bio)));
 	or->in.bio = bio;
 	or->in.total_bytes = len;
 }
···
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);

-	bio->bi_rw |= REQ_WRITE;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);

 	/* integrity check the continuation before the bio is linked
 	 * with the other data segments since the continuation
···
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);

-	bio->bi_rw |= REQ_WRITE;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	osd_req_write_sg(or, obj, bio, sglist, numentries);

 	return 0;
+13 -7
drivers/scsi/sd.c
··· 1012 1012 } else if (rq_data_dir(rq) == READ) { 1013 1013 SCpnt->cmnd[0] = READ_6; 1014 1014 } else { 1015 - scmd_printk(KERN_ERR, SCpnt, "Unknown command %llx\n", (unsigned long long) rq->cmd_flags); 1015 + scmd_printk(KERN_ERR, SCpnt, "Unknown command %llu,%llx\n", 1016 + req_op(rq), (unsigned long long) rq->cmd_flags); 1016 1017 goto out; 1017 1018 } 1018 1019 ··· 1138 1137 { 1139 1138 struct request *rq = cmd->request; 1140 1139 1141 - if (rq->cmd_flags & REQ_DISCARD) 1140 + switch (req_op(rq)) { 1141 + case REQ_OP_DISCARD: 1142 1142 return sd_setup_discard_cmnd(cmd); 1143 - else if (rq->cmd_flags & REQ_WRITE_SAME) 1143 + case REQ_OP_WRITE_SAME: 1144 1144 return sd_setup_write_same_cmnd(cmd); 1145 - else if (rq->cmd_flags & REQ_FLUSH) 1145 + case REQ_OP_FLUSH: 1146 1146 return sd_setup_flush_cmnd(cmd); 1147 - else 1147 + case REQ_OP_READ: 1148 + case REQ_OP_WRITE: 1148 1149 return sd_setup_read_write_cmnd(cmd); 1150 + default: 1151 + BUG(); 1152 + } 1149 1153 } 1150 1154 1151 1155 static void sd_uninit_command(struct scsi_cmnd *SCpnt) 1152 1156 { 1153 1157 struct request *rq = SCpnt->request; 1154 1158 1155 - if (rq->cmd_flags & REQ_DISCARD) 1159 + if (req_op(rq) == REQ_OP_DISCARD) 1156 1160 __free_page(rq->completion_data); 1157 1161 1158 1162 if (SCpnt->cmnd != rq->cmd) { ··· 1780 1774 unsigned char op = SCpnt->cmnd[0]; 1781 1775 unsigned char unmap = SCpnt->cmnd[1] & 8; 1782 1776 1783 - if (req->cmd_flags & REQ_DISCARD || req->cmd_flags & REQ_WRITE_SAME) { 1777 + if (req_op(req) == REQ_OP_DISCARD || req_op(req) == REQ_OP_WRITE_SAME) { 1784 1778 if (!result) { 1785 1779 good_bytes = blk_rq_bytes(req); 1786 1780 scsi_set_resid(SCpnt, 0);
+21 -20
drivers/target/target_core_iblock.c
··· 312 312 } 313 313 314 314 static struct bio * 315 - iblock_get_bio(struct se_cmd *cmd, sector_t lba, u32 sg_num) 315 + iblock_get_bio(struct se_cmd *cmd, sector_t lba, u32 sg_num, int op, 316 + int op_flags) 316 317 { 317 318 struct iblock_dev *ib_dev = IBLOCK_DEV(cmd->se_dev); 318 319 struct bio *bio;
··· 335 334 bio->bi_private = cmd; 336 335 bio->bi_end_io = &iblock_bio_done; 337 336 bio->bi_iter.bi_sector = lba; 337 + bio_set_op_attrs(bio, op, op_flags); 338 338 339 339 return bio; 340 340 } 341 341 342 - static void iblock_submit_bios(struct bio_list *list, int rw) 342 + static void iblock_submit_bios(struct bio_list *list) 343 343 { 344 344 struct blk_plug plug; 345 345 struct bio *bio; 346 346 347 347 blk_start_plug(&plug); 348 348 while ((bio = bio_list_pop(list))) 349 - submit_bio(rw, bio); 349 + submit_bio(bio); 350 350 blk_finish_plug(&plug); 351 351 }
··· 389 387 bio = bio_alloc(GFP_KERNEL, 0); 390 388 bio->bi_end_io = iblock_end_io_flush; 391 389 bio->bi_bdev = ib_dev->ibd_bd; 390 + bio->bi_rw = WRITE_FLUSH; 392 391 if (!immed) 393 392 bio->bi_private = cmd; 394 - submit_bio(WRITE_FLUSH, bio); 393 + submit_bio(bio); 395 394 return 0; 396 395 }
··· 481 478 goto fail; 482 479 cmd->priv = ibr; 483 480 484 - bio = iblock_get_bio(cmd, block_lba, 1); 481 + bio = iblock_get_bio(cmd, block_lba, 1, REQ_OP_WRITE, 0); 485 482 if (!bio) 486 483 goto fail_free_ibr; 487 484
··· 494 491 while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset) 495 492 != sg->length) { 496 493 497 - bio = iblock_get_bio(cmd, block_lba, 1); 494 + bio = iblock_get_bio(cmd, block_lba, 1, REQ_OP_WRITE, 495 + 0); 498 496 if (!bio) 499 497 goto fail_put_bios; 500 498
··· 508 504 sectors -= 1; 509 505 } 510 506 511 - iblock_submit_bios(&list, WRITE); 507 + iblock_submit_bios(&list); 512 508 return 0; 513 509 514 510 fail_put_bios:
··· 681 677 struct scatterlist *sg; 682 678 u32 sg_num = sgl_nents; 683 679 unsigned bio_cnt; 684 - int rw = 0; 685 - int i; 680 + int i, op, op_flags = 0; 686 681 687 682 if (data_direction == DMA_TO_DEVICE) { 688 683 struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
··· 690 687 * Force writethrough using WRITE_FUA if a volatile write cache 691 688 * is not enabled, or if initiator set the Force Unit Access bit. 692 689 */ 690 + op = REQ_OP_WRITE; 693 691 if (test_bit(QUEUE_FLAG_FUA, &q->queue_flags)) { 694 692 if (cmd->se_cmd_flags & SCF_FUA) 695 - rw = WRITE_FUA; 693 + op_flags = WRITE_FUA; 696 694 else if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) 697 - rw = WRITE_FUA; 698 - else 699 - rw = WRITE; 700 - } else { 701 - rw = WRITE; 695 + op_flags = WRITE_FUA; 702 696 } 703 697 } else { 704 - rw = READ; 698 + op = REQ_OP_READ; 705 699 } 706 700 707 701 ibr = kzalloc(sizeof(struct iblock_req), GFP_KERNEL);
··· 712 712 return 0; 713 713 } 714 714 715 - bio = iblock_get_bio(cmd, block_lba, sgl_nents); 715 + bio = iblock_get_bio(cmd, block_lba, sgl_nents, op, op_flags); 716 716 if (!bio) 717 717 goto fail_free_ibr; 718 718
··· 732 732 while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset) 733 733 != sg->length) { 734 734 if (bio_cnt >= IBLOCK_MAX_BIO_PER_TASK) { 735 - iblock_submit_bios(&list, rw); 735 + iblock_submit_bios(&list); 736 736 bio_cnt = 0; 737 737 } 738 738 739 - bio = iblock_get_bio(cmd, block_lba, sg_num); 739 + bio = iblock_get_bio(cmd, block_lba, sg_num, op, 740 + op_flags); 740 741 if (!bio) 741 742 goto fail_put_bios; 742 743
··· 757 756 goto fail_put_bios; 758 757 } 759 758 760 - iblock_submit_bios(&list, rw); 759 + iblock_submit_bios(&list); 761 760 iblock_complete_cmd(cmd); 762 761 return 0; 763 762

+1 -1
drivers/target/target_core_pscsi.c
··· 922 922 goto fail; 923 923 924 924 if (rw) 925 - bio->bi_rw |= REQ_WRITE; 925 + bio_set_op_attrs(bio, REQ_OP_WRITE, 0); 926 926 927 927 pr_debug("PSCSI: Allocated bio: %p," 928 928 " dir: %s nr_vecs: %d\n", bio,
+3 -2
fs/block_dev.c
··· 493 493 494 494 if (size < 0) 495 495 return size; 496 - if (!ops->direct_access) 496 + if (!blk_queue_dax(bdev_get_queue(bdev)) || !ops->direct_access) 497 497 return -EOPNOTSUPP; 498 498 if ((sector + DIV_ROUND_UP(size, 512)) > 499 499 part_nr_sects_read(bdev->bd_part)) ··· 1287 1287 bdev->bd_disk = disk; 1288 1288 bdev->bd_queue = disk->queue; 1289 1289 bdev->bd_contains = bdev; 1290 - if (IS_ENABLED(CONFIG_BLK_DEV_DAX) && disk->fops->direct_access) 1290 + if (IS_ENABLED(CONFIG_BLK_DEV_DAX) && 1291 + blk_queue_dax(disk->queue)) 1291 1292 bdev->bd_inode->i_flags = S_DAX; 1292 1293 else 1293 1294 bdev->bd_inode->i_flags = 0;
+31 -30
fs/btrfs/check-integrity.c
··· 1673 1673 } 1674 1674 bio->bi_bdev = block_ctx->dev->bdev; 1675 1675 bio->bi_iter.bi_sector = dev_bytenr >> 9; 1676 + bio_set_op_attrs(bio, REQ_OP_READ, 0); 1676 1677 1677 1678 for (j = i; j < num_pages; j++) { 1678 1679 ret = bio_add_page(bio, block_ctx->pagev[j],
··· 1686 1685 "btrfsic: error, failed to add a single page!\n"); 1687 1686 return -1; 1688 1687 } 1689 - if (submit_bio_wait(READ, bio)) { 1688 + if (submit_bio_wait(bio)) { 1690 1689 printk(KERN_INFO 1691 1690 "btrfsic: read error at logical %llu dev %s!\n", 1692 1691 block_ctx->start, block_ctx->dev->name);
··· 2207 2206 block->dev_bytenr, block->mirror_num); 2208 2207 next_block = block->next_in_same_bio; 2209 2208 block->iodone_w_error = iodone_w_error; 2210 - if (block->submit_bio_bh_rw & REQ_FLUSH) { 2209 + if (block->submit_bio_bh_rw & REQ_PREFLUSH) { 2211 2210 dev_state->last_flush_gen++; 2212 2211 if ((dev_state->state->print_mask & 2213 2212 BTRFSIC_PRINT_MASK_END_IO_BIO_BH))
··· 2243 2242 block->dev_bytenr, block->mirror_num); 2244 2243 2245 2244 block->iodone_w_error = iodone_w_error; 2246 - if (block->submit_bio_bh_rw & REQ_FLUSH) { 2245 + if (block->submit_bio_bh_rw & REQ_PREFLUSH) { 2247 2246 dev_state->last_flush_gen++; 2248 2247 if ((dev_state->state->print_mask & 2249 2248 BTRFSIC_PRINT_MASK_END_IO_BIO_BH))
··· 2856 2855 return ds; 2857 2856 } 2858 2857 2859 - int btrfsic_submit_bh(int rw, struct buffer_head *bh) 2858 + int btrfsic_submit_bh(int op, int op_flags, struct buffer_head *bh) 2860 2859 { 2861 2860 struct btrfsic_dev_state *dev_state; 2862 2861 2863 2862 if (!btrfsic_is_initialized) 2864 - return submit_bh(rw, bh); 2863 + return submit_bh(op, op_flags, bh); 2865 2864 2866 2865 mutex_lock(&btrfsic_mutex); 2867 2866 /* since btrfsic_submit_bh() might also be called before
··· 2870 2869 2871 2870 /* Only called to write the superblock (incl. FLUSH/FUA) */ 2872 2871 if (NULL != dev_state && 2873 - (rw & WRITE) && bh->b_size > 0) { 2872 + (op == REQ_OP_WRITE) && bh->b_size > 0) { 2874 2873 u64 dev_bytenr; 2875 2874 2876 2875 dev_bytenr = 4096 * bh->b_blocknr; 2877 2876 if (dev_state->state->print_mask & 2878 2877 BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH) 2879 2878 printk(KERN_INFO 2880 - "submit_bh(rw=0x%x, blocknr=%llu (bytenr %llu)," 2881 - " size=%zu, data=%p, bdev=%p)\n", 2882 - rw, (unsigned long long)bh->b_blocknr, 2879 + "submit_bh(op=0x%x,0x%x, blocknr=%llu " 2880 + "(bytenr %llu), size=%zu, data=%p, bdev=%p)\n", 2881 + op, op_flags, (unsigned long long)bh->b_blocknr, 2883 2882 dev_bytenr, bh->b_size, bh->b_data, bh->b_bdev); 2884 2883 btrfsic_process_written_block(dev_state, dev_bytenr, 2885 2884 &bh->b_data, 1, NULL, 2886 2885 NULL, bh, op_flags); 2886 - } else if (NULL != dev_state && (rw & REQ_FLUSH)) { 2885 + } else if (NULL != dev_state && (op_flags & REQ_PREFLUSH)) { 2888 2887 if (dev_state->state->print_mask & 2889 2888 BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH) 2890 2889 printk(KERN_INFO 2891 - "submit_bh(rw=0x%x FLUSH, bdev=%p)\n", 2892 - rw, bh->b_bdev); 2890 + "submit_bh(op=0x%x,0x%x FLUSH, bdev=%p)\n", 2891 + op, op_flags, bh->b_bdev); 2893 2892 if (!dev_state->dummy_block_for_bio_bh_flush.is_iodone) { 2894 2893 if ((dev_state->state->print_mask & 2895 2894 (BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH |
··· 2907 2906 block->never_written = 0; 2908 2907 block->iodone_w_error = 0; 2909 2908 block->flush_gen = dev_state->last_flush_gen + 1; 2910 - block->submit_bio_bh_rw = rw; 2909 + block->submit_bio_bh_rw = op_flags; 2911 2910 block->orig_bio_bh_private = bh->b_private; 2912 2911 block->orig_bio_bh_end_io.bh = bh->b_end_io; 2913 2912 block->next_in_same_bio = NULL;
··· 2916 2915 } 2917 2916 } 2918 2917 mutex_unlock(&btrfsic_mutex); 2919 - return submit_bh(rw, bh); 2918 + return submit_bh(op, op_flags, bh); 2920 2919 } 2921 2920 2922 - static void __btrfsic_submit_bio(int rw, struct bio *bio)
2921 + static void __btrfsic_submit_bio(struct bio *bio) 2923 2922 { 2924 2923 struct btrfsic_dev_state *dev_state; 2925 2924
··· 2931 2930 * btrfsic_mount(), this might return NULL */ 2932 2931 dev_state = btrfsic_dev_state_lookup(bio->bi_bdev); 2933 2932 if (NULL != dev_state && 2934 - (rw & WRITE) && NULL != bio->bi_io_vec) { 2933 + (bio_op(bio) == REQ_OP_WRITE) && NULL != bio->bi_io_vec) { 2935 2934 unsigned int i; 2936 2935 u64 dev_bytenr; 2937 2936 u64 cur_bytenr;
··· 2943 2942 if (dev_state->state->print_mask & 2944 2943 BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH) 2945 2944 printk(KERN_INFO 2946 - "submit_bio(rw=0x%x, bi_vcnt=%u," 2945 + "submit_bio(rw=%d,0x%x, bi_vcnt=%u," 2947 2946 " bi_sector=%llu (bytenr %llu), bi_bdev=%p)\n", 2948 - rw, bio->bi_vcnt, 2947 + bio_op(bio), bio->bi_rw, bio->bi_vcnt, 2949 2948 (unsigned long long)bio->bi_iter.bi_sector, 2950 2949 dev_bytenr, bio->bi_bdev); 2951 2950
··· 2976 2975 btrfsic_process_written_block(dev_state, dev_bytenr, 2977 2976 mapped_datav, bio->bi_vcnt, 2978 2977 bio, &bio_is_patched, 2979 2978 NULL, rw); 2978 + NULL, bio->bi_rw); 2980 2979 while (i > 0) { 2981 2980 i--; 2982 2981 kunmap(bio->bi_io_vec[i].bv_page); 2983 2982 } 2984 2983 kfree(mapped_datav); 2985 - } else if (NULL != dev_state && (rw & REQ_FLUSH)) { 2984 + } else if (NULL != dev_state && (bio->bi_rw & REQ_PREFLUSH)) { 2986 2985 if (dev_state->state->print_mask & 2987 2986 BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH) 2988 2987 printk(KERN_INFO 2989 - "submit_bio(rw=0x%x FLUSH, bdev=%p)\n", 2990 - rw, bio->bi_bdev); 2988 + "submit_bio(rw=%d,0x%x FLUSH, bdev=%p)\n", 2989 + bio_op(bio), bio->bi_rw, bio->bi_bdev); 2991 2990 if (!dev_state->dummy_block_for_bio_bh_flush.is_iodone) { 2992 2991 if ((dev_state->state->print_mask & 2993 2992 (BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH |
··· 3005 3004 block->never_written = 0; 3006 3005 block->iodone_w_error = 0; 3007 3006 block->flush_gen = dev_state->last_flush_gen + 1; 3008 - block->submit_bio_bh_rw = rw; 3007 + block->submit_bio_bh_rw = bio->bi_rw; 3009 3008 block->orig_bio_bh_private = bio->bi_private; 3010 3009 block->orig_bio_bh_end_io.bio = bio->bi_end_io; 3011 3010 block->next_in_same_bio = NULL;
··· 3017 3016 mutex_unlock(&btrfsic_mutex); 3018 3017 } 3019 3018 3020 - void btrfsic_submit_bio(int rw, struct bio *bio) 3019 + void btrfsic_submit_bio(struct bio *bio) 3021 3020 { 3022 - __btrfsic_submit_bio(rw, bio); 3023 - submit_bio(rw, bio); 3021 + __btrfsic_submit_bio(bio); 3022 + submit_bio(bio); 3024 3023 } 3025 3024 3026 - int btrfsic_submit_bio_wait(int rw, struct bio *bio) 3025 + int btrfsic_submit_bio_wait(struct bio *bio) 3027 3026 { 3028 - __btrfsic_submit_bio(rw, bio); 3029 - return submit_bio_wait(rw, bio); 3027 + __btrfsic_submit_bio(bio); 3028 + return submit_bio_wait(bio); 3030 3029 } 3031 3030 3032 3031 int btrfsic_mount(struct btrfs_root *root,
+3 -3
fs/btrfs/check-integrity.h
··· 20 20 #define __BTRFS_CHECK_INTEGRITY__ 21 21 22 22 #ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY 23 - int btrfsic_submit_bh(int rw, struct buffer_head *bh); 24 - void btrfsic_submit_bio(int rw, struct bio *bio); 25 - int btrfsic_submit_bio_wait(int rw, struct bio *bio); 23 + int btrfsic_submit_bh(int op, int op_flags, struct buffer_head *bh); 24 + void btrfsic_submit_bio(struct bio *bio); 25 + int btrfsic_submit_bio_wait(struct bio *bio); 26 26 #else 27 27 #define btrfsic_submit_bh submit_bh 28 28 #define btrfsic_submit_bio submit_bio
+10 -7
fs/btrfs/compression.c
··· 363 363 kfree(cb); 364 364 return -ENOMEM; 365 365 } 366 + bio_set_op_attrs(bio, REQ_OP_WRITE, 0); 366 367 bio->bi_private = cb; 367 368 bio->bi_end_io = end_compressed_bio_write; 368 369 atomic_inc(&cb->pending_bios);
··· 374 373 page = compressed_pages[pg_index]; 375 374 page->mapping = inode->i_mapping; 376 375 if (bio->bi_iter.bi_size) 377 - ret = io_tree->ops->merge_bio_hook(WRITE, page, 0, 376 + ret = io_tree->ops->merge_bio_hook(page, 0, 378 377 PAGE_SIZE, 379 378 bio, 0); 380 379 else
··· 402 401 BUG_ON(ret); /* -ENOMEM */ 403 402 } 404 403 405 - ret = btrfs_map_bio(root, WRITE, bio, 0, 1); 404 + ret = btrfs_map_bio(root, bio, 0, 1); 406 405 BUG_ON(ret); /* -ENOMEM */ 407 406 408 407 bio_put(bio); 409 408 410 409 bio = compressed_bio_alloc(bdev, first_byte, GFP_NOFS); 411 410 BUG_ON(!bio); 411 + bio_set_op_attrs(bio, REQ_OP_WRITE, 0); 412 412 bio->bi_private = cb; 413 413 bio->bi_end_io = end_compressed_bio_write; 414 414 bio_add_page(bio, page, PAGE_SIZE, 0);
··· 433 431 BUG_ON(ret); /* -ENOMEM */ 434 432 } 435 433 436 - ret = btrfs_map_bio(root, WRITE, bio, 0, 1); 434 + ret = btrfs_map_bio(root, bio, 0, 1); 437 435 BUG_ON(ret); /* -ENOMEM */ 438 436 439 437 bio_put(bio);
··· 648 646 comp_bio = compressed_bio_alloc(bdev, cur_disk_byte, GFP_NOFS); 649 647 if (!comp_bio) 650 648 goto fail2; 649 + bio_set_op_attrs (comp_bio, REQ_OP_READ, 0); 651 650 comp_bio->bi_private = cb; 652 651 comp_bio->bi_end_io = end_compressed_bio_read; 653 652 atomic_inc(&cb->pending_bios);
··· 659 656 page->index = em_start >> PAGE_SHIFT; 660 657 661 658 if (comp_bio->bi_iter.bi_size) 662 - ret = tree->ops->merge_bio_hook(READ, page, 0, 659 + ret = tree->ops->merge_bio_hook(page, 0, 663 660 PAGE_SIZE, 664 661 comp_bio, 0); 665 662 else
··· 690 687 sums += DIV_ROUND_UP(comp_bio->bi_iter.bi_size, 691 688 root->sectorsize); 692 689 693 - ret = btrfs_map_bio(root, READ, comp_bio, 694 - mirror_num, 0); 690 + ret = btrfs_map_bio(root, comp_bio, mirror_num, 0); 695 691 if (ret) { 696 692 bio->bi_error = ret; 697 693 bio_endio(comp_bio);
··· 701 699 comp_bio = compressed_bio_alloc(bdev, cur_disk_byte, 702 700 GFP_NOFS); 703 701 BUG_ON(!comp_bio); 702 + bio_set_op_attrs(comp_bio, REQ_OP_READ, 0); 704 703 comp_bio->bi_private = cb; 705 704 comp_bio->bi_end_io = end_compressed_bio_read; 706 705
··· 720 717 BUG_ON(ret); /* -ENOMEM */ 721 718 } 722 719 723 - ret = btrfs_map_bio(root, READ, comp_bio, mirror_num, 0); 720 + ret = btrfs_map_bio(root, comp_bio, mirror_num, 0); 724 721 if (ret) { 725 722 bio->bi_error = ret; 726 723 bio_endio(comp_bio);
+1 -1
fs/btrfs/ctree.h
··· 3091 3091 struct btrfs_root *new_root, 3092 3092 struct btrfs_root *parent_root, 3093 3093 u64 new_dirid); 3094 - int btrfs_merge_bio_hook(int rw, struct page *page, unsigned long offset, 3094 + int btrfs_merge_bio_hook(struct page *page, unsigned long offset, 3095 3095 size_t size, struct bio *bio, 3096 3096 unsigned long bio_flags); 3097 3097 int btrfs_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf);
+19 -24
fs/btrfs/disk-io.c
··· 124 124 struct list_head list; 125 125 extent_submit_bio_hook_t *submit_bio_start; 126 126 extent_submit_bio_hook_t *submit_bio_done; 127 - int rw; 128 127 int mirror_num; 129 128 unsigned long bio_flags; 130 129 /*
··· 726 727 fs_info = end_io_wq->info; 727 728 end_io_wq->error = bio->bi_error; 728 729 729 - if (bio->bi_rw & REQ_WRITE) { 730 + if (bio_op(bio) == REQ_OP_WRITE) { 730 731 if (end_io_wq->metadata == BTRFS_WQ_ENDIO_METADATA) { 731 732 wq = fs_info->endio_meta_write_workers; 732 733 func = btrfs_endio_meta_write_helper;
··· 796 797 int ret; 797 798 798 799 async = container_of(work, struct async_submit_bio, work); 799 - ret = async->submit_bio_start(async->inode, async->rw, async->bio, 800 + ret = async->submit_bio_start(async->inode, async->bio, 800 801 async->mirror_num, async->bio_flags, 801 802 async->bio_offset); 802 803 if (ret)
··· 829 830 return; 830 831 } 831 832 832 - async->submit_bio_done(async->inode, async->rw, async->bio, 833 - async->mirror_num, async->bio_flags, 834 - async->bio_offset); 833 + async->submit_bio_done(async->inode, async->bio, async->mirror_num, 834 + async->bio_flags, async->bio_offset); 835 835 } 836 836 837 837 static void run_one_async_free(struct btrfs_work *work)
··· 842 844 } 843 845 844 846 int btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct inode *inode, 845 - int rw, struct bio *bio, int mirror_num, 847 + struct bio *bio, int mirror_num, 846 848 unsigned long bio_flags, 847 849 u64 bio_offset, 848 850 extent_submit_bio_hook_t *submit_bio_start,
··· 855 857 return -ENOMEM; 856 858 857 859 async->inode = inode; 858 - async->rw = rw; 859 860 async->bio = bio; 860 861 async->mirror_num = mirror_num; 861 862 async->submit_bio_start = submit_bio_start;
··· 870 873 871 874 atomic_inc(&fs_info->nr_async_submits); 872 875 873 - if (rw & REQ_SYNC) 876 + if (bio->bi_rw & REQ_SYNC) 874 877 btrfs_set_work_high_priority(&async->work); 875 878 876 879 btrfs_queue_work(fs_info->workers, &async->work);
··· 900 903 return ret; 901 904 } 902 905 903 - static int __btree_submit_bio_start(struct inode *inode, int rw, 904 - struct bio *bio, int mirror_num, 905 - unsigned long bio_flags, 906 + static int __btree_submit_bio_start(struct inode *inode, struct bio *bio, 907 + int mirror_num, unsigned long bio_flags, 906 908 u64 bio_offset) 907 909 { 908 910 /*
··· 911 915 return btree_csum_one_bio(bio); 912 916 } 913 917 914 - static int __btree_submit_bio_done(struct inode *inode, int rw, struct bio *bio, 918 + static int __btree_submit_bio_done(struct inode *inode, struct bio *bio, 915 919 int mirror_num, unsigned long bio_flags, 916 920 u64 bio_offset) 917 921 {
··· 921 925 * when we're called for a write, we're already in the async 922 926 * submission context. Just jump into btrfs_map_bio 923 927 */ 924 - ret = btrfs_map_bio(BTRFS_I(inode)->root, rw, bio, mirror_num, 1); 928 + ret = btrfs_map_bio(BTRFS_I(inode)->root, bio, mirror_num, 1); 925 929 if (ret) { 926 930 bio->bi_error = ret; 927 931 bio_endio(bio);
··· 940 944 return 1; 941 945 } 942 946 943 - static int btree_submit_bio_hook(struct inode *inode, int rw, struct bio *bio, 947 + static int btree_submit_bio_hook(struct inode *inode, struct bio *bio, 944 948 int mirror_num, unsigned long bio_flags, 945 949 u64 bio_offset) 946 950 { 947 951 int async = check_async_write(inode, bio_flags); 948 952 int ret; 949 953 950 - if (!(rw & REQ_WRITE)) { 954 + if (bio_op(bio) != REQ_OP_WRITE) { 951 955 /* 952 956 * called for a read, do the setup so that checksum validation 953 957 * can happen in the async kernel threads
··· 956 960 bio, BTRFS_WQ_ENDIO_METADATA); 957 961 if (ret) 958 962 goto out_w_error; 959 - ret = btrfs_map_bio(BTRFS_I(inode)->root, rw, bio, 960 - mirror_num, 0); 963 + ret = btrfs_map_bio(BTRFS_I(inode)->root, bio, mirror_num, 0); 961 964 } else if (!async) { 962 965 ret = btree_csum_one_bio(bio); 963 966 if (ret) 964 967 goto out_w_error; 965 - ret = btrfs_map_bio(BTRFS_I(inode)->root, rw, bio, 966 - mirror_num, 0); 968 + ret = btrfs_map_bio(BTRFS_I(inode)->root, bio, mirror_num, 0); 967 969 } else { 968 970 /* 969 971 * kthread helpers are used to submit writes so that 970 972 * checksumming can happen in parallel across all CPUs 971 973 */ 972 974 ret = btrfs_wq_submit_bio(BTRFS_I(inode)->root->fs_info, 973 - inode, rw, bio, mirror_num, 0, 975 + inode, bio, mirror_num, 0, 974 976 bio_offset, 975 977 __btree_submit_bio_start, 976 978 __btree_submit_bio_done);
··· 3412 3418 * to go down lazy. 3413 3419 */ 3414 3420 if (i == 0) 3415 - ret = btrfsic_submit_bh(WRITE_FUA, bh); 3421 + ret = btrfsic_submit_bh(REQ_OP_WRITE, WRITE_FUA, bh); 3416 3422 else 3417 - ret = btrfsic_submit_bh(WRITE_SYNC, bh); 3423 + ret = btrfsic_submit_bh(REQ_OP_WRITE, WRITE_SYNC, bh); 3418 3424 if (ret) 3419 3425 errors++; 3420 3426 }
··· 3478 3484 3479 3485 bio->bi_end_io = btrfs_end_empty_barrier; 3480 3486 bio->bi_bdev = device->bdev; 3487 + bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH); 3481 3488 init_completion(&device->flush_wait); 3482 3489 bio->bi_private = &device->flush_wait; 3483 3490 device->flush_bio = bio; 3484 3491 3485 3492 bio_get(bio); 3486 - btrfsic_submit_bio(WRITE_FLUSH, bio); 3493 + btrfsic_submit_bio(bio); 3487 3494 3488 3495 return 0; 3489 3496 }
+1 -1
fs/btrfs/disk-io.h
··· 122 122 int btrfs_bio_wq_end_io(struct btrfs_fs_info *info, struct bio *bio, 123 123 enum btrfs_wq_endio_type metadata); 124 124 int btrfs_wq_submit_bio(struct btrfs_fs_info *fs_info, struct inode *inode, 125 - int rw, struct bio *bio, int mirror_num, 125 + struct bio *bio, int mirror_num, 126 126 unsigned long bio_flags, u64 bio_offset, 127 127 extent_submit_bio_hook_t *submit_bio_start, 128 128 extent_submit_bio_hook_t *submit_bio_done);
+1 -1
fs/btrfs/extent-tree.c
··· 2048 2048 */ 2049 2049 btrfs_bio_counter_inc_blocked(root->fs_info); 2050 2050 /* Tell the block device(s) that the sectors can be discarded */ 2051 - ret = btrfs_map_block(root->fs_info, REQ_DISCARD, 2051 + ret = btrfs_map_block(root->fs_info, REQ_OP_DISCARD, 2052 2052 bytenr, &num_bytes, &bbio, 0); 2053 2053 /* Error condition is -ENOMEM */ 2054 2054 if (!ret) {
+44 -47
fs/btrfs/extent_io.c
··· 2049 2049 return -EIO; 2050 2050 } 2051 2051 bio->bi_bdev = dev->bdev; 2052 + bio->bi_rw = WRITE_SYNC; 2052 2053 bio_add_page(bio, page, length, pg_offset); 2053 2054 2054 - if (btrfsic_submit_bio_wait(WRITE_SYNC, bio)) { 2055 + if (btrfsic_submit_bio_wait(bio)) { 2055 2056 /* try to remap that extent elsewhere? */ 2056 2057 btrfs_bio_counter_dec(fs_info); 2057 2058 bio_put(bio); ··· 2387 2386 int read_mode; 2388 2387 int ret; 2389 2388 2390 - BUG_ON(failed_bio->bi_rw & REQ_WRITE); 2389 + BUG_ON(bio_op(failed_bio) == REQ_OP_WRITE); 2391 2390 2392 2391 ret = btrfs_get_io_failure_record(inode, start, end, &failrec); 2393 2392 if (ret) ··· 2413 2412 free_io_failure(inode, failrec); 2414 2413 return -EIO; 2415 2414 } 2415 + bio_set_op_attrs(bio, REQ_OP_READ, read_mode); 2416 2416 2417 2417 pr_debug("Repair Read Error: submitting new read[%#x] to this_mirror=%d, in_validation=%d\n", 2418 2418 read_mode, failrec->this_mirror, failrec->in_validation); 2419 2419 2420 - ret = tree->ops->submit_bio_hook(inode, read_mode, bio, 2421 - failrec->this_mirror, 2420 + ret = tree->ops->submit_bio_hook(inode, bio, failrec->this_mirror, 2422 2421 failrec->bio_flags, 0); 2423 2422 if (ret) { 2424 2423 free_io_failure(inode, failrec); ··· 2724 2723 } 2725 2724 2726 2725 2727 - static int __must_check submit_one_bio(int rw, struct bio *bio, 2728 - int mirror_num, unsigned long bio_flags) 2726 + static int __must_check submit_one_bio(struct bio *bio, int mirror_num, 2727 + unsigned long bio_flags) 2729 2728 { 2730 2729 int ret = 0; 2731 2730 struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1; ··· 2736 2735 start = page_offset(page) + bvec->bv_offset; 2737 2736 2738 2737 bio->bi_private = NULL; 2739 - 2740 2738 bio_get(bio); 2741 2739 2742 2740 if (tree->ops && tree->ops->submit_bio_hook) 2743 - ret = tree->ops->submit_bio_hook(page->mapping->host, rw, bio, 2741 + ret = tree->ops->submit_bio_hook(page->mapping->host, bio, 2744 2742 mirror_num, bio_flags, start); 2745 2743 else 
2746 - btrfsic_submit_bio(rw, bio); 2744 + btrfsic_submit_bio(bio); 2747 2745 2748 2746 bio_put(bio); 2749 2747 return ret; 2750 2748 } 2751 2749 2752 - static int merge_bio(int rw, struct extent_io_tree *tree, struct page *page, 2750 + static int merge_bio(struct extent_io_tree *tree, struct page *page, 2753 2751 unsigned long offset, size_t size, struct bio *bio, 2754 2752 unsigned long bio_flags) 2755 2753 { 2756 2754 int ret = 0; 2757 2755 if (tree->ops && tree->ops->merge_bio_hook) 2758 - ret = tree->ops->merge_bio_hook(rw, page, offset, size, bio, 2756 + ret = tree->ops->merge_bio_hook(page, offset, size, bio, 2759 2757 bio_flags); 2760 2758 BUG_ON(ret < 0); 2761 2759 return ret; 2762 2760 2763 2761 } 2764 2762 2765 - static int submit_extent_page(int rw, struct extent_io_tree *tree, 2763 + static int submit_extent_page(int op, int op_flags, struct extent_io_tree *tree, 2766 2764 struct writeback_control *wbc, 2767 2765 struct page *page, sector_t sector, 2768 2766 size_t size, unsigned long offset,
··· 2789 2789 2790 2790 if (prev_bio_flags != bio_flags || !contig || 2791 2791 force_bio_submit || 2792 - merge_bio(rw, tree, page, offset, page_size, bio, bio_flags) || 2792 + merge_bio(tree, page, offset, page_size, bio, bio_flags) || 2793 2793 bio_add_page(bio, page, page_size, offset) < page_size) { 2794 - ret = submit_one_bio(rw, bio, mirror_num, 2795 - prev_bio_flags); 2794 + ret = submit_one_bio(bio, mirror_num, prev_bio_flags); 2796 2795 if (ret < 0) { 2797 2796 *bio_ret = NULL; 2798 2797 return ret;
··· 2812 2813 bio_add_page(bio, page, page_size, offset); 2813 2814 bio->bi_end_io = end_io_func; 2814 2815 bio->bi_private = tree; 2816 + bio_set_op_attrs(bio, op, op_flags); 2815 2817 if (wbc) { 2816 2818 wbc_init_bio(wbc, bio); 2817 2819 wbc_account_io(wbc, page, page_size);
··· 2821 2821 if (bio_ret) 2822 2822 *bio_ret = bio; 2823 2823 else 2824 - ret = submit_one_bio(rw, bio, mirror_num, bio_flags); 2824 + ret = submit_one_bio(bio, mirror_num, bio_flags); 2825 2825 2826 2826 return ret; 2827 2827 }
··· 2885 2885 get_extent_t *get_extent, 2886 2886 struct extent_map **em_cached, 2887 2887 struct bio **bio, int mirror_num, 2888 - unsigned long *bio_flags, int rw, 2888 + unsigned long *bio_flags, int read_flags, 2889 2889 u64 *prev_em_start) 2890 2890 { 2891 2891 struct inode *inode = page->mapping->host;
··· 3068 3068 } 3069 3069 3070 3070 pnr -= page->index; 3071 - ret = submit_extent_page(rw, tree, NULL, page, 3072 - sector, disk_io_size, pg_offset, 3071 + ret = submit_extent_page(REQ_OP_READ, read_flags, tree, NULL, 3072 + page, sector, disk_io_size, pg_offset, 3073 3073 bdev, bio, pnr, 3074 3074 end_bio_extent_readpage, mirror_num, 3075 3075 *bio_flags,
··· 3100 3100 get_extent_t *get_extent, 3101 3101 struct extent_map **em_cached, 3102 3102 struct bio **bio, int mirror_num, 3103 - unsigned long *bio_flags, int rw, 3103 + unsigned long *bio_flags, 3104 3104 u64 *prev_em_start) 3105 3105 { 3106 3106 struct inode *inode;
··· 3121 3121 3122 3122 for (index = 0; index < nr_pages; index++) { 3123 3123 __do_readpage(tree, pages[index], get_extent, em_cached, bio, 3124 - mirror_num, bio_flags, rw, prev_em_start); 3124 + mirror_num, bio_flags, 0, prev_em_start); 3125 3125 put_page(pages[index]); 3126 3126 } 3127 3127 }
··· 3131 3131 int nr_pages, get_extent_t *get_extent, 3132 3132 struct extent_map **em_cached, 3133 3133 struct bio **bio, int mirror_num, 3134 - unsigned long *bio_flags, int rw, 3134 + unsigned long *bio_flags, 3135 3135 u64 *prev_em_start) 3136 3136 { 3137 3137 u64 start = 0;
··· 3153 3153 index - first_index, start, 3154 3154 end, get_extent, em_cached, 3155 3155 bio, mirror_num, bio_flags, 3156 - rw, prev_em_start); 3156 + prev_em_start); 3157 3157 start = page_start; 3158 3158 end = start + PAGE_SIZE - 1; 3159 3159 first_index = index;
··· 3164 3164 __do_contiguous_readpages(tree, &pages[first_index], 3165 3165 index - first_index, start, 3166 3166 end, get_extent, em_cached, bio, 3167 - mirror_num, bio_flags, rw, 3167 + mirror_num, bio_flags, 3168 3168 prev_em_start); 3169 3169 } 3170 3170
··· 3172 3172 struct page *page, 3173 3173 get_extent_t *get_extent, 3174 3174 struct bio **bio, int mirror_num, 3175 - unsigned long *bio_flags, int rw) 3175 + unsigned long *bio_flags, int read_flags) 3176 3176 { 3177 3177 struct inode *inode = page->mapping->host; 3178 3178 struct btrfs_ordered_extent *ordered;
··· 3192 3192 } 3193 3193 3194 3194 ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num, 3195 - bio_flags, rw, NULL); 3195 + bio_flags, read_flags, NULL); 3196 3196 return ret; 3197 3197 }
··· 3204 3204 int ret; 3205 3205 3206 3206 ret = __extent_read_full_page(tree, page, get_extent, &bio, mirror_num, 3207 - &bio_flags, READ); 3207 + &bio_flags, 0); 3208 3208 if (bio) 3209 - ret = submit_one_bio(READ, bio, mirror_num, bio_flags); 3209 + ret = submit_one_bio(bio, mirror_num, bio_flags); 3210 3210 return ret; 3211 3211 }
··· 3440 3440 page->index, cur, end); 3441 3441 } 3442 3442 3443 - ret = submit_extent_page(write_flags, tree, wbc, page, 3444 - sector, iosize, pg_offset, 3443 + ret = submit_extent_page(REQ_OP_WRITE, write_flags, tree, wbc, 3444 + page, sector, iosize, pg_offset, 3445 3445 bdev, &epd->bio, max_nr, 3446 3446 end_bio_extent_writepage, 3447 3447 0, 0, 0, false);
··· 3480 3480 size_t pg_offset = 0; 3481 3481 loff_t i_size = i_size_read(inode); 3482 3482 unsigned long end_index = i_size >> PAGE_SHIFT; 3483 - int write_flags; 3483 + int write_flags = 0; 3484 3484 unsigned long nr_written = 0; 3485 3485 3486 3486 if (wbc->sync_mode == WB_SYNC_ALL) 3487 3487 write_flags = WRITE_SYNC; 3488 - else 3489 - write_flags = WRITE; 3490 3488 3491 3489 trace___extent_writepage(page, inode, wbc); 3492 3490
··· 3728 3730 u64 offset = eb->start; 3729 3731 unsigned long i, num_pages; 3730 3732 unsigned long bio_flags = 0; 3731 - int rw = (epd->sync_io ? WRITE_SYNC : WRITE) | REQ_META; 3733 + int write_flags = (epd->sync_io ? WRITE_SYNC : 0) | REQ_META; 3732 3734 int ret = 0; 3733 3735 3734 3736 clear_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags);
··· 3742 3744 3743 3745 clear_page_dirty_for_io(p); 3744 3746 set_page_writeback(p); 3745 - ret = submit_extent_page(rw, tree, wbc, p, offset >> 9, 3746 - PAGE_SIZE, 0, bdev, &epd->bio, 3747 - -1, end_bio_extent_buffer_writepage, 3747 + ret = submit_extent_page(REQ_OP_WRITE, write_flags, tree, wbc, 3748 + p, offset >> 9, PAGE_SIZE, 0, bdev, 3749 + &epd->bio, -1, 3750 + end_bio_extent_buffer_writepage, 3748 3751 0, epd->bio_flags, bio_flags, false); 3749 3752 epd->bio_flags = bio_flags; 3750 3753 if (ret) {
··· 4055 4056 static void flush_epd_write_bio(struct extent_page_data *epd) 4056 4057 { 4057 4058 if (epd->bio) { 4058 - int rw = WRITE; 4059 4059 int ret; 4060 4060 4061 - if (epd->sync_io) 4062 - rw = WRITE_SYNC; 4061 + bio_set_op_attrs(epd->bio, REQ_OP_WRITE, 4062 + epd->sync_io ? WRITE_SYNC : 0); 4063 4063 4064 - ret = submit_one_bio(rw, epd->bio, 0, epd->bio_flags); 4064 + ret = submit_one_bio(epd->bio, 0, epd->bio_flags); 4065 4065 BUG_ON(ret < 0); /* -ENOMEM */ 4066 4066 epd->bio = NULL; 4067 4067 }
··· 4187 4189 if (nr < ARRAY_SIZE(pagepool)) 4188 4190 continue; 4189 4191 __extent_readpages(tree, pagepool, nr, get_extent, &em_cached, 4190 - &bio, 0, &bio_flags, READ, &prev_em_start); 4192 + &bio, 0, &bio_flags, &prev_em_start); 4191 4193 nr = 0; 4192 4194 } 4193 4195 if (nr) 4194 4196 __extent_readpages(tree, pagepool, nr, get_extent, &em_cached, 4195 4197 &bio, 0, &bio_flags, &prev_em_start); 4196 4198 4197 4199 if (em_cached) 4198 4200 free_extent_map(em_cached); 4199 4201 4200 4202 BUG_ON(!list_empty(pages)); 4201 4203 if (bio) 4202 - return submit_one_bio(READ, bio, 0, bio_flags); 4204 + return submit_one_bio(bio, 0, bio_flags); 4203 4205 return 0; 4204 4206 }
··· 5234 5236 err = __extent_read_full_page(tree, page, 5235 5237 get_extent, &bio, 5236 5238 mirror_num, &bio_flags, 5237 - READ | REQ_META); 5239 + REQ_META); 5238 5240 if (err) 5239 5241 ret = err; 5240 5242 } else {
··· 5243 5245 } 5244 5246 5245 5247 if (bio) { 5246 - err = submit_one_bio(READ | REQ_META, bio, mirror_num, 5247 - bio_flags); 5248 + err = submit_one_bio(bio, mirror_num, bio_flags); 5248 5249 if (err) 5249 5250 return err; 5250 5251 }
fs/btrfs/extent_io.h (+4 -4)
···
 struct btrfs_io_bio;
 struct io_failure_record;
 
-typedef int (extent_submit_bio_hook_t)(struct inode *inode, int rw,
-				       struct bio *bio, int mirror_num,
-				       unsigned long bio_flags, u64 bio_offset);
+typedef int (extent_submit_bio_hook_t)(struct inode *inode, struct bio *bio,
+				       int mirror_num, unsigned long bio_flags,
+				       u64 bio_offset);
 struct extent_io_ops {
 	int (*fill_delalloc)(struct inode *inode, struct page *locked_page,
 			     u64 start, u64 end, int *page_started,
 			     unsigned long *nr_written);
 	int (*writepage_start_hook)(struct page *page, u64 start, u64 end);
 	extent_submit_bio_hook_t *submit_bio_hook;
-	int (*merge_bio_hook)(int rw, struct page *page, unsigned long offset,
+	int (*merge_bio_hook)(struct page *page, unsigned long offset,
 			      size_t size, struct bio *bio,
 			      unsigned long bio_flags);
 	int (*readpage_io_failed_hook)(struct page *page, int failed_mirror);
fs/btrfs/inode.c (+35 -35)
···
  * extent_io.c merge_bio_hook, this must check the chunk tree to make sure
  * we don't create bios that span stripes or chunks
  */
-int btrfs_merge_bio_hook(int rw, struct page *page, unsigned long offset,
+int btrfs_merge_bio_hook(struct page *page, unsigned long offset,
 			 size_t size, struct bio *bio,
 			 unsigned long bio_flags)
 {
···
 
 	length = bio->bi_iter.bi_size;
 	map_length = length;
-	ret = btrfs_map_block(root->fs_info, rw, logical,
+	ret = btrfs_map_block(root->fs_info, bio_op(bio), logical,
 			      &map_length, NULL, 0);
 	/* Will always return 0 with map_multi == NULL */
 	BUG_ON(ret < 0);
···
  * At IO completion time the cums attached on the ordered extent record
  * are inserted into the btree
  */
-static int __btrfs_submit_bio_start(struct inode *inode, int rw,
-				    struct bio *bio, int mirror_num,
-				    unsigned long bio_flags,
+static int __btrfs_submit_bio_start(struct inode *inode, struct bio *bio,
+				    int mirror_num, unsigned long bio_flags,
 				    u64 bio_offset)
 {
 	struct btrfs_root *root = BTRFS_I(inode)->root;
···
  * At IO completion time the cums attached on the ordered extent record
  * are inserted into the btree
  */
-static int __btrfs_submit_bio_done(struct inode *inode, int rw, struct bio *bio,
+static int __btrfs_submit_bio_done(struct inode *inode, struct bio *bio,
 				   int mirror_num, unsigned long bio_flags,
 				   u64 bio_offset)
 {
 	struct btrfs_root *root = BTRFS_I(inode)->root;
 	int ret;
 
-	ret = btrfs_map_bio(root, rw, bio, mirror_num, 1);
+	ret = btrfs_map_bio(root, bio, mirror_num, 1);
 	if (ret) {
 		bio->bi_error = ret;
 		bio_endio(bio);
···
  * extent_io.c submission hook. This does the right thing for csum calculation
  * on write, or reading the csums from the tree before a read
  */
-static int btrfs_submit_bio_hook(struct inode *inode, int rw, struct bio *bio,
+static int btrfs_submit_bio_hook(struct inode *inode, struct bio *bio,
 				 int mirror_num, unsigned long bio_flags,
 				 u64 bio_offset)
 {
···
 	if (btrfs_is_free_space_inode(inode))
 		metadata = BTRFS_WQ_ENDIO_FREE_SPACE;
 
-	if (!(rw & REQ_WRITE)) {
+	if (bio_op(bio) != REQ_OP_WRITE) {
 		ret = btrfs_bio_wq_end_io(root->fs_info, bio, metadata);
 		if (ret)
 			goto out;
···
 			goto mapit;
 		/* we're doing a write, do the async checksumming */
 		ret = btrfs_wq_submit_bio(BTRFS_I(inode)->root->fs_info,
-					  inode, rw, bio, mirror_num,
+					  inode, bio, mirror_num,
 					  bio_flags, bio_offset,
 					  __btrfs_submit_bio_start,
 					  __btrfs_submit_bio_done);
···
 	}
 
mapit:
-	ret = btrfs_map_bio(root, rw, bio, mirror_num, 0);
+	ret = btrfs_map_bio(root, bio, mirror_num, 0);
 
out:
 	if (ret < 0) {
···
 }
 
 static inline int submit_dio_repair_bio(struct inode *inode, struct bio *bio,
-					int rw, int mirror_num)
+					int mirror_num)
 {
 	struct btrfs_root *root = BTRFS_I(inode)->root;
 	int ret;
 
-	BUG_ON(rw & REQ_WRITE);
+	BUG_ON(bio_op(bio) == REQ_OP_WRITE);
 
 	bio_get(bio);
 
···
 	if (ret)
 		goto err;
 
-	ret = btrfs_map_bio(root, rw, bio, mirror_num, 0);
+	ret = btrfs_map_bio(root, bio, mirror_num, 0);
err:
 	bio_put(bio);
 	return ret;
···
 	int read_mode;
 	int ret;
 
-	BUG_ON(failed_bio->bi_rw & REQ_WRITE);
+	BUG_ON(bio_op(failed_bio) == REQ_OP_WRITE);
 
 	ret = btrfs_get_io_failure_record(inode, start, end, &failrec);
 	if (ret)
···
 		free_io_failure(inode, failrec);
 		return -EIO;
 	}
+	bio_set_op_attrs(bio, REQ_OP_READ, read_mode);
 
 	btrfs_debug(BTRFS_I(inode)->root->fs_info,
 		    "Repair DIO Read Error: submitting new dio read[%#x] to this_mirror=%d, in_validation=%d\n",
 		    read_mode, failrec->this_mirror, failrec->in_validation);
 
-	ret = submit_dio_repair_bio(inode, bio, read_mode,
-				    failrec->this_mirror);
+	ret = submit_dio_repair_bio(inode, bio, failrec->this_mirror);
 	if (ret) {
 		free_io_failure(inode, failrec);
 		bio_put(bio);
···
 	bio_put(bio);
 }
 
-static int __btrfs_submit_bio_start_direct_io(struct inode *inode, int rw,
+static int __btrfs_submit_bio_start_direct_io(struct inode *inode,
 					      struct bio *bio, int mirror_num,
 					      unsigned long bio_flags, u64 offset)
 {
···
 
 	if (err)
 		btrfs_warn(BTRFS_I(dip->inode)->root->fs_info,
-			   "direct IO failed ino %llu rw %lu sector %#Lx len %u err no %d",
-			   btrfs_ino(dip->inode), bio->bi_rw,
+			   "direct IO failed ino %llu rw %d,%u sector %#Lx len %u err no %d",
+			   btrfs_ino(dip->inode), bio_op(bio), bio->bi_rw,
 			   (unsigned long long)bio->bi_iter.bi_sector,
 			   bio->bi_iter.bi_size, err);
···
 }
 
 static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode,
-					 int rw, u64 file_offset, int skip_sum,
+					 u64 file_offset, int skip_sum,
 					 int async_submit)
 {
 	struct btrfs_dio_private *dip = bio->bi_private;
-	int write = rw & REQ_WRITE;
+	bool write = bio_op(bio) == REQ_OP_WRITE;
 	struct btrfs_root *root = BTRFS_I(inode)->root;
 	int ret;
···
 
 	if (write && async_submit) {
 		ret = btrfs_wq_submit_bio(root->fs_info,
-					  inode, rw, bio, 0, 0,
-					  file_offset,
+					  inode, bio, 0, 0, file_offset,
 					  __btrfs_submit_bio_start_direct_io,
 					  __btrfs_submit_bio_done);
 		goto err;
···
 		goto err;
 	}
map:
-	ret = btrfs_map_bio(root, rw, bio, 0, async_submit);
+	ret = btrfs_map_bio(root, bio, 0, async_submit);
err:
 	bio_put(bio);
 	return ret;
 }
 
-static int btrfs_submit_direct_hook(int rw, struct btrfs_dio_private *dip,
+static int btrfs_submit_direct_hook(struct btrfs_dio_private *dip,
 				    int skip_sum)
 {
 	struct inode *inode = dip->inode;
···
 	int i;
 
 	map_length = orig_bio->bi_iter.bi_size;
-	ret = btrfs_map_block(root->fs_info, rw, start_sector << 9,
-			      &map_length, NULL, 0);
+	ret = btrfs_map_block(root->fs_info, bio_op(orig_bio),
+			      start_sector << 9, &map_length, NULL, 0);
 	if (ret)
 		return -EIO;
···
 	if (!bio)
 		return -ENOMEM;
 
+	bio_set_op_attrs(bio, bio_op(orig_bio), orig_bio->bi_rw);
 	bio->bi_private = dip;
 	bio->bi_end_io = btrfs_end_dio_bio;
 	btrfs_io_bio(bio)->logical = file_offset;
···
 			 * before we're done setting it up
 			 */
 			atomic_inc(&dip->pending_bios);
-			ret = __btrfs_submit_dio_bio(bio, inode, rw,
+			ret = __btrfs_submit_dio_bio(bio, inode,
 						     file_offset, skip_sum,
 						     async_submit);
 			if (ret) {
···
 					  start_sector, GFP_NOFS);
 			if (!bio)
 				goto out_err;
+			bio_set_op_attrs(bio, bio_op(orig_bio), orig_bio->bi_rw);
 			bio->bi_private = dip;
 			bio->bi_end_io = btrfs_end_dio_bio;
 			btrfs_io_bio(bio)->logical = file_offset;
 
 			map_length = orig_bio->bi_iter.bi_size;
-			ret = btrfs_map_block(root->fs_info, rw,
+			ret = btrfs_map_block(root->fs_info, bio_op(orig_bio),
 					      start_sector << 9,
 					      &map_length, NULL, 0);
 			if (ret) {
···
 	}
 
submit:
-	ret = __btrfs_submit_dio_bio(bio, inode, rw, file_offset, skip_sum,
+	ret = __btrfs_submit_dio_bio(bio, inode, file_offset, skip_sum,
 				     async_submit);
 	if (!ret)
 		return 0;
···
 	return 0;
 }
 
-static void btrfs_submit_direct(int rw, struct bio *dio_bio,
-				struct inode *inode, loff_t file_offset)
+static void btrfs_submit_direct(struct bio *dio_bio, struct inode *inode,
+				loff_t file_offset)
 {
 	struct btrfs_dio_private *dip = NULL;
 	struct bio *io_bio = NULL;
 	struct btrfs_io_bio *btrfs_bio;
 	int skip_sum;
-	int write = rw & REQ_WRITE;
+	bool write = (bio_op(dio_bio) == REQ_OP_WRITE);
 	int ret = 0;
 
 	skip_sum = BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM;
···
 			dio_data->unsubmitted_oe_range_end;
 	}
 
-	ret = btrfs_submit_direct_hook(rw, dip, skip_sum);
+	ret = btrfs_submit_direct_hook(dip, skip_sum);
 	if (!ret)
 		return;
fs/btrfs/raid56.c (+12 -5)
···
 
 		bio->bi_private = rbio;
 		bio->bi_end_io = raid_write_end_io;
-		submit_bio(WRITE, bio);
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
+
+		submit_bio(bio);
 	}
 	return;
···
 
 		bio->bi_private = rbio;
 		bio->bi_end_io = raid_rmw_end_io;
+		bio_set_op_attrs(bio, REQ_OP_READ, 0);
 
 		btrfs_bio_wq_end_io(rbio->fs_info, bio,
 				    BTRFS_WQ_ENDIO_RAID56);
 
-		submit_bio(READ, bio);
+		submit_bio(bio);
 	}
 	/* the actual write will happen once the reads are done */
 	return 0;
···
 
 		bio->bi_private = rbio;
 		bio->bi_end_io = raid_recover_end_io;
+		bio_set_op_attrs(bio, REQ_OP_READ, 0);
 
 		btrfs_bio_wq_end_io(rbio->fs_info, bio,
 				    BTRFS_WQ_ENDIO_RAID56);
 
-		submit_bio(READ, bio);
+		submit_bio(bio);
 	}
out:
 	return 0;
···
 
 		bio->bi_private = rbio;
 		bio->bi_end_io = raid_write_end_io;
-		submit_bio(WRITE, bio);
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
+
+		submit_bio(bio);
 	}
 	return;
···
 
 		bio->bi_private = rbio;
 		bio->bi_end_io = raid56_parity_scrub_end_io;
+		bio_set_op_attrs(bio, REQ_OP_READ, 0);
 
 		btrfs_bio_wq_end_io(rbio->fs_info, bio,
 				    BTRFS_WQ_ENDIO_RAID56);
 
-		submit_bio(READ, bio);
+		submit_bio(bio);
 	}
 	/* the actual write will happen once the reads are done */
 	return;
fs/btrfs/scrub.c (+10 -5)
···
 		sblock->no_io_error_seen = 0;
 	} else {
 		bio->bi_iter.bi_sector = page->physical >> 9;
+		bio_set_op_attrs(bio, REQ_OP_READ, 0);
 
-		if (btrfsic_submit_bio_wait(READ, bio))
+		if (btrfsic_submit_bio_wait(bio))
 			sblock->no_io_error_seen = 0;
 	}
 
···
 		return -EIO;
 	bio->bi_bdev = page_bad->dev->bdev;
 	bio->bi_iter.bi_sector = page_bad->physical >> 9;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
 	ret = bio_add_page(bio, page_good->page, PAGE_SIZE, 0);
 	if (PAGE_SIZE != ret) {
···
 		return -EIO;
 	}
 
-	if (btrfsic_submit_bio_wait(WRITE, bio)) {
+	if (btrfsic_submit_bio_wait(bio)) {
 		btrfs_dev_stat_inc_and_print(page_bad->dev,
 					     BTRFS_DEV_STAT_WRITE_ERRS);
 		btrfs_dev_replace_stats_inc(
···
 		bio->bi_end_io = scrub_wr_bio_end_io;
 		bio->bi_bdev = sbio->dev->bdev;
 		bio->bi_iter.bi_sector = sbio->physical >> 9;
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 		sbio->err = 0;
 	} else if (sbio->physical + sbio->page_count * PAGE_SIZE !=
 		   spage->physical_for_dev_replace ||
···
 	 * orders the requests before sending them to the driver which
 	 * doubled the write performance on spinning disks when measured
 	 * with Linux 3.5 */
-	btrfsic_submit_bio(WRITE, sbio->bio);
+	btrfsic_submit_bio(sbio->bio);
 }
 
 static void scrub_wr_bio_end_io(struct bio *bio)
···
 	sbio = sctx->bios[sctx->curr];
 	sctx->curr = -1;
 	scrub_pending_bio_inc(sctx);
-	btrfsic_submit_bio(READ, sbio->bio);
+	btrfsic_submit_bio(sbio->bio);
 }
 
 static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx,
···
 		bio->bi_end_io = scrub_bio_end_io;
 		bio->bi_bdev = sbio->dev->bdev;
 		bio->bi_iter.bi_sector = sbio->physical >> 9;
+		bio_set_op_attrs(bio, REQ_OP_READ, 0);
 		sbio->err = 0;
 	} else if (sbio->physical + sbio->page_count * PAGE_SIZE !=
 		   spage->physical ||
···
 	bio->bi_iter.bi_size = 0;
 	bio->bi_iter.bi_sector = physical_for_dev_replace >> 9;
 	bio->bi_bdev = dev->bdev;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_SYNC);
 	ret = bio_add_page(bio, page, PAGE_SIZE, 0);
 	if (ret != PAGE_SIZE) {
leave_with_eio:
···
 		return -EIO;
 	}
 
-	if (btrfsic_submit_bio_wait(WRITE_SYNC, bio))
+	if (btrfsic_submit_bio_wait(bio))
 		goto leave_with_eio;
 
 	bio_put(bio);
fs/btrfs/volumes.c (+48 -43)
···
 		sync_pending = 0;
 	}
 
-	btrfsic_submit_bio(cur->bi_rw, cur);
+	btrfsic_submit_bio(cur);
 	num_run++;
 	batch_run++;
 
···
 	kfree(bbio);
 }
 
-static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
+static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int op,
 			     u64 logical, u64 *length,
 			     struct btrfs_bio **bbio_ret,
 			     int mirror_num, int need_raid_map)
···
 		raid56_full_stripe_start *= full_stripe_len;
 	}
 
-	if (rw & REQ_DISCARD) {
+	if (op == REQ_OP_DISCARD) {
 		/* we don't discard raid56 yet */
 		if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) {
 			ret = -EOPNOTSUPP;
···
 	   For other RAID types and for RAID[56] reads, just allow a single
 	   stripe (on a single disk). */
 	if ((map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
-	    (rw & REQ_WRITE)) {
+	    (op == REQ_OP_WRITE)) {
 		max_len = stripe_len * nr_data_stripes(map) -
 			(offset - raid56_full_stripe_start);
 	} else {
···
 	btrfs_dev_replace_set_lock_blocking(dev_replace);
 
 	if (dev_replace_is_ongoing && mirror_num == map->num_stripes + 1 &&
-	    !(rw & (REQ_WRITE | REQ_DISCARD | REQ_GET_READ_MIRRORS)) &&
-	    dev_replace->tgtdev != NULL) {
+	    op != REQ_OP_WRITE && op != REQ_OP_DISCARD &&
+	    op != REQ_GET_READ_MIRRORS && dev_replace->tgtdev != NULL) {
 		/*
 		 * in dev-replace case, for repair case (that's the only
 		 * case where the mirror is selected explicitly when
···
 		(offset + *length);
 
 	if (map->type & BTRFS_BLOCK_GROUP_RAID0) {
-		if (rw & REQ_DISCARD)
+		if (op == REQ_OP_DISCARD)
 			num_stripes = min_t(u64, map->num_stripes,
 					    stripe_nr_end - stripe_nr_orig);
 		stripe_nr = div_u64_rem(stripe_nr, map->num_stripes,
 				&stripe_index);
-		if (!(rw & (REQ_WRITE | REQ_DISCARD | REQ_GET_READ_MIRRORS)))
+		if (op != REQ_OP_WRITE && op != REQ_OP_DISCARD &&
+		    op != REQ_GET_READ_MIRRORS)
 			mirror_num = 1;
 	} else if (map->type & BTRFS_BLOCK_GROUP_RAID1) {
-		if (rw & (REQ_WRITE | REQ_DISCARD | REQ_GET_READ_MIRRORS))
+		if (op == REQ_OP_WRITE || op == REQ_OP_DISCARD ||
+		    op == REQ_GET_READ_MIRRORS)
 			num_stripes = map->num_stripes;
 		else if (mirror_num)
 			stripe_index = mirror_num - 1;
···
 		}
 
 	} else if (map->type & BTRFS_BLOCK_GROUP_DUP) {
-		if (rw & (REQ_WRITE | REQ_DISCARD | REQ_GET_READ_MIRRORS)) {
+		if (op == REQ_OP_WRITE || op == REQ_OP_DISCARD ||
+		    op == REQ_GET_READ_MIRRORS) {
 			num_stripes = map->num_stripes;
 		} else if (mirror_num) {
 			stripe_index = mirror_num - 1;
···
 		stripe_nr = div_u64_rem(stripe_nr, factor, &stripe_index);
 		stripe_index *= map->sub_stripes;
 
-		if (rw & (REQ_WRITE | REQ_GET_READ_MIRRORS))
+		if (op == REQ_OP_WRITE || op == REQ_GET_READ_MIRRORS)
 			num_stripes = map->sub_stripes;
-		else if (rw & REQ_DISCARD)
+		else if (op == REQ_OP_DISCARD)
 			num_stripes = min_t(u64, map->sub_stripes *
 					    (stripe_nr_end - stripe_nr_orig),
 					    map->num_stripes);
···
 
 	} else if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) {
 		if (need_raid_map &&
-		    ((rw & (REQ_WRITE | REQ_GET_READ_MIRRORS)) ||
+		    (op == REQ_OP_WRITE || op == REQ_GET_READ_MIRRORS ||
 		     mirror_num > 1)) {
 			/* push stripe_nr back to the start of the full stripe */
 			stripe_nr = div_u64(raid56_full_stripe_start,
···
 			/* We distribute the parity blocks across stripes */
 			div_u64_rem(stripe_nr + stripe_index, map->num_stripes,
 					&stripe_index);
-			if (!(rw & (REQ_WRITE | REQ_DISCARD |
-				    REQ_GET_READ_MIRRORS)) && mirror_num <= 1)
+			if ((op != REQ_OP_WRITE && op != REQ_OP_DISCARD &&
+			    op != REQ_GET_READ_MIRRORS) && mirror_num <= 1)
 				mirror_num = 1;
 		}
 	} else {
···
 
 	num_alloc_stripes = num_stripes;
 	if (dev_replace_is_ongoing) {
-		if (rw & (REQ_WRITE | REQ_DISCARD))
+		if (op == REQ_OP_WRITE || op == REQ_OP_DISCARD)
 			num_alloc_stripes <<= 1;
-		if (rw & REQ_GET_READ_MIRRORS)
+		if (op == REQ_GET_READ_MIRRORS)
 			num_alloc_stripes++;
 		tgtdev_indexes = num_stripes;
 	}
···
 
 	/* build raid_map */
 	if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK &&
-	    need_raid_map && ((rw & (REQ_WRITE | REQ_GET_READ_MIRRORS)) ||
+	    need_raid_map &&
+	    ((op == REQ_OP_WRITE || op == REQ_GET_READ_MIRRORS) ||
 	    mirror_num > 1)) {
 		u64 tmp;
 		unsigned rot;
···
 				RAID6_Q_STRIPE;
 	}
 
-	if (rw & REQ_DISCARD) {
+	if (op == REQ_OP_DISCARD) {
 		u32 factor = 0;
 		u32 sub_stripes = 0;
 		u64 stripes_per_dev = 0;
···
 		}
 	}
 
-	if (rw & (REQ_WRITE | REQ_GET_READ_MIRRORS))
+	if (op == REQ_OP_WRITE || op == REQ_GET_READ_MIRRORS)
 		max_errors = btrfs_chunk_max_errors(map);
 
 	if (bbio->raid_map)
 		sort_parity_stripes(bbio, num_stripes);
 
 	tgtdev_indexes = 0;
-	if (dev_replace_is_ongoing && (rw & (REQ_WRITE | REQ_DISCARD)) &&
+	if (dev_replace_is_ongoing &&
+	    (op == REQ_OP_WRITE || op == REQ_OP_DISCARD) &&
 	    dev_replace->tgtdev != NULL) {
 		int index_where_to_add;
 		u64 srcdev_devid = dev_replace->srcdev->devid;
···
 		}
 		num_stripes = index_where_to_add;
-	} else if (dev_replace_is_ongoing && (rw & REQ_GET_READ_MIRRORS) &&
+	} else if (dev_replace_is_ongoing && (op == REQ_GET_READ_MIRRORS) &&
 		   dev_replace->tgtdev != NULL) {
 		u64 srcdev_devid = dev_replace->srcdev->devid;
 		int index_srcdev = 0;
···
 	return ret;
 }
 
-int btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
+int btrfs_map_block(struct btrfs_fs_info *fs_info, int op,
 		    u64 logical, u64 *length,
 		    struct btrfs_bio **bbio_ret, int mirror_num)
 {
-	return __btrfs_map_block(fs_info, rw, logical, length, bbio_ret,
+	return __btrfs_map_block(fs_info, op, logical, length, bbio_ret,
 				 mirror_num, 0);
 }
 
 /* For Scrub/replace */
-int btrfs_map_sblock(struct btrfs_fs_info *fs_info, int rw,
+int btrfs_map_sblock(struct btrfs_fs_info *fs_info, int op,
 		     u64 logical, u64 *length,
 		     struct btrfs_bio **bbio_ret, int mirror_num,
 		     int need_raid_map)
 {
-	return __btrfs_map_block(fs_info, rw, logical, length, bbio_ret,
+	return __btrfs_map_block(fs_info, op, logical, length, bbio_ret,
 				 mirror_num, need_raid_map);
 }
 
···
 	BUG_ON(stripe_index >= bbio->num_stripes);
 	dev = bbio->stripes[stripe_index].dev;
 	if (dev->bdev) {
-		if (bio->bi_rw & WRITE)
+		if (bio_op(bio) == REQ_OP_WRITE)
 			btrfs_dev_stat_inc(dev,
 				BTRFS_DEV_STAT_WRITE_ERRS);
 		else
···
  */
 static noinline void btrfs_schedule_bio(struct btrfs_root *root,
 					struct btrfs_device *device,
-					int rw, struct bio *bio)
+					struct bio *bio)
 {
 	int should_queue = 1;
 	struct btrfs_pending_bios *pending_bios;
···
 	}
 
 	/* don't bother with additional async steps for reads, right now */
-	if (!(rw & REQ_WRITE)) {
+	if (bio_op(bio) == REQ_OP_READ) {
 		bio_get(bio);
-		btrfsic_submit_bio(rw, bio);
+		btrfsic_submit_bio(bio);
 		bio_put(bio);
 		return;
 	}
···
 	atomic_inc(&root->fs_info->nr_async_bios);
 	WARN_ON(bio->bi_next);
 	bio->bi_next = NULL;
-	bio->bi_rw |= rw;
 
 	spin_lock(&device->io_lock);
 	if (bio->bi_rw & REQ_SYNC)
···
 
 static void submit_stripe_bio(struct btrfs_root *root, struct btrfs_bio *bbio,
 			      struct bio *bio, u64 physical, int dev_nr,
-			      int rw, int async)
+			      int async)
 {
 	struct btrfs_device *dev = bbio->stripes[dev_nr].dev;
···
 
 		rcu_read_lock();
 		name = rcu_dereference(dev->name);
-		pr_debug("btrfs_map_bio: rw %d, sector=%llu, dev=%lu "
-			 "(%s id %llu), size=%u\n", rw,
+		pr_debug("btrfs_map_bio: rw %d 0x%x, sector=%llu, dev=%lu "
+			 "(%s id %llu), size=%u\n", bio_op(bio), bio->bi_rw,
 			 (u64)bio->bi_iter.bi_sector, (u_long)dev->bdev->bd_dev,
 			 name->str, dev->devid, bio->bi_iter.bi_size);
 		rcu_read_unlock();
···
 	btrfs_bio_counter_inc_noblocked(root->fs_info);
 
 	if (async)
-		btrfs_schedule_bio(root, dev, rw, bio);
+		btrfs_schedule_bio(root, dev, bio);
 	else
-		btrfsic_submit_bio(rw, bio);
+		btrfsic_submit_bio(bio);
 }
 
 static void bbio_error(struct btrfs_bio *bbio, struct bio *bio, u64 logical)
···
 	}
 }
 
-int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
+int btrfs_map_bio(struct btrfs_root *root, struct bio *bio,
 		  int mirror_num, int async_submit)
 {
 	struct btrfs_device *dev;
···
 	map_length = length;
 
 	btrfs_bio_counter_inc_blocked(root->fs_info);
-	ret = __btrfs_map_block(root->fs_info, rw, logical, &map_length, &bbio,
-			        mirror_num, 1);
+	ret = __btrfs_map_block(root->fs_info, bio_op(bio), logical,
+				&map_length, &bbio, mirror_num, 1);
 	if (ret) {
 		btrfs_bio_counter_dec(root->fs_info);
 		return ret;
···
 	atomic_set(&bbio->stripes_pending, bbio->num_stripes);
 
 	if ((bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
-	    ((rw & WRITE) || (mirror_num > 1))) {
+	    ((bio_op(bio) == REQ_OP_WRITE) || (mirror_num > 1))) {
 		/* In this case, map_length has been set to the length of
 		   a single stripe; not the whole write */
-		if (rw & WRITE) {
+		if (bio_op(bio) == REQ_OP_WRITE) {
 			ret = raid56_parity_write(root, bio, bbio, map_length);
 		} else {
 			ret = raid56_parity_recover(root, bio, bbio, map_length,
···
 
 	for (dev_nr = 0; dev_nr < total_devs; dev_nr++) {
 		dev = bbio->stripes[dev_nr].dev;
-		if (!dev || !dev->bdev || (rw & WRITE && !dev->writeable)) {
+		if (!dev || !dev->bdev ||
+		    (bio_op(bio) == REQ_OP_WRITE && !dev->writeable)) {
 			bbio_error(bbio, first_bio, logical);
 			continue;
 		}
···
 			bio = first_bio;
 
 		submit_stripe_bio(root, bbio, bio,
-				  bbio->stripes[dev_nr].physical, dev_nr, rw,
+				  bbio->stripes[dev_nr].physical, dev_nr,
 				  async_submit);
 	}
 	btrfs_bio_counter_dec(root->fs_info);
fs/btrfs/volumes.h (+3 -3)
···
 			 u64 end, u64 *length);
 void btrfs_get_bbio(struct btrfs_bio *bbio);
 void btrfs_put_bbio(struct btrfs_bio *bbio);
-int btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
+int btrfs_map_block(struct btrfs_fs_info *fs_info, int op,
 		    u64 logical, u64 *length,
 		    struct btrfs_bio **bbio_ret, int mirror_num);
-int btrfs_map_sblock(struct btrfs_fs_info *fs_info, int rw,
+int btrfs_map_sblock(struct btrfs_fs_info *fs_info, int op,
 		     u64 logical, u64 *length,
 		     struct btrfs_bio **bbio_ret, int mirror_num,
 		     int need_raid_map);
···
 		      struct btrfs_root *extent_root, u64 type);
 void btrfs_mapping_init(struct btrfs_mapping_tree *tree);
 void btrfs_mapping_tree_free(struct btrfs_mapping_tree *tree);
-int btrfs_map_bio(struct btrfs_root *root, int rw, struct bio *bio,
+int btrfs_map_bio(struct btrfs_root *root, struct bio *bio,
 		  int mirror_num, int async_submit);
 int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
 		       fmode_t flags, void *holder);
fs/buffer.c (+36 -33)
··· 45 45 #include <trace/events/block.h> 46 46 47 47 static int fsync_buffers_list(spinlock_t *lock, struct list_head *list); 48 - static int submit_bh_wbc(int rw, struct buffer_head *bh, 48 + static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh, 49 49 unsigned long bio_flags, 50 50 struct writeback_control *wbc); 51 51 ··· 588 588 struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize); 589 589 if (bh) { 590 590 if (buffer_dirty(bh)) 591 - ll_rw_block(WRITE, 1, &bh); 591 + ll_rw_block(REQ_OP_WRITE, 0, 1, &bh); 592 592 put_bh(bh); 593 593 } 594 594 } ··· 1225 1225 } else { 1226 1226 get_bh(bh); 1227 1227 bh->b_end_io = end_buffer_read_sync; 1228 - submit_bh(READ, bh); 1228 + submit_bh(REQ_OP_READ, 0, bh); 1229 1229 wait_on_buffer(bh); 1230 1230 if (buffer_uptodate(bh)) 1231 1231 return bh; ··· 1395 1395 { 1396 1396 struct buffer_head *bh = __getblk(bdev, block, size); 1397 1397 if (likely(bh)) { 1398 - ll_rw_block(READA, 1, &bh); 1398 + ll_rw_block(REQ_OP_READ, READA, 1, &bh); 1399 1399 brelse(bh); 1400 1400 } 1401 1401 } ··· 1697 1697 struct buffer_head *bh, *head; 1698 1698 unsigned int blocksize, bbits; 1699 1699 int nr_underway = 0; 1700 - int write_op = (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE); 1700 + int write_flags = (wbc->sync_mode == WB_SYNC_ALL ? 
WRITE_SYNC : 0); 1701 1701 1702 1702 head = create_page_buffers(page, inode, 1703 1703 (1 << BH_Dirty)|(1 << BH_Uptodate)); ··· 1786 1786 do { 1787 1787 struct buffer_head *next = bh->b_this_page; 1788 1788 if (buffer_async_write(bh)) { 1789 - submit_bh_wbc(write_op, bh, 0, wbc); 1789 + submit_bh_wbc(REQ_OP_WRITE, write_flags, bh, 0, wbc); 1790 1790 nr_underway++; 1791 1791 } 1792 1792 bh = next; ··· 1840 1840 struct buffer_head *next = bh->b_this_page; 1841 1841 if (buffer_async_write(bh)) { 1842 1842 clear_buffer_dirty(bh); 1843 - submit_bh_wbc(write_op, bh, 0, wbc); 1843 + submit_bh_wbc(REQ_OP_WRITE, write_flags, bh, 0, wbc); 1844 1844 nr_underway++; 1845 1845 } 1846 1846 bh = next; ··· 1956 1956 if (!buffer_uptodate(bh) && !buffer_delay(bh) && 1957 1957 !buffer_unwritten(bh) && 1958 1958 (block_start < from || block_end > to)) { 1959 - ll_rw_block(READ, 1, &bh); 1959 + ll_rw_block(REQ_OP_READ, 0, 1, &bh); 1960 1960 *wait_bh++=bh; 1961 1961 } 1962 1962 } ··· 2249 2249 if (buffer_uptodate(bh)) 2250 2250 end_buffer_async_read(bh, 1); 2251 2251 else 2252 - submit_bh(READ, bh); 2252 + submit_bh(REQ_OP_READ, 0, bh); 2253 2253 } 2254 2254 return 0; 2255 2255 } ··· 2583 2583 if (block_start < from || block_end > to) { 2584 2584 lock_buffer(bh); 2585 2585 bh->b_end_io = end_buffer_read_nobh; 2586 - submit_bh(READ, bh); 2586 + submit_bh(REQ_OP_READ, 0, bh); 2587 2587 nr_reads++; 2588 2588 } 2589 2589 } ··· 2853 2853 2854 2854 if (!buffer_uptodate(bh) && !buffer_delay(bh) && !buffer_unwritten(bh)) { 2855 2855 err = -EIO; 2856 - ll_rw_block(READ, 1, &bh); 2856 + ll_rw_block(REQ_OP_READ, 0, 1, &bh); 2857 2857 wait_on_buffer(bh); 2858 2858 /* Uhhuh. Read error. Complain and punt. */ 2859 2859 if (!buffer_uptodate(bh)) ··· 2950 2950 * errors, this only handles the "we need to be able to 2951 2951 * do IO at the final sector" case. 
2952 2952 */ 2953 - void guard_bio_eod(int rw, struct bio *bio) 2953 + void guard_bio_eod(int op, struct bio *bio) 2954 2954 { 2955 2955 sector_t maxsector; 2956 2956 struct bio_vec *bvec = &bio->bi_io_vec[bio->bi_vcnt - 1]; ··· 2980 2980 bvec->bv_len -= truncated_bytes; 2981 2981 2982 2982 /* ..and clear the end of the buffer for reads */ 2983 - if ((rw & RW_MASK) == READ) { 2983 + if (op == REQ_OP_READ) { 2984 2984 zero_user(bvec->bv_page, bvec->bv_offset + bvec->bv_len, 2985 2985 truncated_bytes); 2986 2986 } 2987 2987 } 2988 2988 2989 - static int submit_bh_wbc(int rw, struct buffer_head *bh, 2989 + static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh, 2990 2990 unsigned long bio_flags, struct writeback_control *wbc) 2991 2991 { 2992 2992 struct bio *bio; ··· 3000 3000 /* 3001 3001 * Only clear out a write error when rewriting 3002 3002 */ 3003 - if (test_set_buffer_req(bh) && (rw & WRITE)) 3003 + if (test_set_buffer_req(bh) && (op == REQ_OP_WRITE)) 3004 3004 clear_buffer_write_io_error(bh); 3005 3005 3006 3006 /* ··· 3025 3025 bio->bi_flags |= bio_flags; 3026 3026 3027 3027 /* Take care of bh's that straddle the end of the device */ 3028 - guard_bio_eod(rw, bio); 3028 + guard_bio_eod(op, bio); 3029 3029 3030 3030 if (buffer_meta(bh)) 3031 - rw |= REQ_META; 3031 + op_flags |= REQ_META; 3032 3032 if (buffer_prio(bh)) 3033 - rw |= REQ_PRIO; 3033 + op_flags |= REQ_PRIO; 3034 + bio_set_op_attrs(bio, op, op_flags); 3034 3035 3035 - submit_bio(rw, bio); 3036 + submit_bio(bio); 3036 3037 return 0; 3037 3038 } 3038 3039 3039 - int _submit_bh(int rw, struct buffer_head *bh, unsigned long bio_flags) 3040 + int _submit_bh(int op, int op_flags, struct buffer_head *bh, 3041 + unsigned long bio_flags) 3040 3042 { 3041 - return submit_bh_wbc(rw, bh, bio_flags, NULL); 3043 + return submit_bh_wbc(op, op_flags, bh, bio_flags, NULL); 3042 3044 } 3043 3045 EXPORT_SYMBOL_GPL(_submit_bh); 3044 3046 3045 - int submit_bh(int rw, struct buffer_head *bh) 3047 + int 
submit_bh(int op, int op_flags, struct buffer_head *bh) 3046 3048 { 3047 - return submit_bh_wbc(rw, bh, 0, NULL); 3049 + return submit_bh_wbc(op, op_flags, bh, 0, NULL); 3048 3050 } 3049 3051 EXPORT_SYMBOL(submit_bh); 3050 3052 3051 3053 /** 3052 3054 * ll_rw_block: low-level access to block devices (DEPRECATED) 3053 - * @rw: whether to %READ or %WRITE or maybe %READA (readahead) 3055 + * @op: whether to %READ or %WRITE 3056 + * @op_flags: rq_flag_bits or %READA (readahead) 3054 3057 * @nr: number of &struct buffer_heads in the array 3055 3058 * @bhs: array of pointers to &struct buffer_head 3056 3059 * ··· 3076 3073 * All of the buffers must be for the same device, and must also be a 3077 3074 * multiple of the current approved size for the device. 3078 3075 */ 3079 - void ll_rw_block(int rw, int nr, struct buffer_head *bhs[]) 3076 + void ll_rw_block(int op, int op_flags, int nr, struct buffer_head *bhs[]) 3080 3077 { 3081 3078 int i; 3082 3079 ··· 3085 3082 3086 3083 if (!trylock_buffer(bh)) 3087 3084 continue; 3088 - if (rw == WRITE) { 3085 + if (op == WRITE) { 3089 3086 if (test_clear_buffer_dirty(bh)) { 3090 3087 bh->b_end_io = end_buffer_write_sync; 3091 3088 get_bh(bh); 3092 - submit_bh(WRITE, bh); 3089 + submit_bh(op, op_flags, bh); 3093 3090 continue; 3094 3091 } 3095 3092 } else { 3096 3093 if (!buffer_uptodate(bh)) { 3097 3094 bh->b_end_io = end_buffer_read_sync; 3098 3095 get_bh(bh); 3099 - submit_bh(rw, bh); 3096 + submit_bh(op, op_flags, bh); 3100 3097 continue; 3101 3098 } 3102 3099 } ··· 3105 3102 } 3106 3103 EXPORT_SYMBOL(ll_rw_block); 3107 3104 3108 - void write_dirty_buffer(struct buffer_head *bh, int rw) 3105 + void write_dirty_buffer(struct buffer_head *bh, int op_flags) 3109 3106 { 3110 3107 lock_buffer(bh); 3111 3108 if (!test_clear_buffer_dirty(bh)) { ··· 3114 3111 } 3115 3112 bh->b_end_io = end_buffer_write_sync; 3116 3113 get_bh(bh); 3117 - submit_bh(rw, bh); 3114 + submit_bh(REQ_OP_WRITE, op_flags, bh); 3118 3115 } 3119 3116 
EXPORT_SYMBOL(write_dirty_buffer); 3120 3117 ··· 3123 3120 * and then start new I/O and then wait upon it. The caller must have a ref on 3124 3121 * the buffer_head. 3125 3122 */ 3126 - int __sync_dirty_buffer(struct buffer_head *bh, int rw) 3123 + int __sync_dirty_buffer(struct buffer_head *bh, int op_flags) 3127 3124 { 3128 3125 int ret = 0; 3129 3126 ··· 3132 3129 if (test_clear_buffer_dirty(bh)) { 3133 3130 get_bh(bh); 3134 3131 bh->b_end_io = end_buffer_write_sync; 3135 - ret = submit_bh(rw, bh); 3132 + ret = submit_bh(REQ_OP_WRITE, op_flags, bh); 3136 3133 wait_on_buffer(bh); 3137 3134 if (!ret && !buffer_uptodate(bh)) 3138 3135 ret = -EIO; ··· 3395 3392 3396 3393 get_bh(bh); 3397 3394 bh->b_end_io = end_buffer_read_sync; 3398 - submit_bh(READ, bh); 3395 + submit_bh(REQ_OP_READ, 0, bh); 3399 3396 wait_on_buffer(bh); 3400 3397 if (buffer_uptodate(bh)) 3401 3398 return 0;
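The fs/buffer.c changes above are the heart of the series: the old catch-all `rw` argument, which mixed the direction with request flags, becomes an explicit operation (`REQ_OP_READ`/`REQ_OP_WRITE`) plus a separate `op_flags` word, so direction tests turn into plain equality instead of bit masking. A minimal userspace sketch of the idea (in this series the kernel stores the op in the top bits of `bio->bi_rw`; the bit widths and flag values below are illustrative, not the kernel's exact encoding):

```c
/* Userspace sketch of the op/flags split; values are illustrative,
 * not the kernel's exact encoding. */
#include <assert.h>

enum req_op { REQ_OP_READ = 0, REQ_OP_WRITE = 1 };

#define REQ_META (1u << 0)   /* hypothetical flag bits */
#define REQ_PRIO (1u << 1)

struct bio { unsigned int bi_rw; };

#define BIO_OP_BITS  3
#define BIO_OP_SHIFT (8 * (int)sizeof(unsigned int) - BIO_OP_BITS)

/* cf. bio_set_op_attrs(): op in the top bits, flags below */
static inline void bio_set_op_attrs(struct bio *bio, enum req_op op,
                                    unsigned int op_flags)
{
	bio->bi_rw = ((unsigned int)op << BIO_OP_SHIFT) | op_flags;
}

/* cf. bio_op(): recover the operation alone, flags stripped */
static inline enum req_op bio_op(const struct bio *bio)
{
	return (enum req_op)(bio->bi_rw >> BIO_OP_SHIFT);
}
```

With the op carried separately, checks such as the one in `guard_bio_eod()` can compare `op == REQ_OP_READ` directly rather than masking with `RW_MASK`.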
+2 -1
fs/crypto/crypto.c
··· 318 318 bio->bi_bdev = inode->i_sb->s_bdev; 319 319 bio->bi_iter.bi_sector = 320 320 pblk << (inode->i_sb->s_blocksize_bits - 9); 321 + bio_set_op_attrs(bio, REQ_OP_WRITE, 0); 321 322 ret = bio_add_page(bio, ciphertext_page, 322 323 inode->i_sb->s_blocksize, 0); 323 324 if (ret != inode->i_sb->s_blocksize) { ··· 328 327 err = -EIO; 329 328 goto errout; 330 329 } 331 - err = submit_bio_wait(WRITE, bio); 330 + err = submit_bio_wait(bio); 332 331 if ((err == 0) && bio->bi_error) 333 332 err = -EIO; 334 333 bio_put(bio);
+21 -14
fs/direct-io.c
··· 108 108 /* dio_state communicated between submission path and end_io */ 109 109 struct dio { 110 110 int flags; /* doesn't change */ 111 - int rw; 111 + int op; 112 + int op_flags; 112 113 blk_qc_t bio_cookie; 113 114 struct block_device *bio_bdev; 114 115 struct inode *inode; ··· 164 163 ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES, 165 164 &sdio->from); 166 165 167 - if (ret < 0 && sdio->blocks_available && (dio->rw & WRITE)) { 166 + if (ret < 0 && sdio->blocks_available && (dio->op == REQ_OP_WRITE)) { 168 167 struct page *page = ZERO_PAGE(0); 169 168 /* 170 169 * A memory fault, but the filesystem has some outstanding ··· 243 242 transferred = dio->result; 244 243 245 244 /* Check for short read case */ 246 - if ((dio->rw == READ) && ((offset + transferred) > dio->i_size)) 245 + if ((dio->op == REQ_OP_READ) && 246 + ((offset + transferred) > dio->i_size)) 247 247 transferred = dio->i_size - offset; 248 248 } 249 249 ··· 275 273 */ 276 274 dio->iocb->ki_pos += transferred; 277 275 278 - if (dio->rw & WRITE) 276 + if (dio->op == REQ_OP_WRITE) 279 277 ret = generic_write_sync(dio->iocb, transferred); 280 278 dio->iocb->ki_complete(dio->iocb, ret, 0); 281 279 } ··· 377 375 378 376 bio->bi_bdev = bdev; 379 377 bio->bi_iter.bi_sector = first_sector; 378 + bio_set_op_attrs(bio, dio->op, dio->op_flags); 380 379 if (dio->is_async) 381 380 bio->bi_end_io = dio_bio_end_aio; 382 381 else ··· 405 402 dio->refcount++; 406 403 spin_unlock_irqrestore(&dio->bio_lock, flags); 407 404 408 - if (dio->is_async && dio->rw == READ && dio->should_dirty) 405 + if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty) 409 406 bio_set_pages_dirty(bio); 410 407 411 408 dio->bio_bdev = bio->bi_bdev; 412 409 413 410 if (sdio->submit_io) { 414 - sdio->submit_io(dio->rw, bio, dio->inode, 415 - sdio->logical_offset_in_bio); 411 + sdio->submit_io(bio, dio->inode, sdio->logical_offset_in_bio); 416 412 dio->bio_cookie = BLK_QC_T_NONE; 417 413 } else 418 - 
dio->bio_cookie = submit_bio(dio->rw, bio); 414 + dio->bio_cookie = submit_bio(bio); 419 415 420 416 sdio->bio = NULL; 421 417 sdio->boundary = 0; ··· 480 478 if (bio->bi_error) 481 479 dio->io_error = -EIO; 482 480 483 - if (dio->is_async && dio->rw == READ && dio->should_dirty) { 481 + if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty) { 484 482 err = bio->bi_error; 485 483 bio_check_pages_dirty(bio); /* transfers ownership */ 486 484 } else { 487 485 bio_for_each_segment_all(bvec, bio, i) { 488 486 struct page *page = bvec->bv_page; 489 487 490 - if (dio->rw == READ && !PageCompound(page) && 488 + if (dio->op == REQ_OP_READ && !PageCompound(page) && 491 489 dio->should_dirty) 492 490 set_page_dirty_lock(page); 493 491 put_page(page); ··· 640 638 * which may decide to handle it or also return an unmapped 641 639 * buffer head. 642 640 */ 643 - create = dio->rw & WRITE; 641 + create = dio->op == REQ_OP_WRITE; 644 642 if (dio->flags & DIO_SKIP_HOLES) { 645 643 if (fs_startblk <= ((i_size_read(dio->inode) - 1) >> 646 644 i_blkbits)) ··· 790 788 { 791 789 int ret = 0; 792 790 793 - if (dio->rw & WRITE) { 791 + if (dio->op == REQ_OP_WRITE) { 794 792 /* 795 793 * Read accounting is performed in submit_bio() 796 794 */ ··· 990 988 loff_t i_size_aligned; 991 989 992 990 /* AKPM: eargh, -ENOTBLK is a hack */ 993 - if (dio->rw & WRITE) { 991 + if (dio->op == REQ_OP_WRITE) { 994 992 put_page(page); 995 993 return -ENOTBLK; 996 994 } ··· 1204 1202 dio->is_async = true; 1205 1203 1206 1204 dio->inode = inode; 1207 - dio->rw = iov_iter_rw(iter) == WRITE ? WRITE_ODIRECT : READ; 1205 + if (iov_iter_rw(iter) == WRITE) { 1206 + dio->op = REQ_OP_WRITE; 1207 + dio->op_flags = WRITE_ODIRECT; 1208 + } else { 1209 + dio->op = REQ_OP_READ; 1210 + } 1208 1211 1209 1212 /* 1210 1213 * For AIO O_(D)SYNC writes we need to defer completions to a workqueue
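In the direct-io conversion, `struct dio` grows separate `op` and `op_flags` fields, and the old one-liner `dio->rw = iov_iter_rw(iter) == WRITE ? WRITE_ODIRECT : READ` becomes an explicit two-field assignment. A simplified model of that setup (the `WRITE_ODIRECT` value and the `iter_dir` enum standing in for `iov_iter_rw()` are assumptions for illustration):

```c
/* Sketch of the direct-io direction setup after the split.
 * WRITE_ODIRECT's value and iter_dir are illustrative stand-ins. */
#include <assert.h>

enum req_op { REQ_OP_READ = 0, REQ_OP_WRITE = 1 };
#define WRITE_ODIRECT (1u << 4)          /* hypothetical flag value */

enum iter_dir { ITER_READ, ITER_WRITE }; /* models iov_iter_rw() */

struct dio { enum req_op op; unsigned int op_flags; };

/* cf. do_blockdev_direct_IO(): the direction decides both fields */
static void dio_set_op(struct dio *dio, enum iter_dir dir)
{
	if (dir == ITER_WRITE) {
		dio->op = REQ_OP_WRITE;
		dio->op_flags = WRITE_ODIRECT;
	} else {
		dio->op = REQ_OP_READ;
		dio->op_flags = 0;
	}
}
```

Every later `dio->rw & WRITE` test in the file then becomes the unambiguous `dio->op == REQ_OP_WRITE`.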
+1 -1
fs/exofs/ore.c
··· 878 878 } else { 879 879 bio = master_dev->bio; 880 880 /* FIXME: bio_set_dir() */ 881 - bio->bi_rw |= REQ_WRITE; 881 + bio_set_op_attrs(bio, REQ_OP_WRITE, 0); 882 882 } 883 883 884 884 osd_req_write(or, _ios_obj(ios, cur_comp),
+1 -1
fs/ext4/balloc.c
··· 470 470 trace_ext4_read_block_bitmap_load(sb, block_group); 471 471 bh->b_end_io = ext4_end_bitmap_read; 472 472 get_bh(bh); 473 - submit_bh(READ | REQ_META | REQ_PRIO, bh); 473 + submit_bh(REQ_OP_READ, REQ_META | REQ_PRIO, bh); 474 474 return bh; 475 475 verify: 476 476 err = ext4_validate_block_bitmap(sb, desc, block_group, bh);
+2 -1
fs/ext4/crypto.c
··· 428 428 bio->bi_bdev = inode->i_sb->s_bdev; 429 429 bio->bi_iter.bi_sector = 430 430 pblk << (inode->i_sb->s_blocksize_bits - 9); 431 + bio_set_op_attrs(bio, REQ_OP_WRITE, 0); 431 432 ret = bio_add_page(bio, ciphertext_page, 432 433 inode->i_sb->s_blocksize, 0); 433 434 if (ret != inode->i_sb->s_blocksize) { ··· 440 439 err = -EIO; 441 440 goto errout; 442 441 } 443 - err = submit_bio_wait(WRITE, bio); 442 + err = submit_bio_wait(bio); 444 443 if ((err == 0) && bio->bi_error) 445 444 err = -EIO; 446 445 bio_put(bio);
+1 -1
fs/ext4/ialloc.c
··· 214 214 trace_ext4_load_inode_bitmap(sb, block_group); 215 215 bh->b_end_io = ext4_end_bitmap_read; 216 216 get_bh(bh); 217 - submit_bh(READ | REQ_META | REQ_PRIO, bh); 217 + submit_bh(REQ_OP_READ, REQ_META | REQ_PRIO, bh); 218 218 wait_on_buffer(bh); 219 219 if (!buffer_uptodate(bh)) { 220 220 put_bh(bh);
+4 -4
fs/ext4/inode.c
··· 981 981 return bh; 982 982 if (!bh || buffer_uptodate(bh)) 983 983 return bh; 984 - ll_rw_block(READ | REQ_META | REQ_PRIO, 1, &bh); 984 + ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &bh); 985 985 wait_on_buffer(bh); 986 986 if (buffer_uptodate(bh)) 987 987 return bh; ··· 1135 1135 if (!buffer_uptodate(bh) && !buffer_delay(bh) && 1136 1136 !buffer_unwritten(bh) && 1137 1137 (block_start < from || block_end > to)) { 1138 - ll_rw_block(READ, 1, &bh); 1138 + ll_rw_block(REQ_OP_READ, 0, 1, &bh); 1139 1139 *wait_bh++ = bh; 1140 1140 decrypt = ext4_encrypted_inode(inode) && 1141 1141 S_ISREG(inode->i_mode); ··· 3698 3698 3699 3699 if (!buffer_uptodate(bh)) { 3700 3700 err = -EIO; 3701 - ll_rw_block(READ, 1, &bh); 3701 + ll_rw_block(REQ_OP_READ, 0, 1, &bh); 3702 3702 wait_on_buffer(bh); 3703 3703 /* Uhhuh. Read error. Complain and punt. */ 3704 3704 if (!buffer_uptodate(bh)) ··· 4281 4281 trace_ext4_load_inode(inode); 4282 4282 get_bh(bh); 4283 4283 bh->b_end_io = end_buffer_read_sync; 4284 - submit_bh(READ | REQ_META | REQ_PRIO, bh); 4284 + submit_bh(REQ_OP_READ, REQ_META | REQ_PRIO, bh); 4285 4285 wait_on_buffer(bh); 4286 4286 if (!buffer_uptodate(bh)) { 4287 4287 EXT4_ERROR_INODE_BLOCK(inode, block,
+2 -2
fs/ext4/mmp.c
··· 52 52 lock_buffer(bh); 53 53 bh->b_end_io = end_buffer_write_sync; 54 54 get_bh(bh); 55 - submit_bh(WRITE_SYNC | REQ_META | REQ_PRIO, bh); 55 + submit_bh(REQ_OP_WRITE, WRITE_SYNC | REQ_META | REQ_PRIO, bh); 56 56 wait_on_buffer(bh); 57 57 sb_end_write(sb); 58 58 if (unlikely(!buffer_uptodate(bh))) ··· 88 88 get_bh(*bh); 89 89 lock_buffer(*bh); 90 90 (*bh)->b_end_io = end_buffer_read_sync; 91 - submit_bh(READ_SYNC | REQ_META | REQ_PRIO, *bh); 91 + submit_bh(REQ_OP_READ, READ_SYNC | REQ_META | REQ_PRIO, *bh); 92 92 wait_on_buffer(*bh); 93 93 if (!buffer_uptodate(*bh)) { 94 94 ret = -EIO;
+2 -1
fs/ext4/namei.c
··· 1443 1443 } 1444 1444 bh_use[ra_max] = bh; 1445 1445 if (bh) 1446 - ll_rw_block(READ | REQ_META | REQ_PRIO, 1446 + ll_rw_block(REQ_OP_READ, 1447 + REQ_META | REQ_PRIO, 1447 1448 1, &bh); 1448 1449 } 1449 1450 }
+4 -3
fs/ext4/page-io.c
··· 340 340 struct bio *bio = io->io_bio; 341 341 342 342 if (bio) { 343 - int io_op = io->io_wbc->sync_mode == WB_SYNC_ALL ? 344 - WRITE_SYNC : WRITE; 345 - submit_bio(io_op, io->io_bio); 343 + int io_op_flags = io->io_wbc->sync_mode == WB_SYNC_ALL ? 344 + WRITE_SYNC : 0; 345 + bio_set_op_attrs(io->io_bio, REQ_OP_WRITE, io_op_flags); 346 + submit_bio(io->io_bio); 346 347 } 347 348 io->io_bio = NULL; 348 349 }
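The ext4_io_submit change shows the same pattern in miniature: the operation is always `REQ_OP_WRITE`, and only the `WRITE_SYNC` flag remains conditional on the writeback mode, so the old ternary `WRITE_SYNC : WRITE` becomes `WRITE_SYNC : 0`. A tiny model of that flag selection (constants are illustrative):

```c
/* Sketch of the ext4_io_submit() flag selection: the op is fixed at
 * REQ_OP_WRITE, only the sync flag varies. Values are illustrative. */
#include <assert.h>

enum wb_sync { WB_SYNC_NONE, WB_SYNC_ALL };
#define WRITE_SYNC (1u << 3)  /* hypothetical flag value */

/* cf. the io_op_flags computation before bio_set_op_attrs() */
static unsigned int ext4_io_op_flags(enum wb_sync mode)
{
	return mode == WB_SYNC_ALL ? WRITE_SYNC : 0;
}
```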
+5 -4
fs/ext4/readpage.c
··· 271 271 */ 272 272 if (bio && (last_block_in_bio != blocks[0] - 1)) { 273 273 submit_and_realloc: 274 - submit_bio(READ, bio); 274 + submit_bio(bio); 275 275 bio = NULL; 276 276 } 277 277 if (bio == NULL) { ··· 294 294 bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9); 295 295 bio->bi_end_io = mpage_end_io; 296 296 bio->bi_private = ctx; 297 + bio_set_op_attrs(bio, REQ_OP_READ, 0); 297 298 } 298 299 299 300 length = first_hole << blkbits; ··· 304 303 if (((map.m_flags & EXT4_MAP_BOUNDARY) && 305 304 (relative_block == map.m_len)) || 306 305 (first_hole != blocks_per_page)) { 307 - submit_bio(READ, bio); 306 + submit_bio(bio); 308 307 bio = NULL; 309 308 } else 310 309 last_block_in_bio = blocks[blocks_per_page - 1]; 311 310 goto next_page; 312 311 confused: 313 312 if (bio) { 314 - submit_bio(READ, bio); 313 + submit_bio(bio); 315 314 bio = NULL; 316 315 } 317 316 if (!PageUptodate(page)) ··· 324 323 } 325 324 BUG_ON(pages && !list_empty(pages)); 326 325 if (bio) 327 - submit_bio(READ, bio); 326 + submit_bio(bio); 328 327 return 0; 329 328 }
+1 -1
fs/ext4/super.c
··· 4204 4204 goto out_bdev; 4205 4205 } 4206 4206 journal->j_private = sb; 4207 - ll_rw_block(READ | REQ_META | REQ_PRIO, 1, &journal->j_sb_buffer); 4207 + ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &journal->j_sb_buffer); 4208 4208 wait_on_buffer(journal->j_sb_buffer); 4209 4209 if (!buffer_uptodate(journal->j_sb_buffer)) { 4210 4210 ext4_msg(sb, KERN_ERR, "I/O error on journal device");
+6 -4
fs/f2fs/checkpoint.c
··· 63 63 struct f2fs_io_info fio = { 64 64 .sbi = sbi, 65 65 .type = META, 66 - .rw = READ_SYNC | REQ_META | REQ_PRIO, 66 + .op = REQ_OP_READ, 67 + .op_flags = READ_SYNC | REQ_META | REQ_PRIO, 67 68 .old_blkaddr = index, 68 69 .new_blkaddr = index, 69 70 .encrypted_page = NULL, 70 71 }; 71 72 72 73 if (unlikely(!is_meta)) 73 - fio.rw &= ~REQ_META; 74 + fio.op_flags &= ~REQ_META; 74 75 repeat: 75 76 page = f2fs_grab_cache_page(mapping, index, false); 76 77 if (!page) { ··· 158 157 struct f2fs_io_info fio = { 159 158 .sbi = sbi, 160 159 .type = META, 161 - .rw = sync ? (READ_SYNC | REQ_META | REQ_PRIO) : READA, 160 + .op = REQ_OP_READ, 161 + .op_flags = sync ? (READ_SYNC | REQ_META | REQ_PRIO) : READA, 162 162 .encrypted_page = NULL, 163 163 }; 164 164 struct blk_plug plug; 165 165 166 166 if (unlikely(type == META_POR)) 167 - fio.rw &= ~REQ_META; 167 + fio.op_flags &= ~REQ_META; 168 168 169 169 blk_start_plug(&plug); 170 170 for (; nrpages-- > 0; blkno++) {
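f2fs threads the split through its own `struct f2fs_io_info`: the single `rw` field becomes an `op`/`op_flags` pair, so demoting a non-meta I/O (`fio.rw &= ~REQ_META` before) now touches only the flags word and can never corrupt the operation. A model of the reworked struct (field types and flag values are illustrative):

```c
/* Model of the reworked f2fs_io_info: op and its flags are separate
 * fields. Flag values are illustrative, not the kernel's. */
#include <assert.h>

enum req_op { REQ_OP_READ = 0, REQ_OP_WRITE = 1 };
#define REQ_META  (1u << 0)  /* hypothetical flag bits */
#define REQ_PRIO  (1u << 1)
#define READ_SYNC (1u << 2)

struct f2fs_io_info {
	enum req_op op;        /* contains REQ_OP_ */
	unsigned int op_flags; /* rq_flag_bits */
};

/* cf. get_meta_page(): drop REQ_META without touching the op */
static void fio_clear_meta(struct f2fs_io_info *fio)
{
	fio->op_flags &= ~REQ_META;
}
```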
+28 -19
fs/f2fs/data.c
··· 97 97 return bio; 98 98 } 99 99 100 - static inline void __submit_bio(struct f2fs_sb_info *sbi, int rw, 101 - struct bio *bio) 100 + static inline void __submit_bio(struct f2fs_sb_info *sbi, struct bio *bio) 102 101 { 103 - if (!is_read_io(rw)) 102 + if (!is_read_io(bio_op(bio))) 104 103 atomic_inc(&sbi->nr_wb_bios); 105 - submit_bio(rw, bio); 104 + submit_bio(bio); 106 105 } 107 106 108 107 static void __submit_merged_bio(struct f2fs_bio_info *io) ··· 111 112 if (!io->bio) 112 113 return; 113 114 114 - if (is_read_io(fio->rw)) 115 + if (is_read_io(fio->op)) 115 116 trace_f2fs_submit_read_bio(io->sbi->sb, fio, io->bio); 116 117 else 117 118 trace_f2fs_submit_write_bio(io->sbi->sb, fio, io->bio); 118 119 119 - __submit_bio(io->sbi, fio->rw, io->bio); 120 + bio_set_op_attrs(io->bio, fio->op, fio->op_flags); 121 + 122 + __submit_bio(io->sbi, io->bio); 120 123 io->bio = NULL; 121 124 } 122 125 ··· 184 183 /* change META to META_FLUSH in the checkpoint procedure */ 185 184 if (type >= META_FLUSH) { 186 185 io->fio.type = META_FLUSH; 186 + io->fio.op = REQ_OP_WRITE; 187 187 if (test_opt(sbi, NOBARRIER)) 188 - io->fio.rw = WRITE_FLUSH | REQ_META | REQ_PRIO; 188 + io->fio.op_flags = WRITE_FLUSH | REQ_META | REQ_PRIO; 189 189 else 190 - io->fio.rw = WRITE_FLUSH_FUA | REQ_META | REQ_PRIO; 190 + io->fio.op_flags = WRITE_FLUSH_FUA | REQ_META | 191 + REQ_PRIO; 191 192 } 192 193 __submit_merged_bio(io); 193 194 out: ··· 231 228 f2fs_trace_ios(fio, 0); 232 229 233 230 /* Allocate a new bio */ 234 - bio = __bio_alloc(fio->sbi, fio->new_blkaddr, 1, is_read_io(fio->rw)); 231 + bio = __bio_alloc(fio->sbi, fio->new_blkaddr, 1, is_read_io(fio->op)); 235 232 236 233 if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) { 237 234 bio_put(bio); 238 235 return -EFAULT; 239 236 } 237 + bio->bi_rw = fio->op_flags; 238 + bio_set_op_attrs(bio, fio->op, fio->op_flags); 240 239 241 - __submit_bio(fio->sbi, fio->rw, bio); 240 + __submit_bio(fio->sbi, bio); 242 241 return 0; 243 242 } 244 243 
··· 249 244 struct f2fs_sb_info *sbi = fio->sbi; 250 245 enum page_type btype = PAGE_TYPE_OF_BIO(fio->type); 251 246 struct f2fs_bio_info *io; 252 - bool is_read = is_read_io(fio->rw); 247 + bool is_read = is_read_io(fio->op); 253 248 struct page *bio_page; 254 249 255 250 io = is_read ? &sbi->read_io : &sbi->write_io[btype]; ··· 261 256 down_write(&io->io_rwsem); 262 257 263 258 if (io->bio && (io->last_block_in_bio != fio->new_blkaddr - 1 || 264 - io->fio.rw != fio->rw)) 259 + (io->fio.op != fio->op || io->fio.op_flags != fio->op_flags))) 265 260 __submit_merged_bio(io); 266 261 alloc_new: 267 262 if (io->bio == NULL) { ··· 395 390 } 396 391 397 392 struct page *get_read_data_page(struct inode *inode, pgoff_t index, 398 - int rw, bool for_write) 393 + int op_flags, bool for_write) 399 394 { 400 395 struct address_space *mapping = inode->i_mapping; 401 396 struct dnode_of_data dn; ··· 405 400 struct f2fs_io_info fio = { 406 401 .sbi = F2FS_I_SB(inode), 407 402 .type = DATA, 408 - .rw = rw, 403 + .op = REQ_OP_READ, 404 + .op_flags = op_flags, 409 405 .encrypted_page = NULL, 410 406 }; 411 407 ··· 1057 1051 */ 1058 1052 if (bio && (last_block_in_bio != block_nr - 1)) { 1059 1053 submit_and_realloc: 1060 - __submit_bio(F2FS_I_SB(inode), READ, bio); 1054 + __submit_bio(F2FS_I_SB(inode), bio); 1061 1055 bio = NULL; 1062 1056 } 1063 1057 if (bio == NULL) { ··· 1086 1080 bio->bi_iter.bi_sector = SECTOR_FROM_BLOCK(block_nr); 1087 1081 bio->bi_end_io = f2fs_read_end_io; 1088 1082 bio->bi_private = ctx; 1083 + bio_set_op_attrs(bio, REQ_OP_READ, 0); 1089 1084 } 1090 1085 1091 1086 if (bio_add_page(bio, page, blocksize, 0) < blocksize) ··· 1101 1094 goto next_page; 1102 1095 confused: 1103 1096 if (bio) { 1104 - __submit_bio(F2FS_I_SB(inode), READ, bio); 1097 + __submit_bio(F2FS_I_SB(inode), bio); 1105 1098 bio = NULL; 1106 1099 } 1107 1100 unlock_page(page); ··· 1111 1104 } 1112 1105 BUG_ON(pages && !list_empty(pages)); 1113 1106 if (bio) 1114 - 
__submit_bio(F2FS_I_SB(inode), READ, bio); 1107 + __submit_bio(F2FS_I_SB(inode), bio); 1115 1108 return 0; 1116 1109 } 1117 1110 ··· 1228 1221 struct f2fs_io_info fio = { 1229 1222 .sbi = sbi, 1230 1223 .type = DATA, 1231 - .rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE, 1224 + .op = REQ_OP_WRITE, 1225 + .op_flags = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : 0, 1232 1226 .page = page, 1233 1227 .encrypted_page = NULL, 1234 1228 }; ··· 1670 1662 struct f2fs_io_info fio = { 1671 1663 .sbi = sbi, 1672 1664 .type = DATA, 1673 - .rw = READ_SYNC, 1665 + .op = REQ_OP_READ, 1666 + .op_flags = READ_SYNC, 1674 1667 .old_blkaddr = blkaddr, 1675 1668 .new_blkaddr = blkaddr, 1676 1669 .page = page,
+3 -2
fs/f2fs/f2fs.h
··· 686 686 struct f2fs_io_info { 687 687 struct f2fs_sb_info *sbi; /* f2fs_sb_info pointer */ 688 688 enum page_type type; /* contains DATA/NODE/META/META_FLUSH */ 689 - int rw; /* contains R/RS/W/WS with REQ_META/REQ_PRIO */ 689 + int op; /* contains REQ_OP_ */ 690 + int op_flags; /* rq_flag_bits */ 690 691 block_t new_blkaddr; /* new block address to be written */ 691 692 block_t old_blkaddr; /* old block address before Cow */ 692 693 struct page *page; /* page to be written */ 693 694 struct page *encrypted_page; /* encrypted page */ 694 695 }; 695 696 696 - #define is_read_io(rw) (((rw) & 1) == READ) 697 + #define is_read_io(rw) (rw == READ) 697 698 struct f2fs_bio_info { 698 699 struct f2fs_sb_info *sbi; /* f2fs superblock */ 699 700 struct bio *bio; /* bios to merge */
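The `is_read_io()` change above is worth a closer look: the old macro masked bit 0 because the direction used to hide inside a combined rw word (READ is 0, WRITE is 1, flags live in higher bits), while the new form is handed a bare op and can compare directly. A sketch contrasting the two (the `REQ_META` bit position is an illustrative stand-in for a real flag well away from the direction bit):

```c
/* Why is_read_io() could simplify: the old argument mixed direction
 * and flags, the new one is a pure op. READ/WRITE mirror the kernel's
 * 0/1 convention; the flag bit is illustrative. */
#include <assert.h>

#define READ  0
#define WRITE 1
#define REQ_META (1u << 2)  /* hypothetical flag bit */

/* old form: direction buried in bit 0 of a combined word */
#define is_read_io_old(rw) (((rw) & 1) == READ)
/* new form: the argument is an op and never carries flags */
#define is_read_io_new(op) ((op) == READ)
```

The new macro is only correct because callers now pass `fio->op` or `bio_op(bio)`, values guaranteed to hold nothing but the operation.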
+6 -3
fs/f2fs/gc.c
··· 538 538 struct f2fs_io_info fio = { 539 539 .sbi = F2FS_I_SB(inode), 540 540 .type = DATA, 541 - .rw = READ_SYNC, 541 + .op = REQ_OP_READ, 542 + .op_flags = READ_SYNC, 542 543 .encrypted_page = NULL, 543 544 }; 544 545 struct dnode_of_data dn; ··· 613 612 /* allocate block address */ 614 613 f2fs_wait_on_page_writeback(dn.node_page, NODE, true); 615 614 616 - fio.rw = WRITE_SYNC; 615 + fio.op = REQ_OP_WRITE; 616 + fio.op_flags = WRITE_SYNC; 617 617 fio.new_blkaddr = newaddr; 618 618 f2fs_submit_page_mbio(&fio); 619 619 ··· 651 649 struct f2fs_io_info fio = { 652 650 .sbi = F2FS_I_SB(inode), 653 651 .type = DATA, 654 - .rw = WRITE_SYNC, 652 + .op = REQ_OP_WRITE, 653 + .op_flags = WRITE_SYNC, 655 654 .page = page, 656 655 .encrypted_page = NULL, 657 656 };
+2 -1
fs/f2fs/inline.c
··· 108 108 struct f2fs_io_info fio = { 109 109 .sbi = F2FS_I_SB(dn->inode), 110 110 .type = DATA, 111 - .rw = WRITE_SYNC | REQ_PRIO, 111 + .op = REQ_OP_WRITE, 112 + .op_flags = WRITE_SYNC | REQ_PRIO, 112 113 .page = page, 113 114 .encrypted_page = NULL, 114 115 };
+5 -3
fs/f2fs/node.c
··· 1070 1070 * 0: f2fs_put_page(page, 0) 1071 1071 * LOCKED_PAGE or error: f2fs_put_page(page, 1) 1072 1072 */ 1073 - static int read_node_page(struct page *page, int rw) 1073 + static int read_node_page(struct page *page, int op_flags) 1074 1074 { 1075 1075 struct f2fs_sb_info *sbi = F2FS_P_SB(page); 1076 1076 struct node_info ni; 1077 1077 struct f2fs_io_info fio = { 1078 1078 .sbi = sbi, 1079 1079 .type = NODE, 1080 - .rw = rw, 1080 + .op = REQ_OP_READ, 1081 + .op_flags = op_flags, 1081 1082 .page = page, 1082 1083 .encrypted_page = NULL, 1083 1084 }; ··· 1569 1568 struct f2fs_io_info fio = { 1570 1569 .sbi = sbi, 1571 1570 .type = NODE, 1572 - .rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE, 1571 + .op = REQ_OP_WRITE, 1572 + .op_flags = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : 0, 1573 1573 .page = page, 1574 1574 .encrypted_page = NULL, 1575 1575 };
+9 -5
fs/f2fs/segment.c
··· 257 257 struct f2fs_io_info fio = { 258 258 .sbi = sbi, 259 259 .type = DATA, 260 - .rw = WRITE_SYNC | REQ_PRIO, 260 + .op = REQ_OP_WRITE, 261 + .op_flags = WRITE_SYNC | REQ_PRIO, 261 262 .encrypted_page = NULL, 262 263 }; 263 264 bool submit_bio = false; ··· 407 406 fcc->dispatch_list = llist_reverse_order(fcc->dispatch_list); 408 407 409 408 bio->bi_bdev = sbi->sb->s_bdev; 410 - ret = submit_bio_wait(WRITE_FLUSH, bio); 409 + bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH); 410 + ret = submit_bio_wait(bio); 411 411 412 412 llist_for_each_entry_safe(cmd, next, 413 413 fcc->dispatch_list, llnode) { ··· 440 438 int ret; 441 439 442 440 bio->bi_bdev = sbi->sb->s_bdev; 443 - ret = submit_bio_wait(WRITE_FLUSH, bio); 441 + bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH); 442 + ret = submit_bio_wait(bio); 444 443 bio_put(bio); 445 444 return ret; 446 445 } ··· 1404 1401 struct f2fs_io_info fio = { 1405 1402 .sbi = sbi, 1406 1403 .type = META, 1407 - .rw = WRITE_SYNC | REQ_META | REQ_PRIO, 1404 + .op = REQ_OP_WRITE, 1405 + .op_flags = WRITE_SYNC | REQ_META | REQ_PRIO, 1408 1406 .old_blkaddr = page->index, 1409 1407 .new_blkaddr = page->index, 1410 1408 .page = page, ··· 1413 1409 }; 1414 1410 1415 1411 if (unlikely(page->index >= MAIN_BLKADDR(sbi))) 1416 - fio.rw &= ~REQ_META; 1412 + fio.op_flags &= ~REQ_META; 1417 1413 1418 1414 set_page_writeback(page); 1419 1415 f2fs_submit_page_mbio(&fio);
+4 -3
fs/f2fs/trace.c
··· 25 25 if (!last_io.len) 26 26 return; 27 27 28 - trace_printk("%3x:%3x %4x %-16s %2x %5x %12x %4x\n", 28 + trace_printk("%3x:%3x %4x %-16s %2x %5x %5x %12x %4x\n", 29 29 last_io.major, last_io.minor, 30 30 last_io.pid, "----------------", 31 31 last_io.type, 32 - last_io.fio.rw, 32 + last_io.fio.op, last_io.fio.op_flags, 33 33 last_io.fio.new_blkaddr, 34 34 last_io.len); 35 35 memset(&last_io, 0, sizeof(last_io)); ··· 101 101 if (last_io.major == major && last_io.minor == minor && 102 102 last_io.pid == pid && 103 103 last_io.type == __file_type(inode, pid) && 104 - last_io.fio.rw == fio->rw && 104 + last_io.fio.op == fio->op && 105 + last_io.fio.op_flags == fio->op_flags && 105 106 last_io.fio.new_blkaddr + last_io.len == 106 107 fio->new_blkaddr) { 107 108 last_io.len++;
+1 -1
fs/fat/misc.c
··· 267 267 int i, err = 0; 268 268 269 269 for (i = 0; i < nr_bhs; i++) 270 - write_dirty_buffer(bhs[i], WRITE); 270 + write_dirty_buffer(bhs[i], 0); 271 271 272 272 for (i = 0; i < nr_bhs; i++) { 273 273 wait_on_buffer(bhs[i]);
+2 -2
fs/gfs2/bmap.c
··· 285 285 if (trylock_buffer(rabh)) { 286 286 if (!buffer_uptodate(rabh)) { 287 287 rabh->b_end_io = end_buffer_read_sync; 288 - submit_bh(READA | REQ_META, rabh); 288 + submit_bh(REQ_OP_READ, READA | REQ_META, rabh); 289 289 continue; 290 290 } 291 291 unlock_buffer(rabh); ··· 974 974 975 975 if (!buffer_uptodate(bh)) { 976 976 err = -EIO; 977 - ll_rw_block(READ, 1, &bh); 977 + ll_rw_block(REQ_OP_READ, 0, 1, &bh); 978 978 wait_on_buffer(bh); 979 979 /* Uhhuh. Read error. Complain and punt. */ 980 980 if (!buffer_uptodate(bh))
+1 -1
fs/gfs2/dir.c
··· 1513 1513 continue; 1514 1514 } 1515 1515 bh->b_end_io = end_buffer_read_sync; 1516 - submit_bh(READA | REQ_META, bh); 1516 + submit_bh(REQ_OP_READ, READA | REQ_META, bh); 1517 1517 continue; 1518 1518 } 1519 1519 brelse(bh);
+4 -4
fs/gfs2/log.c
··· 657 657 struct gfs2_log_header *lh; 658 658 unsigned int tail; 659 659 u32 hash; 660 - int rw = WRITE_FLUSH_FUA | REQ_META; 660 + int op_flags = WRITE_FLUSH_FUA | REQ_META; 661 661 struct page *page = mempool_alloc(gfs2_page_pool, GFP_NOIO); 662 662 enum gfs2_freeze_state state = atomic_read(&sdp->sd_freeze_state); 663 663 lh = page_address(page); ··· 682 682 if (test_bit(SDF_NOBARRIERS, &sdp->sd_flags)) { 683 683 gfs2_ordered_wait(sdp); 684 684 log_flush_wait(sdp); 685 - rw = WRITE_SYNC | REQ_META | REQ_PRIO; 685 + op_flags = WRITE_SYNC | REQ_META | REQ_PRIO; 686 686 } 687 687 688 688 sdp->sd_log_idle = (tail == sdp->sd_log_flush_head); 689 689 gfs2_log_write_page(sdp, page); 690 - gfs2_log_flush_bio(sdp, rw); 690 + gfs2_log_flush_bio(sdp, REQ_OP_WRITE, op_flags); 691 691 log_flush_wait(sdp); 692 692 693 693 if (sdp->sd_log_tail != tail) ··· 738 738 739 739 gfs2_ordered_write(sdp); 740 740 lops_before_commit(sdp, tr); 741 - gfs2_log_flush_bio(sdp, WRITE); 741 + gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0); 742 742 743 743 if (sdp->sd_log_head != sdp->sd_log_flush_head) { 744 744 log_flush_wait(sdp);
+7 -5
fs/gfs2/lops.c
··· 230 230 /** 231 231 * gfs2_log_flush_bio - Submit any pending log bio 232 232 * @sdp: The superblock 233 - * @rw: The rw flags 233 + * @op: REQ_OP 234 + * @op_flags: rq_flag_bits 234 235 * 235 236 * Submit any pending part-built or full bio to the block device. If 236 237 * there is no pending bio, then this is a no-op. 237 238 */ 238 239 239 - void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int rw) 240 + void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int op, int op_flags) 240 241 { 241 242 if (sdp->sd_log_bio) { 242 243 atomic_inc(&sdp->sd_log_in_flight); 243 - submit_bio(rw, sdp->sd_log_bio); 244 + bio_set_op_attrs(sdp->sd_log_bio, op, op_flags); 245 + submit_bio(sdp->sd_log_bio); 244 246 sdp->sd_log_bio = NULL; 245 247 } 246 248 } ··· 301 299 nblk >>= sdp->sd_fsb2bb_shift; 302 300 if (blkno == nblk) 303 301 return bio; 304 - gfs2_log_flush_bio(sdp, WRITE); 302 + gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0); 305 303 } 306 304 307 305 return gfs2_log_alloc_bio(sdp, blkno); ··· 330 328 bio = gfs2_log_get_bio(sdp, blkno); 331 329 ret = bio_add_page(bio, page, size, offset); 332 330 if (ret == 0) { 333 - gfs2_log_flush_bio(sdp, WRITE); 331 + gfs2_log_flush_bio(sdp, REQ_OP_WRITE, 0); 334 332 bio = gfs2_log_alloc_bio(sdp, blkno); 335 333 ret = bio_add_page(bio, page, size, offset); 336 334 WARN_ON(ret == 0);
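The gfs2 log code illustrates a deferred-submit variant of the conversion: the pending log bio is built direction-less, and `gfs2_log_flush_bio()` stamps the op and flags onto it only at flush time (a no-op when nothing is pending). A simplified model of that pattern (types and the submit stub are stand-ins):

```c
/* Sketch of the gfs2_log_flush_bio() pattern: op/op_flags are applied
 * to the pending bio only at flush time. Simplified stand-in types. */
#include <assert.h>
#include <stddef.h>

enum req_op { REQ_OP_READ = 0, REQ_OP_WRITE = 1 };

struct bio { enum req_op op; unsigned int op_flags; int submitted; };
struct sbd { struct bio *sd_log_bio; int in_flight; };

static void submit_bio(struct bio *bio) { bio->submitted = 1; }

/* cf. gfs2_log_flush_bio(): no-op when no bio is pending */
static void log_flush_bio(struct sbd *sdp, enum req_op op,
                          unsigned int op_flags)
{
	if (sdp->sd_log_bio) {
		sdp->in_flight++;
		sdp->sd_log_bio->op = op;
		sdp->sd_log_bio->op_flags = op_flags;
		submit_bio(sdp->sd_log_bio);
		sdp->sd_log_bio = NULL;
	}
}
```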
+1 -1
fs/gfs2/lops.h
··· 27 27 28 28 extern const struct gfs2_log_operations *gfs2_log_ops[]; 29 29 extern void gfs2_log_write_page(struct gfs2_sbd *sdp, struct page *page); 30 - extern void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int rw); 30 + extern void gfs2_log_flush_bio(struct gfs2_sbd *sdp, int op, int op_flags); 31 31 extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh); 32 32 33 33 static inline unsigned int buf_limit(struct gfs2_sbd *sdp)
+10 -8
fs/gfs2/meta_io.c
··· 37 37 { 38 38 struct buffer_head *bh, *head; 39 39 int nr_underway = 0; 40 - int write_op = REQ_META | REQ_PRIO | 41 - (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE); 40 + int write_flags = REQ_META | REQ_PRIO | 41 + (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : 0); 42 42 43 43 BUG_ON(!PageLocked(page)); 44 44 BUG_ON(!page_has_buffers(page)); ··· 79 79 do { 80 80 struct buffer_head *next = bh->b_this_page; 81 81 if (buffer_async_write(bh)) { 82 - submit_bh(write_op, bh); 82 + submit_bh(REQ_OP_WRITE, write_flags, bh); 83 83 nr_underway++; 84 84 } 85 85 bh = next; ··· 213 213 * Submit several consecutive buffer head I/O requests as a single bio I/O 214 214 * request. (See submit_bh_wbc.) 215 215 */ 216 - static void gfs2_submit_bhs(int rw, struct buffer_head *bhs[], int num) 216 + static void gfs2_submit_bhs(int op, int op_flags, struct buffer_head *bhs[], 217 + int num) 217 218 { 218 219 struct buffer_head *bh = bhs[0]; 219 220 struct bio *bio; ··· 231 230 bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh)); 232 231 } 233 232 bio->bi_end_io = gfs2_meta_read_endio; 234 - submit_bio(rw, bio); 233 + bio_set_op_attrs(bio, op, op_flags); 234 + submit_bio(bio); 235 235 } 236 236 237 237 /** ··· 282 280 } 283 281 } 284 282 285 - gfs2_submit_bhs(READ_SYNC | REQ_META | REQ_PRIO, bhs, num); 283 + gfs2_submit_bhs(REQ_OP_READ, READ_SYNC | REQ_META | REQ_PRIO, bhs, num); 286 284 if (!(flags & DIO_WAIT)) 287 285 return 0; 288 286 ··· 450 448 if (buffer_uptodate(first_bh)) 451 449 goto out; 452 450 if (!buffer_locked(first_bh)) 453 - ll_rw_block(READ_SYNC | REQ_META, 1, &first_bh); 451 + ll_rw_block(REQ_OP_READ, READ_SYNC | REQ_META, 1, &first_bh); 454 452 455 453 dblock++; 456 454 extlen--; ··· 459 457 bh = gfs2_getbuf(gl, dblock, CREATE); 460 458 461 459 if (!buffer_uptodate(bh) && !buffer_locked(bh)) 462 - ll_rw_block(READA | REQ_META, 1, &bh); 460 + ll_rw_block(REQ_OP_READ, READA | REQ_META, 1, &bh); 463 461 brelse(bh); 464 462 dblock++; 465 463 extlen--;
+2 -1
fs/gfs2/ops_fstype.c
··· 246 246 247 247 bio->bi_end_io = end_bio_io_page; 248 248 bio->bi_private = page; 249 - submit_bio(READ_SYNC | REQ_META, bio); 249 + bio_set_op_attrs(bio, REQ_OP_READ, READ_SYNC | REQ_META); 250 + submit_bio(bio); 250 251 wait_on_page_locked(page); 251 252 bio_put(bio); 252 253 if (!PageUptodate(page)) {
+1 -1
fs/gfs2/quota.c
··· 730 730 if (PageUptodate(page)) 731 731 set_buffer_uptodate(bh); 732 732 if (!buffer_uptodate(bh)) { 733 - ll_rw_block(READ | REQ_META, 1, &bh); 733 + ll_rw_block(REQ_OP_READ, REQ_META, 1, &bh); 734 734 wait_on_buffer(bh); 735 735 if (!buffer_uptodate(bh)) 736 736 goto unlock_out;
+1 -1
fs/hfsplus/hfsplus_fs.h
··· 526 526 527 527 /* wrapper.c */ 528 528 int hfsplus_submit_bio(struct super_block *sb, sector_t sector, void *buf, 529 - void **data, int rw); 529 + void **data, int op, int op_flags); 530 530 int hfsplus_read_wrapper(struct super_block *sb); 531 531 532 532 /* time macros */
+3 -2
fs/hfsplus/part_tbl.c
··· 112 112 if ((u8 *)pm - (u8 *)buf >= buf_size) { 113 113 res = hfsplus_submit_bio(sb, 114 114 *part_start + HFS_PMAP_BLK + i, 115 - buf, (void **)&pm, READ); 115 + buf, (void **)&pm, REQ_OP_READ, 116 + 0); 116 117 if (res) 117 118 return res; 118 119 } ··· 137 136 return -ENOMEM; 138 137 139 138 res = hfsplus_submit_bio(sb, *part_start + HFS_PMAP_BLK, 140 - buf, &data, READ); 139 + buf, &data, REQ_OP_READ, 0); 141 140 if (res) 142 141 goto out; 143 142
+4 -2
fs/hfsplus/super.c
··· 220 220 221 221 error2 = hfsplus_submit_bio(sb, 222 222 sbi->part_start + HFSPLUS_VOLHEAD_SECTOR, 223 - sbi->s_vhdr_buf, NULL, WRITE_SYNC); 223 + sbi->s_vhdr_buf, NULL, REQ_OP_WRITE, 224 + WRITE_SYNC); 224 225 if (!error) 225 226 error = error2; 226 227 if (!write_backup) ··· 229 228 230 229 error2 = hfsplus_submit_bio(sb, 231 230 sbi->part_start + sbi->sect_count - 2, 232 - sbi->s_backup_vhdr_buf, NULL, WRITE_SYNC); 231 + sbi->s_backup_vhdr_buf, NULL, REQ_OP_WRITE, 232 + WRITE_SYNC); 233 233 if (!error) 234 234 error2 = error; 235 235 out:
+9 -6
fs/hfsplus/wrapper.c
··· 30 30 * @sector: block to read or write, for blocks of HFSPLUS_SECTOR_SIZE bytes 31 31 * @buf: buffer for I/O 32 32 * @data: output pointer for location of requested data 33 - * @rw: direction of I/O 33 + * @op: direction of I/O 34 + * @op_flags: request op flags 34 35 * 35 36 * The unit of I/O is hfsplus_min_io_size(sb), which may be bigger than 36 37 * HFSPLUS_SECTOR_SIZE, and @buf must be sized accordingly. On reads ··· 45 44 * will work correctly. 46 45 */ 47 46 int hfsplus_submit_bio(struct super_block *sb, sector_t sector, 48 - void *buf, void **data, int rw) 47 + void *buf, void **data, int op, int op_flags) 49 48 { 50 49 struct bio *bio; 51 50 int ret = 0; ··· 66 65 bio = bio_alloc(GFP_NOIO, 1); 67 66 bio->bi_iter.bi_sector = sector; 68 67 bio->bi_bdev = sb->s_bdev; 68 + bio_set_op_attrs(bio, op, op_flags); 69 69 70 - if (!(rw & WRITE) && data) 70 + if (op != WRITE && data) 71 71 *data = (u8 *)buf + offset; 72 72 73 73 while (io_size > 0) { ··· 85 83 buf = (u8 *)buf + len; 86 84 } 87 85 88 - ret = submit_bio_wait(rw, bio); 86 + ret = submit_bio_wait(bio); 89 87 out: 90 88 bio_put(bio); 91 89 return ret < 0 ? ret : 0; ··· 183 181 reread: 184 182 error = hfsplus_submit_bio(sb, part_start + HFSPLUS_VOLHEAD_SECTOR, 185 183 sbi->s_vhdr_buf, (void **)&sbi->s_vhdr, 186 - READ); 184 + REQ_OP_READ, 0); 187 185 if (error) 188 186 goto out_free_backup_vhdr; 189 187 ··· 215 213 216 214 error = hfsplus_submit_bio(sb, part_start + part_size - 2, 217 215 sbi->s_backup_vhdr_buf, 218 - (void **)&sbi->s_backup_vhdr, READ); 216 + (void **)&sbi->s_backup_vhdr, REQ_OP_READ, 217 + 0); 219 218 if (error) 220 219 goto out_free_backup_vhdr; 221 220
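The hfsplus wrapper keeps its contract that the `*data` output pointer is only meaningful for reads; with the split it tests the op directly instead of masking `rw & WRITE`. A minimal model of that part of the contract (simplified types; the offset arithmetic here is illustrative, not hfsplus's min-I/O-size rounding):

```c
/* Sketch of the hfsplus_submit_bio() data-pointer contract after the
 * change: only reads expose a pointer into buf. Simplified model. */
#include <assert.h>
#include <stddef.h>

enum req_op { REQ_OP_READ = 0, REQ_OP_WRITE = 1 };

/* cf. the "op != WRITE && data" test: where the requested data lands
 * inside buf for a read, NULL for a write */
static void *read_data_ptr(enum req_op op, unsigned char *buf,
                           size_t offset)
{
	return op == REQ_OP_READ ? buf + offset : NULL;
}
```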
fs/isofs/compress.c | +1 -1
···
 		blocknum = block_start >> bufshift;
 		memset(bhs, 0, (needblocks + 1) * sizeof(struct buffer_head *));
 		haveblocks = isofs_get_blocks(inode, blocknum, bhs, needblocks);
-		ll_rw_block(READ, haveblocks, bhs);
+		ll_rw_block(REQ_OP_READ, 0, haveblocks, bhs);
 
 		curbh = 0;
 		curpage = 0;
fs/jbd2/commit.c | +3 -3
···
 	if (journal->j_flags & JBD2_BARRIER &&
 	    !jbd2_has_feature_async_commit(journal))
-		ret = submit_bh(WRITE_SYNC | WRITE_FLUSH_FUA, bh);
+		ret = submit_bh(REQ_OP_WRITE, WRITE_SYNC | WRITE_FLUSH_FUA, bh);
 	else
-		ret = submit_bh(WRITE_SYNC, bh);
+		ret = submit_bh(REQ_OP_WRITE, WRITE_SYNC, bh);
 
 	*cbh = bh;
 	return ret;
···
 		clear_buffer_dirty(bh);
 		set_buffer_uptodate(bh);
 		bh->b_end_io = journal_end_buffer_io_sync;
-		submit_bh(WRITE_SYNC, bh);
+		submit_bh(REQ_OP_WRITE, WRITE_SYNC, bh);
 	}
 	cond_resched();
 	stats.run.rs_blocks_logged += bufs;
fs/jbd2/journal.c | +5 -5
···
 	return jbd2_journal_start_thread(journal);
 }
 
-static int jbd2_write_superblock(journal_t *journal, int write_op)
+static int jbd2_write_superblock(journal_t *journal, int write_flags)
 {
 	struct buffer_head *bh = journal->j_sb_buffer;
 	journal_superblock_t *sb = journal->j_superblock;
 	int ret;
 
-	trace_jbd2_write_superblock(journal, write_op);
+	trace_jbd2_write_superblock(journal, write_flags);
 	if (!(journal->j_flags & JBD2_BARRIER))
-		write_op &= ~(REQ_FUA | REQ_FLUSH);
+		write_flags &= ~(REQ_FUA | REQ_PREFLUSH);
 	lock_buffer(bh);
 	if (buffer_write_io_error(bh)) {
 		/*
···
 	jbd2_superblock_csum_set(journal, sb);
 	get_bh(bh);
 	bh->b_end_io = end_buffer_write_sync;
-	ret = submit_bh(write_op, bh);
+	ret = submit_bh(REQ_OP_WRITE, write_flags, bh);
 	wait_on_buffer(bh);
 	if (buffer_write_io_error(bh)) {
 		clear_buffer_write_io_error(bh);
···
 	J_ASSERT(bh != NULL);
 	if (!buffer_uptodate(bh)) {
-		ll_rw_block(READ, 1, &bh);
+		ll_rw_block(REQ_OP_READ, 0, 1, &bh);
 		wait_on_buffer(bh);
 		if (!buffer_uptodate(bh)) {
 			printk(KERN_ERR
fs/jbd2/recovery.c | +2 -2
···
 		if (!buffer_uptodate(bh) && !buffer_locked(bh)) {
 			bufs[nbufs++] = bh;
 			if (nbufs == MAXBUF) {
-				ll_rw_block(READ, nbufs, bufs);
+				ll_rw_block(REQ_OP_READ, 0, nbufs, bufs);
 				journal_brelse_array(bufs, nbufs);
 				nbufs = 0;
 			}
···
 	}
 
 	if (nbufs)
-		ll_rw_block(READ, nbufs, bufs);
+		ll_rw_block(REQ_OP_READ, 0, nbufs, bufs);
 	err = 0;
 
 failed:
fs/jfs/jfs_logmgr.c | +4 -2
···
 	bio->bi_end_io = lbmIODone;
 	bio->bi_private = bp;
+	bio_set_op_attrs(bio, REQ_OP_READ, READ_SYNC);
 	/*check if journaling to disk has been disabled*/
 	if (log->no_integrity) {
 		bio->bi_iter.bi_size = 0;
 		lbmIODone(bio);
 	} else {
-		submit_bio(READ_SYNC, bio);
+		submit_bio(bio);
 	}
 
 	wait_event(bp->l_ioevent, (bp->l_flag != lbmREAD));
···
 	bio->bi_end_io = lbmIODone;
 	bio->bi_private = bp;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_SYNC);
 
 	/* check if journaling to disk has been disabled */
 	if (log->no_integrity) {
 		bio->bi_iter.bi_size = 0;
 		lbmIODone(bio);
 	} else {
-		submit_bio(WRITE_SYNC, bio);
+		submit_bio(bio);
 		INCREMENT(lmStat.submitted);
 	}
 }
fs/jfs/jfs_metapage.c | +6 -4
···
 		inc_io(page);
 		if (!bio->bi_iter.bi_size)
 			goto dump_bio;
-		submit_bio(WRITE, bio);
+		submit_bio(bio);
 		nr_underway++;
 		bio = NULL;
 	} else
···
 		bio->bi_iter.bi_sector = pblock << (inode->i_blkbits - 9);
 		bio->bi_end_io = metapage_write_end_io;
 		bio->bi_private = page;
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 
 		/* Don't call bio_add_page yet, we may add to this vec */
 		bio_offset = offset;
···
 		if (!bio->bi_iter.bi_size)
 			goto dump_bio;
 
-		submit_bio(WRITE, bio);
+		submit_bio(bio);
 		nr_underway++;
 	}
 	if (redirty)
···
 			insert_metapage(page, NULL);
 		inc_io(page);
 		if (bio)
-			submit_bio(READ, bio);
+			submit_bio(bio);
 
 		bio = bio_alloc(GFP_NOFS, 1);
 		bio->bi_bdev = inode->i_sb->s_bdev;
···
 			pblock << (inode->i_blkbits - 9);
 		bio->bi_end_io = metapage_read_end_io;
 		bio->bi_private = page;
+		bio_set_op_attrs(bio, REQ_OP_READ, 0);
 		len = xlen << inode->i_blkbits;
 		offset = block_offset << inode->i_blkbits;
 		if (bio_add_page(bio, page, len, offset) < len)
···
 		block_offset++;
 	}
 	if (bio)
-		submit_bio(READ, bio);
+		submit_bio(bio);
 	else
 		unlock_page(page);
fs/logfs/dev_bdev.c | +11 -6
···
 #define PAGE_OFS(ofs) ((ofs) & (PAGE_SIZE-1))
 
-static int sync_request(struct page *page, struct block_device *bdev, int rw)
+static int sync_request(struct page *page, struct block_device *bdev, int op)
 {
 	struct bio bio;
 	struct bio_vec bio_vec;
···
 	bio.bi_bdev = bdev;
 	bio.bi_iter.bi_sector = page->index * (PAGE_SIZE >> 9);
 	bio.bi_iter.bi_size = PAGE_SIZE;
+	bio_set_op_attrs(&bio, op, 0);
 
-	return submit_bio_wait(rw, &bio);
+	return submit_bio_wait(&bio);
 }
 
 static int bdev_readpage(void *_sb, struct page *page)
···
 	bio->bi_iter.bi_sector = ofs >> 9;
 	bio->bi_private = sb;
 	bio->bi_end_io = writeseg_end_io;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	atomic_inc(&super->s_pending_writes);
-	submit_bio(WRITE, bio);
+	submit_bio(bio);
 
 	ofs += i * PAGE_SIZE;
 	index += i;
···
 	bio->bi_iter.bi_sector = ofs >> 9;
 	bio->bi_private = sb;
 	bio->bi_end_io = writeseg_end_io;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	atomic_inc(&super->s_pending_writes);
-	submit_bio(WRITE, bio);
+	submit_bio(bio);
 	return 0;
 }
···
 	bio->bi_iter.bi_sector = ofs >> 9;
 	bio->bi_private = sb;
 	bio->bi_end_io = erase_end_io;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	atomic_inc(&super->s_pending_writes);
-	submit_bio(WRITE, bio);
+	submit_bio(bio);
 
 	ofs += i * PAGE_SIZE;
 	index += i;
···
 	bio->bi_iter.bi_sector = ofs >> 9;
 	bio->bi_private = sb;
 	bio->bi_end_io = erase_end_io;
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	atomic_inc(&super->s_pending_writes);
-	submit_bio(WRITE, bio);
+	submit_bio(bio);
 	return 0;
 }
fs/mpage.c | +21 -20
···
 	bio_put(bio);
 }
 
-static struct bio *mpage_bio_submit(int rw, struct bio *bio)
+static struct bio *mpage_bio_submit(int op, int op_flags, struct bio *bio)
 {
 	bio->bi_end_io = mpage_end_io;
-	guard_bio_eod(rw, bio);
-	submit_bio(rw, bio);
+	bio_set_op_attrs(bio, op, op_flags);
+	guard_bio_eod(op, bio);
+	submit_bio(bio);
 	return NULL;
 }
···
 	 * This page will go to BIO. Do we need to send this BIO off first?
 	 */
 	if (bio && (*last_block_in_bio != blocks[0] - 1))
-		bio = mpage_bio_submit(READ, bio);
+		bio = mpage_bio_submit(REQ_OP_READ, 0, bio);
 
 alloc_new:
 	if (bio == NULL) {
···
 	length = first_hole << blkbits;
 	if (bio_add_page(bio, page, length, 0) < length) {
-		bio = mpage_bio_submit(READ, bio);
+		bio = mpage_bio_submit(REQ_OP_READ, 0, bio);
 		goto alloc_new;
 	}
···
 	nblocks = map_bh->b_size >> blkbits;
 	if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
 	    (first_hole != blocks_per_page))
-		bio = mpage_bio_submit(READ, bio);
+		bio = mpage_bio_submit(REQ_OP_READ, 0, bio);
 	else
 		*last_block_in_bio = blocks[blocks_per_page - 1];
 out:
···
 confused:
 	if (bio)
-		bio = mpage_bio_submit(READ, bio);
+		bio = mpage_bio_submit(REQ_OP_READ, 0, bio);
 	if (!PageUptodate(page))
 		block_read_full_page(page, get_block);
 	else
···
 	}
 	BUG_ON(!list_empty(pages));
 	if (bio)
-		mpage_bio_submit(READ, bio);
+		mpage_bio_submit(REQ_OP_READ, 0, bio);
 	return 0;
 }
 EXPORT_SYMBOL(mpage_readpages);
···
 	bio = do_mpage_readpage(bio, page, 1, &last_block_in_bio,
 			&map_bh, &first_logical_block, get_block, gfp);
 	if (bio)
-		mpage_bio_submit(READ, bio);
+		mpage_bio_submit(REQ_OP_READ, 0, bio);
 	return 0;
 }
 EXPORT_SYMBOL(mpage_readpage);
···
 	struct buffer_head map_bh;
 	loff_t i_size = i_size_read(inode);
 	int ret = 0;
-	int wr = (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE);
+	int op_flags = (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : 0);
 
 	if (page_has_buffers(page)) {
 		struct buffer_head *head = page_buffers(page);
···
 	 * This page will go to BIO. Do we need to send this BIO off first?
 	 */
 	if (bio && mpd->last_block_in_bio != blocks[0] - 1)
-		bio = mpage_bio_submit(wr, bio);
+		bio = mpage_bio_submit(REQ_OP_WRITE, op_flags, bio);
 
 alloc_new:
 	if (bio == NULL) {
···
 	wbc_account_io(wbc, page, PAGE_SIZE);
 	length = first_unmapped << blkbits;
 	if (bio_add_page(bio, page, length, 0) < length) {
-		bio = mpage_bio_submit(wr, bio);
+		bio = mpage_bio_submit(REQ_OP_WRITE, op_flags, bio);
 		goto alloc_new;
 	}
···
 	set_page_writeback(page);
 	unlock_page(page);
 	if (boundary || (first_unmapped != blocks_per_page)) {
-		bio = mpage_bio_submit(wr, bio);
+		bio = mpage_bio_submit(REQ_OP_WRITE, op_flags, bio);
 		if (boundary_block) {
 			write_boundary_block(boundary_bdev,
 					boundary_block, 1 << blkbits);
···
 confused:
 	if (bio)
-		bio = mpage_bio_submit(wr, bio);
+		bio = mpage_bio_submit(REQ_OP_WRITE, op_flags, bio);
 
 	if (mpd->use_writepage) {
 		ret = mapping->a_ops->writepage(page, wbc);
···
 		ret = write_cache_pages(mapping, wbc, __mpage_writepage, &mpd);
 		if (mpd.bio) {
-			int wr = (wbc->sync_mode == WB_SYNC_ALL ?
-				  WRITE_SYNC : WRITE);
-			mpage_bio_submit(wr, mpd.bio);
+			int op_flags = (wbc->sync_mode == WB_SYNC_ALL ?
+					WRITE_SYNC : 0);
+			mpage_bio_submit(REQ_OP_WRITE, op_flags, mpd.bio);
 		}
 	}
 	blk_finish_plug(&plug);
···
 	};
 	int ret = __mpage_writepage(page, wbc, &mpd);
 	if (mpd.bio) {
-		int wr = (wbc->sync_mode == WB_SYNC_ALL ?
-			  WRITE_SYNC : WRITE);
-		mpage_bio_submit(wr, mpd.bio);
+		int op_flags = (wbc->sync_mode == WB_SYNC_ALL ?
+				WRITE_SYNC : 0);
+		mpage_bio_submit(REQ_OP_WRITE, op_flags, mpd.bio);
 	}
 	return ret;
 }
fs/nfs/blocklayout/blocklayout.c | +12 -10
···
 }
 
 static struct bio *
-bl_submit_bio(int rw, struct bio *bio)
+bl_submit_bio(struct bio *bio)
 {
 	if (bio) {
 		get_parallel(bio->bi_private);
 		dprintk("%s submitting %s bio %u@%llu\n", __func__,
-			rw == READ ? "read" : "write", bio->bi_iter.bi_size,
+			bio_op(bio) == READ ? "read" : "write",
+			bio->bi_iter.bi_size,
 			(unsigned long long)bio->bi_iter.bi_sector);
-		submit_bio(rw, bio);
+		submit_bio(bio);
 	}
 	return NULL;
 }
···
 	if (disk_addr < map->start || disk_addr >= map->start + map->len) {
 		if (!dev->map(dev, disk_addr, map))
 			return ERR_PTR(-EIO);
-		bio = bl_submit_bio(rw, bio);
+		bio = bl_submit_bio(bio);
 	}
 	disk_addr += map->disk_offset;
 	disk_addr -= map->start;
···
 				disk_addr >> SECTOR_SHIFT, end_io, par);
 		if (!bio)
 			return ERR_PTR(-ENOMEM);
+		bio_set_op_attrs(bio, rw, 0);
 	}
 	if (bio_add_page(bio, page, *len, offset) < *len) {
-		bio = bl_submit_bio(rw, bio);
+		bio = bl_submit_bio(bio);
 		goto retry;
 	}
 	return bio;
···
 	for (i = pg_index; i < header->page_array.npages; i++) {
 		if (extent_length <= 0) {
 			/* We've used up the previous extent */
-			bio = bl_submit_bio(READ, bio);
+			bio = bl_submit_bio(bio);
 
 			/* Get the next one */
 			if (!ext_tree_lookup(bl, isect, &be, false)) {
···
 		}
 
 		if (is_hole(&be)) {
-			bio = bl_submit_bio(READ, bio);
+			bio = bl_submit_bio(bio);
 			/* Fill hole w/ zeroes w/o accessing device */
 			dprintk("%s Zeroing page for hole\n", __func__);
 			zero_user_segment(pages[i], pg_offset, pg_len);
···
 		header->res.count = (isect << SECTOR_SHIFT) - header->args.offset;
 	}
 out:
-	bl_submit_bio(READ, bio);
+	bl_submit_bio(bio);
 	blk_finish_plug(&plug);
 	put_parallel(par);
 	return PNFS_ATTEMPTED;
···
 	for (i = pg_index; i < header->page_array.npages; i++) {
 		if (extent_length <= 0) {
 			/* We've used up the previous extent */
-			bio = bl_submit_bio(WRITE, bio);
+			bio = bl_submit_bio(bio);
 			/* Get the next one */
 			if (!ext_tree_lookup(bl, isect, &be, true)) {
 				header->pnfs_error = -EINVAL;
···
 	header->res.count = header->args.count;
 out:
-	bl_submit_bio(WRITE, bio);
+	bl_submit_bio(bio);
 	blk_finish_plug(&plug);
 	put_parallel(par);
 	return PNFS_ATTEMPTED;
fs/nilfs2/btnode.c | +3 -3
···
 }
 
 int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
-			      sector_t pblocknr, int mode,
+			      sector_t pblocknr, int mode, int mode_flags,
 			      struct buffer_head **pbh, sector_t *submit_ptr)
 {
 	struct buffer_head *bh;
···
 		}
 	}
 
-	if (mode == READA) {
+	if (mode_flags & REQ_RAHEAD) {
 		if (pblocknr != *submit_ptr + 1 || !trylock_buffer(bh)) {
 			err = -EBUSY; /* internal code */
 			brelse(bh);
···
 	bh->b_blocknr = pblocknr; /* set block address for read */
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(mode, bh);
+	submit_bh(mode, mode_flags, bh);
 	bh->b_blocknr = blocknr; /* set back to the given block address */
 	*submit_ptr = pblocknr;
 	err = 0;
fs/nilfs2/btnode.h | +1 -1
···
 struct buffer_head *nilfs_btnode_create_block(struct address_space *btnc,
 					      __u64 blocknr);
 int nilfs_btnode_submit_block(struct address_space *, __u64, sector_t, int,
-			      struct buffer_head **, sector_t *);
+			      int, struct buffer_head **, sector_t *);
 void nilfs_btnode_delete(struct buffer_head *);
 int nilfs_btnode_prepare_change_key(struct address_space *,
 				    struct nilfs_btnode_chkey_ctxt *);
fs/nilfs2/btree.c | +4 -2
···
 	sector_t submit_ptr = 0;
 	int ret;
 
-	ret = nilfs_btnode_submit_block(btnc, ptr, 0, READ, &bh, &submit_ptr);
+	ret = nilfs_btnode_submit_block(btnc, ptr, 0, REQ_OP_READ, 0, &bh,
+					&submit_ptr);
 	if (ret) {
 		if (ret != -EEXIST)
 			return ret;
···
 		     n > 0 && i < ra->ncmax; n--, i++) {
 			ptr2 = nilfs_btree_node_get_ptr(ra->node, i, ra->ncmax);
 
-			ret = nilfs_btnode_submit_block(btnc, ptr2, 0, READA,
+			ret = nilfs_btnode_submit_block(btnc, ptr2, 0,
+							REQ_OP_READ, REQ_RAHEAD,
 							&ra_bh, &submit_ptr);
 			if (likely(!ret || ret == -EEXIST))
 				brelse(ra_bh);
fs/nilfs2/gcinode.c | +3 -2
···
 	bh->b_blocknr = pbn;
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(READ, bh);
+	submit_bh(REQ_OP_READ, 0, bh);
 	if (vbn)
 		bh->b_blocknr = vbn;
 out:
···
 	int ret;
 
 	ret = nilfs_btnode_submit_block(&NILFS_I(inode)->i_btnode_cache,
-					vbn ? : pbn, pbn, READ, out_bh, &pbn);
+					vbn ? : pbn, pbn, REQ_OP_READ, 0,
+					out_bh, &pbn);
 	if (ret == -EEXIST) /* internal code (cache hit) */
 		ret = 0;
 	return ret;
fs/nilfs2/mdt.c | +6 -5
···
 static int
 nilfs_mdt_submit_block(struct inode *inode, unsigned long blkoff,
-		       int mode, struct buffer_head **out_bh)
+		       int mode, int mode_flags, struct buffer_head **out_bh)
 {
 	struct buffer_head *bh;
 	__u64 blknum = 0;
···
 	if (buffer_uptodate(bh))
 		goto out;
 
-	if (mode == READA) {
+	if (mode_flags & REQ_RAHEAD) {
 		if (!trylock_buffer(bh)) {
 			ret = -EBUSY;
 			goto failed_bh;
···
 	bh->b_end_io = end_buffer_read_sync;
 	get_bh(bh);
-	submit_bh(mode, bh);
+	submit_bh(mode, mode_flags, bh);
 	ret = 0;
 
 	trace_nilfs2_mdt_submit_block(inode, inode->i_ino, blkoff, mode);
···
 	int i, nr_ra_blocks = NILFS_MDT_MAX_RA_BLOCKS;
 	int err;
 
-	err = nilfs_mdt_submit_block(inode, block, READ, &first_bh);
+	err = nilfs_mdt_submit_block(inode, block, REQ_OP_READ, 0, &first_bh);
 	if (err == -EEXIST) /* internal code */
 		goto out;
···
 	if (readahead) {
 		blkoff = block + 1;
 		for (i = 0; i < nr_ra_blocks; i++, blkoff++) {
-			err = nilfs_mdt_submit_block(inode, blkoff, READA, &bh);
+			err = nilfs_mdt_submit_block(inode, blkoff, REQ_OP_READ,
+						     REQ_RAHEAD, &bh);
 			if (likely(!err || err == -EEXIST))
 				brelse(bh);
 			else if (err != -EBUSY)
fs/nilfs2/segbuf.c | +10 -8
···
 }
 
 static int nilfs_segbuf_submit_bio(struct nilfs_segment_buffer *segbuf,
-				   struct nilfs_write_info *wi, int mode)
+				   struct nilfs_write_info *wi, int mode,
+				   int mode_flags)
 {
 	struct bio *bio = wi->bio;
 	int err;
···
 	bio->bi_end_io = nilfs_end_bio_write;
 	bio->bi_private = segbuf;
-	submit_bio(mode, bio);
+	bio_set_op_attrs(bio, mode, mode_flags);
+	submit_bio(bio);
 	segbuf->sb_nbio++;
 
 	wi->bio = NULL;
···
 		return 0;
 	}
 	/* bio is FULL */
-	err = nilfs_segbuf_submit_bio(segbuf, wi, mode);
+	err = nilfs_segbuf_submit_bio(segbuf, wi, mode, 0);
 	/* never submit current bh */
 	if (likely(!err))
 		goto repeat;
···
 {
 	struct nilfs_write_info wi;
 	struct buffer_head *bh;
-	int res = 0, rw = WRITE;
+	int res = 0;
 
 	wi.nilfs = nilfs;
 	nilfs_segbuf_prepare_write(segbuf, &wi);
 
 	list_for_each_entry(bh, &segbuf->sb_segsum_buffers, b_assoc_buffers) {
-		res = nilfs_segbuf_submit_bh(segbuf, &wi, bh, rw);
+		res = nilfs_segbuf_submit_bh(segbuf, &wi, bh, REQ_OP_WRITE);
 		if (unlikely(res))
 			goto failed_bio;
 	}
 
 	list_for_each_entry(bh, &segbuf->sb_payload_buffers, b_assoc_buffers) {
-		res = nilfs_segbuf_submit_bh(segbuf, &wi, bh, rw);
+		res = nilfs_segbuf_submit_bh(segbuf, &wi, bh, REQ_OP_WRITE);
 		if (unlikely(res))
 			goto failed_bio;
 	}
···
 		 * Last BIO is always sent through the following
 		 * submission.
 		 */
-		rw |= REQ_SYNC;
-		res = nilfs_segbuf_submit_bio(segbuf, &wi, rw);
+		res = nilfs_segbuf_submit_bio(segbuf, &wi, REQ_OP_WRITE,
+					      REQ_SYNC);
 	}
 
 failed_bio:
fs/ntfs/aops.c | +3 -3
···
 	for (i = 0; i < nr; i++) {
 		tbh = arr[i];
 		if (likely(!buffer_uptodate(tbh)))
-			submit_bh(READ, tbh);
+			submit_bh(REQ_OP_READ, 0, tbh);
 		else
 			ntfs_end_buffer_async_read(tbh, 1);
 	}
···
 	do {
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
-			submit_bh(WRITE, bh);
+			submit_bh(REQ_OP_WRITE, 0, bh);
 			need_end_writeback = false;
 		}
 		bh = next;
···
 		BUG_ON(!buffer_mapped(tbh));
 		get_bh(tbh);
 		tbh->b_end_io = end_buffer_write_sync;
-		submit_bh(WRITE, tbh);
+		submit_bh(REQ_OP_WRITE, 0, tbh);
 	}
 	/* Synchronize the mft mirror now if not @sync. */
 	if (is_mft && !sync)
fs/ntfs/compress.c | +1 -1
···
 		}
 		get_bh(tbh);
 		tbh->b_end_io = end_buffer_read_sync;
-		submit_bh(READ, tbh);
+		submit_bh(REQ_OP_READ, 0, tbh);
 	}
 
 	/* Wait for io completion on all buffer heads. */
fs/ntfs/file.c | +1 -1
···
 	lock_buffer(bh);
 	get_bh(bh);
 	bh->b_end_io = end_buffer_read_sync;
-	return submit_bh(READ, bh);
+	return submit_bh(REQ_OP_READ, 0, bh);
 }
 
 /**
fs/ntfs/logfile.c | +1 -1
···
 		 * completed ignore errors afterwards as we can assume
 		 * that if one buffer worked all of them will work.
 		 */
-		submit_bh(WRITE, bh);
+		submit_bh(REQ_OP_WRITE, 0, bh);
 		if (should_wait) {
 			should_wait = false;
 			wait_on_buffer(bh);
fs/ntfs/mft.c | +2 -2
···
 		clear_buffer_dirty(tbh);
 		get_bh(tbh);
 		tbh->b_end_io = end_buffer_write_sync;
-		submit_bh(WRITE, tbh);
+		submit_bh(REQ_OP_WRITE, 0, tbh);
 	}
 	/* Wait on i/o completion of buffers. */
 	for (i_bhs = 0; i_bhs < nr_bhs; i_bhs++) {
···
 		clear_buffer_dirty(tbh);
 		get_bh(tbh);
 		tbh->b_end_io = end_buffer_write_sync;
-		submit_bh(WRITE, tbh);
+		submit_bh(REQ_OP_WRITE, 0, tbh);
 	}
 	/* Synchronize the mft mirror now if not @sync. */
 	if (!sync && ni->mft_no < vol->mftmirr_size)
fs/ocfs2/aops.c | +1 -1
···
 			    !buffer_new(bh) &&
 			    ocfs2_should_read_blk(inode, page, block_start) &&
 			    (block_start < from || block_end > to)) {
-				ll_rw_block(READ, 1, &bh);
+				ll_rw_block(REQ_OP_READ, 0, 1, &bh);
 				*wait_bh++=bh;
 			}
fs/ocfs2/buffer_head_io.c | +4 -4
···
 	get_bh(bh); /* for end_buffer_write_sync() */
 	bh->b_end_io = end_buffer_write_sync;
-	submit_bh(WRITE, bh);
+	submit_bh(REQ_OP_WRITE, 0, bh);
 
 	wait_on_buffer(bh);
···
 			clear_buffer_uptodate(bh);
 			get_bh(bh); /* for end_buffer_read_sync() */
 			bh->b_end_io = end_buffer_read_sync;
-			submit_bh(READ, bh);
+			submit_bh(REQ_OP_READ, 0, bh);
 		}
 
 		for (i = nr; i > 0; i--) {
···
 			if (validate)
 				set_buffer_needs_validate(bh);
 			bh->b_end_io = end_buffer_read_sync;
-			submit_bh(READ, bh);
+			submit_bh(REQ_OP_READ, 0, bh);
 			continue;
 		}
 	}
···
 	get_bh(bh); /* for end_buffer_write_sync() */
 	bh->b_end_io = end_buffer_write_sync;
 	ocfs2_compute_meta_ecc(osb->sb, bh->b_data, &di->i_check);
-	submit_bh(WRITE, bh);
+	submit_bh(REQ_OP_WRITE, 0, bh);
 
 	wait_on_buffer(bh);
fs/ocfs2/cluster/heartbeat.c | +9 -5
···
 static struct bio *o2hb_setup_one_bio(struct o2hb_region *reg,
 				      struct o2hb_bio_wait_ctxt *wc,
 				      unsigned int *current_slot,
-				      unsigned int max_slots)
+				      unsigned int max_slots, int op,
+				      int op_flags)
 {
 	int len, current_page;
 	unsigned int vec_len, vec_start;
···
 	bio->bi_bdev = reg->hr_bdev;
 	bio->bi_private = wc;
 	bio->bi_end_io = o2hb_bio_end_io;
+	bio_set_op_attrs(bio, op, op_flags);
 
 	vec_start = (cs << bits) % PAGE_SIZE;
 	while(cs < max_slots) {
···
 	o2hb_bio_wait_init(&wc);
 
 	while(current_slot < max_slots) {
-		bio = o2hb_setup_one_bio(reg, &wc, &current_slot, max_slots);
+		bio = o2hb_setup_one_bio(reg, &wc, &current_slot, max_slots,
+					 REQ_OP_READ, 0);
 		if (IS_ERR(bio)) {
 			status = PTR_ERR(bio);
 			mlog_errno(status);
···
 		}
 
 		atomic_inc(&wc.wc_num_reqs);
-		submit_bio(READ, bio);
+		submit_bio(bio);
 	}
 
 	status = 0;
···
 
 	slot = o2nm_this_node();
 
-	bio = o2hb_setup_one_bio(reg, write_wc, &slot, slot+1);
+	bio = o2hb_setup_one_bio(reg, write_wc, &slot, slot+1, REQ_OP_WRITE,
+				 WRITE_SYNC);
 	if (IS_ERR(bio)) {
 		status = PTR_ERR(bio);
 		mlog_errno(status);
···
 	}
 
 	atomic_inc(&write_wc->wc_num_reqs);
-	submit_bio(WRITE_SYNC, bio);
+	submit_bio(bio);
 
 	status = 0;
 bail:
fs/ocfs2/super.c | +1 -1
···
 	if (!buffer_dirty(*bh))
 		clear_buffer_uptodate(*bh);
 	unlock_buffer(*bh);
-	ll_rw_block(READ, 1, bh);
+	ll_rw_block(REQ_OP_READ, 0, 1, bh);
 	wait_on_buffer(*bh);
 	if (!buffer_uptodate(*bh)) {
 		mlog_errno(-EIO);
fs/reiserfs/inode.c | +2 -2
···
 	do {
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
-			submit_bh(WRITE, bh);
+			submit_bh(REQ_OP_WRITE, 0, bh);
 			nr++;
 		}
 		put_bh(bh);
···
 		struct buffer_head *next = bh->b_this_page;
 		if (buffer_async_write(bh)) {
 			clear_buffer_dirty(bh);
-			submit_bh(WRITE, bh);
+			submit_bh(REQ_OP_WRITE, 0, bh);
 			nr++;
 		}
 		put_bh(bh);
fs/reiserfs/journal.c | +7 -7
···
 		BUG();
 	if (!buffer_uptodate(bh))
 		BUG();
-	submit_bh(WRITE, bh);
+	submit_bh(REQ_OP_WRITE, 0, bh);
 }
 
 static void submit_ordered_buffer(struct buffer_head *bh)
···
 	clear_buffer_dirty(bh);
 	if (!buffer_uptodate(bh))
 		BUG();
-	submit_bh(WRITE, bh);
+	submit_bh(REQ_OP_WRITE, 0, bh);
 }
 
 #define CHUNK_SIZE 32
···
 		 */
 		if (buffer_dirty(bh) && unlikely(bh->b_page->mapping == NULL)) {
 			spin_unlock(lock);
-			ll_rw_block(WRITE, 1, &bh);
+			ll_rw_block(REQ_OP_WRITE, 0, 1, &bh);
 			spin_lock(lock);
 		}
 		put_bh(bh);
···
 		if (tbh) {
 			if (buffer_dirty(tbh)) {
 				depth = reiserfs_write_unlock_nested(s);
-				ll_rw_block(WRITE, 1, &tbh);
+				ll_rw_block(REQ_OP_WRITE, 0, 1, &tbh);
 				reiserfs_write_lock_nested(s, depth);
 			}
 			put_bh(tbh) ;
···
 		}
 	}
 	/* read in the log blocks, memcpy to the corresponding real block */
-	ll_rw_block(READ, get_desc_trans_len(desc), log_blocks);
+	ll_rw_block(REQ_OP_READ, 0, get_desc_trans_len(desc), log_blocks);
 	for (i = 0; i < get_desc_trans_len(desc); i++) {
 
 		wait_on_buffer(log_blocks[i]);
···
 	/* flush out the real blocks */
 	for (i = 0; i < get_desc_trans_len(desc); i++) {
 		set_buffer_dirty(real_blocks[i]);
-		write_dirty_buffer(real_blocks[i], WRITE);
+		write_dirty_buffer(real_blocks[i], 0);
 	}
 	for (i = 0; i < get_desc_trans_len(desc); i++) {
 		wait_on_buffer(real_blocks[i]);
···
 		} else
 			bhlist[j++] = bh;
 	}
-	ll_rw_block(READ, j, bhlist);
+	ll_rw_block(REQ_OP_READ, 0, j, bhlist);
 	for (i = 1; i < j; i++)
 		brelse(bhlist[i]);
 	bh = bhlist[0];
fs/reiserfs/stree.c | +2 -2
···
 			if (!buffer_uptodate(bh[j])) {
 				if (depth == -1)
 					depth = reiserfs_write_unlock_nested(s);
-				ll_rw_block(READA, 1, bh + j);
+				ll_rw_block(REQ_OP_READ, READA, 1, bh + j);
 			}
 			brelse(bh[j]);
 		}
···
 		if (!buffer_uptodate(bh) && depth == -1)
 			depth = reiserfs_write_unlock_nested(sb);
 
-		ll_rw_block(READ, 1, &bh);
+		ll_rw_block(REQ_OP_READ, 0, 1, &bh);
 		wait_on_buffer(bh);
 
 		if (depth != -1)
fs/reiserfs/super.c | +1 -1
···
 /* after journal replay, reread all bitmap and super blocks */
 static int reread_meta_blocks(struct super_block *s)
 {
-	ll_rw_block(READ, 1, &SB_BUFFER_WITH_SB(s));
+	ll_rw_block(REQ_OP_READ, 0, 1, &SB_BUFFER_WITH_SB(s));
 	wait_on_buffer(SB_BUFFER_WITH_SB(s));
 	if (!buffer_uptodate(SB_BUFFER_WITH_SB(s))) {
 		reiserfs_warning(s, "reiserfs-2504", "error reading the super");
fs/squashfs/block.c | +2 -2
···
 				goto block_release;
 			bytes += msblk->devblksize;
 		}
-		ll_rw_block(READ, b, bh);
+		ll_rw_block(REQ_OP_READ, 0, b, bh);
 	} else {
 		/*
 		 * Metadata block.
···
 				goto block_release;
 			bytes += msblk->devblksize;
 		}
-		ll_rw_block(READ, b - 1, bh + 1);
+		ll_rw_block(REQ_OP_READ, 0, b - 1, bh + 1);
 	}
 
 	for (i = 0; i < b; i++) {
fs/udf/dir.c | +1 -1
···
 				brelse(tmp);
 			}
 			if (num) {
-				ll_rw_block(READA, num, bha);
+				ll_rw_block(REQ_OP_READ, READA, num, bha);
 				for (i = 0; i < num; i++)
 					brelse(bha[i]);
 			}
fs/udf/directory.c | +1 -1
···
 				brelse(tmp);
 			}
 			if (num) {
-				ll_rw_block(READA, num, bha);
+				ll_rw_block(REQ_OP_READ, READA, num, bha);
 				for (i = 0; i < num; i++)
 					brelse(bha[i]);
 			}
fs/udf/inode.c | +1 -1
···
 	if (buffer_uptodate(bh))
 		return bh;
 
-	ll_rw_block(READ, 1, &bh);
+	ll_rw_block(REQ_OP_READ, 0, 1, &bh);
 
 	wait_on_buffer(bh);
 	if (buffer_uptodate(bh))
fs/ufs/balloc.c | +1 -1
···
 		if (!buffer_mapped(bh))
 			map_bh(bh, inode->i_sb, oldb + pos);
 		if (!buffer_uptodate(bh)) {
-			ll_rw_block(READ, 1, &bh);
+			ll_rw_block(REQ_OP_READ, 0, 1, &bh);
 			wait_on_buffer(bh);
 			if (!buffer_uptodate(bh)) {
 				ufs_error(inode->i_sb, __func__,
fs/ufs/util.c | +1 -1
···
 	unsigned i;
 
 	for (i = 0; i < ubh->count; i++)
-		write_dirty_buffer(ubh->bh[i], WRITE);
+		write_dirty_buffer(ubh->bh[i], 0);
 
 	for (i = 0; i < ubh->count; i++)
 		wait_on_buffer(ubh->bh[i]);
fs/xfs/xfs_aops.c | +6 -5
···
 	ioend->io_bio->bi_private = ioend;
 	ioend->io_bio->bi_end_io = xfs_end_bio;
-
+	bio_set_op_attrs(ioend->io_bio, REQ_OP_WRITE,
+			 (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : 0);
 	/*
 	 * If we are failing the IO now, just mark the ioend with an
 	 * error and finish it. This will run IO completion immediately
···
 		return status;
 	}
 
-	submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE,
-		   ioend->io_bio);
+	submit_bio(ioend->io_bio);
 	return 0;
 }
···
 	bio_chain(ioend->io_bio, new);
 	bio_get(ioend->io_bio);		/* for xfs_destroy_ioend */
-	submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE,
-		   ioend->io_bio);
+	bio_set_op_attrs(ioend->io_bio, REQ_OP_WRITE,
+			 (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : 0);
+	submit_bio(ioend->io_bio);
 	ioend->io_bio = new;
 }
fs/xfs/xfs_buf.c | +16 -16
···
 	int		map,
 	int		*buf_offset,
 	int		*count,
-	int		rw)
+	int		op,
+	int		op_flags)
 {
 	int		page_index;
 	int		total_nr_pages = bp->b_page_count;
···
 next_chunk:
 	atomic_inc(&bp->b_io_remaining);
-	nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT);
-	if (nr_pages > total_nr_pages)
-		nr_pages = total_nr_pages;
+	nr_pages = min(total_nr_pages, BIO_MAX_PAGES);
 
 	bio = bio_alloc(GFP_NOIO, nr_pages);
 	bio->bi_bdev = bp->b_target->bt_bdev;
 	bio->bi_iter.bi_sector = sector;
 	bio->bi_end_io = xfs_buf_bio_end_io;
 	bio->bi_private = bp;
-
+	bio_set_op_attrs(bio, op, op_flags);
 
 	for (; size && nr_pages; nr_pages--, page_index++) {
 		int	rbytes, nbytes = PAGE_SIZE - offset;
···
 			flush_kernel_vmap_range(bp->b_addr,
 						xfs_buf_vmap_len(bp));
 		}
-		submit_bio(rw, bio);
+		submit_bio(bio);
 		if (size)
 			goto next_chunk;
 	} else {
···
 	struct xfs_buf	*bp)
 {
 	struct blk_plug	plug;
-	int		rw;
+	int		op;
+	int		op_flags = 0;
 	int		offset;
 	int		size;
 	int		i;
···
 	bp->b_ioend_wq = bp->b_target->bt_mount->m_buf_workqueue;
 
 	if (bp->b_flags & XBF_WRITE) {
+		op = REQ_OP_WRITE;
 		if (bp->b_flags & XBF_SYNCIO)
-			rw = WRITE_SYNC;
-		else
-			rw = WRITE;
+			op_flags = WRITE_SYNC;
 		if (bp->b_flags & XBF_FUA)
-			rw |= REQ_FUA;
+			op_flags |= REQ_FUA;
 		if (bp->b_flags & XBF_FLUSH)
-			rw |= REQ_FLUSH;
+			op_flags |= REQ_PREFLUSH;
 
 		/*
 		 * Run the write verifier callback function if it exists. If
···
 			}
 		}
 	} else if (bp->b_flags & XBF_READ_AHEAD) {
-		rw = READA;
+		op = REQ_OP_READ;
+		op_flags = REQ_RAHEAD;
 	} else {
-		rw = READ;
+		op = REQ_OP_READ;
 	}
 
 	/* we only use the buffer cache for meta-data */
-	rw |= REQ_META;
+	op_flags |= REQ_META;
 
 	/*
 	 * Walk all the vectors issuing IO on them. Set up the initial offset
···
 	size = BBTOB(bp->b_io_length);
 	blk_start_plug(&plug);
 	for (i = 0; i < bp->b_map_count; i++) {
-		xfs_buf_ioapply_map(bp, i, &offset, &size, rw);
+		xfs_buf_ioapply_map(bp, i, &offset, &size, op, op_flags);
 		if (bp->b_error)
 			break;
 		if (size <= 0)
+13 -70
include/linux/bio.h
···
 #endif
 
 #define BIO_MAX_PAGES          256
-#define BIO_MAX_SIZE           (BIO_MAX_PAGES << PAGE_SHIFT)
-#define BIO_MAX_SECTORS        (BIO_MAX_SIZE >> 9)
 
-/*
- * upper 16 bits of bi_rw define the io priority of this bio
- */
-#define BIO_PRIO_SHIFT (8 * sizeof(unsigned long) - IOPRIO_BITS)
-#define bio_prio(bio)  ((bio)->bi_rw >> BIO_PRIO_SHIFT)
-#define bio_prio_valid(bio)    ioprio_valid(bio_prio(bio))
-
-#define bio_set_prio(bio, prio)        do {                    \
-       WARN_ON(prio >= (1 << IOPRIO_BITS));                    \
-       (bio)->bi_rw &= ((1UL << BIO_PRIO_SHIFT) - 1);          \
-       (bio)->bi_rw |= ((unsigned long) (prio) << BIO_PRIO_SHIFT);     \
-} while (0)
-
-/*
- * various member access, note that bio_data should of course not be used
- * on highmem page vectors
- */
-#define __bvec_iter_bvec(bvec, iter)   (&(bvec)[(iter).bi_idx])
-
-#define bvec_iter_page(bvec, iter)                             \
-       (__bvec_iter_bvec((bvec), (iter))->bv_page)
-
-#define bvec_iter_len(bvec, iter)                              \
-       min((iter).bi_size,                                     \
-           __bvec_iter_bvec((bvec), (iter))->bv_len - (iter).bi_bvec_done)
-
-#define bvec_iter_offset(bvec, iter)                           \
-       (__bvec_iter_bvec((bvec), (iter))->bv_offset + (iter).bi_bvec_done)
-
-#define bvec_iter_bvec(bvec, iter)                             \
-((struct bio_vec) {                                            \
-       .bv_page        = bvec_iter_page((bvec), (iter)),       \
-       .bv_len         = bvec_iter_len((bvec), (iter)),        \
-       .bv_offset      = bvec_iter_offset((bvec), (iter)),     \
-})
+#define bio_prio(bio)                  (bio)->bi_ioprio
+#define bio_set_prio(bio, prio)                ((bio)->bi_ioprio = prio)
 
 #define bio_iter_iovec(bio, iter)                              \
        bvec_iter_bvec((bio)->bi_io_vec, (iter))
···
 {
        if (bio &&
            bio->bi_iter.bi_size &&
-           !(bio->bi_rw & REQ_DISCARD))
+           bio_op(bio) != REQ_OP_DISCARD)
                return true;
 
        return false;
+}
+
+static inline bool bio_no_advance_iter(struct bio *bio)
+{
+       return bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_WRITE_SAME;
 }
 
 static inline bool bio_is_rw(struct bio *bio)
···
        if (!bio_has_data(bio))
                return false;
 
-       if (bio->bi_rw & BIO_NO_ADVANCE_ITER_MASK)
+       if (bio_no_advance_iter(bio))
                return false;
 
        return true;
···
 #define bio_for_each_segment_all(bvl, bio, i)                          \
        for (i = 0, bvl = (bio)->bi_io_vec; i < (bio)->bi_vcnt; i++, bvl++)
 
-static inline void bvec_iter_advance(struct bio_vec *bv, struct bvec_iter *iter,
-                                    unsigned bytes)
-{
-       WARN_ONCE(bytes > iter->bi_size,
-                 "Attempted to advance past end of bvec iter\n");
-
-       while (bytes) {
-               unsigned len = min(bytes, bvec_iter_len(bv, *iter));
-
-               bytes -= len;
-               iter->bi_size -= len;
-               iter->bi_bvec_done += len;
-
-               if (iter->bi_bvec_done == __bvec_iter_bvec(bv, *iter)->bv_len) {
-                       iter->bi_bvec_done = 0;
-                       iter->bi_idx++;
-               }
-       }
-}
-
-#define for_each_bvec(bvl, bio_vec, iter, start)               \
-       for (iter = (start);                                    \
-            (iter).bi_size &&                                  \
-               ((bvl = bvec_iter_bvec((bio_vec), (iter))), 1); \
-            bvec_iter_advance((bio_vec), &(iter), (bvl).bv_len))
-
-
 static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
                                    unsigned bytes)
 {
        iter->bi_sector += bytes >> 9;
 
-       if (bio->bi_rw & BIO_NO_ADVANCE_ITER_MASK)
+       if (bio_no_advance_iter(bio))
                iter->bi_size -= bytes;
        else
                bvec_iter_advance(bio->bi_io_vec, iter, bytes);
···
         * differently:
         */
 
-       if (bio->bi_rw & REQ_DISCARD)
+       if (bio_op(bio) == REQ_OP_DISCARD)
                return 1;
 
-       if (bio->bi_rw & REQ_WRITE_SAME)
+       if (bio_op(bio) == REQ_OP_WRITE_SAME)
                return 1;
 
        bio_for_each_segment(bv, bio, iter)
···
 struct request_queue;
 extern int bio_phys_segments(struct request_queue *, struct bio *);
 
-extern int submit_bio_wait(int rw, struct bio *bio);
+extern int submit_bio_wait(struct bio *bio);
 extern void bio_advance(struct bio *, unsigned);
 
 extern void bio_init(struct bio *);
+7 -6
include/linux/blk-cgroup.h
···
 /**
  * blkg_rwstat_add - add a value to a blkg_rwstat
  * @rwstat: target blkg_rwstat
- * @rw: mask of REQ_{WRITE|SYNC}
+ * @op: REQ_OP
+ * @op_flags: rq_flag_bits
  * @val: value to add
  *
  * Add @val to @rwstat.  The counters are chosen according to @rw.  The
  * caller is responsible for synchronizing calls to this function.
  */
 static inline void blkg_rwstat_add(struct blkg_rwstat *rwstat,
-                                  int rw, uint64_t val)
+                                  int op, int op_flags, uint64_t val)
 {
        struct percpu_counter *cnt;
 
-       if (rw & REQ_WRITE)
+       if (op_is_write(op))
                cnt = &rwstat->cpu_cnt[BLKG_RWSTAT_WRITE];
        else
                cnt = &rwstat->cpu_cnt[BLKG_RWSTAT_READ];
 
        __percpu_counter_add(cnt, val, BLKG_STAT_CPU_BATCH);
 
-       if (rw & REQ_SYNC)
+       if (op_flags & REQ_SYNC)
                cnt = &rwstat->cpu_cnt[BLKG_RWSTAT_SYNC];
        else
                cnt = &rwstat->cpu_cnt[BLKG_RWSTAT_ASYNC];
···
 
        if (!throtl) {
                blkg = blkg ?: q->root_blkg;
-               blkg_rwstat_add(&blkg->stat_bytes, bio->bi_rw,
+               blkg_rwstat_add(&blkg->stat_bytes, bio_op(bio), bio->bi_rw,
                                bio->bi_iter.bi_size);
-               blkg_rwstat_add(&blkg->stat_ios, bio->bi_rw, 1);
+               blkg_rwstat_add(&blkg->stat_ios, bio_op(bio), bio->bi_rw, 1);
        }
 
        rcu_read_unlock();
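The blkg_rwstat_add() change above splits one `rw` mask into an op plus flags: the op alone picks the READ/WRITE counter, and only the flags pick SYNC/ASYNC. A minimal userspace sketch of that selection logic (counter names mirror the kernel's BLKG_RWSTAT_* slots, but the flag value here is an illustrative stand-in, not the kernel's real REQ_SYNC bit):

```c
#include <assert.h>
#include <stdint.h>

/* Counter slots, mirroring BLKG_RWSTAT_{READ,WRITE,SYNC,ASYNC}. */
enum { RWSTAT_READ, RWSTAT_WRITE, RWSTAT_SYNC, RWSTAT_ASYNC, RWSTAT_NR };

enum { OP_READ = 0, OP_WRITE = 1 };
#define FLAG_SYNC (1u << 4)     /* stand-in for REQ_SYNC */

/* One value feeds two counters: direction from the op, sync-ness
 * from the flags -- the same pairing the patched helper performs. */
static void rwstat_add(uint64_t cnt[RWSTAT_NR], int op, unsigned int flags,
                       uint64_t val)
{
        cnt[op == OP_WRITE ? RWSTAT_WRITE : RWSTAT_READ] += val;
        cnt[(flags & FLAG_SYNC) ? RWSTAT_SYNC : RWSTAT_ASYNC] += val;
}
```

A sync write only bumps the WRITE and SYNC buckets; a plain read bumps READ and ASYNC, which is why the caller now passes `bio_op(bio)` and `bio->bi_rw` separately.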
+30 -38
include/linux/blk_types.h
···
 #define __LINUX_BLK_TYPES_H
 
 #include <linux/types.h>
+#include <linux/bvec.h>
 
 struct bio_set;
 struct bio;
···
 typedef void (bio_end_io_t) (struct bio *);
 typedef void (bio_destructor_t) (struct bio *);
 
-/*
- * was unsigned short, but we might as well be ready for > 64kB I/O pages
- */
-struct bio_vec {
-       struct page     *bv_page;
-       unsigned int    bv_len;
-       unsigned int    bv_offset;
-};
-
 #ifdef CONFIG_BLOCK
-
-struct bvec_iter {
-       sector_t                bi_sector;      /* device address in 512 byte
-                                                  sectors */
-       unsigned int            bi_size;        /* residual I/O count */
-
-       unsigned int            bi_idx;         /* current index into bvl_vec */
-
-       unsigned int            bi_bvec_done;   /* number of bytes completed in
-                                                  current bvec */
-};
-
 /*
  * main unit of I/O for the block layer and lower layers (ie drivers and
  * stacking drivers)
···
        struct block_device     *bi_bdev;
        unsigned int            bi_flags;       /* status, command, etc */
        int                     bi_error;
-       unsigned long           bi_rw;          /* bottom bits READ/WRITE,
-                                                * top bits priority
+       unsigned int            bi_rw;          /* bottom bits req flags,
+                                                * top bits REQ_OP
                                                 */
+       unsigned short          bi_ioprio;
 
        struct bvec_iter        bi_iter;
···
        struct bio_vec          bi_inline_vecs[0];
 };
 
+#define BIO_OP_SHIFT   (8 * sizeof(unsigned int) - REQ_OP_BITS)
+#define bio_op(bio)    ((bio)->bi_rw >> BIO_OP_SHIFT)
+
+#define bio_set_op_attrs(bio, op, op_flags) do {               \
+       WARN_ON(op >= (1 << REQ_OP_BITS));                      \
+       (bio)->bi_rw &= ((1 << BIO_OP_SHIFT) - 1);              \
+       (bio)->bi_rw |= ((unsigned int) (op) << BIO_OP_SHIFT);  \
+       (bio)->bi_rw |= op_flags;                               \
+} while (0)
+
 #define BIO_RESET_BYTES                offsetof(struct bio, bi_max_vecs)
 
 /*
···
  */
 enum rq_flag_bits {
        /* common flags */
-       __REQ_WRITE,            /* not set, read. set, write */
        __REQ_FAILFAST_DEV,     /* no driver retries of device errors */
        __REQ_FAILFAST_TRANSPORT, /* no driver retries of transport errors */
        __REQ_FAILFAST_DRIVER,  /* no driver retries of driver errors */
···
        __REQ_SYNC,             /* request is sync (sync write or read) */
        __REQ_META,             /* metadata io request */
        __REQ_PRIO,             /* boost priority in cfq */
-       __REQ_DISCARD,          /* request to discard sectors */
-       __REQ_SECURE,           /* secure discard (used with __REQ_DISCARD) */
-       __REQ_WRITE_SAME,       /* write same block many times */
+       __REQ_SECURE,           /* secure discard (used with REQ_OP_DISCARD) */
 
        __REQ_NOIDLE,           /* don't anticipate more IO after this one */
        __REQ_INTEGRITY,        /* I/O includes block integrity payload */
        __REQ_FUA,              /* forced unit access */
-       __REQ_FLUSH,            /* request for cache flush */
+       __REQ_PREFLUSH,         /* request for cache flush */
 
        /* bio only flags */
        __REQ_RAHEAD,           /* read ahead, can fail anytime */
···
        __REQ_NR_BITS,          /* stops here */
 };
 
-#define REQ_WRITE              (1ULL << __REQ_WRITE)
 #define REQ_FAILFAST_DEV       (1ULL << __REQ_FAILFAST_DEV)
 #define REQ_FAILFAST_TRANSPORT (1ULL << __REQ_FAILFAST_TRANSPORT)
 #define REQ_FAILFAST_DRIVER    (1ULL << __REQ_FAILFAST_DRIVER)
 #define REQ_SYNC               (1ULL << __REQ_SYNC)
 #define REQ_META               (1ULL << __REQ_META)
 #define REQ_PRIO               (1ULL << __REQ_PRIO)
-#define REQ_DISCARD            (1ULL << __REQ_DISCARD)
-#define REQ_WRITE_SAME         (1ULL << __REQ_WRITE_SAME)
 #define REQ_NOIDLE             (1ULL << __REQ_NOIDLE)
 #define REQ_INTEGRITY          (1ULL << __REQ_INTEGRITY)
 
 #define REQ_FAILFAST_MASK \
        (REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER)
 #define REQ_COMMON_MASK \
-       (REQ_WRITE | REQ_FAILFAST_MASK | REQ_SYNC | REQ_META | REQ_PRIO | \
-        REQ_DISCARD | REQ_WRITE_SAME | REQ_NOIDLE | REQ_FLUSH | REQ_FUA | \
-        REQ_SECURE | REQ_INTEGRITY | REQ_NOMERGE)
+       (REQ_FAILFAST_MASK | REQ_SYNC | REQ_META | REQ_PRIO | REQ_NOIDLE | \
+        REQ_PREFLUSH | REQ_FUA | REQ_SECURE | REQ_INTEGRITY | REQ_NOMERGE)
 #define REQ_CLONE_MASK         REQ_COMMON_MASK
-
-#define BIO_NO_ADVANCE_ITER_MASK       (REQ_DISCARD|REQ_WRITE_SAME)
 
 /* This mask is used for both bio and request merge checking */
 #define REQ_NOMERGE_FLAGS \
-       (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | REQ_FLUSH | REQ_FUA | REQ_FLUSH_SEQ)
+       (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | REQ_PREFLUSH | REQ_FUA | REQ_FLUSH_SEQ)
 
 #define REQ_RAHEAD             (1ULL << __REQ_RAHEAD)
 #define REQ_THROTTLED          (1ULL << __REQ_THROTTLED)
···
 #define REQ_PREEMPT            (1ULL << __REQ_PREEMPT)
 #define REQ_ALLOCED            (1ULL << __REQ_ALLOCED)
 #define REQ_COPY_USER          (1ULL << __REQ_COPY_USER)
-#define REQ_FLUSH              (1ULL << __REQ_FLUSH)
+#define REQ_PREFLUSH           (1ULL << __REQ_PREFLUSH)
 #define REQ_FLUSH_SEQ          (1ULL << __REQ_FLUSH_SEQ)
 #define REQ_IO_STAT            (1ULL << __REQ_IO_STAT)
 #define REQ_MIXED_MERGE                (1ULL << __REQ_MIXED_MERGE)
···
 #define REQ_PM                 (1ULL << __REQ_PM)
 #define REQ_HASHED             (1ULL << __REQ_HASHED)
 #define REQ_MQ_INFLIGHT                (1ULL << __REQ_MQ_INFLIGHT)
+
+enum req_op {
+       REQ_OP_READ,
+       REQ_OP_WRITE,
+       REQ_OP_DISCARD,         /* request to discard sectors */
+       REQ_OP_WRITE_SAME,      /* write same block many times */
+       REQ_OP_FLUSH,           /* request for cache flush */
+};
+
+#define REQ_OP_BITS 3
 
 typedef unsigned int blk_qc_t;
 #define BLK_QC_T_NONE  -1U
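The blk_types.h hunks above define the new encoding: the REQ_OP value lives in the top REQ_OP_BITS of the now 32-bit `bi_rw`, with the rq_flag_bits below it. A standalone userspace model of the `bio_set_op_attrs()`/`bio_op()` pair (the WARN_ON is dropped; names mirror the kernel macros but this is a sketch, not kernel code):

```c
#include <assert.h>

#define REQ_OP_BITS   3
#define BIO_OP_SHIFT  (8 * sizeof(unsigned int) - REQ_OP_BITS)  /* 29 on 32-bit int */

enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD,
              REQ_OP_WRITE_SAME, REQ_OP_FLUSH };

/* Pack an op into the top bits of rw, OR the flags into the low bits,
 * exactly as the bio_set_op_attrs() macro does on bi_rw. */
static unsigned int set_op_attrs(unsigned int rw, unsigned int op,
                                 unsigned int op_flags)
{
        rw &= (1u << BIO_OP_SHIFT) - 1;   /* clear any previous op */
        rw |= op << BIO_OP_SHIFT;         /* op in the top REQ_OP_BITS */
        rw |= op_flags;                   /* flags stay in the low bits */
        return rw;
}

/* bio_op(): recover the op by shifting the flags away. */
static unsigned int get_op(unsigned int rw)
{
        return rw >> BIO_OP_SHIFT;
}
```

Because the op is cleared before being re-set, changing the op of an already-flagged word leaves the flag bits untouched, which is what lets callers pass op and flags independently.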
+42 -22
include/linux/blkdev.h
···
        struct list_head queuelist;
        union {
                struct call_single_data csd;
-               unsigned long fifo_time;
+               u64 fifo_time;
        };
 
        struct request_queue *q;
        struct blk_mq_ctx *mq_ctx;
 
-       u64 cmd_flags;
-       unsigned cmd_type;
-       unsigned long atomic_flags;
-
        int cpu;
+       unsigned cmd_type;
+       u64 cmd_flags;
+       unsigned long atomic_flags;
 
        /* the following two fields are internal, NEVER access directly */
        unsigned int __data_len;        /* total data len */
···
        /* for bidi */
        struct request *next_rq;
 };
+
+#define REQ_OP_SHIFT (8 * sizeof(u64) - REQ_OP_BITS)
+#define req_op(req)  ((req)->cmd_flags >> REQ_OP_SHIFT)
+
+#define req_set_op(req, op) do {                               \
+       WARN_ON(op >= (1 << REQ_OP_BITS));                      \
+       (req)->cmd_flags &= ((1ULL << REQ_OP_SHIFT) - 1);       \
+       (req)->cmd_flags |= ((u64) (op) << REQ_OP_SHIFT);       \
+} while (0)
+
+#define req_set_op_attrs(req, op, flags) do {  \
+       req_set_op(req, op);                    \
+       (req)->cmd_flags |= flags;              \
+} while (0)
 
 static inline unsigned short req_get_ioprio(struct request *req)
 {
···
 #define QUEUE_FLAG_WC         23       /* Write back caching */
 #define QUEUE_FLAG_FUA        24       /* device supports FUA writes */
 #define QUEUE_FLAG_FLUSH_NQ   25       /* flush not queueuable */
+#define QUEUE_FLAG_DAX        26       /* device supports DAX */
 
 #define QUEUE_FLAG_DEFAULT     ((1 << QUEUE_FLAG_IO_STAT) |            \
                                 (1 << QUEUE_FLAG_STACKABLE)    |       \
···
 #define blk_queue_discard(q)   test_bit(QUEUE_FLAG_DISCARD, &(q)->queue_flags)
 #define blk_queue_secdiscard(q)        (blk_queue_discard(q) && \
        test_bit(QUEUE_FLAG_SECDISCARD, &(q)->queue_flags))
+#define blk_queue_dax(q)       test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
 
 #define blk_noretry_request(rq) \
        ((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
···
 
 #define list_entry_rq(ptr)     list_entry((ptr), struct request, queuelist)
 
-#define rq_data_dir(rq)                ((int)((rq)->cmd_flags & 1))
+#define rq_data_dir(rq)                (op_is_write(req_op(rq)) ? WRITE : READ)
 
 /*
  * Driver can handle struct request, if it either has an old style
···
 /*
  * We regard a request as sync, if either a read or a sync write
  */
-static inline bool rw_is_sync(unsigned int rw_flags)
+static inline bool rw_is_sync(int op, unsigned int rw_flags)
 {
-       return !(rw_flags & REQ_WRITE) || (rw_flags & REQ_SYNC);
+       return op == REQ_OP_READ || (rw_flags & REQ_SYNC);
 }
 
 static inline bool rq_is_sync(struct request *rq)
 {
-       return rw_is_sync(rq->cmd_flags);
+       return rw_is_sync(req_op(rq), rq->cmd_flags);
 }
 
 static inline bool blk_rl_full(struct request_list *rl, bool sync)
···
        if (rq->cmd_type != REQ_TYPE_FS)
                return false;
 
+       if (req_op(rq) == REQ_OP_FLUSH)
+               return false;
+
        if (rq->cmd_flags & REQ_NOMERGE_FLAGS)
                return false;
 
        return true;
 }
 
-static inline bool blk_check_merge_flags(unsigned int flags1,
-                                        unsigned int flags2)
+static inline bool blk_check_merge_flags(unsigned int flags1, unsigned int op1,
+                                        unsigned int flags2, unsigned int op2)
 {
-       if ((flags1 & REQ_DISCARD) != (flags2 & REQ_DISCARD))
+       if ((op1 == REQ_OP_DISCARD) != (op2 == REQ_OP_DISCARD))
                return false;
 
        if ((flags1 & REQ_SECURE) != (flags2 & REQ_SECURE))
                return false;
 
-       if ((flags1 & REQ_WRITE_SAME) != (flags2 & REQ_WRITE_SAME))
+       if ((op1 == REQ_OP_WRITE_SAME) != (op2 == REQ_OP_WRITE_SAME))
                return false;
 
        return true;
···
 }
 
 static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
-                                                    unsigned int cmd_flags)
+                                                    int op)
 {
-       if (unlikely(cmd_flags & REQ_DISCARD))
+       if (unlikely(op == REQ_OP_DISCARD))
                return min(q->limits.max_discard_sectors, UINT_MAX >> 9);
 
-       if (unlikely(cmd_flags & REQ_WRITE_SAME))
+       if (unlikely(op == REQ_OP_WRITE_SAME))
                return q->limits.max_write_same_sectors;
 
        return q->limits.max_sectors;
···
                        (offset & (q->limits.chunk_sectors - 1));
 }
 
-static inline unsigned int blk_rq_get_max_sectors(struct request *rq)
+static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
+                                                 sector_t offset)
 {
        struct request_queue *q = rq->q;
 
        if (unlikely(rq->cmd_type != REQ_TYPE_FS))
                return q->limits.max_hw_sectors;
 
-       if (!q->limits.chunk_sectors || (rq->cmd_flags & REQ_DISCARD))
-               return blk_queue_get_max_sectors(q, rq->cmd_flags);
+       if (!q->limits.chunk_sectors || (req_op(rq) == REQ_OP_DISCARD))
+               return blk_queue_get_max_sectors(q, req_op(rq));
 
-       return min(blk_max_size_offset(q, blk_rq_pos(rq)),
-                       blk_queue_get_max_sectors(q, rq->cmd_flags));
+       return min(blk_max_size_offset(q, offset),
+                       blk_queue_get_max_sectors(q, req_op(rq)));
 }
 
 static inline unsigned int blk_rq_count_bios(struct request *rq)
···
 extern int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
                sector_t nr_sects, gfp_t gfp_mask, unsigned long flags);
 extern int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
-               sector_t nr_sects, gfp_t gfp_mask, int type, struct bio **biop);
+               sector_t nr_sects, gfp_t gfp_mask, int op_flags,
+               struct bio **biop);
 extern int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
                sector_t nr_sects, gfp_t gfp_mask, struct page *page);
 extern int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
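The rw_is_sync() change in the blkdev.h hunks is a good illustration of the whole series: the question "is this IO sync?" now takes the op and the flags as separate arguments instead of decoding both from one mask. A small sketch of the new predicate (the REQ_SYNC bit position here is an illustrative stand-in):

```c
#include <assert.h>
#include <stdbool.h>

enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD,
              REQ_OP_WRITE_SAME, REQ_OP_FLUSH };
#define REQ_SYNC (1u << 3)      /* stand-in bit, not the kernel's value */

/* A read is always treated as sync; a write counts as sync only when
 * the caller set REQ_SYNC -- same logic as the patched rw_is_sync(). */
static bool rw_is_sync(int op, unsigned int rw_flags)
{
        return op == REQ_OP_READ || (rw_flags & REQ_SYNC);
}
```

Previously the read/write decision hid inside `!(rw_flags & REQ_WRITE)`; with ops no longer being flag bits, the op must be passed explicitly, which is why rq_is_sync() now forwards `req_op(rq)`.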
+1 -1
include/linux/blktrace_api.h
···
 }
 
 extern void blk_dump_cmd(char *buf, struct request *rq);
-extern void blk_fill_rwbs(char *rwbs, u32 rw, int bytes);
+extern void blk_fill_rwbs(char *rwbs, int op, u32 rw, int bytes);
 
 #endif /* CONFIG_EVENT_TRACING && CONFIG_BLOCK */
 
+6 -5
include/linux/buffer_head.h
···
 void free_buffer_head(struct buffer_head * bh);
 void unlock_buffer(struct buffer_head *bh);
 void __lock_buffer(struct buffer_head *bh);
-void ll_rw_block(int, int, struct buffer_head * bh[]);
+void ll_rw_block(int, int, int, struct buffer_head * bh[]);
 int sync_dirty_buffer(struct buffer_head *bh);
-int __sync_dirty_buffer(struct buffer_head *bh, int rw);
-void write_dirty_buffer(struct buffer_head *bh, int rw);
-int _submit_bh(int rw, struct buffer_head *bh, unsigned long bio_flags);
-int submit_bh(int, struct buffer_head *);
+int __sync_dirty_buffer(struct buffer_head *bh, int op_flags);
+void write_dirty_buffer(struct buffer_head *bh, int op_flags);
+int _submit_bh(int op, int op_flags, struct buffer_head *bh,
+              unsigned long bio_flags);
+int submit_bh(int, int, struct buffer_head *);
 void write_boundary_block(struct block_device *bdev,
                        sector_t bblock, unsigned blocksize);
 int bh_uptodate_or_lock(struct buffer_head *bh);
+96
include/linux/bvec.h
···
+/*
+ * bvec iterator
+ *
+ * Copyright (C) 2001 Ming Lei <ming.lei@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public Licens
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-
+ */
+#ifndef __LINUX_BVEC_ITER_H
+#define __LINUX_BVEC_ITER_H
+
+#include <linux/kernel.h>
+#include <linux/bug.h>
+
+/*
+ * was unsigned short, but we might as well be ready for > 64kB I/O pages
+ */
+struct bio_vec {
+       struct page     *bv_page;
+       unsigned int    bv_len;
+       unsigned int    bv_offset;
+};
+
+struct bvec_iter {
+       sector_t                bi_sector;      /* device address in 512 byte
+                                                  sectors */
+       unsigned int            bi_size;        /* residual I/O count */
+
+       unsigned int            bi_idx;         /* current index into bvl_vec */
+
+       unsigned int            bi_bvec_done;   /* number of bytes completed in
+                                                  current bvec */
+};
+
+/*
+ * various member access, note that bio_data should of course not be used
+ * on highmem page vectors
+ */
+#define __bvec_iter_bvec(bvec, iter)   (&(bvec)[(iter).bi_idx])
+
+#define bvec_iter_page(bvec, iter)                             \
+       (__bvec_iter_bvec((bvec), (iter))->bv_page)
+
+#define bvec_iter_len(bvec, iter)                              \
+       min((iter).bi_size,                                     \
+           __bvec_iter_bvec((bvec), (iter))->bv_len - (iter).bi_bvec_done)
+
+#define bvec_iter_offset(bvec, iter)                           \
+       (__bvec_iter_bvec((bvec), (iter))->bv_offset + (iter).bi_bvec_done)
+
+#define bvec_iter_bvec(bvec, iter)                             \
+((struct bio_vec) {                                            \
+       .bv_page        = bvec_iter_page((bvec), (iter)),       \
+       .bv_len         = bvec_iter_len((bvec), (iter)),        \
+       .bv_offset      = bvec_iter_offset((bvec), (iter)),     \
+})
+
+static inline void bvec_iter_advance(const struct bio_vec *bv,
+                                    struct bvec_iter *iter,
+                                    unsigned bytes)
+{
+       WARN_ONCE(bytes > iter->bi_size,
+                 "Attempted to advance past end of bvec iter\n");
+
+       while (bytes) {
+               unsigned len = min(bytes, bvec_iter_len(bv, *iter));
+
+               bytes -= len;
+               iter->bi_size -= len;
+               iter->bi_bvec_done += len;
+
+               if (iter->bi_bvec_done == __bvec_iter_bvec(bv, *iter)->bv_len) {
+                       iter->bi_bvec_done = 0;
+                       iter->bi_idx++;
+               }
+       }
+}
+
+#define for_each_bvec(bvl, bio_vec, iter, start)               \
+       for (iter = (start);                                    \
+            (iter).bi_size &&                                  \
+               ((bvl = bvec_iter_bvec((bio_vec), (iter))), 1); \
+            bvec_iter_advance((bio_vec), &(iter), (bvl).bv_len))
+
+#endif /* __LINUX_BVEC_ITER_H */
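The bvec_iter_advance() moved into the new bvec.h walks an iterator forward through a vector of segments, tracking the current index and the bytes consumed within that segment. A userspace model of just that arithmetic (page pointers dropped, segments reduced to a length array; the struct mirrors the kernel's bvec_iter fields):

```c
#include <assert.h>

struct bvec_iter {
        unsigned int bi_size;       /* residual I/O count */
        unsigned int bi_idx;        /* current index into the vec */
        unsigned int bi_bvec_done;  /* bytes done in current segment */
};

/* Consume 'bytes' across the segment lengths in seg_len, rolling over
 * to the next segment whenever the current one is fully consumed --
 * the same loop shape as the kernel's bvec_iter_advance(). */
static void iter_advance(const unsigned int *seg_len, struct bvec_iter *it,
                         unsigned int bytes)
{
        while (bytes) {
                unsigned int left = seg_len[it->bi_idx] - it->bi_bvec_done;
                unsigned int len = bytes < left ? bytes : left;

                bytes -= len;
                it->bi_size -= len;
                it->bi_bvec_done += len;

                if (it->bi_bvec_done == seg_len[it->bi_idx]) {
                        it->bi_bvec_done = 0;
                        it->bi_idx++;
                }
        }
}
```

Keeping `bi_bvec_done` separate from `bi_idx` is what lets the iterator stop mid-segment, which the `for_each_bvec` loop relies on to yield partial bvecs.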
+2 -1
include/linux/dm-io.h
···
  */
 struct dm_io_client;
 struct dm_io_request {
-       int bi_rw;                      /* READ|WRITE - not READA */
+       int bi_op;                      /* REQ_OP */
+       int bi_op_flags;                /* rq_flag_bits */
        struct dm_io_memory mem;        /* Memory to use for io */
        struct dm_io_notify notify;     /* Synchronous if notify.fn is NULL */
        struct dm_io_client *client;    /* Client memory handler */
+10 -5
include/linux/elevator.h
···
 
 typedef void (elevator_merged_fn) (struct request_queue *, struct request *, int);
 
-typedef int (elevator_allow_merge_fn) (struct request_queue *, struct request *, struct bio *);
+typedef int (elevator_allow_bio_merge_fn) (struct request_queue *,
+                                          struct request *, struct bio *);
+
+typedef int (elevator_allow_rq_merge_fn) (struct request_queue *,
+                                         struct request *, struct request *);
 
 typedef void (elevator_bio_merged_fn) (struct request_queue *,
                                                struct request *, struct bio *);
···
 typedef void (elevator_add_req_fn) (struct request_queue *, struct request *);
 typedef struct request *(elevator_request_list_fn) (struct request_queue *, struct request *);
 typedef void (elevator_completed_req_fn) (struct request_queue *, struct request *);
-typedef int (elevator_may_queue_fn) (struct request_queue *, int);
+typedef int (elevator_may_queue_fn) (struct request_queue *, int, int);
 
 typedef void (elevator_init_icq_fn) (struct io_cq *);
 typedef void (elevator_exit_icq_fn) (struct io_cq *);
···
        elevator_merge_fn *elevator_merge_fn;
        elevator_merged_fn *elevator_merged_fn;
        elevator_merge_req_fn *elevator_merge_req_fn;
-       elevator_allow_merge_fn *elevator_allow_merge_fn;
+       elevator_allow_bio_merge_fn *elevator_allow_bio_merge_fn;
+       elevator_allow_rq_merge_fn *elevator_allow_rq_merge_fn;
        elevator_bio_merged_fn *elevator_bio_merged_fn;
 
        elevator_dispatch_fn *elevator_dispatch_fn;
···
 extern struct request *elv_latter_request(struct request_queue *, struct request *);
 extern int elv_register_queue(struct request_queue *q);
 extern void elv_unregister_queue(struct request_queue *q);
-extern int elv_may_queue(struct request_queue *, int);
+extern int elv_may_queue(struct request_queue *, int, int);
 extern void elv_completed_request(struct request_queue *, struct request *);
 extern int elv_set_request(struct request_queue *q, struct request *rq,
                           struct bio *bio, gfp_t gfp_mask);
···
 extern int elevator_init(struct request_queue *, char *);
 extern void elevator_exit(struct elevator_queue *);
 extern int elevator_change(struct request_queue *, const char *);
-extern bool elv_rq_merge_ok(struct request *, struct bio *);
+extern bool elv_bio_merge_ok(struct request *, struct bio *);
 extern struct elevator_queue *elevator_alloc(struct request_queue *,
                                        struct elevator_type *);
 
+29 -14
include/linux/fs.h
··· 152 152 #define CHECK_IOVEC_ONLY -1 153 153 154 154 /* 155 - * The below are the various read and write types that we support. Some of 155 + * The below are the various read and write flags that we support. Some of 156 156 * them include behavioral modifiers that send information down to the 157 - * block layer and IO scheduler. Terminology: 157 + * block layer and IO scheduler. They should be used along with a req_op. 158 + * Terminology: 158 159 * 159 160 * The block layer uses device plugging to defer IO a little bit, in 160 161 * the hope that we will see more IO very shortly. This increases ··· 194 193 * non-volatile media on completion. 195 194 * 196 195 */ 197 - #define RW_MASK REQ_WRITE 196 + #define RW_MASK REQ_OP_WRITE 198 197 #define RWA_MASK REQ_RAHEAD 199 198 200 - #define READ 0 199 + #define READ REQ_OP_READ 201 200 #define WRITE RW_MASK 202 201 #define READA RWA_MASK 203 202 204 - #define READ_SYNC (READ | REQ_SYNC) 205 - #define WRITE_SYNC (WRITE | REQ_SYNC | REQ_NOIDLE) 206 - #define WRITE_ODIRECT (WRITE | REQ_SYNC) 207 - #define WRITE_FLUSH (WRITE | REQ_SYNC | REQ_NOIDLE | REQ_FLUSH) 208 - #define WRITE_FUA (WRITE | REQ_SYNC | REQ_NOIDLE | REQ_FUA) 209 - #define WRITE_FLUSH_FUA (WRITE | REQ_SYNC | REQ_NOIDLE | REQ_FLUSH | REQ_FUA) 203 + #define READ_SYNC REQ_SYNC 204 + #define WRITE_SYNC (REQ_SYNC | REQ_NOIDLE) 205 + #define WRITE_ODIRECT REQ_SYNC 206 + #define WRITE_FLUSH (REQ_SYNC | REQ_NOIDLE | REQ_PREFLUSH) 207 + #define WRITE_FUA (REQ_SYNC | REQ_NOIDLE | REQ_FUA) 208 + #define WRITE_FLUSH_FUA (REQ_SYNC | REQ_NOIDLE | REQ_PREFLUSH | REQ_FUA) 210 209 211 210 /* 212 211 * Attribute flags. These should be or-ed together to figure out what ··· 2465 2464 extern bool is_bad_inode(struct inode *); 2466 2465 2467 2466 #ifdef CONFIG_BLOCK 2467 + static inline bool op_is_write(unsigned int op) 2468 + { 2469 + return op == REQ_OP_READ ? 
false : true; 2470 + } 2471 + 2468 2472 /* 2469 2473 * return READ, READA, or WRITE 2470 2474 */ 2471 - #define bio_rw(bio) ((bio)->bi_rw & (RW_MASK | RWA_MASK)) 2475 + static inline int bio_rw(struct bio *bio) 2476 + { 2477 + if (op_is_write(bio_op(bio))) 2478 + return WRITE; 2479 + 2480 + return bio->bi_rw & RWA_MASK; 2481 + } 2472 2482 2473 2483 /* 2474 2484 * return data direction, READ or WRITE 2475 2485 */ 2476 - #define bio_data_dir(bio) ((bio)->bi_rw & 1) 2486 + static inline int bio_data_dir(struct bio *bio) 2487 + { 2488 + return op_is_write(bio_op(bio)) ? WRITE : READ; 2489 + } 2477 2490 2478 2491 extern void check_disk_size_change(struct gendisk *disk, 2479 2492 struct block_device *bdev); ··· 2762 2747 extern void inode_sb_list_add(struct inode *inode); 2763 2748 2764 2749 #ifdef CONFIG_BLOCK 2765 - extern blk_qc_t submit_bio(int, struct bio *); 2750 + extern blk_qc_t submit_bio(struct bio *); 2766 2751 extern int bdev_read_only(struct block_device *); 2767 2752 #endif 2768 2753 extern int set_blocksize(struct block_device *, int); ··· 2817 2802 extern int nonseekable_open(struct inode * inode, struct file * filp); 2818 2803 2819 2804 #ifdef CONFIG_BLOCK 2820 - typedef void (dio_submit_t)(int rw, struct bio *bio, struct inode *inode, 2805 + typedef void (dio_submit_t)(struct bio *bio, struct inode *inode, 2821 2806 loff_t file_offset); 2822 2807 2823 2808 enum {
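With the fs.h hunks above, data direction is derived from the op alone: anything that is not REQ_OP_READ counts as a write, so discards and write-sames fall on the write side too. A trivial userspace sketch of the new op_is_write()/bio_data_dir() pairing (the 0/1 READ/WRITE return values are stand-ins for the kernel's READ/WRITE constants):

```c
#include <assert.h>
#include <stdbool.h>

enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD,
              REQ_OP_WRITE_SAME, REQ_OP_FLUSH };

/* Same body as the new fs.h helper: only a read is "not a write". */
static bool op_is_write(unsigned int op)
{
        return op == REQ_OP_READ ? false : true;
}

/* bio_data_dir() analogue: map the op to a direction. */
static int data_dir(unsigned int op)
{
        return op_is_write(op) ? 1 /* WRITE */ : 0 /* READ */;
}
```

This replaces the old `bi_rw & 1` trick, which only worked while REQ_WRITE happened to be bit 0 of the flags word.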
+8 -4
include/trace/events/bcache.h
···
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->orig_sector    = bio->bi_iter.bi_sector - 16;
                __entry->nr_sector      = bio->bi_iter.bi_size >> 9;
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
        ),
 
        TP_printk("%d,%d  %s %llu + %u (from %d,%d @ %llu)",
···
                __entry->dev            = bio->bi_bdev->bd_dev;
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->nr_sector      = bio->bi_iter.bi_size >> 9;
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
        ),
 
        TP_printk("%d,%d  %s %llu + %u",
···
                __entry->dev            = bio->bi_bdev->bd_dev;
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->nr_sector      = bio->bi_iter.bi_size >> 9;
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
                __entry->cache_hit = hit;
                __entry->bypass = bypass;
        ),
···
                __entry->inode          = inode;
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->nr_sector      = bio->bi_iter.bi_size >> 9;
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
                __entry->writeback = writeback;
                __entry->bypass = bypass;
        ),
+20 -11
include/trace/events/block.h
···
                                        0 : blk_rq_sectors(rq);
                __entry->errors    = rq->errors;
 
-               blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, blk_rq_bytes(rq));
+               blk_fill_rwbs(__entry->rwbs, req_op(rq), rq->cmd_flags,
+                             blk_rq_bytes(rq));
                blk_dump_cmd(__get_str(cmd), rq);
        ),
 
···
                __entry->nr_sector = nr_bytes >> 9;
                __entry->errors    = rq->errors;
 
-               blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, nr_bytes);
+               blk_fill_rwbs(__entry->rwbs, req_op(rq), rq->cmd_flags, nr_bytes);
                blk_dump_cmd(__get_str(cmd), rq);
        ),
 
···
                __entry->bytes     = (rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
                                        blk_rq_bytes(rq) : 0;
 
-               blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, blk_rq_bytes(rq));
+               blk_fill_rwbs(__entry->rwbs, req_op(rq), rq->cmd_flags,
+                             blk_rq_bytes(rq));
                blk_dump_cmd(__get_str(cmd), rq);
                memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
        ),
···
                                          bio->bi_bdev->bd_dev : 0;
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->nr_sector      = bio_sectors(bio);
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
                memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
        ),
 
···
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->nr_sector      = bio_sectors(bio);
                __entry->error          = error;
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
        ),
 
        TP_printk("%d,%d %s %llu + %u [%d]",
···
                __entry->dev            = bio->bi_bdev->bd_dev;
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->nr_sector      = bio_sectors(bio);
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
                memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
        ),
 
···
                __entry->dev            = bio->bi_bdev->bd_dev;
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->nr_sector      = bio_sectors(bio);
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
                memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
        ),
 
···
                __entry->dev            = bio ? bio->bi_bdev->bd_dev : 0;
                __entry->sector         = bio ? bio->bi_iter.bi_sector : 0;
                __entry->nr_sector      = bio ? bio_sectors(bio) : 0;
-               blk_fill_rwbs(__entry->rwbs,
+               blk_fill_rwbs(__entry->rwbs, bio ? bio_op(bio) : 0,
                              bio ? bio->bi_rw : 0, __entry->nr_sector);
                memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
        ),
···
                __entry->dev            = bio->bi_bdev->bd_dev;
                __entry->sector         = bio->bi_iter.bi_sector;
                __entry->new_sector     = new_sector;
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
                memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
        ),
 
···
                __entry->nr_sector      = bio_sectors(bio);
                __entry->old_dev        = dev;
                __entry->old_sector     = from;
-               blk_fill_rwbs(__entry->rwbs, bio->bi_rw, bio->bi_iter.bi_size);
+               blk_fill_rwbs(__entry->rwbs, bio_op(bio), bio->bi_rw,
+                             bio->bi_iter.bi_size);
        ),
 
        TP_printk("%d,%d %s %llu + %u <- (%d,%d) %llu",
···
                __entry->old_dev        = dev;
                __entry->old_sector     = from;
                __entry->nr_bios        = blk_rq_count_bios(rq);
-               blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, blk_rq_bytes(rq));
+               blk_fill_rwbs(__entry->rwbs, req_op(rq), rq->cmd_flags,
+                             blk_rq_bytes(rq));
        ),
 
        TP_printk("%d,%d %s %llu + %u <- (%d,%d) %llu %u",
+22 -15
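The pattern running through these trace hooks is that the request operation (`req_op()`) is now passed alongside, rather than buried inside, the modifier flags. A user-space sketch of packing an op into the top bits of a flags word, in the spirit of that split; the shift width, mask, and all names here are illustrative assumptions, not the kernel's actual bit layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout: op in the top 3 bits, flags in the rest. */
#define OP_BITS   3
#define OP_SHIFT  (64 - OP_BITS)
#define OP_MASK   ((UINT64_C(1) << OP_BITS) - 1)

enum my_req_op { MY_OP_READ = 0, MY_OP_WRITE = 1, MY_OP_DISCARD = 3 };

/* combine an operation with its modifier flags into one word */
static uint64_t pack_op(uint64_t flags, enum my_req_op op)
{
	return flags | ((uint64_t)op << OP_SHIFT);
}

/* recover just the operation, analogous in spirit to req_op() */
static enum my_req_op unpack_op(uint64_t cmd_flags)
{
	return (enum my_req_op)((cmd_flags >> OP_SHIFT) & OP_MASK);
}

/* recover just the modifier flags */
static uint64_t unpack_flags(uint64_t cmd_flags)
{
	return cmd_flags & ~(OP_MASK << OP_SHIFT);
}
```

With accessors like these, a tracing helper can take the op and the flags as separate arguments even when callers store them in a single word.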
include/trace/events/f2fs.h
···
 TRACE_DEFINE_ENUM(LFS);
 TRACE_DEFINE_ENUM(SSR);
 TRACE_DEFINE_ENUM(__REQ_RAHEAD);
-TRACE_DEFINE_ENUM(__REQ_WRITE);
 TRACE_DEFINE_ENUM(__REQ_SYNC);
 TRACE_DEFINE_ENUM(__REQ_NOIDLE);
-TRACE_DEFINE_ENUM(__REQ_FLUSH);
+TRACE_DEFINE_ENUM(__REQ_PREFLUSH);
 TRACE_DEFINE_ENUM(__REQ_FUA);
 TRACE_DEFINE_ENUM(__REQ_PRIO);
 TRACE_DEFINE_ENUM(__REQ_META);
···
 	{ IPU,	"IN-PLACE" },					\
 	{ OPU,	"OUT-OF-PLACE" })
 
-#define F2FS_BIO_MASK(t)	(t & (READA | WRITE_FLUSH_FUA))
+#define F2FS_BIO_FLAG_MASK(t)	(t & (READA | WRITE_FLUSH_FUA))
 #define F2FS_BIO_EXTRA_MASK(t)	(t & (REQ_META | REQ_PRIO))
 
-#define show_bio_type(type)	show_bio_base(type), show_bio_extra(type)
+#define show_bio_type(op, op_flags)	show_bio_op(op),	\
+		show_bio_op_flags(op_flags), show_bio_extra(op_flags)
 
-#define show_bio_base(type)					\
-	__print_symbolic(F2FS_BIO_MASK(type),			\
+#define show_bio_op(op)						\
+	__print_symbolic(op,					\
 		{ READ,		"READ" },			\
+		{ WRITE,	"WRITE" })
+
+#define show_bio_op_flags(flags)				\
+	__print_symbolic(F2FS_BIO_FLAG_MASK(flags),		\
 		{ READA,	"READAHEAD" },			\
 		{ READ_SYNC,	"READ_SYNC" },			\
-		{ WRITE,	"WRITE" },			\
 		{ WRITE_SYNC,	"WRITE_SYNC" },			\
 		{ WRITE_FLUSH,	"WRITE_FLUSH" },		\
 		{ WRITE_FUA,	"WRITE_FUA" },			\
···
 	__field(pgoff_t, index)
 	__field(block_t, old_blkaddr)
 	__field(block_t, new_blkaddr)
-	__field(int, rw)
+	__field(int, op)
+	__field(int, op_flags)
 	__field(int, type)
 ),
···
 	__entry->index		= page->index;
 	__entry->old_blkaddr	= fio->old_blkaddr;
 	__entry->new_blkaddr	= fio->new_blkaddr;
-	__entry->rw		= fio->rw;
+	__entry->op		= fio->op;
+	__entry->op_flags	= fio->op_flags;
 	__entry->type		= fio->type;
 ),
 
 TP_printk("dev = (%d,%d), ino = %lu, page_index = 0x%lx, "
-	"oldaddr = 0x%llx, newaddr = 0x%llx rw = %s%s, type = %s",
+	"oldaddr = 0x%llx, newaddr = 0x%llx rw = %s%si%s, type = %s",
 	show_dev_ino(__entry),
 	(unsigned long)__entry->index,
 	(unsigned long long)__entry->old_blkaddr,
 	(unsigned long long)__entry->new_blkaddr,
-	show_bio_type(__entry->rw),
+	show_bio_type(__entry->op, __entry->op_flags),
 	show_block_type(__entry->type))
 );
···
 TP_STRUCT__entry(
 	__field(dev_t, dev)
-	__field(int, rw)
+	__field(int, op)
+	__field(int, op_flags)
 	__field(int, type)
 	__field(sector_t, sector)
 	__field(unsigned int, size)
···
 TP_fast_assign(
 	__entry->dev		= sb->s_dev;
-	__entry->rw		= fio->rw;
+	__entry->op		= fio->op;
+	__entry->op_flags	= fio->op_flags;
 	__entry->type		= fio->type;
 	__entry->sector		= bio->bi_iter.bi_sector;
 	__entry->size		= bio->bi_iter.bi_size;
 ),
 
-TP_printk("dev = (%d,%d), %s%s, %s, sector = %lld, size = %u",
+TP_printk("dev = (%d,%d), %s%s%s, %s, sector = %lld, size = %u",
 	show_dev(__entry),
-	show_bio_type(__entry->rw),
+	show_bio_type(__entry->op, __entry->op_flags),
 	show_block_type(__entry->type),
 	(unsigned long long)__entry->sector,
 	__entry->size)
+20 -13
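The f2fs trace output now decodes the operation and its flags from separate fields instead of one combined `rw` value. A minimal user-space sketch of that two-part decoding, with hypothetical names and flag values standing in for the kernel's `__print_symbolic()` tables:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins, not the kernel's actual values. */
enum { OP_READ, OP_WRITE };
#define F_SYNC  (1u << 0)
#define F_FLUSH (1u << 1)
#define F_FUA   (1u << 2)

/* Render "<op><flag suffixes>" from separate op and flags fields,
 * the way show_bio_op()/show_bio_op_flags() now compose the output. */
static void show_bio(char *out, size_t n, int op, unsigned int flags)
{
	snprintf(out, n, "%s%s%s%s",
		 op == OP_WRITE ? "WRITE" : "READ",
		 (flags & F_SYNC)  ? "_SYNC"  : "",
		 (flags & F_FLUSH) ? "_FLUSH" : "",
		 (flags & F_FUA)   ? "_FUA"   : "");
}
```

The point of the split is that `WRITE` is no longer a bit that can collide with flag bits; the op is matched exactly, and the flags are masked independently.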
kernel/power/swap.c
···
 	bio_put(bio);
 }
 
-static int hib_submit_io(int rw, pgoff_t page_off, void *addr,
+static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
 		struct hib_bio_batch *hb)
 {
 	struct page *page = virt_to_page(addr);
···
 	bio = bio_alloc(__GFP_RECLAIM | __GFP_HIGH, 1);
 	bio->bi_iter.bi_sector = page_off * (PAGE_SIZE >> 9);
 	bio->bi_bdev = hib_resume_bdev;
+	bio_set_op_attrs(bio, op, op_flags);
 
 	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
 		printk(KERN_ERR "PM: Adding page to bio failed at %llu\n",
···
 		bio->bi_end_io = hib_end_io;
 		bio->bi_private = hb;
 		atomic_inc(&hb->count);
-		submit_bio(rw, bio);
+		submit_bio(bio);
 	} else {
-		error = submit_bio_wait(rw, bio);
+		error = submit_bio_wait(bio);
 		bio_put(bio);
 	}
···
 {
 	int error;
 
-	hib_submit_io(READ_SYNC, swsusp_resume_block, swsusp_header, NULL);
+	hib_submit_io(REQ_OP_READ, READ_SYNC, swsusp_resume_block,
+		      swsusp_header, NULL);
 	if (!memcmp("SWAP-SPACE",swsusp_header->sig, 10) ||
 	    !memcmp("SWAPSPACE2",swsusp_header->sig, 10)) {
 		memcpy(swsusp_header->orig_sig,swsusp_header->sig, 10);
···
 		swsusp_header->flags = flags;
 		if (flags & SF_CRC32_MODE)
 			swsusp_header->crc32 = handle->crc32;
-		error = hib_submit_io(WRITE_SYNC, swsusp_resume_block,
-					swsusp_header, NULL);
+		error = hib_submit_io(REQ_OP_WRITE, WRITE_SYNC,
+				      swsusp_resume_block, swsusp_header, NULL);
 	} else {
 		printk(KERN_ERR "PM: Swap header not found!\n");
 		error = -ENODEV;
···
 	} else {
 		src = buf;
 	}
-	return hib_submit_io(WRITE_SYNC, offset, src, hb);
+	return hib_submit_io(REQ_OP_WRITE, WRITE_SYNC, offset, src, hb);
 }
 
 static void release_swap_writer(struct swap_map_handle *handle)
···
 		return -ENOMEM;
 	}
 
-	error = hib_submit_io(READ_SYNC, offset, tmp->map, NULL);
+	error = hib_submit_io(REQ_OP_READ, READ_SYNC, offset,
+			      tmp->map, NULL);
 	if (error) {
 		release_swap_reader(handle);
 		return error;
···
 	offset = handle->cur->entries[handle->k];
 	if (!offset)
 		return -EFAULT;
-	error = hib_submit_io(READ_SYNC, offset, buf, hb);
+	error = hib_submit_io(REQ_OP_READ, READ_SYNC, offset, buf, hb);
 	if (error)
 		return error;
 	if (++handle->k >= MAP_PAGE_ENTRIES) {
···
 	if (!IS_ERR(hib_resume_bdev)) {
 		set_blocksize(hib_resume_bdev, PAGE_SIZE);
 		clear_page(swsusp_header);
-		error = hib_submit_io(READ_SYNC, swsusp_resume_block,
+		error = hib_submit_io(REQ_OP_READ, READ_SYNC,
+					swsusp_resume_block,
 					swsusp_header, NULL);
 		if (error)
 			goto put;
···
 	if (!memcmp(HIBERNATE_SIG, swsusp_header->sig, 10)) {
 		memcpy(swsusp_header->sig, swsusp_header->orig_sig, 10);
 		/* Reset swap signature now */
-		error = hib_submit_io(WRITE_SYNC, swsusp_resume_block,
+		error = hib_submit_io(REQ_OP_WRITE, WRITE_SYNC,
+					swsusp_resume_block,
 					swsusp_header, NULL);
 	} else {
 		error = -EINVAL;
···
 {
 	int error;
 
-	hib_submit_io(READ_SYNC, swsusp_resume_block, swsusp_header, NULL);
+	hib_submit_io(REQ_OP_READ, READ_SYNC, swsusp_resume_block,
+			swsusp_header, NULL);
 	if (!memcmp(HIBERNATE_SIG,swsusp_header->sig, 10)) {
 		memcpy(swsusp_header->sig,swsusp_header->orig_sig, 10);
-		error = hib_submit_io(WRITE_SYNC, swsusp_resume_block,
+		error = hib_submit_io(REQ_OP_WRITE, WRITE_SYNC,
+					swsusp_resume_block,
 			swsusp_header, NULL);
 	} else {
 		printk(KERN_ERR "PM: Cannot find swsusp signature!\n");
+47 -30
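The `hib_submit_io()` rework reflects the new submission model: `submit_bio()` no longer takes an rw argument, so the operation and flags must be attached to the bio before it is submitted (which is what `bio_set_op_attrs()` does in the kernel). A toy user-space sketch of that flow, with simplified stand-in types and names:

```c
#include <assert.h>

enum { OP_READ, OP_WRITE };

/* minimal stand-in for a bio that carries its own op and flags */
struct fake_bio {
	int op;
	unsigned int op_flags;
	int submitted;
};

/* analogue of bio_set_op_attrs(): record op and flags on the bio */
static void set_op_attrs(struct fake_bio *bio, int op, unsigned int flags)
{
	bio->op = op;
	bio->op_flags = flags;
}

/* analogue of the new submit_bio(bio): no rw argument anymore */
static int submit(struct fake_bio *bio)
{
	/* a real implementation would queue the I/O; here we just mark it */
	bio->submitted = 1;
	return 0;
}

/* analogue of the new hib_submit_io(op, op_flags, ...) shape */
static int submit_io(struct fake_bio *bio, int op, unsigned int op_flags)
{
	set_op_attrs(bio, op, op_flags);
	return submit(bio);
}
```

Pushing the op onto the bio means every layer that later inspects the bio (merging, tracing, drivers) sees the same operation, instead of relying on a separately threaded rw parameter.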
kernel/trace/blktrace.c
···
 static void trace_note_time(struct blk_trace *bt)
 {
-	struct timespec now;
+	struct timespec64 now;
 	unsigned long flags;
 	u32 words[2];
 
-	getnstimeofday(&now);
-	words[0] = now.tv_sec;
+	/* need to check user space to see if this breaks in y2038 or y2106 */
+	ktime_get_real_ts64(&now);
+	words[0] = (u32)now.tv_sec;
 	words[1] = now.tv_nsec;
 
 	local_irq_save(flags);
···
 			 BLK_TC_ACT(BLK_TC_WRITE) };
 
 #define BLK_TC_RAHEAD		BLK_TC_AHEAD
+#define BLK_TC_PREFLUSH		BLK_TC_FLUSH
 
 /* The ilog2() calls fall out because they're constant */
 #define MASK_TC_BIT(rw, __name) ((rw & REQ_ ## __name) << \
···
  * blk_io_trace structure and places it in a per-cpu subbuffer.
  */
 static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
-		     int rw, u32 what, int error, int pdu_len, void *pdu_data)
+		     int op, int op_flags, u32 what, int error, int pdu_len,
+		     void *pdu_data)
 {
 	struct task_struct *tsk = current;
 	struct ring_buffer_event *event = NULL;
···
 	if (unlikely(bt->trace_state != Blktrace_running && !blk_tracer))
 		return;
 
-	what |= ddir_act[rw & WRITE];
-	what |= MASK_TC_BIT(rw, SYNC);
-	what |= MASK_TC_BIT(rw, RAHEAD);
-	what |= MASK_TC_BIT(rw, META);
-	what |= MASK_TC_BIT(rw, DISCARD);
-	what |= MASK_TC_BIT(rw, FLUSH);
-	what |= MASK_TC_BIT(rw, FUA);
+	what |= ddir_act[op_is_write(op) ? WRITE : READ];
+	what |= MASK_TC_BIT(op_flags, SYNC);
+	what |= MASK_TC_BIT(op_flags, RAHEAD);
+	what |= MASK_TC_BIT(op_flags, META);
+	what |= MASK_TC_BIT(op_flags, PREFLUSH);
+	what |= MASK_TC_BIT(op_flags, FUA);
+	if (op == REQ_OP_DISCARD)
+		what |= BLK_TC_ACT(BLK_TC_DISCARD);
+	if (op == REQ_OP_FLUSH)
+		what |= BLK_TC_ACT(BLK_TC_FLUSH);
 
 	pid = tsk->pid;
 	if (act_log_check(bt, what, sector, pid))
···
 	if (rq->cmd_type == REQ_TYPE_BLOCK_PC) {
 		what |= BLK_TC_ACT(BLK_TC_PC);
-		__blk_add_trace(bt, 0, nr_bytes, rq->cmd_flags,
+		__blk_add_trace(bt, 0, nr_bytes, req_op(rq), rq->cmd_flags,
 				what, rq->errors, rq->cmd_len, rq->cmd);
 	} else {
 		what |= BLK_TC_ACT(BLK_TC_FS);
-		__blk_add_trace(bt, blk_rq_pos(rq), nr_bytes,
+		__blk_add_trace(bt, blk_rq_pos(rq), nr_bytes, req_op(rq),
 				rq->cmd_flags, what, rq->errors, 0, NULL);
 	}
 }
···
 		return;
 
 	__blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
-			bio->bi_rw, what, error, 0, NULL);
+			bio_op(bio), bio->bi_rw, what, error, 0, NULL);
 }
 
 static void blk_add_trace_bio_bounce(void *ignore,
···
 		struct blk_trace *bt = q->blk_trace;
 
 		if (bt)
-			__blk_add_trace(bt, 0, 0, rw, BLK_TA_GETRQ, 0, 0, NULL);
+			__blk_add_trace(bt, 0, 0, rw, 0, BLK_TA_GETRQ, 0, 0,
+					NULL);
 	}
 }
···
 		struct blk_trace *bt = q->blk_trace;
 
 		if (bt)
-			__blk_add_trace(bt, 0, 0, rw, BLK_TA_SLEEPRQ,
+			__blk_add_trace(bt, 0, 0, rw, 0, BLK_TA_SLEEPRQ,
 					0, 0, NULL);
 	}
 }
···
 	struct blk_trace *bt = q->blk_trace;
 
 	if (bt)
-		__blk_add_trace(bt, 0, 0, 0, BLK_TA_PLUG, 0, 0, NULL);
+		__blk_add_trace(bt, 0, 0, 0, 0, BLK_TA_PLUG, 0, 0, NULL);
 }
 
 static void blk_add_trace_unplug(void *ignore, struct request_queue *q,
···
 		else
 			what = BLK_TA_UNPLUG_TIMER;
 
-		__blk_add_trace(bt, 0, 0, 0, what, 0, sizeof(rpdu), &rpdu);
+		__blk_add_trace(bt, 0, 0, 0, 0, what, 0, sizeof(rpdu), &rpdu);
 	}
 }
···
 		__be64 rpdu = cpu_to_be64(pdu);
 
 		__blk_add_trace(bt, bio->bi_iter.bi_sector,
-				bio->bi_iter.bi_size, bio->bi_rw, BLK_TA_SPLIT,
-				bio->bi_error, sizeof(rpdu), &rpdu);
+				bio->bi_iter.bi_size, bio_op(bio), bio->bi_rw,
+				BLK_TA_SPLIT, bio->bi_error, sizeof(rpdu),
+				&rpdu);
 	}
 }
···
 	r.sector_from = cpu_to_be64(from);
 
 	__blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
-			bio->bi_rw, BLK_TA_REMAP, bio->bi_error,
+			bio_op(bio), bio->bi_rw, BLK_TA_REMAP, bio->bi_error,
 			sizeof(r), &r);
 }
···
 	r.sector_from = cpu_to_be64(from);
 
 	__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
-			rq_data_dir(rq), BLK_TA_REMAP, !!rq->errors,
+			rq_data_dir(rq), 0, BLK_TA_REMAP, !!rq->errors,
 			sizeof(r), &r);
 }
···
 		return;
 
 	if (rq->cmd_type == REQ_TYPE_BLOCK_PC)
-		__blk_add_trace(bt, 0, blk_rq_bytes(rq), 0,
+		__blk_add_trace(bt, 0, blk_rq_bytes(rq), 0, 0,
 				BLK_TA_DRV_DATA, rq->errors, len, data);
 	else
-		__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq), 0,
+		__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq), 0, 0,
 				BLK_TA_DRV_DATA, rq->errors, len, data);
 }
 EXPORT_SYMBOL_GPL(blk_add_driver_data);
···
 	}
 }
 
-void blk_fill_rwbs(char *rwbs, u32 rw, int bytes)
+void blk_fill_rwbs(char *rwbs, int op, u32 rw, int bytes)
 {
 	int i = 0;
 
-	if (rw & REQ_FLUSH)
+	if (rw & REQ_PREFLUSH)
 		rwbs[i++] = 'F';
 
-	if (rw & WRITE)
+	switch (op) {
+	case REQ_OP_WRITE:
+	case REQ_OP_WRITE_SAME:
 		rwbs[i++] = 'W';
-	else if (rw & REQ_DISCARD)
+		break;
+	case REQ_OP_DISCARD:
 		rwbs[i++] = 'D';
-	else if (bytes)
+		break;
+	case REQ_OP_FLUSH:
+		rwbs[i++] = 'F';
+		break;
+	case REQ_OP_READ:
 		rwbs[i++] = 'R';
-	else
+		break;
+	default:
 		rwbs[i++] = 'N';
+	}
 
 	if (rw & REQ_FUA)
 		rwbs[i++] = 'F';
+15 -30
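The reworked `blk_fill_rwbs()` decodes the operation with a `switch` on the op value instead of testing flag bits, which is what makes ops like FLUSH and DISCARD unambiguous. A user-space sketch of the same decoding logic; the flag values are local stand-ins and only a subset of the kernel's modifier letters is shown:

```c
#include <assert.h>
#include <string.h>

/* Illustrative op and flag values, redefined locally for the sketch. */
enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD,
	      REQ_OP_FLUSH, REQ_OP_WRITE_SAME };

#define REQ_PREFLUSH (1u << 0)
#define REQ_FUA      (1u << 1)
#define REQ_SYNC     (1u << 2)

/* Build the RWBS string: op letter from a switch, modifiers from flag bits. */
static void fill_rwbs(char *rwbs, int op, unsigned int rw)
{
	int i = 0;

	if (rw & REQ_PREFLUSH)
		rwbs[i++] = 'F';

	switch (op) {
	case REQ_OP_WRITE:
	case REQ_OP_WRITE_SAME:
		rwbs[i++] = 'W';
		break;
	case REQ_OP_DISCARD:
		rwbs[i++] = 'D';
		break;
	case REQ_OP_FLUSH:
		rwbs[i++] = 'F';
		break;
	case REQ_OP_READ:
		rwbs[i++] = 'R';
		break;
	default:
		rwbs[i++] = 'N';
	}

	if (rw & REQ_FUA)
		rwbs[i++] = 'F';
	if (rw & REQ_SYNC)
		rwbs[i++] = 'S';
	rwbs[i] = '\0';
}
```

Note that in the old code a zero-byte request fell through to 'R'; with a real op enum the read case is explicit and anything unrecognized becomes 'N'.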
lib/iov_iter.c
···
 	n = wanted;					\
 }
 
-#define iterate_bvec(i, n, __v, __p, skip, STEP) {	\
-	size_t wanted = n;				\
-	__p = i->bvec;					\
-	__v.bv_len = min_t(size_t, n, __p->bv_len - skip);	\
-	if (likely(__v.bv_len)) {			\
-		__v.bv_page = __p->bv_page;		\
-		__v.bv_offset = __p->bv_offset + skip;	\
-		(void)(STEP);				\
-		skip += __v.bv_len;			\
-		n -= __v.bv_len;			\
-	}						\
-	while (unlikely(n)) {				\
-		__p++;					\
-		__v.bv_len = min_t(size_t, n, __p->bv_len);	\
-		if (unlikely(!__v.bv_len))		\
+#define iterate_bvec(i, n, __v, __bi, skip, STEP) {	\
+	struct bvec_iter __start;			\
+	__start.bi_size = n;				\
+	__start.bi_bvec_done = skip;			\
+	__start.bi_idx = 0;				\
+	for_each_bvec(__v, i->bvec, __bi, __start) {	\
+		if (!__v.bv_len)			\
 			continue;			\
-		__v.bv_page = __p->bv_page;		\
-		__v.bv_offset = __p->bv_offset;		\
 		(void)(STEP);				\
-		skip = __v.bv_len;			\
-		n -= __v.bv_len;			\
 	}						\
-	n = wanted;					\
 }
 
 #define iterate_all_kinds(i, n, v, I, B, K) {		\
 	size_t skip = i->iov_offset;			\
 	if (unlikely(i->type & ITER_BVEC)) {		\
-		const struct bio_vec *bvec;		\
 		struct bio_vec v;			\
-		iterate_bvec(i, n, v, bvec, skip, (B))	\
+		struct bvec_iter __bi;			\
+		iterate_bvec(i, n, v, __bi, skip, (B))	\
 	} else if (unlikely(i->type & ITER_KVEC)) {	\
 		const struct kvec *kvec;		\
 		struct kvec v;				\
···
 	if (i->count) {					\
 		size_t skip = i->iov_offset;		\
 		if (unlikely(i->type & ITER_BVEC)) {	\
-			const struct bio_vec *bvec;	\
+			const struct bio_vec *bvec = i->bvec;	\
 			struct bio_vec v;		\
-			iterate_bvec(i, n, v, bvec, skip, (B))	\
-			if (skip == bvec->bv_len) {	\
-				bvec++;			\
-				skip = 0;		\
-			}				\
-			i->nr_segs -= bvec - i->bvec;	\
-			i->bvec = bvec;			\
+			struct bvec_iter __bi;		\
+			iterate_bvec(i, n, v, __bi, skip, (B))	\
+			i->bvec = __bvec_iter_bvec(i->bvec, __bi);	\
+			i->nr_segs -= i->bvec - bvec;	\
+			skip = __bi.bi_bvec_done;	\
 		} else if (unlikely(i->type & ITER_KVEC)) {	\
 			const struct kvec *kvec;	\
 			struct kvec v;			\
+6 -4
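The new `iterate_bvec()` leans on the block layer's `bvec_iter` bookkeeping: `bi_size` counts bytes remaining, `bi_idx` names the current segment, and `bi_bvec_done` tracks bytes already consumed within that segment. A user-space sketch of that bookkeeping, with simplified types (a flat buffer pointer stands in for a page):

```c
#include <assert.h>
#include <stddef.h>

struct bio_vec {
	const char *bv_base;	/* stand-in for bv_page */
	size_t bv_len;
	size_t bv_offset;
};

struct bvec_iter {
	size_t bi_size;		/* bytes left to iterate */
	unsigned int bi_idx;	/* current segment index */
	size_t bi_bvec_done;	/* bytes consumed in current segment */
};

/* view of the current segment, clamped like bvec_iter_bvec() */
static struct bio_vec bvec_cur(const struct bio_vec *bv, struct bvec_iter it)
{
	struct bio_vec v;

	v.bv_base = bv[it.bi_idx].bv_base;
	v.bv_offset = bv[it.bi_idx].bv_offset + it.bi_bvec_done;
	v.bv_len = bv[it.bi_idx].bv_len - it.bi_bvec_done;
	if (v.bv_len > it.bi_size)
		v.bv_len = it.bi_size;
	return v;
}

/* advance the iterator, rolling into the next segment when one is used up */
static void bvec_advance(const struct bio_vec *bv, struct bvec_iter *it,
			 size_t bytes)
{
	it->bi_size -= bytes;
	it->bi_bvec_done += bytes;
	while (it->bi_size && it->bi_bvec_done >= bv[it->bi_idx].bv_len) {
		it->bi_bvec_done -= bv[it->bi_idx].bv_len;
		it->bi_idx++;
	}
}

/* walk all segments, the way for_each_bvec() does, summing the lengths */
static size_t bvec_total(const struct bio_vec *bv, struct bvec_iter it)
{
	size_t total = 0;

	while (it.bi_size) {
		struct bio_vec v = bvec_cur(bv, it);

		total += v.bv_len;
		bvec_advance(bv, &it, v.bv_len);
	}
	return total;
}
```

Keeping the position in `(bi_idx, bi_bvec_done)` is what lets `iterate_and_advance` recover the new `i->bvec` and `skip` directly from the iterator after the loop, instead of re-deriving them by hand.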
mm/page_io.c
···
 		bio_end_io_t end_write_func)
 {
 	struct bio *bio;
-	int ret, rw = WRITE;
+	int ret;
 	struct swap_info_struct *sis = page_swap_info(page);
 
 	if (sis->flags & SWP_FILE) {
···
 		ret = -ENOMEM;
 		goto out;
 	}
+	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 	if (wbc->sync_mode == WB_SYNC_ALL)
-		rw |= REQ_SYNC;
+		bio->bi_rw |= REQ_SYNC;
 	count_vm_event(PSWPOUT);
 	set_page_writeback(page);
 	unlock_page(page);
-	submit_bio(rw, bio);
+	submit_bio(bio);
 out:
 	return ret;
 }
···
 		ret = -ENOMEM;
 		goto out;
 	}
+	bio_set_op_attrs(bio, REQ_OP_READ, 0);
 	count_vm_event(PSWPIN);
-	submit_bio(READ, bio);
+	submit_bio(bio);
 out:
 	return ret;
 }