
dm: impose configurable deadline for dm_request_fn's merge heuristic

Otherwise, for sequential workloads, the dm_request_fn can allow
excessive request merging at the expense of increased service time.

Add a per-device sysfs attribute that allows the user to control how
long a request that is a reasonable merge candidate may stay queued on
the request queue. The resolution of this request dispatch deadline is
in microseconds (ranging from 1 to 100000 usecs). For example, to set a
20us deadline:
echo 20 > /sys/block/dm-7/dm/rq_based_seq_io_merge_deadline

The dm_request_fn's merge heuristic and associated extra accounting is
disabled by default (rq_based_seq_io_merge_deadline is 0).

This sysfs attribute is not applicable to bio-based DM devices so it
will only ever report 0 for them.

Allowing a request to remain on the queue blocks other requests behind
it. But introducing a short dequeue delay has proven
very effective at enabling certain sequential IO workloads on really
fast, yet IOPS constrained, devices to build up slightly larger IOs --
yielding 90+% throughput improvements. Having precise control over the
time taken to wait for larger requests to build affords control beyond
that of waiting for certain IO sizes to accumulate (which would require
a deadline anyway). This knob will only ever make sense with sequential
IO workloads and the particular value used is storage configuration
specific.

Given the expected niche use-case for this knob, it has been deemed
acceptable to expose this relatively crude method for
crafting optimal IO on specific storage -- especially given the solution
is simple yet effective. In the context of DM multipath, it is
advisable to tune this sysfs attribute to a value that offers the best
performance for the common case (e.g. if 4 paths are expected active,
tune for that; if paths fail then performance may be slightly reduced).

Alternatives were explored to have request-based DM autotune this value
(e.g. if/when paths fail) but they were quickly deemed too fragile and
complex to warrant further design and development time. If this problem
proves more common as faster storage emerges we'll have to look at
elevating a generic solution into the block core.

Tested-by: Shiva Krishna Merla <shivakrishna.merla@netapp.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>

+73 -4

Documentation/ABI/testing/sysfs-block-dm  +14

 		Contains the value 1 while the device is suspended.
 		Otherwise it contains 0. Read-only attribute.
 Users:		util-linux, device-mapper udev rules
+
+What:		/sys/block/dm-<num>/dm/rq_based_seq_io_merge_deadline
+Date:		March 2015
+KernelVersion:	4.1
+Contact:	dm-devel@redhat.com
+Description:	Allow control over how long a request that is a
+		reasonable merge candidate can be queued on the request
+		queue.  The resolution of this deadline is in
+		microseconds (ranging from 1 to 100000 usecs).
+		Setting this attribute to 0 (the default) will disable
+		request-based DM's merge heuristic and associated extra
+		accounting.  This attribute is not applicable to
+		bio-based DM devices so it will only ever report 0 for
+		them.
drivers/md/dm-sysfs.c  +2

 static DM_ATTR_RO(name);
 static DM_ATTR_RO(uuid);
 static DM_ATTR_RO(suspended);
+static DM_ATTR_RW(rq_based_seq_io_merge_deadline);

 static struct attribute *dm_attrs[] = {
 	&dm_attr_name.attr,
 	&dm_attr_uuid.attr,
 	&dm_attr_suspended.attr,
+	&dm_attr_rq_based_seq_io_merge_deadline.attr,
 	NULL,
 };
drivers/md/dm.c  +53 -4

 #include <linux/delay.h>
 #include <linux/wait.h>
 #include <linux/kthread.h>
+#include <linux/ktime.h>
 #include <linux/elevator.h> /* for rq_end_sector() */

 #include <trace/events/block.h>
···
 	struct task_struct *kworker_task;

 	/* for request-based merge heuristic in dm_request_fn() */
-	sector_t last_rq_pos;
+	unsigned seq_rq_merge_deadline_usecs;
 	int last_rq_rw;
+	sector_t last_rq_pos;
+	ktime_t last_rq_start_time;
 };
···
 	blk_start_request(orig);
 	atomic_inc(&md->pending[rq_data_dir(orig)]);

-	md->last_rq_pos = rq_end_sector(orig);
-	md->last_rq_rw = rq_data_dir(orig);
+	if (md->seq_rq_merge_deadline_usecs) {
+		md->last_rq_pos = rq_end_sector(orig);
+		md->last_rq_rw = rq_data_dir(orig);
+		md->last_rq_start_time = ktime_get();
+	}

 	/*
 	 * Hold the md reference here for the in-flight I/O.
···
 	 * See the comment in rq_completed() too.
 	 */
 	dm_get(md);
+}
+
+#define MAX_SEQ_RQ_MERGE_DEADLINE_USECS 100000
+
+ssize_t dm_attr_rq_based_seq_io_merge_deadline_show(struct mapped_device *md, char *buf)
+{
+	return sprintf(buf, "%u\n", md->seq_rq_merge_deadline_usecs);
+}
+
+ssize_t dm_attr_rq_based_seq_io_merge_deadline_store(struct mapped_device *md,
+						     const char *buf, size_t count)
+{
+	unsigned deadline;
+
+	if (!dm_request_based(md))
+		return count;
+
+	if (kstrtouint(buf, 10, &deadline))
+		return -EINVAL;
+
+	if (deadline > MAX_SEQ_RQ_MERGE_DEADLINE_USECS)
+		deadline = MAX_SEQ_RQ_MERGE_DEADLINE_USECS;
+
+	md->seq_rq_merge_deadline_usecs = deadline;
+
+	return count;
+}
+
+static bool dm_request_peeked_before_merge_deadline(struct mapped_device *md)
+{
+	ktime_t kt_deadline;
+
+	if (!md->seq_rq_merge_deadline_usecs)
+		return false;
+
+	kt_deadline = ns_to_ktime((u64)md->seq_rq_merge_deadline_usecs * NSEC_PER_USEC);
+	kt_deadline = ktime_add_safe(md->last_rq_start_time, kt_deadline);
+
+	return !ktime_after(ktime_get(), kt_deadline);
 }

 /*
···
 			continue;
 		}

-		if (md_in_flight(md) && rq->bio && rq->bio->bi_vcnt == 1 &&
+		if (dm_request_peeked_before_merge_deadline(md) &&
+		    md_in_flight(md) && rq->bio && rq->bio->bi_vcnt == 1 &&
 		    md->last_rq_pos == pos && md->last_rq_rw == rq_data_dir(rq))
 			goto delay_and_out;

···
 	q = blk_init_allocated_queue(md->queue, dm_request_fn, NULL);
 	if (!q)
 		return 0;
+
+	/* disable dm_request_fn's merge heuristic by default */
+	md->seq_rq_merge_deadline_usecs = 0;

 	md->queue = q;
 	dm_init_md_queue(md);
drivers/md/dm.h  +4

 	return !maxlen || strlen(result) + 1 >= maxlen;
 }

+ssize_t dm_attr_rq_based_seq_io_merge_deadline_show(struct mapped_device *md, char *buf);
+ssize_t dm_attr_rq_based_seq_io_merge_deadline_store(struct mapped_device *md,
+						     const char *buf, size_t count);
+
 #endif