CFQ (Complete Fairness Queueing)
================================

The main aim of the CFQ scheduler is to provide fair allocation of disk
I/O bandwidth to all processes that request I/O operations.

CFQ maintains a per-process queue for processes that issue synchronous
requests. Asynchronous requests from all processes are batched together
according to the I/O priority of their process.

CFQ ioscheduler tunables
========================

slice_idle
----------
This specifies how long CFQ should idle waiting for the next request on
certain cfq queues (for sequential workloads) and service trees (for random
workloads) before the queue is expired and CFQ selects the next queue to
dispatch from.

By default slice_idle is non-zero, which means CFQ idles on queues and
service trees. This can be very helpful on highly seeky media like
single-spindle SATA/SAS disks, where it cuts down the overall number of
seeks and improves throughput.

Setting slice_idle to 0 removes all idling at the queue/service-tree level
and should give an overall throughput improvement on faster storage devices
like multiple SATA/SAS disks in a hardware RAID configuration. The downside
is that the isolation provided from WRITES also goes down and the notion of
IO priority becomes weaker.

So depending on storage and workload, it might be useful to set slice_idle=0.
In general, for SATA/SAS disks and software RAID of SATA/SAS disks, keeping
slice_idle enabled should be useful. For any configuration where multiple
spindles sit behind a single LUN (host-based hardware RAID controller or
storage arrays), setting slice_idle=0 might end up giving better throughput
and acceptable latencies.
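Like the rest of the tunables in this section, slice_idle is exposed as a
small text file under sysfs. The following is a minimal user-space sketch
of setting slice_idle=0 for one disk; "sda" is a placeholder device name
(any device using the cfq scheduler works), and the program needs root
privileges to write the file.

	#include <stdio.h>

	int main(void)
	{
		const char *path = "/sys/block/sda/queue/iosched/slice_idle";
		FILE *f = fopen(path, "w");

		if (!f) {
			perror(path);	/* e.g. not root, or device not using cfq */
			return 1;
		}
		fprintf(f, "0\n");	/* 0 disables queue/service-tree idling */
		fclose(f);
		return 0;
	}

The same pattern applies to every tunable below: each lives under
/sys/block/<device>/queue/iosched/.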
back_seek_max
-------------
This specifies, in Kbytes, the maximum "distance" for backward seeking.
The distance is the amount of space from the current head location to the
sectors that lie behind the head.

This parameter allows the scheduler to anticipate requests in the "backward"
direction and consider them as being the "next" if they are within this
distance of the current head location.

back_seek_penalty
-----------------
This parameter is used to compute the cost of backward seeking. If the
backward distance of a request is just 1/back_seek_penalty of its distance
from a "front" request, then the seek costs of the two requests are
considered equivalent.

The scheduler will then not bias toward one or the other request (otherwise
the scheduler would bias toward the front request). The default value of
back_seek_penalty is 2.

fifo_expire_async
-----------------
This parameter sets the timeout of asynchronous requests. The default
value is 248ms.

fifo_expire_sync
----------------
This parameter sets the timeout of synchronous requests. The default value
is 124ms. To favor synchronous requests over asynchronous ones, this value
should be decreased relative to fifo_expire_async.

slice_async
-----------
This parameter is the same as slice_sync but for the asynchronous queue.
The default value is 40ms.

slice_async_rq
--------------
This parameter limits the number of asynchronous requests dispatched to the
device request queue within a queue's slice time. The maximum number of
requests allowed to be dispatched also depends on the I/O priority. The
default value is 2.

slice_sync
----------
When a queue is selected for execution, its IO requests are executed only
for a certain amount of time (time_slice) before switching to another
queue. This parameter is used to calculate the time slice of the
synchronous queue.

time_slice is computed using the following equation:
time_slice = slice_sync + (slice_sync/5 * (4 - prio))
To increase the time_slice of the synchronous queue, increase the value of
slice_sync. The default value is 100ms.
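As a worked example of the equation above, the sketch below evaluates it
for each of the eight I/O priority levels, assuming the default slice_sync
of 100ms and plain integer arithmetic:

	#include <stdio.h>

	int main(void)
	{
		int slice_sync = 100;	/* ms, the default */

		for (int prio = 0; prio <= 7; prio++) {
			int time_slice = slice_sync + (slice_sync / 5) * (4 - prio);
			printf("prio %d -> time_slice %3d ms\n", prio, time_slice);
		}
		return 0;
	}

With the default slice_sync this gives 180ms at the highest priority
(prio 0), exactly 100ms at the default priority of 4, and 40ms at the
lowest priority (prio 7).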
quantum
-------
This specifies the number of requests dispatched to the device queue. Within
a queue's time slice, a request will not be dispatched if the number of
requests in the device exceeds this parameter. This parameter is used for
synchronous requests.

In the case of storage with several disks, this setting can limit the
parallel processing of requests. Therefore, increasing the value can improve
performance, although it can also increase the latency of some I/O because
more requests are in flight.

CFQ IOPS Mode for group scheduling
==================================
The basic CFQ design provides priority-based time slices: a higher-priority
process gets a bigger time slice and a lower-priority process gets a smaller
one. Measuring time becomes harder if storage is fast and supports NCQ, in
which case it is better to dispatch multiple requests from multiple cfq
queues into the request queue at a time. In such a scenario, it is not
possible to accurately measure the time consumed by a single queue.

What is possible, though, is to measure the number of requests dispatched
from a single queue while also allowing dispatch from multiple cfq queues
at the same time. This effectively becomes fairness in terms of IOPS
(IO operations per second).

If one sets slice_idle=0 and the storage supports NCQ, CFQ internally
switches to IOPS mode and starts providing fairness in terms of the number
of requests dispatched. Note that this mode switch takes effect only for
group scheduling. For non-cgroup users nothing should change.

CFQ IO scheduler Idling Theory
==============================
Idling on a queue primarily means waiting for the next request to arrive
on the same queue after completion of a request. While idling, CFQ will not
dispatch requests from other cfq queues even if requests are pending there.

The rationale behind idling is that it can cut down the number of seeks on
rotational media. For example, if a process is doing dependent sequential
reads (the next read comes in only after completion of the previous one),
then not dispatching requests from other queues helps: the disk head does
not move and sequential IO keeps being dispatched from one queue.

CFQ has the following service trees, and the various queues are put on
these trees:

	sync-idle	sync-noidle	async

All cfq queues doing synchronous sequential IO go onto the sync-idle tree.
On this tree CFQ idles on each queue individually.

All synchronous non-sequential queues go onto the sync-noidle tree, as do
any requests marked with REQ_NOIDLE. On this tree CFQ does not idle on
individual queues; instead it idles on the whole group of queues, i.e. the
tree. So if there are 4 queues waiting for IO to dispatch, CFQ will idle
only once the last queue has dispatched its IO and there is no more IO on
this service tree.

All async writes go onto the async service tree. There is no idling on
async queues.

CFQ has some optimizations for SSDs: if it detects non-rotational media
that supports a higher queue depth (multiple requests in flight at a time),
it cuts down on the idling of individual queues. All queues then move to
the sync-noidle tree and only tree idling remains. This tree idling
provides isolation from the buffered write queues on the async tree.

FAQ
===
Q1. Why idle at all on queues marked with REQ_NOIDLE?

A1. We only do tree idling (on all queues on the sync-noidle tree) for
    queues marked with REQ_NOIDLE. This helps provide isolation from all
    the sync-idle queues. Otherwise, in the presence of many sequential
    readers, other synchronous IO might not get a fair share of the disk.

    For example, say there are 10 sequential readers doing IO and each
    gets 100ms. If a REQ_NOIDLE request comes in, it will be scheduled
    roughly after 1 second. If, after completion of the REQ_NOIDLE
    request, we do not idle and a couple of milliseconds later another
    REQ_NOIDLE request comes in, it will again be scheduled after 1
    second. Repeat this and notice how such a workload can lose its share
    of the disk and suffer due to multiple sequential readers.

    fsync can generate dependent IO of this kind: a bunch of data is
    written in the context of the fsync, and later some journaling data
    is written. The journaling data comes in only after the fsync has
    finished its IO (at least for ext4 that seemed to be the case). Now,
    if one decides not to idle on the fsync thread due to REQ_NOIDLE, the
    next journaling write will not get scheduled for another second. A
    process doing small fsyncs will suffer badly in the presence of
    multiple sequential readers.

    Hence, doing tree idling for threads that use the REQ_NOIDLE flag on
    requests provides isolation from multiple sequential readers while at
    the same time not idling on individual threads.

Q2. When to specify REQ_NOIDLE?

A2. Whenever one is doing a synchronous write and does not expect more
    writes to be dispatched from the same context soon, one should be able
    to specify REQ_NOIDLE on the writes, and that should work well for
    most cases.
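    As a kernel-side sketch only: in kernels of this document's era
    (the two-argument submit_bio() below matches v3.7; later kernels
    changed the interface), a filesystem or driver could tag such a
    one-off synchronous write roughly like this, assuming the bio has
    already been set up by the caller:

	#include <linux/fs.h>
	#include <linux/bio.h>

	static void submit_one_off_sync_write(struct bio *bio)
	{
		/*
		 * REQ_SYNC marks the write synchronous; REQ_NOIDLE tells
		 * CFQ not to idle on this queue individually, so it is
		 * served from the sync-noidle service tree instead.
		 */
		submit_bio(WRITE | REQ_SYNC | REQ_NOIDLE, bio);
	}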