Notes on the Generic Block Layer Rewrite in Linux 2.5
=====================================================

Notes Written on Jan 15, 2002:
	Jens Axboe <jens.axboe@oracle.com>
	Suparna Bhattacharya <suparna@in.ibm.com>

Last Updated May 2, 2002
September 2003: Updated I/O Scheduler portions
	Nick Piggin <npiggin@kernel.dk>

Introduction:

These are some notes describing some aspects of the 2.5 block layer in the
context of the bio rewrite. The idea is to bring out some of the key
changes and a glimpse of the rationale behind those changes.

Please mail corrections & suggestions to suparna@in.ibm.com.

Credits:
--------

2.5 bio rewrite:
	Jens Axboe <jens.axboe@oracle.com>

Many aspects of the generic block layer redesign were driven by and evolved
over discussions, prior patches and the collective experience of several
people. See sections 8 and 9 for a list of some related references.

The following people helped with review comments and inputs for this
document:
	Christoph Hellwig <hch@infradead.org>
	Arjan van de Ven <arjanv@redhat.com>
	Randy Dunlap <rdunlap@xenotime.net>
	Andre Hedrick <andre@linux-ide.org>

The following people helped with fixes/contributions to the bio patches
while it was still work-in-progress:
	David S. Miller <davem@redhat.com>


Description of Contents:
------------------------

1. Scope for tuning of logic to various needs
  1.1 Tuning based on device or low level driver capabilities
	- Per-queue parameters
	- Highmem I/O support
	- I/O scheduler modularization
  1.2 Tuning based on high level requirements/capabilities
	1.2.1 Request Priority/Latency
  1.3 Direct access/bypass to lower layers for diagnostics and special
      device operations
	1.3.1 Pre-built commands
2. New flexible and generic but minimalist i/o structure or descriptor
   (instead of using buffer heads at the i/o layer)
  2.1 Requirements/Goals addressed
  2.2 The bio struct in detail (multi-page io unit)
  2.3 Changes in the request structure
3. Using bios
  3.1 Setup/teardown (allocation, splitting)
  3.2 Generic bio helper routines
    3.2.1 Traversing segments and completion units in a request
    3.2.2 Setting up DMA scatterlists
    3.2.3 I/O completion
    3.2.4 Implications for drivers that do not interpret bios (don't handle
	  multiple segments)
  3.3 I/O submission
4. The I/O scheduler
5. Scalability related changes
  5.1 Granular locking: Removal of io_request_lock
  5.2 Prepare for transition to 64 bit sector_t
6. Other Changes/Implications
  6.1 Partition re-mapping handled by the generic block layer
7. A few tips on migration of older drivers
8. A list of prior/related/impacted patches/ideas
9. Other References/Discussion Threads

---------------------------------------------------------------------------

Bio Notes
---------

Let us discuss the changes in the context of how some overall goals for the
block layer are addressed.

1. Scope for tuning the generic logic to satisfy various requirements

The block layer design supports adaptable abstractions to handle common
processing with the ability to tune the logic to an appropriate extent
depending on the nature of the device and the requirements of the caller.
One of the objectives of the rewrite was to increase the degree of tunability
and to enable higher level code to utilize underlying device/driver
capabilities to the maximum extent for better i/o performance.
This is especially important in the light of ever improving hardware
capabilities and application/middleware software designed to take advantage
of these capabilities.

1.1 Tuning based on low level device / driver capabilities

Sophisticated devices with large built-in caches, intelligent i/o scheduling
optimizations, high memory DMA support, etc may find some of the
generic processing an overhead, while for less capable devices the
generic functionality is essential for performance or correctness reasons.
Knowledge of some of the capabilities or parameters of the device should be
used at the generic block layer to make the right decisions on
behalf of the driver.

How is this achieved?

Tuning at a per-queue level:

i. Per-queue limits/values exported to the generic layer by the driver

Various parameters that the generic i/o scheduler logic uses are set at
a per-queue level (e.g. maximum request size, maximum number of segments in
a scatter-gather list, logical block size).

Some parameters that were earlier available as global arrays indexed by
major/minor are now directly associated with the queue. Some of these may
move into the block device structure in the future. Some characteristics
have been incorporated into a queue flags field rather than separate fields
in themselves. There are blk_queue_xxx functions to set the parameters,
rather than update the fields directly.

Some new queue property settings:

	blk_queue_bounce_limit(q, u64 dma_address)
		Enable I/O to highmem pages, dma_address being the
		limit. No highmem default.

	blk_queue_max_sectors(q, max_sectors)
		Sets two variables that limit the size of the request.

		- The request queue's max_sectors, which is a soft size in
		  units of 512 byte sectors, and could be dynamically varied
		  by the core kernel.

		- The request queue's max_hw_sectors, which is a hard limit
		  and reflects the maximum size request a driver can handle
		  in units of 512 byte sectors.

		The default for both max_sectors and max_hw_sectors is
		255. The upper limit of max_sectors is 1024.

	blk_queue_max_phys_segments(q, max_segments)
		Maximum physical segments you can handle in a request. 128
		default (driver limit). (See 3.2.2)

	blk_queue_max_hw_segments(q, max_segments)
		Maximum dma segments the hardware can handle in a request. 128
		default (host adapter limit, after dma remapping).
		(See 3.2.2)

	blk_queue_max_segment_size(q, max_seg_size)
		Maximum size of a clustered segment, 64kB default.

	blk_queue_logical_block_size(q, logical_block_size)
		Lowest possible sector size that the hardware can operate
		on, 512 bytes default.

New queue flags:

	QUEUE_FLAG_CLUSTER (see 3.2.2)
	QUEUE_FLAG_QUEUED (see 3.2.4)
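As a concrete illustration, below is a minimal, hypothetical sketch of a
driver setting these properties at queue initialization time, using the
interfaces listed above (2.5/2.6-era names; later kernels renamed several of
them). The foo_* naming and the specific limit values are made up:

	#include <linux/blkdev.h>

	static void foo_set_queue_limits(struct request_queue *q)
	{
		/* device can DMA from anywhere: never bounce highmem pages */
		blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);

		blk_queue_max_sectors(q, 128);		  /* request size cap */
		blk_queue_max_phys_segments(q, 64);	  /* driver sg-list limit */
		blk_queue_max_hw_segments(q, 64);	  /* adapter limit, post-remap */
		blk_queue_max_segment_size(q, 64 * 1024); /* clustered segment cap */
		blk_queue_logical_block_size(q, 512);	  /* smallest addressable unit */
	}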
ii. High-mem i/o capabilities are now considered the default

The generic bounce buffer logic, present in 2.4, where the block layer would
by default copyin/out i/o requests on high-memory buffers to low-memory
buffers assuming that the driver wouldn't be able to handle it directly, has
been changed in 2.5. The bounce logic is now applied only for memory ranges
for which the device cannot handle i/o. A driver can specify this by
setting the queue bounce limit for the request queue for the device
(blk_queue_bounce_limit()). This avoids the inefficiencies of the copyin/out
where a device is capable of handling high memory i/o.

In order to enable high-memory i/o where the device is capable of supporting
it, the pci dma mapping routines and associated data structures have now been
modified to accomplish a direct page -> bus translation, without requiring
a virtual address mapping (unlike the earlier scheme of virtual address
-> bus translation). So this works uniformly for high-memory pages (which
do not have a corresponding kernel virtual address space mapping) and
low-memory pages.

Note: Please refer to Documentation/DMA-API-HOWTO.txt for a discussion
on PCI high mem DMA aspects and mapping of scatter gather lists, and support
for 64 bit PCI.

Special handling is required only for cases where i/o needs to happen on
pages at physical memory addresses beyond what the device can support. In
these cases, a bounce bio representing a buffer from the supported memory
range is used for performing the i/o with copyin/copyout as needed depending
on the type of the operation. For example, in case of a read operation, the
data read has to be copied to the original buffer on i/o completion, so a
callback routine is set up to do this, while for write, the data is copied
from the original buffer to the bounce buffer prior to issuing the
operation. Since an original buffer may be in a high memory area that's not
mapped in kernel virtual addr, a kmap operation may be required for
performing the copy, and special care may be needed in the completion path
as it may not be in irq context. Special care is also required (by way of
GFP flags) when allocating bounce buffers, to avoid certain highmem
deadlock possibilities.

It is also possible that a bounce buffer may be allocated from a high-memory
area that's not mapped in kernel virtual addr, but within the range that the
device can use directly; so the bounce page may need to be kmapped during
copy operations. [Note: This does not hold in the current implementation,
though]

There are some situations when pages from high memory may need to
be kmapped, even if bounce buffers are not necessary. For example a device
may need to abort DMA operations and revert to PIO for the transfer, in
which case a virtual mapping of the page is required. For SCSI it is also
done in some scenarios where the low level driver cannot be trusted to
handle a single sg entry correctly. The driver is expected to perform the
kmaps as needed on such occasions as appropriate. A driver could also use
the blk_queue_bounce() routine on its own to bounce highmem i/o to low
memory for specific requests if so desired.
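To make the kmap discussion concrete, here is a rough, hypothetical PIO
fragment copying device data into a (possibly highmem) segment. The
single-argument kmap_atomic() spelling is a later interface and an
assumption on our part:

	#include <linux/bio.h>
	#include <linux/highmem.h>

	/*
	 * Hypothetical PIO transfer of one segment: the bio_vec may point
	 * at a highmem page with no kernel virtual mapping, so map it
	 * before the CPU touches the data.
	 */
	static void foo_pio_read_segment(struct bio_vec *bvec, const void *devbuf)
	{
		char *dst = kmap_atomic(bvec->bv_page);

		memcpy(dst + bvec->bv_offset, devbuf, bvec->bv_len);
		kunmap_atomic(dst);
	}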
iii. The i/o scheduler algorithm itself can be replaced/set as appropriate

As in 2.4, it is possible to plug in a brand new i/o scheduler for a
particular queue or pick from (copy) existing generic schedulers and
replace/override certain portions of it. The 2.5 rewrite provides improved
modularization of the i/o scheduler. There are more pluggable callbacks,
e.g. for init, add request, extract request, which makes it possible to
abstract specific i/o scheduling algorithm aspects and details outside of
the generic loop. It also makes it possible to completely hide the
implementation details of the i/o scheduler from block drivers.

I/O scheduler wrappers are to be used instead of accessing the queue
directly. See section 4, The I/O scheduler, for details.

1.2 Tuning Based on High level code capabilities

i. Application capabilities for raw i/o

This comes from some of the high-performance database/middleware
requirements where an application prefers to make its own i/o scheduling
decisions based on an understanding of the access patterns and i/o
characteristics.

ii. High performance filesystems or other higher level kernel code's
capabilities

Kernel components like filesystems could also take their own i/o scheduling
decisions for optimizing performance. Journalling filesystems may need
some control over i/o ordering.

What kind of support exists at the generic block layer for this?

The flags and rw fields in the bio structure can be used for some tuning
from above, e.g. indicating that an i/o is just a readahead request, or
priority settings (currently unused). As far as user applications are
concerned they would need an additional mechanism, either via open flags or
ioctls, or some other upper level mechanism, to communicate such settings
to block.

1.2.1 Request Priority/Latency

Todo/Under discussion:
Arjan's proposed request priority scheme allows higher levels some broad
  control (high/med/low) over the priority of an i/o request vs other pending
  requests in the queue. For example it allows reads for bringing in an
  executable page on demand to be given a higher priority over pending write
  requests which haven't aged too much on the queue. Potentially this priority
  could even be exposed to applications in some manner, providing higher level
  tunability. Time based aging avoids starvation of lower priority
  requests. Some bits in the bi_opf flags field in the bio structure are
  intended to be used for this priority information.
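As a small, hypothetical illustration of the flag-based hinting described in
1.2 above, a submitter could mark a bio as readahead before submitting it.
Note that REQ_RAHEAD is the later spelling of this hint (this document
predates the name), so treat the identifier as an assumption:

	#include <linux/bio.h>

	static void foo_mark_readahead(struct bio *bio)
	{
		/* readahead hint: the i/o may be dropped under load */
		bio->bi_opf |= REQ_RAHEAD;
	}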
1.3 Direct Access to Low level Device/Driver Capabilities (Bypass mode)
    (e.g. Diagnostics, Systems Management)

There are situations where high-level code needs to have direct access to
the low level device capabilities or requires the ability to issue commands
to the device bypassing some of the intermediate i/o layers.
These could, for example, be special control commands issued through ioctl
interfaces, or could be raw read/write commands that stress the drive's
capabilities for certain kinds of fitness tests. Having direct interfaces at
multiple levels without having to pass through upper layers makes
it possible to perform bottom up validation of the i/o path, layer by
layer, starting from the media.

The normal i/o submission interfaces, e.g. submit_bio, could be bypassed
for specially crafted requests which such ioctl or diagnostics
interfaces would typically use, and the elevator add_request routine
can instead be used to directly insert such requests in the queue, or
preferably the blk_do_rq routine can be used to place the request on the
queue and wait for completion. Alternatively, sometimes the caller might just
invoke a lower level driver specific interface with the request as a
parameter.

If the request is a means for passing on special information associated with
the command, then such information is associated with the request->special
field (rather than misuse the request->buffer field which is meant for the
request data buffer's virtual mapping).

For passing request data, the caller must build up a bio descriptor
representing the concerned memory buffer if the underlying driver interprets
bio segments or uses the block layer end*request* functions for i/o
completion. Alternatively one could directly use the request->buffer field to
specify the virtual address of the buffer, if the driver expects buffer
addresses passed in this way and ignores bio entries for the request type
involved. In the latter case, the driver would modify and manage the
request->buffer, request->sector and request->nr_sectors or
request->current_nr_sectors fields itself rather than using the block layer
end_request or end_that_request_first completion interfaces.
(See 2.3 or Documentation/block/request.txt for a brief explanation of
the request structure fields)

[TBD: end_that_request_last should be usable even in this case;
Perhaps an end_that_direct_request_first routine could be implemented to make
handling direct requests easier for such drivers; Also for drivers that
expect bios, a helper function could be provided for setting up a bio
corresponding to a data buffer]

<JENS: I don't understand the above, why is end_that_request_first() not
usable? Or _last for that matter. I must be missing something>
<SUP: What I meant here was that if the request doesn't have a bio, then
 end_that_request_first doesn't modify nr_sectors or current_nr_sectors,
 and hence can't be used for advancing request state settings on the
 completion of partial transfers. The driver has to modify these fields
 directly by hand.
 This is because end_that_request_first only iterates over the bio list,
 and always returns 0 if there are none associated with the request.
 _last works OK in this case, and is not a problem, as I mentioned earlier
>

1.3.1 Pre-built Commands

A request can be created with a pre-built custom command to be sent directly
to the device. The cmd block in the request structure has room for filling
in the command bytes. (i.e. rq->cmd is now 16 bytes in size, and meant for
command pre-building, and the type of the request is now indicated
through rq->flags instead of via rq->cmd)

The request structure flags can be set up to indicate the type of request
in such cases (REQ_PC: direct packet command passed to driver, REQ_BLOCK_PC:
packet command issued via blk_do_rq, REQ_SPECIAL: special request).

It can help to pre-build device commands for requests in advance.
Drivers can now specify a request prepare function (q->prep_rq_fn) that the
block layer would invoke to pre-build device commands for a given request,
or perform other preparatory processing for the request. This routine is
called by elv_next_request(), i.e. typically just before servicing a request.
(The prepare function would not be called for requests that have RQF_DONTPREP
enabled.)

Aside:
  Pre-building could possibly even be done early, i.e. before placing the
  request on the queue, rather than construct the command on the fly in the
  driver while servicing the request queue when it may affect latencies in
  interrupt context or responsiveness in general. One way to add early
  pre-building would be to do it whenever we fail to merge on a request.
  Now REQ_NOMERGE is set in the request flags to skip this one in the future,
  which means that it will not change before we feed it to the device. So
  the pre-builder hook can be invoked there.
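A rough, hypothetical sketch of such a prepare function, written against the
2.6-era interface (blk_queue_prep_rq and the BLKPREP_* return codes;
FOO_CMD_RW and the command encoding are made up for illustration):

	#include <linux/blkdev.h>

	static int foo_prep_rq(struct request_queue *q, struct request *rq)
	{
		/* pre-build the 16-byte device command in rq->cmd */
		memset(rq->cmd, 0, sizeof(rq->cmd));
		rq->cmd[0] = FOO_CMD_RW;	/* hypothetical opcode */
		/* ... encode rq->sector and rq->nr_sectors as the device
		   expects ... */
		rq->flags |= REQ_DONTPREP;	/* don't rebuild on requeue */
		return BLKPREP_OK;		/* command is ready */
	}

	/* registered once at queue setup time: */
	blk_queue_prep_rq(q, foo_prep_rq);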
2. Flexible and generic but minimalist i/o structure/descriptor

2.1 Reason for a new structure and requirements addressed

Prior to 2.5, buffer heads were used as the unit of i/o at the generic block
layer, and the low level request structure was associated with a chain of
buffer heads for a contiguous i/o request. This led to certain inefficiencies
when it came to large i/o requests and readv/writev style operations, as it
forced such requests to be broken up into small chunks before being passed
on to the generic block layer, only to be merged by the i/o scheduler
when the underlying device was capable of handling the i/o in one shot.
Also, using the buffer head as an i/o structure for i/os that didn't
originate from the buffer cache unnecessarily added to the weight of the
descriptors which were generated for each such chunk.

The following were some of the goals and expectations considered in the
redesign of the block i/o data structure in 2.5.

i.   Should be appropriate as a descriptor for both raw and buffered i/o -
     avoid cache related fields which are irrelevant in the direct/page i/o
     path, or filesystem block size alignment restrictions which may not be
     relevant for raw i/o.
ii.  Ability to represent high-memory buffers (which do not have a virtual
     address mapping in kernel address space).
iii. Ability to represent large i/os w/o unnecessarily breaking them up (i.e.
     greater than PAGE_SIZE chunks in one shot)
iv.  At the same time, ability to retain independent identity of i/os from
     different sources or i/o units requiring individual completion (e.g.
     for latency reasons)
v.   Ability to represent an i/o involving multiple physical memory segments
     (including non-page aligned page fragments, as specified via
     readv/writev) without unnecessarily breaking it up, if the underlying
     device is capable of handling it.
vi.  Preferably should be based on a memory descriptor structure that can be
     passed around different types of subsystems or layers, maybe even
     networking, without duplication or extra copies of data/descriptor
     fields themselves in the process
vii. Ability to handle the possibility of splits/merges as the structure
     passes through layered drivers (lvm, md, evms), with minimal overhead.

The solution was to define a new structure (bio) for the block layer,
instead of using the buffer head structure (bh) directly, the idea being
avoidance of some associated baggage and limitations. The bio structure
is uniformly used for all i/o at the block layer; it forms a part of the
bh structure for buffered i/o, and in the case of raw/direct i/o kiobufs are
mapped to bio structures.

2.2 The bio struct

The bio structure uses a vector representation pointing to an array of tuples
of <page, offset, len> to describe the i/o buffer, and has various other
fields describing i/o parameters and state that needs to be maintained for
performing the i/o.

Notice that this representation means that a bio has no virtual address
mapping at all (unlike buffer heads).
struct bio_vec {
       struct page     *bv_page;
       unsigned short  bv_len;
       unsigned short  bv_offset;
};

/*
 * main unit of I/O for the block layer and lower layers (ie drivers)
 */
struct bio {
       struct bio          *bi_next;    /* request queue link */
       struct block_device *bi_bdev;    /* target device */
       unsigned long       bi_flags;    /* status, command, etc */
       unsigned long       bi_opf;      /* low bits: r/w, high: priority */

       unsigned int     bi_vcnt;     /* how many bio_vec's */
       struct bvec_iter bi_iter;     /* current index into bio_vec array */

       unsigned int     bi_size;     /* total size in bytes */
       unsigned short   bi_phys_segments; /* segments after physaddr coalesce*/
       unsigned short   bi_hw_segments; /* segments after DMA remapping */
       unsigned int     bi_max;      /* max bio_vecs we can hold
                                        used as index into pool */
       struct bio_vec   *bi_io_vec;  /* the actual vec list */
       bio_end_io_t     *bi_end_io;  /* bi_end_io (bio) */
       atomic_t         bi_cnt;      /* pin count: free when it hits zero */
       void             *bi_private;
};

With this multipage bio design:

- Large i/os can be sent down in one go using a bio_vec list consisting
  of an array of <page, offset, len> fragments (similar to the way fragments
  are represented in the zero-copy network code)
- Splitting of an i/o request across multiple devices (as in the case of
  lvm or raid) is achieved by cloning the bio (where the clone points to
  the same bi_io_vec array, but with the index and size accordingly modified)
- A linked list of bios is used as before for unrelated merges (*) - this
  avoids reallocs and makes independent completions easier to handle.
- Code that traverses the req list can find all the segments of a bio
  by using rq_for_each_segment. This handles the fact that a request
  has multiple bios, each of which can have multiple segments.
- Drivers which can't process a large bio in one shot can use the bi_iter
  field to keep track of the next bio_vec entry to process.
  (e.g. a 1MB bio_vec needs to be handled in max 128kB chunks for IDE)
  [TBD: Should preferably also have a bi_voffset and bi_vlen to avoid
  modifying bi_offset and len fields]

(*) unrelated merges -- a request ends up containing two or more bios that
    didn't originate from the same place.

bi_end_io() i/o callback gets called on i/o completion of the entire bio.

At a lower level, drivers build a scatter gather list from the merged bios.
The scatter gather list is in the form of an array of <page, offset, len>
entries with their corresponding dma address mappings filled in at the
appropriate time. As an optimization, contiguous physical pages can be
covered by a single entry where <page> refers to the first page and <len>
covers the range of pages (up to 16 contiguous pages could be covered this
way). There is a helper routine (blk_rq_map_sg) which drivers can use to
build the sg list.

Note: Right now the only user of bios with more than one page is ll_rw_kio,
which in turn means that only raw I/O uses it (direct i/o may not work
right now). The intent however is to enable clustering of pages etc to
become possible. The pagebuf abstraction layer from SGI also uses multi-page
bios, but that is currently not included in the stock development kernels.
The same is true of Andrew Morton's work-in-progress multipage bio writeout
and readahead patches.
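To illustrate the cloning mentioned above, here is a rough, hypothetical
stacking-driver fragment. It uses bio_clone_fast() (described in 3.1) and
field names matching the struct shown above; foo_bio_set, foo_lower_bdev and
FOO_START are assumptions for illustration, and the exact signatures have
shifted across kernel versions:

	#include <linux/bio.h>

	#define FOO_START 2048			/* hypothetical sector offset */

	static struct bio_set *foo_bio_set;	/* assumed set up elsewhere */
	static struct block_device *foo_lower_bdev;

	static void foo_clone_end_io(struct bio *clone)
	{
		struct bio *parent = clone->bi_private;

		bio_put(clone);
		bio_endio(parent);	/* complete the original bio */
	}

	static void foo_remap_bio(struct bio *bio)
	{
		/* clone shares the parent's bi_io_vec; only the iterator
		   and target device are adjusted */
		struct bio *clone = bio_clone_fast(bio, GFP_NOIO, foo_bio_set);

		clone->bi_bdev = foo_lower_bdev;	/* retarget the clone */
		clone->bi_iter.bi_sector += FOO_START;	/* remap start sector */
		clone->bi_end_io = foo_clone_end_io;
		clone->bi_private = bio;
		generic_make_request(clone);
	}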
2.3 Changes in the Request Structure

The request structure is the structure that gets passed down to low level
drivers. The block layer make_request function builds up a request structure,
places it on the queue and invokes the driver's request_fn. The driver makes
use of the block layer helper routine elv_next_request to pull the next
request off the queue. Control or diagnostic functions might bypass block
and directly invoke underlying driver entry points passing in a specially
constructed request structure.

Only some relevant fields (mainly those which changed or may be referred
to in some of the discussion here) are listed below, not necessarily in
the order in which they occur in the structure (see include/linux/blkdev.h).
Refer to Documentation/block/request.txt for details about all the request
structure fields and a quick reference about the layers which are
supposed to use or modify those fields.

struct request {
	struct list_head queuelist;  /* Not meant to be directly accessed by
					the driver.
					Used by q->elv_next_request_fn
					rq->queue is gone
					*/
	.
	.
	unsigned char cmd[16]; /* prebuilt command data block */
	unsigned long flags;   /* also includes earlier rq->cmd settings */
	.
	.
	sector_t sector; /* this field is now of type sector_t instead of int
			    preparation for 64 bit sectors */
	.
	.

	/* Number of scatter-gather DMA addr+len pairs after
	 * physical address coalescing is performed.
	 */
	unsigned short nr_phys_segments;

	/* Number of scatter-gather addr+len pairs after
	 * physical and DMA remapping hardware coalescing is performed.
	 * This is the number of scatter-gather entries the driver
	 * will actually have to deal with after DMA mapping is done.
	 */
	unsigned short nr_hw_segments;

	/* Various sector counts */
	unsigned long nr_sectors;  /* no. of sectors left: driver modifiable */
	unsigned long hard_nr_sectors;  /* block internal copy of above */
	unsigned int current_nr_sectors; /* no. of sectors left in the
					    current segment:driver modifiable */
	unsigned long hard_cur_sectors; /* block internal copy of the above */
	.
	.
	int tag;	/* command tag associated with request */
	void *special;	/* same as before */
	char *buffer;	/* valid only for low memory buffers up to
			   current_nr_sectors */
	.
	.
	struct bio *bio, *biotail;  /* bio list instead of bh */
	struct request_list *rl;
};

See the req_ops and req_flag_bits definitions for an explanation of the
various flags available. Some bits are used by the block layer or i/o
scheduler.

The behaviour of the various sector counts is almost the same as before,
except that since we have multi-segment bios, current_nr_sectors refers
to the number of sectors in the current segment being processed, which could
be one of the many segments in the current bio (i.e. i/o completion unit).
The nr_sectors value refers to the total number of sectors in the whole
request that remain to be transferred (no change). The purpose of the
hard_xxx values is for block to remember these counts every time it hands
over the request to the driver. These values are updated by block on
end_that_request_first, i.e. every time the driver completes a part of the
transfer and invokes block end*request helpers to mark this. The
driver should not modify these values. The block layer sets up the
nr_sectors and current_nr_sectors fields (based on the corresponding
hard_xxx values and the number of bytes transferred) and updates them on
every transfer that invokes end_that_request_first. It does the same for the
buffer, bio, bio->bi_iter fields too.
The buffer field is just a virtual address mapping of the current segment
of the i/o buffer in cases where the buffer resides in low-memory. For high
memory i/o, this field is not valid and must not be used by drivers.

Code that sets up its own request structures and passes them down to
a driver needs to be careful about interoperation with the block layer helper
functions which the driver uses. (Section 1.3)

3. Using bios

3.1 Setup/Teardown

There are routines for managing the allocation, reference counting, and
freeing of bios (bio_alloc, bio_get, bio_put).

This makes use of Ingo Molnar's mempool implementation, which enables
subsystems like bio to maintain their own reserve memory pools for guaranteed
deadlock-free allocations during extreme VM load. For example, the VM
subsystem makes use of the block layer to writeout dirty pages in order to be
able to free up memory space, a case which needs careful handling. The
allocation logic draws from the preallocated emergency reserve in situations
where it cannot allocate through normal means. If the pool is empty and it
can wait, then it would trigger action that would help free up memory or
replenish the pool (without deadlocking) and wait for availability in the
pool. If it is in IRQ context, and hence not in a position to do this,
allocation could fail if the pool is empty. In general mempool always first
tries to perform allocation without having to wait, even if it means digging
into the pool as long as it is not less than 50% full.

On a free, memory is released to the pool or directly freed depending on
the current availability in the pool. The mempool interface lets the
subsystem specify the routines to be used for normal alloc and free. In the
case of bio, these routines make use of the standard slab allocator.

The caller of bio_alloc is expected to take certain steps to avoid
deadlocks, e.g. avoid trying to allocate more memory from the pool while
already holding memory obtained from the pool.
[TBD: This is a potential issue, though a rare possibility
 in the bounce bio allocation that happens in the current code, since
 it ends up allocating a second bio from the same pool while
 holding the original bio ]

Memory allocated from the pool should be released back within a limited
amount of time (in the case of bio, that would be after the i/o is
completed). This ensures that if part of the pool has been used up, some
work (in this case i/o) must already be in progress and memory would be
available when it is over. If allocating from multiple pools in the same
code path, the order or hierarchy of allocation needs to be consistent,
just the way one deals with multiple locks.

The bio_alloc routine also needs to allocate the bio_vec_list (bvec_alloc())
for a non-clone bio. There are six pools set up for different size biovecs,
so bio_alloc(gfp_mask, nr_iovecs) will allocate a vec_list of the
given size from these slabs.

The bio_get() routine may be used to hold an extra reference on a bio prior
to i/o submission, if the bio fields are likely to be accessed after the
i/o is issued (since the bio may otherwise get freed in case i/o completion
happens in the meantime).

The bio_clone_fast() routine may be used to duplicate a bio, where the clone
shares the bio_vec_list with the original bio (i.e. both point to the
same bio_vec_list). This would typically be used for splitting i/o requests
in lvm or md.
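Putting 3.1 together, here is a rough, hypothetical sketch of allocating,
submitting and completing a single-page read. It is written against roughly
the interfaces matching the bio struct shown in 2.2 (bio_add_page() is not
described in this document, and field names and signatures have changed
across kernel versions, so treat the details as assumptions):

	#include <linux/bio.h>
	#include <linux/blkdev.h>
	#include <linux/completion.h>

	static void foo_end_io(struct bio *bio)
	{
		/* called once the whole bio completes, possibly in irq
		   context; just wake up the submitter */
		struct completion *done = bio->bi_private;

		bio_put(bio);		/* drop the allocation reference */
		complete(done);
	}

	static void foo_read_page(struct block_device *bdev, sector_t sector,
				  struct page *page)
	{
		DECLARE_COMPLETION_ONSTACK(done);
		struct bio *bio = bio_alloc(GFP_NOIO, 1);  /* one bio_vec */

		bio->bi_bdev = bdev;			/* target device */
		bio->bi_iter.bi_sector = sector;	/* start sector */
		bio->bi_opf = REQ_OP_READ;
		bio_add_page(bio, page, PAGE_SIZE, 0);	/* <page, offset, len> */
		bio->bi_end_io = foo_end_io;
		bio->bi_private = &done;

		submit_bio(bio);
		wait_for_completion(&done);
	}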
3.2 Generic bio helper Routines

3.2.1 Traversing segments and completion units in a request

The macro rq_for_each_segment() should be used for traversing the bios
in the request list (drivers should avoid directly trying to do it
themselves). Using these helpers should also make it easier to cope
with block changes in the future.

	struct req_iterator iter;
	rq_for_each_segment(bio_vec, rq, iter)
		/* bio_vec is now current segment */

I/O completion callbacks are per-bio rather than per-segment, so drivers
that traverse bio chains on completion need to keep that in mind. Drivers
which don't make a distinction between segments and completion units would
need to be reorganized to support multi-segment bios.

3.2.2 Setting up DMA scatterlists

The blk_rq_map_sg() helper routine would be used for setting up scatter
gather lists from a request, so a driver need not do it on its own.

	nr_segments = blk_rq_map_sg(q, rq, scatterlist);

The helper routine provides a level of abstraction which makes it easier
to modify the internals of request to scatterlist conversion down the line
without breaking drivers. The blk_rq_map_sg routine takes care of several
things like collapsing physically contiguous segments (if QUEUE_FLAG_CLUSTER
is set) and correct segment accounting to avoid exceeding the limits which
the i/o hardware can handle, based on various queue properties.

- Prevents a clustered segment from crossing a 4GB mem boundary
- Avoids building segments that would exceed the number of physical
  memory segments that the driver can handle (phys_segments) and the
  number that the underlying hardware can handle at once, accounting for
  DMA remapping (hw_segments) (i.e. IOMMU aware limits).

Routines which the low level driver can use to set up the segment limits:

blk_queue_max_hw_segments() : Sets an upper limit of the maximum number of
hw data segments in a request (i.e. the maximum number of address/length
pairs the host adapter can actually hand to the device at once)

blk_queue_max_phys_segments() : Sets an upper limit on the maximum number
of physical data segments in a request (i.e. the largest sized scatter list
a driver could handle)

3.2.3 I/O completion

The existing generic block layer helper routines end_request,
end_that_request_first and end_that_request_last can be used for i/o
completion (and setting things up so the rest of the i/o or the next
request can be kicked off) as before. With the introduction of multi-page
bio support, end_that_request_first requires an additional argument
indicating the number of sectors completed.
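A hypothetical completion fragment using these helpers might look as follows
(2.6-era signatures; these interfaces were later replaced by blk_end_request
and eventually the blk-mq completion helpers, so treat the details as
assumptions). nsect is the number of sectors just transferred:

	static void foo_complete_sectors(struct request *rq, int nsect)
	{
		/* returns nonzero while part of the request remains */
		if (!end_that_request_first(rq, 1 /* uptodate */, nsect)) {
			blkdev_dequeue_request(rq);	/* fully done */
			end_that_request_last(rq);
		}
	}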
3.2.4 Implications for drivers that do not interpret bios (don't handle
      multiple segments)

Drivers that do not interpret bios, e.g. those which do not handle multiple
segments and do not support i/o into high memory addresses (require bounce
buffers) and expect only virtually mapped buffers, can access the rq->buffer
field. As before the driver should use current_nr_sectors to determine the
size of remaining data in the current segment (that is the maximum it can
transfer in one go unless it interprets segments), and rely on the block
layer end_request, or end_that_request_first/last to take care of all
accounting and transparent mapping of the next bio segment when a segment
boundary is crossed on completion of a transfer. (The end*request* functions
should be used only if the request has come down from the block/bio path,
not for direct access requests which only specify rq->buffer without a valid
rq->bio)

3.3 I/O Submission

The routine submit_bio() is used to submit a single i/o. Higher level i/o
routines make use of this:

(a) Buffered i/o:
The routine submit_bh() invokes submit_bio() on a bio corresponding to the
bh, allocating the bio if required. ll_rw_block() uses submit_bh() as before.

(b) Kiobuf i/o (for raw/direct i/o):
The ll_rw_kio() routine breaks up the kiobuf into page sized chunks and
maps the array to one or more multi-page bios, issuing submit_bio() to
perform the i/o on each of these.

The embedded bh array in the kiobuf structure has been removed and no
preallocation of bios is done for kiobufs. [The intent is to remove the
blocks array as well, but it's currently in there to kludge around direct
i/o.] Thus kiobuf allocation has switched back to using kmalloc rather than
vmalloc.

Todo/Observation:

 A single kiobuf structure is assumed to correspond to a contiguous range
 of data, so brw_kiovec() invokes ll_rw_kio for each kiobuf in a kiovec.
 So right now it wouldn't work for direct i/o on non-contiguous blocks.
 This is to be resolved. The eventual direction is to replace kiobuf
 by kvec's.

 Badari Pulavarty has a patch to implement direct i/o correctly using
 bio and kvec.


(c) Page i/o:
Todo/Under discussion:

 Andrew Morton's multi-page bio patches attempt to issue multi-page
 writeouts (and reads) from the page cache, by directly building up
 large bios for submission completely bypassing the usage of buffer
 heads. This work is still in progress.

 Christoph Hellwig had some code that uses bios for page-io (rather than
 bh). This isn't included in bio as yet. Christoph was also working on a
 design for representing virtual/real extents as an entity and modifying
 some of the address space ops interfaces to utilize this abstraction rather
 than buffer_heads. (This is somewhat along the lines of the SGI XFS pagebuf
 abstraction, but intended to be as lightweight as possible).

(d) Direct access i/o:
Direct access requests that do not contain bios would be submitted
differently, as discussed earlier in section 1.3.

Aside:

  Kvec i/o:

  Ben LaHaise's aio code uses a slightly different structure instead
  of kiobufs, called a kvec_cb. This contains an array of <page, offset, len>
  tuples (very much like the networking code), together with a callback
  function and data pointer.
  This is embedded into a brw_cb structure when passed to brw_kvec_async().

  Now it should be possible to directly map these kvecs to a bio. Just as
  while cloning, in this case rather than PRE_BUILT bio_vecs, we set the
  bi_io_vec array pointer to point to the veclet array in kvecs.

  TBD: In order for this to work, some changes are needed in the way
  multi-page bios are handled today. The values of the tuples in such a
  vector passed in from higher level code should not be modified by the
  block layer in the course of its request processing, since that would make
  it hard for the higher layer to continue to use the vector descriptor
  (kvec) after i/o completes. Instead, all such transient state should be
  maintained in the request structure, and passed on in some way to the
  endio completion routine.


4. The I/O scheduler

The I/O scheduler, a.k.a. elevator, is implemented in two layers: a generic
dispatch queue and specific I/O schedulers. Unless stated otherwise,
"elevator" is used to refer to both parts and "I/O scheduler" to specific
I/O schedulers.

The block layer implements the generic dispatch queue in block/*.c.
The generic dispatch queue is responsible for requeueing, handling non-fs
requests and all other subtleties.

Specific I/O schedulers are responsible for ordering normal filesystem
requests. They can also choose to delay certain requests to improve
throughput or for whatever other purpose. As the plural form indicates, there
are multiple I/O schedulers. They can be built as modules but at least one
should be built into the kernel. Each queue can choose a different one and
can also change to another one dynamically.

A block layer call to the i/o scheduler follows the convention elv_xxx().
This calls elevator_xxx_fn in the elevator switch (block/elevator.c). Oh,
xxx and xxx might not match exactly, but use your imagination. If an
elevator doesn't implement a function, the switch does nothing or some
minimal housekeeping work.

4.1. I/O scheduler API

The functions an elevator may implement are: (* are mandatory)

elevator_merge_fn		called to query requests for merge with a bio

elevator_merge_req_fn		called when two requests get merged. The one
				which gets merged into the other one will
				never be seen by the I/O scheduler again.
				IOW, after being merged, the request is gone.

elevator_merged_fn		called when a request in the scheduler has been
				involved in a merge. It is used in the
				deadline scheduler, for example, to reposition
				the request if its sorting order has changed.

elevator_allow_merge_fn		called whenever the block layer determines
				that a bio can be merged into an existing
				request safely. The io scheduler may still
				want to stop a merge at this point if it
				results in some sort of conflict internally;
				this hook allows it to do that. Note however
				that two *requests* can still be merged at a
				later time. Currently the io scheduler has no
				way to prevent that. It can only learn about
				the fact from the elevator_merge_req_fn
				callback.

elevator_dispatch_fn*		fills the dispatch queue with ready requests.
				I/O schedulers are free to postpone requests
				by not filling the dispatch queue unless
				@force is non-zero. Once dispatched, I/O
				schedulers are not allowed to manipulate the
				requests - they belong to the generic
				dispatch queue.

elevator_add_req_fn*		called to add a new request into the scheduler

elevator_former_req_fn
elevator_latter_req_fn		These return the request before or after the
				one specified in disk sort order. Used by the
				block layer to find merge possibilities.

elevator_completed_req_fn	called when a request is completed.

elevator_may_queue_fn		returns true if the scheduler wants to allow
				the current context to queue a new request
				even if it is over the queue limit. This must
				be used very carefully!!

elevator_set_req_fn
elevator_put_req_fn		Must be used to allocate and free any elevator
				specific storage for a request.

elevator_activate_req_fn	Called when device driver first sees a request.
				I/O schedulers can use this callback to
				determine when actual execution of a request
				starts.
elevator_deactivate_req_fn	Called when device driver decides to delay
				a request by requeueing it.

elevator_init_fn*
elevator_exit_fn		Allocate and free any elevator specific storage
				for a queue.
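A hypothetical skeleton of an i/o scheduler hooking into the elevator switch,
modeled on the 2.6-era registration interface (struct elevator_type and
elv_register()); the "foo" name and the foo_* callbacks are assumptions for
illustration:

	#include <linux/elevator.h>
	#include <linux/module.h>

	static struct elevator_type iosched_foo = {
		.ops = {
			.elevator_merge_fn	= foo_merge,
			.elevator_dispatch_fn	= foo_dispatch,		/* mandatory */
			.elevator_add_req_fn	= foo_add_request,	/* mandatory */
			.elevator_init_fn	= foo_init_queue,
			.elevator_exit_fn	= foo_exit_queue,
		},
		.elevator_name = "foo",
		.elevator_owner = THIS_MODULE,
	};

	static int __init foo_iosched_init(void)
	{
		elv_register(&iosched_foo);	/* returns int in later kernels */
		return 0;
	}

	static void __exit foo_iosched_exit(void)
	{
		elv_unregister(&iosched_foo);
	}

	module_init(foo_iosched_init);
	module_exit(foo_iosched_exit);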
4.2 Request flows seen by I/O schedulers

All requests seen by I/O schedulers strictly follow one of the following
three flows.

 set_req_fn ->

 i.   add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
      (deactivate_req_fn -> activate_req_fn ->)* -> completed_req_fn
 ii.  add_req_fn -> (merged_fn ->)* -> merge_req_fn
 iii. [none]

 -> put_req_fn

4.3 I/O scheduler implementation

The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
optimal disk scan and request servicing performance (based on generic
principles and device capabilities), optimized for:
i.   improved throughput
ii.  improved latency
iii. better utilization of h/w & CPU time

Characteristics:

i. Binary tree
AS and deadline i/o schedulers use red black binary trees for disk position
sorting and searching, and a fifo linked list for time-based searching. This
gives good scalability and good availability of information. Requests are
almost always dispatched in disk sort order, so a cache is kept of the next
request in sort order to prevent binary tree lookups.

This arrangement is not a generic block layer characteristic however, so
elevators may implement queues as they please.

ii. Merge hash
AS and deadline use a hash table indexed by the last sector of a request.
This enables merging code to quickly look up "back merge" candidates, even
when multiple I/O streams are being performed at once on one disk.

"Front merges", a new request being merged at the front of an existing
request, are far less common than "back merges" due to the nature of most
I/O patterns. Front merges are handled by the binary trees in AS and
deadline schedulers.

iii. Plugging the queue to batch requests in anticipation of opportunities
     for merge/sort optimizations

Plugging is an approach that the current i/o scheduling algorithm resorts to
so that it collects up enough requests in the queue to be able to take
advantage of the sorting/merging logic in the elevator. If the
queue is empty when a request comes in, then it plugs the request queue
(sort of like plugging the bathtub of a vessel to get fluid to build up)
till it fills up with a few more requests, before starting to service
the requests. This provides an opportunity to merge/sort the requests before
passing them down to the device. There are various conditions under which
the queue is unplugged (to open up the flow again), either through a
scheduled task or on demand. For example wait_on_buffer sets the unplugging
going through sync_buffer() running blk_run_address_space(mapping). Or the
caller can do it explicitly through blk_unplug(bdev). So in the read case,
the queue gets explicitly unplugged as part of waiting for completion on
that buffer.
Aside:
  This is kind of controversial territory, as it's not clear if plugging is
  always the right thing to do. Devices typically have their own queues,
  and allowing a big queue to build up in software, while letting the device
  be idle for a while, may not always make sense. The trick is to handle the
  fine balance between when to plug and when to open up. Also now that we
  have multi-page bios being queued in one shot, we may not need to wait to
  merge a big request from the broken up pieces coming by.

4.4 I/O contexts

I/O contexts provide a dynamically allocated per process data area. They may
be used in I/O schedulers, and in the block layer (could be used for i/o
statistics and priorities, for example). See *io_context in
block/ll_rw_blk.c, and as-iosched.c for an example of usage in an i/o
scheduler.


5. Scalability related changes

5.1 Granular Locking: io_request_lock replaced by a per-queue lock

The global io_request_lock has been removed as of 2.5, to avoid
the scalability bottleneck it was causing, and has been replaced by more
granular locking. The request queue structure has a pointer to the
lock to be used for that queue. As a result, locking can now be
per-queue, with a provision for sharing a lock across queues if
necessary (e.g. the scsi layer sets the queue lock pointers to the
corresponding adapter lock, which results in a per host locking
granularity). The locking semantics are the same, i.e. locking is
still imposed by the block layer, grabbing the lock before
request_fn execution, which means that lots of older drivers
should still be SMP safe. Drivers are free to drop the queue
lock themselves, if required. Drivers that explicitly used the
io_request_lock for serialization need to be modified accordingly.
Usually it's as easy as adding a global lock:

	static DEFINE_SPINLOCK(my_driver_lock);

and passing the address to that lock to blk_init_queue().

5.2 64 bit sector numbers (sector_t prepares for 64 bit support)

The sector number used in the bio structure has been changed to sector_t,
which could be defined as 64 bit in preparation for 64 bit sector support.

6. Other Changes/Implications

6.1 Partition re-mapping handled by the generic block layer

In 2.5 some of the gendisk/partition related code has been reorganized.
Now the generic block layer performs partition-remapping early and thus
provides drivers with a sector number relative to the whole device, rather
than having to take the partition number into account in order to arrive at
the true sector number. The routine blk_partition_remap() is invoked by
generic_make_request even before invoking the queue specific make_request_fn,
so the i/o scheduler also gets to operate on whole disk sector numbers. This
should typically not require changes to block drivers; a driver just never
gets to invoke its own partition sector offset calculations since all bios
sent are offset from the beginning of the device.
7. A Few Tips on Migration of older drivers

Old-style drivers that just use CURRENT and ignore clustered requests
may not need much change. The generic layer will automatically handle
clustered requests, multi-page bios, etc for the driver.

For a low performance driver or hardware that is PIO driven or just doesn't
support scatter-gather, changes should be minimal too.

The following are some points to keep in mind when converting old drivers
to bio.

Drivers should use elv_next_request to pick up requests and are no longer
supposed to handle looping directly over the request list.
(struct request->queue has been removed)

Now end_that_request_first takes an additional number_of_sectors argument.
It used to always handle just the first buffer_head in a request; now
it will loop and handle as many sectors (on a bio-segment granularity)
as specified.

Now bh->b_end_io is replaced by bio->bi_end_io, but most of the time the
right thing to use is bio_endio(bio) instead.

If the driver is dropping the io_request_lock from its request_fn strategy,
then it just needs to replace that with q->queue_lock instead.

As described in Sec 1.1, drivers can set max sector size, max segment size,
etc per queue now. Drivers that used to define their own merge functions
to handle things like this can now just use the blk_queue_* functions at
blk_init_queue time.

Drivers no longer have to map a {partition, sector offset} into the
correct absolute location anymore; this is done by the block layer, so
where a driver received a request a la this before:

	rq->rq_dev = mk_kdev(3, 5);	/* /dev/hda5 */
	rq->sector = 0;			/* first sector on hda5 */

it will now see:

	rq->rq_dev = mk_kdev(3, 0);	/* /dev/hda */
	rq->sector = 123128;		/* offset from start of disk */

As mentioned, there is no virtual mapping of a bio. For DMA, this is
not a problem as the driver probably never will need a virtual mapping.
Instead it needs a bus mapping (dma_map_page for a single segment or
use dma_map_sg for scatter gather) to be able to ship it to the driver. For
PIO drivers (or drivers that need to revert to PIO transfer once in a
while (IDE for example)), where the CPU is doing the actual data
transfer a virtual mapping is needed. If the driver supports highmem I/O,
(Sec 1.1, (ii) ) it needs to use kmap_atomic or similar to temporarily map
a bio into the virtual address space.
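Putting several of these tips together, a converted request_fn might be
skeletonized as below (a hypothetical sketch against the 2.6-era interfaces;
foo_start_io stands in for the driver's actual transfer logic):

	#include <linux/blkdev.h>

	static void foo_request_fn(struct request_queue *q)
	{
		struct request *rq;

		/* pull requests via elv_next_request() instead of walking
		   the queue directly */
		while ((rq = elv_next_request(q)) != NULL) {
			if (!blk_fs_request(rq)) {
				end_request(rq, 0);	/* fail non-fs requests */
				continue;
			}
			foo_start_io(rq);	/* driver-specific transfer */
		}
	}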
8. Prior/Related/Impacted patches

8.1. Earlier kiobuf patches (sct/axboe/chait/hch/mkp)
     - orig kiobuf & raw i/o patches (now in 2.4 tree)
     - direct kiobuf based i/o to devices (no intermediate bh's)
     - page i/o using kiobuf
     - kiobuf splitting for lvm (mkp)
     - elevator support for kiobuf request merging (axboe)
8.2. Zero-copy networking (Dave Miller)
8.3. SGI XFS - pagebuf patches - use of kiobufs
8.4. Multi-page pioent patch for bio (Christoph Hellwig)
8.5. Direct i/o implementation (Andrea Arcangeli) since 2.4.10-pre11
8.6. Async i/o implementation patch (Ben LaHaise)
8.7. EVMS layering design (IBM EVMS team)
8.8. Larger page cache size patch (Ben LaHaise) and
     Large page size (Daniel Phillips)
     => larger contiguous physical memory buffers
8.9. VM reservations patch (Ben LaHaise)
8.10. Write clustering patches ? (Marcelo/Quintela/Riel ?)
8.11. Block device in page cache patch (Andrea Arcangeli) - now in 2.4.10+
8.12. Multiple block-size transfers for faster raw i/o (Shailabh Nagar,
      Badari)
8.13  Priority based i/o scheduler - prepatches (Arjan van de Ven)
8.14  IDE Taskfile i/o patch (Andre Hedrick)
8.15  Multi-page writeout and readahead patches (Andrew Morton)
8.16  Direct i/o patches for 2.5 using kvec and bio (Badari Pulavarty)

9. Other References:

9.1 The Splice I/O Model - Larry McVoy (and subsequent discussions on lkml,
    and Linus' comments - Jan 2001)
9.2 Discussions about kiobuf and bh design on lkml between sct, linus, alan
    et al - Feb-March 2001 (many of the initial thoughts that led to bio were
    brought up in this discussion thread)
9.3 Discussions on mempool on lkml - Dec 2001.