Notes on the Generic Block Layer Rewrite in Linux 2.5
=====================================================

Notes Written on Jan 15, 2002:
	Jens Axboe <axboe@suse.de>
	Suparna Bhattacharya <suparna@in.ibm.com>

Last Updated May 2, 2002
September 2003: Updated I/O Scheduler portions
	Nick Piggin <piggin@cyberone.com.au>

Introduction:

These are some notes describing some aspects of the 2.5 block layer in the
context of the bio rewrite. The idea is to bring out some of the key
changes and a glimpse of the rationale behind those changes.

Please mail corrections & suggestions to suparna@in.ibm.com.

Credits:
---------

2.5 bio rewrite:
	Jens Axboe <axboe@suse.de>

Many aspects of the generic block layer redesign were driven by and evolved
over discussions, prior patches and the collective experience of several
people. See sections 8 and 9 for a list of some related references.

The following people helped with review comments and inputs for this
document:
	Christoph Hellwig <hch@infradead.org>
	Arjan van de Ven <arjanv@redhat.com>
	Randy Dunlap <rdunlap@xenotime.net>
	Andre Hedrick <andre@linux-ide.org>

The following people helped with fixes/contributions to the bio patches
while it was still work-in-progress:
	David S. Miller <davem@redhat.com>


Description of Contents:
------------------------

1. Scope for tuning of logic to various needs
  1.1 Tuning based on device or low level driver capabilities
	- Per-queue parameters
	- Highmem I/O support
	- I/O scheduler modularization
  1.2 Tuning based on high level requirements/capabilities
	1.2.1 I/O Barriers
	1.2.2 Request Priority/Latency
  1.3 Direct access/bypass to lower layers for diagnostics and special
      device operations
	1.3.1 Pre-built commands
2. New flexible and generic but minimalist i/o structure or descriptor
   (instead of using buffer heads at the i/o layer)
  2.1 Requirements/Goals addressed
  2.2 The bio struct in detail (multi-page io unit)
  2.3 Changes in the request structure
3. Using bios
  3.1 Setup/teardown (allocation, splitting)
  3.2 Generic bio helper routines
    3.2.1 Traversing segments and completion units in a request
    3.2.2 Setting up DMA scatterlists
    3.2.3 I/O completion
    3.2.4 Implications for drivers that do not interpret bios (don't handle
	  multiple segments)
    3.2.5 Request command tagging
  3.3 I/O submission
4. The I/O scheduler
5. Scalability related changes
  5.1 Granular locking: Removal of io_request_lock
  5.2 Prepare for transition to 64 bit sector_t
6. Other Changes/Implications
  6.1 Partition re-mapping handled by the generic block layer
7. A few tips on migration of older drivers
8. A list of prior/related/impacted patches/ideas
9. Other References/Discussion Threads

---------------------------------------------------------------------------

Bio Notes
---------

Let us discuss the changes in the context of how some overall goals for the
block layer are addressed.

1. Scope for tuning the generic logic to satisfy various requirements

The block layer design supports adaptable abstractions to handle common
processing with the ability to tune the logic to an appropriate extent
depending on the nature of the device and the requirements of the caller.
One of the objectives of the rewrite was to increase the degree of tunability
and to enable higher level code to utilize underlying device/driver
capabilities to the maximum extent for better i/o performance. This is
important especially in the light of ever improving hardware capabilities
and application/middleware software designed to take advantage of these
capabilities.

1.1 Tuning based on low level device / driver capabilities

Sophisticated devices with large built-in caches, intelligent i/o scheduling
optimizations, high memory DMA support, etc. may find some of the
generic processing an overhead, while for less capable devices the
generic functionality is essential for performance or correctness reasons.
Knowledge of some of the capabilities or parameters of the device should be
used at the generic block layer to take the right decisions on
behalf of the driver.

How is this achieved?

Tuning at a per-queue level:

i. Per-queue limits/values exported to the generic layer by the driver

Various parameters that the generic i/o scheduler logic uses are set at
a per-queue level (e.g. maximum request size, maximum number of segments in
a scatter-gather list, hardsect size).

Some parameters that were earlier available as global arrays indexed by
major/minor are now directly associated with the queue. Some of these may
move into the block device structure in the future. Some characteristics
have been incorporated into a queue flags field rather than separate fields
in themselves. There are blk_queue_xxx functions to set the parameters,
rather than update the fields directly.

Some new queue property settings:

	blk_queue_bounce_limit(q, u64 dma_address)
		Enable I/O to highmem pages, dma_address being the
		limit. No highmem default.

	blk_queue_max_sectors(q, max_sectors)
		Maximum size request you can handle in units of 512 byte
		sectors. 255 default.

	blk_queue_max_phys_segments(q, max_segments)
		Maximum physical segments you can handle in a request. 128
		default (driver limit). (See 3.2.2)

	blk_queue_max_hw_segments(q, max_segments)
		Maximum dma segments the hardware can handle in a request. 128
		default (host adapter limit, after dma remapping).
		(See 3.2.2)

	blk_queue_max_segment_size(q, max_seg_size)
		Maximum size of a clustered segment, 64kB default.

	blk_queue_hardsect_size(q, hardsect_size)
		Lowest possible sector size that the hardware can operate
		on, 512 bytes default.

New queue flags:

	QUEUE_FLAG_CLUSTER (see 3.2.2)
	QUEUE_FLAG_QUEUED (see 3.2.4)


ii. High-mem i/o capabilities are now considered the default

The generic bounce buffer logic, present in 2.4, where the block layer would
by default copy in/out i/o requests on high-memory buffers to low-memory
buffers assuming that the driver wouldn't be able to handle it directly,
has been changed in 2.5. The bounce logic is now applied only for memory
ranges for which the device cannot handle i/o. A driver can specify this by
setting the queue bounce limit for the request queue for the device
(blk_queue_bounce_limit()). This avoids the inefficiencies of the copy
in/out where a device is capable of handling high memory i/o.

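For illustration, here is a hedged sketch of how a driver might apply the
per-queue settings listed above at initialization time. The MY_* constants
and the particular limit values are hypothetical placeholders rather than
values from any real driver; only the blk_queue_* calls described above are
assumed.

#include <linux/blkdev.h>

#define MY_DMA_MASK	0xffffffffULL	/* hypothetical: device does 32-bit DMA */
#define MY_MAX_SG	64		/* hypothetical scatter-gather limit */

static void my_setup_queue_limits(request_queue_t *q)
{
	/* bounce only for pages beyond what the device can address */
	blk_queue_bounce_limit(q, MY_DMA_MASK);

	/* per-request limits consulted by the i/o scheduler when merging */
	blk_queue_max_sectors(q, 128);
	blk_queue_max_phys_segments(q, MY_MAX_SG);
	blk_queue_max_hw_segments(q, MY_MAX_SG);
	blk_queue_max_segment_size(q, 65536);

	/* device operates on standard 512 byte sectors */
	blk_queue_hardsect_size(q, 512);
}
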
In order to enable high-memory i/o where the device is capable of supporting
it, the pci dma mapping routines and associated data structures have now been
modified to accomplish a direct page -> bus translation, without requiring
a virtual address mapping (unlike the earlier scheme of virtual address
-> bus translation). So this works uniformly for high-memory pages (which
do not have a corresponding kernel virtual address space mapping) and
low-memory pages.

Note: Please refer to DMA-mapping.txt for a discussion on PCI high mem DMA
aspects and mapping of scatter gather lists, and support for 64 bit PCI.

Special handling is required only for cases where i/o needs to happen on
pages at physical memory addresses beyond what the device can support. In
these cases, a bounce bio representing a buffer from the supported memory
range is used for performing the i/o with copyin/copyout as needed depending
on the type of the operation. For example, in case of a read operation, the
data read has to be copied to the original buffer on i/o completion, so a
callback routine is set up to do this, while for write, the data is copied
from the original buffer to the bounce buffer prior to issuing the
operation. Since an original buffer may be in a high memory area that's not
mapped in kernel virtual addr, a kmap operation may be required for
performing the copy, and special care may be needed in the completion path
as it may not be in irq context. Special care is also required (by way of
GFP flags) when allocating bounce buffers, to avoid certain highmem
deadlock possibilities.

It is also possible that a bounce buffer may be allocated from a high-memory
area that's not mapped in kernel virtual addr, but within the range that the
device can use directly; so the bounce page may need to be kmapped during
copy operations. [Note: This does not hold in the current implementation,
though]

There are some situations when pages from high memory may need to
be kmapped, even if bounce buffers are not necessary. For example a device
may need to abort DMA operations and revert to PIO for the transfer, in
which case a virtual mapping of the page is required. For SCSI it is also
done in some scenarios where the low level driver cannot be trusted to
handle a single sg entry correctly. The driver is expected to perform the
kmaps as needed on such occasions using the __bio_kmap_atomic and bio_kmap_irq
routines as appropriate. A driver could also use the blk_queue_bounce()
routine on its own to bounce highmem i/o to low memory for specific requests
if so desired.

iii. The i/o scheduler algorithm itself can be replaced/set as appropriate

As in 2.4, it is possible to plug in a brand new i/o scheduler for a particular
queue or pick from (copy) existing generic schedulers and replace/override
certain portions of it. The 2.5 rewrite provides improved modularization
of the i/o scheduler. There are more pluggable callbacks, e.g. for init,
add request, extract request, which makes it possible to abstract specific
i/o scheduling algorithm aspects and details outside of the generic loop.
It also makes it possible to completely hide the implementation details of
the i/o scheduler from block drivers.

I/O scheduler wrappers are to be used instead of accessing the queue directly.
See section 4. The I/O scheduler for details.

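Tying back to the kmapping mentioned under (ii) above, here is a minimal,
hedged sketch of how a driver falling back to PIO might touch the data of
the current bio segment. It assumes the bio_kmap_irq()/bio_kunmap_irq()
helpers behave as described in this document; my_pio_xfer is a hypothetical
placeholder for the actual programmed-i/o loop.

#include <linux/bio.h>

static void my_pio_one_segment(struct bio *bio)
{
	unsigned long flags;
	char *buf;

	/* map the current segment; valid even for highmem pages */
	buf = bio_kmap_irq(bio, &flags);
	my_pio_xfer(buf, bio_iovec(bio)->bv_len);	/* hypothetical transfer */
	bio_kunmap_irq(buf, &flags);
}
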
1.2 Tuning Based on High level code capabilities

i. Application capabilities for raw i/o

This comes from some of the high-performance database/middleware
requirements where an application prefers to make its own i/o scheduling
decisions based on an understanding of the access patterns and i/o
characteristics.

ii. High performance filesystems or other higher level kernel code's
capabilities

Kernel components like filesystems could also take their own i/o scheduling
decisions for optimizing performance. Journalling filesystems may need
some control over i/o ordering.

What kind of support exists at the generic block layer for this?

The flags and rw fields in the bio structure can be used for some tuning
from above, e.g. indicating that an i/o is just a readahead request, or for
marking barrier requests (discussed next), or priority settings (currently
unused). As far as user applications are concerned they would need an
additional mechanism, either via open flags or ioctls, or some other upper
level mechanism, to communicate such settings to block.

1.2.1 I/O Barriers

There is a way to enforce strict ordering for i/os through barriers.
All requests before a barrier point must be serviced before the barrier
request, and any other requests arriving after the barrier will not be
serviced until after the barrier has completed. This is useful for higher
level control on write ordering, e.g. flushing a log of committed updates
to disk before the corresponding updates themselves.

A flag in the bio structure, BIO_BARRIER, is used to identify a barrier i/o.
The generic i/o scheduler would make sure that it places the barrier request
and all other requests coming after it after all the previous requests in the
queue. Barriers may be implemented in different ways depending on the
driver. For more details regarding I/O barriers, please read barrier.txt
in this directory.

1.2.2 Request Priority/Latency

Todo/Under discussion:
Arjan's proposed request priority scheme allows higher levels some broad
  control (high/med/low) over the priority of an i/o request vs other pending
  requests in the queue. For example it allows reads for bringing in an
  executable page on demand to be given a higher priority over pending write
  requests which haven't aged too much on the queue. Potentially this priority
  could even be exposed to applications in some manner, providing higher level
  tunability. Time based aging avoids starvation of lower priority
  requests. Some bits in the bi_rw flags field in the bio structure are
  intended to be used for this priority information.


1.3 Direct Access to Low level Device/Driver Capabilities (Bypass mode)
    (e.g. Diagnostics, Systems Management)

There are situations where high-level code needs to have direct access to
the low level device capabilities or requires the ability to issue commands
to the device, bypassing some of the intermediate i/o layers.
These could, for example, be special control commands issued through ioctl
interfaces, or could be raw read/write commands that stress the drive's
capabilities for certain kinds of fitness tests. Having direct interfaces at
multiple levels without having to pass through upper layers makes
it possible to perform bottom up validation of the i/o path, layer by
layer, starting from the media.

The normal i/o submission interfaces, e.g. submit_bio, could be bypassed
for specially crafted requests which such ioctl or diagnostics
interfaces would typically use, and the elevator add_request routine
can instead be used to directly insert such requests in the queue, or
preferably the blk_do_rq routine can be used to place the request on the
queue and wait for completion. Alternatively, sometimes the caller might just
invoke a lower level driver specific interface with the request as a
parameter.

If the request is a means for passing on special information associated with
the command, then such information is associated with the request->special
field (rather than misuse the request->buffer field which is meant for the
request data buffer's virtual mapping).

For passing request data, the caller must build up a bio descriptor
representing the concerned memory buffer if the underlying driver interprets
bio segments or uses the block layer end*request* functions for i/o
completion. Alternatively one could directly use the request->buffer field to
specify the virtual address of the buffer, if the driver expects buffer
addresses passed in this way and ignores bio entries for the request type
involved. In the latter case, the driver would modify and manage the
request->buffer, request->sector and request->nr_sectors or
request->current_nr_sectors fields itself rather than using the block layer
end_request or end_that_request_first completion interfaces.
(See 2.3 or Documentation/block/request.txt for a brief explanation of
the request structure fields)

[TBD: end_that_request_last should be usable even in this case;
Perhaps an end_that_direct_request_first routine could be implemented to make
handling direct requests easier for such drivers; Also for drivers that
expect bios, a helper function could be provided for setting up a bio
corresponding to a data buffer]

<JENS: I don't understand the above, why is end_that_request_first() not
usable? Or _last for that matter. I must be missing something>
<SUP: What I meant here was that if the request doesn't have a bio, then
 end_that_request_first doesn't modify nr_sectors or current_nr_sectors,
 and hence can't be used for advancing request state settings on the
 completion of partial transfers. The driver has to modify these fields
 directly by hand.
 This is because end_that_request_first only iterates over the bio list,
 and always returns 0 if there are none associated with the request.
 _last works OK in this case, and is not a problem, as I mentioned earlier
>

1.3.1 Pre-built Commands

A request can be created with a pre-built custom command to be sent directly
to the device. The cmd block in the request structure has room for filling
in the command bytes. (i.e. rq->cmd is now 16 bytes in size, and meant for
command pre-building, and the type of the request is now indicated
through rq->flags instead of via rq->cmd)

The request structure flags can be set up to indicate the type of request
in such cases (REQ_PC: direct packet command passed to driver, REQ_BLOCK_PC:
packet command issued via blk_do_rq, REQ_SPECIAL: special request).

It can help to pre-build device commands for requests in advance.
Drivers can now specify a request prepare function (q->prep_rq_fn) that the
block layer would invoke to pre-build device commands for a given request,
or perform other preparatory processing for the request. This routine is
called by elv_next_request(), i.e. typically just before servicing a request.
(The prepare function would not be called for requests that have REQ_DONTPREP
enabled.)

Aside:
  Pre-building could possibly even be done early, i.e. before placing the
  request on the queue, rather than constructing the command on the fly in
  the driver while servicing the request queue, when it may affect latencies
  in interrupt context or responsiveness in general. One way to add early
  pre-building would be to do it whenever we fail to merge on a request.
  Now REQ_NOMERGE is set in the request flags to skip this one in the future,
  which means that it will not change before we feed it to the device. So
  the pre-builder hook can be invoked there.


2. Flexible and generic but minimalist i/o structure/descriptor

2.1 Reason for a new structure and requirements addressed

Prior to 2.5, buffer heads were used as the unit of i/o at the generic block
layer, and the low level request structure was associated with a chain of
buffer heads for a contiguous i/o request. This led to certain inefficiencies
when it came to large i/o requests and readv/writev style operations, as it
forced such requests to be broken up into small chunks before being passed
on to the generic block layer, only to be merged by the i/o scheduler
when the underlying device was capable of handling the i/o in one shot.
Also, using the buffer head as an i/o structure for i/os that didn't originate
from the buffer cache unnecessarily added to the weight of the descriptors
which were generated for each such chunk.

The following were some of the goals and expectations considered in the
redesign of the block i/o data structure in 2.5.

i.   Should be appropriate as a descriptor for both raw and buffered i/o -
     avoid cache related fields which are irrelevant in the direct/page i/o
     path, or filesystem block size alignment restrictions which may not be
     relevant for raw i/o.
ii.  Ability to represent high-memory buffers (which do not have a virtual
     address mapping in kernel address space).
iii. Ability to represent large i/os w/o unnecessarily breaking them up (i.e.
     greater than PAGE_SIZE chunks in one shot).
iv.  At the same time, ability to retain independent identity of i/os from
     different sources or i/o units requiring individual completion (e.g.
     for latency reasons).
v.   Ability to represent an i/o involving multiple physical memory segments
     (including non-page aligned page fragments, as specified via
     readv/writev) without unnecessarily breaking it up, if the underlying
     device is capable of handling it.
vi.  Preferably should be based on a memory descriptor structure that can be
     passed around different types of subsystems or layers, maybe even
     networking, without duplication or extra copies of data/descriptor
     fields themselves in the process.
vii. Ability to handle the possibility of splits/merges as the structure
     passes through layered drivers (lvm, md, evms), with minimal overhead.

The solution was to define a new structure (bio) for the block layer,
instead of using the buffer head structure (bh) directly, the idea being
avoidance of some associated baggage and limitations. The bio structure
is uniformly used for all i/o at the block layer; it forms a part of the
bh structure for buffered i/o, and in the case of raw/direct i/o kiobufs are
mapped to bio structures.

2.2 The bio struct

The bio structure uses a vector representation pointing to an array of tuples
of <page, offset, len> to describe the i/o buffer, and has various other
fields describing i/o parameters and state that needs to be maintained for
performing the i/o.

Notice that this representation means that a bio has no virtual address
mapping at all (unlike buffer heads).

struct bio_vec {
       struct page     *bv_page;
       unsigned short  bv_len;
       unsigned short  bv_offset;
};

/*
 * main unit of I/O for the block layer and lower layers (ie drivers)
 */
struct bio {
       sector_t             bi_sector;
       struct bio           *bi_next;    /* request queue link */
       struct block_device  *bi_bdev;    /* target device */
       unsigned long        bi_flags;    /* status, command, etc */
       unsigned long        bi_rw;       /* low bits: r/w, high: priority */

       unsigned int         bi_vcnt;     /* how many bio_vec's */
       unsigned int         bi_idx;      /* current index into bio_vec array */

       unsigned int         bi_size;     /* total size in bytes */
       unsigned short       bi_phys_segments; /* segments after physaddr coalesce */
       unsigned short       bi_hw_segments;   /* segments after DMA remapping */
       unsigned int         bi_max;      /* max bio_vecs we can hold
                                            used as index into pool */
       struct bio_vec       *bi_io_vec;  /* the actual vec list */
       bio_end_io_t         *bi_end_io;  /* bi_end_io (bio) */
       atomic_t             bi_cnt;      /* pin count: free when it hits zero */
       void                 *bi_private;
       bio_destructor_t     *bi_destructor; /* bi_destructor (bio) */
};

With this multipage bio design:

- Large i/os can be sent down in one go using a bio_vec list consisting
  of an array of <page, offset, len> fragments (similar to the way fragments
  are represented in the zero-copy network code)
- Splitting of an i/o request across multiple devices (as in the case of
  lvm or raid) is achieved by cloning the bio (where the clone points to
  the same bi_io_vec array, but with the index and size accordingly modified)
- A linked list of bios is used as before for unrelated merges (*) - this
  avoids reallocs and makes independent completions easier to handle.
- Code that traverses the req list needs to make a distinction between
  segments of a request (bio_for_each_segment) and the distinct completion
  units/bios (rq_for_each_bio).
- Drivers which can't process a large bio in one shot can use the bi_idx
  field to keep track of the next bio_vec entry to process.
  (e.g. a 1MB bio_vec needs to be handled in max 128kB chunks for IDE)
  [TBD: Should preferably also have a bi_voffset and bi_vlen to avoid
   modifying the bi_offset and len fields]

(*) unrelated merges -- a request ends up containing two or more bios that
    didn't originate from the same place.

The bi_end_io() i/o callback gets called on i/o completion of the entire bio.

At a lower level, drivers build a scatter gather list from the merged bios.
The scatter gather list is in the form of an array of <page, offset, len>
entries with their corresponding dma address mappings filled in at the
appropriate time. As an optimization, contiguous physical pages can be
covered by a single entry where <page> refers to the first page and <len>
covers the range of pages (up to 16 contiguous pages could be covered this
way). There is a helper routine (blk_rq_map_sg) which drivers can use to
build the sg list.

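To make the structure above concrete, here is a hedged sketch of building
and submitting a single-segment bio by hand (roughly what submit_bh() does
for buffer heads, see 3.3). It assumes the bio_alloc()/submit_bio()
interfaces described in section 3 and the three-argument bi_end_io prototype
of contemporary kernels; my_end_io, my_submit_one_page and the use of
bi_private are illustrative placeholders only.

#include <linux/bio.h>

static int my_end_io(struct bio *bio, unsigned int bytes_done, int err)
{
	if (bio->bi_size)		/* partial completion, more to come */
		return 1;
	/* ... notify whoever is tracked via bio->bi_private ... */
	bio_put(bio);
	return 0;
}

static void my_submit_one_page(struct block_device *bdev, struct page *page,
			       sector_t sector)
{
	struct bio *bio = bio_alloc(GFP_NOIO, 1);	/* room for one bio_vec */

	bio->bi_bdev = bdev;
	bio->bi_sector = sector;
	bio->bi_io_vec[0].bv_page = page;
	bio->bi_io_vec[0].bv_len = PAGE_SIZE;
	bio->bi_io_vec[0].bv_offset = 0;
	bio->bi_vcnt = 1;
	bio->bi_size = PAGE_SIZE;
	bio->bi_end_io = my_end_io;

	submit_bio(READ, bio);
}
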
Note: Right now the only user of bios with more than one page is ll_rw_kio,
which in turn means that only raw I/O uses it (direct i/o may not work
right now). The intent however is to enable clustering of pages etc. to
become possible. The pagebuf abstraction layer from SGI also uses multi-page
bios, but that is currently not included in the stock development kernels.
The same is true of Andrew Morton's work-in-progress multipage bio writeout
and readahead patches.

2.3 Changes in the Request Structure

The request structure is the structure that gets passed down to low level
drivers. The block layer make_request function builds up a request structure,
places it on the queue and invokes the driver's request_fn. The driver makes
use of the block layer helper routine elv_next_request to pull the next
request off the queue. Control or diagnostic functions might bypass block and
directly invoke underlying driver entry points, passing in a specially
constructed request structure.

Only some relevant fields (mainly those which changed or may be referred
to in some of the discussion here) are listed below, not necessarily in
the order in which they occur in the structure (see include/linux/blkdev.h).
Refer to Documentation/block/request.txt for details about all the request
structure fields and a quick reference about the layers which are
supposed to use or modify those fields.

struct request {
	struct list_head queuelist;  /* Not meant to be directly accessed by
					the driver.
					Used by q->elv_next_request_fn
					rq->queue is gone
					*/
	.
	.
	unsigned char cmd[16]; /* prebuilt command data block */
	unsigned long flags;   /* also includes earlier rq->cmd settings */
	.
	.
	sector_t sector; /* this field is now of type sector_t instead of int
			    preparation for 64 bit sectors */
	.
	.

	/* Number of scatter-gather DMA addr+len pairs after
	 * physical address coalescing is performed.
	 */
	unsigned short nr_phys_segments;

	/* Number of scatter-gather addr+len pairs after
	 * physical and DMA remapping hardware coalescing is performed.
	 * This is the number of scatter-gather entries the driver
	 * will actually have to deal with after DMA mapping is done.
	 */
	unsigned short nr_hw_segments;

	/* Various sector counts */
	unsigned long nr_sectors;  /* no. of sectors left: driver modifiable */
	unsigned long hard_nr_sectors;  /* block internal copy of above */
	unsigned int current_nr_sectors; /* no. of sectors left in the
					    current segment: driver modifiable */
	unsigned long hard_cur_sectors; /* block internal copy of the above */
	.
	.
	int tag;	/* command tag associated with request */
	void *special;  /* same as before */
	char *buffer;   /* valid only for low memory buffers up to
			   current_nr_sectors */
	.
	.
	struct bio *bio, *biotail;  /* bio list instead of bh */
	struct request_list *rl;
}

See the rq_flag_bits definitions for an explanation of the various flags
available. Some bits are used by the block layer or i/o scheduler.

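For orientation, here is a hedged sketch of the classic request_fn loop that
consumes these request structures, using the elv_next_request()/end_request()
helpers discussed further in 3.2.3 and 3.2.4. my_transfer_sectors is a
hypothetical hardware routine and all error handling is omitted, so treat
this as illustrative rather than as a reference driver.

#include <linux/blkdev.h>

static void my_request_fn(request_queue_t *q)
{
	struct request *rq;

	while ((rq = elv_next_request(q)) != NULL) {
		if (!blk_fs_request(rq)) {
			/* e.g. REQ_PC/REQ_SPECIAL requests, see section 1.3 */
			end_request(rq, 0);
			continue;
		}
		/*
		 * rq->buffer maps the current segment (low memory only) and
		 * rq->current_nr_sectors says how much of it is left.
		 */
		my_transfer_sectors(rq->sector, rq->current_nr_sectors,
				    rq->buffer, rq_data_dir(rq));
		/*
		 * end_request() completes the current chunk and advances to
		 * the next segment (or finishes the request) on our behalf.
		 */
		end_request(rq, 1);
	}
}
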
The behaviour of the various sector counts is almost the same as before,
except that since we have multi-segment bios, current_nr_sectors refers
to the number of sectors in the current segment being processed, which could
be one of the many segments in the current bio (i.e. i/o completion unit).
The nr_sectors value refers to the total number of sectors in the whole
request that remain to be transferred (no change). The purpose of the
hard_xxx values is for block to remember these counts every time it hands
over the request to the driver. These values are updated by block on
end_that_request_first, i.e. every time the driver completes a part of the
transfer and invokes the block end*request helpers to mark this. The
driver should not modify these values. The block layer sets up the
nr_sectors and current_nr_sectors fields (based on the corresponding
hard_xxx values and the number of bytes transferred) and updates them on
every transfer that invokes end_that_request_first. It does the same for the
buffer, bio, bio->bi_idx fields too.

The buffer field is just a virtual address mapping of the current segment
of the i/o buffer in cases where the buffer resides in low-memory. For high
memory i/o, this field is not valid and must not be used by drivers.

Code that sets up its own request structures and passes them down to
a driver needs to be careful about interoperation with the block layer helper
functions which the driver uses. (Section 1.3)

3. Using bios

3.1 Setup/Teardown

There are routines for managing the allocation, reference counting, and
freeing of bios (bio_alloc, bio_get, bio_put).

This makes use of Ingo Molnar's mempool implementation, which enables
subsystems like bio to maintain their own reserve memory pools for guaranteed
deadlock-free allocations during extreme VM load. For example, the VM
subsystem makes use of the block layer to write out dirty pages in order to
be able to free up memory space, a case which needs careful handling. The
allocation logic draws from the preallocated emergency reserve in situations
where it cannot allocate through normal means. If the pool is empty and it
can wait, then it would trigger action that would help free up memory or
replenish the pool (without deadlocking) and wait for availability in the
pool. If it is in IRQ context, and hence not in a position to do this,
allocation could fail if the pool is empty. In general mempool always first
tries to perform allocation without having to wait, even if it means digging
into the pool as long as it is not less than 50% full.

On a free, memory is released to the pool or directly freed depending on
the current availability in the pool. The mempool interface lets the
subsystem specify the routines to be used for normal alloc and free. In the
case of bio, these routines make use of the standard slab allocator.

The caller of bio_alloc is expected to take certain steps to avoid
deadlocks, e.g. avoid trying to allocate more memory from the pool while
already holding memory obtained from the pool.
[TBD: This is a potential issue, though a rare possibility
 in the bounce bio allocation that happens in the current code, since
 it ends up allocating a second bio from the same pool while
 holding the original bio ]

Memory allocated from the pool should be released back within a limited
amount of time (in the case of bio, that would be after the i/o is completed).
This ensures that if part of the pool has been used up, some work (in this
case i/o) must already be in progress and memory would be available when it
is over. If allocating from multiple pools in the same code path, the order
or hierarchy of allocation needs to be consistent, just the way one deals
with multiple locks.

The bio_alloc routine also needs to allocate the bio_vec_list (bvec_alloc())
for a non-clone bio. There are 6 pools set up for different size biovecs,
so bio_alloc(gfp_mask, nr_iovecs) will allocate a vec_list of the
given size from these slabs.

The bi_destructor() routine takes into account the possibility of the bio
having originated from a different source (see later discussions on
n/w to block transfers and kvec_cb).

The bio_get() routine may be used to hold an extra reference on a bio prior
to i/o submission, if the bio fields are likely to be accessed after the
i/o is issued (since the bio may otherwise get freed in case i/o completion
happens in the meantime).

The bio_clone() routine may be used to duplicate a bio, where the clone
shares the bio_vec_list with the original bio (i.e. both point to the
same bio_vec_list). This would typically be used for splitting i/o requests
in lvm or md.

3.2 Generic bio helper routines

3.2.1 Traversing segments and completion units in a request

The macros bio_for_each_segment() and rq_for_each_bio() should be used for
traversing the bios in the request list (drivers should avoid directly
trying to do it themselves). Using these helpers should also make it easier
to cope with block changes in the future.

	rq_for_each_bio(bio, rq)
		bio_for_each_segment(bio_vec, bio, i)
			/* bio_vec is now the current segment */

I/O completion callbacks are per-bio rather than per-segment, so drivers
that traverse bio chains on completion need to keep that in mind. Drivers
which don't make a distinction between segments and completion units would
need to be reorganized to support multi-segment bios.

3.2.2 Setting up DMA scatterlists

The blk_rq_map_sg() helper routine would be used for setting up scatter
gather lists from a request, so a driver need not do it on its own.

	nr_segments = blk_rq_map_sg(q, rq, scatterlist);

The helper routine provides a level of abstraction which makes it easier
to modify the internals of request to scatterlist conversion down the line
without breaking drivers. The blk_rq_map_sg routine takes care of several
things like collapsing physically contiguous segments (if QUEUE_FLAG_CLUSTER
is set) and correct segment accounting to avoid exceeding the limits which
the i/o hardware can handle, based on various queue properties:

- Prevents a clustered segment from crossing a 4GB mem boundary
- Avoids building segments that would exceed the number of physical
  memory segments that the driver can handle (phys_segments) and the
  number that the underlying hardware can handle at once, accounting for
  DMA remapping (hw_segments) (i.e. IOMMU aware limits)

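As an illustration, here is a hedged sketch of how a DMA-capable driver
might call blk_rq_map_sg() from its request function. MY_MAX_SEGMENTS,
my_pdev and my_hw_start_dma are hypothetical driver-specific names, and
error handling and completion are omitted.

#include <linux/blkdev.h>
#include <linux/pci.h>

static void my_dma_request_fn(request_queue_t *q)
{
	struct scatterlist sg[MY_MAX_SEGMENTS];	/* >= the queue's segment limit */
	struct request *rq;
	int nseg, dir;

	while ((rq = elv_next_request(q)) != NULL) {
		/* never exceeds the phys/hw segment limits set on the queue */
		nseg = blk_rq_map_sg(q, rq, sg);

		dir = rq_data_dir(rq) == READ ?
			PCI_DMA_FROMDEVICE : PCI_DMA_TODEVICE;
		pci_map_sg(my_pdev, sg, nseg, dir);

		blkdev_dequeue_request(rq);	/* driver owns it until completion */
		my_hw_start_dma(rq, sg, nseg);	/* completes via end*request later */
	}
}
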
Routines which the low level driver can use to set up the segment limits:

blk_queue_max_hw_segments() : Sets an upper limit on the maximum number of
hw data segments in a request (i.e. the maximum number of address/length
pairs the host adapter can actually hand to the device at once).

blk_queue_max_phys_segments() : Sets an upper limit on the maximum number
of physical data segments in a request (i.e. the largest sized scatter list
a driver could handle).

3.2.3 I/O completion

The existing generic block layer helper routines end_request,
end_that_request_first and end_that_request_last can be used for i/o
completion (and setting things up so the rest of the i/o or the next
request can be kicked off) as before. With the introduction of multi-page
bio support, end_that_request_first requires an additional argument
indicating the number of sectors completed.

3.2.4 Implications for drivers that do not interpret bios (don't handle
      multiple segments)

Drivers that do not interpret bios, e.g. those which do not handle multiple
segments and do not support i/o into high memory addresses (require bounce
buffers) and expect only virtually mapped buffers, can access the rq->buffer
field. As before, the driver should use current_nr_sectors to determine the
size of the remaining data in the current segment (that is the maximum it can
transfer in one go unless it interprets segments), and rely on the block layer
end_request, or end_that_request_first/last to take care of all accounting
and transparent mapping of the next bio segment when a segment boundary
is crossed on completion of a transfer. (The end*request* functions should
be used only if the request has come down the block/bio path, not for
direct access requests which only specify rq->buffer without a valid rq->bio.)

3.2.5 Generic request command tagging

3.2.5.1 Tag helpers

Block now offers some simple generic functionality to help support command
queueing (typically known as tagged command queueing), i.e. manage more than
one outstanding command on a queue at any given time.

	blk_queue_init_tags(request_queue_t *q, int depth)

	Initialize internal command tagging structures for a maximum
	depth of 'depth'.

	blk_queue_free_tags(request_queue_t *q)

	Teardown tag info associated with the queue. This will be done
	automatically by block if blk_queue_cleanup() is called on a queue
	that is using tagging.

The above are initialization and exit management; the main helpers during
normal operations are:

	blk_queue_start_tag(request_queue_t *q, struct request *rq)

	Start tagged operation for this request. A free tag number between
	0 and 'depth' is assigned to the request (rq->tag holds this number),
	and 'rq' is added to the internal tag management. If the maximum depth
	for this queue is already achieved (or if the tag wasn't started for
	some other reason), 1 is returned. Otherwise 0 is returned.

	blk_queue_end_tag(request_queue_t *q, struct request *rq)

	End tagged operation on this request. 'rq' is removed from the
	internal book keeping structures.

To minimize struct request and queue overhead, the tag helpers utilize some
of the same request members that are used for normal request queue management.
This means that a request cannot both be an active tag and be on the queue
list at the same time. blk_queue_start_tag() will remove the request, but
the driver must remember to call blk_queue_end_tag() before signalling
completion of the request to the block layer. This means ending tag
operations before calling end_that_request_last()! For an example of a user
of these helpers, see the IDE tagged command queueing support.

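A hedged sketch of how these helpers fit into a driver's issue and completion
paths follows; the my_* routines are hypothetical placeholders, and the final
completion is done with the end*request* helpers from 3.2.3.

#include <linux/blkdev.h>

/* issue path, typically called from the driver's request_fn */
static void my_issue_tagged(request_queue_t *q, struct request *rq)
{
	if (blk_queue_start_tag(q, rq))
		return;	/* tag depth exhausted (or not started); retry later */

	/* rq->tag now holds the tag; rq was dequeued by blk_queue_start_tag() */
	my_hw_send_command(rq, rq->tag);
}

/* completion path, e.g. from the interrupt handler */
static void my_complete_tagged(request_queue_t *q, struct request *rq)
{
	blk_queue_end_tag(q, rq);	/* must precede final completion */
	/* then finish via end_that_request_first()/_last() as in 3.2.3 */
	my_finish_request(rq);
}
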
Certain hardware conditions may dictate a need to invalidate the block tag
queue. For instance, on IDE any tagged request error needs to clear both
the hardware and software block queue and enable the driver to sanely restart
all the outstanding requests. There's a third helper to do that:

	blk_queue_invalidate_tags(request_queue_t *q)

	Clear the internal block tag queue and re-add all the pending requests
	to the request queue. The driver will receive them again on the
	next request_fn run, just like it did the first time it encountered
	them.

3.2.5.2 Tag info

Some block functions exist to query current tag status or to go from a
tag number to the associated request. These are, in no particular order:

	blk_queue_tagged(q)

	Returns 1 if the queue 'q' is using tagging, 0 if not.

	blk_queue_tag_request(q, tag)

	Returns a pointer to the request associated with tag 'tag'.

	blk_queue_tag_depth(q)

	Return current queue depth.

	blk_queue_tag_queue(q)

	Returns 1 if the queue can accept a new queued command, 0 if we are
	at the maximum depth already.

	blk_queue_rq_tagged(rq)

	Returns 1 if the request 'rq' is tagged.

3.2.5.3 Internal structure

Internally, block manages tags in the blk_queue_tag structure:

	struct blk_queue_tag {
		struct request **tag_index;	/* array of pointers to rq */
		unsigned long *tag_map;		/* bitmap of free tags */
		struct list_head busy_list;	/* fifo list of busy tags */
		int busy;			/* queue depth */
		int max_depth;			/* max queue depth */
	};

Most of the above is simple and straightforward, however busy_list may need
a bit of explaining. Normally we don't care too much about request ordering,
but in the event of any barrier requests in the tag queue we need to ensure
that requests are restarted in the order they were queued. This may happen
if the driver needs to use blk_queue_invalidate_tags().

Tagging also defines a new request flag, REQ_QUEUED. This is set whenever
a request is currently tagged. You should not use this flag directly,
blk_rq_tagged(rq) is the portable way to do so.

3.3 I/O Submission

The routine submit_bio() is used to submit a single io. Higher level i/o
routines make use of this:

(a) Buffered i/o:
The routine submit_bh() invokes submit_bio() on a bio corresponding to the
bh, allocating the bio if required. ll_rw_block() uses submit_bh() as before.

(b) Kiobuf i/o (for raw/direct i/o):
The ll_rw_kio() routine breaks up the kiobuf into page sized chunks and
maps the array to one or more multi-page bios, issuing submit_bio() to
perform the i/o on each of these.

The embedded bh array in the kiobuf structure has been removed and no
preallocation of bios is done for kiobufs. [The intent is to remove the
blocks array as well, but it's currently in there to kludge around direct
i/o.] Thus kiobuf allocation has switched back to using kmalloc rather than
vmalloc.

Todo/Observation:

 A single kiobuf structure is assumed to correspond to a contiguous range
 of data, so brw_kiovec() invokes ll_rw_kio for each kiobuf in a kiovec.
 So right now it wouldn't work for direct i/o on non-contiguous blocks.
 This is to be resolved. The eventual direction is to replace kiobuf
 by kvec's.

 Badari Pulavarty has a patch to implement direct i/o correctly using
 bio and kvec.


(c) Page i/o:
Todo/Under discussion:

 Andrew Morton's multi-page bio patches attempt to issue multi-page
 writeouts (and reads) from the page cache, by directly building up
 large bios for submission completely bypassing the usage of buffer
 heads. This work is still in progress.

 Christoph Hellwig had some code that uses bios for page-io (rather than
 bh). This isn't included in bio as yet. Christoph was also working on a
 design for representing virtual/real extents as an entity and modifying
 some of the address space ops interfaces to utilize this abstraction rather
 than buffer_heads. (This is somewhat along the lines of the SGI XFS pagebuf
 abstraction, but intended to be as lightweight as possible).

(d) Direct access i/o:
Direct access requests that do not contain bios would be submitted
differently, as discussed earlier in section 1.3.

Aside:

  Kvec i/o:

  Ben LaHaise's aio code uses a slightly different structure instead
  of kiobufs, called a kvec_cb. This contains an array of <page, offset, len>
  tuples (very much like the networking code), together with a callback
  function and data pointer. This is embedded into a brw_cb structure when
  passed to brw_kvec_async().

  Now it should be possible to directly map these kvecs to a bio. Just as
  with cloning, in this case rather than PRE_BUILT bio_vecs, we set the
  bi_io_vec array pointer to point to the veclet array in kvecs.

  TBD: In order for this to work, some changes are needed in the way
  multi-page bios are handled today. The values of the tuples in such a
  vector passed in from higher level code should not be modified by the
  block layer in the course of its request processing, since that would make
  it hard for the higher layer to continue to use the vector descriptor
  (kvec) after i/o completes. Instead, all such transient state should be
  maintained in the request structure, and passed on in some way to the
  endio completion routine.


4. The I/O scheduler

The I/O scheduler, a.k.a. elevator, is implemented in two layers: a generic
dispatch queue and specific I/O schedulers. Unless stated otherwise,
"elevator" is used to refer to both parts and "I/O scheduler" to specific
I/O schedulers.

The block layer implements the generic dispatch queue in ll_rw_blk.c and
elevator.c. The generic dispatch queue is responsible for properly ordering
barrier requests, requeueing, handling non-fs requests and all other
subtleties.

Specific I/O schedulers are responsible for ordering normal filesystem
requests. They can also choose to delay certain requests to improve
throughput or for whatever other purpose. As the plural form indicates,
there are multiple I/O schedulers. They can be built as modules but at least
one should be built into the kernel. Each queue can choose a different one
and can also change to another one dynamically.

A block layer call to the i/o scheduler follows the convention elv_xxx().
This calls elevator_xxx_fn in the elevator switch (drivers/block/elevator.c).
Oh, xxx and xxx might not match exactly, but use your imagination. If an
elevator doesn't implement a function, the switch does nothing or some
minimal housekeeping work.

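Before walking through the individual hooks in 4.1, here is a heavily hedged
sketch of how a trivial FIFO-style scheduler might plug into the elevator
switch. It assumes the elevator_type/elv_register() registration interface
and the elv_dispatch_sort() helper of contemporary 2.6 kernels (the noop
scheduler is the real reference); the myfifo_* names are hypothetical, and a
real scheduler would allocate per-queue state in its init hook rather than
using a single static list.

#include <linux/blkdev.h>
#include <linux/elevator.h>
#include <linux/list.h>
#include <linux/module.h>

static LIST_HEAD(myfifo_list);	/* sketch only: one global FIFO, not per-queue */

static void myfifo_add_request(request_queue_t *q, struct request *rq)
{
	list_add_tail(&rq->queuelist, &myfifo_list);
}

static int myfifo_dispatch(request_queue_t *q, int force)
{
	struct request *rq;

	if (list_empty(&myfifo_list))
		return 0;
	rq = list_entry(myfifo_list.next, struct request, queuelist);
	list_del_init(&rq->queuelist);
	elv_dispatch_sort(q, rq);	/* hand over to the generic dispatch queue */
	return 1;
}

static int myfifo_queue_empty(request_queue_t *q)
{
	return list_empty(&myfifo_list);
}

static struct elevator_type elevator_myfifo = {
	.ops = {
		.elevator_add_req_fn	 = myfifo_add_request,
		.elevator_dispatch_fn	 = myfifo_dispatch,
		.elevator_queue_empty_fn = myfifo_queue_empty,
	},
	.elevator_name	= "myfifo",
	.elevator_owner	= THIS_MODULE,
};

static int __init myfifo_init(void)
{
	return elv_register(&elevator_myfifo);
}
module_init(myfifo_init);
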
4.1. I/O scheduler API

The functions an elevator may implement are: (* are mandatory)

elevator_merge_fn		called to query requests for merge with a bio

elevator_merge_req_fn		called when two requests get merged. The one
				which gets merged into the other one will
				never be seen by the I/O scheduler again.
				IOW, after being merged, the request is gone.

elevator_merged_fn		called when a request in the scheduler has been
				involved in a merge. It is used in the
				deadline scheduler, for example, to reposition
				the request if its sorting order has changed.

elevator_dispatch_fn		fills the dispatch queue with ready requests.
				I/O schedulers are free to postpone requests by
				not filling the dispatch queue unless @force
				is non-zero. Once dispatched, I/O schedulers
				are not allowed to manipulate the requests -
				they belong to the generic dispatch queue.

elevator_add_req_fn		called to add a new request into the scheduler

elevator_queue_empty_fn		returns true if the merge queue is empty.
				Drivers shouldn't use this, but rather check
				if elv_next_request is NULL (without losing
				the request if one exists!)

elevator_former_req_fn
elevator_latter_req_fn		These return the request before or after the
				one specified in disk sort order. Used by the
				block layer to find merge possibilities.

elevator_completed_req_fn	called when a request is completed.

elevator_may_queue_fn		returns true if the scheduler wants to allow
				the current context to queue a new request
				even if it is over the queue limit. This must
				be used very carefully!!

elevator_set_req_fn
elevator_put_req_fn		Must be used to allocate and free any elevator
				specific storage for a request.

elevator_activate_req_fn	Called when the device driver first sees a
				request. I/O schedulers can use this callback
				to determine when actual execution of a
				request starts.

elevator_deactivate_req_fn	Called when the device driver decides to delay
				a request by requeueing it.

elevator_init_fn
elevator_exit_fn		Allocate and free any elevator specific storage
				for a queue.

4.2 Request flows seen by I/O schedulers

All requests seen by I/O schedulers strictly follow one of the following
three flows.

 set_req_fn ->

 i.   add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
      (deactivate_req_fn -> activate_req_fn ->)* -> completed_req_fn
 ii.  add_req_fn -> (merged_fn ->)* -> merge_req_fn
 iii. [none]

 -> put_req_fn

4.3 I/O scheduler implementation

The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
optimal disk scan and request servicing performance (based on generic
principles and device capabilities), optimized for:
i.   improved throughput
ii.  improved latency
iii. better utilization of h/w & CPU time

Characteristics:

i. Binary tree
AS and deadline i/o schedulers use red black binary trees for disk position
sorting and searching, and a fifo linked list for time-based searching. This
gives good scalability and good availability of information.
Requests are almost always dispatched in disk sort order, so a cache is kept
of the next request in sort order to prevent binary tree lookups.

This arrangement is not a generic block layer characteristic however, so
elevators may implement queues as they please.

ii. Merge hash
AS and deadline use a hash table indexed by the last sector of a request.
This enables merging code to quickly look up "back merge" candidates, even
when multiple I/O streams are being performed at once on one disk.

"Front merges", a new request being merged at the front of an existing
request, are far less common than "back merges" due to the nature of most
I/O patterns. Front merges are handled by the binary trees in the AS and
deadline schedulers.

iii. Plugging the queue to batch requests in anticipation of opportunities
     for merge/sort optimizations

This is just the same as in 2.4 so far, though per-device unplugging
support is anticipated for 2.5. Also with a priority-based i/o scheduler,
such decisions could be based on request priorities.

Plugging is an approach that the current i/o scheduling algorithm resorts to
so that it collects up enough requests in the queue to be able to take
advantage of the sorting/merging logic in the elevator. If the
queue is empty when a request comes in, then it plugs the request queue
(sort of like plugging the bottom of a vessel to get fluid to build up)
till it fills up with a few more requests, before starting to service
the requests. This provides an opportunity to merge/sort the requests before
passing them down to the device. There are various conditions when the queue
is unplugged (to open up the flow again), either through a scheduled task or
on demand. For example wait_on_buffer sets the unplugging going
(by running tq_disk) so the read gets satisfied soon. So in the read case,
the queue gets explicitly unplugged as part of waiting for completion;
in fact all queues get unplugged as a side-effect.

Aside:
  This is kind of controversial territory, as it's not clear if plugging is
  always the right thing to do. Devices typically have their own queues,
  and allowing a big queue to build up in software, while letting the device
  be idle for a while, may not always make sense. The trick is to handle the
  fine balance between when to plug and when to open up. Also now that we
  have multi-page bios being queued in one shot, we may not need to wait to
  merge a big request from the broken up pieces coming by.

  Per-queue granularity unplugging (still a Todo) may help reduce some of the
  concerns with just a single tq_disk flush approach. Something like
  blk_kick_queue() to unplug a specific queue (right away ?)
  or optionally, all queues, is in the plan.

4.4 I/O contexts

I/O contexts provide a dynamically allocated per process data area. They may
be used in I/O schedulers, and in the block layer (could be used for IO
stats, priorities for example). See *io_context in block/ll_rw_blk.c, and
as-iosched.c for an example of usage in an i/o scheduler.


5. Scalability related changes

5.1 Granular Locking: io_request_lock replaced by a per-queue lock

The global io_request_lock has been removed as of 2.5, to avoid
the scalability bottleneck it was causing, and has been replaced by more
granular locking.
The request queue structure has a pointer to the
lock to be used for that queue. As a result, locking can now be
per-queue, with a provision for sharing a lock across queues if
necessary (e.g. the scsi layer sets the queue lock pointers to the
corresponding adapter lock, which results in a per host locking
granularity). The locking semantics are the same, i.e. locking is
still imposed by the block layer, grabbing the lock before
request_fn execution, which means that lots of older drivers
should still be SMP safe. Drivers are free to drop the queue
lock themselves, if required. Drivers that explicitly used the
io_request_lock for serialization need to be modified accordingly.
Usually it's as easy as adding a global lock:

	static spinlock_t my_driver_lock = SPIN_LOCK_UNLOCKED;

and passing the address of that lock to blk_init_queue().

5.2 64 bit sector numbers (sector_t prepares for 64 bit support)

The sector number used in the bio structure has been changed to sector_t,
which could be defined as 64 bit in preparation for 64 bit sector support.

6. Other Changes/Implications

6.1 Partition re-mapping handled by the generic block layer

In 2.5 some of the gendisk/partition related code has been reorganized.
Now the generic block layer performs partition-remapping early and thus
provides drivers with a sector number relative to the whole device, rather
than having to take the partition number into account in order to arrive at
the true sector number. The routine blk_partition_remap() is invoked by
generic_make_request even before invoking the queue specific make_request_fn,
so the i/o scheduler also gets to operate on whole disk sector numbers. This
should typically not require changes to block drivers; a driver just never
gets to invoke its own partition sector offset calculations since all bios
sent are offset from the beginning of the device.


7. A Few Tips on Migration of older drivers

Old-style drivers that just use CURRENT and ignore clustered requests
may not need much change. The generic layer will automatically handle
clustered requests, multi-page bios, etc. for the driver.

For a low performance driver or hardware that is PIO driven or just doesn't
support scatter-gather, changes should be minimal too.

The following are some points to keep in mind when converting old drivers
to bio.

Drivers should use elv_next_request to pick up requests and are no longer
supposed to handle looping directly over the request list.
(struct request->queue has been removed)

Now end_that_request_first takes an additional number_of_sectors argument.
It used to always handle just the first buffer_head in a request, now
it will loop and handle as many sectors (on a bio-segment granularity)
as specified.

Now bh->b_end_io is replaced by bio->bi_end_io, but most of the time the
right thing to use is bio_endio(bio, uptodate) instead.

If the driver is dropping the io_request_lock from its request_fn strategy,
then it just needs to replace that with q->queue_lock instead.

As described in Sec 1.1, drivers can set max sector size, max segment size
etc. per queue now. Drivers that used to define their own merge functions
to handle things like this can now just use the blk_queue_* functions at
blk_init_queue time.

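Pulling the locking and per-queue settings together, here is a brief hedged
sketch of what a converted driver's initialization might look like
(my_request_fn and the particular limits are hypothetical; see also the
per-queue settings listed in section 1.1):

#include <linux/blkdev.h>

static spinlock_t my_driver_lock = SPIN_LOCK_UNLOCKED;
static request_queue_t *my_queue;

static int my_driver_init(void)
{
	/* the lock passed here becomes q->queue_lock, held around request_fn */
	my_queue = blk_init_queue(my_request_fn, &my_driver_lock);
	if (!my_queue)
		return -ENOMEM;

	/* per-queue limits replace the old driver-private merge functions */
	blk_queue_max_sectors(my_queue, 128);
	blk_queue_max_segment_size(my_queue, 65536);

	return 0;
}
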
Drivers no longer have to map a {partition, sector offset} into the
correct absolute location anymore; this is done by the block layer, so
where a driver received a request like this before:

	rq->rq_dev = mk_kdev(3, 5);	/* /dev/hda5 */
	rq->sector = 0;			/* first sector on hda5 */

it will now see

	rq->rq_dev = mk_kdev(3, 0);	/* /dev/hda */
	rq->sector = 123128;		/* offset from start of disk */

As mentioned, there is no virtual mapping of a bio. For DMA, this is
not a problem as the driver probably never will need a virtual mapping.
Instead it needs a bus mapping (pci_map_page for a single segment or
use blk_rq_map_sg for scatter gather) to be able to ship it to the device.
For PIO drivers (or drivers that need to revert to PIO transfer once in a
while (IDE for example)), where the CPU is doing the actual data
transfer, a virtual mapping is needed. If the driver supports highmem I/O
(Sec 1.1, (ii)), it needs to use __bio_kmap_atomic and bio_kmap_irq to
temporarily map a bio into the virtual address space.


8. Prior/Related/Impacted patches

8.1. Earlier kiobuf patches (sct/axboe/chait/hch/mkp)
	- orig kiobuf & raw i/o patches (now in 2.4 tree)
	- direct kiobuf based i/o to devices (no intermediate bh's)
	- page i/o using kiobuf
	- kiobuf splitting for lvm (mkp)
	- elevator support for kiobuf request merging (axboe)
8.2. Zero-copy networking (Dave Miller)
8.3. SGI XFS - pagebuf patches - use of kiobufs
8.4. Multi-page pioent patch for bio (Christoph Hellwig)
8.5. Direct i/o implementation (Andrea Arcangeli) since 2.4.10-pre11
8.6. Async i/o implementation patch (Ben LaHaise)
8.7. EVMS layering design (IBM EVMS team)
8.8. Larger page cache size patch (Ben LaHaise) and
     Large page size (Daniel Phillips)
	=> larger contiguous physical memory buffers
8.9. VM reservations patch (Ben LaHaise)
8.10. Write clustering patches ? (Marcelo/Quintela/Riel ?)
8.11. Block device in page cache patch (Andrea Arcangeli) - now in 2.4.10+
8.12. Multiple block-size transfers for faster raw i/o (Shailabh Nagar,
      Badari)
8.13. Priority based i/o scheduler - prepatches (Arjan van de Ven)
8.14. IDE Taskfile i/o patch (Andre Hedrick)
8.15. Multi-page writeout and readahead patches (Andrew Morton)
8.16. Direct i/o patches for 2.5 using kvec and bio (Badari Pulavarty)

9. Other References:

9.1 The Splice I/O Model - Larry McVoy (and subsequent discussions on lkml,
    and Linus' comments - Jan 2001)
9.2 Discussions about kiobuf and bh design on lkml between sct, linus, alan
    et al - Feb-March 2001 (many of the initial thoughts that led to bio were
    brought up in this discussion thread)
9.3 Discussions on mempool on lkml - Dec 2001.