Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.dk/data/git/linux-2.6-block

* 'for-linus' of git://git.kernel.dk/data/git/linux-2.6-block: (63 commits)
Fix memory leak in dm-crypt
SPARC64: sg chaining support
SPARC: sg chaining support
PPC: sg chaining support
PS3: sg chaining support
IA64: sg chaining support
x86-64: enable sg chaining
x86-64: update pci-gart iommu to sg helpers
x86-64: update nommu to sg helpers
x86-64: update calgary iommu to sg helpers
swiotlb: sg chaining support
i386: enable sg chaining
i386 dma_map_sg: convert to using sg helpers
mmc: need to zero sglist on init
Panic in blk_rq_map_sg() from CCISS driver
remove sglist_len
remove blk_queue_max_phys_segments in libata
revert sg segment size ifdefs
Fixup u14-34f ENABLE_SG_CHAINING
qla1280: enable use_sg_chaining option
...

+1186 -1115
+2 -2
Documentation/DMA-mapping.txt
··· 514 514 int i, count = pci_map_sg(dev, sglist, nents, direction); 515 515 struct scatterlist *sg; 516 516 517 - for (i = 0, sg = sglist; i < count; i++, sg++) { 517 + for_each_sg(sglist, sg, count, i) { 518 518 hw_address[i] = sg_dma_address(sg); 519 519 hw_len[i] = sg_dma_len(sg); 520 520 } ··· 782 782 Jay Estabrook <Jay.Estabrook@compaq.com> 783 783 Thomas Sailer <sailer@ife.ee.ethz.ch> 784 784 Andrea Arcangeli <andrea@suse.de> 785 - Jens Axboe <axboe@suse.de> 785 + Jens Axboe <jens.axboe@oracle.com> 786 786 David Mosberger-Tang <davidm@hpl.hp.com>
+1 -1
Documentation/HOWTO
··· 330 330 - ACPI development tree, Len Brown <len.brown@intel.com> 331 331 git.kernel.org:/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6.git 332 332 333 - - Block development tree, Jens Axboe <axboe@suse.de> 333 + - Block development tree, Jens Axboe <jens.axboe@oracle.com> 334 334 git.kernel.org:/pub/scm/linux/kernel/git/axboe/linux-2.6-block.git 335 335 336 336 - DRM development tree, Dave Airlie <airlied@linux.ie>
+20
Documentation/block/00-INDEX
··· 1 + 00-INDEX 2 + - This file 3 + as-iosched.txt 4 + - Anticipatory IO scheduler 5 + barrier.txt 6 + - I/O Barriers 7 + biodoc.txt 8 + - Notes on the Generic Block Layer Rewrite in Linux 2.5 9 + capability.txt 10 + - Generic Block Device Capability (/sys/block/<disk>/capability) 11 + deadline-iosched.txt 12 + - Deadline IO scheduler tunables 13 + ioprio.txt 14 + - Block io priorities (in CFQ scheduler) 15 + request.txt 16 + - The members of struct request (in include/linux/blkdev.h) 17 + stat.txt 18 + - Block layer statistics in /sys/block/<dev>/stat 19 + switching-sched.txt 20 + - Switching I/O schedulers at runtime
+13 -8
Documentation/block/as-iosched.txt
··· 20 20 However, setting the antic_expire (see tunable parameters below) produces 21 21 very similar behavior to the deadline IO scheduler. 22 22 23 - 24 23 Selecting IO schedulers 25 24 ----------------------- 26 - To choose IO schedulers at boot time, use the argument 'elevator=deadline'. 27 - 'noop', 'as' and 'cfq' (the default) are also available. IO schedulers are 28 - assigned globally at boot time only presently. It's also possible to change 29 - the IO scheduler for a determined device on the fly, as described in 30 - Documentation/block/switching-sched.txt. 31 - 25 + Refer to Documentation/block/switching-sched.txt for information on 26 + selecting an io scheduler on a per-device basis. 32 27 33 28 Anticipatory IO scheduler Policies 34 29 ---------------------------------- ··· 110 115 that submitted the just completed request are examined. If it seems 111 116 likely that that process will submit another request soon, and that 112 117 request is likely to be near the just completed request, then the IO 113 - scheduler will stop dispatching more read requests for up time (antic_expire) 118 + scheduler will stop dispatching more read requests for up to (antic_expire) 114 119 milliseconds, hoping that process will submit a new request near the one 115 120 that just completed. If such a request is made, then it is dispatched 116 121 immediately. If the antic_expire wait time expires, then the IO scheduler ··· 159 164 or some processes will not be "anticipated" at all. Should be a bit higher 160 165 for big seek time devices though not a linear correspondence - most 161 166 processes have only a few ms thinktime. 167 + 168 + In addition to the tunables above there is a read-only file named est_time 169 + which, when read, will show: 170 + 171 + - The probability of a task exiting without a cooperating task 172 + submitting an anticipated IO. 173 + 174 + - The current mean think time. 175 + 176 + - The seek distance used to determine if an incoming IO is better. 162 177
+2 -2
Documentation/block/biodoc.txt
··· 2 2 ===================================================== 3 3 4 4 Notes Written on Jan 15, 2002: 5 - Jens Axboe <axboe@suse.de> 5 + Jens Axboe <jens.axboe@oracle.com> 6 6 Suparna Bhattacharya <suparna@in.ibm.com> 7 7 8 8 Last Updated May 2, 2002 ··· 21 21 --------- 22 22 23 23 2.5 bio rewrite: 24 - Jens Axboe <axboe@suse.de> 24 + Jens Axboe <jens.axboe@oracle.com> 25 25 26 26 Many aspects of the generic block layer redesign were driven by and evolved 27 27 over discussions, prior patches and the collective experience of several
+8 -17
Documentation/block/deadline-iosched.txt
··· 5 5 In particular, it will clarify the meaning of the exposed tunables that may be 6 6 of interest to power users. 7 7 8 - Each io queue has a set of io scheduler tunables associated with it. These 9 - tunables control how the io scheduler works. You can find these entries 10 - in: 11 - 12 - /sys/block/<device>/queue/iosched 13 - 14 - assuming that you have sysfs mounted on /sys. If you don't have sysfs mounted, 15 - you can do so by typing: 16 - 17 - # mount none /sys -t sysfs 8 + Selecting IO schedulers 9 + ----------------------- 10 + Refer to Documentation/block/switching-sched.txt for information on 11 + selecting an io scheduler on a per-device basis. 18 12 19 13 20 14 ******************************************************************************** ··· 35 41 36 42 When a read request expires its deadline, we must move some requests from 37 43 the sorted io scheduler list to the block device dispatch queue. fifo_batch 38 - controls how many requests we move, based on the cost of each request. A 39 - request is either qualified as a seek or a stream. The io scheduler knows 40 - the last request that was serviced by the drive (or will be serviced right 41 - before this one). See seek_cost and stream_unit. 44 + controls how many requests we move. 42 45 43 46 44 - write_starved (number of dispatches) 45 - ------------- 47 + writes_starved (number of dispatches) 48 + -------------- 46 49 47 50 When we have to move requests from the io scheduler queue to the block 48 51 device dispatch queue, we always give a preference to reads. However, we ··· 64 73 rbtree front sector lookup when the io scheduler merge function is called. 65 74 66 75 67 - Nov 11 2002, Jens Axboe <axboe@suse.de> 76 + Nov 11 2002, Jens Axboe <jens.axboe@oracle.com> 68 77 69 78
+1 -1
Documentation/block/ioprio.txt
··· 180 180 ---> snip ionice.c tool <--- 181 181 182 182 183 - March 11 2005, Jens Axboe <axboe@suse.de> 183 + March 11 2005, Jens Axboe <jens.axboe@oracle.com>
+1 -1
Documentation/block/request.txt
··· 1 1 2 2 struct request documentation 3 3 4 - Jens Axboe <axboe@suse.de> 27/05/02 4 + Jens Axboe <jens.axboe@oracle.com> 27/05/02 5 5 6 6 1.0 7 7 Index
+21
Documentation/block/switching-sched.txt
··· 1 + To choose IO schedulers at boot time, use the argument 'elevator=deadline'. 2 + 'noop', 'as' and 'cfq' (the default) are also available. IO schedulers are 3 + assigned globally at boot time only presently. 4 + 5 + Each io queue has a set of io scheduler tunables associated with it. These 6 + tunables control how the io scheduler works. You can find these entries 7 + in: 8 + 9 + /sys/block/<device>/queue/iosched 10 + 11 + assuming that you have sysfs mounted on /sys. If you don't have sysfs mounted, 12 + you can do so by typing: 13 + 14 + # mount none /sys -t sysfs 15 + 1 16 As of the Linux 2.6.10 kernel, it is now possible to change the 2 17 IO scheduler for a given block device on the fly (thus making it possible, 3 18 for instance, to set the CFQ scheduler for the system default, but ··· 35 20 # echo anticipatory > /sys/block/hda/queue/scheduler 36 21 # cat /sys/block/hda/queue/scheduler 37 22 noop [anticipatory] deadline cfq 23 + 24 + Each io queue has a set of io scheduler tunables associated with it. These 25 + tunables control how the io scheduler works. You can find these entries 26 + in: 27 + 28 + /sys/block/<device>/queue/iosched
+7 -7
arch/ia64/hp/common/sba_iommu.c
··· 396 396 printk(KERN_DEBUG " %d : DMA %08lx/%05x CPU %p\n", nents, 397 397 startsg->dma_address, startsg->dma_length, 398 398 sba_sg_address(startsg)); 399 - startsg++; 399 + startsg = sg_next(startsg); 400 400 } 401 401 } 402 402 ··· 409 409 while (the_nents-- > 0) { 410 410 if (sba_sg_address(the_sg) == 0x0UL) 411 411 sba_dump_sg(NULL, startsg, nents); 412 - the_sg++; 412 + the_sg = sg_next(the_sg); 413 413 } 414 414 } 415 415 ··· 1201 1201 u32 pide = startsg->dma_address & ~PIDE_FLAG; 1202 1202 dma_offset = (unsigned long) pide & ~iovp_mask; 1203 1203 startsg->dma_address = 0; 1204 - dma_sg++; 1204 + dma_sg = sg_next(dma_sg); 1205 1205 dma_sg->dma_address = pide | ioc->ibase; 1206 1206 pdirp = &(ioc->pdir_base[pide >> iovp_shift]); 1207 1207 n_mappings++; ··· 1228 1228 pdirp++; 1229 1229 } while (cnt > 0); 1230 1230 } 1231 - startsg++; 1231 + startsg = sg_next(startsg); 1232 1232 } 1233 1233 /* force pdir update */ 1234 1234 wmb(); ··· 1297 1297 while (--nents > 0) { 1298 1298 unsigned long vaddr; /* tmp */ 1299 1299 1300 - startsg++; 1300 + startsg = sg_next(startsg); 1301 1301 1302 1302 /* PARANOID */ 1303 1303 startsg->dma_address = startsg->dma_length = 0; ··· 1407 1407 #ifdef ALLOW_IOV_BYPASS_SG 1408 1408 ASSERT(to_pci_dev(dev)->dma_mask); 1409 1409 if (likely((ioc->dma_mask & ~to_pci_dev(dev)->dma_mask) == 0)) { 1410 - for (sg = sglist ; filled < nents ; filled++, sg++){ 1410 + for_each_sg(sglist, sg, nents, filled) { 1411 1411 sg->dma_length = sg->length; 1412 1412 sg->dma_address = virt_to_phys(sba_sg_address(sg)); 1413 1413 } ··· 1501 1501 while (nents && sglist->dma_length) { 1502 1502 1503 1503 sba_unmap_single(dev, sglist->dma_address, sglist->dma_length, dir); 1504 - sglist++; 1504 + sglist = sg_next(sglist); 1505 1505 nents--; 1506 1506 } 1507 1507
+1
arch/ia64/hp/sim/simscsi.c
··· 360 360 .max_sectors = 1024, 361 361 .cmd_per_lun = SIMSCSI_REQ_QUEUE_LEN, 362 362 .use_clustering = DISABLE_CLUSTERING, 363 + .use_sg_chaining = ENABLE_SG_CHAINING, 363 364 }; 364 365 365 366 static int __init
+6 -5
arch/ia64/sn/pci/pci_dma.c
··· 218 218 * 219 219 * Unmap a set of streaming mode DMA translations. 220 220 */ 221 - void sn_dma_unmap_sg(struct device *dev, struct scatterlist *sg, 221 + void sn_dma_unmap_sg(struct device *dev, struct scatterlist *sgl, 222 222 int nhwentries, int direction) 223 223 { 224 224 int i; 225 225 struct pci_dev *pdev = to_pci_dev(dev); 226 226 struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev); 227 + struct scatterlist *sg; 227 228 228 229 BUG_ON(dev->bus != &pci_bus_type); 229 230 230 - for (i = 0; i < nhwentries; i++, sg++) { 231 + for_each_sg(sgl, sg, nhwentries, i) { 231 232 provider->dma_unmap(pdev, sg->dma_address, direction); 232 233 sg->dma_address = (dma_addr_t) NULL; 233 234 sg->dma_length = 0; ··· 245 244 * 246 245 * Maps each entry of @sg for DMA. 247 246 */ 248 - int sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nhwentries, 247 + int sn_dma_map_sg(struct device *dev, struct scatterlist *sgl, int nhwentries, 249 248 int direction) 250 249 { 251 250 unsigned long phys_addr; 252 - struct scatterlist *saved_sg = sg; 251 + struct scatterlist *saved_sg = sgl, *sg; 253 252 struct pci_dev *pdev = to_pci_dev(dev); 254 253 struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev); 255 254 int i; ··· 259 258 /* 260 259 * Setup a DMA address for each entry in the scatterlist. 261 260 */ 262 - for (i = 0; i < nhwentries; i++, sg++) { 261 + for_each_sg(sgl, sg, nhwentries, i) { 263 262 phys_addr = SG_ENT_PHYS_ADDRESS(sg); 264 263 sg->dma_address = provider->dma_map(pdev, 265 264 phys_addr, sg->length,
+3 -2
arch/powerpc/kernel/dma_64.c
··· 154 154 { 155 155 } 156 156 157 - static int dma_direct_map_sg(struct device *dev, struct scatterlist *sg, 157 + static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, 158 158 int nents, enum dma_data_direction direction) 159 159 { 160 + struct scatterlist *sg; 160 161 int i; 161 162 162 - for (i = 0; i < nents; i++, sg++) { 163 + for_each_sg(sgl, sg, nents, i) { 163 164 sg->dma_address = (page_to_phys(sg->page) + sg->offset) | 164 165 dma_direct_offset; 165 166 sg->dma_length = sg->length;
+6 -5
arch/powerpc/kernel/ibmebus.c
··· 87 87 } 88 88 89 89 static int ibmebus_map_sg(struct device *dev, 90 - struct scatterlist *sg, 90 + struct scatterlist *sgl, 91 91 int nents, enum dma_data_direction direction) 92 92 { 93 + struct scatterlist *sg; 93 94 int i; 94 95 95 - for (i = 0; i < nents; i++) { 96 - sg[i].dma_address = (dma_addr_t)page_address(sg[i].page) 97 - + sg[i].offset; 98 - sg[i].dma_length = sg[i].length; 96 + for_each_sg(sgl, sg, nents, i) { 97 + sg->dma_address = (dma_addr_t)page_address(sg->page) 98 + + sg->offset; 99 + sg->dma_length = sg->length; 99 100 } 100 101 101 102 return nents;
+14 -9
arch/powerpc/kernel/iommu.c
··· 277 277 dma_addr_t dma_next = 0, dma_addr; 278 278 unsigned long flags; 279 279 struct scatterlist *s, *outs, *segstart; 280 - int outcount, incount; 280 + int outcount, incount, i; 281 281 unsigned long handle; 282 282 283 283 BUG_ON(direction == DMA_NONE); ··· 297 297 298 298 spin_lock_irqsave(&(tbl->it_lock), flags); 299 299 300 - for (s = outs; nelems; nelems--, s++) { 300 + for_each_sg(sglist, s, nelems, i) { 301 301 unsigned long vaddr, npages, entry, slen; 302 302 303 303 slen = s->length; ··· 341 341 if (novmerge || (dma_addr != dma_next)) { 342 342 /* Can't merge: create a new segment */ 343 343 segstart = s; 344 - outcount++; outs++; 344 + outcount++; 345 + outs = sg_next(outs); 345 346 DBG(" can't merge, new segment.\n"); 346 347 } else { 347 348 outs->dma_length += s->length; ··· 375 374 * next entry of the sglist if we didn't fill the list completely 376 375 */ 377 376 if (outcount < incount) { 378 - outs++; 377 + outs = sg_next(outs); 379 378 outs->dma_address = DMA_ERROR_CODE; 380 379 outs->dma_length = 0; 381 380 } ··· 386 385 return outcount; 387 386 388 387 failure: 389 - for (s = &sglist[0]; s <= outs; s++) { 388 + for_each_sg(sglist, s, nelems, i) { 390 389 if (s->dma_length != 0) { 391 390 unsigned long vaddr, npages; 392 391 ··· 396 395 s->dma_address = DMA_ERROR_CODE; 397 396 s->dma_length = 0; 398 397 } 398 + if (s == outs) 399 + break; 399 400 } 400 401 spin_unlock_irqrestore(&(tbl->it_lock), flags); 401 402 return 0; ··· 407 404 void iommu_unmap_sg(struct iommu_table *tbl, struct scatterlist *sglist, 408 405 int nelems, enum dma_data_direction direction) 409 406 { 407 + struct scatterlist *sg; 410 408 unsigned long flags; 411 409 412 410 BUG_ON(direction == DMA_NONE); ··· 417 413 418 414 spin_lock_irqsave(&(tbl->it_lock), flags); 419 415 416 + sg = sglist; 420 417 while (nelems--) { 421 418 unsigned int npages; 422 419 dma_addr_t dma_handle = sg->dma_address; 423 420 424 - if (sglist->dma_length == 0) 421 + if (sg->dma_length == 0) 425 422 break; 426 - npages = iommu_num_pages(dma_handle,sglist->dma_length); 423 + npages = iommu_num_pages(dma_handle, sg->dma_length); 427 424 __iommu_free(tbl, dma_handle, npages); 428 - sglist++; 425 + sg = sg_next(sg); 429 426 } 430 427 431 428 /* Flush/invalidate TLBs if necessary. As for iommu_free(), we
+4 -3
arch/powerpc/platforms/ps3/system-bus.c
··· 616 616 } 617 617 } 618 618 619 - static int ps3_sb_map_sg(struct device *_dev, struct scatterlist *sg, int nents, 620 - enum dma_data_direction direction) 619 + static int ps3_sb_map_sg(struct device *_dev, struct scatterlist *sgl, 620 + int nents, enum dma_data_direction direction) 621 621 { 622 622 #if defined(CONFIG_PS3_DYNAMIC_DMA) 623 623 BUG_ON("do"); 624 624 return -EPERM; 625 625 #else 626 626 struct ps3_system_bus_device *dev = ps3_dev_to_system_bus_dev(_dev); 627 + struct scatterlist *sg; 627 628 int i; 628 629 629 - for (i = 0; i < nents; i++, sg++) { 630 + for_each_sg(sgl, sg, nents, i) { 630 631 int result = ps3_dma_map(dev->d_region, 631 632 page_to_phys(sg->page) + sg->offset, sg->length, 632 633 &sg->dma_address, 0);
+13 -12
arch/sparc/kernel/ioport.c
··· 35 35 #include <linux/slab.h> 36 36 #include <linux/pci.h> /* struct pci_dev */ 37 37 #include <linux/proc_fs.h> 38 + #include <linux/scatterlist.h> 38 39 39 40 #include <asm/io.h> 40 41 #include <asm/vaddrs.h> ··· 718 717 * Device ownership issues as mentioned above for pci_map_single are 719 718 * the same here. 720 719 */ 721 - int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, 720 + int pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, 722 721 int direction) 723 722 { 723 + struct scatterlist *sg; 724 724 int n; 725 725 726 726 BUG_ON(direction == PCI_DMA_NONE); 727 727 /* IIep is write-through, not flushing. */ 728 - for (n = 0; n < nents; n++) { 728 + for_each_sg(sgl, sg, nents, n) { 729 729 BUG_ON(page_address(sg->page) == NULL); 730 730 sg->dvma_address = 731 731 virt_to_phys(page_address(sg->page)) + sg->offset; 732 732 sg->dvma_length = sg->length; 733 - sg++; 734 733 } 735 734 return nents; 736 735 } ··· 739 738 * Again, cpu read rules concerning calls here are the same as for 740 739 * pci_unmap_single() above. 741 740 */ 742 - void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, 741 + void pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, 743 742 int direction) 744 743 { 744 + struct scatterlist *sg; 745 745 int n; 746 746 747 747 BUG_ON(direction == PCI_DMA_NONE); 748 748 if (direction != PCI_DMA_TODEVICE) { 749 - for (n = 0; n < nents; n++) { 749 + for_each_sg(sgl, sg, nents, n) { 750 750 BUG_ON(page_address(sg->page) == NULL); 751 751 mmu_inval_dma_area( 752 752 (unsigned long) page_address(sg->page), 753 753 (sg->length + PAGE_SIZE-1) & PAGE_MASK); 754 - sg++; 755 754 } 756 755 } 757 756 } ··· 790 789 * The same as pci_dma_sync_single_* but for a scatter-gather list, 791 790 * same rules and usage. 792 791 */ 793 - void pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction) 792 + void pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, int direction) 794 793 { 794 + struct scatterlist *sg; 795 795 int n; 796 796 797 797 BUG_ON(direction == PCI_DMA_NONE); 798 798 if (direction != PCI_DMA_TODEVICE) { 799 - for (n = 0; n < nents; n++) { 799 + for_each_sg(sgl, sg, nents, n) { 800 800 BUG_ON(page_address(sg->page) == NULL); 801 801 mmu_inval_dma_area( 802 802 (unsigned long) page_address(sg->page), 803 803 (sg->length + PAGE_SIZE-1) & PAGE_MASK); 804 - sg++; 805 804 } 806 805 } 807 806 } 808 807 809 - void pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction) 808 + void pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sgl, int nents, int direction) 810 809 { 810 + struct scatterlist *sg; 811 811 int n; 812 812 813 813 BUG_ON(direction == PCI_DMA_NONE); 814 814 if (direction != PCI_DMA_TODEVICE) { 815 - for (n = 0; n < nents; n++) { 815 + for_each_sg(sgl, sg, nents, n) { 816 816 BUG_ON(page_address(sg->page) == NULL); 817 817 mmu_inval_dma_area( 818 818 (unsigned long) page_address(sg->page), 819 819 (sg->length + PAGE_SIZE-1) & PAGE_MASK); 820 - sg++; 821 820 } 822 821 } 823 822 }
+7 -5
arch/sparc/mm/io-unit.c
··· 11 11 #include <linux/mm.h> 12 12 #include <linux/highmem.h> /* pte_offset_map => kmap_atomic */ 13 13 #include <linux/bitops.h> 14 + #include <linux/scatterlist.h> 14 15 15 - #include <asm/scatterlist.h> 16 16 #include <asm/pgalloc.h> 17 17 #include <asm/pgtable.h> 18 18 #include <asm/sbus.h> ··· 144 144 spin_lock_irqsave(&iounit->lock, flags); 145 145 while (sz != 0) { 146 146 --sz; 147 - sg[sz].dvma_address = iounit_get_area(iounit, (unsigned long)page_address(sg[sz].page) + sg[sz].offset, sg[sz].length); 148 - sg[sz].dvma_length = sg[sz].length; 147 + sg->dvma_address = iounit_get_area(iounit, (unsigned long)page_address(sg->page) + sg->offset, sg->length); 148 + sg->dvma_length = sg->length; 149 + sg = sg_next(sg); 149 150 } 150 151 spin_unlock_irqrestore(&iounit->lock, flags); 151 152 } ··· 174 173 spin_lock_irqsave(&iounit->lock, flags); 175 174 while (sz != 0) { 176 175 --sz; 177 - len = ((sg[sz].dvma_address & ~PAGE_MASK) + sg[sz].length + (PAGE_SIZE-1)) >> PAGE_SHIFT; 178 - vaddr = (sg[sz].dvma_address - IOUNIT_DMA_BASE) >> PAGE_SHIFT; 176 + len = ((sg->dvma_address & ~PAGE_MASK) + sg->length + (PAGE_SIZE-1)) >> PAGE_SHIFT; 177 + vaddr = (sg->dvma_address - IOUNIT_DMA_BASE) >> PAGE_SHIFT; 179 178 IOD(("iounit_release %08lx-%08lx\n", (long)vaddr, (long)len+vaddr)); 180 179 for (len += vaddr; vaddr < len; vaddr++) 181 180 clear_bit(vaddr, iounit->bmap); 181 + sg = sg_next(sg); 182 182 } 183 183 spin_unlock_irqrestore(&iounit->lock, flags); 184 184 }
+5 -5
arch/sparc/mm/iommu.c
··· 12 12 #include <linux/mm.h> 13 13 #include <linux/slab.h> 14 14 #include <linux/highmem.h> /* pte_offset_map => kmap_atomic */ 15 + #include <linux/scatterlist.h> 15 16 16 - #include <asm/scatterlist.h> 17 17 #include <asm/pgalloc.h> 18 18 #include <asm/pgtable.h> 19 19 #include <asm/sbus.h> ··· 240 240 n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT; 241 241 sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset; 242 242 sg->dvma_length = (__u32) sg->length; 243 - sg++; 243 + sg = sg_next(sg); 244 244 } 245 245 } 246 246 ··· 254 254 n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT; 255 255 sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset; 256 256 sg->dvma_length = (__u32) sg->length; 257 - sg++; 257 + sg = sg_next(sg); 258 258 } 259 259 } 260 260 ··· 285 285 286 286 sg->dvma_address = iommu_get_one(sg->page, n, sbus) + sg->offset; 287 287 sg->dvma_length = (__u32) sg->length; 288 - sg++; 288 + sg = sg_next(sg); 289 289 } 290 290 } 291 291 ··· 325 325 n = (sg->length + sg->offset + PAGE_SIZE-1) >> PAGE_SHIFT; 326 326 iommu_release_one(sg->dvma_address & PAGE_MASK, n, sbus); 327 327 sg->dvma_address = 0x21212121; 328 - sg++; 328 + sg = sg_next(sg); 329 329 } 330 330 } 331 331
+6 -4
arch/sparc/mm/sun4c.c
··· 17 17 #include <linux/highmem.h> 18 18 #include <linux/fs.h> 19 19 #include <linux/seq_file.h> 20 + #include <linux/scatterlist.h> 20 21 21 - #include <asm/scatterlist.h> 22 22 #include <asm/page.h> 23 23 #include <asm/pgalloc.h> 24 24 #include <asm/pgtable.h> ··· 1228 1228 { 1229 1229 while (sz != 0) { 1230 1230 --sz; 1231 - sg[sz].dvma_address = (__u32)sun4c_lockarea(page_address(sg[sz].page) + sg[sz].offset, sg[sz].length); 1232 - sg[sz].dvma_length = sg[sz].length; 1231 + sg->dvma_address = (__u32)sun4c_lockarea(page_address(sg->page) + sg->offset, sg->length); 1232 + sg->dvma_length = sg->length; 1233 + sg = sg_next(sg); 1233 1234 } 1234 1235 } 1235 1236 ··· 1245 1244 { 1246 1245 while (sz != 0) { 1247 1246 --sz; 1248 - sun4c_unlockarea((char *)sg[sz].dvma_address, sg[sz].length); 1247 + sun4c_unlockarea((char *)sg->dvma_address, sg->length); 1248 + sg = sg_next(sg); 1249 1249 } 1250 1250 } 1251 1251
+24 -15
arch/sparc64/kernel/iommu.c
··· 10 10 #include <linux/device.h> 11 11 #include <linux/dma-mapping.h> 12 12 #include <linux/errno.h> 13 + #include <linux/scatterlist.h> 13 14 14 15 #ifdef CONFIG_PCI 15 16 #include <linux/pci.h> ··· 481 480 unsigned long iopte_protection) 482 481 { 483 482 struct scatterlist *dma_sg = sg; 484 - struct scatterlist *sg_end = sg + nelems; 483 + struct scatterlist *sg_end = sg_last(sg, nelems); 485 484 int i; 486 485 487 486 for (i = 0; i < nused; i++) { ··· 516 515 len -= (IO_PAGE_SIZE - (tmp & (IO_PAGE_SIZE - 1UL))); 517 516 break; 518 517 } 519 - sg++; 518 + sg = sg_next(sg); 520 519 } 521 520 522 521 pteval = iopte_protection | (pteval & IOPTE_PAGE); ··· 529 528 } 530 529 531 530 pteval = (pteval & IOPTE_PAGE) + len; 532 - sg++; 531 + sg = sg_next(sg); 533 532 534 533 /* Skip over any tail mappings we've fully mapped, 535 534 * adjusting pteval along the way. Stop when we 536 535 * detect a page crossing event. 537 536 */ 538 - while (sg < sg_end && 537 + while (sg != sg_end && 539 538 (pteval << (64 - IO_PAGE_SHIFT)) != 0UL && 540 539 (pteval == SG_ENT_PHYS_ADDRESS(sg)) && 541 540 ((pteval ^ 542 541 (SG_ENT_PHYS_ADDRESS(sg) + sg->length - 1UL)) >> IO_PAGE_SHIFT) == 0UL) { 543 542 pteval += sg->length; 544 - sg++; 543 + sg = sg_next(sg); 545 544 } 546 545 if ((pteval << (64 - IO_PAGE_SHIFT)) == 0UL) 547 546 pteval = ~0UL; 548 547 } while (dma_npages != 0); 549 - dma_sg++; 548 + dma_sg = sg_next(dma_sg); 550 549 } 551 550 ··· 607 606 sgtmp = sglist; 608 607 while (used && sgtmp->dma_length) { 609 608 sgtmp->dma_address += dma_base; 610 - sgtmp++; 609 + sgtmp = sg_next(sgtmp); 611 610 used--; 612 611 } 613 612 used = nelems - used; ··· 643 642 struct strbuf *strbuf; 644 643 iopte_t *base; 645 644 unsigned long flags, ctx, i, npages; 645 + struct scatterlist *sg, *sgprv; 646 646 u32 bus_addr; 647 647 648 648 if (unlikely(direction == DMA_NONE)) { ··· 656 654 657 655 bus_addr = sglist->dma_address & IO_PAGE_MASK; 658 656 659 - for (i = 1; i < nelems; i++) 660 - if (sglist[i].dma_length == 0) 657 + sgprv = NULL; 658 + for_each_sg(sglist, sg, nelems, i) { 659 + if (sg->dma_length == 0) 661 660 break; 662 - i--; 663 - npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) - 661 + sgprv = sg; 662 + } 663 + 664 + npages = (IO_PAGE_ALIGN(sgprv->dma_address + sgprv->dma_length) - 664 665 bus_addr) >> IO_PAGE_SHIFT; 665 666 666 667 base = iommu->page_table + ··· 735 730 struct iommu *iommu; 736 731 struct strbuf *strbuf; 737 732 unsigned long flags, ctx, npages, i; 733 + struct scatterlist *sg, *sgprv; 738 734 u32 bus_addr; 739 735 740 736 iommu = dev->archdata.iommu; ··· 759 753 760 754 /* Step 2: Kick data out of streaming buffers. */ 761 755 bus_addr = sglist[0].dma_address & IO_PAGE_MASK; 762 - for(i = 1; i < nelems; i++) 763 - if (!sglist[i].dma_length) 756 + sgprv = NULL; 757 + for_each_sg(sglist, sg, nelems, i) { 758 + if (sg->dma_length == 0) 764 759 break; 765 - i--; 766 - npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) 760 + sgprv = sg; 761 + } 762 + 763 + npages = (IO_PAGE_ALIGN(sgprv->dma_address + sgprv->dma_length) 767 764 - bus_addr) >> IO_PAGE_SHIFT; 768 765 strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction); 769 766
+19 -13
arch/sparc64/kernel/pci_sun4v.c
··· 13 13 #include <linux/irq.h> 14 14 #include <linux/msi.h> 15 15 #include <linux/log2.h> 16 + #include <linux/scatterlist.h> 16 17 17 18 #include <asm/iommu.h> 18 19 #include <asm/irq.h> ··· 374 373 int nused, int nelems, unsigned long prot) 375 374 { 376 375 struct scatterlist *dma_sg = sg; 377 - struct scatterlist *sg_end = sg + nelems; 376 + struct scatterlist *sg_end = sg_last(sg, nelems); 378 377 unsigned long flags; 379 378 int i; 380 379 ··· 414 413 len -= (IO_PAGE_SIZE - (tmp & (IO_PAGE_SIZE - 1UL))); 415 414 break; 416 415 } 417 - sg++; 416 + sg = sg_next(sg); 418 417 } 419 418 420 419 pteval = (pteval & IOPTE_PAGE); ··· 432 431 } 433 432 434 433 pteval = (pteval & IOPTE_PAGE) + len; 435 - sg++; 434 + sg = sg_next(sg); 436 435 437 436 /* Skip over any tail mappings we've fully mapped, 438 437 * adjusting pteval along the way. Stop when we 439 438 * detect a page crossing event. 440 439 */ 441 - while (sg < sg_end && 442 - (pteval << (64 - IO_PAGE_SHIFT)) != 0UL && 440 + while ((pteval << (64 - IO_PAGE_SHIFT)) != 0UL && 443 441 (pteval == SG_ENT_PHYS_ADDRESS(sg)) && 444 442 ((pteval ^ 445 443 (SG_ENT_PHYS_ADDRESS(sg) + sg->length - 1UL)) >> IO_PAGE_SHIFT) == 0UL) { 446 444 pteval += sg->length; 447 445 if (sg == sg_end) 446 + break; 447 + sg = sg_next(sg); 448 448 } 449 449 if ((pteval << (64 - IO_PAGE_SHIFT)) == 0UL) 450 450 pteval = ~0UL; 451 451 } while (dma_npages != 0); 452 - dma_sg++; 452 + dma_sg = sg_next(dma_sg); 453 453 } 454 454 455 455 if (unlikely(iommu_batch_end() < 0L)) ··· 512 510 sgtmp = sglist; 513 511 while (used && sgtmp->dma_length) { 514 512 sgtmp->dma_address += dma_base; 515 - sgtmp++; 513 + sgtmp = sg_next(sgtmp); 516 514 used--; 517 515 } 518 516 used = nelems - used; ··· 547 545 struct pci_pbm_info *pbm; 548 546 struct iommu *iommu; 549 547 unsigned long flags, i, npages; 548 + struct scatterlist *sg, *sgprv; 550 549 long entry; 551 550 u32 devhandle, bus_addr; 552 551 ··· 561 558 devhandle = pbm->devhandle; 562 559 563 560 bus_addr = sglist->dma_address & IO_PAGE_MASK; 564 - 565 - for (i = 1; i < nelems; i++) 566 - if (sglist[i].dma_length == 0) 561 + sgprv = NULL; 562 + for_each_sg(sglist, sg, nelems, i) { 563 + if (sg->dma_length == 0) 567 564 break; 568 - i--; 569 - npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length) - 565 + 566 + sgprv = sg; 567 + } 568 + 569 + npages = (IO_PAGE_ALIGN(sgprv->dma_address + sgprv->dma_length) - 570 570 bus_addr) >> IO_PAGE_SHIFT; 571 571 572 572 entry = ((bus_addr - iommu->page_table_map_base) >> IO_PAGE_SHIFT);
+13 -11
arch/x86/kernel/pci-calgary_64.c
··· 35 35 #include <linux/pci_ids.h> 36 36 #include <linux/pci.h> 37 37 #include <linux/delay.h> 38 + #include <linux/scatterlist.h> 38 39 #include <asm/iommu.h> 39 40 #include <asm/calgary.h> 40 41 #include <asm/tce.h> ··· 385 384 struct scatterlist *sglist, int nelems, int direction) 386 385 { 387 386 struct iommu_table *tbl = find_iommu_table(dev); 387 + struct scatterlist *s; 388 + int i; 388 389 389 390 if (!translate_phb(to_pci_dev(dev))) 390 391 return; 391 392 392 - while (nelems--) { 393 + for_each_sg(sglist, s, nelems, i) { 393 394 unsigned int npages; 394 - dma_addr_t dma = sglist->dma_address; 395 - unsigned int dmalen = sglist->dma_length; 395 + dma_addr_t dma = s->dma_address; 396 + unsigned int dmalen = s->dma_length; 396 397 397 398 if (dmalen == 0) 398 399 break; 399 400 400 401 npages = num_dma_pages(dma, dmalen); 401 402 iommu_free(tbl, dma, npages); 402 - sglist++; 403 403 } 404 404 } 405 405 406 406 static int calgary_nontranslate_map_sg(struct device* dev, 407 407 struct scatterlist *sg, int nelems, int direction) 408 408 { 409 + struct scatterlist *s; 409 410 int i; 410 411 411 - for (i = 0; i < nelems; i++ ) { 412 - struct scatterlist *s = &sg[i]; 412 + for_each_sg(sg, s, nelems, i) { 413 413 BUG_ON(!s->page); 414 414 s->dma_address = virt_to_bus(page_address(s->page) +s->offset); 415 415 s->dma_length = s->length; ··· 422 420 int nelems, int direction) 423 421 { 424 422 struct iommu_table *tbl = find_iommu_table(dev); 423 + struct scatterlist *s; 425 424 unsigned long vaddr; 426 425 unsigned int npages; 427 426 unsigned long entry; ··· 431 428 if (!translate_phb(to_pci_dev(dev))) 432 429 return calgary_nontranslate_map_sg(dev, sg, nelems, direction); 433 430 434 - for (i = 0; i < nelems; i++ ) { 435 - struct scatterlist *s = &sg[i]; 431 + for_each_sg(sg, s, nelems, i) { 436 432 BUG_ON(!s->page); 437 433 438 434 vaddr = (unsigned long)page_address(s->page) + s->offset; ··· 456 454 return nelems; 457 455 error: 458 456 calgary_unmap_sg(dev, sg, nelems, direction); 459 - for (i = 0; i < nelems; i++) { 460 - sg[i].dma_address = bad_dma_address; 461 - sg[i].dma_length = 0; 457 + for_each_sg(sg, s, nelems, i) { 458 + sg->dma_address = bad_dma_address; 459 + sg->dma_length = 0; 462 460 } 463 461 return 0; 464 462 }
+36 -29
arch/x86/kernel/pci-gart_64.c
··· 23 23 #include <linux/interrupt.h> 24 24 #include <linux/bitops.h> 25 25 #include <linux/kdebug.h> 26 + #include <linux/scatterlist.h> 26 27 #include <asm/atomic.h> 27 28 #include <asm/io.h> 28 29 #include <asm/mtrr.h> ··· 279 278 */ 280 279 static void gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int dir) 281 280 { 281 + struct scatterlist *s; 282 282 int i; 283 283 284 - for (i = 0; i < nents; i++) { 285 - struct scatterlist *s = &sg[i]; 284 + for_each_sg(sg, s, nents, i) { 286 285 if (!s->dma_length || !s->length) 287 286 break; 288 287 gart_unmap_single(dev, s->dma_address, s->dma_length, dir); ··· 293 292 static int dma_map_sg_nonforce(struct device *dev, struct scatterlist *sg, 294 293 int nents, int dir) 295 294 { 295 + struct scatterlist *s; 296 296 int i; 297 297 298 298 #ifdef CONFIG_IOMMU_DEBUG 299 299 printk(KERN_DEBUG "dma_map_sg overflow\n"); 300 300 #endif 301 301 302 - for (i = 0; i < nents; i++ ) { 303 - struct scatterlist *s = &sg[i]; 302 + for_each_sg(sg, s, nents, i) { 304 303 unsigned long addr = page_to_phys(s->page) + s->offset; 305 304 if (nonforced_iommu(dev, addr, s->length)) { 306 305 addr = dma_map_area(dev, addr, s->length, dir); ··· 320 319 } 321 320 322 321 /* Map multiple scatterlist entries continuous into the first. */ 323 - static int __dma_map_cont(struct scatterlist *sg, int start, int stopat, 322 + static int __dma_map_cont(struct scatterlist *start, int nelems, 324 323 struct scatterlist *sout, unsigned long pages) 325 324 { 326 325 unsigned long iommu_start = alloc_iommu(pages); 327 326 unsigned long iommu_page = iommu_start; 327 + struct scatterlist *s; 328 328 int i; 329 329 330 330 if (iommu_start == -1) 331 331 return -1; 332 - 333 - for (i = start; i < stopat; i++) { 334 - struct scatterlist *s = &sg[i]; 332 + 333 + for_each_sg(start, s, nelems, i) { 335 334 unsigned long pages, addr; 336 335 unsigned long phys_addr = s->dma_address; 337 336 338 - BUG_ON(i > start && s->offset); 339 - if (i == start) { 337 + BUG_ON(s != start && s->offset); 338 + if (s == start) { 340 339 *sout = *s; 341 340 sout->dma_address = iommu_bus_base; 342 341 sout->dma_address += iommu_page*PAGE_SIZE + s->offset; ··· 358 357 return 0; 359 358 } 360 359 361 - static inline int dma_map_cont(struct scatterlist *sg, int start, int stopat, 360 + static inline int dma_map_cont(struct scatterlist *start, int nelems, 362 361 struct scatterlist *sout, 363 362 unsigned long pages, int need) 364 363 { 365 - if (!need) { 366 - BUG_ON(stopat - start != 1); 367 - *sout = sg[start]; 368 - sout->dma_length = sg[start].length; 364 + if (!need) { 365 + BUG_ON(nelems != 1); 366 + *sout = *start; 367 + sout->dma_length = start->length; 369 368 return 0; 370 - } 371 - return __dma_map_cont(sg, start, stopat, sout, pages); 369 + } 370 + return __dma_map_cont(start, nelems, sout, pages); 372 371 } 373 372 374 373 /* ··· 382 381 int start; 383 382 unsigned long pages = 0; 384 383 int need = 0, nextneed; 385 + struct scatterlist *s, *ps, *start_sg, *sgmap; 385 386 386 387 if (nents == 0) 387 388 return 0; ··· 392 390 393 391 out = 0; 394 392 start = 0; 395 - for (i = 0; i < nents; i++) { 396 - struct scatterlist *s = &sg[i]; 393 + start_sg = sgmap = sg; 394 + ps = NULL; /* shut up gcc */ 395 + for_each_sg(sg, s, nents, i) { 397 396 dma_addr_t addr = page_to_phys(s->page) + s->offset; 398 397 s->dma_address = addr; 399 398 BUG_ON(s->length == 0); ··· 403 400 404 401 /* Handle the previous not yet processed entries */ 405 402 if (i > start) { 406 - struct scatterlist *ps = &sg[i-1]; 407 403 /* Can only merge when the last chunk ends on a page 408 404 boundary and the new one doesn't have an offset. */ 409 405 if (!iommu_merge || !nextneed || !need || s->offset || 410 - (ps->offset + ps->length) % PAGE_SIZE) { 411 - if (dma_map_cont(sg, start, i, sg+out, pages, 412 - need) < 0) 406 + (ps->offset + ps->length) % PAGE_SIZE) { 407 + if (dma_map_cont(start_sg, i - start, sgmap, 408 + pages, need) < 0) 413 409 goto error; 414 410 out++; 411 + sgmap = sg_next(sgmap); 415 412 pages = 0; 416 - start = i; 413 + start = i; 414 + start_sg = s; 417 415 } 418 416 } 419 417 420 418 need = nextneed; 421 419 pages += to_pages(s->offset, s->length); 420 + ps = s; 422 421 } 423 - if (dma_map_cont(sg, start, i, sg+out, pages, need) < 0) 422 + if (dma_map_cont(start_sg, i - start, sgmap, pages, need) < 0) 424 423 goto error; 425 424 out++; 426 425 flush_gart(); 427 - if (out < nents) 428 - sg[out].dma_length = 0; 426 + if (out < nents) { 427 + sgmap = sg_next(sgmap); 428 + sgmap->dma_length = 0; 429 + } 429 430 return out; 430 431 431 432 error: ··· 444 437 if (panic_on_overflow) 445 438 panic("dma_map_sg: overflow on %lu pages\n", pages); 446 439 iommu_full(dev, pages << PAGE_SHIFT, dir); 447 - for (i = 0; i < nents; i++) 448 - sg[i].dma_address = bad_dma_address; 440 + for_each_sg(sg, s, nents, i) 441 + s->dma_address = bad_dma_address; 449 442 return 0; 450 443 } 451 444
+3 -2
arch/x86/kernel/pci-nommu_64.c
··· 5 5 #include <linux/pci.h> 6 6 #include <linux/string.h> 7 7 #include <linux/dma-mapping.h> 8 + #include <linux/scatterlist.h> 8 9 9 10 #include <asm/iommu.h> 10 11 #include <asm/processor.h> ··· 58 57 static int nommu_map_sg(struct device *hwdev, struct scatterlist *sg, 59 58 int nents, int direction) 60 59 { 60 + struct scatterlist *s; 61 61 int i; 62 62 63 - for (i = 0; i < nents; i++ ) { 64 - struct scatterlist *s = &sg[i]; 63 + for_each_sg(sg, s, nents, i) { 65 64 BUG_ON(!s->page); 66 65 s->dma_address = virt_to_bus(page_address(s->page) +s->offset); 67 66 if (!check_addr("map_sg", hwdev, s->dma_address, s->length))
+1 -1
block/bsg.c
··· 908 908 } 909 909 } 910 910 911 - static struct file_operations bsg_fops = { 911 + static const struct file_operations bsg_fops = { 912 912 .read = bsg_read, 913 913 .write = bsg_write, 914 914 .poll = bsg_poll,
+9 -8
block/elevator.c
··· 712 712 int ret; 713 713 714 714 while ((rq = __elv_next_request(q)) != NULL) { 715 + /* 716 + * Kill the empty barrier place holder, the driver must 717 + * not ever see it. 718 + */ 719 + if (blk_empty_barrier(rq)) { 720 + end_queued_request(rq, 1); 721 + continue; 722 + } 715 723 if (!(rq->cmd_flags & REQ_STARTED)) { 716 724 /* 717 725 * This is the first time the device driver ··· 759 751 rq = NULL; 760 752 break; 761 753 } else if (ret == BLKPREP_KILL) { 762 - int nr_bytes = rq->hard_nr_sectors << 9; 763 - 764 - if (!nr_bytes) 765 - nr_bytes = rq->data_len; 766 - 767 - blkdev_dequeue_request(rq); 768 754 rq->cmd_flags |= REQ_QUIET; 769 - end_that_request_chunk(rq, 0, nr_bytes); 770 - end_that_request_last(rq, 0); 755 + end_queued_request(rq, 0); 771 756 } else { 772 757 printk(KERN_ERR "%s: bad return=%d\n", __FUNCTION__, 773 758 ret);
+224 -87
block/ll_rw_blk.c
··· 30 30 #include <linux/cpu.h> 31 31 #include <linux/blktrace_api.h> 32 32 #include <linux/fault-inject.h> 33 + #include <linux/scatterlist.h> 33 34 34 35 /* 35 36 * for max sense size ··· 305 304 306 305 EXPORT_SYMBOL(blk_queue_ordered); 307 306 308 - /** 309 - * blk_queue_issue_flush_fn - set function for issuing a flush 310 - * @q: the request queue 311 - * @iff: the function to be called issuing the flush 312 - * 313 - * Description: 314 - * If a driver supports issuing a flush command, the support is notified 315 - * to the block layer by defining it through this call. 316 - * 317 - **/ 318 - void blk_queue_issue_flush_fn(struct request_queue *q, issue_flush_fn *iff) 319 - { 320 - q->issue_flush_fn = iff; 321 - } 322 - 323 - EXPORT_SYMBOL(blk_queue_issue_flush_fn); 324 - 325 307 /* 326 308 * Cache flushing for ordered writes handling 327 309 */ ··· 361 377 /* 362 378 * Okay, sequence complete. 363 379 */ 364 - rq = q->orig_bar_rq; 365 - uptodate = q->orderr ? q->orderr : 1; 380 + uptodate = 1; 381 + if (q->orderr) 382 + uptodate = q->orderr; 366 383 367 384 q->ordseq = 0; 385 + rq = q->orig_bar_rq; 368 386 369 387 end_that_request_first(rq, uptodate, rq->hard_nr_sectors); 370 388 end_that_request_last(rq, uptodate); ··· 431 445 rq_init(q, rq); 432 446 if (bio_data_dir(q->orig_bar_rq->bio) == WRITE) 433 447 rq->cmd_flags |= REQ_RW; 434 - rq->cmd_flags |= q->ordered & QUEUE_ORDERED_FUA ? REQ_FUA : 0; 448 + if (q->ordered & QUEUE_ORDERED_FUA) 449 + rq->cmd_flags |= REQ_FUA; 435 450 rq->elevator_private = NULL; 436 451 rq->elevator_private2 = NULL; 437 452 init_request_from_bio(rq, q->orig_bar_rq->bio); ··· 442 455 * Queue ordered sequence. As we stack them at the head, we 443 456 * need to queue in reverse order. Note that we rely on that 444 457 * no fs request uses ELEVATOR_INSERT_FRONT and thus no fs 445 - * request gets inbetween ordered sequence. 458 + * request gets inbetween ordered sequence. 
If this request is 459 + * an empty barrier, we don't need to do a postflush ever since 460 + * there will be no data written between the pre and post flush. 461 + * Hence a single flush will suffice. 446 462 */ 447 - if (q->ordered & QUEUE_ORDERED_POSTFLUSH) 463 + if ((q->ordered & QUEUE_ORDERED_POSTFLUSH) && !blk_empty_barrier(rq)) 448 464 queue_flush(q, QUEUE_ORDERED_POSTFLUSH); 449 465 else 450 466 q->ordseq |= QUEUE_ORDSEQ_POSTFLUSH; ··· 471 481 int blk_do_ordered(struct request_queue *q, struct request **rqp) 472 482 { 473 483 struct request *rq = *rqp; 474 - int is_barrier = blk_fs_request(rq) && blk_barrier_rq(rq); 484 + const int is_barrier = blk_fs_request(rq) && blk_barrier_rq(rq); 475 485 476 486 if (!q->ordseq) { 477 487 if (!is_barrier) ··· 1319 1329 * must make sure sg can hold rq->nr_phys_segments entries 1320 1330 */ 1321 1331 int blk_rq_map_sg(struct request_queue *q, struct request *rq, 1322 - struct scatterlist *sg) 1332 + struct scatterlist *sglist) 1323 1333 { 1324 1334 struct bio_vec *bvec, *bvprv; 1335 + struct scatterlist *next_sg, *sg; 1325 1336 struct req_iterator iter; 1326 1337 int nsegs, cluster; 1327 1338 ··· 1333 1342 * for each bio in rq 1334 1343 */ 1335 1344 bvprv = NULL; 1345 + sg = next_sg = &sglist[0]; 1336 1346 rq_for_each_segment(bvec, rq, iter) { 1337 1347 int nbytes = bvec->bv_len; 1338 1348 1339 1349 if (bvprv && cluster) { 1340 - if (sg[nsegs - 1].length + nbytes > q->max_segment_size) 1350 + if (sg->length + nbytes > q->max_segment_size) 1341 1351 goto new_segment; 1342 1352 1343 1353 if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec)) ··· 1346 1354 if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec)) 1347 1355 goto new_segment; 1348 1356 1349 - sg[nsegs - 1].length += nbytes; 1357 + sg->length += nbytes; 1350 1358 } else { 1351 1359 new_segment: 1352 - memset(&sg[nsegs],0,sizeof(struct scatterlist)); 1353 - sg[nsegs].page = bvec->bv_page; 1354 - sg[nsegs].length = nbytes; 1355 - sg[nsegs].offset = bvec->bv_offset; 1360 + sg = next_sg; 1361 + 
next_sg = sg_next(sg); 1356 1362 1363 + sg->page = bvec->bv_page; 1364 + sg->length = nbytes; 1365 + sg->offset = bvec->bv_offset; 1357 1366 nsegs++; 1358 1367 } 1359 1368 bvprv = bvec; ··· 2653 2660 2654 2661 EXPORT_SYMBOL(blk_execute_rq); 2655 2662 2663 + static void bio_end_empty_barrier(struct bio *bio, int err) 2664 + { 2665 + if (err) 2666 + clear_bit(BIO_UPTODATE, &bio->bi_flags); 2667 + 2668 + complete(bio->bi_private); 2669 + } 2670 + 2656 2671 /** 2657 2672 * blkdev_issue_flush - queue a flush 2658 2673 * @bdev: blockdev to issue flush for ··· 2673 2672 */ 2674 2673 int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector) 2675 2674 { 2675 + DECLARE_COMPLETION_ONSTACK(wait); 2676 2676 struct request_queue *q; 2677 + struct bio *bio; 2678 + int ret; 2677 2679 2678 2680 if (bdev->bd_disk == NULL) 2679 2681 return -ENXIO; ··· 2684 2680 q = bdev_get_queue(bdev); 2685 2681 if (!q) 2686 2682 return -ENXIO; 2687 - if (!q->issue_flush_fn) 2688 - return -EOPNOTSUPP; 2689 2683 2690 - return q->issue_flush_fn(q, bdev->bd_disk, error_sector); 2684 + bio = bio_alloc(GFP_KERNEL, 0); 2685 + if (!bio) 2686 + return -ENOMEM; 2687 + 2688 + bio->bi_end_io = bio_end_empty_barrier; 2689 + bio->bi_private = &wait; 2690 + bio->bi_bdev = bdev; 2691 + submit_bio(1 << BIO_RW_BARRIER, bio); 2692 + 2693 + wait_for_completion(&wait); 2694 + 2695 + /* 2696 + * The driver must store the error location in ->bi_sector, if 2697 + * it supports it. For non-stacked drivers, this should be copied 2698 + * from rq->sector. 
2699 + */ 2700 + if (error_sector) 2701 + *error_sector = bio->bi_sector; 2702 + 2703 + ret = 0; 2704 + if (!bio_flagged(bio, BIO_UPTODATE)) 2705 + ret = -EIO; 2706 + 2707 + bio_put(bio); 2708 + return ret; 2691 2709 } 2692 2710 2693 2711 EXPORT_SYMBOL(blkdev_issue_flush); ··· 3077 3051 { 3078 3052 struct block_device *bdev = bio->bi_bdev; 3079 3053 3080 - if (bdev != bdev->bd_contains) { 3054 + if (bio_sectors(bio) && bdev != bdev->bd_contains) { 3081 3055 struct hd_struct *p = bdev->bd_part; 3082 3056 const int rw = bio_data_dir(bio); 3083 3057 ··· 3143 3117 3144 3118 #endif /* CONFIG_FAIL_MAKE_REQUEST */ 3145 3119 3120 + /* 3121 + * Check whether this bio extends beyond the end of the device. 3122 + */ 3123 + static inline int bio_check_eod(struct bio *bio, unsigned int nr_sectors) 3124 + { 3125 + sector_t maxsector; 3126 + 3127 + if (!nr_sectors) 3128 + return 0; 3129 + 3130 + /* Test device or partition size, when known. */ 3131 + maxsector = bio->bi_bdev->bd_inode->i_size >> 9; 3132 + if (maxsector) { 3133 + sector_t sector = bio->bi_sector; 3134 + 3135 + if (maxsector < nr_sectors || maxsector - nr_sectors < sector) { 3136 + /* 3137 + * This may well happen - the kernel calls bread() 3138 + * without checking the size of the device, e.g., when 3139 + * mounting a device. 3140 + */ 3141 + handle_bad_sector(bio); 3142 + return 1; 3143 + } 3144 + } 3145 + 3146 + return 0; 3147 + } 3148 + 3146 3149 /** 3147 3150 * generic_make_request: hand a buffer to its device driver for I/O 3148 3151 * @bio: The bio describing the location in memory and on the device. ··· 3199 3144 static inline void __generic_make_request(struct bio *bio) 3200 3145 { 3201 3146 struct request_queue *q; 3202 - sector_t maxsector; 3203 3147 sector_t old_sector; 3204 3148 int ret, nr_sectors = bio_sectors(bio); 3205 3149 dev_t old_dev; 3206 3150 3207 3151 might_sleep(); 3208 - /* Test device or partition size, when known. 
*/ 3209 - maxsector = bio->bi_bdev->bd_inode->i_size >> 9; 3210 - if (maxsector) { 3211 - sector_t sector = bio->bi_sector; 3212 3152 3213 - if (maxsector < nr_sectors || maxsector - nr_sectors < sector) { 3214 - /* 3215 - * This may well happen - the kernel calls bread() 3216 - * without checking the size of the device, e.g., when 3217 - * mounting a device. 3218 - */ 3219 - handle_bad_sector(bio); 3220 - goto end_io; 3221 - } 3222 - } 3153 + if (bio_check_eod(bio, nr_sectors)) 3154 + goto end_io; 3223 3155 3224 3156 /* 3225 3157 * Resolve the mapping until finished. (drivers are ··· 3233 3191 break; 3234 3192 } 3235 3193 3236 - if (unlikely(bio_sectors(bio) > q->max_hw_sectors)) { 3194 + if (unlikely(nr_sectors > q->max_hw_sectors)) { 3237 3195 printk("bio too big device %s (%u > %u)\n", 3238 3196 bdevname(bio->bi_bdev, b), 3239 3197 bio_sectors(bio), ··· 3254 3212 blk_partition_remap(bio); 3255 3213 3256 3214 if (old_sector != -1) 3257 - blk_add_trace_remap(q, bio, old_dev, bio->bi_sector, 3215 + blk_add_trace_remap(q, bio, old_dev, bio->bi_sector, 3258 3216 old_sector); 3259 3217 3260 3218 blk_add_trace_bio(q, bio, BLK_TA_QUEUE); ··· 3262 3220 old_sector = bio->bi_sector; 3263 3221 old_dev = bio->bi_bdev->bd_dev; 3264 3222 3265 - maxsector = bio->bi_bdev->bd_inode->i_size >> 9; 3266 - if (maxsector) { 3267 - sector_t sector = bio->bi_sector; 3268 - 3269 - if (maxsector < nr_sectors || 3270 - maxsector - nr_sectors < sector) { 3271 - /* 3272 - * This may well happen - partitions are not 3273 - * checked to make sure they are within the size 3274 - * of the whole device. 
3275 - */ 3276 - handle_bad_sector(bio); 3277 - goto end_io; 3278 - } 3279 - } 3223 + if (bio_check_eod(bio, nr_sectors)) 3224 + goto end_io; 3280 3225 3281 3226 ret = q->make_request_fn(q, bio); 3282 3227 } while (ret); ··· 3336 3307 { 3337 3308 int count = bio_sectors(bio); 3338 3309 3339 - BIO_BUG_ON(!bio->bi_size); 3340 - BIO_BUG_ON(!bio->bi_io_vec); 3341 3310 bio->bi_rw |= rw; 3342 - if (rw & WRITE) { 3343 - count_vm_events(PGPGOUT, count); 3344 - } else { 3345 - task_io_account_read(bio->bi_size); 3346 - count_vm_events(PGPGIN, count); 3347 - } 3348 3311 3349 - if (unlikely(block_dump)) { 3350 - char b[BDEVNAME_SIZE]; 3351 - printk(KERN_DEBUG "%s(%d): %s block %Lu on %s\n", 3352 - current->comm, current->pid, 3353 - (rw & WRITE) ? "WRITE" : "READ", 3354 - (unsigned long long)bio->bi_sector, 3355 - bdevname(bio->bi_bdev,b)); 3312 + /* 3313 + * If it's a regular read/write or a barrier with data attached, 3314 + * go through the normal accounting stuff before submission. 3315 + */ 3316 + if (!bio_empty_barrier(bio)) { 3317 + 3318 + BIO_BUG_ON(!bio->bi_size); 3319 + BIO_BUG_ON(!bio->bi_io_vec); 3320 + 3321 + if (rw & WRITE) { 3322 + count_vm_events(PGPGOUT, count); 3323 + } else { 3324 + task_io_account_read(bio->bi_size); 3325 + count_vm_events(PGPGIN, count); 3326 + } 3327 + 3328 + if (unlikely(block_dump)) { 3329 + char b[BDEVNAME_SIZE]; 3330 + printk(KERN_DEBUG "%s(%d): %s block %Lu on %s\n", 3331 + current->comm, current->pid, 3332 + (rw & WRITE) ? "WRITE" : "READ", 3333 + (unsigned long long)bio->bi_sector, 3334 + bdevname(bio->bi_bdev,b)); 3335 + } 3356 3336 } 3357 3337 3358 3338 generic_make_request(bio); ··· 3436 3398 total_bytes = bio_nbytes = 0; 3437 3399 while ((bio = req->bio) != NULL) { 3438 3400 int nbytes; 3401 + 3402 + /* 3403 + * For an empty barrier request, the low level driver must 3404 + * store a potential error location in ->sector. We pass 3405 + * that back up in ->bi_sector. 
3406 + */ 3407 + if (blk_empty_barrier(req)) 3408 + bio->bi_sector = req->sector; 3439 3409 3440 3410 if (nr_bytes >= bio->bi_size) { 3441 3411 req->bio = bio->bi_next; ··· 3610 3564 * Description: 3611 3565 * Ends all I/O on a request. It does not handle partial completions, 3612 3566 * unless the driver actually implements this in its completion callback 3613 - * through requeueing. Theh actual completion happens out-of-order, 3567 + * through requeueing. The actual completion happens out-of-order, 3614 3568 * through a softirq handler. The user must have registered a completion 3615 3569 * callback through blk_queue_softirq_done(). 3616 3570 **/ ··· 3673 3627 3674 3628 EXPORT_SYMBOL(end_that_request_last); 3675 3629 3676 - void end_request(struct request *req, int uptodate) 3630 + static inline void __end_request(struct request *rq, int uptodate, 3631 + unsigned int nr_bytes, int dequeue) 3677 3632 { 3678 - if (!end_that_request_first(req, uptodate, req->hard_cur_sectors)) { 3679 - add_disk_randomness(req->rq_disk); 3680 - blkdev_dequeue_request(req); 3681 - end_that_request_last(req, uptodate); 3633 + if (!end_that_request_chunk(rq, uptodate, nr_bytes)) { 3634 + if (dequeue) 3635 + blkdev_dequeue_request(rq); 3636 + add_disk_randomness(rq->rq_disk); 3637 + end_that_request_last(rq, uptodate); 3682 3638 } 3683 3639 } 3684 3640 3641 + static unsigned int rq_byte_size(struct request *rq) 3642 + { 3643 + if (blk_fs_request(rq)) 3644 + return rq->hard_nr_sectors << 9; 3645 + 3646 + return rq->data_len; 3647 + } 3648 + 3649 + /** 3650 + * end_queued_request - end all I/O on a queued request 3651 + * @rq: the request being processed 3652 + * @uptodate: error value or 0/1 uptodate flag 3653 + * 3654 + * Description: 3655 + * Ends all I/O on a request, and removes it from the block layer queues. 3656 + * Not suitable for normal IO completion, unless the driver still has 3657 + * the request attached to the block layer. 
3658 + * 3659 + **/ 3660 + void end_queued_request(struct request *rq, int uptodate) 3661 + { 3662 + __end_request(rq, uptodate, rq_byte_size(rq), 1); 3663 + } 3664 + EXPORT_SYMBOL(end_queued_request); 3665 + 3666 + /** 3667 + * end_dequeued_request - end all I/O on a dequeued request 3668 + * @rq: the request being processed 3669 + * @uptodate: error value or 0/1 uptodate flag 3670 + * 3671 + * Description: 3672 + * Ends all I/O on a request. The request must already have been 3673 + * dequeued using blkdev_dequeue_request(), as is normally the case 3674 + * for most drivers. 3675 + * 3676 + **/ 3677 + void end_dequeued_request(struct request *rq, int uptodate) 3678 + { 3679 + __end_request(rq, uptodate, rq_byte_size(rq), 0); 3680 + } 3681 + EXPORT_SYMBOL(end_dequeued_request); 3682 + 3683 + 3684 + /** 3685 + * end_request - end I/O on the current segment of the request 3686 + * @rq: the request being processed 3687 + * @uptodate: error value or 0/1 uptodate flag 3688 + * 3689 + * Description: 3690 + * Ends I/O on the current segment of a request. If that is the only 3691 + * remaining segment, the request is also completed and freed. 3692 + * 3693 + * This is a remnant of how older block drivers handled IO completions. 3694 + * Modern drivers typically end IO on the full request in one go, unless 3695 + * they have a residual value to account for. For that case this function 3696 + * isn't really useful, unless the residual just happens to be the 3697 + * full current segment. In other words, don't use this function in new 3698 + * code. Either use end_request_completely(), or the 3699 + * end_that_request_chunk() (along with end_that_request_last()) for 3700 + * partial completions. 
3701 + * 3702 + **/ 3703 + void end_request(struct request *req, int uptodate) 3704 + { 3705 + __end_request(req, uptodate, req->hard_cur_sectors << 9, 1); 3706 + } 3685 3707 EXPORT_SYMBOL(end_request); 3686 3708 3687 3709 static void blk_rq_bio_prep(struct request_queue *q, struct request *rq, ··· 4063 3949 return queue_var_show(max_hw_sectors_kb, (page)); 4064 3950 } 4065 3951 3952 + static ssize_t queue_max_segments_show(struct request_queue *q, char *page) 3953 + { 3954 + return queue_var_show(q->max_phys_segments, page); 3955 + } 4066 3956 3957 + static ssize_t queue_max_segments_store(struct request_queue *q, 3958 + const char *page, size_t count) 3959 + { 3960 + unsigned long segments; 3961 + ssize_t ret = queue_var_store(&segments, page, count); 3962 + 3963 + spin_lock_irq(q->queue_lock); 3964 + q->max_phys_segments = segments; 3965 + spin_unlock_irq(q->queue_lock); 3966 + 3967 + return ret; 3968 + } 4067 3969 static struct queue_sysfs_entry queue_requests_entry = { 4068 3970 .attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR }, 4069 3971 .show = queue_requests_show, ··· 4103 3973 .show = queue_max_hw_sectors_show, 4104 3974 }; 4105 3975 3976 + static struct queue_sysfs_entry queue_max_segments_entry = { 3977 + .attr = {.name = "max_segments", .mode = S_IRUGO | S_IWUSR }, 3978 + .show = queue_max_segments_show, 3979 + .store = queue_max_segments_store, 3980 + }; 3981 + 4106 3982 static struct queue_sysfs_entry queue_iosched_entry = { 4107 3983 .attr = {.name = "scheduler", .mode = S_IRUGO | S_IWUSR }, 4108 3984 .show = elv_iosched_show, ··· 4120 3984 &queue_ra_entry.attr, 4121 3985 &queue_max_hw_sectors_entry.attr, 4122 3986 &queue_max_sectors_entry.attr, 3987 + &queue_max_segments_entry.attr, 4123 3988 &queue_iosched_entry.attr, 4124 3989 NULL, 4125 3990 };
+1 -1
crypto/digest.c
··· 77 77 78 78 if (!nbytes) 79 79 break; 80 - sg = sg_next(sg); 80 + sg = scatterwalk_sg_next(sg); 81 81 } 82 82 83 83 return 0;
+1 -1
crypto/scatterwalk.c
··· 62 62 walk->offset += PAGE_SIZE - 1; 63 63 walk->offset &= PAGE_MASK; 64 64 if (walk->offset >= walk->sg->offset + walk->sg->length) 65 - scatterwalk_start(walk, sg_next(walk->sg)); 65 + scatterwalk_start(walk, scatterwalk_sg_next(walk->sg)); 66 66 } 67 67 } 68 68
+1 -1
crypto/scatterwalk.h
··· 20 20 21 21 #include "internal.h" 22 22 23 - static inline struct scatterlist *sg_next(struct scatterlist *sg) 23 + static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg) 24 24 { 25 25 return (++sg)->length ? sg : (void *)sg->page; 26 26 }
+21 -14
drivers/ata/libata-core.c
··· 1410 1410 */ 1411 1411 unsigned ata_exec_internal_sg(struct ata_device *dev, 1412 1412 struct ata_taskfile *tf, const u8 *cdb, 1413 - int dma_dir, struct scatterlist *sg, 1413 + int dma_dir, struct scatterlist *sgl, 1414 1414 unsigned int n_elem, unsigned long timeout) 1415 1415 { 1416 1416 struct ata_link *link = dev->link; ··· 1472 1472 qc->dma_dir = dma_dir; 1473 1473 if (dma_dir != DMA_NONE) { 1474 1474 unsigned int i, buflen = 0; 1475 + struct scatterlist *sg; 1475 1476 1476 - for (i = 0; i < n_elem; i++) 1477 - buflen += sg[i].length; 1477 + for_each_sg(sgl, sg, n_elem, i) 1478 + buflen += sg->length; 1478 1479 1479 - ata_sg_init(qc, sg, n_elem); 1480 + ata_sg_init(qc, sgl, n_elem); 1480 1481 qc->nbytes = buflen; 1481 1482 } 1482 1483 ··· 4293 4292 if (qc->n_elem) 4294 4293 dma_unmap_sg(ap->dev, sg, qc->n_elem, dir); 4295 4294 /* restore last sg */ 4296 - sg[qc->orig_n_elem - 1].length += qc->pad_len; 4295 + sg_last(sg, qc->orig_n_elem)->length += qc->pad_len; 4297 4296 if (pad_buf) { 4298 4297 struct scatterlist *psg = &qc->pad_sgent; 4299 4298 void *addr = kmap_atomic(psg->page, KM_IRQ0); ··· 4548 4547 qc->orig_n_elem = 1; 4549 4548 qc->buf_virt = buf; 4550 4549 qc->nbytes = buflen; 4550 + qc->cursg = qc->__sg; 4551 4551 4552 4552 sg_init_one(&qc->sgent, buf, buflen); 4553 4553 } ··· 4574 4572 qc->__sg = sg; 4575 4573 qc->n_elem = n_elem; 4576 4574 qc->orig_n_elem = n_elem; 4575 + qc->cursg = qc->__sg; 4577 4576 } 4578 4577 4579 4578 /** ··· 4664 4661 { 4665 4662 struct ata_port *ap = qc->ap; 4666 4663 struct scatterlist *sg = qc->__sg; 4667 - struct scatterlist *lsg = &sg[qc->n_elem - 1]; 4664 + struct scatterlist *lsg = sg_last(qc->__sg, qc->n_elem); 4668 4665 int n_elem, pre_n_elem, dir, trim_sg = 0; 4669 4666 4670 4667 VPRINTK("ENTER, ata%u\n", ap->print_id); ··· 4828 4825 static void ata_pio_sector(struct ata_queued_cmd *qc) 4829 4826 { 4830 4827 int do_write = (qc->tf.flags & ATA_TFLAG_WRITE); 4831 - struct scatterlist *sg = qc->__sg; 4832 4828 
struct ata_port *ap = qc->ap; 4833 4829 struct page *page; 4834 4830 unsigned int offset; ··· 4836 4834 if (qc->curbytes == qc->nbytes - qc->sect_size) 4837 4835 ap->hsm_task_state = HSM_ST_LAST; 4838 4836 4839 - page = sg[qc->cursg].page; 4840 - offset = sg[qc->cursg].offset + qc->cursg_ofs; 4837 + page = qc->cursg->page; 4838 + offset = qc->cursg->offset + qc->cursg_ofs; 4841 4839 4842 4840 /* get the current page and offset */ 4843 4841 page = nth_page(page, (offset >> PAGE_SHIFT)); ··· 4865 4863 qc->curbytes += qc->sect_size; 4866 4864 qc->cursg_ofs += qc->sect_size; 4867 4865 4868 - if (qc->cursg_ofs == (&sg[qc->cursg])->length) { 4869 - qc->cursg++; 4866 + if (qc->cursg_ofs == qc->cursg->length) { 4867 + qc->cursg = sg_next(qc->cursg); 4870 4868 qc->cursg_ofs = 0; 4871 4869 } 4872 4870 } ··· 4952 4950 { 4953 4951 int do_write = (qc->tf.flags & ATA_TFLAG_WRITE); 4954 4952 struct scatterlist *sg = qc->__sg; 4953 + struct scatterlist *lsg = sg_last(qc->__sg, qc->n_elem); 4955 4954 struct ata_port *ap = qc->ap; 4956 4955 struct page *page; 4957 4956 unsigned char *buf; 4958 4957 unsigned int offset, count; 4958 + int no_more_sg = 0; 4959 4959 4960 4960 if (qc->curbytes + bytes >= qc->nbytes) 4961 4961 ap->hsm_task_state = HSM_ST_LAST; 4962 4962 4963 4963 next_sg: 4964 - if (unlikely(qc->cursg >= qc->n_elem)) { 4964 + if (unlikely(no_more_sg)) { 4965 4965 /* 4966 4966 * The end of qc->sg is reached and the device expects 4967 4967 * more data to transfer. In order not to overrun qc->sg ··· 4986 4982 return; 4987 4983 } 4988 4984 4989 - sg = &qc->__sg[qc->cursg]; 4985 + sg = qc->cursg; 4990 4986 4991 4987 page = sg->page; 4992 4988 offset = sg->offset + qc->cursg_ofs; ··· 5025 5021 qc->cursg_ofs += count; 5026 5022 5027 5023 if (qc->cursg_ofs == sg->length) { 5028 - qc->cursg++; 5024 + if (qc->cursg == lsg) 5025 + no_more_sg = 1; 5026 + 5027 + qc->cursg = sg_next(qc->cursg); 5029 5028 qc->cursg_ofs = 0; 5030 5029 } 5031 5030
-2
drivers/ata/libata-scsi.c
··· 801 801 802 802 ata_scsi_sdev_config(sdev); 803 803 804 - blk_queue_max_phys_segments(sdev->request_queue, LIBATA_MAX_PRD); 805 - 806 804 sdev->manage_start_stop = 1; 807 805 808 806 if (dev)
+1 -1
drivers/block/cciss.c
··· 1191 1191 { 1192 1192 while (bio) { 1193 1193 struct bio *xbh = bio->bi_next; 1194 - int nr_sectors = bio_sectors(bio); 1195 1194 1196 1195 bio->bi_next = NULL; 1197 1196 bio_endio(bio, status ? 0 : -EIO); ··· 2569 2570 (int)creq->nr_sectors); 2570 2571 #endif /* CCISS_DEBUG */ 2571 2572 2573 + memset(tmp_sg, 0, sizeof(tmp_sg)); 2572 2574 seg = blk_rq_map_sg(q, creq, tmp_sg); 2573 2575 2574 2576 /* get the DMA records for the setup */
+1 -2
drivers/block/cpqarray.c
··· 981 981 static inline void complete_buffers(struct bio *bio, int ok) 982 982 { 983 983 struct bio *xbh; 984 - while(bio) { 985 - int nr_sectors = bio_sectors(bio); 986 984 985 + while (bio) { 987 986 xbh = bio->bi_next; 988 987 bio->bi_next = NULL; 989 988
+7
drivers/block/pktcdvd.c
··· 1133 1133 * Schedule reads for missing parts of the packet. 1134 1134 */ 1135 1135 for (f = 0; f < pkt->frames; f++) { 1136 + struct bio_vec *vec; 1137 + 1136 1138 int p, offset; 1137 1139 if (written[f]) 1138 1140 continue; 1139 1141 bio = pkt->r_bios[f]; 1142 + vec = bio->bi_io_vec; 1140 1143 bio_init(bio); 1141 1144 bio->bi_max_vecs = 1; 1142 1145 bio->bi_sector = pkt->sector + f * (CD_FRAMESIZE >> 9); 1143 1146 bio->bi_bdev = pd->bdev; 1144 1147 bio->bi_end_io = pkt_end_io_read; 1145 1148 bio->bi_private = pkt; 1149 + bio->bi_io_vec = vec; 1150 + bio->bi_destructor = pkt_bio_destructor; 1146 1151 1147 1152 p = (f * CD_FRAMESIZE) / PAGE_SIZE; 1148 1153 offset = (f * CD_FRAMESIZE) % PAGE_SIZE; ··· 1444 1439 pkt->w_bio->bi_bdev = pd->bdev; 1445 1440 pkt->w_bio->bi_end_io = pkt_end_io_packet_write; 1446 1441 pkt->w_bio->bi_private = pkt; 1442 + pkt->w_bio->bi_io_vec = bvec; 1443 + pkt->w_bio->bi_destructor = pkt_bio_destructor; 1447 1444 for (f = 0; f < pkt->frames; f++) 1448 1445 if (!bio_add_page(pkt->w_bio, bvec[f].bv_page, CD_FRAMESIZE, bvec[f].bv_offset)) 1449 1446 BUG();
-21
drivers/block/ps3disk.c
··· 414 414 req->cmd_type = REQ_TYPE_FLUSH; 415 415 } 416 416 417 - static int ps3disk_issue_flush(struct request_queue *q, struct gendisk *gendisk, 418 - sector_t *sector) 419 - { 420 - struct ps3_storage_device *dev = q->queuedata; 421 - struct request *req; 422 - int res; 423 - 424 - dev_dbg(&dev->sbd.core, "%s:%u\n", __func__, __LINE__); 425 - 426 - req = blk_get_request(q, WRITE, __GFP_WAIT); 427 - ps3disk_prepare_flush(q, req); 428 - res = blk_execute_rq(q, gendisk, req, 0); 429 - if (res) 430 - dev_err(&dev->sbd.core, "%s:%u: flush request failed %d\n", 431 - __func__, __LINE__, res); 432 - blk_put_request(req); 433 - return res; 434 - } 435 - 436 - 437 417 static unsigned long ps3disk_mask; 438 418 439 419 static DEFINE_MUTEX(ps3disk_mask_mutex); ··· 486 506 blk_queue_dma_alignment(queue, dev->blk_size-1); 487 507 blk_queue_hardsect_size(queue, dev->blk_size); 488 508 489 - blk_queue_issue_flush_fn(queue, ps3disk_issue_flush); 490 509 blk_queue_ordered(queue, QUEUE_ORDERED_DRAIN_FLUSH, 491 510 ps3disk_prepare_flush); 492 511
+2 -1
drivers/ide/cris/ide-cris.c
··· 939 939 /* group sequential buffers into one large buffer */ 940 940 addr = page_to_phys(sg->page) + sg->offset; 941 941 size = sg_dma_len(sg); 942 - while (sg++, --i) { 942 + while (--i) { 943 + sg = sg_next(sg); 943 944 if ((addr + size) != page_to_phys(sg->page) + sg->offset) 944 945 break; 945 946 size += sg_dma_len(sg);
-29
drivers/ide/ide-disk.c
··· 716 716 rq->buffer = rq->cmd; 717 717 } 718 718 719 - static int idedisk_issue_flush(struct request_queue *q, struct gendisk *disk, 720 - sector_t *error_sector) 721 - { 722 - ide_drive_t *drive = q->queuedata; 723 - struct request *rq; 724 - int ret; 725 - 726 - if (!drive->wcache) 727 - return 0; 728 - 729 - rq = blk_get_request(q, WRITE, __GFP_WAIT); 730 - 731 - idedisk_prepare_flush(q, rq); 732 - 733 - ret = blk_execute_rq(q, disk, rq, 0); 734 - 735 - /* 736 - * if we failed and caller wants error offset, get it 737 - */ 738 - if (ret && error_sector) 739 - *error_sector = ide_get_error_location(drive, rq->cmd); 740 - 741 - blk_put_request(rq); 742 - return ret; 743 - } 744 - 745 719 /* 746 720 * This is tightly woven into the driver->do_special can not touch. 747 721 * DON'T do it again until a total personality rewrite is committed. ··· 755 781 struct hd_driveid *id = drive->id; 756 782 unsigned ordered = QUEUE_ORDERED_NONE; 757 783 prepare_flush_fn *prep_fn = NULL; 758 - issue_flush_fn *issue_fn = NULL; 759 784 760 785 if (drive->wcache) { 761 786 unsigned long long capacity; ··· 778 805 if (barrier) { 779 806 ordered = QUEUE_ORDERED_DRAIN_FLUSH; 780 807 prep_fn = idedisk_prepare_flush; 781 - issue_fn = idedisk_issue_flush; 782 808 } 783 809 } else 784 810 ordered = QUEUE_ORDERED_DRAIN; 785 811 786 812 blk_queue_ordered(drive->queue, ordered, prep_fn); 787 - blk_queue_issue_flush_fn(drive->queue, issue_fn); 788 813 } 789 814 790 815 static int write_cache(ide_drive_t *drive, int arg)
+1 -1
drivers/ide/ide-dma.c
··· 280 280 } 281 281 } 282 282 283 - sg++; 283 + sg = sg_next(sg); 284 284 i--; 285 285 } 286 286
+2 -36
drivers/ide/ide-io.c
··· 322 322 spin_unlock_irqrestore(&ide_lock, flags); 323 323 } 324 324 325 - /* 326 - * FIXME: probably move this somewhere else, name is bad too :) 327 - */ 328 - u64 ide_get_error_location(ide_drive_t *drive, char *args) 329 - { 330 - u32 high, low; 331 - u8 hcyl, lcyl, sect; 332 - u64 sector; 333 - 334 - high = 0; 335 - hcyl = args[5]; 336 - lcyl = args[4]; 337 - sect = args[3]; 338 - 339 - if (ide_id_has_flush_cache_ext(drive->id)) { 340 - low = (hcyl << 16) | (lcyl << 8) | sect; 341 - HWIF(drive)->OUTB(drive->ctl|0x80, IDE_CONTROL_REG); 342 - high = ide_read_24(drive); 343 - } else { 344 - u8 cur = HWIF(drive)->INB(IDE_SELECT_REG); 345 - if (cur & 0x40) { 346 - high = cur & 0xf; 347 - low = (hcyl << 16) | (lcyl << 8) | sect; 348 - } else { 349 - low = hcyl * drive->head * drive->sect; 350 - low += lcyl * drive->sect; 351 - low += sect - 1; 352 - } 353 - } 354 - 355 - sector = ((u64) high << 24) | low; 356 - return sector; 357 - } 358 - EXPORT_SYMBOL(ide_get_error_location); 359 - 360 325 /** 361 326 * ide_end_drive_cmd - end an explicit drive command 362 327 * @drive: command ··· 846 881 ide_hwif_t *hwif = drive->hwif; 847 882 848 883 hwif->nsect = hwif->nleft = rq->nr_sectors; 849 - hwif->cursg = hwif->cursg_ofs = 0; 884 + hwif->cursg_ofs = 0; 885 + hwif->cursg = NULL; 850 886 } 851 887 852 888 EXPORT_SYMBOL_GPL(ide_init_sg_cmd);
+1 -1
drivers/ide/ide-probe.c
··· 1349 1349 if (!hwif->sg_max_nents) 1350 1350 hwif->sg_max_nents = PRD_ENTRIES; 1351 1351 1352 - hwif->sg_table = kmalloc(sizeof(struct scatterlist)*hwif->sg_max_nents, 1352 + hwif->sg_table = kzalloc(sizeof(struct scatterlist)*hwif->sg_max_nents, 1353 1353 GFP_KERNEL); 1354 1354 if (!hwif->sg_table) { 1355 1355 printk(KERN_ERR "%s: unable to allocate SG table.\n", hwif->name);
+14 -4
drivers/ide/ide-taskfile.c
··· 45 45 #include <linux/hdreg.h> 46 46 #include <linux/ide.h> 47 47 #include <linux/bitops.h> 48 + #include <linux/scatterlist.h> 48 49 49 50 #include <asm/byteorder.h> 50 51 #include <asm/irq.h> ··· 264 263 { 265 264 ide_hwif_t *hwif = drive->hwif; 266 265 struct scatterlist *sg = hwif->sg_table; 266 + struct scatterlist *cursg = hwif->cursg; 267 267 struct page *page; 268 268 #ifdef CONFIG_HIGHMEM 269 269 unsigned long flags; ··· 272 270 unsigned int offset; 273 271 u8 *buf; 274 272 275 - page = sg[hwif->cursg].page; 276 - offset = sg[hwif->cursg].offset + hwif->cursg_ofs * SECTOR_SIZE; 273 + cursg = hwif->cursg; 274 + if (!cursg) { 275 + cursg = sg; 276 + hwif->cursg = sg; 277 + } 278 + 279 + page = cursg->page; 280 + offset = cursg->offset + hwif->cursg_ofs * SECTOR_SIZE; 277 281 278 282 /* get the current page and offset */ 279 283 page = nth_page(page, (offset >> PAGE_SHIFT)); ··· 293 285 hwif->nleft--; 294 286 hwif->cursg_ofs++; 295 287 296 - if ((hwif->cursg_ofs * SECTOR_SIZE) == sg[hwif->cursg].length) { 297 - hwif->cursg++; 288 + if ((hwif->cursg_ofs * SECTOR_SIZE) == cursg->length) { 289 + hwif->cursg = sg_next(hwif->cursg); 298 290 hwif->cursg_ofs = 0; 299 291 } 300 292 ··· 375 367 376 368 static void task_end_request(ide_drive_t *drive, struct request *rq, u8 stat) 377 369 { 370 + HWIF(drive)->cursg = NULL; 371 + 378 372 if (rq->cmd_type == REQ_TYPE_ATA_TASKFILE) { 379 373 ide_task_t *task = rq->special; 380 374
+1 -1
drivers/ide/mips/au1xxx-ide.c
··· 296 296 cur_addr += tc; 297 297 cur_len -= tc; 298 298 } 299 - sg++; 299 + sg = sg_next(sg); 300 300 i--; 301 301 } 302 302
+2 -1
drivers/ide/pci/sgiioc4.c
··· 29 29 #include <linux/mm.h> 30 30 #include <linux/ioport.h> 31 31 #include <linux/blkdev.h> 32 + #include <linux/scatterlist.h> 32 33 #include <linux/ioc4.h> 33 34 #include <asm/io.h> 34 35 ··· 538 537 } 539 538 } 540 539 541 - sg++; 540 + sg = sg_next(sg); 542 541 i--; 543 542 } 544 543
+1 -1
drivers/ide/ppc/pmac.c
··· 1539 1539 cur_len -= tc; 1540 1540 ++table; 1541 1541 } 1542 - sg++; 1542 + sg = sg_next(sg); 1543 1543 i--; 1544 1544 } 1545 1545
+6 -4
drivers/infiniband/hw/ipath/ipath_dma.c
··· 30 30 * SOFTWARE. 31 31 */ 32 32 33 + #include <linux/scatterlist.h> 33 34 #include <rdma/ib_verbs.h> 34 35 35 36 #include "ipath_verbs.h" ··· 97 96 BUG_ON(!valid_dma_direction(direction)); 98 97 } 99 98 100 - static int ipath_map_sg(struct ib_device *dev, struct scatterlist *sg, int nents, 101 - enum dma_data_direction direction) 99 + static int ipath_map_sg(struct ib_device *dev, struct scatterlist *sgl, 100 + int nents, enum dma_data_direction direction) 102 101 { 102 + struct scatterlist *sg; 103 103 u64 addr; 104 104 int i; 105 105 int ret = nents; 106 106 107 107 BUG_ON(!valid_dma_direction(direction)); 108 108 109 - for (i = 0; i < nents; i++) { 110 - addr = (u64) page_address(sg[i].page); 109 + for_each_sg(sgl, sg, nents, i) { 110 + addr = (u64) page_address(sg->page); 111 111 /* TODO: handle highmem pages */ 112 112 if (!addr) { 113 113 ret = 0;
+41 -34
drivers/infiniband/ulp/iser/iser_memory.c
··· 124 124 125 125 if (cmd_dir == ISER_DIR_OUT) { 126 126 /* copy the unaligned sg the buffer which is used for RDMA */ 127 - struct scatterlist *sg = (struct scatterlist *)data->buf; 127 + struct scatterlist *sgl = (struct scatterlist *)data->buf; 128 + struct scatterlist *sg; 128 129 int i; 129 130 char *p, *from; 130 131 131 - for (p = mem, i = 0; i < data->size; i++) { 132 - from = kmap_atomic(sg[i].page, KM_USER0); 132 + p = mem; 133 + for_each_sg(sgl, sg, data->size, i) { 134 + from = kmap_atomic(sg->page, KM_USER0); 133 135 memcpy(p, 134 - from + sg[i].offset, 135 - sg[i].length); 136 + from + sg->offset, 137 + sg->length); 136 138 kunmap_atomic(from, KM_USER0); 137 - p += sg[i].length; 139 + p += sg->length; 138 140 } 139 141 } 140 142 ··· 178 176 179 177 if (cmd_dir == ISER_DIR_IN) { 180 178 char *mem; 181 - struct scatterlist *sg; 179 + struct scatterlist *sgl, *sg; 182 180 unsigned char *p, *to; 183 181 unsigned int sg_size; 184 182 int i; ··· 186 184 /* copy back read RDMA to unaligned sg */ 187 185 mem = mem_copy->copy_buf; 188 186 189 - sg = (struct scatterlist *)iser_ctask->data[ISER_DIR_IN].buf; 187 + sgl = (struct scatterlist *)iser_ctask->data[ISER_DIR_IN].buf; 190 188 sg_size = iser_ctask->data[ISER_DIR_IN].size; 191 189 192 - for (p = mem, i = 0; i < sg_size; i++){ 193 - to = kmap_atomic(sg[i].page, KM_SOFTIRQ0); 194 - memcpy(to + sg[i].offset, 190 + p = mem; 191 + for_each_sg(sgl, sg, sg_size, i) { 192 + to = kmap_atomic(sg->page, KM_SOFTIRQ0); 193 + memcpy(to + sg->offset, 195 194 p, 196 - sg[i].length); 195 + sg->length); 197 196 kunmap_atomic(to, KM_SOFTIRQ0); 198 - p += sg[i].length; 197 + p += sg->length; 199 198 } 200 199 } 201 200 ··· 227 224 struct iser_page_vec *page_vec, 228 225 struct ib_device *ibdev) 229 226 { 230 - struct scatterlist *sg = (struct scatterlist *)data->buf; 227 + struct scatterlist *sgl = (struct scatterlist *)data->buf; 228 + struct scatterlist *sg; 231 229 u64 first_addr, last_addr, page; 232 230 int end_aligned; 
233 231 unsigned int cur_page = 0; ··· 236 232 int i; 237 233 238 234 /* compute the offset of first element */ 239 - page_vec->offset = (u64) sg[0].offset & ~MASK_4K; 235 + page_vec->offset = (u64) sgl[0].offset & ~MASK_4K; 240 236 241 - for (i = 0; i < data->dma_nents; i++) { 242 - unsigned int dma_len = ib_sg_dma_len(ibdev, &sg[i]); 237 + for_each_sg(sgl, sg, data->dma_nents, i) { 238 + unsigned int dma_len = ib_sg_dma_len(ibdev, sg); 243 239 244 240 total_sz += dma_len; 245 241 246 - first_addr = ib_sg_dma_address(ibdev, &sg[i]); 242 + first_addr = ib_sg_dma_address(ibdev, sg); 247 243 last_addr = first_addr + dma_len; 248 244 249 245 end_aligned = !(last_addr & ~MASK_4K); 250 246 251 247 /* continue to collect page fragments till aligned or SG ends */ 252 248 while (!end_aligned && (i + 1 < data->dma_nents)) { 249 + sg = sg_next(sg); 253 250 i++; 254 - dma_len = ib_sg_dma_len(ibdev, &sg[i]); 251 + dma_len = ib_sg_dma_len(ibdev, sg); 255 252 total_sz += dma_len; 256 - last_addr = ib_sg_dma_address(ibdev, &sg[i]) + dma_len; 253 + last_addr = ib_sg_dma_address(ibdev, sg) + dma_len; 257 254 end_aligned = !(last_addr & ~MASK_4K); 258 255 } 259 256 ··· 289 284 static unsigned int iser_data_buf_aligned_len(struct iser_data_buf *data, 290 285 struct ib_device *ibdev) 291 286 { 292 - struct scatterlist *sg; 287 + struct scatterlist *sgl, *sg; 293 288 u64 end_addr, next_addr; 294 289 int i, cnt; 295 290 unsigned int ret_len = 0; 296 291 297 - sg = (struct scatterlist *)data->buf; 292 + sgl = (struct scatterlist *)data->buf; 298 293 299 - for (cnt = 0, i = 0; i < data->dma_nents; i++, cnt++) { 294 + cnt = 0; 295 + for_each_sg(sgl, sg, data->dma_nents, i) { 300 296 /* iser_dbg("Checking sg iobuf [%d]: phys=0x%08lX " 301 297 "offset: %ld sz: %ld\n", i, 302 - (unsigned long)page_to_phys(sg[i].page), 303 - (unsigned long)sg[i].offset, 304 - (unsigned long)sg[i].length); */ 305 - end_addr = ib_sg_dma_address(ibdev, &sg[i]) + 306 - ib_sg_dma_len(ibdev, &sg[i]); 298 + (unsigned long)page_to_phys(sg->page), 299 + (unsigned long)sg->offset, 300 + (unsigned long)sg->length); */ 301 + end_addr = ib_sg_dma_address(ibdev, sg) + 302 + ib_sg_dma_len(ibdev, sg); 307 303 /* iser_dbg("Checking sg iobuf end address " 308 304 "0x%08lX\n", end_addr); */ 309 305 if (i + 1 < data->dma_nents) { 310 - next_addr = ib_sg_dma_address(ibdev, &sg[i+1]); 306 + next_addr = ib_sg_dma_address(ibdev, sg_next(sg)); 311 307 /* are i, i+1 fragments of the same page? */ 312 308 if (end_addr == next_addr) 313 309 continue;
··· 328 322 static void iser_data_buf_dump(struct iser_data_buf *data, 329 323 struct ib_device *ibdev) 330 324 { 331 - struct scatterlist *sg = (struct scatterlist *)data->buf; 325 + struct scatterlist *sgl = (struct scatterlist *)data->buf; 326 + struct scatterlist *sg; 332 327 int i; 333 328 334 - for (i = 0; i < data->dma_nents; i++) 329 + for_each_sg(sgl, sg, data->dma_nents, i) 335 330 iser_err("sg[%d] dma_addr:0x%lX page:0x%p " 336 331 "off:0x%x sz:0x%x dma_len:0x%x\n", 337 - i, (unsigned long)ib_sg_dma_address(ibdev, &sg[i]), 338 - sg[i].page, sg[i].offset, 339 - sg[i].length, ib_sg_dma_len(ibdev, &sg[i])); 332 + i, (unsigned long)ib_sg_dma_address(ibdev, sg), 333 + sg->page, sg->offset, 334 + sg->length, ib_sg_dma_len(ibdev, sg)); 340 335 } 341 336 342 337 static void iser_dump_page_vec(struct iser_page_vec *page_vec)
+5 -26
drivers/md/dm-crypt.c
··· 441 441 return clone; 442 442 } 443 443 444 - static void crypt_free_buffer_pages(struct crypt_config *cc, 445 - struct bio *clone, unsigned int bytes) 444 + static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone) 446 445 { 447 - unsigned int i, start, end; 446 + unsigned int i; 448 447 struct bio_vec *bv; 449 448 450 - /* 451 - * This is ugly, but Jens Axboe thinks that using bi_idx in the 452 - * endio function is too dangerous at the moment, so I calculate the 453 - * correct position using bi_vcnt and bi_size. 454 - * The bv_offset and bv_len fields might already be modified but we 455 - * know that we always allocated whole pages. 456 - * A fix to the bi_idx issue in the kernel is in the works, so 457 - * we will hopefully be able to revert to the cleaner solution soon. 458 - */ 459 - i = clone->bi_vcnt - 1; 460 - bv = bio_iovec_idx(clone, i); 461 - end = (i << PAGE_SHIFT) + (bv->bv_offset + bv->bv_len) - clone->bi_size; 462 - start = end - bytes; 463 - 464 - start >>= PAGE_SHIFT; 465 - if (!clone->bi_size) 466 - end = clone->bi_vcnt; 467 - else 468 - end >>= PAGE_SHIFT; 469 - 470 - for (i = start; i < end; i++) { 449 + for (i = 0; i < clone->bi_vcnt; i++) { 471 450 bv = bio_iovec_idx(clone, i); 472 451 BUG_ON(!bv->bv_page); 473 452 mempool_free(bv->bv_page, cc->page_pool); ··· 498 519 * free the processed pages 499 520 */ 500 521 if (!read_io) { 501 - crypt_free_buffer_pages(cc, clone, clone->bi_size); 522 + crypt_free_buffer_pages(cc, clone); 502 523 goto out; 503 524 } 504 525 ··· 587 608 ctx.idx_out = 0; 588 609 589 610 if (unlikely(crypt_convert(cc, &ctx) < 0)) { 590 - crypt_free_buffer_pages(cc, clone, clone->bi_size); 611 + crypt_free_buffer_pages(cc, clone); 591 612 bio_put(clone); 592 613 dec_pending(io, -EIO); 593 614 return;
-28
drivers/md/dm-table.c
··· 999 999 } 1000 1000 } 1001 1001 1002 - int dm_table_flush_all(struct dm_table *t) 1003 - { 1004 - struct list_head *d, *devices = dm_table_get_devices(t); 1005 - int ret = 0; 1006 - unsigned i; 1007 - 1008 - for (i = 0; i < t->num_targets; i++) 1009 - if (t->targets[i].type->flush) 1010 - t->targets[i].type->flush(&t->targets[i]); 1011 - 1012 - for (d = devices->next; d != devices; d = d->next) { 1013 - struct dm_dev *dd = list_entry(d, struct dm_dev, list); 1014 - struct request_queue *q = bdev_get_queue(dd->bdev); 1015 - int err; 1016 - 1017 - if (!q->issue_flush_fn) 1018 - err = -EOPNOTSUPP; 1019 - else 1020 - err = q->issue_flush_fn(q, dd->bdev->bd_disk, NULL); 1021 - 1022 - if (!ret) 1023 - ret = err; 1024 - } 1025 - 1026 - return ret; 1027 - } 1028 - 1029 1002 struct mapped_device *dm_table_get_md(struct dm_table *t) 1030 1003 { 1031 1004 dm_get(t->md); ··· 1016 1043 EXPORT_SYMBOL(dm_table_put); 1017 1044 EXPORT_SYMBOL(dm_table_get); 1018 1045 EXPORT_SYMBOL(dm_table_unplug_all); 1019 - EXPORT_SYMBOL(dm_table_flush_all);
-16
drivers/md/dm.c
··· 840 840 return 0; 841 841 } 842 842 843 - static int dm_flush_all(struct request_queue *q, struct gendisk *disk, 844 - sector_t *error_sector) 845 - { 846 - struct mapped_device *md = q->queuedata; 847 - struct dm_table *map = dm_get_table(md); 848 - int ret = -ENXIO; 849 - 850 - if (map) { 851 - ret = dm_table_flush_all(map); 852 - dm_table_put(map); 853 - } 854 - 855 - return ret; 856 - } 857 - 858 843 static void dm_unplug_all(struct request_queue *q) 859 844 { 860 845 struct mapped_device *md = q->queuedata; ··· 988 1003 blk_queue_make_request(md->queue, dm_request); 989 1004 blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY); 990 1005 md->queue->unplug_fn = dm_unplug_all; 991 - md->queue->issue_flush_fn = dm_flush_all; 992 1006 993 1007 md->io_pool = mempool_create_slab_pool(MIN_IOS, _io_cache); 994 1008 if (!md->io_pool)
-1
drivers/md/dm.h
··· 111 111 int dm_table_resume_targets(struct dm_table *t); 112 112 int dm_table_any_congested(struct dm_table *t, int bdi_bits); 113 113 void dm_table_unplug_all(struct dm_table *t); 114 - int dm_table_flush_all(struct dm_table *t); 115 114 116 115 /*----------------------------------------------------------------- 117 116 * A registry of target types.
-20
drivers/md/linear.c
··· 92 92 } 93 93 } 94 94 95 - static int linear_issue_flush(struct request_queue *q, struct gendisk *disk, 96 - sector_t *error_sector) 97 - { 98 - mddev_t *mddev = q->queuedata; 99 - linear_conf_t *conf = mddev_to_conf(mddev); 100 - int i, ret = 0; 101 - 102 - for (i=0; i < mddev->raid_disks && ret == 0; i++) { 103 - struct block_device *bdev = conf->disks[i].rdev->bdev; 104 - struct request_queue *r_queue = bdev_get_queue(bdev); 105 - 106 - if (!r_queue->issue_flush_fn) 107 - ret = -EOPNOTSUPP; 108 - else 109 - ret = r_queue->issue_flush_fn(r_queue, bdev->bd_disk, error_sector); 110 - } 111 - return ret; 112 - } 113 - 114 95 static int linear_congested(void *data, int bits) 115 96 { 116 97 mddev_t *mddev = data; ··· 260 279 261 280 blk_queue_merge_bvec(mddev->queue, linear_mergeable_bvec); 262 281 mddev->queue->unplug_fn = linear_unplug; 263 - mddev->queue->issue_flush_fn = linear_issue_flush; 264 282 mddev->queue->backing_dev_info.congested_fn = linear_congested; 265 283 mddev->queue->backing_dev_info.congested_data = mddev; 266 284 return 0;
-1
drivers/md/md.c
··· 3463 3463 mddev->pers->stop(mddev); 3464 3464 mddev->queue->merge_bvec_fn = NULL; 3465 3465 mddev->queue->unplug_fn = NULL; 3466 - mddev->queue->issue_flush_fn = NULL; 3467 3466 mddev->queue->backing_dev_info.congested_fn = NULL; 3468 3467 if (mddev->pers->sync_request) 3469 3468 sysfs_remove_group(&mddev->kobj, &md_redundancy_group);
-30
drivers/md/multipath.c
··· 194 194 seq_printf (seq, "]"); 195 195 } 196 196 197 - static int multipath_issue_flush(struct request_queue *q, struct gendisk *disk, 198 - sector_t *error_sector) 199 - { 200 - mddev_t *mddev = q->queuedata; 201 - multipath_conf_t *conf = mddev_to_conf(mddev); 202 - int i, ret = 0; 203 - 204 - rcu_read_lock(); 205 - for (i=0; i<mddev->raid_disks && ret == 0; i++) { 206 - mdk_rdev_t *rdev = rcu_dereference(conf->multipaths[i].rdev); 207 - if (rdev && !test_bit(Faulty, &rdev->flags)) { 208 - struct block_device *bdev = rdev->bdev; 209 - struct request_queue *r_queue = bdev_get_queue(bdev); 210 - 211 - if (!r_queue->issue_flush_fn) 212 - ret = -EOPNOTSUPP; 213 - else { 214 - atomic_inc(&rdev->nr_pending); 215 - rcu_read_unlock(); 216 - ret = r_queue->issue_flush_fn(r_queue, bdev->bd_disk, 217 - error_sector); 218 - rdev_dec_pending(rdev, mddev); 219 - rcu_read_lock(); 220 - } 221 - } 222 - } 223 - rcu_read_unlock(); 224 - return ret; 225 - } 226 197 static int multipath_congested(void *data, int bits) 227 198 { 228 199 mddev_t *mddev = data; ··· 498 527 mddev->array_size = mddev->size; 499 528 500 529 mddev->queue->unplug_fn = multipath_unplug; 501 - mddev->queue->issue_flush_fn = multipath_issue_flush; 502 530 mddev->queue->backing_dev_info.congested_fn = multipath_congested; 503 531 mddev->queue->backing_dev_info.congested_data = mddev; 504 532
-21
drivers/md/raid0.c
··· 40 40 } 41 41 } 42 42 43 - static int raid0_issue_flush(struct request_queue *q, struct gendisk *disk, 44 - sector_t *error_sector) 45 - { 46 - mddev_t *mddev = q->queuedata; 47 - raid0_conf_t *conf = mddev_to_conf(mddev); 48 - mdk_rdev_t **devlist = conf->strip_zone[0].dev; 49 - int i, ret = 0; 50 - 51 - for (i=0; i<mddev->raid_disks && ret == 0; i++) { 52 - struct block_device *bdev = devlist[i]->bdev; 53 - struct request_queue *r_queue = bdev_get_queue(bdev); 54 - 55 - if (!r_queue->issue_flush_fn) 56 - ret = -EOPNOTSUPP; 57 - else 58 - ret = r_queue->issue_flush_fn(r_queue, bdev->bd_disk, error_sector); 59 - } 60 - return ret; 61 - } 62 - 63 43 static int raid0_congested(void *data, int bits) 64 44 { 65 45 mddev_t *mddev = data; ··· 230 250 231 251 mddev->queue->unplug_fn = raid0_unplug; 232 252 233 - mddev->queue->issue_flush_fn = raid0_issue_flush; 234 253 mddev->queue->backing_dev_info.congested_fn = raid0_congested; 235 254 mddev->queue->backing_dev_info.congested_data = mddev; 236 255
-31
drivers/md/raid1.c
··· 567 567 md_wakeup_thread(mddev->thread); 568 568 } 569 569 570 - static int raid1_issue_flush(struct request_queue *q, struct gendisk *disk, 571 - sector_t *error_sector) 572 - { 573 - mddev_t *mddev = q->queuedata; 574 - conf_t *conf = mddev_to_conf(mddev); 575 - int i, ret = 0; 576 - 577 - rcu_read_lock(); 578 - for (i=0; i<mddev->raid_disks && ret == 0; i++) { 579 - mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev); 580 - if (rdev && !test_bit(Faulty, &rdev->flags)) { 581 - struct block_device *bdev = rdev->bdev; 582 - struct request_queue *r_queue = bdev_get_queue(bdev); 583 - 584 - if (!r_queue->issue_flush_fn) 585 - ret = -EOPNOTSUPP; 586 - else { 587 - atomic_inc(&rdev->nr_pending); 588 - rcu_read_unlock(); 589 - ret = r_queue->issue_flush_fn(r_queue, bdev->bd_disk, 590 - error_sector); 591 - rdev_dec_pending(rdev, mddev); 592 - rcu_read_lock(); 593 - } 594 - } 595 - } 596 - rcu_read_unlock(); 597 - return ret; 598 - } 599 - 600 570 static int raid1_congested(void *data, int bits) 601 571 { 602 572 mddev_t *mddev = data; ··· 1967 1997 mddev->array_size = mddev->size; 1968 1998 1969 1999 mddev->queue->unplug_fn = raid1_unplug; 1970 - mddev->queue->issue_flush_fn = raid1_issue_flush; 1971 2000 mddev->queue->backing_dev_info.congested_fn = raid1_congested; 1972 2001 mddev->queue->backing_dev_info.congested_data = mddev; 1973 2002
-31
drivers/md/raid10.c
··· 611 611 md_wakeup_thread(mddev->thread); 612 612 } 613 613 614 - static int raid10_issue_flush(struct request_queue *q, struct gendisk *disk, 615 - sector_t *error_sector) 616 - { 617 - mddev_t *mddev = q->queuedata; 618 - conf_t *conf = mddev_to_conf(mddev); 619 - int i, ret = 0; 620 - 621 - rcu_read_lock(); 622 - for (i=0; i<mddev->raid_disks && ret == 0; i++) { 623 - mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev); 624 - if (rdev && !test_bit(Faulty, &rdev->flags)) { 625 - struct block_device *bdev = rdev->bdev; 626 - struct request_queue *r_queue = bdev_get_queue(bdev); 627 - 628 - if (!r_queue->issue_flush_fn) 629 - ret = -EOPNOTSUPP; 630 - else { 631 - atomic_inc(&rdev->nr_pending); 632 - rcu_read_unlock(); 633 - ret = r_queue->issue_flush_fn(r_queue, bdev->bd_disk, 634 - error_sector); 635 - rdev_dec_pending(rdev, mddev); 636 - rcu_read_lock(); 637 - } 638 - } 639 - } 640 - rcu_read_unlock(); 641 - return ret; 642 - } 643 - 644 614 static int raid10_congested(void *data, int bits) 645 615 { 646 616 mddev_t *mddev = data; ··· 2088 2118 mddev->resync_max_sectors = size << conf->chunk_shift; 2089 2119 2090 2120 mddev->queue->unplug_fn = raid10_unplug; 2091 - mddev->queue->issue_flush_fn = raid10_issue_flush; 2092 2121 mddev->queue->backing_dev_info.congested_fn = raid10_congested; 2093 2122 mddev->queue->backing_dev_info.congested_data = mddev; 2094 2123
-31
drivers/md/raid5.c
··· 3204 3204 unplug_slaves(mddev); 3205 3205 } 3206 3206 3207 - static int raid5_issue_flush(struct request_queue *q, struct gendisk *disk, 3208 - sector_t *error_sector) 3209 - { 3210 - mddev_t *mddev = q->queuedata; 3211 - raid5_conf_t *conf = mddev_to_conf(mddev); 3212 - int i, ret = 0; 3213 - 3214 - rcu_read_lock(); 3215 - for (i=0; i<mddev->raid_disks && ret == 0; i++) { 3216 - mdk_rdev_t *rdev = rcu_dereference(conf->disks[i].rdev); 3217 - if (rdev && !test_bit(Faulty, &rdev->flags)) { 3218 - struct block_device *bdev = rdev->bdev; 3219 - struct request_queue *r_queue = bdev_get_queue(bdev); 3220 - 3221 - if (!r_queue->issue_flush_fn) 3222 - ret = -EOPNOTSUPP; 3223 - else { 3224 - atomic_inc(&rdev->nr_pending); 3225 - rcu_read_unlock(); 3226 - ret = r_queue->issue_flush_fn(r_queue, bdev->bd_disk, 3227 - error_sector); 3228 - rdev_dec_pending(rdev, mddev); 3229 - rcu_read_lock(); 3230 - } 3231 - } 3232 - } 3233 - rcu_read_unlock(); 3234 - return ret; 3235 - } 3236 - 3237 3207 static int raid5_congested(void *data, int bits) 3238 3208 { 3239 3209 mddev_t *mddev = data; ··· 4233 4263 mdname(mddev)); 4234 4264 4235 4265 mddev->queue->unplug_fn = raid5_unplug_device; 4236 - mddev->queue->issue_flush_fn = raid5_issue_flush; 4237 4266 mddev->queue->backing_dev_info.congested_data = mddev; 4238 4267 mddev->queue->backing_dev_info.congested_fn = raid5_congested; 4239 4268
+3 -3
drivers/message/fusion/mptscsih.c
··· 293 293 for (ii=0; ii < (numSgeThisFrame-1); ii++) { 294 294 thisxfer = sg_dma_len(sg); 295 295 if (thisxfer == 0) { 296 - sg ++; /* Get next SG element from the OS */ 296 + sg = sg_next(sg); /* Get next SG element from the OS */ 297 297 sg_done++; 298 298 continue; 299 299 } ··· 301 301 v2 = sg_dma_address(sg); 302 302 mptscsih_add_sge(psge, sgflags | thisxfer, v2); 303 303 304 - sg++; /* Get next SG element from the OS */ 304 + sg = sg_next(sg); /* Get next SG element from the OS */ 305 305 psge += (sizeof(u32) + sizeof(dma_addr_t)); 306 306 sgeOffset += (sizeof(u32) + sizeof(dma_addr_t)); 307 307 sg_done++; ··· 322 322 v2 = sg_dma_address(sg); 323 323 mptscsih_add_sge(psge, sgflags | thisxfer, v2); 324 324 /* 325 - sg++; 325 + sg = sg_next(sg); 326 326 psge += (sizeof(u32) + sizeof(dma_addr_t)); 327 327 */ 328 328 sgeOffset += (sizeof(u32) + sizeof(dma_addr_t));
-24
drivers/message/i2o/i2o_block.c
··· 149 149 }; 150 150 151 151 /** 152 - * i2o_block_issue_flush - device-flush interface for block-layer 153 - * @queue: the request queue of the device which should be flushed 154 - * @disk: gendisk 155 - * @error_sector: error offset 156 - * 157 - * Helper function to provide flush functionality to block-layer. 158 - * 159 - * Returns 0 on success or negative error code on failure. 160 - */ 161 - 162 - static int i2o_block_issue_flush(struct request_queue * queue, struct gendisk *disk, 163 - sector_t * error_sector) 164 - { 165 - struct i2o_block_device *i2o_blk_dev = queue->queuedata; 166 - int rc = -ENODEV; 167 - 168 - if (likely(i2o_blk_dev)) 169 - rc = i2o_block_device_flush(i2o_blk_dev->i2o_dev); 170 - 171 - return rc; 172 - } 173 - 174 - /** 175 152 * i2o_block_device_mount - Mount (load) the media of device dev 176 153 * @dev: I2O device which should receive the mount request 177 154 * @media_id: Media Identifier ··· 986 1009 } 987 1010 988 1011 blk_queue_prep_rq(queue, i2o_block_prep_req_fn); 989 - blk_queue_issue_flush_fn(queue, i2o_block_issue_flush); 990 1012 991 1013 gd->major = I2O_MAJOR; 992 1014 gd->queue = queue;
+3 -3
drivers/mmc/card/queue.c
··· 153 153 blk_queue_max_hw_segments(mq->queue, bouncesz / 512); 154 154 blk_queue_max_segment_size(mq->queue, bouncesz); 155 155 156 - mq->sg = kmalloc(sizeof(struct scatterlist), 156 + mq->sg = kzalloc(sizeof(struct scatterlist), 157 157 GFP_KERNEL); 158 158 if (!mq->sg) { 159 159 ret = -ENOMEM; 160 160 goto cleanup_queue; 161 161 } 162 162 163 - mq->bounce_sg = kmalloc(sizeof(struct scatterlist) * 163 + mq->bounce_sg = kzalloc(sizeof(struct scatterlist) * 164 164 bouncesz / 512, GFP_KERNEL); 165 165 if (!mq->bounce_sg) { 166 166 ret = -ENOMEM; ··· 177 177 blk_queue_max_hw_segments(mq->queue, host->max_hw_segs); 178 178 blk_queue_max_segment_size(mq->queue, host->max_seg_size); 179 179 180 - mq->sg = kmalloc(sizeof(struct scatterlist) * 180 + mq->sg = kzalloc(sizeof(struct scatterlist) * 181 181 host->max_phys_segs, GFP_KERNEL); 182 182 if (!mq->sg) { 183 183 ret = -ENOMEM;
+1
drivers/s390/scsi/zfcp_def.h
··· 34 34 #include <linux/slab.h> 35 35 #include <linux/mempool.h> 36 36 #include <linux/syscalls.h> 37 + #include <linux/scatterlist.h> 37 38 #include <linux/ioctl.h> 38 39 #include <scsi/scsi.h> 39 40 #include <scsi/scsi_tcq.h>
+2 -4
drivers/s390/scsi/zfcp_qdio.c
··· 590 590 */ 591 591 int 592 592 zfcp_qdio_sbals_from_sg(struct zfcp_fsf_req *fsf_req, unsigned long sbtype, 593 - struct scatterlist *sg, int sg_count, int max_sbals) 593 + struct scatterlist *sgl, int sg_count, int max_sbals) 594 594 { 595 595 int sg_index; 596 596 struct scatterlist *sg_segment; ··· 606 606 sbale->flags |= sbtype; 607 607 608 608 /* process all segements of scatter-gather list */ 609 - for (sg_index = 0, sg_segment = sg, bytes = 0; 610 - sg_index < sg_count; 611 - sg_index++, sg_segment++) { 609 + for_each_sg(sgl, sg_segment, sg_count, sg_index) { 612 610 retval = zfcp_qdio_sbals_from_segment( 613 611 fsf_req, 614 612 sbtype,
+1
drivers/scsi/3w-9xxx.c
··· 1990 1990 .max_sectors = TW_MAX_SECTORS, 1991 1991 .cmd_per_lun = TW_MAX_CMDS_PER_LUN, 1992 1992 .use_clustering = ENABLE_CLUSTERING, 1993 + .use_sg_chaining = ENABLE_SG_CHAINING, 1993 1994 .shost_attrs = twa_host_attrs, 1994 1995 .emulated = 1 1995 1996 };
+1
drivers/scsi/3w-xxxx.c
··· 2261 2261 .max_sectors = TW_MAX_SECTORS, 2262 2262 .cmd_per_lun = TW_MAX_CMDS_PER_LUN, 2263 2263 .use_clustering = ENABLE_CLUSTERING, 2264 + .use_sg_chaining = ENABLE_SG_CHAINING, 2264 2265 .shost_attrs = tw_host_attrs, 2265 2266 .emulated = 1 2266 2267 };
+1
drivers/scsi/BusLogic.c
··· 3575 3575 .unchecked_isa_dma = 1, 3576 3576 .max_sectors = 128, 3577 3577 .use_clustering = ENABLE_CLUSTERING, 3578 + .use_sg_chaining = ENABLE_SG_CHAINING, 3578 3579 }; 3579 3580 3580 3581 /*
+2 -1
drivers/scsi/NCR53c406a.c
··· 1066 1066 .sg_tablesize = 32 /*SG_ALL*/ /*SG_NONE*/, 1067 1067 .cmd_per_lun = 1 /* commands per lun */, 1068 1068 .unchecked_isa_dma = 1 /* unchecked_isa_dma */, 1069 - .use_clustering = ENABLE_CLUSTERING 1069 + .use_clustering = ENABLE_CLUSTERING, 1070 + .use_sg_chaining = ENABLE_SG_CHAINING, 1070 1071 }; 1071 1072 1072 1073 #include "scsi_module.c"
+1
drivers/scsi/a100u2w.c
··· 1071 1071 .sg_tablesize = SG_ALL, 1072 1072 .cmd_per_lun = 1, 1073 1073 .use_clustering = ENABLE_CLUSTERING, 1074 + .use_sg_chaining = ENABLE_SG_CHAINING, 1074 1075 }; 1075 1076 1076 1077 static int __devinit inia100_probe_one(struct pci_dev *pdev,
+1
drivers/scsi/aacraid/linit.c
··· 944 944 .cmd_per_lun = AAC_NUM_IO_FIB, 945 945 #endif 946 946 .use_clustering = ENABLE_CLUSTERING, 947 + .use_sg_chaining = ENABLE_SG_CHAINING, 947 948 .emulated = 1, 948 949 }; 949 950
+15 -17
drivers/scsi/aha1542.c
··· 61 61 } 62 62 63 63 static void BAD_SG_DMA(Scsi_Cmnd * SCpnt, 64 - struct scatterlist *sgpnt, 64 + struct scatterlist *sgp, 65 65 int nseg, 66 66 int badseg) 67 67 { 68 68 printk(KERN_CRIT "sgpnt[%d:%d] page %p/0x%llx length %u\n", 69 69 badseg, nseg, 70 - page_address(sgpnt[badseg].page) + sgpnt[badseg].offset, 71 - (unsigned long long)SCSI_SG_PA(&sgpnt[badseg]), 72 - sgpnt[badseg].length); 70 + page_address(sgp->page) + sgp->offset, 71 + (unsigned long long)SCSI_SG_PA(sgp), 72 + sgp->length); 73 73 74 74 /* 75 75 * Not safe to continue. ··· 691 691 memcpy(ccb[mbo].cdb, cmd, ccb[mbo].cdblen); 692 692 693 693 if (SCpnt->use_sg) { 694 - struct scatterlist *sgpnt; 694 + struct scatterlist *sg; 695 695 struct chain *cptr; 696 696 #ifdef DEBUG 697 697 unsigned char *ptr; ··· 699 699 int i; 700 700 ccb[mbo].op = 2; /* SCSI Initiator Command w/scatter-gather */ 701 701 SCpnt->host_scribble = kmalloc(512, GFP_KERNEL | GFP_DMA); 702 - sgpnt = (struct scatterlist *) SCpnt->request_buffer; 703 702 cptr = (struct chain *) SCpnt->host_scribble; 704 703 if (cptr == NULL) { 705 704 /* free the claimed mailbox slot */ 706 705 HOSTDATA(SCpnt->device->host)->SCint[mbo] = NULL; 707 706 return SCSI_MLQUEUE_HOST_BUSY; 708 707 } 709 - for (i = 0; i < SCpnt->use_sg; i++) { 710 - if (sgpnt[i].length == 0 || SCpnt->use_sg > 16 || 711 - (((int) sgpnt[i].offset) & 1) || (sgpnt[i].length & 1)) { 708 + scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) { 709 + if (sg->length == 0 || SCpnt->use_sg > 16 || 710 + (((int) sg->offset) & 1) || (sg->length & 1)) { 712 711 unsigned char *ptr; 713 712 printk(KERN_CRIT "Bad segment list supplied to aha1542.c (%d, %d)\n", SCpnt->use_sg, i); 714 - for (i = 0; i < SCpnt->use_sg; i++) { 713 + scsi_for_each_sg(SCpnt, sg, SCpnt->use_sg, i) { 715 714 printk(KERN_CRIT "%d: %p %d\n", i, 716 - (page_address(sgpnt[i].page) + 717 - sgpnt[i].offset), 718 - sgpnt[i].length); 715 + (page_address(sg->page) + 716 + sg->offset), sg->length); 719 717 }; 720 718 printk(KERN_CRIT "cptr %x: ", (unsigned int) cptr); 721 719 ptr = (unsigned char *) &cptr[i];
··· 721 723 printk("%02x ", ptr[i]); 722 724 panic("Foooooooood fight!"); 723 725 }; 724 - any2scsi(cptr[i].dataptr, SCSI_SG_PA(&sgpnt[i])); 725 - if (SCSI_SG_PA(&sgpnt[i]) + sgpnt[i].length - 1 > ISA_DMA_THRESHOLD) 726 - BAD_SG_DMA(SCpnt, sgpnt, SCpnt->use_sg, i); 727 - any2scsi(cptr[i].datalen, sgpnt[i].length); 726 + any2scsi(cptr[i].dataptr, SCSI_SG_PA(sg)); 727 + if (SCSI_SG_PA(sg) + sg->length - 1 > ISA_DMA_THRESHOLD) 728 + BAD_SG_DMA(SCpnt, sg, SCpnt->use_sg, i); 729 + any2scsi(cptr[i].datalen, sg->length); 728 730 }; 729 731 any2scsi(ccb[mbo].datalen, SCpnt->use_sg * sizeof(struct chain)); 730 732 any2scsi(ccb[mbo].dataptr, SCSI_BUF_PA(cptr));
+1
drivers/scsi/aha1740.c
··· 563 563 .sg_tablesize = AHA1740_SCATTER, 564 564 .cmd_per_lun = AHA1740_CMDLUN, 565 565 .use_clustering = ENABLE_CLUSTERING, 566 + .use_sg_chaining = ENABLE_SG_CHAINING, 566 567 .eh_abort_handler = aha1740_eh_abort_handler, 567 568 }; 568 569
+1
drivers/scsi/aic7xxx/aic79xx_osm.c
··· 766 766 .max_sectors = 8192, 767 767 .cmd_per_lun = 2, 768 768 .use_clustering = ENABLE_CLUSTERING, 769 + .use_sg_chaining = ENABLE_SG_CHAINING, 769 770 .slave_alloc = ahd_linux_slave_alloc, 770 771 .slave_configure = ahd_linux_slave_configure, 771 772 .target_alloc = ahd_linux_target_alloc,
+1
drivers/scsi/aic7xxx/aic7xxx_osm.c
··· 747 747 .max_sectors = 8192, 748 748 .cmd_per_lun = 2, 749 749 .use_clustering = ENABLE_CLUSTERING, 750 + .use_sg_chaining = ENABLE_SG_CHAINING, 750 751 .slave_alloc = ahc_linux_slave_alloc, 751 752 .slave_configure = ahc_linux_slave_configure, 752 753 .target_alloc = ahc_linux_target_alloc,
+1
drivers/scsi/aic7xxx_old.c
··· 11142 11142 .max_sectors = 2048, 11143 11143 .cmd_per_lun = 3, 11144 11144 .use_clustering = ENABLE_CLUSTERING, 11145 + .use_sg_chaining = ENABLE_SG_CHAINING, 11145 11146 }; 11146 11147 11147 11148 #include "scsi_module.c"
+3 -3
drivers/scsi/aic94xx/aic94xx_task.c
··· 94 94 res = -ENOMEM; 95 95 goto err_unmap; 96 96 } 97 - for (sc = task->scatter, i = 0; i < num_sg; i++, sc++) { 97 + for_each_sg(task->scatter, sc, num_sg, i) { 98 98 struct sg_el *sg = 99 99 &((struct sg_el *)ascb->sg_arr->vaddr)[i]; 100 100 sg->bus_addr = cpu_to_le64((u64)sg_dma_address(sc)); ··· 103 103 sg->flags |= ASD_SG_EL_LIST_EOL; 104 104 } 105 105 106 - for (sc = task->scatter, i = 0; i < 2; i++, sc++) { 106 + for_each_sg(task->scatter, sc, 2, i) { 107 107 sg_arr[i].bus_addr = 108 108 cpu_to_le64((u64)sg_dma_address(sc)); 109 109 sg_arr[i].size = cpu_to_le32((u32)sg_dma_len(sc)); ··· 115 115 sg_arr[2].bus_addr=cpu_to_le64((u64)ascb->sg_arr->dma_handle); 116 116 } else { 117 117 int i; 118 - for (sc = task->scatter, i = 0; i < num_sg; i++, sc++) { 118 + for_each_sg(task->scatter, sc, num_sg, i) { 119 119 sg_arr[i].bus_addr = 120 120 cpu_to_le64((u64)sg_dma_address(sc)); 121 121 sg_arr[i].size = cpu_to_le32((u32)sg_dma_len(sc));
+1
drivers/scsi/arcmsr/arcmsr_hba.c
··· 122 122 .max_sectors = ARCMSR_MAX_XFER_SECTORS, 123 123 .cmd_per_lun = ARCMSR_MAX_CMD_PERLUN, 124 124 .use_clustering = ENABLE_CLUSTERING, 125 + .use_sg_chaining = ENABLE_SG_CHAINING, 125 126 .shost_attrs = arcmsr_host_attrs, 126 127 }; 127 128 #ifdef CONFIG_SCSI_ARCMSR_AER
+1
drivers/scsi/dc395x.c
··· 4765 4765 .eh_bus_reset_handler = dc395x_eh_bus_reset, 4766 4766 .unchecked_isa_dma = 0, 4767 4767 .use_clustering = DISABLE_CLUSTERING, 4768 + .use_sg_chaining = ENABLE_SG_CHAINING, 4768 4769 }; 4769 4770 4770 4771
+1
drivers/scsi/dpt_i2o.c
··· 3295 3295 .this_id = 7, 3296 3296 .cmd_per_lun = 1, 3297 3297 .use_clustering = ENABLE_CLUSTERING, 3298 + .use_sg_chaining = ENABLE_SG_CHAINING, 3298 3299 }; 3299 3300 3300 3301 static s32 adpt_scsi_register(adpt_hba* pHba)
+2 -1
drivers/scsi/eata.c
··· 523 523 .slave_configure = eata2x_slave_configure, 524 524 .this_id = 7, 525 525 .unchecked_isa_dma = 1, 526 - .use_clustering = ENABLE_CLUSTERING 526 + .use_clustering = ENABLE_CLUSTERING, 527 + .use_sg_chaining = ENABLE_SG_CHAINING, 527 528 }; 528 529 529 530 #if !defined(__BIG_ENDIAN_BITFIELD) && !defined(__LITTLE_ENDIAN_BITFIELD)
+1
drivers/scsi/hosts.c
··· 343 343 shost->use_clustering = sht->use_clustering; 344 344 shost->ordered_tag = sht->ordered_tag; 345 345 shost->active_mode = sht->supported_mode; 346 + shost->use_sg_chaining = sht->use_sg_chaining; 346 347 347 348 if (sht->max_host_blocked) 348 349 shost->max_host_blocked = sht->max_host_blocked;
+1
drivers/scsi/hptiop.c
··· 655 655 .unchecked_isa_dma = 0, 656 656 .emulated = 0, 657 657 .use_clustering = ENABLE_CLUSTERING, 658 + .use_sg_chaining = ENABLE_SG_CHAINING, 658 659 .proc_name = driver_name, 659 660 .shost_attrs = hptiop_attrs, 660 661 .this_id = -1,
+1
drivers/scsi/ibmmca.c
··· 1501 1501 .sg_tablesize = 16, 1502 1502 .cmd_per_lun = 1, 1503 1503 .use_clustering = ENABLE_CLUSTERING, 1504 + .use_sg_chaining = ENABLE_SG_CHAINING, 1504 1505 }; 1505 1506 1506 1507 static int ibmmca_probe(struct device *dev)
+1
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 1548 1548 .this_id = -1, 1549 1549 .sg_tablesize = SG_ALL, 1550 1550 .use_clustering = ENABLE_CLUSTERING, 1551 + .use_sg_chaining = ENABLE_SG_CHAINING, 1551 1552 .shost_attrs = ibmvscsi_attrs, 1552 1553 }; 1553 1554
+18 -14
drivers/scsi/ide-scsi.c
··· 70 70 u8 *buffer; /* Data buffer */ 71 71 u8 *current_position; /* Pointer into the above buffer */ 72 72 struct scatterlist *sg; /* Scatter gather table */ 73 + struct scatterlist *last_sg; /* Last sg element */ 73 74 int b_count; /* Bytes transferred from current entry */ 74 75 struct scsi_cmnd *scsi_cmd; /* SCSI command */ 75 76 void (*done)(struct scsi_cmnd *); /* Scsi completion routine */ ··· 174 173 char *buf; 175 174 176 175 while (bcount) { 177 - if (pc->sg - scsi_sglist(pc->scsi_cmd) > 178 - scsi_sg_count(pc->scsi_cmd)) { 179 - printk (KERN_ERR "ide-scsi: scatter gather table too small, discarding data\n"); 180 - idescsi_discard_data (drive, bcount); 181 - return; 182 - } 183 176 count = min(pc->sg->length - pc->b_count, bcount); 184 177 if (PageHighMem(pc->sg->page)) { 185 178 unsigned long flags; ··· 192 197 } 193 198 bcount -= count; pc->b_count += count; 194 199 if (pc->b_count == pc->sg->length) { 195 - pc->sg++; 200 + if (pc->sg == pc->last_sg) 201 + break; 202 + pc->sg = sg_next(pc->sg); 196 203 pc->b_count = 0; 197 204 } 205 + } 206 + 207 + if (bcount) { 208 + printk (KERN_ERR "ide-scsi: scatter gather table too small, discarding data\n"); 209 + idescsi_discard_data (drive, bcount); 198 210 } 199 211 } 200 212 ··· 211 209 char *buf; 212 210 213 211 while (bcount) { 214 - if (pc->sg - scsi_sglist(pc->scsi_cmd) > 215 - scsi_sg_count(pc->scsi_cmd)) { 216 - printk (KERN_ERR "ide-scsi: scatter gather table too small, padding with zeros\n"); 217 - idescsi_output_zeros (drive, bcount); 218 - return; 219 - } 220 212 count = min(pc->sg->length - pc->b_count, bcount); 221 213 if (PageHighMem(pc->sg->page)) { 222 214 unsigned long flags; ··· 229 233 } 230 234 bcount -= count; pc->b_count += count; 231 235 if (pc->b_count == pc->sg->length) { 232 - pc->sg++; 236 + if (pc->sg == pc->last_sg) 237 + break; 238 + pc->sg = sg_next(pc->sg); 233 239 pc->b_count = 0; 234 240 } 241 + } 242 + 243 + if (bcount) { 244 + printk (KERN_ERR "ide-scsi: scatter gather table too small, padding with zeros\n"); 245 + idescsi_output_zeros (drive, bcount); 235 246 } 236 247 } 237 248 ··· 807 804 memcpy (pc->c, cmd->cmnd, cmd->cmd_len); 808 805 pc->buffer = NULL; 809 806 pc->sg = scsi_sglist(cmd); 807 + pc->last_sg = sg_last(pc->sg, cmd->use_sg); 810 808 pc->b_count = 0; 811 809 pc->request_transfer = pc->buffer_size = scsi_bufflen(cmd); 812 810 pc->scsi_cmd = cmd;
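The ide-scsi rework above stops the copy loop at a cached last element (`pc->last_sg`) and reports any leftover byte count afterwards, instead of comparing pointer offsets against `scsi_sg_count()` on every pass. A hedged sketch of that control flow, using a plain linked segment list rather than real scatterlists (names here are illustrative, not driver code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a scatterlist segment. */
struct seg {
    unsigned int length;
    unsigned char *buf;
    struct seg *next;
};

/* Copy bcount bytes from src into successive segments. Mirrors the new
 * ide-scsi loop: stop when the last segment fills, and return the number
 * of bytes that did not fit (the caller logs and discards those). */
static unsigned int fill_segs(struct seg *sg, struct seg *last_sg,
                              const unsigned char *src, unsigned int bcount)
{
    unsigned int b_count = 0;   /* bytes already used in current segment */

    while (bcount) {
        unsigned int count = sg->length - b_count;

        if (count > bcount)
            count = bcount;
        memcpy(sg->buf + b_count, src, count);
        src += count;
        bcount -= count;
        b_count += count;
        if (b_count == sg->length) {
            if (sg == last_sg)
                break;          /* table exhausted; bcount is the overflow */
            sg = sg->next;
            b_count = 0;
        }
    }
    return bcount;
}

/* 3 + 2 bytes of segment space, 6 bytes offered: one byte cannot fit. */
static unsigned int demo_overflow(void)
{
    static unsigned char a[3], b[2];
    struct seg s1 = { 2, b, NULL };
    struct seg s0 = { 3, a, &s1 };

    return fill_segs(&s0, &s1, (const unsigned char *)"ABCDEF", 6);
}
```

The key property is that the "table too small" diagnosis now falls out of a nonzero remainder after the walk, so the loop body never needs to index beyond the list.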
+1
drivers/scsi/initio.c
··· 2831 2831 .sg_tablesize = SG_ALL, 2832 2832 .cmd_per_lun = 1, 2833 2833 .use_clustering = ENABLE_CLUSTERING, 2834 + .use_sg_chaining = ENABLE_SG_CHAINING, 2834 2835 }; 2835 2836 2836 2837 static int initio_probe_one(struct pci_dev *pdev,
+8 -6
drivers/scsi/ips.c
··· 3252 3252 */ 3253 3253 if ((scb->breakup) || (scb->sg_break)) { 3254 3254 struct scatterlist *sg; 3255 - int sg_dma_index, ips_sg_index = 0; 3255 + int i, sg_dma_index, ips_sg_index = 0; 3256 3256 3257 3257 /* we had a data breakup */ 3258 3258 scb->data_len = 0; ··· 3261 3261 3262 3262 /* Spin forward to last dma chunk */ 3263 3263 sg_dma_index = scb->breakup; 3264 + for (i = 0; i < scb->breakup; i++) 3265 + sg = sg_next(sg); 3264 3266 3265 3267 /* Take care of possible partial on last chunk */ 3266 3268 ips_fill_scb_sg_single(ha, 3267 - sg_dma_address(&sg[sg_dma_index]), 3269 + sg_dma_address(sg), 3268 3270 scb, ips_sg_index++, 3269 - sg_dma_len(&sg[sg_dma_index])); 3271 + sg_dma_len(sg)); 3270 3272 3271 3273 for (; sg_dma_index < scsi_sg_count(scb->scsi_cmd); 3272 - sg_dma_index++) { 3274 + sg_dma_index++, sg = sg_next(sg)) { 3273 3275 if (ips_fill_scb_sg_single 3274 3276 (ha, 3275 - sg_dma_address(&sg[sg_dma_index]), 3277 + sg_dma_address(sg), 3276 3278 scb, ips_sg_index++, 3277 - sg_dma_len(&sg[sg_dma_index])) < 0) 3279 + sg_dma_len(sg)) < 0) 3278 3280 break; 3279 3281 } 3280 3282
+2
drivers/scsi/lpfc/lpfc_scsi.c
··· 1438 1438 .scan_finished = lpfc_scan_finished, 1439 1439 .this_id = -1, 1440 1440 .sg_tablesize = LPFC_SG_SEG_CNT, 1441 + .use_sg_chaining = ENABLE_SG_CHAINING, 1441 1442 .cmd_per_lun = LPFC_CMD_PER_LUN, 1442 1443 .use_clustering = ENABLE_CLUSTERING, 1443 1444 .shost_attrs = lpfc_hba_attrs, ··· 1461 1460 .sg_tablesize = LPFC_SG_SEG_CNT, 1462 1461 .cmd_per_lun = LPFC_CMD_PER_LUN, 1463 1462 .use_clustering = ENABLE_CLUSTERING, 1463 + .use_sg_chaining = ENABLE_SG_CHAINING, 1464 1464 .shost_attrs = lpfc_vport_attrs, 1465 1465 .max_sectors = 0xFFFF, 1466 1466 };
+1
drivers/scsi/mac53c94.c
··· 402 402 .sg_tablesize = SG_ALL, 403 403 .cmd_per_lun = 1, 404 404 .use_clustering = DISABLE_CLUSTERING, 405 + .use_sg_chaining = ENABLE_SG_CHAINING, 405 406 }; 406 407 407 408 static int mac53c94_probe(struct macio_dev *mdev, const struct of_device_id *match)
+1
drivers/scsi/megaraid.c
··· 4492 4492 .sg_tablesize = MAX_SGLIST, 4493 4493 .cmd_per_lun = DEF_CMD_PER_LUN, 4494 4494 .use_clustering = ENABLE_CLUSTERING, 4495 + .use_sg_chaining = ENABLE_SG_CHAINING, 4495 4496 .eh_abort_handler = megaraid_abort, 4496 4497 .eh_device_reset_handler = megaraid_reset, 4497 4498 .eh_bus_reset_handler = megaraid_reset,
+1
drivers/scsi/megaraid/megaraid_mbox.c
··· 361 361 .eh_host_reset_handler = megaraid_reset_handler, 362 362 .change_queue_depth = megaraid_change_queue_depth, 363 363 .use_clustering = ENABLE_CLUSTERING, 364 + .use_sg_chaining = ENABLE_SG_CHAINING, 364 365 .sdev_attrs = megaraid_sdev_attrs, 365 366 .shost_attrs = megaraid_shost_attrs, 366 367 };
+1
drivers/scsi/megaraid/megaraid_sas.c
··· 1110 1110 .eh_timed_out = megasas_reset_timer, 1111 1111 .bios_param = megasas_bios_param, 1112 1112 .use_clustering = ENABLE_CLUSTERING, 1113 + .use_sg_chaining = ENABLE_SG_CHAINING, 1113 1114 }; 1114 1115 1115 1116 /**
+1
drivers/scsi/mesh.c
··· 1843 1843 .sg_tablesize = SG_ALL, 1844 1844 .cmd_per_lun = 2, 1845 1845 .use_clustering = DISABLE_CLUSTERING, 1846 + .use_sg_chaining = ENABLE_SG_CHAINING, 1846 1847 }; 1847 1848 1848 1849 static int mesh_probe(struct macio_dev *mdev, const struct of_device_id *match)
+1
drivers/scsi/nsp32.c
··· 281 281 .cmd_per_lun = 1, 282 282 .this_id = NSP32_HOST_SCSIID, 283 283 .use_clustering = DISABLE_CLUSTERING, 284 + .use_sg_chaining = ENABLE_SG_CHAINING, 284 285 .eh_abort_handler = nsp32_eh_abort, 285 286 .eh_bus_reset_handler = nsp32_eh_bus_reset, 286 287 .eh_host_reset_handler = nsp32_eh_host_reset,
+1
drivers/scsi/pcmcia/sym53c500_cs.c
··· 694 694 .sg_tablesize = 32, 695 695 .cmd_per_lun = 1, 696 696 .use_clustering = ENABLE_CLUSTERING, 697 + .use_sg_chaining = ENABLE_SG_CHAINING, 697 698 .shost_attrs = SYM53C500_shost_attrs 698 699 }; 699 700
+41 -29
drivers/scsi/qla1280.c
··· 2775 2775 struct device_reg __iomem *reg = ha->iobase; 2776 2776 struct scsi_cmnd *cmd = sp->cmd; 2777 2777 cmd_a64_entry_t *pkt; 2778 - struct scatterlist *sg = NULL; 2778 + struct scatterlist *sg = NULL, *s; 2779 2779 __le32 *dword_ptr; 2780 2780 dma_addr_t dma_handle; 2781 2781 int status = 0; ··· 2889 2889 * Load data segments. 2890 2890 */ 2891 2891 if (seg_cnt) { /* If data transfer. */ 2892 + int remseg = seg_cnt; 2892 2893 /* Setup packet address segment pointer. */ 2893 2894 dword_ptr = (u32 *)&pkt->dseg_0_address; 2894 2895 2895 2896 if (cmd->use_sg) { /* If scatter gather */ 2896 2897 /* Load command entry data segments. */ 2897 - for (cnt = 0; cnt < 2 && seg_cnt; cnt++, seg_cnt--) { 2898 - dma_handle = sg_dma_address(sg); 2898 + for_each_sg(sg, s, seg_cnt, cnt) { 2899 + if (cnt == 2) 2900 + break; 2901 + dma_handle = sg_dma_address(s); 2899 2902 #if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_SGI_SN2) 2900 2903 if (ha->flags.use_pci_vchannel) 2901 2904 sn_pci_set_vchan(ha->pdev, ··· 2909 2906 cpu_to_le32(pci_dma_lo32(dma_handle)); 2910 2907 *dword_ptr++ = 2911 2908 cpu_to_le32(pci_dma_hi32(dma_handle)); 2912 - *dword_ptr++ = cpu_to_le32(sg_dma_len(sg)); 2913 - sg++; 2909 + *dword_ptr++ = cpu_to_le32(sg_dma_len(s)); 2914 2910 dprintk(3, "S/G Segment phys_addr=%x %x, len=0x%x\n", 2915 2911 cpu_to_le32(pci_dma_hi32(dma_handle)), 2916 2912 cpu_to_le32(pci_dma_lo32(dma_handle)), 2917 - cpu_to_le32(sg_dma_len(sg))); 2913 + cpu_to_le32(sg_dma_len(sg_next(s)))); 2914 + remseg--; 2918 2915 } 2919 2916 dprintk(5, "qla1280_64bit_start_scsi: Scatter/gather " 2920 2917 "command packet data - b %i, t %i, l %i \n", ··· 2929 2926 dprintk(3, "S/G Building Continuation...seg_cnt=0x%x " 2930 2927 "remains\n", seg_cnt); 2931 2928 2932 - while (seg_cnt > 0) { 2929 + while (remseg > 0) { 2930 + /* Update sg start */ 2931 + sg = s; 2933 2932 /* Adjust ring index. 
*/ 2934 2933 ha->req_ring_index++; 2935 2934 if (ha->req_ring_index == REQUEST_ENTRY_CNT) { ··· 2957 2952 (u32 *)&((struct cont_a64_entry *) pkt)->dseg_0_address; 2958 2953 2959 2954 /* Load continuation entry data segments. */ 2960 - for (cnt = 0; cnt < 5 && seg_cnt; 2961 - cnt++, seg_cnt--) { 2962 - dma_handle = sg_dma_address(sg); 2955 + for_each_sg(sg, s, remseg, cnt) { 2956 + if (cnt == 5) 2957 + break; 2958 + dma_handle = sg_dma_address(s); 2963 2959 #if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_SGI_SN2) 2964 2960 if (ha->flags.use_pci_vchannel) 2965 2961 sn_pci_set_vchan(ha->pdev, ··· 2972 2966 *dword_ptr++ = 2973 2967 cpu_to_le32(pci_dma_hi32(dma_handle)); 2974 2968 *dword_ptr++ = 2975 - cpu_to_le32(sg_dma_len(sg)); 2969 + cpu_to_le32(sg_dma_len(s)); 2976 2970 dprintk(3, "S/G Segment Cont. phys_addr=%x %x, len=0x%x\n", 2977 2971 cpu_to_le32(pci_dma_hi32(dma_handle)), 2978 2972 cpu_to_le32(pci_dma_lo32(dma_handle)), 2979 - cpu_to_le32(sg_dma_len(sg))); 2980 - sg++; 2973 + cpu_to_le32(sg_dma_len(s))); 2981 2974 } 2975 + remseg -= cnt; 2982 2976 dprintk(5, "qla1280_64bit_start_scsi: " 2983 2977 "continuation packet data - b %i, t " 2984 2978 "%i, l %i \n", SCSI_BUS_32(cmd), ··· 3068 3062 struct device_reg __iomem *reg = ha->iobase; 3069 3063 struct scsi_cmnd *cmd = sp->cmd; 3070 3064 struct cmd_entry *pkt; 3071 - struct scatterlist *sg = NULL; 3065 + struct scatterlist *sg = NULL, *s; 3072 3066 __le32 *dword_ptr; 3073 3067 int status = 0; 3074 3068 int cnt; ··· 3194 3188 * Load data segments. 3195 3189 */ 3196 3190 if (seg_cnt) { 3191 + int remseg = seg_cnt; 3197 3192 /* Setup packet address segment pointer. */ 3198 3193 dword_ptr = &pkt->dseg_0_address; 3199 3194 ··· 3203 3196 qla1280_dump_buffer(1, (char *)sg, 4 * 16); 3204 3197 3205 3198 /* Load command entry data segments. 
*/ 3206 - for (cnt = 0; cnt < 4 && seg_cnt; cnt++, seg_cnt--) { 3199 + for_each_sg(sg, s, seg_cnt, cnt) { 3200 + if (cnt == 4) 3201 + break; 3207 3202 *dword_ptr++ = 3208 - cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))); 3209 - *dword_ptr++ = 3210 - cpu_to_le32(sg_dma_len(sg)); 3203 + cpu_to_le32(pci_dma_lo32(sg_dma_address(s))); 3204 + *dword_ptr++ = cpu_to_le32(sg_dma_len(s)); 3211 3205 dprintk(3, "S/G Segment phys_addr=0x%lx, len=0x%x\n", 3212 - (pci_dma_lo32(sg_dma_address(sg))), 3213 - (sg_dma_len(sg))); 3214 - sg++; 3206 + (pci_dma_lo32(sg_dma_address(s))), 3207 + (sg_dma_len(s))); 3208 + remseg--; 3215 3209 } 3216 3210 /* 3217 3211 * Build continuation packets. 3218 3212 */ 3219 3213 dprintk(3, "S/G Building Continuation" 3220 3214 "...seg_cnt=0x%x remains\n", seg_cnt); 3221 - while (seg_cnt > 0) { 3215 + while (remseg > 0) { 3216 + /* Continue from end point */ 3217 + sg = s; 3222 3218 /* Adjust ring index. */ 3223 3219 ha->req_ring_index++; 3224 3220 if (ha->req_ring_index == REQUEST_ENTRY_CNT) { ··· 3249 3239 &((struct cont_entry *) pkt)->dseg_0_address; 3250 3240 3251 3241 /* Load continuation entry data segments. */ 3252 - for (cnt = 0; cnt < 7 && seg_cnt; 3253 - cnt++, seg_cnt--) { 3242 + for_each_sg(sg, s, remseg, cnt) { 3243 + if (cnt == 7) 3244 + break; 3254 3245 *dword_ptr++ = 3255 - cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))); 3246 + cpu_to_le32(pci_dma_lo32(sg_dma_address(s))); 3256 3247 *dword_ptr++ = 3257 - cpu_to_le32(sg_dma_len(sg)); 3248 + cpu_to_le32(sg_dma_len(s)); 3258 3249 dprintk(1, 3259 3250 "S/G Segment Cont. 
phys_addr=0x%x, " 3260 3251 "len=0x%x\n", 3261 - cpu_to_le32(pci_dma_lo32(sg_dma_address(sg))), 3262 - cpu_to_le32(sg_dma_len(sg))); 3263 - sg++; 3252 + cpu_to_le32(pci_dma_lo32(sg_dma_address(s))), 3253 + cpu_to_le32(sg_dma_len(s))); 3264 3254 } 3255 + remseg -= cnt; 3265 3256 dprintk(5, "qla1280_32bit_start_scsi: " 3266 3257 "continuation packet data - " 3267 3258 "scsi(%i:%i:%i)\n", SCSI_BUS_32(cmd), ··· 4259 4248 .sg_tablesize = SG_ALL, 4260 4249 .cmd_per_lun = 1, 4261 4250 .use_clustering = ENABLE_CLUSTERING, 4251 + .use_sg_chaining = ENABLE_SG_CHAINING, 4262 4252 }; 4263 4253 4264 4254
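The qla1280 conversion above caps each `for_each_sg` pass at the packet's descriptor capacity (2 in the 64-bit command entry, 5 per continuation entry) and resumes from where the previous pass broke off, tracking progress in `remseg`. The sizing arithmetic behind that bookkeeping can be sketched on its own; this helper is illustrative, not driver code:

```c
#include <assert.h>

/* How many request packets does an nseg-element sg list need when the
 * command entry holds first_cap descriptors and each continuation entry
 * holds cont_cap? Mirrors the remseg countdown in the driver loops. */
static int count_packets(int nseg, int first_cap, int cont_cap)
{
    int packets = 1;
    int remseg = nseg;

    remseg -= (remseg < first_cap) ? remseg : first_cap;
    while (remseg > 0) {
        packets++;
        remseg -= (remseg < cont_cap) ? remseg : cont_cap;
    }
    return packets;
}
```

For the 64-bit qla1280 path (2 then 5), a 10-segment request therefore needs one command entry plus two continuation packets.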
+2
drivers/scsi/qla2xxx/qla_os.c
··· 132 132 .this_id = -1, 133 133 .cmd_per_lun = 3, 134 134 .use_clustering = ENABLE_CLUSTERING, 135 + .use_sg_chaining = ENABLE_SG_CHAINING, 135 136 .sg_tablesize = SG_ALL, 136 137 137 138 /* ··· 164 163 .this_id = -1, 165 164 .cmd_per_lun = 3, 166 165 .use_clustering = ENABLE_CLUSTERING, 166 + .use_sg_chaining = ENABLE_SG_CHAINING, 167 167 .sg_tablesize = SG_ALL, 168 168 169 169 .max_sectors = 0xFFFF,
+1
drivers/scsi/qla4xxx/ql4_os.c
··· 94 94 .this_id = -1, 95 95 .cmd_per_lun = 3, 96 96 .use_clustering = ENABLE_CLUSTERING, 97 + .use_sg_chaining = ENABLE_SG_CHAINING, 97 98 .sg_tablesize = SG_ALL, 98 99 99 100 .max_sectors = 0xFFFF,
+1
drivers/scsi/qlogicfas.c
··· 197 197 .sg_tablesize = SG_ALL, 198 198 .cmd_per_lun = 1, 199 199 .use_clustering = DISABLE_CLUSTERING, 200 + .use_sg_chaining = ENABLE_SG_CHAINING, 200 201 }; 201 202 202 203 static __init int qlogicfas_init(void)
+8 -7
drivers/scsi/qlogicpti.c
··· 868 868 struct qlogicpti *qpti, u_int in_ptr, u_int out_ptr) 869 869 { 870 870 struct dataseg *ds; 871 - struct scatterlist *sg; 871 + struct scatterlist *sg, *s; 872 872 int i, n; 873 873 874 874 if (Cmnd->use_sg) { ··· 884 884 n = sg_count; 885 885 if (n > 4) 886 886 n = 4; 887 - for (i = 0; i < n; i++, sg++) { 888 - ds[i].d_base = sg_dma_address(sg); 889 - ds[i].d_count = sg_dma_len(sg); 887 + for_each_sg(sg, s, n, i) { 888 + ds[i].d_base = sg_dma_address(s); 889 + ds[i].d_count = sg_dma_len(s); 890 890 } 891 891 sg_count -= 4; 892 + sg = s; 892 893 while (sg_count > 0) { 893 894 struct Continuation_Entry *cont; 894 895 ··· 908 907 n = sg_count; 909 908 if (n > 7) 910 909 n = 7; 911 - for (i = 0; i < n; i++, sg++) { 912 - ds[i].d_base = sg_dma_address(sg); 913 - ds[i].d_count = sg_dma_len(sg); 910 + for_each_sg(sg, s, n, i) { 911 + ds[i].d_base = sg_dma_address(s); 912 + ds[i].d_count = sg_dma_len(s); 914 913 } 915 914 sg_count -= n; 916 915 }
+16 -14
drivers/scsi/scsi_debug.c
··· 38 38 #include <linux/proc_fs.h> 39 39 #include <linux/vmalloc.h> 40 40 #include <linux/moduleparam.h> 41 + #include <linux/scatterlist.h> 41 42 42 43 #include <linux/blkdev.h> 43 44 #include "scsi.h" ··· 601 600 int k, req_len, act_len, len, active; 602 601 void * kaddr; 603 602 void * kaddr_off; 604 - struct scatterlist * sgpnt; 603 + struct scatterlist * sg; 605 604 606 605 if (0 == scp->request_bufflen) 607 606 return 0; ··· 620 619 scp->resid = req_len - act_len; 621 620 return 0; 622 621 } 623 - sgpnt = (struct scatterlist *)scp->request_buffer; 624 622 active = 1; 625 - for (k = 0, req_len = 0, act_len = 0; k < scp->use_sg; ++k, ++sgpnt) { 623 + req_len = act_len = 0; 624 + scsi_for_each_sg(scp, sg, scp->use_sg, k) { 626 625 if (active) { 627 626 kaddr = (unsigned char *) 628 - kmap_atomic(sgpnt->page, KM_USER0); 627 + kmap_atomic(sg->page, KM_USER0); 629 628 if (NULL == kaddr) 630 629 return (DID_ERROR << 16); 631 - kaddr_off = (unsigned char *)kaddr + sgpnt->offset; 632 - len = sgpnt->length; 630 + kaddr_off = (unsigned char *)kaddr + sg->offset; 631 + len = sg->length; 633 632 if ((req_len + len) > arr_len) { 634 633 active = 0; 635 634 len = arr_len - req_len; ··· 638 637 kunmap_atomic(kaddr, KM_USER0); 639 638 act_len += len; 640 639 } 641 - req_len += sgpnt->length; 640 + req_len += sg->length; 642 641 } 643 642 if (scp->resid) 644 643 scp->resid -= act_len; ··· 654 653 int k, req_len, len, fin; 655 654 void * kaddr; 656 655 void * kaddr_off; 657 - struct scatterlist * sgpnt; 656 + struct scatterlist * sg; 658 657 659 658 if (0 == scp->request_bufflen) 660 659 return 0; ··· 669 668 memcpy(arr, scp->request_buffer, len); 670 669 return len; 671 670 } 672 - sgpnt = (struct scatterlist *)scp->request_buffer; 673 - for (k = 0, req_len = 0, fin = 0; k < scp->use_sg; ++k, ++sgpnt) { 674 - kaddr = (unsigned char *)kmap_atomic(sgpnt->page, KM_USER0); 671 + sg = scsi_sglist(scp); 672 + req_len = fin = 0; 673 + for (k = 0; k < scp->use_sg; ++k, sg = sg_next(sg)) { 674 + kaddr = (unsigned char *)kmap_atomic(sg->page, KM_USER0); 675 675 if (NULL == kaddr) 676 676 return -1; 677 - kaddr_off = (unsigned char *)kaddr + sgpnt->offset; 678 - len = sgpnt->length; 677 + kaddr_off = (unsigned char *)kaddr + sg->offset; 678 + len = sg->length; 679 679 if ((req_len + len) > max_arr_len) { 680 680 len = max_arr_len - req_len; 681 681 fin = 1; ··· 685 683 kunmap_atomic(kaddr, KM_USER0); 686 684 if (fin) 687 685 return req_len + len; 688 686 req_len += sg->length; 689 687 } 690 688 return req_len; 691 689 }
+185 -53
drivers/scsi/scsi_lib.c
··· 17 17 #include <linux/pci.h> 18 18 #include <linux/delay.h> 19 19 #include <linux/hardirq.h> 20 + #include <linux/scatterlist.h> 20 21 21 22 #include <scsi/scsi.h> 22 23 #include <scsi/scsi_cmnd.h> ··· 34 33 #define SG_MEMPOOL_NR ARRAY_SIZE(scsi_sg_pools) 35 34 #define SG_MEMPOOL_SIZE 2 36 35 36 + /* 37 + * The maximum number of SG segments that we will put inside a scatterlist 38 + * (unless chaining is used). Should ideally fit inside a single page, to 39 + * avoid a higher order allocation. 40 + */ 41 + #define SCSI_MAX_SG_SEGMENTS 128 42 + 37 43 struct scsi_host_sg_pool { 38 44 size_t size; 39 - char *name; 45 + char *name; 40 46 struct kmem_cache *slab; 41 47 mempool_t *pool; 42 48 }; 43 49 44 - #if (SCSI_MAX_PHYS_SEGMENTS < 32) 45 - #error SCSI_MAX_PHYS_SEGMENTS is too small 46 - #endif 47 - 48 - #define SP(x) { x, "sgpool-" #x } 50 + #define SP(x) { x, "sgpool-" #x } 49 51 static struct scsi_host_sg_pool scsi_sg_pools[] = { 50 52 SP(8), 51 53 SP(16), 54 + #if (SCSI_MAX_SG_SEGMENTS > 16) 52 55 SP(32), 53 - #if (SCSI_MAX_PHYS_SEGMENTS > 32) 56 + #if (SCSI_MAX_SG_SEGMENTS > 32) 54 57 SP(64), 55 - #if (SCSI_MAX_PHYS_SEGMENTS > 64) 58 + #if (SCSI_MAX_SG_SEGMENTS > 64) 56 59 SP(128), 57 - #if (SCSI_MAX_PHYS_SEGMENTS > 128) 58 - SP(256), 59 - #if (SCSI_MAX_PHYS_SEGMENTS > 256) 60 - #error SCSI_MAX_PHYS_SEGMENTS is too large 61 60 #endif 62 61 #endif 63 62 #endif 64 - #endif 65 - }; 63 + }; 66 64 #undef SP 67 65 68 66 static void scsi_run_queue(struct request_queue *q); ··· 289 289 struct request_queue *q = rq->q; 290 290 int nr_pages = (bufflen + sgl[0].offset + PAGE_SIZE - 1) >> PAGE_SHIFT; 291 291 unsigned int data_len = bufflen, len, bytes, off; 292 + struct scatterlist *sg; 292 293 struct page *page; 293 294 struct bio *bio = NULL; 294 295 int i, err, nr_vecs = 0; 295 296 296 - for (i = 0; i < nsegs; i++) { 297 - page = sgl[i].page; 298 - off = sgl[i].offset; 299 - len = sgl[i].length; 297 + for_each_sg(sgl, sg, nsegs, i) { 298 + page = sg->page; 299 + off 
= sg->offset; 300 + len = sg->length; 301 + data_len += len; 300 302 301 303 while (len > 0 && data_len > 0) { 302 304 /* ··· 697 695 return NULL; 698 696 } 699 697 700 - struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) 698 + /* 699 + * Like SCSI_MAX_SG_SEGMENTS, but for archs that have sg chaining. This limit 700 + * is totally arbitrary, a setting of 2048 will get you at least 8mb ios. 701 + */ 702 + #define SCSI_MAX_SG_CHAIN_SEGMENTS 2048 703 + 704 + static inline unsigned int scsi_sgtable_index(unsigned short nents) 701 705 { 702 - struct scsi_host_sg_pool *sgp; 703 - struct scatterlist *sgl; 706 + unsigned int index; 704 707 705 - BUG_ON(!cmd->use_sg); 706 - 707 - switch (cmd->use_sg) { 708 + switch (nents) { 708 709 case 1 ... 8: 709 - cmd->sglist_len = 0; 710 + index = 0; 710 711 break; 711 712 case 9 ... 16: 712 - cmd->sglist_len = 1; 713 + index = 1; 713 714 break; 715 + #if (SCSI_MAX_SG_SEGMENTS > 16) 714 716 case 17 ... 32: 715 - cmd->sglist_len = 2; 717 + index = 2; 716 718 break; 717 - #if (SCSI_MAX_PHYS_SEGMENTS > 32) 719 + #if (SCSI_MAX_SG_SEGMENTS > 32) 718 720 case 33 ... 64: 719 - cmd->sglist_len = 3; 721 + index = 3; 720 722 break; 721 - #if (SCSI_MAX_PHYS_SEGMENTS > 64) 723 + #if (SCSI_MAX_SG_SEGMENTS > 64) 722 724 case 65 ... 128: 723 - cmd->sglist_len = 4; 724 - break; 725 - #if (SCSI_MAX_PHYS_SEGMENTS > 128) 726 - case 129 ... 
256: 727 - cmd->sglist_len = 5; 725 + index = 4; 728 726 break; 729 727 #endif 730 728 #endif 731 729 #endif 732 730 default: 733 - return NULL; 731 + printk(KERN_ERR "scsi: bad segment count=%d\n", nents); 732 + BUG(); 734 733 } 735 734 736 - sgp = scsi_sg_pools + cmd->sglist_len; 737 - sgl = mempool_alloc(sgp->pool, gfp_mask); 738 - return sgl; 735 + return index; 736 + } 737 + 738 + struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *cmd, gfp_t gfp_mask) 739 + { 740 + struct scsi_host_sg_pool *sgp; 741 + struct scatterlist *sgl, *prev, *ret; 742 + unsigned int index; 743 + int this, left; 744 + 745 + BUG_ON(!cmd->use_sg); 746 + 747 + left = cmd->use_sg; 748 + ret = prev = NULL; 749 + do { 750 + this = left; 751 + if (this > SCSI_MAX_SG_SEGMENTS) { 752 + this = SCSI_MAX_SG_SEGMENTS - 1; 753 + index = SG_MEMPOOL_NR - 1; 754 + } else 755 + index = scsi_sgtable_index(this); 756 + 757 + left -= this; 758 + 759 + sgp = scsi_sg_pools + index; 760 + 761 + sgl = mempool_alloc(sgp->pool, gfp_mask); 762 + if (unlikely(!sgl)) 763 + goto enomem; 764 + 765 + memset(sgl, 0, sizeof(*sgl) * sgp->size); 766 + 767 + /* 768 + * first loop through, set initial index and return value 769 + */ 770 + if (!ret) 771 + ret = sgl; 772 + 773 + /* 774 + * chain previous sglist, if any. we know the previous 775 + * sglist must be the biggest one, or we would not have 776 + * ended up doing another loop. 777 + */ 778 + if (prev) 779 + sg_chain(prev, SCSI_MAX_SG_SEGMENTS, sgl); 780 + 781 + /* 782 + * don't allow subsequent mempool allocs to sleep, it would 783 + * violate the mempool principle. 784 + */ 785 + gfp_mask &= ~__GFP_WAIT; 786 + gfp_mask |= __GFP_HIGH; 787 + prev = sgl; 788 + } while (left); 789 + 790 + /* 791 + * ->use_sg may get modified after dma mapping has potentially 792 + * shrunk the number of segments, so keep a copy of it for free. 793 + */ 794 + cmd->__use_sg = cmd->use_sg; 795 + return ret; 796 + enomem: 797 + if (ret) { 798 + /* 799 + * Free entries chained off ret. 
Since we were trying to 800 + * allocate another sglist, we know that all entries are of 801 + * the max size. 802 + */ 803 + sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1; 804 + prev = ret; 805 + ret = &ret[SCSI_MAX_SG_SEGMENTS - 1]; 806 + 807 + while ((sgl = sg_chain_ptr(ret)) != NULL) { 808 + ret = &sgl[SCSI_MAX_SG_SEGMENTS - 1]; 809 + mempool_free(sgl, sgp->pool); 810 + } 811 + 812 + mempool_free(prev, sgp->pool); 813 + } 814 + return NULL; 739 815 } 740 816 741 817 EXPORT_SYMBOL(scsi_alloc_sgtable); 742 818 743 - void scsi_free_sgtable(struct scatterlist *sgl, int index) 819 + void scsi_free_sgtable(struct scsi_cmnd *cmd) 744 820 { 821 + struct scatterlist *sgl = cmd->request_buffer; 745 822 struct scsi_host_sg_pool *sgp; 746 823 747 - BUG_ON(index >= SG_MEMPOOL_NR); 824 + /* 825 + * if this is the biggest size sglist, check if we have 826 + * chained parts we need to free 827 + */ 828 + if (cmd->__use_sg > SCSI_MAX_SG_SEGMENTS) { 829 + unsigned short this, left; 830 + struct scatterlist *next; 831 + unsigned int index; 748 832 749 - sgp = scsi_sg_pools + index; 833 + left = cmd->__use_sg - (SCSI_MAX_SG_SEGMENTS - 1); 834 + next = sg_chain_ptr(&sgl[SCSI_MAX_SG_SEGMENTS - 1]); 835 + while (left && next) { 836 + sgl = next; 837 + this = left; 838 + if (this > SCSI_MAX_SG_SEGMENTS) { 839 + this = SCSI_MAX_SG_SEGMENTS - 1; 840 + index = SG_MEMPOOL_NR - 1; 841 + } else 842 + index = scsi_sgtable_index(this); 843 + 844 + left -= this; 845 + 846 + sgp = scsi_sg_pools + index; 847 + 848 + if (left) 849 + next = sg_chain_ptr(&sgl[sgp->size - 1]); 850 + 851 + mempool_free(sgl, sgp->pool); 852 + } 853 + 854 + /* 855 + * Restore original, will be freed below 856 + */ 857 + sgl = cmd->request_buffer; 858 + sgp = scsi_sg_pools + SG_MEMPOOL_NR - 1; 859 + } else 860 + sgp = scsi_sg_pools + scsi_sgtable_index(cmd->__use_sg); 861 + 750 862 mempool_free(sgl, sgp->pool); 751 863 } 752 864 ··· 886 770 static void scsi_release_buffers(struct scsi_cmnd *cmd) 887 771 { 888 772 if 
(cmd->use_sg) 889 - scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len); 773 + scsi_free_sgtable(cmd); 890 774 891 775 /* 892 776 * Zero these out. They now point to freed memory, and it is ··· 1100 984 static int scsi_init_io(struct scsi_cmnd *cmd) 1101 985 { 1102 986 struct request *req = cmd->request; 1103 - struct scatterlist *sgpnt; 1104 987 int count; 1105 988 1106 989 /* ··· 1112 997 /* 1113 998 * If sg table allocation fails, requeue request later. 1114 999 */ 1115 - sgpnt = scsi_alloc_sgtable(cmd, GFP_ATOMIC); 1116 - if (unlikely(!sgpnt)) { 1000 + cmd->request_buffer = scsi_alloc_sgtable(cmd, GFP_ATOMIC); 1001 + if (unlikely(!cmd->request_buffer)) { 1117 1002 scsi_unprep_request(req); 1118 1003 return BLKPREP_DEFER; 1119 1004 } 1120 1005 1121 1006 req->buffer = NULL; 1122 - cmd->request_buffer = (char *) sgpnt; 1123 1007 if (blk_pc_request(req)) 1124 1008 cmd->request_bufflen = req->data_len; 1125 1009 else ··· 1643 1529 if (!q) 1644 1530 return NULL; 1645 1531 1532 + /* 1533 + * this limit is imposed by hardware restrictions 1534 + */ 1646 1535 blk_queue_max_hw_segments(q, shost->sg_tablesize); 1647 - blk_queue_max_phys_segments(q, SCSI_MAX_PHYS_SEGMENTS); 1536 + 1537 + /* 1538 + * In the future, sg chaining support will be mandatory and this 1539 + * ifdef can then go away. Right now we don't have all archs 1540 + * converted, so better keep it safe. 
1541 + */ 1542 + #ifdef ARCH_HAS_SG_CHAIN 1543 + if (shost->use_sg_chaining) 1544 + blk_queue_max_phys_segments(q, SCSI_MAX_SG_CHAIN_SEGMENTS); 1545 + else 1546 + blk_queue_max_phys_segments(q, SCSI_MAX_SG_SEGMENTS); 1547 + #else 1548 + blk_queue_max_phys_segments(q, SCSI_MAX_SG_SEGMENTS); 1549 + #endif 1550 + 1648 1551 blk_queue_max_sectors(q, shost->max_sectors); 1649 1552 blk_queue_bounce_limit(q, scsi_calculate_bounce_limit(shost)); 1650 1553 blk_queue_segment_boundary(q, shost->dma_boundary); ··· 2324 2193 * 2325 2194 * Returns virtual address of the start of the mapped page 2326 2195 */ 2327 - void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count, 2196 + void *scsi_kmap_atomic_sg(struct scatterlist *sgl, int sg_count, 2328 2197 size_t *offset, size_t *len) 2329 2198 { 2330 2199 int i; 2331 2200 size_t sg_len = 0, len_complete = 0; 2201 + struct scatterlist *sg; 2332 2202 struct page *page; 2333 2203 2334 2204 WARN_ON(!irqs_disabled()); 2335 2205 2336 - for (i = 0; i < sg_count; i++) { 2206 + for_each_sg(sgl, sg, sg_count, i) { 2337 2207 len_complete = sg_len; /* Complete sg-entries */ 2338 - sg_len += sg[i].length; 2208 + sg_len += sg->length; 2339 2209 if (sg_len > *offset) 2340 2210 break; 2341 2211 } ··· 2350 2218 } 2351 2219 2352 2220 /* Offset starting from the beginning of first page in this sg-entry */ 2353 - *offset = *offset - len_complete + sg[i].offset; 2221 + *offset = *offset - len_complete + sg->offset; 2354 2222 2355 2223 /* Assumption: contiguous pages can be accessed as "page + i" */ 2356 - page = nth_page(sg[i].page, (*offset >> PAGE_SHIFT)); 2224 + page = nth_page(sg->page, (*offset >> PAGE_SHIFT)); 2357 2225 *offset &= ~PAGE_MASK; 2358 2226 2359 2227 /* Bytes in this sg-entry from *offset to the end of the page */
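The new `scsi_alloc_sgtable()` above satisfies a large request from mempool tables of at most `SCSI_MAX_SG_SEGMENTS` slots, with each full table giving up its final slot as a chain pointer to the next one (and `scsi_free_sgtable()` unwinding the same way via `__use_sg`). The table-count arithmetic implied by that loop can be sketched as follows; the constant value here is just a stand-in for illustration:

```c
#include <assert.h>

#define MAX_SEGS 128   /* stand-in for SCSI_MAX_SG_SEGMENTS */

/* Number of pool tables needed for nents entries: any table that is not
 * the final one carries MAX_SEGS - 1 data entries plus a chain slot. */
static int sg_tables_needed(int nents)
{
    int tables = 0;
    int left = nents;

    while (left > 0) {
        int this = left;

        if (this > MAX_SEGS)
            this = MAX_SEGS - 1;   /* reserve the last slot for the chain */
        left -= this;
        tables++;
    }
    return tables;
}
```

So a 128-entry request still fits one table, while 129 entries already need two, because the first table only contributes 127 data entries once it has to chain onward.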
+2 -2
drivers/scsi/scsi_tgt_lib.c
··· 332 332 scsi_tgt_uspace_send_status(cmd, tcmd->itn_id, tcmd->tag); 333 333 334 334 if (cmd->request_buffer) 335 - scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len); 335 + scsi_free_sgtable(cmd); 336 336 337 337 queue_work(scsi_tgtd, &tcmd->work); 338 338 } ··· 373 373 } 374 374 375 375 eprintk("cmd %p cnt %d\n", cmd, cmd->use_sg); 376 - scsi_free_sgtable(cmd->request_buffer, cmd->sglist_len); 376 + scsi_free_sgtable(cmd); 377 377 return -EINVAL; 378 378 } 379 379
-22
drivers/scsi/sd.c
··· 826 826 return 0; 827 827 } 828 828 829 - static int sd_issue_flush(struct request_queue *q, struct gendisk *disk, 830 - sector_t *error_sector) 831 - { 832 - int ret = 0; 833 - struct scsi_device *sdp = q->queuedata; 834 - struct scsi_disk *sdkp; 835 - 836 - if (sdp->sdev_state != SDEV_RUNNING) 837 - return -ENXIO; 838 - 839 - sdkp = scsi_disk_get_from_dev(&sdp->sdev_gendev); 840 - 841 - if (!sdkp) 842 - return -ENODEV; 843 - 844 - if (sdkp->WCE) 845 - ret = sd_sync_cache(sdkp); 846 - scsi_disk_put(sdkp); 847 - return ret; 848 - } 849 - 850 829 static void sd_prepare_flush(struct request_queue *q, struct request *rq) 851 830 { 852 831 memset(rq->cmd, 0, sizeof(rq->cmd)); ··· 1676 1697 sd_revalidate_disk(gd); 1677 1698 1678 1699 blk_queue_prep_rq(sdp->request_queue, sd_prep_fn); 1679 - blk_queue_issue_flush_fn(sdp->request_queue, sd_issue_flush); 1680 1700 1681 1701 gd->driverfs_dev = &sdp->sdev_gendev; 1682 1702 gd->flags = GENHD_FL_DRIVERFS;
+8 -8
drivers/scsi/sg.c
··· 1165 1165 sg = rsv_schp->buffer; 1166 1166 sa = vma->vm_start; 1167 1167 for (k = 0; (k < rsv_schp->k_use_sg) && (sa < vma->vm_end); 1168 - ++k, ++sg) { 1168 + ++k, sg = sg_next(sg)) { 1169 1169 len = vma->vm_end - sa; 1170 1170 len = (len < sg->length) ? len : sg->length; 1171 1171 if (offset < len) { ··· 1209 1209 sa = vma->vm_start; 1210 1210 sg = rsv_schp->buffer; 1211 1211 for (k = 0; (k < rsv_schp->k_use_sg) && (sa < vma->vm_end); 1212 - ++k, ++sg) { 1212 + ++k, sg = sg_next(sg)) { 1213 1213 len = vma->vm_end - sa; 1214 1214 len = (len < sg->length) ? len : sg->length; 1215 1215 sa += len; ··· 1840 1840 } 1841 1841 for (k = 0, sg = schp->buffer, rem_sz = blk_size; 1842 1842 (rem_sz > 0) && (k < mx_sc_elems); 1843 - ++k, rem_sz -= ret_sz, ++sg) { 1843 + ++k, rem_sz -= ret_sz, sg = sg_next(sg)) { 1844 1844 1845 1845 num = (rem_sz > scatter_elem_sz_prev) ? 1846 1846 scatter_elem_sz_prev : rem_sz; ··· 1913 1913 if (res) 1914 1914 return res; 1915 1915 1916 - for (; p; ++sg, ksglen = sg->length, 1916 + for (; p; sg = sg_next(sg), ksglen = sg->length, 1917 1917 p = page_address(sg->page)) { 1918 1918 if (usglen <= 0) 1919 1919 break; ··· 1992 1992 int k; 1993 1993 1994 1994 for (k = 0; (k < schp->k_use_sg) && sg->page; 1995 - ++k, ++sg) { 1995 + ++k, sg = sg_next(sg)) { 1996 1996 SCSI_LOG_TIMEOUT(5, printk( 1997 1997 "sg_remove_scat: k=%d, pg=0x%p, len=%d\n", 1998 1998 k, sg->page, sg->length)); ··· 2045 2045 if (res) 2046 2046 return res; 2047 2047 2048 - for (; p; ++sg, ksglen = sg->length, 2048 + for (; p; sg = sg_next(sg), ksglen = sg->length, 2049 2049 p = page_address(sg->page)) { 2050 2050 if (usglen <= 0) 2051 2051 break; ··· 2092 2092 if ((!outp) || (num_read_xfer <= 0)) 2093 2093 return 0; 2094 2094 2095 - for (k = 0; (k < schp->k_use_sg) && sg->page; ++k, ++sg) { 2095 + for (k = 0; (k < schp->k_use_sg) && sg->page; ++k, sg = sg_next(sg)) { 2096 2096 num = sg->length; 2097 2097 if (num > num_read_xfer) { 2098 2098 if (__copy_to_user(outp, page_address(sg->page), ··· 2142 2142 SCSI_LOG_TIMEOUT(4, printk("sg_link_reserve: size=%d\n", size)); 2143 2143 rem = size; 2144 2144 2145 - for (k = 0; k < rsv_schp->k_use_sg; ++k, ++sg) { 2145 + for (k = 0; k < rsv_schp->k_use_sg; ++k, sg = sg_next(sg)) { 2146 2146 num = sg->length; 2147 2147 if (rem <= num) { 2148 2148 sfp->save_scat_len = num;
+1
drivers/scsi/stex.c
··· 1123 1123 .this_id = -1, 1124 1124 .sg_tablesize = ST_MAX_SG, 1125 1125 .cmd_per_lun = ST_CMD_PER_LUN, 1126 + .use_sg_chaining = ENABLE_SG_CHAINING, 1126 1127 }; 1127 1128 1128 1129 static int stex_set_dma_mask(struct pci_dev * pdev)
+1
drivers/scsi/sym53c416.c
··· 854 854 .cmd_per_lun = 1, 855 855 .unchecked_isa_dma = 1, 856 856 .use_clustering = ENABLE_CLUSTERING, 857 + .use_sg_chaining = ENABLE_SG_CHAINING, 857 858 }; 858 859 #include "scsi_module.c"
+1
drivers/scsi/sym53c8xx_2/sym_glue.c
··· 1808 1808 .eh_host_reset_handler = sym53c8xx_eh_host_reset_handler, 1809 1809 .this_id = 7, 1810 1810 .use_clustering = ENABLE_CLUSTERING, 1811 + .use_sg_chaining = ENABLE_SG_CHAINING, 1811 1812 .max_sectors = 0xFFFF, 1812 1813 #ifdef SYM_LINUX_PROC_INFO_SUPPORT 1813 1814 .proc_info = sym53c8xx_proc_info,
+2 -1
drivers/scsi/u14-34f.c
··· 450 450 .slave_configure = u14_34f_slave_configure, 451 451 .this_id = 7, 452 452 .unchecked_isa_dma = 1, 453 - .use_clustering = ENABLE_CLUSTERING 453 + .use_clustering = ENABLE_CLUSTERING, 454 + .use_sg_chaining = ENABLE_SG_CHAINING, 454 455 }; 455 456 456 457 #if !defined(__BIG_ENDIAN_BITFIELD) && !defined(__LITTLE_ENDIAN_BITFIELD)
+1
drivers/scsi/ultrastor.c
··· 1197 1197 .cmd_per_lun = ULTRASTOR_MAX_CMDS_PER_LUN, 1198 1198 .unchecked_isa_dma = 1, 1199 1199 .use_clustering = ENABLE_CLUSTERING, 1200 + .use_sg_chaining = ENABLE_SG_CHAINING, 1200 1201 }; 1201 1202 #include "scsi_module.c"
+1
drivers/scsi/wd7000.c
··· 1671 1671 .cmd_per_lun = 1, 1672 1672 .unchecked_isa_dma = 1, 1673 1673 .use_clustering = ENABLE_CLUSTERING, 1674 + .use_sg_chaining = ENABLE_SG_CHAINING, 1674 1675 }; 1675 1676 1676 1677 #include "scsi_module.c"
+10 -6
drivers/usb/storage/alauda.c
··· 798 798 { 799 799 unsigned char *buffer; 800 800 u16 lba, max_lba; 801 - unsigned int page, len, index, offset; 801 + unsigned int page, len, offset; 802 802 unsigned int blockshift = MEDIA_INFO(us).blockshift; 803 803 unsigned int pageshift = MEDIA_INFO(us).pageshift; 804 804 unsigned int blocksize = MEDIA_INFO(us).blocksize; 805 805 unsigned int pagesize = MEDIA_INFO(us).pagesize; 806 806 unsigned int uzonesize = MEDIA_INFO(us).uzonesize; 807 + struct scatterlist *sg; 807 808 int result; 808 809 809 810 /* ··· 828 827 max_lba = MEDIA_INFO(us).capacity >> (blockshift + pageshift); 829 828 830 829 result = USB_STOR_TRANSPORT_GOOD; 831 - index = offset = 0; 830 + offset = 0; 831 + sg = NULL; 832 832 833 833 while (sectors > 0) { 834 834 unsigned int zone = lba / uzonesize; /* integer division */ ··· 875 873 876 874 /* Store the data in the transfer buffer */ 877 875 usb_stor_access_xfer_buf(buffer, len, us->srb, 878 - &index, &offset, TO_XFER_BUF); 876 + &sg, &offset, TO_XFER_BUF); 879 877 880 878 page = 0; 881 879 lba++; ··· 893 891 unsigned int sectors) 894 892 { 895 893 unsigned char *buffer, *blockbuffer; 896 - unsigned int page, len, index, offset; 894 + unsigned int page, len, offset; 897 895 unsigned int blockshift = MEDIA_INFO(us).blockshift; 898 896 unsigned int pageshift = MEDIA_INFO(us).pageshift; 899 897 unsigned int blocksize = MEDIA_INFO(us).blocksize; 900 898 unsigned int pagesize = MEDIA_INFO(us).pagesize; 899 + struct scatterlist *sg; 901 900 u16 lba, max_lba; 902 901 int result; 903 902 ··· 932 929 max_lba = MEDIA_INFO(us).capacity >> (pageshift + blockshift); 933 930 934 931 result = USB_STOR_TRANSPORT_GOOD; 935 - index = offset = 0; 932 + offset = 0; 933 + sg = NULL; 936 934 937 935 while (sectors > 0) { 938 936 /* Write as many sectors as possible in this block */ ··· 950 946 951 947 /* Get the data from the transfer buffer */ 952 948 usb_stor_access_xfer_buf(buffer, len, us->srb, 953 - &index, &offset, FROM_XFER_BUF); 949 + &sg, &offset, 
FROM_XFER_BUF); 954 950 955 951 result = alauda_write_lba(us, lba, page, pages, buffer, 956 952 blockbuffer);
+6 -4
drivers/usb/storage/datafab.c
··· 98 98 unsigned char thistime; 99 99 unsigned int totallen, alloclen; 100 100 int len, result; 101 - unsigned int sg_idx = 0, sg_offset = 0; 101 + unsigned int sg_offset = 0; 102 + struct scatterlist *sg = NULL; 102 103 103 104 // we're working in LBA mode. according to the ATA spec, 104 105 // we can support up to 28-bit addressing. I don't know if Datafab ··· 156 155 157 156 // Store the data in the transfer buffer 158 157 usb_stor_access_xfer_buf(buffer, len, us->srb, 159 - &sg_idx, &sg_offset, TO_XFER_BUF); 158 + &sg, &sg_offset, TO_XFER_BUF); 160 159 161 160 sector += thistime; 162 161 totallen -= len; ··· 182 181 unsigned char thistime; 183 182 unsigned int totallen, alloclen; 184 183 int len, result; 185 - unsigned int sg_idx = 0, sg_offset = 0; 184 + unsigned int sg_offset = 0; 185 + struct scatterlist *sg = NULL; 186 186 187 187 // we're working in LBA mode. according to the ATA spec, 188 188 // we can support up to 28-bit addressing. I don't know if Datafab ··· 219 217 220 218 // Get the data from the transfer buffer 221 219 usb_stor_access_xfer_buf(buffer, len, us->srb, 222 - &sg_idx, &sg_offset, FROM_XFER_BUF); 220 + &sg, &sg_offset, FROM_XFER_BUF); 223 221 224 222 command[0] = 0; 225 223 command[1] = thistime;
+6 -4
drivers/usb/storage/jumpshot.c
··· 119 119 unsigned char thistime; 120 120 unsigned int totallen, alloclen; 121 121 int len, result; 122 - unsigned int sg_idx = 0, sg_offset = 0; 122 + unsigned int sg_offset = 0; 123 + struct scatterlist *sg = NULL; 123 124 124 125 // we're working in LBA mode. according to the ATA spec, 125 126 // we can support up to 28-bit addressing. I don't know if Jumpshot ··· 171 170 172 171 // Store the data in the transfer buffer 173 172 usb_stor_access_xfer_buf(buffer, len, us->srb, 174 - &sg_idx, &sg_offset, TO_XFER_BUF); 173 + &sg, &sg_offset, TO_XFER_BUF); 175 174 176 175 sector += thistime; 177 176 totallen -= len; ··· 196 195 unsigned char thistime; 197 196 unsigned int totallen, alloclen; 198 197 int len, result, waitcount; 199 - unsigned int sg_idx = 0, sg_offset = 0; 198 + unsigned int sg_offset = 0; 199 + struct scatterlist *sg = NULL; 200 200 201 201 // we're working in LBA mode. according to the ATA spec, 202 202 // we can support up to 28-bit addressing. I don't know if Jumpshot ··· 227 225 228 226 // Get the data from the transfer buffer 229 227 usb_stor_access_xfer_buf(buffer, len, us->srb, 230 - &sg_idx, &sg_offset, FROM_XFER_BUF); 228 + &sg, &sg_offset, FROM_XFER_BUF); 231 229 232 230 command[0] = 0; 233 231 command[1] = thistime;
+11 -9
drivers/usb/storage/protocol.c
··· 157 157 * pick up from where this one left off. */ 158 158 159 159 unsigned int usb_stor_access_xfer_buf(unsigned char *buffer, 160 - unsigned int buflen, struct scsi_cmnd *srb, unsigned int *index, 160 + unsigned int buflen, struct scsi_cmnd *srb, struct scatterlist **sgptr, 161 161 unsigned int *offset, enum xfer_buf_dir dir) 162 162 { 163 163 unsigned int cnt; ··· 184 184 * located in high memory -- then kmap() will map it to a temporary 185 185 * position in the kernel's virtual address space. */ 186 186 } else { 187 - struct scatterlist *sg = 188 - (struct scatterlist *) srb->request_buffer 189 - + *index; 187 + struct scatterlist *sg = *sgptr; 188 + 189 + if (!sg) 190 + sg = (struct scatterlist *) srb->request_buffer; 190 191 191 192 /* This loop handles a single s-g list entry, which may 192 193 * include multiple pages. Find the initial page structure 193 194 * and the starting offset within the page, and update 194 195 * the *offset and *index values for the next loop. */ 195 196 cnt = 0; 196 - while (cnt < buflen && *index < srb->use_sg) { 197 + while (cnt < buflen) { 197 198 struct page *page = sg->page + 198 199 ((sg->offset + *offset) >> PAGE_SHIFT); 199 200 unsigned int poff = ··· 210 209 211 210 /* Transfer continues to next s-g entry */ 212 211 *offset = 0; 213 - ++*index; 214 - ++sg; 212 + sg = sg_next(sg); 215 213 } 216 214 217 215 /* Transfer the data for all the pages in this ··· 234 234 sglen -= plen; 235 235 } 236 236 } 237 + *sgptr = sg; 237 238 } 238 239 239 240 /* Return the amount actually transferred */ ··· 246 245 void usb_stor_set_xfer_buf(unsigned char *buffer, 247 246 unsigned int buflen, struct scsi_cmnd *srb) 248 247 { 249 - unsigned int index = 0, offset = 0; 248 + unsigned int offset = 0; 249 + struct scatterlist *sg = NULL; 250 250 251 - usb_stor_access_xfer_buf(buffer, buflen, srb, &index, &offset, 251 + usb_stor_access_xfer_buf(buffer, buflen, srb, &sg, &offset, 252 252 TO_XFER_BUF); 253 253 if (buflen < 
srb->request_bufflen) 254 254 srb->resid = srb->request_bufflen - buflen;
+1 -1
drivers/usb/storage/protocol.h
··· 52 52 enum xfer_buf_dir {TO_XFER_BUF, FROM_XFER_BUF}; 53 53 54 54 extern unsigned int usb_stor_access_xfer_buf(unsigned char *buffer, 55 - unsigned int buflen, struct scsi_cmnd *srb, unsigned int *index, 55 + unsigned int buflen, struct scsi_cmnd *srb, struct scatterlist **, 56 56 unsigned int *offset, enum xfer_buf_dir dir); 57 57 58 58 extern void usb_stor_set_xfer_buf(unsigned char *buffer,
+10 -6
drivers/usb/storage/sddr09.c
··· 705 705 unsigned char *buffer; 706 706 unsigned int lba, maxlba, pba; 707 707 unsigned int page, pages; 708 - unsigned int len, index, offset; 708 + unsigned int len, offset; 709 + struct scatterlist *sg; 709 710 int result; 710 711 711 712 // Figure out the initial LBA and page ··· 731 730 // contiguous LBA's. Another exercise left to the student. 732 731 733 732 result = 0; 734 - index = offset = 0; 733 + offset = 0; 734 + sg = NULL; 735 735 736 736 while (sectors > 0) { 737 737 ··· 779 777 780 778 // Store the data in the transfer buffer 781 779 usb_stor_access_xfer_buf(buffer, len, us->srb, 782 - &index, &offset, TO_XFER_BUF); 780 + &sg, &offset, TO_XFER_BUF); 783 781 784 782 page = 0; 785 783 lba++; ··· 933 931 unsigned int pagelen, blocklen; 934 932 unsigned char *blockbuffer; 935 933 unsigned char *buffer; 936 - unsigned int len, index, offset; 934 + unsigned int len, offset; 935 + struct scatterlist *sg; 937 936 int result; 938 937 939 938 // Figure out the initial LBA and page ··· 971 968 } 972 969 973 970 result = 0; 974 - index = offset = 0; 971 + offset = 0; 972 + sg = NULL; 975 973 976 974 while (sectors > 0) { 977 975 ··· 991 987 992 988 // Get the data from the transfer buffer 993 989 usb_stor_access_xfer_buf(buffer, len, us->srb, 994 - &index, &offset, FROM_XFER_BUF); 990 + &sg, &offset, FROM_XFER_BUF); 995 991 996 992 result = sddr09_write_lba(us, lba, page, pages, 997 993 buffer, blockbuffer);
+10 -6
drivers/usb/storage/sddr55.c
··· 167 167 unsigned long address; 168 168 169 169 unsigned short pages; 170 - unsigned int len, index, offset; 170 + unsigned int len, offset; 171 + struct scatterlist *sg; 171 172 172 173 // Since we only read in one block at a time, we have to create 173 174 // a bounce buffer and move the data a piece at a time between the ··· 179 178 buffer = kmalloc(len, GFP_NOIO); 180 179 if (buffer == NULL) 181 180 return USB_STOR_TRANSPORT_ERROR; /* out of memory */ 182 - index = offset = 0; 181 + offset = 0; 182 + sg = NULL; 183 183 184 184 while (sectors>0) { 185 185 ··· 257 255 258 256 // Store the data in the transfer buffer 259 257 usb_stor_access_xfer_buf(buffer, len, us->srb, 260 - &index, &offset, TO_XFER_BUF); 258 + &sg, &offset, TO_XFER_BUF); 261 259 262 260 page = 0; 263 261 lba++; ··· 289 287 290 288 unsigned short pages; 291 289 int i; 292 - unsigned int len, index, offset; 290 + unsigned int len, offset; 291 + struct scatterlist *sg; 293 292 294 293 /* check if we are allowed to write */ 295 294 if (info->read_only || info->force_read_only) { ··· 307 304 buffer = kmalloc(len, GFP_NOIO); 308 305 if (buffer == NULL) 309 306 return USB_STOR_TRANSPORT_ERROR; 310 - index = offset = 0; 307 + offset = 0; 308 + sg = NULL; 311 309 312 310 while (sectors > 0) { 313 311 ··· 326 322 327 323 // Get the data from the transfer buffer 328 324 usb_stor_access_xfer_buf(buffer, len, us->srb, 329 - &index, &offset, FROM_XFER_BUF); 325 + &sg, &offset, FROM_XFER_BUF); 330 326 331 327 US_DEBUGP("Write %02X pages, to PBA %04X" 332 328 " (LBA %04X) page %02X\n",
+8 -9
drivers/usb/storage/shuttle_usbat.c
··· 993 993 unsigned char thistime; 994 994 unsigned int totallen, alloclen; 995 995 int len, result; 996 - unsigned int sg_idx = 0, sg_offset = 0; 996 + unsigned int sg_offset = 0; 997 + struct scatterlist *sg = NULL; 997 998 998 999 result = usbat_flash_check_media(us, info); 999 1000 if (result != USB_STOR_TRANSPORT_GOOD) ··· 1048 1047 1049 1048 /* Store the data in the transfer buffer */ 1050 1049 usb_stor_access_xfer_buf(buffer, len, us->srb, 1051 - &sg_idx, &sg_offset, TO_XFER_BUF); 1050 + &sg, &sg_offset, TO_XFER_BUF); 1052 1051 1053 1052 sector += thistime; 1054 1053 totallen -= len; ··· 1084 1083 unsigned char thistime; 1085 1084 unsigned int totallen, alloclen; 1086 1085 int len, result; 1087 - unsigned int sg_idx = 0, sg_offset = 0; 1086 + unsigned int sg_offset = 0; 1087 + struct scatterlist *sg = NULL; 1088 1088 1089 1089 result = usbat_flash_check_media(us, info); 1090 1090 if (result != USB_STOR_TRANSPORT_GOOD) ··· 1124 1122 1125 1123 /* Get the data from the transfer buffer */ 1126 1124 usb_stor_access_xfer_buf(buffer, len, us->srb, 1127 - &sg_idx, &sg_offset, FROM_XFER_BUF); 1125 + &sg, &sg_offset, FROM_XFER_BUF); 1128 1126 1129 1127 /* ATA command 0x30 (WRITE SECTORS) */ 1130 1128 usbat_pack_ata_sector_cmd(command, thistime, sector, 0x30); ··· 1164 1162 unsigned char *buffer; 1165 1163 unsigned int len; 1166 1164 unsigned int sector; 1167 - unsigned int sg_segment = 0; 1168 1165 unsigned int sg_offset = 0; 1166 + struct scatterlist *sg = NULL; 1169 1167 1170 1168 US_DEBUGP("handle_read10: transfersize %d\n", 1171 1169 srb->transfersize); ··· 1222 1220 sector |= short_pack(data[7+5], data[7+4]); 1223 1221 transferred = 0; 1224 1222 1225 - sg_segment = 0; /* for keeping track of where we are in */ 1226 - sg_offset = 0; /* the scatter/gather list */ 1227 - 1228 1223 while (transferred != srb->request_bufflen) { 1229 1224 1230 1225 if (len > srb->request_bufflen - transferred) ··· 1254 1255 1255 1256 /* Store the data in the transfer buffer */ 1256 
1257 usb_stor_access_xfer_buf(buffer, len, srb, 1257 - &sg_segment, &sg_offset, TO_XFER_BUF); 1258 + &sg, &sg_offset, TO_XFER_BUF); 1258 1259 1259 1260 /* Update the amount transferred and the sector number */ 1260 1261
+7 -16
fs/bio.c
··· 109 109 110 110 void bio_free(struct bio *bio, struct bio_set *bio_set) 111 111 { 112 - const int pool_idx = BIO_POOL_IDX(bio); 112 + if (bio->bi_io_vec) { 113 + const int pool_idx = BIO_POOL_IDX(bio); 113 114 114 - BIO_BUG_ON(pool_idx >= BIOVEC_NR_POOLS); 115 + BIO_BUG_ON(pool_idx >= BIOVEC_NR_POOLS); 115 116 116 - mempool_free(bio->bi_io_vec, bio_set->bvec_pools[pool_idx]); 117 + mempool_free(bio->bi_io_vec, bio_set->bvec_pools[pool_idx]); 118 + } 119 + 117 120 mempool_free(bio, bio_set->bio_pool); 118 121 } 119 122 ··· 130 127 131 128 void bio_init(struct bio *bio) 132 129 { 133 - bio->bi_next = NULL; 134 - bio->bi_bdev = NULL; 130 + memset(bio, 0, sizeof(*bio)); 135 131 bio->bi_flags = 1 << BIO_UPTODATE; 136 - bio->bi_rw = 0; 137 - bio->bi_vcnt = 0; 138 - bio->bi_idx = 0; 139 - bio->bi_phys_segments = 0; 140 - bio->bi_hw_segments = 0; 141 - bio->bi_hw_front_size = 0; 142 - bio->bi_hw_back_size = 0; 143 - bio->bi_size = 0; 144 - bio->bi_max_vecs = 0; 145 - bio->bi_end_io = NULL; 146 132 atomic_set(&bio->bi_cnt, 1); 147 - bio->bi_private = NULL; 148 133 } 149 134 150 135 /**
+1 -1
fs/splice.c
··· 1335 1335 if (copy_to_user(sd->u.userptr, src + buf->offset, sd->len)) 1336 1336 ret = -EFAULT; 1337 1337 1338 + buf->ops->unmap(pipe, buf, src); 1338 1339 out: 1339 1340 if (ret > 0) 1340 1341 sd->u.userptr += ret; 1341 - buf->ops->unmap(pipe, buf, src); 1342 1342 return ret; 1343 1343 } 1344 1344
+1 -1
include/asm-ia64/dma-mapping.h
··· 6 6 * David Mosberger-Tang <davidm@hpl.hp.com> 7 7 */ 8 8 #include <asm/machvec.h> 9 - #include <asm/scatterlist.h> 9 + #include <linux/scatterlist.h> 10 10 11 11 #define dma_alloc_coherent platform_dma_alloc_coherent 12 12 /* coherent mem. is cheap */
+2
include/asm-ia64/scatterlist.h
··· 30 30 #define sg_dma_len(sg) ((sg)->dma_length) 31 31 #define sg_dma_address(sg) ((sg)->dma_address) 32 32 33 + #define ARCH_HAS_SG_CHAIN 34 + 33 35 #endif /* _ASM_IA64_SCATTERLIST_H */
+9 -149
include/asm-powerpc/dma-mapping.h
··· 6 6 */ 7 7 #ifndef _ASM_DMA_MAPPING_H 8 8 #define _ASM_DMA_MAPPING_H 9 - #ifdef __KERNEL__ 10 - 11 - #include <linux/types.h> 12 - #include <linux/cache.h> 13 - /* need struct page definitions */ 14 - #include <linux/mm.h> 15 - #include <asm/scatterlist.h> 16 - #include <asm/io.h> 17 - 18 - #define DMA_ERROR_CODE (~(dma_addr_t)0x0) 19 - 20 - #ifdef CONFIG_NOT_COHERENT_CACHE 21 - /* 22 - * DMA-consistent mapping functions for PowerPCs that don't support 23 - * cache snooping. These allocate/free a region of uncached mapped 24 - * memory space for use with DMA devices. Alternatively, you could 25 - * allocate the space "normally" and use the cache management functions 26 - * to ensure it is consistent. 27 - */ 28 - extern void *__dma_alloc_coherent(size_t size, dma_addr_t *handle, gfp_t gfp); 29 - extern void __dma_free_coherent(size_t size, void *vaddr); 30 - extern void __dma_sync(void *vaddr, size_t size, int direction); 31 - extern void __dma_sync_page(struct page *page, unsigned long offset, 32 - size_t size, int direction); 33 - 34 - #else /* ! CONFIG_NOT_COHERENT_CACHE */ 35 - /* 36 - * Cache coherent cores. 37 - */ 38 - 39 - #define __dma_alloc_coherent(gfp, size, handle) NULL 40 - #define __dma_free_coherent(size, addr) ((void)0) 41 - #define __dma_sync(addr, size, rw) ((void)0) 42 - #define __dma_sync_page(pg, off, sz, rw) ((void)0) 43 - 44 - #endif /* ! CONFIG_NOT_COHERENT_CACHE */ 45 - 46 - #ifdef CONFIG_PPC64 47 - /* 48 - * DMA operations are abstracted for G5 vs. i/pSeries, PCI vs. 
VIO 49 - */ 50 - struct dma_mapping_ops { 51 - void * (*alloc_coherent)(struct device *dev, size_t size, 52 - dma_addr_t *dma_handle, gfp_t flag); 53 - void (*free_coherent)(struct device *dev, size_t size, 54 - void *vaddr, dma_addr_t dma_handle); 55 - dma_addr_t (*map_single)(struct device *dev, void *ptr, 56 - size_t size, enum dma_data_direction direction); 57 - void (*unmap_single)(struct device *dev, dma_addr_t dma_addr, 58 - size_t size, enum dma_data_direction direction); 59 - int (*map_sg)(struct device *dev, struct scatterlist *sg, 60 - int nents, enum dma_data_direction direction); 61 - void (*unmap_sg)(struct device *dev, struct scatterlist *sg, 62 - int nents, enum dma_data_direction direction); 63 - int (*dma_supported)(struct device *dev, u64 mask); 64 - int (*set_dma_mask)(struct device *dev, u64 dma_mask); 65 - }; 66 - 67 - static inline struct dma_mapping_ops *get_dma_ops(struct device *dev) 68 - { 69 - /* We don't handle the NULL dev case for ISA for now. We could 70 - * do it via an out of line call but it is not needed for now. The 71 - * only ISA DMA device we support is the floppy and we have a hack 72 - * in the floppy driver directly to get a device for us. 
73 - */ 74 - if (unlikely(dev == NULL || dev->archdata.dma_ops == NULL)) 75 - return NULL; 76 - return dev->archdata.dma_ops; 77 - } 78 - 79 - static inline int dma_supported(struct device *dev, u64 mask) 80 - { 81 - struct dma_mapping_ops *dma_ops = get_dma_ops(dev); 82 - 83 - if (unlikely(dma_ops == NULL)) 84 - return 0; 85 - if (dma_ops->dma_supported == NULL) 86 - return 1; 87 - return dma_ops->dma_supported(dev, mask); 88 - } 89 - 90 - static inline int dma_set_mask(struct device *dev, u64 dma_mask) 91 - { 92 - struct dma_mapping_ops *dma_ops = get_dma_ops(dev); 93 - 94 - if (unlikely(dma_ops == NULL)) 95 - return -EIO; 96 - if (dma_ops->set_dma_mask != NULL) 97 - return dma_ops->set_dma_mask(dev, dma_mask); 98 - if (!dev->dma_mask || !dma_supported(dev, dma_mask)) 99 - return -EIO; 100 - *dev->dma_mask = dma_mask; 101 - return 0; 102 - } 103 - 104 - static inline void *dma_alloc_coherent(struct device *dev, size_t size, 105 - dma_addr_t *dma_handle, gfp_t flag) 106 - { 107 - struct dma_mapping_ops *dma_ops = get_dma_ops(dev); 108 - 109 - BUG_ON(!dma_ops); 110 - return dma_ops->alloc_coherent(dev, size, dma_handle, flag); 111 - } 112 - 113 - static inline void dma_free_coherent(struct device *dev, size_t size, 114 - void *cpu_addr, dma_addr_t dma_handle) 115 - { 116 - struct dma_mapping_ops *dma_ops = get_dma_ops(dev); 117 - 118 - BUG_ON(!dma_ops); 119 - dma_ops->free_coherent(dev, size, cpu_addr, dma_handle); 120 - } 121 - 122 - static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr, 123 - size_t size, 124 - enum dma_data_direction direction) 125 - { 126 - struct dma_mapping_ops *dma_ops = get_dma_ops(dev); 127 - 128 - BUG_ON(!dma_ops); 129 - return dma_ops->map_single(dev, cpu_addr, size, direction); 130 - } 131 - 132 - static inline void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, 133 - size_t size, 134 - enum dma_data_direction direction) 135 - { 136 - struct dma_mapping_ops *dma_ops = get_dma_ops(dev); 137 - 138 - 
BUG_ON(!dma_ops); 139 - dma_ops->unmap_single(dev, dma_addr, size, direction); 140 - } 141 - 142 - static inline dma_addr_t dma_map_page(struct device *dev, struct page *page, 143 - unsigned long offset, size_t size, 144 - enum dma_data_direction direction) 145 - { 146 - struct dma_mapping_ops *dma_ops = get_dma_ops(dev); 147 - 148 - BUG_ON(!dma_ops); 149 - return dma_ops->map_single(dev, page_address(page) + offset, size, 150 - direction); 151 - } 152 9 153 10 static inline void dma_unmap_page(struct device *dev, dma_addr_t dma_address, 154 11 size_t size, ··· 133 276 } 134 277 135 278 static inline int 136 - dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, 279 + dma_map_sg(struct device *dev, struct scatterlist *sgl, int nents, 137 280 enum dma_data_direction direction) 138 281 { 282 + struct scatterlist *sg; 139 283 int i; 140 284 141 285 BUG_ON(direction == DMA_NONE); 142 286 143 - for (i = 0; i < nents; i++, sg++) { 287 + for_each_sg(sgl, sg, nents, i) { 144 288 BUG_ON(!sg->page); 145 289 __dma_sync_page(sg->page, sg->offset, sg->length, direction); 146 290 sg->dma_address = page_to_bus(sg->page) + sg->offset; ··· 176 318 } 177 319 178 320 static inline void dma_sync_sg_for_cpu(struct device *dev, 179 - struct scatterlist *sg, int nents, 321 + struct scatterlist *sgl, int nents, 180 322 enum dma_data_direction direction) 181 323 { 324 + struct scatterlist *sg; 182 325 int i; 183 326 184 327 BUG_ON(direction == DMA_NONE); 185 328 186 - for (i = 0; i < nents; i++, sg++) 329 + for_each_sg(sgl, sg, nents, i) 187 330 __dma_sync_page(sg->page, sg->offset, sg->length, direction); 188 331 } 189 332 190 333 static inline void dma_sync_sg_for_device(struct device *dev, 191 - struct scatterlist *sg, int nents, 334 + struct scatterlist *sgl, int nents, 192 335 enum dma_data_direction direction) 193 336 { 337 + struct scatterlist *sg; 194 338 int i; 195 339 196 340 BUG_ON(direction == DMA_NONE); 197 341 198 - for (i = 0; i < nents; i++, sg++) 342 + 
for_each_sg(sgl, sg, nents, i) 199 343 __dma_sync_page(sg->page, sg->offset, sg->length, direction); 200 344 } 201 345
+2
include/asm-powerpc/scatterlist.h
··· 41 41 #define ISA_DMA_THRESHOLD (~0UL) 42 42 #endif 43 43 44 + #define ARCH_HAS_SG_CHAIN 45 + 44 46 #endif /* __KERNEL__ */ 45 47 #endif /* _ASM_POWERPC_SCATTERLIST_H */
+2
include/asm-sparc/scatterlist.h
··· 19 19 20 20 #define ISA_DMA_THRESHOLD (~0UL) 21 21 22 + #define ARCH_HAS_SG_CHAIN 23 + 22 24 #endif /* !(_SPARC_SCATTERLIST_H) */
+2
include/asm-sparc64/scatterlist.h
··· 20 20 21 21 #define ISA_DMA_THRESHOLD (~0UL) 22 22 23 + #define ARCH_HAS_SG_CHAIN 24 + 23 25 #endif /* !(_SPARC64_SCATTERLIST_H) */
+7 -6
include/asm-x86/dma-mapping_32.h
··· 2 2 #define _ASM_I386_DMA_MAPPING_H 3 3 4 4 #include <linux/mm.h> 5 + #include <linux/scatterlist.h> 5 6 6 7 #include <asm/cache.h> 7 8 #include <asm/io.h> 8 - #include <asm/scatterlist.h> 9 9 #include <asm/bug.h> 10 10 11 11 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f) ··· 35 35 } 36 36 37 37 static inline int 38 - dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, 38 + dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents, 39 39 enum dma_data_direction direction) 40 40 { 41 + struct scatterlist *sg; 41 42 int i; 42 43 43 44 BUG_ON(!valid_dma_direction(direction)); 44 - WARN_ON(nents == 0 || sg[0].length == 0); 45 + WARN_ON(nents == 0 || sglist[0].length == 0); 45 46 46 - for (i = 0; i < nents; i++ ) { 47 - BUG_ON(!sg[i].page); 47 + for_each_sg(sglist, sg, nents, i) { 48 + BUG_ON(!sg->page); 48 49 49 - sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset; 50 + sg->dma_address = page_to_phys(sg->page) + sg->offset; 50 51 } 51 52 52 53 flush_write_buffers();
+1 -2
include/asm-x86/dma-mapping_64.h
··· 6 6 * documentation. 7 7 */ 8 8 9 - 10 - #include <asm/scatterlist.h> 9 + #include <linux/scatterlist.h> 11 10 #include <asm/io.h> 12 11 #include <asm/swiotlb.h> 13 12
+2
include/asm-x86/scatterlist_32.h
··· 10 10 unsigned int length; 11 11 }; 12 12 13 + #define ARCH_HAS_SG_CHAIN 14 + 13 15 /* These macros should be used after a pci_map_sg call has been done 14 16 * to get bus addresses of each of the SG entries and their lengths. 15 17 * You should only work with the number of sg entries pci_map_sg
+2
include/asm-x86/scatterlist_64.h
··· 11 11 unsigned int dma_length; 12 12 }; 13 13 14 + #define ARCH_HAS_SG_CHAIN 15 + 14 16 #define ISA_DMA_THRESHOLD (0x00ffffff) 15 17 16 18 /* These macros should be used after a pci_map_sg call has been done
+17 -2
include/linux/bio.h
··· 176 176 #define bio_offset(bio) bio_iovec((bio))->bv_offset 177 177 #define bio_segments(bio) ((bio)->bi_vcnt - (bio)->bi_idx) 178 178 #define bio_sectors(bio) ((bio)->bi_size >> 9) 179 - #define bio_cur_sectors(bio) (bio_iovec(bio)->bv_len >> 9) 180 - #define bio_data(bio) (page_address(bio_page((bio))) + bio_offset((bio))) 181 179 #define bio_barrier(bio) ((bio)->bi_rw & (1 << BIO_RW_BARRIER)) 182 180 #define bio_sync(bio) ((bio)->bi_rw & (1 << BIO_RW_SYNC)) 183 181 #define bio_failfast(bio) ((bio)->bi_rw & (1 << BIO_RW_FAILFAST)) 184 182 #define bio_rw_ahead(bio) ((bio)->bi_rw & (1 << BIO_RW_AHEAD)) 185 183 #define bio_rw_meta(bio) ((bio)->bi_rw & (1 << BIO_RW_META)) 184 + #define bio_empty_barrier(bio) (bio_barrier(bio) && !(bio)->bi_size) 185 + 186 + static inline unsigned int bio_cur_sectors(struct bio *bio) 187 + { 188 + if (bio->bi_vcnt) 189 + return bio_iovec(bio)->bv_len >> 9; 190 + 191 + return 0; 192 + } 193 + 194 + static inline void *bio_data(struct bio *bio) 195 + { 196 + if (bio->bi_vcnt) 197 + return page_address(bio_page(bio)) + bio_offset(bio); 198 + 199 + return NULL; 200 + } 186 201 187 202 /* 188 203 * will die
+4 -4
include/linux/blkdev.h
··· 330 330 331 331 struct bio_vec; 332 332 typedef int (merge_bvec_fn) (struct request_queue *, struct bio *, struct bio_vec *); 333 - typedef int (issue_flush_fn) (struct request_queue *, struct gendisk *, sector_t *); 334 333 typedef void (prepare_flush_fn) (struct request_queue *, struct request *); 335 334 typedef void (softirq_done_fn)(struct request *); 336 335 ··· 367 368 prep_rq_fn *prep_rq_fn; 368 369 unplug_fn *unplug_fn; 369 370 merge_bvec_fn *merge_bvec_fn; 370 - issue_flush_fn *issue_flush_fn; 371 371 prepare_flush_fn *prepare_flush_fn; 372 372 softirq_done_fn *softirq_done_fn; 373 373 ··· 538 540 #define blk_barrier_rq(rq) ((rq)->cmd_flags & REQ_HARDBARRIER) 539 541 #define blk_fua_rq(rq) ((rq)->cmd_flags & REQ_FUA) 540 542 #define blk_bidi_rq(rq) ((rq)->next_rq != NULL) 543 + #define blk_empty_barrier(rq) (blk_barrier_rq(rq) && blk_fs_request(rq) && !(rq)->hard_nr_sectors) 541 544 542 545 #define list_entry_rq(ptr) list_entry((ptr), struct request, queuelist) 543 546 ··· 728 729 extern int end_that_request_first(struct request *, int, int); 729 730 extern int end_that_request_chunk(struct request *, int, int); 730 731 extern void end_that_request_last(struct request *, int); 731 - extern void end_request(struct request *req, int uptodate); 732 + extern void end_request(struct request *, int); 733 + extern void end_queued_request(struct request *, int); 734 + extern void end_dequeued_request(struct request *, int); 732 735 extern void blk_complete_request(struct request *); 733 736 734 737 /* ··· 768 767 extern void blk_queue_softirq_done(struct request_queue *, softirq_done_fn *); 769 768 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev); 770 769 extern int blk_queue_ordered(struct request_queue *, unsigned, prepare_flush_fn *); 771 - extern void blk_queue_issue_flush_fn(struct request_queue *, issue_flush_fn *); 772 770 extern int blk_do_ordered(struct request_queue *, struct request **); 773 771 extern unsigned 
blk_ordered_cur_seq(struct request_queue *); 774 772 extern unsigned blk_ordered_req_seq(struct request *);
+2 -1
include/linux/i2o.h
··· 32 32 #include <linux/workqueue.h> /* work_struct */ 33 33 #include <linux/mempool.h> 34 34 #include <linux/mutex.h> 35 + #include <linux/scatterlist.h> 35 36 36 37 #include <asm/io.h> 37 38 #include <asm/semaphore.h> /* Needed for MUTEX init macros */ ··· 838 837 if ((sizeof(dma_addr_t) > 4) && c->pae_support) 839 838 *mptr++ = cpu_to_le32(i2o_dma_high(sg_dma_address(sg))); 840 839 #endif 841 - sg++; 840 + sg = sg_next(sg); 842 841 } 843 842 *sg_ptr = mptr; 844 843
+1 -6
include/linux/ide.h
··· 772 772 773 773 unsigned int nsect; 774 774 unsigned int nleft; 775 - unsigned int cursg; 775 + struct scatterlist *cursg; 776 776 unsigned int cursg_ofs; 777 777 778 778 int rqsize; /* max sectors per request */ ··· 1091 1091 * This function is intended to be used prior to invoking ide_do_drive_cmd(). 1092 1092 */ 1093 1093 extern void ide_init_drive_cmd (struct request *rq); 1094 - 1095 - /* 1096 - * this function returns error location sector offset in case of a write error 1097 - */ 1098 - extern u64 ide_get_error_location(ide_drive_t *, char *); 1099 1094 1100 1095 /* 1101 1096 * "action" parameter type for ide_do_drive_cmd() below.
+10 -6
include/linux/libata.h
··· 29 29 #include <linux/delay.h> 30 30 #include <linux/interrupt.h> 31 31 #include <linux/dma-mapping.h> 32 - #include <asm/scatterlist.h> 32 + #include <linux/scatterlist.h> 33 33 #include <linux/io.h> 34 34 #include <linux/ata.h> 35 35 #include <linux/workqueue.h> ··· 416 416 unsigned long flags; /* ATA_QCFLAG_xxx */ 417 417 unsigned int tag; 418 418 unsigned int n_elem; 419 + unsigned int n_iter; 419 420 unsigned int orig_n_elem; 420 421 421 422 int dma_dir; ··· 427 426 unsigned int nbytes; 428 427 unsigned int curbytes; 429 428 430 - unsigned int cursg; 429 + struct scatterlist *cursg; 431 430 unsigned int cursg_ofs; 432 431 433 432 struct scatterlist sgent; ··· 1044 1043 return 1; 1045 1044 if (qc->pad_len) 1046 1045 return 0; 1047 - if (((sg - qc->__sg) + 1) == qc->n_elem) 1046 + if (qc->n_iter == qc->n_elem) 1048 1047 return 1; 1049 1048 return 0; 1050 1049 } ··· 1052 1051 static inline struct scatterlist * 1053 1052 ata_qc_first_sg(struct ata_queued_cmd *qc) 1054 1053 { 1054 + qc->n_iter = 0; 1055 1055 if (qc->n_elem) 1056 1056 return qc->__sg; 1057 1057 if (qc->pad_len) ··· 1065 1063 { 1066 1064 if (sg == &qc->pad_sgent) 1067 1065 return NULL; 1068 - if (++sg - qc->__sg < qc->n_elem) 1069 - return sg; 1066 + if (++qc->n_iter < qc->n_elem) 1067 + return sg_next(sg); 1070 1068 if (qc->pad_len) 1071 1069 return &qc->pad_sgent; 1072 1070 return NULL; ··· 1311 1309 qc->dma_dir = DMA_NONE; 1312 1310 qc->__sg = NULL; 1313 1311 qc->flags = 0; 1314 - qc->cursg = qc->cursg_ofs = 0; 1312 + qc->cursg = NULL; 1313 + qc->cursg_ofs = 0; 1315 1314 qc->nbytes = qc->curbytes = 0; 1316 1315 qc->n_elem = 0; 1316 + qc->n_iter = 0; 1317 1317 qc->err_mask = 0; 1318 1318 qc->pad_len = 0; 1319 1319 qc->sect_size = ATA_SECT_SIZE;
+84
include/linux/scatterlist.h
··· 20 20 	sg_set_buf(sg, buf, buflen);
21 21 }
22 22 
23 + /*
24 +  * We overload the LSB of the page pointer to indicate whether it's
25 +  * a valid sg entry, or whether it points to the start of a new scatterlist.
26 +  * Those low bits are there for everyone! (thanks mason :-)
27 +  */
28 + #define sg_is_chain(sg)		((unsigned long) (sg)->page & 0x01)
29 + #define sg_chain_ptr(sg)	\
30 + 	((struct scatterlist *) ((unsigned long) (sg)->page & ~0x01))
31 + 
32 + /**
33 +  * sg_next - return the next scatterlist entry in a list
34 +  * @sg:		The current sg entry
35 +  *
36 +  * Usually the next entry will be @sg@ + 1, but if this sg element is part
37 +  * of a chained scatterlist, it could jump to the start of a new
38 +  * scatterlist array.
39 +  *
40 +  * Note that the caller must ensure that there are further entries after
41 +  * the current entry; this function will NOT return NULL for an end-of-list.
42 +  *
43 +  */
44 + static inline struct scatterlist *sg_next(struct scatterlist *sg)
45 + {
46 + 	sg++;
47 + 
48 + 	if (unlikely(sg_is_chain(sg)))
49 + 		sg = sg_chain_ptr(sg);
50 + 
51 + 	return sg;
52 + }
53 + 
54 + /*
55 +  * Loop over each sg element, following the pointer to a new list if necessary
56 +  */
57 + #define for_each_sg(sglist, sg, nr, __i)	\
58 + 	for (__i = 0, sg = (sglist); __i < (nr); __i++, sg = sg_next(sg))
59 + 
60 + /**
61 +  * sg_last - return the last scatterlist entry in a list
62 +  * @sgl:	First entry in the scatterlist
63 +  * @nents:	Number of entries in the scatterlist
64 +  *
65 +  * Should only be used casually; it (currently) scans the entire list
66 +  * to get the last entry.
67 +  *
68 +  * Note that the @sgl@ pointer passed in need not be the first one;
69 +  * the important bit is that @nents@ denotes the number of entries that
70 +  * exist from @sgl@.
71 +  *
72 +  */
73 + static inline struct scatterlist *sg_last(struct scatterlist *sgl,
74 + 					  unsigned int nents)
75 + {
76 + #ifndef ARCH_HAS_SG_CHAIN
77 + 	struct scatterlist *ret = &sgl[nents - 1];
78 + #else
79 + 	struct scatterlist *sg, *ret = NULL;
80 + 	int i;
81 + 
82 + 	for_each_sg(sgl, sg, nents, i)
83 + 		ret = sg;
84 + 
85 + #endif
86 + 	return ret;
87 + }
88 + 
89 + /**
90 +  * sg_chain - Chain two sglists together
91 +  * @prv:	First scatterlist
92 +  * @prv_nents:	Number of entries in prv
93 +  * @sgl:	Second scatterlist
94 +  *
95 +  * Links @prv@ and @sgl@ together, to form a longer scatterlist.
96 +  *
97 +  */
98 + static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
99 + 			    struct scatterlist *sgl)
100 + {
101 + #ifndef ARCH_HAS_SG_CHAIN
102 + 	BUG();
103 + #endif
104 + 	prv[prv_nents - 1].page = (struct page *) ((unsigned long) sgl | 0x01);
105 + }
106 + 
23 107 #endif /* _LINUX_SCATTERLIST_H */
-7
include/scsi/scsi.h
··· 11 11 #include <linux/types.h>
12 12 
13 13 /*
14 -  * The maximum sg list length SCSI can cope with
15 -  * (currently must be a power of 2 between 32 and 256)
16 -  */
17 - #define SCSI_MAX_PHYS_SEGMENTS	MAX_PHYS_SEGMENTS
18 - 
19 - 
20 - /*
21 14  * SCSI command lengths
22 15  */
23 16 
+4 -3
include/scsi/scsi_cmnd.h
··· 5 5 #include <linux/list.h>
6 6 #include <linux/types.h>
7 7 #include <linux/timer.h>
8 + #include <linux/scatterlist.h>
8 9 
9 10 struct request;
10 11 struct scatterlist;
··· 69 68 
70 69 	/* These elements define the operation we ultimately want to perform */
71 70 	unsigned short use_sg;	/* Number of pieces of scatter-gather */
72 - 	unsigned short sglist_len;	/* size of malloc'd scatter-gather list */
71 + 	unsigned short __use_sg;
73 72 
74 73 	unsigned underflow;	/* Return error if less than
75 74 				   this amount is transferred */
··· 129 128 extern void scsi_kunmap_atomic_sg(void *virt);
130 129 
131 130 extern struct scatterlist *scsi_alloc_sgtable(struct scsi_cmnd *, gfp_t);
132 - extern void scsi_free_sgtable(struct scatterlist *, int);
131 + extern void scsi_free_sgtable(struct scsi_cmnd *);
133 132 
134 133 extern int scsi_dma_map(struct scsi_cmnd *cmd);
135 134 extern void scsi_dma_unmap(struct scsi_cmnd *cmd);
··· 149 148 }
150 149 
151 150 #define scsi_for_each_sg(cmd, sg, nseg, __i)	\
152 - 	for (__i = 0, sg = scsi_sglist(cmd); __i < (nseg); __i++, (sg)++)
151 + 	for_each_sg(scsi_sglist(cmd), sg, nseg, __i)
153 152 
154 153 #endif /* _SCSI_SCSI_CMND_H */
+13
include/scsi/scsi_host.h
··· 39 39 #define DISABLE_CLUSTERING 0
40 40 #define ENABLE_CLUSTERING  1
41 41 
42 + #define DISABLE_SG_CHAINING	0
43 + #define ENABLE_SG_CHAINING	1
44 + 
42 45 enum scsi_eh_timer_return {
43 46 	EH_NOT_HANDLED,
44 47 	EH_HANDLED,
··· 446 443 	unsigned ordered_tag:1;
447 444 
448 445 	/*
446 + 	 * true if the low-level driver can support sg chaining. this
447 + 	 * will be removed eventually when all the drivers are
448 + 	 * converted to support sg chaining.
449 + 	 *
450 + 	 * Status: OBSOLETE
451 + 	 */
452 + 	unsigned use_sg_chaining:1;
453 + 
454 + 	/*
449 455 	 * Countdown for host blocking with no commands outstanding
450 456 	 */
451 457 	unsigned int max_host_blocked;
··· 598 586 	unsigned unchecked_isa_dma:1;
599 587 	unsigned use_clustering:1;
600 588 	unsigned use_blk_tcq:1;
589 + 	unsigned use_sg_chaining:1;
601 590 
602 591 	/*
603 592 	 * Host has requested that no further requests come through for the
+12 -7
lib/swiotlb.c
··· 677 677 * same here. 678 678 */ 679 679 int 680 - swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems, 680 + swiotlb_map_sg(struct device *hwdev, struct scatterlist *sgl, int nelems, 681 681 int dir) 682 682 { 683 + struct scatterlist *sg; 683 684 void *addr; 684 685 dma_addr_t dev_addr; 685 686 int i; 686 687 687 688 BUG_ON(dir == DMA_NONE); 688 689 689 - for (i = 0; i < nelems; i++, sg++) { 690 + for_each_sg(sgl, sg, nelems, i) { 690 691 addr = SG_ENT_VIRT_ADDRESS(sg); 691 692 dev_addr = virt_to_bus(addr); 692 693 if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) { ··· 697 696 to do proper error handling. */ 698 697 swiotlb_full(hwdev, sg->length, dir, 0); 699 698 swiotlb_unmap_sg(hwdev, sg - i, i, dir); 700 - sg[0].dma_length = 0; 699 + sgl[0].dma_length = 0; 701 700 return 0; 702 701 } 703 702 sg->dma_address = virt_to_bus(map); ··· 713 712 * concerning calls here are the same as for swiotlb_unmap_single() above. 714 713 */ 715 714 void 716 - swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems, 715 + swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sgl, int nelems, 717 716 int dir) 718 717 { 718 + struct scatterlist *sg; 719 719 int i; 720 720 721 721 BUG_ON(dir == DMA_NONE); 722 722 723 - for (i = 0; i < nelems; i++, sg++) 723 + for_each_sg(sgl, sg, nelems, i) { 724 724 if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg)) 725 725 unmap_single(hwdev, bus_to_virt(sg->dma_address), 726 726 sg->dma_length, dir); 727 727 else if (dir == DMA_FROM_DEVICE) 728 728 dma_mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length); 729 + } 729 730 } 730 731 731 732 /* ··· 738 735 * and usage. 
739 736 */ 740 737 static void 741 - swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg, 738 + swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sgl, 742 739 int nelems, int dir, int target) 743 740 { 741 + struct scatterlist *sg; 744 742 int i; 745 743 746 744 BUG_ON(dir == DMA_NONE); 747 745 748 - for (i = 0; i < nelems; i++, sg++) 746 + for_each_sg(sgl, sg, nelems, i) { 749 747 if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg)) 750 748 sync_single(hwdev, bus_to_virt(sg->dma_address), 751 749 sg->dma_length, dir, target); 752 750 else if (dir == DMA_FROM_DEVICE) 753 751 dma_mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length); 752 + } 754 753 } 755 754 756 755 void
+6
mm/bounce.c
··· 265 265 	mempool_t *pool;
266 266 
267 267 	/*
268 + 	 * Data-less bio, nothing to bounce
269 + 	 */
270 + 	if (bio_empty_barrier(*bio_orig))
271 + 		return;
272 + 
273 + 	/*
268 274 	 * for non-isa bounce case, just check if the bounce pfn is equal
269 275 	 * to or bigger than the highest pfn in the system -- in that case,
270 276 	 * don't waste time iterating over bio segments