[POWERPC] Workaround for iommu page alignment

Commit 5d2efba64b231a1733c4048d1708d77e07f26426 changed our iommu code
so that it always uses an iommu page size of 4kB. That means with our
current code, a driver may do a dma_map_sg() of a 64kB page and get back
a dma_addr_t that is only 4kB aligned.
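
To make the mismatch concrete, here is a minimal sketch of the arithmetic,
assuming a 64kB-page kernel (PAGE_SHIFT = 16) against the fixed 4kB iommu
page (IOMMU_PAGE_SHIFT = 12); it is a userspace illustration, not kernel
code:

	#include <stdio.h>

	int main(void)
	{
		unsigned int page_shift = 16;		/* 64kB kernel pages */
		unsigned int iommu_page_shift = 12;	/* fixed 4kB iommu pages */

		/* One 64kB kernel page occupies 16 iommu pages. */
		printf("iommu pages per kernel page: %u\n",
		       1u << (page_shift - iommu_page_shift));

		/* The allocator only guarantees single-iommu-page alignment,
		 * so the returned dma_addr_t can sit on any 4kB boundary
		 * rather than on the 64kB boundary the source page sits on. */
		printf("guaranteed alignment: %u bytes\n",
		       1u << iommu_page_shift);
		return 0;
	}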

This works fine in most cases, but seems to break some infiniband HW:
the driver tells the HW the page size, and the HW then ignores the
low-order bits of the DMA address.

This works around it by making our iommu code enforce a PAGE_SIZE
alignment for mappings of objects that are page-aligned in the first
place and whose size is larger than or equal to a page.
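
The check added at both call sites below is identical; as a hedged
sketch, it could be factored into a helper like the following
(iommu_alloc_order() is a hypothetical name, the patch itself
open-codes the test):

	/* Hypothetical helper mirroring the check this patch adds: returns
	 * the alignment, as a power-of-two order in iommu pages, to request
	 * from iommu_range_alloc()/iommu_alloc(). */
	static unsigned int iommu_alloc_order(unsigned long vaddr, size_t size)
	{
		if (IOMMU_PAGE_SHIFT < PAGE_SHIFT &&	/* iommu pages smaller than kernel pages */
		    size >= PAGE_SIZE &&		/* covers at least one full kernel page */
		    (vaddr & ~PAGE_MASK) == 0)		/* starts on a kernel page boundary */
			return PAGE_SHIFT - IOMMU_PAGE_SHIFT;
		return 0;	/* default: single iommu page alignment */
	}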

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

 arch/powerpc/kernel/iommu.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -278,6 +278,7 @@
 	unsigned long flags;
 	struct scatterlist *s, *outs, *segstart;
 	int outcount, incount, i;
+	unsigned int align;
 	unsigned long handle;
 
 	BUG_ON(direction == DMA_NONE);
@@ -309,7 +310,12 @@
 		/* Allocate iommu entries for that segment */
 		vaddr = (unsigned long) sg_virt(s);
 		npages = iommu_num_pages(vaddr, slen);
-		entry = iommu_range_alloc(tbl, npages, &handle, mask >> IOMMU_PAGE_SHIFT, 0);
+		align = 0;
+		if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && slen >= PAGE_SIZE &&
+		    (vaddr & ~PAGE_MASK) == 0)
+			align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+		entry = iommu_range_alloc(tbl, npages, &handle,
+					  mask >> IOMMU_PAGE_SHIFT, align);
 
 		DBG("  - vaddr: %lx, size: %lx\n", vaddr, slen);
 
@@ -572,7 +578,7 @@
 {
 	dma_addr_t dma_handle = DMA_ERROR_CODE;
 	unsigned long uaddr;
-	unsigned int npages;
+	unsigned int npages, align;
 
 	BUG_ON(direction == DMA_NONE);
 
@@ -580,8 +586,13 @@
 	npages = iommu_num_pages(uaddr, size);
 
 	if (tbl) {
+		align = 0;
+		if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && size >= PAGE_SIZE &&
+		    ((unsigned long)vaddr & ~PAGE_MASK) == 0)
+			align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+
 		dma_handle = iommu_alloc(tbl, vaddr, npages, direction,
-					 mask >> IOMMU_PAGE_SHIFT, 0);
+					 mask >> IOMMU_PAGE_SHIFT, align);
 		if (dma_handle == DMA_ERROR_CODE) {
 			if (printk_ratelimit()) {
 				printk(KERN_INFO "iommu_alloc failed, "