[POWERPC] Workaround for iommu page alignment

Commit 5d2efba64b231a1733c4048d1708d77e07f26426 changed our iommu code
so that it always uses an iommu page size of 4kB. That means with our
current code, drivers may do a dma_map_sg() of a 64kB page and obtain
a dma_addr_t that is only 4k aligned.

This works fine in most cases, except apparently for some infiniband HW,
where the driver tells the HW the page size and the HW then ignores the
low bits of the DMA address.

This works around it by making our IOMMU code enforce a PAGE_SIZE alignment
for mappings of objects that are page aligned in the first place and whose
size is larger than or equal to a page.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>

Authored by Benjamin Herrenschmidt, committed by Paul Mackerras (d262c32a, 031f2dcd)

+14 -3
arch/powerpc/kernel/iommu.c
@@ -278,6 +278,7 @@
 	unsigned long flags;
 	struct scatterlist *s, *outs, *segstart;
 	int outcount, incount, i;
+	unsigned int align;
 	unsigned long handle;
 
 	BUG_ON(direction == DMA_NONE);
@@ -309,7 +310,12 @@
 		/* Allocate iommu entries for that segment */
 		vaddr = (unsigned long) sg_virt(s);
 		npages = iommu_num_pages(vaddr, slen);
-		entry = iommu_range_alloc(tbl, npages, &handle, mask >> IOMMU_PAGE_SHIFT, 0);
+		align = 0;
+		if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && slen >= PAGE_SIZE &&
+		    (vaddr & ~PAGE_MASK) == 0)
+			align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+		entry = iommu_range_alloc(tbl, npages, &handle,
+					  mask >> IOMMU_PAGE_SHIFT, align);
 
 		DBG(" - vaddr: %lx, size: %lx\n", vaddr, slen);
 
@@ -572,7 +578,7 @@
 {
 	dma_addr_t dma_handle = DMA_ERROR_CODE;
 	unsigned long uaddr;
-	unsigned int npages;
+	unsigned int npages, align;
 
 	BUG_ON(direction == DMA_NONE);
 
@@ -580,8 +586,13 @@
 	npages = iommu_num_pages(uaddr, size);
 
 	if (tbl) {
+		align = 0;
+		if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && size >= PAGE_SIZE &&
+		    ((unsigned long)vaddr & ~PAGE_MASK) == 0)
+			align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+
 		dma_handle = iommu_alloc(tbl, vaddr, npages, direction,
-					 mask >> IOMMU_PAGE_SHIFT, 0);
+					 mask >> IOMMU_PAGE_SHIFT, align);
 		if (dma_handle == DMA_ERROR_CODE) {
 			if (printk_ratelimit()) {
 				printk(KERN_INFO "iommu_alloc failed, "