swiotlb: use coherent_dma_mask in alloc_coherent

Impact: fix DMA buffer allocation coherency bug in certain configs

This patch fixes swiotlb to use dev->coherent_dma_mask in
swiotlb_alloc_coherent().

coherent_dma_mask is a subset of dma_mask (equal to it most of
the time), enumerating the address range that a given device
is able to DMA to/from in a cache-coherent way.

Currently, however, swiotlb_alloc_coherent() implicitly uses
dev->dma_mask via address_needs_mapping(), even though alloc_coherent
is really supposed to honor coherent_dma_mask.

This bug could break drivers that use a smaller coherent_dma_mask than
dma_mask (though the current code happens to work for the majority of
drivers, which use the same mask for both).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: tony.luck@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Authored by FUJITA Tomonori, committed by Ingo Molnar (1e74f300, e47411b1)

+7 -3
lib/swiotlb.c
@@ -467,9 +467,13 @@
 	dma_addr_t dev_addr;
 	void *ret;
 	int order = get_order(size);
+	u64 dma_mask = DMA_32BIT_MASK;
+
+	if (hwdev && hwdev->coherent_dma_mask)
+		dma_mask = hwdev->coherent_dma_mask;
 
 	ret = (void *)__get_free_pages(flags, order);
-	if (ret && address_needs_mapping(hwdev, virt_to_bus(ret), size)) {
+	if (ret && !is_buffer_dma_capable(dma_mask, virt_to_bus(ret), size)) {
 		/*
 		 * The allocated memory isn't reachable by the device.
 		 * Fall back on swiotlb_map_single().
@@ -493,9 +497,9 @@
 	dev_addr = virt_to_bus(ret);
 
 	/* Confirm address can be DMA'd by device */
-	if (address_needs_mapping(hwdev, dev_addr, size)) {
+	if (!is_buffer_dma_capable(dma_mask, dev_addr, size)) {
 		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016Lx\n",
-		       (unsigned long long)*hwdev->dma_mask,
+		       (unsigned long long)dma_mask,
 		       (unsigned long long)dev_addr);
 
 		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */