
Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""

Halil Pasic points out [1] that the full revert of that commit (the
revert in bddac7c1e02b) went too far, and that a partial revert that
only reverts the problematic case, but still keeps some of the
cleanups, is probably better.

And that partial revert [2] had already been verified by Oleksandr
Natalenko to also fix the issue [3]; I had just missed that in the long
discussion.

So let's reinstate the cleanups from commit aa6f8dcbab47 ("swiotlb:
rework "fix info leak with DMA_FROM_DEVICE""), and effectively only
revert the part that caused problems.

Link: https://lore.kernel.org/all/20220328013731.017ae3e3.pasic@linux.ibm.com/ [1]
Link: https://lore.kernel.org/all/20220324055732.GB12078@lst.de/ [2]
Link: https://lore.kernel.org/all/4386660.LvFx2qVVIh@natalenko.name/ [3]
Suggested-by: Halil Pasic <pasic@linux.ibm.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

+8 -20
-8
Documentation/core-api/dma-attributes.rst
@@ -130,11 +130,3 @@
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the
 lesser-privileged levels).
-
-DMA_ATTR_OVERWRITE
-------------------
-
-This is a hint to the DMA-mapping subsystem that the device is expected to
-overwrite the entire mapped size, thus the caller does not require any of the
-previous buffer contents to be preserved. This allows bounce-buffering
-implementations to optimise DMA_FROM_DEVICE transfers.
-8
include/linux/dma-mapping.h
@@ -62,14 +62,6 @@
 #define DMA_ATTR_PRIVILEGED		(1UL << 9)
 
 /*
- * This is a hint to the DMA-mapping subsystem that the device is expected
- * to overwrite the entire mapped size, thus the caller does not require any
- * of the previous buffer contents to be preserved. This allows
- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
- */
-#define DMA_ATTR_OVERWRITE		(1UL << 10)
-
-/*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a
  * given device and there may be a translation between the CPU physical address
+8 -4
kernel/dma/swiotlb.c
@@ -627,10 +627,14 @@
 	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
 	tlb_addr = slot_addr(mem->start, index) + offset;
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-	    dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
+	/*
+	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
+	 * to the tlb buffer, if we knew for sure the device will
+	 * overwrite the entire current content. But we don't. Thus
+	 * unconditional bounce may prevent leaking swiotlb content (i.e.
+	 * kernel memory) to user-space.
+	 */
+	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
 	return tlb_addr;
 }
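The reasoning in the reinstated comment can be sketched with a small
user-space toy model (plain C; `map_single`, `device_partial_write`,
`unmap_single`, and `MAPPING_SIZE` are made-up names for illustration,
not the kernel API). Because the slot is unconditionally filled from the
caller's buffer at map time, a device that only partially overwrites the
slot can never hand stale slot contents (i.e. old kernel data) back to
the caller at unmap time:

```c
#include <assert.h>
#include <string.h>

enum dma_dir { DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_BIDIRECTIONAL };

#define MAPPING_SIZE 8

/*
 * Map: with the reinstated behaviour the caller's buffer is always
 * bounced into the slot, regardless of direction (no
 * DMA_ATTR_OVERWRITE short-circuit).
 */
static void map_single(char *tlb, const char *orig, enum dma_dir dir)
{
	(void)dir;	/* direction no longer suppresses the copy */
	memcpy(tlb, orig, MAPPING_SIZE);
}

/* The device writes only the first n bytes of the slot. */
static void device_partial_write(char *tlb, size_t n)
{
	memset(tlb, 'D', n);
}

/* Unmap for DMA_FROM_DEVICE: the whole slot goes back to the caller. */
static void unmap_single(const char *tlb, char *orig)
{
	memcpy(orig, tlb, MAPPING_SIZE);
}
```

Had `map_single` skipped the copy for DMA_FROM_DEVICE (as the removed
DMA_ATTR_OVERWRITE path allowed), the tail of the caller's buffer would
instead receive whatever the reused slot last held.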