ARM: dma-mapping: disallow dma_get_sgtable() for non-kernel managed memory

dma_get_sgtable() tries to create a scatterlist table containing valid
struct page pointers for the coherent memory allocation passed to it.

However, memory can be declared via dma_declare_coherent_memory() or
via other reservation schemes, which means that coherent memory is not
guaranteed to be backed by struct pages. In such cases, the resulting
scatterlist table contains pointers to invalid pages, which causes a
kernel oops later.
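
For illustration, non-page-backed coherent memory typically comes from a
device-local region handed to the core via dma_declare_coherent_memory().
The sketch below is hypothetical (the addresses, size and helper name are
made up), and both the flags and the return-value convention of
dma_declare_coherent_memory() have changed in later kernels; this follows
the convention current at the time of this patch, where a zero return
means failure:

  #include <linux/device.h>
  #include <linux/dma-mapping.h>
  #include <linux/sizes.h>

  /* Hypothetical device-local SRAM window; the addresses are made up. */
  #define EXAMPLE_SRAM_PHYS  0x40000000UL  /* CPU physical address */
  #define EXAMPLE_SRAM_BUS   0x40000000UL  /* device (bus) address */
  #define EXAMPLE_SRAM_SIZE  SZ_64K

  static int example_declare_coherent(struct device *dev)
  {
          /*
           * The declared region is ioremapped rather than taken from the
           * kernel's page allocator, so pfn_valid() is false for it and
           * there is no struct page behind allocations handed out from
           * this pool by dma_alloc_coherent().
           */
          if (!dma_declare_coherent_memory(dev, EXAMPLE_SRAM_PHYS,
                                           EXAMPLE_SRAM_BUS,
                                           EXAMPLE_SRAM_SIZE,
                                           DMA_MEMORY_MAP |
                                           DMA_MEMORY_EXCLUSIVE))
                  return -ENOMEM;

          return 0;
  }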

This patch adds detection of such memory, and refuses to create a
scatterlist table for it.
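
How a caller observes the new behaviour, as a hedged sketch (the helper
name and error handling are hypothetical; dma_alloc_coherent(),
dma_get_sgtable() and dma_free_coherent() are the standard APIs):

  #include <linux/dma-mapping.h>
  #include <linux/gfp.h>
  #include <linux/scatterlist.h>

  /*
   * Hypothetical dma_buf-style exporter helper.  With this patch, a
   * coherent allocation that is not backed by struct pages makes
   * dma_get_sgtable() fail with -ENXIO instead of silently building a
   * table of invalid page pointers.
   */
  static int example_export_as_sgtable(struct device *dev, size_t size,
                                       struct sg_table *sgt)
  {
          dma_addr_t handle;
          void *cpu_addr;
          int ret;

          cpu_addr = dma_alloc_coherent(dev, size, &handle, GFP_KERNEL);
          if (!cpu_addr)
                  return -ENOMEM;

          ret = dma_get_sgtable(dev, sgt, cpu_addr, handle, size);
          if (ret) {
                  /* e.g. -ENXIO: no struct pages behind this allocation */
                  dma_free_coherent(dev, size, cpu_addr, handle);
                  return ret;
          }

          /* ... hand sgt over, e.g. to a dma_buf importer ... */
          return 0;
  }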

Reported-by: Shuah Khan <shuahkhan@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

+19 -1
arch/arm/mm/dma-mapping.c
@@ -935,12 +935,30 @@
         __arm_dma_free(dev, size, cpu_addr, handle, attrs, true);
 }
 
+/*
+ * The whole dma_get_sgtable() idea is fundamentally unsafe - it seems
+ * that the intention is to allow exporting memory allocated via the
+ * coherent DMA APIs through the dma_buf API, which only accepts a
+ * scattertable. This presents a couple of problems:
+ * 1. Not all memory allocated via the coherent DMA APIs is backed by
+ *    a struct page
+ * 2. Passing coherent DMA memory into the streaming APIs is not allowed
+ *    as we will try to flush the memory through a different alias to that
+ *    actually being used (and the flushes are redundant.)
+ */
 int arm_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
                  void *cpu_addr, dma_addr_t handle, size_t size,
                  unsigned long attrs)
 {
-        struct page *page = pfn_to_page(dma_to_pfn(dev, handle));
+        unsigned long pfn = dma_to_pfn(dev, handle);
+        struct page *page;
         int ret;
+
+        /* If the PFN is not valid, we do not have a struct page */
+        if (!pfn_valid(pfn))
+                return -ENXIO;
+
+        page = pfn_to_page(pfn);
 
         ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
         if (unlikely(ret))