Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

dma-mapping: remove leftover NULL device support

Most dma_map_ops implementations already had some issues with a NULL
device, or simply crashed if one was fed to them. Now that we have
cleaned up all the obvious offenders we can stop pretending we
support this mode.

Signed-off-by: Christoph Hellwig <hch@lst.de>

+10 -11
+6 -7
Documentation/DMA-API-HOWTO.txt
@@ -365,13 +365,12 @@
 driver needs regions sized smaller than a page, you may prefer using
 the dma_pool interface, described below.
 
-The consistent DMA mapping interfaces, for non-NULL dev, will by
-default return a DMA address which is 32-bit addressable. Even if the
-device indicates (via DMA mask) that it may address the upper 32-bits,
-consistent allocation will only return > 32-bit addresses for DMA if
-the consistent DMA mask has been explicitly changed via
-dma_set_coherent_mask(). This is true of the dma_pool interface as
-well.
+The consistent DMA mapping interfaces, will by default return a DMA address
+which is 32-bit addressable. Even if the device indicates (via the DMA mask)
+that it may address the upper 32-bits, consistent allocation will only
+return > 32-bit addresses for DMA if the consistent DMA mask has been
+explicitly changed via dma_set_coherent_mask(). This is true of the
+dma_pool interface as well.
 
 dma_alloc_coherent() returns two values: the virtual address which you
 can use to access it from the CPU and dma_handle which you pass to the
+3 -3
include/linux/dma-mapping.h
@@ -267,9 +267,9 @@
 
 static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
-	if (dev && dev->dma_ops)
+	if (dev->dma_ops)
 		return dev->dma_ops;
-	return get_arch_dma_ops(dev ? dev->bus : NULL);
+	return get_arch_dma_ops(dev->bus);
 }
 
 static inline void set_dma_ops(struct device *dev,
@@ -650,7 +650,7 @@
 
 static inline u64 dma_get_mask(struct device *dev)
 {
-	if (dev && dev->dma_mask && *dev->dma_mask)
+	if (dev->dma_mask && *dev->dma_mask)
 		return *dev->dma_mask;
 	return DMA_BIT_MASK(32);
 }
+1 -1
kernel/dma/direct.c
@@ -311,7 +311,7 @@
 		size_t size)
 {
 	return swiotlb_force != SWIOTLB_FORCE &&
-		(!dev || dma_capable(dev, dma_addr, size));
+		dma_capable(dev, dma_addr, size);
 }
 
 dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,