Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mmc: core: let the dma map ops handle bouncing

Just like we do for all other block drivers. Especially as the limit
imposed at the moment might be way too pessimistic for IOMMUs.

This also means we are not going to set a bounce limit for the queue in
case we have a DMA mask. On most architectures it was never needed; the
major holdout was x86-32 with PAE, but that has been fixed by now.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

Authored by Christoph Hellwig; committed by Ulf Hansson
7559d612 1cdca16c

+2 -5
drivers/mmc/core/queue.c
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -354,18 +354,15 @@
 static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
 {
 	struct mmc_host *host = card->host;
-	u64 limit = BLK_BOUNCE_HIGH;
 	unsigned block_size = 512;
-
-	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
-		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
 	if (mmc_can_erase(card))
 		mmc_queue_setup_discard(mq->queue, card);
 
-	blk_queue_bounce_limit(mq->queue, limit);
+	if (!mmc_dev(host)->dma_mask || !*mmc_dev(host)->dma_mask)
+		blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
 	blk_queue_max_hw_sectors(mq->queue,
 		min(host->max_blk_count, host->max_req_size / 512));
 	blk_queue_max_segments(mq->queue, host->max_segs);