Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mmc: sdio: Use mmc_pre_req() / mmc_post_req()

SDHCI changed from using a tasklet to finish requests, to using an IRQ
thread, i.e. commit c07a48c2651965 ("mmc: sdhci: Remove finish_tasklet").
Because this increased the latency to complete requests, a preparatory
change was made to complete the request from the IRQ handler if
possible, i.e. commit 19d2f695f4e827 ("mmc: sdhci: Call mmc_request_done()
from IRQ handler if possible"). That alleviated the situation for MMC
block devices because the MMC block driver makes use of mmc_pre_req()
and mmc_post_req() so that successful requests are completed in the IRQ
handler and any DMA unmapping is handled separately in mmc_post_req().
However, SDIO was still affected, and an example has been reported with
up to 20% degradation in performance.

Looking at SDIO I/O helper functions, sdio_io_rw_ext_helper() appeared
to be a possible candidate for making use of asynchronous requests
within its I/O loops, but analysis revealed that these loops almost
never iterate more than once, so the complexity of the change would not
be warranted.

Instead, mmc_pre_req() and mmc_post_req() are added before and after I/O
submission (mmc_wait_for_req) in mmc_io_rw_extended(). This still has
the potential benefit of reducing the duration of interrupt handlers, as
well as addressing the latency issue for SDHCI. It also seems a more
reasonable solution than forcing drivers to do everything in the IRQ
handler.

Reported-by: Dmitry Osipenko <digetx@gmail.com>
Fixes: c07a48c2651965 ("mmc: sdhci: Remove finish_tasklet")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200903082007.18715-1-adrian.hunter@intel.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

Authored by Adrian Hunter, committed by Ulf Hansson (f0c393e2 060522d8)

 drivers/mmc/core/sdio_ops.c | 39 ++++++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 17 deletions(-)
diff --git a/drivers/mmc/core/sdio_ops.c b/drivers/mmc/core/sdio_ops.c
--- a/drivers/mmc/core/sdio_ops.c
+++ b/drivers/mmc/core/sdio_ops.c
@@ -121,6 +121,7 @@
 	struct sg_table sgtable;
 	unsigned int nents, left_size, i;
 	unsigned int seg_size = card->host->max_seg_size;
+	int err;
 
 	WARN_ON(blksz == 0);
 
@@ -171,28 +172,32 @@
 
 	mmc_set_data_timeout(&data, card);
 
+	mmc_pre_req(card->host, &mrq);
+
 	mmc_wait_for_req(card->host, &mrq);
+
+	if (cmd.error)
+		err = cmd.error;
+	else if (data.error)
+		err = data.error;
+	else if (mmc_host_is_spi(card->host))
+		/* host driver already reported errors */
+		err = 0;
+	else if (cmd.resp[0] & R5_ERROR)
+		err = -EIO;
+	else if (cmd.resp[0] & R5_FUNCTION_NUMBER)
+		err = -EINVAL;
+	else if (cmd.resp[0] & R5_OUT_OF_RANGE)
+		err = -ERANGE;
+	else
+		err = 0;
+
+	mmc_post_req(card->host, &mrq, err);
 
 	if (nents > 1)
 		sg_free_table(&sgtable);
 
-	if (cmd.error)
-		return cmd.error;
-	if (data.error)
-		return data.error;
-
-	if (mmc_host_is_spi(card->host)) {
-		/* host driver already reported errors */
-	} else {
-		if (cmd.resp[0] & R5_ERROR)
-			return -EIO;
-		if (cmd.resp[0] & R5_FUNCTION_NUMBER)
-			return -EINVAL;
-		if (cmd.resp[0] & R5_OUT_OF_RANGE)
-			return -ERANGE;
-	}
-
-	return 0;
+	return err;
 }
 
 int sdio_reset(struct mmc_host *host)