
Merge tag 'mmc-v4.9' of git://git.linaro.org/people/ulf.hansson/mmc

Pull MMC updates from Ulf Hansson:

MMC core:
- Add support for sending commands during data transfer
- Erase/discard/trim improvements
- Improved error handling
- Extend sysfs with SD status register
- Document info about the vmmc/vqmmc regulators
- Extend pwrseq-simple to manage an optional post-power-on-delay
- Various minor improvements and cleanups

MMC host:
- dw_mmc: Add reset support
- dw_mmc: Return -EILSEQ for EBE and SBE error
- dw_mmc: Some cleanups
- dw_mmc-k3: Add UHS-I support for the HiSilicon HiKey
- tmio: Add eMMC support
- sh_mobile_sdhi: Add r8a7796 support
- sunxi: Don't use sample clocks for sun4i/sun5i
- sunxi: Add support for A64 mmc controller
- sunxi: Some cleanups and improvements
- sdhci: Support for sending commands during data transfer
- sdhci: Do not allow tuning procedure to be interrupted
- sdhci-pci: Enable SD/SDIO on Merrifield
- sdhci-pci|acpi: Enable MMC_CAP_CMD_DURING_TFR
- sdhci-pci: Some cleanups
- sdhci-of-arasan: Set controller to test mode when no CD bit
- sdhci-of-arasan: Some fixes for clocks and phys
- sdhci-brcmstb: Don't use ADMA 64-bit when not supported
- sdhci-tegra: Mark 64-bit DMA broken on Tegra124
- sdhci-esdhc-imx: Fixups related to data timeouts

* tag 'mmc-v4.9' of git://git.linaro.org/people/ulf.hansson/mmc: (68 commits)
mmc: dw_mmc: remove the deprecated "supports-highspeed" property
mmc: dw_mmc: minor cleanup for dw_mci_adjust_fifoth
mmc: dw_mmc: use macro to define ring buffer size
mmc: dw_mmc: fix misleading error print if failing to do DMA transfer
mmc: dw_mmc: avoid race condition of cpu and IDMAC
mmc: dw_mmc: split out preparation of desc for IDMAC32 and IDMAC64
mmc: core: don't try to switch block size for dual rate mode
mmc: sdhci-of-arasan: Set controller to test mode when no CD bit
dt: sdhci-of-arasan: Add device tree option xlnx,fails-without-test-cd
mmc: tmio: add eMMC support
mmc: rtsx_usb: use new macro for R1 without CRC
mmc: rtsx_pci: use new macro for R1 without CRC
mmc: add define for R1 response without CRC
mmc: card: do away with indirection pointer
mmc: sdhci-acpi: Set MMC_CAP_CMD_DURING_TFR for Intel eMMC controllers
mmc: sdhci-pci: Set MMC_CAP_CMD_DURING_TFR for Intel eMMC controllers
mmc: sdhci: Support cap_cmd_during_tfr requests
mmc: mmc_test: Add tests for sending commands during transfer
mmc: core: Add support for sending commands during data transfer
mmc: sdhci-brcmstb: Fix incorrect capability
...
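The headline core change, sending commands during data transfer, adds a `cap_cmd_during_tfr` flag on struct mmc_request together with mmc_is_req_done() and the now-exported mmc_wait_for_req_done() (see the core.c and mmc_test.c diffs below). A rough caller-side sketch, condensed from the mmc_test_ongoing_transfer() test added in this series; this is illustrative pseudocode against kernel-internal APIs (the `status_cmd` variable is made up for the example), not a standalone program:

```c
struct mmc_request *mrq;	/* data request, prepared as usual */

mrq->cap_cmd_during_tfr = true;
mmc_wait_for_req(host, mrq);	/* returns once the CMD line is free,
				   while the data transfer continues */

while (!mmc_is_req_done(host, mrq)) {
	/* e.g. poll the card with CMD13 (SEND_STATUS) mid-transfer */
	mmc_wait_for_cmd(host, &status_cmd, 0);
}

mmc_wait_for_req_done(host, mrq);	/* now wait for the data transfer */
/* for cap_cmd_during_tfr requests the caller must send any stop itself */
```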

+1305 -481
+3
Documentation/devicetree/bindings/mmc/arasan,sdhci.txt
···
   - #clock-cells: If specified this should be the value <0>. With this property
     in place we will export a clock representing the Card Clock. This clock
     is expected to be consumed by our PHY. You must also specify
+  - xlnx,fails-without-test-cd: when present, the controller doesn't work when
+    the CD line is not connected properly, and the line is not connected
+    properly. Test mode can be used to force the controller to function.
 
 Example:
 	sdhci@e0100000 {
+3 -1
Documentation/devicetree/bindings/mmc/brcm,bcm7425-sdhci.txt → Documentation/devicetree/bindings/mmc/brcm,sdhci-brcmstb.txt
···
 that support them.
 
 Required properties:
-- compatible: "brcm,bcm7425-sdhci"
+- compatible: should be one of the following
+  - "brcm,bcm7425-sdhci"
+  - "brcm,bcm7445-sdhci"
 
 Refer to clocks/clock-bindings.txt for generic clock consumer properties.
 
+2
Documentation/devicetree/bindings/mmc/mmc-pwrseq-simple.txt
···
   See ../clocks/clock-bindings.txt for details.
 - clock-names : Must include the following entry:
   "ext_clock" (External clock provided to the card).
+- post-power-on-delay-ms : Delay in ms after powering the card and
+  de-asserting the reset-gpios (if any)
 
 Example:
 
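For illustration, a pwrseq-simple node using the new delay property might look like this (the node name, GPIO specifier, and the 10 ms value are made up for the example):

```dts
sdio_pwrseq: sdio_pwrseq {
	compatible = "mmc-pwrseq-simple";
	reset-gpios = <&pio 7 8 GPIO_ACTIVE_LOW>;
	/* wait 10 ms after power-on and reset de-assertion (new property) */
	post-power-on-delay-ms = <10>;
};
```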
+14 -1
Documentation/devicetree/bindings/mmc/mmc.txt
···
 - wakeup-source: Enables wake up of host system on SDIO IRQ assertion
   (Legacy property supported: "enable-sdio-wakeup")
 
+MMC power
+---------
+
+Controllers may implement power control from both the connected cards and
+the IO signaling (for example to change to high-speed 1.8V signalling). If
+the system supports this, then the following two properties should point
+to valid regulator nodes:
+
+- vqmmc-supply: supply node for IO line power
+- vmmc-supply: supply node for card's power
+
 
 MMC power sequences:
 --------------------
···
 - #size-cells: should be zero.
 
 Required function subnode properties:
-- compatible: name of SDIO function following generic names recommended practice
 - reg: Must contain the SDIO function number of the function this subnode
   describes. A value of 0 denotes the memory SD function, values from
   1 to 7 denote the SDIO functions.
+
+Optional function subnode properties:
+- compatible: name of SDIO function following generic names recommended practice
 
 
 Examples
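As a sketch, the two newly documented regulator properties would be wired up in a host node like this (the regulator labels are hypothetical):

```dts
mmc0: mmc@1c0f000 {
	/* ... usual host properties ... */
	vmmc-supply = <&reg_vcc3v3>;	/* card power */
	vqmmc-supply = <&reg_vcc1v8>;	/* IO line power, e.g. 1.8V signalling */
};
```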
+6 -1
Documentation/devicetree/bindings/mmc/sunxi-mmc.txt
···
 Absolute maximum transfer rate is 200MB/s
 
 Required properties:
-- compatible : "allwinner,sun4i-a10-mmc" or "allwinner,sun5i-a13-mmc"
+- compatible : should be one of:
+  * "allwinner,sun4i-a10-mmc"
+  * "allwinner,sun5i-a13-mmc"
+  * "allwinner,sun7i-a20-mmc"
+  * "allwinner,sun9i-a80-mmc"
+  * "allwinner,sun50i-a64-mmc"
 - reg : mmc controller base registers
 - clocks : a list with 4 phandle + clock specifier pairs
 - clock-names : must contain "ahb", "mmc", "output" and "sample"
+4
Documentation/devicetree/bindings/mmc/synopsys-dw-mshc.txt
···
 
 Optional properties:
 
+* resets: phandle + reset specifier pair, intended to represent hardware
+  reset signal present internally in some host controller IC designs.
+  See Documentation/devicetree/bindings/reset/reset.txt for details.
+
 * clocks: from common clock binding: handle to biu and ciu clocks for the
   bus interface unit clock and the card interface unit clock.
 
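A node using the new optional resets property might look like the following sketch (the compatible string is a real dw-mshc user, but the reset controller phandle and specifier are illustrative):

```dts
mmc: dwmmc0@ff704000 {
	compatible = "altr,socfpga-dw-mshc";
	reg = <0xff704000 0x1000>;
	interrupts = <0 139 4>;
	/* new optional property: controller-internal reset line */
	resets = <&rst SDMMC_RESET>;
};
```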
+1
Documentation/devicetree/bindings/mmc/tmio_mmc.txt
···
 	"renesas,sdhi-r8a7793" - SDHI IP on R8A7793 SoC
 	"renesas,sdhi-r8a7794" - SDHI IP on R8A7794 SoC
 	"renesas,sdhi-r8a7795" - SDHI IP on R8A7795 SoC
+	"renesas,sdhi-r8a7796" - SDHI IP on R8A7796 SoC
 
 Optional properties:
 - toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
+4 -4
arch/arm/boot/dts/sun6i-a31.dtsi
···
 	};
 
 	mmc0: mmc@01c0f000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c0f000 0x1000>;
 		clocks = <&ahb1_gates 8>,
 			 <&mmc0_clk 0>,
···
 	};
 
 	mmc1: mmc@01c10000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c10000 0x1000>;
 		clocks = <&ahb1_gates 9>,
 			 <&mmc1_clk 0>,
···
 	};
 
 	mmc2: mmc@01c11000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c11000 0x1000>;
 		clocks = <&ahb1_gates 10>,
 			 <&mmc2_clk 0>,
···
 	};
 
 	mmc3: mmc@01c12000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c12000 0x1000>;
 		clocks = <&ahb1_gates 11>,
 			 <&mmc3_clk 0>,
+4 -4
arch/arm/boot/dts/sun7i-a20.dtsi
···
 	};
 
 	mmc0: mmc@01c0f000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c0f000 0x1000>;
 		clocks = <&ahb_gates 8>,
 			 <&mmc0_clk 0>,
···
 	};
 
 	mmc1: mmc@01c10000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c10000 0x1000>;
 		clocks = <&ahb_gates 9>,
 			 <&mmc1_clk 0>,
···
 	};
 
 	mmc2: mmc@01c11000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c11000 0x1000>;
 		clocks = <&ahb_gates 10>,
 			 <&mmc2_clk 0>,
···
 	};
 
 	mmc3: mmc@01c12000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c12000 0x1000>;
 		clocks = <&ahb_gates 11>,
 			 <&mmc3_clk 0>,
+3 -3
arch/arm/boot/dts/sun8i-a23-a33.dtsi
···
 	};
 
 	mmc0: mmc@01c0f000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c0f000 0x1000>;
 		clocks = <&ahb1_gates 8>,
 			 <&mmc0_clk 0>,
···
 	};
 
 	mmc1: mmc@01c10000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c10000 0x1000>;
 		clocks = <&ahb1_gates 9>,
 			 <&mmc1_clk 0>,
···
 	};
 
 	mmc2: mmc@01c11000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c11000 0x1000>;
 		clocks = <&ahb1_gates 10>,
 			 <&mmc2_clk 0>,
+3 -3
arch/arm/boot/dts/sun8i-h3.dtsi
···
 	};
 
 	mmc0: mmc@01c0f000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c0f000 0x1000>;
 		clocks = <&ccu CLK_BUS_MMC0>,
 			 <&ccu CLK_MMC0>,
···
 	};
 
 	mmc1: mmc@01c10000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c10000 0x1000>;
 		clocks = <&ccu CLK_BUS_MMC1>,
 			 <&ccu CLK_MMC1>,
···
 	};
 
 	mmc2: mmc@01c11000 {
-		compatible = "allwinner,sun5i-a13-mmc";
+		compatible = "allwinner,sun7i-a20-mmc";
 		reg = <0x01c11000 0x1000>;
 		clocks = <&ccu CLK_BUS_MMC2>,
 			 <&ccu CLK_MMC2>,
+15 -15
drivers/mmc/card/block.c
···
 {
 	struct mmc_packed *packed = mqrq->packed;
 
-	BUG_ON(!packed);
-
 	mqrq->cmd_type = MMC_PACKED_NONE;
 	packed->nr_entries = MMC_PACKED_NR_ZERO;
 	packed->idx_failure = MMC_PACKED_NR_IDX;
···
 	int err, check, status;
 	u8 *ext_csd;
 
-	BUG_ON(!packed);
-
 	packed->retries--;
 	check = mmc_blk_err_check(card, areq);
 	err = get_card_status(card, &status, 0);
···
 	u8 max_packed_rw = 0;
 	u8 reqs = 0;
 
+	/*
+	 * We don't need to check packed for any further
+	 * operation of packed stuff as we set MMC_PACKED_NONE
+	 * and return zero for reqs if geting null packed. Also
+	 * we clean the flag of MMC_BLK_PACKED_CMD to avoid doing
+	 * it again when removing blk req.
+	 */
+	if (!mqrq->packed) {
+		md->flags &= (~MMC_BLK_PACKED_CMD);
+		goto no_packed;
+	}
+
 	if (!(md->flags & MMC_BLK_PACKED_CMD))
 		goto no_packed;
···
 	u8 hdr_blocks;
 	u8 i = 1;
 
-	BUG_ON(!packed);
-
 	mqrq->cmd_type = MMC_PACKED_WRITE;
 	packed->blocks = 0;
 	packed->idx_failure = MMC_PACKED_NR_IDX;
···
 	int idx = packed->idx_failure, i = 0;
 	int ret = 0;
 
-	BUG_ON(!packed);
-
 	while (!list_empty(&packed->list)) {
 		prq = list_entry_rq(packed->list.next);
 		if (idx == i) {
···
 	struct request *prq;
 	struct mmc_packed *packed = mq_rq->packed;
 
-	BUG_ON(!packed);
-
 	while (!list_empty(&packed->list)) {
 		prq = list_entry_rq(packed->list.next);
 		list_del_init(&prq->queuelist);
···
 	struct request *prq;
 	struct request_queue *q = mq->queue;
 	struct mmc_packed *packed = mq_rq->packed;
-
-	BUG_ON(!packed);
 
 	while (!list_empty(&packed->list)) {
 		prq = list_entry_rq(packed->list.prev);
···
 	return 0;
 }
 
-static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 {
 	int ret;
 	struct mmc_blk_data *md = mq->data;
···
 	if (ret)
 		goto err_putdisk;
 
-	md->queue.issue_fn = mmc_blk_issue_rq;
 	md->queue.data = md;
 
 	md->disk->major = MMC_BLOCK_MAJOR;
···
 	set_capacity(md->disk, size);
 
 	if (mmc_host_cmd23(card->host)) {
-		if (mmc_card_mmc(card) ||
+		if ((mmc_card_mmc(card) &&
+		     card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
 		    (mmc_card_sd(card) &&
 		     card->scr.cmds & SD_SCR_CMD23_SUPPORT))
 			md->flags |= MMC_BLK_CMD23;
+1
drivers/mmc/card/block.h
···
+int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
+308
drivers/mmc/card/mmc_test.c
···
 	return mmc_set_blocklen(test->card, size);
 }
 
+static bool mmc_test_card_cmd23(struct mmc_card *card)
+{
+	return mmc_card_mmc(card) ||
+	       (mmc_card_sd(card) && card->scr.cmds & SD_SCR_CMD23_SUPPORT);
+}
+
+static void mmc_test_prepare_sbc(struct mmc_test_card *test,
+				 struct mmc_request *mrq, unsigned int blocks)
+{
+	struct mmc_card *card = test->card;
+
+	if (!mrq->sbc || !mmc_host_cmd23(card->host) ||
+	    !mmc_test_card_cmd23(card) || !mmc_op_multi(mrq->cmd->opcode) ||
+	    (card->quirks & MMC_QUIRK_BLK_NO_CMD23)) {
+		mrq->sbc = NULL;
+		return;
+	}
+
+	mrq->sbc->opcode = MMC_SET_BLOCK_COUNT;
+	mrq->sbc->arg = blocks;
+	mrq->sbc->flags = MMC_RSP_R1 | MMC_CMD_AC;
+}
+
 /*
  * Fill in the mmc_request structure given a set of transfer parameters.
  */
···
 	mrq->data->flags = write ? MMC_DATA_WRITE : MMC_DATA_READ;
 	mrq->data->sg = sg;
 	mrq->data->sg_len = sg_len;
+
+	mmc_test_prepare_sbc(test, mrq, blocks);
 
 	mmc_set_data_timeout(mrq->data, test->card);
 }
···
 
 	ret = 0;
 
+	if (mrq->sbc && mrq->sbc->error)
+		ret = mrq->sbc->error;
 	if (!ret && mrq->cmd->error)
 		ret = mrq->cmd->error;
 	if (!ret && mrq->data->error)
···
 	return RESULT_FAIL;
 }
 
+struct mmc_test_req {
+	struct mmc_request mrq;
+	struct mmc_command sbc;
+	struct mmc_command cmd;
+	struct mmc_command stop;
+	struct mmc_command status;
+	struct mmc_data data;
+};
+
+static struct mmc_test_req *mmc_test_req_alloc(void)
+{
+	struct mmc_test_req *rq = kzalloc(sizeof(*rq), GFP_KERNEL);
+
+	if (rq) {
+		rq->mrq.cmd = &rq->cmd;
+		rq->mrq.data = &rq->data;
+		rq->mrq.stop = &rq->stop;
+	}
+
+	return rq;
+}
+
+static int mmc_test_send_status(struct mmc_test_card *test,
+				struct mmc_command *cmd)
+{
+	memset(cmd, 0, sizeof(*cmd));
+
+	cmd->opcode = MMC_SEND_STATUS;
+	if (!mmc_host_is_spi(test->card->host))
+		cmd->arg = test->card->rca << 16;
+	cmd->flags = MMC_RSP_SPI_R2 | MMC_RSP_R1 | MMC_CMD_AC;
+
+	return mmc_wait_for_cmd(test->card->host, cmd, 0);
+}
+
+static int mmc_test_ongoing_transfer(struct mmc_test_card *test,
+				     unsigned int dev_addr, int use_sbc,
+				     int repeat_cmd, int write, int use_areq)
+{
+	struct mmc_test_req *rq = mmc_test_req_alloc();
+	struct mmc_host *host = test->card->host;
+	struct mmc_test_area *t = &test->area;
+	struct mmc_async_req areq;
+	struct mmc_request *mrq;
+	unsigned long timeout;
+	bool expired = false;
+	int ret = 0, cmd_ret;
+	u32 status = 0;
+	int count = 0;
+
+	if (!rq)
+		return -ENOMEM;
+
+	mrq = &rq->mrq;
+	if (use_sbc)
+		mrq->sbc = &rq->sbc;
+	mrq->cap_cmd_during_tfr = true;
+
+	areq.mrq = mrq;
+	areq.err_check = mmc_test_check_result_async;
+
+	mmc_test_prepare_mrq(test, mrq, t->sg, t->sg_len, dev_addr, t->blocks,
+			     512, write);
+
+	if (use_sbc && t->blocks > 1 && !mrq->sbc) {
+		ret = mmc_host_cmd23(host) ?
+		      RESULT_UNSUP_CARD :
+		      RESULT_UNSUP_HOST;
+		goto out_free;
+	}
+
+	/* Start ongoing data request */
+	if (use_areq) {
+		mmc_start_req(host, &areq, &ret);
+		if (ret)
+			goto out_free;
+	} else {
+		mmc_wait_for_req(host, mrq);
+	}
+
+	timeout = jiffies + msecs_to_jiffies(3000);
+	do {
+		count += 1;
+
+		/* Send status command while data transfer in progress */
+		cmd_ret = mmc_test_send_status(test, &rq->status);
+		if (cmd_ret)
+			break;
+
+		status = rq->status.resp[0];
+		if (status & R1_ERROR) {
+			cmd_ret = -EIO;
+			break;
+		}
+
+		if (mmc_is_req_done(host, mrq))
+			break;
+
+		expired = time_after(jiffies, timeout);
+		if (expired) {
+			pr_info("%s: timeout waiting for Tran state status %#x\n",
+				mmc_hostname(host), status);
+			cmd_ret = -ETIMEDOUT;
+			break;
+		}
+	} while (repeat_cmd && R1_CURRENT_STATE(status) != R1_STATE_TRAN);
+
+	/* Wait for data request to complete */
+	if (use_areq)
+		mmc_start_req(host, NULL, &ret);
+	else
+		mmc_wait_for_req_done(test->card->host, mrq);
+
+	/*
+	 * For cap_cmd_during_tfr request, upper layer must send stop if
+	 * required.
+	 */
+	if (mrq->data->stop && (mrq->data->error || !mrq->sbc)) {
+		if (ret)
+			mmc_wait_for_cmd(host, mrq->data->stop, 0);
+		else
+			ret = mmc_wait_for_cmd(host, mrq->data->stop, 0);
+	}
+
+	if (ret)
+		goto out_free;
+
+	if (cmd_ret) {
+		pr_info("%s: Send Status failed: status %#x, error %d\n",
+			mmc_hostname(test->card->host), status, cmd_ret);
+	}
+
+	ret = mmc_test_check_result(test, mrq);
+	if (ret)
+		goto out_free;
+
+	ret = mmc_test_wait_busy(test);
+	if (ret)
+		goto out_free;
+
+	if (repeat_cmd && (t->blocks + 1) << 9 > t->max_tfr)
+		pr_info("%s: %d commands completed during transfer of %u blocks\n",
+			mmc_hostname(test->card->host), count, t->blocks);
+
+	if (cmd_ret)
+		ret = cmd_ret;
+out_free:
+	kfree(rq);
+
+	return ret;
+}
+
+static int __mmc_test_cmds_during_tfr(struct mmc_test_card *test,
+				      unsigned long sz, int use_sbc, int write,
+				      int use_areq)
+{
+	struct mmc_test_area *t = &test->area;
+	int ret;
+
+	if (!(test->card->host->caps & MMC_CAP_CMD_DURING_TFR))
+		return RESULT_UNSUP_HOST;
+
+	ret = mmc_test_area_map(test, sz, 0, 0);
+	if (ret)
+		return ret;
+
+	ret = mmc_test_ongoing_transfer(test, t->dev_addr, use_sbc, 0, write,
+					use_areq);
+	if (ret)
+		return ret;
+
+	return mmc_test_ongoing_transfer(test, t->dev_addr, use_sbc, 1, write,
+					 use_areq);
+}
+
+static int mmc_test_cmds_during_tfr(struct mmc_test_card *test, int use_sbc,
+				    int write, int use_areq)
+{
+	struct mmc_test_area *t = &test->area;
+	unsigned long sz;
+	int ret;
+
+	for (sz = 512; sz <= t->max_tfr; sz += 512) {
+		ret = __mmc_test_cmds_during_tfr(test, sz, use_sbc, write,
+						 use_areq);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
+/*
+ * Commands during read - no Set Block Count (CMD23).
+ */
+static int mmc_test_cmds_during_read(struct mmc_test_card *test)
+{
+	return mmc_test_cmds_during_tfr(test, 0, 0, 0);
+}
+
+/*
+ * Commands during write - no Set Block Count (CMD23).
+ */
+static int mmc_test_cmds_during_write(struct mmc_test_card *test)
+{
+	return mmc_test_cmds_during_tfr(test, 0, 1, 0);
+}
+
+/*
+ * Commands during read - use Set Block Count (CMD23).
+ */
+static int mmc_test_cmds_during_read_cmd23(struct mmc_test_card *test)
+{
+	return mmc_test_cmds_during_tfr(test, 1, 0, 0);
+}
+
+/*
+ * Commands during write - use Set Block Count (CMD23).
+ */
+static int mmc_test_cmds_during_write_cmd23(struct mmc_test_card *test)
+{
+	return mmc_test_cmds_during_tfr(test, 1, 1, 0);
+}
+
+/*
+ * Commands during non-blocking read - use Set Block Count (CMD23).
+ */
+static int mmc_test_cmds_during_read_cmd23_nonblock(struct mmc_test_card *test)
+{
+	return mmc_test_cmds_during_tfr(test, 1, 0, 1);
+}
+
+/*
+ * Commands during non-blocking write - use Set Block Count (CMD23).
+ */
+static int mmc_test_cmds_during_write_cmd23_nonblock(struct mmc_test_card *test)
+{
+	return mmc_test_cmds_during_tfr(test, 1, 1, 1);
+}
+
 static const struct mmc_test_case mmc_test_cases[] = {
 	{
 		.name = "Basic write (no data verification)",
···
 	{
 		.name = "Reset test",
 		.run = mmc_test_reset,
+	},
+
+	{
+		.name = "Commands during read - no Set Block Count (CMD23)",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_cmds_during_read,
+		.cleanup = mmc_test_area_cleanup,
+	},
+
+	{
+		.name = "Commands during write - no Set Block Count (CMD23)",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_cmds_during_write,
+		.cleanup = mmc_test_area_cleanup,
+	},
+
+	{
+		.name = "Commands during read - use Set Block Count (CMD23)",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_cmds_during_read_cmd23,
+		.cleanup = mmc_test_area_cleanup,
+	},
+
+	{
+		.name = "Commands during write - use Set Block Count (CMD23)",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_cmds_during_write_cmd23,
+		.cleanup = mmc_test_area_cleanup,
+	},
+
+	{
+		.name = "Commands during non-blocking read - use Set Block Count (CMD23)",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_cmds_during_read_cmd23_nonblock,
+		.cleanup = mmc_test_area_cleanup,
+	},
+
+	{
+		.name = "Commands during non-blocking write - use Set Block Count (CMD23)",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_cmds_during_write_cmd23_nonblock,
+		.cleanup = mmc_test_area_cleanup,
 	},
 };
+3 -1
drivers/mmc/card/queue.c
···
 
 #include <linux/mmc/card.h>
 #include <linux/mmc/host.h>
+
 #include "queue.h"
+#include "block.h"
 
 #define MMC_QUEUE_BOUNCESZ 65536
 
···
 		bool req_is_special = mmc_req_is_special(req);
 
 		set_current_state(TASK_RUNNING);
-		mq->issue_fn(mq, req);
+		mmc_blk_issue_rq(mq, req);
 		cond_resched();
 		if (mq->flags & MMC_QUEUE_NEW_REQUEST) {
 			mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
-2
drivers/mmc/card/queue.h
···
 	unsigned int		flags;
 #define MMC_QUEUE_SUSPENDED	(1 << 0)
 #define MMC_QUEUE_NEW_REQUEST	(1 << 1)
-
-	int			(*issue_fn)(struct mmc_queue *, struct request *);
 	void			*data;
 	struct request_queue	*queue;
 	struct mmc_queue_req	mqrq[2];
+151 -30
drivers/mmc/core/core.c
··· 58 58 */ 59 59 #define MMC_BKOPS_MAX_TIMEOUT (4 * 60 * 1000) /* max time to wait in ms */ 60 60 61 + /* The max erase timeout, used when host->max_busy_timeout isn't specified */ 62 + #define MMC_ERASE_TIMEOUT_MS (60 * 1000) /* 60 s */ 63 + 61 64 static const unsigned freqs[] = { 400000, 300000, 200000, 100000 }; 62 65 63 66 /* ··· 120 117 121 118 #endif /* CONFIG_FAIL_MMC_REQUEST */ 122 119 120 + static inline void mmc_complete_cmd(struct mmc_request *mrq) 121 + { 122 + if (mrq->cap_cmd_during_tfr && !completion_done(&mrq->cmd_completion)) 123 + complete_all(&mrq->cmd_completion); 124 + } 125 + 126 + void mmc_command_done(struct mmc_host *host, struct mmc_request *mrq) 127 + { 128 + if (!mrq->cap_cmd_during_tfr) 129 + return; 130 + 131 + mmc_complete_cmd(mrq); 132 + 133 + pr_debug("%s: cmd done, tfr ongoing (CMD%u)\n", 134 + mmc_hostname(host), mrq->cmd->opcode); 135 + } 136 + EXPORT_SYMBOL(mmc_command_done); 137 + 123 138 /** 124 139 * mmc_request_done - finish processing an MMC request 125 140 * @host: MMC host which completed request ··· 164 143 cmd->retries = 0; 165 144 } 166 145 146 + if (host->ongoing_mrq == mrq) 147 + host->ongoing_mrq = NULL; 148 + 149 + mmc_complete_cmd(mrq); 150 + 167 151 trace_mmc_request_done(host, mrq); 168 152 169 153 if (err && cmd->retries && !mmc_card_removed(host->card)) { ··· 181 155 } else { 182 156 mmc_should_fail_request(host, mrq); 183 157 184 - led_trigger_event(host->led, LED_OFF); 158 + if (!host->ongoing_mrq) 159 + led_trigger_event(host->led, LED_OFF); 185 160 186 161 if (mrq->sbc) { 187 162 pr_debug("%s: req done <CMD%u>: %d: %08x %08x %08x %08x\n", ··· 245 218 mmc_request_done(host, mrq); 246 219 return; 247 220 } 221 + } 222 + 223 + if (mrq->cap_cmd_during_tfr) { 224 + host->ongoing_mrq = mrq; 225 + /* 226 + * Retry path could come through here without having waiting on 227 + * cmd_completion, so ensure it is reinitialised. 
228 + */ 229 + reinit_completion(&mrq->cmd_completion); 248 230 } 249 231 250 232 trace_mmc_request_start(host, mrq); ··· 422 386 complete(&mrq->completion); 423 387 } 424 388 389 + static inline void mmc_wait_ongoing_tfr_cmd(struct mmc_host *host) 390 + { 391 + struct mmc_request *ongoing_mrq = READ_ONCE(host->ongoing_mrq); 392 + 393 + /* 394 + * If there is an ongoing transfer, wait for the command line to become 395 + * available. 396 + */ 397 + if (ongoing_mrq && !completion_done(&ongoing_mrq->cmd_completion)) 398 + wait_for_completion(&ongoing_mrq->cmd_completion); 399 + } 400 + 425 401 /* 426 402 *__mmc_start_data_req() - starts data request 427 403 * @host: MMC host to start the request ··· 441 393 * 442 394 * Sets the done callback to be called when request is completed by the card. 443 395 * Starts data mmc request execution 396 + * If an ongoing transfer is already in progress, wait for the command line 397 + * to become available before sending another command. 444 398 */ 445 399 static int __mmc_start_data_req(struct mmc_host *host, struct mmc_request *mrq) 446 400 { 447 401 int err; 448 402 403 + mmc_wait_ongoing_tfr_cmd(host); 404 + 449 405 mrq->done = mmc_wait_data_done; 450 406 mrq->host = host; 407 + 408 + init_completion(&mrq->cmd_completion); 451 409 452 410 err = mmc_start_request(host, mrq); 453 411 if (err) { 454 412 mrq->cmd->error = err; 413 + mmc_complete_cmd(mrq); 455 414 mmc_wait_data_done(mrq); 456 415 } 457 416 ··· 469 414 { 470 415 int err; 471 416 417 + mmc_wait_ongoing_tfr_cmd(host); 418 + 472 419 init_completion(&mrq->completion); 473 420 mrq->done = mmc_wait_done; 421 + 422 + init_completion(&mrq->cmd_completion); 474 423 475 424 err = mmc_start_request(host, mrq); 476 425 if (err) { 477 426 mrq->cmd->error = err; 427 + mmc_complete_cmd(mrq); 478 428 complete(&mrq->completion); 479 429 } 480 430 ··· 543 483 return err; 544 484 } 545 485 546 - static void mmc_wait_for_req_done(struct mmc_host *host, 547 - struct mmc_request *mrq) 
486 + void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq) 548 487 { 549 488 struct mmc_command *cmd; 550 489 ··· 584 525 585 526 mmc_retune_release(host); 586 527 } 528 + EXPORT_SYMBOL(mmc_wait_for_req_done); 529 + 530 + /** 531 + * mmc_is_req_done - Determine if a 'cap_cmd_during_tfr' request is done 532 + * @host: MMC host 533 + * @mrq: MMC request 534 + * 535 + * mmc_is_req_done() is used with requests that have 536 + * mrq->cap_cmd_during_tfr = true. mmc_is_req_done() must be called after 537 + * starting a request and before waiting for it to complete. That is, 538 + * either in between calls to mmc_start_req(), or after mmc_wait_for_req() 539 + * and before mmc_wait_for_req_done(). If it is called at other times the 540 + * result is not meaningful. 541 + */ 542 + bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq) 543 + { 544 + if (host->areq) 545 + return host->context_info.is_done_rcv; 546 + else 547 + return completion_done(&mrq->completion); 548 + } 549 + EXPORT_SYMBOL(mmc_is_req_done); 587 550 588 551 /** 589 552 * mmc_pre_req - Prepare for a new request ··· 726 645 * @mrq: MMC request to start 727 646 * 728 647 * Start a new MMC custom command request for a host, and wait 729 - * for the command to complete. Does not attempt to parse the 730 - * response. 648 + * for the command to complete. In the case of 'cap_cmd_during_tfr' 649 + * requests, the transfer is ongoing and the caller can issue further 650 + * commands that do not use the data lines, and then wait by calling 651 + * mmc_wait_for_req_done(). 652 + * Does not attempt to parse the response. 
731 653 */ 732 654 void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq) 733 655 { 734 656 __mmc_start_req(host, mrq); 735 - mmc_wait_for_req_done(host, mrq); 657 + 658 + if (!mrq->cap_cmd_during_tfr) 659 + mmc_wait_for_req_done(host, mrq); 736 660 } 737 661 EXPORT_SYMBOL(mmc_wait_for_req); 738 662 ··· 2288 2202 return err; 2289 2203 } 2290 2204 2205 + static unsigned int mmc_align_erase_size(struct mmc_card *card, 2206 + unsigned int *from, 2207 + unsigned int *to, 2208 + unsigned int nr) 2209 + { 2210 + unsigned int from_new = *from, nr_new = nr, rem; 2211 + 2212 + /* 2213 + * When the 'card->erase_size' is power of 2, we can use round_up/down() 2214 + * to align the erase size efficiently. 2215 + */ 2216 + if (is_power_of_2(card->erase_size)) { 2217 + unsigned int temp = from_new; 2218 + 2219 + from_new = round_up(temp, card->erase_size); 2220 + rem = from_new - temp; 2221 + 2222 + if (nr_new > rem) 2223 + nr_new -= rem; 2224 + else 2225 + return 0; 2226 + 2227 + nr_new = round_down(nr_new, card->erase_size); 2228 + } else { 2229 + rem = from_new % card->erase_size; 2230 + if (rem) { 2231 + rem = card->erase_size - rem; 2232 + from_new += rem; 2233 + if (nr_new > rem) 2234 + nr_new -= rem; 2235 + else 2236 + return 0; 2237 + } 2238 + 2239 + rem = nr_new % card->erase_size; 2240 + if (rem) 2241 + nr_new -= rem; 2242 + } 2243 + 2244 + if (nr_new == 0) 2245 + return 0; 2246 + 2247 + *to = from_new + nr_new; 2248 + *from = from_new; 2249 + 2250 + return nr_new; 2251 + } 2252 + 2291 2253 /** 2292 2254 * mmc_erase - erase sectors. 
2293 2255 * @card: card to erase ··· 2374 2240 return -EINVAL; 2375 2241 } 2376 2242 2377 - if (arg == MMC_ERASE_ARG) { 2378 - rem = from % card->erase_size; 2379 - if (rem) { 2380 - rem = card->erase_size - rem; 2381 - from += rem; 2382 - if (nr > rem) 2383 - nr -= rem; 2384 - else 2385 - return 0; 2386 - } 2387 - rem = nr % card->erase_size; 2388 - if (rem) 2389 - nr -= rem; 2390 - } 2243 + if (arg == MMC_ERASE_ARG) 2244 + nr = mmc_align_erase_size(card, &from, &to, nr); 2391 2245 2392 2246 if (nr == 0) 2393 2247 return 0; 2394 - 2395 - to = from + nr; 2396 2248 2397 2249 if (to <= from) 2398 2250 return -EINVAL; ··· 2472 2352 struct mmc_host *host = card->host; 2473 2353 unsigned int max_discard, x, y, qty = 0, max_qty, min_qty, timeout; 2474 2354 unsigned int last_timeout = 0; 2355 + unsigned int max_busy_timeout = host->max_busy_timeout ? 2356 + host->max_busy_timeout : MMC_ERASE_TIMEOUT_MS; 2475 2357 2476 2358 if (card->erase_shift) { 2477 2359 max_qty = UINT_MAX >> card->erase_shift; ··· 2496 2374 * matter what size of 'host->max_busy_timeout', but if the 2497 2375 * 'host->max_busy_timeout' is large enough for more discard sectors, 2498 2376 * then we can continue to increase the max discard sectors until we 2499 - * get a balance value. 2377 + * get a balance value. In cases when the 'host->max_busy_timeout' 2378 + * isn't specified, use the default max erase timeout. 
2500 2379 */ 2501 2380 do { 2502 2381 y = 0; 2503 2382 for (x = 1; x && x <= max_qty && max_qty - x >= qty; x <<= 1) { 2504 2383 timeout = mmc_erase_timeout(card, arg, qty + x); 2505 2384 2506 - if (qty + x > min_qty && 2507 - timeout > host->max_busy_timeout) 2385 + if (qty + x > min_qty && timeout > max_busy_timeout) 2508 2386 break; 2509 2387 2510 2388 if (timeout < last_timeout) ··· 2549 2427 struct mmc_host *host = card->host; 2550 2428 unsigned int max_discard, max_trim; 2551 2429 2552 - if (!host->max_busy_timeout) 2553 - return UINT_MAX; 2554 - 2555 2430 /* 2556 2431 * Without erase_group_def set, MMC erase timeout depends on clock 2557 2432 frequency which can change. In that case, the best choice is ··· 2566 2447 max_discard = 0; 2567 2448 } 2568 2449 pr_debug("%s: calculated max. discard sectors %u for timeout %u ms\n", 2569 - mmc_hostname(host), max_discard, host->max_busy_timeout); 2450 + mmc_hostname(host), max_discard, host->max_busy_timeout ? 2451 + host->max_busy_timeout : MMC_ERASE_TIMEOUT_MS); 2570 2452 return max_discard; 2571 2453 } 2572 2454 EXPORT_SYMBOL(mmc_calc_max_discard); ··· 2576 2456 { 2577 2457 struct mmc_command cmd = {0}; 2578 2458 2579 - if (mmc_card_blockaddr(card) || mmc_card_ddr52(card)) 2459 + if (mmc_card_blockaddr(card) || mmc_card_ddr52(card) || 2460 + mmc_card_hs400(card) || mmc_card_hs400es(card)) 2580 2461 return 0; 2581 2462 2582 2463 cmd.opcode = MMC_SET_BLOCKLEN;
+5 -4
drivers/mmc/core/mmc.c
··· 1029 1029 err = mmc_switch_status(card); 1030 1030 } 1031 1031 1032 + if (err) 1033 + pr_warn("%s: switch to high-speed failed, err:%d\n", 1034 + mmc_hostname(card->host), err); 1035 + 1032 1036 return err; 1033 1037 } 1034 1038 ··· 1269 1265 1270 1266 /* Switch card to HS mode */ 1271 1267 err = mmc_select_hs(card); 1272 - if (err) { 1273 - pr_err("%s: switch to high-speed failed, err:%d\n", 1274 - mmc_hostname(host), err); 1268 + if (err) 1275 1269 goto out_err; 1276 - } 1277 1270 1278 1271 err = mmc_switch_status(card); 1279 1272 if (err)
+9
drivers/mmc/core/pwrseq_simple.c
··· 16 16 #include <linux/device.h> 17 17 #include <linux/err.h> 18 18 #include <linux/gpio/consumer.h> 19 + #include <linux/delay.h> 20 + #include <linux/property.h> 19 21 20 22 #include <linux/mmc/host.h> 21 23 ··· 26 24 struct mmc_pwrseq_simple { 27 25 struct mmc_pwrseq pwrseq; 28 26 bool clk_enabled; 27 + u32 post_power_on_delay_ms; 29 28 struct clk *ext_clk; 30 29 struct gpio_descs *reset_gpios; 31 30 }; ··· 67 64 struct mmc_pwrseq_simple *pwrseq = to_pwrseq_simple(host->pwrseq); 68 65 69 66 mmc_pwrseq_simple_set_gpios_value(pwrseq, 0); 67 + 68 + if (pwrseq->post_power_on_delay_ms) 69 + msleep(pwrseq->post_power_on_delay_ms); 70 70 } 71 71 72 72 static void mmc_pwrseq_simple_power_off(struct mmc_host *host) ··· 116 110 PTR_ERR(pwrseq->reset_gpios) != -ENOSYS) { 117 111 return PTR_ERR(pwrseq->reset_gpios); 118 112 } 113 + 114 + device_property_read_u32(dev, "post-power-on-delay-ms", 115 + &pwrseq->post_power_on_delay_ms); 119 116 120 117 pwrseq->pwrseq.dev = dev; 121 118 pwrseq->pwrseq.ops = &mmc_pwrseq_simple_ops;
+19 -18
drivers/mmc/core/sd.c
··· 223 223 static int mmc_read_ssr(struct mmc_card *card) 224 224 { 225 225 unsigned int au, es, et, eo; 226 - int err, i; 227 - u32 *ssr; 226 + int i; 228 227 229 228 if (!(card->csd.cmdclass & CCC_APP_SPEC)) { 230 229 pr_warn("%s: card lacks mandatory SD Status function\n", ··· 231 232 return 0; 232 233 } 233 234 234 - ssr = kmalloc(64, GFP_KERNEL); 235 - if (!ssr) 236 - return -ENOMEM; 237 - 238 - err = mmc_app_sd_status(card, ssr); 239 - if (err) { 235 + if (mmc_app_sd_status(card, card->raw_ssr)) { 240 236 pr_warn("%s: problem reading SD Status register\n", 241 237 mmc_hostname(card->host)); 242 - err = 0; 243 - goto out; 238 + return 0; 244 239 } 245 240 246 241 for (i = 0; i < 16; i++) 247 - ssr[i] = be32_to_cpu(ssr[i]); 242 + card->raw_ssr[i] = be32_to_cpu(card->raw_ssr[i]); 248 243 249 244 /* 250 245 * UNSTUFF_BITS only works with four u32s so we have to offset the 251 246 * bitfield positions accordingly. 252 247 */ 253 - au = UNSTUFF_BITS(ssr, 428 - 384, 4); 248 + au = UNSTUFF_BITS(card->raw_ssr, 428 - 384, 4); 254 249 if (au) { 255 250 if (au <= 9 || card->scr.sda_spec3) { 256 251 card->ssr.au = sd_au_size[au]; 257 - es = UNSTUFF_BITS(ssr, 408 - 384, 16); 258 - et = UNSTUFF_BITS(ssr, 402 - 384, 6); 252 + es = UNSTUFF_BITS(card->raw_ssr, 408 - 384, 16); 253 + et = UNSTUFF_BITS(card->raw_ssr, 402 - 384, 6); 259 254 if (es && et) { 260 - eo = UNSTUFF_BITS(ssr, 400 - 384, 2); 255 + eo = UNSTUFF_BITS(card->raw_ssr, 400 - 384, 2); 261 256 card->ssr.erase_timeout = (et * 1000) / es; 262 257 card->ssr.erase_offset = eo * 1000; 263 258 } ··· 260 267 mmc_hostname(card->host)); 261 268 } 262 269 } 263 - out: 264 - kfree(ssr); 265 - return err; 270 + 271 + return 0; 266 272 } 267 273 268 274 /* ··· 658 666 MMC_DEV_ATTR(csd, "%08x%08x%08x%08x\n", card->raw_csd[0], card->raw_csd[1], 659 667 card->raw_csd[2], card->raw_csd[3]); 660 668 MMC_DEV_ATTR(scr, "%08x%08x\n", card->raw_scr[0], card->raw_scr[1]); 669 + MMC_DEV_ATTR(ssr, 670 + 
"%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x\n", 671 + card->raw_ssr[0], card->raw_ssr[1], card->raw_ssr[2], 672 + card->raw_ssr[3], card->raw_ssr[4], card->raw_ssr[5], 673 + card->raw_ssr[6], card->raw_ssr[7], card->raw_ssr[8], 674 + card->raw_ssr[9], card->raw_ssr[10], card->raw_ssr[11], 675 + card->raw_ssr[12], card->raw_ssr[13], card->raw_ssr[14], 676 + card->raw_ssr[15]); 661 677 MMC_DEV_ATTR(date, "%02d/%04d\n", card->cid.month, card->cid.year); 662 678 MMC_DEV_ATTR(erase_size, "%u\n", card->erase_size << 9); 663 679 MMC_DEV_ATTR(preferred_erase_size, "%u\n", card->pref_erase << 9); ··· 698 698 &dev_attr_cid.attr, 699 699 &dev_attr_csd.attr, 700 700 &dev_attr_scr.attr, 701 + &dev_attr_ssr.attr, 701 702 &dev_attr_date.attr, 702 703 &dev_attr_erase_size.attr, 703 704 &dev_attr_preferred_erase_size.attr,
+31 -16
drivers/mmc/core/sdio_io.c
··· 26 26 */ 27 27 void sdio_claim_host(struct sdio_func *func) 28 28 { 29 - BUG_ON(!func); 30 - BUG_ON(!func->card); 29 + if (WARN_ON(!func)) 30 + return; 31 31 32 32 mmc_claim_host(func->card->host); 33 33 } ··· 42 42 */ 43 43 void sdio_release_host(struct sdio_func *func) 44 44 { 45 - BUG_ON(!func); 46 - BUG_ON(!func->card); 45 + if (WARN_ON(!func)) 46 + return; 47 47 48 48 mmc_release_host(func->card->host); 49 49 } ··· 62 62 unsigned char reg; 63 63 unsigned long timeout; 64 64 65 - BUG_ON(!func); 66 - BUG_ON(!func->card); 65 + if (!func) 66 + return -EINVAL; 67 67 68 68 pr_debug("SDIO: Enabling device %s...\n", sdio_func_id(func)); 69 69 ··· 112 112 int ret; 113 113 unsigned char reg; 114 114 115 - BUG_ON(!func); 116 - BUG_ON(!func->card); 115 + if (!func) 116 + return -EINVAL; 117 117 118 118 pr_debug("SDIO: Disabling device %s...\n", sdio_func_id(func)); 119 119 ··· 307 307 unsigned max_blocks; 308 308 int ret; 309 309 310 + if (!func || (func->num > 7)) 311 + return -EINVAL; 312 + 310 313 /* Do the bulk of the transfer using block mode (if supported). 
*/ 311 314 if (func->card->cccr.multi_block && (size > sdio_max_byte_size(func))) { 312 315 /* Blocks per command is limited by host count, host transfer ··· 370 367 int ret; 371 368 u8 val; 372 369 373 - BUG_ON(!func); 370 + if (!func) { 371 + *err_ret = -EINVAL; 372 + return 0xFF; 373 + } 374 374 375 375 if (err_ret) 376 376 *err_ret = 0; ··· 404 398 { 405 399 int ret; 406 400 407 - BUG_ON(!func); 401 + if (!func) { 402 + *err_ret = -EINVAL; 403 + return; 404 + } 408 405 409 406 ret = mmc_io_rw_direct(func->card, 1, func->num, addr, b, NULL); 410 407 if (err_ret) ··· 632 623 int ret; 633 624 unsigned char val; 634 625 635 - BUG_ON(!func); 626 + if (!func) { 627 + *err_ret = -EINVAL; 628 + return 0xFF; 629 + } 636 630 637 631 if (err_ret) 638 632 *err_ret = 0; ··· 670 658 { 671 659 int ret; 672 660 673 - BUG_ON(!func); 661 + if (!func) { 662 + *err_ret = -EINVAL; 663 + return; 664 + } 674 665 675 666 if ((addr < 0xF0 || addr > 0xFF) && (!mmc_card_lenient_fn0(func->card))) { 676 667 if (err_ret) ··· 699 684 */ 700 685 mmc_pm_flag_t sdio_get_host_pm_caps(struct sdio_func *func) 701 686 { 702 - BUG_ON(!func); 703 - BUG_ON(!func->card); 687 + if (!func) 688 + return 0; 704 689 705 690 return func->card->host->pm_caps; 706 691 } ··· 722 707 { 723 708 struct mmc_host *host; 724 709 725 - BUG_ON(!func); 726 - BUG_ON(!func->card); 710 + if (!func) 711 + return -EINVAL; 727 712 728 713 host = func->card->host; 729 714
+2 -7
drivers/mmc/core/sdio_ops.c
··· 24 24 struct mmc_command cmd = {0}; 25 25 int i, err = 0; 26 26 27 - BUG_ON(!host); 28 - 29 27 cmd.opcode = SD_IO_SEND_OP_COND; 30 28 cmd.arg = ocr; 31 29 cmd.flags = MMC_RSP_SPI_R4 | MMC_RSP_R4 | MMC_CMD_BCR; ··· 69 71 struct mmc_command cmd = {0}; 70 72 int err; 71 73 72 - BUG_ON(!host); 73 - BUG_ON(fn > 7); 74 + if (fn > 7) 75 + return -EINVAL; 74 76 75 77 /* sanity check */ 76 78 if (addr & ~0x1FFFF) ··· 112 114 int mmc_io_rw_direct(struct mmc_card *card, int write, unsigned fn, 113 115 unsigned addr, u8 in, u8 *out) 114 116 { 115 - BUG_ON(!card); 116 117 return mmc_io_rw_direct_host(card->host, write, fn, addr, in, out); 117 118 } 118 119 ··· 126 129 unsigned int nents, left_size, i; 127 130 unsigned int seg_size = card->host->max_seg_size; 128 131 129 - BUG_ON(!card); 130 - BUG_ON(fn > 7); 131 132 WARN_ON(blksz == 0); 132 133 133 134 /* sanity check */
+4 -2
drivers/mmc/host/davinci_mmc.c
··· 1216 1216 } 1217 1217 1218 1218 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1219 - irq = platform_get_irq(pdev, 0); 1220 - if (!r || irq == NO_IRQ) 1219 + if (!r) 1221 1220 return -ENODEV; 1221 + irq = platform_get_irq(pdev, 0); 1222 + if (irq < 0) 1223 + return irq; 1222 1224 1223 1225 mem_size = resource_size(r); 1224 1226 mem = devm_request_mem_region(&pdev->dev, r->start, mem_size,
+5 -1
drivers/mmc/host/dw_mmc-exynos.c
··· 225 225 * Not supported to configure register 226 226 * related to HS400 227 227 */ 228 - if (priv->ctrl_type < DW_MCI_TYPE_EXYNOS5420) 228 + if (priv->ctrl_type < DW_MCI_TYPE_EXYNOS5420) { 229 + if (timing == MMC_TIMING_MMC_HS400) 230 + dev_warn(host->dev, 231 + "cannot configure HS400, unsupported chipset\n"); 229 232 return; 233 + } 230 234 231 235 dqs = priv->saved_dqs_en; 232 236 strobe = priv->saved_strobe_ctrl;
+6
drivers/mmc/host/dw_mmc-k3.c
··· 131 131 host->bus_hz = clk_get_rate(host->biu_clk); 132 132 } 133 133 134 + static int dw_mci_hi6220_execute_tuning(struct dw_mci_slot *slot, u32 opcode) 135 + { 136 + return 0; 137 + } 138 + 134 139 static const struct dw_mci_drv_data hi6220_data = { 135 140 .caps = dw_mci_hi6220_caps, 136 141 .switch_voltage = dw_mci_hi6220_switch_voltage, 137 142 .set_ios = dw_mci_hi6220_set_ios, 138 143 .parse_dt = dw_mci_hi6220_parse_dt, 144 + .execute_tuning = dw_mci_hi6220_execute_tuning, 139 145 }; 140 146 141 147 static const struct of_device_id dw_mci_k3_match[] = {
+229 -198
drivers/mmc/host/dw_mmc.c
··· 61 61 SDMMC_IDMAC_INT_FBE | SDMMC_IDMAC_INT_RI | \ 62 62 SDMMC_IDMAC_INT_TI) 63 63 64 + #define DESC_RING_BUF_SZ PAGE_SIZE 65 + 64 66 struct idmac_desc_64addr { 65 67 u32 des0; /* Control Descriptor */ 66 68 ··· 469 467 } 470 468 } 471 469 472 - static void dw_mci_translate_sglist(struct dw_mci *host, struct mmc_data *data, 473 - unsigned int sg_len) 474 - { 475 - unsigned int desc_len; 476 - int i; 477 - 478 - if (host->dma_64bit_address == 1) { 479 - struct idmac_desc_64addr *desc_first, *desc_last, *desc; 480 - 481 - desc_first = desc_last = desc = host->sg_cpu; 482 - 483 - for (i = 0; i < sg_len; i++) { 484 - unsigned int length = sg_dma_len(&data->sg[i]); 485 - 486 - u64 mem_addr = sg_dma_address(&data->sg[i]); 487 - 488 - for ( ; length ; desc++) { 489 - desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ? 490 - length : DW_MCI_DESC_DATA_LENGTH; 491 - 492 - length -= desc_len; 493 - 494 - /* 495 - * Set the OWN bit and disable interrupts 496 - * for this descriptor 497 - */ 498 - desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC | 499 - IDMAC_DES0_CH; 500 - 501 - /* Buffer length */ 502 - IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, desc_len); 503 - 504 - /* Physical address to DMA to/from */ 505 - desc->des4 = mem_addr & 0xffffffff; 506 - desc->des5 = mem_addr >> 32; 507 - 508 - /* Update physical address for the next desc */ 509 - mem_addr += desc_len; 510 - 511 - /* Save pointer to the last descriptor */ 512 - desc_last = desc; 513 - } 514 - } 515 - 516 - /* Set first descriptor */ 517 - desc_first->des0 |= IDMAC_DES0_FD; 518 - 519 - /* Set last descriptor */ 520 - desc_last->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC); 521 - desc_last->des0 |= IDMAC_DES0_LD; 522 - 523 - } else { 524 - struct idmac_desc *desc_first, *desc_last, *desc; 525 - 526 - desc_first = desc_last = desc = host->sg_cpu; 527 - 528 - for (i = 0; i < sg_len; i++) { 529 - unsigned int length = sg_dma_len(&data->sg[i]); 530 - 531 - u32 mem_addr = sg_dma_address(&data->sg[i]); 532 - 533 - for ( ; length ; 
desc++) { 534 - desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ? 535 - length : DW_MCI_DESC_DATA_LENGTH; 536 - 537 - length -= desc_len; 538 - 539 - /* 540 - * Set the OWN bit and disable interrupts 541 - * for this descriptor 542 - */ 543 - desc->des0 = cpu_to_le32(IDMAC_DES0_OWN | 544 - IDMAC_DES0_DIC | 545 - IDMAC_DES0_CH); 546 - 547 - /* Buffer length */ 548 - IDMAC_SET_BUFFER1_SIZE(desc, desc_len); 549 - 550 - /* Physical address to DMA to/from */ 551 - desc->des2 = cpu_to_le32(mem_addr); 552 - 553 - /* Update physical address for the next desc */ 554 - mem_addr += desc_len; 555 - 556 - /* Save pointer to the last descriptor */ 557 - desc_last = desc; 558 - } 559 - } 560 - 561 - /* Set first descriptor */ 562 - desc_first->des0 |= cpu_to_le32(IDMAC_DES0_FD); 563 - 564 - /* Set last descriptor */ 565 - desc_last->des0 &= cpu_to_le32(~(IDMAC_DES0_CH | 566 - IDMAC_DES0_DIC)); 567 - desc_last->des0 |= cpu_to_le32(IDMAC_DES0_LD); 568 - } 569 - 570 - wmb(); /* drain writebuffer */ 571 - } 572 - 573 - static int dw_mci_idmac_start_dma(struct dw_mci *host, unsigned int sg_len) 574 - { 575 - u32 temp; 576 - 577 - dw_mci_translate_sglist(host, host->data, sg_len); 578 - 579 - /* Make sure to reset DMA in case we did PIO before this */ 580 - dw_mci_ctrl_reset(host, SDMMC_CTRL_DMA_RESET); 581 - dw_mci_idmac_reset(host); 582 - 583 - /* Select IDMAC interface */ 584 - temp = mci_readl(host, CTRL); 585 - temp |= SDMMC_CTRL_USE_IDMAC; 586 - mci_writel(host, CTRL, temp); 587 - 588 - /* drain writebuffer */ 589 - wmb(); 590 - 591 - /* Enable the IDMAC */ 592 - temp = mci_readl(host, BMOD); 593 - temp |= SDMMC_IDMAC_ENABLE | SDMMC_IDMAC_FB; 594 - mci_writel(host, BMOD, temp); 595 - 596 - /* Start it running */ 597 - mci_writel(host, PLDMND, 1); 598 - 599 - return 0; 600 - } 601 - 602 470 static int dw_mci_idmac_init(struct dw_mci *host) 603 471 { 604 472 int i; ··· 476 604 if (host->dma_64bit_address == 1) { 477 605 struct idmac_desc_64addr *p; 478 606 /* Number of descriptors 
in the ring buffer */ 479 - host->ring_size = PAGE_SIZE / sizeof(struct idmac_desc_64addr); 607 + host->ring_size = 608 + DESC_RING_BUF_SZ / sizeof(struct idmac_desc_64addr); 480 609 481 610 /* Forward link the descriptor list */ 482 611 for (i = 0, p = host->sg_cpu; i < host->ring_size - 1; ··· 503 630 } else { 504 631 struct idmac_desc *p; 505 632 /* Number of descriptors in the ring buffer */ 506 - host->ring_size = PAGE_SIZE / sizeof(struct idmac_desc); 633 + host->ring_size = 634 + DESC_RING_BUF_SZ / sizeof(struct idmac_desc); 507 635 508 636 /* Forward link the descriptor list */ 509 637 for (i = 0, p = host->sg_cpu; ··· 543 669 } 544 670 545 671 return 0; 672 + } 673 + 674 + static inline int dw_mci_prepare_desc64(struct dw_mci *host, 675 + struct mmc_data *data, 676 + unsigned int sg_len) 677 + { 678 + unsigned int desc_len; 679 + struct idmac_desc_64addr *desc_first, *desc_last, *desc; 680 + unsigned long timeout; 681 + int i; 682 + 683 + desc_first = desc_last = desc = host->sg_cpu; 684 + 685 + for (i = 0; i < sg_len; i++) { 686 + unsigned int length = sg_dma_len(&data->sg[i]); 687 + 688 + u64 mem_addr = sg_dma_address(&data->sg[i]); 689 + 690 + for ( ; length ; desc++) { 691 + desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ? 692 + length : DW_MCI_DESC_DATA_LENGTH; 693 + 694 + length -= desc_len; 695 + 696 + /* 697 + * Wait for the former clear OWN bit operation 698 + * of IDMAC to make sure that this descriptor 699 + * isn't still owned by IDMAC as IDMAC's write 700 + * ops and CPU's read ops are asynchronous. 
701 + */ 702 + timeout = jiffies + msecs_to_jiffies(100); 703 + while (readl(&desc->des0) & IDMAC_DES0_OWN) { 704 + if (time_after(jiffies, timeout)) 705 + goto err_own_bit; 706 + udelay(10); 707 + } 708 + 709 + /* 710 + * Set the OWN bit and disable interrupts 711 + * for this descriptor 712 + */ 713 + desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC | 714 + IDMAC_DES0_CH; 715 + 716 + /* Buffer length */ 717 + IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, desc_len); 718 + 719 + /* Physical address to DMA to/from */ 720 + desc->des4 = mem_addr & 0xffffffff; 721 + desc->des5 = mem_addr >> 32; 722 + 723 + /* Update physical address for the next desc */ 724 + mem_addr += desc_len; 725 + 726 + /* Save pointer to the last descriptor */ 727 + desc_last = desc; 728 + } 729 + } 730 + 731 + /* Set first descriptor */ 732 + desc_first->des0 |= IDMAC_DES0_FD; 733 + 734 + /* Set last descriptor */ 735 + desc_last->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC); 736 + desc_last->des0 |= IDMAC_DES0_LD; 737 + 738 + return 0; 739 + err_own_bit: 740 + /* restore the descriptor chain as it's polluted */ 741 + dev_dbg(host->dev, "descriptor is still owned by IDMAC.\n"); 742 + memset(host->sg_cpu, 0, DESC_RING_BUF_SZ); 743 + dw_mci_idmac_init(host); 744 + return -EINVAL; 745 + } 746 + 747 + 748 + static inline int dw_mci_prepare_desc32(struct dw_mci *host, 749 + struct mmc_data *data, 750 + unsigned int sg_len) 751 + { 752 + unsigned int desc_len; 753 + struct idmac_desc *desc_first, *desc_last, *desc; 754 + unsigned long timeout; 755 + int i; 756 + 757 + desc_first = desc_last = desc = host->sg_cpu; 758 + 759 + for (i = 0; i < sg_len; i++) { 760 + unsigned int length = sg_dma_len(&data->sg[i]); 761 + 762 + u32 mem_addr = sg_dma_address(&data->sg[i]); 763 + 764 + for ( ; length ; desc++) { 765 + desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ?
766 + length : DW_MCI_DESC_DATA_LENGTH; 767 + 768 + length -= desc_len; 769 + 770 + /* 771 + * Wait for the former clear OWN bit operation 772 + * of IDMAC to make sure that this descriptor 773 + * isn't still owned by IDMAC as IDMAC's write 774 + * ops and CPU's read ops are asynchronous. 775 + */ 776 + timeout = jiffies + msecs_to_jiffies(100); 777 + while (readl(&desc->des0) & 778 + cpu_to_le32(IDMAC_DES0_OWN)) { 779 + if (time_after(jiffies, timeout)) 780 + goto err_own_bit; 781 + udelay(10); 782 + } 783 + 784 + /* 785 + * Set the OWN bit and disable interrupts 786 + * for this descriptor 787 + */ 788 + desc->des0 = cpu_to_le32(IDMAC_DES0_OWN | 789 + IDMAC_DES0_DIC | 790 + IDMAC_DES0_CH); 791 + 792 + /* Buffer length */ 793 + IDMAC_SET_BUFFER1_SIZE(desc, desc_len); 794 + 795 + /* Physical address to DMA to/from */ 796 + desc->des2 = cpu_to_le32(mem_addr); 797 + 798 + /* Update physical address for the next desc */ 799 + mem_addr += desc_len; 800 + 801 + /* Save pointer to the last descriptor */ 802 + desc_last = desc; 803 + } 804 + } 805 + 806 + /* Set first descriptor */ 807 + desc_first->des0 |= cpu_to_le32(IDMAC_DES0_FD); 808 + 809 + /* Set last descriptor */ 810 + desc_last->des0 &= cpu_to_le32(~(IDMAC_DES0_CH | 811 + IDMAC_DES0_DIC)); 812 + desc_last->des0 |= cpu_to_le32(IDMAC_DES0_LD); 813 + 814 + return 0; 815 + err_own_bit: 816 + /* restore the descriptor chain as it's polluted */ 817 + dev_dbg(host->dev, "descriptor is still owned by IDMAC.\n"); 818 + memset(host->sg_cpu, 0, DESC_RING_BUF_SZ); 819 + dw_mci_idmac_init(host); 820 + return -EINVAL; 821 + } 822 + 823 + static int dw_mci_idmac_start_dma(struct dw_mci *host, unsigned int sg_len) 824 + { 825 + u32 temp; 826 + int ret; 827 + 828 + if (host->dma_64bit_address == 1) 829 + ret = dw_mci_prepare_desc64(host, host->data, sg_len); 830 + else 831 + ret = dw_mci_prepare_desc32(host, host->data, sg_len); 832 + 833 + if (ret) 834 + goto out; 835 + 836 + /* drain writebuffer */ 837 + wmb(); 838 + 839 + /*
Make sure to reset DMA in case we did PIO before this */ 840 + dw_mci_ctrl_reset(host, SDMMC_CTRL_DMA_RESET); 841 + dw_mci_idmac_reset(host); 842 + 843 + /* Select IDMAC interface */ 844 + temp = mci_readl(host, CTRL); 845 + temp |= SDMMC_CTRL_USE_IDMAC; 846 + mci_writel(host, CTRL, temp); 847 + 848 + /* drain writebuffer */ 849 + wmb(); 850 + 851 + /* Enable the IDMAC */ 852 + temp = mci_readl(host, BMOD); 853 + temp |= SDMMC_IDMAC_ENABLE | SDMMC_IDMAC_FB; 854 + mci_writel(host, BMOD, temp); 855 + 856 + /* Start it running */ 857 + mci_writel(host, PLDMND, 1); 858 + 859 + out: 860 + return ret; 546 861 } 547 862 548 863 static const struct dw_mci_dma_ops dw_mci_idmac_ops = { ··· 939 876 * MSIZE is '1', 940 877 * if blksz is not a multiple of the FIFO width 941 878 */ 942 - if (blksz % fifo_width) { 943 - msize = 0; 944 - rx_wmark = 1; 879 + if (blksz % fifo_width) 945 880 goto done; 946 - } 947 881 948 882 do { 949 883 if (!((blksz_depth % mszs[idx]) || ··· 1058 998 spin_unlock_irqrestore(&host->irq_lock, irqflags); 1059 999 1060 1000 if (host->dma_ops->start(host, sg_len)) { 1061 - /* We can't do DMA */ 1062 - dev_err(host->dev, "%s: failed to start DMA.\n", __func__); 1001 + /* We can't do DMA, try PIO for this one */ 1002 + dev_dbg(host->dev, 1003 + "%s: fall back to PIO mode for current transfer\n", 1004 + __func__); 1063 1005 return -ENODEV; 1064 1006 } 1065 1007 ··· 1757 1695 data->error = -ETIMEDOUT; 1758 1696 } else if (host->dir_status == 1759 1697 DW_MCI_RECV_STATUS) { 1760 - data->error = -EIO; 1698 + data->error = -EILSEQ; 1761 1699 } 1762 1700 } else { 1763 1701 /* SDMMC_INT_SBE is included */ 1764 - data->error = -EIO; 1702 + data->error = -EILSEQ; 1765 1703 } 1766 1704 1767 1705 dev_dbg(host->dev, "data error, status 0x%08x\n", status); ··· 2589 2527 return IRQ_HANDLED; 2590 2528 } 2591 2529 2592 - #ifdef CONFIG_OF 2593 - /* given a slot, find out the device node representing that slot */ 2594 - static struct device_node 
*dw_mci_of_find_slot_node(struct dw_mci_slot *slot) 2595 - { 2596 - struct device *dev = slot->mmc->parent; 2597 - struct device_node *np; 2598 - const __be32 *addr; 2599 - int len; 2600 - 2601 - if (!dev || !dev->of_node) 2602 - return NULL; 2603 - 2604 - for_each_child_of_node(dev->of_node, np) { 2605 - addr = of_get_property(np, "reg", &len); 2606 - if (!addr || (len < sizeof(int))) 2607 - continue; 2608 - if (be32_to_cpup(addr) == slot->id) 2609 - return np; 2610 - } 2611 - return NULL; 2612 - } 2613 - 2614 - static void dw_mci_slot_of_parse(struct dw_mci_slot *slot) 2615 - { 2616 - struct device_node *np = dw_mci_of_find_slot_node(slot); 2617 - 2618 - if (!np) 2619 - return; 2620 - 2621 - if (of_property_read_bool(np, "disable-wp")) { 2622 - slot->mmc->caps2 |= MMC_CAP2_NO_WRITE_PROTECT; 2623 - dev_warn(slot->mmc->parent, 2624 - "Slot quirk 'disable-wp' is deprecated\n"); 2625 - } 2626 - } 2627 - #else /* CONFIG_OF */ 2628 - static void dw_mci_slot_of_parse(struct dw_mci_slot *slot) 2629 - { 2630 - } 2631 - #endif /* CONFIG_OF */ 2632 - 2633 2530 static int dw_mci_init_slot(struct dw_mci *host, unsigned int id) 2634 2531 { 2635 2532 struct mmc_host *mmc; ··· 2650 2629 2651 2630 if (host->pdata->caps2) 2652 2631 mmc->caps2 = host->pdata->caps2; 2653 - 2654 - dw_mci_slot_of_parse(slot); 2655 2632 2656 2633 ret = mmc_of_parse(mmc); 2657 2634 if (ret) ··· 2755 2736 } 2756 2737 2757 2738 /* Alloc memory for sg translation */ 2758 - host->sg_cpu = dmam_alloc_coherent(host->dev, PAGE_SIZE, 2739 + host->sg_cpu = dmam_alloc_coherent(host->dev, 2740 + DESC_RING_BUF_SZ, 2759 2741 &host->sg_dma, GFP_KERNEL); 2760 2742 if (!host->sg_cpu) { 2761 2743 dev_err(host->dev, ··· 2939 2919 if (!pdata) 2940 2920 return ERR_PTR(-ENOMEM); 2941 2921 2922 + /* find reset controller when exist */ 2923 + pdata->rstc = devm_reset_control_get_optional(dev, NULL); 2924 + if (IS_ERR(pdata->rstc)) { 2925 + if (PTR_ERR(pdata->rstc) == -EPROBE_DEFER) 2926 + return ERR_PTR(-EPROBE_DEFER); 2927 + 
} 2928 + 2942 2929 /* find out number of slots supported */ 2943 2930 of_property_read_u32(np, "num-slots", &pdata->num_slots); 2944 2931 ··· 2962 2935 ret = drv_data->parse_dt(host); 2963 2936 if (ret) 2964 2937 return ERR_PTR(ret); 2965 - } 2966 - 2967 - if (of_find_property(np, "supports-highspeed", NULL)) { 2968 - dev_info(dev, "supports-highspeed property is deprecated.\n"); 2969 - pdata->caps |= MMC_CAP_SD_HIGHSPEED | MMC_CAP_MMC_HIGHSPEED; 2970 2938 } 2971 2939 2972 2940 return pdata; ··· 3012 2990 3013 2991 if (!host->pdata) { 3014 2992 host->pdata = dw_mci_parse_dt(host); 3015 - if (IS_ERR(host->pdata)) { 2993 + if (PTR_ERR(host->pdata) == -EPROBE_DEFER) { 2994 + return -EPROBE_DEFER; 2995 + } else if (IS_ERR(host->pdata)) { 3016 2996 dev_err(host->dev, "platform data not available\n"); 3017 2997 return -EINVAL; 3018 2998 } ··· 3066 3042 "implementation specific init failed\n"); 3067 3043 goto err_clk_ciu; 3068 3044 } 3045 + } 3046 + 3047 + if (!IS_ERR(host->pdata->rstc)) { 3048 + reset_control_assert(host->pdata->rstc); 3049 + usleep_range(10, 50); 3050 + reset_control_deassert(host->pdata->rstc); 3069 3051 } 3070 3052 3071 3053 setup_timer(&host->cmd11_timer, ··· 3223 3193 if (host->use_dma && host->dma_ops->exit) 3224 3194 host->dma_ops->exit(host); 3225 3195 3196 + if (!IS_ERR(host->pdata->rstc)) 3197 + reset_control_assert(host->pdata->rstc); 3198 + 3226 3199 err_clk_ciu: 3227 - if (!IS_ERR(host->ciu_clk)) 3228 - clk_disable_unprepare(host->ciu_clk); 3200 + clk_disable_unprepare(host->ciu_clk); 3229 3201 3230 3202 err_clk_biu: 3231 - if (!IS_ERR(host->biu_clk)) 3232 - clk_disable_unprepare(host->biu_clk); 3203 + clk_disable_unprepare(host->biu_clk); 3233 3204 3234 3205 return ret; 3235 3206 } ··· 3256 3225 if (host->use_dma && host->dma_ops->exit) 3257 3226 host->dma_ops->exit(host); 3258 3227 3259 - if (!IS_ERR(host->ciu_clk)) 3260 - clk_disable_unprepare(host->ciu_clk); 3228 + if (!IS_ERR(host->pdata->rstc)) 3229 + 
reset_control_assert(host->pdata->rstc); 3261 3230 3262 - if (!IS_ERR(host->biu_clk)) 3263 - clk_disable_unprepare(host->biu_clk); 3231 + clk_disable_unprepare(host->ciu_clk); 3232 + clk_disable_unprepare(host->biu_clk); 3264 3233 } 3265 3234 EXPORT_SYMBOL(dw_mci_remove); 3266 3235
+3 -2
drivers/mmc/host/moxart-mmc.c
··· 257 257 static void moxart_transfer_dma(struct mmc_data *data, struct moxart_host *host) 258 258 { 259 259 u32 len, dir_data, dir_slave; 260 - unsigned long dma_time; 260 + long dma_time; 261 261 struct dma_async_tx_descriptor *desc = NULL; 262 262 struct dma_chan *dma_chan; 263 263 ··· 397 397 static void moxart_request(struct mmc_host *mmc, struct mmc_request *mrq) 398 398 { 399 399 struct moxart_host *host = mmc_priv(mmc); 400 - unsigned long pio_time, flags; 400 + long pio_time; 401 + unsigned long flags; 401 402 u32 status; 402 403 403 404 spin_lock_irqsave(&host->lock, flags);
+1 -1
drivers/mmc/host/rtsx_pci_sdmmc.c
··· 126 126 return SD_RSP_TYPE_R0; 127 127 case MMC_RSP_R1: 128 128 return SD_RSP_TYPE_R1; 129 - case MMC_RSP_R1 & ~MMC_RSP_CRC: 129 + case MMC_RSP_R1_NO_CRC: 130 130 return SD_RSP_TYPE_R1 | SD_NO_CHECK_CRC7; 131 131 case MMC_RSP_R1B: 132 132 return SD_RSP_TYPE_R1b;
+1 -1
drivers/mmc/host/rtsx_usb_sdmmc.c
··· 324 324 case MMC_RSP_R1: 325 325 rsp_type = SD_RSP_TYPE_R1; 326 326 break; 327 - case MMC_RSP_R1 & ~MMC_RSP_CRC: 327 + case MMC_RSP_R1_NO_CRC: 328 328 rsp_type = SD_RSP_TYPE_R1 | SD_NO_CHECK_CRC7; 329 329 break; 330 330 case MMC_RSP_R1B:
+1 -1
drivers/mmc/host/sdhci-acpi.c
··· 275 275 .chip = &sdhci_acpi_chip_int, 276 276 .caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE | 277 277 MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | 278 - MMC_CAP_WAIT_WHILE_BUSY, 278 + MMC_CAP_CMD_DURING_TFR | MMC_CAP_WAIT_WHILE_BUSY, 279 279 .caps2 = MMC_CAP2_HC_ERASE_SZ, 280 280 .flags = SDHCI_ACPI_RUNTIME_PM, 281 281 .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
+4 -2
drivers/mmc/host/sdhci-bcm-kona.c
··· 253 253 goto err_pltfm_free; 254 254 } 255 255 256 - if (clk_set_rate(pltfm_priv->clk, host->mmc->f_max) != 0) { 256 + ret = clk_set_rate(pltfm_priv->clk, host->mmc->f_max); 257 + if (ret) { 257 258 dev_err(dev, "Failed to set rate core clock\n"); 258 259 goto err_pltfm_free; 259 260 } 260 261 261 - if (clk_prepare_enable(pltfm_priv->clk) != 0) { 262 + ret = clk_prepare_enable(pltfm_priv->clk); 263 + if (ret) { 262 264 dev_err(dev, "Failed to enable core clock\n"); 263 265 goto err_pltfm_free; 264 266 }
+3 -1
drivers/mmc/host/sdhci-brcmstb.c
··· 98 98 * properties through mmc_of_parse(). 99 99 */ 100 100 host->caps = sdhci_readl(host, SDHCI_CAPABILITIES); 101 + if (of_device_is_compatible(pdev->dev.of_node, "brcm,bcm7425-sdhci")) 102 + host->caps &= ~SDHCI_CAN_64BIT; 101 103 host->caps1 = sdhci_readl(host, SDHCI_CAPABILITIES_1); 102 104 host->caps1 &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_SDR104 | 103 105 SDHCI_SUPPORT_DDR50); ··· 123 121 124 122 static const struct of_device_id sdhci_brcm_of_match[] = { 125 123 { .compatible = "brcm,bcm7425-sdhci" }, 124 + { .compatible = "brcm,bcm7445-sdhci" }, 126 125 {}, 127 126 }; 128 127 MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match); ··· 131 128 static struct platform_driver sdhci_brcmstb_driver = { 132 129 .driver = { 133 130 .name = "sdhci-brcmstb", 134 - .owner = THIS_MODULE, 135 131 .pm = &sdhci_brcmstb_pmops, 136 132 .of_match_table = of_match_ptr(sdhci_brcm_of_match), 137 133 },
+5 -2
drivers/mmc/host/sdhci-esdhc-imx.c
··· 31 31 #include "sdhci-pltfm.h" 32 32 #include "sdhci-esdhc.h" 33 33 34 + #define ESDHC_SYS_CTRL_DTOCV_MASK 0x0f 34 35 #define ESDHC_CTRL_D3CD 0x08 35 36 #define ESDHC_BURST_LEN_EN_INCR (1 << 27) 36 37 /* VENDOR SPEC register */ ··· 929 928 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 930 929 struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host); 931 930 932 - return esdhc_is_usdhc(imx_data) ? 1 << 28 : 1 << 27; 931 + /* Doc Errata: the uSDHC actual maximum timeout count is 1 << 29 */ 932 + return esdhc_is_usdhc(imx_data) ? 1 << 29 : 1 << 27; 933 933 } 934 934 935 935 static void esdhc_set_timeout(struct sdhci_host *host, struct mmc_command *cmd) ··· 939 937 struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host); 940 938 941 939 /* use maximum timeout counter */ 942 - sdhci_writeb(host, esdhc_is_usdhc(imx_data) ? 0xF : 0xE, 940 + esdhc_clrset_le(host, ESDHC_SYS_CTRL_DTOCV_MASK, 941 + esdhc_is_usdhc(imx_data) ? 0xF : 0xE, 943 942 SDHCI_TIMEOUT_CONTROL); 944 943 } 945 944
+120 -16
drivers/mmc/host/sdhci-of-arasan.c
··· 26 26 #include <linux/phy/phy.h> 27 27 #include <linux/regmap.h> 28 28 #include "sdhci-pltfm.h" 29 + #include <linux/of.h> 29 30 30 31 #define SDHCI_ARASAN_CLK_CTRL_OFFSET 0x2c 31 32 #define SDHCI_ARASAN_VENDOR_REGISTER 0x78 ··· 35 34 #define CLK_CTRL_TIMEOUT_SHIFT 16 36 35 #define CLK_CTRL_TIMEOUT_MASK (0xf << CLK_CTRL_TIMEOUT_SHIFT) 37 36 #define CLK_CTRL_TIMEOUT_MIN_EXP 13 37 + 38 + #define PHY_CLK_TOO_SLOW_HZ 400000 38 39 39 40 /* 40 41 * On some SoCs the syscon area has a feature where the upper 16-bits of ··· 68 65 * accessible via the syscon API. 69 66 * 70 67 * @baseclkfreq: Where to find corecfg_baseclkfreq 68 + * @clockmultiplier: Where to find corecfg_clockmultiplier 71 69 * @hiword_update: If true, use HIWORD_UPDATE to access the syscon 72 70 */ 73 71 struct sdhci_arasan_soc_ctl_map { 74 72 struct sdhci_arasan_soc_ctl_field baseclkfreq; 73 + struct sdhci_arasan_soc_ctl_field clockmultiplier; 75 74 bool hiword_update; 76 75 }; 77 76 ··· 82 77 * @host: Pointer to the main SDHCI host structure. 83 78 * @clk_ahb: Pointer to the AHB clock 84 79 * @phy: Pointer to the generic phy 80 + * @is_phy_on: True if the PHY is on; false if not. 85 81 * @sdcardclk_hw: Struct for the clock we might provide to a PHY. 86 82 * @sdcardclk: Pointer to normal 'struct clock' for sdcardclk_hw. 87 83 * @soc_ctl_base: Pointer to regmap for syscon for soc_ctl registers. 
··· 92 86 struct sdhci_host *host; 93 87 struct clk *clk_ahb; 94 88 struct phy *phy; 89 + bool is_phy_on; 95 90 96 91 struct clk_hw sdcardclk_hw; 97 92 struct clk *sdcardclk; 98 93 99 94 struct regmap *soc_ctl_base; 100 95 const struct sdhci_arasan_soc_ctl_map *soc_ctl_map; 96 + unsigned int quirks; /* Arasan deviations from spec */ 97 + 98 + /* Controller does not have CD wired and will not function normally without */ 99 + #define SDHCI_ARASAN_QUIRK_FORCE_CDTEST BIT(0) 101 100 }; 102 101 103 102 static const struct sdhci_arasan_soc_ctl_map rk3399_soc_ctl_map = { 104 103 .baseclkfreq = { .reg = 0xf000, .width = 8, .shift = 8 }, 104 + .clockmultiplier = { .reg = 0xf02c, .width = 8, .shift = 0}, 105 105 .hiword_update = true, 106 106 }; 107 107 ··· 182 170 struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host); 183 171 bool ctrl_phy = false; 184 172 185 - if (clock > MMC_HIGH_52_MAX_DTR && (!IS_ERR(sdhci_arasan->phy))) 186 - ctrl_phy = true; 173 + if (!IS_ERR(sdhci_arasan->phy)) { 174 + if (!sdhci_arasan->is_phy_on && clock <= PHY_CLK_TOO_SLOW_HZ) { 175 + /* 176 + * If PHY off, set clock to max speed and power PHY on. 177 + * 178 + * Although PHY docs apparently suggest power cycling 179 + * when changing the clock the PHY doesn't like to be 180 + * powered on while at low speeds like those used in ID 181 + * mode. Even worse is powering the PHY on while the 182 + * clock is off. 183 + * 184 + * To workaround the PHY limitations, the best we can 185 + * do is to power it on at a faster speed and then slam 186 + * through low speeds without power cycling. 187 + */ 188 + sdhci_set_clock(host, host->max_clk); 189 + spin_unlock_irq(&host->lock); 190 + phy_power_on(sdhci_arasan->phy); 191 + spin_lock_irq(&host->lock); 192 + sdhci_arasan->is_phy_on = true; 187 193 188 - if (ctrl_phy) { 194 + /* 195 + * We'll now fall through to the below case with 196 + * ctrl_phy = false (so we won't turn off/on). The 197 + * sdhci_set_clock() will set the real clock. 
198 + */ 199 + } else if (clock > PHY_CLK_TOO_SLOW_HZ) { 200 + /* 201 + * At higher clock speeds the PHY is fine being power 202 + * cycled and docs say you _should_ power cycle when 203 + * changing clock speeds. 204 + */ 205 + ctrl_phy = true; 206 + } 207 + } 208 + 209 + if (ctrl_phy && sdhci_arasan->is_phy_on) { 189 210 spin_unlock_irq(&host->lock); 190 211 phy_power_off(sdhci_arasan->phy); 191 212 spin_lock_irq(&host->lock); 213 + sdhci_arasan->is_phy_on = false; 192 214 } 193 215 194 216 sdhci_set_clock(host, clock); ··· 231 185 spin_unlock_irq(&host->lock); 232 186 phy_power_on(sdhci_arasan->phy); 233 187 spin_lock_irq(&host->lock); 188 + sdhci_arasan->is_phy_on = true; 234 189 } 235 190 } 236 191 ··· 250 203 writel(vendor, host->ioaddr + SDHCI_ARASAN_VENDOR_REGISTER); 251 204 } 252 205 206 + void sdhci_arasan_reset(struct sdhci_host *host, u8 mask) 207 + { 208 + u8 ctrl; 209 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 210 + struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host); 211 + 212 + sdhci_reset(host, mask); 213 + 214 + if (sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_FORCE_CDTEST) { 215 + ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL); 216 + ctrl |= SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN; 217 + sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL); 218 + } 219 + } 220 + 253 221 static struct sdhci_ops sdhci_arasan_ops = { 254 222 .set_clock = sdhci_arasan_set_clock, 255 223 .get_max_clock = sdhci_pltfm_clk_get_max_clock, 256 224 .get_timeout_clock = sdhci_arasan_get_timeout_clock, 257 225 .set_bus_width = sdhci_set_bus_width, 258 - .reset = sdhci_reset, 226 + .reset = sdhci_arasan_reset, 259 227 .set_uhs_signaling = sdhci_set_uhs_signaling, 260 228 }; 261 229 ··· 301 239 if (ret) 302 240 return ret; 303 241 304 - if (!IS_ERR(sdhci_arasan->phy)) { 242 + if (!IS_ERR(sdhci_arasan->phy) && sdhci_arasan->is_phy_on) { 305 243 ret = phy_power_off(sdhci_arasan->phy); 306 244 if (ret) { 307 245 dev_err(dev, "Cannot power off phy.\n"); 308 
246 sdhci_resume_host(host); 309 247 return ret; 310 248 } 249 + sdhci_arasan->is_phy_on = false; 311 250 } 312 251 313 252 clk_disable(pltfm_host->clk); ··· 344 281 return ret; 345 282 } 346 283 347 - if (!IS_ERR(sdhci_arasan->phy)) { 284 + if (!IS_ERR(sdhci_arasan->phy) && host->mmc->actual_clock) { 348 285 ret = phy_power_on(sdhci_arasan->phy); 349 286 if (ret) { 350 287 dev_err(dev, "Cannot power on phy.\n"); 351 288 return ret; 352 289 } 290 + sdhci_arasan->is_phy_on = true; 353 291 } 354 292 355 293 return sdhci_resume_host(host); ··· 400 336 static const struct clk_ops arasan_sdcardclk_ops = { 401 337 .recalc_rate = sdhci_arasan_sdcardclk_recalc_rate, 402 338 }; 339 + 340 + /** 341 + * sdhci_arasan_update_clockmultiplier - Set corecfg_clockmultiplier 342 + * 343 + * The corecfg_clockmultiplier is supposed to contain clock multiplier 344 + * value of programmable clock generator. 345 + * 346 + * NOTES: 347 + * - Many existing devices don't seem to do this and work fine. To keep 348 + * compatibility for old hardware where the device tree doesn't provide a 349 + * register map, this function is a noop if a soc_ctl_map hasn't been provided 350 + * for this platform. 351 + * - The value of corecfg_clockmultiplier should sync with that of corresponding 352 + * value reading from sdhci_capability_register. So this function is called 353 + * once at probe time and never called again. 
354 + * 355 + * @host: The sdhci_host 356 + */ 357 + static void sdhci_arasan_update_clockmultiplier(struct sdhci_host *host, 358 + u32 value) 359 + { 360 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 361 + struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host); 362 + const struct sdhci_arasan_soc_ctl_map *soc_ctl_map = 363 + sdhci_arasan->soc_ctl_map; 364 + 365 + /* Having a map is optional */ 366 + if (!soc_ctl_map) 367 + return; 368 + 369 + /* If we have a map, we expect to have a syscon */ 370 + if (!sdhci_arasan->soc_ctl_base) { 371 + pr_warn("%s: Have regmap, but no soc-ctl-syscon\n", 372 + mmc_hostname(host->mmc)); 373 + return; 374 + } 375 + 376 + sdhci_arasan_syscon_write(host, &soc_ctl_map->clockmultiplier, value); 377 + } 403 378 404 379 /** 405 380 * sdhci_arasan_update_baseclkfreq - Set corecfg_baseclkfreq ··· 565 462 struct sdhci_host *host; 566 463 struct sdhci_pltfm_host *pltfm_host; 567 464 struct sdhci_arasan_data *sdhci_arasan; 465 + struct device_node *np = pdev->dev.of_node; 568 466 569 467 host = sdhci_pltfm_init(pdev, &sdhci_arasan_pdata, 570 468 sizeof(*sdhci_arasan)); ··· 620 516 } 621 517 622 518 sdhci_get_of_property(pdev); 519 + 520 + if (of_property_read_bool(np, "xlnx,fails-without-test-cd")) 521 + sdhci_arasan->quirks |= SDHCI_ARASAN_QUIRK_FORCE_CDTEST; 522 + 623 523 pltfm_host->clk = clk_xin; 524 + 525 + if (of_device_is_compatible(pdev->dev.of_node, 526 + "rockchip,rk3399-sdhci-5.1")) 527 + sdhci_arasan_update_clockmultiplier(host, 0x0); 624 528 625 529 sdhci_arasan_update_baseclkfreq(host); 626 530 ··· 659 547 goto unreg_clk; 660 548 } 661 549 662 - ret = phy_power_on(sdhci_arasan->phy); 663 - if (ret < 0) { 664 - dev_err(&pdev->dev, "phy_power_on err.\n"); 665 - goto err_phy_power; 666 - } 667 - 668 550 host->mmc_host_ops.hs400_enhanced_strobe = 669 551 sdhci_arasan_hs400_enhanced_strobe; 670 552 } ··· 670 564 return 0; 671 565 672 566 err_add_host: 673 - if (!IS_ERR(sdhci_arasan->phy)) 674 - 
phy_power_off(sdhci_arasan->phy); 675 - err_phy_power: 676 567 if (!IS_ERR(sdhci_arasan->phy)) 677 568 phy_exit(sdhci_arasan->phy); 678 569 unreg_clk: ··· 692 589 struct clk *clk_ahb = sdhci_arasan->clk_ahb; 693 590 694 591 if (!IS_ERR(sdhci_arasan->phy)) { 695 - phy_power_off(sdhci_arasan->phy); 592 + if (sdhci_arasan->is_phy_on) 593 + phy_power_off(sdhci_arasan->phy); 696 594 phy_exit(sdhci_arasan->phy); 697 595 } 698 596
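The SDHCI_ARASAN_QUIRK_FORCE_CDTEST path re-asserts the card-detect test bits after every reset, because a controller reset clears SDHCI_HOST_CONTROL. The bit arithmetic, modeled with the SDHCI_CTRL_CDTEST_* constants this series adds to sdhci.h:

```c
#include <assert.h>
#include <stdint.h>

/* Constants as added to sdhci.h in this series. */
#define SDHCI_CTRL_CDTEST_INS 0x40
#define SDHCI_CTRL_CDTEST_EN  0x80

/* A reset clears SDHCI_HOST_CONTROL, so sdhci_arasan_reset()
 * re-asserts the card-detect test bits afterwards; this models that
 * read-modify-write on the register byte. */
static uint8_t force_cdtest(uint8_t host_ctrl)
{
    return host_ctrl | SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN;
}
```

With both bits set, the controller takes card presence from the test bit instead of the (absent) CD pin, which is exactly what the `xlnx,fails-without-test-cd` boards need.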
+1 -1
drivers/mmc/host/sdhci-of-esdhc.c
··· 583 583 584 584 np = pdev->dev.of_node; 585 585 586 - if (of_get_property(np, "little-endian", NULL)) 586 + if (of_property_read_bool(np, "little-endian")) 587 587 host = sdhci_pltfm_init(pdev, &sdhci_esdhc_le_pdata, 588 588 sizeof(struct sdhci_esdhc)); 589 589 else
+27 -35
drivers/mmc/host/sdhci-pci-core.c
··· 156 156 if (!gpio_is_valid(gpio)) 157 157 return; 158 158 159 - err = gpio_request(gpio, "sd_cd"); 159 + err = devm_gpio_request(&slot->chip->pdev->dev, gpio, "sd_cd"); 160 160 if (err < 0) 161 161 goto out; 162 162 ··· 179 179 return; 180 180 181 181 out_free: 182 - gpio_free(gpio); 182 + devm_gpio_free(&slot->chip->pdev->dev, gpio); 183 183 out: 184 184 dev_warn(&slot->chip->pdev->dev, "failed to setup card detect wake up\n"); 185 185 } ··· 188 188 { 189 189 if (slot->cd_irq >= 0) 190 190 free_irq(slot->cd_irq, slot); 191 - if (gpio_is_valid(slot->cd_gpio)) 192 - gpio_free(slot->cd_gpio); 193 191 } 194 192 195 193 #else ··· 354 356 { 355 357 slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE | 356 358 MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | 359 + MMC_CAP_CMD_DURING_TFR | 357 360 MMC_CAP_WAIT_WHILE_BUSY; 358 361 slot->host->mmc->caps2 |= MMC_CAP2_HC_ERASE_SZ; 359 362 slot->hw_reset = sdhci_pci_int_hw_reset; ··· 420 421 /* Define Host controllers for Intel Merrifield platform */ 421 422 #define INTEL_MRFLD_EMMC_0 0 422 423 #define INTEL_MRFLD_EMMC_1 1 424 + #define INTEL_MRFLD_SD 2 425 + #define INTEL_MRFLD_SDIO 3 423 426 424 427 static int intel_mrfld_mmc_probe_slot(struct sdhci_pci_slot *slot) 425 428 { 426 - if ((PCI_FUNC(slot->chip->pdev->devfn) != INTEL_MRFLD_EMMC_0) && 427 - (PCI_FUNC(slot->chip->pdev->devfn) != INTEL_MRFLD_EMMC_1)) 428 - /* SD support is not ready yet */ 429 + unsigned int func = PCI_FUNC(slot->chip->pdev->devfn); 430 + 431 + switch (func) { 432 + case INTEL_MRFLD_EMMC_0: 433 + case INTEL_MRFLD_EMMC_1: 434 + slot->host->mmc->caps |= MMC_CAP_NONREMOVABLE | 435 + MMC_CAP_8_BIT_DATA | 436 + MMC_CAP_1_8V_DDR; 437 + break; 438 + case INTEL_MRFLD_SD: 439 + slot->host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V; 440 + break; 441 + case INTEL_MRFLD_SDIO: 442 + slot->host->mmc->caps |= MMC_CAP_NONREMOVABLE | 443 + MMC_CAP_POWER_OFF_CARD; 444 + break; 445 + default: 429 446 return -ENODEV; 430 - 431 - slot->host->mmc->caps |= 
MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE | 432 - MMC_CAP_1_8V_DDR; 433 - 447 + } 434 448 return 0; 435 449 } 436 450 ··· 1627 1615 1628 1616 slot->chip = chip; 1629 1617 slot->host = host; 1630 - slot->pci_bar = bar; 1631 1618 slot->rst_n_gpio = -EINVAL; 1632 1619 slot->cd_gpio = -EINVAL; 1633 1620 slot->cd_idx = -1; ··· 1654 1643 1655 1644 host->irq = pdev->irq; 1656 1645 1657 - ret = pci_request_region(pdev, bar, mmc_hostname(host->mmc)); 1646 + ret = pcim_iomap_regions(pdev, BIT(bar), mmc_hostname(host->mmc)); 1658 1647 if (ret) { 1659 1648 dev_err(&pdev->dev, "cannot request region\n"); 1660 1649 goto cleanup; 1661 1650 } 1662 1651 1663 - host->ioaddr = pci_ioremap_bar(pdev, bar); 1664 - if (!host->ioaddr) { 1665 - dev_err(&pdev->dev, "failed to remap registers\n"); 1666 - ret = -ENOMEM; 1667 - goto release; 1668 - } 1652 + host->ioaddr = pcim_iomap_table(pdev)[bar]; 1669 1653 1670 1654 if (chip->fixes && chip->fixes->probe_slot) { 1671 1655 ret = chip->fixes->probe_slot(slot); 1672 1656 if (ret) 1673 - goto unmap; 1657 + goto cleanup; 1674 1658 } 1675 1659 1676 1660 if (gpio_is_valid(slot->rst_n_gpio)) { 1677 - if (!gpio_request(slot->rst_n_gpio, "eMMC_reset")) { 1661 + if (!devm_gpio_request(&pdev->dev, slot->rst_n_gpio, "eMMC_reset")) { 1678 1662 gpio_direction_output(slot->rst_n_gpio, 1); 1679 1663 slot->host->mmc->caps |= MMC_CAP_HW_RESET; 1680 1664 slot->hw_reset = sdhci_pci_gpio_hw_reset; ··· 1708 1702 return slot; 1709 1703 1710 1704 remove: 1711 - if (gpio_is_valid(slot->rst_n_gpio)) 1712 - gpio_free(slot->rst_n_gpio); 1713 - 1714 1705 if (chip->fixes && chip->fixes->remove_slot) 1715 1706 chip->fixes->remove_slot(slot, 0); 1716 - 1717 - unmap: 1718 - iounmap(host->ioaddr); 1719 - 1720 - release: 1721 - pci_release_region(pdev, bar); 1722 1707 1723 1708 cleanup: 1724 1709 if (slot->data && slot->data->cleanup) ··· 1735 1738 1736 1739 sdhci_remove_host(slot->host, dead); 1737 1740 1738 - if (gpio_is_valid(slot->rst_n_gpio)) 1739 - 
gpio_free(slot->rst_n_gpio); 1740 - 1741 1741 if (slot->chip->fixes && slot->chip->fixes->remove_slot) 1742 1742 slot->chip->fixes->remove_slot(slot, dead); 1743 1743 1744 1744 if (slot->data && slot->data->cleanup) 1745 1745 slot->data->cleanup(slot->data); 1746 - 1747 - pci_release_region(slot->chip->pdev, slot->pci_bar); 1748 1746 1749 1747 sdhci_free_host(slot->host); 1750 1748 }
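The Merrifield change keys slot capabilities off the PCI function number: PCI_FUNC() is just the low three bits of devfn. A sketch of the dispatch (the string labels are illustrative stand-ins for the caps/quirks the real switch applies):

```c
#include <assert.h>
#include <string.h>

/* PCI devfn packs device (upper 5 bits) and function (lower 3 bits);
 * the kernel's PCI_FUNC() is the same shift-free mask. */
#define MRFLD_PCI_FUNC(devfn) ((devfn) & 0x07)

#define INTEL_MRFLD_EMMC_0 0
#define INTEL_MRFLD_EMMC_1 1
#define INTEL_MRFLD_SD     2
#define INTEL_MRFLD_SDIO   3

/* Mirrors the switch in intel_mrfld_mmc_probe_slot(); returns a label
 * for the slot type instead of setting caps. Unknown functions map to
 * "unsupported", matching the -ENODEV default case. */
static const char *mrfld_slot_type(unsigned int devfn)
{
    switch (MRFLD_PCI_FUNC(devfn)) {
    case INTEL_MRFLD_EMMC_0:
    case INTEL_MRFLD_EMMC_1: return "emmc";
    case INTEL_MRFLD_SD:     return "sd";
    case INTEL_MRFLD_SDIO:   return "sdio";
    default:                 return "unsupported";
    }
}
```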
-1
drivers/mmc/host/sdhci-pci.h
··· 72 72 struct sdhci_host *host; 73 73 struct sdhci_pci_data *data; 74 74 75 - int pci_bar; 76 75 int rst_n_gpio; 77 76 int cd_gpio; 78 77 int cd_irq;
-7
drivers/mmc/host/sdhci-pltfm.c
··· 156 156 host->quirks2 = pdata->quirks2; 157 157 } 158 158 159 - /* 160 - * Some platforms need to probe the controller to be able to 161 - * determine which caps should be used. 162 - */ 163 - if (host->ops && host->ops->platform_init) 164 - host->ops->platform_init(host); 165 - 166 159 platform_set_drvdata(pdev, host); 167 160 168 161 return host;
+26 -1
drivers/mmc/host/sdhci-tegra.c
··· 391 391 .pdata = &sdhci_tegra114_pdata, 392 392 }; 393 393 394 + static const struct sdhci_pltfm_data sdhci_tegra124_pdata = { 395 + .quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL | 396 + SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK | 397 + SDHCI_QUIRK_SINGLE_POWER_WRITE | 398 + SDHCI_QUIRK_NO_HISPD_BIT | 399 + SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC | 400 + SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN, 401 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 402 + /* 403 + * The TRM states that the SD/MMC controller found on 404 + * Tegra124 can address 34 bits (the maximum supported by 405 + * the Tegra memory controller), but tests show that DMA 406 + * to or from above 4 GiB doesn't work. This is possibly 407 + * caused by missing programming, though it's not obvious 408 + * what sequence is required. Mark 64-bit DMA broken for 409 + * now to fix this for existing users (e.g. Nyan boards). 410 + */ 411 + SDHCI_QUIRK2_BROKEN_64_BIT_DMA, 412 + .ops = &tegra114_sdhci_ops, 413 + }; 414 + 415 + static const struct sdhci_tegra_soc_data soc_data_tegra124 = { 416 + .pdata = &sdhci_tegra124_pdata, 417 + }; 418 + 394 419 static const struct sdhci_pltfm_data sdhci_tegra210_pdata = { 395 420 .quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL | 396 421 SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK | ··· 433 408 434 409 static const struct of_device_id sdhci_tegra_dt_match[] = { 435 410 { .compatible = "nvidia,tegra210-sdhci", .data = &soc_data_tegra210 }, 436 - { .compatible = "nvidia,tegra124-sdhci", .data = &soc_data_tegra114 }, 411 + { .compatible = "nvidia,tegra124-sdhci", .data = &soc_data_tegra124 }, 437 412 { .compatible = "nvidia,tegra114-sdhci", .data = &soc_data_tegra114 }, 438 413 { .compatible = "nvidia,tegra30-sdhci", .data = &soc_data_tegra30 }, 439 414 { .compatible = "nvidia,tegra20-sdhci", .data = &soc_data_tegra20 },
+18 -5
drivers/mmc/host/sdhci.c
··· 888 888 static inline bool sdhci_auto_cmd12(struct sdhci_host *host, 889 889 struct mmc_request *mrq) 890 890 { 891 - return !mrq->sbc && (host->flags & SDHCI_AUTO_CMD12); 891 + return !mrq->sbc && (host->flags & SDHCI_AUTO_CMD12) && 892 + !mrq->cap_cmd_during_tfr; 892 893 } 893 894 894 895 static void sdhci_set_transfer_mode(struct sdhci_host *host, ··· 1032 1031 sdhci_do_reset(host, SDHCI_RESET_DATA); 1033 1032 } 1034 1033 1035 - /* Avoid triggering warning in sdhci_send_command() */ 1036 - host->cmd = NULL; 1037 - sdhci_send_command(host, data->stop); 1034 + /* 1035 + * 'cap_cmd_during_tfr' request must not use the command line 1036 + * after mmc_command_done() has been called. It is upper layer's 1037 + * responsibility to send the stop command if required. 1038 + */ 1039 + if (data->mrq->cap_cmd_during_tfr) { 1040 + sdhci_finish_mrq(host, data->mrq); 1041 + } else { 1042 + /* Avoid triggering warning in sdhci_send_command() */ 1043 + host->cmd = NULL; 1044 + sdhci_send_command(host, data->stop); 1045 + } 1038 1046 } else { 1039 1047 sdhci_finish_mrq(host, data->mrq); 1040 1048 } ··· 1174 1164 cmd->resp[0] = sdhci_readl(host, SDHCI_RESPONSE); 1175 1165 } 1176 1166 } 1167 + 1168 + if (cmd->mrq->cap_cmd_during_tfr && cmd == cmd->mrq->cmd) 1169 + mmc_command_done(host->mmc, cmd->mrq); 1177 1170 1178 1171 /* 1179 1172 * The host can send and interrupt when the busy state has ··· 2075 2062 2076 2063 spin_unlock_irqrestore(&host->lock, flags); 2077 2064 /* Wait for Buffer Read Ready interrupt */ 2078 - wait_event_interruptible_timeout(host->buf_ready_int, 2065 + wait_event_timeout(host->buf_ready_int, 2079 2066 (host->tuning_done == 1), 2080 2067 msecs_to_jiffies(50)); 2081 2068 spin_lock_irqsave(&host->lock, flags);
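The sdhci.c hunks gate auto-CMD12 off for cap_cmd_during_tfr requests, since once mmc_command_done() fires the upper layer owns the command line and must issue any stop command itself. The predicate reduces to:

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the updated sdhci_auto_cmd12() test: auto CMD12 is usable
 * only when there is no preceding SBC (CMD23) and the request does
 * not keep the command line in use during the data transfer. */
static bool auto_cmd12_allowed(bool has_sbc, bool host_auto_cmd12,
                               bool cap_cmd_during_tfr)
{
    return !has_sbc && host_auto_cmd12 && !cap_cmd_during_tfr;
}
```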
+2 -1
drivers/mmc/host/sdhci.h
··· 84 84 #define SDHCI_CTRL_ADMA32 0x10 85 85 #define SDHCI_CTRL_ADMA64 0x18 86 86 #define SDHCI_CTRL_8BITBUS 0x20 87 + #define SDHCI_CTRL_CDTEST_INS 0x40 88 + #define SDHCI_CTRL_CDTEST_EN 0x80 87 89 88 90 #define SDHCI_POWER_CONTROL 0x29 89 91 #define SDHCI_POWER_ON 0x01 ··· 557 555 void (*set_uhs_signaling)(struct sdhci_host *host, unsigned int uhs); 558 556 void (*hw_reset)(struct sdhci_host *host); 559 557 void (*adma_workaround)(struct sdhci_host *host, u32 intmask); 560 - void (*platform_init)(struct sdhci_host *host); 561 558 void (*card_event)(struct sdhci_host *host); 562 559 void (*voltage_switch)(struct sdhci_host *host); 563 560 int (*select_drive_strength)(struct sdhci_host *host,
+16 -1
drivers/mmc/host/sh_mobile_sdhi.c
··· 94 94 { .compatible = "renesas,sdhi-r8a7793", .data = &of_rcar_gen2_compatible, }, 95 95 { .compatible = "renesas,sdhi-r8a7794", .data = &of_rcar_gen2_compatible, }, 96 96 { .compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_gen3_compatible, }, 97 + { .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_gen3_compatible, }, 97 98 {}, 98 99 }; 99 100 MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match); ··· 212 211 struct sh_mobile_sdhi *priv = host_to_priv(host); 213 212 214 213 clk_disable_unprepare(priv->clk); 214 + } 215 + 216 + static int sh_mobile_sdhi_card_busy(struct mmc_host *mmc) 217 + { 218 + struct tmio_mmc_host *host = mmc_priv(mmc); 219 + 220 + return !(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS) & TMIO_STAT_DAT0); 215 221 } 216 222 217 223 static int sh_mobile_sdhi_start_signal_voltage_switch(struct mmc_host *mmc, ··· 377 369 host->clk_update = sh_mobile_sdhi_clk_update; 378 370 host->clk_disable = sh_mobile_sdhi_clk_disable; 379 371 host->multi_io_quirk = sh_mobile_sdhi_multi_io_quirk; 380 - host->start_signal_voltage_switch = sh_mobile_sdhi_start_signal_voltage_switch; 372 + 373 + /* SDR speeds are only available on Gen2+ */ 374 + if (mmc_data->flags & TMIO_MMC_MIN_RCAR2) { 375 + /* card_busy caused issues on r8a73a4 (pre-Gen2) CD-less SDHI */ 376 + host->card_busy = sh_mobile_sdhi_card_busy; 377 + host->start_signal_voltage_switch = 378 + sh_mobile_sdhi_start_signal_voltage_switch; 379 + } 381 380 382 381 /* Originally registers were 16 bit apart, could be 32 or 64 nowadays */ 383 382 if (!host->bus_shift && resource_size(res) > 0x100) /* old way to determine the shift */
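The new sh_mobile_sdhi_card_busy() reports busy while the card holds DAT0 low. A sketch of the predicate (the bit position here is an assumed placeholder for illustration, not the real TMIO_STAT_DAT0 value):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder DAT0 status bit -- an assumption for this sketch, not
 * the actual TMIO_STAT_DAT0 definition from the tmio headers. */
#define STAT_DAT0 (1u << 23)

/* Busy when DAT0 reads low, mirroring the shape of
 * sh_mobile_sdhi_card_busy() on a captured status word. */
static bool card_busy(uint32_t status)
{
    return !(status & STAT_DAT0);
}
```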
+193 -72
drivers/mmc/host/sunxi-mmc.c
··· 72 72 #define SDXC_REG_CHDA (0x90) 73 73 #define SDXC_REG_CBDA (0x94) 74 74 75 + /* New registers introduced in A64 */ 76 + #define SDXC_REG_A12A 0x058 /* SMC Auto Command 12 Register */ 77 + #define SDXC_REG_SD_NTSR 0x05C /* SMC New Timing Set Register */ 78 + #define SDXC_REG_DRV_DL 0x140 /* Drive Delay Control Register */ 79 + #define SDXC_REG_SAMP_DL_REG 0x144 /* SMC sample delay control */ 80 + #define SDXC_REG_DS_DL_REG 0x148 /* SMC data strobe delay control */ 81 + 75 82 #define mmc_readl(host, reg) \ 76 83 readl((host)->reg_base + SDXC_##reg) 77 84 #define mmc_writel(host, reg, value) \ ··· 224 217 #define SDXC_CLK_50M_DDR 3 225 218 #define SDXC_CLK_50M_DDR_8BIT 4 226 219 220 + #define SDXC_2X_TIMING_MODE BIT(31) 221 + 222 + #define SDXC_CAL_START BIT(15) 223 + #define SDXC_CAL_DONE BIT(14) 224 + #define SDXC_CAL_DL_SHIFT 8 225 + #define SDXC_CAL_DL_SW_EN BIT(7) 226 + #define SDXC_CAL_DL_SW_SHIFT 0 227 + #define SDXC_CAL_DL_MASK 0x3f 228 + 229 + #define SDXC_CAL_TIMEOUT 3 /* in seconds, 3s is enough*/ 230 + 227 231 struct sunxi_mmc_clk_delay { 228 232 u32 output; 229 233 u32 sample; 230 234 }; 231 235 232 236 struct sunxi_idma_des { 233 - u32 config; 234 - u32 buf_size; 235 - u32 buf_addr_ptr1; 236 - u32 buf_addr_ptr2; 237 + __le32 config; 238 + __le32 buf_size; 239 + __le32 buf_addr_ptr1; 240 + __le32 buf_addr_ptr2; 241 + }; 242 + 243 + struct sunxi_mmc_cfg { 244 + u32 idma_des_size_bits; 245 + const struct sunxi_mmc_clk_delay *clk_delays; 246 + 247 + /* does the IP block support autocalibration? 
*/ 248 + bool can_calibrate; 237 249 }; 238 250 239 251 struct sunxi_mmc_host { 240 252 struct mmc_host *mmc; 241 253 struct reset_control *reset; 254 + const struct sunxi_mmc_cfg *cfg; 242 255 243 256 /* IO mapping base */ 244 257 void __iomem *reg_base; ··· 268 241 struct clk *clk_mmc; 269 242 struct clk *clk_sample; 270 243 struct clk *clk_output; 271 - const struct sunxi_mmc_clk_delay *clk_delays; 272 244 273 245 /* irq */ 274 246 spinlock_t lock; ··· 276 250 u32 sdio_imask; 277 251 278 252 /* dma */ 279 - u32 idma_des_size_bits; 280 253 dma_addr_t sg_dma; 281 254 void *sg_cpu; 282 255 bool wait_dma; ··· 347 322 { 348 323 struct sunxi_idma_des *pdes = (struct sunxi_idma_des *)host->sg_cpu; 349 324 dma_addr_t next_desc = host->sg_dma; 350 - int i, max_len = (1 << host->idma_des_size_bits); 325 + int i, max_len = (1 << host->cfg->idma_des_size_bits); 351 326 352 327 for (i = 0; i < data->sg_len; i++) { 353 - pdes[i].config = SDXC_IDMAC_DES0_CH | SDXC_IDMAC_DES0_OWN | 354 - SDXC_IDMAC_DES0_DIC; 328 + pdes[i].config = cpu_to_le32(SDXC_IDMAC_DES0_CH | 329 + SDXC_IDMAC_DES0_OWN | 330 + SDXC_IDMAC_DES0_DIC); 355 331 356 332 if (data->sg[i].length == max_len) 357 333 pdes[i].buf_size = 0; /* 0 == max_len */ 358 334 else 359 - pdes[i].buf_size = data->sg[i].length; 335 + pdes[i].buf_size = cpu_to_le32(data->sg[i].length); 360 336 361 337 next_desc += sizeof(struct sunxi_idma_des); 362 - pdes[i].buf_addr_ptr1 = sg_dma_address(&data->sg[i]); 363 - pdes[i].buf_addr_ptr2 = (u32)next_desc; 338 + pdes[i].buf_addr_ptr1 = 339 + cpu_to_le32(sg_dma_address(&data->sg[i])); 340 + pdes[i].buf_addr_ptr2 = cpu_to_le32((u32)next_desc); 364 341 } 365 342 366 - pdes[0].config |= SDXC_IDMAC_DES0_FD; 367 - pdes[i - 1].config |= SDXC_IDMAC_DES0_LD | SDXC_IDMAC_DES0_ER; 368 - pdes[i - 1].config &= ~SDXC_IDMAC_DES0_DIC; 343 + pdes[0].config |= cpu_to_le32(SDXC_IDMAC_DES0_FD); 344 + pdes[i - 1].config |= cpu_to_le32(SDXC_IDMAC_DES0_LD | 345 + SDXC_IDMAC_DES0_ER); 346 + pdes[i - 1].config &= 
cpu_to_le32(~SDXC_IDMAC_DES0_DIC); 369 347 pdes[i - 1].buf_addr_ptr2 = 0; 370 348 371 349 /* ··· 681 653 return 0; 682 654 } 683 655 656 + static int sunxi_mmc_calibrate(struct sunxi_mmc_host *host, int reg_off) 657 + { 658 + u32 reg = readl(host->reg_base + reg_off); 659 + u32 delay; 660 + unsigned long timeout; 661 + 662 + if (!host->cfg->can_calibrate) 663 + return 0; 664 + 665 + reg &= ~(SDXC_CAL_DL_MASK << SDXC_CAL_DL_SW_SHIFT); 666 + reg &= ~SDXC_CAL_DL_SW_EN; 667 + 668 + writel(reg | SDXC_CAL_START, host->reg_base + reg_off); 669 + 670 + dev_dbg(mmc_dev(host->mmc), "calibration started\n"); 671 + 672 + timeout = jiffies + HZ * SDXC_CAL_TIMEOUT; 673 + 674 + while (!((reg = readl(host->reg_base + reg_off)) & SDXC_CAL_DONE)) { 675 + if (time_before(jiffies, timeout)) 676 + cpu_relax(); 677 + else { 678 + reg &= ~SDXC_CAL_START; 679 + writel(reg, host->reg_base + reg_off); 680 + 681 + return -ETIMEDOUT; 682 + } 683 + } 684 + 685 + delay = (reg >> SDXC_CAL_DL_SHIFT) & SDXC_CAL_DL_MASK; 686 + 687 + reg &= ~SDXC_CAL_START; 688 + reg |= (delay << SDXC_CAL_DL_SW_SHIFT) | SDXC_CAL_DL_SW_EN; 689 + 690 + writel(reg, host->reg_base + reg_off); 691 + 692 + dev_dbg(mmc_dev(host->mmc), "calibration ended, reg is 0x%x\n", reg); 693 + 694 + return 0; 695 + } 696 + 697 + static int sunxi_mmc_clk_set_phase(struct sunxi_mmc_host *host, 698 + struct mmc_ios *ios, u32 rate) 699 + { 700 + int index; 701 + 702 + if (!host->cfg->clk_delays) 703 + return 0; 704 + 705 + /* determine delays */ 706 + if (rate <= 400000) { 707 + index = SDXC_CLK_400K; 708 + } else if (rate <= 25000000) { 709 + index = SDXC_CLK_25M; 710 + } else if (rate <= 52000000) { 711 + if (ios->timing != MMC_TIMING_UHS_DDR50 && 712 + ios->timing != MMC_TIMING_MMC_DDR52) { 713 + index = SDXC_CLK_50M; 714 + } else if (ios->bus_width == MMC_BUS_WIDTH_8) { 715 + index = SDXC_CLK_50M_DDR_8BIT; 716 + } else { 717 + index = SDXC_CLK_50M_DDR; 718 + } 719 + } else { 720 + return -EINVAL; 721 + } 722 + 723 + 
clk_set_phase(host->clk_sample, host->cfg->clk_delays[index].sample); 724 + clk_set_phase(host->clk_output, host->cfg->clk_delays[index].output); 725 + 726 + return 0; 727 + } 728 + 684 729 static int sunxi_mmc_clk_set_rate(struct sunxi_mmc_host *host, 685 730 struct mmc_ios *ios) 686 731 { 687 - u32 rate, oclk_dly, rval, sclk_dly; 688 - u32 clock = ios->clock; 732 + long rate; 733 + u32 rval, clock = ios->clock; 689 734 int ret; 690 735 691 736 /* 8 bit DDR requires a higher module clock */ ··· 767 666 clock <<= 1; 768 667 769 668 rate = clk_round_rate(host->clk_mmc, clock); 770 - dev_dbg(mmc_dev(host->mmc), "setting clk to %d, rounded %d\n", 669 + if (rate < 0) { 670 + dev_err(mmc_dev(host->mmc), "error rounding clk to %d: %ld\n", 671 + clock, rate); 672 + return rate; 673 + } 674 + dev_dbg(mmc_dev(host->mmc), "setting clk to %d, rounded %ld\n", 771 675 clock, rate); 772 676 773 677 /* setting clock rate */ 774 678 ret = clk_set_rate(host->clk_mmc, rate); 775 679 if (ret) { 776 - dev_err(mmc_dev(host->mmc), "error setting clk to %d: %d\n", 680 + dev_err(mmc_dev(host->mmc), "error setting clk to %ld: %d\n", 777 681 rate, ret); 778 682 return ret; 779 683 } ··· 798 692 } 799 693 mmc_writel(host, REG_CLKCR, rval); 800 694 801 - /* determine delays */ 802 - if (rate <= 400000) { 803 - oclk_dly = host->clk_delays[SDXC_CLK_400K].output; 804 - sclk_dly = host->clk_delays[SDXC_CLK_400K].sample; 805 - } else if (rate <= 25000000) { 806 - oclk_dly = host->clk_delays[SDXC_CLK_25M].output; 807 - sclk_dly = host->clk_delays[SDXC_CLK_25M].sample; 808 - } else if (rate <= 52000000) { 809 - if (ios->timing != MMC_TIMING_UHS_DDR50 && 810 - ios->timing != MMC_TIMING_MMC_DDR52) { 811 - oclk_dly = host->clk_delays[SDXC_CLK_50M].output; 812 - sclk_dly = host->clk_delays[SDXC_CLK_50M].sample; 813 - } else if (ios->bus_width == MMC_BUS_WIDTH_8) { 814 - oclk_dly = host->clk_delays[SDXC_CLK_50M_DDR_8BIT].output; 815 - sclk_dly = host->clk_delays[SDXC_CLK_50M_DDR_8BIT].sample; 816 - } 
else { 817 - oclk_dly = host->clk_delays[SDXC_CLK_50M_DDR].output; 818 - sclk_dly = host->clk_delays[SDXC_CLK_50M_DDR].sample; 819 - } 820 - } else { 821 - return -EINVAL; 822 - } 695 + ret = sunxi_mmc_clk_set_phase(host, ios, rate); 696 + if (ret) 697 + return ret; 823 698 824 - clk_set_phase(host->clk_sample, sclk_dly); 825 - clk_set_phase(host->clk_output, oclk_dly); 699 + ret = sunxi_mmc_calibrate(host, SDXC_REG_SAMP_DL_REG); 700 + if (ret) 701 + return ret; 702 + 703 + /* TODO: enable calibrate on sdc2 SDXC_REG_DS_DL_REG of A64 */ 826 704 827 705 return sunxi_mmc_oclk_onoff(host, 1); 828 706 } ··· 1028 938 return !!(mmc_readl(host, REG_STAS) & SDXC_CARD_DATA_BUSY); 1029 939 } 1030 940 1031 - static const struct of_device_id sunxi_mmc_of_match[] = { 1032 - { .compatible = "allwinner,sun4i-a10-mmc", }, 1033 - { .compatible = "allwinner,sun5i-a13-mmc", }, 1034 - { .compatible = "allwinner,sun9i-a80-mmc", }, 1035 - { /* sentinel */ } 1036 - }; 1037 - MODULE_DEVICE_TABLE(of, sunxi_mmc_of_match); 1038 - 1039 941 static struct mmc_host_ops sunxi_mmc_ops = { 1040 942 .request = sunxi_mmc_request, 1041 943 .set_ios = sunxi_mmc_set_ios, ··· 1056 974 [SDXC_CLK_50M_DDR_8BIT] = { .output = 72, .sample = 72 }, 1057 975 }; 1058 976 977 + static const struct sunxi_mmc_cfg sun4i_a10_cfg = { 978 + .idma_des_size_bits = 13, 979 + .clk_delays = NULL, 980 + .can_calibrate = false, 981 + }; 982 + 983 + static const struct sunxi_mmc_cfg sun5i_a13_cfg = { 984 + .idma_des_size_bits = 16, 985 + .clk_delays = NULL, 986 + .can_calibrate = false, 987 + }; 988 + 989 + static const struct sunxi_mmc_cfg sun7i_a20_cfg = { 990 + .idma_des_size_bits = 16, 991 + .clk_delays = sunxi_mmc_clk_delays, 992 + .can_calibrate = false, 993 + }; 994 + 995 + static const struct sunxi_mmc_cfg sun9i_a80_cfg = { 996 + .idma_des_size_bits = 16, 997 + .clk_delays = sun9i_mmc_clk_delays, 998 + .can_calibrate = false, 999 + }; 1000 + 1001 + static const struct sunxi_mmc_cfg sun50i_a64_cfg = { 1002 + 
.idma_des_size_bits = 16, 1003 + .clk_delays = NULL, 1004 + .can_calibrate = true, 1005 + }; 1006 + 1007 + static const struct of_device_id sunxi_mmc_of_match[] = { 1008 + { .compatible = "allwinner,sun4i-a10-mmc", .data = &sun4i_a10_cfg }, 1009 + { .compatible = "allwinner,sun5i-a13-mmc", .data = &sun5i_a13_cfg }, 1010 + { .compatible = "allwinner,sun7i-a20-mmc", .data = &sun7i_a20_cfg }, 1011 + { .compatible = "allwinner,sun9i-a80-mmc", .data = &sun9i_a80_cfg }, 1012 + { .compatible = "allwinner,sun50i-a64-mmc", .data = &sun50i_a64_cfg }, 1013 + { /* sentinel */ } 1014 + }; 1015 + MODULE_DEVICE_TABLE(of, sunxi_mmc_of_match); 1016 + 1059 1017 static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host, 1060 1018 struct platform_device *pdev) 1061 1019 { 1062 - struct device_node *np = pdev->dev.of_node; 1063 1020 int ret; 1064 1021 1065 - if (of_device_is_compatible(np, "allwinner,sun4i-a10-mmc")) 1066 - host->idma_des_size_bits = 13; 1067 - else 1068 - host->idma_des_size_bits = 16; 1069 - 1070 - if (of_device_is_compatible(np, "allwinner,sun9i-a80-mmc")) 1071 - host->clk_delays = sun9i_mmc_clk_delays; 1072 - else 1073 - host->clk_delays = sunxi_mmc_clk_delays; 1022 + host->cfg = of_device_get_match_data(&pdev->dev); 1023 + if (!host->cfg) 1024 + return -EINVAL; 1074 1025 1075 1026 ret = mmc_regulator_get_supply(host->mmc); 1076 1027 if (ret) { ··· 1129 1014 return PTR_ERR(host->clk_mmc); 1130 1015 } 1131 1016 1132 - host->clk_output = devm_clk_get(&pdev->dev, "output"); 1133 - if (IS_ERR(host->clk_output)) { 1134 - dev_err(&pdev->dev, "Could not get output clock\n"); 1135 - return PTR_ERR(host->clk_output); 1136 - } 1017 + if (host->cfg->clk_delays) { 1018 + host->clk_output = devm_clk_get(&pdev->dev, "output"); 1019 + if (IS_ERR(host->clk_output)) { 1020 + dev_err(&pdev->dev, "Could not get output clock\n"); 1021 + return PTR_ERR(host->clk_output); 1022 + } 1137 1023 1138 - host->clk_sample = devm_clk_get(&pdev->dev, "sample"); 1139 - if 
(IS_ERR(host->clk_sample)) { 1140 - dev_err(&pdev->dev, "Could not get sample clock\n"); 1141 - return PTR_ERR(host->clk_sample); 1024 + host->clk_sample = devm_clk_get(&pdev->dev, "sample"); 1025 + if (IS_ERR(host->clk_sample)) { 1026 + dev_err(&pdev->dev, "Could not get sample clock\n"); 1027 + return PTR_ERR(host->clk_sample); 1028 + } 1142 1029 } 1143 1030 1144 1031 host->reset = devm_reset_control_get_optional(&pdev->dev, "ahb"); ··· 1237 1120 mmc->max_blk_count = 8192; 1238 1121 mmc->max_blk_size = 4096; 1239 1122 mmc->max_segs = PAGE_SIZE / sizeof(struct sunxi_idma_des); 1240 - mmc->max_seg_size = (1 << host->idma_des_size_bits); 1123 + mmc->max_seg_size = (1 << host->cfg->idma_des_size_bits); 1241 1124 mmc->max_req_size = mmc->max_seg_size * mmc->max_segs; 1242 1125 /* 400kHz ~ 52MHz */ 1243 1126 mmc->f_min = 400000; 1244 1127 mmc->f_max = 52000000; 1245 1128 mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED | 1246 - MMC_CAP_1_8V_DDR | 1247 1129 MMC_CAP_ERASE | MMC_CAP_SDIO_IRQ; 1130 + 1131 + if (host->cfg->clk_delays) 1132 + mmc->caps |= MMC_CAP_1_8V_DDR; 1248 1133 1249 1134 ret = mmc_of_parse(mmc); 1250 1135 if (ret) ··· 1279 1160 if (!IS_ERR(host->reset)) 1280 1161 reset_control_assert(host->reset); 1281 1162 1163 + clk_disable_unprepare(host->clk_sample); 1164 + clk_disable_unprepare(host->clk_output); 1282 1165 clk_disable_unprepare(host->clk_mmc); 1283 1166 clk_disable_unprepare(host->clk_ahb); 1284 1167
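The sunxi-mmc.c change types the IDMA descriptor fields as __le32 and wraps every store in cpu_to_le32(), so big-endian CPUs build descriptors the DMA engine can parse. A self-contained sketch of the same encoding (the runtime endianness probe stands in for the kernel's cpu_to_le32; buf_size of 0 encodes the maximum segment length, as in the hunk):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's cpu_to_le32(): identity on little-endian
 * hosts, byte swap on big-endian ones (probed at runtime). */
static uint32_t cpu_to_le32(uint32_t v)
{
    const union { uint16_t u16; uint8_t u8[2]; } probe = { .u16 = 1 };

    if (probe.u8[0])
        return v;  /* little-endian host */
    return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
           ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
}

/* Byte swapping is its own inverse, so decoding reuses the encoder. */
static uint32_t le32_to_cpu(uint32_t v)
{
    return cpu_to_le32(v);
}

/* buf_size of 0 encodes the maximum segment length, as in the
 * descriptor setup loop; des_size_bits mirrors idma_des_size_bits
 * from the new sunxi_mmc_cfg. */
static uint32_t encode_buf_size(uint32_t len, unsigned int des_size_bits)
{
    uint32_t max_len = 1u << des_size_bits;

    return len == max_len ? 0 : cpu_to_le32(len);
}
```

On a little-endian host the conversions are no-ops, which is why the old unconverted code appeared to work; the __le32 typing mainly buys sparse checking and big-endian correctness.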
drivers/mmc/host/tmio_mmc.h (+4)
···
 #define CLK_CTL_DIV_MASK	0xff
 #define CLK_CTL_SCLKEN		BIT(8)

+#define CARD_OPT_WIDTH8		BIT(13)
+#define CARD_OPT_WIDTH		BIT(15)
+
 #define TMIO_BBS		512	/* Boot block size */

 /* Definitions for values the CTRL_SDIO_STATUS register can take. */
···
 	void (*clk_disable)(struct tmio_mmc_host *host);
 	int (*multi_io_quirk)(struct mmc_card *card,
 			      unsigned int direction, int blk_size);
+	int (*card_busy)(struct mmc_host *mmc);
 	int (*start_signal_voltage_switch)(struct mmc_host *mmc,
 					   struct mmc_ios *ios);
 };
drivers/mmc/host/tmio_mmc_pio.c (+27 -20)
···
 	switch (mmc_resp_type(cmd)) {
 	case MMC_RSP_NONE: c |= RESP_NONE; break;
-	case MMC_RSP_R1:   c |= RESP_R1;   break;
+	case MMC_RSP_R1:
+	case MMC_RSP_R1_NO_CRC:
+			   c |= RESP_R1;   break;
 	case MMC_RSP_R1B:  c |= RESP_R1B;  break;
 	case MMC_RSP_R2:   c |= RESP_R2;   break;
 	case MMC_RSP_R3:   c |= RESP_R3;   break;
···
 	pr_debug("setup data transfer: blocksize %08x nr_blocks %d\n",
 		 data->blksz, data->blocks);

-	/* Some hardware cannot perform 2 byte requests in 4 bit mode */
-	if (host->mmc->ios.bus_width == MMC_BUS_WIDTH_4) {
+	/* Some hardware cannot perform 2 byte requests in 4/8 bit mode */
+	if (host->mmc->ios.bus_width == MMC_BUS_WIDTH_4 ||
+	    host->mmc->ios.bus_width == MMC_BUS_WIDTH_8) {
 		int blksz_2bytes = pdata->flags & TMIO_MMC_BLKSZ_2BYTES;

 		if (data->blksz < 2 || (data->blksz < 4 && !blksz_2bytes)) {
-			pr_err("%s: %d byte block unsupported in 4 bit mode\n",
+			pr_err("%s: %d byte block unsupported in 4/8 bit mode\n",
 			       mmc_hostname(host->mmc), data->blksz);
 			return -EINVAL;
 		}
···
 static void tmio_mmc_set_bus_width(struct tmio_mmc_host *host,
 				   unsigned char bus_width)
 {
-	switch (bus_width) {
-	case MMC_BUS_WIDTH_1:
-		sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, 0x80e0);
-		break;
-	case MMC_BUS_WIDTH_4:
-		sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, 0x00e0);
-		break;
-	}
+	u16 reg = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT)
+				& ~(CARD_OPT_WIDTH | CARD_OPT_WIDTH8);
+
+	/* reg now applies to MMC_BUS_WIDTH_4 */
+	if (bus_width == MMC_BUS_WIDTH_1)
+		reg |= CARD_OPT_WIDTH;
+	else if (bus_width == MMC_BUS_WIDTH_8)
+		reg |= CARD_OPT_WIDTH8;
+
+	sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, reg);
 }

 /* Set MMC clock / power.
···
 	return blk_size;
 }

-static int tmio_mmc_card_busy(struct mmc_host *mmc)
-{
-	struct tmio_mmc_host *host = mmc_priv(mmc);
-
-	return !(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS) & TMIO_STAT_DAT0);
-}
-
 static struct mmc_host_ops tmio_mmc_ops = {
 	.request	= tmio_mmc_request,
 	.set_ios	= tmio_mmc_set_ios,
 	.get_ro		= tmio_mmc_get_ro,
 	.get_cd		= mmc_gpio_get_cd,
 	.enable_sdio_irq = tmio_mmc_enable_sdio_irq,
-	.card_busy	= tmio_mmc_card_busy,
 	.multi_io_quirk	= tmio_multi_io_quirk,
 };
···
 		goto host_free;
 	}

+	tmio_mmc_ops.card_busy = _host->card_busy;
 	tmio_mmc_ops.start_signal_voltage_switch = _host->start_signal_voltage_switch;
 	mmc->ops = &tmio_mmc_ops;
···
 		  mmc->caps & MMC_CAP_NEEDS_POLL ||
 		  !mmc_card_is_removable(mmc) ||
 		  mmc->slot.cd_irq >= 0);
+
+	/*
+	 * On Gen2+, eMMC with NONREMOVABLE currently fails because native
+	 * hotplug gets disabled. It seems RuntimePM related yet we need further
+	 * research. Since we are planning a PM overhaul anyway, let's enforce
+	 * for now the device being active by enabling native hotplug always.
+	 */
+	if (pdata->flags & TMIO_MMC_MIN_RCAR2)
+		_host->native_hotplug = true;

 	if (tmio_mmc_clk_enable(_host) < 0) {
 		mmc->f_max = pdata->hclk;
include/linux/mmc/card.h (+1)
···
 	u32			raw_cid[4];	/* raw card CID */
 	u32			raw_csd[4];	/* raw card CSD */
 	u32			raw_scr[2];	/* raw card SCR */
+	u32			raw_ssr[16];	/* raw card SSR */
 	struct mmc_cid		cid;		/* card identification */
 	struct mmc_csd		csd;		/* card specific */
 	struct mmc_ext_csd	ext_csd;	/* mmc v4 extended card specific */
include/linux/mmc/core.h (+10)
···
 #define MMC_RSP_R6	(MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE)
 #define MMC_RSP_R7	(MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE)

+/* Can be used by core to poll after switch to MMC HS mode */
+#define MMC_RSP_R1_NO_CRC	(MMC_RSP_PRESENT|MMC_RSP_OPCODE)
+
 #define mmc_resp_type(cmd)	((cmd)->flags & (MMC_RSP_PRESENT|MMC_RSP_136|MMC_RSP_CRC|MMC_RSP_BUSY|MMC_RSP_OPCODE))

 /*
···
 	struct mmc_command	*stop;

 	struct completion	completion;
+	struct completion	cmd_completion;
 	void			(*done)(struct mmc_request *);/* completion function */
 	struct mmc_host		*host;
+
+	/* Allow other commands during this ongoing data transfer or busy wait */
+	bool			cap_cmd_during_tfr;
 };

 struct mmc_card;
···
 				struct mmc_async_req *, int *);
 extern int mmc_interrupt_hpi(struct mmc_card *);
 extern void mmc_wait_for_req(struct mmc_host *, struct mmc_request *);
+extern void mmc_wait_for_req_done(struct mmc_host *host,
+				  struct mmc_request *mrq);
+extern bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
 extern int mmc_wait_for_cmd(struct mmc_host *, struct mmc_command *, int);
 extern int mmc_app_cmd(struct mmc_host *, struct mmc_card *);
 extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *,
include/linux/mmc/dw_mmc.h (+2)
···
 #include <linux/scatterlist.h>
 #include <linux/mmc/core.h>
 #include <linux/dmaengine.h>
+#include <linux/reset.h>

 #define MAX_MCI_SLOTS	2
···
 	/* delay in mS before detecting cards after interrupt */
 	u32 detect_delay_ms;

+	struct reset_control *rstc;
 	struct dw_mci_dma_ops *dma_ops;
 	struct dma_pdata *data;
 };
include/linux/mmc/host.h (+5)
···
 #define MMC_CAP_DRIVER_TYPE_A	(1 << 23)	/* Host supports Driver Type A */
 #define MMC_CAP_DRIVER_TYPE_C	(1 << 24)	/* Host supports Driver Type C */
 #define MMC_CAP_DRIVER_TYPE_D	(1 << 25)	/* Host supports Driver Type D */
+#define MMC_CAP_CMD_DURING_TFR	(1 << 29)	/* Commands during data transfer */
 #define MMC_CAP_CMD23		(1 << 30)	/* CMD23 supported. */
 #define MMC_CAP_HW_RESET	(1 << 31)	/* Hardware reset */
···
 	struct mmc_async_req	*areq;		/* active async req */
 	struct mmc_context_info	context_info;	/* async synchronization info */

+	/* Ongoing data transfer that allows commands during transfer */
+	struct mmc_request	*ongoing_mrq;
+
 #ifdef CONFIG_FAIL_MMC_REQUEST
 	struct fault_attr fail_mmc_request;
 #endif
···
 void mmc_detect_change(struct mmc_host *, unsigned long delay);
 void mmc_request_done(struct mmc_host *, struct mmc_request *);
+void mmc_command_done(struct mmc_host *host, struct mmc_request *mrq);

 static inline void mmc_signal_sdio_irq(struct mmc_host *host)
 {