Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mmc-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC and MEMSTICK updates from Ulf Hansson:
"MMC core:
- Fix hanging on I/O during system suspend for removable cards
- Set read only for SD cards with permanent write protect bit
- Power cycle the SD/SDIO card if CMD11 fails for UHS voltage
- Issue a cache flush for eMMC only when it's enabled
- Adapt to updated cache ctrl settings for eMMC from MMC ioctls
- Use the device property API when parsing voltages
- Don't retry eMMC sanitize cmds
- Use the timeout from the MMC ioctl for eMMC sanitize cmds

MMC host:
- mmc_spi: Make of_mmc_spi.c resource provider agnostic
- mmc_spi: Use polling for card detect even without voltage-ranges
- sdhci: Check for reset prior to DMA address unmap
- sdhci-acpi: Add support for the AMDI0041 eMMC controller variant
- sdhci-esdhc-imx: Depend on OF in Kconfig and clean up code
- sdhci-pci: Add PCI IDs for Intel LKF
- sdhci-pci: Fix initialization of some SD cards for Intel BYT
- sdhci-pci-gli: Various improvements for GL97xx variants
- sdhci-of-dwcmshc: Enable support for MMC_CAP_WAIT_WHILE_BUSY
- sdhci-of-dwcmshc: Add ACPI support for BlueField-3 SoC
- sdhci-of-dwcmshc: Add Rockchip platform support
- tmio/renesas_sdhi: Extend support for reset and use a reset controller
- tmio/renesas_sdhi: Enable support for MMC_CAP_WAIT_WHILE_BUSY
- tmio/renesas_sdhi: Various improvements

MEMSTICK:
- Minor improvements/cleanups"

* tag 'mmc-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (79 commits)
mmc: block: Issue a cache flush only when it's enabled
memstick: r592: ignore kfifo_out() return code again
mmc: block: Update ext_csd.cache_ctrl if it was written
mmc: mmc_spi: Make of_mmc_spi.c resource provider agnostic
mmc: mmc_spi: Use already parsed IRQ
mmc: mmc_spi: Drop unused NO_IRQ definition
mmc: mmc_spi: Set up polling even if voltage-ranges is not present
mmc: core: Convert mmc_of_parse_voltage() to use device property API
mmc: core: Correct descriptions in mmc_of_parse()
mmc: dw_mmc-rockchip: Just set default sample value for legacy mode
mmc: sdhci-s3c: constify uses of driver/match data
mmc: sdhci-s3c: correct kerneldoc of sdhci_s3c_drv_data
mmc: sdhci-s3c: simplify getting of_device_id match data
mmc: tmio: always restore irq register
mmc: sdhci-pci-gli: Enlarge ASPM L1 entry delay of GL975x
mmc: core: Let eMMC sanitize not retry in case of timeout/failure
mmc: core: Add a retries parameter to __mmc_switch function
memstick: r592: remove unused variable
mmc: sdhci-st: Remove unnecessary error log
mmc: sdhci-msm: Remove unnecessary error log
...

+1010 -642
+63
Documentation/devicetree/bindings/mmc/brcm,iproc-sdhci.yaml
+ # SPDX-License-Identifier: GPL-2.0
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/mmc/brcm,iproc-sdhci.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Broadcom IPROC SDHCI controller
+
+ maintainers:
+   - Ray Jui <ray.jui@broadcom.com>
+   - Scott Branden <scott.branden@broadcom.com>
+   - Nicolas Saenz Julienne <nsaenz@kernel.org>
+
+ allOf:
+   - $ref: mmc-controller.yaml#
+
+ properties:
+   compatible:
+     enum:
+       - brcm,bcm2835-sdhci
+       - brcm,bcm2711-emmc2
+       - brcm,sdhci-iproc-cygnus
+       - brcm,sdhci-iproc
+
+   reg:
+     minItems: 1
+
+   interrupts:
+     maxItems: 1
+
+   clocks:
+     maxItems: 1
+     description:
+       Handle to core clock for the sdhci controller.
+
+   sdhci,auto-cmd12:
+     type: boolean
+     description: Specifies that controller should use auto CMD12
+
+ required:
+   - compatible
+   - reg
+   - interrupts
+   - clocks
+
+ unevaluatedProperties: false
+
+ examples:
+   - |
+     #include <dt-bindings/interrupt-controller/irq.h>
+     #include <dt-bindings/interrupt-controller/arm-gic.h>
+     #include <dt-bindings/clock/bcm-cygnus.h>
+
+     mmc@18041000 {
+       compatible = "brcm,sdhci-iproc-cygnus";
+       reg = <0x18041000 0x100>;
+       interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
+       clocks = <&lcpll0_clks BCM_CYGNUS_LCPLL0_SDIO_CLK>;
+       bus-width = <4>;
+       sdhci,auto-cmd12;
+       no-1-8-v;
+     };
+ ...
-37
Documentation/devicetree/bindings/mmc/brcm,sdhci-iproc.txt
- Broadcom IPROC SDHCI controller
-
- This file documents differences between the core properties described
- by mmc.txt and the properties that represent the IPROC SDHCI controller.
-
- Required properties:
- - compatible : Should be one of the following
-   "brcm,bcm2835-sdhci"
-   "brcm,bcm2711-emmc2"
-   "brcm,sdhci-iproc-cygnus"
-   "brcm,sdhci-iproc"
-
- Use brcm2835-sdhci for the eMMC controller on the BCM2835 (Raspberry Pi) and
- bcm2711-emmc2 for the additional eMMC2 controller on BCM2711.
-
- Use sdhci-iproc-cygnus for Broadcom SDHCI Controllers
- restricted to 32bit host accesses to SDHCI registers.
-
- Use sdhci-iproc for Broadcom SDHCI Controllers that allow standard
- 8, 16, 32-bit host access to SDHCI register.
-
- - clocks : The clock feeding the SDHCI controller.
-
- Optional properties:
- - sdhci,auto-cmd12: specifies that controller should use auto CMD12.
-
- Example:
-
- sdhci0: sdhci@18041000 {
-   compatible = "brcm,sdhci-iproc-cygnus";
-   reg = <0x18041000 0x100>;
-   interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
-   clocks = <&lcpll0_clks BCM_CYGNUS_LCPLL0_SDIO_CLK>;
-   bus-width = <4>;
-   sdhci,auto-cmd12;
-   no-1-8-v;
- };
+20
Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.yaml
···
        Only eMMC HS400 mode need to take care of this property.
      default: 0
 
+   clocks:
+     maxItems: 3
+     description:
+       Handle clocks for the sdhc controller.
+
+   clock-names:
+     items:
+       - const: ipg
+       - const: ahb
+       - const: per
+
+   pinctrl-names:
+     minItems: 1
+     maxItems: 4
+     items:
+       - const: default
+       - const: state_100mhz
+       - const: state_200mhz
+       - const: sleep
+
  required:
    - compatible
    - reg
+3 -3
Documentation/devicetree/bindings/mmc/mmc-spi-slot.txt
···
 Required properties:
 - spi-max-frequency : maximum frequency for this device (Hz).
-- voltage-ranges : two cells are required, first cell specifies minimum
-  slot voltage (mV), second cell specifies maximum slot voltage (mV).
-  Several ranges could be specified.
 
 Optional properties:
+- voltage-ranges : two cells are required, first cell specifies minimum
+  slot voltage (mV), second cell specifies maximum slot voltage (mV).
+  Several ranges could be specified. If not provided, 3.2v..3.4v is assumed.
 - gpios : may specify GPIOs in this order: Card-Detect GPIO,
   Write-Protect GPIO. Note that this does not follow the
   binding from mmc.txt, for historical reasons.
+1
Documentation/devicetree/bindings/mmc/mtk-sd.yaml
···
         - const: mediatek,mt2701-mmc
     - items:
         - const: mediatek,mt8192-mmc
+        - const: mediatek,mt8195-mmc
         - const: mediatek,mt8183-mmc
 
   clocks:
-20
Documentation/devicetree/bindings/mmc/sdhci-of-dwcmshc.txt
- * Synopsys DesignWare Cores Mobile Storage Host Controller
-
- Required properties:
- - compatible: should be one of the following:
-   "snps,dwcmshc-sdhci"
- - reg: offset and length of the register set for the device.
- - interrupts: a single interrupt specifier.
- - clocks: Array of clocks required for SDHCI; requires at least one for
-   core clock.
- - clock-names: Array of names corresponding to clocks property; shall be
-   "core" for core clock and "bus" for optional bus clock.
-
- Example:
- sdhci2: sdhci@aa0000 {
-   compatible = "snps,dwcmshc-sdhci";
-   reg = <0xaa0000 0x1000>;
-   interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
-   clocks = <&emmcclk>;
-   bus-width = <8>;
- }
+87
Documentation/devicetree/bindings/mmc/snps,dwcmshc-sdhci.yaml
+ # SPDX-License-Identifier: GPL-2.0-only
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/mmc/snps,dwcmshc-sdhci.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Synopsys Designware Mobile Storage Host Controller Binding
+
+ maintainers:
+   - Ulf Hansson <ulf.hansson@linaro.org>
+   - Jisheng Zhang <Jisheng.Zhang@synaptics.com>
+
+ allOf:
+   - $ref: mmc-controller.yaml#
+
+ properties:
+   compatible:
+     enum:
+       - rockchip,rk3568-dwcmshc
+       - snps,dwcmshc-sdhci
+
+   reg:
+     minItems: 1
+     items:
+       - description: Offset and length of the register set for the device
+
+   interrupts:
+     maxItems: 1
+
+   clocks:
+     minItems: 1
+     items:
+       - description: core clock
+       - description: bus clock for optional
+       - description: axi clock for rockchip specified
+       - description: block clock for rockchip specified
+       - description: timer clock for rockchip specified
+
+
+   clock-names:
+     minItems: 1
+     items:
+       - const: core
+       - const: bus
+       - const: axi
+       - const: block
+       - const: timer
+
+   rockchip,txclk-tapnum:
+     description: Specify the number of delay for tx sampling.
+     $ref: /schemas/types.yaml#/definitions/uint8
+
+
+ required:
+   - compatible
+   - reg
+   - interrupts
+   - clocks
+   - clock-names
+
+ unevaluatedProperties: false
+
+ examples:
+   - |
+     mmc@fe310000 {
+       compatible = "rockchip,rk3568-dwcmshc";
+       reg = <0xfe310000 0x10000>;
+       interrupts = <0 25 0x4>;
+       clocks = <&cru 17>, <&cru 18>, <&cru 19>, <&cru 20>, <&cru 21>;
+       clock-names = "core", "bus", "axi", "block", "timer";
+       bus-width = <8>;
+       #address-cells = <1>;
+       #size-cells = <0>;
+     };
+   - |
+     mmc@aa0000 {
+       compatible = "snps,dwcmshc-sdhci";
+       reg = <0xaa000 0x1000>;
+       interrupts = <0 25 0x4>;
+       clocks = <&cru 17>, <&cru 18>;
+       clock-names = "core", "bus";
+       bus-width = <8>;
+       #address-cells = <1>;
+       #size-cells = <0>;
+     };
+
+ ...
+10 -11
drivers/memstick/core/memstick.c
···
 				sizeof(struct ms_id_register));
 		*mrq = &card->current_mrq;
 		return 0;
-	} else {
-		if (!(*mrq)->error) {
-			memcpy(&id_reg, (*mrq)->data, sizeof(id_reg));
-			card->id.match_flags = MEMSTICK_MATCH_ALL;
-			card->id.type = id_reg.type;
-			card->id.category = id_reg.category;
-			card->id.class = id_reg.class;
-			dev_dbg(&card->dev, "if_mode = %02x\n", id_reg.if_mode);
-		}
-		complete(&card->mrq_complete);
-		return -EAGAIN;
 	}
+	if (!(*mrq)->error) {
+		memcpy(&id_reg, (*mrq)->data, sizeof(id_reg));
+		card->id.match_flags = MEMSTICK_MATCH_ALL;
+		card->id.type = id_reg.type;
+		card->id.category = id_reg.category;
+		card->id.class = id_reg.class;
+		dev_dbg(&card->dev, "if_mode = %02x\n", id_reg.if_mode);
+	}
+	complete(&card->mrq_complete);
+	return -EAGAIN;
 }
 
 static int h_memstick_set_rw_addr(struct memstick_dev *card,
+2 -1
drivers/memstick/core/mspro_block.c
···
 
 	new_msb->card = card;
 	memstick_set_drvdata(card, new_msb);
-	if (mspro_block_init_card(card))
+	rc = mspro_block_init_card(card);
+	if (rc)
 		goto out_free;
 
 	for (cnt = 0; new_msb->attr_group.attrs[cnt]
+4 -2
drivers/memstick/host/r592.c
···
 /* Flushes the temporary FIFO used to make aligned DWORD writes */
 static void r592_flush_fifo_write(struct r592_device *dev)
 {
+	int ret;
 	u8 buffer[4] = { 0 };
-	int len;
 
 	if (kfifo_is_empty(&dev->pio_fifo))
 		return;
 
-	len = kfifo_out(&dev->pio_fifo, buffer, 4);
+	ret = kfifo_out(&dev->pio_fifo, buffer, 4);
+	/* intentionally ignore __must_check return code */
+	(void)ret;
 	r592_write_reg_raw_be(dev, R592_FIFO_PIO, *(u32 *)buffer);
 }
 
+47 -29
drivers/mmc/core/block.c
···
 
 	if ((MMC_EXTRACT_INDEX_FROM_ARG(cmd.arg) == EXT_CSD_SANITIZE_START) &&
 	    (cmd.opcode == MMC_SWITCH))
-		return mmc_sanitize(card);
+		return mmc_sanitize(card, idata->ic.cmd_timeout_ms);
 
 	mmc_wait_for_req(card->host, &mrq);
···
 		 */
 		card->ext_csd.part_config = value;
 		main_md->part_curr = value & EXT_CSD_PART_CONFIG_ACC_MASK;
+	}
+
+	/*
+	 * Make sure to update CACHE_CTRL in case it was changed. The cache
+	 * will get turned back on if the card is re-initialized, e.g.
+	 * suspend/resume or hw reset in recovery.
+	 */
+	if ((MMC_EXTRACT_INDEX_FROM_ARG(cmd.arg) == EXT_CSD_CACHE_CTRL) &&
+	    (cmd.opcode == MMC_SWITCH)) {
+		u8 value = MMC_EXTRACT_VALUE_FROM_ARG(cmd.arg) & 1;
+
+		card->ext_csd.cache_ctrl = value;
 	}
 
 	/*
···
 	md->reset_done |= type;
 	err = mmc_hw_reset(host);
 	/* Ensure we switch back to the correct partition */
-	if (err != -EOPNOTSUPP) {
+	if (err) {
 		struct mmc_blk_data *main_md =
 			dev_get_drvdata(&host->card->dev);
 		int part_err;
···
 void mmc_blk_mq_complete(struct request *req)
 {
 	struct mmc_queue *mq = req->q->queuedata;
+	struct mmc_host *host = mq->card->host;
 
-	if (mq->use_cqe)
+	if (host->cqe_enabled)
 		mmc_blk_cqe_complete_rq(mq, req);
 	else if (likely(!blk_should_fake_timeout(req->q)))
 		mmc_blk_mq_complete_rq(mq, req);
···
 
 static int mmc_blk_wait_for_idle(struct mmc_queue *mq, struct mmc_host *host)
 {
-	if (mq->use_cqe)
+	if (host->cqe_enabled)
 		return host->cqe_ops->cqe_wait_for_idle(host);
 
 	return mmc_blk_rw_wait(mq, NULL);
···
 	case MMC_ISSUE_ASYNC:
 		switch (req_op(req)) {
 		case REQ_OP_FLUSH:
+			if (!mmc_cache_enabled(host)) {
+				blk_mq_end_request(req, BLK_STS_OK);
+				return MMC_REQ_FINISHED;
+			}
 			ret = mmc_blk_cqe_issue_flush(mq, req);
 			break;
 		case REQ_OP_READ:
 		case REQ_OP_WRITE:
-			if (mq->use_cqe)
+			if (host->cqe_enabled)
 				ret = mmc_blk_cqe_issue_rw_rq(mq, req);
 			else
 				ret = mmc_blk_mq_issue_rw_rq(mq, req);
···
 {
 	struct mmc_blk_data *md;
 	int devidx, ret;
+	char cap_str[10];
 
 	devidx = ida_simple_get(&mmc_blk_ida, 0, max_devices, GFP_KERNEL);
 	if (devidx < 0) {
···
 		blk_queue_write_cache(md->queue.queue, true, true);
 	}
 
+	string_get_size((u64)size, 512, STRING_UNITS_2,
+			cap_str, sizeof(cap_str));
+	pr_info("%s: %s %s %s %s\n",
+		md->disk->disk_name, mmc_card_id(card), mmc_card_name(card),
+		cap_str, md->read_only ? "(ro)" : "");
+
 	return md;
 
  err_putdisk:
···
 			      const char *subname,
 			      int area_type)
 {
-	char cap_str[10];
 	struct mmc_blk_data *part_md;
 
 	part_md = mmc_blk_alloc_req(card, disk_to_dev(md->disk), size, default_ro,
···
 	part_md->part_type = part_type;
 	list_add(&part_md->part, &md->part);
 
-	string_get_size((u64)get_capacity(part_md->disk), 512, STRING_UNITS_2,
-			cap_str, sizeof(cap_str));
-	pr_info("%s: %s %s partition %u %s\n",
-	       part_md->disk->disk_name, mmc_card_id(card),
-	       mmc_card_name(card), part_md->part_type, cap_str);
 	return 0;
 }
···
 	string_get_size((u64)size, 512, STRING_UNITS_2,
 			cap_str, sizeof(cap_str));
 
-	pr_info("%s: %s %s partition %u %s, chardev (%d:%d)\n",
-		rpmb_name, mmc_card_id(card),
-		mmc_card_name(card), EXT_CSD_PART_CONFIG_ACC_RPMB, cap_str,
+	pr_info("%s: %s %s %s, chardev (%d:%d)\n",
+		rpmb_name, mmc_card_id(card), mmc_card_name(card), cap_str,
 		MAJOR(mmc_rpmb_devt), rpmb->id);
 
 	return 0;
···
 static int mmc_blk_probe(struct mmc_card *card)
 {
 	struct mmc_blk_data *md, *part_md;
-	char cap_str[10];
+	int ret = 0;
 
 	/*
 	 * Check that the card supports the command class(es) we need.
···
 
 	card->complete_wq = alloc_workqueue("mmc_complete",
 					WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
-	if (unlikely(!card->complete_wq)) {
+	if (!card->complete_wq) {
 		pr_err("Failed to create mmc completion workqueue");
 		return -ENOMEM;
 	}
 
 	md = mmc_blk_alloc(card);
-	if (IS_ERR(md))
-		return PTR_ERR(md);
+	if (IS_ERR(md)) {
+		ret = PTR_ERR(md);
+		goto out_free;
+	}
 
-	string_get_size((u64)get_capacity(md->disk), 512, STRING_UNITS_2,
-			cap_str, sizeof(cap_str));
-	pr_info("%s: %s %s %s %s\n",
-		md->disk->disk_name, mmc_card_id(card), mmc_card_name(card),
-		cap_str, md->read_only ? "(ro)" : "");
-
-	if (mmc_blk_alloc_parts(card, md))
+	ret = mmc_blk_alloc_parts(card, md);
+	if (ret)
 		goto out;
 
 	dev_set_drvdata(&card->dev, md);
 
-	if (mmc_add_disk(md))
+	ret = mmc_add_disk(md);
+	if (ret)
 		goto out;
 
 	list_for_each_entry(part_md, &md->part, part) {
-		if (mmc_add_disk(part_md))
+		ret = mmc_add_disk(part_md);
+		if (ret)
 			goto out;
 	}
···
 
 	return 0;
 
- out:
+out:
 	mmc_blk_remove_parts(card, md);
 	mmc_blk_remove_req(md);
-	return 0;
+out_free:
+	destroy_workqueue(card->complete_wq);
+	return ret;
 }
 
 static void mmc_blk_remove(struct mmc_card *card)
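The `mmc_blk_probe()` hunk above replaces silent `return 0` failure paths with proper error propagation and an extra `out_free` unwind label. A compilable userspace sketch of that kernel idiom; all names here are invented for illustration, not the driver's:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct wq_sketch { int unused; };

static struct wq_sketch *alloc_wq_sketch(void)
{
	return malloc(sizeof(struct wq_sketch));
}

static int add_disk_sketch(int should_fail)
{
	return should_fail ? -EIO : 0;
}

/* Each failure path jumps to a label that unwinds exactly what was set
 * up before it, and the real error code (not 0) reaches the caller. */
static int probe_sketch(int fail_disk)
{
	struct wq_sketch *wq;
	int ret;

	wq = alloc_wq_sketch();
	if (!wq)
		return -ENOMEM;

	ret = add_disk_sketch(fail_disk);
	if (ret)
		goto out_free;

	free(wq); /* sketch only: a real driver keeps this until remove() */
	return 0;

out_free:
	free(wq); /* mirrors destroy_workqueue() in the real fix */
	return ret;
}
```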
+6 -180
drivers/mmc/core/core.c
···
 
 	err = mmc_wait_for_cmd(host, &cmd, 0);
 	if (err)
-		return err;
+		goto power_cycle;
 
 	if (!mmc_host_is_spi(host) && (cmd.resp[0] & R1_ERROR))
 		return -EIO;
···
 }
 
 /*
- * Cleanup when the last reference to the bus operator is dropped.
- */
-static void __mmc_release_bus(struct mmc_host *host)
-{
-	WARN_ON(!host->bus_dead);
-
-	host->bus_ops = NULL;
-}
-
-/*
- * Increase reference count of bus operator
- */
-static inline void mmc_bus_get(struct mmc_host *host)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&host->lock, flags);
-	host->bus_refs++;
-	spin_unlock_irqrestore(&host->lock, flags);
-}
-
-/*
- * Decrease reference count of bus operator and free it if
- * it is the last reference.
- */
-static inline void mmc_bus_put(struct mmc_host *host)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&host->lock, flags);
-	host->bus_refs--;
-	if ((host->bus_refs == 0) && host->bus_ops)
-		__mmc_release_bus(host);
-	spin_unlock_irqrestore(&host->lock, flags);
-}
-
-/*
  * Assign a mmc bus handler to a host. Only one bus handler may control a
  * host at any given time.
  */
 void mmc_attach_bus(struct mmc_host *host, const struct mmc_bus_ops *ops)
 {
-	unsigned long flags;
-
-	WARN_ON(!host->claimed);
-
-	spin_lock_irqsave(&host->lock, flags);
-
-	WARN_ON(host->bus_ops);
-	WARN_ON(host->bus_refs);
-
 	host->bus_ops = ops;
-	host->bus_refs = 1;
-	host->bus_dead = 0;
-
-	spin_unlock_irqrestore(&host->lock, flags);
 }
 
 /*
···
  */
 void mmc_detach_bus(struct mmc_host *host)
 {
-	unsigned long flags;
-
-	WARN_ON(!host->claimed);
-	WARN_ON(!host->bus_ops);
-
-	spin_lock_irqsave(&host->lock, flags);
-
-	host->bus_dead = 1;
-
-	spin_unlock_irqrestore(&host->lock, flags);
-
-	mmc_bus_put(host);
+	host->bus_ops = NULL;
 }
 
 void _mmc_detect_change(struct mmc_host *host, unsigned long delay, bool cd_irq)
···
 {
 	int ret;
 
-	if (!host->card)
-		return -EINVAL;
-
-	mmc_bus_get(host);
-	if (!host->bus_ops || host->bus_dead || !host->bus_ops->hw_reset) {
-		mmc_bus_put(host);
-		return -EOPNOTSUPP;
-	}
-
 	ret = host->bus_ops->hw_reset(host);
-	mmc_bus_put(host);
-
 	if (ret < 0)
 		pr_warn("%s: tried to HW reset card, got error %d\n",
 			mmc_hostname(host), ret);
···
 {
 	int ret;
 
-	if (!host->card)
-		return -EINVAL;
-
-	mmc_bus_get(host);
-	if (!host->bus_ops || host->bus_dead || !host->bus_ops->sw_reset) {
-		mmc_bus_put(host);
+	if (!host->bus_ops->sw_reset)
 		return -EOPNOTSUPP;
-	}
 
 	ret = host->bus_ops->sw_reset(host);
-	mmc_bus_put(host);
-
 	if (ret)
 		pr_warn("%s: tried to SW reset card, got error %d\n",
 			mmc_hostname(host), ret);
···
 		host->trigger_card_event = false;
 	}
 
-	mmc_bus_get(host);
-
 	/* Verify a registered card to be functional, else remove it. */
-	if (host->bus_ops && !host->bus_dead)
+	if (host->bus_ops)
 		host->bus_ops->detect(host);
 
 	host->detect_change = 0;
 
-	/*
-	 * Let mmc_bus_put() free the bus/bus_ops if we've found that
-	 * the card is no longer present.
-	 */
-	mmc_bus_put(host);
-	mmc_bus_get(host);
-
 	/* if there still is a card present, stop here */
-	if (host->bus_ops != NULL) {
-		mmc_bus_put(host);
+	if (host->bus_ops != NULL)
 		goto out;
-	}
-
-	/*
-	 * Only we can add a new handler, so it's safe to
-	 * release the lock here.
-	 */
-	mmc_bus_put(host);
 
 	mmc_claim_host(host);
 	if (mmc_card_is_removable(host) && host->ops->get_cd &&
···
 	/* clear pm flags now and let card drivers set them as needed */
 	host->pm_flags = 0;
 
-	mmc_bus_get(host);
-	if (host->bus_ops && !host->bus_dead) {
+	if (host->bus_ops) {
 		/* Calling bus_ops->remove() with a claimed host can deadlock */
 		host->bus_ops->remove(host);
 		mmc_claim_host(host);
 		mmc_detach_bus(host);
 		mmc_power_off(host);
 		mmc_release_host(host);
-		mmc_bus_put(host);
 		return;
 	}
-	mmc_bus_put(host);
 
 	mmc_claim_host(host);
 	mmc_power_off(host);
 	mmc_release_host(host);
 }
-
-#ifdef CONFIG_PM_SLEEP
-/* Do the card removal on suspend if card is assumed removeable
- * Do that in pm notifier while userspace isn't yet frozen, so we will be able
-   to sync the card.
-*/
-static int mmc_pm_notify(struct notifier_block *notify_block,
-			unsigned long mode, void *unused)
-{
-	struct mmc_host *host = container_of(
-		notify_block, struct mmc_host, pm_notify);
-	unsigned long flags;
-	int err = 0;
-
-	switch (mode) {
-	case PM_HIBERNATION_PREPARE:
-	case PM_SUSPEND_PREPARE:
-	case PM_RESTORE_PREPARE:
-		spin_lock_irqsave(&host->lock, flags);
-		host->rescan_disable = 1;
-		spin_unlock_irqrestore(&host->lock, flags);
-		cancel_delayed_work_sync(&host->detect);
-
-		if (!host->bus_ops)
-			break;
-
-		/* Validate prerequisites for suspend */
-		if (host->bus_ops->pre_suspend)
-			err = host->bus_ops->pre_suspend(host);
-		if (!err)
-			break;
-
-		if (!mmc_card_is_removable(host)) {
-			dev_warn(mmc_dev(host),
-				 "pre_suspend failed for non-removable host: "
-				 "%d\n", err);
-			/* Avoid removing non-removable hosts */
-			break;
-		}
-
-		/* Calling bus_ops->remove() with a claimed host can deadlock */
-		host->bus_ops->remove(host);
-		mmc_claim_host(host);
-		mmc_detach_bus(host);
-		mmc_power_off(host);
-		mmc_release_host(host);
-		host->pm_flags = 0;
-		break;
-
-	case PM_POST_SUSPEND:
-	case PM_POST_HIBERNATION:
-	case PM_POST_RESTORE:
-
-		spin_lock_irqsave(&host->lock, flags);
-		host->rescan_disable = 0;
-		spin_unlock_irqrestore(&host->lock, flags);
-		_mmc_detect_change(host, 0, false);
-
-	}
-
-	return 0;
-}
-
-void mmc_register_pm_notifier(struct mmc_host *host)
-{
-	host->pm_notify.notifier_call = mmc_pm_notify;
-	register_pm_notifier(&host->pm_notify);
-}
-
-void mmc_unregister_pm_notifier(struct mmc_host *host)
-{
-	unregister_pm_notifier(&host->pm_notify);
-}
-#endif
 
 static int __init mmc_init(void)
 {
+9 -8
drivers/mmc/core/core.h
···
 	int (*shutdown)(struct mmc_host *);
 	int (*hw_reset)(struct mmc_host *);
 	int (*sw_reset)(struct mmc_host *);
+	bool (*cache_enabled)(struct mmc_host *);
 };
 
 void mmc_attach_bus(struct mmc_host *host, const struct mmc_bus_ops *ops);
···
 int mmc_execute_tuning(struct mmc_card *card);
 int mmc_hs200_to_hs400(struct mmc_card *card);
 int mmc_hs400_to_hs200(struct mmc_card *card);
-
-#ifdef CONFIG_PM_SLEEP
-void mmc_register_pm_notifier(struct mmc_host *host);
-void mmc_unregister_pm_notifier(struct mmc_host *host);
-#else
-static inline void mmc_register_pm_notifier(struct mmc_host *host) { }
-static inline void mmc_unregister_pm_notifier(struct mmc_host *host) { }
-#endif
 
 void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq);
 bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
···
 {
 	if (host->ops->post_req)
 		host->ops->post_req(host, mrq, err);
+}
+
+static inline bool mmc_cache_enabled(struct mmc_host *host)
+{
+	if (host->bus_ops->cache_enabled)
+		return host->bus_ops->cache_enabled(host);
+
+	return false;
 }
 
 #endif
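The new `cache_enabled` bus op above is an optional callback with a safe default: bus types without a cache (SD, SDIO) leave it NULL and the core treats the cache as off, so the flush path can complete the request immediately. A self-contained sketch of that pattern; the struct and function names are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct host_sketch;

struct bus_ops_sketch {
	/* optional: bus types with no cache simply leave this NULL */
	bool (*cache_enabled)(struct host_sketch *host);
};

struct host_sketch {
	const struct bus_ops_sketch *bus_ops;
	int cache_ctrl; /* stands in for ext_csd.cache_ctrl */
};

/* eMMC-style implementation: report whatever the card register says. */
static bool emmc_cache_enabled_sketch(struct host_sketch *host)
{
	return host->cache_ctrl & 1;
}

/* Core-side helper: ask the attached bus, defaulting to "no cache" so
 * callers can skip needless flushes when the op is not implemented. */
static bool cache_enabled_sketch(struct host_sketch *host)
{
	if (host->bus_ops->cache_enabled)
		return host->bus_ops->cache_enabled(host);

	return false;
}
```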
+71 -19
drivers/mmc/core/host.c
···
 
 static DEFINE_IDA(mmc_host_ida);
 
+#ifdef CONFIG_PM_SLEEP
+static int mmc_host_class_prepare(struct device *dev)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+
+	/*
+	 * It's safe to access the bus_ops pointer, as both userspace and the
+	 * workqueue for detecting cards are frozen at this point.
+	 */
+	if (!host->bus_ops)
+		return 0;
+
+	/* Validate conditions for system suspend. */
+	if (host->bus_ops->pre_suspend)
+		return host->bus_ops->pre_suspend(host);
+
+	return 0;
+}
+
+static void mmc_host_class_complete(struct device *dev)
+{
+	struct mmc_host *host = cls_dev_to_mmc_host(dev);
+
+	_mmc_detect_change(host, 0, false);
+}
+
+static const struct dev_pm_ops mmc_host_class_dev_pm_ops = {
+	.prepare = mmc_host_class_prepare,
+	.complete = mmc_host_class_complete,
+};
+
+#define MMC_HOST_CLASS_DEV_PM_OPS (&mmc_host_class_dev_pm_ops)
+#else
+#define MMC_HOST_CLASS_DEV_PM_OPS NULL
+#endif
+
 static void mmc_host_classdev_release(struct device *dev)
 {
 	struct mmc_host *host = cls_dev_to_mmc_host(dev);
···
 static struct class mmc_host_class = {
 	.name = "mmc_host",
 	.dev_release = mmc_host_classdev_release,
+	.pm = MMC_HOST_CLASS_DEV_PM_OPS,
 };
 
 int mmc_register_host_class(void)
···
 EXPORT_SYMBOL(mmc_of_parse_clk_phase);
 
 /**
- * mmc_of_parse() - parse host's device-tree node
- * @host: host whose node should be parsed.
+ * mmc_of_parse() - parse host's device properties
+ * @host: host whose properties should be parsed.
  *
  * To keep the rest of the MMC subsystem unaware of whether DT has been
  * used to to instantiate and configure this host instance or not, we
···
 
 /**
  * mmc_of_parse_voltage - return mask of supported voltages
- * @np: The device node need to be parsed.
+ * @host: host whose properties should be parsed.
  * @mask: mask of voltages available for MMC/SD/SDIO
  *
- * Parse the "voltage-ranges" DT property, returning zero if it is not
+ * Parse the "voltage-ranges" property, returning zero if it is not
  * found, negative errno if the voltage-range specification is invalid,
  * or one if the voltage-range is specified and successfully parsed.
  */
-int mmc_of_parse_voltage(struct device_node *np, u32 *mask)
+int mmc_of_parse_voltage(struct mmc_host *host, u32 *mask)
 {
-	const u32 *voltage_ranges;
+	const char *prop = "voltage-ranges";
+	struct device *dev = host->parent;
+	u32 *voltage_ranges;
 	int num_ranges, i;
+	int ret;
 
-	voltage_ranges = of_get_property(np, "voltage-ranges", &num_ranges);
-	if (!voltage_ranges) {
-		pr_debug("%pOF: voltage-ranges unspecified\n", np);
+	if (!device_property_present(dev, prop)) {
+		dev_dbg(dev, "%s unspecified\n", prop);
 		return 0;
 	}
-	num_ranges = num_ranges / sizeof(*voltage_ranges) / 2;
+
+	ret = device_property_count_u32(dev, prop);
+	if (ret < 0)
+		return ret;
+
+	num_ranges = ret / 2;
 	if (!num_ranges) {
-		pr_err("%pOF: voltage-ranges empty\n", np);
+		dev_err(dev, "%s empty\n", prop);
 		return -EINVAL;
+	}
+
+	voltage_ranges = kcalloc(2 * num_ranges, sizeof(*voltage_ranges), GFP_KERNEL);
+	if (!voltage_ranges)
+		return -ENOMEM;
+
+	ret = device_property_read_u32_array(dev, prop, voltage_ranges, 2 * num_ranges);
+	if (ret) {
+		kfree(voltage_ranges);
+		return ret;
 	}
 
 	for (i = 0; i < num_ranges; i++) {
 		const int j = i * 2;
 		u32 ocr_mask;
 
-		ocr_mask = mmc_vddrange_to_ocrmask(
-				be32_to_cpu(voltage_ranges[j]),
-				be32_to_cpu(voltage_ranges[j + 1]));
+		ocr_mask = mmc_vddrange_to_ocrmask(voltage_ranges[j + 0],
+						   voltage_ranges[j + 1]);
 		if (!ocr_mask) {
-			pr_err("%pOF: voltage-range #%d is invalid\n",
-				np, i);
+			dev_err(dev, "range #%d in %s is invalid\n", i, prop);
+			kfree(voltage_ranges);
 			return -EINVAL;
 		}
 		*mask |= ocr_mask;
 	}
+
+	kfree(voltage_ranges);
 
 	return 1;
 }
···
 #endif
 
 	mmc_start_host(host);
-	mmc_register_pm_notifier(host);
-
 	return 0;
 }
···
  */
 void mmc_remove_host(struct mmc_host *host)
 {
-	mmc_unregister_pm_notifier(host);
 	mmc_stop_host(host);
 
 #ifdef CONFIG_DEBUG_FS
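The converted `mmc_of_parse_voltage()` still consumes `voltage-ranges` as a flat array of (min mV, max mV) u32 pairs and ORs each pair into a single mask. A simplified userspace sketch of that pair-wise fold; the mV-to-bit mapping below is a hypothetical stand-in for `mmc_vddrange_to_ocrmask()`, not the kernel's actual OCR table:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mapping: one mask bit per 100 mV step starting at
 * 2000 mV, loosely imitating (not reproducing) the OCR conversion. */
static uint32_t range_to_mask_sketch(uint32_t min_mv, uint32_t max_mv)
{
	uint32_t mask = 0;
	uint32_t mv;

	if (min_mv < 2000 || max_mv > 3600 || min_mv > max_mv)
		return 0; /* an invalid range yields an empty mask */
	for (mv = min_mv; mv < max_mv; mv += 100)
		mask |= 1u << ((mv - 2000) / 100);
	return mask;
}

/* Fold the flat (min, max) pair array into one capability mask,
 * mirroring the return convention: 1 on success, negative on error. */
static int parse_voltage_ranges_sketch(const uint32_t *vals, int count,
				       uint32_t *mask)
{
	int i;

	if (count <= 0 || count % 2)
		return -22; /* values must come in pairs */
	for (i = 0; i < count / 2; i++) {
		uint32_t m = range_to_mask_sketch(vals[2 * i],
						 vals[2 * i + 1]);

		if (!m)
			return -22;
		*mask |= m;
	}
	return 1;
}
```

For the binding's default-documented range 3.2 V..3.4 V, this sketch sets the two bits covering the 3200 mV and 3300 mV steps.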
+18 -11
drivers/mmc/core/mmc.c
···
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 			   EXT_CSD_HS_TIMING, EXT_CSD_TIMING_HS,
 			   card->ext_csd.generic_cmd6_time, MMC_TIMING_MMC_HS,
-			   true, true);
+			   true, true, MMC_CMD_RETRIES);
 	if (err)
 		pr_warn("%s: switch to high-speed failed, err:%d\n",
 			mmc_hostname(card->host), err);
···
 			   ext_csd_bits,
 			   card->ext_csd.generic_cmd6_time,
 			   MMC_TIMING_MMC_DDR52,
-			   true, true);
+			   true, true, MMC_CMD_RETRIES);
 	if (err) {
 		pr_err("%s: switch to bus width %d ddr failed\n",
 			mmc_hostname(host), 1 << bus_width);
···
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 			   EXT_CSD_HS_TIMING, val,
 			   card->ext_csd.generic_cmd6_time, 0,
-			   false, true);
+			   false, true, MMC_CMD_RETRIES);
 	if (err) {
 		pr_err("%s: switch to high-speed from hs200 failed, err:%d\n",
 			mmc_hostname(host), err);
···
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 			   EXT_CSD_HS_TIMING, val,
 			   card->ext_csd.generic_cmd6_time, 0,
-			   false, true);
+			   false, true, MMC_CMD_RETRIES);
 	if (err) {
 		pr_err("%s: switch to hs400 failed, err:%d\n",
 			 mmc_hostname(host), err);
···
 	val = EXT_CSD_TIMING_HS;
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_HS_TIMING,
 			   val, card->ext_csd.generic_cmd6_time, 0,
-			   false, true);
+			   false, true, MMC_CMD_RETRIES);
 	if (err)
 		goto out_err;
···
 	/* Switch HS DDR to HS */
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BUS_WIDTH,
 			   EXT_CSD_BUS_WIDTH_8, card->ext_csd.generic_cmd6_time,
-			   0, false, true);
+			   0, false, true, MMC_CMD_RETRIES);
 	if (err)
 		goto out_err;
···
 	      card->drive_strength << EXT_CSD_DRV_STR_SHIFT;
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_HS_TIMING,
 			   val, card->ext_csd.generic_cmd6_time, 0,
-			   false, true);
+			   false, true, MMC_CMD_RETRIES);
 	if (err)
 		goto out_err;
···
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 			   EXT_CSD_HS_TIMING, EXT_CSD_TIMING_HS,
 			   card->ext_csd.generic_cmd6_time, 0,
-			   false, true);
+			   false, true, MMC_CMD_RETRIES);
 	if (err) {
 		pr_err("%s: switch to hs for hs400es failed, err:%d\n",
 			mmc_hostname(host), err);
···
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 			   EXT_CSD_HS_TIMING, val,
 			   card->ext_csd.generic_cmd6_time, 0,
-			   false, true);
+			   false, true, MMC_CMD_RETRIES);
 	if (err) {
 		pr_err("%s: switch to hs400es failed, err:%d\n",
 			mmc_hostname(host), err);
···
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 			   EXT_CSD_HS_TIMING, val,
 			   card->ext_csd.generic_cmd6_time, 0,
-			   false, true);
+			   false, true, MMC_CMD_RETRIES);
 	if (err)
 		goto err;
 	old_timing = host->ios.timing;
···
 
 	err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
 			EXT_CSD_POWER_OFF_NOTIFICATION,
-			notify_type, timeout, 0, false, false);
+			notify_type, timeout, 0, false, false, MMC_CMD_RETRIES);
 	if (err)
 		pr_err("%s: Power Off Notification timed out, %u\n",
 		       mmc_hostname(card->host), timeout);
···
 		mmc_power_off(host);
 		mmc_release_host(host);
 	}
+}
+
+static bool _mmc_cache_enabled(struct mmc_host *host)
+{
+	return host->card->ext_csd.cache_size > 0 &&
+	       host->card->ext_csd.cache_ctrl & 1;
 }
 
 static int _mmc_suspend(struct mmc_host *host, bool is_suspend)
···
 	.alive = mmc_alive,
 	.shutdown = mmc_shutdown,
 	.hw_reset = _mmc_hw_reset,
+	.cache_enabled = _mmc_cache_enabled,
 };
 
 /*
+20 -39
drivers/mmc/core/mmc_ops.c
··· 296 296 return 0; 297 297 } 298 298 299 - static int mmc_spi_send_csd(struct mmc_host *host, u32 *csd) 299 + static int mmc_spi_send_cxd(struct mmc_host *host, u32 *cxd, u32 opcode) 300 300 { 301 301 int ret, i; 302 - __be32 *csd_tmp; 302 + __be32 *cxd_tmp; 303 303 304 - csd_tmp = kzalloc(16, GFP_KERNEL); 305 - if (!csd_tmp) 304 + cxd_tmp = kzalloc(16, GFP_KERNEL); 305 + if (!cxd_tmp) 306 306 return -ENOMEM; 307 307 308 - ret = mmc_send_cxd_data(NULL, host, MMC_SEND_CSD, csd_tmp, 16); 308 + ret = mmc_send_cxd_data(NULL, host, opcode, cxd_tmp, 16); 309 309 if (ret) 310 310 goto err; 311 311 312 312 for (i = 0; i < 4; i++) 313 - csd[i] = be32_to_cpu(csd_tmp[i]); 313 + cxd[i] = be32_to_cpu(cxd_tmp[i]); 314 314 315 315 err: 316 - kfree(csd_tmp); 316 + kfree(cxd_tmp); 317 317 return ret; 318 318 } 319 319 320 320 int mmc_send_csd(struct mmc_card *card, u32 *csd) 321 321 { 322 322 if (mmc_host_is_spi(card->host)) 323 - return mmc_spi_send_csd(card->host, csd); 323 + return mmc_spi_send_cxd(card->host, csd, MMC_SEND_CSD); 324 324 325 325 return mmc_send_cxd_native(card->host, card->rca << 16, csd, 326 326 MMC_SEND_CSD); 327 327 } 328 328 329 - static int mmc_spi_send_cid(struct mmc_host *host, u32 *cid) 330 - { 331 - int ret, i; 332 - __be32 *cid_tmp; 333 - 334 - cid_tmp = kzalloc(16, GFP_KERNEL); 335 - if (!cid_tmp) 336 - return -ENOMEM; 337 - 338 - ret = mmc_send_cxd_data(NULL, host, MMC_SEND_CID, cid_tmp, 16); 339 - if (ret) 340 - goto err; 341 - 342 - for (i = 0; i < 4; i++) 343 - cid[i] = be32_to_cpu(cid_tmp[i]); 344 - 345 - err: 346 - kfree(cid_tmp); 347 - return ret; 348 - } 349 - 350 329 int mmc_send_cid(struct mmc_host *host, u32 *cid) 351 330 { 352 331 if (mmc_host_is_spi(host)) 353 - return mmc_spi_send_cid(host, cid); 332 + return mmc_spi_send_cxd(host, cid, MMC_SEND_CID); 354 333 355 334 return mmc_send_cxd_native(host, 0, cid, MMC_ALL_SEND_CID); 356 335 } ··· 532 553 * @timing: new timing to change to 533 554 * @send_status: send status cmd to poll for 
busy 534 555 * @retry_crc_err: retry when CRC errors when polling with CMD13 for busy 556 + * @retries: number of retries 535 557 * 536 558 * Modifies the EXT_CSD register for selected card. 537 559 */ 538 560 int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, 539 561 unsigned int timeout_ms, unsigned char timing, 540 - bool send_status, bool retry_crc_err) 562 + bool send_status, bool retry_crc_err, unsigned int retries) 541 563 { 542 564 struct mmc_host *host = card->host; 543 565 int err; ··· 578 598 cmd.flags |= MMC_RSP_SPI_R1 | MMC_RSP_R1; 579 599 } 580 600 581 - err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES); 601 + err = mmc_wait_for_cmd(host, &cmd, retries); 582 602 if (err) 583 603 goto out; 584 604 ··· 613 633 unsigned int timeout_ms) 614 634 { 615 635 return __mmc_switch(card, set, index, value, timeout_ms, 0, 616 - true, false); 636 + true, false, MMC_CMD_RETRIES); 617 637 } 618 638 EXPORT_SYMBOL_GPL(mmc_switch); 619 639 ··· 968 988 { 969 989 int err = 0; 970 990 971 - if (mmc_card_mmc(card) && 972 - (card->ext_csd.cache_size > 0) && 973 - (card->ext_csd.cache_ctrl & 1)) { 991 + if (mmc_cache_enabled(card->host)) { 974 992 err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 975 993 EXT_CSD_FLUSH_CACHE, 1, 976 994 MMC_CACHE_FLUSH_TIMEOUT_MS); ··· 1009 1031 } 1010 1032 EXPORT_SYMBOL_GPL(mmc_cmdq_disable); 1011 1033 1012 - int mmc_sanitize(struct mmc_card *card) 1034 + int mmc_sanitize(struct mmc_card *card, unsigned int timeout_ms) 1013 1035 { 1014 1036 struct mmc_host *host = card->host; 1015 1037 int err; ··· 1019 1041 return -EOPNOTSUPP; 1020 1042 } 1021 1043 1044 + if (!timeout_ms) 1045 + timeout_ms = MMC_SANITIZE_TIMEOUT_MS; 1046 + 1022 1047 pr_debug("%s: Sanitize in progress...\n", mmc_hostname(host)); 1023 1048 1024 1049 mmc_retune_hold(host); 1025 1050 1026 - err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_SANITIZE_START, 1027 - 1, MMC_SANITIZE_TIMEOUT_MS); 1051 + err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 
EXT_CSD_SANITIZE_START, 1052 + 1, timeout_ms, 0, true, false, 0); 1028 1053 if (err) 1029 1054 pr_err("%s: Sanitize failed err=%d\n", mmc_hostname(host), err); 1030 1055
+2 -2
drivers/mmc/core/mmc_ops.h
··· 39 39 enum mmc_busy_cmd busy_cmd); 40 40 int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, 41 41 unsigned int timeout_ms, unsigned char timing, 42 - bool send_status, bool retry_crc_err); 42 + bool send_status, bool retry_crc_err, unsigned int retries); 43 43 int mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, 44 44 unsigned int timeout_ms); 45 45 void mmc_run_bkops(struct mmc_card *card); 46 46 int mmc_flush_cache(struct mmc_card *card); 47 47 int mmc_cmdq_enable(struct mmc_card *card); 48 48 int mmc_cmdq_disable(struct mmc_card *card); 49 - int mmc_sanitize(struct mmc_card *card); 49 + int mmc_sanitize(struct mmc_card *card, unsigned int timeout_ms); 50 50 51 51 #endif 52 52
+5 -6
drivers/mmc/core/queue.c
··· 60 60 { 61 61 struct mmc_host *host = mq->card->host; 62 62 63 - if (mq->use_cqe && !host->hsq_enabled) 63 + if (host->cqe_enabled && !host->hsq_enabled) 64 64 return mmc_cqe_issue_type(host, req); 65 65 66 66 if (req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_WRITE) ··· 127 127 bool ignore_tout; 128 128 129 129 spin_lock_irqsave(&mq->lock, flags); 130 - ignore_tout = mq->recovery_needed || !mq->use_cqe || host->hsq_enabled; 130 + ignore_tout = mq->recovery_needed || !host->cqe_enabled || host->hsq_enabled; 131 131 spin_unlock_irqrestore(&mq->lock, flags); 132 132 133 133 return ignore_tout ? BLK_EH_RESET_TIMER : mmc_cqe_timed_out(req); ··· 144 144 145 145 mq->in_recovery = true; 146 146 147 - if (mq->use_cqe && !host->hsq_enabled) 147 + if (host->cqe_enabled && !host->hsq_enabled) 148 148 mmc_blk_cqe_recovery(mq); 149 149 else 150 150 mmc_blk_mq_recovery(mq); ··· 315 315 if (get_card) 316 316 mmc_get_card(card, &mq->ctx); 317 317 318 - if (mq->use_cqe) { 318 + if (host->cqe_enabled) { 319 319 host->retune_now = host->need_retune && cqe_retune_ok && 320 320 !host->hold_retune; 321 321 } ··· 430 430 int ret; 431 431 432 432 mq->card = card; 433 - mq->use_cqe = host->cqe_enabled; 434 433 435 434 spin_lock_init(&mq->lock); 436 435 ··· 439 440 * The queue depth for CQE must match the hardware because the request 440 441 * tag is used to index the hardware queue. 441 442 */ 442 - if (mq->use_cqe && !host->hsq_enabled) 443 + if (host->cqe_enabled && !host->hsq_enabled) 443 444 mq->tag_set.queue_depth = 444 445 min_t(int, card->ext_csd.cmdq_depth, host->cqe_qdepth); 445 446 else
-1
drivers/mmc/core/queue.h
··· 82 82 unsigned int cqe_busy; 83 83 #define MMC_CQE_DCMD_BUSY BIT(0) 84 84 bool busy; 85 - bool use_cqe; 86 85 bool recovery_needed; 87 86 bool in_recovery; 88 87 bool rw_wait;
+6
drivers/mmc/core/sd.c
··· 135 135 csd->erase_size = UNSTUFF_BITS(resp, 39, 7) + 1; 136 136 csd->erase_size <<= csd->write_blkbits - 9; 137 137 } 138 + 139 + if (UNSTUFF_BITS(resp, 13, 1)) 140 + mmc_card_set_readonly(card); 138 141 break; 139 142 case 1: 140 143 /* ··· 172 169 csd->write_blkbits = 9; 173 170 csd->write_partial = 0; 174 171 csd->erase_size = 1; 172 + 173 + if (UNSTUFF_BITS(resp, 13, 1)) 174 + mmc_card_set_readonly(card); 175 175 break; 176 176 default: 177 177 pr_err("%s: unrecognised CSD structure version %d\n",
+22 -6
drivers/mmc/core/sdio.c
··· 985 985 */ 986 986 static int mmc_sdio_pre_suspend(struct mmc_host *host) 987 987 { 988 - int i, err = 0; 988 + int i; 989 989 990 990 for (i = 0; i < host->card->sdio_funcs; i++) { 991 991 struct sdio_func *func = host->card->sdio_func[i]; 992 992 if (func && sdio_func_present(func) && func->dev.driver) { 993 993 const struct dev_pm_ops *pmops = func->dev.driver->pm; 994 - if (!pmops || !pmops->suspend || !pmops->resume) { 994 + if (!pmops || !pmops->suspend || !pmops->resume) 995 995 /* force removal of entire card in that case */ 996 - err = -ENOSYS; 997 - break; 998 - } 996 + goto remove; 999 997 } 1000 998 } 1001 999 1002 - return err; 1000 + return 0; 1001 + 1002 + remove: 1003 + if (!mmc_card_is_removable(host)) { 1004 + dev_warn(mmc_dev(host), 1005 + "missing suspend/resume ops for non-removable SDIO card\n"); 1006 + /* Don't remove a non-removable card - we can't re-detect it. */ 1007 + return 0; 1008 + } 1009 + 1010 + /* Remove the SDIO card and let it be re-detected later on. */ 1011 + mmc_sdio_remove(host); 1012 + mmc_claim_host(host); 1013 + mmc_detach_bus(host); 1014 + mmc_power_off(host); 1015 + mmc_release_host(host); 1016 + host->pm_flags = 0; 1017 + 1018 + return 0; 1003 1019 } 1004 1020 1005 1021 /*
+2
drivers/mmc/host/Kconfig
··· 278 278 tristate "SDHCI support for the Freescale eSDHC/uSDHC i.MX controller" 279 279 depends on ARCH_MXC || COMPILE_TEST 280 280 depends on MMC_SDHCI_PLTFM 281 + depends on OF 281 282 select MMC_SDHCI_IO_ACCESSORS 282 283 select MMC_CQHCI 283 284 help ··· 708 707 tristate "Renesas SDHI SD/SDIO controller support" 709 708 depends on SUPERH || ARCH_RENESAS || COMPILE_TEST 710 709 select MMC_TMIO_CORE 710 + select RESET_CONTROLLER if ARCH_RENESAS 711 711 help 712 712 This provides support for the SDHI SD/SDIO controller found in 713 713 Renesas SuperH, ARM and ARM64 based SoCs
-2
drivers/mmc/host/Makefile
··· 34 34 obj-$(CONFIG_MMC_MVSDIO) += mvsdio.o 35 35 obj-$(CONFIG_MMC_DAVINCI) += davinci_mmc.o 36 36 obj-$(CONFIG_MMC_SPI) += mmc_spi.o 37 - ifeq ($(CONFIG_OF),y) 38 37 obj-$(CONFIG_MMC_SPI) += of_mmc_spi.o 39 - endif 40 38 obj-$(CONFIG_MMC_S3C) += s3cmci.o 41 39 obj-$(CONFIG_MMC_SDRICOH_CS) += sdricoh_cs.o 42 40 obj-$(CONFIG_MMC_TMIO) += tmio_mmc.o
+1 -2
drivers/mmc/host/cavium.c
··· 656 656 657 657 if (!mrq->data || !mrq->data->sg || !mrq->data->sg_len || 658 658 !mrq->stop || mrq->stop->opcode != MMC_STOP_TRANSMISSION) { 659 - dev_err(&mmc->card->dev, 660 - "Error: cmv_mmc_dma_request no data\n"); 659 + dev_err(&mmc->card->dev, "Error: %s no data\n", __func__); 661 660 goto error; 662 661 } 663 662
+1 -1
drivers/mmc/host/dw_mmc-k3.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 /* 3 3 * Copyright (c) 2013 Linaro Ltd. 4 - * Copyright (c) 2013 Hisilicon Limited. 4 + * Copyright (c) 2013 HiSilicon Limited. 5 5 */ 6 6 7 7 #include <linux/bitops.h>
+1 -1
drivers/mmc/host/dw_mmc-rockchip.c
··· 61 61 } 62 62 63 63 /* Make sure we use phases which we can enumerate with */ 64 - if (!IS_ERR(priv->sample_clk)) 64 + if (!IS_ERR(priv->sample_clk) && ios->timing <= MMC_TIMING_SD_HS) 65 65 clk_set_phase(priv->sample_clk, priv->default_sample_phase); 66 66 67 67 /*
+5 -11
drivers/mmc/host/dw_mmc.c
··· 2606 2606 { 2607 2607 struct dw_mci_slot *slot = host->slot; 2608 2608 2609 - if (slot->mmc->ops->card_event) 2610 - slot->mmc->ops->card_event(slot->mmc); 2611 2609 mmc_detect_change(slot->mmc, 2612 2610 msecs_to_jiffies(host->pdata->detect_delay_ms)); 2613 2611 } ··· 3093 3095 3094 3096 /* find reset controller when exist */ 3095 3097 pdata->rstc = devm_reset_control_get_optional_exclusive(dev, "reset"); 3096 - if (IS_ERR(pdata->rstc)) { 3097 - if (PTR_ERR(pdata->rstc) == -EPROBE_DEFER) 3098 - return ERR_PTR(-EPROBE_DEFER); 3099 - } 3098 + if (IS_ERR(pdata->rstc)) 3099 + return ERR_CAST(pdata->rstc); 3100 3100 3101 3101 if (device_property_read_u32(dev, "fifo-depth", &pdata->fifo_depth)) 3102 3102 dev_info(dev, ··· 3200 3204 goto err_clk_ciu; 3201 3205 } 3202 3206 3203 - if (!IS_ERR(host->pdata->rstc)) { 3207 + if (host->pdata->rstc) { 3204 3208 reset_control_assert(host->pdata->rstc); 3205 3209 usleep_range(10, 50); 3206 3210 reset_control_deassert(host->pdata->rstc); ··· 3340 3344 if (host->use_dma && host->dma_ops->exit) 3341 3345 host->dma_ops->exit(host); 3342 3346 3343 - if (!IS_ERR(host->pdata->rstc)) 3344 - reset_control_assert(host->pdata->rstc); 3347 + reset_control_assert(host->pdata->rstc); 3345 3348 3346 3349 err_clk_ciu: 3347 3350 clk_disable_unprepare(host->ciu_clk); ··· 3368 3373 if (host->use_dma && host->dma_ops->exit) 3369 3374 host->dma_ops->exit(host); 3370 3375 3371 - if (!IS_ERR(host->pdata->rstc)) 3372 - reset_control_assert(host->pdata->rstc); 3376 + reset_control_assert(host->pdata->rstc); 3373 3377 3374 3378 clk_disable_unprepare(host->ciu_clk); 3375 3379 clk_disable_unprepare(host->biu_clk);
+4 -4
drivers/mmc/host/mmc_spi.c
··· 1397 1397 1398 1398 host->ones = ones; 1399 1399 1400 + dev_set_drvdata(&spi->dev, mmc); 1401 + 1400 1402 /* Platform data is used to hook up things like card sensing 1401 1403 * and power switching gpios. 1402 1404 */ ··· 1414 1412 if (!host->powerup_msecs || host->powerup_msecs > 250) 1415 1413 host->powerup_msecs = 250; 1416 1414 } 1417 - 1418 - dev_set_drvdata(&spi->dev, mmc); 1419 1415 1420 1416 /* preallocate dma buffers */ 1421 1417 host->data = kmalloc(sizeof(*host->data), GFP_KERNEL); ··· 1494 1494 fail_dma: 1495 1495 kfree(host->data); 1496 1496 fail_nobuf1: 1497 - mmc_free_host(mmc); 1498 1497 mmc_spi_put_pdata(spi); 1498 + mmc_free_host(mmc); 1499 1499 nomem: 1500 1500 kfree(ones); 1501 1501 return status; ··· 1518 1518 kfree(host->ones); 1519 1519 1520 1520 spi->max_speed_hz = mmc->f_max; 1521 - mmc_free_host(mmc); 1522 1521 mmc_spi_put_pdata(spi); 1522 + mmc_free_host(mmc); 1523 1523 return 0; 1524 1524 } 1525 1525
+4 -6
drivers/mmc/host/moxart-mmc.c
··· 257 257 static void moxart_transfer_dma(struct mmc_data *data, struct moxart_host *host) 258 258 { 259 259 u32 len, dir_slave; 260 - long dma_time; 261 260 struct dma_async_tx_descriptor *desc = NULL; 262 261 struct dma_chan *dma_chan; 263 262 ··· 293 294 294 295 data->bytes_xfered += host->data_remain; 295 296 296 - dma_time = wait_for_completion_interruptible_timeout( 297 - &host->dma_complete, host->timeout); 297 + wait_for_completion_interruptible_timeout(&host->dma_complete, 298 + host->timeout); 298 299 299 300 dma_unmap_sg(dma_chan->device->dev, 300 301 data->sg, data->sg_len, ··· 394 395 static void moxart_request(struct mmc_host *mmc, struct mmc_request *mrq) 395 396 { 396 397 struct moxart_host *host = mmc_priv(mmc); 397 - long pio_time; 398 398 unsigned long flags; 399 399 u32 status; 400 400 ··· 429 431 spin_unlock_irqrestore(&host->lock, flags); 430 432 431 433 /* PIO transfers start from interrupt. */ 432 - pio_time = wait_for_completion_interruptible_timeout( 433 - &host->pio_complete, host->timeout); 434 + wait_for_completion_interruptible_timeout(&host->pio_complete, 435 + host->timeout); 434 436 435 437 spin_lock_irqsave(&host->lock, flags); 436 438 }
+6 -12
drivers/mmc/host/of_mmc_spi.c
··· 19 19 #include <linux/mmc/core.h> 20 20 #include <linux/mmc/host.h> 21 21 22 - /* For archs that don't support NO_IRQ (such as mips), provide a dummy value */ 23 - #ifndef NO_IRQ 24 - #define NO_IRQ 0 25 - #endif 26 - 27 22 MODULE_LICENSE("GPL"); 28 23 29 24 struct of_mmc_spi { ··· 49 54 50 55 struct mmc_spi_platform_data *mmc_spi_get_pdata(struct spi_device *spi) 51 56 { 57 + struct mmc_host *mmc = dev_get_drvdata(&spi->dev); 52 58 struct device *dev = &spi->dev; 53 - struct device_node *np = dev->of_node; 54 59 struct of_mmc_spi *oms; 55 60 56 - if (dev->platform_data || !np) 61 + if (dev->platform_data || !dev_fwnode(dev)) 57 62 return dev->platform_data; 58 63 59 64 oms = kzalloc(sizeof(*oms), GFP_KERNEL); 60 65 if (!oms) 61 66 return NULL; 62 67 63 - if (mmc_of_parse_voltage(np, &oms->pdata.ocr_mask) <= 0) 68 + if (mmc_of_parse_voltage(mmc, &oms->pdata.ocr_mask) < 0) 64 69 goto err_ocr; 65 70 66 - oms->detect_irq = irq_of_parse_and_map(np, 0); 67 - if (oms->detect_irq != 0) { 71 + oms->detect_irq = spi->irq; 72 + if (oms->detect_irq > 0) { 68 73 oms->pdata.init = of_mmc_spi_init; 69 74 oms->pdata.exit = of_mmc_spi_exit; 70 75 } else { ··· 82 87 void mmc_spi_put_pdata(struct spi_device *spi) 83 88 { 84 89 struct device *dev = &spi->dev; 85 - struct device_node *np = dev->of_node; 86 90 struct of_mmc_spi *oms = to_of_mmc_spi(dev); 87 91 88 - if (!dev->platform_data || !np) 92 + if (!dev->platform_data || !dev_fwnode(dev)) 89 93 return; 90 94 91 95 kfree(oms);
-1
drivers/mmc/host/owl-mmc.c
··· 581 581 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 582 582 owl_host->base = devm_ioremap_resource(&pdev->dev, res); 583 583 if (IS_ERR(owl_host->base)) { 584 - dev_err(&pdev->dev, "Failed to remap registers\n"); 585 584 ret = PTR_ERR(owl_host->base); 586 585 goto err_free_host; 587 586 }
+2
drivers/mmc/host/renesas_sdhi.h
··· 70 70 DECLARE_BITMAP(smpcmp, BITS_PER_LONG); 71 71 unsigned int tap_num; 72 72 unsigned int tap_set; 73 + 74 + struct reset_control *rstc; 73 75 }; 74 76 75 77 #define host_to_priv(host) \
+28 -10
drivers/mmc/host/renesas_sdhi_core.c
··· 20 20 21 21 #include <linux/clk.h> 22 22 #include <linux/delay.h> 23 + #include <linux/iopoll.h> 23 24 #include <linux/kernel.h> 24 25 #include <linux/mfd/tmio.h> 25 26 #include <linux/mmc/host.h> ··· 33 32 #include <linux/platform_device.h> 34 33 #include <linux/pm_domain.h> 35 34 #include <linux/regulator/consumer.h> 35 + #include <linux/reset.h> 36 36 #include <linux/sh_dma.h> 37 37 #include <linux/slab.h> 38 38 #include <linux/sys_soc.h> ··· 559 557 return 0; 560 558 } 561 559 560 + static void renesas_sdhi_scc_reset(struct tmio_mmc_host *host, struct renesas_sdhi *priv) 561 + { 562 + renesas_sdhi_disable_scc(host->mmc); 563 + renesas_sdhi_reset_hs400_mode(host, priv); 564 + priv->needs_adjust_hs400 = false; 565 + 566 + sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL, 567 + ~SH_MOBILE_SDHI_SCC_RVSCNTL_RVSEN & 568 + sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL)); 569 + } 570 + 562 571 /* only populated for TMIO_MMC_MIN_RCAR2 */ 563 572 static void renesas_sdhi_reset(struct tmio_mmc_host *host) 564 573 { 565 574 struct renesas_sdhi *priv = host_to_priv(host); 575 + int ret; 566 576 u16 val; 567 577 568 - if (priv->scc_ctl) { 569 - renesas_sdhi_disable_scc(host->mmc); 570 - renesas_sdhi_reset_hs400_mode(host, priv); 578 + if (priv->rstc) { 579 + reset_control_reset(priv->rstc); 580 + /* Unknown why but without polling reset status, it will hang */ 581 + read_poll_timeout(reset_control_status, ret, ret == 0, 1, 100, 582 + false, priv->rstc); 571 583 priv->needs_adjust_hs400 = false; 572 - 573 - sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL, 574 - ~SH_MOBILE_SDHI_SCC_RVSCNTL_RVSEN & 575 - sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL)); 584 + renesas_sdhi_set_clock(host, host->clk_cache); 585 + } else if (priv->scc_ctl) { 586 + renesas_sdhi_scc_reset(host, priv); 576 587 } 577 - 578 - sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, TMIO_MASK_INIT_RCAR2); 579 588 580 589 if (sd_ctrl_read16(host, CTL_VERSION) >= SDHI_VER_GEN3_SD) { 
581 590 val = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT); ··· 704 691 705 692 ret = renesas_sdhi_select_tuning(host); 706 693 if (ret < 0) 707 - renesas_sdhi_reset(host); 694 + renesas_sdhi_scc_reset(host, priv); 708 695 return ret; 709 696 } 710 697 ··· 1047 1034 host->ops.start_signal_voltage_switch = 1048 1035 renesas_sdhi_start_signal_voltage_switch; 1049 1036 host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27; 1037 + host->sdcard_irq_mask_all = TMIO_MASK_ALL_RCAR2; 1050 1038 host->reset = renesas_sdhi_reset; 1051 1039 } 1052 1040 ··· 1089 1075 ret = renesas_sdhi_clk_enable(host); 1090 1076 if (ret) 1091 1077 goto efree; 1078 + 1079 + priv->rstc = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL); 1080 + if (IS_ERR(priv->rstc)) 1081 + return PTR_ERR(priv->rstc); 1092 1082 1093 1083 ver = sd_ctrl_read16(host, CTL_VERSION); 1094 1084 /* GEN2_SDR104 is first known SDHI to use 32bit block count */
+2 -2
drivers/mmc/host/renesas_sdhi_internal_dmac.c
··· 97 97 TMIO_MMC_HAVE_CBSY, 98 98 .tmio_ocr_mask = MMC_VDD_32_33, 99 99 .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ | 100 - MMC_CAP_CMD23, 100 + MMC_CAP_CMD23 | MMC_CAP_WAIT_WHILE_BUSY, 101 101 .bus_shift = 2, 102 102 .scc_offset = 0 - 0x1000, 103 103 .taps = rcar_gen3_scc_taps, ··· 111 111 .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_CLK_ACTUAL | 112 112 TMIO_MMC_HAVE_CBSY | TMIO_MMC_MIN_RCAR2, 113 113 .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ | 114 - MMC_CAP_CMD23, 114 + MMC_CAP_CMD23 | MMC_CAP_WAIT_WHILE_BUSY, 115 115 .capabilities2 = MMC_CAP2_NO_WRITE_PROTECT | MMC_CAP2_MERGE_CAPABLE, 116 116 .bus_shift = 2, 117 117 .scc_offset = 0x1000,
+5 -3
drivers/mmc/host/renesas_sdhi_sys_dmac.c
··· 33 33 .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_32BIT_DATA_PORT | 34 34 TMIO_MMC_HAVE_CBSY, 35 35 .tmio_ocr_mask = MMC_VDD_32_33, 36 - .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ, 36 + .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ | 37 + MMC_CAP_WAIT_WHILE_BUSY, 37 38 }; 38 39 39 40 static const struct renesas_sdhi_of_data of_rcar_gen1_compatible = { 40 41 .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_CLK_ACTUAL, 41 - .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ, 42 + .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ | 43 + MMC_CAP_WAIT_WHILE_BUSY, 42 44 .capabilities2 = MMC_CAP2_NO_WRITE_PROTECT, 43 45 }; 44 46 ··· 60 58 .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_CLK_ACTUAL | 61 59 TMIO_MMC_HAVE_CBSY | TMIO_MMC_MIN_RCAR2, 62 60 .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ | 63 - MMC_CAP_CMD23, 61 + MMC_CAP_CMD23 | MMC_CAP_WAIT_WHILE_BUSY, 64 62 .capabilities2 = MMC_CAP2_NO_WRITE_PROTECT, 65 63 .dma_buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES, 66 64 .dma_rx_offset = 0x2000,
+2
drivers/mmc/host/sdhci-acpi.c
··· 772 772 { "QCOM8051", NULL, &sdhci_acpi_slot_qcom_sd_3v }, 773 773 { "QCOM8052", NULL, &sdhci_acpi_slot_qcom_sd }, 774 774 { "AMDI0040", NULL, &sdhci_acpi_slot_amd_emmc }, 775 + { "AMDI0041", NULL, &sdhci_acpi_slot_amd_emmc }, 775 776 { }, 776 777 }; 777 778 ··· 790 789 { "QCOM8051" }, 791 790 { "QCOM8052" }, 792 791 { "AMDI0040" }, 792 + { "AMDI0041" }, 793 793 { }, 794 794 }; 795 795 MODULE_DEVICE_TABLE(acpi, sdhci_acpi_ids);
-1
drivers/mmc/host/sdhci-brcmstb.c
··· 199 199 if (dma64) { 200 200 dev_dbg(mmc_dev(host->mmc), "Using 64 bit DMA\n"); 201 201 cq_host->caps |= CQHCI_TASK_DESC_SZ_128; 202 - cq_host->quirks |= CQHCI_QUIRK_SHORT_TXFR_DESC_SZ; 203 202 } 204 203 205 204 ret = cqhci_init(cq_host, host->mmc, dma64);
+7 -19
drivers/mmc/host/sdhci-esdhc-imx.c
··· 434 434 * Do not advertise faster UHS modes if there are no 435 435 * pinctrl states for 100MHz/200MHz. 436 436 */ 437 - if (IS_ERR_OR_NULL(imx_data->pins_100mhz) || 438 - IS_ERR_OR_NULL(imx_data->pins_200mhz)) 439 - val &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50 440 - | SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_HS400); 437 + if (IS_ERR_OR_NULL(imx_data->pins_100mhz)) 438 + val &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50); 439 + if (IS_ERR_OR_NULL(imx_data->pins_200mhz)) 440 + val &= ~(SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_HS400); 441 441 } 442 442 } 443 443 ··· 1453 1453 .dumpregs = esdhc_sdhci_dumpregs, 1454 1454 }; 1455 1455 1456 - #ifdef CONFIG_OF 1457 1456 static int 1458 1457 sdhci_esdhc_imx_probe_dt(struct platform_device *pdev, 1459 1458 struct sdhci_host *host, ··· 1485 1486 if (of_property_read_u32(np, "fsl,delay-line", &boarddata->delay_line)) 1486 1487 boarddata->delay_line = 0; 1487 1488 1488 - mmc_of_parse_voltage(np, &host->ocr_mask); 1489 + mmc_of_parse_voltage(host->mmc, &host->ocr_mask); 1489 1490 1490 - if (esdhc_is_usdhc(imx_data)) { 1491 + if (esdhc_is_usdhc(imx_data) && !IS_ERR(imx_data->pinctrl)) { 1491 1492 imx_data->pins_100mhz = pinctrl_lookup_state(imx_data->pinctrl, 1492 1493 ESDHC_PINCTRL_STATE_100MHZ); 1493 1494 imx_data->pins_200mhz = pinctrl_lookup_state(imx_data->pinctrl, ··· 1504 1505 1505 1506 return 0; 1506 1507 } 1507 - #else 1508 - static inline int 1509 - sdhci_esdhc_imx_probe_dt(struct platform_device *pdev, 1510 - struct sdhci_host *host, 1511 - struct pltfm_imx_data *imx_data) 1512 - { 1513 - return -ENODEV; 1514 - } 1515 - #endif 1516 1508 1517 1509 static int sdhci_esdhc_imx_probe(struct platform_device *pdev) 1518 1510 { 1519 - const struct of_device_id *of_id = 1520 - of_match_device(imx_esdhc_dt_ids, &pdev->dev); 1521 1511 struct sdhci_pltfm_host *pltfm_host; 1522 1512 struct sdhci_host *host; 1523 1513 struct cqhci_host *cq_host; ··· 1522 1534 1523 1535 imx_data = sdhci_pltfm_priv(pltfm_host); 1524 1536 1525 - 
imx_data->socdata = of_id->data; 1537 + imx_data->socdata = device_get_match_data(&pdev->dev); 1526 1538 1527 1539 if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS) 1528 1540 cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
+4 -4
drivers/mmc/host/sdhci-esdhc-mcf.c
··· 367 367 struct pltfm_mcf_data *mcf_data) 368 368 { 369 369 struct mcf_esdhc_platform_data *plat_data; 370 + struct device *dev = mmc_dev(host->mmc); 370 371 371 - if (!host->mmc->parent->platform_data) { 372 - dev_err(mmc_dev(host->mmc), "no platform data!\n"); 372 + if (!dev->platform_data) { 373 + dev_err(dev, "no platform data!\n"); 373 374 return -EINVAL; 374 375 } 375 376 376 - plat_data = (struct mcf_esdhc_platform_data *) 377 - host->mmc->parent->platform_data; 377 + plat_data = (struct mcf_esdhc_platform_data *)dev->platform_data; 378 378 379 379 /* Card_detect */ 380 380 switch (plat_data->cd_type) {
+2 -6
drivers/mmc/host/sdhci-msm.c
··· 1863 1863 struct mmc_host *mmc = msm_host->mmc; 1864 1864 struct device *dev = mmc_dev(mmc); 1865 1865 struct resource *res; 1866 - int err; 1867 1866 1868 1867 if (!(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS)) 1869 1868 return 0; ··· 1880 1881 } 1881 1882 1882 1883 msm_host->ice_mem = devm_ioremap_resource(dev, res); 1883 - if (IS_ERR(msm_host->ice_mem)) { 1884 - err = PTR_ERR(msm_host->ice_mem); 1885 - dev_err(dev, "Failed to map ICE registers; err=%d\n", err); 1886 - return err; 1887 - } 1884 + if (IS_ERR(msm_host->ice_mem)) 1885 + return PTR_ERR(msm_host->ice_mem); 1888 1886 1889 1887 if (!sdhci_msm_ice_supported(msm_host)) 1890 1888 goto disable;
+1 -1
drivers/mmc/host/sdhci-of-aspeed.c
··· 181 181 struct aspeed_sdhci *sdhci; 182 182 struct device *dev; 183 183 184 - dev = host->mmc->parent; 184 + dev = mmc_dev(host->mmc); 185 185 sdhci = sdhci_pltfm_priv(sdhci_priv(host)); 186 186 187 187 if (!sdhci->phase_desc)
+292 -21
drivers/mmc/host/sdhci-of-dwcmshc.c
··· 7 7 * Author: Jisheng Zhang <jszhang@kernel.org> 8 8 */ 9 9 10 + #include <linux/acpi.h> 10 11 #include <linux/clk.h> 11 12 #include <linux/dma-mapping.h> 13 + #include <linux/iopoll.h> 12 14 #include <linux/kernel.h> 13 15 #include <linux/module.h> 14 16 #include <linux/of.h> 17 + #include <linux/of_device.h> 15 18 #include <linux/sizes.h> 16 19 17 20 #include "sdhci-pltfm.h" ··· 24 21 /* DWCMSHC specific Mode Select value */ 25 22 #define DWCMSHC_CTRL_HS400 0x7 26 23 24 + /* DWC IP vendor area 1 pointer */ 25 + #define DWCMSHC_P_VENDOR_AREA1 0xe8 26 + #define DWCMSHC_AREA1_MASK GENMASK(11, 0) 27 + /* Offset inside the vendor area 1 */ 28 + #define DWCMSHC_HOST_CTRL3 0x8 29 + #define DWCMSHC_EMMC_CONTROL 0x2c 30 + #define DWCMSHC_ENHANCED_STROBE BIT(8) 31 + #define DWCMSHC_EMMC_ATCTRL 0x40 32 + 33 + /* Rockchip specific Registers */ 34 + #define DWCMSHC_EMMC_DLL_CTRL 0x800 35 + #define DWCMSHC_EMMC_DLL_RXCLK 0x804 36 + #define DWCMSHC_EMMC_DLL_TXCLK 0x808 37 + #define DWCMSHC_EMMC_DLL_STRBIN 0x80c 38 + #define DLL_STRBIN_TAPNUM_FROM_SW BIT(24) 39 + #define DWCMSHC_EMMC_DLL_STATUS0 0x840 40 + #define DWCMSHC_EMMC_DLL_START BIT(0) 41 + #define DWCMSHC_EMMC_DLL_LOCKED BIT(8) 42 + #define DWCMSHC_EMMC_DLL_TIMEOUT BIT(9) 43 + #define DWCMSHC_EMMC_DLL_RXCLK_SRCSEL 29 44 + #define DWCMSHC_EMMC_DLL_START_POINT 16 45 + #define DWCMSHC_EMMC_DLL_INC 8 46 + #define DWCMSHC_EMMC_DLL_DLYENA BIT(27) 47 + #define DLL_TXCLK_TAPNUM_DEFAULT 0x8 48 + #define DLL_STRBIN_TAPNUM_DEFAULT 0x8 49 + #define DLL_TXCLK_TAPNUM_FROM_SW BIT(24) 50 + #define DLL_RXCLK_NO_INVERTER 1 51 + #define DLL_RXCLK_INVERTER 0 52 + #define DLL_LOCK_WO_TMOUT(x) \ 53 + ((((x) & DWCMSHC_EMMC_DLL_LOCKED) == DWCMSHC_EMMC_DLL_LOCKED) && \ 54 + (((x) & DWCMSHC_EMMC_DLL_TIMEOUT) == 0)) 55 + #define RK3568_MAX_CLKS 3 56 + 27 57 #define BOUNDARY_OK(addr, len) \ 28 58 ((addr | (SZ_128M - 1)) == ((addr + len - 1) | (SZ_128M - 1))) 29 59 60 + struct rk3568_priv { 61 + /* Rockchip specified optional clocks */ 62 + 
struct clk_bulk_data rockchip_clks[RK3568_MAX_CLKS]; 63 + u8 txclk_tapnum; 64 + }; 65 + 30 66 struct dwcmshc_priv { 31 67 struct clk *bus_clk; 68 + int vendor_specific_area1; /* P_VENDOR_SPECIFIC_AREA reg */ 69 + void *priv; /* pointer to SoC private stuff */ 32 70 }; 33 71 34 72 /* ··· 93 49 addr += tmplen; 94 50 len -= tmplen; 95 51 sdhci_adma_write_desc(host, desc, addr, len, cmd); 52 + } 53 + 54 + static unsigned int dwcmshc_get_max_clock(struct sdhci_host *host) 55 + { 56 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 57 + 58 + if (pltfm_host->clk) 59 + return sdhci_pltfm_clk_get_max_clock(host); 60 + else 61 + return pltfm_host->clock; 96 62 } 97 63 98 64 static void dwcmshc_check_auto_cmd23(struct mmc_host *mmc, ··· 154 100 sdhci_writew(host, ctrl_2, SDHCI_HOST_CONTROL2); 155 101 } 156 102 103 + static void dwcmshc_hs400_enhanced_strobe(struct mmc_host *mmc, 104 + struct mmc_ios *ios) 105 + { 106 + u32 vendor; 107 + struct sdhci_host *host = mmc_priv(mmc); 108 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 109 + struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host); 110 + int reg = priv->vendor_specific_area1 + DWCMSHC_EMMC_CONTROL; 111 + 112 + vendor = sdhci_readl(host, reg); 113 + if (ios->enhanced_strobe) 114 + vendor |= DWCMSHC_ENHANCED_STROBE; 115 + else 116 + vendor &= ~DWCMSHC_ENHANCED_STROBE; 117 + 118 + sdhci_writel(host, vendor, reg); 119 + } 120 + 121 + static void dwcmshc_rk3568_set_clock(struct sdhci_host *host, unsigned int clock) 122 + { 123 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 124 + struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host); 125 + struct rk3568_priv *priv = dwc_priv->priv; 126 + u8 txclk_tapnum = DLL_TXCLK_TAPNUM_DEFAULT; 127 + u32 extra, reg; 128 + int err; 129 + 130 + host->mmc->actual_clock = 0; 131 + 132 + /* 133 + * DO NOT TOUCH THIS SETTING. RX clk inverter unit is enabled 134 + * by default, but it shouldn't be enabled. 
We should anyway 135 + * disable it before issuing any cmds. 136 + */ 137 + extra = DWCMSHC_EMMC_DLL_DLYENA | 138 + DLL_RXCLK_NO_INVERTER << DWCMSHC_EMMC_DLL_RXCLK_SRCSEL; 139 + sdhci_writel(host, extra, DWCMSHC_EMMC_DLL_RXCLK); 140 + 141 + if (clock == 0) 142 + return; 143 + 144 + /* Rockchip platform only support 375KHz for identify mode */ 145 + if (clock <= 400000) 146 + clock = 375000; 147 + 148 + err = clk_set_rate(pltfm_host->clk, clock); 149 + if (err) 150 + dev_err(mmc_dev(host->mmc), "fail to set clock %d", clock); 151 + 152 + sdhci_set_clock(host, clock); 153 + 154 + /* Disable cmd conflict check */ 155 + reg = dwc_priv->vendor_specific_area1 + DWCMSHC_HOST_CTRL3; 156 + extra = sdhci_readl(host, reg); 157 + extra &= ~BIT(0); 158 + sdhci_writel(host, extra, reg); 159 + 160 + if (clock <= 400000) { 161 + /* Disable DLL to reset sample clock */ 162 + sdhci_writel(host, 0, DWCMSHC_EMMC_DLL_CTRL); 163 + return; 164 + } 165 + 166 + /* Reset DLL */ 167 + sdhci_writel(host, BIT(1), DWCMSHC_EMMC_DLL_CTRL); 168 + udelay(1); 169 + sdhci_writel(host, 0x0, DWCMSHC_EMMC_DLL_CTRL); 170 + 171 + /* Init DLL settings */ 172 + extra = 0x5 << DWCMSHC_EMMC_DLL_START_POINT | 173 + 0x2 << DWCMSHC_EMMC_DLL_INC | 174 + DWCMSHC_EMMC_DLL_START; 175 + sdhci_writel(host, extra, DWCMSHC_EMMC_DLL_CTRL); 176 + err = readl_poll_timeout(host->ioaddr + DWCMSHC_EMMC_DLL_STATUS0, 177 + extra, DLL_LOCK_WO_TMOUT(extra), 1, 178 + 500 * USEC_PER_MSEC); 179 + if (err) { 180 + dev_err(mmc_dev(host->mmc), "DLL lock timeout!\n"); 181 + return; 182 + } 183 + 184 + extra = 0x1 << 16 | /* tune clock stop en */ 185 + 0x2 << 17 | /* pre-change delay */ 186 + 0x3 << 19; /* post-change delay */ 187 + sdhci_writel(host, extra, dwc_priv->vendor_specific_area1 + DWCMSHC_EMMC_ATCTRL); 188 + 189 + if (host->mmc->ios.timing == MMC_TIMING_MMC_HS200 || 190 + host->mmc->ios.timing == MMC_TIMING_MMC_HS400) 191 + txclk_tapnum = priv->txclk_tapnum; 192 + 193 + extra = DWCMSHC_EMMC_DLL_DLYENA | 194 + 
DLL_TXCLK_TAPNUM_FROM_SW | 195 + txclk_tapnum; 196 + sdhci_writel(host, extra, DWCMSHC_EMMC_DLL_TXCLK); 197 + 198 + extra = DWCMSHC_EMMC_DLL_DLYENA | 199 + DLL_STRBIN_TAPNUM_DEFAULT | 200 + DLL_STRBIN_TAPNUM_FROM_SW; 201 + sdhci_writel(host, extra, DWCMSHC_EMMC_DLL_STRBIN); 202 + } 203 + 157 204 static const struct sdhci_ops sdhci_dwcmshc_ops = { 158 205 .set_clock = sdhci_set_clock, 206 + .set_bus_width = sdhci_set_bus_width, 207 + .set_uhs_signaling = dwcmshc_set_uhs_signaling, 208 + .get_max_clock = dwcmshc_get_max_clock, 209 + .reset = sdhci_reset, 210 + .adma_write_desc = dwcmshc_adma_write_desc, 211 + }; 212 + 213 + static const struct sdhci_ops sdhci_dwcmshc_rk3568_ops = { 214 + .set_clock = dwcmshc_rk3568_set_clock, 159 215 .set_bus_width = sdhci_set_bus_width, 160 216 .set_uhs_signaling = dwcmshc_set_uhs_signaling, 161 217 .get_max_clock = sdhci_pltfm_clk_get_max_clock, ··· 279 115 .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 280 116 }; 281 117 118 + static const struct sdhci_pltfm_data sdhci_dwcmshc_rk3568_pdata = { 119 + .ops = &sdhci_dwcmshc_rk3568_ops, 120 + .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN | 121 + SDHCI_QUIRK_BROKEN_TIMEOUT_VAL, 122 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 123 + SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN, 124 + }; 125 + 126 + static int dwcmshc_rk3568_init(struct sdhci_host *host, struct dwcmshc_priv *dwc_priv) 127 + { 128 + int err; 129 + struct rk3568_priv *priv = dwc_priv->priv; 130 + 131 + priv->rockchip_clks[0].id = "axi"; 132 + priv->rockchip_clks[1].id = "block"; 133 + priv->rockchip_clks[2].id = "timer"; 134 + err = devm_clk_bulk_get_optional(mmc_dev(host->mmc), RK3568_MAX_CLKS, 135 + priv->rockchip_clks); 136 + if (err) { 137 + dev_err(mmc_dev(host->mmc), "failed to get clocks %d\n", err); 138 + return err; 139 + } 140 + 141 + err = clk_bulk_prepare_enable(RK3568_MAX_CLKS, priv->rockchip_clks); 142 + if (err) { 143 + dev_err(mmc_dev(host->mmc), "failed to enable clocks %d\n", err); 144 + return err; 145 + } 146 + 
147 + if (of_property_read_u8(mmc_dev(host->mmc)->of_node, "rockchip,txclk-tapnum", 148 + &priv->txclk_tapnum)) 149 + priv->txclk_tapnum = DLL_TXCLK_TAPNUM_DEFAULT; 150 + 151 + /* Disable cmd conflict check */ 152 + sdhci_writel(host, 0x0, dwc_priv->vendor_specific_area1 + DWCMSHC_HOST_CTRL3); 153 + /* Reset previous settings */ 154 + sdhci_writel(host, 0, DWCMSHC_EMMC_DLL_TXCLK); 155 + sdhci_writel(host, 0, DWCMSHC_EMMC_DLL_STRBIN); 156 + 157 + return 0; 158 + } 159 + 160 + static const struct of_device_id sdhci_dwcmshc_dt_ids[] = { 161 + { 162 + .compatible = "rockchip,rk3568-dwcmshc", 163 + .data = &sdhci_dwcmshc_rk3568_pdata, 164 + }, 165 + { 166 + .compatible = "snps,dwcmshc-sdhci", 167 + .data = &sdhci_dwcmshc_pdata, 168 + }, 169 + {}, 170 + }; 171 + MODULE_DEVICE_TABLE(of, sdhci_dwcmshc_dt_ids); 172 + 173 + #ifdef CONFIG_ACPI 174 + static const struct acpi_device_id sdhci_dwcmshc_acpi_ids[] = { 175 + { .id = "MLNXBF30" }, 176 + {} 177 + }; 178 + #endif 179 + 282 180 static int dwcmshc_probe(struct platform_device *pdev) 283 181 { 182 + struct device *dev = &pdev->dev; 284 183 struct sdhci_pltfm_host *pltfm_host; 285 184 struct sdhci_host *host; 286 185 struct dwcmshc_priv *priv; 186 + struct rk3568_priv *rk_priv = NULL; 187 + const struct sdhci_pltfm_data *pltfm_data; 287 188 int err; 288 189 u32 extra; 289 190 290 - host = sdhci_pltfm_init(pdev, &sdhci_dwcmshc_pdata, 191 + pltfm_data = of_device_get_match_data(&pdev->dev); 192 + if (!pltfm_data) { 193 + dev_err(&pdev->dev, "Error: No device match data found\n"); 194 + return -ENODEV; 195 + } 196 + 197 + host = sdhci_pltfm_init(pdev, pltfm_data, 291 198 sizeof(struct dwcmshc_priv)); 292 199 if (IS_ERR(host)) 293 200 return PTR_ERR(host); ··· 366 131 /* 367 132 * extra adma table cnt for cross 128M boundary handling. 
368 133 */ 369 - extra = DIV_ROUND_UP_ULL(dma_get_required_mask(&pdev->dev), SZ_128M); 134 + extra = DIV_ROUND_UP_ULL(dma_get_required_mask(dev), SZ_128M); 370 135 if (extra > SDHCI_MAX_SEGS) 371 136 extra = SDHCI_MAX_SEGS; 372 137 host->adma_table_cnt += extra; ··· 374 139 pltfm_host = sdhci_priv(host); 375 140 priv = sdhci_pltfm_priv(pltfm_host); 376 141 377 - pltfm_host->clk = devm_clk_get(&pdev->dev, "core"); 378 - if (IS_ERR(pltfm_host->clk)) { 379 - err = PTR_ERR(pltfm_host->clk); 380 - dev_err(&pdev->dev, "failed to get core clk: %d\n", err); 381 - goto free_pltfm; 382 - } 383 - err = clk_prepare_enable(pltfm_host->clk); 384 - if (err) 385 - goto free_pltfm; 142 + if (dev->of_node) { 143 + pltfm_host->clk = devm_clk_get(dev, "core"); 144 + if (IS_ERR(pltfm_host->clk)) { 145 + err = PTR_ERR(pltfm_host->clk); 146 + dev_err(dev, "failed to get core clk: %d\n", err); 147 + goto free_pltfm; 148 + } 149 + err = clk_prepare_enable(pltfm_host->clk); 150 + if (err) 151 + goto free_pltfm; 386 152 387 - priv->bus_clk = devm_clk_get(&pdev->dev, "bus"); 388 - if (!IS_ERR(priv->bus_clk)) 389 - clk_prepare_enable(priv->bus_clk); 153 + priv->bus_clk = devm_clk_get(dev, "bus"); 154 + if (!IS_ERR(priv->bus_clk)) 155 + clk_prepare_enable(priv->bus_clk); 156 + } 390 157 391 158 err = mmc_of_parse(host->mmc); 392 159 if (err) ··· 396 159 397 160 sdhci_get_of_property(pdev); 398 161 162 + priv->vendor_specific_area1 = 163 + sdhci_readl(host, DWCMSHC_P_VENDOR_AREA1) & DWCMSHC_AREA1_MASK; 164 + 399 165 host->mmc_host_ops.request = dwcmshc_request; 166 + host->mmc_host_ops.hs400_enhanced_strobe = dwcmshc_hs400_enhanced_strobe; 167 + 168 + if (pltfm_data == &sdhci_dwcmshc_rk3568_pdata) { 169 + rk_priv = devm_kzalloc(&pdev->dev, sizeof(struct rk3568_priv), GFP_KERNEL); 170 + if (!rk_priv) { 171 + err = -ENOMEM; 172 + goto err_clk; 173 + } 174 + 175 + priv->priv = rk_priv; 176 + 177 + err = dwcmshc_rk3568_init(host, priv); 178 + if (err) 179 + goto err_clk; 180 + } 181 + 182 + 
host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY; 400 183 401 184 err = sdhci_add_host(host); 402 185 if (err) ··· 427 170 err_clk: 428 171 clk_disable_unprepare(pltfm_host->clk); 429 172 clk_disable_unprepare(priv->bus_clk); 173 + if (rk_priv) 174 + clk_bulk_disable_unprepare(RK3568_MAX_CLKS, 175 + rk_priv->rockchip_clks); 430 176 free_pltfm: 431 177 sdhci_pltfm_free(pdev); 432 178 return err; ··· 440 180 struct sdhci_host *host = platform_get_drvdata(pdev); 441 181 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 442 182 struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host); 183 + struct rk3568_priv *rk_priv = priv->priv; 443 184 444 185 sdhci_remove_host(host, 0); 445 186 446 187 clk_disable_unprepare(pltfm_host->clk); 447 188 clk_disable_unprepare(priv->bus_clk); 448 - 189 + if (rk_priv) 190 + clk_bulk_disable_unprepare(RK3568_MAX_CLKS, 191 + rk_priv->rockchip_clks); 449 192 sdhci_pltfm_free(pdev); 450 193 451 194 return 0; ··· 460 197 struct sdhci_host *host = dev_get_drvdata(dev); 461 198 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 462 199 struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host); 200 + struct rk3568_priv *rk_priv = priv->priv; 463 201 int ret; 464 202 465 203 ret = sdhci_suspend_host(host); ··· 471 207 if (!IS_ERR(priv->bus_clk)) 472 208 clk_disable_unprepare(priv->bus_clk); 473 209 210 + if (rk_priv) 211 + clk_bulk_disable_unprepare(RK3568_MAX_CLKS, 212 + rk_priv->rockchip_clks); 213 + 474 214 return ret; 475 215 } 476 216 ··· 483 215 struct sdhci_host *host = dev_get_drvdata(dev); 484 216 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 485 217 struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host); 218 + struct rk3568_priv *rk_priv = priv->priv; 486 219 int ret; 487 220 488 221 ret = clk_prepare_enable(pltfm_host->clk); ··· 496 227 return ret; 497 228 } 498 229 230 + if (rk_priv) { 231 + ret = clk_bulk_prepare_enable(RK3568_MAX_CLKS, 232 + rk_priv->rockchip_clks); 233 + if (ret) 234 + return ret; 235 + } 236 + 499 
237 return sdhci_resume_host(host); 500 238 } 501 239 #endif 502 240 503 241 static SIMPLE_DEV_PM_OPS(dwcmshc_pmops, dwcmshc_suspend, dwcmshc_resume); 504 242 505 - static const struct of_device_id sdhci_dwcmshc_dt_ids[] = { 506 - { .compatible = "snps,dwcmshc-sdhci" }, 507 - {} 508 - }; 509 - MODULE_DEVICE_TABLE(of, sdhci_dwcmshc_dt_ids); 510 - 511 243 static struct platform_driver sdhci_dwcmshc_driver = { 512 244 .driver = { 513 245 .name = "sdhci-dwcmshc", 514 246 .probe_type = PROBE_PREFER_ASYNCHRONOUS, 515 247 .of_match_table = sdhci_dwcmshc_dt_ids, 248 + .acpi_match_table = ACPI_PTR(sdhci_dwcmshc_acpi_ids), 516 249 .pm = &dwcmshc_pmops, 517 250 }, 518 251 .probe = dwcmshc_probe,
+1 -1
drivers/mmc/host/sdhci-of-esdhc.c
···
 	if (ret)
 		goto err;
 
-	mmc_of_parse_voltage(np, &host->ocr_mask);
+	mmc_of_parse_voltage(host->mmc, &host->ocr_mask);
 
 	ret = sdhci_add_host(host);
 	if (ret)
+30 -1
drivers/mmc/host/sdhci-pci-core.c
···
 	int drv_strength;
 	bool d3_retune;
 	bool rpm_retune_ok;
+	bool needs_pwr_off;
 	u32 glk_rx_ctrl1;
 	u32 glk_tun_val;
 	u32 active_ltr;
···
 static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
 				  unsigned short vdd)
 {
+	struct sdhci_pci_slot *slot = sdhci_priv(host);
+	struct intel_host *intel_host = sdhci_pci_priv(slot);
 	int cntr;
 	u8 reg;
+
+	/*
+	 * Bus power may control card power, but a full reset still may not
+	 * reset the power, whereas a direct write to SDHCI_POWER_CONTROL can.
+	 * That might be needed to initialize correctly, if the card was left
+	 * powered on previously.
+	 */
+	if (intel_host->needs_pwr_off) {
+		intel_host->needs_pwr_off = false;
+		if (mode != MMC_POWER_OFF) {
+			sdhci_writeb(host, 0, SDHCI_POWER_CONTROL);
+			usleep_range(10000, 12500);
+		}
+	}
 
 	sdhci_set_power(host, mode, vdd);
 
···
 		slot->host->mmc->caps2 |= MMC_CAP2_CQE;
 
 	if (slot->chip->pdev->device != PCI_DEVICE_ID_INTEL_GLK_EMMC) {
-		slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES,
+		slot->host->mmc->caps2 |= MMC_CAP2_HS400_ES;
 		slot->host->mmc_host_ops.hs400_enhanced_strobe =
 						intel_hs400_enhanced_strobe;
 		slot->host->mmc->caps2 |= MMC_CAP2_CQE_DCMD;
···
 	return 0;
 }
 
+static void byt_needs_pwr_off(struct sdhci_pci_slot *slot)
+{
+	struct intel_host *intel_host = sdhci_pci_priv(slot);
+	u8 reg = sdhci_readb(slot->host, SDHCI_POWER_CONTROL);
+
+	intel_host->needs_pwr_off = reg & SDHCI_POWER_ON;
+}
+
 static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
 {
 	byt_probe_slot(slot);
···
 	if (slot->chip->pdev->subsystem_vendor == PCI_VENDOR_ID_NI &&
 	    slot->chip->pdev->subsystem_device == PCI_SUBDEVICE_ID_NI_78E3)
 		slot->host->mmc->caps2 |= MMC_CAP2_AVOID_3_3V;
+
+	byt_needs_pwr_off(slot);
 
 	return 0;
 }
···
 	SDHCI_PCI_DEVICE(INTEL, CMLH_SD, intel_byt_sd),
 	SDHCI_PCI_DEVICE(INTEL, JSL_EMMC, intel_glk_emmc),
 	SDHCI_PCI_DEVICE(INTEL, JSL_SD, intel_byt_sd),
+	SDHCI_PCI_DEVICE(INTEL, LKF_EMMC, intel_glk_emmc),
+	SDHCI_PCI_DEVICE(INTEL, LKF_SD, intel_byt_sd),
 	SDHCI_PCI_DEVICE(O2, 8120, o2),
 	SDHCI_PCI_DEVICE(O2, 8220, o2),
 	SDHCI_PCI_DEVICE(O2, 8221, o2),
+43 -3
drivers/mmc/host/sdhci-pci-gli.c
···
 #define   GLI_9750_WT_EN_ON	    0x1
 #define   GLI_9750_WT_EN_OFF	    0x0
 
+#define SDHCI_GLI_9750_CFG2          0x848
+#define   SDHCI_GLI_9750_CFG2_L1DLY    GENMASK(28, 24)
+#define   GLI_9750_CFG2_L1DLY_VALUE    0x1F
+
 #define SDHCI_GLI_9750_DRIVING      0x860
 #define   SDHCI_GLI_9750_DRIVING_1    GENMASK(11, 0)
 #define   SDHCI_GLI_9750_DRIVING_2    GENMASK(27, 26)
···
 
 #define PCIE_GLI_9763E_CFG2      0x8A4
 #define   GLI_9763E_CFG2_L1DLY     GENMASK(28, 19)
-#define   GLI_9763E_CFG2_L1DLY_MAX 0x3FF
+#define   GLI_9763E_CFG2_L1DLY_MID 0x50
 
 #define PCIE_GLI_9763E_MMC_CTRL  0x960
 #define   GLI_9763E_HS400_SLOW     BIT(3)
···
 #define   PCI_GLI_9755_LFCLK    GENMASK(14, 12)
 #define   PCI_GLI_9755_DMACLK   BIT(29)
 
+#define PCI_GLI_9755_CFG2          0x48
+#define   PCI_GLI_9755_CFG2_L1DLY    GENMASK(28, 24)
+#define   GLI_9755_CFG2_L1DLY_VALUE  0x1F
+
 #define PCI_GLI_9755_PLL            0x64
 #define   PCI_GLI_9755_PLL_LDIV       GENMASK(9, 0)
 #define   PCI_GLI_9755_PLL_PDIV       GENMASK(14, 12)
···
 
 #define PCI_GLI_9755_PLLSSC        0x68
 #define   PCI_GLI_9755_PLLSSC_PPM    GENMASK(15, 0)
+
+#define PCI_GLI_9755_SerDes  0x70
+#define PCI_GLI_9755_SCP_DIS   BIT(19)
 
 #define GLI_MAX_TUNING_LOOP 40
···
 	sdhci_enable_clk(host, clk);
 }
 
+static void gl9750_hw_setting(struct sdhci_host *host)
+{
+	u32 value;
+
+	gl9750_wt_on(host);
+
+	value = sdhci_readl(host, SDHCI_GLI_9750_CFG2);
+	value &= ~SDHCI_GLI_9750_CFG2_L1DLY;
+	/* set ASPM L1 entry delay to 7.9us */
+	value |= FIELD_PREP(SDHCI_GLI_9750_CFG2_L1DLY,
+			    GLI_9750_CFG2_L1DLY_VALUE);
+	sdhci_writel(host, value, SDHCI_GLI_9750_CFG2);
+
+	gl9750_wt_off(host);
+}
+
 static void gli_pcie_enable_msi(struct sdhci_pci_slot *slot)
 {
 	int ret;
···
 	value &= ~PCI_GLI_9755_DMACLK;
 	pci_write_config_dword(pdev, PCI_GLI_9755_PECONF, value);
 
+	/* enable short circuit protection */
+	pci_read_config_dword(pdev, PCI_GLI_9755_SerDes, &value);
+	value &= ~PCI_GLI_9755_SCP_DIS;
+	pci_write_config_dword(pdev, PCI_GLI_9755_SerDes, value);
+
+	pci_read_config_dword(pdev, PCI_GLI_9755_CFG2, &value);
+	value &= ~PCI_GLI_9755_CFG2_L1DLY;
+	/* set ASPM L1 entry delay to 7.9us */
+	value |= FIELD_PREP(PCI_GLI_9755_CFG2_L1DLY,
+			    GLI_9755_CFG2_L1DLY_VALUE);
+	pci_write_config_dword(pdev, PCI_GLI_9755_CFG2, value);
+
 	gl9755_wt_off(pdev);
 }
···
 {
 	struct sdhci_host *host = slot->host;
 
+	gl9750_hw_setting(host);
 	gli_pcie_enable_msi(slot);
 	slot->host->mmc->caps2 |= MMC_CAP2_NO_SDIO;
 	sdhci_enable_v4_mode(host);
···
 
 	pci_read_config_dword(pdev, PCIE_GLI_9763E_CFG2, &value);
 	value &= ~GLI_9763E_CFG2_L1DLY;
-	/* set ASPM L1 entry delay to 260us */
-	value |= FIELD_PREP(GLI_9763E_CFG2_L1DLY, GLI_9763E_CFG2_L1DLY_MAX);
+	/* set ASPM L1 entry delay to 20us */
+	value |= FIELD_PREP(GLI_9763E_CFG2_L1DLY, GLI_9763E_CFG2_L1DLY_MID);
 	pci_write_config_dword(pdev, PCIE_GLI_9763E_CFG2, value);
 
 	pci_read_config_dword(pdev, PCIE_GLI_9763E_CLKRXDLY, &value);
+8
drivers/mmc/host/sdhci-pci-o2micro.c
···
 	ret = pci_read_config_dword(chip->pdev,
 				    O2_SD_FUNC_REG0,
 				    &scratch_32);
+	if (ret)
+		return ret;
 	scratch_32 = ((scratch_32 & 0xFF000000) >> 24);
 
 	/* Check Whether subId is 0x11 or 0x12 */
···
 	ret = pci_read_config_dword(chip->pdev,
 				    O2_SD_FUNC_REG4,
 				    &scratch_32);
+	if (ret)
+		return ret;
 
 	/* Enable Base Clk setting change */
 	scratch_32 |= O2_SD_FREG4_ENABLE_CLK_SET;
···
 
 	ret = pci_read_config_dword(chip->pdev,
 				    O2_SD_PLL_SETTING, &scratch_32);
+	if (ret)
+		return ret;
 
 	if ((scratch_32 & 0xff000000) == 0x01000000) {
 		scratch_32 &= 0x0000FFFF;
···
 	ret = pci_read_config_dword(chip->pdev,
 				    O2_SD_FUNC_REG4,
 				    &scratch_32);
+	if (ret)
+		return ret;
 	scratch_32 |= (1 << 22);
 	pci_write_config_dword(chip->pdev,
 			       O2_SD_FUNC_REG4, scratch_32);
+2
drivers/mmc/host/sdhci-pci.h
···
 #define PCI_DEVICE_ID_INTEL_CMLH_SD	0x06f5
 #define PCI_DEVICE_ID_INTEL_JSL_EMMC	0x4dc4
 #define PCI_DEVICE_ID_INTEL_JSL_SD	0x4df8
+#define PCI_DEVICE_ID_INTEL_LKF_EMMC	0x98c4
+#define PCI_DEVICE_ID_INTEL_LKF_SD	0x98f8
 
 #define PCI_DEVICE_ID_SYSKONNECT_8000	0x8000
 #define PCI_DEVICE_ID_VIA_95D0		0x95d0
+8 -14
drivers/mmc/host/sdhci-s3c.c
···
 #include <linux/gpio.h>
 #include <linux/module.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 #include <linux/of_gpio.h>
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
···
 };
 
 /**
- * struct sdhci_s3c_driver_data - S3C SDHCI platform specific driver data
+ * struct sdhci_s3c_drv_data - S3C SDHCI platform specific driver data
  * @sdhci_quirks: sdhci host specific quirks.
  * @no_divider: no or non-standard internal clock divider.
  *
···
 }
 #endif
 
-#ifdef CONFIG_OF
-static const struct of_device_id sdhci_s3c_dt_match[];
-#endif
-
-static inline struct sdhci_s3c_drv_data *sdhci_s3c_get_driver_data(
+static inline const struct sdhci_s3c_drv_data *sdhci_s3c_get_driver_data(
 			struct platform_device *pdev)
 {
 #ifdef CONFIG_OF
-	if (pdev->dev.of_node) {
-		const struct of_device_id *match;
-		match = of_match_node(sdhci_s3c_dt_match, pdev->dev.of_node);
-		return (struct sdhci_s3c_drv_data *)match->data;
-	}
+	if (pdev->dev.of_node)
+		return of_device_get_match_data(&pdev->dev);
 #endif
-	return (struct sdhci_s3c_drv_data *)
+	return (const struct sdhci_s3c_drv_data *)
 			platform_get_device_id(pdev)->driver_data;
 }
 
 static int sdhci_s3c_probe(struct platform_device *pdev)
 {
 	struct s3c_sdhci_platdata *pdata;
-	struct sdhci_s3c_drv_data *drv_data;
+	const struct sdhci_s3c_drv_data *drv_data;
 	struct device *dev = &pdev->dev;
 	struct sdhci_host *host;
 	struct sdhci_s3c *sc;
···
 MODULE_DEVICE_TABLE(platform, sdhci_s3c_driver_ids);
 
 #ifdef CONFIG_OF
-static struct sdhci_s3c_drv_data exynos4_sdhci_drv_data = {
+static const struct sdhci_s3c_drv_data exynos4_sdhci_drv_data = {
 	.no_divider = true,
 };
+8 -15
drivers/mmc/host/sdhci-st.c
···
 	if (IS_ERR(icnclk))
 		icnclk = NULL;
 
-	rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+	rstc = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL);
 	if (IS_ERR(rstc))
-		rstc = NULL;
-	else
-		reset_control_deassert(rstc);
+		return PTR_ERR(rstc);
+	reset_control_deassert(rstc);
 
 	host = sdhci_pltfm_init(pdev, &sdhci_st_pdata, sizeof(*pdata));
 	if (IS_ERR(host)) {
···
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
 					   "top-mmc-delay");
 	pdata->top_ioaddr = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(pdata->top_ioaddr)) {
-		dev_warn(&pdev->dev, "FlashSS Top Dly registers not available");
+	if (IS_ERR(pdata->top_ioaddr))
 		pdata->top_ioaddr = NULL;
-	}
 
 	pltfm_host->clk = clk;
 	pdata->icnclk = icnclk;
···
 err_of:
 	sdhci_pltfm_free(pdev);
 err_pltfm_init:
-	if (rstc)
-		reset_control_assert(rstc);
+	reset_control_assert(rstc);
 
 	return ret;
 }
···
 
 	clk_disable_unprepare(pdata->icnclk);
 
-	if (rstc)
-		reset_control_assert(rstc);
+	reset_control_assert(rstc);
 
 	return ret;
 }
···
 	if (ret)
 		goto out;
 
-	if (pdata->rstc)
-		reset_control_assert(pdata->rstc);
+	reset_control_assert(pdata->rstc);
 
 	clk_disable_unprepare(pdata->icnclk);
 	clk_disable_unprepare(pltfm_host->clk);
···
 		return ret;
 	}
 
-	if (pdata->rstc)
-		reset_control_deassert(pdata->rstc);
+	reset_control_deassert(pdata->rstc);
 
 	st_mmcss_cconfig(np, host);
+49 -17
drivers/mmc/host/sdhci-tegra.c
··· 119 119 /* SDMMC CQE Base Address for Tegra Host Ver 4.1 and Higher */ 120 120 #define SDHCI_TEGRA_CQE_BASE_ADDR 0xF000 121 121 122 + #define SDHCI_TEGRA_CQE_TRNS_MODE (SDHCI_TRNS_MULTI | \ 123 + SDHCI_TRNS_BLK_CNT_EN | \ 124 + SDHCI_TRNS_DMA) 125 + 122 126 struct sdhci_tegra_soc_data { 123 127 const struct sdhci_pltfm_data *pdata; 124 128 u64 dma_mask; ··· 600 596 &tegra_host->autocal_offsets; 601 597 int err; 602 598 603 - err = device_property_read_u32(host->mmc->parent, 599 + err = device_property_read_u32(mmc_dev(host->mmc), 604 600 "nvidia,pad-autocal-pull-up-offset-3v3", 605 601 &autocal->pull_up_3v3); 606 602 if (err) 607 603 autocal->pull_up_3v3 = 0; 608 604 609 - err = device_property_read_u32(host->mmc->parent, 605 + err = device_property_read_u32(mmc_dev(host->mmc), 610 606 "nvidia,pad-autocal-pull-down-offset-3v3", 611 607 &autocal->pull_down_3v3); 612 608 if (err) 613 609 autocal->pull_down_3v3 = 0; 614 610 615 - err = device_property_read_u32(host->mmc->parent, 611 + err = device_property_read_u32(mmc_dev(host->mmc), 616 612 "nvidia,pad-autocal-pull-up-offset-1v8", 617 613 &autocal->pull_up_1v8); 618 614 if (err) 619 615 autocal->pull_up_1v8 = 0; 620 616 621 - err = device_property_read_u32(host->mmc->parent, 617 + err = device_property_read_u32(mmc_dev(host->mmc), 622 618 "nvidia,pad-autocal-pull-down-offset-1v8", 623 619 &autocal->pull_down_1v8); 624 620 if (err) 625 621 autocal->pull_down_1v8 = 0; 626 622 627 - err = device_property_read_u32(host->mmc->parent, 623 + err = device_property_read_u32(mmc_dev(host->mmc), 628 624 "nvidia,pad-autocal-pull-up-offset-sdr104", 629 625 &autocal->pull_up_sdr104); 630 626 if (err) 631 627 autocal->pull_up_sdr104 = autocal->pull_up_1v8; 632 628 633 - err = device_property_read_u32(host->mmc->parent, 629 + err = device_property_read_u32(mmc_dev(host->mmc), 634 630 "nvidia,pad-autocal-pull-down-offset-sdr104", 635 631 &autocal->pull_down_sdr104); 636 632 if (err) 637 633 autocal->pull_down_sdr104 = 
autocal->pull_down_1v8; 638 634 639 - err = device_property_read_u32(host->mmc->parent, 635 + err = device_property_read_u32(mmc_dev(host->mmc), 640 636 "nvidia,pad-autocal-pull-up-offset-hs400", 641 637 &autocal->pull_up_hs400); 642 638 if (err) 643 639 autocal->pull_up_hs400 = autocal->pull_up_1v8; 644 640 645 - err = device_property_read_u32(host->mmc->parent, 641 + err = device_property_read_u32(mmc_dev(host->mmc), 646 642 "nvidia,pad-autocal-pull-down-offset-hs400", 647 643 &autocal->pull_down_hs400); 648 644 if (err) ··· 657 653 if (!(tegra_host->soc_data->nvquirks & NVQUIRK_NEEDS_PAD_CONTROL)) 658 654 return; 659 655 660 - err = device_property_read_u32(host->mmc->parent, 656 + err = device_property_read_u32(mmc_dev(host->mmc), 661 657 "nvidia,pad-autocal-pull-up-offset-3v3-timeout", 662 658 &autocal->pull_up_3v3_timeout); 663 659 if (err) { ··· 668 664 autocal->pull_up_3v3_timeout = 0; 669 665 } 670 666 671 - err = device_property_read_u32(host->mmc->parent, 667 + err = device_property_read_u32(mmc_dev(host->mmc), 672 668 "nvidia,pad-autocal-pull-down-offset-3v3-timeout", 673 669 &autocal->pull_down_3v3_timeout); 674 670 if (err) { ··· 679 675 autocal->pull_down_3v3_timeout = 0; 680 676 } 681 677 682 - err = device_property_read_u32(host->mmc->parent, 678 + err = device_property_read_u32(mmc_dev(host->mmc), 683 679 "nvidia,pad-autocal-pull-up-offset-1v8-timeout", 684 680 &autocal->pull_up_1v8_timeout); 685 681 if (err) { ··· 690 686 autocal->pull_up_1v8_timeout = 0; 691 687 } 692 688 693 - err = device_property_read_u32(host->mmc->parent, 689 + err = device_property_read_u32(mmc_dev(host->mmc), 694 690 "nvidia,pad-autocal-pull-down-offset-1v8-timeout", 695 691 &autocal->pull_down_1v8_timeout); 696 692 if (err) { ··· 724 720 struct sdhci_tegra *tegra_host = sdhci_pltfm_priv(pltfm_host); 725 721 int err; 726 722 727 - err = device_property_read_u32(host->mmc->parent, "nvidia,default-tap", 723 + err = device_property_read_u32(mmc_dev(host->mmc), 
"nvidia,default-tap", 728 724 &tegra_host->default_tap); 729 725 if (err) 730 726 tegra_host->default_tap = 0; 731 727 732 - err = device_property_read_u32(host->mmc->parent, "nvidia,default-trim", 728 + err = device_property_read_u32(mmc_dev(host->mmc), "nvidia,default-trim", 733 729 &tegra_host->default_trim); 734 730 if (err) 735 731 tegra_host->default_trim = 0; 736 732 737 - err = device_property_read_u32(host->mmc->parent, "nvidia,dqs-trim", 733 + err = device_property_read_u32(mmc_dev(host->mmc), "nvidia,dqs-trim", 738 734 &tegra_host->dqs_trim); 739 735 if (err) 740 736 tegra_host->dqs_trim = 0x11; ··· 745 741 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 746 742 struct sdhci_tegra *tegra_host = sdhci_pltfm_priv(pltfm_host); 747 743 748 - if (device_property_read_bool(host->mmc->parent, "supports-cqe")) 744 + if (device_property_read_bool(mmc_dev(host->mmc), "supports-cqe")) 749 745 tegra_host->enable_hwcq = true; 750 746 else 751 747 tegra_host->enable_hwcq = false; ··· 1160 1156 static void tegra_cqhci_writel(struct cqhci_host *cq_host, u32 val, int reg) 1161 1157 { 1162 1158 struct mmc_host *mmc = cq_host->mmc; 1159 + struct sdhci_host *host = mmc_priv(mmc); 1163 1160 u8 ctrl; 1164 1161 ktime_t timeout; 1165 1162 bool timed_out; ··· 1175 1170 */ 1176 1171 if (reg == CQHCI_CTL && !(val & CQHCI_HALT) && 1177 1172 cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT) { 1173 + sdhci_writew(host, SDHCI_TEGRA_CQE_TRNS_MODE, SDHCI_TRANSFER_MODE); 1178 1174 sdhci_cqe_enable(mmc); 1179 1175 writel(val, cq_host->mmio + reg); 1180 1176 timeout = ktime_add_us(ktime_get(), 50); ··· 1211 1205 static void sdhci_tegra_cqe_enable(struct mmc_host *mmc) 1212 1206 { 1213 1207 struct cqhci_host *cq_host = mmc->cqe_private; 1208 + struct sdhci_host *host = mmc_priv(mmc); 1214 1209 u32 val; 1215 1210 1216 1211 /* ··· 1225 1218 if (val & CQHCI_ENABLE) 1226 1219 cqhci_writel(cq_host, (val & ~CQHCI_ENABLE), 1227 1220 CQHCI_CFG); 1221 + sdhci_writew(host, 
SDHCI_TEGRA_CQE_TRNS_MODE, SDHCI_TRANSFER_MODE); 1228 1222 sdhci_cqe_enable(mmc); 1229 1223 if (val & CQHCI_ENABLE) 1230 1224 cqhci_writel(cq_host, val, CQHCI_CFG); ··· 1289 1281 __sdhci_set_timeout(host, cmd); 1290 1282 } 1291 1283 1284 + static void sdhci_tegra_cqe_pre_enable(struct mmc_host *mmc) 1285 + { 1286 + struct cqhci_host *cq_host = mmc->cqe_private; 1287 + u32 reg; 1288 + 1289 + reg = cqhci_readl(cq_host, CQHCI_CFG); 1290 + reg |= CQHCI_ENABLE; 1291 + cqhci_writel(cq_host, reg, CQHCI_CFG); 1292 + } 1293 + 1294 + static void sdhci_tegra_cqe_post_disable(struct mmc_host *mmc) 1295 + { 1296 + struct cqhci_host *cq_host = mmc->cqe_private; 1297 + struct sdhci_host *host = mmc_priv(mmc); 1298 + u32 reg; 1299 + 1300 + reg = cqhci_readl(cq_host, CQHCI_CFG); 1301 + reg &= ~CQHCI_ENABLE; 1302 + cqhci_writel(cq_host, reg, CQHCI_CFG); 1303 + sdhci_writew(host, 0x0, SDHCI_TRANSFER_MODE); 1304 + } 1305 + 1292 1306 static const struct cqhci_host_ops sdhci_tegra_cqhci_ops = { 1293 1307 .write_l = tegra_cqhci_writel, 1294 1308 .enable = sdhci_tegra_cqe_enable, 1295 1309 .disable = sdhci_cqe_disable, 1296 1310 .dumpregs = sdhci_tegra_dumpregs, 1297 1311 .update_dcmd_desc = sdhci_tegra_update_dcmd_desc, 1312 + .pre_enable = sdhci_tegra_cqe_pre_enable, 1313 + .post_disable = sdhci_tegra_cqe_post_disable, 1298 1314 }; 1299 1315 1300 1316 static int tegra_sdhci_set_dma_mask(struct sdhci_host *host) ··· 1561 1529 1562 1530 host->mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD; 1563 1531 1564 - cq_host = devm_kzalloc(host->mmc->parent, 1532 + cq_host = devm_kzalloc(mmc_dev(host->mmc), 1565 1533 sizeof(*cq_host), GFP_KERNEL); 1566 1534 if (!cq_host) { 1567 1535 ret = -ENOMEM;
+57 -56
drivers/mmc/host/sdhci.c
··· 188 188 if (host->bus_on) 189 189 return; 190 190 host->bus_on = true; 191 - pm_runtime_get_noresume(host->mmc->parent); 191 + pm_runtime_get_noresume(mmc_dev(host->mmc)); 192 192 } 193 193 194 194 static void sdhci_runtime_pm_bus_off(struct sdhci_host *host) ··· 196 196 if (!host->bus_on) 197 197 return; 198 198 host->bus_on = false; 199 - pm_runtime_put_noidle(host->mmc->parent); 199 + pm_runtime_put_noidle(mmc_dev(host->mmc)); 200 200 } 201 201 202 202 void sdhci_reset(struct sdhci_host *host, u8 mask) ··· 648 648 } 649 649 } 650 650 /* Switch ownership to the DMA */ 651 - dma_sync_single_for_device(host->mmc->parent, 651 + dma_sync_single_for_device(mmc_dev(host->mmc), 652 652 host->bounce_addr, 653 653 host->bounce_buffer_size, 654 654 mmc_get_dma_dir(data)); ··· 907 907 908 908 if (data) { 909 909 blksz = data->blksz; 910 - freq = host->mmc->actual_clock ? : host->clock; 910 + freq = mmc->actual_clock ? : host->clock; 911 911 transfer_time = (u64)blksz * NSEC_PER_SEC * (8 / bus_width); 912 912 do_div(transfer_time, freq); 913 913 /* multiply by '2' to account for any unknowns */ ··· 1176 1176 int ret = 0; 1177 1177 struct mmc_host *mmc = host->mmc; 1178 1178 1179 - host->tx_chan = dma_request_chan(mmc->parent, "tx"); 1179 + host->tx_chan = dma_request_chan(mmc_dev(mmc), "tx"); 1180 1180 if (IS_ERR(host->tx_chan)) { 1181 1181 ret = PTR_ERR(host->tx_chan); 1182 1182 if (ret != -EPROBE_DEFER) ··· 1185 1185 return ret; 1186 1186 } 1187 1187 1188 - host->rx_chan = dma_request_chan(mmc->parent, "rx"); 1188 + host->rx_chan = dma_request_chan(mmc_dev(mmc), "rx"); 1189 1189 if (IS_ERR(host->rx_chan)) { 1190 1190 if (host->tx_chan) { 1191 1191 dma_release_channel(host->tx_chan); ··· 2269 2269 2270 2270 if (host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK && 2271 2271 host->clock) { 2272 - host->timeout_clk = host->mmc->actual_clock ? 2273 - host->mmc->actual_clock / 1000 : 2272 + host->timeout_clk = mmc->actual_clock ? 
+			mmc->actual_clock / 1000 :
 			host->clock / 1000;
-	host->mmc->max_busy_timeout =
+	mmc->max_busy_timeout =
 			host->ops->get_max_timeout_count ?
 			host->ops->get_max_timeout_count(host) :
 			1 << 27;
-	host->mmc->max_busy_timeout /= host->timeout_clk;
+	mmc->max_busy_timeout /= host->timeout_clk;
 	}
 }
 
···
 		return 0;
 
 	/* If nonremovable, assume that the card is always present. */
-	if (!mmc_card_is_removable(host->mmc))
+	if (!mmc_card_is_removable(mmc))
 		return 1;
 
 	/*
···
 	unsigned long flags;
 
 	if (enable)
-		pm_runtime_get_noresume(host->mmc->parent);
+		pm_runtime_get_noresume(mmc_dev(mmc));
 
 	spin_lock_irqsave(&host->lock, flags);
 	sdhci_enable_sdio_irq_nolock(host, enable);
 	spin_unlock_irqrestore(&host->lock, flags);
 
 	if (!enable)
-		pm_runtime_put_noidle(host->mmc->parent);
+		pm_runtime_put_noidle(mmc_dev(mmc));
 }
 EXPORT_SYMBOL_GPL(sdhci_enable_sdio_irq);
 
···
 		goto out;
 	}
 
-	host->mmc->retune_period = tuning_count;
+	mmc->retune_period = tuning_count;
 
 	if (host->tuning_delay < 0)
 		host->tuning_delay = opcode == MMC_SEND_TUNING_BLOCK;
···
 static void sdhci_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
 			   int err)
 {
-	struct sdhci_host *host = mmc_priv(mmc);
 	struct mmc_data *data = mrq->data;
 
 	if (data->host_cookie != COOKIE_UNMAPPED)
-		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+		dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len,
 			     mmc_get_dma_dir(data));
 
 	data->host_cookie = COOKIE_UNMAPPED;
···
 	/* Check sdhci_has_requests() first in case we are runtime suspended */
 	if (sdhci_has_requests(host) && !present) {
 		pr_err("%s: Card removed during transfer!\n",
-			mmc_hostname(host->mmc));
+			mmc_hostname(mmc));
 		pr_err("%s: Resetting controller.\n",
-			mmc_hostname(host->mmc));
+			mmc_hostname(mmc));
 
 		sdhci_do_reset(host, SDHCI_RESET_CMD);
 		sdhci_do_reset(host, SDHCI_RESET_DATA);
···
 	}
 
 	/*
+	 * The controller needs a reset of internal state machines
+	 * upon error conditions.
+	 */
+	if (sdhci_needs_reset(host, mrq)) {
+		/*
+		 * Do not finish until command and data lines are available for
+		 * reset. Note there can only be one other mrq, so it cannot
+		 * also be in mrqs_done, otherwise host->cmd and host->data_cmd
+		 * would both be null.
+		 */
+		if (host->cmd || host->data_cmd) {
+			spin_unlock_irqrestore(&host->lock, flags);
+			return true;
+		}
+
+		/* Some controllers need this kick or reset won't work here */
+		if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)
+			/* This is to force an update */
+			host->ops->set_clock(host, host->clock);
+
+		/*
+		 * Spec says we should do both at the same time, but Ricoh
+		 * controllers do not like that.
+		 */
+		sdhci_do_reset(host, SDHCI_RESET_CMD);
+		sdhci_do_reset(host, SDHCI_RESET_DATA);
+
+		host->pending_reset = false;
+	}
+
+	/*
 	 * Always unmap the data buffers if they were mapped by
 	 * sdhci_prepare_data() whenever we finish with a request.
 	 * This avoids leaking DMA mappings on error.
···
 			length = host->bounce_buffer_size;
 		}
 		dma_sync_single_for_cpu(
-				host->mmc->parent,
+				mmc_dev(host->mmc),
 				host->bounce_addr,
 				host->bounce_buffer_size,
 				DMA_FROM_DEVICE);
···
 	} else {
 		/* No copying, just switch ownership */
 		dma_sync_single_for_cpu(
-				host->mmc->parent,
+				mmc_dev(host->mmc),
 				host->bounce_addr,
 				host->bounce_buffer_size,
 				mmc_get_dma_dir(data));
···
 		}
 		data->host_cookie = COOKIE_UNMAPPED;
 	}
-	}
-
-	/*
-	 * The controller needs a reset of internal state machines
-	 * upon error conditions.
-	 */
-	if (sdhci_needs_reset(host, mrq)) {
-		/*
-		 * Do not finish until command and data lines are available for
-		 * reset. Note there can only be one other mrq, so it cannot
-		 * also be in mrqs_done, otherwise host->cmd and host->data_cmd
-		 * would both be null.
-		 */
-		if (host->cmd || host->data_cmd) {
-			spin_unlock_irqrestore(&host->lock, flags);
-			return true;
-		}
-
-		/* Some controllers need this kick or reset won't work here */
-		if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)
-			/* This is to force an update */
-			host->ops->set_clock(host, host->clock);
-
-		/* Spec says we should do both at the same time, but Ricoh
-		   controllers do not like that. */
-		sdhci_do_reset(host, SDHCI_RESET_CMD);
-		sdhci_do_reset(host, SDHCI_RESET_DATA);
-
-		host->pending_reset = false;
 	}
 
 	host->mrqs_done[i] = NULL;
···
 		host->ops->enable_dma(host);
 	}
 
-	if ((host->mmc->pm_flags & MMC_PM_KEEP_POWER) &&
+	if ((mmc->pm_flags & MMC_PM_KEEP_POWER) &&
 	    (host->quirks2 & SDHCI_QUIRK2_HOST_OFF_CARD_ON)) {
 		/* Card keeps power but host controller does not */
 		sdhci_init(host, 0);
···
 		host->clock = 0;
 		mmc->ops->set_ios(mmc, &mmc->ios);
 	} else {
-		sdhci_init(host, (host->mmc->pm_flags & MMC_PM_KEEP_POWER));
+		sdhci_init(host, (mmc->pm_flags & MMC_PM_KEEP_POWER));
 	}
 
 	if (host->irq_wake_enabled) {
···
 	} else {
 		ret = request_threaded_irq(host->irq, sdhci_irq,
 					   sdhci_thread_irq, IRQF_SHARED,
-					   mmc_hostname(host->mmc), host);
+					   mmc_hostname(mmc), host);
 		if (ret)
 			return ret;
 	}
···
 	 * speedups by the help of a bounce buffer to group scattered
 	 * reads/writes together.
 	 */
-	host->bounce_buffer = devm_kmalloc(mmc->parent,
+	host->bounce_buffer = devm_kmalloc(mmc_dev(mmc),
 					   bounce_size,
 					   GFP_KERNEL);
 	if (!host->bounce_buffer) {
···
 		return;
 	}
 
-	host->bounce_addr = dma_map_single(mmc->parent,
+	host->bounce_addr = dma_map_single(mmc_dev(mmc),
 					   host->bounce_buffer,
 					   bounce_size,
 					   DMA_BIDIRECTIONAL);
-	ret = dma_mapping_error(mmc->parent, host->bounce_addr);
+	ret = dma_mapping_error(mmc_dev(mmc), host->bounce_addr);
 	if (ret)
 		/* Again fall back to max_segs == 1 */
 		return;
···
 
 	if ((host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION) &&
 	    mmc_card_is_removable(mmc) &&
-	    mmc_gpio_get_cd(host->mmc) < 0)
+	    mmc_gpio_get_cd(mmc) < 0)
 		mmc->caps |= MMC_CAP_NEEDS_POLL;
 
 	if (!IS_ERR(mmc->supply.vqmmc)) {
+1 -1
drivers/mmc/host/sdhci_am654.c
···
 	struct cqhci_host *cq_host;
 	int ret;
 
-	cq_host = devm_kzalloc(host->mmc->parent, sizeof(struct cqhci_host),
+	cq_host = devm_kzalloc(mmc_dev(host->mmc), sizeof(struct cqhci_host),
 			       GFP_KERNEL);
 	if (!cq_host)
 		return -ENOMEM;
+2 -1
drivers/mmc/host/tmio_mmc.h
···
 
 /* Define some IRQ masks */
 /* This is the mask used at reset by the chip */
-#define TMIO_MASK_INIT_RCAR2	0x8b7f031d /* Initial value for R-Car Gen2+ */
 #define TMIO_MASK_ALL		0x837f031d
+#define TMIO_MASK_ALL_RCAR2	0x8b7f031d
 #define TMIO_MASK_READOP	(TMIO_STAT_RXRDY | TMIO_STAT_DATAEND)
 #define TMIO_MASK_WRITEOP	(TMIO_STAT_TXRQ | TMIO_STAT_DATAEND)
 #define TMIO_MASK_CMD		(TMIO_STAT_CMDRESPEND | TMIO_STAT_CMDTIMEOUT | \
···
 
 	u32 sdio_irq_mask;
 	unsigned int clk_cache;
 	u32 sdcard_irq_setbit_mask;
+	u32 sdcard_irq_mask_all;
 
 	spinlock_t lock;		/* protect host private data */
 	unsigned long last_req_ts;
+29 -32
drivers/mmc/host/tmio_mmc_core.c
···
 	}
 }
 
+static void tmio_mmc_set_bus_width(struct tmio_mmc_host *host,
+				   unsigned char bus_width)
+{
+	u16 reg = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT)
+				& ~(CARD_OPT_WIDTH | CARD_OPT_WIDTH8);
+
+	/* reg now applies to MMC_BUS_WIDTH_4 */
+	if (bus_width == MMC_BUS_WIDTH_1)
+		reg |= CARD_OPT_WIDTH;
+	else if (bus_width == MMC_BUS_WIDTH_8)
+		reg |= CARD_OPT_WIDTH8;
+
+	sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, reg);
+}
+
 static void tmio_mmc_reset(struct tmio_mmc_host *host)
 {
 	/* FIXME - should we set stop clock reg here */
···
 	sd_ctrl_write16(host, CTL_RESET_SD, 0x0001);
 	usleep_range(10000, 11000);
 
+	tmio_mmc_abort_dma(host);
+
 	if (host->reset)
 		host->reset(host);
 
-	tmio_mmc_abort_dma(host);
+	sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask_all);
+	host->sdcard_irq_mask = host->sdcard_irq_mask_all;
+
+	tmio_mmc_set_bus_width(host, host->mmc->ios.bus_width);
 
 	if (host->pdata->flags & TMIO_MMC_SDIO_IRQ) {
 		sd_ctrl_write16(host, CTL_SDIO_IRQ_MASK, host->sdio_irq_mask);
 		sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0001);
 	}
+
+	if (host->mmc->card)
+		mmc_retune_needed(host->mmc);
 }
 
 static void tmio_mmc_reset_work(struct work_struct *work)
···
 		host->set_pwr(host->pdev, 0);
 }
 
-static void tmio_mmc_set_bus_width(struct tmio_mmc_host *host,
-				   unsigned char bus_width)
-{
-	u16 reg = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT)
-				& ~(CARD_OPT_WIDTH | CARD_OPT_WIDTH8);
-
-	/* reg now applies to MMC_BUS_WIDTH_4 */
-	if (bus_width == MMC_BUS_WIDTH_1)
-		reg |= CARD_OPT_WIDTH;
-	else if (bus_width == MMC_BUS_WIDTH_8)
-		reg |= CARD_OPT_WIDTH8;
-
-	sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, reg);
-}
-
 static unsigned int tmio_mmc_get_timeout_cycles(struct tmio_mmc_host *host)
 {
 	u16 val = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT);
···
 				    !mmc_card_is_removable(mmc));
 
 	/*
-	 * On Gen2+, eMMC with NONREMOVABLE currently fails because native
-	 * hotplug gets disabled. It seems RuntimePM related yet we need further
-	 * research. Since we are planning a PM overhaul anyway, let's enforce
-	 * for now the device being active by enabling native hotplug always.
-	 */
-	if (pdata->flags & TMIO_MMC_MIN_RCAR2)
-		_host->native_hotplug = true;
-
-	/*
 	 * While using internal tmio hardware logic for card detection, we need
 	 * to ensure it stays powered for it to work.
 	 */
···
 	if (pdata->flags & TMIO_MMC_SDIO_IRQ)
 		_host->sdio_irq_mask = TMIO_SDIO_MASK_ALL;
 
+	if (!_host->sdcard_irq_mask_all)
+		_host->sdcard_irq_mask_all = TMIO_MASK_ALL;
+
 	_host->set_clock(_host, 0);
 	tmio_mmc_reset(_host);
-
-	_host->sdcard_irq_mask = sd_ctrl_read16_and_16_as_32(_host, CTL_IRQ_MASK);
-	tmio_mmc_disable_mmc_irqs(_host, TMIO_MASK_ALL);
 
 	if (_host->native_hotplug)
 		tmio_mmc_enable_mmc_irqs(_host,
···
 	cancel_work_sync(&host->done);
 	cancel_delayed_work_sync(&host->delayed_reset_work);
 	tmio_mmc_release_dma(host);
-	tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_ALL);
+	tmio_mmc_disable_mmc_irqs(host, host->sdcard_irq_mask_all);
 
 	if (host->native_hotplug)
 		pm_runtime_put_noidle(&pdev->dev);
···
 {
 	struct tmio_mmc_host *host = dev_get_drvdata(dev);
 
-	tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_ALL);
+	tmio_mmc_disable_mmc_irqs(host, host->sdcard_irq_mask_all);
 
 	if (host->clk_cache)
 		host->set_clock(host, 0);
···
 			  TMIO_STAT_CARD_REMOVE | TMIO_STAT_CARD_INSERT);
 
 	tmio_mmc_enable_dma(host, true);
-
-	mmc_retune_needed(host->mmc);
 
 	return 0;
 }
+4 -1
drivers/mmc/host/uniphier-sd.c
···
 
 	ret = tmio_mmc_host_probe(host);
 	if (ret)
-		goto free_host;
+		goto disable_clk;
 
 	ret = devm_request_irq(dev, irq, tmio_mmc_irq, IRQF_SHARED,
 			       dev_name(dev), host);
···
 
 remove_host:
 	tmio_mmc_host_remove(host);
+disable_clk:
+	uniphier_sd_clk_disable(host);
 free_host:
 	tmio_mmc_host_free(host);
 
···
 
 	tmio_mmc_host_remove(host);
 	uniphier_sd_clk_disable(host);
+	tmio_mmc_host_free(host);
 
 	return 0;
 }
+1 -2
drivers/mmc/host/via-sdmmc.c
···
 static int __maybe_unused via_sd_resume(struct device *dev)
 {
 	struct via_crdr_mmc_host *sdhost;
-	int ret = 0;
 	u8 gatt;
 
 	sdhost = dev_get_drvdata(dev);
···
 	via_restore_pcictrlreg(sdhost);
 	via_init_sdc_pm(sdhost);
 
-	return ret;
+	return 0;
 }
 
 static SIMPLE_DEV_PM_OPS(via_sd_pm_ops, via_sd_suspend, via_sd_resume);
+1 -6
include/linux/mmc/host.h
···
 	u32			ocr_avail_sdio;	/* SDIO-specific OCR */
 	u32			ocr_avail_sd;	/* SD-specific OCR */
 	u32			ocr_avail_mmc;	/* MMC-specific OCR */
-#ifdef CONFIG_PM_SLEEP
-	struct notifier_block	pm_notify;
-#endif
 	struct wakeup_source	*ws;		/* Enable consume of uevents */
 	u32			max_current_330;
 	u32			max_current_300;
···
 	/* group bitfields together to minimize padding */
 	unsigned int		use_spi_crc:1;
 	unsigned int		claimed:1;	/* host exclusively claimed */
-	unsigned int		bus_dead:1;	/* bus has been released */
 	unsigned int		doing_init_tune:1; /* initial tuning in progress */
 	unsigned int		can_retune:1;	/* re-tuning can be used */
 	unsigned int		doing_retune:1;	/* re-tuning in progress */
···
 	struct mmc_slot		slot;
 
 	const struct mmc_bus_ops *bus_ops;	/* current bus driver */
-	unsigned int		bus_refs;	/* reference counter */
 
 	unsigned int		sdio_irqs;
 	struct task_struct	*sdio_irq_thread;
···
 void mmc_of_parse_clk_phase(struct mmc_host *host,
 			    struct mmc_clk_phase_map *map);
 int mmc_of_parse(struct mmc_host *host);
-int mmc_of_parse_voltage(struct device_node *np, u32 *mask);
+int mmc_of_parse_voltage(struct mmc_host *host, u32 *mask);
 
 static inline void *mmc_priv(struct mmc_host *host)
 {
+1 -1
include/linux/mmc/sdio.h
···
 #define  SDIO_SD_REV_1_01	0	/* SD Physical Spec Version 1.01 */
 #define  SDIO_SD_REV_1_10	1	/* SD Physical Spec Version 1.10 */
 #define  SDIO_SD_REV_2_00	2	/* SD Physical Spec Version 2.00 */
-#define  SDIO_SD_REV_3_00	3	/* SD Physical Spev Version 3.00 */
+#define  SDIO_SD_REV_3_00	3	/* SD Physical Spec Version 3.00 */
 
 #define SDIO_CCCR_IOEx		0x02
 #define SDIO_CCCR_IORx		0x03
-9
include/linux/spi/mmc_spi.h
···
 	void (*setpower)(struct device *, unsigned int maskval);
 };
 
-#ifdef CONFIG_OF
 extern struct mmc_spi_platform_data *mmc_spi_get_pdata(struct spi_device *spi);
 extern void mmc_spi_put_pdata(struct spi_device *spi);
-#else
-static inline struct mmc_spi_platform_data *
-mmc_spi_get_pdata(struct spi_device *spi)
-{
-	return spi->dev.platform_data;
-}
-static inline void mmc_spi_put_pdata(struct spi_device *spi) {}
-#endif /* CONFIG_OF */
 
 #endif /* __LINUX_SPI_MMC_SPI_H */