Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mmc-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC updates from Ulf Hansson:
"MMC core:
- Introduce host claiming by context to support blkmq
- Preparations for enabling CQE (eMMC CMDQ) requests
- Re-factorizations to prepare for blkmq support
- Re-factorizations to prepare for CQE support
- Fix signal voltage switch for SD cards without power cycle
- Convert RPMB to a character device
- Export eMMC revision via sysfs
- Support eMMC DT binding for fixed driver type
- Document mmc_regulator_get_supply() API

MMC host:
- omap_hsmmc: Updated regulator management for PBIAS
- sdhci-omap: Add new OMAP SDHCI driver
- meson-mx-sdio: New driver for the Amlogic Meson8 and Meson8b SoCs
- sdhci-pci: Add support for Intel CDF
- sdhci-acpi: Fix voltage switch for some Intel host controllers
- sdhci-msm: Enable delay circuit calibration clocks
- sdhci-msm: Manage power IRQ properly
- mediatek: Add support of mt2701/mt2712
- mediatek: Updates management of clocks and tunings
- mediatek: Upgrade eMMC HS400 support
- rtsx_pci: Update tuning for gen3 PCI-Express
- renesas_sdhi: Support R-Car Gen[123] fallback compatibility strings
- Catch all errors when getting regulators
- Various additional improvements and cleanups"

* tag 'mmc-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (91 commits)
sdhci-fujitsu: add support for setting the CMD_DAT_DELAY attribute
dt-bindings: sdhci-fujitsu: document cmd-dat-delay property
mmc: tmio: Replace msleep() of 20ms or less with usleep_range()
mmc: dw_mmc: Convert timers to use timer_setup()
mmc: dw_mmc: Cleanup the DTO timer like the CTO one
mmc: vub300: Use common code in __download_offload_pseudocode()
mmc: tmio: Use common error handling code in tmio_mmc_host_probe()
mmc: Convert timers to use timer_setup()
mmc: sdhci-acpi: Fix voltage switch for some Intel host controllers
mmc: sdhci-acpi: Let devices define their own private data
mmc: mediatek: perfer to use rise edge latching for cmd line
mmc: mediatek: improve eMMC hs400 mode read performance
mmc: mediatek: add latch-ck support
mmc: mediatek: add support of source_cg clock
mmc: mediatek: add stop_clk fix and enhance_rx support
mmc: mediatek: add busy_check support
mmc: mediatek: add async fifo and data tune support
mmc: mediatek: add pad_tune0 support
mmc: mediatek: make hs400_tune_response only for mt8173
arm64: dts: mt8173: remove "mediatek, mt8135-mmc" from mmc nodes
...

Diffstat: 3256 insertions(+), 581 deletions(-)
Documentation/ABI/testing/sysfs-bus-mmc | +4
+ What:		/sys/bus/mmc/devices/.../rev
+ Date:		October 2017
+ Contact:	Jin Qian <jinqian@android.com>
+ Description:	Extended CSD revision number
Documentation/devicetree/bindings/mmc/amlogic,meson-mx-sdio.txt | +54
+ * Amlogic Meson6, Meson8 and Meson8b SDIO/MMC controller
+ 
+ The highspeed MMC host controller on Amlogic SoCs provides an interface
+ for MMC, SD, SDIO and SDHC types of memory cards.
+ 
+ Supported maximum speeds are the ones of the eMMC standard 4.41 as well
+ as the speed of SD standard 2.0.
+ 
+ The hardware provides an internal "mux" which allows up to three slots
+ to be controlled. Only one slot can be accessed at a time.
+ 
+ Required properties:
+  - compatible : must be one of
+    - "amlogic,meson8-sdio"
+    - "amlogic,meson8b-sdio"
+    along with the generic "amlogic,meson-mx-sdio"
+  - reg : mmc controller base registers
+  - interrupts : mmc controller interrupt
+  - #address-cells : must be 1
+  - #size-cells : must be 0
+  - clocks : phandle to clock providers
+  - clock-names : must contain "core" and "clkin"
+ 
+ Required child nodes:
+ A node for each slot provided by the MMC controller is required.
+ NOTE: due to a driver limitation currently only one slot (= child node)
+ is supported!
+ 
+ Required properties on each child node (= slot):
+  - compatible : must be "mmc-slot" (see mmc.txt within this directory)
+  - reg : the slot (or "port") ID
+ 
+ Optional properties on each child node (= slot):
+  - bus-width : must be 1 or 4 (8-bit bus is not supported)
+  - for cd and all other additional generic mmc parameters
+    please refer to mmc.txt within this directory
+ 
+ Examples:
+ 	mmc@c1108c20 {
+ 		compatible = "amlogic,meson8-sdio", "amlogic,meson-mx-sdio";
+ 		reg = <0xc1108c20 0x20>;
+ 		interrupts = <0 28 1>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 		clocks = <&clkc CLKID_SDIO>, <&clkc CLKID_CLK81>;
+ 		clock-names = "core", "clkin";
+ 
+ 		slot@1 {
+ 			compatible = "mmc-slot";
+ 			reg = <1>;
+ 
+ 			bus-width = <4>;
+ 		};
+ 	};
Documentation/devicetree/bindings/mmc/mmc.txt | +3
···
  - no-sdio: controller is limited to send sdio cmd during initialization
  - no-sd: controller is limited to send sd cmd during initialization
  - no-mmc: controller is limited to send mmc cmd during initialization
+ - fixed-emmc-driver-type: for non-removable eMMC, enforce this driver type.
+   The value <n> is the driver type as specified in the eMMC specification
+   (table 206 in spec version 5.1).
  
  *NOTE* on CD and WP polarity. To use common for all SD/MMC host controllers line
  polarity properties, we have to fix the meaning of the "normal" and "inverted"
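To make the new fixed-emmc-driver-type binding concrete, a device tree node using it might look like the hypothetical fragment below (node name, pins, and the chosen driver type 4 are illustrative assumptions, not taken from the patch; the type value must come from table 206 of the eMMC 5.1 specification):

```dts
/* hypothetical soldered-down eMMC node enforcing driver type 4 */
&mmc0 {
	bus-width = <8>;
	non-removable;
	fixed-emmc-driver-type = <4>;
};
```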
Documentation/devicetree/bindings/mmc/mtk-sd.txt | +15 -3
···
  and the properties used by the msdc driver.
  
  Required properties:
- - compatible: Should be "mediatek,mt8173-mmc","mediatek,mt8135-mmc"
+ - compatible: value should be either of the following.
+   "mediatek,mt8135-mmc": for mmc host ip compatible with mt8135
+   "mediatek,mt8173-mmc": for mmc host ip compatible with mt8173
+   "mediatek,mt2701-mmc": for mmc host ip compatible with mt2701
+   "mediatek,mt2712-mmc": for mmc host ip compatible with mt2712
+ - reg: physical base address of the controller and length
  - interrupts: Should contain MSDC interrupt number
- - clocks: MSDC source clock, HCLK
- - clock-names: "source", "hclk"
+ - clocks: Should contain phandle for the clock feeding the MMC controller
+ - clock-names: Should contain the following:
+   "source" - source clock (required)
+   "hclk" - HCLK which used for host (required)
+   "source_cg" - independent source clock gate (required for MT2712)
  - pinctrl-names: should be "default", "state_uhs"
  - pinctrl-0: should contain default/high speed pin ctrl
  - pinctrl-1: should contain uhs mode pin ctrl
···
  - mediatek,hs400-cmd-resp-sel-rising: HS400 command response sample selection
    If present, HS400 command responses are sampled on rising edges.
    If not present, HS400 command responses are sampled on falling edges.
+ - mediatek,latch-ck: Some SoCs do not support enhance_rx, need set correct
+   latch-ck to avoid data crc error caused by stop clock (fifo full)
+   Valid range = [0:0x7]. If not present, default value is 0.
+   Applied to compatible "mediatek,mt2701-mmc".
  
  Examples:
  mmc0: mmc@11230000 {
Documentation/devicetree/bindings/mmc/sdhci-fujitsu.txt | +2
···
  Optional properties:
  - vqmmc-supply: phandle to the regulator device tree node, mentioned
    as the VCCQ/VDD_IO supply in the eMMC/SD specs.
+ - fujitsu,cmd-dat-delay-select: boolean property indicating that this host
+   requires the CMD_DAT_DELAY control to be enabled.
  
  Example:
Documentation/devicetree/bindings/mmc/sdhci-msm.txt | +2
···
  	"core"  - SDC MMC clock (MCLK) (required)
  	"bus"   - SDCC bus voter clock (optional)
  	"xo"    - TCXO clock (optional)
+ 	"cal"   - reference clock for RCLK delay calibration (optional)
+ 	"sleep" - sleep clock for RCLK delay calibration (optional)
  
  Example:
Documentation/devicetree/bindings/mmc/sdhci-omap.txt | +16
+ * TI OMAP SDHCI Controller
+ 
+ Refer to mmc.txt for standard MMC bindings.
+ 
+ Required properties:
+ - compatible: Should be "ti,dra7-sdhci" for DRA7 and DRA72 controllers
+ - ti,hwmods: Must be "mmc<n>", <n> is controller instance starting 1
+ 
+ Example:
+ 	mmc1: mmc@4809c000 {
+ 		compatible = "ti,dra7-sdhci";
+ 		reg = <0x4809c000 0x400>;
+ 		ti,hwmods = "mmc1";
+ 		bus-width = <4>;
+ 		vmmc-supply = <&vmmc>; /* phandle to regulator node */
+ 	};
Documentation/devicetree/bindings/mmc/tmio_mmc.txt | +69 -1
···
  optional bindings can be used.
  
  Required properties:
- - compatible: "renesas,sdhi-shmobile" - a generic sh-mobile SDHI unit
+ - compatible: should contain one or more of the following:
  		"renesas,sdhi-sh73a0" - SDHI IP on SH73A0 SoC
  		"renesas,sdhi-r7s72100" - SDHI IP on R7S72100 SoC
  		"renesas,sdhi-r8a73a4" - SDHI IP on R8A73A4 SoC
···
  		"renesas,sdhi-r8a7794" - SDHI IP on R8A7794 SoC
  		"renesas,sdhi-r8a7795" - SDHI IP on R8A7795 SoC
  		"renesas,sdhi-r8a7796" - SDHI IP on R8A7796 SoC
+ 		"renesas,sdhi-shmobile" - a generic sh-mobile SDHI controller
+ 		"renesas,rcar-gen1-sdhi" - a generic R-Car Gen1 SDHI controller
+ 		"renesas,rcar-gen2-sdhi" - a generic R-Car Gen2 or RZ/G1
+ 					   SDHI controller
+ 		"renesas,rcar-gen3-sdhi" - a generic R-Car Gen3 SDHI controller
+ 
+ 		When compatible with the generic version, nodes must list
+ 		the SoC-specific version corresponding to the platform
+ 		first followed by the generic version.
  
  - clocks: Most controllers only have 1 clock source per channel. However, on
  	  some variations of this controller, the internal card detection
···
  - pinctrl-names: should be "default", "state_uhs"
  - pinctrl-0: should contain default/high speed pin ctrl
  - pinctrl-1: should contain uhs mode pin ctrl
+ 
+ Example: R8A7790 (R-Car H2) SDHI controller nodes
+ 
+ 	sdhi0: sd@ee100000 {
+ 		compatible = "renesas,sdhi-r8a7790", "renesas,rcar-gen2-sdhi";
+ 		reg = <0 0xee100000 0 0x328>;
+ 		interrupts = <GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cpg CPG_MOD 314>;
+ 		dmas = <&dmac0 0xcd>, <&dmac0 0xce>,
+ 		       <&dmac1 0xcd>, <&dmac1 0xce>;
+ 		dma-names = "tx", "rx", "tx", "rx";
+ 		max-frequency = <195000000>;
+ 		power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
+ 		resets = <&cpg 314>;
+ 		status = "disabled";
+ 	};
+ 
+ 	sdhi1: sd@ee120000 {
+ 		compatible = "renesas,sdhi-r8a7790", "renesas,rcar-gen2-sdhi";
+ 		reg = <0 0xee120000 0 0x328>;
+ 		interrupts = <GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cpg CPG_MOD 313>;
+ 		dmas = <&dmac0 0xc9>, <&dmac0 0xca>,
+ 		       <&dmac1 0xc9>, <&dmac1 0xca>;
+ 		dma-names = "tx", "rx", "tx", "rx";
+ 		max-frequency = <195000000>;
+ 		power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
+ 		resets = <&cpg 313>;
+ 		status = "disabled";
+ 	};
+ 
+ 	sdhi2: sd@ee140000 {
+ 		compatible = "renesas,sdhi-r8a7790", "renesas,rcar-gen2-sdhi";
+ 		reg = <0 0xee140000 0 0x100>;
+ 		interrupts = <GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cpg CPG_MOD 312>;
+ 		dmas = <&dmac0 0xc1>, <&dmac0 0xc2>,
+ 		       <&dmac1 0xc1>, <&dmac1 0xc2>;
+ 		dma-names = "tx", "rx", "tx", "rx";
+ 		max-frequency = <97500000>;
+ 		power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
+ 		resets = <&cpg 312>;
+ 		status = "disabled";
+ 	};
+ 
+ 	sdhi3: sd@ee160000 {
+ 		compatible = "renesas,sdhi-r8a7790", "renesas,rcar-gen2-sdhi";
+ 		reg = <0 0xee160000 0 0x100>;
+ 		interrupts = <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cpg CPG_MOD 311>;
+ 		dmas = <&dmac0 0xd3>, <&dmac0 0xd4>,
+ 		       <&dmac1 0xd3>, <&dmac1 0xd4>;
+ 		dma-names = "tx", "rx", "tx", "rx";
+ 		max-frequency = <97500000>;
+ 		power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
+ 		resets = <&cpg 311>;
+ 		status = "disabled";
+ 	};
MAINTAINERS | +6
···
  S:	Maintained
  F:	drivers/mmc/host/sdhci-spear.c
  
+ SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) TI OMAP DRIVER
+ M:	Kishon Vijay Abraham I <kishon@ti.com>
+ L:	linux-mmc@vger.kernel.org
+ S:	Maintained
+ F:	drivers/mmc/host/sdhci-omap.c
+ 
  SECURE ENCRYPTING DEVICE (SED) OPAL DRIVER
  M:	Scott Bauer <scott.bauer@intel.com>
  M:	Jonathan Derrick <jonathan.derrick@intel.com>
arch/arm64/boot/dts/mediatek/mt8173.dtsi | +4 -8
···
  		};
  
  		mmc0: mmc@11230000 {
- 			compatible = "mediatek,mt8173-mmc",
- 				     "mediatek,mt8135-mmc";
+ 			compatible = "mediatek,mt8173-mmc";
  			reg = <0 0x11230000 0 0x1000>;
  			interrupts = <GIC_SPI 71 IRQ_TYPE_LEVEL_LOW>;
  			clocks = <&pericfg CLK_PERI_MSDC30_0>,
···
  		};
  
  		mmc1: mmc@11240000 {
- 			compatible = "mediatek,mt8173-mmc",
- 				     "mediatek,mt8135-mmc";
+ 			compatible = "mediatek,mt8173-mmc";
  			reg = <0 0x11240000 0 0x1000>;
  			interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_LOW>;
  			clocks = <&pericfg CLK_PERI_MSDC30_1>,
···
  		};
  
  		mmc2: mmc@11250000 {
- 			compatible = "mediatek,mt8173-mmc",
- 				     "mediatek,mt8135-mmc";
+ 			compatible = "mediatek,mt8173-mmc";
  			reg = <0 0x11250000 0 0x1000>;
  			interrupts = <GIC_SPI 73 IRQ_TYPE_LEVEL_LOW>;
  			clocks = <&pericfg CLK_PERI_MSDC30_2>,
···
  		};
  
  		mmc3: mmc@11260000 {
- 			compatible = "mediatek,mt8173-mmc",
- 				     "mediatek,mt8135-mmc";
+ 			compatible = "mediatek,mt8173-mmc";
  			reg = <0 0x11260000 0 0x1000>;
  			interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_LOW>;
  			clocks = <&pericfg CLK_PERI_MSDC30_3>,
drivers/mmc/core/block.c | +299 -47
···
  #include <linux/hdreg.h>
  #include <linux/kdev_t.h>
  #include <linux/blkdev.h>
+ #include <linux/cdev.h>
  #include <linux/mutex.h>
  #include <linux/scatterlist.h>
  #include <linux/string_helpers.h>
···
  #define MAX_DEVICES 256
  
  static DEFINE_IDA(mmc_blk_ida);
+ static DEFINE_IDA(mmc_rpmb_ida);
  
  /*
   * There is one mmc_blk_data per slot.
···
  	struct gendisk *disk;
  	struct mmc_queue queue;
  	struct list_head part;
+ 	struct list_head rpmbs;
  
  	unsigned int flags;
  #define MMC_BLK_CMD23	(1 << 0)	/* Can do SET_BLOCK_COUNT for multiblock */
···
  	struct device_attribute force_ro;
  	struct device_attribute power_ro_lock;
  	int area_type;
+ };
+ 
+ /* Device type for RPMB character devices */
+ static dev_t mmc_rpmb_devt;
+ 
+ /* Bus type for RPMB character devices */
+ static struct bus_type mmc_rpmb_bus_type = {
+ 	.name = "mmc_rpmb",
+ };
+ 
+ /**
+  * struct mmc_rpmb_data - special RPMB device type for these areas
+  * @dev: the device for the RPMB area
+  * @chrdev: character device for the RPMB area
+  * @id: unique device ID number
+  * @part_index: partition index (0 on first)
+  * @md: parent MMC block device
+  * @node: list item, so we can put this device on a list
+  */
+ struct mmc_rpmb_data {
+ 	struct device dev;
+ 	struct cdev chrdev;
+ 	int id;
+ 	unsigned int part_index;
+ 	struct mmc_blk_data *md;
+ 	struct list_head node;
  };
  
  static DEFINE_MUTEX(open_lock);
···
  	struct mmc_ioc_cmd ic;
  	unsigned char *buf;
  	u64 buf_bytes;
+ 	struct mmc_rpmb_data *rpmb;
  };
  
  static struct mmc_blk_ioc_data *mmc_blk_ioctl_copy_from_user(
···
  	struct mmc_request mrq = {};
  	struct scatterlist sg;
  	int err;
- 	bool is_rpmb = false;
+ 	unsigned int target_part;
  	u32 status = 0;
  
  	if (!card || !md || !idata)
  		return -EINVAL;
  
- 	if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
- 		is_rpmb = true;
+ 	/*
+ 	 * The RPMB accesses comes in from the character device, so we
+ 	 * need to target these explicitly. Else we just target the
+ 	 * partition type for the block device the ioctl() was issued
+ 	 * on.
+ 	 */
+ 	if (idata->rpmb) {
+ 		/* Support multiple RPMB partitions */
+ 		target_part = idata->rpmb->part_index;
+ 		target_part |= EXT_CSD_PART_CONFIG_ACC_RPMB;
+ 	} else {
+ 		target_part = md->part_type;
+ 	}
  
  	cmd.opcode = idata->ic.opcode;
  	cmd.arg = idata->ic.arg;
···
  	mrq.cmd = &cmd;
  
- 	err = mmc_blk_part_switch(card, md->part_type);
+ 	err = mmc_blk_part_switch(card, target_part);
  	if (err)
  		return err;
···
  		return err;
  	}
  
- 	if (is_rpmb) {
+ 	if (idata->rpmb) {
  		err = mmc_set_blockcount(card, data.blocks,
  			idata->ic.write_flag & (1 << 31));
  		if (err)
···
  	memcpy(&(idata->ic.response), cmd.resp, sizeof(cmd.resp));
  
- 	if (is_rpmb) {
+ 	if (idata->rpmb) {
  		/*
  		 * Ensure RPMB command has completed by polling CMD13
  		 * "Send Status".
···
  }
  
  static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
- 			     struct mmc_ioc_cmd __user *ic_ptr)
+ 			     struct mmc_ioc_cmd __user *ic_ptr,
+ 			     struct mmc_rpmb_data *rpmb)
  {
  	struct mmc_blk_ioc_data *idata;
  	struct mmc_blk_ioc_data *idatas[1];
···
  	idata = mmc_blk_ioctl_copy_from_user(ic_ptr);
  	if (IS_ERR(idata))
  		return PTR_ERR(idata);
+ 	/* This will be NULL on non-RPMB ioctl():s */
+ 	idata->rpmb = rpmb;
  
  	card = md->queue.card;
  	if (IS_ERR(card)) {
···
  		idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
  		__GFP_RECLAIM);
  	idatas[0] = idata;
- 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_IOCTL;
+ 	req_to_mmc_queue_req(req)->drv_op =
+ 		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
  	req_to_mmc_queue_req(req)->drv_op_data = idatas;
  	req_to_mmc_queue_req(req)->ioc_count = 1;
  	blk_execute_rq(mq->queue, NULL, req, 0);
···
  }
  
  static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md,
- 				   struct mmc_ioc_multi_cmd __user *user)
+ 				   struct mmc_ioc_multi_cmd __user *user,
+ 				   struct mmc_rpmb_data *rpmb)
  {
  	struct mmc_blk_ioc_data **idata = NULL;
  	struct mmc_ioc_cmd __user *cmds = user->cmds;
···
  			num_of_cmds = i;
  			goto cmd_err;
  		}
+ 		/* This will be NULL on non-RPMB ioctl():s */
+ 		idata[i]->rpmb = rpmb;
  	}
  
  	card = md->queue.card;
···
  	req = blk_get_request(mq->queue,
  		idata[0]->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
  		__GFP_RECLAIM);
- 	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_IOCTL;
+ 	req_to_mmc_queue_req(req)->drv_op =
+ 		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
  	req_to_mmc_queue_req(req)->drv_op_data = idata;
  	req_to_mmc_queue_req(req)->ioc_count = num_of_cmds;
  	blk_execute_rq(mq->queue, NULL, req, 0);
···
  	if (!md)
  		return -EINVAL;
  	ret = mmc_blk_ioctl_cmd(md,
- 				(struct mmc_ioc_cmd __user *)arg);
+ 				(struct mmc_ioc_cmd __user *)arg,
+ 				NULL);
  	mmc_blk_put(md);
  	return ret;
  case MMC_IOC_MULTI_CMD:
···
  	if (!md)
  		return -EINVAL;
  	ret = mmc_blk_ioctl_multi_cmd(md,
- 				      (struct mmc_ioc_multi_cmd __user *)arg);
+ 				      (struct mmc_ioc_multi_cmd __user *)arg,
+ 				      NULL);
  	mmc_blk_put(md);
  	return ret;
  default:
···
  	md->reset_done &= ~type;
  }
  
- int mmc_access_rpmb(struct mmc_queue *mq)
- {
- 	struct mmc_blk_data *md = mq->blkdata;
- 	/*
- 	 * If this is a RPMB partition access, return ture
- 	 */
- 	if (md && md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
- 		return true;
- 
- 	return false;
- }
- 
  /*
   * The non-block commands come back from the block layer after it queued it and
   * processed it with all other requests and then they get issued in this
···
  	struct mmc_queue_req *mq_rq;
  	struct mmc_card *card = mq->card;
  	struct mmc_blk_data *md = mq->blkdata;
- 	struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
  	struct mmc_blk_ioc_data **idata;
+ 	bool rpmb_ioctl;
  	u8 **ext_csd;
  	u32 status;
  	int ret;
  	int i;
  
  	mq_rq = req_to_mmc_queue_req(req);
+ 	rpmb_ioctl = (mq_rq->drv_op == MMC_DRV_OP_IOCTL_RPMB);
  
  	switch (mq_rq->drv_op) {
  	case MMC_DRV_OP_IOCTL:
+ 	case MMC_DRV_OP_IOCTL_RPMB:
  		idata = mq_rq->drv_op_data;
  		for (i = 0, ret = 0; i < mq_rq->ioc_count; i++) {
  			ret = __mmc_blk_ioctl_cmd(card, md, idata[i]);
···
  			break;
  		}
  		/* Always switch back to main area after RPMB access */
- 		if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
- 			mmc_blk_part_switch(card, main_md->part_type);
+ 		if (rpmb_ioctl)
+ 			mmc_blk_part_switch(card, 0);
  		break;
  	case MMC_DRV_OP_BOOT_WP:
  		ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BOOT_WP,
···
  }
  
  static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
- 			      int disable_multi, bool *do_rel_wr,
- 			      bool *do_data_tag)
+ 			      int disable_multi, bool *do_rel_wr_p,
+ 			      bool *do_data_tag_p)
  {
  	struct mmc_blk_data *md = mq->blkdata;
  	struct mmc_card *card = md->queue.card;
  	struct mmc_blk_request *brq = &mqrq->brq;
  	struct request *req = mmc_queue_req_to_req(mqrq);
+ 	bool do_rel_wr, do_data_tag;
  
  	/*
  	 * Reliable writes are used to implement Forced Unit Access and
  	 * are supported only on MMCs.
  	 */
- 	*do_rel_wr = (req->cmd_flags & REQ_FUA) &&
- 		     rq_data_dir(req) == WRITE &&
- 		     (md->flags & MMC_BLK_REL_WR);
+ 	do_rel_wr = (req->cmd_flags & REQ_FUA) &&
+ 		    rq_data_dir(req) == WRITE &&
+ 		    (md->flags & MMC_BLK_REL_WR);
  
  	memset(brq, 0, sizeof(struct mmc_blk_request));
  
  	brq->mrq.data = &brq->data;
+ 	brq->mrq.tag = req->tag;
  
  	brq->stop.opcode = MMC_STOP_TRANSMISSION;
  	brq->stop.arg = 0;
···
  	brq->data.blksz = 512;
  	brq->data.blocks = blk_rq_sectors(req);
+ 	brq->data.blk_addr = blk_rq_pos(req);
+ 
+ 	/*
+ 	 * The command queue supports 2 priorities: "high" (1) and "simple" (0).
+ 	 * The eMMC will give "high" priority tasks priority over "simple"
+ 	 * priority tasks. Here we always set "simple" priority by not setting
+ 	 * MMC_DATA_PRIO.
+ 	 */
  
  	/*
  	 * The block layer doesn't support all sector count
···
  			brq->data.blocks);
  	}
  
- 	if (*do_rel_wr)
+ 	if (do_rel_wr) {
  		mmc_apply_rel_rw(brq, card, req);
+ 		brq->data.flags |= MMC_DATA_REL_WR;
+ 	}
  
  	/*
  	 * Data tag is used only during writing meta data to speed
  	 * up write and any subsequent read of this meta data
  	 */
- 	*do_data_tag = card->ext_csd.data_tag_unit_size &&
- 		       (req->cmd_flags & REQ_META) &&
- 		       (rq_data_dir(req) == WRITE) &&
- 		       ((brq->data.blocks * brq->data.blksz) >=
- 			card->ext_csd.data_tag_unit_size);
+ 	do_data_tag = card->ext_csd.data_tag_unit_size &&
+ 		      (req->cmd_flags & REQ_META) &&
+ 		      (rq_data_dir(req) == WRITE) &&
+ 		      ((brq->data.blocks * brq->data.blksz) >=
+ 		       card->ext_csd.data_tag_unit_size);
+ 
+ 	if (do_data_tag)
+ 		brq->data.flags |= MMC_DATA_DAT_TAG;
  
  	mmc_set_data_timeout(&brq->data, card);
···
  	}
  
  	mqrq->areq.mrq = &brq->mrq;
+ 
+ 	if (do_rel_wr_p)
+ 		*do_rel_wr_p = do_rel_wr;
+ 
+ 	if (do_data_tag_p)
+ 		*do_data_tag_p = do_data_tag;
  }
  
  static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
···
  	if (req && !mq->qcnt)
  		/* claim host only for the first request */
- 		mmc_get_card(card);
+ 		mmc_get_card(card, NULL);
  
  	ret = mmc_blk_part_switch(card, md->part_type);
  	if (ret) {
···
  out:
  	if (!mq->qcnt)
- 		mmc_put_card(card);
+ 		mmc_put_card(card, NULL);
  }
  
  static inline int mmc_blk_readonly(struct mmc_card *card)
···
  	spin_lock_init(&md->lock);
  	INIT_LIST_HEAD(&md->part);
+ 	INIT_LIST_HEAD(&md->rpmbs);
  	md->usage = 1;
  
  	ret = mmc_init_queue(&md->queue, card, &md->lock, subname);
···
  	return 0;
  }
  
+ /**
+  * mmc_rpmb_ioctl() - ioctl handler for the RPMB chardev
+  * @filp: the character device file
+  * @cmd: the ioctl() command
+  * @arg: the argument from userspace
+  *
+  * This will essentially just redirect the ioctl()s coming in over to
+  * the main block device spawning the RPMB character device.
+  */
+ static long mmc_rpmb_ioctl(struct file *filp, unsigned int cmd,
+ 			   unsigned long arg)
+ {
+ 	struct mmc_rpmb_data *rpmb = filp->private_data;
+ 	int ret;
+ 
+ 	switch (cmd) {
+ 	case MMC_IOC_CMD:
+ 		ret = mmc_blk_ioctl_cmd(rpmb->md,
+ 					(struct mmc_ioc_cmd __user *)arg,
+ 					rpmb);
+ 		break;
+ 	case MMC_IOC_MULTI_CMD:
+ 		ret = mmc_blk_ioctl_multi_cmd(rpmb->md,
+ 					(struct mmc_ioc_multi_cmd __user *)arg,
+ 					rpmb);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+ 		break;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+ #ifdef CONFIG_COMPAT
+ static long mmc_rpmb_ioctl_compat(struct file *filp, unsigned int cmd,
+ 				  unsigned long arg)
+ {
+ 	return mmc_rpmb_ioctl(filp, cmd, (unsigned long)compat_ptr(arg));
+ }
+ #endif
+ 
+ static int mmc_rpmb_chrdev_open(struct inode *inode, struct file *filp)
+ {
+ 	struct mmc_rpmb_data *rpmb = container_of(inode->i_cdev,
+ 						  struct mmc_rpmb_data, chrdev);
+ 
+ 	get_device(&rpmb->dev);
+ 	filp->private_data = rpmb;
+ 	mmc_blk_get(rpmb->md->disk);
+ 
+ 	return nonseekable_open(inode, filp);
+ }
+ 
+ static int mmc_rpmb_chrdev_release(struct inode *inode, struct file *filp)
+ {
+ 	struct mmc_rpmb_data *rpmb = container_of(inode->i_cdev,
+ 						  struct mmc_rpmb_data, chrdev);
+ 
+ 	put_device(&rpmb->dev);
+ 	mmc_blk_put(rpmb->md);
+ 
+ 	return 0;
+ }
+ 
+ static const struct file_operations mmc_rpmb_fileops = {
+ 	.release = mmc_rpmb_chrdev_release,
+ 	.open = mmc_rpmb_chrdev_open,
+ 	.owner = THIS_MODULE,
+ 	.llseek = no_llseek,
+ 	.unlocked_ioctl = mmc_rpmb_ioctl,
+ #ifdef CONFIG_COMPAT
+ 	.compat_ioctl = mmc_rpmb_ioctl_compat,
+ #endif
+ };
+ 
+ static void mmc_blk_rpmb_device_release(struct device *dev)
+ {
+ 	struct mmc_rpmb_data *rpmb = dev_get_drvdata(dev);
+ 
+ 	ida_simple_remove(&mmc_rpmb_ida, rpmb->id);
+ 	kfree(rpmb);
+ }
+ 
+ static int mmc_blk_alloc_rpmb_part(struct mmc_card *card,
+ 				   struct mmc_blk_data *md,
+ 				   unsigned int part_index,
+ 				   sector_t size,
+ 				   const char *subname)
+ {
+ 	int devidx, ret;
+ 	char rpmb_name[DISK_NAME_LEN];
+ 	char cap_str[10];
+ 	struct mmc_rpmb_data *rpmb;
+ 
+ 	/* This creates the minor number for the RPMB char device */
+ 	devidx = ida_simple_get(&mmc_rpmb_ida, 0, max_devices, GFP_KERNEL);
+ 	if (devidx < 0)
+ 		return devidx;
+ 
+ 	rpmb = kzalloc(sizeof(*rpmb), GFP_KERNEL);
+ 	if (!rpmb) {
+ 		ida_simple_remove(&mmc_rpmb_ida, devidx);
+ 		return -ENOMEM;
+ 	}
+ 
+ 	snprintf(rpmb_name, sizeof(rpmb_name),
+ 		 "mmcblk%u%s", card->host->index, subname ? subname : "");
+ 
+ 	rpmb->id = devidx;
+ 	rpmb->part_index = part_index;
+ 	rpmb->dev.init_name = rpmb_name;
+ 	rpmb->dev.bus = &mmc_rpmb_bus_type;
+ 	rpmb->dev.devt = MKDEV(MAJOR(mmc_rpmb_devt), rpmb->id);
+ 	rpmb->dev.parent = &card->dev;
+ 	rpmb->dev.release = mmc_blk_rpmb_device_release;
+ 	device_initialize(&rpmb->dev);
+ 	dev_set_drvdata(&rpmb->dev, rpmb);
+ 	rpmb->md = md;
+ 
+ 	cdev_init(&rpmb->chrdev, &mmc_rpmb_fileops);
+ 	rpmb->chrdev.owner = THIS_MODULE;
+ 	ret = cdev_device_add(&rpmb->chrdev, &rpmb->dev);
+ 	if (ret) {
+ 		pr_err("%s: could not add character device\n", rpmb_name);
+ 		goto out_put_device;
+ 	}
+ 
+ 	list_add(&rpmb->node, &md->rpmbs);
+ 
+ 	string_get_size((u64)size, 512, STRING_UNITS_2,
+ 			cap_str, sizeof(cap_str));
+ 
+ 	pr_info("%s: %s %s partition %u %s, chardev (%d:%d)\n",
+ 		rpmb_name, mmc_card_id(card),
+ 		mmc_card_name(card), EXT_CSD_PART_CONFIG_ACC_RPMB, cap_str,
+ 		MAJOR(mmc_rpmb_devt), rpmb->id);
+ 
+ 	return 0;
+ 
+ out_put_device:
+ 	put_device(&rpmb->dev);
+ 	return ret;
+ }
+ 
+ static void mmc_blk_remove_rpmb_part(struct mmc_rpmb_data *rpmb)
+ 
+ {
+ 	cdev_device_del(&rpmb->chrdev, &rpmb->dev);
+ 	put_device(&rpmb->dev);
+ }
+ 
  /* MMC Physical partitions consist of two boot partitions and
   * up to four general purpose partitions.
   * For each partition enabled in EXT_CSD a block device will be allocated
···
  static int mmc_blk_alloc_parts(struct mmc_card *card, struct mmc_blk_data *md)
  {
- 	int idx, ret = 0;
+ 	int idx, ret;
  
  	if (!mmc_card_mmc(card))
  		return 0;
  
  	for (idx = 0; idx < card->nr_parts; idx++) {
- 		if (card->part[idx].size) {
+ 		if (card->part[idx].area_type & MMC_BLK_DATA_AREA_RPMB) {
+ 			/*
+ 			 * RPMB partitions does not provide block access, they
+ 			 * are only accessed using ioctl():s. Thus create
+ 			 * special RPMB block devices that do not have a
+ 			 * backing block queue for these.
+ 			 */
+ 			ret = mmc_blk_alloc_rpmb_part(card, md,
+ 				card->part[idx].part_cfg,
+ 				card->part[idx].size >> 9,
+ 				card->part[idx].name);
+ 			if (ret)
+ 				return ret;
+ 		} else if (card->part[idx].size) {
  			ret = mmc_blk_alloc_part(card, md,
  				card->part[idx].part_cfg,
  				card->part[idx].size >> 9,
···
  		}
  	}
  
- 	return ret;
+ 	return 0;
  }
  
  static void mmc_blk_remove_req(struct mmc_blk_data *md)
···
  	struct list_head *pos, *q;
  	struct mmc_blk_data *part_md;
+ 	struct mmc_rpmb_data *rpmb;
  
+ 	/* Remove RPMB partitions */
+ 	list_for_each_safe(pos, q, &md->rpmbs) {
+ 		rpmb = list_entry(pos, struct mmc_rpmb_data, node);
+ 		list_del(pos);
+ 		mmc_blk_remove_rpmb_part(rpmb);
+ 	}
+ 	/* Remove block partitions */
  	list_for_each_safe(pos, q, &md->part) {
  		part_md = list_entry(pos, struct mmc_blk_data, part);
  		list_del(pos);
···
  	int res;
  
+ 	res = bus_register(&mmc_rpmb_bus_type);
+ 	if (res < 0) {
+ 		pr_err("mmcblk: could not register RPMB bus type\n");
+ 		return res;
+ 	}
+ 	res = alloc_chrdev_region(&mmc_rpmb_devt, 0, MAX_DEVICES, "rpmb");
+ 	if (res < 0) {
+ 		pr_err("mmcblk: failed to allocate rpmb chrdev region\n");
+ 		goto out_bus_unreg;
+ 	}
+ 
  	if (perdev_minors != CONFIG_MMC_BLOCK_MINORS)
  		pr_info("mmcblk: using %d minors per device\n", perdev_minors);
···
  	res = register_blkdev(MMC_BLOCK_MAJOR, "mmc");
  	if (res)
- 		goto out;
+ 		goto out_chrdev_unreg;
  
  	res = mmc_register_driver(&mmc_driver);
  	if (res)
- 		goto out2;
+ 		goto out_blkdev_unreg;
  
  	return 0;
- out2:
+ 
+ out_blkdev_unreg:
  	unregister_blkdev(MMC_BLOCK_MAJOR, "mmc");
- out:
+ out_chrdev_unreg:
  	unregister_chrdev_region(mmc_rpmb_devt, MAX_DEVICES);
+ out_bus_unreg:
+ 	bus_unregister(&mmc_rpmb_bus_type);
  	return res;
  }
···
  	mmc_unregister_driver(&mmc_driver);
  	unregister_blkdev(MMC_BLOCK_MAJOR, "mmc");
+ 	unregister_chrdev_region(mmc_rpmb_devt, MAX_DEVICES);
  }
  
  module_init(mmc_blk_init);
+7
drivers/mmc/core/bus.c
··· 369 369 */ 370 370 void mmc_remove_card(struct mmc_card *card) 371 371 { 372 + struct mmc_host *host = card->host; 373 + 372 374 #ifdef CONFIG_DEBUG_FS 373 375 mmc_remove_card_debugfs(card); 374 376 #endif 377 + 378 + if (host->cqe_enabled) { 379 + host->cqe_ops->cqe_disable(host); 380 + host->cqe_enabled = false; 381 + } 375 382 376 383 if (mmc_card_present(card)) { 377 384 if (mmc_host_is_spi(card->host)) {
+235 -27
drivers/mmc/core/core.c
··· 266 266 host->ops->request(host, mrq); 267 267 } 268 268 269 - static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq) 269 + static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq, 270 + bool cqe) 270 271 { 271 272 if (mrq->sbc) { 272 273 pr_debug("<%s: starting CMD%u arg %08x flags %08x>\n", ··· 276 275 } 277 276 278 277 if (mrq->cmd) { 279 - pr_debug("%s: starting CMD%u arg %08x flags %08x\n", 280 - mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->arg, 281 - mrq->cmd->flags); 278 + pr_debug("%s: starting %sCMD%u arg %08x flags %08x\n", 279 + mmc_hostname(host), cqe ? "CQE direct " : "", 280 + mrq->cmd->opcode, mrq->cmd->arg, mrq->cmd->flags); 281 + } else if (cqe) { 282 + pr_debug("%s: starting CQE transfer for tag %d blkaddr %u\n", 283 + mmc_hostname(host), mrq->tag, mrq->data->blk_addr); 282 284 } 283 285 284 286 if (mrq->data) { ··· 337 333 return 0; 338 334 } 339 335 340 - static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq) 336 + int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq) 341 337 { 342 338 int err; 343 339 ··· 346 342 if (mmc_card_removed(host->card)) 347 343 return -ENOMEDIUM; 348 344 349 - mmc_mrq_pr_debug(host, mrq); 345 + mmc_mrq_pr_debug(host, mrq, false); 350 346 351 347 WARN_ON(!host->claimed); 352 348 ··· 359 355 360 356 return 0; 361 357 } 358 + EXPORT_SYMBOL(mmc_start_request); 362 359 363 360 /* 364 361 * mmc_wait_data_done() - done callback for data request ··· 486 481 mmc_retune_release(host); 487 482 } 488 483 EXPORT_SYMBOL(mmc_wait_for_req_done); 484 + 485 + /* 486 + * mmc_cqe_start_req - Start a CQE request. 487 + * @host: MMC host to start the request 488 + * @mrq: request to start 489 + * 490 + * Start the request, re-tuning if needed and it is possible. Returns an error 491 + * code if the request fails to start or -EBUSY if CQE is busy. 
492 + */ 493 + int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq) 494 + { 495 + int err; 496 + 497 + /* 498 + * CQE cannot process re-tuning commands. Caller must hold retuning 499 + * while CQE is in use. Re-tuning can happen here only when CQE has no 500 + * active requests i.e. this is the first. Note, re-tuning will call 501 + * ->cqe_off(). 502 + */ 503 + err = mmc_retune(host); 504 + if (err) 505 + goto out_err; 506 + 507 + mrq->host = host; 508 + 509 + mmc_mrq_pr_debug(host, mrq, true); 510 + 511 + err = mmc_mrq_prep(host, mrq); 512 + if (err) 513 + goto out_err; 514 + 515 + err = host->cqe_ops->cqe_request(host, mrq); 516 + if (err) 517 + goto out_err; 518 + 519 + trace_mmc_request_start(host, mrq); 520 + 521 + return 0; 522 + 523 + out_err: 524 + if (mrq->cmd) { 525 + pr_debug("%s: failed to start CQE direct CMD%u, error %d\n", 526 + mmc_hostname(host), mrq->cmd->opcode, err); 527 + } else { 528 + pr_debug("%s: failed to start CQE transfer for tag %d, error %d\n", 529 + mmc_hostname(host), mrq->tag, err); 530 + } 531 + return err; 532 + } 533 + EXPORT_SYMBOL(mmc_cqe_start_req); 534 + 535 + /** 536 + * mmc_cqe_request_done - CQE has finished processing an MMC request 537 + * @host: MMC host which completed request 538 + * @mrq: MMC request which completed 539 + * 540 + * CQE drivers should call this function when they have completed 541 + * their processing of a request. 
542 + */ 543 + void mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq) 544 + { 545 + mmc_should_fail_request(host, mrq); 546 + 547 + /* Flag re-tuning needed on CRC errors */ 548 + if ((mrq->cmd && mrq->cmd->error == -EILSEQ) || 549 + (mrq->data && mrq->data->error == -EILSEQ)) 550 + mmc_retune_needed(host); 551 + 552 + trace_mmc_request_done(host, mrq); 553 + 554 + if (mrq->cmd) { 555 + pr_debug("%s: CQE req done (direct CMD%u): %d\n", 556 + mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->error); 557 + } else { 558 + pr_debug("%s: CQE transfer done tag %d\n", 559 + mmc_hostname(host), mrq->tag); 560 + } 561 + 562 + if (mrq->data) { 563 + pr_debug("%s: %d bytes transferred: %d\n", 564 + mmc_hostname(host), 565 + mrq->data->bytes_xfered, mrq->data->error); 566 + } 567 + 568 + mrq->done(mrq); 569 + } 570 + EXPORT_SYMBOL(mmc_cqe_request_done); 571 + 572 + /** 573 + * mmc_cqe_post_req - CQE post process of a completed MMC request 574 + * @host: MMC host 575 + * @mrq: MMC request to be processed 576 + */ 577 + void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq) 578 + { 579 + if (host->cqe_ops->cqe_post_req) 580 + host->cqe_ops->cqe_post_req(host, mrq); 581 + } 582 + EXPORT_SYMBOL(mmc_cqe_post_req); 583 + 584 + /* Arbitrary 1 second timeout */ 585 + #define MMC_CQE_RECOVERY_TIMEOUT 1000 586 + 587 + /* 588 + * mmc_cqe_recovery - Recover from CQE errors. 589 + * @host: MMC host to recover 590 + * 591 + * Recovery consists of stopping CQE, stopping eMMC, discarding the queue in 592 + * in eMMC, and discarding the queue in CQE. CQE must call 593 + * mmc_cqe_request_done() on all requests. An error is returned if the eMMC 594 + * fails to discard its queue. 
595 + */ 596 + int mmc_cqe_recovery(struct mmc_host *host) 597 + { 598 + struct mmc_command cmd; 599 + int err; 600 + 601 + mmc_retune_hold_now(host); 602 + 603 + /* 604 + * Recovery is expected seldom, if at all, but it reduces performance, 605 + * so make sure it is not completely silent. 606 + */ 607 + pr_warn("%s: running CQE recovery\n", mmc_hostname(host)); 608 + 609 + host->cqe_ops->cqe_recovery_start(host); 610 + 611 + memset(&cmd, 0, sizeof(cmd)); 612 + cmd.opcode = MMC_STOP_TRANSMISSION, 613 + cmd.flags = MMC_RSP_R1B | MMC_CMD_AC, 614 + cmd.flags &= ~MMC_RSP_CRC; /* Ignore CRC */ 615 + cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT, 616 + mmc_wait_for_cmd(host, &cmd, 0); 617 + 618 + memset(&cmd, 0, sizeof(cmd)); 619 + cmd.opcode = MMC_CMDQ_TASK_MGMT; 620 + cmd.arg = 1; /* Discard entire queue */ 621 + cmd.flags = MMC_RSP_R1B | MMC_CMD_AC; 622 + cmd.flags &= ~MMC_RSP_CRC; /* Ignore CRC */ 623 + cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT, 624 + err = mmc_wait_for_cmd(host, &cmd, 0); 625 + 626 + host->cqe_ops->cqe_recovery_finish(host); 627 + 628 + mmc_retune_release(host); 629 + 630 + return err; 631 + } 632 + EXPORT_SYMBOL(mmc_cqe_recovery); 489 633 490 634 /** 491 635 * mmc_is_req_done - Determine if a 'cap_cmd_during_tfr' request is done ··· 986 832 } 987 833 EXPORT_SYMBOL(mmc_align_data_size); 988 834 835 + /* 836 + * Allow claiming an already claimed host if the context is the same or there is 837 + * no context but the task is the same. 
838 + */ 839 + static inline bool mmc_ctx_matches(struct mmc_host *host, struct mmc_ctx *ctx, 840 + struct task_struct *task) 841 + { 842 + return host->claimer == ctx || 843 + (!ctx && task && host->claimer->task == task); 844 + } 845 + 846 + static inline void mmc_ctx_set_claimer(struct mmc_host *host, 847 + struct mmc_ctx *ctx, 848 + struct task_struct *task) 849 + { 850 + if (!host->claimer) { 851 + if (ctx) 852 + host->claimer = ctx; 853 + else 854 + host->claimer = &host->default_ctx; 855 + } 856 + if (task) 857 + host->claimer->task = task; 858 + } 859 + 989 860 /** 990 861 * __mmc_claim_host - exclusively claim a host 991 862 * @host: mmc host to claim 863 + * @ctx: context that claims the host or NULL in which case the default 864 + * context will be used 992 865 * @abort: whether or not the operation should be aborted 993 866 * 994 867 * Claim a host for a set of operations. If @abort is non null and ··· 1023 842 * that non-zero value without acquiring the lock. Returns zero 1024 843 * with the lock held otherwise. 1025 844 */ 1026 - int __mmc_claim_host(struct mmc_host *host, atomic_t *abort) 845 + int __mmc_claim_host(struct mmc_host *host, struct mmc_ctx *ctx, 846 + atomic_t *abort) 1027 847 { 848 + struct task_struct *task = ctx ? NULL : current; 1028 849 DECLARE_WAITQUEUE(wait, current); 1029 850 unsigned long flags; 1030 851 int stop; ··· 1039 856 while (1) { 1040 857 set_current_state(TASK_UNINTERRUPTIBLE); 1041 858 stop = abort ? 
atomic_read(abort) : 0; 1042 - if (stop || !host->claimed || host->claimer == current) 859 + if (stop || !host->claimed || mmc_ctx_matches(host, ctx, task)) 1043 860 break; 1044 861 spin_unlock_irqrestore(&host->lock, flags); 1045 862 schedule(); ··· 1048 865 set_current_state(TASK_RUNNING); 1049 866 if (!stop) { 1050 867 host->claimed = 1; 1051 - host->claimer = current; 868 + mmc_ctx_set_claimer(host, ctx, task); 1052 869 host->claim_cnt += 1; 1053 870 if (host->claim_cnt == 1) 1054 871 pm = true; ··· 1083 900 spin_unlock_irqrestore(&host->lock, flags); 1084 901 } else { 1085 902 host->claimed = 0; 903 + host->claimer->task = NULL; 1086 904 host->claimer = NULL; 1087 905 spin_unlock_irqrestore(&host->lock, flags); 1088 906 wake_up(&host->wq); ··· 1097 913 * This is a helper function, which fetches a runtime pm reference for the 1098 914 * card device and also claims the host. 1099 915 */ 1100 - void mmc_get_card(struct mmc_card *card) 916 + void mmc_get_card(struct mmc_card *card, struct mmc_ctx *ctx) 1101 917 { 1102 918 pm_runtime_get_sync(&card->dev); 1103 - mmc_claim_host(card->host); 919 + __mmc_claim_host(card->host, ctx, NULL); 1104 920 } 1105 921 EXPORT_SYMBOL(mmc_get_card); 1106 922 ··· 1108 924 * This is a helper function, which releases the host and drops the runtime 1109 925 * pm reference for the card device. 1110 926 */ 1111 - void mmc_put_card(struct mmc_card *card) 927 + void mmc_put_card(struct mmc_card *card, struct mmc_ctx *ctx) 1112 928 { 1113 - mmc_release_host(card->host); 929 + struct mmc_host *host = card->host; 930 + 931 + WARN_ON(ctx && host->claimer != ctx); 932 + 933 + mmc_release_host(host); 1114 934 pm_runtime_mark_last_busy(&card->dev); 1115 935 pm_runtime_put_autosuspend(&card->dev); 1116 936 } ··· 1588 1400 1589 1401 #endif /* CONFIG_REGULATOR */ 1590 1402 1403 + /** 1404 + * mmc_regulator_get_supply - try to get VMMC and VQMMC regulators for a host 1405 + * @mmc: the host to regulate 1406 + * 1407 + * Returns 0 or errno. 
errno should be handled, it is either a critical error 1408 + * or -EPROBE_DEFER. 0 means no critical error but it does not mean all 1409 + * regulators have been found because they all are optional. If you require 1410 + * certain regulators, you need to check separately in your driver if they got 1411 + * populated after calling this function. 1412 + */ 1591 1413 int mmc_regulator_get_supply(struct mmc_host *mmc) 1592 1414 { 1593 1415 struct device *dev = mmc_dev(mmc); ··· 1682 1484 1683 1485 } 1684 1486 1487 + int mmc_host_set_uhs_voltage(struct mmc_host *host) 1488 + { 1489 + u32 clock; 1490 + 1491 + /* 1492 + * During a signal voltage level switch, the clock must be gated 1493 + * for 5 ms according to the SD spec 1494 + */ 1495 + clock = host->ios.clock; 1496 + host->ios.clock = 0; 1497 + mmc_set_ios(host); 1498 + 1499 + if (mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180)) 1500 + return -EAGAIN; 1501 + 1502 + /* Keep clock gated for at least 10 ms, though spec only says 5 ms */ 1503 + mmc_delay(10); 1504 + host->ios.clock = clock; 1505 + mmc_set_ios(host); 1506 + 1507 + return 0; 1508 + } 1509 + 1685 1510 int mmc_set_uhs_voltage(struct mmc_host *host, u32 ocr) 1686 1511 { 1687 1512 struct mmc_command cmd = {}; 1688 1513 int err = 0; 1689 - u32 clock; 1690 1514 1691 1515 /* 1692 1516 * If we cannot switch voltages, return failure so the caller ··· 1740 1520 err = -EAGAIN; 1741 1521 goto power_cycle; 1742 1522 } 1743 - /* 1744 - * During a signal voltage level switch, the clock must be gated 1745 - * for 5 ms according to the SD spec 1746 - */ 1747 - clock = host->ios.clock; 1748 - host->ios.clock = 0; 1749 - mmc_set_ios(host); 1750 1523 1751 - if (mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180)) { 1524 + if (mmc_host_set_uhs_voltage(host)) { 1752 1525 /* 1753 1526 * Voltages may not have been switched, but we've already 1754 1527 * sent CMD11, so a power cycle is required anyway ··· 1749 1536 err = -EAGAIN; 1750 1537 goto power_cycle; 1751 1538 } 
1752 - 1753 - /* Keep clock gated for at least 10 ms, though spec only says 5 ms */ 1754 - mmc_delay(10); 1755 - host->ios.clock = clock; 1756 - mmc_set_ios(host); 1757 1539 1758 1540 /* Wait for at least 1 ms according to spec */ 1759 1541 mmc_delay(1);
+12 -4
drivers/mmc/core/core.h
··· 49 49 void mmc_set_bus_width(struct mmc_host *host, unsigned int width); 50 50 u32 mmc_select_voltage(struct mmc_host *host, u32 ocr); 51 51 int mmc_set_uhs_voltage(struct mmc_host *host, u32 ocr); 52 + int mmc_host_set_uhs_voltage(struct mmc_host *host); 52 53 int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage); 53 54 void mmc_set_timing(struct mmc_host *host, unsigned int timing); 54 55 void mmc_set_driver_type(struct mmc_host *host, unsigned int drv_type); ··· 108 107 void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq); 109 108 bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq); 110 109 110 + int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq); 111 + 111 112 struct mmc_async_req; 112 113 113 114 struct mmc_async_req *mmc_start_areq(struct mmc_host *host, ··· 131 128 int mmc_set_blockcount(struct mmc_card *card, unsigned int blockcount, 132 129 bool is_rel_write); 133 130 134 - int __mmc_claim_host(struct mmc_host *host, atomic_t *abort); 131 + int __mmc_claim_host(struct mmc_host *host, struct mmc_ctx *ctx, 132 + atomic_t *abort); 135 133 void mmc_release_host(struct mmc_host *host); 136 - void mmc_get_card(struct mmc_card *card); 137 - void mmc_put_card(struct mmc_card *card); 134 + void mmc_get_card(struct mmc_card *card, struct mmc_ctx *ctx); 135 + void mmc_put_card(struct mmc_card *card, struct mmc_ctx *ctx); 138 136 139 137 /** 140 138 * mmc_claim_host - exclusively claim a host ··· 145 141 */ 146 142 static inline void mmc_claim_host(struct mmc_host *host) 147 143 { 148 - __mmc_claim_host(host, NULL); 144 + __mmc_claim_host(host, NULL, NULL); 149 145 } 146 + 147 + int mmc_cqe_start_req(struct mmc_host *host, struct mmc_request *mrq); 148 + void mmc_cqe_post_req(struct mmc_host *host, struct mmc_request *mrq); 149 + int mmc_cqe_recovery(struct mmc_host *host); 150 150 151 151 #endif
+13 -7
drivers/mmc/core/host.c
··· 111 111 host->hold_retune += 1; 112 112 } 113 113 114 - void mmc_retune_hold_now(struct mmc_host *host) 115 - { 116 - host->retune_now = 0; 117 - host->hold_retune += 1; 118 - } 119 - 120 114 void mmc_retune_release(struct mmc_host *host) 121 115 { 122 116 if (host->hold_retune) ··· 118 124 else 119 125 WARN_ON(1); 120 126 } 127 + EXPORT_SYMBOL(mmc_retune_release); 121 128 122 129 int mmc_retune(struct mmc_host *host) 123 130 { ··· 179 184 int mmc_of_parse(struct mmc_host *host) 180 185 { 181 186 struct device *dev = host->parent; 182 - u32 bus_width; 187 + u32 bus_width, drv_type; 183 188 int ret; 184 189 bool cd_cap_invert, cd_gpio_invert = false; 185 190 bool ro_cap_invert, ro_gpio_invert = false; ··· 321 326 if (device_property_read_bool(dev, "no-mmc")) 322 327 host->caps2 |= MMC_CAP2_NO_MMC; 323 328 329 + /* Must be after "non-removable" check */ 330 + if (device_property_read_u32(dev, "fixed-emmc-driver-type", &drv_type) == 0) { 331 + if (host->caps & MMC_CAP_NONREMOVABLE) 332 + host->fixed_drv_type = drv_type; 333 + else 334 + dev_err(host->parent, 335 + "can't use fixed driver type, media is removable\n"); 336 + } 337 + 324 338 host->dsr_req = !device_property_read_u32(dev, "dsr", &host->dsr); 325 339 if (host->dsr_req && (host->dsr & ~0xffff)) { 326 340 dev_err(host->parent, ··· 401 397 host->max_req_size = PAGE_SIZE; 402 398 host->max_blk_size = 512; 403 399 host->max_blk_count = PAGE_SIZE / 512; 400 + 401 + host->fixed_drv_type = -EINVAL; 404 402 405 403 return host; 406 404 }
+6 -1
drivers/mmc/core/host.h
··· 19 19 void mmc_retune_enable(struct mmc_host *host); 20 20 void mmc_retune_disable(struct mmc_host *host); 21 21 void mmc_retune_hold(struct mmc_host *host); 22 - void mmc_retune_hold_now(struct mmc_host *host); 23 22 void mmc_retune_release(struct mmc_host *host); 24 23 int mmc_retune(struct mmc_host *host); 25 24 void mmc_retune_pause(struct mmc_host *host); 26 25 void mmc_retune_unpause(struct mmc_host *host); 26 + 27 + static inline void mmc_retune_hold_now(struct mmc_host *host) 28 + { 29 + host->retune_now = 0; 30 + host->hold_retune += 1; 31 + } 27 32 28 33 static inline void mmc_retune_recheck(struct mmc_host *host) 29 34 {
+41 -5
drivers/mmc/core/mmc.c
··· 780 780 MMC_DEV_ATTR(name, "%s\n", card->cid.prod_name); 781 781 MMC_DEV_ATTR(oemid, "0x%04x\n", card->cid.oemid); 782 782 MMC_DEV_ATTR(prv, "0x%x\n", card->cid.prv); 783 + MMC_DEV_ATTR(rev, "0x%x\n", card->ext_csd.rev); 783 784 MMC_DEV_ATTR(pre_eol_info, "%02x\n", card->ext_csd.pre_eol_info); 784 785 MMC_DEV_ATTR(life_time, "0x%02x 0x%02x\n", 785 786 card->ext_csd.device_life_time_est_typ_a, ··· 839 838 &dev_attr_name.attr, 840 839 &dev_attr_oemid.attr, 841 840 &dev_attr_prv.attr, 841 + &dev_attr_rev.attr, 842 842 &dev_attr_pre_eol_info.attr, 843 843 &dev_attr_life_time.attr, 844 844 &dev_attr_serial.attr, ··· 1291 1289 static void mmc_select_driver_type(struct mmc_card *card) 1292 1290 { 1293 1291 int card_drv_type, drive_strength, drv_type; 1292 + int fixed_drv_type = card->host->fixed_drv_type; 1294 1293 1295 1294 card_drv_type = card->ext_csd.raw_driver_strength | 1296 1295 mmc_driver_type_mask(0); 1297 1296 1298 - drive_strength = mmc_select_drive_strength(card, 1299 - card->ext_csd.hs200_max_dtr, 1300 - card_drv_type, &drv_type); 1297 + if (fixed_drv_type >= 0) 1298 + drive_strength = card_drv_type & mmc_driver_type_mask(fixed_drv_type) 1299 + ? fixed_drv_type : 0; 1300 + else 1301 + drive_strength = mmc_select_drive_strength(card, 1302 + card->ext_csd.hs200_max_dtr, 1303 + card_drv_type, &drv_type); 1301 1304 1302 1305 card->drive_strength = drive_strength; 1303 1306 ··· 1793 1786 } 1794 1787 1795 1788 /* 1789 + * Enable Command Queue if supported. Note that Packed Commands cannot 1790 + * be used with Command Queue. 
1791 + */ 1792 + card->ext_csd.cmdq_en = false; 1793 + if (card->ext_csd.cmdq_support && host->caps2 & MMC_CAP2_CQE) { 1794 + err = mmc_cmdq_enable(card); 1795 + if (err && err != -EBADMSG) 1796 + goto free_card; 1797 + if (err) { 1798 + pr_warn("%s: Enabling CMDQ failed\n", 1799 + mmc_hostname(card->host)); 1800 + card->ext_csd.cmdq_support = false; 1801 + card->ext_csd.cmdq_depth = 0; 1802 + err = 0; 1803 + } 1804 + } 1805 + /* 1796 1806 * In some cases (e.g. RPMB or mmc_test), the Command Queue must be 1797 1807 * disabled for a time, so a flag is needed to indicate to re-enable the 1798 1808 * Command Queue. 1799 1809 */ 1800 1810 card->reenable_cmdq = card->ext_csd.cmdq_en; 1811 + 1812 + if (card->ext_csd.cmdq_en && !host->cqe_enabled) { 1813 + err = host->cqe_ops->cqe_enable(host, card); 1814 + if (err) { 1815 + pr_err("%s: Failed to enable CQE, error %d\n", 1816 + mmc_hostname(host), err); 1817 + } else { 1818 + host->cqe_enabled = true; 1819 + pr_info("%s: Command Queue Engine enabled\n", 1820 + mmc_hostname(host)); 1821 + } 1822 + } 1801 1823 1802 1824 if (!oldcard) 1803 1825 host->card = card; ··· 1947 1911 { 1948 1912 int err; 1949 1913 1950 - mmc_get_card(host->card); 1914 + mmc_get_card(host->card, NULL); 1951 1915 1952 1916 /* 1953 1917 * Just check if our card has been removed. 1954 1918 */ 1955 1919 err = _mmc_detect_card_removed(host); 1956 1920 1957 - mmc_put_card(host->card); 1921 + mmc_put_card(host->card, NULL); 1958 1922 1959 1923 if (err) { 1960 1924 mmc_remove(host);
+2 -4
drivers/mmc/core/mmc_ops.c
··· 977 977 from_exception) 978 978 return; 979 979 980 - mmc_claim_host(card->host); 981 980 if (card->ext_csd.raw_bkops_status >= EXT_CSD_BKOPS_LEVEL_2) { 982 981 timeout = MMC_OPS_TIMEOUT_MS; 983 982 use_busy_signal = true; ··· 994 995 pr_warn("%s: Error %d starting bkops\n", 995 996 mmc_hostname(card->host), err); 996 997 mmc_retune_release(card->host); 997 - goto out; 998 + return; 998 999 } 999 1000 1000 1001 /* ··· 1006 1007 mmc_card_set_doing_bkops(card); 1007 1008 else 1008 1009 mmc_retune_release(card->host); 1009 - out: 1010 - mmc_release_host(card->host); 1011 1010 } 1011 + EXPORT_SYMBOL(mmc_start_bkops); 1012 1012 1013 1013 /* 1014 1014 * Flush the cache to the non-volatile storage.
+25 -16
drivers/mmc/core/queue.c
··· 30 30 { 31 31 struct mmc_queue *mq = q->queuedata; 32 32 33 - if (mq && (mmc_card_removed(mq->card) || mmc_access_rpmb(mq))) 33 + if (mq && mmc_card_removed(mq->card)) 34 34 return BLKPREP_KILL; 35 35 36 36 req->rq_flags |= RQF_DONTPREP; ··· 177 177 mq_rq->sg = NULL; 178 178 } 179 179 180 + static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card) 181 + { 182 + struct mmc_host *host = card->host; 183 + u64 limit = BLK_BOUNCE_HIGH; 184 + 185 + if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask) 186 + limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT; 187 + 188 + queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue); 189 + queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue); 190 + if (mmc_can_erase(card)) 191 + mmc_queue_setup_discard(mq->queue, card); 192 + 193 + blk_queue_bounce_limit(mq->queue, limit); 194 + blk_queue_max_hw_sectors(mq->queue, 195 + min(host->max_blk_count, host->max_req_size / 512)); 196 + blk_queue_max_segments(mq->queue, host->max_segs); 197 + blk_queue_max_segment_size(mq->queue, host->max_seg_size); 198 + 199 + /* Initialize thread_sem even if it is not used */ 200 + sema_init(&mq->thread_sem, 1); 201 + } 202 + 180 203 /** 181 204 * mmc_init_queue - initialise a queue structure. 
182 205 * @mq: mmc queue ··· 213 190 spinlock_t *lock, const char *subname) 214 191 { 215 192 struct mmc_host *host = card->host; 216 - u64 limit = BLK_BOUNCE_HIGH; 217 193 int ret = -ENOMEM; 218 - 219 - if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask) 220 - limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT; 221 194 222 195 mq->card = card; 223 196 mq->queue = blk_alloc_queue(GFP_KERNEL); ··· 233 214 } 234 215 235 216 blk_queue_prep_rq(mq->queue, mmc_prep_request); 236 - queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue); 237 - queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue); 238 - if (mmc_can_erase(card)) 239 - mmc_queue_setup_discard(mq->queue, card); 240 217 241 - blk_queue_bounce_limit(mq->queue, limit); 242 - blk_queue_max_hw_sectors(mq->queue, 243 - min(host->max_blk_count, host->max_req_size / 512)); 244 - blk_queue_max_segments(mq->queue, host->max_segs); 245 - blk_queue_max_segment_size(mq->queue, host->max_seg_size); 246 - 247 - sema_init(&mq->thread_sem, 1); 218 + mmc_setup_queue(mq, card); 248 219 249 220 mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s", 250 221 host->index, subname ? subname : "");
+2 -2
drivers/mmc/core/queue.h
··· 36 36 /** 37 37 * enum mmc_drv_op - enumerates the operations in the mmc_queue_req 38 38 * @MMC_DRV_OP_IOCTL: ioctl operation 39 + * @MMC_DRV_OP_IOCTL_RPMB: RPMB-oriented ioctl operation 39 40 * @MMC_DRV_OP_BOOT_WP: write protect boot partitions 40 41 * @MMC_DRV_OP_GET_CARD_STATUS: get card status 41 42 * @MMC_DRV_OP_GET_EXT_CSD: get the EXT CSD from an eMMC card 42 43 */ 43 44 enum mmc_drv_op { 44 45 MMC_DRV_OP_IOCTL, 46 + MMC_DRV_OP_IOCTL_RPMB, 45 47 MMC_DRV_OP_BOOT_WP, 46 48 MMC_DRV_OP_GET_CARD_STATUS, 47 49 MMC_DRV_OP_GET_EXT_CSD, ··· 83 81 extern void mmc_queue_resume(struct mmc_queue *); 84 82 extern unsigned int mmc_queue_map_sg(struct mmc_queue *, 85 83 struct mmc_queue_req *); 86 - 87 - extern int mmc_access_rpmb(struct mmc_queue *); 88 84 89 85 #endif
+47 -4
drivers/mmc/core/sd.c
··· 908 908 return max_dtr; 909 909 } 910 910 911 + static bool mmc_sd_card_using_v18(struct mmc_card *card) 912 + { 913 + /* 914 + * According to the SD spec., the Bus Speed Mode (function group 1) bits 915 + * 2 to 4 are zero if the card is initialized at 3.3V signal level. Thus 916 + * they can be used to determine if the card has already switched to 917 + * 1.8V signaling. 918 + */ 919 + return card->sw_caps.sd3_bus_mode & 920 + (SD_MODE_UHS_SDR50 | SD_MODE_UHS_SDR104 | SD_MODE_UHS_DDR50); 921 + } 922 + 911 923 /* 912 924 * Handle the detection and initialisation of a card. 913 925 * ··· 933 921 int err; 934 922 u32 cid[4]; 935 923 u32 rocr = 0; 924 + bool v18_fixup_failed = false; 936 925 937 926 WARN_ON(!host->claimed); 938 - 927 + retry: 939 928 err = mmc_sd_get_cid(host, ocr, cid, &rocr); 940 929 if (err) 941 930 return err; ··· 1002 989 if (err) 1003 990 goto free_card; 1004 991 992 + /* 993 + * If the card has not been power cycled, it may still be using 1.8V 994 + * signaling. Detect that situation and try to initialize a UHS-I (1.8V) 995 + * transfer mode. 996 + */ 997 + if (!v18_fixup_failed && !mmc_host_is_spi(host) && mmc_host_uhs(host) && 998 + mmc_sd_card_using_v18(card) && 999 + host->ios.signal_voltage != MMC_SIGNAL_VOLTAGE_180) { 1000 + /* 1001 + * Re-read switch information in case it has changed since 1002 + * oldcard was initialized. 
1003 + */ 1004 + if (oldcard) { 1005 + err = mmc_read_switch(card); 1006 + if (err) 1007 + goto free_card; 1008 + } 1009 + if (mmc_sd_card_using_v18(card)) { 1010 + if (mmc_host_set_uhs_voltage(host) || 1011 + mmc_sd_init_uhs_card(card)) { 1012 + v18_fixup_failed = true; 1013 + mmc_power_cycle(host, ocr); 1014 + if (!oldcard) 1015 + mmc_remove_card(card); 1016 + goto retry; 1017 + } 1018 + goto done; 1019 + } 1020 + } 1021 + 1005 1022 /* Initialization sequence for UHS-I cards */ 1006 1023 if (rocr & SD_ROCR_S18A) { 1007 1024 err = mmc_sd_init_uhs_card(card); ··· 1064 1021 mmc_set_bus_width(host, MMC_BUS_WIDTH_4); 1065 1022 } 1066 1023 } 1067 - 1024 + done: 1068 1025 host->card = card; 1069 1026 return 0; 1070 1027 ··· 1099 1056 { 1100 1057 int err; 1101 1058 1102 - mmc_get_card(host->card); 1059 + mmc_get_card(host->card, NULL); 1103 1060 1104 1061 /* 1105 1062 * Just check if our card has been removed. 1106 1063 */ 1107 1064 err = _mmc_detect_card_removed(host); 1108 1065 1109 - mmc_put_card(host->card); 1066 + mmc_put_card(host->card, NULL); 1110 1067 1111 1068 if (err) { 1112 1069 mmc_sd_remove(host);
+2 -1
drivers/mmc/core/sdio_irq.c
··· 155 155 * holding of the host lock does not cover too much work 156 156 * that doesn't require that lock to be held. 157 157 */ 158 - ret = __mmc_claim_host(host, &host->sdio_irq_thread_abort); 158 + ret = __mmc_claim_host(host, NULL, 159 + &host->sdio_irq_thread_abort); 159 160 if (ret) 160 161 break; 161 162 ret = process_sdio_pending_irqs(host);
+27 -1
drivers/mmc/host/Kconfig
··· 352 352 353 353 If you have a controller with this interface, say Y here. 354 354 355 + config MMC_MESON_MX_SDIO 356 + tristate "Amlogic Meson6/Meson8/Meson8b SD/MMC Host Controller support" 357 + depends on ARCH_MESON || COMPILE_TEST 358 + depends on COMMON_CLK 359 + depends on HAS_DMA 360 + depends on OF 361 + help 362 + This selects support for the SD/MMC Host Controller on 363 + Amlogic Meson6, Meson8 and Meson8b SoCs. 364 + 365 + If you have a controller with this interface, say Y or M here. 366 + If unsure, say N. 367 + 355 368 config MMC_MOXART 356 369 tristate "MOXART SD/MMC Host Controller support" 357 370 depends on ARCH_MOXART && MMC ··· 442 429 tristate "Qualcomm SDHCI Controller Support" 443 430 depends on ARCH_QCOM || (ARM && COMPILE_TEST) 444 431 depends on MMC_SDHCI_PLTFM 432 + select MMC_SDHCI_IO_ACCESSORS 445 433 help 446 434 This selects the Secure Digital Host Controller Interface (SDHCI) 447 435 support present in Qualcomm SOCs. The controller supports ··· 677 663 config MMC_CAVIUM_THUNDERX 678 664 tristate "Cavium ThunderX SD/MMC Card Interface support" 679 665 depends on PCI && 64BIT && (ARM64 || COMPILE_TEST) 680 - depends on GPIOLIB 666 + depends on GPIO_THUNDERX 681 667 depends on OF_ADDRESS 682 668 help 683 669 This selects Cavium ThunderX SD/MMC Card Interface. ··· 912 898 help 913 899 This selects Marvell Xenon eMMC/SD/SDIO SDHCI. 914 900 If you have a controller with this interface, say Y or M here. 901 + If unsure, say N. 902 + 903 + config MMC_SDHCI_OMAP 904 + tristate "TI SDHCI Controller Support" 905 + depends on MMC_SDHCI_PLTFM && OF 906 + help 907 + This selects the Secure Digital Host Controller Interface (SDHCI) 908 + support present in TI's DRA7 SOCs. The controller supports 909 + SD/MMC/SDIO devices. 910 + 911 + If you have a controller with this interface, say Y or M here. 912 + 915 913 If unsure, say N.
+2
drivers/mmc/host/Makefile
··· 65 65 obj-$(CONFIG_MMC_USHC) += ushc.o 66 66 obj-$(CONFIG_MMC_WMT) += wmt-sdmmc.o 67 67 obj-$(CONFIG_MMC_MESON_GX) += meson-gx-mmc.o 68 + obj-$(CONFIG_MMC_MESON_MX_SDIO) += meson-mx-sdio.o 68 69 obj-$(CONFIG_MMC_MOXART) += moxart-mmc.o 69 70 obj-$(CONFIG_MMC_SUNXI) += sunxi-mmc.o 70 71 obj-$(CONFIG_MMC_USDHI6ROL0) += usdhi6rol0.o ··· 91 90 obj-$(CONFIG_MMC_SDHCI_ST) += sdhci-st.o 92 91 obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32) += sdhci-pic32.o 93 92 obj-$(CONFIG_MMC_SDHCI_BRCMSTB) += sdhci-brcmstb.o 93 + obj-$(CONFIG_MMC_SDHCI_OMAP) += sdhci-omap.o 94 94 95 95 ifeq ($(CONFIG_CB710_DEBUG),y) 96 96 CFLAGS-cb710-mmc += -DDEBUG
+6 -7
drivers/mmc/host/atmel-mci.c
··· 732 732 return 0; 733 733 } 734 734 735 - static void atmci_timeout_timer(unsigned long data) 735 + static void atmci_timeout_timer(struct timer_list *t) 736 736 { 737 737 struct atmel_mci *host; 738 738 739 - host = (struct atmel_mci *)data; 739 + host = from_timer(host, t, timer); 740 740 741 741 dev_dbg(&host->pdev->dev, "software timeout\n"); 742 742 ··· 1661 1661 cmd->error = 0; 1662 1662 } 1663 1663 1664 - static void atmci_detect_change(unsigned long data) 1664 + static void atmci_detect_change(struct timer_list *t) 1665 1665 { 1666 - struct atmel_mci_slot *slot = (struct atmel_mci_slot *)data; 1666 + struct atmel_mci_slot *slot = from_timer(slot, t, detect_timer); 1667 1667 bool present; 1668 1668 bool present_old; 1669 1669 ··· 2349 2349 if (gpio_is_valid(slot->detect_pin)) { 2350 2350 int ret; 2351 2351 2352 - setup_timer(&slot->detect_timer, atmci_detect_change, 2353 - (unsigned long)slot); 2352 + timer_setup(&slot->detect_timer, atmci_detect_change, 0); 2354 2353 2355 2354 ret = request_irq(gpio_to_irq(slot->detect_pin), 2356 2355 atmci_detect_interrupt, ··· 2562 2563 2563 2564 platform_set_drvdata(pdev, host); 2564 2565 2565 - setup_timer(&host->timer, atmci_timeout_timer, (unsigned long)host); 2566 + timer_setup(&host->timer, atmci_timeout_timer, 0); 2566 2567 2567 2568 pm_runtime_get_noresume(&pdev->dev); 2568 2569 pm_runtime_set_active(&pdev->dev);
+1 -1
drivers/mmc/host/cavium.c
··· 967 967 } 968 968 969 969 ret = mmc_regulator_get_supply(mmc); 970 - if (ret == -EPROBE_DEFER) 970 + if (ret) 971 971 return ret; 972 972 /* 973 973 * Legacy Octeon firmware has no regulator entry, fall-back to
+1 -1
drivers/mmc/host/dw_mmc-k3.c
··· 75 75 u32 smpl_phase_min; 76 76 }; 77 77 78 - struct hs_timing hs_timing_cfg[TIMING_MODE][TIMING_CFG_NUM] = { 78 + static struct hs_timing hs_timing_cfg[TIMING_MODE][TIMING_CFG_NUM] = { 79 79 { /* reserved */ }, 80 80 { /* SD */ 81 81 {7, 0, 15, 15,}, /* 0: LEGACY 400k */
+64 -20
drivers/mmc/host/dw_mmc.c
··· 817 817 struct dma_slave_config cfg; 818 818 struct dma_async_tx_descriptor *desc = NULL; 819 819 struct scatterlist *sgl = host->data->sg; 820 - const u32 mszs[] = {1, 4, 8, 16, 32, 64, 128, 256}; 820 + static const u32 mszs[] = {1, 4, 8, 16, 32, 64, 128, 256}; 821 821 u32 sg_elems = host->data->sg_len; 822 822 u32 fifoth_val; 823 823 u32 fifo_offset = host->fifo_reg - host->regs; ··· 1024 1024 static void dw_mci_adjust_fifoth(struct dw_mci *host, struct mmc_data *data) 1025 1025 { 1026 1026 unsigned int blksz = data->blksz; 1027 - const u32 mszs[] = {1, 4, 8, 16, 32, 64, 128, 256}; 1027 + static const u32 mszs[] = {1, 4, 8, 16, 32, 64, 128, 256}; 1028 1028 u32 fifo_width = 1 << host->data_shift; 1029 1029 u32 blksz_depth = blksz / fifo_width, fifoth_val; 1030 1030 u32 msize = 0, rx_wmark = 1, tx_wmark, tx_wmark_invers; ··· 1938 1938 unsigned int drto_clks; 1939 1939 unsigned int drto_div; 1940 1940 unsigned int drto_ms; 1941 + unsigned long irqflags; 1941 1942 1942 1943 drto_clks = mci_readl(host, TMOUT) >> 8; 1943 1944 drto_div = (mci_readl(host, CLKDIV) & 0xff) * 2; ··· 1950 1949 /* add a bit spare time */ 1951 1950 drto_ms += 10; 1952 1951 1953 - mod_timer(&host->dto_timer, jiffies + msecs_to_jiffies(drto_ms)); 1952 + spin_lock_irqsave(&host->irq_lock, irqflags); 1953 + if (!test_bit(EVENT_DATA_COMPLETE, &host->pending_events)) 1954 + mod_timer(&host->dto_timer, 1955 + jiffies + msecs_to_jiffies(drto_ms)); 1956 + spin_unlock_irqrestore(&host->irq_lock, irqflags); 1954 1957 } 1955 1958 1956 1959 static bool dw_mci_clear_pending_cmd_complete(struct dw_mci *host) ··· 1971 1966 */ 1972 1967 WARN_ON(del_timer_sync(&host->cto_timer)); 1973 1968 clear_bit(EVENT_CMD_COMPLETE, &host->pending_events); 1969 + 1970 + return true; 1971 + } 1972 + 1973 + static bool dw_mci_clear_pending_data_complete(struct dw_mci *host) 1974 + { 1975 + if (!test_bit(EVENT_DATA_COMPLETE, &host->pending_events)) 1976 + return false; 1977 + 1978 + /* Extra paranoia just like 
dw_mci_clear_pending_cmd_complete() */ 1979 + WARN_ON(del_timer_sync(&host->dto_timer)); 1980 + clear_bit(EVENT_DATA_COMPLETE, &host->pending_events); 1974 1981 1975 1982 return true; 1976 1983 } ··· 2128 2111 /* fall through */ 2129 2112 2130 2113 case STATE_DATA_BUSY: 2131 - if (!test_and_clear_bit(EVENT_DATA_COMPLETE, 2132 - &host->pending_events)) { 2114 + if (!dw_mci_clear_pending_data_complete(host)) { 2133 2115 /* 2134 2116 * If data error interrupt comes but data over 2135 2117 * interrupt doesn't come within the given time. ··· 2698 2682 } 2699 2683 2700 2684 if (pending & SDMMC_INT_DATA_OVER) { 2685 + spin_lock_irqsave(&host->irq_lock, irqflags); 2686 + 2701 2687 del_timer(&host->dto_timer); 2702 2688 2703 2689 mci_writel(host, RINTSTS, SDMMC_INT_DATA_OVER); ··· 2712 2694 } 2713 2695 set_bit(EVENT_DATA_COMPLETE, &host->pending_events); 2714 2696 tasklet_schedule(&host->tasklet); 2697 + 2698 + spin_unlock_irqrestore(&host->irq_lock, irqflags); 2715 2699 } 2716 2700 2717 2701 if (pending & SDMMC_INT_RXDR) { ··· 2811 2791 2812 2792 /*if there are external regulators, get them*/ 2813 2793 ret = mmc_regulator_get_supply(mmc); 2814 - if (ret == -EPROBE_DEFER) 2794 + if (ret) 2815 2795 goto err_host_allocated; 2816 2796 2817 2797 if (!mmc->ocr_avail) ··· 2991 2971 host->use_dma = TRANS_MODE_PIO; 2992 2972 } 2993 2973 2994 - static void dw_mci_cmd11_timer(unsigned long arg) 2974 + static void dw_mci_cmd11_timer(struct timer_list *t) 2995 2975 { 2996 - struct dw_mci *host = (struct dw_mci *)arg; 2976 + struct dw_mci *host = from_timer(host, t, cmd11_timer); 2997 2977 2998 2978 if (host->state != STATE_SENDING_CMD11) { 2999 2979 dev_warn(host->dev, "Unexpected CMD11 timeout\n"); ··· 3005 2985 tasklet_schedule(&host->tasklet); 3006 2986 } 3007 2987 3008 - static void dw_mci_cto_timer(unsigned long arg) 2988 + static void dw_mci_cto_timer(struct timer_list *t) 3009 2989 { 3010 - struct dw_mci *host = (struct dw_mci *)arg; 2990 + struct dw_mci *host = from_timer(host, 
t, cto_timer); 3011 2991 unsigned long irqflags; 3012 2992 u32 pending; 3013 2993 ··· 3060 3040 spin_unlock_irqrestore(&host->irq_lock, irqflags); 3061 3041 } 3062 3042 3063 - static void dw_mci_dto_timer(unsigned long arg) 3043 + static void dw_mci_dto_timer(struct timer_list *t) 3064 3044 { 3065 - struct dw_mci *host = (struct dw_mci *)arg; 3045 + struct dw_mci *host = from_timer(host, t, dto_timer); 3046 + unsigned long irqflags; 3047 + u32 pending; 3066 3048 3049 + spin_lock_irqsave(&host->irq_lock, irqflags); 3050 + 3051 + /* 3052 + * The DTO timer is much longer than the CTO timer, so it's even less 3053 + * likely that we'll these cases, but it pays to be paranoid. 3054 + */ 3055 + pending = mci_readl(host, MINTSTS); /* read-only mask reg */ 3056 + if (pending & SDMMC_INT_DATA_OVER) { 3057 + /* The interrupt should fire; no need to act but we can warn */ 3058 + dev_warn(host->dev, "Unexpected data interrupt latency\n"); 3059 + goto exit; 3060 + } 3061 + if (test_bit(EVENT_DATA_COMPLETE, &host->pending_events)) { 3062 + /* Presumably interrupt handler couldn't delete the timer */ 3063 + dev_warn(host->dev, "DTO timeout when already completed\n"); 3064 + goto exit; 3065 + } 3066 + 3067 + /* 3068 + * Continued paranoia to make sure we're in the state we expect. 3069 + * This paranoia isn't really justified but it seems good to be safe. 
3070 + */ 3067 3071 switch (host->state) { 3068 3072 case STATE_SENDING_DATA: 3069 3073 case STATE_DATA_BUSY: ··· 3102 3058 tasklet_schedule(&host->tasklet); 3103 3059 break; 3104 3060 default: 3061 + dev_warn(host->dev, "Unexpected data timeout, state %d\n", 3062 + host->state); 3105 3063 break; 3106 3064 } 3065 + 3066 + exit: 3067 + spin_unlock_irqrestore(&host->irq_lock, irqflags); 3107 3068 } 3108 3069 3109 3070 #ifdef CONFIG_OF ··· 3257 3208 } 3258 3209 } 3259 3210 3260 - setup_timer(&host->cmd11_timer, 3261 - dw_mci_cmd11_timer, (unsigned long)host); 3262 - 3263 - setup_timer(&host->cto_timer, 3264 - dw_mci_cto_timer, (unsigned long)host); 3265 - 3266 - setup_timer(&host->dto_timer, 3267 - dw_mci_dto_timer, (unsigned long)host); 3211 + timer_setup(&host->cmd11_timer, dw_mci_cmd11_timer, 0); 3212 + timer_setup(&host->cto_timer, dw_mci_cto_timer, 0); 3213 + timer_setup(&host->dto_timer, dw_mci_dto_timer, 0); 3268 3214 3269 3215 spin_lock_init(&host->lock); 3270 3216 spin_lock_init(&host->irq_lock);
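The dw_mmc hunk above converts the driver from setup_timer() to timer_setup(): the callback now receives the struct timer_list pointer itself, and from_timer() (a thin container_of() wrapper) recovers the structure embedding the timer. A minimal userspace sketch of that idiom — dw_mci_mock, run_demo and the simulated "firing" are illustrative stand-ins, not driver code:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace re-creation of the kernel's container_of() helper. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct timer_list {
	void (*function)(struct timer_list *t);
};

/* Same shape as the kernel's from_timer(): map the timer_list pointer
 * passed to the callback back to the structure that embeds it. */
#define from_timer(var, timer_ptr, field) \
	container_of(timer_ptr, __typeof__(*var), field)

struct dw_mci_mock {
	int state;
	struct timer_list dto_timer;
};

static int seen_state;

static void dto_timer_cb(struct timer_list *t)
{
	/* Recover the mock host from the embedded timer, as the converted
	 * dw_mci_dto_timer() does with its real dw_mci host. */
	struct dw_mci_mock *host = from_timer(host, t, dto_timer);

	seen_state = host->state;
}

int run_demo(void)
{
	struct dw_mci_mock host = { .state = 42 };

	host.dto_timer.function = dto_timer_cb;
	host.dto_timer.function(&host.dto_timer); /* simulate the timer firing */
	return seen_state;
}
```

This is why the converted callbacks no longer need the `(unsigned long)host` cast: the back-pointer is computed from the timer's address instead of smuggled through an integer argument.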
+2 -1
drivers/mmc/host/dw_mmc.h
···
  * @stop_abort:	The command currently prepared for stoping transfer.
  * @prev_blksz:	The former transfer blksz record.
  * @timing:		Record of current ios timing.
- * @use_dma:		Whether DMA channel is initialized or not.
+ * @use_dma:		Which DMA channel is in use for the current transfer,
+ *			zero denotes PIO mode.
  * @using_dma:		Whether DMA is in use for the current transfer.
  * @dma_64bit_address:	Whether DMA supports 64-bit address mode or not.
  * @sg_dma:		Bus address of DMA buffer.
+3 -4
drivers/mmc/host/jz4740_mmc.c
···
 	return true;
 }
 
-static void jz4740_mmc_timeout(unsigned long data)
+static void jz4740_mmc_timeout(struct timer_list *t)
 {
-	struct jz4740_mmc_host *host = (struct jz4740_mmc_host *)data;
+	struct jz4740_mmc_host *host = from_timer(host, t, timeout_timer);
 
 	if (!test_and_clear_bit(0, &host->waiting))
 		return;
···
 	jz4740_mmc_reset(host);
 	jz4740_mmc_clock_disable(host);
-	setup_timer(&host->timeout_timer, jz4740_mmc_timeout,
-		    (unsigned long)host);
+	timer_setup(&host->timeout_timer, jz4740_mmc_timeout, 0);
 
 	host->use_dma = true;
 	if (host->use_dma && jz4740_mmc_acquire_dma_channels(host) != 0)
+1 -1
drivers/mmc/host/meson-gx-mmc.c
···
 	/* Get regulators and the supported OCR mask */
 	host->vqmmc_enabled = false;
 	ret = mmc_regulator_get_supply(mmc);
-	if (ret == -EPROBE_DEFER)
+	if (ret)
 		goto free_host;
 
 	ret = mmc_of_parse(mmc);
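The one-line change above (repeated in dw_mmc and mmci below) implements the "catch all errors when getting regulators" cleanup from the pull summary: previously only -EPROBE_DEFER aborted the probe, so any other regulator failure was silently ignored. A hedged userspace sketch of the two policies — probe_check_old/probe_check_new are hypothetical helper names, and the errno values are illustrative stand-ins for the kernel's:

```c
#include <assert.h>

/* Illustrative error values; EPROBE_DEFER mirrors the kernel's private
 * errno 517, EINVAL the usual 22. */
#define EPROBE_DEFER	517
#define EINVAL		22

/* Old policy: the probe continued unless the regulator lookup asked to
 * be deferred, so real failures (e.g. -EINVAL) slipped through. */
static int probe_check_old(int ret)
{
	if (ret == -EPROBE_DEFER)
		return ret;	/* defer and retry the probe later */
	return 0;		/* any other error: ignored */
}

/* New policy from this series: any non-zero return aborts the probe. */
static int probe_check_new(int ret)
{
	if (ret)
		return ret;
	return 0;
}
```

With the new policy the distinction between "defer" and "fail" is made inside the core helper rather than at every call site, which is why each driver's check collapses to `if (ret)`.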
+768
drivers/mmc/host/meson-mx-sdio.c
··· 1 + /* 2 + * meson-mx-sdio.c - Meson6, Meson8 and Meson8b SDIO/MMC Host Controller 3 + * 4 + * Copyright (C) 2015 Endless Mobile, Inc. 5 + * Author: Carlo Caione <carlo@endlessm.com> 6 + * Copyright (C) 2017 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License as published by 10 + * the Free Software Foundation; either version 2 of the License, or (at 11 + * your option) any later version. 12 + */ 13 + 14 + #include <linux/bitfield.h> 15 + #include <linux/clk.h> 16 + #include <linux/clk-provider.h> 17 + #include <linux/delay.h> 18 + #include <linux/device.h> 19 + #include <linux/dma-mapping.h> 20 + #include <linux/module.h> 21 + #include <linux/interrupt.h> 22 + #include <linux/ioport.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/of_platform.h> 25 + #include <linux/timer.h> 26 + #include <linux/types.h> 27 + 28 + #include <linux/mmc/host.h> 29 + #include <linux/mmc/mmc.h> 30 + #include <linux/mmc/sdio.h> 31 + #include <linux/mmc/slot-gpio.h> 32 + 33 + #define MESON_MX_SDIO_ARGU 0x00 34 + 35 + #define MESON_MX_SDIO_SEND 0x04 36 + #define MESON_MX_SDIO_SEND_COMMAND_INDEX_MASK GENMASK(7, 0) 37 + #define MESON_MX_SDIO_SEND_CMD_RESP_BITS_MASK GENMASK(15, 8) 38 + #define MESON_MX_SDIO_SEND_RESP_WITHOUT_CRC7 BIT(16) 39 + #define MESON_MX_SDIO_SEND_RESP_HAS_DATA BIT(17) 40 + #define MESON_MX_SDIO_SEND_RESP_CRC7_FROM_8 BIT(18) 41 + #define MESON_MX_SDIO_SEND_CHECK_DAT0_BUSY BIT(19) 42 + #define MESON_MX_SDIO_SEND_DATA BIT(20) 43 + #define MESON_MX_SDIO_SEND_USE_INT_WINDOW BIT(21) 44 + #define MESON_MX_SDIO_SEND_REPEAT_PACKAGE_TIMES_MASK GENMASK(31, 24) 45 + 46 + #define MESON_MX_SDIO_CONF 0x08 47 + #define MESON_MX_SDIO_CONF_CMD_CLK_DIV_SHIFT 0 48 + #define MESON_MX_SDIO_CONF_CMD_CLK_DIV_WIDTH 10 49 + #define MESON_MX_SDIO_CONF_CMD_DISABLE_CRC BIT(10) 50 + #define MESON_MX_SDIO_CONF_CMD_OUT_AT_POSITIVE_EDGE 
BIT(11) 51 + #define MESON_MX_SDIO_CONF_CMD_ARGUMENT_BITS_MASK GENMASK(17, 12) 52 + #define MESON_MX_SDIO_CONF_RESP_LATCH_AT_NEGATIVE_EDGE BIT(18) 53 + #define MESON_MX_SDIO_CONF_DATA_LATCH_AT_NEGATIVE_EDGE BIT(19) 54 + #define MESON_MX_SDIO_CONF_BUS_WIDTH BIT(20) 55 + #define MESON_MX_SDIO_CONF_M_ENDIAN_MASK GENMASK(22, 21) 56 + #define MESON_MX_SDIO_CONF_WRITE_NWR_MASK GENMASK(28, 23) 57 + #define MESON_MX_SDIO_CONF_WRITE_CRC_OK_STATUS_MASK GENMASK(31, 29) 58 + 59 + #define MESON_MX_SDIO_IRQS 0x0c 60 + #define MESON_MX_SDIO_IRQS_STATUS_STATE_MACHINE_MASK GENMASK(3, 0) 61 + #define MESON_MX_SDIO_IRQS_CMD_BUSY BIT(4) 62 + #define MESON_MX_SDIO_IRQS_RESP_CRC7_OK BIT(5) 63 + #define MESON_MX_SDIO_IRQS_DATA_READ_CRC16_OK BIT(6) 64 + #define MESON_MX_SDIO_IRQS_DATA_WRITE_CRC16_OK BIT(7) 65 + #define MESON_MX_SDIO_IRQS_IF_INT BIT(8) 66 + #define MESON_MX_SDIO_IRQS_CMD_INT BIT(9) 67 + #define MESON_MX_SDIO_IRQS_STATUS_INFO_MASK GENMASK(15, 12) 68 + #define MESON_MX_SDIO_IRQS_TIMING_OUT_INT BIT(16) 69 + #define MESON_MX_SDIO_IRQS_AMRISC_TIMING_OUT_INT_EN BIT(17) 70 + #define MESON_MX_SDIO_IRQS_ARC_TIMING_OUT_INT_EN BIT(18) 71 + #define MESON_MX_SDIO_IRQS_TIMING_OUT_COUNT_MASK GENMASK(31, 19) 72 + 73 + #define MESON_MX_SDIO_IRQC 0x10 74 + #define MESON_MX_SDIO_IRQC_ARC_IF_INT_EN BIT(3) 75 + #define MESON_MX_SDIO_IRQC_ARC_CMD_INT_EN BIT(4) 76 + #define MESON_MX_SDIO_IRQC_IF_CONFIG_MASK GENMASK(7, 6) 77 + #define MESON_MX_SDIO_IRQC_FORCE_DATA_CLK BIT(8) 78 + #define MESON_MX_SDIO_IRQC_FORCE_DATA_CMD BIT(9) 79 + #define MESON_MX_SDIO_IRQC_FORCE_DATA_DAT_MASK GENMASK(10, 13) 80 + #define MESON_MX_SDIO_IRQC_SOFT_RESET BIT(15) 81 + #define MESON_MX_SDIO_IRQC_FORCE_HALT BIT(30) 82 + #define MESON_MX_SDIO_IRQC_HALT_HOLE BIT(31) 83 + 84 + #define MESON_MX_SDIO_MULT 0x14 85 + #define MESON_MX_SDIO_MULT_PORT_SEL_MASK GENMASK(1, 0) 86 + #define MESON_MX_SDIO_MULT_MEMORY_STICK_ENABLE BIT(2) 87 + #define MESON_MX_SDIO_MULT_MEMORY_STICK_SCLK_ALWAYS BIT(3) 88 + #define 
MESON_MX_SDIO_MULT_STREAM_ENABLE BIT(4) 89 + #define MESON_MX_SDIO_MULT_STREAM_8BITS_MODE BIT(5) 90 + #define MESON_MX_SDIO_MULT_WR_RD_OUT_INDEX BIT(8) 91 + #define MESON_MX_SDIO_MULT_DAT0_DAT1_SWAPPED BIT(10) 92 + #define MESON_MX_SDIO_MULT_DAT1_DAT0_SWAPPED BIT(11) 93 + #define MESON_MX_SDIO_MULT_RESP_READ_INDEX_MASK GENMASK(15, 12) 94 + 95 + #define MESON_MX_SDIO_ADDR 0x18 96 + 97 + #define MESON_MX_SDIO_EXT 0x1c 98 + #define MESON_MX_SDIO_EXT_DATA_RW_NUMBER_MASK GENMASK(29, 16) 99 + 100 + #define MESON_MX_SDIO_BOUNCE_REQ_SIZE (128 * 1024) 101 + #define MESON_MX_SDIO_RESPONSE_CRC16_BITS (16 - 1) 102 + #define MESON_MX_SDIO_MAX_SLOTS 3 103 + 104 + struct meson_mx_mmc_host { 105 + struct device *controller_dev; 106 + 107 + struct clk *parent_clk; 108 + struct clk *core_clk; 109 + struct clk_divider cfg_div; 110 + struct clk *cfg_div_clk; 111 + struct clk_fixed_factor fixed_factor; 112 + struct clk *fixed_factor_clk; 113 + 114 + void __iomem *base; 115 + int irq; 116 + spinlock_t irq_lock; 117 + 118 + struct timer_list cmd_timeout; 119 + 120 + unsigned int slot_id; 121 + struct mmc_host *mmc; 122 + 123 + struct mmc_request *mrq; 124 + struct mmc_command *cmd; 125 + int error; 126 + }; 127 + 128 + static void meson_mx_mmc_mask_bits(struct mmc_host *mmc, char reg, u32 mask, 129 + u32 val) 130 + { 131 + struct meson_mx_mmc_host *host = mmc_priv(mmc); 132 + u32 regval; 133 + 134 + regval = readl(host->base + reg); 135 + regval &= ~mask; 136 + regval |= (val & mask); 137 + 138 + writel(regval, host->base + reg); 139 + } 140 + 141 + static void meson_mx_mmc_soft_reset(struct meson_mx_mmc_host *host) 142 + { 143 + writel(MESON_MX_SDIO_IRQC_SOFT_RESET, host->base + MESON_MX_SDIO_IRQC); 144 + udelay(2); 145 + } 146 + 147 + static struct mmc_command *meson_mx_mmc_get_next_cmd(struct mmc_command *cmd) 148 + { 149 + if (cmd->opcode == MMC_SET_BLOCK_COUNT && !cmd->error) 150 + return cmd->mrq->cmd; 151 + else if (mmc_op_multi(cmd->opcode) && 152 + (!cmd->mrq->sbc || cmd->error 
|| cmd->data->error)) 153 + return cmd->mrq->stop; 154 + else 155 + return NULL; 156 + } 157 + 158 + static void meson_mx_mmc_start_cmd(struct mmc_host *mmc, 159 + struct mmc_command *cmd) 160 + { 161 + struct meson_mx_mmc_host *host = mmc_priv(mmc); 162 + unsigned int pack_size; 163 + unsigned long irqflags, timeout; 164 + u32 mult, send = 0, ext = 0; 165 + 166 + host->cmd = cmd; 167 + 168 + if (cmd->busy_timeout) 169 + timeout = msecs_to_jiffies(cmd->busy_timeout); 170 + else 171 + timeout = msecs_to_jiffies(1000); 172 + 173 + switch (mmc_resp_type(cmd)) { 174 + case MMC_RSP_R1: 175 + case MMC_RSP_R1B: 176 + case MMC_RSP_R3: 177 + /* 7 (CMD) + 32 (response) + 7 (CRC) -1 */ 178 + send |= FIELD_PREP(MESON_MX_SDIO_SEND_CMD_RESP_BITS_MASK, 45); 179 + break; 180 + case MMC_RSP_R2: 181 + /* 7 (CMD) + 120 (response) + 7 (CRC) -1 */ 182 + send |= FIELD_PREP(MESON_MX_SDIO_SEND_CMD_RESP_BITS_MASK, 133); 183 + send |= MESON_MX_SDIO_SEND_RESP_CRC7_FROM_8; 184 + break; 185 + default: 186 + break; 187 + } 188 + 189 + if (!(cmd->flags & MMC_RSP_CRC)) 190 + send |= MESON_MX_SDIO_SEND_RESP_WITHOUT_CRC7; 191 + 192 + if (cmd->flags & MMC_RSP_BUSY) 193 + send |= MESON_MX_SDIO_SEND_CHECK_DAT0_BUSY; 194 + 195 + if (cmd->data) { 196 + send |= FIELD_PREP(MESON_MX_SDIO_SEND_REPEAT_PACKAGE_TIMES_MASK, 197 + (cmd->data->blocks - 1)); 198 + 199 + pack_size = cmd->data->blksz * BITS_PER_BYTE; 200 + if (mmc->ios.bus_width == MMC_BUS_WIDTH_4) 201 + pack_size += MESON_MX_SDIO_RESPONSE_CRC16_BITS * 4; 202 + else 203 + pack_size += MESON_MX_SDIO_RESPONSE_CRC16_BITS * 1; 204 + 205 + ext |= FIELD_PREP(MESON_MX_SDIO_EXT_DATA_RW_NUMBER_MASK, 206 + pack_size); 207 + 208 + if (cmd->data->flags & MMC_DATA_WRITE) 209 + send |= MESON_MX_SDIO_SEND_DATA; 210 + else 211 + send |= MESON_MX_SDIO_SEND_RESP_HAS_DATA; 212 + 213 + cmd->data->bytes_xfered = 0; 214 + } 215 + 216 + send |= FIELD_PREP(MESON_MX_SDIO_SEND_COMMAND_INDEX_MASK, 217 + (0x40 | cmd->opcode)); 218 + 219 + spin_lock_irqsave(&host->irq_lock, 
irqflags); 220 + 221 + mult = readl(host->base + MESON_MX_SDIO_MULT); 222 + mult &= ~MESON_MX_SDIO_MULT_PORT_SEL_MASK; 223 + mult |= FIELD_PREP(MESON_MX_SDIO_MULT_PORT_SEL_MASK, host->slot_id); 224 + mult |= BIT(31); 225 + writel(mult, host->base + MESON_MX_SDIO_MULT); 226 + 227 + /* enable the CMD done interrupt */ 228 + meson_mx_mmc_mask_bits(mmc, MESON_MX_SDIO_IRQC, 229 + MESON_MX_SDIO_IRQC_ARC_CMD_INT_EN, 230 + MESON_MX_SDIO_IRQC_ARC_CMD_INT_EN); 231 + 232 + /* clear pending interrupts */ 233 + meson_mx_mmc_mask_bits(mmc, MESON_MX_SDIO_IRQS, 234 + MESON_MX_SDIO_IRQS_CMD_INT, 235 + MESON_MX_SDIO_IRQS_CMD_INT); 236 + 237 + writel(cmd->arg, host->base + MESON_MX_SDIO_ARGU); 238 + writel(ext, host->base + MESON_MX_SDIO_EXT); 239 + writel(send, host->base + MESON_MX_SDIO_SEND); 240 + 241 + spin_unlock_irqrestore(&host->irq_lock, irqflags); 242 + 243 + mod_timer(&host->cmd_timeout, jiffies + timeout); 244 + } 245 + 246 + static void meson_mx_mmc_request_done(struct meson_mx_mmc_host *host) 247 + { 248 + struct mmc_request *mrq; 249 + 250 + mrq = host->mrq; 251 + 252 + host->mrq = NULL; 253 + host->cmd = NULL; 254 + 255 + mmc_request_done(host->mmc, mrq); 256 + } 257 + 258 + static void meson_mx_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) 259 + { 260 + struct meson_mx_mmc_host *host = mmc_priv(mmc); 261 + unsigned short vdd = ios->vdd; 262 + unsigned long clk_rate = ios->clock; 263 + 264 + switch (ios->bus_width) { 265 + case MMC_BUS_WIDTH_1: 266 + meson_mx_mmc_mask_bits(mmc, MESON_MX_SDIO_CONF, 267 + MESON_MX_SDIO_CONF_BUS_WIDTH, 0); 268 + break; 269 + 270 + case MMC_BUS_WIDTH_4: 271 + meson_mx_mmc_mask_bits(mmc, MESON_MX_SDIO_CONF, 272 + MESON_MX_SDIO_CONF_BUS_WIDTH, 273 + MESON_MX_SDIO_CONF_BUS_WIDTH); 274 + break; 275 + 276 + case MMC_BUS_WIDTH_8: 277 + default: 278 + dev_err(mmc_dev(mmc), "unsupported bus width: %d\n", 279 + ios->bus_width); 280 + host->error = -EINVAL; 281 + return; 282 + } 283 + 284 + host->error = clk_set_rate(host->cfg_div_clk, 
ios->clock); 285 + if (host->error) { 286 + dev_warn(mmc_dev(mmc), 287 + "failed to set MMC clock to %lu: %d\n", 288 + clk_rate, host->error); 289 + return; 290 + } 291 + 292 + mmc->actual_clock = clk_get_rate(host->cfg_div_clk); 293 + 294 + switch (ios->power_mode) { 295 + case MMC_POWER_OFF: 296 + vdd = 0; 297 + /* fall-through: */ 298 + case MMC_POWER_UP: 299 + if (!IS_ERR(mmc->supply.vmmc)) { 300 + host->error = mmc_regulator_set_ocr(mmc, 301 + mmc->supply.vmmc, 302 + vdd); 303 + if (host->error) 304 + return; 305 + } 306 + break; 307 + } 308 + } 309 + 310 + static int meson_mx_mmc_map_dma(struct mmc_host *mmc, struct mmc_request *mrq) 311 + { 312 + struct mmc_data *data = mrq->data; 313 + int dma_len; 314 + struct scatterlist *sg; 315 + 316 + if (!data) 317 + return 0; 318 + 319 + sg = data->sg; 320 + if (sg->offset & 3 || sg->length & 3) { 321 + dev_err(mmc_dev(mmc), 322 + "unaligned scatterlist: offset %x length %d\n", 323 + sg->offset, sg->length); 324 + return -EINVAL; 325 + } 326 + 327 + dma_len = dma_map_sg(mmc_dev(mmc), data->sg, data->sg_len, 328 + mmc_get_dma_dir(data)); 329 + if (dma_len <= 0) { 330 + dev_err(mmc_dev(mmc), "dma_map_sg failed\n"); 331 + return -ENOMEM; 332 + } 333 + 334 + return 0; 335 + } 336 + 337 + static void meson_mx_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq) 338 + { 339 + struct meson_mx_mmc_host *host = mmc_priv(mmc); 340 + struct mmc_command *cmd = mrq->cmd; 341 + 342 + if (!host->error) 343 + host->error = meson_mx_mmc_map_dma(mmc, mrq); 344 + 345 + if (host->error) { 346 + cmd->error = host->error; 347 + mmc_request_done(mmc, mrq); 348 + return; 349 + } 350 + 351 + host->mrq = mrq; 352 + 353 + if (mrq->data) 354 + writel(sg_dma_address(mrq->data->sg), 355 + host->base + MESON_MX_SDIO_ADDR); 356 + 357 + if (mrq->sbc) 358 + meson_mx_mmc_start_cmd(mmc, mrq->sbc); 359 + else 360 + meson_mx_mmc_start_cmd(mmc, mrq->cmd); 361 + } 362 + 363 + static int meson_mx_mmc_card_busy(struct mmc_host *mmc) 364 + { 365 + 
struct meson_mx_mmc_host *host = mmc_priv(mmc); 366 + u32 irqc = readl(host->base + MESON_MX_SDIO_IRQC); 367 + 368 + return !!(irqc & MESON_MX_SDIO_IRQC_FORCE_DATA_DAT_MASK); 369 + } 370 + 371 + static void meson_mx_mmc_read_response(struct mmc_host *mmc, 372 + struct mmc_command *cmd) 373 + { 374 + struct meson_mx_mmc_host *host = mmc_priv(mmc); 375 + u32 mult; 376 + int i, resp[4]; 377 + 378 + mult = readl(host->base + MESON_MX_SDIO_MULT); 379 + mult |= MESON_MX_SDIO_MULT_WR_RD_OUT_INDEX; 380 + mult &= ~MESON_MX_SDIO_MULT_RESP_READ_INDEX_MASK; 381 + mult |= FIELD_PREP(MESON_MX_SDIO_MULT_RESP_READ_INDEX_MASK, 0); 382 + writel(mult, host->base + MESON_MX_SDIO_MULT); 383 + 384 + if (cmd->flags & MMC_RSP_136) { 385 + for (i = 0; i <= 3; i++) 386 + resp[3 - i] = readl(host->base + MESON_MX_SDIO_ARGU); 387 + cmd->resp[0] = (resp[0] << 8) | ((resp[1] >> 24) & 0xff); 388 + cmd->resp[1] = (resp[1] << 8) | ((resp[2] >> 24) & 0xff); 389 + cmd->resp[2] = (resp[2] << 8) | ((resp[3] >> 24) & 0xff); 390 + cmd->resp[3] = (resp[3] << 8); 391 + } else if (cmd->flags & MMC_RSP_PRESENT) { 392 + cmd->resp[0] = readl(host->base + MESON_MX_SDIO_ARGU); 393 + } 394 + } 395 + 396 + static irqreturn_t meson_mx_mmc_process_cmd_irq(struct meson_mx_mmc_host *host, 397 + u32 irqs, u32 send) 398 + { 399 + struct mmc_command *cmd = host->cmd; 400 + 401 + /* 402 + * NOTE: even though it shouldn't happen we sometimes get command 403 + * interrupts twice (at least this is what it looks like). Ideally 404 + * we find out why this happens and warn here as soon as it occurs. 
405 + */ 406 + if (!cmd) 407 + return IRQ_HANDLED; 408 + 409 + cmd->error = 0; 410 + meson_mx_mmc_read_response(host->mmc, cmd); 411 + 412 + if (cmd->data) { 413 + if (!((irqs & MESON_MX_SDIO_IRQS_DATA_READ_CRC16_OK) || 414 + (irqs & MESON_MX_SDIO_IRQS_DATA_WRITE_CRC16_OK))) 415 + cmd->error = -EILSEQ; 416 + } else { 417 + if (!((irqs & MESON_MX_SDIO_IRQS_RESP_CRC7_OK) || 418 + (send & MESON_MX_SDIO_SEND_RESP_WITHOUT_CRC7))) 419 + cmd->error = -EILSEQ; 420 + } 421 + 422 + return IRQ_WAKE_THREAD; 423 + } 424 + 425 + static irqreturn_t meson_mx_mmc_irq(int irq, void *data) 426 + { 427 + struct meson_mx_mmc_host *host = (void *) data; 428 + u32 irqs, send; 429 + unsigned long irqflags; 430 + irqreturn_t ret; 431 + 432 + spin_lock_irqsave(&host->irq_lock, irqflags); 433 + 434 + irqs = readl(host->base + MESON_MX_SDIO_IRQS); 435 + send = readl(host->base + MESON_MX_SDIO_SEND); 436 + 437 + if (irqs & MESON_MX_SDIO_IRQS_CMD_INT) 438 + ret = meson_mx_mmc_process_cmd_irq(host, irqs, send); 439 + else 440 + ret = IRQ_HANDLED; 441 + 442 + /* finally ACK all pending interrupts */ 443 + writel(irqs, host->base + MESON_MX_SDIO_IRQS); 444 + 445 + spin_unlock_irqrestore(&host->irq_lock, irqflags); 446 + 447 + return ret; 448 + } 449 + 450 + static irqreturn_t meson_mx_mmc_irq_thread(int irq, void *irq_data) 451 + { 452 + struct meson_mx_mmc_host *host = (void *) irq_data; 453 + struct mmc_command *cmd = host->cmd, *next_cmd; 454 + 455 + if (WARN_ON(!cmd)) 456 + return IRQ_HANDLED; 457 + 458 + del_timer_sync(&host->cmd_timeout); 459 + 460 + if (cmd->data) { 461 + dma_unmap_sg(mmc_dev(host->mmc), cmd->data->sg, 462 + cmd->data->sg_len, 463 + mmc_get_dma_dir(cmd->data)); 464 + 465 + cmd->data->bytes_xfered = cmd->data->blksz * cmd->data->blocks; 466 + } 467 + 468 + next_cmd = meson_mx_mmc_get_next_cmd(cmd); 469 + if (next_cmd) 470 + meson_mx_mmc_start_cmd(host->mmc, next_cmd); 471 + else 472 + meson_mx_mmc_request_done(host); 473 + 474 + return IRQ_HANDLED; 475 + } 476 + 477 + static 
void meson_mx_mmc_timeout(struct timer_list *t) 478 + { 479 + struct meson_mx_mmc_host *host = from_timer(host, t, cmd_timeout); 480 + unsigned long irqflags; 481 + u32 irqc; 482 + 483 + spin_lock_irqsave(&host->irq_lock, irqflags); 484 + 485 + /* disable the CMD interrupt */ 486 + irqc = readl(host->base + MESON_MX_SDIO_IRQC); 487 + irqc &= ~MESON_MX_SDIO_IRQC_ARC_CMD_INT_EN; 488 + writel(irqc, host->base + MESON_MX_SDIO_IRQC); 489 + 490 + spin_unlock_irqrestore(&host->irq_lock, irqflags); 491 + 492 + /* 493 + * skip the timeout handling if the interrupt handler already processed 494 + * the command. 495 + */ 496 + if (!host->cmd) 497 + return; 498 + 499 + dev_dbg(mmc_dev(host->mmc), 500 + "Timeout on CMD%u (IRQS = 0x%08x, ARGU = 0x%08x)\n", 501 + host->cmd->opcode, readl(host->base + MESON_MX_SDIO_IRQS), 502 + readl(host->base + MESON_MX_SDIO_ARGU)); 503 + 504 + host->cmd->error = -ETIMEDOUT; 505 + 506 + meson_mx_mmc_request_done(host); 507 + } 508 + 509 + static struct mmc_host_ops meson_mx_mmc_ops = { 510 + .request = meson_mx_mmc_request, 511 + .set_ios = meson_mx_mmc_set_ios, 512 + .card_busy = meson_mx_mmc_card_busy, 513 + .get_cd = mmc_gpio_get_cd, 514 + .get_ro = mmc_gpio_get_ro, 515 + }; 516 + 517 + static struct platform_device *meson_mx_mmc_slot_pdev(struct device *parent) 518 + { 519 + struct device_node *slot_node; 520 + 521 + /* 522 + * TODO: the MMC core framework currently does not support 523 + * controllers with multiple slots properly. 
So we only register 524 + * the first slot for now 525 + */ 526 + slot_node = of_find_compatible_node(parent->of_node, NULL, "mmc-slot"); 527 + if (!slot_node) { 528 + dev_warn(parent, "no 'mmc-slot' sub-node found\n"); 529 + return ERR_PTR(-ENOENT); 530 + } 531 + 532 + return of_platform_device_create(slot_node, NULL, parent); 533 + } 534 + 535 + static int meson_mx_mmc_add_host(struct meson_mx_mmc_host *host) 536 + { 537 + struct mmc_host *mmc = host->mmc; 538 + struct device *slot_dev = mmc_dev(mmc); 539 + int ret; 540 + 541 + if (of_property_read_u32(slot_dev->of_node, "reg", &host->slot_id)) { 542 + dev_err(slot_dev, "missing 'reg' property\n"); 543 + return -EINVAL; 544 + } 545 + 546 + if (host->slot_id >= MESON_MX_SDIO_MAX_SLOTS) { 547 + dev_err(slot_dev, "invalid 'reg' property value %d\n", 548 + host->slot_id); 549 + return -EINVAL; 550 + } 551 + 552 + /* Get regulators and the supported OCR mask */ 553 + ret = mmc_regulator_get_supply(mmc); 554 + if (ret) 555 + return ret; 556 + 557 + mmc->max_req_size = MESON_MX_SDIO_BOUNCE_REQ_SIZE; 558 + mmc->max_seg_size = mmc->max_req_size; 559 + mmc->max_blk_count = 560 + FIELD_GET(MESON_MX_SDIO_SEND_REPEAT_PACKAGE_TIMES_MASK, 561 + 0xffffffff); 562 + mmc->max_blk_size = FIELD_GET(MESON_MX_SDIO_EXT_DATA_RW_NUMBER_MASK, 563 + 0xffffffff); 564 + mmc->max_blk_size -= (4 * MESON_MX_SDIO_RESPONSE_CRC16_BITS); 565 + mmc->max_blk_size /= BITS_PER_BYTE; 566 + 567 + /* Get the min and max supported clock rates */ 568 + mmc->f_min = clk_round_rate(host->cfg_div_clk, 1); 569 + mmc->f_max = clk_round_rate(host->cfg_div_clk, 570 + clk_get_rate(host->parent_clk)); 571 + 572 + mmc->caps |= MMC_CAP_ERASE | MMC_CAP_CMD23; 573 + mmc->ops = &meson_mx_mmc_ops; 574 + 575 + ret = mmc_of_parse(mmc); 576 + if (ret) 577 + return ret; 578 + 579 + ret = mmc_add_host(mmc); 580 + if (ret) 581 + return ret; 582 + 583 + return 0; 584 + } 585 + 586 + static int meson_mx_mmc_register_clks(struct meson_mx_mmc_host *host) 587 + { 588 + struct 
clk_init_data init; 589 + const char *clk_div_parent, *clk_fixed_factor_parent; 590 + 591 + clk_fixed_factor_parent = __clk_get_name(host->parent_clk); 592 + init.name = devm_kasprintf(host->controller_dev, GFP_KERNEL, 593 + "%s#fixed_factor", 594 + dev_name(host->controller_dev)); 595 + init.ops = &clk_fixed_factor_ops; 596 + init.flags = 0; 597 + init.parent_names = &clk_fixed_factor_parent; 598 + init.num_parents = 1; 599 + host->fixed_factor.div = 2; 600 + host->fixed_factor.mult = 1; 601 + host->fixed_factor.hw.init = &init; 602 + 603 + host->fixed_factor_clk = devm_clk_register(host->controller_dev, 604 + &host->fixed_factor.hw); 605 + if (WARN_ON(IS_ERR(host->fixed_factor_clk))) 606 + return PTR_ERR(host->fixed_factor_clk); 607 + 608 + clk_div_parent = __clk_get_name(host->fixed_factor_clk); 609 + init.name = devm_kasprintf(host->controller_dev, GFP_KERNEL, 610 + "%s#div", dev_name(host->controller_dev)); 611 + init.ops = &clk_divider_ops; 612 + init.flags = CLK_SET_RATE_PARENT; 613 + init.parent_names = &clk_div_parent; 614 + init.num_parents = 1; 615 + host->cfg_div.reg = host->base + MESON_MX_SDIO_CONF; 616 + host->cfg_div.shift = MESON_MX_SDIO_CONF_CMD_CLK_DIV_SHIFT; 617 + host->cfg_div.width = MESON_MX_SDIO_CONF_CMD_CLK_DIV_WIDTH; 618 + host->cfg_div.hw.init = &init; 619 + host->cfg_div.flags = CLK_DIVIDER_ALLOW_ZERO; 620 + 621 + host->cfg_div_clk = devm_clk_register(host->controller_dev, 622 + &host->cfg_div.hw); 623 + if (WARN_ON(IS_ERR(host->cfg_div_clk))) 624 + return PTR_ERR(host->cfg_div_clk); 625 + 626 + return 0; 627 + } 628 + 629 + static int meson_mx_mmc_probe(struct platform_device *pdev) 630 + { 631 + struct platform_device *slot_pdev; 632 + struct mmc_host *mmc; 633 + struct meson_mx_mmc_host *host; 634 + struct resource *res; 635 + int ret, irq; 636 + u32 conf; 637 + 638 + slot_pdev = meson_mx_mmc_slot_pdev(&pdev->dev); 639 + if (!slot_pdev) 640 + return -ENODEV; 641 + else if (IS_ERR(slot_pdev)) 642 + return PTR_ERR(slot_pdev); 643 + 644 
+ mmc = mmc_alloc_host(sizeof(*host), &slot_pdev->dev); 645 + if (!mmc) { 646 + ret = -ENOMEM; 647 + goto error_unregister_slot_pdev; 648 + } 649 + 650 + host = mmc_priv(mmc); 651 + host->mmc = mmc; 652 + host->controller_dev = &pdev->dev; 653 + 654 + spin_lock_init(&host->irq_lock); 655 + timer_setup(&host->cmd_timeout, meson_mx_mmc_timeout, 0); 656 + 657 + platform_set_drvdata(pdev, host); 658 + 659 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 660 + host->base = devm_ioremap_resource(host->controller_dev, res); 661 + if (IS_ERR(host->base)) { 662 + ret = PTR_ERR(host->base); 663 + goto error_free_mmc; 664 + } 665 + 666 + irq = platform_get_irq(pdev, 0); 667 + ret = devm_request_threaded_irq(host->controller_dev, irq, 668 + meson_mx_mmc_irq, 669 + meson_mx_mmc_irq_thread, IRQF_ONESHOT, 670 + NULL, host); 671 + if (ret) 672 + goto error_free_mmc; 673 + 674 + host->core_clk = devm_clk_get(host->controller_dev, "core"); 675 + if (IS_ERR(host->core_clk)) { 676 + ret = PTR_ERR(host->core_clk); 677 + goto error_free_mmc; 678 + } 679 + 680 + host->parent_clk = devm_clk_get(host->controller_dev, "clkin"); 681 + if (IS_ERR(host->parent_clk)) { 682 + ret = PTR_ERR(host->parent_clk); 683 + goto error_free_mmc; 684 + } 685 + 686 + ret = meson_mx_mmc_register_clks(host); 687 + if (ret) 688 + goto error_free_mmc; 689 + 690 + ret = clk_prepare_enable(host->core_clk); 691 + if (ret) { 692 + dev_err(host->controller_dev, "Failed to enable core clock\n"); 693 + goto error_free_mmc; 694 + } 695 + 696 + ret = clk_prepare_enable(host->cfg_div_clk); 697 + if (ret) { 698 + dev_err(host->controller_dev, "Failed to enable MMC clock\n"); 699 + goto error_disable_core_clk; 700 + } 701 + 702 + conf = 0; 703 + conf |= FIELD_PREP(MESON_MX_SDIO_CONF_CMD_ARGUMENT_BITS_MASK, 39); 704 + conf |= FIELD_PREP(MESON_MX_SDIO_CONF_M_ENDIAN_MASK, 0x3); 705 + conf |= FIELD_PREP(MESON_MX_SDIO_CONF_WRITE_NWR_MASK, 0x2); 706 + conf |= FIELD_PREP(MESON_MX_SDIO_CONF_WRITE_CRC_OK_STATUS_MASK, 0x2); 
707 + writel(conf, host->base + MESON_MX_SDIO_CONF); 708 + 709 + meson_mx_mmc_soft_reset(host); 710 + 711 + ret = meson_mx_mmc_add_host(host); 712 + if (ret) 713 + goto error_disable_clks; 714 + 715 + return 0; 716 + 717 + error_disable_clks: 718 + clk_disable_unprepare(host->cfg_div_clk); 719 + error_disable_core_clk: 720 + clk_disable_unprepare(host->core_clk); 721 + error_free_mmc: 722 + mmc_free_host(mmc); 723 + error_unregister_slot_pdev: 724 + of_platform_device_destroy(&slot_pdev->dev, NULL); 725 + return ret; 726 + } 727 + 728 + static int meson_mx_mmc_remove(struct platform_device *pdev) 729 + { 730 + struct meson_mx_mmc_host *host = platform_get_drvdata(pdev); 731 + struct device *slot_dev = mmc_dev(host->mmc); 732 + 733 + del_timer_sync(&host->cmd_timeout); 734 + 735 + mmc_remove_host(host->mmc); 736 + 737 + of_platform_device_destroy(slot_dev, NULL); 738 + 739 + clk_disable_unprepare(host->cfg_div_clk); 740 + clk_disable_unprepare(host->core_clk); 741 + 742 + mmc_free_host(host->mmc); 743 + 744 + return 0; 745 + } 746 + 747 + static const struct of_device_id meson_mx_mmc_of_match[] = { 748 + { .compatible = "amlogic,meson8-sdio", }, 749 + { .compatible = "amlogic,meson8b-sdio", }, 750 + { /* sentinel */ } 751 + }; 752 + MODULE_DEVICE_TABLE(of, meson_mx_mmc_of_match); 753 + 754 + static struct platform_driver meson_mx_mmc_driver = { 755 + .probe = meson_mx_mmc_probe, 756 + .remove = meson_mx_mmc_remove, 757 + .driver = { 758 + .name = "meson-mx-sdio", 759 + .of_match_table = of_match_ptr(meson_mx_mmc_of_match), 760 + }, 761 + }; 762 + 763 + module_platform_driver(meson_mx_mmc_driver); 764 + 765 + MODULE_DESCRIPTION("Meson6, Meson8 and Meson8b SDIO/MMC Host Driver"); 766 + MODULE_AUTHOR("Carlo Caione <carlo@endlessm.com>"); 767 + MODULE_AUTHOR("Martin Blumenstingl <martin.blumenstingl@googlemail.com>"); 768 + MODULE_LICENSE("GPL v2");
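Among the new meson-mx-sdio driver's details above, meson_mx_mmc_read_response() reassembles the 136-bit R2 response (CID/CSD) from four 32-bit register reads: each word is shifted left by 8 and topped up with the high byte of the following raw word. A standalone sketch of that byte realignment — assemble_r2 is an illustrative name, it uses unsigned arithmetic where the driver uses int, and it assumes the raw words have already been put in order (the driver fills them in reverse read order):

```c
#include <assert.h>
#include <stdint.h>

/* Rebuild the four response words the MMC core expects from the four
 * raw words read out of the MESON_MX_SDIO_ARGU register: every output
 * word is the raw word shifted up a byte, with its low byte taken from
 * the top byte of the next raw word. */
static void assemble_r2(const uint32_t raw[4], uint32_t resp[4])
{
	resp[0] = (raw[0] << 8) | ((raw[1] >> 24) & 0xff);
	resp[1] = (raw[1] << 8) | ((raw[2] >> 24) & 0xff);
	resp[2] = (raw[2] << 8) | ((raw[3] >> 24) & 0xff);
	resp[3] = (raw[3] << 8);	/* low byte has no successor */
}
```

The net effect is an 8-bit left shift across the whole 128-bit payload, discarding the leading byte the controller prepends to each response.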
+1 -1
drivers/mmc/host/mmci.c
···
 
 	/* Get regulators and the supported OCR mask */
 	ret = mmc_regulator_get_supply(mmc);
-	if (ret == -EPROBE_DEFER)
+	if (ret)
 		goto clk_disable;
 
 	if (!mmc->ocr_avail)
+247 -38
drivers/mmc/host/mtk-sd.c
···
 #define SDC_RESP2	0x48
 #define SDC_RESP3	0x4c
 #define SDC_BLK_NUM	0x50
+#define SDC_ADV_CFG0	0x64
 #define EMMC_IOCON	0x7c
 #define SDC_ACMD_RESP	0x80
 #define MSDC_DMA_SA	0x90
···
 #define MSDC_DMA_CFG	0x9c
 #define MSDC_PATCH_BIT	0xb0
 #define MSDC_PATCH_BIT1	0xb4
+#define MSDC_PATCH_BIT2	0xb8
 #define MSDC_PAD_TUNE	0xec
+#define MSDC_PAD_TUNE0	0xf0
 #define PAD_DS_TUNE	0x188
 #define PAD_CMD_TUNE	0x18c
 #define EMMC50_CFG0	0x208
+#define EMMC50_CFG3	0x220
+#define SDC_FIFO_CFG	0x228
 
 /*--------------------------------------------------------------------------*/
 /* Register Mask */
···
 #define MSDC_CFG_CKDIV		(0xff << 8)	/* RW */
 #define MSDC_CFG_CKMOD		(0x3 << 16)	/* RW */
 #define MSDC_CFG_HS400_CK_MODE	(0x1 << 18)	/* RW */
+#define MSDC_CFG_HS400_CK_MODE_EXTRA	(0x1 << 22)	/* RW */
+#define MSDC_CFG_CKDIV_EXTRA	(0xfff << 8)	/* RW */
+#define MSDC_CFG_CKMOD_EXTRA	(0x3 << 20)	/* RW */
 
 /* MSDC_IOCON mask */
 #define MSDC_IOCON_SDR104CKS	(0x1 << 0)	/* RW */
···
 #define SDC_STS_CMDBUSY		(0x1 << 1)	/* RW */
 #define SDC_STS_SWR_COMPL	(0x1 << 31)	/* RW */
 
+/* SDC_ADV_CFG0 mask */
+#define SDC_RX_ENHANCE_EN	(0x1 << 20)	/* RW */
+
 /* MSDC_DMA_CTRL mask */
 #define MSDC_DMA_CTRL_START	(0x1 << 0)	/* W */
 #define MSDC_DMA_CTRL_STOP	(0x1 << 1)	/* W */
···
 #define MSDC_PATCH_BIT_SPCPUSH	(0x1 << 29)	/* RW */
 #define MSDC_PATCH_BIT_DECRCTMO	(0x1 << 30)	/* RW */
 
+#define MSDC_PATCH_BIT1_STOP_DLY	(0xf << 8)	/* RW */
+
+#define MSDC_PATCH_BIT2_CFGRESP		(0x1 << 15)	/* RW */
+#define MSDC_PATCH_BIT2_CFGCRCSTS	(0x1 << 28)	/* RW */
+#define MSDC_PB2_RESPWAIT		(0x3 << 2)	/* RW */
+#define MSDC_PB2_RESPSTSENSEL		(0x7 << 16)	/* RW */
+#define MSDC_PB2_CRCSTSENSEL		(0x7 << 29)	/* RW */
+
 #define MSDC_PAD_TUNE_DATWRDLY	(0x1f << 0)	/* RW */
 #define MSDC_PAD_TUNE_DATRRDLY	(0x1f << 8)	/* RW */
 #define MSDC_PAD_TUNE_CMDRDLY	(0x1f << 16)	/* RW */
 #define MSDC_PAD_TUNE_CMDRRDLY	(0x1f << 22)	/* RW */
 #define MSDC_PAD_TUNE_CLKTDLY	(0x1f << 27)	/* RW */
+#define MSDC_PAD_TUNE_RXDLYSEL	(0x1 << 15)	/* RW */
+#define MSDC_PAD_TUNE_RD_SEL	(0x1 << 13)	/* RW */
+#define MSDC_PAD_TUNE_CMD_SEL	(0x1 << 21)	/* RW */
 
 #define PAD_DS_TUNE_DLY1	(0x1f << 2)	/* RW */
 #define PAD_DS_TUNE_DLY2	(0x1f << 7)	/* RW */
···
 #define EMMC50_CFG_PADCMD_LATCHCK	(0x1 << 0)	/* RW */
 #define EMMC50_CFG_CRCSTS_EDGE		(0x1 << 3)	/* RW */
 #define EMMC50_CFG_CFCSTS_SEL		(0x1 << 4)	/* RW */
+
+#define EMMC50_CFG3_OUTS_WR		(0x1f << 0)	/* RW */
+
+#define SDC_FIFO_CFG_WRVALIDSEL		(0x1 << 24)	/* RW */
+#define SDC_FIFO_CFG_RDVALIDSEL		(0x1 << 25)	/* RW */
 
 #define REQ_CMD_EIO	(0x1 << 0)
 #define REQ_CMD_TMO	(0x1 << 1)
···
 	u32 pad_tune;
 	u32 patch_bit0;
 	u32 patch_bit1;
+	u32 patch_bit2;
 	u32 pad_ds_tune;
 	u32 pad_cmd_tune;
 	u32 emmc50_cfg0;
+	u32 emmc50_cfg3;
+	u32 sdc_fifo_cfg;
+};
+
+struct mtk_mmc_compatible {
+	u8 clk_div_bits;
+	bool hs400_tune; /* only used for MT8173 */
+	u32 pad_tune_reg;
+	bool async_fifo;
+	bool data_tune;
+	bool busy_check;
+	bool stop_clk_fix;
+	bool enhance_rx;
 };
 
 struct msdc_tune_para {
···
 struct msdc_host {
 	struct device *dev;
+	const struct mtk_mmc_compatible *dev_comp;
 	struct mmc_host *mmc;	/* mmc structure */
 	int cmd_rsp;
···
 	struct clk *src_clk;	/* msdc source clock */
 	struct clk *h_clk;	/* msdc h_clk */
+	struct clk *src_clk_cg;	/* msdc source clock control gate */
 	u32 mclk;		/* mmc subsystem clock frequency */
 	u32 src_clk_freq;	/* source clock frequency */
 	u32 sclk;		/* SD/MS bus clock frequency */
 	unsigned char timing;
 	bool vqmmc_enabled;
+	u32 latch_ck;
 	u32 hs400_ds_delay;
 	u32 hs200_cmd_int_delay; /* cmd internal delay for HS200/SDR104 */
 	u32 hs400_cmd_int_delay; /* cmd internal delay for HS400 */
···
 	struct msdc_tune_para def_tune_para; /* default tune setting */
 	struct msdc_tune_para saved_tune_para; /* tune result of CMD21/CMD19 */
 };
+
+static const struct mtk_mmc_compatible mt8135_compat = {
+	.clk_div_bits = 8,
+	.hs400_tune = false,
+	.pad_tune_reg = MSDC_PAD_TUNE,
+	.async_fifo = false,
+	.data_tune = false,
+	.busy_check = false,
+	.stop_clk_fix = false,
+	.enhance_rx = false,
+};
+
+static const struct mtk_mmc_compatible mt8173_compat = {
+	.clk_div_bits = 8,
+	.hs400_tune = true,
+	.pad_tune_reg = MSDC_PAD_TUNE,
+	.async_fifo = false,
+	.data_tune = false,
+	.busy_check = false,
+	.stop_clk_fix = false,
+	.enhance_rx = false,
+};
+
+static const struct mtk_mmc_compatible mt2701_compat = {
+	.clk_div_bits = 12,
+	.hs400_tune = false,
+	.pad_tune_reg = MSDC_PAD_TUNE0,
+	.async_fifo = true,
+	.data_tune = true,
+	.busy_check = false,
+	.stop_clk_fix = false,
+	.enhance_rx = false,
+};
+
+static const struct mtk_mmc_compatible mt2712_compat = {
+	.clk_div_bits = 12,
+	.hs400_tune = false,
+	.pad_tune_reg = MSDC_PAD_TUNE0,
+	.async_fifo = true,
+	.data_tune = true,
+	.busy_check = true,
+	.stop_clk_fix = true,
+	.enhance_rx = true,
+};
+
+static const struct of_device_id msdc_of_ids[] = {
+	{ .compatible = "mediatek,mt8135-mmc", .data = &mt8135_compat},
+	{ .compatible = "mediatek,mt8173-mmc", .data = &mt8173_compat},
+	{ .compatible = "mediatek,mt2701-mmc", .data = &mt2701_compat},
+	{ .compatible = "mediatek,mt2712-mmc", .data = &mt2712_compat},
+	{}
+};
+MODULE_DEVICE_TABLE(of, msdc_of_ids);
 
 static void sdr_set_bits(void __iomem *reg, u32 bs)
 {
···
 	timeout = (ns + clk_ns - 1) / clk_ns + clks;
 	/* in 1048576 sclk cycle unit */
 	timeout = (timeout + (0x1 << 20) - 1) >> 20;
-	sdr_get_field(host->base + MSDC_CFG, MSDC_CFG_CKMOD, &mode);
+	if (host->dev_comp->clk_div_bits == 8)
+		sdr_get_field(host->base + MSDC_CFG,
+			      MSDC_CFG_CKMOD, &mode);
+	else
+		sdr_get_field(host->base + MSDC_CFG,
+			      MSDC_CFG_CKMOD_EXTRA, &mode);
 	/*DDR mode will double the clk cycles for data timeout */
 	timeout = mode >= 2 ? timeout * 2 : timeout;
 	timeout = timeout > 1 ? timeout - 1 : 0;
···
 
 static void msdc_gate_clock(struct msdc_host *host)
 {
+	clk_disable_unprepare(host->src_clk_cg);
 	clk_disable_unprepare(host->src_clk);
 	clk_disable_unprepare(host->h_clk);
 }
···
 {
 	clk_prepare_enable(host->h_clk);
 	clk_prepare_enable(host->src_clk);
+	clk_prepare_enable(host->src_clk_cg);
 	while (!(readl(host->base + MSDC_CFG) & MSDC_CFG_CKSTB))
 		cpu_relax();
 }
···
 	u32 flags;
 	u32 div;
 	u32 sclk;
+	u32 tune_reg = host->dev_comp->pad_tune_reg;
 
 	if (!hz) {
 		dev_dbg(host->dev, "set mclk to 0\n");
···
 
 	flags = readl(host->base + MSDC_INTEN);
 	sdr_clr_bits(host->base + MSDC_INTEN, flags);
-	sdr_clr_bits(host->base + MSDC_CFG, MSDC_CFG_HS400_CK_MODE);
+	if (host->dev_comp->clk_div_bits == 8)
+		sdr_clr_bits(host->base + MSDC_CFG, MSDC_CFG_HS400_CK_MODE);
+	else
+		sdr_clr_bits(host->base + MSDC_CFG,
+			     MSDC_CFG_HS400_CK_MODE_EXTRA);
 	if (timing == MMC_TIMING_UHS_DDR50 ||
 	    timing == MMC_TIMING_MMC_DDR52 ||
 	    timing == MMC_TIMING_MMC_HS400) {
···
 
 		if (timing == MMC_TIMING_MMC_HS400 &&
 		    hz >= (host->src_clk_freq >> 1)) {
-			sdr_set_bits(host->base + MSDC_CFG,
-				     MSDC_CFG_HS400_CK_MODE);
+			if (host->dev_comp->clk_div_bits == 8)
+				sdr_set_bits(host->base + MSDC_CFG,
+					     MSDC_CFG_HS400_CK_MODE);
+			else
+				sdr_set_bits(host->base + MSDC_CFG,
+					     MSDC_CFG_HS400_CK_MODE_EXTRA);
 			sclk = host->src_clk_freq >> 1;
 			div = 0; /* div is ignore when bit18 is set */
 		}
···
 			sclk = (host->src_clk_freq >> 2) / div;
 		}
 	}
-	sdr_set_field(host->base + MSDC_CFG, MSDC_CFG_CKMOD | MSDC_CFG_CKDIV,
-		      (mode << 8) | div);
-	sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_CKPDN);
+	sdr_clr_bits(host->base + MSDC_CFG, MSDC_CFG_CKPDN);
+	/*
+	 * As src_clk/HCLK use the same bit to gate/ungate,
+	 * So if want to only gate src_clk, need gate its parent(mux).
+	 */
+	if (host->src_clk_cg)
+		clk_disable_unprepare(host->src_clk_cg);
+	else
+		clk_disable_unprepare(clk_get_parent(host->src_clk));
+	if (host->dev_comp->clk_div_bits == 8)
+		sdr_set_field(host->base + MSDC_CFG,
+			      MSDC_CFG_CKMOD | MSDC_CFG_CKDIV,
+			      (mode << 8) | div);
+	else
+		sdr_set_field(host->base + MSDC_CFG,
+			      MSDC_CFG_CKMOD_EXTRA | MSDC_CFG_CKDIV_EXTRA,
+			      (mode << 12) | div);
+	if (host->src_clk_cg)
+		clk_prepare_enable(host->src_clk_cg);
+	else
+		clk_prepare_enable(clk_get_parent(host->src_clk));
+
 	while (!(readl(host->base + MSDC_CFG) & MSDC_CFG_CKSTB))
 		cpu_relax();
+	sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_CKPDN);
 	host->sclk = sclk;
 	host->mclk = hz;
 	host->timing = timing;
···
 	 */
 	if (host->sclk <= 52000000) {
 		writel(host->def_tune_para.iocon, host->base + MSDC_IOCON);
-		writel(host->def_tune_para.pad_tune, host->base + MSDC_PAD_TUNE);
+		writel(host->def_tune_para.pad_tune, host->base + tune_reg);
 	} else {
 		writel(host->saved_tune_para.iocon, host->base + MSDC_IOCON);
-		writel(host->saved_tune_para.pad_tune, host->base + MSDC_PAD_TUNE);
+		writel(host->saved_tune_para.pad_tune, host->base + tune_reg);
 		writel(host->saved_tune_para.pad_cmd_tune,
 		       host->base + PAD_CMD_TUNE);
 	}
 
-	if (timing == MMC_TIMING_MMC_HS400)
+	if (timing == MMC_TIMING_MMC_HS400 &&
+	    host->dev_comp->hs400_tune)
 		sdr_set_field(host->base + PAD_CMD_TUNE,
 			      MSDC_PAD_TUNE_CMDRRDLY,
 			      host->hs400_cmd_int_delay);
···
 static void msdc_init_hw(struct msdc_host *host)
 {
 	u32 val;
+	u32 tune_reg = host->dev_comp->pad_tune_reg;
 
 	/* Configure to MMC/SD mode, clock free running */
 	sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_MODE | MSDC_CFG_CKPDN);
···
 	val = readl(host->base + MSDC_INT);
 	writel(val, host->base + MSDC_INT);
 
-	writel(0, host->base + MSDC_PAD_TUNE);
+	writel(0, host->base + tune_reg);
 	writel(0, host->base + MSDC_IOCON);
 	sdr_set_field(host->base + MSDC_IOCON, MSDC_IOCON_DDLSEL, 0);
 	writel(0x403c0046, host->base + MSDC_PATCH_BIT);
 	sdr_set_field(host->base + MSDC_PATCH_BIT, MSDC_CKGEN_MSDC_DLY_SEL, 1);
-	writel(0xffff0089, host->base + MSDC_PATCH_BIT1);
+	writel(0xffff4089, host->base + MSDC_PATCH_BIT1);
 	sdr_set_bits(host->base + EMMC50_CFG0, EMMC50_CFG_CFCSTS_SEL);
+
+	if (host->dev_comp->stop_clk_fix) {
+		sdr_set_field(host->base + MSDC_PATCH_BIT1,
+			      MSDC_PATCH_BIT1_STOP_DLY, 3);
+		sdr_clr_bits(host->base + SDC_FIFO_CFG,
+			     SDC_FIFO_CFG_WRVALIDSEL);
+		sdr_clr_bits(host->base + SDC_FIFO_CFG,
+			     SDC_FIFO_CFG_RDVALIDSEL);
+	}
+
+	if (host->dev_comp->busy_check)
+		sdr_clr_bits(host->base + MSDC_PATCH_BIT1, (1 << 7));
+
+	if (host->dev_comp->async_fifo) {
+		sdr_set_field(host->base + MSDC_PATCH_BIT2,
+			      MSDC_PB2_RESPWAIT, 3);
+		if (host->dev_comp->enhance_rx) {
+			sdr_set_bits(host->base + SDC_ADV_CFG0,
+				     SDC_RX_ENHANCE_EN);
+		} else {
+			sdr_set_field(host->base + MSDC_PATCH_BIT2,
+				      MSDC_PB2_RESPSTSENSEL, 2);
+			sdr_set_field(host->base + MSDC_PATCH_BIT2,
+				      MSDC_PB2_CRCSTSENSEL, 2);
+		}
+		/* use async fifo, then no need tune internal delay */
+		sdr_clr_bits(host->base + MSDC_PATCH_BIT2,
+			     MSDC_PATCH_BIT2_CFGRESP);
+		sdr_set_bits(host->base + MSDC_PATCH_BIT2,
+			     MSDC_PATCH_BIT2_CFGCRCSTS);
+	}
+
+	if (host->dev_comp->data_tune) {
+		sdr_set_bits(host->base + tune_reg,
+			     MSDC_PAD_TUNE_RD_SEL | MSDC_PAD_TUNE_CMD_SEL);
+	} else {
+		/* choose clock tune */
+		sdr_set_bits(host->base + tune_reg, MSDC_PAD_TUNE_RXDLYSEL);
+	}
 
 	/* Configure to enable SDIO mode.
 	 * it's must otherwise sdio cmd5 failed
···
 	sdr_set_field(host->base + SDC_CFG, SDC_CFG_DTOC, 3);
 
 	host->def_tune_para.iocon = readl(host->base + MSDC_IOCON);
-	host->def_tune_para.pad_tune = readl(host->base + MSDC_PAD_TUNE);
+	host->def_tune_para.pad_tune = readl(host->base + tune_reg);
+	host->saved_tune_para.iocon = readl(host->base + MSDC_IOCON);
+	host->saved_tune_para.pad_tune = readl(host->base + tune_reg);
 	dev_dbg(host->dev, "init hardware done!");
 }
···
 	struct msdc_delay_phase internal_delay_phase;
 	u8 final_delay, final_maxlen;
 	u32 internal_delay = 0;
+	u32 tune_reg = host->dev_comp->pad_tune_reg;
 	int cmd_err;
 	int i, j;
 
 	if (mmc->ios.timing == MMC_TIMING_MMC_HS200 ||
 	    mmc->ios.timing == MMC_TIMING_UHS_SDR104)
-		sdr_set_field(host->base + MSDC_PAD_TUNE,
+		sdr_set_field(host->base + tune_reg,
 			      MSDC_PAD_TUNE_CMDRRDLY,
 			      host->hs200_cmd_int_delay);
 
 	sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL);
 	for (i = 0 ; i < PAD_DELAY_MAX; i++) {
-		sdr_set_field(host->base + MSDC_PAD_TUNE,
+		sdr_set_field(host->base + tune_reg,
 			      MSDC_PAD_TUNE_CMDRDLY, i);
 		/*
 		 * Using the same parameters, it may sometimes pass the test,
···
 	}
 	final_rise_delay = get_best_delay(host, rise_delay);
 	/* if rising edge has enough margin, then do not scan falling edge */
-	if (final_rise_delay.maxlen >= 12 && final_rise_delay.start < 4)
+	if (final_rise_delay.maxlen >= 12 ||
+	    (final_rise_delay.start == 0 && final_rise_delay.maxlen >= 4))
 		goto skip_fall;
 
 	sdr_set_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL);
 	for (i = 0; i < PAD_DELAY_MAX; i++) {
-		sdr_set_field(host->base + MSDC_PAD_TUNE,
+		sdr_set_field(host->base + tune_reg,
 			      MSDC_PAD_TUNE_CMDRDLY, i);
 		/*
 		 * Using the same parameters, it may sometimes pass the test,
···
 	final_maxlen = final_fall_delay.maxlen;
 	if (final_maxlen == final_rise_delay.maxlen) {
 		sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL);
-		sdr_set_field(host->base + MSDC_PAD_TUNE, MSDC_PAD_TUNE_CMDRDLY,
+		sdr_set_field(host->base + tune_reg, MSDC_PAD_TUNE_CMDRDLY,
 			      final_rise_delay.final_phase);
 		final_delay = final_rise_delay.final_phase;
 	} else {
 		sdr_set_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL);
-		sdr_set_field(host->base + MSDC_PAD_TUNE, MSDC_PAD_TUNE_CMDRDLY,
+		sdr_set_field(host->base + tune_reg, MSDC_PAD_TUNE_CMDRDLY,
 			      final_fall_delay.final_phase);
 		final_delay = final_fall_delay.final_phase;
 	}
-	if (host->hs200_cmd_int_delay)
+	if (host->dev_comp->async_fifo || host->hs200_cmd_int_delay)
 		goto skip_internal;
 
 	for (i = 0; i < PAD_DELAY_MAX; i++) {
-		sdr_set_field(host->base + MSDC_PAD_TUNE,
+		sdr_set_field(host->base + tune_reg,
 			      MSDC_PAD_TUNE_CMDRRDLY, i);
 		mmc_send_tuning(mmc, opcode, &cmd_err);
 		if (!cmd_err)
···
 	}
 	dev_dbg(host->dev, "Final internal delay: 0x%x\n", internal_delay);
 	internal_delay_phase = get_best_delay(host, internal_delay);
-	sdr_set_field(host->base + MSDC_PAD_TUNE, MSDC_PAD_TUNE_CMDRRDLY,
+	sdr_set_field(host->base + tune_reg, MSDC_PAD_TUNE_CMDRRDLY,
 		      internal_delay_phase.final_phase);
skip_internal:
 	dev_dbg(host->dev, "Final cmd pad delay: %x\n", final_delay);
···
 	u32 rise_delay = 0, fall_delay = 0;
 	struct msdc_delay_phase final_rise_delay, final_fall_delay = { 0,};
 	u8 final_delay, final_maxlen;
+	u32 tune_reg = host->dev_comp->pad_tune_reg;
 	int i, ret;
 
+	sdr_set_field(host->base + MSDC_PATCH_BIT, MSDC_INT_DAT_LATCH_CK_SEL,
+		      host->latch_ck);
 	sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_DSPL);
 	sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_W_DSPL);
 	for (i = 0 ; i < PAD_DELAY_MAX; i++) {
-		sdr_set_field(host->base + MSDC_PAD_TUNE,
+		sdr_set_field(host->base + tune_reg,
 			      MSDC_PAD_TUNE_DATRRDLY, i);
 		ret = mmc_send_tuning(mmc, opcode, NULL);
 		if (!ret)
···
 	sdr_set_bits(host->base + MSDC_IOCON, MSDC_IOCON_DSPL);
 	sdr_set_bits(host->base + MSDC_IOCON, MSDC_IOCON_W_DSPL);
 	for (i = 0; i < PAD_DELAY_MAX; i++) {
-		sdr_set_field(host->base + MSDC_PAD_TUNE,
+		sdr_set_field(host->base + tune_reg,
 			      MSDC_PAD_TUNE_DATRRDLY, i);
 		ret = mmc_send_tuning(mmc, opcode, NULL);
 		if (!ret)
···
 	if (final_maxlen == final_rise_delay.maxlen) {
 		sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_DSPL);
 		sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_W_DSPL);
-		sdr_set_field(host->base + MSDC_PAD_TUNE,
+		sdr_set_field(host->base + tune_reg,
 			      MSDC_PAD_TUNE_DATRRDLY,
 			      final_rise_delay.final_phase);
 		final_delay = final_rise_delay.final_phase;
 	} else {
 		sdr_set_bits(host->base + MSDC_IOCON, MSDC_IOCON_DSPL);
 		sdr_set_bits(host->base + MSDC_IOCON, MSDC_IOCON_W_DSPL);
-		sdr_set_field(host->base + MSDC_PAD_TUNE,
+		sdr_set_field(host->base + tune_reg,
 			      MSDC_PAD_TUNE_DATRRDLY,
 			      final_fall_delay.final_phase);
 		final_delay = final_fall_delay.final_phase;
···
 {
 	struct msdc_host *host = mmc_priv(mmc);
 	int ret;
+	u32 tune_reg = host->dev_comp->pad_tune_reg;
 
-	if (host->hs400_mode)
+	if (host->hs400_mode &&
+	    host->dev_comp->hs400_tune)
 		ret = hs400_tune_response(mmc, opcode);
 	else
 		ret = msdc_tune_response(mmc, opcode);
···
 	}
 
 	host->saved_tune_para.iocon = readl(host->base + MSDC_IOCON);
-	host->saved_tune_para.pad_tune = readl(host->base + MSDC_PAD_TUNE);
+	host->saved_tune_para.pad_tune = readl(host->base + tune_reg);
 	host->saved_tune_para.pad_cmd_tune = readl(host->base + PAD_CMD_TUNE);
 	return ret;
 }
···
 	host->hs400_mode = true;
 
 	writel(host->hs400_ds_delay, host->base + PAD_DS_TUNE);
+	/* hs400 mode must set it to 0 */
+	sdr_clr_bits(host->base + MSDC_PATCH_BIT2, MSDC_PATCH_BIT2_CFGCRCSTS);
+	/* to improve read performance, set outstanding to 2 */
+	sdr_set_field(host->base + EMMC50_CFG3, EMMC50_CFG3_OUTS_WR, 2);
+
 	return 0;
 }
···
 static void msdc_of_property_parse(struct platform_device *pdev,
 				   struct msdc_host *host)
 {
+	of_property_read_u32(pdev->dev.of_node, "mediatek,latch-ck",
+			     &host->latch_ck);
+
 	of_property_read_u32(pdev->dev.of_node, "hs400-ds-delay",
 			     &host->hs400_ds_delay);
 
···
 	struct mmc_host *mmc;
 	struct msdc_host *host;
 	struct resource *res;
+	const struct of_device_id *of_id;
 	int ret;
 
 	if (!pdev->dev.of_node) {
 		dev_err(&pdev->dev, "No DT found\n");
 		return -EINVAL;
 	}
+
+	of_id = of_match_node(msdc_of_ids, pdev->dev.of_node);
+	if (!of_id)
+		return -EINVAL;
 	/* Allocate MMC host for this device */
 	mmc = mmc_alloc_host(sizeof(struct msdc_host), &pdev->dev);
 	if (!mmc)
···
 	}
 
 	ret = mmc_regulator_get_supply(mmc);
-	if (ret == -EPROBE_DEFER)
+	if (ret)
 		goto host_free;
 
 	host->src_clk = devm_clk_get(&pdev->dev, "source");
···
 		ret = PTR_ERR(host->h_clk);
 		goto host_free;
 	}
+
+	/*source clock control gate is optional clock*/
+	host->src_clk_cg = devm_clk_get(&pdev->dev, "source_cg");
+	if (IS_ERR(host->src_clk_cg))
+		host->src_clk_cg = NULL;
 
 	host->irq = platform_get_irq(pdev, 0);
 	if (host->irq < 0) {
···
 	msdc_of_property_parse(pdev, host);
 
 	host->dev = &pdev->dev;
+	host->dev_comp = of_id->data;
 	host->mmc = mmc;
 	host->src_clk_freq = clk_get_rate(host->src_clk);
 	/* Set host parameters to mmc */
 	mmc->ops = &mt_msdc_ops;
-	mmc->f_min = DIV_ROUND_UP(host->src_clk_freq, 4 * 255);
+	if (host->dev_comp->clk_div_bits == 8)
+		mmc->f_min = DIV_ROUND_UP(host->src_clk_freq, 4 * 255);
+	else
+		mmc->f_min = DIV_ROUND_UP(host->src_clk_freq, 4 * 4095);
 
 	mmc->caps |= MMC_CAP_ERASE | MMC_CAP_CMD23;
 	/* MMC core transfer sizes tunable parameters */
···
 #ifdef CONFIG_PM
 static void msdc_save_reg(struct msdc_host *host)
 {
+	u32 tune_reg = host->dev_comp->pad_tune_reg;
+
 	host->save_para.msdc_cfg = readl(host->base + MSDC_CFG);
 	host->save_para.iocon = readl(host->base + MSDC_IOCON);
 	host->save_para.sdc_cfg = readl(host->base + SDC_CFG);
-	host->save_para.pad_tune = readl(host->base + MSDC_PAD_TUNE);
+	host->save_para.pad_tune = readl(host->base + tune_reg);
 	host->save_para.patch_bit0 = readl(host->base + MSDC_PATCH_BIT);
 	host->save_para.patch_bit1 = readl(host->base + MSDC_PATCH_BIT1);
+	host->save_para.patch_bit2 = readl(host->base + MSDC_PATCH_BIT2);
 	host->save_para.pad_ds_tune = readl(host->base + PAD_DS_TUNE);
 	host->save_para.pad_cmd_tune = readl(host->base + PAD_CMD_TUNE);
 	host->save_para.emmc50_cfg0 = readl(host->base + EMMC50_CFG0);
+	host->save_para.emmc50_cfg3 = readl(host->base + EMMC50_CFG3);
+	host->save_para.sdc_fifo_cfg = readl(host->base + SDC_FIFO_CFG);
 }
 
 static void msdc_restore_reg(struct msdc_host *host)
 {
+	u32 tune_reg = host->dev_comp->pad_tune_reg;
+
 	writel(host->save_para.msdc_cfg, host->base + MSDC_CFG);
 	writel(host->save_para.iocon, host->base + MSDC_IOCON);
 	writel(host->save_para.sdc_cfg, host->base + SDC_CFG);
-	writel(host->save_para.pad_tune, host->base + MSDC_PAD_TUNE);
+	writel(host->save_para.pad_tune, host->base + tune_reg);
 	writel(host->save_para.patch_bit0, host->base + MSDC_PATCH_BIT);
 	writel(host->save_para.patch_bit1, host->base + MSDC_PATCH_BIT1);
+	writel(host->save_para.patch_bit2, host->base + MSDC_PATCH_BIT2);
 	writel(host->save_para.pad_ds_tune, host->base + PAD_DS_TUNE);
 	writel(host->save_para.pad_cmd_tune, host->base + PAD_CMD_TUNE);
 	writel(host->save_para.emmc50_cfg0, host->base + EMMC50_CFG0);
+	writel(host->save_para.emmc50_cfg3, host->base + EMMC50_CFG3);
+	writel(host->save_para.sdc_fifo_cfg, host->base + SDC_FIFO_CFG);
 }
 
static int msdc_runtime_suspend(struct device *dev)
···
 			pm_runtime_force_resume)
 	SET_RUNTIME_PM_OPS(msdc_runtime_suspend, msdc_runtime_resume, NULL)
 };
-
-static const struct of_device_id msdc_of_ids[] = {
-	{ .compatible = "mediatek,mt8135-mmc", },
-	{}
-};
-MODULE_DEVICE_TABLE(of, msdc_of_ids);
 
 static struct platform_driver mt_msdc_driver = {
 	.probe = msdc_drv_probe,
+3 -3
drivers/mmc/host/mvsdio.c
···
 		return IRQ_NONE;
 }
 
-static void mvsd_timeout_timer(unsigned long data)
+static void mvsd_timeout_timer(struct timer_list *t)
 {
-	struct mvsd_host *host = (struct mvsd_host *)data;
+	struct mvsd_host *host = from_timer(host, t, timer);
 	void __iomem *iobase = host->base;
 	struct mmc_request *mrq;
 	unsigned long flags;
···
 		goto out;
 	}
 
-	setup_timer(&host->timer, mvsd_timeout_timer, (unsigned long)host);
+	timer_setup(&host->timer, mvsd_timeout_timer, 0);
 	platform_set_drvdata(pdev, mmc);
 	ret = mmc_add_host(mmc);
 	if (ret)
+4 -7
drivers/mmc/host/mxcmmc.c
···
 	return true;
 }
 
-static void mxcmci_watchdog(unsigned long data)
+static void mxcmci_watchdog(struct timer_list *t)
 {
-	struct mmc_host *mmc = (struct mmc_host *)data;
-	struct mxcmci_host *host = mmc_priv(mmc);
+	struct mxcmci_host *host = from_timer(host, t, watchdog);
 	struct mmc_request *req = host->req;
 	unsigned int stat = mxcmci_readl(host, MMC_REG_STATUS);
···
 		dat3_card_detect = true;
 
 	ret = mmc_regulator_get_supply(mmc);
-	if (ret == -EPROBE_DEFER)
+	if (ret)
 		goto out_free;
 
 	if (!mmc->ocr_avail) {
···
 		goto out_free_dma;
 	}
 
-	init_timer(&host->watchdog);
-	host->watchdog.function = &mxcmci_watchdog;
-	host->watchdog.data = (unsigned long)mmc;
+	timer_setup(&host->watchdog, mxcmci_watchdog, 0);
 
 	mmc_add_host(mmc);
+9 -11
drivers/mmc/host/omap.c
···
 }
 
 static void
-mmc_omap_cmd_timer(unsigned long data)
+mmc_omap_cmd_timer(struct timer_list *t)
 {
-	struct mmc_omap_host *host = (struct mmc_omap_host *) data;
+	struct mmc_omap_host *host = from_timer(host, t, cmd_abort_timer);
 	unsigned long flags;
 
 	spin_lock_irqsave(&host->slot_lock, flags);
···
 }
 
 static void
-mmc_omap_clk_timer(unsigned long data)
+mmc_omap_clk_timer(struct timer_list *t)
 {
-	struct mmc_omap_host *host = (struct mmc_omap_host *) data;
+	struct mmc_omap_host *host = from_timer(host, t, clk_timer);
 
 	mmc_omap_fclk_enable(host, 0);
 }
···
 	tasklet_hi_schedule(&slot->cover_tasklet);
 }
 
-static void mmc_omap_cover_timer(unsigned long arg)
+static void mmc_omap_cover_timer(struct timer_list *t)
 {
-	struct mmc_omap_slot *slot = (struct mmc_omap_slot *) arg;
+	struct mmc_omap_slot *slot = from_timer(slot, t, cover_timer);
 	tasklet_schedule(&slot->cover_tasklet);
 }
···
 	mmc->max_seg_size = mmc->max_req_size;
 
 	if (slot->pdata->get_cover_state != NULL) {
-		setup_timer(&slot->cover_timer, mmc_omap_cover_timer,
-			    (unsigned long)slot);
+		timer_setup(&slot->cover_timer, mmc_omap_cover_timer, 0);
 		tasklet_init(&slot->cover_tasklet, mmc_omap_cover_handler,
 			     (unsigned long)slot);
 	}
···
 	INIT_WORK(&host->send_stop_work, mmc_omap_send_stop_work);
 
 	INIT_WORK(&host->cmd_abort_work, mmc_omap_abort_command);
-	setup_timer(&host->cmd_abort_timer, mmc_omap_cmd_timer,
-		    (unsigned long) host);
+	timer_setup(&host->cmd_abort_timer, mmc_omap_cmd_timer, 0);
 
 	spin_lock_init(&host->clk_lock);
-	setup_timer(&host->clk_timer, mmc_omap_clk_timer, (unsigned long) host);
+	timer_setup(&host->clk_timer, mmc_omap_clk_timer, 0);
 
 	spin_lock_init(&host->dma_lock);
 	spin_lock_init(&host->slot_lock);
+9 -26
drivers/mmc/host/omap_hsmmc.c
···
 #define OMAP_MMC_MAX_CLOCK	52000000
 #define DRIVER_NAME		"omap_hsmmc"
 
-#define VDD_1V8			1800000		/* 180000 uV */
-#define VDD_3V0			3000000		/* 300000 uV */
-#define VDD_165_195		(ffs(MMC_VDD_165_195) - 1)
-
 /*
  * One controller can have multiple slots, like on some omap boards using
  * omap.c controller driver. Luckily this is not currently done on any known
···
 	return ret;
 }
 
-static int omap_hsmmc_set_pbias(struct omap_hsmmc_host *host, bool power_on,
-				int vdd)
+static int omap_hsmmc_set_pbias(struct omap_hsmmc_host *host, bool power_on)
 {
 	int ret;
 
···
 		return 0;
 
 	if (power_on) {
-		if (vdd <= VDD_165_195)
-			ret = regulator_set_voltage(host->pbias, VDD_1V8,
-						    VDD_1V8);
-		else
-			ret = regulator_set_voltage(host->pbias, VDD_3V0,
-						    VDD_3V0);
-		if (ret < 0) {
-			dev_err(host->dev, "pbias set voltage fail\n");
-			return ret;
-		}
-
 		if (host->pbias_enabled == 0) {
 			ret = regulator_enable(host->pbias);
 			if (ret) {
···
 	return 0;
 }
 
-static int omap_hsmmc_set_power(struct omap_hsmmc_host *host, int power_on,
-				int vdd)
+static int omap_hsmmc_set_power(struct omap_hsmmc_host *host, int power_on)
 {
 	struct mmc_host *mmc = host->mmc;
 	int ret = 0;
···
 	if (IS_ERR(mmc->supply.vmmc))
 		return 0;
 
-	ret = omap_hsmmc_set_pbias(host, false, 0);
+	ret = omap_hsmmc_set_pbias(host, false);
 	if (ret)
 		return ret;
···
 		if (ret)
 			return ret;
 
-		ret = omap_hsmmc_set_pbias(host, true, vdd);
+		ret = omap_hsmmc_set_pbias(host, true);
 		if (ret)
 			goto err_set_voltage;
 	} else {
···
 
 	ret = mmc_regulator_get_supply(mmc);
-	if (ret == -EPROBE_DEFER)
+	if (ret)
 		return ret;
 
 	/* Allow an aux regulator */
···
 		clk_disable_unprepare(host->dbclk);
 
 	/* Turn the power off */
-	ret = omap_hsmmc_set_power(host, 0, 0);
+	ret = omap_hsmmc_set_power(host, 0);
 
 	/* Turn the power ON with given VDD 1.8 or 3.0v */
 	if (!ret)
-		ret = omap_hsmmc_set_power(host, 1, vdd);
+		ret = omap_hsmmc_set_power(host, 1);
 	if (host->dbclk)
 		clk_prepare_enable(host->dbclk);
···
 	if (ios->power_mode != host->power_mode) {
 		switch (ios->power_mode) {
 		case MMC_POWER_OFF:
-			omap_hsmmc_set_power(host, 0, 0);
+			omap_hsmmc_set_power(host, 0);
 			break;
 		case MMC_POWER_UP:
-			omap_hsmmc_set_power(host, 1, ios->vdd);
+			omap_hsmmc_set_power(host, 1);
 			break;
 		case MMC_POWER_ON:
 			do_send_init_stream = 1;
+1
drivers/mmc/host/renesas_sdhi_internal_dmac.c
···
 static const struct of_device_id renesas_sdhi_internal_dmac_of_match[] = {
 	{ .compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_gen3_compatible, },
 	{ .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_gen3_compatible, },
+	{ .compatible = "renesas,rcar-gen3-sdhi", .data = &of_rcar_gen3_compatible, },
 	{},
 };
 MODULE_DEVICE_TABLE(of, renesas_sdhi_internal_dmac_of_match);
+4 -1
drivers/mmc/host/renesas_sdhi_sys_dmac.c
···
 };
 
 static const struct of_device_id renesas_sdhi_sys_dmac_of_match[] = {
-	{ .compatible = "renesas,sdhi-shmobile" },
 	{ .compatible = "renesas,sdhi-sh73a0", .data = &of_default_cfg, },
 	{ .compatible = "renesas,sdhi-r8a73a4", .data = &of_default_cfg, },
 	{ .compatible = "renesas,sdhi-r8a7740", .data = &of_default_cfg, },
···
 	{ .compatible = "renesas,sdhi-r8a7794", .data = &of_rcar_gen2_compatible, },
 	{ .compatible = "renesas,sdhi-r8a7795", .data = &of_rcar_gen3_compatible, },
 	{ .compatible = "renesas,sdhi-r8a7796", .data = &of_rcar_gen3_compatible, },
+	{ .compatible = "renesas,rcar-gen1-sdhi", .data = &of_rcar_gen1_compatible, },
+	{ .compatible = "renesas,rcar-gen2-sdhi", .data = &of_rcar_gen2_compatible, },
+	{ .compatible = "renesas,rcar-gen3-sdhi", .data = &of_rcar_gen3_compatible, },
+	{ .compatible = "renesas,sdhi-shmobile" },
 	{},
 };
 MODULE_DEVICE_TABLE(of, renesas_sdhi_sys_dmac_of_match);
+18 -20
drivers/mmc/host/rtsx_pci_sdmmc.c
···
 			  u8 sample_point, bool rx)
 {
 	struct rtsx_pcr *pcr = host->pcr;
-	int err;
 
 	dev_dbg(sdmmc_dev(host), "%s(%s): sample_point = %d\n",
 		__func__, rx ? "RX" : "TX", sample_point);
 
-	rtsx_pci_init_cmd(pcr);
-
-	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, CLK_CTL, CHANGE_CLK, CHANGE_CLK);
+	rtsx_pci_write_register(pcr, CLK_CTL, CHANGE_CLK, CHANGE_CLK);
 	if (rx)
-		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD,
-				 SD_VPRX_CTL, 0x1F, sample_point);
+		rtsx_pci_write_register(pcr, SD_VPRX_CTL,
+					PHASE_SELECT_MASK, sample_point);
 	else
-		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD,
-				 SD_VPTX_CTL, 0x1F, sample_point);
-	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SD_VPCLK0_CTL, PHASE_NOT_RESET, 0);
-	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SD_VPCLK0_CTL,
-			 PHASE_NOT_RESET, PHASE_NOT_RESET);
-	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, CLK_CTL, CHANGE_CLK, 0);
-	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SD_CFG1, SD_ASYNC_FIFO_NOT_RST, 0);
-
-	err = rtsx_pci_send_cmd(pcr, 100);
-	if (err < 0)
-		return err;
+		rtsx_pci_write_register(pcr, SD_VPTX_CTL,
+					PHASE_SELECT_MASK, sample_point);
+	rtsx_pci_write_register(pcr, SD_VPCLK0_CTL, PHASE_NOT_RESET, 0);
+	rtsx_pci_write_register(pcr, SD_VPCLK0_CTL, PHASE_NOT_RESET,
+				PHASE_NOT_RESET);
+	rtsx_pci_write_register(pcr, CLK_CTL, CHANGE_CLK, 0);
+	rtsx_pci_write_register(pcr, SD_CFG1, SD_ASYNC_FIFO_NOT_RST, 0);
 
 	return 0;
 }
···
 {
 	int err;
 	struct mmc_command cmd = {};
+	struct rtsx_pcr *pcr = host->pcr;
 
-	err = sd_change_phase(host, sample_point, true);
-	if (err < 0)
-		return err;
+	sd_change_phase(host, sample_point, true);
+
+	rtsx_pci_write_register(pcr, SD_CFG3, SD_RSP_80CLK_TIMEOUT_EN,
+				SD_RSP_80CLK_TIMEOUT_EN);
 
 	cmd.opcode = opcode;
 	err = sd_read_data(host, &cmd, 0x40, NULL, 0, 100);
···
 		/* Wait till SD DATA IDLE */
 		sd_wait_data_idle(host);
 		sd_clear_error(host);
+		rtsx_pci_write_register(pcr, SD_CFG3,
+					SD_RSP_80CLK_TIMEOUT_EN, 0);
 		return err;
 	}
 
+	rtsx_pci_write_register(pcr, SD_CFG3, SD_RSP_80CLK_TIMEOUT_EN, 0);
 	return 0;
 }
+129 -45
drivers/mmc/host/sdhci-acpi.c
···
 	unsigned int	caps2;
 	mmc_pm_flag_t	pm_caps;
 	unsigned int	flags;
+	size_t		priv_size;
 	int (*probe_slot)(struct platform_device *, const char *, const char *);
 	int (*remove_slot)(struct platform_device *);
 };
···
 	const struct sdhci_acpi_slot	*slot;
 	struct platform_device		*pdev;
 	bool				use_runtime_pm;
+	unsigned long			private[0] ____cacheline_aligned;
 };
+
+static inline void *sdhci_acpi_priv(struct sdhci_acpi_host *c)
+{
+	return (void *)c->private;
+}

 static inline bool sdhci_acpi_flag(struct sdhci_acpi_host *c, unsigned int flag)
 {
 	return c->slot && (c->slot->flags & flag);
+}
+
+enum {
+	INTEL_DSM_FNS		= 0,
+	INTEL_DSM_V18_SWITCH	= 3,
+	INTEL_DSM_V33_SWITCH	= 4,
+};
+
+struct intel_host {
+	u32	dsm_fns;
+};
+
+static const guid_t intel_dsm_guid =
+	GUID_INIT(0xF6C13EA5, 0x65CD, 0x461F,
+		  0xAB, 0x7A, 0x29, 0xF7, 0xE8, 0xD5, 0xBD, 0x61);
+
+static int __intel_dsm(struct intel_host *intel_host, struct device *dev,
+		       unsigned int fn, u32 *result)
+{
+	union acpi_object *obj;
+	int err = 0;
+
+	obj = acpi_evaluate_dsm(ACPI_HANDLE(dev), &intel_dsm_guid, 0, fn, NULL);
+	if (!obj)
+		return -EOPNOTSUPP;
+
+	if (obj->type == ACPI_TYPE_INTEGER) {
+		*result = obj->integer.value;
+	} else if (obj->type == ACPI_TYPE_BUFFER && obj->buffer.length > 0) {
+		size_t len = min_t(size_t, obj->buffer.length, 4);
+
+		*result = 0;
+		memcpy(result, obj->buffer.pointer, len);
+	} else {
+		dev_err(dev, "%s DSM fn %u obj->type %d obj->buffer.length %d\n",
+			__func__, fn, obj->type, obj->buffer.length);
+		err = -EINVAL;
+	}
+
+	ACPI_FREE(obj);
+
+	return err;
+}
+
+static int intel_dsm(struct intel_host *intel_host, struct device *dev,
+		     unsigned int fn, u32 *result)
+{
+	if (fn > 31 || !(intel_host->dsm_fns & (1 << fn)))
+		return -EOPNOTSUPP;
+
+	return __intel_dsm(intel_host, dev, fn, result);
+}
+
+static void intel_dsm_init(struct intel_host *intel_host, struct device *dev,
+			   struct mmc_host *mmc)
+{
+	int err;
+
+	err = __intel_dsm(intel_host, dev, INTEL_DSM_FNS, &intel_host->dsm_fns);
+	if (err) {
+		pr_debug("%s: DSM not supported, error %d\n",
+			 mmc_hostname(mmc), err);
+		return;
+	}
+
+	pr_debug("%s: DSM function mask %#x\n",
+		 mmc_hostname(mmc), intel_host->dsm_fns);
+}
+
+static int intel_start_signal_voltage_switch(struct mmc_host *mmc,
+					     struct mmc_ios *ios)
+{
+	struct device *dev = mmc_dev(mmc);
+	struct sdhci_acpi_host *c = dev_get_drvdata(dev);
+	struct intel_host *intel_host = sdhci_acpi_priv(c);
+	unsigned int fn;
+	u32 result = 0;
+	int err;
+
+	err = sdhci_start_signal_voltage_switch(mmc, ios);
+	if (err)
+		return err;
+
+	switch (ios->signal_voltage) {
+	case MMC_SIGNAL_VOLTAGE_330:
+		fn = INTEL_DSM_V33_SWITCH;
+		break;
+	case MMC_SIGNAL_VOLTAGE_180:
+		fn = INTEL_DSM_V18_SWITCH;
+		break;
+	default:
+		return 0;
+	}
+
+	err = intel_dsm(intel_host, dev, fn, &result);
+	pr_debug("%s: %s DSM fn %u error %d result %u\n",
+		 mmc_hostname(mmc), __func__, fn, err, result);
+
+	return 0;
 }

 static void sdhci_acpi_int_hw_reset(struct sdhci_host *host)
···
 	return ret;
 }

-static int sdhci_acpi_emmc_probe_slot(struct platform_device *pdev,
-				      const char *hid, const char *uid)
+static int intel_probe_slot(struct platform_device *pdev, const char *hid,
+			    const char *uid)
 {
 	struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
-	struct sdhci_host *host;
-
-	if (!c || !c->host)
-		return 0;
-
-	host = c->host;
-
-	/* Platform specific code during emmc probe slot goes here */
+	struct intel_host *intel_host = sdhci_acpi_priv(c);
+	struct sdhci_host *host = c->host;

 	if (hid && uid && !strcmp(hid, "80860F14") && !strcmp(uid, "1") &&
 	    sdhci_readl(host, SDHCI_CAPABILITIES) == 0x446cc8b2 &&
 	    sdhci_readl(host, SDHCI_CAPABILITIES_1) == 0x00000807)
 		host->timeout_clk = 1000; /* 1000 kHz i.e. 1 MHz */

-	return 0;
-}
-
-static int sdhci_acpi_sdio_probe_slot(struct platform_device *pdev,
-				      const char *hid, const char *uid)
-{
-	struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
-
-	if (!c || !c->host)
-		return 0;
-
-	/* Platform specific code during sdio probe slot goes here */
-
-	return 0;
-}
-
-static int sdhci_acpi_sd_probe_slot(struct platform_device *pdev,
-				    const char *hid, const char *uid)
-{
-	struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
-	struct sdhci_host *host;
-
-	if (!c || !c->host || !c->slot)
-		return 0;
-
-	host = c->host;
-
-	/* Platform specific code during sd probe slot goes here */
-
 	if (hid && !strcmp(hid, "80865ACA"))
 		host->mmc_host_ops.get_cd = bxt_get_cd;
+
+	intel_dsm_init(intel_host, &pdev->dev, host->mmc);
+
+	host->mmc_host_ops.start_signal_voltage_switch =
+					intel_start_signal_voltage_switch;

 	return 0;
 }
···
 	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
 		   SDHCI_QUIRK2_STOP_WITH_TC |
 		   SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400,
-	.probe_slot	= sdhci_acpi_emmc_probe_slot,
+	.probe_slot	= intel_probe_slot,
+	.priv_size	= sizeof(struct intel_host),
 };

 static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sdio = {
···
 		   MMC_CAP_WAIT_WHILE_BUSY,
 	.flags   = SDHCI_ACPI_RUNTIME_PM,
 	.pm_caps = MMC_PM_KEEP_POWER,
-	.probe_slot	= sdhci_acpi_sdio_probe_slot,
+	.probe_slot	= intel_probe_slot,
+	.priv_size	= sizeof(struct intel_host),
 };

 static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sd = {
···
 	.quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON |
 		   SDHCI_QUIRK2_STOP_WITH_TC,
 	.caps    = MMC_CAP_WAIT_WHILE_BUSY | MMC_CAP_AGGRESSIVE_PM,
-	.probe_slot	= sdhci_acpi_sd_probe_slot,
+	.probe_slot	= intel_probe_slot,
+	.priv_size	= sizeof(struct intel_host),
 };

 static const struct sdhci_acpi_slot sdhci_acpi_slot_qcom_sd_3v = {
···
 static int sdhci_acpi_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
+	const struct sdhci_acpi_slot *slot;
 	struct acpi_device *device, *child;
 	struct sdhci_acpi_host *c;
 	struct sdhci_host *host;
 	struct resource *iomem;
 	resource_size_t len;
+	size_t priv_size;
 	const char *hid;
 	const char *uid;
 	int err;
···
 		return -ENODEV;

 	hid = acpi_device_hid(device);
-	uid = device->pnp.unique_id;
+	uid = acpi_device_uid(device);
+
+	slot = sdhci_acpi_get_slot(hid, uid);

 	/* Power on the SDHCI controller and its children */
 	acpi_device_fix_up_power(device);
···
 	if (!devm_request_mem_region(dev, iomem->start, len, dev_name(dev)))
 		return -ENOMEM;

-	host = sdhci_alloc_host(dev, sizeof(struct sdhci_acpi_host));
+	priv_size = slot ? slot->priv_size : 0;
+	host = sdhci_alloc_host(dev, sizeof(struct sdhci_acpi_host) + priv_size);
 	if (IS_ERR(host))
 		return PTR_ERR(host);

 	c = sdhci_priv(host);
 	c->host = host;
-	c->slot = sdhci_acpi_get_slot(hid, uid);
+	c->slot = slot;
 	c->pdev = pdev;
 	c->use_runtime_pm = sdhci_acpi_flag(c, SDHCI_ACPI_RUNTIME_PM);
+14 -14
drivers/mmc/host/sdhci-cadence.c
···
  * GNU General Public License for more details.
  */

+#include <linux/bitfield.h>
 #include <linux/bitops.h>
 #include <linux/iopoll.h>
 #include <linux/module.h>
···
 #define SDHCI_CDNS_HRS04_ACK		BIT(26)
 #define SDHCI_CDNS_HRS04_RD		BIT(25)
 #define SDHCI_CDNS_HRS04_WR		BIT(24)
-#define SDHCI_CDNS_HRS04_RDATA_SHIFT	16
-#define SDHCI_CDNS_HRS04_WDATA_SHIFT	8
-#define SDHCI_CDNS_HRS04_ADDR_SHIFT	0
+#define SDHCI_CDNS_HRS04_RDATA		GENMASK(23, 16)
+#define SDHCI_CDNS_HRS04_WDATA		GENMASK(15, 8)
+#define SDHCI_CDNS_HRS04_ADDR		GENMASK(5, 0)

 #define SDHCI_CDNS_HRS06		0x18	/* eMMC control */
 #define SDHCI_CDNS_HRS06_TUNE_UP	BIT(15)
-#define SDHCI_CDNS_HRS06_TUNE_SHIFT	8
-#define SDHCI_CDNS_HRS06_TUNE_MASK	0x3f
-#define SDHCI_CDNS_HRS06_MODE_MASK	0x7
+#define SDHCI_CDNS_HRS06_TUNE		GENMASK(13, 8)
+#define SDHCI_CDNS_HRS06_MODE		GENMASK(2, 0)
 #define SDHCI_CDNS_HRS06_MODE_SD	0x0
 #define SDHCI_CDNS_HRS06_MODE_MMC_SDR	0x2
 #define SDHCI_CDNS_HRS06_MODE_MMC_DDR	0x3
···
 	u32 tmp;
 	int ret;

-	tmp = (data << SDHCI_CDNS_HRS04_WDATA_SHIFT) |
-	      (addr << SDHCI_CDNS_HRS04_ADDR_SHIFT);
+	tmp = FIELD_PREP(SDHCI_CDNS_HRS04_WDATA, data) |
+	      FIELD_PREP(SDHCI_CDNS_HRS04_ADDR, addr);
 	writel(tmp, reg);

 	tmp |= SDHCI_CDNS_HRS04_WR;
···
 	/* The speed mode for eMMC is selected by HRS06 register */
 	tmp = readl(priv->hrs_addr + SDHCI_CDNS_HRS06);
-	tmp &= ~SDHCI_CDNS_HRS06_MODE_MASK;
-	tmp |= mode;
+	tmp &= ~SDHCI_CDNS_HRS06_MODE;
+	tmp |= FIELD_PREP(SDHCI_CDNS_HRS06_MODE, mode);
 	writel(tmp, priv->hrs_addr + SDHCI_CDNS_HRS06);
 }
···
 	u32 tmp;

 	tmp = readl(priv->hrs_addr + SDHCI_CDNS_HRS06);
-	return tmp & SDHCI_CDNS_HRS06_MODE_MASK;
+	return FIELD_GET(SDHCI_CDNS_HRS06_MODE, tmp);
 }

 static void sdhci_cdns_set_uhs_signaling(struct sdhci_host *host,
···
 	void __iomem *reg = priv->hrs_addr + SDHCI_CDNS_HRS06;
 	u32 tmp;

-	if (WARN_ON(val > SDHCI_CDNS_HRS06_TUNE_MASK))
+	if (WARN_ON(!FIELD_FIT(SDHCI_CDNS_HRS06_TUNE, val)))
 		return -EINVAL;

 	tmp = readl(reg);
-	tmp &= ~(SDHCI_CDNS_HRS06_TUNE_MASK << SDHCI_CDNS_HRS06_TUNE_SHIFT);
-	tmp |= val << SDHCI_CDNS_HRS06_TUNE_SHIFT;
+	tmp &= ~SDHCI_CDNS_HRS06_TUNE;
+	tmp |= FIELD_PREP(SDHCI_CDNS_HRS06_TUNE, val);
 	tmp |= SDHCI_CDNS_HRS06_TUNE_UP;
 	writel(tmp, reg);
+275 -51
drivers/mmc/host/sdhci-msm.c
···
 #define CMUX_SHIFT_PHASE_MASK	(7 << CMUX_SHIFT_PHASE_SHIFT)

 #define MSM_MMC_AUTOSUSPEND_DELAY_MS	50
+
+/* Timeout value to avoid infinite waiting for pwr_irq */
+#define MSM_PWR_IRQ_TIMEOUT_MS	5000
+
 struct sdhci_msm_host {
 	struct platform_device *pdev;
 	void __iomem *core_mem;	/* MSM SDCC mapped address */
 	int pwr_irq;		/* power irq */
-	struct clk *clk;	/* main SD/MMC bus clock */
-	struct clk *pclk;	/* SDHC peripheral bus clock */
 	struct clk *bus_clk;	/* SDHC bus voter clock */
 	struct clk *xo_clk;	/* TCXO clk needed for FLL feature of cm_dll */
+	struct clk_bulk_data bulk_clks[4]; /* core, iface, cal, sleep clocks */
 	unsigned long clk_rate;
 	struct mmc_host *mmc;
 	bool use_14lpp_dll_reset;
···
 	bool calibration_done;
 	u8 saved_tuning_phase;
 	bool use_cdclp533;
+	u32 curr_pwr_state;
+	u32 curr_io_level;
+	wait_queue_head_t pwr_irq_wait;
+	bool pwr_irq_flag;
 };

 static unsigned int msm_get_clock_rate_for_bus_mode(struct sdhci_host *host,
···
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
 	struct mmc_ios curr_ios = host->mmc->ios;
+	struct clk *core_clk = msm_host->bulk_clks[0].clk;
 	int rc;

 	clock = msm_get_clock_rate_for_bus_mode(host, clock);
-	rc = clk_set_rate(msm_host->clk, clock);
+	rc = clk_set_rate(core_clk, clock);
 	if (rc) {
 		pr_err("%s: Failed to set clock at rate %u at timing %d\n",
 		       mmc_hostname(host->mmc), clock,
···
 	}
 	msm_host->clk_rate = clock;
 	pr_debug("%s: Setting clock at rate %lu at timing %d\n",
-		 mmc_hostname(host->mmc), clk_get_rate(msm_host->clk),
+		 mmc_hostname(host->mmc), clk_get_rate(core_clk),
 		 curr_ios.timing);
 }
···
 		sdhci_msm_hs400(host, &mmc->ios);
 }

-static void sdhci_msm_voltage_switch(struct sdhci_host *host)
+static inline void sdhci_msm_init_pwr_irq_wait(struct sdhci_msm_host *msm_host)
+{
+	init_waitqueue_head(&msm_host->pwr_irq_wait);
+}
+
+static inline void sdhci_msm_complete_pwr_irq_wait(
+		struct sdhci_msm_host *msm_host)
+{
+	wake_up(&msm_host->pwr_irq_wait);
+}
+
+/*
+ * sdhci_msm_check_power_status API should be called when register writes
+ * which can toggle sdhci IO bus ON/OFF or change IO lines HIGH/LOW happen.
+ * The state to which the register writes will change the IO lines should be
+ * passed as the argument req_type. This API will check whether the IO line's
+ * state is already the expected state and will wait for power irq only if
+ * power irq is expected to be triggered based on the current IO line state
+ * and expected IO line state.
+ */
+static void sdhci_msm_check_power_status(struct sdhci_host *host, u32 req_type)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+	bool done = false;
+
+	pr_debug("%s: %s: request %d curr_pwr_state %x curr_io_level %x\n",
+		 mmc_hostname(host->mmc), __func__, req_type,
+		 msm_host->curr_pwr_state, msm_host->curr_io_level);
+
+	/*
+	 * The IRQ for request type IO High/LOW will be generated when -
+	 * there is a state change in 1.8V enable bit (bit 3) of
+	 * SDHCI_HOST_CONTROL2 register. The reset state of that bit is 0
+	 * which indicates 3.3V IO voltage. So, when MMC core layer tries
+	 * to set it to 3.3V before card detection happens, the
+	 * IRQ doesn't get triggered as there is no state change in this bit.
+	 * The driver already handles this case by changing the IO voltage
+	 * level to high as part of controller power up sequence. Hence, check
+	 * for host->pwr to handle a case where IO voltage high request is
+	 * issued even before controller power up.
+	 */
+	if ((req_type & REQ_IO_HIGH) && !host->pwr) {
+		pr_debug("%s: do not wait for power IRQ that never comes, req_type: %d\n",
+			 mmc_hostname(host->mmc), req_type);
+		return;
+	}
+	if ((req_type & msm_host->curr_pwr_state) ||
+	    (req_type & msm_host->curr_io_level))
+		done = true;
+	/*
+	 * This is needed here to handle cases where register writes will
+	 * not change the current bus state or io level of the controller.
+	 * In this case, no power irq will be triggered and we should
+	 * not wait.
+	 */
+	if (!done) {
+		if (!wait_event_timeout(msm_host->pwr_irq_wait,
+				msm_host->pwr_irq_flag,
+				msecs_to_jiffies(MSM_PWR_IRQ_TIMEOUT_MS)))
+			dev_warn(&msm_host->pdev->dev,
+				 "%s: pwr_irq for req: (%d) timed out\n",
+				 mmc_hostname(host->mmc), req_type);
+	}
+	pr_debug("%s: %s: request %d done\n", mmc_hostname(host->mmc),
+		 __func__, req_type);
+}
+
+static void sdhci_msm_dump_pwr_ctrl_regs(struct sdhci_host *host)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+
+	pr_err("%s: PWRCTL_STATUS: 0x%08x | PWRCTL_MASK: 0x%08x | PWRCTL_CTL: 0x%08x\n",
+	       mmc_hostname(host->mmc),
+	       readl_relaxed(msm_host->core_mem + CORE_PWRCTL_STATUS),
+	       readl_relaxed(msm_host->core_mem + CORE_PWRCTL_MASK),
+	       readl_relaxed(msm_host->core_mem + CORE_PWRCTL_CTL));
+}
+
+static void sdhci_msm_handle_pwr_irq(struct sdhci_host *host, int irq)
 {
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
 	u32 irq_status, irq_ack = 0;
+	int retry = 10;
+	int pwr_state = 0, io_level = 0;

 	irq_status = readl_relaxed(msm_host->core_mem + CORE_PWRCTL_STATUS);
 	irq_status &= INT_MASK;

 	writel_relaxed(irq_status, msm_host->core_mem + CORE_PWRCTL_CLEAR);

-	if (irq_status & (CORE_PWRCTL_BUS_ON | CORE_PWRCTL_BUS_OFF))
+	/*
+	 * There is a rare HW scenario where the first clear pulse could be
+	 * lost when actual reset and clear/read of status register is
+	 * happening at a time. Hence, retry for at least 10 times to make
+	 * sure status register is cleared. Otherwise, this will result in
+	 * a spurious power IRQ resulting in system instability.
+	 */
+	while (irq_status & readl_relaxed(msm_host->core_mem +
+				CORE_PWRCTL_STATUS)) {
+		if (retry == 0) {
+			pr_err("%s: Timed out clearing (0x%x) pwrctl status register\n",
+			       mmc_hostname(host->mmc), irq_status);
+			sdhci_msm_dump_pwr_ctrl_regs(host);
+			WARN_ON(1);
+			break;
+		}
+		writel_relaxed(irq_status,
+			       msm_host->core_mem + CORE_PWRCTL_CLEAR);
+		retry--;
+		udelay(10);
+	}
+
+	/* Handle BUS ON/OFF */
+	if (irq_status & CORE_PWRCTL_BUS_ON) {
+		pwr_state = REQ_BUS_ON;
+		io_level = REQ_IO_HIGH;
 		irq_ack |= CORE_PWRCTL_BUS_SUCCESS;
-	if (irq_status & (CORE_PWRCTL_IO_LOW | CORE_PWRCTL_IO_HIGH))
+	}
+	if (irq_status & CORE_PWRCTL_BUS_OFF) {
+		pwr_state = REQ_BUS_OFF;
+		io_level = REQ_IO_LOW;
+		irq_ack |= CORE_PWRCTL_BUS_SUCCESS;
+	}
+	/* Handle IO LOW/HIGH */
+	if (irq_status & CORE_PWRCTL_IO_LOW) {
+		io_level = REQ_IO_LOW;
 		irq_ack |= CORE_PWRCTL_IO_SUCCESS;
+	}
+	if (irq_status & CORE_PWRCTL_IO_HIGH) {
+		io_level = REQ_IO_HIGH;
+		irq_ack |= CORE_PWRCTL_IO_SUCCESS;
+	}

 	/*
 	 * The driver has to acknowledge the interrupt, switch voltages and
···
 	 * switches are handled by the sdhci core, so just report success.
 	 */
 	writel_relaxed(irq_ack, msm_host->core_mem + CORE_PWRCTL_CTL);
+
+	if (pwr_state)
+		msm_host->curr_pwr_state = pwr_state;
+	if (io_level)
+		msm_host->curr_io_level = io_level;
+
+	pr_debug("%s: %s: Handled IRQ(%d), irq_status=0x%x, ack=0x%x\n",
+		 mmc_hostname(msm_host->mmc), __func__, irq, irq_status,
+		 irq_ack);
 }

 static irqreturn_t sdhci_msm_pwr_irq(int irq, void *data)
 {
 	struct sdhci_host *host = (struct sdhci_host *)data;
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);

-	sdhci_msm_voltage_switch(host);
+	sdhci_msm_handle_pwr_irq(host, irq);
+	msm_host->pwr_irq_flag = 1;
+	sdhci_msm_complete_pwr_irq_wait(msm_host);

 	return IRQ_HANDLED;
 }
···
 {
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+	struct clk *core_clk = msm_host->bulk_clks[0].clk;

-	return clk_round_rate(msm_host->clk, ULONG_MAX);
+	return clk_round_rate(core_clk, ULONG_MAX);
 }

 static unsigned int sdhci_msm_get_min_clock(struct sdhci_host *host)
···
 	__sdhci_msm_set_clock(host, clock);
 }

+/*
+ * Platform specific register write functions. This is so that, if any
+ * register write needs to be followed up by platform specific actions,
+ * they can be added here. These functions can go to sleep when writes
+ * to certain registers are done.
+ * These functions are relying on sdhci_set_ios not using spinlock.
+ */
+static int __sdhci_msm_check_write(struct sdhci_host *host, u16 val, int reg)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+	u32 req_type = 0;
+
+	switch (reg) {
+	case SDHCI_HOST_CONTROL2:
+		req_type = (val & SDHCI_CTRL_VDD_180) ? REQ_IO_LOW :
+			REQ_IO_HIGH;
+		break;
+	case SDHCI_SOFTWARE_RESET:
+		if (host->pwr && (val & SDHCI_RESET_ALL))
+			req_type = REQ_BUS_OFF;
+		break;
+	case SDHCI_POWER_CONTROL:
+		req_type = !val ? REQ_BUS_OFF : REQ_BUS_ON;
+		break;
+	}
+
+	if (req_type) {
+		msm_host->pwr_irq_flag = 0;
+		/*
+		 * Since this register write may trigger a power irq, ensure
+		 * all previous register writes are complete by this point.
+		 */
+		mb();
+	}
+	return req_type;
+}
+
+/* This function may sleep */
+static void sdhci_msm_writew(struct sdhci_host *host, u16 val, int reg)
+{
+	u32 req_type = 0;
+
+	req_type = __sdhci_msm_check_write(host, val, reg);
+	writew_relaxed(val, host->ioaddr + reg);
+
+	if (req_type)
+		sdhci_msm_check_power_status(host, req_type);
+}
+
+/* This function may sleep */
+static void sdhci_msm_writeb(struct sdhci_host *host, u8 val, int reg)
+{
+	u32 req_type = 0;
+
+	req_type = __sdhci_msm_check_write(host, val, reg);
+
+	writeb_relaxed(val, host->ioaddr + reg);
+
+	if (req_type)
+		sdhci_msm_check_power_status(host, req_type);
+}
+
 static const struct of_device_id sdhci_msm_dt_match[] = {
 	{ .compatible = "qcom,sdhci-msm-v4" },
 	{},
···
 	.get_max_clock = sdhci_msm_get_max_clock,
 	.set_bus_width = sdhci_set_bus_width,
 	.set_uhs_signaling = sdhci_msm_set_uhs_signaling,
-	.voltage_switch = sdhci_msm_voltage_switch,
+	.write_w = sdhci_msm_writew,
+	.write_b = sdhci_msm_writeb,
 };

 static const struct sdhci_pltfm_data sdhci_msm_pdata = {
···
 	struct sdhci_pltfm_host *pltfm_host;
 	struct sdhci_msm_host *msm_host;
 	struct resource *core_memres;
+	struct clk *clk;
 	int ret;
 	u16 host_version, core_minor;
 	u32 core_version, config;
···
 	}

 	/* Setup main peripheral bus clock */
-	msm_host->pclk = devm_clk_get(&pdev->dev, "iface");
-	if (IS_ERR(msm_host->pclk)) {
-		ret = PTR_ERR(msm_host->pclk);
+	clk = devm_clk_get(&pdev->dev, "iface");
+	if (IS_ERR(clk)) {
+		ret = PTR_ERR(clk);
 		dev_err(&pdev->dev, "Peripheral clk setup failed (%d)\n", ret);
 		goto bus_clk_disable;
 	}
-
-	ret = clk_prepare_enable(msm_host->pclk);
-	if (ret)
-		goto bus_clk_disable;
+	msm_host->bulk_clks[1].clk = clk;

 	/* Setup SDC MMC clock */
-	msm_host->clk = devm_clk_get(&pdev->dev, "core");
-	if (IS_ERR(msm_host->clk)) {
-		ret = PTR_ERR(msm_host->clk);
+	clk = devm_clk_get(&pdev->dev, "core");
+	if (IS_ERR(clk)) {
+		ret = PTR_ERR(clk);
 		dev_err(&pdev->dev, "SDC MMC clk setup failed (%d)\n", ret);
-		goto pclk_disable;
+		goto bus_clk_disable;
 	}
+	msm_host->bulk_clks[0].clk = clk;
+
+	/* Vote for maximum clock rate for maximum performance */
+	ret = clk_set_rate(clk, INT_MAX);
+	if (ret)
+		dev_warn(&pdev->dev, "core clock boost failed\n");
+
+	clk = devm_clk_get(&pdev->dev, "cal");
+	if (IS_ERR(clk))
+		clk = NULL;
+	msm_host->bulk_clks[2].clk = clk;
+
+	clk = devm_clk_get(&pdev->dev, "sleep");
+	if (IS_ERR(clk))
+		clk = NULL;
+	msm_host->bulk_clks[3].clk = clk;
+
+	ret = clk_bulk_prepare_enable(ARRAY_SIZE(msm_host->bulk_clks),
+				      msm_host->bulk_clks);
+	if (ret)
+		goto bus_clk_disable;

 	/*
 	 * xo clock is needed for FLL feature of cm_dll.
···
 		ret = PTR_ERR(msm_host->xo_clk);
 		dev_warn(&pdev->dev, "TCXO clk not present (%d)\n", ret);
 	}
-
-	/* Vote for maximum clock rate for maximum performance */
-	ret = clk_set_rate(msm_host->clk, INT_MAX);
-	if (ret)
-		dev_warn(&pdev->dev, "core clock boost failed\n");
-
-	ret = clk_prepare_enable(msm_host->clk);
-	if (ret)
-		goto pclk_disable;

 	core_memres = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 	msm_host->core_mem = devm_ioremap_resource(&pdev->dev, core_memres);
···
 			CORE_VENDOR_SPEC_CAPABILITIES0);
 	}

+	/*
+	 * Power on reset state may trigger power irq if previous status of
+	 * PWRCTL was either BUS_ON or IO_HIGH_V. So before enabling pwr irq
+	 * interrupt in GIC, any pending power irq interrupt should be
+	 * acknowledged. Otherwise power irq interrupt handler would be
+	 * fired prematurely.
+	 */
+	sdhci_msm_handle_pwr_irq(host, 0);
+
+	/*
+	 * Ensure that above writes are propagated before interrupt enablement
+	 * in GIC.
+	 */
+	mb();
+
 	/* Setup IRQ for handling power/voltage tasks with PMIC */
 	msm_host->pwr_irq = platform_get_irq_byname(pdev, "pwr_irq");
 	if (msm_host->pwr_irq < 0) {
···
 		ret = msm_host->pwr_irq;
 		goto clk_disable;
 	}
+
+	sdhci_msm_init_pwr_irq_wait(msm_host);
+	/* Enable pwr irq interrupts */
+	writel_relaxed(INT_MASK, msm_host->core_mem + CORE_PWRCTL_MASK);

 	ret = devm_request_threaded_irq(&pdev->dev, msm_host->pwr_irq, NULL,
 					sdhci_msm_pwr_irq, IRQF_ONESHOT,
···
 	pm_runtime_set_suspended(&pdev->dev);
 	pm_runtime_put_noidle(&pdev->dev);
 clk_disable:
-	clk_disable_unprepare(msm_host->clk);
-pclk_disable:
-	clk_disable_unprepare(msm_host->pclk);
+	clk_bulk_disable_unprepare(ARRAY_SIZE(msm_host->bulk_clks),
+				   msm_host->bulk_clks);
 bus_clk_disable:
 	if (!IS_ERR(msm_host->bus_clk))
 		clk_disable_unprepare(msm_host->bus_clk);
···
 	pm_runtime_disable(&pdev->dev);
 	pm_runtime_put_noidle(&pdev->dev);

-	clk_disable_unprepare(msm_host->clk);
-	clk_disable_unprepare(msm_host->pclk);
+	clk_bulk_disable_unprepare(ARRAY_SIZE(msm_host->bulk_clks),
+				   msm_host->bulk_clks);
 	if (!IS_ERR(msm_host->bus_clk))
 		clk_disable_unprepare(msm_host->bus_clk);
 	sdhci_pltfm_free(pdev);
···
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);

-	clk_disable_unprepare(msm_host->clk);
-	clk_disable_unprepare(msm_host->pclk);
+	clk_bulk_disable_unprepare(ARRAY_SIZE(msm_host->bulk_clks),
+				   msm_host->bulk_clks);

 	return 0;
 }
···
 	struct sdhci_host *host = dev_get_drvdata(dev);
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
-	int ret;

-	ret = clk_prepare_enable(msm_host->clk);
-	if (ret) {
-		dev_err(dev, "clk_enable failed for core_clk: %d\n", ret);
-		return ret;
-	}
-	ret = clk_prepare_enable(msm_host->pclk);
-	if (ret) {
-		dev_err(dev, "clk_enable failed for iface_clk: %d\n", ret);
-		clk_disable_unprepare(msm_host->clk);
-		return ret;
-	}
-
-	return 0;
+	return clk_bulk_prepare_enable(ARRAY_SIZE(msm_host->bulk_clks),
+				       msm_host->bulk_clks);
 }
 #endif
+2 -1
drivers/mmc/host/sdhci-of-at91.c
···
 	sdhci_set_power_noreg(host, mode, vdd);
 }

-void sdhci_at91_set_uhs_signaling(struct sdhci_host *host, unsigned int timing)
+static void sdhci_at91_set_uhs_signaling(struct sdhci_host *host,
+					 unsigned int timing)
 {
 	if (timing == MMC_TIMING_MMC_DDR52)
 		sdhci_writeb(host, SDMMC_MC1R_DDR, SDMMC_MC1R);
+30 -28
drivers/mmc/host/sdhci-of-esdhc.c
···
 	return clock / 256 / 16;
 }

+static void esdhc_clock_enable(struct sdhci_host *host, bool enable)
+{
+	u32 val;
+	ktime_t timeout;
+
+	val = sdhci_readl(host, ESDHC_SYSTEM_CONTROL);
+
+	if (enable)
+		val |= ESDHC_CLOCK_SDCLKEN;
+	else
+		val &= ~ESDHC_CLOCK_SDCLKEN;
+
+	sdhci_writel(host, val, ESDHC_SYSTEM_CONTROL);
+
+	/* Wait max 20 ms */
+	timeout = ktime_add_ms(ktime_get(), 20);
+	val = ESDHC_CLOCK_STABLE;
+	while (!(sdhci_readl(host, ESDHC_PRSSTAT) & val)) {
+		if (ktime_after(ktime_get(), timeout)) {
+			pr_err("%s: Internal clock never stabilised.\n",
+			       mmc_hostname(host->mmc));
+			break;
+		}
+		udelay(10);
+	}
+}
+
 static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock)
 {
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
···
 	host->mmc->actual_clock = 0;

-	if (clock == 0)
+	if (clock == 0) {
+		esdhc_clock_enable(host, false);
 		return;
+	}

 	/* Workaround to start pre_div at 2 for VNN < VENDOR_V_23 */
 	if (esdhc->vendor_ver < VENDOR_V_23)
···
 	}

 	sdhci_writel(host, ctrl, ESDHC_PROCTL);
-}
-
-static void esdhc_clock_enable(struct sdhci_host *host, bool enable)
-{
-	u32 val;
-	ktime_t timeout;
-
-	val = sdhci_readl(host, ESDHC_SYSTEM_CONTROL);
-
-	if (enable)
-		val |= ESDHC_CLOCK_SDCLKEN;
-	else
-		val &= ~ESDHC_CLOCK_SDCLKEN;
-
-	sdhci_writel(host, val, ESDHC_SYSTEM_CONTROL);
-
-	/* Wait max 20 ms */
-	timeout = ktime_add_ms(ktime_get(), 20);
-	val = ESDHC_CLOCK_STABLE;
-	while (!(sdhci_readl(host, ESDHC_PRSSTAT) & val)) {
-		if (ktime_after(ktime_get(), timeout)) {
-			pr_err("%s: Internal clock never stabilised.\n",
-				mmc_hostname(host->mmc));
-			break;
-		}
-		udelay(10);
-	}
 }

 static void esdhc_reset(struct sdhci_host *host, u8 mask)
+607
drivers/mmc/host/sdhci-omap.c
···
+ /*
+  * SDHCI Controller driver for TI's OMAP SoCs
+  *
+  * Copyright (C) 2017 Texas Instruments
+  * Author: Kishon Vijay Abraham I <kishon@ti.com>
+  *
+  * This program is free software: you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 of
+  * the License as published by the Free Software Foundation.
+  *
+  * This program is distributed in the hope that it will be useful,
+  * but WITHOUT ANY WARRANTY; without even the implied warranty of
+  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+  * GNU General Public License for more details.
+  *
+  * You should have received a copy of the GNU General Public License
+  * along with this program. If not, see <http://www.gnu.org/licenses/>.
+  */
+ 
+ #include <linux/delay.h>
+ #include <linux/mmc/slot-gpio.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/regulator/consumer.h>
+ 
+ #include "sdhci-pltfm.h"
+ 
+ #define SDHCI_OMAP_CON		0x12c
+ #define CON_DW8			BIT(5)
+ #define CON_DMA_MASTER		BIT(20)
+ #define CON_INIT		BIT(1)
+ #define CON_OD			BIT(0)
+ 
+ #define SDHCI_OMAP_CMD		0x20c
+ 
+ #define SDHCI_OMAP_HCTL		0x228
+ #define HCTL_SDBP		BIT(8)
+ #define HCTL_SDVS_SHIFT		9
+ #define HCTL_SDVS_MASK		(0x7 << HCTL_SDVS_SHIFT)
+ #define HCTL_SDVS_33		(0x7 << HCTL_SDVS_SHIFT)
+ #define HCTL_SDVS_30		(0x6 << HCTL_SDVS_SHIFT)
+ #define HCTL_SDVS_18		(0x5 << HCTL_SDVS_SHIFT)
+ 
+ #define SDHCI_OMAP_SYSCTL	0x22c
+ #define SYSCTL_CEN		BIT(2)
+ #define SYSCTL_CLKD_SHIFT	6
+ #define SYSCTL_CLKD_MASK	0x3ff
+ 
+ #define SDHCI_OMAP_STAT		0x230
+ 
+ #define SDHCI_OMAP_IE		0x234
+ #define INT_CC_EN		BIT(0)
+ 
+ #define SDHCI_OMAP_AC12		0x23c
+ #define AC12_V1V8_SIGEN		BIT(19)
+ 
+ #define SDHCI_OMAP_CAPA		0x240
+ #define CAPA_VS33		BIT(24)
+ #define CAPA_VS30		BIT(25)
+ #define CAPA_VS18		BIT(26)
+ 
+ #define SDHCI_OMAP_TIMEOUT	1	/* 1 msec */
+ 
+ #define SYSCTL_CLKD_MAX		0x3FF
+ 
+ #define IOV_1V8			1800000	/* 1800000 uV */
+ #define IOV_3V0			3000000	/* 3000000 uV */
+ #define IOV_3V3			3300000	/* 3300000 uV */
+ 
+ struct sdhci_omap_data {
+ 	u32 offset;
+ };
+ 
+ struct sdhci_omap_host {
+ 	void __iomem		*base;
+ 	struct device		*dev;
+ 	struct regulator	*pbias;
+ 	bool			pbias_enabled;
+ 	struct sdhci_host	*host;
+ 	u8			bus_mode;
+ 	u8			power_mode;
+ };
+ 
+ static inline u32 sdhci_omap_readl(struct sdhci_omap_host *host,
+ 				   unsigned int offset)
+ {
+ 	return readl(host->base + offset);
+ }
+ 
+ static inline void sdhci_omap_writel(struct sdhci_omap_host *host,
+ 				     unsigned int offset, u32 data)
+ {
+ 	writel(data, host->base + offset);
+ }
+ 
+ static int sdhci_omap_set_pbias(struct sdhci_omap_host *omap_host,
+ 				bool power_on, unsigned int iov)
+ {
+ 	int ret;
+ 	struct device *dev = omap_host->dev;
+ 
+ 	if (IS_ERR(omap_host->pbias))
+ 		return 0;
+ 
+ 	if (power_on) {
+ 		ret = regulator_set_voltage(omap_host->pbias, iov, iov);
+ 		if (ret) {
+ 			dev_err(dev, "pbias set voltage failed\n");
+ 			return ret;
+ 		}
+ 
+ 		if (omap_host->pbias_enabled)
+ 			return 0;
+ 
+ 		ret = regulator_enable(omap_host->pbias);
+ 		if (ret) {
+ 			dev_err(dev, "pbias reg enable fail\n");
+ 			return ret;
+ 		}
+ 
+ 		omap_host->pbias_enabled = true;
+ 	} else {
+ 		if (!omap_host->pbias_enabled)
+ 			return 0;
+ 
+ 		ret = regulator_disable(omap_host->pbias);
+ 		if (ret) {
+ 			dev_err(dev, "pbias reg disable fail\n");
+ 			return ret;
+ 		}
+ 		omap_host->pbias_enabled = false;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+ static int sdhci_omap_enable_iov(struct sdhci_omap_host *omap_host,
+ 				 unsigned int iov)
+ {
+ 	int ret;
+ 	struct sdhci_host *host = omap_host->host;
+ 	struct mmc_host *mmc = host->mmc;
+ 
+ 	ret = sdhci_omap_set_pbias(omap_host, false, 0);
+ 	if (ret)
+ 		return ret;
+ 
+ 	if (!IS_ERR(mmc->supply.vqmmc)) {
+ 		ret = regulator_set_voltage(mmc->supply.vqmmc, iov, iov);
+ 		if (ret) {
+ 			dev_err(mmc_dev(mmc), "vqmmc set voltage failed\n");
+ 			return ret;
+ 		}
+ 	}
+ 
+ 	ret = sdhci_omap_set_pbias(omap_host, true, iov);
+ 	if (ret)
+ 		return ret;
+ 
+ 	return 0;
+ }
+ 
+ static void sdhci_omap_conf_bus_power(struct sdhci_omap_host *omap_host,
+ 				      unsigned char signal_voltage)
+ {
+ 	u32 reg;
+ 	ktime_t timeout;
+ 
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_HCTL);
+ 	reg &= ~HCTL_SDVS_MASK;
+ 
+ 	if (signal_voltage == MMC_SIGNAL_VOLTAGE_330)
+ 		reg |= HCTL_SDVS_33;
+ 	else
+ 		reg |= HCTL_SDVS_18;
+ 
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_HCTL, reg);
+ 
+ 	reg |= HCTL_SDBP;
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_HCTL, reg);
+ 
+ 	/* wait 1ms */
+ 	timeout = ktime_add_ms(ktime_get(), SDHCI_OMAP_TIMEOUT);
+ 	while (!(sdhci_omap_readl(omap_host, SDHCI_OMAP_HCTL) & HCTL_SDBP)) {
+ 		if (WARN_ON(ktime_after(ktime_get(), timeout)))
+ 			return;
+ 		usleep_range(5, 10);
+ 	}
+ }
+ 
+ static int sdhci_omap_start_signal_voltage_switch(struct mmc_host *mmc,
+ 						  struct mmc_ios *ios)
+ {
+ 	u32 reg;
+ 	int ret;
+ 	unsigned int iov;
+ 	struct sdhci_host *host = mmc_priv(mmc);
+ 	struct sdhci_pltfm_host *pltfm_host;
+ 	struct sdhci_omap_host *omap_host;
+ 	struct device *dev;
+ 
+ 	pltfm_host = sdhci_priv(host);
+ 	omap_host = sdhci_pltfm_priv(pltfm_host);
+ 	dev = omap_host->dev;
+ 
+ 	if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330) {
+ 		reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_CAPA);
+ 		if (!(reg & CAPA_VS33))
+ 			return -EOPNOTSUPP;
+ 
+ 		sdhci_omap_conf_bus_power(omap_host, ios->signal_voltage);
+ 
+ 		reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_AC12);
+ 		reg &= ~AC12_V1V8_SIGEN;
+ 		sdhci_omap_writel(omap_host, SDHCI_OMAP_AC12, reg);
+ 
+ 		iov = IOV_3V3;
+ 	} else if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_180) {
+ 		reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_CAPA);
+ 		if (!(reg & CAPA_VS18))
+ 			return -EOPNOTSUPP;
+ 
+ 		sdhci_omap_conf_bus_power(omap_host, ios->signal_voltage);
+ 
+ 		reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_AC12);
+ 		reg |= AC12_V1V8_SIGEN;
+ 		sdhci_omap_writel(omap_host, SDHCI_OMAP_AC12, reg);
+ 
+ 		iov = IOV_1V8;
+ 	} else {
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+ 	ret = sdhci_omap_enable_iov(omap_host, iov);
+ 	if (ret) {
+ 		dev_err(dev, "failed to switch IO voltage to %dmV\n",
+ 			iov / 1000);
+ 		return ret;
+ 	}
+ 
+ 	dev_dbg(dev, "IO voltage switched to %dmV\n", iov / 1000);
+ 	return 0;
+ }
+ 
+ static void sdhci_omap_set_bus_mode(struct sdhci_omap_host *omap_host,
+ 				    unsigned int mode)
+ {
+ 	u32 reg;
+ 
+ 	if (omap_host->bus_mode == mode)
+ 		return;
+ 
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_CON);
+ 	if (mode == MMC_BUSMODE_OPENDRAIN)
+ 		reg |= CON_OD;
+ 	else
+ 		reg &= ~CON_OD;
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, reg);
+ 
+ 	omap_host->bus_mode = mode;
+ }
+ 
+ static void sdhci_omap_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ {
+ 	struct sdhci_host *host = mmc_priv(mmc);
+ 	struct sdhci_pltfm_host *pltfm_host;
+ 	struct sdhci_omap_host *omap_host;
+ 
+ 	pltfm_host = sdhci_priv(host);
+ 	omap_host = sdhci_pltfm_priv(pltfm_host);
+ 
+ 	sdhci_omap_set_bus_mode(omap_host, ios->bus_mode);
+ 	sdhci_set_ios(mmc, ios);
+ }
+ 
+ static u16 sdhci_omap_calc_divisor(struct sdhci_pltfm_host *host,
+ 				   unsigned int clock)
+ {
+ 	u16 dsor;
+ 
+ 	dsor = DIV_ROUND_UP(clk_get_rate(host->clk), clock);
+ 	if (dsor > SYSCTL_CLKD_MAX)
+ 		dsor = SYSCTL_CLKD_MAX;
+ 
+ 	return dsor;
+ }
+ 
+ static void sdhci_omap_start_clock(struct sdhci_omap_host *omap_host)
+ {
+ 	u32 reg;
+ 
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_SYSCTL);
+ 	reg |= SYSCTL_CEN;
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_SYSCTL, reg);
+ }
+ 
+ static void sdhci_omap_stop_clock(struct sdhci_omap_host *omap_host)
+ {
+ 	u32 reg;
+ 
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_SYSCTL);
+ 	reg &= ~SYSCTL_CEN;
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_SYSCTL, reg);
+ }
+ 
+ static void sdhci_omap_set_clock(struct sdhci_host *host, unsigned int clock)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
+ 	unsigned long clkdiv;
+ 
+ 	sdhci_omap_stop_clock(omap_host);
+ 
+ 	if (!clock)
+ 		return;
+ 
+ 	clkdiv = sdhci_omap_calc_divisor(pltfm_host, clock);
+ 	clkdiv = (clkdiv & SYSCTL_CLKD_MASK) << SYSCTL_CLKD_SHIFT;
+ 	sdhci_enable_clk(host, clkdiv);
+ 
+ 	sdhci_omap_start_clock(omap_host);
+ }
+ 
+ static void sdhci_omap_set_power(struct sdhci_host *host, unsigned char mode,
+ 				 unsigned short vdd)
+ {
+ 	struct mmc_host *mmc = host->mmc;
+ 
+ 	mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
+ }
+ 
+ static int sdhci_omap_enable_dma(struct sdhci_host *host)
+ {
+ 	u32 reg;
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
+ 
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_CON);
+ 	reg |= CON_DMA_MASTER;
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, reg);
+ 
+ 	return 0;
+ }
+ 
+ static unsigned int sdhci_omap_get_min_clock(struct sdhci_host *host)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 
+ 	return clk_get_rate(pltfm_host->clk) / SYSCTL_CLKD_MAX;
+ }
+ 
+ static void sdhci_omap_set_bus_width(struct sdhci_host *host, int width)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
+ 	u32 reg;
+ 
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_CON);
+ 	if (width == MMC_BUS_WIDTH_8)
+ 		reg |= CON_DW8;
+ 	else
+ 		reg &= ~CON_DW8;
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, reg);
+ 
+ 	sdhci_set_bus_width(host, width);
+ }
+ 
+ static void sdhci_omap_init_74_clocks(struct sdhci_host *host, u8 power_mode)
+ {
+ 	u32 reg;
+ 	ktime_t timeout;
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
+ 
+ 	if (omap_host->power_mode == power_mode)
+ 		return;
+ 
+ 	if (power_mode != MMC_POWER_ON)
+ 		return;
+ 
+ 	disable_irq(host->irq);
+ 
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_CON);
+ 	reg |= CON_INIT;
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, reg);
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CMD, 0x0);
+ 
+ 	/* wait 1ms */
+ 	timeout = ktime_add_ms(ktime_get(), SDHCI_OMAP_TIMEOUT);
+ 	while (!(sdhci_omap_readl(omap_host, SDHCI_OMAP_STAT) & INT_CC_EN)) {
+ 		if (WARN_ON(ktime_after(ktime_get(), timeout)))
+ 			return;
+ 		usleep_range(5, 10);
+ 	}
+ 
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_CON);
+ 	reg &= ~CON_INIT;
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CON, reg);
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_STAT, INT_CC_EN);
+ 
+ 	enable_irq(host->irq);
+ 
+ 	omap_host->power_mode = power_mode;
+ }
+ 
+ static struct sdhci_ops sdhci_omap_ops = {
+ 	.set_clock = sdhci_omap_set_clock,
+ 	.set_power = sdhci_omap_set_power,
+ 	.enable_dma = sdhci_omap_enable_dma,
+ 	.get_max_clock = sdhci_pltfm_clk_get_max_clock,
+ 	.get_min_clock = sdhci_omap_get_min_clock,
+ 	.set_bus_width = sdhci_omap_set_bus_width,
+ 	.platform_send_init_74_clocks = sdhci_omap_init_74_clocks,
+ 	.reset = sdhci_reset,
+ 	.set_uhs_signaling = sdhci_set_uhs_signaling,
+ };
+ 
+ static int sdhci_omap_set_capabilities(struct sdhci_omap_host *omap_host)
+ {
+ 	u32 reg;
+ 	int ret = 0;
+ 	struct device *dev = omap_host->dev;
+ 	struct regulator *vqmmc;
+ 
+ 	vqmmc = regulator_get(dev, "vqmmc");
+ 	if (IS_ERR(vqmmc)) {
+ 		ret = PTR_ERR(vqmmc);
+ 		goto reg_put;
+ 	}
+ 
+ 	/* voltage capabilities might be set by boot loader, clear it */
+ 	reg = sdhci_omap_readl(omap_host, SDHCI_OMAP_CAPA);
+ 	reg &= ~(CAPA_VS18 | CAPA_VS30 | CAPA_VS33);
+ 
+ 	if (regulator_is_supported_voltage(vqmmc, IOV_3V3, IOV_3V3))
+ 		reg |= CAPA_VS33;
+ 	if (regulator_is_supported_voltage(vqmmc, IOV_1V8, IOV_1V8))
+ 		reg |= CAPA_VS18;
+ 
+ 	sdhci_omap_writel(omap_host, SDHCI_OMAP_CAPA, reg);
+ 
+ reg_put:
+ 	regulator_put(vqmmc);
+ 
+ 	return ret;
+ }
+ 
+ static const struct sdhci_pltfm_data sdhci_omap_pdata = {
+ 	.quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION |
+ 		  SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
+ 		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN |
+ 		  SDHCI_QUIRK_NO_HISPD_BIT |
+ 		  SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC,
+ 	.quirks2 = SDHCI_QUIRK2_NO_1_8_V |
+ 		   SDHCI_QUIRK2_ACMD23_BROKEN |
+ 		   SDHCI_QUIRK2_RSP_136_HAS_CRC,
+ 	.ops = &sdhci_omap_ops,
+ };
+ 
+ static const struct sdhci_omap_data dra7_data = {
+ 	.offset = 0x200,
+ };
+ 
+ static const struct of_device_id omap_sdhci_match[] = {
+ 	{ .compatible = "ti,dra7-sdhci", .data = &dra7_data },
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(of, omap_sdhci_match);
+ 
+ static int sdhci_omap_probe(struct platform_device *pdev)
+ {
+ 	int ret;
+ 	u32 offset;
+ 	struct device *dev = &pdev->dev;
+ 	struct sdhci_host *host;
+ 	struct sdhci_pltfm_host *pltfm_host;
+ 	struct sdhci_omap_host *omap_host;
+ 	struct mmc_host *mmc;
+ 	const struct of_device_id *match;
+ 	struct sdhci_omap_data *data;
+ 
+ 	match = of_match_device(omap_sdhci_match, dev);
+ 	if (!match)
+ 		return -EINVAL;
+ 
+ 	data = (struct sdhci_omap_data *)match->data;
+ 	if (!data) {
+ 		dev_err(dev, "no sdhci omap data\n");
+ 		return -EINVAL;
+ 	}
+ 	offset = data->offset;
+ 
+ 	host = sdhci_pltfm_init(pdev, &sdhci_omap_pdata,
+ 				sizeof(*omap_host));
+ 	if (IS_ERR(host)) {
+ 		dev_err(dev, "Failed sdhci_pltfm_init\n");
+ 		return PTR_ERR(host);
+ 	}
+ 
+ 	pltfm_host = sdhci_priv(host);
+ 	omap_host = sdhci_pltfm_priv(pltfm_host);
+ 	omap_host->host = host;
+ 	omap_host->base = host->ioaddr;
+ 	omap_host->dev = dev;
+ 	host->ioaddr += offset;
+ 
+ 	mmc = host->mmc;
+ 	ret = mmc_of_parse(mmc);
+ 	if (ret)
+ 		goto err_pltfm_free;
+ 
+ 	pltfm_host->clk = devm_clk_get(dev, "fck");
+ 	if (IS_ERR(pltfm_host->clk)) {
+ 		ret = PTR_ERR(pltfm_host->clk);
+ 		goto err_pltfm_free;
+ 	}
+ 
+ 	ret = clk_set_rate(pltfm_host->clk, mmc->f_max);
+ 	if (ret) {
+ 		dev_err(dev, "failed to set clock to %d\n", mmc->f_max);
+ 		goto err_pltfm_free;
+ 	}
+ 
+ 	omap_host->pbias = devm_regulator_get_optional(dev, "pbias");
+ 	if (IS_ERR(omap_host->pbias)) {
+ 		ret = PTR_ERR(omap_host->pbias);
+ 		if (ret != -ENODEV)
+ 			goto err_pltfm_free;
+ 		dev_dbg(dev, "unable to get pbias regulator %d\n", ret);
+ 	}
+ 	omap_host->pbias_enabled = false;
+ 
+ 	/*
+ 	 * omap_device_pm_domain has callbacks to enable the main
+ 	 * functional clock, interface clock and also configure the
+ 	 * SYSCONFIG register of omap devices. The callback will be invoked
+ 	 * as part of pm_runtime_get_sync.
+ 	 */
+ 	pm_runtime_enable(dev);
+ 	ret = pm_runtime_get_sync(dev);
+ 	if (ret < 0) {
+ 		dev_err(dev, "pm_runtime_get_sync failed\n");
+ 		pm_runtime_put_noidle(dev);
+ 		goto err_rpm_disable;
+ 	}
+ 
+ 	ret = sdhci_omap_set_capabilities(omap_host);
+ 	if (ret) {
+ 		dev_err(dev, "failed to set system capabilities\n");
+ 		goto err_put_sync;
+ 	}
+ 
+ 	host->mmc_host_ops.get_ro = mmc_gpio_get_ro;
+ 	host->mmc_host_ops.start_signal_voltage_switch =
+ 					sdhci_omap_start_signal_voltage_switch;
+ 	host->mmc_host_ops.set_ios = sdhci_omap_set_ios;
+ 
+ 	sdhci_read_caps(host);
+ 	host->caps |= SDHCI_CAN_DO_ADMA2;
+ 
+ 	ret = sdhci_add_host(host);
+ 	if (ret)
+ 		goto err_put_sync;
+ 
+ 	return 0;
+ 
+ err_put_sync:
+ 	pm_runtime_put_sync(dev);
+ 
+ err_rpm_disable:
+ 	pm_runtime_disable(dev);
+ 
+ err_pltfm_free:
+ 	sdhci_pltfm_free(pdev);
+ 	return ret;
+ }
+ 
+ static int sdhci_omap_remove(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct sdhci_host *host = platform_get_drvdata(pdev);
+ 
+ 	sdhci_remove_host(host, true);
+ 	pm_runtime_put_sync(dev);
+ 	pm_runtime_disable(dev);
+ 	sdhci_pltfm_free(pdev);
+ 
+ 	return 0;
+ }
+ 
+ static struct platform_driver sdhci_omap_driver = {
+ 	.probe = sdhci_omap_probe,
+ 	.remove = sdhci_omap_remove,
+ 	.driver = {
+ 		   .name = "sdhci-omap",
+ 		   .of_match_table = omap_sdhci_match,
+ 		  },
+ };
+ 
+ module_platform_driver(sdhci_omap_driver);
+ 
+ MODULE_DESCRIPTION("SDHCI driver for OMAP SoCs");
+ MODULE_AUTHOR("Texas Instruments Inc.");
+ MODULE_LICENSE("GPL v2");
+ MODULE_ALIAS("platform:sdhci_omap");
+1 -10
drivers/mmc/host/sdhci-pci-core.c
···
 
 #include "sdhci.h"
 #include "sdhci-pci.h"
-#include "sdhci-pci-o2micro.h"
 
 static int sdhci_pci_enable_dma(struct sdhci_host *host);
 static void sdhci_pci_hw_reset(struct sdhci_host *host);
···
 	.probe_slot	= intel_mrfld_mmc_probe_slot,
 };
 
-/* O2Micro extra registers */
-#define O2_SD_LOCK_WP		0xD3
-#define O2_SD_MULTI_VCC3V	0xEE
-#define O2_SD_CLKREQ		0xEC
-#define O2_SD_CAPS		0xE0
-#define O2_SD_ADMA1		0xE2
-#define O2_SD_ADMA2		0xE7
-#define O2_SD_INF_MOD		0xF1
-
 static int jmicron_pmos(struct sdhci_pci_chip *chip, int on)
 {
 	u8 scratch;
···
 	SDHCI_PCI_DEVICE(INTEL, SPT_SDIO, intel_byt_sdio),
 	SDHCI_PCI_DEVICE(INTEL, SPT_SD,   intel_byt_sd),
 	SDHCI_PCI_DEVICE(INTEL, DNV_EMMC, intel_byt_emmc),
+	SDHCI_PCI_DEVICE(INTEL, CDF_EMMC, intel_glk_emmc),
 	SDHCI_PCI_DEVICE(INTEL, BXT_EMMC, intel_byt_emmc),
 	SDHCI_PCI_DEVICE(INTEL, BXT_SDIO, intel_byt_sdio),
 	SDHCI_PCI_DEVICE(INTEL, BXT_SD,   intel_byt_sd),
+34 -1
drivers/mmc/host/sdhci-pci-o2micro.c
···
 
 #include "sdhci.h"
 #include "sdhci-pci.h"
-#include "sdhci-pci-o2micro.h"
+
+/*
+ * O2Micro device registers
+ */
+
+#define O2_SD_MISC_REG5		0x64
+#define O2_SD_LD0_CTRL		0x68
+#define O2_SD_DEV_CTRL		0x88
+#define O2_SD_LOCK_WP		0xD3
+#define O2_SD_TEST_REG		0xD4
+#define O2_SD_FUNC_REG0		0xDC
+#define O2_SD_MULTI_VCC3V	0xEE
+#define O2_SD_CLKREQ		0xEC
+#define O2_SD_CAPS		0xE0
+#define O2_SD_ADMA1		0xE2
+#define O2_SD_ADMA2		0xE7
+#define O2_SD_INF_MOD		0xF1
+#define O2_SD_MISC_CTRL4	0xFC
+#define O2_SD_TUNING_CTRL	0x300
+#define O2_SD_PLL_SETTING	0x304
+#define O2_SD_CLK_SETTING	0x328
+#define O2_SD_CAP_REG2		0x330
+#define O2_SD_CAP_REG0		0x334
+#define O2_SD_UHS1_CAP_SETTING	0x33C
+#define O2_SD_DELAY_CTRL	0x350
+#define O2_SD_UHS2_L1_CTRL	0x35C
+#define O2_SD_FUNC_REG3		0x3E0
+#define O2_SD_FUNC_REG4		0x3E4
+#define O2_SD_LED_ENABLE	BIT(6)
+#define O2_SD_FREG0_LEDOFF	BIT(13)
+#define O2_SD_FREG4_ENABLE_CLK_SET	BIT(22)
+
+#define O2_SD_VENDOR_SETTING	0x110
+#define O2_SD_VENDOR_SETTING2	0x1C8
 
 static void o2_pci_set_baseclk(struct sdhci_pci_chip *chip, u32 value)
 {
-73
drivers/mmc/host/sdhci-pci-o2micro.h
···
-/*
- * Copyright (C) 2013 BayHub Technology Ltd.
- *
- * Authors: Peter Guo <peter.guo@bayhubtech.com>
- *          Adam Lee <adam.lee@canonical.com>
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- */
-
-#ifndef __SDHCI_PCI_O2MICRO_H
-#define __SDHCI_PCI_O2MICRO_H
-
-#include "sdhci-pci.h"
-
-/*
- * O2Micro device IDs
- */
-
-#define PCI_DEVICE_ID_O2_SDS0		0x8420
-#define PCI_DEVICE_ID_O2_SDS1		0x8421
-#define PCI_DEVICE_ID_O2_FUJIN2		0x8520
-#define PCI_DEVICE_ID_O2_SEABIRD0	0x8620
-#define PCI_DEVICE_ID_O2_SEABIRD1	0x8621
-
-/*
- * O2Micro device registers
- */
-
-#define O2_SD_MISC_REG5		0x64
-#define O2_SD_LD0_CTRL		0x68
-#define O2_SD_DEV_CTRL		0x88
-#define O2_SD_LOCK_WP		0xD3
-#define O2_SD_TEST_REG		0xD4
-#define O2_SD_FUNC_REG0		0xDC
-#define O2_SD_MULTI_VCC3V	0xEE
-#define O2_SD_CLKREQ		0xEC
-#define O2_SD_CAPS		0xE0
-#define O2_SD_ADMA1		0xE2
-#define O2_SD_ADMA2		0xE7
-#define O2_SD_INF_MOD		0xF1
-#define O2_SD_MISC_CTRL4	0xFC
-#define O2_SD_TUNING_CTRL	0x300
-#define O2_SD_PLL_SETTING	0x304
-#define O2_SD_CLK_SETTING	0x328
-#define O2_SD_CAP_REG2		0x330
-#define O2_SD_CAP_REG0		0x334
-#define O2_SD_UHS1_CAP_SETTING	0x33C
-#define O2_SD_DELAY_CTRL	0x350
-#define O2_SD_UHS2_L1_CTRL	0x35C
-#define O2_SD_FUNC_REG3		0x3E0
-#define O2_SD_FUNC_REG4		0x3E4
-#define O2_SD_LED_ENABLE	BIT(6)
-#define O2_SD_FREG0_LEDOFF	BIT(13)
-#define O2_SD_FREG4_ENABLE_CLK_SET	BIT(22)
-
-#define O2_SD_VENDOR_SETTING	0x110
-#define O2_SD_VENDOR_SETTING2	0x1C8
-
-extern int sdhci_pci_o2_probe_slot(struct sdhci_pci_slot *slot);
-
-extern int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip);
-
-extern int sdhci_pci_o2_resume(struct sdhci_pci_chip *chip);
-
-#endif /* __SDHCI_PCI_O2MICRO_H */
+13
drivers/mmc/host/sdhci-pci.h
···
  * PCI device IDs, sub IDs
  */
 
+#define PCI_DEVICE_ID_O2_SDS0		0x8420
+#define PCI_DEVICE_ID_O2_SDS1		0x8421
+#define PCI_DEVICE_ID_O2_FUJIN2		0x8520
+#define PCI_DEVICE_ID_O2_SEABIRD0	0x8620
+#define PCI_DEVICE_ID_O2_SEABIRD1	0x8621
+
 #define PCI_DEVICE_ID_INTEL_PCH_SDIO0	0x8809
 #define PCI_DEVICE_ID_INTEL_PCH_SDIO1	0x880a
 #define PCI_DEVICE_ID_INTEL_BYT_EMMC	0x0f14
···
 #define PCI_DEVICE_ID_INTEL_SPT_SDIO	0x9d2c
 #define PCI_DEVICE_ID_INTEL_SPT_SD	0x9d2d
 #define PCI_DEVICE_ID_INTEL_DNV_EMMC	0x19db
+#define PCI_DEVICE_ID_INTEL_CDF_EMMC	0x18db
 #define PCI_DEVICE_ID_INTEL_BXT_SD	0x0aca
 #define PCI_DEVICE_ID_INTEL_BXT_EMMC	0x0acc
 #define PCI_DEVICE_ID_INTEL_BXT_SDIO	0x0ad0
···
 
 #ifdef CONFIG_PM_SLEEP
 int sdhci_pci_resume_host(struct sdhci_pci_chip *chip);
+#endif
+
+int sdhci_pci_o2_probe_slot(struct sdhci_pci_slot *slot);
+int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip);
+#ifdef CONFIG_PM_SLEEP
+int sdhci_pci_o2_resume(struct sdhci_pci_chip *chip);
 #endif
 
 #endif /* __SDHCI_PCI_H */
+5 -13
drivers/mmc/host/sdhci-s3c.c
···
 			NULL)
 };
 
-#if defined(CONFIG_CPU_EXYNOS4210) || defined(CONFIG_SOC_EXYNOS4212)
-static struct sdhci_s3c_drv_data exynos4_sdhci_drv_data = {
-	.no_divider = true,
-};
-#define EXYNOS4_SDHCI_DRV_DATA ((kernel_ulong_t)&exynos4_sdhci_drv_data)
-#else
-#define EXYNOS4_SDHCI_DRV_DATA ((kernel_ulong_t)NULL)
-#endif
-
 static const struct platform_device_id sdhci_s3c_driver_ids[] = {
 	{
 		.name		= "s3c-sdhci",
 		.driver_data	= (kernel_ulong_t)NULL,
-	}, {
-		.name		= "exynos4-sdhci",
-		.driver_data	= EXYNOS4_SDHCI_DRV_DATA,
 	},
 	{ }
 };
 MODULE_DEVICE_TABLE(platform, sdhci_s3c_driver_ids);
 
 #ifdef CONFIG_OF
+static struct sdhci_s3c_drv_data exynos4_sdhci_drv_data = {
+	.no_divider = true,
+};
+
 static const struct of_device_id sdhci_s3c_dt_match[] = {
 	{ .compatible = "samsung,s3c6410-sdhci", },
 	{ .compatible = "samsung,exynos4210-sdhci",
-		.data = (void *)EXYNOS4_SDHCI_DRV_DATA },
+		.data = &exynos4_sdhci_drv_data },
 	{},
 };
 MODULE_DEVICE_TABLE(of, sdhci_s3c_dt_match);
+9 -1
drivers/mmc/host/sdhci-tegra.c
···
 		    SDHCI_QUIRK_NO_HISPD_BIT |
 		    SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
 		    SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+		   /*
+		    * SDHCI controllers on Tegra186 support 40-bit addressing,
+		    * while IOVA addresses are 48 bits wide, so accesses with
+		    * a 64-bit DMA mask can break. Disable 64-bit DMA, which
+		    * falls back to the 32-bit DMA mask. Ideally a 40-bit DMA
+		    * mask would work, but it is not supported as of now.
+		    */
+		   SDHCI_QUIRK2_BROKEN_64_BIT_DMA,
 	.ops  = &tegra114_sdhci_ops,
 };
 
+7 -8
drivers/mmc/host/sdhci.c
···
 		;
 }
 
-static void sdhci_timeout_timer(unsigned long data)
+static void sdhci_timeout_timer(struct timer_list *t)
 {
 	struct sdhci_host *host;
 	unsigned long flags;
 
-	host = (struct sdhci_host*)data;
+	host = from_timer(host, t, timer);
 
 	spin_lock_irqsave(&host->lock, flags);
 
···
 	spin_unlock_irqrestore(&host->lock, flags);
 }
 
-static void sdhci_timeout_data_timer(unsigned long data)
+static void sdhci_timeout_data_timer(struct timer_list *t)
 {
 	struct sdhci_host *host;
 	unsigned long flags;
 
-	host = (struct sdhci_host *)data;
+	host = from_timer(host, t, data_timer);
 
 	spin_lock_irqsave(&host->lock, flags);
 
···
 	 * available.
 	 */
 	ret = mmc_regulator_get_supply(mmc);
-	if (ret == -EPROBE_DEFER)
+	if (ret)
 		return ret;
 
 	DBG("Version: 0x%08x | Present: 0x%08x\n",
···
 	tasklet_init(&host->finish_tasklet,
 		sdhci_tasklet_finish, (unsigned long)host);
 
-	setup_timer(&host->timer, sdhci_timeout_timer, (unsigned long)host);
-	setup_timer(&host->data_timer, sdhci_timeout_data_timer,
-		    (unsigned long)host);
+	timer_setup(&host->timer, sdhci_timeout_timer, 0);
+	timer_setup(&host->data_timer, sdhci_timeout_data_timer, 0);
 
 	init_waitqueue_head(&host->buf_ready_int);
 
+14
drivers/mmc/host/sdhci_f_sdh30.c
···
 #include <linux/err.h>
 #include <linux/delay.h>
 #include <linux/module.h>
+#include <linux/property.h>
 #include <linux/clk.h>
 
 #include "sdhci-pltfm.h"
···
 	struct clk *clk;
 	u32 vendor_hs200;
 	struct device *dev;
+	bool enable_cmd_dat_delay;
 };
 
 static void sdhci_f_sdh30_soft_voltage_switch(struct sdhci_host *host)
···
 
 static void sdhci_f_sdh30_reset(struct sdhci_host *host, u8 mask)
 {
+	struct f_sdhost_priv *priv = sdhci_priv(host);
+	u32 ctl;
+
 	if (sdhci_readw(host, SDHCI_CLOCK_CONTROL) == 0)
 		sdhci_writew(host, 0xBC01, SDHCI_CLOCK_CONTROL);
 
 	sdhci_reset(host, mask);
+
+	if (priv->enable_cmd_dat_delay) {
+		ctl = sdhci_readl(host, F_SDH30_ESD_CONTROL);
+		ctl |= F_SDH30_CMD_DAT_DELAY;
+		sdhci_writel(host, ctl, F_SDH30_ESD_CONTROL);
+	}
 }
 
 static const struct sdhci_ops sdhci_f_sdh30_ops = {
···
 		       SDHCI_QUIRK_INVERTED_WRITE_PROTECT;
 	host->quirks2 = SDHCI_QUIRK2_SUPPORT_SINGLE |
 			SDHCI_QUIRK2_TUNING_WORK_AROUND;
+
+	priv->enable_cmd_dat_delay = device_property_read_bool(dev,
+					"fujitsu,cmd-dat-delay-select");
 
 	ret = mmc_of_parse(host->mmc);
 	if (ret)
+1 -4
drivers/mmc/host/sunxi-mmc.c
···
 		return -EINVAL;
 
 	ret = mmc_regulator_get_supply(host->mmc);
-	if (ret) {
-		if (ret != -EPROBE_DEFER)
-			dev_err(&pdev->dev, "Could not get vmmc supply\n");
+	if (ret)
 		return ret;
-	}
 
 	host->reg_base = devm_ioremap_resource(&pdev->dev,
 			platform_get_resource(pdev, IORESOURCE_MEM, 0));
+3 -3
drivers/mmc/host/tifm_sd.c
···
 	mmc_request_done(mmc, mrq);
 }
 
-static void tifm_sd_abort(unsigned long data)
+static void tifm_sd_abort(struct timer_list *t)
 {
-	struct tifm_sd *host = (struct tifm_sd*)data;
+	struct tifm_sd *host = from_timer(host, t, timer);
 
 	pr_err("%s : card failed to respond for a long period of time "
 	       "(%x, %x)\n",
···
 
 	tasklet_init(&host->finish_tasklet, tifm_sd_end_cmd,
 		     (unsigned long)host);
-	setup_timer(&host->timer, tifm_sd_abort, (unsigned long)host);
+	timer_setup(&host->timer, tifm_sd_abort, 0);
 
 	mmc->ops = &tifm_sd_ops;
 	mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
+20 -16
drivers/mmc/host/tmio_mmc_core.c
···
 
 	/* HW engineers overrode docs: no sleep needed on R-Car2+ */
 	if (!(host->pdata->flags & TMIO_MMC_MIN_RCAR2))
-		msleep(10);
+		usleep_range(10000, 11000);
 
 	if (host->pdata->flags & TMIO_MMC_HAVE_HIGH_REG) {
 		sd_ctrl_write16(host, CTL_CLK_AND_WAIT_CTL, 0x0100);
-		msleep(10);
+		usleep_range(10000, 11000);
 	}
 }
···
 {
 	if (host->pdata->flags & TMIO_MMC_HAVE_HIGH_REG) {
 		sd_ctrl_write16(host, CTL_CLK_AND_WAIT_CTL, 0x0000);
-		msleep(10);
+		usleep_range(10000, 11000);
 	}
 
 	sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, ~CLK_CTL_SCLKEN &
···
 
 	/* HW engineers overrode docs: no sleep needed on R-Car2+ */
 	if (!(host->pdata->flags & TMIO_MMC_MIN_RCAR2))
-		msleep(10);
+		usleep_range(10000, 11000);
 }
 
 static void tmio_mmc_set_clock(struct tmio_mmc_host *host,
···
 			sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
 	sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, clk & CLK_CTL_DIV_MASK);
 	if (!(host->pdata->flags & TMIO_MMC_MIN_RCAR2))
-		msleep(10);
+		usleep_range(10000, 11000);
 
 	tmio_mmc_clk_start(host);
 }
···
 	sd_ctrl_write16(host, CTL_RESET_SD, 0x0000);
 	if (host->pdata->flags & TMIO_MMC_HAVE_HIGH_REG)
 		sd_ctrl_write16(host, CTL_RESET_SDIO, 0x0000);
-	msleep(10);
+	usleep_range(10000, 11000);
 	sd_ctrl_write16(host, CTL_RESET_SD, 0x0001);
 	if (host->pdata->flags & TMIO_MMC_HAVE_HIGH_REG)
 		sd_ctrl_write16(host, CTL_RESET_SDIO, 0x0001);
-	msleep(10);
+	usleep_range(10000, 11000);
 
 	if (host->pdata->flags & TMIO_MMC_SDIO_IRQ) {
 		sd_ctrl_write16(host, CTL_SDIO_IRQ_MASK, host->sdio_irq_mask);
···
 {
 	struct tmio_mmc_data *pdata = host->pdata;
 	struct mmc_host *mmc = host->mmc;
+	int err;
 
-	mmc_regulator_get_supply(mmc);
+	err = mmc_regulator_get_supply(mmc);
+	if (err)
+		return err;
 
 	/* use ocr_mask if no regulator */
 	if (!mmc->ocr_avail)
···
 	pm_runtime_enable(&pdev->dev);
 
 	ret = mmc_add_host(mmc);
-	if (ret < 0) {
-		tmio_mmc_host_remove(_host);
-		return ret;
-	}
+	if (ret)
+		goto remove_host;
 
 	dev_pm_qos_expose_latency_limit(&pdev->dev, 100);
 
 	if (pdata->flags & TMIO_MMC_USE_GPIO_CD) {
 		ret = mmc_gpio_request_cd(mmc, pdata->cd_gpio, 0);
-		if (ret < 0) {
-			tmio_mmc_host_remove(_host);
-			return ret;
-		}
+		if (ret)
+			goto remove_host;
+
 		mmc_gpiod_request_cd_irq(mmc);
 	}
 
 	return 0;
+
+remove_host:
+	tmio_mmc_host_remove(_host);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(tmio_mmc_host_probe);
 
+1 -1
drivers/mmc/host/usdhi6rol0.c
···
 		return -ENOMEM;
 
 	ret = mmc_regulator_get_supply(mmc);
-	if (ret == -EPROBE_DEFER)
+	if (ret)
 		goto e_free_mmc;
 
 	ret = mmc_of_parse(mmc);
+3 -5
drivers/mmc/host/via-sdmmc.c
···
 	return result;
 }
 
-static void via_sdc_timeout(unsigned long ulongdata)
+static void via_sdc_timeout(struct timer_list *t)
 {
 	struct via_crdr_mmc_host *sdhost;
 	unsigned long flags;
 
-	sdhost = (struct via_crdr_mmc_host *)ulongdata;
+	sdhost = from_timer(sdhost, t, timer);
 
 	spin_lock_irqsave(&sdhost->lock, flags);
 
···
 	u32 lenreg;
 	u32 status;
 
-	init_timer(&host->timer);
-	host->timer.data = (unsigned long)host;
-	host->timer.function = via_sdc_timeout;
+	timer_setup(&host->timer, via_sdc_timeout, 0);
 
 	spin_lock_init(&host->lock);
 
+19 -22
drivers/mmc/host/vub300.c
@@ -741,9 +741,10 @@
 	kref_put(&vub300->kref, vub300_delete);
 }
 
-static void vub300_inactivity_timer_expired(unsigned long data)
+static void vub300_inactivity_timer_expired(struct timer_list *t)
 {	/* softirq */
-	struct vub300_mmc_host *vub300 = (struct vub300_mmc_host *)data;
+	struct vub300_mmc_host *vub300 = from_timer(vub300, t,
+						    inactivity_timer);
 	if (!vub300->interface) {
 		kref_put(&vub300->kref, vub300_delete);
 	} else if (vub300->cmd) {
@@ -1181,9 +1180,10 @@
  * timer callback runs in atomic mode
  * so it cannot call usb_kill_urb()
  */
-static void vub300_sg_timed_out(unsigned long data)
+static void vub300_sg_timed_out(struct timer_list *t)
 {
-	struct vub300_mmc_host *vub300 = (struct vub300_mmc_host *)data;
+	struct vub300_mmc_host *vub300 = from_timer(vub300, t,
+						    sg_transfer_timer);
 	vub300->usb_timed_out = 1;
 	usb_sg_cancel(&vub300->sg_request);
 	usb_unlink_urb(vub300->command_out_urb);
@@ -1246,12 +1244,8 @@
 					 USB_RECIP_DEVICE, 0x0000, 0x0000,
 					 xfer_buffer, xfer_length, HZ);
 		kfree(xfer_buffer);
-		if (retval < 0) {
-			strncpy(vub300->vub_name,
-				"SDIO pseudocode download failed",
-				sizeof(vub300->vub_name));
-			return;
-		}
+		if (retval < 0)
+			goto copy_error_message;
 	} else {
 		dev_err(&vub300->udev->dev,
 			"not enough memory for xfer buffer to send"
@@ -1289,12 +1291,8 @@
 					 USB_RECIP_DEVICE, 0x0000, 0x0000,
 					 xfer_buffer, xfer_length, HZ);
 		kfree(xfer_buffer);
-		if (retval < 0) {
-			strncpy(vub300->vub_name,
-				"SDIO pseudocode download failed",
-				sizeof(vub300->vub_name));
-			return;
-		}
+		if (retval < 0)
+			goto copy_error_message;
 	} else {
 		dev_err(&vub300->udev->dev,
 			"not enough memory for xfer buffer to send"
@@ -1343,6 +1349,12 @@
 			sizeof(vub300->vub_name));
 		return;
 	}
+
+	return;
+
+copy_error_message:
+	strncpy(vub300->vub_name, "SDIO pseudocode download failed",
+		sizeof(vub300->vub_name));
 }
 
 /*
@@ -2323,13 +2323,10 @@
 	INIT_WORK(&vub300->cmndwork, vub300_cmndwork_thread);
 	INIT_WORK(&vub300->deadwork, vub300_deadwork_thread);
 	kref_init(&vub300->kref);
-	init_timer(&vub300->sg_transfer_timer);
-	vub300->sg_transfer_timer.data = (unsigned long)vub300;
-	vub300->sg_transfer_timer.function = vub300_sg_timed_out;
+	timer_setup(&vub300->sg_transfer_timer, vub300_sg_timed_out, 0);
 	kref_get(&vub300->kref);
-	init_timer(&vub300->inactivity_timer);
-	vub300->inactivity_timer.data = (unsigned long)vub300;
-	vub300->inactivity_timer.function = vub300_inactivity_timer_expired;
+	timer_setup(&vub300->inactivity_timer,
+		    vub300_inactivity_timer_expired, 0);
 	vub300->inactivity_timer.expires = jiffies + HZ;
 	add_timer(&vub300->inactivity_timer);
 	if (vub300->card_present)
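The vub300 hunks above fold two duplicated strncpy() error reports in __download_offload_pseudocode() into a single copy_error_message label. A minimal userspace sketch of the same control-flow consolidation (struct fake_host, download_pseudocode() and the boolean parameters are hypothetical, invented only for this sketch, not driver code):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical reduced stand-in for struct vub300_mmc_host. */
struct fake_host {
	char vub_name[64];
};

/* Both failure sites jump to one copy_error_message label instead of
 * duplicating the strncpy() call, mirroring the refactored driver flow. */
int download_pseudocode(struct fake_host *host, int first_xfer_ok,
			int second_xfer_ok)
{
	if (!first_xfer_ok)
		goto copy_error_message;
	if (!second_xfer_ok)
		goto copy_error_message;
	return 0;

copy_error_message:
	strncpy(host->vub_name, "SDIO pseudocode download failed",
		sizeof(host->vub_name));
	return -1;
}
```

Centralizing the error path this way means a future change to the message, or to the cleanup done on failure, happens in exactly one place.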
+3 -5
drivers/mmc/host/wbsd.c
@@ -956,9 +956,9 @@
  * Helper function to reset detection ignore
  */
 
-static void wbsd_reset_ignore(unsigned long data)
+static void wbsd_reset_ignore(struct timer_list *t)
 {
-	struct wbsd_host *host = (struct wbsd_host *)data;
+	struct wbsd_host *host = from_timer(host, t, ignore_timer);
 
 	BUG_ON(host == NULL);
 
@@ -1224,9 +1224,7 @@
 	/*
 	 * Set up timers
 	 */
-	init_timer(&host->ignore_timer);
-	host->ignore_timer.data = (unsigned long)host;
-	host->ignore_timer.function = wbsd_reset_ignore;
+	timer_setup(&host->ignore_timer, wbsd_reset_ignore, 0);
 
 	/*
 	 * Maximum number of segments. Worst case is one sector per segment
+18 -3
drivers/regulator/pbias-regulator.c
@@ -34,6 +34,8 @@
 	u32 vmode;
 	unsigned int enable_time;
 	char *name;
+	const unsigned int *pbias_volt_table;
+	int n_voltages;
 };
 
 struct pbias_regulator_data {
@@ -51,9 +49,14 @@
 	unsigned int offset;
 };
 
-static const unsigned int pbias_volt_table[] = {
+static const unsigned int pbias_volt_table_3_0V[] = {
 	1800000,
 	3000000
+};
+
+static const unsigned int pbias_volt_table_3_3V[] = {
+	1800000,
+	3300000
 };
 
 static const struct regulator_ops pbias_regulator_voltage_ops = {
@@ -76,6 +69,8 @@
 	.vmode = BIT(0),
 	.disable_val = 0,
 	.enable_time = 100,
+	.pbias_volt_table = pbias_volt_table_3_0V,
+	.n_voltages = 2,
 	.name = "pbias_mmc_omap2430"
 };
 
@@ -86,6 +77,8 @@
 	.enable_mask = BIT(9),
 	.vmode = BIT(8),
 	.enable_time = 100,
+	.pbias_volt_table = pbias_volt_table_3_0V,
+	.n_voltages = 2,
 	.name = "pbias_sim_omap3"
 };
 
@@ -97,6 +86,8 @@
 	.disable_val = BIT(25),
 	.vmode = BIT(21),
 	.enable_time = 100,
+	.pbias_volt_table = pbias_volt_table_3_0V,
+	.n_voltages = 2,
 	.name = "pbias_mmc_omap4"
 };
 
@@ -108,6 +95,8 @@
 	.disable_val = BIT(25),
 	.vmode = BIT(21),
 	.enable_time = 100,
+	.pbias_volt_table = pbias_volt_table_3_3V,
+	.n_voltages = 2,
 	.name = "pbias_mmc_omap5"
 };
 
@@ -214,8 +199,8 @@
 	drvdata[data_idx].desc.owner = THIS_MODULE;
 	drvdata[data_idx].desc.type = REGULATOR_VOLTAGE;
 	drvdata[data_idx].desc.ops = &pbias_regulator_voltage_ops;
-	drvdata[data_idx].desc.volt_table = pbias_volt_table;
-	drvdata[data_idx].desc.n_voltages = 2;
+	drvdata[data_idx].desc.volt_table = info->pbias_volt_table;
+	drvdata[data_idx].desc.n_voltages = info->n_voltages;
 	drvdata[data_idx].desc.enable_time = info->enable_time;
 	drvdata[data_idx].desc.vsel_reg = offset;
 	drvdata[data_idx].desc.vsel_mask = info->vmode;
+1
include/linux/mfd/rtsx_pci.h
@@ -334,6 +334,7 @@
 #define DCM_DRP_RD_DATA_H		0xFC29
 #define SD_VPCLK0_CTL			0xFC2A
 #define SD_VPCLK1_CTL			0xFC2B
+#define PHASE_SELECT_MASK		0x1F
 #define SD_DCMPS0_CTL			0xFC2C
 #define SD_DCMPS1_CTL			0xFC2D
 #define SD_VPTX_CTL			SD_VPCLK0_CTL
+10 -1
include/linux/mmc/host.h
@@ -255,6 +255,10 @@
 	struct regulator *vqmmc;	/* Optional Vccq supply */
 };
 
+struct mmc_ctx {
+	struct task_struct *task;
+};
+
 struct mmc_host {
 	struct device *parent;
 	struct device class_dev;
@@ -354,6 +350,8 @@
 #define MMC_CAP2_CQE		(1 << 23)	/* Has eMMC command queue engine */
 #define MMC_CAP2_CQE_DCMD	(1 << 24)	/* CQE can issue a direct command */
 
+	int fixed_drv_type;	/* fixed driver type for non-removable media */
+
 	mmc_pm_flag_t pm_caps;	/* supported pm features */
 
 	/* host specific block data */
@@ -394,8 +388,9 @@
 	struct mmc_card *card;		/* device attached to this host */
 
 	wait_queue_head_t wq;
-	struct task_struct *claimer;	/* task that has host claimed */
+	struct mmc_ctx *claimer;	/* context that has host claimed */
 	int claim_cnt;			/* "claim" nesting count */
+	struct mmc_ctx default_ctx;	/* default context */
 
 	struct delayed_work detect;
 	int detect_change;		/* card detect flag */
@@ -475,6 +468,8 @@
 void mmc_detect_change(struct mmc_host *, unsigned long delay);
 void mmc_request_done(struct mmc_host *, struct mmc_request *);
 void mmc_command_done(struct mmc_host *host, struct mmc_request *mrq);
+
+void mmc_cqe_request_done(struct mmc_host *host, struct mmc_request *mrq);
 
 static inline void mmc_signal_sdio_irq(struct mmc_host *host)
 {
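The claimer change above is the core of "host claiming by context" from the merge summary: the claim nests when the same mmc_ctx claims again, and blocks other contexts otherwise. A simplified, lock-free userspace sketch of that nesting logic (mmc_try_claim() and mmc_do_release() are hypothetical names; the real mmc_claim_host() sleeps on a wait queue under a spinlock):

```c
#include <assert.h>
#include <stddef.h>

/* Reduced stand-in: the real struct mmc_ctx holds a struct task_struct *. */
struct mmc_ctx {
	int id;
};

struct mmc_host {
	struct mmc_ctx *claimer;	/* context that has host claimed */
	int claim_cnt;			/* "claim" nesting count */
};

/* Claim succeeds when the host is free or already held by this context;
 * repeated claims from the same context simply nest. */
static int mmc_try_claim(struct mmc_host *host, struct mmc_ctx *ctx)
{
	if (host->claimer && host->claimer != ctx)
		return 0;	/* held by a different context */
	host->claimer = ctx;
	host->claim_cnt++;
	return 1;
}

/* Release drops one nesting level; the last release frees the host. */
static void mmc_do_release(struct mmc_host *host)
{
	if (--host->claim_cnt == 0)
		host->claimer = NULL;
}
```

Keying the claim on a context rather than on `current` is what lets a blk-mq dispatch path and a completion path, which may run in different tasks, share one claim.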
-3
include/linux/mmc/sdhci-pci-data.h
@@ -15,7 +15,4 @@
 
 extern struct sdhci_pci_data *(*sdhci_pci_get_data)(struct pci_dev *pdev,
 						    int slotno);
-
-extern int sdhci_pci_spt_drive_strength;
-
 #endif