Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mmc-updates-for-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc

Pull MMC updates from Chris Ball:
"MMC highlights for 3.15:

Core:
- CONFIG_MMC_UNSAFE_RESUME=y is now default behavior
- DT bindings for SDHCI UHS, eMMC HS200, high-speed DDR, at 1.8/1.2V
- Add GPIO descriptor based slot-gpio card detect API

Drivers:
- dw_mmc: Refactor SOCFPGA support as a variant inside dw_mmc-pltfm.c
- mmci: Support HW busy detection on ux500
- omap: Support MMC_ERASE
- omap_hsmmc: Support MMC_PM_KEEP_POWER, MMC_PM_WAKE_SDIO_IRQ, (a)cmd23
- rtsx: Support pre-req/post-req async
- sdhci: Add support for Realtek RTS5250 controllers
- sdhci-acpi: Add support for 80860F16, fix 80860F14/SDIO card detect
- sdhci-msm: Add new driver for Qualcomm SDHCI chipset support
- sdhci-pxav3: Add support for Marvell Armada 380 and 385 SoCs"

* tag 'mmc-updates-for-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc: (102 commits)
mmc: sdhci-acpi: Intel SDIO has broken card detect
mmc: sdhci-pxav3: add support for the Armada 38x SDHCI controller
mmc: sdhci-msm: Add platform_execute_tuning implementation
mmc: sdhci-msm: Initial support for Qualcomm chipsets
mmc: sdhci-msm: Qualcomm SDHCI binding documentation
sdhci: only reprogram retuning timer when flag is set
mmc: rename ARCH_BCM to ARCH_BCM_MOBILE
mmc: sdhci: Allow for irq being shared
mmc: sdhci-acpi: Add device id 80860F16
mmc: sdhci-acpi: Fix broken card detect for ACPI HID 80860F14
mmc: slot-gpio: Add GPIO descriptor based CD GPIO API
mmc: slot-gpio: Split out CD IRQ request into a separate function
mmc: slot-gpio: Record GPIO descriptors instead of GPIO numbers
Revert "dts: socfpga: Add support for SD/MMC on the SOCFPGA platform"
mmc: sdhci-spear: use generic card detection gpio support
mmc: sdhci-spear: remove support for power gpio
mmc: sdhci-spear: simplify resource handling
mmc: sdhci-spear: fix platform_data usage
mmc: sdhci-spear: fix error handling paths for DT
mmc: sdhci-bcm-kona: fix build errors when built-in
...

Total: +2581 -1170
Documentation/devicetree/bindings/mmc/mmc.txt (+9)
···
      this system, even if the controller claims it is.
    - cap-sd-highspeed: SD high-speed timing is supported
    - cap-mmc-highspeed: MMC high-speed timing is supported
+   - sd-uhs-sdr12: SD UHS SDR12 speed is supported
+   - sd-uhs-sdr25: SD UHS SDR25 speed is supported
+   - sd-uhs-sdr50: SD UHS SDR50 speed is supported
+   - sd-uhs-sdr104: SD UHS SDR104 speed is supported
+   - sd-uhs-ddr50: SD UHS DDR50 speed is supported
    - cap-power-off-card: powering off the card is safe
    - cap-sdio-irq: enable SDIO IRQ signalling on this interface
    - full-pwr-cycle: full power cycle of the card is supported
+   - mmc-highspeed-ddr-1_8v: eMMC high-speed DDR mode(1.8V I/O) is supported
+   - mmc-highspeed-ddr-1_2v: eMMC high-speed DDR mode(1.2V I/O) is supported
+   - mmc-hs200-1_8v: eMMC HS200 mode(1.8V I/O) is supported
+   - mmc-hs200-1_2v: eMMC HS200 mode(1.2V I/O) is supported

    *NOTE* on CD and WP polarity. To use common for all SD/MMC host controllers line
    polarity properties, we have to fix the meaning of the "normal" and "inverted"
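On a board that supports the new SD UHS modes, these flags sit next to the existing caps in the host node. A hypothetical example node (controller compatible, address, and regulator phandle are placeholders, not taken from this merge):

```dts
mmc@ab000000 {
    compatible = "vendor,example-sdhci";    /* placeholder compatible */
    reg = <0xab000000 0x200>;
    bus-width = <4>;
    vqmmc-supply = <&ldo_1v8>;              /* 1.8V I/O rail for UHS, placeholder */
    cap-sd-highspeed;
    sd-uhs-sdr50;
    sd-uhs-sdr104;
};
```

The core parses these booleans into host caps, so a controller driver does not need private properties for each speed mode.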
Documentation/devicetree/bindings/mmc/sdhci-msm.txt (+55)
···
+ * Qualcomm SDHCI controller (sdhci-msm)
+
+ This file documents differences between the core properties in mmc.txt
+ and the properties used by the sdhci-msm driver.
+
+ Required properties:
+ - compatible: Should contain "qcom,sdhci-msm-v4".
+ - reg: Base address and length of the register in the following order:
+     - Host controller register map (required)
+     - SD Core register map (required)
+ - interrupts: Should contain an interrupt-specifiers for the interrupts:
+     - Host controller interrupt (required)
+ - pinctrl-names: Should contain only one value - "default".
+ - pinctrl-0: Should specify pin control groups used for this controller.
+ - clocks: A list of phandle + clock-specifier pairs for the clocks listed in clock-names.
+ - clock-names: Should contain the following:
+     "iface" - Main peripheral bus clock (PCLK/HCLK - AHB Bus clock) (required)
+     "core" - SDC MMC clock (MCLK) (required)
+     "bus" - SDCC bus voter clock (optional)
+
+ Example:
+
+     sdhc_1: sdhci@f9824900 {
+         compatible = "qcom,sdhci-msm-v4";
+         reg = <0xf9824900 0x11c>, <0xf9824000 0x800>;
+         interrupts = <0 123 0>;
+         bus-width = <8>;
+         non-removable;
+
+         vmmc = <&pm8941_l20>;
+         vqmmc = <&pm8941_s3>;
+
+         pinctrl-names = "default";
+         pinctrl-0 = <&sdc1_clk &sdc1_cmd &sdc1_data>;
+
+         clocks = <&gcc GCC_SDCC1_APPS_CLK>, <&gcc GCC_SDCC1_AHB_CLK>;
+         clock-names = "core", "iface";
+     };
+
+     sdhc_2: sdhci@f98a4900 {
+         compatible = "qcom,sdhci-msm-v4";
+         reg = <0xf98a4900 0x11c>, <0xf98a4000 0x800>;
+         interrupts = <0 125 0>;
+         bus-width = <4>;
+         cd-gpios = <&msmgpio 62 0x1>;
+
+         vmmc = <&pm8941_l21>;
+         vqmmc = <&pm8941_l13>;
+
+         pinctrl-names = "default";
+         pinctrl-0 = <&sdc2_clk &sdc2_cmd &sdc2_data>;
+
+         clocks = <&gcc GCC_SDCC2_APPS_CLK>, <&gcc GCC_SDCC2_AHB_CLK>;
+         clock-names = "core", "iface";
+     };
Documentation/devicetree/bindings/mmc/sdhci-pxa.txt (+16 -1)
···
    and the properties used by the sdhci-pxav2 and sdhci-pxav3 drivers.

    Required properties:
-   - compatible: Should be "mrvl,pxav2-mmc" or "mrvl,pxav3-mmc".
+   - compatible: Should be "mrvl,pxav2-mmc", "mrvl,pxav3-mmc" or
+     "marvell,armada-380-sdhci".
+   - reg:
+     * for "mrvl,pxav2-mmc" and "mrvl,pxav3-mmc", one register area for
+       the SDHCI registers.
+     * for "marvell,armada-380-sdhci", two register areas. The first one
+       for the SDHCI registers themselves, and the second one for the
+       AXI/Mbus bridge registers of the SDHCI unit.

    Optional properties:
    - mrvl,clk-delay-cycles: Specify a number of cycles to delay for tuning.
···
        interrupts = <27>;
        non-removable;
        mrvl,clk-delay-cycles = <31>;
+   };
+
+   sdhci@d8000 {
+       compatible = "marvell,armada-380-sdhci";
+       reg = <0xd8000 0x1000>, <0xdc000 0x100>;
+       interrupts = <0 25 0x4>;
+       clocks = <&gateclk 17>;
+       mrvl,clk-delay-cycles = <0x1F>;
    };
Documentation/devicetree/bindings/mmc/ti-omap-hsmmc.txt (+1)
···
    - compatible:
      Should be "ti,omap2-hsmmc", for OMAP2 controllers
      Should be "ti,omap3-hsmmc", for OMAP3 controllers
+     Should be "ti,omap3-pre-es3-hsmmc" for OMAP3 controllers pre ES3.0
      Should be "ti,omap4-hsmmc", for OMAP4 controllers
    - ti,hwmods: Must be "mmc<n>", n is controller instance starting 1
Documentation/devicetree/bindings/regulator/pbias-regulator.txt (+27)
···
+ PBIAS internal regulator for SD card dual voltage i/o pads on OMAP SoCs.
+
+ Required properties:
+ - compatible:
+   - "ti,pbias-omap" for OMAP2, OMAP3, OMAP4, OMAP5, DRA7.
+ - reg: pbias register offset from syscon base and size of pbias register.
+ - syscon : phandle of the system control module
+ - regulator-name : should be
+     pbias_mmc_omap2430 for OMAP2430, OMAP3 SoCs
+     pbias_sim_omap3 for OMAP3 SoCs
+     pbias_mmc_omap4 for OMAP4 SoCs
+     pbias_mmc_omap5 for OMAP5 and DRA7 SoC
+
+ Optional properties:
+ - Any optional property defined in bindings/regulator/regulator.txt
+
+ Example:
+
+ pbias_regulator: pbias_regulator {
+     compatible = "ti,pbias-omap";
+     reg = <0 0x4>;
+     syscon = <&omap5_padconf_global>;
+     pbias_mmc_reg: pbias_mmc_omap5 {
+         regulator-name = "pbias_mmc_omap5";
+         regulator-min-microvolt = <1800000>;
+         regulator-max-microvolt = <3000000>;
+     };
MAINTAINERS (+1)
···

    MULTIMEDIA CARD (MMC), SECURE DIGITAL (SD) AND SDIO SUBSYSTEM
    M:  Chris Ball <chris@printf.net>
+   M:  Ulf Hansson <ulf.hansson@linaro.org>
    L:  linux-mmc@vger.kernel.org
    T:  git git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc.git
    S:  Maintained
arch/arm/boot/dts/dra7.dtsi (+17)
···
        ti,hwmods = "counter_32k";
    };

+   dra7_ctrl_general: tisyscon@4a002e00 {
+       compatible = "syscon";
+       reg = <0x4a002e00 0x7c>;
+   };
+
+   pbias_regulator: pbias_regulator {
+       compatible = "ti,pbias-omap";
+       reg = <0 0x4>;
+       syscon = <&dra7_ctrl_general>;
+       pbias_mmc_reg: pbias_mmc_omap5 {
+           regulator-name = "pbias_mmc_omap5";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <3000000>;
+       };
+   };
+
    dra7_pmx_core: pinmux@4a003400 {
        compatible = "pinctrl-single";
        reg = <0x4a003400 0x0464>;
···
        dmas = <&sdma 61>, <&sdma 62>;
        dma-names = "tx", "rx";
        status = "disabled";
+       pbias-supply = <&pbias_mmc_reg>;
    };

    mmc2: mmc@480b4000 {
arch/arm/boot/dts/omap2430.dtsi (+17)
···
        pinctrl-single,function-mask = <0x3f>;
    };

+   omap2_scm_general: tisyscon@49002270 {
+       compatible = "syscon";
+       reg = <0x49002270 0x240>;
+   };
+
+   pbias_regulator: pbias_regulator {
+       compatible = "ti,pbias-omap";
+       reg = <0x230 0x4>;
+       syscon = <&omap2_scm_general>;
+       pbias_mmc_reg: pbias_mmc_omap2430 {
+           regulator-name = "pbias_mmc_omap2430";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <3000000>;
+       };
+   };
+
    gpio1: gpio@4900c000 {
        compatible = "ti,omap2-gpio";
        reg = <0x4900c000 0x200>;
···
        ti,dual-volt;
        dmas = <&sdma 61>, <&sdma 62>;
        dma-names = "tx", "rx";
+       pbias-supply = <&pbias_mmc_reg>;
    };

    mmc2: mmc@480b4000 {
arch/arm/boot/dts/omap3-ldp.dts (+23)
···
    };

    &mmc1 {
+       /* See 35xx errata 2.1.1.128 in SPRZ278F */
+       compatible = "ti,omap3-pre-es3-hsmmc";
        vmmc-supply = <&vmmc1>;
        bus-width = <4>;
+       pinctrl-names = "default";
+       pinctrl-0 = <&mmc1_pins>;
+   };
+
+   &mmc2 {
+       status="disabled";
+   };
+
+   &mmc3 {
+       status="disabled";
    };

    &omap3_pmx_core {
···
            0x176 (PIN_INPUT | MUX_MODE0)   /* hsusb0_dir.hsusb0_dir */
            0x178 (PIN_INPUT | MUX_MODE0)   /* hsusb0_nxt.hsusb0_nxt */
            0x174 (PIN_OUTPUT | MUX_MODE0)  /* hsusb0_stp.hsusb0_stp */
+       >;
+   };
+
+   mmc1_pins: pinmux_mmc1_pins {
+       pinctrl-single,pins = <
+           OMAP3_CORE1_IOPAD(0x2144, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_clk.mmc1_clk */
+           OMAP3_CORE1_IOPAD(0x2146, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_cmd.mmc1_cmd */
+           OMAP3_CORE1_IOPAD(0x2148, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat0.mmc1_dat0 */
+           OMAP3_CORE1_IOPAD(0x214A, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat1.mmc1_dat1 */
+           OMAP3_CORE1_IOPAD(0x214C, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat2.mmc1_dat2 */
+           OMAP3_CORE1_IOPAD(0x214e, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat3.mmc1_dat3 */
        >;
    };
    };
arch/arm/boot/dts/omap3.dtsi (+17)
···
        pinctrl-single,function-mask = <0xff1f>;
    };

+   omap3_scm_general: tisyscon@48002270 {
+       compatible = "syscon";
+       reg = <0x48002270 0x2f0>;
+   };
+
+   pbias_regulator: pbias_regulator {
+       compatible = "ti,pbias-omap";
+       reg = <0x2b0 0x4>;
+       syscon = <&omap3_scm_general>;
+       pbias_mmc_reg: pbias_mmc_omap2430 {
+           regulator-name = "pbias_mmc_omap2430";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <3000000>;
+       };
+   };
+
    gpio1: gpio@48310000 {
        compatible = "ti,omap3-gpio";
        reg = <0x48310000 0x200>;
···
        ti,dual-volt;
        dmas = <&sdma 61>, <&sdma 62>;
        dma-names = "tx", "rx";
+       pbias-supply = <&pbias_mmc_reg>;
    };

    mmc2: mmc@480b4000 {
arch/arm/boot/dts/omap4.dtsi (+17)
···
        pinctrl-single,function-mask = <0x7fff>;
    };

+   omap4_padconf_global: tisyscon@4a1005a0 {
+       compatible = "syscon";
+       reg = <0x4a1005a0 0x170>;
+   };
+
+   pbias_regulator: pbias_regulator {
+       compatible = "ti,pbias-omap";
+       reg = <0x60 0x4>;
+       syscon = <&omap4_padconf_global>;
+       pbias_mmc_reg: pbias_mmc_omap4 {
+           regulator-name = "pbias_mmc_omap4";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <3000000>;
+       };
+   };
+
    sdma: dma-controller@4a056000 {
        compatible = "ti,omap4430-sdma";
        reg = <0x4a056000 0x1000>;
···
        ti,needs-special-reset;
        dmas = <&sdma 61>, <&sdma 62>;
        dma-names = "tx", "rx";
+       pbias-supply = <&pbias_mmc_reg>;
    };

    mmc2: mmc@480b4000 {
arch/arm/boot/dts/omap5.dtsi (+17)
···
        pinctrl-single,function-mask = <0x7fff>;
    };

+   omap5_padconf_global: tisyscon@4a002da0 {
+       compatible = "syscon";
+       reg = <0x4A002da0 0xec>;
+   };
+
+   pbias_regulator: pbias_regulator {
+       compatible = "ti,pbias-omap";
+       reg = <0x60 0x4>;
+       syscon = <&omap5_padconf_global>;
+       pbias_mmc_reg: pbias_mmc_omap5 {
+           regulator-name = "pbias_mmc_omap5";
+           regulator-min-microvolt = <1800000>;
+           regulator-max-microvolt = <3000000>;
+       };
+   };
+
    sdma: dma-controller@4a056000 {
        compatible = "ti,omap4430-sdma";
        reg = <0x4a056000 0x1000>;
···
        ti,needs-special-reset;
        dmas = <&sdma 61>, <&sdma 62>;
        dma-names = "tx", "rx";
+       pbias-supply = <&pbias_mmc_reg>;
    };

    mmc2: mmc@480b4000 {
arch/arm/configs/omap2plus_defconfig (+2)
···
    CONFIG_WATCHDOG=y
    CONFIG_OMAP_WATCHDOG=y
    CONFIG_TWL4030_WATCHDOG=y
+   CONFIG_MFD_SYSCON=y
    CONFIG_MFD_PALMAS=y
    CONFIG_MFD_TPS65217=y
    CONFIG_MFD_TPS65910=y
···
    CONFIG_REGULATOR_TPS65217=y
    CONFIG_REGULATOR_TPS65910=y
    CONFIG_REGULATOR_TWL4030=y
+   CONFIG_REGULATOR_PBIAS=y
    CONFIG_FB=y
    CONFIG_FIRMWARE_EDID=y
    CONFIG_FB_MODE_HELPERS=y
drivers/mfd/rtsx_pcr.c (+91 -41)
···
        int num_sg, bool read, int timeout)
    {
        struct completion trans_done;
-       u8 dir;
-       int err = 0, i, count;
+       int err = 0, count;
        long timeleft;
        unsigned long flags;
-       struct scatterlist *sg;
-       enum dma_data_direction dma_dir;
-       u32 val;
-       dma_addr_t addr;
-       unsigned int len;
-
-       dev_dbg(&(pcr->pci->dev), "--> %s: num_sg = %d\n", __func__, num_sg);
-
-       /* don't transfer data during abort processing */
-       if (pcr->remove_pci)
-           return -EINVAL;
-
-       if ((sglist == NULL) || (num_sg <= 0))
-           return -EINVAL;
-
-       if (read) {
-           dir = DEVICE_TO_HOST;
-           dma_dir = DMA_FROM_DEVICE;
-       } else {
-           dir = HOST_TO_DEVICE;
-           dma_dir = DMA_TO_DEVICE;
-       }
-
-       count = dma_map_sg(&(pcr->pci->dev), sglist, num_sg, dma_dir);
+       count = rtsx_pci_dma_map_sg(pcr, sglist, num_sg, read);
        if (count < 1) {
            dev_err(&(pcr->pci->dev), "scatterlist map failed\n");
            return -EINVAL;
        }
        dev_dbg(&(pcr->pci->dev), "DMA mapping count: %d\n", count);

-       val = ((u32)(dir & 0x01) << 29) | TRIG_DMA | ADMA_MODE;
-       pcr->sgi = 0;
-       for_each_sg(sglist, sg, count, i) {
-           addr = sg_dma_address(sg);
-           len = sg_dma_len(sg);
-           rtsx_pci_add_sg_tbl(pcr, addr, len, i == count - 1);
-       }

        spin_lock_irqsave(&pcr->lock, flags);

        pcr->done = &trans_done;
        pcr->trans_result = TRANS_NOT_READY;
        init_completion(&trans_done);
-       rtsx_pci_writel(pcr, RTSX_HDBAR, pcr->host_sg_tbl_addr);
-       rtsx_pci_writel(pcr, RTSX_HDBCTLR, val);

        spin_unlock_irqrestore(&pcr->lock, flags);
+
+       rtsx_pci_dma_transfer(pcr, sglist, count, read);

        timeleft = wait_for_completion_interruptible_timeout(
            &trans_done, msecs_to_jiffies(timeout));
···
        pcr->done = NULL;
        spin_unlock_irqrestore(&pcr->lock, flags);

-       dma_unmap_sg(&(pcr->pci->dev), sglist, num_sg, dma_dir);
+       rtsx_pci_dma_unmap_sg(pcr, sglist, num_sg, read);

        if ((err < 0) && (err != -ENODEV))
            rtsx_pci_stop_cmd(pcr);
···
        return err;
    }
    EXPORT_SYMBOL_GPL(rtsx_pci_transfer_data);
+
+   int rtsx_pci_dma_map_sg(struct rtsx_pcr *pcr, struct scatterlist *sglist,
+           int num_sg, bool read)
+   {
+       enum dma_data_direction dir = read ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+
+       if (pcr->remove_pci)
+           return -EINVAL;
+
+       if ((sglist == NULL) || num_sg < 1)
+           return -EINVAL;
+
+       return dma_map_sg(&(pcr->pci->dev), sglist, num_sg, dir);
+   }
+   EXPORT_SYMBOL_GPL(rtsx_pci_dma_map_sg);
+
+   int rtsx_pci_dma_unmap_sg(struct rtsx_pcr *pcr, struct scatterlist *sglist,
+           int num_sg, bool read)
+   {
+       enum dma_data_direction dir = read ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+
+       if (pcr->remove_pci)
+           return -EINVAL;
+
+       if (sglist == NULL || num_sg < 1)
+           return -EINVAL;
+
+       dma_unmap_sg(&(pcr->pci->dev), sglist, num_sg, dir);
+       return num_sg;
+   }
+   EXPORT_SYMBOL_GPL(rtsx_pci_dma_unmap_sg);
+
+   int rtsx_pci_dma_transfer(struct rtsx_pcr *pcr, struct scatterlist *sglist,
+           int sg_count, bool read)
+   {
+       struct scatterlist *sg;
+       dma_addr_t addr;
+       unsigned int len;
+       int i;
+       u32 val;
+       u8 dir = read ? DEVICE_TO_HOST : HOST_TO_DEVICE;
+       unsigned long flags;
+
+       if (pcr->remove_pci)
+           return -EINVAL;
+
+       if ((sglist == NULL) || (sg_count < 1))
+           return -EINVAL;
+
+       val = ((u32)(dir & 0x01) << 29) | TRIG_DMA | ADMA_MODE;
+       pcr->sgi = 0;
+       for_each_sg(sglist, sg, sg_count, i) {
+           addr = sg_dma_address(sg);
+           len = sg_dma_len(sg);
+           rtsx_pci_add_sg_tbl(pcr, addr, len, i == sg_count - 1);
+       }
+
+       spin_lock_irqsave(&pcr->lock, flags);
+
+       rtsx_pci_writel(pcr, RTSX_HDBAR, pcr->host_sg_tbl_addr);
+       rtsx_pci_writel(pcr, RTSX_HDBCTLR, val);
+
+       spin_unlock_irqrestore(&pcr->lock, flags);
+
+       return 0;
+   }
+   EXPORT_SYMBOL_GPL(rtsx_pci_dma_transfer);

    int rtsx_pci_read_ppbuf(struct rtsx_pcr *pcr, u8 *buf, int buf_len)
    {
···
        int_reg = rtsx_pci_readl(pcr, RTSX_BIPR);
        /* Clear interrupt flag */
        rtsx_pci_writel(pcr, RTSX_BIPR, int_reg);
+       dev_dbg(&pcr->pci->dev, "=========== BIPR 0x%8x ==========\n", int_reg);
+
        if ((int_reg & pcr->bier) == 0) {
            spin_unlock(&pcr->lock);
            return IRQ_NONE;
···
        }

        if (int_reg & (NEED_COMPLETE_INT | DELINK_INT)) {
-           if (int_reg & (TRANS_FAIL_INT | DELINK_INT)) {
+           if (int_reg & (TRANS_FAIL_INT | DELINK_INT))
                pcr->trans_result = TRANS_RESULT_FAIL;
-               if (pcr->done)
-                   complete(pcr->done);
-           } else if (int_reg & TRANS_OK_INT) {
+           else if (int_reg & TRANS_OK_INT)
                pcr->trans_result = TRANS_RESULT_OK;
-               if (pcr->done)
-                   complete(pcr->done);
+
+           if (pcr->done)
+               complete(pcr->done);
+
+           if (int_reg & SD_EXIST) {
+               struct rtsx_slot *slot = &pcr->slots[RTSX_SD_CARD];
+               if (slot && slot->done_transfer)
+                   slot->done_transfer(slot->p_dev);
+           }
+
+           if (int_reg & MS_EXIST) {
+               struct rtsx_slot *slot = &pcr->slots[RTSX_SD_CARD];
+               if (slot && slot->done_transfer)
+                   slot->done_transfer(slot->p_dev);
            }
        }
+

        if (pcr->card_inserted || pcr->card_removed)
            schedule_delayed_work(&pcr->carddet_work,
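The point of splitting rtsx_pci_transfer_data() into map/transfer/unmap exports is the "pre-req/post-req async" item from the highlights: the MMC layer can map the next request's scatterlist while the current request's DMA is still running, then unmap after completion. A rough user-space model of that three-phase flow (the function names echo the new exports, but the state bookkeeping here is purely illustrative, not driver code):

```c
#include <assert.h>

/* Lifecycle of one request, mirroring rtsx_pci_dma_{map_sg,transfer,unmap_sg}. */
enum phase { IDLE, MAPPED, RUNNING, DONE };

struct req {
    enum phase phase;
    int sg_count;
};

/* pre-req hook: map the scatterlist; may run while another request is active */
static int dma_map_sg_model(struct req *r, int num_sg)
{
    if (r->phase != IDLE || num_sg < 1)
        return -1;
    r->phase = MAPPED;
    r->sg_count = num_sg;
    return num_sg;
}

/* program the descriptor table and kick the DMA engine */
static int dma_transfer_model(struct req *r)
{
    if (r->phase != MAPPED)
        return -1;
    r->phase = RUNNING;
    return 0;
}

/* post-req hook: unmap after the completion interrupt */
static int dma_unmap_sg_model(struct req *r)
{
    if (r->phase != RUNNING)
        return -1;
    r->phase = DONE;
    return 0;
}
```

With the old monolithic rtsx_pci_transfer_data(), all three phases ran back to back inside one call, so a second request's mapping could never overlap the first request's transfer.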
drivers/mmc/card/block.c (+115 -66)
···
    {
        int err;

-       if (!(mmc_can_sanitize(card) &&
-             (card->host->caps2 & MMC_CAP2_SANITIZE))) {
+       if (!mmc_can_sanitize(card)) {
            pr_warn("%s: %s - SANITIZE is not supported\n",
                mmc_hostname(card->host), __func__);
            err = -EOPNOTSUPP;
···
        return result;
    }

-   static int send_stop(struct mmc_card *card, u32 *status)
-   {
-       struct mmc_command cmd = {0};
-       int err;
-
-       cmd.opcode = MMC_STOP_TRANSMISSION;
-       cmd.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
-       err = mmc_wait_for_cmd(card->host, &cmd, 5);
-       if (err == 0)
-           *status = cmd.resp[0];
-       return err;
-   }
-
    static int get_card_status(struct mmc_card *card, u32 *status, int retries)
    {
        struct mmc_command cmd = {0};
···
        if (err == 0)
            *status = cmd.resp[0];
        return err;
+   }
+
+   static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms,
+           bool hw_busy_detect, struct request *req, int *gen_err)
+   {
+       unsigned long timeout = jiffies + msecs_to_jiffies(timeout_ms);
+       int err = 0;
+       u32 status;
+
+       do {
+           err = get_card_status(card, &status, 5);
+           if (err) {
+               pr_err("%s: error %d requesting status\n",
+                      req->rq_disk->disk_name, err);
+               return err;
+           }
+
+           if (status & R1_ERROR) {
+               pr_err("%s: %s: error sending status cmd, status %#x\n",
+                      req->rq_disk->disk_name, __func__, status);
+               *gen_err = 1;
+           }
+
+           /* We may rely on the host hw to handle busy detection.*/
+           if ((card->host->caps & MMC_CAP_WAIT_WHILE_BUSY) &&
+               hw_busy_detect)
+               break;
+
+           /*
+            * Timeout if the device never becomes ready for data and never
+            * leaves the program state.
+            */
+           if (time_after(jiffies, timeout)) {
+               pr_err("%s: Card stuck in programming state! %s %s\n",
+                      mmc_hostname(card->host),
+                      req->rq_disk->disk_name, __func__);
+               return -ETIMEDOUT;
+           }
+
+           /*
+            * Some cards mishandle the status bits,
+            * so make sure to check both the busy
+            * indication and the card state.
+            */
+       } while (!(status & R1_READY_FOR_DATA) ||
+            (R1_CURRENT_STATE(status) == R1_STATE_PRG));
+
+       return err;
+   }
+
+   static int send_stop(struct mmc_card *card, unsigned int timeout_ms,
+           struct request *req, int *gen_err, u32 *stop_status)
+   {
+       struct mmc_host *host = card->host;
+       struct mmc_command cmd = {0};
+       int err;
+       bool use_r1b_resp = rq_data_dir(req) == WRITE;
+
+       /*
+        * Normally we use R1B responses for WRITE, but in cases where the host
+        * has specified a max_busy_timeout we need to validate it. A failure
+        * means we need to prevent the host from doing hw busy detection, which
+        * is done by converting to a R1 response instead.
+        */
+       if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout))
+           use_r1b_resp = false;
+
+       cmd.opcode = MMC_STOP_TRANSMISSION;
+       if (use_r1b_resp) {
+           cmd.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
+           cmd.busy_timeout = timeout_ms;
+       } else {
+           cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC;
+       }
+
+       err = mmc_wait_for_cmd(host, &cmd, 5);
+       if (err)
+           return err;
+
+       *stop_status = cmd.resp[0];
+
+       /* No need to check card status in case of READ. */
+       if (rq_data_dir(req) == READ)
+           return 0;
+
+       if (!mmc_host_is_spi(host) &&
+           (*stop_status & R1_ERROR)) {
+           pr_err("%s: %s: general error sending stop command, resp %#x\n",
+                  req->rq_disk->disk_name, __func__, *stop_status);
+           *gen_err = 1;
+       }
+
+       return card_busy_detect(card, timeout_ms, use_r1b_resp, req, gen_err);
    }

    #define ERR_NOMEDIUM 3
···
         */
        if (R1_CURRENT_STATE(status) == R1_STATE_DATA ||
            R1_CURRENT_STATE(status) == R1_STATE_RCV) {
-           err = send_stop(card, &stop_status);
-           if (err)
+           err = send_stop(card,
+               DIV_ROUND_UP(brq->data.timeout_ns, 1000000),
+               req, gen_err, &stop_status);
+           if (err) {
                pr_err("%s: error %d sending stop command\n",
                       req->rq_disk->disk_name, err);
-
-           /*
-            * If the stop cmd also timed out, the card is probably
-            * not present, so abort. Other errors are bad news too.
-            */
-           if (err)
+               /*
+                * If the stop cmd also timed out, the card is probably
+                * not present, so abort. Other errors are bad news too.
+                */
                return ERR_ABORT;
+           }
+
            if (stop_status & R1_CARD_ECC_FAILED)
                *ecc_err = 1;
-           if (!mmc_host_is_spi(card->host) && rq_data_dir(req) != READ)
-               if (stop_status & R1_ERROR) {
-                   pr_err("%s: %s: general error sending stop command, stop cmd response %#x\n",
-                          req->rq_disk->disk_name, __func__,
-                          stop_status);
-                   *gen_err = 1;
-               }
        }

        /* Check for set block count errors */
···
         * program mode, which we have to wait for it to complete.
         */
        if (!mmc_host_is_spi(card->host) && rq_data_dir(req) != READ) {
-           u32 status;
-           unsigned long timeout;
+           int err;

            /* Check stop command response */
            if (brq->stop.resp[0] & R1_ERROR) {
···
                gen_err = 1;
            }

-           timeout = jiffies + msecs_to_jiffies(MMC_BLK_TIMEOUT_MS);
-           do {
-               int err = get_card_status(card, &status, 5);
-               if (err) {
-                   pr_err("%s: error %d requesting status\n",
-                          req->rq_disk->disk_name, err);
-                   return MMC_BLK_CMD_ERR;
-               }
-
-               if (status & R1_ERROR) {
-                   pr_err("%s: %s: general error sending status command, card status %#x\n",
-                          req->rq_disk->disk_name, __func__,
-                          status);
-                   gen_err = 1;
-               }
-
-               /* Timeout if the device never becomes ready for data
-                * and never leaves the program state.
-                */
-               if (time_after(jiffies, timeout)) {
-                   pr_err("%s: Card stuck in programming state!"\
-                       " %s %s\n", mmc_hostname(card->host),
-                       req->rq_disk->disk_name, __func__);
-
-                   return MMC_BLK_CMD_ERR;
-               }
-               /*
-                * Some cards mishandle the status bits,
-                * so make sure to check both the busy
-                * indication and the card state.
-                */
-           } while (!(status & R1_READY_FOR_DATA) ||
-                (R1_CURRENT_STATE(status) == R1_STATE_PRG));
+           err = card_busy_detect(card, MMC_BLK_TIMEOUT_MS, false, req,
+                          &gen_err);
+           if (err)
+               return MMC_BLK_CMD_ERR;
        }

        /* if general error occurs, retry the write operation. */
···
        brq->data.blksz = 512;
        brq->stop.opcode = MMC_STOP_TRANSMISSION;
        brq->stop.arg = 0;
-       brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
        brq->data.blocks = blk_rq_sectors(req);

        /*
···
        if (rq_data_dir(req) == READ) {
            brq->cmd.opcode = readcmd;
            brq->data.flags |= MMC_DATA_READ;
+           if (brq->mrq.stop)
+               brq->stop.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 |
+                       MMC_CMD_AC;
        } else {
            brq->cmd.opcode = writecmd;
            brq->data.flags |= MMC_DATA_WRITE;
+           if (brq->mrq.stop)
+               brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B |
+                       MMC_CMD_AC;
        }

        if (do_rel_wr)
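The card_busy_detect() helper factored out above is essentially a bounded poll: re-read the card status until the card reports READY_FOR_DATA and has left the programming state, or give up. A self-contained user-space model of that loop (mock_status() stands in for get_card_status() and a poll budget stands in for the jiffies timeout; this is a sketch of the logic, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

#define R1_READY_FOR_DATA   (1u << 8)
#define R1_STATE_PRG        7u
#define R1_CURRENT_STATE(s) (((s) >> 9) & 0xfu)

/* Mocked status source: busy (programming state) for a fixed number of polls. */
static int polls_left;

static uint32_t mock_status(void)
{
    if (polls_left-- > 0)
        return R1_STATE_PRG << 9;           /* busy: state PRG, not ready */
    return R1_READY_FOR_DATA | (4u << 9);   /* ready: state TRAN */
}

/* Model of the polling loop: 0 once the card is ready, -1 on "timeout". */
static int busy_detect(int max_polls)
{
    uint32_t status;
    int polls = 0;

    do {
        if (polls++ >= max_polls)
            return -1;                      /* models the -ETIMEDOUT path */
        status = mock_status();
        /*
         * Check both the busy bit and the state field; per the original
         * comment, some cards mishandle one of the two.
         */
    } while (!(status & R1_READY_FOR_DATA) ||
             R1_CURRENT_STATE(status) == R1_STATE_PRG);

    return 0;
}
```

In the kernel version the same loop additionally breaks out early when the host advertises MMC_CAP_WAIT_WHILE_BUSY and hardware busy detection is allowed, since the controller then signals busy completion itself.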
drivers/mmc/core/Kconfig (-15)
···
    # MMC core configuration
    #

-   config MMC_UNSAFE_RESUME
-       bool "Assume MMC/SD cards are non-removable (DANGEROUS)"
-       help
-         If you say Y here, the MMC layer will assume that all cards
-         stayed in their respective slots during the suspend. The
-         normal behaviour is to remove them at suspend and
-         redetecting them at resume. Breaking this assumption will
-         in most cases result in data corruption.
-
-         This option is usually just for embedded systems which use
-         a MMC/SD card for rootfs. Most people should say N here.
-
-         This option sets a default which can be overridden by the
-         module parameter "removable=0" or "removable=1".
-
    config MMC_CLKGATE
        bool "MMC host clock gating"
        help
drivers/mmc/core/bus.c (+2 -10)
···
    {
        struct mmc_card *card = mmc_dev_to_card(dev);
        struct mmc_host *host = card->host;
-       int ret = 0;

-       if (host->bus_ops->runtime_suspend)
-           ret = host->bus_ops->runtime_suspend(host);
-
-       return ret;
+       return host->bus_ops->runtime_suspend(host);
    }

    static int mmc_runtime_resume(struct device *dev)
    {
        struct mmc_card *card = mmc_dev_to_card(dev);
        struct mmc_host *host = card->host;
-       int ret = 0;

-       if (host->bus_ops->runtime_resume)
-           ret = host->bus_ops->runtime_resume(host);
-
-       return ret;
+       return host->bus_ops->runtime_resume(host);
    }

    static int mmc_runtime_idle(struct device *dev)
drivers/mmc/core/core.c (+15 -72)
···
    #include <linux/mmc/host.h>
    #include <linux/mmc/mmc.h>
    #include <linux/mmc/sd.h>
+   #include <linux/mmc/slot-gpio.h>

    #include "core.h"
    #include "bus.h"
···
     */
    bool use_spi_crc = 1;
    module_param(use_spi_crc, bool, 0);
-
-   /*
-    * We normally treat cards as removed during suspend if they are not
-    * known to be on a non-removable bus, to avoid the risk of writing
-    * back data to a different card after resume. Allow this to be
-    * overridden if necessary.
-    */
-   #ifdef CONFIG_MMC_UNSAFE_RESUME
-   bool mmc_assume_removable;
-   #else
-   bool mmc_assume_removable = 1;
-   #endif
-   EXPORT_SYMBOL(mmc_assume_removable);
-   module_param_named(removable, mmc_assume_removable, bool, 0644);
-   MODULE_PARM_DESC(
-       removable,
-       "MMC/SD cards are removable and may be removed during suspend");

    /*
     * Internal function. Schedule delayed work in the MMC work queue.
···
        }

        err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
-           EXT_CSD_BKOPS_START, 1, timeout, use_busy_signal, true);
+           EXT_CSD_BKOPS_START, 1, timeout,
+           use_busy_signal, true, false);
        if (err) {
            pr_warn("%s: Error %d starting bkops\n",
                mmc_hostname(card->host), err);
···
        cmd.opcode = MMC_ERASE;
        cmd.arg = arg;
        cmd.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
-       cmd.cmd_timeout_ms = mmc_erase_timeout(card, arg, qty);
+       cmd.busy_timeout = mmc_erase_timeout(card, arg, qty);
        err = mmc_wait_for_cmd(card->host, &cmd, 0);
        if (err) {
            pr_err("mmc_erase: erase error %d, status %#x\n",
···
        y = 0;
        for (x = 1; x && x <= max_qty && max_qty - x >= qty; x <<= 1) {
            timeout = mmc_erase_timeout(card, arg, qty + x);
-           if (timeout > host->max_discard_to)
+           if (timeout > host->max_busy_timeout)
                break;
            if (timeout < last_timeout)
                break;
···
        struct mmc_host *host = card->host;
        unsigned int max_discard, max_trim;

-       if (!host->max_discard_to)
+       if (!host->max_busy_timeout)
            return UINT_MAX;

        /*
···
            max_discard = 0;
        }
        pr_debug("%s: calculated max. discard sectors %u for timeout %u ms\n",
-            mmc_hostname(host), max_discard, host->max_discard_to);
+            mmc_hostname(host), max_discard, host->max_busy_timeout);
        return max_discard;
    }
    EXPORT_SYMBOL(mmc_calc_max_discard);
···
    static int mmc_do_hw_reset(struct mmc_host *host, int check)
    {
        struct mmc_card *card = host->card;
-
-       if (!host->bus_ops->power_restore)
-           return -EOPNOTSUPP;

        if (!(host->caps & MMC_CAP_HW_RESET) || !host->ops->hw_reset)
            return -EOPNOTSUPP;
···
    {
        int ret;

-       if ((host->caps & MMC_CAP_NONREMOVABLE) || !host->bus_ops->alive)
+       if (host->caps & MMC_CAP_NONREMOVABLE)
            return 0;

        if (!host->card || mmc_card_removed(host->card))
···
         * if there is a _removable_ card registered, check whether it is
         * still present
         */
-       if (host->bus_ops && host->bus_ops->detect && !host->bus_dead
+       if (host->bus_ops && !host->bus_dead
            && !(host->caps & MMC_CAP_NONREMOVABLE))
            host->bus_ops->detect(host);
···
            mmc_power_off(host);
        else
            mmc_power_up(host, host->ocr_avail);
+       mmc_gpiod_request_cd_irq(host);
        _mmc_detect_change(host, 0, false);
    }
···
        host->removed = 1;
        spin_unlock_irqrestore(&host->lock, flags);
    #endif
+       if (host->slot.cd_irq >= 0)
+           disable_irq(host->slot.cd_irq);

        host->rescan_disable = 1;
        cancel_delayed_work_sync(&host->detect);
···

        mmc_bus_get(host);

-       if (!host->bus_ops || host->bus_dead || !host->bus_ops->power_restore) {
+       if (!host->bus_ops || host->bus_dead) {
            mmc_bus_put(host);
            return -EINVAL;
        }
···

        mmc_bus_get(host);

-       if (!host->bus_ops || host->bus_dead || !host->bus_ops->power_restore) {
+       if (!host->bus_ops || host->bus_dead) {
            mmc_bus_put(host);
            return -EINVAL;
        }
···
     */
    int mmc_flush_cache(struct mmc_card *card)
    {
-       struct mmc_host *host = card->host;
        int err = 0;
-
-       if (!(host->caps2 & MMC_CAP2_CACHE_CTRL))
-           return err;

        if (mmc_card_mmc(card) &&
            (card->ext_csd.cache_size > 0) &&
···
        return err;
    }
    EXPORT_SYMBOL(mmc_flush_cache);
-
-   /*
-    * Turn the cache ON/OFF.
-    * Turning the cache OFF shall trigger flushing of the data
-    * to the non-volatile storage.
-    * This function should be called with host claimed
-    */
-   int mmc_cache_ctrl(struct mmc_host *host, u8 enable)
-   {
-       struct mmc_card *card = host->card;
-       unsigned int timeout;
-       int err = 0;
-
-       if (!(host->caps2 & MMC_CAP2_CACHE_CTRL) ||
-               mmc_card_is_removable(host))
-           return err;
-
-       if (card && mmc_card_mmc(card) &&
-               (card->ext_csd.cache_size > 0)) {
-           enable = !!enable;
-
-           if (card->ext_csd.cache_ctrl ^ enable) {
-               timeout = enable ? card->ext_csd.generic_cmd6_time : 0;
-               err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
-                       EXT_CSD_CACHE_CTRL, enable, timeout);
-               if (err)
-                   pr_err("%s: cache %s error %d\n",
-                           mmc_hostname(card->host),
-                           enable ? "on" : "off",
-                           err);
-               else
-                   card->ext_csd.cache_ctrl = enable;
-           }
-       }
-
-       return err;
-   }
-   EXPORT_SYMBOL(mmc_cache_ctrl);

    #ifdef CONFIG_PM
···
            /* Validate prerequisites for suspend */
            if (host->bus_ops->pre_suspend)
                err = host->bus_ops->pre_suspend(host);
-           if (!err && host->bus_ops->suspend)
+           if (!err)
                break;

            /* Calling bus_ops->remove() with a claimed host can deadlock */
+18
drivers/mmc/core/host.c
··· 419 419 host->caps |= MMC_CAP_SD_HIGHSPEED; 420 420 if (of_find_property(np, "cap-mmc-highspeed", &len)) 421 421 host->caps |= MMC_CAP_MMC_HIGHSPEED; 422 + if (of_find_property(np, "sd-uhs-sdr12", &len)) 423 + host->caps |= MMC_CAP_UHS_SDR12; 424 + if (of_find_property(np, "sd-uhs-sdr25", &len)) 425 + host->caps |= MMC_CAP_UHS_SDR25; 426 + if (of_find_property(np, "sd-uhs-sdr50", &len)) 427 + host->caps |= MMC_CAP_UHS_SDR50; 428 + if (of_find_property(np, "sd-uhs-sdr104", &len)) 429 + host->caps |= MMC_CAP_UHS_SDR104; 430 + if (of_find_property(np, "sd-uhs-ddr50", &len)) 431 + host->caps |= MMC_CAP_UHS_DDR50; 422 432 if (of_find_property(np, "cap-power-off-card", &len)) 423 433 host->caps |= MMC_CAP_POWER_OFF_CARD; 424 434 if (of_find_property(np, "cap-sdio-irq", &len)) ··· 439 429 host->pm_caps |= MMC_PM_KEEP_POWER; 440 430 if (of_find_property(np, "enable-sdio-wakeup", &len)) 441 431 host->pm_caps |= MMC_PM_WAKE_SDIO_IRQ; 432 + if (of_find_property(np, "mmc-ddr-1_8v", &len)) 433 + host->caps |= MMC_CAP_1_8V_DDR; 434 + if (of_find_property(np, "mmc-ddr-1_2v", &len)) 435 + host->caps |= MMC_CAP_1_2V_DDR; 436 + if (of_find_property(np, "mmc-hs200-1_8v", &len)) 437 + host->caps2 |= MMC_CAP2_HS200_1_8V_SDR; 438 + if (of_find_property(np, "mmc-hs200-1_2v", &len)) 439 + host->caps2 |= MMC_CAP2_HS200_1_2V_SDR; 442 440 443 441 return 0; 444 442
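The host.c hunk above only parses the new properties; they are declared on the platform side in the board's device tree. A minimal sketch of a node exercising the new bindings follows — note the compatible string, unit address and reg values are invented for illustration, and only the capability property names come from this series:

```dts
/* Hypothetical controller node; only the capability properties are real. */
mmc0: mmc@12340000 {
	compatible = "vendor,example-sdhci";	/* made-up compatible */
	reg = <0x12340000 0x1000>;
	bus-width = <4>;
	cap-sd-highspeed;
	sd-uhs-sdr50;
	sd-uhs-sdr104;
	sd-uhs-ddr50;
	mmc-hs200-1_8v;
	mmc-ddr-1_8v;
};
```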
+28 -37
drivers/mmc/core/mmc.c
··· 856 856 857 857 /* switch to HS200 mode if bus width set successfully */ 858 858 if (!err) 859 - err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 860 - EXT_CSD_HS_TIMING, 2, 0); 859 + err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 860 + EXT_CSD_HS_TIMING, 2, 861 + card->ext_csd.generic_cmd6_time, 862 + true, true, true); 861 863 err: 862 864 return err; 863 865 } ··· 1076 1074 host->caps2 & MMC_CAP2_HS200) 1077 1075 err = mmc_select_hs200(card); 1078 1076 else if (host->caps & MMC_CAP_MMC_HIGHSPEED) 1079 - err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1080 - EXT_CSD_HS_TIMING, 1, 1081 - card->ext_csd.generic_cmd6_time); 1077 + err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1078 + EXT_CSD_HS_TIMING, 1, 1079 + card->ext_csd.generic_cmd6_time, 1080 + true, true, true); 1082 1081 1083 1082 if (err && err != -EBADMSG) 1084 1083 goto free_card; ··· 1290 1287 * If cache size is higher than 0, this indicates 1291 1288 * the existence of cache and it can be turned on. 1292 1289 */ 1293 - if ((host->caps2 & MMC_CAP2_CACHE_CTRL) && 1294 - card->ext_csd.cache_size > 0) { 1290 + if (card->ext_csd.cache_size > 0) { 1295 1291 err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1296 1292 EXT_CSD_CACHE_CTRL, 1, 1297 1293 card->ext_csd.generic_cmd6_time); ··· 1358 1356 { 1359 1357 struct mmc_command cmd = {0}; 1360 1358 struct mmc_card *card = host->card; 1359 + unsigned int timeout_ms = DIV_ROUND_UP(card->ext_csd.sa_timeout, 10000); 1361 1360 int err; 1362 - 1363 - if (host->caps2 & MMC_CAP2_NO_SLEEP_CMD) 1364 - return 0; 1365 1361 1366 1362 err = mmc_deselect_cards(host); 1367 1363 if (err) ··· 1369 1369 cmd.arg = card->rca << 16; 1370 1370 cmd.arg |= 1 << 15; 1371 1371 1372 - cmd.flags = MMC_RSP_R1B | MMC_CMD_AC; 1372 + /* 1373 + * If the max_busy_timeout of the host is specified, validate it against 1374 + * the sleep cmd timeout. 
A failure means we need to prevent the host 1375 + * from doing hw busy detection, which is done by converting to a R1 1376 + * response instead of a R1B. 1377 + */ 1378 + if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout)) { 1379 + cmd.flags = MMC_RSP_R1 | MMC_CMD_AC; 1380 + } else { 1381 + cmd.flags = MMC_RSP_R1B | MMC_CMD_AC; 1382 + cmd.busy_timeout = timeout_ms; 1383 + } 1384 + 1373 1385 err = mmc_wait_for_cmd(host, &cmd, 0); 1374 1386 if (err) 1375 1387 return err; ··· 1392 1380 * SEND_STATUS command to poll the status because that command (and most 1393 1381 * others) is invalid while the card sleeps. 1394 1382 */ 1395 - if (!(host->caps & MMC_CAP_WAIT_WHILE_BUSY)) 1396 - mmc_delay(DIV_ROUND_UP(card->ext_csd.sa_timeout, 10000)); 1383 + if (!cmd.busy_timeout || !(host->caps & MMC_CAP_WAIT_WHILE_BUSY)) 1384 + mmc_delay(timeout_ms); 1397 1385 1398 1386 return err; 1399 1387 } ··· 1416 1404 1417 1405 err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1418 1406 EXT_CSD_POWER_OFF_NOTIFICATION, 1419 - notify_type, timeout, true, false); 1407 + notify_type, timeout, true, false, false); 1420 1408 if (err) 1421 1409 pr_err("%s: Power Off Notification timed out, %u\n", 1422 1410 mmc_hostname(card->host), timeout); ··· 1496 1484 goto out; 1497 1485 } 1498 1486 1499 - err = mmc_cache_ctrl(host, 0); 1487 + err = mmc_flush_cache(host->card); 1500 1488 if (err) 1501 1489 goto out; 1502 1490 ··· 1648 1636 static const struct mmc_bus_ops mmc_ops = { 1649 1637 .remove = mmc_remove, 1650 1638 .detect = mmc_detect, 1651 - .suspend = NULL, 1652 - .resume = NULL, 1653 - .power_restore = mmc_power_restore, 1654 - .alive = mmc_alive, 1655 - .shutdown = mmc_shutdown, 1656 - }; 1657 - 1658 - static const struct mmc_bus_ops mmc_ops_unsafe = { 1659 - .remove = mmc_remove, 1660 - .detect = mmc_detect, 1661 1639 .suspend = mmc_suspend, 1662 1640 .resume = mmc_resume, 1663 1641 .runtime_suspend = mmc_runtime_suspend, ··· 1656 1654 .alive = mmc_alive, 1657 1655 .shutdown = 
mmc_shutdown, 1658 1656 }; 1659 - 1660 - static void mmc_attach_bus_ops(struct mmc_host *host) 1661 - { 1662 - const struct mmc_bus_ops *bus_ops; 1663 - 1664 - if (!mmc_card_is_removable(host)) 1665 - bus_ops = &mmc_ops_unsafe; 1666 - else 1667 - bus_ops = &mmc_ops; 1668 - mmc_attach_bus(host, bus_ops); 1669 - } 1670 1657 1671 1658 /* 1672 1659 * Starting point for MMC card init. ··· 1676 1685 if (err) 1677 1686 return err; 1678 1687 1679 - mmc_attach_bus_ops(host); 1688 + mmc_attach_bus(host, &mmc_ops); 1680 1689 if (host->ocr_avail_mmc) 1681 1690 host->ocr_avail = host->ocr_avail_mmc; 1682 1691
+41 -23
drivers/mmc/core/mmc_ops.c
··· 405 405 * timeout of zero implies maximum possible timeout 406 406 * @use_busy_signal: use the busy signal as response type 407 407 * @send_status: send status cmd to poll for busy 408 + * @ignore_crc: ignore CRC errors when sending status cmd to poll for busy 408 409 * 409 410 * Modifies the EXT_CSD register for selected card. 410 411 */ 411 412 int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, 412 - unsigned int timeout_ms, bool use_busy_signal, bool send_status) 413 + unsigned int timeout_ms, bool use_busy_signal, bool send_status, 414 + bool ignore_crc) 413 415 { 416 + struct mmc_host *host = card->host; 414 417 int err; 415 418 struct mmc_command cmd = {0}; 416 419 unsigned long timeout; 417 420 u32 status = 0; 418 - bool ignore_crc = false; 421 + bool use_r1b_resp = use_busy_signal; 419 422 420 - BUG_ON(!card); 421 - BUG_ON(!card->host); 423 + /* 424 + * If the cmd timeout and the max_busy_timeout of the host are both 425 + * specified, let's validate them. A failure means we need to prevent 426 + * the host from doing hw busy detection, which is done by converting 427 + * to a R1 response instead of a R1B. 428 + */ 429 + if (timeout_ms && host->max_busy_timeout && 430 + (timeout_ms > host->max_busy_timeout)) 431 + use_r1b_resp = false; 422 432 423 433 cmd.opcode = MMC_SWITCH; 424 434 cmd.arg = (MMC_SWITCH_MODE_WRITE_BYTE << 24) | ··· 436 426 (value << 8) | 437 427 set; 438 428 cmd.flags = MMC_CMD_AC; 439 - if (use_busy_signal) 429 + if (use_r1b_resp) { 440 430 cmd.flags |= MMC_RSP_SPI_R1B | MMC_RSP_R1B; 441 - else 431 + /* 432 + * A busy_timeout of zero means the host can decide to use 433 + * whatever value it finds suitable. 
434 + */ 435 + cmd.busy_timeout = timeout_ms; 436 + } else { 442 437 cmd.flags |= MMC_RSP_SPI_R1 | MMC_RSP_R1; 438 + } 443 439 444 - 445 - cmd.cmd_timeout_ms = timeout_ms; 446 440 if (index == EXT_CSD_SANITIZE_START) 447 441 cmd.sanitize_busy = true; 448 442 449 - err = mmc_wait_for_cmd(card->host, &cmd, MMC_CMD_RETRIES); 443 + err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES); 450 444 if (err) 451 445 return err; 452 446 ··· 459 445 return 0; 460 446 461 447 /* 462 - * Must check status to be sure of no errors 463 - * If CMD13 is to check the busy completion of the timing change, 464 - * disable the check of CRC error. 448 + * CRC errors shall only be ignored in cases where CMD13 is used to poll 449 + * to detect busy completion. 465 450 */ 466 - if (index == EXT_CSD_HS_TIMING && 467 - !(card->host->caps & MMC_CAP_WAIT_WHILE_BUSY)) 468 - ignore_crc = true; 451 + if ((host->caps & MMC_CAP_WAIT_WHILE_BUSY) && use_r1b_resp) 452 + ignore_crc = false; 469 453 470 - timeout = jiffies + msecs_to_jiffies(MMC_OPS_TIMEOUT_MS); 454 + /* We have an unspecified cmd timeout, use the fallback value. */ 455 + if (!timeout_ms) 456 + timeout_ms = MMC_OPS_TIMEOUT_MS; 457 + 458 + /* Must check status to be sure of no errors. */ 459 + timeout = jiffies + msecs_to_jiffies(timeout_ms); 471 460 do { 472 461 if (send_status) { 473 462 err = __mmc_send_status(card, &status, ignore_crc); 474 463 if (err) 475 464 return err; 476 465 } 477 - if (card->host->caps & MMC_CAP_WAIT_WHILE_BUSY) 466 + if ((host->caps & MMC_CAP_WAIT_WHILE_BUSY) && use_r1b_resp) 478 467 break; 479 - if (mmc_host_is_spi(card->host)) 468 + if (mmc_host_is_spi(host)) 480 469 break; 481 470 482 471 /* ··· 495 478 /* Timeout if the device never leaves the program state. */ 496 479 if (time_after(jiffies, timeout)) { 497 480 pr_err("%s: Card stuck in programming state! 
%s\n", 498 - mmc_hostname(card->host), __func__); 481 + mmc_hostname(host), __func__); 499 482 return -ETIMEDOUT; 500 483 } 501 484 } while (R1_CURRENT_STATE(status) == R1_STATE_PRG); 502 485 503 - if (mmc_host_is_spi(card->host)) { 486 + if (mmc_host_is_spi(host)) { 504 487 if (status & R1_SPI_ILLEGAL_COMMAND) 505 488 return -EBADMSG; 506 489 } else { 507 490 if (status & 0xFDFFA000) 508 - pr_warning("%s: unexpected status %#x after " 509 - "switch", mmc_hostname(card->host), status); 491 + pr_warn("%s: unexpected status %#x after switch\n", 492 + mmc_hostname(host), status); 510 493 if (status & R1_SWITCH_ERROR) 511 494 return -EBADMSG; 512 495 } ··· 518 501 int mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, 519 502 unsigned int timeout_ms) 520 503 { 521 - return __mmc_switch(card, set, index, value, timeout_ms, true, true); 504 + return __mmc_switch(card, set, index, value, timeout_ms, true, true, 505 + false); 522 506 } 523 507 EXPORT_SYMBOL_GPL(mmc_switch); 524 508
+1 -22
drivers/mmc/core/sd.c
··· 1209 1209 static const struct mmc_bus_ops mmc_sd_ops = { 1210 1210 .remove = mmc_sd_remove, 1211 1211 .detect = mmc_sd_detect, 1212 - .suspend = NULL, 1213 - .resume = NULL, 1214 - .power_restore = mmc_sd_power_restore, 1215 - .alive = mmc_sd_alive, 1216 - .shutdown = mmc_sd_suspend, 1217 - }; 1218 - 1219 - static const struct mmc_bus_ops mmc_sd_ops_unsafe = { 1220 - .remove = mmc_sd_remove, 1221 - .detect = mmc_sd_detect, 1222 1212 .runtime_suspend = mmc_sd_runtime_suspend, 1223 1213 .runtime_resume = mmc_sd_runtime_resume, 1224 1214 .suspend = mmc_sd_suspend, ··· 1217 1227 .alive = mmc_sd_alive, 1218 1228 .shutdown = mmc_sd_suspend, 1219 1229 }; 1220 - 1221 - static void mmc_sd_attach_bus_ops(struct mmc_host *host) 1222 - { 1223 - const struct mmc_bus_ops *bus_ops; 1224 - 1225 - if (!mmc_card_is_removable(host)) 1226 - bus_ops = &mmc_sd_ops_unsafe; 1227 - else 1228 - bus_ops = &mmc_sd_ops; 1229 - mmc_attach_bus(host, bus_ops); 1230 - } 1231 1230 1232 1231 /* 1233 1232 * Starting point for SD card init. ··· 1233 1254 if (err) 1234 1255 return err; 1235 1256 1236 - mmc_sd_attach_bus_ops(host); 1257 + mmc_attach_bus(host, &mmc_sd_ops); 1237 1258 if (host->ocr_avail_sd) 1238 1259 host->ocr_avail = host->ocr_avail_sd; 1239 1260
+140 -40
drivers/mmc/core/slot-gpio.c
··· 10 10 11 11 #include <linux/err.h> 12 12 #include <linux/gpio.h> 13 + #include <linux/gpio/consumer.h> 13 14 #include <linux/interrupt.h> 14 15 #include <linux/jiffies.h> 15 16 #include <linux/mmc/host.h> ··· 19 18 #include <linux/slab.h> 20 19 21 20 struct mmc_gpio { 22 - int ro_gpio; 23 - int cd_gpio; 21 + struct gpio_desc *ro_gpio; 22 + struct gpio_desc *cd_gpio; 23 + bool override_ro_active_level; 24 + bool override_cd_active_level; 24 25 char *ro_label; 25 26 char cd_label[0]; 26 27 }; ··· 60 57 ctx->ro_label = ctx->cd_label + len; 61 58 snprintf(ctx->cd_label, len, "%s cd", dev_name(host->parent)); 62 59 snprintf(ctx->ro_label, len, "%s ro", dev_name(host->parent)); 63 - ctx->cd_gpio = -EINVAL; 64 - ctx->ro_gpio = -EINVAL; 65 60 host->slot.handler_priv = ctx; 66 61 } 67 62 } ··· 73 72 { 74 73 struct mmc_gpio *ctx = host->slot.handler_priv; 75 74 76 - if (!ctx || !gpio_is_valid(ctx->ro_gpio)) 75 + if (!ctx || !ctx->ro_gpio) 77 76 return -ENOSYS; 78 77 79 - return !gpio_get_value_cansleep(ctx->ro_gpio) ^ 80 - !!(host->caps2 & MMC_CAP2_RO_ACTIVE_HIGH); 78 + if (ctx->override_ro_active_level) 79 + return !gpiod_get_raw_value_cansleep(ctx->ro_gpio) ^ 80 + !!(host->caps2 & MMC_CAP2_RO_ACTIVE_HIGH); 81 + 82 + return gpiod_get_value_cansleep(ctx->ro_gpio); 81 83 } 82 84 EXPORT_SYMBOL(mmc_gpio_get_ro); 83 85 ··· 88 84 { 89 85 struct mmc_gpio *ctx = host->slot.handler_priv; 90 86 91 - if (!ctx || !gpio_is_valid(ctx->cd_gpio)) 87 + if (!ctx || !ctx->cd_gpio) 92 88 return -ENOSYS; 93 89 94 - return !gpio_get_value_cansleep(ctx->cd_gpio) ^ 95 - !!(host->caps2 & MMC_CAP2_CD_ACTIVE_HIGH); 90 + if (ctx->override_cd_active_level) 91 + return !gpiod_get_raw_value_cansleep(ctx->cd_gpio) ^ 92 + !!(host->caps2 & MMC_CAP2_CD_ACTIVE_HIGH); 93 + 94 + return gpiod_get_value_cansleep(ctx->cd_gpio); 96 95 } 97 96 EXPORT_SYMBOL(mmc_gpio_get_cd); 98 97 ··· 132 125 if (ret < 0) 133 126 return ret; 134 127 135 - ctx->ro_gpio = gpio; 128 + ctx->override_ro_active_level = true; 129 + 
ctx->ro_gpio = gpio_to_desc(gpio); 136 130 137 131 return 0; 138 132 } 139 133 EXPORT_SYMBOL(mmc_gpio_request_ro); 134 + 135 + void mmc_gpiod_request_cd_irq(struct mmc_host *host) 136 + { 137 + struct mmc_gpio *ctx = host->slot.handler_priv; 138 + int ret, irq; 139 + 140 + if (host->slot.cd_irq >= 0 || !ctx || !ctx->cd_gpio) 141 + return; 142 + 143 + irq = gpiod_to_irq(ctx->cd_gpio); 144 + 145 + /* 146 + * Even if gpiod_to_irq() returns a valid IRQ number, the platform might 147 + * still prefer to poll, e.g., because that IRQ number is already used 148 + * by another unit and cannot be shared. 149 + */ 150 + if (irq >= 0 && host->caps & MMC_CAP_NEEDS_POLL) 151 + irq = -EINVAL; 152 + 153 + if (irq >= 0) { 154 + ret = devm_request_threaded_irq(&host->class_dev, irq, 155 + NULL, mmc_gpio_cd_irqt, 156 + IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 157 + ctx->cd_label, host); 158 + if (ret < 0) 159 + irq = ret; 160 + } 161 + 162 + host->slot.cd_irq = irq; 163 + 164 + if (irq < 0) 165 + host->caps |= MMC_CAP_NEEDS_POLL; 166 + } 167 + EXPORT_SYMBOL(mmc_gpiod_request_cd_irq); 140 168 141 169 /** 142 170 * mmc_gpio_request_cd - request a gpio for card-detection ··· 196 154 unsigned int debounce) 197 155 { 198 156 struct mmc_gpio *ctx; 199 - int irq = gpio_to_irq(gpio); 200 157 int ret; 201 158 202 159 ret = mmc_gpio_alloc(host); ··· 220 179 return ret; 221 180 } 222 181 223 - /* 224 - * Even if gpio_to_irq() returns a valid IRQ number, the platform might 225 - * still prefer to poll, e.g., because that IRQ number is already used 226 - * by another unit and cannot be shared. 
227 - */ 228 - if (irq >= 0 && host->caps & MMC_CAP_NEEDS_POLL) 229 - irq = -EINVAL; 182 + ctx->override_cd_active_level = true; 183 + ctx->cd_gpio = gpio_to_desc(gpio); 230 184 231 - if (irq >= 0) { 232 - ret = devm_request_threaded_irq(&host->class_dev, irq, 233 - NULL, mmc_gpio_cd_irqt, 234 - IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 235 - ctx->cd_label, host); 236 - if (ret < 0) 237 - irq = ret; 238 - } 239 - 240 - host->slot.cd_irq = irq; 241 - 242 - if (irq < 0) 243 - host->caps |= MMC_CAP_NEEDS_POLL; 244 - 245 - ctx->cd_gpio = gpio; 185 + mmc_gpiod_request_cd_irq(host); 246 186 247 187 return 0; 248 188 } ··· 241 219 struct mmc_gpio *ctx = host->slot.handler_priv; 242 220 int gpio; 243 221 244 - if (!ctx || !gpio_is_valid(ctx->ro_gpio)) 222 + if (!ctx || !ctx->ro_gpio) 245 223 return; 246 224 247 - gpio = ctx->ro_gpio; 248 - ctx->ro_gpio = -EINVAL; 225 + gpio = desc_to_gpio(ctx->ro_gpio); 226 + ctx->ro_gpio = NULL; 249 227 250 228 devm_gpio_free(&host->class_dev, gpio); 251 229 } ··· 263 241 struct mmc_gpio *ctx = host->slot.handler_priv; 264 242 int gpio; 265 243 266 - if (!ctx || !gpio_is_valid(ctx->cd_gpio)) 244 + if (!ctx || !ctx->cd_gpio) 267 245 return; 268 246 269 247 if (host->slot.cd_irq >= 0) { ··· 271 249 host->slot.cd_irq = -EINVAL; 272 250 } 273 251 274 - gpio = ctx->cd_gpio; 275 - ctx->cd_gpio = -EINVAL; 252 + gpio = desc_to_gpio(ctx->cd_gpio); 253 + ctx->cd_gpio = NULL; 276 254 277 255 devm_gpio_free(&host->class_dev, gpio); 278 256 } 279 257 EXPORT_SYMBOL(mmc_gpio_free_cd); 258 + 259 + /** 260 + * mmc_gpiod_request_cd - request a gpio descriptor for card-detection 261 + * @host: mmc host 262 + * @con_id: function within the GPIO consumer 263 + * @idx: index of the GPIO to obtain in the consumer 264 + * @override_active_level: ignore %GPIO_ACTIVE_LOW flag 265 + * @debounce: debounce time in microseconds 266 + * 267 + * Use this function in place of mmc_gpio_request_cd() to use the GPIO 268 + * descriptor API. 
Note that it is paired with mmc_gpiod_free_cd(), not 269 + * mmc_gpio_free_cd(). Note also that it must be called prior to mmc_add_host(), 270 + * otherwise the caller must also call mmc_gpiod_request_cd_irq(). 271 + * 272 + * Returns zero on success, else an error. 273 + */ 274 + int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id, 275 + unsigned int idx, bool override_active_level, 276 + unsigned int debounce) 277 + { 278 + struct mmc_gpio *ctx; 279 + struct gpio_desc *desc; 280 + int ret; 281 + 282 + ret = mmc_gpio_alloc(host); 283 + if (ret < 0) 284 + return ret; 285 + 286 + ctx = host->slot.handler_priv; 287 + 288 + if (!con_id) 289 + con_id = ctx->cd_label; 290 + 291 + desc = devm_gpiod_get_index(host->parent, con_id, idx); 292 + if (IS_ERR(desc)) 293 + return PTR_ERR(desc); 294 + 295 + ret = gpiod_direction_input(desc); 296 + if (ret < 0) 297 + return ret; 298 + 299 + if (debounce) { 300 + ret = gpiod_set_debounce(desc, debounce); 301 + if (ret < 0) 302 + return ret; 303 + } 304 + 305 + ctx->override_cd_active_level = override_active_level; 306 + ctx->cd_gpio = desc; 307 + 308 + return 0; 309 + } 310 + EXPORT_SYMBOL(mmc_gpiod_request_cd); 311 + 312 + /** 313 + * mmc_gpiod_free_cd - free the card-detection gpio descriptor 314 + * @host: mmc host 315 + * 316 + * It's provided only for cases in which client drivers need to manually free 317 + * up the card-detection gpio requested by mmc_gpiod_request_cd(). 318 + */ 319 + void mmc_gpiod_free_cd(struct mmc_host *host) 320 + { 321 + struct mmc_gpio *ctx = host->slot.handler_priv; 322 + 323 + if (!ctx || !ctx->cd_gpio) 324 + return; 325 + 326 + if (host->slot.cd_irq >= 0) { 327 + devm_free_irq(&host->class_dev, host->slot.cd_irq, host); 328 + host->slot.cd_irq = -EINVAL; 329 + } 330 + 331 + devm_gpiod_put(&host->class_dev, ctx->cd_gpio); 332 + 333 + ctx->cd_gpio = NULL; 334 + } 335 + EXPORT_SYMBOL(mmc_gpiod_free_cd);
+14 -9
drivers/mmc/host/Kconfig
··· 263 263 264 264 config MMC_SDHCI_BCM_KONA 265 265 tristate "SDHCI support on Broadcom KONA platform" 266 - depends on ARCH_BCM 266 + depends on ARCH_BCM_MOBILE 267 267 select MMC_SDHCI_PLTFM 268 268 help 269 269 This selects the Broadcom Kona Secure Digital Host Controller ··· 331 331 This selects the Atmel Multimedia Card Interface driver. If 332 332 you have an AT32 (AVR32) or AT91 platform with a Multimedia 333 333 Card slot, say Y or M here. 334 + 335 + If unsure, say N. 336 + 337 + config MMC_SDHCI_MSM 338 + tristate "Qualcomm SDHCI Controller Support" 339 + depends on ARCH_QCOM 340 + depends on MMC_SDHCI_PLTFM 341 + help 342 + This selects the Secure Digital Host Controller Interface (SDHCI) 343 + support present in Qualcomm SOCs. The controller supports 344 + SD/MMC/SDIO devices. 345 + 346 + If you have a controller with this interface, say Y or M here. 334 347 335 348 If unsure, say N. 336 349 ··· 592 579 This selects support for Samsung Exynos SoC specific extensions to the 593 580 Synopsys DesignWare Memory Card Interface driver. Select this option 594 581 for platforms based on Exynos4 and Exynos5 SoC's. 595 - 596 - config MMC_DW_SOCFPGA 597 - tristate "SOCFPGA specific extensions for Synopsys DW Memory Card Interface" 598 - depends on MMC_DW && MFD_SYSCON 599 - select MMC_DW_PLTFM 600 - help 601 - This selects support for Altera SoCFPGA specific extensions to the 602 - Synopsys DesignWare Memory Card Interface driver. 603 582 604 583 config MMC_DW_K3 605 584 tristate "K3 specific extensions for Synopsys DW Memory Card Interface"
+1 -1
drivers/mmc/host/Makefile
··· 43 43 obj-$(CONFIG_MMC_DW) += dw_mmc.o 44 44 obj-$(CONFIG_MMC_DW_PLTFM) += dw_mmc-pltfm.o 45 45 obj-$(CONFIG_MMC_DW_EXYNOS) += dw_mmc-exynos.o 46 - obj-$(CONFIG_MMC_DW_SOCFPGA) += dw_mmc-socfpga.o 47 46 obj-$(CONFIG_MMC_DW_K3) += dw_mmc-k3.o 48 47 obj-$(CONFIG_MMC_DW_PCI) += dw_mmc-pci.o 49 48 obj-$(CONFIG_MMC_SH_MMCIF) += sh_mmcif.o ··· 63 64 obj-$(CONFIG_MMC_SDHCI_OF_HLWD) += sdhci-of-hlwd.o 64 65 obj-$(CONFIG_MMC_SDHCI_BCM_KONA) += sdhci-bcm-kona.o 65 66 obj-$(CONFIG_MMC_SDHCI_BCM2835) += sdhci-bcm2835.o 67 + obj-$(CONFIG_MMC_SDHCI_MSM) += sdhci-msm.o 66 68 67 69 ifeq ($(CONFIG_CB710_DEBUG),y) 68 70 CFLAGS-cb710-mmc += -DDEBUG
+2 -2
drivers/mmc/host/davinci_mmc.c
··· 1192 1192 struct device_node *np; 1193 1193 struct davinci_mmc_config *pdata = pdev->dev.platform_data; 1194 1194 const struct of_device_id *match = 1195 - of_match_device(of_match_ptr(davinci_mmc_dt_ids), &pdev->dev); 1195 + of_match_device(davinci_mmc_dt_ids, &pdev->dev); 1196 1196 u32 data; 1197 1197 1198 1198 np = pdev->dev.of_node; ··· 1468 1468 .name = "davinci_mmc", 1469 1469 .owner = THIS_MODULE, 1470 1470 .pm = davinci_mmcsd_pm_ops, 1471 - .of_match_table = of_match_ptr(davinci_mmc_dt_ids), 1471 + .of_match_table = davinci_mmc_dt_ids, 1472 1472 }, 1473 1473 .remove = __exit_p(davinci_mmcsd_remove), 1474 1474 .id_table = davinci_mmc_devtype,
+2
drivers/mmc/host/dw_mmc-k3.c
··· 50 50 return dw_mci_pltfm_register(pdev, drv_data); 51 51 } 52 52 53 + #ifdef CONFIG_PM_SLEEP 53 54 static int dw_mci_k3_suspend(struct device *dev) 54 55 { 55 56 struct dw_mci *host = dev_get_drvdata(dev); ··· 76 75 77 76 return dw_mci_resume(host); 78 77 } 78 + #endif /* CONFIG_PM_SLEEP */ 79 79 80 80 static SIMPLE_DEV_PM_OPS(dw_mci_k3_pmops, dw_mci_k3_suspend, dw_mci_k3_resume); 81 81
+9 -3
drivers/mmc/host/dw_mmc-pltfm.c
··· 25 25 #include "dw_mmc.h" 26 26 #include "dw_mmc-pltfm.h" 27 27 28 - static void dw_mci_rockchip_prepare_command(struct dw_mci *host, u32 *cmdr) 28 + static void dw_mci_pltfm_prepare_command(struct dw_mci *host, u32 *cmdr) 29 29 { 30 30 *cmdr |= SDMMC_CMD_USE_HOLD_REG; 31 31 } 32 32 33 33 static const struct dw_mci_drv_data rockchip_drv_data = { 34 - .prepare_command = dw_mci_rockchip_prepare_command, 34 + .prepare_command = dw_mci_pltfm_prepare_command, 35 + }; 36 + 37 + static const struct dw_mci_drv_data socfpga_drv_data = { 38 + .prepare_command = dw_mci_pltfm_prepare_command, 35 39 }; 36 40 37 41 int dw_mci_pltfm_register(struct platform_device *pdev, ··· 96 92 { .compatible = "snps,dw-mshc", }, 97 93 { .compatible = "rockchip,rk2928-dw-mshc", 98 94 .data = &rockchip_drv_data }, 95 + { .compatible = "altr,socfpga-dw-mshc", 96 + .data = &socfpga_drv_data }, 99 97 {}, 100 98 }; 101 99 MODULE_DEVICE_TABLE(of, dw_mci_pltfm_match); ··· 129 123 .remove = dw_mci_pltfm_remove, 130 124 .driver = { 131 125 .name = "dw_mmc", 132 - .of_match_table = of_match_ptr(dw_mci_pltfm_match), 126 + .of_match_table = dw_mci_pltfm_match, 133 127 .pm = &dw_mci_pltfm_pmops, 134 128 }, 135 129 };
-138
drivers/mmc/host/dw_mmc-socfpga.c
··· 1 - /* 2 - * Altera SoCFPGA Specific Extensions for Synopsys DW Multimedia Card Interface 3 - * driver 4 - * 5 - * Copyright (C) 2012, Samsung Electronics Co., Ltd. 6 - * Copyright (C) 2013 Altera Corporation 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License, or 11 - * (at your option) any later version. 12 - * 13 - * Taken from dw_mmc-exynos.c 14 - */ 15 - #include <linux/clk.h> 16 - #include <linux/mfd/syscon.h> 17 - #include <linux/mmc/host.h> 18 - #include <linux/mmc/dw_mmc.h> 19 - #include <linux/module.h> 20 - #include <linux/of.h> 21 - #include <linux/platform_device.h> 22 - #include <linux/regmap.h> 23 - 24 - #include "dw_mmc.h" 25 - #include "dw_mmc-pltfm.h" 26 - 27 - #define SYSMGR_SDMMCGRP_CTRL_OFFSET 0x108 28 - #define DRV_CLK_PHASE_SHIFT_SEL_MASK 0x7 29 - #define SYSMGR_SDMMC_CTRL_SET(smplsel, drvsel) \ 30 - ((((smplsel) & 0x7) << 3) | (((drvsel) & 0x7) << 0)) 31 - 32 - /* SOCFPGA implementation specific driver private data */ 33 - struct dw_mci_socfpga_priv_data { 34 - u8 ciu_div; /* card interface unit divisor */ 35 - u32 hs_timing; /* bitmask for CIU clock phase shift */ 36 - struct regmap *sysreg; /* regmap for system manager register */ 37 - }; 38 - 39 - static int dw_mci_socfpga_priv_init(struct dw_mci *host) 40 - { 41 - return 0; 42 - } 43 - 44 - static int dw_mci_socfpga_setup_clock(struct dw_mci *host) 45 - { 46 - struct dw_mci_socfpga_priv_data *priv = host->priv; 47 - 48 - clk_disable_unprepare(host->ciu_clk); 49 - regmap_write(priv->sysreg, SYSMGR_SDMMCGRP_CTRL_OFFSET, 50 - priv->hs_timing); 51 - clk_prepare_enable(host->ciu_clk); 52 - 53 - host->bus_hz /= (priv->ciu_div + 1); 54 - return 0; 55 - } 56 - 57 - static void dw_mci_socfpga_prepare_command(struct dw_mci *host, u32 *cmdr) 58 - { 59 - struct dw_mci_socfpga_priv_data *priv = host->priv; 60 - 61 - if 
(priv->hs_timing & DRV_CLK_PHASE_SHIFT_SEL_MASK) 62 - *cmdr |= SDMMC_CMD_USE_HOLD_REG; 63 - } 64 - 65 - static int dw_mci_socfpga_parse_dt(struct dw_mci *host) 66 - { 67 - struct dw_mci_socfpga_priv_data *priv; 68 - struct device_node *np = host->dev->of_node; 69 - u32 timing[2]; 70 - u32 div = 0; 71 - int ret; 72 - 73 - priv = devm_kzalloc(host->dev, sizeof(*priv), GFP_KERNEL); 74 - if (!priv) { 75 - dev_err(host->dev, "mem alloc failed for private data\n"); 76 - return -ENOMEM; 77 - } 78 - 79 - priv->sysreg = syscon_regmap_lookup_by_compatible("altr,sys-mgr"); 80 - if (IS_ERR(priv->sysreg)) { 81 - dev_err(host->dev, "regmap for altr,sys-mgr lookup failed.\n"); 82 - return PTR_ERR(priv->sysreg); 83 - } 84 - 85 - ret = of_property_read_u32(np, "altr,dw-mshc-ciu-div", &div); 86 - if (ret) 87 - dev_info(host->dev, "No dw-mshc-ciu-div specified, assuming 1"); 88 - priv->ciu_div = div; 89 - 90 - ret = of_property_read_u32_array(np, 91 - "altr,dw-mshc-sdr-timing", timing, 2); 92 - if (ret) 93 - return ret; 94 - 95 - priv->hs_timing = SYSMGR_SDMMC_CTRL_SET(timing[0], timing[1]); 96 - host->priv = priv; 97 - return 0; 98 - } 99 - 100 - static const struct dw_mci_drv_data socfpga_drv_data = { 101 - .init = dw_mci_socfpga_priv_init, 102 - .setup_clock = dw_mci_socfpga_setup_clock, 103 - .prepare_command = dw_mci_socfpga_prepare_command, 104 - .parse_dt = dw_mci_socfpga_parse_dt, 105 - }; 106 - 107 - static const struct of_device_id dw_mci_socfpga_match[] = { 108 - { .compatible = "altr,socfpga-dw-mshc", 109 - .data = &socfpga_drv_data, }, 110 - {}, 111 - }; 112 - MODULE_DEVICE_TABLE(of, dw_mci_socfpga_match); 113 - 114 - static int dw_mci_socfpga_probe(struct platform_device *pdev) 115 - { 116 - const struct dw_mci_drv_data *drv_data; 117 - const struct of_device_id *match; 118 - 119 - match = of_match_node(dw_mci_socfpga_match, pdev->dev.of_node); 120 - drv_data = match->data; 121 - return dw_mci_pltfm_register(pdev, drv_data); 122 - } 123 - 124 - static struct 
platform_driver dw_mci_socfpga_pltfm_driver = { 125 - .probe = dw_mci_socfpga_probe, 126 - .remove = __exit_p(dw_mci_pltfm_remove), 127 - .driver = { 128 - .name = "dwmmc_socfpga", 129 - .of_match_table = dw_mci_socfpga_match, 130 - .pm = &dw_mci_pltfm_pmops, 131 - }, 132 - }; 133 - 134 - module_platform_driver(dw_mci_socfpga_pltfm_driver); 135 - 136 - MODULE_DESCRIPTION("Altera SOCFPGA Specific DW-MSHC Driver Extension"); 137 - MODULE_LICENSE("GPL v2"); 138 - MODULE_ALIAS("platform:dwmmc-socfpga");
+1 -1
drivers/mmc/host/dw_mmc.c
··· 1345 1345 1346 1346 if (!err) { 1347 1347 if (!data->stop || mrq->sbc) { 1348 - if (mrq->sbc) 1348 + if (mrq->sbc && data->stop) 1349 1349 data->stop->error = 0; 1350 1350 dw_mci_request_end(host, mrq); 1351 1351 goto unlock;
+2 -1
drivers/mmc/host/dw_mmc.h
··· 185 185 186 186 extern int dw_mci_probe(struct dw_mci *host); 187 187 extern void dw_mci_remove(struct dw_mci *host); 188 - #ifdef CONFIG_PM 188 + #ifdef CONFIG_PM_SLEEP 189 189 extern int dw_mci_suspend(struct dw_mci *host); 190 190 extern int dw_mci_resume(struct dw_mci *host); 191 191 #endif ··· 244 244 * @prepare_command: handle CMD register extensions. 245 245 * @set_ios: handle bus specific extensions. 246 246 * @parse_dt: parse implementation specific device tree properties. 247 + * @execute_tuning: implementation specific tuning procedure. 247 248 * 248 249 * Provide controller implementation specific extensions. The usage of this 249 250 * data structure is fully optional and usage of each member in this structure
+45 -9
drivers/mmc/host/mmci.c
··· 921 921 { 922 922 void __iomem *base = host->base; 923 923 bool sbc = (cmd == host->mrq->sbc); 924 + bool busy_resp = host->variant->busy_detect && 925 + (cmd->flags & MMC_RSP_BUSY); 926 + 927 + /* Check if we need to wait for busy completion. */ 928 + if (host->busy_status && (status & MCI_ST_CARDBUSY)) 929 + return; 930 + 931 + /* Enable busy completion if needed and supported. */ 932 + if (!host->busy_status && busy_resp && 933 + !(status & (MCI_CMDCRCFAIL|MCI_CMDTIMEOUT)) && 934 + (readl(base + MMCISTATUS) & MCI_ST_CARDBUSY)) { 935 + writel(readl(base + MMCIMASK0) | MCI_ST_BUSYEND, 936 + base + MMCIMASK0); 937 + host->busy_status = status & (MCI_CMDSENT|MCI_CMDRESPEND); 938 + return; 939 + } 940 + 941 + /* At busy completion, mask the IRQ and complete the request. */ 942 + if (host->busy_status) { 943 + writel(readl(base + MMCIMASK0) & ~MCI_ST_BUSYEND, 944 + base + MMCIMASK0); 945 + host->busy_status = 0; 946 + } 924 947 925 948 host->cmd = NULL; 926 949 ··· 1162 1139 status &= ~MCI_IRQ1MASK; 1163 1140 } 1164 1141 1142 + /* 1143 + * We intentionally clear the MCI_ST_CARDBUSY IRQ here (if it's 1144 + * enabled) since the HW seems to be triggering the IRQ on both 1145 + * edges while monitoring DAT0 for busy completion. 
1146 + */ 1165 1147 status &= readl(host->base + MMCIMASK0); 1166 1148 writel(status, host->base + MMCICLEAR); 1167 1149 1168 1150 dev_dbg(mmc_dev(host->mmc), "irq0 (data+cmd) %08x\n", status); 1151 + 1152 + cmd = host->cmd; 1153 + if ((status|host->busy_status) & (MCI_CMDCRCFAIL|MCI_CMDTIMEOUT| 1154 + MCI_CMDSENT|MCI_CMDRESPEND) && cmd) 1155 + mmci_cmd_irq(host, cmd, status); 1169 1156 1170 1157 data = host->data; 1171 1158 if (status & (MCI_DATACRCFAIL|MCI_DATATIMEOUT|MCI_STARTBITERR| ··· 1183 1150 MCI_DATABLOCKEND) && data) 1184 1151 mmci_data_irq(host, data, status); 1185 1152 1186 - cmd = host->cmd; 1187 - if (status & (MCI_CMDCRCFAIL|MCI_CMDTIMEOUT|MCI_CMDSENT|MCI_CMDRESPEND) && cmd) 1188 - mmci_cmd_irq(host, cmd, status); 1153 + /* Don't poll for busy completion in irq context. */ 1154 + if (host->busy_status) 1155 + status &= ~MCI_ST_CARDBUSY; 1189 1156 1190 1157 ret = 1; 1191 1158 } while (status); ··· 1536 1503 goto clk_disable; 1537 1504 } 1538 1505 1539 - if (variant->busy_detect) { 1540 - mmci_ops.card_busy = mmci_card_busy; 1541 - mmci_write_datactrlreg(host, MCI_ST_DPSM_BUSYMODE); 1542 - } 1543 - 1544 - mmc->ops = &mmci_ops; 1545 1506 /* 1546 1507 * The ARM and ST versions of the block have slightly different 1547 1508 * clock divider equations which means that the minimum divider ··· 1568 1541 1569 1542 mmc->caps = plat->capabilities; 1570 1543 mmc->caps2 = plat->capabilities2; 1544 + 1545 + if (variant->busy_detect) { 1546 + mmci_ops.card_busy = mmci_card_busy; 1547 + mmci_write_datactrlreg(host, MCI_ST_DPSM_BUSYMODE); 1548 + mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY; 1549 + mmc->max_busy_timeout = 0; 1550 + } 1551 + 1552 + mmc->ops = &mmci_ops; 1571 1553 1572 1554 /* We support these PM capabilities. */ 1573 1555 mmc->pm_caps = MMC_PM_KEEP_POWER;
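The mmci.c hunk above threads a three-stage busy-detection flow through mmci_cmd_irq() using the new host->busy_status field: ignore extra busy edges while waiting, unmask MCI_ST_BUSYEND and latch the completion status when a successful command leaves DAT0 busy, then re-mask and complete on busy-end. A minimal userspace sketch of that state machine (struct and function names are hypothetical; bit positions other than MCI_ST_BUSYEND are placed arbitrarily for the model, and register reads are simulated):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* MCI_ST_BUSYEND mirrors the new (1 << 24) mask bit from mmci.h;
 * the other bit positions are arbitrary for this model. */
#define MCI_CMDCRCFAIL  (1u << 0)
#define MCI_CMDTIMEOUT  (1u << 2)
#define MCI_CMDRESPEND  (1u << 6)
#define MCI_CMDSENT     (1u << 7)
#define MCI_ST_CARDBUSY (1u << 24)	/* status-register busy flag */
#define MCI_ST_BUSYEND  (1u << 24)	/* mask-register busy-end enable */

struct model_host {
	uint32_t busy_status;	/* mirrors the new host->busy_status */
	uint32_t mask0;		/* stands in for MMCIMASK0 */
	bool completed;
};

static void model_cmd_irq(struct model_host *host, uint32_t status,
			  bool card_busy_now)
{
	/* Stage 1: already waiting for busy-end; ignore busy edges. */
	if (host->busy_status && (status & MCI_ST_CARDBUSY))
		return;

	/* Stage 2: command finished without error while DAT0 is busy:
	 * unmask the busy-end IRQ and defer completion. */
	if (!host->busy_status && card_busy_now &&
	    !(status & (MCI_CMDCRCFAIL | MCI_CMDTIMEOUT))) {
		host->mask0 |= MCI_ST_BUSYEND;
		host->busy_status = status & (MCI_CMDSENT | MCI_CMDRESPEND);
		return;
	}

	/* Stage 3: busy released; re-mask the IRQ and complete. */
	if (host->busy_status) {
		host->mask0 &= ~MCI_ST_BUSYEND;
		host->busy_status = 0;
	}
	host->completed = true;
}
```

Deferring completion this way is what lets the probe path advertise MMC_CAP_WAIT_WHILE_BUSY: the core no longer has to poll card_busy() for R1b commands.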
+2
drivers/mmc/host/mmci.h
··· 140 140 /* Extended status bits for the ST Micro variants */ 141 141 #define MCI_ST_SDIOITMASK (1 << 22) 142 142 #define MCI_ST_CEATAENDMASK (1 << 23) 143 + #define MCI_ST_BUSYEND (1 << 24) 143 144 144 145 #define MMCIMASK1 0x040 145 146 #define MMCIFIFOCNT 0x048 ··· 188 187 u32 pwr_reg; 189 188 u32 clk_reg; 190 189 u32 datactrl_reg; 190 + u32 busy_status; 191 191 bool vqmmc_enabled; 192 192 struct mmci_platform_data *plat; 193 193 struct variant_data *variant;
+36 -57
drivers/mmc/host/omap.c
··· 26 26 #include <linux/omap-dma.h> 27 27 #include <linux/mmc/host.h> 28 28 #include <linux/mmc/card.h> 29 + #include <linux/mmc/mmc.h> 29 30 #include <linux/clk.h> 30 31 #include <linux/scatterlist.h> 31 32 #include <linux/slab.h> ··· 131 130 u32 dma_rx_burst; 132 131 struct dma_chan *dma_tx; 133 132 u32 dma_tx_burst; 134 - struct resource *mem_res; 135 133 void __iomem *virt_base; 136 134 unsigned int phys_base; 137 135 int irq; ··· 153 153 u32 total_bytes_left; 154 154 155 155 unsigned features; 156 - unsigned use_dma:1; 157 156 unsigned brs_received:1, dma_done:1; 158 157 unsigned dma_in_use:1; 159 158 spinlock_t dma_lock; ··· 337 338 u32 cmdreg; 338 339 u32 resptype; 339 340 u32 cmdtype; 341 + u16 irq_mask; 340 342 341 343 host->cmd = cmd; 342 344 ··· 390 390 OMAP_MMC_WRITE(host, CTO, 200); 391 391 OMAP_MMC_WRITE(host, ARGL, cmd->arg & 0xffff); 392 392 OMAP_MMC_WRITE(host, ARGH, cmd->arg >> 16); 393 - OMAP_MMC_WRITE(host, IE, 394 - OMAP_MMC_STAT_A_EMPTY | OMAP_MMC_STAT_A_FULL | 395 - OMAP_MMC_STAT_CMD_CRC | OMAP_MMC_STAT_CMD_TOUT | 396 - OMAP_MMC_STAT_DATA_CRC | OMAP_MMC_STAT_DATA_TOUT | 397 - OMAP_MMC_STAT_END_OF_CMD | OMAP_MMC_STAT_CARD_ERR | 398 - OMAP_MMC_STAT_END_OF_DATA); 393 + irq_mask = OMAP_MMC_STAT_A_EMPTY | OMAP_MMC_STAT_A_FULL | 394 + OMAP_MMC_STAT_CMD_CRC | OMAP_MMC_STAT_CMD_TOUT | 395 + OMAP_MMC_STAT_DATA_CRC | OMAP_MMC_STAT_DATA_TOUT | 396 + OMAP_MMC_STAT_END_OF_CMD | OMAP_MMC_STAT_CARD_ERR | 397 + OMAP_MMC_STAT_END_OF_DATA; 398 + if (cmd->opcode == MMC_ERASE) 399 + irq_mask &= ~OMAP_MMC_STAT_DATA_TOUT; 400 + OMAP_MMC_WRITE(host, IE, irq_mask); 399 401 OMAP_MMC_WRITE(host, CMD, cmdreg); 400 402 } 401 403 ··· 947 945 mmc_omap_prepare_data(struct mmc_omap_host *host, struct mmc_request *req) 948 946 { 949 947 struct mmc_data *data = req->data; 950 - int i, use_dma, block_size; 948 + int i, use_dma = 1, block_size; 951 949 unsigned sg_len; 952 950 953 951 host->data = data; ··· 972 970 sg_len = (data->blocks == 1) ? 
1 : data->sg_len; 973 971 974 972 /* Only do DMA for entire blocks */ 975 - use_dma = host->use_dma; 976 - if (use_dma) { 977 - for (i = 0; i < sg_len; i++) { 978 - if ((data->sg[i].length % block_size) != 0) { 979 - use_dma = 0; 980 - break; 981 - } 973 + for (i = 0; i < sg_len; i++) { 974 + if ((data->sg[i].length % block_size) != 0) { 975 + use_dma = 0; 976 + break; 982 977 } 983 978 } 984 979 ··· 1238 1239 1239 1240 mmc->caps = 0; 1240 1241 if (host->pdata->slots[id].wires >= 4) 1241 - mmc->caps |= MMC_CAP_4_BIT_DATA; 1242 + mmc->caps |= MMC_CAP_4_BIT_DATA | MMC_CAP_ERASE; 1242 1243 1243 1244 mmc->ops = &mmc_omap_ops; 1244 1245 mmc->f_min = 400000; ··· 1261 1262 mmc->max_req_size = mmc->max_blk_size * mmc->max_blk_count; 1262 1263 mmc->max_seg_size = mmc->max_req_size; 1263 1264 1265 + if (slot->pdata->get_cover_state != NULL) { 1266 + setup_timer(&slot->cover_timer, mmc_omap_cover_timer, 1267 + (unsigned long)slot); 1268 + tasklet_init(&slot->cover_tasklet, mmc_omap_cover_handler, 1269 + (unsigned long)slot); 1270 + } 1271 + 1264 1272 r = mmc_add_host(mmc); 1265 1273 if (r < 0) 1266 1274 goto err_remove_host; ··· 1284 1278 &dev_attr_cover_switch); 1285 1279 if (r < 0) 1286 1280 goto err_remove_slot_name; 1287 - 1288 - setup_timer(&slot->cover_timer, mmc_omap_cover_timer, 1289 - (unsigned long)slot); 1290 - tasklet_init(&slot->cover_tasklet, mmc_omap_cover_handler, 1291 - (unsigned long)slot); 1292 1281 tasklet_schedule(&slot->cover_tasklet); 1293 1282 } 1294 1283 ··· 1334 1333 return -EPROBE_DEFER; 1335 1334 } 1336 1335 1337 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1336 + host = devm_kzalloc(&pdev->dev, sizeof(struct mmc_omap_host), 1337 + GFP_KERNEL); 1338 + if (host == NULL) 1339 + return -ENOMEM; 1340 + 1338 1341 irq = platform_get_irq(pdev, 0); 1339 - if (res == NULL || irq < 0) 1342 + if (irq < 0) 1340 1343 return -ENXIO; 1341 1344 1342 - res = request_mem_region(res->start, resource_size(res), 1343 - pdev->name); 1344 - if (res == NULL) 
1345 - return -EBUSY; 1346 - 1347 - host = kzalloc(sizeof(struct mmc_omap_host), GFP_KERNEL); 1348 - if (host == NULL) { 1349 - ret = -ENOMEM; 1350 - goto err_free_mem_region; 1351 - } 1345 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1346 + host->virt_base = devm_ioremap_resource(&pdev->dev, res); 1347 + if (IS_ERR(host->virt_base)) 1348 + return PTR_ERR(host->virt_base); 1352 1349 1353 1350 INIT_WORK(&host->slot_release_work, mmc_omap_slot_release_work); 1354 1351 INIT_WORK(&host->send_stop_work, mmc_omap_send_stop_work); ··· 1368 1369 platform_set_drvdata(pdev, host); 1369 1370 1370 1371 host->id = pdev->id; 1371 - host->mem_res = res; 1372 1372 host->irq = irq; 1373 - host->use_dma = 1; 1374 - host->irq = irq; 1375 - host->phys_base = host->mem_res->start; 1376 - host->virt_base = ioremap(res->start, resource_size(res)); 1377 - if (!host->virt_base) 1378 - goto err_ioremap; 1379 - 1373 + host->phys_base = res->start; 1380 1374 host->iclk = clk_get(&pdev->dev, "ick"); 1381 - if (IS_ERR(host->iclk)) { 1382 - ret = PTR_ERR(host->iclk); 1383 - goto err_free_mmc_host; 1384 - } 1375 + if (IS_ERR(host->iclk)) 1376 + return PTR_ERR(host->iclk); 1385 1377 clk_enable(host->iclk); 1386 1378 1387 1379 host->fclk = clk_get(&pdev->dev, "fck"); ··· 1450 1460 err_free_iclk: 1451 1461 clk_disable(host->iclk); 1452 1462 clk_put(host->iclk); 1453 - err_free_mmc_host: 1454 - iounmap(host->virt_base); 1455 - err_ioremap: 1456 - kfree(host); 1457 - err_free_mem_region: 1458 - release_mem_region(res->start, resource_size(res)); 1459 1463 return ret; 1460 1464 } 1461 1465 ··· 1477 1493 if (host->dma_rx) 1478 1494 dma_release_channel(host->dma_rx); 1479 1495 1480 - iounmap(host->virt_base); 1481 - release_mem_region(pdev->resource[0].start, 1482 - pdev->resource[0].end - pdev->resource[0].start + 1); 1483 1496 destroy_workqueue(host->mmc_omap_wq); 1484 - 1485 - kfree(host); 1486 1497 1487 1498 return 0; 1488 1499 }
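The refactored loop in mmc_omap_prepare_data() above drops the host->use_dma flag and instead starts with use_dma = 1, clearing it as soon as any scatterlist segment is not a whole number of blocks; single-block requests only inspect the first segment. A standalone sketch of that decision (can_use_dma is a hypothetical name for this model):

```c
#include <assert.h>
#include <stdbool.h>

/* Mirror of the "Only do DMA for entire blocks" check: fall back to
 * PIO if any inspected segment is not block-size aligned. sg_len
 * follows the driver's (data->blocks == 1) ? 1 : data->sg_len. */
static bool can_use_dma(const unsigned int *seg_len, unsigned int nsegs,
			unsigned int block_size, unsigned int blocks)
{
	unsigned int sg_len = (blocks == 1) ? 1 : nsegs;
	unsigned int i;

	for (i = 0; i < sg_len; i++)
		if (seg_len[i] % block_size != 0)
			return false;
	return true;
}
```

The fallback matters because the controller's DMA engine transfers whole blocks; a partial-block segment would leave the FIFO and the DMA byte counts disagreeing.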
+166 -76
drivers/mmc/host/omap_hsmmc.c
··· 45 45 /* OMAP HSMMC Host Controller Registers */ 46 46 #define OMAP_HSMMC_SYSSTATUS 0x0014 47 47 #define OMAP_HSMMC_CON 0x002C 48 + #define OMAP_HSMMC_SDMASA 0x0100 48 49 #define OMAP_HSMMC_BLK 0x0104 49 50 #define OMAP_HSMMC_ARG 0x0108 50 51 #define OMAP_HSMMC_CMD 0x010C ··· 59 58 #define OMAP_HSMMC_STAT 0x0130 60 59 #define OMAP_HSMMC_IE 0x0134 61 60 #define OMAP_HSMMC_ISE 0x0138 61 + #define OMAP_HSMMC_AC12 0x013C 62 62 #define OMAP_HSMMC_CAPA 0x0140 63 63 64 64 #define VS18 (1 << 26) ··· 83 81 #define DTO_MASK 0x000F0000 84 82 #define DTO_SHIFT 16 85 83 #define INIT_STREAM (1 << 1) 84 + #define ACEN_ACMD23 (2 << 2) 86 85 #define DP_SELECT (1 << 21) 87 86 #define DDIR (1 << 4) 88 87 #define DMAE 0x1 ··· 100 97 #define SRC (1 << 25) 101 98 #define SRD (1 << 26) 102 99 #define SOFTRESET (1 << 1) 103 - #define RESETDONE (1 << 0) 104 100 105 101 /* Interrupt masks for IE and ISE register */ 106 102 #define CC_EN (1 << 0) ··· 114 112 #define DTO_EN (1 << 20) 115 113 #define DCRC_EN (1 << 21) 116 114 #define DEB_EN (1 << 22) 115 + #define ACE_EN (1 << 24) 117 116 #define CERR_EN (1 << 28) 118 117 #define BADA_EN (1 << 29) 119 118 120 - #define INT_EN_MASK (BADA_EN | CERR_EN | DEB_EN | DCRC_EN |\ 119 + #define INT_EN_MASK (BADA_EN | CERR_EN | ACE_EN | DEB_EN | DCRC_EN |\ 121 120 DTO_EN | CIE_EN | CEB_EN | CCRC_EN | CTO_EN | \ 122 121 BRR_EN | BWR_EN | TC_EN | CC_EN) 122 + 123 + #define CNI (1 << 7) 124 + #define ACIE (1 << 4) 125 + #define ACEB (1 << 3) 126 + #define ACCE (1 << 2) 127 + #define ACTO (1 << 1) 128 + #define ACNE (1 << 0) 123 129 124 130 #define MMC_AUTOSUSPEND_DELAY 100 125 131 #define MMC_TIMEOUT_MS 20 /* 20 mSec */ ··· 136 126 #define OMAP_MMC_MAX_CLOCK 52000000 137 127 #define DRIVER_NAME "omap_hsmmc" 138 128 129 + #define VDD_1V8 1800000 /* 180000 uV */ 130 + #define VDD_3V0 3000000 /* 300000 uV */ 131 + #define VDD_165_195 (ffs(MMC_VDD_165_195) - 1) 132 + 133 + #define AUTO_CMD23 (1 << 1) /* Auto CMD23 support */ 139 134 /* 140 135 * One 
controller can have multiple slots, like on some omap boards using 141 136 * omap.c controller driver. Luckily this is not currently done on any known ··· 179 164 */ 180 165 struct regulator *vcc; 181 166 struct regulator *vcc_aux; 182 - int pbias_disable; 167 + struct regulator *pbias; 168 + bool pbias_enabled; 183 169 void __iomem *base; 184 170 resource_size_t mapbase; 185 171 spinlock_t irq_lock; /* Prevent races with irq handler */ ··· 204 188 int reqs_blocked; 205 189 int use_reg; 206 190 int req_in_progress; 191 + unsigned long clk_rate; 192 + unsigned int flags; 207 193 struct omap_hsmmc_next next_data; 208 194 struct omap_mmc_platform_data *pdata; 209 195 }; 196 + 197 + struct omap_mmc_of_data { 198 + u32 reg_offset; 199 + u8 controller_flags; 200 + }; 201 + 202 + static void omap_hsmmc_start_dma_transfer(struct omap_hsmmc_host *host); 210 203 211 204 static int omap_hsmmc_card_detect(struct device *dev, int slot) 212 205 { ··· 286 261 */ 287 262 if (!host->vcc) 288 263 return 0; 289 - /* 290 - * With DT, never turn OFF the regulator for MMC1. This is because 291 - * the pbias cell programming support is still missing when 292 - * booting with Device tree 293 - */ 294 - if (host->pbias_disable && !vdd) 295 - return 0; 296 264 297 265 if (mmc_slot(host).before_set_reg) 298 266 mmc_slot(host).before_set_reg(dev, slot, power_on, vdd); 267 + 268 + if (host->pbias) { 269 + if (host->pbias_enabled == 1) { 270 + ret = regulator_disable(host->pbias); 271 + if (!ret) 272 + host->pbias_enabled = 0; 273 + } 274 + regulator_set_voltage(host->pbias, VDD_3V0, VDD_3V0); 275 + } 299 276 300 277 /* 301 278 * Assume Vcc regulator is used only to power the card ... OMAP ··· 313 286 * chips/cards need an interface voltage rail too. 
314 287 */ 315 288 if (power_on) { 316 - ret = mmc_regulator_set_ocr(host->mmc, host->vcc, vdd); 289 + if (host->vcc) 290 + ret = mmc_regulator_set_ocr(host->mmc, host->vcc, vdd); 317 291 /* Enable interface voltage rail, if needed */ 318 292 if (ret == 0 && host->vcc_aux) { 319 293 ret = regulator_enable(host->vcc_aux); 320 - if (ret < 0) 294 + if (ret < 0 && host->vcc) 321 295 ret = mmc_regulator_set_ocr(host->mmc, 322 296 host->vcc, 0); 323 297 } ··· 326 298 /* Shut down the rail */ 327 299 if (host->vcc_aux) 328 300 ret = regulator_disable(host->vcc_aux); 329 - if (!ret) { 301 + if (host->vcc) { 330 302 /* Then proceed to shut down the local regulator */ 331 303 ret = mmc_regulator_set_ocr(host->mmc, 332 304 host->vcc, 0); 333 305 } 334 306 } 335 307 308 + if (host->pbias) { 309 + if (vdd <= VDD_165_195) 310 + ret = regulator_set_voltage(host->pbias, VDD_1V8, 311 + VDD_1V8); 312 + else 313 + ret = regulator_set_voltage(host->pbias, VDD_3V0, 314 + VDD_3V0); 315 + if (ret < 0) 316 + goto error_set_power; 317 + 318 + if (host->pbias_enabled == 0) { 319 + ret = regulator_enable(host->pbias); 320 + if (!ret) 321 + host->pbias_enabled = 1; 322 + } 323 + } 324 + 336 325 if (mmc_slot(host).after_set_reg) 337 326 mmc_slot(host).after_set_reg(dev, slot, power_on, vdd); 338 327 328 + error_set_power: 339 329 return ret; 340 330 } 341 331 ··· 362 316 struct regulator *reg; 363 317 int ocr_value = 0; 364 318 365 - reg = regulator_get(host->dev, "vmmc"); 319 + reg = devm_regulator_get(host->dev, "vmmc"); 366 320 if (IS_ERR(reg)) { 367 - dev_err(host->dev, "vmmc regulator missing\n"); 321 + dev_err(host->dev, "unable to get vmmc regulator %ld\n", 322 + PTR_ERR(reg)); 368 323 return PTR_ERR(reg); 369 324 } else { 370 - mmc_slot(host).set_power = omap_hsmmc_set_power; 371 325 host->vcc = reg; 372 326 ocr_value = mmc_regulator_get_ocrmask(reg); 373 327 if (!mmc_slot(host).ocr_mask) { ··· 380 334 return -EINVAL; 381 335 } 382 336 } 337 + } 338 + mmc_slot(host).set_power = 
omap_hsmmc_set_power; 383 339 384 - /* Allow an aux regulator */ 385 - reg = regulator_get(host->dev, "vmmc_aux"); 386 - host->vcc_aux = IS_ERR(reg) ? NULL : reg; 340 + /* Allow an aux regulator */ 341 + reg = devm_regulator_get_optional(host->dev, "vmmc_aux"); 342 + host->vcc_aux = IS_ERR(reg) ? NULL : reg; 387 343 388 - /* For eMMC do not power off when not in sleep state */ 389 - if (mmc_slot(host).no_regulator_off_init) 390 - return 0; 391 - /* 392 - * UGLY HACK: workaround regulator framework bugs. 393 - * When the bootloader leaves a supply active, it's 394 - * initialized with zero usecount ... and we can't 395 - * disable it without first enabling it. Until the 396 - * framework is fixed, we need a workaround like this 397 - * (which is safe for MMC, but not in general). 398 - */ 399 - if (regulator_is_enabled(host->vcc) > 0 || 400 - (host->vcc_aux && regulator_is_enabled(host->vcc_aux))) { 401 - int vdd = ffs(mmc_slot(host).ocr_mask) - 1; 344 + reg = devm_regulator_get_optional(host->dev, "pbias"); 345 + host->pbias = IS_ERR(reg) ? NULL : reg; 402 346 403 - mmc_slot(host).set_power(host->dev, host->slot_id, 404 - 1, vdd); 405 - mmc_slot(host).set_power(host->dev, host->slot_id, 406 - 0, 0); 407 - } 347 + /* For eMMC do not power off when not in sleep state */ 348 + if (mmc_slot(host).no_regulator_off_init) 349 + return 0; 350 + /* 351 + * To disable boot_on regulator, enable regulator 352 + * to increase usecount and then disable it. 
353 + */ 354 + if ((host->vcc && regulator_is_enabled(host->vcc) > 0) || 355 + (host->vcc_aux && regulator_is_enabled(host->vcc_aux))) { 356 + int vdd = ffs(mmc_slot(host).ocr_mask) - 1; 357 + 358 + mmc_slot(host).set_power(host->dev, host->slot_id, 1, vdd); 359 + mmc_slot(host).set_power(host->dev, host->slot_id, 0, 0); 408 360 } 409 361 410 362 return 0; ··· 410 366 411 367 static void omap_hsmmc_reg_put(struct omap_hsmmc_host *host) 412 368 { 413 - regulator_put(host->vcc); 414 - regulator_put(host->vcc_aux); 415 369 mmc_slot(host).set_power = NULL; 416 370 } 417 371 ··· 647 605 u32 hctl, capa; 648 606 unsigned long timeout; 649 607 650 - if (!OMAP_HSMMC_READ(host->base, SYSSTATUS) & RESETDONE) 651 - return 1; 652 - 653 608 if (host->con == OMAP_HSMMC_READ(host->base, CON) && 654 609 host->hctl == OMAP_HSMMC_READ(host->base, HCTL) && 655 610 host->sysctl == OMAP_HSMMC_READ(host->base, SYSCTL) && ··· 826 787 827 788 cmdreg = (cmd->opcode << 24) | (resptype << 16) | (cmdtype << 22); 828 789 790 + if ((host->flags & AUTO_CMD23) && mmc_op_multi(cmd->opcode) && 791 + host->mrq->sbc) { 792 + cmdreg |= ACEN_ACMD23; 793 + OMAP_HSMMC_WRITE(host->base, SDMASA, host->mrq->sbc->arg); 794 + } 829 795 if (data) { 830 796 cmdreg |= DP_SELECT | MSBS | BCE; 831 797 if (data->flags & MMC_DATA_READ) ··· 908 864 else 909 865 data->bytes_xfered = 0; 910 866 911 - if (!data->stop) { 867 + if (data->stop && (data->error || !host->mrq->sbc)) 868 + omap_hsmmc_start_command(host, data->stop, NULL); 869 + else 912 870 omap_hsmmc_request_done(host, data->mrq); 913 - return; 914 - } 915 - omap_hsmmc_start_command(host, data->stop, NULL); 916 871 } 917 872 918 873 /* ··· 921 878 omap_hsmmc_cmd_done(struct omap_hsmmc_host *host, struct mmc_command *cmd) 922 879 { 923 880 host->cmd = NULL; 881 + 882 + if (host->mrq->sbc && (host->cmd == host->mrq->sbc) && 883 + !host->mrq->sbc->error && !(host->flags & AUTO_CMD23)) { 884 + omap_hsmmc_start_dma_transfer(host); 885 + 
omap_hsmmc_start_command(host, host->mrq->cmd, 886 + host->mrq->data); 887 + return; 888 + } 924 889 925 890 if (cmd->flags & MMC_RSP_PRESENT) { 926 891 if (cmd->flags & MMC_RSP_136) { ··· 943 892 } 944 893 } 945 894 if ((host->data == NULL && !host->response_busy) || cmd->error) 946 - omap_hsmmc_request_done(host, cmd->mrq); 895 + omap_hsmmc_request_done(host, host->mrq); 947 896 } 948 897 949 898 /* ··· 1066 1015 { 1067 1016 struct mmc_data *data; 1068 1017 int end_cmd = 0, end_trans = 0; 1018 + int error = 0; 1069 1019 1070 1020 data = host->data; 1071 1021 dev_vdbg(mmc_dev(host->mmc), "IRQ Status is %x\n", status); ··· 1081 1029 else if (status & (CCRC_EN | DCRC_EN)) 1082 1030 hsmmc_command_incomplete(host, -EILSEQ, end_cmd); 1083 1031 1032 + if (status & ACE_EN) { 1033 + u32 ac12; 1034 + ac12 = OMAP_HSMMC_READ(host->base, AC12); 1035 + if (!(ac12 & ACNE) && host->mrq->sbc) { 1036 + end_cmd = 1; 1037 + if (ac12 & ACTO) 1038 + error = -ETIMEDOUT; 1039 + else if (ac12 & (ACCE | ACEB | ACIE)) 1040 + error = -EILSEQ; 1041 + host->mrq->sbc->error = error; 1042 + hsmmc_command_incomplete(host, error, end_cmd); 1043 + } 1044 + dev_dbg(mmc_dev(host->mmc), "AC12 err: 0x%x\n", ac12); 1045 + } 1084 1046 if (host->data || host->response_busy) { 1085 1047 end_trans = !end_cmd; 1086 1048 host->response_busy = 0; ··· 1302 1236 } 1303 1237 1304 1238 /* Check if next job is already prepared */ 1305 - if (next || 1306 - (!next && data->host_cookie != host->next_data.cookie)) { 1239 + if (next || data->host_cookie != host->next_data.cookie) { 1307 1240 dma_len = dma_map_sg(chan->device->dev, data->sg, data->sg_len, 1308 1241 omap_hsmmc_get_dma_dir(host, data)); 1309 1242 ··· 1327 1262 /* 1328 1263 * Routine to configure and start DMA for the MMC card 1329 1264 */ 1330 - static int omap_hsmmc_start_dma_transfer(struct omap_hsmmc_host *host, 1265 + static int omap_hsmmc_setup_dma_transfer(struct omap_hsmmc_host *host, 1331 1266 struct mmc_request *req) 1332 1267 { 1333 1268 struct 
dma_slave_config cfg; ··· 1386 1321 1387 1322 host->dma_ch = 1; 1388 1323 1389 - dma_async_issue_pending(chan); 1390 - 1391 1324 return 0; 1392 1325 } 1393 1326 ··· 1401 1338 if (clkd == 0) 1402 1339 clkd = 1; 1403 1340 1404 - cycle_ns = 1000000000 / (clk_get_rate(host->fclk) / clkd); 1341 + cycle_ns = 1000000000 / (host->clk_rate / clkd); 1405 1342 timeout = timeout_ns / cycle_ns; 1406 1343 timeout += timeout_clks; 1407 1344 if (timeout) { ··· 1426 1363 OMAP_HSMMC_WRITE(host->base, SYSCTL, reg); 1427 1364 } 1428 1365 1366 + static void omap_hsmmc_start_dma_transfer(struct omap_hsmmc_host *host) 1367 + { 1368 + struct mmc_request *req = host->mrq; 1369 + struct dma_chan *chan; 1370 + 1371 + if (!req->data) 1372 + return; 1373 + OMAP_HSMMC_WRITE(host->base, BLK, (req->data->blksz) 1374 + | (req->data->blocks << 16)); 1375 + set_data_timeout(host, req->data->timeout_ns, 1376 + req->data->timeout_clks); 1377 + chan = omap_hsmmc_get_dma_chan(host, req->data); 1378 + dma_async_issue_pending(chan); 1379 + } 1380 + 1429 1381 /* 1430 1382 * Configure block length for MMC/SD cards and initiate the transfer. 
1431 1383 */ ··· 1461 1383 return 0; 1462 1384 } 1463 1385 1464 - OMAP_HSMMC_WRITE(host->base, BLK, (req->data->blksz) 1465 - | (req->data->blocks << 16)); 1466 - set_data_timeout(host, req->data->timeout_ns, req->data->timeout_clks); 1467 - 1468 1386 if (host->use_dma) { 1469 - ret = omap_hsmmc_start_dma_transfer(host, req); 1387 + ret = omap_hsmmc_setup_dma_transfer(host, req); 1470 1388 if (ret != 0) { 1471 1389 dev_err(mmc_dev(host->mmc), "MMC start dma failure\n"); 1472 1390 return ret; ··· 1536 1462 host->reqs_blocked = 0; 1537 1463 WARN_ON(host->mrq != NULL); 1538 1464 host->mrq = req; 1465 + host->clk_rate = clk_get_rate(host->fclk); 1539 1466 err = omap_hsmmc_prepare_data(host, req); 1540 1467 if (err) { 1541 1468 req->cmd->error = err; ··· 1546 1471 mmc_request_done(mmc, req); 1547 1472 return; 1548 1473 } 1474 + if (req->sbc && !(host->flags & AUTO_CMD23)) { 1475 + omap_hsmmc_start_command(host, req->sbc, NULL); 1476 + return; 1477 + } 1549 1478 1479 + omap_hsmmc_start_dma_transfer(host); 1550 1480 omap_hsmmc_start_command(host, req->cmd, req->data); 1551 1481 } 1552 1482 ··· 1589 1509 * of external transceiver; but they all handle 1.8V. 1590 1510 */ 1591 1511 if ((OMAP_HSMMC_READ(host->base, HCTL) & SDVSDET) && 1592 - (ios->vdd == DUAL_VOLT_OCR_BIT) && 1593 - /* 1594 - * With pbias cell programming missing, this 1595 - * can't be allowed on MMC1 when booting with device 1596 - * tree. 
1597 - */ 1598 - !host->pbias_disable) { 1512 + (ios->vdd == DUAL_VOLT_OCR_BIT)) { 1599 1513 /* 1600 1514 * The mmc_select_voltage fn of the core does 1601 1515 * not seem to set the power_mode to ··· 1752 1678 #endif 1753 1679 1754 1680 #ifdef CONFIG_OF 1755 - static u16 omap4_reg_offset = 0x100; 1681 + static const struct omap_mmc_of_data omap3_pre_es3_mmc_of_data = { 1682 + /* See 35xx errata 2.1.1.128 in SPRZ278F */ 1683 + .controller_flags = OMAP_HSMMC_BROKEN_MULTIBLOCK_READ, 1684 + }; 1685 + 1686 + static const struct omap_mmc_of_data omap4_mmc_of_data = { 1687 + .reg_offset = 0x100, 1688 + }; 1756 1689 1757 1690 static const struct of_device_id omap_mmc_of_match[] = { 1758 1691 { 1759 1692 .compatible = "ti,omap2-hsmmc", 1760 1693 }, 1761 1694 { 1695 + .compatible = "ti,omap3-pre-es3-hsmmc", 1696 + .data = &omap3_pre_es3_mmc_of_data, 1697 + }, 1698 + { 1762 1699 .compatible = "ti,omap3-hsmmc", 1763 1700 }, 1764 1701 { 1765 1702 .compatible = "ti,omap4-hsmmc", 1766 - .data = &omap4_reg_offset, 1703 + .data = &omap4_mmc_of_data, 1767 1704 }, 1768 1705 {}, 1769 1706 }; ··· 1794 1709 1795 1710 pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL); 1796 1711 if (!pdata) 1797 - return NULL; /* out of memory */ 1712 + return ERR_PTR(-ENOMEM); /* out of memory */ 1798 1713 1799 1714 if (of_find_property(np, "ti,dual-volt", NULL)) 1800 1715 pdata->controller_flags |= OMAP_HSMMC_SUPPORTS_DUAL_VOLT; ··· 1823 1738 if (of_find_property(np, "ti,needs-special-hs-handling", NULL)) 1824 1739 pdata->slots[0].features |= HSMMC_HAS_HSPE_SUPPORT; 1825 1740 1741 + if (of_find_property(np, "keep-power-in-suspend", NULL)) 1742 + pdata->slots[0].pm_caps |= MMC_PM_KEEP_POWER; 1743 + 1744 + if (of_find_property(np, "enable-sdio-wakeup", NULL)) 1745 + pdata->slots[0].pm_caps |= MMC_PM_WAKE_SDIO_IRQ; 1746 + 1826 1747 return pdata; 1827 1748 } 1828 1749 #else 1829 1750 static inline struct omap_mmc_platform_data 1830 1751 *of_get_hsmmc_pdata(struct device *dev) 1831 1752 { 1832 - return 
NULL; 1753 + return ERR_PTR(-EINVAL); 1833 1754 } 1834 1755 #endif 1835 1756 ··· 1850 1759 dma_cap_mask_t mask; 1851 1760 unsigned tx_req, rx_req; 1852 1761 struct pinctrl *pinctrl; 1762 + const struct omap_mmc_of_data *data; 1853 1763 1854 1764 match = of_match_device(of_match_ptr(omap_mmc_of_match), &pdev->dev); 1855 1765 if (match) { ··· 1860 1768 return PTR_ERR(pdata); 1861 1769 1862 1770 if (match->data) { 1863 - const u16 *offsetp = match->data; 1864 - pdata->reg_offset = *offsetp; 1771 + data = match->data; 1772 + pdata->reg_offset = data->reg_offset; 1773 + pdata->controller_flags |= data->controller_flags; 1865 1774 } 1866 1775 } 1867 1776 ··· 1907 1814 host->base = ioremap(host->mapbase, SZ_4K); 1908 1815 host->power_mode = MMC_POWER_OFF; 1909 1816 host->next_data.cookie = 1; 1817 + host->pbias_enabled = 0; 1910 1818 1911 1819 platform_set_drvdata(pdev, host); 1912 1820 ··· 1940 1846 pm_runtime_use_autosuspend(host->dev); 1941 1847 1942 1848 omap_hsmmc_context_save(host); 1943 - 1944 - /* This can be removed once we support PBIAS with DT */ 1945 - if (host->dev->of_node && res->start == 0x4809c000) 1946 - host->pbias_disable = 1; 1947 1849 1948 1850 host->dbclk = clk_get(&pdev->dev, "mmchsdb_fck"); 1949 1851 /*
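set_data_timeout() above now divides the host->clk_rate value cached at request start instead of calling clk_get_rate() on every request. The cycle arithmetic, as a standalone sketch (function name hypothetical; the driver's final clamping of the result into the DTO field of SYSCTL is omitted):

```c
#include <assert.h>

/* Convert the card's nanosecond timeout into functional-clock cycles.
 * clk_rate stands in for the cached host->clk_rate; clkd is the clock
 * divider read from SYSCTL. Integer division matches the driver. */
static unsigned int data_timeout_cycles(unsigned int timeout_ns,
					unsigned int timeout_clks,
					unsigned long clk_rate,
					unsigned int clkd)
{
	unsigned int cycle_ns;

	if (clkd == 0)
		clkd = 1;
	cycle_ns = 1000000000 / (clk_rate / clkd);	/* ns per card clock */
	return timeout_ns / cycle_ns + timeout_clks;
}
```

Caching the rate keeps the hot request path from taking the clock framework's locks once the AUTO_CMD23/ADMA request pipeline is in play.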
+383 -148
drivers/mmc/host/rtsx_pci_sdmmc.c
··· 31 31 #include <linux/mfd/rtsx_pci.h> 32 32 #include <asm/unaligned.h> 33 33 34 - /* SD Tuning Data Structure 35 - * Record continuous timing phase path 36 - */ 37 - struct timing_phase_path { 38 - int start; 39 - int end; 40 - int mid; 41 - int len; 34 + struct realtek_next { 35 + unsigned int sg_count; 36 + s32 cookie; 42 37 }; 43 38 44 39 struct realtek_pci_sdmmc { ··· 41 46 struct rtsx_pcr *pcr; 42 47 struct mmc_host *mmc; 43 48 struct mmc_request *mrq; 49 + struct mmc_command *cmd; 50 + struct mmc_data *data; 44 51 45 - struct mutex host_mutex; 52 + spinlock_t lock; 53 + struct timer_list timer; 54 + struct tasklet_struct cmd_tasklet; 55 + struct tasklet_struct data_tasklet; 56 + struct tasklet_struct finish_tasklet; 46 57 58 + u8 rsp_type; 59 + u8 rsp_len; 60 + int sg_count; 47 61 u8 ssc_depth; 48 62 unsigned int clock; 49 63 bool vpclk; ··· 62 58 int power_state; 63 59 #define SDMMC_POWER_ON 1 64 60 #define SDMMC_POWER_OFF 0 61 + 62 + struct realtek_next next_data; 65 63 }; 64 + 65 + static int sd_start_multi_rw(struct realtek_pci_sdmmc *host, 66 + struct mmc_request *mrq); 66 67 67 68 static inline struct device *sdmmc_dev(struct realtek_pci_sdmmc *host) 68 69 { ··· 104 95 #else 105 96 #define sd_print_debug_regs(host) 106 97 #endif /* DEBUG */ 98 + 99 + static void sd_isr_done_transfer(struct platform_device *pdev) 100 + { 101 + struct realtek_pci_sdmmc *host = platform_get_drvdata(pdev); 102 + 103 + spin_lock(&host->lock); 104 + if (host->cmd) 105 + tasklet_schedule(&host->cmd_tasklet); 106 + if (host->data) 107 + tasklet_schedule(&host->data_tasklet); 108 + spin_unlock(&host->lock); 109 + } 110 + 111 + static void sd_request_timeout(unsigned long host_addr) 112 + { 113 + struct realtek_pci_sdmmc *host = (struct realtek_pci_sdmmc *)host_addr; 114 + unsigned long flags; 115 + 116 + spin_lock_irqsave(&host->lock, flags); 117 + 118 + if (!host->mrq) { 119 + dev_err(sdmmc_dev(host), "error: no request exist\n"); 120 + goto out; 121 + } 122 + 123 + if 
(host->cmd) 124 + host->cmd->error = -ETIMEDOUT; 125 + if (host->data) 126 + host->data->error = -ETIMEDOUT; 127 + 128 + dev_dbg(sdmmc_dev(host), "timeout for request\n"); 129 + 130 + out: 131 + tasklet_schedule(&host->finish_tasklet); 132 + spin_unlock_irqrestore(&host->lock, flags); 133 + } 134 + 135 + static void sd_finish_request(unsigned long host_addr) 136 + { 137 + struct realtek_pci_sdmmc *host = (struct realtek_pci_sdmmc *)host_addr; 138 + struct rtsx_pcr *pcr = host->pcr; 139 + struct mmc_request *mrq; 140 + struct mmc_command *cmd; 141 + struct mmc_data *data; 142 + unsigned long flags; 143 + bool any_error; 144 + 145 + spin_lock_irqsave(&host->lock, flags); 146 + 147 + del_timer(&host->timer); 148 + mrq = host->mrq; 149 + if (!mrq) { 150 + dev_err(sdmmc_dev(host), "error: no request need finish\n"); 151 + goto out; 152 + } 153 + 154 + cmd = mrq->cmd; 155 + data = mrq->data; 156 + 157 + any_error = (mrq->sbc && mrq->sbc->error) || 158 + (mrq->stop && mrq->stop->error) || 159 + (cmd && cmd->error) || (data && data->error); 160 + 161 + if (any_error) { 162 + rtsx_pci_stop_cmd(pcr); 163 + sd_clear_error(host); 164 + } 165 + 166 + if (data) { 167 + if (any_error) 168 + data->bytes_xfered = 0; 169 + else 170 + data->bytes_xfered = data->blocks * data->blksz; 171 + 172 + if (!data->host_cookie) 173 + rtsx_pci_dma_unmap_sg(pcr, data->sg, data->sg_len, 174 + data->flags & MMC_DATA_READ); 175 + 176 + } 177 + 178 + host->mrq = NULL; 179 + host->cmd = NULL; 180 + host->data = NULL; 181 + 182 + out: 183 + spin_unlock_irqrestore(&host->lock, flags); 184 + mutex_unlock(&pcr->pcr_mutex); 185 + mmc_request_done(host->mmc, mrq); 186 + } 107 187 108 188 static int sd_read_data(struct realtek_pci_sdmmc *host, u8 *cmd, u16 byte_cnt, 109 189 u8 *buf, int buf_len, int timeout) ··· 311 213 return 0; 312 214 } 313 215 314 - static void sd_send_cmd_get_rsp(struct realtek_pci_sdmmc *host, 315 - struct mmc_command *cmd) 216 + static void sd_send_cmd(struct realtek_pci_sdmmc *host, 
struct mmc_command *cmd) 316 217 { 317 218 struct rtsx_pcr *pcr = host->pcr; 318 219 u8 cmd_idx = (u8)cmd->opcode; ··· 319 222 int err = 0; 320 223 int timeout = 100; 321 224 int i; 322 - u8 *ptr; 323 - int stat_idx = 0; 324 225 u8 rsp_type; 325 226 int rsp_len = 5; 326 - bool clock_toggled = false; 227 + unsigned long flags; 228 + 229 + if (host->cmd) 230 + dev_err(sdmmc_dev(host), "error: cmd already exist\n"); 231 + 232 + host->cmd = cmd; 327 233 328 234 dev_dbg(sdmmc_dev(host), "%s: SD/MMC CMD %d, arg = 0x%08x\n", 329 235 __func__, cmd_idx, arg); ··· 361 261 err = -EINVAL; 362 262 goto out; 363 263 } 264 + host->rsp_type = rsp_type; 265 + host->rsp_len = rsp_len; 364 266 365 267 if (rsp_type == SD_RSP_TYPE_R1b) 366 268 timeout = 3000; ··· 372 270 0xFF, SD_CLK_TOGGLE_EN); 373 271 if (err < 0) 374 272 goto out; 375 - 376 - clock_toggled = true; 377 273 } 378 274 379 275 rtsx_pci_init_cmd(pcr); ··· 395 295 /* Read data from ping-pong buffer */ 396 296 for (i = PPBUF_BASE2; i < PPBUF_BASE2 + 16; i++) 397 297 rtsx_pci_add_cmd(pcr, READ_REG_CMD, (u16)i, 0, 0); 398 - stat_idx = 16; 399 298 } else if (rsp_type != SD_RSP_TYPE_R0) { 400 299 /* Read data from SD_CMDx registers */ 401 300 for (i = SD_CMD0; i <= SD_CMD4; i++) 402 301 rtsx_pci_add_cmd(pcr, READ_REG_CMD, (u16)i, 0, 0); 403 - stat_idx = 5; 404 302 } 405 303 406 304 rtsx_pci_add_cmd(pcr, READ_REG_CMD, SD_STAT1, 0, 0); 407 305 408 - err = rtsx_pci_send_cmd(pcr, timeout); 409 - if (err < 0) { 410 - sd_print_debug_regs(host); 411 - sd_clear_error(host); 412 - dev_dbg(sdmmc_dev(host), 413 - "rtsx_pci_send_cmd error (err = %d)\n", err); 306 + mod_timer(&host->timer, jiffies + msecs_to_jiffies(timeout)); 307 + 308 + spin_lock_irqsave(&pcr->lock, flags); 309 + pcr->trans_result = TRANS_NOT_READY; 310 + rtsx_pci_send_cmd_no_wait(pcr); 311 + spin_unlock_irqrestore(&pcr->lock, flags); 312 + 313 + return; 314 + 315 + out: 316 + cmd->error = err; 317 + tasklet_schedule(&host->finish_tasklet); 318 + } 319 + 320 + static 
void sd_get_rsp(unsigned long host_addr) 321 + { 322 + struct realtek_pci_sdmmc *host = (struct realtek_pci_sdmmc *)host_addr; 323 + struct rtsx_pcr *pcr = host->pcr; 324 + struct mmc_command *cmd; 325 + int i, err = 0, stat_idx; 326 + u8 *ptr, rsp_type; 327 + unsigned long flags; 328 + 329 + spin_lock_irqsave(&host->lock, flags); 330 + 331 + cmd = host->cmd; 332 + host->cmd = NULL; 333 + 334 + if (!cmd) { 335 + dev_err(sdmmc_dev(host), "error: cmd not exist\n"); 414 336 goto out; 415 337 } 338 + 339 + spin_lock(&pcr->lock); 340 + if (pcr->trans_result == TRANS_NO_DEVICE) 341 + err = -ENODEV; 342 + else if (pcr->trans_result != TRANS_RESULT_OK) 343 + err = -EINVAL; 344 + spin_unlock(&pcr->lock); 345 + 346 + if (err < 0) 347 + goto out; 348 + 349 + rsp_type = host->rsp_type; 350 + stat_idx = host->rsp_len; 416 351 417 352 if (rsp_type == SD_RSP_TYPE_R0) { 418 353 err = 0; ··· 485 350 cmd->resp[0]); 486 351 } 487 352 353 + if (cmd == host->mrq->sbc) { 354 + sd_send_cmd(host, host->mrq->cmd); 355 + spin_unlock_irqrestore(&host->lock, flags); 356 + return; 357 + } 358 + 359 + if (cmd == host->mrq->stop) 360 + goto out; 361 + 362 + if (cmd->data) { 363 + sd_start_multi_rw(host, host->mrq); 364 + spin_unlock_irqrestore(&host->lock, flags); 365 + return; 366 + } 367 + 488 368 out: 489 369 cmd->error = err; 490 370 491 - if (err && clock_toggled) 492 - rtsx_pci_write_register(pcr, SD_BUS_STAT, 493 - SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 0); 371 + tasklet_schedule(&host->finish_tasklet); 372 + spin_unlock_irqrestore(&host->lock, flags); 494 373 } 495 374 496 - static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq) 375 + static int sd_pre_dma_transfer(struct realtek_pci_sdmmc *host, 376 + struct mmc_data *data, struct realtek_next *next) 377 + { 378 + struct rtsx_pcr *pcr = host->pcr; 379 + int read = data->flags & MMC_DATA_READ; 380 + int sg_count = 0; 381 + 382 + if (!next && data->host_cookie && 383 + data->host_cookie != host->next_data.cookie) { 
384 + dev_err(sdmmc_dev(host), 385 + "error: invalid cookie data[%d] host[%d]\n", 386 + data->host_cookie, host->next_data.cookie); 387 + data->host_cookie = 0; 388 + } 389 + 390 + if (next || (!next && data->host_cookie != host->next_data.cookie)) 391 + sg_count = rtsx_pci_dma_map_sg(pcr, 392 + data->sg, data->sg_len, read); 393 + else 394 + sg_count = host->next_data.sg_count; 395 + 396 + if (next) { 397 + next->sg_count = sg_count; 398 + if (++next->cookie < 0) 399 + next->cookie = 1; 400 + data->host_cookie = next->cookie; 401 + } 402 + 403 + return sg_count; 404 + } 405 + 406 + static void sdmmc_pre_req(struct mmc_host *mmc, struct mmc_request *mrq, 407 + bool is_first_req) 408 + { 409 + struct realtek_pci_sdmmc *host = mmc_priv(mmc); 410 + struct mmc_data *data = mrq->data; 411 + 412 + if (data->host_cookie) { 413 + dev_err(sdmmc_dev(host), 414 + "error: descard already cookie data[%d]\n", 415 + data->host_cookie); 416 + data->host_cookie = 0; 417 + } 418 + 419 + dev_dbg(sdmmc_dev(host), "dma sg prepared: %d\n", 420 + sd_pre_dma_transfer(host, data, &host->next_data)); 421 + } 422 + 423 + static void sdmmc_post_req(struct mmc_host *mmc, struct mmc_request *mrq, 424 + int err) 425 + { 426 + struct realtek_pci_sdmmc *host = mmc_priv(mmc); 427 + struct rtsx_pcr *pcr = host->pcr; 428 + struct mmc_data *data = mrq->data; 429 + int read = data->flags & MMC_DATA_READ; 430 + 431 + rtsx_pci_dma_unmap_sg(pcr, data->sg, data->sg_len, read); 432 + data->host_cookie = 0; 433 + } 434 + 435 + static int sd_start_multi_rw(struct realtek_pci_sdmmc *host, 436 + struct mmc_request *mrq) 497 437 { 498 438 struct rtsx_pcr *pcr = host->pcr; 499 439 struct mmc_host *mmc = host->mmc; 500 440 struct mmc_card *card = mmc->card; 501 441 struct mmc_data *data = mrq->data; 502 442 int uhs = mmc_card_uhs(card); 503 - int read = (data->flags & MMC_DATA_READ) ? 
1 : 0; 443 + int read = data->flags & MMC_DATA_READ; 504 444 u8 cfg2, trans_mode; 505 445 int err; 506 446 size_t data_len = data->blksz * data->blocks; 447 + 448 + if (host->data) 449 + dev_err(sdmmc_dev(host), "error: data already exist\n"); 450 + 451 + host->data = data; 507 452 508 453 if (read) { 509 454 cfg2 = SD_CALCULATE_CRC7 | SD_CHECK_CRC16 | ··· 635 420 rtsx_pci_add_cmd(pcr, CHECK_REG_CMD, SD_TRANSFER, 636 421 SD_TRANSFER_END, SD_TRANSFER_END); 637 422 423 + mod_timer(&host->timer, jiffies + 10 * HZ); 638 424 rtsx_pci_send_cmd_no_wait(pcr); 639 425 640 - err = rtsx_pci_transfer_data(pcr, data->sg, data->sg_len, read, 10000); 426 + err = rtsx_pci_dma_transfer(pcr, data->sg, host->sg_count, read); 641 427 if (err < 0) { 642 - sd_clear_error(host); 643 - return err; 428 + data->error = err; 429 + tasklet_schedule(&host->finish_tasklet); 430 + } 431 + return 0; 432 + } 433 + 434 + static void sd_finish_multi_rw(unsigned long host_addr) 435 + { 436 + struct realtek_pci_sdmmc *host = (struct realtek_pci_sdmmc *)host_addr; 437 + struct rtsx_pcr *pcr = host->pcr; 438 + struct mmc_data *data; 439 + int err = 0; 440 + unsigned long flags; 441 + 442 + spin_lock_irqsave(&host->lock, flags); 443 + 444 + if (!host->data) { 445 + dev_err(sdmmc_dev(host), "error: no data exist\n"); 446 + goto out; 644 447 } 645 448 646 - return 0; 449 + data = host->data; 450 + host->data = NULL; 451 + 452 + if (pcr->trans_result == TRANS_NO_DEVICE) 453 + err = -ENODEV; 454 + else if (pcr->trans_result != TRANS_RESULT_OK) 455 + err = -EINVAL; 456 + 457 + if (err < 0) { 458 + data->error = err; 459 + goto out; 460 + } 461 + 462 + if (!host->mrq->sbc && data->stop) { 463 + sd_send_cmd(host, data->stop); 464 + spin_unlock_irqrestore(&host->lock, flags); 465 + return; 466 + } 467 + 468 + out: 469 + tasklet_schedule(&host->finish_tasklet); 470 + spin_unlock_irqrestore(&host->lock, flags); 647 471 } 648 472 649 473 static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host) ··· 
765 511 return 0; 766 512 } 767 513 514 + static inline u32 test_phase_bit(u32 phase_map, unsigned int bit) 515 + { 516 + bit %= RTSX_PHASE_MAX; 517 + return phase_map & (1 << bit); 518 + } 519 + 520 + static int sd_get_phase_len(u32 phase_map, unsigned int start_bit) 521 + { 522 + int i; 523 + 524 + for (i = 0; i < RTSX_PHASE_MAX; i++) { 525 + if (test_phase_bit(phase_map, start_bit + i) == 0) 526 + return i; 527 + } 528 + return RTSX_PHASE_MAX; 529 + } 530 + 768 531 static u8 sd_search_final_phase(struct realtek_pci_sdmmc *host, u32 phase_map) 769 532 { 770 - struct timing_phase_path path[MAX_PHASE + 1]; 771 - int i, j, cont_path_cnt; 772 - int new_block, max_len, final_path_idx; 533 + int start = 0, len = 0; 534 + int start_final = 0, len_final = 0; 773 535 u8 final_phase = 0xFF; 774 536 775 - /* Parse phase_map, take it as a bit-ring */ 776 - cont_path_cnt = 0; 777 - new_block = 1; 778 - j = 0; 779 - for (i = 0; i < MAX_PHASE + 1; i++) { 780 - if (phase_map & (1 << i)) { 781 - if (new_block) { 782 - new_block = 0; 783 - j = cont_path_cnt++; 784 - path[j].start = i; 785 - path[j].end = i; 786 - } else { 787 - path[j].end = i; 788 - } 789 - } else { 790 - new_block = 1; 791 - if (cont_path_cnt) { 792 - /* Calculate path length and middle point */ 793 - int idx = cont_path_cnt - 1; 794 - path[idx].len = 795 - path[idx].end - path[idx].start + 1; 796 - path[idx].mid = 797 - path[idx].start + path[idx].len / 2; 798 - } 537 + if (phase_map == 0) { 538 + dev_err(sdmmc_dev(host), "phase error: [map:%x]\n", phase_map); 539 + return final_phase; 540 + } 541 + 542 + while (start < RTSX_PHASE_MAX) { 543 + len = sd_get_phase_len(phase_map, start); 544 + if (len_final < len) { 545 + start_final = start; 546 + len_final = len; 799 547 } 548 + start += len ? 
len : 1; 800 549 } 801 550 802 - if (cont_path_cnt == 0) { 803 - dev_dbg(sdmmc_dev(host), "No continuous phase path\n"); 804 - goto finish; 805 - } else { 806 - /* Calculate last continuous path length and middle point */ 807 - int idx = cont_path_cnt - 1; 808 - path[idx].len = path[idx].end - path[idx].start + 1; 809 - path[idx].mid = path[idx].start + path[idx].len / 2; 810 - } 551 + final_phase = (start_final + len_final / 2) % RTSX_PHASE_MAX; 552 + dev_dbg(sdmmc_dev(host), "phase: [map:%x] [maxlen:%d] [final:%d]\n", 553 + phase_map, len_final, final_phase); 811 554 812 - /* Connect the first and last continuous paths if they are adjacent */ 813 - if (!path[0].start && (path[cont_path_cnt - 1].end == MAX_PHASE)) { 814 - /* Using negative index */ 815 - path[0].start = path[cont_path_cnt - 1].start - MAX_PHASE - 1; 816 - path[0].len += path[cont_path_cnt - 1].len; 817 - path[0].mid = path[0].start + path[0].len / 2; 818 - /* Convert negative middle point index to positive one */ 819 - if (path[0].mid < 0) 820 - path[0].mid += MAX_PHASE + 1; 821 - cont_path_cnt--; 822 - } 823 - 824 - /* Choose the longest continuous phase path */ 825 - max_len = 0; 826 - final_phase = 0; 827 - final_path_idx = 0; 828 - for (i = 0; i < cont_path_cnt; i++) { 829 - if (path[i].len > max_len) { 830 - max_len = path[i].len; 831 - final_phase = (u8)path[i].mid; 832 - final_path_idx = i; 833 - } 834 - 835 - dev_dbg(sdmmc_dev(host), "path[%d].start = %d\n", 836 - i, path[i].start); 837 - dev_dbg(sdmmc_dev(host), "path[%d].end = %d\n", 838 - i, path[i].end); 839 - dev_dbg(sdmmc_dev(host), "path[%d].len = %d\n", 840 - i, path[i].len); 841 - dev_dbg(sdmmc_dev(host), "path[%d].mid = %d\n", 842 - i, path[i].mid); 843 - } 844 - 845 - finish: 846 - dev_dbg(sdmmc_dev(host), "Final chosen phase: %d\n", final_phase); 847 555 return final_phase; 848 556 } 849 557 ··· 851 635 int err, i; 852 636 u32 raw_phase_map = 0; 853 637 854 - for (i = MAX_PHASE; i >= 0; i--) { 638 + for (i = 0; i < 
RTSX_PHASE_MAX; i++) { 855 639 err = sd_tuning_rx_cmd(host, opcode, (u8)i); 856 640 if (err == 0) 857 641 raw_phase_map |= 1 << i; ··· 901 685 return 0; 902 686 } 903 687 688 + static inline bool sd_use_muti_rw(struct mmc_command *cmd) 689 + { 690 + return mmc_op_multi(cmd->opcode) || 691 + (cmd->opcode == MMC_READ_SINGLE_BLOCK) || 692 + (cmd->opcode == MMC_WRITE_BLOCK); 693 + } 694 + 904 695 static void sdmmc_request(struct mmc_host *mmc, struct mmc_request *mrq) 905 696 { 906 697 struct realtek_pci_sdmmc *host = mmc_priv(mmc); ··· 916 693 struct mmc_data *data = mrq->data; 917 694 unsigned int data_size = 0; 918 695 int err; 696 + unsigned long flags; 697 + 698 + mutex_lock(&pcr->pcr_mutex); 699 + spin_lock_irqsave(&host->lock, flags); 700 + 701 + if (host->mrq) 702 + dev_err(sdmmc_dev(host), "error: request already exist\n"); 703 + host->mrq = mrq; 919 704 920 705 if (host->eject) { 921 706 cmd->error = -ENOMEDIUM; ··· 936 705 goto finish; 937 706 } 938 707 939 - mutex_lock(&pcr->pcr_mutex); 940 - 941 708 rtsx_pci_start_run(pcr); 942 709 943 710 rtsx_pci_switch_clock(pcr, host->clock, host->ssc_depth, ··· 944 715 rtsx_pci_write_register(pcr, CARD_SHARE_MODE, 945 716 CARD_SHARE_MASK, CARD_SHARE_48_SD); 946 717 947 - mutex_lock(&host->host_mutex); 948 - host->mrq = mrq; 949 - mutex_unlock(&host->host_mutex); 950 - 951 718 if (mrq->data) 952 719 data_size = data->blocks * data->blksz; 953 720 954 - if (!data_size || mmc_op_multi(cmd->opcode) || 955 - (cmd->opcode == MMC_READ_SINGLE_BLOCK) || 956 - (cmd->opcode == MMC_WRITE_BLOCK)) { 957 - sd_send_cmd_get_rsp(host, cmd); 721 + if (sd_use_muti_rw(cmd)) 722 + host->sg_count = sd_pre_dma_transfer(host, data, NULL); 958 723 959 - if (!cmd->error && data_size) { 960 - sd_rw_multi(host, mrq); 961 - 962 - if (mmc_op_multi(cmd->opcode) && mrq->stop) 963 - sd_send_cmd_get_rsp(host, mrq->stop); 964 - } 965 - } else { 966 - sd_normal_rw(host, mrq); 967 - } 968 - 969 - if (mrq->data) { 970 - if (cmd->error || data->error) 971 - 
data->bytes_xfered = 0; 724 + if (!data_size || sd_use_muti_rw(cmd)) { 725 + if (mrq->sbc) 726 + sd_send_cmd(host, mrq->sbc); 972 727 else 973 - data->bytes_xfered = data->blocks * data->blksz; 728 + sd_send_cmd(host, cmd); 729 + spin_unlock_irqrestore(&host->lock, flags); 730 + } else { 731 + spin_unlock_irqrestore(&host->lock, flags); 732 + sd_normal_rw(host, mrq); 733 + tasklet_schedule(&host->finish_tasklet); 974 734 } 975 - 976 - mutex_unlock(&pcr->pcr_mutex); 735 + return; 977 736 978 737 finish: 979 - if (cmd->error) 980 - dev_dbg(sdmmc_dev(host), "cmd->error = %d\n", cmd->error); 981 - 982 - mutex_lock(&host->host_mutex); 983 - host->mrq = NULL; 984 - mutex_unlock(&host->host_mutex); 985 - 986 - mmc_request_done(mmc, mrq); 738 + tasklet_schedule(&host->finish_tasklet); 739 + spin_unlock_irqrestore(&host->lock, flags); 987 740 } 988 741 989 742 static int sd_set_bus_width(struct realtek_pci_sdmmc *host, ··· 1400 1189 } 1401 1190 1402 1191 static const struct mmc_host_ops realtek_pci_sdmmc_ops = { 1192 + .pre_req = sdmmc_pre_req, 1193 + .post_req = sdmmc_post_req, 1403 1194 .request = sdmmc_request, 1404 1195 .set_ios = sdmmc_set_ios, 1405 1196 .get_ro = sdmmc_get_ro, ··· 1465 1252 struct realtek_pci_sdmmc *host; 1466 1253 struct rtsx_pcr *pcr; 1467 1254 struct pcr_handle *handle = pdev->dev.platform_data; 1255 + unsigned long host_addr; 1468 1256 1469 1257 if (!handle) 1470 1258 return -ENXIO; ··· 1489 1275 pcr->slots[RTSX_SD_CARD].p_dev = pdev; 1490 1276 pcr->slots[RTSX_SD_CARD].card_event = rtsx_pci_sdmmc_card_event; 1491 1277 1492 - mutex_init(&host->host_mutex); 1278 + host_addr = (unsigned long)host; 1279 + host->next_data.cookie = 1; 1280 + setup_timer(&host->timer, sd_request_timeout, host_addr); 1281 + tasklet_init(&host->cmd_tasklet, sd_get_rsp, host_addr); 1282 + tasklet_init(&host->data_tasklet, sd_finish_multi_rw, host_addr); 1283 + tasklet_init(&host->finish_tasklet, sd_finish_request, host_addr); 1284 + spin_lock_init(&host->lock); 1493 1285 
1286 + pcr->slots[RTSX_SD_CARD].done_transfer = sd_isr_done_transfer; 1494 1287 realtek_init_host(host); 1495 1288 1496 1289 mmc_add_host(mmc); ··· 1510 1289 struct realtek_pci_sdmmc *host = platform_get_drvdata(pdev); 1511 1290 struct rtsx_pcr *pcr; 1512 1291 struct mmc_host *mmc; 1292 + struct mmc_request *mrq; 1293 + unsigned long flags; 1513 1294 1514 1295 if (!host) 1515 1296 return 0; ··· 1519 1296 pcr = host->pcr; 1520 1297 pcr->slots[RTSX_SD_CARD].p_dev = NULL; 1521 1298 pcr->slots[RTSX_SD_CARD].card_event = NULL; 1299 + pcr->slots[RTSX_SD_CARD].done_transfer = NULL; 1522 1300 mmc = host->mmc; 1523 - host->eject = true; 1301 + mrq = host->mrq; 1524 1302 1525 - mutex_lock(&host->host_mutex); 1303 + spin_lock_irqsave(&host->lock, flags); 1526 1304 if (host->mrq) { 1527 1305 dev_dbg(&(pdev->dev), 1528 1306 "%s: Controller removed during transfer\n", 1529 1307 mmc_hostname(mmc)); 1530 1308 1531 - rtsx_pci_complete_unfinished_transfer(pcr); 1309 + if (mrq->sbc) 1310 + mrq->sbc->error = -ENOMEDIUM; 1311 + if (mrq->cmd) 1312 + mrq->cmd->error = -ENOMEDIUM; 1313 + if (mrq->stop) 1314 + mrq->stop->error = -ENOMEDIUM; 1315 + if (mrq->data) 1316 + mrq->data->error = -ENOMEDIUM; 1532 1317 1533 - host->mrq->cmd->error = -ENOMEDIUM; 1534 - if (host->mrq->stop) 1535 - host->mrq->stop->error = -ENOMEDIUM; 1536 - mmc_request_done(mmc, host->mrq); 1318 + tasklet_schedule(&host->finish_tasklet); 1537 1319 } 1538 - mutex_unlock(&host->host_mutex); 1320 + spin_unlock_irqrestore(&host->lock, flags); 1321 + 1322 + del_timer_sync(&host->timer); 1323 + tasklet_kill(&host->cmd_tasklet); 1324 + tasklet_kill(&host->data_tasklet); 1325 + tasklet_kill(&host->finish_tasklet); 1539 1326 1540 1327 mmc_remove_host(mmc); 1328 + host->eject = true; 1329 + 1541 1330 mmc_free_host(mmc); 1542 1331 1543 1332 dev_dbg(&(pdev->dev),
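The rewritten sd_search_final_phase in the rtsx hunk above drops the old timing_phase_path array bookkeeping in favor of a direct scan for the longest run of set bits in a circular phase map. A self-contained userland sketch of that search, assuming RTSX_PHASE_MAX is 32 as in the driver; the names mirror the patch but this is an illustration, not the driver code:

```c
#include <stdint.h>

#define RTSX_PHASE_MAX 32

/* Test one bit of the phase map, wrapping around the 32-phase ring. */
static int test_phase_bit(uint32_t phase_map, unsigned int bit)
{
	bit %= RTSX_PHASE_MAX;
	return (phase_map >> bit) & 1;
}

/* Length of the run of set bits starting at start_bit (circular). */
static int get_phase_len(uint32_t phase_map, unsigned int start_bit)
{
	int i;

	for (i = 0; i < RTSX_PHASE_MAX; i++)
		if (!test_phase_bit(phase_map, start_bit + i))
			return i;
	return RTSX_PHASE_MAX;
}

/* Midpoint of the longest circular run of good phases; 0xFF if none. */
static uint8_t search_final_phase(uint32_t phase_map)
{
	int start = 0, len, start_final = 0, len_final = 0;

	if (phase_map == 0)
		return 0xFF;

	while (start < RTSX_PHASE_MAX) {
		len = get_phase_len(phase_map, start);
		if (len_final < len) {
			start_final = start;
			len_final = len;
		}
		/* jump past the run (or past a single bad phase) */
		start += len ? len : 1;
	}

	return (start_final + len_final / 2) % RTSX_PHASE_MAX;
}
```

Because get_phase_len wraps modulo 32, a window that straddles phase 31/0 (e.g. phases 30, 31, 0, 1, 2) is found as a single run, which is exactly the case the old code handled by stitching the first and last paths together.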
+18 -62
drivers/mmc/host/sdhci-acpi.c
··· 31 31 #include <linux/bitops.h> 32 32 #include <linux/types.h> 33 33 #include <linux/err.h> 34 - #include <linux/gpio/consumer.h> 35 34 #include <linux/interrupt.h> 36 35 #include <linux/acpi.h> 37 36 #include <linux/pm.h> ··· 39 40 40 41 #include <linux/mmc/host.h> 41 42 #include <linux/mmc/pm.h> 43 + #include <linux/mmc/slot-gpio.h> 42 44 #include <linux/mmc/sdhci.h> 43 45 44 46 #include "sdhci.h" 45 47 46 48 enum { 47 - SDHCI_ACPI_SD_CD = BIT(0), 48 - SDHCI_ACPI_RUNTIME_PM = BIT(1), 49 + SDHCI_ACPI_SD_CD = BIT(0), 50 + SDHCI_ACPI_RUNTIME_PM = BIT(1), 51 + SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL = BIT(2), 49 52 }; 50 53 51 54 struct sdhci_acpi_chip { ··· 122 121 }; 123 122 124 123 static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sdio = { 124 + .quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION, 125 125 .quirks2 = SDHCI_QUIRK2_HOST_OFF_CARD_ON, 126 126 .caps = MMC_CAP_NONREMOVABLE | MMC_CAP_POWER_OFF_CARD, 127 127 .flags = SDHCI_ACPI_RUNTIME_PM, ··· 130 128 }; 131 129 132 130 static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sd = { 133 - .flags = SDHCI_ACPI_SD_CD | SDHCI_ACPI_RUNTIME_PM, 131 + .flags = SDHCI_ACPI_SD_CD | SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL | 132 + SDHCI_ACPI_RUNTIME_PM, 134 133 .quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON, 135 134 }; 136 135 ··· 144 141 static const struct sdhci_acpi_uid_slot sdhci_acpi_uids[] = { 145 142 { "80860F14" , "1" , &sdhci_acpi_slot_int_emmc }, 146 143 { "80860F14" , "3" , &sdhci_acpi_slot_int_sd }, 144 + { "80860F16" , NULL, &sdhci_acpi_slot_int_sd }, 147 145 { "INT33BB" , "2" , &sdhci_acpi_slot_int_sdio }, 148 146 { "INT33C6" , NULL, &sdhci_acpi_slot_int_sdio }, 149 147 { "INT3436" , NULL, &sdhci_acpi_slot_int_sdio }, ··· 154 150 155 151 static const struct acpi_device_id sdhci_acpi_ids[] = { 156 152 { "80860F14" }, 153 + { "80860F16" }, 157 154 { "INT33BB" }, 158 155 { "INT33C6" }, 159 156 { "INT3436" }, ··· 196 191 kfree(info); 197 192 return slot; 198 193 } 199 - 200 - #ifdef CONFIG_PM_RUNTIME 201 - 202 - static 
irqreturn_t sdhci_acpi_sd_cd(int irq, void *dev_id) 203 - { 204 - mmc_detect_change(dev_id, msecs_to_jiffies(200)); 205 - return IRQ_HANDLED; 206 - } 207 - 208 - static int sdhci_acpi_add_own_cd(struct device *dev, struct mmc_host *mmc) 209 - { 210 - struct gpio_desc *desc; 211 - unsigned long flags; 212 - int err, irq; 213 - 214 - desc = devm_gpiod_get_index(dev, "sd_cd", 0); 215 - if (IS_ERR(desc)) { 216 - err = PTR_ERR(desc); 217 - goto out; 218 - } 219 - 220 - err = gpiod_direction_input(desc); 221 - if (err) 222 - goto out_free; 223 - 224 - irq = gpiod_to_irq(desc); 225 - if (irq < 0) { 226 - err = irq; 227 - goto out_free; 228 - } 229 - 230 - flags = IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING; 231 - err = devm_request_irq(dev, irq, sdhci_acpi_sd_cd, flags, "sd_cd", mmc); 232 - if (err) 233 - goto out_free; 234 - 235 - return 0; 236 - 237 - out_free: 238 - devm_gpiod_put(dev, desc); 239 - out: 240 - dev_warn(dev, "failed to setup card detect wake up\n"); 241 - return err; 242 - } 243 - 244 - #else 245 - 246 - static int sdhci_acpi_add_own_cd(struct device *dev, struct mmc_host *mmc) 247 - { 248 - return 0; 249 - } 250 - 251 - #endif 252 194 253 195 static int sdhci_acpi_probe(struct platform_device *pdev) 254 196 { ··· 284 332 285 333 host->mmc->caps2 |= MMC_CAP2_NO_PRESCAN_POWERUP; 286 334 335 + if (sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD)) { 336 + bool v = sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL); 337 + 338 + if (mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0)) { 339 + dev_warn(dev, "failed to setup card detect gpio\n"); 340 + c->use_runtime_pm = false; 341 + } 342 + } 343 + 287 344 err = sdhci_add_host(host); 288 345 if (err) 289 346 goto err_free; 290 - 291 - if (sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD)) { 292 - if (sdhci_acpi_add_own_cd(dev, host->mmc)) 293 - c->use_runtime_pm = false; 294 - } 295 347 296 348 if (c->use_runtime_pm) { 297 349 pm_runtime_set_active(dev);
+36 -3
drivers/mmc/host/sdhci-bcm-kona.c
···
54 54 
55 55 struct sdhci_bcm_kona_dev {
56 56 	struct mutex write_lock; /* protect back to back writes */
57 +	struct clk *external_clk;
57 58 };
58 59 
59 60 
···
258 257 		goto err_pltfm_free;
259 258 	}
260 259 
260 +	/* Get and enable the external clock */
261 +	kona_dev->external_clk = devm_clk_get(dev, NULL);
262 +	if (IS_ERR(kona_dev->external_clk)) {
263 +		dev_err(dev, "Failed to get external clock\n");
264 +		ret = PTR_ERR(kona_dev->external_clk);
265 +		goto err_pltfm_free;
266 +	}
267 +
268 +	if (clk_set_rate(kona_dev->external_clk, host->mmc->f_max) != 0) {
269 +		dev_err(dev, "Failed to set rate external clock\n");
270 +		goto err_pltfm_free;
271 +	}
272 +
273 +	if (clk_prepare_enable(kona_dev->external_clk) != 0) {
274 +		dev_err(dev, "Failed to enable external clock\n");
275 +		goto err_pltfm_free;
276 +	}
277 +
261 278 	dev_dbg(dev, "non-removable=%c\n",
262 279 		(host->mmc->caps & MMC_CAP_NONREMOVABLE) ? 'Y' : 'N');
263 280 	dev_dbg(dev, "cd_gpio %c, wp_gpio %c\n",
···
290 271 
291 272 	ret = sdhci_bcm_kona_sd_reset(host);
292 273 	if (ret)
293 -		goto err_pltfm_free;
274 +		goto err_clk_disable;
294 275 
295 276 	sdhci_bcm_kona_sd_init(host);
296 277 
···
326 307 err_reset:
327 308 	sdhci_bcm_kona_sd_reset(host);
328 309 
310 +err_clk_disable:
311 +	clk_disable_unprepare(kona_dev->external_clk);
312 +
329 313 err_pltfm_free:
330 314 	sdhci_pltfm_free(pdev);
331 315 
···
336 314 	return ret;
337 315 }
338 316 
339 -static int __exit sdhci_bcm_kona_remove(struct platform_device *pdev)
317 +static int sdhci_bcm_kona_remove(struct platform_device *pdev)
340 318 {
341 -	return sdhci_pltfm_unregister(pdev);
319 +	struct sdhci_host *host = platform_get_drvdata(pdev);
320 +	struct sdhci_pltfm_host *pltfm_priv = sdhci_priv(host);
321 +	struct sdhci_bcm_kona_dev *kona_dev = sdhci_pltfm_priv(pltfm_priv);
322 +	int dead = (readl(host->ioaddr + SDHCI_INT_STATUS) == 0xffffffff);
323 +
324 +	sdhci_remove_host(host, dead);
325 +
326 +	clk_disable_unprepare(kona_dev->external_clk);
327 +
328 +	sdhci_pltfm_free(pdev);
329 +
330 +	return 0;
342 331 }
343 332 
344 333 static struct platform_driver sdhci_bcm_kona_driver = {
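The kona probe change above threads a new err_clk_disable label into the existing unwind ladder, so the external clock is released on any failure that happens after it was enabled, but not before. A standalone sketch of that goto-unwind idiom; the resource names and the log-string mechanism here are illustrative, not the driver's:

```c
#include <string.h>

static char log_buf[64];	/* records acquire/release order for testing */

static int get_clock(int fail)  { strcat(log_buf, "c"); return fail ? -1 : 0; }
static void put_clock(void)     { strcat(log_buf, "C"); }
static int reset_host(int fail) { strcat(log_buf, "r"); return fail ? -1 : 0; }

/* Probe-style function: on failure, unwind only what was acquired. */
static int probe(int fail_clock, int fail_reset)
{
	int ret;

	ret = get_clock(fail_clock);
	if (ret)
		goto err_out;		/* nothing acquired yet */

	ret = reset_host(fail_reset);
	if (ret)
		goto err_clk_disable;	/* clock is held, must be released */

	return 0;

err_clk_disable:
	put_clock();
err_out:
	return ret;
}
```

A successful probe logs "cr"; a reset failure logs "crC" (clock released); a clock failure logs just "c" and skips the release, which is the invariant the patch's new label preserves.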
+1 -1
drivers/mmc/host/sdhci-dove.c
···
208 208 		.name = "sdhci-dove",
209 209 		.owner = THIS_MODULE,
210 210 		.pm = SDHCI_PLTFM_PMOPS,
211 -		.of_match_table = of_match_ptr(sdhci_dove_of_match_table),
211 +		.of_match_table = sdhci_dove_of_match_table,
212 212 	.probe = sdhci_dove_probe,
213 213 	.remove = sdhci_dove_remove,
+618
drivers/mmc/host/sdhci-msm.c
··· 1 + /* 2 + * drivers/mmc/host/sdhci-msm.c - Qualcomm SDHCI Platform driver 3 + * 4 + * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 and 8 + * only version 2 as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + */ 16 + 17 + #include <linux/module.h> 18 + #include <linux/of_device.h> 19 + #include <linux/regulator/consumer.h> 20 + #include <linux/delay.h> 21 + #include <linux/mmc/mmc.h> 22 + #include <linux/slab.h> 23 + 24 + #include "sdhci-pltfm.h" 25 + 26 + #define CORE_HC_MODE 0x78 27 + #define HC_MODE_EN 0x1 28 + #define CORE_POWER 0x0 29 + #define CORE_SW_RST BIT(7) 30 + 31 + #define MAX_PHASES 16 32 + #define CORE_DLL_LOCK BIT(7) 33 + #define CORE_DLL_EN BIT(16) 34 + #define CORE_CDR_EN BIT(17) 35 + #define CORE_CK_OUT_EN BIT(18) 36 + #define CORE_CDR_EXT_EN BIT(19) 37 + #define CORE_DLL_PDN BIT(29) 38 + #define CORE_DLL_RST BIT(30) 39 + #define CORE_DLL_CONFIG 0x100 40 + #define CORE_DLL_STATUS 0x108 41 + 42 + #define CORE_VENDOR_SPEC 0x10c 43 + #define CORE_CLK_PWRSAVE BIT(1) 44 + 45 + #define CDR_SELEXT_SHIFT 20 46 + #define CDR_SELEXT_MASK (0xf << CDR_SELEXT_SHIFT) 47 + #define CMUX_SHIFT_PHASE_SHIFT 24 48 + #define CMUX_SHIFT_PHASE_MASK (7 << CMUX_SHIFT_PHASE_SHIFT) 49 + 50 + static const u32 tuning_block_64[] = { 51 + 0x00ff0fff, 0xccc3ccff, 0xffcc3cc3, 0xeffefffe, 52 + 0xddffdfff, 0xfbfffbff, 0xff7fffbf, 0xefbdf777, 53 + 0xf0fff0ff, 0x3cccfc0f, 0xcfcc33cc, 0xeeffefff, 54 + 0xfdfffdff, 0xffbfffdf, 0xfff7ffbb, 0xde7b7ff7 55 + }; 56 + 57 + static const u32 tuning_block_128[] = { 58 + 0xff00ffff, 0x0000ffff, 
0xccccffff, 0xcccc33cc, 59 + 0xcc3333cc, 0xffffcccc, 0xffffeeff, 0xffeeeeff, 60 + 0xffddffff, 0xddddffff, 0xbbffffff, 0xbbffffff, 61 + 0xffffffbb, 0xffffff77, 0x77ff7777, 0xffeeddbb, 62 + 0x00ffffff, 0x00ffffff, 0xccffff00, 0xcc33cccc, 63 + 0x3333cccc, 0xffcccccc, 0xffeeffff, 0xeeeeffff, 64 + 0xddffffff, 0xddffffff, 0xffffffdd, 0xffffffbb, 65 + 0xffffbbbb, 0xffff77ff, 0xff7777ff, 0xeeddbb77 66 + }; 67 + 68 + struct sdhci_msm_host { 69 + struct platform_device *pdev; 70 + void __iomem *core_mem; /* MSM SDCC mapped address */ 71 + struct clk *clk; /* main SD/MMC bus clock */ 72 + struct clk *pclk; /* SDHC peripheral bus clock */ 73 + struct clk *bus_clk; /* SDHC bus voter clock */ 74 + struct mmc_host *mmc; 75 + struct sdhci_pltfm_data sdhci_msm_pdata; 76 + }; 77 + 78 + /* Platform specific tuning */ 79 + static inline int msm_dll_poll_ck_out_en(struct sdhci_host *host, u8 poll) 80 + { 81 + u32 wait_cnt = 50; 82 + u8 ck_out_en; 83 + struct mmc_host *mmc = host->mmc; 84 + 85 + /* Poll for CK_OUT_EN bit. max. 
poll time = 50us */ 86 + ck_out_en = !!(readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) & 87 + CORE_CK_OUT_EN); 88 + 89 + while (ck_out_en != poll) { 90 + if (--wait_cnt == 0) { 91 + dev_err(mmc_dev(mmc), "%s: CK_OUT_EN bit is not %d\n", 92 + mmc_hostname(mmc), poll); 93 + return -ETIMEDOUT; 94 + } 95 + udelay(1); 96 + 97 + ck_out_en = !!(readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) & 98 + CORE_CK_OUT_EN); 99 + } 100 + 101 + return 0; 102 + } 103 + 104 + static int msm_config_cm_dll_phase(struct sdhci_host *host, u8 phase) 105 + { 106 + int rc; 107 + static const u8 grey_coded_phase_table[] = { 108 + 0x0, 0x1, 0x3, 0x2, 0x6, 0x7, 0x5, 0x4, 109 + 0xc, 0xd, 0xf, 0xe, 0xa, 0xb, 0x9, 0x8 110 + }; 111 + unsigned long flags; 112 + u32 config; 113 + struct mmc_host *mmc = host->mmc; 114 + 115 + spin_lock_irqsave(&host->lock, flags); 116 + 117 + config = readl_relaxed(host->ioaddr + CORE_DLL_CONFIG); 118 + config &= ~(CORE_CDR_EN | CORE_CK_OUT_EN); 119 + config |= (CORE_CDR_EXT_EN | CORE_DLL_EN); 120 + writel_relaxed(config, host->ioaddr + CORE_DLL_CONFIG); 121 + 122 + /* Wait until CK_OUT_EN bit of DLL_CONFIG register becomes '0' */ 123 + rc = msm_dll_poll_ck_out_en(host, 0); 124 + if (rc) 125 + goto err_out; 126 + 127 + /* 128 + * Write the selected DLL clock output phase (0 ... 15) 129 + * to CDR_SELEXT bit field of DLL_CONFIG register. 130 + */ 131 + config = readl_relaxed(host->ioaddr + CORE_DLL_CONFIG); 132 + config &= ~CDR_SELEXT_MASK; 133 + config |= grey_coded_phase_table[phase] << CDR_SELEXT_SHIFT; 134 + writel_relaxed(config, host->ioaddr + CORE_DLL_CONFIG); 135 + 136 + /* Set CK_OUT_EN bit of DLL_CONFIG register to 1. 
*/ 137 + writel_relaxed((readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) 138 + | CORE_CK_OUT_EN), host->ioaddr + CORE_DLL_CONFIG); 139 + 140 + /* Wait until CK_OUT_EN bit of DLL_CONFIG register becomes '1' */ 141 + rc = msm_dll_poll_ck_out_en(host, 1); 142 + if (rc) 143 + goto err_out; 144 + 145 + config = readl_relaxed(host->ioaddr + CORE_DLL_CONFIG); 146 + config |= CORE_CDR_EN; 147 + config &= ~CORE_CDR_EXT_EN; 148 + writel_relaxed(config, host->ioaddr + CORE_DLL_CONFIG); 149 + goto out; 150 + 151 + err_out: 152 + dev_err(mmc_dev(mmc), "%s: Failed to set DLL phase: %d\n", 153 + mmc_hostname(mmc), phase); 154 + out: 155 + spin_unlock_irqrestore(&host->lock, flags); 156 + return rc; 157 + } 158 + 159 + /* 160 + * Find out the greatest range of consecuitive selected 161 + * DLL clock output phases that can be used as sampling 162 + * setting for SD3.0 UHS-I card read operation (in SDR104 163 + * timing mode) or for eMMC4.5 card read operation (in HS200 164 + * timing mode). 165 + * Select the 3/4 of the range and configure the DLL with the 166 + * selected DLL clock output phase. 
167 + */ 168 + 169 + static int msm_find_most_appropriate_phase(struct sdhci_host *host, 170 + u8 *phase_table, u8 total_phases) 171 + { 172 + int ret; 173 + u8 ranges[MAX_PHASES][MAX_PHASES] = { {0}, {0} }; 174 + u8 phases_per_row[MAX_PHASES] = { 0 }; 175 + int row_index = 0, col_index = 0, selected_row_index = 0, curr_max = 0; 176 + int i, cnt, phase_0_raw_index = 0, phase_15_raw_index = 0; 177 + bool phase_0_found = false, phase_15_found = false; 178 + struct mmc_host *mmc = host->mmc; 179 + 180 + if (!total_phases || (total_phases > MAX_PHASES)) { 181 + dev_err(mmc_dev(mmc), "%s: Invalid argument: total_phases=%d\n", 182 + mmc_hostname(mmc), total_phases); 183 + return -EINVAL; 184 + } 185 + 186 + for (cnt = 0; cnt < total_phases; cnt++) { 187 + ranges[row_index][col_index] = phase_table[cnt]; 188 + phases_per_row[row_index] += 1; 189 + col_index++; 190 + 191 + if ((cnt + 1) == total_phases) { 192 + continue; 193 + /* check if next phase in phase_table is consecutive or not */ 194 + } else if ((phase_table[cnt] + 1) != phase_table[cnt + 1]) { 195 + row_index++; 196 + col_index = 0; 197 + } 198 + } 199 + 200 + if (row_index >= MAX_PHASES) 201 + return -EINVAL; 202 + 203 + /* Check if phase-0 is present in first valid window? 
*/ 204 + if (!ranges[0][0]) { 205 + phase_0_found = true; 206 + phase_0_raw_index = 0; 207 + /* Check if cycle exist between 2 valid windows */ 208 + for (cnt = 1; cnt <= row_index; cnt++) { 209 + if (phases_per_row[cnt]) { 210 + for (i = 0; i < phases_per_row[cnt]; i++) { 211 + if (ranges[cnt][i] == 15) { 212 + phase_15_found = true; 213 + phase_15_raw_index = cnt; 214 + break; 215 + } 216 + } 217 + } 218 + } 219 + } 220 + 221 + /* If 2 valid windows form cycle then merge them as single window */ 222 + if (phase_0_found && phase_15_found) { 223 + /* number of phases in raw where phase 0 is present */ 224 + u8 phases_0 = phases_per_row[phase_0_raw_index]; 225 + /* number of phases in raw where phase 15 is present */ 226 + u8 phases_15 = phases_per_row[phase_15_raw_index]; 227 + 228 + if (phases_0 + phases_15 >= MAX_PHASES) 229 + /* 230 + * If there are more than 1 phase windows then total 231 + * number of phases in both the windows should not be 232 + * more than or equal to MAX_PHASES. 
233 + */ 234 + return -EINVAL; 235 + 236 + /* Merge 2 cyclic windows */ 237 + i = phases_15; 238 + for (cnt = 0; cnt < phases_0; cnt++) { 239 + ranges[phase_15_raw_index][i] = 240 + ranges[phase_0_raw_index][cnt]; 241 + if (++i >= MAX_PHASES) 242 + break; 243 + } 244 + 245 + phases_per_row[phase_0_raw_index] = 0; 246 + phases_per_row[phase_15_raw_index] = phases_15 + phases_0; 247 + } 248 + 249 + for (cnt = 0; cnt <= row_index; cnt++) { 250 + if (phases_per_row[cnt] > curr_max) { 251 + curr_max = phases_per_row[cnt]; 252 + selected_row_index = cnt; 253 + } 254 + } 255 + 256 + i = (curr_max * 3) / 4; 257 + if (i) 258 + i--; 259 + 260 + ret = ranges[selected_row_index][i]; 261 + 262 + if (ret >= MAX_PHASES) { 263 + ret = -EINVAL; 264 + dev_err(mmc_dev(mmc), "%s: Invalid phase selected=%d\n", 265 + mmc_hostname(mmc), ret); 266 + } 267 + 268 + return ret; 269 + } 270 + 271 + static inline void msm_cm_dll_set_freq(struct sdhci_host *host) 272 + { 273 + u32 mclk_freq = 0, config; 274 + 275 + /* Program the MCLK value to MCLK_FREQ bit field */ 276 + if (host->clock <= 112000000) 277 + mclk_freq = 0; 278 + else if (host->clock <= 125000000) 279 + mclk_freq = 1; 280 + else if (host->clock <= 137000000) 281 + mclk_freq = 2; 282 + else if (host->clock <= 150000000) 283 + mclk_freq = 3; 284 + else if (host->clock <= 162000000) 285 + mclk_freq = 4; 286 + else if (host->clock <= 175000000) 287 + mclk_freq = 5; 288 + else if (host->clock <= 187000000) 289 + mclk_freq = 6; 290 + else if (host->clock <= 200000000) 291 + mclk_freq = 7; 292 + 293 + config = readl_relaxed(host->ioaddr + CORE_DLL_CONFIG); 294 + config &= ~CMUX_SHIFT_PHASE_MASK; 295 + config |= mclk_freq << CMUX_SHIFT_PHASE_SHIFT; 296 + writel_relaxed(config, host->ioaddr + CORE_DLL_CONFIG); 297 + } 298 + 299 + /* Initialize the DLL (Programmable Delay Line) */ 300 + static int msm_init_cm_dll(struct sdhci_host *host) 301 + { 302 + struct mmc_host *mmc = host->mmc; 303 + int wait_cnt = 50; 304 + unsigned long flags; 305 
+ 306 + spin_lock_irqsave(&host->lock, flags); 307 + 308 + /* 309 + * Make sure that clock is always enabled when DLL 310 + * tuning is in progress. Keeping PWRSAVE ON may 311 + * turn off the clock. 312 + */ 313 + writel_relaxed((readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC) 314 + & ~CORE_CLK_PWRSAVE), host->ioaddr + CORE_VENDOR_SPEC); 315 + 316 + /* Write 1 to DLL_RST bit of DLL_CONFIG register */ 317 + writel_relaxed((readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) 318 + | CORE_DLL_RST), host->ioaddr + CORE_DLL_CONFIG); 319 + 320 + /* Write 1 to DLL_PDN bit of DLL_CONFIG register */ 321 + writel_relaxed((readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) 322 + | CORE_DLL_PDN), host->ioaddr + CORE_DLL_CONFIG); 323 + msm_cm_dll_set_freq(host); 324 + 325 + /* Write 0 to DLL_RST bit of DLL_CONFIG register */ 326 + writel_relaxed((readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) 327 + & ~CORE_DLL_RST), host->ioaddr + CORE_DLL_CONFIG); 328 + 329 + /* Write 0 to DLL_PDN bit of DLL_CONFIG register */ 330 + writel_relaxed((readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) 331 + & ~CORE_DLL_PDN), host->ioaddr + CORE_DLL_CONFIG); 332 + 333 + /* Set DLL_EN bit to 1. */ 334 + writel_relaxed((readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) 335 + | CORE_DLL_EN), host->ioaddr + CORE_DLL_CONFIG); 336 + 337 + /* Set CK_OUT_EN bit to 1. */ 338 + writel_relaxed((readl_relaxed(host->ioaddr + CORE_DLL_CONFIG) 339 + | CORE_CK_OUT_EN), host->ioaddr + CORE_DLL_CONFIG); 340 + 341 + /* Wait until DLL_LOCK bit of DLL_STATUS register becomes '1' */ 342 + while (!(readl_relaxed(host->ioaddr + CORE_DLL_STATUS) & 343 + CORE_DLL_LOCK)) { 344 + /* max. 
wait for 50us sec for LOCK bit to be set */ 345 + if (--wait_cnt == 0) { 346 + dev_err(mmc_dev(mmc), "%s: DLL failed to LOCK\n", 347 + mmc_hostname(mmc)); 348 + spin_unlock_irqrestore(&host->lock, flags); 349 + return -ETIMEDOUT; 350 + } 351 + udelay(1); 352 + } 353 + 354 + spin_unlock_irqrestore(&host->lock, flags); 355 + return 0; 356 + } 357 + 358 + static int sdhci_msm_execute_tuning(struct sdhci_host *host, u32 opcode) 359 + { 360 + int tuning_seq_cnt = 3; 361 + u8 phase, *data_buf, tuned_phases[16], tuned_phase_cnt = 0; 362 + const u32 *tuning_block_pattern = tuning_block_64; 363 + int size = sizeof(tuning_block_64); /* Pattern size in bytes */ 364 + int rc; 365 + struct mmc_host *mmc = host->mmc; 366 + struct mmc_ios ios = host->mmc->ios; 367 + 368 + /* 369 + * Tuning is required for SDR104, HS200 and HS400 cards and 370 + * if clock frequency is greater than 100MHz in these modes. 371 + */ 372 + if (host->clock <= 100 * 1000 * 1000 || 373 + !((ios.timing == MMC_TIMING_MMC_HS200) || 374 + (ios.timing == MMC_TIMING_UHS_SDR104))) 375 + return 0; 376 + 377 + if ((opcode == MMC_SEND_TUNING_BLOCK_HS200) && 378 + (mmc->ios.bus_width == MMC_BUS_WIDTH_8)) { 379 + tuning_block_pattern = tuning_block_128; 380 + size = sizeof(tuning_block_128); 381 + } 382 + 383 + data_buf = kmalloc(size, GFP_KERNEL); 384 + if (!data_buf) 385 + return -ENOMEM; 386 + 387 + retry: 388 + /* First of all reset the tuning block */ 389 + rc = msm_init_cm_dll(host); 390 + if (rc) 391 + goto out; 392 + 393 + phase = 0; 394 + do { 395 + struct mmc_command cmd = { 0 }; 396 + struct mmc_data data = { 0 }; 397 + struct mmc_request mrq = { 398 + .cmd = &cmd, 399 + .data = &data 400 + }; 401 + struct scatterlist sg; 402 + 403 + /* Set the phase in delay line hw block */ 404 + rc = msm_config_cm_dll_phase(host, phase); 405 + if (rc) 406 + goto out; 407 + 408 + cmd.opcode = opcode; 409 + cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC; 410 + 411 + data.blksz = size; 412 + data.blocks = 1; 413 + data.flags = 
MMC_DATA_READ; 414 + data.timeout_ns = NSEC_PER_SEC; /* 1 second */ 415 + 416 + data.sg = &sg; 417 + data.sg_len = 1; 418 + sg_init_one(&sg, data_buf, size); 419 + memset(data_buf, 0, size); 420 + mmc_wait_for_req(mmc, &mrq); 421 + 422 + if (!cmd.error && !data.error && 423 + !memcmp(data_buf, tuning_block_pattern, size)) { 424 + /* Tuning is successful at this tuning point */ 425 + tuned_phases[tuned_phase_cnt++] = phase; 426 + dev_dbg(mmc_dev(mmc), "%s: Found good phase = %d\n", 427 + mmc_hostname(mmc), phase); 428 + } 429 + } while (++phase < ARRAY_SIZE(tuned_phases)); 430 + 431 + if (tuned_phase_cnt) { 432 + rc = msm_find_most_appropriate_phase(host, tuned_phases, 433 + tuned_phase_cnt); 434 + if (rc < 0) 435 + goto out; 436 + else 437 + phase = rc; 438 + 439 + /* 440 + * Finally set the selected phase in delay 441 + * line hw block. 442 + */ 443 + rc = msm_config_cm_dll_phase(host, phase); 444 + if (rc) 445 + goto out; 446 + dev_dbg(mmc_dev(mmc), "%s: Setting the tuning phase to %d\n", 447 + mmc_hostname(mmc), phase); 448 + } else { 449 + if (--tuning_seq_cnt) 450 + goto retry; 451 + /* Tuning failed */ 452 + dev_dbg(mmc_dev(mmc), "%s: No tuning point found\n", 453 + mmc_hostname(mmc)); 454 + rc = -EIO; 455 + } 456 + 457 + out: 458 + kfree(data_buf); 459 + return rc; 460 + } 461 + 462 + static const struct of_device_id sdhci_msm_dt_match[] = { 463 + { .compatible = "qcom,sdhci-msm-v4" }, 464 + {}, 465 + }; 466 + 467 + MODULE_DEVICE_TABLE(of, sdhci_msm_dt_match); 468 + 469 + static struct sdhci_ops sdhci_msm_ops = { 470 + .platform_execute_tuning = sdhci_msm_execute_tuning, 471 + }; 472 + 473 + static int sdhci_msm_probe(struct platform_device *pdev) 474 + { 475 + struct sdhci_host *host; 476 + struct sdhci_pltfm_host *pltfm_host; 477 + struct sdhci_msm_host *msm_host; 478 + struct resource *core_memres; 479 + int ret; 480 + u16 host_version; 481 + 482 + msm_host = devm_kzalloc(&pdev->dev, sizeof(*msm_host), GFP_KERNEL); 483 + if (!msm_host) 484 + return 
-ENOMEM; 485 + 486 + msm_host->sdhci_msm_pdata.ops = &sdhci_msm_ops; 487 + host = sdhci_pltfm_init(pdev, &msm_host->sdhci_msm_pdata, 0); 488 + if (IS_ERR(host)) 489 + return PTR_ERR(host); 490 + 491 + pltfm_host = sdhci_priv(host); 492 + pltfm_host->priv = msm_host; 493 + msm_host->mmc = host->mmc; 494 + msm_host->pdev = pdev; 495 + 496 + ret = mmc_of_parse(host->mmc); 497 + if (ret) 498 + goto pltfm_free; 499 + 500 + sdhci_get_of_property(pdev); 501 + 502 + /* Setup SDCC bus voter clock. */ 503 + msm_host->bus_clk = devm_clk_get(&pdev->dev, "bus"); 504 + if (!IS_ERR(msm_host->bus_clk)) { 505 + /* Vote for max. clk rate for max. performance */ 506 + ret = clk_set_rate(msm_host->bus_clk, INT_MAX); 507 + if (ret) 508 + goto pltfm_free; 509 + ret = clk_prepare_enable(msm_host->bus_clk); 510 + if (ret) 511 + goto pltfm_free; 512 + } 513 + 514 + /* Setup main peripheral bus clock */ 515 + msm_host->pclk = devm_clk_get(&pdev->dev, "iface"); 516 + if (IS_ERR(msm_host->pclk)) { 517 + ret = PTR_ERR(msm_host->pclk); 518 + dev_err(&pdev->dev, "Peripheral clk setup failed (%d)\n", ret); 519 + goto bus_clk_disable; 520 + } 521 + 522 + ret = clk_prepare_enable(msm_host->pclk); 523 + if (ret) 524 + goto bus_clk_disable; 525 + 526 + /* Setup SDC MMC clock */ 527 + msm_host->clk = devm_clk_get(&pdev->dev, "core"); 528 + if (IS_ERR(msm_host->clk)) { 529 + ret = PTR_ERR(msm_host->clk); 530 + dev_err(&pdev->dev, "SDC MMC clk setup failed (%d)\n", ret); 531 + goto pclk_disable; 532 + } 533 + 534 + ret = clk_prepare_enable(msm_host->clk); 535 + if (ret) 536 + goto pclk_disable; 537 + 538 + core_memres = platform_get_resource(pdev, IORESOURCE_MEM, 1); 539 + msm_host->core_mem = devm_ioremap_resource(&pdev->dev, core_memres); 540 + 541 + if (IS_ERR(msm_host->core_mem)) { 542 + dev_err(&pdev->dev, "Failed to remap registers\n"); 543 + ret = PTR_ERR(msm_host->core_mem); 544 + goto clk_disable; 545 + } 546 + 547 + /* Reset the core and enable SDHC mode */ 548 +
writel_relaxed(readl_relaxed(msm_host->core_mem + CORE_POWER) | 549 + CORE_SW_RST, msm_host->core_mem + CORE_POWER); 550 + 551 + /* SW reset can take up to 10 HCLK + 15 MCLK cycles (min 40us) */ 552 + usleep_range(1000, 5000); 553 + if (readl(msm_host->core_mem + CORE_POWER) & CORE_SW_RST) { 554 + dev_err(&pdev->dev, "Stuck in reset\n"); 555 + ret = -ETIMEDOUT; 556 + goto clk_disable; 557 + } 558 + 559 + /* Set HC_MODE_EN bit in HC_MODE register */ 560 + writel_relaxed(HC_MODE_EN, (msm_host->core_mem + CORE_HC_MODE)); 561 + 562 + host->quirks |= SDHCI_QUIRK_BROKEN_CARD_DETECTION; 563 + host->quirks |= SDHCI_QUIRK_SINGLE_POWER_WRITE; 564 + 565 + host_version = readw_relaxed((host->ioaddr + SDHCI_HOST_VERSION)); 566 + dev_dbg(&pdev->dev, "Host Version: 0x%x Vendor Version 0x%x\n", 567 + host_version, ((host_version & SDHCI_VENDOR_VER_MASK) >> 568 + SDHCI_VENDOR_VER_SHIFT)); 569 + 570 + ret = sdhci_add_host(host); 571 + if (ret) 572 + goto clk_disable; 573 + 574 + return 0; 575 + 576 + clk_disable: 577 + clk_disable_unprepare(msm_host->clk); 578 + pclk_disable: 579 + clk_disable_unprepare(msm_host->pclk); 580 + bus_clk_disable: 581 + if (!IS_ERR(msm_host->bus_clk)) 582 + clk_disable_unprepare(msm_host->bus_clk); 583 + pltfm_free: 584 + sdhci_pltfm_free(pdev); 585 + return ret; 586 + } 587 + 588 + static int sdhci_msm_remove(struct platform_device *pdev) 589 + { 590 + struct sdhci_host *host = platform_get_drvdata(pdev); 591 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 592 + struct sdhci_msm_host *msm_host = pltfm_host->priv; 593 + int dead = (readl_relaxed(host->ioaddr + SDHCI_INT_STATUS) == 594 + 0xffffffff); 595 + 596 + sdhci_remove_host(host, dead); 597 + sdhci_pltfm_free(pdev); 598 + clk_disable_unprepare(msm_host->clk); 599 + clk_disable_unprepare(msm_host->pclk); 600 + if (!IS_ERR(msm_host->bus_clk)) 601 + clk_disable_unprepare(msm_host->bus_clk); 602 + return 0; 603 + } 604 + 605 + static struct platform_driver sdhci_msm_driver = { 606 + .probe =
sdhci_msm_probe, 607 + .remove = sdhci_msm_remove, 608 + .driver = { 609 + .name = "sdhci_msm", 610 + .owner = THIS_MODULE, 611 + .of_match_table = sdhci_msm_dt_match, 612 + }, 613 + }; 614 + 615 + module_platform_driver(sdhci_msm_driver); 616 + 617 + MODULE_DESCRIPTION("Qualcomm Secure Digital Host Controller Interface driver"); 618 + MODULE_LICENSE("GPL v2");
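An aside on the tuning flow in sdhci_msm_execute_tuning() above: the driver walks all 16 DLL phases, records the ones that read the tuning pattern back intact, and then asks msm_find_most_appropriate_phase() which phase to settle on. A minimal userspace sketch of that selection idea follows; it picks the centre of the longest contiguous run of good phases, treating the 16 phases as circular. This is an illustration only, not the driver's actual msm_find_most_appropriate_phase() implementation.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: pick the middle of the longest contiguous run of "good"
 * tuning phases out of 16, treating the phase range as circular.
 * Returns -1 when no phase passed at all. */
static int pick_centre_phase(const bool good[16])
{
	int best_start = -1, best_len = 0;
	int start, len;

	for (start = 0; start < 16; start++) {
		if (!good[start])
			continue;
		len = 0;
		while (len < 16 && good[(start + len) % 16])
			len++;
		if (len > best_len) {
			best_len = len;
			best_start = start;
		}
	}

	if (best_len == 0)
		return -1;

	return (best_start + best_len / 2) % 16;
}
```

Centring the phase inside the widest passing window maximises margin against voltage and temperature drift, which is why the tuning loop records the whole set of good phases instead of stopping at the first one that works.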
+20
drivers/mmc/host/sdhci-pci.c
··· 610 610 .probe = via_probe, 611 611 }; 612 612 613 + static int rtsx_probe_slot(struct sdhci_pci_slot *slot) 614 + { 615 + slot->host->mmc->caps2 |= MMC_CAP2_HS200; 616 + return 0; 617 + } 618 + 619 + static const struct sdhci_pci_fixes sdhci_rtsx = { 620 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 621 + SDHCI_QUIRK2_BROKEN_DDR50, 622 + .probe_slot = rtsx_probe_slot, 623 + }; 624 + 613 625 static const struct pci_device_id pci_ids[] = { 614 626 { 615 627 .vendor = PCI_VENDOR_ID_RICOH, ··· 741 729 .subvendor = PCI_ANY_ID, 742 730 .subdevice = PCI_ANY_ID, 743 731 .driver_data = (kernel_ulong_t)&sdhci_via, 732 + }, 733 + 734 + { 735 + .vendor = PCI_VENDOR_ID_REALTEK, 736 + .device = 0x5250, 737 + .subvendor = PCI_ANY_ID, 738 + .subdevice = PCI_ANY_ID, 739 + .driver_data = (kernel_ulong_t)&sdhci_rtsx, 744 740 }, 745 741 746 742 {
+68
drivers/mmc/host/sdhci-pxav3.c
··· 34 34 #include <linux/of_gpio.h> 35 35 #include <linux/pm.h> 36 36 #include <linux/pm_runtime.h> 37 + #include <linux/mbus.h> 37 38 38 39 #include "sdhci.h" 39 40 #include "sdhci-pltfm.h" ··· 57 56 #define SD_CE_ATA_2 0x10E 58 57 #define SDCE_MISC_INT (1<<2) 59 58 #define SDCE_MISC_INT_EN (1<<1) 59 + 60 + /* 61 + * These registers are relative to the second register region, for the 62 + * MBus bridge. 63 + */ 64 + #define SDHCI_WINDOW_CTRL(i) (0x80 + ((i) << 3)) 65 + #define SDHCI_WINDOW_BASE(i) (0x84 + ((i) << 3)) 66 + #define SDHCI_MAX_WIN_NUM 8 67 + 68 + static int mv_conf_mbus_windows(struct platform_device *pdev, 69 + const struct mbus_dram_target_info *dram) 70 + { 71 + int i; 72 + void __iomem *regs; 73 + struct resource *res; 74 + 75 + if (!dram) { 76 + dev_err(&pdev->dev, "no mbus dram info\n"); 77 + return -EINVAL; 78 + } 79 + 80 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 81 + if (!res) { 82 + dev_err(&pdev->dev, "cannot get mbus registers\n"); 83 + return -EINVAL; 84 + } 85 + 86 + regs = ioremap(res->start, resource_size(res)); 87 + if (!regs) { 88 + dev_err(&pdev->dev, "cannot map mbus registers\n"); 89 + return -ENOMEM; 90 + } 91 + 92 + for (i = 0; i < SDHCI_MAX_WIN_NUM; i++) { 93 + writel(0, regs + SDHCI_WINDOW_CTRL(i)); 94 + writel(0, regs + SDHCI_WINDOW_BASE(i)); 95 + } 96 + 97 + for (i = 0; i < dram->num_cs; i++) { 98 + const struct mbus_dram_window *cs = dram->cs + i; 99 + 100 + /* Write size, attributes and target id to control register */ 101 + writel(((cs->size - 1) & 0xffff0000) | 102 + (cs->mbus_attr << 8) | 103 + (dram->mbus_dram_target_id << 4) | 1, 104 + regs + SDHCI_WINDOW_CTRL(i)); 105 + /* Write base address to base register */ 106 + writel(cs->base, regs + SDHCI_WINDOW_BASE(i)); 107 + } 108 + 109 + iounmap(regs); 110 + 111 + return 0; 112 + } 60 113 61 114 static void pxav3_set_private_registers(struct sdhci_host *host, u8 mask) 62 115 { ··· 242 187 { 243 188 .compatible = "mrvl,pxav3-mmc", 244 189 }, 190 + { 191 + 
.compatible = "marvell,armada-380-sdhci", 192 + }, 245 193 {}, 246 194 }; 247 195 MODULE_DEVICE_TABLE(of, sdhci_pxav3_of_match); ··· 277 219 struct sdhci_pltfm_host *pltfm_host; 278 220 struct sdhci_pxa_platdata *pdata = pdev->dev.platform_data; 279 221 struct device *dev = &pdev->dev; 222 + struct device_node *np = pdev->dev.of_node; 280 223 struct sdhci_host *host = NULL; 281 224 struct sdhci_pxa *pxa = NULL; 282 225 const struct of_device_id *match; ··· 294 235 kfree(pxa); 295 236 return PTR_ERR(host); 296 237 } 238 + 239 + if (of_device_is_compatible(np, "marvell,armada-380-sdhci")) { 240 + ret = mv_conf_mbus_windows(pdev, mv_mbus_dram_info()); 241 + if (ret < 0) 242 + goto err_mbus_win; 243 + } 244 + 245 + 297 246 pltfm_host = sdhci_priv(host); 298 247 pltfm_host->priv = pxa; 299 248 ··· 388 321 clk_disable_unprepare(clk); 389 322 clk_put(clk); 390 323 err_clk_get: 324 + err_mbus_win: 391 325 sdhci_pltfm_free(pdev); 392 326 kfree(pxa); 393 327 return ret;
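For the Armada 38x support above, mv_conf_mbus_windows() programs one MBus window register pair per DRAM chip select. The control-register value it writes can be sketched as a pure function; the field layout below is taken directly from the writel() in the patch (size in bits [31:16] at 64 KiB granularity, attributes in [15:8], target id in [7:4], enable in bit 0).

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the value written to SDHCI_WINDOW_CTRL(i) above:
 * (size - 1) masked to 64 KiB granularity in the top half,
 * then window attributes, DRAM target id, and the enable bit. */
static uint32_t mbus_win_ctrl(uint32_t size, uint8_t attr, uint8_t target)
{
	return ((size - 1) & 0xffff0000) | ((uint32_t)attr << 8) |
	       ((uint32_t)target << 4) | 1;
}
```

Bit 0 enables the window, which is why the first loop in mv_conf_mbus_windows() clears every control register before programming the real windows: a zeroed register is a disabled window.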
+78 -94
drivers/mmc/host/sdhci-s3c.c
··· 51 51 struct platform_device *pdev; 52 52 struct resource *ioarea; 53 53 struct s3c_sdhci_platdata *pdata; 54 - unsigned int cur_clk; 54 + int cur_clk; 55 55 int ext_cd_irq; 56 56 int ext_cd_gpio; 57 57 58 58 struct clk *clk_io; 59 59 struct clk *clk_bus[MAX_BUS_CLK]; 60 + unsigned long clk_rates[MAX_BUS_CLK]; 60 61 }; 61 62 62 63 /** ··· 78 77 } 79 78 80 79 /** 81 - * get_curclk - convert ctrl2 register to clock source number 82 - * @ctrl2: Control2 register value. 83 - */ 84 - static u32 get_curclk(u32 ctrl2) 85 - { 86 - ctrl2 &= S3C_SDHCI_CTRL2_SELBASECLK_MASK; 87 - ctrl2 >>= S3C_SDHCI_CTRL2_SELBASECLK_SHIFT; 88 - 89 - return ctrl2; 90 - } 91 - 92 - static void sdhci_s3c_check_sclk(struct sdhci_host *host) 93 - { 94 - struct sdhci_s3c *ourhost = to_s3c(host); 95 - u32 tmp = readl(host->ioaddr + S3C_SDHCI_CONTROL2); 96 - 97 - if (get_curclk(tmp) != ourhost->cur_clk) { 98 - dev_dbg(&ourhost->pdev->dev, "restored ctrl2 clock setting\n"); 99 - 100 - tmp &= ~S3C_SDHCI_CTRL2_SELBASECLK_MASK; 101 - tmp |= ourhost->cur_clk << S3C_SDHCI_CTRL2_SELBASECLK_SHIFT; 102 - writel(tmp, host->ioaddr + S3C_SDHCI_CONTROL2); 103 - } 104 - } 105 - 106 - /** 107 80 * sdhci_s3c_get_max_clk - callback to get maximum clock frequency. 108 81 * @host: The SDHCI host instance. 
109 82 * ··· 86 111 static unsigned int sdhci_s3c_get_max_clk(struct sdhci_host *host) 87 112 { 88 113 struct sdhci_s3c *ourhost = to_s3c(host); 89 - struct clk *busclk; 90 - unsigned int rate, max; 91 - int clk; 114 + unsigned long rate, max = 0; 115 + int src; 92 116 93 - /* note, a reset will reset the clock source */ 94 - 95 - sdhci_s3c_check_sclk(host); 96 - 97 - for (max = 0, clk = 0; clk < MAX_BUS_CLK; clk++) { 98 - busclk = ourhost->clk_bus[clk]; 99 - if (!busclk) 100 - continue; 101 - 102 - rate = clk_get_rate(busclk); 117 + for (src = 0; src < MAX_BUS_CLK; src++) { 118 + rate = ourhost->clk_rates[src]; 103 119 if (rate > max) 104 120 max = rate; 105 121 } ··· 110 144 { 111 145 unsigned long rate; 112 146 struct clk *clksrc = ourhost->clk_bus[src]; 113 - int div; 147 + int shift; 114 148 115 - if (!clksrc) 149 + if (IS_ERR(clksrc)) 116 150 return UINT_MAX; 117 151 118 152 /* ··· 124 158 return wanted - rate; 125 159 } 126 160 127 - rate = clk_get_rate(clksrc); 161 + rate = ourhost->clk_rates[src]; 128 162 129 - for (div = 1; div < 256; div *= 2) { 130 - if ((rate / div) <= wanted) 163 + for (shift = 0; shift <= 8; ++shift) { 164 + if ((rate >> shift) <= wanted) 131 165 break; 132 166 } 133 167 134 - dev_dbg(&ourhost->pdev->dev, "clk %d: rate %ld, want %d, got %ld\n", 135 - src, rate, wanted, rate / div); 168 + if (shift > 8) { 169 + dev_dbg(&ourhost->pdev->dev, 170 + "clk %d: rate %ld, min rate %lu > wanted %u\n", 171 + src, rate, rate / 256, wanted); 172 + return UINT_MAX; 173 + } 136 174 137 - return wanted - (rate / div); 175 + dev_dbg(&ourhost->pdev->dev, "clk %d: rate %ld, want %d, got %ld\n", 176 + src, rate, wanted, rate >> shift); 177 + 178 + return wanted - (rate >> shift); 138 179 } 139 180 140 181 /** ··· 182 209 struct clk *clk = ourhost->clk_bus[best_src]; 183 210 184 211 clk_prepare_enable(clk); 185 - clk_disable_unprepare(ourhost->clk_bus[ourhost->cur_clk]); 186 - 187 - /* turn clock off to card before changing clock source */ 188 - 
writew(0, host->ioaddr + SDHCI_CLOCK_CONTROL); 212 + if (ourhost->cur_clk >= 0) 213 + clk_disable_unprepare( 214 + ourhost->clk_bus[ourhost->cur_clk]); 189 215 190 216 ourhost->cur_clk = best_src; 191 - host->max_clk = clk_get_rate(clk); 192 - 193 - ctrl = readl(host->ioaddr + S3C_SDHCI_CONTROL2); 194 - ctrl &= ~S3C_SDHCI_CTRL2_SELBASECLK_MASK; 195 - ctrl |= best_src << S3C_SDHCI_CTRL2_SELBASECLK_SHIFT; 196 - writel(ctrl, host->ioaddr + S3C_SDHCI_CONTROL2); 217 + host->max_clk = ourhost->clk_rates[best_src]; 197 218 } 219 + 220 + /* turn clock off to card before changing clock source */ 221 + writew(0, host->ioaddr + SDHCI_CLOCK_CONTROL); 222 + 223 + ctrl = readl(host->ioaddr + S3C_SDHCI_CONTROL2); 224 + ctrl &= ~S3C_SDHCI_CTRL2_SELBASECLK_MASK; 225 + ctrl |= best_src << S3C_SDHCI_CTRL2_SELBASECLK_SHIFT; 226 + writel(ctrl, host->ioaddr + S3C_SDHCI_CONTROL2); 198 227 199 228 /* reprogram default hardware configuration */ 200 229 writel(S3C64XX_SDHCI_CONTROL4_DRIVE_9mA, ··· 229 254 static unsigned int sdhci_s3c_get_min_clock(struct sdhci_host *host) 230 255 { 231 256 struct sdhci_s3c *ourhost = to_s3c(host); 232 - unsigned int delta, min = UINT_MAX; 257 + unsigned long rate, min = ULONG_MAX; 233 258 int src; 234 259 235 260 for (src = 0; src < MAX_BUS_CLK; src++) { 236 - delta = sdhci_s3c_consider_clock(ourhost, src, 0); 237 - if (delta == UINT_MAX) 261 + rate = ourhost->clk_rates[src] / 256; 262 + if (!rate) 238 263 continue; 239 - /* delta is a negative value in this case */ 240 - if (-delta < min) 241 - min = -delta; 264 + if (rate < min) 265 + min = rate; 242 266 } 267 + 243 268 return min; 244 269 } 245 270 ··· 247 272 static unsigned int sdhci_cmu_get_max_clock(struct sdhci_host *host) 248 273 { 249 274 struct sdhci_s3c *ourhost = to_s3c(host); 275 + unsigned long rate, max = 0; 276 + int src; 250 277 251 - return clk_round_rate(ourhost->clk_bus[ourhost->cur_clk], UINT_MAX); 278 + for (src = 0; src < MAX_BUS_CLK; src++) { 279 + struct clk *clk; 280 + 281 + clk 
= ourhost->clk_bus[src]; 282 + if (IS_ERR(clk)) 283 + continue; 284 + 285 + rate = clk_round_rate(clk, ULONG_MAX); 286 + if (rate > max) 287 + max = rate; 288 + } 289 + 290 + return max; 252 291 } 253 292 254 293 /* sdhci_cmu_get_min_clock - callback to get minimal supported clock value. */ 255 294 static unsigned int sdhci_cmu_get_min_clock(struct sdhci_host *host) 256 295 { 257 296 struct sdhci_s3c *ourhost = to_s3c(host); 297 + unsigned long rate, min = ULONG_MAX; 298 + int src; 258 299 259 - /* 260 - * initial clock can be in the frequency range of 261 - * 100KHz-400KHz, so we set it as max value. 262 - */ 263 - return clk_round_rate(ourhost->clk_bus[ourhost->cur_clk], 400000); 300 + for (src = 0; src < MAX_BUS_CLK; src++) { 301 + struct clk *clk; 302 + 303 + clk = ourhost->clk_bus[src]; 304 + if (IS_ERR(clk)) 305 + continue; 306 + 307 + rate = clk_round_rate(clk, 0); 308 + if (rate < min) 309 + min = rate; 310 + } 311 + 312 + return min; 264 313 } 265 314 266 315 /* sdhci_cmu_set_clock - callback on clock change.*/ ··· 551 552 sc->host = host; 552 553 sc->pdev = pdev; 553 554 sc->pdata = pdata; 555 + sc->cur_clk = -1; 554 556 555 557 platform_set_drvdata(pdev, host); 556 558 ··· 566 566 clk_prepare_enable(sc->clk_io); 567 567 568 568 for (clks = 0, ptr = 0; ptr < MAX_BUS_CLK; ptr++) { 569 - struct clk *clk; 570 569 char name[14]; 571 570 572 571 snprintf(name, 14, "mmc_busclk.%d", ptr); 573 - clk = devm_clk_get(dev, name); 574 - if (IS_ERR(clk)) 572 + sc->clk_bus[ptr] = devm_clk_get(dev, name); 573 + if (IS_ERR(sc->clk_bus[ptr])) 575 574 continue; 576 575 577 576 clks++; 578 - sc->clk_bus[ptr] = clk; 579 - 580 - /* 581 - * save current clock index to know which clock bus 582 - * is used later in overriding functions. 
583 - */ 584 - sc->cur_clk = ptr; 577 + sc->clk_rates[ptr] = clk_get_rate(sc->clk_bus[ptr]); 585 578 586 579 dev_info(dev, "clock source %d: %s (%ld Hz)\n", 587 - ptr, name, clk_get_rate(clk)); 580 + ptr, name, sc->clk_rates[ptr]); 588 581 } 589 582 590 583 if (clks == 0) { ··· 585 592 ret = -ENOENT; 586 593 goto err_no_busclks; 587 594 } 588 - 589 - #ifndef CONFIG_PM_RUNTIME 590 - clk_prepare_enable(sc->clk_bus[sc->cur_clk]); 591 - #endif 592 595 593 596 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 594 597 host->ioaddr = devm_ioremap_resource(&pdev->dev, res); ··· 698 709 return 0; 699 710 700 711 err_req_regs: 701 - #ifndef CONFIG_PM_RUNTIME 702 - clk_disable_unprepare(sc->clk_bus[sc->cur_clk]); 703 - #endif 704 - 705 712 err_no_busclks: 706 713 clk_disable_unprepare(sc->clk_io); 707 714 ··· 728 743 pm_runtime_dont_use_autosuspend(&pdev->dev); 729 744 pm_runtime_disable(&pdev->dev); 730 745 731 - #ifndef CONFIG_PM_RUNTIME 732 - clk_disable_unprepare(sc->clk_bus[sc->cur_clk]); 733 - #endif 734 746 clk_disable_unprepare(sc->clk_io); 735 747 736 748 sdhci_free_host(host); ··· 761 779 762 780 ret = sdhci_runtime_suspend_host(host); 763 781 764 - clk_disable_unprepare(ourhost->clk_bus[ourhost->cur_clk]); 782 + if (ourhost->cur_clk >= 0) 783 + clk_disable_unprepare(ourhost->clk_bus[ourhost->cur_clk]); 765 784 clk_disable_unprepare(busclk); 766 785 return ret; 767 786 } ··· 775 792 int ret; 776 793 777 794 clk_prepare_enable(busclk); 778 - clk_prepare_enable(ourhost->clk_bus[ourhost->cur_clk]); 795 + if (ourhost->cur_clk >= 0) 796 + clk_prepare_enable(ourhost->clk_bus[ourhost->cur_clk]); 779 797 ret = sdhci_runtime_resume_host(host); 780 798 return ret; 781 799 }
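The reworked sdhci_s3c_consider_clock() above replaces the `div *= 2` loop with a shift, since the controller can only divide a source clock by powers of two up to 256. A small standalone sketch of that search follows; note this is illustrative only, and returns the achieved rate (or -1 when even rate/256 is too fast), whereas the driver returns the deviation `wanted - rate` so callers can compare sources.

```c
#include <assert.h>

/* Sketch of the divider search: find the smallest power-of-two
 * divisor (up to 256, i.e. shift 0..8) whose result does not
 * exceed the wanted rate. Returns -1 if none qualifies. */
static long best_divided_rate(unsigned long rate, unsigned long wanted)
{
	int shift;

	for (shift = 0; shift <= 8; shift++) {
		if ((rate >> shift) <= wanted)
			return (long)(rate >> shift);
	}
	return -1; /* even rate / 256 still exceeds wanted */
}
```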
+49 -154
drivers/mmc/host/sdhci-spear.c
··· 27 27 #include <linux/slab.h> 28 28 #include <linux/mmc/host.h> 29 29 #include <linux/mmc/sdhci-spear.h> 30 + #include <linux/mmc/slot-gpio.h> 30 31 #include <linux/io.h> 31 32 #include "sdhci.h" 32 33 ··· 40 39 static const struct sdhci_ops sdhci_pltfm_ops = { 41 40 /* Nothing to do for now. */ 42 41 }; 43 - 44 - /* gpio card detection interrupt handler */ 45 - static irqreturn_t sdhci_gpio_irq(int irq, void *dev_id) 46 - { 47 - struct platform_device *pdev = dev_id; 48 - struct sdhci_host *host = platform_get_drvdata(pdev); 49 - struct spear_sdhci *sdhci = dev_get_platdata(&pdev->dev); 50 - unsigned long gpio_irq_type; 51 - int val; 52 - 53 - val = gpio_get_value(sdhci->data->card_int_gpio); 54 - 55 - /* val == 1 -> card removed, val == 0 -> card inserted */ 56 - /* if card removed - set irq for low level, else vice versa */ 57 - gpio_irq_type = val ? IRQF_TRIGGER_LOW : IRQF_TRIGGER_HIGH; 58 - irq_set_irq_type(irq, gpio_irq_type); 59 - 60 - if (sdhci->data->card_power_gpio >= 0) { 61 - if (!sdhci->data->power_always_enb) { 62 - /* if card inserted, give power, otherwise remove it */ 63 - val = sdhci->data->power_active_high ? 
!val : val ; 64 - gpio_set_value(sdhci->data->card_power_gpio, val); 65 - } 66 - } 67 - 68 - /* inform sdhci driver about card insertion/removal */ 69 - tasklet_schedule(&host->card_tasklet); 70 - 71 - return IRQ_HANDLED; 72 - } 73 42 74 43 #ifdef CONFIG_OF 75 44 static struct sdhci_plat_data *sdhci_probe_config_dt(struct platform_device *pdev) ··· 55 84 /* If pdata is required */ 56 85 if (cd_gpio != -1) { 57 86 pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); 58 - if (!pdata) { 87 + if (!pdata) 59 88 dev_err(&pdev->dev, "DT: kzalloc failed\n"); 60 - return ERR_PTR(-ENOMEM); 61 - } 89 + else 90 + pdata->card_int_gpio = cd_gpio; 62 91 } 63 - 64 - pdata->card_int_gpio = cd_gpio; 65 92 66 93 return pdata; 67 94 } ··· 76 107 struct sdhci_host *host; 77 108 struct resource *iomem; 78 109 struct spear_sdhci *sdhci; 110 + struct device *dev; 79 111 int ret; 80 112 81 - iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 82 - if (!iomem) { 83 - ret = -ENOMEM; 84 - dev_dbg(&pdev->dev, "memory resource not defined\n"); 85 - goto err; 86 - } 87 - 88 - if (!devm_request_mem_region(&pdev->dev, iomem->start, 89 - resource_size(iomem), "spear-sdhci")) { 90 - ret = -EBUSY; 91 - dev_dbg(&pdev->dev, "cannot request region\n"); 92 - goto err; 93 - } 94 - 95 - sdhci = devm_kzalloc(&pdev->dev, sizeof(*sdhci), GFP_KERNEL); 96 - if (!sdhci) { 97 - ret = -ENOMEM; 113 + dev = pdev->dev.parent ? 
pdev->dev.parent : &pdev->dev; 114 + host = sdhci_alloc_host(dev, sizeof(*sdhci)); 115 + if (IS_ERR(host)) { 116 + ret = PTR_ERR(host); 98 117 dev_dbg(&pdev->dev, "cannot allocate memory for sdhci\n"); 99 118 goto err; 100 119 } 101 120 121 + iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 122 + host->ioaddr = devm_ioremap_resource(&pdev->dev, iomem); 123 + if (IS_ERR(host->ioaddr)) { 124 + ret = PTR_ERR(host->ioaddr); 125 + dev_dbg(&pdev->dev, "unable to map iomem: %d\n", ret); 126 + goto err_host; 127 + } 128 + 129 + host->hw_name = "sdhci"; 130 + host->ops = &sdhci_pltfm_ops; 131 + host->irq = platform_get_irq(pdev, 0); 132 + host->quirks = SDHCI_QUIRK_BROKEN_ADMA; 133 + 134 + sdhci = sdhci_priv(host); 135 + 102 136 /* clk enable */ 103 - sdhci->clk = clk_get(&pdev->dev, NULL); 137 + sdhci->clk = devm_clk_get(&pdev->dev, NULL); 104 138 if (IS_ERR(sdhci->clk)) { 105 139 ret = PTR_ERR(sdhci->clk); 106 140 dev_dbg(&pdev->dev, "Error getting clock\n"); 107 - goto err; 141 + goto err_host; 108 142 } 109 143 110 144 ret = clk_prepare_enable(sdhci->clk); 111 145 if (ret) { 112 146 dev_dbg(&pdev->dev, "Error enabling clock\n"); 113 - goto put_clk; 147 + goto err_host; 114 148 } 115 149 116 150 ret = clk_set_rate(sdhci->clk, 50000000); ··· 125 153 sdhci->data = sdhci_probe_config_dt(pdev); 126 154 if (IS_ERR(sdhci->data)) { 127 155 dev_err(&pdev->dev, "DT: Failed to get pdata\n"); 128 - return -ENODEV; 156 + goto disable_clk; 129 157 } 130 158 } else { 131 159 sdhci->data = dev_get_platdata(&pdev->dev); 132 160 } 133 161 134 - pdev->dev.platform_data = sdhci; 135 - 136 - if (pdev->dev.parent) 137 - host = sdhci_alloc_host(pdev->dev.parent, 0); 138 - else 139 - host = sdhci_alloc_host(&pdev->dev, 0); 140 - 141 - if (IS_ERR(host)) { 142 - ret = PTR_ERR(host); 143 - dev_dbg(&pdev->dev, "error allocating host\n"); 144 - goto disable_clk; 145 - } 146 - 147 - host->hw_name = "sdhci"; 148 - host->ops = &sdhci_pltfm_ops; 149 - host->irq = platform_get_irq(pdev, 0); 150 - 
host->quirks = SDHCI_QUIRK_BROKEN_ADMA; 151 - 152 - host->ioaddr = devm_ioremap(&pdev->dev, iomem->start, 153 - resource_size(iomem)); 154 - if (!host->ioaddr) { 155 - ret = -ENOMEM; 156 - dev_dbg(&pdev->dev, "failed to remap registers\n"); 157 - goto free_host; 162 + /* 163 + * It is optional to use GPIOs for sdhci card detection. If 164 + * sdhci->data is NULL, then use original sdhci lines otherwise 165 + * GPIO lines. We use the built-in GPIO support for this. 166 + */ 167 + if (sdhci->data && sdhci->data->card_int_gpio >= 0) { 168 + ret = mmc_gpio_request_cd(host->mmc, 169 + sdhci->data->card_int_gpio, 0); 170 + if (ret < 0) { 171 + dev_dbg(&pdev->dev, 172 + "failed to request card-detect gpio%d\n", 173 + sdhci->data->card_int_gpio); 174 + goto disable_clk; 175 + } 158 176 } 159 177 160 178 ret = sdhci_add_host(host); 161 179 if (ret) { 162 180 dev_dbg(&pdev->dev, "error adding host\n"); 163 - goto free_host; 181 + goto disable_clk; 164 182 } 165 183 166 184 platform_set_drvdata(pdev, host); 167 185 168 - /* 169 - * It is optional to use GPIOs for sdhci Power control & sdhci card 170 - * interrupt detection. If sdhci->data is NULL, then use original sdhci 171 - * lines otherwise GPIO lines. 
172 - * If GPIO is selected for power control, then power should be disabled 173 - * after card removal and should be enabled when card insertion 174 - * interrupt occurs 175 - */ 176 - if (!sdhci->data) 177 - return 0; 178 - 179 - if (sdhci->data->card_power_gpio >= 0) { 180 - int val = 0; 181 - 182 - ret = devm_gpio_request(&pdev->dev, 183 - sdhci->data->card_power_gpio, "sdhci"); 184 - if (ret < 0) { 185 - dev_dbg(&pdev->dev, "gpio request fail: %d\n", 186 - sdhci->data->card_power_gpio); 187 - goto set_drvdata; 188 - } 189 - 190 - if (sdhci->data->power_always_enb) 191 - val = sdhci->data->power_active_high; 192 - else 193 - val = !sdhci->data->power_active_high; 194 - 195 - ret = gpio_direction_output(sdhci->data->card_power_gpio, val); 196 - if (ret) { 197 - dev_dbg(&pdev->dev, "gpio set direction fail: %d\n", 198 - sdhci->data->card_power_gpio); 199 - goto set_drvdata; 200 - } 201 - } 202 - 203 - if (sdhci->data->card_int_gpio >= 0) { 204 - ret = devm_gpio_request(&pdev->dev, sdhci->data->card_int_gpio, 205 - "sdhci"); 206 - if (ret < 0) { 207 - dev_dbg(&pdev->dev, "gpio request fail: %d\n", 208 - sdhci->data->card_int_gpio); 209 - goto set_drvdata; 210 - } 211 - 212 - ret = gpio_direction_input(sdhci->data->card_int_gpio); 213 - if (ret) { 214 - dev_dbg(&pdev->dev, "gpio set direction fail: %d\n", 215 - sdhci->data->card_int_gpio); 216 - goto set_drvdata; 217 - } 218 - ret = devm_request_irq(&pdev->dev, 219 - gpio_to_irq(sdhci->data->card_int_gpio), 220 - sdhci_gpio_irq, IRQF_TRIGGER_LOW, 221 - mmc_hostname(host->mmc), pdev); 222 - if (ret) { 223 - dev_dbg(&pdev->dev, "gpio request irq fail: %d\n", 224 - sdhci->data->card_int_gpio); 225 - goto set_drvdata; 226 - } 227 - 228 - } 229 - 230 186 return 0; 231 187 232 - set_drvdata: 233 - sdhci_remove_host(host, 1); 234 - free_host: 235 - sdhci_free_host(host); 236 188 disable_clk: 237 189 clk_disable_unprepare(sdhci->clk); 238 - put_clk: 239 - clk_put(sdhci->clk); 190 + err_host: 191 + sdhci_free_host(host); 
240 192 err: 241 193 dev_err(&pdev->dev, "spear-sdhci probe failed: %d\n", ret); 242 194 return ret; ··· 169 273 static int sdhci_remove(struct platform_device *pdev) 170 274 { 171 275 struct sdhci_host *host = platform_get_drvdata(pdev); 172 - struct spear_sdhci *sdhci = dev_get_platdata(&pdev->dev); 276 + struct spear_sdhci *sdhci = sdhci_priv(host); 173 277 int dead = 0; 174 278 u32 scratch; 175 279 ··· 178 282 dead = 1; 179 283 180 284 sdhci_remove_host(host, dead); 181 - sdhci_free_host(host); 182 285 clk_disable_unprepare(sdhci->clk); 183 - clk_put(sdhci->clk); 286 + sdhci_free_host(host); 184 287 185 288 return 0; 186 289 } ··· 188 293 static int sdhci_suspend(struct device *dev) 189 294 { 190 295 struct sdhci_host *host = dev_get_drvdata(dev); 191 - struct spear_sdhci *sdhci = dev_get_platdata(dev); 296 + struct spear_sdhci *sdhci = sdhci_priv(host); 192 297 int ret; 193 298 194 299 ret = sdhci_suspend_host(host); ··· 201 306 static int sdhci_resume(struct device *dev) 202 307 { 203 308 struct sdhci_host *host = dev_get_drvdata(dev); 204 - struct spear_sdhci *sdhci = dev_get_platdata(dev); 309 + struct spear_sdhci *sdhci = sdhci_priv(host); 205 310 int ret; 206 311 207 312 ret = clk_enable(sdhci->clk);
+11 -13
drivers/mmc/host/sdhci.c
··· 675 675 return 0xE; 676 676 677 677 /* Unspecified timeout, assume max */ 678 - if (!data && !cmd->cmd_timeout_ms) 678 + if (!data && !cmd->busy_timeout) 679 679 return 0xE; 680 680 681 681 /* timeout in us */ 682 682 if (!data) 683 - target_timeout = cmd->cmd_timeout_ms * 1000; 683 + target_timeout = cmd->busy_timeout * 1000; 684 684 else { 685 685 target_timeout = data->timeout_ns / 1000; 686 686 if (host->clock) ··· 1019 1019 } 1020 1020 1021 1021 timeout = jiffies; 1022 - if (!cmd->data && cmd->cmd_timeout_ms > 9000) 1023 - timeout += DIV_ROUND_UP(cmd->cmd_timeout_ms, 1000) * HZ + HZ; 1022 + if (!cmd->data && cmd->busy_timeout > 9000) 1023 + timeout += DIV_ROUND_UP(cmd->busy_timeout, 1000) * HZ + HZ; 1024 1024 else 1025 1025 timeout += 10 * HZ; 1026 1026 mod_timer(&host->timer, timeout); ··· 2026 2026 host->tuning_count * HZ); 2027 2027 /* Tuning mode 1 limits the maximum data length to 4MB */ 2028 2028 mmc->max_blk_count = (4 * 1024 * 1024) / mmc->max_blk_size; 2029 - } else { 2029 + } else if (host->flags & SDHCI_USING_RETUNING_TIMER) { 2030 2030 host->flags &= ~SDHCI_NEEDS_RETUNING; 2031 2031 /* Reload the new initial value for timer */ 2032 - if (host->tuning_mode == SDHCI_TUNING_MODE_1) 2033 - mod_timer(&host->tuning_timer, jiffies + 2034 - host->tuning_count * HZ); 2032 + mod_timer(&host->tuning_timer, jiffies + 2033 + host->tuning_count * HZ); 2035 2034 } 2036 2035 2037 2036 /* ··· 2433 2434 2434 2435 if (host->runtime_suspended) { 2435 2436 spin_unlock(&host->lock); 2436 - pr_warning("%s: got irq while runtime suspended\n", 2437 - mmc_hostname(host->mmc)); 2438 - return IRQ_HANDLED; 2437 + return IRQ_NONE; 2439 2438 } 2440 2439 2441 2440 intmask = sdhci_readl(host, SDHCI_INT_STATUS); ··· 2938 2941 if (host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK) 2939 2942 host->timeout_clk = mmc->f_max / 1000; 2940 2943 2941 - mmc->max_discard_to = (1 << 27) / host->timeout_clk; 2944 + mmc->max_busy_timeout = (1 << 27) / host->timeout_clk; 2942 2945 2943 2946 
mmc->caps |= MMC_CAP_SDIO_IRQ | MMC_CAP_ERASE | MMC_CAP_CMD23; 2944 2947 ··· 3017 3020 } else if (caps[1] & SDHCI_SUPPORT_SDR50) 3018 3021 mmc->caps |= MMC_CAP_UHS_SDR50; 3019 3022 3020 - if (caps[1] & SDHCI_SUPPORT_DDR50) 3023 + if ((caps[1] & SDHCI_SUPPORT_DDR50) && 3024 + !(host->quirks2 & SDHCI_QUIRK2_BROKEN_DDR50)) 3021 3025 mmc->caps |= MMC_CAP_UHS_DDR50; 3022 3026 3023 3027 /* Does the host need tuning for SDR50? */
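On the `max_discard_to` to `max_busy_timeout` rename above: the computed value is unchanged. It is the longest busy wait the host can time in hardware, 2^27 ticks of the timeout clock; with `timeout_clk` kept in kHz as sdhci.c does, the division yields milliseconds:

```c
#include <assert.h>

/* Sketch of the mmc->max_busy_timeout computation above: the SDHCI
 * timeout counter maxes out at 2^27 timeout-clock cycles, and with
 * the clock rate in kHz the quotient is a duration in milliseconds. */
static unsigned int max_busy_timeout_ms(unsigned int timeout_clk_khz)
{
	return (1u << 27) / timeout_clk_khz;
}
```

For example, a 48 MHz timeout clock gives roughly 2.8 seconds, which is why operations that can exceed this (long erases, for instance) must be split up or polled in software.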
+33 -17
drivers/mmc/host/sh_mobile_sdhi.c
···
 
 struct sh_mobile_sdhi_of_data {
 	unsigned long tmio_flags;
+	unsigned long capabilities;
+	unsigned long capabilities2;
 };
 
 static const struct sh_mobile_sdhi_of_data sh_mobile_sdhi_of_cfg[] = {
···
 		.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT,
 	},
 };
+
+static const struct sh_mobile_sdhi_of_data of_rcar_gen1_compatible = {
+	.tmio_flags	= TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE,
+	.capabilities	= MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
+};
+
+static const struct sh_mobile_sdhi_of_data of_rcar_gen2_compatible = {
+	.tmio_flags	= TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE,
+	.capabilities	= MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
+	.capabilities2	= MMC_CAP2_NO_MULTI_READ,
+};
+
+static const struct of_device_id sh_mobile_sdhi_of_match[] = {
+	{ .compatible = "renesas,sdhi-shmobile" },
+	{ .compatible = "renesas,sdhi-sh7372" },
+	{ .compatible = "renesas,sdhi-sh73a0", .data = &sh_mobile_sdhi_of_cfg[0], },
+	{ .compatible = "renesas,sdhi-r8a73a4", .data = &sh_mobile_sdhi_of_cfg[0], },
+	{ .compatible = "renesas,sdhi-r8a7740", .data = &sh_mobile_sdhi_of_cfg[0], },
+	{ .compatible = "renesas,sdhi-r8a7778", .data = &of_rcar_gen1_compatible, },
+	{ .compatible = "renesas,sdhi-r8a7779", .data = &of_rcar_gen1_compatible, },
+	{ .compatible = "renesas,sdhi-r8a7790", .data = &of_rcar_gen2_compatible, },
+	{ .compatible = "renesas,sdhi-r8a7791", .data = &of_rcar_gen2_compatible, },
+	{},
+};
+MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match);
 
 struct sh_mobile_sdhi {
 	struct clk *clk;
···
 static const struct sh_mobile_sdhi_ops sdhi_ops = {
 	.cd_wakeup = sh_mobile_sdhi_cd_wakeup,
 };
-
-static const struct of_device_id sh_mobile_sdhi_of_match[] = {
-	{ .compatible = "renesas,sdhi-shmobile" },
-	{ .compatible = "renesas,sdhi-sh7372" },
-	{ .compatible = "renesas,sdhi-sh73a0", .data = &sh_mobile_sdhi_of_cfg[0], },
-	{ .compatible = "renesas,sdhi-r8a73a4", .data = &sh_mobile_sdhi_of_cfg[0], },
-	{ .compatible = "renesas,sdhi-r8a7740", .data = &sh_mobile_sdhi_of_cfg[0], },
-	{ .compatible = "renesas,sdhi-r8a7778", .data = &sh_mobile_sdhi_of_cfg[0], },
-	{ .compatible = "renesas,sdhi-r8a7779", .data = &sh_mobile_sdhi_of_cfg[0], },
-	{ .compatible = "renesas,sdhi-r8a7790", .data = &sh_mobile_sdhi_of_cfg[0], },
-	{},
-};
-MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match);
 
 static int sh_mobile_sdhi_probe(struct platform_device *pdev)
 {
···
 	if (of_id && of_id->data) {
 		const struct sh_mobile_sdhi_of_data *of_data = of_id->data;
 		mmc_data->flags |= of_data->tmio_flags;
+		mmc_data->capabilities |= of_data->capabilities;
+		mmc_data->capabilities2 |= of_data->capabilities2;
 	}
 
 	/* SD control register space size is 0x100, 0x200 for bus_shift=1 */
···
 }
 
 static const struct dev_pm_ops tmio_mmc_dev_pm_ops = {
-	.suspend	= tmio_mmc_host_suspend,
-	.resume		= tmio_mmc_host_resume,
-	.runtime_suspend = tmio_mmc_host_runtime_suspend,
-	.runtime_resume	= tmio_mmc_host_runtime_resume,
+	SET_SYSTEM_SLEEP_PM_OPS(tmio_mmc_host_suspend, tmio_mmc_host_resume)
+	SET_RUNTIME_PM_OPS(tmio_mmc_host_runtime_suspend,
+			tmio_mmc_host_runtime_resume,
+			NULL)
 };
 
 static struct platform_driver sh_mobile_sdhi_driver = {
+16 -14
drivers/mmc/host/tmio_mmc.c
···
 
 #include "tmio_mmc.h"
 
-#ifdef CONFIG_PM
-static int tmio_mmc_suspend(struct platform_device *dev, pm_message_t state)
+#ifdef CONFIG_PM_SLEEP
+static int tmio_mmc_suspend(struct device *dev)
 {
-	const struct mfd_cell *cell = mfd_get_cell(dev);
+	struct platform_device *pdev = to_platform_device(dev);
+	const struct mfd_cell *cell = mfd_get_cell(pdev);
 	int ret;
 
-	ret = tmio_mmc_host_suspend(&dev->dev);
+	ret = tmio_mmc_host_suspend(dev);
 
 	/* Tell MFD core it can disable us now.*/
 	if (!ret && cell->disable)
-		cell->disable(dev);
+		cell->disable(pdev);
 
 	return ret;
 }
 
-static int tmio_mmc_resume(struct platform_device *dev)
+static int tmio_mmc_resume(struct device *dev)
 {
-	const struct mfd_cell *cell = mfd_get_cell(dev);
+	struct platform_device *pdev = to_platform_device(dev);
+	const struct mfd_cell *cell = mfd_get_cell(pdev);
 	int ret = 0;
 
 	/* Tell the MFD core we are ready to be enabled */
 	if (cell->resume)
-		ret = cell->resume(dev);
+		ret = cell->resume(pdev);
 
 	if (!ret)
-		ret = tmio_mmc_host_resume(&dev->dev);
+		ret = tmio_mmc_host_resume(dev);
 
 	return ret;
 }
-#else
-#define tmio_mmc_suspend NULL
-#define tmio_mmc_resume NULL
 #endif
 
 static int tmio_mmc_probe(struct platform_device *pdev)
···
 
 /* ------------------- device registration ----------------------- */
 
+static const struct dev_pm_ops tmio_mmc_dev_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(tmio_mmc_suspend, tmio_mmc_resume)
+};
+
 static struct platform_driver tmio_mmc_driver = {
 	.driver = {
 		.name = "tmio-mmc",
 		.owner = THIS_MODULE,
+		.pm = &tmio_mmc_dev_pm_ops,
 	},
 	.probe = tmio_mmc_probe,
 	.remove = tmio_mmc_remove,
-	.suspend = tmio_mmc_suspend,
-	.resume = tmio_mmc_resume,
 };
 
 module_platform_driver(tmio_mmc_driver);
+3 -4
drivers/mmc/host/tmio_mmc.h
···
 }
 #endif
 
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
 int tmio_mmc_host_suspend(struct device *dev);
 int tmio_mmc_host_resume(struct device *dev);
-#else
-#define tmio_mmc_host_suspend NULL
-#define tmio_mmc_host_resume NULL
 #endif
 
+#ifdef CONFIG_PM_RUNTIME
 int tmio_mmc_host_runtime_suspend(struct device *dev);
 int tmio_mmc_host_runtime_resume(struct device *dev);
+#endif
 
 static inline u16 sd_ctrl_read16(struct tmio_mmc_host *host, int addr)
 {
+4 -3
drivers/mmc/host/tmio_mmc_pio.c
···
 }
 EXPORT_SYMBOL(tmio_mmc_host_remove);
 
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
 int tmio_mmc_host_suspend(struct device *dev)
 {
 	struct mmc_host *mmc = dev_get_drvdata(dev);
···
 	return 0;
 }
 EXPORT_SYMBOL(tmio_mmc_host_resume);
+#endif
 
-#endif	/* CONFIG_PM */
-
+#ifdef CONFIG_PM_RUNTIME
 int tmio_mmc_host_runtime_suspend(struct device *dev)
 {
 	return 0;
···
 	return 0;
 }
 EXPORT_SYMBOL(tmio_mmc_host_runtime_resume);
+#endif
 
 MODULE_LICENSE("GPL v2");
+1 -1
drivers/mmc/host/ushc.c
···
 		ret = -ENOMEM;
 		goto err;
 	}
-	ushc->csw = kzalloc(sizeof(struct ushc_cbw), GFP_KERNEL);
+	ushc->csw = kzalloc(sizeof(struct ushc_csw), GFP_KERNEL);
 	if (ushc->csw == NULL) {
 		ret = -ENOMEM;
 		goto err;
+3 -1
drivers/mmc/host/wmt-sdmmc.c
···
 	struct device_node *np = pdev->dev.of_node;
 	const struct of_device_id *of_id =
 		of_match_device(wmt_mci_dt_ids, &pdev->dev);
-	const struct wmt_mci_caps *wmt_caps = of_id->data;
+	const struct wmt_mci_caps *wmt_caps;
 	int ret;
 	int regular_irq, dma_irq;
 
···
 		dev_err(&pdev->dev, "Controller capabilities data missing\n");
 		return -EFAULT;
 	}
+
+	wmt_caps = of_id->data;
 
 	if (!np) {
 		dev_err(&pdev->dev, "Missing SDMMC description in devicetree\n");
+9
drivers/regulator/Kconfig
···
 	  on the muxing. This is handled automatically in the driver by
 	  reading the mux info from OTP.
 
+config REGULATOR_PBIAS
+	tristate "PBIAS OMAP regulator driver"
+	depends on (ARCH_OMAP || COMPILE_TEST) && MFD_SYSCON
+	help
+	  Say y here to support pbias regulator for mmc1:SD card i/o
+	  on OMAP SoCs.
+	  This driver provides support for OMAP pbias modelled
+	  regulators.
+
 config REGULATOR_PCAP
 	tristate "Motorola PCAP2 regulator driver"
 	depends on EZX_PCAP
+1
drivers/regulator/Makefile
···
 obj-$(CONFIG_REGULATOR_PALMAS) += palmas-regulator.o
 obj-$(CONFIG_REGULATOR_PFUZE100) += pfuze100-regulator.o
 obj-$(CONFIG_REGULATOR_TPS51632) += tps51632-regulator.o
+obj-$(CONFIG_REGULATOR_PBIAS) += pbias-regulator.o
 obj-$(CONFIG_REGULATOR_PCAP) += pcap-regulator.o
 obj-$(CONFIG_REGULATOR_PCF50633) += pcf50633-regulator.o
 obj-$(CONFIG_REGULATOR_RC5T583) += rc5t583-regulator.o
+255
drivers/regulator/pbias-regulator.c
···
+/*
+ * pbias-regulator.c
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com/
+ * Author: Balaji T K <balajitk@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/mfd/syscon.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/driver.h>
+#include <linux/regulator/machine.h>
+#include <linux/regulator/of_regulator.h>
+#include <linux/regmap.h>
+#include <linux/slab.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+
+struct pbias_reg_info {
+	u32 enable;
+	u32 enable_mask;
+	u32 vmode;
+	unsigned int enable_time;
+	char *name;
+};
+
+struct pbias_regulator_data {
+	struct regulator_desc desc;
+	void __iomem *pbias_addr;
+	unsigned int pbias_reg;
+	struct regulator_dev *dev;
+	struct regmap *syscon;
+	const struct pbias_reg_info *info;
+	int voltage;
+};
+
+static int pbias_regulator_set_voltage(struct regulator_dev *dev,
+			int min_uV, int max_uV, unsigned *selector)
+{
+	struct pbias_regulator_data *data = rdev_get_drvdata(dev);
+	const struct pbias_reg_info *info = data->info;
+	int ret, vmode;
+
+	if (min_uV <= 1800000)
+		vmode = 0;
+	else if (min_uV > 1800000)
+		vmode = info->vmode;
+
+	ret = regmap_update_bits(data->syscon, data->pbias_reg,
+						info->vmode, vmode);
+
+	return ret;
+}
+
+static int pbias_regulator_get_voltage(struct regulator_dev *rdev)
+{
+	struct pbias_regulator_data *data = rdev_get_drvdata(rdev);
+	const struct pbias_reg_info *info = data->info;
+	int value, voltage;
+
+	regmap_read(data->syscon, data->pbias_reg, &value);
+	value &= info->vmode;
+
+	voltage = value ? 3000000 : 1800000;
+
+	return voltage;
+}
+
+static int pbias_regulator_enable(struct regulator_dev *rdev)
+{
+	struct pbias_regulator_data *data = rdev_get_drvdata(rdev);
+	const struct pbias_reg_info *info = data->info;
+	int ret;
+
+	ret = regmap_update_bits(data->syscon, data->pbias_reg,
+					info->enable_mask, info->enable);
+
+	return ret;
+}
+
+static int pbias_regulator_disable(struct regulator_dev *rdev)
+{
+	struct pbias_regulator_data *data = rdev_get_drvdata(rdev);
+	const struct pbias_reg_info *info = data->info;
+	int ret;
+
+	ret = regmap_update_bits(data->syscon, data->pbias_reg,
+					info->enable_mask, 0);
+	return ret;
+}
+
+static int pbias_regulator_is_enable(struct regulator_dev *rdev)
+{
+	struct pbias_regulator_data *data = rdev_get_drvdata(rdev);
+	const struct pbias_reg_info *info = data->info;
+	int value;
+
+	regmap_read(data->syscon, data->pbias_reg, &value);
+
+	return (value & info->enable_mask) == info->enable_mask;
+}
+
+static struct regulator_ops pbias_regulator_voltage_ops = {
+	.set_voltage	= pbias_regulator_set_voltage,
+	.get_voltage	= pbias_regulator_get_voltage,
+	.enable		= pbias_regulator_enable,
+	.disable	= pbias_regulator_disable,
+	.is_enabled	= pbias_regulator_is_enable,
+};
+
+static const struct pbias_reg_info pbias_mmc_omap2430 = {
+	.enable = BIT(1),
+	.enable_mask = BIT(1),
+	.vmode = BIT(0),
+	.enable_time = 100,
+	.name = "pbias_mmc_omap2430"
+};
+
+static const struct pbias_reg_info pbias_sim_omap3 = {
+	.enable = BIT(9),
+	.enable_mask = BIT(9),
+	.vmode = BIT(8),
+	.enable_time = 100,
+	.name = "pbias_sim_omap3"
+};
+
+static const struct pbias_reg_info pbias_mmc_omap4 = {
+	.enable = BIT(26) | BIT(22),
+	.enable_mask = BIT(26) | BIT(25) | BIT(22),
+	.vmode = BIT(21),
+	.enable_time = 100,
+	.name = "pbias_mmc_omap4"
+};
+
+static const struct pbias_reg_info pbias_mmc_omap5 = {
+	.enable = BIT(27) | BIT(26),
+	.enable_mask = BIT(27) | BIT(25) | BIT(26),
+	.vmode = BIT(21),
+	.enable_time = 100,
+	.name = "pbias_mmc_omap5"
+};
+
+static struct of_regulator_match pbias_matches[] = {
+	{ .name = "pbias_mmc_omap2430", .driver_data = (void *)&pbias_mmc_omap2430},
+	{ .name = "pbias_sim_omap3", .driver_data = (void *)&pbias_sim_omap3},
+	{ .name = "pbias_mmc_omap4", .driver_data = (void *)&pbias_mmc_omap4},
+	{ .name = "pbias_mmc_omap5", .driver_data = (void *)&pbias_mmc_omap5},
+};
+#define PBIAS_NUM_REGS	ARRAY_SIZE(pbias_matches)
+
+static const struct of_device_id pbias_of_match[] = {
+	{ .compatible = "ti,pbias-omap", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, pbias_of_match);
+
+static int pbias_regulator_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct pbias_regulator_data *drvdata;
+	struct resource *res;
+	struct regulator_config cfg = { };
+	struct regmap *syscon;
+	const struct pbias_reg_info *info;
+	int ret = 0;
+	int count, idx, data_idx = 0;
+
+	count = of_regulator_match(&pdev->dev, np, pbias_matches,
+						PBIAS_NUM_REGS);
+	if (count < 0)
+		return count;
+
+	drvdata = devm_kzalloc(&pdev->dev, sizeof(struct pbias_regulator_data)
+			       * count, GFP_KERNEL);
+	if (drvdata == NULL) {
+		dev_err(&pdev->dev, "Failed to allocate device data\n");
+		return -ENOMEM;
+	}
+
+	syscon = syscon_regmap_lookup_by_phandle(np, "syscon");
+	if (IS_ERR(syscon))
+		return PTR_ERR(syscon);
+
+	cfg.dev = &pdev->dev;
+
+	for (idx = 0; idx < PBIAS_NUM_REGS && data_idx < count; idx++) {
+		if (!pbias_matches[idx].init_data ||
+			!pbias_matches[idx].of_node)
+			continue;
+
+		info = pbias_matches[idx].driver_data;
+		if (!info)
+			return -ENODEV;
+
+		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+		if (!res)
+			return -EINVAL;
+
+		drvdata[data_idx].pbias_reg = res->start;
+		drvdata[data_idx].syscon = syscon;
+		drvdata[data_idx].info = info;
+		drvdata[data_idx].desc.name = info->name;
+		drvdata[data_idx].desc.owner = THIS_MODULE;
+		drvdata[data_idx].desc.type = REGULATOR_VOLTAGE;
+		drvdata[data_idx].desc.ops = &pbias_regulator_voltage_ops;
+		drvdata[data_idx].desc.n_voltages = 2;
+		drvdata[data_idx].desc.enable_time = info->enable_time;
+
+		cfg.init_data = pbias_matches[idx].init_data;
+		cfg.driver_data = &drvdata[data_idx];
+		cfg.of_node = pbias_matches[idx].of_node;
+
+		drvdata[data_idx].dev = devm_regulator_register(&pdev->dev,
+					&drvdata[data_idx].desc, &cfg);
+		if (IS_ERR(drvdata[data_idx].dev)) {
+			ret = PTR_ERR(drvdata[data_idx].dev);
+			dev_err(&pdev->dev,
+				"Failed to register regulator: %d\n", ret);
+			goto err_regulator;
+		}
+		data_idx++;
+	}
+
+	platform_set_drvdata(pdev, drvdata);
+
+err_regulator:
+	return ret;
+}
+
+static struct platform_driver pbias_regulator_driver = {
+	.probe		= pbias_regulator_probe,
+	.driver		= {
+		.name		= "pbias-regulator",
+		.owner		= THIS_MODULE,
+		.of_match_table = of_match_ptr(pbias_of_match),
+	},
+};
+
+module_platform_driver(pbias_regulator_driver);
+
+MODULE_AUTHOR("Balaji T K <balajitk@ti.com>");
+MODULE_DESCRIPTION("pbias voltage regulator");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:pbias-regulator");
+1
include/linux/mfd/rtsx_common.h
···
 struct rtsx_slot {
 	struct platform_device *p_dev;
 	void (*card_event)(struct platform_device *p_dev);
+	void (*done_transfer)(struct platform_device *p_dev);
 };
 
 #endif
+7 -1
include/linux/mfd/rtsx_pci.h
···
 #define HOST_TO_DEVICE		0
 #define DEVICE_TO_HOST		1
 
-#define MAX_PHASE		31
+#define RTSX_PHASE_MAX		32
 #define RX_TUNING_CNT		3
 
 /* SG descriptor */
···
 int rtsx_pci_send_cmd(struct rtsx_pcr *pcr, int timeout);
 int rtsx_pci_transfer_data(struct rtsx_pcr *pcr, struct scatterlist *sglist,
 		int num_sg, bool read, int timeout);
+int rtsx_pci_dma_map_sg(struct rtsx_pcr *pcr, struct scatterlist *sglist,
+		int num_sg, bool read);
+int rtsx_pci_dma_unmap_sg(struct rtsx_pcr *pcr, struct scatterlist *sglist,
+		int num_sg, bool read);
+int rtsx_pci_dma_transfer(struct rtsx_pcr *pcr, struct scatterlist *sglist,
+		int sg_count, bool read);
 int rtsx_pci_read_ppbuf(struct rtsx_pcr *pcr, u8 *buf, int buf_len);
 int rtsx_pci_write_ppbuf(struct rtsx_pcr *pcr, u8 *buf, int buf_len);
 int rtsx_pci_card_pull_ctl_enable(struct rtsx_pcr *pcr, int card);
+2 -2
include/linux/mmc/core.h
···
 					 * actively failing requests
 					 */
 
-	unsigned int		cmd_timeout_ms;	/* in milliseconds */
+	unsigned int		busy_timeout;	/* busy detect timeout in ms */
 	/* Set this flag only for blocking sanitize request */
 	bool			sanitize_busy;
 
···
 	struct mmc_command *, int);
 extern void mmc_start_bkops(struct mmc_card *card, bool from_exception);
 extern int __mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int, bool,
-			bool);
+			bool, bool);
 extern int mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int);
 extern int mmc_send_ext_csd(struct mmc_card *card, u8 *ext_csd);
+2 -11
include/linux/mmc/host.h
···
 	u32			caps2;		/* More host capabilities */
 
 #define MMC_CAP2_BOOTPART_NOACC	(1 << 0)	/* Boot partition no access */
-#define MMC_CAP2_CACHE_CTRL	(1 << 1)	/* Allow cache control */
 #define MMC_CAP2_FULL_PWR_CYCLE	(1 << 2)	/* Can do full power cycle */
 #define MMC_CAP2_NO_MULTI_READ	(1 << 3)	/* Multiblock reads don't work */
-#define MMC_CAP2_NO_SLEEP_CMD	(1 << 4)	/* Don't allow sleep command */
 #define MMC_CAP2_HS200_1_8V_SDR	(1 << 5)	/* can support */
 #define MMC_CAP2_HS200_1_2V_SDR	(1 << 6)	/* can support */
 #define MMC_CAP2_HS200		(MMC_CAP2_HS200_1_8V_SDR | \
 				 MMC_CAP2_HS200_1_2V_SDR)
-#define MMC_CAP2_BROKEN_VOLTAGE	(1 << 7)	/* Use the broken voltage */
 #define MMC_CAP2_HC_ERASE_SZ	(1 << 9)	/* High-capacity erase size */
 #define MMC_CAP2_CD_ACTIVE_HIGH	(1 << 10)	/* Card-detect signal active high */
 #define MMC_CAP2_RO_ACTIVE_HIGH	(1 << 11)	/* Write-protect signal active high */
···
 #define MMC_CAP2_PACKED_CMD	(MMC_CAP2_PACKED_RD | \
 				 MMC_CAP2_PACKED_WR)
 #define MMC_CAP2_NO_PRESCAN_POWERUP (1 << 14)	/* Don't power up before scan */
-#define MMC_CAP2_SANITIZE	(1 << 15)	/* Support Sanitize */
 
 	mmc_pm_flag_t		pm_caps;	/* supported pm features */
 
···
 	unsigned int		max_req_size;	/* maximum number of bytes in one req */
 	unsigned int		max_blk_size;	/* maximum size of one mmc block */
 	unsigned int		max_blk_count;	/* maximum number of blocks in one req */
-	unsigned int		max_discard_to;	/* max. discard timeout in ms */
+	unsigned int		max_busy_timeout; /* max busy timeout in ms */
 
 	/* private data */
 	spinlock_t		lock;		/* lock for claim and bus ops */
···
 void mmc_detect_change(struct mmc_host *, unsigned long delay);
 void mmc_request_done(struct mmc_host *, struct mmc_request *);
 
-int mmc_cache_ctrl(struct mmc_host *, u8);
-
 static inline void mmc_signal_sdio_irq(struct mmc_host *host)
 {
 	host->ops->enable_sdio_irq(host, 0);
···
 
 int mmc_pm_notify(struct notifier_block *notify_block, unsigned long, void *);
 
-/* Module parameter */
-extern bool mmc_assume_removable;
-
 static inline int mmc_card_is_removable(struct mmc_host *host)
 {
-	return !(host->caps & MMC_CAP_NONREMOVABLE) && mmc_assume_removable;
+	return !(host->caps & MMC_CAP_NONREMOVABLE);
 }
 
 static inline int mmc_card_keep_power(struct mmc_host *host)
-8
include/linux/mmc/sdhci-spear.h
···
 /*
  * struct sdhci_plat_data: spear sdhci platform data structure
  *
- * @card_power_gpio: gpio pin for enabling/disabling power to sdhci socket
- * @power_active_high: if set, enable power to sdhci socket by setting
- *		       card_power_gpio
- * @power_always_enb: If set, then enable power on probe, otherwise enable only
- *		      on card insertion and disable on card removal.
  * card_int_gpio: gpio pin used for card detection
  */
 struct sdhci_plat_data {
-	int card_power_gpio;
-	int power_active_high;
-	int power_always_enb;
 	int card_int_gpio;
 };
 
+2
include/linux/mmc/sdhci.h
···
 #define SDHCI_QUIRK2_BROKEN_HOST_CONTROL		(1<<5)
 /* Controller does not support HS200 */
 #define SDHCI_QUIRK2_BROKEN_HS200			(1<<6)
+/* Controller does not support DDR50 */
+#define SDHCI_QUIRK2_BROKEN_DDR50			(1<<7)
 
 	int irq;		/* Device IRQ */
 	void __iomem *ioaddr;	/* Mapped address */
+6
include/linux/mmc/slot-gpio.h
···
 			 unsigned int debounce);
 void mmc_gpio_free_cd(struct mmc_host *host);
 
+int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
+			 unsigned int idx, bool override_active_level,
+			 unsigned int debounce);
+void mmc_gpiod_free_cd(struct mmc_host *host);
+void mmc_gpiod_request_cd_irq(struct mmc_host *host);
+
 #endif
+2 -5
include/linux/platform_data/mmc-msm_sdcc.h
···
-/*
- * arch/arm/include/asm/mach/mmc.h
- */
-#ifndef ASMARM_MACH_MMC_H
-#define ASMARM_MACH_MMC_H
+#ifndef __MMC_MSM_SDCC_H
+#define __MMC_MSM_SDCC_H
 
 #include <linux/mmc/host.h>
 #include <linux/mmc/card.h>
+2 -4
include/linux/platform_data/mmc-mvsdio.h
···
 /*
- * arch/arm/plat-orion/include/plat/mvsdio.h
- *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
  * warranty of any kind, whether express or implied.
  */
 
-#ifndef __MACH_MVSDIO_H
-#define __MACH_MVSDIO_H
+#ifndef __MMC_MVSDIO_H
+#define __MMC_MVSDIO_H
 
 #include <linux/mbus.h>
 