Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mmc-v4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC updates from Ulf Hansson:
"MMC core:
- Add support for Marvell SD8787 Wifi/BT chip
- Improve UHS support for SDIO
- Invent MMC_CAP_3_3V_DDR and a DT binding for eMMC DDR 3.3V mode
- Detect Auto BKOPS enable bit
- Export eMMC device lifetime information through sysfs
- A first pass at slimming down the public mmc headers to avoid abuse
- Refactoring of the mmc block device driver to prepare for blkmq
- Cleanup code for the mmc block device driver
- Clarify and cleanup code dealing with data requests
- Cleanup some code by converting to ida_simple_ functions
- Cleanup code dealing with card quirks
- Cleanup private and public mmc header files
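Of the core changes above, the lifetime export is directly user-visible: eMMC v5.0+ cards report a device life time estimate and pre-EOL status, which the new sysfs attributes expose under the card's device node. A quick way to read them is sketched below; the card path (`mmc0:0001`) is illustrative and varies per system, so the snippet guards for its absence:

```shell
# Hypothetical card path: the device name (e.g. mmc0:0001) differs per board.
dev=/sys/bus/mmc/devices/mmc0:0001
for attr in life_time pre_eol_info; do
    if [ -r "$dev/$attr" ]; then
        printf '%s: %s\n' "$attr" "$(cat "$dev/$attr")"
    else
        printf '%s: not readable here (no eMMC 5.0+ card, or pre-4.11 kernel)\n' "$attr"
    fi
done
```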

MMC host:
- Don't rely on public mmc headers to include non-mmc related headers
- meson: Add support for eMMC HS400 mode
- meson: Various cleanups and improvements
- omap_hsmmc: Use the proper provided busy timeout from the core
- sunxi: Enable new timings for the A64 MMC controllers
- sunxi: Improvements for clock management
- tmio: Improvements for SDIO interrupts
- mxs-mmc: Add CMD23 support
- sdhci-msm: Enable HS400 enhanced strobe mode support
- sdhci-msm: Correct HS400 tuning sequence
- sdhci-acpi: Support deferred probe
- sdhci-pci: Add support for eMMC HS200 tuning mode on AMD
- mediatek: Correct the implementation of card busy detection
- dw_mmc: Initial support for ZX mmc controller
- sh_mobile_sdhi: Enable support for eMMC HS200 mode
- sh_mmcif: Various cleanups and improvements"

* tag 'mmc-v4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (145 commits)
mmc: core: add mmc prefix for blk_fixups
mmc: core: move all quirks together into quirks.h
mmc: core: improve the quirks for sdio devices
mmc: core: move some sdio IDs out of quirks file
mmc: core: change quirks.c to be a header file
mmc: sdhci-cadence: fix bit shift of read data from PHY port
mmc: Adding AUTO_BKOPS_EN bit set for Auto BKOPS support
mmc: MAN_BKOPS_EN inverse debug message logic
mmc: meson-gx: add support for HS400 mode
mmc: meson-gx: remove unneeded checks in remove
mmc: meson-gx: reduce bounce buffer size
mmc: meson-gx: set max block count and request size
mmc: meson-gx: improve interrupt handling
mmc: meson-gx: improve meson_mmc_irq_thread
mmc: meson-gx: improve meson_mmc_clk_set
mmc: meson-gx: minor improvements in meson_mmc_set_ios
mmc: meson: Assign the minimum clk rate as close to 400KHz as possible
mmc: core: start to break apart mmc_start_areq()
mmc: block: respect bool returned from blk_end_request()
mmc: block: return errorcode from mmc_sd_num_wr_blocks()
...

+2590 -1817
+1 -1
Documentation/devicetree/bindings/mmc/amlogic,meson-gx.txt
···
        "core" - Main peripheral bus clock
        "clkin0" - Parent clock of internal mux
        "clkin1" - Other parent clock of internal mux
-   The driver has an interal mux clock which switches between clkin0 and clkin1 depending on the
+   The driver has an internal mux clock which switches between clkin0 and clkin1 depending on the
    clock rate requested by the MMC core.

    Example:
+16
Documentation/devicetree/bindings/mmc/mmc-pwrseq-sd8787.txt
···
+ * Marvell SD8787 power sequence provider
+
+ Required properties:
+ - compatible: must be "mmc-pwrseq-sd8787".
+ - powerdown-gpios: contains a power down GPIO specifier with the
+   default active state
+ - reset-gpios: contains a reset GPIO specifier with the default
+   active state
+
+ Example:
+
+ wifi_pwrseq: wifi_pwrseq {
+     compatible = "mmc-pwrseq-sd8787";
+     powerdown-gpios = <&twl_gpio 0 GPIO_ACTIVE_LOW>;
+     reset-gpios = <&twl_gpio 1 GPIO_ACTIVE_LOW>;
+ }
+1
Documentation/devicetree/bindings/mmc/mmc.txt
···
  - cap-mmc-hw-reset: eMMC hardware reset is supported
  - cap-sdio-irq: enable SDIO IRQ signalling on this interface
  - full-pwr-cycle: full power cycle of the card is supported
+ - mmc-ddr-3_3v: eMMC high-speed DDR mode(3.3V I/O) is supported
  - mmc-ddr-1_8v: eMMC high-speed DDR mode(1.8V I/O) is supported
  - mmc-ddr-1_2v: eMMC high-speed DDR mode(1.2V I/O) is supported
  - mmc-hs200-1_8v: eMMC HS200 mode(1.8V I/O) is supported
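The new `mmc-ddr-3_3v` flag pairs with the MMC_CAP_3_3V_DDR capability from the core updates. A hypothetical node fragment showing where it sits (the node label and regulator name are illustrative, not taken from this pull):

```dts
&emmc {
	bus-width = <8>;
	non-removable;
	mmc-ddr-3_3v;	/* new in this pull: eMMC DDR52 at 3.3V I/O */
	vmmc-supply = <&reg_3v3>;
};
```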
+1 -1
Documentation/devicetree/bindings/mmc/sdhci-st.txt
···
  - bus-width: Number of data lines.
    See: Documentation/devicetree/bindings/mmc/mmc.txt.

- - max-frequency: Can be 200MHz, 100Mz or 50MHz (default) and used for
+ - max-frequency: Can be 200MHz, 100MHz or 50MHz (default) and used for
    configuring the CCONFIG3 in the mmcss.
    See: Documentation/devicetree/bindings/mmc/mmc.txt.
+1 -1
Documentation/devicetree/bindings/mmc/sdhci.txt
···
  Optional properties:
  - sdhci-caps-mask: The sdhci capabilities register is incorrect. This 64bit
-   property corresponds to the bits in the sdhci capabilty register. If the bit
+   property corresponds to the bits in the sdhci capability register. If the bit
    is on in the mask then the bit is incorrect in the register and should be
    turned off, before applying sdhci-caps.
  - sdhci-caps: The sdhci capabilities register is incorrect. This 64bit
+1
Documentation/devicetree/bindings/mmc/sunxi-mmc.txt
···
    * "allwinner,sun5i-a13-mmc"
    * "allwinner,sun7i-a20-mmc"
    * "allwinner,sun9i-a80-mmc"
+   * "allwinner,sun50i-a64-emmc"
    * "allwinner,sun50i-a64-mmc"
  - reg : mmc controller base registers
  - clocks : a list with 4 phandle + clock specifier pairs
+14 -1
Documentation/devicetree/bindings/mmc/synopsys-dw-mshc.txt
···
  each child-node representing a supported slot. There should be atleast one
  child node representing a card slot. The name of the child node representing
  the slot is recommended to be slot@n where n is the unique number of the slot
- connnected to the controller. The following are optional properties which
+ connected to the controller. The following are optional properties which
  can be included in the slot child node.

  * reg: specifies the physical slot number. The valid values of this
···
  * card-detect-delay: Delay in milli-seconds before detecting card after card
    insert event. The default value is 0.

+ * data-addr: Override fifo address with value provided by DT. The default FIFO reg
+   offset is assumed as 0x100 (version < 0x240A) and 0x200(version >= 0x240A) by
+   driver. If the controller does not follow this rule, please use this property
+   to set fifo address in device tree.
+
+ * fifo-watermark-aligned: Data done irq is expected if data length is less than
+   watermark in PIO mode. But fifo watermark is requested to be aligned with data
+   length in some SoC so that TX/RX irq can be generated with data done irq. Add this
+   watermark quirk to mark this requirement and force fifo watermark setting
+   accordingly.
+
  * vmmc-supply: The phandle to the regulator to use for vmmc. If this is
    specified we'll defer probe until we can find this regulator.
···
    interrupts = <0 75 0>;
    #address-cells = <1>;
    #size-cells = <0>;
+   data-addr = <0x200>;
+   fifo-watermark-aligned;
    resets = <&rst 20>;
    reset-names = "reset";
  };
+13
Documentation/devicetree/bindings/mmc/tmio_mmc.txt
···
    "renesas,sdhi-r8a7795" - SDHI IP on R8A7795 SoC
    "renesas,sdhi-r8a7796" - SDHI IP on R8A7796 SoC

+ - clocks: Most controllers only have 1 clock source per channel. However, on
+   some variations of this controller, the internal card detection
+   logic that exists in this controller is sectioned off to be run by a
+   separate second clock source to allow the main core clock to be turned
+   off to save power.
+   If 2 clocks are specified by the hardware, you must name them as
+   "core" and "cd". If the controller only has 1 clock, naming is not
+   required.
+   Below is the number clocks for each supported SoC:
+   1: SH73A0, R8A73A4, R8A7740, R8A7778, R8A7779, R8A7790
+      R8A7791, R8A7792, R8A7793, R8A7794, R8A7795, R8A7796
+   2: R7S72100
+
  Optional properties:
  - toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
  - pinctrl-names: should be "default", "state_uhs"
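The two-clock case described in that binding applies to R7S72100, where card detection runs off its own clock. A sketch of such a node, with the register address and clock phandles illustrative rather than taken from a real board file:

```dts
sdhi0: sd@e804e000 {
	compatible = "renesas,sdhi-r7s72100";
	reg = <0xe804e000 0x100>;
	clocks = <&mstp12_clks 0>, <&mstp12_clks 1>;
	clock-names = "core", "cd";	/* required naming when 2 clocks exist */
};
```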
+33
Documentation/devicetree/bindings/mmc/zx-dw-mshc.txt
···
+ * ZTE specific extensions to the Synopsys Designware Mobile Storage
+   Host Controller
+
+ The Synopsys designware mobile storage host controller is used to interface
+ a SoC with storage medium such as eMMC or SD/MMC cards. This file documents
+ differences between the core Synopsys dw mshc controller properties described
+ by synopsys-dw-mshc.txt and the properties used by the ZTE specific
+ extensions to the Synopsys Designware Mobile Storage Host Controller.
+
+ Required Properties:
+
+ * compatible: should be
+   - "zte,zx296718-dw-mshc": for ZX SoCs
+
+ Example:
+
+ mmc1: mmc@1110000 {
+     compatible = "zte,zx296718-dw-mshc";
+     reg = <0x01110000 0x1000>;
+     interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
+     fifo-depth = <32>;
+     data-addr = <0x200>;
+     fifo-watermark-aligned;
+     bus-width = <4>;
+     clock-frequency = <50000000>;
+     clocks = <&topcrm SD0_AHB>, <&topcrm SD0_WCLK>;
+     clock-names = "biu", "ciu";
+     num-slots = <1>;
+     max-frequency = <50000000>;
+     cap-sdio-irq;
+     cap-sd-highspeed;
+     status = "disabled";
+ };
+6 -1
Documentation/devicetree/bindings/net/wireless/marvell-8xxx.txt
···
- Marvell 8897/8997 (sd8897/sd8997/pcie8997) SDIO/PCIE devices
+ Marvell 8787/8897/8997 (sd8787/sd8897/sd8997/pcie8997) SDIO/PCIE devices
  ------

  This node provides properties for controlling the Marvell SDIO/PCIE wireless device.
···
  Required properties:

  - compatible : should be one of the following:
+     * "marvell,sd8787"
      * "marvell,sd8897"
      * "marvell,sd8997"
      * "pci11ab,2b42"
···
    so that the wifi chip can wakeup host platform under certain condition.
    during system resume, the irq will be disabled to make sure
    unnecessary interrupt is not received.
+ - vmmc-supply: a phandle of a regulator, supplying VCC to the card
+ - mmc-pwrseq: phandle to the MMC power sequence node. See "mmc-pwrseq-*"
+   for documentation of MMC power sequence bindings.

  Example:
···
  &mmc3 {
      status = "okay";
      vmmc-supply = <&wlan_en_reg>;
+     mmc-pwrseq = <&wifi_pwrseq>;
      bus-width = <4>;
      cap-power-off-card;
      keep-power-in-suspend;
-1
MAINTAINERS
···
  M: Jaehoon Chung <jh80.chung@samsung.com>
  L: linux-mmc@vger.kernel.org
  S: Maintained
- F: include/linux/mmc/dw_mmc.h
  F: drivers/mmc/host/dw_mmc*

  SYSTEM TRACE MODULE CLASS
+1
arch/arm/mach-davinci/board-da850-evm.c
···
  #include <linux/gpio/machine.h>
  #include <linux/init.h>
  #include <linux/kernel.h>
+ #include <linux/leds.h>
  #include <linux/i2c.h>
  #include <linux/platform_data/at24.h>
  #include <linux/platform_data/pca953x.h>
+1
arch/arm/mach-davinci/board-dm644x-evm.c
···
  #include <linux/videodev2.h>
  #include <linux/v4l2-dv-timings.h>
  #include <linux/export.h>
+ #include <linux/leds.h>

  #include <media/i2c/tvp514x.h>
+1
arch/arm/mach-davinci/board-neuros-osd2.c
···
   */
  #include <linux/platform_device.h>
  #include <linux/gpio.h>
+ #include <linux/leds.h>
  #include <linux/mtd/partitions.h>
  #include <linux/platform_data/gpio-davinci.h>
  #include <linux/platform_data/i2c-davinci.h>
+1
arch/arm/mach-davinci/board-omapl138-hawk.c
···
  #include <linux/kernel.h>
  #include <linux/init.h>
  #include <linux/console.h>
+ #include <linux/interrupt.h>
  #include <linux/gpio.h>
  #include <linux/gpio/machine.h>
  #include <linux/platform_data/gpio-davinci.h>
+1
arch/arm/mach-pxa/balloon3.c
···
  #include <linux/init.h>
  #include <linux/platform_device.h>
  #include <linux/interrupt.h>
+ #include <linux/leds.h>
  #include <linux/sched.h>
  #include <linux/bitops.h>
  #include <linux/fb.h>
+1
arch/arm/mach-pxa/colibri-pxa270-income.c
···
  #include <linux/gpio.h>
  #include <linux/init.h>
  #include <linux/interrupt.h>
+ #include <linux/leds.h>
  #include <linux/ioport.h>
  #include <linux/kernel.h>
  #include <linux/platform_device.h>
+1
arch/arm/mach-pxa/corgi.c
···
  #include <linux/major.h>
  #include <linux/fs.h>
  #include <linux/interrupt.h>
+ #include <linux/leds.h>
  #include <linux/mmc/host.h>
  #include <linux/mtd/physmap.h>
  #include <linux/pm.h>
+1
arch/arm/mach-pxa/trizeps4.c
···
  #include <linux/kernel.h>
  #include <linux/platform_device.h>
  #include <linux/interrupt.h>
+ #include <linux/leds.h>
  #include <linux/export.h>
  #include <linux/sched.h>
  #include <linux/bitops.h>
+1
arch/arm/mach-pxa/vpac270.c
···
  #include <linux/irq.h>
  #include <linux/gpio_keys.h>
  #include <linux/input.h>
+ #include <linux/leds.h>
  #include <linux/gpio.h>
  #include <linux/usb/gpio_vbus.h>
  #include <linux/mtd/mtd.h>
+1
arch/arm/mach-pxa/zeus.c
···

  #include <linux/cpufreq.h>
  #include <linux/interrupt.h>
+ #include <linux/leds.h>
  #include <linux/irq.h>
  #include <linux/pm.h>
  #include <linux/gpio.h>
+1
arch/arm/mach-pxa/zylonite.c
···
  #include <linux/module.h>
  #include <linux/kernel.h>
  #include <linux/interrupt.h>
+ #include <linux/leds.h>
  #include <linux/init.h>
  #include <linux/platform_device.h>
  #include <linux/gpio.h>
+1
arch/mips/alchemy/devboards/db1300.c
···
  #include <linux/i2c.h>
  #include <linux/io.h>
  #include <linux/leds.h>
+ #include <linux/interrupt.h>
  #include <linux/ata_platform.h>
  #include <linux/mmc/host.h>
  #include <linux/module.h>
+11 -5
arch/sh/boot/romimage/mmcif-sh7724.c
···
   */

  #include <linux/mmc/sh_mmcif.h>
- #include <linux/mmc/boot.h>
  #include <mach/romimage.h>

  #define MMCIF_BASE (void __iomem *)0xa4ca0000
···
  #define HIZCRC 0xa405015c
  #define DRVCRA 0xa405018a

+ enum {
+     MMCIF_PROGRESS_ENTER,
+     MMCIF_PROGRESS_INIT,
+     MMCIF_PROGRESS_LOAD,
+     MMCIF_PROGRESS_DONE
+ };
+
  /* SH7724 specific MMCIF loader
   *
   * loads the romImage from an MMC card starting from block 512
···
   */
  asmlinkage void mmcif_loader(unsigned char *buf, unsigned long no_bytes)
  {
-     mmcif_update_progress(MMC_PROGRESS_ENTER);
+     mmcif_update_progress(MMCIF_PROGRESS_ENTER);

      /* enable clock to the MMCIF hardware block */
      __raw_writel(__raw_readl(MSTPCR2) & ~0x20000000, MSTPCR2);
···
      /* high drive capability for MMC pins */
      __raw_writew(__raw_readw(DRVCRA) | 0x3000, DRVCRA);

-     mmcif_update_progress(MMC_PROGRESS_INIT);
+     mmcif_update_progress(MMCIF_PROGRESS_INIT);

      /* setup MMCIF hardware */
      sh_mmcif_boot_init(MMCIF_BASE);

-     mmcif_update_progress(MMC_PROGRESS_LOAD);
+     mmcif_update_progress(MMCIF_PROGRESS_LOAD);

      /* load kernel via MMCIF interface */
      sh_mmcif_boot_do_read(MMCIF_BASE, 512,
···
      /* disable clock to the MMCIF hardware block */
      __raw_writel(__raw_readl(MSTPCR2) | 0x20000000, MSTPCR2);

-     mmcif_update_progress(MMC_PROGRESS_DONE);
+     mmcif_update_progress(MMCIF_PROGRESS_DONE);
  }
+10
drivers/mmc/core/Kconfig
···
      This driver can also be built as a module. If so, the module
      will be called pwrseq_emmc.

+ config PWRSEQ_SD8787
+     tristate "HW reset support for SD8787 BT + Wifi module"
+     depends on OF && (MWIFIEX || BT_MRVL_SDIO)
+     help
+       This selects hardware reset support for the SD8787 BT + Wifi
+       module. By default this option is set to n.
+
+       This driver can also be built as a module. If so, the module
+       will be called pwrseq_sd8787.
+
  config PWRSEQ_SIMPLE
      tristate "Simple HW reset support for MMC"
      default y
+2 -1
drivers/mmc/core/Makefile
···
      mmc.o mmc_ops.o sd.o sd_ops.o \
      sdio.o sdio_ops.o sdio_bus.o \
      sdio_cis.o sdio_io.o sdio_irq.o \
-     quirks.o slot-gpio.o
+     slot-gpio.o
  mmc_core-$(CONFIG_OF) += pwrseq.o
  obj-$(CONFIG_PWRSEQ_SIMPLE) += pwrseq_simple.o
+ obj-$(CONFIG_PWRSEQ_SD8787) += pwrseq_sd8787.o
  obj-$(CONFIG_PWRSEQ_EMMC) += pwrseq_emmc.o
  mmc_core-$(CONFIG_DEBUG_FS) += debugfs.o
  obj-$(CONFIG_MMC_BLOCK) += mmc_block.o
+174 -239
drivers/mmc/core/block.c
···
  #include "queue.h"
  #include "block.h"
+ #include "core.h"
+ #include "card.h"
+ #include "host.h"
+ #include "bus.h"
+ #include "mmc_ops.h"
+ #include "quirks.h"
+ #include "sd_ops.h"

  MODULE_ALIAS("mmc:block");
  #ifdef MODULE_PARAM_PREFIX
···
  #endif
  #define MODULE_PARAM_PREFIX "mmcblk."

- #define INAND_CMD38_ARG_EXT_CSD 113
- #define INAND_CMD38_ARG_ERASE 0x00
- #define INAND_CMD38_ARG_TRIM 0x01
- #define INAND_CMD38_ARG_SECERASE 0x80
- #define INAND_CMD38_ARG_SECTRIM1 0x81
- #define INAND_CMD38_ARG_SECTRIM2 0x88
  #define MMC_BLK_TIMEOUT_MS (10 * 60 * 1000) /* 10 minute timeout */
  #define MMC_SANITIZE_REQ_TIMEOUT 240000
  #define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)
···
  #define MAX_DEVICES 256

  static DEFINE_IDA(mmc_blk_ida);
- static DEFINE_SPINLOCK(mmc_blk_lock);

  /*
   * There is one mmc_blk_data per slot.
···
      if (md->usage == 0) {
          int devidx = mmc_get_devidx(md->disk);
          blk_cleanup_queue(md->queue.queue);
-
-         spin_lock(&mmc_blk_lock);
-         ida_remove(&mmc_blk_ida, devidx);
-         spin_unlock(&mmc_blk_lock);
-
+         ida_simple_remove(&mmc_blk_ida, devidx);
          put_disk(md->disk);
          kfree(md);
      }
···
  static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
                 struct mmc_blk_ioc_data *idata)
  {
-     struct mmc_command cmd = {0};
-     struct mmc_data data = {0};
-     struct mmc_request mrq = {NULL};
+     struct mmc_command cmd = {};
+     struct mmc_data data = {};
+     struct mmc_request mrq = {};
      struct scatterlist sg;
      int err;
      int is_rpmb = false;
···
      return 0;
  }

- static u32 mmc_sd_num_wr_blocks(struct mmc_card *card)
+ static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
  {
      int err;
      u32 result;
      __be32 *blocks;

-     struct mmc_request mrq = {NULL};
-     struct mmc_command cmd = {0};
-     struct mmc_data data = {0};
+     struct mmc_request mrq = {};
+     struct mmc_command cmd = {};
+     struct mmc_data data = {};

      struct scatterlist sg;
···
      err = mmc_wait_for_cmd(card->host, &cmd, 0);
      if (err)
-         return (u32)-1;
+         return err;
      if (!mmc_host_is_spi(card->host) && !(cmd.resp[0] & R1_APP_CMD))
-         return (u32)-1;
+         return -EIO;

      memset(&cmd, 0, sizeof(struct mmc_command));
···
      blocks = kmalloc(4, GFP_KERNEL);
      if (!blocks)
-         return (u32)-1;
+         return -ENOMEM;

      sg_init_one(&sg, blocks, 4);
···
      kfree(blocks);

      if (cmd.error || data.error)
-         result = (u32)-1;
+         return -EIO;

-     return result;
+     *written_blocks = result;
+
+     return 0;
  }

  static int get_card_status(struct mmc_card *card, u32 *status, int retries)
  {
-     struct mmc_command cmd = {0};
+     struct mmc_command cmd = {};
      int err;

      cmd.opcode = MMC_SEND_STATUS;
···
      struct request *req, bool *gen_err, u32 *stop_status)
  {
      struct mmc_host *host = card->host;
-     struct mmc_command cmd = {0};
+     struct mmc_command cmd = {};
      int err;
      bool use_r1b_resp = rq_data_dir(req) == WRITE;
···
      return false;
  }

- static int mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
  {
      struct mmc_blk_data *md = mq->blkdata;
      struct mmc_card *card = md->queue.card;
···
      if (!mmc_can_erase(card)) {
          err = -EOPNOTSUPP;
-         goto out;
+         goto fail;
      }

      from = blk_rq_pos(req);
···
          arg = MMC_TRIM_ARG;
      else
          arg = MMC_ERASE_ARG;
- retry:
-     if (card->quirks & MMC_QUIRK_INAND_CMD38) {
-         err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
-                  INAND_CMD38_ARG_EXT_CSD,
-                  arg == MMC_TRIM_ARG ?
-                  INAND_CMD38_ARG_TRIM :
-                  INAND_CMD38_ARG_ERASE,
-                  0);
-         if (err)
-             goto out;
-     }
-     err = mmc_erase(card, from, nr, arg);
- out:
-     if (err == -EIO && !mmc_blk_reset(md, card->host, type))
-         goto retry;
+     do {
+         err = 0;
+         if (card->quirks & MMC_QUIRK_INAND_CMD38) {
+             err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
+                      INAND_CMD38_ARG_EXT_CSD,
+                      arg == MMC_TRIM_ARG ?
+                      INAND_CMD38_ARG_TRIM :
+                      INAND_CMD38_ARG_ERASE,
+                      0);
+         }
+         if (!err)
+             err = mmc_erase(card, from, nr, arg);
+     } while (err == -EIO && !mmc_blk_reset(md, card->host, type));
      if (!err)
          mmc_blk_reset_success(md, type);
+ fail:
      blk_end_request(req, err, blk_rq_bytes(req));
-
-     return err ? 0 : 1;
  }

- static int mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
+ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
                         struct request *req)
  {
      struct mmc_blk_data *md = mq->blkdata;
···
      mmc_blk_reset_success(md, type);
  out:
      blk_end_request(req, err, blk_rq_bytes(req));
-
-     return err ? 0 : 1;
  }

- static int mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
+ static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
  {
      struct mmc_blk_data *md = mq->blkdata;
      struct mmc_card *card = md->queue.card;
···
          ret = -EIO;

      blk_end_request_all(req, ret);
-
-     return ret ? 0 : 1;
  }

  /*
···
                    struct mmc_async_req *areq)
  {
      struct mmc_queue_req *mq_mrq = container_of(areq, struct mmc_queue_req,
-                             mmc_active);
+                             areq);
      struct mmc_blk_request *brq = &mq_mrq->brq;
      struct request *req = mq_mrq->req;
      int need_retune = card->host->need_retune;
···
          brq->data.sg_len = i;
      }

-     mqrq->mmc_active.mrq = &brq->mrq;
-     mqrq->mmc_active.err_check = mmc_blk_err_check;
+     mqrq->areq.mrq = &brq->mrq;
+     mqrq->areq.err_check = mmc_blk_err_check;

      mmc_queue_bounce_pre(mqrq);
  }

- static int mmc_blk_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
-                struct mmc_blk_request *brq, struct request *req,
-                int ret)
+ static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
+                    struct mmc_blk_request *brq, struct request *req,
+                    bool old_req_pending)
  {
      struct mmc_queue_req *mq_rq;
+     bool req_pending;
+
      mq_rq = container_of(brq, struct mmc_queue_req, brq);

      /*
···
       */
      if (mmc_card_sd(card)) {
          u32 blocks;
+         int err;

-         blocks = mmc_sd_num_wr_blocks(card);
-         if (blocks != (u32)-1) {
-             ret = blk_end_request(req, 0, blocks << 9);
-         }
+         err = mmc_sd_num_wr_blocks(card, &blocks);
+         if (err)
+             req_pending = old_req_pending;
+         else
+             req_pending = blk_end_request(req, 0, blocks << 9);
      } else {
-         ret = blk_end_request(req, 0, brq->data.bytes_xfered);
+         req_pending = blk_end_request(req, 0, brq->data.bytes_xfered);
      }
-     return ret;
+     return req_pending;
  }

- static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
+ static void mmc_blk_rw_cmd_abort(struct mmc_card *card, struct request *req)
+ {
+     if (mmc_card_removed(card))
+         req->rq_flags |= RQF_QUIET;
+     while (blk_end_request(req, -EIO, blk_rq_cur_bytes(req)));
+ }
+
+ /**
+  * mmc_blk_rw_try_restart() - tries to restart the current async request
+  * @mq: the queue with the card and host to restart
+  * @req: a new request that want to be started after the current one
+  */
+ static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req)
+ {
+     if (!req)
+         return;
+
+     /*
+      * If the card was removed, just cancel everything and return.
+      */
+     if (mmc_card_removed(mq->card)) {
+         req->rq_flags |= RQF_QUIET;
+         blk_end_request_all(req, -EIO);
+         return;
+     }
+     /* Else proceed and try to restart the current async request */
+     mmc_blk_rw_rq_prep(mq->mqrq_cur, mq->card, 0, mq);
+     mmc_start_areq(mq->card->host, &mq->mqrq_cur->areq, NULL);
+ }
+
+ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
  {
      struct mmc_blk_data *md = mq->blkdata;
      struct mmc_card *card = md->queue.card;
      struct mmc_blk_request *brq;
-     int ret = 1, disable_multi = 0, retry = 0, type, retune_retry_done = 0;
+     int disable_multi = 0, retry = 0, type, retune_retry_done = 0;
      enum mmc_blk_status status;
      struct mmc_queue_req *mq_rq;
-     struct request *req;
-     struct mmc_async_req *areq;
+     struct request *old_req;
+     struct mmc_async_req *new_areq;
+     struct mmc_async_req *old_areq;
+     bool req_pending = true;

-     if (!rqc && !mq->mqrq_prev->req)
-         return 0;
+     if (!new_req && !mq->mqrq_prev->req)
+         return;

      do {
-         if (rqc) {
+         if (new_req) {
              /*
               * When 4KB native sector is enabled, only 8 blocks
               * multiple read or write is allowed
               */
              if (mmc_large_sector(card) &&
-                 !IS_ALIGNED(blk_rq_sectors(rqc), 8)) {
+                 !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
                  pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-                        rqc->rq_disk->disk_name);
-                 mq_rq = mq->mqrq_cur;
-                 req = rqc;
-                 rqc = NULL;
-                 goto cmd_abort;
+                        new_req->rq_disk->disk_name);
+                 mmc_blk_rw_cmd_abort(card, new_req);
+                 return;
              }

              mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-             areq = &mq->mqrq_cur->mmc_active;
+             new_areq = &mq->mqrq_cur->areq;
          } else
-             areq = NULL;
-         areq = mmc_start_req(card->host, areq, &status);
-         if (!areq) {
+             new_areq = NULL;
+
+         old_areq = mmc_start_areq(card->host, new_areq, &status);
+         if (!old_areq) {
+             /*
+              * We have just put the first request into the pipeline
+              * and there is nothing more to do until it is
+              * complete.
+              */
              if (status == MMC_BLK_NEW_REQUEST)
-                 mq->flags |= MMC_QUEUE_NEW_REQUEST;
-             return 0;
+                 mq->new_request = true;
+             return;
          }

-         mq_rq = container_of(areq, struct mmc_queue_req, mmc_active);
+         /*
+          * An asynchronous request has been completed and we proceed
+          * to handle the result of it.
+          */
+         mq_rq = container_of(old_areq, struct mmc_queue_req, areq);
          brq = &mq_rq->brq;
-         req = mq_rq->req;
-         type = rq_data_dir(req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
+         old_req = mq_rq->req;
+         type = rq_data_dir(old_req) == READ ? MMC_BLK_READ : MMC_BLK_WRITE;
          mmc_queue_bounce_post(mq_rq);

          switch (status) {
···
               */
              mmc_blk_reset_success(md, type);

-             ret = blk_end_request(req, 0,
-                           brq->data.bytes_xfered);
-
+             req_pending = blk_end_request(old_req, 0,
+                               brq->data.bytes_xfered);
              /*
               * If the blk_end_request function returns non-zero even
               * though all data has been transferred and no errors
               * were returned by the host controller, it's a bug.
               */
-             if (status == MMC_BLK_SUCCESS && ret) {
+             if (status == MMC_BLK_SUCCESS && req_pending) {
                  pr_err("%s BUG rq_tot %d d_xfer %d\n",
-                        __func__, blk_rq_bytes(req),
+                        __func__, blk_rq_bytes(old_req),
                         brq->data.bytes_xfered);
-                 rqc = NULL;
-                 goto cmd_abort;
+                 mmc_blk_rw_cmd_abort(card, old_req);
+                 return;
              }
              break;
          case MMC_BLK_CMD_ERR:
-             ret = mmc_blk_cmd_err(md, card, brq, req, ret);
-             if (mmc_blk_reset(md, card->host, type))
-                 goto cmd_abort;
-             if (!ret)
-                 goto start_new_req;
+             req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
+             if (mmc_blk_reset(md, card->host, type)) {
+                 mmc_blk_rw_cmd_abort(card, old_req);
+                 mmc_blk_rw_try_restart(mq, new_req);
+                 return;
+             }
+             if (!req_pending) {
+                 mmc_blk_rw_try_restart(mq, new_req);
+                 return;
+             }
              break;
          case MMC_BLK_RETRY:
              retune_retry_done = brq->retune_retry_done;
···
          case MMC_BLK_ABORT:
              if (!mmc_blk_reset(md, card->host, type))
                  break;
-             goto cmd_abort;
+             mmc_blk_rw_cmd_abort(card, old_req);
+             mmc_blk_rw_try_restart(mq, new_req);
+             return;
          case MMC_BLK_DATA_ERR: {
              int err;

              err = mmc_blk_reset(md, card->host, type);
              if (!err)
                  break;
-             if (err == -ENODEV)
-                 goto cmd_abort;
+             if (err == -ENODEV) {
+                 mmc_blk_rw_cmd_abort(card, old_req);
+                 mmc_blk_rw_try_restart(mq, new_req);
+                 return;
+             }
              /* Fall through */
          }
          case MMC_BLK_ECC_ERR:
              if (brq->data.blocks > 1) {
                  /* Redo read one sector at a time */
                  pr_warn("%s: retrying using single block read\n",
-                     req->rq_disk->disk_name);
+                     old_req->rq_disk->disk_name);
                  disable_multi = 1;
                  break;
              }
···
               * time, so we only reach here after trying to
               * read a single sector.
               */
-             ret = blk_end_request(req, -EIO,
-                           brq->data.blksz);
-             if (!ret)
-                 goto start_new_req;
+             req_pending = blk_end_request(old_req, -EIO,
+                               brq->data.blksz);
+             if (!req_pending) {
+                 mmc_blk_rw_try_restart(mq, new_req);
+                 return;
+             }
              break;
          case MMC_BLK_NOMEDIUM:
-             goto cmd_abort;
+             mmc_blk_rw_cmd_abort(card, old_req);
+             mmc_blk_rw_try_restart(mq, new_req);
+             return;
          default:
              pr_err("%s: Unhandled return value (%d)",
-                    req->rq_disk->disk_name, status);
-             goto cmd_abort;
+                    old_req->rq_disk->disk_name, status);
+             mmc_blk_rw_cmd_abort(card, old_req);
+             mmc_blk_rw_try_restart(mq, new_req);
+             return;
          }

-         if (ret) {
+         if (req_pending) {
              /*
               * In case of a incomplete request
               * prepare it again and resend.
               */
              mmc_blk_rw_rq_prep(mq_rq, card,
                         disable_multi, mq);
-             mmc_start_req(card->host,
-                       &mq_rq->mmc_active, NULL);
+             mmc_start_areq(card->host,
+                        &mq_rq->areq, NULL);
              mq_rq->brq.retune_retry_done = retune_retry_done;
          }
-     } while (ret);
-
-     return 1;
-
- cmd_abort:
-     if (mmc_card_removed(card))
-         req->rq_flags |= RQF_QUIET;
-     while (ret)
-         ret = blk_end_request(req, -EIO,
-                       blk_rq_cur_bytes(req));
-
- start_new_req:
-     if (rqc) {
-         if (mmc_card_removed(card)) {
-             rqc->rq_flags |= RQF_QUIET;
-             blk_end_request_all(rqc, -EIO);
-         } else {
-             mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-             mmc_start_req(card->host,
-                       &mq->mqrq_cur->mmc_active, NULL);
-         }
-     }
-
-     return 0;
+     } while (req_pending);
  }

- int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
  {
      int ret;
      struct mmc_blk_data *md = mq->blkdata;
···
      if (req) {
          blk_end_request_all(req, -EIO);
      }
-     ret = 0;
      goto out;
  }

- mq->flags &= ~MMC_QUEUE_NEW_REQUEST;
+ mq->new_request = false;
  if (req && req_op(req) == REQ_OP_DISCARD) {
      /* complete ongoing async transfer before issuing discard */
      if (card->host->areq)
          mmc_blk_issue_rw_rq(mq, NULL);
-     ret = mmc_blk_issue_discard_rq(mq, req);
+     mmc_blk_issue_discard_rq(mq, req);
  } else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
      /* complete ongoing async transfer before issuing secure erase*/
      if (card->host->areq)
          mmc_blk_issue_rw_rq(mq, NULL);
-     ret = mmc_blk_issue_secdiscard_rq(mq, req);
+     mmc_blk_issue_secdiscard_rq(mq, req);
  } else if (req && req_op(req) == REQ_OP_FLUSH) {
      /* complete ongoing async transfer before issuing flush */
      if (card->host->areq)
          mmc_blk_issue_rw_rq(mq, NULL);
-     ret = mmc_blk_issue_flush(mq, req);
+     mmc_blk_issue_flush(mq, req);
  } else {
-     ret = mmc_blk_issue_rw_rq(mq, req);
+     mmc_blk_issue_rw_rq(mq, req);
  }

 out:
- if ((!req && !(mq->flags & MMC_QUEUE_NEW_REQUEST)) || req_is_special)
+ if ((!req && !mq->new_request) || req_is_special)
      /*
       * Release host when there are no more requests
       * and after special request(discard, flush) is done.
···
       * the 'mmc_blk_issue_rq' with 'mqrq_prev->req'.
       */
      mmc_put_card(card);
-     return ret;
  }

  static inline int mmc_blk_readonly(struct mmc_card *card)
···
      struct mmc_blk_data *md;
      int devidx, ret;

- again:
-     if (!ida_pre_get(&mmc_blk_ida, GFP_KERNEL))
-         return ERR_PTR(-ENOMEM);
-
-     spin_lock(&mmc_blk_lock);
-     ret = ida_get_new(&mmc_blk_ida, &devidx);
-     spin_unlock(&mmc_blk_lock);
-
-     if (ret == -EAGAIN)
-         goto again;
-     else if (ret)
-         return ERR_PTR(ret);
-
-     if (devidx >= max_devices) {
-         ret = -ENOSPC;
-         goto out;
-     }
+     devidx = ida_simple_get(&mmc_blk_ida, 0, max_devices, GFP_KERNEL);
+     if (devidx < 0)
+         return ERR_PTR(devidx);

      md = kzalloc(sizeof(struct mmc_blk_data), GFP_KERNEL);
      if (!md) {
···
  err_kfree:
      kfree(md);
  out:
-     spin_lock(&mmc_blk_lock);
-     ida_remove(&mmc_blk_ida, devidx);
-     spin_unlock(&mmc_blk_lock);
+     ida_simple_remove(&mmc_blk_ida, devidx);
      return ERR_PTR(ret);
  }
···
      return ret;
  }

- static const struct mmc_fixup blk_fixups[] =
- {
-     MMC_FIXUP("SEM02G", CID_MANFID_SANDISK, 0x100, add_quirk,
-           MMC_QUIRK_INAND_CMD38),
-     MMC_FIXUP("SEM04G", CID_MANFID_SANDISK, 0x100, add_quirk,
-           MMC_QUIRK_INAND_CMD38),
-     MMC_FIXUP("SEM08G", CID_MANFID_SANDISK, 0x100, add_quirk,
-           MMC_QUIRK_INAND_CMD38),
-     MMC_FIXUP("SEM16G", CID_MANFID_SANDISK, 0x100, add_quirk,
-           MMC_QUIRK_INAND_CMD38),
-     MMC_FIXUP("SEM32G", CID_MANFID_SANDISK, 0x100, add_quirk,
-           MMC_QUIRK_INAND_CMD38),
-
-     /*
-      * Some MMC cards experience performance degradation with CMD23
-      * instead of CMD12-bounded multiblock transfers. For now we'll
-      * black list what's bad...
-      * - Certain Toshiba cards.
-      *
-      * N.B. This doesn't affect SD cards.
-      */
-     MMC_FIXUP("SDMB-32", CID_MANFID_SANDISK, CID_OEMID_ANY, add_quirk_mmc,
-           MMC_QUIRK_BLK_NO_CMD23),
-     MMC_FIXUP("SDM032", CID_MANFID_SANDISK, CID_OEMID_ANY, add_quirk_mmc,
-           MMC_QUIRK_BLK_NO_CMD23),
-     MMC_FIXUP("MMC08G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
-           MMC_QUIRK_BLK_NO_CMD23),
-     MMC_FIXUP("MMC16G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
-           MMC_QUIRK_BLK_NO_CMD23),
-     MMC_FIXUP("MMC32G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
-           MMC_QUIRK_BLK_NO_CMD23),
-
-     /*
-      * Some MMC cards need longer data read timeout than indicated in CSD.
-      */
-     MMC_FIXUP(CID_NAME_ANY, CID_MANFID_MICRON, 0x200, add_quirk_mmc,
-           MMC_QUIRK_LONG_READ_TIME),
-     MMC_FIXUP("008GE0", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
-           MMC_QUIRK_LONG_READ_TIME),
-
-     /*
-      * On these Samsung MoviNAND parts, performing secure erase or
-      * secure trim can result in unrecoverable corruption due to a
-      * firmware bug.
2149 - */ 2150 - MMC_FIXUP("M8G2FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 2151 - MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 2152 - MMC_FIXUP("MAG4FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 2153 - MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 2154 - MMC_FIXUP("MBG8FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 2155 - MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 2156 - MMC_FIXUP("MCGAFA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 2157 - MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 2158 - MMC_FIXUP("VAL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 2159 - MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 2160 - MMC_FIXUP("VYL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 2161 - MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 2162 - MMC_FIXUP("KYL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 2163 - MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 2164 - MMC_FIXUP("VZL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 2165 - MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 2166 - 2167 - /* 2168 - * On Some Kingston eMMCs, performing trim can result in 2169 - * unrecoverable data conrruption occasionally due to a firmware bug. 2170 - */ 2171 - MMC_FIXUP("V10008", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc, 2172 - MMC_QUIRK_TRIM_BROKEN), 2173 - MMC_FIXUP("V10016", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc, 2174 - MMC_QUIRK_TRIM_BROKEN), 2175 - 2176 - END_FIXUP 2177 - }; 2178 - 2179 2096 static int mmc_blk_probe(struct mmc_card *card) 2180 2097 { 2181 2098 struct mmc_blk_data *md, *part_md; ··· 2113 2178 if (!(card->csd.cmdclass & CCC_BLOCK_READ)) 2114 2179 return -ENODEV; 2115 2180 2116 - mmc_fixup_device(card, blk_fixups); 2181 + mmc_fixup_device(card, mmc_blk_fixups); 2117 2182 2118 2183 md = mmc_blk_alloc(card); 2119 2184 if (IS_ERR(md))
drivers/mmc/core/block.h | +9 -1
···
-int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
+#ifndef _MMC_CORE_BLOCK_H
+#define _MMC_CORE_BLOCK_H
+
+struct mmc_queue;
+struct request;
+
+void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
+
+#endif
drivers/mmc/core/bus.c | +2
···
 #include <linux/mmc/host.h>

 #include "core.h"
+#include "card.h"
+#include "host.h"
 #include "sdio_cis.h"
 #include "bus.h"
drivers/mmc/core/bus.h | +15 -1
···
 #ifndef _MMC_CORE_BUS_H
 #define _MMC_CORE_BUS_H

+#include <linux/device.h>
+
+struct mmc_host;
+struct mmc_card;
+
 #define MMC_DEV_ATTR(name, fmt, args...)					\
 static ssize_t mmc_##name##_show (struct device *dev, struct device_attribute *attr, char *buf)	\
 {										\
···
 int mmc_register_bus(void);
 void mmc_unregister_bus(void);

-#endif
+struct mmc_driver {
+	struct device_driver drv;
+	int (*probe)(struct mmc_card *card);
+	void (*remove)(struct mmc_card *card);
+	void (*shutdown)(struct mmc_card *card);
+};

+int mmc_register_driver(struct mmc_driver *drv);
+void mmc_unregister_driver(struct mmc_driver *drv);
+
+#endif
drivers/mmc/core/card.h | +221
···
+/*
+ * Private header for the mmc subsystem
+ *
+ * Copyright (C) 2016 Linaro Ltd
+ *
+ * Author: Ulf Hansson <ulf.hansson@linaro.org>
+ *
+ * License terms: GNU General Public License (GPL) version 2
+ */
+
+#ifndef _MMC_CORE_CARD_H
+#define _MMC_CORE_CARD_H
+
+#include <linux/mmc/card.h>
+
+#define mmc_card_name(c)	((c)->cid.prod_name)
+#define mmc_card_id(c)		(dev_name(&(c)->dev))
+#define mmc_dev_to_card(d)	container_of(d, struct mmc_card, dev)
+
+/* Card states */
+#define MMC_STATE_PRESENT	(1<<0)		/* present in sysfs */
+#define MMC_STATE_READONLY	(1<<1)		/* card is read-only */
+#define MMC_STATE_BLOCKADDR	(1<<2)		/* card uses block-addressing */
+#define MMC_CARD_SDXC		(1<<3)		/* card is SDXC */
+#define MMC_CARD_REMOVED	(1<<4)		/* card has been removed */
+#define MMC_STATE_DOING_BKOPS	(1<<5)		/* card is doing BKOPS */
+#define MMC_STATE_SUSPENDED	(1<<6)		/* card is suspended */
+
+#define mmc_card_present(c)	((c)->state & MMC_STATE_PRESENT)
+#define mmc_card_readonly(c)	((c)->state & MMC_STATE_READONLY)
+#define mmc_card_blockaddr(c)	((c)->state & MMC_STATE_BLOCKADDR)
+#define mmc_card_ext_capacity(c) ((c)->state & MMC_CARD_SDXC)
+#define mmc_card_removed(c)	((c) && ((c)->state & MMC_CARD_REMOVED))
+#define mmc_card_doing_bkops(c)	((c)->state & MMC_STATE_DOING_BKOPS)
+#define mmc_card_suspended(c)	((c)->state & MMC_STATE_SUSPENDED)
+
+#define mmc_card_set_present(c)	((c)->state |= MMC_STATE_PRESENT)
+#define mmc_card_set_readonly(c) ((c)->state |= MMC_STATE_READONLY)
+#define mmc_card_set_blockaddr(c) ((c)->state |= MMC_STATE_BLOCKADDR)
+#define mmc_card_set_ext_capacity(c) ((c)->state |= MMC_CARD_SDXC)
+#define mmc_card_set_removed(c) ((c)->state |= MMC_CARD_REMOVED)
+#define mmc_card_set_doing_bkops(c)	((c)->state |= MMC_STATE_DOING_BKOPS)
+#define mmc_card_clr_doing_bkops(c)	((c)->state &= ~MMC_STATE_DOING_BKOPS)
+#define mmc_card_set_suspended(c) ((c)->state |= MMC_STATE_SUSPENDED)
+#define mmc_card_clr_suspended(c) ((c)->state &= ~MMC_STATE_SUSPENDED)
+
+/*
+ * The world is not perfect and supplies us with broken mmc/sdio devices.
+ * For at least some of these bugs we need a work-around.
+ */
+struct mmc_fixup {
+	/* CID-specific fields. */
+	const char *name;
+
+	/* Valid revision range */
+	u64 rev_start, rev_end;
+
+	unsigned int manfid;
+	unsigned short oemid;
+
+	/* SDIO-specific fields. You can use SDIO_ANY_ID here of course */
+	u16 cis_vendor, cis_device;
+
+	/* for MMC cards */
+	unsigned int ext_csd_rev;
+
+	void (*vendor_fixup)(struct mmc_card *card, int data);
+	int data;
+};
+
+#define CID_MANFID_ANY (-1u)
+#define CID_OEMID_ANY ((unsigned short) -1)
+#define CID_NAME_ANY (NULL)
+
+#define EXT_CSD_REV_ANY (-1u)
+
+#define CID_MANFID_SANDISK      0x2
+#define CID_MANFID_TOSHIBA      0x11
+#define CID_MANFID_MICRON       0x13
+#define CID_MANFID_SAMSUNG      0x15
+#define CID_MANFID_KINGSTON     0x70
+#define CID_MANFID_HYNIX	0x90
+
+#define END_FIXUP { NULL }
+
+#define _FIXUP_EXT(_name, _manfid, _oemid, _rev_start, _rev_end,	\
+		   _cis_vendor, _cis_device,				\
+		   _fixup, _data, _ext_csd_rev)				\
+	{								\
+		.name = (_name),					\
+		.manfid = (_manfid),					\
+		.oemid = (_oemid),					\
+		.rev_start = (_rev_start),				\
+		.rev_end = (_rev_end),					\
+		.cis_vendor = (_cis_vendor),				\
+		.cis_device = (_cis_device),				\
+		.vendor_fixup = (_fixup),				\
+		.data = (_data),					\
+		.ext_csd_rev = (_ext_csd_rev),				\
+	}
+
+#define MMC_FIXUP_REV(_name, _manfid, _oemid, _rev_start, _rev_end,	\
+		      _fixup, _data, _ext_csd_rev)			\
+	_FIXUP_EXT(_name, _manfid,					\
+		   _oemid, _rev_start, _rev_end,			\
+		   SDIO_ANY_ID, SDIO_ANY_ID,				\
+		   _fixup, _data, _ext_csd_rev)				\
+
+#define MMC_FIXUP(_name, _manfid, _oemid, _fixup, _data) \
+	MMC_FIXUP_REV(_name, _manfid, _oemid, 0, -1ull, _fixup, _data,	\
+		      EXT_CSD_REV_ANY)
+
+#define MMC_FIXUP_EXT_CSD_REV(_name, _manfid, _oemid, _fixup, _data,	\
+			      _ext_csd_rev)				\
+	MMC_FIXUP_REV(_name, _manfid, _oemid, 0, -1ull, _fixup, _data,	\
+		      _ext_csd_rev)
+
+#define SDIO_FIXUP(_vendor, _device, _fixup, _data)			\
+	_FIXUP_EXT(CID_NAME_ANY, CID_MANFID_ANY,			\
+		   CID_OEMID_ANY, 0, -1ull,				\
+		   _vendor, _device,					\
+		   _fixup, _data, EXT_CSD_REV_ANY)			\
+
+#define cid_rev(hwrev, fwrev, year, month)	\
+	(((u64) hwrev) << 40 |			\
+	 ((u64) fwrev) << 32 |			\
+	 ((u64) year) << 16 |			\
+	 ((u64) month))
+
+#define cid_rev_card(card)		\
+	cid_rev(card->cid.hwrev,	\
+		card->cid.fwrev,	\
+		card->cid.year,		\
+		card->cid.month)
+
+/*
+ * Unconditionally quirk add/remove.
+ */
+static inline void __maybe_unused add_quirk(struct mmc_card *card, int data)
+{
+	card->quirks |= data;
+}
+
+static inline void __maybe_unused remove_quirk(struct mmc_card *card, int data)
+{
+	card->quirks &= ~data;
+}
+
+/*
+ * Quirk add/remove for MMC products.
+ */
+static inline void __maybe_unused add_quirk_mmc(struct mmc_card *card, int data)
+{
+	if (mmc_card_mmc(card))
+		card->quirks |= data;
+}
+
+static inline void __maybe_unused remove_quirk_mmc(struct mmc_card *card,
+						   int data)
+{
+	if (mmc_card_mmc(card))
+		card->quirks &= ~data;
+}
+
+/*
+ * Quirk add/remove for SD products.
+ */
+static inline void __maybe_unused add_quirk_sd(struct mmc_card *card, int data)
+{
+	if (mmc_card_sd(card))
+		card->quirks |= data;
+}
+
+static inline void __maybe_unused remove_quirk_sd(struct mmc_card *card,
+						  int data)
+{
+	if (mmc_card_sd(card))
+		card->quirks &= ~data;
+}
+
+static inline int mmc_card_lenient_fn0(const struct mmc_card *c)
+{
+	return c->quirks & MMC_QUIRK_LENIENT_FN0;
+}
+
+static inline int mmc_blksz_for_byte_mode(const struct mmc_card *c)
+{
+	return c->quirks & MMC_QUIRK_BLKSZ_FOR_BYTE_MODE;
+}
+
+static inline int mmc_card_disable_cd(const struct mmc_card *c)
+{
+	return c->quirks & MMC_QUIRK_DISABLE_CD;
+}
+
+static inline int mmc_card_nonstd_func_interface(const struct mmc_card *c)
+{
+	return c->quirks & MMC_QUIRK_NONSTD_FUNC_IF;
+}
+
+static inline int mmc_card_broken_byte_mode_512(const struct mmc_card *c)
+{
+	return c->quirks & MMC_QUIRK_BROKEN_BYTE_MODE_512;
+}
+
+static inline int mmc_card_long_read_time(const struct mmc_card *c)
+{
+	return c->quirks & MMC_QUIRK_LONG_READ_TIME;
+}
+
+static inline int mmc_card_broken_irq_polling(const struct mmc_card *c)
+{
+	return c->quirks & MMC_QUIRK_BROKEN_IRQ_POLLING;
+}
+
+static inline int mmc_card_broken_hpi(const struct mmc_card *c)
+{
+	return c->quirks & MMC_QUIRK_BROKEN_HPI;
+}
+
+#endif
drivers/mmc/core/core.c | +61 -55
···
 #include <trace/events/mmc.h>

 #include "core.h"
+#include "card.h"
 #include "bus.h"
 #include "host.h"
 #include "sdio_bus.h"
···
 }

 /**
- *	mmc_start_req - start a non-blocking request
+ * mmc_finalize_areq() - finalize an asynchronous request
+ * @host: MMC host to finalize any ongoing request on
+ *
+ * Returns the status of the ongoing asynchronous request, but
+ * MMC_BLK_SUCCESS if no request was going on.
+ */
+static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
+{
+	enum mmc_blk_status status;
+
+	if (!host->areq)
+		return MMC_BLK_SUCCESS;
+
+	status = mmc_wait_for_data_req_done(host, host->areq->mrq);
+	if (status == MMC_BLK_NEW_REQUEST)
+		return status;
+
+	/*
+	 * Check BKOPS urgency for each R1 response
+	 */
+	if (host->card && mmc_card_mmc(host->card) &&
+	    ((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) ||
+	     (mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) &&
+	    (host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
+		mmc_start_bkops(host->card, true);
+	}
+
+	return status;
+}
+
+/**
+ * mmc_start_areq - start an asynchronous request
 *	@host: MMC host to start command
- *	@areq: async request to start
- *	@error: out parameter returns 0 for success, otherwise non zero
+ *	@areq: asynchronous request to start
+ *	@ret_stat: out parameter for status
 *
 *	Start a new MMC custom command request for a host.
 *	If there is on ongoing async request wait for completion
···
 *	return the completed request. If there is no ongoing request, NULL
 *	is returned without waiting. NULL is not an error condition.
 */
-struct mmc_async_req *mmc_start_req(struct mmc_host *host,
-				    struct mmc_async_req *areq,
-				    enum mmc_blk_status *ret_stat)
+struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
+				     struct mmc_async_req *areq,
+				     enum mmc_blk_status *ret_stat)
 {
-	enum mmc_blk_status status = MMC_BLK_SUCCESS;
+	enum mmc_blk_status status;
 	int start_err = 0;
 	struct mmc_async_req *data = host->areq;
···
 	if (areq)
 		mmc_pre_req(host, areq->mrq);

-	if (host->areq) {
-		status = mmc_wait_for_data_req_done(host, host->areq->mrq);
-		if (status == MMC_BLK_NEW_REQUEST) {
-			if (ret_stat)
-				*ret_stat = status;
-			/*
-			 * The previous request was not completed,
-			 * nothing to return
-			 */
-			return NULL;
-		}
-		/*
-		 * Check BKOPS urgency for each R1 response
-		 */
-		if (host->card && mmc_card_mmc(host->card) &&
-		    ((mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1) ||
-		     (mmc_resp_type(host->areq->mrq->cmd) == MMC_RSP_R1B)) &&
-		    (host->areq->mrq->cmd->resp[0] & R1_EXCEPTION_EVENT)) {
+	/* Finalize previous request */
+	status = mmc_finalize_areq(host);

-			/* Cancel the prepared request */
-			if (areq)
-				mmc_post_req(host, areq->mrq, -EINVAL);
-
-			mmc_start_bkops(host->card, true);
-
-			/* prepare the request again */
-			if (areq)
-				mmc_pre_req(host, areq->mrq);
-		}
+	/* The previous request is still going on... */
+	if (status == MMC_BLK_NEW_REQUEST) {
+		if (ret_stat)
+			*ret_stat = status;
+		return NULL;
 	}

+	/* Fine so far, start the new request! */
 	if (status == MMC_BLK_SUCCESS && areq)
 		start_err = __mmc_start_data_req(host, areq->mrq);

+	/* Postprocess the old request at this point */
 	if (host->areq)
 		mmc_post_req(host, host->areq->mrq, 0);

-	 /* Cancel a prepared request if it was not started. */
+	/* Cancel a prepared request if it was not started. */
 	if ((status != MMC_BLK_SUCCESS || start_err) && areq)
 		mmc_post_req(host, areq->mrq, -EINVAL);
···
 		*ret_stat = status;
 	return data;
 }
-EXPORT_SYMBOL(mmc_start_req);
+EXPORT_SYMBOL(mmc_start_areq);

 /**
 *	mmc_wait_for_req - start a request and wait for completion
···
 */
 int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd, int retries)
 {
-	struct mmc_request mrq = {NULL};
+	struct mmc_request mrq = {};

 	WARN_ON(!host->claimed);
···
 	return ocr;
 }

-int __mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage)
+int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage)
 {
 	int err = 0;
 	int old_signal_voltage = host->ios.signal_voltage;
···
 }

-int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage, u32 ocr)
+int mmc_set_uhs_voltage(struct mmc_host *host, u32 ocr)
 {
-	struct mmc_command cmd = {0};
+	struct mmc_command cmd = {};
 	int err = 0;
 	u32 clock;
-
-	/*
-	 * Send CMD11 only if the request is to switch the card to
-	 * 1.8V signalling.
-	 */
-	if (signal_voltage == MMC_SIGNAL_VOLTAGE_330)
-		return __mmc_set_signal_voltage(host, signal_voltage);

 	/*
 	 * If we cannot switch voltages, return failure so the caller
···
 	host->ios.clock = 0;
 	mmc_set_ios(host);

-	if (__mmc_set_signal_voltage(host, signal_voltage)) {
+	if (mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180)) {
 		/*
 		 * Voltages may not have been switched, but we've already
 		 * sent CMD11, so a power cycle is required anyway
···
 	mmc_set_initial_state(host);

 	/* Try to set signal voltage to 3.3V but fall back to 1.8v or 1.2v */
-	if (__mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330) == 0)
+	if (!mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330))
 		dev_dbg(mmc_dev(host), "Initial signal voltage of 3.3v\n");
-	else if (__mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180) == 0)
+	else if (!mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180))
 		dev_dbg(mmc_dev(host), "Initial signal voltage of 1.8v\n");
-	else if (__mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120) == 0)
+	else if (!mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120))
 		dev_dbg(mmc_dev(host), "Initial signal voltage of 1.2v\n");

 	/*
···
 static int mmc_do_erase(struct mmc_card *card, unsigned int from,
 			unsigned int to, unsigned int arg)
 {
-	struct mmc_command cmd = {0};
+	struct mmc_command cmd = {};
 	unsigned int qty = 0, busy_timeout = 0;
 	bool use_r1b_resp = false;
 	unsigned long timeout;
···
 int mmc_set_blocklen(struct mmc_card *card, unsigned int blocklen)
 {
-	struct mmc_command cmd = {0};
+	struct mmc_command cmd = {};

 	if (mmc_card_blockaddr(card) || mmc_card_ddr52(card) ||
 	    mmc_card_hs400(card) || mmc_card_hs400es(card))
···
 int mmc_set_blockcount(struct mmc_card *card, unsigned int blockcount,
 			bool is_rel_write)
 {
-	struct mmc_command cmd = {0};
+	struct mmc_command cmd = {};

 	cmd.opcode = MMC_SET_BLOCK_COUNT;
 	cmd.arg = blockcount & 0x0000FFFF;
drivers/mmc/core/core.h | +42 -3
···
 #define _MMC_CORE_CORE_H

 #include <linux/delay.h>
+#include <linux/sched.h>
+
+struct mmc_host;
+struct mmc_card;
+struct mmc_request;

 #define MMC_CMD_RETRIES        3
···
 void mmc_set_bus_mode(struct mmc_host *host, unsigned int mode);
 void mmc_set_bus_width(struct mmc_host *host, unsigned int width);
 u32 mmc_select_voltage(struct mmc_host *host, u32 ocr);
-int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage, u32 ocr);
-int __mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage);
+int mmc_set_uhs_voltage(struct mmc_host *host, u32 ocr);
+int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage);
 void mmc_set_timing(struct mmc_host *host, unsigned int timing);
 void mmc_set_driver_type(struct mmc_host *host, unsigned int drv_type);
 int mmc_select_drive_strength(struct mmc_card *card, unsigned int max_dtr,
···
 void mmc_stop_host(struct mmc_host *host);

 int _mmc_detect_card_removed(struct mmc_host *host);
+int mmc_detect_card_removed(struct mmc_host *host);

 int mmc_attach_mmc(struct mmc_host *host);
 int mmc_attach_sd(struct mmc_host *host);
···
 static inline void mmc_unregister_pm_notifier(struct mmc_host *host) { }
 #endif

-#endif
+void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq);
+bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);

+int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
+		unsigned int arg);
+int mmc_can_erase(struct mmc_card *card);
+int mmc_can_trim(struct mmc_card *card);
+int mmc_can_discard(struct mmc_card *card);
+int mmc_can_sanitize(struct mmc_card *card);
+int mmc_can_secure_erase_trim(struct mmc_card *card);
+int mmc_erase_group_aligned(struct mmc_card *card, unsigned int from,
+		unsigned int nr);
+unsigned int mmc_calc_max_discard(struct mmc_card *card);
+
+int mmc_set_blocklen(struct mmc_card *card, unsigned int blocklen);
+int mmc_set_blockcount(struct mmc_card *card, unsigned int blockcount,
+		bool is_rel_write);
+
+int __mmc_claim_host(struct mmc_host *host, atomic_t *abort);
+void mmc_release_host(struct mmc_host *host);
+void mmc_get_card(struct mmc_card *card);
+void mmc_put_card(struct mmc_card *card);
+
+/**
+ * mmc_claim_host - exclusively claim a host
+ * @host: mmc host to claim
+ *
+ * Claim a host for a set of operations.
+ */
+static inline void mmc_claim_host(struct mmc_host *host)
+{
+	__mmc_claim_host(host, NULL);
+}
+
+#endif
drivers/mmc/core/debugfs.c | +2
···
 #include <linux/mmc/host.h>

 #include "core.h"
+#include "card.h"
+#include "host.h"
 #include "mmc_ops.h"

 #ifdef CONFIG_FAIL_MMC_REQUEST
drivers/mmc/core/host.c | +8 -16
···
 #define cls_dev_to_mmc_host(d)	container_of(d, struct mmc_host, class_dev)

 static DEFINE_IDA(mmc_host_ida);
-static DEFINE_SPINLOCK(mmc_host_lock);

 static void mmc_host_classdev_release(struct device *dev)
 {
 	struct mmc_host *host = cls_dev_to_mmc_host(dev);
-	spin_lock(&mmc_host_lock);
-	ida_remove(&mmc_host_ida, host->index);
-	spin_unlock(&mmc_host_lock);
+	ida_simple_remove(&mmc_host_ida, host->index);
 	kfree(host);
 }
···
 	if (of_property_read_bool(np, "wakeup-source") ||
 	    of_property_read_bool(np, "enable-sdio-wakeup")) /* legacy */
 		host->pm_caps |= MMC_PM_WAKE_SDIO_IRQ;
+	if (of_property_read_bool(np, "mmc-ddr-3_3v"))
+		host->caps |= MMC_CAP_3_3V_DDR;
 	if (of_property_read_bool(np, "mmc-ddr-1_8v"))
 		host->caps |= MMC_CAP_1_8V_DDR;
 	if (of_property_read_bool(np, "mmc-ddr-1_2v"))
···
 	/* scanning will be enabled when we're ready */
 	host->rescan_disable = 1;

-again:
-	if (!ida_pre_get(&mmc_host_ida, GFP_KERNEL)) {
+	err = ida_simple_get(&mmc_host_ida, 0, 0, GFP_KERNEL);
+	if (err < 0) {
 		kfree(host);
 		return NULL;
 	}

-	spin_lock(&mmc_host_lock);
-	err = ida_get_new(&mmc_host_ida, &host->index);
-	spin_unlock(&mmc_host_lock);
-
-	if (err == -EAGAIN) {
-		goto again;
-	} else if (err) {
-		kfree(host);
-		return NULL;
-	}
+	host->index = err;

 	dev_set_name(&host->class_dev, "mmc%d", host->index);
···
 	if (mmc_gpio_alloc(host)) {
 		put_device(&host->class_dev);
+		ida_simple_remove(&mmc_host_ida, host->index);
+		kfree(host);
 		return NULL;
 	}
drivers/mmc/core/host.h | +48
···
 */
 #ifndef _MMC_CORE_HOST_H
 #define _MMC_CORE_HOST_H
+
 #include <linux/mmc/host.h>

 int mmc_register_host_class(void);
···
 void mmc_retune_hold(struct mmc_host *host);
 void mmc_retune_release(struct mmc_host *host);
 int mmc_retune(struct mmc_host *host);
+void mmc_retune_pause(struct mmc_host *host);
+void mmc_retune_unpause(struct mmc_host *host);
+
+static inline void mmc_retune_recheck(struct mmc_host *host)
+{
+	if (host->hold_retune <= 1)
+		host->retune_now = 1;
+}
+
+static inline int mmc_host_cmd23(struct mmc_host *host)
+{
+	return host->caps & MMC_CAP_CMD23;
+}
+
+static inline int mmc_boot_partition_access(struct mmc_host *host)
+{
+	return !(host->caps2 & MMC_CAP2_BOOTPART_NOACC);
+}
+
+static inline int mmc_host_uhs(struct mmc_host *host)
+{
+	return host->caps &
+		(MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 |
+		 MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 |
+		 MMC_CAP_UHS_DDR50);
+}
+
+static inline bool mmc_card_hs200(struct mmc_card *card)
+{
+	return card->host->ios.timing == MMC_TIMING_MMC_HS200;
+}
+
+static inline bool mmc_card_ddr52(struct mmc_card *card)
+{
+	return card->host->ios.timing == MMC_TIMING_MMC_DDR52;
+}
+
+static inline bool mmc_card_hs400(struct mmc_card *card)
+{
+	return card->host->ios.timing == MMC_TIMING_MMC_HS400;
+}
+
+static inline bool mmc_card_hs400es(struct mmc_card *card)
+{
+	return card->host->ios.enhanced_strobe;
+}
+

 #endif
drivers/mmc/core/mmc.c | +50 -26
···
 #include <linux/mmc/mmc.h>

 #include "core.h"
+#include "card.h"
 #include "host.h"
 #include "bus.h"
 #include "mmc_ops.h"
+#include "quirks.h"
 #include "sd_ops.h"

 #define DEFAULT_CMD6_TIMEOUT_MS	500
···
 static const unsigned int tacc_mant[] = {
 	0,	10,	12,	13,	15,	20,	25,	30,
 	35,	40,	45,	50,	55,	60,	70,	80,
-};
-
-static const struct mmc_fixup mmc_ext_csd_fixups[] = {
-	/*
-	 * Certain Hynix eMMC 4.41 cards might get broken when HPI feature
-	 * is used so disable the HPI feature for such buggy cards.
-	 */
-	MMC_FIXUP_EXT_CSD_REV(CID_NAME_ANY, CID_MANFID_HYNIX,
-			      0x014a, add_quirk, MMC_QUIRK_BROKEN_HPI, 5),
-
-	END_FIXUP
 };

 #define UNSTUFF_BITS(resp,start,size)					\
···
 		avail_type |= EXT_CSD_CARD_TYPE_HS_52;
 	}

-	if (caps & MMC_CAP_1_8V_DDR &&
+	if (caps & (MMC_CAP_1_8V_DDR | MMC_CAP_3_3V_DDR) &&
 	    card_type & EXT_CSD_CARD_TYPE_DDR_1_8V) {
 		hs_max_dtr = MMC_HIGH_DDR_MAX_DTR;
 		avail_type |= EXT_CSD_CARD_TYPE_DDR_1_8V;
···
 			mmc_hostname(card->host));
 		}
 	}
+}
+
+static void mmc_part_add(struct mmc_card *card, unsigned int size,
+			 unsigned int part_cfg, char *name, int idx, bool ro,
+			 int area_type)
+{
+	card->part[card->nr_parts].size = size;
+	card->part[card->nr_parts].part_cfg = part_cfg;
+	sprintf(card->part[card->nr_parts].name, name, idx);
+	card->part[card->nr_parts].force_ro = ro;
+	card->part[card->nr_parts].area_type = area_type;
+	card->nr_parts++;
 }

 static void mmc_manage_gp_partitions(struct mmc_card *card, u8 *ext_csd)
···
 			EXT_CSD_MANUAL_BKOPS_MASK);
 		card->ext_csd.raw_bkops_status =
 			ext_csd[EXT_CSD_BKOPS_STATUS];
-		if (!card->ext_csd.man_bkops_en)
-			pr_debug("%s: MAN_BKOPS_EN bit is not set\n",
+		if (card->ext_csd.man_bkops_en)
+			pr_debug("%s: MAN_BKOPS_EN bit is set\n",
+				 mmc_hostname(card->host));
+		card->ext_csd.auto_bkops_en =
+			(ext_csd[EXT_CSD_BKOPS_EN] &
+				EXT_CSD_AUTO_BKOPS_MASK);
+		if (card->ext_csd.auto_bkops_en)
+			pr_debug("%s: AUTO_BKOPS_EN bit is set\n",
 				 mmc_hostname(card->host));
 	}
···
 		card->ext_csd.ffu_capable =
 			(ext_csd[EXT_CSD_SUPPORTED_MODE] & 0x1) &&
 			!(ext_csd[EXT_CSD_FW_CONFIG] & 0x1);
+
+		card->ext_csd.pre_eol_info = ext_csd[EXT_CSD_PRE_EOL_INFO];
+		card->ext_csd.device_life_time_est_typ_a =
+			ext_csd[EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A];
+		card->ext_csd.device_life_time_est_typ_b =
+			ext_csd[EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B];
 	}

 	/* eMMC v5.1 or later */
···
 MMC_DEV_ATTR(name, "%s\n", card->cid.prod_name);
 MMC_DEV_ATTR(oemid, "0x%04x\n", card->cid.oemid);
 MMC_DEV_ATTR(prv, "0x%x\n", card->cid.prv);
+MMC_DEV_ATTR(pre_eol_info, "%02x\n", card->ext_csd.pre_eol_info);
+MMC_DEV_ATTR(life_time, "0x%02x 0x%02x\n",
+	card->ext_csd.device_life_time_est_typ_a,
+	card->ext_csd.device_life_time_est_typ_b);
 MMC_DEV_ATTR(serial, "0x%08x\n", card->cid.serial);
 MMC_DEV_ATTR(enhanced_area_offset, "%llu\n",
 		card->ext_csd.enhanced_area_offset);
···
 	&dev_attr_name.attr,
 	&dev_attr_oemid.attr,
 	&dev_attr_prv.attr,
+	&dev_attr_pre_eol_info.attr,
+	&dev_attr_life_time.attr,
 	&dev_attr_serial.attr,
 	&dev_attr_enhanced_area_offset.attr,
 	&dev_attr_enhanced_area_size.attr,
···
 	 *
 	 * WARNING: eMMC rules are NOT the same as SD DDR
 	 */
-	err = -EINVAL;
-	if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_DDR_1_2V)
-		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);
+	if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_DDR_1_2V) {
+		err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);
+		if (!err)
+			return 0;
+	}

-	if (err && (card->mmc_avail_type & EXT_CSD_CARD_TYPE_DDR_1_8V))
-		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);
+	if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_DDR_1_8V &&
+	    host->caps & MMC_CAP_1_8V_DDR)
+		err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);

 	/* make sure vccq is 3.3v after switching disaster */
 	if (err)
-		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330);
+		err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330);

 	return err;
 }
···
 	}

 	if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS400_1_2V)
-		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);
+		err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);

 	if (err && card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS400_1_8V)
-		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);
+		err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);

 	/* If fails try again during next card power cycle */
 	if (err)
···
 	old_signal_voltage = host->ios.signal_voltage;
 	if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS200_1_2V)
-		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);
+		err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);

 	if (err && card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS200_1_8V)
-		err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);
+		err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);

 	/* If fails try again during next card power cycle */
 	if (err)
···
 err:
 	if (err) {
 		/* fall back to the old signal voltage, if fails report error */
-		if (__mmc_set_signal_voltage(host, old_signal_voltage))
+		if (mmc_set_signal_voltage(host, old_signal_voltage))
 			err = -EIO;

 		pr_err("%s: %s failed, error %d\n", mmc_hostname(card->host),
···
 static int mmc_sleep(struct mmc_host *host)
 {
-	struct mmc_command cmd = {0};
+	struct mmc_command cmd = {};
 	struct mmc_card *card = host->card;
 	unsigned int timeout_ms = DIV_ROUND_UP(card->ext_csd.sa_timeout, 10000);
 	int err;
+22 -22
drivers/mmc/core/mmc_ops.c
··· 57 57 int mmc_send_status(struct mmc_card *card, u32 *status) 58 58 { 59 59 int err; 60 - struct mmc_command cmd = {0}; 60 + struct mmc_command cmd = {}; 61 61 62 62 cmd.opcode = MMC_SEND_STATUS; 63 63 if (!mmc_host_is_spi(card->host)) ··· 79 79 80 80 static int _mmc_select_card(struct mmc_host *host, struct mmc_card *card) 81 81 { 82 - struct mmc_command cmd = {0}; 82 + struct mmc_command cmd = {}; 83 83 84 84 cmd.opcode = MMC_SELECT_CARD; 85 85 ··· 115 115 */ 116 116 int mmc_set_dsr(struct mmc_host *host) 117 117 { 118 - struct mmc_command cmd = {0}; 118 + struct mmc_command cmd = {}; 119 119 120 120 cmd.opcode = MMC_SET_DSR; 121 121 ··· 128 128 int mmc_go_idle(struct mmc_host *host) 129 129 { 130 130 int err; 131 - struct mmc_command cmd = {0}; 131 + struct mmc_command cmd = {}; 132 132 133 133 /* 134 134 * Non-SPI hosts need to prevent chipselect going active during ··· 164 164 165 165 int mmc_send_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr) 166 166 { 167 - struct mmc_command cmd = {0}; 167 + struct mmc_command cmd = {}; 168 168 int i, err = 0; 169 169 170 170 cmd.opcode = MMC_SEND_OP_COND; ··· 203 203 int mmc_all_send_cid(struct mmc_host *host, u32 *cid) 204 204 { 205 205 int err; 206 - struct mmc_command cmd = {0}; 206 + struct mmc_command cmd = {}; 207 207 208 208 cmd.opcode = MMC_ALL_SEND_CID; 209 209 cmd.arg = 0; ··· 220 220 221 221 int mmc_set_relative_addr(struct mmc_card *card) 222 222 { 223 - struct mmc_command cmd = {0}; 223 + struct mmc_command cmd = {}; 224 224 225 225 cmd.opcode = MMC_SET_RELATIVE_ADDR; 226 226 cmd.arg = card->rca << 16; ··· 233 233 mmc_send_cxd_native(struct mmc_host *host, u32 arg, u32 *cxd, int opcode) 234 234 { 235 235 int err; 236 - struct mmc_command cmd = {0}; 236 + struct mmc_command cmd = {}; 237 237 238 238 cmd.opcode = opcode; 239 239 cmd.arg = arg; ··· 256 256 mmc_send_cxd_data(struct mmc_card *card, struct mmc_host *host, 257 257 u32 opcode, void *buf, unsigned len) 258 258 { 259 - struct mmc_request mrq = 
{NULL}; 260 - struct mmc_command cmd = {0}; 261 - struct mmc_data data = {0}; 259 + struct mmc_request mrq = {}; 260 + struct mmc_command cmd = {}; 261 + struct mmc_data data = {}; 262 262 struct scatterlist sg; 263 263 264 264 mrq.cmd = &cmd; ··· 387 387 388 388 int mmc_spi_read_ocr(struct mmc_host *host, int highcap, u32 *ocrp) 389 389 { 390 - struct mmc_command cmd = {0}; 390 + struct mmc_command cmd = {}; 391 391 int err; 392 392 393 393 cmd.opcode = MMC_SPI_READ_OCR; ··· 402 402 403 403 int mmc_spi_set_crc(struct mmc_host *host, int use_crc) 404 404 { 405 - struct mmc_command cmd = {0}; 405 + struct mmc_command cmd = {}; 406 406 int err; 407 407 408 408 cmd.opcode = MMC_SPI_CRC_ON_OFF; ··· 530 530 { 531 531 struct mmc_host *host = card->host; 532 532 int err; 533 - struct mmc_command cmd = {0}; 533 + struct mmc_command cmd = {}; 534 534 bool use_r1b_resp = use_busy_signal; 535 535 unsigned char old_timing = host->ios.timing; 536 536 ··· 610 610 611 611 int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error) 612 612 { 613 - struct mmc_request mrq = {NULL}; 614 - struct mmc_command cmd = {0}; 615 - struct mmc_data data = {0}; 613 + struct mmc_request mrq = {}; 614 + struct mmc_command cmd = {}; 615 + struct mmc_data data = {}; 616 616 struct scatterlist sg; 617 617 struct mmc_ios *ios = &host->ios; 618 618 const u8 *tuning_block_pattern; ··· 679 679 680 680 int mmc_abort_tuning(struct mmc_host *host, u32 opcode) 681 681 { 682 - struct mmc_command cmd = {0}; 682 + struct mmc_command cmd = {}; 683 683 684 684 /* 685 685 * eMMC specification specifies that CMD12 can be used to stop a tuning ··· 706 706 mmc_send_bus_test(struct mmc_card *card, struct mmc_host *host, u8 opcode, 707 707 u8 len) 708 708 { 709 - struct mmc_request mrq = {NULL}; 710 - struct mmc_command cmd = {0}; 711 - struct mmc_data data = {0}; 709 + struct mmc_request mrq = {}; 710 + struct mmc_command cmd = {}; 711 + struct mmc_data data = {}; 712 712 struct scatterlist sg; 713 713 u8 
*data_buf; 714 714 u8 *test_buf; ··· 802 802 803 803 int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status) 804 804 { 805 - struct mmc_command cmd = {0}; 805 + struct mmc_command cmd = {}; 806 806 unsigned int opcode; 807 807 int err; 808 808
+14
drivers/mmc/core/mmc_ops.h
··· 12 12 #ifndef _MMC_MMC_OPS_H 13 13 #define _MMC_MMC_OPS_H 14 14 15 + #include <linux/types.h> 16 + 17 + struct mmc_host; 18 + struct mmc_card; 19 + 15 20 int mmc_select_card(struct mmc_card *card); 16 21 int mmc_deselect_cards(struct mmc_host *host); 17 22 int mmc_set_dsr(struct mmc_host *host); ··· 31 26 int mmc_spi_set_crc(struct mmc_host *host, int use_crc); 32 27 int mmc_bus_test(struct mmc_card *card, u8 bus_width); 33 28 int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status); 29 + int mmc_interrupt_hpi(struct mmc_card *card); 34 30 int mmc_can_ext_csd(struct mmc_card *card); 31 + int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd); 35 32 int mmc_switch_status(struct mmc_card *card); 36 33 int __mmc_switch_status(struct mmc_card *card, bool crc_err_fatal); 37 34 int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, 38 35 unsigned int timeout_ms, unsigned char timing, 39 36 bool use_busy_signal, bool send_status, bool retry_crc_err); 37 + int mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value, 38 + unsigned int timeout_ms); 39 + int mmc_stop_bkops(struct mmc_card *card); 40 + int mmc_read_bkops_status(struct mmc_card *card); 41 + void mmc_start_bkops(struct mmc_card *card, bool from_exception); 42 + int mmc_can_reset(struct mmc_card *card); 43 + int mmc_flush_cache(struct mmc_card *card); 40 44 41 45 #endif 42 46
+57 -59
drivers/mmc/core/mmc_test.c
··· 22 22 #include <linux/seq_file.h> 23 23 #include <linux/module.h> 24 24 25 + #include "core.h" 26 + #include "card.h" 27 + #include "host.h" 28 + #include "bus.h" 29 + 25 30 #define RESULT_OK 0 26 31 #define RESULT_FAIL 1 27 32 #define RESULT_UNSUP_HOST 2 ··· 265 260 static int mmc_test_wait_busy(struct mmc_test_card *test) 266 261 { 267 262 int ret, busy; 268 - struct mmc_command cmd = {0}; 263 + struct mmc_command cmd = {}; 269 264 270 265 busy = 0; 271 266 do { ··· 282 277 if (!busy && mmc_test_busy(&cmd)) { 283 278 busy = 1; 284 279 if (test->card->host->caps & MMC_CAP_WAIT_WHILE_BUSY) 285 - pr_info("%s: Warning: Host did not " 286 - "wait for busy state to end.\n", 280 + pr_info("%s: Warning: Host did not wait for busy state to end.\n", 287 281 mmc_hostname(test->card->host)); 288 282 } 289 283 } while (mmc_test_busy(&cmd)); ··· 296 292 static int mmc_test_buffer_transfer(struct mmc_test_card *test, 297 293 u8 *buffer, unsigned addr, unsigned blksz, int write) 298 294 { 299 - struct mmc_request mrq = {0}; 300 - struct mmc_command cmd = {0}; 301 - struct mmc_command stop = {0}; 302 - struct mmc_data data = {0}; 295 + struct mmc_request mrq = {}; 296 + struct mmc_command cmd = {}; 297 + struct mmc_command stop = {}; 298 + struct mmc_data data = {}; 303 299 304 300 struct scatterlist sg; 305 301 ··· 361 357 if (max_segs > max_page_cnt) 362 358 max_segs = max_page_cnt; 363 359 364 - mem = kzalloc(sizeof(struct mmc_test_mem), GFP_KERNEL); 360 + mem = kzalloc(sizeof(*mem), GFP_KERNEL); 365 361 if (!mem) 366 362 return NULL; 367 363 368 - mem->arr = kzalloc(sizeof(struct mmc_test_pages) * max_segs, 369 - GFP_KERNEL); 364 + mem->arr = kcalloc(max_segs, sizeof(*mem->arr), GFP_KERNEL); 370 365 if (!mem->arr) 371 366 goto out_free; 372 367 ··· 549 546 if (!test->gr) 550 547 return; 551 548 552 - tr = kmalloc(sizeof(struct mmc_test_transfer_result), GFP_KERNEL); 549 + tr = kmalloc(sizeof(*tr), GFP_KERNEL); 553 550 if (!tr) 554 551 return; 555 552 ··· 644 641 if 
(write) 645 642 memset(test->buffer, 0xDF, 512); 646 643 else { 647 - for (i = 0;i < 512;i++) 644 + for (i = 0; i < 512; i++) 648 645 test->buffer[i] = i; 649 646 } 650 647 651 - for (i = 0;i < BUFFER_SIZE / 512;i++) { 648 + for (i = 0; i < BUFFER_SIZE / 512; i++) { 652 649 ret = mmc_test_buffer_transfer(test, test->buffer, i, 512, 1); 653 650 if (ret) 654 651 return ret; ··· 677 674 678 675 memset(test->buffer, 0, 512); 679 676 680 - for (i = 0;i < BUFFER_SIZE / 512;i++) { 677 + for (i = 0; i < BUFFER_SIZE / 512; i++) { 681 678 ret = mmc_test_buffer_transfer(test, test->buffer, i, 512, 1); 682 679 if (ret) 683 680 return ret; ··· 853 850 for (i = 0; i < count; i++) { 854 851 mmc_test_prepare_mrq(test, cur_areq->mrq, sg, sg_len, dev_addr, 855 852 blocks, blksz, write); 856 - done_areq = mmc_start_req(test->card->host, cur_areq, &status); 853 + done_areq = mmc_start_areq(test->card->host, cur_areq, &status); 857 854 858 855 if (status != MMC_BLK_SUCCESS || (!done_areq && i > 0)) { 859 856 ret = RESULT_FAIL; ··· 872 869 dev_addr += blocks; 873 870 } 874 871 875 - done_areq = mmc_start_req(test->card->host, NULL, &status); 872 + done_areq = mmc_start_areq(test->card->host, NULL, &status); 876 873 if (status != MMC_BLK_SUCCESS) 877 874 ret = RESULT_FAIL; 878 875 ··· 888 885 struct scatterlist *sg, unsigned sg_len, unsigned dev_addr, 889 886 unsigned blocks, unsigned blksz, int write) 890 887 { 891 - struct mmc_request mrq = {0}; 892 - struct mmc_command cmd = {0}; 893 - struct mmc_command stop = {0}; 894 - struct mmc_data data = {0}; 888 + struct mmc_request mrq = {}; 889 + struct mmc_command cmd = {}; 890 + struct mmc_command stop = {}; 891 + struct mmc_data data = {}; 895 892 896 893 mrq.cmd = &cmd; 897 894 mrq.data = &data; ··· 913 910 static int mmc_test_broken_transfer(struct mmc_test_card *test, 914 911 unsigned blocks, unsigned blksz, int write) 915 912 { 916 - struct mmc_request mrq = {0}; 917 - struct mmc_command cmd = {0}; 918 - struct mmc_command stop = {0}; 
919 - struct mmc_data data = {0}; 913 + struct mmc_request mrq = {}; 914 + struct mmc_command cmd = {}; 915 + struct mmc_command stop = {}; 916 + struct mmc_data data = {}; 920 917 921 918 struct scatterlist sg; 922 919 ··· 949 946 unsigned long flags; 950 947 951 948 if (write) { 952 - for (i = 0;i < blocks * blksz;i++) 949 + for (i = 0; i < blocks * blksz; i++) 953 950 test->scratch[i] = i; 954 951 } else { 955 952 memset(test->scratch, 0, BUFFER_SIZE); ··· 983 980 984 981 memset(test->buffer, 0, sectors * 512); 985 982 986 - for (i = 0;i < sectors;i++) { 983 + for (i = 0; i < sectors; i++) { 987 984 ret = mmc_test_buffer_transfer(test, 988 985 test->buffer + i * 512, 989 986 dev_addr + i, 512, 0); ··· 991 988 return ret; 992 989 } 993 990 994 - for (i = 0;i < blocks * blksz;i++) { 991 + for (i = 0; i < blocks * blksz; i++) { 995 992 if (test->buffer[i] != (u8)i) 996 993 return RESULT_FAIL; 997 994 } 998 995 999 - for (;i < sectors * 512;i++) { 996 + for (; i < sectors * 512; i++) { 1000 997 if (test->buffer[i] != 0xDF) 1001 998 return RESULT_FAIL; 1002 999 } ··· 1004 1001 local_irq_save(flags); 1005 1002 sg_copy_to_buffer(sg, sg_len, test->scratch, BUFFER_SIZE); 1006 1003 local_irq_restore(flags); 1007 - for (i = 0;i < blocks * blksz;i++) { 1004 + for (i = 0; i < blocks * blksz; i++) { 1008 1005 if (test->scratch[i] != (u8)i) 1009 1006 return RESULT_FAIL; 1010 1007 } ··· 1089 1086 1090 1087 sg_init_one(&sg, test->buffer, size); 1091 1088 1092 - return mmc_test_transfer(test, &sg, 1, 0, size/512, 512, 1); 1089 + return mmc_test_transfer(test, &sg, 1, 0, size / 512, 512, 1); 1093 1090 } 1094 1091 1095 1092 static int mmc_test_multi_read(struct mmc_test_card *test) ··· 1110 1107 1111 1108 sg_init_one(&sg, test->buffer, size); 1112 1109 1113 - return mmc_test_transfer(test, &sg, 1, 0, size/512, 512, 0); 1110 + return mmc_test_transfer(test, &sg, 1, 0, size / 512, 512, 0); 1114 1111 } 1115 1112 1116 1113 static int mmc_test_pow2_write(struct mmc_test_card *test) ··· 
1121 1118 if (!test->card->csd.write_partial) 1122 1119 return RESULT_UNSUP_CARD; 1123 1120 1124 - for (i = 1; i < 512;i <<= 1) { 1121 + for (i = 1; i < 512; i <<= 1) { 1125 1122 sg_init_one(&sg, test->buffer, i); 1126 1123 ret = mmc_test_transfer(test, &sg, 1, 0, 1, i, 1); 1127 1124 if (ret) ··· 1139 1136 if (!test->card->csd.read_partial) 1140 1137 return RESULT_UNSUP_CARD; 1141 1138 1142 - for (i = 1; i < 512;i <<= 1) { 1139 + for (i = 1; i < 512; i <<= 1) { 1143 1140 sg_init_one(&sg, test->buffer, i); 1144 1141 ret = mmc_test_transfer(test, &sg, 1, 0, 1, i, 0); 1145 1142 if (ret) ··· 1157 1154 if (!test->card->csd.write_partial) 1158 1155 return RESULT_UNSUP_CARD; 1159 1156 1160 - for (i = 3; i < 512;i += 7) { 1157 + for (i = 3; i < 512; i += 7) { 1161 1158 sg_init_one(&sg, test->buffer, i); 1162 1159 ret = mmc_test_transfer(test, &sg, 1, 0, 1, i, 1); 1163 1160 if (ret) ··· 1175 1172 if (!test->card->csd.read_partial) 1176 1173 return RESULT_UNSUP_CARD; 1177 1174 1178 - for (i = 3; i < 512;i += 7) { 1175 + for (i = 3; i < 512; i += 7) { 1179 1176 sg_init_one(&sg, test->buffer, i); 1180 1177 ret = mmc_test_transfer(test, &sg, 1, 0, 1, i, 0); 1181 1178 if (ret) ··· 1234 1231 1235 1232 for (i = 1; i < TEST_ALIGN_END; i++) { 1236 1233 sg_init_one(&sg, test->buffer + i, size); 1237 - ret = mmc_test_transfer(test, &sg, 1, 0, size/512, 512, 1); 1234 + ret = mmc_test_transfer(test, &sg, 1, 0, size / 512, 512, 1); 1238 1235 if (ret) 1239 1236 return ret; 1240 1237 } ··· 1261 1258 1262 1259 for (i = 1; i < TEST_ALIGN_END; i++) { 1263 1260 sg_init_one(&sg, test->buffer + i, size); 1264 - ret = mmc_test_transfer(test, &sg, 1, 0, size/512, 512, 0); 1261 + ret = mmc_test_transfer(test, &sg, 1, 0, size / 512, 512, 0); 1265 1262 if (ret) 1266 1263 return ret; 1267 1264 } ··· 1360 1357 sg_init_table(&sg, 1); 1361 1358 sg_set_page(&sg, test->highmem, size, 0); 1362 1359 1363 - return mmc_test_transfer(test, &sg, 1, 0, size/512, 512, 1); 1360 + return mmc_test_transfer(test, &sg, 
1, 0, size / 512, 512, 1); 1364 1361 } 1365 1362 1366 1363 static int mmc_test_multi_read_high(struct mmc_test_card *test) ··· 1382 1379 sg_init_table(&sg, 1); 1383 1380 sg_set_page(&sg, test->highmem, size, 0); 1384 1381 1385 - return mmc_test_transfer(test, &sg, 1, 0, size/512, 512, 0); 1382 + return mmc_test_transfer(test, &sg, 1, 0, size / 512, 512, 0); 1386 1383 } 1387 1384 1388 1385 #else ··· 1536 1533 1537 1534 /* 1538 1535 * Initialize an area for testing large transfers. The test area is set to the 1539 - * middle of the card because cards may have different charateristics at the 1536 + * middle of the card because cards may have different characteristics at the 1540 1537 * front (for FAT file system optimization). Optionally, the area is erased 1541 1538 * (if the card supports it) which may improve write performance. Optionally, 1542 1539 * the area is filled with data for subsequent read tests. ··· 1582 1579 if (!t->mem) 1583 1580 return -ENOMEM; 1584 1581 1585 - t->sg = kmalloc(sizeof(struct scatterlist) * t->max_segs, GFP_KERNEL); 1582 + t->sg = kmalloc_array(t->max_segs, sizeof(*t->sg), GFP_KERNEL); 1586 1583 if (!t->sg) { 1587 1584 ret = -ENOMEM; 1588 1585 goto out_free; ··· 2150 2147 int i; 2151 2148 2152 2149 for (i = 0 ; i < rw->len && ret == 0; i++) { 2153 - ret = mmc_test_rw_multiple(test, rw, 512*1024, rw->size, 2150 + ret = mmc_test_rw_multiple(test, rw, 512 * 1024, rw->size, 2154 2151 rw->sg_len[i]); 2155 2152 if (ret) 2156 2153 break; ··· 2402 2399 2403 2400 /* Start ongoing data request */ 2404 2401 if (use_areq) { 2405 - mmc_start_req(host, &test_areq.areq, &blkstat); 2402 + mmc_start_areq(host, &test_areq.areq, &blkstat); 2406 2403 if (blkstat != MMC_BLK_SUCCESS) { 2407 2404 ret = RESULT_FAIL; 2408 2405 goto out_free; ··· 2440 2437 2441 2438 /* Wait for data request to complete */ 2442 2439 if (use_areq) { 2443 - mmc_start_req(host, NULL, &blkstat); 2440 + mmc_start_areq(host, NULL, &blkstat); 2444 2441 if (blkstat != MMC_BLK_SUCCESS) 
2445 2442 ret = RESULT_FAIL; 2446 2443 } else { ··· 2957 2954 2958 2955 mmc_claim_host(test->card->host); 2959 2956 2960 - for (i = 0;i < ARRAY_SIZE(mmc_test_cases);i++) { 2957 + for (i = 0; i < ARRAY_SIZE(mmc_test_cases); i++) { 2961 2958 struct mmc_test_general_result *gr; 2962 2959 2963 2960 if (testcase && ((i + 1) != testcase)) ··· 2970 2967 if (mmc_test_cases[i].prepare) { 2971 2968 ret = mmc_test_cases[i].prepare(test); 2972 2969 if (ret) { 2973 - pr_info("%s: Result: Prepare " 2974 - "stage failed! (%d)\n", 2970 + pr_info("%s: Result: Prepare stage failed! (%d)\n", 2975 2971 mmc_hostname(test->card->host), 2976 2972 ret); 2977 2973 continue; 2978 2974 } 2979 2975 } 2980 2976 2981 - gr = kzalloc(sizeof(struct mmc_test_general_result), 2982 - GFP_KERNEL); 2977 + gr = kzalloc(sizeof(*gr), GFP_KERNEL); 2983 2978 if (gr) { 2984 2979 INIT_LIST_HEAD(&gr->tr_lst); 2985 2980 ··· 3006 3005 mmc_hostname(test->card->host)); 3007 3006 break; 3008 3007 case RESULT_UNSUP_HOST: 3009 - pr_info("%s: Result: UNSUPPORTED " 3010 - "(by host)\n", 3008 + pr_info("%s: Result: UNSUPPORTED (by host)\n", 3011 3009 mmc_hostname(test->card->host)); 3012 3010 break; 3013 3011 case RESULT_UNSUP_CARD: 3014 - pr_info("%s: Result: UNSUPPORTED " 3015 - "(by card)\n", 3012 + pr_info("%s: Result: UNSUPPORTED (by card)\n", 3016 3013 mmc_hostname(test->card->host)); 3017 3014 break; 3018 3015 default: ··· 3025 3026 if (mmc_test_cases[i].cleanup) { 3026 3027 ret = mmc_test_cases[i].cleanup(test); 3027 3028 if (ret) { 3028 - pr_info("%s: Warning: Cleanup " 3029 - "stage failed! (%d)\n", 3029 + pr_info("%s: Warning: Cleanup stage failed! 
(%d)\n", 3030 3030 mmc_hostname(test->card->host), 3031 3031 ret); 3032 3032 } ··· 3111 3113 if (ret) 3112 3114 return ret; 3113 3115 3114 - test = kzalloc(sizeof(struct mmc_test_card), GFP_KERNEL); 3116 + test = kzalloc(sizeof(*test), GFP_KERNEL); 3115 3117 if (!test) 3116 3118 return -ENOMEM; 3117 3119 ··· 3161 3163 3162 3164 mutex_lock(&mmc_test_lock); 3163 3165 3164 - seq_printf(sf, "0:\tRun all tests\n"); 3166 + seq_puts(sf, "0:\tRun all tests\n"); 3165 3167 for (i = 0; i < ARRAY_SIZE(mmc_test_cases); i++) 3166 - seq_printf(sf, "%d:\t%s\n", i+1, mmc_test_cases[i].name); 3168 + seq_printf(sf, "%d:\t%s\n", i + 1, mmc_test_cases[i].name); 3167 3169 3168 3170 mutex_unlock(&mmc_test_lock); 3169 3171 ··· 3216 3218 return -ENODEV; 3217 3219 } 3218 3220 3219 - df = kmalloc(sizeof(struct mmc_test_dbgfs_file), GFP_KERNEL); 3221 + df = kmalloc(sizeof(*df), GFP_KERNEL); 3220 3222 if (!df) { 3221 3223 debugfs_remove(file); 3222 3224 dev_err(&card->dev,
+5 -1
drivers/mmc/core/pwrseq.h
··· 8 8 #ifndef _MMC_CORE_PWRSEQ_H 9 9 #define _MMC_CORE_PWRSEQ_H 10 10 11 - #include <linux/mmc/host.h> 11 + #include <linux/types.h> 12 + 13 + struct mmc_host; 14 + struct device; 15 + struct module; 12 16 13 17 struct mmc_pwrseq_ops { 14 18 void (*pre_power_on)(struct mmc_host *host);
+117
drivers/mmc/core/pwrseq_sd8787.c
··· 1 + /* 2 + * pwrseq_sd8787.c - power sequence support for Marvell SD8787 BT + Wifi chip 3 + * 4 + * Copyright (C) 2016 Matt Ranostay <matt@ranostay.consulting> 5 + * 6 + * Based on the original work pwrseq_simple.c 7 + * Copyright (C) 2014 Linaro Ltd 8 + * Author: Ulf Hansson <ulf.hansson@linaro.org> 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License as published by 12 + * the Free Software Foundation; either version 2 of the License, or 13 + * (at your option) any later version. 14 + * 15 + * This program is distributed in the hope that it will be useful, 16 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 + * GNU General Public License for more details. 19 + * 20 + */ 21 + 22 + #include <linux/delay.h> 23 + #include <linux/init.h> 24 + #include <linux/kernel.h> 25 + #include <linux/platform_device.h> 26 + #include <linux/module.h> 27 + #include <linux/slab.h> 28 + #include <linux/device.h> 29 + #include <linux/err.h> 30 + #include <linux/gpio/consumer.h> 31 + 32 + #include <linux/mmc/host.h> 33 + 34 + #include "pwrseq.h" 35 + 36 + struct mmc_pwrseq_sd8787 { 37 + struct mmc_pwrseq pwrseq; 38 + struct gpio_desc *reset_gpio; 39 + struct gpio_desc *pwrdn_gpio; 40 + }; 41 + 42 + #define to_pwrseq_sd8787(p) container_of(p, struct mmc_pwrseq_sd8787, pwrseq) 43 + 44 + static void mmc_pwrseq_sd8787_pre_power_on(struct mmc_host *host) 45 + { 46 + struct mmc_pwrseq_sd8787 *pwrseq = to_pwrseq_sd8787(host->pwrseq); 47 + 48 + gpiod_set_value_cansleep(pwrseq->reset_gpio, 1); 49 + 50 + msleep(300); 51 + gpiod_set_value_cansleep(pwrseq->pwrdn_gpio, 1); 52 + } 53 + 54 + static void mmc_pwrseq_sd8787_power_off(struct mmc_host *host) 55 + { 56 + struct mmc_pwrseq_sd8787 *pwrseq = to_pwrseq_sd8787(host->pwrseq); 57 + 58 + gpiod_set_value_cansleep(pwrseq->pwrdn_gpio, 0); 59 + 
gpiod_set_value_cansleep(pwrseq->reset_gpio, 0); 60 + } 61 + 62 + static const struct mmc_pwrseq_ops mmc_pwrseq_sd8787_ops = { 63 + .pre_power_on = mmc_pwrseq_sd8787_pre_power_on, 64 + .power_off = mmc_pwrseq_sd8787_power_off, 65 + }; 66 + 67 + static const struct of_device_id mmc_pwrseq_sd8787_of_match[] = { 68 + { .compatible = "mmc-pwrseq-sd8787",}, 69 + {/* sentinel */}, 70 + }; 71 + MODULE_DEVICE_TABLE(of, mmc_pwrseq_sd8787_of_match); 72 + 73 + static int mmc_pwrseq_sd8787_probe(struct platform_device *pdev) 74 + { 75 + struct mmc_pwrseq_sd8787 *pwrseq; 76 + struct device *dev = &pdev->dev; 77 + 78 + pwrseq = devm_kzalloc(dev, sizeof(*pwrseq), GFP_KERNEL); 79 + if (!pwrseq) 80 + return -ENOMEM; 81 + 82 + pwrseq->pwrdn_gpio = devm_gpiod_get(dev, "powerdown", GPIOD_OUT_LOW); 83 + if (IS_ERR(pwrseq->pwrdn_gpio)) 84 + return PTR_ERR(pwrseq->pwrdn_gpio); 85 + 86 + pwrseq->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); 87 + if (IS_ERR(pwrseq->reset_gpio)) 88 + return PTR_ERR(pwrseq->reset_gpio); 89 + 90 + pwrseq->pwrseq.dev = dev; 91 + pwrseq->pwrseq.ops = &mmc_pwrseq_sd8787_ops; 92 + pwrseq->pwrseq.owner = THIS_MODULE; 93 + platform_set_drvdata(pdev, pwrseq); 94 + 95 + return mmc_pwrseq_register(&pwrseq->pwrseq); 96 + } 97 + 98 + static int mmc_pwrseq_sd8787_remove(struct platform_device *pdev) 99 + { 100 + struct mmc_pwrseq_sd8787 *pwrseq = platform_get_drvdata(pdev); 101 + 102 + mmc_pwrseq_unregister(&pwrseq->pwrseq); 103 + 104 + return 0; 105 + } 106 + 107 + static struct platform_driver mmc_pwrseq_sd8787_driver = { 108 + .probe = mmc_pwrseq_sd8787_probe, 109 + .remove = mmc_pwrseq_sd8787_remove, 110 + .driver = { 111 + .name = "pwrseq_sd8787", 112 + .of_match_table = mmc_pwrseq_sd8787_of_match, 113 + }, 114 + }; 115 + 116 + module_platform_driver(mmc_pwrseq_sd8787_driver); 117 + MODULE_LICENSE("GPL v2");
+9 -7
drivers/mmc/core/queue.c
··· 20 20 21 21 #include "queue.h" 22 22 #include "block.h" 23 + #include "core.h" 24 + #include "card.h" 23 25 24 26 #define MMC_QUEUE_BOUNCESZ 65536 25 27 ··· 77 75 set_current_state(TASK_RUNNING); 78 76 mmc_blk_issue_rq(mq, req); 79 77 cond_resched(); 80 - if (mq->flags & MMC_QUEUE_NEW_REQUEST) { 81 - mq->flags &= ~MMC_QUEUE_NEW_REQUEST; 78 + if (mq->new_request) { 79 + mq->new_request = false; 82 80 continue; /* fetch again */ 83 81 } 84 82 ··· 145 143 { 146 144 struct scatterlist *sg; 147 145 148 - sg = kmalloc(sizeof(struct scatterlist)*sg_len, GFP_KERNEL); 146 + sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL); 149 147 if (!sg) 150 148 *err = -ENOMEM; 151 149 else { ··· 392 390 struct request_queue *q = mq->queue; 393 391 unsigned long flags; 394 392 395 - if (!(mq->flags & MMC_QUEUE_SUSPENDED)) { 396 - mq->flags |= MMC_QUEUE_SUSPENDED; 393 + if (!mq->suspended) { 394 + mq->suspended |= true; 397 395 398 396 spin_lock_irqsave(q->queue_lock, flags); 399 397 blk_stop_queue(q); ··· 412 410 struct request_queue *q = mq->queue; 413 411 unsigned long flags; 414 412 415 - if (mq->flags & MMC_QUEUE_SUSPENDED) { 416 - mq->flags &= ~MMC_QUEUE_SUSPENDED; 413 + if (mq->suspended) { 414 + mq->suspended = false; 417 415 418 416 up(&mq->thread_sem); 419 417
+8 -5
drivers/mmc/core/queue.h
··· 1 1 #ifndef MMC_QUEUE_H 2 2 #define MMC_QUEUE_H 3 3 4 + #include <linux/types.h> 5 + #include <linux/blkdev.h> 6 + #include <linux/mmc/core.h> 7 + #include <linux/mmc/host.h> 8 + 4 9 static inline bool mmc_req_is_special(struct request *req) 5 10 { 6 11 return req && ··· 14 9 req_op(req) == REQ_OP_SECURE_ERASE); 15 10 } 16 11 17 - struct request; 18 12 struct task_struct; 19 13 struct mmc_blk_data; 20 14 ··· 33 29 char *bounce_buf; 34 30 struct scatterlist *bounce_sg; 35 31 unsigned int bounce_sg_len; 36 - struct mmc_async_req mmc_active; 32 + struct mmc_async_req areq; 37 33 }; 38 34 39 35 struct mmc_queue { 40 36 struct mmc_card *card; 41 37 struct task_struct *thread; 42 38 struct semaphore thread_sem; 43 - unsigned int flags; 44 - #define MMC_QUEUE_SUSPENDED (1 << 0) 45 - #define MMC_QUEUE_NEW_REQUEST (1 << 1) 39 + bool new_request; 40 + bool suspended; 46 41 bool asleep; 47 42 struct mmc_blk_data *blkdata; 48 43 struct request_queue *queue;
-83
drivers/mmc/core/quirks.c
··· 1 - /* 2 - * This file contains work-arounds for many known SD/MMC 3 - * and SDIO hardware bugs. 4 - * 5 - * Copyright (c) 2011 Andrei Warkentin <andreiw@motorola.com> 6 - * Copyright (c) 2011 Pierre Tardy <tardyp@gmail.com> 7 - * Inspired from pci fixup code: 8 - * Copyright (c) 1999 Martin Mares <mj@ucw.cz> 9 - * 10 - */ 11 - 12 - #include <linux/types.h> 13 - #include <linux/kernel.h> 14 - #include <linux/export.h> 15 - #include <linux/mmc/card.h> 16 - #include <linux/mmc/sdio_ids.h> 17 - 18 - #ifndef SDIO_VENDOR_ID_TI 19 - #define SDIO_VENDOR_ID_TI 0x0097 20 - #endif 21 - 22 - #ifndef SDIO_DEVICE_ID_TI_WL1271 23 - #define SDIO_DEVICE_ID_TI_WL1271 0x4076 24 - #endif 25 - 26 - #ifndef SDIO_VENDOR_ID_STE 27 - #define SDIO_VENDOR_ID_STE 0x0020 28 - #endif 29 - 30 - #ifndef SDIO_DEVICE_ID_STE_CW1200 31 - #define SDIO_DEVICE_ID_STE_CW1200 0x2280 32 - #endif 33 - 34 - #ifndef SDIO_DEVICE_ID_MARVELL_8797_F0 35 - #define SDIO_DEVICE_ID_MARVELL_8797_F0 0x9128 36 - #endif 37 - 38 - static const struct mmc_fixup mmc_fixup_methods[] = { 39 - SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271, 40 - add_quirk, MMC_QUIRK_NONSTD_FUNC_IF), 41 - 42 - SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271, 43 - add_quirk, MMC_QUIRK_DISABLE_CD), 44 - 45 - SDIO_FIXUP(SDIO_VENDOR_ID_STE, SDIO_DEVICE_ID_STE_CW1200, 46 - add_quirk, MMC_QUIRK_BROKEN_BYTE_MODE_512), 47 - 48 - SDIO_FIXUP(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_8797_F0, 49 - add_quirk, MMC_QUIRK_BROKEN_IRQ_POLLING), 50 - 51 - END_FIXUP 52 - }; 53 - 54 - void mmc_fixup_device(struct mmc_card *card, const struct mmc_fixup *table) 55 - { 56 - const struct mmc_fixup *f; 57 - u64 rev = cid_rev_card(card); 58 - 59 - /* Non-core specific workarounds. 
*/ 60 - if (!table) 61 - table = mmc_fixup_methods; 62 - 63 - for (f = table; f->vendor_fixup; f++) { 64 - if ((f->manfid == CID_MANFID_ANY || 65 - f->manfid == card->cid.manfid) && 66 - (f->oemid == CID_OEMID_ANY || 67 - f->oemid == card->cid.oemid) && 68 - (f->name == CID_NAME_ANY || 69 - !strncmp(f->name, card->cid.prod_name, 70 - sizeof(card->cid.prod_name))) && 71 - (f->cis_vendor == card->cis.vendor || 72 - f->cis_vendor == (u16) SDIO_ANY_ID) && 73 - (f->cis_device == card->cis.device || 74 - f->cis_device == (u16) SDIO_ANY_ID) && 75 - (f->ext_csd_rev == EXT_CSD_REV_ANY || 76 - f->ext_csd_rev == card->ext_csd.rev) && 77 - rev >= f->rev_start && rev <= f->rev_end) { 78 - dev_dbg(&card->dev, "calling %pf\n", f->vendor_fixup); 79 - f->vendor_fixup(card, f->data); 80 - } 81 - } 82 - } 83 - EXPORT_SYMBOL(mmc_fixup_device);
+148
drivers/mmc/core/quirks.h
··· 1 + /* 2 + * This file contains work-arounds for many known SD/MMC 3 + * and SDIO hardware bugs. 4 + * 5 + * Copyright (c) 2011 Andrei Warkentin <andreiw@motorola.com> 6 + * Copyright (c) 2011 Pierre Tardy <tardyp@gmail.com> 7 + * Inspired from pci fixup code: 8 + * Copyright (c) 1999 Martin Mares <mj@ucw.cz> 9 + * 10 + */ 11 + 12 + #include <linux/mmc/sdio_ids.h> 13 + 14 + #include "card.h" 15 + 16 + static const struct mmc_fixup mmc_blk_fixups[] = { 17 + #define INAND_CMD38_ARG_EXT_CSD 113 18 + #define INAND_CMD38_ARG_ERASE 0x00 19 + #define INAND_CMD38_ARG_TRIM 0x01 20 + #define INAND_CMD38_ARG_SECERASE 0x80 21 + #define INAND_CMD38_ARG_SECTRIM1 0x81 22 + #define INAND_CMD38_ARG_SECTRIM2 0x88 23 + /* CMD38 argument is passed through EXT_CSD[113] */ 24 + MMC_FIXUP("SEM02G", CID_MANFID_SANDISK, 0x100, add_quirk, 25 + MMC_QUIRK_INAND_CMD38), 26 + MMC_FIXUP("SEM04G", CID_MANFID_SANDISK, 0x100, add_quirk, 27 + MMC_QUIRK_INAND_CMD38), 28 + MMC_FIXUP("SEM08G", CID_MANFID_SANDISK, 0x100, add_quirk, 29 + MMC_QUIRK_INAND_CMD38), 30 + MMC_FIXUP("SEM16G", CID_MANFID_SANDISK, 0x100, add_quirk, 31 + MMC_QUIRK_INAND_CMD38), 32 + MMC_FIXUP("SEM32G", CID_MANFID_SANDISK, 0x100, add_quirk, 33 + MMC_QUIRK_INAND_CMD38), 34 + 35 + /* 36 + * Some MMC cards experience performance degradation with CMD23 37 + * instead of CMD12-bounded multiblock transfers. For now we'll 38 + * black list what's bad... 39 + * - Certain Toshiba cards. 40 + * 41 + * N.B. This doesn't affect SD cards. 
42 + */ 43 + MMC_FIXUP("SDMB-32", CID_MANFID_SANDISK, CID_OEMID_ANY, add_quirk_mmc, 44 + MMC_QUIRK_BLK_NO_CMD23), 45 + MMC_FIXUP("SDM032", CID_MANFID_SANDISK, CID_OEMID_ANY, add_quirk_mmc, 46 + MMC_QUIRK_BLK_NO_CMD23), 47 + MMC_FIXUP("MMC08G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc, 48 + MMC_QUIRK_BLK_NO_CMD23), 49 + MMC_FIXUP("MMC16G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc, 50 + MMC_QUIRK_BLK_NO_CMD23), 51 + MMC_FIXUP("MMC32G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc, 52 + MMC_QUIRK_BLK_NO_CMD23), 53 + 54 + /* 55 + * Some MMC cards need longer data read timeout than indicated in CSD. 56 + */ 57 + MMC_FIXUP(CID_NAME_ANY, CID_MANFID_MICRON, 0x200, add_quirk_mmc, 58 + MMC_QUIRK_LONG_READ_TIME), 59 + MMC_FIXUP("008GE0", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc, 60 + MMC_QUIRK_LONG_READ_TIME), 61 + 62 + /* 63 + * On these Samsung MoviNAND parts, performing secure erase or 64 + * secure trim can result in unrecoverable corruption due to a 65 + * firmware bug. 
66 + */ 67 + MMC_FIXUP("M8G2FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 68 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 69 + MMC_FIXUP("MAG4FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 70 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 71 + MMC_FIXUP("MBG8FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 72 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 73 + MMC_FIXUP("MCGAFA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 74 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 75 + MMC_FIXUP("VAL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 76 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 77 + MMC_FIXUP("VYL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 78 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 79 + MMC_FIXUP("KYL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 80 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 81 + MMC_FIXUP("VZL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 82 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 83 + 84 + /* 85 + * On some Kingston eMMCs, performing trim can occasionally result in 86 + * unrecoverable data corruption due to a firmware bug. 87 + */ 88 + MMC_FIXUP("V10008", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc, 89 + MMC_QUIRK_TRIM_BROKEN), 90 + MMC_FIXUP("V10016", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc, 91 + MMC_QUIRK_TRIM_BROKEN), 92 + 93 + END_FIXUP 94 + }; 95 + 96 + static const struct mmc_fixup mmc_ext_csd_fixups[] = { 97 + /* 98 + * Certain Hynix eMMC 4.41 cards might get broken when the HPI feature 99 + * is used, so disable HPI for such buggy cards. 
100 + */ 101 + MMC_FIXUP_EXT_CSD_REV(CID_NAME_ANY, CID_MANFID_HYNIX, 102 + 0x014a, add_quirk, MMC_QUIRK_BROKEN_HPI, 5), 103 + 104 + END_FIXUP 105 + }; 106 + 107 + static const struct mmc_fixup sdio_fixup_methods[] = { 108 + SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271, 109 + add_quirk, MMC_QUIRK_NONSTD_FUNC_IF), 110 + 111 + SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271, 112 + add_quirk, MMC_QUIRK_DISABLE_CD), 113 + 114 + SDIO_FIXUP(SDIO_VENDOR_ID_STE, SDIO_DEVICE_ID_STE_CW1200, 115 + add_quirk, MMC_QUIRK_BROKEN_BYTE_MODE_512), 116 + 117 + SDIO_FIXUP(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_8797_F0, 118 + add_quirk, MMC_QUIRK_BROKEN_IRQ_POLLING), 119 + 120 + END_FIXUP 121 + }; 122 + 123 + static inline void mmc_fixup_device(struct mmc_card *card, 124 + const struct mmc_fixup *table) 125 + { 126 + const struct mmc_fixup *f; 127 + u64 rev = cid_rev_card(card); 128 + 129 + for (f = table; f->vendor_fixup; f++) { 130 + if ((f->manfid == CID_MANFID_ANY || 131 + f->manfid == card->cid.manfid) && 132 + (f->oemid == CID_OEMID_ANY || 133 + f->oemid == card->cid.oemid) && 134 + (f->name == CID_NAME_ANY || 135 + !strncmp(f->name, card->cid.prod_name, 136 + sizeof(card->cid.prod_name))) && 137 + (f->cis_vendor == card->cis.vendor || 138 + f->cis_vendor == (u16) SDIO_ANY_ID) && 139 + (f->cis_device == card->cis.device || 140 + f->cis_device == (u16) SDIO_ANY_ID) && 141 + (f->ext_csd_rev == EXT_CSD_REV_ANY || 142 + f->ext_csd_rev == card->ext_csd.rev) && 143 + rev >= f->rev_start && rev <= f->rev_end) { 144 + dev_dbg(&card->dev, "calling %pf\n", f->vendor_fixup); 145 + f->vendor_fixup(card, f->data); 146 + } 147 + } 148 + }
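The mmc_fixup_device() loop above walks a quirk table and treats CID_MANFID_ANY, CID_OEMID_ANY and CID_NAME_ANY as wildcards, applying an entry only when every field either matches the card's CID or is a wildcard. A minimal userspace sketch of that matching rule (the struct and field names here are simplified stand-ins, not the kernel types):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for the kernel's fixup machinery. */
#define ANY_ID	-1	/* wildcard, analogous to CID_MANFID_ANY etc. */

struct card_id {
	int manfid;
	int oemid;
	const char *name;
};

struct fixup {
	int manfid;        /* ANY_ID or an exact manufacturer ID */
	int oemid;         /* ANY_ID or an exact OEM ID */
	const char *name;  /* NULL acts like CID_NAME_ANY */
	unsigned quirk;    /* quirk flag to apply on a match */
};

/* Return the OR of the quirks of every entry that matches the card. */
static unsigned apply_fixups(const struct card_id *card,
			     const struct fixup *table, int n)
{
	unsigned quirks = 0;

	for (int i = 0; i < n; i++) {
		const struct fixup *f = &table[i];

		if ((f->manfid == ANY_ID || f->manfid == card->manfid) &&
		    (f->oemid == ANY_ID || f->oemid == card->oemid) &&
		    (!f->name || !strcmp(f->name, card->name)))
			quirks |= f->quirk;
	}
	return quirks;
}
```

The kernel variant additionally checks the SDIO CIS vendor/device IDs, the EXT_CSD revision and a CID revision window, and terminates the table with an END_FIXUP sentinel instead of passing a count.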
+3 -2
drivers/mmc/core/sd.c
··· 22 22 #include <linux/mmc/sd.h> 23 23 24 24 #include "core.h" 25 + #include "card.h" 26 + #include "host.h" 25 27 #include "bus.h" 26 28 #include "mmc_ops.h" 27 29 #include "sd.h" ··· 788 786 */ 789 787 if (!mmc_host_is_spi(host) && rocr && 790 788 ((*rocr & 0x41000000) == 0x41000000)) { 791 - err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180, 792 - pocr); 789 + err = mmc_set_uhs_voltage(host, pocr); 793 790 if (err == -EAGAIN) { 794 791 retries--; 795 792 goto try_again;
+4 -1
drivers/mmc/core/sd.h
··· 1 1 #ifndef _MMC_CORE_SD_H 2 2 #define _MMC_CORE_SD_H 3 3 4 - #include <linux/mmc/card.h> 4 + #include <linux/types.h> 5 5 6 6 extern struct device_type sd_type; 7 + 8 + struct mmc_host; 9 + struct mmc_card; 7 10 8 11 int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr); 9 12 int mmc_sd_get_csd(struct mmc_host *host, struct mmc_card *card);
+15 -15
drivers/mmc/core/sd_ops.c
··· 25 25 int mmc_app_cmd(struct mmc_host *host, struct mmc_card *card) 26 26 { 27 27 int err; 28 - struct mmc_command cmd = {0}; 28 + struct mmc_command cmd = {}; 29 29 30 30 if (WARN_ON(card && card->host != host)) 31 31 return -EINVAL; ··· 68 68 int mmc_wait_for_app_cmd(struct mmc_host *host, struct mmc_card *card, 69 69 struct mmc_command *cmd, int retries) 70 70 { 71 - struct mmc_request mrq = {NULL}; 71 + struct mmc_request mrq = {}; 72 72 73 73 int i, err; 74 74 ··· 120 120 121 121 int mmc_app_set_bus_width(struct mmc_card *card, int width) 122 122 { 123 - struct mmc_command cmd = {0}; 123 + struct mmc_command cmd = {}; 124 124 125 125 cmd.opcode = SD_APP_SET_BUS_WIDTH; 126 126 cmd.flags = MMC_RSP_R1 | MMC_CMD_AC; ··· 141 141 142 142 int mmc_send_app_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr) 143 143 { 144 - struct mmc_command cmd = {0}; 144 + struct mmc_command cmd = {}; 145 145 int i, err = 0; 146 146 147 147 cmd.opcode = SD_APP_OP_COND; ··· 185 185 186 186 int mmc_send_if_cond(struct mmc_host *host, u32 ocr) 187 187 { 188 - struct mmc_command cmd = {0}; 188 + struct mmc_command cmd = {}; 189 189 int err; 190 190 static const u8 test_pattern = 0xAA; 191 191 u8 result_pattern; ··· 217 217 int mmc_send_relative_addr(struct mmc_host *host, unsigned int *rca) 218 218 { 219 219 int err; 220 - struct mmc_command cmd = {0}; 220 + struct mmc_command cmd = {}; 221 221 222 222 cmd.opcode = SD_SEND_RELATIVE_ADDR; 223 223 cmd.arg = 0; ··· 235 235 int mmc_app_send_scr(struct mmc_card *card, u32 *scr) 236 236 { 237 237 int err; 238 - struct mmc_request mrq = {NULL}; 239 - struct mmc_command cmd = {0}; 240 - struct mmc_data data = {0}; 238 + struct mmc_request mrq = {}; 239 + struct mmc_command cmd = {}; 240 + struct mmc_data data = {}; 241 241 struct scatterlist sg; 242 242 void *data_buf; 243 243 ··· 290 290 int mmc_sd_switch(struct mmc_card *card, int mode, int group, 291 291 u8 value, u8 *resp) 292 292 { 293 - struct mmc_request mrq = {NULL}; 294 - struct 
mmc_command cmd = {0}; 295 - struct mmc_data data = {0}; 293 + struct mmc_request mrq = {}; 294 + struct mmc_command cmd = {}; 295 + struct mmc_data data = {}; 296 296 struct scatterlist sg; 297 297 298 298 /* NOTE: caller guarantees resp is heap-allocated */ ··· 332 332 int mmc_app_sd_status(struct mmc_card *card, void *ssr) 333 333 { 334 334 int err; 335 - struct mmc_request mrq = {NULL}; 336 - struct mmc_command cmd = {0}; 337 - struct mmc_data data = {0}; 335 + struct mmc_request mrq = {}; 336 + struct mmc_command cmd = {}; 337 + struct mmc_data data = {}; 338 338 struct scatterlist sg; 339 339 340 340 /* NOTE: caller guarantees ssr is heap-allocated */
+9
drivers/mmc/core/sd_ops.h
··· 12 12 #ifndef _MMC_SD_OPS_H 13 13 #define _MMC_SD_OPS_H 14 14 15 + #include <linux/types.h> 16 + 17 + struct mmc_card; 18 + struct mmc_host; 19 + struct mmc_command; 20 + 15 21 int mmc_app_set_bus_width(struct mmc_card *card, int width); 16 22 int mmc_send_app_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr); 17 23 int mmc_send_if_cond(struct mmc_host *host, u32 ocr); ··· 26 20 int mmc_sd_switch(struct mmc_card *card, int mode, int group, 27 21 u8 value, u8 *resp); 28 22 int mmc_app_sd_status(struct mmc_card *card, void *ssr); 23 + int mmc_app_cmd(struct mmc_host *host, struct mmc_card *card); 24 + int mmc_wait_for_app_cmd(struct mmc_host *host, struct mmc_card *card, 25 + struct mmc_command *cmd, int retries); 29 26 30 27 #endif 31 28
+32 -14
drivers/mmc/core/sdio.c
··· 20 20 #include <linux/mmc/sdio_ids.h> 21 21 22 22 #include "core.h" 23 + #include "card.h" 24 + #include "host.h" 23 25 #include "bus.h" 26 + #include "quirks.h" 24 27 #include "sd.h" 25 28 #include "sdio_bus.h" 26 29 #include "mmc_ops.h" ··· 544 541 return err; 545 542 } 546 543 544 + static void mmc_sdio_resend_if_cond(struct mmc_host *host, 545 + struct mmc_card *card) 546 + { 547 + sdio_reset(host); 548 + mmc_go_idle(host); 549 + mmc_send_if_cond(host, host->ocr_avail); 550 + mmc_remove_card(card); 551 + } 552 + 547 553 /* 548 554 * Handle the detection and initialisation of a card. 549 555 * ··· 636 624 * to switch to 1.8V signaling level. No 1.8v signalling if 637 625 * UHS mode is not enabled to maintain compatibility and some 638 626 * systems that claim 1.8v signalling in fact do not support 639 - * it. 627 + * it. Per SDIO spec v3, section 3.1.2, if the voltage is already 628 + * 1.8v, the card sets S18A to 0 in the R4 response. So the 629 + * rocr & R4_18V_PRESENT check fails, but we still need to try 630 + * to init the UHS card. sdio_read_cccr will take over this task 631 + * and determine which speed mode should be used. 640 632 */ 641 633 if (!powered_resume && (rocr & ocr & R4_18V_PRESENT)) { 642 - err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180, 643 - ocr_card); 634 + err = mmc_set_uhs_voltage(host, ocr_card); 644 635 if (err == -EAGAIN) { 645 - sdio_reset(host); 646 - mmc_go_idle(host); 647 - mmc_send_if_cond(host, host->ocr_avail); 648 - mmc_remove_card(card); 636 + mmc_sdio_resend_if_cond(host, card); 649 637 retries--; 650 638 goto try_again; 651 639 } else if (err) { 652 640 ocr &= ~R4_18V_PRESENT; 653 641 } 654 - err = 0; 655 - } else { 656 - ocr &= ~R4_18V_PRESENT; 657 642 } 658 643 659 644 /* ··· 707 698 } 708 699 709 700 /* 710 - * Read the common registers. 701 + * Read the common registers. Note that we should try to 
711 703 */ 712 704 err = sdio_read_cccr(card, ocr); 713 - if (err) 714 - goto remove; 705 + if (err) { 706 + mmc_sdio_resend_if_cond(host, card); 707 + if (ocr & R4_18V_PRESENT) { 708 + /* Retry init sequence, but without R4_18V_PRESENT. */ 709 + retries = 0; 710 + goto try_again; 711 + } else { 712 + goto remove; 713 + } 714 + } 715 715 716 716 /* 717 717 * Read the common CIS tuples. ··· 739 721 card = oldcard; 740 722 } 741 723 card->ocr = ocr_card; 742 - mmc_fixup_device(card, NULL); 724 + mmc_fixup_device(card, sdio_fixup_methods); 743 725 744 726 if (card->type == MMC_TYPE_SD_COMBO) { 745 727 err = mmc_sd_setup_card(host, card, oldcard != NULL);
+1
drivers/mmc/core/sdio_bus.c
··· 25 25 #include <linux/of.h> 26 26 27 27 #include "core.h" 28 + #include "card.h" 28 29 #include "sdio_cis.h" 29 30 #include "sdio_bus.h" 30 31
+3
drivers/mmc/core/sdio_bus.h
··· 11 11 #ifndef _MMC_CORE_SDIO_BUS_H 12 12 #define _MMC_CORE_SDIO_BUS_H 13 13 14 + struct mmc_card; 15 + struct sdio_func; 16 + 14 17 struct sdio_func *sdio_alloc_func(struct mmc_card *card); 15 18 int sdio_add_func(struct sdio_func *func); 16 19 void sdio_remove_func(struct sdio_func *func);
+3
drivers/mmc/core/sdio_cis.h
··· 14 14 #ifndef _MMC_SDIO_CIS_H 15 15 #define _MMC_SDIO_CIS_H 16 16 17 + struct mmc_card; 18 + struct sdio_func; 19 + 17 20 int sdio_read_common_cis(struct mmc_card *card); 18 21 void sdio_free_common_cis(struct mmc_card *card); 19 22
+2
drivers/mmc/core/sdio_io.c
··· 16 16 #include <linux/mmc/sdio_func.h> 17 17 18 18 #include "sdio_ops.h" 19 + #include "core.h" 20 + #include "card.h" 19 21 20 22 /** 21 23 * sdio_claim_host - exclusively claim a bus for a certain SDIO function
+2
drivers/mmc/core/sdio_irq.c
··· 27 27 #include <linux/mmc/sdio_func.h> 28 28 29 29 #include "sdio_ops.h" 30 + #include "core.h" 31 + #include "card.h" 30 32 31 33 static int process_sdio_pending_irqs(struct mmc_host *host) 32 34 {
+5 -5
drivers/mmc/core/sdio_ops.c
··· 21 21 22 22 int mmc_send_io_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr) 23 23 { 24 - struct mmc_command cmd = {0}; 24 + struct mmc_command cmd = {}; 25 25 int i, err = 0; 26 26 27 27 cmd.opcode = SD_IO_SEND_OP_COND; ··· 66 66 static int mmc_io_rw_direct_host(struct mmc_host *host, int write, unsigned fn, 67 67 unsigned addr, u8 in, u8 *out) 68 68 { 69 - struct mmc_command cmd = {0}; 69 + struct mmc_command cmd = {}; 70 70 int err; 71 71 72 72 if (fn > 7) ··· 118 118 int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn, 119 119 unsigned addr, int incr_addr, u8 *buf, unsigned blocks, unsigned blksz) 120 120 { 121 - struct mmc_request mrq = {NULL}; 122 - struct mmc_command cmd = {0}; 123 - struct mmc_data data = {0}; 121 + struct mmc_request mrq = {}; 122 + struct mmc_command cmd = {}; 123 + struct mmc_data data = {}; 124 124 struct scatterlist sg, *sg_ptr; 125 125 struct sg_table sgtable; 126 126 unsigned int nents, left_size, i;
+5
drivers/mmc/core/sdio_ops.h
··· 12 12 #ifndef _MMC_SDIO_OPS_H 13 13 #define _MMC_SDIO_OPS_H 14 14 15 + #include <linux/types.h> 15 16 #include <linux/mmc/sdio.h> 17 + 18 + struct mmc_host; 19 + struct mmc_card; 16 20 17 21 int mmc_send_io_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr); 18 22 int mmc_io_rw_direct(struct mmc_card *card, int write, unsigned fn, ··· 24 20 int mmc_io_rw_extended(struct mmc_card *card, int write, unsigned fn, 25 21 unsigned addr, int incr_addr, u8 *buf, unsigned blocks, unsigned blksz); 26 22 int sdio_reset(struct mmc_host *host); 23 + unsigned int mmc_align_data_size(struct mmc_card *card, unsigned int sz); 27 24 28 25 static inline bool mmc_is_io_op(u32 opcode) 29 26 {
-6
drivers/mmc/core/slot-gpio.c
··· 235 235 struct gpio_desc *desc; 236 236 int ret; 237 237 238 - if (!con_id) 239 - con_id = ctx->cd_label; 240 - 241 238 desc = devm_gpiod_get_index(host->parent, con_id, idx, GPIOD_IN); 242 239 if (IS_ERR(desc)) 243 240 return PTR_ERR(desc); ··· 285 288 struct mmc_gpio *ctx = host->slot.handler_priv; 286 289 struct gpio_desc *desc; 287 290 int ret; 288 - 289 - if (!con_id) 290 - con_id = ctx->ro_label; 291 291 292 292 desc = devm_gpiod_get_index(host->parent, con_id, idx, GPIOD_IN); 293 293 if (IS_ERR(desc))
+2
drivers/mmc/core/slot-gpio.h
··· 8 8 #ifndef _MMC_CORE_SLOTGPIO_H 9 9 #define _MMC_CORE_SLOTGPIO_H 10 10 11 + struct mmc_host; 12 + 11 13 int mmc_gpio_alloc(struct mmc_host *host); 12 14 13 15 #endif
+9
drivers/mmc/host/Kconfig
··· 683 683 Synopsys DesignWare Memory Card Interface driver. Select this option 684 684 for platforms based on RK3066, RK3188 and RK3288 SoC's. 685 685 686 + config MMC_DW_ZX 687 + tristate "ZTE specific extensions for Synopsys DW Memory Card Interface" 688 + depends on MMC_DW && ARCH_ZX 689 + select MMC_DW_PLTFM 690 + help 691 + This selects support for ZTE SoC specific extensions to the 692 + Synopsys DesignWare Memory Card Interface driver. Select this option 693 + for platforms based on ZX296718 SoC's. 694 + 686 695 config MMC_SH_MMCIF 687 696 tristate "SuperH Internal MMCIF support" 688 697 depends on HAS_DMA
+1
drivers/mmc/host/Makefile
··· 48 48 obj-$(CONFIG_MMC_DW_K3) += dw_mmc-k3.o 49 49 obj-$(CONFIG_MMC_DW_PCI) += dw_mmc-pci.o 50 50 obj-$(CONFIG_MMC_DW_ROCKCHIP) += dw_mmc-rockchip.o 51 + obj-$(CONFIG_MMC_DW_ZX) += dw_mmc-zx.o 51 52 obj-$(CONFIG_MMC_SH_MMCIF) += sh_mmcif.o 52 53 obj-$(CONFIG_MMC_JZ4740) += jz4740_mmc.o 53 54 obj-$(CONFIG_MMC_VUB300) += vub300.o
+1
drivers/mmc/host/davinci_mmc.c
··· 36 36 #include <linux/of.h> 37 37 #include <linux/of_device.h> 38 38 #include <linux/mmc/slot-gpio.h> 39 + #include <linux/interrupt.h> 39 40 40 41 #include <linux/platform_data/mmc-davinci.h> 41 42
-1
drivers/mmc/host/dw_mmc-exynos.c
··· 13 13 #include <linux/platform_device.h> 14 14 #include <linux/clk.h> 15 15 #include <linux/mmc/host.h> 16 - #include <linux/mmc/dw_mmc.h> 17 16 #include <linux/mmc/mmc.h> 18 17 #include <linux/of.h> 19 18 #include <linux/of_gpio.h>
-1
drivers/mmc/host/dw_mmc-k3.c
··· 11 11 #include <linux/clk.h> 12 12 #include <linux/mfd/syscon.h> 13 13 #include <linux/mmc/host.h> 14 - #include <linux/mmc/dw_mmc.h> 15 14 #include <linux/module.h> 16 15 #include <linux/of_address.h> 17 16 #include <linux/platform_device.h>
-1
drivers/mmc/host/dw_mmc-pci.c
··· 18 18 #include <linux/slab.h> 19 19 #include <linux/mmc/host.h> 20 20 #include <linux/mmc/mmc.h> 21 - #include <linux/mmc/dw_mmc.h> 22 21 #include "dw_mmc.h" 23 22 24 23 #define PCI_BAR_NO 2
-1
drivers/mmc/host/dw_mmc-pltfm.c
··· 20 20 #include <linux/slab.h> 21 21 #include <linux/mmc/host.h> 22 22 #include <linux/mmc/mmc.h> 23 - #include <linux/mmc/dw_mmc.h> 24 23 #include <linux/of.h> 25 24 #include <linux/clk.h> 26 25
-1
drivers/mmc/host/dw_mmc-rockchip.c
··· 11 11 #include <linux/platform_device.h> 12 12 #include <linux/clk.h> 13 13 #include <linux/mmc/host.h> 14 - #include <linux/mmc/dw_mmc.h> 15 14 #include <linux/of_address.h> 16 15 #include <linux/mmc/slot-gpio.h> 17 16 #include <linux/pm_runtime.h>
+241
drivers/mmc/host/dw_mmc-zx.c
··· 1 + /* 2 + * ZX Specific Extensions for Synopsys DW Multimedia Card Interface driver 3 + * 4 + * Copyright (C) 2016, Linaro Ltd. 5 + * Copyright (C) 2016, ZTE Corp. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + */ 12 + 13 + #include <linux/clk.h> 14 + #include <linux/mfd/syscon.h> 15 + #include <linux/mmc/host.h> 16 + #include <linux/mmc/mmc.h> 17 + #include <linux/module.h> 18 + #include <linux/of.h> 19 + #include <linux/platform_device.h> 20 + #include <linux/pm_runtime.h> 21 + #include <linux/regmap.h> 22 + #include <linux/slab.h> 23 + 24 + #include "dw_mmc.h" 25 + #include "dw_mmc-pltfm.h" 26 + #include "dw_mmc-zx.h" 27 + 28 + struct dw_mci_zx_priv_data { 29 + struct regmap *sysc_base; 30 + }; 31 + 32 + enum delay_type { 33 + DELAY_TYPE_READ, /* read dqs delay */ 34 + DELAY_TYPE_CLK, /* clk sample delay */ 35 + }; 36 + 37 + static int dw_mci_zx_emmc_set_delay(struct dw_mci *host, unsigned int delay, 38 + enum delay_type dflag) 39 + { 40 + struct dw_mci_zx_priv_data *priv = host->priv; 41 + struct regmap *sysc_base = priv->sysc_base; 42 + unsigned int clksel; 43 + unsigned int loop = 1000; 44 + int ret; 45 + 46 + if (!sysc_base) 47 + return -EINVAL; 48 + 49 + ret = regmap_update_bits(sysc_base, LB_AON_EMMC_CFG_REG0, 50 + PARA_HALF_CLK_MODE | PARA_DLL_BYPASS_MODE | 51 + PARA_PHASE_DET_SEL_MASK | 52 + PARA_DLL_LOCK_NUM_MASK | 53 + DLL_REG_SET | PARA_DLL_START_MASK, 54 + PARA_DLL_START(4) | PARA_DLL_LOCK_NUM(4)); 55 + if (ret) 56 + return ret; 57 + 58 + ret = regmap_read(sysc_base, LB_AON_EMMC_CFG_REG1, &clksel); 59 + if (ret) 60 + return ret; 61 + 62 + if (dflag == DELAY_TYPE_CLK) { 63 + clksel &= ~CLK_SAMP_DELAY_MASK; 64 + clksel |= CLK_SAMP_DELAY(delay); 65 + } else { 66 + clksel &= ~READ_DQS_DELAY_MASK; 67 + clksel |= 
READ_DQS_DELAY(delay); 68 + } 69 + 70 + regmap_write(sysc_base, LB_AON_EMMC_CFG_REG1, clksel); 71 + regmap_update_bits(sysc_base, LB_AON_EMMC_CFG_REG0, 72 + PARA_DLL_START_MASK | PARA_DLL_LOCK_NUM_MASK | 73 + DLL_REG_SET, 74 + PARA_DLL_START(4) | PARA_DLL_LOCK_NUM(4) | 75 + DLL_REG_SET); 76 + 77 + do { 78 + ret = regmap_read(sysc_base, LB_AON_EMMC_CFG_REG2, &clksel); 79 + if (ret) 80 + return ret; 81 + 82 + } while (--loop && !(clksel & ZX_DLL_LOCKED)); 83 + 84 + if (!loop) { 85 + dev_err(host->dev, "Error: %s dll lock fail\n", __func__); 86 + return -EIO; 87 + } 88 + 89 + return 0; 90 + } 91 + 92 + static int dw_mci_zx_emmc_execute_tuning(struct dw_mci_slot *slot, u32 opcode) 93 + { 94 + struct dw_mci *host = slot->host; 95 + struct mmc_host *mmc = slot->mmc; 96 + int ret, len = 0, start = 0, end = 0, delay, best = 0; 97 + 98 + for (delay = 1; delay < 128; delay++) { 99 + ret = dw_mci_zx_emmc_set_delay(host, delay, DELAY_TYPE_CLK); 100 + if (!ret && mmc_send_tuning(mmc, opcode, NULL)) { 101 + if (start >= 0) { 102 + end = delay - 1; 103 + /* check and update longest good range */ 104 + if ((end - start) > len) { 105 + best = (start + end) >> 1; 106 + len = end - start; 107 + } 108 + } 109 + start = -1; 110 + end = 0; 111 + continue; 112 + } 113 + if (start < 0) 114 + start = delay; 115 + } 116 + 117 + if (start >= 0) { 118 + end = delay - 1; 119 + if ((end - start) > len) { 120 + best = (start + end) >> 1; 121 + len = end - start; 122 + } 123 + } 124 + if (best < 0) 125 + return -EIO; 126 + 127 + dev_info(host->dev, "%s best range: start %d end %d\n", __func__, 128 + start, end); 129 + return dw_mci_zx_emmc_set_delay(host, best, DELAY_TYPE_CLK); 130 + } 131 + 132 + static int dw_mci_zx_prepare_hs400_tuning(struct dw_mci *host, 133 + struct mmc_ios *ios) 134 + { 135 + int ret; 136 + 137 + /* config phase shift as 90 degree */ 138 + ret = dw_mci_zx_emmc_set_delay(host, 32, DELAY_TYPE_READ); 139 + if (ret < 0) 140 + return -EIO; 141 + 142 + return 0; 143 + } 144 + 
145 + static int dw_mci_zx_execute_tuning(struct dw_mci_slot *slot, u32 opcode) 146 + { 147 + struct dw_mci *host = slot->host; 148 + 149 + if (host->verid == 0x290a) /* only for emmc */ 150 + return dw_mci_zx_emmc_execute_tuning(slot, opcode); 151 + /* TODO: Add 0x210a dedicated tuning for sd/sdio */ 152 + 153 + return 0; 154 + } 155 + 156 + static int dw_mci_zx_parse_dt(struct dw_mci *host) 157 + { 158 + struct device_node *np = host->dev->of_node; 159 + struct device_node *node; 160 + struct dw_mci_zx_priv_data *priv; 161 + struct regmap *sysc_base; 162 + int ret; 163 + 164 + /* syscon is needed only by emmc */ 165 + node = of_parse_phandle(np, "zte,aon-syscon", 0); 166 + if (node) { 167 + sysc_base = syscon_node_to_regmap(node); 168 + of_node_put(node); 169 + 170 + if (IS_ERR(sysc_base)) { 171 + ret = PTR_ERR(sysc_base); 172 + if (ret != -EPROBE_DEFER) 173 + dev_err(host->dev, "Can't get syscon: %d\n", 174 + ret); 175 + return ret; 176 + } 177 + } else { 178 + return 0; 179 + } 180 + 181 + priv = devm_kzalloc(host->dev, sizeof(*priv), GFP_KERNEL); 182 + if (!priv) 183 + return -ENOMEM; 184 + priv->sysc_base = sysc_base; 185 + host->priv = priv; 186 + 187 + return 0; 188 + } 189 + 190 + static unsigned long zx_dwmmc_caps[3] = { 191 + MMC_CAP_CMD23, 192 + MMC_CAP_CMD23, 193 + MMC_CAP_CMD23, 194 + }; 195 + 196 + static const struct dw_mci_drv_data zx_drv_data = { 197 + .caps = zx_dwmmc_caps, 198 + .execute_tuning = dw_mci_zx_execute_tuning, 199 + .prepare_hs400_tuning = dw_mci_zx_prepare_hs400_tuning, 200 + .parse_dt = dw_mci_zx_parse_dt, 201 + }; 202 + 203 + static const struct of_device_id dw_mci_zx_match[] = { 204 + { .compatible = "zte,zx296718-dw-mshc", .data = &zx_drv_data}, 205 + {}, 206 + }; 207 + MODULE_DEVICE_TABLE(of, dw_mci_zx_match); 208 + 209 + static int dw_mci_zx_probe(struct platform_device *pdev) 210 + { 211 + const struct dw_mci_drv_data *drv_data; 212 + const struct of_device_id *match; 213 + 214 + match = of_match_node(dw_mci_zx_match, 
pdev->dev.of_node); 215 + drv_data = match->data; 216 + 217 + return dw_mci_pltfm_register(pdev, drv_data); 218 + } 219 + 220 + static const struct dev_pm_ops dw_mci_zx_dev_pm_ops = { 221 + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 222 + pm_runtime_force_resume) 223 + SET_RUNTIME_PM_OPS(dw_mci_runtime_suspend, 224 + dw_mci_runtime_resume, 225 + NULL) 226 + }; 227 + 228 + static struct platform_driver dw_mci_zx_pltfm_driver = { 229 + .probe = dw_mci_zx_probe, 230 + .remove = dw_mci_pltfm_remove, 231 + .driver = { 232 + .name = "dwmmc_zx", 233 + .of_match_table = dw_mci_zx_match, 234 + .pm = &dw_mci_zx_dev_pm_ops, 235 + }, 236 + }; 237 + 238 + module_platform_driver(dw_mci_zx_pltfm_driver); 239 + 240 + MODULE_DESCRIPTION("ZTE emmc/sd driver"); 241 + MODULE_LICENSE("GPL v2");
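dw_mci_zx_emmc_execute_tuning() above sweeps the clk sample delay from 1 to 127, issues a tuning command at each setting, and programs the midpoint of the longest run of passing delays. The window-selection step can be isolated as below, a sketch assuming the pass/fail results were already collected into an array; like the driver's `(end - start) > len` comparison, windows narrower than two delays are ignored:

```c
#include <stdbool.h>

/*
 * Given pass[d] for delays 1..n-1, return the midpoint of the longest
 * run of passing delays, or -1 if no usable window was found.
 */
static int best_delay(const bool *pass, int n)
{
	int len = 0, start = -1, best = -1;

	for (int d = 1; d <= n; d++) {
		/* d == n acts as a sentinel failure to close a trailing run */
		bool ok = (d < n) && pass[d];

		if (ok) {
			if (start < 0)
				start = d;	/* a new window opens */
			continue;
		}
		if (start >= 0) {		/* window [start, d-1] closed */
			int end = d - 1;

			if (end - start > len) {
				best = (start + end) >> 1;
				len = end - start;
			}
			start = -1;
		}
	}
	return best;
}
```

Centering on the widest passing window maximizes the margin against voltage and temperature drift, which is the usual rationale for this style of delay-line tuning.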
+31
drivers/mmc/host/dw_mmc-zx.h
··· 1 + #ifndef _DW_MMC_ZX_H_ 2 + #define _DW_MMC_ZX_H_ 3 + 4 + /* ZX296718 SoC specific DLL register offset. */ 5 + #define LB_AON_EMMC_CFG_REG0 0x1B0 6 + #define LB_AON_EMMC_CFG_REG1 0x1B4 7 + #define LB_AON_EMMC_CFG_REG2 0x1B8 8 + 9 + /* LB_AON_EMMC_CFG_REG0 register defines */ 10 + #define PARA_DLL_START(x) ((x) & 0xFF) 11 + #define PARA_DLL_START_MASK 0xFF 12 + #define DLL_REG_SET BIT(8) 13 + #define PARA_DLL_LOCK_NUM(x) (((x) & 7) << 16) 14 + #define PARA_DLL_LOCK_NUM_MASK (7 << 16) 15 + #define PARA_PHASE_DET_SEL(x) (((x) & 7) << 20) 16 + #define PARA_PHASE_DET_SEL_MASK (7 << 20) 17 + #define PARA_DLL_BYPASS_MODE BIT(23) 18 + #define PARA_HALF_CLK_MODE BIT(24) 19 + 20 + /* LB_AON_EMMC_CFG_REG1 register defines */ 21 + #define READ_DQS_DELAY(x) ((x) & 0x7F) 22 + #define READ_DQS_DELAY_MASK (0x7F) 23 + #define READ_DQS_BYPASS_MODE BIT(7) 24 + #define CLK_SAMP_DELAY(x) (((x) & 0x7F) << 8) 25 + #define CLK_SAMP_DELAY_MASK (0x7F << 8) 26 + #define CLK_SAMP_BYPASS_MODE BIT(15) 27 + 28 + /* LB_AON_EMMC_CFG_REG2 register defines */ 29 + #define ZX_DLL_LOCKED BIT(2) 30 + 31 + #endif /* _DW_MMC_ZX_H_ */
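The two CFG_REG1 delay fields above are 7-bit values packed at bits 0 and 8, so updating one is a mask-and-clear followed by an or-in on the register word, which is exactly what dw_mci_zx_emmc_set_delay() does through regmap. A standalone sketch of that read-modify-write (macros copied from dw_mmc-zx.h; `set_clk_samp_delay` is an illustrative helper, not a kernel function):

```c
#include <stdint.h>

/* Field macros as defined in dw_mmc-zx.h for LB_AON_EMMC_CFG_REG1. */
#define READ_DQS_DELAY(x)	((x) & 0x7F)
#define READ_DQS_DELAY_MASK	(0x7F)
#define CLK_SAMP_DELAY(x)	(((x) & 0x7F) << 8)
#define CLK_SAMP_DELAY_MASK	(0x7F << 8)

/* Update only the clk-sample delay field, leaving all other bits intact. */
static uint32_t set_clk_samp_delay(uint32_t clksel, unsigned delay)
{
	clksel &= ~CLK_SAMP_DELAY_MASK;	/* clear bits 14:8 */
	clksel |= CLK_SAMP_DELAY(delay);	/* insert the new 7-bit value */
	return clksel;
}
```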
+21 -9
drivers/mmc/host/dw_mmc.c
··· 32 32 #include <linux/mmc/mmc.h> 33 33 #include <linux/mmc/sd.h> 34 34 #include <linux/mmc/sdio.h> 35 - #include <linux/mmc/dw_mmc.h> 36 35 #include <linux/bitops.h> 37 36 #include <linux/regulator/consumer.h> 38 37 #include <linux/of.h> ··· 1112 1113 mci_writel(host, CTRL, temp); 1113 1114 1114 1115 /* 1115 - * Use the initial fifoth_val for PIO mode. 1116 + * Use the initial fifoth_val for PIO mode. If wm_aligned 1117 1119 * is set, we set the watermark equal to the data size. 1116 1118 * If next issued data may be transferred by DMA mode, 1117 1119 * prev_blksz should be invalidated. 1118 1120 */ 1119 - mci_writel(host, FIFOTH, host->fifoth_val); 1121 + if (host->wm_aligned) 1122 + dw_mci_adjust_fifoth(host, data); 1123 + else 1124 + mci_writel(host, FIFOTH, host->fifoth_val); 1120 1125 host->prev_blksz = 0; 1121 1126 } else { 1122 1127 /* ··· 1182 1179 if ((clock != slot->__clk_old && 1183 1180 !test_bit(DW_MMC_CARD_NEEDS_POLL, &slot->flags)) || 1184 1181 force_clkinit) { 1185 - dev_info(&slot->mmc->class_dev, 1186 - "Bus speed (slot %d) = %dHz (slot req %dHz, actual %dHZ div = %d)\n", 1187 - slot->id, host->bus_hz, clock, 1188 - div ? ((host->bus_hz / div) >> 1) : 1189 - host->bus_hz, div); 1182 + /* Silence the verbose log if calling from PM context */ 1183 + if (!force_clkinit) 1184 + dev_info(&slot->mmc->class_dev, 1185 + "Bus speed (slot %d) = %dHz (slot req %dHz, actual %dHZ div = %d)\n", 1186 + slot->id, host->bus_hz, clock, 1187 + div ? 
((host->bus_hz / div) >> 1) : 1188 + host->bus_hz, div); 1190 1189 1191 1190 /* 1192 1191 * If card is polling, display the message only ··· 2982 2977 2983 2978 of_property_read_u32(np, "card-detect-delay", &pdata->detect_delay_ms); 2984 2979 2980 + of_property_read_u32(np, "data-addr", &host->data_addr_override); 2981 + 2982 + if (of_get_property(np, "fifo-watermark-aligned", NULL)) 2983 + host->wm_aligned = true; 2984 + 2985 2985 if (!of_property_read_u32(np, "clock-frequency", &clock_frequency)) 2986 2986 pdata->bus_hz = clock_frequency; 2987 2987 ··· 3190 3180 host->verid = SDMMC_GET_VERID(mci_readl(host, VERID)); 3191 3181 dev_info(host->dev, "Version ID is %04x\n", host->verid); 3192 3182 3193 - if (host->verid < DW_MMC_240A) 3183 + if (host->data_addr_override) 3184 + host->fifo_reg = host->regs + host->data_addr_override; 3185 + else if (host->verid < DW_MMC_240A) 3194 3186 host->fifo_reg = host->regs + DATA_OFFSET; 3195 3187 else 3196 3188 host->fifo_reg = host->regs + DATA_240A_OFFSET;
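The probe change above selects the FIFO register offset in priority order: an explicit "data-addr" DT override wins, otherwise the controller's version ID decides between the pre- and post-2.40a layouts. A sketch of that selection (the two DATA_*_OFFSET values are assumed here for illustration; the authoritative definitions live in dw_mmc.h):

```c
#include <stdint.h>

#define DW_MMC_240A		0x240a	/* first IP version with the moved FIFO */
#define DATA_OFFSET		0x100	/* pre-2.40a FIFO offset (assumed) */
#define DATA_240A_OFFSET	0x200	/* 2.40a+ FIFO offset (assumed) */

/* Mirrors the fifo_reg selection logic in dw_mci_probe(). */
static uint32_t fifo_offset(uint32_t data_addr_override, uint16_t verid)
{
	if (data_addr_override)
		return data_addr_override;	/* DT "data-addr" wins */
	return (verid < DW_MMC_240A) ? DATA_OFFSET : DATA_240A_OFFSET;
}
```

The override exists because some SoC integrations place the data FIFO at a non-standard offset that the version register cannot describe.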
+263
drivers/mmc/host/dw_mmc.h
··· 14 14 #ifndef _DW_MMC_H_ 15 15 #define _DW_MMC_H_ 16 16 17 + #include <linux/scatterlist.h> 18 + #include <linux/mmc/core.h> 19 + #include <linux/dmaengine.h> 20 + #include <linux/reset.h> 21 + #include <linux/interrupt.h> 22 + 23 + #define MAX_MCI_SLOTS 2 24 + 25 + enum dw_mci_state { 26 + STATE_IDLE = 0, 27 + STATE_SENDING_CMD, 28 + STATE_SENDING_DATA, 29 + STATE_DATA_BUSY, 30 + STATE_SENDING_STOP, 31 + STATE_DATA_ERROR, 32 + STATE_SENDING_CMD11, 33 + STATE_WAITING_CMD11_DONE, 34 + }; 35 + 36 + enum { 37 + EVENT_CMD_COMPLETE = 0, 38 + EVENT_XFER_COMPLETE, 39 + EVENT_DATA_COMPLETE, 40 + EVENT_DATA_ERROR, 41 + }; 42 + 43 + enum dw_mci_cookie { 44 + COOKIE_UNMAPPED, 45 + COOKIE_PRE_MAPPED, /* mapped by pre_req() of dwmmc */ 46 + COOKIE_MAPPED, /* mapped by prepare_data() of dwmmc */ 47 + }; 48 + 49 + struct mmc_data; 50 + 51 + enum { 52 + TRANS_MODE_PIO = 0, 53 + TRANS_MODE_IDMAC, 54 + TRANS_MODE_EDMAC 55 + }; 56 + 57 + struct dw_mci_dma_slave { 58 + struct dma_chan *ch; 59 + enum dma_transfer_direction direction; 60 + }; 61 + 62 + /** 63 + * struct dw_mci - MMC controller state shared between all slots 64 + * @lock: Spinlock protecting the queue and associated data. 65 + * @irq_lock: Spinlock protecting the INTMASK setting. 66 + * @regs: Pointer to MMIO registers. 67 + * @fifo_reg: Pointer to MMIO registers for data FIFO 68 + * @sg: Scatterlist entry currently being processed by PIO code, if any. 69 + * @sg_miter: PIO mapping scatterlist iterator. 70 + * @cur_slot: The slot which is currently using the controller. 71 + * @mrq: The request currently being processed on @cur_slot, 72 + * or NULL if the controller is idle. 73 + * @cmd: The command currently being sent to the card, or NULL. 74 + * @data: The data currently being transferred, or NULL if no data 75 + * transfer is in progress. 76 + * @stop_abort: The command currently prepared for stopping transfer. 77 + * @prev_blksz: The former transfer blksz record. 78 + * @timing: Record of current ios timing. 
79 + * @use_dma: Whether DMA channel is initialized or not. 80 + * @using_dma: Whether DMA is in use for the current transfer. 81 + * @dma_64bit_address: Whether DMA supports 64-bit address mode or not. 82 + * @sg_dma: Bus address of DMA buffer. 83 + * @sg_cpu: Virtual address of DMA buffer. 84 + * @dma_ops: Pointer to platform-specific DMA callbacks. 85 + * @cmd_status: Snapshot of SR taken upon completion of the current 86 + * command. Only valid when EVENT_CMD_COMPLETE is pending. 87 + * @ring_size: Buffer size for idma descriptors. 88 + * @dms: structure of slave-dma private data. 89 + * @phy_regs: physical address of controller's register map 90 + * @data_status: Snapshot of SR taken upon completion of the current 91 + * data transfer. Only valid when EVENT_DATA_COMPLETE or 92 + * EVENT_DATA_ERROR is pending. 93 + * @stop_cmdr: Value to be loaded into CMDR when the stop command is 94 + * to be sent. 95 + * @dir_status: Direction of current transfer. 96 + * @tasklet: Tasklet running the request state machine. 97 + * @pending_events: Bitmask of events flagged by the interrupt handler 98 + * to be processed by the tasklet. 99 + * @completed_events: Bitmask of events which the state machine has 100 + * processed. 101 + * @state: Tasklet state. 102 + * @queue: List of slots waiting for access to the controller. 103 + * @bus_hz: The rate of @mck in Hz. This forms the basis for MMC bus 104 + * rate and timeout calculations. 105 + * @current_speed: Configured rate of the controller. 106 + * @num_slots: Number of slots available. 107 + * @fifoth_val: The value of FIFOTH register. 108 + * @verid: Denote Version ID. 109 + * @dev: Device associated with the MMC controller. 110 + * @pdata: Platform data associated with the MMC controller. 111 + * @drv_data: Driver specific data for identified variant of the controller. 112 + * @priv: Implementation defined private data. 113 + * @biu_clk: Pointer to bus interface unit clock instance. 
114 + * @ciu_clk: Pointer to card interface unit clock instance. 115 + * @slot: Slots sharing this MMC controller. 116 + * @fifo_depth: depth of FIFO. 117 + * @data_addr_override: override fifo reg offset with this value. 118 + * @wm_aligned: force fifo watermark equal to the data length in PIO mode. 119 + * Set as true if alignment is needed. 120 + * @data_shift: log2 of FIFO item size. 121 + * @part_buf_start: Start index in part_buf. 122 + * @part_buf_count: Bytes of partial data in part_buf. 123 + * @part_buf: Simple buffer for partial fifo reads/writes. 124 + * @push_data: Pointer to FIFO push function. 125 + * @pull_data: Pointer to FIFO pull function. 126 + * @vqmmc_enabled: Status of vqmmc, should be true or false. 127 + * @irq_flags: The flags to be passed to request_irq. 128 + * @irq: The irq value to be passed to request_irq. 129 + * @sdio_id0: Number of slot0 in the SDIO interrupt registers. 130 + * @cmd11_timer: Timer for SD3.0 voltage switch over scheme. 131 + * @dto_timer: Timer for broken data transfer over scheme. 132 + * 133 + * Locking 134 + * ======= 135 + * 136 + * @lock is a softirq-safe spinlock protecting @queue as well as 137 + * @cur_slot, @mrq and @state. These must always be updated 138 + * at the same time while holding @lock. 139 + * 140 + * @irq_lock is an irq-safe spinlock protecting the INTMASK register 141 + * to allow the interrupt handler to modify it directly. Held only long 142 + * enough to read-modify-write INTMASK and no other locks are grabbed when 143 + * holding this one. 144 + * 145 + * The @mrq field of struct dw_mci_slot is also protected by @lock, 146 + * and must always be written at the same time as the slot is added to 147 + * @queue. 148 + * 149 + * @pending_events and @completed_events are accessed using atomic bit 150 + * operations, so they don't need any locking. 151 + * 152 + * None of the fields touched by the interrupt handler need any 153 + * locking. 
However, ordering is important: Before EVENT_DATA_ERROR or 154 + * EVENT_DATA_COMPLETE is set in @pending_events, all data-related 155 + * interrupts must be disabled and @data_status updated with a 156 + * snapshot of SR. Similarly, before EVENT_CMD_COMPLETE is set, the 157 + * CMDRDY interrupt must be disabled and @cmd_status updated with a 158 + * snapshot of SR, and before EVENT_XFER_COMPLETE can be set, the 159 + * bytes_xfered field of @data must be written. This is ensured by 160 + * using barriers. 161 + */ 162 + struct dw_mci { 163 + spinlock_t lock; 164 + spinlock_t irq_lock; 165 + void __iomem *regs; 166 + void __iomem *fifo_reg; 167 + u32 data_addr_override; 168 + bool wm_aligned; 169 + 170 + struct scatterlist *sg; 171 + struct sg_mapping_iter sg_miter; 172 + 173 + struct dw_mci_slot *cur_slot; 174 + struct mmc_request *mrq; 175 + struct mmc_command *cmd; 176 + struct mmc_data *data; 177 + struct mmc_command stop_abort; 178 + unsigned int prev_blksz; 179 + unsigned char timing; 180 + 181 + /* DMA interface members*/ 182 + int use_dma; 183 + int using_dma; 184 + int dma_64bit_address; 185 + 186 + dma_addr_t sg_dma; 187 + void *sg_cpu; 188 + const struct dw_mci_dma_ops *dma_ops; 189 + /* For idmac */ 190 + unsigned int ring_size; 191 + 192 + /* For edmac */ 193 + struct dw_mci_dma_slave *dms; 194 + /* Registers's physical base address */ 195 + resource_size_t phy_regs; 196 + 197 + u32 cmd_status; 198 + u32 data_status; 199 + u32 stop_cmdr; 200 + u32 dir_status; 201 + struct tasklet_struct tasklet; 202 + unsigned long pending_events; 203 + unsigned long completed_events; 204 + enum dw_mci_state state; 205 + struct list_head queue; 206 + 207 + u32 bus_hz; 208 + u32 current_speed; 209 + u32 num_slots; 210 + u32 fifoth_val; 211 + u16 verid; 212 + struct device *dev; 213 + struct dw_mci_board *pdata; 214 + const struct dw_mci_drv_data *drv_data; 215 + void *priv; 216 + struct clk *biu_clk; 217 + struct clk *ciu_clk; 218 + struct dw_mci_slot 
*slot[MAX_MCI_SLOTS]; 219 + 220 + /* FIFO push and pull */ 221 + int fifo_depth; 222 + int data_shift; 223 + u8 part_buf_start; 224 + u8 part_buf_count; 225 + union { 226 + u16 part_buf16; 227 + u32 part_buf32; 228 + u64 part_buf; 229 + }; 230 + void (*push_data)(struct dw_mci *host, void *buf, int cnt); 231 + void (*pull_data)(struct dw_mci *host, void *buf, int cnt); 232 + 233 + bool vqmmc_enabled; 234 + unsigned long irq_flags; /* IRQ flags */ 235 + int irq; 236 + 237 + int sdio_id0; 238 + 239 + struct timer_list cmd11_timer; 240 + struct timer_list dto_timer; 241 + }; 242 + 243 + /* DMA ops for Internal/External DMAC interface */ 244 + struct dw_mci_dma_ops { 245 + /* DMA Ops */ 246 + int (*init)(struct dw_mci *host); 247 + int (*start)(struct dw_mci *host, unsigned int sg_len); 248 + void (*complete)(void *host); 249 + void (*stop)(struct dw_mci *host); 250 + void (*cleanup)(struct dw_mci *host); 251 + void (*exit)(struct dw_mci *host); 252 + }; 253 + 254 + struct dma_pdata; 255 + 256 + /* Board platform data */ 257 + struct dw_mci_board { 258 + u32 num_slots; 259 + 260 + unsigned int bus_hz; /* Clock speed at the cclk_in pad */ 261 + 262 + u32 caps; /* Capabilities */ 263 + u32 caps2; /* More capabilities */ 264 + u32 pm_caps; /* PM capabilities */ 265 + /* 266 + * Override fifo depth. If 0, autodetect it from the FIFOTH register, 267 + * but note that this may not be reliable after a bootloader has used 268 + * it. 269 + */ 270 + unsigned int fifo_depth; 271 + 272 + /* delay in mS before detecting cards after interrupt */ 273 + u32 detect_delay_ms; 274 + 275 + struct reset_control *rstc; 276 + struct dw_mci_dma_ops *dma_ops; 277 + struct dma_pdata *data; 278 + }; 279 + 17 280 #define DW_MMC_240A 0x240a 18 281 #define DW_MMC_280A 0x280a 19 282
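The kernel-doc above notes that @pending_events and @completed_events are accessed with atomic bit operations and therefore need no spinlock. A minimal userspace sketch of that pattern, using C11 atomics in place of the kernel's set_bit()/test_and_clear_bit() (names and event bits are illustrative, not dw_mmc's):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative event bits, mirroring the EVENT_* flags used by dw_mmc */
enum { EVT_CMD_COMPLETE = 1u << 0, EVT_DATA_COMPLETE = 1u << 1 };

static _Atomic unsigned long pending_events;

/* IRQ side: flag an event (the kernel would use set_bit()) */
static void flag_event(unsigned long evt)
{
	atomic_fetch_or(&pending_events, evt);
}

/* Tasklet side: consume an event if set (kernel: test_and_clear_bit()) */
static bool consume_event(unsigned long evt)
{
	unsigned long old = atomic_fetch_and(&pending_events, ~evt);

	return (old & evt) != 0;
}
```

Because both sides use read-modify-write atomics on the same word, flagging and consuming events never needs @lock or @irq_lock.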
+61 -57
drivers/mmc/host/meson-gx-mmc.c
··· 35 35 #include <linux/clk.h> 36 36 #include <linux/clk-provider.h> 37 37 #include <linux/regulator/consumer.h> 38 + #include <linux/interrupt.h> 38 39 39 40 #define DRIVER_NAME "meson-gx-mmc" 40 41 ··· 83 82 #define CFG_RC_CC_MASK 0xf 84 83 #define CFG_STOP_CLOCK BIT(22) 85 84 #define CFG_CLK_ALWAYS_ON BIT(18) 85 + #define CFG_CHK_DS BIT(20) 86 86 #define CFG_AUTO_CLK BIT(23) 87 87 88 88 #define SD_EMMC_STATUS 0x48 ··· 133 131 struct clk_mux mux; 134 132 struct clk *mux_clk; 135 133 struct clk *mux_parent[MUX_CLK_NUM_PARENTS]; 136 - unsigned long mux_parent_rate[MUX_CLK_NUM_PARENTS]; 134 + unsigned long current_clock; 137 135 138 136 struct clk_divider cfg_div; 139 137 struct clk *cfg_div_clk; ··· 180 178 static int meson_mmc_clk_set(struct meson_host *host, unsigned long clk_rate) 181 179 { 182 180 struct mmc_host *mmc = host->mmc; 183 - int ret = 0; 181 + int ret; 184 182 u32 cfg; 185 183 186 184 if (clk_rate) { ··· 190 188 clk_rate = mmc->f_min; 191 189 } 192 190 193 - if (clk_rate == mmc->actual_clock) 191 + if (clk_rate == host->current_clock) 194 192 return 0; 195 193 196 194 /* stop clock */ ··· 203 201 dev_dbg(host->dev, "change clock rate %u -> %lu\n", 204 202 mmc->actual_clock, clk_rate); 205 203 206 - if (clk_rate == 0) { 204 + if (!clk_rate) { 207 205 mmc->actual_clock = 0; 206 + host->current_clock = 0; 207 + /* return with clock being stopped */ 208 208 return 0; 209 209 } 210 210 211 211 ret = clk_set_rate(host->cfg_div_clk, clk_rate); 212 - if (ret) 213 - dev_warn(host->dev, "Unable to set cfg_div_clk to %lu. 
ret=%d\n", 214 - clk_rate, ret); 215 - else if (clk_rate && clk_rate != clk_get_rate(host->cfg_div_clk)) 216 - dev_warn(host->dev, "divider requested rate %lu != actual rate %lu: ret=%d\n", 217 - clk_rate, clk_get_rate(host->cfg_div_clk), ret); 218 - else 219 - mmc->actual_clock = clk_rate; 220 - 221 - /* (re)start clock, if non-zero */ 222 - if (!ret && clk_rate) { 223 - cfg = readl(host->regs + SD_EMMC_CFG); 224 - cfg &= ~CFG_STOP_CLOCK; 225 - writel(cfg, host->regs + SD_EMMC_CFG); 212 + if (ret) { 213 + dev_err(host->dev, "Unable to set cfg_div_clk to %lu. ret=%d\n", 214 + clk_rate, ret); 215 + return ret; 226 216 } 227 217 228 - return ret; 218 + mmc->actual_clock = clk_get_rate(host->cfg_div_clk); 219 + host->current_clock = clk_rate; 220 + 221 + if (clk_rate != mmc->actual_clock) 222 + dev_dbg(host->dev, 223 + "divider requested rate %lu != actual rate %u\n", 224 + clk_rate, mmc->actual_clock); 225 + 226 + /* (re)start clock */ 227 + cfg = readl(host->regs + SD_EMMC_CFG); 228 + cfg &= ~CFG_STOP_CLOCK; 229 + writel(cfg, host->regs + SD_EMMC_CFG); 230 + 231 + return 0; 229 232 } 230 233 231 234 /* ··· 246 239 const char *mux_parent_names[MUX_CLK_NUM_PARENTS]; 247 240 unsigned int mux_parent_count = 0; 248 241 const char *clk_div_parents[1]; 249 - unsigned int f_min = UINT_MAX; 250 242 u32 clk_reg, cfg; 251 243 252 244 /* get the mux parents */ ··· 262 256 return ret; 263 257 } 264 258 265 - host->mux_parent_rate[i] = clk_get_rate(host->mux_parent[i]); 266 259 mux_parent_names[i] = __clk_get_name(host->mux_parent[i]); 267 260 mux_parent_count++; 268 - if (host->mux_parent_rate[i] < f_min) 269 - f_min = host->mux_parent_rate[i]; 270 261 } 271 - 272 - /* cacluate f_min based on input clocks, and max divider value */ 273 - if (f_min != UINT_MAX) 274 - f_min = DIV_ROUND_UP(CLK_SRC_XTAL_RATE, CLK_DIV_MAX); 275 - else 276 - f_min = 4000000; /* default min: 400 MHz */ 277 - host->mmc->f_min = f_min; 278 262 279 263 /* create the mux */ 280 264 snprintf(clk_name, 
sizeof(clk_name), "%s#mux", dev_name(host->dev)); ··· 320 324 writel(cfg, host->regs + SD_EMMC_CFG); 321 325 322 326 ret = clk_prepare_enable(host->cfg_div_clk); 323 - if (!ret) 324 - ret = meson_mmc_clk_set(host, f_min); 327 + if (ret) 328 + return ret; 325 329 330 + /* Get the nearest minimum clock to 400KHz */ 331 + host->mmc->f_min = clk_round_rate(host->cfg_div_clk, 400000); 332 + 333 + ret = meson_mmc_clk_set(host, host->mmc->f_min); 326 334 if (!ret) 327 335 clk_disable_unprepare(host->cfg_div_clk); 328 336 ··· 378 378 meson_mmc_clk_set(host, ios->clock); 379 379 380 380 /* Bus width */ 381 - val = readl(host->regs + SD_EMMC_CFG); 382 381 switch (ios->bus_width) { 383 382 case MMC_BUS_WIDTH_1: 384 383 bus_width = CFG_BUS_WIDTH_1; ··· 392 393 dev_err(host->dev, "Invalid ios->bus_width: %u. Setting to 4.\n", 393 394 ios->bus_width); 394 395 bus_width = CFG_BUS_WIDTH_4; 395 - return; 396 396 } 397 397 398 398 val = readl(host->regs + SD_EMMC_CFG); ··· 408 410 409 411 val &= ~(CFG_RC_CC_MASK << CFG_RC_CC_SHIFT); 410 412 val |= ilog2(SD_EMMC_CFG_CMD_GAP) << CFG_RC_CC_SHIFT; 413 + 414 + val &= ~CFG_DDR; 415 + if (ios->timing == MMC_TIMING_UHS_DDR50 || 416 + ios->timing == MMC_TIMING_MMC_DDR52 || 417 + ios->timing == MMC_TIMING_MMC_HS400) 418 + val |= CFG_DDR; 419 + 420 + val &= ~CFG_CHK_DS; 421 + if (ios->timing == MMC_TIMING_MMC_HS400) 422 + val |= CFG_CHK_DS; 411 423 412 424 writel(val, host->regs + SD_EMMC_CFG); 413 425 ··· 488 480 blk_len = cfg & (CFG_BLK_LEN_MASK << CFG_BLK_LEN_SHIFT); 489 481 blk_len >>= CFG_BLK_LEN_SHIFT; 490 482 if (blk_len != ilog2(cmd->data->blksz)) { 491 - dev_warn(host->dev, "%s: update blk_len %d -> %d\n", 483 + dev_dbg(host->dev, "%s: update blk_len %d -> %d\n", 492 484 __func__, blk_len, 493 - ilog2(cmd->data->blksz)); 485 + ilog2(cmd->data->blksz)); 494 486 blk_len = ilog2(cmd->data->blksz); 495 487 cfg &= ~(CFG_BLK_LEN_MASK << CFG_BLK_LEN_SHIFT); 496 488 cfg |= blk_len << CFG_BLK_LEN_SHIFT; ··· 552 544 553 545 /* Stop execution */ 
554 546 writel(0, host->regs + SD_EMMC_START); 555 - 556 - /* clear, ack, enable all interrupts */ 557 - writel(0, host->regs + SD_EMMC_IRQ_EN); 558 - writel(IRQ_EN_MASK, host->regs + SD_EMMC_STATUS); 559 - writel(IRQ_EN_MASK, host->regs + SD_EMMC_IRQ_EN); 560 547 561 548 host->mrq = mrq; 562 549 ··· 672 669 struct mmc_command *cmd = host->cmd; 673 670 struct mmc_data *data; 674 671 unsigned int xfer_bytes; 675 - int ret = IRQ_HANDLED; 676 672 677 673 if (WARN_ON(!mrq)) 678 674 return IRQ_NONE; ··· 680 678 return IRQ_NONE; 681 679 682 680 data = cmd->data; 683 - if (data) { 681 + if (data && data->flags & MMC_DATA_READ) { 684 682 xfer_bytes = data->blksz * data->blocks; 685 - if (data->flags & MMC_DATA_READ) { 686 - WARN_ON(xfer_bytes > host->bounce_buf_size); 687 - sg_copy_from_buffer(data->sg, data->sg_len, 688 - host->bounce_buf, xfer_bytes); 689 - data->bytes_xfered = xfer_bytes; 690 - } 683 + WARN_ON(xfer_bytes > host->bounce_buf_size); 684 + sg_copy_from_buffer(data->sg, data->sg_len, 685 + host->bounce_buf, xfer_bytes); 686 + data->bytes_xfered = xfer_bytes; 691 687 } 692 688 693 689 meson_mmc_read_resp(host->mmc, cmd); ··· 694 694 else 695 695 meson_mmc_start_cmd(host->mmc, data->stop); 696 696 697 - return ret; 697 + return IRQ_HANDLED; 698 698 } 699 699 700 700 /* ··· 742 742 743 743 ret = mmc_of_parse(mmc); 744 744 if (ret) { 745 - dev_warn(&pdev->dev, "error parsing DT: %d\n", ret); 745 + if (ret != -EPROBE_DEFER) 746 + dev_warn(&pdev->dev, "error parsing DT: %d\n", ret); 746 747 goto free_host; 747 748 } 748 749 ··· 781 780 /* clear, ack, enable all interrupts */ 782 781 writel(0, host->regs + SD_EMMC_IRQ_EN); 783 782 writel(IRQ_EN_MASK, host->regs + SD_EMMC_STATUS); 783 + writel(IRQ_EN_MASK, host->regs + SD_EMMC_IRQ_EN); 784 784 785 785 ret = devm_request_threaded_irq(&pdev->dev, host->irq, 786 786 meson_mmc_irq, meson_mmc_irq_thread, ··· 789 787 if (ret) 790 788 goto free_host; 791 789 790 + mmc->max_blk_count = CMD_CFG_LENGTH_MASK; 791 + 
mmc->max_req_size = mmc->max_blk_count * mmc->max_blk_size; 792 + 792 793 /* data bounce buffer */ 793 - host->bounce_buf_size = SZ_512K; 794 + host->bounce_buf_size = mmc->max_req_size; 794 795 host->bounce_buf = 795 796 dma_alloc_coherent(host->dev, host->bounce_buf_size, 796 797 &host->bounce_dma_addr, GFP_KERNEL); ··· 819 814 { 820 815 struct meson_host *host = dev_get_drvdata(&pdev->dev); 821 816 822 - if (WARN_ON(!host)) 823 - return 0; 817 + /* disable interrupts */ 818 + writel(0, host->regs + SD_EMMC_IRQ_EN); 824 819 825 - if (host->bounce_buf) 826 - dma_free_coherent(host->dev, host->bounce_buf_size, 827 - host->bounce_buf, host->bounce_dma_addr); 820 + dma_free_coherent(host->dev, host->bounce_buf_size, 821 + host->bounce_buf, host->bounce_dma_addr); 828 822 829 823 clk_disable_unprepare(host->cfg_div_clk); 830 824 clk_disable_unprepare(host->core_clk);
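The meson-gx change above derives mmc->f_min from clk_round_rate() on the divider clock instead of hard-coding a minimum. A rough userspace model of what rounding against an integer divider does (the 24 MHz parent and the maximum divider value are assumptions for the sketch, not the driver's actual clk tree):

```c
/* Nearest rate <= target that a simple integer divider can produce,
 * capped at a maximum divider value. */
static unsigned long div_round_rate(unsigned long parent, unsigned long target,
				    unsigned long div_max)
{
	unsigned long div = (parent + target - 1) / target; /* DIV_ROUND_UP */

	if (div > div_max)
		div = div_max;
	return parent / div;
}
```

With a 24 MHz parent, asking for 400 kHz yields divider 60 and exactly 400000 Hz; asking for something below what the divider can reach returns the capped floor rate, which is why deriving f_min from the clk framework beats a fixed constant.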
+6 -1
drivers/mmc/host/mmci.c
···
 {
 	dev_err(mmc_dev(host->mmc), "error during DMA transfer!\n");
 	dmaengine_terminate_all(host->dma_current);
+	host->dma_in_progress = false;
 	host->dma_current = NULL;
 	host->dma_desc_current = NULL;
 	host->data->host_cookie = 0;
···
 		mmci_dma_release(host);
 	}

+	host->dma_in_progress = false;
 	host->dma_current = NULL;
 	host->dma_desc_current = NULL;
 }
···
 	dev_vdbg(mmc_dev(host->mmc),
 		 "Submit MMCI DMA job, sglen %d blksz %04x blks %04x flags %08x\n",
 		 data->sg_len, data->blksz, data->blocks, data->flags);
+	host->dma_in_progress = true;
 	dmaengine_submit(host->dma_desc_current);
 	dma_async_issue_pending(host->dma_current);
···
 	if (host->dma_desc_current == next->dma_desc)
 		host->dma_desc_current = NULL;

-	if (host->dma_current == next->dma_chan)
+	if (host->dma_current == next->dma_chan) {
+		host->dma_in_progress = false;
 		host->dma_current = NULL;
+	}

 	next->dma_desc = NULL;
 	next->dma_chan = NULL;
+2 -1
drivers/mmc/host/mmci.h
···
 	struct dma_chan		*dma_tx_channel;
 	struct dma_async_tx_descriptor	*dma_desc_current;
 	struct mmci_host_next	next_data;
+	bool			dma_in_progress;

-#define dma_inprogress(host)	((host)->dma_current)
+#define dma_inprogress(host)	((host)->dma_in_progress)
 #else
 #define dma_inprogress(host)	(0)
 #endif
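The mmci fix above replaces the implicit "DMA in progress" test on host->dma_current with an explicit boolean, because with pre-queued next-request descriptors the channel pointer can be non-NULL while no transfer is actually running. A stripped-down sketch of the new state handling (simplified stand-in types, not the real mmci structures):

```c
#include <stdbool.h>
#include <stddef.h>

struct fake_host {
	void *dma_current;	/* channel used by the current transfer */
	bool  dma_in_progress;	/* explicit state, set around submit/finalize */
};

/* Set when the DMA job is actually submitted, not when a channel exists */
static void dma_start(struct fake_host *h, void *chan)
{
	h->dma_current = chan;
	h->dma_in_progress = true;
}

/* Cleared on completion or error, alongside dropping the channel */
static void dma_finalize(struct fake_host *h)
{
	h->dma_in_progress = false;
	h->dma_current = NULL;
}
```

The macro change in mmci.h then makes dma_inprogress() test this flag, so a lingering channel pointer from a prepared-but-unsubmitted request no longer looks like an active transfer.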
+3 -5
drivers/mmc/host/mtk-sd.c
···
 #include <linux/regulator/consumer.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/interrupt.h>

 #include <linux/mmc/card.h>
 #include <linux/mmc/core.h>
···
 	struct msdc_host *host = mmc_priv(mmc);
 	u32 status = readl(host->base + MSDC_PS);

-	/* check if any pin between dat[0:3] is low */
-	if (((status >> 16) & 0xf) != 0xf)
-		return 1;
-
-	return 0;
+	/* only check if data0 is low */
+	return !(status & BIT(16));
 }

 static void msdc_request_timeout(struct work_struct *work)
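The mtk-sd change narrows card-busy detection from "any of DAT[0:3] low" to DAT0 only, since a card signals busy solely by holding DAT0 low. The new predicate is a plain bit test; a tiny sketch using the bit position from the patch (bit 16 of MSDC_PS):

```c
#include <stdint.h>

#define MSDC_PS_DAT0	(1u << 16)	/* DAT0 line level in MSDC_PS */

/* Card is busy while it holds DAT0 low */
static int msdc_card_busy_from_ps(uint32_t status)
{
	return !(status & MSDC_PS_DAT0);
}
```

With the old check, a low level on DAT1-DAT3 (legitimate during some transfers) was misreported as card-busy; testing only DAT0 removes those false positives.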
+12 -4
drivers/mmc/host/mxs-mmc.c
···
 		}
 	}

-	if (data) {
+	if (cmd == mrq->sbc) {
+		/* Finished CMD23, now send actual command. */
+		mxs_mmc_start_cmd(host, mrq->cmd);
+		return;
+	} else if (data) {
 		dma_unmap_sg(mmc_dev(host->mmc), data->sg,
 			     data->sg_len, ssp->dma_dir);
 		/*
···
 			data->bytes_xfered = 0;

 		host->data = NULL;
-		if (mrq->stop) {
+		if (data->stop && (data->error || !mrq->sbc)) {
 			mxs_mmc_start_cmd(host, mrq->stop);
 			return;
 		}
···
 	WARN_ON(host->mrq != NULL);
 	host->mrq = mrq;
-	mxs_mmc_start_cmd(host, mrq->cmd);
+
+	if (mrq->sbc)
+		mxs_mmc_start_cmd(host, mrq->sbc);
+	else
+		mxs_mmc_start_cmd(host, mrq->cmd);
 }

 static void mxs_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
···
 	/* set mmc core parameters */
 	mmc->ops = &mxs_mmc_ops;
 	mmc->caps = MMC_CAP_SD_HIGHSPEED | MMC_CAP_MMC_HIGHSPEED |
-		    MMC_CAP_SDIO_IRQ | MMC_CAP_NEEDS_POLL;
+		    MMC_CAP_SDIO_IRQ | MMC_CAP_NEEDS_POLL | MMC_CAP_CMD23;

 	host->broken_cd = of_property_read_bool(np, "broken-cd");
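The mxs-mmc patch adds CMD23 (set block count) support: when a request carries mrq->sbc, the driver issues CMD23 first, then the data command, and only sends the stop command (CMD12) if an error occurred or no CMD23 was used. That ordering can be sketched as a small next-command chooser (hypothetical types, not the driver's own):

```c
#include <stdbool.h>

enum next_cmd { SEND_SBC, SEND_CMD, SEND_STOP, DONE };

/* Which command to issue next, given what has already finished */
static enum next_cmd mxs_next(bool has_sbc, bool sbc_done, bool data_done,
			      bool error, bool has_stop)
{
	if (has_sbc && !sbc_done)
		return SEND_SBC;	/* CMD23 goes out first */
	if (!data_done)
		return SEND_CMD;	/* then the data command */
	if (has_stop && (error || !has_sbc))
		return SEND_STOP;	/* CMD12 only when actually needed */
	return DONE;
}
```

The benefit of CMD23 is that the card knows the transfer length in advance, so the open-ended-transfer + CMD12 termination can be skipped on the happy path.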
+1 -1
drivers/mmc/host/omap.c
···
 	 * If no card is inserted, we postpone polling until
 	 * the cover has been closed.
 	 */
-	if (slot->mmc->card == NULL || !mmc_card_present(slot->mmc->card))
+	if (slot->mmc->card == NULL)
 		return;

 	mod_timer(&slot->cover_timer,
+19 -10
drivers/mmc/host/omap_hsmmc.c
···
 	if (status & ERR_EN) {
 		omap_hsmmc_dbg_report_irq(host, status);

-		if (status & (CTO_EN | CCRC_EN))
+		if (status & (CTO_EN | CCRC_EN | CEB_EN))
 			end_cmd = 1;
 		if (host->data || host->response_busy) {
 			end_trans = !end_cmd;
···
 }

 static void set_data_timeout(struct omap_hsmmc_host *host,
-			     unsigned int timeout_ns,
+			     unsigned long long timeout_ns,
 			     unsigned int timeout_clks)
 {
-	unsigned int timeout, cycle_ns;
+	unsigned long long timeout = timeout_ns;
+	unsigned int cycle_ns;
 	uint32_t reg, clkd, dto = 0;

 	reg = OMAP_HSMMC_READ(host->base, SYSCTL);
···
 		clkd = 1;

 	cycle_ns = 1000000000 / (host->clk_rate / clkd);
-	timeout = timeout_ns / cycle_ns;
+	do_div(timeout, cycle_ns);
 	timeout += timeout_clks;
 	if (timeout) {
 		while ((timeout & 0x80000000) == 0) {
···
 omap_hsmmc_prepare_data(struct omap_hsmmc_host *host, struct mmc_request *req)
 {
 	int ret;
+	unsigned long long timeout;
+
 	host->data = req->data;

 	if (req->data == NULL) {
 		OMAP_HSMMC_WRITE(host->base, BLK, 0);
-		/*
-		 * Set an arbitrary 100ms data timeout for commands with
-		 * busy signal.
-		 */
-		if (req->cmd->flags & MMC_RSP_BUSY)
-			set_data_timeout(host, 100000000U, 0);
+		if (req->cmd->flags & MMC_RSP_BUSY) {
+			timeout = req->cmd->busy_timeout * NSEC_PER_MSEC;
+
+			/*
+			 * Set an arbitrary 100ms data timeout for commands with
+			 * busy signal and no indication of busy_timeout.
+			 */
+			if (!timeout)
+				timeout = 100000000U;
+
+			set_data_timeout(host, timeout, 0);
+		}
 		return 0;
 	}
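The omap_hsmmc change widens the busy timeout to 64 bits and switches the division to do_div(), because busy_timeout (in ms) multiplied by NSEC_PER_MSEC easily overflows 32 bits. A userspace sketch of the cycle conversion, with plain 64-bit division standing in for the kernel's do_div() (the clock rate used below is an arbitrary example):

```c
#include <stdint.h>

/* Convert a timeout in nanoseconds to interface clock cycles.
 * clk_hz is the (already divided) controller clock. */
static uint64_t timeout_in_cycles(uint64_t timeout_ns, uint32_t clk_hz)
{
	uint32_t cycle_ns = 1000000000u / clk_hz;

	return timeout_ns / cycle_ns;	/* kernel code uses do_div() here */
}
```

A 5 s busy timeout is 5,000,000,000 ns, which does not fit in an unsigned int, so the old 32-bit arithmetic silently truncated long card-busy windows; the 64-bit path handles them correctly.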
+1 -1
drivers/mmc/host/rtsx_pci_sdmmc.c
···
 			u8 opcode, u8 sample_point)
 {
 	int err;
-	struct mmc_command cmd = {0};
+	struct mmc_command cmd = {};

 	err = sd_change_phase(host, sample_point, true);
 	if (err < 0)
+1 -1
drivers/mmc/host/rtsx_usb_sdmmc.c
···
 			u8 opcode, u8 sample_point)
 {
 	int err;
-	struct mmc_command cmd = {0};
+	struct mmc_command cmd = {};

 	err = sd_change_phase(host, sample_point, 0);
 	if (err)
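Both rtsx drivers switch `struct mmc_command cmd = {0};` to `= {};`. Both forms zero-initialize the entire struct; the empty-brace form (a GNU extension, standardized in C23) avoids missing-field-initializer warnings and works even when the first member is itself an aggregate. A quick illustration with a stand-in struct (not the real mmc_command layout):

```c
#include <string.h>

struct stand_in_cmd {
	unsigned int opcode;
	unsigned int arg;
	int error;
};

/* Compare a struct against an all-zero image of the same type */
static int is_zeroed(const struct stand_in_cmd *c)
{
	struct stand_in_cmd zero;

	memset(&zero, 0, sizeof(zero));
	return memcmp(c, &zero, sizeof(zero)) == 0;
}
```

Zeroing the whole command up front matters here because the tuning path fills in only a few fields before handing the struct to the core.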
+1
drivers/mmc/host/s3cmci.c
···
 #include <linux/debugfs.h>
 #include <linux/seq_file.h>
 #include <linux/gpio.h>
+#include <linux/interrupt.h>
 #include <linux/irq.h>
 #include <linux/io.h>
+4 -1
drivers/mmc/host/sdhci-acpi.c
···
 	if (sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD)) {
 		bool v = sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL);

-		if (mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0, NULL)) {
+		err = mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0, NULL);
+		if (err) {
+			if (err == -EPROBE_DEFER)
+				goto err_free;
 			dev_warn(dev, "failed to setup card detect gpio\n");
 			c->use_runtime_pm = false;
 		}
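The sdhci-acpi change implements deferred probe: when the card-detect GPIO lookup fails with -EPROBE_DEFER, the driver now bails out so the core can retry probing later (the GPIO provider may simply not be bound yet), instead of permanently degrading to polling. The decision boils down to classifying the error; a sketch with the errno value defined locally for illustration:

```c
#define EPROBE_DEFER 517	/* kernel's "driver requests probe retry" errno */

/* Nonzero: abort probe and let the core retry it later.
 * Zero: treat the error as non-fatal and fall back to polling
 * (which also disables runtime PM in the real driver). */
static int cd_gpio_should_defer(int err)
{
	return err == -EPROBE_DEFER;
}
```

Without this distinction, a transient ordering problem at boot (GPIO controller probing after the SDHCI device) would look identical to genuinely missing card-detect wiring.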
+2 -1
drivers/mmc/host/sdhci-cadence.c
···
 #include <linux/iopoll.h>
 #include <linux/module.h>
 #include <linux/mmc/host.h>
+#include <linux/mmc/mmc.h>

 #include "sdhci-pltfm.h"
···
 #define SDHCI_CDNS_HRS04_ACK			BIT(26)
 #define SDHCI_CDNS_HRS04_RD			BIT(25)
 #define SDHCI_CDNS_HRS04_WR			BIT(24)
-#define SDHCI_CDNS_HRS04_RDATA_SHIFT		12
+#define SDHCI_CDNS_HRS04_RDATA_SHIFT		16
 #define SDHCI_CDNS_HRS04_WDATA_SHIFT		8
 #define SDHCI_CDNS_HRS04_ADDR_SHIFT		0
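The sdhci-cadence fix moves the PHY read-data field of HRS04 from bit 12 to bit 16; with the wrong shift, every PHY register read returned misaligned bits. Extracting and composing such fields is plain shift-and-mask; a sketch using the shifts from the patch (the 8-bit field widths are an assumption, since the patch defines only the shifts):

```c
#include <stdint.h>

#define HRS04_RDATA_SHIFT	16	/* corrected value from the patch */
#define HRS04_WDATA_SHIFT	8
#define HRS04_ADDR_SHIFT	0

/* Extract the PHY read data returned in HRS04[23:16] (assumed 8-bit) */
static uint8_t hrs04_rdata(uint32_t reg)
{
	return (reg >> HRS04_RDATA_SHIFT) & 0xff;
}

/* Compose a write command word: data into [15:8], address into [7:0] */
static uint32_t hrs04_wcmd(uint8_t addr, uint8_t wdata)
{
	return ((uint32_t)wdata << HRS04_WDATA_SHIFT) |
	       ((uint32_t)addr << HRS04_ADDR_SHIFT);
}
```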
+24 -18
drivers/mmc/host/sdhci-esdhc.h
···
 					SDHCI_QUIRK_PIO_NEEDS_DELAY | \
 					SDHCI_QUIRK_NO_HISPD_BIT)

-#define ESDHC_PROCTL		0x28
-
-#define ESDHC_SYSTEM_CONTROL	0x2c
-#define ESDHC_CLOCK_MASK	0x0000fff0
-#define ESDHC_PREDIV_SHIFT	8
-#define ESDHC_DIVIDER_SHIFT	4
-#define ESDHC_CLOCK_PEREN	0x00000004
-#define ESDHC_CLOCK_HCKEN	0x00000002
-#define ESDHC_CLOCK_IPGEN	0x00000001
-
 /* pltfm-specific */
 #define ESDHC_HOST_CONTROL_LE	0x20

 /*
- * P2020 interpretation of the SDHCI_HOST_CONTROL register
+ * eSDHC register definition
 */
-#define ESDHC_CTRL_4BITBUS	(0x1 << 1)
-#define ESDHC_CTRL_8BITBUS	(0x2 << 1)
-#define ESDHC_CTRL_BUSWIDTH_MASK	(0x3 << 1)

-/* OF-specific */
-#define ESDHC_DMA_SYSCTL	0x40c
-#define ESDHC_DMA_SNOOP		0x00000040
+/* Present State Register */
+#define ESDHC_PRSSTAT			0x24
+#define ESDHC_CLOCK_STABLE		0x00000008

-#define ESDHC_HOST_CONTROL_RES	0x01
+/* Protocol Control Register */
+#define ESDHC_PROCTL			0x28
+#define ESDHC_CTRL_4BITBUS		(0x1 << 1)
+#define ESDHC_CTRL_8BITBUS		(0x2 << 1)
+#define ESDHC_CTRL_BUSWIDTH_MASK	(0x3 << 1)
+#define ESDHC_HOST_CONTROL_RES		0x01
+
+/* System Control Register */
+#define ESDHC_SYSTEM_CONTROL		0x2c
+#define ESDHC_CLOCK_MASK		0x0000fff0
+#define ESDHC_PREDIV_SHIFT		8
+#define ESDHC_DIVIDER_SHIFT		4
+#define ESDHC_CLOCK_SDCLKEN		0x00000008
+#define ESDHC_CLOCK_PEREN		0x00000004
+#define ESDHC_CLOCK_HCKEN		0x00000002
+#define ESDHC_CLOCK_IPGEN		0x00000001
+
+/* Control Register for DMA transfer */
+#define ESDHC_DMA_SYSCTL		0x40c
+#define ESDHC_DMA_SNOOP			0x00000040

 #endif /* _DRIVERS_MMC_SDHCI_ESDHC_H */
+8 -3
drivers/mmc/host/sdhci-iproc.c
···
 static const struct sdhci_pltfm_data sdhci_bcm2835_pltfm_data = {
 	.quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION |
 		  SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
-		  SDHCI_QUIRK_MISSING_CAPS,
+		  SDHCI_QUIRK_MISSING_CAPS |
+		  SDHCI_QUIRK_NO_HISPD_BIT,
 	.ops = &sdhci_iproc_32only_ops,
 };

 static const struct sdhci_iproc_data bcm2835_data = {
 	.pdata = &sdhci_bcm2835_pltfm_data,
-	.caps = SDHCI_CAN_VDD_330,
-	.caps1 = 0x00000000,
+	.caps = ((0x1 << SDHCI_MAX_BLOCK_SHIFT)
+			& SDHCI_MAX_BLOCK_MASK) |
+		SDHCI_CAN_VDD_330 |
+		SDHCI_CAN_DO_HISPD,
+	.caps1 = SDHCI_DRIVER_TYPE_A |
+		 SDHCI_DRIVER_TYPE_C,
 	.mmc_caps = 0x00000000,
 };
+225 -152
drivers/mmc/host/sdhci-msm.c
··· 69 69 #define CORE_DLL_CLOCK_DISABLE BIT(21) 70 70 71 71 #define CORE_VENDOR_SPEC 0x10c 72 + #define CORE_VENDOR_SPEC_POR_VAL 0xa1c 72 73 #define CORE_CLK_PWRSAVE BIT(1) 73 74 #define CORE_HC_MCLK_SEL_DFLT (2 << 8) 74 75 #define CORE_HC_MCLK_SEL_HS400 (3 << 8) ··· 103 102 104 103 #define CORE_DDR_200_CFG 0x184 105 104 #define CORE_CDC_T4_DLY_SEL BIT(0) 105 + #define CORE_CMDIN_RCLK_EN BIT(1) 106 106 #define CORE_START_CDC_TRAFFIC BIT(6) 107 107 #define CORE_VENDOR_SPEC3 0x1b0 108 108 #define CORE_PWRSAVE_DLL BIT(3) ··· 139 137 u8 saved_tuning_phase; 140 138 bool use_cdclp533; 141 139 }; 140 + 141 + static unsigned int msm_get_clock_rate_for_bus_mode(struct sdhci_host *host, 142 + unsigned int clock) 143 + { 144 + struct mmc_ios ios = host->mmc->ios; 145 + /* 146 + * The SDHC requires internal clock frequency to be double the 147 + * actual clock that will be set for DDR mode. The controller 148 + * uses the faster clock(100/400MHz) for some of its parts and 149 + * send the actual required clock (50/200MHz) to the card. 
150 + */ 151 + if (ios.timing == MMC_TIMING_UHS_DDR50 || 152 + ios.timing == MMC_TIMING_MMC_DDR52 || 153 + ios.timing == MMC_TIMING_MMC_HS400 || 154 + host->flags & SDHCI_HS400_TUNING) 155 + clock *= 2; 156 + return clock; 157 + } 158 + 159 + static void msm_set_clock_rate_for_bus_mode(struct sdhci_host *host, 160 + unsigned int clock) 161 + { 162 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 163 + struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 164 + struct mmc_ios curr_ios = host->mmc->ios; 165 + int rc; 166 + 167 + clock = msm_get_clock_rate_for_bus_mode(host, clock); 168 + rc = clk_set_rate(msm_host->clk, clock); 169 + if (rc) { 170 + pr_err("%s: Failed to set clock at rate %u at timing %d\n", 171 + mmc_hostname(host->mmc), clock, 172 + curr_ios.timing); 173 + return; 174 + } 175 + msm_host->clk_rate = clock; 176 + pr_debug("%s: Setting clock at rate %lu at timing %d\n", 177 + mmc_hostname(host->mmc), clk_get_rate(msm_host->clk), 178 + curr_ios.timing); 179 + } 142 180 143 181 /* Platform specific tuning */ 144 182 static inline int msm_dll_poll_ck_out_en(struct sdhci_host *host, u8 poll) ··· 506 464 return 0; 507 465 } 508 466 467 + static void msm_hc_select_default(struct sdhci_host *host) 468 + { 469 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 470 + struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 471 + u32 config; 472 + 473 + if (!msm_host->use_cdclp533) { 474 + config = readl_relaxed(host->ioaddr + 475 + CORE_VENDOR_SPEC3); 476 + config &= ~CORE_PWRSAVE_DLL; 477 + writel_relaxed(config, host->ioaddr + 478 + CORE_VENDOR_SPEC3); 479 + } 480 + 481 + config = readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC); 482 + config &= ~CORE_HC_MCLK_SEL_MASK; 483 + config |= CORE_HC_MCLK_SEL_DFLT; 484 + writel_relaxed(config, host->ioaddr + CORE_VENDOR_SPEC); 485 + 486 + /* 487 + * Disable HC_SELECT_IN to be able to use the UHS mode select 488 + * configuration from Host Control2 register for all other 489 + * modes. 
490 + * Write 0 to HC_SELECT_IN and HC_SELECT_IN_EN field 491 + * in VENDOR_SPEC_FUNC 492 + */ 493 + config = readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC); 494 + config &= ~CORE_HC_SELECT_IN_EN; 495 + config &= ~CORE_HC_SELECT_IN_MASK; 496 + writel_relaxed(config, host->ioaddr + CORE_VENDOR_SPEC); 497 + 498 + /* 499 + * Make sure above writes impacting free running MCLK are completed 500 + * before changing the clk_rate at GCC. 501 + */ 502 + wmb(); 503 + } 504 + 505 + static void msm_hc_select_hs400(struct sdhci_host *host) 506 + { 507 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 508 + struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 509 + struct mmc_ios ios = host->mmc->ios; 510 + u32 config, dll_lock; 511 + int rc; 512 + 513 + /* Select the divided clock (free running MCLK/2) */ 514 + config = readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC); 515 + config &= ~CORE_HC_MCLK_SEL_MASK; 516 + config |= CORE_HC_MCLK_SEL_HS400; 517 + 518 + writel_relaxed(config, host->ioaddr + CORE_VENDOR_SPEC); 519 + /* 520 + * Select HS400 mode using the HC_SELECT_IN from VENDOR SPEC 521 + * register 522 + */ 523 + if ((msm_host->tuning_done || ios.enhanced_strobe) && 524 + !msm_host->calibration_done) { 525 + config = readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC); 526 + config |= CORE_HC_SELECT_IN_HS400; 527 + config |= CORE_HC_SELECT_IN_EN; 528 + writel_relaxed(config, host->ioaddr + CORE_VENDOR_SPEC); 529 + } 530 + if (!msm_host->clk_rate && !msm_host->use_cdclp533) { 531 + /* 532 + * Poll on DLL_LOCK or DDR_DLL_LOCK bits in 533 + * CORE_DLL_STATUS to be set. This should get set 534 + * within 15 us at 200 MHz. 
535 + */ 536 + rc = readl_relaxed_poll_timeout(host->ioaddr + 537 + CORE_DLL_STATUS, 538 + dll_lock, 539 + (dll_lock & 540 + (CORE_DLL_LOCK | 541 + CORE_DDR_DLL_LOCK)), 10, 542 + 1000); 543 + if (rc == -ETIMEDOUT) 544 + pr_err("%s: Unable to get DLL_LOCK/DDR_DLL_LOCK, dll_status: 0x%08x\n", 545 + mmc_hostname(host->mmc), dll_lock); 546 + } 547 + /* 548 + * Make sure above writes impacting free running MCLK are completed 549 + * before changing the clk_rate at GCC. 550 + */ 551 + wmb(); 552 + } 553 + 554 + /* 555 + * sdhci_msm_hc_select_mode :- In general all timing modes are 556 + * controlled via UHS mode select in Host Control2 register. 557 + * eMMC specific HS200/HS400 doesn't have their respective modes 558 + * defined here, hence we use these values. 559 + * 560 + * HS200 - SDR104 (Since they both are equivalent in functionality) 561 + * HS400 - This involves multiple configurations 562 + * Initially SDR104 - when tuning is required as HS200 563 + * Then when switching to DDR @ 400MHz (HS400) we use 564 + * the vendor specific HC_SELECT_IN to control the mode. 565 + * 566 + * In addition to controlling the modes we also need to select the 567 + * correct input clock for DLL depending on the mode. 
568 + * 569 + * HS400 - divided clock (free running MCLK/2) 570 + * All other modes - default (free running MCLK) 571 + */ 572 + void sdhci_msm_hc_select_mode(struct sdhci_host *host) 573 + { 574 + struct mmc_ios ios = host->mmc->ios; 575 + 576 + if (ios.timing == MMC_TIMING_MMC_HS400 || 577 + host->flags & SDHCI_HS400_TUNING) 578 + msm_hc_select_hs400(host); 579 + else 580 + msm_hc_select_default(host); 581 + } 582 + 509 583 static int sdhci_msm_cdclp533_calibration(struct sdhci_host *host) 510 584 { 511 585 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); ··· 664 506 config &= ~CORE_START_CDC_TRAFFIC; 665 507 writel_relaxed(config, host->ioaddr + CORE_DDR_200_CFG); 666 508 667 - /* 668 - * Perform CDC Register Initialization Sequence 669 - * 670 - * CORE_CSR_CDC_CTLR_CFG0 0x11800EC 671 - * CORE_CSR_CDC_CTLR_CFG1 0x3011111 672 - * CORE_CSR_CDC_CAL_TIMER_CFG0 0x1201000 673 - * CORE_CSR_CDC_CAL_TIMER_CFG1 0x4 674 - * CORE_CSR_CDC_REFCOUNT_CFG 0xCB732020 675 - * CORE_CSR_CDC_COARSE_CAL_CFG 0xB19 676 - * CORE_CSR_CDC_DELAY_CFG 0x3AC 677 - * CORE_CDC_OFFSET_CFG 0x0 678 - * CORE_CDC_SLAVE_DDA_CFG 0x16334 679 - */ 509 + /* Perform CDC Register Initialization Sequence */ 680 510 681 511 writel_relaxed(0x11800EC, host->ioaddr + CORE_CSR_CDC_CTLR_CFG0); 682 512 writel_relaxed(0x3011111, host->ioaddr + CORE_CSR_CDC_CTLR_CFG1); ··· 672 526 writel_relaxed(0x4, host->ioaddr + CORE_CSR_CDC_CAL_TIMER_CFG1); 673 527 writel_relaxed(0xCB732020, host->ioaddr + CORE_CSR_CDC_REFCOUNT_CFG); 674 528 writel_relaxed(0xB19, host->ioaddr + CORE_CSR_CDC_COARSE_CAL_CFG); 675 - writel_relaxed(0x3AC, host->ioaddr + CORE_CSR_CDC_DELAY_CFG); 529 + writel_relaxed(0x4E2, host->ioaddr + CORE_CSR_CDC_DELAY_CFG); 676 530 writel_relaxed(0x0, host->ioaddr + CORE_CDC_OFFSET_CFG); 677 531 writel_relaxed(0x16334, host->ioaddr + CORE_CDC_SLAVE_DDA_CFG); 678 532 ··· 725 579 726 580 static int sdhci_msm_cm_dll_sdc4_calibration(struct sdhci_host *host) 727 581 { 582 + struct mmc_host *mmc = host->mmc; 
728 583 u32 dll_status, config; 729 584 int ret; 730 585 ··· 739 592 * values will need to be programmed appropriately. 740 593 */ 741 594 writel_relaxed(DDR_CONFIG_POR_VAL, host->ioaddr + CORE_DDR_CONFIG); 595 + 596 + if (mmc->ios.enhanced_strobe) { 597 + config = readl_relaxed(host->ioaddr + CORE_DDR_200_CFG); 598 + config |= CORE_CMDIN_RCLK_EN; 599 + writel_relaxed(config, host->ioaddr + CORE_DDR_200_CFG); 600 + } 742 601 743 602 config = readl_relaxed(host->ioaddr + CORE_DLL_CONFIG_2); 744 603 config |= CORE_DDR_CAL_EN; ··· 780 627 { 781 628 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 782 629 struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 630 + struct mmc_host *mmc = host->mmc; 783 631 int ret; 784 632 u32 config; 785 633 ··· 794 640 if (ret) 795 641 goto out; 796 642 797 - /* Set the selected phase in delay line hw block */ 798 - ret = msm_config_cm_dll_phase(host, msm_host->saved_tuning_phase); 799 - if (ret) 800 - goto out; 643 + if (!mmc->ios.enhanced_strobe) { 644 + /* Set the selected phase in delay line hw block */ 645 + ret = msm_config_cm_dll_phase(host, 646 + msm_host->saved_tuning_phase); 647 + if (ret) 648 + goto out; 649 + config = readl_relaxed(host->ioaddr + CORE_DLL_CONFIG); 650 + config |= CORE_CMD_DAT_TRACK_SEL; 651 + writel_relaxed(config, host->ioaddr + CORE_DLL_CONFIG); 652 + } 801 653 802 - config = readl_relaxed(host->ioaddr + CORE_DLL_CONFIG); 803 - config |= CORE_CMD_DAT_TRACK_SEL; 804 - writel_relaxed(config, host->ioaddr + CORE_DLL_CONFIG); 805 654 if (msm_host->use_cdclp533) 806 655 ret = sdhci_msm_cdclp533_calibration(host); 807 656 else ··· 815 658 return ret; 816 659 } 817 660 818 - static int sdhci_msm_execute_tuning(struct sdhci_host *host, u32 opcode) 661 + static int sdhci_msm_execute_tuning(struct mmc_host *mmc, u32 opcode) 819 662 { 663 + struct sdhci_host *host = mmc_priv(mmc); 820 664 int tuning_seq_cnt = 3; 821 665 u8 phase, tuned_phases[16], tuned_phase_cnt = 0; 822 666 int rc; 823 - struct 
mmc_host *mmc = host->mmc; 824 667 struct mmc_ios ios = host->mmc->ios; 825 668 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 826 669 struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); ··· 834 677 ios.timing == MMC_TIMING_MMC_HS200 || 835 678 ios.timing == MMC_TIMING_UHS_SDR104)) 836 679 return 0; 680 + 681 + /* 682 + * For HS400 tuning in HS200 timing requires: 683 + * - select MCLK/2 in VENDOR_SPEC 684 + * - program MCLK to 400MHz (or nearest supported) in GCC 685 + */ 686 + if (host->flags & SDHCI_HS400_TUNING) { 687 + sdhci_msm_hc_select_mode(host); 688 + msm_set_clock_rate_for_bus_mode(host, ios.clock); 689 + host->flags &= ~SDHCI_HS400_TUNING; 690 + } 837 691 838 692 retry: 839 693 /* First of all reset the tuning block */ ··· 898 730 if (!rc) 899 731 msm_host->tuning_done = true; 900 732 return rc; 733 + } 734 + 735 + /* 736 + * sdhci_msm_hs400 - Calibrate the DLL for HS400 bus speed mode operation. 737 + * This needs to be done for both tuning and enhanced_strobe mode. 738 + * DLL operation is only needed for clock > 100MHz. For clock <= 100MHz 739 + * fixed feedback clock is used. 
740 + */ 741 + static void sdhci_msm_hs400(struct sdhci_host *host, struct mmc_ios *ios) 742 + { 743 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 744 + struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 745 + int ret; 746 + 747 + if (host->clock > CORE_FREQ_100MHZ && 748 + (msm_host->tuning_done || ios->enhanced_strobe) && 749 + !msm_host->calibration_done) { 750 + ret = sdhci_msm_hs400_dll_calibration(host); 751 + if (!ret) 752 + msm_host->calibration_done = true; 753 + else 754 + pr_err("%s: Failed to calibrate DLL for hs400 mode (%d)\n", 755 + mmc_hostname(host->mmc), ret); 756 + } 901 757 } 902 758 903 759 static void sdhci_msm_set_uhs_signaling(struct sdhci_host *host, ··· 992 800 sdhci_writew(host, ctrl_2, SDHCI_HOST_CONTROL2); 993 801 994 802 spin_unlock_irq(&host->lock); 995 - /* CDCLP533 HW calibration is only required for HS400 mode*/ 996 - if (host->clock > CORE_FREQ_100MHZ && 997 - msm_host->tuning_done && !msm_host->calibration_done && 998 - mmc->ios.timing == MMC_TIMING_MMC_HS400) 999 - if (!sdhci_msm_hs400_dll_calibration(host)) 1000 - msm_host->calibration_done = true; 803 + 804 + if (mmc->ios.timing == MMC_TIMING_MMC_HS400) 805 + sdhci_msm_hs400(host, &mmc->ios); 806 + 1001 807 spin_lock_irq(&host->lock); 1002 808 } 1003 809 ··· 1083 893 { 1084 894 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 1085 895 struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 1086 - struct mmc_ios curr_ios = host->mmc->ios; 1087 - u32 config, dll_lock; 1088 - int rc; 1089 896 1090 897 if (!clock) { 1091 898 msm_host->clk_rate = clock; ··· 1090 903 } 1091 904 1092 905 spin_unlock_irq(&host->lock); 1093 - /* 1094 - * The SDHC requires internal clock frequency to be double the 1095 - * actual clock that will be set for DDR mode. The controller 1096 - * uses the faster clock(100/400MHz) for some of its parts and 1097 - * send the actual required clock (50/200MHz) to the card. 
1098 - */ 1099 - if (curr_ios.timing == MMC_TIMING_UHS_DDR50 || 1100 - curr_ios.timing == MMC_TIMING_MMC_DDR52 || 1101 - curr_ios.timing == MMC_TIMING_MMC_HS400) 1102 - clock *= 2; 1103 - /* 1104 - * In general all timing modes are controlled via UHS mode select in 1105 - * Host Control2 register. eMMC specific HS200/HS400 doesn't have 1106 - * their respective modes defined here, hence we use these values. 1107 - * 1108 - * HS200 - SDR104 (Since they both are equivalent in functionality) 1109 - * HS400 - This involves multiple configurations 1110 - * Initially SDR104 - when tuning is required as HS200 1111 - * Then when switching to DDR @ 400MHz (HS400) we use 1112 - * the vendor specific HC_SELECT_IN to control the mode. 1113 - * 1114 - * In addition to controlling the modes we also need to select the 1115 - * correct input clock for DLL depending on the mode. 1116 - * 1117 - * HS400 - divided clock (free running MCLK/2) 1118 - * All other modes - default (free running MCLK) 1119 - */ 1120 - if (curr_ios.timing == MMC_TIMING_MMC_HS400) { 1121 - /* Select the divided clock (free running MCLK/2) */ 1122 - config = readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC); 1123 - config &= ~CORE_HC_MCLK_SEL_MASK; 1124 - config |= CORE_HC_MCLK_SEL_HS400; 1125 906 1126 - writel_relaxed(config, host->ioaddr + CORE_VENDOR_SPEC); 1127 - /* 1128 - * Select HS400 mode using the HC_SELECT_IN from VENDOR SPEC 1129 - * register 1130 - */ 1131 - if (msm_host->tuning_done && !msm_host->calibration_done) { 1132 - /* 1133 - * Write 0x6 to HC_SELECT_IN and 1 to HC_SELECT_IN_EN 1134 - * field in VENDOR_SPEC_FUNC 1135 - */ 1136 - config = readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC); 1137 - config |= CORE_HC_SELECT_IN_HS400; 1138 - config |= CORE_HC_SELECT_IN_EN; 1139 - writel_relaxed(config, host->ioaddr + CORE_VENDOR_SPEC); 1140 - } 1141 - if (!msm_host->clk_rate && !msm_host->use_cdclp533) { 1142 - /* 1143 - * Poll on DLL_LOCK or DDR_DLL_LOCK bits in 1144 - * CORE_DLL_STATUS to be set. 
This should get set 1145 - * within 15 us at 200 MHz. 1146 - */ 1147 - rc = readl_relaxed_poll_timeout(host->ioaddr + 1148 - CORE_DLL_STATUS, 1149 - dll_lock, 1150 - (dll_lock & 1151 - (CORE_DLL_LOCK | 1152 - CORE_DDR_DLL_LOCK)), 10, 1153 - 1000); 1154 - if (rc == -ETIMEDOUT) 1155 - pr_err("%s: Unable to get DLL_LOCK/DDR_DLL_LOCK, dll_status: 0x%08x\n", 1156 - mmc_hostname(host->mmc), dll_lock); 1157 - } 1158 - } else { 1159 - if (!msm_host->use_cdclp533) { 1160 - config = readl_relaxed(host->ioaddr + 1161 - CORE_VENDOR_SPEC3); 1162 - config &= ~CORE_PWRSAVE_DLL; 1163 - writel_relaxed(config, host->ioaddr + 1164 - CORE_VENDOR_SPEC3); 1165 - } 907 + sdhci_msm_hc_select_mode(host); 1166 908 1167 - config = readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC); 1168 - config &= ~CORE_HC_MCLK_SEL_MASK; 1169 - config |= CORE_HC_MCLK_SEL_DFLT; 1170 - writel_relaxed(config, host->ioaddr + CORE_VENDOR_SPEC); 909 + msm_set_clock_rate_for_bus_mode(host, clock); 1171 910 1172 - /* 1173 - * Disable HC_SELECT_IN to be able to use the UHS mode select 1174 - * configuration from Host Control2 register for all other 1175 - * modes. 1176 - * Write 0 to HC_SELECT_IN and HC_SELECT_IN_EN field 1177 - * in VENDOR_SPEC_FUNC 1178 - */ 1179 - config = readl_relaxed(host->ioaddr + CORE_VENDOR_SPEC); 1180 - config &= ~CORE_HC_SELECT_IN_EN; 1181 - config &= ~CORE_HC_SELECT_IN_MASK; 1182 - writel_relaxed(config, host->ioaddr + CORE_VENDOR_SPEC); 1183 - } 1184 - 1185 - /* 1186 - * Make sure above writes impacting free running MCLK are completed 1187 - * before changing the clk_rate at GCC. 
1188 - */ 1189 - wmb(); 1190 - 1191 - rc = clk_set_rate(msm_host->clk, clock); 1192 - if (rc) { 1193 - pr_err("%s: Failed to set clock at rate %u at timing %d\n", 1194 - mmc_hostname(host->mmc), clock, 1195 - curr_ios.timing); 1196 - goto out_lock; 1197 - } 1198 - msm_host->clk_rate = clock; 1199 - pr_debug("%s: Setting clock at rate %lu at timing %d\n", 1200 - mmc_hostname(host->mmc), clk_get_rate(msm_host->clk), 1201 - curr_ios.timing); 1202 - 1203 - out_lock: 1204 911 spin_lock_irq(&host->lock); 1205 912 out: 1206 913 __sdhci_msm_set_clock(host, clock); ··· 1108 1027 MODULE_DEVICE_TABLE(of, sdhci_msm_dt_match); 1109 1028 1110 1029 static const struct sdhci_ops sdhci_msm_ops = { 1111 - .platform_execute_tuning = sdhci_msm_execute_tuning, 1112 1030 .reset = sdhci_reset, 1113 1031 .set_clock = sdhci_msm_set_clock, 1114 1032 .get_min_clock = sdhci_msm_get_min_clock, ··· 1214 1134 goto clk_disable; 1215 1135 } 1216 1136 1217 - config = readl_relaxed(msm_host->core_mem + CORE_POWER); 1218 - config |= CORE_SW_RST; 1219 - writel_relaxed(config, msm_host->core_mem + CORE_POWER); 1220 - 1221 - /* SW reset can take upto 10HCLK + 15MCLK cycles. (min 40us) */ 1222 - usleep_range(1000, 5000); 1223 - if (readl(msm_host->core_mem + CORE_POWER) & CORE_SW_RST) { 1224 - dev_err(&pdev->dev, "Stuck in reset\n"); 1225 - ret = -ETIMEDOUT; 1226 - goto clk_disable; 1227 - } 1137 + /* Reset the vendor spec register to power on reset state */ 1138 + writel_relaxed(CORE_VENDOR_SPEC_POR_VAL, 1139 + host->ioaddr + CORE_VENDOR_SPEC); 1228 1140 1229 1141 /* Set HC_MODE_EN bit in HC_MODE register */ 1230 1142 writel_relaxed(HC_MODE_EN, (msm_host->core_mem + CORE_HC_MODE)); ··· 1282 1210 MSM_MMC_AUTOSUSPEND_DELAY_MS); 1283 1211 pm_runtime_use_autosuspend(&pdev->dev); 1284 1212 1213 + host->mmc_host_ops.execute_tuning = sdhci_msm_execute_tuning; 1285 1214 ret = sdhci_add_host(host); 1286 1215 if (ret) 1287 1216 goto pm_runtime_disable;
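The HS400 path in the sdhci-msm hunks above polls CORE_DLL_STATUS with readl_relaxed_poll_timeout() until DLL_LOCK or DDR_DLL_LOCK appears, and reports -ETIMEDOUT otherwise. A minimal sketch of that poll-until-bit-set pattern, with a hypothetical reader callback and a fake register standing in for real MMIO:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's readl_relaxed_poll_timeout():
 * poll a "register" through a reader callback until any bit in mask is
 * set, giving up after max_polls attempts (a stand-in for the timeout). */
static int poll_for_bits(uint32_t (*read_reg)(void *ctx), void *ctx,
			 uint32_t mask, int max_polls)
{
	while (max_polls-- > 0) {
		if (read_reg(ctx) & mask)
			return 0;	/* e.g. DLL_LOCK observed */
	}
	return -ETIMEDOUT;		/* lock bit never appeared */
}

/* Fake register that reports the lock bit only after a few reads. */
struct fake_reg {
	int reads_until_lock;
	uint32_t lock_bit;
};

static uint32_t fake_read(void *ctx)
{
	struct fake_reg *r = ctx;

	if (r->reads_until_lock > 0) {
		r->reads_until_lock--;
		return 0;
	}
	return r->lock_bit;
}
```

The real helper additionally sleeps between reads and converts a wall-clock timeout into a poll budget; the shape of the loop is the same.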
+28 -11
drivers/mmc/host/sdhci-of-esdhc.c
··· 431 431 struct sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host); 432 432 int pre_div = 1; 433 433 int div = 1; 434 + u32 timeout; 434 435 u32 temp; 435 436 436 437 host->mmc->actual_clock = 0; ··· 452 451 } 453 452 454 453 temp = sdhci_readl(host, ESDHC_SYSTEM_CONTROL); 455 - temp &= ~(ESDHC_CLOCK_IPGEN | ESDHC_CLOCK_HCKEN | ESDHC_CLOCK_PEREN 456 - | ESDHC_CLOCK_MASK); 454 + temp &= ~(ESDHC_CLOCK_SDCLKEN | ESDHC_CLOCK_IPGEN | ESDHC_CLOCK_HCKEN | 455 + ESDHC_CLOCK_PEREN | ESDHC_CLOCK_MASK); 457 456 sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL); 458 457 459 458 while (host->max_clk / pre_div / 16 > clock && pre_div < 256) ··· 473 472 | (div << ESDHC_DIVIDER_SHIFT) 474 473 | (pre_div << ESDHC_PREDIV_SHIFT)); 475 474 sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL); 476 - mdelay(1); 475 + 476 + /* Wait max 20 ms */ 477 + timeout = 20; 478 + while (!(sdhci_readl(host, ESDHC_PRSSTAT) & ESDHC_CLOCK_STABLE)) { 479 + if (timeout == 0) { 480 + pr_err("%s: Internal clock never stabilised.\n", 481 + mmc_hostname(host->mmc)); 482 + return; 483 + } 484 + timeout--; 485 + mdelay(1); 486 + } 487 + 488 + temp |= ESDHC_CLOCK_SDCLKEN; 489 + sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL); 477 490 } 478 491 479 492 static void esdhc_pltfm_set_bus_width(struct sdhci_host *host, int width) ··· 584 569 }; 585 570 586 571 static const struct sdhci_pltfm_data sdhci_esdhc_be_pdata = { 587 - .quirks = ESDHC_DEFAULT_QUIRKS | SDHCI_QUIRK_BROKEN_CARD_DETECTION 588 - | SDHCI_QUIRK_NO_CARD_NO_RESET 589 - | SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 572 + .quirks = ESDHC_DEFAULT_QUIRKS | 573 + #ifdef CONFIG_PPC 574 + SDHCI_QUIRK_BROKEN_CARD_DETECTION | 575 + #endif 576 + SDHCI_QUIRK_NO_CARD_NO_RESET | 577 + SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 590 578 .ops = &sdhci_esdhc_be_ops, 591 579 }; 592 580 593 581 static const struct sdhci_pltfm_data sdhci_esdhc_le_pdata = { 594 - .quirks = ESDHC_DEFAULT_QUIRKS | SDHCI_QUIRK_BROKEN_CARD_DETECTION 595 - | SDHCI_QUIRK_NO_CARD_NO_RESET 596 - | 
SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 582 + .quirks = ESDHC_DEFAULT_QUIRKS | 583 + SDHCI_QUIRK_NO_CARD_NO_RESET | 584 + SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 597 585 .ops = &sdhci_esdhc_le_ops, 598 586 }; 599 587 ··· 661 643 of_device_is_compatible(np, "fsl,p5020-esdhc") || 662 644 of_device_is_compatible(np, "fsl,p4080-esdhc") || 663 645 of_device_is_compatible(np, "fsl,p1020-esdhc") || 664 - of_device_is_compatible(np, "fsl,t1040-esdhc") || 665 - of_device_is_compatible(np, "fsl,ls1021a-esdhc")) 646 + of_device_is_compatible(np, "fsl,t1040-esdhc")) 666 647 host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION; 667 648 668 649 if (of_device_is_compatible(np, "fsl,ls1021a-esdhc"))
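Before waiting for ESDHC_CLOCK_STABLE, the esdhc_of_set_clock() hunk above searches for a power-of-two prescaler and a linear divider. That search can be sketched on its own (the 16 and 256 bounds come from the loops above; esdhc_pick_dividers is a hypothetical name for illustration):

```c
#include <assert.h>

/* Sketch of the eSDHC divider search: choose a power-of-two prescaler
 * (pre_div <= 256) and a linear divider (div <= 16) so that
 * max_clk / pre_div / div does not exceed the requested clock. */
static void esdhc_pick_dividers(unsigned int max_clk, unsigned int clock,
				int *pre_div_out, int *div_out)
{
	int pre_div = 1;
	int div = 1;

	/* Coarse step: double the prescaler until the finest achievable
	 * rate (divider at its max of 16) is at or below the target. */
	while (max_clk / pre_div / 16 > clock && pre_div < 256)
		pre_div *= 2;

	/* Fine step: walk the linear divider up to the target. */
	while (max_clk / pre_div / div > clock && div < 16)
		div++;

	*pre_div_out = pre_div;
	*div_out = div;
}
```

For example, a 200 MHz base clock and a 50 MHz target resolve to pre_div = 1, div = 4; a 400 kHz identification clock from a 400 MHz base needs the full pre_div = 64, div = 16.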
+92 -5
drivers/mmc/host/sdhci-pci-core.c
··· 424 424 static int byt_sd_probe_slot(struct sdhci_pci_slot *slot) 425 425 { 426 426 slot->host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY; 427 - slot->cd_con_id = NULL; 428 427 slot->cd_idx = 0; 429 428 slot->cd_override_level = true; 430 429 if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BXT_SD || ··· 865 866 AMD_CHIPSET_UNKNOWN, 866 867 }; 867 868 869 + /* AMD registers */ 870 + #define AMD_SD_AUTO_PATTERN 0xB8 871 + #define AMD_MSLEEP_DURATION 4 872 + #define AMD_SD_MISC_CONTROL 0xD0 873 + #define AMD_MAX_TUNE_VALUE 0x0B 874 + #define AMD_AUTO_TUNE_SEL 0x10800 875 + #define AMD_FIFO_PTR 0x30 876 + #define AMD_BIT_MASK 0x1F 877 + 878 + static void amd_tuning_reset(struct sdhci_host *host) 879 + { 880 + unsigned int val; 881 + 882 + val = sdhci_readw(host, SDHCI_HOST_CONTROL2); 883 + val |= SDHCI_CTRL_PRESET_VAL_ENABLE | SDHCI_CTRL_EXEC_TUNING; 884 + sdhci_writew(host, val, SDHCI_HOST_CONTROL2); 885 + 886 + val = sdhci_readw(host, SDHCI_HOST_CONTROL2); 887 + val &= ~SDHCI_CTRL_EXEC_TUNING; 888 + sdhci_writew(host, val, SDHCI_HOST_CONTROL2); 889 + } 890 + 891 + static void amd_config_tuning_phase(struct pci_dev *pdev, u8 phase) 892 + { 893 + unsigned int val; 894 + 895 + pci_read_config_dword(pdev, AMD_SD_AUTO_PATTERN, &val); 896 + val &= ~AMD_BIT_MASK; 897 + val |= (AMD_AUTO_TUNE_SEL | (phase << 1)); 898 + pci_write_config_dword(pdev, AMD_SD_AUTO_PATTERN, val); 899 + } 900 + 901 + static void amd_enable_manual_tuning(struct pci_dev *pdev) 902 + { 903 + unsigned int val; 904 + 905 + pci_read_config_dword(pdev, AMD_SD_MISC_CONTROL, &val); 906 + val |= AMD_FIFO_PTR; 907 + pci_write_config_dword(pdev, AMD_SD_MISC_CONTROL, val); 908 + } 909 + 910 + static int amd_execute_tuning(struct sdhci_host *host, u32 opcode) 911 + { 912 + struct sdhci_pci_slot *slot = sdhci_priv(host); 913 + struct pci_dev *pdev = slot->chip->pdev; 914 + u8 valid_win = 0; 915 + u8 valid_win_max = 0; 916 + u8 valid_win_end = 0; 917 + u8 ctrl, tune_around; 918 + 919 + amd_tuning_reset(host); 
920 + 921 + for (tune_around = 0; tune_around < 12; tune_around++) { 922 + amd_config_tuning_phase(pdev, tune_around); 923 + 924 + if (mmc_send_tuning(host->mmc, opcode, NULL)) { 925 + valid_win = 0; 926 + msleep(AMD_MSLEEP_DURATION); 927 + ctrl = SDHCI_RESET_CMD | SDHCI_RESET_DATA; 928 + sdhci_writeb(host, ctrl, SDHCI_SOFTWARE_RESET); 929 + } else if (++valid_win > valid_win_max) { 930 + valid_win_max = valid_win; 931 + valid_win_end = tune_around; 932 + } 933 + } 934 + 935 + if (!valid_win_max) { 936 + dev_err(&pdev->dev, "no tuning point found\n"); 937 + return -EIO; 938 + } 939 + 940 + amd_config_tuning_phase(pdev, valid_win_end - valid_win_max / 2); 941 + 942 + amd_enable_manual_tuning(pdev); 943 + 944 + host->mmc->retune_period = 0; 945 + 946 + return 0; 947 + } 948 + 868 949 static int amd_probe(struct sdhci_pci_chip *chip) 869 950 { 870 951 struct pci_dev *smbus_dev; ··· 967 888 } 968 889 } 969 890 970 - if ((gen == AMD_CHIPSET_BEFORE_ML) || (gen == AMD_CHIPSET_CZ)) { 891 + if (gen == AMD_CHIPSET_BEFORE_ML || gen == AMD_CHIPSET_CZ) 971 892 chip->quirks2 |= SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD; 972 - chip->quirks2 |= SDHCI_QUIRK2_BROKEN_HS200; 973 - } 974 893 975 894 return 0; 976 895 } 977 896 897 + static const struct sdhci_ops amd_sdhci_pci_ops = { 898 + .set_clock = sdhci_set_clock, 899 + .enable_dma = sdhci_pci_enable_dma, 900 + .set_bus_width = sdhci_pci_set_bus_width, 901 + .reset = sdhci_reset, 902 + .set_uhs_signaling = sdhci_set_uhs_signaling, 903 + .platform_execute_tuning = amd_execute_tuning, 904 + }; 905 + 978 906 static const struct sdhci_pci_fixes sdhci_amd = { 979 907 .probe = amd_probe, 908 + .ops = &amd_sdhci_pci_ops, 980 909 }; 981 910 982 911 static const struct pci_device_id pci_ids[] = { ··· 1904 1817 host->mmc->caps2 |= MMC_CAP2_NO_PRESCAN_POWERUP; 1905 1818 1906 1819 if (slot->cd_idx >= 0) { 1907 - ret = mmc_gpiod_request_cd(host->mmc, slot->cd_con_id, slot->cd_idx, 1820 + ret = mmc_gpiod_request_cd(host->mmc, NULL, 
slot->cd_idx, 1908 1821 slot->cd_override_level, 0, NULL); 1909 1822 if (ret == -EPROBE_DEFER) 1910 1823 goto remove;
-1
drivers/mmc/host/sdhci-pci.h
··· 81 81 int cd_gpio; 82 82 int cd_irq; 83 83 84 - char *cd_con_id; 85 84 int cd_idx; 86 85 bool cd_override_level; 87 86
-87
drivers/mmc/host/sdhci-s3c-regs.h
··· 1 - /* linux/arch/arm/plat-s3c/include/plat/regs-sdhci.h 2 - * 3 - * Copyright 2008 Openmoko, Inc. 4 - * Copyright 2008 Simtec Electronics 5 - * http://armlinux.simtec.co.uk/ 6 - * Ben Dooks <ben@simtec.co.uk> 7 - * 8 - * S3C Platform - SDHCI (HSMMC) register definitions 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License version 2 as 12 - * published by the Free Software Foundation. 13 - */ 14 - 15 - #ifndef __PLAT_S3C_SDHCI_REGS_H 16 - #define __PLAT_S3C_SDHCI_REGS_H __FILE__ 17 - 18 - #define S3C_SDHCI_CONTROL2 (0x80) 19 - #define S3C_SDHCI_CONTROL3 (0x84) 20 - #define S3C64XX_SDHCI_CONTROL4 (0x8C) 21 - 22 - #define S3C64XX_SDHCI_CTRL2_ENSTAASYNCCLR (1 << 31) 23 - #define S3C64XX_SDHCI_CTRL2_ENCMDCNFMSK (1 << 30) 24 - #define S3C_SDHCI_CTRL2_CDINVRXD3 (1 << 29) 25 - #define S3C_SDHCI_CTRL2_SLCARDOUT (1 << 28) 26 - 27 - #define S3C_SDHCI_CTRL2_FLTCLKSEL_MASK (0xf << 24) 28 - #define S3C_SDHCI_CTRL2_FLTCLKSEL_SHIFT (24) 29 - #define S3C_SDHCI_CTRL2_FLTCLKSEL(_x) ((_x) << 24) 30 - 31 - #define S3C_SDHCI_CTRL2_LVLDAT_MASK (0xff << 16) 32 - #define S3C_SDHCI_CTRL2_LVLDAT_SHIFT (16) 33 - #define S3C_SDHCI_CTRL2_LVLDAT(_x) ((_x) << 16) 34 - 35 - #define S3C_SDHCI_CTRL2_ENFBCLKTX (1 << 15) 36 - #define S3C_SDHCI_CTRL2_ENFBCLKRX (1 << 14) 37 - #define S3C_SDHCI_CTRL2_SDCDSEL (1 << 13) 38 - #define S3C_SDHCI_CTRL2_SDSIGPC (1 << 12) 39 - #define S3C_SDHCI_CTRL2_ENBUSYCHKTXSTART (1 << 11) 40 - 41 - #define S3C_SDHCI_CTRL2_DFCNT_MASK (0x3 << 9) 42 - #define S3C_SDHCI_CTRL2_DFCNT_SHIFT (9) 43 - #define S3C_SDHCI_CTRL2_DFCNT_NONE (0x0 << 9) 44 - #define S3C_SDHCI_CTRL2_DFCNT_4SDCLK (0x1 << 9) 45 - #define S3C_SDHCI_CTRL2_DFCNT_16SDCLK (0x2 << 9) 46 - #define S3C_SDHCI_CTRL2_DFCNT_64SDCLK (0x3 << 9) 47 - 48 - #define S3C_SDHCI_CTRL2_ENCLKOUTHOLD (1 << 8) 49 - #define S3C_SDHCI_CTRL2_RWAITMODE (1 << 7) 50 - #define S3C_SDHCI_CTRL2_DISBUFRD (1 << 6) 51 - #define 
S3C_SDHCI_CTRL2_SELBASECLK_MASK (0x3 << 4) 52 - #define S3C_SDHCI_CTRL2_SELBASECLK_SHIFT (4) 53 - #define S3C_SDHCI_CTRL2_PWRSYNC (1 << 3) 54 - #define S3C_SDHCI_CTRL2_ENCLKOUTMSKCON (1 << 1) 55 - #define S3C_SDHCI_CTRL2_HWINITFIN (1 << 0) 56 - 57 - #define S3C_SDHCI_CTRL3_FCSEL3 (1 << 31) 58 - #define S3C_SDHCI_CTRL3_FCSEL2 (1 << 23) 59 - #define S3C_SDHCI_CTRL3_FCSEL1 (1 << 15) 60 - #define S3C_SDHCI_CTRL3_FCSEL0 (1 << 7) 61 - 62 - #define S3C_SDHCI_CTRL3_FIA3_MASK (0x7f << 24) 63 - #define S3C_SDHCI_CTRL3_FIA3_SHIFT (24) 64 - #define S3C_SDHCI_CTRL3_FIA3(_x) ((_x) << 24) 65 - 66 - #define S3C_SDHCI_CTRL3_FIA2_MASK (0x7f << 16) 67 - #define S3C_SDHCI_CTRL3_FIA2_SHIFT (16) 68 - #define S3C_SDHCI_CTRL3_FIA2(_x) ((_x) << 16) 69 - 70 - #define S3C_SDHCI_CTRL3_FIA1_MASK (0x7f << 8) 71 - #define S3C_SDHCI_CTRL3_FIA1_SHIFT (8) 72 - #define S3C_SDHCI_CTRL3_FIA1(_x) ((_x) << 8) 73 - 74 - #define S3C_SDHCI_CTRL3_FIA0_MASK (0x7f << 0) 75 - #define S3C_SDHCI_CTRL3_FIA0_SHIFT (0) 76 - #define S3C_SDHCI_CTRL3_FIA0(_x) ((_x) << 0) 77 - 78 - #define S3C64XX_SDHCI_CONTROL4_DRIVE_MASK (0x3 << 16) 79 - #define S3C64XX_SDHCI_CONTROL4_DRIVE_SHIFT (16) 80 - #define S3C64XX_SDHCI_CONTROL4_DRIVE_2mA (0x0 << 16) 81 - #define S3C64XX_SDHCI_CONTROL4_DRIVE_4mA (0x1 << 16) 82 - #define S3C64XX_SDHCI_CONTROL4_DRIVE_7mA (0x2 << 16) 83 - #define S3C64XX_SDHCI_CONTROL4_DRIVE_9mA (0x3 << 16) 84 - 85 - #define S3C64XX_SDHCI_CONTROL4_BUSY (1) 86 - 87 - #endif /* __PLAT_S3C_SDHCI_REGS_H */
+70 -1
drivers/mmc/host/sdhci-s3c.c
··· 29 29 30 30 #include <linux/mmc/host.h> 31 31 32 - #include "sdhci-s3c-regs.h" 33 32 #include "sdhci.h" 34 33 35 34 #define MAX_BUS_CLK (4) 35 + 36 + #define S3C_SDHCI_CONTROL2 (0x80) 37 + #define S3C_SDHCI_CONTROL3 (0x84) 38 + #define S3C64XX_SDHCI_CONTROL4 (0x8C) 39 + 40 + #define S3C64XX_SDHCI_CTRL2_ENSTAASYNCCLR BIT(31) 41 + #define S3C64XX_SDHCI_CTRL2_ENCMDCNFMSK BIT(30) 42 + #define S3C_SDHCI_CTRL2_CDINVRXD3 BIT(29) 43 + #define S3C_SDHCI_CTRL2_SLCARDOUT BIT(28) 44 + 45 + #define S3C_SDHCI_CTRL2_FLTCLKSEL_MASK (0xf << 24) 46 + #define S3C_SDHCI_CTRL2_FLTCLKSEL_SHIFT (24) 47 + #define S3C_SDHCI_CTRL2_FLTCLKSEL(_x) ((_x) << 24) 48 + 49 + #define S3C_SDHCI_CTRL2_LVLDAT_MASK (0xff << 16) 50 + #define S3C_SDHCI_CTRL2_LVLDAT_SHIFT (16) 51 + #define S3C_SDHCI_CTRL2_LVLDAT(_x) ((_x) << 16) 52 + 53 + #define S3C_SDHCI_CTRL2_ENFBCLKTX BIT(15) 54 + #define S3C_SDHCI_CTRL2_ENFBCLKRX BIT(14) 55 + #define S3C_SDHCI_CTRL2_SDCDSEL BIT(13) 56 + #define S3C_SDHCI_CTRL2_SDSIGPC BIT(12) 57 + #define S3C_SDHCI_CTRL2_ENBUSYCHKTXSTART BIT(11) 58 + 59 + #define S3C_SDHCI_CTRL2_DFCNT_MASK (0x3 << 9) 60 + #define S3C_SDHCI_CTRL2_DFCNT_SHIFT (9) 61 + #define S3C_SDHCI_CTRL2_DFCNT_NONE (0x0 << 9) 62 + #define S3C_SDHCI_CTRL2_DFCNT_4SDCLK (0x1 << 9) 63 + #define S3C_SDHCI_CTRL2_DFCNT_16SDCLK (0x2 << 9) 64 + #define S3C_SDHCI_CTRL2_DFCNT_64SDCLK (0x3 << 9) 65 + 66 + #define S3C_SDHCI_CTRL2_ENCLKOUTHOLD BIT(8) 67 + #define S3C_SDHCI_CTRL2_RWAITMODE BIT(7) 68 + #define S3C_SDHCI_CTRL2_DISBUFRD BIT(6) 69 + 70 + #define S3C_SDHCI_CTRL2_SELBASECLK_MASK (0x3 << 4) 71 + #define S3C_SDHCI_CTRL2_SELBASECLK_SHIFT (4) 72 + #define S3C_SDHCI_CTRL2_PWRSYNC BIT(3) 73 + #define S3C_SDHCI_CTRL2_ENCLKOUTMSKCON BIT(1) 74 + #define S3C_SDHCI_CTRL2_HWINITFIN BIT(0) 75 + 76 + #define S3C_SDHCI_CTRL3_FCSEL3 BIT(31) 77 + #define S3C_SDHCI_CTRL3_FCSEL2 BIT(23) 78 + #define S3C_SDHCI_CTRL3_FCSEL1 BIT(15) 79 + #define S3C_SDHCI_CTRL3_FCSEL0 BIT(7) 80 + 81 + #define S3C_SDHCI_CTRL3_FIA3_MASK (0x7f << 24) 82 + 
#define S3C_SDHCI_CTRL3_FIA3_SHIFT (24) 83 + #define S3C_SDHCI_CTRL3_FIA3(_x) ((_x) << 24) 84 + 85 + #define S3C_SDHCI_CTRL3_FIA2_MASK (0x7f << 16) 86 + #define S3C_SDHCI_CTRL3_FIA2_SHIFT (16) 87 + #define S3C_SDHCI_CTRL3_FIA2(_x) ((_x) << 16) 88 + 89 + #define S3C_SDHCI_CTRL3_FIA1_MASK (0x7f << 8) 90 + #define S3C_SDHCI_CTRL3_FIA1_SHIFT (8) 91 + #define S3C_SDHCI_CTRL3_FIA1(_x) ((_x) << 8) 92 + 93 + #define S3C_SDHCI_CTRL3_FIA0_MASK (0x7f << 0) 94 + #define S3C_SDHCI_CTRL3_FIA0_SHIFT (0) 95 + #define S3C_SDHCI_CTRL3_FIA0(_x) ((_x) << 0) 96 + 97 + #define S3C64XX_SDHCI_CONTROL4_DRIVE_MASK (0x3 << 16) 98 + #define S3C64XX_SDHCI_CONTROL4_DRIVE_SHIFT (16) 99 + #define S3C64XX_SDHCI_CONTROL4_DRIVE_2mA (0x0 << 16) 100 + #define S3C64XX_SDHCI_CONTROL4_DRIVE_4mA (0x1 << 16) 101 + #define S3C64XX_SDHCI_CONTROL4_DRIVE_7mA (0x2 << 16) 102 + #define S3C64XX_SDHCI_CONTROL4_DRIVE_9mA (0x3 << 16) 103 + 104 + #define S3C64XX_SDHCI_CONTROL4_BUSY (1) 36 105 37 106 /** 38 107 * struct sdhci_s3c - S3C SDHCI instance
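The register header above is folded into sdhci-s3c.c with the `(1 << n)` spellings rewritten as `BIT(n)`. One reason that conversion matters: BIT() (definition assumed here, matching the kernel's include/linux/bits.h) shifts an unsigned long, so bit 31 stays well-defined where the old `(1 << 31)` overflowed a signed int:

```c
#include <assert.h>

/* Assumed definition, matching the kernel's include/linux/bits.h:
 * shifting 1UL instead of a plain (signed) 1 keeps BIT(31) defined,
 * whereas the header's old (1 << 31) overflowed a signed int. */
#define BIT(nr) (1UL << (nr))

/* Two of the converted register bits from the hunk above. */
#define S3C_SDHCI_CTRL2_HWINITFIN		BIT(0)
#define S3C64XX_SDHCI_CTRL2_ENSTAASYNCCLR	BIT(31)
```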
+6 -4
drivers/mmc/host/sdhci.c
··· 2021 2021 unsigned long flags) 2022 2022 { 2023 2023 struct mmc_host *mmc = host->mmc; 2024 - struct mmc_command cmd = {0}; 2025 - struct mmc_request mrq = {NULL}; 2024 + struct mmc_command cmd = {}; 2025 + struct mmc_request mrq = {}; 2026 2026 2027 2027 cmd.opcode = opcode; 2028 2028 cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC; ··· 2114 2114 spin_lock_irqsave(&host->lock, flags); 2115 2115 2116 2116 hs400_tuning = host->flags & SDHCI_HS400_TUNING; 2117 - host->flags &= ~SDHCI_HS400_TUNING; 2118 2117 2119 2118 if (host->tuning_mode == SDHCI_TUNING_MODE_1) 2120 2119 tuning_count = host->tuning_count; ··· 2155 2156 2156 2157 if (host->ops->platform_execute_tuning) { 2157 2158 spin_unlock_irqrestore(&host->lock, flags); 2158 - return host->ops->platform_execute_tuning(host, opcode); 2159 + err = host->ops->platform_execute_tuning(host, opcode); 2160 + spin_lock_irqsave(&host->lock, flags); 2161 + goto out_unlock; 2159 2162 } 2160 2163 2161 2164 host->mmc->retune_period = tuning_count; ··· 2168 2167 2169 2168 sdhci_end_tuning(host); 2170 2169 out_unlock: 2170 + host->flags &= ~SDHCI_HS400_TUNING; 2171 2171 spin_unlock_irqrestore(&host->lock, flags); 2172 2172 2173 2173 return err;
+2
drivers/mmc/host/sdhci.h
··· 17 17 #include <linux/compiler.h> 18 18 #include <linux/types.h> 19 19 #include <linux/io.h> 20 + #include <linux/leds.h> 21 + #include <linux/interrupt.h> 20 22 21 23 #include <linux/mmc/host.h> 22 24
+3 -25
drivers/mmc/host/sh_mmcif.c
··· 1079 1079 host->state = STATE_IDLE; 1080 1080 } 1081 1081 1082 - static int sh_mmcif_get_cd(struct mmc_host *mmc) 1083 - { 1084 - struct sh_mmcif_host *host = mmc_priv(mmc); 1085 - struct device *dev = sh_mmcif_host_to_dev(host); 1086 - struct sh_mmcif_plat_data *p = dev->platform_data; 1087 - int ret = mmc_gpio_get_cd(mmc); 1088 - 1089 - if (ret >= 0) 1090 - return ret; 1091 - 1092 - if (!p || !p->get_cd) 1093 - return -ENOSYS; 1094 - else 1095 - return p->get_cd(host->pd); 1096 - } 1097 - 1098 1082 static struct mmc_host_ops sh_mmcif_ops = { 1099 1083 .request = sh_mmcif_request, 1100 1084 .set_ios = sh_mmcif_set_ios, 1101 - .get_cd = sh_mmcif_get_cd, 1085 + .get_cd = mmc_gpio_get_cd, 1102 1086 }; 1103 1087 1104 1088 static bool sh_mmcif_end_cmd(struct sh_mmcif_host *host) ··· 1427 1443 host->mmc = mmc; 1428 1444 host->addr = reg; 1429 1445 host->timeout = msecs_to_jiffies(10000); 1430 - host->ccs_enable = !pd || !pd->ccs_unsupported; 1431 - host->clk_ctrl2_enable = pd && pd->clk_ctrl2_present; 1446 + host->ccs_enable = true; 1447 + host->clk_ctrl2_enable = false; 1432 1448 1433 1449 host->pd = pdev; 1434 1450 ··· 1491 1507 dev_err(dev, "request_irq error (sh_mmc:int)\n"); 1492 1508 goto err_clk; 1493 1509 } 1494 - } 1495 - 1496 - if (pd && pd->use_cd_gpio) { 1497 - ret = mmc_gpio_request_cd(mmc, pd->cd_gpio, 0); 1498 - if (ret < 0) 1499 - goto err_clk; 1500 1510 } 1501 1511 1502 1512 mutex_init(&host->thread_lock);
+51 -46
drivers/mmc/host/sh_mobile_sdhi.c
··· 143 143 144 144 struct sh_mobile_sdhi { 145 145 struct clk *clk; 146 + struct clk *clk_cd; 146 147 struct tmio_mmc_data mmc_data; 147 148 struct tmio_mmc_dma dma_priv; 148 149 struct pinctrl *pinctrl; ··· 190 189 int ret = clk_prepare_enable(priv->clk); 191 190 if (ret < 0) 192 191 return ret; 192 + 193 + ret = clk_prepare_enable(priv->clk_cd); 194 + if (ret < 0) { 195 + clk_disable_unprepare(priv->clk); 196 + return ret; 197 + } 193 198 194 199 /* 195 200 * The clock driver may not know what maximum frequency ··· 262 255 struct sh_mobile_sdhi *priv = host_to_priv(host); 263 256 264 257 clk_disable_unprepare(priv->clk); 258 + clk_disable_unprepare(priv->clk_cd); 265 259 } 266 260 267 261 static int sh_mobile_sdhi_card_busy(struct mmc_host *mmc) ··· 342 334 static unsigned int sh_mobile_sdhi_init_tuning(struct tmio_mmc_host *host) 343 335 { 344 336 struct sh_mobile_sdhi *priv; 345 - 346 - if (!(host->mmc->caps & MMC_CAP_UHS_SDR104)) 347 - return 0; 348 337 349 338 priv = host_to_priv(host); 350 339 ··· 449 444 450 445 static bool sh_mobile_sdhi_check_scc_error(struct tmio_mmc_host *host) 451 446 { 452 - struct sh_mobile_sdhi *priv; 453 - 454 - if (!(host->mmc->caps & MMC_CAP_UHS_SDR104)) 455 - return 0; 456 - 457 - priv = host_to_priv(host); 447 + struct sh_mobile_sdhi *priv = host_to_priv(host); 458 448 459 449 /* Check SCC error */ 460 450 if (sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL) & ··· 467 467 static void sh_mobile_sdhi_hw_reset(struct tmio_mmc_host *host) 468 468 { 469 469 struct sh_mobile_sdhi *priv; 470 - 471 - if (!(host->mmc->caps & MMC_CAP_UHS_SDR104)) 472 - return; 473 470 474 471 priv = host_to_priv(host); 475 472 ··· 553 556 554 557 static int sh_mobile_sdhi_probe(struct platform_device *pdev) 555 558 { 556 - const struct of_device_id *of_id = 557 - of_match_device(sh_mobile_sdhi_of_match, &pdev->dev); 559 + const struct sh_mobile_sdhi_of_data *of_data = of_device_get_match_data(&pdev->dev); 558 560 struct sh_mobile_sdhi *priv; 559 
561 struct tmio_mmc_data *mmc_data; 560 562 struct tmio_mmc_data *mmd = pdev->dev.platform_data; ··· 580 584 goto eprobe; 581 585 } 582 586 587 + /* 588 + * Some controllers provide a 2nd clock just to run the internal card 589 + * detection logic. Unfortunately, the existing driver architecture does 590 + * not support a separation of clocks for runtime PM usage. When 591 + * native hotplug is used, the tmio driver assumes that the core 592 + * must continue to run for card detect to stay active, so we cannot 593 + * disable it. 594 + * Additionally, it is prohibited to supply a clock to the core but not 595 + * to the card detect circuit. That leaves us with: if separate clocks 596 + * are presented, we must treat them both as virtually 1 clock. 597 + */ 598 + priv->clk_cd = devm_clk_get(&pdev->dev, "cd"); 599 + if (IS_ERR(priv->clk_cd)) 600 + priv->clk_cd = NULL; 601 + 583 602 priv->pinctrl = devm_pinctrl_get(&pdev->dev); 584 603 if (!IS_ERR(priv->pinctrl)) { 585 604 priv->pins_default = pinctrl_lookup_state(priv->pinctrl, ··· 609 598 goto eprobe; 610 599 } 611 600 612 - if (of_id && of_id->data) { 613 - const struct sh_mobile_sdhi_of_data *of_data = of_id->data; 614 601 602 + if (of_data) { 615 603 mmc_data->flags |= of_data->tmio_flags; 616 604 mmc_data->ocr_mask = of_data->tmio_ocr_mask; 617 605 mmc_data->capabilities |= of_data->capabilities; ··· 633 623 host->card_busy = sh_mobile_sdhi_card_busy; 634 624 host->start_signal_voltage_switch = 635 625 sh_mobile_sdhi_start_signal_voltage_switch; 636 - host->init_tuning = sh_mobile_sdhi_init_tuning; 637 - host->prepare_tuning = sh_mobile_sdhi_prepare_tuning; 638 - host->select_tuning = sh_mobile_sdhi_select_tuning; 639 - host->check_scc_error = sh_mobile_sdhi_check_scc_error; 640 - host->hw_reset = sh_mobile_sdhi_hw_reset; 641 626 } 642 627 643 628 /* Originally registers were 16 bit apart, could be 32 or 64 nowadays */ ··· 664 659 */ 665 660 mmc_data->flags |= TMIO_MMC_HAVE_CMD12_CTRL; 666 661 667 - /* 668 - * All
SDHI need SDIO_INFO1 reserved bit 669 - */ 670 - mmc_data->flags |= TMIO_MMC_SDIO_STATUS_QUIRK; 662 + /* All SDHI have SDIO status bits which must be 1 */ 663 + mmc_data->flags |= TMIO_MMC_SDIO_STATUS_SETBITS; 671 664 672 665 ret = tmio_mmc_host_probe(host, mmc_data); 673 666 if (ret < 0) 674 667 goto efree; 675 668 676 - if (host->mmc->caps & MMC_CAP_UHS_SDR104) { 669 + /* Enable tuning iff we have an SCC and a supported mode */ 670 + if (of_data && of_data->scc_offset && 671 + (host->mmc->caps & MMC_CAP_UHS_SDR104 || 672 + host->mmc->caps2 & MMC_CAP2_HS200_1_8V_SDR)) { 673 + const struct sh_mobile_sdhi_scc *taps = of_data->taps; 674 + bool hit = false; 675 + 677 676 host->mmc->caps |= MMC_CAP_HW_RESET; 678 677 679 - if (of_id && of_id->data) { 680 - const struct sh_mobile_sdhi_of_data *of_data; 681 - const struct sh_mobile_sdhi_scc *taps; 682 - bool hit = false; 683 - 684 - of_data = of_id->data; 685 - taps = of_data->taps; 686 - 687 - for (i = 0; i < of_data->taps_num; i++) { 688 - if (taps[i].clk_rate == 0 || 689 - taps[i].clk_rate == host->mmc->f_max) { 690 - host->scc_tappos = taps->tap; 691 - hit = true; 692 - break; 693 - } 678 + for (i = 0; i < of_data->taps_num; i++) { 679 + if (taps[i].clk_rate == 0 || 680 + taps[i].clk_rate == host->mmc->f_max) { 681 + host->scc_tappos = taps->tap; 682 + hit = true; 683 + break; 694 684 } 695 - 696 - if (!hit) 697 - dev_warn(&host->pdev->dev, "Unknown clock rate for SDR104\n"); 698 - 699 - priv->scc_ctl = host->ctl + of_data->scc_offset; 700 685 } 686 + 687 + if (!hit) 688 + dev_warn(&host->pdev->dev, "Unknown clock rate for SDR104\n"); 689 + 690 + priv->scc_ctl = host->ctl + of_data->scc_offset; 691 + host->init_tuning = sh_mobile_sdhi_init_tuning; 692 + host->prepare_tuning = sh_mobile_sdhi_prepare_tuning; 693 + host->select_tuning = sh_mobile_sdhi_select_tuning; 694 + host->check_scc_error = sh_mobile_sdhi_check_scc_error; 695 + host->hw_reset = sh_mobile_sdhi_hw_reset; 701 696 } 702 697 703 698 i = 0;
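The SDR104/HS200 tap search above treats a clk_rate of 0 as a wildcard entry matching any bus frequency. Isolated as a sketch (sdhi_find_tap and its return convention are hypothetical; the driver itself just warns about an unknown clock rate and carries on):

```c
#include <assert.h>
#include <stdint.h>

struct sdhi_scc {
	unsigned long clk_rate;	/* 0 = wildcard entry */
	uint32_t tap;
};

/* Hypothetical helper mirroring the probe loop above: find the tap
 * value for the bus clock f_max, honouring the wildcard entry. */
static int sdhi_find_tap(const struct sdhi_scc *taps, int ntaps,
			 unsigned long f_max, uint32_t *tap_out)
{
	int i;

	for (i = 0; i < ntaps; i++) {
		if (taps[i].clk_rate == 0 || taps[i].clk_rate == f_max) {
			*tap_out = taps[i].tap;
			return 0;
		}
	}
	return -1;	/* the "Unknown clock rate for SDR104" case */
}
```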
+71 -43
drivers/mmc/host/sunxi-mmc.c
··· 5 5 * (C) Copyright 2013-2014 O2S GmbH <www.o2s.ch> 6 6 * (C) Copyright 2013-2014 David Lanzendörfer <david.lanzendoerfer@o2s.ch> 7 7 * (C) Copyright 2013-2014 Hans de Goede <hdegoede@redhat.com> 8 + * (C) Copyright 2017 Sootech SA 8 9 * 9 10 * This program is free software; you can redistribute it and/or 10 11 * modify it under the terms of the GNU General Public License as ··· 102 101 (SDXC_SOFT_RESET | SDXC_FIFO_RESET | SDXC_DMA_RESET) 103 102 104 103 /* clock control bits */ 104 + #define SDXC_MASK_DATA0 BIT(31) 105 105 #define SDXC_CARD_CLOCK_ON BIT(16) 106 106 #define SDXC_LOW_POWER_ON BIT(17) 107 107 ··· 255 253 256 254 /* does the IP block support autocalibration? */ 257 255 bool can_calibrate; 256 + 257 + /* Does DATA0 need to be masked while the clock is updated */ 258 + bool mask_data0; 259 + 260 + bool needs_new_timings; 258 261 }; 259 262 260 263 struct sunxi_mmc_host { ··· 661 654 unsigned long expire = jiffies + msecs_to_jiffies(750); 662 655 u32 rval; 663 656 657 + dev_dbg(mmc_dev(host->mmc), "%sabling the clock\n", 658 + oclk_en ? 
"en" : "dis"); 659 + 664 660 rval = mmc_readl(host, REG_CLKCR); 665 - rval &= ~(SDXC_CARD_CLOCK_ON | SDXC_LOW_POWER_ON); 661 + rval &= ~(SDXC_CARD_CLOCK_ON | SDXC_LOW_POWER_ON | SDXC_MASK_DATA0); 666 662 667 663 if (oclk_en) 668 664 rval |= SDXC_CARD_CLOCK_ON; 665 + if (host->cfg->mask_data0) 666 + rval |= SDXC_MASK_DATA0; 669 667 670 668 mmc_writel(host, REG_CLKCR, rval); 671 669 ··· 690 678 return -EIO; 691 679 } 692 680 681 + if (host->cfg->mask_data0) { 682 + rval = mmc_readl(host, REG_CLKCR); 683 + mmc_writel(host, REG_CLKCR, rval & ~SDXC_MASK_DATA0); 684 + } 685 + 693 686 return 0; 694 687 } 695 688 696 689 static int sunxi_mmc_calibrate(struct sunxi_mmc_host *host, int reg_off) 697 690 { 698 - u32 reg = readl(host->reg_base + reg_off); 699 - u32 delay; 700 - unsigned long timeout; 701 - 702 691 if (!host->cfg->can_calibrate) 703 692 return 0; 704 693 705 - reg &= ~(SDXC_CAL_DL_MASK << SDXC_CAL_DL_SW_SHIFT); 706 - reg &= ~SDXC_CAL_DL_SW_EN; 707 - 708 - writel(reg | SDXC_CAL_START, host->reg_base + reg_off); 709 - 710 - dev_dbg(mmc_dev(host->mmc), "calibration started\n"); 711 - 712 - timeout = jiffies + HZ * SDXC_CAL_TIMEOUT; 713 - 714 - while (!((reg = readl(host->reg_base + reg_off)) & SDXC_CAL_DONE)) { 715 - if (time_before(jiffies, timeout)) 716 - cpu_relax(); 717 - else { 718 - reg &= ~SDXC_CAL_START; 719 - writel(reg, host->reg_base + reg_off); 720 - 721 - return -ETIMEDOUT; 722 - } 723 - } 724 - 725 - delay = (reg >> SDXC_CAL_DL_SHIFT) & SDXC_CAL_DL_MASK; 726 - 727 - reg &= ~SDXC_CAL_START; 728 - reg |= (delay << SDXC_CAL_DL_SW_SHIFT) | SDXC_CAL_DL_SW_EN; 729 - 730 - writel(reg, host->reg_base + reg_off); 731 - 732 - dev_dbg(mmc_dev(host->mmc), "calibration ended, reg is 0x%x\n", reg); 694 + /* 695 + * FIXME: 696 + * This is not clear how the calibration is supposed to work 697 + * yet. The best rate have been obtained by simply setting the 698 + * delay to 0, as Allwinner does in its BSP. 
699 + * 700 + * The only mode that doesn't have such a delay is HS400, that 701 + * is in itself a TODO. 702 + */ 703 + writel(SDXC_CAL_DL_SW_EN, host->reg_base + reg_off); 733 704 734 705 return 0; 735 706 } ··· 740 745 index = SDXC_CLK_50M_DDR; 741 746 } 742 747 } else { 748 + dev_dbg(mmc_dev(host->mmc), "Invalid clock... returning\n"); 743 749 return -EINVAL; 744 750 } 745 751 ··· 753 757 static int sunxi_mmc_clk_set_rate(struct sunxi_mmc_host *host, 754 758 struct mmc_ios *ios) 755 759 { 760 + struct mmc_host *mmc = host->mmc; 756 761 long rate; 757 762 u32 rval, clock = ios->clock; 758 763 int ret; 764 + 765 + ret = sunxi_mmc_oclk_onoff(host, 0); 766 + if (ret) 767 + return ret; 768 + 769 + /* Our clock is gated now */ 770 + mmc->actual_clock = 0; 771 + 772 + if (!ios->clock) 773 + return 0; 759 774 760 775 /* 8 bit DDR requires a higher module clock */ 761 776 if (ios->timing == MMC_TIMING_MMC_DDR52 && ··· 775 768 776 769 rate = clk_round_rate(host->clk_mmc, clock); 777 770 if (rate < 0) { 778 - dev_err(mmc_dev(host->mmc), "error rounding clk to %d: %ld\n", 771 + dev_err(mmc_dev(mmc), "error rounding clk to %d: %ld\n", 779 772 clock, rate); 780 773 return rate; 781 774 } 782 - dev_dbg(mmc_dev(host->mmc), "setting clk to %d, rounded %ld\n", 775 + dev_dbg(mmc_dev(mmc), "setting clk to %d, rounded %ld\n", 783 776 clock, rate); 784 777 785 778 /* setting clock rate */ 786 779 ret = clk_set_rate(host->clk_mmc, rate); 787 780 if (ret) { 788 - dev_err(mmc_dev(host->mmc), "error setting clk to %ld: %d\n", 781 + dev_err(mmc_dev(mmc), "error setting clk to %ld: %d\n", 789 782 rate, ret); 790 783 return ret; 791 784 } 792 - 793 - ret = sunxi_mmc_oclk_onoff(host, 0); 794 - if (ret) 795 - return ret; 796 785 797 786 /* clear internal divider */ 798 787 rval = mmc_readl(host, REG_CLKCR); ··· 801 798 } 802 799 mmc_writel(host, REG_CLKCR, rval); 803 800 801 + if (host->cfg->needs_new_timings) 802 + mmc_writel(host, REG_SD_NTSR, SDXC_2X_TIMING_MODE); 803 + 804 804 ret = 
sunxi_mmc_clk_set_phase(host, ios, rate); 805 805 if (ret) 806 806 return ret; ··· 812 806 if (ret) 813 807 return ret; 814 808 815 - /* TODO: enable calibrate on sdc2 SDXC_REG_DS_DL_REG of A64 */ 809 + /* 810 + * FIXME: 811 + * 812 + * In HS400 we'll also need to calibrate the data strobe 813 + * signal. This should only happen on the MMC2 controller (at 814 + * least on the A64). 815 + */ 816 816 817 - return sunxi_mmc_oclk_onoff(host, 1); 817 + ret = sunxi_mmc_oclk_onoff(host, 1); 818 + if (ret) 819 + return ret; 820 + 821 + /* And we just enabled our clock back */ 822 + mmc->actual_clock = rate; 823 + 824 + return 0; 818 825 } 819 826 820 827 static void sunxi_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) ··· 901 882 mmc_writel(host, REG_GCTRL, rval); 902 883 903 884 /* set up clock */ 904 - if (ios->clock && ios->power_mode) { 885 + if (ios->power_mode) { 905 886 host->ferror = sunxi_mmc_clk_set_rate(host, ios); 906 887 /* Android code had a usleep_range(50000, 55000); here */ 907 888 } ··· 1108 1089 .idma_des_size_bits = 16, 1109 1090 .clk_delays = NULL, 1110 1091 .can_calibrate = true, 1092 + .mask_data0 = true, 1093 + .needs_new_timings = true, 1094 + }; 1095 + 1096 + static const struct sunxi_mmc_cfg sun50i_a64_emmc_cfg = { 1097 + .idma_des_size_bits = 13, 1098 + .clk_delays = NULL, 1099 + .can_calibrate = true, 1111 1100 }; 1112 1101 1113 1102 static const struct of_device_id sunxi_mmc_of_match[] = { ··· 1124 1097 { .compatible = "allwinner,sun7i-a20-mmc", .data = &sun7i_a20_cfg }, 1125 1098 { .compatible = "allwinner,sun9i-a80-mmc", .data = &sun9i_a80_cfg }, 1126 1099 { .compatible = "allwinner,sun50i-a64-mmc", .data = &sun50i_a64_cfg }, 1100 + { .compatible = "allwinner,sun50i-a64-emmc", .data = &sun50i_a64_emmc_cfg }, 1127 1101 { /* sentinel */ } 1128 1102 }; 1129 1103 MODULE_DEVICE_TABLE(of, sunxi_mmc_of_match);
+3
drivers/mmc/host/tmio_mmc.h
···
 #include <linux/pagemap.h>
 #include <linux/scatterlist.h>
 #include <linux/spinlock.h>
+#include <linux/interrupt.h>
 
 #define CTL_SD_CMD 0x00
 #define CTL_ARG_REG 0x04
···
 #define TMIO_SDIO_STAT_EXPUB52	0x4000
 #define TMIO_SDIO_STAT_EXWT	0x8000
 #define TMIO_SDIO_MASK_ALL	0xc007
+
+#define TMIO_SDIO_SETBITS_MASK	0x0006
 
 /* Define some IRQ masks */
 /* This is the mask used at reset by the chip */
+34 -27
drivers/mmc/host/tmio_mmc_pio.c
···
 	struct tmio_mmc_host *host = mmc_priv(mmc);
 
 	if (enable && !host->sdio_irq_enabled) {
+		u16 sdio_status;
+
 		/* Keep device active while SDIO irq is enabled */
 		pm_runtime_get_sync(mmc_dev(mmc));
-		host->sdio_irq_enabled = true;
 
+		host->sdio_irq_enabled = true;
 		host->sdio_irq_mask = TMIO_SDIO_MASK_ALL &
 					~TMIO_SDIO_STAT_IOIRQ;
-		sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0001);
+
+		/* Clear obsolete interrupts before enabling */
+		sdio_status = sd_ctrl_read16(host, CTL_SDIO_STATUS) & ~TMIO_SDIO_MASK_ALL;
+		if (host->pdata->flags & TMIO_MMC_SDIO_STATUS_SETBITS)
+			sdio_status |= TMIO_SDIO_SETBITS_MASK;
+		sd_ctrl_write16(host, CTL_SDIO_STATUS, sdio_status);
+
 		sd_ctrl_write16(host, CTL_SDIO_IRQ_MASK, host->sdio_irq_mask);
 	} else if (!enable && host->sdio_irq_enabled) {
 		host->sdio_irq_mask = TMIO_SDIO_MASK_ALL;
 		sd_ctrl_write16(host, CTL_SDIO_IRQ_MASK, host->sdio_irq_mask);
-		sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0000);
 
 		host->sdio_irq_enabled = false;
 		pm_runtime_mark_last_busy(mmc_dev(mmc));
···
 	return false;
 }
 
-static void tmio_mmc_sdio_irq(int irq, void *devid)
+static void __tmio_mmc_sdio_irq(struct tmio_mmc_host *host)
 {
-	struct tmio_mmc_host *host = devid;
 	struct mmc_host *mmc = host->mmc;
 	struct tmio_mmc_data *pdata = host->pdata;
 	unsigned int ireg, status;
···
 	ireg = status & TMIO_SDIO_MASK_ALL & ~host->sdio_irq_mask;
 
 	sdio_status = status & ~TMIO_SDIO_MASK_ALL;
-	if (pdata->flags & TMIO_MMC_SDIO_STATUS_QUIRK)
-		sdio_status |= 6;
+	if (pdata->flags & TMIO_MMC_SDIO_STATUS_SETBITS)
+		sdio_status |= TMIO_SDIO_SETBITS_MASK;
 
 	sd_ctrl_write16(host, CTL_SDIO_STATUS, sdio_status);
···
 	if (__tmio_mmc_sdcard_irq(host, ireg, status))
 		return IRQ_HANDLED;
 
-	tmio_mmc_sdio_irq(irq, devid);
+	__tmio_mmc_sdio_irq(host);
 
 	return IRQ_HANDLED;
 }
···
 		return -ENOTSUPP;
 
 	return host->clk_enable(host);
+}
+
+static void tmio_mmc_clk_disable(struct tmio_mmc_host *host)
+{
+	if (host->clk_disable)
+		host->clk_disable(host);
 }
 
 static void tmio_mmc_power_on(struct tmio_mmc_host *host, unsigned short vdd)
···
 
 	ret = mmc_of_parse(mmc);
 	if (ret < 0)
-		goto host_free;
+		return ret;
 
 	_host->pdata = pdata;
 	platform_set_drvdata(pdev, mmc);
···
 
 	ret = tmio_mmc_init_ocr(_host);
 	if (ret < 0)
-		goto host_free;
+		return ret;
 
 	_host->ctl = devm_ioremap(&pdev->dev,
 				  res_ctl->start, resource_size(res_ctl));
-	if (!_host->ctl) {
-		ret = -ENOMEM;
-		goto host_free;
-	}
+	if (!_host->ctl)
+		return -ENOMEM;
 
 	tmio_mmc_ops.card_busy = _host->card_busy;
 	tmio_mmc_ops.start_signal_voltage_switch = _host->start_signal_voltage_switch;
···
 
 	_host->native_hotplug = !(pdata->flags & TMIO_MMC_USE_GPIO_CD ||
 				  mmc->caps & MMC_CAP_NEEDS_POLL ||
-				  !mmc_card_is_removable(mmc) ||
-				  mmc->slot.cd_irq >= 0);
+				  !mmc_card_is_removable(mmc));
 
 	/*
 	 * On Gen2+, eMMC with NONREMOVABLE currently fails because native
···
 	 * Check the sanity of mmc->f_min to prevent tmio_mmc_set_clock() from
 	 * looping forever...
 	 */
-	if (mmc->f_min == 0) {
-		ret = -EINVAL;
-		goto host_free;
-	}
+	if (mmc->f_min == 0)
+		return -EINVAL;
 
 	/*
 	 * While using internal tmio hardware logic for card detection, we need
···
 	if (pdata->flags & TMIO_MMC_SDIO_IRQ) {
 		_host->sdio_irq_mask = TMIO_SDIO_MASK_ALL;
 		sd_ctrl_write16(_host, CTL_SDIO_IRQ_MASK, _host->sdio_irq_mask);
-		sd_ctrl_write16(_host, CTL_TRANSACTION_CTL, 0x0000);
+		sd_ctrl_write16(_host, CTL_TRANSACTION_CTL, 0x0001);
 	}
 
 	spin_lock_init(&_host->lock);
···
 	}
 
 	return 0;
-
-host_free:
-
-	return ret;
 }
 EXPORT_SYMBOL(tmio_mmc_host_probe);
···
 {
 	struct platform_device *pdev = host->pdev;
 	struct mmc_host *mmc = host->mmc;
+
+	if (host->pdata->flags & TMIO_MMC_SDIO_IRQ)
+		sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0000);
 
 	if (!host->native_hotplug)
 		pm_runtime_get_sync(&pdev->dev);
···
 
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
+
+	tmio_mmc_clk_disable(host);
 }
 EXPORT_SYMBOL(tmio_mmc_host_remove);
···
 	if (host->clk_cache)
 		tmio_mmc_clk_stop(host);
 
-	if (host->clk_disable)
-		host->clk_disable(host);
+	tmio_mmc_clk_disable(host);
 
 	return 0;
 }
+1
drivers/mmc/host/via-sdmmc.c
···
 #include <linux/dma-mapping.h>
 #include <linux/highmem.h>
 #include <linux/delay.h>
+#include <linux/interrupt.h>
 
 #include <linux/mmc/host.h>
 
+2 -6
drivers/mmc/host/vub300.c
···
 	mutex_lock(&vub300->irq_mutex);
 	if (vub300->irq_enabled)
 		mmc_signal_sdio_irq(vub300->mmc);
-	else if (vub300->irqs_queued)
-		vub300->irqs_queued += 1;
 	else
 		vub300->irqs_queued += 1;
 	vub300->irq_disabled = 0;
···
 		 */
 	} else if (vub300->card_present) {
 		check_vub300_port_status(vub300);
-	} else if (vub300->mmc && vub300->mmc->card &&
-		   mmc_card_present(vub300->mmc->card)) {
+	} else if (vub300->mmc && vub300->mmc->card) {
 		/*
 		 * the MMC core must not have responded
 		 * to the previous indication - lets
···
 	int data_length;
 	mutex_lock(&vub300->cmd_mutex);
 	init_completion(&vub300->command_complete);
-	if (likely(vub300->vub_name[0]) || !vub300->mmc->card ||
-	    !mmc_card_present(vub300->mmc->card)) {
+	if (likely(vub300->vub_name[0]) || !vub300->mmc->card) {
 		/*
 		 * the name of the EMPTY Pseudo firmware file
 		 * is used as a flag to indicate that the file
+5 -2
drivers/mmc/host/wbsd.c
···
 
 static void wbsd_release_dma(struct wbsd_host *host)
 {
-	if (!dma_mapping_error(mmc_dev(host->mmc), host->dma_addr)) {
+	/*
+	 * host->dma_addr is valid here iff host->dma_buffer is not NULL.
+	 */
+	if (host->dma_buffer) {
 		dma_unmap_single(mmc_dev(host->mmc), host->dma_addr,
 				 WBSD_DMA_SIZE, DMA_BIDIRECTIONAL);
+		kfree(host->dma_buffer);
 	}
-	kfree(host->dma_buffer);
 	if (host->dma >= 0)
 		free_dma(host->dma);
 
+1
drivers/mmc/host/wmt-sdmmc.c
···
 #include <linux/irq.h>
 #include <linux/clk.h>
 #include <linux/gpio.h>
+#include <linux/interrupt.h>
 
 #include <linux/of.h>
 #include <linux/of_address.h>
+2 -4
include/linux/mfd/tmio.h
···
  */
 #define TMIO_MMC_HAVE_CMD12_CTRL	(1 << 7)
 
-/*
- * Some controllers needs to set 1 on SDIO status reserved bits
- */
-#define TMIO_MMC_SDIO_STATUS_QUIRK	(1 << 8)
+/* Controller has some SDIO status bits which must be 1 */
+#define TMIO_MMC_SDIO_STATUS_SETBITS	(1 << 8)
 
 /*
  * Some controllers have a 32-bit wide data port register
-7
include/linux/mmc/boot.h
···
-#ifndef LINUX_MMC_BOOT_H
-#define LINUX_MMC_BOOT_H
-
-enum { MMC_PROGRESS_ENTER, MMC_PROGRESS_INIT,
-       MMC_PROGRESS_LOAD, MMC_PROGRESS_DONE };
-
-#endif /* LINUX_MMC_BOOT_H */
+4 -242
include/linux/mmc/card.h
···
 #define LINUX_MMC_CARD_H
 
 #include <linux/device.h>
-#include <linux/mmc/core.h>
 #include <linux/mod_devicetable.h>
 
 struct mmc_cid {
···
 	unsigned int		hpi_cmd;	/* cmd used as HPI */
 	bool			bkops;		/* background support bit */
 	bool			man_bkops_en;	/* manual bkops enable bit */
+	bool			auto_bkops_en;	/* auto bkops enable bit */
 	unsigned int		data_sector_size;	/* 512 bytes or 4KB */
 	unsigned int		data_tag_unit_size;	/* DATA TAG UNIT size */
 	unsigned int		boot_ro_lock;	/* ro lock support */
···
 	u8			raw_pwr_cl_ddr_200_360;	/* 253 */
 	u8			raw_bkops_status;	/* 246 */
 	u8			raw_sectors[4];		/* 212 - 4 bytes */
+	u8			pre_eol_info;		/* 267 */
+	u8			device_life_time_est_typ_a;	/* 268 */
+	u8			device_life_time_est_typ_b;	/* 269 */
 
 	unsigned int		feature_support;
 #define MMC_DISCARD_FEATURE	BIT(0)		/* CMD38 feature */
···
 };
 
 struct mmc_host;
-struct mmc_ios;
 struct sdio_func;
 struct sdio_func_tuple;
···
 #define MMC_TYPE_SDIO		2		/* SDIO card */
 #define MMC_TYPE_SD_COMBO	3		/* SD combo (IO+mem) card */
 	unsigned int		state;		/* (our) card state */
-#define MMC_STATE_PRESENT	(1<<0)		/* present in sysfs */
-#define MMC_STATE_READONLY	(1<<1)		/* card is read-only */
-#define MMC_STATE_BLOCKADDR	(1<<2)		/* card uses block-addressing */
-#define MMC_CARD_SDXC		(1<<3)		/* card is SDXC */
-#define MMC_CARD_REMOVED	(1<<4)		/* card has been removed */
-#define MMC_STATE_DOING_BKOPS	(1<<5)		/* card is doing BKOPS */
-#define MMC_STATE_SUSPENDED	(1<<6)		/* card is suspended */
 	unsigned int		quirks;		/* card quirks */
 #define MMC_QUIRK_LENIENT_FN0	(1<<0)		/* allow SDIO FN0 writes outside of the VS CCCR range */
 #define MMC_QUIRK_BLKSZ_FOR_BYTE_MODE (1<<1)	/* use func->cur_blksize */
···
 #define MMC_QUIRK_BROKEN_IRQ_POLLING	(1<<11)	/* Polling SDIO_CCCR_INTx could create a fake interrupt */
 #define MMC_QUIRK_TRIM_BROKEN	(1<<12)		/* Skip trim */
 #define MMC_QUIRK_BROKEN_HPI	(1<<13)		/* Disable broken HPI support */
-
 
 	unsigned int		erase_size;	/* erase size in sectors */
 	unsigned int		erase_shift;	/* if erase unit is power 2 */
···
 	unsigned int		nr_parts;
 };
 
-/*
- * This function fill contents in mmc_part.
- */
-static inline void mmc_part_add(struct mmc_card *card, unsigned int size,
-			unsigned int part_cfg, char *name, int idx, bool ro,
-			int area_type)
-{
-	card->part[card->nr_parts].size = size;
-	card->part[card->nr_parts].part_cfg = part_cfg;
-	sprintf(card->part[card->nr_parts].name, name, idx);
-	card->part[card->nr_parts].force_ro = ro;
-	card->part[card->nr_parts].area_type = area_type;
-	card->nr_parts++;
-}
-
 static inline bool mmc_large_sector(struct mmc_card *card)
 {
 	return card->ext_csd.data_sector_size == 4096;
 }
 
-/*
- * The world is not perfect and supplies us with broken mmc/sdio devices.
- * For at least some of these bugs we need a work-around.
- */
-
-struct mmc_fixup {
-	/* CID-specific fields. */
-	const char *name;
-
-	/* Valid revision range */
-	u64 rev_start, rev_end;
-
-	unsigned int manfid;
-	unsigned short oemid;
-
-	/* SDIO-specfic fields. You can use SDIO_ANY_ID here of course */
-	u16 cis_vendor, cis_device;
-
-	/* for MMC cards */
-	unsigned int ext_csd_rev;
-
-	void (*vendor_fixup)(struct mmc_card *card, int data);
-	int data;
-};
-
-#define CID_MANFID_ANY (-1u)
-#define CID_OEMID_ANY ((unsigned short) -1)
-#define CID_NAME_ANY (NULL)
-
-#define EXT_CSD_REV_ANY (-1u)
-
-#define CID_MANFID_SANDISK	0x2
-#define CID_MANFID_TOSHIBA	0x11
-#define CID_MANFID_MICRON	0x13
-#define CID_MANFID_SAMSUNG	0x15
-#define CID_MANFID_KINGSTON	0x70
-#define CID_MANFID_HYNIX	0x90
-
-#define END_FIXUP { NULL }
-
-#define _FIXUP_EXT(_name, _manfid, _oemid, _rev_start, _rev_end,	\
-		   _cis_vendor, _cis_device,				\
-		   _fixup, _data, _ext_csd_rev)				\
-	{								\
-		.name = (_name),					\
-		.manfid = (_manfid),					\
-		.oemid = (_oemid),					\
-		.rev_start = (_rev_start),				\
-		.rev_end = (_rev_end),					\
-		.cis_vendor = (_cis_vendor),				\
-		.cis_device = (_cis_device),				\
-		.vendor_fixup = (_fixup),				\
-		.data = (_data),					\
-		.ext_csd_rev = (_ext_csd_rev),				\
-	}
-
-#define MMC_FIXUP_REV(_name, _manfid, _oemid, _rev_start, _rev_end,	\
-		      _fixup, _data, _ext_csd_rev)			\
-	_FIXUP_EXT(_name, _manfid,					\
-		   _oemid, _rev_start, _rev_end,			\
-		   SDIO_ANY_ID, SDIO_ANY_ID,				\
-		   _fixup, _data, _ext_csd_rev)				\
-
-#define MMC_FIXUP(_name, _manfid, _oemid, _fixup, _data) \
-	MMC_FIXUP_REV(_name, _manfid, _oemid, 0, -1ull, _fixup, _data,	\
-		      EXT_CSD_REV_ANY)
-
-#define MMC_FIXUP_EXT_CSD_REV(_name, _manfid, _oemid, _fixup, _data,	\
-			      _ext_csd_rev)				\
-	MMC_FIXUP_REV(_name, _manfid, _oemid, 0, -1ull, _fixup, _data,	\
-		      _ext_csd_rev)
-
-#define SDIO_FIXUP(_vendor, _device, _fixup, _data)			\
-	_FIXUP_EXT(CID_NAME_ANY, CID_MANFID_ANY,			\
-		   CID_OEMID_ANY, 0, -1ull,				\
-		   _vendor, _device,					\
-		   _fixup, _data, EXT_CSD_REV_ANY)			\
-
-#define cid_rev(hwrev, fwrev, year, month)	\
-	(((u64) hwrev) << 40 |			\
-	 ((u64) fwrev) << 32 |			\
-	 ((u64) year) << 16 |			\
-	 ((u64) month))
-
-#define cid_rev_card(card)		\
-	cid_rev(card->cid.hwrev,	\
-		card->cid.fwrev,	\
-		card->cid.year,		\
-		card->cid.month)
-
-/*
- * Unconditionally quirk add/remove.
- */
-
-static inline void __maybe_unused add_quirk(struct mmc_card *card, int data)
-{
-	card->quirks |= data;
-}
-
-static inline void __maybe_unused remove_quirk(struct mmc_card *card, int data)
-{
-	card->quirks &= ~data;
-}
-
 #define mmc_card_mmc(c)		((c)->type == MMC_TYPE_MMC)
 #define mmc_card_sd(c)		((c)->type == MMC_TYPE_SD)
 #define mmc_card_sdio(c)	((c)->type == MMC_TYPE_SDIO)
-
-#define mmc_card_present(c)	((c)->state & MMC_STATE_PRESENT)
-#define mmc_card_readonly(c)	((c)->state & MMC_STATE_READONLY)
-#define mmc_card_blockaddr(c)	((c)->state & MMC_STATE_BLOCKADDR)
-#define mmc_card_ext_capacity(c) ((c)->state & MMC_CARD_SDXC)
-#define mmc_card_removed(c)	((c) && ((c)->state & MMC_CARD_REMOVED))
-#define mmc_card_doing_bkops(c)	((c)->state & MMC_STATE_DOING_BKOPS)
-#define mmc_card_suspended(c)	((c)->state & MMC_STATE_SUSPENDED)
-
-#define mmc_card_set_present(c)	((c)->state |= MMC_STATE_PRESENT)
-#define mmc_card_set_readonly(c) ((c)->state |= MMC_STATE_READONLY)
-#define mmc_card_set_blockaddr(c) ((c)->state |= MMC_STATE_BLOCKADDR)
-#define mmc_card_set_ext_capacity(c) ((c)->state |= MMC_CARD_SDXC)
-#define mmc_card_set_removed(c) ((c)->state |= MMC_CARD_REMOVED)
-#define mmc_card_set_doing_bkops(c) ((c)->state |= MMC_STATE_DOING_BKOPS)
-#define mmc_card_clr_doing_bkops(c) ((c)->state &= ~MMC_STATE_DOING_BKOPS)
-#define mmc_card_set_suspended(c) ((c)->state |= MMC_STATE_SUSPENDED)
-#define mmc_card_clr_suspended(c) ((c)->state &= ~MMC_STATE_SUSPENDED)
-
-/*
- * Quirk add/remove for MMC products.
- */
-
-static inline void __maybe_unused add_quirk_mmc(struct mmc_card *card, int data)
-{
-	if (mmc_card_mmc(card))
-		card->quirks |= data;
-}
-
-static inline void __maybe_unused remove_quirk_mmc(struct mmc_card *card,
-						   int data)
-{
-	if (mmc_card_mmc(card))
-		card->quirks &= ~data;
-}
-
-/*
- * Quirk add/remove for SD products.
- */
-
-static inline void __maybe_unused add_quirk_sd(struct mmc_card *card, int data)
-{
-	if (mmc_card_sd(card))
-		card->quirks |= data;
-}
-
-static inline void __maybe_unused remove_quirk_sd(struct mmc_card *card,
-						  int data)
-{
-	if (mmc_card_sd(card))
-		card->quirks &= ~data;
-}
-
-static inline int mmc_card_lenient_fn0(const struct mmc_card *c)
-{
-	return c->quirks & MMC_QUIRK_LENIENT_FN0;
-}
-
-static inline int mmc_blksz_for_byte_mode(const struct mmc_card *c)
-{
-	return c->quirks & MMC_QUIRK_BLKSZ_FOR_BYTE_MODE;
-}
-
-static inline int mmc_card_disable_cd(const struct mmc_card *c)
-{
-	return c->quirks & MMC_QUIRK_DISABLE_CD;
-}
-
-static inline int mmc_card_nonstd_func_interface(const struct mmc_card *c)
-{
-	return c->quirks & MMC_QUIRK_NONSTD_FUNC_IF;
-}
-
-static inline int mmc_card_broken_byte_mode_512(const struct mmc_card *c)
-{
-	return c->quirks & MMC_QUIRK_BROKEN_BYTE_MODE_512;
-}
-
-static inline int mmc_card_long_read_time(const struct mmc_card *c)
-{
-	return c->quirks & MMC_QUIRK_LONG_READ_TIME;
-}
-
-static inline int mmc_card_broken_irq_polling(const struct mmc_card *c)
-{
-	return c->quirks & MMC_QUIRK_BROKEN_IRQ_POLLING;
-}
-
-static inline int mmc_card_broken_hpi(const struct mmc_card *c)
-{
-	return c->quirks & MMC_QUIRK_BROKEN_HPI;
-}
-
-#define mmc_card_name(c)	((c)->cid.prod_name)
-#define mmc_card_id(c)		(dev_name(&(c)->dev))
-
-#define mmc_dev_to_card(d)	container_of(d, struct mmc_card, dev)
-
-/*
- * MMC device driver (e.g., Flash card, I/O card...)
- */
-struct mmc_driver {
-	struct device_driver drv;
-	int (*probe)(struct mmc_card *);
-	void (*remove)(struct mmc_card *);
-	void (*shutdown)(struct mmc_card *);
-};
-
-extern int mmc_register_driver(struct mmc_driver *);
-extern void mmc_unregister_driver(struct mmc_driver *);
-
-extern void mmc_fixup_device(struct mmc_card *card,
-			     const struct mmc_fixup *table);
 
 #endif /* LINUX_MMC_CARD_H */
+9 -75
include/linux/mmc/core.h
···
 #ifndef LINUX_MMC_CORE_H
 #define LINUX_MMC_CORE_H
 
-#include <linux/interrupt.h>
 #include <linux/completion.h>
+#include <linux/types.h>
 
-struct request;
 struct mmc_data;
 struct mmc_request;
···
 struct mmc_card;
 struct mmc_async_req;
 
-extern int mmc_stop_bkops(struct mmc_card *);
-extern int mmc_read_bkops_status(struct mmc_card *);
-extern struct mmc_async_req *mmc_start_req(struct mmc_host *,
-					   struct mmc_async_req *,
-					   enum mmc_blk_status *);
-extern int mmc_interrupt_hpi(struct mmc_card *);
-extern void mmc_wait_for_req(struct mmc_host *, struct mmc_request *);
-extern void mmc_wait_for_req_done(struct mmc_host *host,
-				  struct mmc_request *mrq);
-extern bool mmc_is_req_done(struct mmc_host *host, struct mmc_request *mrq);
-extern int mmc_wait_for_cmd(struct mmc_host *, struct mmc_command *, int);
-extern int mmc_app_cmd(struct mmc_host *, struct mmc_card *);
-extern int mmc_wait_for_app_cmd(struct mmc_host *, struct mmc_card *,
-				struct mmc_command *, int);
-extern void mmc_start_bkops(struct mmc_card *card, bool from_exception);
-extern int mmc_switch(struct mmc_card *, u8, u8, u8, unsigned int);
-extern int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error);
-extern int mmc_abort_tuning(struct mmc_host *host, u32 opcode);
-extern int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd);
+struct mmc_async_req *mmc_start_areq(struct mmc_host *host,
+				     struct mmc_async_req *areq,
+				     enum mmc_blk_status *ret_stat);
+void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq);
+int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd,
+		     int retries);
 
-#define MMC_ERASE_ARG		0x00000000
-#define MMC_SECURE_ERASE_ARG	0x80000000
-#define MMC_TRIM_ARG		0x00000001
-#define MMC_DISCARD_ARG		0x00000003
-#define MMC_SECURE_TRIM1_ARG	0x80000001
-#define MMC_SECURE_TRIM2_ARG	0x80008000
-
-#define MMC_SECURE_ARGS		0x80000000
-#define MMC_TRIM_ARGS		0x00008001
-
-extern int mmc_erase(struct mmc_card *card, unsigned int from, unsigned int nr,
-		     unsigned int arg);
-extern int mmc_can_erase(struct mmc_card *card);
-extern int mmc_can_trim(struct mmc_card *card);
-extern int mmc_can_discard(struct mmc_card *card);
-extern int mmc_can_sanitize(struct mmc_card *card);
-extern int mmc_can_secure_erase_trim(struct mmc_card *card);
-extern int mmc_erase_group_aligned(struct mmc_card *card, unsigned int from,
-				   unsigned int nr);
-extern unsigned int mmc_calc_max_discard(struct mmc_card *card);
-
-extern int mmc_set_blocklen(struct mmc_card *card, unsigned int blocklen);
-extern int mmc_set_blockcount(struct mmc_card *card, unsigned int blockcount,
-			      bool is_rel_write);
-extern int mmc_hw_reset(struct mmc_host *host);
-extern int mmc_can_reset(struct mmc_card *card);
-
-extern void mmc_set_data_timeout(struct mmc_data *, const struct mmc_card *);
-extern unsigned int mmc_align_data_size(struct mmc_card *, unsigned int);
-
-extern int __mmc_claim_host(struct mmc_host *host, atomic_t *abort);
-extern void mmc_release_host(struct mmc_host *host);
-
-extern void mmc_get_card(struct mmc_card *card);
-extern void mmc_put_card(struct mmc_card *card);
-
-extern int mmc_flush_cache(struct mmc_card *);
-
-extern int mmc_detect_card_removed(struct mmc_host *host);
-
-/**
- *	mmc_claim_host - exclusively claim a host
- *	@host: mmc host to claim
- *
- *	Claim a host for a set of operations.
- */
-static inline void mmc_claim_host(struct mmc_host *host)
-{
-	__mmc_claim_host(host, NULL);
-}
-
-struct device_node;
-extern u32 mmc_vddrange_to_ocrmask(int vdd_min, int vdd_max);
-extern int mmc_of_parse_voltage(struct device_node *np, u32 *mask);
+int mmc_hw_reset(struct mmc_host *host);
+void mmc_set_data_timeout(struct mmc_data *data, const struct mmc_card *card);
 
 #endif /* LINUX_MMC_CORE_H */
-274
include/linux/mmc/dw_mmc.h
··· 1 - /* 2 - * Synopsys DesignWare Multimedia Card Interface driver 3 - * (Based on NXP driver for lpc 31xx) 4 - * 5 - * Copyright (C) 2009 NXP Semiconductors 6 - * Copyright (C) 2009, 2010 Imagination Technologies Ltd. 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License, or 11 - * (at your option) any later version. 12 - */ 13 - 14 - #ifndef LINUX_MMC_DW_MMC_H 15 - #define LINUX_MMC_DW_MMC_H 16 - 17 - #include <linux/scatterlist.h> 18 - #include <linux/mmc/core.h> 19 - #include <linux/dmaengine.h> 20 - #include <linux/reset.h> 21 - 22 - #define MAX_MCI_SLOTS 2 23 - 24 - enum dw_mci_state { 25 - STATE_IDLE = 0, 26 - STATE_SENDING_CMD, 27 - STATE_SENDING_DATA, 28 - STATE_DATA_BUSY, 29 - STATE_SENDING_STOP, 30 - STATE_DATA_ERROR, 31 - STATE_SENDING_CMD11, 32 - STATE_WAITING_CMD11_DONE, 33 - }; 34 - 35 - enum { 36 - EVENT_CMD_COMPLETE = 0, 37 - EVENT_XFER_COMPLETE, 38 - EVENT_DATA_COMPLETE, 39 - EVENT_DATA_ERROR, 40 - }; 41 - 42 - enum dw_mci_cookie { 43 - COOKIE_UNMAPPED, 44 - COOKIE_PRE_MAPPED, /* mapped by pre_req() of dwmmc */ 45 - COOKIE_MAPPED, /* mapped by prepare_data() of dwmmc */ 46 - }; 47 - 48 - struct mmc_data; 49 - 50 - enum { 51 - TRANS_MODE_PIO = 0, 52 - TRANS_MODE_IDMAC, 53 - TRANS_MODE_EDMAC 54 - }; 55 - 56 - struct dw_mci_dma_slave { 57 - struct dma_chan *ch; 58 - enum dma_transfer_direction direction; 59 - }; 60 - 61 - /** 62 - * struct dw_mci - MMC controller state shared between all slots 63 - * @lock: Spinlock protecting the queue and associated data. 64 - * @irq_lock: Spinlock protecting the INTMASK setting. 65 - * @regs: Pointer to MMIO registers. 66 - * @fifo_reg: Pointer to MMIO registers for data FIFO 67 - * @sg: Scatterlist entry currently being processed by PIO code, if any. 68 - * @sg_miter: PIO mapping scatterlist iterator. 
69 - * @cur_slot: The slot which is currently using the controller. 70 - * @mrq: The request currently being processed on @cur_slot, 71 - * or NULL if the controller is idle. 72 - * @cmd: The command currently being sent to the card, or NULL. 73 - * @data: The data currently being transferred, or NULL if no data 74 - * transfer is in progress. 75 - * @stop_abort: The command currently prepared for stoping transfer. 76 - * @prev_blksz: The former transfer blksz record. 77 - * @timing: Record of current ios timing. 78 - * @use_dma: Whether DMA channel is initialized or not. 79 - * @using_dma: Whether DMA is in use for the current transfer. 80 - * @dma_64bit_address: Whether DMA supports 64-bit address mode or not. 81 - * @sg_dma: Bus address of DMA buffer. 82 - * @sg_cpu: Virtual address of DMA buffer. 83 - * @dma_ops: Pointer to platform-specific DMA callbacks. 84 - * @cmd_status: Snapshot of SR taken upon completion of the current 85 - * @ring_size: Buffer size for idma descriptors. 86 - * command. Only valid when EVENT_CMD_COMPLETE is pending. 87 - * @dms: structure of slave-dma private data. 88 - * @phy_regs: physical address of controller's register map 89 - * @data_status: Snapshot of SR taken upon completion of the current 90 - * data transfer. Only valid when EVENT_DATA_COMPLETE or 91 - * EVENT_DATA_ERROR is pending. 92 - * @stop_cmdr: Value to be loaded into CMDR when the stop command is 93 - * to be sent. 94 - * @dir_status: Direction of current transfer. 95 - * @tasklet: Tasklet running the request state machine. 96 - * @pending_events: Bitmask of events flagged by the interrupt handler 97 - * to be processed by the tasklet. 98 - * @completed_events: Bitmask of events which the state machine has 99 - * processed. 100 - * @state: Tasklet state. 101 - * @queue: List of slots waiting for access to the controller. 102 - * @bus_hz: The rate of @mck in Hz. This forms the basis for MMC bus 103 - * rate and timeout calculations. 
104 - * @current_speed: Configured rate of the controller. 105 - * @num_slots: Number of slots available. 106 - * @fifoth_val: The value of the FIFOTH register. 107 - * @verid: Controller version ID. 108 - * @dev: Device associated with the MMC controller. 109 - * @pdata: Platform data associated with the MMC controller. 110 - * @drv_data: Driver-specific data for the identified variant of the controller. 111 - * @priv: Implementation defined private data. 112 - * @biu_clk: Pointer to bus interface unit clock instance. 113 - * @ciu_clk: Pointer to card interface unit clock instance. 114 - * @slot: Slots sharing this MMC controller. 115 - * @fifo_depth: Depth of FIFO. 116 - * @data_shift: log2 of FIFO item size. 117 - * @part_buf_start: Start index in part_buf. 118 - * @part_buf_count: Bytes of partial data in part_buf. 119 - * @part_buf: Simple buffer for partial fifo reads/writes. 120 - * @push_data: Pointer to FIFO push function. 121 - * @pull_data: Pointer to FIFO pull function. 122 - * @vqmmc_enabled: Whether the vqmmc regulator is enabled. 123 - * @irq_flags: The flags to be passed to request_irq. 124 - * @irq: The irq value to be passed to request_irq. 125 - * @sdio_id0: Number of slot0 in the SDIO interrupt registers. 126 - * @cmd11_timer: Timer for the SD 3.0 voltage switch-over scheme. 127 - * @dto_timer: Timer used as a fallback when the data transfer over (DTO) interrupt is broken. 128 - * 129 - * Locking 130 - * ======= 131 - * 132 - * @lock is a softirq-safe spinlock protecting @queue as well as 133 - * @cur_slot, @mrq and @state. These must always be updated 134 - * at the same time while holding @lock. 135 - * 136 - * @irq_lock is an irq-safe spinlock protecting the INTMASK register 137 - * to allow the interrupt handler to modify it directly. It is held only long 138 - * enough to read-modify-write INTMASK; no other locks are grabbed while 139 - * holding this one. 
140 - * 141 - * The @mrq field of struct dw_mci_slot is also protected by @lock, 142 - * and must always be written at the same time as the slot is added to 143 - * @queue. 144 - * 145 - * @pending_events and @completed_events are accessed using atomic bit 146 - * operations, so they don't need any locking. 147 - * 148 - * None of the fields touched by the interrupt handler need any 149 - * locking. However, ordering is important: Before EVENT_DATA_ERROR or 150 - * EVENT_DATA_COMPLETE is set in @pending_events, all data-related 151 - * interrupts must be disabled and @data_status updated with a 152 - * snapshot of SR. Similarly, before EVENT_CMD_COMPLETE is set, the 153 - * CMDRDY interrupt must be disabled and @cmd_status updated with a 154 - * snapshot of SR, and before EVENT_XFER_COMPLETE can be set, the 155 - * bytes_xfered field of @data must be written. This is ensured by 156 - * using barriers. 157 - */ 158 - struct dw_mci { 159 - spinlock_t lock; 160 - spinlock_t irq_lock; 161 - void __iomem *regs; 162 - void __iomem *fifo_reg; 163 - 164 - struct scatterlist *sg; 165 - struct sg_mapping_iter sg_miter; 166 - 167 - struct dw_mci_slot *cur_slot; 168 - struct mmc_request *mrq; 169 - struct mmc_command *cmd; 170 - struct mmc_data *data; 171 - struct mmc_command stop_abort; 172 - unsigned int prev_blksz; 173 - unsigned char timing; 174 - 175 - /* DMA interface members */ 176 - int use_dma; 177 - int using_dma; 178 - int dma_64bit_address; 179 - 180 - dma_addr_t sg_dma; 181 - void *sg_cpu; 182 - const struct dw_mci_dma_ops *dma_ops; 183 - /* For idmac */ 184 - unsigned int ring_size; 185 - 186 - /* For edmac */ 187 - struct dw_mci_dma_slave *dms; 188 - /* Registers' physical base address */ 189 - resource_size_t phy_regs; 190 - 191 - u32 cmd_status; 192 - u32 data_status; 193 - u32 stop_cmdr; 194 - u32 dir_status; 195 - struct tasklet_struct tasklet; 196 - unsigned long pending_events; 197 - unsigned long completed_events; 198 - enum dw_mci_state state; 199 - 
struct list_head queue; 200 - 201 - u32 bus_hz; 202 - u32 current_speed; 203 - u32 num_slots; 204 - u32 fifoth_val; 205 - u16 verid; 206 - struct device *dev; 207 - struct dw_mci_board *pdata; 208 - const struct dw_mci_drv_data *drv_data; 209 - void *priv; 210 - struct clk *biu_clk; 211 - struct clk *ciu_clk; 212 - struct dw_mci_slot *slot[MAX_MCI_SLOTS]; 213 - 214 - /* FIFO push and pull */ 215 - int fifo_depth; 216 - int data_shift; 217 - u8 part_buf_start; 218 - u8 part_buf_count; 219 - union { 220 - u16 part_buf16; 221 - u32 part_buf32; 222 - u64 part_buf; 223 - }; 224 - void (*push_data)(struct dw_mci *host, void *buf, int cnt); 225 - void (*pull_data)(struct dw_mci *host, void *buf, int cnt); 226 - 227 - bool vqmmc_enabled; 228 - unsigned long irq_flags; /* IRQ flags */ 229 - int irq; 230 - 231 - int sdio_id0; 232 - 233 - struct timer_list cmd11_timer; 234 - struct timer_list dto_timer; 235 - }; 236 - 237 - /* DMA ops for Internal/External DMAC interface */ 238 - struct dw_mci_dma_ops { 239 - /* DMA Ops */ 240 - int (*init)(struct dw_mci *host); 241 - int (*start)(struct dw_mci *host, unsigned int sg_len); 242 - void (*complete)(void *host); 243 - void (*stop)(struct dw_mci *host); 244 - void (*cleanup)(struct dw_mci *host); 245 - void (*exit)(struct dw_mci *host); 246 - }; 247 - 248 - struct dma_pdata; 249 - 250 - /* Board platform data */ 251 - struct dw_mci_board { 252 - u32 num_slots; 253 - 254 - unsigned int bus_hz; /* Clock speed at the cclk_in pad */ 255 - 256 - u32 caps; /* Capabilities */ 257 - u32 caps2; /* More capabilities */ 258 - u32 pm_caps; /* PM capabilities */ 259 - /* 260 - * Override fifo depth. If 0, autodetect it from the FIFOTH register, 261 - * but note that this may not be reliable after a bootloader has used 262 - * it. 
263 - */ 264 - unsigned int fifo_depth; 265 - 266 - /* delay in ms before detecting cards after interrupt */ 267 - u32 detect_delay_ms; 268 - 269 - struct reset_control *rstc; 270 - struct dw_mci_dma_ops *dma_ops; 271 - struct dma_pdata *data; 272 - }; 273 - 274 - #endif /* LINUX_MMC_DW_MMC_H */
+20 -64
include/linux/mmc/host.h
··· 10 10 #ifndef LINUX_MMC_HOST_H 11 11 #define LINUX_MMC_HOST_H 12 12 13 - #include <linux/leds.h> 14 - #include <linux/mutex.h> 15 - #include <linux/timer.h> 16 13 #include <linux/sched.h> 17 14 #include <linux/device.h> 18 15 #include <linux/fault-inject.h> 19 16 20 17 #include <linux/mmc/core.h> 21 18 #include <linux/mmc/card.h> 22 - #include <linux/mmc/mmc.h> 23 19 #include <linux/mmc/pm.h> 24 20 25 21 struct mmc_ios { ··· 77 81 78 82 bool enhanced_strobe; /* hs400es selection */ 79 83 }; 84 + 85 + struct mmc_host; 80 86 81 87 struct mmc_host_ops { 82 88 /* ··· 159 161 int (*multi_io_quirk)(struct mmc_card *card, 160 162 unsigned int direction, int blk_size); 161 163 }; 162 - 163 - struct mmc_card; 164 - struct device; 165 164 166 165 struct mmc_async_req { 167 166 /* active mmc request */ ··· 259 264 #define MMC_CAP_NONREMOVABLE (1 << 8) /* Nonremovable e.g. eMMC */ 260 265 #define MMC_CAP_WAIT_WHILE_BUSY (1 << 9) /* Waits while card is busy */ 261 266 #define MMC_CAP_ERASE (1 << 10) /* Allow erase/trim commands */ 262 - #define MMC_CAP_1_8V_DDR (1 << 11) /* can support */ 263 - /* DDR mode at 1.8V */ 264 - #define MMC_CAP_1_2V_DDR (1 << 12) /* can support */ 265 - /* DDR mode at 1.2V */ 266 - #define MMC_CAP_POWER_OFF_CARD (1 << 13) /* Can power off after boot */ 267 - #define MMC_CAP_BUS_WIDTH_TEST (1 << 14) /* CMD14/CMD19 bus width ok */ 268 - #define MMC_CAP_UHS_SDR12 (1 << 15) /* Host supports UHS SDR12 mode */ 269 - #define MMC_CAP_UHS_SDR25 (1 << 16) /* Host supports UHS SDR25 mode */ 270 - #define MMC_CAP_UHS_SDR50 (1 << 17) /* Host supports UHS SDR50 mode */ 271 - #define MMC_CAP_UHS_SDR104 (1 << 18) /* Host supports UHS SDR104 mode */ 272 - #define MMC_CAP_UHS_DDR50 (1 << 19) /* Host supports UHS DDR50 mode */ 267 + #define MMC_CAP_3_3V_DDR (1 << 11) /* Host supports eMMC DDR 3.3V */ 268 + #define MMC_CAP_1_8V_DDR (1 << 12) /* Host supports eMMC DDR 1.8V */ 269 + #define MMC_CAP_1_2V_DDR (1 << 13) /* Host supports eMMC DDR 1.2V */ 270 + #define 
MMC_CAP_POWER_OFF_CARD (1 << 14) /* Can power off after boot */ 271 + #define MMC_CAP_BUS_WIDTH_TEST (1 << 15) /* CMD14/CMD19 bus width ok */ 272 + #define MMC_CAP_UHS_SDR12 (1 << 16) /* Host supports UHS SDR12 mode */ 273 + #define MMC_CAP_UHS_SDR25 (1 << 17) /* Host supports UHS SDR25 mode */ 274 + #define MMC_CAP_UHS_SDR50 (1 << 18) /* Host supports UHS SDR50 mode */ 275 + #define MMC_CAP_UHS_SDR104 (1 << 19) /* Host supports UHS SDR104 mode */ 276 + #define MMC_CAP_UHS_DDR50 (1 << 20) /* Host supports UHS DDR50 mode */ 273 277 #define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */ 274 278 #define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */ 275 279 #define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */ ··· 391 397 unsigned long private[0] ____cacheline_aligned; 392 398 }; 393 399 400 + struct device_node; 401 + 394 402 struct mmc_host *mmc_alloc_host(int extra, struct device *); 395 403 int mmc_add_host(struct mmc_host *); 396 404 void mmc_remove_host(struct mmc_host *); 397 405 void mmc_free_host(struct mmc_host *); 398 406 int mmc_of_parse(struct mmc_host *host); 407 + int mmc_of_parse_voltage(struct device_node *np, u32 *mask); 399 408 400 409 static inline void *mmc_priv(struct mmc_host *host) 401 410 { ··· 454 457 } 455 458 #endif 456 459 460 + u32 mmc_vddrange_to_ocrmask(int vdd_min, int vdd_max); 457 461 int mmc_regulator_get_supply(struct mmc_host *mmc); 458 462 459 463 static inline int mmc_card_is_removable(struct mmc_host *host) ··· 472 474 return host->pm_flags & MMC_PM_WAKE_SDIO_IRQ; 473 475 } 474 476 475 - static inline int mmc_host_cmd23(struct mmc_host *host) 476 - { 477 - return host->caps & MMC_CAP_CMD23; 478 - } 479 - 480 - static inline int mmc_boot_partition_access(struct mmc_host *host) 481 - { 482 - return !(host->caps2 & MMC_CAP2_BOOTPART_NOACC); 483 - } 484 - 485 - static inline int mmc_host_uhs(struct mmc_host *host) 486 - { 487 - return host->caps & 488 - (MMC_CAP_UHS_SDR12 | 
MMC_CAP_UHS_SDR25 | 489 - MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | 490 - MMC_CAP_UHS_DDR50); 491 - } 492 - 477 + /* TODO: Move to private header */ 493 478 static inline int mmc_card_hs(struct mmc_card *card) 494 479 { 495 480 return card->host->ios.timing == MMC_TIMING_SD_HS || 496 481 card->host->ios.timing == MMC_TIMING_MMC_HS; 497 482 } 498 483 484 + /* TODO: Move to private header */ 499 485 static inline int mmc_card_uhs(struct mmc_card *card) 500 486 { 501 487 return card->host->ios.timing >= MMC_TIMING_UHS_SDR12 && 502 488 card->host->ios.timing <= MMC_TIMING_UHS_DDR50; 503 - } 504 - 505 - static inline bool mmc_card_hs200(struct mmc_card *card) 506 - { 507 - return card->host->ios.timing == MMC_TIMING_MMC_HS200; 508 - } 509 - 510 - static inline bool mmc_card_ddr52(struct mmc_card *card) 511 - { 512 - return card->host->ios.timing == MMC_TIMING_MMC_DDR52; 513 - } 514 - 515 - static inline bool mmc_card_hs400(struct mmc_card *card) 516 - { 517 - return card->host->ios.timing == MMC_TIMING_MMC_HS400; 518 - } 519 - 520 - static inline bool mmc_card_hs400es(struct mmc_card *card) 521 - { 522 - return card->host->ios.enhanced_strobe; 523 489 } 524 490 525 491 void mmc_retune_timer_stop(struct mmc_host *host); ··· 494 532 host->need_retune = 1; 495 533 } 496 534 497 - static inline void mmc_retune_recheck(struct mmc_host *host) 498 - { 499 - if (host->hold_retune <= 1) 500 - host->retune_now = 1; 501 - } 502 - 503 535 static inline bool mmc_can_retune(struct mmc_host *host) 504 536 { 505 537 return host->can_retune == 1; 506 538 } 507 539 508 - void mmc_retune_pause(struct mmc_host *host); 509 - void mmc_retune_unpause(struct mmc_host *host); 540 + int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error); 541 + int mmc_abort_tuning(struct mmc_host *host, u32 opcode); 510 542 511 543 #endif /* LINUX_MMC_HOST_H */
+18 -45
include/linux/mmc/mmc.h
··· 24 24 #ifndef LINUX_MMC_MMC_H 25 25 #define LINUX_MMC_MMC_H 26 26 27 + #include <linux/types.h> 28 + 27 29 /* Standard MMC commands (4.1) type argument response */ 28 30 /* class 1 */ 29 31 #define MMC_GO_IDLE_STATE 0 /* bc */ ··· 184 182 #define R2_SPI_OUT_OF_RANGE (1 << 15) /* or CSD overwrite */ 185 183 #define R2_SPI_CSD_OVERWRITE R2_SPI_OUT_OF_RANGE 186 184 187 - /* These are unpacked versions of the actual responses */ 188 - 189 - struct _mmc_csd { 190 - u8 csd_structure; 191 - u8 spec_vers; 192 - u8 taac; 193 - u8 nsac; 194 - u8 tran_speed; 195 - u16 ccc; 196 - u8 read_bl_len; 197 - u8 read_bl_partial; 198 - u8 write_blk_misalign; 199 - u8 read_blk_misalign; 200 - u8 dsr_imp; 201 - u16 c_size; 202 - u8 vdd_r_curr_min; 203 - u8 vdd_r_curr_max; 204 - u8 vdd_w_curr_min; 205 - u8 vdd_w_curr_max; 206 - u8 c_size_mult; 207 - union { 208 - struct { /* MMC system specification version 3.1 */ 209 - u8 erase_grp_size; 210 - u8 erase_grp_mult; 211 - } v31; 212 - struct { /* MMC system specification version 2.2 */ 213 - u8 sector_size; 214 - u8 erase_grp_size; 215 - } v22; 216 - } erase; 217 - u8 wp_grp_size; 218 - u8 wp_grp_enable; 219 - u8 default_ecc; 220 - u8 r2w_factor; 221 - u8 write_bl_len; 222 - u8 write_bl_partial; 223 - u8 file_format_grp; 224 - u8 copy; 225 - u8 perm_write_protect; 226 - u8 tmp_write_protect; 227 - u8 file_format; 228 - u8 ecc; 229 - }; 230 - 231 185 /* 232 186 * OCR bits are mostly in host.h 233 187 */ ··· 297 339 #define EXT_CSD_CACHE_SIZE 249 /* RO, 4 bytes */ 298 340 #define EXT_CSD_PWR_CL_DDR_200_360 253 /* RO */ 299 341 #define EXT_CSD_FIRMWARE_VERSION 254 /* RO, 8 bytes */ 342 + #define EXT_CSD_PRE_EOL_INFO 267 /* RO */ 343 + #define EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A 268 /* RO */ 344 + #define EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B 269 /* RO */ 300 345 #define EXT_CSD_CMDQ_DEPTH 307 /* RO */ 301 346 #define EXT_CSD_CMDQ_SUPPORT 308 /* RO */ 302 347 #define EXT_CSD_SUPPORTED_MODE 493 /* RO */ ··· 407 446 * BKOPS modes 408 447 */ 409 
448 #define EXT_CSD_MANUAL_BKOPS_MASK 0x01 449 + #define EXT_CSD_AUTO_BKOPS_MASK 0x02 410 450 411 451 /* 412 452 * Command Queue ··· 419 457 /* 420 458 * MMC_SWITCH access modes 421 459 */ 422 - 423 460 #define MMC_SWITCH_MODE_CMD_SET 0x00 /* Change the command set */ 424 461 #define MMC_SWITCH_MODE_SET_BITS 0x01 /* Set bits which are 1 in value */ 425 462 #define MMC_SWITCH_MODE_CLEAR_BITS 0x02 /* Clear bits which are 1 in value */ 426 463 #define MMC_SWITCH_MODE_WRITE_BYTE 0x03 /* Set target to value */ 464 + 465 + /* 466 + * Erase/trim/discard 467 + */ 468 + #define MMC_ERASE_ARG 0x00000000 469 + #define MMC_SECURE_ERASE_ARG 0x80000000 470 + #define MMC_TRIM_ARG 0x00000001 471 + #define MMC_DISCARD_ARG 0x00000003 472 + #define MMC_SECURE_TRIM1_ARG 0x80000001 473 + #define MMC_SECURE_TRIM2_ARG 0x80008000 474 + #define MMC_SECURE_ARGS 0x80000000 475 + #define MMC_TRIM_ARGS 0x00008001 427 476 428 477 #define mmc_driver_type_mask(n) (1 << (n)) 429 478
+7
include/linux/mmc/sdio_ids.h
··· 51 51 #define SDIO_DEVICE_ID_MARVELL_LIBERTAS 0x9103 52 52 #define SDIO_DEVICE_ID_MARVELL_8688WLAN 0x9104 53 53 #define SDIO_DEVICE_ID_MARVELL_8688BT 0x9105 54 + #define SDIO_DEVICE_ID_MARVELL_8797_F0 0x9128 54 55 55 56 #define SDIO_VENDOR_ID_SIANO 0x039a 56 57 #define SDIO_DEVICE_ID_SIANO_NOVA_B0 0x0201 ··· 60 59 #define SDIO_DEVICE_ID_SIANO_VENICE 0x0301 61 60 #define SDIO_DEVICE_ID_SIANO_NOVA_A0 0x1100 62 61 #define SDIO_DEVICE_ID_SIANO_STELLAR 0x5347 62 + 63 + #define SDIO_VENDOR_ID_TI 0x0097 64 + #define SDIO_DEVICE_ID_TI_WL1271 0x4076 65 + 66 + #define SDIO_VENDOR_ID_STE 0x0020 67 + #define SDIO_DEVICE_ID_STE_CW1200 0x2280 63 68 64 69 #endif /* LINUX_MMC_SDIO_IDS_H */
-5
include/linux/mmc/sh_mmcif.h
··· 32 32 */ 33 33 34 34 struct sh_mmcif_plat_data { 35 - int (*get_cd)(struct platform_device *pdef); 36 35 unsigned int slave_id_tx; /* embedded slave_id_[tr]x */ 37 36 unsigned int slave_id_rx; 38 - bool use_cd_gpio : 1; 39 - bool ccs_unsupported : 1; 40 - bool clk_ctrl2_present : 1; 41 - unsigned int cd_gpio; 42 37 u8 sup_pclk; /* 1: SH7757, 0: SH7724/SH7372 */ 43 38 unsigned long caps; 44 39 u32 ocr;
+3
include/linux/mmc/slot-gpio.h
··· 11 11 #ifndef MMC_SLOT_GPIO_H 12 12 #define MMC_SLOT_GPIO_H 13 13 14 + #include <linux/types.h> 15 + #include <linux/irqreturn.h> 16 + 14 17 struct mmc_host; 15 18 16 19 int mmc_gpio_get_ro(struct mmc_host *host);
+1
include/linux/platform_data/mmc-mxcmmc.h
··· 1 1 #ifndef ASMARM_ARCH_MMC_H 2 2 #define ASMARM_ARCH_MMC_H 3 3 4 + #include <linux/interrupt.h> 4 5 #include <linux/mmc/host.h> 5 6 6 7 struct device;