Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mmc-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC updates from Ulf Hansson:
"MMC core:
- Continue to re-factor code to prepare for eMMC CMDQ and blkmq support
- Introduce queue semantics to prepare for eMMC CMDQ and blkmq support
- Add helper functions to manage temporary enable/disable of eMMC CMDQ
- Improve wait-busy detection for SDIO

MMC host:
- cavium: Add driver to support Cavium controllers
- cavium: Extend Cavium driver to support Octeon and ThunderX SOCs
- bcm2835: Add new driver for Broadcom BCM2835 controller
- sdhci-xenon: Add driver to support Marvell Xenon SDHCI controller
- sdhci-tegra: Add support for the Tegra186 variant
- sdhci-of-esdhc: Support for UHS-I SD cards
- sdhci-of-esdhc: Support for eMMC HS200 cards
- sdhci-cadence: Add eMMC HS400 enhanced strobe support
- sdhci-esdhc-imx: Reset tuning circuit when needed
- sdhci-pci: Modernize and clean-up some PM related code
- sdhci-pci: Avoid re-tuning at runtime PM for some Intel devices
- sdhci-pci|acpi: Use aggressive PM for some Intel BYT controllers
- sdhci: Re-factoring and modernizations
- sdhci: Optimize delay loops
- sdhci: Improve register dump print format
- sdhci: Add support for the Command Queue Engine
- meson-gx: Various improvements and clean-ups
- meson-gx: Add support for CMD23
- meson-gx: Basic tuning support to avoid CRC errors
- s3cmci: Enable probing via DT
- mediatek: Improve tuning support for eMMC HS200 and HS400 mode
- tmio: Improve DMA support
- tmio: Use correct response for CMD12
- dw_mmc: Minor improvements and clean-ups"

* tag 'mmc-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (148 commits)
mmc: sdhci-of-esdhc: limit SD clock for ls1012a/ls1046a
mmc: sdhci-of-esdhc: poll ESDHC_CLOCK_STABLE bit with udelay
mmc: sdhci-xenon: Fix default value of LOGIC_TIMING_ADJUST for eMMC5.0 PHY
mmc: sdhci-xenon: Fix the work flow in xenon_remove().
MIPS: Octeon: cavium_octeon_defconfig: Enable Octeon MMC
mmc: sdhci-xenon: Remove redundant dev_err call in get_dt_pad_ctrl_data()
mmc: cavium: Use module_pci_driver to simplify the code
mmc: cavium: Add MMC support for Octeon SOCs.
mmc: cavium: Fix detection of block or byte addressing.
mmc: core: Export API to allow hosts to get the card address
mmc: sdio: Fix sdio wait busy implement limitation
mmc: sdhci-esdhc-imx: reset tuning circuit when power on mmc card
clk: apn806: fix spelling mistake: "mising" -> "missing"
mmc: sdhci-of-esdhc: add delay between tuning cycles
mmc: sdhci: Control the delay between tuning commands
mmc: sdhci-of-esdhc: add tuning support
mmc: sdhci-of-esdhc: add support for signal voltage switch
mmc: sdhci-of-esdhc: add peripheral clock support
mmc: sdhci-pci: Allow for 3 bytes from Intel DSM
mmc: cavium: Fix a shift wrapping bug
...

+7991 -1662
+23
Documentation/devicetree/bindings/mmc/brcm,bcm2835-sdhost.txt
+ Broadcom BCM2835 SDHOST controller
+
+ This file documents differences between the core properties described
+ by mmc.txt and the properties that represent the BCM2835 controller.
+
+ Required properties:
+ - compatible: Should be "brcm,bcm2835-sdhost".
+ - clocks: The clock feeding the SDHOST controller.
+
+ Optional properties:
+ - dmas: DMA channel for read and write.
+   See Documentation/devicetree/bindings/dma/dma.txt for details
+
+ Example:
+
+ sdhost: mmc@7e202000 {
+ 	compatible = "brcm,bcm2835-sdhost";
+ 	reg = <0x7e202000 0x100>;
+ 	interrupts = <2 24>;
+ 	clocks = <&clocks BCM2835_CLOCK_VPU>;
+ 	dmas = <&dma 13>;
+ 	dma-names = "rx-tx";
+ };
+57
Documentation/devicetree/bindings/mmc/cavium-mmc.txt
+ * Cavium Octeon & ThunderX MMC controller
+
+ The highspeed MMC host controller on Caviums SoCs provides an interface
+ for MMC and SD types of memory cards.
+
+ Supported maximum speeds are the ones of the eMMC standard 4.41 as well
+ as the speed of SD standard 4.0. Only 3.3 Volt is supported.
+
+ Required properties:
+  - compatible : should be one of:
+	cavium,octeon-6130-mmc
+	cavium,octeon-7890-mmc
+	cavium,thunder-8190-mmc
+	cavium,thunder-8390-mmc
+	mmc-slot
+  - reg : mmc controller base registers
+  - clocks : phandle
+
+ Optional properties:
+  - for cd, bus-width and additional generic mmc parameters
+    please refer to mmc.txt within this directory
+  - cavium,cmd-clk-skew : number of coprocessor clocks before sampling command
+  - cavium,dat-clk-skew : number of coprocessor clocks before sampling data
+
+ Deprecated properties:
+ - spi-max-frequency : use max-frequency instead
+ - cavium,bus-max-width : use bus-width instead
+ - power-gpios : use vmmc-supply instead
+ - cavium,octeon-6130-mmc-slot : use mmc-slot instead
+
+ Examples:
+	mmc_1_4: mmc@1,4 {
+		compatible = "cavium,thunder-8390-mmc";
+		reg = <0x0c00 0 0 0 0>;	/* DEVFN = 0x0c (1:4) */
+		#address-cells = <1>;
+		#size-cells = <0>;
+		clocks = <&sclk>;
+
+		mmc-slot@0 {
+			compatible = "mmc-slot";
+			reg = <0>;
+			vmmc-supply = <&mmc_supply_3v3>;
+			max-frequency = <42000000>;
+			bus-width = <4>;
+			cap-sd-highspeed;
+		};
+
+		mmc-slot@1 {
+			compatible = "mmc-slot";
+			reg = <1>;
+			vmmc-supply = <&mmc_supply_3v3>;
+			max-frequency = <42000000>;
+			bus-width = <8>;
+			cap-mmc-highspeed;
+			non-removable;
+		};
+	};
+170
Documentation/devicetree/bindings/mmc/marvell,xenon-sdhci.txt
+ Marvell Xenon SDHCI Controller device tree bindings
+ This file documents differences between the core mmc properties
+ described by mmc.txt and the properties used by the Xenon implementation.
+
+ Multiple SDHCs might be put into a single Xenon IP, to save size and cost.
+ Each SDHC is independent and owns independent resources, such as register sets,
+ clock and PHY.
+ Each SDHC should have an independent device tree node.
+
+ Required Properties:
+ - compatible: should be one of the following
+   - "marvell,armada-3700-sdhci": For controllers on Armada-3700 SoC.
+     Must provide a second register area and marvell,pad-type.
+   - "marvell,armada-ap806-sdhci": For controllers on Armada AP806.
+   - "marvell,armada-cp110-sdhci": For controllers on Armada CP110.
+
+ - clocks:
+   Array of clocks required for SDHC.
+   Require at least input clock for Xenon IP core.
+
+ - clock-names:
+   Array of names corresponding to clocks property.
+   The input clock for Xenon IP core should be named as "core".
+
+ - reg:
+   * For "marvell,armada-3700-sdhci", two register areas.
+     The first one for Xenon IP register. The second one for the Armada 3700 SoC
+     PHY PAD Voltage Control register.
+     Please follow the examples with compatible "marvell,armada-3700-sdhci"
+     in below.
+     Please also check property marvell,pad-type in below.
+   * For other compatible strings, one register area for Xenon IP.
+
+ Optional Properties:
+ - marvell,xenon-sdhc-id:
+   Indicate the corresponding bit index of current SDHC in
+   SDHC System Operation Control Register Bit[7:0].
+   Set/clear the corresponding bit to enable/disable current SDHC.
+   If Xenon IP contains only one SDHC, this property is optional.
+
+ - marvell,xenon-phy-type:
+   Xenon support multiple types of PHYs.
+   To select eMMC 5.1 PHY, set:
+   marvell,xenon-phy-type = "emmc 5.1 phy"
+   eMMC 5.1 PHY is the default choice if this property is not provided.
+   To select eMMC 5.0 PHY, set:
+   marvell,xenon-phy-type = "emmc 5.0 phy"
+
+   All those types of PHYs can support eMMC, SD and SDIO.
+   Please note that this property only presents the type of PHY.
+   It doesn't stand for the entire SDHC type or property.
+   For example, "emmc 5.1 phy" doesn't mean that this Xenon SDHC only
+   supports eMMC 5.1.
+
+ - marvell,xenon-phy-znr:
+   Set PHY ZNR value.
+   Only available for eMMC PHY.
+   Valid range = [0:0x1F].
+   ZNR is set as 0xF by default if this property is not provided.
+
+ - marvell,xenon-phy-zpr:
+   Set PHY ZPR value.
+   Only available for eMMC PHY.
+   Valid range = [0:0x1F].
+   ZPR is set as 0xF by default if this property is not provided.
+
+ - marvell,xenon-phy-nr-success-tun:
+   Set the number of required consecutive successful sampling points
+   used to identify a valid sampling window, in tuning process.
+   Valid range = [1:7].
+   Set as 0x4 by default if this property is not provided.
+
+ - marvell,xenon-phy-tun-step-divider:
+   Set the divider for calculating TUN_STEP.
+   Set as 64 by default if this property is not provided.
+
+ - marvell,xenon-phy-slow-mode:
+   If this property is selected, transfers will bypass PHY.
+   Only available when bus frequency lower than 55MHz in SDR mode.
+   Disabled by default. Please only try this property if timing issues
+   always occur with PHY enabled in eMMC HS SDR, SD SDR12, SD SDR25,
+   SD Default Speed and HS mode and eMMC legacy speed mode.
+
+ - marvell,xenon-tun-count:
+   Xenon SDHC SoC usually doesn't provide re-tuning counter in
+   Capabilities Register 3 Bit[11:8].
+   This property provides the re-tuning counter.
+   If this property is not set, default re-tuning counter will
+   be set as 0x9 in driver.
+
+ - marvell,pad-type:
+   Type of Armada 3700 SoC PHY PAD Voltage Controller register.
+   Only valid when "marvell,armada-3700-sdhci" is selected.
+   Two types: "sd" and "fixed-1-8v".
+   If "sd" is selected, SoC PHY PAD is set as 3.3V at the beginning and is
+   switched to 1.8V when later in higher speed mode.
+   If "fixed-1-8v" is selected, SoC PHY PAD is fixed 1.8V, such as for eMMC.
+   Please follow the examples with compatible "marvell,armada-3700-sdhci"
+   in below.
+
+ Example:
+ - For eMMC:
+
+	sdhci@aa0000 {
+		compatible = "marvell,armada-ap806-sdhci";
+		reg = <0xaa0000 0x1000>;
+		interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>
+		clocks = <&emmc_clk>;
+		clock-names = "core";
+		bus-width = <4>;
+		marvell,xenon-phy-slow-mode;
+		marvell,xenon-tun-count = <11>;
+		non-removable;
+		no-sd;
+		no-sdio;
+
+		/* Vmmc and Vqmmc are both fixed */
+	};
+
+ - For SD/SDIO:
+
+	sdhci@ab0000 {
+		compatible = "marvell,armada-cp110-sdhci";
+		reg = <0xab0000 0x1000>;
+		interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>
+		vqmmc-supply = <&sd_vqmmc_regulator>;
+		vmmc-supply = <&sd_vmmc_regulator>;
+		clocks = <&sdclk>;
+		clock-names = "core";
+		bus-width = <4>;
+		marvell,xenon-tun-count = <9>;
+	};
+
+ - For eMMC with compatible "marvell,armada-3700-sdhci":
+
+	sdhci@aa0000 {
+		compatible = "marvell,armada-3700-sdhci";
+		reg = <0xaa0000 0x1000>,
+		      <phy_addr 0x4>;
+		interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>
+		clocks = <&emmcclk>;
+		clock-names = "core";
+		bus-width = <8>;
+		mmc-ddr-1_8v;
+		mmc-hs400-1_8v;
+		non-removable;
+		no-sd;
+		no-sdio;
+
+		/* Vmmc and Vqmmc are both fixed */
+
+		marvell,pad-type = "fixed-1-8v";
+	};
+
+ - For SD/SDIO with compatible "marvell,armada-3700-sdhci":
+
+	sdhci@ab0000 {
+		compatible = "marvell,armada-3700-sdhci";
+		reg = <0xab0000 0x1000>,
+		      <phy_addr 0x4>;
+		interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>
+		vqmmc-supply = <&sd_regulator>;
+		/* Vmmc is fixed */
+		clocks = <&sdclk>;
+		clock-names = "core";
+		bus-width = <4>;
+
+		marvell,pad-type = "sd";
+	};
+12
Documentation/devicetree/bindings/mmc/mtk-sd.txt
···
 - assigned-clocks: PLL of the source clock
 - assigned-clock-parents: parent of source clock, used for HS400 mode to get 400Mhz source clock
 - hs400-ds-delay: HS400 DS delay setting
+- mediatek,hs200-cmd-int-delay: HS200 command internal delay setting.
+  This field has total 32 stages.
+  The value is an integer from 0 to 31.
+- mediatek,hs400-cmd-int-delay: HS400 command internal delay setting
+  This field has total 32 stages.
+  The value is an integer from 0 to 31.
+- mediatek,hs400-cmd-resp-sel-rising: HS400 command response sample selection
+  If present,HS400 command responses are sampled on rising edges.
+  If not present,HS400 command responses are sampled on falling edges.

 Examples:
 mmc0: mmc@11230000 {
···
 	assigned-clocks = <&topckgen CLK_TOP_MSDC50_0_SEL>;
 	assigned-clock-parents = <&topckgen CLK_TOP_MSDCPLL_D2>;
 	hs400-ds-delay = <0x14015>;
+	mediatek,hs200-cmd-int-delay = <26>;
+	mediatek,hs400-cmd-int-delay = <14>;
+	mediatek,hs400-cmd-resp-sel-rising;
 };
+7 -5
Documentation/devicetree/bindings/mmc/nvidia,tegra20-sdhci.txt
···
 by mmc.txt and the properties used by the sdhci-tegra driver.

 Required properties:
-- compatible : For Tegra20, must contain "nvidia,tegra20-sdhci".
-  For Tegra30, must contain "nvidia,tegra30-sdhci". For Tegra114,
-  must contain "nvidia,tegra114-sdhci". For Tegra124, must contain
-  "nvidia,tegra124-sdhci". Otherwise, must contain "nvidia,<chip>-sdhci",
-  plus one of the above, where <chip> is tegra132 or tegra210.
+- compatible : should be one of:
+  - "nvidia,tegra20-sdhci": for Tegra20
+  - "nvidia,tegra30-sdhci": for Tegra30
+  - "nvidia,tegra114-sdhci": for Tegra114
+  - "nvidia,tegra124-sdhci": for Tegra124 and Tegra132
+  - "nvidia,tegra210-sdhci": for Tegra210
+  - "nvidia,tegra186-sdhci": for Tegra186
 - clocks : Must contain one entry, for the module clock.
   See ../clocks/clock-bindings.txt for details.
 - resets : Must contain an entry for each entry in reset-names.
+8
Documentation/devicetree/bindings/mmc/renesas,mmcif.txt
···
 - compatible: should be "renesas,mmcif-<soctype>", "renesas,sh-mmcif" as a
   fallback. Examples with <soctype> are:
+	- "renesas,mmcif-r7s72100" for the MMCIF found in r7s72100 SoCs
 	- "renesas,mmcif-r8a73a4" for the MMCIF found in r8a73a4 SoCs
 	- "renesas,mmcif-r8a7740" for the MMCIF found in r8a7740 SoCs
 	- "renesas,mmcif-r8a7778" for the MMCIF found in r8a7778 SoCs
···
 	- "renesas,mmcif-r8a7793" for the MMCIF found in r8a7793 SoCs
 	- "renesas,mmcif-r8a7794" for the MMCIF found in r8a7794 SoCs
 	- "renesas,mmcif-sh73a0" for the MMCIF found in sh73a0 SoCs
+
+- interrupts: Some SoCs have only 1 shared interrupt, while others have either
+  2 or 3 individual interrupts (error, int, card detect). Below is the number
+  of interrupts for each SoC:
+	1: r8a73a4, r8a7778, r8a7790, r8a7791, r8a7793, r8a7794
+	2: r8a7740, sh73a0
+	3: r7s72100

 - clocks: reference to the functional clock

+42
Documentation/devicetree/bindings/mmc/samsung,s3cmci.txt
+ * Samsung's S3C24XX MMC/SD/SDIO controller device tree bindings
+
+ Samsung's S3C24XX MMC/SD/SDIO controller is used as a connectivity interface
+ with external MMC, SD and SDIO storage mediums.
+
+ This file documents differences between the core mmc properties described by
+ mmc.txt and the properties used by the Samsung S3C24XX MMC/SD/SDIO controller
+ implementation.
+
+ Required SoC Specific Properties:
+ - compatible: should be one of the following
+   - "samsung,s3c2410-sdi": for controllers compatible with s3c2410
+   - "samsung,s3c2412-sdi": for controllers compatible with s3c2412
+   - "samsung,s3c2440-sdi": for controllers compatible with s3c2440
+ - reg: register location and length
+ - interrupts: mmc controller interrupt
+ - clocks: Should reference the controller clock
+ - clock-names: Should contain "sdi"
+
+ Required Board Specific Properties:
+ - pinctrl-0: Should specify pin control groups used for this controller.
+ - pinctrl-names: Should contain only one value - "default".
+
+ Optional Properties:
+ - bus-width: number of data lines (see mmc.txt)
+ - cd-gpios: gpio for card detection (see mmc.txt)
+ - wp-gpios: gpio for write protection (see mmc.txt)
+
+ Example:
+
+ 	mmc0: mmc@5a000000 {
+ 		compatible = "samsung,s3c2440-sdi";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&sdi_pins>;
+ 		reg = <0x5a000000 0x100000>;
+ 		interrupts = <0 0 21 3>;
+ 		clocks = <&clocks PCLK_SDI>;
+ 		clock-names = "sdi";
+ 		bus-width = <4>;
+ 		cd-gpios = <&gpg 8 GPIO_ACTIVE_LOW>;
+ 		wp-gpios = <&gph 8 GPIO_ACTIVE_LOW>;
+ 	};
+48
Documentation/devicetree/bindings/mmc/sdhci-cadence.txt
···
 - mmc-hs400-1_8v
 - mmc-hs400-1_2v

+Some PHY delays can be configured by following properties.
+PHY DLL input delays:
+They are used to delay the data valid window, and align the window
+to sampling clock. The delay starts from 5ns (for delay parameter equal to 0)
+and it is increased by 2.5ns in each step.
+- cdns,phy-input-delay-sd-highspeed:
+  Value of the delay in the input path for SD high-speed timing
+  Valid range = [0:0x1F].
+- cdns,phy-input-delay-legacy:
+  Value of the delay in the input path for legacy timing
+  Valid range = [0:0x1F].
+- cdns,phy-input-delay-sd-uhs-sdr12:
+  Value of the delay in the input path for SD UHS SDR12 timing
+  Valid range = [0:0x1F].
+- cdns,phy-input-delay-sd-uhs-sdr25:
+  Value of the delay in the input path for SD UHS SDR25 timing
+  Valid range = [0:0x1F].
+- cdns,phy-input-delay-sd-uhs-sdr50:
+  Value of the delay in the input path for SD UHS SDR50 timing
+  Valid range = [0:0x1F].
+- cdns,phy-input-delay-sd-uhs-ddr50:
+  Value of the delay in the input path for SD UHS DDR50 timing
+  Valid range = [0:0x1F].
+- cdns,phy-input-delay-mmc-highspeed:
+  Value of the delay in the input path for MMC high-speed timing
+  Valid range = [0:0x1F].
+- cdns,phy-input-delay-mmc-ddr:
+  Value of the delay in the input path for eMMC high-speed DDR timing
+  Valid range = [0:0x1F].
+
+PHY DLL clock delays:
+Each delay property represents the fraction of the clock period.
+The approximate delay value will be
+(<delay property value>/128)*sdmclk_clock_period.
+- cdns,phy-dll-delay-sdclk:
+  Value of the delay introduced on the sdclk output
+  for all modes except HS200, HS400 and HS400_ES.
+  Valid range = [0:0x7F].
+- cdns,phy-dll-delay-sdclk-hsmmc:
+  Value of the delay introduced on the sdclk output
+  for HS200, HS400 and HS400_ES speed modes.
+  Valid range = [0:0x7F].
+- cdns,phy-dll-delay-strobe:
+  Value of the delay introduced on the dat_strobe input
+  used in HS400 / HS400_ES speed modes.
+  Valid range = [0:0x7F].
+
 Example:
 	emmc: sdhci@5a000000 {
 		compatible = "socionext,uniphier-sd4hc", "cdns,sd4hc";
···
 		mmc-ddr-1_8v;
 		mmc-hs200-1_8v;
 		mmc-hs400-1_8v;
+		cdns,phy-dll-delay-sdclk = <0>;
 	};
+1
Documentation/mmc/mmc-dev-attrs.txt
···
 	rel_sectors		Reliable write sector count
 	ocr			Operation Conditions Register
 	dsr			Driver Stage Register
+	cmdq_en			Command Queue enabled: 1 => enabled, 0 => not enabled

 Note on Erase Size and Preferred Erase Size:
+15
MAINTAINERS
···
 F:	drivers/i2c/busses/i2c-octeon*
 F:	drivers/i2c/busses/i2c-thunderx*

+CAVIUM MMC DRIVER
+M:	Jan Glauber <jglauber@cavium.com>
+M:	David Daney <david.daney@cavium.com>
+M:	Steven J. Hill <Steven.Hill@cavium.com>
+W:	http://www.cavium.com
+S:	Supported
+F:	drivers/mmc/host/cavium*
+
 CAVIUM LIQUIDIO NETWORK DRIVER
 M:	Derek Chickles <derek.chickles@caviumnetworks.com>
 M:	Satanand Burla <satananda.burla@caviumnetworks.com>
···
 M:	Nicolas Pitre <nico@fluxnic.net>
 S:	Odd Fixes
 F:	drivers/mmc/host/mvsdio.*
+
+MARVELL XENON MMC/SD/SDIO HOST CONTROLLER DRIVER
+M:	Hu Ziji <huziji@marvell.com>
+L:	linux-mmc@vger.kernel.org
+S:	Supported
+F:	drivers/mmc/host/sdhci-xenon*
+F:	Documentation/devicetree/bindings/mmc/marvell,xenon-sdhci.txt

 MATROX FRAMEBUFFER DRIVER
 L:	linux-fbdev@vger.kernel.org
+2 -1
arch/arm64/boot/dts/marvell/armada-ap806.dtsi
···
 			#clock-cells = <1>;
 			clock-output-names = "ap-cpu-cluster-0",
 					     "ap-cpu-cluster-1",
-					     "ap-fixed", "ap-mss";
+					     "ap-fixed", "ap-mss",
+					     "ap-emmc";
 			reg = <0x6f4000 0x1000>;
 		};
 	};
+5
arch/mips/configs/cavium_octeon_defconfig
···
 CONFIG_USB_EHCI_HCD_PLATFORM=m
 CONFIG_USB_OHCI_HCD=m
 CONFIG_USB_OHCI_HCD_PLATFORM=m
+CONFIG_MMC=y
+# CONFIG_PWRSEQ_EMMC is not set
+# CONFIG_PWRSEQ_SIMPLE is not set
+# CONFIG_MMC_BLOCK_BOUNCE is not set
+CONFIG_MMC_CAVIUM_OCTEON=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_DS1307=y
 CONFIG_STAGING=y
+20 -1
drivers/clk/mvebu/ap806-system-controller.c
···
 #define AP806_SAR_REG			0x400
 #define AP806_SAR_CLKFREQ_MODE_MASK	0x1f

-#define AP806_CLK_NUM			4
+#define AP806_CLK_NUM			5

 static struct clk *ap806_clks[AP806_CLK_NUM];
···
 		goto fail3;
 	}

+	/* eMMC Clock is fixed clock divided by 3 */
+	if (of_property_read_string_index(np, "clock-output-names",
+					  4, &name)) {
+		ap806_clk_data.clk_num--;
+		dev_warn(&pdev->dev,
+			 "eMMC clock missing: update the device tree!\n");
+	} else {
+		ap806_clks[4] = clk_register_fixed_factor(NULL, name,
+							  fixedclk_name,
+							  0, 1, 3);
+		if (IS_ERR(ap806_clks[4])) {
+			ret = PTR_ERR(ap806_clks[4]);
+			goto fail4;
+		}
+	}
+
 	ret = of_clk_add_provider(np, of_clk_src_onecell_get, &ap806_clk_data);
 	if (ret)
 		goto fail_clk_add;
···
 	return 0;

 fail_clk_add:
+	clk_unregister_fixed_factor(ap806_clks[4]);
+fail4:
 	clk_unregister_fixed_factor(ap806_clks[3]);
 fail3:
 	clk_unregister_fixed_rate(ap806_clks[2]);
+185 -115
drivers/mmc/core/block.c
···
 			      struct mmc_blk_data *md);
 static int get_card_status(struct mmc_card *card, u32 *status, int retries);

+static void mmc_blk_requeue(struct request_queue *q, struct request *req)
+{
+	spin_lock_irq(q->queue_lock);
+	blk_requeue_request(q, req);
+	spin_unlock_irq(q->queue_lock);
+}
+
 static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk)
 {
 	struct mmc_blk_data *md;
···
 #endif
 };

+static int mmc_blk_part_switch_pre(struct mmc_card *card,
+				   unsigned int part_type)
+{
+	int ret = 0;
+
+	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
+		if (card->ext_csd.cmdq_en) {
+			ret = mmc_cmdq_disable(card);
+			if (ret)
+				return ret;
+		}
+		mmc_retune_pause(card->host);
+	}
+
+	return ret;
+}
+
+static int mmc_blk_part_switch_post(struct mmc_card *card,
+				    unsigned int part_type)
+{
+	int ret = 0;
+
+	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
+		mmc_retune_unpause(card->host);
+		if (card->reenable_cmdq && !card->ext_csd.cmdq_en)
+			ret = mmc_cmdq_enable(card);
+	}
+
+	return ret;
+}
+
 static inline int mmc_blk_part_switch(struct mmc_card *card,
 				      struct mmc_blk_data *md)
 {
-	int ret;
+	int ret = 0;
 	struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);

 	if (main_md->part_curr == md->part_type)
···
 	if (mmc_card_mmc(card)) {
 		u8 part_config = card->ext_csd.part_config;

-		if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
-			mmc_retune_pause(card->host);
+		ret = mmc_blk_part_switch_pre(card, md->part_type);
+		if (ret)
+			return ret;

 		part_config &= ~EXT_CSD_PART_CONFIG_ACC_MASK;
 		part_config |= md->part_type;
···
 				 EXT_CSD_PART_CONFIG, part_config,
 				 card->ext_csd.part_time);
 		if (ret) {
-			if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
-				mmc_retune_unpause(card->host);
+			mmc_blk_part_switch_post(card, md->part_type);
 			return ret;
 		}

 		card->ext_csd.part_config = part_config;

-		if (main_md->part_curr == EXT_CSD_PART_CONFIG_ACC_RPMB)
-			mmc_retune_unpause(card->host);
+		ret = mmc_blk_part_switch_post(card, main_md->part_curr);
 	}

 	main_md->part_curr = md->part_type;
-	return 0;
+	return ret;
 }

 static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
···
 {
 	if (!(card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN)) {
 		/* Legacy mode imposes restrictions on transfers. */
-		if (!IS_ALIGNED(brq->cmd.arg, card->ext_csd.rel_sectors))
+		if (!IS_ALIGNED(blk_rq_pos(req), card->ext_csd.rel_sectors))
 			brq->data.blocks = 1;

 		if (brq->data.blocks > card->ext_csd.rel_sectors)
···
 	return MMC_BLK_SUCCESS;
 }

-static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
-			       struct mmc_card *card,
-			       int disable_multi,
-			       struct mmc_queue *mq)
+static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+			      int disable_multi, bool *do_rel_wr,
+			      bool *do_data_tag)
 {
-	u32 readcmd, writecmd;
+	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_card *card = md->queue.card;
 	struct mmc_blk_request *brq = &mqrq->brq;
 	struct request *req = mqrq->req;
-	struct mmc_blk_data *md = mq->blkdata;
-	bool do_data_tag;

 	/*
 	 * Reliable writes are used to implement Forced Unit Access and
 	 * are supported only on MMCs.
 	 */
-	bool do_rel_wr = (req->cmd_flags & REQ_FUA) &&
-		(rq_data_dir(req) == WRITE) &&
-		(md->flags & MMC_BLK_REL_WR);
+	*do_rel_wr = (req->cmd_flags & REQ_FUA) &&
+		     rq_data_dir(req) == WRITE &&
+		     (md->flags & MMC_BLK_REL_WR);

 	memset(brq, 0, sizeof(struct mmc_blk_request));
-	brq->mrq.cmd = &brq->cmd;
+
 	brq->mrq.data = &brq->data;

-	brq->cmd.arg = blk_rq_pos(req);
-	if (!mmc_card_blockaddr(card))
-		brq->cmd.arg <<= 9;
-	brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
-	brq->data.blksz = 512;
 	brq->stop.opcode = MMC_STOP_TRANSMISSION;
 	brq->stop.arg = 0;
+
+	if (rq_data_dir(req) == READ) {
+		brq->data.flags = MMC_DATA_READ;
+		brq->stop.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC;
+	} else {
+		brq->data.flags = MMC_DATA_WRITE;
+		brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
+	}
+
+	brq->data.blksz = 512;
 	brq->data.blocks = blk_rq_sectors(req);

 	/*
···
 			brq->data.blocks);
 	}

+	if (*do_rel_wr)
+		mmc_apply_rel_rw(brq, card, req);
+
+	/*
+	 * Data tag is used only during writing meta data to speed
+	 * up write and any subsequent read of this meta data
+	 */
+	*do_data_tag = card->ext_csd.data_tag_unit_size &&
+		       (req->cmd_flags & REQ_META) &&
+		       (rq_data_dir(req) == WRITE) &&
+		       ((brq->data.blocks * brq->data.blksz) >=
+			card->ext_csd.data_tag_unit_size);
+
+	mmc_set_data_timeout(&brq->data, card);
+
+	brq->data.sg = mqrq->sg;
+	brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
+
+	/*
+	 * Adjust the sg list so it is the same size as the
+	 * request.
+	 */
+	if (brq->data.blocks != blk_rq_sectors(req)) {
+		int i, data_size = brq->data.blocks << 9;
+		struct scatterlist *sg;
+
+		for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) {
+			data_size -= sg->length;
+			if (data_size <= 0) {
+				sg->length += data_size;
+				i++;
+				break;
+			}
+		}
+		brq->data.sg_len = i;
+	}
+
+	mqrq->areq.mrq = &brq->mrq;
+
+	mmc_queue_bounce_pre(mqrq);
+}
+
+static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+			       struct mmc_card *card,
+			       int disable_multi,
+			       struct mmc_queue *mq)
+{
+	u32 readcmd, writecmd;
+	struct mmc_blk_request *brq = &mqrq->brq;
+	struct request *req = mqrq->req;
+	struct mmc_blk_data *md = mq->blkdata;
+	bool do_rel_wr, do_data_tag;
+
+	mmc_blk_data_prep(mq, mqrq, disable_multi, &do_rel_wr, &do_data_tag);
+
+	brq->mrq.cmd = &brq->cmd;
+
+	brq->cmd.arg = blk_rq_pos(req);
+	if (!mmc_card_blockaddr(card))
+		brq->cmd.arg <<= 9;
+	brq->cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+
 	if (brq->data.blocks > 1 || do_rel_wr) {
 		/* SPI multiblock writes terminate using a special
 		 * token, not a STOP_TRANSMISSION request.
···
 		readcmd = MMC_READ_SINGLE_BLOCK;
 		writecmd = MMC_WRITE_BLOCK;
 	}
-	if (rq_data_dir(req) == READ) {
-		brq->cmd.opcode = readcmd;
-		brq->data.flags = MMC_DATA_READ;
-		if (brq->mrq.stop)
-			brq->stop.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 |
-					MMC_CMD_AC;
-	} else {
-		brq->cmd.opcode = writecmd;
-		brq->data.flags = MMC_DATA_WRITE;
-		if (brq->mrq.stop)
-			brq->stop.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B |
-					MMC_CMD_AC;
-	}
-
-	if (do_rel_wr)
-		mmc_apply_rel_rw(brq, card, req);
-
-	/*
-	 * Data tag is used only during writing meta data to speed
-	 * up write and any subsequent read of this meta data
-	 */
-	do_data_tag = (card->ext_csd.data_tag_unit_size) &&
-		(req->cmd_flags & REQ_META) &&
-		(rq_data_dir(req) == WRITE) &&
-		((brq->data.blocks * brq->data.blksz) >=
-		 card->ext_csd.data_tag_unit_size);
+	brq->cmd.opcode = rq_data_dir(req) == READ ? readcmd : writecmd;

 	/*
 	 * Pre-defined multi-block transfers are preferable to
···
 		brq->mrq.sbc = &brq->sbc;
 	}

-	mmc_set_data_timeout(&brq->data, card);
-
-	brq->data.sg = mqrq->sg;
-	brq->data.sg_len = mmc_queue_map_sg(mq, mqrq);
-
-	/*
-	 * Adjust the sg list so it is the same size as the
-	 * request.
-	 */
-	if (brq->data.blocks != blk_rq_sectors(req)) {
-		int i, data_size = brq->data.blocks << 9;
-		struct scatterlist *sg;
-
-		for_each_sg(brq->data.sg, sg, brq->data.sg_len, i) {
-			data_size -= sg->length;
-			if (data_size <= 0) {
-				sg->length += data_size;
-				i++;
-				break;
-			}
-		}
-		brq->data.sg_len = i;
-	}
-
-	mqrq->areq.mrq = &brq->mrq;
 	mqrq->areq.err_check = mmc_blk_err_check;
-
-	mmc_queue_bounce_pre(mqrq);
 }

 static bool mmc_blk_rw_cmd_err(struct mmc_blk_data *md, struct mmc_card *card,
···
 	return req_pending;
 }

-static void mmc_blk_rw_cmd_abort(struct mmc_card *card, struct request *req)
+static void mmc_blk_rw_cmd_abort(struct mmc_queue *mq, struct mmc_card *card,
+				 struct request *req,
+				 struct mmc_queue_req *mqrq)
 {
 	if (mmc_card_removed(card))
 		req->rq_flags |= RQF_QUIET;
 	while (blk_end_request(req, -EIO, blk_rq_cur_bytes(req)));
+	mmc_queue_req_free(mq, mqrq);
 }

 /**
···
  * @mq: the queue with the card and host to restart
  * @req: a new request that want to be started after the current one
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_rw_try_restart(struct mmc_queue *mq, struct request *req,
+				   struct mmc_queue_req *mqrq)
 {
 	if (!req)
 		return;
···
 	if (mmc_card_removed(mq->card)) {
 		req->rq_flags |= RQF_QUIET;
 		blk_end_request_all(req, -EIO);
+		mmc_queue_req_free(mq, mqrq);
 		return;
 	}
 	/* Else proceed and try to restart the current async request */
-	mmc_blk_rw_rq_prep(mq->mqrq_cur, mq->card, 0, mq);
-	mmc_start_areq(mq->card->host, &mq->mqrq_cur->areq, NULL);
+	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
+	mmc_start_areq(mq->card->host, &mqrq->areq, NULL);
 }

 static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
···
 	struct mmc_blk_request *brq;
 	int disable_multi = 0, retry = 0, type, retune_retry_done = 0;
 	enum mmc_blk_status status;
+	struct mmc_queue_req *mqrq_cur = NULL;
 	struct mmc_queue_req *mq_rq;
 	struct request *old_req;
 	struct mmc_async_req *new_areq;
 	struct mmc_async_req *old_areq;
 	bool req_pending = true;

-	if (!new_req && !mq->mqrq_prev->req)
+	if (new_req) {
+		mqrq_cur = mmc_queue_req_find(mq, new_req);
+		if (!mqrq_cur) {
+			WARN_ON(1);
+			mmc_blk_requeue(mq->queue, new_req);
+			new_req = NULL;
+		}
+	}
+
+	if (!mq->qcnt)
 		return;

 	do {
···
 		    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
 			pr_err("%s: Transfer size is not 4KB sector size aligned\n",
 			       new_req->rq_disk->disk_name);
-			mmc_blk_rw_cmd_abort(card, new_req);
+			mmc_blk_rw_cmd_abort(mq, card, new_req, mqrq_cur);
 			return;
 		}

-		mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-		new_areq = &mq->mqrq_cur->areq;
+		mmc_blk_rw_rq_prep(mqrq_cur, card, 0, mq);
+		new_areq = &mqrq_cur->areq;
 	} else
 		new_areq = NULL;
···
 		 * and there is nothing more to do until it is
 		 * complete.
 		 */
-		if (status == MMC_BLK_NEW_REQUEST)
-			mq->new_request = true;
 		return;
 	}
···
 			pr_err("%s BUG rq_tot %d d_xfer %d\n",
 			       __func__, blk_rq_bytes(old_req),
 			       brq->data.bytes_xfered);
-			mmc_blk_rw_cmd_abort(card, old_req);
+			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
 			return;
 		}
 		break;
···
 			req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
 			if (mmc_blk_reset(md, card->host, type)) {
 				if (req_pending)
-					mmc_blk_rw_cmd_abort(card, old_req);
-				mmc_blk_rw_try_restart(mq, new_req);
+					mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+				else
+					mmc_queue_req_free(mq, mq_rq);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			if (!req_pending) {
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_queue_req_free(mq, mq_rq);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			break;
···
 		case MMC_BLK_ABORT:
 			if (!mmc_blk_reset(md, card->host, type))
 				break;
-			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq, new_req);
+			mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+			mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 			return;
 		case MMC_BLK_DATA_ERR: {
 			int err;
···
 			if (!err)
 				break;
 			if (err == -ENODEV) {
-				mmc_blk_rw_cmd_abort(card, old_req);
-				mmc_blk_rw_try_restart(mq, new_req);
+				mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq);
+				mmc_blk_rw_try_restart(mq, new_req, mqrq_cur);
 				return;
 			}
 			/* Fall through */
···
 				req_pending = blk_end_request(old_req, -EIO,
 							      brq->data.blksz);
 				if (!req_pending) {
-					mmc_blk_rw_try_restart(mq, new_req);
+					mmc_queue_req_free(mq, mq_rq);
+					mmc_blk_rw_try_restart(mq,
new_req, mqrq_cur); 1818 1753 return; 1819 1754 } 1820 1755 break; 1821 1756 case MMC_BLK_NOMEDIUM: 1822 - mmc_blk_rw_cmd_abort(card, old_req); 1823 - mmc_blk_rw_try_restart(mq, new_req); 1757 + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); 1758 + mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); 1824 1759 return; 1825 1760 default: 1826 1761 pr_err("%s: Unhandled return value (%d)", 1827 1762 old_req->rq_disk->disk_name, status); 1828 - mmc_blk_rw_cmd_abort(card, old_req); 1829 - mmc_blk_rw_try_restart(mq, new_req); 1763 + mmc_blk_rw_cmd_abort(mq, card, old_req, mq_rq); 1764 + mmc_blk_rw_try_restart(mq, new_req, mqrq_cur); 1830 1765 return; 1831 1766 } 1832 1767 ··· 1843 1776 mq_rq->brq.retune_retry_done = retune_retry_done; 1844 1777 } 1845 1778 } while (req_pending); 1779 + 1780 + mmc_queue_req_free(mq, mq_rq); 1846 1781 } 1847 1782 1848 1783 void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req) ··· 1852 1783 int ret; 1853 1784 struct mmc_blk_data *md = mq->blkdata; 1854 1785 struct mmc_card *card = md->queue.card; 1855 - bool req_is_special = mmc_req_is_special(req); 1856 1786 1857 - if (req && !mq->mqrq_prev->req) 1787 + if (req && !mq->qcnt) 1858 1788 /* claim host only for the first request */ 1859 1789 mmc_get_card(card); 1860 1790 ··· 1865 1797 goto out; 1866 1798 } 1867 1799 1868 - mq->new_request = false; 1869 1800 if (req && req_op(req) == REQ_OP_DISCARD) { 1870 1801 /* complete ongoing async transfer before issuing discard */ 1871 - if (card->host->areq) 1802 + if (mq->qcnt) 1872 1803 mmc_blk_issue_rw_rq(mq, NULL); 1873 1804 mmc_blk_issue_discard_rq(mq, req); 1874 1805 } else if (req && req_op(req) == REQ_OP_SECURE_ERASE) { 1875 1806 /* complete ongoing async transfer before issuing secure erase*/ 1876 - if (card->host->areq) 1807 + if (mq->qcnt) 1877 1808 mmc_blk_issue_rw_rq(mq, NULL); 1878 1809 mmc_blk_issue_secdiscard_rq(mq, req); 1879 1810 } else if (req && req_op(req) == REQ_OP_FLUSH) { 1880 1811 /* complete ongoing async transfer before 
issuing flush */ 1881 - if (card->host->areq) 1812 + if (mq->qcnt) 1882 1813 mmc_blk_issue_rw_rq(mq, NULL); 1883 1814 mmc_blk_issue_flush(mq, req); 1884 1815 } else { ··· 1886 1819 } 1887 1820 1888 1821 out: 1889 - if ((!req && !mq->new_request) || req_is_special) 1890 - /* 1891 - * Release host when there are no more requests 1892 - * and after special request(discard, flush) is done. 1893 - * In case sepecial request, there is no reentry to 1894 - * the 'mmc_blk_issue_rq' with 'mqrq_prev->req'. 1895 - */ 1822 + if (!mq->qcnt) 1896 1823 mmc_put_card(card); 1897 1824 } 1898 1825 ··· 2166 2105 { 2167 2106 struct mmc_blk_data *md, *part_md; 2168 2107 char cap_str[10]; 2108 + int ret; 2169 2109 2170 2110 /* 2171 2111 * Check that the card supports the command class(es) we need. ··· 2176 2114 2177 2115 mmc_fixup_device(card, mmc_blk_fixups); 2178 2116 2117 + ret = mmc_queue_alloc_shared_queue(card); 2118 + if (ret) 2119 + return ret; 2120 + 2179 2121 md = mmc_blk_alloc(card); 2180 - if (IS_ERR(md)) 2122 + if (IS_ERR(md)) { 2123 + mmc_queue_free_shared_queue(card); 2181 2124 return PTR_ERR(md); 2125 + } 2182 2126 2183 2127 string_get_size((u64)get_capacity(md->disk), 512, STRING_UNITS_2, 2184 2128 cap_str, sizeof(cap_str)); ··· 2222 2154 out: 2223 2155 mmc_blk_remove_parts(card, md); 2224 2156 mmc_blk_remove_req(md); 2157 + mmc_queue_free_shared_queue(card); 2225 2158 return 0; 2226 2159 } 2227 2160 ··· 2240 2171 pm_runtime_put_noidle(&card->dev); 2241 2172 mmc_blk_remove_req(md); 2242 2173 dev_set_drvdata(&card->dev, NULL); 2174 + mmc_queue_free_shared_queue(card); 2243 2175 } 2244 2176 2245 2177 static int _mmc_blk_suspend(struct mmc_card *card)
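The hunks above replace the old mqrq_cur/mqrq_prev pair with a counted slot queue: the host is claimed when the first request arrives (`req && !mq->qcnt`) and released only once `mq->qcnt` drops back to zero. A minimal userspace sketch of that claim-on-first/release-on-last pattern (the `host_claim`/`host_release` names are illustrative stand-ins for `mmc_get_card()`/`mmc_put_card()`, not kernel API):

```c
#include <assert.h>

/* Stand-ins for mmc_get_card()/mmc_put_card(); just count claims. */
static int host_claims;
static void host_claim(void)   { host_claims++; }
static void host_release(void) { host_claims--; }

struct queue { int qcnt; };

/* Claim the host only when the first request enters an empty queue. */
static void queue_issue(struct queue *q)
{
	if (!q->qcnt)
		host_claim();
	q->qcnt++;
}

/* Release the host once the last outstanding request completes. */
static void queue_complete(struct queue *q)
{
	assert(q->qcnt > 0);
	if (--q->qcnt == 0)
		host_release();
}
```

Several requests can thus be in flight under a single host claim, which is what lets the CMDQ/blkmq work build on this code later.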
drivers/mmc/core/core.c (+102 -91)
···
172 172
173 173 trace_mmc_request_done(host, mrq);
174 174
175 - if (err && cmd->retries && !mmc_card_removed(host->card)) {
176 - /*
177 - * Request starter must handle retries - see
178 - * mmc_wait_for_req_done().
179 - */
180 - if (mrq->done)
181 - mrq->done(mrq);
182 - } else {
175 + /*
176 + * We list various conditions for the command to be considered
177 + * properly done:
178 + *
179 + * - There was no error, OK fine then
180 + * - We are not doing some kind of retry
181 + * - The card was removed (...so just complete everything no matter
182 + * if there are errors or retries)
183 + */
184 + if (!err || !cmd->retries || mmc_card_removed(host->card)) {
183 185 mmc_should_fail_request(host, mrq);
184 186
185 187 if (!host->ongoing_mrq)
···
213 211 mrq->stop->resp[0], mrq->stop->resp[1],
214 212 mrq->stop->resp[2], mrq->stop->resp[3]);
215 213 }
216 -
217 - if (mrq->done)
218 - mrq->done(mrq);
219 214 }
215 + /*
216 + * Request starter must handle retries - see
217 + * mmc_wait_for_req_done().
218 + */
219 + if (mrq->done)
220 + mrq->done(mrq);
220 221 }
221 222
222 223 EXPORT_SYMBOL(mmc_request_done);
···
239 234 /*
240 235 * For sdio rw commands we must wait for card busy otherwise some
241 236 * sdio devices won't work properly.
237 + * And bypass I/O abort, reset and bus suspend operations.
242 238 */
243 - if (mmc_is_io_op(mrq->cmd->opcode) && host->ops->card_busy) {
239 + if (sdio_is_io_busy(mrq->cmd->opcode, mrq->cmd->arg) &&
240 + host->ops->card_busy) {
244 241 int tries = 500; /* Wait aprox 500ms at maximum */
245 242
246 243 while (host->ops->card_busy(host) && --tries)
···
269 262 host->ops->request(host, mrq);
270 263 }
271 264
272 - static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
265 + static void mmc_mrq_pr_debug(struct mmc_host *host, struct mmc_request *mrq)
273 266 {
274 - #ifdef CONFIG_MMC_DEBUG
275 - unsigned int i, sz;
276 - struct scatterlist *sg;
277 - #endif
278 - mmc_retune_hold(host);
279 -
280 - if (mmc_card_removed(host->card))
281 - return -ENOMEDIUM;
282 -
283 267 if (mrq->sbc) {
284 268 pr_debug("<%s: starting CMD%u arg %08x flags %08x>\n",
285 269 mmc_hostname(host), mrq->sbc->opcode,
286 270 mrq->sbc->arg, mrq->sbc->flags);
287 271 }
288 272
289 - pr_debug("%s: starting CMD%u arg %08x flags %08x\n",
290 - mmc_hostname(host), mrq->cmd->opcode,
291 - mrq->cmd->arg, mrq->cmd->flags);
273 + if (mrq->cmd) {
274 + pr_debug("%s: starting CMD%u arg %08x flags %08x\n",
275 + mmc_hostname(host), mrq->cmd->opcode, mrq->cmd->arg,
276 + mrq->cmd->flags);
277 + }
292 278
293 279 if (mrq->data) {
294 280 pr_debug("%s: blksz %d blocks %d flags %08x "
···
297 297 mmc_hostname(host), mrq->stop->opcode,
298 298 mrq->stop->arg, mrq->stop->flags);
299 299 }
300 + }
300 301
301 - WARN_ON(!host->claimed);
302 + static int mmc_mrq_prep(struct mmc_host *host, struct mmc_request *mrq)
303 + {
304 + #ifdef CONFIG_MMC_DEBUG
305 + unsigned int i, sz;
306 + struct scatterlist *sg;
307 + #endif
302 308
303 - mrq->cmd->error = 0;
304 - mrq->cmd->mrq = mrq;
309 + if (mrq->cmd) {
310 + mrq->cmd->error = 0;
311 + mrq->cmd->mrq = mrq;
312 + mrq->cmd->data = mrq->data;
313 + }
305 314 if (mrq->sbc) {
306 315 mrq->sbc->error = 0;
307 316 mrq->sbc->mrq = mrq;
···
327 318 if (sz != mrq->data->blocks * mrq->data->blksz)
328 319 return -EINVAL;
329 320 #endif
330 -
331 - mrq->cmd->data = mrq->data;
332 321 mrq->data->error = 0;
333 322 mrq->data->mrq = mrq;
334 323 if (mrq->stop) {
···
335 328 mrq->stop->mrq = mrq;
336 329 }
337 330 }
331 +
332 + return 0;
333 + }
334 +
335 + static int mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
336 + {
337 + int err;
338 +
339 + mmc_retune_hold(host);
340 +
341 + if (mmc_card_removed(host->card))
342 + return -ENOMEDIUM;
343 +
344 + mmc_mrq_pr_debug(host, mrq);
345 +
346 + WARN_ON(!host->claimed);
347 +
348 + err = mmc_mrq_prep(host, mrq);
349 + if (err)
350 + return err;
351 +
338 352 led_trigger_event(host->led, LED_FULL);
339 353 __mmc_start_request(host, mrq);
340 354
···
513 485 return err;
514 486 }
515 487
516 - /*
517 - * mmc_wait_for_data_req_done() - wait for request completed
518 - * @host: MMC host to prepare the command.
519 - * @mrq: MMC request to wait for
520 - *
521 - * Blocks MMC context till host controller will ack end of data request
522 - * execution or new request notification arrives from the block layer.
523 - * Handles command retries.
524 - *
525 - * Returns enum mmc_blk_status after checking errors.
526 - */
527 - static enum mmc_blk_status mmc_wait_for_data_req_done(struct mmc_host *host,
528 - struct mmc_request *mrq)
529 - {
530 - struct mmc_command *cmd;
531 - struct mmc_context_info *context_info = &host->context_info;
532 - enum mmc_blk_status status;
533 -
534 - while (1) {
535 - wait_event_interruptible(context_info->wait,
536 - (context_info->is_done_rcv ||
537 - context_info->is_new_req));
538 -
539 - if (context_info->is_done_rcv) {
540 - context_info->is_done_rcv = false;
541 - cmd = mrq->cmd;
542 -
543 - if (!cmd->error || !cmd->retries ||
544 - mmc_card_removed(host->card)) {
545 - status = host->areq->err_check(host->card,
546 - host->areq);
547 - break; /* return status */
548 - } else {
549 - mmc_retune_recheck(host);
550 - pr_info("%s: req failed (CMD%u): %d, retrying...\n",
551 - mmc_hostname(host),
552 - cmd->opcode, cmd->error);
553 - cmd->retries--;
554 - cmd->error = 0;
555 - __mmc_start_request(host, mrq);
556 - continue; /* wait for done/new event again */
557 - }
558 - }
559 -
560 - return MMC_BLK_NEW_REQUEST;
561 - }
562 - mmc_retune_release(host);
563 - return status;
564 - }
565 -
566 488 void mmc_wait_for_req_done(struct mmc_host *host, struct mmc_request *mrq)
567 489 {
568 490 struct mmc_command *cmd;
···
617 639 */
618 640 static enum mmc_blk_status mmc_finalize_areq(struct mmc_host *host)
619 641 {
642 + struct mmc_context_info *context_info = &host->context_info;
620 643 enum mmc_blk_status status;
621 644
622 645 if (!host->areq)
623 646 return MMC_BLK_SUCCESS;
624 647
625 - status = mmc_wait_for_data_req_done(host, host->areq->mrq);
626 - if (status == MMC_BLK_NEW_REQUEST)
627 - return status;
648 + while (1) {
649 + wait_event_interruptible(context_info->wait,
650 + (context_info->is_done_rcv ||
651 + context_info->is_new_req));
652 +
653 + if (context_info->is_done_rcv) {
654 + struct mmc_command *cmd;
655 +
656 + context_info->is_done_rcv = false;
657 + cmd = host->areq->mrq->cmd;
658 +
659 + if (!cmd->error || !cmd->retries ||
660 + mmc_card_removed(host->card)) {
661 + status = host->areq->err_check(host->card,
662 + host->areq);
663 + break; /* return status */
664 + } else {
665 + mmc_retune_recheck(host);
666 + pr_info("%s: req failed (CMD%u): %d, retrying...\n",
667 + mmc_hostname(host),
668 + cmd->opcode, cmd->error);
669 + cmd->retries--;
670 + cmd->error = 0;
671 + __mmc_start_request(host, host->areq->mrq);
672 + continue; /* wait for done/new event again */
673 + }
674 + }
675 +
676 + return MMC_BLK_NEW_REQUEST;
677 + }
678 +
679 + mmc_retune_release(host);
628 680
629 681 /*
630 682 * Check BKOPS urgency for each R1 response
···
691 683 {
692 684 enum mmc_blk_status status;
693 685 int start_err = 0;
694 - struct mmc_async_req *data = host->areq;
686 + struct mmc_async_req *previous = host->areq;
695 687
696 688 /* Prepare a new request */
697 689 if (areq)
···
699 691
700 692 /* Finalize previous request */
701 693 status = mmc_finalize_areq(host);
694 + if (ret_stat)
695 + *ret_stat = status;
702 696
703 697 /* The previous request is still going on... */
704 - if (status == MMC_BLK_NEW_REQUEST) {
705 - if (ret_stat)
706 - *ret_stat = status;
698 + if (status == MMC_BLK_NEW_REQUEST)
707 699 return NULL;
708 - }
709 700
710 701 /* Fine so far, start the new request! */
711 702 if (status == MMC_BLK_SUCCESS && areq)
···
723 716 else
724 717 host->areq = areq;
725 718
726 - if (ret_stat)
727 - *ret_stat = status;
728 - return data;
719 + return previous;
729 720 }
730 721 EXPORT_SYMBOL(mmc_start_areq);
731 722
···
2559 2554 return max_discard;
2560 2555 }
2561 2556 EXPORT_SYMBOL(mmc_calc_max_discard);
2557 +
2558 + bool mmc_card_is_blockaddr(struct mmc_card *card)
2559 + {
2560 + return card ? mmc_card_blockaddr(card) : false;
2561 + }
2562 + EXPORT_SYMBOL(mmc_card_is_blockaddr);
2562 2563
2563 2564 int mmc_set_blocklen(struct mmc_card *card, unsigned int blocklen)
2564 2565 {
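The rewritten `mmc_request_done()` inverts the old condition into a positive completion test: a command is finalized when it succeeded, has no retries left, or the card is gone. That predicate can be isolated as a standalone sketch (function and parameter names are illustrative, not the kernel API):

```c
#include <stdbool.h>

/*
 * Mirror of the completion test in mmc_request_done():
 * !err || !cmd->retries || mmc_card_removed(host->card).
 * A command is considered properly done when it succeeded,
 * when no retries remain, or when the card was removed.
 */
static bool mmc_cmd_done(int err, int retries, bool card_removed)
{
	return !err || !retries || card_removed;
}
```

Only when this predicate is false does the request starter get a chance to retry; in every other case the done-handling (failure injection, response logging, `mrq->done()`) now runs on a single exit path.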
drivers/mmc/core/mmc.c (+9)
···
790 790 MMC_DEV_ATTR(raw_rpmb_size_mult, "%#x\n", card->ext_csd.raw_rpmb_size_mult);
791 791 MMC_DEV_ATTR(rel_sectors, "%#x\n", card->ext_csd.rel_sectors);
792 792 MMC_DEV_ATTR(ocr, "%08x\n", card->ocr);
793 + MMC_DEV_ATTR(cmdq_en, "%d\n", card->ext_csd.cmdq_en);
793 794
794 795 static ssize_t mmc_fwrev_show(struct device *dev,
795 796 struct device_attribute *attr,
···
846 845 &dev_attr_rel_sectors.attr,
847 846 &dev_attr_ocr.attr,
848 847 &dev_attr_dsr.attr,
848 + &dev_attr_cmdq_en.attr,
849 849 NULL,
850 850 };
851 851 ATTRIBUTE_GROUPS(mmc_std);
···
1788 1786 card->ext_csd.cache_ctrl = 1;
1789 1787 }
1790 1788 }
1789 +
1790 + /*
1791 + * In some cases (e.g. RPMB or mmc_test), the Command Queue must be
1792 + * disabled for a time, so a flag is needed to indicate to re-enable the
1793 + * Command Queue.
1794 + */
1795 + card->reenable_cmdq = card->ext_csd.cmdq_en;
1791 1796
1792 1797 /*
1793 1798 * The mandatory minimum values are defined for packed command.
drivers/mmc/core/mmc_ops.c (+32 -4)
···
305 305 int mmc_send_csd(struct mmc_card *card, u32 *csd)
306 306 {
307 307 int ret, i;
308 - u32 *csd_tmp;
308 + __be32 *csd_tmp;
309 309
310 310 if (!mmc_host_is_spi(card->host))
311 311 return mmc_send_cxd_native(card->host, card->rca << 16,
···
319 319 if (ret)
320 320 goto err;
321 321
322 - for (i = 0;i < 4;i++)
322 + for (i = 0; i < 4; i++)
323 323 csd[i] = be32_to_cpu(csd_tmp[i]);
324 324
325 325 err:
···
330 330 int mmc_send_cid(struct mmc_host *host, u32 *cid)
331 331 {
332 332 int ret, i;
333 - u32 *cid_tmp;
333 + __be32 *cid_tmp;
334 334
335 335 if (!mmc_host_is_spi(host)) {
336 336 if (!host->card)
···
347 347 if (ret)
348 348 goto err;
349 349
350 - for (i = 0;i < 4;i++)
350 + for (i = 0; i < 4; i++)
351 351 cid[i] = be32_to_cpu(cid_tmp[i]);
352 352
353 353 err:
···
838 838 {
839 839 return (card && card->csd.mmca_vsn > CSD_SPEC_VER_3);
840 840 }
841 +
842 + static int mmc_cmdq_switch(struct mmc_card *card, bool enable)
843 + {
844 + u8 val = enable ? EXT_CSD_CMDQ_MODE_ENABLED : 0;
845 + int err;
846 +
847 + if (!card->ext_csd.cmdq_support)
848 + return -EOPNOTSUPP;
849 +
850 + err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_CMDQ_MODE_EN,
851 + val, card->ext_csd.generic_cmd6_time);
852 + if (!err)
853 + card->ext_csd.cmdq_en = enable;
854 +
855 + return err;
856 + }
857 +
858 + int mmc_cmdq_enable(struct mmc_card *card)
859 + {
860 + return mmc_cmdq_switch(card, true);
861 + }
862 + EXPORT_SYMBOL_GPL(mmc_cmdq_enable);
863 +
864 + int mmc_cmdq_disable(struct mmc_card *card)
865 + {
866 + return mmc_cmdq_switch(card, false);
867 + }
868 + EXPORT_SYMBOL_GPL(mmc_cmdq_disable);
drivers/mmc/core/mmc_ops.h (+2)
···
46 46 void mmc_start_bkops(struct mmc_card *card, bool from_exception);
47 47 int mmc_can_reset(struct mmc_card *card);
48 48 int mmc_flush_cache(struct mmc_card *card);
49 + int mmc_cmdq_enable(struct mmc_card *card);
50 + int mmc_cmdq_disable(struct mmc_card *card);
49 51
50 52 #endif
51 53
drivers/mmc/core/mmc_test.c (+14)
···
26 26 #include "card.h"
27 27 #include "host.h"
28 28 #include "bus.h"
29 + #include "mmc_ops.h"
29 30
30 31 #define RESULT_OK 0
31 32 #define RESULT_FAIL 1
···
3265 3264 if (ret)
3266 3265 return ret;
3267 3266
3267 + if (card->ext_csd.cmdq_en) {
3268 + mmc_claim_host(card->host);
3269 + ret = mmc_cmdq_disable(card);
3270 + mmc_release_host(card->host);
3271 + if (ret)
3272 + return ret;
3273 + }
3274 +
3268 3275 dev_info(&card->dev, "Card claimed for testing.\n");
3269 3276
3270 3277 return 0;
···
3280 3271
3281 3272 static void mmc_test_remove(struct mmc_card *card)
3282 3273 {
3274 + if (card->reenable_cmdq) {
3275 + mmc_claim_host(card->host);
3276 + mmc_cmdq_enable(card);
3277 + mmc_release_host(card->host);
3278 + }
3283 3279 mmc_test_free_result(card);
3284 3280 mmc_test_free_dbgfs_file(card);
3285 3281 }
drivers/mmc/core/queue.c (+200 -137)
···
40 40 return BLKPREP_OK;
41 41 }
42 42
43 + struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *mq,
44 + struct request *req)
45 + {
46 + struct mmc_queue_req *mqrq;
47 + int i = ffz(mq->qslots);
48 +
49 + if (i >= mq->qdepth)
50 + return NULL;
51 +
52 + mqrq = &mq->mqrq[i];
53 + WARN_ON(mqrq->req || mq->qcnt >= mq->qdepth ||
54 + test_bit(mqrq->task_id, &mq->qslots));
55 + mqrq->req = req;
56 + mq->qcnt += 1;
57 + __set_bit(mqrq->task_id, &mq->qslots);
58 +
59 + return mqrq;
60 + }
61 +
62 + void mmc_queue_req_free(struct mmc_queue *mq,
63 + struct mmc_queue_req *mqrq)
64 + {
65 + WARN_ON(!mqrq->req || mq->qcnt < 1 ||
66 + !test_bit(mqrq->task_id, &mq->qslots));
67 + mqrq->req = NULL;
68 + mq->qcnt -= 1;
69 + __clear_bit(mqrq->task_id, &mq->qslots);
70 + }
71 +
43 72 static int mmc_queue_thread(void *d)
44 73 {
45 74 struct mmc_queue *mq = d;
···
79 50
80 51 down(&mq->thread_sem);
81 52 do {
82 - struct request *req = NULL;
53 + struct request *req;
83 54
84 55 spin_lock_irq(q->queue_lock);
85 56 set_current_state(TASK_INTERRUPTIBLE);
···
92 63 * Dispatch queue is empty so set flags for
93 64 * mmc_request_fn() to wake us up.
94 65 */
95 - if (mq->mqrq_prev->req)
66 + if (mq->qcnt)
96 67 cntx->is_waiting_last_req = true;
97 68 else
98 69 mq->asleep = true;
99 70 }
100 - mq->mqrq_cur->req = req;
101 71 spin_unlock_irq(q->queue_lock);
102 72
103 - if (req || mq->mqrq_prev->req) {
104 - bool req_is_special = mmc_req_is_special(req);
105 -
73 + if (req || mq->qcnt) {
106 74 set_current_state(TASK_RUNNING);
107 75 mmc_blk_issue_rq(mq, req);
108 76 cond_resched();
109 - if (mq->new_request) {
110 - mq->new_request = false;
111 - continue; /* fetch again */
112 - }
113 -
114 - /*
115 - * Current request becomes previous request
116 - * and vice versa.
117 - * In case of special requests, current request
118 - * has been finished. Do not assign it to previous
119 - * request.
120 - */
121 - if (req_is_special)
122 - mq->mqrq_cur->req = NULL;
123 -
124 - mq->mqrq_prev->brq.mrq.data = NULL;
125 - mq->mqrq_prev->req = NULL;
126 - swap(mq->mqrq_prev, mq->mqrq_cur);
127 77 } else {
128 78 if (kthread_should_stop()) {
129 79 set_current_state(TASK_RUNNING);
···
149 141 wake_up_process(mq->thread);
150 142 }
151 143
152 - static struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
144 + static struct scatterlist *mmc_alloc_sg(int sg_len)
153 145 {
154 146 struct scatterlist *sg;
155 147
156 148 sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL);
157 - if (!sg)
158 - *err = -ENOMEM;
159 - else {
160 - *err = 0;
149 + if (sg)
161 150 sg_init_table(sg, sg_len);
162 - }
163 151
164 152 return sg;
165 153 }
···
179 175 queue_flag_set_unlocked(QUEUE_FLAG_SECERASE, q);
180 176 }
181 177
182 - #ifdef CONFIG_MMC_BLOCK_BOUNCE
183 - static bool mmc_queue_alloc_bounce_bufs(struct mmc_queue *mq,
184 - unsigned int bouncesz)
185 - {
186 - int i;
187 -
188 - for (i = 0; i < mq->qdepth; i++) {
189 - mq->mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
190 - if (!mq->mqrq[i].bounce_buf)
191 - goto out_err;
192 - }
193 -
194 - return true;
195 -
196 - out_err:
197 - while (--i >= 0) {
198 - kfree(mq->mqrq[i].bounce_buf);
199 - mq->mqrq[i].bounce_buf = NULL;
200 - }
201 - pr_warn("%s: unable to allocate bounce buffers\n",
202 - mmc_card_name(mq->card));
203 - return false;
204 - }
205 -
206 - static int mmc_queue_alloc_bounce_sgs(struct mmc_queue *mq,
207 - unsigned int bouncesz)
208 - {
209 - int i, ret;
210 -
211 - for (i = 0; i < mq->qdepth; i++) {
212 - mq->mqrq[i].sg = mmc_alloc_sg(1, &ret);
213 - if (ret)
214 - return ret;
215 -
216 - mq->mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512, &ret);
217 - if (ret)
218 - return ret;
219 - }
220 -
221 - return 0;
222 - }
223 - #endif
224 -
225 - static int mmc_queue_alloc_sgs(struct mmc_queue *mq, int max_segs)
226 - {
227 - int i, ret;
228 -
229 - for (i = 0; i < mq->qdepth; i++) {
230 - mq->mqrq[i].sg = mmc_alloc_sg(max_segs, &ret);
231 - if (ret)
232 - return ret;
233 - }
234 -
235 - return 0;
236 - }
237 -
238 178 static void mmc_queue_req_free_bufs(struct mmc_queue_req *mqrq)
239 179 {
240 180 kfree(mqrq->bounce_sg);
···
191 243 mqrq->bounce_buf = NULL;
192 244 }
193 245
194 - static void mmc_queue_reqs_free_bufs(struct mmc_queue *mq)
246 + static void mmc_queue_reqs_free_bufs(struct mmc_queue_req *mqrq, int qdepth)
195 247 {
196 248 int i;
197 249
198 - for (i = 0; i < mq->qdepth; i++)
199 - mmc_queue_req_free_bufs(&mq->mqrq[i]);
250 + for (i = 0; i < qdepth; i++)
251 + mmc_queue_req_free_bufs(&mqrq[i]);
252 + }
253 +
254 + static void mmc_queue_free_mqrqs(struct mmc_queue_req *mqrq, int qdepth)
255 + {
256 + mmc_queue_reqs_free_bufs(mqrq, qdepth);
257 + kfree(mqrq);
258 + }
259 +
260 + static struct mmc_queue_req *mmc_queue_alloc_mqrqs(int qdepth)
261 + {
262 + struct mmc_queue_req *mqrq;
263 + int i;
264 +
265 + mqrq = kcalloc(qdepth, sizeof(*mqrq), GFP_KERNEL);
266 + if (mqrq) {
267 + for (i = 0; i < qdepth; i++)
268 + mqrq[i].task_id = i;
269 + }
270 +
271 + return mqrq;
272 + }
273 +
274 + #ifdef CONFIG_MMC_BLOCK_BOUNCE
275 + static int mmc_queue_alloc_bounce_bufs(struct mmc_queue_req *mqrq, int qdepth,
276 + unsigned int bouncesz)
277 + {
278 + int i;
279 +
280 + for (i = 0; i < qdepth; i++) {
281 + mqrq[i].bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
282 + if (!mqrq[i].bounce_buf)
283 + return -ENOMEM;
284 +
285 + mqrq[i].sg = mmc_alloc_sg(1);
286 + if (!mqrq[i].sg)
287 + return -ENOMEM;
288 +
289 + mqrq[i].bounce_sg = mmc_alloc_sg(bouncesz / 512);
290 + if (!mqrq[i].bounce_sg)
291 + return -ENOMEM;
292 + }
293 +
294 + return 0;
295 + }
296 +
297 + static bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq, int qdepth,
298 + unsigned int bouncesz)
299 + {
300 + int ret;
301 +
302 + ret = mmc_queue_alloc_bounce_bufs(mqrq, qdepth, bouncesz);
303 + if (ret)
304 + mmc_queue_reqs_free_bufs(mqrq, qdepth);
305 +
306 + return !ret;
307 + }
308 +
309 + static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
310 + {
311 + unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;
312 +
313 + if (host->max_segs != 1)
314 + return 0;
315 +
316 + if (bouncesz > host->max_req_size)
317 + bouncesz = host->max_req_size;
318 + if (bouncesz > host->max_seg_size)
319 + bouncesz = host->max_seg_size;
320 + if (bouncesz > host->max_blk_count * 512)
321 + bouncesz = host->max_blk_count * 512;
322 +
323 + if (bouncesz <= 512)
324 + return 0;
325 +
326 + return bouncesz;
327 + }
328 + #else
329 + static inline bool mmc_queue_alloc_bounce(struct mmc_queue_req *mqrq,
330 + int qdepth, unsigned int bouncesz)
331 + {
332 + return false;
333 + }
334 +
335 + static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
336 + {
337 + return 0;
338 + }
339 + #endif
340 +
341 + static int mmc_queue_alloc_sgs(struct mmc_queue_req *mqrq, int qdepth,
342 + int max_segs)
343 + {
344 + int i;
345 +
346 + for (i = 0; i < qdepth; i++) {
347 + mqrq[i].sg = mmc_alloc_sg(max_segs);
348 + if (!mqrq[i].sg)
349 + return -ENOMEM;
350 + }
351 +
352 + return 0;
353 + }
354 +
355 + void mmc_queue_free_shared_queue(struct mmc_card *card)
356 + {
357 + if (card->mqrq) {
358 + mmc_queue_free_mqrqs(card->mqrq, card->qdepth);
359 + card->mqrq = NULL;
360 + }
361 + }
362 +
363 + static int __mmc_queue_alloc_shared_queue(struct mmc_card *card, int qdepth)
364 + {
365 + struct mmc_host *host = card->host;
366 + struct mmc_queue_req *mqrq;
367 + unsigned int bouncesz;
368 + int ret = 0;
369 +
370 + if (card->mqrq)
371 + return -EINVAL;
372 +
373 + mqrq = mmc_queue_alloc_mqrqs(qdepth);
374 + if (!mqrq)
375 + return -ENOMEM;
376 +
377 + card->mqrq = mqrq;
378 + card->qdepth = qdepth;
379 +
380 + bouncesz = mmc_queue_calc_bouncesz(host);
381 +
382 + if (bouncesz && !mmc_queue_alloc_bounce(mqrq, qdepth, bouncesz)) {
383 + bouncesz = 0;
384 + pr_warn("%s: unable to allocate bounce buffers\n",
385 + mmc_card_name(card));
386 + }
387 +
388 + card->bouncesz = bouncesz;
389 +
390 + if (!bouncesz) {
391 + ret = mmc_queue_alloc_sgs(mqrq, qdepth, host->max_segs);
392 + if (ret)
393 + goto out_err;
394 + }
395 +
396 + return ret;
397 +
398 + out_err:
399 + mmc_queue_free_shared_queue(card);
400 + return ret;
401 + }
402 +
403 + int mmc_queue_alloc_shared_queue(struct mmc_card *card)
404 + {
405 + return __mmc_queue_alloc_shared_queue(card, 2);
200 406 }
201 407
202 408 /**
···
367 265 {
368 266 struct mmc_host *host = card->host;
369 267 u64 limit = BLK_BOUNCE_HIGH;
370 - bool bounce = false;
371 268 int ret = -ENOMEM;
372 269
373 270 if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
···
377 276 if (!mq->queue)
378 277 return -ENOMEM;
379 278
380 - mq->qdepth = 2;
381 - mq->mqrq = kcalloc(mq->qdepth, sizeof(struct mmc_queue_req),
382 - GFP_KERNEL);
383 - if (!mq->mqrq)
384 - goto blk_cleanup;
385 - mq->mqrq_cur = &mq->mqrq[0];
386 - mq->mqrq_prev = &mq->mqrq[1];
279 + mq->mqrq = card->mqrq;
280 + mq->qdepth = card->qdepth;
387 281 mq->queue->queuedata = mq;
388 282
389 283 blk_queue_prep_rq(mq->queue, mmc_prep_request);
···
387 291 if (mmc_can_erase(card))
388 292 mmc_queue_setup_discard(mq->queue, card);
389 293
390 - #ifdef CONFIG_MMC_BLOCK_BOUNCE
391 - if (host->max_segs == 1) {
392 - unsigned int bouncesz;
393 -
394 - bouncesz = MMC_QUEUE_BOUNCESZ;
395 -
396 - if (bouncesz > host->max_req_size)
397 - bouncesz = host->max_req_size;
398 - if (bouncesz > host->max_seg_size)
399 - bouncesz = host->max_seg_size;
400 - if (bouncesz > (host->max_blk_count * 512))
401 - bouncesz = host->max_blk_count * 512;
402 -
403 - if (bouncesz > 512 &&
404 - mmc_queue_alloc_bounce_bufs(mq, bouncesz)) {
405 - blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
406 - blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
407 - blk_queue_max_segments(mq->queue, bouncesz / 512);
408 - blk_queue_max_segment_size(mq->queue, bouncesz);
409 -
410 - ret = mmc_queue_alloc_bounce_sgs(mq, bouncesz);
411 - if (ret)
412 - goto cleanup_queue;
413 - bounce = true;
414 - }
415 - }
416 - #endif
417 -
418 - if (!bounce) {
294 + if (card->bouncesz) {
295 + blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
296 + blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512);
297 + blk_queue_max_segments(mq->queue, card->bouncesz / 512);
298 + blk_queue_max_segment_size(mq->queue, card->bouncesz);
299 + } else {
419 300 blk_queue_bounce_limit(mq->queue, limit);
420 301 blk_queue_max_hw_sectors(mq->queue,
421 302 min(host->max_blk_count, host->max_req_size / 512));
422 303 blk_queue_max_segments(mq->queue, host->max_segs);
423 304 blk_queue_max_segment_size(mq->queue, host->max_seg_size);
424 -
425 - ret = mmc_queue_alloc_sgs(mq, host->max_segs);
426 - if (ret)
427 - goto cleanup_queue;
428 305 }
429 306
430 307 sema_init(&mq->thread_sem, 1);
···
412 343
413 344 return 0;
414 345
415 - cleanup_queue:
416 - mmc_queue_reqs_free_bufs(mq);
417 - kfree(mq->mqrq);
346 + cleanup_queue:
418 347 mq->mqrq = NULL;
419 - blk_cleanup:
420 348 blk_cleanup_queue(mq->queue);
421 349 return ret;
422 350 }
···
435 369 blk_start_queue(q);
436 370 spin_unlock_irqrestore(q->queue_lock, flags);
437 371
438 - mmc_queue_reqs_free_bufs(mq);
439 - kfree(mq->mqrq);
440 372 mq->mqrq = NULL;
441 -
442 373 mq->card = NULL;
443 374 }
444 375 EXPORT_SYMBOL(mmc_cleanup_queue);
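The new `mmc_queue_req_find()`/`mmc_queue_req_free()` pair above manages request slots with a bitmap: `ffz()` finds the first zero bit (a free slot), which is set on claim and cleared on release. A minimal userspace sketch of the same bookkeeping, using `__builtin_ctzl` on the inverted mask in place of the kernel's `ffz()`/`__set_bit()`/`__clear_bit()` (names and the assumption that no bits above `qdepth` are ever set are mine, not kernel API):

```c
/*
 * Claim the lowest free slot in a bitmap of in-use slots, or return -1
 * if all qdepth slots are taken. Assumes bits >= qdepth stay clear, so
 * ~*slots is never 0 (ctzl(0) would be undefined).
 */
static int slot_claim(unsigned long *slots, int qdepth)
{
	int i = __builtin_ctzl(~*slots);	/* first zero bit, like ffz() */

	if (i >= qdepth)
		return -1;			/* queue full */
	*slots |= 1UL << i;			/* like __set_bit() */
	return i;
}

static void slot_release(unsigned long *slots, int i)
{
	*slots &= ~(1UL << i);			/* like __clear_bit() */
}
```

With qdepth fixed at 2 for now, this behaves exactly like the old cur/prev pair, but generalizes to deeper queues for the planned CMDQ support.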
drivers/mmc/core/queue.h (+9 -3)
···
34 34 struct scatterlist *bounce_sg;
35 35 unsigned int bounce_sg_len;
36 36 struct mmc_async_req areq;
37 + int task_id;
37 38 };
38 39
39 40 struct mmc_queue {
40 41 struct mmc_card *card;
41 42 struct task_struct *thread;
42 43 struct semaphore thread_sem;
43 - bool new_request;
44 44 bool suspended;
45 45 bool asleep;
46 46 struct mmc_blk_data *blkdata;
47 47 struct request_queue *queue;
48 48 struct mmc_queue_req *mqrq;
49 - struct mmc_queue_req *mqrq_cur;
50 - struct mmc_queue_req *mqrq_prev;
51 49 int qdepth;
50 + int qcnt;
51 + unsigned long qslots;
52 52 };
53 53
54 + extern int mmc_queue_alloc_shared_queue(struct mmc_card *card);
55 + extern void mmc_queue_free_shared_queue(struct mmc_card *card);
54 56 extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
55 57 const char *);
56 58 extern void mmc_cleanup_queue(struct mmc_queue *);
···
65 63 extern void mmc_queue_bounce_post(struct mmc_queue_req *);
66 64
67 65 extern int mmc_access_rpmb(struct mmc_queue *);
66 +
67 + extern struct mmc_queue_req *mmc_queue_req_find(struct mmc_queue *,
68 + struct request *);
69 + extern void mmc_queue_req_free(struct mmc_queue *, struct mmc_queue_req *);
68 70
69 71 #endif
drivers/mmc/core/sd.c (+2 -2)
···
225 225 static int mmc_read_ssr(struct mmc_card *card)
226 226 {
227 227 unsigned int au, es, et, eo;
228 - u32 *raw_ssr;
228 + __be32 *raw_ssr;
229 229 int i;
230 230
231 231 if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
···
853 853 /*
854 854 * Fetch SCR from card.
855 855 */
856 - err = mmc_app_send_scr(card, card->raw_scr);
856 + err = mmc_app_send_scr(card);
857 857 if (err)
858 858 return err;
859 859
+9 -10
drivers/mmc/core/sd_ops.c
···
 	return 0;
 }

- int mmc_app_send_scr(struct mmc_card *card, u32 *scr)
+ int mmc_app_send_scr(struct mmc_card *card)
 {
 	int err;
 	struct mmc_request mrq = {};
 	struct mmc_command cmd = {};
 	struct mmc_data data = {};
 	struct scatterlist sg;
-	void *data_buf;
+	__be32 *scr;

 	/* NOTE: caller guarantees scr is heap-allocated */
···
 	/* dma onto stack is unsafe/nonportable, but callers to this
 	 * routine normally provide temporary on-stack buffers ...
 	 */
-	data_buf = kmalloc(sizeof(card->raw_scr), GFP_KERNEL);
-	if (data_buf == NULL)
+	scr = kmalloc(sizeof(card->raw_scr), GFP_KERNEL);
+	if (!scr)
 		return -ENOMEM;

 	mrq.cmd = &cmd;
···
 	data.sg = &sg;
 	data.sg_len = 1;

-	sg_init_one(&sg, data_buf, 8);
+	sg_init_one(&sg, scr, 8);

 	mmc_set_data_timeout(&data, card);

 	mmc_wait_for_req(card->host, &mrq);

-	memcpy(scr, data_buf, sizeof(card->raw_scr));
-	kfree(data_buf);
+	card->raw_scr[0] = be32_to_cpu(scr[0]);
+	card->raw_scr[1] = be32_to_cpu(scr[1]);
+
+	kfree(scr);

 	if (cmd.error)
 		return cmd.error;
 	if (data.error)
 		return data.error;
-
-	scr[0] = be32_to_cpu(scr[0]);
-	scr[1] = be32_to_cpu(scr[1]);

 	return 0;
 }
+1 -1
drivers/mmc/core/sd_ops.h
···
 int mmc_send_app_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr);
 int mmc_send_if_cond(struct mmc_host *host, u32 ocr);
 int mmc_send_relative_addr(struct mmc_host *host, unsigned int *rca);
- int mmc_app_send_scr(struct mmc_card *card, u32 *scr);
+ int mmc_app_send_scr(struct mmc_card *card);
 int mmc_sd_switch(struct mmc_card *card, int mode, int group,
 		  u8 value, u8 *resp);
 int mmc_app_sd_status(struct mmc_card *card, void *ssr);
+21 -33
drivers/mmc/core/sdio_io.c
···
 	u8 val;

 	if (!func) {
-		*err_ret = -EINVAL;
+		if (err_ret)
+			*err_ret = -EINVAL;
 		return 0xFF;
 	}
-
-	if (err_ret)
-		*err_ret = 0;

 	ret = mmc_io_rw_direct(func->card, 0, func->num, addr, 0, &val);
-	if (ret) {
-		if (err_ret)
-			*err_ret = ret;
+	if (err_ret)
+		*err_ret = ret;
+	if (ret)
 		return 0xFF;
-	}

 	return val;
 }
···
 	int ret;

 	if (!func) {
-		*err_ret = -EINVAL;
+		if (err_ret)
+			*err_ret = -EINVAL;
 		return;
 	}

···
 	if (err_ret)
 		*err_ret = ret;
 	if (ret)
-		val = 0xff;
+		return 0xff;

 	return val;
 }
···
 {
 	int ret;

-	if (err_ret)
-		*err_ret = 0;
-
 	ret = sdio_memcpy_fromio(func, func->tmpbuf, addr, 2);
-	if (ret) {
-		if (err_ret)
-			*err_ret = ret;
+	if (err_ret)
+		*err_ret = ret;
+	if (ret)
 		return 0xFFFF;
-	}

 	return le16_to_cpup((__le16 *)func->tmpbuf);
 }
···
 {
 	int ret;

-	if (err_ret)
-		*err_ret = 0;
-
 	ret = sdio_memcpy_fromio(func, func->tmpbuf, addr, 4);
-	if (ret) {
-		if (err_ret)
-			*err_ret = ret;
+	if (err_ret)
+		*err_ret = ret;
+	if (ret)
 		return 0xFFFFFFFF;
-	}

 	return le32_to_cpup((__le32 *)func->tmpbuf);
 }
···
 	unsigned char val;

 	if (!func) {
-		*err_ret = -EINVAL;
+		if (err_ret)
+			*err_ret = -EINVAL;
 		return 0xFF;
 	}
-
-	if (err_ret)
-		*err_ret = 0;

 	ret = mmc_io_rw_direct(func->card, 0, 0, addr, 0, &val);
-	if (ret) {
-		if (err_ret)
-			*err_ret = ret;
+	if (err_ret)
+		*err_ret = ret;
+	if (ret)
 		return 0xFF;
-	}

 	return val;
 }
···
 	int ret;

 	if (!func) {
-		*err_ret = -EINVAL;
+		if (err_ret)
+			*err_ret = -EINVAL;
 		return;
 	}

+4 -5
drivers/mmc/core/sdio_ops.c
···
 	data.flags = write ? MMC_DATA_WRITE : MMC_DATA_READ;

 	left_size = data.blksz * data.blocks;
-	nents = (left_size - 1) / seg_size + 1;
+	nents = DIV_ROUND_UP(left_size, seg_size);
 	if (nents > 1) {
 		if (sg_alloc_table(&sgtable, nents, GFP_KERNEL))
 			return -ENOMEM;
···
 		data.sg_len = nents;

 		for_each_sg(data.sg, sg_ptr, data.sg_len, i) {
-			sg_set_page(sg_ptr, virt_to_page(buf + (i * seg_size)),
-				    min(seg_size, left_size),
-				    offset_in_page(buf + (i * seg_size)));
-			left_size = left_size - seg_size;
+			sg_set_buf(sg_ptr, buf + i * seg_size,
+				   min(seg_size, left_size));
+			left_size -= seg_size;
 		}
 	} else {
 		data.sg = &sg;
+8 -2
drivers/mmc/core/sdio_ops.h
···
 int sdio_reset(struct mmc_host *host);
 unsigned int mmc_align_data_size(struct mmc_card *card, unsigned int sz);

- static inline bool mmc_is_io_op(u32 opcode)
+ static inline bool sdio_is_io_busy(u32 opcode, u32 arg)
 {
-	return opcode == SD_IO_RW_DIRECT || opcode == SD_IO_RW_EXTENDED;
+	u32 addr;
+
+	addr = (arg >> 9) & 0x1FFFF;
+
+	return (opcode == SD_IO_RW_EXTENDED ||
+		(opcode == SD_IO_RW_DIRECT &&
+		 !(addr == SDIO_CCCR_ABORT || addr == SDIO_CCCR_SUSPEND)));
 }

 #endif
+43
drivers/mmc/host/Kconfig
···
 	help
 	  If you say yes here SD-Cards may work on the EZkit.

+ config MMC_CAVIUM_OCTEON
+	tristate "Cavium OCTEON SD/MMC Card Interface support"
+	depends on CAVIUM_OCTEON_SOC
+	help
+	  This selects the Cavium OCTEON SD/MMC card interface.
+	  If you have an OCTEON board with a Multimedia Card slot,
+	  say Y or M here.
+
+	  If unsure, say N.
+
+ config MMC_CAVIUM_THUNDERX
+	tristate "Cavium ThunderX SD/MMC Card Interface support"
+	depends on PCI && 64BIT && (ARM64 || COMPILE_TEST)
+	depends on GPIOLIB
+	depends on OF_ADDRESS
+	help
+	  This selects the Cavium ThunderX SD/MMC card interface.
+	  If you have a Cavium ARM64 board with a Multimedia Card slot
+	  or built-in eMMC chip, say Y or M here. If built as a module,
+	  the module will be called thunderx_mmc.ko.
+
 config MMC_DW
 	tristate "Synopsys DesignWare Memory Card Interface"
 	depends on HAS_DMA
···
 	depends on PCI
 	help

+ config MMC_BCM2835
+	tristate "Broadcom BCM2835 SDHOST MMC Controller support"
+	depends on ARCH_BCM2835 || COMPILE_TEST
+	depends on HAS_DMA
+	help
+	  This selects the BCM2835 SDHOST MMC controller. If you have
+	  a BCM2835 platform with SD or MMC devices, say Y or M here.
+
+	  Note that the BCM2835 has two SD controllers: the Arasan
+	  sdhci controller (supported by MMC_SDHCI_IPROC) and a custom
+	  sdhost controller (supported by this driver).
+
+	  If unsure, say N.
+
 config MMC_MTK
 	tristate "MediaTek SD/MMC Card Interface support"
 	depends on HAS_DMA
···
 	  Broadcom STB SoCs.

 	  If unsure, say Y.
+
+ config MMC_SDHCI_XENON
+	tristate "Marvell Xenon eMMC/SD/SDIO SDHCI driver"
+	depends on MMC_SDHCI_PLTFM
+	help
+	  This selects the Marvell Xenon eMMC/SD/SDIO SDHCI controller.
+	  If you have a controller with this interface, say Y or M here.
+	  If unsure, say N.
+8
drivers/mmc/host/Makefile
···
 obj-$(CONFIG_MMC_CB710)		+= cb710-mmc.o
 obj-$(CONFIG_MMC_VIA_SDMMC)	+= via-sdmmc.o
 obj-$(CONFIG_SDH_BFIN)		+= bfin_sdh.o
+ octeon-mmc-objs := cavium.o cavium-octeon.o
+ obj-$(CONFIG_MMC_CAVIUM_OCTEON) += octeon-mmc.o
+ thunderx-mmc-objs := cavium.o cavium-thunderx.o
+ obj-$(CONFIG_MMC_CAVIUM_THUNDERX) += thunderx-mmc.o
 obj-$(CONFIG_MMC_DW)		+= dw_mmc.o
 obj-$(CONFIG_MMC_DW_PLTFM)	+= dw_mmc-pltfm.o
 obj-$(CONFIG_MMC_DW_EXYNOS)	+= dw_mmc-exynos.o
···
 obj-$(CONFIG_MMC_SUNXI)		+= sunxi-mmc.o
 obj-$(CONFIG_MMC_USDHI6ROL0)	+= usdhi6rol0.o
 obj-$(CONFIG_MMC_TOSHIBA_PCI)	+= toshsd.o
+ obj-$(CONFIG_MMC_BCM2835)	+= bcm2835.o

 obj-$(CONFIG_MMC_REALTEK_PCI)	+= rtsx_pci_sdmmc.o
 obj-$(CONFIG_MMC_REALTEK_USB)	+= rtsx_usb_sdmmc.o
···
 ifeq ($(CONFIG_CB710_DEBUG),y)
 	CFLAGS-cb710-mmc	+= -DDEBUG
 endif
+
+ obj-$(CONFIG_MMC_SDHCI_XENON)	+= sdhci-xenon-driver.o
+ sdhci-xenon-driver-y		+= sdhci-xenon.o sdhci-xenon-phy.o
+2 -8
drivers/mmc/host/android-goldfish.c
···
 	if (host->dma_in_use) {
 		enum dma_data_direction dma_data_dir;

-		if (data->flags & MMC_DATA_WRITE)
-			dma_data_dir = DMA_TO_DEVICE;
-		else
-			dma_data_dir = DMA_FROM_DEVICE;
+		dma_data_dir = mmc_get_dma_dir(data);

 		if (dma_data_dir == DMA_FROM_DEVICE) {
 			/*
···
 	 */
 	sg_len = (data->blocks == 1) ? 1 : data->sg_len;

-	if (data->flags & MMC_DATA_WRITE)
-		dma_data_dir = DMA_TO_DEVICE;
-	else
-		dma_data_dir = DMA_FROM_DEVICE;
+	dma_data_dir = mmc_get_dma_dir(data);

 	host->sg_len = dma_map_sg(mmc_dev(host->mmc), data->sg,
 				  sg_len, dma_data_dir);
+11 -19
drivers/mmc/host/atmel-mci.c
···
 	if (data)
 		dma_unmap_sg(&host->pdev->dev,
 			     data->sg, data->sg_len,
-			     ((data->flags & MMC_DATA_WRITE)
-			      ? DMA_TO_DEVICE : DMA_FROM_DEVICE));
+			     mmc_get_dma_dir(data));
 }

 /*
···
 	if (data)
 		dma_unmap_sg(host->dma.chan->device->dev,
 			     data->sg, data->sg_len,
-			     ((data->flags & MMC_DATA_WRITE)
-			      ? DMA_TO_DEVICE : DMA_FROM_DEVICE));
+			     mmc_get_dma_dir(data));
 }

 /*
···
 {
 	u32 iflags, tmp;
 	unsigned int sg_len;
-	enum dma_data_direction dir;
 	int i;

 	data->error = -EINPROGRESS;
···
 	/* Enable pdc mode */
 	atmci_writel(host, ATMCI_MR, host->mode_reg | ATMCI_MR_PDCMODE);

-	if (data->flags & MMC_DATA_READ) {
-		dir = DMA_FROM_DEVICE;
+	if (data->flags & MMC_DATA_READ)
 		iflags |= ATMCI_ENDRX | ATMCI_RXBUFF;
-	} else {
-		dir = DMA_TO_DEVICE;
+	else
 		iflags |= ATMCI_ENDTX | ATMCI_TXBUFE | ATMCI_BLKE;
-	}

 	/* Set BLKLEN */
 	tmp = atmci_readl(host, ATMCI_MR);
···
 	/* Configure PDC */
 	host->data_size = data->blocks * data->blksz;
-	sg_len = dma_map_sg(&host->pdev->dev, data->sg, data->sg_len, dir);
+	sg_len = dma_map_sg(&host->pdev->dev, data->sg, data->sg_len,
+			    mmc_get_dma_dir(data));

 	if ((!host->caps.has_rwproof)
 	    && (host->data->flags & MMC_DATA_WRITE)) {
···
 	}

 	if (host->data_size)
-		atmci_pdc_set_both_buf(host,
-			((dir == DMA_FROM_DEVICE) ? XFER_RECEIVE : XFER_TRANSMIT));
-
+		atmci_pdc_set_both_buf(host, data->flags & MMC_DATA_READ ?
+				       XFER_RECEIVE : XFER_TRANSMIT);
 	return iflags;
 }
···
 	struct dma_async_tx_descriptor *desc;
 	struct scatterlist *sg;
 	unsigned int i;
-	enum dma_data_direction direction;
 	enum dma_transfer_direction slave_dirn;
 	unsigned int sglen;
 	u32 maxburst;
···
 		return -ENODEV;

 	if (data->flags & MMC_DATA_READ) {
-		direction = DMA_FROM_DEVICE;
 		host->dma_conf.direction = slave_dirn = DMA_DEV_TO_MEM;
 		maxburst = atmci_convert_chksize(host,
 						 host->dma_conf.src_maxburst);
 	} else {
-		direction = DMA_TO_DEVICE;
 		host->dma_conf.direction = slave_dirn = DMA_MEM_TO_DEV;
 		maxburst = atmci_convert_chksize(host,
 						 host->dma_conf.dst_maxburst);
···
 			ATMCI_DMAEN);

 	sglen = dma_map_sg(chan->device->dev, data->sg,
-			   data->sg_len, direction);
+			   data->sg_len, mmc_get_dma_dir(data));

 	dmaengine_slave_config(chan, &host->dma_conf);
 	desc = dmaengine_prep_slave_sg(chan,
···

 	return iflags;
 unmap_exit:
-	dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, direction);
+	dma_unmap_sg(chan->device->dev, data->sg, data->sg_len,
+		     mmc_get_dma_dir(data));
 	return -ENOMEM;
 }
+1466
drivers/mmc/host/bcm2835.c
··· 1 + /* 2 + * bcm2835 sdhost driver. 3 + * 4 + * The 2835 has two SD controllers: The Arasan sdhci controller 5 + * (supported by the iproc driver) and a custom sdhost controller 6 + * (supported by this driver). 7 + * 8 + * The sdhci controller supports both sdcard and sdio. The sdhost 9 + * controller supports the sdcard only, but has better performance. 10 + * Also note that the rpi3 has sdio wifi, so driving the sdcard with 11 + * the sdhost controller allows to use the sdhci controller for wifi 12 + * support. 13 + * 14 + * The configuration is done by devicetree via pin muxing. Both 15 + * SD controller are available on the same pins (2 pin groups = pin 22 16 + * to 27 + pin 48 to 53). So it's possible to use both SD controllers 17 + * at the same time with different pin groups. 18 + * 19 + * Author: Phil Elwell <phil@raspberrypi.org> 20 + * Copyright (C) 2015-2016 Raspberry Pi (Trading) Ltd. 21 + * 22 + * Based on 23 + * mmc-bcm2835.c by Gellert Weisz 24 + * which is, in turn, based on 25 + * sdhci-bcm2708.c by Broadcom 26 + * sdhci-bcm2835.c by Stephen Warren and Oleksandr Tymoshenko 27 + * sdhci.c and sdhci-pci.c by Pierre Ossman 28 + * 29 + * This program is free software; you can redistribute it and/or modify it 30 + * under the terms and conditions of the GNU General Public License, 31 + * version 2, as published by the Free Software Foundation. 32 + * 33 + * This program is distributed in the hope it will be useful, but WITHOUT 34 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 35 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 36 + * more details. 37 + * 38 + * You should have received a copy of the GNU General Public License 39 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 
40 + */ 41 + #include <linux/clk.h> 42 + #include <linux/delay.h> 43 + #include <linux/device.h> 44 + #include <linux/dmaengine.h> 45 + #include <linux/dma-mapping.h> 46 + #include <linux/err.h> 47 + #include <linux/highmem.h> 48 + #include <linux/interrupt.h> 49 + #include <linux/io.h> 50 + #include <linux/iopoll.h> 51 + #include <linux/module.h> 52 + #include <linux/of_address.h> 53 + #include <linux/of_irq.h> 54 + #include <linux/platform_device.h> 55 + #include <linux/scatterlist.h> 56 + #include <linux/time.h> 57 + #include <linux/workqueue.h> 58 + 59 + #include <linux/mmc/host.h> 60 + #include <linux/mmc/mmc.h> 61 + #include <linux/mmc/sd.h> 62 + 63 + #define SDCMD 0x00 /* Command to SD card - 16 R/W */ 64 + #define SDARG 0x04 /* Argument to SD card - 32 R/W */ 65 + #define SDTOUT 0x08 /* Start value for timeout counter - 32 R/W */ 66 + #define SDCDIV 0x0c /* Start value for clock divider - 11 R/W */ 67 + #define SDRSP0 0x10 /* SD card response (31:0) - 32 R */ 68 + #define SDRSP1 0x14 /* SD card response (63:32) - 32 R */ 69 + #define SDRSP2 0x18 /* SD card response (95:64) - 32 R */ 70 + #define SDRSP3 0x1c /* SD card response (127:96) - 32 R */ 71 + #define SDHSTS 0x20 /* SD host status - 11 R/W */ 72 + #define SDVDD 0x30 /* SD card power control - 1 R/W */ 73 + #define SDEDM 0x34 /* Emergency Debug Mode - 13 R/W */ 74 + #define SDHCFG 0x38 /* Host configuration - 2 R/W */ 75 + #define SDHBCT 0x3c /* Host byte count (debug) - 32 R/W */ 76 + #define SDDATA 0x40 /* Data to/from SD card - 32 R/W */ 77 + #define SDHBLC 0x50 /* Host block count (SDIO/SDHC) - 9 R/W */ 78 + 79 + #define SDCMD_NEW_FLAG 0x8000 80 + #define SDCMD_FAIL_FLAG 0x4000 81 + #define SDCMD_BUSYWAIT 0x800 82 + #define SDCMD_NO_RESPONSE 0x400 83 + #define SDCMD_LONG_RESPONSE 0x200 84 + #define SDCMD_WRITE_CMD 0x80 85 + #define SDCMD_READ_CMD 0x40 86 + #define SDCMD_CMD_MASK 0x3f 87 + 88 + #define SDCDIV_MAX_CDIV 0x7ff 89 + 90 + #define SDHSTS_BUSY_IRPT 0x400 91 + #define SDHSTS_BLOCK_IRPT 
0x200 92 + #define SDHSTS_SDIO_IRPT 0x100 93 + #define SDHSTS_REW_TIME_OUT 0x80 94 + #define SDHSTS_CMD_TIME_OUT 0x40 95 + #define SDHSTS_CRC16_ERROR 0x20 96 + #define SDHSTS_CRC7_ERROR 0x10 97 + #define SDHSTS_FIFO_ERROR 0x08 98 + /* Reserved */ 99 + /* Reserved */ 100 + #define SDHSTS_DATA_FLAG 0x01 101 + 102 + #define SDHSTS_TRANSFER_ERROR_MASK (SDHSTS_CRC7_ERROR | \ 103 + SDHSTS_CRC16_ERROR | \ 104 + SDHSTS_REW_TIME_OUT | \ 105 + SDHSTS_FIFO_ERROR) 106 + 107 + #define SDHSTS_ERROR_MASK (SDHSTS_CMD_TIME_OUT | \ 108 + SDHSTS_TRANSFER_ERROR_MASK) 109 + 110 + #define SDHCFG_BUSY_IRPT_EN BIT(10) 111 + #define SDHCFG_BLOCK_IRPT_EN BIT(8) 112 + #define SDHCFG_SDIO_IRPT_EN BIT(5) 113 + #define SDHCFG_DATA_IRPT_EN BIT(4) 114 + #define SDHCFG_SLOW_CARD BIT(3) 115 + #define SDHCFG_WIDE_EXT_BUS BIT(2) 116 + #define SDHCFG_WIDE_INT_BUS BIT(1) 117 + #define SDHCFG_REL_CMD_LINE BIT(0) 118 + 119 + #define SDVDD_POWER_OFF 0 120 + #define SDVDD_POWER_ON 1 121 + 122 + #define SDEDM_FORCE_DATA_MODE BIT(19) 123 + #define SDEDM_CLOCK_PULSE BIT(20) 124 + #define SDEDM_BYPASS BIT(21) 125 + 126 + #define SDEDM_WRITE_THRESHOLD_SHIFT 9 127 + #define SDEDM_READ_THRESHOLD_SHIFT 14 128 + #define SDEDM_THRESHOLD_MASK 0x1f 129 + 130 + #define SDEDM_FSM_MASK 0xf 131 + #define SDEDM_FSM_IDENTMODE 0x0 132 + #define SDEDM_FSM_DATAMODE 0x1 133 + #define SDEDM_FSM_READDATA 0x2 134 + #define SDEDM_FSM_WRITEDATA 0x3 135 + #define SDEDM_FSM_READWAIT 0x4 136 + #define SDEDM_FSM_READCRC 0x5 137 + #define SDEDM_FSM_WRITECRC 0x6 138 + #define SDEDM_FSM_WRITEWAIT1 0x7 139 + #define SDEDM_FSM_POWERDOWN 0x8 140 + #define SDEDM_FSM_POWERUP 0x9 141 + #define SDEDM_FSM_WRITESTART1 0xa 142 + #define SDEDM_FSM_WRITESTART2 0xb 143 + #define SDEDM_FSM_GENPULSES 0xc 144 + #define SDEDM_FSM_WRITEWAIT2 0xd 145 + #define SDEDM_FSM_STARTPOWDOWN 0xf 146 + 147 + #define SDDATA_FIFO_WORDS 16 148 + 149 + #define FIFO_READ_THRESHOLD 4 150 + #define FIFO_WRITE_THRESHOLD 4 151 + #define SDDATA_FIFO_PIO_BURST 8 152 + 153 + 
#define PIO_THRESHOLD 1 /* Maximum block count for PIO (0 = always DMA) */ 154 + 155 + struct bcm2835_host { 156 + spinlock_t lock; 157 + struct mutex mutex; 158 + 159 + void __iomem *ioaddr; 160 + u32 phys_addr; 161 + 162 + struct mmc_host *mmc; 163 + struct platform_device *pdev; 164 + 165 + int clock; /* Current clock speed */ 166 + unsigned int max_clk; /* Max possible freq */ 167 + struct work_struct dma_work; 168 + struct delayed_work timeout_work; /* Timer for timeouts */ 169 + struct sg_mapping_iter sg_miter; /* SG state for PIO */ 170 + unsigned int blocks; /* remaining PIO blocks */ 171 + int irq; /* Device IRQ */ 172 + 173 + u32 ns_per_fifo_word; 174 + 175 + /* cached registers */ 176 + u32 hcfg; 177 + u32 cdiv; 178 + 179 + struct mmc_request *mrq; /* Current request */ 180 + struct mmc_command *cmd; /* Current command */ 181 + struct mmc_data *data; /* Current data request */ 182 + bool data_complete:1;/* Data finished before cmd */ 183 + bool use_busy:1; /* Wait for busy interrupt */ 184 + bool use_sbc:1; /* Send CMD23 */ 185 + 186 + /* for threaded irq handler */ 187 + bool irq_block; 188 + bool irq_busy; 189 + bool irq_data; 190 + 191 + /* DMA part */ 192 + struct dma_chan *dma_chan_rxtx; 193 + struct dma_chan *dma_chan; 194 + struct dma_slave_config dma_cfg_rx; 195 + struct dma_slave_config dma_cfg_tx; 196 + struct dma_async_tx_descriptor *dma_desc; 197 + u32 dma_dir; 198 + u32 drain_words; 199 + struct page *drain_page; 200 + u32 drain_offset; 201 + bool use_dma; 202 + }; 203 + 204 + static void bcm2835_dumpcmd(struct bcm2835_host *host, struct mmc_command *cmd, 205 + const char *label) 206 + { 207 + struct device *dev = &host->pdev->dev; 208 + 209 + if (!cmd) 210 + return; 211 + 212 + dev_dbg(dev, "%c%s op %d arg 0x%x flags 0x%x - resp %08x %08x %08x %08x, err %d\n", 213 + (cmd == host->cmd) ? 
'>' : ' ', 214 + label, cmd->opcode, cmd->arg, cmd->flags, 215 + cmd->resp[0], cmd->resp[1], cmd->resp[2], cmd->resp[3], 216 + cmd->error); 217 + } 218 + 219 + static void bcm2835_dumpregs(struct bcm2835_host *host) 220 + { 221 + struct mmc_request *mrq = host->mrq; 222 + struct device *dev = &host->pdev->dev; 223 + 224 + if (mrq) { 225 + bcm2835_dumpcmd(host, mrq->sbc, "sbc"); 226 + bcm2835_dumpcmd(host, mrq->cmd, "cmd"); 227 + if (mrq->data) { 228 + dev_dbg(dev, "data blocks %x blksz %x - err %d\n", 229 + mrq->data->blocks, 230 + mrq->data->blksz, 231 + mrq->data->error); 232 + } 233 + bcm2835_dumpcmd(host, mrq->stop, "stop"); 234 + } 235 + 236 + dev_dbg(dev, "=========== REGISTER DUMP ===========\n"); 237 + dev_dbg(dev, "SDCMD 0x%08x\n", readl(host->ioaddr + SDCMD)); 238 + dev_dbg(dev, "SDARG 0x%08x\n", readl(host->ioaddr + SDARG)); 239 + dev_dbg(dev, "SDTOUT 0x%08x\n", readl(host->ioaddr + SDTOUT)); 240 + dev_dbg(dev, "SDCDIV 0x%08x\n", readl(host->ioaddr + SDCDIV)); 241 + dev_dbg(dev, "SDRSP0 0x%08x\n", readl(host->ioaddr + SDRSP0)); 242 + dev_dbg(dev, "SDRSP1 0x%08x\n", readl(host->ioaddr + SDRSP1)); 243 + dev_dbg(dev, "SDRSP2 0x%08x\n", readl(host->ioaddr + SDRSP2)); 244 + dev_dbg(dev, "SDRSP3 0x%08x\n", readl(host->ioaddr + SDRSP3)); 245 + dev_dbg(dev, "SDHSTS 0x%08x\n", readl(host->ioaddr + SDHSTS)); 246 + dev_dbg(dev, "SDVDD 0x%08x\n", readl(host->ioaddr + SDVDD)); 247 + dev_dbg(dev, "SDEDM 0x%08x\n", readl(host->ioaddr + SDEDM)); 248 + dev_dbg(dev, "SDHCFG 0x%08x\n", readl(host->ioaddr + SDHCFG)); 249 + dev_dbg(dev, "SDHBCT 0x%08x\n", readl(host->ioaddr + SDHBCT)); 250 + dev_dbg(dev, "SDHBLC 0x%08x\n", readl(host->ioaddr + SDHBLC)); 251 + dev_dbg(dev, "===========================================\n"); 252 + } 253 + 254 + static void bcm2835_reset_internal(struct bcm2835_host *host) 255 + { 256 + u32 temp; 257 + 258 + writel(SDVDD_POWER_OFF, host->ioaddr + SDVDD); 259 + writel(0, host->ioaddr + SDCMD); 260 + writel(0, host->ioaddr + SDARG); 261 + 
writel(0xf00000, host->ioaddr + SDTOUT); 262 + writel(0, host->ioaddr + SDCDIV); 263 + writel(0x7f8, host->ioaddr + SDHSTS); /* Write 1s to clear */ 264 + writel(0, host->ioaddr + SDHCFG); 265 + writel(0, host->ioaddr + SDHBCT); 266 + writel(0, host->ioaddr + SDHBLC); 267 + 268 + /* Limit fifo usage due to silicon bug */ 269 + temp = readl(host->ioaddr + SDEDM); 270 + temp &= ~((SDEDM_THRESHOLD_MASK << SDEDM_READ_THRESHOLD_SHIFT) | 271 + (SDEDM_THRESHOLD_MASK << SDEDM_WRITE_THRESHOLD_SHIFT)); 272 + temp |= (FIFO_READ_THRESHOLD << SDEDM_READ_THRESHOLD_SHIFT) | 273 + (FIFO_WRITE_THRESHOLD << SDEDM_WRITE_THRESHOLD_SHIFT); 274 + writel(temp, host->ioaddr + SDEDM); 275 + msleep(20); 276 + writel(SDVDD_POWER_ON, host->ioaddr + SDVDD); 277 + msleep(20); 278 + host->clock = 0; 279 + writel(host->hcfg, host->ioaddr + SDHCFG); 280 + writel(host->cdiv, host->ioaddr + SDCDIV); 281 + } 282 + 283 + static void bcm2835_reset(struct mmc_host *mmc) 284 + { 285 + struct bcm2835_host *host = mmc_priv(mmc); 286 + 287 + if (host->dma_chan) 288 + dmaengine_terminate_sync(host->dma_chan); 289 + bcm2835_reset_internal(host); 290 + } 291 + 292 + static void bcm2835_finish_command(struct bcm2835_host *host); 293 + 294 + static void bcm2835_wait_transfer_complete(struct bcm2835_host *host) 295 + { 296 + int timediff; 297 + u32 alternate_idle; 298 + 299 + alternate_idle = (host->mrq->data->flags & MMC_DATA_READ) ? 
300 + SDEDM_FSM_READWAIT : SDEDM_FSM_WRITESTART1; 301 + 302 + timediff = 0; 303 + 304 + while (1) { 305 + u32 edm, fsm; 306 + 307 + edm = readl(host->ioaddr + SDEDM); 308 + fsm = edm & SDEDM_FSM_MASK; 309 + 310 + if ((fsm == SDEDM_FSM_IDENTMODE) || 311 + (fsm == SDEDM_FSM_DATAMODE)) 312 + break; 313 + if (fsm == alternate_idle) { 314 + writel(edm | SDEDM_FORCE_DATA_MODE, 315 + host->ioaddr + SDEDM); 316 + break; 317 + } 318 + 319 + timediff++; 320 + if (timediff == 100000) { 321 + dev_err(&host->pdev->dev, 322 + "wait_transfer_complete - still waiting after %d retries\n", 323 + timediff); 324 + bcm2835_dumpregs(host); 325 + host->mrq->data->error = -ETIMEDOUT; 326 + return; 327 + } 328 + cpu_relax(); 329 + } 330 + } 331 + 332 + static void bcm2835_dma_complete(void *param) 333 + { 334 + struct bcm2835_host *host = param; 335 + 336 + schedule_work(&host->dma_work); 337 + } 338 + 339 + static void bcm2835_transfer_block_pio(struct bcm2835_host *host, bool is_read) 340 + { 341 + unsigned long flags; 342 + size_t blksize; 343 + unsigned long wait_max; 344 + 345 + blksize = host->data->blksz; 346 + 347 + wait_max = jiffies + msecs_to_jiffies(500); 348 + 349 + local_irq_save(flags); 350 + 351 + while (blksize) { 352 + int copy_words; 353 + u32 hsts = 0; 354 + size_t len; 355 + u32 *buf; 356 + 357 + if (!sg_miter_next(&host->sg_miter)) { 358 + host->data->error = -EINVAL; 359 + break; 360 + } 361 + 362 + len = min(host->sg_miter.length, blksize); 363 + if (len % 4) { 364 + host->data->error = -EINVAL; 365 + break; 366 + } 367 + 368 + blksize -= len; 369 + host->sg_miter.consumed = len; 370 + 371 + buf = (u32 *)host->sg_miter.addr; 372 + 373 + copy_words = len / 4; 374 + 375 + while (copy_words) { 376 + int burst_words, words; 377 + u32 edm; 378 + 379 + burst_words = min(SDDATA_FIFO_PIO_BURST, copy_words); 380 + edm = readl(host->ioaddr + SDEDM); 381 + if (is_read) 382 + words = ((edm >> 4) & 0x1f); 383 + else 384 + words = SDDATA_FIFO_WORDS - ((edm >> 4) & 0x1f); 385 + 
386 + if (words < burst_words) { 387 + int fsm_state = (edm & SDEDM_FSM_MASK); 388 + struct device *dev = &host->pdev->dev; 389 + 390 + if ((is_read && 391 + (fsm_state != SDEDM_FSM_READDATA && 392 + fsm_state != SDEDM_FSM_READWAIT && 393 + fsm_state != SDEDM_FSM_READCRC)) || 394 + (!is_read && 395 + (fsm_state != SDEDM_FSM_WRITEDATA && 396 + fsm_state != SDEDM_FSM_WRITESTART1 && 397 + fsm_state != SDEDM_FSM_WRITESTART2))) { 398 + hsts = readl(host->ioaddr + SDHSTS); 399 + dev_err(dev, "fsm %x, hsts %08x\n", 400 + fsm_state, hsts); 401 + if (hsts & SDHSTS_ERROR_MASK) 402 + break; 403 + } 404 + 405 + if (time_after(jiffies, wait_max)) { 406 + dev_err(dev, "PIO %s timeout - EDM %08x\n", 407 + is_read ? "read" : "write", 408 + edm); 409 + hsts = SDHSTS_REW_TIME_OUT; 410 + break; 411 + } 412 + ndelay((burst_words - words) * 413 + host->ns_per_fifo_word); 414 + continue; 415 + } else if (words > copy_words) { 416 + words = copy_words; 417 + } 418 + 419 + copy_words -= words; 420 + 421 + while (words) { 422 + if (is_read) 423 + *(buf++) = readl(host->ioaddr + SDDATA); 424 + else 425 + writel(*(buf++), host->ioaddr + SDDATA); 426 + words--; 427 + } 428 + } 429 + 430 + if (hsts & SDHSTS_ERROR_MASK) 431 + break; 432 + } 433 + 434 + sg_miter_stop(&host->sg_miter); 435 + 436 + local_irq_restore(flags); 437 + } 438 + 439 + static void bcm2835_transfer_pio(struct bcm2835_host *host) 440 + { 441 + struct device *dev = &host->pdev->dev; 442 + u32 sdhsts; 443 + bool is_read; 444 + 445 + is_read = (host->data->flags & MMC_DATA_READ) != 0; 446 + bcm2835_transfer_block_pio(host, is_read); 447 + 448 + sdhsts = readl(host->ioaddr + SDHSTS); 449 + if (sdhsts & (SDHSTS_CRC16_ERROR | 450 + SDHSTS_CRC7_ERROR | 451 + SDHSTS_FIFO_ERROR)) { 452 + dev_err(dev, "%s transfer error - HSTS %08x\n", 453 + is_read ? 
"read" : "write", sdhsts); 454 + host->data->error = -EILSEQ; 455 + } else if ((sdhsts & (SDHSTS_CMD_TIME_OUT | 456 + SDHSTS_REW_TIME_OUT))) { 457 + dev_err(dev, "%s timeout error - HSTS %08x\n", 458 + is_read ? "read" : "write", sdhsts); 459 + host->data->error = -ETIMEDOUT; 460 + } 461 + } 462 + 463 + static 464 + void bcm2835_prepare_dma(struct bcm2835_host *host, struct mmc_data *data) 465 + { 466 + int len, dir_data, dir_slave; 467 + struct dma_async_tx_descriptor *desc = NULL; 468 + struct dma_chan *dma_chan; 469 + 470 + dma_chan = host->dma_chan_rxtx; 471 + if (data->flags & MMC_DATA_READ) { 472 + dir_data = DMA_FROM_DEVICE; 473 + dir_slave = DMA_DEV_TO_MEM; 474 + } else { 475 + dir_data = DMA_TO_DEVICE; 476 + dir_slave = DMA_MEM_TO_DEV; 477 + } 478 + 479 + /* The block doesn't manage the FIFO DREQs properly for 480 + * multi-block transfers, so don't attempt to DMA the final 481 + * few words. Unfortunately this requires the final sg entry 482 + * to be trimmed. N.B. This code demands that the overspill 483 + * is contained in a single sg entry. 484 + */ 485 + 486 + host->drain_words = 0; 487 + if ((data->blocks > 1) && (dir_data == DMA_FROM_DEVICE)) { 488 + struct scatterlist *sg; 489 + u32 len; 490 + int i; 491 + 492 + len = min((u32)(FIFO_READ_THRESHOLD - 1) * 4, 493 + (u32)data->blocks * data->blksz); 494 + 495 + for_each_sg(data->sg, sg, data->sg_len, i) { 496 + if (sg_is_last(sg)) { 497 + WARN_ON(sg->length < len); 498 + sg->length -= len; 499 + host->drain_page = sg_page(sg); 500 + host->drain_offset = sg->offset + sg->length; 501 + } 502 + } 503 + host->drain_words = len / 4; 504 + } 505 + 506 + /* The parameters have already been validated, so this will not fail */ 507 + (void)dmaengine_slave_config(dma_chan, 508 + (dir_data == DMA_FROM_DEVICE) ? 
509 + &host->dma_cfg_rx : 510 + &host->dma_cfg_tx); 511 + 512 + len = dma_map_sg(dma_chan->device->dev, data->sg, data->sg_len, 513 + dir_data); 514 + 515 + if (len > 0) { 516 + desc = dmaengine_prep_slave_sg(dma_chan, data->sg, 517 + len, dir_slave, 518 + DMA_PREP_INTERRUPT | 519 + DMA_CTRL_ACK); 520 + } 521 + 522 + if (desc) { 523 + desc->callback = bcm2835_dma_complete; 524 + desc->callback_param = host; 525 + host->dma_desc = desc; 526 + host->dma_chan = dma_chan; 527 + host->dma_dir = dir_data; 528 + } 529 + } 530 + 531 + static void bcm2835_start_dma(struct bcm2835_host *host) 532 + { 533 + dmaengine_submit(host->dma_desc); 534 + dma_async_issue_pending(host->dma_chan); 535 + } 536 + 537 + static void bcm2835_set_transfer_irqs(struct bcm2835_host *host) 538 + { 539 + u32 all_irqs = SDHCFG_DATA_IRPT_EN | SDHCFG_BLOCK_IRPT_EN | 540 + SDHCFG_BUSY_IRPT_EN; 541 + 542 + if (host->dma_desc) { 543 + host->hcfg = (host->hcfg & ~all_irqs) | 544 + SDHCFG_BUSY_IRPT_EN; 545 + } else { 546 + host->hcfg = (host->hcfg & ~all_irqs) | 547 + SDHCFG_DATA_IRPT_EN | 548 + SDHCFG_BUSY_IRPT_EN; 549 + } 550 + 551 + writel(host->hcfg, host->ioaddr + SDHCFG); 552 + } 553 + 554 + static 555 + void bcm2835_prepare_data(struct bcm2835_host *host, struct mmc_command *cmd) 556 + { 557 + struct mmc_data *data = cmd->data; 558 + 559 + WARN_ON(host->data); 560 + 561 + host->data = data; 562 + if (!data) 563 + return; 564 + 565 + host->data_complete = false; 566 + host->data->bytes_xfered = 0; 567 + 568 + if (!host->dma_desc) { 569 + /* Use PIO */ 570 + int flags = SG_MITER_ATOMIC; 571 + 572 + if (data->flags & MMC_DATA_READ) 573 + flags |= SG_MITER_TO_SG; 574 + else 575 + flags |= SG_MITER_FROM_SG; 576 + sg_miter_start(&host->sg_miter, data->sg, data->sg_len, flags); 577 + host->blocks = data->blocks; 578 + } 579 + 580 + bcm2835_set_transfer_irqs(host); 581 + 582 + writel(data->blksz, host->ioaddr + SDHBCT); 583 + writel(data->blocks, host->ioaddr + SDHBLC); 584 + } 585 + 586 + static u32 
bcm2835_read_wait_sdcmd(struct bcm2835_host *host, u32 max_ms) 587 + { 588 + struct device *dev = &host->pdev->dev; 589 + u32 value; 590 + int ret; 591 + 592 + ret = readl_poll_timeout(host->ioaddr + SDCMD, value, 593 + !(value & SDCMD_NEW_FLAG), 1, 10); 594 + if (ret == -ETIMEDOUT) 595 + /* if it takes a while make poll interval bigger */ 596 + ret = readl_poll_timeout(host->ioaddr + SDCMD, value, 597 + !(value & SDCMD_NEW_FLAG), 598 + 10, max_ms * 1000); 599 + if (ret == -ETIMEDOUT) 600 + dev_err(dev, "%s: timeout (%d ms)\n", __func__, max_ms); 601 + 602 + return value; 603 + } 604 + 605 + static void bcm2835_finish_request(struct bcm2835_host *host) 606 + { 607 + struct dma_chan *terminate_chan = NULL; 608 + struct mmc_request *mrq; 609 + 610 + cancel_delayed_work(&host->timeout_work); 611 + 612 + mrq = host->mrq; 613 + 614 + host->mrq = NULL; 615 + host->cmd = NULL; 616 + host->data = NULL; 617 + 618 + host->dma_desc = NULL; 619 + terminate_chan = host->dma_chan; 620 + host->dma_chan = NULL; 621 + 622 + if (terminate_chan) { 623 + int err = dmaengine_terminate_all(terminate_chan); 624 + 625 + if (err) 626 + dev_err(&host->pdev->dev, 627 + "failed to terminate DMA (%d)\n", err); 628 + } 629 + 630 + mmc_request_done(host->mmc, mrq); 631 + } 632 + 633 + static 634 + bool bcm2835_send_command(struct bcm2835_host *host, struct mmc_command *cmd) 635 + { 636 + struct device *dev = &host->pdev->dev; 637 + u32 sdcmd, sdhsts; 638 + unsigned long timeout; 639 + 640 + WARN_ON(host->cmd); 641 + 642 + sdcmd = bcm2835_read_wait_sdcmd(host, 100); 643 + if (sdcmd & SDCMD_NEW_FLAG) { 644 + dev_err(dev, "previous command never completed.\n"); 645 + bcm2835_dumpregs(host); 646 + cmd->error = -EILSEQ; 647 + bcm2835_finish_request(host); 648 + return false; 649 + } 650 + 651 + if (!cmd->data && cmd->busy_timeout > 9000) 652 + timeout = DIV_ROUND_UP(cmd->busy_timeout, 1000) * HZ + HZ; 653 + else 654 + timeout = 10 * HZ; 655 + schedule_delayed_work(&host->timeout_work, timeout); 656 + 
+	host->cmd = cmd;
+
+	/* Clear any error flags */
+	sdhsts = readl(host->ioaddr + SDHSTS);
+	if (sdhsts & SDHSTS_ERROR_MASK)
+		writel(sdhsts, host->ioaddr + SDHSTS);
+
+	if ((cmd->flags & MMC_RSP_136) && (cmd->flags & MMC_RSP_BUSY)) {
+		dev_err(dev, "unsupported response type!\n");
+		cmd->error = -EINVAL;
+		bcm2835_finish_request(host);
+		return false;
+	}
+
+	bcm2835_prepare_data(host, cmd);
+
+	writel(cmd->arg, host->ioaddr + SDARG);
+
+	sdcmd = cmd->opcode & SDCMD_CMD_MASK;
+
+	host->use_busy = false;
+	if (!(cmd->flags & MMC_RSP_PRESENT)) {
+		sdcmd |= SDCMD_NO_RESPONSE;
+	} else {
+		if (cmd->flags & MMC_RSP_136)
+			sdcmd |= SDCMD_LONG_RESPONSE;
+		if (cmd->flags & MMC_RSP_BUSY) {
+			sdcmd |= SDCMD_BUSYWAIT;
+			host->use_busy = true;
+		}
+	}
+
+	if (cmd->data) {
+		if (cmd->data->flags & MMC_DATA_WRITE)
+			sdcmd |= SDCMD_WRITE_CMD;
+		if (cmd->data->flags & MMC_DATA_READ)
+			sdcmd |= SDCMD_READ_CMD;
+	}
+
+	writel(sdcmd | SDCMD_NEW_FLAG, host->ioaddr + SDCMD);
+
+	return true;
+}
+
+static void bcm2835_transfer_complete(struct bcm2835_host *host)
+{
+	struct mmc_data *data;
+
+	WARN_ON(!host->data_complete);
+
+	data = host->data;
+	host->data = NULL;
+
+	/* Need to send CMD12 if -
+	 * a) open-ended multiblock transfer (no CMD23)
+	 * b) error in multiblock transfer
+	 */
+	if (host->mrq->stop && (data->error || !host->use_sbc)) {
+		if (bcm2835_send_command(host, host->mrq->stop)) {
+			/* No busy, so poll for completion */
+			if (!host->use_busy)
+				bcm2835_finish_command(host);
+		}
+	} else {
+		bcm2835_wait_transfer_complete(host);
+		bcm2835_finish_request(host);
+	}
+}
+
+static void bcm2835_finish_data(struct bcm2835_host *host)
+{
+	struct device *dev = &host->pdev->dev;
+	struct mmc_data *data;
+
+	data = host->data;
+
+	host->hcfg &= ~(SDHCFG_DATA_IRPT_EN | SDHCFG_BLOCK_IRPT_EN);
+	writel(host->hcfg, host->ioaddr + SDHCFG);
+
+	data->bytes_xfered = data->error ? 0 : (data->blksz * data->blocks);
+
+	host->data_complete = true;
+
+	if (host->cmd) {
+		/* Data managed to finish before the
+		 * command completed. Make sure we do
+		 * things in the proper order.
+		 */
+		dev_dbg(dev, "Finished early - HSTS %08x\n",
+			readl(host->ioaddr + SDHSTS));
+	} else {
+		bcm2835_transfer_complete(host);
+	}
+}
+
+static void bcm2835_finish_command(struct bcm2835_host *host)
+{
+	struct device *dev = &host->pdev->dev;
+	struct mmc_command *cmd = host->cmd;
+	u32 sdcmd;
+
+	sdcmd = bcm2835_read_wait_sdcmd(host, 100);
+
+	/* Check for errors */
+	if (sdcmd & SDCMD_NEW_FLAG) {
+		dev_err(dev, "command never completed.\n");
+		bcm2835_dumpregs(host);
+		host->cmd->error = -EIO;
+		bcm2835_finish_request(host);
+		return;
+	} else if (sdcmd & SDCMD_FAIL_FLAG) {
+		u32 sdhsts = readl(host->ioaddr + SDHSTS);
+
+		/* Clear the errors */
+		writel(SDHSTS_ERROR_MASK, host->ioaddr + SDHSTS);
+
+		if (!(sdhsts & SDHSTS_CRC7_ERROR) ||
+		    (host->cmd->opcode != MMC_SEND_OP_COND)) {
+			if (sdhsts & SDHSTS_CMD_TIME_OUT) {
+				host->cmd->error = -ETIMEDOUT;
+			} else {
+				dev_err(dev, "unexpected command %d error\n",
+					host->cmd->opcode);
+				bcm2835_dumpregs(host);
+				host->cmd->error = -EILSEQ;
+			}
+			bcm2835_finish_request(host);
+			return;
+		}
+	}
+
+	if (cmd->flags & MMC_RSP_PRESENT) {
+		if (cmd->flags & MMC_RSP_136) {
+			int i;
+
+			for (i = 0; i < 4; i++) {
+				cmd->resp[3 - i] =
+					readl(host->ioaddr + SDRSP0 + i * 4);
+			}
+		} else {
+			cmd->resp[0] = readl(host->ioaddr + SDRSP0);
+		}
+	}
+
+	if (cmd == host->mrq->sbc) {
+		/* Finished CMD23, now send actual command. */
+		host->cmd = NULL;
+		if (bcm2835_send_command(host, host->mrq->cmd)) {
+			if (host->data && host->dma_desc)
+				/* DMA transfer starts now, PIO starts
+				 * after irq
+				 */
+				bcm2835_start_dma(host);
+
+			if (!host->use_busy)
+				bcm2835_finish_command(host);
+		}
+	} else if (cmd == host->mrq->stop) {
+		/* Finished CMD12 */
+		bcm2835_finish_request(host);
+	} else {
+		/* Processed actual command. */
+		host->cmd = NULL;
+		if (!host->data)
+			bcm2835_finish_request(host);
+		else if (host->data_complete)
+			bcm2835_transfer_complete(host);
+	}
+}
+
+static void bcm2835_timeout(struct work_struct *work)
+{
+	struct delayed_work *d = to_delayed_work(work);
+	struct bcm2835_host *host =
+		container_of(d, struct bcm2835_host, timeout_work);
+	struct device *dev = &host->pdev->dev;
+
+	mutex_lock(&host->mutex);
+
+	if (host->mrq) {
+		dev_err(dev, "timeout waiting for hardware interrupt.\n");
+		bcm2835_dumpregs(host);
+
+		if (host->data) {
+			host->data->error = -ETIMEDOUT;
+			bcm2835_finish_data(host);
+		} else {
+			if (host->cmd)
+				host->cmd->error = -ETIMEDOUT;
+			else
+				host->mrq->cmd->error = -ETIMEDOUT;
+
+			bcm2835_finish_request(host);
+		}
+	}
+
+	mutex_unlock(&host->mutex);
+}
+
+static bool bcm2835_check_cmd_error(struct bcm2835_host *host, u32 intmask)
+{
+	struct device *dev = &host->pdev->dev;
+
+	if (!(intmask & SDHSTS_ERROR_MASK))
+		return false;
+
+	if (!host->cmd)
+		return true;
+
+	dev_err(dev, "sdhost_busy_irq: intmask %08x\n", intmask);
+	if (intmask & SDHSTS_CRC7_ERROR) {
+		host->cmd->error = -EILSEQ;
+	} else if (intmask & (SDHSTS_CRC16_ERROR |
+			      SDHSTS_FIFO_ERROR)) {
+		if (host->mrq->data)
+			host->mrq->data->error = -EILSEQ;
+		else
+			host->cmd->error = -EILSEQ;
+	} else if (intmask & SDHSTS_REW_TIME_OUT) {
+		if (host->mrq->data)
+			host->mrq->data->error = -ETIMEDOUT;
+		else
+			host->cmd->error = -ETIMEDOUT;
+	} else if (intmask & SDHSTS_CMD_TIME_OUT) {
+		host->cmd->error = -ETIMEDOUT;
+	}
+	bcm2835_dumpregs(host);
+	return true;
+}
+
+static void bcm2835_check_data_error(struct bcm2835_host *host, u32 intmask)
+{
+	if (!host->data)
+		return;
+	if (intmask & (SDHSTS_CRC16_ERROR | SDHSTS_FIFO_ERROR))
+		host->data->error = -EILSEQ;
+	if (intmask & SDHSTS_REW_TIME_OUT)
+		host->data->error = -ETIMEDOUT;
+}
+
+static void bcm2835_busy_irq(struct bcm2835_host *host)
+{
+	if (WARN_ON(!host->cmd)) {
+		bcm2835_dumpregs(host);
+		return;
+	}
+
+	if (WARN_ON(!host->use_busy)) {
+		bcm2835_dumpregs(host);
+		return;
+	}
+	host->use_busy = false;
+
+	bcm2835_finish_command(host);
+}
+
+static void bcm2835_data_irq(struct bcm2835_host *host, u32 intmask)
+{
+	/* There are no dedicated data/space available interrupt
+	 * status bits, so it is necessary to use the single shared
+	 * data/space available FIFO status bits. It is therefore not
+	 * an error to get here when there is no data transfer in
+	 * progress.
+	 */
+	if (!host->data)
+		return;
+
+	bcm2835_check_data_error(host, intmask);
+	if (host->data->error)
+		goto finished;
+
+	if (host->data->flags & MMC_DATA_WRITE) {
+		/* Use the block interrupt for writes after the first block */
+		host->hcfg &= ~(SDHCFG_DATA_IRPT_EN);
+		host->hcfg |= SDHCFG_BLOCK_IRPT_EN;
+		writel(host->hcfg, host->ioaddr + SDHCFG);
+		bcm2835_transfer_pio(host);
+	} else {
+		bcm2835_transfer_pio(host);
+		host->blocks--;
+		if ((host->blocks == 0) || host->data->error)
+			goto finished;
+	}
+	return;
+
+finished:
+	host->hcfg &= ~(SDHCFG_DATA_IRPT_EN | SDHCFG_BLOCK_IRPT_EN);
+	writel(host->hcfg, host->ioaddr + SDHCFG);
+}
+
+static void bcm2835_data_threaded_irq(struct bcm2835_host *host)
+{
+	if (!host->data)
+		return;
+	if ((host->blocks == 0) || host->data->error)
+		bcm2835_finish_data(host);
+}
+
+static void bcm2835_block_irq(struct bcm2835_host *host)
+{
+	if (WARN_ON(!host->data)) {
+		bcm2835_dumpregs(host);
+		return;
+	}
+
+	if (!host->dma_desc) {
+		WARN_ON(!host->blocks);
+		if (host->data->error || (--host->blocks == 0))
+			bcm2835_finish_data(host);
+		else
+			bcm2835_transfer_pio(host);
+	} else if (host->data->flags & MMC_DATA_WRITE) {
+		bcm2835_finish_data(host);
+	}
+}
+
+static irqreturn_t bcm2835_irq(int irq, void *dev_id)
+{
+	irqreturn_t result = IRQ_NONE;
+	struct bcm2835_host *host = dev_id;
+	u32 intmask;
+
+	spin_lock(&host->lock);
+
+	intmask = readl(host->ioaddr + SDHSTS);
+
+	writel(SDHSTS_BUSY_IRPT |
+	       SDHSTS_BLOCK_IRPT |
+	       SDHSTS_SDIO_IRPT |
+	       SDHSTS_DATA_FLAG,
+	       host->ioaddr + SDHSTS);
+
+	if (intmask & SDHSTS_BLOCK_IRPT) {
+		bcm2835_check_data_error(host, intmask);
+		host->irq_block = true;
+		result = IRQ_WAKE_THREAD;
+	}
+
+	if (intmask & SDHSTS_BUSY_IRPT) {
+		if (!bcm2835_check_cmd_error(host, intmask)) {
+			host->irq_busy = true;
+			result = IRQ_WAKE_THREAD;
+		} else {
+			result = IRQ_HANDLED;
+		}
+	}
+
+	/* There is no true data interrupt status bit, so it is
+	 * necessary to qualify the data flag with the interrupt
+	 * enable bit.
+	 */
+	if ((intmask & SDHSTS_DATA_FLAG) &&
+	    (host->hcfg & SDHCFG_DATA_IRPT_EN)) {
+		bcm2835_data_irq(host, intmask);
+		host->irq_data = true;
+		result = IRQ_WAKE_THREAD;
+	}
+
+	spin_unlock(&host->lock);
+
+	return result;
+}
+
+static irqreturn_t bcm2835_threaded_irq(int irq, void *dev_id)
+{
+	struct bcm2835_host *host = dev_id;
+	unsigned long flags;
+	bool block, busy, data;
+
+	spin_lock_irqsave(&host->lock, flags);
+
+	block = host->irq_block;
+	busy = host->irq_busy;
+	data = host->irq_data;
+	host->irq_block = false;
+	host->irq_busy = false;
+	host->irq_data = false;
+
+	spin_unlock_irqrestore(&host->lock, flags);
+
+	mutex_lock(&host->mutex);
+
+	if (block)
+		bcm2835_block_irq(host);
+	if (busy)
+		bcm2835_busy_irq(host);
+	if (data)
+		bcm2835_data_threaded_irq(host);
+
+	mutex_unlock(&host->mutex);
+
+	return IRQ_HANDLED;
+}
+
+static void bcm2835_dma_complete_work(struct work_struct *work)
+{
+	struct bcm2835_host *host =
+		container_of(work, struct bcm2835_host, dma_work);
+	struct mmc_data *data = host->data;
+
+	mutex_lock(&host->mutex);
+
+	if (host->dma_chan) {
+		dma_unmap_sg(host->dma_chan->device->dev,
+			     data->sg, data->sg_len,
+			     host->dma_dir);
+
+		host->dma_chan = NULL;
+	}
+
+	if (host->drain_words) {
+		unsigned long flags;
+		void *page;
+		u32 *buf;
+
+		if (host->drain_offset & PAGE_MASK) {
+			host->drain_page += host->drain_offset >> PAGE_SHIFT;
+			host->drain_offset &= ~PAGE_MASK;
+		}
+		local_irq_save(flags);
+		page = kmap_atomic(host->drain_page);
+		buf = page + host->drain_offset;
+
+		while (host->drain_words) {
+			u32 edm = readl(host->ioaddr + SDEDM);
+
+			if ((edm >> 4) & 0x1f)
+				*(buf++) = readl(host->ioaddr + SDDATA);
+			host->drain_words--;
+		}
+
+		kunmap_atomic(page);
+		local_irq_restore(flags);
+	}
+
+	bcm2835_finish_data(host);
+
+	mutex_unlock(&host->mutex);
+}
+
+static void bcm2835_set_clock(struct bcm2835_host *host, unsigned int clock)
+{
+	int div;
+
+	/* The SDCDIV register has 11 bits, and holds (div - 2). But
+	 * in data mode the max is 50MHz without a minimum, and only
+	 * the bottom 3 bits are used. Since the switch over is
+	 * automatic (unless we have marked the card as slow...),
+	 * chosen values have to make sense in both modes. Ident mode
+	 * must be 100-400KHz, so can range check the requested
+	 * clock. CMD15 must be used to return to data mode, so this
+	 * can be monitored.
+	 *
+	 * clock 250MHz -> 0->125MHz, 1->83.3MHz, 2->62.5MHz, 3->50.0MHz
+	 *                 4->41.7MHz, 5->35.7MHz, 6->31.3MHz, 7->27.8MHz
+	 *
+	 * 623->400KHz/27.8MHz
+	 * reset value (507)->491159/50MHz
+	 *
+	 * BUT, the 3-bit clock divisor in data mode is too small if
+	 * the core clock is higher than 250MHz, so instead use the
+	 * SLOW_CARD configuration bit to force the use of the ident
+	 * clock divisor at all times.
+	 */
+
+	if (clock < 100000) {
+		/* Can't stop the clock, but make it as slow as possible
+		 * to show willing
+		 */
+		host->cdiv = SDCDIV_MAX_CDIV;
+		writel(host->cdiv, host->ioaddr + SDCDIV);
+		return;
+	}
+
+	div = host->max_clk / clock;
+	if (div < 2)
+		div = 2;
+	if ((host->max_clk / div) > clock)
+		div++;
+	div -= 2;
+
+	if (div > SDCDIV_MAX_CDIV)
+		div = SDCDIV_MAX_CDIV;
+
+	clock = host->max_clk / (div + 2);
+	host->mmc->actual_clock = clock;
+
+	/* Calibrate some delays */
+
+	host->ns_per_fifo_word = (1000000000 / clock) *
+		((host->mmc->caps & MMC_CAP_4_BIT_DATA) ? 8 : 32);
+
+	host->cdiv = div;
+	writel(host->cdiv, host->ioaddr + SDCDIV);
+
+	/* Set the timeout to 500ms */
+	writel(host->mmc->actual_clock / 2, host->ioaddr + SDTOUT);
+}
+
+static void bcm2835_request(struct mmc_host *mmc, struct mmc_request *mrq)
+{
+	struct bcm2835_host *host = mmc_priv(mmc);
+	struct device *dev = &host->pdev->dev;
+	u32 edm, fsm;
+
+	/* Reset the error statuses in case this is a retry */
+	if (mrq->sbc)
+		mrq->sbc->error = 0;
+	if (mrq->cmd)
+		mrq->cmd->error = 0;
+	if (mrq->data)
+		mrq->data->error = 0;
+	if (mrq->stop)
+		mrq->stop->error = 0;
+
+	if (mrq->data && !is_power_of_2(mrq->data->blksz)) {
+		dev_err(dev, "unsupported block size (%d bytes)\n",
+			mrq->data->blksz);
+		mrq->cmd->error = -EINVAL;
+		mmc_request_done(mmc, mrq);
+		return;
+	}
+
+	if (host->use_dma && mrq->data && (mrq->data->blocks > PIO_THRESHOLD))
+		bcm2835_prepare_dma(host, mrq->data);
+
+	mutex_lock(&host->mutex);
+
+	WARN_ON(host->mrq);
+	host->mrq = mrq;
+
+	edm = readl(host->ioaddr + SDEDM);
+	fsm = edm & SDEDM_FSM_MASK;
+
+	if ((fsm != SDEDM_FSM_IDENTMODE) &&
+	    (fsm != SDEDM_FSM_DATAMODE)) {
+		dev_err(dev, "previous command (%d) not complete (EDM %08x)\n",
+			readl(host->ioaddr + SDCMD) & SDCMD_CMD_MASK,
+			edm);
+		bcm2835_dumpregs(host);
+		mrq->cmd->error = -EILSEQ;
+		bcm2835_finish_request(host);
+		mutex_unlock(&host->mutex);
+		return;
+	}
+
+	host->use_sbc = !!mrq->sbc && host->mrq->data &&
+			(host->mrq->data->flags & MMC_DATA_READ);
+	if (host->use_sbc) {
+		if (bcm2835_send_command(host, mrq->sbc)) {
+			if (!host->use_busy)
+				bcm2835_finish_command(host);
+		}
+	} else if (bcm2835_send_command(host, mrq->cmd)) {
+		if (host->data && host->dma_desc) {
+			/* DMA transfer starts now, PIO starts after irq */
+			bcm2835_start_dma(host);
+		}
+
+		if (!host->use_busy)
+			bcm2835_finish_command(host);
+	}
+
+	mutex_unlock(&host->mutex);
+}
+
+static void bcm2835_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+{
+	struct bcm2835_host *host = mmc_priv(mmc);
+
+	mutex_lock(&host->mutex);
+
+	if (!ios->clock || ios->clock != host->clock) {
+		bcm2835_set_clock(host, ios->clock);
+		host->clock = ios->clock;
+	}
+
+	/* set bus width */
+	host->hcfg &= ~SDHCFG_WIDE_EXT_BUS;
+	if (ios->bus_width == MMC_BUS_WIDTH_4)
+		host->hcfg |= SDHCFG_WIDE_EXT_BUS;
+
+	host->hcfg |= SDHCFG_WIDE_INT_BUS;
+
+	/* Disable clever clock switching, to cope with fast core clocks */
+	host->hcfg |= SDHCFG_SLOW_CARD;
+
+	writel(host->hcfg, host->ioaddr + SDHCFG);
+
+	mutex_unlock(&host->mutex);
+}
+
+static struct mmc_host_ops bcm2835_ops = {
+	.request = bcm2835_request,
+	.set_ios = bcm2835_set_ios,
+	.hw_reset = bcm2835_reset,
+};
+
+static int bcm2835_add_host(struct bcm2835_host *host)
+{
+	struct mmc_host *mmc = host->mmc;
+	struct device *dev = &host->pdev->dev;
+	char pio_limit_string[20];
+	int ret;
+
+	mmc->f_max = host->max_clk;
+	mmc->f_min = host->max_clk / SDCDIV_MAX_CDIV;
+
+	mmc->max_busy_timeout = ~0 / (mmc->f_max / 1000);
+
+	dev_dbg(dev, "f_max %d, f_min %d, max_busy_timeout %d\n",
+		mmc->f_max, mmc->f_min, mmc->max_busy_timeout);
+
+	/* host controller capabilities */
+	mmc->caps |= MMC_CAP_SD_HIGHSPEED | MMC_CAP_MMC_HIGHSPEED |
+		     MMC_CAP_NEEDS_POLL | MMC_CAP_HW_RESET | MMC_CAP_ERASE |
+		     MMC_CAP_CMD23;
+
+	spin_lock_init(&host->lock);
+	mutex_init(&host->mutex);
+
+	if (IS_ERR_OR_NULL(host->dma_chan_rxtx)) {
+		dev_warn(dev, "unable to initialise DMA channel. Falling back to PIO\n");
+		host->use_dma = false;
+	} else {
+		host->use_dma = true;
+
+		host->dma_cfg_tx.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		host->dma_cfg_tx.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		host->dma_cfg_tx.slave_id = 13;		/* DREQ channel */
+		host->dma_cfg_tx.direction = DMA_MEM_TO_DEV;
+		host->dma_cfg_tx.src_addr = 0;
+		host->dma_cfg_tx.dst_addr = host->phys_addr + SDDATA;
+
+		host->dma_cfg_rx.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		host->dma_cfg_rx.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		host->dma_cfg_rx.slave_id = 13;		/* DREQ channel */
+		host->dma_cfg_rx.direction = DMA_DEV_TO_MEM;
+		host->dma_cfg_rx.src_addr = host->phys_addr + SDDATA;
+		host->dma_cfg_rx.dst_addr = 0;
+
+		if (dmaengine_slave_config(host->dma_chan_rxtx,
+					   &host->dma_cfg_tx) != 0 ||
+		    dmaengine_slave_config(host->dma_chan_rxtx,
+					   &host->dma_cfg_rx) != 0)
+			host->use_dma = false;
+	}
+
+	mmc->max_segs = 128;
+	mmc->max_req_size = 524288;
+	mmc->max_seg_size = mmc->max_req_size;
+	mmc->max_blk_size = 1024;
+	mmc->max_blk_count = 65535;
+
+	/* report supported voltage ranges */
+	mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
+
+	INIT_WORK(&host->dma_work, bcm2835_dma_complete_work);
+	INIT_DELAYED_WORK(&host->timeout_work, bcm2835_timeout);
+
+	/* Set interrupt enables */
+	host->hcfg = SDHCFG_BUSY_IRPT_EN;
+
+	bcm2835_reset_internal(host);
+
+	ret = request_threaded_irq(host->irq, bcm2835_irq,
+				   bcm2835_threaded_irq,
+				   0, mmc_hostname(mmc), host);
+	if (ret) {
+		dev_err(dev, "failed to request IRQ %d: %d\n", host->irq, ret);
+		return ret;
+	}
+
+	ret = mmc_add_host(mmc);
+	if (ret) {
+		free_irq(host->irq, host);
+		return ret;
+	}
+
+	pio_limit_string[0] = '\0';
+	if (host->use_dma && (PIO_THRESHOLD > 0))
+		sprintf(pio_limit_string, " (>%d)", PIO_THRESHOLD);
+	dev_info(dev, "loaded - DMA %s%s\n",
+		 host->use_dma ? "enabled" : "disabled", pio_limit_string);
+
+	return 0;
+}
+
+static int bcm2835_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct clk *clk;
+	struct resource *iomem;
+	struct bcm2835_host *host;
+	struct mmc_host *mmc;
+	const __be32 *regaddr_p;
+	int ret;
+
+	dev_dbg(dev, "%s\n", __func__);
+	mmc = mmc_alloc_host(sizeof(*host), dev);
+	if (!mmc)
+		return -ENOMEM;
+
+	mmc->ops = &bcm2835_ops;
+	host = mmc_priv(mmc);
+	host->mmc = mmc;
+	host->pdev = pdev;
+	spin_lock_init(&host->lock);
+
+	iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	host->ioaddr = devm_ioremap_resource(dev, iomem);
+	if (IS_ERR(host->ioaddr)) {
+		ret = PTR_ERR(host->ioaddr);
+		goto err;
+	}
+
+	/* Parse OF address directly to get the physical address for
+	 * DMA to our registers.
+	 */
+	regaddr_p = of_get_address(pdev->dev.of_node, 0, NULL, NULL);
+	if (!regaddr_p) {
+		dev_err(dev, "Can't get phys address\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	host->phys_addr = be32_to_cpup(regaddr_p);
+
+	host->dma_chan = NULL;
+	host->dma_desc = NULL;
+
+	host->dma_chan_rxtx = dma_request_slave_channel(dev, "rx-tx");
+
+	clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(clk)) {
+		ret = PTR_ERR(clk);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "could not get clk: %d\n", ret);
+		goto err;
+	}
+
+	host->max_clk = clk_get_rate(clk);
+
+	host->irq = platform_get_irq(pdev, 0);
+	if (host->irq <= 0) {
+		dev_err(dev, "get IRQ failed\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	ret = mmc_of_parse(mmc);
+	if (ret)
+		goto err;
+
+	ret = bcm2835_add_host(host);
+	if (ret)
+		goto err;
+
+	platform_set_drvdata(pdev, host);
+
+	dev_dbg(dev, "%s -> OK\n", __func__);
+
+	return 0;
+
+err:
+	dev_dbg(dev, "%s -> err %d\n", __func__, ret);
+	mmc_free_host(mmc);
+
+	return ret;
+}
+
+static int bcm2835_remove(struct platform_device *pdev)
+{
+	struct bcm2835_host *host = platform_get_drvdata(pdev);
+
+	mmc_remove_host(host->mmc);
+
+	writel(SDVDD_POWER_OFF, host->ioaddr + SDVDD);
+
+	free_irq(host->irq, host);
+
+	cancel_work_sync(&host->dma_work);
+	cancel_delayed_work_sync(&host->timeout_work);
+
+	mmc_free_host(host->mmc);
+	platform_set_drvdata(pdev, NULL);
+
+	return 0;
+}
+
+static const struct of_device_id bcm2835_match[] = {
+	{ .compatible = "brcm,bcm2835-sdhost" },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, bcm2835_match);
+
+static struct platform_driver bcm2835_driver = {
+	.probe = bcm2835_probe,
+	.remove = bcm2835_remove,
+	.driver = {
+		.name = "sdhost-bcm2835",
+		.of_match_table = bcm2835_match,
+	},
+};
+module_platform_driver(bcm2835_driver);
+
+MODULE_ALIAS("platform:sdhost-bcm2835");
+MODULE_DESCRIPTION("BCM2835 SDHost driver");
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Phil Elwell");
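The integer-divisor math in bcm2835_set_clock() above can be modelled in a few lines. This is an illustrative sketch only, not part of the patch; the SDCDIV_MAX_CDIV value of 0x7ff is an assumption based on the 11-bit register field described in the comment.

```python
SDCDIV_MAX_CDIV = 0x7ff  # assumed: 11-bit SDCDIV field per the driver comment

def bcm2835_divisor(max_clk, clock):
    """Model of the bcm2835_set_clock() divisor computation.

    Returns (SDCDIV register value, resulting clock in Hz).
    """
    if clock < 100000:
        # Can't stop the clock; run it as slowly as possible.
        return SDCDIV_MAX_CDIV, max_clk // (SDCDIV_MAX_CDIV + 2)

    div = max_clk // clock
    if div < 2:
        div = 2
    # Never exceed the requested frequency; round the divisor up if needed.
    if (max_clk // div) > clock:
        div += 1
    div -= 2  # the register holds (div - 2)
    div = min(div, SDCDIV_MAX_CDIV)
    return div, max_clk // (div + 2)

# With a 250 MHz core clock, SDCDIV 623 yields the 400 kHz ident clock
# mentioned in the driver comment, and SDCDIV 3 yields 50 MHz.
print(bcm2835_divisor(250_000_000, 400_000))     # → (623, 400000)
print(bcm2835_divisor(250_000_000, 50_000_000))  # → (3, 50000000)
```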
drivers/mmc/host/cavium-octeon.c | 351 +
+/*
+ * Driver for MMC and SSD cards for Cavium OCTEON SOCs.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2012-2017 Cavium Inc.
+ */
+#include <linux/dma-mapping.h>
+#include <linux/gpio/consumer.h>
+#include <linux/interrupt.h>
+#include <linux/mmc/mmc.h>
+#include <linux/mmc/slot-gpio.h>
+#include <linux/module.h>
+#include <linux/of_platform.h>
+#include <asm/octeon/octeon.h>
+#include "cavium.h"
+
+#define CVMX_MIO_BOOT_CTL CVMX_ADD_IO_SEG(0x00011800000000D0ull)
+
+/*
+ * The l2c* functions below are used for the EMMC-17978 workaround.
+ *
+ * Due to a bug in the design of the MMC bus hardware, the 2nd to last
+ * cache block of a DMA read must be locked into the L2 Cache.
+ * Otherwise, data corruption may occur.
+ */
+static inline void *phys_to_ptr(u64 address)
+{
+	return (void *)(address | (1ull << 63)); /* XKPHYS */
+}
+
+/*
+ * Lock a single line into L2. The line is zeroed before locking
+ * to make sure no dram accesses are made.
+ */
+static void l2c_lock_line(u64 addr)
+{
+	char *addr_ptr = phys_to_ptr(addr);
+
+	asm volatile (
+		"cache 31, %[line]"	/* Lock the line */
+		::[line] "m" (*addr_ptr));
+}
+
+/* Unlock a single line in the L2 cache. */
+static void l2c_unlock_line(u64 addr)
+{
+	char *addr_ptr = phys_to_ptr(addr);
+
+	asm volatile (
+		"cache 23, %[line]"	/* Unlock the line */
+		::[line] "m" (*addr_ptr));
+}
+
+/* Locks a memory region in the L2 cache. */
+static void l2c_lock_mem_region(u64 start, u64 len)
+{
+	u64 end;
+
+	/* Round start/end to cache line boundaries */
+	end = ALIGN(start + len - 1, CVMX_CACHE_LINE_SIZE);
+	start = ALIGN(start, CVMX_CACHE_LINE_SIZE);
+
+	while (start <= end) {
+		l2c_lock_line(start);
+		start += CVMX_CACHE_LINE_SIZE;
+	}
+	asm volatile("sync");
+}
+
+/* Unlock a memory region in the L2 cache. */
+static void l2c_unlock_mem_region(u64 start, u64 len)
+{
+	u64 end;
+
+	/* Round start/end to cache line boundaries */
+	end = ALIGN(start + len - 1, CVMX_CACHE_LINE_SIZE);
+	start = ALIGN(start, CVMX_CACHE_LINE_SIZE);
+
+	while (start <= end) {
+		l2c_unlock_line(start);
+		start += CVMX_CACHE_LINE_SIZE;
+	}
+}
+
+static void octeon_mmc_acquire_bus(struct cvm_mmc_host *host)
+{
+	if (!host->has_ciu3) {
+		down(&octeon_bootbus_sem);
+		/* For CN70XX, switch the MMC controller onto the bus. */
+		if (OCTEON_IS_MODEL(OCTEON_CN70XX))
+			writeq(0, (void __iomem *)CVMX_MIO_BOOT_CTL);
+	} else {
+		down(&host->mmc_serializer);
+	}
+}
+
+static void octeon_mmc_release_bus(struct cvm_mmc_host *host)
+{
+	if (!host->has_ciu3)
+		up(&octeon_bootbus_sem);
+	else
+		up(&host->mmc_serializer);
+}
+
+static void octeon_mmc_int_enable(struct cvm_mmc_host *host, u64 val)
+{
+	writeq(val, host->base + MIO_EMM_INT(host));
+	if (!host->dma_active || (host->dma_active && !host->has_ciu3))
+		writeq(val, host->base + MIO_EMM_INT_EN(host));
+}
+
+static void octeon_mmc_set_shared_power(struct cvm_mmc_host *host, int dir)
+{
+	if (dir == 0)
+		if (!atomic_dec_return(&host->shared_power_users))
+			gpiod_set_value_cansleep(host->global_pwr_gpiod, 0);
+	if (dir == 1)
+		if (atomic_inc_return(&host->shared_power_users) == 1)
+			gpiod_set_value_cansleep(host->global_pwr_gpiod, 1);
+}
+
+static void octeon_mmc_dmar_fixup(struct cvm_mmc_host *host,
+				  struct mmc_command *cmd,
+				  struct mmc_data *data,
+				  u64 addr)
+{
+	if (cmd->opcode != MMC_WRITE_MULTIPLE_BLOCK)
+		return;
+	if (data->blksz * data->blocks <= 1024)
+		return;
+
+	host->n_minus_one = addr + (data->blksz * data->blocks) - 1024;
+	l2c_lock_mem_region(host->n_minus_one, 512);
+}
+
+static void octeon_mmc_dmar_fixup_done(struct cvm_mmc_host *host)
+{
+	if (!host->n_minus_one)
+		return;
+	l2c_unlock_mem_region(host->n_minus_one, 512);
+	host->n_minus_one = 0;
+}
+
+static int octeon_mmc_probe(struct platform_device *pdev)
+{
+	struct device_node *cn, *node = pdev->dev.of_node;
+	struct cvm_mmc_host *host;
+	struct resource *res;
+	void __iomem *base;
+	int mmc_irq[9];
+	int i, ret = 0;
+	u64 val;
+
+	host = devm_kzalloc(&pdev->dev, sizeof(*host), GFP_KERNEL);
+	if (!host)
+		return -ENOMEM;
+
+	spin_lock_init(&host->irq_handler_lock);
+	sema_init(&host->mmc_serializer, 1);
+
+	host->dev = &pdev->dev;
+	host->acquire_bus = octeon_mmc_acquire_bus;
+	host->release_bus = octeon_mmc_release_bus;
+	host->int_enable = octeon_mmc_int_enable;
+	host->set_shared_power = octeon_mmc_set_shared_power;
+	if (OCTEON_IS_MODEL(OCTEON_CN6XXX) ||
+	    OCTEON_IS_MODEL(OCTEON_CNF7XXX)) {
+		host->dmar_fixup = octeon_mmc_dmar_fixup;
+		host->dmar_fixup_done = octeon_mmc_dmar_fixup_done;
+	}
+
+	host->sys_freq = octeon_get_io_clock_rate();
+
+	if (of_device_is_compatible(node, "cavium,octeon-7890-mmc")) {
+		host->big_dma_addr = true;
+		host->need_irq_handler_lock = true;
+		host->has_ciu3 = true;
+		host->use_sg = true;
+		/*
+		 * First seven are the EMM_INT bits 0..6, then two for
+		 * the EMM_DMA_INT bits
+		 */
+		for (i = 0; i < 9; i++) {
+			mmc_irq[i] = platform_get_irq(pdev, i);
+			if (mmc_irq[i] < 0)
+				return mmc_irq[i];
+
+			/* work around legacy u-boot device trees */
+			irq_set_irq_type(mmc_irq[i], IRQ_TYPE_EDGE_RISING);
+		}
+	} else {
+		host->big_dma_addr = false;
+		host->need_irq_handler_lock = false;
+		host->has_ciu3 = false;
+		/* First one is EMM second DMA */
+		for (i = 0; i < 2; i++) {
+			mmc_irq[i] = platform_get_irq(pdev, i);
+			if (mmc_irq[i] < 0)
+				return mmc_irq[i];
+		}
+	}
+
+	host->last_slot = -1;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		dev_err(&pdev->dev, "Platform resource[0] is missing\n");
+		return -ENXIO;
+	}
+	base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+	host->base = (void __iomem *)base;
+	host->reg_off = 0;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	if (!res) {
+		dev_err(&pdev->dev, "Platform resource[1] is missing\n");
+		return -EINVAL;
+	}
+	base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+	host->dma_base = (void __iomem *)base;
+	/*
+	 * To keep the register addresses shared we intentionally use
+	 * a negative offset here, first register used on Octeon therefore
+	 * starts at 0x20 (MIO_EMM_DMA_CFG).
+	 */
+	host->reg_off_dma = -0x20;
+
+	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret)
+		return ret;
+
+	/*
+	 * Clear out any pending interrupts that may be left over from
+	 * bootloader.
+	 */
+	val = readq(host->base + MIO_EMM_INT(host));
+	writeq(val, host->base + MIO_EMM_INT(host));
+
+	if (host->has_ciu3) {
+		/* Only CMD_DONE, DMA_DONE, CMD_ERR, DMA_ERR */
+		for (i = 1; i <= 4; i++) {
+			ret = devm_request_irq(&pdev->dev, mmc_irq[i],
+					       cvm_mmc_interrupt,
+					       0, cvm_mmc_irq_names[i], host);
+			if (ret < 0) {
+				dev_err(&pdev->dev, "Error: devm_request_irq %d\n",
+					mmc_irq[i]);
+				return ret;
+			}
+		}
+	} else {
+		ret = devm_request_irq(&pdev->dev, mmc_irq[0],
+				       cvm_mmc_interrupt, 0, KBUILD_MODNAME,
+				       host);
+		if (ret < 0) {
+			dev_err(&pdev->dev, "Error: devm_request_irq %d\n",
+				mmc_irq[0]);
+			return ret;
+		}
+	}
+
+	host->global_pwr_gpiod = devm_gpiod_get_optional(&pdev->dev,
+							 "power-gpios",
+							 GPIOD_OUT_HIGH);
+	if (IS_ERR(host->global_pwr_gpiod)) {
+		dev_err(&pdev->dev, "Invalid power GPIO\n");
+		return PTR_ERR(host->global_pwr_gpiod);
+	}
+
+	platform_set_drvdata(pdev, host);
+
+	i = 0;
+	for_each_child_of_node(node, cn) {
+		host->slot_pdev[i] =
+			of_platform_device_create(cn, NULL, &pdev->dev);
+		if (!host->slot_pdev[i]) {
+			i++;
+			continue;
+		}
+		ret = cvm_mmc_of_slot_probe(&host->slot_pdev[i]->dev, host);
+		if (ret) {
+			dev_err(&pdev->dev, "Error populating slots\n");
+			octeon_mmc_set_shared_power(host, 0);
+			return ret;
+		}
+		i++;
+	}
+	return 0;
+}
+
+static int octeon_mmc_remove(struct platform_device *pdev)
+{
+	struct cvm_mmc_host *host = platform_get_drvdata(pdev);
+	u64 dma_cfg;
+	int i;
+
+	for (i = 0; i < CAVIUM_MAX_MMC; i++)
+		if (host->slot[i])
+			cvm_mmc_of_slot_remove(host->slot[i]);
+
+	dma_cfg = readq(host->dma_base + MIO_EMM_DMA_CFG(host));
+	dma_cfg &= ~MIO_EMM_DMA_CFG_EN;
+	writeq(dma_cfg, host->dma_base + MIO_EMM_DMA_CFG(host));
+
octeon_mmc_set_shared_power(host, 0); 313 + return 0; 314 + } 315 + 316 + static const struct of_device_id octeon_mmc_match[] = { 317 + { 318 + .compatible = "cavium,octeon-6130-mmc", 319 + }, 320 + { 321 + .compatible = "cavium,octeon-7890-mmc", 322 + }, 323 + {}, 324 + }; 325 + MODULE_DEVICE_TABLE(of, octeon_mmc_match); 326 + 327 + static struct platform_driver octeon_mmc_driver = { 328 + .probe = octeon_mmc_probe, 329 + .remove = octeon_mmc_remove, 330 + .driver = { 331 + .name = KBUILD_MODNAME, 332 + .of_match_table = octeon_mmc_match, 333 + }, 334 + }; 335 + 336 + static int __init octeon_mmc_init(void) 337 + { 338 + return platform_driver_register(&octeon_mmc_driver); 339 + } 340 + 341 + static void __exit octeon_mmc_cleanup(void) 342 + { 343 + platform_driver_unregister(&octeon_mmc_driver); 344 + } 345 + 346 + module_init(octeon_mmc_init); 347 + module_exit(octeon_mmc_cleanup); 348 + 349 + MODULE_AUTHOR("Cavium Inc. <support@cavium.com>"); 350 + MODULE_DESCRIPTION("Low-level driver for Cavium OCTEON MMC/SSD card"); 351 + MODULE_LICENSE("GPL");
+187
drivers/mmc/host/cavium-thunderx.c
··· 1 + /* 2 + * Driver for MMC and SSD cards for Cavium ThunderX SOCs. 3 + * 4 + * This file is subject to the terms and conditions of the GNU General Public 5 + * License. See the file "COPYING" in the main directory of this archive 6 + * for more details. 7 + * 8 + * Copyright (C) 2016 Cavium Inc. 9 + */ 10 + #include <linux/dma-mapping.h> 11 + #include <linux/interrupt.h> 12 + #include <linux/mmc/mmc.h> 13 + #include <linux/module.h> 14 + #include <linux/of.h> 15 + #include <linux/of_platform.h> 16 + #include <linux/pci.h> 17 + #include "cavium.h" 18 + 19 + static void thunder_mmc_acquire_bus(struct cvm_mmc_host *host) 20 + { 21 + down(&host->mmc_serializer); 22 + } 23 + 24 + static void thunder_mmc_release_bus(struct cvm_mmc_host *host) 25 + { 26 + up(&host->mmc_serializer); 27 + } 28 + 29 + static void thunder_mmc_int_enable(struct cvm_mmc_host *host, u64 val) 30 + { 31 + writeq(val, host->base + MIO_EMM_INT(host)); 32 + writeq(val, host->base + MIO_EMM_INT_EN_SET(host)); 33 + } 34 + 35 + static int thunder_mmc_register_interrupts(struct cvm_mmc_host *host, 36 + struct pci_dev *pdev) 37 + { 38 + int nvec, ret, i; 39 + 40 + nvec = pci_alloc_irq_vectors(pdev, 1, 9, PCI_IRQ_MSIX); 41 + if (nvec < 0) 42 + return nvec; 43 + 44 + /* register interrupts */ 45 + for (i = 0; i < nvec; i++) { 46 + ret = devm_request_irq(&pdev->dev, pci_irq_vector(pdev, i), 47 + cvm_mmc_interrupt, 48 + 0, cvm_mmc_irq_names[i], host); 49 + if (ret) 50 + return ret; 51 + } 52 + return 0; 53 + } 54 + 55 + static int thunder_mmc_probe(struct pci_dev *pdev, 56 + const struct pci_device_id *id) 57 + { 58 + struct device_node *node = pdev->dev.of_node; 59 + struct device *dev = &pdev->dev; 60 + struct device_node *child_node; 61 + struct cvm_mmc_host *host; 62 + int ret, i = 0; 63 + 64 + host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL); 65 + if (!host) 66 + return -ENOMEM; 67 + 68 + pci_set_drvdata(pdev, host); 69 + ret = pcim_enable_device(pdev); 70 + if (ret) 71 + return ret; 72 + 73 + 
ret = pci_request_regions(pdev, KBUILD_MODNAME); 74 + if (ret) 75 + return ret; 76 + 77 + host->base = pcim_iomap(pdev, 0, pci_resource_len(pdev, 0)); 78 + if (!host->base) 79 + return -EINVAL; 80 + 81 + /* On ThunderX these are identical */ 82 + host->dma_base = host->base; 83 + 84 + host->reg_off = 0x2000; 85 + host->reg_off_dma = 0x160; 86 + 87 + host->clk = devm_clk_get(dev, NULL); 88 + if (IS_ERR(host->clk)) 89 + return PTR_ERR(host->clk); 90 + 91 + ret = clk_prepare_enable(host->clk); 92 + if (ret) 93 + return ret; 94 + host->sys_freq = clk_get_rate(host->clk); 95 + 96 + spin_lock_init(&host->irq_handler_lock); 97 + sema_init(&host->mmc_serializer, 1); 98 + 99 + host->dev = dev; 100 + host->acquire_bus = thunder_mmc_acquire_bus; 101 + host->release_bus = thunder_mmc_release_bus; 102 + host->int_enable = thunder_mmc_int_enable; 103 + 104 + host->use_sg = true; 105 + host->big_dma_addr = true; 106 + host->need_irq_handler_lock = true; 107 + host->last_slot = -1; 108 + 109 + ret = dma_set_mask(dev, DMA_BIT_MASK(48)); 110 + if (ret) 111 + goto error; 112 + 113 + /* 114 + * Clear out any pending interrupts that may be left over from 115 + * the bootloader. Writing 1 to the bits clears them. 116 + */ 117 + writeq(127, host->base + MIO_EMM_INT_EN(host)); 118 + writeq(3, host->base + MIO_EMM_DMA_INT_ENA_W1C(host)); 119 + /* Clear DMA FIFO */ 120 + writeq(BIT_ULL(16), host->base + MIO_EMM_DMA_FIFO_CFG(host)); 121 + 122 + ret = thunder_mmc_register_interrupts(host, pdev); 123 + if (ret) 124 + goto error; 125 + 126 + for_each_child_of_node(node, child_node) { 127 + /* 128 + * mmc_of_parse and devm* require one device per slot. 129 + * Create a dummy device per slot and set the node pointer to 130 + * the slot. The easiest way to get this is using 131 + * of_platform_device_create. 
132 + */ 133 + if (of_device_is_compatible(child_node, "mmc-slot")) { 134 + host->slot_pdev[i] = of_platform_device_create(child_node, NULL, 135 + &pdev->dev); 136 + if (!host->slot_pdev[i]) 137 + continue; 138 + 139 + ret = cvm_mmc_of_slot_probe(&host->slot_pdev[i]->dev, host); 140 + if (ret) 141 + goto error; 142 + } 143 + i++; 144 + } 145 + dev_info(dev, "probed\n"); 146 + return 0; 147 + 148 + error: 149 + clk_disable_unprepare(host->clk); 150 + return ret; 151 + } 152 + 153 + static void thunder_mmc_remove(struct pci_dev *pdev) 154 + { 155 + struct cvm_mmc_host *host = pci_get_drvdata(pdev); 156 + u64 dma_cfg; 157 + int i; 158 + 159 + for (i = 0; i < CAVIUM_MAX_MMC; i++) 160 + if (host->slot[i]) 161 + cvm_mmc_of_slot_remove(host->slot[i]); 162 + 163 + dma_cfg = readq(host->dma_base + MIO_EMM_DMA_CFG(host)); 164 + dma_cfg &= ~MIO_EMM_DMA_CFG_EN; 165 + writeq(dma_cfg, host->dma_base + MIO_EMM_DMA_CFG(host)); 166 + 167 + clk_disable_unprepare(host->clk); 168 + } 169 + 170 + static const struct pci_device_id thunder_mmc_id_table[] = { 171 + { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, 0xa010) }, 172 + { 0, } /* end of table */ 173 + }; 174 + 175 + static struct pci_driver thunder_mmc_driver = { 176 + .name = KBUILD_MODNAME, 177 + .id_table = thunder_mmc_id_table, 178 + .probe = thunder_mmc_probe, 179 + .remove = thunder_mmc_remove, 180 + }; 181 + 182 + module_pci_driver(thunder_mmc_driver); 183 + 184 + MODULE_AUTHOR("Cavium Inc."); 185 + MODULE_DESCRIPTION("Cavium ThunderX eMMC Driver"); 186 + MODULE_LICENSE("GPL"); 187 + MODULE_DEVICE_TABLE(pci, thunder_mmc_id_table);
+1090
drivers/mmc/host/cavium.c
··· 1 + /* 2 + * Shared part of driver for MMC/SDHC controller on Cavium OCTEON and 3 + * ThunderX SOCs. 4 + * 5 + * This file is subject to the terms and conditions of the GNU General Public 6 + * License. See the file "COPYING" in the main directory of this archive 7 + * for more details. 8 + * 9 + * Copyright (C) 2012-2017 Cavium Inc. 10 + * Authors: 11 + * David Daney <david.daney@cavium.com> 12 + * Peter Swain <pswain@cavium.com> 13 + * Steven J. Hill <steven.hill@cavium.com> 14 + * Jan Glauber <jglauber@cavium.com> 15 + */ 16 + #include <linux/bitfield.h> 17 + #include <linux/delay.h> 18 + #include <linux/dma-direction.h> 19 + #include <linux/dma-mapping.h> 20 + #include <linux/gpio/consumer.h> 21 + #include <linux/interrupt.h> 22 + #include <linux/mmc/mmc.h> 23 + #include <linux/mmc/slot-gpio.h> 24 + #include <linux/module.h> 25 + #include <linux/regulator/consumer.h> 26 + #include <linux/scatterlist.h> 27 + #include <linux/time.h> 28 + 29 + #include "cavium.h" 30 + 31 + const char *cvm_mmc_irq_names[] = { 32 + "MMC Buffer", 33 + "MMC Command", 34 + "MMC DMA", 35 + "MMC Command Error", 36 + "MMC DMA Error", 37 + "MMC Switch", 38 + "MMC Switch Error", 39 + "MMC DMA int Fifo", 40 + "MMC DMA int", 41 + }; 42 + 43 + /* 44 + * The Cavium MMC host hardware assumes that all commands have fixed 45 + * command and response types. These are correct if MMC devices are 46 + * being used. However, non-MMC devices like SD use command and 47 + * response types that are unexpected by the host hardware. 48 + * 49 + * The command and response types can be overridden by supplying an 50 + * XOR value that is applied to the type. We calculate the XOR value 51 + * from the values in this table and the flags passed from the MMC 52 + * core. 
53 + */ 54 + static struct cvm_mmc_cr_type cvm_mmc_cr_types[] = { 55 + {0, 0}, /* CMD0 */ 56 + {0, 3}, /* CMD1 */ 57 + {0, 2}, /* CMD2 */ 58 + {0, 1}, /* CMD3 */ 59 + {0, 0}, /* CMD4 */ 60 + {0, 1}, /* CMD5 */ 61 + {0, 1}, /* CMD6 */ 62 + {0, 1}, /* CMD7 */ 63 + {1, 1}, /* CMD8 */ 64 + {0, 2}, /* CMD9 */ 65 + {0, 2}, /* CMD10 */ 66 + {1, 1}, /* CMD11 */ 67 + {0, 1}, /* CMD12 */ 68 + {0, 1}, /* CMD13 */ 69 + {1, 1}, /* CMD14 */ 70 + {0, 0}, /* CMD15 */ 71 + {0, 1}, /* CMD16 */ 72 + {1, 1}, /* CMD17 */ 73 + {1, 1}, /* CMD18 */ 74 + {3, 1}, /* CMD19 */ 75 + {2, 1}, /* CMD20 */ 76 + {0, 0}, /* CMD21 */ 77 + {0, 0}, /* CMD22 */ 78 + {0, 1}, /* CMD23 */ 79 + {2, 1}, /* CMD24 */ 80 + {2, 1}, /* CMD25 */ 81 + {2, 1}, /* CMD26 */ 82 + {2, 1}, /* CMD27 */ 83 + {0, 1}, /* CMD28 */ 84 + {0, 1}, /* CMD29 */ 85 + {1, 1}, /* CMD30 */ 86 + {1, 1}, /* CMD31 */ 87 + {0, 0}, /* CMD32 */ 88 + {0, 0}, /* CMD33 */ 89 + {0, 0}, /* CMD34 */ 90 + {0, 1}, /* CMD35 */ 91 + {0, 1}, /* CMD36 */ 92 + {0, 0}, /* CMD37 */ 93 + {0, 1}, /* CMD38 */ 94 + {0, 4}, /* CMD39 */ 95 + {0, 5}, /* CMD40 */ 96 + {0, 0}, /* CMD41 */ 97 + {2, 1}, /* CMD42 */ 98 + {0, 0}, /* CMD43 */ 99 + {0, 0}, /* CMD44 */ 100 + {0, 0}, /* CMD45 */ 101 + {0, 0}, /* CMD46 */ 102 + {0, 0}, /* CMD47 */ 103 + {0, 0}, /* CMD48 */ 104 + {0, 0}, /* CMD49 */ 105 + {0, 0}, /* CMD50 */ 106 + {0, 0}, /* CMD51 */ 107 + {0, 0}, /* CMD52 */ 108 + {0, 0}, /* CMD53 */ 109 + {0, 0}, /* CMD54 */ 110 + {0, 1}, /* CMD55 */ 111 + {0xff, 0xff}, /* CMD56 */ 112 + {0, 0}, /* CMD57 */ 113 + {0, 0}, /* CMD58 */ 114 + {0, 0}, /* CMD59 */ 115 + {0, 0}, /* CMD60 */ 116 + {0, 0}, /* CMD61 */ 117 + {0, 0}, /* CMD62 */ 118 + {0, 0} /* CMD63 */ 119 + }; 120 + 121 + static struct cvm_mmc_cr_mods cvm_mmc_get_cr_mods(struct mmc_command *cmd) 122 + { 123 + struct cvm_mmc_cr_type *cr; 124 + u8 hardware_ctype, hardware_rtype; 125 + u8 desired_ctype = 0, desired_rtype = 0; 126 + struct cvm_mmc_cr_mods r; 127 + 128 + cr = cvm_mmc_cr_types + (cmd->opcode & 0x3f); 129 
+ hardware_ctype = cr->ctype; 130 + hardware_rtype = cr->rtype; 131 + if (cmd->opcode == MMC_GEN_CMD) 132 + hardware_ctype = (cmd->arg & 1) ? 1 : 2; 133 + 134 + switch (mmc_cmd_type(cmd)) { 135 + case MMC_CMD_ADTC: 136 + desired_ctype = (cmd->data->flags & MMC_DATA_WRITE) ? 2 : 1; 137 + break; 138 + case MMC_CMD_AC: 139 + case MMC_CMD_BC: 140 + case MMC_CMD_BCR: 141 + desired_ctype = 0; 142 + break; 143 + } 144 + 145 + switch (mmc_resp_type(cmd)) { 146 + case MMC_RSP_NONE: 147 + desired_rtype = 0; 148 + break; 149 + case MMC_RSP_R1: /* MMC_RSP_R5, MMC_RSP_R6, MMC_RSP_R7 */ 150 + case MMC_RSP_R1B: 151 + desired_rtype = 1; 152 + break; 153 + case MMC_RSP_R2: 154 + desired_rtype = 2; 155 + break; 156 + case MMC_RSP_R3: /* MMC_RSP_R4 */ 157 + desired_rtype = 3; 158 + break; 159 + } 160 + r.ctype_xor = desired_ctype ^ hardware_ctype; 161 + r.rtype_xor = desired_rtype ^ hardware_rtype; 162 + return r; 163 + } 164 + 165 + static void check_switch_errors(struct cvm_mmc_host *host) 166 + { 167 + u64 emm_switch; 168 + 169 + emm_switch = readq(host->base + MIO_EMM_SWITCH(host)); 170 + if (emm_switch & MIO_EMM_SWITCH_ERR0) 171 + dev_err(host->dev, "Switch power class error\n"); 172 + if (emm_switch & MIO_EMM_SWITCH_ERR1) 173 + dev_err(host->dev, "Switch hs timing error\n"); 174 + if (emm_switch & MIO_EMM_SWITCH_ERR2) 175 + dev_err(host->dev, "Switch bus width error\n"); 176 + } 177 + 178 + static void clear_bus_id(u64 *reg) 179 + { 180 + u64 bus_id_mask = GENMASK_ULL(61, 60); 181 + 182 + *reg &= ~bus_id_mask; 183 + } 184 + 185 + static void set_bus_id(u64 *reg, int bus_id) 186 + { 187 + clear_bus_id(reg); 188 + *reg |= FIELD_PREP(GENMASK_ULL(61, 60), bus_id); 189 + } 190 + 191 + static int get_bus_id(u64 reg) 192 + { 193 + return FIELD_GET(GENMASK_ULL(61, 60), reg); 194 + } 195 + 196 + /* 197 + * We never set the switch_exe bit since that would interfere 198 + * with the commands sent by the MMC core. 
199 + */ 200 + static void do_switch(struct cvm_mmc_host *host, u64 emm_switch) 201 + { 202 + int retries = 100; 203 + u64 rsp_sts; 204 + int bus_id; 205 + 206 + /* 207 + * Mode settings are only taken from slot 0. Work around that hardware 208 + * issue by first switching to slot 0. 209 + */ 210 + bus_id = get_bus_id(emm_switch); 211 + clear_bus_id(&emm_switch); 212 + writeq(emm_switch, host->base + MIO_EMM_SWITCH(host)); 213 + 214 + set_bus_id(&emm_switch, bus_id); 215 + writeq(emm_switch, host->base + MIO_EMM_SWITCH(host)); 216 + 217 + /* wait for the switch to finish */ 218 + do { 219 + rsp_sts = readq(host->base + MIO_EMM_RSP_STS(host)); 220 + if (!(rsp_sts & MIO_EMM_RSP_STS_SWITCH_VAL)) 221 + break; 222 + udelay(10); 223 + } while (--retries); 224 + 225 + check_switch_errors(host); 226 + } 227 + 228 + static bool switch_val_changed(struct cvm_mmc_slot *slot, u64 new_val) 229 + { 230 + /* Match BUS_ID, HS_TIMING, BUS_WIDTH, POWER_CLASS, CLK_HI, CLK_LO */ 231 + u64 match = 0x3001070fffffffffull; 232 + 233 + return (slot->cached_switch & match) != (new_val & match); 234 + } 235 + 236 + static void set_wdog(struct cvm_mmc_slot *slot, unsigned int ns) 237 + { 238 + u64 timeout; 239 + 240 + if (!slot->clock) 241 + return; 242 + 243 + if (ns) 244 + timeout = (slot->clock * ns) / NSEC_PER_SEC; 245 + else 246 + timeout = (slot->clock * 850ull) / 1000ull; 247 + writeq(timeout, slot->host->base + MIO_EMM_WDOG(slot->host)); 248 + } 249 + 250 + static void cvm_mmc_reset_bus(struct cvm_mmc_slot *slot) 251 + { 252 + struct cvm_mmc_host *host = slot->host; 253 + u64 emm_switch, wdog; 254 + 255 + emm_switch = readq(slot->host->base + MIO_EMM_SWITCH(host)); 256 + emm_switch &= ~(MIO_EMM_SWITCH_EXE | MIO_EMM_SWITCH_ERR0 | 257 + MIO_EMM_SWITCH_ERR1 | MIO_EMM_SWITCH_ERR2); 258 + set_bus_id(&emm_switch, slot->bus_id); 259 + 260 + wdog = readq(slot->host->base + MIO_EMM_WDOG(host)); 261 + do_switch(slot->host, emm_switch); 262 + 263 + slot->cached_switch = emm_switch; 264 + 265 + 
msleep(20); 266 + 267 + writeq(wdog, slot->host->base + MIO_EMM_WDOG(host)); 268 + } 269 + 270 + /* Switch to another slot if needed */ 271 + static void cvm_mmc_switch_to(struct cvm_mmc_slot *slot) 272 + { 273 + struct cvm_mmc_host *host = slot->host; 274 + struct cvm_mmc_slot *old_slot; 275 + u64 emm_sample, emm_switch; 276 + 277 + if (slot->bus_id == host->last_slot) 278 + return; 279 + 280 + if (host->last_slot >= 0 && host->slot[host->last_slot]) { 281 + old_slot = host->slot[host->last_slot]; 282 + old_slot->cached_switch = readq(host->base + MIO_EMM_SWITCH(host)); 283 + old_slot->cached_rca = readq(host->base + MIO_EMM_RCA(host)); 284 + } 285 + 286 + writeq(slot->cached_rca, host->base + MIO_EMM_RCA(host)); 287 + emm_switch = slot->cached_switch; 288 + set_bus_id(&emm_switch, slot->bus_id); 289 + do_switch(host, emm_switch); 290 + 291 + emm_sample = FIELD_PREP(MIO_EMM_SAMPLE_CMD_CNT, slot->cmd_cnt) | 292 + FIELD_PREP(MIO_EMM_SAMPLE_DAT_CNT, slot->dat_cnt); 293 + writeq(emm_sample, host->base + MIO_EMM_SAMPLE(host)); 294 + 295 + host->last_slot = slot->bus_id; 296 + } 297 + 298 + static void do_read(struct cvm_mmc_host *host, struct mmc_request *req, 299 + u64 dbuf) 300 + { 301 + struct sg_mapping_iter *smi = &host->smi; 302 + int data_len = req->data->blocks * req->data->blksz; 303 + int bytes_xfered, shift = -1; 304 + u64 dat = 0; 305 + 306 + /* Auto inc from offset zero */ 307 + writeq((0x10000 | (dbuf << 6)), host->base + MIO_EMM_BUF_IDX(host)); 308 + 309 + for (bytes_xfered = 0; bytes_xfered < data_len;) { 310 + if (smi->consumed >= smi->length) { 311 + if (!sg_miter_next(smi)) 312 + break; 313 + smi->consumed = 0; 314 + } 315 + 316 + if (shift < 0) { 317 + dat = readq(host->base + MIO_EMM_BUF_DAT(host)); 318 + shift = 56; 319 + } 320 + 321 + while (smi->consumed < smi->length && shift >= 0) { 322 + ((u8 *)smi->addr)[smi->consumed] = (dat >> shift) & 0xff; 323 + bytes_xfered++; 324 + smi->consumed++; 325 + shift -= 8; 326 + } 327 + } 328 + 329 + 
sg_miter_stop(smi); 330 + req->data->bytes_xfered = bytes_xfered; 331 + req->data->error = 0; 332 + } 333 + 334 + static void do_write(struct mmc_request *req) 335 + { 336 + req->data->bytes_xfered = req->data->blocks * req->data->blksz; 337 + req->data->error = 0; 338 + } 339 + 340 + static void set_cmd_response(struct cvm_mmc_host *host, struct mmc_request *req, 341 + u64 rsp_sts) 342 + { 343 + u64 rsp_hi, rsp_lo; 344 + 345 + if (!(rsp_sts & MIO_EMM_RSP_STS_RSP_VAL)) 346 + return; 347 + 348 + rsp_lo = readq(host->base + MIO_EMM_RSP_LO(host)); 349 + 350 + switch (FIELD_GET(MIO_EMM_RSP_STS_RSP_TYPE, rsp_sts)) { 351 + case 1: 352 + case 3: 353 + req->cmd->resp[0] = (rsp_lo >> 8) & 0xffffffff; 354 + req->cmd->resp[1] = 0; 355 + req->cmd->resp[2] = 0; 356 + req->cmd->resp[3] = 0; 357 + break; 358 + case 2: 359 + req->cmd->resp[3] = rsp_lo & 0xffffffff; 360 + req->cmd->resp[2] = (rsp_lo >> 32) & 0xffffffff; 361 + rsp_hi = readq(host->base + MIO_EMM_RSP_HI(host)); 362 + req->cmd->resp[1] = rsp_hi & 0xffffffff; 363 + req->cmd->resp[0] = (rsp_hi >> 32) & 0xffffffff; 364 + break; 365 + } 366 + } 367 + 368 + static int get_dma_dir(struct mmc_data *data) 369 + { 370 + return (data->flags & MMC_DATA_WRITE) ? 
DMA_TO_DEVICE : DMA_FROM_DEVICE; 371 + } 372 + 373 + static int finish_dma_single(struct cvm_mmc_host *host, struct mmc_data *data) 374 + { 375 + data->bytes_xfered = data->blocks * data->blksz; 376 + data->error = 0; 377 + return 1; 378 + } 379 + 380 + static int finish_dma_sg(struct cvm_mmc_host *host, struct mmc_data *data) 381 + { 382 + u64 fifo_cfg; 383 + int count; 384 + 385 + /* Check if there are any pending requests left */ 386 + fifo_cfg = readq(host->dma_base + MIO_EMM_DMA_FIFO_CFG(host)); 387 + count = FIELD_GET(MIO_EMM_DMA_FIFO_CFG_COUNT, fifo_cfg); 388 + if (count) 389 + dev_err(host->dev, "%u requests still pending\n", count); 390 + 391 + data->bytes_xfered = data->blocks * data->blksz; 392 + data->error = 0; 393 + 394 + /* Clear and disable FIFO */ 395 + writeq(BIT_ULL(16), host->dma_base + MIO_EMM_DMA_FIFO_CFG(host)); 396 + dma_unmap_sg(host->dev, data->sg, data->sg_len, get_dma_dir(data)); 397 + return 1; 398 + } 399 + 400 + static int finish_dma(struct cvm_mmc_host *host, struct mmc_data *data) 401 + { 402 + if (host->use_sg && data->sg_len > 1) 403 + return finish_dma_sg(host, data); 404 + else 405 + return finish_dma_single(host, data); 406 + } 407 + 408 + static int check_status(u64 rsp_sts) 409 + { 410 + if (rsp_sts & MIO_EMM_RSP_STS_RSP_BAD_STS || 411 + rsp_sts & MIO_EMM_RSP_STS_RSP_CRC_ERR || 412 + rsp_sts & MIO_EMM_RSP_STS_BLK_CRC_ERR) 413 + return -EILSEQ; 414 + if (rsp_sts & MIO_EMM_RSP_STS_RSP_TIMEOUT || 415 + rsp_sts & MIO_EMM_RSP_STS_BLK_TIMEOUT) 416 + return -ETIMEDOUT; 417 + if (rsp_sts & MIO_EMM_RSP_STS_DBUF_ERR) 418 + return -EIO; 419 + return 0; 420 + } 421 + 422 + /* Try to clean up failed DMA. 
*/ 423 + static void cleanup_dma(struct cvm_mmc_host *host, u64 rsp_sts) 424 + { 425 + u64 emm_dma; 426 + 427 + emm_dma = readq(host->base + MIO_EMM_DMA(host)); 428 + emm_dma |= FIELD_PREP(MIO_EMM_DMA_VAL, 1) | 429 + FIELD_PREP(MIO_EMM_DMA_DAT_NULL, 1); 430 + set_bus_id(&emm_dma, get_bus_id(rsp_sts)); 431 + writeq(emm_dma, host->base + MIO_EMM_DMA(host)); 432 + } 433 + 434 + irqreturn_t cvm_mmc_interrupt(int irq, void *dev_id) 435 + { 436 + struct cvm_mmc_host *host = dev_id; 437 + struct mmc_request *req; 438 + unsigned long flags = 0; 439 + u64 emm_int, rsp_sts; 440 + bool host_done; 441 + 442 + if (host->need_irq_handler_lock) 443 + spin_lock_irqsave(&host->irq_handler_lock, flags); 444 + else 445 + __acquire(&host->irq_handler_lock); 446 + 447 + /* Clear interrupt bits (write 1 clears). */ 448 + emm_int = readq(host->base + MIO_EMM_INT(host)); 449 + writeq(emm_int, host->base + MIO_EMM_INT(host)); 450 + 451 + if (emm_int & MIO_EMM_INT_SWITCH_ERR) 452 + check_switch_errors(host); 453 + 454 + req = host->current_req; 455 + if (!req) 456 + goto out; 457 + 458 + rsp_sts = readq(host->base + MIO_EMM_RSP_STS(host)); 459 + /* 460 + * dma_val set means DMA is still in progress. Don't touch 461 + * the request and wait for the interrupt indicating that 462 + * the DMA is finished. 
463 + */ 464 + if ((rsp_sts & MIO_EMM_RSP_STS_DMA_VAL) && host->dma_active) 465 + goto out; 466 + 467 + if (!host->dma_active && req->data && 468 + (emm_int & MIO_EMM_INT_BUF_DONE)) { 469 + unsigned int type = (rsp_sts >> 7) & 3; 470 + 471 + if (type == 1) 472 + do_read(host, req, rsp_sts & MIO_EMM_RSP_STS_DBUF); 473 + else if (type == 2) 474 + do_write(req); 475 + } 476 + 477 + host_done = emm_int & MIO_EMM_INT_CMD_DONE || 478 + emm_int & MIO_EMM_INT_DMA_DONE || 479 + emm_int & MIO_EMM_INT_CMD_ERR || 480 + emm_int & MIO_EMM_INT_DMA_ERR; 481 + 482 + if (!(host_done && req->done)) 483 + goto no_req_done; 484 + 485 + req->cmd->error = check_status(rsp_sts); 486 + 487 + if (host->dma_active && req->data) 488 + if (!finish_dma(host, req->data)) 489 + goto no_req_done; 490 + 491 + set_cmd_response(host, req, rsp_sts); 492 + if ((emm_int & MIO_EMM_INT_DMA_ERR) && 493 + (rsp_sts & MIO_EMM_RSP_STS_DMA_PEND)) 494 + cleanup_dma(host, rsp_sts); 495 + 496 + host->current_req = NULL; 497 + req->done(req); 498 + 499 + no_req_done: 500 + if (host->dmar_fixup_done) 501 + host->dmar_fixup_done(host); 502 + if (host_done) 503 + host->release_bus(host); 504 + out: 505 + if (host->need_irq_handler_lock) 506 + spin_unlock_irqrestore(&host->irq_handler_lock, flags); 507 + else 508 + __release(&host->irq_handler_lock); 509 + return IRQ_RETVAL(emm_int != 0); 510 + } 511 + 512 + /* 513 + * Program DMA_CFG and if needed DMA_ADR. 514 + * Returns 0 on error, DMA address otherwise. 515 + */ 516 + static u64 prepare_dma_single(struct cvm_mmc_host *host, struct mmc_data *data) 517 + { 518 + u64 dma_cfg, addr; 519 + int count, rw; 520 + 521 + count = dma_map_sg(host->dev, data->sg, data->sg_len, 522 + get_dma_dir(data)); 523 + if (!count) 524 + return 0; 525 + 526 + rw = (data->flags & MMC_DATA_WRITE) ? 
1 : 0; 527 + dma_cfg = FIELD_PREP(MIO_EMM_DMA_CFG_EN, 1) | 528 + FIELD_PREP(MIO_EMM_DMA_CFG_RW, rw); 529 + #ifdef __LITTLE_ENDIAN 530 + dma_cfg |= FIELD_PREP(MIO_EMM_DMA_CFG_ENDIAN, 1); 531 + #endif 532 + dma_cfg |= FIELD_PREP(MIO_EMM_DMA_CFG_SIZE, 533 + (sg_dma_len(&data->sg[0]) / 8) - 1); 534 + 535 + addr = sg_dma_address(&data->sg[0]); 536 + if (!host->big_dma_addr) 537 + dma_cfg |= FIELD_PREP(MIO_EMM_DMA_CFG_ADR, addr); 538 + writeq(dma_cfg, host->dma_base + MIO_EMM_DMA_CFG(host)); 539 + 540 + pr_debug("[%s] sg_dma_len: %u total sg_elem: %d\n", 541 + (rw) ? "W" : "R", sg_dma_len(&data->sg[0]), count); 542 + 543 + if (host->big_dma_addr) 544 + writeq(addr, host->dma_base + MIO_EMM_DMA_ADR(host)); 545 + return addr; 546 + } 547 + 548 + /* 549 + * Queue complete sg list into the FIFO. 550 + * Returns 0 on error, 1 otherwise. 551 + */ 552 + static u64 prepare_dma_sg(struct cvm_mmc_host *host, struct mmc_data *data) 553 + { 554 + struct scatterlist *sg; 555 + u64 fifo_cmd, addr; 556 + int count, i, rw; 557 + 558 + count = dma_map_sg(host->dev, data->sg, data->sg_len, 559 + get_dma_dir(data)); 560 + if (!count) 561 + return 0; 562 + if (count > 16) 563 + goto error; 564 + 565 + /* Enable FIFO by removing CLR bit */ 566 + writeq(0, host->dma_base + MIO_EMM_DMA_FIFO_CFG(host)); 567 + 568 + for_each_sg(data->sg, sg, count, i) { 569 + /* Program DMA address */ 570 + addr = sg_dma_address(sg); 571 + if (addr & 7) 572 + goto error; 573 + writeq(addr, host->dma_base + MIO_EMM_DMA_FIFO_ADR(host)); 574 + 575 + /* 576 + * If we have scatter-gather support we also have an extra 577 + * register for the DMA addr, so no need to check 578 + * host->big_dma_addr here. 579 + */ 580 + rw = (data->flags & MMC_DATA_WRITE) ? 1 : 0; 581 + fifo_cmd = FIELD_PREP(MIO_EMM_DMA_FIFO_CMD_RW, rw); 582 + 583 + /* enable interrupts on the last element */ 584 + fifo_cmd |= FIELD_PREP(MIO_EMM_DMA_FIFO_CMD_INTDIS, 585 + (i + 1 == count) ? 
0 : 1); 586 + 587 + #ifdef __LITTLE_ENDIAN 588 + fifo_cmd |= FIELD_PREP(MIO_EMM_DMA_FIFO_CMD_ENDIAN, 1); 589 + #endif 590 + fifo_cmd |= FIELD_PREP(MIO_EMM_DMA_FIFO_CMD_SIZE, 591 + sg_dma_len(sg) / 8 - 1); 592 + /* 593 + * The write copies the address and the command to the FIFO 594 + * and increments the FIFO's COUNT field. 595 + */ 596 + writeq(fifo_cmd, host->dma_base + MIO_EMM_DMA_FIFO_CMD(host)); 597 + pr_debug("[%s] sg_dma_len: %u sg_elem: %d/%d\n", 598 + (rw) ? "W" : "R", sg_dma_len(sg), i, count); 599 + } 600 + 601 + /* 602 + * Unlike prepare_dma_single, we don't return the 603 + * address here, as it would not make sense for scatter-gather. 604 + * The DMA fixup is only required on models that don't support 605 + * scatter-gather, so that is not a problem. 606 + */ 607 + return 1; 608 + 609 + error: 610 + WARN_ON_ONCE(1); 611 + dma_unmap_sg(host->dev, data->sg, data->sg_len, get_dma_dir(data)); 612 + /* Disable FIFO */ 613 + writeq(BIT_ULL(16), host->dma_base + MIO_EMM_DMA_FIFO_CFG(host)); 614 + return 0; 615 + } 616 + 617 + static u64 prepare_dma(struct cvm_mmc_host *host, struct mmc_data *data) 618 + { 619 + if (host->use_sg && data->sg_len > 1) 620 + return prepare_dma_sg(host, data); 621 + else 622 + return prepare_dma_single(host, data); 623 + } 624 + 625 + static u64 prepare_ext_dma(struct mmc_host *mmc, struct mmc_request *mrq) 626 + { 627 + struct cvm_mmc_slot *slot = mmc_priv(mmc); 628 + u64 emm_dma; 629 + 630 + emm_dma = FIELD_PREP(MIO_EMM_DMA_VAL, 1) | 631 + FIELD_PREP(MIO_EMM_DMA_SECTOR, 632 + mmc_card_is_blockaddr(mmc->card) ? 1 : 0) | 633 + FIELD_PREP(MIO_EMM_DMA_RW, 634 + (mrq->data->flags & MMC_DATA_WRITE) ? 
1 : 0) | 635 + FIELD_PREP(MIO_EMM_DMA_BLOCK_CNT, mrq->data->blocks) | 636 + FIELD_PREP(MIO_EMM_DMA_CARD_ADDR, mrq->cmd->arg); 637 + set_bus_id(&emm_dma, slot->bus_id); 638 + 639 + if (mmc_card_mmc(mmc->card) || (mmc_card_sd(mmc->card) && 640 + (mmc->card->scr.cmds & SD_SCR_CMD23_SUPPORT))) 641 + emm_dma |= FIELD_PREP(MIO_EMM_DMA_MULTI, 1); 642 + 643 + pr_debug("[%s] blocks: %u multi: %d\n", 644 + (emm_dma & MIO_EMM_DMA_RW) ? "W" : "R", 645 + mrq->data->blocks, (emm_dma & MIO_EMM_DMA_MULTI) ? 1 : 0); 646 + return emm_dma; 647 + } 648 + 649 + static void cvm_mmc_dma_request(struct mmc_host *mmc, 650 + struct mmc_request *mrq) 651 + { 652 + struct cvm_mmc_slot *slot = mmc_priv(mmc); 653 + struct cvm_mmc_host *host = slot->host; 654 + struct mmc_data *data; 655 + u64 emm_dma, addr; 656 + 657 + if (!mrq->data || !mrq->data->sg || !mrq->data->sg_len || 658 + !mrq->stop || mrq->stop->opcode != MMC_STOP_TRANSMISSION) { 659 + dev_err(&mmc->card->dev, 660 + "Error: cvm_mmc_dma_request no data\n"); 661 + goto error; 662 + } 663 + 664 + cvm_mmc_switch_to(slot); 665 + 666 + data = mrq->data; 667 + pr_debug("DMA request blocks: %d block_size: %d total_size: %d\n", 668 + data->blocks, data->blksz, data->blocks * data->blksz); 669 + if (data->timeout_ns) 670 + set_wdog(slot, data->timeout_ns); 671 + 672 + WARN_ON(host->current_req); 673 + host->current_req = mrq; 674 + 675 + emm_dma = prepare_ext_dma(mmc, mrq); 676 + addr = prepare_dma(host, data); 677 + if (!addr) { 678 + dev_err(host->dev, "prepare_dma failed\n"); 679 + goto error; 680 + } 681 + 682 + host->dma_active = true; 683 + host->int_enable(host, MIO_EMM_INT_CMD_ERR | MIO_EMM_INT_DMA_DONE | 684 + MIO_EMM_INT_DMA_ERR); 685 + 686 + if (host->dmar_fixup) 687 + host->dmar_fixup(host, mrq->cmd, data, addr); 688 + 689 + /* 690 + * If we have a valid SD card in the slot, we set the response 691 + * bit mask to check for CRC errors and timeouts only. 692 + * Otherwise, use the default power reset value. 
693 + */ 694 + if (mmc_card_sd(mmc->card)) 695 + writeq(0x00b00000ull, host->base + MIO_EMM_STS_MASK(host)); 696 + else 697 + writeq(0xe4390080ull, host->base + MIO_EMM_STS_MASK(host)); 698 + writeq(emm_dma, host->base + MIO_EMM_DMA(host)); 699 + return; 700 + 701 + error: 702 + mrq->cmd->error = -EINVAL; 703 + if (mrq->done) 704 + mrq->done(mrq); 705 + host->release_bus(host); 706 + } 707 + 708 + static void do_read_request(struct cvm_mmc_host *host, struct mmc_request *mrq) 709 + { 710 + sg_miter_start(&host->smi, mrq->data->sg, mrq->data->sg_len, 711 + SG_MITER_ATOMIC | SG_MITER_TO_SG); 712 + } 713 + 714 + static void do_write_request(struct cvm_mmc_host *host, struct mmc_request *mrq) 715 + { 716 + unsigned int data_len = mrq->data->blocks * mrq->data->blksz; 717 + struct sg_mapping_iter *smi = &host->smi; 718 + unsigned int bytes_xfered; 719 + int shift = 56; 720 + u64 dat = 0; 721 + 722 + /* Copy data to the xmit buffer before issuing the command. */ 723 + sg_miter_start(smi, mrq->data->sg, mrq->data->sg_len, SG_MITER_FROM_SG); 724 + 725 + /* Auto inc from offset zero, dbuf zero */ 726 + writeq(0x10000ull, host->base + MIO_EMM_BUF_IDX(host)); 727 + 728 + for (bytes_xfered = 0; bytes_xfered < data_len;) { 729 + if (smi->consumed >= smi->length) { 730 + if (!sg_miter_next(smi)) 731 + break; 732 + smi->consumed = 0; 733 + } 734 + 735 + while (smi->consumed < smi->length && shift >= 0) { 736 + dat |= (u64)((u8 *)smi->addr)[smi->consumed] << shift; 737 + bytes_xfered++; 738 + smi->consumed++; 739 + shift -= 8; 740 + } 741 + 742 + if (shift < 0) { 743 + writeq(dat, host->base + MIO_EMM_BUF_DAT(host)); 744 + shift = 56; 745 + dat = 0; 746 + } 747 + } 748 + sg_miter_stop(smi); 749 + } 750 + 751 + static void cvm_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq) 752 + { 753 + struct cvm_mmc_slot *slot = mmc_priv(mmc); 754 + struct cvm_mmc_host *host = slot->host; 755 + struct mmc_command *cmd = mrq->cmd; 756 + struct cvm_mmc_cr_mods mods; 757 + u64 emm_cmd, 
rsp_sts; 758 + int retries = 100; 759 + 760 + /* 761 + * Note about locking: 762 + * All MMC devices share the same bus and controller. Allow only a 763 + * single user of the bootbus/MMC bus at a time. The lock is acquired 764 + * on all entry points from the MMC layer. 765 + * 766 + * For requests the lock is only released after the completion 767 + * interrupt! 768 + */ 769 + host->acquire_bus(host); 770 + 771 + if (cmd->opcode == MMC_READ_MULTIPLE_BLOCK || 772 + cmd->opcode == MMC_WRITE_MULTIPLE_BLOCK) 773 + return cvm_mmc_dma_request(mmc, mrq); 774 + 775 + cvm_mmc_switch_to(slot); 776 + 777 + mods = cvm_mmc_get_cr_mods(cmd); 778 + 779 + WARN_ON(host->current_req); 780 + host->current_req = mrq; 781 + 782 + if (cmd->data) { 783 + if (cmd->data->flags & MMC_DATA_READ) 784 + do_read_request(host, mrq); 785 + else 786 + do_write_request(host, mrq); 787 + 788 + if (cmd->data->timeout_ns) 789 + set_wdog(slot, cmd->data->timeout_ns); 790 + } else 791 + set_wdog(slot, 0); 792 + 793 + host->dma_active = false; 794 + host->int_enable(host, MIO_EMM_INT_CMD_DONE | MIO_EMM_INT_CMD_ERR); 795 + 796 + emm_cmd = FIELD_PREP(MIO_EMM_CMD_VAL, 1) | 797 + FIELD_PREP(MIO_EMM_CMD_CTYPE_XOR, mods.ctype_xor) | 798 + FIELD_PREP(MIO_EMM_CMD_RTYPE_XOR, mods.rtype_xor) | 799 + FIELD_PREP(MIO_EMM_CMD_IDX, cmd->opcode) | 800 + FIELD_PREP(MIO_EMM_CMD_ARG, cmd->arg); 801 + set_bus_id(&emm_cmd, slot->bus_id); 802 + if (cmd->data && mmc_cmd_type(cmd) == MMC_CMD_ADTC) 803 + emm_cmd |= FIELD_PREP(MIO_EMM_CMD_OFFSET, 804 + 64 - ((cmd->data->blocks * cmd->data->blksz) / 8)); 805 + 806 + writeq(0, host->base + MIO_EMM_STS_MASK(host)); 807 + 808 + retry: 809 + rsp_sts = readq(host->base + MIO_EMM_RSP_STS(host)); 810 + if (rsp_sts & MIO_EMM_RSP_STS_DMA_VAL || 811 + rsp_sts & MIO_EMM_RSP_STS_CMD_VAL || 812 + rsp_sts & MIO_EMM_RSP_STS_SWITCH_VAL || 813 + rsp_sts & MIO_EMM_RSP_STS_DMA_PEND) { 814 + udelay(10); 815 + if (--retries) 816 + goto retry; 817 + } 818 + if (!retries) 819 + dev_err(host->dev, "Bad 
status: %llx before command write\n", rsp_sts); 820 + writeq(emm_cmd, host->base + MIO_EMM_CMD(host)); 821 + } 822 + 823 + static void cvm_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) 824 + { 825 + struct cvm_mmc_slot *slot = mmc_priv(mmc); 826 + struct cvm_mmc_host *host = slot->host; 827 + int clk_period = 0, power_class = 10, bus_width = 0; 828 + u64 clock, emm_switch; 829 + 830 + host->acquire_bus(host); 831 + cvm_mmc_switch_to(slot); 832 + 833 + /* Set the power state */ 834 + switch (ios->power_mode) { 835 + case MMC_POWER_ON: 836 + break; 837 + 838 + case MMC_POWER_OFF: 839 + cvm_mmc_reset_bus(slot); 840 + if (host->global_pwr_gpiod) 841 + host->set_shared_power(host, 0); 842 + else 843 + mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0); 844 + break; 845 + 846 + case MMC_POWER_UP: 847 + if (host->global_pwr_gpiod) 848 + host->set_shared_power(host, 1); 849 + else 850 + mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd); 851 + break; 852 + } 853 + 854 + /* Convert bus width to HW definition */ 855 + switch (ios->bus_width) { 856 + case MMC_BUS_WIDTH_8: 857 + bus_width = 2; 858 + break; 859 + case MMC_BUS_WIDTH_4: 860 + bus_width = 1; 861 + break; 862 + case MMC_BUS_WIDTH_1: 863 + bus_width = 0; 864 + break; 865 + } 866 + 867 + /* DDR is available for 4/8 bit bus width */ 868 + if (ios->bus_width && ios->timing == MMC_TIMING_MMC_DDR52) 869 + bus_width |= 4; 870 + 871 + /* Change the clock frequency. 
*/ 872 + clock = ios->clock; 873 + if (clock > 52000000) 874 + clock = 52000000; 875 + slot->clock = clock; 876 + 877 + if (clock) 878 + clk_period = (host->sys_freq + clock - 1) / (2 * clock); 879 + 880 + emm_switch = FIELD_PREP(MIO_EMM_SWITCH_HS_TIMING, 881 + (ios->timing == MMC_TIMING_MMC_HS)) | 882 + FIELD_PREP(MIO_EMM_SWITCH_BUS_WIDTH, bus_width) | 883 + FIELD_PREP(MIO_EMM_SWITCH_POWER_CLASS, power_class) | 884 + FIELD_PREP(MIO_EMM_SWITCH_CLK_HI, clk_period) | 885 + FIELD_PREP(MIO_EMM_SWITCH_CLK_LO, clk_period); 886 + set_bus_id(&emm_switch, slot->bus_id); 887 + 888 + if (!switch_val_changed(slot, emm_switch)) 889 + goto out; 890 + 891 + set_wdog(slot, 0); 892 + do_switch(host, emm_switch); 893 + slot->cached_switch = emm_switch; 894 + out: 895 + host->release_bus(host); 896 + } 897 + 898 + static const struct mmc_host_ops cvm_mmc_ops = { 899 + .request = cvm_mmc_request, 900 + .set_ios = cvm_mmc_set_ios, 901 + .get_ro = mmc_gpio_get_ro, 902 + .get_cd = mmc_gpio_get_cd, 903 + }; 904 + 905 + static void cvm_mmc_set_clock(struct cvm_mmc_slot *slot, unsigned int clock) 906 + { 907 + struct mmc_host *mmc = slot->mmc; 908 + 909 + clock = min(clock, mmc->f_max); 910 + clock = max(clock, mmc->f_min); 911 + slot->clock = clock; 912 + } 913 + 914 + static int cvm_mmc_init_lowlevel(struct cvm_mmc_slot *slot) 915 + { 916 + struct cvm_mmc_host *host = slot->host; 917 + u64 emm_switch; 918 + 919 + /* Enable this bus slot. */ 920 + host->emm_cfg |= (1ull << slot->bus_id); 921 + writeq(host->emm_cfg, slot->host->base + MIO_EMM_CFG(host)); 922 + udelay(10); 923 + 924 + /* Program initial clock speed and power. */ 925 + cvm_mmc_set_clock(slot, slot->mmc->f_min); 926 + emm_switch = FIELD_PREP(MIO_EMM_SWITCH_POWER_CLASS, 10); 927 + emm_switch |= FIELD_PREP(MIO_EMM_SWITCH_CLK_HI, 928 + (host->sys_freq / slot->clock) / 2); 929 + emm_switch |= FIELD_PREP(MIO_EMM_SWITCH_CLK_LO, 930 + (host->sys_freq / slot->clock) / 2); 931 + 932 + /* Make the changes take effect on this bus slot. 
*/ 933 + set_bus_id(&emm_switch, slot->bus_id); 934 + do_switch(host, emm_switch); 935 + 936 + slot->cached_switch = emm_switch; 937 + 938 + /* 939 + * Set watchdog timeout value and default reset value 940 + * for the mask register. Finally, set the CARD_RCA 941 + * bit so that we can get the card address relative 942 + * to the CMD register for CMD7 transactions. 943 + */ 944 + set_wdog(slot, 0); 945 + writeq(0xe4390080ull, host->base + MIO_EMM_STS_MASK(host)); 946 + writeq(1, host->base + MIO_EMM_RCA(host)); 947 + return 0; 948 + } 949 + 950 + static int cvm_mmc_of_parse(struct device *dev, struct cvm_mmc_slot *slot) 951 + { 952 + u32 id, cmd_skew = 0, dat_skew = 0, bus_width = 0; 953 + struct device_node *node = dev->of_node; 954 + struct mmc_host *mmc = slot->mmc; 955 + u64 clock_period; 956 + int ret; 957 + 958 + ret = of_property_read_u32(node, "reg", &id); 959 + if (ret) { 960 + dev_err(dev, "Missing or invalid reg property on %s\n", 961 + of_node_full_name(node)); 962 + return ret; 963 + } 964 + 965 + if (id >= CAVIUM_MAX_MMC || slot->host->slot[id]) { 966 + dev_err(dev, "Invalid reg property on %s\n", 967 + of_node_full_name(node)); 968 + return -EINVAL; 969 + } 970 + 971 + mmc->supply.vmmc = devm_regulator_get_optional(dev, "vmmc"); 972 + if (IS_ERR(mmc->supply.vmmc)) { 973 + if (PTR_ERR(mmc->supply.vmmc) == -EPROBE_DEFER) 974 + return -EPROBE_DEFER; 975 + /* 976 + * Legacy Octeon firmware has no regulator entry, fall-back to 977 + * a hard-coded voltage to get a sane OCR. 
978 + */ 979 + mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34; 980 + } else { 981 + ret = mmc_regulator_get_ocrmask(mmc->supply.vmmc); 982 + if (ret > 0) 983 + mmc->ocr_avail = ret; 984 + } 985 + 986 + /* Common MMC bindings */ 987 + ret = mmc_of_parse(mmc); 988 + if (ret) 989 + return ret; 990 + 991 + /* Set bus width */ 992 + if (!(mmc->caps & (MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA))) { 993 + of_property_read_u32(node, "cavium,bus-max-width", &bus_width); 994 + if (bus_width == 8) 995 + mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA; 996 + else if (bus_width == 4) 997 + mmc->caps |= MMC_CAP_4_BIT_DATA; 998 + } 999 + 1000 + /* Set maximum and minimum frequency */ 1001 + if (!mmc->f_max) 1002 + of_property_read_u32(node, "spi-max-frequency", &mmc->f_max); 1003 + if (!mmc->f_max || mmc->f_max > 52000000) 1004 + mmc->f_max = 52000000; 1005 + mmc->f_min = 400000; 1006 + 1007 + /* Sampling register settings, period in picoseconds */ 1008 + clock_period = 1000000000000ull / slot->host->sys_freq; 1009 + of_property_read_u32(node, "cavium,cmd-clk-skew", &cmd_skew); 1010 + of_property_read_u32(node, "cavium,dat-clk-skew", &dat_skew); 1011 + slot->cmd_cnt = (cmd_skew + clock_period / 2) / clock_period; 1012 + slot->dat_cnt = (dat_skew + clock_period / 2) / clock_period; 1013 + 1014 + return id; 1015 + } 1016 + 1017 + int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host) 1018 + { 1019 + struct cvm_mmc_slot *slot; 1020 + struct mmc_host *mmc; 1021 + int ret, id; 1022 + 1023 + mmc = mmc_alloc_host(sizeof(struct cvm_mmc_slot), dev); 1024 + if (!mmc) 1025 + return -ENOMEM; 1026 + 1027 + slot = mmc_priv(mmc); 1028 + slot->mmc = mmc; 1029 + slot->host = host; 1030 + 1031 + ret = cvm_mmc_of_parse(dev, slot); 1032 + if (ret < 0) 1033 + goto error; 1034 + id = ret; 1035 + 1036 + /* Set up host parameters */ 1037 + mmc->ops = &cvm_mmc_ops; 1038 + 1039 + /* 1040 + * We only have a 3.3v supply, we cannot support any 1041 + * of the UHS modes. 
We do support the high speed DDR 1042 + * modes up to 52MHz. 1043 + */ 1044 + mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED | 1045 + MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_POWER_OFF_CARD | 1046 + MMC_CAP_3_3V_DDR; 1047 + 1048 + if (host->use_sg) 1049 + mmc->max_segs = 16; 1050 + else 1051 + mmc->max_segs = 1; 1052 + 1053 + /* DMA size field can address up to 8 MB */ 1054 + mmc->max_seg_size = 8 * 1024 * 1024; 1055 + mmc->max_req_size = mmc->max_seg_size; 1056 + /* External DMA is in 512 byte blocks */ 1057 + mmc->max_blk_size = 512; 1058 + /* DMA block count field is 15 bits */ 1059 + mmc->max_blk_count = 32767; 1060 + 1061 + slot->clock = mmc->f_min; 1062 + slot->bus_id = id; 1063 + slot->cached_rca = 1; 1064 + 1065 + host->acquire_bus(host); 1066 + host->slot[id] = slot; 1067 + cvm_mmc_switch_to(slot); 1068 + cvm_mmc_init_lowlevel(slot); 1069 + host->release_bus(host); 1070 + 1071 + ret = mmc_add_host(mmc); 1072 + if (ret) { 1073 + dev_err(dev, "mmc_add_host() returned %d\n", ret); 1074 + slot->host->slot[id] = NULL; 1075 + goto error; 1076 + } 1077 + return 0; 1078 + 1079 + error: 1080 + mmc_free_host(slot->mmc); 1081 + return ret; 1082 + } 1083 + 1084 + int cvm_mmc_of_slot_remove(struct cvm_mmc_slot *slot) 1085 + { 1086 + mmc_remove_host(slot->mmc); 1087 + slot->host->slot[slot->bus_id] = NULL; 1088 + mmc_free_host(slot->mmc); 1089 + return 0; 1090 + }
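The `MIO_EMM_CMD_OFFSET` value computed in `cvm_mmc_request()` above is `64 - ((blocks * blksz) / 8)`. As a userspace sketch (the helper name is ours, not the driver's): the field encodes the transfer length in 8-byte words subtracted from 64, consistent with a 512-byte (64-word) controller data buffer.

```c
#include <assert.h>

/*
 * Userspace sketch of the MIO_EMM_CMD_OFFSET computation from
 * cvm_mmc_request(): 64 minus the transfer length in 8-byte words.
 * A full 512-byte transfer yields offset 0.
 */
static unsigned int emm_cmd_offset(unsigned int blocks, unsigned int blksz)
{
	return 64 - ((blocks * blksz) / 8);
}
```

For example, a single 512-byte block gives offset 0, while a single 8-byte transfer gives offset 63.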
+215
drivers/mmc/host/cavium.h
··· 1 + /* 2 + * Driver for MMC and SSD cards for Cavium OCTEON and ThunderX SOCs. 3 + * 4 + * This file is subject to the terms and conditions of the GNU General Public 5 + * License. See the file "COPYING" in the main directory of this archive 6 + * for more details. 7 + * 8 + * Copyright (C) 2012-2017 Cavium Inc. 9 + */ 10 + 11 + #ifndef _CAVIUM_MMC_H_ 12 + #define _CAVIUM_MMC_H_ 13 + 14 + #include <linux/bitops.h> 15 + #include <linux/clk.h> 16 + #include <linux/gpio/consumer.h> 17 + #include <linux/io.h> 18 + #include <linux/mmc/host.h> 19 + #include <linux/of.h> 20 + #include <linux/scatterlist.h> 21 + #include <linux/semaphore.h> 22 + 23 + #define CAVIUM_MAX_MMC 4 24 + 25 + /* DMA register addresses */ 26 + #define MIO_EMM_DMA_FIFO_CFG(x) (0x00 + x->reg_off_dma) 27 + #define MIO_EMM_DMA_FIFO_ADR(x) (0x10 + x->reg_off_dma) 28 + #define MIO_EMM_DMA_FIFO_CMD(x) (0x18 + x->reg_off_dma) 29 + #define MIO_EMM_DMA_CFG(x) (0x20 + x->reg_off_dma) 30 + #define MIO_EMM_DMA_ADR(x) (0x28 + x->reg_off_dma) 31 + #define MIO_EMM_DMA_INT(x) (0x30 + x->reg_off_dma) 32 + #define MIO_EMM_DMA_INT_W1S(x) (0x38 + x->reg_off_dma) 33 + #define MIO_EMM_DMA_INT_ENA_W1S(x) (0x40 + x->reg_off_dma) 34 + #define MIO_EMM_DMA_INT_ENA_W1C(x) (0x48 + x->reg_off_dma) 35 + 36 + /* register addresses */ 37 + #define MIO_EMM_CFG(x) (0x00 + x->reg_off) 38 + #define MIO_EMM_SWITCH(x) (0x48 + x->reg_off) 39 + #define MIO_EMM_DMA(x) (0x50 + x->reg_off) 40 + #define MIO_EMM_CMD(x) (0x58 + x->reg_off) 41 + #define MIO_EMM_RSP_STS(x) (0x60 + x->reg_off) 42 + #define MIO_EMM_RSP_LO(x) (0x68 + x->reg_off) 43 + #define MIO_EMM_RSP_HI(x) (0x70 + x->reg_off) 44 + #define MIO_EMM_INT(x) (0x78 + x->reg_off) 45 + #define MIO_EMM_INT_EN(x) (0x80 + x->reg_off) 46 + #define MIO_EMM_WDOG(x) (0x88 + x->reg_off) 47 + #define MIO_EMM_SAMPLE(x) (0x90 + x->reg_off) 48 + #define MIO_EMM_STS_MASK(x) (0x98 + x->reg_off) 49 + #define MIO_EMM_RCA(x) (0xa0 + x->reg_off) 50 + #define MIO_EMM_INT_EN_SET(x) (0xb0 + x->reg_off) 51 
+ #define MIO_EMM_INT_EN_CLR(x) (0xb8 + x->reg_off) 52 + #define MIO_EMM_BUF_IDX(x) (0xe0 + x->reg_off) 53 + #define MIO_EMM_BUF_DAT(x) (0xe8 + x->reg_off) 54 + 55 + struct cvm_mmc_host { 56 + struct device *dev; 57 + void __iomem *base; 58 + void __iomem *dma_base; 59 + int reg_off; 60 + int reg_off_dma; 61 + u64 emm_cfg; 62 + u64 n_minus_one; /* OCTEON II workaround location */ 63 + int last_slot; 64 + struct clk *clk; 65 + int sys_freq; 66 + 67 + struct mmc_request *current_req; 68 + struct sg_mapping_iter smi; 69 + bool dma_active; 70 + bool use_sg; 71 + 72 + bool has_ciu3; 73 + bool big_dma_addr; 74 + bool need_irq_handler_lock; 75 + spinlock_t irq_handler_lock; 76 + struct semaphore mmc_serializer; 77 + 78 + struct gpio_desc *global_pwr_gpiod; 79 + atomic_t shared_power_users; 80 + 81 + struct cvm_mmc_slot *slot[CAVIUM_MAX_MMC]; 82 + struct platform_device *slot_pdev[CAVIUM_MAX_MMC]; 83 + 84 + void (*set_shared_power)(struct cvm_mmc_host *, int); 85 + void (*acquire_bus)(struct cvm_mmc_host *); 86 + void (*release_bus)(struct cvm_mmc_host *); 87 + void (*int_enable)(struct cvm_mmc_host *, u64); 88 + /* required on some MIPS models */ 89 + void (*dmar_fixup)(struct cvm_mmc_host *, struct mmc_command *, 90 + struct mmc_data *, u64); 91 + void (*dmar_fixup_done)(struct cvm_mmc_host *); 92 + }; 93 + 94 + struct cvm_mmc_slot { 95 + struct mmc_host *mmc; /* slot-level mmc_core object */ 96 + struct cvm_mmc_host *host; /* common hw for all slots */ 97 + 98 + u64 clock; 99 + 100 + u64 cached_switch; 101 + u64 cached_rca; 102 + 103 + unsigned int cmd_cnt; /* sample delay */ 104 + unsigned int dat_cnt; /* sample delay */ 105 + 106 + int bus_id; 107 + }; 108 + 109 + struct cvm_mmc_cr_type { 110 + u8 ctype; 111 + u8 rtype; 112 + }; 113 + 114 + struct cvm_mmc_cr_mods { 115 + u8 ctype_xor; 116 + u8 rtype_xor; 117 + }; 118 + 119 + /* Bitfield definitions */ 120 + #define MIO_EMM_DMA_FIFO_CFG_CLR BIT_ULL(16) 121 + #define MIO_EMM_DMA_FIFO_CFG_INT_LVL GENMASK_ULL(12, 8) 122 + 
#define MIO_EMM_DMA_FIFO_CFG_COUNT GENMASK_ULL(4, 0) 123 + 124 + #define MIO_EMM_DMA_FIFO_CMD_RW BIT_ULL(62) 125 + #define MIO_EMM_DMA_FIFO_CMD_INTDIS BIT_ULL(60) 126 + #define MIO_EMM_DMA_FIFO_CMD_SWAP32 BIT_ULL(59) 127 + #define MIO_EMM_DMA_FIFO_CMD_SWAP16 BIT_ULL(58) 128 + #define MIO_EMM_DMA_FIFO_CMD_SWAP8 BIT_ULL(57) 129 + #define MIO_EMM_DMA_FIFO_CMD_ENDIAN BIT_ULL(56) 130 + #define MIO_EMM_DMA_FIFO_CMD_SIZE GENMASK_ULL(55, 36) 131 + 132 + #define MIO_EMM_CMD_SKIP_BUSY BIT_ULL(62) 133 + #define MIO_EMM_CMD_BUS_ID GENMASK_ULL(61, 60) 134 + #define MIO_EMM_CMD_VAL BIT_ULL(59) 135 + #define MIO_EMM_CMD_DBUF BIT_ULL(55) 136 + #define MIO_EMM_CMD_OFFSET GENMASK_ULL(54, 49) 137 + #define MIO_EMM_CMD_CTYPE_XOR GENMASK_ULL(42, 41) 138 + #define MIO_EMM_CMD_RTYPE_XOR GENMASK_ULL(40, 38) 139 + #define MIO_EMM_CMD_IDX GENMASK_ULL(37, 32) 140 + #define MIO_EMM_CMD_ARG GENMASK_ULL(31, 0) 141 + 142 + #define MIO_EMM_DMA_SKIP_BUSY BIT_ULL(62) 143 + #define MIO_EMM_DMA_BUS_ID GENMASK_ULL(61, 60) 144 + #define MIO_EMM_DMA_VAL BIT_ULL(59) 145 + #define MIO_EMM_DMA_SECTOR BIT_ULL(58) 146 + #define MIO_EMM_DMA_DAT_NULL BIT_ULL(57) 147 + #define MIO_EMM_DMA_THRES GENMASK_ULL(56, 51) 148 + #define MIO_EMM_DMA_REL_WR BIT_ULL(50) 149 + #define MIO_EMM_DMA_RW BIT_ULL(49) 150 + #define MIO_EMM_DMA_MULTI BIT_ULL(48) 151 + #define MIO_EMM_DMA_BLOCK_CNT GENMASK_ULL(47, 32) 152 + #define MIO_EMM_DMA_CARD_ADDR GENMASK_ULL(31, 0) 153 + 154 + #define MIO_EMM_DMA_CFG_EN BIT_ULL(63) 155 + #define MIO_EMM_DMA_CFG_RW BIT_ULL(62) 156 + #define MIO_EMM_DMA_CFG_CLR BIT_ULL(61) 157 + #define MIO_EMM_DMA_CFG_SWAP32 BIT_ULL(59) 158 + #define MIO_EMM_DMA_CFG_SWAP16 BIT_ULL(58) 159 + #define MIO_EMM_DMA_CFG_SWAP8 BIT_ULL(57) 160 + #define MIO_EMM_DMA_CFG_ENDIAN BIT_ULL(56) 161 + #define MIO_EMM_DMA_CFG_SIZE GENMASK_ULL(55, 36) 162 + #define MIO_EMM_DMA_CFG_ADR GENMASK_ULL(35, 0) 163 + 164 + #define MIO_EMM_INT_SWITCH_ERR BIT_ULL(6) 165 + #define MIO_EMM_INT_SWITCH_DONE BIT_ULL(5) 166 + #define 
MIO_EMM_INT_DMA_ERR BIT_ULL(4) 167 + #define MIO_EMM_INT_CMD_ERR BIT_ULL(3) 168 + #define MIO_EMM_INT_DMA_DONE BIT_ULL(2) 169 + #define MIO_EMM_INT_CMD_DONE BIT_ULL(1) 170 + #define MIO_EMM_INT_BUF_DONE BIT_ULL(0) 171 + 172 + #define MIO_EMM_RSP_STS_BUS_ID GENMASK_ULL(61, 60) 173 + #define MIO_EMM_RSP_STS_CMD_VAL BIT_ULL(59) 174 + #define MIO_EMM_RSP_STS_SWITCH_VAL BIT_ULL(58) 175 + #define MIO_EMM_RSP_STS_DMA_VAL BIT_ULL(57) 176 + #define MIO_EMM_RSP_STS_DMA_PEND BIT_ULL(56) 177 + #define MIO_EMM_RSP_STS_DBUF_ERR BIT_ULL(28) 178 + #define MIO_EMM_RSP_STS_DBUF BIT_ULL(23) 179 + #define MIO_EMM_RSP_STS_BLK_TIMEOUT BIT_ULL(22) 180 + #define MIO_EMM_RSP_STS_BLK_CRC_ERR BIT_ULL(21) 181 + #define MIO_EMM_RSP_STS_RSP_BUSYBIT BIT_ULL(20) 182 + #define MIO_EMM_RSP_STS_STP_TIMEOUT BIT_ULL(19) 183 + #define MIO_EMM_RSP_STS_STP_CRC_ERR BIT_ULL(18) 184 + #define MIO_EMM_RSP_STS_STP_BAD_STS BIT_ULL(17) 185 + #define MIO_EMM_RSP_STS_STP_VAL BIT_ULL(16) 186 + #define MIO_EMM_RSP_STS_RSP_TIMEOUT BIT_ULL(15) 187 + #define MIO_EMM_RSP_STS_RSP_CRC_ERR BIT_ULL(14) 188 + #define MIO_EMM_RSP_STS_RSP_BAD_STS BIT_ULL(13) 189 + #define MIO_EMM_RSP_STS_RSP_VAL BIT_ULL(12) 190 + #define MIO_EMM_RSP_STS_RSP_TYPE GENMASK_ULL(11, 9) 191 + #define MIO_EMM_RSP_STS_CMD_TYPE GENMASK_ULL(8, 7) 192 + #define MIO_EMM_RSP_STS_CMD_IDX GENMASK_ULL(6, 1) 193 + #define MIO_EMM_RSP_STS_CMD_DONE BIT_ULL(0) 194 + 195 + #define MIO_EMM_SAMPLE_CMD_CNT GENMASK_ULL(25, 16) 196 + #define MIO_EMM_SAMPLE_DAT_CNT GENMASK_ULL(9, 0) 197 + 198 + #define MIO_EMM_SWITCH_BUS_ID GENMASK_ULL(61, 60) 199 + #define MIO_EMM_SWITCH_EXE BIT_ULL(59) 200 + #define MIO_EMM_SWITCH_ERR0 BIT_ULL(58) 201 + #define MIO_EMM_SWITCH_ERR1 BIT_ULL(57) 202 + #define MIO_EMM_SWITCH_ERR2 BIT_ULL(56) 203 + #define MIO_EMM_SWITCH_HS_TIMING BIT_ULL(48) 204 + #define MIO_EMM_SWITCH_BUS_WIDTH GENMASK_ULL(42, 40) 205 + #define MIO_EMM_SWITCH_POWER_CLASS GENMASK_ULL(35, 32) 206 + #define MIO_EMM_SWITCH_CLK_HI GENMASK_ULL(31, 16) 207 + #define 
MIO_EMM_SWITCH_CLK_LO GENMASK_ULL(15, 0) 208 + 209 + /* Prototypes */ 210 + irqreturn_t cvm_mmc_interrupt(int irq, void *dev_id); 211 + int cvm_mmc_of_slot_probe(struct device *dev, struct cvm_mmc_host *host); 212 + int cvm_mmc_of_slot_remove(struct cvm_mmc_slot *slot); 213 + extern const char *cvm_mmc_irq_names[]; 214 + 215 + #endif
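The header above describes every register as `BIT_ULL`/`GENMASK_ULL` bitfields, which the driver combines with `FIELD_PREP` (as in `cvm_mmc_request()`). A minimal userspace sketch of that composition, using simplified stand-ins for the kernel's `linux/bits.h` and `linux/bitfield.h` macros:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified userspace stand-ins for the kernel macros */
#define BIT_ULL(n)        (1ULL << (n))
#define GENMASK_ULL(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define FIELD_PREP(mask, val) \
	(((uint64_t)(val) << __builtin_ctzll(mask)) & (mask))

/* Field layout from MIO_EMM_CMD above */
#define MIO_EMM_CMD_VAL BIT_ULL(59)
#define MIO_EMM_CMD_IDX GENMASK_ULL(37, 32)
#define MIO_EMM_CMD_ARG GENMASK_ULL(31, 0)

/* Pack a command opcode and argument into a MIO_EMM_CMD word */
static uint64_t build_emm_cmd(uint8_t opcode, uint32_t arg)
{
	return FIELD_PREP(MIO_EMM_CMD_VAL, 1) |
	       FIELD_PREP(MIO_EMM_CMD_IDX, opcode) |
	       FIELD_PREP(MIO_EMM_CMD_ARG, arg);
}
```

`FIELD_PREP` shifts the value up to the mask's lowest set bit and truncates it to the field width, so each field lands in the register position the header declares.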
+4 -10
drivers/mmc/host/davinci_mmc.c
··· 478 478 int ret = 0; 479 479 480 480 host->sg_len = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 481 - ((data->flags & MMC_DATA_WRITE) 482 - ? DMA_TO_DEVICE 483 - : DMA_FROM_DEVICE)); 481 + mmc_get_dma_dir(data)); 484 482 485 483 /* no individual DMA segment should need a partial FIFO */ 486 484 for (i = 0; i < host->sg_len; i++) { 487 485 if (sg_dma_len(data->sg + i) & mask) { 488 486 dma_unmap_sg(mmc_dev(host->mmc), 489 - data->sg, data->sg_len, 490 - (data->flags & MMC_DATA_WRITE) 491 - ? DMA_TO_DEVICE 492 - : DMA_FROM_DEVICE); 487 + data->sg, data->sg_len, 488 + mmc_get_dma_dir(data)); 493 489 return -1; 494 490 } 495 491 } ··· 798 802 davinci_abort_dma(host); 799 803 800 804 dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 801 - (data->flags & MMC_DATA_WRITE) 802 - ? DMA_TO_DEVICE 803 - : DMA_FROM_DEVICE); 805 + mmc_get_dma_dir(data)); 804 806 host->do_dma = false; 805 807 } 806 808 host->data_dir = DAVINCI_MMC_DATADIR_NONE;
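The davinci (and jz4740) hunks replace an open-coded flags check with the core's `mmc_get_dma_dir()` helper. A userspace sketch of the mapping that helper centralizes; the flag and enum values here are illustrative, not the kernel's actual definitions:

```c
#include <assert.h>

/* Illustrative stand-ins; real values live in the kernel headers */
enum dma_data_direction { DMA_TO_DEVICE = 1, DMA_FROM_DEVICE = 2 };
#define MMC_DATA_WRITE (1 << 8)
#define MMC_DATA_READ  (1 << 9)

struct mmc_data { unsigned int flags; };

/* Write requests map device-bound, everything else device-sourced */
static enum dma_data_direction mmc_get_dma_dir(const struct mmc_data *data)
{
	return (data->flags & MMC_DATA_WRITE) ? DMA_TO_DEVICE
					      : DMA_FROM_DEVICE;
}

static enum dma_data_direction dir_for_flags(unsigned int flags)
{
	struct mmc_data d = { flags };

	return mmc_get_dma_dir(&d);
}
```

Centralizing this keeps every host driver's `dma_map_sg()`/`dma_unmap_sg()` pair using the same direction logic instead of repeating the ternary.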
+185 -212
drivers/mmc/host/dw_mmc.c
··· 19 19 #include <linux/err.h> 20 20 #include <linux/init.h> 21 21 #include <linux/interrupt.h> 22 + #include <linux/iopoll.h> 22 23 #include <linux/ioport.h> 23 24 #include <linux/module.h> 24 25 #include <linux/platform_device.h> ··· 66 65 67 66 struct idmac_desc_64addr { 68 67 u32 des0; /* Control Descriptor */ 68 + #define IDMAC_OWN_CLR64(x) \ 69 + !((x) & cpu_to_le32(IDMAC_DES0_OWN)) 69 70 70 71 u32 des1; /* Reserved */ 71 72 ··· 106 103 107 104 /* Each descriptor can transfer up to 4KB of data in chained mode */ 108 105 #define DW_MCI_DESC_DATA_LENGTH 0x1000 109 - 110 - static bool dw_mci_reset(struct dw_mci *host); 111 - static bool dw_mci_ctrl_reset(struct dw_mci *host, u32 reset); 112 - static int dw_mci_card_busy(struct mmc_host *mmc); 113 - static int dw_mci_get_cd(struct mmc_host *mmc); 114 106 115 107 #if defined(CONFIG_DEBUG_FS) 116 108 static int dw_mci_req_show(struct seq_file *s, void *v) ··· 230 232 } 231 233 #endif /* defined(CONFIG_DEBUG_FS) */ 232 234 233 - static void mci_send_cmd(struct dw_mci_slot *slot, u32 cmd, u32 arg); 235 + static bool dw_mci_ctrl_reset(struct dw_mci *host, u32 reset) 236 + { 237 + u32 ctrl; 238 + 239 + ctrl = mci_readl(host, CTRL); 240 + ctrl |= reset; 241 + mci_writel(host, CTRL, ctrl); 242 + 243 + /* wait till resets clear */ 244 + if (readl_poll_timeout_atomic(host->regs + SDMMC_CTRL, ctrl, 245 + !(ctrl & reset), 246 + 1, 500 * USEC_PER_MSEC)) { 247 + dev_err(host->dev, 248 + "Timeout resetting block (ctrl reset %#x)\n", 249 + ctrl & reset); 250 + return false; 251 + } 252 + 253 + return true; 254 + } 255 + 256 + static void dw_mci_wait_while_busy(struct dw_mci *host, u32 cmd_flags) 257 + { 258 + u32 status; 259 + 260 + /* 261 + * Databook says that before issuing a new data transfer command 262 + * we need to check to see if the card is busy. Data transfer commands 263 + * all have SDMMC_CMD_PRV_DAT_WAIT set, so we'll key off that. 
264 + * 265 + * ...also allow sending for SDMMC_CMD_VOLT_SWITCH where busy is 266 + * expected. 267 + */ 268 + if ((cmd_flags & SDMMC_CMD_PRV_DAT_WAIT) && 269 + !(cmd_flags & SDMMC_CMD_VOLT_SWITCH)) { 270 + if (readl_poll_timeout_atomic(host->regs + SDMMC_STATUS, 271 + status, 272 + !(status & SDMMC_STATUS_BUSY), 273 + 10, 500 * USEC_PER_MSEC)) 274 + dev_err(host->dev, "Busy; trying anyway\n"); 275 + } 276 + } 277 + 278 + static void mci_send_cmd(struct dw_mci_slot *slot, u32 cmd, u32 arg) 279 + { 280 + struct dw_mci *host = slot->host; 281 + unsigned int cmd_status = 0; 282 + 283 + mci_writel(host, CMDARG, arg); 284 + wmb(); /* drain writebuffer */ 285 + dw_mci_wait_while_busy(host, cmd); 286 + mci_writel(host, CMD, SDMMC_CMD_START | cmd); 287 + 288 + if (readl_poll_timeout_atomic(host->regs + SDMMC_CMD, cmd_status, 289 + !(cmd_status & SDMMC_CMD_START), 290 + 1, 500 * USEC_PER_MSEC)) 291 + dev_err(&slot->mmc->class_dev, 292 + "Timeout sending command (cmd %#x arg %#x status %#x)\n", 293 + cmd, arg, cmd_status); 294 + } 234 295 235 296 static u32 dw_mci_prepare_command(struct mmc_host *mmc, struct mmc_command *cmd) 236 297 { ··· 398 341 return cmdr; 399 342 } 400 343 401 - static void dw_mci_wait_while_busy(struct dw_mci *host, u32 cmd_flags) 402 - { 403 - unsigned long timeout = jiffies + msecs_to_jiffies(500); 404 - 405 - /* 406 - * Databook says that before issuing a new data transfer command 407 - * we need to check to see if the card is busy. Data transfer commands 408 - * all have SDMMC_CMD_PRV_DAT_WAIT set, so we'll key off that. 409 - * 410 - * ...also allow sending for SDMMC_CMD_VOLT_SWITCH where busy is 411 - * expected. 
412 - */ 413 - if ((cmd_flags & SDMMC_CMD_PRV_DAT_WAIT) && 414 - !(cmd_flags & SDMMC_CMD_VOLT_SWITCH)) { 415 - while (mci_readl(host, STATUS) & SDMMC_STATUS_BUSY) { 416 - if (time_after(jiffies, timeout)) { 417 - /* Command will fail; we'll pass error then */ 418 - dev_err(host->dev, "Busy; trying anyway\n"); 419 - break; 420 - } 421 - udelay(10); 422 - } 423 - } 424 - } 425 - 426 344 static void dw_mci_start_command(struct dw_mci *host, 427 345 struct mmc_command *cmd, u32 cmd_flags) 428 346 { ··· 432 400 set_bit(EVENT_XFER_COMPLETE, &host->pending_events); 433 401 } 434 402 435 - static int dw_mci_get_dma_dir(struct mmc_data *data) 436 - { 437 - if (data->flags & MMC_DATA_WRITE) 438 - return DMA_TO_DEVICE; 439 - else 440 - return DMA_FROM_DEVICE; 441 - } 442 - 443 403 static void dw_mci_dma_cleanup(struct dw_mci *host) 444 404 { 445 405 struct mmc_data *data = host->data; ··· 440 416 dma_unmap_sg(host->dev, 441 417 data->sg, 442 418 data->sg_len, 443 - dw_mci_get_dma_dir(data)); 419 + mmc_get_dma_dir(data)); 444 420 data->host_cookie = COOKIE_UNMAPPED; 445 421 } 446 422 } ··· 579 555 { 580 556 unsigned int desc_len; 581 557 struct idmac_desc_64addr *desc_first, *desc_last, *desc; 582 - unsigned long timeout; 558 + u32 val; 583 559 int i; 584 560 585 561 desc_first = desc_last = desc = host->sg_cpu; ··· 601 577 * isn't still owned by IDMAC as IDMAC's write 602 578 * ops and CPU's read ops are asynchronous. 
603 579 */ 604 - timeout = jiffies + msecs_to_jiffies(100); 605 - while (readl(&desc->des0) & IDMAC_DES0_OWN) { 606 - if (time_after(jiffies, timeout)) 607 - goto err_own_bit; 608 - udelay(10); 609 - } 580 + if (readl_poll_timeout_atomic(&desc->des0, val, 581 + !(val & IDMAC_DES0_OWN), 582 + 10, 100 * USEC_PER_MSEC)) 583 + goto err_own_bit; 610 584 611 585 /* 612 586 * Set the OWN bit and disable interrupts ··· 651 629 { 652 630 unsigned int desc_len; 653 631 struct idmac_desc *desc_first, *desc_last, *desc; 654 - unsigned long timeout; 632 + u32 val; 655 633 int i; 656 634 657 635 desc_first = desc_last = desc = host->sg_cpu; ··· 673 651 * isn't still owned by IDMAC as IDMAC's write 674 652 * ops and CPU's read ops are asynchronous. 675 653 */ 676 - timeout = jiffies + msecs_to_jiffies(100); 677 - while (readl(&desc->des0) & 678 - cpu_to_le32(IDMAC_DES0_OWN)) { 679 - if (time_after(jiffies, timeout)) 680 - goto err_own_bit; 681 - udelay(10); 682 - } 654 + if (readl_poll_timeout_atomic(&desc->des0, val, 655 + IDMAC_OWN_CLR64(val), 656 + 10, 657 + 100 * USEC_PER_MSEC)) 658 + goto err_own_bit; 683 659 684 660 /* 685 661 * Set the OWN bit and disable interrupts ··· 896 876 sg_len = dma_map_sg(host->dev, 897 877 data->sg, 898 878 data->sg_len, 899 - dw_mci_get_dma_dir(data)); 879 + mmc_get_dma_dir(data)); 900 880 if (sg_len == 0) 901 881 return -EINVAL; 902 882 ··· 936 916 dma_unmap_sg(slot->host->dev, 937 917 data->sg, 938 918 data->sg_len, 939 - dw_mci_get_dma_dir(data)); 919 + mmc_get_dma_dir(data)); 940 920 data->host_cookie = COOKIE_UNMAPPED; 921 + } 922 + 923 + static int dw_mci_get_cd(struct mmc_host *mmc) 924 + { 925 + int present; 926 + struct dw_mci_slot *slot = mmc_priv(mmc); 927 + struct dw_mci *host = slot->host; 928 + int gpio_cd = mmc_gpio_get_cd(mmc); 929 + 930 + /* Use platform get_cd function, else try onboard card detect */ 931 + if (((mmc->caps & MMC_CAP_NEEDS_POLL) 932 + || !mmc_card_is_removable(mmc))) { 933 + present = 1; 934 + 935 + if 
(!test_bit(DW_MMC_CARD_PRESENT, &slot->flags)) { 936 + if (mmc->caps & MMC_CAP_NEEDS_POLL) { 937 + dev_info(&mmc->class_dev, 938 + "card is polling.\n"); 939 + } else { 940 + dev_info(&mmc->class_dev, 941 + "card is non-removable.\n"); 942 + } 943 + set_bit(DW_MMC_CARD_PRESENT, &slot->flags); 944 + } 945 + 946 + return present; 947 + } else if (gpio_cd >= 0) 948 + present = gpio_cd; 949 + else 950 + present = (mci_readl(slot->host, CDETECT) & (1 << slot->id)) 951 + == 0 ? 1 : 0; 952 + 953 + spin_lock_bh(&host->lock); 954 + if (present && !test_and_set_bit(DW_MMC_CARD_PRESENT, &slot->flags)) 955 + dev_dbg(&mmc->class_dev, "card is present\n"); 956 + else if (!present && 957 + !test_and_clear_bit(DW_MMC_CARD_PRESENT, &slot->flags)) 958 + dev_dbg(&mmc->class_dev, "card is not present\n"); 959 + spin_unlock_bh(&host->lock); 960 + 961 + return present; 941 962 } 942 963 943 964 static void dw_mci_adjust_fifoth(struct dw_mci *host, struct mmc_data *data) ··· 1192 1131 */ 1193 1132 host->prev_blksz = data->blksz; 1194 1133 } 1195 - } 1196 - 1197 - static void mci_send_cmd(struct dw_mci_slot *slot, u32 cmd, u32 arg) 1198 - { 1199 - struct dw_mci *host = slot->host; 1200 - unsigned long timeout = jiffies + msecs_to_jiffies(500); 1201 - unsigned int cmd_status = 0; 1202 - 1203 - mci_writel(host, CMDARG, arg); 1204 - wmb(); /* drain writebuffer */ 1205 - dw_mci_wait_while_busy(host, cmd); 1206 - mci_writel(host, CMD, SDMMC_CMD_START | cmd); 1207 - 1208 - while (time_before(jiffies, timeout)) { 1209 - cmd_status = mci_readl(host, CMD); 1210 - if (!(cmd_status & SDMMC_CMD_START)) 1211 - return; 1212 - } 1213 - dev_err(&slot->mmc->class_dev, 1214 - "Timeout sending command (cmd %#x arg %#x status %#x)\n", 1215 - cmd, arg, cmd_status); 1216 1134 } 1217 1135 1218 1136 static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit) ··· 1574 1534 return read_only; 1575 1535 } 1576 1536 1577 - static int dw_mci_get_cd(struct mmc_host *mmc) 1578 - { 1579 - int present; 1580 
- struct dw_mci_slot *slot = mmc_priv(mmc); 1581 - struct dw_mci *host = slot->host; 1582 - int gpio_cd = mmc_gpio_get_cd(mmc); 1583 - 1584 - /* Use platform get_cd function, else try onboard card detect */ 1585 - if (((mmc->caps & MMC_CAP_NEEDS_POLL) 1586 - || !mmc_card_is_removable(mmc))) { 1587 - present = 1; 1588 - 1589 - if (!test_bit(DW_MMC_CARD_PRESENT, &slot->flags)) { 1590 - if (mmc->caps & MMC_CAP_NEEDS_POLL) { 1591 - dev_info(&mmc->class_dev, 1592 - "card is polling.\n"); 1593 - } else { 1594 - dev_info(&mmc->class_dev, 1595 - "card is non-removable.\n"); 1596 - } 1597 - set_bit(DW_MMC_CARD_PRESENT, &slot->flags); 1598 - } 1599 - 1600 - return present; 1601 - } else if (gpio_cd >= 0) 1602 - present = gpio_cd; 1603 - else 1604 - present = (mci_readl(slot->host, CDETECT) & (1 << slot->id)) 1605 - == 0 ? 1 : 0; 1606 - 1607 - spin_lock_bh(&host->lock); 1608 - if (present && !test_and_set_bit(DW_MMC_CARD_PRESENT, &slot->flags)) 1609 - dev_dbg(&mmc->class_dev, "card is present\n"); 1610 - else if (!present && 1611 - !test_and_clear_bit(DW_MMC_CARD_PRESENT, &slot->flags)) 1612 - dev_dbg(&mmc->class_dev, "card is not present\n"); 1613 - spin_unlock_bh(&host->lock); 1614 - 1615 - return present; 1616 - } 1617 - 1618 1537 static void dw_mci_hw_reset(struct mmc_host *mmc) 1619 1538 { 1620 1539 struct dw_mci_slot *slot = mmc_priv(mmc); ··· 1685 1686 return drv_data->prepare_hs400_tuning(host, ios); 1686 1687 1687 1688 return 0; 1689 + } 1690 + 1691 + static bool dw_mci_reset(struct dw_mci *host) 1692 + { 1693 + u32 flags = SDMMC_CTRL_RESET | SDMMC_CTRL_FIFO_RESET; 1694 + bool ret = false; 1695 + u32 status = 0; 1696 + 1697 + /* 1698 + * Resetting generates a block interrupt, hence setting 1699 + * the scatter-gather pointer to NULL. 
1700 + */ 1701 + if (host->sg) { 1702 + sg_miter_stop(&host->sg_miter); 1703 + host->sg = NULL; 1704 + } 1705 + 1706 + if (host->use_dma) 1707 + flags |= SDMMC_CTRL_DMA_RESET; 1708 + 1709 + if (dw_mci_ctrl_reset(host, flags)) { 1710 + /* 1711 + * In all cases we clear the RAWINTS 1712 + * register to clear any interrupts. 1713 + */ 1714 + mci_writel(host, RINTSTS, 0xFFFFFFFF); 1715 + 1716 + if (!host->use_dma) { 1717 + ret = true; 1718 + goto ciu_out; 1719 + } 1720 + 1721 + /* Wait for dma_req to be cleared */ 1722 + if (readl_poll_timeout_atomic(host->regs + SDMMC_STATUS, 1723 + status, 1724 + !(status & SDMMC_STATUS_DMA_REQ), 1725 + 1, 500 * USEC_PER_MSEC)) { 1726 + dev_err(host->dev, 1727 + "%s: Timeout waiting for dma_req to be cleared\n", 1728 + __func__); 1729 + goto ciu_out; 1730 + } 1731 + 1732 + /* when using DMA next we reset the fifo again */ 1733 + if (!dw_mci_ctrl_reset(host, SDMMC_CTRL_FIFO_RESET)) 1734 + goto ciu_out; 1735 + } else { 1736 + /* if the controller reset bit did clear, then set clock regs */ 1737 + if (!(mci_readl(host, CTRL) & SDMMC_CTRL_RESET)) { 1738 + dev_err(host->dev, 1739 + "%s: fifo/dma reset bits didn't clear but ciu was reset, doing clock update\n", 1740 + __func__); 1741 + goto ciu_out; 1742 + } 1743 + } 1744 + 1745 + if (host->use_dma == TRANS_MODE_IDMAC) 1746 + /* It is also recommended that we reset and reprogram idmac */ 1747 + dw_mci_idmac_reset(host); 1748 + 1749 + ret = true; 1750 + 1751 + ciu_out: 1752 + /* After a CTRL reset we need to have CIU set clock registers */ 1753 + mci_send_cmd(host->cur_slot, SDMMC_CMD_UPD_CLK, 0); 1754 + 1755 + return ret; 1688 1756 } 1689 1757 1690 1758 static const struct mmc_host_ops dw_mci_ops = { ··· 2894 2828 no_dma: 2895 2829 dev_info(host->dev, "Using PIO mode.\n"); 2896 2830 host->use_dma = TRANS_MODE_PIO; 2897 - } 2898 - 2899 - static bool dw_mci_ctrl_reset(struct dw_mci *host, u32 reset) 2900 - { 2901 - unsigned long timeout = jiffies + msecs_to_jiffies(500); 2902 - u32 ctrl; 
2903 - 2904 - ctrl = mci_readl(host, CTRL); 2905 - ctrl |= reset; 2906 - mci_writel(host, CTRL, ctrl); 2907 - 2908 - /* wait till resets clear */ 2909 - do { 2910 - ctrl = mci_readl(host, CTRL); 2911 - if (!(ctrl & reset)) 2912 - return true; 2913 - } while (time_before(jiffies, timeout)); 2914 - 2915 - dev_err(host->dev, 2916 - "Timeout resetting block (ctrl reset %#x)\n", 2917 - ctrl & reset); 2918 - 2919 - return false; 2920 - } 2921 - 2922 - static bool dw_mci_reset(struct dw_mci *host) 2923 - { 2924 - u32 flags = SDMMC_CTRL_RESET | SDMMC_CTRL_FIFO_RESET; 2925 - bool ret = false; 2926 - 2927 - /* 2928 - * Reseting generates a block interrupt, hence setting 2929 - * the scatter-gather pointer to NULL. 2930 - */ 2931 - if (host->sg) { 2932 - sg_miter_stop(&host->sg_miter); 2933 - host->sg = NULL; 2934 - } 2935 - 2936 - if (host->use_dma) 2937 - flags |= SDMMC_CTRL_DMA_RESET; 2938 - 2939 - if (dw_mci_ctrl_reset(host, flags)) { 2940 - /* 2941 - * In all cases we clear the RAWINTS register to clear any 2942 - * interrupts. 
2943 - */ 2944 - mci_writel(host, RINTSTS, 0xFFFFFFFF); 2945 - 2946 - /* if using dma we wait for dma_req to clear */ 2947 - if (host->use_dma) { 2948 - unsigned long timeout = jiffies + msecs_to_jiffies(500); 2949 - u32 status; 2950 - 2951 - do { 2952 - status = mci_readl(host, STATUS); 2953 - if (!(status & SDMMC_STATUS_DMA_REQ)) 2954 - break; 2955 - cpu_relax(); 2956 - } while (time_before(jiffies, timeout)); 2957 - 2958 - if (status & SDMMC_STATUS_DMA_REQ) { 2959 - dev_err(host->dev, 2960 - "%s: Timeout waiting for dma_req to clear during reset\n", 2961 - __func__); 2962 - goto ciu_out; 2963 - } 2964 - 2965 - /* when using DMA next we reset the fifo again */ 2966 - if (!dw_mci_ctrl_reset(host, SDMMC_CTRL_FIFO_RESET)) 2967 - goto ciu_out; 2968 - } 2969 - } else { 2970 - /* if the controller reset bit did clear, then set clock regs */ 2971 - if (!(mci_readl(host, CTRL) & SDMMC_CTRL_RESET)) { 2972 - dev_err(host->dev, 2973 - "%s: fifo/dma reset bits didn't clear but ciu was reset, doing clock update\n", 2974 - __func__); 2975 - goto ciu_out; 2976 - } 2977 - } 2978 - 2979 - if (host->use_dma == TRANS_MODE_IDMAC) 2980 - /* It is also recommended that we reset and reprogram idmac */ 2981 - dw_mci_idmac_reset(host); 2982 - 2983 - ret = true; 2984 - 2985 - ciu_out: 2986 - /* After a CTRL reset we need to have CIU set clock registers */ 2987 - mci_send_cmd(host->cur_slot, SDMMC_CMD_UPD_CLK, 0); 2988 - 2989 - return ret; 2990 2831 } 2991 2832 2992 2833 static void dw_mci_cmd11_timer(unsigned long arg)
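The dw_mmc changes above convert several hand-rolled `jiffies`/`udelay` loops to `readl_poll_timeout_atomic()`. A userspace sketch of the pattern that helper encapsulates, with the delay modeled by advancing a counter instead of calling `udelay()` and a fake register standing in for MMIO:

```c
#include <assert.h>
#include <stdint.h>

static int countdown;	/* fake register: reads nonzero this many times */

static uint32_t fake_read(void)
{
	return countdown-- > 0 ? 1u : 0u;
}

/*
 * Sketch of the readl_poll_timeout_atomic() pattern: re-read a status
 * register until the given bits clear, giving up once the microsecond
 * budget is exhausted.
 */
static int poll_until_clear(uint32_t (*read_reg)(void), uint32_t mask,
			    unsigned int delay_us, unsigned int timeout_us)
{
	unsigned int waited = 0;

	for (;;) {
		if (!(read_reg() & mask))
			return 0;	/* bits cleared in time */
		if (waited >= timeout_us)
			return -110;	/* -ETIMEDOUT */
		waited += delay_us;	/* stands in for udelay(delay_us) */
	}
}
```

Beyond brevity, the macro guarantees one final read after the deadline, so a condition that becomes true exactly at timeout is not misreported; this sketch keeps only the basic loop shape.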
+2 -7
drivers/mmc/host/jz4740_mmc.c
··· 200 200 return -ENODEV; 201 201 } 202 202 203 - static inline int jz4740_mmc_get_dma_dir(struct mmc_data *data) 204 - { 205 - return (data->flags & MMC_DATA_READ) ? DMA_FROM_DEVICE : DMA_TO_DEVICE; 206 - } 207 - 208 203 static inline struct dma_chan *jz4740_mmc_get_dma_chan(struct jz4740_mmc_host *host, 209 204 struct mmc_data *data) 210 205 { ··· 210 215 struct mmc_data *data) 211 216 { 212 217 struct dma_chan *chan = jz4740_mmc_get_dma_chan(host, data); 213 - enum dma_data_direction dir = jz4740_mmc_get_dma_dir(data); 218 + enum dma_data_direction dir = mmc_get_dma_dir(data); 214 219 215 220 dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir); 216 221 } ··· 222 227 struct dma_chan *chan) 223 228 { 224 229 struct jz4740_mmc_host_next *next_data = &host->next_data; 225 - enum dma_data_direction dir = jz4740_mmc_get_dma_dir(data); 230 + enum dma_data_direction dir = mmc_get_dma_dir(data); 226 231 int sg_len; 227 232 228 233 if (!next && data->host_cookie &&
+424 -226
drivers/mmc/host/meson-gx-mmc.c
···
 #include <linux/clk-provider.h>
 #include <linux/regulator/consumer.h>
 #include <linux/interrupt.h>
+#include <linux/bitfield.h>

 #define DRIVER_NAME "meson-gx-mmc"

 #define SD_EMMC_CLOCK 0x0
-#define CLK_DIV_SHIFT 0
-#define CLK_DIV_WIDTH 6
-#define CLK_DIV_MASK 0x3f
+#define CLK_DIV_MASK GENMASK(5, 0)
 #define CLK_DIV_MAX 63
-#define CLK_SRC_SHIFT 6
-#define CLK_SRC_WIDTH 2
-#define CLK_SRC_MASK 0x3
+#define CLK_SRC_MASK GENMASK(7, 6)
 #define CLK_SRC_XTAL 0 /* external crystal */
 #define CLK_SRC_XTAL_RATE 24000000
 #define CLK_SRC_PLL 1 /* FCLK_DIV2 */
 #define CLK_SRC_PLL_RATE 1000000000
-#define CLK_PHASE_SHIFT 8
-#define CLK_PHASE_MASK 0x3
+#define CLK_CORE_PHASE_MASK GENMASK(9, 8)
+#define CLK_TX_PHASE_MASK GENMASK(11, 10)
+#define CLK_RX_PHASE_MASK GENMASK(13, 12)
 #define CLK_PHASE_0 0
 #define CLK_PHASE_90 1
 #define CLK_PHASE_180 2
···
 #define SD_EMMC_START 0x40
 #define START_DESC_INIT BIT(0)
 #define START_DESC_BUSY BIT(1)
-#define START_DESC_ADDR_SHIFT 2
-#define START_DESC_ADDR_MASK (~0x3)
+#define START_DESC_ADDR_MASK GENMASK(31, 2)

 #define SD_EMMC_CFG 0x44
-#define CFG_BUS_WIDTH_SHIFT 0
-#define CFG_BUS_WIDTH_MASK 0x3
+#define CFG_BUS_WIDTH_MASK GENMASK(1, 0)
 #define CFG_BUS_WIDTH_1 0x0
 #define CFG_BUS_WIDTH_4 0x1
 #define CFG_BUS_WIDTH_8 0x2
 #define CFG_DDR BIT(2)
-#define CFG_BLK_LEN_SHIFT 4
-#define CFG_BLK_LEN_MASK 0xf
-#define CFG_RESP_TIMEOUT_SHIFT 8
-#define CFG_RESP_TIMEOUT_MASK 0xf
-#define CFG_RC_CC_SHIFT 12
-#define CFG_RC_CC_MASK 0xf
+#define CFG_BLK_LEN_MASK GENMASK(7, 4)
+#define CFG_RESP_TIMEOUT_MASK GENMASK(11, 8)
+#define CFG_RC_CC_MASK GENMASK(15, 12)
 #define CFG_STOP_CLOCK BIT(22)
 #define CFG_CLK_ALWAYS_ON BIT(18)
 #define CFG_CHK_DS BIT(20)
···
 #define STATUS_BUSY BIT(31)

 #define SD_EMMC_IRQ_EN 0x4c
-#define IRQ_EN_MASK 0x3fff
-#define IRQ_RXD_ERR_SHIFT 0
-#define IRQ_RXD_ERR_MASK 0xff
+#define IRQ_EN_MASK GENMASK(13, 0)
+#define IRQ_RXD_ERR_MASK GENMASK(7, 0)
 #define IRQ_TXD_ERR BIT(8)
 #define IRQ_DESC_ERR BIT(9)
 #define IRQ_RESP_ERR BIT(10)
···

 #define SD_EMMC_CFG_BLK_SIZE 512 /* internal buffer max: 512 bytes */
 #define SD_EMMC_CFG_RESP_TIMEOUT 256 /* in clock cycles */
+#define SD_EMMC_CMD_TIMEOUT 1024 /* in ms */
+#define SD_EMMC_CMD_TIMEOUT_DATA 4096 /* in ms */
 #define SD_EMMC_CFG_CMD_GAP 16 /* in clock cycles */
+#define SD_EMMC_DESC_BUF_LEN PAGE_SIZE
+
+#define SD_EMMC_PRE_REQ_DONE BIT(0)
+#define SD_EMMC_DESC_CHAIN_MODE BIT(1)
+
 #define MUX_CLK_NUM_PARENTS 2

-struct meson_host {
-	struct device *dev;
-	struct mmc_host *mmc;
-	struct mmc_request *mrq;
-	struct mmc_command *cmd;
-
-	spinlock_t lock;
-	void __iomem *regs;
-	int irq;
-	u32 ocr_mask;
-	struct clk *core_clk;
-	struct clk_mux mux;
-	struct clk *mux_clk;
-	struct clk *mux_parent[MUX_CLK_NUM_PARENTS];
-	unsigned long current_clock;
-
-	struct clk_divider cfg_div;
-	struct clk *cfg_div_clk;
-
-	unsigned int bounce_buf_size;
-	void *bounce_buf;
-	dma_addr_t bounce_dma_addr;
-
-	bool vqmmc_enabled;
+struct meson_tuning_params {
+	u8 core_phase;
+	u8 tx_phase;
+	u8 rx_phase;
 };

 struct sd_emmc_desc {
···
 	u32 cmd_data;
 	u32 cmd_resp;
 };
-#define CMD_CFG_LENGTH_SHIFT 0
-#define CMD_CFG_LENGTH_MASK 0x1ff
+
+struct meson_host {
+	struct device *dev;
+	struct mmc_host *mmc;
+	struct mmc_command *cmd;
+
+	spinlock_t lock;
+	void __iomem *regs;
+	struct clk *core_clk;
+	struct clk_mux mux;
+	struct clk *mux_clk;
+	unsigned long current_clock;
+
+	struct clk_divider cfg_div;
+	struct clk *cfg_div_clk;
+
+	unsigned int bounce_buf_size;
+	void *bounce_buf;
+	dma_addr_t bounce_dma_addr;
+	struct sd_emmc_desc *descs;
+	dma_addr_t descs_dma_addr;
+
+	struct meson_tuning_params tp;
+	bool vqmmc_enabled;
+};
+
+#define CMD_CFG_LENGTH_MASK GENMASK(8, 0)
 #define CMD_CFG_BLOCK_MODE BIT(9)
 #define CMD_CFG_R1B BIT(10)
 #define CMD_CFG_END_OF_CHAIN BIT(11)
-#define CMD_CFG_TIMEOUT_SHIFT 12
-#define CMD_CFG_TIMEOUT_MASK 0xf
+#define CMD_CFG_TIMEOUT_MASK GENMASK(15, 12)
 #define CMD_CFG_NO_RESP BIT(16)
 #define CMD_CFG_NO_CMD BIT(17)
 #define CMD_CFG_DATA_IO BIT(18)
···
 #define CMD_CFG_RESP_128 BIT(21)
 #define CMD_CFG_RESP_NUM BIT(22)
 #define CMD_CFG_DATA_NUM BIT(23)
-#define CMD_CFG_CMD_INDEX_SHIFT 24
-#define CMD_CFG_CMD_INDEX_MASK 0x3f
+#define CMD_CFG_CMD_INDEX_MASK GENMASK(29, 24)
 #define CMD_CFG_ERROR BIT(30)
 #define CMD_CFG_OWNER BIT(31)

-#define CMD_DATA_MASK (~0x3)
+#define CMD_DATA_MASK GENMASK(31, 2)
 #define CMD_DATA_BIG_ENDIAN BIT(1)
 #define CMD_DATA_SRAM BIT(0)
-#define CMD_RESP_MASK (~0x1)
+#define CMD_RESP_MASK GENMASK(31, 1)
 #define CMD_RESP_SRAM BIT(0)
+
+static unsigned int meson_mmc_get_timeout_msecs(struct mmc_data *data)
+{
+	unsigned int timeout = data->timeout_ns / NSEC_PER_MSEC;
+
+	if (!timeout)
+		return SD_EMMC_CMD_TIMEOUT_DATA;
+
+	timeout = roundup_pow_of_two(timeout);
+
+	return min(timeout, 32768U); /* max. 2^15 ms */
+}
+
+static struct mmc_command *meson_mmc_get_next_command(struct mmc_command *cmd)
+{
+	if (cmd->opcode == MMC_SET_BLOCK_COUNT && !cmd->error)
+		return cmd->mrq->cmd;
+	else if (mmc_op_multi(cmd->opcode) &&
+		 (!cmd->mrq->sbc || cmd->error || cmd->data->error))
+		return cmd->mrq->stop;
+	else
+		return NULL;
+}
+
+static void meson_mmc_get_transfer_mode(struct mmc_host *mmc,
+					struct mmc_request *mrq)
+{
+	struct mmc_data *data = mrq->data;
+	struct scatterlist *sg;
+	int i;
+	bool use_desc_chain_mode = true;
+
+	for_each_sg(data->sg, sg, data->sg_len, i)
+		/* check for 8 byte alignment */
+		if (sg->offset & 7) {
+			WARN_ONCE(1, "unaligned scatterlist buffer\n");
+			use_desc_chain_mode = false;
+			break;
+		}
+
+	if (use_desc_chain_mode)
+		data->host_cookie |= SD_EMMC_DESC_CHAIN_MODE;
+}
+
+static inline bool meson_mmc_desc_chain_mode(const struct mmc_data *data)
+{
+	return data->host_cookie & SD_EMMC_DESC_CHAIN_MODE;
+}
+
+static inline bool meson_mmc_bounce_buf_read(const struct mmc_data *data)
+{
+	return data && data->flags & MMC_DATA_READ &&
+	       !meson_mmc_desc_chain_mode(data);
+}
+
+static void meson_mmc_pre_req(struct mmc_host *mmc, struct mmc_request *mrq)
+{
+	struct mmc_data *data = mrq->data;
+
+	if (!data)
+		return;
+
+	meson_mmc_get_transfer_mode(mmc, mrq);
+	data->host_cookie |= SD_EMMC_PRE_REQ_DONE;
+
+	if (!meson_mmc_desc_chain_mode(data))
+		return;
+
+	data->sg_count = dma_map_sg(mmc_dev(mmc), data->sg, data->sg_len,
+				    mmc_get_dma_dir(data));
+	if (!data->sg_count)
+		dev_err(mmc_dev(mmc), "dma_map_sg failed");
+}
+
+static void meson_mmc_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
+			       int err)
+{
+	struct mmc_data *data = mrq->data;
+
+	if (data && meson_mmc_desc_chain_mode(data) && data->sg_count)
+		dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len,
+			     mmc_get_dma_dir(data));
+}

 static int meson_mmc_clk_set(struct meson_host *host, unsigned long clk_rate)
 {
···
 	char clk_name[32];
 	int i, ret = 0;
 	const char *mux_parent_names[MUX_CLK_NUM_PARENTS];
-	unsigned int mux_parent_count = 0;
 	const char *clk_div_parents[1];
 	u32 clk_reg, cfg;

 	/* get the mux parents */
 	for (i = 0; i < MUX_CLK_NUM_PARENTS; i++) {
+		struct clk *clk;
 		char name[16];

 		snprintf(name, sizeof(name), "clkin%d", i);
-		host->mux_parent[i] = devm_clk_get(host->dev, name);
-		if (IS_ERR(host->mux_parent[i])) {
-			ret = PTR_ERR(host->mux_parent[i]);
-			if (PTR_ERR(host->mux_parent[i]) != -EPROBE_DEFER)
+		clk = devm_clk_get(host->dev, name);
+		if (IS_ERR(clk)) {
+			if (clk != ERR_PTR(-EPROBE_DEFER))
 				dev_err(host->dev, "Missing clock %s\n", name);
-			host->mux_parent[i] = NULL;
-			return ret;
+			return PTR_ERR(clk);
 		}

-		mux_parent_names[i] = __clk_get_name(host->mux_parent[i]);
-		mux_parent_count++;
+		mux_parent_names[i] = __clk_get_name(clk);
 	}

 	/* create the mux */
···
 	init.ops = &clk_mux_ops;
 	init.flags = 0;
 	init.parent_names = mux_parent_names;
-	init.num_parents = mux_parent_count;
-
+	init.num_parents = MUX_CLK_NUM_PARENTS;
 	host->mux.reg = host->regs + SD_EMMC_CLOCK;
-	host->mux.shift = CLK_SRC_SHIFT;
+	host->mux.shift = __bf_shf(CLK_SRC_MASK);
 	host->mux.mask = CLK_SRC_MASK;
 	host->mux.flags = 0;
 	host->mux.table = NULL;
···

 	/* create the divider */
 	snprintf(clk_name, sizeof(clk_name), "%s#div", dev_name(host->dev));
-	init.name = devm_kstrdup(host->dev, clk_name, GFP_KERNEL);
+	init.name = clk_name;
 	init.ops = &clk_divider_ops;
 	init.flags = CLK_SET_RATE_PARENT;
 	clk_div_parents[0] = __clk_get_name(host->mux_clk);
···
 	init.num_parents = ARRAY_SIZE(clk_div_parents);

 	host->cfg_div.reg = host->regs + SD_EMMC_CLOCK;
-	host->cfg_div.shift = CLK_DIV_SHIFT;
-	host->cfg_div.width = CLK_DIV_WIDTH;
+	host->cfg_div.shift = __bf_shf(CLK_DIV_MASK);
+	host->cfg_div.width = __builtin_popcountl(CLK_DIV_MASK);
 	host->cfg_div.hw.init = &init;
 	host->cfg_div.flags = CLK_DIVIDER_ONE_BASED |
 		CLK_DIVIDER_ROUND_CLOSEST | CLK_DIVIDER_ALLOW_ZERO;
···

 	/* init SD_EMMC_CLOCK to sane defaults w/min clock rate */
 	clk_reg = 0;
-	clk_reg |= CLK_PHASE_180 << CLK_PHASE_SHIFT;
-	clk_reg |= CLK_SRC_XTAL << CLK_SRC_SHIFT;
-	clk_reg |= CLK_DIV_MAX << CLK_DIV_SHIFT;
+	clk_reg |= FIELD_PREP(CLK_CORE_PHASE_MASK, host->tp.core_phase);
+	clk_reg |= FIELD_PREP(CLK_TX_PHASE_MASK, host->tp.tx_phase);
+	clk_reg |= FIELD_PREP(CLK_RX_PHASE_MASK, host->tp.rx_phase);
+	clk_reg |= FIELD_PREP(CLK_SRC_MASK, CLK_SRC_XTAL);
+	clk_reg |= FIELD_PREP(CLK_DIV_MASK, CLK_DIV_MAX);
 	clk_reg &= ~CLK_ALWAYS_ON;
 	writel(clk_reg, host->regs + SD_EMMC_CLOCK);
···
 	host->mmc->f_min = clk_round_rate(host->cfg_div_clk, 400000);

 	ret = meson_mmc_clk_set(host, host->mmc->f_min);
-	if (!ret)
+	if (ret)
 		clk_disable_unprepare(host->cfg_div_clk);

 	return ret;
+}
+
+static void meson_mmc_set_tuning_params(struct mmc_host *mmc)
+{
+	struct meson_host *host = mmc_priv(mmc);
+	u32 regval;
+
+	/* stop clock */
+	regval = readl(host->regs + SD_EMMC_CFG);
+	regval |= CFG_STOP_CLOCK;
+	writel(regval, host->regs + SD_EMMC_CFG);
+
+	regval = readl(host->regs + SD_EMMC_CLOCK);
+	regval &= ~CLK_CORE_PHASE_MASK;
+	regval |= FIELD_PREP(CLK_CORE_PHASE_MASK, host->tp.core_phase);
+	regval &= ~CLK_TX_PHASE_MASK;
+	regval |= FIELD_PREP(CLK_TX_PHASE_MASK, host->tp.tx_phase);
+	regval &= ~CLK_RX_PHASE_MASK;
+	regval |= FIELD_PREP(CLK_RX_PHASE_MASK, host->tp.rx_phase);
+	writel(regval, host->regs + SD_EMMC_CLOCK);
+
+	/* start clock */
+	regval = readl(host->regs + SD_EMMC_CFG);
+	regval &= ~CFG_STOP_CLOCK;
+	writel(regval, host->regs + SD_EMMC_CFG);
 }

 static void meson_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
···
 	val = readl(host->regs + SD_EMMC_CFG);
 	orig = val;

-	val &= ~(CFG_BUS_WIDTH_MASK << CFG_BUS_WIDTH_SHIFT);
-	val |= bus_width << CFG_BUS_WIDTH_SHIFT;
-
-	val &= ~(CFG_BLK_LEN_MASK << CFG_BLK_LEN_SHIFT);
-	val |= ilog2(SD_EMMC_CFG_BLK_SIZE) << CFG_BLK_LEN_SHIFT;
-
-	val &= ~(CFG_RESP_TIMEOUT_MASK << CFG_RESP_TIMEOUT_SHIFT);
-	val |= ilog2(SD_EMMC_CFG_RESP_TIMEOUT) << CFG_RESP_TIMEOUT_SHIFT;
-
-	val &= ~(CFG_RC_CC_MASK << CFG_RC_CC_SHIFT);
-	val |= ilog2(SD_EMMC_CFG_CMD_GAP) << CFG_RC_CC_SHIFT;
+	val &= ~CFG_BUS_WIDTH_MASK;
+	val |= FIELD_PREP(CFG_BUS_WIDTH_MASK, bus_width);

 	val &= ~CFG_DDR;
 	if (ios->timing == MMC_TIMING_UHS_DDR50 ||
···
 	if (ios->timing == MMC_TIMING_MMC_HS400)
 		val |= CFG_CHK_DS;

-	writel(val, host->regs + SD_EMMC_CFG);
-
-	if (val != orig)
+	if (val != orig) {
+		writel(val, host->regs + SD_EMMC_CFG);
 		dev_dbg(host->dev, "%s: SD_EMMC_CFG: 0x%08x -> 0x%08x\n",
 			__func__, orig, val);
+	}
 }

-static int meson_mmc_request_done(struct mmc_host *mmc, struct mmc_request *mrq)
+static void meson_mmc_request_done(struct mmc_host *mmc,
+				   struct mmc_request *mrq)
 {
 	struct meson_host *host = mmc_priv(mmc);

-	WARN_ON(host->mrq != mrq);
-
-	host->mrq = NULL;
 	host->cmd = NULL;
 	mmc_request_done(host->mmc, mrq);
+}

-	return 0;
+static void meson_mmc_set_blksz(struct mmc_host *mmc, unsigned int blksz)
+{
+	struct meson_host *host = mmc_priv(mmc);
+	u32 cfg, blksz_old;
+
+	cfg = readl(host->regs + SD_EMMC_CFG);
+	blksz_old = FIELD_GET(CFG_BLK_LEN_MASK, cfg);
+
+	if (!is_power_of_2(blksz))
+		dev_err(host->dev, "blksz %u is not a power of 2\n", blksz);
+
+	blksz = ilog2(blksz);
+
+	/* check if block-size matches, if not update */
+	if (blksz == blksz_old)
+		return;
+
+	dev_dbg(host->dev, "%s: update blk_len %d -> %d\n", __func__,
+		blksz_old, blksz);
+
+	cfg &= ~CFG_BLK_LEN_MASK;
+	cfg |= FIELD_PREP(CFG_BLK_LEN_MASK, blksz);
+	writel(cfg, host->regs + SD_EMMC_CFG);
+}
+
+static void meson_mmc_set_response_bits(struct mmc_command *cmd, u32 *cmd_cfg)
+{
+	if (cmd->flags & MMC_RSP_PRESENT) {
+		if (cmd->flags & MMC_RSP_136)
+			*cmd_cfg |= CMD_CFG_RESP_128;
+		*cmd_cfg |= CMD_CFG_RESP_NUM;
+
+		if (!(cmd->flags & MMC_RSP_CRC))
+			*cmd_cfg |= CMD_CFG_RESP_NOCRC;
+
+		if (cmd->flags & MMC_RSP_BUSY)
+			*cmd_cfg |= CMD_CFG_R1B;
+	} else {
+		*cmd_cfg |= CMD_CFG_NO_RESP;
+	}
+}
+
+static void meson_mmc_desc_chain_transfer(struct mmc_host *mmc, u32 cmd_cfg)
+{
+	struct meson_host *host = mmc_priv(mmc);
+	struct sd_emmc_desc *desc = host->descs;
+	struct mmc_data *data = host->cmd->data;
+	struct scatterlist *sg;
+	u32 start;
+	int i;
+
+	if (data->flags & MMC_DATA_WRITE)
+		cmd_cfg |= CMD_CFG_DATA_WR;
+
+	if (data->blocks > 1) {
+		cmd_cfg |= CMD_CFG_BLOCK_MODE;
+		meson_mmc_set_blksz(mmc, data->blksz);
+	}
+
+	for_each_sg(data->sg, sg, data->sg_count, i) {
+		unsigned int len = sg_dma_len(sg);
+
+		if (data->blocks > 1)
+			len /= data->blksz;
+
+		desc[i].cmd_cfg = cmd_cfg;
+		desc[i].cmd_cfg |= FIELD_PREP(CMD_CFG_LENGTH_MASK, len);
+		if (i > 0)
+			desc[i].cmd_cfg |= CMD_CFG_NO_CMD;
+		desc[i].cmd_arg = host->cmd->arg;
+		desc[i].cmd_resp = 0;
+		desc[i].cmd_data = sg_dma_address(sg);
+	}
+	desc[data->sg_count - 1].cmd_cfg |= CMD_CFG_END_OF_CHAIN;
+
+	dma_wmb(); /* ensure descriptor is written before kicked */
+	start = host->descs_dma_addr | START_DESC_BUSY;
+	writel(start, host->regs + SD_EMMC_START);
 }

 static void meson_mmc_start_cmd(struct mmc_host *mmc, struct mmc_command *cmd)
 {
 	struct meson_host *host = mmc_priv(mmc);
-	struct sd_emmc_desc *desc, desc_tmp;
-	u32 cfg;
-	u8 blk_len, cmd_cfg_timeout;
+	struct mmc_data *data = cmd->data;
+	u32 cmd_cfg = 0, cmd_data = 0;
 	unsigned int xfer_bytes = 0;

 	/* Setup descriptors */
 	dma_rmb();
-	desc = &desc_tmp;
-	memset(desc, 0, sizeof(struct sd_emmc_desc));
-
-	desc->cmd_cfg |= (cmd->opcode & CMD_CFG_CMD_INDEX_MASK) <<
-		CMD_CFG_CMD_INDEX_SHIFT;
-	desc->cmd_cfg |= CMD_CFG_OWNER; /* owned by CPU */
-	desc->cmd_arg = cmd->arg;
-
-	/* Response */
-	if (cmd->flags & MMC_RSP_PRESENT) {
-		desc->cmd_cfg &= ~CMD_CFG_NO_RESP;
-		if (cmd->flags & MMC_RSP_136)
-			desc->cmd_cfg |= CMD_CFG_RESP_128;
-		desc->cmd_cfg |= CMD_CFG_RESP_NUM;
-		desc->cmd_resp = 0;
-
-		if (!(cmd->flags & MMC_RSP_CRC))
-			desc->cmd_cfg |= CMD_CFG_RESP_NOCRC;
-
-		if (cmd->flags & MMC_RSP_BUSY)
-			desc->cmd_cfg |= CMD_CFG_R1B;
-	} else {
-		desc->cmd_cfg |= CMD_CFG_NO_RESP;
-	}
-
-	/* data? */
-	if (cmd->data) {
-		desc->cmd_cfg |= CMD_CFG_DATA_IO;
-		if (cmd->data->blocks > 1) {
-			desc->cmd_cfg |= CMD_CFG_BLOCK_MODE;
-			desc->cmd_cfg |=
-				(cmd->data->blocks & CMD_CFG_LENGTH_MASK) <<
-				CMD_CFG_LENGTH_SHIFT;
-
-			/* check if block-size matches, if not update */
-			cfg = readl(host->regs + SD_EMMC_CFG);
-			blk_len = cfg & (CFG_BLK_LEN_MASK << CFG_BLK_LEN_SHIFT);
-			blk_len >>= CFG_BLK_LEN_SHIFT;
-			if (blk_len != ilog2(cmd->data->blksz)) {
-				dev_dbg(host->dev, "%s: update blk_len %d -> %d\n",
-					__func__, blk_len,
-					ilog2(cmd->data->blksz));
-				blk_len = ilog2(cmd->data->blksz);
-				cfg &= ~(CFG_BLK_LEN_MASK << CFG_BLK_LEN_SHIFT);
-				cfg |= blk_len << CFG_BLK_LEN_SHIFT;
-				writel(cfg, host->regs + SD_EMMC_CFG);
-			}
-		} else {
-			desc->cmd_cfg &= ~CMD_CFG_BLOCK_MODE;
-			desc->cmd_cfg |=
-				(cmd->data->blksz & CMD_CFG_LENGTH_MASK) <<
-				CMD_CFG_LENGTH_SHIFT;
-		}
-
-		cmd->data->bytes_xfered = 0;
-		xfer_bytes = cmd->data->blksz * cmd->data->blocks;
-		if (cmd->data->flags & MMC_DATA_WRITE) {
-			desc->cmd_cfg |= CMD_CFG_DATA_WR;
-			WARN_ON(xfer_bytes > host->bounce_buf_size);
-			sg_copy_to_buffer(cmd->data->sg, cmd->data->sg_len,
-					  host->bounce_buf, xfer_bytes);
-			cmd->data->bytes_xfered = xfer_bytes;
-			dma_wmb();
-		} else {
-			desc->cmd_cfg &= ~CMD_CFG_DATA_WR;
-		}
-
-		if (xfer_bytes > 0) {
-			desc->cmd_cfg &= ~CMD_CFG_DATA_NUM;
-			desc->cmd_data = host->bounce_dma_addr & CMD_DATA_MASK;
-		} else {
-			/* write data to data_addr */
-			desc->cmd_cfg |= CMD_CFG_DATA_NUM;
-			desc->cmd_data = 0;
-		}
-
-		cmd_cfg_timeout = 12;
-	} else {
-		desc->cmd_cfg &= ~CMD_CFG_DATA_IO;
-		cmd_cfg_timeout = 10;
-	}
-	desc->cmd_cfg |= (cmd_cfg_timeout & CMD_CFG_TIMEOUT_MASK) <<
-		CMD_CFG_TIMEOUT_SHIFT;

 	host->cmd = cmd;

+	cmd_cfg |= FIELD_PREP(CMD_CFG_CMD_INDEX_MASK, cmd->opcode);
+	cmd_cfg |= CMD_CFG_OWNER; /* owned by CPU */
+
+	meson_mmc_set_response_bits(cmd, &cmd_cfg);
+
+	/* data? */
+	if (data) {
+		data->bytes_xfered = 0;
+		cmd_cfg |= CMD_CFG_DATA_IO;
+		cmd_cfg |= FIELD_PREP(CMD_CFG_TIMEOUT_MASK,
+				      ilog2(meson_mmc_get_timeout_msecs(data)));
+
+		if (meson_mmc_desc_chain_mode(data)) {
+			meson_mmc_desc_chain_transfer(mmc, cmd_cfg);
+			return;
+		}
+
+		if (data->blocks > 1) {
+			cmd_cfg |= CMD_CFG_BLOCK_MODE;
+			cmd_cfg |= FIELD_PREP(CMD_CFG_LENGTH_MASK,
+					      data->blocks);
+			meson_mmc_set_blksz(mmc, data->blksz);
+		} else {
+			cmd_cfg |= FIELD_PREP(CMD_CFG_LENGTH_MASK, data->blksz);
+		}
+
+		xfer_bytes = data->blksz * data->blocks;
+		if (data->flags & MMC_DATA_WRITE) {
+			cmd_cfg |= CMD_CFG_DATA_WR;
+			WARN_ON(xfer_bytes > host->bounce_buf_size);
+			sg_copy_to_buffer(data->sg, data->sg_len,
+					  host->bounce_buf, xfer_bytes);
+			dma_wmb();
+		}
+
+		cmd_data = host->bounce_dma_addr & CMD_DATA_MASK;
+	} else {
+		cmd_cfg |= FIELD_PREP(CMD_CFG_TIMEOUT_MASK,
+				      ilog2(SD_EMMC_CMD_TIMEOUT));
+	}
+
 	/* Last descriptor */
-	desc->cmd_cfg |= CMD_CFG_END_OF_CHAIN;
-	writel(desc->cmd_cfg, host->regs + SD_EMMC_CMD_CFG);
-	writel(desc->cmd_data, host->regs + SD_EMMC_CMD_DAT);
-	writel(desc->cmd_resp, host->regs + SD_EMMC_CMD_RSP);
+	cmd_cfg |= CMD_CFG_END_OF_CHAIN;
+	writel(cmd_cfg, host->regs + SD_EMMC_CMD_CFG);
+	writel(cmd_data, host->regs + SD_EMMC_CMD_DAT);
+	writel(0, host->regs + SD_EMMC_CMD_RSP);
 	wmb(); /* ensure descriptor is written before kicked */
-	writel(desc->cmd_arg, host->regs + SD_EMMC_CMD_ARG);
+	writel(cmd->arg, host->regs + SD_EMMC_CMD_ARG);
 }

 static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
 {
 	struct meson_host *host = mmc_priv(mmc);
+	bool needs_pre_post_req = mrq->data &&
+			!(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE);

-	WARN_ON(host->mrq != NULL);
+	if (needs_pre_post_req) {
+		meson_mmc_get_transfer_mode(mmc, mrq);
+		if (!meson_mmc_desc_chain_mode(mrq->data))
+			needs_pre_post_req = false;
+	}
+
+	if (needs_pre_post_req)
+		meson_mmc_pre_req(mmc, mrq);

 	/* Stop execution */
 	writel(0, host->regs + SD_EMMC_START);

-	host->mrq = mrq;
+	meson_mmc_start_cmd(mmc, mrq->sbc ?: mrq->cmd);

-	if (mrq->sbc)
-		meson_mmc_start_cmd(mmc, mrq->sbc);
-	else
-		meson_mmc_start_cmd(mmc, mrq->cmd);
+	if (needs_pre_post_req)
+		meson_mmc_post_req(mmc, mrq, 0);
 }

-static int meson_mmc_read_resp(struct mmc_host *mmc, struct mmc_command *cmd)
+static void meson_mmc_read_resp(struct mmc_host *mmc, struct mmc_command *cmd)
 {
 	struct meson_host *host = mmc_priv(mmc);
···
 	} else if (cmd->flags & MMC_RSP_PRESENT) {
 		cmd->resp[0] = readl(host->regs + SD_EMMC_CMD_RSP);
 	}
-
-	return 0;
 }

 static irqreturn_t meson_mmc_irq(int irq, void *dev_id)
 {
 	struct meson_host *host = dev_id;
-	struct mmc_request *mrq;
 	struct mmc_command *cmd;
+	struct mmc_data *data;
 	u32 irq_en, status, raw_status;
 	irqreturn_t ret = IRQ_HANDLED;
···

 	cmd = host->cmd;

-	mrq = host->mrq;
-
-	if (WARN_ON(!mrq))
-		return IRQ_NONE;
-
 	if (WARN_ON(!cmd))
 		return IRQ_NONE;
+
+	data = cmd->data;

 	spin_lock(&host->lock);
 	irq_en = readl(host->regs + SD_EMMC_IRQ_EN);
···
 		ret = IRQ_NONE;
 		goto out;
 	}
+
+	meson_mmc_read_resp(host->mmc, cmd);

 	cmd->error = 0;
 	if (status & IRQ_RXD_ERR_MASK) {
···
 	if (status & IRQ_SDIO)
 		dev_dbg(host->dev, "Unhandled IRQ: SDIO.\n");

-	if (status & (IRQ_END_OF_CHAIN | IRQ_RESP_STATUS))
-		ret = IRQ_WAKE_THREAD;
-	else {
+	if (status & (IRQ_END_OF_CHAIN | IRQ_RESP_STATUS)) {
+		if (data && !cmd->error)
+			data->bytes_xfered = data->blksz * data->blocks;
+		if (meson_mmc_bounce_buf_read(data) ||
+		    meson_mmc_get_next_command(cmd))
+			ret = IRQ_WAKE_THREAD;
+	} else {
 		dev_warn(host->dev, "Unknown IRQ! status=0x%04x: MMC CMD%u arg=0x%08x flags=0x%08x stop=%d\n",
 			 status, cmd->opcode, cmd->arg,
-			 cmd->flags, mrq->stop ? 1 : 0);
+			 cmd->flags, cmd->mrq->stop ? 1 : 0);
 		if (cmd->data) {
 			struct mmc_data *data = cmd->data;
···
 	/* ack all (enabled) interrupts */
 	writel(status, host->regs + SD_EMMC_STATUS);

-	if (ret == IRQ_HANDLED) {
-		meson_mmc_read_resp(host->mmc, cmd);
+	if (ret == IRQ_HANDLED)
 		meson_mmc_request_done(host->mmc, cmd->mrq);
-	}

 	spin_unlock(&host->lock);
 	return ret;
···
 static irqreturn_t meson_mmc_irq_thread(int irq, void *dev_id)
 {
 	struct meson_host *host = dev_id;
-	struct mmc_request *mrq = host->mrq;
-	struct mmc_command *cmd = host->cmd;
+	struct mmc_command *next_cmd, *cmd = host->cmd;
 	struct mmc_data *data;
 	unsigned int xfer_bytes;
-
-	if (WARN_ON(!mrq))
-		return IRQ_NONE;

 	if (WARN_ON(!cmd))
 		return IRQ_NONE;

 	data = cmd->data;
-	if (data && data->flags & MMC_DATA_READ) {
+	if (meson_mmc_bounce_buf_read(data)) {
 		xfer_bytes = data->blksz * data->blocks;
 		WARN_ON(xfer_bytes > host->bounce_buf_size);
 		sg_copy_from_buffer(data->sg, data->sg_len,
 				    host->bounce_buf, xfer_bytes);
-		data->bytes_xfered = xfer_bytes;
 	}

-	meson_mmc_read_resp(host->mmc, cmd);
-	if (!data || !data->stop || mrq->sbc)
-		meson_mmc_request_done(host->mmc, mrq);
+	next_cmd = meson_mmc_get_next_command(cmd);
+	if (next_cmd)
+		meson_mmc_start_cmd(host->mmc, next_cmd);
 	else
-		meson_mmc_start_cmd(host->mmc, data->stop);
+		meson_mmc_request_done(host->mmc, cmd->mrq);

 	return IRQ_HANDLED;
+}
+
+static int meson_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode)
+{
+	struct meson_host *host = mmc_priv(mmc);
+	struct meson_tuning_params tp_old = host->tp;
+	int ret = -EINVAL, i, cmd_error;
+
+	dev_info(mmc_dev(mmc), "(re)tuning...\n");
+
+	for (i = CLK_PHASE_0; i <= CLK_PHASE_270; i++) {
+		host->tp.rx_phase = i;
+		/* exclude the active parameter set if retuning */
+		if (!memcmp(&tp_old, &host->tp, sizeof(tp_old)) &&
+		    mmc->doing_retune)
+			continue;
+		meson_mmc_set_tuning_params(mmc);
+		ret = mmc_send_tuning(mmc, opcode, &cmd_error);
+		if (!ret)
+			break;
+	}
+
+	return ret;
 }

 /*
···
 	return status;
 }

+static void meson_mmc_cfg_init(struct meson_host *host)
+{
+	u32 cfg = 0;
+
+	cfg |= FIELD_PREP(CFG_RESP_TIMEOUT_MASK,
+			  ilog2(SD_EMMC_CFG_RESP_TIMEOUT));
+	cfg |= FIELD_PREP(CFG_RC_CC_MASK, ilog2(SD_EMMC_CFG_CMD_GAP));
+	cfg |= FIELD_PREP(CFG_BLK_LEN_MASK, ilog2(SD_EMMC_CFG_BLK_SIZE));
+
+	writel(cfg, host->regs + SD_EMMC_CFG);
+}
+
 static const struct mmc_host_ops meson_mmc_ops = {
 	.request	= meson_mmc_request,
 	.set_ios	= meson_mmc_set_ios,
 	.get_cd		= meson_mmc_get_cd,
+	.pre_req	= meson_mmc_pre_req,
+	.post_req	= meson_mmc_post_req,
+	.execute_tuning = meson_mmc_execute_tuning,
 };

 static int meson_mmc_probe(struct platform_device *pdev)
···
 	struct resource *res;
 	struct meson_host *host;
 	struct mmc_host *mmc;
-	int ret;
+	int ret, irq;

 	mmc = mmc_alloc_host(sizeof(struct meson_host), &pdev->dev);
 	if (!mmc)
···
 		goto free_host;
 	}

-	host->irq = platform_get_irq(pdev, 0);
-	if (host->irq == 0) {
+	irq = platform_get_irq(pdev, 0);
+	if (!irq) {
 		dev_err(&pdev->dev, "failed to get interrupt resource.\n");
 		ret = -EINVAL;
 		goto free_host;
···
 	if (ret)
 		goto free_host;

+	host->tp.core_phase = CLK_PHASE_180;
+	host->tp.tx_phase = CLK_PHASE_0;
+	host->tp.rx_phase = CLK_PHASE_0;
+
 	ret = meson_mmc_clk_init(host);
 	if (ret)
-		goto free_host;
+		goto err_core_clk;

 	/* Stop execution */
 	writel(0, host->regs + SD_EMMC_START);
···
 	writel(IRQ_EN_MASK, host->regs + SD_EMMC_STATUS);
 	writel(IRQ_EN_MASK, host->regs + SD_EMMC_IRQ_EN);

-	ret = devm_request_threaded_irq(&pdev->dev, host->irq,
-					meson_mmc_irq, meson_mmc_irq_thread,
-					IRQF_SHARED, DRIVER_NAME, host);
-	if (ret)
-		goto free_host;
+	/* set config to sane default */
+	meson_mmc_cfg_init(host);

+	ret = devm_request_threaded_irq(&pdev->dev, irq, meson_mmc_irq,
+					meson_mmc_irq_thread, IRQF_SHARED,
+					NULL, host);
+	if (ret)
+		goto err_div_clk;
+
+	mmc->caps |= MMC_CAP_CMD23;
 	mmc->max_blk_count = CMD_CFG_LENGTH_MASK;
 	mmc->max_req_size = mmc->max_blk_count * mmc->max_blk_size;
+	mmc->max_segs = SD_EMMC_DESC_BUF_LEN / sizeof(struct sd_emmc_desc);
+	mmc->max_seg_size = mmc->max_req_size;

 	/* data bounce buffer */
 	host->bounce_buf_size = mmc->max_req_size;
···
 	if (host->bounce_buf == NULL) {
 		dev_err(host->dev, "Unable to map allocate DMA bounce buffer.\n");
 		ret = -ENOMEM;
-		goto free_host;
+		goto err_div_clk;
+	}
+
+	host->descs = dma_alloc_coherent(host->dev, SD_EMMC_DESC_BUF_LEN,
+					 &host->descs_dma_addr, GFP_KERNEL);
+	if (!host->descs) {
+		dev_err(host->dev, "Allocating descriptor DMA buffer failed\n");
+		ret = -ENOMEM;
+		goto err_bounce_buf;
 	}

 	mmc->ops = &meson_mmc_ops;
···

 	return 0;

-free_host:
+err_bounce_buf:
+	dma_free_coherent(host->dev, host->bounce_buf_size,
+			  host->bounce_buf, host->bounce_dma_addr);
+err_div_clk:
 	clk_disable_unprepare(host->cfg_div_clk);
+err_core_clk:
 	clk_disable_unprepare(host->core_clk);
+free_host:
 	mmc_free_host(mmc);
 	return ret;
 }
···
 {
 	struct meson_host *host = dev_get_drvdata(&pdev->dev);

+	mmc_remove_host(host->mmc);
+
 	/* disable interrupts */
 	writel(0, host->regs + SD_EMMC_IRQ_EN);

+	dma_free_coherent(host->dev, SD_EMMC_DESC_BUF_LEN,
+			  host->descs, host->descs_dma_addr);
 	dma_free_coherent(host->dev, host->bounce_buf_size,
 			  host->bounce_buf, host->bounce_dma_addr);
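Much of the meson-gx rework above replaces open-coded shift/mask pairs with `GENMASK()`, `FIELD_PREP()` and `FIELD_GET()` from `<linux/bitfield.h>`, and derives the data timeout by rounding it up to a power of two so that only `ilog2()` of the value has to fit in the 4-bit `CMD_CFG_TIMEOUT_MASK` field. A userspace approximation of both ideas follows; the macros are simplified 32-bit versions without the kernel's compile-time checks, and `timeout_to_pow2_msecs()` mirrors the shape of `meson_mmc_get_timeout_msecs()` rather than reproducing it exactly:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified 32-bit approximations of the kernel helpers. */
#define GENMASK(h, l)	(((~0u) << (l)) & (~0u >> (31 - (h))))
#define __bf_shf(m)	(__builtin_ctz(m))	/* shift = lowest set bit */
#define FIELD_PREP(mask, val) (((uint32_t)(val) << __bf_shf(mask)) & (mask))
#define FIELD_GET(mask, reg)  (((uint32_t)(reg) & (mask)) >> __bf_shf(mask))

/* Field layout taken from the driver's SD_EMMC_CLOCK register. */
#define CLK_DIV_MASK	GENMASK(5, 0)
#define CLK_SRC_MASK	GENMASK(7, 6)

/*
 * Round a millisecond timeout up to a power of two so its ilog2() fits
 * in a small register field; 0 selects the driver's data-timeout
 * fallback, and the result is capped at 2^15 ms as in the driver.
 */
static unsigned int timeout_to_pow2_msecs(unsigned int timeout_ms)
{
	unsigned int t = 1;

	if (!timeout_ms)
		return 4096; /* SD_EMMC_CMD_TIMEOUT_DATA fallback */
	while (t < timeout_ms)
		t <<= 1;
	return t < 32768u ? t : 32768u;
}
```

The mask-only style removes the paired `_SHIFT`/`_MASK` defines entirely: the shift is recomputed from the mask, so a field can never be built with a mismatched shift.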
+1 -4
drivers/mmc/host/mmc_spi.c
···
 	u32 clock_rate;
 	unsigned long timeout;

-	if (data->flags & MMC_DATA_READ)
-		direction = DMA_FROM_DEVICE;
-	else
-		direction = DMA_TO_DEVICE;
+	direction = mmc_get_dma_dir(data);
 	mmc_spi_setup_data_message(host, multiple, direction);
 	t = &host->t;
+8 -12
drivers/mmc/host/mmci.c
···
 static void mmci_dma_unmap(struct mmci_host *host, struct mmc_data *data)
 {
 	struct dma_chan *chan;
-	enum dma_data_direction dir;

-	if (data->flags & MMC_DATA_READ) {
-		dir = DMA_FROM_DEVICE;
+	if (data->flags & MMC_DATA_READ)
 		chan = host->dma_rx_channel;
-	} else {
-		dir = DMA_TO_DEVICE;
+	else
 		chan = host->dma_tx_channel;
-	}

-	dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
+	dma_unmap_sg(chan->device->dev, data->sg, data->sg_len,
+		     mmc_get_dma_dir(data));
 }

 static void mmci_dma_finalize(struct mmci_host *host, struct mmc_data *data)
···
 	struct dma_chan *chan;
 	struct dma_device *device;
 	struct dma_async_tx_descriptor *desc;
-	enum dma_data_direction buffer_dirn;
 	int nr_sg;
 	unsigned long flags = DMA_CTRL_ACK;

 	if (data->flags & MMC_DATA_READ) {
 		conf.direction = DMA_DEV_TO_MEM;
-		buffer_dirn = DMA_FROM_DEVICE;
 		chan = host->dma_rx_channel;
 	} else {
 		conf.direction = DMA_MEM_TO_DEV;
-		buffer_dirn = DMA_TO_DEVICE;
 		chan = host->dma_tx_channel;
 	}
···
 		return -EINVAL;

 	device = chan->device;
-	nr_sg = dma_map_sg(device->dev, data->sg, data->sg_len, buffer_dirn);
+	nr_sg = dma_map_sg(device->dev, data->sg, data->sg_len,
+			   mmc_get_dma_dir(data));
 	if (nr_sg == 0)
 		return -EINVAL;
···
 	return 0;

 unmap_exit:
-	dma_unmap_sg(device->dev, data->sg, data->sg_len, buffer_dirn);
+	dma_unmap_sg(device->dev, data->sg, data->sg_len,
+		     mmc_get_dma_dir(data));
 	return -ENOMEM;
 }
+3 -5
drivers/mmc/host/moxart-mmc.c
··· 256 256 257 257 static void moxart_transfer_dma(struct mmc_data *data, struct moxart_host *host) 258 258 { 259 - u32 len, dir_data, dir_slave; 259 + u32 len, dir_slave; 260 260 long dma_time; 261 261 struct dma_async_tx_descriptor *desc = NULL; 262 262 struct dma_chan *dma_chan; ··· 266 266 267 267 if (data->flags & MMC_DATA_WRITE) { 268 268 dma_chan = host->dma_chan_tx; 269 - dir_data = DMA_TO_DEVICE; 270 269 dir_slave = DMA_MEM_TO_DEV; 271 270 } else { 272 271 dma_chan = host->dma_chan_rx; 273 - dir_data = DMA_FROM_DEVICE; 274 272 dir_slave = DMA_DEV_TO_MEM; 275 273 } 276 274 277 275 len = dma_map_sg(dma_chan->device->dev, data->sg, 278 - data->sg_len, dir_data); 276 + data->sg_len, mmc_get_dma_dir(data)); 279 277 280 278 if (len > 0) { 281 279 desc = dmaengine_prep_slave_sg(dma_chan, data->sg, ··· 299 301 300 302 dma_unmap_sg(dma_chan->device->dev, 301 303 data->sg, data->sg_len, 302 - dir_data); 304 + mmc_get_dma_dir(data)); 303 305 } 304 306 305 307
+154 -22
drivers/mmc/host/mtk-sd.c
··· 76 76 #define MSDC_PATCH_BIT1 0xb4 77 77 #define MSDC_PAD_TUNE 0xec 78 78 #define PAD_DS_TUNE 0x188 79 + #define PAD_CMD_TUNE 0x18c 79 80 #define EMMC50_CFG0 0x208 80 81 81 82 /*--------------------------------------------------------------------------*/ ··· 212 211 #define MSDC_PATCH_BIT_SPCPUSH (0x1 << 29) /* RW */ 213 212 #define MSDC_PATCH_BIT_DECRCTMO (0x1 << 30) /* RW */ 214 213 214 + #define MSDC_PAD_TUNE_DATWRDLY (0x1f << 0) /* RW */ 215 215 #define MSDC_PAD_TUNE_DATRRDLY (0x1f << 8) /* RW */ 216 216 #define MSDC_PAD_TUNE_CMDRDLY (0x1f << 16) /* RW */ 217 + #define MSDC_PAD_TUNE_CMDRRDLY (0x1f << 22) /* RW */ 218 + #define MSDC_PAD_TUNE_CLKTDLY (0x1f << 27) /* RW */ 217 219 218 220 #define PAD_DS_TUNE_DLY1 (0x1f << 2) /* RW */ 219 221 #define PAD_DS_TUNE_DLY2 (0x1f << 7) /* RW */ 220 222 #define PAD_DS_TUNE_DLY3 (0x1f << 12) /* RW */ 223 + 224 + #define PAD_CMD_TUNE_RX_DLY3 (0x1f << 1) /* RW */ 221 225 222 226 #define EMMC50_CFG_PADCMD_LATCHCK (0x1 << 0) /* RW */ 223 227 #define EMMC50_CFG_CRCSTS_EDGE (0x1 << 3) /* RW */ ··· 291 285 u32 patch_bit0; 292 286 u32 patch_bit1; 293 287 u32 pad_ds_tune; 288 + u32 pad_cmd_tune; 294 289 u32 emmc50_cfg0; 295 290 }; 296 291 297 292 struct msdc_tune_para { 298 293 u32 iocon; 299 294 u32 pad_tune; 295 + u32 pad_cmd_tune; 300 296 }; 301 297 302 298 struct msdc_delay_phase { ··· 340 332 unsigned char timing; 341 333 bool vqmmc_enabled; 342 334 u32 hs400_ds_delay; 335 + u32 hs200_cmd_int_delay; /* cmd internal delay for HS200/SDR104 */ 336 + u32 hs400_cmd_int_delay; /* cmd internal delay for HS400 */ 337 + bool hs400_cmd_resp_sel_rising; 338 + /* cmd response sample selection for HS400 */ 343 339 bool hs400_mode; /* current eMMC will run at hs400 mode */ 344 340 struct msdc_save_para save_para; /* used when gate HCLK */ 345 341 struct msdc_tune_para def_tune_para; /* default tune setting */ ··· 474 462 struct mmc_data *data = mrq->data; 475 463 476 464 if (!(data->host_cookie & MSDC_PREPARE_FLAG)) { 477 - bool read = 
(data->flags & MMC_DATA_READ) != 0; 478 - 479 465 data->host_cookie |= MSDC_PREPARE_FLAG; 480 466 data->sg_count = dma_map_sg(host->dev, data->sg, data->sg_len, 481 - read ? DMA_FROM_DEVICE : DMA_TO_DEVICE); 467 + mmc_get_dma_dir(data)); 482 468 } 483 469 } 484 470 ··· 488 478 return; 489 479 490 480 if (data->host_cookie & MSDC_PREPARE_FLAG) { 491 - bool read = (data->flags & MMC_DATA_READ) != 0; 492 - 493 481 dma_unmap_sg(host->dev, data->sg, data->sg_len, 494 - read ? DMA_FROM_DEVICE : DMA_TO_DEVICE); 482 + mmc_get_dma_dir(data)); 495 483 data->host_cookie &= ~MSDC_PREPARE_FLAG; 496 484 } 497 485 } ··· 609 601 } else { 610 602 writel(host->saved_tune_para.iocon, host->base + MSDC_IOCON); 611 603 writel(host->saved_tune_para.pad_tune, host->base + MSDC_PAD_TUNE); 604 + writel(host->saved_tune_para.pad_cmd_tune, 605 + host->base + PAD_CMD_TUNE); 612 606 } 613 607 608 + if (timing == MMC_TIMING_MMC_HS400) 609 + sdr_set_field(host->base + PAD_CMD_TUNE, 610 + MSDC_PAD_TUNE_CMDRRDLY, 611 + host->hs400_cmd_int_delay); 614 612 dev_dbg(host->dev, "sclk: %d, timing: %d\n", host->sclk, timing); 615 613 } 616 614 ··· 1317 1303 len_final = len; 1318 1304 } 1319 1305 start += len ? 
len : 1; 1320 - if (len >= 8 && start_final < 4) 1306 + if (len >= 12 && start_final < 4) 1321 1307 break; 1322 1308 } 1323 1309 ··· 1340 1326 struct msdc_host *host = mmc_priv(mmc); 1341 1327 u32 rise_delay = 0, fall_delay = 0; 1342 1328 struct msdc_delay_phase final_rise_delay, final_fall_delay = { 0,}; 1329 + struct msdc_delay_phase internal_delay_phase; 1343 1330 u8 final_delay, final_maxlen; 1331 + u32 internal_delay = 0; 1344 1332 int cmd_err; 1345 - int i; 1333 + int i, j; 1334 + 1335 + if (mmc->ios.timing == MMC_TIMING_MMC_HS200 || 1336 + mmc->ios.timing == MMC_TIMING_UHS_SDR104) 1337 + sdr_set_field(host->base + MSDC_PAD_TUNE, 1338 + MSDC_PAD_TUNE_CMDRRDLY, 1339 + host->hs200_cmd_int_delay); 1346 1340 1347 1341 sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL); 1348 1342 for (i = 0 ; i < PAD_DELAY_MAX; i++) { 1349 1343 sdr_set_field(host->base + MSDC_PAD_TUNE, 1350 1344 MSDC_PAD_TUNE_CMDRDLY, i); 1351 - mmc_send_tuning(mmc, opcode, &cmd_err); 1352 - if (!cmd_err) 1353 - rise_delay |= (1 << i); 1345 + /* 1346 + * Using the same parameters, it may sometimes pass the test, 1347 + * but sometimes it may fail. To make sure the parameters are 1348 + * more stable, we test each set of parameters 3 times. 
1349 + */ 1350 + for (j = 0; j < 3; j++) { 1351 + mmc_send_tuning(mmc, opcode, &cmd_err); 1352 + if (!cmd_err) { 1353 + rise_delay |= (1 << i); 1354 + } else { 1355 + rise_delay &= ~(1 << i); 1356 + break; 1357 + } 1358 + } 1354 1359 } 1355 1360 final_rise_delay = get_best_delay(host, rise_delay); 1356 1361 /* if rising edge has enough margin, then do not scan falling edge */ 1357 - if (final_rise_delay.maxlen >= 10 || 1358 - (final_rise_delay.start == 0 && final_rise_delay.maxlen >= 4)) 1362 + if (final_rise_delay.maxlen >= 12 && final_rise_delay.start < 4) 1359 1363 goto skip_fall; 1360 1364 1361 1365 sdr_set_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL); 1362 1366 for (i = 0; i < PAD_DELAY_MAX; i++) { 1363 1367 sdr_set_field(host->base + MSDC_PAD_TUNE, 1364 1368 MSDC_PAD_TUNE_CMDRDLY, i); 1365 - mmc_send_tuning(mmc, opcode, &cmd_err); 1366 - if (!cmd_err) 1367 - fall_delay |= (1 << i); 1369 + /* 1370 + * Using the same parameters, it may sometimes pass the test, 1371 + * but sometimes it may fail. To make sure the parameters are 1372 + * more stable, we test each set of parameters 3 times. 
1373 + */ 1374 + for (j = 0; j < 3; j++) { 1375 + mmc_send_tuning(mmc, opcode, &cmd_err); 1376 + if (!cmd_err) { 1377 + fall_delay |= (1 << i); 1378 + } else { 1379 + fall_delay &= ~(1 << i); 1380 + break; 1381 + } 1382 + } 1368 1383 } 1369 1384 final_fall_delay = get_best_delay(host, fall_delay); 1370 1385 1371 1386 skip_fall: 1372 1387 final_maxlen = max(final_rise_delay.maxlen, final_fall_delay.maxlen); 1388 + if (final_fall_delay.maxlen >= 12 && final_fall_delay.start < 4) 1389 + final_maxlen = final_fall_delay.maxlen; 1373 1390 if (final_maxlen == final_rise_delay.maxlen) { 1374 1391 sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL); 1375 1392 sdr_set_field(host->base + MSDC_PAD_TUNE, MSDC_PAD_TUNE_CMDRDLY, ··· 1412 1367 final_fall_delay.final_phase); 1413 1368 final_delay = final_fall_delay.final_phase; 1414 1369 } 1370 + if (host->hs200_cmd_int_delay) 1371 + goto skip_internal; 1415 1372 1373 + for (i = 0; i < PAD_DELAY_MAX; i++) { 1374 + sdr_set_field(host->base + MSDC_PAD_TUNE, 1375 + MSDC_PAD_TUNE_CMDRRDLY, i); 1376 + mmc_send_tuning(mmc, opcode, &cmd_err); 1377 + if (!cmd_err) 1378 + internal_delay |= (1 << i); 1379 + } 1380 + dev_dbg(host->dev, "Final internal delay: 0x%x\n", internal_delay); 1381 + internal_delay_phase = get_best_delay(host, internal_delay); 1382 + sdr_set_field(host->base + MSDC_PAD_TUNE, MSDC_PAD_TUNE_CMDRRDLY, 1383 + internal_delay_phase.final_phase); 1384 + skip_internal: 1385 + dev_dbg(host->dev, "Final cmd pad delay: %x\n", final_delay); 1386 + return final_delay == 0xff ? 
-EIO : 0; 1387 + } 1388 + 1389 + static int hs400_tune_response(struct mmc_host *mmc, u32 opcode) 1390 + { 1391 + struct msdc_host *host = mmc_priv(mmc); 1392 + u32 cmd_delay = 0; 1393 + struct msdc_delay_phase final_cmd_delay = { 0,}; 1394 + u8 final_delay; 1395 + int cmd_err; 1396 + int i, j; 1397 + 1398 + /* select EMMC50 PAD CMD tune */ 1399 + sdr_set_bits(host->base + PAD_CMD_TUNE, BIT(0)); 1400 + 1401 + if (mmc->ios.timing == MMC_TIMING_MMC_HS200 || 1402 + mmc->ios.timing == MMC_TIMING_UHS_SDR104) 1403 + sdr_set_field(host->base + MSDC_PAD_TUNE, 1404 + MSDC_PAD_TUNE_CMDRRDLY, 1405 + host->hs200_cmd_int_delay); 1406 + 1407 + if (host->hs400_cmd_resp_sel_rising) 1408 + sdr_clr_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL); 1409 + else 1410 + sdr_set_bits(host->base + MSDC_IOCON, MSDC_IOCON_RSPL); 1411 + for (i = 0 ; i < PAD_DELAY_MAX; i++) { 1412 + sdr_set_field(host->base + PAD_CMD_TUNE, 1413 + PAD_CMD_TUNE_RX_DLY3, i); 1414 + /* 1415 + * Using the same parameters, it may sometimes pass the test, 1416 + * but sometimes it may fail. To make sure the parameters are 1417 + * more stable, we test each set of parameters 3 times. 1418 + */ 1419 + for (j = 0; j < 3; j++) { 1420 + mmc_send_tuning(mmc, opcode, &cmd_err); 1421 + if (!cmd_err) { 1422 + cmd_delay |= (1 << i); 1423 + } else { 1424 + cmd_delay &= ~(1 << i); 1425 + break; 1426 + } 1427 + } 1428 + } 1429 + final_cmd_delay = get_best_delay(host, cmd_delay); 1430 + sdr_set_field(host->base + PAD_CMD_TUNE, PAD_CMD_TUNE_RX_DLY3, 1431 + final_cmd_delay.final_phase); 1432 + final_delay = final_cmd_delay.final_phase; 1433 + 1434 + dev_dbg(host->dev, "Final cmd pad delay: %x\n", final_delay); 1416 1435 return final_delay == 0xff ? 
-EIO : 0; 1417 1436 } 1418 1437 ··· 1499 1390 } 1500 1391 final_rise_delay = get_best_delay(host, rise_delay); 1501 1392 /* if rising edge has enough margin, then do not scan falling edge */ 1502 - if (final_rise_delay.maxlen >= 10 || 1393 + if (final_rise_delay.maxlen >= 12 || 1503 1394 (final_rise_delay.start == 0 && final_rise_delay.maxlen >= 4)) 1504 1395 goto skip_fall; 1505 1396 ··· 1532 1423 final_delay = final_fall_delay.final_phase; 1533 1424 } 1534 1425 1426 + dev_dbg(host->dev, "Final data pad delay: %x\n", final_delay); 1535 1427 return final_delay == 0xff ? -EIO : 0; 1536 1428 } 1537 1429 ··· 1541 1431 struct msdc_host *host = mmc_priv(mmc); 1542 1432 int ret; 1543 1433 1544 - ret = msdc_tune_response(mmc, opcode); 1434 + if (host->hs400_mode) 1435 + ret = hs400_tune_response(mmc, opcode); 1436 + else 1437 + ret = msdc_tune_response(mmc, opcode); 1545 1438 if (ret == -EIO) { 1546 1439 dev_err(host->dev, "Tune response fail!\n"); 1547 1440 return ret; ··· 1557 1444 1558 1445 host->saved_tune_para.iocon = readl(host->base + MSDC_IOCON); 1559 1446 host->saved_tune_para.pad_tune = readl(host->base + MSDC_PAD_TUNE); 1447 + host->saved_tune_para.pad_cmd_tune = readl(host->base + PAD_CMD_TUNE); 1560 1448 return ret; 1561 1449 } 1562 1450 ··· 1591 1477 .prepare_hs400_tuning = msdc_prepare_hs400_tuning, 1592 1478 .hw_reset = msdc_hw_reset, 1593 1479 }; 1480 + 1481 + static void msdc_of_property_parse(struct platform_device *pdev, 1482 + struct msdc_host *host) 1483 + { 1484 + of_property_read_u32(pdev->dev.of_node, "hs400-ds-delay", 1485 + &host->hs400_ds_delay); 1486 + 1487 + of_property_read_u32(pdev->dev.of_node, "mediatek,hs200-cmd-int-delay", 1488 + &host->hs200_cmd_int_delay); 1489 + 1490 + of_property_read_u32(pdev->dev.of_node, "mediatek,hs400-cmd-int-delay", 1491 + &host->hs400_cmd_int_delay); 1492 + 1493 + if (of_property_read_bool(pdev->dev.of_node, 1494 + "mediatek,hs400-cmd-resp-sel-rising")) 1495 + host->hs400_cmd_resp_sel_rising = true; 1496 + 
else 1497 + host->hs400_cmd_resp_sel_rising = false; 1498 + } 1594 1499 1595 1500 static int msdc_drv_probe(struct platform_device *pdev) 1596 1501 { ··· 1682 1549 goto host_free; 1683 1550 } 1684 1551 1685 - if (!of_property_read_u32(pdev->dev.of_node, "hs400-ds-delay", 1686 - &host->hs400_ds_delay)) 1687 - dev_dbg(&pdev->dev, "hs400-ds-delay: %x\n", 1688 - host->hs400_ds_delay); 1552 + msdc_of_property_parse(pdev, host); 1689 1553 1690 1554 host->dev = &pdev->dev; 1691 1555 host->mmc = mmc; ··· 1794 1664 host->save_para.patch_bit0 = readl(host->base + MSDC_PATCH_BIT); 1795 1665 host->save_para.patch_bit1 = readl(host->base + MSDC_PATCH_BIT1); 1796 1666 host->save_para.pad_ds_tune = readl(host->base + PAD_DS_TUNE); 1667 + host->save_para.pad_cmd_tune = readl(host->base + PAD_CMD_TUNE); 1797 1668 host->save_para.emmc50_cfg0 = readl(host->base + EMMC50_CFG0); 1798 1669 } 1799 1670 ··· 1807 1676 writel(host->save_para.patch_bit0, host->base + MSDC_PATCH_BIT); 1808 1677 writel(host->save_para.patch_bit1, host->base + MSDC_PATCH_BIT1); 1809 1678 writel(host->save_para.pad_ds_tune, host->base + PAD_DS_TUNE); 1679 + writel(host->save_para.pad_cmd_tune, host->base + PAD_CMD_TUNE); 1810 1680 writel(host->save_para.emmc50_cfg0, host->base + EMMC50_CFG0); 1811 1681 } 1812 1682
+5 -6
drivers/mmc/host/mvsdio.c
··· 125 125 return 1; 126 126 } else { 127 127 dma_addr_t phys_addr; 128 - int dma_dir = (data->flags & MMC_DATA_READ) ? 129 - DMA_FROM_DEVICE : DMA_TO_DEVICE; 130 - host->sg_frags = dma_map_sg(mmc_dev(host->mmc), data->sg, 131 - data->sg_len, dma_dir); 128 + 129 + host->sg_frags = dma_map_sg(mmc_dev(host->mmc), 130 + data->sg, data->sg_len, 131 + mmc_get_dma_dir(data)); 132 132 phys_addr = sg_dma_address(data->sg); 133 133 mvsd_write(MVSD_SYS_ADDR_LOW, (u32)phys_addr & 0xffff); 134 134 mvsd_write(MVSD_SYS_ADDR_HI, (u32)phys_addr >> 16); ··· 294 294 host->pio_size = 0; 295 295 } else { 296 296 dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->sg_frags, 297 - (data->flags & MMC_DATA_READ) ? 298 - DMA_FROM_DEVICE : DMA_TO_DEVICE); 297 + mmc_get_dma_dir(data)); 299 298 } 300 299 301 300 if (err_status & MVSD_ERR_DATA_TIMEOUT)
+6 -15
drivers/mmc/host/omap_hsmmc.c
··· 935 935 OMAP_HSMMC_WRITE(host->base, CMD, cmdreg); 936 936 } 937 937 938 - static int 939 - omap_hsmmc_get_dma_dir(struct omap_hsmmc_host *host, struct mmc_data *data) 940 - { 941 - if (data->flags & MMC_DATA_WRITE) 942 - return DMA_TO_DEVICE; 943 - else 944 - return DMA_FROM_DEVICE; 945 - } 946 - 947 938 static struct dma_chan *omap_hsmmc_get_dma_chan(struct omap_hsmmc_host *host, 948 939 struct mmc_data *data) 949 940 { ··· 1046 1055 dmaengine_terminate_all(chan); 1047 1056 dma_unmap_sg(chan->device->dev, 1048 1057 host->data->sg, host->data->sg_len, 1049 - omap_hsmmc_get_dma_dir(host, host->data)); 1058 + mmc_get_dma_dir(host->data)); 1050 1059 1051 1060 host->data->host_cookie = 0; 1052 1061 } ··· 1341 1350 if (!data->host_cookie) 1342 1351 dma_unmap_sg(chan->device->dev, 1343 1352 data->sg, data->sg_len, 1344 - omap_hsmmc_get_dma_dir(host, data)); 1353 + mmc_get_dma_dir(data)); 1345 1354 1346 1355 req_in_progress = host->req_in_progress; 1347 1356 host->dma_ch = -1; ··· 1374 1383 /* Check if next job is already prepared */ 1375 1384 if (next || data->host_cookie != host->next_data.cookie) { 1376 1385 dma_len = dma_map_sg(chan->device->dev, data->sg, data->sg_len, 1377 - omap_hsmmc_get_dma_dir(host, data)); 1386 + mmc_get_dma_dir(data)); 1378 1387 1379 1388 } else { 1380 1389 dma_len = host->next_data.dma_len; ··· 1560 1569 struct dma_chan *c = omap_hsmmc_get_dma_chan(host, data); 1561 1570 1562 1571 dma_unmap_sg(c->device->dev, data->sg, data->sg_len, 1563 - omap_hsmmc_get_dma_dir(host, data)); 1572 + mmc_get_dma_dir(data)); 1564 1573 data->host_cookie = 0; 1565 1574 } 1566 1575 } ··· 1761 1770 */ 1762 1771 if (host->pdata->controller_flags & OMAP_HSMMC_SWAKEUP_MISSING) { 1763 1772 struct pinctrl *p = devm_pinctrl_get(host->dev); 1764 - if (!p) { 1765 - ret = -ENODEV; 1773 + if (IS_ERR(p)) { 1774 + ret = PTR_ERR(p); 1766 1775 goto err_free_irq; 1767 1776 } 1768 1777 if (IS_ERR(pinctrl_lookup_state(p, PINCTRL_STATE_DEFAULT))) {
+128 -133
drivers/mmc/host/s3cmci.c
··· 24 24 #include <linux/interrupt.h> 25 25 #include <linux/irq.h> 26 26 #include <linux/io.h> 27 + #include <linux/of.h> 28 + #include <linux/of_device.h> 29 + #include <linux/of_gpio.h> 30 + #include <linux/mmc/slot-gpio.h> 27 31 28 32 #include <plat/gpio-cfg.h> 29 33 #include <mach/dma.h> ··· 811 807 812 808 } 813 809 814 - /* 815 - * ISR for the CardDetect Pin 816 - */ 817 - 818 - static irqreturn_t s3cmci_irq_cd(int irq, void *dev_id) 819 - { 820 - struct s3cmci_host *host = (struct s3cmci_host *)dev_id; 821 - 822 - dbg(host, dbg_irq, "card detect\n"); 823 - 824 - mmc_detect_change(host->mmc, msecs_to_jiffies(500)); 825 - 826 - return IRQ_HANDLED; 827 - } 828 - 829 810 static void s3cmci_dma_done_callback(void *arg) 830 811 { 831 812 struct s3cmci_host *host = arg; ··· 1093 1104 conf.direction = DMA_MEM_TO_DEV; 1094 1105 1095 1106 dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 1096 - rw ? DMA_TO_DEVICE : DMA_FROM_DEVICE); 1107 + mmc_get_dma_dir(data)); 1097 1108 1098 1109 dmaengine_slave_config(host->dma, &conf); 1099 1110 desc = dmaengine_prep_slave_sg(host->dma, data->sg, data->sg_len, ··· 1110 1121 1111 1122 unmap_exit: 1112 1123 dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 1113 - rw ? DMA_TO_DEVICE : DMA_FROM_DEVICE); 1124 + mmc_get_dma_dir(data)); 1114 1125 return -ENOMEM; 1115 1126 } 1116 1127 ··· 1166 1177 s3cmci_enable_irq(host, true); 1167 1178 } 1168 1179 1169 - static int s3cmci_card_present(struct mmc_host *mmc) 1170 - { 1171 - struct s3cmci_host *host = mmc_priv(mmc); 1172 - struct s3c24xx_mci_pdata *pdata = host->pdata; 1173 - int ret; 1174 - 1175 - if (pdata->no_detect) 1176 - return -ENOSYS; 1177 - 1178 - ret = gpio_get_value(pdata->gpio_detect) ? 
0 : 1; 1179 - return ret ^ pdata->detect_invert; 1180 - } 1181 - 1182 1180 static void s3cmci_request(struct mmc_host *mmc, struct mmc_request *mrq) 1183 1181 { 1184 1182 struct s3cmci_host *host = mmc_priv(mmc); ··· 1174 1198 host->cmd_is_stop = 0; 1175 1199 host->mrq = mrq; 1176 1200 1177 - if (s3cmci_card_present(mmc) == 0) { 1201 + if (mmc_gpio_get_cd(mmc) == 0) { 1178 1202 dbg(host, dbg_err, "%s: no medium present\n", __func__); 1179 1203 host->mrq->cmd->error = -ENOMEDIUM; 1180 1204 mmc_request_done(mmc, mrq); ··· 1218 1242 case MMC_POWER_ON: 1219 1243 case MMC_POWER_UP: 1220 1244 /* Configure GPE5...GPE10 pins in SD mode */ 1221 - s3c_gpio_cfgall_range(S3C2410_GPE(5), 6, S3C_GPIO_SFN(2), 1222 - S3C_GPIO_PULL_NONE); 1245 + if (!host->pdev->dev.of_node) 1246 + s3c_gpio_cfgall_range(S3C2410_GPE(5), 6, S3C_GPIO_SFN(2), 1247 + S3C_GPIO_PULL_NONE); 1223 1248 1224 1249 if (host->pdata->set_power) 1225 1250 host->pdata->set_power(ios->power_mode, ios->vdd); ··· 1232 1255 1233 1256 case MMC_POWER_OFF: 1234 1257 default: 1235 - gpio_direction_output(S3C2410_GPE(5), 0); 1258 + if (!host->pdev->dev.of_node) 1259 + gpio_direction_output(S3C2410_GPE(5), 0); 1236 1260 1237 1261 if (host->is2440) 1238 1262 mci_con |= S3C2440_SDICON_SDRESET; ··· 1271 1293 1272 1294 con |= S3C2440_SDICON_SDRESET; 1273 1295 writel(con, host->base + S3C2410_SDICON); 1274 - } 1275 - 1276 - static int s3cmci_get_ro(struct mmc_host *mmc) 1277 - { 1278 - struct s3cmci_host *host = mmc_priv(mmc); 1279 - struct s3c24xx_mci_pdata *pdata = host->pdata; 1280 - int ret; 1281 - 1282 - if (pdata->no_wprotect) 1283 - return 0; 1284 - 1285 - ret = gpio_get_value(pdata->gpio_wprotect) ? 
1 : 0; 1286 - ret ^= pdata->wprotect_invert; 1287 - 1288 - return ret; 1289 1296 } 1290 1297 1291 1298 static void s3cmci_enable_sdio_irq(struct mmc_host *mmc, int enable) ··· 1316 1353 static struct mmc_host_ops s3cmci_ops = { 1317 1354 .request = s3cmci_request, 1318 1355 .set_ios = s3cmci_set_ios, 1319 - .get_ro = s3cmci_get_ro, 1320 - .get_cd = s3cmci_card_present, 1356 + .get_ro = mmc_gpio_get_ro, 1357 + .get_cd = mmc_gpio_get_cd, 1321 1358 .enable_sdio_irq = s3cmci_enable_sdio_irq, 1322 1359 }; 1323 1360 ··· 1508 1545 1509 1546 #endif /* CONFIG_DEBUG_FS */ 1510 1547 1511 - static int s3cmci_probe(struct platform_device *pdev) 1548 + static int s3cmci_probe_pdata(struct s3cmci_host *host) 1512 1549 { 1513 - struct s3cmci_host *host; 1514 - struct mmc_host *mmc; 1515 - int ret; 1516 - int is2440; 1517 - int i; 1550 + struct platform_device *pdev = host->pdev; 1551 + struct mmc_host *mmc = host->mmc; 1552 + struct s3c24xx_mci_pdata *pdata; 1553 + int i, ret; 1518 1554 1519 - is2440 = platform_get_device_id(pdev)->driver_data; 1520 - 1521 - mmc = mmc_alloc_host(sizeof(struct s3cmci_host), &pdev->dev); 1522 - if (!mmc) { 1523 - ret = -ENOMEM; 1524 - goto probe_out; 1525 - } 1555 + host->is2440 = platform_get_device_id(pdev)->driver_data; 1526 1556 1527 1557 for (i = S3C2410_GPE(5); i <= S3C2410_GPE(10); i++) { 1528 1558 ret = gpio_request(i, dev_name(&pdev->dev)); ··· 1525 1569 for (i--; i >= S3C2410_GPE(5); i--) 1526 1570 gpio_free(i); 1527 1571 1528 - goto probe_free_host; 1572 + return ret; 1529 1573 } 1574 + } 1575 + 1576 + if (!pdev->dev.platform_data) 1577 + pdev->dev.platform_data = &s3cmci_def_pdata; 1578 + 1579 + pdata = pdev->dev.platform_data; 1580 + 1581 + if (pdata->no_wprotect) 1582 + mmc->caps2 |= MMC_CAP2_NO_WRITE_PROTECT; 1583 + 1584 + if (pdata->no_detect) 1585 + mmc->caps |= MMC_CAP_NEEDS_POLL; 1586 + 1587 + if (pdata->wprotect_invert) 1588 + mmc->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH; 1589 + 1590 + if (pdata->detect_invert) 1591 + mmc->caps2 |= 
MMC_CAP2_CD_ACTIVE_HIGH; 1592 + 1593 + if (gpio_is_valid(pdata->gpio_detect)) { 1594 + ret = mmc_gpio_request_cd(mmc, pdata->gpio_detect, 0); 1595 + if (ret) { 1596 + dev_err(&pdev->dev, "error requesting GPIO for CD %d\n", 1597 + ret); 1598 + return ret; 1599 + } 1600 + } 1601 + 1602 + if (gpio_is_valid(pdata->gpio_wprotect)) { 1603 + ret = mmc_gpio_request_ro(mmc, pdata->gpio_wprotect); 1604 + if (ret) { 1605 + dev_err(&pdev->dev, "error requesting GPIO for WP %d\n", 1606 + ret); 1607 + return ret; 1608 + } 1609 + } 1610 + 1611 + return 0; 1612 + } 1613 + 1614 + static int s3cmci_probe_dt(struct s3cmci_host *host) 1615 + { 1616 + struct platform_device *pdev = host->pdev; 1617 + struct s3c24xx_mci_pdata *pdata; 1618 + struct mmc_host *mmc = host->mmc; 1619 + int ret; 1620 + 1621 + host->is2440 = (int) of_device_get_match_data(&pdev->dev); 1622 + 1623 + ret = mmc_of_parse(mmc); 1624 + if (ret) 1625 + return ret; 1626 + 1627 + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); 1628 + if (!pdata) 1629 + return -ENOMEM; 1630 + 1631 + pdev->dev.platform_data = pdata; 1632 + 1633 + return 0; 1634 + } 1635 + 1636 + static int s3cmci_probe(struct platform_device *pdev) 1637 + { 1638 + struct s3cmci_host *host; 1639 + struct mmc_host *mmc; 1640 + int ret; 1641 + int i; 1642 + 1643 + mmc = mmc_alloc_host(sizeof(struct s3cmci_host), &pdev->dev); 1644 + if (!mmc) { 1645 + ret = -ENOMEM; 1646 + goto probe_out; 1530 1647 } 1531 1648 1532 1649 host = mmc_priv(mmc); 1533 1650 host->mmc = mmc; 1534 1651 host->pdev = pdev; 1535 - host->is2440 = is2440; 1652 + 1653 + if (pdev->dev.of_node) 1654 + ret = s3cmci_probe_dt(host); 1655 + else 1656 + ret = s3cmci_probe_pdata(host); 1657 + 1658 + if (ret) 1659 + goto probe_free_host; 1536 1660 1537 1661 host->pdata = pdev->dev.platform_data; 1538 - if (!host->pdata) { 1539 - pdev->dev.platform_data = &s3cmci_def_pdata; 1540 - host->pdata = &s3cmci_def_pdata; 1541 - } 1542 1662 1543 1663 spin_lock_init(&host->complete_lock); 1544 
1664 tasklet_init(&host->pio_tasklet, pio_tasklet, (unsigned long) host); 1545 1665 1546 - if (is2440) { 1666 + if (host->is2440) { 1547 1667 host->sdiimsk = S3C2440_SDIIMSK; 1548 1668 host->sdidata = S3C2440_SDIDATA; 1549 1669 host->clk_div = 1; ··· 1677 1645 disable_irq(host->irq); 1678 1646 host->irq_state = false; 1679 1647 1680 - if (!host->pdata->no_detect) { 1681 - ret = gpio_request(host->pdata->gpio_detect, "s3cmci detect"); 1682 - if (ret) { 1683 - dev_err(&pdev->dev, "failed to get detect gpio\n"); 1684 - goto probe_free_irq; 1685 - } 1686 - 1687 - host->irq_cd = gpio_to_irq(host->pdata->gpio_detect); 1688 - 1689 - if (host->irq_cd >= 0) { 1690 - if (request_irq(host->irq_cd, s3cmci_irq_cd, 1691 - IRQF_TRIGGER_RISING | 1692 - IRQF_TRIGGER_FALLING, 1693 - DRIVER_NAME, host)) { 1694 - dev_err(&pdev->dev, 1695 - "can't get card detect irq.\n"); 1696 - ret = -ENOENT; 1697 - goto probe_free_gpio_cd; 1698 - } 1699 - } else { 1700 - dev_warn(&pdev->dev, 1701 - "host detect has no irq available\n"); 1702 - gpio_direction_input(host->pdata->gpio_detect); 1703 - } 1704 - } else 1705 - host->irq_cd = -1; 1706 - 1707 - if (!host->pdata->no_wprotect) { 1708 - ret = gpio_request(host->pdata->gpio_wprotect, "s3cmci wp"); 1709 - if (ret) { 1710 - dev_err(&pdev->dev, "failed to get writeprotect\n"); 1711 - goto probe_free_irq_cd; 1712 - } 1713 - 1714 - gpio_direction_input(host->pdata->gpio_wprotect); 1715 - } 1716 - 1717 1648 /* Depending on the dma state, get a DMA channel to use. 
*/ 1718 1649 1719 1650 if (s3cmci_host_usedma(host)) { ··· 1684 1689 ret = PTR_ERR_OR_ZERO(host->dma); 1685 1690 if (ret) { 1686 1691 dev_err(&pdev->dev, "cannot get DMA channel.\n"); 1687 - goto probe_free_gpio_wp; 1692 + goto probe_free_irq; 1688 1693 } 1689 1694 } 1690 1695 ··· 1763 1768 if (s3cmci_host_usedma(host)) 1764 1769 dma_release_channel(host->dma); 1765 1770 1766 - probe_free_gpio_wp: 1767 - if (!host->pdata->no_wprotect) 1768 - gpio_free(host->pdata->gpio_wprotect); 1769 - 1770 - probe_free_gpio_cd: 1771 - if (!host->pdata->no_detect) 1772 - gpio_free(host->pdata->gpio_detect); 1773 - 1774 - probe_free_irq_cd: 1775 - if (host->irq_cd >= 0) 1776 - free_irq(host->irq_cd, host); 1777 - 1778 1771 probe_free_irq: 1779 1772 free_irq(host->irq, host); 1780 1773 ··· 1773 1790 release_mem_region(host->mem->start, resource_size(host->mem)); 1774 1791 1775 1792 probe_free_gpio: 1776 - for (i = S3C2410_GPE(5); i <= S3C2410_GPE(10); i++) 1777 - gpio_free(i); 1793 + if (!pdev->dev.of_node) 1794 + for (i = S3C2410_GPE(5); i <= S3C2410_GPE(10); i++) 1795 + gpio_free(i); 1778 1796 1779 1797 probe_free_host: 1780 1798 mmc_free_host(mmc); ··· 1802 1818 { 1803 1819 struct mmc_host *mmc = platform_get_drvdata(pdev); 1804 1820 struct s3cmci_host *host = mmc_priv(mmc); 1805 - struct s3c24xx_mci_pdata *pd = host->pdata; 1806 1821 int i; 1807 1822 1808 1823 s3cmci_shutdown(pdev); ··· 1815 1832 1816 1833 free_irq(host->irq, host); 1817 1834 1818 - if (!pd->no_wprotect) 1819 - gpio_free(pd->gpio_wprotect); 1820 - 1821 - if (!pd->no_detect) 1822 - gpio_free(pd->gpio_detect); 1823 - 1824 - for (i = S3C2410_GPE(5); i <= S3C2410_GPE(10); i++) 1825 - gpio_free(i); 1826 - 1835 + if (!pdev->dev.of_node) 1836 + for (i = S3C2410_GPE(5); i <= S3C2410_GPE(10); i++) 1837 + gpio_free(i); 1827 1838 1828 1839 iounmap(host->base); 1829 1840 release_mem_region(host->mem->start, resource_size(host->mem)); ··· 1825 1848 mmc_free_host(mmc); 1826 1849 return 0; 1827 1850 } 1851 + 1852 + static 
const struct of_device_id s3cmci_dt_match[] = { 1853 + { 1854 + .compatible = "samsung,s3c2410-sdi", 1855 + .data = (void *)0, 1856 + }, 1857 + { 1858 + .compatible = "samsung,s3c2412-sdi", 1859 + .data = (void *)1, 1860 + }, 1861 + { 1862 + .compatible = "samsung,s3c2440-sdi", 1863 + .data = (void *)1, 1864 + }, 1865 + { /* sentinel */ }, 1866 + }; 1867 + MODULE_DEVICE_TABLE(of, s3cmci_dt_match); 1828 1868 1829 1869 static const struct platform_device_id s3cmci_driver_ids[] = { 1830 1870 { ··· 1862 1868 static struct platform_driver s3cmci_driver = { 1863 1869 .driver = { 1864 1870 .name = "s3c-sdi", 1871 + .of_match_table = s3cmci_dt_match, 1865 1872 }, 1866 1873 .id_table = s3cmci_driver_ids, 1867 1874 .probe = s3cmci_probe,
+12 -6
drivers/mmc/host/sdhci-acpi.c
··· 263 263 264 264 /* Platform specific code during sd probe slot goes here */ 265 265 266 - if (hid && !strcmp(hid, "80865ACA")) { 266 + if (hid && !strcmp(hid, "80865ACA")) 267 267 host->mmc_host_ops.get_cd = bxt_get_cd; 268 - host->mmc->caps |= MMC_CAP_AGGRESSIVE_PM; 269 - } 270 268 271 269 return 0; 272 270 } ··· 300 302 .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 301 303 .quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON | 302 304 SDHCI_QUIRK2_STOP_WITH_TC, 303 - .caps = MMC_CAP_WAIT_WHILE_BUSY, 305 + .caps = MMC_CAP_WAIT_WHILE_BUSY | MMC_CAP_AGGRESSIVE_PM, 304 306 .probe_slot = sdhci_acpi_sd_probe_slot, 305 307 }; 306 308 ··· 522 524 static int sdhci_acpi_suspend(struct device *dev) 523 525 { 524 526 struct sdhci_acpi_host *c = dev_get_drvdata(dev); 527 + struct sdhci_host *host = c->host; 525 528 526 - return sdhci_suspend_host(c->host); 529 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 530 + mmc_retune_needed(host->mmc); 531 + 532 + return sdhci_suspend_host(host); 527 533 } 528 534 529 535 static int sdhci_acpi_resume(struct device *dev) ··· 546 544 static int sdhci_acpi_runtime_suspend(struct device *dev) 547 545 { 548 546 struct sdhci_acpi_host *c = dev_get_drvdata(dev); 547 + struct sdhci_host *host = c->host; 549 548 550 - return sdhci_runtime_suspend_host(c->host); 549 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 550 + mmc_retune_needed(host->mmc); 551 + 552 + return sdhci_runtime_suspend_host(host); 551 553 } 552 554 553 555 static int sdhci_acpi_runtime_resume(struct device *dev)
+3
drivers/mmc/host/sdhci-brcmstb.c
··· 29 29 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 30 30 int res; 31 31 32 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 33 + mmc_retune_needed(host->mmc); 34 + 32 35 res = sdhci_suspend_host(host); 33 36 if (res) 34 37 return res;
+110 -19
drivers/mmc/host/sdhci-cadence.c
··· 18 18 #include <linux/module.h> 19 19 #include <linux/mmc/host.h> 20 20 #include <linux/mmc/mmc.h> 21 + #include <linux/of.h> 21 22 22 23 #include "sdhci-pltfm.h" 23 24 ··· 41 40 #define SDHCI_CDNS_HRS06_MODE_MMC_DDR 0x3 42 41 #define SDHCI_CDNS_HRS06_MODE_MMC_HS200 0x4 43 42 #define SDHCI_CDNS_HRS06_MODE_MMC_HS400 0x5 43 + #define SDHCI_CDNS_HRS06_MODE_MMC_HS400ES 0x6 44 44 45 45 /* SRS - Slot Register Set (SDHCI-compatible) */ 46 46 #define SDHCI_CDNS_SRS_BASE 0x200 ··· 56 54 #define SDHCI_CDNS_PHY_DLY_EMMC_LEGACY 0x06 57 55 #define SDHCI_CDNS_PHY_DLY_EMMC_SDR 0x07 58 56 #define SDHCI_CDNS_PHY_DLY_EMMC_DDR 0x08 57 + #define SDHCI_CDNS_PHY_DLY_SDCLK 0x0b 58 + #define SDHCI_CDNS_PHY_DLY_HSMMC 0x0c 59 + #define SDHCI_CDNS_PHY_DLY_STROBE 0x0d 59 60 60 61 /* 61 62 * The tuned val register is 6 bit-wide, but not the whole of the range is ··· 69 64 70 65 struct sdhci_cdns_priv { 71 66 void __iomem *hrs_addr; 67 + bool enhanced_strobe; 72 68 }; 73 69 74 - static void sdhci_cdns_write_phy_reg(struct sdhci_cdns_priv *priv, 75 - u8 addr, u8 data) 70 + struct sdhci_cdns_phy_cfg { 71 + const char *property; 72 + u8 addr; 73 + }; 74 + 75 + static const struct sdhci_cdns_phy_cfg sdhci_cdns_phy_cfgs[] = { 76 + { "cdns,phy-input-delay-sd-highspeed", SDHCI_CDNS_PHY_DLY_SD_HS, }, 77 + { "cdns,phy-input-delay-legacy", SDHCI_CDNS_PHY_DLY_SD_DEFAULT, }, 78 + { "cdns,phy-input-delay-sd-uhs-sdr12", SDHCI_CDNS_PHY_DLY_UHS_SDR12, }, 79 + { "cdns,phy-input-delay-sd-uhs-sdr25", SDHCI_CDNS_PHY_DLY_UHS_SDR25, }, 80 + { "cdns,phy-input-delay-sd-uhs-sdr50", SDHCI_CDNS_PHY_DLY_UHS_SDR50, }, 81 + { "cdns,phy-input-delay-sd-uhs-ddr50", SDHCI_CDNS_PHY_DLY_UHS_DDR50, }, 82 + { "cdns,phy-input-delay-mmc-highspeed", SDHCI_CDNS_PHY_DLY_EMMC_SDR, }, 83 + { "cdns,phy-input-delay-mmc-ddr", SDHCI_CDNS_PHY_DLY_EMMC_DDR, }, 84 + { "cdns,phy-dll-delay-sdclk", SDHCI_CDNS_PHY_DLY_SDCLK, }, 85 + { "cdns,phy-dll-delay-sdclk-hsmmc", SDHCI_CDNS_PHY_DLY_HSMMC, }, 86 + { "cdns,phy-dll-delay-strobe", 
SDHCI_CDNS_PHY_DLY_STROBE, }, 87 + }; 88 + 89 + static int sdhci_cdns_write_phy_reg(struct sdhci_cdns_priv *priv, 90 + u8 addr, u8 data) 76 91 { 77 92 void __iomem *reg = priv->hrs_addr + SDHCI_CDNS_HRS04; 78 93 u32 tmp; 94 + int ret; 79 95 80 96 tmp = (data << SDHCI_CDNS_HRS04_WDATA_SHIFT) | 81 97 (addr << SDHCI_CDNS_HRS04_ADDR_SHIFT); ··· 105 79 tmp |= SDHCI_CDNS_HRS04_WR; 106 80 writel(tmp, reg); 107 81 82 + ret = readl_poll_timeout(reg, tmp, tmp & SDHCI_CDNS_HRS04_ACK, 0, 10); 83 + if (ret) 84 + return ret; 85 + 108 86 tmp &= ~SDHCI_CDNS_HRS04_WR; 109 87 writel(tmp, reg); 88 + 89 + return 0; 110 90 } 111 91 112 - static void sdhci_cdns_phy_init(struct sdhci_cdns_priv *priv) 92 + static int sdhci_cdns_phy_init(struct device_node *np, 93 + struct sdhci_cdns_priv *priv) 113 94 { 114 - sdhci_cdns_write_phy_reg(priv, SDHCI_CDNS_PHY_DLY_SD_HS, 4); 115 - sdhci_cdns_write_phy_reg(priv, SDHCI_CDNS_PHY_DLY_SD_DEFAULT, 4); 116 - sdhci_cdns_write_phy_reg(priv, SDHCI_CDNS_PHY_DLY_EMMC_LEGACY, 9); 117 - sdhci_cdns_write_phy_reg(priv, SDHCI_CDNS_PHY_DLY_EMMC_SDR, 2); 118 - sdhci_cdns_write_phy_reg(priv, SDHCI_CDNS_PHY_DLY_EMMC_DDR, 3); 95 + u32 val; 96 + int ret, i; 97 + 98 + for (i = 0; i < ARRAY_SIZE(sdhci_cdns_phy_cfgs); i++) { 99 + ret = of_property_read_u32(np, sdhci_cdns_phy_cfgs[i].property, 100 + &val); 101 + if (ret) 102 + continue; 103 + 104 + ret = sdhci_cdns_write_phy_reg(priv, 105 + sdhci_cdns_phy_cfgs[i].addr, 106 + val); 107 + if (ret) 108 + return ret; 109 + } 110 + 111 + return 0; 119 112 } 120 113 121 114 static inline void *sdhci_cdns_priv(struct sdhci_host *host) ··· 148 103 { 149 104 /* 150 105 * Cadence's spec says the Timeout Clock Frequency is the same as the 151 - * Base Clock Frequency. Divide it by 1000 to return a value in kHz. 106 + * Base Clock Frequency. 
152 107 */ 153 - return host->max_clk / 1000; 108 + return host->max_clk; 109 + } 110 + 111 + static void sdhci_cdns_set_emmc_mode(struct sdhci_cdns_priv *priv, u32 mode) 112 + { 113 + u32 tmp; 114 + 115 + /* The speed mode for eMMC is selected by HRS06 register */ 116 + tmp = readl(priv->hrs_addr + SDHCI_CDNS_HRS06); 117 + tmp &= ~SDHCI_CDNS_HRS06_MODE_MASK; 118 + tmp |= mode; 119 + writel(tmp, priv->hrs_addr + SDHCI_CDNS_HRS06); 120 + } 121 + 122 + static u32 sdhci_cdns_get_emmc_mode(struct sdhci_cdns_priv *priv) 123 + { 124 + u32 tmp; 125 + 126 + tmp = readl(priv->hrs_addr + SDHCI_CDNS_HRS06); 127 + return tmp & SDHCI_CDNS_HRS06_MODE_MASK; 154 128 } 155 129 156 130 static void sdhci_cdns_set_uhs_signaling(struct sdhci_host *host, 157 131 unsigned int timing) 158 132 { 159 133 struct sdhci_cdns_priv *priv = sdhci_cdns_priv(host); 160 - u32 mode, tmp; 134 + u32 mode; 161 135 162 136 switch (timing) { 163 137 case MMC_TIMING_MMC_HS: ··· 189 125 mode = SDHCI_CDNS_HRS06_MODE_MMC_HS200; 190 126 break; 191 127 case MMC_TIMING_MMC_HS400: 192 - mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400; 128 + if (priv->enhanced_strobe) 129 + mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400ES; 130 + else 131 + mode = SDHCI_CDNS_HRS06_MODE_MMC_HS400; 193 132 break; 194 133 default: 195 134 mode = SDHCI_CDNS_HRS06_MODE_SD; 196 135 break; 197 136 } 198 137 199 - /* The speed mode for eMMC is selected by HRS06 register */ 200 - tmp = readl(priv->hrs_addr + SDHCI_CDNS_HRS06); 201 - tmp &= ~SDHCI_CDNS_HRS06_MODE_MASK; 202 - tmp |= mode; 203 - writel(tmp, priv->hrs_addr + SDHCI_CDNS_HRS06); 138 + sdhci_cdns_set_emmc_mode(priv, mode); 204 139 205 140 /* For SD, fall back to the default handler */ 206 141 if (mode == SDHCI_CDNS_HRS06_MODE_SD) ··· 276 213 return sdhci_cdns_set_tune_val(host, end_of_streak - max_streak / 2); 277 214 } 278 215 216 + static void sdhci_cdns_hs400_enhanced_strobe(struct mmc_host *mmc, 217 + struct mmc_ios *ios) 218 + { 219 + struct sdhci_host *host = mmc_priv(mmc); 220 + struct 
sdhci_cdns_priv *priv = sdhci_cdns_priv(host); 221 + u32 mode; 222 + 223 + priv->enhanced_strobe = ios->enhanced_strobe; 224 + 225 + mode = sdhci_cdns_get_emmc_mode(priv); 226 + 227 + if (mode == SDHCI_CDNS_HRS06_MODE_MMC_HS400 && ios->enhanced_strobe) 228 + sdhci_cdns_set_emmc_mode(priv, 229 + SDHCI_CDNS_HRS06_MODE_MMC_HS400ES); 230 + 231 + if (mode == SDHCI_CDNS_HRS06_MODE_MMC_HS400ES && !ios->enhanced_strobe) 232 + sdhci_cdns_set_emmc_mode(priv, 233 + SDHCI_CDNS_HRS06_MODE_MMC_HS400); 234 + } 235 + 279 236 static int sdhci_cdns_probe(struct platform_device *pdev) 280 237 { 281 238 struct sdhci_host *host; ··· 303 220 struct sdhci_cdns_priv *priv; 304 221 struct clk *clk; 305 222 int ret; 223 + struct device *dev = &pdev->dev; 306 224 307 - clk = devm_clk_get(&pdev->dev, NULL); 225 + clk = devm_clk_get(dev, NULL); 308 226 if (IS_ERR(clk)) 309 227 return PTR_ERR(clk); 310 228 ··· 324 240 325 241 priv = sdhci_cdns_priv(host); 326 242 priv->hrs_addr = host->ioaddr; 243 + priv->enhanced_strobe = false; 327 244 host->ioaddr += SDHCI_CDNS_SRS_BASE; 328 245 host->mmc_host_ops.execute_tuning = sdhci_cdns_execute_tuning; 246 + host->mmc_host_ops.hs400_enhanced_strobe = 247 + sdhci_cdns_hs400_enhanced_strobe; 248 + 249 + sdhci_get_of_property(pdev); 329 250 330 251 ret = mmc_of_parse(host->mmc); 331 252 if (ret) 332 253 goto free; 333 254 334 - sdhci_cdns_phy_init(priv); 255 + ret = sdhci_cdns_phy_init(dev->of_node, priv); 256 + if (ret) 257 + goto free; 335 258 336 259 ret = sdhci_add_host(host); 337 260 if (ret)
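The HRS06 update above is a plain read-modify-write of a small speed-mode field, and the new `hs400_enhanced_strobe` callback just flips that field between HS400 and HS400ES depending on `ios->enhanced_strobe`. A minimal userspace sketch of that logic, run on a plain variable instead of a real MMIO register; the mode values are copied from the hunk, but the 3-bit width of `SDHCI_CDNS_HRS06_MODE_MASK` is an assumption since the mask's definition is outside this hunk:

```c
#include <assert.h>
#include <stdint.h>

/* Mode values from the hunk above; the 3-bit mask is an assumption. */
#define HRS06_MODE_MASK        0x7u
#define HRS06_MODE_MMC_HS400   0x5u
#define HRS06_MODE_MMC_HS400ES 0x6u

/* Read-modify-write of the speed-mode field, as sdhci_cdns_set_emmc_mode()
 * does on the real HRS06 register. */
static uint32_t set_emmc_mode(uint32_t hrs06, uint32_t mode)
{
	hrs06 &= ~HRS06_MODE_MASK;
	hrs06 |= mode;
	return hrs06;
}

/* Toggle HS400 <-> HS400ES the way the enhanced-strobe callback does:
 * only switch when the current mode and the requested strobe state disagree. */
static uint32_t toggle_enhanced_strobe(uint32_t hrs06, int enhanced_strobe)
{
	uint32_t mode = hrs06 & HRS06_MODE_MASK;

	if (mode == HRS06_MODE_MMC_HS400 && enhanced_strobe)
		return set_emmc_mode(hrs06, HRS06_MODE_MMC_HS400ES);
	if (mode == HRS06_MODE_MMC_HS400ES && !enhanced_strobe)
		return set_emmc_mode(hrs06, HRS06_MODE_MMC_HS400);
	return hrs06;
}
```

Note that modes other than HS400/HS400ES pass through unchanged, matching the two guarded `if`s in the driver.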
+32
drivers/mmc/host/sdhci-esdhc-imx.c
··· 889 889 } 890 890 } 891 891 892 + static void esdhc_reset_tuning(struct sdhci_host *host) 893 + { 894 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 895 + struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host); 896 + u32 ctrl; 897 + 898 + /* Reset the tuning circuit */ 899 + if (esdhc_is_usdhc(imx_data)) { 900 + if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING) { 901 + ctrl = readl(host->ioaddr + ESDHC_MIX_CTRL); 902 + ctrl &= ~ESDHC_MIX_CTRL_SMPCLK_SEL; 903 + ctrl &= ~ESDHC_MIX_CTRL_FBCLK_SEL; 904 + writel(ctrl, host->ioaddr + ESDHC_MIX_CTRL); 905 + writel(0, host->ioaddr + ESDHC_TUNE_CTRL_STATUS); 906 + } else if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) { 907 + ctrl = readl(host->ioaddr + SDHCI_ACMD12_ERR); 908 + ctrl &= ~ESDHC_MIX_CTRL_SMPCLK_SEL; 909 + writel(ctrl, host->ioaddr + SDHCI_ACMD12_ERR); 910 + } 911 + } 912 + } 913 + 892 914 static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing) 893 915 { 894 916 u32 m; ··· 953 931 /* update clock after enable DDR for strobe DLL lock */ 954 932 host->ops->set_clock(host, host->clock); 955 933 esdhc_set_strobe_dll(host); 934 + break; 935 + case MMC_TIMING_LEGACY: 936 + default: 937 + esdhc_reset_tuning(host); 956 938 break; 957 939 } 958 940 ··· 1349 1323 { 1350 1324 struct sdhci_host *host = dev_get_drvdata(dev); 1351 1325 1326 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 1327 + mmc_retune_needed(host->mmc); 1328 + 1352 1329 return sdhci_suspend_host(host); 1353 1330 } ··· 1375 1346 int ret; 1376 1347 1377 1348 ret = sdhci_runtime_suspend_host(host); 1349 + 1350 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 1351 + mmc_retune_needed(host->mmc); 1378 1352 1379 1353 if (!sdhci_sdio_irq_enabled(host)) { 1380 1354 clk_disable_unprepare(imx_data->clk_per);
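`esdhc_reset_tuning()` above just clears the sample/feedback clock-select bits so the next mode switch starts from an untuned state. A sketch of the manual-tuning branch on a plain variable; the bit positions here are hypothetical, since the hunk only names the `ESDHC_MIX_CTRL_*` flags without their values:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit positions; the hunk only names the flags. */
#define MIX_CTRL_SMPCLK_SEL (1u << 23)
#define MIX_CTRL_FBCLK_SEL  (1u << 24)

/* Manual-tuning variant: drop both clock-select bits before writing the
 * register back, as esdhc_reset_tuning() does for ESDHC_FLAG_MAN_TUNING. */
static uint32_t reset_man_tuning(uint32_t mix_ctrl)
{
	mix_ctrl &= ~MIX_CTRL_SMPCLK_SEL;
	mix_ctrl &= ~MIX_CTRL_FBCLK_SEL;
	return mix_ctrl;
}
```

The standard-tuning branch is the same pattern with only the sample-clock bit cleared, in the `SDHCI_ACMD12_ERR` register instead.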
+7
drivers/mmc/host/sdhci-esdhc.h
··· 37 37 38 38 /* Protocol Control Register */ 39 39 #define ESDHC_PROCTL 0x28 40 + #define ESDHC_VOLT_SEL 0x00000400 40 41 #define ESDHC_CTRL_4BITBUS (0x1 << 1) 41 42 #define ESDHC_CTRL_8BITBUS (0x2 << 1) 42 43 #define ESDHC_CTRL_BUSWIDTH_MASK (0x3 << 1) ··· 53 52 #define ESDHC_CLOCK_HCKEN 0x00000002 54 53 #define ESDHC_CLOCK_IPGEN 0x00000001 55 54 55 + /* Tuning Block Control Register */ 56 + #define ESDHC_TBCTL 0x120 57 + #define ESDHC_TB_EN 0x00000004 58 + 56 59 /* Control Register for DMA transfer */ 57 60 #define ESDHC_DMA_SYSCTL 0x40c 61 + #define ESDHC_PERIPHERAL_CLK_SEL 0x00080000 62 + #define ESDHC_FLUSH_ASYNC_FIFO 0x00040000 58 63 #define ESDHC_DMA_SNOOP 0x00000040 59 64 60 65 #endif /* _DRIVERS_MMC_SDHCI_ESDHC_H */
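The new `ESDHC_TBCTL`/`ESDHC_TB_EN` and `ESDHC_PERIPHERAL_CLK_SEL` defines are consumed as read-modify-write updates (the tuning path sets `TB_EN` while the clock is gated; `esdhc_init` selects the peripheral clock in `ESDHC_DMA_SYSCTL`). A sketch of those updates using the bit values from the header above:

```c
#include <assert.h>
#include <stdint.h>

/* Values copied from the header hunk above. */
#define ESDHC_TB_EN              0x00000004u
#define ESDHC_PERIPHERAL_CLK_SEL 0x00080000u

/* Route commands through the tuning block, as esdhc_execute_tuning() does
 * on ESDHC_TBCTL. */
static uint32_t enable_tuning_block(uint32_t tbctl)
{
	return tbctl | ESDHC_TB_EN;
}

/* Select the peripheral clock as the eSDHC clock source, as esdhc_init()
 * does on ESDHC_DMA_SYSCTL. */
static uint32_t select_peripheral_clk(uint32_t dma_sysctl)
{
	return dma_sysctl | ESDHC_PERIPHERAL_CLK_SEL;
}
```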
-8
drivers/mmc/host/sdhci-msm.c
··· 991 991 mmc_hostname(host->mmc), host->clock, uhs, ctrl_2); 992 992 sdhci_writew(host, ctrl_2, SDHCI_HOST_CONTROL2); 993 993 994 - spin_unlock_irq(&host->lock); 995 - 996 994 if (mmc->ios.timing == MMC_TIMING_MMC_HS400) 997 995 sdhci_msm_hs400(host, &mmc->ios); 998 - 999 - spin_lock_irq(&host->lock); 1000 996 } 1001 997 1002 998 static void sdhci_msm_voltage_switch(struct sdhci_host *host) ··· 1085 1089 goto out; 1086 1090 } 1087 1091 1088 - spin_unlock_irq(&host->lock); 1089 - 1090 1092 sdhci_msm_hc_select_mode(host); 1091 1093 1092 1094 msm_set_clock_rate_for_bus_mode(host, clock); 1093 - 1094 - spin_lock_irq(&host->lock); 1095 1095 out: 1096 1096 __sdhci_msm_set_clock(host, clock); 1097 1097 }
+4 -22
drivers/mmc/host/sdhci-of-arasan.c
··· 157 157 return ret; 158 158 } 159 159 160 - static unsigned int sdhci_arasan_get_timeout_clock(struct sdhci_host *host) 161 - { 162 - unsigned long freq; 163 - struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 164 - 165 - /* SDHCI timeout clock is in kHz */ 166 - freq = DIV_ROUND_UP(clk_get_rate(pltfm_host->clk), 1000); 167 - 168 - /* or in MHz */ 169 - if (host->caps & SDHCI_TIMEOUT_CLK_UNIT) 170 - freq = DIV_ROUND_UP(freq, 1000); 171 - 172 - return freq; 173 - } 174 - 175 160 static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock) 176 161 { 177 162 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); ··· 179 194 * through low speeds without power cycling. 180 195 */ 181 196 sdhci_set_clock(host, host->max_clk); 182 - spin_unlock_irq(&host->lock); 183 197 phy_power_on(sdhci_arasan->phy); 184 - spin_lock_irq(&host->lock); 185 198 sdhci_arasan->is_phy_on = true; 186 199 187 200 /* ··· 198 215 } 199 216 200 217 if (ctrl_phy && sdhci_arasan->is_phy_on) { 201 - spin_unlock_irq(&host->lock); 202 218 phy_power_off(sdhci_arasan->phy); 203 - spin_lock_irq(&host->lock); 204 219 sdhci_arasan->is_phy_on = false; 205 220 } 206 221 207 222 sdhci_set_clock(host, clock); 208 223 209 224 if (ctrl_phy) { 210 - spin_unlock_irq(&host->lock); 211 225 phy_power_on(sdhci_arasan->phy); 212 - spin_lock_irq(&host->lock); 213 226 sdhci_arasan->is_phy_on = true; 214 227 } 215 228 } ··· 265 286 static struct sdhci_ops sdhci_arasan_ops = { 266 287 .set_clock = sdhci_arasan_set_clock, 267 288 .get_max_clock = sdhci_pltfm_clk_get_max_clock, 268 - .get_timeout_clock = sdhci_arasan_get_timeout_clock, 289 + .get_timeout_clock = sdhci_pltfm_clk_get_max_clock, 269 290 .set_bus_width = sdhci_set_bus_width, 270 291 .reset = sdhci_arasan_reset, 271 292 .set_uhs_signaling = sdhci_set_uhs_signaling, ··· 293 314 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 294 315 struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host); 295 316 int ret; 317 + 
318 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 319 + mmc_retune_needed(host->mmc); 296 320 297 321 ret = sdhci_suspend_host(host); 298 322 if (ret)
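The deleted `sdhci_arasan_get_timeout_clock()` converted the clock rate from Hz to kHz (and on to MHz when `SDHCI_TIMEOUT_CLK_UNIT` is set) with round-up division; since the sdhci core now takes the timeout clock in Hz (the cadence hunk drops its own `/ 1000` for the same reason), the generic `sdhci_pltfm_clk_get_max_clock` can be plugged in directly. For reference, the rounding the removed helper performed:

```c
#include <assert.h>

/* Round-up integer division, like the kernel's DIV_ROUND_UP macro. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* What the removed sdhci_arasan_get_timeout_clock() computed: the clock
 * rate in kHz, or in MHz when the controller reports its timeout clock
 * unit as MHz. */
static unsigned long timeout_clock_old(unsigned long rate_hz, int unit_is_mhz)
{
	unsigned long freq = DIV_ROUND_UP(rate_hz, 1000UL); /* Hz -> kHz */

	if (unit_is_mhz)
		freq = DIV_ROUND_UP(freq, 1000UL);          /* kHz -> MHz */
	return freq;
}
```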
+3 -2
drivers/mmc/host/sdhci-of-at91.c
··· 98 98 if (!IS_ERR(host->mmc->supply.vmmc)) { 99 99 struct mmc_host *mmc = host->mmc; 100 100 101 - spin_unlock_irq(&host->lock); 102 101 mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd); 103 - spin_lock_irq(&host->lock); 104 102 } 105 103 sdhci_set_power_noreg(host, mode, vdd); 106 104 } ··· 137 139 int ret; 138 140 139 141 ret = sdhci_runtime_suspend_host(host); 142 + 143 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 144 + mmc_retune_needed(host->mmc); 140 145 141 146 clk_disable_unprepare(priv->gck); 142 147 clk_disable_unprepare(priv->hclock);
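Several suspend paths in this pull (at91 here, and the imx, arasan, esdhc, and pci hunks) add the same check: unless the controller re-tunes autonomously (tuning mode 3), flag the card as needing re-tuning before the host suspends. A sketch of that predicate; the enum values are illustrative stand-ins for the kernel's `SDHCI_TUNING_MODE_*` constants:

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's SDHCI_TUNING_MODE_* values. */
enum sdhci_tuning_mode { TUNING_MODE_1, TUNING_MODE_2, TUNING_MODE_3 };

/* Mirrors the new suspend-path check: only mode-3 controllers re-tune on
 * their own, so every other mode needs mmc_retune_needed() so software
 * re-tunes after power is restored. */
static int needs_retune_on_suspend(enum sdhci_tuning_mode mode)
{
	return mode != TUNING_MODE_3;
}
```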
+187 -7
drivers/mmc/host/sdhci-of-esdhc.c
··· 16 16 #include <linux/err.h> 17 17 #include <linux/io.h> 18 18 #include <linux/of.h> 19 + #include <linux/of_address.h> 19 20 #include <linux/delay.h> 20 21 #include <linux/module.h> 21 22 #include <linux/sys_soc.h> 23 + #include <linux/clk.h> 24 + #include <linux/ktime.h> 22 25 #include <linux/mmc/host.h> 23 26 #include "sdhci-pltfm.h" 24 27 #include "sdhci-esdhc.h" ··· 33 30 u8 vendor_ver; 34 31 u8 spec_ver; 35 32 bool quirk_incorrect_hostver; 33 + unsigned int peripheral_clock; 36 34 }; 37 35 38 36 /** ··· 418 414 static unsigned int esdhc_of_get_max_clock(struct sdhci_host *host) 419 415 { 420 416 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 417 + struct sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host); 421 418 422 - return pltfm_host->clock; 419 + if (esdhc->peripheral_clock) 420 + return esdhc->peripheral_clock; 421 + else 422 + return pltfm_host->clock; 423 423 } 424 424 425 425 static unsigned int esdhc_of_get_min_clock(struct sdhci_host *host) 426 426 { 427 427 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 428 + struct sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host); 429 + unsigned int clock; 428 430 429 - return pltfm_host->clock / 256 / 16; 431 + if (esdhc->peripheral_clock) 432 + clock = esdhc->peripheral_clock; 433 + else 434 + clock = pltfm_host->clock; 435 + return clock / 256 / 16; 430 436 } 431 437 432 438 static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock) ··· 445 431 struct sdhci_esdhc *esdhc = sdhci_pltfm_priv(pltfm_host); 446 432 int pre_div = 1; 447 433 int div = 1; 448 - u32 timeout; 434 + ktime_t timeout; 449 435 u32 temp; 450 436 451 437 host->mmc->actual_clock = 0; ··· 456 442 /* Workaround to start pre_div at 2 for VNN < VENDOR_V_23 */ 457 443 if (esdhc->vendor_ver < VENDOR_V_23) 458 444 pre_div = 2; 445 + 446 + /* 447 + * Limit SD clock to 167MHz for ls1046a according to its datasheet 448 + */ 449 + if (clock > 167000000 && 450 + of_find_compatible_node(NULL, NULL, "fsl,ls1046a-esdhc")) 
451 + clock = 167000000; 452 + 453 + /* 454 + * Limit SD clock to 125MHz for ls1012a according to its datasheet 455 + */ 456 + if (clock > 125000000 && 457 + of_find_compatible_node(NULL, NULL, "fsl,ls1012a-esdhc")) 458 + clock = 125000000; 459 459 460 460 /* Workaround to reduce the clock frequency for p1010 esdhc */ 461 461 if (of_find_compatible_node(NULL, NULL, "fsl,p1010-esdhc")) { ··· 503 475 sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL); 504 476 505 477 /* Wait max 20 ms */ 506 - timeout = 20; 478 + timeout = ktime_add_ms(ktime_get(), 20); 507 479 while (!(sdhci_readl(host, ESDHC_PRSSTAT) & ESDHC_CLOCK_STABLE)) { 508 - if (timeout == 0) { 480 + if (ktime_after(ktime_get(), timeout)) { 509 481 pr_err("%s: Internal clock never stabilised.\n", 510 482 mmc_hostname(host->mmc)); 511 483 return; 512 484 } 513 - timeout--; 514 - mdelay(1); 485 + udelay(10); 515 486 } 516 487 517 488 temp |= ESDHC_CLOCK_SDCLKEN; ··· 539 512 sdhci_writel(host, ctrl, ESDHC_PROCTL); 540 513 } 541 514 515 + static void esdhc_clock_enable(struct sdhci_host *host, bool enable) 516 + { 517 + u32 val; 518 + ktime_t timeout; 519 + 520 + val = sdhci_readl(host, ESDHC_SYSTEM_CONTROL); 521 + 522 + if (enable) 523 + val |= ESDHC_CLOCK_SDCLKEN; 524 + else 525 + val &= ~ESDHC_CLOCK_SDCLKEN; 526 + 527 + sdhci_writel(host, val, ESDHC_SYSTEM_CONTROL); 528 + 529 + /* Wait max 20 ms */ 530 + timeout = ktime_add_ms(ktime_get(), 20); 531 + val = ESDHC_CLOCK_STABLE; 532 + while (!(sdhci_readl(host, ESDHC_PRSSTAT) & val)) { 533 + if (ktime_after(ktime_get(), timeout)) { 534 + pr_err("%s: Internal clock never stabilised.\n", 535 + mmc_hostname(host->mmc)); 536 + break; 537 + } 538 + udelay(10); 539 + } 540 + } 541 + 542 542 static void esdhc_reset(struct sdhci_host *host, u8 mask) 543 543 { 544 544 sdhci_reset(host, mask); 545 545 546 546 sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); 547 547 sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); 548 + } 549 + 550 + /* The SCFG, Supplemental Configuration 
Unit, provides SoC specific 551 + * configuration and status registers for the device. There is a 552 + * SDHC IO VSEL control register on SCFG for some platforms. It's 553 + * used to support SDHC IO voltage switching. 554 + */ 555 + static const struct of_device_id scfg_device_ids[] = { 556 + { .compatible = "fsl,t1040-scfg", }, 557 + { .compatible = "fsl,ls1012a-scfg", }, 558 + { .compatible = "fsl,ls1046a-scfg", }, 559 + {} 560 + }; 561 + 562 + /* SDHC IO VSEL control register definition */ 563 + #define SCFG_SDHCIOVSELCR 0x408 564 + #define SDHCIOVSELCR_TGLEN 0x80000000 565 + #define SDHCIOVSELCR_VSELVAL 0x60000000 566 + #define SDHCIOVSELCR_SDHC_VS 0x00000001 567 + 568 + static int esdhc_signal_voltage_switch(struct mmc_host *mmc, 569 + struct mmc_ios *ios) 570 + { 571 + struct sdhci_host *host = mmc_priv(mmc); 572 + struct device_node *scfg_node; 573 + void __iomem *scfg_base = NULL; 574 + u32 sdhciovselcr; 575 + u32 val; 576 + 577 + /* 578 + * Signal Voltage Switching is only applicable for Host Controllers 579 + * v3.00 and above. 
580 + */ 581 + if (host->version < SDHCI_SPEC_300) 582 + return 0; 583 + 584 + val = sdhci_readl(host, ESDHC_PROCTL); 585 + 586 + switch (ios->signal_voltage) { 587 + case MMC_SIGNAL_VOLTAGE_330: 588 + val &= ~ESDHC_VOLT_SEL; 589 + sdhci_writel(host, val, ESDHC_PROCTL); 590 + return 0; 591 + case MMC_SIGNAL_VOLTAGE_180: 592 + scfg_node = of_find_matching_node(NULL, scfg_device_ids); 593 + if (scfg_node) 594 + scfg_base = of_iomap(scfg_node, 0); 595 + if (scfg_base) { 596 + sdhciovselcr = SDHCIOVSELCR_TGLEN | 597 + SDHCIOVSELCR_VSELVAL; 598 + iowrite32be(sdhciovselcr, 599 + scfg_base + SCFG_SDHCIOVSELCR); 600 + 601 + val |= ESDHC_VOLT_SEL; 602 + sdhci_writel(host, val, ESDHC_PROCTL); 603 + mdelay(5); 604 + 605 + sdhciovselcr = SDHCIOVSELCR_TGLEN | 606 + SDHCIOVSELCR_SDHC_VS; 607 + iowrite32be(sdhciovselcr, 608 + scfg_base + SCFG_SDHCIOVSELCR); 609 + iounmap(scfg_base); 610 + } else { 611 + val |= ESDHC_VOLT_SEL; 612 + sdhci_writel(host, val, ESDHC_PROCTL); 613 + } 614 + return 0; 615 + default: 616 + return 0; 617 + } 618 + } 619 + 620 + static int esdhc_execute_tuning(struct mmc_host *mmc, u32 opcode) 621 + { 622 + struct sdhci_host *host = mmc_priv(mmc); 623 + u32 val; 624 + 625 + /* Use tuning block for tuning procedure */ 626 + esdhc_clock_enable(host, false); 627 + val = sdhci_readl(host, ESDHC_DMA_SYSCTL); 628 + val |= ESDHC_FLUSH_ASYNC_FIFO; 629 + sdhci_writel(host, val, ESDHC_DMA_SYSCTL); 630 + 631 + val = sdhci_readl(host, ESDHC_TBCTL); 632 + val |= ESDHC_TB_EN; 633 + sdhci_writel(host, val, ESDHC_TBCTL); 634 + esdhc_clock_enable(host, true); 635 + 636 + return sdhci_execute_tuning(mmc, opcode); 548 637 } 549 638 550 639 #ifdef CONFIG_PM_SLEEP ··· 670 527 struct sdhci_host *host = dev_get_drvdata(dev); 671 528 672 529 esdhc_proctl = sdhci_readl(host, SDHCI_HOST_CONTROL); 530 + 531 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 532 + mmc_retune_needed(host->mmc); 673 533 674 534 return sdhci_suspend_host(host); 675 535 } ··· 756 610 { 757 611 struct 
sdhci_pltfm_host *pltfm_host; 758 612 struct sdhci_esdhc *esdhc; 613 + struct device_node *np; 614 + struct clk *clk; 615 + u32 val; 759 616 u16 host_ver; 760 617 761 618 pltfm_host = sdhci_priv(host); ··· 772 623 esdhc->quirk_incorrect_hostver = true; 773 624 else 774 625 esdhc->quirk_incorrect_hostver = false; 626 + 627 + np = pdev->dev.of_node; 628 + clk = of_clk_get(np, 0); 629 + if (!IS_ERR(clk)) { 630 + /* 631 + * esdhc->peripheral_clock would be assigned with a value 632 + * which is the eSDHC base clock when using peripheral clock. 633 + * For ls1046a, the clock value got by the common clk API is 634 + * the peripheral clock while the eSDHC base clock is 1/2 635 + * peripheral clock. 636 + */ 637 + if (of_device_is_compatible(np, "fsl,ls1046a-esdhc")) 638 + esdhc->peripheral_clock = clk_get_rate(clk) / 2; 639 + else 640 + esdhc->peripheral_clock = clk_get_rate(clk); 641 + 642 + clk_put(clk); 643 + } 644 + 645 + if (esdhc->peripheral_clock) { 646 + esdhc_clock_enable(host, false); 647 + val = sdhci_readl(host, ESDHC_DMA_SYSCTL); 648 + val |= ESDHC_PERIPHERAL_CLK_SEL; 649 + sdhci_writel(host, val, ESDHC_DMA_SYSCTL); 650 + esdhc_clock_enable(host, true); 651 + } 775 652 } 776 653 777 654 static int sdhci_esdhc_probe(struct platform_device *pdev) ··· 819 644 820 645 if (IS_ERR(host)) 821 646 return PTR_ERR(host); 647 + 648 + host->mmc_host_ops.start_signal_voltage_switch = 649 + esdhc_signal_voltage_switch; 650 + host->mmc_host_ops.execute_tuning = esdhc_execute_tuning; 651 + host->tuning_delay = 1; 822 652 823 653 esdhc_init(pdev, host); 824 654
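The of-esdhc changes above cap the SD clock per SoC datasheet, derive the eSDHC base clock from the peripheral clock (halved on ls1046a), and keep the minimum clock at base / 256 / 16. A sketch of those calculations, using the limits and divisors from the hunk:

```c
#include <assert.h>

/* Datasheet limits from the hunk above. */
#define LS1046A_MAX_SDCLK 167000000u
#define LS1012A_MAX_SDCLK 125000000u

/* Clamp the requested SD clock per SoC, as esdhc_of_set_clock() does. */
static unsigned int cap_sd_clock(unsigned int clock,
				 int is_ls1046a, int is_ls1012a)
{
	if (is_ls1046a && clock > LS1046A_MAX_SDCLK)
		clock = LS1046A_MAX_SDCLK;
	if (is_ls1012a && clock > LS1012A_MAX_SDCLK)
		clock = LS1012A_MAX_SDCLK;
	return clock;
}

/* ls1046a's base clock is half the rate reported by the clk API. */
static unsigned int base_clock(unsigned int clk_rate, int is_ls1046a)
{
	return is_ls1046a ? clk_rate / 2 : clk_rate;
}

/* Minimum clock: the largest divider chain is 256 * 16, as in
 * esdhc_of_get_min_clock(). */
static unsigned int min_clock(unsigned int base)
{
	return base / 256 / 16;
}
```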
+320 -242
drivers/mmc/host/sdhci-pci-core.c
··· 12 12 * - JMicron (hardware and technical support) 13 13 */ 14 14 15 + #include <linux/string.h> 15 16 #include <linux/delay.h> 16 17 #include <linux/highmem.h> 17 18 #include <linux/module.h> ··· 37 36 static int sdhci_pci_enable_dma(struct sdhci_host *host); 38 37 static void sdhci_pci_set_bus_width(struct sdhci_host *host, int width); 39 38 static void sdhci_pci_hw_reset(struct sdhci_host *host); 40 - static int sdhci_pci_select_drive_strength(struct sdhci_host *host, 41 - struct mmc_card *card, 42 - unsigned int max_dtr, int host_drv, 43 - int card_drv, int *drv_type); 39 + 40 + #ifdef CONFIG_PM_SLEEP 41 + static int __sdhci_pci_suspend_host(struct sdhci_pci_chip *chip) 42 + { 43 + int i, ret; 44 + 45 + for (i = 0; i < chip->num_slots; i++) { 46 + struct sdhci_pci_slot *slot = chip->slots[i]; 47 + struct sdhci_host *host; 48 + 49 + if (!slot) 50 + continue; 51 + 52 + host = slot->host; 53 + 54 + if (chip->pm_retune && host->tuning_mode != SDHCI_TUNING_MODE_3) 55 + mmc_retune_needed(host->mmc); 56 + 57 + ret = sdhci_suspend_host(host); 58 + if (ret) 59 + goto err_pci_suspend; 60 + 61 + if (host->mmc->pm_flags & MMC_PM_WAKE_SDIO_IRQ) 62 + sdhci_enable_irq_wakeups(host); 63 + } 64 + 65 + return 0; 66 + 67 + err_pci_suspend: 68 + while (--i >= 0) 69 + sdhci_resume_host(chip->slots[i]->host); 70 + return ret; 71 + } 72 + 73 + static int sdhci_pci_init_wakeup(struct sdhci_pci_chip *chip) 74 + { 75 + mmc_pm_flag_t pm_flags = 0; 76 + int i; 77 + 78 + for (i = 0; i < chip->num_slots; i++) { 79 + struct sdhci_pci_slot *slot = chip->slots[i]; 80 + 81 + if (slot) 82 + pm_flags |= slot->host->mmc->pm_flags; 83 + } 84 + 85 + return device_init_wakeup(&chip->pdev->dev, 86 + (pm_flags & MMC_PM_KEEP_POWER) && 87 + (pm_flags & MMC_PM_WAKE_SDIO_IRQ)); 88 + } 89 + 90 + static int sdhci_pci_suspend_host(struct sdhci_pci_chip *chip) 91 + { 92 + int ret; 93 + 94 + ret = __sdhci_pci_suspend_host(chip); 95 + if (ret) 96 + return ret; 97 + 98 + sdhci_pci_init_wakeup(chip); 99 + 100 
+ return 0; 101 + } 102 + 103 + int sdhci_pci_resume_host(struct sdhci_pci_chip *chip) 104 + { 105 + struct sdhci_pci_slot *slot; 106 + int i, ret; 107 + 108 + for (i = 0; i < chip->num_slots; i++) { 109 + slot = chip->slots[i]; 110 + if (!slot) 111 + continue; 112 + 113 + ret = sdhci_resume_host(slot->host); 114 + if (ret) 115 + return ret; 116 + } 117 + 118 + return 0; 119 + } 120 + #endif 121 + 122 + #ifdef CONFIG_PM 123 + static int sdhci_pci_runtime_suspend_host(struct sdhci_pci_chip *chip) 124 + { 125 + struct sdhci_pci_slot *slot; 126 + struct sdhci_host *host; 127 + int i, ret; 128 + 129 + for (i = 0; i < chip->num_slots; i++) { 130 + slot = chip->slots[i]; 131 + if (!slot) 132 + continue; 133 + 134 + host = slot->host; 135 + 136 + ret = sdhci_runtime_suspend_host(host); 137 + if (ret) 138 + goto err_pci_runtime_suspend; 139 + 140 + if (chip->rpm_retune && 141 + host->tuning_mode != SDHCI_TUNING_MODE_3) 142 + mmc_retune_needed(host->mmc); 143 + } 144 + 145 + return 0; 146 + 147 + err_pci_runtime_suspend: 148 + while (--i >= 0) 149 + sdhci_runtime_resume_host(chip->slots[i]->host); 150 + return ret; 151 + } 152 + 153 + static int sdhci_pci_runtime_resume_host(struct sdhci_pci_chip *chip) 154 + { 155 + struct sdhci_pci_slot *slot; 156 + int i, ret; 157 + 158 + for (i = 0; i < chip->num_slots; i++) { 159 + slot = chip->slots[i]; 160 + if (!slot) 161 + continue; 162 + 163 + ret = sdhci_runtime_resume_host(slot->host); 164 + if (ret) 165 + return ret; 166 + } 167 + 168 + return 0; 169 + } 170 + #endif 44 171 45 172 /*****************************************************************************\ 46 173 * * ··· 200 71 return 0; 201 72 } 202 73 74 + #ifdef CONFIG_PM_SLEEP 203 75 static int ricoh_mmc_resume(struct sdhci_pci_chip *chip) 204 76 { 205 77 /* Apply a delay to allow controller to settle */ 206 78 /* Otherwise it becomes confused if card state changed 207 79 during suspend */ 208 80 msleep(500); 209 - return 0; 81 + return sdhci_pci_resume_host(chip); 210 
82 } 83 + #endif 211 84 212 85 static const struct sdhci_pci_fixes sdhci_ricoh = { 213 86 .probe = ricoh_probe, ··· 220 89 221 90 static const struct sdhci_pci_fixes sdhci_ricoh_mmc = { 222 91 .probe_slot = ricoh_mmc_probe_slot, 92 + #ifdef CONFIG_PM_SLEEP 223 93 .resume = ricoh_mmc_resume, 94 + #endif 224 95 .quirks = SDHCI_QUIRK_32BIT_DMA_ADDR | 225 96 SDHCI_QUIRK_CLOCK_BEFORE_RESET | 226 97 SDHCI_QUIRK_NO_CARD_NO_RESET | ··· 392 259 .probe_slot = pch_hc_probe_slot, 393 260 }; 394 261 262 + enum { 263 + INTEL_DSM_FNS = 0, 264 + INTEL_DSM_DRV_STRENGTH = 9, 265 + INTEL_DSM_D3_RETUNE = 10, 266 + }; 267 + 268 + struct intel_host { 269 + u32 dsm_fns; 270 + int drv_strength; 271 + bool d3_retune; 272 + }; 273 + 274 + const u8 intel_dsm_uuid[] = { 275 + 0xA5, 0x3E, 0xC1, 0xF6, 0xCD, 0x65, 0x1F, 0x46, 276 + 0xAB, 0x7A, 0x29, 0xF7, 0xE8, 0xD5, 0xBD, 0x61, 277 + }; 278 + 279 + static int __intel_dsm(struct intel_host *intel_host, struct device *dev, 280 + unsigned int fn, u32 *result) 281 + { 282 + union acpi_object *obj; 283 + int err = 0; 284 + size_t len; 285 + 286 + obj = acpi_evaluate_dsm(ACPI_HANDLE(dev), intel_dsm_uuid, 0, fn, NULL); 287 + if (!obj) 288 + return -EOPNOTSUPP; 289 + 290 + if (obj->type != ACPI_TYPE_BUFFER || obj->buffer.length < 1) { 291 + err = -EINVAL; 292 + goto out; 293 + } 294 + 295 + len = min_t(size_t, obj->buffer.length, 4); 296 + 297 + *result = 0; 298 + memcpy(result, obj->buffer.pointer, len); 299 + out: 300 + ACPI_FREE(obj); 301 + 302 + return err; 303 + } 304 + 305 + static int intel_dsm(struct intel_host *intel_host, struct device *dev, 306 + unsigned int fn, u32 *result) 307 + { 308 + if (fn > 31 || !(intel_host->dsm_fns & (1 << fn))) 309 + return -EOPNOTSUPP; 310 + 311 + return __intel_dsm(intel_host, dev, fn, result); 312 + } 313 + 314 + static void intel_dsm_init(struct intel_host *intel_host, struct device *dev, 315 + struct mmc_host *mmc) 316 + { 317 + int err; 318 + u32 val; 319 + 320 + err = __intel_dsm(intel_host, dev, 
INTEL_DSM_FNS, &intel_host->dsm_fns); 321 + if (err) { 322 + pr_debug("%s: DSM not supported, error %d\n", 323 + mmc_hostname(mmc), err); 324 + return; 325 + } 326 + 327 + pr_debug("%s: DSM function mask %#x\n", 328 + mmc_hostname(mmc), intel_host->dsm_fns); 329 + 330 + err = intel_dsm(intel_host, dev, INTEL_DSM_DRV_STRENGTH, &val); 331 + intel_host->drv_strength = err ? 0 : val; 332 + 333 + err = intel_dsm(intel_host, dev, INTEL_DSM_D3_RETUNE, &val); 334 + intel_host->d3_retune = err ? true : !!val; 335 + } 336 + 395 337 static void sdhci_pci_int_hw_reset(struct sdhci_host *host) 396 338 { 397 339 u8 reg; ··· 482 274 usleep_range(300, 1000); 483 275 } 484 276 485 - static int spt_select_drive_strength(struct sdhci_host *host, 486 - struct mmc_card *card, 487 - unsigned int max_dtr, 488 - int host_drv, int card_drv, int *drv_type) 277 + static int intel_select_drive_strength(struct mmc_card *card, 278 + unsigned int max_dtr, int host_drv, 279 + int card_drv, int *drv_type) 489 280 { 490 - int drive_strength; 281 + struct sdhci_host *host = mmc_priv(card->host); 282 + struct sdhci_pci_slot *slot = sdhci_priv(host); 283 + struct intel_host *intel_host = sdhci_pci_priv(slot); 491 284 492 - if (sdhci_pci_spt_drive_strength > 0) 493 - drive_strength = sdhci_pci_spt_drive_strength & 0xf; 494 - else 495 - drive_strength = 0; /* Default 50-ohm */ 496 - 497 - if ((mmc_driver_type_mask(drive_strength) & card_drv) == 0) 498 - drive_strength = 0; /* Default 50-ohm */ 499 - 500 - return drive_strength; 501 - } 502 - 503 - /* Try to read the drive strength from the card */ 504 - static void spt_read_drive_strength(struct sdhci_host *host) 505 - { 506 - u32 val, i, t; 507 - u16 m; 508 - 509 - if (sdhci_pci_spt_drive_strength) 510 - return; 511 - 512 - sdhci_pci_spt_drive_strength = -1; 513 - 514 - m = sdhci_readw(host, SDHCI_HOST_CONTROL2) & 0x7; 515 - if (m != 3 && m != 5) 516 - return; 517 - val = sdhci_readl(host, SDHCI_PRESENT_STATE); 518 - if (val & 0x3) 519 - return; 520 - 
sdhci_writel(host, 0x007f0023, SDHCI_INT_ENABLE); 521 - sdhci_writel(host, 0, SDHCI_SIGNAL_ENABLE); 522 - sdhci_writew(host, 0x10, SDHCI_TRANSFER_MODE); 523 - sdhci_writeb(host, 0xe, SDHCI_TIMEOUT_CONTROL); 524 - sdhci_writew(host, 512, SDHCI_BLOCK_SIZE); 525 - sdhci_writew(host, 1, SDHCI_BLOCK_COUNT); 526 - sdhci_writel(host, 0, SDHCI_ARGUMENT); 527 - sdhci_writew(host, 0x83b, SDHCI_COMMAND); 528 - for (i = 0; i < 1000; i++) { 529 - val = sdhci_readl(host, SDHCI_INT_STATUS); 530 - if (val & 0xffff8000) 531 - return; 532 - if (val & 0x20) 533 - break; 534 - udelay(1); 535 - } 536 - val = sdhci_readl(host, SDHCI_PRESENT_STATE); 537 - if (!(val & 0x800)) 538 - return; 539 - for (i = 0; i < 47; i++) 540 - val = sdhci_readl(host, SDHCI_BUFFER); 541 - t = val & 0xf00; 542 - if (t != 0x200 && t != 0x300) 543 - return; 544 - 545 - sdhci_pci_spt_drive_strength = 0x10 | ((val >> 12) & 0xf); 285 + return intel_host->drv_strength; 546 286 } 547 287 548 288 static int bxt_get_cd(struct mmc_host *mmc) ··· 515 359 return ret; 516 360 } 517 361 362 + #define SDHCI_INTEL_PWR_TIMEOUT_CNT 20 363 + #define SDHCI_INTEL_PWR_TIMEOUT_UDELAY 100 364 + 365 + static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode, 366 + unsigned short vdd) 367 + { 368 + int cntr; 369 + u8 reg; 370 + 371 + sdhci_set_power(host, mode, vdd); 372 + 373 + if (mode == MMC_POWER_OFF) 374 + return; 375 + 376 + /* 377 + * Bus power might not enable after D3 -> D0 transition due to the 378 + * present state not yet having propagated. Retry for up to 2ms. 
379 + */ 380 + for (cntr = 0; cntr < SDHCI_INTEL_PWR_TIMEOUT_CNT; cntr++) { 381 + reg = sdhci_readb(host, SDHCI_POWER_CONTROL); 382 + if (reg & SDHCI_POWER_ON) 383 + break; 384 + udelay(SDHCI_INTEL_PWR_TIMEOUT_UDELAY); 385 + reg |= SDHCI_POWER_ON; 386 + sdhci_writeb(host, reg, SDHCI_POWER_CONTROL); 387 + } 388 + } 389 + 390 + static const struct sdhci_ops sdhci_intel_byt_ops = { 391 + .set_clock = sdhci_set_clock, 392 + .set_power = sdhci_intel_set_power, 393 + .enable_dma = sdhci_pci_enable_dma, 394 + .set_bus_width = sdhci_pci_set_bus_width, 395 + .reset = sdhci_reset, 396 + .set_uhs_signaling = sdhci_set_uhs_signaling, 397 + .hw_reset = sdhci_pci_hw_reset, 398 + }; 399 + 400 + static void byt_read_dsm(struct sdhci_pci_slot *slot) 401 + { 402 + struct intel_host *intel_host = sdhci_pci_priv(slot); 403 + struct device *dev = &slot->chip->pdev->dev; 404 + struct mmc_host *mmc = slot->host->mmc; 405 + 406 + intel_dsm_init(intel_host, dev, mmc); 407 + slot->chip->rpm_retune = intel_host->d3_retune; 408 + } 409 + 518 410 static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot) 519 411 { 412 + byt_read_dsm(slot); 520 413 slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE | 521 414 MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR | 522 415 MMC_CAP_CMD_DURING_TFR | ··· 574 369 slot->hw_reset = sdhci_pci_int_hw_reset; 575 370 if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BSW_EMMC) 576 371 slot->host->timeout_clk = 1000; /* 1000 kHz i.e. 
1 MHz */ 577 - if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_SPT_EMMC) { 578 - spt_read_drive_strength(slot->host); 579 - slot->select_drive_strength = spt_select_drive_strength; 580 - } 372 + slot->host->mmc_host_ops.select_drive_strength = 373 + intel_select_drive_strength; 581 374 return 0; 582 375 } 583 376 ··· 608 405 { 609 406 int err; 610 407 408 + byt_read_dsm(slot); 409 + 611 410 err = ni_set_max_freq(slot); 612 411 if (err) 613 412 return err; ··· 621 416 622 417 static int byt_sdio_probe_slot(struct sdhci_pci_slot *slot) 623 418 { 419 + byt_read_dsm(slot); 624 420 slot->host->mmc->caps |= MMC_CAP_POWER_OFF_CARD | MMC_CAP_NONREMOVABLE | 625 421 MMC_CAP_WAIT_WHILE_BUSY; 626 422 return 0; ··· 629 423 630 424 static int byt_sd_probe_slot(struct sdhci_pci_slot *slot) 631 425 { 632 - slot->host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY; 426 + byt_read_dsm(slot); 427 + slot->host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY | 428 + MMC_CAP_AGGRESSIVE_PM; 633 429 slot->cd_idx = 0; 634 430 slot->cd_override_level = true; 635 431 if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BXT_SD || 636 432 slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BXTM_SD || 637 433 slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_APL_SD || 638 - slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_SD) { 434 + slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_SD) 639 435 slot->host->mmc_host_ops.get_cd = bxt_get_cd; 640 - slot->host->mmc->caps |= MMC_CAP_AGGRESSIVE_PM; 641 - } 642 436 643 437 return 0; 644 438 } 645 - 646 - #define SDHCI_INTEL_PWR_TIMEOUT_CNT 20 647 - #define SDHCI_INTEL_PWR_TIMEOUT_UDELAY 100 648 - 649 - static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode, 650 - unsigned short vdd) 651 - { 652 - int cntr; 653 - u8 reg; 654 - 655 - sdhci_set_power(host, mode, vdd); 656 - 657 - if (mode == MMC_POWER_OFF) 658 - return; 659 - 660 - spin_unlock_irq(&host->lock); 661 - 662 - /* 663 - * Bus power might not enable after D3 -> D0 transition due to the 
664 - * present state not yet having propagated. Retry for up to 2ms. 665 - */ 666 - for (cntr = 0; cntr < SDHCI_INTEL_PWR_TIMEOUT_CNT; cntr++) { 667 - reg = sdhci_readb(host, SDHCI_POWER_CONTROL); 668 - if (reg & SDHCI_POWER_ON) 669 - break; 670 - udelay(SDHCI_INTEL_PWR_TIMEOUT_UDELAY); 671 - reg |= SDHCI_POWER_ON; 672 - sdhci_writeb(host, reg, SDHCI_POWER_CONTROL); 673 - } 674 - 675 - spin_lock_irq(&host->lock); 676 - } 677 - 678 - static const struct sdhci_ops sdhci_intel_byt_ops = { 679 - .set_clock = sdhci_set_clock, 680 - .set_power = sdhci_intel_set_power, 681 - .enable_dma = sdhci_pci_enable_dma, 682 - .set_bus_width = sdhci_pci_set_bus_width, 683 - .reset = sdhci_reset, 684 - .set_uhs_signaling = sdhci_set_uhs_signaling, 685 - .hw_reset = sdhci_pci_hw_reset, 686 - .select_drive_strength = sdhci_pci_select_drive_strength, 687 - }; 688 439 689 440 static const struct sdhci_pci_fixes sdhci_intel_byt_emmc = { 690 441 .allow_runtime_pm = true, ··· 651 488 SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400 | 652 489 SDHCI_QUIRK2_STOP_WITH_TC, 653 490 .ops = &sdhci_intel_byt_ops, 491 + .priv_size = sizeof(struct intel_host), 654 492 }; 655 493 656 494 static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = { ··· 661 497 .allow_runtime_pm = true, 662 498 .probe_slot = ni_byt_sdio_probe_slot, 663 499 .ops = &sdhci_intel_byt_ops, 500 + .priv_size = sizeof(struct intel_host), 664 501 }; 665 502 666 503 static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = { ··· 671 506 .allow_runtime_pm = true, 672 507 .probe_slot = byt_sdio_probe_slot, 673 508 .ops = &sdhci_intel_byt_ops, 509 + .priv_size = sizeof(struct intel_host), 674 510 }; 675 511 676 512 static const struct sdhci_pci_fixes sdhci_intel_byt_sd = { ··· 683 517 .own_cd_for_runtime_pm = true, 684 518 .probe_slot = byt_sd_probe_slot, 685 519 .ops = &sdhci_intel_byt_ops, 520 + .priv_size = sizeof(struct intel_host), 686 521 }; 687 522 688 523 /* Define Host controllers for Intel Merrifield platform */ ··· 886 719 
jmicron_enable_mmc(slot->host, 0); 887 720 } 888 721 722 + #ifdef CONFIG_PM_SLEEP 889 723 static int jmicron_suspend(struct sdhci_pci_chip *chip) 890 724 { 891 - int i; 725 + int i, ret; 726 + 727 + ret = __sdhci_pci_suspend_host(chip); 728 + if (ret) 729 + return ret; 892 730 893 731 if (chip->pdev->device == PCI_DEVICE_ID_JMICRON_JMB38X_MMC || 894 732 chip->pdev->device == PCI_DEVICE_ID_JMICRON_JMB388_ESD) { 895 733 for (i = 0; i < chip->num_slots; i++) 896 734 jmicron_enable_mmc(chip->slots[i]->host, 0); 897 735 } 736 + 737 + sdhci_pci_init_wakeup(chip); 898 738 899 739 return 0; 900 740 } ··· 922 748 return ret; 923 749 } 924 750 925 - return 0; 751 + return sdhci_pci_resume_host(chip); 926 752 } 753 + #endif 927 754 928 755 static const struct sdhci_pci_fixes sdhci_o2 = { 929 756 .probe = sdhci_pci_o2_probe, 930 757 .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 931 758 .quirks2 = SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD, 932 759 .probe_slot = sdhci_pci_o2_probe_slot, 760 + #ifdef CONFIG_PM_SLEEP 933 761 .resume = sdhci_pci_o2_resume, 762 + #endif 934 763 }; 935 764 936 765 static const struct sdhci_pci_fixes sdhci_jmicron = { ··· 942 765 .probe_slot = jmicron_probe_slot, 943 766 .remove_slot = jmicron_remove_slot, 944 767 768 + #ifdef CONFIG_PM_SLEEP 945 769 .suspend = jmicron_suspend, 946 770 .resume = jmicron_resume, 771 + #endif 947 772 }; 948 773 949 774 /* SysKonnect CardBus2SDIO extra registers */ ··· 1796 1617 slot->hw_reset(host); 1797 1618 } 1798 1619 1799 - static int sdhci_pci_select_drive_strength(struct sdhci_host *host, 1800 - struct mmc_card *card, 1801 - unsigned int max_dtr, int host_drv, 1802 - int card_drv, int *drv_type) 1803 - { 1804 - struct sdhci_pci_slot *slot = sdhci_priv(host); 1805 - 1806 - if (!slot->select_drive_strength) 1807 - return 0; 1808 - 1809 - return slot->select_drive_strength(host, card, max_dtr, host_drv, 1810 - card_drv, drv_type); 1811 - } 1812 - 1813 1620 static const struct sdhci_ops sdhci_pci_ops = { 1814 1621 
.set_clock = sdhci_set_clock, 1815 1622 .enable_dma = sdhci_pci_enable_dma, ··· 1803 1638 .reset = sdhci_reset, 1804 1639 .set_uhs_signaling = sdhci_set_uhs_signaling, 1805 1640 .hw_reset = sdhci_pci_hw_reset, 1806 - .select_drive_strength = sdhci_pci_select_drive_strength, 1807 1641 }; 1808 1642 1809 1643 /*****************************************************************************\ ··· 1815 1651 static int sdhci_pci_suspend(struct device *dev) 1816 1652 { 1817 1653 struct pci_dev *pdev = to_pci_dev(dev); 1818 - struct sdhci_pci_chip *chip; 1819 - struct sdhci_pci_slot *slot; 1820 - mmc_pm_flag_t slot_pm_flags; 1821 - mmc_pm_flag_t pm_flags = 0; 1822 - int i, ret; 1654 + struct sdhci_pci_chip *chip = pci_get_drvdata(pdev); 1823 1655 1824 - chip = pci_get_drvdata(pdev); 1825 1656 if (!chip) 1826 1657 return 0; 1827 1658 1828 - for (i = 0; i < chip->num_slots; i++) { 1829 - slot = chip->slots[i]; 1830 - if (!slot) 1831 - continue; 1659 + if (chip->fixes && chip->fixes->suspend) 1660 + return chip->fixes->suspend(chip); 1832 1661 1833 - ret = sdhci_suspend_host(slot->host); 1834 - 1835 - if (ret) 1836 - goto err_pci_suspend; 1837 - 1838 - slot_pm_flags = slot->host->mmc->pm_flags; 1839 - if (slot_pm_flags & MMC_PM_WAKE_SDIO_IRQ) 1840 - sdhci_enable_irq_wakeups(slot->host); 1841 - 1842 - pm_flags |= slot_pm_flags; 1843 - } 1844 - 1845 - if (chip->fixes && chip->fixes->suspend) { 1846 - ret = chip->fixes->suspend(chip); 1847 - if (ret) 1848 - goto err_pci_suspend; 1849 - } 1850 - 1851 - if (pm_flags & MMC_PM_KEEP_POWER) { 1852 - if (pm_flags & MMC_PM_WAKE_SDIO_IRQ) 1853 - device_init_wakeup(dev, true); 1854 - else 1855 - device_init_wakeup(dev, false); 1856 - } else 1857 - device_init_wakeup(dev, false); 1858 - 1859 - return 0; 1860 - 1861 - err_pci_suspend: 1862 - while (--i >= 0) 1863 - sdhci_resume_host(chip->slots[i]->host); 1864 - return ret; 1662 + return sdhci_pci_suspend_host(chip); 1865 1663 } 1866 1664 1867 1665 static int sdhci_pci_resume(struct device 
*dev) 1868 1666 { 1869 1667 struct pci_dev *pdev = to_pci_dev(dev); 1870 - struct sdhci_pci_chip *chip; 1871 - struct sdhci_pci_slot *slot; 1872 - int i, ret; 1668 + struct sdhci_pci_chip *chip = pci_get_drvdata(pdev); 1873 1669 1874 - chip = pci_get_drvdata(pdev); 1875 1670 if (!chip) 1876 1671 return 0; 1877 1672 1878 - if (chip->fixes && chip->fixes->resume) { 1879 - ret = chip->fixes->resume(chip); 1880 - if (ret) 1881 - return ret; 1882 - } 1673 + if (chip->fixes && chip->fixes->resume) 1674 + return chip->fixes->resume(chip); 1883 1675 1884 - for (i = 0; i < chip->num_slots; i++) { 1885 - slot = chip->slots[i]; 1886 - if (!slot) 1887 - continue; 1888 - 1889 - ret = sdhci_resume_host(slot->host); 1890 - if (ret) 1891 - return ret; 1892 - } 1893 - 1894 - return 0; 1676 + return sdhci_pci_resume_host(chip); 1895 1677 } 1896 1678 #endif 1897 1679 ··· 1845 1735 static int sdhci_pci_runtime_suspend(struct device *dev) 1846 1736 { 1847 1737 struct pci_dev *pdev = to_pci_dev(dev); 1848 - struct sdhci_pci_chip *chip; 1849 - struct sdhci_pci_slot *slot; 1850 - int i, ret; 1738 + struct sdhci_pci_chip *chip = pci_get_drvdata(pdev); 1851 1739 1852 - chip = pci_get_drvdata(pdev); 1853 1740 if (!chip) 1854 1741 return 0; 1855 1742 1856 - for (i = 0; i < chip->num_slots; i++) { 1857 - slot = chip->slots[i]; 1858 - if (!slot) 1859 - continue; 1743 + if (chip->fixes && chip->fixes->runtime_suspend) 1744 + return chip->fixes->runtime_suspend(chip); 1860 1745 1861 - ret = sdhci_runtime_suspend_host(slot->host); 1862 - 1863 - if (ret) 1864 - goto err_pci_runtime_suspend; 1865 - } 1866 - 1867 - if (chip->fixes && chip->fixes->suspend) { 1868 - ret = chip->fixes->suspend(chip); 1869 - if (ret) 1870 - goto err_pci_runtime_suspend; 1871 - } 1872 - 1873 - return 0; 1874 - 1875 - err_pci_runtime_suspend: 1876 - while (--i >= 0) 1877 - sdhci_runtime_resume_host(chip->slots[i]->host); 1878 - return ret; 1746 + return sdhci_pci_runtime_suspend_host(chip); 1879 1747 } 1880 1748 1881 1749 
static int sdhci_pci_runtime_resume(struct device *dev) 1882 1750 { 1883 1751 struct pci_dev *pdev = to_pci_dev(dev); 1884 - struct sdhci_pci_chip *chip; 1885 - struct sdhci_pci_slot *slot; 1886 - int i, ret; 1752 + struct sdhci_pci_chip *chip = pci_get_drvdata(pdev); 1887 1753 1888 - chip = pci_get_drvdata(pdev); 1889 1754 if (!chip) 1890 1755 return 0; 1891 1756 1892 - if (chip->fixes && chip->fixes->resume) { 1893 - ret = chip->fixes->resume(chip); 1894 - if (ret) 1895 - return ret; 1896 - } 1757 + if (chip->fixes && chip->fixes->runtime_resume) 1758 + return chip->fixes->runtime_resume(chip); 1897 1759 1898 - for (i = 0; i < chip->num_slots; i++) { 1899 - slot = chip->slots[i]; 1900 - if (!slot) 1901 - continue; 1902 - 1903 - ret = sdhci_runtime_resume_host(slot->host); 1904 - if (ret) 1905 - return ret; 1906 - } 1907 - 1908 - return 0; 1760 + return sdhci_pci_runtime_resume_host(chip); 1909 1761 } 1910 1762 #endif 1911 1763 ··· 1890 1818 struct sdhci_pci_slot *slot; 1891 1819 struct sdhci_host *host; 1892 1820 int ret, bar = first_bar + slotno; 1821 + size_t priv_size = chip->fixes ? chip->fixes->priv_size : 0; 1893 1822 1894 1823 if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) { 1895 1824 dev_err(&pdev->dev, "BAR %d is not iomem. 
Aborting.\n", bar); ··· 1912 1839 return ERR_PTR(-ENODEV); 1913 1840 } 1914 1841 1915 - host = sdhci_alloc_host(&pdev->dev, sizeof(struct sdhci_pci_slot)); 1842 + host = sdhci_alloc_host(&pdev->dev, sizeof(*slot) + priv_size); 1916 1843 if (IS_ERR(host)) { 1917 1844 dev_err(&pdev->dev, "cannot allocate host\n"); 1918 1845 return ERR_CAST(host); ··· 1992 1919 } 1993 1920 } 1994 1921 1995 - ret = sdhci_add_host(host); 1922 + if (chip->fixes && chip->fixes->add_host) 1923 + ret = chip->fixes->add_host(slot); 1924 + else 1925 + ret = sdhci_add_host(host); 1996 1926 if (ret) 1997 1927 goto remove; 1998 1928 ··· 2118 2042 chip->allow_runtime_pm = chip->fixes->allow_runtime_pm; 2119 2043 } 2120 2044 chip->num_slots = slots; 2045 + chip->pm_retune = true; 2046 + chip->rpm_retune = true; 2121 2047 2122 2048 pci_set_drvdata(pdev, chip); 2123 2049
-3
drivers/mmc/host/sdhci-pci-data.c
··· 3 3 4 4 struct sdhci_pci_data *(*sdhci_pci_get_data)(struct pci_dev *pdev, int slotno); 5 5 EXPORT_SYMBOL_GPL(sdhci_pci_get_data); 6 - 7 - int sdhci_pci_spt_drive_strength; 8 - EXPORT_SYMBOL_GPL(sdhci_pci_spt_drive_strength);
+3 -1
drivers/mmc/host/sdhci-pci-o2micro.c
··· 384 384 return 0; 385 385 } 386 386 387 + #ifdef CONFIG_PM_SLEEP 387 388 int sdhci_pci_o2_resume(struct sdhci_pci_chip *chip) 388 389 { 389 390 sdhci_pci_o2_probe(chip); 390 - return 0; 391 + return sdhci_pci_resume_host(chip); 391 392 } 393 + #endif
+20 -4
drivers/mmc/host/sdhci-pci.h
··· 64 64 int (*probe) (struct sdhci_pci_chip *); 65 65 66 66 int (*probe_slot) (struct sdhci_pci_slot *); 67 + int (*add_host) (struct sdhci_pci_slot *); 67 68 void (*remove_slot) (struct sdhci_pci_slot *, int); 68 69 70 + #ifdef CONFIG_PM_SLEEP 69 71 int (*suspend) (struct sdhci_pci_chip *); 70 72 int (*resume) (struct sdhci_pci_chip *); 73 + #endif 74 + #ifdef CONFIG_PM 75 + int (*runtime_suspend) (struct sdhci_pci_chip *); 76 + int (*runtime_resume) (struct sdhci_pci_chip *); 77 + #endif 71 78 72 79 const struct sdhci_ops *ops; 80 + size_t priv_size; 73 81 }; 74 82 75 83 struct sdhci_pci_slot { ··· 93 85 bool cd_override_level; 94 86 95 87 void (*hw_reset)(struct sdhci_host *host); 96 - int (*select_drive_strength)(struct sdhci_host *host, 97 - struct mmc_card *card, 98 - unsigned int max_dtr, int host_drv, 99 - int card_drv, int *drv_type); 88 + unsigned long private[0] ____cacheline_aligned; 100 89 }; 101 90 102 91 struct sdhci_pci_chip { ··· 102 97 unsigned int quirks; 103 98 unsigned int quirks2; 104 99 bool allow_runtime_pm; 100 + bool pm_retune; 101 + bool rpm_retune; 105 102 const struct sdhci_pci_fixes *fixes; 106 103 107 104 int num_slots; /* Slots on controller */ 108 105 struct sdhci_pci_slot *slots[MAX_SLOTS]; /* Pointers to host slots */ 109 106 }; 107 + 108 + static inline void *sdhci_pci_priv(struct sdhci_pci_slot *slot) 109 + { 110 + return (void *)slot->private; 111 + } 112 + 113 + #ifdef CONFIG_PM_SLEEP 114 + int sdhci_pci_resume_host(struct sdhci_pci_chip *chip); 115 + #endif 110 116 111 117 #endif /* __SDHCI_PCI_H */
+3
drivers/mmc/host/sdhci-pltfm.c
··· 213 213 { 214 214 struct sdhci_host *host = dev_get_drvdata(dev); 215 215 216 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 217 + mmc_retune_needed(host->mmc); 218 + 216 219 return sdhci_suspend_host(host); 217 220 } 218 221
+6 -3
drivers/mmc/host/sdhci-pxav2.c
··· 185 185 goto err_clk_get; 186 186 } 187 187 pltfm_host->clk = clk; 188 - clk_prepare_enable(clk); 188 + ret = clk_prepare_enable(clk); 189 + if (ret) { 190 + dev_err(&pdev->dev, "failed to enable io clock\n"); 191 + goto err_clk_enable; 192 + } 189 193 190 194 host->quirks = SDHCI_QUIRK_BROKEN_ADMA 191 195 | SDHCI_QUIRK_BROKEN_TIMEOUT_VAL ··· 226 222 goto err_add_host; 227 223 } 228 224 229 - platform_set_drvdata(pdev, host); 230 - 231 225 return 0; 232 226 233 227 err_add_host: 234 228 clk_disable_unprepare(clk); 229 + err_clk_enable: 235 230 clk_put(clk); 236 231 err_clk_get: 237 232 sdhci_pltfm_free(pdev);
+6 -6
drivers/mmc/host/sdhci-pxav3.c
··· 323 323 if (host->pwr == 0) 324 324 vdd = 0; 325 325 326 - if (!IS_ERR(mmc->supply.vmmc)) { 327 - spin_unlock_irq(&host->lock); 326 + if (!IS_ERR(mmc->supply.vmmc)) 328 327 mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd); 329 - spin_lock_irq(&host->lock); 330 - } 331 328 } 332 329 333 330 static const struct sdhci_ops pxav3_sdhci_ops = { ··· 477 480 goto err_add_host; 478 481 } 479 482 480 - platform_set_drvdata(pdev, host); 481 - 482 483 if (host->mmc->pm_caps & MMC_PM_WAKE_SDIO_IRQ) 483 484 device_init_wakeup(&pdev->dev, 1); 484 485 ··· 524 529 struct sdhci_host *host = dev_get_drvdata(dev); 525 530 526 531 pm_runtime_get_sync(dev); 532 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 533 + mmc_retune_needed(host->mmc); 527 534 ret = sdhci_suspend_host(host); 528 535 pm_runtime_mark_last_busy(dev); 529 536 pm_runtime_put_autosuspend(dev); ··· 558 561 ret = sdhci_runtime_suspend_host(host); 559 562 if (ret) 560 563 return ret; 564 + 565 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 566 + mmc_retune_needed(host->mmc); 561 567 562 568 clk_disable_unprepare(pxa->clk_io); 563 569 if (!IS_ERR(pxa->clk_core))
+6 -4
drivers/mmc/host/sdhci-s3c.c
··· 190 190 * speed possible with selected clock source and skip the division. 191 191 */ 192 192 if (ourhost->no_divider) { 193 - spin_unlock_irq(&ourhost->host->lock); 194 193 rate = clk_round_rate(clksrc, wanted); 195 - spin_lock_irq(&ourhost->host->lock); 196 194 return wanted - rate; 197 195 } 198 196 ··· 387 389 clk &= ~SDHCI_CLOCK_CARD_EN; 388 390 sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL); 389 391 390 - spin_unlock_irq(&host->lock); 391 392 ret = clk_set_rate(ourhost->clk_bus[ourhost->cur_clk], clock); 392 - spin_lock_irq(&host->lock); 393 393 if (ret != 0) { 394 394 dev_err(dev, "%s: failed to set clock rate %uHz\n", 395 395 mmc_hostname(host->mmc), clock); ··· 739 743 { 740 744 struct sdhci_host *host = dev_get_drvdata(dev); 741 745 746 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 747 + mmc_retune_needed(host->mmc); 748 + 742 749 return sdhci_suspend_host(host); 743 750 } 744 751 ··· 762 763 int ret; 763 764 764 765 ret = sdhci_runtime_suspend_host(host); 766 + 767 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 768 + mmc_retune_needed(host->mmc); 765 769 766 770 if (ourhost->cur_clk >= 0) 767 771 clk_disable_unprepare(ourhost->clk_bus[ourhost->cur_clk]);
+3
drivers/mmc/host/sdhci-sirf.c
··· 237 237 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 238 238 int ret; 239 239 240 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 241 + mmc_retune_needed(host->mmc); 242 + 240 243 ret = sdhci_suspend_host(host); 241 244 if (ret) 242 245 return ret;
+3
drivers/mmc/host/sdhci-spear.c
··· 165 165 struct spear_sdhci *sdhci = sdhci_priv(host); 166 166 int ret; 167 167 168 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 169 + mmc_retune_needed(host->mmc); 170 + 168 171 ret = sdhci_suspend_host(host); 169 172 if (!ret) 170 173 clk_disable(sdhci->clk);
+5 -3
drivers/mmc/host/sdhci-st.c
··· 418 418 goto err_out; 419 419 } 420 420 421 - platform_set_drvdata(pdev, host); 422 - 423 421 host_version = readw_relaxed((host->ioaddr + SDHCI_HOST_VERSION)); 424 422 425 423 dev_info(&pdev->dev, "SDHCI ST Initialised: Host Version: 0x%x Vendor Version 0x%x\n", ··· 463 465 struct sdhci_host *host = dev_get_drvdata(dev); 464 466 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 465 467 struct st_mmc_platform_data *pdata = sdhci_pltfm_priv(pltfm_host); 466 - int ret = sdhci_suspend_host(host); 468 + int ret; 467 469 470 + if (host->tuning_mode != SDHCI_TUNING_MODE_3) 471 + mmc_retune_needed(host->mmc); 472 + 473 + ret = sdhci_suspend_host(host); 468 474 if (ret) 469 475 goto out; 470 476
+58 -1
drivers/mmc/host/sdhci-tegra.c
··· 21 21 #include <linux/io.h> 22 22 #include <linux/of.h> 23 23 #include <linux/of_device.h> 24 + #include <linux/reset.h> 24 25 #include <linux/mmc/card.h> 25 26 #include <linux/mmc/host.h> 26 27 #include <linux/mmc/mmc.h> ··· 66 65 struct gpio_desc *power_gpio; 67 66 bool ddr_signaling; 68 67 bool pad_calib_required; 68 + 69 + struct reset_control *rst; 69 70 }; 70 71 71 72 static u16 tegra_sdhci_readw(struct sdhci_host *host, int reg) ··· 434 431 .pdata = &sdhci_tegra210_pdata, 435 432 }; 436 433 434 + static const struct sdhci_pltfm_data sdhci_tegra186_pdata = { 435 + .quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL | 436 + SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK | 437 + SDHCI_QUIRK_SINGLE_POWER_WRITE | 438 + SDHCI_QUIRK_NO_HISPD_BIT | 439 + SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC | 440 + SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN, 441 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 442 + .ops = &tegra114_sdhci_ops, 443 + }; 444 + 445 + static const struct sdhci_tegra_soc_data soc_data_tegra186 = { 446 + .pdata = &sdhci_tegra186_pdata, 447 + }; 448 + 437 449 static const struct of_device_id sdhci_tegra_dt_match[] = { 450 + { .compatible = "nvidia,tegra186-sdhci", .data = &soc_data_tegra186 }, 438 451 { .compatible = "nvidia,tegra210-sdhci", .data = &soc_data_tegra210 }, 439 452 { .compatible = "nvidia,tegra124-sdhci", .data = &soc_data_tegra124 }, 440 453 { .compatible = "nvidia,tegra114-sdhci", .data = &soc_data_tegra114 }, ··· 508 489 clk_prepare_enable(clk); 509 490 pltfm_host->clk = clk; 510 491 492 + tegra_host->rst = devm_reset_control_get(&pdev->dev, "sdhci"); 493 + if (IS_ERR(tegra_host->rst)) { 494 + rc = PTR_ERR(tegra_host->rst); 495 + dev_err(&pdev->dev, "failed to get reset control: %d\n", rc); 496 + goto err_rst_get; 497 + } 498 + 499 + rc = reset_control_assert(tegra_host->rst); 500 + if (rc) 501 + goto err_rst_get; 502 + 503 + usleep_range(2000, 4000); 504 + 505 + rc = reset_control_deassert(tegra_host->rst); 506 + if (rc) 507 + goto err_rst_get; 508 + 509 + 
usleep_range(2000, 4000); 510 + 511 511 rc = sdhci_add_host(host); 512 512 if (rc) 513 513 goto err_add_host; ··· 534 496 return 0; 535 497 536 498 err_add_host: 499 + reset_control_assert(tegra_host->rst); 500 + err_rst_get: 537 501 clk_disable_unprepare(pltfm_host->clk); 538 502 err_clk_get: 539 503 err_power_req: 540 504 err_parse_dt: 541 505 sdhci_pltfm_free(pdev); 542 506 return rc; 507 + } 508 + 509 + static int sdhci_tegra_remove(struct platform_device *pdev) 510 + { 511 + struct sdhci_host *host = platform_get_drvdata(pdev); 512 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 513 + struct sdhci_tegra *tegra_host = sdhci_pltfm_priv(pltfm_host); 514 + 515 + sdhci_remove_host(host, 0); 516 + 517 + reset_control_assert(tegra_host->rst); 518 + usleep_range(2000, 4000); 519 + clk_disable_unprepare(pltfm_host->clk); 520 + 521 + sdhci_pltfm_free(pdev); 522 + 523 + return 0; 543 524 } 544 525 545 526 static struct platform_driver sdhci_tegra_driver = { ··· 568 511 .pm = &sdhci_pltfm_pmops, 569 512 }, 570 513 .probe = sdhci_tegra_probe, 571 - .remove = sdhci_pltfm_unregister, 514 + .remove = sdhci_tegra_remove, 572 515 }; 573 516 574 517 module_platform_driver(sdhci_tegra_driver);
+837
drivers/mmc/host/sdhci-xenon-phy.c
··· 1 + /* 2 + * PHY support for Xenon SDHC 3 + * 4 + * Copyright (C) 2016 Marvell, All Rights Reserved. 5 + * 6 + * Author: Hu Ziji <huziji@marvell.com> 7 + * Date: 2016-8-24 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License as 11 + * published by the Free Software Foundation version 2. 12 + */ 13 + 14 + #include <linux/slab.h> 15 + #include <linux/delay.h> 16 + #include <linux/ktime.h> 17 + #include <linux/of_address.h> 18 + 19 + #include "sdhci-pltfm.h" 20 + #include "sdhci-xenon.h" 21 + 22 + /* Register base for eMMC PHY 5.0 Version */ 23 + #define XENON_EMMC_5_0_PHY_REG_BASE 0x0160 24 + /* Register base for eMMC PHY 5.1 Version */ 25 + #define XENON_EMMC_PHY_REG_BASE 0x0170 26 + 27 + #define XENON_EMMC_PHY_TIMING_ADJUST XENON_EMMC_PHY_REG_BASE 28 + #define XENON_EMMC_5_0_PHY_TIMING_ADJUST XENON_EMMC_5_0_PHY_REG_BASE 29 + #define XENON_TIMING_ADJUST_SLOW_MODE BIT(29) 30 + #define XENON_TIMING_ADJUST_SDIO_MODE BIT(28) 31 + #define XENON_SAMPL_INV_QSP_PHASE_SELECT BIT(18) 32 + #define XENON_SAMPL_INV_QSP_PHASE_SELECT_SHIFT 18 33 + #define XENON_PHY_INITIALIZAION BIT(31) 34 + #define XENON_WAIT_CYCLE_BEFORE_USING_MASK 0xF 35 + #define XENON_WAIT_CYCLE_BEFORE_USING_SHIFT 12 36 + #define XENON_FC_SYNC_EN_DURATION_MASK 0xF 37 + #define XENON_FC_SYNC_EN_DURATION_SHIFT 8 38 + #define XENON_FC_SYNC_RST_EN_DURATION_MASK 0xF 39 + #define XENON_FC_SYNC_RST_EN_DURATION_SHIFT 4 40 + #define XENON_FC_SYNC_RST_DURATION_MASK 0xF 41 + #define XENON_FC_SYNC_RST_DURATION_SHIFT 0 42 + 43 + #define XENON_EMMC_PHY_FUNC_CONTROL (XENON_EMMC_PHY_REG_BASE + 0x4) 44 + #define XENON_EMMC_5_0_PHY_FUNC_CONTROL \ 45 + (XENON_EMMC_5_0_PHY_REG_BASE + 0x4) 46 + #define XENON_ASYNC_DDRMODE_MASK BIT(23) 47 + #define XENON_ASYNC_DDRMODE_SHIFT 23 48 + #define XENON_CMD_DDR_MODE BIT(16) 49 + #define XENON_DQ_DDR_MODE_SHIFT 8 50 + #define XENON_DQ_DDR_MODE_MASK 0xFF 51 + #define XENON_DQ_ASYNC_MODE BIT(4) 52 + 
53 + #define XENON_EMMC_PHY_PAD_CONTROL (XENON_EMMC_PHY_REG_BASE + 0x8) 54 + #define XENON_EMMC_5_0_PHY_PAD_CONTROL \ 55 + (XENON_EMMC_5_0_PHY_REG_BASE + 0x8) 56 + #define XENON_REC_EN_SHIFT 24 57 + #define XENON_REC_EN_MASK 0xF 58 + #define XENON_FC_DQ_RECEN BIT(24) 59 + #define XENON_FC_CMD_RECEN BIT(25) 60 + #define XENON_FC_QSP_RECEN BIT(26) 61 + #define XENON_FC_QSN_RECEN BIT(27) 62 + #define XENON_OEN_QSN BIT(28) 63 + #define XENON_AUTO_RECEN_CTRL BIT(30) 64 + #define XENON_FC_ALL_CMOS_RECEIVER 0xF000 65 + 66 + #define XENON_EMMC5_FC_QSP_PD BIT(18) 67 + #define XENON_EMMC5_FC_QSP_PU BIT(22) 68 + #define XENON_EMMC5_FC_CMD_PD BIT(17) 69 + #define XENON_EMMC5_FC_CMD_PU BIT(21) 70 + #define XENON_EMMC5_FC_DQ_PD BIT(16) 71 + #define XENON_EMMC5_FC_DQ_PU BIT(20) 72 + 73 + #define XENON_EMMC_PHY_PAD_CONTROL1 (XENON_EMMC_PHY_REG_BASE + 0xC) 74 + #define XENON_EMMC5_1_FC_QSP_PD BIT(9) 75 + #define XENON_EMMC5_1_FC_QSP_PU BIT(25) 76 + #define XENON_EMMC5_1_FC_CMD_PD BIT(8) 77 + #define XENON_EMMC5_1_FC_CMD_PU BIT(24) 78 + #define XENON_EMMC5_1_FC_DQ_PD 0xFF 79 + #define XENON_EMMC5_1_FC_DQ_PU (0xFF << 16) 80 + 81 + #define XENON_EMMC_PHY_PAD_CONTROL2 (XENON_EMMC_PHY_REG_BASE + 0x10) 82 + #define XENON_EMMC_5_0_PHY_PAD_CONTROL2 \ 83 + (XENON_EMMC_5_0_PHY_REG_BASE + 0xC) 84 + #define XENON_ZNR_MASK 0x1F 85 + #define XENON_ZNR_SHIFT 8 86 + #define XENON_ZPR_MASK 0x1F 87 + /* Preferred ZNR and ZPR value vary between different boards. 88 + * The specific ZNR and ZPR value should be defined here 89 + * according to board actual timing. 
90 + */ 91 + #define XENON_ZNR_DEF_VALUE 0xF 92 + #define XENON_ZPR_DEF_VALUE 0xF 93 + 94 + #define XENON_EMMC_PHY_DLL_CONTROL (XENON_EMMC_PHY_REG_BASE + 0x14) 95 + #define XENON_EMMC_5_0_PHY_DLL_CONTROL \ 96 + (XENON_EMMC_5_0_PHY_REG_BASE + 0x10) 97 + #define XENON_DLL_ENABLE BIT(31) 98 + #define XENON_DLL_UPDATE_STROBE_5_0 BIT(30) 99 + #define XENON_DLL_REFCLK_SEL BIT(30) 100 + #define XENON_DLL_UPDATE BIT(23) 101 + #define XENON_DLL_PHSEL1_SHIFT 24 102 + #define XENON_DLL_PHSEL0_SHIFT 16 103 + #define XENON_DLL_PHASE_MASK 0x3F 104 + #define XENON_DLL_PHASE_90_DEGREE 0x1F 105 + #define XENON_DLL_FAST_LOCK BIT(5) 106 + #define XENON_DLL_GAIN2X BIT(3) 107 + #define XENON_DLL_BYPASS_EN BIT(0) 108 + 109 + #define XENON_EMMC_5_0_PHY_LOGIC_TIMING_ADJUST \ 110 + (XENON_EMMC_5_0_PHY_REG_BASE + 0x14) 111 + #define XENON_EMMC_5_0_PHY_LOGIC_TIMING_VALUE 0x5A54 112 + #define XENON_EMMC_PHY_LOGIC_TIMING_ADJUST (XENON_EMMC_PHY_REG_BASE + 0x18) 113 + #define XENON_LOGIC_TIMING_VALUE 0x00AA8977 114 + 115 + /* 116 + * List offset of PHY registers and some special register values 117 + * in eMMC PHY 5.0 or eMMC PHY 5.1 118 + */ 119 + struct xenon_emmc_phy_regs { 120 + /* Offset of Timing Adjust register */ 121 + u16 timing_adj; 122 + /* Offset of Func Control register */ 123 + u16 func_ctrl; 124 + /* Offset of Pad Control register */ 125 + u16 pad_ctrl; 126 + /* Offset of Pad Control register 2 */ 127 + u16 pad_ctrl2; 128 + /* Offset of DLL Control register */ 129 + u16 dll_ctrl; 130 + /* Offset of Logic Timing Adjust register */ 131 + u16 logic_timing_adj; 132 + /* DLL Update Enable bit */ 133 + u32 dll_update; 134 + /* value in Logic Timing Adjustment register */ 135 + u32 logic_timing_val; 136 + }; 137 + 138 + static const char * const phy_types[] = { 139 + "emmc 5.0 phy", 140 + "emmc 5.1 phy" 141 + }; 142 + 143 + enum xenon_phy_type_enum { 144 + EMMC_5_0_PHY, 145 + EMMC_5_1_PHY, 146 + NR_PHY_TYPES 147 + }; 148 + 149 + enum soc_pad_ctrl_type { 150 + SOC_PAD_SD, 151 + 
SOC_PAD_FIXED_1_8V, 152 + }; 153 + 154 + struct soc_pad_ctrl { 155 + /* Register address of SoC PHY PAD ctrl */ 156 + void __iomem *reg; 157 + /* SoC PHY PAD ctrl type */ 158 + enum soc_pad_ctrl_type pad_type; 159 + /* SoC specific operation to set SoC PHY PAD */ 160 + void (*set_soc_pad)(struct sdhci_host *host, 161 + unsigned char signal_voltage); 162 + }; 163 + 164 + static struct xenon_emmc_phy_regs xenon_emmc_5_0_phy_regs = { 165 + .timing_adj = XENON_EMMC_5_0_PHY_TIMING_ADJUST, 166 + .func_ctrl = XENON_EMMC_5_0_PHY_FUNC_CONTROL, 167 + .pad_ctrl = XENON_EMMC_5_0_PHY_PAD_CONTROL, 168 + .pad_ctrl2 = XENON_EMMC_5_0_PHY_PAD_CONTROL2, 169 + .dll_ctrl = XENON_EMMC_5_0_PHY_DLL_CONTROL, 170 + .logic_timing_adj = XENON_EMMC_5_0_PHY_LOGIC_TIMING_ADJUST, 171 + .dll_update = XENON_DLL_UPDATE_STROBE_5_0, 172 + .logic_timing_val = XENON_EMMC_5_0_PHY_LOGIC_TIMING_VALUE, 173 + }; 174 + 175 + static struct xenon_emmc_phy_regs xenon_emmc_5_1_phy_regs = { 176 + .timing_adj = XENON_EMMC_PHY_TIMING_ADJUST, 177 + .func_ctrl = XENON_EMMC_PHY_FUNC_CONTROL, 178 + .pad_ctrl = XENON_EMMC_PHY_PAD_CONTROL, 179 + .pad_ctrl2 = XENON_EMMC_PHY_PAD_CONTROL2, 180 + .dll_ctrl = XENON_EMMC_PHY_DLL_CONTROL, 181 + .logic_timing_adj = XENON_EMMC_PHY_LOGIC_TIMING_ADJUST, 182 + .dll_update = XENON_DLL_UPDATE, 183 + .logic_timing_val = XENON_LOGIC_TIMING_VALUE, 184 + }; 185 + 186 + /* 187 + * eMMC PHY configuration and operations 188 + */ 189 + struct xenon_emmc_phy_params { 190 + bool slow_mode; 191 + 192 + u8 znr; 193 + u8 zpr; 194 + 195 + /* Nr of consecutive Sampling Points of a Valid Sampling Window */ 196 + u8 nr_tun_times; 197 + /* Divider for calculating Tuning Step */ 198 + u8 tun_step_divider; 199 + 200 + struct soc_pad_ctrl pad_ctrl; 201 + }; 202 + 203 + static int xenon_alloc_emmc_phy(struct sdhci_host *host) 204 + { 205 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 206 + struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host); 207 + struct xenon_emmc_phy_params *params; 208 + 
209 + params = devm_kzalloc(mmc_dev(host->mmc), sizeof(*params), GFP_KERNEL); 210 + if (!params) 211 + return -ENOMEM; 212 + 213 + priv->phy_params = params; 214 + if (priv->phy_type == EMMC_5_0_PHY) 215 + priv->emmc_phy_regs = &xenon_emmc_5_0_phy_regs; 216 + else 217 + priv->emmc_phy_regs = &xenon_emmc_5_1_phy_regs; 218 + 219 + return 0; 220 + } 221 + 222 + /* 223 + * eMMC 5.0/5.1 PHY init/re-init. 224 + * eMMC PHY init should be executed after: 225 + * 1. SDCLK frequency changes. 226 + * 2. SDCLK is stopped and re-enabled. 227 + * 3. config in emmc_phy_regs->timing_adj and emmc_phy_regs->func_ctrl 228 + * are changed 229 + */ 230 + static int xenon_emmc_phy_init(struct sdhci_host *host) 231 + { 232 + u32 reg; 233 + u32 wait, clock; 234 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 235 + struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host); 236 + struct xenon_emmc_phy_regs *phy_regs = priv->emmc_phy_regs; 237 + 238 + reg = sdhci_readl(host, phy_regs->timing_adj); 239 + reg |= XENON_PHY_INITIALIZAION; 240 + sdhci_writel(host, reg, phy_regs->timing_adj); 241 + 242 + /* Add duration of FC_SYNC_RST */ 243 + wait = ((reg >> XENON_FC_SYNC_RST_DURATION_SHIFT) & 244 + XENON_FC_SYNC_RST_DURATION_MASK); 245 + /* Add interval between FC_SYNC_EN and FC_SYNC_RST */ 246 + wait += ((reg >> XENON_FC_SYNC_RST_EN_DURATION_SHIFT) & 247 + XENON_FC_SYNC_RST_EN_DURATION_MASK); 248 + /* Add duration of asserting FC_SYNC_EN */ 249 + wait += ((reg >> XENON_FC_SYNC_EN_DURATION_SHIFT) & 250 + XENON_FC_SYNC_EN_DURATION_MASK); 251 + /* Add duration of waiting for PHY */ 252 + wait += ((reg >> XENON_WAIT_CYCLE_BEFORE_USING_SHIFT) & 253 + XENON_WAIT_CYCLE_BEFORE_USING_MASK); 254 + /* 4 additional bus clock and 4 AXI bus clock are required */ 255 + wait += 8; 256 + wait <<= 20; 257 + 258 + clock = host->clock; 259 + if (!clock) 260 + /* Use the possibly slowest bus frequency value */ 261 + clock = XENON_LOWEST_SDCLK_FREQ; 262 + /* get the wait time */ 263 + wait /= clock; 264 + 
wait++; 265 + /* wait for host eMMC PHY init completes */ 266 + udelay(wait); 267 + 268 + reg = sdhci_readl(host, phy_regs->timing_adj); 269 + reg &= XENON_PHY_INITIALIZAION; 270 + if (reg) { 271 + dev_err(mmc_dev(host->mmc), "eMMC PHY init cannot complete after %d us\n", 272 + wait); 273 + return -ETIMEDOUT; 274 + } 275 + 276 + return 0; 277 + } 278 + 279 + #define ARMADA_3700_SOC_PAD_1_8V 0x1 280 + #define ARMADA_3700_SOC_PAD_3_3V 0x0 281 + 282 + static void armada_3700_soc_pad_voltage_set(struct sdhci_host *host, 283 + unsigned char signal_voltage) 284 + { 285 + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 286 + struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host); 287 + struct xenon_emmc_phy_params *params = priv->phy_params; 288 + 289 + if (params->pad_ctrl.pad_type == SOC_PAD_FIXED_1_8V) { 290 + writel(ARMADA_3700_SOC_PAD_1_8V, params->pad_ctrl.reg); 291 + } else if (params->pad_ctrl.pad_type == SOC_PAD_SD) { 292 + if (signal_voltage == MMC_SIGNAL_VOLTAGE_180) 293 + writel(ARMADA_3700_SOC_PAD_1_8V, params->pad_ctrl.reg); 294 + else if (signal_voltage == MMC_SIGNAL_VOLTAGE_330) 295 + writel(ARMADA_3700_SOC_PAD_3_3V, params->pad_ctrl.reg); 296 + } 297 + } 298 + 299 + /* 300 + * Set SoC PHY voltage PAD control register, 301 + * according to the operation voltage on PAD. 302 + * The detailed operation depends on SoC implementation. 
 */
static void xenon_emmc_phy_set_soc_pad(struct sdhci_host *host,
				       unsigned char signal_voltage)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	struct xenon_emmc_phy_params *params = priv->phy_params;

	if (!params->pad_ctrl.reg)
		return;

	if (params->pad_ctrl.set_soc_pad)
		params->pad_ctrl.set_soc_pad(host, signal_voltage);
}

/*
 * Enable eMMC PHY HW DLL
 * DLL should be enabled and stable before HS200/SDR104 tuning,
 * and before HS400 data strobe setting.
 */
static int xenon_emmc_phy_enable_dll(struct sdhci_host *host)
{
	u32 reg;
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	struct xenon_emmc_phy_regs *phy_regs = priv->emmc_phy_regs;
	ktime_t timeout;

	if (WARN_ON(host->clock <= MMC_HIGH_52_MAX_DTR))
		return -EINVAL;

	reg = sdhci_readl(host, phy_regs->dll_ctrl);
	if (reg & XENON_DLL_ENABLE)
		return 0;

	/* Enable DLL */
	reg = sdhci_readl(host, phy_regs->dll_ctrl);
	reg |= (XENON_DLL_ENABLE | XENON_DLL_FAST_LOCK);

	/*
	 * Set Phase as 90 degree, which is most common value.
	 * Might set another value if necessary.
	 * The granularity is 1 degree.
	 */
	reg &= ~((XENON_DLL_PHASE_MASK << XENON_DLL_PHSEL0_SHIFT) |
		 (XENON_DLL_PHASE_MASK << XENON_DLL_PHSEL1_SHIFT));
	reg |= ((XENON_DLL_PHASE_90_DEGREE << XENON_DLL_PHSEL0_SHIFT) |
		(XENON_DLL_PHASE_90_DEGREE << XENON_DLL_PHSEL1_SHIFT));

	reg &= ~XENON_DLL_BYPASS_EN;
	reg |= phy_regs->dll_update;
	if (priv->phy_type == EMMC_5_1_PHY)
		reg &= ~XENON_DLL_REFCLK_SEL;
	sdhci_writel(host, reg, phy_regs->dll_ctrl);

	/* Wait max 32 ms */
	timeout = ktime_add_ms(ktime_get(), 32);
	while (!(sdhci_readw(host, XENON_SLOT_EXT_PRESENT_STATE) &
		XENON_DLL_LOCK_STATE)) {
		if (ktime_after(ktime_get(), timeout)) {
			dev_err(mmc_dev(host->mmc), "Wait for DLL Lock time-out\n");
			return -ETIMEDOUT;
		}
		udelay(100);
	}
	return 0;
}

/*
 * Config to eMMC PHY to prepare for tuning.
 * Enable HW DLL and set the TUNING_STEP
 */
static int xenon_emmc_phy_config_tuning(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	struct xenon_emmc_phy_params *params = priv->phy_params;
	u32 reg, tuning_step;
	int ret;

	if (host->clock <= MMC_HIGH_52_MAX_DTR)
		return -EINVAL;

	ret = xenon_emmc_phy_enable_dll(host);
	if (ret)
		return ret;

	/* Achieve TUNING_STEP with HW DLL help */
	reg = sdhci_readl(host, XENON_SLOT_DLL_CUR_DLY_VAL);
	tuning_step = reg / params->tun_step_divider;
	if (unlikely(tuning_step > XENON_TUNING_STEP_MASK)) {
		dev_warn(mmc_dev(host->mmc),
			 "HS200 TUNING_STEP %d is larger than MAX value\n",
			 tuning_step);
		tuning_step = XENON_TUNING_STEP_MASK;
	}

	/* Set TUNING_STEP for later tuning */
	reg = sdhci_readl(host, XENON_SLOT_OP_STATUS_CTRL);
	reg &= ~(XENON_TUN_CONSECUTIVE_TIMES_MASK <<
		 XENON_TUN_CONSECUTIVE_TIMES_SHIFT);
	reg |= (params->nr_tun_times << XENON_TUN_CONSECUTIVE_TIMES_SHIFT);
	reg &= ~(XENON_TUNING_STEP_MASK << XENON_TUNING_STEP_SHIFT);
	reg |= (tuning_step << XENON_TUNING_STEP_SHIFT);
	sdhci_writel(host, reg, XENON_SLOT_OP_STATUS_CTRL);

	return 0;
}

static void xenon_emmc_phy_disable_data_strobe(struct sdhci_host *host)
{
	u32 reg;

	/* Disable SDHC Data Strobe */
	reg = sdhci_readl(host, XENON_SLOT_EMMC_CTRL);
	reg &= ~XENON_ENABLE_DATA_STROBE;
	sdhci_writel(host, reg, XENON_SLOT_EMMC_CTRL);
}

/* Set HS400 Data Strobe */
static void xenon_emmc_phy_strobe_delay_adj(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	u32 reg;

	if (WARN_ON(host->timing != MMC_TIMING_MMC_HS400))
		return;

	if (host->clock <= MMC_HIGH_52_MAX_DTR)
		return;

	dev_dbg(mmc_dev(host->mmc), "starts HS400 strobe delay adjustment\n");

	xenon_emmc_phy_enable_dll(host);

	/* Enable SDHC Data Strobe */
	reg = sdhci_readl(host, XENON_SLOT_EMMC_CTRL);
	reg |= XENON_ENABLE_DATA_STROBE;
	sdhci_writel(host, reg, XENON_SLOT_EMMC_CTRL);

	/* Set Data Strobe Pull down */
	if (priv->phy_type == EMMC_5_0_PHY) {
		reg = sdhci_readl(host, XENON_EMMC_5_0_PHY_PAD_CONTROL);
		reg |= XENON_EMMC5_FC_QSP_PD;
		reg &= ~XENON_EMMC5_FC_QSP_PU;
		sdhci_writel(host, reg, XENON_EMMC_5_0_PHY_PAD_CONTROL);
	} else {
		reg = sdhci_readl(host, XENON_EMMC_PHY_PAD_CONTROL1);
		reg |= XENON_EMMC5_1_FC_QSP_PD;
		reg &= ~XENON_EMMC5_1_FC_QSP_PU;
		sdhci_writel(host, reg, XENON_EMMC_PHY_PAD_CONTROL1);
	}
}

/*
 * If eMMC PHY Slow Mode is required in lower speed mode (SDCLK < 55MHz)
 * in SDR mode, enable Slow Mode to bypass eMMC PHY.
 * SDIO slower SDR mode also requires Slow Mode.
 *
 * If Slow Mode is enabled, return true.
 * Otherwise, return false.
 */
static bool xenon_emmc_phy_slow_mode(struct sdhci_host *host,
				     unsigned char timing)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	struct xenon_emmc_phy_params *params = priv->phy_params;
	struct xenon_emmc_phy_regs *phy_regs = priv->emmc_phy_regs;
	u32 reg;
	int ret;

	if (host->clock > MMC_HIGH_52_MAX_DTR)
		return false;

	reg = sdhci_readl(host, phy_regs->timing_adj);
	/*
	 * When in slower SDR mode, enable Slow Mode for SDIO
	 * or when Slow Mode flag is set
	 */
	switch (timing) {
	case MMC_TIMING_LEGACY:
		/*
		 * If Slow Mode is required, enable Slow Mode by default
		 * in early init phase to avoid any potential issue.
		 */
		if (params->slow_mode) {
			reg |= XENON_TIMING_ADJUST_SLOW_MODE;
			ret = true;
		} else {
			reg &= ~XENON_TIMING_ADJUST_SLOW_MODE;
			ret = false;
		}
		break;
	case MMC_TIMING_UHS_SDR25:
	case MMC_TIMING_UHS_SDR12:
	case MMC_TIMING_SD_HS:
	case MMC_TIMING_MMC_HS:
		if ((priv->init_card_type == MMC_TYPE_SDIO) ||
		    params->slow_mode) {
			reg |= XENON_TIMING_ADJUST_SLOW_MODE;
			ret = true;
			break;
		}
	default:
		reg &= ~XENON_TIMING_ADJUST_SLOW_MODE;
		ret = false;
	}

	sdhci_writel(host, reg, phy_regs->timing_adj);
	return ret;
}

/*
 * Set-up eMMC 5.0/5.1 PHY.
 * Specific configuration depends on the current speed mode in use.
 */
static void xenon_emmc_phy_set(struct sdhci_host *host,
			       unsigned char timing)
{
	u32 reg;
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	struct xenon_emmc_phy_params *params = priv->phy_params;
	struct xenon_emmc_phy_regs *phy_regs = priv->emmc_phy_regs;

	dev_dbg(mmc_dev(host->mmc), "eMMC PHY setting starts\n");

	/* Setup pad, set bit[28] and bits[26:24] */
	reg = sdhci_readl(host, phy_regs->pad_ctrl);
	reg |= (XENON_FC_DQ_RECEN | XENON_FC_CMD_RECEN |
		XENON_FC_QSP_RECEN | XENON_OEN_QSN);
	/* All FC_XX_RECEIVCE should be set as CMOS Type */
	reg |= XENON_FC_ALL_CMOS_RECEIVER;
	sdhci_writel(host, reg, phy_regs->pad_ctrl);

	/* Set CMD and DQ Pull Up */
	if (priv->phy_type == EMMC_5_0_PHY) {
		reg = sdhci_readl(host, XENON_EMMC_5_0_PHY_PAD_CONTROL);
		reg |= (XENON_EMMC5_FC_CMD_PU | XENON_EMMC5_FC_DQ_PU);
		reg &= ~(XENON_EMMC5_FC_CMD_PD | XENON_EMMC5_FC_DQ_PD);
		sdhci_writel(host, reg, XENON_EMMC_5_0_PHY_PAD_CONTROL);
	} else {
		reg = sdhci_readl(host, XENON_EMMC_PHY_PAD_CONTROL1);
		reg |= (XENON_EMMC5_1_FC_CMD_PU | XENON_EMMC5_1_FC_DQ_PU);
		reg &= ~(XENON_EMMC5_1_FC_CMD_PD | XENON_EMMC5_1_FC_DQ_PD);
		sdhci_writel(host, reg, XENON_EMMC_PHY_PAD_CONTROL1);
	}

	if (timing == MMC_TIMING_LEGACY) {
		xenon_emmc_phy_slow_mode(host, timing);
		goto phy_init;
	}

	/*
	 * If SDIO card, set SDIO Mode
	 * Otherwise, clear SDIO Mode
	 */
	reg = sdhci_readl(host, phy_regs->timing_adj);
	if (priv->init_card_type == MMC_TYPE_SDIO)
		reg |= XENON_TIMING_ADJUST_SDIO_MODE;
	else
		reg &= ~XENON_TIMING_ADJUST_SDIO_MODE;
	sdhci_writel(host, reg, phy_regs->timing_adj);

	if (xenon_emmc_phy_slow_mode(host, timing))
		goto phy_init;

	/*
	 * Set preferred ZNR and ZPR value
	 * The ZNR and ZPR value vary between different boards.
	 * Define them both in sdhci-xenon-emmc-phy.h.
	 */
	reg = sdhci_readl(host, phy_regs->pad_ctrl2);
	reg &= ~((XENON_ZNR_MASK << XENON_ZNR_SHIFT) | XENON_ZPR_MASK);
	reg |= ((params->znr << XENON_ZNR_SHIFT) | params->zpr);
	sdhci_writel(host, reg, phy_regs->pad_ctrl2);

	/*
	 * When setting EMMC_PHY_FUNC_CONTROL register,
	 * SD clock should be disabled
	 */
	reg = sdhci_readl(host, SDHCI_CLOCK_CONTROL);
	reg &= ~SDHCI_CLOCK_CARD_EN;
	sdhci_writew(host, reg, SDHCI_CLOCK_CONTROL);

	reg = sdhci_readl(host, phy_regs->func_ctrl);
	switch (timing) {
	case MMC_TIMING_MMC_HS400:
		reg |= (XENON_DQ_DDR_MODE_MASK << XENON_DQ_DDR_MODE_SHIFT) |
		       XENON_CMD_DDR_MODE;
		reg &= ~XENON_DQ_ASYNC_MODE;
		break;
	case MMC_TIMING_UHS_DDR50:
	case MMC_TIMING_MMC_DDR52:
		reg |= (XENON_DQ_DDR_MODE_MASK << XENON_DQ_DDR_MODE_SHIFT) |
		       XENON_CMD_DDR_MODE | XENON_DQ_ASYNC_MODE;
		break;
	default:
		reg &= ~((XENON_DQ_DDR_MODE_MASK << XENON_DQ_DDR_MODE_SHIFT) |
			 XENON_CMD_DDR_MODE);
		reg |= XENON_DQ_ASYNC_MODE;
	}
	sdhci_writel(host, reg, phy_regs->func_ctrl);

	/* Enable bus clock */
	reg = sdhci_readl(host, SDHCI_CLOCK_CONTROL);
	reg |= SDHCI_CLOCK_CARD_EN;
	sdhci_writew(host, reg, SDHCI_CLOCK_CONTROL);

	if (timing == MMC_TIMING_MMC_HS400)
		/* Hardware team recommend a value for HS400 */
		sdhci_writel(host, phy_regs->logic_timing_val,
			     phy_regs->logic_timing_adj);
	else
		xenon_emmc_phy_disable_data_strobe(host);

phy_init:
	xenon_emmc_phy_init(host);

	dev_dbg(mmc_dev(host->mmc), "eMMC PHY setting completes\n");
}

static int get_dt_pad_ctrl_data(struct sdhci_host *host,
				struct device_node *np,
				struct xenon_emmc_phy_params *params)
{
	int ret = 0;
	const char *name;
	struct resource iomem;

	if (of_device_is_compatible(np, "marvell,armada-3700-sdhci"))
		params->pad_ctrl.set_soc_pad = armada_3700_soc_pad_voltage_set;
	else
		return 0;

	if (of_address_to_resource(np, 1, &iomem)) {
		dev_err(mmc_dev(host->mmc), "Unable to find SoC PAD ctrl register address for %s\n",
			np->name);
		return -EINVAL;
	}

	params->pad_ctrl.reg = devm_ioremap_resource(mmc_dev(host->mmc),
						     &iomem);
	if (IS_ERR(params->pad_ctrl.reg))
		return PTR_ERR(params->pad_ctrl.reg);

	ret = of_property_read_string(np, "marvell,pad-type", &name);
	if (ret) {
		dev_err(mmc_dev(host->mmc), "Unable to determine SoC PHY PAD ctrl type\n");
		return ret;
	}
	if (!strcmp(name, "sd")) {
		params->pad_ctrl.pad_type = SOC_PAD_SD;
	} else if (!strcmp(name, "fixed-1-8v")) {
		params->pad_ctrl.pad_type = SOC_PAD_FIXED_1_8V;
	} else {
		dev_err(mmc_dev(host->mmc), "Unsupported SoC PHY PAD ctrl type %s\n",
			name);
		return -EINVAL;
	}

	return ret;
}

static int xenon_emmc_phy_parse_param_dt(struct sdhci_host *host,
					 struct device_node *np,
					 struct xenon_emmc_phy_params *params)
{
	u32 value;

	params->slow_mode = false;
	if (of_property_read_bool(np, "marvell,xenon-phy-slow-mode"))
		params->slow_mode = true;

	params->znr = XENON_ZNR_DEF_VALUE;
	if (!of_property_read_u32(np, "marvell,xenon-phy-znr", &value))
		params->znr = value & XENON_ZNR_MASK;

	params->zpr = XENON_ZPR_DEF_VALUE;
	if (!of_property_read_u32(np, "marvell,xenon-phy-zpr", &value))
		params->zpr = value & XENON_ZPR_MASK;

	params->nr_tun_times = XENON_TUN_CONSECUTIVE_TIMES;
	if (!of_property_read_u32(np, "marvell,xenon-phy-nr-success-tun",
				  &value))
		params->nr_tun_times = value & XENON_TUN_CONSECUTIVE_TIMES_MASK;

	params->tun_step_divider = XENON_TUNING_STEP_DIVIDER;
	if (!of_property_read_u32(np, "marvell,xenon-phy-tun-step-divider",
				  &value))
		params->tun_step_divider = value & 0xFF;

	return get_dt_pad_ctrl_data(host, np, params);
}

/* Set SoC PHY Voltage PAD */
void xenon_soc_pad_ctrl(struct sdhci_host *host,
			unsigned char signal_voltage)
{
	xenon_emmc_phy_set_soc_pad(host, signal_voltage);
}

/*
 * Setting PHY when card is working in High Speed Mode.
 * HS400 set data strobe line.
 * HS200/SDR104 set tuning config to prepare for tuning.
 */
static int xenon_hs_delay_adj(struct sdhci_host *host)
{
	int ret = 0;

	if (WARN_ON(host->clock <= XENON_DEFAULT_SDCLK_FREQ))
		return -EINVAL;

	switch (host->timing) {
	case MMC_TIMING_MMC_HS400:
		xenon_emmc_phy_strobe_delay_adj(host);
		return 0;
	case MMC_TIMING_MMC_HS200:
	case MMC_TIMING_UHS_SDR104:
		return xenon_emmc_phy_config_tuning(host);
	case MMC_TIMING_MMC_DDR52:
	case MMC_TIMING_UHS_DDR50:
		/*
		 * DDR Mode requires driver to scan Sampling Fixed Delay Line,
		 * to find out a perfect operation sampling point.
		 * It is hard to implement such a scan in host driver
		 * since initiating commands by host driver is not safe.
		 * Thus so far just keep PHY Sampling Fixed Delay in
		 * default value of DDR mode.
		 *
		 * If any timing issue occurs in DDR mode on Marvell products,
		 * please contact maintainer for internal support in Marvell.
		 */
		dev_warn_once(mmc_dev(host->mmc), "Timing issue might occur in DDR mode\n");
		return 0;
	}

	return ret;
}

/*
 * Adjust PHY setting.
 * PHY setting should be adjusted when SDCLK frequency, Bus Width
 * or Speed Mode is changed.
 * Additional config are required when card is working in High Speed mode,
 * after leaving Legacy Mode.
 */
int xenon_phy_adj(struct sdhci_host *host, struct mmc_ios *ios)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	int ret = 0;

	if (!host->clock) {
		priv->clock = 0;
		return 0;
	}

	/*
	 * The timing, frequency or bus width is changed,
	 * better to set eMMC PHY based on current setting
	 * and adjust Xenon SDHC delay.
	 */
	if ((host->clock == priv->clock) &&
	    (ios->bus_width == priv->bus_width) &&
	    (ios->timing == priv->timing))
		return 0;

	xenon_emmc_phy_set(host, ios->timing);

	/* Update the record */
	priv->bus_width = ios->bus_width;

	priv->timing = ios->timing;
	priv->clock = host->clock;

	/* Legacy mode is a special case */
	if (ios->timing == MMC_TIMING_LEGACY)
		return 0;

	if (host->clock > XENON_DEFAULT_SDCLK_FREQ)
		ret = xenon_hs_delay_adj(host);
	return ret;
}

void xenon_clean_phy(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);

	kfree(priv->phy_params);
}

static int xenon_add_phy(struct device_node *np, struct sdhci_host *host,
			 const char *phy_name)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	int i, ret;

	for (i = 0; i < NR_PHY_TYPES; i++) {
		if (!strcmp(phy_name, phy_types[i])) {
			priv->phy_type = i;
			break;
		}
	}
	if (i == NR_PHY_TYPES) {
		dev_err(mmc_dev(host->mmc),
			"Unable to determine PHY name %s. Use default eMMC 5.1 PHY\n",
			phy_name);
		priv->phy_type = EMMC_5_1_PHY;
	}

	ret = xenon_alloc_emmc_phy(host);
	if (ret)
		return ret;

	ret = xenon_emmc_phy_parse_param_dt(host, np, priv->phy_params);
	if (ret)
		xenon_clean_phy(host);

	return ret;
}

int xenon_phy_parse_dt(struct device_node *np, struct sdhci_host *host)
{
	const char *phy_type = NULL;

	if (!of_property_read_string(np, "marvell,xenon-phy-type", &phy_type))
		return xenon_add_phy(np, host, phy_type);

	return xenon_add_phy(np, host, "emmc 5.1 phy");
}
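The TUNING_STEP value programmed above is just a divide-and-clamp over the current DLL delay. A minimal standalone sketch of that arithmetic (hypothetical helper, not part of the driver; the MMIO reads are replaced by plain integers):

```c
#include <stdint.h>

/* 4-bit TUNING_STEP field, as defined in sdhci-xenon.h */
#define XENON_TUNING_STEP_MASK	0xF

/*
 * Model of the computation in xenon_emmc_phy_config_tuning():
 * divide the current DLL delay by the board-specific divider and
 * clamp the result to the width of the TUNING_STEP field.
 */
static uint32_t xenon_tuning_step(uint32_t dll_cur_dly, uint32_t divider)
{
	uint32_t step = dll_cur_dly / divider;

	return step > XENON_TUNING_STEP_MASK ? XENON_TUNING_STEP_MASK : step;
}
```

An oversized quotient saturates at 0xF rather than overflowing into neighbouring fields, which matches the `dev_warn` branch in the driver.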
+548
drivers/mmc/host/sdhci-xenon.c
/*
 * Driver for Marvell Xenon SDHC as a platform device
 *
 * Copyright (C) 2016 Marvell, All Rights Reserved.
 *
 * Author:	Hu Ziji <huziji@marvell.com>
 * Date:	2016-8-24
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation version 2.
 *
 * Inspired by Jisheng Zhang <jszhang@marvell.com>
 * Special thanks to Video BG4 project team.
 */

#include <linux/delay.h>
#include <linux/ktime.h>
#include <linux/module.h>
#include <linux/of.h>

#include "sdhci-pltfm.h"
#include "sdhci-xenon.h"

static int xenon_enable_internal_clk(struct sdhci_host *host)
{
	u32 reg;
	ktime_t timeout;

	reg = sdhci_readl(host, SDHCI_CLOCK_CONTROL);
	reg |= SDHCI_CLOCK_INT_EN;
	sdhci_writel(host, reg, SDHCI_CLOCK_CONTROL);
	/* Wait max 20 ms */
	timeout = ktime_add_ms(ktime_get(), 20);
	while (!((reg = sdhci_readw(host, SDHCI_CLOCK_CONTROL))
			& SDHCI_CLOCK_INT_STABLE)) {
		if (ktime_after(ktime_get(), timeout)) {
			dev_err(mmc_dev(host->mmc), "Internal clock never stabilised.\n");
			return -ETIMEDOUT;
		}
		usleep_range(900, 1100);
	}

	return 0;
}

/* Set SDCLK-off-while-idle */
static void xenon_set_sdclk_off_idle(struct sdhci_host *host,
				     unsigned char sdhc_id, bool enable)
{
	u32 reg;
	u32 mask;

	reg = sdhci_readl(host, XENON_SYS_OP_CTRL);
	/* Get the bit shift basing on the SDHC index */
	mask = (0x1 << (XENON_SDCLK_IDLEOFF_ENABLE_SHIFT + sdhc_id));
	if (enable)
		reg |= mask;
	else
		reg &= ~mask;

	sdhci_writel(host, reg, XENON_SYS_OP_CTRL);
}

/* Enable/Disable the Auto Clock Gating function */
static void xenon_set_acg(struct sdhci_host *host, bool enable)
{
	u32 reg;

	reg = sdhci_readl(host, XENON_SYS_OP_CTRL);
	if (enable)
		reg &= ~XENON_AUTO_CLKGATE_DISABLE_MASK;
	else
		reg |= XENON_AUTO_CLKGATE_DISABLE_MASK;
	sdhci_writel(host, reg, XENON_SYS_OP_CTRL);
}

/* Enable this SDHC */
static void xenon_enable_sdhc(struct sdhci_host *host,
			      unsigned char sdhc_id)
{
	u32 reg;

	reg = sdhci_readl(host, XENON_SYS_OP_CTRL);
	reg |= (BIT(sdhc_id) << XENON_SLOT_ENABLE_SHIFT);
	sdhci_writel(host, reg, XENON_SYS_OP_CTRL);

	host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY;
	/*
	 * Force to clear BUS_TEST to
	 * skip bus_test_pre and bus_test_post
	 */
	host->mmc->caps &= ~MMC_CAP_BUS_WIDTH_TEST;
}

/* Disable this SDHC */
static void xenon_disable_sdhc(struct sdhci_host *host,
			       unsigned char sdhc_id)
{
	u32 reg;

	reg = sdhci_readl(host, XENON_SYS_OP_CTRL);
	reg &= ~(BIT(sdhc_id) << XENON_SLOT_ENABLE_SHIFT);
	sdhci_writel(host, reg, XENON_SYS_OP_CTRL);
}

/* Enable Parallel Transfer Mode */
static void xenon_enable_sdhc_parallel_tran(struct sdhci_host *host,
					    unsigned char sdhc_id)
{
	u32 reg;

	reg = sdhci_readl(host, XENON_SYS_EXT_OP_CTRL);
	reg |= BIT(sdhc_id);
	sdhci_writel(host, reg, XENON_SYS_EXT_OP_CTRL);
}

/* Mask command conflict error */
static void xenon_mask_cmd_conflict_err(struct sdhci_host *host)
{
	u32 reg;

	reg = sdhci_readl(host, XENON_SYS_EXT_OP_CTRL);
	reg |= XENON_MASK_CMD_CONFLICT_ERR;
	sdhci_writel(host, reg, XENON_SYS_EXT_OP_CTRL);
}

static void xenon_retune_setup(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	u32 reg;

	/* Disable the Re-Tuning Request functionality */
	reg = sdhci_readl(host, XENON_SLOT_RETUNING_REQ_CTRL);
	reg &= ~XENON_RETUNING_COMPATIBLE;
	sdhci_writel(host, reg, XENON_SLOT_RETUNING_REQ_CTRL);

	/* Disable the Re-tuning Interrupt */
	reg = sdhci_readl(host, SDHCI_SIGNAL_ENABLE);
	reg &= ~SDHCI_INT_RETUNE;
	sdhci_writel(host, reg, SDHCI_SIGNAL_ENABLE);
	reg = sdhci_readl(host, SDHCI_INT_ENABLE);
	reg &= ~SDHCI_INT_RETUNE;
	sdhci_writel(host, reg, SDHCI_INT_ENABLE);

	/* Force to use Tuning Mode 1 */
	host->tuning_mode = SDHCI_TUNING_MODE_1;
	/* Set re-tuning period */
	host->tuning_count = 1 << (priv->tuning_count - 1);
}

/*
 * Operations inside struct sdhci_ops
 */
/* Recover the Register Setting cleared during SOFTWARE_RESET_ALL */
static void xenon_reset_exit(struct sdhci_host *host,
			     unsigned char sdhc_id, u8 mask)
{
	/* Only SOFTWARE RESET ALL will clear the register setting */
	if (!(mask & SDHCI_RESET_ALL))
		return;

	/* Disable tuning request and auto-retuning again */
	xenon_retune_setup(host);

	xenon_set_acg(host, true);

	xenon_set_sdclk_off_idle(host, sdhc_id, false);

	xenon_mask_cmd_conflict_err(host);
}

static void xenon_reset(struct sdhci_host *host, u8 mask)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);

	sdhci_reset(host, mask);
	xenon_reset_exit(host, priv->sdhc_id, mask);
}

/*
 * Xenon defines different values for HS200 and HS400
 * in Host_Control_2
 */
static void xenon_set_uhs_signaling(struct sdhci_host *host,
				    unsigned int timing)
{
	u16 ctrl_2;

	ctrl_2 = sdhci_readw(host, SDHCI_HOST_CONTROL2);
	/* Select Bus Speed Mode for host */
	ctrl_2 &= ~SDHCI_CTRL_UHS_MASK;
	if (timing == MMC_TIMING_MMC_HS200)
		ctrl_2 |= XENON_CTRL_HS200;
	else if (timing == MMC_TIMING_UHS_SDR104)
		ctrl_2 |= SDHCI_CTRL_UHS_SDR104;
	else if (timing == MMC_TIMING_UHS_SDR12)
		ctrl_2 |= SDHCI_CTRL_UHS_SDR12;
	else if (timing == MMC_TIMING_UHS_SDR25)
		ctrl_2 |= SDHCI_CTRL_UHS_SDR25;
	else if (timing == MMC_TIMING_UHS_SDR50)
		ctrl_2 |= SDHCI_CTRL_UHS_SDR50;
	else if ((timing == MMC_TIMING_UHS_DDR50) ||
		 (timing == MMC_TIMING_MMC_DDR52))
		ctrl_2 |= SDHCI_CTRL_UHS_DDR50;
	else if (timing == MMC_TIMING_MMC_HS400)
		ctrl_2 |= XENON_CTRL_HS400;
	sdhci_writew(host, ctrl_2, SDHCI_HOST_CONTROL2);
}

static const struct sdhci_ops sdhci_xenon_ops = {
	.set_clock		= sdhci_set_clock,
	.set_bus_width		= sdhci_set_bus_width,
	.reset			= xenon_reset,
	.set_uhs_signaling	= xenon_set_uhs_signaling,
	.get_max_clock		= sdhci_pltfm_clk_get_max_clock,
};

static const struct sdhci_pltfm_data sdhci_xenon_pdata = {
	.ops = &sdhci_xenon_ops,
	.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
		  SDHCI_QUIRK_NO_SIMULT_VDD_AND_POWER |
		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
};

/*
 * Xenon Specific Operations in mmc_host_ops
 */
static void xenon_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
{
	struct sdhci_host *host = mmc_priv(mmc);
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	u32 reg;

	/*
	 * HS400/HS200/eMMC HS doesn't have Preset Value register.
	 * However, sdhci_set_ios will read HS400/HS200 Preset register.
	 * Disable Preset Value register for HS400/HS200.
	 * eMMC HS with preset_enabled set will trigger a bug in
	 * get_preset_value().
	 */
	if ((ios->timing == MMC_TIMING_MMC_HS400) ||
	    (ios->timing == MMC_TIMING_MMC_HS200) ||
	    (ios->timing == MMC_TIMING_MMC_HS)) {
		host->preset_enabled = false;
		host->quirks2 |= SDHCI_QUIRK2_PRESET_VALUE_BROKEN;
		host->flags &= ~SDHCI_PV_ENABLED;

		reg = sdhci_readw(host, SDHCI_HOST_CONTROL2);
		reg &= ~SDHCI_CTRL_PRESET_VAL_ENABLE;
		sdhci_writew(host, reg, SDHCI_HOST_CONTROL2);
	} else {
		host->quirks2 &= ~SDHCI_QUIRK2_PRESET_VALUE_BROKEN;
	}

	sdhci_set_ios(mmc, ios);
	xenon_phy_adj(host, ios);

	if (host->clock > XENON_DEFAULT_SDCLK_FREQ)
		xenon_set_sdclk_off_idle(host, priv->sdhc_id, true);
}

static int xenon_start_signal_voltage_switch(struct mmc_host *mmc,
					     struct mmc_ios *ios)
{
	struct sdhci_host *host = mmc_priv(mmc);

	/*
	 * Before SD/SDIO set signal voltage, SD bus clock should be
	 * disabled. However, sdhci_set_clock will also disable the Internal
	 * clock in mmc_set_signal_voltage().
	 * If Internal clock is disabled, the 3.3V/1.8V bit can not be updated.
	 * Thus here manually enable internal clock.
	 *
	 * After switch completes, it is unnecessary to disable internal clock,
	 * since keeping internal clock active obeys SD spec.
	 */
	xenon_enable_internal_clk(host);

	xenon_soc_pad_ctrl(host, ios->signal_voltage);

	/*
	 * If Vqmmc is fixed on platform, vqmmc regulator should be unavailable.
	 * Thus SDHCI_CTRL_VDD_180 bit might not work then.
	 * Skip the standard voltage switch to avoid any issue.
	 */
	if (PTR_ERR(mmc->supply.vqmmc) == -ENODEV)
		return 0;

	return sdhci_start_signal_voltage_switch(mmc, ios);
}

/*
 * Update card type.
 * priv->init_card_type will be used in PHY timing adjustment.
 */
static void xenon_init_card(struct mmc_host *mmc, struct mmc_card *card)
{
	struct sdhci_host *host = mmc_priv(mmc);
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);

	/* Update card type */
	priv->init_card_type = card->type;
}

static int xenon_execute_tuning(struct mmc_host *mmc, u32 opcode)
{
	struct sdhci_host *host = mmc_priv(mmc);

	if (host->timing == MMC_TIMING_UHS_DDR50)
		return 0;

	/*
	 * Currently force Xenon driver back to support mode 1 only,
	 * even though Xenon might claim to support mode 2 or mode 3.
	 * It requires more time to test mode 2/mode 3 on more platforms.
	 */
	if (host->tuning_mode != SDHCI_TUNING_MODE_1)
		xenon_retune_setup(host);

	return sdhci_execute_tuning(mmc, opcode);
}

static void xenon_enable_sdio_irq(struct mmc_host *mmc, int enable)
{
	struct sdhci_host *host = mmc_priv(mmc);
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	u32 reg;
	u8 sdhc_id = priv->sdhc_id;

	sdhci_enable_sdio_irq(mmc, enable);

	if (enable) {
		/*
		 * Set SDIO Card Inserted indication
		 * to enable detecting SDIO async irq.
		 */
		reg = sdhci_readl(host, XENON_SYS_CFG_INFO);
		reg |= (1 << (sdhc_id + XENON_SLOT_TYPE_SDIO_SHIFT));
		sdhci_writel(host, reg, XENON_SYS_CFG_INFO);
	} else {
		/* Clear SDIO Card Inserted indication */
		reg = sdhci_readl(host, XENON_SYS_CFG_INFO);
		reg &= ~(1 << (sdhc_id + XENON_SLOT_TYPE_SDIO_SHIFT));
		sdhci_writel(host, reg, XENON_SYS_CFG_INFO);
	}
}

static void xenon_replace_mmc_host_ops(struct sdhci_host *host)
{
	host->mmc_host_ops.set_ios = xenon_set_ios;
	host->mmc_host_ops.start_signal_voltage_switch =
			xenon_start_signal_voltage_switch;
	host->mmc_host_ops.init_card = xenon_init_card;
	host->mmc_host_ops.execute_tuning = xenon_execute_tuning;
	host->mmc_host_ops.enable_sdio_irq = xenon_enable_sdio_irq;
}

/*
 * Parse Xenon specific DT properties:
 * sdhc-id: the index of current SDHC.
 *	    Refer to XENON_SYS_CFG_INFO register
 * tun-count: the interval between re-tuning
 */
static int xenon_probe_dt(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	struct sdhci_host *host = platform_get_drvdata(pdev);
	struct mmc_host *mmc = host->mmc;
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	u32 sdhc_id, nr_sdhc;
	u32 tuning_count;

	/* Disable HS200 on Armada AP806 */
	if (of_device_is_compatible(np, "marvell,armada-ap806-sdhci"))
		host->quirks2 |= SDHCI_QUIRK2_BROKEN_HS200;

	sdhc_id = 0x0;
	if (!of_property_read_u32(np, "marvell,xenon-sdhc-id", &sdhc_id)) {
		nr_sdhc = sdhci_readl(host, XENON_SYS_CFG_INFO);
		nr_sdhc &= XENON_NR_SUPPORTED_SLOT_MASK;
		if (unlikely(sdhc_id > nr_sdhc)) {
			dev_err(mmc_dev(mmc), "SDHC Index %d exceeds Number of SDHCs %d\n",
				sdhc_id, nr_sdhc);
			return -EINVAL;
		}
	}
	priv->sdhc_id = sdhc_id;

	tuning_count = XENON_DEF_TUNING_COUNT;
	if (!of_property_read_u32(np, "marvell,xenon-tun-count",
				  &tuning_count)) {
		if (unlikely(tuning_count >= XENON_TMR_RETUN_NO_PRESENT)) {
			dev_err(mmc_dev(mmc), "Wrong Re-tuning Count. Set default value %d\n",
				XENON_DEF_TUNING_COUNT);
			tuning_count = XENON_DEF_TUNING_COUNT;
		}
	}
	priv->tuning_count = tuning_count;

	return xenon_phy_parse_dt(np, host);
}

static int xenon_sdhc_prepare(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	u8 sdhc_id = priv->sdhc_id;

	/* Enable SDHC */
	xenon_enable_sdhc(host, sdhc_id);

	/* Enable ACG */
	xenon_set_acg(host, true);

	/* Enable Parallel Transfer Mode */
	xenon_enable_sdhc_parallel_tran(host, sdhc_id);

	/* Disable SDCLK-Off-While-Idle before card init */
	xenon_set_sdclk_off_idle(host, sdhc_id, false);

	xenon_mask_cmd_conflict_err(host);

	return 0;
}

static void xenon_sdhc_unprepare(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
	u8 sdhc_id = priv->sdhc_id;

	/* Disable SDHC */
	xenon_disable_sdhc(host, sdhc_id);
}

static int xenon_probe(struct platform_device *pdev)
{
	struct sdhci_pltfm_host *pltfm_host;
	struct sdhci_host *host;
	struct xenon_priv *priv;
	int err;

	host = sdhci_pltfm_init(pdev, &sdhci_xenon_pdata,
				sizeof(struct xenon_priv));
	if (IS_ERR(host))
		return PTR_ERR(host);

	pltfm_host = sdhci_priv(host);
	priv = sdhci_pltfm_priv(pltfm_host);

	/*
	 * Link Xenon specific mmc_host_ops function,
	 * to replace standard ones in sdhci_ops.
	 */
	xenon_replace_mmc_host_ops(host);

	pltfm_host->clk = devm_clk_get(&pdev->dev, "core");
	if (IS_ERR(pltfm_host->clk)) {
		err = PTR_ERR(pltfm_host->clk);
		dev_err(&pdev->dev, "Failed to setup input clk: %d\n", err);
		goto free_pltfm;
	}
	err = clk_prepare_enable(pltfm_host->clk);
	if (err)
		goto free_pltfm;

	err = mmc_of_parse(host->mmc);
	if (err)
		goto err_clk;

	sdhci_get_of_property(pdev);

	xenon_set_acg(host, false);

	/* Xenon specific dt parse */
	err = xenon_probe_dt(pdev);
	if (err)
		goto err_clk;

	err = xenon_sdhc_prepare(host);
	if (err)
		goto clean_phy_param;

	err = sdhci_add_host(host);
	if (err)
		goto remove_sdhc;

	return 0;

remove_sdhc:
	xenon_sdhc_unprepare(host);
clean_phy_param:
	xenon_clean_phy(host);
err_clk:
	clk_disable_unprepare(pltfm_host->clk);
free_pltfm:
	sdhci_pltfm_free(pdev);
	return err;
}

static int xenon_remove(struct platform_device *pdev)
{
	struct sdhci_host *host = platform_get_drvdata(pdev);
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);

	xenon_clean_phy(host);

	sdhci_remove_host(host, 0);

	xenon_sdhc_unprepare(host);

	clk_disable_unprepare(pltfm_host->clk);

	sdhci_pltfm_free(pdev);

	return 0;
}

static const struct of_device_id sdhci_xenon_dt_ids[] = {
	{ .compatible = "marvell,armada-ap806-sdhci", },
	{ .compatible = "marvell,armada-cp110-sdhci", },
	{ .compatible = "marvell,armada-3700-sdhci", },
	{}
};
MODULE_DEVICE_TABLE(of, sdhci_xenon_dt_ids);

static struct platform_driver sdhci_xenon_driver = {
	.driver	= {
		.name	= "xenon-sdhci",
		.of_match_table = sdhci_xenon_dt_ids,
		.pm = &sdhci_pltfm_pmops,
	},
	.probe	= xenon_probe,
	.remove	= xenon_remove,
};

module_platform_driver(sdhci_xenon_driver);

MODULE_DESCRIPTION("SDHCI platform driver for Marvell Xenon SDHC");
MODULE_AUTHOR("Hu Ziji <huziji@marvell.com>");
MODULE_LICENSE("GPL v2");
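Two of the bit manipulations in this file are easy to mis-read: the SDCLK-off-while-idle bit is offset by the SDHC index on top of a fixed shift, and `xenon_retune_setup()` programs the re-tuning period as a power of two of the DT count. A minimal standalone sketch (hypothetical helpers mirroring `xenon_set_sdclk_off_idle()` and `xenon_retune_setup()`, using the shift value from sdhci-xenon.h):

```c
#include <stdint.h>

#define XENON_SDCLK_IDLEOFF_ENABLE_SHIFT	8	/* from sdhci-xenon.h */

/* Bit in XENON_SYS_OP_CTRL controlling SDCLK-off-while-idle for one slot */
static uint32_t xenon_sdclk_idleoff_mask(unsigned char sdhc_id)
{
	return 1u << (XENON_SDCLK_IDLEOFF_ENABLE_SHIFT + sdhc_id);
}

/* Re-tuning period value programmed into host->tuning_count: 2^(count-1) */
static unsigned int xenon_retune_period(unsigned char tuning_count)
{
	return 1u << (tuning_count - 1);
}
```

With the default `XENON_DEF_TUNING_COUNT` of 0x9, the period works out to 256; the DT sanity check against `XENON_TMR_RETUN_NO_PRESENT` (0xF) keeps the shift within the 32-bit register.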
+101
drivers/mmc/host/sdhci-xenon.h
··· 1 + /* 2 + * Copyright (C) 2016 Marvell, All Rights Reserved. 3 + * 4 + * Author: Hu Ziji <huziji@marvell.com> 5 + * Date: 2016-8-24 6 + * 7 + * This program is free software; you can redistribute it and/or 8 + * modify it under the terms of the GNU General Public License as 9 + * published by the Free Software Foundation version 2. 10 + */ 11 + #ifndef SDHCI_XENON_H_ 12 + #define SDHCI_XENON_H_ 13 + 14 + /* Register Offset of Xenon SDHC self-defined register */ 15 + #define XENON_SYS_CFG_INFO 0x0104 16 + #define XENON_SLOT_TYPE_SDIO_SHIFT 24 17 + #define XENON_NR_SUPPORTED_SLOT_MASK 0x7 18 + 19 + #define XENON_SYS_OP_CTRL 0x0108 20 + #define XENON_AUTO_CLKGATE_DISABLE_MASK BIT(20) 21 + #define XENON_SDCLK_IDLEOFF_ENABLE_SHIFT 8 22 + #define XENON_SLOT_ENABLE_SHIFT 0 23 + 24 + #define XENON_SYS_EXT_OP_CTRL 0x010C 25 + #define XENON_MASK_CMD_CONFLICT_ERR BIT(8) 26 + 27 + #define XENON_SLOT_OP_STATUS_CTRL 0x0128 28 + #define XENON_TUN_CONSECUTIVE_TIMES_SHIFT 16 29 + #define XENON_TUN_CONSECUTIVE_TIMES_MASK 0x7 30 + #define XENON_TUN_CONSECUTIVE_TIMES 0x4 31 + #define XENON_TUNING_STEP_SHIFT 12 32 + #define XENON_TUNING_STEP_MASK 0xF 33 + #define XENON_TUNING_STEP_DIVIDER BIT(6) 34 + 35 + #define XENON_SLOT_EMMC_CTRL 0x0130 36 + #define XENON_ENABLE_DATA_STROBE BIT(24) 37 + 38 + #define XENON_SLOT_RETUNING_REQ_CTRL 0x0144 39 + /* retuning compatible */ 40 + #define XENON_RETUNING_COMPATIBLE 0x1 41 + 42 + #define XENON_SLOT_EXT_PRESENT_STATE 0x014C 43 + #define XENON_DLL_LOCK_STATE 0x1 44 + 45 + #define XENON_SLOT_DLL_CUR_DLY_VAL 0x0150 46 + 47 + /* Tuning Parameter */ 48 + #define XENON_TMR_RETUN_NO_PRESENT 0xF 49 + #define XENON_DEF_TUNING_COUNT 0x9 50 + 51 + #define XENON_DEFAULT_SDCLK_FREQ 400000 52 + #define XENON_LOWEST_SDCLK_FREQ 100000 53 + 54 + /* Xenon specific Mode Select value */ 55 + #define XENON_CTRL_HS200 0x5 56 + #define XENON_CTRL_HS400 0x6 57 + 58 + struct xenon_priv { 59 + unsigned char tuning_count; 60 + /* idx of SDHC */ 61 + u8 sdhc_id; 62 + 
63 + /* 64 + * eMMC/SD/SDIO require different register settings. 65 + * Xenon driver has to recognize card type 66 + * before mmc_host->card is available. 67 + * This field records the card type during init. 68 + * It is updated in xenon_init_card(). 69 + * 70 + * It is only valid during initialization after it is updated. 71 + * Do not access this variable in normal transfers after 72 + * initialization completes. 73 + */ 74 + unsigned int init_card_type; 75 + 76 + /* 77 + * The bus_width, timing, and clock fields below 78 + * record the current ios settings of the Xenon SDHC. 79 + * The driver will adjust PHY settings if any change to 80 + * ios affects PHY timing. 81 + */ 82 + unsigned char bus_width; 83 + unsigned char timing; 84 + unsigned int clock; 85 + 86 + int phy_type; 87 + /* 88 + * Contains board-specific PHY parameters 89 + * passed from device tree. 90 + */ 91 + void *phy_params; 92 + struct xenon_emmc_phy_regs *emmc_phy_regs; 93 + }; 94 + 95 + int xenon_phy_adj(struct sdhci_host *host, struct mmc_ios *ios); 96 + void xenon_clean_phy(struct sdhci_host *host); 97 + int xenon_phy_parse_dt(struct device_node *np, 98 + struct sdhci_host *host); 99 + void xenon_soc_pad_ctrl(struct sdhci_host *host, 100 + unsigned char signal_voltage); 101 + #endif
+285 -164
drivers/mmc/host/sdhci.c
··· 14 14 */ 15 15 16 16 #include <linux/delay.h> 17 + #include <linux/ktime.h> 17 18 #include <linux/highmem.h> 18 19 #include <linux/io.h> 19 20 #include <linux/module.h> ··· 38 37 #define DRIVER_NAME "sdhci" 39 38 40 39 #define DBG(f, x...) \ 41 - pr_debug(DRIVER_NAME " [%s()]: " f, __func__,## x) 40 + pr_debug("%s: " DRIVER_NAME ": " f, mmc_hostname(host->mmc), ## x) 41 + 42 + #define SDHCI_DUMP(f, x...) \ 43 + pr_err("%s: " DRIVER_NAME ": " f, mmc_hostname(host->mmc), ## x) 42 44 43 45 #define MAX_TUNING_LOOP 40 44 46 ··· 52 48 53 49 static void sdhci_enable_preset_value(struct sdhci_host *host, bool enable); 54 50 55 - static void sdhci_dumpregs(struct sdhci_host *host) 51 + void sdhci_dumpregs(struct sdhci_host *host) 56 52 { 57 - pr_err(DRIVER_NAME ": =========== REGISTER DUMP (%s)===========\n", 58 - mmc_hostname(host->mmc)); 53 + SDHCI_DUMP("============ SDHCI REGISTER DUMP ===========\n"); 59 54 60 - pr_err(DRIVER_NAME ": Sys addr: 0x%08x | Version: 0x%08x\n", 61 - sdhci_readl(host, SDHCI_DMA_ADDRESS), 62 - sdhci_readw(host, SDHCI_HOST_VERSION)); 63 - pr_err(DRIVER_NAME ": Blk size: 0x%08x | Blk cnt: 0x%08x\n", 64 - sdhci_readw(host, SDHCI_BLOCK_SIZE), 65 - sdhci_readw(host, SDHCI_BLOCK_COUNT)); 66 - pr_err(DRIVER_NAME ": Argument: 0x%08x | Trn mode: 0x%08x\n", 67 - sdhci_readl(host, SDHCI_ARGUMENT), 68 - sdhci_readw(host, SDHCI_TRANSFER_MODE)); 69 - pr_err(DRIVER_NAME ": Present: 0x%08x | Host ctl: 0x%08x\n", 70 - sdhci_readl(host, SDHCI_PRESENT_STATE), 71 - sdhci_readb(host, SDHCI_HOST_CONTROL)); 72 - pr_err(DRIVER_NAME ": Power: 0x%08x | Blk gap: 0x%08x\n", 73 - sdhci_readb(host, SDHCI_POWER_CONTROL), 74 - sdhci_readb(host, SDHCI_BLOCK_GAP_CONTROL)); 75 - pr_err(DRIVER_NAME ": Wake-up: 0x%08x | Clock: 0x%08x\n", 76 - sdhci_readb(host, SDHCI_WAKE_UP_CONTROL), 77 - sdhci_readw(host, SDHCI_CLOCK_CONTROL)); 78 - pr_err(DRIVER_NAME ": Timeout: 0x%08x | Int stat: 0x%08x\n", 79 - sdhci_readb(host, SDHCI_TIMEOUT_CONTROL), 80 - sdhci_readl(host, 
SDHCI_INT_STATUS)); 81 - pr_err(DRIVER_NAME ": Int enab: 0x%08x | Sig enab: 0x%08x\n", 82 - sdhci_readl(host, SDHCI_INT_ENABLE), 83 - sdhci_readl(host, SDHCI_SIGNAL_ENABLE)); 84 - pr_err(DRIVER_NAME ": AC12 err: 0x%08x | Slot int: 0x%08x\n", 85 - sdhci_readw(host, SDHCI_ACMD12_ERR), 86 - sdhci_readw(host, SDHCI_SLOT_INT_STATUS)); 87 - pr_err(DRIVER_NAME ": Caps: 0x%08x | Caps_1: 0x%08x\n", 88 - sdhci_readl(host, SDHCI_CAPABILITIES), 89 - sdhci_readl(host, SDHCI_CAPABILITIES_1)); 90 - pr_err(DRIVER_NAME ": Cmd: 0x%08x | Max curr: 0x%08x\n", 91 - sdhci_readw(host, SDHCI_COMMAND), 92 - sdhci_readl(host, SDHCI_MAX_CURRENT)); 93 - pr_err(DRIVER_NAME ": Host ctl2: 0x%08x\n", 94 - sdhci_readw(host, SDHCI_HOST_CONTROL2)); 55 + SDHCI_DUMP("Sys addr: 0x%08x | Version: 0x%08x\n", 56 + sdhci_readl(host, SDHCI_DMA_ADDRESS), 57 + sdhci_readw(host, SDHCI_HOST_VERSION)); 58 + SDHCI_DUMP("Blk size: 0x%08x | Blk cnt: 0x%08x\n", 59 + sdhci_readw(host, SDHCI_BLOCK_SIZE), 60 + sdhci_readw(host, SDHCI_BLOCK_COUNT)); 61 + SDHCI_DUMP("Argument: 0x%08x | Trn mode: 0x%08x\n", 62 + sdhci_readl(host, SDHCI_ARGUMENT), 63 + sdhci_readw(host, SDHCI_TRANSFER_MODE)); 64 + SDHCI_DUMP("Present: 0x%08x | Host ctl: 0x%08x\n", 65 + sdhci_readl(host, SDHCI_PRESENT_STATE), 66 + sdhci_readb(host, SDHCI_HOST_CONTROL)); 67 + SDHCI_DUMP("Power: 0x%08x | Blk gap: 0x%08x\n", 68 + sdhci_readb(host, SDHCI_POWER_CONTROL), 69 + sdhci_readb(host, SDHCI_BLOCK_GAP_CONTROL)); 70 + SDHCI_DUMP("Wake-up: 0x%08x | Clock: 0x%08x\n", 71 + sdhci_readb(host, SDHCI_WAKE_UP_CONTROL), 72 + sdhci_readw(host, SDHCI_CLOCK_CONTROL)); 73 + SDHCI_DUMP("Timeout: 0x%08x | Int stat: 0x%08x\n", 74 + sdhci_readb(host, SDHCI_TIMEOUT_CONTROL), 75 + sdhci_readl(host, SDHCI_INT_STATUS)); 76 + SDHCI_DUMP("Int enab: 0x%08x | Sig enab: 0x%08x\n", 77 + sdhci_readl(host, SDHCI_INT_ENABLE), 78 + sdhci_readl(host, SDHCI_SIGNAL_ENABLE)); 79 + SDHCI_DUMP("AC12 err: 0x%08x | Slot int: 0x%08x\n", 80 + sdhci_readw(host, SDHCI_ACMD12_ERR), 81 + 
sdhci_readw(host, SDHCI_SLOT_INT_STATUS)); 82 + SDHCI_DUMP("Caps: 0x%08x | Caps_1: 0x%08x\n", 83 + sdhci_readl(host, SDHCI_CAPABILITIES), 84 + sdhci_readl(host, SDHCI_CAPABILITIES_1)); 85 + SDHCI_DUMP("Cmd: 0x%08x | Max curr: 0x%08x\n", 86 + sdhci_readw(host, SDHCI_COMMAND), 87 + sdhci_readl(host, SDHCI_MAX_CURRENT)); 88 + SDHCI_DUMP("Resp[0]: 0x%08x | Resp[1]: 0x%08x\n", 89 + sdhci_readl(host, SDHCI_RESPONSE), 90 + sdhci_readl(host, SDHCI_RESPONSE + 4)); 91 + SDHCI_DUMP("Resp[2]: 0x%08x | Resp[3]: 0x%08x\n", 92 + sdhci_readl(host, SDHCI_RESPONSE + 8), 93 + sdhci_readl(host, SDHCI_RESPONSE + 12)); 94 + SDHCI_DUMP("Host ctl2: 0x%08x\n", 95 + sdhci_readw(host, SDHCI_HOST_CONTROL2)); 95 96 96 97 if (host->flags & SDHCI_USE_ADMA) { 97 - if (host->flags & SDHCI_USE_64_BIT_DMA) 98 - pr_err(DRIVER_NAME ": ADMA Err: 0x%08x | ADMA Ptr: 0x%08x%08x\n", 99 - readl(host->ioaddr + SDHCI_ADMA_ERROR), 100 - readl(host->ioaddr + SDHCI_ADMA_ADDRESS_HI), 101 - readl(host->ioaddr + SDHCI_ADMA_ADDRESS)); 102 - else 103 - pr_err(DRIVER_NAME ": ADMA Err: 0x%08x | ADMA Ptr: 0x%08x\n", 104 - readl(host->ioaddr + SDHCI_ADMA_ERROR), 105 - readl(host->ioaddr + SDHCI_ADMA_ADDRESS)); 98 + if (host->flags & SDHCI_USE_64_BIT_DMA) { 99 + SDHCI_DUMP("ADMA Err: 0x%08x | ADMA Ptr: 0x%08x%08x\n", 100 + sdhci_readl(host, SDHCI_ADMA_ERROR), 101 + sdhci_readl(host, SDHCI_ADMA_ADDRESS_HI), 102 + sdhci_readl(host, SDHCI_ADMA_ADDRESS)); 103 + } else { 104 + SDHCI_DUMP("ADMA Err: 0x%08x | ADMA Ptr: 0x%08x\n", 105 + sdhci_readl(host, SDHCI_ADMA_ERROR), 106 + sdhci_readl(host, SDHCI_ADMA_ADDRESS)); 107 + } 106 108 } 107 109 108 - pr_err(DRIVER_NAME ": ===========================================\n"); 110 + SDHCI_DUMP("============================================\n"); 109 111 } 112 + EXPORT_SYMBOL_GPL(sdhci_dumpregs); 110 113 111 114 /*****************************************************************************\ 112 115 * * ··· 176 165 177 166 void sdhci_reset(struct sdhci_host *host, u8 mask) 178 167 { 179 - 
unsigned long timeout; 168 + ktime_t timeout; 180 169 181 170 sdhci_writeb(host, mask, SDHCI_SOFTWARE_RESET); 182 171 ··· 188 177 } 189 178 190 179 /* Wait max 100 ms */ 191 - timeout = 100; 180 + timeout = ktime_add_ms(ktime_get(), 100); 192 181 193 182 /* hw clears the bit when it's done */ 194 183 while (sdhci_readb(host, SDHCI_SOFTWARE_RESET) & mask) { 195 - if (timeout == 0) { 184 + if (ktime_after(ktime_get(), timeout)) { 196 185 pr_err("%s: Reset 0x%x never completed.\n", 197 186 mmc_hostname(host->mmc), (int)mask); 198 187 sdhci_dumpregs(host); 199 188 return; 200 189 } 201 - timeout--; 202 - mdelay(1); 190 + udelay(10); 203 191 } 204 192 } 205 193 EXPORT_SYMBOL_GPL(sdhci_reset); ··· 225 215 } 226 216 } 227 217 228 - static void sdhci_init(struct sdhci_host *host, int soft) 218 + static void sdhci_set_default_irqs(struct sdhci_host *host) 229 219 { 230 - struct mmc_host *mmc = host->mmc; 231 - 232 - if (soft) 233 - sdhci_do_reset(host, SDHCI_RESET_CMD|SDHCI_RESET_DATA); 234 - else 235 - sdhci_do_reset(host, SDHCI_RESET_ALL); 236 - 237 220 host->ier = SDHCI_INT_BUS_POWER | SDHCI_INT_DATA_END_BIT | 238 221 SDHCI_INT_DATA_CRC | SDHCI_INT_DATA_TIMEOUT | 239 222 SDHCI_INT_INDEX | SDHCI_INT_END_BIT | SDHCI_INT_CRC | ··· 239 236 240 237 sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); 241 238 sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); 239 + } 240 + 241 + static void sdhci_init(struct sdhci_host *host, int soft) 242 + { 243 + struct mmc_host *mmc = host->mmc; 244 + 245 + if (soft) 246 + sdhci_do_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA); 247 + else 248 + sdhci_do_reset(host, SDHCI_RESET_ALL); 249 + 250 + sdhci_set_default_irqs(host); 251 + 252 + host->cqe_on = false; 242 253 243 254 if (soft) { 244 255 /* force clock reconfiguration */ ··· 502 485 return data->sg_count; 503 486 504 487 sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 505 - data->flags & MMC_DATA_WRITE ? 
506 - DMA_TO_DEVICE : DMA_FROM_DEVICE); 488 + mmc_get_dma_dir(data)); 507 489 508 490 if (sg_count == 0) 509 491 return -ENOSPC; ··· 731 715 } 732 716 733 717 if (count >= 0xF) { 734 - DBG("%s: Too large timeout 0x%x requested for CMD%d!\n", 735 - mmc_hostname(host->mmc), count, cmd->opcode); 718 + DBG("Too large timeout 0x%x requested for CMD%d!\n", 719 + count, cmd->opcode); 736 720 count = 0xE; 737 721 } 738 722 ··· 1362 1346 1363 1347 void sdhci_enable_clk(struct sdhci_host *host, u16 clk) 1364 1348 { 1365 - unsigned long timeout; 1349 + ktime_t timeout; 1366 1350 1367 1351 clk |= SDHCI_CLOCK_INT_EN; 1368 1352 sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL); 1369 1353 1370 1354 /* Wait max 20 ms */ 1371 - timeout = 20; 1355 + timeout = ktime_add_ms(ktime_get(), 20); 1372 1356 while (!((clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL)) 1373 1357 & SDHCI_CLOCK_INT_STABLE)) { 1374 - if (timeout == 0) { 1358 + if (ktime_after(ktime_get(), timeout)) { 1375 1359 pr_err("%s: Internal clock never stabilised.\n", 1376 1360 mmc_hostname(host->mmc)); 1377 1361 sdhci_dumpregs(host); 1378 1362 return; 1379 1363 } 1380 - timeout--; 1381 - spin_unlock_irq(&host->lock); 1382 - usleep_range(900, 1100); 1383 - spin_lock_irq(&host->lock); 1364 + udelay(10); 1384 1365 } 1385 1366 1386 1367 clk |= SDHCI_CLOCK_CARD_EN; ··· 1406 1393 { 1407 1394 struct mmc_host *mmc = host->mmc; 1408 1395 1409 - spin_unlock_irq(&host->lock); 1410 1396 mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd); 1411 - spin_lock_irq(&host->lock); 1412 1397 1413 1398 if (mode != MMC_POWER_OFF) 1414 1399 sdhci_writeb(host, SDHCI_POWER_ON, SDHCI_POWER_CONTROL); ··· 1583 1572 } 1584 1573 EXPORT_SYMBOL_GPL(sdhci_set_uhs_signaling); 1585 1574 1586 - static void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) 1575 + void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) 1587 1576 { 1588 1577 struct sdhci_host *host = mmc_priv(mmc); 1589 - unsigned long flags; 1590 1578 u8 ctrl; 1591 1579 1592 1580 if 
(ios->power_mode == MMC_POWER_UNDEFINED) 1593 1581 return; 1594 1582 1595 - spin_lock_irqsave(&host->lock, flags); 1596 - 1597 1583 if (host->flags & SDHCI_DEVICE_DEAD) { 1598 - spin_unlock_irqrestore(&host->lock, flags); 1599 1584 if (!IS_ERR(mmc->supply.vmmc) && 1600 1585 ios->power_mode == MMC_POWER_OFF) 1601 1586 mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0); ··· 1737 1730 sdhci_do_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA); 1738 1731 1739 1732 mmiowb(); 1740 - spin_unlock_irqrestore(&host->lock, flags); 1741 1733 } 1734 + EXPORT_SYMBOL_GPL(sdhci_set_ios); 1742 1735 1743 1736 static int sdhci_get_cd(struct mmc_host *mmc) 1744 1737 { ··· 1832 1825 } 1833 1826 } 1834 1827 1835 - static void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable) 1828 + void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable) 1836 1829 { 1837 1830 struct sdhci_host *host = mmc_priv(mmc); 1838 1831 unsigned long flags; ··· 1852 1845 if (!enable) 1853 1846 pm_runtime_put_noidle(host->mmc->parent); 1854 1847 } 1848 + EXPORT_SYMBOL_GPL(sdhci_enable_sdio_irq); 1855 1849 1856 - static int sdhci_start_signal_voltage_switch(struct mmc_host *mmc, 1857 - struct mmc_ios *ios) 1850 + int sdhci_start_signal_voltage_switch(struct mmc_host *mmc, 1851 + struct mmc_ios *ios) 1858 1852 { 1859 1853 struct sdhci_host *host = mmc_priv(mmc); 1860 1854 u16 ctrl; ··· 1947 1939 return 0; 1948 1940 } 1949 1941 } 1942 + EXPORT_SYMBOL_GPL(sdhci_start_signal_voltage_switch); 1950 1943 1951 1944 static int sdhci_card_busy(struct mmc_host *mmc) 1952 1945 { ··· 2012 2003 sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2); 2013 2004 } 2014 2005 2015 - static void sdhci_abort_tuning(struct sdhci_host *host, u32 opcode, 2016 - unsigned long flags) 2006 + static void sdhci_abort_tuning(struct sdhci_host *host, u32 opcode) 2017 2007 { 2018 2008 sdhci_reset_tuning(host); 2019 2009 ··· 2021 2013 2022 2014 sdhci_end_tuning(host); 2023 2015 2024 - spin_unlock_irqrestore(&host->lock, flags); 2025 2016 
mmc_abort_tuning(host->mmc, opcode); 2026 - spin_lock_irqsave(&host->lock, flags); 2027 2017 } 2028 2018 2029 2019 /* ··· 2031 2025 * interrupt setup is different to other commands and there is no timeout 2032 2026 * interrupt so special handling is needed. 2033 2027 */ 2034 - static void sdhci_send_tuning(struct sdhci_host *host, u32 opcode, 2035 - unsigned long flags) 2028 + static void sdhci_send_tuning(struct sdhci_host *host, u32 opcode) 2036 2029 { 2037 2030 struct mmc_host *mmc = host->mmc; 2038 2031 struct mmc_command cmd = {}; 2039 2032 struct mmc_request mrq = {}; 2033 + unsigned long flags; 2034 + 2035 + spin_lock_irqsave(&host->lock, flags); 2040 2036 2041 2037 cmd.opcode = opcode; 2042 2038 cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC; ··· 2072 2064 2073 2065 host->tuning_done = 0; 2074 2066 2067 + mmiowb(); 2075 2068 spin_unlock_irqrestore(&host->lock, flags); 2076 2069 2077 2070 /* Wait for Buffer Read Ready interrupt */ 2078 2071 wait_event_timeout(host->buf_ready_int, (host->tuning_done == 1), 2079 2072 msecs_to_jiffies(50)); 2080 2073 2081 - spin_lock_irqsave(&host->lock, flags); 2082 2074 } 2083 2075 2084 - static void __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode, 2085 - unsigned long flags) 2076 + static void __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode) 2086 2077 { 2087 2078 int i; 2088 2079 ··· 2092 2085 for (i = 0; i < MAX_TUNING_LOOP; i++) { 2093 2086 u16 ctrl; 2094 2087 2095 - sdhci_send_tuning(host, opcode, flags); 2088 + sdhci_send_tuning(host, opcode); 2096 2089 2097 2090 if (!host->tuning_done) { 2098 2091 pr_info("%s: Tuning timeout, falling back to fixed sampling clock\n", 2099 2092 mmc_hostname(host->mmc)); 2100 - sdhci_abort_tuning(host, opcode, flags); 2093 + sdhci_abort_tuning(host, opcode); 2101 2094 return; 2102 2095 } 2103 2096 ··· 2108 2101 break; 2109 2102 } 2110 2103 2111 - /* eMMC spec does not require a delay between tuning cycles */ 2112 - if (opcode == MMC_SEND_TUNING_BLOCK) 2113 - mdelay(1); 2104 + 
/* Spec does not require a delay between tuning cycles */ 2105 + if (host->tuning_delay > 0) 2106 + mdelay(host->tuning_delay); 2114 2107 } 2115 2108 2116 2109 pr_info("%s: Tuning failed, falling back to fixed sampling clock\n", ··· 2122 2115 { 2123 2116 struct sdhci_host *host = mmc_priv(mmc); 2124 2117 int err = 0; 2125 - unsigned long flags; 2126 2118 unsigned int tuning_count = 0; 2127 2119 bool hs400_tuning; 2128 - 2129 - spin_lock_irqsave(&host->lock, flags); 2130 2120 2131 2121 hs400_tuning = host->flags & SDHCI_HS400_TUNING; 2132 2122 ··· 2141 2137 /* HS400 tuning is done in HS200 mode */ 2142 2138 case MMC_TIMING_MMC_HS400: 2143 2139 err = -EINVAL; 2144 - goto out_unlock; 2140 + goto out; 2145 2141 2146 2142 case MMC_TIMING_MMC_HS200: 2147 2143 /* ··· 2162 2158 /* FALLTHROUGH */ 2163 2159 2164 2160 default: 2165 - goto out_unlock; 2161 + goto out; 2166 2162 } 2167 2163 2168 2164 if (host->ops->platform_execute_tuning) { 2169 - spin_unlock_irqrestore(&host->lock, flags); 2170 2165 err = host->ops->platform_execute_tuning(host, opcode); 2171 - spin_lock_irqsave(&host->lock, flags); 2172 - goto out_unlock; 2166 + goto out; 2173 2167 } 2174 2168 2175 2169 host->mmc->retune_period = tuning_count; 2176 2170 2171 + if (host->tuning_delay < 0) 2172 + host->tuning_delay = opcode == MMC_SEND_TUNING_BLOCK; 2173 + 2177 2174 sdhci_start_tuning(host); 2178 2175 2179 - __sdhci_execute_tuning(host, opcode, flags); 2176 + __sdhci_execute_tuning(host, opcode); 2180 2177 2181 2178 sdhci_end_tuning(host); 2182 - out_unlock: 2179 + out: 2183 2180 host->flags &= ~SDHCI_HS400_TUNING; 2184 - spin_unlock_irqrestore(&host->lock, flags); 2185 2181 2186 2182 return err; 2187 2183 } 2188 2184 EXPORT_SYMBOL_GPL(sdhci_execute_tuning); 2189 - 2190 - static int sdhci_select_drive_strength(struct mmc_card *card, 2191 - unsigned int max_dtr, int host_drv, 2192 - int card_drv, int *drv_type) 2193 - { 2194 - struct sdhci_host *host = mmc_priv(card->host); 2195 - 2196 - if 
(!host->ops->select_drive_strength) 2197 - return 0; 2198 - 2199 - return host->ops->select_drive_strength(host, card, max_dtr, host_drv, 2200 - card_drv, drv_type); 2201 - } 2202 2185 2203 2186 static void sdhci_enable_preset_value(struct sdhci_host *host, bool enable) 2204 2187 { ··· 2224 2233 2225 2234 if (data->host_cookie != COOKIE_UNMAPPED) 2226 2235 dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 2227 - data->flags & MMC_DATA_WRITE ? 2228 - DMA_TO_DEVICE : DMA_FROM_DEVICE); 2236 + mmc_get_dma_dir(data)); 2229 2237 2230 2238 data->host_cookie = COOKIE_UNMAPPED; 2231 2239 } ··· 2299 2309 .start_signal_voltage_switch = sdhci_start_signal_voltage_switch, 2300 2310 .prepare_hs400_tuning = sdhci_prepare_hs400_tuning, 2301 2311 .execute_tuning = sdhci_execute_tuning, 2302 - .select_drive_strength = sdhci_select_drive_strength, 2303 2312 .card_event = sdhci_card_event, 2304 2313 .card_busy = sdhci_card_busy, 2305 2314 }; ··· 2340 2351 2341 2352 if (data && data->host_cookie == COOKIE_MAPPED) { 2342 2353 dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 2343 - (data->flags & MMC_DATA_READ) ? 
2344 - DMA_FROM_DEVICE : DMA_TO_DEVICE); 2354 + mmc_get_dma_dir(data)); 2345 2355 data->host_cookie = COOKIE_UNMAPPED; 2346 2356 } 2347 2357 } ··· 2505 2517 #ifdef CONFIG_MMC_DEBUG 2506 2518 static void sdhci_adma_show_error(struct sdhci_host *host) 2507 2519 { 2508 - const char *name = mmc_hostname(host->mmc); 2509 2520 void *desc = host->adma_table; 2510 2521 2511 2522 sdhci_dumpregs(host); ··· 2513 2526 struct sdhci_adma2_64_desc *dma_desc = desc; 2514 2527 2515 2528 if (host->flags & SDHCI_USE_64_BIT_DMA) 2516 - DBG("%s: %p: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n", 2517 - name, desc, le32_to_cpu(dma_desc->addr_hi), 2529 + DBG("%p: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n", 2530 + desc, le32_to_cpu(dma_desc->addr_hi), 2518 2531 le32_to_cpu(dma_desc->addr_lo), 2519 2532 le16_to_cpu(dma_desc->len), 2520 2533 le16_to_cpu(dma_desc->cmd)); 2521 2534 else 2522 - DBG("%s: %p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n", 2523 - name, desc, le32_to_cpu(dma_desc->addr_lo), 2535 + DBG("%p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n", 2536 + desc, le32_to_cpu(dma_desc->addr_lo), 2524 2537 le16_to_cpu(dma_desc->len), 2525 2538 le16_to_cpu(dma_desc->cmd)); 2526 2539 ··· 2636 2649 ~(SDHCI_DEFAULT_BOUNDARY_SIZE - 1)) + 2637 2650 SDHCI_DEFAULT_BOUNDARY_SIZE; 2638 2651 host->data->bytes_xfered = dmanow - dmastart; 2639 - DBG("%s: DMA base 0x%08x, transferred 0x%06x bytes," 2640 - " next 0x%08x\n", 2641 - mmc_hostname(host->mmc), dmastart, 2642 - host->data->bytes_xfered, dmanow); 2652 + DBG("DMA base 0x%08x, transferred 0x%06x bytes, next 0x%08x\n", 2653 + dmastart, host->data->bytes_xfered, dmanow); 2643 2654 sdhci_writel(host, dmanow, SDHCI_DMA_ADDRESS); 2644 2655 } 2645 2656 ··· 2677 2692 } 2678 2693 2679 2694 do { 2695 + DBG("IRQ status 0x%08x\n", intmask); 2696 + 2697 + if (host->ops->irq) { 2698 + intmask = host->ops->irq(host, intmask); 2699 + if (!intmask) 2700 + goto cont; 2701 + } 2702 + 2680 2703 /* Clear selected interrupts. 
*/ 2681 2704 mask = intmask & (SDHCI_INT_CMD_MASK | SDHCI_INT_DATA_MASK | 2682 2705 SDHCI_INT_BUS_POWER); 2683 2706 sdhci_writel(host, mask, SDHCI_INT_STATUS); 2684 - 2685 - DBG("*** %s got interrupt: 0x%08x\n", 2686 - mmc_hostname(host->mmc), intmask); 2687 2707 2688 2708 if (intmask & (SDHCI_INT_CARD_INSERT | SDHCI_INT_CARD_REMOVE)) { 2689 2709 u32 present = sdhci_readl(host, SDHCI_PRESENT_STATE) & ··· 2749 2759 unexpected |= intmask; 2750 2760 sdhci_writel(host, intmask, SDHCI_INT_STATUS); 2751 2761 } 2752 - 2762 + cont: 2753 2763 if (result == IRQ_NONE) 2754 2764 result = IRQ_HANDLED; 2755 2765 ··· 2848 2858 sdhci_disable_card_detection(host); 2849 2859 2850 2860 mmc_retune_timer_stop(host->mmc); 2851 - if (host->tuning_mode != SDHCI_TUNING_MODE_3) 2852 - mmc_retune_needed(host->mmc); 2853 2861 2854 2862 if (!device_may_wakeup(mmc_dev(host->mmc))) { 2855 2863 host->ier = 0; ··· 2908 2920 unsigned long flags; 2909 2921 2910 2922 mmc_retune_timer_stop(host->mmc); 2911 - if (host->tuning_mode != SDHCI_TUNING_MODE_3) 2912 - mmc_retune_needed(host->mmc); 2913 2923 2914 2924 spin_lock_irqsave(&host->lock, flags); 2915 2925 host->ier &= SDHCI_INT_CARD_INT; ··· 2978 2992 2979 2993 /*****************************************************************************\ 2980 2994 * * 2995 + * Command Queue Engine (CQE) helpers * 2996 + * * 2997 + \*****************************************************************************/ 2998 + 2999 + void sdhci_cqe_enable(struct mmc_host *mmc) 3000 + { 3001 + struct sdhci_host *host = mmc_priv(mmc); 3002 + unsigned long flags; 3003 + u8 ctrl; 3004 + 3005 + spin_lock_irqsave(&host->lock, flags); 3006 + 3007 + ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL); 3008 + ctrl &= ~SDHCI_CTRL_DMA_MASK; 3009 + if (host->flags & SDHCI_USE_64_BIT_DMA) 3010 + ctrl |= SDHCI_CTRL_ADMA64; 3011 + else 3012 + ctrl |= SDHCI_CTRL_ADMA32; 3013 + sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL); 3014 + 3015 + sdhci_writew(host, 
SDHCI_MAKE_BLKSZ(SDHCI_DEFAULT_BOUNDARY_ARG, 512), 3016 + SDHCI_BLOCK_SIZE); 3017 + 3018 + /* Set maximum timeout */ 3019 + sdhci_writeb(host, 0xE, SDHCI_TIMEOUT_CONTROL); 3020 + 3021 + host->ier = host->cqe_ier; 3022 + 3023 + sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); 3024 + sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); 3025 + 3026 + host->cqe_on = true; 3027 + 3028 + pr_debug("%s: sdhci: CQE on, IRQ mask %#x, IRQ status %#x\n", 3029 + mmc_hostname(mmc), host->ier, 3030 + sdhci_readl(host, SDHCI_INT_STATUS)); 3031 + 3032 + mmiowb(); 3033 + spin_unlock_irqrestore(&host->lock, flags); 3034 + } 3035 + EXPORT_SYMBOL_GPL(sdhci_cqe_enable); 3036 + 3037 + void sdhci_cqe_disable(struct mmc_host *mmc, bool recovery) 3038 + { 3039 + struct sdhci_host *host = mmc_priv(mmc); 3040 + unsigned long flags; 3041 + 3042 + spin_lock_irqsave(&host->lock, flags); 3043 + 3044 + sdhci_set_default_irqs(host); 3045 + 3046 + host->cqe_on = false; 3047 + 3048 + if (recovery) { 3049 + sdhci_do_reset(host, SDHCI_RESET_CMD); 3050 + sdhci_do_reset(host, SDHCI_RESET_DATA); 3051 + } 3052 + 3053 + pr_debug("%s: sdhci: CQE off, IRQ mask %#x, IRQ status %#x\n", 3054 + mmc_hostname(mmc), host->ier, 3055 + sdhci_readl(host, SDHCI_INT_STATUS)); 3056 + 3057 + mmiowb(); 3058 + spin_unlock_irqrestore(&host->lock, flags); 3059 + } 3060 + EXPORT_SYMBOL_GPL(sdhci_cqe_disable); 3061 + 3062 + bool sdhci_cqe_irq(struct sdhci_host *host, u32 intmask, int *cmd_error, 3063 + int *data_error) 3064 + { 3065 + u32 mask; 3066 + 3067 + if (!host->cqe_on) 3068 + return false; 3069 + 3070 + if (intmask & (SDHCI_INT_INDEX | SDHCI_INT_END_BIT | SDHCI_INT_CRC)) 3071 + *cmd_error = -EILSEQ; 3072 + else if (intmask & SDHCI_INT_TIMEOUT) 3073 + *cmd_error = -ETIMEDOUT; 3074 + else 3075 + *cmd_error = 0; 3076 + 3077 + if (intmask & (SDHCI_INT_DATA_END_BIT | SDHCI_INT_DATA_CRC)) 3078 + *data_error = -EILSEQ; 3079 + else if (intmask & SDHCI_INT_DATA_TIMEOUT) 3080 + *data_error = -ETIMEDOUT; 3081 + else if (intmask & 
SDHCI_INT_ADMA_ERROR) 3082 + *data_error = -EIO; 3083 + else 3084 + *data_error = 0; 3085 + 3086 + /* Clear selected interrupts. */ 3087 + mask = intmask & host->cqe_ier; 3088 + sdhci_writel(host, mask, SDHCI_INT_STATUS); 3089 + 3090 + if (intmask & SDHCI_INT_BUS_POWER) 3091 + pr_err("%s: Card is consuming too much power!\n", 3092 + mmc_hostname(host->mmc)); 3093 + 3094 + intmask &= ~(host->cqe_ier | SDHCI_INT_ERROR); 3095 + if (intmask) { 3096 + sdhci_writel(host, intmask, SDHCI_INT_STATUS); 3097 + pr_err("%s: CQE: Unexpected interrupt 0x%08x.\n", 3098 + mmc_hostname(host->mmc), intmask); 3099 + sdhci_dumpregs(host); 3100 + } 3101 + 3102 + return true; 3103 + } 3104 + EXPORT_SYMBOL_GPL(sdhci_cqe_irq); 3105 + 3106 + /*****************************************************************************\ 3107 + * * 2981 3108 * Device allocation/registration * 2982 3109 * * 2983 3110 \*****************************************************************************/ ··· 3113 3014 mmc->ops = &host->mmc_host_ops; 3114 3015 3115 3016 host->flags = SDHCI_SIGNALING_330; 3017 + 3018 + host->cqe_ier = SDHCI_CQE_INT_MASK; 3019 + host->cqe_err_ier = SDHCI_CQE_INT_ERR_MASK; 3020 + 3021 + host->tuning_delay = -1; 3116 3022 3117 3023 return host; 3118 3024 } ··· 3401 3297 if (!(host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK)) { 3402 3298 host->timeout_clk = (host->caps & SDHCI_TIMEOUT_CLK_MASK) >> 3403 3299 SDHCI_TIMEOUT_CLK_SHIFT; 3300 + 3301 + if (host->caps & SDHCI_TIMEOUT_CLK_UNIT) 3302 + host->timeout_clk *= 1000; 3303 + 3404 3304 if (host->timeout_clk == 0) { 3405 - if (host->ops->get_timeout_clock) { 3406 - host->timeout_clk = 3407 - host->ops->get_timeout_clock(host); 3408 - } else { 3305 + if (!host->ops->get_timeout_clock) { 3409 3306 pr_err("%s: Hardware doesn't specify timeout clock frequency.\n", 3410 3307 mmc_hostname(mmc)); 3411 3308 ret = -ENODEV; 3412 3309 goto undma; 3413 3310 } 3414 - } 3415 3311 3416 - if (host->caps & SDHCI_TIMEOUT_CLK_UNIT) 3417 - host->timeout_clk 
*= 1000; 3312 + host->timeout_clk = 3313 + DIV_ROUND_UP(host->ops->get_timeout_clock(host), 3314 + 1000); 3315 + } 3418 3316 3419 3317 if (override_timeout_clk) 3420 3318 host->timeout_clk = override_timeout_clk; ··· 3438 3332 !(host->flags & SDHCI_USE_SDMA)) && 3439 3333 !(host->quirks2 & SDHCI_QUIRK2_ACMD23_BROKEN)) { 3440 3334 host->flags |= SDHCI_AUTO_CMD23; 3441 - DBG("%s: Auto-CMD23 available\n", mmc_hostname(mmc)); 3335 + DBG("Auto-CMD23 available\n"); 3442 3336 } else { 3443 - DBG("%s: Auto-CMD23 unavailable\n", mmc_hostname(mmc)); 3337 + DBG("Auto-CMD23 unavailable\n"); 3444 3338 } 3445 3339 3446 3340 /* ··· 3704 3598 } 3705 3599 EXPORT_SYMBOL_GPL(sdhci_setup_host); 3706 3600 3601 + void sdhci_cleanup_host(struct sdhci_host *host) 3602 + { 3603 + struct mmc_host *mmc = host->mmc; 3604 + 3605 + if (!IS_ERR(mmc->supply.vqmmc)) 3606 + regulator_disable(mmc->supply.vqmmc); 3607 + 3608 + if (host->align_buffer) 3609 + dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz + 3610 + host->adma_table_sz, host->align_buffer, 3611 + host->align_addr); 3612 + host->adma_table = NULL; 3613 + host->align_buffer = NULL; 3614 + } 3615 + EXPORT_SYMBOL_GPL(sdhci_cleanup_host); 3616 + 3707 3617 int __sdhci_add_host(struct sdhci_host *host) 3708 3618 { 3709 3619 struct mmc_host *mmc = host->mmc; ··· 3784 3662 untasklet: 3785 3663 tasklet_kill(&host->finish_tasklet); 3786 3664 3787 - if (!IS_ERR(mmc->supply.vqmmc)) 3788 - regulator_disable(mmc->supply.vqmmc); 3789 - 3790 - if (host->align_buffer) 3791 - dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz + 3792 - host->adma_table_sz, host->align_buffer, 3793 - host->align_addr); 3794 - host->adma_table = NULL; 3795 - host->align_buffer = NULL; 3796 - 3797 3665 return ret; 3798 3666 } 3799 3667 EXPORT_SYMBOL_GPL(__sdhci_add_host); ··· 3796 3684 if (ret) 3797 3685 return ret; 3798 3686 3799 - return __sdhci_add_host(host); 3687 + ret = __sdhci_add_host(host); 3688 + if (ret) 3689 + goto cleanup; 3690 + 3691 + return 0; 3692 + 
3693 + cleanup: 3694 + sdhci_cleanup_host(host); 3695 + 3696 + return ret; 3800 3697 } 3801 3698 EXPORT_SYMBOL_GPL(sdhci_add_host); 3802 3699
+44 -21
drivers/mmc/host/sdhci.h
···
 #define SDHCI_INT_CARD_REMOVE	0x00000080
 #define SDHCI_INT_CARD_INT	0x00000100
 #define SDHCI_INT_RETUNE	0x00001000
+#define SDHCI_INT_CQE		0x00004000
 #define SDHCI_INT_ERROR	0x00008000
 #define SDHCI_INT_TIMEOUT	0x00010000
 #define SDHCI_INT_CRC		0x00020000
···
 		SDHCI_INT_DATA_END_BIT | SDHCI_INT_ADMA_ERROR | \
 		SDHCI_INT_BLK_GAP)
 #define SDHCI_INT_ALL_MASK	((unsigned int)-1)
+
+#define SDHCI_CQE_INT_ERR_MASK ( \
+	SDHCI_INT_ADMA_ERROR | SDHCI_INT_BUS_POWER | SDHCI_INT_DATA_END_BIT | \
+	SDHCI_INT_DATA_CRC | SDHCI_INT_DATA_TIMEOUT | SDHCI_INT_INDEX | \
+	SDHCI_INT_END_BIT | SDHCI_INT_CRC | SDHCI_INT_TIMEOUT)
+
+#define SDHCI_CQE_INT_MASK (SDHCI_CQE_INT_ERR_MASK | SDHCI_INT_CQE)
 
 #define SDHCI_ACMD12_ERR	0x3C
 
···
 	/* cached registers */
 	u32 ier;
 
+	bool cqe_on;		/* CQE is operating */
+	u32 cqe_ier;		/* CQE interrupt mask */
+	u32 cqe_err_ier;	/* CQE error interrupt mask */
+
 	wait_queue_head_t buf_ready_int;	/* Waitqueue for Buffer Read Ready interrupt */
 	unsigned int tuning_done;	/* Condition flag set when CMD19 succeeds */
···
 #define SDHCI_TUNING_MODE_1	0
 #define SDHCI_TUNING_MODE_2	1
 #define SDHCI_TUNING_MODE_3	2
+	/* Delay (ms) between tuning commands */
+	int tuning_delay;
 
 	unsigned long private[0] ____cacheline_aligned;
 };
···
 	void	(*set_power)(struct sdhci_host *host, unsigned char mode,
 			     unsigned short vdd);
 
+	u32	(*irq)(struct sdhci_host *host, u32 intmask);
+
 	int	(*enable_dma)(struct sdhci_host *host);
 	unsigned int	(*get_max_clock)(struct sdhci_host *host);
 	unsigned int	(*get_min_clock)(struct sdhci_host *host);
+	/* get_timeout_clock should return clk rate in unit of Hz */
 	unsigned int	(*get_timeout_clock)(struct sdhci_host *host);
 	unsigned int	(*get_max_timeout_count)(struct sdhci_host *host);
 	void	(*set_timeout)(struct sdhci_host *host,
···
 	void	(*adma_workaround)(struct sdhci_host *host, u32 intmask);
 	void	(*card_event)(struct sdhci_host *host);
 	void	(*voltage_switch)(struct sdhci_host *host);
-	int	(*select_drive_strength)(struct sdhci_host *host,
-					 struct mmc_card *card,
-					 unsigned int max_dtr, int host_drv,
-					 int card_drv, int *drv_type);
 };
 
 #ifdef CONFIG_MMC_SDHCI_IO_ACCESSORS
···
 
 #endif /* CONFIG_MMC_SDHCI_IO_ACCESSORS */
 
-extern struct sdhci_host *sdhci_alloc_host(struct device *dev,
-	size_t priv_size);
-extern void sdhci_free_host(struct sdhci_host *host);
+struct sdhci_host *sdhci_alloc_host(struct device *dev, size_t priv_size);
+void sdhci_free_host(struct sdhci_host *host);
 
 static inline void *sdhci_priv(struct sdhci_host *host)
 {
 	return host->private;
 }
 
-extern void sdhci_card_detect(struct sdhci_host *host);
-extern void __sdhci_read_caps(struct sdhci_host *host, u16 *ver, u32 *caps,
-			      u32 *caps1);
-extern int sdhci_setup_host(struct sdhci_host *host);
-extern int __sdhci_add_host(struct sdhci_host *host);
-extern int sdhci_add_host(struct sdhci_host *host);
-extern void sdhci_remove_host(struct sdhci_host *host, int dead);
-extern void sdhci_send_command(struct sdhci_host *host,
-			       struct mmc_command *cmd);
+void sdhci_card_detect(struct sdhci_host *host);
+void __sdhci_read_caps(struct sdhci_host *host, u16 *ver, u32 *caps,
+		       u32 *caps1);
+int sdhci_setup_host(struct sdhci_host *host);
+void sdhci_cleanup_host(struct sdhci_host *host);
+int __sdhci_add_host(struct sdhci_host *host);
+int sdhci_add_host(struct sdhci_host *host);
+void sdhci_remove_host(struct sdhci_host *host, int dead);
+void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd);
 
 static inline void sdhci_read_caps(struct sdhci_host *host)
 {
···
 void sdhci_reset(struct sdhci_host *host, u8 mask);
 void sdhci_set_uhs_signaling(struct sdhci_host *host, unsigned timing);
 int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode);
+void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios);
+int sdhci_start_signal_voltage_switch(struct mmc_host *mmc,
+				      struct mmc_ios *ios);
+void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable);
 
 #ifdef CONFIG_PM
-extern int sdhci_suspend_host(struct sdhci_host *host);
-extern int sdhci_resume_host(struct sdhci_host *host);
-extern void sdhci_enable_irq_wakeups(struct sdhci_host *host);
-extern int sdhci_runtime_suspend_host(struct sdhci_host *host);
-extern int sdhci_runtime_resume_host(struct sdhci_host *host);
+int sdhci_suspend_host(struct sdhci_host *host);
+int sdhci_resume_host(struct sdhci_host *host);
+void sdhci_enable_irq_wakeups(struct sdhci_host *host);
+int sdhci_runtime_suspend_host(struct sdhci_host *host);
+int sdhci_runtime_resume_host(struct sdhci_host *host);
 #endif
+
+void sdhci_cqe_enable(struct mmc_host *mmc);
+void sdhci_cqe_disable(struct mmc_host *mmc, bool recovery);
+bool sdhci_cqe_irq(struct sdhci_host *host, u32 intmask, int *cmd_error,
+		   int *data_error);
+
+void sdhci_dumpregs(struct sdhci_host *host);
 
 #endif /* __SDHCI_HW_H */
+4 -12
drivers/mmc/host/sunxi-mmc.c
···
 	wmb();
 }
 
-static enum dma_data_direction sunxi_mmc_get_dma_dir(struct mmc_data *data)
-{
-	if (data->flags & MMC_DATA_WRITE)
-		return DMA_TO_DEVICE;
-	else
-		return DMA_FROM_DEVICE;
-}
-
 static int sunxi_mmc_map_dma(struct sunxi_mmc_host *host,
 			     struct mmc_data *data)
 {
···
 	struct scatterlist *sg;
 
 	dma_len = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
-			     sunxi_mmc_get_dma_dir(data));
+			     mmc_get_dma_dir(data));
 	if (dma_len == 0) {
 		dev_err(mmc_dev(host->mmc), "dma_map_sg failed\n");
 		return -ENOMEM;
···
 		    cmd->opcode == SD_IO_RW_DIRECT))
 		return;
 
-	dev_err(mmc_dev(host->mmc),
+	dev_dbg(mmc_dev(host->mmc),
 		"smc %d err, cmd %d,%s%s%s%s%s%s%s%s%s%s !!\n",
 		host->mmc->index, cmd->opcode,
 		data ? (data->flags & MMC_DATA_WRITE ? " WR" : " RD") : "",
···
 		rval |= SDXC_FIFO_RESET;
 		mmc_writel(host, REG_GCTRL, rval);
 		dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
-			     sunxi_mmc_get_dma_dir(data));
+			     mmc_get_dma_dir(data));
 	}
 
 	mmc_writel(host, REG_RINTR, 0xffff);
+8 -4
drivers/mmc/host/tmio_mmc.h
···
 #define CTL_CLK_AND_WAIT_CTL 0x138
 #define CTL_RESET_SDIO 0x1e0
 
-/* Definitions for values the CTRL_STATUS register can take. */
+/* Definitions for values the CTL_STOP_INTERNAL_ACTION register can take */
+#define TMIO_STOP_STP		BIT(0)
+#define TMIO_STOP_SEC		BIT(8)
+
+/* Definitions for values the CTL_STATUS register can take */
 #define TMIO_STAT_CMDRESPEND	BIT(0)
 #define TMIO_STAT_DATAEND	BIT(2)
 #define TMIO_STAT_CARD_REMOVE	BIT(3)
···
 #define TMIO_STAT_CARD_INSERT_A	BIT(9)
 #define TMIO_STAT_SIGSTATE_A	BIT(10)
 
-/* These belong technically to CTRL_STATUS2, but the driver merges them */
+/* These belong technically to CTL_STATUS2, but the driver merges them */
 #define TMIO_STAT_CMD_IDX_ERR	BIT(16)
 #define TMIO_STAT_CRCFAIL	BIT(17)
 #define TMIO_STAT_STOPBIT_ERR	BIT(18)
···
 
 #define TMIO_BBS 512		/* Boot block size */
 
-/* Definitions for values the CTRL_SDIO_STATUS register can take. */
+/* Definitions for values the CTL_SDIO_STATUS register can take */
 #define TMIO_SDIO_STAT_IOIRQ	0x0001
 #define TMIO_SDIO_STAT_EXPUB52	0x4000
 #define TMIO_SDIO_STAT_EXWT	0x8000
···
 	bool			force_pio;
 	struct dma_chan		*chan_rx;
 	struct dma_chan		*chan_tx;
-	struct tasklet_struct	dma_complete;
+	struct completion	dma_dataend;
 	struct tasklet_struct	dma_issue;
 	struct scatterlist	bounce_sg;
 	u8			*bounce_buf;
+37 -24
drivers/mmc/host/tmio_mmc_dma.c
···
 	tmio_mmc_enable_dma(host, true);
 }
 
+static void tmio_mmc_dma_callback(void *arg)
+{
+	struct tmio_mmc_host *host = arg;
+
+	spin_lock_irq(&host->lock);
+
+	if (!host->data)
+		goto out;
+
+	if (host->data->flags & MMC_DATA_READ)
+		dma_unmap_sg(host->chan_rx->device->dev,
+			     host->sg_ptr, host->sg_len,
+			     DMA_FROM_DEVICE);
+	else
+		dma_unmap_sg(host->chan_tx->device->dev,
+			     host->sg_ptr, host->sg_len,
+			     DMA_TO_DEVICE);
+
+	spin_unlock_irq(&host->lock);
+
+	wait_for_completion(&host->dma_dataend);
+
+	spin_lock_irq(&host->lock);
+	tmio_mmc_do_data_irq(host);
+out:
+	spin_unlock_irq(&host->lock);
+}
+
 static void tmio_mmc_start_dma_rx(struct tmio_mmc_host *host)
 {
 	struct scatterlist *sg = host->sg_ptr, *sg_tmp;
···
 			DMA_DEV_TO_MEM, DMA_CTRL_ACK);
 
 	if (desc) {
+		reinit_completion(&host->dma_dataend);
+		desc->callback = tmio_mmc_dma_callback;
+		desc->callback_param = host;
+
 		cookie = dmaengine_submit(desc);
 		if (cookie < 0) {
 			desc = NULL;
···
 			DMA_MEM_TO_DEV, DMA_CTRL_ACK);
 
 	if (desc) {
+		reinit_completion(&host->dma_dataend);
+		desc->callback = tmio_mmc_dma_callback;
+		desc->callback_param = host;
+
 		cookie = dmaengine_submit(desc);
 		if (cookie < 0) {
 			desc = NULL;
···
 
 	if (chan)
 		dma_async_issue_pending(chan);
-}
-
-static void tmio_mmc_tasklet_fn(unsigned long arg)
-{
-	struct tmio_mmc_host *host = (struct tmio_mmc_host *)arg;
-
-	spin_lock_irq(&host->lock);
-
-	if (!host->data)
-		goto out;
-
-	if (host->data->flags & MMC_DATA_READ)
-		dma_unmap_sg(host->chan_rx->device->dev,
-			     host->sg_ptr, host->sg_len,
-			     DMA_FROM_DEVICE);
-	else
-		dma_unmap_sg(host->chan_tx->device->dev,
-			     host->sg_ptr, host->sg_len,
-			     DMA_TO_DEVICE);
-
-	tmio_mmc_do_data_irq(host);
-out:
-	spin_unlock_irq(&host->lock);
 }
 
 void tmio_mmc_request_dma(struct tmio_mmc_host *host, struct tmio_mmc_data *pdata)
···
 	if (!host->bounce_buf)
 		goto ebouncebuf;
 
-	tasklet_init(&host->dma_complete, tmio_mmc_tasklet_fn, (unsigned long)host);
+	init_completion(&host->dma_dataend);
 	tasklet_init(&host->dma_issue, tmio_mmc_issue_tasklet_fn, (unsigned long)host);
 }
 
+19 -17
drivers/mmc/host/tmio_mmc_pio.c
···
 
 	/* CMD12 is handled by hardware */
 	if (cmd->opcode == MMC_STOP_TRANSMISSION && !cmd->arg) {
-		sd_ctrl_write16(host, CTL_STOP_INTERNAL_ACTION, 0x001);
+		sd_ctrl_write16(host, CTL_STOP_INTERNAL_ACTION, TMIO_STOP_STP);
 		return 0;
 	}
 
···
 	if (data) {
 		c |= DATA_PRESENT;
 		if (data->blocks > 1) {
-			sd_ctrl_write16(host, CTL_STOP_INTERNAL_ACTION, 0x100);
+			sd_ctrl_write16(host, CTL_STOP_INTERNAL_ACTION, TMIO_STOP_SEC);
 			c |= TRANSFER_MULTI;
 
 			/*
···
 	}
 
 	if (stop) {
-		if (stop->opcode == MMC_STOP_TRANSMISSION && !stop->arg)
-			sd_ctrl_write16(host, CTL_STOP_INTERNAL_ACTION, 0x000);
-		else
-			BUG();
+		if (stop->opcode != MMC_STOP_TRANSMISSION || stop->arg)
+			dev_err(&host->pdev->dev, "unsupported stop: CMD%u,0x%x. We did CMD12,0\n",
+				stop->opcode, stop->arg);
+
+		/* fill in response from auto CMD12 */
+		stop->resp[0] = sd_ctrl_read16_and_16_as_32(host, CTL_RESPONSE);
+
+		sd_ctrl_write16(host, CTL_STOP_INTERNAL_ACTION, 0);
 	}
 
 	schedule_work(&host->done);
···
 
 		if (done) {
 			tmio_mmc_disable_mmc_irqs(host, TMIO_STAT_DATAEND);
-			tasklet_schedule(&host->dma_complete);
+			complete(&host->dma_dataend);
 		}
 	} else if (host->chan_rx && (data->flags & MMC_DATA_READ) && !host->force_pio) {
 		tmio_mmc_disable_mmc_irqs(host, TMIO_STAT_DATAEND);
-		tasklet_schedule(&host->dma_complete);
+		complete(&host->dma_dataend);
 	} else {
 		tmio_mmc_do_data_irq(host);
 		tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_READOP | TMIO_MASK_WRITEOP);
···
 	struct tmio_mmc_host *host = mmc_priv(mmc);
 	int i, ret = 0;
 
-	if (!host->tap_num) {
-		if (!host->init_tuning || !host->select_tuning)
-			/* Tuning is not supported */
-			goto out;
+	if (!host->init_tuning || !host->select_tuning)
+		/* Tuning is not supported */
+		goto out;
 
-		host->tap_num = host->init_tuning(host);
-		if (!host->tap_num)
-			/* Tuning is not supported */
-			goto out;
-	}
+	host->tap_num = host->init_tuning(host);
+	if (!host->tap_num)
+		/* Tuning is not supported */
+		goto out;
 
 	if (host->tap_num * 2 >= sizeof(host->taps) * BITS_PER_BYTE) {
 		dev_warn_once(&host->pdev->dev,
+10
include/linux/mmc/card.h
···
 	unsigned int		boot_ro_lock;	/* ro lock support */
 	bool			boot_ro_lockable;
 	bool			ffu_capable;	/* Firmware upgrade support */
+	bool			cmdq_en;	/* Command Queue enabled */
 	bool			cmdq_support;	/* Command Queue supported */
 	unsigned int		cmdq_depth;	/* Command Queue depth */
 #define MMC_FIRMWARE_LEN 8
···
 struct mmc_host;
 struct sdio_func;
 struct sdio_func_tuple;
+struct mmc_queue_req;
 
 #define SDIO_MAX_FUNCS		7
 
···
 #define MMC_QUIRK_TRIM_BROKEN	(1<<12)		/* Skip trim */
 #define MMC_QUIRK_BROKEN_HPI	(1<<13)		/* Disable broken HPI support */
 
+	bool			reenable_cmdq;	/* Re-enable Command Queue */
+
 	unsigned int		erase_size;	/* erase size in sectors */
 	unsigned int		erase_shift;	/* if erase unit is power 2 */
 	unsigned int		pref_erase;	/* in sectors */
···
 	struct dentry		*debugfs_root;
 	struct mmc_part	part[MMC_NUM_PHY_PARTITION]; /* physical partitions */
 	unsigned int	nr_parts;
+
+	struct mmc_queue_req	*mqrq;		/* Shared queue structure */
+	unsigned int		bouncesz;	/* Bounce buffer size */
+	int			qdepth;		/* Shared queue depth */
 };
 
 static inline bool mmc_large_sector(struct mmc_card *card)
 {
 	return card->ext_csd.data_sector_size == 4096;
 }
+
+bool mmc_card_is_blockaddr(struct mmc_card *card);
 
 #define mmc_card_mmc(c)		((c)->type == MMC_TYPE_MMC)
 #define mmc_card_sd(c)		((c)->type == MMC_TYPE_SD)
+6
include/linux/mmc/host.h
···
 #include <linux/mmc/core.h>
 #include <linux/mmc/card.h>
 #include <linux/mmc/pm.h>
+#include <linux/dma-direction.h>
 
 struct mmc_ios {
 	unsigned int	clock;			/* clock rate */
···
 static inline bool mmc_can_retune(struct mmc_host *host)
 {
 	return host->can_retune == 1;
+}
+
+static inline enum dma_data_direction mmc_get_dma_dir(struct mmc_data *data)
+{
+	return data->flags & MMC_DATA_WRITE ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
 }
 
 int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error);