Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'nand/for-4.16' of git://git.infradead.org/linux-mtd into mtd/next

Pull NAND changes from Boris Brezillon:

"
Core changes:
* Fix NAND_CMD_NONE handling in nand_command[_lp]() hooks
* Introduce the ->exec_op() infrastructure
* Rework NAND buffers handling
* Fix ECC requirements for K9F4G08U0D
* Fix nand_do_read_oob() to return the number of bitflips
* Mark K9F1G08U0E as not supporting subpage writes

Driver changes:
* MTK: Rework the driver to support new IP versions
* OMAP OneNAND: Full rework to use new APIs (gpiolib, dmaengine) and fix
  DT support
* Marvell: Add a new driver to replace the pxa3xx one
"
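
Of the core changes, the ->exec_op() infrastructure is the biggest conceptual shift: instead of implementing several separate hooks (cmdfunc, read_buf, ...), a controller driver receives a whole NAND operation as an array of typed instructions and replays it in one callback. The stand-alone C mock below only illustrates the shape of that idea — the type and function names are simplified stand-ins, not the real kernel API from rawnand.h:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mock of the instruction-list idea behind ->exec_op(). The real kernel
 * types (struct nand_op_instr etc.) are richer; names here are invented. */
enum instr_type { OP_CMD, OP_ADDR, OP_DATA_IN, OP_WAITRDY };

struct instr {
	enum instr_type type;
	uint8_t val;		/* opcode or address cycle */
};

/* A controller "driver" consumes the whole sequence in one hook instead
 * of reconstructing the intent from separate cmdfunc()/read_buf() calls.
 * This mock merely counts command/address bus cycles. */
static int exec_op(const struct instr *op, size_t ninstrs)
{
	int cycles = 0;
	size_t i;

	for (i = 0; i < ninstrs; i++) {
		switch (op[i].type) {
		case OP_CMD:
		case OP_ADDR:
			cycles++;	/* one bus cycle per cmd/addr byte */
			break;
		case OP_DATA_IN:
		case OP_WAITRDY:
			break;		/* data/ready handling elided */
		}
	}
	return cycles;
}

int page_read_cycles(void)
{
	/* READ0 (0x00) + 5 address cycles + READSTART (0x30) + wait + data */
	const struct instr read_page[] = {
		{ OP_CMD, 0x00 },
		{ OP_ADDR, 0 }, { OP_ADDR, 0 }, { OP_ADDR, 0 },
		{ OP_ADDR, 0 }, { OP_ADDR, 0 },
		{ OP_CMD, 0x30 },
		{ OP_WAITRDY, 0 },
		{ OP_DATA_IN, 0 },
	};

	return exec_op(read_page, sizeof(read_page) / sizeof(read_page[0]));
}
```

Because the driver sees the full sequence up front, it can map it onto controller-specific features (automatic address cycling, DMA) instead of guessing from individual command bytes.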

+6463 -1914
+4 -2
Documentation/devicetree/bindings/mtd/gpmc-onenand.txt
···
 
 Required properties:
 
+- compatible:		"ti,omap2-onenand"
 - reg:			The CS line the peripheral is connected to
-- gpmc,device-width	Width of the ONENAND device connected to the GPMC
+- gpmc,device-width:	Width of the ONENAND device connected to the GPMC
 			in bytes. Must be 1 or 2.
 
 Optional properties:
 
-- dma-channel:		DMA Channel index
+- int-gpios:		GPIO specifier for the INT pin.
 
 For inline partition table parsing (optional):
 
···
 	#size-cells = <1>;
 
 	onenand@0 {
+		compatible = "ti,omap2-onenand";
 		reg = <0 0 0>;	/* CS0, offset 0 */
 		gpmc,device-width = <2>;
+123
Documentation/devicetree/bindings/mtd/marvell-nand.txt
···
+Marvell NAND Flash Controller (NFC)
+
+Required properties:
+- compatible: can be one of the following:
+    * "marvell,armada-8k-nand-controller"
+    * "marvell,armada370-nand-controller"
+    * "marvell,pxa3xx-nand-controller"
+    * "marvell,armada-8k-nand" (deprecated)
+    * "marvell,armada370-nand" (deprecated)
+    * "marvell,pxa3xx-nand" (deprecated)
+  Compatibles marked deprecated support only the old bindings described
+  at the bottom.
+- reg: NAND flash controller memory area.
+- #address-cells: shall be set to 1. Encode the NAND CS.
+- #size-cells: shall be set to 0.
+- interrupts: shall define the NAND controller interrupt.
+- clocks: shall reference the NAND controller clock.
+- marvell,system-controller: Set to retrieve the syscon node that handles
+  NAND controller related registers (only required with the
+  "marvell,armada-8k-nand[-controller]" compatibles).
+
+Optional properties:
+- label: see partition.txt. New platforms shall omit this property.
+- dmas: shall reference the DMA channel associated with the NAND
+  controller. This property is only used with the
+  "marvell,pxa3xx-nand[-controller]" compatible strings.
+- dma-names: shall be "rxtx".
+  This property is only used with the "marvell,pxa3xx-nand[-controller]"
+  compatible strings.
+
+Optional children nodes:
+Children nodes represent the available NAND chips.
+
+Required properties:
+- reg: shall contain the native Chip Select ids (0-3).
+- nand-rb: see nand.txt (0-1).
+
+Optional properties:
+- marvell,nand-keep-config: orders the driver not to take the timings
+  from the core and to leave them completely untouched. Bootloader
+  timings will then be used.
+- label: MTD name.
+- nand-on-flash-bbt: see nand.txt.
+- nand-ecc-mode: see nand.txt. Will use hardware ECC if not specified.
+- nand-ecc-algo: see nand.txt. This property is mainly useful when not
+  using hardware ECC. However, it may be added for clarity when using
+  hardware ECC, in which case it is ignored by the driver because the
+  ECC mode is chosen depending on the page size and the strength
+  required by the NAND chip. This value may be overwritten with the
+  nand-ecc-strength property.
+- nand-ecc-strength: see nand.txt.
+- nand-ecc-step-size: see nand.txt. Marvell's NAND flash controller uses
+  fixed strengths (1-bit for Hamming, 16-bit for BCH), so the actual
+  step size will shrink or grow in order to fit the required strength.
+  Step sizes are not arbitrary and follow certain patterns described in
+  AN-379, "Marvell SoC NFC ECC".
+
+See Documentation/devicetree/bindings/mtd/nand.txt for more details on
+generic bindings.
+
+
+Example:
+nand_controller: nand-controller@d0000 {
+	compatible = "marvell,armada370-nand-controller";
+	reg = <0xd0000 0x54>;
+	#address-cells = <1>;
+	#size-cells = <0>;
+	interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
+	clocks = <&coredivclk 0>;
+
+	nand@0 {
+		reg = <0>;
+		label = "main-storage";
+		nand-rb = <0>;
+		nand-ecc-mode = "hw";
+		marvell,nand-keep-config;
+		nand-on-flash-bbt;
+		nand-ecc-strength = <4>;
+		nand-ecc-step-size = <512>;
+
+		partitions {
+			compatible = "fixed-partitions";
+			#address-cells = <1>;
+			#size-cells = <1>;
+
+			partition@0 {
+				label = "Rootfs";
+				reg = <0x00000000 0x40000000>;
+			};
+		};
+	};
+};
+
+
+Note on legacy bindings: one can find, in not-yet-updated device trees,
+bindings slightly different from those described above, with the other
+properties described below, as well as the partitions defined directly
+under a so-called "nand" node (without a clear controller/chip
+separation).
+
+Legacy properties:
+- marvell,nand-enable-arbiter: set to enable the bus arbiter. All boards
+  blindly used it, and the bit was set by the bootloader on many boards
+  even though it is marked reserved in several datasheets. Setting it
+  might be needed (and is otherwise harmless), so the driver always
+  selects the bit, whether or not this property is present.
+- num-cs: number of chip-select lines to use. All boards blindly set
+  this to 1, and for a reason: other values would have failed. The
+  value of this property is ignored.
+
+Example:
+
+nand0: nand@43100000 {
+	compatible = "marvell,pxa3xx-nand";
+	reg = <0x43100000 90>;
+	interrupts = <45>;
+	dmas = <&pdma 97 0>;
+	dma-names = "rxtx";
+	#address-cells = <1>;
+	marvell,nand-keep-config;
+	marvell,nand-enable-arbiter;
+	num-cs = <1>;
+	/* Partitions (optional) */
+};
+8 -3
Documentation/devicetree/bindings/mtd/mtk-nand.txt
···
 
 The first part of NFC is NAND Controller Interface (NFI) HW.
 Required NFI properties:
-- compatible:			Should be one of "mediatek,mt2701-nfc",
-				"mediatek,mt2712-nfc".
+- compatible:			Should be one of
+				"mediatek,mt2701-nfc",
+				"mediatek,mt2712-nfc",
+				"mediatek,mt7622-nfc".
 - reg:				Base physical address and size of NFI.
 - interrupts:			Interrupts of NFI.
 - clocks:			NFI required clocks.
···
 ==============
 
 Required BCH properties:
-- compatible:	Should be one of "mediatek,mt2701-ecc", "mediatek,mt2712-ecc".
+- compatible:	Should be one of
+		"mediatek,mt2701-ecc",
+		"mediatek,mt2712-ecc",
+		"mediatek,mt7622-ecc".
 - reg:		Base physical address and size of ECC.
 - interrupts:	Interrupts of ECC.
 - clocks:	ECC required clocks.
+1
Documentation/devicetree/bindings/mtd/nand.txt
···
 		This is particularly useful when only the in-band area is
 		used by the upper layers, and you want to make your NAND
 		as reliable as possible.
+- nand-rb:	shall contain the native Ready/Busy ids.
 
 The ECC strength and ECC step size properties define the correction capability
 of a controller. Together, they say a controller can correct "{strength} bit
+15 -7
MAINTAINERS
···
 F:	drivers/input/touchscreen/atmel_mxt_ts.c
 F:	include/linux/platform_data/atmel_mxt_ts.h
 
-ATMEL NAND DRIVER
-M:	Wenyou Yang <wenyou.yang@atmel.com>
-M:	Josh Wu <rainyfeeling@outlook.com>
-L:	linux-mtd@lists.infradead.org
-S:	Supported
-F:	drivers/mtd/nand/atmel/*
-
 ATMEL SAMA5D2 ADC DRIVER
 M:	Ludovic Desroches <ludovic.desroches@microchip.com>
 L:	linux-iio@vger.kernel.org
···
 S:	Odd Fixes
 F:	drivers/net/wireless/marvell/mwl8k.c
 
+MARVELL NAND CONTROLLER DRIVER
+M:	Miquel Raynal <miquel.raynal@free-electrons.com>
+L:	linux-mtd@lists.infradead.org
+S:	Maintained
+F:	drivers/mtd/nand/marvell_nand.c
+F:	Documentation/devicetree/bindings/mtd/marvell-nand.txt
+
 MARVELL SOC MMC/SD/SDIO CONTROLLER DRIVER
 M:	Nicolas Pitre <nico@fluxnic.net>
 S:	Odd Fixes
···
 F:	drivers/media/platform/atmel/atmel-isc.c
 F:	drivers/media/platform/atmel/atmel-isc-regs.h
 F:	devicetree/bindings/media/atmel-isc.txt
+
+MICROCHIP / ATMEL NAND DRIVER
+M:	Wenyou Yang <wenyou.yang@microchip.com>
+M:	Josh Wu <rainyfeeling@outlook.com>
+L:	linux-mtd@lists.infradead.org
+S:	Supported
+F:	drivers/mtd/nand/atmel/*
+F:	Documentation/devicetree/bindings/mtd/atmel-nand.txt
 
 MICROCHIP KSZ SERIES ETHERNET SWITCH DRIVER
 M:	Woojung Huh <Woojung.Huh@microchip.com>
+1
arch/arm/boot/dts/omap2420-n8x0-common.dtsi
···
 	onenand@0,0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
+		compatible = "ti,omap2-onenand";
 		reg = <0 0 0x20000>;	/* CS0, offset 0, IO size 128K */
 
 		gpmc,sync-read;
+15 -15
arch/arm/boot/dts/omap3-igep.dtsi
···
 		gpmc,sync-read;
 		gpmc,sync-write;
 		gpmc,burst-length = <16>;
-		gpmc,burst-read;
 		gpmc,burst-wrap;
+		gpmc,burst-read;
 		gpmc,burst-write;
 		gpmc,device-width = <2>; /* GPMC_DEVWIDTH_16BIT */
 		gpmc,mux-add-data = <2>; /* GPMC_MUX_AD */
 		gpmc,cs-on-ns = <0>;
-		gpmc,cs-rd-off-ns = <87>;
-		gpmc,cs-wr-off-ns = <87>;
+		gpmc,cs-rd-off-ns = <96>;
+		gpmc,cs-wr-off-ns = <96>;
 		gpmc,adv-on-ns = <0>;
-		gpmc,adv-rd-off-ns = <10>;
-		gpmc,adv-wr-off-ns = <10>;
-		gpmc,oe-on-ns = <15>;
-		gpmc,oe-off-ns = <87>;
+		gpmc,adv-rd-off-ns = <12>;
+		gpmc,adv-wr-off-ns = <12>;
+		gpmc,oe-on-ns = <18>;
+		gpmc,oe-off-ns = <96>;
 		gpmc,we-on-ns = <0>;
-		gpmc,we-off-ns = <87>;
-		gpmc,rd-cycle-ns = <112>;
-		gpmc,wr-cycle-ns = <112>;
-		gpmc,access-ns = <81>;
-		gpmc,page-burst-access-ns = <15>;
+		gpmc,we-off-ns = <96>;
+		gpmc,rd-cycle-ns = <114>;
+		gpmc,wr-cycle-ns = <114>;
+		gpmc,access-ns = <90>;
+		gpmc,page-burst-access-ns = <12>;
 		gpmc,bus-turnaround-ns = <0>;
 		gpmc,cycle2cycle-delay-ns = <0>;
 		gpmc,wait-monitoring-ns = <0>;
-		gpmc,clk-activation-ns = <5>;
+		gpmc,clk-activation-ns = <6>;
 		gpmc,wr-data-mux-bus-ns = <30>;
-		gpmc,wr-access-ns = <81>;
-		gpmc,sync-clk-ps = <15000>;
+		gpmc,wr-access-ns = <90>;
+		gpmc,sync-clk-ps = <12000>;
 
 		#address-cells = <1>;
 		#size-cells = <1>;
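
These timings are no longer hand-tuned: with the OneNAND rework they are derived from the GPMC functional clock (note gpmc,sync-clk-ps dropping from 15000 to 12000, i.e. a 12 ns / ~83 MHz clock). The sketch below shows the basic round-up-to-clock-ticks step such a timing calculation performs; it is a heavy simplification of the kernel's gpmc_calc_timings(), and the helper name is hypothetical. It explains, for example, why a ~87 ns chip-select deassert becomes 96 ns (8 x 12 ns) above:

```c
#include <assert.h>

/* Hypothetical helper: round a device timing requirement (in ps) up to a
 * whole number of GPMC clock ticks and return the resulting delay in ns.
 * A much simplified version of the rounding gpmc_calc_timings() does. */
int gpmc_round_ps_to_ns(int t_ps, int clk_period_ps)
{
	/* ceiling division: a timing must never be rounded *down* */
	int ticks = (t_ps + clk_period_ps - 1) / clk_period_ps;

	return ticks * clk_period_ps / 1000;
}
```

With a 12000 ps clock, an 87000 ps requirement needs 8 ticks, so gpmc_round_ps_to_ns(87000, 12000) yields 96 — matching the new cs-rd-off-ns value. Not every value in the node is a plain ceiling (some include extra setup/hold margins), so treat this only as the core rounding rule.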
+1
arch/arm/boot/dts/omap3-n900.dts
···
 	onenand@0,0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
+		compatible = "ti,omap2-onenand";
 		reg = <0 0 0x20000>;	/* CS0, offset 0, IO size 128K */
 
 		gpmc,sync-read;
+1
arch/arm/boot/dts/omap3-n950-n9.dtsi
···
 	onenand@0,0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
+		compatible = "ti,omap2-onenand";
 		reg = <0 0 0x20000>;	/* CS0, offset 0, IO size 128K */
 
 		gpmc,sync-read;
+1
arch/arm/boot/dts/omap3430-sdp.dts
···
 		linux,mtd-name= "samsung,kfm2g16q2m-deb8";
 		#address-cells = <1>;
 		#size-cells = <1>;
+		compatible = "ti,omap2-onenand";
 		reg = <2 0 0x20000>;	/* CS2, offset 0, IO size 4 */
 
 		gpmc,device-width = <2>;
+1 -1
arch/arm/configs/mvebu_v7_defconfig
···
 CONFIG_MTD_PHYSMAP_OF=y
 CONFIG_MTD_M25P80=y
 CONFIG_MTD_NAND=y
-CONFIG_MTD_NAND_PXA3xx=y
+CONFIG_MTD_NAND_MARVELL=y
 CONFIG_MTD_SPI_NOR=y
 CONFIG_SRAM=y
 CONFIG_MTD_UBI=y
-3
arch/arm/mach-omap2/Makefile
···
 obj-y					+= omap_phy_internal.o
 
 obj-$(CONFIG_MACH_OMAP2_TUSB6010)	+= usb-tusb6010.o
-
-onenand-$(CONFIG_MTD_ONENAND_OMAP2)	:= gpmc-onenand.o
-obj-y					+= $(onenand-m) $(onenand-y)
-409
arch/arm/mach-omap2/gpmc-onenand.c
··· 1 - /* 2 - * linux/arch/arm/mach-omap2/gpmc-onenand.c 3 - * 4 - * Copyright (C) 2006 - 2009 Nokia Corporation 5 - * Contacts: Juha Yrjola 6 - * Tony Lindgren 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 - */ 12 - 13 - #include <linux/string.h> 14 - #include <linux/kernel.h> 15 - #include <linux/platform_device.h> 16 - #include <linux/mtd/onenand_regs.h> 17 - #include <linux/io.h> 18 - #include <linux/omap-gpmc.h> 19 - #include <linux/platform_data/mtd-onenand-omap2.h> 20 - #include <linux/err.h> 21 - 22 - #include <asm/mach/flash.h> 23 - 24 - #include "soc.h" 25 - 26 - #define ONENAND_IO_SIZE SZ_128K 27 - 28 - #define ONENAND_FLAG_SYNCREAD (1 << 0) 29 - #define ONENAND_FLAG_SYNCWRITE (1 << 1) 30 - #define ONENAND_FLAG_HF (1 << 2) 31 - #define ONENAND_FLAG_VHF (1 << 3) 32 - 33 - static unsigned onenand_flags; 34 - static unsigned latency; 35 - 36 - static struct omap_onenand_platform_data *gpmc_onenand_data; 37 - 38 - static struct resource gpmc_onenand_resource = { 39 - .flags = IORESOURCE_MEM, 40 - }; 41 - 42 - static struct platform_device gpmc_onenand_device = { 43 - .name = "omap2-onenand", 44 - .id = -1, 45 - .num_resources = 1, 46 - .resource = &gpmc_onenand_resource, 47 - }; 48 - 49 - static struct gpmc_settings onenand_async = { 50 - .device_width = GPMC_DEVWIDTH_16BIT, 51 - .mux_add_data = GPMC_MUX_AD, 52 - }; 53 - 54 - static struct gpmc_settings onenand_sync = { 55 - .burst_read = true, 56 - .burst_wrap = true, 57 - .burst_len = GPMC_BURST_16, 58 - .device_width = GPMC_DEVWIDTH_16BIT, 59 - .mux_add_data = GPMC_MUX_AD, 60 - .wait_pin = 0, 61 - }; 62 - 63 - static void omap2_onenand_calc_async_timings(struct gpmc_timings *t) 64 - { 65 - struct gpmc_device_timings dev_t; 66 - const int t_cer = 15; 67 - const int t_avdp = 12; 68 - const int t_aavdh = 7; 69 - const int t_ce = 76; 70 - const int 
t_aa = 76; 71 - const int t_oe = 20; 72 - const int t_cez = 20; /* max of t_cez, t_oez */ 73 - const int t_wpl = 40; 74 - const int t_wph = 30; 75 - 76 - memset(&dev_t, 0, sizeof(dev_t)); 77 - 78 - dev_t.t_avdp_r = max_t(int, t_avdp, t_cer) * 1000; 79 - dev_t.t_avdp_w = dev_t.t_avdp_r; 80 - dev_t.t_aavdh = t_aavdh * 1000; 81 - dev_t.t_aa = t_aa * 1000; 82 - dev_t.t_ce = t_ce * 1000; 83 - dev_t.t_oe = t_oe * 1000; 84 - dev_t.t_cez_r = t_cez * 1000; 85 - dev_t.t_cez_w = dev_t.t_cez_r; 86 - dev_t.t_wpl = t_wpl * 1000; 87 - dev_t.t_wph = t_wph * 1000; 88 - 89 - gpmc_calc_timings(t, &onenand_async, &dev_t); 90 - } 91 - 92 - static void omap2_onenand_set_async_mode(void __iomem *onenand_base) 93 - { 94 - u32 reg; 95 - 96 - /* Ensure sync read and sync write are disabled */ 97 - reg = readw(onenand_base + ONENAND_REG_SYS_CFG1); 98 - reg &= ~ONENAND_SYS_CFG1_SYNC_READ & ~ONENAND_SYS_CFG1_SYNC_WRITE; 99 - writew(reg, onenand_base + ONENAND_REG_SYS_CFG1); 100 - } 101 - 102 - static void set_onenand_cfg(void __iomem *onenand_base) 103 - { 104 - u32 reg = ONENAND_SYS_CFG1_RDY | ONENAND_SYS_CFG1_INT; 105 - 106 - reg |= (latency << ONENAND_SYS_CFG1_BRL_SHIFT) | 107 - ONENAND_SYS_CFG1_BL_16; 108 - if (onenand_flags & ONENAND_FLAG_SYNCREAD) 109 - reg |= ONENAND_SYS_CFG1_SYNC_READ; 110 - else 111 - reg &= ~ONENAND_SYS_CFG1_SYNC_READ; 112 - if (onenand_flags & ONENAND_FLAG_SYNCWRITE) 113 - reg |= ONENAND_SYS_CFG1_SYNC_WRITE; 114 - else 115 - reg &= ~ONENAND_SYS_CFG1_SYNC_WRITE; 116 - if (onenand_flags & ONENAND_FLAG_HF) 117 - reg |= ONENAND_SYS_CFG1_HF; 118 - else 119 - reg &= ~ONENAND_SYS_CFG1_HF; 120 - if (onenand_flags & ONENAND_FLAG_VHF) 121 - reg |= ONENAND_SYS_CFG1_VHF; 122 - else 123 - reg &= ~ONENAND_SYS_CFG1_VHF; 124 - 125 - writew(reg, onenand_base + ONENAND_REG_SYS_CFG1); 126 - } 127 - 128 - static int omap2_onenand_get_freq(struct omap_onenand_platform_data *cfg, 129 - void __iomem *onenand_base) 130 - { 131 - u16 ver = readw(onenand_base + ONENAND_REG_VERSION_ID); 132 - 
int freq; 133 - 134 - switch ((ver >> 4) & 0xf) { 135 - case 0: 136 - freq = 40; 137 - break; 138 - case 1: 139 - freq = 54; 140 - break; 141 - case 2: 142 - freq = 66; 143 - break; 144 - case 3: 145 - freq = 83; 146 - break; 147 - case 4: 148 - freq = 104; 149 - break; 150 - default: 151 - pr_err("onenand rate not detected, bad GPMC async timings?\n"); 152 - freq = 0; 153 - } 154 - 155 - return freq; 156 - } 157 - 158 - static void omap2_onenand_calc_sync_timings(struct gpmc_timings *t, 159 - unsigned int flags, 160 - int freq) 161 - { 162 - struct gpmc_device_timings dev_t; 163 - const int t_cer = 15; 164 - const int t_avdp = 12; 165 - const int t_cez = 20; /* max of t_cez, t_oez */ 166 - const int t_wpl = 40; 167 - const int t_wph = 30; 168 - int min_gpmc_clk_period, t_ces, t_avds, t_avdh, t_ach, t_aavdh, t_rdyo; 169 - int div, gpmc_clk_ns; 170 - 171 - if (flags & ONENAND_SYNC_READ) 172 - onenand_flags = ONENAND_FLAG_SYNCREAD; 173 - else if (flags & ONENAND_SYNC_READWRITE) 174 - onenand_flags = ONENAND_FLAG_SYNCREAD | ONENAND_FLAG_SYNCWRITE; 175 - 176 - switch (freq) { 177 - case 104: 178 - min_gpmc_clk_period = 9600; /* 104 MHz */ 179 - t_ces = 3; 180 - t_avds = 4; 181 - t_avdh = 2; 182 - t_ach = 3; 183 - t_aavdh = 6; 184 - t_rdyo = 6; 185 - break; 186 - case 83: 187 - min_gpmc_clk_period = 12000; /* 83 MHz */ 188 - t_ces = 5; 189 - t_avds = 4; 190 - t_avdh = 2; 191 - t_ach = 6; 192 - t_aavdh = 6; 193 - t_rdyo = 9; 194 - break; 195 - case 66: 196 - min_gpmc_clk_period = 15000; /* 66 MHz */ 197 - t_ces = 6; 198 - t_avds = 5; 199 - t_avdh = 2; 200 - t_ach = 6; 201 - t_aavdh = 6; 202 - t_rdyo = 11; 203 - break; 204 - default: 205 - min_gpmc_clk_period = 18500; /* 54 MHz */ 206 - t_ces = 7; 207 - t_avds = 7; 208 - t_avdh = 7; 209 - t_ach = 9; 210 - t_aavdh = 7; 211 - t_rdyo = 15; 212 - onenand_flags &= ~ONENAND_FLAG_SYNCWRITE; 213 - break; 214 - } 215 - 216 - div = gpmc_calc_divider(min_gpmc_clk_period); 217 - gpmc_clk_ns = gpmc_ticks_to_ns(div); 218 - if 
(gpmc_clk_ns < 15) /* >66MHz */ 219 - onenand_flags |= ONENAND_FLAG_HF; 220 - else 221 - onenand_flags &= ~ONENAND_FLAG_HF; 222 - if (gpmc_clk_ns < 12) /* >83MHz */ 223 - onenand_flags |= ONENAND_FLAG_VHF; 224 - else 225 - onenand_flags &= ~ONENAND_FLAG_VHF; 226 - if (onenand_flags & ONENAND_FLAG_VHF) 227 - latency = 8; 228 - else if (onenand_flags & ONENAND_FLAG_HF) 229 - latency = 6; 230 - else if (gpmc_clk_ns >= 25) /* 40 MHz*/ 231 - latency = 3; 232 - else 233 - latency = 4; 234 - 235 - /* Set synchronous read timings */ 236 - memset(&dev_t, 0, sizeof(dev_t)); 237 - 238 - if (onenand_flags & ONENAND_FLAG_SYNCREAD) 239 - onenand_sync.sync_read = true; 240 - if (onenand_flags & ONENAND_FLAG_SYNCWRITE) { 241 - onenand_sync.sync_write = true; 242 - onenand_sync.burst_write = true; 243 - } else { 244 - dev_t.t_avdp_w = max(t_avdp, t_cer) * 1000; 245 - dev_t.t_wpl = t_wpl * 1000; 246 - dev_t.t_wph = t_wph * 1000; 247 - dev_t.t_aavdh = t_aavdh * 1000; 248 - } 249 - dev_t.ce_xdelay = true; 250 - dev_t.avd_xdelay = true; 251 - dev_t.oe_xdelay = true; 252 - dev_t.we_xdelay = true; 253 - dev_t.clk = min_gpmc_clk_period; 254 - dev_t.t_bacc = dev_t.clk; 255 - dev_t.t_ces = t_ces * 1000; 256 - dev_t.t_avds = t_avds * 1000; 257 - dev_t.t_avdh = t_avdh * 1000; 258 - dev_t.t_ach = t_ach * 1000; 259 - dev_t.cyc_iaa = (latency + 1); 260 - dev_t.t_cez_r = t_cez * 1000; 261 - dev_t.t_cez_w = dev_t.t_cez_r; 262 - dev_t.cyc_aavdh_oe = 1; 263 - dev_t.t_rdyo = t_rdyo * 1000 + min_gpmc_clk_period; 264 - 265 - gpmc_calc_timings(t, &onenand_sync, &dev_t); 266 - } 267 - 268 - static int omap2_onenand_setup_async(void __iomem *onenand_base) 269 - { 270 - struct gpmc_timings t; 271 - int ret; 272 - 273 - /* 274 - * Note that we need to keep sync_write set for the call to 275 - * omap2_onenand_set_async_mode() to work to detect the onenand 276 - * supported clock rate for the sync timings. 
277 - */ 278 - if (gpmc_onenand_data->of_node) { 279 - gpmc_read_settings_dt(gpmc_onenand_data->of_node, 280 - &onenand_async); 281 - if (onenand_async.sync_read || onenand_async.sync_write) { 282 - if (onenand_async.sync_write) 283 - gpmc_onenand_data->flags |= 284 - ONENAND_SYNC_READWRITE; 285 - else 286 - gpmc_onenand_data->flags |= ONENAND_SYNC_READ; 287 - onenand_async.sync_read = false; 288 - } 289 - } 290 - 291 - onenand_async.sync_write = true; 292 - omap2_onenand_calc_async_timings(&t); 293 - 294 - ret = gpmc_cs_program_settings(gpmc_onenand_data->cs, &onenand_async); 295 - if (ret < 0) 296 - return ret; 297 - 298 - ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t, &onenand_async); 299 - if (ret < 0) 300 - return ret; 301 - 302 - omap2_onenand_set_async_mode(onenand_base); 303 - 304 - return 0; 305 - } 306 - 307 - static int omap2_onenand_setup_sync(void __iomem *onenand_base, int *freq_ptr) 308 - { 309 - int ret, freq = *freq_ptr; 310 - struct gpmc_timings t; 311 - 312 - if (!freq) { 313 - /* Very first call freq is not known */ 314 - freq = omap2_onenand_get_freq(gpmc_onenand_data, onenand_base); 315 - if (!freq) 316 - return -ENODEV; 317 - set_onenand_cfg(onenand_base); 318 - } 319 - 320 - if (gpmc_onenand_data->of_node) { 321 - gpmc_read_settings_dt(gpmc_onenand_data->of_node, 322 - &onenand_sync); 323 - } else { 324 - /* 325 - * FIXME: Appears to be legacy code from initial ONENAND commit. 326 - * Unclear what boards this is for and if this can be removed. 
327 - */ 328 - if (!cpu_is_omap34xx()) 329 - onenand_sync.wait_on_read = true; 330 - } 331 - 332 - omap2_onenand_calc_sync_timings(&t, gpmc_onenand_data->flags, freq); 333 - 334 - ret = gpmc_cs_program_settings(gpmc_onenand_data->cs, &onenand_sync); 335 - if (ret < 0) 336 - return ret; 337 - 338 - ret = gpmc_cs_set_timings(gpmc_onenand_data->cs, &t, &onenand_sync); 339 - if (ret < 0) 340 - return ret; 341 - 342 - set_onenand_cfg(onenand_base); 343 - 344 - *freq_ptr = freq; 345 - 346 - return 0; 347 - } 348 - 349 - static int gpmc_onenand_setup(void __iomem *onenand_base, int *freq_ptr) 350 - { 351 - struct device *dev = &gpmc_onenand_device.dev; 352 - unsigned l = ONENAND_SYNC_READ | ONENAND_SYNC_READWRITE; 353 - int ret; 354 - 355 - ret = omap2_onenand_setup_async(onenand_base); 356 - if (ret) { 357 - dev_err(dev, "unable to set to async mode\n"); 358 - return ret; 359 - } 360 - 361 - if (!(gpmc_onenand_data->flags & l)) 362 - return 0; 363 - 364 - ret = omap2_onenand_setup_sync(onenand_base, freq_ptr); 365 - if (ret) 366 - dev_err(dev, "unable to set to sync mode\n"); 367 - return ret; 368 - } 369 - 370 - int gpmc_onenand_init(struct omap_onenand_platform_data *_onenand_data) 371 - { 372 - int err; 373 - struct device *dev = &gpmc_onenand_device.dev; 374 - 375 - gpmc_onenand_data = _onenand_data; 376 - gpmc_onenand_data->onenand_setup = gpmc_onenand_setup; 377 - gpmc_onenand_device.dev.platform_data = gpmc_onenand_data; 378 - 379 - if (cpu_is_omap24xx() && 380 - (gpmc_onenand_data->flags & ONENAND_SYNC_READWRITE)) { 381 - dev_warn(dev, "OneNAND using only SYNC_READ on 24xx\n"); 382 - gpmc_onenand_data->flags &= ~ONENAND_SYNC_READWRITE; 383 - gpmc_onenand_data->flags |= ONENAND_SYNC_READ; 384 - } 385 - 386 - if (cpu_is_omap34xx()) 387 - gpmc_onenand_data->flags |= ONENAND_IN_OMAP34XX; 388 - else 389 - gpmc_onenand_data->flags &= ~ONENAND_IN_OMAP34XX; 390 - 391 - err = gpmc_cs_request(gpmc_onenand_data->cs, ONENAND_IO_SIZE, 392 - (unsigned long 
*)&gpmc_onenand_resource.start); 393 - if (err < 0) { 394 - dev_err(dev, "Cannot request GPMC CS %d, error %d\n", 395 - gpmc_onenand_data->cs, err); 396 - return err; 397 - } 398 - 399 - gpmc_onenand_resource.end = gpmc_onenand_resource.start + 400 - ONENAND_IO_SIZE - 1; 401 - 402 - err = platform_device_register(&gpmc_onenand_device); 403 - if (err) { 404 - dev_err(dev, "Unable to register OneNAND device\n"); 405 - gpmc_cs_free(gpmc_onenand_data->cs); 406 - } 407 - 408 - return err; 409 - }
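
The frequency-dependent logic of this deleted file survives the refactor: the timing tables move into drivers/memory/omap-gpmc.c, while the HF/VHF flag and latency selection becomes the OneNAND driver's job (latency is now passed as a parameter to gpmc_omap_onenand_set_timings()). For reference, the selection rule encoded in the deleted omap2_onenand_calc_sync_timings() can be restated as a stand-alone function (illustrative sketch only; flag handling collapsed into the return value):

```c
#include <assert.h>

/* Read latency (in clock cycles) as a function of the GPMC clock period,
 * restating the logic removed above: a "very high frequency" bus
 * (> 83 MHz, i.e. period < 12 ns) needs 8 cycles, a "high frequency"
 * bus (> 66 MHz) needs 6, a slow 40 MHz bus gets away with 3, and
 * anything in between uses 4. */
int onenand_latency(int gpmc_clk_ns)
{
	if (gpmc_clk_ns < 12)	/* > 83 MHz: VHF flag */
		return 8;
	if (gpmc_clk_ns < 15)	/* > 66 MHz: HF flag */
		return 6;
	if (gpmc_clk_ns >= 25)	/* <= 40 MHz */
		return 3;
	return 4;
}
```

The deleted code checked the flags in the same order (VHF first, then HF), which is why the thresholds can be folded into a simple cascade like this.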
+1 -1
arch/arm64/configs/defconfig
··· 161 161 CONFIG_MTD_M25P80=y 162 162 CONFIG_MTD_NAND=y 163 163 CONFIG_MTD_NAND_DENALI_DT=y 164 - CONFIG_MTD_NAND_PXA3xx=y 164 + CONFIG_MTD_NAND_MARVELL=y 165 165 CONFIG_MTD_SPI_NOR=y 166 166 CONFIG_BLK_DEV_LOOP=y 167 167 CONFIG_BLK_DEV_NBD=m
+120 -43
drivers/memory/omap-gpmc.c
··· 32 32 #include <linux/pm_runtime.h> 33 33 34 34 #include <linux/platform_data/mtd-nand-omap2.h> 35 - #include <linux/platform_data/mtd-onenand-omap2.h> 36 35 37 36 #include <asm/mach-types.h> 38 37 ··· 1137 1138 } 1138 1139 EXPORT_SYMBOL_GPL(gpmc_omap_get_nand_ops); 1139 1140 1141 + static void gpmc_omap_onenand_calc_sync_timings(struct gpmc_timings *t, 1142 + struct gpmc_settings *s, 1143 + int freq, int latency) 1144 + { 1145 + struct gpmc_device_timings dev_t; 1146 + const int t_cer = 15; 1147 + const int t_avdp = 12; 1148 + const int t_cez = 20; /* max of t_cez, t_oez */ 1149 + const int t_wpl = 40; 1150 + const int t_wph = 30; 1151 + int min_gpmc_clk_period, t_ces, t_avds, t_avdh, t_ach, t_aavdh, t_rdyo; 1152 + 1153 + switch (freq) { 1154 + case 104: 1155 + min_gpmc_clk_period = 9600; /* 104 MHz */ 1156 + t_ces = 3; 1157 + t_avds = 4; 1158 + t_avdh = 2; 1159 + t_ach = 3; 1160 + t_aavdh = 6; 1161 + t_rdyo = 6; 1162 + break; 1163 + case 83: 1164 + min_gpmc_clk_period = 12000; /* 83 MHz */ 1165 + t_ces = 5; 1166 + t_avds = 4; 1167 + t_avdh = 2; 1168 + t_ach = 6; 1169 + t_aavdh = 6; 1170 + t_rdyo = 9; 1171 + break; 1172 + case 66: 1173 + min_gpmc_clk_period = 15000; /* 66 MHz */ 1174 + t_ces = 6; 1175 + t_avds = 5; 1176 + t_avdh = 2; 1177 + t_ach = 6; 1178 + t_aavdh = 6; 1179 + t_rdyo = 11; 1180 + break; 1181 + default: 1182 + min_gpmc_clk_period = 18500; /* 54 MHz */ 1183 + t_ces = 7; 1184 + t_avds = 7; 1185 + t_avdh = 7; 1186 + t_ach = 9; 1187 + t_aavdh = 7; 1188 + t_rdyo = 15; 1189 + break; 1190 + } 1191 + 1192 + /* Set synchronous read timings */ 1193 + memset(&dev_t, 0, sizeof(dev_t)); 1194 + 1195 + if (!s->sync_write) { 1196 + dev_t.t_avdp_w = max(t_avdp, t_cer) * 1000; 1197 + dev_t.t_wpl = t_wpl * 1000; 1198 + dev_t.t_wph = t_wph * 1000; 1199 + dev_t.t_aavdh = t_aavdh * 1000; 1200 + } 1201 + dev_t.ce_xdelay = true; 1202 + dev_t.avd_xdelay = true; 1203 + dev_t.oe_xdelay = true; 1204 + dev_t.we_xdelay = true; 1205 + dev_t.clk = min_gpmc_clk_period; 1206 + 
dev_t.t_bacc = dev_t.clk; 1207 + dev_t.t_ces = t_ces * 1000; 1208 + dev_t.t_avds = t_avds * 1000; 1209 + dev_t.t_avdh = t_avdh * 1000; 1210 + dev_t.t_ach = t_ach * 1000; 1211 + dev_t.cyc_iaa = (latency + 1); 1212 + dev_t.t_cez_r = t_cez * 1000; 1213 + dev_t.t_cez_w = dev_t.t_cez_r; 1214 + dev_t.cyc_aavdh_oe = 1; 1215 + dev_t.t_rdyo = t_rdyo * 1000 + min_gpmc_clk_period; 1216 + 1217 + gpmc_calc_timings(t, s, &dev_t); 1218 + } 1219 + 1220 + int gpmc_omap_onenand_set_timings(struct device *dev, int cs, int freq, 1221 + int latency, 1222 + struct gpmc_onenand_info *info) 1223 + { 1224 + int ret; 1225 + struct gpmc_timings gpmc_t; 1226 + struct gpmc_settings gpmc_s; 1227 + 1228 + gpmc_read_settings_dt(dev->of_node, &gpmc_s); 1229 + 1230 + info->sync_read = gpmc_s.sync_read; 1231 + info->sync_write = gpmc_s.sync_write; 1232 + info->burst_len = gpmc_s.burst_len; 1233 + 1234 + if (!gpmc_s.sync_read && !gpmc_s.sync_write) 1235 + return 0; 1236 + 1237 + gpmc_omap_onenand_calc_sync_timings(&gpmc_t, &gpmc_s, freq, latency); 1238 + 1239 + ret = gpmc_cs_program_settings(cs, &gpmc_s); 1240 + if (ret < 0) 1241 + return ret; 1242 + 1243 + return gpmc_cs_set_timings(cs, &gpmc_t, &gpmc_s); 1244 + } 1245 + EXPORT_SYMBOL_GPL(gpmc_omap_onenand_set_timings); 1246 + 1140 1247 int gpmc_get_client_irq(unsigned irq_config) 1141 1248 { 1142 1249 if (!gpmc_irq_domain) { ··· 2021 1916 of_property_read_bool(np, "gpmc,time-para-granularity"); 2022 1917 } 2023 1918 2024 - #if IS_ENABLED(CONFIG_MTD_ONENAND) 2025 - static int gpmc_probe_onenand_child(struct platform_device *pdev, 2026 - struct device_node *child) 2027 - { 2028 - u32 val; 2029 - struct omap_onenand_platform_data *gpmc_onenand_data; 2030 - 2031 - if (of_property_read_u32(child, "reg", &val) < 0) { 2032 - dev_err(&pdev->dev, "%pOF has no 'reg' property\n", 2033 - child); 2034 - return -ENODEV; 2035 - } 2036 - 2037 - gpmc_onenand_data = devm_kzalloc(&pdev->dev, sizeof(*gpmc_onenand_data), 2038 - GFP_KERNEL); 2039 - if 
(!gpmc_onenand_data) 2040 - return -ENOMEM; 2041 - 2042 - gpmc_onenand_data->cs = val; 2043 - gpmc_onenand_data->of_node = child; 2044 - gpmc_onenand_data->dma_channel = -1; 2045 - 2046 - if (!of_property_read_u32(child, "dma-channel", &val)) 2047 - gpmc_onenand_data->dma_channel = val; 2048 - 2049 - return gpmc_onenand_init(gpmc_onenand_data); 2050 - } 2051 - #else 2052 - static int gpmc_probe_onenand_child(struct platform_device *pdev, 2053 - struct device_node *child) 2054 - { 2055 - return 0; 2056 - } 2057 - #endif 2058 - 2059 1919 /** 2060 1920 * gpmc_probe_generic_child - configures the gpmc for a child device 2061 1921 * @pdev: pointer to gpmc platform device ··· 2123 2053 } 2124 2054 } 2125 2055 2056 + if (of_node_cmp(child->name, "onenand") == 0) { 2057 + /* Warn about older DT blobs with no compatible property */ 2058 + if (!of_property_read_bool(child, "compatible")) { 2059 + dev_warn(&pdev->dev, 2060 + "Incompatible OneNAND node: missing compatible"); 2061 + ret = -EINVAL; 2062 + goto err; 2063 + } 2064 + } 2065 + 2126 2066 if (of_device_is_compatible(child, "ti,omap2-nand")) { 2127 2067 /* NAND specific setup */ 2128 2068 val = 8; ··· 2157 2077 } else { 2158 2078 ret = of_property_read_u32(child, "bank-width", 2159 2079 &gpmc_s.device_width); 2160 - if (ret < 0) { 2161 - dev_err(&pdev->dev, "%pOF has no 'bank-width' property\n", 2080 + if (ret < 0 && !gpmc_s.device_width) { 2081 + dev_err(&pdev->dev, 2082 + "%pOF has no 'gpmc,device-width' property\n", 2162 2083 child); 2163 2084 goto err; 2164 2085 } ··· 2269 2188 if (!child->name) 2270 2189 continue; 2271 2190 2272 - if (of_node_cmp(child->name, "onenand") == 0) 2273 - ret = gpmc_probe_onenand_child(pdev, child); 2274 - else 2275 - ret = gpmc_probe_generic_child(pdev, child); 2276 - 2191 + ret = gpmc_probe_generic_child(pdev, child); 2277 2192 if (ret) { 2278 2193 dev_err(&pdev->dev, "failed to probe DT child '%s': %d\n", 2279 2194 child->name, ret);
+14 -3
drivers/mtd/nand/Kconfig
···
 
 config MTD_NAND_PXA3xx
 	tristate "NAND support on PXA3xx and Armada 370/XP"
+	depends on !MTD_NAND_MARVELL
 	depends on PXA3xx || ARCH_MMP || PLAT_ORION || ARCH_MVEBU
 	help
 
···
 	  PXA3xx processors (NFCv1) and also on 32-bit Armada
 	  platforms (XP, 370, 375, 38x, 39x) and 64-bit Armada
 	  platforms (7K, 8K) (NFCv2).
+
+config MTD_NAND_MARVELL
+	tristate "NAND controller support on Marvell boards"
+	depends on PXA3xx || ARCH_MMP || PLAT_ORION || ARCH_MVEBU || \
+		   COMPILE_TEST
+	depends on HAS_IOMEM
+	help
+	  This enables the NAND flash controller driver for Marvell boards,
+	  including:
+	  - PXA3xx processors (NFCv1)
+	  - 32-bit Armada platforms (XP, 37x, 38x, 39x) (NFCv2)
+	  - 64-bit Armada platforms (7k, 8k) (NFCv2)
 
 config MTD_NAND_SLC_LPC32XX
 	tristate "NXP LPC32xx SLC Controller"
···
 	  Enables NAND Flash support for IMX23, IMX28 or IMX6.
 	  The GPMI controller is very powerful, with the help of BCH
 	  module, it can do the hardware ECC. The GPMI supports several
-	  NAND flashs at the same time. The GPMI may conflicts with other
-	  block, such as SD card. So pay attention to it when you enable
-	  the GPMI.
+	  NAND flashs at the same time.
 
 config MTD_NAND_BRCMNAND
 	tristate "Broadcom STB NAND controller"
+1
drivers/mtd/nand/Makefile
···
 obj-$(CONFIG_MTD_NAND_OMAP_BCH_BUILD)	+= omap_elm.o
 obj-$(CONFIG_MTD_NAND_CM_X270)		+= cmx270_nand.o
 obj-$(CONFIG_MTD_NAND_PXA3xx)		+= pxa3xx_nand.o
+obj-$(CONFIG_MTD_NAND_MARVELL)		+= marvell_nand.o
 obj-$(CONFIG_MTD_NAND_TMIO)		+= tmio_nand.o
 obj-$(CONFIG_MTD_NAND_PLATFORM)		+= plat_nand.o
 obj-$(CONFIG_MTD_NAND_PASEMI)		+= pasemi_nand.o
+6 -3
drivers/mtd/nand/atmel/nand-controller.c
··· 841 841 struct atmel_nand *nand = to_atmel_nand(chip); 842 842 int ret; 843 843 844 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 845 + 844 846 ret = atmel_nand_pmecc_enable(chip, NAND_ECC_WRITE, raw); 845 847 if (ret) 846 848 return ret; ··· 859 857 860 858 atmel_nand_write_buf(mtd, chip->oob_poi, mtd->oobsize); 861 859 862 - return 0; 860 + return nand_prog_page_end_op(chip); 863 861 } 864 862 865 863 static int atmel_nand_pmecc_write_page(struct mtd_info *mtd, ··· 882 880 { 883 881 struct mtd_info *mtd = nand_to_mtd(chip); 884 882 int ret; 883 + 884 + nand_read_page_op(chip, page, 0, NULL, 0); 885 885 886 886 ret = atmel_nand_pmecc_enable(chip, NAND_ECC_READ, raw); 887 887 if (ret) ··· 1004 1000 * to the non-optimized one. 1005 1001 */ 1006 1002 if (nand->activecs->rb.type != ATMEL_NAND_NATIVE_RB) { 1007 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page); 1003 + nand_read_page_op(chip, page, 0, NULL, 0); 1008 1004 1009 1005 return atmel_nand_pmecc_read_pg(chip, buf, oob_required, page, 1010 1006 raw); ··· 1182 1178 chip->ecc.write_page = atmel_hsmc_nand_pmecc_write_page; 1183 1179 chip->ecc.read_page_raw = atmel_hsmc_nand_pmecc_read_page_raw; 1184 1180 chip->ecc.write_page_raw = atmel_hsmc_nand_pmecc_write_page_raw; 1185 - chip->ecc.options |= NAND_ECC_CUSTOM_PAGE_ACCESS; 1186 1181 1187 1182 return 0; 1188 1183 }
+4 -2
drivers/mtd/nand/bf5xx_nand.c
··· 572 572 static int bf5xx_nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 573 573 uint8_t *buf, int oob_required, int page) 574 574 { 575 + nand_read_page_op(chip, page, 0, NULL, 0); 576 + 575 577 bf5xx_nand_read_buf(mtd, buf, mtd->writesize); 576 578 bf5xx_nand_read_buf(mtd, chip->oob_poi, mtd->oobsize); 577 579 ··· 584 582 struct nand_chip *chip, const uint8_t *buf, int oob_required, 585 583 int page) 586 584 { 587 - bf5xx_nand_write_buf(mtd, buf, mtd->writesize); 585 + nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize); 588 586 bf5xx_nand_write_buf(mtd, chip->oob_poi, mtd->oobsize); 589 587 590 - return 0; 588 + return nand_prog_page_end_op(chip); 591 589 } 592 590 593 591 /*
+20 -18
drivers/mtd/nand/brcmnand/brcmnand.c
··· 1071 1071 return; 1072 1072 1073 1073 brcmnand_set_wp(ctrl, wp); 1074 - chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1); 1074 + nand_status_op(chip, NULL); 1075 1075 /* NAND_STATUS_WP 0x00 = protected, 0x80 = not protected */ 1076 1076 ret = bcmnand_ctrl_poll_status(ctrl, 1077 1077 NAND_CTRL_RDY | ··· 1453 1453 1454 1454 /* At FC_BYTES boundary, switch to next column */ 1455 1455 if (host->last_byte > 0 && offs == 0) 1456 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, addr, -1); 1456 + nand_change_read_column_op(chip, addr, NULL, 0, false); 1457 1457 1458 1458 ret = ctrl->flash_cache[offs]; 1459 1459 break; ··· 1681 1681 int ret; 1682 1682 1683 1683 if (!buf) { 1684 - buf = chip->buffers->databuf; 1684 + buf = chip->data_buf; 1685 1685 /* Invalidate page cache */ 1686 1686 chip->pagebuf = -1; 1687 1687 } ··· 1689 1689 sas = mtd->oobsize / chip->ecc.steps; 1690 1690 1691 1691 /* read without ecc for verification */ 1692 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page); 1693 1692 ret = chip->ecc.read_page_raw(mtd, chip, buf, true, page); 1694 1693 if (ret) 1695 1694 return ret; ··· 1792 1793 struct brcmnand_host *host = nand_get_controller_data(chip); 1793 1794 u8 *oob = oob_required ? (u8 *)chip->oob_poi : NULL; 1794 1795 1796 + nand_read_page_op(chip, page, 0, NULL, 0); 1797 + 1795 1798 return brcmnand_read(mtd, chip, host->last_addr, 1796 1799 mtd->writesize >> FC_SHIFT, (u32 *)buf, oob); 1797 1800 } ··· 1804 1803 struct brcmnand_host *host = nand_get_controller_data(chip); 1805 1804 u8 *oob = oob_required ? (u8 *)chip->oob_poi : NULL; 1806 1805 int ret; 1806 + 1807 + nand_read_page_op(chip, page, 0, NULL, 0); 1807 1808 1808 1809 brcmnand_set_ecc_enabled(host, 0); 1809 1810 ret = brcmnand_read(mtd, chip, host->last_addr, ··· 1912 1909 struct brcmnand_host *host = nand_get_controller_data(chip); 1913 1910 void *oob = oob_required ? 
chip->oob_poi : NULL; 1914 1911 1912 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 1915 1913 brcmnand_write(mtd, chip, host->last_addr, (const u32 *)buf, oob); 1916 - return 0; 1914 + 1915 + return nand_prog_page_end_op(chip); 1917 1916 } 1918 1917 1919 1918 static int brcmnand_write_page_raw(struct mtd_info *mtd, ··· 1925 1920 struct brcmnand_host *host = nand_get_controller_data(chip); 1926 1921 void *oob = oob_required ? chip->oob_poi : NULL; 1927 1922 1923 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 1928 1924 brcmnand_set_ecc_enabled(host, 0); 1929 1925 brcmnand_write(mtd, chip, host->last_addr, (const u32 *)buf, oob); 1930 1926 brcmnand_set_ecc_enabled(host, 1); 1931 - return 0; 1927 + 1928 + return nand_prog_page_end_op(chip); 1932 1929 } 1933 1930 1934 1931 static int brcmnand_write_oob(struct mtd_info *mtd, struct nand_chip *chip, ··· 2200 2193 if (ctrl->nand_version >= 0x0702) 2201 2194 tmp |= ACC_CONTROL_RD_ERASED; 2202 2195 tmp &= ~ACC_CONTROL_FAST_PGM_RDIN; 2203 - if (ctrl->features & BRCMNAND_HAS_PREFETCH) { 2204 - /* 2205 - * FIXME: Flash DMA + prefetch may see spurious erased-page ECC 2206 - * errors 2207 - */ 2208 - if (has_flash_dma(ctrl)) 2209 - tmp &= ~ACC_CONTROL_PREFETCH; 2210 - else 2211 - tmp |= ACC_CONTROL_PREFETCH; 2212 - } 2196 + if (ctrl->features & BRCMNAND_HAS_PREFETCH) 2197 + tmp &= ~ACC_CONTROL_PREFETCH; 2198 + 2213 2199 nand_writereg(ctrl, offs, tmp); 2214 2200 2215 2201 return 0; ··· 2230 2230 nand_set_controller_data(chip, host); 2231 2231 mtd->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "brcmnand.%d", 2232 2232 host->cs); 2233 + if (!mtd->name) 2234 + return -ENOMEM; 2235 + 2233 2236 mtd->owner = THIS_MODULE; 2234 2237 mtd->dev.parent = &pdev->dev; 2235 2238 ··· 2372 2369 2373 2370 list_for_each_entry(host, &ctrl->host_list, node) { 2374 2371 struct nand_chip *chip = &host->chip; 2375 - struct mtd_info *mtd = nand_to_mtd(chip); 2376 2372 2377 2373 brcmnand_save_restore_cs_config(host, 1); 2378 2374 2379 2375 /* Reset 
the chip, required by some chips after power-up */ 2380 - chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 2376 + nand_reset_op(chip); 2381 2377 } 2382 2378 2383 2379 return 0;
+12 -40
drivers/mtd/nand/cafe_nand.c
··· 353 353 static int cafe_nand_write_oob(struct mtd_info *mtd, 354 354 struct nand_chip *chip, int page) 355 355 { 356 - int status = 0; 357 - 358 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize, page); 359 - chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 360 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 361 - status = chip->waitfunc(mtd, chip); 362 - 363 - return status & NAND_STATUS_FAIL ? -EIO : 0; 356 + return nand_prog_page_op(chip, page, mtd->writesize, chip->oob_poi, 357 + mtd->oobsize); 364 358 } 365 359 366 360 /* Don't use -- use nand_read_oob_std for now */ 367 361 static int cafe_nand_read_oob(struct mtd_info *mtd, struct nand_chip *chip, 368 362 int page) 369 363 { 370 - chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 371 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 372 - return 0; 364 + return nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize); 373 365 } 374 366 /** 375 367 * cafe_nand_read_page_syndrome - [REPLACEABLE] hardware ecc syndrome based page read ··· 383 391 cafe_readl(cafe, NAND_ECC_RESULT), 384 392 cafe_readl(cafe, NAND_ECC_SYN01)); 385 393 386 - chip->read_buf(mtd, buf, mtd->writesize); 394 + nand_read_page_op(chip, page, 0, buf, mtd->writesize); 387 395 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 388 396 389 397 if (checkecc && cafe_readl(cafe, NAND_ECC_RESULT) & (1<<18)) { ··· 541 549 { 542 550 struct cafe_priv *cafe = nand_get_controller_data(chip); 543 551 544 - chip->write_buf(mtd, buf, mtd->writesize); 552 + nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize); 545 553 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 546 554 547 555 /* Set up ECC autogeneration */ 548 556 cafe->ctl2 |= (1<<30); 549 557 550 - return 0; 558 + return nand_prog_page_end_op(chip); 551 559 } 552 560 553 561 static int cafe_nand_block_bad(struct mtd_info *mtd, loff_t ofs) ··· 605 613 uint32_t ctrl; 606 614 int err = 0; 607 615 int old_dma; 608 - struct nand_buffers *nbuf; 609 616 610 617 /* Very old versions 
shared the same PCI ident for all three 611 618 functions on the chip. Verify the class too... */ ··· 652 661 653 662 /* Enable the following for a flash based bad block table */ 654 663 cafe->nand.bbt_options = NAND_BBT_USE_FLASH; 655 - cafe->nand.options = NAND_OWN_BUFFERS; 656 664 657 665 if (skipbbt) { 658 666 cafe->nand.options |= NAND_SKIP_BBTSCAN; ··· 721 731 if (err) 722 732 goto out_irq; 723 733 724 - cafe->dmabuf = dma_alloc_coherent(&cafe->pdev->dev, 725 - 2112 + sizeof(struct nand_buffers) + 726 - mtd->writesize + mtd->oobsize, 727 - &cafe->dmaaddr, GFP_KERNEL); 734 + cafe->dmabuf = dma_alloc_coherent(&cafe->pdev->dev, 2112, 735 + &cafe->dmaaddr, GFP_KERNEL); 728 736 if (!cafe->dmabuf) { 729 737 err = -ENOMEM; 730 738 goto out_irq; 731 739 } 732 - cafe->nand.buffers = nbuf = (void *)cafe->dmabuf + 2112; 733 740 734 741 /* Set up DMA address */ 735 - cafe_writel(cafe, cafe->dmaaddr & 0xffffffff, NAND_DMA_ADDR0); 736 - if (sizeof(cafe->dmaaddr) > 4) 737 - /* Shift in two parts to shut the compiler up */ 738 - cafe_writel(cafe, (cafe->dmaaddr >> 16) >> 16, NAND_DMA_ADDR1); 739 - else 740 - cafe_writel(cafe, 0, NAND_DMA_ADDR1); 742 + cafe_writel(cafe, lower_32_bits(cafe->dmaaddr), NAND_DMA_ADDR0); 743 + cafe_writel(cafe, upper_32_bits(cafe->dmaaddr), NAND_DMA_ADDR1); 741 744 742 745 cafe_dev_dbg(&cafe->pdev->dev, "Set DMA address to %x (virt %p)\n", 743 746 cafe_readl(cafe, NAND_DMA_ADDR0), cafe->dmabuf); 744 - 745 - /* this driver does not need the @ecccalc and @ecccode */ 746 - nbuf->ecccalc = NULL; 747 - nbuf->ecccode = NULL; 748 - nbuf->databuf = (uint8_t *)(nbuf + 1); 749 747 750 748 /* Restore the DMA flag */ 751 749 usedma = old_dma; ··· 779 801 goto out; 780 802 781 803 out_free_dma: 782 - dma_free_coherent(&cafe->pdev->dev, 783 - 2112 + sizeof(struct nand_buffers) + 784 - mtd->writesize + mtd->oobsize, 785 - cafe->dmabuf, cafe->dmaaddr); 804 + dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr); 786 805 out_irq: 787 806 /* 
Disable NAND IRQ in global IRQ mask register */ 788 807 cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK); ··· 804 829 nand_release(mtd); 805 830 free_rs(cafe->rs); 806 831 pci_iounmap(pdev, cafe->mmio); 807 - dma_free_coherent(&cafe->pdev->dev, 808 - 2112 + sizeof(struct nand_buffers) + 809 - mtd->writesize + mtd->oobsize, 810 - cafe->dmabuf, cafe->dmaaddr); 832 + dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr); 811 833 kfree(cafe); 812 834 } 813 835
+39 -45
drivers/mtd/nand/denali.c
··· 330 330 unsigned long uncor_ecc_flags, 331 331 unsigned int max_bitflips) 332 332 { 333 - uint8_t *ecc_code = chip->buffers->ecccode; 333 + struct denali_nand_info *denali = mtd_to_denali(mtd); 334 + uint8_t *ecc_code = chip->oob_poi + denali->oob_skip_bytes; 334 335 int ecc_steps = chip->ecc.steps; 335 336 int ecc_size = chip->ecc.size; 336 337 int ecc_bytes = chip->ecc.bytes; 337 - int i, ret, stat; 338 - 339 - ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0, 340 - chip->ecc.total); 341 - if (ret) 342 - return ret; 338 + int i, stat; 343 339 344 340 for (i = 0; i < ecc_steps; i++) { 345 341 if (!(uncor_ecc_flags & BIT(i))) ··· 641 645 int page, int write) 642 646 { 643 647 struct denali_nand_info *denali = mtd_to_denali(mtd); 644 - unsigned int start_cmd = write ? NAND_CMD_SEQIN : NAND_CMD_READ0; 645 - unsigned int rnd_cmd = write ? NAND_CMD_RNDIN : NAND_CMD_RNDOUT; 646 648 int writesize = mtd->writesize; 647 649 int oobsize = mtd->oobsize; 648 650 uint8_t *bufpoi = chip->oob_poi; ··· 652 658 int i, pos, len; 653 659 654 660 /* BBM at the beginning of the OOB area */ 655 - chip->cmdfunc(mtd, start_cmd, writesize, page); 656 661 if (write) 657 - chip->write_buf(mtd, bufpoi, oob_skip); 662 + nand_prog_page_begin_op(chip, page, writesize, bufpoi, 663 + oob_skip); 658 664 else 659 - chip->read_buf(mtd, bufpoi, oob_skip); 665 + nand_read_page_op(chip, page, writesize, bufpoi, oob_skip); 660 666 bufpoi += oob_skip; 661 667 662 668 /* OOB ECC */ ··· 669 675 else if (pos + len > writesize) 670 676 len = writesize - pos; 671 677 672 - chip->cmdfunc(mtd, rnd_cmd, pos, -1); 673 678 if (write) 674 - chip->write_buf(mtd, bufpoi, len); 679 + nand_change_write_column_op(chip, pos, bufpoi, len, 680 + false); 675 681 else 676 - chip->read_buf(mtd, bufpoi, len); 682 + nand_change_read_column_op(chip, pos, bufpoi, len, 683 + false); 677 684 bufpoi += len; 678 685 if (len < ecc_bytes) { 679 686 len = ecc_bytes - len; 680 - chip->cmdfunc(mtd, rnd_cmd, writesize + 
oob_skip, -1); 681 687 if (write) 682 - chip->write_buf(mtd, bufpoi, len); 688 + nand_change_write_column_op(chip, writesize + 689 + oob_skip, bufpoi, 690 + len, false); 683 691 else 684 - chip->read_buf(mtd, bufpoi, len); 692 + nand_change_read_column_op(chip, writesize + 693 + oob_skip, bufpoi, 694 + len, false); 685 695 bufpoi += len; 686 696 } 687 697 } 688 698 689 699 /* OOB free */ 690 700 len = oobsize - (bufpoi - chip->oob_poi); 691 - chip->cmdfunc(mtd, rnd_cmd, size - len, -1); 692 701 if (write) 693 - chip->write_buf(mtd, bufpoi, len); 702 + nand_change_write_column_op(chip, size - len, bufpoi, len, 703 + false); 694 704 else 695 - chip->read_buf(mtd, bufpoi, len); 705 + nand_change_read_column_op(chip, size - len, bufpoi, len, 706 + false); 696 707 } 697 708 698 709 static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, ··· 709 710 int ecc_steps = chip->ecc.steps; 710 711 int ecc_size = chip->ecc.size; 711 712 int ecc_bytes = chip->ecc.bytes; 712 - void *dma_buf = denali->buf; 713 + void *tmp_buf = denali->buf; 713 714 int oob_skip = denali->oob_skip_bytes; 714 715 size_t size = writesize + oobsize; 715 716 int ret, i, pos, len; 716 717 717 - ret = denali_data_xfer(denali, dma_buf, size, page, 1, 0); 718 + ret = denali_data_xfer(denali, tmp_buf, size, page, 1, 0); 718 719 if (ret) 719 720 return ret; 720 721 ··· 729 730 else if (pos + len > writesize) 730 731 len = writesize - pos; 731 732 732 - memcpy(buf, dma_buf + pos, len); 733 + memcpy(buf, tmp_buf + pos, len); 733 734 buf += len; 734 735 if (len < ecc_size) { 735 736 len = ecc_size - len; 736 - memcpy(buf, dma_buf + writesize + oob_skip, 737 + memcpy(buf, tmp_buf + writesize + oob_skip, 737 738 len); 738 739 buf += len; 739 740 } ··· 744 745 uint8_t *oob = chip->oob_poi; 745 746 746 747 /* BBM at the beginning of the OOB area */ 747 - memcpy(oob, dma_buf + writesize, oob_skip); 748 + memcpy(oob, tmp_buf + writesize, oob_skip); 748 749 oob += oob_skip; 749 750 750 751 /* OOB 
ECC */ ··· 757 758 else if (pos + len > writesize) 758 759 len = writesize - pos; 759 760 760 - memcpy(oob, dma_buf + pos, len); 761 + memcpy(oob, tmp_buf + pos, len); 761 762 oob += len; 762 763 if (len < ecc_bytes) { 763 764 len = ecc_bytes - len; 764 - memcpy(oob, dma_buf + writesize + oob_skip, 765 + memcpy(oob, tmp_buf + writesize + oob_skip, 765 766 len); 766 767 oob += len; 767 768 } ··· 769 770 770 771 /* OOB free */ 771 772 len = oobsize - (oob - chip->oob_poi); 772 - memcpy(oob, dma_buf + size - len, len); 773 + memcpy(oob, tmp_buf + size - len, len); 773 774 } 774 775 775 776 return 0; ··· 787 788 int page) 788 789 { 789 790 struct denali_nand_info *denali = mtd_to_denali(mtd); 790 - int status; 791 791 792 792 denali_reset_irq(denali); 793 793 794 794 denali_oob_xfer(mtd, chip, page, 1); 795 795 796 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 797 - status = chip->waitfunc(mtd, chip); 798 - 799 - return status & NAND_STATUS_FAIL ? -EIO : 0; 796 + return nand_prog_page_end_op(chip); 800 797 } 801 798 802 799 static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip, ··· 836 841 int ecc_steps = chip->ecc.steps; 837 842 int ecc_size = chip->ecc.size; 838 843 int ecc_bytes = chip->ecc.bytes; 839 - void *dma_buf = denali->buf; 844 + void *tmp_buf = denali->buf; 840 845 int oob_skip = denali->oob_skip_bytes; 841 846 size_t size = writesize + oobsize; 842 847 int i, pos, len; ··· 846 851 * This simplifies the logic. 
847 852 */ 848 853 if (!buf || !oob_required) 849 - memset(dma_buf, 0xff, size); 854 + memset(tmp_buf, 0xff, size); 850 855 851 856 /* Arrange the buffer for syndrome payload/ecc layout */ 852 857 if (buf) { ··· 859 864 else if (pos + len > writesize) 860 865 len = writesize - pos; 861 866 862 - memcpy(dma_buf + pos, buf, len); 867 + memcpy(tmp_buf + pos, buf, len); 863 868 buf += len; 864 869 if (len < ecc_size) { 865 870 len = ecc_size - len; 866 - memcpy(dma_buf + writesize + oob_skip, buf, 871 + memcpy(tmp_buf + writesize + oob_skip, buf, 867 872 len); 868 873 buf += len; 869 874 } ··· 874 879 const uint8_t *oob = chip->oob_poi; 875 880 876 881 /* BBM at the beginning of the OOB area */ 877 - memcpy(dma_buf + writesize, oob, oob_skip); 882 + memcpy(tmp_buf + writesize, oob, oob_skip); 878 883 oob += oob_skip; 879 884 880 885 /* OOB ECC */ ··· 887 892 else if (pos + len > writesize) 888 893 len = writesize - pos; 889 894 890 - memcpy(dma_buf + pos, oob, len); 895 + memcpy(tmp_buf + pos, oob, len); 891 896 oob += len; 892 897 if (len < ecc_bytes) { 893 898 len = ecc_bytes - len; 894 - memcpy(dma_buf + writesize + oob_skip, oob, 899 + memcpy(tmp_buf + writesize + oob_skip, oob, 895 900 len); 896 901 oob += len; 897 902 } ··· 899 904 900 905 /* OOB free */ 901 906 len = oobsize - (oob - chip->oob_poi); 902 - memcpy(dma_buf + size - len, oob, len); 907 + memcpy(tmp_buf + size - len, oob, len); 903 908 } 904 909 905 - return denali_data_xfer(denali, dma_buf, size, page, 1, 1); 910 + return denali_data_xfer(denali, tmp_buf, size, page, 1, 1); 906 911 } 907 912 908 913 static int denali_write_page(struct mtd_info *mtd, struct nand_chip *chip, ··· 946 951 irq_status = denali_wait_for_irq(denali, 947 952 INTR__ERASE_COMP | INTR__ERASE_FAIL); 948 953 949 - return irq_status & INTR__ERASE_COMP ? 0 : NAND_STATUS_FAIL; 954 + return irq_status & INTR__ERASE_COMP ? 
0 : -EIO; 950 955 } 951 956 952 957 static int denali_setup_data_interface(struct mtd_info *mtd, int chipnr, ··· 1354 1359 chip->read_buf = denali_read_buf; 1355 1360 chip->write_buf = denali_write_buf; 1356 1361 } 1357 - chip->ecc.options |= NAND_ECC_CUSTOM_PAGE_ACCESS; 1358 1362 chip->ecc.read_page = denali_read_page; 1359 1363 chip->ecc.read_page_raw = denali_read_page_raw; 1360 1364 chip->ecc.write_page = denali_write_page;
+2 -2
drivers/mtd/nand/denali.h
··· 329 329 #define DENALI_CAP_DMA_64BIT BIT(1) 330 330 331 331 int denali_calc_ecc_bytes(int step_size, int strength); 332 - extern int denali_init(struct denali_nand_info *denali); 333 - extern void denali_remove(struct denali_nand_info *denali); 332 + int denali_init(struct denali_nand_info *denali); 333 + void denali_remove(struct denali_nand_info *denali); 334 334 335 335 #endif /* __DENALI_H__ */
+4
drivers/mtd/nand/denali_pci.c
··· 125 125 .remove = denali_pci_remove, 126 126 }; 127 127 module_pci_driver(denali_pci_driver); 128 + 129 + MODULE_DESCRIPTION("PCI driver for Denali NAND controller"); 130 + MODULE_AUTHOR("Intel Corporation and its suppliers"); 131 + MODULE_LICENSE("GPL v2");
+2 -2
drivers/mtd/nand/diskonchip.c
··· 448 448 int status; 449 449 450 450 DoC_WaitReady(doc); 451 - this->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1); 451 + nand_status_op(this, NULL); 452 452 DoC_WaitReady(doc); 453 453 status = (int)this->read_byte(mtd); 454 454 ··· 595 595 596 596 /* Assert ChipEnable and deassert WriteProtect */ 597 597 WriteDOC((DOC_FLASH_CE), docptr, Mplus_FlashSelect); 598 - this->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 598 + nand_reset_op(this); 599 599 600 600 doc->curchip = chip; 601 601 doc->curfloor = floor;
+15 -6
drivers/mtd/nand/docg4.c
··· 785 785 786 786 dev_dbg(doc->dev, "%s: page %08x\n", __func__, page); 787 787 788 + nand_read_page_op(nand, page, 0, NULL, 0); 789 + 788 790 writew(DOC_ECCCONF0_READ_MODE | 789 791 DOC_ECCCONF0_ECC_ENABLE | 790 792 DOC_ECCCONF0_UNKNOWN | ··· 866 864 867 865 dev_dbg(doc->dev, "%s: page %x\n", __func__, page); 868 866 869 - docg4_command(mtd, NAND_CMD_READ0, nand->ecc.size, page); 867 + nand_read_page_op(nand, page, nand->ecc.size, NULL, 0); 870 868 871 869 writew(DOC_ECCCONF0_READ_MODE | DOCG4_OOB_SIZE, docptr + DOC_ECCCONF0); 872 870 write_nop(docptr); ··· 902 900 struct docg4_priv *doc = nand_get_controller_data(nand); 903 901 void __iomem *docptr = doc->virtadr; 904 902 uint16_t g4_page; 903 + int status; 905 904 906 905 dev_dbg(doc->dev, "%s: page %04x\n", __func__, page); 907 906 ··· 942 939 poll_status(doc); 943 940 write_nop(docptr); 944 941 945 - return nand->waitfunc(mtd, nand); 942 + status = nand->waitfunc(mtd, nand); 943 + if (status < 0) 944 + return status; 945 + 946 + return status & NAND_STATUS_FAIL ? 
-EIO : 0; 946 947 } 947 948 948 949 static int write_page(struct mtd_info *mtd, struct nand_chip *nand, 949 - const uint8_t *buf, bool use_ecc) 950 + const uint8_t *buf, int page, bool use_ecc) 950 951 { 951 952 struct docg4_priv *doc = nand_get_controller_data(nand); 952 953 void __iomem *docptr = doc->virtadr; 953 954 uint8_t ecc_buf[8]; 954 955 955 956 dev_dbg(doc->dev, "%s...\n", __func__); 957 + 958 + nand_prog_page_begin_op(nand, page, 0, NULL, 0); 956 959 957 960 writew(DOC_ECCCONF0_ECC_ENABLE | 958 961 DOC_ECCCONF0_UNKNOWN | ··· 1004 995 writew(0, docptr + DOC_DATAEND); 1005 996 write_nop(docptr); 1006 997 1007 - return 0; 998 + return nand_prog_page_end_op(nand); 1008 999 } 1009 1000 1010 1001 static int docg4_write_page_raw(struct mtd_info *mtd, struct nand_chip *nand, 1011 1002 const uint8_t *buf, int oob_required, int page) 1012 1003 { 1013 - return write_page(mtd, nand, buf, false); 1004 + return write_page(mtd, nand, buf, page, false); 1014 1005 } 1015 1006 1016 1007 static int docg4_write_page(struct mtd_info *mtd, struct nand_chip *nand, 1017 1008 const uint8_t *buf, int oob_required, int page) 1018 1009 { 1019 - return write_page(mtd, nand, buf, true); 1010 + return write_page(mtd, nand, buf, page, true); 1020 1011 } 1021 1012 1022 1013 static int docg4_write_oob(struct mtd_info *mtd, struct nand_chip *nand,
+5 -5
drivers/mtd/nand/fsl_elbc_nand.c
··· 713 713 struct fsl_lbc_ctrl *ctrl = priv->ctrl; 714 714 struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = ctrl->nand; 715 715 716 - fsl_elbc_read_buf(mtd, buf, mtd->writesize); 716 + nand_read_page_op(chip, page, 0, buf, mtd->writesize); 717 717 if (oob_required) 718 718 fsl_elbc_read_buf(mtd, chip->oob_poi, mtd->oobsize); 719 719 ··· 729 729 static int fsl_elbc_write_page(struct mtd_info *mtd, struct nand_chip *chip, 730 730 const uint8_t *buf, int oob_required, int page) 731 731 { 732 - fsl_elbc_write_buf(mtd, buf, mtd->writesize); 732 + nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize); 733 733 fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize); 734 734 735 - return 0; 735 + return nand_prog_page_end_op(chip); 736 736 } 737 737 738 738 /* ECC will be calculated automatically, and errors will be detected in ··· 742 742 uint32_t offset, uint32_t data_len, 743 743 const uint8_t *buf, int oob_required, int page) 744 744 { 745 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 745 746 fsl_elbc_write_buf(mtd, buf, mtd->writesize); 746 747 fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize); 747 - 748 - return 0; 748 + return nand_prog_page_end_op(chip); 749 749 } 750 750 751 751 static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv)
+10 -3
drivers/mtd/nand/fsl_ifc_nand.c
··· 688 688 struct fsl_ifc_ctrl *ctrl = priv->ctrl; 689 689 struct fsl_ifc_nand_ctrl *nctrl = ifc_nand_ctrl; 690 690 691 - fsl_ifc_read_buf(mtd, buf, mtd->writesize); 691 + nand_read_page_op(chip, page, 0, buf, mtd->writesize); 692 692 if (oob_required) 693 693 fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize); 694 694 ··· 711 711 static int fsl_ifc_write_page(struct mtd_info *mtd, struct nand_chip *chip, 712 712 const uint8_t *buf, int oob_required, int page) 713 713 { 714 - fsl_ifc_write_buf(mtd, buf, mtd->writesize); 714 + nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize); 715 715 fsl_ifc_write_buf(mtd, chip->oob_poi, mtd->oobsize); 716 716 717 - return 0; 717 + return nand_prog_page_end_op(chip); 718 718 } 719 719 720 720 static int fsl_ifc_chip_init_tail(struct mtd_info *mtd) ··· 915 915 916 916 if (ctrl->version >= FSL_IFC_VERSION_1_1_0) 917 917 fsl_ifc_sram_init(priv); 918 + 919 + /* 920 + * As IFC version 2.0.0 has 16KB of internal SRAM as compared to older 921 + * versions which had 8KB. Hence bufnum mask needs to be updated. 922 + */ 923 + if (ctrl->version >= FSL_IFC_VERSION_2_0_0) 924 + priv->bufnum_mask = (priv->bufnum_mask * 2) + 1; 918 925 919 926 return 0; 920 927 }
+4 -5
drivers/mtd/nand/fsmc_nand.c
··· 684 684 int eccbytes = chip->ecc.bytes; 685 685 int eccsteps = chip->ecc.steps; 686 686 uint8_t *p = buf; 687 - uint8_t *ecc_calc = chip->buffers->ecccalc; 688 - uint8_t *ecc_code = chip->buffers->ecccode; 687 + uint8_t *ecc_calc = chip->ecc.calc_buf; 688 + uint8_t *ecc_code = chip->ecc.code_buf; 689 689 int off, len, group = 0; 690 690 /* 691 691 * ecc_oob is intentionally taken as uint16_t. In 16bit devices, we ··· 697 697 unsigned int max_bitflips = 0; 698 698 699 699 for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) { 700 - chip->cmdfunc(mtd, NAND_CMD_READ0, s * eccsize, page); 700 + nand_read_page_op(chip, page, s * eccsize, NULL, 0); 701 701 chip->ecc.hwctl(mtd, NAND_ECC_READ); 702 702 chip->read_buf(mtd, p, eccsize); 703 703 ··· 720 720 if (chip->options & NAND_BUSWIDTH_16) 721 721 len = roundup(len, 2); 722 722 723 - chip->cmdfunc(mtd, NAND_CMD_READOOB, off, page); 724 - chip->read_buf(mtd, oob + j, len); 723 + nand_read_oob_op(chip, page, off, oob + j, len); 725 724 j += len; 726 725 } 727 726
+46 -65
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
··· 1029 1029 p[1] = (p[1] & mask) | (from_oob >> (8 - bit)); 1030 1030 } 1031 1031 1032 - static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip, 1033 - uint8_t *buf, int oob_required, int page) 1032 + static int gpmi_ecc_read_page_data(struct nand_chip *chip, 1033 + uint8_t *buf, int oob_required, 1034 + int page) 1034 1035 { 1035 1036 struct gpmi_nand_data *this = nand_get_controller_data(chip); 1036 1037 struct bch_geometry *nfc_geo = &this->bch_geometry; 1038 + struct mtd_info *mtd = nand_to_mtd(chip); 1037 1039 void *payload_virt; 1038 1040 dma_addr_t payload_phys; 1039 1041 void *auxiliary_virt; ··· 1099 1097 eccbytes = DIV_ROUND_UP(offset + eccbits, 8); 1100 1098 offset /= 8; 1101 1099 eccbytes -= offset; 1102 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, offset, -1); 1103 - chip->read_buf(mtd, eccbuf, eccbytes); 1100 + nand_change_read_column_op(chip, offset, eccbuf, 1101 + eccbytes, false); 1104 1102 1105 1103 /* 1106 1104 * ECC data are not byte aligned and we may have ··· 1178 1176 return max_bitflips; 1179 1177 } 1180 1178 1179 + static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip, 1180 + uint8_t *buf, int oob_required, int page) 1181 + { 1182 + nand_read_page_op(chip, page, 0, NULL, 0); 1183 + 1184 + return gpmi_ecc_read_page_data(chip, buf, oob_required, page); 1185 + } 1186 + 1181 1187 /* Fake a virtual small page for the subpage read */ 1182 1188 static int gpmi_ecc_read_subpage(struct mtd_info *mtd, struct nand_chip *chip, 1183 1189 uint32_t offs, uint32_t len, uint8_t *buf, int page) ··· 1230 1220 meta = geo->metadata_size; 1231 1221 if (first) { 1232 1222 col = meta + (size + ecc_parity_size) * first; 1233 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, col, -1); 1234 - 1235 1223 meta = 0; 1236 1224 buf = buf + first * size; 1237 1225 } 1226 + 1227 + nand_read_page_op(chip, page, col, NULL, 0); 1238 1228 1239 1229 /* Save the old environment */ 1240 1230 r1_old = r1_new = readl(bch_regs + HW_BCH_FLASH0LAYOUT0); ··· 1264 
1254 1265 1255 /* Read the subpage now */ 1266 1256 this->swap_block_mark = false; 1267 - max_bitflips = gpmi_ecc_read_page(mtd, chip, buf, 0, page); 1257 + max_bitflips = gpmi_ecc_read_page_data(chip, buf, 0, page); 1268 1258 1269 1259 /* Restore */ 1270 1260 writel(r1_old, bch_regs + HW_BCH_FLASH0LAYOUT0); ··· 1287 1277 int ret; 1288 1278 1289 1279 dev_dbg(this->dev, "ecc write page.\n"); 1280 + 1281 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 1282 + 1290 1283 if (this->swap_block_mark) { 1291 1284 /* 1292 1285 * If control arrives here, we're doing block mark swapping. ··· 1351 1338 payload_virt, payload_phys); 1352 1339 } 1353 1340 1354 - return 0; 1341 + if (ret) 1342 + return ret; 1343 + 1344 + return nand_prog_page_end_op(chip); 1355 1345 } 1356 1346 1357 1347 /* ··· 1427 1411 memset(chip->oob_poi, ~0, mtd->oobsize); 1428 1412 1429 1413 /* Read out the conventional OOB. */ 1430 - chip->cmdfunc(mtd, NAND_CMD_READ0, mtd->writesize, page); 1414 + nand_read_page_op(chip, page, mtd->writesize, NULL, 0); 1431 1415 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 1432 1416 1433 1417 /* ··· 1437 1421 */ 1438 1422 if (GPMI_IS_MX23(this)) { 1439 1423 /* Read the block mark into the first byte of the OOB buffer. */ 1440 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 1424 + nand_read_page_op(chip, page, 0, NULL, 0); 1441 1425 chip->oob_poi[0] = chip->read_byte(mtd); 1442 1426 } 1443 1427 ··· 1448 1432 gpmi_ecc_write_oob(struct mtd_info *mtd, struct nand_chip *chip, int page) 1449 1433 { 1450 1434 struct mtd_oob_region of = { }; 1451 - int status = 0; 1452 1435 1453 1436 /* Do we have available oob area? 
*/ 1454 1437 mtd_ooblayout_free(mtd, 0, &of); ··· 1457 1442 if (!nand_is_slc(chip)) 1458 1443 return -EPERM; 1459 1444 1460 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize + of.offset, page); 1461 - chip->write_buf(mtd, chip->oob_poi + of.offset, of.length); 1462 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 1463 - 1464 - status = chip->waitfunc(mtd, chip); 1465 - return status & NAND_STATUS_FAIL ? -EIO : 0; 1445 + return nand_prog_page_op(chip, page, mtd->writesize + of.offset, 1446 + chip->oob_poi + of.offset, of.length); 1466 1447 } 1467 1448 1468 1449 /* ··· 1488 1477 uint8_t *oob = chip->oob_poi; 1489 1478 int step; 1490 1479 1491 - chip->read_buf(mtd, tmp_buf, 1492 - mtd->writesize + mtd->oobsize); 1480 + nand_read_page_op(chip, page, 0, tmp_buf, 1481 + mtd->writesize + mtd->oobsize); 1493 1482 1494 1483 /* 1495 1484 * If required, swap the bad block marker and the data stored in the ··· 1498 1487 * See the layout description for a detailed explanation on why this 1499 1488 * is needed. 1500 1489 */ 1501 - if (this->swap_block_mark) { 1502 - u8 swap = tmp_buf[0]; 1503 - 1504 - tmp_buf[0] = tmp_buf[mtd->writesize]; 1505 - tmp_buf[mtd->writesize] = swap; 1506 - } 1490 + if (this->swap_block_mark) 1491 + swap(tmp_buf[0], tmp_buf[mtd->writesize]); 1507 1492 1508 1493 /* 1509 1494 * Copy the metadata section into the oob buffer (this section is ··· 1622 1615 * See the layout description for a detailed explanation on why this 1623 1616 * is needed. 
1624 1617 */ 1625 - if (this->swap_block_mark) { 1626 - u8 swap = tmp_buf[0]; 1618 + if (this->swap_block_mark) 1619 + swap(tmp_buf[0], tmp_buf[mtd->writesize]); 1627 1620 1628 - tmp_buf[0] = tmp_buf[mtd->writesize]; 1629 - tmp_buf[mtd->writesize] = swap; 1630 - } 1631 - 1632 - chip->write_buf(mtd, tmp_buf, mtd->writesize + mtd->oobsize); 1633 - 1634 - return 0; 1621 + return nand_prog_page_op(chip, page, 0, tmp_buf, 1622 + mtd->writesize + mtd->oobsize); 1635 1623 } 1636 1624 1637 1625 static int gpmi_ecc_read_oob_raw(struct mtd_info *mtd, struct nand_chip *chip, 1638 1626 int page) 1639 1627 { 1640 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 1641 - 1642 1628 return gpmi_ecc_read_page_raw(mtd, chip, NULL, 1, page); 1643 1629 } 1644 1630 1645 1631 static int gpmi_ecc_write_oob_raw(struct mtd_info *mtd, struct nand_chip *chip, 1646 1632 int page) 1647 1633 { 1648 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page); 1649 - 1650 1634 return gpmi_ecc_write_page_raw(mtd, chip, NULL, 1, page); 1651 1635 } 1652 1636 ··· 1647 1649 struct gpmi_nand_data *this = nand_get_controller_data(chip); 1648 1650 int ret = 0; 1649 1651 uint8_t *block_mark; 1650 - int column, page, status, chipnr; 1652 + int column, page, chipnr; 1651 1653 1652 1654 chipnr = (int)(ofs >> chip->chip_shift); 1653 1655 chip->select_chip(mtd, chipnr); ··· 1661 1663 /* Shift to get page */ 1662 1664 page = (int)(ofs >> chip->page_shift); 1663 1665 1664 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, column, page); 1665 - chip->write_buf(mtd, block_mark, 1); 1666 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 1667 - 1668 - status = chip->waitfunc(mtd, chip); 1669 - if (status & NAND_STATUS_FAIL) 1670 - ret = -EIO; 1666 + ret = nand_prog_page_op(chip, page, column, block_mark, 1); 1671 1667 1672 1668 chip->select_chip(mtd, -1); 1673 1669 ··· 1704 1712 unsigned int search_area_size_in_strides; 1705 1713 unsigned int stride; 1706 1714 unsigned int page; 1707 - uint8_t *buffer = chip->buffers->databuf; 1715 + uint8_t 
*buffer = chip->data_buf; 1708 1716 int saved_chip_number; 1709 1717 int found_an_ncb_fingerprint = false; 1710 1718 ··· 1729 1737 * Read the NCB fingerprint. The fingerprint is four bytes long 1730 1738 * and starts in the 12th byte of the page. 1731 1739 */ 1732 - chip->cmdfunc(mtd, NAND_CMD_READ0, 12, page); 1740 + nand_read_page_op(chip, page, 12, NULL, 0); 1733 1741 chip->read_buf(mtd, buffer, strlen(fingerprint)); 1734 1742 1735 1743 /* Look for the fingerprint. */ ··· 1763 1771 unsigned int block; 1764 1772 unsigned int stride; 1765 1773 unsigned int page; 1766 - uint8_t *buffer = chip->buffers->databuf; 1774 + uint8_t *buffer = chip->data_buf; 1767 1775 int saved_chip_number; 1768 1776 int status; 1769 1777 ··· 1789 1797 dev_dbg(dev, "Erasing the search area...\n"); 1790 1798 1791 1799 for (block = 0; block < search_area_size_in_blocks; block++) { 1792 - /* Compute the page address. */ 1793 - page = block * block_size_in_pages; 1794 - 1795 1800 /* Erase this block. */ 1796 1801 dev_dbg(dev, "\tErasing block 0x%x\n", block); 1797 - chip->cmdfunc(mtd, NAND_CMD_ERASE1, -1, page); 1798 - chip->cmdfunc(mtd, NAND_CMD_ERASE2, -1, -1); 1799 - 1800 - /* Wait for the erase to finish. */ 1801 - status = chip->waitfunc(mtd, chip); 1802 - if (status & NAND_STATUS_FAIL) 1802 + status = nand_erase_op(chip, block); 1803 + if (status) 1803 1804 dev_err(dev, "[%s] Erase failed.\n", __func__); 1804 1805 } 1805 1806 ··· 1808 1823 1809 1824 /* Write the first page of the current stride. */ 1810 1825 dev_dbg(dev, "Writing an NCB fingerprint in page 0x%x\n", page); 1811 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); 1812 - chip->ecc.write_page_raw(mtd, chip, buffer, 0, page); 1813 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 1814 1826 1815 - /* Wait for the write to finish. 
*/ 1816 - status = chip->waitfunc(mtd, chip); 1817 - if (status & NAND_STATUS_FAIL) 1827 + status = chip->ecc.write_page_raw(mtd, chip, buffer, 0, page); 1828 + if (status) 1818 1829 dev_err(dev, "[%s] Write failed.\n", __func__); 1819 1830 } 1820 1831 ··· 1865 1884 1866 1885 /* Send the command to read the conventional block mark. */ 1867 1886 chip->select_chip(mtd, chipnr); 1868 - chip->cmdfunc(mtd, NAND_CMD_READ0, mtd->writesize, page); 1887 + nand_read_page_op(chip, page, mtd->writesize, NULL, 0); 1869 1888 block_mark = chip->read_byte(mtd); 1870 1889 chip->select_chip(mtd, -1); 1871 1890
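Every hunk in this file follows the same conversion: an open-coded `cmdfunc(NAND_CMD_SEQIN)` / `write_buf()` / `cmdfunc(NAND_CMD_PAGEPROG)` / `waitfunc()` sequence is collapsed into a single call to a helper such as `nand_prog_page_op(chip, page, offset, buf, len)`. Here is a minimal sketch of what such a helper factors out, using a hypothetical simulated chip rather than the real kernel structures:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical, simplified model of a NAND chip, for illustration only. */
struct nand_chip_sim {
	unsigned char page[2112];   /* one page worth of data + OOB */
	int last_cmd_page;          /* page addressed by the last command */
	int prog_failed;            /* simulated NAND_STATUS_FAIL */
};

/*
 * Sketch of the sequence a helper like nand_prog_page_op() bundles:
 * SEQIN + address cycles, the data cycles, PAGEPROG, then a status poll
 * whose failure bit is translated into an errno.
 */
static int sim_prog_page_op(struct nand_chip_sim *chip, int page,
			    unsigned int offset_in_page,
			    const void *buf, unsigned int len)
{
	if (offset_in_page + len > sizeof(chip->page))
		return -22;                             /* -EINVAL */

	chip->last_cmd_page = page;                     /* SEQIN + addresses */
	memcpy(chip->page + offset_in_page, buf, len);  /* data cycles */

	/* PAGEPROG + status read */
	return chip->prog_failed ? -5 : 0;              /* -EIO on failure */
}
```

The point of the conversion is that command sequencing and the status-to-errno translation now live in one place instead of being duplicated in every driver.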
+23 -23
drivers/mtd/nand/gpmi-nand/gpmi-nand.h
··· 268 268 }; 269 269 270 270 /* Common Services */ 271 - extern int common_nfc_set_geometry(struct gpmi_nand_data *); 272 - extern struct dma_chan *get_dma_chan(struct gpmi_nand_data *); 273 - extern void prepare_data_dma(struct gpmi_nand_data *, 274 - enum dma_data_direction dr); 275 - extern int start_dma_without_bch_irq(struct gpmi_nand_data *, 276 - struct dma_async_tx_descriptor *); 277 - extern int start_dma_with_bch_irq(struct gpmi_nand_data *, 278 - struct dma_async_tx_descriptor *); 271 + int common_nfc_set_geometry(struct gpmi_nand_data *); 272 + struct dma_chan *get_dma_chan(struct gpmi_nand_data *); 273 + void prepare_data_dma(struct gpmi_nand_data *, 274 + enum dma_data_direction dr); 275 + int start_dma_without_bch_irq(struct gpmi_nand_data *, 276 + struct dma_async_tx_descriptor *); 277 + int start_dma_with_bch_irq(struct gpmi_nand_data *, 278 + struct dma_async_tx_descriptor *); 279 279 280 280 /* GPMI-NAND helper function library */ 281 - extern int gpmi_init(struct gpmi_nand_data *); 282 - extern int gpmi_extra_init(struct gpmi_nand_data *); 283 - extern void gpmi_clear_bch(struct gpmi_nand_data *); 284 - extern void gpmi_dump_info(struct gpmi_nand_data *); 285 - extern int bch_set_geometry(struct gpmi_nand_data *); 286 - extern int gpmi_is_ready(struct gpmi_nand_data *, unsigned chip); 287 - extern int gpmi_send_command(struct gpmi_nand_data *); 288 - extern void gpmi_begin(struct gpmi_nand_data *); 289 - extern void gpmi_end(struct gpmi_nand_data *); 290 - extern int gpmi_read_data(struct gpmi_nand_data *); 291 - extern int gpmi_send_data(struct gpmi_nand_data *); 292 - extern int gpmi_send_page(struct gpmi_nand_data *, 293 - dma_addr_t payload, dma_addr_t auxiliary); 294 - extern int gpmi_read_page(struct gpmi_nand_data *, 295 - dma_addr_t payload, dma_addr_t auxiliary); 281 + int gpmi_init(struct gpmi_nand_data *); 282 + int gpmi_extra_init(struct gpmi_nand_data *); 283 + void gpmi_clear_bch(struct gpmi_nand_data *); 284 + void 
gpmi_dump_info(struct gpmi_nand_data *); 285 + int bch_set_geometry(struct gpmi_nand_data *); 286 + int gpmi_is_ready(struct gpmi_nand_data *, unsigned chip); 287 + int gpmi_send_command(struct gpmi_nand_data *); 288 + void gpmi_begin(struct gpmi_nand_data *); 289 + void gpmi_end(struct gpmi_nand_data *); 290 + int gpmi_read_data(struct gpmi_nand_data *); 291 + int gpmi_send_data(struct gpmi_nand_data *); 292 + int gpmi_send_page(struct gpmi_nand_data *, 293 + dma_addr_t payload, dma_addr_t auxiliary); 294 + int gpmi_read_page(struct gpmi_nand_data *, 295 + dma_addr_t payload, dma_addr_t auxiliary); 296 296 297 297 void gpmi_copy_bits(u8 *dst, size_t dst_bit_off, 298 298 const u8 *src, size_t src_bit_off,
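The gpmi-nand.h hunk only drops the `extern` storage-class specifier from the prototypes. This is purely cosmetic: file-scope function declarations have external linkage by default, so both spellings declare exactly the same thing. A standalone illustration:

```c
#include <assert.h>

/* These two declarations are equivalent: `extern` is implicit for
 * file-scope function declarations in C. */
extern int add_explicit(int a, int b);
int add_implicit(int a, int b);

int add_explicit(int a, int b) { return a + b; }
int add_implicit(int a, int b) { return a + b; }
```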
+4 -5
drivers/mtd/nand/hisi504_nand.c
··· 544 544 int max_bitflips = 0, stat = 0, stat_max = 0, status_ecc; 545 545 int stat_1, stat_2; 546 546 547 - chip->read_buf(mtd, buf, mtd->writesize); 547 + nand_read_page_op(chip, page, 0, buf, mtd->writesize); 548 548 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 549 549 550 550 /* errors which can not be corrected by ECC */ ··· 574 574 { 575 575 struct hinfc_host *host = nand_get_controller_data(chip); 576 576 577 - chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 578 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 577 + nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize); 579 578 580 579 if (host->irq_status & HINFC504_INTS_UE) { 581 580 host->irq_status = 0; ··· 589 590 struct nand_chip *chip, const uint8_t *buf, int oob_required, 590 591 int page) 591 592 { 592 - chip->write_buf(mtd, buf, mtd->writesize); 593 + nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize); 593 594 if (oob_required) 594 595 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 595 596 596 - return 0; 597 + return nand_prog_page_end_op(chip); 597 598 } 598 599 599 600 static void hisi_nfc_host_init(struct hinfc_host *host)
+8 -8
drivers/mtd/nand/jz4740_nand.c
··· 313 313 uint32_t ctrl; 314 314 struct nand_chip *chip = &nand->chip; 315 315 struct mtd_info *mtd = nand_to_mtd(chip); 316 + u8 id[2]; 316 317 317 318 /* Request I/O resource. */ 318 319 sprintf(res_name, "bank%d", bank); ··· 336 335 337 336 /* Retrieve the IDs from the first chip. */ 338 337 chip->select_chip(mtd, 0); 339 - chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 340 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1); 341 - *nand_maf_id = chip->read_byte(mtd); 342 - *nand_dev_id = chip->read_byte(mtd); 338 + nand_reset_op(chip); 339 + nand_readid_op(chip, 0, id, sizeof(id)); 340 + *nand_maf_id = id[0]; 341 + *nand_dev_id = id[1]; 343 342 } else { 344 343 /* Detect additional chip. */ 345 344 chip->select_chip(mtd, chipnr); 346 - chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 347 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1); 348 - if (*nand_maf_id != chip->read_byte(mtd) 349 - || *nand_dev_id != chip->read_byte(mtd)) { 345 + nand_reset_op(chip); 346 + nand_readid_op(chip, 0, id, sizeof(id)); 347 + if (*nand_maf_id != id[0] || *nand_dev_id != id[1]) { 350 348 ret = -ENODEV; 351 349 goto notfound_id; 352 350 }
+5 -2
drivers/mtd/nand/lpc32xx_mlc.c
··· 461 461 } 462 462 463 463 /* Writing Command and Address */ 464 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 464 + nand_read_page_op(chip, page, 0, NULL, 0); 465 465 466 466 /* For all sub-pages */ 467 467 for (i = 0; i < host->mlcsubpages; i++) { ··· 522 522 memcpy(dma_buf, buf, mtd->writesize); 523 523 } 524 524 525 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 526 + 525 527 for (i = 0; i < host->mlcsubpages; i++) { 526 528 /* Start Encode */ 527 529 writeb(0x00, MLC_ECC_ENC_REG(host->io_base)); ··· 552 550 /* Wait for Controller Ready */ 553 551 lpc32xx_waitfunc_controller(mtd, chip); 554 552 } 555 - return 0; 553 + 554 + return nand_prog_page_end_op(chip); 556 555 } 557 556 558 557 static int lpc32xx_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
+13 -20
drivers/mtd/nand/lpc32xx_slc.c
··· 399 399 static int lpc32xx_nand_read_oob_syndrome(struct mtd_info *mtd, 400 400 struct nand_chip *chip, int page) 401 401 { 402 - chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 403 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 404 - 405 - return 0; 402 + return nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize); 406 403 } 407 404 408 405 /* ··· 408 411 static int lpc32xx_nand_write_oob_syndrome(struct mtd_info *mtd, 409 412 struct nand_chip *chip, int page) 410 413 { 411 - int status; 412 - 413 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize, page); 414 - chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 415 - 416 - /* Send command to program the OOB data */ 417 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 418 - 419 - status = chip->waitfunc(mtd, chip); 420 - 421 - return status & NAND_STATUS_FAIL ? -EIO : 0; 414 + return nand_prog_page_op(chip, page, mtd->writesize, chip->oob_poi, 415 + mtd->oobsize); 422 416 } 423 417 424 418 /* ··· 620 632 uint8_t *oobecc, tmpecc[LPC32XX_ECC_SAVE_SIZE]; 621 633 622 634 /* Issue read command */ 623 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 635 + nand_read_page_op(chip, page, 0, NULL, 0); 624 636 625 637 /* Read data and oob, calculate ECC */ 626 638 status = lpc32xx_xfer(mtd, buf, chip->ecc.steps, 1); ··· 663 675 int page) 664 676 { 665 677 /* Issue read command */ 666 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 678 + nand_read_page_op(chip, page, 0, NULL, 0); 667 679 668 680 /* Raw reads can just use the FIFO interface */ 669 681 chip->read_buf(mtd, buf, chip->ecc.size * chip->ecc.steps); ··· 686 698 uint8_t *pb; 687 699 int error; 688 700 701 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 702 + 689 703 /* Write data, calculate ECC on outbound data */ 690 704 error = lpc32xx_xfer(mtd, (uint8_t *)buf, chip->ecc.steps, 0); 691 705 if (error) ··· 706 716 707 717 /* Write ECC data to device */ 708 718 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 709 - return 0; 719 + 720 + return 
nand_prog_page_end_op(chip); 710 721 } 711 722 712 723 /* ··· 720 729 int oob_required, int page) 721 730 { 722 731 /* Raw writes can just use the FIFO interface */ 723 - chip->write_buf(mtd, buf, chip->ecc.size * chip->ecc.steps); 732 + nand_prog_page_begin_op(chip, page, 0, buf, 733 + chip->ecc.size * chip->ecc.steps); 724 734 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 725 - return 0; 735 + 736 + return nand_prog_page_end_op(chip); 726 737 } 727 738 728 739 static int lpc32xx_nand_dma_setup(struct lpc32xx_nand_host *host)
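Drivers like hisi504 and the lpc32xx pair above cannot use the one-shot helpers because they interleave ECC work with the data transfer, so they use the split `nand_prog_page_begin_op()` / `nand_prog_page_end_op()` pair instead. A rough model of that contract, as a hypothetical simplified state machine rather than the kernel API:

```c
#include <assert.h>

enum sim_state { SIM_IDLE, SIM_PROGRAMMING };

struct sim_chip {
	enum sim_state state;
	int bytes_written;
};

/* begin: issue SEQIN + addresses, optionally push a first data chunk */
static int sim_prog_begin(struct sim_chip *c, int len)
{
	if (c->state != SIM_IDLE)
		return -16;            /* -EBUSY: unbalanced begin */
	c->state = SIM_PROGRAMMING;
	c->bytes_written = len;
	return 0;
}

/* intermediate write_buf() calls are still allowed in between */
static void sim_write_buf(struct sim_chip *c, int len)
{
	c->bytes_written += len;
}

/* end: issue PAGEPROG and translate the status poll into an errno */
static int sim_prog_end(struct sim_chip *c)
{
	if (c->state != SIM_PROGRAMMING)
		return -16;
	c->state = SIM_IDLE;
	return 0;
}
```

The begin/end split keeps the address sequencing and the final status check in core code while leaving the driver free to stream data and ECC bytes in between.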
+2896
drivers/mtd/nand/marvell_nand.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Marvell NAND flash controller driver 4 + * 5 + * Copyright (C) 2017 Marvell 6 + * Author: Miquel RAYNAL <miquel.raynal@free-electrons.com> 7 + * 8 + */ 9 + 10 + #include <linux/module.h> 11 + #include <linux/clk.h> 12 + #include <linux/mtd/rawnand.h> 13 + #include <linux/of_platform.h> 14 + #include <linux/iopoll.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/slab.h> 17 + #include <linux/mfd/syscon.h> 18 + #include <linux/regmap.h> 19 + #include <asm/unaligned.h> 20 + 21 + #include <linux/dmaengine.h> 22 + #include <linux/dma-mapping.h> 23 + #include <linux/dma/pxa-dma.h> 24 + #include <linux/platform_data/mtd-nand-pxa3xx.h> 25 + 26 + /* Data FIFO granularity, FIFO reads/writes must be a multiple of this length */ 27 + #define FIFO_DEPTH 8 28 + #define FIFO_REP(x) (x / sizeof(u32)) 29 + #define BCH_SEQ_READS (32 / FIFO_DEPTH) 30 + /* NFC does not support transfers of larger chunks at a time */ 31 + #define MAX_CHUNK_SIZE 2112 32 + /* NFCv1 cannot read more that 7 bytes of ID */ 33 + #define NFCV1_READID_LEN 7 34 + /* Polling is done at a pace of POLL_PERIOD us until POLL_TIMEOUT is reached */ 35 + #define POLL_PERIOD 0 36 + #define POLL_TIMEOUT 100000 37 + /* Interrupt maximum wait period in ms */ 38 + #define IRQ_TIMEOUT 1000 39 + /* Latency in clock cycles between SoC pins and NFC logic */ 40 + #define MIN_RD_DEL_CNT 3 41 + /* Maximum number of contiguous address cycles */ 42 + #define MAX_ADDRESS_CYC_NFCV1 5 43 + #define MAX_ADDRESS_CYC_NFCV2 7 44 + /* System control registers/bits to enable the NAND controller on some SoCs */ 45 + #define GENCONF_SOC_DEVICE_MUX 0x208 46 + #define GENCONF_SOC_DEVICE_MUX_NFC_EN BIT(0) 47 + #define GENCONF_SOC_DEVICE_MUX_ECC_CLK_RST BIT(20) 48 + #define GENCONF_SOC_DEVICE_MUX_ECC_CORE_RST BIT(21) 49 + #define GENCONF_SOC_DEVICE_MUX_NFC_INT_EN BIT(25) 50 + #define GENCONF_CLK_GATING_CTRL 0x220 51 + #define GENCONF_CLK_GATING_CTRL_ND_GATE BIT(2) 52 + #define 
GENCONF_ND_CLK_CTRL 0x700 53 + #define GENCONF_ND_CLK_CTRL_EN BIT(0) 54 + 55 + /* NAND controller data flash control register */ 56 + #define NDCR 0x00 57 + #define NDCR_ALL_INT GENMASK(11, 0) 58 + #define NDCR_CS1_CMDDM BIT(7) 59 + #define NDCR_CS0_CMDDM BIT(8) 60 + #define NDCR_RDYM BIT(11) 61 + #define NDCR_ND_ARB_EN BIT(12) 62 + #define NDCR_RA_START BIT(15) 63 + #define NDCR_RD_ID_CNT(x) (min_t(unsigned int, x, 0x7) << 16) 64 + #define NDCR_PAGE_SZ(x) (x >= 2048 ? BIT(24) : 0) 65 + #define NDCR_DWIDTH_M BIT(26) 66 + #define NDCR_DWIDTH_C BIT(27) 67 + #define NDCR_ND_RUN BIT(28) 68 + #define NDCR_DMA_EN BIT(29) 69 + #define NDCR_ECC_EN BIT(30) 70 + #define NDCR_SPARE_EN BIT(31) 71 + #define NDCR_GENERIC_FIELDS_MASK (~(NDCR_RA_START | NDCR_PAGE_SZ(2048) | \ 72 + NDCR_DWIDTH_M | NDCR_DWIDTH_C)) 73 + 74 + /* NAND interface timing parameter 0 register */ 75 + #define NDTR0 0x04 76 + #define NDTR0_TRP(x) ((min_t(unsigned int, x, 0xF) & 0x7) << 0) 77 + #define NDTR0_TRH(x) (min_t(unsigned int, x, 0x7) << 3) 78 + #define NDTR0_ETRP(x) ((min_t(unsigned int, x, 0xF) & 0x8) << 3) 79 + #define NDTR0_SEL_NRE_EDGE BIT(7) 80 + #define NDTR0_TWP(x) (min_t(unsigned int, x, 0x7) << 8) 81 + #define NDTR0_TWH(x) (min_t(unsigned int, x, 0x7) << 11) 82 + #define NDTR0_TCS(x) (min_t(unsigned int, x, 0x7) << 16) 83 + #define NDTR0_TCH(x) (min_t(unsigned int, x, 0x7) << 19) 84 + #define NDTR0_RD_CNT_DEL(x) (min_t(unsigned int, x, 0xF) << 22) 85 + #define NDTR0_SELCNTR BIT(26) 86 + #define NDTR0_TADL(x) (min_t(unsigned int, x, 0x1F) << 27) 87 + 88 + /* NAND interface timing parameter 1 register */ 89 + #define NDTR1 0x0C 90 + #define NDTR1_TAR(x) (min_t(unsigned int, x, 0xF) << 0) 91 + #define NDTR1_TWHR(x) (min_t(unsigned int, x, 0xF) << 4) 92 + #define NDTR1_TRHW(x) (min_t(unsigned int, x / 16, 0x3) << 8) 93 + #define NDTR1_PRESCALE BIT(14) 94 + #define NDTR1_WAIT_MODE BIT(15) 95 + #define NDTR1_TR(x) (min_t(unsigned int, x, 0xFFFF) << 16) 96 + 97 + /* NAND controller status register 
*/ 98 + #define NDSR 0x14 99 + #define NDSR_WRCMDREQ BIT(0) 100 + #define NDSR_RDDREQ BIT(1) 101 + #define NDSR_WRDREQ BIT(2) 102 + #define NDSR_CORERR BIT(3) 103 + #define NDSR_UNCERR BIT(4) 104 + #define NDSR_CMDD(cs) BIT(8 - cs) 105 + #define NDSR_RDY(rb) BIT(11 + rb) 106 + #define NDSR_ERRCNT(x) ((x >> 16) & 0x1F) 107 + 108 + /* NAND ECC control register */ 109 + #define NDECCCTRL 0x28 110 + #define NDECCCTRL_BCH_EN BIT(0) 111 + 112 + /* NAND controller data buffer register */ 113 + #define NDDB 0x40 114 + 115 + /* NAND controller command buffer 0 register */ 116 + #define NDCB0 0x48 117 + #define NDCB0_CMD1(x) ((x & 0xFF) << 0) 118 + #define NDCB0_CMD2(x) ((x & 0xFF) << 8) 119 + #define NDCB0_ADDR_CYC(x) ((x & 0x7) << 16) 120 + #define NDCB0_ADDR_GET_NUM_CYC(x) (((x) >> 16) & 0x7) 121 + #define NDCB0_DBC BIT(19) 122 + #define NDCB0_CMD_TYPE(x) ((x & 0x7) << 21) 123 + #define NDCB0_CSEL BIT(24) 124 + #define NDCB0_RDY_BYP BIT(27) 125 + #define NDCB0_LEN_OVRD BIT(28) 126 + #define NDCB0_CMD_XTYPE(x) ((x & 0x7) << 29) 127 + 128 + /* NAND controller command buffer 1 register */ 129 + #define NDCB1 0x4C 130 + #define NDCB1_COLS(x) ((x & 0xFFFF) << 0) 131 + #define NDCB1_ADDRS_PAGE(x) (x << 16) 132 + 133 + /* NAND controller command buffer 2 register */ 134 + #define NDCB2 0x50 135 + #define NDCB2_ADDR5_PAGE(x) (((x >> 16) & 0xFF) << 0) 136 + #define NDCB2_ADDR5_CYC(x) ((x & 0xFF) << 0) 137 + 138 + /* NAND controller command buffer 3 register */ 139 + #define NDCB3 0x54 140 + #define NDCB3_ADDR6_CYC(x) ((x & 0xFF) << 16) 141 + #define NDCB3_ADDR7_CYC(x) ((x & 0xFF) << 24) 142 + 143 + /* NAND controller command buffer 0 register 'type' and 'xtype' fields */ 144 + #define TYPE_READ 0 145 + #define TYPE_WRITE 1 146 + #define TYPE_ERASE 2 147 + #define TYPE_READ_ID 3 148 + #define TYPE_STATUS 4 149 + #define TYPE_RESET 5 150 + #define TYPE_NAKED_CMD 6 151 + #define TYPE_NAKED_ADDR 7 152 + #define TYPE_MASK 7 153 + #define XTYPE_MONOLITHIC_RW 0 154 + #define 
XTYPE_LAST_NAKED_RW 1 155 + #define XTYPE_FINAL_COMMAND 3 156 + #define XTYPE_READ 4 157 + #define XTYPE_WRITE_DISPATCH 4 158 + #define XTYPE_NAKED_RW 5 159 + #define XTYPE_COMMAND_DISPATCH 6 160 + #define XTYPE_MASK 7 161 + 162 + /** 163 + * The Marvell ECC engine works differently than the others: in order to limit 164 + * the size of the IP, hardware engineers chose to set a fixed strength of 16 165 + * bits per subpage, and depending on the desired strength needed by the NAND 166 + * chip, a particular layout mixing data/spare/ecc is defined, with a possible 167 + * last chunk smaller than the others. 168 + * 169 + * @writesize: Full page size on which the layout applies 170 + * @chunk: Desired ECC chunk size on which the layout applies 171 + * @strength: Desired ECC strength (per chunk size bytes) on which the 172 + * layout applies 173 + * @nchunks: Total number of chunks 174 + * @full_chunk_cnt: Number of full-sized chunks, which is the number of 175 + * repetitions of the pattern: 176 + * (data_bytes + spare_bytes + ecc_bytes). 
177 + * @data_bytes: Number of data bytes per chunk 178 + * @spare_bytes: Number of spare bytes per chunk 179 + * @ecc_bytes: Number of ecc bytes per chunk 180 + * @last_data_bytes: Number of data bytes in the last chunk 181 + * @last_spare_bytes: Number of spare bytes in the last chunk 182 + * @last_ecc_bytes: Number of ecc bytes in the last chunk 183 + */ 184 + struct marvell_hw_ecc_layout { 185 + /* Constraints */ 186 + int writesize; 187 + int chunk; 188 + int strength; 189 + /* Corresponding layout */ 190 + int nchunks; 191 + int full_chunk_cnt; 192 + int data_bytes; 193 + int spare_bytes; 194 + int ecc_bytes; 195 + int last_data_bytes; 196 + int last_spare_bytes; 197 + int last_ecc_bytes; 198 + }; 199 + 200 + #define MARVELL_LAYOUT(ws, dc, ds, nc, fcc, db, sb, eb, ldb, lsb, leb) \ 201 + { \ 202 + .writesize = ws, \ 203 + .chunk = dc, \ 204 + .strength = ds, \ 205 + .nchunks = nc, \ 206 + .full_chunk_cnt = fcc, \ 207 + .data_bytes = db, \ 208 + .spare_bytes = sb, \ 209 + .ecc_bytes = eb, \ 210 + .last_data_bytes = ldb, \ 211 + .last_spare_bytes = lsb, \ 212 + .last_ecc_bytes = leb, \ 213 + } 214 + 215 + /* Layouts explained in AN-379_Marvell_SoC_NFC_ECC */ 216 + static const struct marvell_hw_ecc_layout marvell_nfc_layouts[] = { 217 + MARVELL_LAYOUT( 512, 512, 1, 1, 1, 512, 8, 8, 0, 0, 0), 218 + MARVELL_LAYOUT( 2048, 512, 1, 1, 1, 2048, 40, 24, 0, 0, 0), 219 + MARVELL_LAYOUT( 2048, 512, 4, 1, 1, 2048, 32, 30, 0, 0, 0), 220 + MARVELL_LAYOUT( 4096, 512, 4, 2, 2, 2048, 32, 30, 0, 0, 0), 221 + MARVELL_LAYOUT( 4096, 512, 8, 5, 4, 1024, 0, 30, 0, 64, 30), 222 + }; 223 + 224 + /** 225 + * The Nand Flash Controller has up to 4 CE and 2 RB pins. The CE selection 226 + * is made by a field in NDCB0 register, and in another field in NDCB2 register. 227 + * The datasheet describes the logic with an error: ADDR5 field is once 228 + * declared at the beginning of NDCB2, and another time at its end. 
Because the 229 + * ADDR5 field of NDCB2 may be used by other bytes, it would be more logical 230 + * to use the last bit of this field instead of the first ones. 231 + * 232 + * @cs: Wanted CE lane. 233 + * @ndcb0_csel: Value of the NDCB0 register with or without the flag 234 + * selecting the wanted CE lane. This is set once when 235 + * the Device Tree is probed. 236 + * @rb: Ready/Busy pin for the flash chip 237 + */ 238 + struct marvell_nand_chip_sel { 239 + unsigned int cs; 240 + u32 ndcb0_csel; 241 + unsigned int rb; 242 + }; 243 + 244 + /** 245 + * NAND chip structure: stores NAND chip device-related information 246 + * 247 + * @chip: Base NAND chip structure 248 + * @node: Used to store NAND chips into a list 249 + * @layout: NAND layout when using hardware ECC 250 + * @ndcr: Controller register value for this NAND chip 251 + * @ndtr0: Timing registers 0 value for this NAND chip 252 + * @ndtr1: Timing registers 1 value for this NAND chip 253 + * @selected_die: Current active CS 254 + * @nsels: Number of CS lines required by the NAND chip 255 + * @sels: Array of CS line descriptions 256 + */ 257 + struct marvell_nand_chip { 258 + struct nand_chip chip; 259 + struct list_head node; 260 + const struct marvell_hw_ecc_layout *layout; 261 + u32 ndcr; 262 + u32 ndtr0; 263 + u32 ndtr1; 264 + int addr_cyc; 265 + int selected_die; 266 + unsigned int nsels; 267 + struct marvell_nand_chip_sel sels[0]; 268 + }; 269 + 270 + static inline struct marvell_nand_chip *to_marvell_nand(struct nand_chip *chip) 271 + { 272 + return container_of(chip, struct marvell_nand_chip, chip); 273 + } 274 + 275 + static inline struct marvell_nand_chip_sel *to_nand_sel(struct marvell_nand_chip 276 + *nand) 277 + { 278 + return &nand->sels[nand->selected_die]; 279 + } 280 + 281 + /** 282 + * NAND controller capabilities for distinction between compatible strings 283 + * 284 + * @max_cs_nb: Number of Chip Select lines available 285 + * @max_rb_nb: Number of Ready/Busy lines available 286 + * 
@need_system_controller: Indicates if the SoC needs to have access to the 287 + * system controller (i.e. to enable the NAND controller) 288 + * @legacy_of_bindings: Indicates if DT parsing must be done using the 289 + * old-fashioned way 290 + * @is_nfcv2: NFCv2 has numerous enhancements compared to NFCv1, e.g. 291 + * the BCH error detection and correction algorithm and 292 + * the added NDCB3 register 293 + * @use_dma: Use DMA for data transfers 294 + */ 295 + struct marvell_nfc_caps { 296 + unsigned int max_cs_nb; 297 + unsigned int max_rb_nb; 298 + bool need_system_controller; 299 + bool legacy_of_bindings; 300 + bool is_nfcv2; 301 + bool use_dma; 302 + }; 303 + 304 + /** 305 + * NAND controller structure: stores Marvell NAND controller information 306 + * 307 + * @controller: Base controller structure 308 + * @dev: Parent device (used to print error messages) 309 + * @regs: NAND controller registers 310 + * @ecc_clk: ECC block clock, two times the NAND controller clock 311 + * @complete: Completion object to wait for NAND controller events 312 + * @assigned_cs: Bitmask describing already assigned CS lines 313 + * @chips: List containing all the NAND chips attached to 314 + * this NAND controller 315 + * @caps: NAND controller capabilities for each compatible string 316 + * @dma_chan: DMA channel (NFCv1 only) 317 + * @dma_buf: 32-bit aligned buffer for DMA transfers (NFCv1 only) 318 + */ 319 + struct marvell_nfc { 320 + struct nand_hw_control controller; 321 + struct device *dev; 322 + void __iomem *regs; 323 + struct clk *ecc_clk; 324 + struct completion complete; 325 + unsigned long assigned_cs; 326 + struct list_head chips; 327 + struct nand_chip *selected_chip; 328 + const struct marvell_nfc_caps *caps; 329 + 330 + /* DMA (NFCv1 only) */ 331 + bool use_dma; 332 + struct dma_chan *dma_chan; 333 + u8 *dma_buf; 334 + }; 335 + 336 + static inline struct marvell_nfc *to_marvell_nfc(struct nand_hw_control *ctrl) 337 + { 338 + return container_of(ctrl, struct 
marvell_nfc, controller); 339 + } 340 + 341 + /** 342 + * NAND controller timings expressed in NAND Controller clock cycles 343 + * 344 + * @tRP: ND_nRE pulse width 345 + * @tRH: ND_nRE high duration 346 + * @tWP: ND_nWE pulse time 347 + * @tWH: ND_nWE high duration 348 + * @tCS: Enable signal setup time 349 + * @tCH: Enable signal hold time 350 + * @tADL: Address to write data delay 351 + * @tAR: ND_ALE low to ND_nRE low delay 352 + * @tWHR: ND_nWE high to ND_nRE low for status read 353 + * @tRHW: ND_nRE high duration, read to write delay 354 + * @tR: ND_nWE high to ND_nRE low for read 355 + */ 356 + struct marvell_nfc_timings { 357 + /* NDTR0 fields */ 358 + unsigned int tRP; 359 + unsigned int tRH; 360 + unsigned int tWP; 361 + unsigned int tWH; 362 + unsigned int tCS; 363 + unsigned int tCH; 364 + unsigned int tADL; 365 + /* NDTR1 fields */ 366 + unsigned int tAR; 367 + unsigned int tWHR; 368 + unsigned int tRHW; 369 + unsigned int tR; 370 + }; 371 + 372 + /** 373 + * Derives a duration in number of clock cycles. 374 + * 375 + * @ps: Duration in picoseconds 376 + * @period_ns: Clock period in nanoseconds 377 + * 378 + * Convert the duration to nanoseconds, then divide by the period and 379 + * round up to return the number of clock periods. 380 + */ 381 + #define TO_CYCLES(ps, period_ns) (DIV_ROUND_UP(ps / 1000, period_ns)) 382 + 383 + /** 384 + * NAND driver structure filled during the parsing of the ->exec_op() subop 385 + * subset of instructions. 
386 + * 387 + * @ndcb: Array of values written to NDCBx registers 388 + * @cle_ale_delay_ns: Optional delay after the last CMD or ADDR cycle 389 + * @rdy_timeout_ms: Timeout for waits on Ready/Busy pin 390 + * @rdy_delay_ns: Optional delay after waiting for the RB pin 391 + * @data_delay_ns: Optional delay after the data xfer 392 + * @data_instr_idx: Index of the data instruction in the subop 393 + * @data_instr: Pointer to the data instruction in the subop 394 + */ 395 + struct marvell_nfc_op { 396 + u32 ndcb[4]; 397 + unsigned int cle_ale_delay_ns; 398 + unsigned int rdy_timeout_ms; 399 + unsigned int rdy_delay_ns; 400 + unsigned int data_delay_ns; 401 + unsigned int data_instr_idx; 402 + const struct nand_op_instr *data_instr; 403 + }; 404 + 405 + /* 406 + * Internal helper to conditionally apply a delay (from the above structure, 407 + * most of the time). 408 + */ 409 + static void cond_delay(unsigned int ns) 410 + { 411 + if (!ns) 412 + return; 413 + 414 + if (ns < 10000) 415 + ndelay(ns); 416 + else 417 + udelay(DIV_ROUND_UP(ns, 1000)); 418 + } 419 + 420 + /* 421 + * The controller has many flags that could generate interrupts; most of them 422 + * are disabled and polling is used. For the very slow signals, using interrupts 423 + * may reduce the CPU load. 
424 + */ 425 + static void marvell_nfc_disable_int(struct marvell_nfc *nfc, u32 int_mask) 426 + { 427 + u32 reg; 428 + 429 + /* Writing 1 disables the interrupt */ 430 + reg = readl_relaxed(nfc->regs + NDCR); 431 + writel_relaxed(reg | int_mask, nfc->regs + NDCR); 432 + } 433 + 434 + static void marvell_nfc_enable_int(struct marvell_nfc *nfc, u32 int_mask) 435 + { 436 + u32 reg; 437 + 438 + /* Writing 0 enables the interrupt */ 439 + reg = readl_relaxed(nfc->regs + NDCR); 440 + writel_relaxed(reg & ~int_mask, nfc->regs + NDCR); 441 + } 442 + 443 + static void marvell_nfc_clear_int(struct marvell_nfc *nfc, u32 int_mask) 444 + { 445 + writel_relaxed(int_mask, nfc->regs + NDSR); 446 + } 447 + 448 + static void marvell_nfc_force_byte_access(struct nand_chip *chip, 449 + bool force_8bit) 450 + { 451 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 452 + u32 ndcr; 453 + 454 + /* 455 + * Callers of this function do not verify if the NAND is using a 16-bit 456 + * or an 8-bit bus for normal operations, so we need to take care of that 457 + * here by leaving the configuration unchanged if the NAND does not have 458 + * the NAND_BUSWIDTH_16 flag set. 459 + */ 460 + if (!(chip->options & NAND_BUSWIDTH_16)) 461 + return; 462 + 463 + ndcr = readl_relaxed(nfc->regs + NDCR); 464 + 465 + if (force_8bit) 466 + ndcr &= ~(NDCR_DWIDTH_M | NDCR_DWIDTH_C); 467 + else 468 + ndcr |= NDCR_DWIDTH_M | NDCR_DWIDTH_C; 469 + 470 + writel_relaxed(ndcr, nfc->regs + NDCR); 471 + } 472 + 473 + static int marvell_nfc_wait_ndrun(struct nand_chip *chip) 474 + { 475 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 476 + u32 val; 477 + int ret; 478 + 479 + /* 480 + * The command is being processed; wait for the ND_RUN bit to be 481 + * cleared by the NFC. If it is not, we must clear it by hand. 
482 + */ 483 + ret = readl_relaxed_poll_timeout(nfc->regs + NDCR, val, 484 + (val & NDCR_ND_RUN) == 0, 485 + POLL_PERIOD, POLL_TIMEOUT); 486 + if (ret) { 487 + dev_err(nfc->dev, "Timeout on NAND controller run mode\n"); 488 + writel_relaxed(readl(nfc->regs + NDCR) & ~NDCR_ND_RUN, 489 + nfc->regs + NDCR); 490 + return ret; 491 + } 492 + 493 + return 0; 494 + } 495 + 496 + /* 497 + * Any time a command has to be sent to the controller, the following sequence 498 + * has to be followed: 499 + * - call marvell_nfc_prepare_cmd() 500 + * -> set the ND_RUN bit to start the job 501 + * -> wait for the signal indicating the NFC is waiting for a command 502 + * - send the command (cmd and address cycles) 503 + * - optionally send or receive the data 504 + * - call marvell_nfc_end_cmd() with the corresponding flag 505 + * -> wait for the flag to trigger or cancel the job with a timeout 506 + * 507 + * The following helpers factor out this common code so that the 508 + * specialized functions responsible for executing the actual NAND 509 + * operations do not have to replicate the same code blocks. 
510 + */ 511 + static int marvell_nfc_prepare_cmd(struct nand_chip *chip) 512 + { 513 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 514 + u32 ndcr, val; 515 + int ret; 516 + 517 + /* Poll ND_RUN and clear NDSR before issuing any command */ 518 + ret = marvell_nfc_wait_ndrun(chip); 519 + if (ret) { 520 + dev_err(nfc->dev, "Last operation did not succeed\n"); 521 + return ret; 522 + } 523 + 524 + ndcr = readl_relaxed(nfc->regs + NDCR); 525 + writel_relaxed(readl(nfc->regs + NDSR), nfc->regs + NDSR); 526 + 527 + /* Assert ND_RUN bit and wait for the NFC to be ready */ 528 + writel_relaxed(ndcr | NDCR_ND_RUN, nfc->regs + NDCR); 529 + ret = readl_relaxed_poll_timeout(nfc->regs + NDSR, val, 530 + val & NDSR_WRCMDREQ, 531 + POLL_PERIOD, POLL_TIMEOUT); 532 + if (ret) { 533 + dev_err(nfc->dev, "Timeout on WRCMDREQ\n"); 534 + return -ETIMEDOUT; 535 + } 536 + 537 + /* Command may be written, clear WRCMDREQ status bit */ 538 + writel_relaxed(NDSR_WRCMDREQ, nfc->regs + NDSR); 539 + 540 + return 0; 541 + } 542 + 543 + static void marvell_nfc_send_cmd(struct nand_chip *chip, 544 + struct marvell_nfc_op *nfc_op) 545 + { 546 + struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 547 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 548 + 549 + dev_dbg(nfc->dev, "\nNDCR: 0x%08x\n" 550 + "NDCB0: 0x%08x\nNDCB1: 0x%08x\nNDCB2: 0x%08x\nNDCB3: 0x%08x\n", 551 + (u32)readl_relaxed(nfc->regs + NDCR), nfc_op->ndcb[0], 552 + nfc_op->ndcb[1], nfc_op->ndcb[2], nfc_op->ndcb[3]); 553 + 554 + writel_relaxed(to_nand_sel(marvell_nand)->ndcb0_csel | nfc_op->ndcb[0], 555 + nfc->regs + NDCB0); 556 + writel_relaxed(nfc_op->ndcb[1], nfc->regs + NDCB0); 557 + writel(nfc_op->ndcb[2], nfc->regs + NDCB0); 558 + 559 + /* 560 + * Write NDCB0 four times only if LEN_OVRD is set or if ADDR6 or ADDR7 561 + * fields are used (only available on NFCv2). 
562 + */ 563 + if (nfc_op->ndcb[0] & NDCB0_LEN_OVRD || 564 + NDCB0_ADDR_GET_NUM_CYC(nfc_op->ndcb[0]) >= 6) { 565 + if (!WARN_ON_ONCE(!nfc->caps->is_nfcv2)) 566 + writel(nfc_op->ndcb[3], nfc->regs + NDCB0); 567 + } 568 + } 569 + 570 + static int marvell_nfc_end_cmd(struct nand_chip *chip, int flag, 571 + const char *label) 572 + { 573 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 574 + u32 val; 575 + int ret; 576 + 577 + ret = readl_relaxed_poll_timeout(nfc->regs + NDSR, val, 578 + val & flag, 579 + POLL_PERIOD, POLL_TIMEOUT); 580 + 581 + if (ret) { 582 + dev_err(nfc->dev, "Timeout on %s (NDSR: 0x%08x)\n", 583 + label, val); 584 + if (nfc->dma_chan) 585 + dmaengine_terminate_all(nfc->dma_chan); 586 + return ret; 587 + } 588 + 589 + /* 590 + * DMA function uses this helper to poll on CMDD bits without wanting 591 + * them to be cleared. 592 + */ 593 + if (nfc->use_dma && (readl_relaxed(nfc->regs + NDCR) & NDCR_DMA_EN)) 594 + return 0; 595 + 596 + writel_relaxed(flag, nfc->regs + NDSR); 597 + 598 + return 0; 599 + } 600 + 601 + static int marvell_nfc_wait_cmdd(struct nand_chip *chip) 602 + { 603 + struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 604 + int cs_flag = NDSR_CMDD(to_nand_sel(marvell_nand)->ndcb0_csel); 605 + 606 + return marvell_nfc_end_cmd(chip, cs_flag, "CMDD"); 607 + } 608 + 609 + static int marvell_nfc_wait_op(struct nand_chip *chip, unsigned int timeout_ms) 610 + { 611 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 612 + int ret; 613 + 614 + /* Timeout is expressed in ms */ 615 + if (!timeout_ms) 616 + timeout_ms = IRQ_TIMEOUT; 617 + 618 + init_completion(&nfc->complete); 619 + 620 + marvell_nfc_enable_int(nfc, NDCR_RDYM); 621 + ret = wait_for_completion_timeout(&nfc->complete, 622 + msecs_to_jiffies(timeout_ms)); 623 + marvell_nfc_disable_int(nfc, NDCR_RDYM); 624 + marvell_nfc_clear_int(nfc, NDSR_RDY(0) | NDSR_RDY(1)); 625 + if (!ret) { 626 + dev_err(nfc->dev, "Timeout waiting for RB signal\n"); 627 + 
return -ETIMEDOUT; 628 + } 629 + 630 + return 0; 631 + } 632 + 633 + static void marvell_nfc_select_chip(struct mtd_info *mtd, int die_nr) 634 + { 635 + struct nand_chip *chip = mtd_to_nand(mtd); 636 + struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 637 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 638 + u32 ndcr_generic; 639 + 640 + if (chip == nfc->selected_chip && die_nr == marvell_nand->selected_die) 641 + return; 642 + 643 + if (die_nr < 0 || die_nr >= marvell_nand->nsels) { 644 + nfc->selected_chip = NULL; 645 + marvell_nand->selected_die = -1; 646 + return; 647 + } 648 + 649 + /* 650 + * Do not change the timing registers when using the DT property 651 + * marvell,nand-keep-config; in that case ->ndtr0 and ->ndtr1 from the 652 + * marvell_nand structure are supposedly empty. 653 + */ 654 + writel_relaxed(marvell_nand->ndtr0, nfc->regs + NDTR0); 655 + writel_relaxed(marvell_nand->ndtr1, nfc->regs + NDTR1); 656 + 657 + /* 658 + * Reset the NDCR register to a clean state for this particular chip, 659 + * also clear ND_RUN bit. 660 + */ 661 + ndcr_generic = readl_relaxed(nfc->regs + NDCR) & 662 + NDCR_GENERIC_FIELDS_MASK & ~NDCR_ND_RUN; 663 + writel_relaxed(ndcr_generic | marvell_nand->ndcr, nfc->regs + NDCR); 664 + 665 + /* Also reset the interrupt status register */ 666 + marvell_nfc_clear_int(nfc, NDCR_ALL_INT); 667 + 668 + nfc->selected_chip = chip; 669 + marvell_nand->selected_die = die_nr; 670 + } 671 + 672 + static irqreturn_t marvell_nfc_isr(int irq, void *dev_id) 673 + { 674 + struct marvell_nfc *nfc = dev_id; 675 + u32 st = readl_relaxed(nfc->regs + NDSR); 676 + u32 ien = (~readl_relaxed(nfc->regs + NDCR)) & NDCR_ALL_INT; 677 + 678 + /* 679 + * RDY interrupt mask is one bit in NDCR while there are two status 680 + * bits in NDSR (RDY[cs0/cs2] and RDY[cs1/cs3]). 
681 + */ 682 + if (st & NDSR_RDY(1)) 683 + st |= NDSR_RDY(0); 684 + 685 + if (!(st & ien)) 686 + return IRQ_NONE; 687 + 688 + marvell_nfc_disable_int(nfc, st & NDCR_ALL_INT); 689 + 690 + if (!(st & (NDSR_RDDREQ | NDSR_WRDREQ | NDSR_WRCMDREQ))) 691 + complete(&nfc->complete); 692 + 693 + return IRQ_HANDLED; 694 + } 695 + 696 + /* HW ECC related functions */ 697 + static void marvell_nfc_enable_hw_ecc(struct nand_chip *chip) 698 + { 699 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 700 + u32 ndcr = readl_relaxed(nfc->regs + NDCR); 701 + 702 + if (!(ndcr & NDCR_ECC_EN)) { 703 + writel_relaxed(ndcr | NDCR_ECC_EN, nfc->regs + NDCR); 704 + 705 + /* 706 + * When enabling BCH, set threshold to 0 to always know the 707 + * number of corrected bitflips. 708 + */ 709 + if (chip->ecc.algo == NAND_ECC_BCH) 710 + writel_relaxed(NDECCCTRL_BCH_EN, nfc->regs + NDECCCTRL); 711 + } 712 + } 713 + 714 + static void marvell_nfc_disable_hw_ecc(struct nand_chip *chip) 715 + { 716 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 717 + u32 ndcr = readl_relaxed(nfc->regs + NDCR); 718 + 719 + if (ndcr & NDCR_ECC_EN) { 720 + writel_relaxed(ndcr & ~NDCR_ECC_EN, nfc->regs + NDCR); 721 + if (chip->ecc.algo == NAND_ECC_BCH) 722 + writel_relaxed(0, nfc->regs + NDECCCTRL); 723 + } 724 + } 725 + 726 + /* DMA related helpers */ 727 + static void marvell_nfc_enable_dma(struct marvell_nfc *nfc) 728 + { 729 + u32 reg; 730 + 731 + reg = readl_relaxed(nfc->regs + NDCR); 732 + writel_relaxed(reg | NDCR_DMA_EN, nfc->regs + NDCR); 733 + } 734 + 735 + static void marvell_nfc_disable_dma(struct marvell_nfc *nfc) 736 + { 737 + u32 reg; 738 + 739 + reg = readl_relaxed(nfc->regs + NDCR); 740 + writel_relaxed(reg & ~NDCR_DMA_EN, nfc->regs + NDCR); 741 + } 742 + 743 + /* Read/write PIO/DMA accessors */ 744 + static int marvell_nfc_xfer_data_dma(struct marvell_nfc *nfc, 745 + enum dma_data_direction direction, 746 + unsigned int len) 747 + { 748 + unsigned int dma_len = min_t(int, 
ALIGN(len, 32), MAX_CHUNK_SIZE); 749 + struct dma_async_tx_descriptor *tx; 750 + struct scatterlist sg; 751 + dma_cookie_t cookie; 752 + int ret; 753 + 754 + marvell_nfc_enable_dma(nfc); 755 + /* Prepare the DMA transfer */ 756 + sg_init_one(&sg, nfc->dma_buf, dma_len); 757 + dma_map_sg(nfc->dma_chan->device->dev, &sg, 1, direction); 758 + tx = dmaengine_prep_slave_sg(nfc->dma_chan, &sg, 1, 759 + direction == DMA_FROM_DEVICE ? 760 + DMA_DEV_TO_MEM : DMA_MEM_TO_DEV, 761 + DMA_PREP_INTERRUPT); 762 + if (!tx) { 763 + dev_err(nfc->dev, "Could not prepare DMA S/G list\n"); 764 + return -ENXIO; 765 + } 766 + 767 + /* Do the task and wait for it to finish */ 768 + cookie = dmaengine_submit(tx); 769 + ret = dma_submit_error(cookie); 770 + if (ret) 771 + return -EIO; 772 + 773 + dma_async_issue_pending(nfc->dma_chan); 774 + ret = marvell_nfc_wait_cmdd(nfc->selected_chip); 775 + dma_unmap_sg(nfc->dma_chan->device->dev, &sg, 1, direction); 776 + marvell_nfc_disable_dma(nfc); 777 + if (ret) { 778 + dev_err(nfc->dev, "Timeout waiting for DMA (status: %d)\n", 779 + dmaengine_tx_status(nfc->dma_chan, cookie, NULL)); 780 + dmaengine_terminate_all(nfc->dma_chan); 781 + return -ETIMEDOUT; 782 + } 783 + 784 + return 0; 785 + } 786 + 787 + static int marvell_nfc_xfer_data_in_pio(struct marvell_nfc *nfc, u8 *in, 788 + unsigned int len) 789 + { 790 + unsigned int last_len = len % FIFO_DEPTH; 791 + unsigned int last_full_offset = round_down(len, FIFO_DEPTH); 792 + int i; 793 + 794 + for (i = 0; i < last_full_offset; i += FIFO_DEPTH) 795 + ioread32_rep(nfc->regs + NDDB, in + i, FIFO_REP(FIFO_DEPTH)); 796 + 797 + if (last_len) { 798 + u8 tmp_buf[FIFO_DEPTH]; 799 + 800 + ioread32_rep(nfc->regs + NDDB, tmp_buf, FIFO_REP(FIFO_DEPTH)); 801 + memcpy(in + last_full_offset, tmp_buf, last_len); 802 + } 803 + 804 + return 0; 805 + } 806 + 807 + static int marvell_nfc_xfer_data_out_pio(struct marvell_nfc *nfc, const u8 *out, 808 + unsigned int len) 809 + { 810 + unsigned int last_len = len % 
FIFO_DEPTH; 811 + unsigned int last_full_offset = round_down(len, FIFO_DEPTH); 812 + int i; 813 + 814 + for (i = 0; i < last_full_offset; i += FIFO_DEPTH) 815 + iowrite32_rep(nfc->regs + NDDB, out + i, FIFO_REP(FIFO_DEPTH)); 816 + 817 + if (last_len) { 818 + u8 tmp_buf[FIFO_DEPTH]; 819 + 820 + memcpy(tmp_buf, out + last_full_offset, last_len); 821 + iowrite32_rep(nfc->regs + NDDB, tmp_buf, FIFO_REP(FIFO_DEPTH)); 822 + } 823 + 824 + return 0; 825 + } 826 + 827 + static void marvell_nfc_check_empty_chunk(struct nand_chip *chip, 828 + u8 *data, int data_len, 829 + u8 *spare, int spare_len, 830 + u8 *ecc, int ecc_len, 831 + unsigned int *max_bitflips) 832 + { 833 + struct mtd_info *mtd = nand_to_mtd(chip); 834 + int bf; 835 + 836 + /* 837 + * Blank pages (all 0xFF) that have not been written may be recognized 838 + * as bad if bitflips occur, so whenever an uncorrectable error occurs, 839 + * check if the entire page (with ECC bytes) is actually blank or not. 840 + */ 841 + if (!data) 842 + data_len = 0; 843 + if (!spare) 844 + spare_len = 0; 845 + if (!ecc) 846 + ecc_len = 0; 847 + 848 + bf = nand_check_erased_ecc_chunk(data, data_len, ecc, ecc_len, 849 + spare, spare_len, chip->ecc.strength); 850 + if (bf < 0) { 851 + mtd->ecc_stats.failed++; 852 + return; 853 + } 854 + 855 + /* Update the stats and max_bitflips */ 856 + mtd->ecc_stats.corrected += bf; 857 + *max_bitflips = max_t(unsigned int, *max_bitflips, bf); 858 + } 859 + 860 + /* 861 + * Check whether a chunk is correct or not according to the hardware ECC engine. 862 + * mtd->ecc_stats.corrected is updated, as well as max_bitflips; however, 863 + * mtd->ecc_stats.failed is not: the function will instead return a non-zero 864 + * value indicating that a check on the emptiness of the subpage must be 865 + * performed before declaring the subpage corrupted. 
866 + */ 867 + static int marvell_nfc_hw_ecc_correct(struct nand_chip *chip, 868 + unsigned int *max_bitflips) 869 + { 870 + struct mtd_info *mtd = nand_to_mtd(chip); 871 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 872 + int bf = 0; 873 + u32 ndsr; 874 + 875 + ndsr = readl_relaxed(nfc->regs + NDSR); 876 + 877 + /* Check uncorrectable error flag */ 878 + if (ndsr & NDSR_UNCERR) { 879 + writel_relaxed(ndsr, nfc->regs + NDSR); 880 + 881 + /* 882 + * Do not increment ->ecc_stats.failed now, instead, return a 883 + * non-zero value to indicate that this chunk was apparently 884 + * bad, and it should be checked to see if it is empty or not. If 885 + * the chunk (with ECC bytes) is not declared empty, the calling 886 + * function must increment the failure count. 887 + */ 888 + return -EBADMSG; 889 + } 890 + 891 + /* Check correctable error flag */ 892 + if (ndsr & NDSR_CORERR) { 893 + writel_relaxed(ndsr, nfc->regs + NDSR); 894 + 895 + if (chip->ecc.algo == NAND_ECC_BCH) 896 + bf = NDSR_ERRCNT(ndsr); 897 + else 898 + bf = 1; 899 + } 900 + 901 + /* Update the stats and max_bitflips */ 902 + mtd->ecc_stats.corrected += bf; 903 + *max_bitflips = max_t(unsigned int, *max_bitflips, bf); 904 + 905 + return 0; 906 + } 907 + 908 + /* Hamming read helpers */ 909 + static int marvell_nfc_hw_ecc_hmg_do_read_page(struct nand_chip *chip, 910 + u8 *data_buf, u8 *oob_buf, 911 + bool raw, int page) 912 + { 913 + struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 914 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 915 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 916 + struct marvell_nfc_op nfc_op = { 917 + .ndcb[0] = NDCB0_CMD_TYPE(TYPE_READ) | 918 + NDCB0_ADDR_CYC(marvell_nand->addr_cyc) | 919 + NDCB0_DBC | 920 + NDCB0_CMD1(NAND_CMD_READ0) | 921 + NDCB0_CMD2(NAND_CMD_READSTART), 922 + .ndcb[1] = NDCB1_ADDRS_PAGE(page), 923 + .ndcb[2] = NDCB2_ADDR5_PAGE(page), 924 + }; 925 + unsigned int oob_bytes = lt->spare_bytes + 
(raw ? lt->ecc_bytes : 0); 926 + int ret; 927 + 928 + /* NFCv2 needs more information about the operation being executed */ 929 + if (nfc->caps->is_nfcv2) 930 + nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW); 931 + 932 + ret = marvell_nfc_prepare_cmd(chip); 933 + if (ret) 934 + return ret; 935 + 936 + marvell_nfc_send_cmd(chip, &nfc_op); 937 + ret = marvell_nfc_end_cmd(chip, NDSR_RDDREQ, 938 + "RDDREQ while draining FIFO (data/oob)"); 939 + if (ret) 940 + return ret; 941 + 942 + /* 943 + * Read the page then the OOB area. Unlike what is shown in current 944 + * documentation, spare bytes are protected by the ECC engine, and must 945 + * be at the beginning of the OOB area or running this driver on legacy 946 + * systems will prevent the discovery of the BBM/BBT. 947 + */ 948 + if (nfc->use_dma) { 949 + marvell_nfc_xfer_data_dma(nfc, DMA_FROM_DEVICE, 950 + lt->data_bytes + oob_bytes); 951 + memcpy(data_buf, nfc->dma_buf, lt->data_bytes); 952 + memcpy(oob_buf, nfc->dma_buf + lt->data_bytes, oob_bytes); 953 + } else { 954 + marvell_nfc_xfer_data_in_pio(nfc, data_buf, lt->data_bytes); 955 + marvell_nfc_xfer_data_in_pio(nfc, oob_buf, oob_bytes); 956 + } 957 + 958 + ret = marvell_nfc_wait_cmdd(chip); 959 + 960 + return ret; 961 + } 962 + 963 + static int marvell_nfc_hw_ecc_hmg_read_page_raw(struct mtd_info *mtd, 964 + struct nand_chip *chip, u8 *buf, 965 + int oob_required, int page) 966 + { 967 + return marvell_nfc_hw_ecc_hmg_do_read_page(chip, buf, chip->oob_poi, 968 + true, page); 969 + } 970 + 971 + static int marvell_nfc_hw_ecc_hmg_read_page(struct mtd_info *mtd, 972 + struct nand_chip *chip, 973 + u8 *buf, int oob_required, 974 + int page) 975 + { 976 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 977 + unsigned int full_sz = lt->data_bytes + lt->spare_bytes + lt->ecc_bytes; 978 + int max_bitflips = 0, ret; 979 + u8 *raw_buf; 980 + 981 + marvell_nfc_enable_hw_ecc(chip); 982 + marvell_nfc_hw_ecc_hmg_do_read_page(chip, buf, 
chip->oob_poi, false, 983 + page); 984 + ret = marvell_nfc_hw_ecc_correct(chip, &max_bitflips); 985 + marvell_nfc_disable_hw_ecc(chip); 986 + 987 + if (!ret) 988 + return max_bitflips; 989 + 990 + /* 991 + * When ECC failures are detected, check if the full page has been 992 + * written or not. Ignore the failure if it is actually empty. 993 + */ 994 + raw_buf = kmalloc(full_sz, GFP_KERNEL); 995 + if (!raw_buf) 996 + return -ENOMEM; 997 + 998 + marvell_nfc_hw_ecc_hmg_do_read_page(chip, raw_buf, raw_buf + 999 + lt->data_bytes, true, page); 1000 + marvell_nfc_check_empty_chunk(chip, raw_buf, full_sz, NULL, 0, NULL, 0, 1001 + &max_bitflips); 1002 + kfree(raw_buf); 1003 + 1004 + return max_bitflips; 1005 + } 1006 + 1007 + /* 1008 + * Spare area in Hamming layouts is not protected by the ECC engine (even if 1009 + * it appears before the ECC bytes when reading), the ->read_oob_raw() function 1010 + * also stands for ->read_oob(). 1011 + */ 1012 + static int marvell_nfc_hw_ecc_hmg_read_oob_raw(struct mtd_info *mtd, 1013 + struct nand_chip *chip, int page) 1014 + { 1015 + /* Invalidate page cache */ 1016 + chip->pagebuf = -1; 1017 + 1018 + return marvell_nfc_hw_ecc_hmg_do_read_page(chip, chip->data_buf, 1019 + chip->oob_poi, true, page); 1020 + } 1021 + 1022 + /* Hamming write helpers */ 1023 + static int marvell_nfc_hw_ecc_hmg_do_write_page(struct nand_chip *chip, 1024 + const u8 *data_buf, 1025 + const u8 *oob_buf, bool raw, 1026 + int page) 1027 + { 1028 + struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 1029 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1030 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 1031 + struct marvell_nfc_op nfc_op = { 1032 + .ndcb[0] = NDCB0_CMD_TYPE(TYPE_WRITE) | 1033 + NDCB0_ADDR_CYC(marvell_nand->addr_cyc) | 1034 + NDCB0_CMD1(NAND_CMD_SEQIN) | 1035 + NDCB0_CMD2(NAND_CMD_PAGEPROG) | 1036 + NDCB0_DBC, 1037 + .ndcb[1] = NDCB1_ADDRS_PAGE(page), 1038 + .ndcb[2] = NDCB2_ADDR5_PAGE(page), 
1039 + }; 1040 + unsigned int oob_bytes = lt->spare_bytes + (raw ? lt->ecc_bytes : 0); 1041 + int ret; 1042 + 1043 + /* NFCv2 needs more information about the operation being executed */ 1044 + if (nfc->caps->is_nfcv2) 1045 + nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW); 1046 + 1047 + ret = marvell_nfc_prepare_cmd(chip); 1048 + if (ret) 1049 + return ret; 1050 + 1051 + marvell_nfc_send_cmd(chip, &nfc_op); 1052 + ret = marvell_nfc_end_cmd(chip, NDSR_WRDREQ, 1053 + "WRDREQ while loading FIFO (data)"); 1054 + if (ret) 1055 + return ret; 1056 + 1057 + /* Write the page then the OOB area */ 1058 + if (nfc->use_dma) { 1059 + memcpy(nfc->dma_buf, data_buf, lt->data_bytes); 1060 + memcpy(nfc->dma_buf + lt->data_bytes, oob_buf, oob_bytes); 1061 + marvell_nfc_xfer_data_dma(nfc, DMA_TO_DEVICE, lt->data_bytes + 1062 + lt->ecc_bytes + lt->spare_bytes); 1063 + } else { 1064 + marvell_nfc_xfer_data_out_pio(nfc, data_buf, lt->data_bytes); 1065 + marvell_nfc_xfer_data_out_pio(nfc, oob_buf, oob_bytes); 1066 + } 1067 + 1068 + ret = marvell_nfc_wait_cmdd(chip); 1069 + if (ret) 1070 + return ret; 1071 + 1072 + ret = marvell_nfc_wait_op(chip, 1073 + chip->data_interface.timings.sdr.tPROG_max); 1074 + return ret; 1075 + } 1076 + 1077 + static int marvell_nfc_hw_ecc_hmg_write_page_raw(struct mtd_info *mtd, 1078 + struct nand_chip *chip, 1079 + const u8 *buf, 1080 + int oob_required, int page) 1081 + { 1082 + return marvell_nfc_hw_ecc_hmg_do_write_page(chip, buf, chip->oob_poi, 1083 + true, page); 1084 + } 1085 + 1086 + static int marvell_nfc_hw_ecc_hmg_write_page(struct mtd_info *mtd, 1087 + struct nand_chip *chip, 1088 + const u8 *buf, 1089 + int oob_required, int page) 1090 + { 1091 + int ret; 1092 + 1093 + marvell_nfc_enable_hw_ecc(chip); 1094 + ret = marvell_nfc_hw_ecc_hmg_do_write_page(chip, buf, chip->oob_poi, 1095 + false, page); 1096 + marvell_nfc_disable_hw_ecc(chip); 1097 + 1098 + return ret; 1099 + } 1100 + 1101 + /* 1102 + * Spare area in Hamming layouts is not 
protected by the ECC engine (even if 1103 + * it appears before the ECC bytes when reading), the ->write_oob_raw() function 1104 + * also stands for ->write_oob(). 1105 + */ 1106 + static int marvell_nfc_hw_ecc_hmg_write_oob_raw(struct mtd_info *mtd, 1107 + struct nand_chip *chip, 1108 + int page) 1109 + { 1110 + /* Invalidate page cache */ 1111 + chip->pagebuf = -1; 1112 + 1113 + memset(chip->data_buf, 0xFF, mtd->writesize); 1114 + 1115 + return marvell_nfc_hw_ecc_hmg_do_write_page(chip, chip->data_buf, 1116 + chip->oob_poi, true, page); 1117 + } 1118 + 1119 + /* BCH read helpers */ 1120 + static int marvell_nfc_hw_ecc_bch_read_page_raw(struct mtd_info *mtd, 1121 + struct nand_chip *chip, u8 *buf, 1122 + int oob_required, int page) 1123 + { 1124 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 1125 + u8 *oob = chip->oob_poi; 1126 + int chunk_size = lt->data_bytes + lt->spare_bytes + lt->ecc_bytes; 1127 + int ecc_offset = (lt->full_chunk_cnt * lt->spare_bytes) + 1128 + lt->last_spare_bytes; 1129 + int data_len = lt->data_bytes; 1130 + int spare_len = lt->spare_bytes; 1131 + int ecc_len = lt->ecc_bytes; 1132 + int chunk; 1133 + 1134 + if (oob_required) 1135 + memset(chip->oob_poi, 0xFF, mtd->oobsize); 1136 + 1137 + nand_read_page_op(chip, page, 0, NULL, 0); 1138 + 1139 + for (chunk = 0; chunk < lt->nchunks; chunk++) { 1140 + /* Update last chunk length */ 1141 + if (chunk >= lt->full_chunk_cnt) { 1142 + data_len = lt->last_data_bytes; 1143 + spare_len = lt->last_spare_bytes; 1144 + ecc_len = lt->last_ecc_bytes; 1145 + } 1146 + 1147 + /* Read data bytes*/ 1148 + nand_change_read_column_op(chip, chunk * chunk_size, 1149 + buf + (lt->data_bytes * chunk), 1150 + data_len, false); 1151 + 1152 + /* Read spare bytes */ 1153 + nand_read_data_op(chip, oob + (lt->spare_bytes * chunk), 1154 + spare_len, false); 1155 + 1156 + /* Read ECC bytes */ 1157 + nand_read_data_op(chip, oob + ecc_offset + 1158 + (ALIGN(lt->ecc_bytes, 32) * chunk), 1159 + ecc_len, 
false); 1160 + } 1161 + 1162 + return 0; 1163 + } 1164 + 1165 + static void marvell_nfc_hw_ecc_bch_read_chunk(struct nand_chip *chip, int chunk, 1166 + u8 *data, unsigned int data_len, 1167 + u8 *spare, unsigned int spare_len, 1168 + int page) 1169 + { 1170 + struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 1171 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1172 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 1173 + int i, ret; 1174 + struct marvell_nfc_op nfc_op = { 1175 + .ndcb[0] = NDCB0_CMD_TYPE(TYPE_READ) | 1176 + NDCB0_ADDR_CYC(marvell_nand->addr_cyc) | 1177 + NDCB0_LEN_OVRD, 1178 + .ndcb[1] = NDCB1_ADDRS_PAGE(page), 1179 + .ndcb[2] = NDCB2_ADDR5_PAGE(page), 1180 + .ndcb[3] = data_len + spare_len, 1181 + }; 1182 + 1183 + ret = marvell_nfc_prepare_cmd(chip); 1184 + if (ret) 1185 + return; 1186 + 1187 + if (chunk == 0) 1188 + nfc_op.ndcb[0] |= NDCB0_DBC | 1189 + NDCB0_CMD1(NAND_CMD_READ0) | 1190 + NDCB0_CMD2(NAND_CMD_READSTART); 1191 + 1192 + /* 1193 + * Trigger the naked read operation only on the last chunk. 1194 + * Otherwise, use monolithic read. 1195 + */ 1196 + if (lt->nchunks == 1 || (chunk < lt->nchunks - 1)) 1197 + nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW); 1198 + else 1199 + nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_LAST_NAKED_RW); 1200 + 1201 + marvell_nfc_send_cmd(chip, &nfc_op); 1202 + 1203 + /* 1204 + * According to the datasheet, when reading from NDDB 1205 + * with BCH enabled, after every 32-byte read, we 1206 + * have to make sure that the NDSR.RDDREQ bit is set. 1207 + * 1208 + * Drain the FIFO, 8 32-bit reads at a time, and skip 1209 + * the polling on the last read. 1210 + * 1211 + * Length is a multiple of 32 bytes, hence it is a multiple of 8 too. 
1212 + */ 1213 + for (i = 0; i < data_len; i += FIFO_DEPTH * BCH_SEQ_READS) { 1214 + marvell_nfc_end_cmd(chip, NDSR_RDDREQ, 1215 + "RDDREQ while draining FIFO (data)"); 1216 + marvell_nfc_xfer_data_in_pio(nfc, data, 1217 + FIFO_DEPTH * BCH_SEQ_READS); 1218 + data += FIFO_DEPTH * BCH_SEQ_READS; 1219 + } 1220 + 1221 + for (i = 0; i < spare_len; i += FIFO_DEPTH * BCH_SEQ_READS) { 1222 + marvell_nfc_end_cmd(chip, NDSR_RDDREQ, 1223 + "RDDREQ while draining FIFO (OOB)"); 1224 + marvell_nfc_xfer_data_in_pio(nfc, spare, 1225 + FIFO_DEPTH * BCH_SEQ_READS); 1226 + spare += FIFO_DEPTH * BCH_SEQ_READS; 1227 + } 1228 + } 1229 + 1230 + static int marvell_nfc_hw_ecc_bch_read_page(struct mtd_info *mtd, 1231 + struct nand_chip *chip, 1232 + u8 *buf, int oob_required, 1233 + int page) 1234 + { 1235 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 1236 + int data_len = lt->data_bytes, spare_len = lt->spare_bytes, ecc_len; 1237 + u8 *data = buf, *spare = chip->oob_poi, *ecc; 1238 + int max_bitflips = 0; 1239 + u32 failure_mask = 0; 1240 + int chunk, ecc_offset_in_page, ret; 1241 + 1242 + /* 1243 + * With BCH, OOB is not fully used (and thus not read entirely); 1244 + * unexpected bytes could show up at the end of the OOB buffer if not 1245 + * explicitly erased. 
1246 + */ 1247 + if (oob_required) 1248 + memset(chip->oob_poi, 0xFF, mtd->oobsize); 1249 + 1250 + marvell_nfc_enable_hw_ecc(chip); 1251 + 1252 + for (chunk = 0; chunk < lt->nchunks; chunk++) { 1253 + /* Update length for the last chunk */ 1254 + if (chunk >= lt->full_chunk_cnt) { 1255 + data_len = lt->last_data_bytes; 1256 + spare_len = lt->last_spare_bytes; 1257 + } 1258 + 1259 + /* Read the chunk and detect number of bitflips */ 1260 + marvell_nfc_hw_ecc_bch_read_chunk(chip, chunk, data, data_len, 1261 + spare, spare_len, page); 1262 + ret = marvell_nfc_hw_ecc_correct(chip, &max_bitflips); 1263 + if (ret) 1264 + failure_mask |= BIT(chunk); 1265 + 1266 + data += data_len; 1267 + spare += spare_len; 1268 + } 1269 + 1270 + marvell_nfc_disable_hw_ecc(chip); 1271 + 1272 + if (!failure_mask) 1273 + return max_bitflips; 1274 + 1275 + /* 1276 + * Please note that dumping the ECC bytes during a normal read with OOB 1277 + * area would add a significant overhead as ECC bytes are "consumed" by 1278 + * the controller in normal mode and must be re-read in raw mode. To 1279 + * avoid hurting performance, we prefer not to include them. The 1280 + * user should re-read the page in raw mode if ECC bytes are required. 1281 + * 1282 + * However, for any subpage read error reported by ->correct(), the ECC 1283 + * bytes must be read in raw mode and the full subpage must be checked 1284 + * to see if it is entirely empty or if there was an actual error. 
1285 + */ 1286 + for (chunk = 0; chunk < lt->nchunks; chunk++) { 1287 + /* No failure reported for this chunk, move to the next one */ 1288 + if (!(failure_mask & BIT(chunk))) 1289 + continue; 1290 + 1291 + /* Derive ECC bytes positions (in page/buffer) and length */ 1292 + ecc = chip->oob_poi + 1293 + (lt->full_chunk_cnt * lt->spare_bytes) + 1294 + lt->last_spare_bytes + 1295 + (chunk * ALIGN(lt->ecc_bytes, 32)); 1296 + ecc_offset_in_page = 1297 + (chunk * (lt->data_bytes + lt->spare_bytes + 1298 + lt->ecc_bytes)) + 1299 + (chunk < lt->full_chunk_cnt ? 1300 + lt->data_bytes + lt->spare_bytes : 1301 + lt->last_data_bytes + lt->last_spare_bytes); 1302 + ecc_len = chunk < lt->full_chunk_cnt ? 1303 + lt->ecc_bytes : lt->last_ecc_bytes; 1304 + 1305 + /* Do the actual raw read of the ECC bytes */ 1306 + nand_change_read_column_op(chip, ecc_offset_in_page, 1307 + ecc, ecc_len, false); 1308 + 1309 + /* Derive data/spare bytes positions (in buffer) and length */ 1310 + data = buf + (chunk * lt->data_bytes); 1311 + data_len = chunk < lt->full_chunk_cnt ? 1312 + lt->data_bytes : lt->last_data_bytes; 1313 + spare = chip->oob_poi + (chunk * (lt->spare_bytes + 1314 + lt->ecc_bytes)); 1315 + spare_len = chunk < lt->full_chunk_cnt ? 
lt->spare_bytes : lt->last_spare_bytes; 1317 + 1318 + /* Check the entire chunk (data + spare + ecc) for emptiness */ 1319 + marvell_nfc_check_empty_chunk(chip, data, data_len, spare, 1320 + spare_len, ecc, ecc_len, 1321 + &max_bitflips); 1322 + } 1323 + 1324 + return max_bitflips; 1325 + } 1326 + 1327 + static int marvell_nfc_hw_ecc_bch_read_oob_raw(struct mtd_info *mtd, 1328 + struct nand_chip *chip, int page) 1329 + { 1330 + /* Invalidate page cache */ 1331 + chip->pagebuf = -1; 1332 + 1333 + return chip->ecc.read_page_raw(mtd, chip, chip->data_buf, true, page); 1334 + } 1335 + 1336 + static int marvell_nfc_hw_ecc_bch_read_oob(struct mtd_info *mtd, 1337 + struct nand_chip *chip, int page) 1338 + { 1339 + /* Invalidate page cache */ 1340 + chip->pagebuf = -1; 1341 + 1342 + return chip->ecc.read_page(mtd, chip, chip->data_buf, true, page); 1343 + } 1344 + 1345 + /* BCH write helpers */ 1346 + static int marvell_nfc_hw_ecc_bch_write_page_raw(struct mtd_info *mtd, 1347 + struct nand_chip *chip, 1348 + const u8 *buf, 1349 + int oob_required, int page) 1350 + { 1351 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 1352 + int full_chunk_size = lt->data_bytes + lt->spare_bytes + lt->ecc_bytes; 1353 + int data_len = lt->data_bytes; 1354 + int spare_len = lt->spare_bytes; 1355 + int ecc_len = lt->ecc_bytes; 1356 + int spare_offset = 0; 1357 + int ecc_offset = (lt->full_chunk_cnt * lt->spare_bytes) + 1358 + lt->last_spare_bytes; 1359 + int chunk; 1360 + 1361 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 1362 + 1363 + for (chunk = 0; chunk < lt->nchunks; chunk++) { 1364 + if (chunk >= lt->full_chunk_cnt) { 1365 + data_len = lt->last_data_bytes; 1366 + spare_len = lt->last_spare_bytes; 1367 + ecc_len = lt->last_ecc_bytes; 1368 + } 1369 + 1370 + /* Point to the column of the next chunk */ 1371 + nand_change_write_column_op(chip, chunk * full_chunk_size, 1372 + NULL, 0, false); 1373 + 1374 + /* Write the data */ 1375 + 
nand_write_data_op(chip, buf + (chunk * lt->data_bytes), 1376 + data_len, false); 1377 + 1378 + if (!oob_required) 1379 + continue; 1380 + 1381 + /* Write the spare bytes */ 1382 + if (spare_len) 1383 + nand_write_data_op(chip, chip->oob_poi + spare_offset, 1384 + spare_len, false); 1385 + 1386 + /* Write the ECC bytes */ 1387 + if (ecc_len) 1388 + nand_write_data_op(chip, chip->oob_poi + ecc_offset, 1389 + ecc_len, false); 1390 + 1391 + spare_offset += spare_len; 1392 + ecc_offset += ALIGN(ecc_len, 32); 1393 + } 1394 + 1395 + return nand_prog_page_end_op(chip); 1396 + } 1397 + 1398 + static int 1399 + marvell_nfc_hw_ecc_bch_write_chunk(struct nand_chip *chip, int chunk, 1400 + const u8 *data, unsigned int data_len, 1401 + const u8 *spare, unsigned int spare_len, 1402 + int page) 1403 + { 1404 + struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 1405 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1406 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 1407 + int ret; 1408 + struct marvell_nfc_op nfc_op = { 1409 + .ndcb[0] = NDCB0_CMD_TYPE(TYPE_WRITE) | NDCB0_LEN_OVRD, 1410 + .ndcb[3] = data_len + spare_len, 1411 + }; 1412 + 1413 + /* 1414 + * First operation dispatches the CMD_SEQIN command, issues the address 1415 + * cycles, and asks for the first chunk of data. 1416 + * All operations in the middle (if any) will issue a naked write and 1417 + * also ask for data. 1418 + * Last operation (if any) asks for the last chunk of data through a 1419 + * last naked write. 
1420 + */ 1421 + if (chunk == 0) { 1422 + nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_WRITE_DISPATCH) | 1423 + NDCB0_ADDR_CYC(marvell_nand->addr_cyc) | 1424 + NDCB0_CMD1(NAND_CMD_SEQIN); 1425 + nfc_op.ndcb[1] |= NDCB1_ADDRS_PAGE(page); 1426 + nfc_op.ndcb[2] |= NDCB2_ADDR5_PAGE(page); 1427 + } else if (chunk < lt->nchunks - 1) { 1428 + nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_NAKED_RW); 1429 + } else { 1430 + nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_LAST_NAKED_RW); 1431 + } 1432 + 1433 + /* Always dispatch the PAGEPROG command on the last chunk */ 1434 + if (chunk == lt->nchunks - 1) 1435 + nfc_op.ndcb[0] |= NDCB0_CMD2(NAND_CMD_PAGEPROG) | NDCB0_DBC; 1436 + 1437 + ret = marvell_nfc_prepare_cmd(chip); 1438 + if (ret) 1439 + return ret; 1440 + 1441 + marvell_nfc_send_cmd(chip, &nfc_op); 1442 + ret = marvell_nfc_end_cmd(chip, NDSR_WRDREQ, 1443 + "WRDREQ while loading FIFO (data)"); 1444 + if (ret) 1445 + return ret; 1446 + 1447 + /* Transfer the contents */ 1448 + iowrite32_rep(nfc->regs + NDDB, data, FIFO_REP(data_len)); 1449 + iowrite32_rep(nfc->regs + NDDB, spare, FIFO_REP(spare_len)); 1450 + 1451 + return 0; 1452 + } 1453 + 1454 + static int marvell_nfc_hw_ecc_bch_write_page(struct mtd_info *mtd, 1455 + struct nand_chip *chip, 1456 + const u8 *buf, 1457 + int oob_required, int page) 1458 + { 1459 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 1460 + const u8 *data = buf; 1461 + const u8 *spare = chip->oob_poi; 1462 + int data_len = lt->data_bytes; 1463 + int spare_len = lt->spare_bytes; 1464 + int chunk, ret; 1465 + 1466 + /* Spare data will be written anyway, so clear it to avoid garbage */ 1467 + if (!oob_required) 1468 + memset(chip->oob_poi, 0xFF, mtd->oobsize); 1469 + 1470 + marvell_nfc_enable_hw_ecc(chip); 1471 + 1472 + for (chunk = 0; chunk < lt->nchunks; chunk++) { 1473 + if (chunk >= lt->full_chunk_cnt) { 1474 + data_len = lt->last_data_bytes; 1475 + spare_len = lt->last_spare_bytes; 1476 + } 1477 + 1478 + 
marvell_nfc_hw_ecc_bch_write_chunk(chip, chunk, data, data_len, 1479 + spare, spare_len, page); 1480 + data += data_len; 1481 + spare += spare_len; 1482 + 1483 + /* 1484 + * Waiting only for CMDD or PAGED is not enough, the ECC 1485 + * bytes are only partially written. No flag is set once the operation is 1486 + * really finished but the ND_RUN bit is cleared, so wait for it 1487 + * before stepping into the next command. 1488 + */ 1489 + marvell_nfc_wait_ndrun(chip); 1490 + } 1491 + 1492 + ret = marvell_nfc_wait_op(chip, 1493 + chip->data_interface.timings.sdr.tPROG_max); 1494 + 1495 + marvell_nfc_disable_hw_ecc(chip); 1496 + 1497 + if (ret) 1498 + return ret; 1499 + 1500 + return 0; 1501 + } 1502 + 1503 + static int marvell_nfc_hw_ecc_bch_write_oob_raw(struct mtd_info *mtd, 1504 + struct nand_chip *chip, 1505 + int page) 1506 + { 1507 + /* Invalidate page cache */ 1508 + chip->pagebuf = -1; 1509 + 1510 + memset(chip->data_buf, 0xFF, mtd->writesize); 1511 + 1512 + return chip->ecc.write_page_raw(mtd, chip, chip->data_buf, true, page); 1513 + } 1514 + 1515 + static int marvell_nfc_hw_ecc_bch_write_oob(struct mtd_info *mtd, 1516 + struct nand_chip *chip, int page) 1517 + { 1518 + /* Invalidate page cache */ 1519 + chip->pagebuf = -1; 1520 + 1521 + memset(chip->data_buf, 0xFF, mtd->writesize); 1522 + 1523 + return chip->ecc.write_page(mtd, chip, chip->data_buf, true, page); 1524 + } 1525 + 1526 + /* NAND framework ->exec_op() hooks and related helpers */ 1527 + static void marvell_nfc_parse_instructions(struct nand_chip *chip, 1528 + const struct nand_subop *subop, 1529 + struct marvell_nfc_op *nfc_op) 1530 + { 1531 + const struct nand_op_instr *instr = NULL; 1532 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1533 + bool first_cmd = true; 1534 + unsigned int op_id; 1535 + int i; 1536 + 1537 + /* Reset the input structure as most of its fields will be OR'ed */ 1538 + memset(nfc_op, 0, sizeof(struct marvell_nfc_op)); 1539 + 1540 + for (op_id = 0; op_id < 
subop->ninstrs; op_id++) { 1541 + unsigned int offset, naddrs; 1542 + const u8 *addrs; 1543 + int len = nand_subop_get_data_len(subop, op_id); 1544 + 1545 + instr = &subop->instrs[op_id]; 1546 + 1547 + switch (instr->type) { 1548 + case NAND_OP_CMD_INSTR: 1549 + if (first_cmd) 1550 + nfc_op->ndcb[0] |= 1551 + NDCB0_CMD1(instr->ctx.cmd.opcode); 1552 + else 1553 + nfc_op->ndcb[0] |= 1554 + NDCB0_CMD2(instr->ctx.cmd.opcode) | 1555 + NDCB0_DBC; 1556 + 1557 + nfc_op->cle_ale_delay_ns = instr->delay_ns; 1558 + first_cmd = false; 1559 + break; 1560 + 1561 + case NAND_OP_ADDR_INSTR: 1562 + offset = nand_subop_get_addr_start_off(subop, op_id); 1563 + naddrs = nand_subop_get_num_addr_cyc(subop, op_id); 1564 + addrs = &instr->ctx.addr.addrs[offset]; 1565 + 1566 + nfc_op->ndcb[0] |= NDCB0_ADDR_CYC(naddrs); 1567 + 1568 + for (i = 0; i < min_t(unsigned int, 4, naddrs); i++) 1569 + nfc_op->ndcb[1] |= addrs[i] << (8 * i); 1570 + 1571 + if (naddrs >= 5) 1572 + nfc_op->ndcb[2] |= NDCB2_ADDR5_CYC(addrs[4]); 1573 + if (naddrs >= 6) 1574 + nfc_op->ndcb[3] |= NDCB3_ADDR6_CYC(addrs[5]); 1575 + if (naddrs == 7) 1576 + nfc_op->ndcb[3] |= NDCB3_ADDR7_CYC(addrs[6]); 1577 + 1578 + nfc_op->cle_ale_delay_ns = instr->delay_ns; 1579 + break; 1580 + 1581 + case NAND_OP_DATA_IN_INSTR: 1582 + nfc_op->data_instr = instr; 1583 + nfc_op->data_instr_idx = op_id; 1584 + nfc_op->ndcb[0] |= NDCB0_CMD_TYPE(TYPE_READ); 1585 + if (nfc->caps->is_nfcv2) { 1586 + nfc_op->ndcb[0] |= 1587 + NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW) | 1588 + NDCB0_LEN_OVRD; 1589 + nfc_op->ndcb[3] |= round_up(len, FIFO_DEPTH); 1590 + } 1591 + nfc_op->data_delay_ns = instr->delay_ns; 1592 + break; 1593 + 1594 + case NAND_OP_DATA_OUT_INSTR: 1595 + nfc_op->data_instr = instr; 1596 + nfc_op->data_instr_idx = op_id; 1597 + nfc_op->ndcb[0] |= NDCB0_CMD_TYPE(TYPE_WRITE); 1598 + if (nfc->caps->is_nfcv2) { 1599 + nfc_op->ndcb[0] |= 1600 + NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW) | 1601 + NDCB0_LEN_OVRD; 1602 + nfc_op->ndcb[3] |= round_up(len, 
FIFO_DEPTH); 1603 + } 1604 + nfc_op->data_delay_ns = instr->delay_ns; 1605 + break; 1606 + 1607 + case NAND_OP_WAITRDY_INSTR: 1608 + nfc_op->rdy_timeout_ms = instr->ctx.waitrdy.timeout_ms; 1609 + nfc_op->rdy_delay_ns = instr->delay_ns; 1610 + break; 1611 + } 1612 + } 1613 + } 1614 + 1615 + static int marvell_nfc_xfer_data_pio(struct nand_chip *chip, 1616 + const struct nand_subop *subop, 1617 + struct marvell_nfc_op *nfc_op) 1618 + { 1619 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1620 + const struct nand_op_instr *instr = nfc_op->data_instr; 1621 + unsigned int op_id = nfc_op->data_instr_idx; 1622 + unsigned int len = nand_subop_get_data_len(subop, op_id); 1623 + unsigned int offset = nand_subop_get_data_start_off(subop, op_id); 1624 + bool reading = (instr->type == NAND_OP_DATA_IN_INSTR); 1625 + int ret; 1626 + 1627 + if (instr->ctx.data.force_8bit) 1628 + marvell_nfc_force_byte_access(chip, true); 1629 + 1630 + if (reading) { 1631 + u8 *in = instr->ctx.data.buf.in + offset; 1632 + 1633 + ret = marvell_nfc_xfer_data_in_pio(nfc, in, len); 1634 + } else { 1635 + const u8 *out = instr->ctx.data.buf.out + offset; 1636 + 1637 + ret = marvell_nfc_xfer_data_out_pio(nfc, out, len); 1638 + } 1639 + 1640 + if (instr->ctx.data.force_8bit) 1641 + marvell_nfc_force_byte_access(chip, false); 1642 + 1643 + return ret; 1644 + } 1645 + 1646 + static int marvell_nfc_monolithic_access_exec(struct nand_chip *chip, 1647 + const struct nand_subop *subop) 1648 + { 1649 + struct marvell_nfc_op nfc_op; 1650 + bool reading; 1651 + int ret; 1652 + 1653 + marvell_nfc_parse_instructions(chip, subop, &nfc_op); 1654 + reading = (nfc_op.data_instr->type == NAND_OP_DATA_IN_INSTR); 1655 + 1656 + ret = marvell_nfc_prepare_cmd(chip); 1657 + if (ret) 1658 + return ret; 1659 + 1660 + marvell_nfc_send_cmd(chip, &nfc_op); 1661 + ret = marvell_nfc_end_cmd(chip, NDSR_RDDREQ | NDSR_WRDREQ, 1662 + "RDDREQ/WRDREQ while draining raw data"); 1663 + if (ret) 1664 + return ret; 1665 + 1666 + 
cond_delay(nfc_op.cle_ale_delay_ns); 1667 + 1668 + if (reading) { 1669 + if (nfc_op.rdy_timeout_ms) { 1670 + ret = marvell_nfc_wait_op(chip, nfc_op.rdy_timeout_ms); 1671 + if (ret) 1672 + return ret; 1673 + } 1674 + 1675 + cond_delay(nfc_op.rdy_delay_ns); 1676 + } 1677 + 1678 + marvell_nfc_xfer_data_pio(chip, subop, &nfc_op); 1679 + ret = marvell_nfc_wait_cmdd(chip); 1680 + if (ret) 1681 + return ret; 1682 + 1683 + cond_delay(nfc_op.data_delay_ns); 1684 + 1685 + if (!reading) { 1686 + if (nfc_op.rdy_timeout_ms) { 1687 + ret = marvell_nfc_wait_op(chip, nfc_op.rdy_timeout_ms); 1688 + if (ret) 1689 + return ret; 1690 + } 1691 + 1692 + cond_delay(nfc_op.rdy_delay_ns); 1693 + } 1694 + 1695 + /* 1696 + * NDCR ND_RUN bit should be cleared automatically at the end of each 1697 + * operation but experience shows that the behavior is buggy when it 1698 + * comes to writes (with LEN_OVRD). Clear it by hand in this case. 1699 + */ 1700 + if (!reading) { 1701 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1702 + 1703 + writel_relaxed(readl(nfc->regs + NDCR) & ~NDCR_ND_RUN, 1704 + nfc->regs + NDCR); 1705 + } 1706 + 1707 + return 0; 1708 + } 1709 + 1710 + static int marvell_nfc_naked_access_exec(struct nand_chip *chip, 1711 + const struct nand_subop *subop) 1712 + { 1713 + struct marvell_nfc_op nfc_op; 1714 + int ret; 1715 + 1716 + marvell_nfc_parse_instructions(chip, subop, &nfc_op); 1717 + 1718 + /* 1719 + * Naked access are different in that they need to be flagged as naked 1720 + * by the controller. Reset the controller registers fields that inform 1721 + * on the type and refill them according to the ongoing operation. 
1722 + */ 1723 + nfc_op.ndcb[0] &= ~(NDCB0_CMD_TYPE(TYPE_MASK) | 1724 + NDCB0_CMD_XTYPE(XTYPE_MASK)); 1725 + switch (subop->instrs[0].type) { 1726 + case NAND_OP_CMD_INSTR: 1727 + nfc_op.ndcb[0] |= NDCB0_CMD_TYPE(TYPE_NAKED_CMD); 1728 + break; 1729 + case NAND_OP_ADDR_INSTR: 1730 + nfc_op.ndcb[0] |= NDCB0_CMD_TYPE(TYPE_NAKED_ADDR); 1731 + break; 1732 + case NAND_OP_DATA_IN_INSTR: 1733 + nfc_op.ndcb[0] |= NDCB0_CMD_TYPE(TYPE_READ) | 1734 + NDCB0_CMD_XTYPE(XTYPE_LAST_NAKED_RW); 1735 + break; 1736 + case NAND_OP_DATA_OUT_INSTR: 1737 + nfc_op.ndcb[0] |= NDCB0_CMD_TYPE(TYPE_WRITE) | 1738 + NDCB0_CMD_XTYPE(XTYPE_LAST_NAKED_RW); 1739 + break; 1740 + default: 1741 + /* This should never happen */ 1742 + break; 1743 + } 1744 + 1745 + ret = marvell_nfc_prepare_cmd(chip); 1746 + if (ret) 1747 + return ret; 1748 + 1749 + marvell_nfc_send_cmd(chip, &nfc_op); 1750 + 1751 + if (!nfc_op.data_instr) { 1752 + ret = marvell_nfc_wait_cmdd(chip); 1753 + cond_delay(nfc_op.cle_ale_delay_ns); 1754 + return ret; 1755 + } 1756 + 1757 + ret = marvell_nfc_end_cmd(chip, NDSR_RDDREQ | NDSR_WRDREQ, 1758 + "RDDREQ/WRDREQ while draining raw data"); 1759 + if (ret) 1760 + return ret; 1761 + 1762 + marvell_nfc_xfer_data_pio(chip, subop, &nfc_op); 1763 + ret = marvell_nfc_wait_cmdd(chip); 1764 + if (ret) 1765 + return ret; 1766 + 1767 + /* 1768 + * NDCR ND_RUN bit should be cleared automatically at the end of each 1769 + * operation but experience shows that the behavior is buggy when it 1770 + * comes to writes (with LEN_OVRD). Clear it by hand in this case. 
1771 + */ 1772 + if (subop->instrs[0].type == NAND_OP_DATA_OUT_INSTR) { 1773 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1774 + 1775 + writel_relaxed(readl(nfc->regs + NDCR) & ~NDCR_ND_RUN, 1776 + nfc->regs + NDCR); 1777 + } 1778 + 1779 + return 0; 1780 + } 1781 + 1782 + static int marvell_nfc_naked_waitrdy_exec(struct nand_chip *chip, 1783 + const struct nand_subop *subop) 1784 + { 1785 + struct marvell_nfc_op nfc_op; 1786 + int ret; 1787 + 1788 + marvell_nfc_parse_instructions(chip, subop, &nfc_op); 1789 + 1790 + ret = marvell_nfc_wait_op(chip, nfc_op.rdy_timeout_ms); 1791 + cond_delay(nfc_op.rdy_delay_ns); 1792 + 1793 + return ret; 1794 + } 1795 + 1796 + static int marvell_nfc_read_id_type_exec(struct nand_chip *chip, 1797 + const struct nand_subop *subop) 1798 + { 1799 + struct marvell_nfc_op nfc_op; 1800 + int ret; 1801 + 1802 + marvell_nfc_parse_instructions(chip, subop, &nfc_op); 1803 + nfc_op.ndcb[0] &= ~NDCB0_CMD_TYPE(TYPE_READ); 1804 + nfc_op.ndcb[0] |= NDCB0_CMD_TYPE(TYPE_READ_ID); 1805 + 1806 + ret = marvell_nfc_prepare_cmd(chip); 1807 + if (ret) 1808 + return ret; 1809 + 1810 + marvell_nfc_send_cmd(chip, &nfc_op); 1811 + ret = marvell_nfc_end_cmd(chip, NDSR_RDDREQ, 1812 + "RDDREQ while reading ID"); 1813 + if (ret) 1814 + return ret; 1815 + 1816 + cond_delay(nfc_op.cle_ale_delay_ns); 1817 + 1818 + if (nfc_op.rdy_timeout_ms) { 1819 + ret = marvell_nfc_wait_op(chip, nfc_op.rdy_timeout_ms); 1820 + if (ret) 1821 + return ret; 1822 + } 1823 + 1824 + cond_delay(nfc_op.rdy_delay_ns); 1825 + 1826 + marvell_nfc_xfer_data_pio(chip, subop, &nfc_op); 1827 + ret = marvell_nfc_wait_cmdd(chip); 1828 + if (ret) 1829 + return ret; 1830 + 1831 + cond_delay(nfc_op.data_delay_ns); 1832 + 1833 + return 0; 1834 + } 1835 + 1836 + static int marvell_nfc_read_status_exec(struct nand_chip *chip, 1837 + const struct nand_subop *subop) 1838 + { 1839 + struct marvell_nfc_op nfc_op; 1840 + int ret; 1841 + 1842 + marvell_nfc_parse_instructions(chip, subop, 
&nfc_op); 1843 + nfc_op.ndcb[0] &= ~NDCB0_CMD_TYPE(TYPE_READ); 1844 + nfc_op.ndcb[0] |= NDCB0_CMD_TYPE(TYPE_STATUS); 1845 + 1846 + ret = marvell_nfc_prepare_cmd(chip); 1847 + if (ret) 1848 + return ret; 1849 + 1850 + marvell_nfc_send_cmd(chip, &nfc_op); 1851 + ret = marvell_nfc_end_cmd(chip, NDSR_RDDREQ, 1852 + "RDDREQ while reading status"); 1853 + if (ret) 1854 + return ret; 1855 + 1856 + cond_delay(nfc_op.cle_ale_delay_ns); 1857 + 1858 + if (nfc_op.rdy_timeout_ms) { 1859 + ret = marvell_nfc_wait_op(chip, nfc_op.rdy_timeout_ms); 1860 + if (ret) 1861 + return ret; 1862 + } 1863 + 1864 + cond_delay(nfc_op.rdy_delay_ns); 1865 + 1866 + marvell_nfc_xfer_data_pio(chip, subop, &nfc_op); 1867 + ret = marvell_nfc_wait_cmdd(chip); 1868 + if (ret) 1869 + return ret; 1870 + 1871 + cond_delay(nfc_op.data_delay_ns); 1872 + 1873 + return 0; 1874 + } 1875 + 1876 + static int marvell_nfc_reset_cmd_type_exec(struct nand_chip *chip, 1877 + const struct nand_subop *subop) 1878 + { 1879 + struct marvell_nfc_op nfc_op; 1880 + int ret; 1881 + 1882 + marvell_nfc_parse_instructions(chip, subop, &nfc_op); 1883 + nfc_op.ndcb[0] |= NDCB0_CMD_TYPE(TYPE_RESET); 1884 + 1885 + ret = marvell_nfc_prepare_cmd(chip); 1886 + if (ret) 1887 + return ret; 1888 + 1889 + marvell_nfc_send_cmd(chip, &nfc_op); 1890 + ret = marvell_nfc_wait_cmdd(chip); 1891 + if (ret) 1892 + return ret; 1893 + 1894 + cond_delay(nfc_op.cle_ale_delay_ns); 1895 + 1896 + ret = marvell_nfc_wait_op(chip, nfc_op.rdy_timeout_ms); 1897 + if (ret) 1898 + return ret; 1899 + 1900 + cond_delay(nfc_op.rdy_delay_ns); 1901 + 1902 + return 0; 1903 + } 1904 + 1905 + static int marvell_nfc_erase_cmd_type_exec(struct nand_chip *chip, 1906 + const struct nand_subop *subop) 1907 + { 1908 + struct marvell_nfc_op nfc_op; 1909 + int ret; 1910 + 1911 + marvell_nfc_parse_instructions(chip, subop, &nfc_op); 1912 + nfc_op.ndcb[0] |= NDCB0_CMD_TYPE(TYPE_ERASE); 1913 + 1914 + ret = marvell_nfc_prepare_cmd(chip); 1915 + if (ret) 1916 + return ret; 1917 + 
1918 + marvell_nfc_send_cmd(chip, &nfc_op); 1919 + ret = marvell_nfc_wait_cmdd(chip); 1920 + if (ret) 1921 + return ret; 1922 + 1923 + cond_delay(nfc_op.cle_ale_delay_ns); 1924 + 1925 + ret = marvell_nfc_wait_op(chip, nfc_op.rdy_timeout_ms); 1926 + if (ret) 1927 + return ret; 1928 + 1929 + cond_delay(nfc_op.rdy_delay_ns); 1930 + 1931 + return 0; 1932 + } 1933 + 1934 + static const struct nand_op_parser marvell_nfcv2_op_parser = NAND_OP_PARSER( 1935 + /* Monolithic reads/writes */ 1936 + NAND_OP_PARSER_PATTERN( 1937 + marvell_nfc_monolithic_access_exec, 1938 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1939 + NAND_OP_PARSER_PAT_ADDR_ELEM(true, MAX_ADDRESS_CYC_NFCV2), 1940 + NAND_OP_PARSER_PAT_CMD_ELEM(true), 1941 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true), 1942 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, MAX_CHUNK_SIZE)), 1943 + NAND_OP_PARSER_PATTERN( 1944 + marvell_nfc_monolithic_access_exec, 1945 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1946 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, MAX_ADDRESS_CYC_NFCV2), 1947 + NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, MAX_CHUNK_SIZE), 1948 + NAND_OP_PARSER_PAT_CMD_ELEM(true), 1949 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)), 1950 + /* Naked commands */ 1951 + NAND_OP_PARSER_PATTERN( 1952 + marvell_nfc_naked_access_exec, 1953 + NAND_OP_PARSER_PAT_CMD_ELEM(false)), 1954 + NAND_OP_PARSER_PATTERN( 1955 + marvell_nfc_naked_access_exec, 1956 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, MAX_ADDRESS_CYC_NFCV2)), 1957 + NAND_OP_PARSER_PATTERN( 1958 + marvell_nfc_naked_access_exec, 1959 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, MAX_CHUNK_SIZE)), 1960 + NAND_OP_PARSER_PATTERN( 1961 + marvell_nfc_naked_access_exec, 1962 + NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, MAX_CHUNK_SIZE)), 1963 + NAND_OP_PARSER_PATTERN( 1964 + marvell_nfc_naked_waitrdy_exec, 1965 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 1966 + ); 1967 + 1968 + static const struct nand_op_parser marvell_nfcv1_op_parser = NAND_OP_PARSER( 1969 + /* Naked commands not supported, use a function for each pattern */ 
1970 + NAND_OP_PARSER_PATTERN( 1971 + marvell_nfc_read_id_type_exec, 1972 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1973 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, MAX_ADDRESS_CYC_NFCV1), 1974 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 8)), 1975 + NAND_OP_PARSER_PATTERN( 1976 + marvell_nfc_erase_cmd_type_exec, 1977 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1978 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, MAX_ADDRESS_CYC_NFCV1), 1979 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1980 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 1981 + NAND_OP_PARSER_PATTERN( 1982 + marvell_nfc_read_status_exec, 1983 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1984 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 1)), 1985 + NAND_OP_PARSER_PATTERN( 1986 + marvell_nfc_reset_cmd_type_exec, 1987 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1988 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 1989 + NAND_OP_PARSER_PATTERN( 1990 + marvell_nfc_naked_waitrdy_exec, 1991 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 1992 + ); 1993 + 1994 + static int marvell_nfc_exec_op(struct nand_chip *chip, 1995 + const struct nand_operation *op, 1996 + bool check_only) 1997 + { 1998 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1999 + 2000 + if (nfc->caps->is_nfcv2) 2001 + return nand_op_parser_exec_op(chip, &marvell_nfcv2_op_parser, 2002 + op, check_only); 2003 + else 2004 + return nand_op_parser_exec_op(chip, &marvell_nfcv1_op_parser, 2005 + op, check_only); 2006 + } 2007 + 2008 + /* 2009 + * Layouts were broken in old pxa3xx_nand driver, these are supposed to be 2010 + * usable. 
2011 + */ 2012 + static int marvell_nand_ooblayout_ecc(struct mtd_info *mtd, int section, 2013 + struct mtd_oob_region *oobregion) 2014 + { 2015 + struct nand_chip *chip = mtd_to_nand(mtd); 2016 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 2017 + 2018 + if (section) 2019 + return -ERANGE; 2020 + 2021 + oobregion->length = (lt->full_chunk_cnt * lt->ecc_bytes) + 2022 + lt->last_ecc_bytes; 2023 + oobregion->offset = mtd->oobsize - oobregion->length; 2024 + 2025 + return 0; 2026 + } 2027 + 2028 + static int marvell_nand_ooblayout_free(struct mtd_info *mtd, int section, 2029 + struct mtd_oob_region *oobregion) 2030 + { 2031 + struct nand_chip *chip = mtd_to_nand(mtd); 2032 + const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 2033 + 2034 + if (section) 2035 + return -ERANGE; 2036 + 2037 + /* 2038 + * Bootrom looks in bytes 0 & 5 for bad blocks for the 2039 + * 4KB page / 4bit BCH combination. 2040 + */ 2041 + if (mtd->writesize == SZ_4K && lt->data_bytes == SZ_2K) 2042 + oobregion->offset = 6; 2043 + else 2044 + oobregion->offset = 2; 2045 + 2046 + oobregion->length = (lt->full_chunk_cnt * lt->spare_bytes) + 2047 + lt->last_spare_bytes - oobregion->offset; 2048 + 2049 + return 0; 2050 + } 2051 + 2052 + static const struct mtd_ooblayout_ops marvell_nand_ooblayout_ops = { 2053 + .ecc = marvell_nand_ooblayout_ecc, 2054 + .free = marvell_nand_ooblayout_free, 2055 + }; 2056 + 2057 + static int marvell_nand_hw_ecc_ctrl_init(struct mtd_info *mtd, 2058 + struct nand_ecc_ctrl *ecc) 2059 + { 2060 + struct nand_chip *chip = mtd_to_nand(mtd); 2061 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 2062 + const struct marvell_hw_ecc_layout *l; 2063 + int i; 2064 + 2065 + if (!nfc->caps->is_nfcv2 && 2066 + (mtd->writesize + mtd->oobsize > MAX_CHUNK_SIZE)) { 2067 + dev_err(nfc->dev, 2068 + "NFCv1: writesize (%d) cannot be bigger than a chunk (%d)\n", 2069 + mtd->writesize, MAX_CHUNK_SIZE - mtd->oobsize); 2070 + return 
-ENOTSUPP; 2071 + } 2072 + 2073 + to_marvell_nand(chip)->layout = NULL; 2074 + for (i = 0; i < ARRAY_SIZE(marvell_nfc_layouts); i++) { 2075 + l = &marvell_nfc_layouts[i]; 2076 + if (mtd->writesize == l->writesize && 2077 + ecc->size == l->chunk && ecc->strength == l->strength) { 2078 + to_marvell_nand(chip)->layout = l; 2079 + break; 2080 + } 2081 + } 2082 + 2083 + if (!to_marvell_nand(chip)->layout || 2084 + (!nfc->caps->is_nfcv2 && ecc->strength > 1)) { 2085 + dev_err(nfc->dev, 2086 + "ECC strength %d at page size %d is not supported\n", 2087 + ecc->strength, mtd->writesize); 2088 + return -ENOTSUPP; 2089 + } 2090 + 2091 + mtd_set_ooblayout(mtd, &marvell_nand_ooblayout_ops); 2092 + ecc->steps = l->nchunks; 2093 + ecc->size = l->data_bytes; 2094 + 2095 + if (ecc->strength == 1) { 2096 + chip->ecc.algo = NAND_ECC_HAMMING; 2097 + ecc->read_page_raw = marvell_nfc_hw_ecc_hmg_read_page_raw; 2098 + ecc->read_page = marvell_nfc_hw_ecc_hmg_read_page; 2099 + ecc->read_oob_raw = marvell_nfc_hw_ecc_hmg_read_oob_raw; 2100 + ecc->read_oob = ecc->read_oob_raw; 2101 + ecc->write_page_raw = marvell_nfc_hw_ecc_hmg_write_page_raw; 2102 + ecc->write_page = marvell_nfc_hw_ecc_hmg_write_page; 2103 + ecc->write_oob_raw = marvell_nfc_hw_ecc_hmg_write_oob_raw; 2104 + ecc->write_oob = ecc->write_oob_raw; 2105 + } else { 2106 + chip->ecc.algo = NAND_ECC_BCH; 2107 + ecc->strength = 16; 2108 + ecc->read_page_raw = marvell_nfc_hw_ecc_bch_read_page_raw; 2109 + ecc->read_page = marvell_nfc_hw_ecc_bch_read_page; 2110 + ecc->read_oob_raw = marvell_nfc_hw_ecc_bch_read_oob_raw; 2111 + ecc->read_oob = marvell_nfc_hw_ecc_bch_read_oob; 2112 + ecc->write_page_raw = marvell_nfc_hw_ecc_bch_write_page_raw; 2113 + ecc->write_page = marvell_nfc_hw_ecc_bch_write_page; 2114 + ecc->write_oob_raw = marvell_nfc_hw_ecc_bch_write_oob_raw; 2115 + ecc->write_oob = marvell_nfc_hw_ecc_bch_write_oob; 2116 + } 2117 + 2118 + return 0; 2119 + } 2120 + 2121 + static int marvell_nand_ecc_init(struct mtd_info *mtd, 2122 + 
struct nand_ecc_ctrl *ecc) 2123 + { 2124 + struct nand_chip *chip = mtd_to_nand(mtd); 2125 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 2126 + int ret; 2127 + 2128 + if (ecc->mode != NAND_ECC_NONE && (!ecc->size || !ecc->strength)) { 2129 + if (chip->ecc_step_ds && chip->ecc_strength_ds) { 2130 + ecc->size = chip->ecc_step_ds; 2131 + ecc->strength = chip->ecc_strength_ds; 2132 + } else { 2133 + dev_info(nfc->dev, 2134 + "No minimum ECC strength, using 1b/512B\n"); 2135 + ecc->size = 512; 2136 + ecc->strength = 1; 2137 + } 2138 + } 2139 + 2140 + switch (ecc->mode) { 2141 + case NAND_ECC_HW: 2142 + ret = marvell_nand_hw_ecc_ctrl_init(mtd, ecc); 2143 + if (ret) 2144 + return ret; 2145 + break; 2146 + case NAND_ECC_NONE: 2147 + case NAND_ECC_SOFT: 2148 + if (!nfc->caps->is_nfcv2 && mtd->writesize != SZ_512 && 2149 + mtd->writesize != SZ_2K) { 2150 + dev_err(nfc->dev, "NFCv1 cannot write %d bytes pages\n", 2151 + mtd->writesize); 2152 + return -EINVAL; 2153 + } 2154 + break; 2155 + default: 2156 + return -EINVAL; 2157 + } 2158 + 2159 + return 0; 2160 + } 2161 + 2162 + static u8 bbt_pattern[] = {'M', 'V', 'B', 'b', 't', '0' }; 2163 + static u8 bbt_mirror_pattern[] = {'1', 't', 'b', 'B', 'V', 'M' }; 2164 + 2165 + static struct nand_bbt_descr bbt_main_descr = { 2166 + .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE | 2167 + NAND_BBT_2BIT | NAND_BBT_VERSION, 2168 + .offs = 8, 2169 + .len = 6, 2170 + .veroffs = 14, 2171 + .maxblocks = 8, /* Last 8 blocks in each chip */ 2172 + .pattern = bbt_pattern 2173 + }; 2174 + 2175 + static struct nand_bbt_descr bbt_mirror_descr = { 2176 + .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE | 2177 + NAND_BBT_2BIT | NAND_BBT_VERSION, 2178 + .offs = 8, 2179 + .len = 6, 2180 + .veroffs = 14, 2181 + .maxblocks = 8, /* Last 8 blocks in each chip */ 2182 + .pattern = bbt_mirror_pattern 2183 + }; 2184 + 2185 + static int marvell_nfc_setup_data_interface(struct mtd_info *mtd, int chipnr, 2186 + 
const struct nand_data_interface 2187 + *conf) 2188 + { 2189 + struct nand_chip *chip = mtd_to_nand(mtd); 2190 + struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 2191 + struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 2192 + unsigned int period_ns = 1000000000 / clk_get_rate(nfc->ecc_clk) * 2; 2193 + const struct nand_sdr_timings *sdr; 2194 + struct marvell_nfc_timings nfc_tmg; 2195 + int read_delay; 2196 + 2197 + sdr = nand_get_sdr_timings(conf); 2198 + if (IS_ERR(sdr)) 2199 + return PTR_ERR(sdr); 2200 + 2201 + /* 2202 + * SDR timings are given in pico-seconds while NFC timings must be 2203 + * expressed in NAND controller clock cycles, which is half of the 2204 + * frequency of the accessible ECC clock retrieved by clk_get_rate(). 2205 + * This is not written anywhere in the datasheet but was observed 2206 + * with an oscilloscope. 2207 + * 2208 + * NFC datasheet gives equations from which thoses calculations 2209 + * are derived, they tend to be slightly more restrictives than the 2210 + * given core timings and may improve the overall speed. 2211 + */ 2212 + nfc_tmg.tRP = TO_CYCLES(DIV_ROUND_UP(sdr->tRC_min, 2), period_ns) - 1; 2213 + nfc_tmg.tRH = nfc_tmg.tRP; 2214 + nfc_tmg.tWP = TO_CYCLES(DIV_ROUND_UP(sdr->tWC_min, 2), period_ns) - 1; 2215 + nfc_tmg.tWH = nfc_tmg.tWP; 2216 + nfc_tmg.tCS = TO_CYCLES(sdr->tCS_min, period_ns); 2217 + nfc_tmg.tCH = TO_CYCLES(sdr->tCH_min, period_ns) - 1; 2218 + nfc_tmg.tADL = TO_CYCLES(sdr->tADL_min, period_ns); 2219 + /* 2220 + * Read delay is the time of propagation from SoC pins to NFC internal 2221 + * logic. With non-EDO timings, this is MIN_RD_DEL_CNT clock cycles. In 2222 + * EDO mode, an additional delay of tRH must be taken into account so 2223 + * the data is sampled on the falling edge instead of the rising edge. 2224 + */ 2225 + read_delay = sdr->tRC_min >= 30000 ? 
2226 + MIN_RD_DEL_CNT : MIN_RD_DEL_CNT + nfc_tmg.tRH; 2227 + 2228 + nfc_tmg.tAR = TO_CYCLES(sdr->tAR_min, period_ns); 2229 + /* 2230 + * tWHR and tRHW are supposed to be read to write delays (and vice 2231 + * versa) but in some cases, ie. when doing a change column, they must 2232 + * be greater than that to be sure tCCS delay is respected. 2233 + */ 2234 + nfc_tmg.tWHR = TO_CYCLES(max_t(int, sdr->tWHR_min, sdr->tCCS_min), 2235 + period_ns) - 2, 2236 + nfc_tmg.tRHW = TO_CYCLES(max_t(int, sdr->tRHW_min, sdr->tCCS_min), 2237 + period_ns); 2238 + 2239 + /* Use WAIT_MODE (wait for RB line) instead of only relying on delays */ 2240 + nfc_tmg.tR = TO_CYCLES(sdr->tWB_max, period_ns); 2241 + 2242 + if (chipnr < 0) 2243 + return 0; 2244 + 2245 + marvell_nand->ndtr0 = 2246 + NDTR0_TRP(nfc_tmg.tRP) | 2247 + NDTR0_TRH(nfc_tmg.tRH) | 2248 + NDTR0_ETRP(nfc_tmg.tRP) | 2249 + NDTR0_TWP(nfc_tmg.tWP) | 2250 + NDTR0_TWH(nfc_tmg.tWH) | 2251 + NDTR0_TCS(nfc_tmg.tCS) | 2252 + NDTR0_TCH(nfc_tmg.tCH) | 2253 + NDTR0_RD_CNT_DEL(read_delay) | 2254 + NDTR0_SELCNTR | 2255 + NDTR0_TADL(nfc_tmg.tADL); 2256 + 2257 + marvell_nand->ndtr1 = 2258 + NDTR1_TAR(nfc_tmg.tAR) | 2259 + NDTR1_TWHR(nfc_tmg.tWHR) | 2260 + NDTR1_TRHW(nfc_tmg.tRHW) | 2261 + NDTR1_WAIT_MODE | 2262 + NDTR1_TR(nfc_tmg.tR); 2263 + 2264 + return 0; 2265 + } 2266 + 2267 + static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc, 2268 + struct device_node *np) 2269 + { 2270 + struct pxa3xx_nand_platform_data *pdata = dev_get_platdata(dev); 2271 + struct marvell_nand_chip *marvell_nand; 2272 + struct mtd_info *mtd; 2273 + struct nand_chip *chip; 2274 + int nsels, ret, i; 2275 + u32 cs, rb; 2276 + 2277 + /* 2278 + * The legacy "num-cs" property indicates the number of CS on the only 2279 + * chip connected to the controller (legacy bindings does not support 2280 + * more than one chip). CS are only incremented one by one while the RB 2281 + * pin is always the #0. 
2282 + * 2283 + * When not using legacy bindings, a couple of "reg" and "nand-rb" 2284 + * properties must be filled. For each chip, expressed as a subnode, 2285 + * "reg" points to the CS lines and "nand-rb" to the RB line. 2286 + */ 2287 + if (pdata) { 2288 + nsels = 1; 2289 + } else if (nfc->caps->legacy_of_bindings && 2290 + !of_get_property(np, "num-cs", &nsels)) { 2291 + dev_err(dev, "missing num-cs property\n"); 2292 + return -EINVAL; 2293 + } else if (!of_get_property(np, "reg", &nsels)) { 2294 + dev_err(dev, "missing reg property\n"); 2295 + return -EINVAL; 2296 + } 2297 + 2298 + if (!pdata) 2299 + nsels /= sizeof(u32); 2300 + if (!nsels) { 2301 + dev_err(dev, "invalid reg property size\n"); 2302 + return -EINVAL; 2303 + } 2304 + 2305 + /* Alloc the nand chip structure */ 2306 + marvell_nand = devm_kzalloc(dev, sizeof(*marvell_nand) + 2307 + (nsels * 2308 + sizeof(struct marvell_nand_chip_sel)), 2309 + GFP_KERNEL); 2310 + if (!marvell_nand) { 2311 + dev_err(dev, "could not allocate chip structure\n"); 2312 + return -ENOMEM; 2313 + } 2314 + 2315 + marvell_nand->nsels = nsels; 2316 + marvell_nand->selected_die = -1; 2317 + 2318 + for (i = 0; i < nsels; i++) { 2319 + if (pdata || nfc->caps->legacy_of_bindings) { 2320 + /* 2321 + * Legacy bindings use the CS lines in natural 2322 + * order (0, 1, ...) 
2323 + */ 2324 + cs = i; 2325 + } else { 2326 + /* Retrieve CS id */ 2327 + ret = of_property_read_u32_index(np, "reg", i, &cs); 2328 + if (ret) { 2329 + dev_err(dev, "could not retrieve reg property: %d\n", 2330 + ret); 2331 + return ret; 2332 + } 2333 + } 2334 + 2335 + if (cs >= nfc->caps->max_cs_nb) { 2336 + dev_err(dev, "invalid reg value: %u (max CS = %d)\n", 2337 + cs, nfc->caps->max_cs_nb); 2338 + return -EINVAL; 2339 + } 2340 + 2341 + if (test_and_set_bit(cs, &nfc->assigned_cs)) { 2342 + dev_err(dev, "CS %d already assigned\n", cs); 2343 + return -EINVAL; 2344 + } 2345 + 2346 + /* 2347 + * The cs variable represents the chip select id, which must be 2348 + * converted in bit fields for NDCB0 and NDCB2 to select the 2349 + * right chip. Unfortunately, due to a lack of information on 2350 + * the subject and incoherent documentation, the user should not 2351 + * use CS1 and CS3 at all as asserting them is not supported in 2352 + * a reliable way (due to multiplexing inside ADDR5 field). 
2353 + */ 2354 + marvell_nand->sels[i].cs = cs; 2355 + switch (cs) { 2356 + case 0: 2357 + case 2: 2358 + marvell_nand->sels[i].ndcb0_csel = 0; 2359 + break; 2360 + case 1: 2361 + case 3: 2362 + marvell_nand->sels[i].ndcb0_csel = NDCB0_CSEL; 2363 + break; 2364 + default: 2365 + return -EINVAL; 2366 + } 2367 + 2368 + /* Retrieve RB id */ 2369 + if (pdata || nfc->caps->legacy_of_bindings) { 2370 + /* Legacy bindings always use RB #0 */ 2371 + rb = 0; 2372 + } else { 2373 + ret = of_property_read_u32_index(np, "nand-rb", i, 2374 + &rb); 2375 + if (ret) { 2376 + dev_err(dev, 2377 + "could not retrieve RB property: %d\n", 2378 + ret); 2379 + return ret; 2380 + } 2381 + } 2382 + 2383 + if (rb >= nfc->caps->max_rb_nb) { 2384 + dev_err(dev, "invalid reg value: %u (max RB = %d)\n", 2385 + rb, nfc->caps->max_rb_nb); 2386 + return -EINVAL; 2387 + } 2388 + 2389 + marvell_nand->sels[i].rb = rb; 2390 + } 2391 + 2392 + chip = &marvell_nand->chip; 2393 + chip->controller = &nfc->controller; 2394 + nand_set_flash_node(chip, np); 2395 + 2396 + chip->exec_op = marvell_nfc_exec_op; 2397 + chip->select_chip = marvell_nfc_select_chip; 2398 + if (nfc->caps->is_nfcv2 && 2399 + !of_property_read_bool(np, "marvell,nand-keep-config")) 2400 + chip->setup_data_interface = marvell_nfc_setup_data_interface; 2401 + 2402 + mtd = nand_to_mtd(chip); 2403 + mtd->dev.parent = dev; 2404 + 2405 + /* 2406 + * Default to HW ECC engine mode. If the nand-ecc-mode property is given 2407 + * in the DT node, this entry will be overwritten in nand_scan_ident(). 2408 + */ 2409 + chip->ecc.mode = NAND_ECC_HW; 2410 + 2411 + /* 2412 + * Save a reference value for timing registers before 2413 + * ->setup_data_interface() is called. 
2414 + */ 2415 + marvell_nand->ndtr0 = readl_relaxed(nfc->regs + NDTR0); 2416 + marvell_nand->ndtr1 = readl_relaxed(nfc->regs + NDTR1); 2417 + 2418 + chip->options |= NAND_BUSWIDTH_AUTO; 2419 + ret = nand_scan_ident(mtd, marvell_nand->nsels, NULL); 2420 + if (ret) { 2421 + dev_err(dev, "could not identify the nand chip\n"); 2422 + return ret; 2423 + } 2424 + 2425 + if (pdata && pdata->flash_bbt) 2426 + chip->bbt_options |= NAND_BBT_USE_FLASH; 2427 + 2428 + if (chip->bbt_options & NAND_BBT_USE_FLASH) { 2429 + /* 2430 + * We'll use a bad block table stored in-flash and don't 2431 + * allow writing the bad block marker to the flash. 2432 + */ 2433 + chip->bbt_options |= NAND_BBT_NO_OOB_BBM; 2434 + chip->bbt_td = &bbt_main_descr; 2435 + chip->bbt_md = &bbt_mirror_descr; 2436 + } 2437 + 2438 + /* Save the chip-specific fields of NDCR */ 2439 + marvell_nand->ndcr = NDCR_PAGE_SZ(mtd->writesize); 2440 + if (chip->options & NAND_BUSWIDTH_16) 2441 + marvell_nand->ndcr |= NDCR_DWIDTH_M | NDCR_DWIDTH_C; 2442 + 2443 + /* 2444 + * On small page NANDs, only one cycle is needed to pass the 2445 + * column address. 2446 + */ 2447 + if (mtd->writesize <= 512) { 2448 + marvell_nand->addr_cyc = 1; 2449 + } else { 2450 + marvell_nand->addr_cyc = 2; 2451 + marvell_nand->ndcr |= NDCR_RA_START; 2452 + } 2453 + 2454 + /* 2455 + * Now add the number of cycles needed to pass the row 2456 + * address. 2457 + * 2458 + * Addressing a chip using CS 2 or 3 should also need the third row 2459 + * cycle but due to inconsistance in the documentation and lack of 2460 + * hardware to test this situation, this case is not supported. 
2461 + */ 2462 + if (chip->options & NAND_ROW_ADDR_3) 2463 + marvell_nand->addr_cyc += 3; 2464 + else 2465 + marvell_nand->addr_cyc += 2; 2466 + 2467 + if (pdata) { 2468 + chip->ecc.size = pdata->ecc_step_size; 2469 + chip->ecc.strength = pdata->ecc_strength; 2470 + } 2471 + 2472 + ret = marvell_nand_ecc_init(mtd, &chip->ecc); 2473 + if (ret) { 2474 + dev_err(dev, "ECC init failed: %d\n", ret); 2475 + return ret; 2476 + } 2477 + 2478 + if (chip->ecc.mode == NAND_ECC_HW) { 2479 + /* 2480 + * Subpage write not available with hardware ECC, prohibit also 2481 + * subpage read as in userspace subpage access would still be 2482 + * allowed and subpage write, if used, would lead to numerous 2483 + * uncorrectable ECC errors. 2484 + */ 2485 + chip->options |= NAND_NO_SUBPAGE_WRITE; 2486 + } 2487 + 2488 + if (pdata || nfc->caps->legacy_of_bindings) { 2489 + /* 2490 + * We keep the MTD name unchanged to avoid breaking platforms 2491 + * where the MTD cmdline parser is used and the bootloader 2492 + * has not been updated to use the new naming scheme. 2493 + */ 2494 + mtd->name = "pxa3xx_nand-0"; 2495 + } else if (!mtd->name) { 2496 + /* 2497 + * If the new bindings are used and the bootloader has not been 2498 + * updated to pass a new mtdparts parameter on the cmdline, you 2499 + * should define the following property in your NAND node, ie: 2500 + * 2501 + * label = "main-storage"; 2502 + * 2503 + * This way, mtd->name will be set by the core when 2504 + * nand_set_flash_node() is called. 
2505 + */ 2506 + mtd->name = devm_kasprintf(nfc->dev, GFP_KERNEL, 2507 + "%s:nand.%d", dev_name(nfc->dev), 2508 + marvell_nand->sels[0].cs); 2509 + if (!mtd->name) { 2510 + dev_err(nfc->dev, "Failed to allocate mtd->name\n"); 2511 + return -ENOMEM; 2512 + } 2513 + } 2514 + 2515 + ret = nand_scan_tail(mtd); 2516 + if (ret) { 2517 + dev_err(dev, "nand_scan_tail failed: %d\n", ret); 2518 + return ret; 2519 + } 2520 + 2521 + if (pdata) 2522 + /* Legacy bindings support only one chip */ 2523 + ret = mtd_device_register(mtd, pdata->parts[0], 2524 + pdata->nr_parts[0]); 2525 + else 2526 + ret = mtd_device_register(mtd, NULL, 0); 2527 + if (ret) { 2528 + dev_err(dev, "failed to register mtd device: %d\n", ret); 2529 + nand_release(mtd); 2530 + return ret; 2531 + } 2532 + 2533 + list_add_tail(&marvell_nand->node, &nfc->chips); 2534 + 2535 + return 0; 2536 + } 2537 + 2538 + static int marvell_nand_chips_init(struct device *dev, struct marvell_nfc *nfc) 2539 + { 2540 + struct device_node *np = dev->of_node; 2541 + struct device_node *nand_np; 2542 + int max_cs = nfc->caps->max_cs_nb; 2543 + int nchips; 2544 + int ret; 2545 + 2546 + if (!np) 2547 + nchips = 1; 2548 + else 2549 + nchips = of_get_child_count(np); 2550 + 2551 + if (nchips > max_cs) { 2552 + dev_err(dev, "too many NAND chips: %d (max = %d CS)\n", nchips, 2553 + max_cs); 2554 + return -EINVAL; 2555 + } 2556 + 2557 + /* 2558 + * Legacy bindings do not use child nodes to exhibit NAND chip 2559 + * properties and layout. Instead, NAND properties are mixed with the 2560 + * controller ones, and partitions are defined as direct subnodes of the 2561 + * NAND controller node. 
2562 + */ 2563 + if (nfc->caps->legacy_of_bindings) { 2564 + ret = marvell_nand_chip_init(dev, nfc, np); 2565 + return ret; 2566 + } 2567 + 2568 + for_each_child_of_node(np, nand_np) { 2569 + ret = marvell_nand_chip_init(dev, nfc, nand_np); 2570 + if (ret) { 2571 + of_node_put(nand_np); 2572 + return ret; 2573 + } 2574 + } 2575 + 2576 + return 0; 2577 + } 2578 + 2579 + static void marvell_nand_chips_cleanup(struct marvell_nfc *nfc) 2580 + { 2581 + struct marvell_nand_chip *entry, *temp; 2582 + 2583 + list_for_each_entry_safe(entry, temp, &nfc->chips, node) { 2584 + nand_release(nand_to_mtd(&entry->chip)); 2585 + list_del(&entry->node); 2586 + } 2587 + } 2588 + 2589 + static int marvell_nfc_init_dma(struct marvell_nfc *nfc) 2590 + { 2591 + struct platform_device *pdev = container_of(nfc->dev, 2592 + struct platform_device, 2593 + dev); 2594 + struct dma_slave_config config = {}; 2595 + struct resource *r; 2596 + dma_cap_mask_t mask; 2597 + struct pxad_param param; 2598 + int ret; 2599 + 2600 + if (!IS_ENABLED(CONFIG_PXA_DMA)) { 2601 + dev_warn(nfc->dev, 2602 + "DMA not enabled in configuration\n"); 2603 + return -ENOTSUPP; 2604 + } 2605 + 2606 + ret = dma_set_mask_and_coherent(nfc->dev, DMA_BIT_MASK(32)); 2607 + if (ret) 2608 + return ret; 2609 + 2610 + r = platform_get_resource(pdev, IORESOURCE_DMA, 0); 2611 + if (!r) { 2612 + dev_err(nfc->dev, "No resource defined for data DMA\n"); 2613 + return -ENXIO; 2614 + } 2615 + 2616 + param.drcmr = r->start; 2617 + param.prio = PXAD_PRIO_LOWEST; 2618 + dma_cap_zero(mask); 2619 + dma_cap_set(DMA_SLAVE, mask); 2620 + nfc->dma_chan = 2621 + dma_request_slave_channel_compat(mask, pxad_filter_fn, 2622 + &param, nfc->dev, 2623 + "data"); 2624 + if (!nfc->dma_chan) { 2625 + dev_err(nfc->dev, 2626 + "Unable to request data DMA channel\n"); 2627 + return -ENODEV; 2628 + } 2629 + 2630 + r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2631 + if (!r) 2632 + return -ENXIO; 2633 + 2634 + config.src_addr_width = 
DMA_SLAVE_BUSWIDTH_4_BYTES; 2635 + config.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 2636 + config.src_addr = r->start + NDDB; 2637 + config.dst_addr = r->start + NDDB; 2638 + config.src_maxburst = 32; 2639 + config.dst_maxburst = 32; 2640 + ret = dmaengine_slave_config(nfc->dma_chan, &config); 2641 + if (ret < 0) { 2642 + dev_err(nfc->dev, "Failed to configure DMA channel\n"); 2643 + return ret; 2644 + } 2645 + 2646 + /* 2647 + * DMA must act on length multiple of 32 and this length may be 2648 + * bigger than the destination buffer. Use this buffer instead 2649 + * for DMA transfers and then copy the desired amount of data to 2650 + * the provided buffer. 2651 + */ 2652 + nfc->dma_buf = kmalloc(MAX_CHUNK_SIZE, GFP_KERNEL | GFP_DMA); 2653 + if (!nfc->dma_buf) 2654 + return -ENOMEM; 2655 + 2656 + nfc->use_dma = true; 2657 + 2658 + return 0; 2659 + } 2660 + 2661 + static int marvell_nfc_init(struct marvell_nfc *nfc) 2662 + { 2663 + struct device_node *np = nfc->dev->of_node; 2664 + 2665 + /* 2666 + * Some SoCs like A7k/A8k need to enable manually the NAND 2667 + * controller, gated clocks and reset bits to avoid being bootloader 2668 + * dependent. This is done through the use of the System Functions 2669 + * registers. 
2670 + */ 2671 + if (nfc->caps->need_system_controller) { 2672 + struct regmap *sysctrl_base = 2673 + syscon_regmap_lookup_by_phandle(np, 2674 + "marvell,system-controller"); 2675 + u32 reg; 2676 + 2677 + if (IS_ERR(sysctrl_base)) 2678 + return PTR_ERR(sysctrl_base); 2679 + 2680 + reg = GENCONF_SOC_DEVICE_MUX_NFC_EN | 2681 + GENCONF_SOC_DEVICE_MUX_ECC_CLK_RST | 2682 + GENCONF_SOC_DEVICE_MUX_ECC_CORE_RST | 2683 + GENCONF_SOC_DEVICE_MUX_NFC_INT_EN; 2684 + regmap_write(sysctrl_base, GENCONF_SOC_DEVICE_MUX, reg); 2685 + 2686 + regmap_read(sysctrl_base, GENCONF_CLK_GATING_CTRL, &reg); 2687 + reg |= GENCONF_CLK_GATING_CTRL_ND_GATE; 2688 + regmap_write(sysctrl_base, GENCONF_CLK_GATING_CTRL, reg); 2689 + 2690 + regmap_read(sysctrl_base, GENCONF_ND_CLK_CTRL, &reg); 2691 + reg |= GENCONF_ND_CLK_CTRL_EN; 2692 + regmap_write(sysctrl_base, GENCONF_ND_CLK_CTRL, reg); 2693 + } 2694 + 2695 + /* Configure the DMA if appropriate */ 2696 + if (!nfc->caps->is_nfcv2) 2697 + marvell_nfc_init_dma(nfc); 2698 + 2699 + /* 2700 + * ECC operations and interruptions are only enabled when specifically 2701 + * needed. ECC shall not be activated in the early stages (fails probe). 2702 + * Arbiter flag, even if marked as "reserved", must be set (empirical). 2703 + * SPARE_EN bit must always be set or ECC bytes will not be at the same 2704 + * offset in the read page and this will fail the protection. 
2705 + */ 2706 + writel_relaxed(NDCR_ALL_INT | NDCR_ND_ARB_EN | NDCR_SPARE_EN | 2707 + NDCR_RD_ID_CNT(NFCV1_READID_LEN), nfc->regs + NDCR); 2708 + writel_relaxed(0xFFFFFFFF, nfc->regs + NDSR); 2709 + writel_relaxed(0, nfc->regs + NDECCCTRL); 2710 + 2711 + return 0; 2712 + } 2713 + 2714 + static int marvell_nfc_probe(struct platform_device *pdev) 2715 + { 2716 + struct device *dev = &pdev->dev; 2717 + struct resource *r; 2718 + struct marvell_nfc *nfc; 2719 + int ret; 2720 + int irq; 2721 + 2722 + nfc = devm_kzalloc(&pdev->dev, sizeof(struct marvell_nfc), 2723 + GFP_KERNEL); 2724 + if (!nfc) 2725 + return -ENOMEM; 2726 + 2727 + nfc->dev = dev; 2728 + nand_hw_control_init(&nfc->controller); 2729 + INIT_LIST_HEAD(&nfc->chips); 2730 + 2731 + r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2732 + nfc->regs = devm_ioremap_resource(dev, r); 2733 + if (IS_ERR(nfc->regs)) 2734 + return PTR_ERR(nfc->regs); 2735 + 2736 + irq = platform_get_irq(pdev, 0); 2737 + if (irq < 0) { 2738 + dev_err(dev, "failed to retrieve irq\n"); 2739 + return irq; 2740 + } 2741 + 2742 + nfc->ecc_clk = devm_clk_get(&pdev->dev, NULL); 2743 + if (IS_ERR(nfc->ecc_clk)) 2744 + return PTR_ERR(nfc->ecc_clk); 2745 + 2746 + ret = clk_prepare_enable(nfc->ecc_clk); 2747 + if (ret) 2748 + return ret; 2749 + 2750 + marvell_nfc_disable_int(nfc, NDCR_ALL_INT); 2751 + marvell_nfc_clear_int(nfc, NDCR_ALL_INT); 2752 + ret = devm_request_irq(dev, irq, marvell_nfc_isr, 2753 + 0, "marvell-nfc", nfc); 2754 + if (ret) 2755 + goto unprepare_clk; 2756 + 2757 + /* Get NAND controller capabilities */ 2758 + if (pdev->id_entry) 2759 + nfc->caps = (void *)pdev->id_entry->driver_data; 2760 + else 2761 + nfc->caps = of_device_get_match_data(&pdev->dev); 2762 + 2763 + if (!nfc->caps) { 2764 + dev_err(dev, "Could not retrieve NFC caps\n"); 2765 + ret = -EINVAL; 2766 + goto unprepare_clk; 2767 + } 2768 + 2769 + /* Init the controller and then probe the chips */ 2770 + ret = marvell_nfc_init(nfc); 2771 + if (ret) 2772 + goto 
unprepare_clk; 2773 + 2774 + platform_set_drvdata(pdev, nfc); 2775 + 2776 + ret = marvell_nand_chips_init(dev, nfc); 2777 + if (ret) 2778 + goto unprepare_clk; 2779 + 2780 + return 0; 2781 + 2782 + unprepare_clk: 2783 + clk_disable_unprepare(nfc->ecc_clk); 2784 + 2785 + return ret; 2786 + } 2787 + 2788 + static int marvell_nfc_remove(struct platform_device *pdev) 2789 + { 2790 + struct marvell_nfc *nfc = platform_get_drvdata(pdev); 2791 + 2792 + marvell_nand_chips_cleanup(nfc); 2793 + 2794 + if (nfc->use_dma) { 2795 + dmaengine_terminate_all(nfc->dma_chan); 2796 + dma_release_channel(nfc->dma_chan); 2797 + } 2798 + 2799 + clk_disable_unprepare(nfc->ecc_clk); 2800 + 2801 + return 0; 2802 + } 2803 + 2804 + static const struct marvell_nfc_caps marvell_armada_8k_nfc_caps = { 2805 + .max_cs_nb = 4, 2806 + .max_rb_nb = 2, 2807 + .need_system_controller = true, 2808 + .is_nfcv2 = true, 2809 + }; 2810 + 2811 + static const struct marvell_nfc_caps marvell_armada370_nfc_caps = { 2812 + .max_cs_nb = 4, 2813 + .max_rb_nb = 2, 2814 + .is_nfcv2 = true, 2815 + }; 2816 + 2817 + static const struct marvell_nfc_caps marvell_pxa3xx_nfc_caps = { 2818 + .max_cs_nb = 2, 2819 + .max_rb_nb = 1, 2820 + .use_dma = true, 2821 + }; 2822 + 2823 + static const struct marvell_nfc_caps marvell_armada_8k_nfc_legacy_caps = { 2824 + .max_cs_nb = 4, 2825 + .max_rb_nb = 2, 2826 + .need_system_controller = true, 2827 + .legacy_of_bindings = true, 2828 + .is_nfcv2 = true, 2829 + }; 2830 + 2831 + static const struct marvell_nfc_caps marvell_armada370_nfc_legacy_caps = { 2832 + .max_cs_nb = 4, 2833 + .max_rb_nb = 2, 2834 + .legacy_of_bindings = true, 2835 + .is_nfcv2 = true, 2836 + }; 2837 + 2838 + static const struct marvell_nfc_caps marvell_pxa3xx_nfc_legacy_caps = { 2839 + .max_cs_nb = 2, 2840 + .max_rb_nb = 1, 2841 + .legacy_of_bindings = true, 2842 + .use_dma = true, 2843 + }; 2844 + 2845 + static const struct platform_device_id marvell_nfc_platform_ids[] = { 2846 + { 2847 + .name = "pxa3xx-nand", 
2848 + .driver_data = (kernel_ulong_t)&marvell_pxa3xx_nfc_legacy_caps, 2849 + }, 2850 + { /* sentinel */ }, 2851 + }; 2852 + MODULE_DEVICE_TABLE(platform, marvell_nfc_platform_ids); 2853 + 2854 + static const struct of_device_id marvell_nfc_of_ids[] = { 2855 + { 2856 + .compatible = "marvell,armada-8k-nand-controller", 2857 + .data = &marvell_armada_8k_nfc_caps, 2858 + }, 2859 + { 2860 + .compatible = "marvell,armada370-nand-controller", 2861 + .data = &marvell_armada370_nfc_caps, 2862 + }, 2863 + { 2864 + .compatible = "marvell,pxa3xx-nand-controller", 2865 + .data = &marvell_pxa3xx_nfc_caps, 2866 + }, 2867 + /* Support for old/deprecated bindings: */ 2868 + { 2869 + .compatible = "marvell,armada-8k-nand", 2870 + .data = &marvell_armada_8k_nfc_legacy_caps, 2871 + }, 2872 + { 2873 + .compatible = "marvell,armada370-nand", 2874 + .data = &marvell_armada370_nfc_legacy_caps, 2875 + }, 2876 + { 2877 + .compatible = "marvell,pxa3xx-nand", 2878 + .data = &marvell_pxa3xx_nfc_legacy_caps, 2879 + }, 2880 + { /* sentinel */ }, 2881 + }; 2882 + MODULE_DEVICE_TABLE(of, marvell_nfc_of_ids); 2883 + 2884 + static struct platform_driver marvell_nfc_driver = { 2885 + .driver = { 2886 + .name = "marvell-nfc", 2887 + .of_match_table = marvell_nfc_of_ids, 2888 + }, 2889 + .id_table = marvell_nfc_platform_ids, 2890 + .probe = marvell_nfc_probe, 2891 + .remove = marvell_nfc_remove, 2892 + }; 2893 + module_platform_driver(marvell_nfc_driver); 2894 + 2895 + MODULE_LICENSE("GPL"); 2896 + MODULE_DESCRIPTION("Marvell NAND controller driver");
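The probe path above resolves a `marvell_nfc_caps` structure either from `pdev->id_entry->driver_data` (legacy platform devices) or from `of_device_get_match_data()` (DT compatibles). A minimal userspace sketch of this per-compatible capability lookup — the struct fields mirror the driver, but the table walk and names here are hypothetical stand-ins for the OF machinery:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified mirror of the driver's caps structure. */
struct nfc_caps {
	int max_cs_nb;
	int max_rb_nb;
	bool legacy_of_bindings;
};

static const struct nfc_caps armada370_caps        = { 4, 2, false };
static const struct nfc_caps armada370_legacy_caps = { 4, 2, true };

/* Stand-in for a struct of_device_id table entry. */
struct of_match {
	const char *compatible;
	const struct nfc_caps *data;
};

static const struct of_match match_table[] = {
	{ "marvell,armada370-nand-controller", &armada370_caps },
	{ "marvell,armada370-nand", &armada370_legacy_caps }, /* deprecated */
	{ NULL, NULL },
};

/* Stand-in for of_device_get_match_data(): pick caps by compatible string. */
static const struct nfc_caps *get_caps(const char *compatible)
{
	for (const struct of_match *m = match_table; m->compatible; m++)
		if (!strcmp(m->compatible, compatible))
			return m->data;

	return NULL;
}
```

Each deprecated compatible simply points at a caps variant with `legacy_of_bindings` set, so a single probe function serves both binding generations.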
+95 -31
drivers/mtd/nand/mtk_ecc.c
··· 34 34 35 35 #define ECC_ENCCON (0x00) 36 36 #define ECC_ENCCNFG (0x04) 37 - #define ECC_MODE_SHIFT (5) 38 37 #define ECC_MS_SHIFT (16) 39 38 #define ECC_ENCDIADDR (0x08) 40 39 #define ECC_ENCIDLE (0x0C) 41 - #define ECC_ENCIRQ_EN (0x80) 42 - #define ECC_ENCIRQ_STA (0x84) 43 40 #define ECC_DECCON (0x100) 44 41 #define ECC_DECCNFG (0x104) 45 42 #define DEC_EMPTY_EN BIT(31) 46 43 #define DEC_CNFG_CORRECT (0x3 << 12) 47 44 #define ECC_DECIDLE (0x10C) 48 45 #define ECC_DECENUM0 (0x114) 49 - #define ECC_DECDONE (0x124) 50 - #define ECC_DECIRQ_EN (0x200) 51 - #define ECC_DECIRQ_STA (0x204) 52 46 53 47 #define ECC_TIMEOUT (500000) 54 48 55 49 #define ECC_IDLE_REG(op) ((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE) 56 50 #define ECC_CTL_REG(op) ((op) == ECC_ENCODE ? ECC_ENCCON : ECC_DECCON) 57 - #define ECC_IRQ_REG(op) ((op) == ECC_ENCODE ? \ 58 - ECC_ENCIRQ_EN : ECC_DECIRQ_EN) 59 51 60 52 struct mtk_ecc_caps { 61 53 u32 err_mask; 62 54 const u8 *ecc_strength; 55 + const u32 *ecc_regs; 63 56 u8 num_ecc_strength; 64 - u32 encode_parity_reg0; 57 + u8 ecc_mode_shift; 58 + u32 parity_bits; 65 59 int pg_irq_sel; 66 60 }; 67 61 ··· 83 89 40, 44, 48, 52, 56, 60, 68, 72, 80 84 90 }; 85 91 92 + static const u8 ecc_strength_mt7622[] = { 93 + 4, 6, 8, 10, 12, 14, 16 94 + }; 95 + 96 + enum mtk_ecc_regs { 97 + ECC_ENCPAR00, 98 + ECC_ENCIRQ_EN, 99 + ECC_ENCIRQ_STA, 100 + ECC_DECDONE, 101 + ECC_DECIRQ_EN, 102 + ECC_DECIRQ_STA, 103 + }; 104 + 105 + static int mt2701_ecc_regs[] = { 106 + [ECC_ENCPAR00] = 0x10, 107 + [ECC_ENCIRQ_EN] = 0x80, 108 + [ECC_ENCIRQ_STA] = 0x84, 109 + [ECC_DECDONE] = 0x124, 110 + [ECC_DECIRQ_EN] = 0x200, 111 + [ECC_DECIRQ_STA] = 0x204, 112 + }; 113 + 114 + static int mt2712_ecc_regs[] = { 115 + [ECC_ENCPAR00] = 0x300, 116 + [ECC_ENCIRQ_EN] = 0x80, 117 + [ECC_ENCIRQ_STA] = 0x84, 118 + [ECC_DECDONE] = 0x124, 119 + [ECC_DECIRQ_EN] = 0x200, 120 + [ECC_DECIRQ_STA] = 0x204, 121 + }; 122 + 123 + static int mt7622_ecc_regs[] = { 124 + [ECC_ENCPAR00] = 0x10, 125 + 
[ECC_ENCIRQ_EN] = 0x30, 126 + [ECC_ENCIRQ_STA] = 0x34, 127 + [ECC_DECDONE] = 0x11c, 128 + [ECC_DECIRQ_EN] = 0x140, 129 + [ECC_DECIRQ_STA] = 0x144, 130 + }; 131 + 86 132 static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc, 87 133 enum mtk_ecc_operation op) 88 134 { ··· 141 107 static irqreturn_t mtk_ecc_irq(int irq, void *id) 142 108 { 143 109 struct mtk_ecc *ecc = id; 144 - enum mtk_ecc_operation op; 145 110 u32 dec, enc; 146 111 147 - dec = readw(ecc->regs + ECC_DECIRQ_STA) & ECC_IRQ_EN; 112 + dec = readw(ecc->regs + ecc->caps->ecc_regs[ECC_DECIRQ_STA]) 113 + & ECC_IRQ_EN; 148 114 if (dec) { 149 - op = ECC_DECODE; 150 - dec = readw(ecc->regs + ECC_DECDONE); 115 + dec = readw(ecc->regs + ecc->caps->ecc_regs[ECC_DECDONE]); 151 116 if (dec & ecc->sectors) { 152 117 /* 153 118 * Clear decode IRQ status once again to ensure that 154 119 * there will be no extra IRQ. 155 120 */ 156 - readw(ecc->regs + ECC_DECIRQ_STA); 121 + readw(ecc->regs + ecc->caps->ecc_regs[ECC_DECIRQ_STA]); 157 122 ecc->sectors = 0; 158 123 complete(&ecc->done); 159 124 } else { 160 125 return IRQ_HANDLED; 161 126 } 162 127 } else { 163 - enc = readl(ecc->regs + ECC_ENCIRQ_STA) & ECC_IRQ_EN; 164 - if (enc) { 165 - op = ECC_ENCODE; 128 + enc = readl(ecc->regs + ecc->caps->ecc_regs[ECC_ENCIRQ_STA]) 129 + & ECC_IRQ_EN; 130 + if (enc) 166 131 complete(&ecc->done); 167 - } else { 132 + else 168 133 return IRQ_NONE; 169 - } 170 134 } 171 135 172 136 return IRQ_HANDLED; ··· 192 160 /* configure ECC encoder (in bits) */ 193 161 enc_sz = config->len << 3; 194 162 195 - reg = ecc_bit | (config->mode << ECC_MODE_SHIFT); 163 + reg = ecc_bit | (config->mode << ecc->caps->ecc_mode_shift); 196 164 reg |= (enc_sz << ECC_MS_SHIFT); 197 165 writel(reg, ecc->regs + ECC_ENCCNFG); 198 166 ··· 203 171 } else { 204 172 /* configure ECC decoder (in bits) */ 205 173 dec_sz = (config->len << 3) + 206 - config->strength * ECC_PARITY_BITS; 174 + config->strength * ecc->caps->parity_bits; 207 175 208 - reg = ecc_bit | 
(config->mode << ECC_MODE_SHIFT); 176 + reg = ecc_bit | (config->mode << ecc->caps->ecc_mode_shift); 209 177 reg |= (dec_sz << ECC_MS_SHIFT) | DEC_CNFG_CORRECT; 210 178 reg |= DEC_EMPTY_EN; 211 179 writel(reg, ecc->regs + ECC_DECCNFG); ··· 323 291 */ 324 292 if (ecc->caps->pg_irq_sel && config->mode == ECC_NFI_MODE) 325 293 reg_val |= ECC_PG_IRQ_SEL; 326 - writew(reg_val, ecc->regs + ECC_IRQ_REG(op)); 294 + if (op == ECC_ENCODE) 295 + writew(reg_val, ecc->regs + 296 + ecc->caps->ecc_regs[ECC_ENCIRQ_EN]); 297 + else 298 + writew(reg_val, ecc->regs + 299 + ecc->caps->ecc_regs[ECC_DECIRQ_EN]); 327 300 } 328 301 329 302 writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op)); ··· 347 310 348 311 /* disable it */ 349 312 mtk_ecc_wait_idle(ecc, op); 350 - if (op == ECC_DECODE) 313 + if (op == ECC_DECODE) { 351 314 /* 352 315 * Clear decode IRQ status in case there is a timeout to wait 353 316 * decode IRQ. 354 317 */ 355 - readw(ecc->regs + ECC_DECIRQ_STA); 356 - writew(0, ecc->regs + ECC_IRQ_REG(op)); 318 + readw(ecc->regs + ecc->caps->ecc_regs[ECC_DECDONE]); 319 + writew(0, ecc->regs + ecc->caps->ecc_regs[ECC_DECIRQ_EN]); 320 + } else { 321 + writew(0, ecc->regs + ecc->caps->ecc_regs[ECC_ENCIRQ_EN]); 322 + } 323 + 357 324 writew(ECC_OP_DISABLE, ecc->regs + ECC_CTL_REG(op)); 358 325 359 326 mutex_unlock(&ecc->lock); ··· 408 367 mtk_ecc_wait_idle(ecc, ECC_ENCODE); 409 368 410 369 /* Program ECC bytes to OOB: per sector oob = FDM + ECC + SPARE */ 411 - len = (config->strength * ECC_PARITY_BITS + 7) >> 3; 370 + len = (config->strength * ecc->caps->parity_bits + 7) >> 3; 412 371 413 372 /* write the parity bytes generated by the ECC back to temp buffer */ 414 373 __ioread32_copy(ecc->eccdata, 415 - ecc->regs + ecc->caps->encode_parity_reg0, 374 + ecc->regs + ecc->caps->ecc_regs[ECC_ENCPAR00], 416 375 round_up(len, 4)); 417 376 418 377 /* copy into possibly unaligned OOB region with actual length */ ··· 445 404 } 446 405 EXPORT_SYMBOL(mtk_ecc_adjust_strength); 447 406 407 + 
unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc) 408 + { 409 + return ecc->caps->parity_bits; 410 + } 411 + EXPORT_SYMBOL(mtk_ecc_get_parity_bits); 412 + 448 413 static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = { 449 414 .err_mask = 0x3f, 450 415 .ecc_strength = ecc_strength_mt2701, 416 + .ecc_regs = mt2701_ecc_regs, 451 417 .num_ecc_strength = 20, 452 - .encode_parity_reg0 = 0x10, 418 + .ecc_mode_shift = 5, 419 + .parity_bits = 14, 453 420 .pg_irq_sel = 0, 454 421 }; 455 422 456 423 static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = { 457 424 .err_mask = 0x7f, 458 425 .ecc_strength = ecc_strength_mt2712, 426 + .ecc_regs = mt2712_ecc_regs, 459 427 .num_ecc_strength = 23, 460 - .encode_parity_reg0 = 0x300, 428 + .ecc_mode_shift = 5, 429 + .parity_bits = 14, 461 430 .pg_irq_sel = 1, 431 + }; 432 + 433 + static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = { 434 + .err_mask = 0x3f, 435 + .ecc_strength = ecc_strength_mt7622, 436 + .ecc_regs = mt7622_ecc_regs, 437 + .num_ecc_strength = 7, 438 + .ecc_mode_shift = 4, 439 + .parity_bits = 13, 440 + .pg_irq_sel = 0, 462 441 }; 463 442 464 443 static const struct of_device_id mtk_ecc_dt_match[] = { ··· 488 427 }, { 489 428 .compatible = "mediatek,mt2712-ecc", 490 429 .data = &mtk_ecc_caps_mt2712, 430 + }, { 431 + .compatible = "mediatek,mt7622-ecc", 432 + .data = &mtk_ecc_caps_mt7622, 491 433 }, 492 434 {}, 493 435 }; ··· 516 452 517 453 max_eccdata_size = ecc->caps->num_ecc_strength - 1; 518 454 max_eccdata_size = ecc->caps->ecc_strength[max_eccdata_size]; 519 - max_eccdata_size = (max_eccdata_size * ECC_PARITY_BITS + 7) >> 3; 455 + max_eccdata_size = (max_eccdata_size * ecc->caps->parity_bits + 7) >> 3; 520 456 max_eccdata_size = round_up(max_eccdata_size, 4); 521 457 ecc->eccdata = devm_kzalloc(dev, max_eccdata_size, GFP_KERNEL); 522 458 if (!ecc->eccdata)
+1 -2
drivers/mtd/nand/mtk_ecc.h
··· 14 14 15 15 #include <linux/types.h> 16 16 17 - #define ECC_PARITY_BITS (14) 18 - 19 17 enum mtk_ecc_mode {ECC_DMA_MODE = 0, ECC_NFI_MODE = 1}; 20 18 enum mtk_ecc_operation {ECC_ENCODE, ECC_DECODE}; 21 19 ··· 41 43 int mtk_ecc_enable(struct mtk_ecc *, struct mtk_ecc_config *); 42 44 void mtk_ecc_disable(struct mtk_ecc *); 43 45 void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p); 46 + unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc); 44 47 45 48 struct mtk_ecc *of_mtk_ecc_get(struct device_node *); 46 49 void mtk_ecc_release(struct mtk_ecc *);
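The mtk_ecc rework above replaces fixed register defines (`ECC_ENCIRQ_EN`, `ECC_DECDONE`, …) with per-SoC offset tables indexed by an enum, plus a per-SoC `parity_bits` value exposed through `mtk_ecc_get_parity_bits()`. A self-contained sketch of that indirection — the offsets are copied from the mt2701/mt7622 tables in the diff, but the address helper and struct names are illustrative, not the driver's API:

```c
#include <assert.h>
#include <stdint.h>

/* Logical register identifiers, decoupled from per-SoC offsets. */
enum ecc_regs { REG_ENCPAR00, REG_DECIRQ_STA, REG_MAX };

/* Offsets taken from the mt2701/mt7622 tables above. */
static const uint32_t mt2701_regs[REG_MAX] = {
	[REG_ENCPAR00]   = 0x10,
	[REG_DECIRQ_STA] = 0x204,
};

static const uint32_t mt7622_regs[REG_MAX] = {
	[REG_ENCPAR00]   = 0x10,
	[REG_DECIRQ_STA] = 0x144,
};

struct ecc_caps {
	const uint32_t *regs;
	unsigned int parity_bits;
};

/* Address a readw(ecc->regs + caps->regs[r]) access would resolve to. */
static uintptr_t reg_addr(uintptr_t base, const struct ecc_caps *caps,
			  enum ecc_regs r)
{
	return base + caps->regs[r];
}

/* Parity bytes per sector: DIV_ROUND_UP(strength * parity_bits, 8). */
static unsigned int ecc_bytes(const struct ecc_caps *caps,
			      unsigned int strength)
{
	return (strength * caps->parity_bits + 7) / 8;
}
```

Supporting a new IP version then only takes a new offset table and caps entry, which is how mt7622 (13 parity bits per correction level, shifted mode field) is wired in without touching the common code paths.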
+45 -31
drivers/mtd/nand/mtk_nand.c
··· 97 97 98 98 #define MTK_TIMEOUT (500000) 99 99 #define MTK_RESET_TIMEOUT (1000000) 100 - #define MTK_MAX_SECTOR (16) 101 100 #define MTK_NAND_MAX_NSELS (2) 102 101 #define MTK_NFC_MIN_SPARE (16) 103 102 #define ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt) \ ··· 108 109 u8 num_spare_size; 109 110 u8 pageformat_spare_shift; 110 111 u8 nfi_clk_div; 112 + u8 max_sector; 113 + u32 max_sector_size; 111 114 }; 112 115 113 116 struct mtk_nfc_bad_mark_ctl { ··· 172 171 static const u8 spare_size_mt2712[] = { 173 172 16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51, 52, 62, 61, 63, 64, 67, 174 173 74 174 + }; 175 + 176 + static const u8 spare_size_mt7622[] = { 177 + 16, 26, 27, 28 175 178 }; 176 179 177 180 static inline struct mtk_nfc_nand_chip *to_mtk_nand(struct nand_chip *nand) ··· 455 450 * set to max sector to allow the HW to continue reading over 456 451 * unaligned accesses 457 452 */ 458 - reg = (MTK_MAX_SECTOR << CON_SEC_SHIFT) | CON_BRD; 453 + reg = (nfc->caps->max_sector << CON_SEC_SHIFT) | CON_BRD; 459 454 nfi_writel(nfc, reg, NFI_CON); 460 455 461 456 /* trigger to fetch data */ ··· 486 481 reg = nfi_readw(nfc, NFI_CNFG) | CNFG_BYTE_RW; 487 482 nfi_writew(nfc, reg, NFI_CNFG); 488 483 489 - reg = MTK_MAX_SECTOR << CON_SEC_SHIFT | CON_BWR; 484 + reg = nfc->caps->max_sector << CON_SEC_SHIFT | CON_BWR; 490 485 nfi_writel(nfc, reg, NFI_CON); 491 486 492 487 nfi_writew(nfc, STAR_EN, NFI_STRDATA); ··· 766 761 u32 reg; 767 762 int ret; 768 763 764 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 765 + 769 766 if (!raw) { 770 767 /* OOB => FDM: from register, ECC: from HW */ 771 768 reg = nfi_readw(nfc, NFI_CNFG) | CNFG_AUTO_FMT_EN; ··· 801 794 if (!raw) 802 795 mtk_ecc_disable(nfc->ecc); 803 796 804 - return ret; 797 + if (ret) 798 + return ret; 799 + 800 + return nand_prog_page_end_op(chip); 805 801 } 806 802 807 803 static int mtk_nfc_write_page_hwecc(struct mtd_info *mtd, ··· 842 832 static int mtk_nfc_write_oob_std(struct mtd_info *mtd, struct 
nand_chip *chip, 843 833 int page) 844 834 { 845 - int ret; 846 - 847 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); 848 - 849 - ret = mtk_nfc_write_page_raw(mtd, chip, NULL, 1, page); 850 - if (ret < 0) 851 - return -EIO; 852 - 853 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 854 - ret = chip->waitfunc(mtd, chip); 855 - 856 - return ret & NAND_STATUS_FAIL ? -EIO : 0; 835 + return mtk_nfc_write_page_raw(mtd, chip, NULL, 1, page); 857 836 } 858 837 859 838 static int mtk_nfc_update_ecc_stats(struct mtd_info *mtd, u8 *buf, u32 sectors) ··· 891 892 len = sectors * chip->ecc.size + (raw ? sectors * spare : 0); 892 893 buf = bufpoi + start * chip->ecc.size; 893 894 894 - if (column != 0) 895 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, column, -1); 895 + nand_read_page_op(chip, page, column, NULL, 0); 896 896 897 897 addr = dma_map_single(nfc->dev, buf, len, DMA_FROM_DEVICE); 898 898 rc = dma_mapping_error(nfc->dev, addr); ··· 1014 1016 static int mtk_nfc_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, 1015 1017 int page) 1016 1018 { 1017 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 1018 - 1019 1019 return mtk_nfc_read_page_raw(mtd, chip, NULL, 1, page); 1020 1020 } 1021 1021 ··· 1122 1126 { 1123 1127 struct nand_chip *nand = mtd_to_nand(mtd); 1124 1128 struct mtk_nfc_nand_chip *chip = to_mtk_nand(nand); 1129 + struct mtk_nfc *nfc = nand_get_controller_data(nand); 1125 1130 u32 ecc_bytes; 1126 1131 1127 - ecc_bytes = DIV_ROUND_UP(nand->ecc.strength * ECC_PARITY_BITS, 8); 1132 + ecc_bytes = DIV_ROUND_UP(nand->ecc.strength * 1133 + mtk_ecc_get_parity_bits(nfc->ecc), 8); 1128 1134 1129 1135 fdm->reg_size = chip->spare_per_sector - ecc_bytes; 1130 1136 if (fdm->reg_size > NFI_FDM_MAX_SIZE) ··· 1206 1208 * this controller only supports 512 and 1024 sizes 1207 1209 */ 1208 1210 if (nand->ecc.size < 1024) { 1209 - if (mtd->writesize > 512) { 1211 + if (mtd->writesize > 512 && 1212 + nfc->caps->max_sector_size > 512) { 1210 1213 nand->ecc.size = 1024; 1211 1214 
nand->ecc.strength <<= 1; 1212 1215 } else { ··· 1222 1223 return ret; 1223 1224 1224 1225 /* calculate oob bytes except ecc parity data */ 1225 - free = ((nand->ecc.strength * ECC_PARITY_BITS) + 7) >> 3; 1226 + free = (nand->ecc.strength * mtk_ecc_get_parity_bits(nfc->ecc) 1227 + + 7) >> 3; 1226 1228 free = spare - free; 1227 1229 1228 1230 /* ··· 1233 1233 */ 1234 1234 if (free > NFI_FDM_MAX_SIZE) { 1235 1235 spare -= NFI_FDM_MAX_SIZE; 1236 - nand->ecc.strength = (spare << 3) / ECC_PARITY_BITS; 1236 + nand->ecc.strength = (spare << 3) / 1237 + mtk_ecc_get_parity_bits(nfc->ecc); 1237 1238 } else if (free < 0) { 1238 1239 spare -= NFI_FDM_MIN_SIZE; 1239 - nand->ecc.strength = (spare << 3) / ECC_PARITY_BITS; 1240 + nand->ecc.strength = (spare << 3) / 1241 + mtk_ecc_get_parity_bits(nfc->ecc); 1240 1242 } 1241 1243 } 1242 1244 ··· 1391 1389 .num_spare_size = 16, 1392 1390 .pageformat_spare_shift = 4, 1393 1391 .nfi_clk_div = 1, 1392 + .max_sector = 16, 1393 + .max_sector_size = 1024, 1394 1394 }; 1395 1395 1396 1396 static const struct mtk_nfc_caps mtk_nfc_caps_mt2712 = { ··· 1400 1396 .num_spare_size = 19, 1401 1397 .pageformat_spare_shift = 16, 1402 1398 .nfi_clk_div = 2, 1399 + .max_sector = 16, 1400 + .max_sector_size = 1024, 1401 + }; 1402 + 1403 + static const struct mtk_nfc_caps mtk_nfc_caps_mt7622 = { 1404 + .spare_size = spare_size_mt7622, 1405 + .num_spare_size = 4, 1406 + .pageformat_spare_shift = 4, 1407 + .nfi_clk_div = 1, 1408 + .max_sector = 8, 1409 + .max_sector_size = 512, 1403 1410 }; 1404 1411 1405 1412 static const struct of_device_id mtk_nfc_id_table[] = { ··· 1420 1405 }, { 1421 1406 .compatible = "mediatek,mt2712-nfc", 1422 1407 .data = &mtk_nfc_caps_mt2712, 1408 + }, { 1409 + .compatible = "mediatek,mt7622-nfc", 1410 + .data = &mtk_nfc_caps_mt7622, 1423 1411 }, 1424 1412 {} 1425 1413 }; ··· 1558 1540 struct mtk_nfc *nfc = dev_get_drvdata(dev); 1559 1541 struct mtk_nfc_nand_chip *chip; 1560 1542 struct nand_chip *nand; 1561 - struct mtd_info 
*mtd; 1562 1543 int ret; 1563 1544 u32 i; 1564 1545 ··· 1570 1553 /* reset NAND chip if VCC was powered off */ 1571 1554 list_for_each_entry(chip, &nfc->chips, node) { 1572 1555 nand = &chip->nand; 1573 - mtd = nand_to_mtd(nand); 1574 - for (i = 0; i < chip->nsels; i++) { 1575 - nand->select_chip(mtd, i); 1576 - nand->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 1577 - } 1556 + for (i = 0; i < chip->nsels; i++) 1557 + nand_reset(nand, i); 1578 1558 } 1579 1559 1580 1560 return 0;
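The ecc-init hunks above size the ECC strength from the per-sector spare area: parity takes `(strength * parity_bits + 7) >> 3` bytes, and what remains must leave room for the FDM region. A hedged re-implementation of that fitting step, with `fdm_max`/`fdm_min` as parameters standing in for `NFI_FDM_MAX_SIZE`/`NFI_FDM_MIN_SIZE`:

```c
#include <assert.h>

/*
 * Sketch of the strength fitting done in the mtk_nand ecc-init code above.
 * The real driver obtains parity_bits via mtk_ecc_get_parity_bits() and uses
 * NFI_FDM_MAX_SIZE/NFI_FDM_MIN_SIZE where fdm_max/fdm_min appear here.
 */
static unsigned int fit_strength(unsigned int spare, unsigned int strength,
				 unsigned int parity_bits,
				 unsigned int fdm_max, unsigned int fdm_min)
{
	/* OOB bytes left per sector once ECC parity is accounted for */
	int free = (int)spare - (int)((strength * parity_bits + 7) >> 3);

	if (free > (int)fdm_max) {
		/* more free OOB than the FDM can hold: raise the strength */
		spare -= fdm_max;
		strength = (spare << 3) / parity_bits;
	} else if (free < 0) {
		/* parity does not fit: keep minimal FDM, lower the strength */
		spare -= fdm_min;
		strength = (spare << 3) / parity_bits;
	}

	return strength;
}
```

With 14 parity bits and a 32-byte spare, a requested strength of 12 only consumes 21 parity bytes, so the leftover OOB is converted back into extra correction capability; with a 16-byte spare the same request does not fit and is scaled down instead.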
+1913 -297
drivers/mtd/nand/nand_base.c
··· 561 561 static int nand_check_wp(struct mtd_info *mtd) 562 562 { 563 563 struct nand_chip *chip = mtd_to_nand(mtd); 564 + u8 status; 565 + int ret; 564 566 565 567 /* Broken xD cards report WP despite being writable */ 566 568 if (chip->options & NAND_BROKEN_XD) 567 569 return 0; 568 570 569 571 /* Check the WP bit */ 570 - chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1); 571 - return (chip->read_byte(mtd) & NAND_STATUS_WP) ? 0 : 1; 572 + ret = nand_status_op(chip, &status); 573 + if (ret) 574 + return ret; 575 + 576 + return status & NAND_STATUS_WP ? 0 : 1; 572 577 } 573 578 574 579 /** ··· 672 667 static void nand_wait_status_ready(struct mtd_info *mtd, unsigned long timeo) 673 668 { 674 669 register struct nand_chip *chip = mtd_to_nand(mtd); 670 + int ret; 675 671 676 672 timeo = jiffies + msecs_to_jiffies(timeo); 677 673 do { 678 - if ((chip->read_byte(mtd) & NAND_STATUS_READY)) 674 + u8 status; 675 + 676 + ret = nand_read_data_op(chip, &status, sizeof(status), true); 677 + if (ret) 678 + return; 679 + 680 + if (status & NAND_STATUS_READY) 679 681 break; 680 682 touch_softlockup_watchdog(); 681 683 } while (time_before(jiffies, timeo)); 682 684 }; 685 + 686 + /** 687 + * nand_soft_waitrdy - Poll STATUS reg until RDY bit is set to 1 688 + * @chip: NAND chip structure 689 + * @timeout_ms: Timeout in ms 690 + * 691 + * Poll the STATUS register using ->exec_op() until the RDY bit becomes 1. 692 + * If that does not happen within the specified timeout, -ETIMEDOUT is 693 + * returned. 694 + * 695 + * This helper is intended to be used when the controller does not have access 696 + * to the NAND R/B pin. 697 + * 698 + * Be aware that calling this helper from an ->exec_op() implementation means 699 + * ->exec_op() must be re-entrant. 700 + * 701 + * Return 0 if the NAND chip is ready, a negative error otherwise.
702 + */ 703 + int nand_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms) 704 + { 705 + u8 status = 0; 706 + int ret; 707 + 708 + if (!chip->exec_op) 709 + return -ENOTSUPP; 710 + 711 + ret = nand_status_op(chip, NULL); 712 + if (ret) 713 + return ret; 714 + 715 + timeout_ms = jiffies + msecs_to_jiffies(timeout_ms); 716 + do { 717 + ret = nand_read_data_op(chip, &status, sizeof(status), true); 718 + if (ret) 719 + break; 720 + 721 + if (status & NAND_STATUS_READY) 722 + break; 723 + 724 + /* 725 + * Typical lowest execution time for a tR on most NANDs is 10us, 726 + * use this as polling delay before doing something smarter (ie. 727 + * deriving a delay from the timeout value, timeout_ms/ratio). 728 + */ 729 + udelay(10); 730 + } while (time_before(jiffies, timeout_ms)); 731 + 732 + /* 733 + * We have to exit READ_STATUS mode in order to read real data on the 734 + * bus in case the WAITRDY instruction is preceding a DATA_IN 735 + * instruction. 736 + */ 737 + nand_exit_status_op(chip); 738 + 739 + if (ret) 740 + return ret; 741 + 742 + return status & NAND_STATUS_READY ? 0 : -ETIMEDOUT; 743 + }; 744 + EXPORT_SYMBOL_GPL(nand_soft_waitrdy); 683 745 684 746 /** 685 747 * nand_command - [DEFAULT] Send command to NAND device ··· 782 710 chip->cmd_ctrl(mtd, readcmd, ctrl); 783 711 ctrl &= ~NAND_CTRL_CHANGE; 784 712 } 785 - chip->cmd_ctrl(mtd, command, ctrl); 713 + if (command != NAND_CMD_NONE) 714 + chip->cmd_ctrl(mtd, command, ctrl); 786 715 787 716 /* Address cycle, when necessary */ 788 717 ctrl = NAND_CTRL_ALE | NAND_CTRL_CHANGE; ··· 811 738 */ 812 739 switch (command) { 813 740 741 + case NAND_CMD_NONE: 814 742 case NAND_CMD_PAGEPROG: 815 743 case NAND_CMD_ERASE1: 816 744 case NAND_CMD_ERASE2: ··· 876 802 * Wait tCCS_min if it is correctly defined, otherwise wait 500ns 877 803 * (which should be safe for all NANDs). 
878 804 */ 879 - if (chip->data_interface && chip->data_interface->timings.sdr.tCCS_min) 880 - ndelay(chip->data_interface->timings.sdr.tCCS_min / 1000); 805 + if (chip->setup_data_interface) 806 + ndelay(chip->data_interface.timings.sdr.tCCS_min / 1000); 881 807 else 882 808 ndelay(500); 883 809 } ··· 905 831 } 906 832 907 833 /* Command latch cycle */ 908 - chip->cmd_ctrl(mtd, command, NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE); 834 + if (command != NAND_CMD_NONE) 835 + chip->cmd_ctrl(mtd, command, 836 + NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE); 909 837 910 838 if (column != -1 || page_addr != -1) { 911 839 int ctrl = NAND_CTRL_CHANGE | NAND_NCE | NAND_ALE; ··· 942 866 */ 943 867 switch (command) { 944 868 869 + case NAND_CMD_NONE: 945 870 case NAND_CMD_CACHEDPROG: 946 871 case NAND_CMD_PAGEPROG: 947 872 case NAND_CMD_ERASE1: ··· 1091 1014 if (chip->dev_ready(mtd)) 1092 1015 break; 1093 1016 } else { 1094 - if (chip->read_byte(mtd) & NAND_STATUS_READY) 1017 + int ret; 1018 + u8 status; 1019 + 1020 + ret = nand_read_data_op(chip, &status, sizeof(status), 1021 + true); 1022 + if (ret) 1023 + return; 1024 + 1025 + if (status & NAND_STATUS_READY) 1095 1026 break; 1096 1027 } 1097 1028 mdelay(1); ··· 1116 1031 static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip) 1117 1032 { 1118 1033 1119 - int status; 1120 1034 unsigned long timeo = 400; 1035 + u8 status; 1036 + int ret; 1121 1037 1122 1038 /* 1123 1039 * Apply this short delay always to ensure that we do wait tWB in any ··· 1126 1040 */ 1127 1041 ndelay(100); 1128 1042 1129 - chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1); 1043 + ret = nand_status_op(chip, NULL); 1044 + if (ret) 1045 + return ret; 1130 1046 1131 1047 if (in_interrupt() || oops_in_progress) 1132 1048 panic_nand_wait(mtd, chip, timeo); ··· 1139 1051 if (chip->dev_ready(mtd)) 1140 1052 break; 1141 1053 } else { 1142 - if (chip->read_byte(mtd) & NAND_STATUS_READY) 1054 + ret = nand_read_data_op(chip, &status, 1055 + sizeof(status), true); 1056 + 
if (ret) 1057 + return ret; 1058 + 1059 + if (status & NAND_STATUS_READY) 1143 1060 break; 1144 1061 } 1145 1062 cond_resched(); 1146 1063 } while (time_before(jiffies, timeo)); 1147 1064 } 1148 1065 1149 - status = (int)chip->read_byte(mtd); 1066 + ret = nand_read_data_op(chip, &status, sizeof(status), true); 1067 + if (ret) 1068 + return ret; 1069 + 1150 1070 /* This can happen in case of timeout or buggy dev_ready */ 1151 1071 WARN_ON(!(status & NAND_STATUS_READY)); 1152 1072 return status; ··· 1172 1076 static int nand_reset_data_interface(struct nand_chip *chip, int chipnr) 1173 1077 { 1174 1078 struct mtd_info *mtd = nand_to_mtd(chip); 1175 - const struct nand_data_interface *conf; 1176 1079 int ret; 1177 1080 1178 1081 if (!chip->setup_data_interface) ··· 1191 1096 * timings to timing mode 0. 1192 1097 */ 1193 1098 1194 - conf = nand_get_default_data_interface(); 1195 - ret = chip->setup_data_interface(mtd, chipnr, conf); 1099 + onfi_fill_data_interface(chip, NAND_SDR_IFACE, 0); 1100 + ret = chip->setup_data_interface(mtd, chipnr, &chip->data_interface); 1196 1101 if (ret) 1197 1102 pr_err("Failed to configure data interface to SDR timing mode 0\n"); 1198 1103 ··· 1217 1122 struct mtd_info *mtd = nand_to_mtd(chip); 1218 1123 int ret; 1219 1124 1220 - if (!chip->setup_data_interface || !chip->data_interface) 1125 + if (!chip->setup_data_interface) 1221 1126 return 0; 1222 1127 1223 1128 /* ··· 1238 1143 goto err; 1239 1144 } 1240 1145 1241 - ret = chip->setup_data_interface(mtd, chipnr, chip->data_interface); 1146 + ret = chip->setup_data_interface(mtd, chipnr, &chip->data_interface); 1242 1147 err: 1243 1148 return ret; 1244 1149 } ··· 1278 1183 modes = GENMASK(chip->onfi_timing_mode_default, 0); 1279 1184 } 1280 1185 1281 - chip->data_interface = kzalloc(sizeof(*chip->data_interface), 1282 - GFP_KERNEL); 1283 - if (!chip->data_interface) 1284 - return -ENOMEM; 1285 1186 1286 1187 for (mode = fls(modes) - 1; mode >= 0; mode--) { 1287 - ret =
onfi_init_data_interface(chip, chip->data_interface, 1288 - NAND_SDR_IFACE, mode); 1188 + ret = onfi_fill_data_interface(chip, NAND_SDR_IFACE, mode); 1289 1189 if (ret) 1290 1190 continue; 1291 1191 1292 - /* Pass -1 to only */ 1192 + /* 1193 + * Pass NAND_DATA_IFACE_CHECK_ONLY to only check if the 1194 + * controller supports the requested timings. 1195 + */ 1293 1196 ret = chip->setup_data_interface(mtd, 1294 1197 NAND_DATA_IFACE_CHECK_ONLY, 1295 - chip->data_interface); 1198 + &chip->data_interface); 1296 1199 if (!ret) { 1297 1200 chip->onfi_timing_mode_default = mode; 1298 1201 break; ··· 1300 1207 return 0; 1301 1208 } 1302 1209 1303 - static void nand_release_data_interface(struct nand_chip *chip) 1210 + /** 1211 + * nand_fill_column_cycles - fill the column cycles of an address 1212 + * @chip: The NAND chip 1213 + * @addrs: Array of address cycles to fill 1214 + * @offset_in_page: The offset in the page 1215 + * 1216 + * Fills the first or the first two bytes of the @addrs field depending 1217 + * on the NAND bus width and the page size. 1218 + * 1219 + * Returns the number of cycles needed to encode the column, or a negative 1220 + * error code in case one of the arguments is invalid. 1221 + */ 1222 + static int nand_fill_column_cycles(struct nand_chip *chip, u8 *addrs, 1223 + unsigned int offset_in_page) 1304 1224 { 1305 - kfree(chip->data_interface); 1225 + struct mtd_info *mtd = nand_to_mtd(chip); 1226 + 1227 + /* Make sure the offset is less than the actual page size. */ 1228 + if (offset_in_page > mtd->writesize + mtd->oobsize) 1229 + return -EINVAL; 1230 + 1231 + /* 1232 + * On small page NANDs, there's a dedicated command to access the OOB 1233 + * area, and the column address is relative to the start of the OOB 1234 + * area, not the start of the page. Adjust the address accordingly.
1235 + */ 1236 + if (mtd->writesize <= 512 && offset_in_page >= mtd->writesize) 1237 + offset_in_page -= mtd->writesize; 1238 + 1239 + /* 1240 + * The offset in page is expressed in bytes, if the NAND bus is 16-bit 1241 + * wide, then it must be divided by 2. 1242 + */ 1243 + if (chip->options & NAND_BUSWIDTH_16) { 1244 + if (WARN_ON(offset_in_page % 2)) 1245 + return -EINVAL; 1246 + 1247 + offset_in_page /= 2; 1248 + } 1249 + 1250 + addrs[0] = offset_in_page; 1251 + 1252 + /* 1253 + * Small page NANDs use 1 cycle for the columns, while large page NANDs 1254 + * need 2 1255 + */ 1256 + if (mtd->writesize <= 512) 1257 + return 1; 1258 + 1259 + addrs[1] = offset_in_page >> 8; 1260 + 1261 + return 2; 1306 1262 } 1263 + 1264 + static int nand_sp_exec_read_page_op(struct nand_chip *chip, unsigned int page, 1265 + unsigned int offset_in_page, void *buf, 1266 + unsigned int len) 1267 + { 1268 + struct mtd_info *mtd = nand_to_mtd(chip); 1269 + const struct nand_sdr_timings *sdr = 1270 + nand_get_sdr_timings(&chip->data_interface); 1271 + u8 addrs[4]; 1272 + struct nand_op_instr instrs[] = { 1273 + NAND_OP_CMD(NAND_CMD_READ0, 0), 1274 + NAND_OP_ADDR(3, addrs, PSEC_TO_NSEC(sdr->tWB_max)), 1275 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tR_max), 1276 + PSEC_TO_NSEC(sdr->tRR_min)), 1277 + NAND_OP_DATA_IN(len, buf, 0), 1278 + }; 1279 + struct nand_operation op = NAND_OPERATION(instrs); 1280 + int ret; 1281 + 1282 + /* Drop the DATA_IN instruction if len is set to 0. 
*/ 1283 + if (!len) 1284 + op.ninstrs--; 1285 + 1286 + if (offset_in_page >= mtd->writesize) 1287 + instrs[0].ctx.cmd.opcode = NAND_CMD_READOOB; 1288 + else if (offset_in_page >= 256 && 1289 + !(chip->options & NAND_BUSWIDTH_16)) 1290 + instrs[0].ctx.cmd.opcode = NAND_CMD_READ1; 1291 + 1292 + ret = nand_fill_column_cycles(chip, addrs, offset_in_page); 1293 + if (ret < 0) 1294 + return ret; 1295 + 1296 + addrs[1] = page; 1297 + addrs[2] = page >> 8; 1298 + 1299 + if (chip->options & NAND_ROW_ADDR_3) { 1300 + addrs[3] = page >> 16; 1301 + instrs[1].ctx.addr.naddrs++; 1302 + } 1303 + 1304 + return nand_exec_op(chip, &op); 1305 + } 1306 + 1307 + static int nand_lp_exec_read_page_op(struct nand_chip *chip, unsigned int page, 1308 + unsigned int offset_in_page, void *buf, 1309 + unsigned int len) 1310 + { 1311 + const struct nand_sdr_timings *sdr = 1312 + nand_get_sdr_timings(&chip->data_interface); 1313 + u8 addrs[5]; 1314 + struct nand_op_instr instrs[] = { 1315 + NAND_OP_CMD(NAND_CMD_READ0, 0), 1316 + NAND_OP_ADDR(4, addrs, 0), 1317 + NAND_OP_CMD(NAND_CMD_READSTART, PSEC_TO_NSEC(sdr->tWB_max)), 1318 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tR_max), 1319 + PSEC_TO_NSEC(sdr->tRR_min)), 1320 + NAND_OP_DATA_IN(len, buf, 0), 1321 + }; 1322 + struct nand_operation op = NAND_OPERATION(instrs); 1323 + int ret; 1324 + 1325 + /* Drop the DATA_IN instruction if len is set to 0. 
*/ 1326 + if (!len) 1327 + op.ninstrs--; 1328 + 1329 + ret = nand_fill_column_cycles(chip, addrs, offset_in_page); 1330 + if (ret < 0) 1331 + return ret; 1332 + 1333 + addrs[2] = page; 1334 + addrs[3] = page >> 8; 1335 + 1336 + if (chip->options & NAND_ROW_ADDR_3) { 1337 + addrs[4] = page >> 16; 1338 + instrs[1].ctx.addr.naddrs++; 1339 + } 1340 + 1341 + return nand_exec_op(chip, &op); 1342 + } 1343 + 1344 + /** 1345 + * nand_read_page_op - Do a READ PAGE operation 1346 + * @chip: The NAND chip 1347 + * @page: page to read 1348 + * @offset_in_page: offset within the page 1349 + * @buf: buffer used to store the data 1350 + * @len: length of the buffer 1351 + * 1352 + * This function issues a READ PAGE operation. 1353 + * This function does not select/unselect the CS line. 1354 + * 1355 + * Returns 0 on success, a negative error code otherwise. 1356 + */ 1357 + int nand_read_page_op(struct nand_chip *chip, unsigned int page, 1358 + unsigned int offset_in_page, void *buf, unsigned int len) 1359 + { 1360 + struct mtd_info *mtd = nand_to_mtd(chip); 1361 + 1362 + if (len && !buf) 1363 + return -EINVAL; 1364 + 1365 + if (offset_in_page + len > mtd->writesize + mtd->oobsize) 1366 + return -EINVAL; 1367 + 1368 + if (chip->exec_op) { 1369 + if (mtd->writesize > 512) 1370 + return nand_lp_exec_read_page_op(chip, page, 1371 + offset_in_page, buf, 1372 + len); 1373 + 1374 + return nand_sp_exec_read_page_op(chip, page, offset_in_page, 1375 + buf, len); 1376 + } 1377 + 1378 + chip->cmdfunc(mtd, NAND_CMD_READ0, offset_in_page, page); 1379 + if (len) 1380 + chip->read_buf(mtd, buf, len); 1381 + 1382 + return 0; 1383 + } 1384 + EXPORT_SYMBOL_GPL(nand_read_page_op); 1385 + 1386 + /** 1387 + * nand_read_param_page_op - Do a READ PARAMETER PAGE operation 1388 + * @chip: The NAND chip 1389 + * @page: parameter page to read 1390 + * @buf: buffer used to store the data 1391 + * @len: length of the buffer 1392 + * 1393 + * This function issues a READ PARAMETER PAGE operation. 
1394 + * This function does not select/unselect the CS line. 1395 + * 1396 + * Returns 0 on success, a negative error code otherwise. 1397 + */ 1398 + static int nand_read_param_page_op(struct nand_chip *chip, u8 page, void *buf, 1399 + unsigned int len) 1400 + { 1401 + struct mtd_info *mtd = nand_to_mtd(chip); 1402 + unsigned int i; 1403 + u8 *p = buf; 1404 + 1405 + if (len && !buf) 1406 + return -EINVAL; 1407 + 1408 + if (chip->exec_op) { 1409 + const struct nand_sdr_timings *sdr = 1410 + nand_get_sdr_timings(&chip->data_interface); 1411 + struct nand_op_instr instrs[] = { 1412 + NAND_OP_CMD(NAND_CMD_PARAM, 0), 1413 + NAND_OP_ADDR(1, &page, PSEC_TO_NSEC(sdr->tWB_max)), 1414 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tR_max), 1415 + PSEC_TO_NSEC(sdr->tRR_min)), 1416 + NAND_OP_8BIT_DATA_IN(len, buf, 0), 1417 + }; 1418 + struct nand_operation op = NAND_OPERATION(instrs); 1419 + 1420 + /* Drop the DATA_IN instruction if len is set to 0. */ 1421 + if (!len) 1422 + op.ninstrs--; 1423 + 1424 + return nand_exec_op(chip, &op); 1425 + } 1426 + 1427 + chip->cmdfunc(mtd, NAND_CMD_PARAM, page, -1); 1428 + for (i = 0; i < len; i++) 1429 + p[i] = chip->read_byte(mtd); 1430 + 1431 + return 0; 1432 + } 1433 + 1434 + /** 1435 + * nand_change_read_column_op - Do a CHANGE READ COLUMN operation 1436 + * @chip: The NAND chip 1437 + * @offset_in_page: offset within the page 1438 + * @buf: buffer used to store the data 1439 + * @len: length of the buffer 1440 + * @force_8bit: force 8-bit bus access 1441 + * 1442 + * This function issues a CHANGE READ COLUMN operation. 1443 + * This function does not select/unselect the CS line. 1444 + * 1445 + * Returns 0 on success, a negative error code otherwise. 
1446 + */ 1447 + int nand_change_read_column_op(struct nand_chip *chip, 1448 + unsigned int offset_in_page, void *buf, 1449 + unsigned int len, bool force_8bit) 1450 + { 1451 + struct mtd_info *mtd = nand_to_mtd(chip); 1452 + 1453 + if (len && !buf) 1454 + return -EINVAL; 1455 + 1456 + if (offset_in_page + len > mtd->writesize + mtd->oobsize) 1457 + return -EINVAL; 1458 + 1459 + /* Small page NANDs do not support column change. */ 1460 + if (mtd->writesize <= 512) 1461 + return -ENOTSUPP; 1462 + 1463 + if (chip->exec_op) { 1464 + const struct nand_sdr_timings *sdr = 1465 + nand_get_sdr_timings(&chip->data_interface); 1466 + u8 addrs[2] = {}; 1467 + struct nand_op_instr instrs[] = { 1468 + NAND_OP_CMD(NAND_CMD_RNDOUT, 0), 1469 + NAND_OP_ADDR(2, addrs, 0), 1470 + NAND_OP_CMD(NAND_CMD_RNDOUTSTART, 1471 + PSEC_TO_NSEC(sdr->tCCS_min)), 1472 + NAND_OP_DATA_IN(len, buf, 0), 1473 + }; 1474 + struct nand_operation op = NAND_OPERATION(instrs); 1475 + int ret; 1476 + 1477 + ret = nand_fill_column_cycles(chip, addrs, offset_in_page); 1478 + if (ret < 0) 1479 + return ret; 1480 + 1481 + /* Drop the DATA_IN instruction if len is set to 0. */ 1482 + if (!len) 1483 + op.ninstrs--; 1484 + 1485 + instrs[3].ctx.data.force_8bit = force_8bit; 1486 + 1487 + return nand_exec_op(chip, &op); 1488 + } 1489 + 1490 + chip->cmdfunc(mtd, NAND_CMD_RNDOUT, offset_in_page, -1); 1491 + if (len) 1492 + chip->read_buf(mtd, buf, len); 1493 + 1494 + return 0; 1495 + } 1496 + EXPORT_SYMBOL_GPL(nand_change_read_column_op); 1497 + 1498 + /** 1499 + * nand_read_oob_op - Do a READ OOB operation 1500 + * @chip: The NAND chip 1501 + * @page: page to read 1502 + * @offset_in_oob: offset within the OOB area 1503 + * @buf: buffer used to store the data 1504 + * @len: length of the buffer 1505 + * 1506 + * This function issues a READ OOB operation. 1507 + * This function does not select/unselect the CS line. 1508 + * 1509 + * Returns 0 on success, a negative error code otherwise. 
1510 + */ 1511 + int nand_read_oob_op(struct nand_chip *chip, unsigned int page, 1512 + unsigned int offset_in_oob, void *buf, unsigned int len) 1513 + { 1514 + struct mtd_info *mtd = nand_to_mtd(chip); 1515 + 1516 + if (len && !buf) 1517 + return -EINVAL; 1518 + 1519 + if (offset_in_oob + len > mtd->oobsize) 1520 + return -EINVAL; 1521 + 1522 + if (chip->exec_op) 1523 + return nand_read_page_op(chip, page, 1524 + mtd->writesize + offset_in_oob, 1525 + buf, len); 1526 + 1527 + chip->cmdfunc(mtd, NAND_CMD_READOOB, offset_in_oob, page); 1528 + if (len) 1529 + chip->read_buf(mtd, buf, len); 1530 + 1531 + return 0; 1532 + } 1533 + EXPORT_SYMBOL_GPL(nand_read_oob_op); 1534 + 1535 + static int nand_exec_prog_page_op(struct nand_chip *chip, unsigned int page, 1536 + unsigned int offset_in_page, const void *buf, 1537 + unsigned int len, bool prog) 1538 + { 1539 + struct mtd_info *mtd = nand_to_mtd(chip); 1540 + const struct nand_sdr_timings *sdr = 1541 + nand_get_sdr_timings(&chip->data_interface); 1542 + u8 addrs[5] = {}; 1543 + struct nand_op_instr instrs[] = { 1544 + /* 1545 + * The first instruction will be dropped if we're dealing 1546 + * with a large page NAND and adjusted if we're dealing 1547 + * with a small page NAND and the page offset is > 255. 
1548 + */ 1549 + NAND_OP_CMD(NAND_CMD_READ0, 0), 1550 + NAND_OP_CMD(NAND_CMD_SEQIN, 0), 1551 + NAND_OP_ADDR(0, addrs, PSEC_TO_NSEC(sdr->tADL_min)), 1552 + NAND_OP_DATA_OUT(len, buf, 0), 1553 + NAND_OP_CMD(NAND_CMD_PAGEPROG, PSEC_TO_NSEC(sdr->tWB_max)), 1554 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tPROG_max), 0), 1555 + }; 1556 + struct nand_operation op = NAND_OPERATION(instrs); 1557 + int naddrs = nand_fill_column_cycles(chip, addrs, offset_in_page); 1558 + int ret; 1559 + u8 status; 1560 + 1561 + if (naddrs < 0) 1562 + return naddrs; 1563 + 1564 + addrs[naddrs++] = page; 1565 + addrs[naddrs++] = page >> 8; 1566 + if (chip->options & NAND_ROW_ADDR_3) 1567 + addrs[naddrs++] = page >> 16; 1568 + 1569 + instrs[2].ctx.addr.naddrs = naddrs; 1570 + 1571 + /* Drop the last two instructions if we're not programming the page. */ 1572 + if (!prog) { 1573 + op.ninstrs -= 2; 1574 + /* Also drop the DATA_OUT instruction if empty. */ 1575 + if (!len) 1576 + op.ninstrs--; 1577 + } 1578 + 1579 + if (mtd->writesize <= 512) { 1580 + /* 1581 + * Small pages need some more tweaking: we have to adjust the 1582 + * first instruction depending on the page offset we're trying 1583 + * to access. 1584 + */ 1585 + if (offset_in_page >= mtd->writesize) 1586 + instrs[0].ctx.cmd.opcode = NAND_CMD_READOOB; 1587 + else if (offset_in_page >= 256 && 1588 + !(chip->options & NAND_BUSWIDTH_16)) 1589 + instrs[0].ctx.cmd.opcode = NAND_CMD_READ1; 1590 + } else { 1591 + /* 1592 + * Drop the first command if we're dealing with a large page 1593 + * NAND. 
1594 + */ 1595 + op.instrs++; 1596 + op.ninstrs--; 1597 + } 1598 + 1599 + ret = nand_exec_op(chip, &op); 1600 + if (!prog || ret) 1601 + return ret; 1602 + 1603 + ret = nand_status_op(chip, &status); 1604 + if (ret) 1605 + return ret; 1606 + 1607 + return status; 1608 + } 1609 + 1610 + /** 1611 + * nand_prog_page_begin_op - starts a PROG PAGE operation 1612 + * @chip: The NAND chip 1613 + * @page: page to write 1614 + * @offset_in_page: offset within the page 1615 + * @buf: buffer containing the data to write to the page 1616 + * @len: length of the buffer 1617 + * 1618 + * This function issues the first half of a PROG PAGE operation. 1619 + * This function does not select/unselect the CS line. 1620 + * 1621 + * Returns 0 on success, a negative error code otherwise. 1622 + */ 1623 + int nand_prog_page_begin_op(struct nand_chip *chip, unsigned int page, 1624 + unsigned int offset_in_page, const void *buf, 1625 + unsigned int len) 1626 + { 1627 + struct mtd_info *mtd = nand_to_mtd(chip); 1628 + 1629 + if (len && !buf) 1630 + return -EINVAL; 1631 + 1632 + if (offset_in_page + len > mtd->writesize + mtd->oobsize) 1633 + return -EINVAL; 1634 + 1635 + if (chip->exec_op) 1636 + return nand_exec_prog_page_op(chip, page, offset_in_page, buf, 1637 + len, false); 1638 + 1639 + chip->cmdfunc(mtd, NAND_CMD_SEQIN, offset_in_page, page); 1640 + 1641 + if (buf) 1642 + chip->write_buf(mtd, buf, len); 1643 + 1644 + return 0; 1645 + } 1646 + EXPORT_SYMBOL_GPL(nand_prog_page_begin_op); 1647 + 1648 + /** 1649 + * nand_prog_page_end_op - ends a PROG PAGE operation 1650 + * @chip: The NAND chip 1651 + * 1652 + * This function issues the second half of a PROG PAGE operation. 1653 + * This function does not select/unselect the CS line. 1654 + * 1655 + * Returns 0 on success, a negative error code otherwise. 
1656 + */ 1657 + int nand_prog_page_end_op(struct nand_chip *chip) 1658 + { 1659 + struct mtd_info *mtd = nand_to_mtd(chip); 1660 + int ret; 1661 + u8 status; 1662 + 1663 + if (chip->exec_op) { 1664 + const struct nand_sdr_timings *sdr = 1665 + nand_get_sdr_timings(&chip->data_interface); 1666 + struct nand_op_instr instrs[] = { 1667 + NAND_OP_CMD(NAND_CMD_PAGEPROG, 1668 + PSEC_TO_NSEC(sdr->tWB_max)), 1669 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tPROG_max), 0), 1670 + }; 1671 + struct nand_operation op = NAND_OPERATION(instrs); 1672 + 1673 + ret = nand_exec_op(chip, &op); 1674 + if (ret) 1675 + return ret; 1676 + 1677 + ret = nand_status_op(chip, &status); 1678 + if (ret) 1679 + return ret; 1680 + } else { 1681 + chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 1682 + ret = chip->waitfunc(mtd, chip); 1683 + if (ret < 0) 1684 + return ret; 1685 + 1686 + status = ret; 1687 + } 1688 + 1689 + if (status & NAND_STATUS_FAIL) 1690 + return -EIO; 1691 + 1692 + return 0; 1693 + } 1694 + EXPORT_SYMBOL_GPL(nand_prog_page_end_op); 1695 + 1696 + /** 1697 + * nand_prog_page_op - Do a full PROG PAGE operation 1698 + * @chip: The NAND chip 1699 + * @page: page to write 1700 + * @offset_in_page: offset within the page 1701 + * @buf: buffer containing the data to write to the page 1702 + * @len: length of the buffer 1703 + * 1704 + * This function issues a full PROG PAGE operation. 1705 + * This function does not select/unselect the CS line. 1706 + * 1707 + * Returns 0 on success, a negative error code otherwise. 
1708 + */ 1709 + int nand_prog_page_op(struct nand_chip *chip, unsigned int page, 1710 + unsigned int offset_in_page, const void *buf, 1711 + unsigned int len) 1712 + { 1713 + struct mtd_info *mtd = nand_to_mtd(chip); 1714 + int status; 1715 + 1716 + if (!len || !buf) 1717 + return -EINVAL; 1718 + 1719 + if (offset_in_page + len > mtd->writesize + mtd->oobsize) 1720 + return -EINVAL; 1721 + 1722 + if (chip->exec_op) { 1723 + status = nand_exec_prog_page_op(chip, page, offset_in_page, buf, 1724 + len, true); 1725 + } else { 1726 + chip->cmdfunc(mtd, NAND_CMD_SEQIN, offset_in_page, page); 1727 + chip->write_buf(mtd, buf, len); 1728 + chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 1729 + status = chip->waitfunc(mtd, chip); 1730 + } 1731 + 1732 + if (status & NAND_STATUS_FAIL) 1733 + return -EIO; 1734 + 1735 + return 0; 1736 + } 1737 + EXPORT_SYMBOL_GPL(nand_prog_page_op); 1738 + 1739 + /** 1740 + * nand_change_write_column_op - Do a CHANGE WRITE COLUMN operation 1741 + * @chip: The NAND chip 1742 + * @offset_in_page: offset within the page 1743 + * @buf: buffer containing the data to send to the NAND 1744 + * @len: length of the buffer 1745 + * @force_8bit: force 8-bit bus access 1746 + * 1747 + * This function issues a CHANGE WRITE COLUMN operation. 1748 + * This function does not select/unselect the CS line. 1749 + * 1750 + * Returns 0 on success, a negative error code otherwise. 1751 + */ 1752 + int nand_change_write_column_op(struct nand_chip *chip, 1753 + unsigned int offset_in_page, 1754 + const void *buf, unsigned int len, 1755 + bool force_8bit) 1756 + { 1757 + struct mtd_info *mtd = nand_to_mtd(chip); 1758 + 1759 + if (len && !buf) 1760 + return -EINVAL; 1761 + 1762 + if (offset_in_page + len > mtd->writesize + mtd->oobsize) 1763 + return -EINVAL; 1764 + 1765 + /* Small page NANDs do not support column change. 
*/ 1766 + if (mtd->writesize <= 512) 1767 + return -ENOTSUPP; 1768 + 1769 + if (chip->exec_op) { 1770 + const struct nand_sdr_timings *sdr = 1771 + nand_get_sdr_timings(&chip->data_interface); 1772 + u8 addrs[2]; 1773 + struct nand_op_instr instrs[] = { 1774 + NAND_OP_CMD(NAND_CMD_RNDIN, 0), 1775 + NAND_OP_ADDR(2, addrs, PSEC_TO_NSEC(sdr->tCCS_min)), 1776 + NAND_OP_DATA_OUT(len, buf, 0), 1777 + }; 1778 + struct nand_operation op = NAND_OPERATION(instrs); 1779 + int ret; 1780 + 1781 + ret = nand_fill_column_cycles(chip, addrs, offset_in_page); 1782 + if (ret < 0) 1783 + return ret; 1784 + 1785 + instrs[2].ctx.data.force_8bit = force_8bit; 1786 + 1787 + /* Drop the DATA_OUT instruction if len is set to 0. */ 1788 + if (!len) 1789 + op.ninstrs--; 1790 + 1791 + return nand_exec_op(chip, &op); 1792 + } 1793 + 1794 + chip->cmdfunc(mtd, NAND_CMD_RNDIN, offset_in_page, -1); 1795 + if (len) 1796 + chip->write_buf(mtd, buf, len); 1797 + 1798 + return 0; 1799 + } 1800 + EXPORT_SYMBOL_GPL(nand_change_write_column_op); 1801 + 1802 + /** 1803 + * nand_readid_op - Do a READID operation 1804 + * @chip: The NAND chip 1805 + * @addr: address cycle to pass after the READID command 1806 + * @buf: buffer used to store the ID 1807 + * @len: length of the buffer 1808 + * 1809 + * This function sends a READID command and reads back the ID returned by the 1810 + * NAND. 1811 + * This function does not select/unselect the CS line. 1812 + * 1813 + * Returns 0 on success, a negative error code otherwise. 
1814 + */ 1815 + int nand_readid_op(struct nand_chip *chip, u8 addr, void *buf, 1816 + unsigned int len) 1817 + { 1818 + struct mtd_info *mtd = nand_to_mtd(chip); 1819 + unsigned int i; 1820 + u8 *id = buf; 1821 + 1822 + if (len && !buf) 1823 + return -EINVAL; 1824 + 1825 + if (chip->exec_op) { 1826 + const struct nand_sdr_timings *sdr = 1827 + nand_get_sdr_timings(&chip->data_interface); 1828 + struct nand_op_instr instrs[] = { 1829 + NAND_OP_CMD(NAND_CMD_READID, 0), 1830 + NAND_OP_ADDR(1, &addr, PSEC_TO_NSEC(sdr->tADL_min)), 1831 + NAND_OP_8BIT_DATA_IN(len, buf, 0), 1832 + }; 1833 + struct nand_operation op = NAND_OPERATION(instrs); 1834 + 1835 + /* Drop the DATA_IN instruction if len is set to 0. */ 1836 + if (!len) 1837 + op.ninstrs--; 1838 + 1839 + return nand_exec_op(chip, &op); 1840 + } 1841 + 1842 + chip->cmdfunc(mtd, NAND_CMD_READID, addr, -1); 1843 + 1844 + for (i = 0; i < len; i++) 1845 + id[i] = chip->read_byte(mtd); 1846 + 1847 + return 0; 1848 + } 1849 + EXPORT_SYMBOL_GPL(nand_readid_op); 1850 + 1851 + /** 1852 + * nand_status_op - Do a STATUS operation 1853 + * @chip: The NAND chip 1854 + * @status: out variable to store the NAND status 1855 + * 1856 + * This function sends a STATUS command and reads back the status returned by 1857 + * the NAND. 1858 + * This function does not select/unselect the CS line. 1859 + * 1860 + * Returns 0 on success, a negative error code otherwise. 
1861 + */ 1862 + int nand_status_op(struct nand_chip *chip, u8 *status) 1863 + { 1864 + struct mtd_info *mtd = nand_to_mtd(chip); 1865 + 1866 + if (chip->exec_op) { 1867 + const struct nand_sdr_timings *sdr = 1868 + nand_get_sdr_timings(&chip->data_interface); 1869 + struct nand_op_instr instrs[] = { 1870 + NAND_OP_CMD(NAND_CMD_STATUS, 1871 + PSEC_TO_NSEC(sdr->tADL_min)), 1872 + NAND_OP_8BIT_DATA_IN(1, status, 0), 1873 + }; 1874 + struct nand_operation op = NAND_OPERATION(instrs); 1875 + 1876 + if (!status) 1877 + op.ninstrs--; 1878 + 1879 + return nand_exec_op(chip, &op); 1880 + } 1881 + 1882 + chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1); 1883 + if (status) 1884 + *status = chip->read_byte(mtd); 1885 + 1886 + return 0; 1887 + } 1888 + EXPORT_SYMBOL_GPL(nand_status_op); 1889 + 1890 + /** 1891 + * nand_exit_status_op - Exit a STATUS operation 1892 + * @chip: The NAND chip 1893 + * 1894 + * This function sends a READ0 command to cancel the effect of the STATUS 1895 + * command to avoid reading only the status until a new read command is sent. 1896 + * 1897 + * This function does not select/unselect the CS line. 1898 + * 1899 + * Returns 0 on success, a negative error code otherwise. 1900 + */ 1901 + int nand_exit_status_op(struct nand_chip *chip) 1902 + { 1903 + struct mtd_info *mtd = nand_to_mtd(chip); 1904 + 1905 + if (chip->exec_op) { 1906 + struct nand_op_instr instrs[] = { 1907 + NAND_OP_CMD(NAND_CMD_READ0, 0), 1908 + }; 1909 + struct nand_operation op = NAND_OPERATION(instrs); 1910 + 1911 + return nand_exec_op(chip, &op); 1912 + } 1913 + 1914 + chip->cmdfunc(mtd, NAND_CMD_READ0, -1, -1); 1915 + 1916 + return 0; 1917 + } 1918 + EXPORT_SYMBOL_GPL(nand_exit_status_op); 1919 + 1920 + /** 1921 + * nand_erase_op - Do an erase operation 1922 + * @chip: The NAND chip 1923 + * @eraseblock: block to erase 1924 + * 1925 + * This function sends an ERASE command and waits for the NAND to be ready 1926 + * before returning. 
1927 + * This function does not select/unselect the CS line. 1928 + * 1929 + * Returns 0 on success, a negative error code otherwise. 1930 + */ 1931 + int nand_erase_op(struct nand_chip *chip, unsigned int eraseblock) 1932 + { 1933 + struct mtd_info *mtd = nand_to_mtd(chip); 1934 + unsigned int page = eraseblock << 1935 + (chip->phys_erase_shift - chip->page_shift); 1936 + int ret; 1937 + u8 status; 1938 + 1939 + if (chip->exec_op) { 1940 + const struct nand_sdr_timings *sdr = 1941 + nand_get_sdr_timings(&chip->data_interface); 1942 + u8 addrs[3] = { page, page >> 8, page >> 16 }; 1943 + struct nand_op_instr instrs[] = { 1944 + NAND_OP_CMD(NAND_CMD_ERASE1, 0), 1945 + NAND_OP_ADDR(2, addrs, 0), 1946 + NAND_OP_CMD(NAND_CMD_ERASE2, 1947 + PSEC_TO_NSEC(sdr->tWB_max)), 1948 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tBERS_max), 0), 1949 + }; 1950 + struct nand_operation op = NAND_OPERATION(instrs); 1951 + 1952 + if (chip->options & NAND_ROW_ADDR_3) 1953 + instrs[1].ctx.addr.naddrs++; 1954 + 1955 + ret = nand_exec_op(chip, &op); 1956 + if (ret) 1957 + return ret; 1958 + 1959 + ret = nand_status_op(chip, &status); 1960 + if (ret) 1961 + return ret; 1962 + } else { 1963 + chip->cmdfunc(mtd, NAND_CMD_ERASE1, -1, page); 1964 + chip->cmdfunc(mtd, NAND_CMD_ERASE2, -1, -1); 1965 + 1966 + ret = chip->waitfunc(mtd, chip); 1967 + if (ret < 0) 1968 + return ret; 1969 + 1970 + status = ret; 1971 + } 1972 + 1973 + if (status & NAND_STATUS_FAIL) 1974 + return -EIO; 1975 + 1976 + return 0; 1977 + } 1978 + EXPORT_SYMBOL_GPL(nand_erase_op); 1979 + 1980 + /** 1981 + * nand_set_features_op - Do a SET FEATURES operation 1982 + * @chip: The NAND chip 1983 + * @feature: feature id 1984 + * @data: 4 bytes of data 1985 + * 1986 + * This function sends a SET FEATURES command and waits for the NAND to be 1987 + * ready before returning. 1988 + * This function does not select/unselect the CS line. 1989 + * 1990 + * Returns 0 on success, a negative error code otherwise.
1991 + */ 1992 + static int nand_set_features_op(struct nand_chip *chip, u8 feature, 1993 + const void *data) 1994 + { 1995 + struct mtd_info *mtd = nand_to_mtd(chip); 1996 + const u8 *params = data; 1997 + int i, ret; 1998 + u8 status; 1999 + 2000 + if (chip->exec_op) { 2001 + const struct nand_sdr_timings *sdr = 2002 + nand_get_sdr_timings(&chip->data_interface); 2003 + struct nand_op_instr instrs[] = { 2004 + NAND_OP_CMD(NAND_CMD_SET_FEATURES, 0), 2005 + NAND_OP_ADDR(1, &feature, PSEC_TO_NSEC(sdr->tADL_min)), 2006 + NAND_OP_8BIT_DATA_OUT(ONFI_SUBFEATURE_PARAM_LEN, data, 2007 + PSEC_TO_NSEC(sdr->tWB_max)), 2008 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tFEAT_max), 0), 2009 + }; 2010 + struct nand_operation op = NAND_OPERATION(instrs); 2011 + 2012 + ret = nand_exec_op(chip, &op); 2013 + if (ret) 2014 + return ret; 2015 + 2016 + ret = nand_status_op(chip, &status); 2017 + if (ret) 2018 + return ret; 2019 + } else { 2020 + chip->cmdfunc(mtd, NAND_CMD_SET_FEATURES, feature, -1); 2021 + for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i) 2022 + chip->write_byte(mtd, params[i]); 2023 + 2024 + ret = chip->waitfunc(mtd, chip); 2025 + if (ret < 0) 2026 + return ret; 2027 + 2028 + status = ret; 2029 + } 2030 + 2031 + if (status & NAND_STATUS_FAIL) 2032 + return -EIO; 2033 + 2034 + return 0; 2035 + } 2036 + 2037 + /** 2038 + * nand_get_features_op - Do a GET FEATURES operation 2039 + * @chip: The NAND chip 2040 + * @feature: feature id 2041 + * @data: 4 bytes of data 2042 + * 2043 + * This function sends a GET FEATURES command and waits for the NAND to be 2044 + * ready before returning. 2045 + * This function does not select/unselect the CS line. 2046 + * 2047 + * Returns 0 on success, a negative error code otherwise. 
2048 + */ 2049 + static int nand_get_features_op(struct nand_chip *chip, u8 feature, 2050 + void *data) 2051 + { 2052 + struct mtd_info *mtd = nand_to_mtd(chip); 2053 + u8 *params = data; 2054 + int i; 2055 + 2056 + if (chip->exec_op) { 2057 + const struct nand_sdr_timings *sdr = 2058 + nand_get_sdr_timings(&chip->data_interface); 2059 + struct nand_op_instr instrs[] = { 2060 + NAND_OP_CMD(NAND_CMD_GET_FEATURES, 0), 2061 + NAND_OP_ADDR(1, &feature, PSEC_TO_NSEC(sdr->tWB_max)), 2062 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tFEAT_max), 2063 + PSEC_TO_NSEC(sdr->tRR_min)), 2064 + NAND_OP_8BIT_DATA_IN(ONFI_SUBFEATURE_PARAM_LEN, 2065 + data, 0), 2066 + }; 2067 + struct nand_operation op = NAND_OPERATION(instrs); 2068 + 2069 + return nand_exec_op(chip, &op); 2070 + } 2071 + 2072 + chip->cmdfunc(mtd, NAND_CMD_GET_FEATURES, feature, -1); 2073 + for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i) 2074 + params[i] = chip->read_byte(mtd); 2075 + 2076 + return 0; 2077 + } 2078 + 2079 + /** 2080 + * nand_reset_op - Do a reset operation 2081 + * @chip: The NAND chip 2082 + * 2083 + * This function sends a RESET command and waits for the NAND to be ready 2084 + * before returning. 2085 + * This function does not select/unselect the CS line. 2086 + * 2087 + * Returns 0 on success, a negative error code otherwise. 
2088 + */ 2089 + int nand_reset_op(struct nand_chip *chip) 2090 + { 2091 + struct mtd_info *mtd = nand_to_mtd(chip); 2092 + 2093 + if (chip->exec_op) { 2094 + const struct nand_sdr_timings *sdr = 2095 + nand_get_sdr_timings(&chip->data_interface); 2096 + struct nand_op_instr instrs[] = { 2097 + NAND_OP_CMD(NAND_CMD_RESET, PSEC_TO_NSEC(sdr->tWB_max)), 2098 + NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tRST_max), 0), 2099 + }; 2100 + struct nand_operation op = NAND_OPERATION(instrs); 2101 + 2102 + return nand_exec_op(chip, &op); 2103 + } 2104 + 2105 + chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 2106 + 2107 + return 0; 2108 + } 2109 + EXPORT_SYMBOL_GPL(nand_reset_op); 2110 + 2111 + /** 2112 + * nand_read_data_op - Read data from the NAND 2113 + * @chip: The NAND chip 2114 + * @buf: buffer used to store the data 2115 + * @len: length of the buffer 2116 + * @force_8bit: force 8-bit bus access 2117 + * 2118 + * This function does a raw data read on the bus. Usually used after launching 2119 + * another NAND operation like nand_read_page_op(). 2120 + * This function does not select/unselect the CS line. 2121 + * 2122 + * Returns 0 on success, a negative error code otherwise. 
2123 + */ 2124 + int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len, 2125 + bool force_8bit) 2126 + { 2127 + struct mtd_info *mtd = nand_to_mtd(chip); 2128 + 2129 + if (!len || !buf) 2130 + return -EINVAL; 2131 + 2132 + if (chip->exec_op) { 2133 + struct nand_op_instr instrs[] = { 2134 + NAND_OP_DATA_IN(len, buf, 0), 2135 + }; 2136 + struct nand_operation op = NAND_OPERATION(instrs); 2137 + 2138 + instrs[0].ctx.data.force_8bit = force_8bit; 2139 + 2140 + return nand_exec_op(chip, &op); 2141 + } 2142 + 2143 + if (force_8bit) { 2144 + u8 *p = buf; 2145 + unsigned int i; 2146 + 2147 + for (i = 0; i < len; i++) 2148 + p[i] = chip->read_byte(mtd); 2149 + } else { 2150 + chip->read_buf(mtd, buf, len); 2151 + } 2152 + 2153 + return 0; 2154 + } 2155 + EXPORT_SYMBOL_GPL(nand_read_data_op); 2156 + 2157 + /** 2158 + * nand_write_data_op - Write data to the NAND 2159 + * @chip: The NAND chip 2160 + * @buf: buffer containing the data to send on the bus 2161 + * @len: length of the buffer 2162 + * @force_8bit: force 8-bit bus access 2163 + * 2164 + * This function does a raw data write on the bus. Usually used after launching 2165 + * another NAND operation like nand_prog_page_begin_op(). 2166 + * This function does not select/unselect the CS line. 2167 + * 2168 + * Returns 0 on success, a negative error code otherwise.
2169 + */ 2170 + int nand_write_data_op(struct nand_chip *chip, const void *buf, 2171 + unsigned int len, bool force_8bit) 2172 + { 2173 + struct mtd_info *mtd = nand_to_mtd(chip); 2174 + 2175 + if (!len || !buf) 2176 + return -EINVAL; 2177 + 2178 + if (chip->exec_op) { 2179 + struct nand_op_instr instrs[] = { 2180 + NAND_OP_DATA_OUT(len, buf, 0), 2181 + }; 2182 + struct nand_operation op = NAND_OPERATION(instrs); 2183 + 2184 + instrs[0].ctx.data.force_8bit = force_8bit; 2185 + 2186 + return nand_exec_op(chip, &op); 2187 + } 2188 + 2189 + if (force_8bit) { 2190 + const u8 *p = buf; 2191 + unsigned int i; 2192 + 2193 + for (i = 0; i < len; i++) 2194 + chip->write_byte(mtd, p[i]); 2195 + } else { 2196 + chip->write_buf(mtd, buf, len); 2197 + } 2198 + 2199 + return 0; 2200 + } 2201 + EXPORT_SYMBOL_GPL(nand_write_data_op); 2202 + 2203 + /** 2204 + * struct nand_op_parser_ctx - Context used by the parser 2205 + * @instrs: array of all the instructions that must be addressed 2206 + * @ninstrs: length of the @instrs array 2207 + * @subop: Sub-operation to be passed to the NAND controller 2208 + * 2209 + * This structure is used by the core to split NAND operations into 2210 + * sub-operations that can be handled by the NAND controller. 2211 + */ 2212 + struct nand_op_parser_ctx { 2213 + const struct nand_op_instr *instrs; 2214 + unsigned int ninstrs; 2215 + struct nand_subop subop; 2216 + }; 2217 + 2218 + /** 2219 + * nand_op_parser_must_split_instr - Checks if an instruction must be split 2220 + * @pat: the parser pattern element that matches @instr 2221 + * @instr: pointer to the instruction to check 2222 + * @start_offset: this is an in/out parameter. If @instr has already been 2223 + * split, then @start_offset is the offset from which to start 2224 + * (either an address cycle or an offset in the data buffer). 2225 + * Conversely, if the function returns true (ie. 
instr must be 2226 + split), this parameter is updated to point to the first 2227 + data/address cycle that has not been taken care of. 2228 + * 2229 + * Some NAND controllers are limited and cannot send X address cycles in a 2230 + * single operation, or cannot read/write more than Y bytes at the same time. 2231 + * In this case, split the instruction that does not fit in a single 2232 + * controller-operation into two or more chunks. 2233 + * 2234 + * Returns true if the instruction must be split, false otherwise. 2235 + * The @start_offset parameter is also updated to the offset at which the next 2236 + * bundle of instructions must start (if an address or a data instruction). 2237 + */ 2238 + static bool 2239 + nand_op_parser_must_split_instr(const struct nand_op_parser_pattern_elem *pat, 2240 + const struct nand_op_instr *instr, 2241 + unsigned int *start_offset) 2242 + { 2243 + switch (pat->type) { 2244 + case NAND_OP_ADDR_INSTR: 2245 + if (!pat->ctx.addr.maxcycles) 2246 + break; 2247 + 2248 + if (instr->ctx.addr.naddrs - *start_offset > 2249 + pat->ctx.addr.maxcycles) { 2250 + *start_offset += pat->ctx.addr.maxcycles; 2251 + return true; 2252 + } 2253 + break; 2254 + 2255 + case NAND_OP_DATA_IN_INSTR: 2256 + case NAND_OP_DATA_OUT_INSTR: 2257 + if (!pat->ctx.data.maxlen) 2258 + break; 2259 + 2260 + if (instr->ctx.data.len - *start_offset > 2261 + pat->ctx.data.maxlen) { 2262 + *start_offset += pat->ctx.data.maxlen; 2263 + return true; 2264 + } 2265 + break; 2266 + 2267 + default: 2268 + break; 2269 + } 2270 + 2271 + return false; 2272 + } 2273 + 2274 + /** 2275 + * nand_op_parser_match_pat - Checks if a pattern matches the instructions 2276 + * remaining in the parser context 2277 + * @pat: the pattern to test 2278 + * @ctx: the parser context structure to match with the pattern @pat 2279 + * 2280 + * Check if @pat matches the set or a sub-set of instructions remaining in @ctx. 2281 + * Returns true if this is the case, false otherwise.
When true is returned, 2282 + * @ctx->subop is updated with the set of instructions to be passed to the 2283 + * controller driver. 2284 + */ 2285 + static bool 2286 + nand_op_parser_match_pat(const struct nand_op_parser_pattern *pat, 2287 + struct nand_op_parser_ctx *ctx) 2288 + { 2289 + unsigned int instr_offset = ctx->subop.first_instr_start_off; 2290 + const struct nand_op_instr *end = ctx->instrs + ctx->ninstrs; 2291 + const struct nand_op_instr *instr = ctx->subop.instrs; 2292 + unsigned int i, ninstrs; 2293 + 2294 + for (i = 0, ninstrs = 0; i < pat->nelems && instr < end; i++) { 2295 + /* 2296 + * The pattern instruction does not match the operation 2297 + * instruction. If the instruction is marked optional in the 2298 + * pattern definition, we skip the pattern element and continue 2299 + * to the next one. If the element is mandatory, there's no 2300 + * match and we can return false directly. 2301 + */ 2302 + if (instr->type != pat->elems[i].type) { 2303 + if (!pat->elems[i].optional) 2304 + return false; 2305 + 2306 + continue; 2307 + } 2308 + 2309 + /* 2310 + * Now check the pattern element constraints. If the pattern is 2311 + * not able to handle the whole instruction in a single step, 2312 + * we have to split it. 2313 + * The last_instr_end_off value comes back updated to point to 2314 + * the position where we have to split the instruction (the 2315 + * start of the next subop chunk). 2316 + */ 2317 + if (nand_op_parser_must_split_instr(&pat->elems[i], instr, 2318 + &instr_offset)) { 2319 + ninstrs++; 2320 + i++; 2321 + break; 2322 + } 2323 + 2324 + instr++; 2325 + ninstrs++; 2326 + instr_offset = 0; 2327 + } 2328 + 2329 + /* 2330 + * This can happen if all instructions of a pattern are optional. 2331 + * Still, if there's not at least one instruction handled by this 2332 + * pattern, this is not a match, and we should try the next one (if 2333 + * any). 
2334 + */ 2335 + if (!ninstrs) 2336 + return false; 2337 + 2338 + /* 2339 + * We had a match on the pattern head, but the pattern may be longer 2340 + * than the instructions we're asked to execute. We need to make sure 2341 + * there are no mandatory elements in the pattern tail. 2342 + */ 2343 + for (; i < pat->nelems; i++) { 2344 + if (!pat->elems[i].optional) 2345 + return false; 2346 + } 2347 + 2348 + /* 2349 + * We have a match: update the subop structure accordingly and return 2350 + * true. 2351 + */ 2352 + ctx->subop.ninstrs = ninstrs; 2353 + ctx->subop.last_instr_end_off = instr_offset; 2354 + 2355 + return true; 2356 + } 2357 + 2358 + #if IS_ENABLED(CONFIG_DYNAMIC_DEBUG) || defined(DEBUG) 2359 + static void nand_op_parser_trace(const struct nand_op_parser_ctx *ctx) 2360 + { 2361 + const struct nand_op_instr *instr; 2362 + char *prefix = " "; 2363 + unsigned int i; 2364 + 2365 + pr_debug("executing subop:\n"); 2366 + 2367 + for (i = 0; i < ctx->ninstrs; i++) { 2368 + instr = &ctx->instrs[i]; 2369 + 2370 + if (instr == &ctx->subop.instrs[0]) 2371 + prefix = " ->"; 2372 + 2373 + switch (instr->type) { 2374 + case NAND_OP_CMD_INSTR: 2375 + pr_debug("%sCMD [0x%02x]\n", prefix, 2376 + instr->ctx.cmd.opcode); 2377 + break; 2378 + case NAND_OP_ADDR_INSTR: 2379 + pr_debug("%sADDR [%d cyc: %*ph]\n", prefix, 2380 + instr->ctx.addr.naddrs, 2381 + instr->ctx.addr.naddrs < 64 ? 2382 + instr->ctx.addr.naddrs : 64, 2383 + instr->ctx.addr.addrs); 2384 + break; 2385 + case NAND_OP_DATA_IN_INSTR: 2386 + pr_debug("%sDATA_IN [%d B%s]\n", prefix, 2387 + instr->ctx.data.len, 2388 + instr->ctx.data.force_8bit ? 2389 + ", force 8-bit" : ""); 2390 + break; 2391 + case NAND_OP_DATA_OUT_INSTR: 2392 + pr_debug("%sDATA_OUT [%d B%s]\n", prefix, 2393 + instr->ctx.data.len, 2394 + instr->ctx.data.force_8bit ? 
", force 8-bit" : ""); 2396 + break; 2397 + case NAND_OP_WAITRDY_INSTR: 2398 + pr_debug("%sWAITRDY [max %d ms]\n", prefix, 2399 + instr->ctx.waitrdy.timeout_ms); 2400 + break; 2401 + } 2402 + 2403 + if (instr == &ctx->subop.instrs[ctx->subop.ninstrs - 1]) 2404 + prefix = " "; 2405 + } 2406 + } 2407 + #else 2408 + static void nand_op_parser_trace(const struct nand_op_parser_ctx *ctx) 2409 + { 2410 + /* NOP */ 2411 + } 2412 + #endif 2413 + 2414 + /** 2415 + * nand_op_parser_exec_op - exec_op parser 2416 + * @chip: the NAND chip 2417 + * @parser: patterns description provided by the controller driver 2418 + * @op: the NAND operation to address 2419 + * @check_only: when true, the function only checks if @op can be handled but 2420 + * does not execute the operation 2421 + * 2422 + * Helper function designed to ease integration of NAND controller drivers that 2423 + * only support a limited set of instruction sequences. The supported sequences 2424 + * are described in @parser, and the framework takes care of splitting @op into 2425 + * multiple sub-operations (if required) and passes them back to the ->exec() 2426 + * callback of the matching pattern if @check_only is set to false. 2427 + * 2428 + * NAND controller drivers should call this function from their own ->exec_op() 2429 + * implementation. 2430 + * 2431 + * Returns 0 on success, a negative error code otherwise. A failure can be 2432 + * caused by an unsupported operation (none of the supported patterns is able 2433 + * to handle the requested operation), or an error returned by the 2434 + * matching pattern's ->exec() hook. 
2435 + */ 2436 + int nand_op_parser_exec_op(struct nand_chip *chip, 2437 + const struct nand_op_parser *parser, 2438 + const struct nand_operation *op, bool check_only) 2439 + { 2440 + struct nand_op_parser_ctx ctx = { 2441 + .subop.instrs = op->instrs, 2442 + .instrs = op->instrs, 2443 + .ninstrs = op->ninstrs, 2444 + }; 2445 + unsigned int i; 2446 + 2447 + while (ctx.subop.instrs < op->instrs + op->ninstrs) { 2448 + int ret; 2449 + 2450 + for (i = 0; i < parser->npatterns; i++) { 2451 + const struct nand_op_parser_pattern *pattern; 2452 + 2453 + pattern = &parser->patterns[i]; 2454 + if (!nand_op_parser_match_pat(pattern, &ctx)) 2455 + continue; 2456 + 2457 + nand_op_parser_trace(&ctx); 2458 + 2459 + if (check_only) 2460 + break; 2461 + 2462 + ret = pattern->exec(chip, &ctx.subop); 2463 + if (ret) 2464 + return ret; 2465 + 2466 + break; 2467 + } 2468 + 2469 + if (i == parser->npatterns) { 2470 + pr_debug("->exec_op() parser: pattern not found!\n"); 2471 + return -ENOTSUPP; 2472 + } 2473 + 2474 + /* 2475 + * Update the context structure by pointing to the start of the 2476 + * next subop. 
2477 + */ 2478 + ctx.subop.instrs = ctx.subop.instrs + ctx.subop.ninstrs; 2479 + if (ctx.subop.last_instr_end_off) 2480 + ctx.subop.instrs -= 1; 2481 + 2482 + ctx.subop.first_instr_start_off = ctx.subop.last_instr_end_off; 2483 + } 2484 + 2485 + return 0; 2486 + } 2487 + EXPORT_SYMBOL_GPL(nand_op_parser_exec_op); 2488 + 2489 + static bool nand_instr_is_data(const struct nand_op_instr *instr) 2490 + { 2491 + return instr && (instr->type == NAND_OP_DATA_IN_INSTR || 2492 + instr->type == NAND_OP_DATA_OUT_INSTR); 2493 + } 2494 + 2495 + static bool nand_subop_instr_is_valid(const struct nand_subop *subop, 2496 + unsigned int instr_idx) 2497 + { 2498 + return subop && instr_idx < subop->ninstrs; 2499 + } 2500 + 2501 + static int nand_subop_get_start_off(const struct nand_subop *subop, 2502 + unsigned int instr_idx) 2503 + { 2504 + if (instr_idx) 2505 + return 0; 2506 + 2507 + return subop->first_instr_start_off; 2508 + } 2509 + 2510 + /** 2511 + * nand_subop_get_addr_start_off - Get the start offset in an address array 2512 + * @subop: The entire sub-operation 2513 + * @instr_idx: Index of the instruction inside the sub-operation 2514 + * 2515 + * During driver development, one could be tempted to directly use the 2516 + * ->addr.addrs field of address instructions. This is wrong as address 2517 + * instructions might be split. 2518 + * 2519 + * Given an address instruction, returns the offset of the first cycle to issue. 
2520 + */ 2521 + int nand_subop_get_addr_start_off(const struct nand_subop *subop, 2522 + unsigned int instr_idx) 2523 + { 2524 + if (!nand_subop_instr_is_valid(subop, instr_idx) || 2525 + subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR) 2526 + return -EINVAL; 2527 + 2528 + return nand_subop_get_start_off(subop, instr_idx); 2529 + } 2530 + EXPORT_SYMBOL_GPL(nand_subop_get_addr_start_off); 2531 + 2532 + /** 2533 + * nand_subop_get_num_addr_cyc - Get the remaining address cycles to assert 2534 + * @subop: The entire sub-operation 2535 + * @instr_idx: Index of the instruction inside the sub-operation 2536 + * 2537 + * During driver development, one could be tempted to directly use the 2538 + * ->addr->naddrs field of an address instruction. This is wrong as instructions 2539 + * might be split. 2540 + * 2541 + * Given an address instruction, returns the number of address cycles to issue. 2542 + */ 2543 + int nand_subop_get_num_addr_cyc(const struct nand_subop *subop, 2544 + unsigned int instr_idx) 2545 + { 2546 + int start_off, end_off; 2547 + 2548 + if (!nand_subop_instr_is_valid(subop, instr_idx) || 2549 + subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR) 2550 + return -EINVAL; 2551 + 2552 + start_off = nand_subop_get_addr_start_off(subop, instr_idx); 2553 + 2554 + if (instr_idx == subop->ninstrs - 1 && 2555 + subop->last_instr_end_off) 2556 + end_off = subop->last_instr_end_off; 2557 + else 2558 + end_off = subop->instrs[instr_idx].ctx.addr.naddrs; 2559 + 2560 + return end_off - start_off; 2561 + } 2562 + EXPORT_SYMBOL_GPL(nand_subop_get_num_addr_cyc); 2563 + 2564 + /** 2565 + * nand_subop_get_data_start_off - Get the start offset in a data array 2566 + * @subop: The entire sub-operation 2567 + * @instr_idx: Index of the instruction inside the sub-operation 2568 + * 2569 + * During driver development, one could be tempted to directly use the 2570 + * ->data->buf.{in,out} field of data instructions. This is wrong as data 2571 + * instructions might be split. 
2572 + * 2573 + * Given a data instruction, returns the offset to start from. 2574 + */ 2575 + int nand_subop_get_data_start_off(const struct nand_subop *subop, 2576 + unsigned int instr_idx) 2577 + { 2578 + if (!nand_subop_instr_is_valid(subop, instr_idx) || 2579 + !nand_instr_is_data(&subop->instrs[instr_idx])) 2580 + return -EINVAL; 2581 + 2582 + return nand_subop_get_start_off(subop, instr_idx); 2583 + } 2584 + EXPORT_SYMBOL_GPL(nand_subop_get_data_start_off); 2585 + 2586 + /** 2587 + * nand_subop_get_data_len - Get the number of bytes to retrieve 2588 + * @subop: The entire sub-operation 2589 + * @instr_idx: Index of the instruction inside the sub-operation 2590 + * 2591 + * During driver development, one could be tempted to directly use the 2592 + * ->data->len field of a data instruction. This is wrong as data instructions 2593 + * might be split. 2594 + * 2595 + * Returns the length of the chunk of data to send/receive. 2596 + */ 2597 + int nand_subop_get_data_len(const struct nand_subop *subop, 2598 + unsigned int instr_idx) 2599 + { 2600 + int start_off = 0, end_off; 2601 + 2602 + if (!nand_subop_instr_is_valid(subop, instr_idx) || 2603 + !nand_instr_is_data(&subop->instrs[instr_idx])) 2604 + return -EINVAL; 2605 + 2606 + start_off = nand_subop_get_data_start_off(subop, instr_idx); 2607 + 2608 + if (instr_idx == subop->ninstrs - 1 && 2609 + subop->last_instr_end_off) 2610 + end_off = subop->last_instr_end_off; 2611 + else 2612 + end_off = subop->instrs[instr_idx].ctx.data.len; 2613 + 2614 + return end_off - start_off; 2615 + } 2616 + EXPORT_SYMBOL_GPL(nand_subop_get_data_len); 1307 2617 1308 2618 /** 1309 2619 * nand_reset - Reset and initialize a NAND device 1310 2620 * @chip: The NAND chip 1311 2621 * @chipnr: Internal die id 1312 2622 * 1313 - * Returns 0 for success or negative error code otherwise 2623 + * Save the timings data structure, then apply SDR timings mode 0 (see 2624 + * nand_reset_data_interface for details), do the reset operation, and 
2625 + * apply back the previous timings. 2626 + * 2627 + * Returns 0 on success, a negative error code otherwise. 1314 2628 */ 1315 2629 int nand_reset(struct nand_chip *chip, int chipnr) 1316 2630 { 1317 2631 struct mtd_info *mtd = nand_to_mtd(chip); 2632 + struct nand_data_interface saved_data_intf = chip->data_interface; 1318 2633 int ret; 1319 2634 1320 2635 ret = nand_reset_data_interface(chip, chipnr); ··· 2734 1233 * interface settings, hence this weird ->select_chip() dance. 2735 1234 */ 2736 1235 chip->select_chip(mtd, chipnr); 2737 - chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 1236 + ret = nand_reset_op(chip); 2738 1237 chip->select_chip(mtd, -1); 1238 + if (ret) 1239 + return ret; 2739 1240 2740 1241 chip->select_chip(mtd, chipnr); 1242 + chip->data_interface = saved_data_intf; 2741 1243 ret = nand_setup_data_interface(chip, chipnr); 2742 1244 chip->select_chip(mtd, -1); 2743 1245 if (ret) ··· 2894 1390 int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 2895 1391 uint8_t *buf, int oob_required, int page) 2896 1392 { 2897 - chip->read_buf(mtd, buf, mtd->writesize); 2898 - if (oob_required) 2899 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 1393 + int ret; 1394 + 1395 + ret = nand_read_page_op(chip, page, 0, buf, mtd->writesize); 1396 + if (ret) 1397 + return ret; 1398 + 1399 + if (oob_required) { 1400 + ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, 1401 + false); 1402 + if (ret) 1403 + return ret; 1404 + } 1405 + 2900 1406 return 0; 2901 1407 } 2902 1408 EXPORT_SYMBOL(nand_read_page_raw); ··· 2928 1414 int eccsize = chip->ecc.size; 2929 1415 int eccbytes = chip->ecc.bytes; 2930 1416 uint8_t *oob = chip->oob_poi; 2931 - int steps, size; 1417 + int steps, size, ret; 1418 + 1419 + ret = nand_read_page_op(chip, page, 0, NULL, 0); 1420 + if (ret) 1421 + return ret; 2932 1422 2933 1423 for (steps = chip->ecc.steps; steps > 0; steps--) { 2934 - chip->read_buf(mtd, buf, eccsize); 1424 + ret = nand_read_data_op(chip, buf, 
eccsize, false); 1425 + if (ret) 1426 + return ret; 1427 + 2935 1428 buf += eccsize; 2936 1429 2937 1430 if (chip->ecc.prepad) { 2938 - chip->read_buf(mtd, oob, chip->ecc.prepad); 1431 + ret = nand_read_data_op(chip, oob, chip->ecc.prepad, 1432 + false); 1433 + if (ret) 1434 + return ret; 1435 + 2939 1436 oob += chip->ecc.prepad; 2940 1437 } 2941 1438 2942 - chip->read_buf(mtd, oob, eccbytes); 1439 + ret = nand_read_data_op(chip, oob, eccbytes, false); 1440 + if (ret) 1441 + return ret; 1442 + 2943 1443 oob += eccbytes; 2944 1444 2945 1445 if (chip->ecc.postpad) { 2946 - chip->read_buf(mtd, oob, chip->ecc.postpad); 1446 + ret = nand_read_data_op(chip, oob, chip->ecc.postpad, 1447 + false); 1448 + if (ret) 1449 + return ret; 1450 + 2947 1451 oob += chip->ecc.postpad; 2948 1452 } 2949 1453 } 2950 1454 2951 1455 size = mtd->oobsize - (oob - chip->oob_poi); 2952 - if (size) 2953 - chip->read_buf(mtd, oob, size); 1456 + if (size) { 1457 + ret = nand_read_data_op(chip, oob, size, false); 1458 + if (ret) 1459 + return ret; 1460 + } 2954 1461 2955 1462 return 0; 2956 1463 } ··· 2991 1456 int eccbytes = chip->ecc.bytes; 2992 1457 int eccsteps = chip->ecc.steps; 2993 1458 uint8_t *p = buf; 2994 - uint8_t *ecc_calc = chip->buffers->ecccalc; 2995 - uint8_t *ecc_code = chip->buffers->ecccode; 1459 + uint8_t *ecc_calc = chip->ecc.calc_buf; 1460 + uint8_t *ecc_code = chip->ecc.code_buf; 2996 1461 unsigned int max_bitflips = 0; 2997 1462 2998 1463 chip->ecc.read_page_raw(mtd, chip, buf, 1, page); ··· 3056 1521 3057 1522 data_col_addr = start_step * chip->ecc.size; 3058 1523 /* If we read not a page aligned data */ 3059 - if (data_col_addr != 0) 3060 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, data_col_addr, -1); 3061 - 3062 1524 p = bufpoi + data_col_addr; 3063 - chip->read_buf(mtd, p, datafrag_len); 1525 + ret = nand_read_page_op(chip, page, data_col_addr, p, datafrag_len); 1526 + if (ret) 1527 + return ret; 3064 1528 3065 1529 /* Calculate ECC */ 3066 1530 for (i = 0; i < eccfrag_len 
; i += chip->ecc.bytes, p += chip->ecc.size) 3067 - chip->ecc.calculate(mtd, p, &chip->buffers->ecccalc[i]); 1531 + chip->ecc.calculate(mtd, p, &chip->ecc.calc_buf[i]); 3068 1532 3069 1533 /* 3070 1534 * The performance is faster if we position offsets according to ··· 3077 1543 gaps = 1; 3078 1544 3079 1545 if (gaps) { 3080 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, mtd->writesize, -1); 3081 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 1546 + ret = nand_change_read_column_op(chip, mtd->writesize, 1547 + chip->oob_poi, mtd->oobsize, 1548 + false); 1549 + if (ret) 1550 + return ret; 3082 1551 } else { 3083 1552 /* 3084 1553 * Send the command to read the particular ECC bytes take care ··· 3095 1558 (busw - 1)) 3096 1559 aligned_len++; 3097 1560 3098 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 3099 - mtd->writesize + aligned_pos, -1); 3100 - chip->read_buf(mtd, &chip->oob_poi[aligned_pos], aligned_len); 1561 + ret = nand_change_read_column_op(chip, 1562 + mtd->writesize + aligned_pos, 1563 + &chip->oob_poi[aligned_pos], 1564 + aligned_len, false); 1565 + if (ret) 1566 + return ret; 3101 1567 } 3102 1568 3103 - ret = mtd_ooblayout_get_eccbytes(mtd, chip->buffers->ecccode, 1569 + ret = mtd_ooblayout_get_eccbytes(mtd, chip->ecc.code_buf, 3104 1570 chip->oob_poi, index, eccfrag_len); 3105 1571 if (ret) 3106 1572 return ret; ··· 3112 1572 for (i = 0; i < eccfrag_len ; i += chip->ecc.bytes, p += chip->ecc.size) { 3113 1573 int stat; 3114 1574 3115 - stat = chip->ecc.correct(mtd, p, 3116 - &chip->buffers->ecccode[i], &chip->buffers->ecccalc[i]); 1575 + stat = chip->ecc.correct(mtd, p, &chip->ecc.code_buf[i], 1576 + &chip->ecc.calc_buf[i]); 3117 1577 if (stat == -EBADMSG && 3118 1578 (chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) { 3119 1579 /* check for empty pages with bitflips */ 3120 1580 stat = nand_check_erased_ecc_chunk(p, chip->ecc.size, 3121 - &chip->buffers->ecccode[i], 1581 + &chip->ecc.code_buf[i], 3122 1582 chip->ecc.bytes, 3123 1583 NULL, 0, 3124 1584 
chip->ecc.strength); ··· 3151 1611 int eccbytes = chip->ecc.bytes; 3152 1612 int eccsteps = chip->ecc.steps; 3153 1613 uint8_t *p = buf; 3154 - uint8_t *ecc_calc = chip->buffers->ecccalc; 3155 - uint8_t *ecc_code = chip->buffers->ecccode; 1614 + uint8_t *ecc_calc = chip->ecc.calc_buf; 1615 + uint8_t *ecc_code = chip->ecc.code_buf; 3156 1616 unsigned int max_bitflips = 0; 1617 + 1618 + ret = nand_read_page_op(chip, page, 0, NULL, 0); 1619 + if (ret) 1620 + return ret; 3157 1621 3158 1622 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 3159 1623 chip->ecc.hwctl(mtd, NAND_ECC_READ); 3160 - chip->read_buf(mtd, p, eccsize); 1624 + 1625 + ret = nand_read_data_op(chip, p, eccsize, false); 1626 + if (ret) 1627 + return ret; 1628 + 3161 1629 chip->ecc.calculate(mtd, p, &ecc_calc[i]); 3162 1630 } 3163 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 1631 + 1632 + ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false); 1633 + if (ret) 1634 + return ret; 3164 1635 3165 1636 ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0, 3166 1637 chip->ecc.total); ··· 3225 1674 int eccbytes = chip->ecc.bytes; 3226 1675 int eccsteps = chip->ecc.steps; 3227 1676 uint8_t *p = buf; 3228 - uint8_t *ecc_code = chip->buffers->ecccode; 3229 - uint8_t *ecc_calc = chip->buffers->ecccalc; 1677 + uint8_t *ecc_code = chip->ecc.code_buf; 1678 + uint8_t *ecc_calc = chip->ecc.calc_buf; 3230 1679 unsigned int max_bitflips = 0; 3231 1680 3232 1681 /* Read the OOB area first */ 3233 - chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 3234 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 3235 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 1682 + ret = nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize); 1683 + if (ret) 1684 + return ret; 1685 + 1686 + ret = nand_read_page_op(chip, page, 0, NULL, 0); 1687 + if (ret) 1688 + return ret; 3236 1689 3237 1690 ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0, 3238 1691 chip->ecc.total); ··· 3247 
1692 int stat; 3248 1693 3249 1694 chip->ecc.hwctl(mtd, NAND_ECC_READ); 3250 - chip->read_buf(mtd, p, eccsize); 1695 + 1696 + ret = nand_read_data_op(chip, p, eccsize, false); 1697 + if (ret) 1698 + return ret; 1699 + 3251 1700 chip->ecc.calculate(mtd, p, &ecc_calc[i]); 3252 1701 3253 1702 stat = chip->ecc.correct(mtd, p, &ecc_code[i], NULL); ··· 3288 1729 static int nand_read_page_syndrome(struct mtd_info *mtd, struct nand_chip *chip, 3289 1730 uint8_t *buf, int oob_required, int page) 3290 1731 { 3291 - int i, eccsize = chip->ecc.size; 1732 + int ret, i, eccsize = chip->ecc.size; 3292 1733 int eccbytes = chip->ecc.bytes; 3293 1734 int eccsteps = chip->ecc.steps; 3294 1735 int eccpadbytes = eccbytes + chip->ecc.prepad + chip->ecc.postpad; ··· 3296 1737 uint8_t *oob = chip->oob_poi; 3297 1738 unsigned int max_bitflips = 0; 3298 1739 1740 + ret = nand_read_page_op(chip, page, 0, NULL, 0); 1741 + if (ret) 1742 + return ret; 1743 + 3299 1744 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 3300 1745 int stat; 3301 1746 3302 1747 chip->ecc.hwctl(mtd, NAND_ECC_READ); 3303 - chip->read_buf(mtd, p, eccsize); 1748 + 1749 + ret = nand_read_data_op(chip, p, eccsize, false); 1750 + if (ret) 1751 + return ret; 3304 1752 3305 1753 if (chip->ecc.prepad) { 3306 - chip->read_buf(mtd, oob, chip->ecc.prepad); 1754 + ret = nand_read_data_op(chip, oob, chip->ecc.prepad, 1755 + false); 1756 + if (ret) 1757 + return ret; 1758 + 3307 1759 oob += chip->ecc.prepad; 3308 1760 } 3309 1761 3310 1762 chip->ecc.hwctl(mtd, NAND_ECC_READSYN); 3311 - chip->read_buf(mtd, oob, eccbytes); 1763 + 1764 + ret = nand_read_data_op(chip, oob, eccbytes, false); 1765 + if (ret) 1766 + return ret; 1767 + 3312 1768 stat = chip->ecc.correct(mtd, p, oob, NULL); 3313 1769 3314 1770 oob += eccbytes; 3315 1771 3316 1772 if (chip->ecc.postpad) { 3317 - chip->read_buf(mtd, oob, chip->ecc.postpad); 1773 + ret = nand_read_data_op(chip, oob, chip->ecc.postpad, 1774 + false); 1775 + if (ret) 1776 + return 
ret; 1777 + 3318 1778 oob += chip->ecc.postpad; 3319 1779 } 3320 1780 ··· 3357 1779 3358 1780 /* Calculate remaining oob bytes */ 3359 1781 i = mtd->oobsize - (oob - chip->oob_poi); 3360 - if (i) 3361 - chip->read_buf(mtd, oob, i); 1782 + if (i) { 1783 + ret = nand_read_data_op(chip, oob, i, false); 1784 + if (ret) 1785 + return ret; 1786 + } 3362 1787 3363 1788 return max_bitflips; 3364 1789 } ··· 3475 1894 3476 1895 /* Is the current page in the buffer? */ 3477 1896 if (realpage != chip->pagebuf || oob) { 3478 - bufpoi = use_bufpoi ? chip->buffers->databuf : buf; 1897 + bufpoi = use_bufpoi ? chip->data_buf : buf; 3479 1898 3480 1899 if (use_bufpoi && aligned) 3481 1900 pr_debug("%s: using read bounce buffer for buf@%p\n", 3482 1901 __func__, buf); 3483 1902 3484 1903 read_retry: 3485 - if (nand_standard_page_accessors(&chip->ecc)) 3486 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page); 3487 - 3488 1904 /* 3489 1905 * Now read the page into the buffer. Absent an error, 3490 1906 * the read methods return max bitflips per ecc step. 
··· 3516 1938 /* Invalidate page cache */ 3517 1939 chip->pagebuf = -1; 3518 1940 } 3519 - memcpy(buf, chip->buffers->databuf + col, bytes); 1941 + memcpy(buf, chip->data_buf + col, bytes); 3520 1942 } 3521 1943 3522 1944 if (unlikely(oob)) { ··· 3557 1979 buf += bytes; 3558 1980 max_bitflips = max_t(unsigned int, max_bitflips, ret); 3559 1981 } else { 3560 - memcpy(buf, chip->buffers->databuf + col, bytes); 1982 + memcpy(buf, chip->data_buf + col, bytes); 3561 1983 buf += bytes; 3562 1984 max_bitflips = max_t(unsigned int, max_bitflips, 3563 1985 chip->pagebuf_bitflips); ··· 3612 2034 */ 3613 2035 int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page) 3614 2036 { 3615 - chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 3616 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 3617 - return 0; 2037 + return nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize); 3618 2038 } 3619 2039 EXPORT_SYMBOL(nand_read_oob_std); 3620 2040 ··· 3630 2054 int chunk = chip->ecc.bytes + chip->ecc.prepad + chip->ecc.postpad; 3631 2055 int eccsize = chip->ecc.size; 3632 2056 uint8_t *bufpoi = chip->oob_poi; 3633 - int i, toread, sndrnd = 0, pos; 2057 + int i, toread, sndrnd = 0, pos, ret; 3634 2058 3635 - chip->cmdfunc(mtd, NAND_CMD_READ0, chip->ecc.size, page); 2059 + ret = nand_read_page_op(chip, page, chip->ecc.size, NULL, 0); 2060 + if (ret) 2061 + return ret; 2062 + 3636 2063 for (i = 0; i < chip->ecc.steps; i++) { 3637 2064 if (sndrnd) { 2065 + int ret; 2066 + 3638 2067 pos = eccsize + i * (eccsize + chunk); 3639 2068 if (mtd->writesize > 512) 3640 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, pos, -1); 2069 + ret = nand_change_read_column_op(chip, pos, 2070 + NULL, 0, 2071 + false); 3641 2072 else 3642 - chip->cmdfunc(mtd, NAND_CMD_READ0, pos, page); 2073 + ret = nand_read_page_op(chip, page, pos, NULL, 2074 + 0); 2075 + 2076 + if (ret) 2077 + return ret; 3643 2078 } else 3644 2079 sndrnd = 1; 3645 2080 toread = min_t(int, length, chunk); 3646 - 
chip->read_buf(mtd, bufpoi, toread); 2081 + 2082 + ret = nand_read_data_op(chip, bufpoi, toread, false); 2083 + if (ret) 2084 + return ret; 2085 + 3647 2086 bufpoi += toread; 3648 2087 length -= toread; 3649 2088 } 3650 - if (length > 0) 3651 - chip->read_buf(mtd, bufpoi, length); 2089 + if (length > 0) { 2090 + ret = nand_read_data_op(chip, bufpoi, length, false); 2091 + if (ret) 2092 + return ret; 2093 + } 3652 2094 3653 2095 return 0; 3654 2096 } ··· 3680 2086 */ 3681 2087 int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page) 3682 2088 { 3683 - int status = 0; 3684 - const uint8_t *buf = chip->oob_poi; 3685 - int length = mtd->oobsize; 3686 - 3687 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize, page); 3688 - chip->write_buf(mtd, buf, length); 3689 - /* Send command to program the OOB data */ 3690 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 3691 - 3692 - status = chip->waitfunc(mtd, chip); 3693 - 3694 - return status & NAND_STATUS_FAIL ? -EIO : 0; 2089 + return nand_prog_page_op(chip, page, mtd->writesize, chip->oob_poi, 2090 + mtd->oobsize); 3695 2091 } 3696 2092 EXPORT_SYMBOL(nand_write_oob_std); 3697 2093 ··· 3697 2113 { 3698 2114 int chunk = chip->ecc.bytes + chip->ecc.prepad + chip->ecc.postpad; 3699 2115 int eccsize = chip->ecc.size, length = mtd->oobsize; 3700 - int i, len, pos, status = 0, sndcmd = 0, steps = chip->ecc.steps; 2116 + int ret, i, len, pos, sndcmd = 0, steps = chip->ecc.steps; 3701 2117 const uint8_t *bufpoi = chip->oob_poi; 3702 2118 3703 2119 /* ··· 3711 2127 } else 3712 2128 pos = eccsize; 3713 2129 3714 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, pos, page); 2130 + ret = nand_prog_page_begin_op(chip, page, pos, NULL, 0); 2131 + if (ret) 2132 + return ret; 2133 + 3715 2134 for (i = 0; i < steps; i++) { 3716 2135 if (sndcmd) { 3717 2136 if (mtd->writesize <= 512) { ··· 3723 2136 len = eccsize; 3724 2137 while (len > 0) { 3725 2138 int num = min_t(int, len, 4); 3726 - chip->write_buf(mtd, (uint8_t *)&fill, 3727 - 
num); 2139 + 2140 + ret = nand_write_data_op(chip, &fill, 2141 + num, false); 2142 + if (ret) 2143 + return ret; 2144 + 3728 2145 len -= num; 3729 2146 } 3730 2147 } else { 3731 2148 pos = eccsize + i * (eccsize + chunk); 3732 - chip->cmdfunc(mtd, NAND_CMD_RNDIN, pos, -1); 2149 + ret = nand_change_write_column_op(chip, pos, 2150 + NULL, 0, 2151 + false); 2152 + if (ret) 2153 + return ret; 3733 2154 } 3734 2155 } else 3735 2156 sndcmd = 1; 3736 2157 len = min_t(int, length, chunk); 3737 - chip->write_buf(mtd, bufpoi, len); 2158 + 2159 + ret = nand_write_data_op(chip, bufpoi, len, false); 2160 + if (ret) 2161 + return ret; 2162 + 3738 2163 bufpoi += len; 3739 2164 length -= len; 3740 2165 } 3741 - if (length > 0) 3742 - chip->write_buf(mtd, bufpoi, length); 2166 + if (length > 0) { 2167 + ret = nand_write_data_op(chip, bufpoi, length, false); 2168 + if (ret) 2169 + return ret; 2170 + } 3743 2171 3744 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 3745 - status = chip->waitfunc(mtd, chip); 3746 - 3747 - return status & NAND_STATUS_FAIL ? -EIO : 0; 2172 + return nand_prog_page_end_op(chip); 3748 2173 } 3749 2174 EXPORT_SYMBOL(nand_write_oob_syndrome); 3750 2175 ··· 3771 2172 static int nand_do_read_oob(struct mtd_info *mtd, loff_t from, 3772 2173 struct mtd_oob_ops *ops) 3773 2174 { 2175 + unsigned int max_bitflips = 0; 3774 2176 int page, realpage, chipnr; 3775 2177 struct nand_chip *chip = mtd_to_nand(mtd); 3776 2178 struct mtd_ecc_stats stats; ··· 3814 2214 nand_wait_ready(mtd); 3815 2215 } 3816 2216 2217 + max_bitflips = max_t(unsigned int, max_bitflips, ret); 2218 + 3817 2219 readlen -= len; 3818 2220 if (!readlen) 3819 2221 break; ··· 3841 2239 if (mtd->ecc_stats.failed - stats.failed) 3842 2240 return -EBADMSG; 3843 2241 3844 - return mtd->ecc_stats.corrected - stats.corrected ? 
-EUCLEAN : 0; 2242 + return max_bitflips; 3845 2243 } 3846 2244 3847 2245 /** ··· 3889 2287 int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 3890 2288 const uint8_t *buf, int oob_required, int page) 3891 2289 { 3892 - chip->write_buf(mtd, buf, mtd->writesize); 3893 - if (oob_required) 3894 - chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 2290 + int ret; 3895 2291 3896 - return 0; 2292 + ret = nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize); 2293 + if (ret) 2294 + return ret; 2295 + 2296 + if (oob_required) { 2297 + ret = nand_write_data_op(chip, chip->oob_poi, mtd->oobsize, 2298 + false); 2299 + if (ret) 2300 + return ret; 2301 + } 2302 + 2303 + return nand_prog_page_end_op(chip); 3897 2304 } 3898 2305 EXPORT_SYMBOL(nand_write_page_raw); 3899 2306 ··· 3924 2313 int eccsize = chip->ecc.size; 3925 2314 int eccbytes = chip->ecc.bytes; 3926 2315 uint8_t *oob = chip->oob_poi; 3927 - int steps, size; 2316 + int steps, size, ret; 2317 + 2318 + ret = nand_prog_page_begin_op(chip, page, 0, NULL, 0); 2319 + if (ret) 2320 + return ret; 3928 2321 3929 2322 for (steps = chip->ecc.steps; steps > 0; steps--) { 3930 - chip->write_buf(mtd, buf, eccsize); 2323 + ret = nand_write_data_op(chip, buf, eccsize, false); 2324 + if (ret) 2325 + return ret; 2326 + 3931 2327 buf += eccsize; 3932 2328 3933 2329 if (chip->ecc.prepad) { 3934 - chip->write_buf(mtd, oob, chip->ecc.prepad); 2330 + ret = nand_write_data_op(chip, oob, chip->ecc.prepad, 2331 + false); 2332 + if (ret) 2333 + return ret; 2334 + 3935 2335 oob += chip->ecc.prepad; 3936 2336 } 3937 2337 3938 - chip->write_buf(mtd, oob, eccbytes); 2338 + ret = nand_write_data_op(chip, oob, eccbytes, false); 2339 + if (ret) 2340 + return ret; 2341 + 3939 2342 oob += eccbytes; 3940 2343 3941 2344 if (chip->ecc.postpad) { 3942 - chip->write_buf(mtd, oob, chip->ecc.postpad); 2345 + ret = nand_write_data_op(chip, oob, chip->ecc.postpad, 2346 + false); 2347 + if (ret) 2348 + return ret; 2349 + 3943 2350 oob += 
chip->ecc.postpad; 3944 2351 } 3945 2352 } 3946 2353 3947 2354 size = mtd->oobsize - (oob - chip->oob_poi); 3948 - if (size) 3949 - chip->write_buf(mtd, oob, size); 2355 + if (size) { 2356 + ret = nand_write_data_op(chip, oob, size, false); 2357 + if (ret) 2358 + return ret; 2359 + } 3950 2360 3951 - return 0; 2361 + return nand_prog_page_end_op(chip); 3952 2362 } 3953 2363 /** 3954 2364 * nand_write_page_swecc - [REPLACEABLE] software ECC based page write function ··· 3986 2354 int i, eccsize = chip->ecc.size, ret; 3987 2355 int eccbytes = chip->ecc.bytes; 3988 2356 int eccsteps = chip->ecc.steps; 3989 - uint8_t *ecc_calc = chip->buffers->ecccalc; 2357 + uint8_t *ecc_calc = chip->ecc.calc_buf; 3990 2358 const uint8_t *p = buf; 3991 2359 3992 2360 /* Software ECC calculation */ ··· 4016 2384 int i, eccsize = chip->ecc.size, ret; 4017 2385 int eccbytes = chip->ecc.bytes; 4018 2386 int eccsteps = chip->ecc.steps; 4019 - uint8_t *ecc_calc = chip->buffers->ecccalc; 2387 + uint8_t *ecc_calc = chip->ecc.calc_buf; 4020 2388 const uint8_t *p = buf; 2389 + 2390 + ret = nand_prog_page_begin_op(chip, page, 0, NULL, 0); 2391 + if (ret) 2392 + return ret; 4021 2393 4022 2394 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 4023 2395 chip->ecc.hwctl(mtd, NAND_ECC_WRITE); 4024 - chip->write_buf(mtd, p, eccsize); 2396 + 2397 + ret = nand_write_data_op(chip, p, eccsize, false); 2398 + if (ret) 2399 + return ret; 2400 + 4025 2401 chip->ecc.calculate(mtd, p, &ecc_calc[i]); 4026 2402 } 4027 2403 ··· 4038 2398 if (ret) 4039 2399 return ret; 4040 2400 4041 - chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 2401 + ret = nand_write_data_op(chip, chip->oob_poi, mtd->oobsize, false); 2402 + if (ret) 2403 + return ret; 4042 2404 4043 - return 0; 2405 + return nand_prog_page_end_op(chip); 4044 2406 } 4045 2407 4046 2408 ··· 4062 2420 int oob_required, int page) 4063 2421 { 4064 2422 uint8_t *oob_buf = chip->oob_poi; 4065 - uint8_t *ecc_calc = chip->buffers->ecccalc; 2423 + 
uint8_t *ecc_calc = chip->ecc.calc_buf; 4066 2424 int ecc_size = chip->ecc.size; 4067 2425 int ecc_bytes = chip->ecc.bytes; 4068 2426 int ecc_steps = chip->ecc.steps; ··· 4071 2429 int oob_bytes = mtd->oobsize / ecc_steps; 4072 2430 int step, ret; 4073 2431 2432 + ret = nand_prog_page_begin_op(chip, page, 0, NULL, 0); 2433 + if (ret) 2434 + return ret; 2435 + 4074 2436 for (step = 0; step < ecc_steps; step++) { 4075 2437 /* configure controller for WRITE access */ 4076 2438 chip->ecc.hwctl(mtd, NAND_ECC_WRITE); 4077 2439 4078 2440 /* write data (untouched subpages already masked by 0xFF) */ 4079 - chip->write_buf(mtd, buf, ecc_size); 2441 + ret = nand_write_data_op(chip, buf, ecc_size, false); 2442 + if (ret) 2443 + return ret; 4080 2444 4081 2445 /* mask ECC of un-touched subpages by padding 0xFF */ 4082 2446 if ((step < start_step) || (step > end_step)) ··· 4102 2454 4103 2455 /* copy calculated ECC for whole page to chip->buffer->oob */ 4104 2456 /* this include masked-value(0xFF) for unwritten subpages */ 4105 - ecc_calc = chip->buffers->ecccalc; 2457 + ecc_calc = chip->ecc.calc_buf; 4106 2458 ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0, 4107 2459 chip->ecc.total); 4108 2460 if (ret) 4109 2461 return ret; 4110 2462 4111 2463 /* write OOB buffer to NAND device */ 4112 - chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 2464 + ret = nand_write_data_op(chip, chip->oob_poi, mtd->oobsize, false); 2465 + if (ret) 2466 + return ret; 4113 2467 4114 - return 0; 2468 + return nand_prog_page_end_op(chip); 4115 2469 } 4116 2470 4117 2471 ··· 4138 2488 int eccsteps = chip->ecc.steps; 4139 2489 const uint8_t *p = buf; 4140 2490 uint8_t *oob = chip->oob_poi; 2491 + int ret; 2492 + 2493 + ret = nand_prog_page_begin_op(chip, page, 0, NULL, 0); 2494 + if (ret) 2495 + return ret; 4141 2496 4142 2497 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 4143 - 4144 2498 chip->ecc.hwctl(mtd, NAND_ECC_WRITE); 4145 - chip->write_buf(mtd, p, eccsize); 
2499 + 2500 + ret = nand_write_data_op(chip, p, eccsize, false); 2501 + if (ret) 2502 + return ret; 4146 2503 4147 2504 if (chip->ecc.prepad) { 4148 - chip->write_buf(mtd, oob, chip->ecc.prepad); 2505 + ret = nand_write_data_op(chip, oob, chip->ecc.prepad, 2506 + false); 2507 + if (ret) 2508 + return ret; 2509 + 4149 2510 oob += chip->ecc.prepad; 4150 2511 } 4151 2512 4152 2513 chip->ecc.calculate(mtd, p, oob); 4153 - chip->write_buf(mtd, oob, eccbytes); 2514 + 2515 + ret = nand_write_data_op(chip, oob, eccbytes, false); 2516 + if (ret) 2517 + return ret; 2518 + 4154 2519 oob += eccbytes; 4155 2520 4156 2521 if (chip->ecc.postpad) { 4157 - chip->write_buf(mtd, oob, chip->ecc.postpad); 2522 + ret = nand_write_data_op(chip, oob, chip->ecc.postpad, 2523 + false); 2524 + if (ret) 2525 + return ret; 2526 + 4158 2527 oob += chip->ecc.postpad; 4159 2528 } 4160 2529 } 4161 2530 4162 2531 /* Calculate remaining oob bytes */ 4163 2532 i = mtd->oobsize - (oob - chip->oob_poi); 4164 - if (i) 4165 - chip->write_buf(mtd, oob, i); 2533 + if (i) { 2534 + ret = nand_write_data_op(chip, oob, i, false); 2535 + if (ret) 2536 + return ret; 2537 + } 4166 2538 4167 - return 0; 2539 + return nand_prog_page_end_op(chip); 4168 2540 } 4169 2541 4170 2542 /** ··· 4212 2540 else 4213 2541 subpage = 0; 4214 2542 4215 - if (nand_standard_page_accessors(&chip->ecc)) 4216 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); 4217 - 4218 2543 if (unlikely(raw)) 4219 2544 status = chip->ecc.write_page_raw(mtd, chip, buf, 4220 2545 oob_required, page); ··· 4224 2555 4225 2556 if (status < 0) 4226 2557 return status; 4227 - 4228 - if (nand_standard_page_accessors(&chip->ecc)) { 4229 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 4230 - 4231 - status = chip->waitfunc(mtd, chip); 4232 - if (status & NAND_STATUS_FAIL) 4233 - return -EIO; 4234 - } 4235 2558 4236 2559 return 0; 4237 2560 } ··· 4349 2688 if (part_pagewr) 4350 2689 bytes = min_t(int, bytes - column, writelen); 4351 2690 chip->pagebuf = -1; 
4352 - memset(chip->buffers->databuf, 0xff, mtd->writesize); 4353 - memcpy(&chip->buffers->databuf[column], buf, bytes); 4354 - wbuf = chip->buffers->databuf; 2691 + memset(chip->data_buf, 0xff, mtd->writesize); 2692 + memcpy(&chip->data_buf[column], buf, bytes); 2693 + wbuf = chip->data_buf; 4355 2694 } 4356 2695 4357 2696 if (unlikely(oob)) { ··· 4546 2885 static int single_erase(struct mtd_info *mtd, int page) 4547 2886 { 4548 2887 struct nand_chip *chip = mtd_to_nand(mtd); 4549 - /* Send commands to erase a block */ 4550 - chip->cmdfunc(mtd, NAND_CMD_ERASE1, -1, page); 4551 - chip->cmdfunc(mtd, NAND_CMD_ERASE2, -1, -1); 2888 + unsigned int eraseblock; 4552 2889 4553 - return chip->waitfunc(mtd, chip); 2890 + /* Send commands to erase a block */ 2891 + eraseblock = page >> (chip->phys_erase_shift - chip->page_shift); 2892 + 2893 + return nand_erase_op(chip, eraseblock); 4554 2894 } 4555 2895 4556 2896 /** ··· 4635 2973 status = chip->erase(mtd, page & chip->pagemask); 4636 2974 4637 2975 /* See if block erase succeeded */ 4638 - if (status & NAND_STATUS_FAIL) { 2976 + if (status) { 4639 2977 pr_debug("%s: failed erase, page 0x%08x\n", 4640 2978 __func__, page); 4641 2979 instr->state = MTD_ERASE_FAILED; ··· 4778 3116 static int nand_onfi_set_features(struct mtd_info *mtd, struct nand_chip *chip, 4779 3117 int addr, uint8_t *subfeature_param) 4780 3118 { 4781 - int status; 4782 - int i; 4783 - 4784 3119 if (!chip->onfi_version || 4785 3120 !(le16_to_cpu(chip->onfi_params.opt_cmd) 4786 3121 & ONFI_OPT_CMD_SET_GET_FEATURES)) 4787 3122 return -EINVAL; 4788 3123 4789 - chip->cmdfunc(mtd, NAND_CMD_SET_FEATURES, addr, -1); 4790 - for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i) 4791 - chip->write_byte(mtd, subfeature_param[i]); 4792 - 4793 - status = chip->waitfunc(mtd, chip); 4794 - if (status & NAND_STATUS_FAIL) 4795 - return -EIO; 4796 - return 0; 3124 + return nand_set_features_op(chip, addr, subfeature_param); 4797 3125 } 4798 3126 4799 3127 /** ··· 4796 3144 
static int nand_onfi_get_features(struct mtd_info *mtd, struct nand_chip *chip, 4797 3145 int addr, uint8_t *subfeature_param) 4798 3146 { 4799 - int i; 4800 - 4801 3147 if (!chip->onfi_version || 4802 3148 !(le16_to_cpu(chip->onfi_params.opt_cmd) 4803 3149 & ONFI_OPT_CMD_SET_GET_FEATURES)) 4804 3150 return -EINVAL; 4805 3151 4806 - chip->cmdfunc(mtd, NAND_CMD_GET_FEATURES, addr, -1); 4807 - for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i) 4808 - *subfeature_param++ = chip->read_byte(mtd); 4809 - return 0; 3152 + return nand_get_features_op(chip, addr, subfeature_param); 4810 3153 } 4811 3154 4812 3155 /** ··· 4867 3220 chip->chip_delay = 20; 4868 3221 4869 3222 /* check, if a user supplied command function given */ 4870 - if (chip->cmdfunc == NULL) 3223 + if (!chip->cmdfunc && !chip->exec_op) 4871 3224 chip->cmdfunc = nand_command; 4872 3225 4873 3226 /* check, if a user supplied wait function given */ ··· 4944 3297 static int nand_flash_detect_ext_param_page(struct nand_chip *chip, 4945 3298 struct nand_onfi_params *p) 4946 3299 { 4947 - struct mtd_info *mtd = nand_to_mtd(chip); 4948 3300 struct onfi_ext_param_page *ep; 4949 3301 struct onfi_ext_section *s; 4950 3302 struct onfi_ext_ecc_info *ecc; 4951 3303 uint8_t *cursor; 4952 - int ret = -EINVAL; 3304 + int ret; 4953 3305 int len; 4954 3306 int i; 4955 3307 ··· 4958 3312 return -ENOMEM; 4959 3313 4960 3314 /* Send our own NAND_CMD_PARAM. */ 4961 - chip->cmdfunc(mtd, NAND_CMD_PARAM, 0, -1); 3315 + ret = nand_read_param_page_op(chip, 0, NULL, 0); 3316 + if (ret) 3317 + goto ext_out; 4962 3318 4963 3319 /* Use the Change Read Column command to skip the ONFI param pages. */ 4964 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 4965 - sizeof(*p) * p->num_of_param_pages , -1); 3320 + ret = nand_change_read_column_op(chip, 3321 + sizeof(*p) * p->num_of_param_pages, 3322 + ep, len, true); 3323 + if (ret) 3324 + goto ext_out; 4966 3325 4967 - /* Read out the Extended Parameter Page. 
*/ 4968 - chip->read_buf(mtd, (uint8_t *)ep, len); 3326 + ret = -EINVAL; 4969 3327 if ((onfi_crc16(ONFI_CRC_BASE, ((uint8_t *)ep) + 2, len - 2) 4970 3328 != le16_to_cpu(ep->crc))) { 4971 3329 pr_debug("fail in the CRC.\n"); ··· 5022 3372 { 5023 3373 struct mtd_info *mtd = nand_to_mtd(chip); 5024 3374 struct nand_onfi_params *p = &chip->onfi_params; 5025 - int i, j; 5026 - int val; 3375 + char id[4]; 3376 + int i, ret, val; 5027 3377 5028 3378 /* Try ONFI for unknown chip or LP */ 5029 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x20, -1); 5030 - if (chip->read_byte(mtd) != 'O' || chip->read_byte(mtd) != 'N' || 5031 - chip->read_byte(mtd) != 'F' || chip->read_byte(mtd) != 'I') 3379 + ret = nand_readid_op(chip, 0x20, id, sizeof(id)); 3380 + if (ret || strncmp(id, "ONFI", 4)) 5032 3381 return 0; 5033 3382 5034 - chip->cmdfunc(mtd, NAND_CMD_PARAM, 0, -1); 3383 + ret = nand_read_param_page_op(chip, 0, NULL, 0); 3384 + if (ret) 3385 + return 0; 3386 + 5035 3387 for (i = 0; i < 3; i++) { 5036 - for (j = 0; j < sizeof(*p); j++) 5037 - ((uint8_t *)p)[j] = chip->read_byte(mtd); 3388 + ret = nand_read_data_op(chip, p, sizeof(*p), true); 3389 + if (ret) 3390 + return 0; 3391 + 5038 3392 if (onfi_crc16(ONFI_CRC_BASE, (uint8_t *)p, 254) == 5039 3393 le16_to_cpu(p->crc)) { 5040 3394 break; ··· 5129 3475 struct mtd_info *mtd = nand_to_mtd(chip); 5130 3476 struct nand_jedec_params *p = &chip->jedec_params; 5131 3477 struct jedec_ecc_info *ecc; 5132 - int val; 5133 - int i, j; 3478 + char id[5]; 3479 + int i, val, ret; 5134 3480 5135 3481 /* Try JEDEC for unknown chip or LP */ 5136 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x40, -1); 5137 - if (chip->read_byte(mtd) != 'J' || chip->read_byte(mtd) != 'E' || 5138 - chip->read_byte(mtd) != 'D' || chip->read_byte(mtd) != 'E' || 5139 - chip->read_byte(mtd) != 'C') 3482 + ret = nand_readid_op(chip, 0x40, id, sizeof(id)); 3483 + if (ret || strncmp(id, "JEDEC", sizeof(id))) 5140 3484 return 0; 5141 3485 5142 - chip->cmdfunc(mtd, NAND_CMD_PARAM, 0x40, 
-1); 3486 + ret = nand_read_param_page_op(chip, 0x40, NULL, 0); 3487 + if (ret) 3488 + return 0; 3489 + 5143 3490 for (i = 0; i < 3; i++) { 5144 - for (j = 0; j < sizeof(*p); j++) 5145 - ((uint8_t *)p)[j] = chip->read_byte(mtd); 3491 + ret = nand_read_data_op(chip, p, sizeof(*p), true); 3492 + if (ret) 3493 + return 0; 5146 3494 5147 3495 if (onfi_crc16(ONFI_CRC_BASE, (uint8_t *)p, 510) == 5148 3496 le16_to_cpu(p->crc)) ··· 5423 3767 { 5424 3768 const struct nand_manufacturer *manufacturer; 5425 3769 struct mtd_info *mtd = nand_to_mtd(chip); 5426 - int busw; 5427 - int i; 3770 + int busw, ret; 5428 3771 u8 *id_data = chip->id.data; 5429 3772 u8 maf_id, dev_id; 5430 3773 ··· 5431 3776 * Reset the chip, required by some chips (e.g. Micron MT29FxGxxxxx) 5432 3777 * after power-up. 5433 3778 */ 5434 - nand_reset(chip, 0); 3779 + ret = nand_reset(chip, 0); 3780 + if (ret) 3781 + return ret; 5435 3782 5436 3783 /* Select the device */ 5437 3784 chip->select_chip(mtd, 0); 5438 3785 5439 3786 /* Send the command for reading device ID */ 5440 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1); 3787 + ret = nand_readid_op(chip, 0, id_data, 2); 3788 + if (ret) 3789 + return ret; 5441 3790 5442 3791 /* Read manufacturer and device IDs */ 5443 - maf_id = chip->read_byte(mtd); 5444 - dev_id = chip->read_byte(mtd); 3792 + maf_id = id_data[0]; 3793 + dev_id = id_data[1]; 5445 3794 5446 3795 /* 5447 3796 * Try again to make sure, as some systems the bus-hold or other ··· 5454 3795 * not match, ignore the device completely. 
5455 3796 */ 5456 3797 5457 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1); 5458 - 5459 3798 /* Read entire ID string */ 5460 - for (i = 0; i < ARRAY_SIZE(chip->id.data); i++) 5461 - id_data[i] = chip->read_byte(mtd); 3799 + ret = nand_readid_op(chip, 0, id_data, sizeof(chip->id.data)); 3800 + if (ret) 3801 + return ret; 5462 3802 5463 3803 if (id_data[0] != maf_id || id_data[1] != dev_id) { 5464 3804 pr_info("second ID read did not match %02x,%02x against %02x,%02x\n", ··· 5749 4091 struct nand_chip *chip = mtd_to_nand(mtd); 5750 4092 int ret; 5751 4093 4094 + /* Enforce the right timings for reset/detection */ 4095 + onfi_fill_data_interface(chip, NAND_SDR_IFACE, 0); 4096 + 5752 4097 ret = nand_dt_init(chip); 5753 4098 if (ret) 5754 4099 return ret; ··· 5759 4098 if (!mtd->name && mtd->dev.parent) 5760 4099 mtd->name = dev_name(mtd->dev.parent); 5761 4100 5762 - if ((!chip->cmdfunc || !chip->select_chip) && !chip->cmd_ctrl) { 4101 + /* 4102 + * ->cmdfunc() is legacy and will only be used if ->exec_op() is not 4103 + * populated. 4104 + */ 4105 + if (!chip->exec_op) { 5763 4106 /* 5764 - * Default functions assigned for chip_select() and 5765 - * cmdfunc() both expect cmd_ctrl() to be populated, 5766 - * so we need to check that that's the case 4107 + * Default functions assigned for ->cmdfunc() and 4108 + * ->select_chip() both expect ->cmd_ctrl() to be populated. 
5767 4109 */ 5768 - pr_err("chip.cmd_ctrl() callback is not provided"); 5769 - return -EINVAL; 4110 + if ((!chip->cmdfunc || !chip->select_chip) && !chip->cmd_ctrl) { 4111 + pr_err("->cmd_ctrl() should be provided\n"); 4112 + return -EINVAL; 4113 + } 5770 4114 } 4115 + 5771 4116 /* Set the default functions */ 5772 4117 nand_set_defaults(chip); 5773 4118 ··· 5793 4126 5794 4127 /* Check for a chip array */ 5795 4128 for (i = 1; i < maxchips; i++) { 4129 + u8 id[2]; 4130 + 5796 4131 /* See comment in nand_get_flash_type for reset */ 5797 4132 nand_reset(chip, i); 5798 4133 5799 4134 chip->select_chip(mtd, i); 5800 4135 /* Send the command for reading device ID */ 5801 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1); 4136 + nand_readid_op(chip, 0, id, sizeof(id)); 5802 4137 /* Read manufacturer and device IDs */ 5803 - if (nand_maf_id != chip->read_byte(mtd) || 5804 - nand_dev_id != chip->read_byte(mtd)) { 4138 + if (nand_maf_id != id[0] || nand_dev_id != id[1]) { 5805 4139 chip->select_chip(mtd, -1); 5806 4140 break; 5807 4141 } ··· 6169 4501 return corr >= ds_corr && ecc->strength >= chip->ecc_strength_ds; 6170 4502 } 6171 4503 6172 - static bool invalid_ecc_page_accessors(struct nand_chip *chip) 6173 - { 6174 - struct nand_ecc_ctrl *ecc = &chip->ecc; 6175 - 6176 - if (nand_standard_page_accessors(ecc)) 6177 - return false; 6178 - 6179 - /* 6180 - * NAND_ECC_CUSTOM_PAGE_ACCESS flag is set, make sure the NAND 6181 - * controller driver implements all the page accessors because 6182 - * default helpers are not suitable when the core does not 6183 - * send the READ0/PAGEPROG commands. 
6184 - */ 6185 - return (!ecc->read_page || !ecc->write_page || 6186 - !ecc->read_page_raw || !ecc->write_page_raw || 6187 - (NAND_HAS_SUBPAGE_READ(chip) && !ecc->read_subpage) || 6188 - (NAND_HAS_SUBPAGE_WRITE(chip) && !ecc->write_subpage && 6189 - ecc->hwctl && ecc->calculate)); 6190 - } 6191 - 6192 4504 /** 6193 4505 * nand_scan_tail - [NAND Interface] Scan for the NAND device 6194 4506 * @mtd: MTD device structure ··· 6181 4533 { 6182 4534 struct nand_chip *chip = mtd_to_nand(mtd); 6183 4535 struct nand_ecc_ctrl *ecc = &chip->ecc; 6184 - struct nand_buffers *nbuf = NULL; 6185 4536 int ret, i; 6186 4537 6187 4538 /* New bad blocks should be marked in OOB, flash-based BBT, or both */ ··· 6189 4542 return -EINVAL; 6190 4543 } 6191 4544 6192 - if (invalid_ecc_page_accessors(chip)) { 6193 - pr_err("Invalid ECC page accessors setup\n"); 6194 - return -EINVAL; 6195 - } 6196 - 6197 - if (!(chip->options & NAND_OWN_BUFFERS)) { 6198 - nbuf = kzalloc(sizeof(*nbuf), GFP_KERNEL); 6199 - if (!nbuf) 6200 - return -ENOMEM; 6201 - 6202 - nbuf->ecccalc = kmalloc(mtd->oobsize, GFP_KERNEL); 6203 - if (!nbuf->ecccalc) { 6204 - ret = -ENOMEM; 6205 - goto err_free_nbuf; 6206 - } 6207 - 6208 - nbuf->ecccode = kmalloc(mtd->oobsize, GFP_KERNEL); 6209 - if (!nbuf->ecccode) { 6210 - ret = -ENOMEM; 6211 - goto err_free_nbuf; 6212 - } 6213 - 6214 - nbuf->databuf = kmalloc(mtd->writesize + mtd->oobsize, 6215 - GFP_KERNEL); 6216 - if (!nbuf->databuf) { 6217 - ret = -ENOMEM; 6218 - goto err_free_nbuf; 6219 - } 6220 - 6221 - chip->buffers = nbuf; 6222 - } else if (!chip->buffers) { 4545 + chip->data_buf = kmalloc(mtd->writesize + mtd->oobsize, GFP_KERNEL); 4546 + if (!chip->data_buf) 6223 4547 return -ENOMEM; 6224 - } 6225 4548 6226 4549 /* 6227 4550 * FIXME: some NAND manufacturer drivers expect the first die to be ··· 6203 4586 ret = nand_manufacturer_init(chip); 6204 4587 chip->select_chip(mtd, -1); 6205 4588 if (ret) 6206 - goto err_free_nbuf; 4589 + goto err_free_buf; 6207 4590 6208 4591 
/* Set the internal oob buffer location, just after the page data */ 6209 - chip->oob_poi = chip->buffers->databuf + mtd->writesize; 4592 + chip->oob_poi = chip->data_buf + mtd->writesize; 6210 4593 6211 4594 /* 6212 4595 * If no default placement scheme is given, select an appropriate one. ··· 6354 4737 goto err_nand_manuf_cleanup; 6355 4738 } 6356 4739 4740 + if (ecc->correct || ecc->calculate) { 4741 + ecc->calc_buf = kmalloc(mtd->oobsize, GFP_KERNEL); 4742 + ecc->code_buf = kmalloc(mtd->oobsize, GFP_KERNEL); 4743 + if (!ecc->calc_buf || !ecc->code_buf) { 4744 + ret = -ENOMEM; 4745 + goto err_nand_manuf_cleanup; 4746 + } 4747 + } 4748 + 6357 4749 /* For many systems, the standard OOB write also works for raw */ 6358 4750 if (!ecc->read_oob_raw) 6359 4751 ecc->read_oob_raw = ecc->read_oob; ··· 6479 4853 chip->select_chip(mtd, -1); 6480 4854 6481 4855 if (ret) 6482 - goto err_nand_data_iface_cleanup; 4856 + goto err_nand_manuf_cleanup; 6483 4857 } 6484 4858 6485 4859 /* Check, if we should skip the bad block table scan */ ··· 6489 4863 /* Build bad block table */ 6490 4864 ret = chip->scan_bbt(mtd); 6491 4865 if (ret) 6492 - goto err_nand_data_iface_cleanup; 4866 + goto err_nand_manuf_cleanup; 6493 4867 6494 4868 return 0; 6495 4869 6496 - err_nand_data_iface_cleanup: 6497 - nand_release_data_interface(chip); 6498 4870 6499 4871 err_nand_manuf_cleanup: 6500 4872 nand_manufacturer_cleanup(chip); 6501 4873 6502 - err_free_nbuf: 6503 - if (nbuf) { 6504 - kfree(nbuf->databuf); 6505 - kfree(nbuf->ecccode); 6506 - kfree(nbuf->ecccalc); 6507 - kfree(nbuf); 6508 - } 4874 + err_free_buf: 4875 + kfree(chip->data_buf); 4876 + kfree(ecc->code_buf); 4877 + kfree(ecc->calc_buf); 6509 4878 6510 4879 return ret; 6511 4880 } ··· 6548 4927 chip->ecc.algo == NAND_ECC_BCH) 6549 4928 nand_bch_free((struct nand_bch_control *)chip->ecc.priv); 6550 4929 6551 - nand_release_data_interface(chip); 6552 - 6553 4930 /* Free bad block table memory */ 6554 4931 kfree(chip->bbt); 6555 - if 
(!(chip->options & NAND_OWN_BUFFERS) && chip->buffers) { 6556 - kfree(chip->buffers->databuf); 6557 - kfree(chip->buffers->ecccode); 6558 - kfree(chip->buffers->ecccalc); 6559 - kfree(chip->buffers); 6560 - } 4932 + kfree(chip->data_buf); 4933 + kfree(chip->ecc.code_buf); 4934 + kfree(chip->ecc.calc_buf); 6561 4935 6562 4936 /* Free bad block descriptor memory */ 6563 4937 if (chip->badblock_pattern && chip->badblock_pattern->options
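The reworked single_erase() above no longer issues ERASE1/ERASE2 by hand; it converts the page number into an eraseblock index and hands that to nand_erase_op(). A minimal sketch of that shift arithmetic, with illustrative geometry (2 KiB pages in 128 KiB blocks, i.e. 64 pages per block; the `_EX` names are ours, not the kernel's):

```c
#include <assert.h>

/* Illustrative geometry: 2 KiB pages (1 << 11), 128 KiB blocks (1 << 17). */
#define PAGE_SHIFT_EX        11
#define PHYS_ERASE_SHIFT_EX  17

/* Same computation as single_erase(): pages-per-block is a power of two,
 * so dividing the page number by it reduces to a right shift by the
 * difference of the two shift values. */
static unsigned int page_to_eraseblock(unsigned int page)
{
	return page >> (PHYS_ERASE_SHIFT_EX - PAGE_SHIFT_EX);
}
```

With this geometry, pages 0..63 map to eraseblock 0, page 64 starts eraseblock 1, and so on.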
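The ONFI and JEDEC detection paths above still validate each parameter page copy with onfi_crc16() against ONFI_CRC_BASE. A self-contained re-implementation of that CRC, written from the well-known ONFI definition (polynomial 0x8005, MSB-first, no reflection, seed 0x4F4E); the `_ex` names are ours:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ONFI_CRC_BASE_EX 0x4F4E  /* CRC seed defined by the ONFI spec */

/* ONFI parameter-page CRC: polynomial x^16 + x^15 + x^2 + 1 (0x8005),
 * processed MSB-first with no bit reflection. Both references to crc
 * on the update line see the pre-shift value, as the algorithm requires. */
static uint16_t onfi_crc16_ex(uint16_t crc, const uint8_t *p, size_t len)
{
	while (len--) {
		crc ^= (uint16_t)(*p++) << 8;
		for (int i = 0; i < 8; i++)
			crc = (crc << 1) ^ ((crc & 0x8000) ? 0x8005 : 0);
	}
	return crc;
}
```

The detection code computes this over the first 254 bytes of a parameter page copy and compares it with the little-endian CRC field at the end; a zero-length input returns the seed unchanged.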
+1 -1
drivers/mtd/nand/nand_bbt.c
··· 898 898 { 899 899 struct nand_chip *this = mtd_to_nand(mtd); 900 900 901 - return create_bbt(mtd, this->buffers->databuf, bd, -1); 901 + return create_bbt(mtd, this->data_buf, bd, -1); 902 902 } 903 903 904 904 /**
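create_bbt() now scans through chip->data_buf, the single buffer that the reworked nand_scan_tail() allocates to hold one page of data followed by its OOB bytes (chip->oob_poi points just past the page data). A hedged userspace sketch of that layout, with example sizes and invented struct/helper names:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Mirror of the nand_scan_tail() allocation: one malloc covering page
 * data plus OOB, with oob_poi derived rather than separately allocated. */
struct fake_chip {
	uint8_t *data_buf;
	uint8_t *oob_poi;
};

static int fake_chip_init(struct fake_chip *chip, size_t writesize,
			  size_t oobsize)
{
	chip->data_buf = malloc(writesize + oobsize);
	if (!chip->data_buf)
		return -1;
	chip->oob_poi = chip->data_buf + writesize;
	return 0;
}

/* Convenience wrapper: returns the offset of oob_poi inside data_buf,
 * or -1 on allocation failure. */
static long demo_oob_offset(size_t writesize, size_t oobsize)
{
	struct fake_chip c;
	long off;

	if (fake_chip_init(&c, writesize, oobsize))
		return -1;
	off = c.oob_poi - c.data_buf;
	free(c.data_buf);
	return off;
}
```

This is why the cleanup path in the diff frees data_buf but never oob_poi: the latter is an alias into the same allocation.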
+87 -42
drivers/mtd/nand/nand_hynix.c
··· 67 67 68 68 static bool hynix_nand_has_valid_jedecid(struct nand_chip *chip) 69 69 { 70 + u8 jedecid[5] = { }; 71 + int ret; 72 + 73 + ret = nand_readid_op(chip, 0x40, jedecid, sizeof(jedecid)); 74 + if (ret) 75 + return false; 76 + 77 + return !strncmp("JEDEC", jedecid, sizeof(jedecid)); 78 + } 79 + 80 + static int hynix_nand_cmd_op(struct nand_chip *chip, u8 cmd) 81 + { 70 82 struct mtd_info *mtd = nand_to_mtd(chip); 71 - u8 jedecid[6] = { }; 72 - int i = 0; 73 83 74 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x40, -1); 75 - for (i = 0; i < 5; i++) 76 - jedecid[i] = chip->read_byte(mtd); 84 + if (chip->exec_op) { 85 + struct nand_op_instr instrs[] = { 86 + NAND_OP_CMD(cmd, 0), 87 + }; 88 + struct nand_operation op = NAND_OPERATION(instrs); 77 89 78 - return !strcmp("JEDEC", jedecid); 90 + return nand_exec_op(chip, &op); 91 + } 92 + 93 + chip->cmdfunc(mtd, cmd, -1, -1); 94 + 95 + return 0; 96 + } 97 + 98 + static int hynix_nand_reg_write_op(struct nand_chip *chip, u8 addr, u8 val) 99 + { 100 + struct mtd_info *mtd = nand_to_mtd(chip); 101 + u16 column = ((u16)addr << 8) | addr; 102 + 103 + chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1); 104 + chip->write_byte(mtd, val); 105 + 106 + return 0; 79 107 } 80 108 81 109 static int hynix_nand_setup_read_retry(struct mtd_info *mtd, int retry_mode) ··· 111 83 struct nand_chip *chip = mtd_to_nand(mtd); 112 84 struct hynix_nand *hynix = nand_get_manufacturer_data(chip); 113 85 const u8 *values; 114 - int status; 115 - int i; 86 + int i, ret; 116 87 117 88 values = hynix->read_retry->values + 118 89 (retry_mode * hynix->read_retry->nregs); 119 90 120 91 /* Enter 'Set Hynix Parameters' mode */ 121 - chip->cmdfunc(mtd, NAND_HYNIX_CMD_SET_PARAMS, -1, -1); 92 + ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_SET_PARAMS); 93 + if (ret) 94 + return ret; 122 95 123 96 /* 124 97 * Configure the NAND in the requested read-retry mode. ··· 131 102 * probably tweaked at production in this case). 
132 103 */ 133 104 for (i = 0; i < hynix->read_retry->nregs; i++) { 134 - int column = hynix->read_retry->regs[i]; 135 - 136 - column |= column << 8; 137 - chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1); 138 - chip->write_byte(mtd, values[i]); 105 + ret = hynix_nand_reg_write_op(chip, hynix->read_retry->regs[i], 106 + values[i]); 107 + if (ret) 108 + return ret; 139 109 } 140 110 141 111 /* Apply the new settings. */ 142 - chip->cmdfunc(mtd, NAND_HYNIX_CMD_APPLY_PARAMS, -1, -1); 143 - 144 - status = chip->waitfunc(mtd, chip); 145 - if (status & NAND_STATUS_FAIL) 146 - return -EIO; 147 - 148 - return 0; 112 + return hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_APPLY_PARAMS); 149 113 } 150 114 151 115 /** ··· 194 172 const struct hynix_read_retry_otp *info, 195 173 void *buf) 196 174 { 197 - struct mtd_info *mtd = nand_to_mtd(chip); 198 - int i; 175 + int i, ret; 199 176 200 - chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 177 + ret = nand_reset_op(chip); 178 + if (ret) 179 + return ret; 201 180 202 - chip->cmdfunc(mtd, NAND_HYNIX_CMD_SET_PARAMS, -1, -1); 181 + ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_SET_PARAMS); 182 + if (ret) 183 + return ret; 203 184 204 185 for (i = 0; i < info->nregs; i++) { 205 - int column = info->regs[i]; 206 - 207 - column |= column << 8; 208 - chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1); 209 - chip->write_byte(mtd, info->values[i]); 186 + ret = hynix_nand_reg_write_op(chip, info->regs[i], 187 + info->values[i]); 188 + if (ret) 189 + return ret; 210 190 } 211 191 212 - chip->cmdfunc(mtd, NAND_HYNIX_CMD_APPLY_PARAMS, -1, -1); 192 + ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_APPLY_PARAMS); 193 + if (ret) 194 + return ret; 213 195 214 196 /* Sequence to enter OTP mode? 
*/ 215 - chip->cmdfunc(mtd, 0x17, -1, -1); 216 - chip->cmdfunc(mtd, 0x04, -1, -1); 217 - chip->cmdfunc(mtd, 0x19, -1, -1); 197 + ret = hynix_nand_cmd_op(chip, 0x17); 198 + if (ret) 199 + return ret; 200 + 201 + ret = hynix_nand_cmd_op(chip, 0x4); 202 + if (ret) 203 + return ret; 204 + 205 + ret = hynix_nand_cmd_op(chip, 0x19); 206 + if (ret) 207 + return ret; 218 208 219 209 /* Now read the page */ 220 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0x0, info->page); 221 - chip->read_buf(mtd, buf, info->size); 210 + ret = nand_read_page_op(chip, info->page, 0, buf, info->size); 211 + if (ret) 212 + return ret; 222 213 223 214 /* Put everything back to normal */ 224 - chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1); 225 - chip->cmdfunc(mtd, NAND_HYNIX_CMD_SET_PARAMS, 0x38, -1); 226 - chip->write_byte(mtd, 0x0); 227 - chip->cmdfunc(mtd, NAND_HYNIX_CMD_APPLY_PARAMS, -1, -1); 228 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0x0, -1); 215 + ret = nand_reset_op(chip); 216 + if (ret) 217 + return ret; 229 218 230 - return 0; 219 + ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_SET_PARAMS); 220 + if (ret) 221 + return ret; 222 + 223 + ret = hynix_nand_reg_write_op(chip, 0x38, 0); 224 + if (ret) 225 + return ret; 226 + 227 + ret = hynix_nand_cmd_op(chip, NAND_HYNIX_CMD_APPLY_PARAMS); 228 + if (ret) 229 + return ret; 230 + 231 + return nand_read_page_op(chip, 0, 0, NULL, 0); 231 232 } 232 233 233 234 #define NAND_HYNIX_1XNM_RR_COUNT_OFFS 0
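hynix_nand_reg_write_op() keeps the quirk of the code it replaces: the one-byte register address is duplicated into both bytes of the column value passed to cmdfunc(). A small sketch of that encoding (the helper name is ours):

```c
#include <assert.h>
#include <stdint.h>

/* Hynix 'Set Parameters' register writes send the register address in
 * both column-address cycles, as in hynix_nand_reg_write_op() above:
 * the high byte and the low byte of the column are the same address. */
static uint16_t hynix_reg_column(uint8_t addr)
{
	return ((uint16_t)addr << 8) | addr;
}
```

For instance, the 0x38 register used in the "put everything back to normal" sequence above becomes column 0x3838.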
+32 -51
drivers/mtd/nand/nand_micron.c
··· 117 117 uint8_t *buf, int oob_required, 118 118 int page) 119 119 { 120 - int status; 121 - int max_bitflips = 0; 120 + u8 status; 121 + int ret, max_bitflips = 0; 122 122 123 - micron_nand_on_die_ecc_setup(chip, true); 123 + ret = micron_nand_on_die_ecc_setup(chip, true); 124 + if (ret) 125 + return ret; 124 126 125 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page); 126 - chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1); 127 - status = chip->read_byte(mtd); 127 + ret = nand_read_page_op(chip, page, 0, NULL, 0); 128 + if (ret) 129 + goto out; 130 + 131 + ret = nand_status_op(chip, &status); 132 + if (ret) 133 + goto out; 134 + 135 + ret = nand_exit_status_op(chip); 136 + if (ret) 137 + goto out; 138 + 128 139 if (status & NAND_STATUS_FAIL) 129 140 mtd->ecc_stats.failed++; 141 + 130 142 /* 131 143 * The internal ECC doesn't tell us the number of bitflips 132 144 * that have been corrected, but tells us if it recommends to ··· 149 137 else if (status & NAND_STATUS_WRITE_RECOMMENDED) 150 138 max_bitflips = chip->ecc.strength; 151 139 152 - chip->cmdfunc(mtd, NAND_CMD_READ0, -1, -1); 140 + ret = nand_read_data_op(chip, buf, mtd->writesize, false); 141 + if (!ret && oob_required) 142 + ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, 143 + false); 153 144 154 - nand_read_page_raw(mtd, chip, buf, oob_required, page); 155 - 145 + out: 156 146 micron_nand_on_die_ecc_setup(chip, false); 157 147 158 - return max_bitflips; 148 + return ret ? 
ret : max_bitflips; 159 149 } 160 150 161 151 static int ··· 165 151 const uint8_t *buf, int oob_required, 166 152 int page) 167 153 { 168 - int status; 154 + int ret; 169 155 170 - micron_nand_on_die_ecc_setup(chip, true); 156 + ret = micron_nand_on_die_ecc_setup(chip, true); 157 + if (ret) 158 + return ret; 171 159 172 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); 173 - nand_write_page_raw(mtd, chip, buf, oob_required, page); 174 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 175 - status = chip->waitfunc(mtd, chip); 176 - 160 + ret = nand_write_page_raw(mtd, chip, buf, oob_required, page); 177 161 micron_nand_on_die_ecc_setup(chip, false); 178 162 179 - return status & NAND_STATUS_FAIL ? -EIO : 0; 180 - } 181 - 182 - static int 183 - micron_nand_read_page_raw_on_die_ecc(struct mtd_info *mtd, 184 - struct nand_chip *chip, 185 - uint8_t *buf, int oob_required, 186 - int page) 187 - { 188 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page); 189 - nand_read_page_raw(mtd, chip, buf, oob_required, page); 190 - 191 - return 0; 192 - } 193 - 194 - static int 195 - micron_nand_write_page_raw_on_die_ecc(struct mtd_info *mtd, 196 - struct nand_chip *chip, 197 - const uint8_t *buf, int oob_required, 198 - int page) 199 - { 200 - int status; 201 - 202 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); 203 - nand_write_page_raw(mtd, chip, buf, oob_required, page); 204 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 205 - status = chip->waitfunc(mtd, chip); 206 - 207 - return status & NAND_STATUS_FAIL ? 
-EIO : 0; 163 + return ret; 208 164 } 209 165 210 166 enum { ··· 269 285 return -EINVAL; 270 286 } 271 287 272 - chip->ecc.options = NAND_ECC_CUSTOM_PAGE_ACCESS; 273 288 chip->ecc.bytes = 8; 274 289 chip->ecc.size = 512; 275 290 chip->ecc.strength = 4; 276 291 chip->ecc.algo = NAND_ECC_BCH; 277 292 chip->ecc.read_page = micron_nand_read_page_on_die_ecc; 278 293 chip->ecc.write_page = micron_nand_write_page_on_die_ecc; 279 - chip->ecc.read_page_raw = 280 - micron_nand_read_page_raw_on_die_ecc; 281 - chip->ecc.write_page_raw = 282 - micron_nand_write_page_raw_on_die_ecc; 294 + chip->ecc.read_page_raw = nand_read_page_raw; 295 + chip->ecc.write_page_raw = nand_write_page_raw; 283 296 284 297 mtd_set_ooblayout(mtd, &micron_nand_on_die_ooblayout_ops); 285 298 }
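micron_nand_read_page_on_die_ecc() still derives max_bitflips from the status byte, because the on-die ECC only reports fail / rewrite-recommended rather than an exact bitflip count. A simplified sketch of that mapping; the bit positions below are assumptions for illustration, and an uncorrectable page is reported here as -1 instead of bumping mtd->ecc_stats as the driver does:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed status bit positions (illustrative, not the kernel headers). */
#define NAND_STATUS_FAIL_EX              0x01
#define NAND_STATUS_WRITE_RECOMMENDED_EX 0x08

/* Mirrors the decision in micron_nand_read_page_on_die_ecc(): a failed
 * status means uncorrectable data (-1 here), a 'rewrite recommended'
 * status pessimistically reports the full ECC strength as flipped. */
static int micron_max_bitflips(uint8_t status, int ecc_strength)
{
	if (status & NAND_STATUS_FAIL_EX)
		return -1;
	if (status & NAND_STATUS_WRITE_RECOMMENDED_EX)
		return ecc_strength;
	return 0;
}
```

Reporting the full strength on "rewrite recommended" is what pushes MTD to treat the block as needing scrubbing even though the read itself succeeded.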
+19
drivers/mtd/nand/nand_samsung.c
··· 91 91 } 92 92 } else { 93 93 nand_decode_ext_id(chip); 94 + 95 + if (nand_is_slc(chip)) { 96 + switch (chip->id.data[1]) { 97 + /* K9F4G08U0D-S[I|C]B0(T00) */ 98 + case 0xDC: 99 + chip->ecc_step_ds = 512; 100 + chip->ecc_strength_ds = 1; 101 + break; 102 + 103 + /* K9F1G08U0E 21nm chips do not support subpage write */ 104 + case 0xF1: 105 + if (chip->id.len > 4 && 106 + (chip->id.data[4] & GENMASK(1, 0)) == 0x1) 107 + chip->options |= NAND_NO_SUBPAGE_WRITE; 108 + break; 109 + default: 110 + break; 111 + } 112 + } 94 113 } 95 114 } 96 115
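The new Samsung SLC cases above key off the second and fifth ID bytes: 0xDC (K9F4G08U0D) forces the 1-bit/512-byte ECC requirement, and 0xF1 (K9F1G08U0E) loses subpage writes when bits [1:0] of the fifth ID byte equal 0x1. A sketch of the subpage-write check (GENMASK(1, 0) is simply 0x3; the helper name is ours):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors the K9F1G08U0E test above: device id 0xF1 with the low two
 * bits of the 5th ID byte equal to 0x1 marks a 21nm part that cannot
 * do subpage writes. */
static int samsung_no_subpage_write(const uint8_t *id, size_t len)
{
	return len > 4 && id[1] == 0xF1 && (id[4] & 0x3) == 0x1;
}
```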
+5 -16
drivers/mtd/nand/nand_timings.c
··· 283 283 EXPORT_SYMBOL(onfi_async_timing_mode_to_sdr_timings); 284 284 285 285 /** 286 - * onfi_init_data_interface - [NAND Interface] Initialize a data interface from 286 + * onfi_fill_data_interface - [NAND Interface] Initialize a data interface from 287 287 * given ONFI mode 288 - * @iface: The data interface to be initialized 289 288 * @mode: The ONFI timing mode 290 289 */ 291 - int onfi_init_data_interface(struct nand_chip *chip, 292 - struct nand_data_interface *iface, 290 + int onfi_fill_data_interface(struct nand_chip *chip, 293 291 enum nand_data_interface_type type, 294 292 int timing_mode) 295 293 { 294 + struct nand_data_interface *iface = &chip->data_interface; 295 + 296 296 if (type != NAND_SDR_IFACE) 297 297 return -EINVAL; 298 298 ··· 321 321 322 322 return 0; 323 323 } 324 - EXPORT_SYMBOL(onfi_init_data_interface); 325 - 326 - /** 327 - * nand_get_default_data_interface - [NAND Interface] Retrieve NAND 328 - * data interface for mode 0. This is used as default timing after 329 - * reset. 330 - */ 331 - const struct nand_data_interface *nand_get_default_data_interface(void) 332 - { 333 - return &onfi_sdr_timings[0]; 334 - } 335 - EXPORT_SYMBOL(nand_get_default_data_interface); 324 + EXPORT_SYMBOL(onfi_fill_data_interface);
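onfi_fill_data_interface() now fills the chip-owned chip->data_interface instead of handing out a pointer to shared const timings (which is also why nand_get_default_data_interface() is removed). A trimmed sketch of that fill-a-caller-struct pattern; the struct fields, table values, and all names below are placeholders, not the kernel's:

```c
#include <assert.h>

enum iface_type_ex { SDR_IFACE_EX, DDR_IFACE_EX };

struct sdr_timings_ex { int tRC_min_ps; };

/* Placeholder table indexed by ONFI async timing mode 0..5. */
static const struct sdr_timings_ex sdr_table_ex[] = {
	{ 100000 }, { 50000 }, { 35000 }, { 30000 }, { 25000 }, { 20000 },
};

struct data_interface_ex {
	enum iface_type_ex type;
	struct sdr_timings_ex timings;
};

/* Copy the requested mode's timings into the caller's struct, so the
 * caller may later tweak them without touching shared const data. */
static int fill_data_interface_ex(struct data_interface_ex *iface,
				  enum iface_type_ex type, int mode)
{
	if (type != SDR_IFACE_EX)
		return -1;
	if (mode < 0 || mode > 5)
		return -1;
	iface->type = type;
	iface->timings = sdr_table_ex[mode];
	return 0;
}

/* Demo helper: tRC for a mode, or -1 for an invalid request. */
static int demo_trc_ps(int mode)
{
	struct data_interface_ex iface;

	if (fill_data_interface_ex(&iface, SDR_IFACE_EX, mode))
		return -1;
	return iface.timings.tRC_min_ps;
}
```

Owning a mutable copy is what lets nand_scan_ident() force mode-0 timings for reset/detection and let drivers negotiate faster modes afterwards.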
+17 -11
drivers/mtd/nand/omap2.c
···
1530 1530 const uint8_t *buf, int oob_required, int page)
1531 1531 {
1532 1532 int ret;
1533 - uint8_t *ecc_calc = chip->buffers->ecccalc;
1533 + uint8_t *ecc_calc = chip->ecc.calc_buf;
1534 +
1535 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
1534 1536
1535 1537 /* Enable GPMC ecc engine */
1536 1538 chip->ecc.hwctl(mtd, NAND_ECC_WRITE);
···
1550 1548
1551 1549 /* Write ecc vector to OOB area */
1552 1550 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
1553 - return 0;
1551 +
1552 + return nand_prog_page_end_op(chip);
1554 1553 }
1555 1554
1556 1555 /**
···
1571 1568 u32 data_len, const u8 *buf,
1572 1569 int oob_required, int page)
1573 1570 {
1574 - u8 *ecc_calc = chip->buffers->ecccalc;
1571 + u8 *ecc_calc = chip->ecc.calc_buf;
1575 1572 int ecc_size = chip->ecc.size;
1576 1573 int ecc_bytes = chip->ecc.bytes;
1577 1574 int ecc_steps = chip->ecc.steps;
···
1585 1582 * ECC is calculated for all subpages but we choose
1586 1583 * only what we want.
1587 1584 */
1585 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
1588 1586
1589 1587 /* Enable GPMC ECC engine */
1590 1588 chip->ecc.hwctl(mtd, NAND_ECC_WRITE);
···
1609 1605
1610 1606 /* copy calculated ECC for whole page to chip->buffer->oob */
1611 1607 /* this include masked-value(0xFF) for unwritten subpages */
1612 - ecc_calc = chip->buffers->ecccalc;
1608 + ecc_calc = chip->ecc.calc_buf;
1613 1609 ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0,
1614 1610 chip->ecc.total);
1615 1611 if (ret)
···
1618 1614 /* write OOB buffer to NAND device */
1619 1615 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
1620 1616
1621 - return 0;
1617 + return nand_prog_page_end_op(chip);
1622 1618 }
1623 1619
1624 1620 /**
···
1639 1635 static int omap_read_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
1640 1636 uint8_t *buf, int oob_required, int page)
1641 1637 {
1642 - uint8_t *ecc_calc = chip->buffers->ecccalc;
1643 - uint8_t *ecc_code = chip->buffers->ecccode;
1638 + uint8_t *ecc_calc = chip->ecc.calc_buf;
1639 + uint8_t *ecc_code = chip->ecc.code_buf;
1644 1640 int stat, ret;
1645 1641 unsigned int max_bitflips = 0;
1642 +
1643 + nand_read_page_op(chip, page, 0, NULL, 0);
1646 1644
1647 1645 /* Enable GPMC ecc engine */
1648 1646 chip->ecc.hwctl(mtd, NAND_ECC_READ);
···
1653 1647 chip->read_buf(mtd, buf, mtd->writesize);
1654 1648
1655 1649 /* Read oob bytes */
1656 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT,
1657 - mtd->writesize + BADBLOCK_MARKER_LENGTH, -1);
1658 - chip->read_buf(mtd, chip->oob_poi + BADBLOCK_MARKER_LENGTH,
1659 - chip->ecc.total);
1650 + nand_change_read_column_op(chip,
1651 + mtd->writesize + BADBLOCK_MARKER_LENGTH,
1652 + chip->oob_poi + BADBLOCK_MARKER_LENGTH,
1653 + chip->ecc.total, false);
1660 1654
1661 1655 /* Calculate ecc bytes */
1662 1656 omap_calculate_ecc_bch_multi(mtd, buf, ecc_calc);
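Note: a minimal sketch (not the kernel code) of the column arithmetic behind the nand_change_read_column_op() call in the omap2 read path above. The driver fetches the per-page ECC bytes from the OOB area, starting just past the bad-block marker; the marker length value and the function name below are illustrative assumptions.

```c
#define BADBLOCK_MARKER_LENGTH 2 /* assumed value, for illustration only */

/* Flash column of the first ECC byte, or -1 if they cannot fit in OOB. */
int ecc_oob_column(int writesize, int oobsize, int ecc_total)
{
	if (BADBLOCK_MARKER_LENGTH + ecc_total > oobsize)
		return -1;
	return writesize + BADBLOCK_MARKER_LENGTH;
}
```

For a 2048+64 page layout with 52 ECC bytes this yields column 2050, i.e. the read resumes two bytes into the OOB area.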
+6 -8
drivers/mtd/nand/pxa3xx_nand.c
···
520 520 struct nand_chip *chip = &host->chip;
521 521 struct pxa3xx_nand_info *info = host->info_data;
522 522 const struct pxa3xx_nand_flash *f = NULL;
523 - struct mtd_info *mtd = nand_to_mtd(&host->chip);
524 523 int i, id, ntypes;
524 + u8 idbuf[2];
525 525
526 526 ntypes = ARRAY_SIZE(builtin_flash_types);
527 527
528 - chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1);
529 -
530 - id = chip->read_byte(mtd);
531 - id |= chip->read_byte(mtd) << 0x8;
528 + nand_readid_op(chip, 0, idbuf, sizeof(idbuf));
529 + id = idbuf[0] | (idbuf[1] << 8);
532 530
533 531 for (i = 0; i < ntypes; i++) {
534 532 f = &builtin_flash_types[i];
···
1348 1350 struct nand_chip *chip, const uint8_t *buf, int oob_required,
1349 1351 int page)
1350 1352 {
1351 - chip->write_buf(mtd, buf, mtd->writesize);
1353 + nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
1352 1354 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
1353 1355
1354 - return 0;
1356 + return nand_prog_page_end_op(chip);
1355 1357 }
1356 1358
1357 1359 static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd,
···
1361 1363 struct pxa3xx_nand_host *host = nand_get_controller_data(chip);
1362 1364 struct pxa3xx_nand_info *info = host->info_data;
1363 1365
1364 - chip->read_buf(mtd, buf, mtd->writesize);
1366 + nand_read_page_op(chip, page, 0, buf, mtd->writesize);
1365 1367 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
1366 1368
1367 1369 if (info->retcode == ERR_CORERR && info->use_ecc) {
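Note: the pxa3xx hunk above replaces two read_byte() calls with nand_readid_op() filling a small buffer, from which the 16-bit flash id is assembled low byte first. A standalone sketch of that assembly:

```c
#include <stdint.h>

/* Build the 16-bit flash id from a 2-byte READID response,
 * first byte in the low bits, exactly as in the hunk above. */
int make_flash_id(const uint8_t idbuf[2])
{
	return idbuf[0] | (idbuf[1] << 8);
}
```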
+18 -12
drivers/mtd/nand/qcom_nandc.c
···
1725 1725 u8 *data_buf, *oob_buf = NULL;
1726 1726 int ret;
1727 1727
1728 + nand_read_page_op(chip, page, 0, NULL, 0);
1728 1729 data_buf = buf;
1729 1730 oob_buf = oob_required ? chip->oob_poi : NULL;
1730 1731
···
1751 1750 int i, ret;
1752 1751 int read_loc;
1753 1752
1753 + nand_read_page_op(chip, page, 0, NULL, 0);
1754 1754 data_buf = buf;
1755 1755 oob_buf = chip->oob_poi;
1756 1756
···
1852 1850 u8 *data_buf, *oob_buf;
1853 1851 int i, ret;
1854 1852
1853 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
1854 +
1855 1855 clear_read_regs(nandc);
1856 1856 clear_bam_transaction(nandc);
1857 1857
···
1906 1902
1907 1903 free_descs(nandc);
1908 1904
1905 + if (!ret)
1906 + ret = nand_prog_page_end_op(chip);
1907 +
1909 1908 return ret;
1910 1909 }
1911 1910
···
1923 1916 u8 *data_buf, *oob_buf;
1924 1917 int i, ret;
1925 1918
1919 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
1926 1920 clear_read_regs(nandc);
1927 1921 clear_bam_transaction(nandc);
1928 1922
···
1978 1970
1979 1971 free_descs(nandc);
1980 1972
1973 + if (!ret)
1974 + ret = nand_prog_page_end_op(chip);
1975 +
1981 1976 return ret;
1982 1977 }
1983 1978
···
2001 1990 struct nand_ecc_ctrl *ecc = &chip->ecc;
2002 1991 u8 *oob = chip->oob_poi;
2003 1992 int data_size, oob_size;
2004 - int ret, status = 0;
1993 + int ret;
2005 1994
2006 1995 host->use_ecc = true;
2007 1996
···
2038 2027 return -EIO;
2039 2028 }
2040 2029
2041 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
2042 -
2043 - status = chip->waitfunc(mtd, chip);
2044 -
2045 - return status & NAND_STATUS_FAIL ? -EIO : 0;
2030 + return nand_prog_page_end_op(chip);
2046 2031 }
2047 2032
2048 2033 static int qcom_nandc_block_bad(struct mtd_info *mtd, loff_t ofs)
···
2088 2081 struct qcom_nand_host *host = to_qcom_nand_host(chip);
2089 2082 struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
2090 2083 struct nand_ecc_ctrl *ecc = &chip->ecc;
2091 - int page, ret, status = 0;
2084 + int page, ret;
2092 2085
2093 2086 clear_read_regs(nandc);
2094 2087 clear_bam_transaction(nandc);
···
2121 2114 return -EIO;
2122 2115 }
2123 2116
2124 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
2125 -
2126 - status = chip->waitfunc(mtd, chip);
2127 -
2128 - return status & NAND_STATUS_FAIL ? -EIO : 0;
2117 + return nand_prog_page_end_op(chip);
2129 2118 }
2130 2119
2131 2120 /*
···
2639 2636
2640 2637 nand_set_flash_node(chip, dn);
2641 2638 mtd->name = devm_kasprintf(dev, GFP_KERNEL, "qcom_nand.%d", host->cs);
2639 + if (!mtd->name)
2640 + return -ENOMEM;
2641 +
2642 2642 mtd->owner = THIS_MODULE;
2643 2643 mtd->dev.parent = dev;
2644 2644
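Note: every open-coded PAGEPROG + waitfunc() sequence deleted in the qcom hunks reduced to the same status check, which nand_prog_page_end_op() now performs for the driver. A sketch of that check, assuming the ONFI convention that NAND_STATUS_FAIL is bit 0 of the status byte:

```c
#include <errno.h>

#define NAND_STATUS_FAIL 0x01 /* ONFI status register FAIL bit */

/* Map a NAND status byte to 0 on success or -EIO on program failure. */
int status_to_errno(unsigned char status)
{
	return (status & NAND_STATUS_FAIL) ? -EIO : 0;
}
```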
+4 -7
drivers/mtd/nand/r852.c
···
364 364 struct r852_device *dev = nand_get_controller_data(chip);
365 365
366 366 unsigned long timeout;
367 - int status;
367 + u8 status;
368 368
369 369 timeout = jiffies + (chip->state == FL_ERASING ?
370 370 msecs_to_jiffies(400) : msecs_to_jiffies(20));
···
373 373 if (chip->dev_ready(mtd))
374 374 break;
375 375
376 - chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
377 - status = (int)chip->read_byte(mtd);
376 + nand_status_op(chip, &status);
378 377
379 378 /* Unfortunelly, no way to send detailed error status... */
380 379 if (dev->dma_error) {
···
521 522 static int r852_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
522 523 int page)
523 524 {
524 - chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page);
525 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
526 - return 0;
525 + return nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize);
527 526 }
528 527
529 528 /*
···
1043 1046 if (dev->card_registred) {
1044 1047 r852_engine_enable(dev);
1045 1048 dev->chip->select_chip(mtd, 0);
1046 - dev->chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
1049 + nand_reset_op(dev->chip);
1047 1050 dev->chip->select_chip(mtd, -1);
1048 1051 }
1049 1052
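Note: an illustrative model (not kernel code) of the wait loop in the r852 hunk above: poll a ready predicate until it reports true or the budget runs out, standing in for the jiffies-based deadline around dev_ready(). The names here are assumptions for the sketch.

```c
#include <stdbool.h>

/* Poll `ready` up to `max_polls` times; 0 once ready, -1 on timeout. */
int wait_device_ready(bool (*ready)(void *), void *ctx, int max_polls)
{
	while (max_polls-- > 0)
		if (ready(ctx))
			return 0; /* device became ready */
	return -1;                /* timed out */
}
```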
+3 -3
drivers/mtd/nand/sh_flctl.c
···
614 614 static int flctl_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
615 615 uint8_t *buf, int oob_required, int page)
616 616 {
617 - chip->read_buf(mtd, buf, mtd->writesize);
617 + nand_read_page_op(chip, page, 0, buf, mtd->writesize);
618 618 if (oob_required)
619 619 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
620 620 return 0;
···
624 624 const uint8_t *buf, int oob_required,
625 625 int page)
626 626 {
627 - chip->write_buf(mtd, buf, mtd->writesize);
627 + nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
628 628 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
629 - return 0;
629 + return nand_prog_page_end_op(chip);
630 630 }
631 631
632 632 static void execmd_read_page_sector(struct mtd_info *mtd, int page_addr)
+1 -1
drivers/mtd/nand/sm_common.h
···
36 36 #define SM_SMALL_OOB_SIZE 8
37 37
38 38
39 - extern int sm_register_device(struct mtd_info *mtd, int smartmedia);
39 + int sm_register_device(struct mtd_info *mtd, int smartmedia);
40 40
41 41
42 42 static inline int sm_sector_valid(struct sm_oob *oob)
+61 -50
drivers/mtd/nand/sunxi_nand.c
···
958 958 int ret;
959 959
960 960 if (*cur_off != data_off)
961 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
961 + nand_change_read_column_op(nand, data_off, NULL, 0, false);
962 962
963 963 sunxi_nfc_randomizer_read_buf(mtd, NULL, ecc->size, false, page);
964 964
965 965 if (data_off + ecc->size != oob_off)
966 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
966 + nand_change_read_column_op(nand, oob_off, NULL, 0, false);
967 967
968 968 ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
969 969 if (ret)
···
991 991 * Re-read the data with the randomizer disabled to identify
992 992 * bitflips in erased pages.
993 993 */
994 - if (nand->options & NAND_NEED_SCRAMBLING) {
995 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
996 - nand->read_buf(mtd, data, ecc->size);
997 - } else {
994 + if (nand->options & NAND_NEED_SCRAMBLING)
995 + nand_change_read_column_op(nand, data_off, data,
996 + ecc->size, false);
997 + else
998 998 memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE,
999 999 ecc->size);
1000 - }
1001 1000
1002 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
1003 - nand->read_buf(mtd, oob, ecc->bytes + 4);
1001 + nand_change_read_column_op(nand, oob_off, oob, ecc->bytes + 4,
1002 + false);
1004 1003
1005 1004 ret = nand_check_erased_ecc_chunk(data, ecc->size,
1006 1005 oob, ecc->bytes + 4,
···
1010 1011 memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
1011 1012
1012 1013 if (oob_required) {
1013 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
1014 + nand_change_read_column_op(nand, oob_off, NULL, 0,
1015 + false);
1014 1016 sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
1015 1017 true, page);
1016 1018
···
1038 1038 return;
1039 1039
1040 1040 if (!cur_off || *cur_off != offset)
1041 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
1042 - offset + mtd->writesize, -1);
1041 + nand_change_read_column_op(nand, mtd->writesize, NULL, 0,
1042 + false);
1043 1043
1044 1044 if (!randomize)
1045 1045 sunxi_nfc_read_buf(mtd, oob + offset, len);
···
1116 1116
1117 1117 if (oob_required && !erased) {
1118 1118 /* TODO: use DMA to retrieve OOB */
1119 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
1120 - mtd->writesize + oob_off, -1);
1121 - nand->read_buf(mtd, oob, ecc->bytes + 4);
1119 + nand_change_read_column_op(nand,
1120 + mtd->writesize + oob_off,
1121 + oob, ecc->bytes + 4, false);
1122 1122
1123 1123 sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, i,
1124 1124 !i, page);
···
1143 1143 /*
1144 1144 * Re-read the data with the randomizer disabled to
1145 1145 * identify bitflips in erased pages.
1146 + * TODO: use DMA to read page in raw mode
1146 1147 */
1147 - if (randomized) {
1148 - /* TODO: use DMA to read page in raw mode */
1149 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
1150 - data_off, -1);
1151 - nand->read_buf(mtd, data, ecc->size);
1152 - }
1148 + if (randomized)
1149 + nand_change_read_column_op(nand, data_off,
1150 + data, ecc->size,
1151 + false);
1153 1152
1154 1153 /* TODO: use DMA to retrieve OOB */
1155 - nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
1156 - mtd->writesize + oob_off, -1);
1157 - nand->read_buf(mtd, oob, ecc->bytes + 4);
1154 + nand_change_read_column_op(nand,
1155 + mtd->writesize + oob_off,
1156 + oob, ecc->bytes + 4, false);
1158 1157
1159 1158 ret = nand_check_erased_ecc_chunk(data, ecc->size,
1160 1159 oob, ecc->bytes + 4,
···
1186 1187 int ret;
1187 1188
1188 1189 if (data_off != *cur_off)
1189 - nand->cmdfunc(mtd, NAND_CMD_RNDIN, data_off, -1);
1190 + nand_change_write_column_op(nand, data_off, NULL, 0, false);
1190 1191
1191 1192 sunxi_nfc_randomizer_write_buf(mtd, data, ecc->size, false, page);
1192 1193
1193 1194 if (data_off + ecc->size != oob_off)
1194 - nand->cmdfunc(mtd, NAND_CMD_RNDIN, oob_off, -1);
1195 + nand_change_write_column_op(nand, oob_off, NULL, 0, false);
1195 1196
1196 1197 ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
1197 1198 if (ret)
···
1227 1228 return;
1228 1229
1229 1230 if (!cur_off || *cur_off != offset)
1230 - nand->cmdfunc(mtd, NAND_CMD_RNDIN,
1231 - offset + mtd->writesize, -1);
1231 + nand_change_write_column_op(nand, offset + mtd->writesize,
1232 + NULL, 0, false);
1232 1233
1233 1234 sunxi_nfc_randomizer_write_buf(mtd, oob + offset, len, false, page);
1234 1235
···
1244 1245 unsigned int max_bitflips = 0;
1245 1246 int ret, i, cur_off = 0;
1246 1247 bool raw_mode = false;
1248 +
1249 + nand_read_page_op(chip, page, 0, NULL, 0);
1247 1250
1248 1251 sunxi_nfc_hw_ecc_enable(mtd);
1249 1252
···
1280 1279 {
1281 1280 int ret;
1282 1281
1282 + nand_read_page_op(chip, page, 0, NULL, 0);
1283 +
1283 1284 ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, oob_required, page,
1284 1285 chip->ecc.steps);
1285 1286 if (ret >= 0)
1286 1287 return ret;
1287 1288
1288 1289 /* Fallback to PIO mode */
1289 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
1290 -
1291 1290 return sunxi_nfc_hw_ecc_read_page(mtd, chip, buf, oob_required, page);
1292 1291 }
1293 1292
···
1299 1298 struct nand_ecc_ctrl *ecc = &chip->ecc;
1300 1299 int ret, i, cur_off = 0;
1301 1300 unsigned int max_bitflips = 0;
1302 +
1303 + nand_read_page_op(chip, page, 0, NULL, 0);
1302 1303
1303 1304 sunxi_nfc_hw_ecc_enable(mtd);
1304 1305
···
1333 1330 int nchunks = DIV_ROUND_UP(data_offs + readlen, chip->ecc.size);
1334 1331 int ret;
1335 1332
1333 + nand_read_page_op(chip, page, 0, NULL, 0);
1334 +
1336 1335 ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, false, page, nchunks);
1337 1336 if (ret >= 0)
1338 1337 return ret;
1339 1338
1340 1339 /* Fallback to PIO mode */
1341 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
1342 -
1343 1340 return sunxi_nfc_hw_ecc_read_subpage(mtd, chip, data_offs, readlen,
1344 1341 buf, page);
1345 1342 }
···
1351 1348 {
1352 1349 struct nand_ecc_ctrl *ecc = &chip->ecc;
1353 1350 int ret, i, cur_off = 0;
1351 +
1352 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
1354 1353
1355 1354 sunxi_nfc_hw_ecc_enable(mtd);
1356 1355
···
1375 1370
1376 1371 sunxi_nfc_hw_ecc_disable(mtd);
1377 1372
1378 - return 0;
1373 + return nand_prog_page_end_op(chip);
1379 1374 }
1380 1375
1381 1376 static int sunxi_nfc_hw_ecc_write_subpage(struct mtd_info *mtd,
···
1386 1381 {
1387 1382 struct nand_ecc_ctrl *ecc = &chip->ecc;
1388 1383 int ret, i, cur_off = 0;
1384 +
1385 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
1389 1386
1390 1387 sunxi_nfc_hw_ecc_enable(mtd);
1391 1388
···
1407 1400
1408 1401 sunxi_nfc_hw_ecc_disable(mtd);
1409 1402
1410 - return 0;
1403 + return nand_prog_page_end_op(chip);
1411 1404 }
1412 1405
1413 1406 static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
···
1436 1429
1437 1430 sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
1438 1431 }
1432 +
1433 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
1439 1434
1440 1435 sunxi_nfc_hw_ecc_enable(mtd);
1441 1436 sunxi_nfc_randomizer_config(mtd, page, false);
···
1469 1460 sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
1470 1461 NULL, page);
1471 1462
1472 - return 0;
1463 + return nand_prog_page_end_op(chip);
1473 1464
1474 1465 pio_fallback:
1475 1466 return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
···
1484 1475 unsigned int max_bitflips = 0;
1485 1476 int ret, i, cur_off = 0;
1486 1477 bool raw_mode = false;
1478 +
1479 + nand_read_page_op(chip, page, 0, NULL, 0);
1487 1480
1488 1481 sunxi_nfc_hw_ecc_enable(mtd);
1489 1482
···
1523 1512 struct nand_ecc_ctrl *ecc = &chip->ecc;
1524 1513 int ret, i, cur_off = 0;
1525 1514
1515 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
1516 +
1526 1517 sunxi_nfc_hw_ecc_enable(mtd);
1527 1518
1528 1519 for (i = 0; i < ecc->steps; i++) {
···
1546 1533
1547 1534 sunxi_nfc_hw_ecc_disable(mtd);
1548 1535
1549 - return 0;
1536 + return nand_prog_page_end_op(chip);
1550 1537 }
1551 1538
1552 1539 static int sunxi_nfc_hw_common_ecc_read_oob(struct mtd_info *mtd,
1553 1540 struct nand_chip *chip,
1554 1541 int page)
1555 1542 {
1556 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
1557 -
1558 1543 chip->pagebuf = -1;
1559 1544
1560 - return chip->ecc.read_page(mtd, chip, chip->buffers->databuf, 1, page);
1545 + return chip->ecc.read_page(mtd, chip, chip->data_buf, 1, page);
1561 1546 }
1562 1547
1563 1548 static int sunxi_nfc_hw_common_ecc_write_oob(struct mtd_info *mtd,
1564 1549 struct nand_chip *chip,
1565 1550 int page)
1566 1551 {
1567 - int ret, status;
1568 -
1569 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
1552 + int ret;
1570 1553
1571 1554 chip->pagebuf = -1;
1572 1555
1573 - memset(chip->buffers->databuf, 0xff, mtd->writesize);
1574 - ret = chip->ecc.write_page(mtd, chip, chip->buffers->databuf, 1, page);
1556 + memset(chip->data_buf, 0xff, mtd->writesize);
1557 + ret = chip->ecc.write_page(mtd, chip, chip->data_buf, 1, page);
1575 1558 if (ret)
1576 1559 return ret;
1577 1560
1578 1561 /* Send command to program the OOB data */
1579 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
1580 -
1581 - status = chip->waitfunc(mtd, chip);
1582 -
1583 - return status & NAND_STATUS_FAIL ? -EIO : 0;
1562 + return nand_prog_page_end_op(chip);
1584 1563 }
1585 1564
1586 1565 static const s32 tWB_lut[] = {6, 12, 16, 20};
···
1858 1853
1859 1854 /* Add ECC info retrieval from DT */
1860 1855 for (i = 0; i < ARRAY_SIZE(strengths); i++) {
1861 - if (ecc->strength <= strengths[i])
1856 + if (ecc->strength <= strengths[i]) {
1857 + /*
1858 + * Update ecc->strength value with the actual strength
1859 + * that will be used by the ECC engine.
1860 + */
1861 + ecc->strength = strengths[i];
1862 1862 break;
1863 + }
1863 1864 }
1864 1865
1865 1866 if (i >= ARRAY_SIZE(strengths)) {
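Note: the strength-selection fix at the end of the sunxi hunk now records the strength actually programmed, not just the requested one. A standalone sketch of that selection, scanning a sorted table for the smallest supported strength that satisfies the request; the table values below are illustrative, not the controller's real set.

```c
#include <stddef.h>

/* Smallest entry of `strengths` >= `wanted`, or -1 if none suffices. */
int pick_ecc_strength(const int *strengths, size_t n, int wanted)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (wanted <= strengths[i])
			return strengths[i]; /* value to store back */
	return -1; /* no supported strength is strong enough */
}
```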
+8 -19
drivers/mtd/nand/tango_nand.c
···
329 329
330 330 if (!*buf) {
331 331 /* skip over "len" bytes */
332 - chip->cmdfunc(mtd, NAND_CMD_RNDOUT, *pos, -1);
332 + nand_change_read_column_op(chip, *pos, NULL, 0, false);
333 333 } else {
334 334 tango_read_buf(mtd, *buf, len);
335 335 *buf += len;
···
344 344
345 345 if (!*buf) {
346 346 /* skip over "len" bytes */
347 - chip->cmdfunc(mtd, NAND_CMD_RNDIN, *pos, -1);
347 + nand_change_write_column_op(chip, *pos, NULL, 0, false);
348 348 } else {
349 349 tango_write_buf(mtd, *buf, len);
350 350 *buf += len;
···
427 427 static int tango_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
428 428 u8 *buf, int oob_required, int page)
429 429 {
430 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
430 + nand_read_page_op(chip, page, 0, NULL, 0);
431 431 raw_read(chip, buf, chip->oob_poi);
432 432 return 0;
433 433 }
···
435 435 static int tango_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
436 436 const u8 *buf, int oob_required, int page)
437 437 {
438 - int status;
439 -
440 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
438 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
441 439 raw_write(chip, buf, chip->oob_poi);
442 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
443 -
444 - status = chip->waitfunc(mtd, chip);
445 - if (status & NAND_STATUS_FAIL)
446 - return -EIO;
447 -
448 - return 0;
440 + return nand_prog_page_end_op(chip);
449 441 }
450 442
451 443 static int tango_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
452 444 int page)
453 445 {
454 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
446 + nand_read_page_op(chip, page, 0, NULL, 0);
455 447 raw_read(chip, NULL, chip->oob_poi);
456 448 return 0;
457 449 }
···
451 459 static int tango_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
452 460 int page)
453 461 {
454 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
462 + nand_prog_page_begin_op(chip, page, 0, NULL, 0);
455 463 raw_write(chip, NULL, chip->oob_poi);
456 - chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
457 - chip->waitfunc(mtd, chip);
458 - return 0;
464 + return nand_prog_page_end_op(chip);
459 465 }
460 466
461 467 static int oob_ecc(struct mtd_info *mtd, int idx, struct mtd_oob_region *res)
···
580 590 ecc->write_page = tango_write_page;
581 591 ecc->read_oob = tango_read_oob;
582 592 ecc->write_oob = tango_write_oob;
583 - ecc->options = NAND_ECC_CUSTOM_PAGE_ACCESS;
584 593
585 594 err = nand_scan_tail(mtd);
586 595 if (err)
+3 -2
drivers/mtd/nand/tmio_nand.c
···
192 192 {
193 193 struct tmio_nand *tmio = mtd_to_tmio(mtd);
194 194 long timeout;
195 + u8 status;
195 196
196 197 /* enable RDYREQ interrupt */
197 198 tmio_iowrite8(0x0f, tmio->fcr + FCR_ISR);
···
213 212 dev_warn(&tmio->dev->dev, "timeout waiting for interrupt\n");
214 213 }
215 214
216 - nand_chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
217 - return nand_chip->read_byte(mtd);
215 + nand_status_op(nand_chip, &status);
216 + return status;
218 217 }
219 218
220 219 /*
+3 -3
drivers/mtd/nand/vf610_nfc.c
···
560 560 int eccsize = chip->ecc.size;
561 561 int stat;
562 562
563 - vf610_nfc_read_buf(mtd, buf, eccsize);
563 + nand_read_page_op(chip, page, 0, buf, eccsize);
564 564 if (oob_required)
565 565 vf610_nfc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
566 566
···
580 580 {
581 581 struct vf610_nfc *nfc = mtd_to_nfc(mtd);
582 582
583 - vf610_nfc_write_buf(mtd, buf, mtd->writesize);
583 + nand_prog_page_begin_op(chip, page, 0, buf, mtd->writesize);
584 584 if (oob_required)
585 585 vf610_nfc_write_buf(mtd, chip->oob_poi, mtd->oobsize);
586 586
···
588 588 nfc->use_hw_ecc = true;
589 589 nfc->write_sz = mtd->writesize + mtd->oobsize;
590 590
591 - return 0;
591 + return nand_prog_page_end_op(chip);
592 592 }
593 593
594 594 static const struct of_device_id vf610_nfc_dt_ids[] = {
+4 -3
drivers/mtd/onenand/Kconfig
···
4 4 depends on HAS_IOMEM
5 5 help
6 6 This enables support for accessing all type of OneNAND flash
7 - devices. For further information see
8 - <http://www.samsung.com/Products/Semiconductor/OneNAND/index.htm>
7 + devices.
9 8
10 9 if MTD_ONENAND
11 10
···
25 26 config MTD_ONENAND_OMAP2
26 27 tristate "OneNAND on OMAP2/OMAP3 support"
27 28 depends on ARCH_OMAP2 || ARCH_OMAP3
29 + depends on OF || COMPILE_TEST
28 30 help
29 - Support for a OneNAND flash device connected to an OMAP2/OMAP3 CPU
31 + Support for a OneNAND flash device connected to an OMAP2/OMAP3 SoC
30 32 via the GPMC memory controller.
33 + Enable dmaengine and gpiolib for better performance.
31 34
32 35 config MTD_ONENAND_SAMSUNG
33 36 tristate "OneNAND on Samsung SOC controller support"
+215 -364
drivers/mtd/onenand/omap2.c
···
28 28 #include <linux/mtd/mtd.h>
29 29 #include <linux/mtd/onenand.h>
30 30 #include <linux/mtd/partitions.h>
31 + #include <linux/of_device.h>
32 + #include <linux/omap-gpmc.h>
31 33 #include <linux/platform_device.h>
32 34 #include <linux/interrupt.h>
33 35 #include <linux/delay.h>
34 36 #include <linux/dma-mapping.h>
37 + #include <linux/dmaengine.h>
35 38 #include <linux/io.h>
36 39 #include <linux/slab.h>
37 - #include <linux/regulator/consumer.h>
38 - #include <linux/gpio.h>
40 + #include <linux/gpio/consumer.h>
39 41
40 42 #include <asm/mach/flash.h>
41 - #include <linux/platform_data/mtd-onenand-omap2.h>
42 -
43 - #include <linux/omap-dma.h>
44 43
45 44 #define DRIVER_NAME "omap2-onenand"
46 45
···
49 50 struct platform_device *pdev;
50 51 int gpmc_cs;
51 52 unsigned long phys_base;
52 - unsigned int mem_size;
53 - int gpio_irq;
53 + struct gpio_desc *int_gpiod;
54 54 struct mtd_info mtd;
55 55 struct onenand_chip onenand;
56 56 struct completion irq_done;
57 57 struct completion dma_done;
58 - int dma_channel;
59 - int freq;
60 - int (*setup)(void __iomem *base, int *freq_ptr);
61 - struct regulator *regulator;
62 - u8 flags;
58 + struct dma_chan *dma_chan;
63 59 };
64 60
65 - static void omap2_onenand_dma_cb(int lch, u16 ch_status, void *data)
61 + static void omap2_onenand_dma_complete_func(void *completion)
66 62 {
67 - struct omap2_onenand *c = data;
68 -
69 - complete(&c->dma_done);
63 + complete(completion);
70 64 }
71 65
72 66 static irqreturn_t omap2_onenand_interrupt(int irq, void *dev_id)
···
80 88 int reg)
81 89 {
82 90 writew(value, c->onenand.base + reg);
91 + }
92 +
93 + static int omap2_onenand_set_cfg(struct omap2_onenand *c,
94 + bool sr, bool sw,
95 + int latency, int burst_len)
96 + {
97 + unsigned short reg = ONENAND_SYS_CFG1_RDY | ONENAND_SYS_CFG1_INT;
98 +
99 + reg |= latency << ONENAND_SYS_CFG1_BRL_SHIFT;
100 +
101 + switch (burst_len) {
102 + case 0: /* continuous */
103 + break;
104 + case 4:
105 + reg |= ONENAND_SYS_CFG1_BL_4;
106 + break;
107 + case 8:
108 + reg |= ONENAND_SYS_CFG1_BL_8;
109 + break;
110 + case 16:
111 + reg |= ONENAND_SYS_CFG1_BL_16;
112 + break;
113 + case 32:
114 + reg |= ONENAND_SYS_CFG1_BL_32;
115 + break;
116 + default:
117 + return -EINVAL;
118 + }
119 +
120 + if (latency > 5)
121 + reg |= ONENAND_SYS_CFG1_HF;
122 + if (latency > 7)
123 + reg |= ONENAND_SYS_CFG1_VHF;
124 + if (sr)
125 + reg |= ONENAND_SYS_CFG1_SYNC_READ;
126 + if (sw)
127 + reg |= ONENAND_SYS_CFG1_SYNC_WRITE;
128 +
129 + write_reg(c, reg, ONENAND_REG_SYS_CFG1);
130 +
131 + return 0;
132 + }
133 +
134 + static int omap2_onenand_get_freq(int ver)
135 + {
136 + switch ((ver >> 4) & 0xf) {
137 + case 0:
138 + return 40;
139 + case 1:
140 + return 54;
141 + case 2:
142 + return 66;
143 + case 3:
144 + return 83;
145 + case 4:
146 + return 104;
147 + }
148 +
149 + return -EINVAL;
83 150 }
84 151
85 152 static void wait_err(char *msg, int state, unsigned int ctrl, unsigned int intr)
···
204 153 if (!(syscfg & ONENAND_SYS_CFG1_IOBE)) {
205 154 syscfg |= ONENAND_SYS_CFG1_IOBE;
206 155 write_reg(c, syscfg, ONENAND_REG_SYS_CFG1);
207 - if (c->flags & ONENAND_IN_OMAP34XX)
208 - /* Add a delay to let GPIO settle */
209 - syscfg = read_reg(c, ONENAND_REG_SYS_CFG1);
156 + /* Add a delay to let GPIO settle */
157 + syscfg = read_reg(c, ONENAND_REG_SYS_CFG1);
210 158 }
211 159
212 160 reinit_completion(&c->irq_done);
213 - if (c->gpio_irq) {
214 - result = gpio_get_value(c->gpio_irq);
215 - if (result == -1) {
216 - ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
217 - intr = read_reg(c, ONENAND_REG_INTERRUPT);
218 - wait_err("gpio error", state, ctrl, intr);
219 - return -EIO;
220 - }
221 - } else
222 - result = 0;
223 - if (result == 0) {
161 + result = gpiod_get_value(c->int_gpiod);
162 + if (result < 0) {
163 + ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
164 + intr = read_reg(c, ONENAND_REG_INTERRUPT);
165 + wait_err("gpio error", state, ctrl, intr);
166 + return result;
167 + } else if (result == 0) {
224 168 int retry_cnt = 0;
225 169 retry:
226 - result = wait_for_completion_timeout(&c->irq_done,
227 - msecs_to_jiffies(20));
228 - if (result == 0) {
170 + if (!wait_for_completion_io_timeout(&c->irq_done,
171 + msecs_to_jiffies(20))) {
229 172 /* Timeout after 20ms */
230 173 ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
231 174 if (ctrl & ONENAND_CTRL_ONGO &&
···
336 291 return 0;
337 292 }
338 293
339 - #if defined(CONFIG_ARCH_OMAP3) || defined(MULTI_OMAP2)
294 + static inline int omap2_onenand_dma_transfer(struct omap2_onenand *c,
295 + dma_addr_t src, dma_addr_t dst,
296 + size_t count)
297 + {
298 + struct dma_async_tx_descriptor *tx;
299 + dma_cookie_t cookie;
340 300
341 - static int omap3_onenand_read_bufferram(struct mtd_info *mtd, int area,
301 + tx = dmaengine_prep_dma_memcpy(c->dma_chan, dst, src, count, 0);
302 + if (!tx) {
303 + dev_err(&c->pdev->dev, "Failed to prepare DMA memcpy\n");
304 + return -EIO;
305 + }
306 +
307 + reinit_completion(&c->dma_done);
308 +
309 + tx->callback = omap2_onenand_dma_complete_func;
310 + tx->callback_param = &c->dma_done;
311 +
312 + cookie = tx->tx_submit(tx);
313 + if (dma_submit_error(cookie)) {
314 + dev_err(&c->pdev->dev, "Failed to do DMA tx_submit\n");
315 + return -EIO;
316 + }
317 +
318 + dma_async_issue_pending(c->dma_chan);
319 +
320 + if (!wait_for_completion_io_timeout(&c->dma_done,
321 + msecs_to_jiffies(20))) {
322 + dmaengine_terminate_sync(c->dma_chan);
323 + return -ETIMEDOUT;
324 + }
325 +
326 + return 0;
327 + }
328 +
329 + static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
342 330 unsigned char *buffer, int offset,
343 331 size_t count)
344 332 {
···
379 301 struct onenand_chip *this = mtd->priv;
380 302 dma_addr_t dma_src, dma_dst;
381 303 int bram_offset;
382 - unsigned long timeout;
383 304 void *buf = (void *)buffer;
384 305 size_t xtra;
385 - volatile unsigned *done;
306 + int ret;
386 307
387 308 bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
388 309 if (bram_offset & 3 || (size_t)buf & 3 || count < 384)
···
418 341 goto out_copy;
419 342 }
420 343
421 - omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
422 - count >> 2, 1, 0, 0, 0);
423 - omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
424 - dma_src, 0, 0);
425 - omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
426 - dma_dst, 0, 0);
427 -
428 - reinit_completion(&c->dma_done);
429 - omap_start_dma(c->dma_channel);
430 -
431 - timeout = jiffies + msecs_to_jiffies(20);
432 - done = &c->dma_done.done;
433 - while (time_before(jiffies, timeout))
434 - if (*done)
435 - break;
436 -
344 + ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);
437 345 dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE);
438 346
439 - if (!*done) {
347 + if (ret) {
440 348 dev_err(&c->pdev->dev, "timeout waiting for DMA\n");
441 349 goto out_copy;
442 350 }
···
433 371 return 0;
434 372 }
435 373
436 - static int omap3_onenand_write_bufferram(struct mtd_info *mtd, int area,
374 + static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
437 375 const unsigned char *buffer,
438 376 int offset, size_t count)
439 377 {
···
441 379 struct onenand_chip *this = mtd->priv;
442 380 dma_addr_t dma_src, dma_dst;
443 381 int bram_offset;
444 - unsigned long timeout;
445 382 void *buf = (void *)buffer;
446 - volatile unsigned *done;
383 + int ret;
447 384
448 385 bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
449 386 if (bram_offset & 3 || (size_t)buf & 3 || count < 384)
···
473 412 return -1;
474 413 }
475 414
476 - omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
477 - count >> 2, 1, 0, 0, 0);
478 - omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
479 - dma_src, 0, 0);
480 - omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
481 - dma_dst, 0, 0);
482 -
483 - reinit_completion(&c->dma_done);
484 - omap_start_dma(c->dma_channel);
485 -
486 - timeout = jiffies + msecs_to_jiffies(20);
487 - done = &c->dma_done.done;
488 - while (time_before(jiffies, timeout))
489 - if (*done)
490 - break;
491 -
415 + ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);
492 416 dma_unmap_single(&c->pdev->dev, dma_src, count, DMA_TO_DEVICE);
493 417
494 - if (!*done) {
418 + if (ret) {
495 419 dev_err(&c->pdev->dev, "timeout waiting for DMA\n");
496 420 goto out_copy;
497 421 }
···
487 441 memcpy(this->base + bram_offset, buf, count);
488 442 return 0;
489 443 }
490 -
491 - #else
492 -
493 - static int omap3_onenand_read_bufferram(struct mtd_info *mtd, int area,
494 - unsigned char *buffer, int offset,
495 - size_t count)
496 - {
497 - return -ENOSYS;
498 - }
499 -
500 - static int omap3_onenand_write_bufferram(struct mtd_info *mtd, int area,
501 - const unsigned char *buffer,
502 - int offset, size_t count)
503 - {
504 - return -ENOSYS;
505 - }
506 -
507 - #endif
508 -
509 - #if defined(CONFIG_ARCH_OMAP2) || defined(MULTI_OMAP2)
510 -
511 - static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
512 - unsigned char *buffer, int offset,
513 - size_t count)
514 - {
515 - struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
516 - struct onenand_chip *this = mtd->priv;
517 - dma_addr_t dma_src, dma_dst;
518 - int bram_offset;
519 -
520 - bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
521 - /* DMA is not used. Revisit PM requirements before enabling it. */
522 - if (1 || (c->dma_channel < 0) ||
523 - ((void *) buffer >= (void *) high_memory) || (bram_offset & 3) ||
524 - (((unsigned int) buffer) & 3) || (count < 1024) || (count & 3)) {
525 - memcpy(buffer, (__force void *)(this->base + bram_offset),
526 - count);
527 - return 0;
528 - }
529 -
530 - dma_src = c->phys_base + bram_offset;
531 - dma_dst = dma_map_single(&c->pdev->dev, buffer, count,
532 - DMA_FROM_DEVICE);
533 - if (dma_mapping_error(&c->pdev->dev, dma_dst)) {
534 - dev_err(&c->pdev->dev,
535 - "Couldn't DMA map a %d byte buffer\n",
536 - count);
537 - return -1;
538 - }
539 -
540 - omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
541 - count / 4, 1, 0, 0, 0);
542 - omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
543 - dma_src, 0, 0);
544 - omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
545 - dma_dst, 0, 0);
546 -
547 - reinit_completion(&c->dma_done);
548 - omap_start_dma(c->dma_channel);
549 - wait_for_completion(&c->dma_done);
550 -
551 - dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE);
552 -
553 - return 0;
554 - }
555 -
556 - static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
557 - const unsigned char *buffer,
558 - int offset, size_t count)
559 - {
560 - struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
561 - struct onenand_chip *this = mtd->priv;
562 - dma_addr_t dma_src, dma_dst;
563 - int bram_offset;
564 -
565 - bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
566 - /* DMA is not used. Revisit PM requirements before enabling it. */
567 - if (1 || (c->dma_channel < 0) ||
568 - ((void *) buffer >= (void *) high_memory) || (bram_offset & 3) ||
569 - (((unsigned int) buffer) & 3) || (count < 1024) || (count & 3)) {
570 - memcpy((__force void *)(this->base + bram_offset), buffer,
571 - count);
572 - return 0;
573 - }
574 -
575 - dma_src = dma_map_single(&c->pdev->dev, (void *) buffer, count,
576 - DMA_TO_DEVICE);
577 - dma_dst = c->phys_base + bram_offset;
578 - if (dma_mapping_error(&c->pdev->dev, dma_src)) {
579 - dev_err(&c->pdev->dev,
580 - "Couldn't DMA map a %d byte buffer\n",
581 - count);
582 - return -1;
583 - }
584 -
585 - omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S16,
586 - count / 2, 1, 0, 0, 0);
587 - omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
588 - dma_src, 0, 0);
589 - omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
590 - dma_dst, 0, 0);
591 -
592 - reinit_completion(&c->dma_done);
593 - omap_start_dma(c->dma_channel);
594 - wait_for_completion(&c->dma_done);
595 -
596 - dma_unmap_single(&c->pdev->dev, dma_src, count, DMA_TO_DEVICE);
597 -
598 - return 0;
599 - }
600 -
601 - #else
602 -
603 - static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
604 - unsigned char *buffer, int offset,
605 - size_t count)
606 - {
607 - return -ENOSYS;
608 - }
609 -
610 - static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
611 - const unsigned char *buffer,
612 - int offset, size_t count)
613 - {
614 - return -ENOSYS;
615 - }
616 -
617 - #endif
618 -
619 - static struct platform_driver omap2_onenand_driver;
620 444
621 445 static void omap2_onenand_shutdown(struct platform_device *pdev)
622 446 {
···
499 583 memset((__force void *)c->onenand.base, 0, ONENAND_BUFRAM_SIZE);
500 584 }
501 585
502 - static int omap2_onenand_enable(struct mtd_info *mtd)
503 - {
504 - int ret;
505 - struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
506 -
507 - ret = regulator_enable(c->regulator);
508 - if (ret != 0)
509 - dev_err(&c->pdev->dev, "can't enable regulator\n");
510 -
511 - return ret;
512 - }
513 -
514 - static int omap2_onenand_disable(struct mtd_info *mtd)
515 - {
516 - int ret;
517 - struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
518 -
519 - ret = regulator_disable(c->regulator);
520 - if (ret != 0)
521 - dev_err(&c->pdev->dev, "can't disable regulator\n");
522 -
523 - return ret;
524 - }
525 -
526 586 static int omap2_onenand_probe(struct platform_device *pdev)
527 587 {
528 - struct omap_onenand_platform_data *pdata;
529 - struct omap2_onenand *c;
530 - struct onenand_chip *this;
531 - int r;
588 + u32 val;
589 + dma_cap_mask_t mask;
590 + int freq, latency, r;
532 591 struct resource *res;
592 + struct omap2_onenand *c;
593 + struct gpmc_onenand_info info;
594 + struct device *dev = &pdev->dev;
595 + struct device_node *np = dev->of_node;
533 596
534 - pdata = dev_get_platdata(&pdev->dev);
535 - if (pdata == NULL) {
536 - dev_err(&pdev->dev, "platform data missing\n");
537 - return -ENODEV;
597 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
598 + if (!res) {
599 + dev_err(dev, "error getting memory resource\n");
600 + return -EINVAL;
538 601 }
539 602
540 - c = kzalloc(sizeof(struct omap2_onenand), GFP_KERNEL);
603 + r = of_property_read_u32(np, "reg", &val);
604 + if (r) {
605 + dev_err(dev, "reg not found in DT\n");
606 + return r;
607 + }
608 +
609 + c = devm_kzalloc(dev, sizeof(struct omap2_onenand), GFP_KERNEL);
541 610 if (!c)
542 611 return -ENOMEM;
543 612
544 613 init_completion(&c->irq_done);
545 614 init_completion(&c->dma_done);
546 - c->flags = pdata->flags;
547 - c->gpmc_cs = pdata->cs;
548 - c->gpio_irq = pdata->gpio_irq;
549 - c->dma_channel = pdata->dma_channel;
550 - if (c->dma_channel < 0) {
551 - /* if -1, don't use DMA */
552 - c->gpio_irq = 0;
553 - }
554 -
555 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
556 - if (res == NULL) {
557 - r =
-EINVAL; 558 - dev_err(&pdev->dev, "error getting memory resource\n"); 559 - goto err_kfree; 560 - } 561 - 615 + c->gpmc_cs = val; 562 616 c->phys_base = res->start; 563 - c->mem_size = resource_size(res); 564 617 565 - if (request_mem_region(c->phys_base, c->mem_size, 566 - pdev->dev.driver->name) == NULL) { 567 - dev_err(&pdev->dev, "Cannot reserve memory region at 0x%08lx, size: 0x%x\n", 568 - c->phys_base, c->mem_size); 569 - r = -EBUSY; 570 - goto err_kfree; 571 - } 572 - c->onenand.base = ioremap(c->phys_base, c->mem_size); 573 - if (c->onenand.base == NULL) { 574 - r = -ENOMEM; 575 - goto err_release_mem_region; 576 - } 618 + c->onenand.base = devm_ioremap_resource(dev, res); 619 + if (IS_ERR(c->onenand.base)) 620 + return PTR_ERR(c->onenand.base); 577 621 578 - if (pdata->onenand_setup != NULL) { 579 - r = pdata->onenand_setup(c->onenand.base, &c->freq); 580 - if (r < 0) { 581 - dev_err(&pdev->dev, "Onenand platform setup failed: " 582 - "%d\n", r); 583 - goto err_iounmap; 584 - } 585 - c->setup = pdata->onenand_setup; 622 + c->int_gpiod = devm_gpiod_get_optional(dev, "int", GPIOD_IN); 623 + if (IS_ERR(c->int_gpiod)) { 624 + r = PTR_ERR(c->int_gpiod); 625 + /* Just try again if this happens */ 626 + if (r != -EPROBE_DEFER) 627 + dev_err(dev, "error getting gpio: %d\n", r); 628 + return r; 586 629 } 587 630 588 - if (c->gpio_irq) { 589 - if ((r = gpio_request(c->gpio_irq, "OneNAND irq")) < 0) { 590 - dev_err(&pdev->dev, "Failed to request GPIO%d for " 591 - "OneNAND\n", c->gpio_irq); 592 - goto err_iounmap; 593 - } 594 - gpio_direction_input(c->gpio_irq); 631 + if (c->int_gpiod) { 632 + r = devm_request_irq(dev, gpiod_to_irq(c->int_gpiod), 633 + omap2_onenand_interrupt, 634 + IRQF_TRIGGER_RISING, "onenand", c); 635 + if (r) 636 + return r; 595 637 596 - if ((r = request_irq(gpio_to_irq(c->gpio_irq), 597 - omap2_onenand_interrupt, IRQF_TRIGGER_RISING, 598 - pdev->dev.driver->name, c)) < 0) 599 - goto err_release_gpio; 638 + c->onenand.wait = 
omap2_onenand_wait; 600 639 } 601 640 602 - if (c->dma_channel >= 0) { 603 - r = omap_request_dma(0, pdev->dev.driver->name, 604 - omap2_onenand_dma_cb, (void *) c, 605 - &c->dma_channel); 606 - if (r == 0) { 607 - omap_set_dma_write_mode(c->dma_channel, 608 - OMAP_DMA_WRITE_NON_POSTED); 609 - omap_set_dma_src_data_pack(c->dma_channel, 1); 610 - omap_set_dma_src_burst_mode(c->dma_channel, 611 - OMAP_DMA_DATA_BURST_8); 612 - omap_set_dma_dest_data_pack(c->dma_channel, 1); 613 - omap_set_dma_dest_burst_mode(c->dma_channel, 614 - OMAP_DMA_DATA_BURST_8); 615 - } else { 616 - dev_info(&pdev->dev, 617 - "failed to allocate DMA for OneNAND, " 618 - "using PIO instead\n"); 619 - c->dma_channel = -1; 620 - } 621 - } 641 + dma_cap_zero(mask); 642 + dma_cap_set(DMA_MEMCPY, mask); 622 643 623 - dev_info(&pdev->dev, "initializing on CS%d, phys base 0x%08lx, virtual " 624 - "base %p, freq %d MHz\n", c->gpmc_cs, c->phys_base, 625 - c->onenand.base, c->freq); 644 + c->dma_chan = dma_request_channel(mask, NULL, NULL); 645 + if (c->dma_chan) { 646 + c->onenand.read_bufferram = omap2_onenand_read_bufferram; 647 + c->onenand.write_bufferram = omap2_onenand_write_bufferram; 648 + } 626 649 627 650 c->pdev = pdev; 628 651 c->mtd.priv = &c->onenand; 652 + c->mtd.dev.parent = dev; 653 + mtd_set_of_node(&c->mtd, dev->of_node); 629 654 630 - c->mtd.dev.parent = &pdev->dev; 631 - mtd_set_of_node(&c->mtd, pdata->of_node); 632 - 633 - this = &c->onenand; 634 - if (c->dma_channel >= 0) { 635 - this->wait = omap2_onenand_wait; 636 - if (c->flags & ONENAND_IN_OMAP34XX) { 637 - this->read_bufferram = omap3_onenand_read_bufferram; 638 - this->write_bufferram = omap3_onenand_write_bufferram; 639 - } else { 640 - this->read_bufferram = omap2_onenand_read_bufferram; 641 - this->write_bufferram = omap2_onenand_write_bufferram; 642 - } 643 - } 644 - 645 - if (pdata->regulator_can_sleep) { 646 - c->regulator = regulator_get(&pdev->dev, "vonenand"); 647 - if (IS_ERR(c->regulator)) { 648 - 
dev_err(&pdev->dev, "Failed to get regulator\n"); 649 - r = PTR_ERR(c->regulator); 650 - goto err_release_dma; 651 - } 652 - c->onenand.enable = omap2_onenand_enable; 653 - c->onenand.disable = omap2_onenand_disable; 654 - } 655 - 656 - if (pdata->skip_initial_unlocking) 657 - this->options |= ONENAND_SKIP_INITIAL_UNLOCKING; 655 + dev_info(dev, "initializing on CS%d (0x%08lx), va %p, %s mode\n", 656 + c->gpmc_cs, c->phys_base, c->onenand.base, 657 + c->dma_chan ? "DMA" : "PIO"); 658 658 659 659 if ((r = onenand_scan(&c->mtd, 1)) < 0) 660 - goto err_release_regulator; 660 + goto err_release_dma; 661 661 662 - r = mtd_device_register(&c->mtd, pdata ? pdata->parts : NULL, 663 - pdata ? pdata->nr_parts : 0); 662 + freq = omap2_onenand_get_freq(c->onenand.version_id); 663 + if (freq > 0) { 664 + switch (freq) { 665 + case 104: 666 + latency = 7; 667 + break; 668 + case 83: 669 + latency = 6; 670 + break; 671 + case 66: 672 + latency = 5; 673 + break; 674 + case 56: 675 + latency = 4; 676 + break; 677 + default: /* 40 MHz or lower */ 678 + latency = 3; 679 + break; 680 + } 681 + 682 + r = gpmc_omap_onenand_set_timings(dev, c->gpmc_cs, 683 + freq, latency, &info); 684 + if (r) 685 + goto err_release_onenand; 686 + 687 + r = omap2_onenand_set_cfg(c, info.sync_read, info.sync_write, 688 + latency, info.burst_len); 689 + if (r) 690 + goto err_release_onenand; 691 + 692 + if (info.sync_read || info.sync_write) 693 + dev_info(dev, "optimized timings for %d MHz\n", freq); 694 + } 695 + 696 + r = mtd_device_register(&c->mtd, NULL, 0); 664 697 if (r) 665 698 goto err_release_onenand; 666 699 ··· 619 754 620 755 err_release_onenand: 621 756 onenand_release(&c->mtd); 622 - err_release_regulator: 623 - regulator_put(c->regulator); 624 757 err_release_dma: 625 - if (c->dma_channel != -1) 626 - omap_free_dma(c->dma_channel); 627 - if (c->gpio_irq) 628 - free_irq(gpio_to_irq(c->gpio_irq), c); 629 - err_release_gpio: 630 - if (c->gpio_irq) 631 - gpio_free(c->gpio_irq); 632 - 
err_iounmap: 633 - iounmap(c->onenand.base); 634 - err_release_mem_region: 635 - release_mem_region(c->phys_base, c->mem_size); 636 - err_kfree: 637 - kfree(c); 758 + if (c->dma_chan) 759 + dma_release_channel(c->dma_chan); 638 760 639 761 return r; 640 762 } ··· 631 779 struct omap2_onenand *c = dev_get_drvdata(&pdev->dev); 632 780 633 781 onenand_release(&c->mtd); 634 - regulator_put(c->regulator); 635 - if (c->dma_channel != -1) 636 - omap_free_dma(c->dma_channel); 782 + if (c->dma_chan) 783 + dma_release_channel(c->dma_chan); 637 784 omap2_onenand_shutdown(pdev); 638 - if (c->gpio_irq) { 639 - free_irq(gpio_to_irq(c->gpio_irq), c); 640 - gpio_free(c->gpio_irq); 641 - } 642 - iounmap(c->onenand.base); 643 - release_mem_region(c->phys_base, c->mem_size); 644 - kfree(c); 645 785 646 786 return 0; 647 787 } 788 + 789 + static const struct of_device_id omap2_onenand_id_table[] = { 790 + { .compatible = "ti,omap2-onenand", }, 791 + {}, 792 + }; 793 + MODULE_DEVICE_TABLE(of, omap2_onenand_id_table); 648 794 649 795 static struct platform_driver omap2_onenand_driver = { 650 796 .probe = omap2_onenand_probe, ··· 650 800 .shutdown = omap2_onenand_shutdown, 651 801 .driver = { 652 802 .name = DRIVER_NAME, 803 + .of_match_table = omap2_onenand_id_table, 653 804 }, 654 805 }; 655 806
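The reworked probe above derives the GPMC latency from the OneNAND clock frequency reported by the version register before calling gpmc_omap_onenand_set_timings(). That mapping can be exercised standalone; the helper name below is ours, not the driver's:

```c
#include <assert.h>

/*
 * Sketch of the latency selection in the reworked omap2_onenand_probe():
 * the frequency returned by omap2_onenand_get_freq() (MHz) is mapped to
 * a GPMC latency in cycles. Helper name is illustrative.
 */
static int onenand_freq_to_latency(int freq_mhz)
{
	switch (freq_mhz) {
	case 104:
		return 7;
	case 83:
		return 6;
	case 66:
		return 5;
	case 56:
		return 4;
	default: /* 40 MHz or lower */
		return 3;
	}
}
```

The driver then feeds the chosen latency to both the GPMC timing setup and omap2_onenand_set_cfg(), so the controller and the chip agree on the synchronous read/write parameters.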
+41 -142
drivers/mtd/onenand/samsung.c
··· 25 25 #include <linux/interrupt.h> 26 26 #include <linux/io.h> 27 27 28 - #include <asm/mach/flash.h> 29 - 30 28 #include "samsung.h" 31 29 32 30 enum soc_type { ··· 127 129 struct platform_device *pdev; 128 130 enum soc_type type; 129 131 void __iomem *base; 130 - struct resource *base_res; 131 132 void __iomem *ahb_addr; 132 - struct resource *ahb_res; 133 133 int bootram_command; 134 - void __iomem *page_buf; 135 - void __iomem *oob_buf; 134 + void *page_buf; 135 + void *oob_buf; 136 136 unsigned int (*mem_addr)(int fba, int fpa, int fsa); 137 137 unsigned int (*cmd_map)(unsigned int type, unsigned int val); 138 138 void __iomem *dma_addr; 139 - struct resource *dma_res; 140 139 unsigned long phys_base; 141 140 struct completion complete; 142 141 }; ··· 408 413 /* 409 414 * Emulate Two BufferRAMs and access with 4 bytes pointer 410 415 */ 411 - m = (unsigned int *) onenand->page_buf; 412 - s = (unsigned int *) onenand->oob_buf; 416 + m = onenand->page_buf; 417 + s = onenand->oob_buf; 413 418 414 419 if (index) { 415 420 m += (this->writesize >> 2); ··· 481 486 unsigned char *p; 482 487 483 488 if (area == ONENAND_DATARAM) { 484 - p = (unsigned char *) onenand->page_buf; 489 + p = onenand->page_buf; 485 490 if (index == 1) 486 491 p += this->writesize; 487 492 } else { 488 - p = (unsigned char *) onenand->oob_buf; 493 + p = onenand->oob_buf; 489 494 if (index == 1) 490 495 p += mtd->oobsize; 491 496 } ··· 846 851 /* No need to check pdata. 
the platform data is optional */ 847 852 848 853 size = sizeof(struct mtd_info) + sizeof(struct onenand_chip); 849 - mtd = kzalloc(size, GFP_KERNEL); 854 + mtd = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); 850 855 if (!mtd) 851 856 return -ENOMEM; 852 857 853 - onenand = kzalloc(sizeof(struct s3c_onenand), GFP_KERNEL); 854 - if (!onenand) { 855 - err = -ENOMEM; 856 - goto onenand_fail; 857 - } 858 + onenand = devm_kzalloc(&pdev->dev, sizeof(struct s3c_onenand), 859 + GFP_KERNEL); 860 + if (!onenand) 861 + return -ENOMEM; 858 862 859 863 this = (struct onenand_chip *) &mtd[1]; 860 864 mtd->priv = this; ··· 864 870 s3c_onenand_setup(mtd); 865 871 866 872 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 867 - if (!r) { 868 - dev_err(&pdev->dev, "no memory resource defined\n"); 869 - return -ENOENT; 870 - goto ahb_resource_failed; 871 - } 873 + onenand->base = devm_ioremap_resource(&pdev->dev, r); 874 + if (IS_ERR(onenand->base)) 875 + return PTR_ERR(onenand->base); 872 876 873 - onenand->base_res = request_mem_region(r->start, resource_size(r), 874 - pdev->name); 875 - if (!onenand->base_res) { 876 - dev_err(&pdev->dev, "failed to request memory resource\n"); 877 - err = -EBUSY; 878 - goto resource_failed; 879 - } 877 + onenand->phys_base = r->start; 880 878 881 - onenand->base = ioremap(r->start, resource_size(r)); 882 - if (!onenand->base) { 883 - dev_err(&pdev->dev, "failed to map memory resource\n"); 884 - err = -EFAULT; 885 - goto ioremap_failed; 886 - } 887 879 /* Set onenand_chip also */ 888 880 this->base = onenand->base; 889 881 ··· 878 898 879 899 if (onenand->type != TYPE_S5PC110) { 880 900 r = platform_get_resource(pdev, IORESOURCE_MEM, 1); 881 - if (!r) { 882 - dev_err(&pdev->dev, "no buffer memory resource defined\n"); 883 - err = -ENOENT; 884 - goto ahb_resource_failed; 885 - } 886 - 887 - onenand->ahb_res = request_mem_region(r->start, resource_size(r), 888 - pdev->name); 889 - if (!onenand->ahb_res) { 890 - dev_err(&pdev->dev, "failed to request 
buffer memory resource\n"); 891 - err = -EBUSY; 892 - goto ahb_resource_failed; 893 - } 894 - 895 - onenand->ahb_addr = ioremap(r->start, resource_size(r)); 896 - if (!onenand->ahb_addr) { 897 - dev_err(&pdev->dev, "failed to map buffer memory resource\n"); 898 - err = -EINVAL; 899 - goto ahb_ioremap_failed; 900 - } 901 + onenand->ahb_addr = devm_ioremap_resource(&pdev->dev, r); 902 + if (IS_ERR(onenand->ahb_addr)) 903 + return PTR_ERR(onenand->ahb_addr); 901 904 902 905 /* Allocate 4KiB BufferRAM */ 903 - onenand->page_buf = kzalloc(SZ_4K, GFP_KERNEL); 904 - if (!onenand->page_buf) { 905 - err = -ENOMEM; 906 - goto page_buf_fail; 907 - } 906 + onenand->page_buf = devm_kzalloc(&pdev->dev, SZ_4K, 907 + GFP_KERNEL); 908 + if (!onenand->page_buf) 909 + return -ENOMEM; 908 910 909 911 /* Allocate 128 SpareRAM */ 910 - onenand->oob_buf = kzalloc(128, GFP_KERNEL); 911 - if (!onenand->oob_buf) { 912 - err = -ENOMEM; 913 - goto oob_buf_fail; 914 - } 912 + onenand->oob_buf = devm_kzalloc(&pdev->dev, 128, GFP_KERNEL); 913 + if (!onenand->oob_buf) 914 + return -ENOMEM; 915 915 916 916 /* S3C doesn't handle subpage write */ 917 917 mtd->subpage_sft = 0; ··· 899 939 900 940 } else { /* S5PC110 */ 901 941 r = platform_get_resource(pdev, IORESOURCE_MEM, 1); 902 - if (!r) { 903 - dev_err(&pdev->dev, "no dma memory resource defined\n"); 904 - err = -ENOENT; 905 - goto dma_resource_failed; 906 - } 907 - 908 - onenand->dma_res = request_mem_region(r->start, resource_size(r), 909 - pdev->name); 910 - if (!onenand->dma_res) { 911 - dev_err(&pdev->dev, "failed to request dma memory resource\n"); 912 - err = -EBUSY; 913 - goto dma_resource_failed; 914 - } 915 - 916 - onenand->dma_addr = ioremap(r->start, resource_size(r)); 917 - if (!onenand->dma_addr) { 918 - dev_err(&pdev->dev, "failed to map dma memory resource\n"); 919 - err = -EINVAL; 920 - goto dma_ioremap_failed; 921 - } 922 - 923 - onenand->phys_base = onenand->base_res->start; 942 + onenand->dma_addr = 
devm_ioremap_resource(&pdev->dev, r); 943 + if (IS_ERR(onenand->dma_addr)) 944 + return PTR_ERR(onenand->dma_addr); 924 945 925 946 s5pc110_dma_ops = s5pc110_dma_poll; 926 947 /* Interrupt support */ ··· 909 968 if (r) { 910 969 init_completion(&onenand->complete); 911 970 s5pc110_dma_ops = s5pc110_dma_irq; 912 - err = request_irq(r->start, s5pc110_onenand_irq, 913 - IRQF_SHARED, "onenand", &onenand); 971 + err = devm_request_irq(&pdev->dev, r->start, 972 + s5pc110_onenand_irq, 973 + IRQF_SHARED, "onenand", 974 + &onenand); 914 975 if (err) { 915 976 dev_err(&pdev->dev, "failed to get irq\n"); 916 - goto scan_failed; 977 + return err; 917 978 } 918 979 } 919 980 } 920 981 921 - if (onenand_scan(mtd, 1)) { 922 - err = -EFAULT; 923 - goto scan_failed; 924 - } 982 + err = onenand_scan(mtd, 1); 983 + if (err) 984 + return err; 925 985 926 986 if (onenand->type != TYPE_S5PC110) { 927 987 /* S3C doesn't handle subpage write */ ··· 936 994 err = mtd_device_parse_register(mtd, NULL, NULL, 937 995 pdata ? pdata->parts : NULL, 938 996 pdata ? 
pdata->nr_parts : 0); 997 + if (err) { 998 + dev_err(&pdev->dev, "failed to parse partitions and register the MTD device\n"); 999 + onenand_release(mtd); 1000 + return err; 1001 + } 939 1002 940 1003 platform_set_drvdata(pdev, mtd); 941 1004 942 1005 return 0; 943 - 944 - scan_failed: 945 - if (onenand->dma_addr) 946 - iounmap(onenand->dma_addr); 947 - dma_ioremap_failed: 948 - if (onenand->dma_res) 949 - release_mem_region(onenand->dma_res->start, 950 - resource_size(onenand->dma_res)); 951 - kfree(onenand->oob_buf); 952 - oob_buf_fail: 953 - kfree(onenand->page_buf); 954 - page_buf_fail: 955 - if (onenand->ahb_addr) 956 - iounmap(onenand->ahb_addr); 957 - ahb_ioremap_failed: 958 - if (onenand->ahb_res) 959 - release_mem_region(onenand->ahb_res->start, 960 - resource_size(onenand->ahb_res)); 961 - dma_resource_failed: 962 - ahb_resource_failed: 963 - iounmap(onenand->base); 964 - ioremap_failed: 965 - if (onenand->base_res) 966 - release_mem_region(onenand->base_res->start, 967 - resource_size(onenand->base_res)); 968 - resource_failed: 969 - kfree(onenand); 970 - onenand_fail: 971 - kfree(mtd); 972 - return err; 973 1006 } 974 1007 975 1008 static int s3c_onenand_remove(struct platform_device *pdev) ··· 952 1035 struct mtd_info *mtd = platform_get_drvdata(pdev); 953 1036 954 1037 onenand_release(mtd); 955 - if (onenand->ahb_addr) 956 - iounmap(onenand->ahb_addr); 957 - if (onenand->ahb_res) 958 - release_mem_region(onenand->ahb_res->start, 959 - resource_size(onenand->ahb_res)); 960 - if (onenand->dma_addr) 961 - iounmap(onenand->dma_addr); 962 - if (onenand->dma_res) 963 - release_mem_region(onenand->dma_res->start, 964 - resource_size(onenand->dma_res)); 965 1038 966 - iounmap(onenand->base); 967 - release_mem_region(onenand->base_res->start, 968 - resource_size(onenand->base_res)); 969 - 970 - kfree(onenand->oob_buf); 971 - kfree(onenand->page_buf); 972 - kfree(onenand); 973 - kfree(mtd); 974 1039 return 0; 975 1040 } 976 1041
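The samsung.c conversion above replaces manual request_mem_region()/ioremap()/kzalloc() pairs with their devm_* counterparts, which is what lets the long goto-based unwind ladder collapse into plain returns: managed resources are stacked as they are acquired and torn down automatically in reverse order. A userspace toy of that stack discipline (an analogy, not the kernel's devres implementation):

```c
#include <assert.h>

/*
 * Toy devres-like stack: resources registered in acquisition order are
 * released in reverse order when the device goes away or probe fails,
 * mirroring why the removed error labels are no longer needed.
 */
#define MAX_RES 8

static int acquired[MAX_RES];
static int release_order[MAX_RES];
static int nres, nreleased;

/* Acquire a resource and register its automatic release */
static void devm_acquire(int id)
{
	acquired[nres++] = id;
}

/* Unwind everything, last-acquired first */
static void device_release(void)
{
	while (nres > 0)
		release_order[nreleased++] = acquired[--nres];
}
```

Acquiring resources 1 then 2 and releasing the device frees 2 then 1, exactly the teardown order the old `scan_failed`/`ioremap_failed` labels spelled out by hand.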
+1 -1
drivers/mtd/tests/nandbiterrs.c
··· 151 151 memcpy(&oldstats, &mtd->ecc_stats, sizeof(oldstats)); 152 152 153 153 err = mtd_read(mtd, offset, mtd->writesize, &read, rbuffer); 154 - if (err == -EUCLEAN) 154 + if (!err || err == -EUCLEAN) 155 155 err = mtd->ecc_stats.corrected - oldstats.corrected; 156 156 157 157 if (err < 0 || read != mtd->writesize) {
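The one-line fix above makes the test count corrected bitflips on a fully clean read (`err == 0`) as well, not only when mtd_read() returns -EUCLEAN; in both cases the actual count comes from the ecc_stats.corrected delta. Sketched standalone (the EUCLEAN value is the Linux errno, shown here for illustration):

```c
#include <assert.h>

#define EUCLEAN 117 /* Linux errno: "Structure needs cleaning" */

/*
 * Mirror of the fixed logic in nandbiterrs.c's read_page(): a read that
 * succeeded outright or returned -EUCLEAN reports the number of bitflips
 * the ECC engine corrected, taken from the stats delta; any other error
 * is passed through unchanged.
 */
static int read_page_result(int err, unsigned int corrected_before,
			    unsigned int corrected_after)
{
	if (!err || err == -EUCLEAN)
		return (int)(corrected_after - corrected_before);
	return err;
}
```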
+21
drivers/mtd/tests/oobtest.c
··· 193 193 ops.datbuf = NULL; 194 194 ops.oobbuf = readbuf; 195 195 err = mtd_read_oob(mtd, addr, &ops); 196 + if (mtd_is_bitflip(err)) 197 + err = 0; 198 + 196 199 if (err || ops.oobretlen != use_len) { 197 200 pr_err("error: readoob failed at %#llx\n", 198 201 (long long)addr); ··· 230 227 ops.datbuf = NULL; 231 228 ops.oobbuf = readbuf; 232 229 err = mtd_read_oob(mtd, addr, &ops); 230 + if (mtd_is_bitflip(err)) 231 + err = 0; 232 + 233 233 if (err || ops.oobretlen != mtd->oobavail) { 234 234 pr_err("error: readoob failed at %#llx\n", 235 235 (long long)addr); ··· 292 286 293 287 /* read entire block's OOB at one go */ 294 288 err = mtd_read_oob(mtd, addr, &ops); 289 + if (mtd_is_bitflip(err)) 290 + err = 0; 291 + 295 292 if (err || ops.oobretlen != len) { 296 293 pr_err("error: readoob failed at %#llx\n", 297 294 (long long)addr); ··· 536 527 pr_info("attempting to start read past end of OOB\n"); 537 528 pr_info("an error is expected...\n"); 538 529 err = mtd_read_oob(mtd, addr0, &ops); 530 + if (mtd_is_bitflip(err)) 531 + err = 0; 532 + 539 533 if (err) { 540 534 pr_info("error occurred as expected\n"); 541 535 err = 0; ··· 583 571 pr_info("attempting to read past end of device\n"); 584 572 pr_info("an error is expected...\n"); 585 573 err = mtd_read_oob(mtd, mtd->size - mtd->writesize, &ops); 574 + if (mtd_is_bitflip(err)) 575 + err = 0; 576 + 586 577 if (err) { 587 578 pr_info("error occurred as expected\n"); 588 579 err = 0; ··· 630 615 pr_info("attempting to read past end of device\n"); 631 616 pr_info("an error is expected...\n"); 632 617 err = mtd_read_oob(mtd, mtd->size - mtd->writesize, &ops); 618 + if (mtd_is_bitflip(err)) 619 + err = 0; 620 + 633 621 if (err) { 634 622 pr_info("error occurred as expected\n"); 635 623 err = 0; ··· 702 684 ops.datbuf = NULL; 703 685 ops.oobbuf = readbuf; 704 686 err = mtd_read_oob(mtd, addr, &ops); 687 + if (mtd_is_bitflip(err)) 688 + err = 0; 689 + 705 690 if (err) 706 691 goto out; 707 692 if (memcmpshow(addr, 
readbuf, writebuf,
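Every mtd_read_oob() call site patched above gains the same normalization: -EUCLEAN means the data was read and corrected, not lost, so the test must not treat it as a failure. The check really is this small (mirroring mtd_is_bitflip() from include/linux/mtd/mtd.h; the EUCLEAN value is shown for illustration):

```c
#include <assert.h>

#define EUCLEAN 117 /* Linux errno: "Structure needs cleaning" */

/* Same comparison as mtd_is_bitflip() in include/linux/mtd/mtd.h */
static int mtd_is_bitflip(int err)
{
	return err == -EUCLEAN;
}

/*
 * The pattern added after each mtd_read_oob() call in oobtest.c:
 * fold corrected bitflips into success before the error checks run.
 */
static int normalize_read_err(int err)
{
	if (mtd_is_bitflip(err))
		err = 0;
	return err;
}
```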
+2 -3
drivers/staging/mt29f_spinand/mt29f_spinand.c
··· 637 637 int eccsteps = chip->ecc.steps; 638 638 639 639 enable_hw_ecc = 1; 640 - chip->write_buf(mtd, p, eccsize * eccsteps); 641 - return 0; 640 + return nand_prog_page_op(chip, page, 0, p, eccsize * eccsteps); 642 641 } 643 642 644 643 static int spinand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, ··· 652 653 653 654 enable_read_hw_ecc = 1; 654 655 655 - chip->read_buf(mtd, p, eccsize * eccsteps); 656 + nand_read_page_op(chip, page, 0, p, eccsize * eccsteps); 656 657 if (oob_required) 657 658 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 658 659
+404 -39
include/linux/mtd/rawnand.h
··· 133 133 */ 134 134 #define NAND_ECC_GENERIC_ERASED_CHECK BIT(0) 135 135 #define NAND_ECC_MAXIMIZE BIT(1) 136 - /* 137 - * If your controller already sends the required NAND commands when 138 - * reading or writing a page, then the framework is not supposed to 139 - * send READ0 and SEQIN/PAGEPROG respectively. 140 - */ 141 - #define NAND_ECC_CUSTOM_PAGE_ACCESS BIT(2) 142 136 143 137 /* Bit mask for flags passed to do_nand_read_ecc */ 144 138 #define NAND_GET_DEVICE 0x80 ··· 185 191 /* Non chip related options */ 186 192 /* This option skips the bbt scan during initialization. */ 187 193 #define NAND_SKIP_BBTSCAN 0x00010000 188 - /* 189 - * This option is defined if the board driver allocates its own buffers 190 - * (e.g. because it needs them DMA-coherent). 191 - */ 192 - #define NAND_OWN_BUFFERS 0x00020000 193 194 /* Chip may not exist, so silence any errors in scan */ 194 195 #define NAND_SCAN_SILENT_NODEV 0x00040000 195 196 /* ··· 514 525 * @postpad: padding information for syndrome based ECC generators 515 526 * @options: ECC specific options (see NAND_ECC_XXX flags defined above) 516 527 * @priv: pointer to private ECC control data 528 + * @calc_buf: buffer for calculated ECC, size is oobsize. 529 + * @code_buf: buffer for ECC read from flash, size is oobsize. 517 530 * @hwctl: function to control hardware ECC generator. 
Must only 518 531 * be provided if an hardware ECC is available 519 532 * @calculate: function for ECC calculation or readback from ECC hardware ··· 566 575 int postpad; 567 576 unsigned int options; 568 577 void *priv; 578 + u8 *calc_buf; 579 + u8 *code_buf; 569 580 void (*hwctl)(struct mtd_info *mtd, int mode); 570 581 int (*calculate)(struct mtd_info *mtd, const uint8_t *dat, 571 582 uint8_t *ecc_code); ··· 593 600 int (*read_oob)(struct mtd_info *mtd, struct nand_chip *chip, int page); 594 601 int (*write_oob)(struct mtd_info *mtd, struct nand_chip *chip, 595 602 int page); 596 - }; 597 - 598 - static inline int nand_standard_page_accessors(struct nand_ecc_ctrl *ecc) 599 - { 600 - return !(ecc->options & NAND_ECC_CUSTOM_PAGE_ACCESS); 601 - } 602 - 603 - /** 604 - * struct nand_buffers - buffer structure for read/write 605 - * @ecccalc: buffer pointer for calculated ECC, size is oobsize. 606 - * @ecccode: buffer pointer for ECC read from flash, size is oobsize. 607 - * @databuf: buffer pointer for data, size is (page size + oobsize). 608 - * 609 - * Do not change the order of buffers. databuf and oobrbuf must be in 610 - * consecutive order. 
611 - */ 612 - struct nand_buffers { 613 - uint8_t *ecccalc; 614 - uint8_t *ecccode; 615 - uint8_t *databuf; 616 603 }; 617 604 618 605 /** ··· 735 762 }; 736 763 737 764 /** 765 + * struct nand_op_cmd_instr - Definition of a command instruction 766 + * @opcode: the command to issue in one cycle 767 + */ 768 + struct nand_op_cmd_instr { 769 + u8 opcode; 770 + }; 771 + 772 + /** 773 + * struct nand_op_addr_instr - Definition of an address instruction 774 + * @naddrs: length of the @addrs array 775 + * @addrs: array containing the address cycles to issue 776 + */ 777 + struct nand_op_addr_instr { 778 + unsigned int naddrs; 779 + const u8 *addrs; 780 + }; 781 + 782 + /** 783 + * struct nand_op_data_instr - Definition of a data instruction 784 + * @len: number of data bytes to move 785 + * @in: buffer to fill when reading from the NAND chip 786 + * @out: buffer to read from when writing to the NAND chip 787 + * @force_8bit: force 8-bit access 788 + * 789 + * Please note that "in" and "out" are inverted from the ONFI specification 790 + * and are from the controller perspective, so a "in" is a read from the NAND 791 + * chip while a "out" is a write to the NAND chip. 
792 + */ 793 + struct nand_op_data_instr { 794 + unsigned int len; 795 + union { 796 + void *in; 797 + const void *out; 798 + } buf; 799 + bool force_8bit; 800 + }; 801 + 802 + /** 803 + * struct nand_op_waitrdy_instr - Definition of a wait ready instruction 804 + * @timeout_ms: maximum delay while waiting for the ready/busy pin in ms 805 + */ 806 + struct nand_op_waitrdy_instr { 807 + unsigned int timeout_ms; 808 + }; 809 + 810 + /** 811 + * enum nand_op_instr_type - Definition of all instruction types 812 + * @NAND_OP_CMD_INSTR: command instruction 813 + * @NAND_OP_ADDR_INSTR: address instruction 814 + * @NAND_OP_DATA_IN_INSTR: data in instruction 815 + * @NAND_OP_DATA_OUT_INSTR: data out instruction 816 + * @NAND_OP_WAITRDY_INSTR: wait ready instruction 817 + */ 818 + enum nand_op_instr_type { 819 + NAND_OP_CMD_INSTR, 820 + NAND_OP_ADDR_INSTR, 821 + NAND_OP_DATA_IN_INSTR, 822 + NAND_OP_DATA_OUT_INSTR, 823 + NAND_OP_WAITRDY_INSTR, 824 + }; 825 + 826 + /** 827 + * struct nand_op_instr - Instruction object 828 + * @type: the instruction type 829 + * @cmd/@addr/@data/@waitrdy: extra data associated to the instruction. 830 + * You'll have to use the appropriate element 831 + * depending on @type 832 + * @delay_ns: delay the controller should apply after the instruction has been 833 + * issued on the bus. Most modern controllers have internal timings 834 + * control logic, and in this case, the controller driver can ignore 835 + * this field. 
836 + */ 837 + struct nand_op_instr { 838 + enum nand_op_instr_type type; 839 + union { 840 + struct nand_op_cmd_instr cmd; 841 + struct nand_op_addr_instr addr; 842 + struct nand_op_data_instr data; 843 + struct nand_op_waitrdy_instr waitrdy; 844 + } ctx; 845 + unsigned int delay_ns; 846 + }; 847 + 848 + /* 849 + * Special handling must be done for the WAITRDY timeout parameter as it usually 850 + * is either tPROG (after a prog), tR (before a read), tRST (during a reset) or 851 + * tBERS (during an erase) which all of them are u64 values that cannot be 852 + * divided by usual kernel macros and must be handled with the special 853 + * DIV_ROUND_UP_ULL() macro. 854 + */ 855 + #define __DIVIDE(dividend, divisor) ({ \ 856 + sizeof(dividend) == sizeof(u32) ? \ 857 + DIV_ROUND_UP(dividend, divisor) : \ 858 + DIV_ROUND_UP_ULL(dividend, divisor); \ 859 + }) 860 + #define PSEC_TO_NSEC(x) __DIVIDE(x, 1000) 861 + #define PSEC_TO_MSEC(x) __DIVIDE(x, 1000000000) 862 + 863 + #define NAND_OP_CMD(id, ns) \ 864 + { \ 865 + .type = NAND_OP_CMD_INSTR, \ 866 + .ctx.cmd.opcode = id, \ 867 + .delay_ns = ns, \ 868 + } 869 + 870 + #define NAND_OP_ADDR(ncycles, cycles, ns) \ 871 + { \ 872 + .type = NAND_OP_ADDR_INSTR, \ 873 + .ctx.addr = { \ 874 + .naddrs = ncycles, \ 875 + .addrs = cycles, \ 876 + }, \ 877 + .delay_ns = ns, \ 878 + } 879 + 880 + #define NAND_OP_DATA_IN(l, b, ns) \ 881 + { \ 882 + .type = NAND_OP_DATA_IN_INSTR, \ 883 + .ctx.data = { \ 884 + .len = l, \ 885 + .buf.in = b, \ 886 + .force_8bit = false, \ 887 + }, \ 888 + .delay_ns = ns, \ 889 + } 890 + 891 + #define NAND_OP_DATA_OUT(l, b, ns) \ 892 + { \ 893 + .type = NAND_OP_DATA_OUT_INSTR, \ 894 + .ctx.data = { \ 895 + .len = l, \ 896 + .buf.out = b, \ 897 + .force_8bit = false, \ 898 + }, \ 899 + .delay_ns = ns, \ 900 + } 901 + 902 + #define NAND_OP_8BIT_DATA_IN(l, b, ns) \ 903 + { \ 904 + .type = NAND_OP_DATA_IN_INSTR, \ 905 + .ctx.data = { \ 906 + .len = l, \ 907 + .buf.in = b, \ 908 + .force_8bit = true, \ 909 + }, \ 
910 + .delay_ns = ns, \ 911 + } 912 + 913 + #define NAND_OP_8BIT_DATA_OUT(l, b, ns) \ 914 + { \ 915 + .type = NAND_OP_DATA_OUT_INSTR, \ 916 + .ctx.data = { \ 917 + .len = l, \ 918 + .buf.out = b, \ 919 + .force_8bit = true, \ 920 + }, \ 921 + .delay_ns = ns, \ 922 + } 923 + 924 + #define NAND_OP_WAIT_RDY(tout_ms, ns) \ 925 + { \ 926 + .type = NAND_OP_WAITRDY_INSTR, \ 927 + .ctx.waitrdy.timeout_ms = tout_ms, \ 928 + .delay_ns = ns, \ 929 + } 930 + 931 + /** 932 + * struct nand_subop - a sub operation 933 + * @instrs: array of instructions 934 + * @ninstrs: length of the @instrs array 935 + * @first_instr_start_off: offset to start from for the first instruction 936 + * of the sub-operation 937 + * @last_instr_end_off: offset to end at (excluded) for the last instruction 938 + * of the sub-operation 939 + * 940 + * Both @first_instr_start_off and @last_instr_end_off only apply to data or 941 + * address instructions. 942 + * 943 + * When an operation cannot be handled as is by the NAND controller, it will 944 + * be split by the parser into sub-operations which will be passed to the 945 + * controller driver. 
946 + */ 947 + struct nand_subop { 948 + const struct nand_op_instr *instrs; 949 + unsigned int ninstrs; 950 + unsigned int first_instr_start_off; 951 + unsigned int last_instr_end_off; 952 + }; 953 + 954 + int nand_subop_get_addr_start_off(const struct nand_subop *subop, 955 + unsigned int op_id); 956 + int nand_subop_get_num_addr_cyc(const struct nand_subop *subop, 957 + unsigned int op_id); 958 + int nand_subop_get_data_start_off(const struct nand_subop *subop, 959 + unsigned int op_id); 960 + int nand_subop_get_data_len(const struct nand_subop *subop, 961 + unsigned int op_id); 962 + 963 + /** 964 + * struct nand_op_parser_addr_constraints - Constraints for address instructions 965 + * @maxcycles: maximum number of address cycles the controller can issue in a 966 + * single step 967 + */ 968 + struct nand_op_parser_addr_constraints { 969 + unsigned int maxcycles; 970 + }; 971 + 972 + /** 973 + * struct nand_op_parser_data_constraints - Constraints for data instructions 974 + * @maxlen: maximum data length that the controller can handle in a single step 975 + */ 976 + struct nand_op_parser_data_constraints { 977 + unsigned int maxlen; 978 + }; 979 + 980 + /** 981 + * struct nand_op_parser_pattern_elem - One element of a pattern 982 + * @type: the instruction type 983 + * @optional: whether this element of the pattern is optional or mandatory 984 + * @addr/@data: address or data constraint (number of cycles or data length) 985 + */ 986 + struct nand_op_parser_pattern_elem { 987 + enum nand_op_instr_type type; 988 + bool optional; 989 + union { 990 + struct nand_op_parser_addr_constraints addr; 991 + struct nand_op_parser_data_constraints data; 992 + } ctx; 993 + }; 994 + 995 + #define NAND_OP_PARSER_PAT_CMD_ELEM(_opt) \ 996 + { \ 997 + .type = NAND_OP_CMD_INSTR, \ 998 + .optional = _opt, \ 999 + } 1000 + 1001 + #define NAND_OP_PARSER_PAT_ADDR_ELEM(_opt, _maxcycles) \ 1002 + { \ 1003 + .type = NAND_OP_ADDR_INSTR, \ 1004 + .optional = _opt, \ 1005 + 
.ctx.addr.maxcycles = _maxcycles, \ 1006 + } 1007 + 1008 + #define NAND_OP_PARSER_PAT_DATA_IN_ELEM(_opt, _maxlen) \ 1009 + { \ 1010 + .type = NAND_OP_DATA_IN_INSTR, \ 1011 + .optional = _opt, \ 1012 + .ctx.data.maxlen = _maxlen, \ 1013 + } 1014 + 1015 + #define NAND_OP_PARSER_PAT_DATA_OUT_ELEM(_opt, _maxlen) \ 1016 + { \ 1017 + .type = NAND_OP_DATA_OUT_INSTR, \ 1018 + .optional = _opt, \ 1019 + .ctx.data.maxlen = _maxlen, \ 1020 + } 1021 + 1022 + #define NAND_OP_PARSER_PAT_WAITRDY_ELEM(_opt) \ 1023 + { \ 1024 + .type = NAND_OP_WAITRDY_INSTR, \ 1025 + .optional = _opt, \ 1026 + } 1027 + 1028 + /** 1029 + * struct nand_op_parser_pattern - NAND sub-operation pattern descriptor 1030 + * @elems: array of pattern elements 1031 + * @nelems: number of pattern elements in @elems array 1032 + * @exec: the function that will issue a sub-operation 1033 + * 1034 + * A pattern is a list of elements, each element representing one instruction 1035 + * with its constraints. The pattern itself is used by the core to match NAND 1036 + * chip operations with NAND controller operations. 1037 + * Once a match between a NAND controller operation pattern and a NAND chip 1038 + * operation (or a sub-set of a NAND operation) is found, the pattern ->exec() 1039 + * hook is called so that the controller driver can issue the operation on the 1040 + * bus. 1041 + * 1042 + * Controller drivers should declare as many patterns as they support and pass 1043 + * this list of patterns (created with the help of the following macro) to 1044 + * the nand_op_parser_exec_op() helper. 1045 + */ 1046 + struct nand_op_parser_pattern { 1047 + const struct nand_op_parser_pattern_elem *elems; 1048 + unsigned int nelems; 1049 + int (*exec)(struct nand_chip *chip, const struct nand_subop *subop); 1050 + }; 1051 + 1052 + #define NAND_OP_PARSER_PATTERN(_exec, ...) 
\ 1053 + { \ 1054 + .exec = _exec, \ 1055 + .elems = (struct nand_op_parser_pattern_elem[]) { __VA_ARGS__ }, \ 1056 + .nelems = sizeof((struct nand_op_parser_pattern_elem[]) { __VA_ARGS__ }) / \ 1057 + sizeof(struct nand_op_parser_pattern_elem), \ 1058 + } 1059 + 1060 + /** 1061 + * struct nand_op_parser - NAND controller operation parser descriptor 1062 + * @patterns: array of supported patterns 1063 + * @npatterns: length of the @patterns array 1064 + * 1065 + * The parser descriptor is just an array of supported patterns which will be 1066 + * iterated by nand_op_parser_exec_op() every time it tries to execute a 1067 + * NAND operation (or tries to determine if a specific operation is supported). 1068 + * 1069 + * It is worth mentioning that patterns will be tested in their declaration 1070 + * order, and the first match will be taken, so it's important to order patterns 1071 + * appropriately so that simple/inefficient patterns are placed at the end of 1072 + * the list. Usually, this is where you put single instruction patterns. 1073 + */ 1074 + struct nand_op_parser { 1075 + const struct nand_op_parser_pattern *patterns; 1076 + unsigned int npatterns; 1077 + }; 1078 + 1079 + #define NAND_OP_PARSER(...) \ 1080 + { \ 1081 + .patterns = (struct nand_op_parser_pattern[]) { __VA_ARGS__ }, \ 1082 + .npatterns = sizeof((struct nand_op_parser_pattern[]) { __VA_ARGS__ }) / \ 1083 + sizeof(struct nand_op_parser_pattern), \ 1084 + } 1085 + 1086 + /** 1087 + * struct nand_operation - NAND operation descriptor 1088 + * @instrs: array of instructions to execute 1089 + * @ninstrs: length of the @instrs array 1090 + * 1091 + * The actual operation structure that will be passed to chip->exec_op(). 
1092 + */ 1093 + struct nand_operation { 1094 + const struct nand_op_instr *instrs; 1095 + unsigned int ninstrs; 1096 + }; 1097 + 1098 + #define NAND_OPERATION(_instrs) \ 1099 + { \ 1100 + .instrs = _instrs, \ 1101 + .ninstrs = ARRAY_SIZE(_instrs), \ 1102 + } 1103 + 1104 + int nand_op_parser_exec_op(struct nand_chip *chip, 1105 + const struct nand_op_parser *parser, 1106 + const struct nand_operation *op, bool check_only); 1107 + 1108 + /** 738 1109 * struct nand_chip - NAND Private Flash Chip Data 739 1110 * @mtd: MTD device registered to the MTD framework 740 1111 * @IO_ADDR_R: [BOARDSPECIFIC] address to read the 8 I/O lines of the ··· 1104 787 * commands to the chip. 1105 788 * @waitfunc: [REPLACEABLE] hardwarespecific function for wait on 1106 789 * ready. 790 + * @exec_op: controller specific method to execute NAND operations. 791 + * This method replaces ->cmdfunc(), 792 + * ->{read,write}_{buf,byte,word}(), ->dev_ready() and 793 + * ->waitfunc(). 1107 794 * @setup_read_retry: [FLASHSPECIFIC] flash (vendor) specific function for 1108 795 * setting the read-retry mode. Mostly needed for MLC NAND. 1109 796 * @ecc: [BOARDSPECIFIC] ECC control structure 1110 - * @buffers: buffer structure for read/write 1111 797 * @buf_align: minimum buffer alignment required by a platform 1112 798 * @hwcontrol: platform-specific hardware control structure 1113 799 * @erase: [REPLACEABLE] erase function ··· 1150 830 * @numchips: [INTERN] number of physical chips 1151 831 * @chipsize: [INTERN] the size of one chip for multichip arrays 1152 832 * @pagemask: [INTERN] page number mask = number of (pages / chip) - 1 833 + * @data_buf: [INTERN] buffer for data, size is (page size + oobsize). 1153 834 * @pagebuf: [INTERN] holds the pagenumber which is currently in 1154 835 * data_buf. 
1155 836 * @pagebuf_bitflips: [INTERN] holds the bitflip count for the page which is ··· 1207 886 void (*cmdfunc)(struct mtd_info *mtd, unsigned command, int column, 1208 887 int page_addr); 1209 888 int(*waitfunc)(struct mtd_info *mtd, struct nand_chip *this); 889 + int (*exec_op)(struct nand_chip *chip, 890 + const struct nand_operation *op, 891 + bool check_only); 1210 892 int (*erase)(struct mtd_info *mtd, int page); 1211 893 int (*scan_bbt)(struct mtd_info *mtd); 1212 894 int (*onfi_set_features)(struct mtd_info *mtd, struct nand_chip *chip, ··· 1219 895 int (*setup_read_retry)(struct mtd_info *mtd, int retry_mode); 1220 896 int (*setup_data_interface)(struct mtd_info *mtd, int chipnr, 1221 897 const struct nand_data_interface *conf); 1222 - 1223 898 1224 899 int chip_delay; 1225 900 unsigned int options; ··· 1231 908 int numchips; 1232 909 uint64_t chipsize; 1233 910 int pagemask; 911 + u8 *data_buf; 1234 912 int pagebuf; 1235 913 unsigned int pagebuf_bitflips; 1236 914 int subpagesize; ··· 1252 928 u16 max_bb_per_die; 1253 929 u32 blocks_per_die; 1254 930 1255 - struct nand_data_interface *data_interface; 931 + struct nand_data_interface data_interface; 1256 932 1257 933 int read_retries; 1258 934 ··· 1262 938 struct nand_hw_control *controller; 1263 939 1264 940 struct nand_ecc_ctrl ecc; 1265 - struct nand_buffers *buffers; 1266 941 unsigned long buf_align; 1267 942 struct nand_hw_control hwcontrol; 1268 943 ··· 1278 955 void *priv; 1279 956 } manufacturer; 1280 957 }; 958 + 959 + static inline int nand_exec_op(struct nand_chip *chip, 960 + const struct nand_operation *op) 961 + { 962 + if (!chip->exec_op) 963 + return -ENOTSUPP; 964 + 965 + return chip->exec_op(chip, op, false); 966 + } 1281 967 1282 968 extern const struct mtd_ooblayout_ops nand_ooblayout_sp_ops; 1283 969 extern const struct mtd_ooblayout_ops nand_ooblayout_lp_ops; ··· 1557 1225 return le16_to_cpu(chip->onfi_params.src_sync_timing_mode); 1558 1226 } 1559 1227 1560 - int 
onfi_init_data_interface(struct nand_chip *chip, 1561 - struct nand_data_interface *iface, 1228 + int onfi_fill_data_interface(struct nand_chip *chip, 1562 1229 enum nand_data_interface_type type, 1563 1230 int timing_mode); 1564 1231 ··· 1600 1269 1601 1270 /* get timing characteristics from ONFI timing mode. */ 1602 1271 const struct nand_sdr_timings *onfi_async_timing_mode_to_sdr_timings(int mode); 1603 - /* get data interface from ONFI timing mode 0, used after reset. */ 1604 - const struct nand_data_interface *nand_get_default_data_interface(void); 1605 1272 1606 1273 int nand_check_erased_ecc_chunk(void *data, int datalen, 1607 1274 void *ecc, int ecclen, ··· 1645 1316 /* Reset and initialize a NAND device */ 1646 1317 int nand_reset(struct nand_chip *chip, int chipnr); 1647 1318 1319 + /* NAND operation helpers */ 1320 + int nand_reset_op(struct nand_chip *chip); 1321 + int nand_readid_op(struct nand_chip *chip, u8 addr, void *buf, 1322 + unsigned int len); 1323 + int nand_status_op(struct nand_chip *chip, u8 *status); 1324 + int nand_exit_status_op(struct nand_chip *chip); 1325 + int nand_erase_op(struct nand_chip *chip, unsigned int eraseblock); 1326 + int nand_read_page_op(struct nand_chip *chip, unsigned int page, 1327 + unsigned int offset_in_page, void *buf, unsigned int len); 1328 + int nand_change_read_column_op(struct nand_chip *chip, 1329 + unsigned int offset_in_page, void *buf, 1330 + unsigned int len, bool force_8bit); 1331 + int nand_read_oob_op(struct nand_chip *chip, unsigned int page, 1332 + unsigned int offset_in_page, void *buf, unsigned int len); 1333 + int nand_prog_page_begin_op(struct nand_chip *chip, unsigned int page, 1334 + unsigned int offset_in_page, const void *buf, 1335 + unsigned int len); 1336 + int nand_prog_page_end_op(struct nand_chip *chip); 1337 + int nand_prog_page_op(struct nand_chip *chip, unsigned int page, 1338 + unsigned int offset_in_page, const void *buf, 1339 + unsigned int len); 1340 + int 
nand_change_write_column_op(struct nand_chip *chip, 1341 + unsigned int offset_in_page, const void *buf, 1342 + unsigned int len, bool force_8bit); 1343 + int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len, 1344 + bool force_8bit); 1345 + int nand_write_data_op(struct nand_chip *chip, const void *buf, 1346 + unsigned int len, bool force_8bit); 1347 + 1648 1348 /* Free resources held by the NAND device */ 1649 1349 void nand_cleanup(struct nand_chip *chip); 1650 1350 1651 1351 /* Default extended ID decoding function */ 1652 1352 void nand_decode_ext_id(struct nand_chip *chip); 1353 + 1354 + /* 1355 + * External helper for controller drivers that have to implement the WAITRDY 1356 + * instruction and have no physical pin to check it. 1357 + */ 1358 + int nand_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms); 1359 + 1653 1360 #endif /* __LINUX_MTD_RAWNAND_H */
+28
include/linux/omap-gpmc.h
··· 25 25 26 26 struct gpmc_nand_regs; 27 27 28 + struct gpmc_onenand_info { 29 + bool sync_read; 30 + bool sync_write; 31 + int burst_len; 32 + }; 33 + 28 34 #if IS_ENABLED(CONFIG_OMAP_GPMC) 29 35 struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *regs, 30 36 int cs); 37 + /** 38 + * gpmc_omap_onenand_set_timings - set optimized sync timings. 39 + * @cs: Chip Select Region 40 + * @freq: Chip frequency 41 + * @latency: Burst latency cycle count 42 + * @info: Structure describing parameters used 43 + * 44 + * Sets optimized timings for the @cs region based on @freq and @latency. 45 + * Updates the @info structure based on the GPMC settings. 46 + */ 47 + int gpmc_omap_onenand_set_timings(struct device *dev, int cs, int freq, 48 + int latency, 49 + struct gpmc_onenand_info *info); 50 + 31 51 #else 32 52 static inline struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *regs, 33 53 int cs) 34 54 { 35 55 return NULL; 56 + } 57 + 58 + static inline 59 + int gpmc_omap_onenand_set_timings(struct device *dev, int cs, int freq, 60 + int latency, 61 + struct gpmc_onenand_info *info) 62 + { 63 + return -EINVAL; 36 64 } 37 65 #endif /* CONFIG_OMAP_GPMC */ 38 66
-34
include/linux/platform_data/mtd-onenand-omap2.h
··· 1 - /* 2 - * Copyright (C) 2006 Nokia Corporation 3 - * Author: Juha Yrjola 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License version 2 as 7 - * published by the Free Software Foundation. 8 - */ 9 - 10 - #ifndef __MTD_ONENAND_OMAP2_H 11 - #define __MTD_ONENAND_OMAP2_H 12 - 13 - #include <linux/mtd/mtd.h> 14 - #include <linux/mtd/partitions.h> 15 - 16 - #define ONENAND_SYNC_READ (1 << 0) 17 - #define ONENAND_SYNC_READWRITE (1 << 1) 18 - #define ONENAND_IN_OMAP34XX (1 << 2) 19 - 20 - struct omap_onenand_platform_data { 21 - int cs; 22 - int gpio_irq; 23 - struct mtd_partition *parts; 24 - int nr_parts; 25 - int (*onenand_setup)(void __iomem *, int *freq_ptr); 26 - int dma_channel; 27 - u8 flags; 28 - u8 regulator_can_sleep; 29 - u8 skip_initial_unlocking; 30 - 31 - /* for passing the partitions */ 32 - struct device_node *of_node; 33 - }; 34 - #endif