Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-linus-3.5-20120601' of git://git.infradead.org/linux-mtd

Pull mtd update from David Woodhouse:
- More robust parsing especially of xattr data in JFFS2
- Updates to mxc_nand and gpmi drivers to support new boards and device tree
- Improve consistency of information about ECC strength in NAND devices
- Clean up partition handling of plat_nand
- Support NAND drivers without dedicated access to OOB area
- BCH hardware ECC support for OMAP
- Other fixes and cleanups, and a few new device IDs

Fixed trivial conflict in drivers/mtd/nand/gpmi-nand/gpmi-nand.c due to
added include files next to each other.

* tag 'for-linus-3.5-20120601' of git://git.infradead.org/linux-mtd: (75 commits)
mtd: mxc_nand: move ecc strength setup before nand_scan_tail
mtd: block2mtd: fix recursive call of mtd_writev
mtd: gpmi-nand: define ecc.strength
mtd: of_parts: fix breakage in Kconfig
mtd: nand: fix scan_read_raw_oob
mtd: docg3 fix in-middle of blocks reads
mtd: cfi_cmdset_0002: Slight cleanup of fixup messages
mtd: add fixup for S29NS512P NOR flash.
jffs2: allow to complete xattr integrity check on first GC scan
jffs2: allow to discriminate between recoverable and non-recoverable errors
mtd: nand: omap: add support for hardware BCH ecc
ARM: OMAP3: gpmc: add BCH ecc api and modes
mtd: nand: check the return code of 'read_oob/read_oob_raw'
mtd: nand: remove 'sndcmd' parameter of 'read_oob/read_oob_raw'
mtd: m25p80: Add support for Winbond W25Q80BW
jffs2: get rid of jffs2_sync_super
jffs2: remove unnecessary GC pass on sync
jffs2: remove unnecessary GC pass on umount
jffs2: remove lock_super
mtd: gpmi: add gpmi support for mx6q
...

+1778 -795
+51
Documentation/ABI/testing/sysfs-class-mtd
··· 123 123 half page, or a quarter page). 124 124 125 125 In the case of ECC NOR, it is the ECC block size. 126 + 127 + What: /sys/class/mtd/mtdX/ecc_strength 128 + Date: April 2012 129 + KernelVersion: 3.4 130 + Contact: linux-mtd@lists.infradead.org 131 + Description: 132 + Maximum number of bit errors that the device is capable of 133 + correcting within each region covering an ecc step. This will 134 + always be a non-negative integer. Note that some devices will 135 + have multiple ecc steps within each writesize region. 136 + 137 + In the case of devices lacking any ECC capability, it is 0. 138 + 139 + What: /sys/class/mtd/mtdX/bitflip_threshold 140 + Date: April 2012 141 + KernelVersion: 3.4 142 + Contact: linux-mtd@lists.infradead.org 143 + Description: 144 + This allows the user to examine and adjust the criteria by which 145 + mtd returns -EUCLEAN from mtd_read(). If the maximum number of 146 + bit errors that were corrected on any single region comprising 147 + an ecc step (as reported by the driver) equals or exceeds this 148 + value, -EUCLEAN is returned. Otherwise, absent an error, 0 is 149 + returned. Higher layers (e.g., UBI) use this return code as an 150 + indication that an erase block may be degrading and should be 151 + scrutinized as a candidate for being marked as bad. 152 + 153 + The initial value may be specified by the flash device driver. 154 + If not, then the default value is ecc_strength. 155 + 156 + The introduction of this feature brings a subtle change to the 157 + meaning of the -EUCLEAN return code. Previously, it was 158 + interpreted to mean simply "one or more bit errors were 159 + corrected". Its new interpretation can be phrased as "a 160 + dangerously high number of bit errors were corrected on one or 161 + more regions comprising an ecc step". The precise definition of 162 + "dangerously high" can be adjusted by the user with 163 + bitflip_threshold. 
Users are discouraged from doing this, 164 + however, unless they know what they are doing and have intimate 165 + knowledge of the properties of their device. Broadly speaking, 166 + bitflip_threshold should be low enough to detect genuine erase 167 + block degradation, but high enough to avoid the consequences of 168 + a persistent return value of -EUCLEAN on devices where sticky 169 + bitflips occur. Note that if bitflip_threshold exceeds 170 + ecc_strength, -EUCLEAN is never returned by mtd_read(). 171 + Conversely, if bitflip_threshold is zero, -EUCLEAN is always 172 + returned, absent a hard error. 173 + 174 + This is generally applicable only to NAND flash devices with ECC 175 + capability. It is ignored on devices lacking ECC capability; 176 + i.e., devices for which ecc_strength is zero.
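The -EUCLEAN return-code rule described in the bitflip_threshold ABI text above can be sketched as a small decision function. This is a userspace model for illustration only, not the kernel implementation; the function name is made up here.

```c
#include <assert.h>

#define EUCLEAN 117  /* Linux errno value for EUCLEAN, used illustratively */

/*
 * Model of the mtd_read() rule described above: if the maximum number
 * of bit errors corrected on any single ecc-step region reaches
 * bitflip_threshold, report -EUCLEAN; otherwise (absent a hard error)
 * report success. With a threshold of 0, -EUCLEAN is always returned.
 */
static int read_status(int max_bitflips, int bitflip_threshold)
{
	if (max_bitflips >= bitflip_threshold)
		return -EUCLEAN;
	return 0;
}
```

Note how the documented edge cases fall out of the comparison: a threshold above ecc_strength can never be reached, and a threshold of zero is always reached.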
-2
Documentation/DocBook/mtdnand.tmpl
··· 1119 1119 These constants are defined in nand.h. They are ored together to describe 1120 1120 the chip functionality. 1121 1121 <programlisting> 1122 - /* Chip can not auto increment pages */ 1123 - #define NAND_NO_AUTOINCR 0x00000001 1124 1122 /* Buswitdh is 16 bit */ 1125 1123 #define NAND_BUSWIDTH_16 0x00000002 1126 1124 /* Device supports partial programming without padding */
+33
Documentation/devicetree/bindings/mtd/gpmi-nand.txt
··· 1 + * Freescale General-Purpose Media Interface (GPMI) 2 + 3 + The GPMI nand controller provides an interface to control the 4 + NAND flash chips. We support only one NAND chip now. 5 + 6 + Required properties: 7 + - compatible : should be "fsl,<chip>-gpmi-nand" 8 + - reg : should contain registers location and length for gpmi and bch. 9 + - reg-names: Should contain the reg names "gpmi-nand" and "bch" 10 + - interrupts : The first is the DMA interrupt number for GPMI. 11 + The second is the BCH interrupt number. 12 + - interrupt-names : The interrupt names "gpmi-dma", "bch"; 13 + - fsl,gpmi-dma-channel : Should contain the dma channel it uses. 14 + 15 + The device tree may optionally contain sub-nodes describing partitions of the 16 + address space. See partition.txt for more detail. 17 + 18 + Examples: 19 + 20 + gpmi-nand@8000c000 { 21 + compatible = "fsl,imx28-gpmi-nand"; 22 + #address-cells = <1>; 23 + #size-cells = <1>; 24 + reg = <0x8000c000 2000>, <0x8000a000 2000>; 25 + reg-names = "gpmi-nand", "bch"; 26 + interrupts = <88>, <41>; 27 + interrupt-names = "gpmi-dma", "bch"; 28 + fsl,gpmi-dma-channel = <4>; 29 + 30 + partition@0 { 31 + ... 32 + }; 33 + };
+19
Documentation/devicetree/bindings/mtd/mxc-nand.txt
··· 1 + * Freescale's mxc_nand 2 + 3 + Required properties: 4 + - compatible: "fsl,imxXX-nand" 5 + - reg: address range of the nfc block 6 + - interrupts: irq to be used 7 + - nand-bus-width: see nand.txt 8 + - nand-ecc-mode: see nand.txt 9 + - nand-on-flash-bbt: see nand.txt 10 + 11 + Example: 12 + 13 + nand@d8000000 { 14 + compatible = "fsl,imx27-nand"; 15 + reg = <0xd8000000 0x1000>; 16 + interrupts = <29>; 17 + nand-bus-width = <8>; 18 + nand-ecc-mode = "hw"; 19 + };
+9
arch/arm/boot/dts/imx27.dtsi
··· 213 213 status = "disabled"; 214 214 }; 215 215 }; 216 + nand@d8000000 { 217 + #address-cells = <1>; 218 + #size-cells = <1>; 219 + 220 + compatible = "fsl,imx27-nand"; 221 + reg = <0xd8000000 0x1000>; 222 + interrupts = <29>; 223 + status = "disabled"; 224 + }; 216 225 }; 217 226 };
-4
arch/arm/mach-ep93xx/snappercl15.c
··· 82 82 return !!(__raw_readw(NAND_CTRL_ADDR(chip)) & SNAPPERCL15_NAND_RDY); 83 83 } 84 84 85 - static const char *snappercl15_nand_part_probes[] = {"cmdlinepart", NULL}; 86 - 87 85 static struct mtd_partition snappercl15_nand_parts[] = { 88 86 { 89 87 .name = "Kernel", ··· 98 100 static struct platform_nand_data snappercl15_nand_data = { 99 101 .chip = { 100 102 .nr_chips = 1, 101 - .part_probe_types = snappercl15_nand_part_probes, 102 103 .partitions = snappercl15_nand_parts, 103 104 .nr_partitions = ARRAY_SIZE(snappercl15_nand_parts), 104 - .options = NAND_NO_AUTOINCR, 105 105 .chip_delay = 25, 106 106 }, 107 107 .ctrl = {
-3
arch/arm/mach-ep93xx/ts72xx.c
··· 105 105 return !!(__raw_readb(addr) & 0x20); 106 106 } 107 107 108 - static const char *ts72xx_nand_part_probes[] = { "cmdlinepart", NULL }; 109 - 110 108 #define TS72XX_BOOTROM_PART_SIZE (SZ_16K) 111 109 #define TS72XX_REDBOOT_PART_SIZE (SZ_2M + SZ_1M) 112 110 ··· 132 134 .nr_chips = 1, 133 135 .chip_offset = 0, 134 136 .chip_delay = 15, 135 - .part_probe_types = ts72xx_nand_part_probes, 136 137 .partitions = ts72xx_nand_parts, 137 138 .nr_partitions = ARRAY_SIZE(ts72xx_nand_parts), 138 139 },
+1
arch/arm/mach-imx/imx27-dt.c
··· 29 29 OF_DEV_AUXDATA("fsl,imx27-cspi", MX27_CSPI2_BASE_ADDR, "imx27-cspi.1", NULL), 30 30 OF_DEV_AUXDATA("fsl,imx27-cspi", MX27_CSPI3_BASE_ADDR, "imx27-cspi.2", NULL), 31 31 OF_DEV_AUXDATA("fsl,imx27-wdt", MX27_WDOG_BASE_ADDR, "imx2-wdt.0", NULL), 32 + OF_DEV_AUXDATA("fsl,imx27-nand", MX27_NFC_BASE_ADDR, "mxc_nand.0", NULL), 32 33 { /* sentinel */ } 33 34 }; 34 35
-4
arch/arm/mach-ixp4xx/ixdp425-setup.c
··· 60 60 #if defined(CONFIG_MTD_NAND_PLATFORM) || \ 61 61 defined(CONFIG_MTD_NAND_PLATFORM_MODULE) 62 62 63 - const char *part_probes[] = { "cmdlinepart", NULL }; 64 - 65 63 static struct mtd_partition ixdp425_partitions[] = { 66 64 { 67 65 .name = "ixp400 NAND FS 0", ··· 98 100 .chip = { 99 101 .nr_chips = 1, 100 102 .chip_delay = 30, 101 - .options = NAND_NO_AUTOINCR, 102 - .part_probe_types = part_probes, 103 103 .partitions = ixdp425_partitions, 104 104 .nr_partitions = ARRAY_SIZE(ixdp425_partitions), 105 105 },
+1 -1
arch/arm/mach-nomadik/board-nhk8815.c
··· 111 111 .parts = nhk8815_partitions, 112 112 .nparts = ARRAY_SIZE(nhk8815_partitions), 113 113 .options = NAND_COPYBACK | NAND_CACHEPRG | NAND_NO_PADDING \ 114 - | NAND_NO_READRDY | NAND_NO_AUTOINCR, 114 + | NAND_NO_READRDY, 115 115 .init = nhk8815_nand_init, 116 116 }; 117 117
-3
arch/arm/mach-omap1/board-fsample.c
··· 192 192 return gpio_get_value(FSAMPLE_NAND_RB_GPIO_PIN); 193 193 } 194 194 195 - static const char *part_probes[] = { "cmdlinepart", NULL }; 196 - 197 195 static struct platform_nand_data nand_data = { 198 196 .chip = { 199 197 .nr_chips = 1, 200 198 .chip_offset = 0, 201 199 .options = NAND_SAMSUNG_LP_OPTIONS, 202 - .part_probe_types = part_probes, 203 200 }, 204 201 .ctrl = { 205 202 .cmd_ctrl = omap1_nand_cmd_ctl,
-3
arch/arm/mach-omap1/board-h2.c
··· 186 186 return gpio_get_value(H2_NAND_RB_GPIO_PIN); 187 187 } 188 188 189 - static const char *h2_part_probes[] = { "cmdlinepart", NULL }; 190 - 191 189 static struct platform_nand_data h2_nand_platdata = { 192 190 .chip = { 193 191 .nr_chips = 1, ··· 193 195 .nr_partitions = ARRAY_SIZE(h2_nand_partitions), 194 196 .partitions = h2_nand_partitions, 195 197 .options = NAND_SAMSUNG_LP_OPTIONS, 196 - .part_probe_types = h2_part_probes, 197 198 }, 198 199 .ctrl = { 199 200 .cmd_ctrl = omap1_nand_cmd_ctl,
-3
arch/arm/mach-omap1/board-h3.c
··· 188 188 return gpio_get_value(H3_NAND_RB_GPIO_PIN); 189 189 } 190 190 191 - static const char *part_probes[] = { "cmdlinepart", NULL }; 192 - 193 191 static struct platform_nand_data nand_platdata = { 194 192 .chip = { 195 193 .nr_chips = 1, ··· 195 197 .nr_partitions = ARRAY_SIZE(nand_partitions), 196 198 .partitions = nand_partitions, 197 199 .options = NAND_SAMSUNG_LP_OPTIONS, 198 - .part_probe_types = part_probes, 199 200 }, 200 201 .ctrl = { 201 202 .cmd_ctrl = omap1_nand_cmd_ctl,
-3
arch/arm/mach-omap1/board-perseus2.c
··· 150 150 return gpio_get_value(P2_NAND_RB_GPIO_PIN); 151 151 } 152 152 153 - static const char *part_probes[] = { "cmdlinepart", NULL }; 154 - 155 153 static struct platform_nand_data nand_data = { 156 154 .chip = { 157 155 .nr_chips = 1, 158 156 .chip_offset = 0, 159 157 .options = NAND_SAMSUNG_LP_OPTIONS, 160 - .part_probe_types = part_probes, 161 158 }, 162 159 .ctrl = { 163 160 .cmd_ctrl = omap1_nand_cmd_ctl,
+184
arch/arm/mach-omap2/gpmc.c
··· 49 49 #define GPMC_ECC_CONTROL 0x1f8 50 50 #define GPMC_ECC_SIZE_CONFIG 0x1fc 51 51 #define GPMC_ECC1_RESULT 0x200 52 + #define GPMC_ECC_BCH_RESULT_0 0x240 /* not available on OMAP2 */ 52 53 53 54 /* GPMC ECC control settings */ 54 55 #define GPMC_ECC_CTRL_ECCCLEAR 0x100 ··· 936 935 return 0; 937 936 } 938 937 EXPORT_SYMBOL_GPL(gpmc_calculate_ecc); 938 + 939 + #ifdef CONFIG_ARCH_OMAP3 940 + 941 + /** 942 + * gpmc_init_hwecc_bch - initialize hardware BCH ecc functionality 943 + * @cs: chip select number 944 + * @nsectors: how many 512-byte sectors to process 945 + * @nerrors: how many errors to correct per sector (4 or 8) 946 + * 947 + * This function must be executed before any call to gpmc_enable_hwecc_bch. 948 + */ 949 + int gpmc_init_hwecc_bch(int cs, int nsectors, int nerrors) 950 + { 951 + /* check if ecc module is in use */ 952 + if (gpmc_ecc_used != -EINVAL) 953 + return -EINVAL; 954 + 955 + /* support only OMAP3 class */ 956 + if (!cpu_is_omap34xx()) { 957 + printk(KERN_ERR "BCH ecc is not supported on this CPU\n"); 958 + return -EINVAL; 959 + } 960 + 961 + /* 962 + * For now, assume 4-bit mode is only supported on OMAP3630 ES1.x, x>=1. 963 + * Other chips may be added if confirmed to work. 
964 + */ 965 + if ((nerrors == 4) && 966 + (!cpu_is_omap3630() || (GET_OMAP_REVISION() == 0))) { 967 + printk(KERN_ERR "BCH 4-bit mode is not supported on this CPU\n"); 968 + return -EINVAL; 969 + } 970 + 971 + /* sanity check */ 972 + if (nsectors > 8) { 973 + printk(KERN_ERR "BCH cannot process %d sectors (max is 8)\n", 974 + nsectors); 975 + return -EINVAL; 976 + } 977 + 978 + return 0; 979 + } 980 + EXPORT_SYMBOL_GPL(gpmc_init_hwecc_bch); 981 + 982 + /** 983 + * gpmc_enable_hwecc_bch - enable hardware BCH ecc functionality 984 + * @cs: chip select number 985 + * @mode: read/write mode 986 + * @dev_width: device bus width(1 for x16, 0 for x8) 987 + * @nsectors: how many 512-byte sectors to process 988 + * @nerrors: how many errors to correct per sector (4 or 8) 989 + */ 990 + int gpmc_enable_hwecc_bch(int cs, int mode, int dev_width, int nsectors, 991 + int nerrors) 992 + { 993 + unsigned int val; 994 + 995 + /* check if ecc module is in use */ 996 + if (gpmc_ecc_used != -EINVAL) 997 + return -EINVAL; 998 + 999 + gpmc_ecc_used = cs; 1000 + 1001 + /* clear ecc and enable bits */ 1002 + gpmc_write_reg(GPMC_ECC_CONTROL, 0x1); 1003 + 1004 + /* 1005 + * When using BCH, sector size is hardcoded to 512 bytes. 1006 + * Here we are using wrapping mode 6 both for reading and writing, with: 1007 + * size0 = 0 (no additional protected byte in spare area) 1008 + * size1 = 32 (skip 32 nibbles = 16 bytes per sector in spare area) 1009 + */ 1010 + gpmc_write_reg(GPMC_ECC_SIZE_CONFIG, (32 << 22) | (0 << 12)); 1011 + 1012 + /* BCH configuration */ 1013 + val = ((1 << 16) | /* enable BCH */ 1014 + (((nerrors == 8) ? 
1 : 0) << 12) | /* 8 or 4 bits */ 1015 + (0x06 << 8) | /* wrap mode = 6 */ 1016 + (dev_width << 7) | /* bus width */ 1017 + (((nsectors-1) & 0x7) << 4) | /* number of sectors */ 1018 + (cs << 1) | /* ECC CS */ 1019 + (0x1)); /* enable ECC */ 1020 + 1021 + gpmc_write_reg(GPMC_ECC_CONFIG, val); 1022 + gpmc_write_reg(GPMC_ECC_CONTROL, 0x101); 1023 + return 0; 1024 + } 1025 + EXPORT_SYMBOL_GPL(gpmc_enable_hwecc_bch); 1026 + 1027 + /** 1028 + * gpmc_calculate_ecc_bch4 - Generate 7 ecc bytes per sector of 512 data bytes 1029 + * @cs: chip select number 1030 + * @dat: The pointer to data on which ecc is computed 1031 + * @ecc: The ecc output buffer 1032 + */ 1033 + int gpmc_calculate_ecc_bch4(int cs, const u_char *dat, u_char *ecc) 1034 + { 1035 + int i; 1036 + unsigned long nsectors, reg, val1, val2; 1037 + 1038 + if (gpmc_ecc_used != cs) 1039 + return -EINVAL; 1040 + 1041 + nsectors = ((gpmc_read_reg(GPMC_ECC_CONFIG) >> 4) & 0x7) + 1; 1042 + 1043 + for (i = 0; i < nsectors; i++) { 1044 + 1045 + reg = GPMC_ECC_BCH_RESULT_0 + 16*i; 1046 + 1047 + /* Read hw-computed remainder */ 1048 + val1 = gpmc_read_reg(reg + 0); 1049 + val2 = gpmc_read_reg(reg + 4); 1050 + 1051 + /* 1052 + * Add constant polynomial to remainder, in order to get an ecc 1053 + * sequence of 0xFFs for a buffer filled with 0xFFs; and 1054 + * left-justify the resulting polynomial. 
1055 + */ 1056 + *ecc++ = 0x28 ^ ((val2 >> 12) & 0xFF); 1057 + *ecc++ = 0x13 ^ ((val2 >> 4) & 0xFF); 1058 + *ecc++ = 0xcc ^ (((val2 & 0xF) << 4)|((val1 >> 28) & 0xF)); 1059 + *ecc++ = 0x39 ^ ((val1 >> 20) & 0xFF); 1060 + *ecc++ = 0x96 ^ ((val1 >> 12) & 0xFF); 1061 + *ecc++ = 0xac ^ ((val1 >> 4) & 0xFF); 1062 + *ecc++ = 0x7f ^ ((val1 & 0xF) << 4); 1063 + } 1064 + 1065 + gpmc_ecc_used = -EINVAL; 1066 + return 0; 1067 + } 1068 + EXPORT_SYMBOL_GPL(gpmc_calculate_ecc_bch4); 1069 + 1070 + /** 1071 + * gpmc_calculate_ecc_bch8 - Generate 13 ecc bytes per block of 512 data bytes 1072 + * @cs: chip select number 1073 + * @dat: The pointer to data on which ecc is computed 1074 + * @ecc: The ecc output buffer 1075 + */ 1076 + int gpmc_calculate_ecc_bch8(int cs, const u_char *dat, u_char *ecc) 1077 + { 1078 + int i; 1079 + unsigned long nsectors, reg, val1, val2, val3, val4; 1080 + 1081 + if (gpmc_ecc_used != cs) 1082 + return -EINVAL; 1083 + 1084 + nsectors = ((gpmc_read_reg(GPMC_ECC_CONFIG) >> 4) & 0x7) + 1; 1085 + 1086 + for (i = 0; i < nsectors; i++) { 1087 + 1088 + reg = GPMC_ECC_BCH_RESULT_0 + 16*i; 1089 + 1090 + /* Read hw-computed remainder */ 1091 + val1 = gpmc_read_reg(reg + 0); 1092 + val2 = gpmc_read_reg(reg + 4); 1093 + val3 = gpmc_read_reg(reg + 8); 1094 + val4 = gpmc_read_reg(reg + 12); 1095 + 1096 + /* 1097 + * Add constant polynomial to remainder, in order to get an ecc 1098 + * sequence of 0xFFs for a buffer filled with 0xFFs. 
1099 + */ 1100 + *ecc++ = 0xef ^ (val4 & 0xFF); 1101 + *ecc++ = 0x51 ^ ((val3 >> 24) & 0xFF); 1102 + *ecc++ = 0x2e ^ ((val3 >> 16) & 0xFF); 1103 + *ecc++ = 0x09 ^ ((val3 >> 8) & 0xFF); 1104 + *ecc++ = 0xed ^ (val3 & 0xFF); 1105 + *ecc++ = 0x93 ^ ((val2 >> 24) & 0xFF); 1106 + *ecc++ = 0x9a ^ ((val2 >> 16) & 0xFF); 1107 + *ecc++ = 0xc2 ^ ((val2 >> 8) & 0xFF); 1108 + *ecc++ = 0x97 ^ (val2 & 0xFF); 1109 + *ecc++ = 0x79 ^ ((val1 >> 24) & 0xFF); 1110 + *ecc++ = 0xe5 ^ ((val1 >> 16) & 0xFF); 1111 + *ecc++ = 0x24 ^ ((val1 >> 8) & 0xFF); 1112 + *ecc++ = 0xb5 ^ (val1 & 0xFF); 1113 + } 1114 + 1115 + gpmc_ecc_used = -EINVAL; 1116 + return 0; 1117 + } 1118 + EXPORT_SYMBOL_GPL(gpmc_calculate_ecc_bch8); 1119 + 1120 + #endif /* CONFIG_ARCH_OMAP3 */
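The GPMC_ECC_CONFIG bitfield assembled in gpmc_enable_hwecc_bch() above, and the sector-count readback used by the gpmc_calculate_ecc_bch*() helpers, can be modeled in userspace. The field layout follows the patch itself; the function names here are illustrative, not kernel API.

```c
#include <assert.h>
#include <stdint.h>

/* Model of the GPMC_ECC_CONFIG value built in gpmc_enable_hwecc_bch(). */
static uint32_t bch_ecc_config(int nerrors, int dev_width, int nsectors,
			       int cs)
{
	return (1u << 16)                          /* enable BCH */
	     | (((nerrors == 8) ? 1u : 0u) << 12)  /* 8- or 4-bit mode */
	     | (0x06u << 8)                        /* wrap mode 6 */
	     | ((uint32_t)dev_width << 7)          /* bus width */
	     | ((((uint32_t)nsectors - 1) & 0x7) << 4) /* sectors - 1 */
	     | ((uint32_t)cs << 1)                 /* ECC chip select */
	     | 0x1u;                               /* enable ECC */
}

/* Reverse lookup done by gpmc_calculate_ecc_bch4/bch8() on readback. */
static int bch_nsectors(uint32_t config)
{
	return (int)((config >> 4) & 0x7) + 1;
}
```

The round trip through the 3-bit sector field also makes the 8-sector ceiling checked in gpmc_init_hwecc_bch() visible.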
-3
arch/arm/mach-orion5x/ts78xx-setup.c
··· 251 251 readsb(io_base, buf, len); 252 252 } 253 253 254 - const char *ts_nand_part_probes[] = { "cmdlinepart", NULL }; 255 - 256 254 static struct mtd_partition ts78xx_ts_nand_parts[] = { 257 255 { 258 256 .name = "mbr", ··· 275 277 static struct platform_nand_data ts78xx_ts_nand_data = { 276 278 .chip = { 277 279 .nr_chips = 1, 278 - .part_probe_types = ts_nand_part_probes, 279 280 .partitions = ts78xx_ts_nand_parts, 280 281 .nr_partitions = ARRAY_SIZE(ts78xx_ts_nand_parts), 281 282 .chip_delay = 15,
-3
arch/arm/mach-pxa/balloon3.c
··· 679 679 }, 680 680 }; 681 681 682 - static const char *balloon3_part_probes[] = { "cmdlinepart", NULL }; 683 - 684 682 struct platform_nand_data balloon3_nand_pdata = { 685 683 .chip = { 686 684 .nr_chips = 4, ··· 686 688 .nr_partitions = ARRAY_SIZE(balloon3_partition_info), 687 689 .partitions = balloon3_partition_info, 688 690 .chip_delay = 50, 689 - .part_probe_types = balloon3_part_probes, 690 691 }, 691 692 .ctrl = { 692 693 .hwcontrol = 0,
-3
arch/arm/mach-pxa/em-x270.c
··· 338 338 }, 339 339 }; 340 340 341 - static const char *em_x270_part_probes[] = { "cmdlinepart", NULL }; 342 - 343 341 struct platform_nand_data em_x270_nand_platdata = { 344 342 .chip = { 345 343 .nr_chips = 1, ··· 345 347 .nr_partitions = ARRAY_SIZE(em_x270_partition_info), 346 348 .partitions = em_x270_partition_info, 347 349 .chip_delay = 20, 348 - .part_probe_types = em_x270_part_probes, 349 350 }, 350 351 .ctrl = { 351 352 .hwcontrol = 0,
-3
arch/arm/mach-pxa/palmtx.c
··· 268 268 }, 269 269 }; 270 270 271 - static const char *palmtx_part_probes[] = { "cmdlinepart", NULL }; 272 - 273 271 struct platform_nand_data palmtx_nand_platdata = { 274 272 .chip = { 275 273 .nr_chips = 1, ··· 275 277 .nr_partitions = ARRAY_SIZE(palmtx_partition_info), 276 278 .partitions = palmtx_partition_info, 277 279 .chip_delay = 20, 278 - .part_probe_types = palmtx_part_probes, 279 280 }, 280 281 .ctrl = { 281 282 .cmd_ctrl = palmtx_nand_cmd_ctl,
+11
arch/arm/plat-omap/include/plat/gpmc.h
··· 92 92 OMAP_ECC_HAMMING_CODE_HW, /* gpmc to detect the error */ 93 93 /* 1-bit ecc: stored at beginning of spare area as romcode */ 94 94 OMAP_ECC_HAMMING_CODE_HW_ROMCODE, /* gpmc method & romcode layout */ 95 + OMAP_ECC_BCH4_CODE_HW, /* 4-bit BCH ecc code */ 96 + OMAP_ECC_BCH8_CODE_HW, /* 8-bit BCH ecc code */ 95 97 }; 96 98 97 99 /* ··· 159 157 160 158 int gpmc_enable_hwecc(int cs, int mode, int dev_width, int ecc_size); 161 159 int gpmc_calculate_ecc(int cs, const u_char *dat, u_char *ecc_code); 160 + 161 + #ifdef CONFIG_ARCH_OMAP3 162 + int gpmc_init_hwecc_bch(int cs, int nsectors, int nerrors); 163 + int gpmc_enable_hwecc_bch(int cs, int mode, int dev_width, int nsectors, 164 + int nerrors); 165 + int gpmc_calculate_ecc_bch4(int cs, const u_char *dat, u_char *ecc); 166 + int gpmc_calculate_ecc_bch8(int cs, const u_char *dat, u_char *ecc); 167 + #endif /* CONFIG_ARCH_OMAP3 */ 168 + 162 169 #endif
-3
arch/blackfin/mach-bf561/boards/acvilon.c
··· 248 248 249 249 #if defined(CONFIG_MTD_NAND_PLATFORM) || defined(CONFIG_MTD_NAND_PLATFORM_MODULE) 250 250 251 - const char *part_probes[] = { "cmdlinepart", NULL }; 252 - 253 251 static struct mtd_partition bfin_plat_nand_partitions[] = { 254 252 { 255 253 .name = "params(nand)", ··· 287 289 .chip = { 288 290 .nr_chips = 1, 289 291 .chip_delay = 30, 290 - .part_probe_types = part_probes, 291 292 .partitions = bfin_plat_nand_partitions, 292 293 .nr_partitions = ARRAY_SIZE(bfin_plat_nand_partitions), 293 294 },
-3
arch/mips/alchemy/devboards/db1200.c
··· 213 213 return __raw_readl((void __iomem *)MEM_STSTAT) & 1; 214 214 } 215 215 216 - static const char *db1200_part_probes[] = { "cmdlinepart", NULL }; 217 - 218 216 static struct mtd_partition db1200_nand_parts[] = { 219 217 { 220 218 .name = "NAND FS 0", ··· 233 235 .nr_partitions = ARRAY_SIZE(db1200_nand_parts), 234 236 .partitions = db1200_nand_parts, 235 237 .chip_delay = 20, 236 - .part_probe_types = db1200_part_probes, 237 238 }, 238 239 .ctrl = { 239 240 .dev_ready = au1200_nand_device_ready,
-3
arch/mips/alchemy/devboards/db1300.c
··· 145 145 return __raw_readl((void __iomem *)MEM_STSTAT) & 1; 146 146 } 147 147 148 - static const char *db1300_part_probes[] = { "cmdlinepart", NULL }; 149 - 150 148 static struct mtd_partition db1300_nand_parts[] = { 151 149 { 152 150 .name = "NAND FS 0", ··· 165 167 .nr_partitions = ARRAY_SIZE(db1300_nand_parts), 166 168 .partitions = db1300_nand_parts, 167 169 .chip_delay = 20, 168 - .part_probe_types = db1300_part_probes, 169 170 }, 170 171 .ctrl = { 171 172 .dev_ready = au1300_nand_device_ready,
-3
arch/mips/alchemy/devboards/db1550.c
··· 149 149 return __raw_readl((void __iomem *)MEM_STSTAT) & 1; 150 150 } 151 151 152 - static const char *db1550_part_probes[] = { "cmdlinepart", NULL }; 153 - 154 152 static struct mtd_partition db1550_nand_parts[] = { 155 153 { 156 154 .name = "NAND FS 0", ··· 169 171 .nr_partitions = ARRAY_SIZE(db1550_nand_parts), 170 172 .partitions = db1550_nand_parts, 171 173 .chip_delay = 20, 172 - .part_probe_types = db1550_part_probes, 173 174 }, 174 175 .ctrl = { 175 176 .dev_ready = au1550_nand_device_ready,
-6
arch/mips/pnx833x/common/platform.c
··· 244 244 .resource = pnx833x_sata_resources, 245 245 }; 246 246 247 - static const char *part_probes[] = { 248 - "cmdlinepart", 249 - NULL 250 - }; 251 - 252 247 static void 253 248 pnx833x_flash_nand_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl) 254 249 { ··· 263 268 .chip = { 264 269 .nr_chips = 1, 265 270 .chip_delay = 25, 266 - .part_probe_types = part_probes, 267 271 }, 268 272 .ctrl = { 269 273 .cmd_ctrl = pnx833x_flash_nand_cmd_ctrl
-1
arch/mips/rb532/devices.c
··· 293 293 rb532_nand_data.chip.nr_partitions = ARRAY_SIZE(rb532_partition_info); 294 294 rb532_nand_data.chip.partitions = rb532_partition_info; 295 295 rb532_nand_data.chip.chip_delay = NAND_CHIP_DELAY; 296 - rb532_nand_data.chip.options = NAND_NO_AUTOINCR; 297 296 } 298 297 299 298
-1
arch/sh/boards/mach-migor/setup.c
··· 188 188 .partitions = migor_nand_flash_partitions, 189 189 .nr_partitions = ARRAY_SIZE(migor_nand_flash_partitions), 190 190 .chip_delay = 20, 191 - .part_probe_types = (const char *[]) { "cmdlinepart", NULL }, 192 191 }, 193 192 .ctrl = { 194 193 .dev_ready = migor_nand_flash_ready,
+1 -1
drivers/mtd/Kconfig
··· 128 128 129 129 config MTD_OF_PARTS 130 130 tristate "OpenFirmware partitioning information support" 131 - default Y 131 + default y 132 132 depends on OF 133 133 help 134 134 This provides a partition parsing function which derives
+30 -11
drivers/mtd/bcm63xxpart.c
··· 4 4 * Copyright © 2006-2008 Florian Fainelli <florian@openwrt.org> 5 5 * Mike Albon <malbon@openwrt.org> 6 6 * Copyright © 2009-2010 Daniel Dickinson <openwrt@cshore.neomailbox.net> 7 - * Copyright © 2011 Jonas Gorski <jonas.gorski@gmail.com> 7 + * Copyright © 2011-2012 Jonas Gorski <jonas.gorski@gmail.com> 8 8 * 9 9 * This program is free software; you can redistribute it and/or modify 10 10 * it under the terms of the GNU General Public License as published by ··· 82 82 int namelen = 0; 83 83 int i; 84 84 u32 computed_crc; 85 + bool rootfs_first = false; 85 86 86 87 if (bcm63xx_detect_cfe(master)) 87 88 return -EINVAL; ··· 110 109 char *boardid = &(buf->board_id[0]); 111 110 char *tagversion = &(buf->tag_version[0]); 112 111 112 + sscanf(buf->flash_image_start, "%u", &rootfsaddr); 113 113 sscanf(buf->kernel_address, "%u", &kerneladdr); 114 114 sscanf(buf->kernel_length, "%u", &kernellen); 115 115 sscanf(buf->total_length, "%u", &totallen); ··· 119 117 tagversion, boardid); 120 118 121 119 kerneladdr = kerneladdr - BCM63XX_EXTENDED_SIZE; 122 - rootfsaddr = kerneladdr + kernellen; 120 + rootfsaddr = rootfsaddr - BCM63XX_EXTENDED_SIZE; 123 121 spareaddr = roundup(totallen, master->erasesize) + cfelen; 124 122 sparelen = master->size - spareaddr - nvramlen; 125 - rootfslen = spareaddr - rootfsaddr; 123 + 124 + if (rootfsaddr < kerneladdr) { 125 + /* default Broadcom layout */ 126 + rootfslen = kerneladdr - rootfsaddr; 127 + rootfs_first = true; 128 + } else { 129 + /* OpenWrt layout */ 130 + rootfsaddr = kerneladdr + kernellen; 131 + rootfslen = spareaddr - rootfsaddr; 132 + } 126 133 } else { 127 134 pr_warn("CFE boot tag CRC invalid (expected %08x, actual %08x)\n", 128 135 buf->header_crc, computed_crc); ··· 167 156 curpart++; 168 157 169 158 if (kernellen > 0) { 170 - parts[curpart].name = "kernel"; 171 - parts[curpart].offset = kerneladdr; 172 - parts[curpart].size = kernellen; 159 + int kernelpart = curpart; 160 + 161 + if (rootfslen > 0 && rootfs_first) 162 
+ kernelpart++; 163 + parts[kernelpart].name = "kernel"; 164 + parts[kernelpart].offset = kerneladdr; 165 + parts[kernelpart].size = kernellen; 173 166 curpart++; 174 167 } 175 168 176 169 if (rootfslen > 0) { 177 - parts[curpart].name = "rootfs"; 178 - parts[curpart].offset = rootfsaddr; 179 - parts[curpart].size = rootfslen; 180 - if (sparelen > 0) 181 - parts[curpart].size += sparelen; 170 + int rootfspart = curpart; 171 + 172 + if (kernellen > 0 && rootfs_first) 173 + rootfspart--; 174 + parts[rootfspart].name = "rootfs"; 175 + parts[rootfspart].offset = rootfsaddr; 176 + parts[rootfspart].size = rootfslen; 177 + if (sparelen > 0 && !rootfs_first) 178 + parts[rootfspart].size += sparelen; 182 179 curpart++; 183 180 } 184 181
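The index juggling in the bcm63xxpart hunk above (kernelpart++ then rootfspart--) implements a slot swap: with the default Broadcom layout the rootfs sits before the kernel in flash, so its table entry must come first. A minimal model of that placement logic, with illustrative names:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the partition-slot arithmetic in bcm63xxpart: the kernel
 * entry is processed first, but when rootfs precedes the kernel in
 * flash (default Broadcom layout), the two table slots are swapped.
 */
static void place_parts(bool rootfs_first, int first_slot,
			int *kernel_slot, int *rootfs_slot)
{
	if (rootfs_first) {
		*rootfs_slot = first_slot;
		*kernel_slot = first_slot + 1;
	} else {
		*kernel_slot = first_slot;
		*rootfs_slot = first_slot + 1;
	}
}
```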
+16 -2
drivers/mtd/chips/cfi_cmdset_0002.c
··· 317 317 318 318 if ((cfi->cfiq->EraseRegionInfo[0] & 0xffff) == 0x003f) { 319 319 cfi->cfiq->EraseRegionInfo[0] |= 0x0040; 320 - pr_warning("%s: Bad S29GL064N CFI data, adjust from 64 to 128 sectors\n", mtd->name); 320 + pr_warning("%s: Bad S29GL064N CFI data; adjust from 64 to 128 sectors\n", mtd->name); 321 321 } 322 322 } 323 323 ··· 328 328 329 329 if ((cfi->cfiq->EraseRegionInfo[1] & 0xffff) == 0x007e) { 330 330 cfi->cfiq->EraseRegionInfo[1] &= ~0x0040; 331 - pr_warning("%s: Bad S29GL032N CFI data, adjust from 127 to 63 sectors\n", mtd->name); 331 + pr_warning("%s: Bad S29GL032N CFI data; adjust from 127 to 63 sectors\n", mtd->name); 332 332 } 333 + } 334 + 335 + static void fixup_s29ns512p_sectors(struct mtd_info *mtd) 336 + { 337 + struct map_info *map = mtd->priv; 338 + struct cfi_private *cfi = map->fldrv_priv; 339 + 340 + /* 341 + * S29NS512P flash uses more than 8bits to report number of sectors, 342 + * which is not permitted by CFI. 343 + */ 344 + cfi->cfiq->EraseRegionInfo[0] = 0x020001ff; 345 + pr_warning("%s: Bad S29NS512P CFI data; adjust to 512 sectors\n", mtd->name); 333 346 } 334 347 335 348 /* Used to fix CFI-Tables of chips without Extended Query Tables */ ··· 375 362 { CFI_MFR_AMD, 0x1301, fixup_s29gl064n_sectors }, 376 363 { CFI_MFR_AMD, 0x1a00, fixup_s29gl032n_sectors }, 377 364 { CFI_MFR_AMD, 0x1a01, fixup_s29gl032n_sectors }, 365 + { CFI_MFR_AMD, 0x3f00, fixup_s29ns512p_sectors }, 378 366 { CFI_MFR_SST, 0x536a, fixup_sst38vf640x_sectorsize }, /* SST38VF6402 */ 379 367 { CFI_MFR_SST, 0x536b, fixup_sst38vf640x_sectorsize }, /* SST38VF6401 */ 380 368 { CFI_MFR_SST, 0x536c, fixup_sst38vf640x_sectorsize }, /* SST38VF6404 */
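The 0x020001ff value written by the S29NS512P fixup above encodes the erase geometry per the CFI specification: the low 16 bits are the number of erase blocks minus one, the high 16 bits are the block size in units of 256 bytes. A decode sketch (the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Decode a CFI Erase Block Region Information field:
 * D15..D0  = number of erase blocks - 1
 * D31..D16 = erase block size / 256 bytes
 */
static void cfi_decode_region(uint32_t info, unsigned *nblocks,
			      unsigned *blocksize)
{
	*nblocks = (info & 0xffff) + 1;
	*blocksize = (info >> 16) * 256;
}
```

Decoding 0x020001ff gives 512 blocks of 128 KiB, which is why the chip's true sector count cannot fit the 8-bit field it originally misreported.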
+1 -1
drivers/mtd/cmdlinepart.c
··· 70 70 /* mtdpart_setup() parses into here */ 71 71 static struct cmdline_mtd_partition *partitions; 72 72 73 - /* the command line passed to mtdpart_setupd() */ 73 + /* the command line passed to mtdpart_setup() */ 74 74 static char *cmdline; 75 75 static int cmdline_parsed = 0; 76 76
-7
drivers/mtd/devices/block2mtd.c
··· 52 52 53 53 while (pages) { 54 54 page = page_read(mapping, index); 55 - if (!page) 56 - return -ENOMEM; 57 55 if (IS_ERR(page)) 58 56 return PTR_ERR(page); 59 57 ··· 110 112 len = len - cpylen; 111 113 112 114 page = page_read(dev->blkdev->bd_inode->i_mapping, index); 113 - if (!page) 114 - return -ENOMEM; 115 115 if (IS_ERR(page)) 116 116 return PTR_ERR(page); 117 117 ··· 144 148 len = len - cpylen; 145 149 146 150 page = page_read(mapping, index); 147 - if (!page) 148 - return -ENOMEM; 149 151 if (IS_ERR(page)) 150 152 return PTR_ERR(page); 151 153 ··· 265 271 dev->mtd.flags = MTD_CAP_RAM; 266 272 dev->mtd._erase = block2mtd_erase; 267 273 dev->mtd._write = block2mtd_write; 268 - dev->mtd._writev = mtd_writev; 269 274 dev->mtd._sync = block2mtd_sync; 270 275 dev->mtd._read = block2mtd_read; 271 276 dev->mtd.priv = dev;
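The block2mtd hunks above drop the `!page` checks because page_read() reports failure via an error-encoded pointer, never NULL, so the check was dead code. The kernel's error-pointer idiom packs an errno into the top 4095 values of the address space; a userspace re-creation of the helpers (kernel names given in comments):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define MAX_ERRNO 4095

static void *err_ptr(long error)	/* kernel: ERR_PTR() */
{
	return (void *)(uintptr_t)error;
}

static int is_err(const void *ptr)	/* kernel: IS_ERR() */
{
	/* Negative errnos land in the last 4095 bytes of the space. */
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static long ptr_err(const void *ptr)	/* kernel: PTR_ERR() */
{
	return (long)(intptr_t)ptr;
}
```

One pointer return thus carries either a valid object or a precise error code, which is why a separate NULL check adds nothing.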
+27 -13
drivers/mtd/devices/docg3.c
··· 227 227 u8 data8, *dst8; 228 228 229 229 doc_dbg("doc_read_data_area(buf=%p, len=%d)\n", buf, len); 230 - cdr = len & 0x3; 230 + cdr = len & 0x1; 231 231 len4 = len - cdr; 232 232 233 233 if (first) ··· 732 732 * @len: the number of bytes to be read (must be a multiple of 4) 733 733 * @buf: the buffer to be filled in (or NULL is forget bytes) 734 734 * @first: 1 if first time read, DOC_READADDRESS should be set 735 + * @last_odd: 1 if last read ended up on an odd byte 736 + * 737 + * Reads bytes from a prepared page. There is a trickery here : if the last read 738 + * ended up on an odd offset in the 1024 bytes double page, ie. between the 2 739 + * planes, the first byte must be read apart. If a word (16bit) read was used, 740 + * the read would return the byte of plane 2 as low *and* high endian, which 741 + * will mess the read. 735 742 * 736 743 */ 737 744 static int doc_read_page_getbytes(struct docg3 *docg3, int len, u_char *buf, 738 - int first) 745 + int first, int last_odd) 739 746 { 740 - doc_read_data_area(docg3, buf, len, first); 747 + if (last_odd && len > 0) { 748 + doc_read_data_area(docg3, buf, 1, first); 749 + doc_read_data_area(docg3, buf ? 
buf + 1 : buf, len - 1, 0); 750 + } else { 751 + doc_read_data_area(docg3, buf, len, first); 752 + } 741 753 doc_delay(docg3, 2); 742 754 return len; 743 755 } ··· 862 850 u8 *buf = ops->datbuf; 863 851 size_t len, ooblen, nbdata, nboob; 864 852 u8 hwecc[DOC_ECC_BCH_SIZE], eccconf1; 853 + int max_bitflips = 0; 865 854 866 855 if (buf) 867 856 len = ops->len; ··· 889 876 ret = 0; 890 877 skip = from % DOC_LAYOUT_PAGE_SIZE; 891 878 mutex_lock(&docg3->cascade->lock); 892 - while (!ret && (len > 0 || ooblen > 0)) { 879 + while (ret >= 0 && (len > 0 || ooblen > 0)) { 893 880 calc_block_sector(from - skip, &block0, &block1, &page, &ofs, 894 881 docg3->reliable); 895 882 nbdata = min_t(size_t, len, DOC_LAYOUT_PAGE_SIZE - skip); ··· 900 887 ret = doc_read_page_ecc_init(docg3, DOC_ECC_BCH_TOTAL_BYTES); 901 888 if (ret < 0) 902 889 goto err_in_read; 903 - ret = doc_read_page_getbytes(docg3, skip, NULL, 1); 890 + ret = doc_read_page_getbytes(docg3, skip, NULL, 1, 0); 904 891 if (ret < skip) 905 892 goto err_in_read; 906 - ret = doc_read_page_getbytes(docg3, nbdata, buf, 0); 893 + ret = doc_read_page_getbytes(docg3, nbdata, buf, 0, skip % 2); 907 894 if (ret < nbdata) 908 895 goto err_in_read; 909 896 doc_read_page_getbytes(docg3, 910 897 DOC_LAYOUT_PAGE_SIZE - nbdata - skip, 911 - NULL, 0); 912 - ret = doc_read_page_getbytes(docg3, nboob, oobbuf, 0); 898 + NULL, 0, (skip + nbdata) % 2); 899 + ret = doc_read_page_getbytes(docg3, nboob, oobbuf, 0, 0); 913 900 if (ret < nboob) 914 901 goto err_in_read; 915 902 doc_read_page_getbytes(docg3, DOC_LAYOUT_OOB_SIZE - nboob, 916 - NULL, 0); 903 + NULL, 0, nboob % 2); 917 904 918 905 doc_get_bch_hw_ecc(docg3, hwecc); 919 906 eccconf1 = doc_register_readb(docg3, DOC_ECCCONF1); ··· 949 936 } 950 937 if (ret > 0) { 951 938 mtd->ecc_stats.corrected += ret; 952 - ret = -EUCLEAN; 939 + max_bitflips = max(max_bitflips, ret); 940 + ret = max_bitflips; 953 941 } 954 942 } 955 943 ··· 1018 1004 DOC_LAYOUT_PAGE_SIZE); 1019 1005 if (!ret) 1020 1006 
doc_read_page_getbytes(docg3, DOC_LAYOUT_PAGE_SIZE, 1021 - buf, 1); 1007 + buf, 1, 0); 1022 1008 buf += DOC_LAYOUT_PAGE_SIZE; 1023 1009 } 1024 1010 doc_read_page_finish(docg3); ··· 1078 1064 ret = doc_reset_seq(docg3); 1079 1065 if (!ret) 1080 1066 ret = doc_read_page_prepare(docg3, block0, block1, page, 1081 - ofs + DOC_LAYOUT_WEAR_OFFSET); 1067 + ofs + DOC_LAYOUT_WEAR_OFFSET, 0); 1082 1068 if (!ret) 1083 1069 ret = doc_read_page_getbytes(docg3, DOC_LAYOUT_WEAR_SIZE, 1084 - buf, 1); 1070 + buf, 1, 0); 1085 1071 doc_read_page_finish(docg3); 1086 1072 1087 1073 if (ret || (buf[0] != DOC_ERASE_MARK) || (buf[2] != DOC_ERASE_MARK))
+5
drivers/mtd/devices/m25p80.c
··· 639 639 { "en25q32b", INFO(0x1c3016, 0, 64 * 1024, 64, 0) }, 640 640 { "en25p64", INFO(0x1c2017, 0, 64 * 1024, 128, 0) }, 641 641 642 + /* Everspin */ 643 + { "mr25h256", CAT25_INFO( 32 * 1024, 1, 256, 2) }, 644 + 642 645 /* Intel/Numonyx -- xxxs33b */ 643 646 { "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) }, 644 647 { "320s33b", INFO(0x898912, 0, 64 * 1024, 64, 0) }, 645 648 { "640s33b", INFO(0x898913, 0, 64 * 1024, 128, 0) }, 646 649 647 650 /* Macronix */ 651 + { "mx25l2005a", INFO(0xc22012, 0, 64 * 1024, 4, SECT_4K) }, 648 652 { "mx25l4005a", INFO(0xc22013, 0, 64 * 1024, 8, SECT_4K) }, 649 653 { "mx25l8005", INFO(0xc22014, 0, 64 * 1024, 16, 0) }, 650 654 { "mx25l1606e", INFO(0xc22015, 0, 64 * 1024, 32, SECT_4K) }, ··· 732 728 { "w25q32", INFO(0xef4016, 0, 64 * 1024, 64, SECT_4K) }, 733 729 { "w25x64", INFO(0xef3017, 0, 64 * 1024, 128, SECT_4K) }, 734 730 { "w25q64", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) }, 731 + { "w25q80", INFO(0xef5014, 0, 64 * 1024, 16, SECT_4K) }, 735 732 736 733 /* Catalyst / On Semiconductor -- non-JEDEC */ 737 734 { "cat25c11", CAT25_INFO( 16, 8, 16, 1) },
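The m25p80 table rows added above (mr25h256, mx25l2005a, w25q80) each encode a chip as a JEDEC ID plus sector geometry. A small sketch of how such an entry yields the manufacturer byte and total capacity — struct and helper names here are illustrative, not the driver's actual API:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the shape of m25p80's INFO() entries; field names are illustrative. */
struct flash_entry {
	uint32_t jedec_id;     /* 3-byte JEDEC ID, e.g. 0xc22012 */
	unsigned sector_size;  /* bytes per erase sector */
	unsigned n_sectors;    /* number of erase sectors */
};

/* The new Macronix entry from the diff: mx25l2005a, 4 x 64KiB. */
static const struct flash_entry mx25l2005a = { 0xc22012, 64 * 1024, 4 };

static unsigned manufacturer(const struct flash_entry *e)
{
	return (e->jedec_id >> 16) & 0xff;   /* first JEDEC ID byte */
}

static unsigned total_bytes(const struct flash_entry *e)
{
	return e->sector_size * e->n_sectors;
}
```

So 0xc22012 decodes to manufacturer 0xc2 (Macronix) and 256 KiB (2 Mbit) of flash, matching the part name.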
+7 -7
drivers/mtd/devices/spear_smi.c
··· 990 990 goto err_clk; 991 991 } 992 992 993 - ret = clk_enable(dev->clk); 993 + ret = clk_prepare_enable(dev->clk); 994 994 if (ret) 995 - goto err_clk_enable; 995 + goto err_clk_prepare_enable; 996 996 997 997 ret = request_irq(irq, spear_smi_int_handler, 0, pdev->name, dev); 998 998 if (ret) { ··· 1020 1020 free_irq(irq, dev); 1021 1021 platform_set_drvdata(pdev, NULL); 1022 1022 err_irq: 1023 - clk_disable(dev->clk); 1024 - err_clk_enable: 1023 + clk_disable_unprepare(dev->clk); 1024 + err_clk_prepare_enable: 1025 1025 clk_put(dev->clk); 1026 1026 err_clk: 1027 1027 iounmap(dev->io_base); ··· 1074 1074 irq = platform_get_irq(pdev, 0); 1075 1075 free_irq(irq, dev); 1076 1076 1077 - clk_disable(dev->clk); 1077 + clk_disable_unprepare(dev->clk); 1078 1078 clk_put(dev->clk); 1079 1079 iounmap(dev->io_base); 1080 1080 kfree(dev); ··· 1091 1091 struct spear_smi *dev = platform_get_drvdata(pdev); 1092 1092 1093 1093 if (dev && dev->clk) 1094 - clk_disable(dev->clk); 1094 + clk_disable_unprepare(dev->clk); 1095 1095 1096 1096 return 0; 1097 1097 } ··· 1102 1102 int ret = -EPERM; 1103 1103 1104 1104 if (dev && dev->clk) 1105 - ret = clk_enable(dev->clk); 1105 + ret = clk_prepare_enable(dev->clk); 1106 1106 1107 1107 if (!ret) 1108 1108 spear_smi_hw_init(dev);
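The spear_smi hunks migrate from bare clk_enable()/clk_disable() to the combined clk_prepare_enable()/clk_disable_unprepare() helpers required by the common clock framework, where clk_prepare() (may sleep) must precede clk_enable() (atomic-safe). A user-space mock — not the kernel API — illustrating the pairing invariant the new calls preserve:

```c
#include <assert.h>

/* Toy stand-in for struct clk; the real kernel API cannot run in user space. */
struct clk {
	int prepare_count;
	int enable_count;
};

static int clk_prepare_enable(struct clk *c)
{
	c->prepare_count++;   /* clk_prepare(): sleepable part (PLL lock etc.) */
	c->enable_count++;    /* clk_enable(): fast, atomic-context-safe part */
	return 0;
}

static void clk_disable_unprepare(struct clk *c)
{
	c->enable_count--;    /* clk_disable() */
	c->prepare_count--;   /* clk_unprepare() */
}
```

Using the combined helpers keeps the two counts balanced on every error and suspend/resume path, which is exactly what the relabeled `err_clk_prepare_enable` unwind label above guarantees.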
+1 -1
drivers/mtd/lpddr/qinfo_probe.c
··· 57 57 58 58 static long lpddr_get_qinforec_pos(struct map_info *map, char *id_str) 59 59 { 60 - int qinfo_lines = sizeof(qinfo_array)/sizeof(struct qinfo_query_info); 60 + int qinfo_lines = ARRAY_SIZE(qinfo_array); 61 61 int i; 62 62 int bankwidth = map_bankwidth(map) * 8; 63 63 int major, minor;
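The qinfo_probe change replaces an open-coded `sizeof(array)/sizeof(struct ...)` division with the kernel's ARRAY_SIZE() macro. Both compute the element count, but the macro states intent and cannot silently go wrong if the element type in the divisor drifts from the array's actual type. A minimal equivalence sketch (the table contents are illustrative, not the driver's real qinfo_array):

```c
#include <assert.h>
#include <stddef.h>

/* Same definition the kernel uses, minus its compile-time array-type check. */
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

struct qinfo_query_info {
	const char *id_str;
	int offset;
};

static const struct qinfo_query_info qinfo_array[] = {
	{ "DevSizeShift",   0x27 },  /* illustrative entries only */
	{ "BufSizeShift",   0x28 },
	{ "TotalBlocksNum", 0x2b },
};
```

`ARRAY_SIZE(qinfo_array)` divides by the size of an actual element, so it stays correct even if the struct is renamed or the array's element type changes.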
+1 -1
drivers/mtd/maps/Kconfig
··· 224 224 225 225 config MTD_SCB2_FLASH 226 226 tristate "BIOS flash chip on Intel SCB2 boards" 227 - depends on X86 && MTD_JEDECPROBE 227 + depends on X86 && MTD_JEDECPROBE && PCI 228 228 help 229 229 Support for treating the BIOS flash chip on Intel SCB2 boards 230 230 as an MTD device - with this you can reprogram your BIOS.
+1 -12
drivers/mtd/maps/intel_vr_nor.c
··· 260 260 .id_table = vr_nor_pci_ids, 261 261 }; 262 262 263 - static int __init vr_nor_mtd_init(void) 264 - { 265 - return pci_register_driver(&vr_nor_pci_driver); 266 - } 267 - 268 - static void __exit vr_nor_mtd_exit(void) 269 - { 270 - pci_unregister_driver(&vr_nor_pci_driver); 271 - } 272 - 273 - module_init(vr_nor_mtd_init); 274 - module_exit(vr_nor_mtd_exit); 263 + module_pci_driver(vr_nor_pci_driver); 275 264 276 265 MODULE_AUTHOR("Andy Lowe"); 277 266 MODULE_DESCRIPTION("MTD map driver for NOR flash on Intel Vermilion Range");
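Several files in this pull (intel_vr_nor above, and pci.c, scb2_flash, cafe_nand, denali below) collapse a hand-written init/exit pair into the module_pci_driver() macro. A user-space analogue — the real macro lives in the kernel's `<linux/pci.h>` and the stub names here are invented for illustration — showing roughly what the macro generates:

```c
#include <assert.h>

/* Toy driver-core stand-ins; the real calls are pci_register_driver() etc. */
static int registered;

struct pci_driver_stub {
	const char *name;
};

static int pci_register_stub(struct pci_driver_stub *d)
{
	(void)d;
	registered = 1;
	return 0;
}

static void pci_unregister_stub(struct pci_driver_stub *d)
{
	(void)d;
	registered = 0;
}

/* Approximately what module_pci_driver(drv) expands to: an init function
 * that registers the driver and an exit function that unregisters it. */
#define module_pci_driver_stub(drv)					\
	static int drv##_init(void)					\
	{								\
		return pci_register_stub(&drv);				\
	}								\
	static void drv##_exit(void)					\
	{								\
		pci_unregister_stub(&drv);				\
	}

static struct pci_driver_stub vr_nor_pci_driver = { "vr_nor" };
module_pci_driver_stub(vr_nor_pci_driver)
```

The cleanup removes twelve lines of identical boilerplate per driver while keeping the exact module_init/module_exit behavior.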
+1 -12
drivers/mtd/maps/pci.c
··· 352 352 .id_table = mtd_pci_ids, 353 353 }; 354 354 355 - static int __init mtd_pci_maps_init(void) 356 - { 357 - return pci_register_driver(&mtd_pci_driver); 358 - } 359 - 360 - static void __exit mtd_pci_maps_exit(void) 361 - { 362 - pci_unregister_driver(&mtd_pci_driver); 363 - } 364 - 365 - module_init(mtd_pci_maps_init); 366 - module_exit(mtd_pci_maps_exit); 355 + module_pci_driver(mtd_pci_driver); 367 356 368 357 MODULE_LICENSE("GPL"); 369 358 MODULE_AUTHOR("Russell King <rmk@arm.linux.org.uk>");
+1 -14
drivers/mtd/maps/scb2_flash.c
··· 234 234 .remove = __devexit_p(scb2_flash_remove), 235 235 }; 236 236 237 - static int __init 238 - scb2_flash_init(void) 239 - { 240 - return pci_register_driver(&scb2_flash_driver); 241 - } 242 - 243 - static void __exit 244 - scb2_flash_exit(void) 245 - { 246 - pci_unregister_driver(&scb2_flash_driver); 247 - } 248 - 249 - module_init(scb2_flash_init); 250 - module_exit(scb2_flash_exit); 237 + module_pci_driver(scb2_flash_driver); 251 238 252 239 MODULE_LICENSE("GPL"); 253 240 MODULE_AUTHOR("Tim Hockin <thockin@sun.com>");
+1 -1
drivers/mtd/maps/wr_sbc82xx_flash.c
··· 59 59 } 60 60 }; 61 61 62 - static const char *part_probes[] __initdata = {"cmdlinepart", "RedBoot", NULL}; 62 + static const char *part_probes[] __initconst = {"cmdlinepart", "RedBoot", NULL}; 63 63 64 64 #define init_sbc82xx_one_flash(map, br, or) \ 65 65 do { \
+56 -1
drivers/mtd/mtdcore.c
··· 250 250 } 251 251 static DEVICE_ATTR(name, S_IRUGO, mtd_name_show, NULL); 252 252 253 + static ssize_t mtd_ecc_strength_show(struct device *dev, 254 + struct device_attribute *attr, char *buf) 255 + { 256 + struct mtd_info *mtd = dev_get_drvdata(dev); 257 + 258 + return snprintf(buf, PAGE_SIZE, "%u\n", mtd->ecc_strength); 259 + } 260 + static DEVICE_ATTR(ecc_strength, S_IRUGO, mtd_ecc_strength_show, NULL); 261 + 262 + static ssize_t mtd_bitflip_threshold_show(struct device *dev, 263 + struct device_attribute *attr, 264 + char *buf) 265 + { 266 + struct mtd_info *mtd = dev_get_drvdata(dev); 267 + 268 + return snprintf(buf, PAGE_SIZE, "%u\n", mtd->bitflip_threshold); 269 + } 270 + 271 + static ssize_t mtd_bitflip_threshold_store(struct device *dev, 272 + struct device_attribute *attr, 273 + const char *buf, size_t count) 274 + { 275 + struct mtd_info *mtd = dev_get_drvdata(dev); 276 + unsigned int bitflip_threshold; 277 + int retval; 278 + 279 + retval = kstrtouint(buf, 0, &bitflip_threshold); 280 + if (retval) 281 + return retval; 282 + 283 + mtd->bitflip_threshold = bitflip_threshold; 284 + return count; 285 + } 286 + static DEVICE_ATTR(bitflip_threshold, S_IRUGO | S_IWUSR, 287 + mtd_bitflip_threshold_show, 288 + mtd_bitflip_threshold_store); 289 + 253 290 static struct attribute *mtd_attrs[] = { 254 291 &dev_attr_type.attr, 255 292 &dev_attr_flags.attr, ··· 297 260 &dev_attr_oobsize.attr, 298 261 &dev_attr_numeraseregions.attr, 299 262 &dev_attr_name.attr, 263 + &dev_attr_ecc_strength.attr, 264 + &dev_attr_bitflip_threshold.attr, 300 265 NULL, 301 266 }; 302 267 ··· 360 321 361 322 mtd->index = i; 362 323 mtd->usecount = 0; 324 + 325 + /* default value if not set by driver */ 326 + if (mtd->bitflip_threshold == 0) 327 + mtd->bitflip_threshold = mtd->ecc_strength; 363 328 364 329 if (is_power_of_2(mtd->erasesize)) 365 330 mtd->erasesize_shift = ffs(mtd->erasesize) - 1; ··· 800 757 int mtd_read(struct mtd_info *mtd, loff_t from, size_t len, size_t *retlen, 801 
758 u_char *buf) 802 759 { 760 + int ret_code; 803 761 *retlen = 0; 804 762 if (from < 0 || from > mtd->size || len > mtd->size - from) 805 763 return -EINVAL; 806 764 if (!len) 807 765 return 0; 808 - return mtd->_read(mtd, from, len, retlen, buf); 766 + 767 + /* 768 + * In the absence of an error, drivers return a non-negative integer 769 + * representing the maximum number of bitflips that were corrected on 770 + * any one ecc region (if applicable; zero otherwise). 771 + */ 772 + ret_code = mtd->_read(mtd, from, len, retlen, buf); 773 + if (unlikely(ret_code < 0)) 774 + return ret_code; 775 + if (mtd->ecc_strength == 0) 776 + return 0; /* device lacks ecc */ 777 + return ret_code >= mtd->bitflip_threshold ? -EUCLEAN : 0; 809 778 } 810 779 EXPORT_SYMBOL_GPL(mtd_read); 811 780
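The mtdcore.c hunk above is the heart of the bitflip_threshold feature: drivers now return the maximum number of bitflips corrected in any one ecc region, and mtd_read() converts that into -EUCLEAN only when it reaches the threshold. A reduced sketch of just that decision — the struct and errno value are simplifications, not the full kernel path:

```c
#include <assert.h>

#define EUCLEAN 117  /* illustrative errno value */

struct mtd_stub {
	unsigned ecc_strength;       /* max correctable bitflips per ecc step */
	unsigned bitflip_threshold;  /* defaults to ecc_strength if unset */
};

/* Mirrors the new tail of mtd_read(): ret_code is the driver's
 * max-bitflips-per-ecc-region, or a negative error code. */
static int mtd_read_verdict(const struct mtd_stub *mtd, int ret_code)
{
	if (ret_code < 0)
		return ret_code;               /* hard error passed through */
	if (mtd->ecc_strength == 0)
		return 0;                      /* device lacks ecc */
	return ret_code >= (int)mtd->bitflip_threshold ? -EUCLEAN : 0;
}
```

With the default threshold equal to ecc_strength, a single corrected bitflip no longer triggers -EUCLEAN; only reads that come close to exhausting the ECC's correction capability do, which is the semantic change the sysfs ABI text in this pull documents.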
+8 -6
drivers/mtd/mtdpart.c
··· 67 67 stats = part->master->ecc_stats; 68 68 res = part->master->_read(part->master, from + part->offset, len, 69 69 retlen, buf); 70 - if (unlikely(res)) { 71 - if (mtd_is_bitflip(res)) 72 - mtd->ecc_stats.corrected += part->master->ecc_stats.corrected - stats.corrected; 73 - if (mtd_is_eccerr(res)) 74 - mtd->ecc_stats.failed += part->master->ecc_stats.failed - stats.failed; 75 - } 70 + if (unlikely(mtd_is_eccerr(res))) 71 + mtd->ecc_stats.failed += 72 + part->master->ecc_stats.failed - stats.failed; 73 + else 74 + mtd->ecc_stats.corrected += 75 + part->master->ecc_stats.corrected - stats.corrected; 76 76 return res; 77 77 } 78 78 ··· 517 517 518 518 slave->mtd.ecclayout = master->ecclayout; 519 519 slave->mtd.ecc_strength = master->ecc_strength; 520 + slave->mtd.bitflip_threshold = master->bitflip_threshold; 521 + 520 522 if (master->_block_isbad) { 521 523 uint64_t offs = 0; 522 524
+41 -1
drivers/mtd/nand/Kconfig
··· 115 115 Support for NAND flash on Texas Instruments OMAP2, OMAP3 and OMAP4 116 116 platforms. 117 117 118 + config MTD_NAND_OMAP_BCH 119 + depends on MTD_NAND && MTD_NAND_OMAP2 && ARCH_OMAP3 120 + bool "Enable support for hardware BCH error correction" 121 + default n 122 + select BCH 123 + select BCH_CONST_PARAMS 124 + help 125 + Support for hardware BCH error correction. 126 + 127 + choice 128 + prompt "BCH error correction capability" 129 + depends on MTD_NAND_OMAP_BCH 130 + 131 + config MTD_NAND_OMAP_BCH8 132 + bool "8 bits / 512 bytes (recommended)" 133 + help 134 + Support correcting up to 8 bitflips per 512-byte block. 135 + This will use 13 bytes of spare area per 512 bytes of page data. 136 + This is the recommended mode, as 4-bit mode does not work 137 + on some OMAP3 revisions, due to a hardware bug. 138 + 139 + config MTD_NAND_OMAP_BCH4 140 + bool "4 bits / 512 bytes" 141 + help 142 + Support correcting up to 4 bitflips per 512-byte block. 143 + This will use 7 bytes of spare area per 512 bytes of page data. 144 + Note that this mode does not work on some OMAP3 revisions, due to a 145 + hardware bug. Please check your OMAP datasheet before selecting this 146 + mode. 147 + 148 + endchoice 149 + 150 + if MTD_NAND_OMAP_BCH 151 + config BCH_CONST_M 152 + default 13 153 + config BCH_CONST_T 154 + default 4 if MTD_NAND_OMAP_BCH4 155 + default 8 if MTD_NAND_OMAP_BCH8 156 + endif 157 + 118 158 config MTD_NAND_IDS 119 159 tristate 120 160 ··· 480 440 481 441 config MTD_NAND_GPMI_NAND 482 442 bool "GPMI NAND Flash Controller driver" 483 - depends on MTD_NAND && (SOC_IMX23 || SOC_IMX28) 443 + depends on MTD_NAND && (SOC_IMX23 || SOC_IMX28 || SOC_IMX6Q) 484 444 help 485 445 Enables NAND Flash support for IMX23 or IMX28. 486 446 The GPMI controller is very powerful, with the help of BCH
+2 -2
drivers/mtd/nand/alauda.c
··· 414 414 } 415 415 err = 0; 416 416 if (corrected) 417 - err = -EUCLEAN; 417 + err = 1; /* return max_bitflips per ecc step */ 418 418 if (uncorrected) 419 419 err = -EBADMSG; 420 420 out: ··· 446 446 } 447 447 err = 0; 448 448 if (corrected) 449 - err = -EUCLEAN; 449 + err = 1; /* return max_bitflips per ecc step */ 450 450 if (uncorrected) 451 451 err = -EBADMSG; 452 452 return err;
+9 -5
drivers/mtd/nand/atmel_nand.c
··· 324 324 * mtd: mtd info structure 325 325 * chip: nand chip info structure 326 326 * buf: buffer to store read data 327 + * oob_required: caller expects OOB data read to chip->oob_poi 327 328 */ 328 - static int atmel_nand_read_page(struct mtd_info *mtd, 329 - struct nand_chip *chip, uint8_t *buf, int page) 329 + static int atmel_nand_read_page(struct mtd_info *mtd, struct nand_chip *chip, 330 + uint8_t *buf, int oob_required, int page) 330 331 { 331 332 int eccsize = chip->ecc.size; 332 333 int eccbytes = chip->ecc.bytes; ··· 336 335 uint8_t *oob = chip->oob_poi; 337 336 uint8_t *ecc_pos; 338 337 int stat; 338 + unsigned int max_bitflips = 0; 339 339 340 340 /* 341 341 * Errata: ALE is incorrectly wired up to the ECC controller ··· 373 371 /* check if there's an error */ 374 372 stat = chip->ecc.correct(mtd, p, oob, NULL); 375 373 376 - if (stat < 0) 374 + if (stat < 0) { 377 375 mtd->ecc_stats.failed++; 378 - else 376 + } else { 379 377 mtd->ecc_stats.corrected += stat; 378 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 379 + } 380 380 381 381 /* get back to oob start (end of page) */ 382 382 chip->cmdfunc(mtd, NAND_CMD_RNDOUT, mtd->writesize, -1); ··· 386 382 /* read the oob */ 387 383 chip->read_buf(mtd, oob, mtd->oobsize); 388 384 389 - return 0; 385 + return max_bitflips; 390 386 } 391 387 392 388 /*
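Under the new return-code contract, read_page handlers like atmel_nand_read_page() must report the worst-corrected ecc step instead of a blanket 0, while still bumping the ecc_stats counters. A reduced sketch of that accumulation loop — here the per-step results are fed in directly rather than coming from chip->ecc.correct():

```c
#include <assert.h>

/* stats[i] is what chip->ecc.correct() would return for ecc step i:
 * >= 0 bitflips corrected, or < 0 for an uncorrectable step. */
static int page_max_bitflips(const int *stats, int eccsteps,
			     unsigned *failed, unsigned *corrected)
{
	unsigned max_bitflips = 0;
	int i;

	for (i = 0; i < eccsteps; i++) {
		if (stats[i] < 0) {
			(*failed)++;              /* mtd->ecc_stats.failed++ */
		} else {
			*corrected += stats[i];   /* ecc_stats.corrected += stat */
			if ((unsigned)stats[i] > max_bitflips)
				max_bitflips = stats[i];
		}
	}
	return (int)max_bitflips;
}
```

Returning the per-step maximum (rather than the sum) is what lets the mtd core compare the result directly against bitflip_threshold, which is itself expressed per ecc step.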
-2
drivers/mtd/nand/au1550nd.c
··· 508 508 this->chip_delay = 30; 509 509 this->ecc.mode = NAND_ECC_SOFT; 510 510 511 - this->options = NAND_NO_AUTOINCR; 512 - 513 511 if (pd->devwidth) 514 512 this->options |= NAND_BUSWIDTH_16; 515 513
+9 -5
drivers/mtd/nand/bcm_umi_bch.c
··· 22 22 23 23 /* ---- Private Function Prototypes -------------------------------------- */ 24 24 static int bcm_umi_bch_read_page_hwecc(struct mtd_info *mtd, 25 - struct nand_chip *chip, uint8_t *buf, int page); 25 + struct nand_chip *chip, uint8_t *buf, int oob_required, int page); 26 26 static void bcm_umi_bch_write_page_hwecc(struct mtd_info *mtd, 27 - struct nand_chip *chip, const uint8_t *buf); 27 + struct nand_chip *chip, const uint8_t *buf, int oob_required); 28 28 29 29 /* ---- Private Variables ------------------------------------------------ */ 30 30 ··· 103 103 * @mtd: mtd info structure 104 104 * @chip: nand chip info structure 105 105 * @buf: buffer to store read data 106 + * @oob_required: caller expects OOB data read to chip->oob_poi 106 107 * 107 108 ***************************************************************************/ 108 109 static int bcm_umi_bch_read_page_hwecc(struct mtd_info *mtd, 109 110 struct nand_chip *chip, uint8_t * buf, 110 - int page) 111 + int oob_required, int page) 111 112 { 112 113 int sectorIdx = 0; 113 114 int eccsize = chip->ecc.size; ··· 117 116 uint8_t eccCalc[NAND_ECC_NUM_BYTES]; 118 117 int sectorOobSize = mtd->oobsize / eccsteps; 119 118 int stat; 119 + unsigned int max_bitflips = 0; 120 120 121 121 for (sectorIdx = 0; sectorIdx < eccsteps; 122 122 sectorIdx++, datap += eccsize) { ··· 179 177 } 180 178 #endif 181 179 mtd->ecc_stats.corrected += stat; 180 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 182 181 } 183 182 } 184 - return 0; 183 + return max_bitflips; 185 184 } 186 185 187 186 /**************************************************************************** ··· 191 188 * @mtd: mtd info structure 192 189 * @chip: nand chip info structure 193 190 * @buf: data buffer 191 + * @oob_required: must write chip->oob_poi to OOB 194 192 * 195 193 ***************************************************************************/ 196 194 static void bcm_umi_bch_write_page_hwecc(struct mtd_info *mtd, 197 - struct 
nand_chip *chip, const uint8_t *buf) 195 + struct nand_chip *chip, const uint8_t *buf, int oob_required) 198 196 { 199 197 int sectorIdx = 0; 200 198 int eccsize = chip->ecc.size;
+2 -7
drivers/mtd/nand/bcm_umi_nand.c
··· 341 341 * for MLC parts which may have permanently stuck bits. 342 342 */ 343 343 struct nand_chip *chip = mtd->priv; 344 - int ret = chip->ecc.read_page(mtd, chip, readbackbuf, 0); 344 + int ret = chip->ecc.read_page(mtd, chip, readbackbuf, 0, 0); 345 345 if (ret < 0) 346 346 return -EFAULT; 347 347 else { ··· 476 476 this->badblock_pattern = &largepage_bbt; 477 477 } 478 478 479 - /* 480 - * FIXME: ecc strength value of 6 bits per 512 bytes of data is a 481 - * conservative guess, given 13 ecc bytes and using bch alg. 482 - * (Assume Galois field order m=15 to allow a margin of error.) 483 - */ 484 - this->ecc.strength = 6; 479 + this->ecc.strength = 8; 485 480 486 481 #endif 487 482
+2 -2
drivers/mtd/nand/bf5xx_nand.c
··· 558 558 } 559 559 560 560 static int bf5xx_nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 561 - uint8_t *buf, int page) 561 + uint8_t *buf, int oob_required, int page) 562 562 { 563 563 bf5xx_nand_read_buf(mtd, buf, mtd->writesize); 564 564 bf5xx_nand_read_buf(mtd, chip->oob_poi, mtd->oobsize); ··· 567 567 } 568 568 569 569 static void bf5xx_nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 570 - const uint8_t *buf) 570 + const uint8_t *buf, int oob_required) 571 571 { 572 572 bf5xx_nand_write_buf(mtd, buf, mtd->writesize); 573 573 bf5xx_nand_write_buf(mtd, chip->oob_poi, mtd->oobsize);
+15 -20
drivers/mtd/nand/cafe_nand.c
··· 364 364 365 365 /* Don't use -- use nand_read_oob_std for now */ 366 366 static int cafe_nand_read_oob(struct mtd_info *mtd, struct nand_chip *chip, 367 - int page, int sndcmd) 367 + int page) 368 368 { 369 369 chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 370 370 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 371 - return 1; 371 + return 0; 372 372 } 373 373 /** 374 374 * cafe_nand_read_page_syndrome - [REPLACEABLE] hardware ecc syndrome based page read 375 375 * @mtd: mtd info structure 376 376 * @chip: nand chip info structure 377 377 * @buf: buffer to store read data 378 + * @oob_required: caller expects OOB data read to chip->oob_poi 378 379 * 379 380 * The hw generator calculates the error syndrome automatically. Therefor 380 381 * we need a special oob layout and handling. 381 382 */ 382 383 static int cafe_nand_read_page(struct mtd_info *mtd, struct nand_chip *chip, 383 - uint8_t *buf, int page) 384 + uint8_t *buf, int oob_required, int page) 384 385 { 385 386 struct cafe_priv *cafe = mtd->priv; 387 + unsigned int max_bitflips = 0; 386 388 387 389 cafe_dev_dbg(&cafe->pdev->dev, "ECC result %08x SYN1,2 %08x\n", 388 390 cafe_readl(cafe, NAND_ECC_RESULT), ··· 451 449 } else { 452 450 dev_dbg(&cafe->pdev->dev, "Corrected %d symbol errors\n", n); 453 451 mtd->ecc_stats.corrected += n; 452 + max_bitflips = max_t(unsigned int, max_bitflips, n); 454 453 } 455 454 } 456 455 457 - return 0; 456 + return max_bitflips; 458 457 } 459 458 460 459 static struct nand_ecclayout cafe_oobinfo_2048 = { ··· 521 518 522 519 523 520 static void cafe_nand_write_page_lowlevel(struct mtd_info *mtd, 524 - struct nand_chip *chip, const uint8_t *buf) 521 + struct nand_chip *chip, 522 + const uint8_t *buf, int oob_required) 525 523 { 526 524 struct cafe_priv *cafe = mtd->priv; 527 525 ··· 534 530 } 535 531 536 532 static int cafe_nand_write_page(struct mtd_info *mtd, struct nand_chip *chip, 537 - const uint8_t *buf, int page, int cached, int raw) 533 + const uint8_t *buf, int 
oob_required, int page, 534 + int cached, int raw) 538 535 { 539 536 int status; 540 537 541 538 chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); 542 539 543 540 if (unlikely(raw)) 544 - chip->ecc.write_page_raw(mtd, chip, buf); 541 + chip->ecc.write_page_raw(mtd, chip, buf, oob_required); 545 542 else 546 - chip->ecc.write_page(mtd, chip, buf); 543 + chip->ecc.write_page(mtd, chip, buf, oob_required); 547 544 548 545 /* 549 546 * Cached progamming disabled for now, Not sure if its worth the ··· 690 685 691 686 /* Enable the following for a flash based bad block table */ 692 687 cafe->nand.bbt_options = NAND_BBT_USE_FLASH; 693 - cafe->nand.options = NAND_NO_AUTOINCR | NAND_OWN_BUFFERS; 688 + cafe->nand.options = NAND_OWN_BUFFERS; 694 689 695 690 if (skipbbt) { 696 691 cafe->nand.options |= NAND_SKIP_BBTSCAN; ··· 893 888 .resume = cafe_nand_resume, 894 889 }; 895 890 896 - static int __init cafe_nand_init(void) 897 - { 898 - return pci_register_driver(&cafe_nand_pci_driver); 899 - } 900 - 901 - static void __exit cafe_nand_exit(void) 902 - { 903 - pci_unregister_driver(&cafe_nand_pci_driver); 904 - } 905 - module_init(cafe_nand_init); 906 - module_exit(cafe_nand_exit); 891 + module_pci_driver(cafe_nand_pci_driver); 907 892 908 893 MODULE_LICENSE("GPL"); 909 894 MODULE_AUTHOR("David Woodhouse <dwmw2@infradead.org>");
-1
drivers/mtd/nand/cs553x_nand.c
··· 240 240 241 241 /* Enable the following for a flash based bad block table */ 242 242 this->bbt_options = NAND_BBT_USE_FLASH; 243 - this->options = NAND_NO_AUTOINCR; 244 243 245 244 /* Scan to find existence of the device */ 246 245 if (nand_scan(new_mtd, 1)) {
+14 -24
drivers/mtd/nand/denali.c
··· 924 924 #define ECC_LAST_ERR(x) ((x) & ERR_CORRECTION_INFO__LAST_ERR_INFO) 925 925 926 926 static bool handle_ecc(struct denali_nand_info *denali, uint8_t *buf, 927 - uint32_t irq_status) 927 + uint32_t irq_status, unsigned int *max_bitflips) 928 928 { 929 929 bool check_erased_page = false; 930 + unsigned int bitflips = 0; 930 931 931 932 if (irq_status & INTR_STATUS__ECC_ERR) { 932 933 /* read the ECC errors. we'll ignore them for now */ ··· 966 965 /* correct the ECC error */ 967 966 buf[offset] ^= err_correction_value; 968 967 denali->mtd.ecc_stats.corrected++; 968 + bitflips++; 969 969 } 970 970 } else { 971 971 /* if the error is not correctable, need to ··· 986 984 clear_interrupts(denali); 987 985 denali_set_intr_modes(denali, true); 988 986 } 987 + *max_bitflips = bitflips; 989 988 return check_erased_page; 990 989 } 991 990 ··· 1087 1084 * by write_page above. 1088 1085 * */ 1089 1086 static void denali_write_page(struct mtd_info *mtd, struct nand_chip *chip, 1090 - const uint8_t *buf) 1087 + const uint8_t *buf, int oob_required) 1091 1088 { 1092 1089 /* for regular page writes, we let HW handle all the ECC 1093 1090 * data written to the device. */ ··· 1099 1096 * write_page() function above. 1100 1097 */ 1101 1098 static void denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 1102 - const uint8_t *buf) 1099 + const uint8_t *buf, int oob_required) 1103 1100 { 1104 1101 /* for raw page writes, we want to disable ECC and simply write 1105 1102 whatever data is in the buffer. */ ··· 1113 1110 } 1114 1111 1115 1112 static int denali_read_oob(struct mtd_info *mtd, struct nand_chip *chip, 1116 - int page, int sndcmd) 1113 + int page) 1117 1114 { 1118 1115 read_oob_data(mtd, chip->oob_poi, page); 1119 1116 1120 - return 0; /* notify NAND core to send command to 1121 - NAND device. 
*/ 1117 + return 0; 1122 1118 } 1123 1119 1124 1120 static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip, 1125 - uint8_t *buf, int page) 1121 + uint8_t *buf, int oob_required, int page) 1126 1122 { 1123 + unsigned int max_bitflips; 1127 1124 struct denali_nand_info *denali = mtd_to_denali(mtd); 1128 1125 1129 1126 dma_addr_t addr = denali->buf.dma_buf; ··· 1156 1153 1157 1154 memcpy(buf, denali->buf.buf, mtd->writesize); 1158 1155 1159 - check_erased_page = handle_ecc(denali, buf, irq_status); 1156 + check_erased_page = handle_ecc(denali, buf, irq_status, &max_bitflips); 1160 1157 denali_enable_dma(denali, false); 1161 1158 1162 1159 if (check_erased_page) { ··· 1170 1167 denali->mtd.ecc_stats.failed++; 1171 1168 } 1172 1169 } 1173 - return 0; 1170 + return max_bitflips; 1174 1171 } 1175 1172 1176 1173 static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 1177 - uint8_t *buf, int page) 1174 + uint8_t *buf, int oob_required, int page) 1178 1175 { 1179 1176 struct denali_nand_info *denali = mtd_to_denali(mtd); 1180 1177 ··· 1705 1702 .remove = denali_pci_remove, 1706 1703 }; 1707 1704 1708 - static int __devinit denali_init(void) 1709 - { 1710 - printk(KERN_INFO "Spectra MTD driver\n"); 1711 - return pci_register_driver(&denali_pci_driver); 1712 - } 1713 - 1714 - /* Free memory */ 1715 - static void __devexit denali_exit(void) 1716 - { 1717 - pci_unregister_driver(&denali_pci_driver); 1718 - } 1719 - 1720 - module_init(denali_init); 1721 - module_exit(denali_exit); 1705 + module_pci_driver(denali_pci_driver);
+11 -11
drivers/mtd/nand/docg4.c
··· 720 720 struct docg4_priv *doc = nand->priv; 721 721 void __iomem *docptr = doc->virtadr; 722 722 uint16_t status, edc_err, *buf16; 723 + int bits_corrected = 0; 723 724 724 725 dev_dbg(doc->dev, "%s: page %08x\n", __func__, page); 725 726 ··· 773 772 774 773 /* If bitflips are reported, attempt to correct with ecc */ 775 774 if (edc_err & DOC_ECCCONF1_BCH_SYNDROM_ERR) { 776 - int bits_corrected = correct_data(mtd, buf, page); 775 + bits_corrected = correct_data(mtd, buf, page); 777 776 if (bits_corrected == -EBADMSG) 778 777 mtd->ecc_stats.failed++; 779 778 else ··· 782 781 } 783 782 784 783 writew(0, docptr + DOC_DATAEND); 785 - return 0; 784 + return bits_corrected; 786 785 } 787 786 788 787 789 788 static int docg4_read_page_raw(struct mtd_info *mtd, struct nand_chip *nand, 790 - uint8_t *buf, int page) 789 + uint8_t *buf, int oob_required, int page) 791 790 { 792 791 return read_page(mtd, nand, buf, page, false); 793 792 } 794 793 795 794 static int docg4_read_page(struct mtd_info *mtd, struct nand_chip *nand, 796 - uint8_t *buf, int page) 795 + uint8_t *buf, int oob_required, int page) 797 796 { 798 797 return read_page(mtd, nand, buf, page, true); 799 798 } 800 799 801 800 static int docg4_read_oob(struct mtd_info *mtd, struct nand_chip *nand, 802 - int page, int sndcmd) 801 + int page) 803 802 { 804 803 struct docg4_priv *doc = nand->priv; 805 804 void __iomem *docptr = doc->virtadr; ··· 953 952 } 954 953 955 954 static void docg4_write_page_raw(struct mtd_info *mtd, struct nand_chip *nand, 956 - const uint8_t *buf) 955 + const uint8_t *buf, int oob_required) 957 956 { 958 957 return write_page(mtd, nand, buf, false); 959 958 } 960 959 961 960 static void docg4_write_page(struct mtd_info *mtd, struct nand_chip *nand, 962 - const uint8_t *buf) 961 + const uint8_t *buf, int oob_required) 963 962 { 964 963 return write_page(mtd, nand, buf, true); 965 964 } ··· 1003 1002 return -ENOMEM; 1004 1003 1005 1004 read_page_prologue(mtd, g4_addr); 1006 - status = 
docg4_read_page(mtd, nand, buf, DOCG4_FACTORY_BBT_PAGE); 1005 + status = docg4_read_page(mtd, nand, buf, 0, DOCG4_FACTORY_BBT_PAGE); 1007 1006 if (status) 1008 1007 goto exit; 1009 1008 ··· 1080 1079 1081 1080 /* write first page of block */ 1082 1081 write_page_prologue(mtd, g4_addr); 1083 - docg4_write_page(mtd, nand, buf); 1082 + docg4_write_page(mtd, nand, buf, 1); 1084 1083 ret = pageprog(mtd); 1085 1084 if (!ret) 1086 1085 mtd->ecc_stats.badblocks++; ··· 1193 1192 nand->ecc.prepad = 8; 1194 1193 nand->ecc.bytes = 8; 1195 1194 nand->ecc.strength = DOCG4_T; 1196 - nand->options = 1197 - NAND_BUSWIDTH_16 | NAND_NO_SUBPAGE_WRITE | NAND_NO_AUTOINCR; 1195 + nand->options = NAND_BUSWIDTH_16 | NAND_NO_SUBPAGE_WRITE; 1198 1196 nand->IO_ADDR_R = nand->IO_ADDR_W = doc->virtadr + DOC_IOSPACE_DATA; 1199 1197 nand->controller = &nand->hwcontrol; 1200 1198 spin_lock_init(&nand->controller->lock);
+21 -16
drivers/mtd/nand/fsl_elbc_nand.c
··· 75 75 unsigned int use_mdr; /* Non zero if the MDR is to be set */ 76 76 unsigned int oob; /* Non zero if operating on OOB data */ 77 77 unsigned int counter; /* counter for the initializations */ 78 + unsigned int max_bitflips; /* Saved during READ0 cmd */ 78 79 }; 79 80 80 81 /* These map to the positions used by the FCM hardware ECC generator */ ··· 254 253 if (chip->ecc.mode != NAND_ECC_HW) 255 254 return 0; 256 255 256 + elbc_fcm_ctrl->max_bitflips = 0; 257 + 257 258 if (elbc_fcm_ctrl->read_bytes == mtd->writesize + mtd->oobsize) { 258 259 uint32_t lteccr = in_be32(&lbc->lteccr); 259 260 /* ··· 265 262 * bits 28-31 are uncorrectable errors, marked elsewhere. 266 263 * for small page nand only 1 bit is used. 267 264 * if the ELBC doesn't have the lteccr register it reads 0 265 + * FIXME: 4 bits can be corrected on NANDs with 2k pages, so 266 + * count the number of sub-pages with bitflips and update 267 + * ecc_stats.corrected accordingly. 268 268 */ 269 269 if (lteccr & 0x000F000F) 270 270 out_be32(&lbc->lteccr, 0x000F000F); /* clear lteccr */ 271 - if (lteccr & 0x000F0000) 271 + if (lteccr & 0x000F0000) { 272 272 mtd->ecc_stats.corrected++; 273 + elbc_fcm_ctrl->max_bitflips = 1; 274 + } 273 275 } 274 276 275 277 return 0; ··· 746 738 return 0; 747 739 } 748 740 749 - static int fsl_elbc_read_page(struct mtd_info *mtd, 750 - struct nand_chip *chip, 751 - uint8_t *buf, 752 - int page) 741 + static int fsl_elbc_read_page(struct mtd_info *mtd, struct nand_chip *chip, 742 + uint8_t *buf, int oob_required, int page) 753 743 { 744 + struct fsl_elbc_mtd *priv = chip->priv; 745 + struct fsl_lbc_ctrl *ctrl = priv->ctrl; 746 + struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = ctrl->nand; 747 + 754 748 fsl_elbc_read_buf(mtd, buf, mtd->writesize); 755 - fsl_elbc_read_buf(mtd, chip->oob_poi, mtd->oobsize); 749 + if (oob_required) 750 + fsl_elbc_read_buf(mtd, chip->oob_poi, mtd->oobsize); 756 751 757 752 if (fsl_elbc_wait(mtd, chip) & NAND_STATUS_FAIL) 758 753 
mtd->ecc_stats.failed++; 759 754 760 - return 0; 755 + return elbc_fcm_ctrl->max_bitflips; 761 756 } 762 757 763 758 /* ECC will be calculated automatically, and errors will be detected in 764 759 * waitfunc. 765 760 */ 766 - static void fsl_elbc_write_page(struct mtd_info *mtd, 767 - struct nand_chip *chip, 768 - const uint8_t *buf) 761 + static void fsl_elbc_write_page(struct mtd_info *mtd, struct nand_chip *chip, 762 + const uint8_t *buf, int oob_required) 769 763 { 770 764 fsl_elbc_write_buf(mtd, buf, mtd->writesize); 771 765 fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize); ··· 805 795 chip->bbt_md = &bbt_mirror_descr; 806 796 807 797 /* set up nand options */ 808 - chip->options = NAND_NO_READRDY | NAND_NO_AUTOINCR; 798 + chip->options = NAND_NO_READRDY; 809 799 chip->bbt_options = NAND_BBT_USE_FLASH; 810 800 811 801 chip->controller = &elbc_fcm_ctrl->controller; ··· 824 814 chip->ecc.size = 512; 825 815 chip->ecc.bytes = 3; 826 816 chip->ecc.strength = 1; 827 - /* 828 - * FIXME: can hardware ecc correct 4 bitflips if page size is 829 - * 2k? Then does hardware report number of corrections for this 830 - * case? If so, ecc_stats reporting needs to be fixed as well. 831 - */ 832 817 } else { 833 818 /* otherwise fall back to default software ECC */ 834 819 chip->ecc.mode = NAND_ECC_SOFT;
+32 -15
drivers/mtd/nand/fsl_ifc_nand.c
···
 	unsigned int oob;	/* Non zero if operating on OOB data */
 	unsigned int eccread;	/* Non zero for a full-page ECC read */
 	unsigned int counter;	/* counter for the initializations */
+	unsigned int max_bitflips; /* Saved during READ0 cmd */
 };

 static struct fsl_ifc_nand_ctrl *ifc_nand_ctrl;
···
 	if (ctrl->nand_stat & IFC_NAND_EVTER_STAT_WPER)
 		dev_err(priv->dev, "NAND Flash Write Protect Error\n");

+	nctrl->max_bitflips = 0;
+
 	if (nctrl->eccread) {
 		int errors;
 		int bufnum = nctrl->page & priv->bufnum_mask;
···
 		}

 		mtd->ecc_stats.corrected += errors;
+		nctrl->max_bitflips = max_t(unsigned int,
+					    nctrl->max_bitflips,
+					    errors);
 	}

 	nctrl->eccread = 0;
···

 		return;

-	/* READID must read all 8 possible bytes */
 	case NAND_CMD_READID:
+	case NAND_CMD_PARAM: {
+		int timing = IFC_FIR_OP_RB;
+		if (command == NAND_CMD_PARAM)
+			timing = IFC_FIR_OP_RBCD;
+
 		out_be32(&ifc->ifc_nand.nand_fir0,
 			 (IFC_FIR_OP_CMD0 << IFC_NAND_FIR0_OP0_SHIFT) |
 			 (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
-			 (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP2_SHIFT));
+			 (timing << IFC_NAND_FIR0_OP2_SHIFT));
 		out_be32(&ifc->ifc_nand.nand_fcr0,
-			 NAND_CMD_READID << IFC_NAND_FCR0_CMD0_SHIFT);
-		/* 8 bytes for manuf, device and exts */
-		out_be32(&ifc->ifc_nand.nand_fbcr, 8);
-		ifc_nand_ctrl->read_bytes = 8;
+			 command << IFC_NAND_FCR0_CMD0_SHIFT);
+		out_be32(&ifc->ifc_nand.row3, column);
+
+		/*
+		 * although currently it's 8 bytes for READID, we always read
+		 * the maximum 256 bytes(for PARAM)
+		 */
+		out_be32(&ifc->ifc_nand.nand_fbcr, 256);
+		ifc_nand_ctrl->read_bytes = 256;

 		set_addr(mtd, 0, 0, 0);
 		fsl_ifc_run_command(mtd);
 		return;
+	}

 	/* ERASE1 stores the block and page address */
 	case NAND_CMD_ERASE1:
···
 	return nand_fsr | NAND_STATUS_WP;
 }

-static int fsl_ifc_read_page(struct mtd_info *mtd,
-			     struct nand_chip *chip,
-			     uint8_t *buf, int page)
+static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
+			     uint8_t *buf, int oob_required, int page)
 {
 	struct fsl_ifc_mtd *priv = chip->priv;
 	struct fsl_ifc_ctrl *ctrl = priv->ctrl;
+	struct fsl_ifc_nand_ctrl *nctrl = ifc_nand_ctrl;

 	fsl_ifc_read_buf(mtd, buf, mtd->writesize);
-	fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
+	if (oob_required)
+		fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);

 	if (ctrl->nand_stat & IFC_NAND_EVTER_STAT_ECCER)
 		dev_err(priv->dev, "NAND Flash ECC Uncorrectable Error\n");
···
 	if (ctrl->nand_stat != IFC_NAND_EVTER_STAT_OPC)
 		mtd->ecc_stats.failed++;

-	return 0;
+	return nctrl->max_bitflips;
 }

 /* ECC will be calculated automatically, and errors will be detected in
  * waitfunc.
  */
-static void fsl_ifc_write_page(struct mtd_info *mtd,
-			       struct nand_chip *chip,
-			       const uint8_t *buf)
+static void fsl_ifc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
+			       const uint8_t *buf, int oob_required)
 {
 	fsl_ifc_write_buf(mtd, buf, mtd->writesize);
 	fsl_ifc_write_buf(mtd, chip->oob_poi, mtd->oobsize);
···
 	out_be32(&ifc->ifc_nand.ncfgr, 0x0);

 	/* set up nand options */
-	chip->options = NAND_NO_READRDY | NAND_NO_AUTOINCR;
+	chip->options = NAND_NO_READRDY;
 	chip->bbt_options = NAND_BBT_USE_FLASH;

···
 	/* Hardware generates ECC per 512 Bytes */
 	chip->ecc.size = 512;
 	chip->ecc.bytes = 8;
+	chip->ecc.strength = 4;

 	switch (csor & CSOR_NAND_PGS_MASK) {
 	case CSOR_NAND_PGS_512:
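The max_bitflips bookkeeping introduced here follows the pattern this pull applies across several drivers: each ECC step reports how many bits it corrected, the page-read path keeps the worst step, and that maximum is what the MTD core compares against bitflip_threshold when deciding whether to return -EUCLEAN. A standalone sketch of the accumulation (the function name and the plain-int error array are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Worst-case corrected bitflips over all ECC steps of one page.
 * A negative entry stands in for an uncorrectable step, which the
 * real drivers count in ecc_stats.failed rather than here.
 */
static int page_max_bitflips(const int *step_errors, size_t nsteps)
{
	unsigned int max_bitflips = 0;
	size_t i;

	for (i = 0; i < nsteps; i++) {
		int stat = step_errors[i];

		if (stat < 0)
			continue;	/* uncorrectable: not a bitflip count */
		if ((unsigned int)stat > max_bitflips)
			max_bitflips = stat;
	}
	return (int)max_bitflips;
}
```

mtd_read() then signals -EUCLEAN once this per-page maximum reaches bitflip_threshold, per the sysfs ABI text added in this same pull.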
+15 -11
drivers/mtd/nand/fsmc_nand.c
···
  * @mtd:	mtd info structure
  * @chip:	nand chip info structure
  * @buf:	buffer to store read data
+ * @oob_required:	caller expects OOB data read to chip->oob_poi
  * @page:	page number to read
  *
  * This routine is needed for fsmc version 8 as reading from NAND chip has to be
···
  * max of 8 bits)
  */
 static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
-				uint8_t *buf, int page)
+				uint8_t *buf, int oob_required, int page)
 {
 	struct fsmc_nand_data *host = container_of(mtd,
 					struct fsmc_nand_data, mtd);
···
 	 */
 	uint16_t ecc_oob[7];
 	uint8_t *oob = (uint8_t *)&ecc_oob[0];
+	unsigned int max_bitflips = 0;

 	for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
 		chip->cmdfunc(mtd, NAND_CMD_READ0, s * eccsize, page);
···
 		chip->ecc.calculate(mtd, p, &ecc_calc[i]);

 		stat = chip->ecc.correct(mtd, p, &ecc_code[i], &ecc_calc[i]);
-		if (stat < 0)
+		if (stat < 0) {
 			mtd->ecc_stats.failed++;
-		else
+		} else {
 			mtd->ecc_stats.corrected += stat;
+			max_bitflips = max_t(unsigned int, max_bitflips, stat);
+		}
 	}

-	return 0;
+	return max_bitflips;
 }

 /*
···
 		return PTR_ERR(host->clk);
 	}

-	ret = clk_enable(host->clk);
+	ret = clk_prepare_enable(host->clk);
 	if (ret)
-		goto err_clk_enable;
+		goto err_clk_prepare_enable;

 	/*
 	 * This device ID is actually a common AMBA ID as used on the
···
 	if (host->mode == USE_DMA_ACCESS)
 		dma_release_channel(host->read_dma_chan);
 err_req_read_chnl:
-	clk_disable(host->clk);
-err_clk_enable:
+	clk_disable_unprepare(host->clk);
+err_clk_prepare_enable:
 	clk_put(host->clk);
 	return ret;
 }
···
 		dma_release_channel(host->write_dma_chan);
 		dma_release_channel(host->read_dma_chan);
 	}
-	clk_disable(host->clk);
+	clk_disable_unprepare(host->clk);
 	clk_put(host->clk);
 }
···
 {
 	struct fsmc_nand_data *host = dev_get_drvdata(dev);
 	if (host)
-		clk_disable(host->clk);
+		clk_disable_unprepare(host->clk);
 	return 0;
 }
···
 {
 	struct fsmc_nand_data *host = dev_get_drvdata(dev);
 	if (host) {
-		clk_enable(host->clk);
+		clk_prepare_enable(host->clk);
 		fsmc_nand_setup(host->regs_va, host->bank,
 				host->nand.options & NAND_BUSWIDTH_16,
 				host->dev_timings);
+32 -10
drivers/mtd/nand/gpmi-nand/bch-regs.h
···

 #define BP_BCH_FLASH0LAYOUT0_ECC0		12
 #define BM_BCH_FLASH0LAYOUT0_ECC0	(0xf << BP_BCH_FLASH0LAYOUT0_ECC0)
-#define BF_BCH_FLASH0LAYOUT0_ECC0(v)	\
-	(((v) << BP_BCH_FLASH0LAYOUT0_ECC0) & BM_BCH_FLASH0LAYOUT0_ECC0)
+#define MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0	11
+#define MX6Q_BM_BCH_FLASH0LAYOUT0_ECC0 (0x1f << MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0)
+#define BF_BCH_FLASH0LAYOUT0_ECC0(v, x)				\
+	(GPMI_IS_MX6Q(x)					\
+		? (((v) << MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0)	\
+			& MX6Q_BM_BCH_FLASH0LAYOUT0_ECC0)	\
+		: (((v) << BP_BCH_FLASH0LAYOUT0_ECC0)		\
+			& BM_BCH_FLASH0LAYOUT0_ECC0)		\
+	)

 #define BP_BCH_FLASH0LAYOUT0_DATA0_SIZE		0
 #define BM_BCH_FLASH0LAYOUT0_DATA0_SIZE		\
 			(0xfff << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE)
-#define BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(v)	\
-	(((v) << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE)\
-			& BM_BCH_FLASH0LAYOUT0_DATA0_SIZE)
+#define MX6Q_BM_BCH_FLASH0LAYOUT0_DATA0_SIZE	\
+			(0x3ff << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE)
+#define BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(v, x)				\
+	(GPMI_IS_MX6Q(x)						\
+		? (((v) >> 2) & MX6Q_BM_BCH_FLASH0LAYOUT0_DATA0_SIZE)	\
+		: ((v) & BM_BCH_FLASH0LAYOUT0_DATA0_SIZE)		\
+	)

 #define HW_BCH_FLASH0LAYOUT1			0x00000090
···

 #define BP_BCH_FLASH0LAYOUT1_ECCN		12
 #define BM_BCH_FLASH0LAYOUT1_ECCN	(0xf << BP_BCH_FLASH0LAYOUT1_ECCN)
-#define BF_BCH_FLASH0LAYOUT1_ECCN(v)	\
-	(((v) << BP_BCH_FLASH0LAYOUT1_ECCN) & BM_BCH_FLASH0LAYOUT1_ECCN)
+#define MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN	11
+#define MX6Q_BM_BCH_FLASH0LAYOUT1_ECCN (0x1f << MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN)
+#define BF_BCH_FLASH0LAYOUT1_ECCN(v, x)				\
+	(GPMI_IS_MX6Q(x)					\
+		? (((v) << MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN)	\
+			& MX6Q_BM_BCH_FLASH0LAYOUT1_ECCN)	\
+		: (((v) << BP_BCH_FLASH0LAYOUT1_ECCN)		\
+			& BM_BCH_FLASH0LAYOUT1_ECCN)		\
+	)

 #define BP_BCH_FLASH0LAYOUT1_DATAN_SIZE		0
 #define BM_BCH_FLASH0LAYOUT1_DATAN_SIZE		\
 			(0xfff << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE)
-#define BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(v)	\
-	(((v) << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE) \
-			& BM_BCH_FLASH0LAYOUT1_DATAN_SIZE)
+#define MX6Q_BM_BCH_FLASH0LAYOUT1_DATAN_SIZE	\
+			(0x3ff << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE)
+#define BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(v, x)				\
+	(GPMI_IS_MX6Q(x)						\
+		? (((v) >> 2) & MX6Q_BM_BCH_FLASH0LAYOUT1_DATAN_SIZE)	\
+		: ((v) & BM_BCH_FLASH0LAYOUT1_DATAN_SIZE)		\
+	)
 #endif
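The MX6Q variants in this header change two things relative to the older layout: the ECC-strength field widens from four bits at bit 12 to five bits at bit 11, and the DATA size fields hold the block size pre-shifted right by two. A self-contained sketch of both encodings, with simplified stand-in macros (not the kernel's names) and the GPMI_IS_MX6Q() test reduced to a flag:

```c
#include <assert.h>
#include <stdint.h>

/* Older layout: 4-bit ECC field at bit 12. */
#define BP_ECC0		12
#define BM_ECC0		(0xfu << BP_ECC0)
/* i.MX6Q layout: 5-bit ECC field at bit 11. */
#define MX6Q_BP_ECC0	11
#define MX6Q_BM_ECC0	(0x1fu << MX6Q_BP_ECC0)

static uint32_t bf_ecc0(uint32_t strength, int is_mx6q)
{
	return is_mx6q
		? ((strength << MX6Q_BP_ECC0) & MX6Q_BM_ECC0)
		: ((strength << BP_ECC0) & BM_ECC0);
}

static uint32_t bf_data0_size(uint32_t size, int is_mx6q)
{
	/* MX6Q stores the size shifted right by two; the mask is 10 bits. */
	return is_mx6q ? ((size >> 2) & 0x3ffu) : (size & 0xfffu);
}
```

The extra field bit is what lets gpmi-lib.c program ECC strengths above 15 on MX6Q without disturbing the MX23/MX28 register layout.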
+15 -12
drivers/mtd/nand/gpmi-nand/gpmi-lib.c
···
 #include <linux/mtd/gpmi-nand.h>
 #include <linux/delay.h>
 #include <linux/clk.h>
-#include <mach/mxs.h>

 #include "gpmi-nand.h"
 #include "gpmi-regs.h"
···
 	.max_dll_delay_in_ns		= 16,
 };

+#define MXS_SET_ADDR		0x4
+#define MXS_CLR_ADDR		0x8
 /*
  * Clear the bit and poll it cleared.  This is usually called with
  * a reset address and mask being either SFTRST(bit 31) or CLKGATE
···
 	int timeout = 0x400;

 	/* clear the bit */
-	__mxs_clrl(mask, addr);
+	writel(mask, addr + MXS_CLR_ADDR);

 	/*
 	 * SFTRST needs 3 GPMI clocks to settle, the reference manual
···
 		goto error;

 	/* clear CLKGATE */
-	__mxs_clrl(MODULE_CLKGATE, reset_addr);
+	writel(MODULE_CLKGATE, reset_addr + MXS_CLR_ADDR);

 	if (!just_enable) {
 		/* set SFTRST to reset the block */
-		__mxs_setl(MODULE_SFTRST, reset_addr);
+		writel(MODULE_SFTRST, reset_addr + MXS_SET_ADDR);
 		udelay(1);

 		/* poll CLKGATE becoming set */
···
 	/* Configure layout 0. */
 	writel(BF_BCH_FLASH0LAYOUT0_NBLOCKS(block_count)
 		| BF_BCH_FLASH0LAYOUT0_META_SIZE(metadata_size)
-		| BF_BCH_FLASH0LAYOUT0_ECC0(ecc_strength)
-		| BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(block_size),
+		| BF_BCH_FLASH0LAYOUT0_ECC0(ecc_strength, this)
+		| BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(block_size, this),
 		r->bch_regs + HW_BCH_FLASH0LAYOUT0);

 	writel(BF_BCH_FLASH0LAYOUT1_PAGE_SIZE(page_size)
-		| BF_BCH_FLASH0LAYOUT1_ECCN(ecc_strength)
-		| BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(block_size),
+		| BF_BCH_FLASH0LAYOUT1_ECCN(ecc_strength, this)
+		| BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(block_size, this),
 		r->bch_regs + HW_BCH_FLASH0LAYOUT1);

 	/* Set *all* chip selects to use layout 0. */
···
 	return max(k, min);
 }

+#define DEF_MIN_PROP_DELAY	5
+#define DEF_MAX_PROP_DELAY	9
 /* Apply timing to current hardware conditions. */
 static int gpmi_nfc_compute_hardware_timing(struct gpmi_nand_data *this,
 					struct gpmi_nfc_hardware_timing *hw)
 {
-	struct gpmi_nand_platform_data *pdata = this->pdata;
 	struct timing_threshod *nfc = &timing_default_threshold;
 	struct nand_chip *nand = &this->nand;
 	struct nand_timing target = this->timing;
···
 	int ideal_sample_delay_in_ns;
 	unsigned int sample_delay_factor;
 	int tEYE;
-	unsigned int min_prop_delay_in_ns = pdata->min_prop_delay_in_ns;
-	unsigned int max_prop_delay_in_ns = pdata->max_prop_delay_in_ns;
+	unsigned int min_prop_delay_in_ns = DEF_MIN_PROP_DELAY;
+	unsigned int max_prop_delay_in_ns = DEF_MAX_PROP_DELAY;

 	/*
 	 * If there are multiple chips, we need to relax the timings to allow
···
 	if (GPMI_IS_MX23(this)) {
 		mask = MX23_BM_GPMI_DEBUG_READY0 << chip;
 		reg = readl(r->gpmi_regs + HW_GPMI_DEBUG);
-	} else if (GPMI_IS_MX28(this)) {
+	} else if (GPMI_IS_MX28(this) || GPMI_IS_MX6Q(this)) {
+		/* MX28 shares the same R/B register as MX6Q. */
 		mask = MX28_BF_GPMI_STAT_READY_BUSY(1 << chip);
 		reg = readl(r->gpmi_regs + HW_GPMI_STAT);
 	} else
+98 -92
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
···
 #include <linux/mtd/gpmi-nand.h>
 #include <linux/mtd/partitions.h>
 #include <linux/pinctrl/consumer.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
 #include "gpmi-nand.h"

 /* add our owner bbt descriptor */
···
 static bool gpmi_dma_filter(struct dma_chan *chan, void *param)
 {
 	struct gpmi_nand_data *this = param;
-	struct resource *r = this->private;
+	int dma_channel = (int)this->private;

 	if (!mxs_dma_is_apbh(chan))
 		return false;
···
 	 * for mx28 : MX28_DMA_GPMI0 ~ MX28_DMA_GPMI7
 	 *		(These eight channels share the same IRQ!)
 	 */
-	if (r->start <= chan->chan_id && chan->chan_id <= r->end) {
+	if (dma_channel == chan->chan_id) {
 		chan->private = &this->dma_data;
 		return true;
 	}
···
 static int __devinit acquire_dma_channels(struct gpmi_nand_data *this)
 {
 	struct platform_device *pdev = this->pdev;
-	struct gpmi_nand_platform_data *pdata = this->pdata;
-	struct resources *res = &this->resources;
-	struct resource *r, *r_dma;
-	unsigned int i;
+	struct resource *r_dma;
+	struct device_node *dn;
+	int dma_channel;
+	unsigned int ret;
+	struct dma_chan *dma_chan;
+	dma_cap_mask_t mask;

-	r = platform_get_resource_byname(pdev, IORESOURCE_DMA,
-					GPMI_NAND_DMA_CHANNELS_RES_NAME);
+	/* dma channel, we only use the first one. */
+	dn = pdev->dev.of_node;
+	ret = of_property_read_u32(dn, "fsl,gpmi-dma-channel", &dma_channel);
+	if (ret) {
+		pr_err("unable to get DMA channel from dt.\n");
+		goto acquire_err;
+	}
+	this->private = (void *)dma_channel;
+
+	/* gpmi dma interrupt */
 	r_dma = platform_get_resource_byname(pdev, IORESOURCE_IRQ,
 					GPMI_NAND_DMA_INTERRUPT_RES_NAME);
-	if (!r || !r_dma) {
+	if (!r_dma) {
 		pr_err("Can't get resource for DMA\n");
-		return -ENXIO;
+		goto acquire_err;
+	}
+	this->dma_data.chan_irq = r_dma->start;
+
+	/* request dma channel */
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_SLAVE, mask);
+
+	dma_chan = dma_request_channel(mask, gpmi_dma_filter, this);
+	if (!dma_chan) {
+		pr_err("dma_request_channel failed.\n");
+		goto acquire_err;
 	}

-	/* used in gpmi_dma_filter() */
-	this->private = r;
-
-	for (i = r->start; i <= r->end; i++) {
-		struct dma_chan *dma_chan;
-		dma_cap_mask_t mask;
-
-		if (i - r->start >= pdata->max_chip_count)
-			break;
-
-		dma_cap_zero(mask);
-		dma_cap_set(DMA_SLAVE, mask);
-
-		/* get the DMA interrupt */
-		if (r_dma->start == r_dma->end) {
-			/* only register the first. */
-			if (i == r->start)
-				this->dma_data.chan_irq = r_dma->start;
-			else
-				this->dma_data.chan_irq = NO_IRQ;
-		} else
-			this->dma_data.chan_irq = r_dma->start + (i - r->start);
-
-		dma_chan = dma_request_channel(mask, gpmi_dma_filter, this);
-		if (!dma_chan)
-			goto acquire_err;
-
-		/* fill the first empty item */
-		this->dma_chans[i - r->start] = dma_chan;
-	}
-
-	res->dma_low_channel = r->start;
-	res->dma_high_channel = i;
+	this->dma_chans[0] = dma_chan;
 	return 0;

 acquire_err:
-	pr_err("Can't acquire DMA channel %u\n", i);
 	release_dma_channels(this);
 	return -EINVAL;
 }
···
 }

 static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
-				uint8_t *buf, int page)
+				uint8_t *buf, int oob_required, int page)
 {
 	struct gpmi_nand_data *this = chip->priv;
 	struct bch_geometry *nfc_geo = &this->bch_geometry;
···
 		mtd->ecc_stats.corrected += corrected;
 	}

-	/*
-	 * It's time to deliver the OOB bytes. See gpmi_ecc_read_oob() for
-	 * details about our policy for delivering the OOB.
-	 *
-	 * We fill the caller's buffer with set bits, and then copy the block
-	 * mark to th caller's buffer. Note that, if block mark swapping was
-	 * necessary, it has already been done, so we can rely on the first
-	 * byte of the auxiliary buffer to contain the block mark.
-	 */
-	memset(chip->oob_poi, ~0, mtd->oobsize);
-	chip->oob_poi[0] = ((uint8_t *) auxiliary_virt)[0];
+	if (oob_required) {
+		/*
+		 * It's time to deliver the OOB bytes. See gpmi_ecc_read_oob()
+		 * for details about our policy for delivering the OOB.
+		 *
+		 * We fill the caller's buffer with set bits, and then copy the
+		 * block mark to th caller's buffer. Note that, if block mark
+		 * swapping was necessary, it has already been done, so we can
+		 * rely on the first byte of the auxiliary buffer to contain
+		 * the block mark.
+		 */
+		memset(chip->oob_poi, ~0, mtd->oobsize);
+		chip->oob_poi[0] = ((uint8_t *) auxiliary_virt)[0];

-	read_page_swap_end(this, buf, mtd->writesize,
-			this->payload_virt, this->payload_phys,
-			nfc_geo->payload_size,
-			payload_virt, payload_phys);
+		read_page_swap_end(this, buf, mtd->writesize,
+				this->payload_virt, this->payload_phys,
+				nfc_geo->payload_size,
+				payload_virt, payload_phys);
+	}
 exit_nfc:
 	return ret;
 }

-static void gpmi_ecc_write_page(struct mtd_info *mtd,
-				struct nand_chip *chip, const uint8_t *buf)
+static void gpmi_ecc_write_page(struct mtd_info *mtd, struct nand_chip *chip,
+				const uint8_t *buf, int oob_required)
 {
 	struct gpmi_nand_data *this = chip->priv;
 	struct bch_geometry *nfc_geo = &this->bch_geometry;
···
  * this driver.
  */
 static int gpmi_ecc_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
-				int page, int sndcmd)
+				int page)
 {
 	struct gpmi_nand_data *this = chip->priv;
···
 		chip->oob_poi[0] = chip->read_byte(mtd);
 	}

-	/*
-	 * Return true, indicating that the next call to this function must send
-	 * a command.
-	 */
-	return true;
+	return 0;
 }

 static int
···
 	/* Write the first page of the current stride. */
 	dev_dbg(dev, "Writing an NCB fingerprint in page 0x%x\n", page);
 	chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
-	chip->ecc.write_page_raw(mtd, chip, buffer);
+	chip->ecc.write_page_raw(mtd, chip, buffer, 0);
 	chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);

 	/* Wait for the write to finish. */
···
 	if (ret)
 		return ret;

+	/* Adjust the ECC strength according to the chip. */
+	this->nand.ecc.strength = this->bch_geometry.ecc_strength;
+	this->mtd.ecc_strength = this->bch_geometry.ecc_strength;
+
 	/* NAND boot init, depends on the gpmi_set_geometry(). */
 	return nand_boot_init(this);
 }
···

 static int __devinit gpmi_nfc_init(struct gpmi_nand_data *this)
 {
-	struct gpmi_nand_platform_data *pdata = this->pdata;
 	struct mtd_info  *mtd = &this->mtd;
 	struct nand_chip *chip = &this->nand;
+	struct mtd_part_parser_data ppdata = {};
 	int ret;

 	/* init current chip */
···
 	chip->options		|= NAND_NO_SUBPAGE_WRITE;
 	chip->ecc.mode		= NAND_ECC_HW;
 	chip->ecc.size		= 1;
+	chip->ecc.strength	= 8;
 	chip->ecc.layout	= &gpmi_hw_ecclayout;

 	/* Allocate a temporary DMA buffer for reading ID in the nand_scan() */
···
 	if (ret)
 		goto err_out;

-	ret = nand_scan(mtd, pdata->max_chip_count);
+	ret = nand_scan(mtd, 1);
 	if (ret) {
 		pr_err("Chip scan failed\n");
 		goto err_out;
 	}

-	ret = mtd_device_parse_register(mtd, NULL, NULL,
-			pdata->partitions, pdata->partition_count);
+	ppdata.of_node = this->pdev->dev.of_node;
+	ret = mtd_device_parse_register(mtd, NULL, &ppdata, NULL, 0);
 	if (ret)
 		goto err_out;
 	return 0;
···
 	return ret;
 }

+static const struct platform_device_id gpmi_ids[] = {
+	{ .name = "imx23-gpmi-nand", .driver_data = IS_MX23, },
+	{ .name = "imx28-gpmi-nand", .driver_data = IS_MX28, },
+	{ .name = "imx6q-gpmi-nand", .driver_data = IS_MX6Q, },
+	{},
+};
+
+static const struct of_device_id gpmi_nand_id_table[] = {
+	{
+		.compatible = "fsl,imx23-gpmi-nand",
+		.data = (void *)&gpmi_ids[IS_MX23]
+	}, {
+		.compatible = "fsl,imx28-gpmi-nand",
+		.data = (void *)&gpmi_ids[IS_MX28]
+	}, {
+		.compatible = "fsl,imx6q-gpmi-nand",
+		.data = (void *)&gpmi_ids[IS_MX6Q]
+	}, {}
+};
+MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
+
 static int __devinit gpmi_nand_probe(struct platform_device *pdev)
 {
-	struct gpmi_nand_platform_data *pdata = pdev->dev.platform_data;
 	struct gpmi_nand_data *this;
+	const struct of_device_id *of_id;
 	int ret;
+
+	of_id = of_match_device(gpmi_nand_id_table, &pdev->dev);
+	if (of_id) {
+		pdev->id_entry = of_id->data;
+	} else {
+		pr_err("Failed to find the right device id.\n");
+		return -ENOMEM;
+	}

 	this = kzalloc(sizeof(*this), GFP_KERNEL);
 	if (!this) {
···
 	platform_set_drvdata(pdev, this);
 	this->pdev  = pdev;
 	this->dev   = &pdev->dev;
-	this->pdata = pdata;
-
-	if (pdata->platform_init) {
-		ret = pdata->platform_init();
-		if (ret)
-			goto platform_init_error;
-	}

 	ret = acquire_resources(this);
 	if (ret)
···

 exit_nfc_init:
 	release_resources(this);
-platform_init_error:
 exit_acquire_resources:
 	platform_set_drvdata(pdev, NULL);
 	kfree(this);
···
 	return 0;
 }

-static const struct platform_device_id gpmi_ids[] = {
-	{
-		.name = "imx23-gpmi-nand",
-		.driver_data = IS_MX23,
-	}, {
-		.name = "imx28-gpmi-nand",
-		.driver_data = IS_MX28,
-	}, {},
-};
-
 static struct platform_driver gpmi_nand_driver = {
 	.driver = {
 		.name = "gpmi-nand",
+		.of_match_table = gpmi_nand_id_table,
 	},
 	.probe   = gpmi_nand_probe,
 	.remove  = __exit_p(gpmi_nand_remove),
+4 -2
drivers/mtd/nand/gpmi-nand/gpmi-nand.h
···
 #define STATUS_UNCORRECTABLE		0xfe

 /* Use the platform_id to distinguish different Archs. */
-#define IS_MX23			0x1
-#define IS_MX28			0x2
+#define IS_MX23			0x0
+#define IS_MX28			0x1
+#define IS_MX6Q			0x2
 #define GPMI_IS_MX23(x)		((x)->pdev->id_entry->driver_data == IS_MX23)
 #define GPMI_IS_MX28(x)		((x)->pdev->id_entry->driver_data == IS_MX28)
+#define GPMI_IS_MX6Q(x)		((x)->pdev->id_entry->driver_data == IS_MX6Q)
 #endif
-1
drivers/mtd/nand/h1910.c
···
 	/* 15 us command delay time */
 	this->chip_delay = 50;
 	this->ecc.mode = NAND_ECC_SOFT;
-	this->options = NAND_NO_AUTOINCR;

 	/* Scan to find existence of the device */
 	if (nand_scan(h1910_nand_mtd, 1)) {
+1 -5
drivers/mtd/nand/jz4740_nand.c
···
 	chip->ecc.mode		= NAND_ECC_HW_OOB_FIRST;
 	chip->ecc.size		= 512;
 	chip->ecc.bytes		= 9;
-	chip->ecc.strength	= 2;
-	/*
-	 * FIXME: ecc_strength value of 2 bits per 512 bytes of data is a
-	 * conservative guess, given 9 ecc bytes and reed-solomon alg.
-	 */
+	chip->ecc.strength	= 4;

 	if (pdata)
 		chip->ecc.layout = pdata->ecc_layout;
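The jump from 2 to 4 resolves the FIXME this hunk removes. Assuming the jz4740 unit's Reed-Solomon code works on 9-bit symbols (an assumption, chosen because 9 parity bytes then divide evenly into symbols), the arithmetic is: 9 bytes = 72 bits = 8 symbols = 2t, so t = 4 correctable symbol errors per 512-byte ECC step. A small sketch of that calculation:

```c
#include <assert.h>

/*
 * For a Reed-Solomon code, 2t parity symbols correct up to t symbol
 * errors.  symbol_bits = 9 is an assumption about the jz4740 unit,
 * not something this driver states explicitly.
 */
static int rs_strength(int parity_bytes, int symbol_bits)
{
	int parity_symbols = (parity_bytes * 8) / symbol_bits;

	return parity_symbols / 2;
}
```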
-1
drivers/mtd/nand/mpc5121_nfc.c
···
 	chip->write_buf = mpc5121_nfc_write_buf;
 	chip->verify_buf = mpc5121_nfc_verify_buf;
 	chip->select_chip = mpc5121_nfc_select_chip;
-	chip->options = NAND_NO_AUTOINCR;
 	chip->bbt_options = NAND_BBT_USE_FLASH;
 	chip->ecc.mode = NAND_ECC_SOFT;

+444 -192
drivers/mtd/nand/mxc_nand.c
···
 #include <linux/io.h>
 #include <linux/irq.h>
 #include <linux/completion.h>
+#include <linux/of_device.h>
+#include <linux/of_mtd.h>

 #include <asm/mach/flash.h>
 #include <mach/mxc_nand.h>
···

 #define NFC_V3_DELAY_LINE		(host->regs_ip + 0x34)

+struct mxc_nand_host;
+
+struct mxc_nand_devtype_data {
+	void (*preset)(struct mtd_info *);
+	void (*send_cmd)(struct mxc_nand_host *, uint16_t, int);
+	void (*send_addr)(struct mxc_nand_host *, uint16_t, int);
+	void (*send_page)(struct mtd_info *, unsigned int);
+	void (*send_read_id)(struct mxc_nand_host *);
+	uint16_t (*get_dev_status)(struct mxc_nand_host *);
+	int (*check_int)(struct mxc_nand_host *);
+	void (*irq_control)(struct mxc_nand_host *, int);
+	u32 (*get_ecc_status)(struct mxc_nand_host *);
+	struct nand_ecclayout *ecclayout_512, *ecclayout_2k, *ecclayout_4k;
+	void (*select_chip)(struct mtd_info *mtd, int chip);
+	int (*correct_data)(struct mtd_info *mtd, u_char *dat,
+			u_char *read_ecc, u_char *calc_ecc);
+
+	/*
+	 * On i.MX21 the CONFIG2:INT bit cannot be read if interrupts are masked
+	 * (CONFIG1:INT_MSK is set). To handle this the driver uses
+	 * enable_irq/disable_irq_nosync instead of CONFIG1:INT_MSK
+	 */
+	int irqpending_quirk;
+	int needs_ip;
+
+	size_t regs_offset;
+	size_t spare0_offset;
+	size_t axi_offset;
+
+	int spare_len;
+	int eccbytes;
+	int eccsize;
+};
+
 struct mxc_nand_host {
 	struct mtd_info		mtd;
 	struct nand_chip	nand;
 	struct device		*dev;

-	void			*spare0;
-	void			*main_area0;
+	void __iomem		*spare0;
+	void __iomem		*main_area0;

 	void __iomem		*base;
 	void __iomem		*regs;
···

 	uint8_t			*data_buf;
 	unsigned int		buf_start;
-	int			spare_len;

-	void			(*preset)(struct mtd_info *);
-	void			(*send_cmd)(struct mxc_nand_host *, uint16_t, int);
-	void			(*send_addr)(struct mxc_nand_host *, uint16_t, int);
-	void			(*send_page)(struct mtd_info *, unsigned int);
-	void			(*send_read_id)(struct mxc_nand_host *);
-	uint16_t		(*get_dev_status)(struct mxc_nand_host *);
-	int			(*check_int)(struct mxc_nand_host *);
-	void			(*irq_control)(struct mxc_nand_host *, int);
+	const struct mxc_nand_devtype_data *devtype_data;
+	struct mxc_nand_platform_data pdata;
 };

 /* OOB placement block for use with hardware ecc generation */
···
 	}
 };

-static const char *part_probes[] = { "RedBoot", "cmdlinepart", NULL };
-
-static irqreturn_t mxc_nfc_irq(int irq, void *dev_id)
-{
-	struct mxc_nand_host *host = dev_id;
-
-	if (!host->check_int(host))
-		return IRQ_NONE;
-
-	host->irq_control(host, 0);
-
-	complete(&host->op_completion);
-
-	return IRQ_HANDLED;
-}
+static const char *part_probes[] = { "RedBoot", "cmdlinepart", "ofpart", NULL };

 static int check_int_v3(struct mxc_nand_host *host)
 {
···
 	if (!(tmp & NFC_V1_V2_CONFIG2_INT))
 		return 0;

-	if (!cpu_is_mx21())
+	if (!host->devtype_data->irqpending_quirk)
 		writew(tmp & ~NFC_V1_V2_CONFIG2_INT, NFC_V1_V2_CONFIG2);

 	return 1;
-}
-
-/*
- * It has been observed that the i.MX21 cannot read the CONFIG2:INT bit
- * if interrupts are masked (CONFIG1:INT_MSK is set). To handle this, the
- * driver can enable/disable the irq line rather than simply masking the
- * interrupts.
- */
-static void irq_control_mx21(struct mxc_nand_host *host, int activate)
-{
-	if (activate)
-		enable_irq(host->irq);
-	else
-		disable_irq_nosync(host->irq);
 }

 static void irq_control_v1_v2(struct mxc_nand_host *host, int activate)
···
 	writel(tmp, NFC_V3_CONFIG2);
 }

+static void irq_control(struct mxc_nand_host *host, int activate)
+{
+	if (host->devtype_data->irqpending_quirk) {
+		if (activate)
+			enable_irq(host->irq);
+		else
+			disable_irq_nosync(host->irq);
+	} else {
+		host->devtype_data->irq_control(host, activate);
+	}
+}
+
+static u32 get_ecc_status_v1(struct mxc_nand_host *host)
+{
+	return readw(NFC_V1_V2_ECC_STATUS_RESULT);
+}
+
+static u32 get_ecc_status_v2(struct mxc_nand_host *host)
+{
+	return readl(NFC_V1_V2_ECC_STATUS_RESULT);
+}
+
+static u32 get_ecc_status_v3(struct mxc_nand_host *host)
+{
+	return readl(NFC_V3_ECC_STATUS_RESULT);
+}
+
+static irqreturn_t mxc_nfc_irq(int irq, void *dev_id)
+{
+	struct mxc_nand_host *host = dev_id;
+
+	if (!host->devtype_data->check_int(host))
+		return IRQ_NONE;
+
+	irq_control(host, 0);
+
+	complete(&host->op_completion);
+
+	return IRQ_HANDLED;
+}
+
 /* This function polls the NANDFC to wait for the basic operation to
  * complete by checking the INT bit of config2 register.
  */
···
 	int max_retries = 8000;

 	if (useirq) {
-		if (!host->check_int(host)) {
+		if (!host->devtype_data->check_int(host)) {
 			INIT_COMPLETION(host->op_completion);
-			host->irq_control(host, 1);
+			irq_control(host, 1);
 			wait_for_completion(&host->op_completion);
 		}
 	} else {
 		while (max_retries-- > 0) {
-			if (host->check_int(host))
+			if (host->devtype_data->check_int(host))
 				break;

 			udelay(1);
···
 	writew(cmd, NFC_V1_V2_FLASH_CMD);
 	writew(NFC_CMD, NFC_V1_V2_CONFIG2);

-	if (cpu_is_mx21() && (cmd == NAND_CMD_RESET)) {
+	if (host->devtype_data->irqpending_quirk && (cmd == NAND_CMD_RESET)) {
 		int max_retries = 100;
 		/* Reset completion is indicated by NFC_CONFIG2 */
 		/* being set to 0 */
···
 	wait_op_done(host, false);
 }

-static void send_page_v1_v2(struct mtd_info *mtd, unsigned int ops)
+static void send_page_v2(struct mtd_info *mtd, unsigned int ops)
+{
+	struct nand_chip *nand_chip = mtd->priv;
+	struct mxc_nand_host *host = nand_chip->priv;
+
+	/* NANDFC buffer 0 is used for page read/write */
+	writew(host->active_cs << 4, NFC_V1_V2_BUF_ADDR);
+
+	writew(ops, NFC_V1_V2_CONFIG2);
+
+	/* Wait for operation to complete */
+	wait_op_done(host, true);
+}
+
+static void send_page_v1(struct mtd_info *mtd, unsigned int ops)
 {
 	struct nand_chip *nand_chip = mtd->priv;
 	struct mxc_nand_host *host = nand_chip->priv;
 	int bufs, i;

-	if (nfc_is_v1() && mtd->writesize > 512)
+	if (mtd->writesize > 512)
 		bufs = 4;
 	else
 		bufs = 1;
···

 	wait_op_done(host, true);

-	memcpy(host->data_buf, host->main_area0, 16);
+	memcpy_fromio(host->data_buf, host->main_area0, 16);
 }

 /* Request the NANDFC to perform a read of the NAND device ID. */
···
 	/* Wait for operation to complete */
 	wait_op_done(host, true);

-	memcpy(host->data_buf, host->main_area0, 16);
+	memcpy_fromio(host->data_buf, host->main_area0, 16);

 	if (this->options & NAND_BUSWIDTH_16) {
 		/* compress the ID info */
···
 	 * additional correction. 2-Bit errors cannot be corrected by
 	 * HW ECC, so we need to return failure
 	 */
-	uint16_t ecc_status = readw(NFC_V1_V2_ECC_STATUS_RESULT);
+	uint16_t ecc_status = get_ecc_status_v1(host);

 	if (((ecc_status & 0x3) == 2) || ((ecc_status >> 2) == 2)) {
 		pr_debug("MXC_NAND: HWECC uncorrectable 2-bit ECC error\n");
···

 	no_subpages = mtd->writesize >> 9;

-	if (nfc_is_v21())
-		ecc_stat = readl(NFC_V1_V2_ECC_STATUS_RESULT);
-	else
-		ecc_stat = readl(NFC_V3_ECC_STATUS_RESULT);
+	ecc_stat = host->devtype_data->get_ecc_status(host);

 	do {
 		err = ecc_stat & ecc_bit_mask;
···

 	/* Check for status request */
 	if (host->status_request)
-		return host->get_dev_status(host) & 0xFF;
+		return host->devtype_data->get_dev_status(host) & 0xFF;

 	ret = *(uint8_t *)(host->data_buf + host->buf_start);
 	host->buf_start++;
···

 /* This function is used by upper layer for select and
  * deselect of the NAND chip */
-static void mxc_nand_select_chip(struct mtd_info *mtd, int chip)
+static void mxc_nand_select_chip_v1_v3(struct mtd_info *mtd, int chip)
 {
 	struct nand_chip *nand_chip = mtd->priv;
 	struct mxc_nand_host *host = nand_chip->priv;
···
 		clk_prepare_enable(host->clk);
 		host->clk_act = 1;
 	}
+}

-	if (nfc_is_v21()) {
-		host->active_cs = chip;
-		writew(host->active_cs << 4, NFC_V1_V2_BUF_ADDR);
+static void mxc_nand_select_chip_v2(struct mtd_info *mtd, int chip)
+{
+	struct nand_chip *nand_chip = mtd->priv;
+	struct mxc_nand_host *host = nand_chip->priv;
+
+	if (chip == -1) {
+		/* Disable the NFC clock */
+		if (host->clk_act) {
+			clk_disable(host->clk);
+			host->clk_act = 0;
+		}
+		return;
 	}
+
+	if (!host->clk_act) {
+		/* Enable the NFC clock */
+		clk_enable(host->clk);
+		host->clk_act = 1;
+	}
+
+	host->active_cs = chip;
+	writew(host->active_cs << 4, NFC_V1_V2_BUF_ADDR);
 }

 /*
···
 	u16 i, j;
 	u16 n = mtd->writesize >> 9;
 	u8 *d = host->data_buf + mtd->writesize;
-	u8 *s = host->spare0;
-	u16 t = host->spare_len;
+	u8 __iomem *s = host->spare0;
+	u16 t = host->devtype_data->spare_len;

 	j = (mtd->oobsize / n >> 1) << 1;

 	if (bfrom) {
 		for (i = 0; i < n - 1; i++)
-			memcpy(d + i * j, s + i * t, j);
+			memcpy_fromio(d + i * j, s + i * t, j);

 		/* the last section */
-		memcpy(d + i * j, s + i * t, mtd->oobsize - i * j);
+		memcpy_fromio(d + i * j, s + i * t, mtd->oobsize - i * j);
 	} else {
 		for (i = 0; i < n - 1; i++)
-			memcpy(&s[i * t], &d[i * j], j);
+			memcpy_toio(&s[i * t], &d[i * j], j);

 		/* the last section */
-		memcpy(&s[i * t], &d[i * j], mtd->oobsize - i * j);
+		memcpy_toio(&s[i * t], &d[i * j], mtd->oobsize - i * j);
 	}
 }
···
 	 * perform a read/write buf operation, the saved column
 	 * address is used to index into the full page.
 	 */
-	host->send_addr(host, 0, page_addr == -1);
+	host->devtype_data->send_addr(host, 0, page_addr == -1);
 	if (mtd->writesize > 512)
 		/* another col addr cycle for 2k page */
-		host->send_addr(host, 0, false);
+		host->devtype_data->send_addr(host, 0, false);
 }

 /* Write out page address, if necessary */
 if (page_addr != -1) {
 	/* paddr_0 - p_addr_7 */
-	host->send_addr(host, (page_addr & 0xff), false);
+	host->devtype_data->send_addr(host, (page_addr & 0xff), false);

 	if (mtd->writesize > 512) {
 		if (mtd->size >= 0x10000000) {
 			/* paddr_8 - paddr_15 */
-			host->send_addr(host, (page_addr >> 8) & 0xff, false);
-			host->send_addr(host, (page_addr >> 16) & 0xff, true);
+			host->devtype_data->send_addr(host,
+					(page_addr >> 8) & 0xff,
+					false);
+			host->devtype_data->send_addr(host,
+					(page_addr >> 16) & 0xff,
+					true);
 		} else
 			/* paddr_8 - paddr_15 */
-			host->send_addr(host, (page_addr >> 8) & 0xff, true);
+			host->devtype_data->send_addr(host,
+					(page_addr >> 8) & 0xff, true);
 	} else {
 		/* One more address cycle for higher density devices */
 		if (mtd->size >= 0x4000000) {
 			/* paddr_8 - paddr_15 */
-			host->send_addr(host, (page_addr >> 8) & 0xff, false);
-			host->send_addr(host, (page_addr >> 16) & 0xff, true);
+			host->devtype_data->send_addr(host,
+					(page_addr >> 8) & 0xff,
+					false);
+			host->devtype_data->send_addr(host,
+					(page_addr >> 16) & 0xff,
+					true);
 		} else
 			/* paddr_8 - paddr_15 */
-			host->send_addr(host, (page_addr >> 8) & 0xff, true);
+			host->devtype_data->send_addr(host,
+					(page_addr >> 8) & 0xff, true);
 	}
 }
···
 	return 8;
 }

-static void preset_v1_v2(struct mtd_info *mtd)
+static void preset_v1(struct mtd_info *mtd)
 {
 	struct
nand_chip *nand_chip = mtd->priv; 888 806 struct mxc_nand_host *host = nand_chip->priv; ··· 891 809 if (nand_chip->ecc.mode == NAND_ECC_HW) 892 810 config1 |= NFC_V1_V2_CONFIG1_ECC_EN; 893 811 894 - if (nfc_is_v21()) 895 - config1 |= NFC_V2_CONFIG1_FP_INT; 896 - 897 - if (!cpu_is_mx21()) 812 + if (!host->devtype_data->irqpending_quirk) 898 813 config1 |= NFC_V1_V2_CONFIG1_INT_MSK; 899 814 900 - if (nfc_is_v21() && mtd->writesize) { 815 + host->eccsize = 1; 816 + 817 + writew(config1, NFC_V1_V2_CONFIG1); 818 + /* preset operation */ 819 + 820 + /* Unlock the internal RAM Buffer */ 821 + writew(0x2, NFC_V1_V2_CONFIG); 822 + 823 + /* Blocks to be unlocked */ 824 + writew(0x0, NFC_V1_UNLOCKSTART_BLKADDR); 825 + writew(0xffff, NFC_V1_UNLOCKEND_BLKADDR); 826 + 827 + /* Unlock Block Command for given address range */ 828 + writew(0x4, NFC_V1_V2_WRPROT); 829 + } 830 + 831 + static void preset_v2(struct mtd_info *mtd) 832 + { 833 + struct nand_chip *nand_chip = mtd->priv; 834 + struct mxc_nand_host *host = nand_chip->priv; 835 + uint16_t config1 = 0; 836 + 837 + if (nand_chip->ecc.mode == NAND_ECC_HW) 838 + config1 |= NFC_V1_V2_CONFIG1_ECC_EN; 839 + 840 + config1 |= NFC_V2_CONFIG1_FP_INT; 841 + 842 + if (!host->devtype_data->irqpending_quirk) 843 + config1 |= NFC_V1_V2_CONFIG1_INT_MSK; 844 + 845 + if (mtd->writesize) { 901 846 uint16_t pages_per_block = mtd->erasesize / mtd->writesize; 902 847 903 848 host->eccsize = get_eccsize(mtd); ··· 943 834 writew(0x2, NFC_V1_V2_CONFIG); 944 835 945 836 /* Blocks to be unlocked */ 946 - if (nfc_is_v21()) { 947 - writew(0x0, NFC_V21_UNLOCKSTART_BLKADDR0); 948 - writew(0x0, NFC_V21_UNLOCKSTART_BLKADDR1); 949 - writew(0x0, NFC_V21_UNLOCKSTART_BLKADDR2); 950 - writew(0x0, NFC_V21_UNLOCKSTART_BLKADDR3); 951 - writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR0); 952 - writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR1); 953 - writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR2); 954 - writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR3); 955 - } else if (nfc_is_v1()) { 956 - 
writew(0x0, NFC_V1_UNLOCKSTART_BLKADDR); 957 - writew(0xffff, NFC_V1_UNLOCKEND_BLKADDR); 958 - } else 959 - BUG(); 837 + writew(0x0, NFC_V21_UNLOCKSTART_BLKADDR0); 838 + writew(0x0, NFC_V21_UNLOCKSTART_BLKADDR1); 839 + writew(0x0, NFC_V21_UNLOCKSTART_BLKADDR2); 840 + writew(0x0, NFC_V21_UNLOCKSTART_BLKADDR3); 841 + writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR0); 842 + writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR1); 843 + writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR2); 844 + writew(0xffff, NFC_V21_UNLOCKEND_BLKADDR3); 960 845 961 846 /* Unlock Block Command for given address range */ 962 847 writew(0x4, NFC_V1_V2_WRPROT); ··· 1040 937 /* Command pre-processing step */ 1041 938 switch (command) { 1042 939 case NAND_CMD_RESET: 1043 - host->preset(mtd); 1044 - host->send_cmd(host, command, false); 940 + host->devtype_data->preset(mtd); 941 + host->devtype_data->send_cmd(host, command, false); 1045 942 break; 1046 943 1047 944 case NAND_CMD_STATUS: 1048 945 host->buf_start = 0; 1049 946 host->status_request = true; 1050 947 1051 - host->send_cmd(host, command, true); 948 + host->devtype_data->send_cmd(host, command, true); 1052 949 mxc_do_addr_cycle(mtd, column, page_addr); 1053 950 break; 1054 951 ··· 1061 958 1062 959 command = NAND_CMD_READ0; /* only READ0 is valid */ 1063 960 1064 - host->send_cmd(host, command, false); 961 + host->devtype_data->send_cmd(host, command, false); 1065 962 mxc_do_addr_cycle(mtd, column, page_addr); 1066 963 1067 964 if (mtd->writesize > 512) 1068 - host->send_cmd(host, NAND_CMD_READSTART, true); 965 + host->devtype_data->send_cmd(host, 966 + NAND_CMD_READSTART, true); 1069 967 1070 - host->send_page(mtd, NFC_OUTPUT); 968 + host->devtype_data->send_page(mtd, NFC_OUTPUT); 1071 969 1072 - memcpy(host->data_buf, host->main_area0, mtd->writesize); 970 + memcpy_fromio(host->data_buf, host->main_area0, mtd->writesize); 1073 971 copy_spare(mtd, true); 1074 972 break; 1075 973 ··· 1081 977 1082 978 host->buf_start = column; 1083 979 1084 - 
host->send_cmd(host, command, false); 980 + host->devtype_data->send_cmd(host, command, false); 1085 981 mxc_do_addr_cycle(mtd, column, page_addr); 1086 982 break; 1087 983 1088 984 case NAND_CMD_PAGEPROG: 1089 - memcpy(host->main_area0, host->data_buf, mtd->writesize); 985 + memcpy_toio(host->main_area0, host->data_buf, mtd->writesize); 1090 986 copy_spare(mtd, false); 1091 - host->send_page(mtd, NFC_INPUT); 1092 - host->send_cmd(host, command, true); 987 + host->devtype_data->send_page(mtd, NFC_INPUT); 988 + host->devtype_data->send_cmd(host, command, true); 1093 989 mxc_do_addr_cycle(mtd, column, page_addr); 1094 990 break; 1095 991 1096 992 case NAND_CMD_READID: 1097 - host->send_cmd(host, command, true); 993 + host->devtype_data->send_cmd(host, command, true); 1098 994 mxc_do_addr_cycle(mtd, column, page_addr); 1099 - host->send_read_id(host); 995 + host->devtype_data->send_read_id(host); 1100 996 host->buf_start = column; 1101 997 break; 1102 998 1103 999 case NAND_CMD_ERASE1: 1104 1000 case NAND_CMD_ERASE2: 1105 - host->send_cmd(host, command, false); 1001 + host->devtype_data->send_cmd(host, command, false); 1106 1002 mxc_do_addr_cycle(mtd, column, page_addr); 1107 1003 1108 1004 break; ··· 1136 1032 .pattern = mirror_pattern, 1137 1033 }; 1138 1034 1035 + /* v1 + irqpending_quirk: i.MX21 */ 1036 + static const struct mxc_nand_devtype_data imx21_nand_devtype_data = { 1037 + .preset = preset_v1, 1038 + .send_cmd = send_cmd_v1_v2, 1039 + .send_addr = send_addr_v1_v2, 1040 + .send_page = send_page_v1, 1041 + .send_read_id = send_read_id_v1_v2, 1042 + .get_dev_status = get_dev_status_v1_v2, 1043 + .check_int = check_int_v1_v2, 1044 + .irq_control = irq_control_v1_v2, 1045 + .get_ecc_status = get_ecc_status_v1, 1046 + .ecclayout_512 = &nandv1_hw_eccoob_smallpage, 1047 + .ecclayout_2k = &nandv1_hw_eccoob_largepage, 1048 + .ecclayout_4k = &nandv1_hw_eccoob_smallpage, /* XXX: needs fix */ 1049 + .select_chip = mxc_nand_select_chip_v1_v3, 1050 + .correct_data = 
mxc_nand_correct_data_v1, 1051 + .irqpending_quirk = 1, 1052 + .needs_ip = 0, 1053 + .regs_offset = 0xe00, 1054 + .spare0_offset = 0x800, 1055 + .spare_len = 16, 1056 + .eccbytes = 3, 1057 + .eccsize = 1, 1058 + }; 1059 + 1060 + /* v1 + !irqpending_quirk: i.MX27, i.MX31 */ 1061 + static const struct mxc_nand_devtype_data imx27_nand_devtype_data = { 1062 + .preset = preset_v1, 1063 + .send_cmd = send_cmd_v1_v2, 1064 + .send_addr = send_addr_v1_v2, 1065 + .send_page = send_page_v1, 1066 + .send_read_id = send_read_id_v1_v2, 1067 + .get_dev_status = get_dev_status_v1_v2, 1068 + .check_int = check_int_v1_v2, 1069 + .irq_control = irq_control_v1_v2, 1070 + .get_ecc_status = get_ecc_status_v1, 1071 + .ecclayout_512 = &nandv1_hw_eccoob_smallpage, 1072 + .ecclayout_2k = &nandv1_hw_eccoob_largepage, 1073 + .ecclayout_4k = &nandv1_hw_eccoob_smallpage, /* XXX: needs fix */ 1074 + .select_chip = mxc_nand_select_chip_v1_v3, 1075 + .correct_data = mxc_nand_correct_data_v1, 1076 + .irqpending_quirk = 0, 1077 + .needs_ip = 0, 1078 + .regs_offset = 0xe00, 1079 + .spare0_offset = 0x800, 1080 + .axi_offset = 0, 1081 + .spare_len = 16, 1082 + .eccbytes = 3, 1083 + .eccsize = 1, 1084 + }; 1085 + 1086 + /* v21: i.MX25, i.MX35 */ 1087 + static const struct mxc_nand_devtype_data imx25_nand_devtype_data = { 1088 + .preset = preset_v2, 1089 + .send_cmd = send_cmd_v1_v2, 1090 + .send_addr = send_addr_v1_v2, 1091 + .send_page = send_page_v2, 1092 + .send_read_id = send_read_id_v1_v2, 1093 + .get_dev_status = get_dev_status_v1_v2, 1094 + .check_int = check_int_v1_v2, 1095 + .irq_control = irq_control_v1_v2, 1096 + .get_ecc_status = get_ecc_status_v2, 1097 + .ecclayout_512 = &nandv2_hw_eccoob_smallpage, 1098 + .ecclayout_2k = &nandv2_hw_eccoob_largepage, 1099 + .ecclayout_4k = &nandv2_hw_eccoob_4k, 1100 + .select_chip = mxc_nand_select_chip_v2, 1101 + .correct_data = mxc_nand_correct_data_v2_v3, 1102 + .irqpending_quirk = 0, 1103 + .needs_ip = 0, 1104 + .regs_offset = 0x1e00, 1105 + 
.spare0_offset = 0x1000, 1106 + .axi_offset = 0, 1107 + .spare_len = 64, 1108 + .eccbytes = 9, 1109 + .eccsize = 0, 1110 + }; 1111 + 1112 + /* v3: i.MX51, i.MX53 */ 1113 + static const struct mxc_nand_devtype_data imx51_nand_devtype_data = { 1114 + .preset = preset_v3, 1115 + .send_cmd = send_cmd_v3, 1116 + .send_addr = send_addr_v3, 1117 + .send_page = send_page_v3, 1118 + .send_read_id = send_read_id_v3, 1119 + .get_dev_status = get_dev_status_v3, 1120 + .check_int = check_int_v3, 1121 + .irq_control = irq_control_v3, 1122 + .get_ecc_status = get_ecc_status_v3, 1123 + .ecclayout_512 = &nandv2_hw_eccoob_smallpage, 1124 + .ecclayout_2k = &nandv2_hw_eccoob_largepage, 1125 + .ecclayout_4k = &nandv2_hw_eccoob_smallpage, /* XXX: needs fix */ 1126 + .select_chip = mxc_nand_select_chip_v1_v3, 1127 + .correct_data = mxc_nand_correct_data_v2_v3, 1128 + .irqpending_quirk = 0, 1129 + .needs_ip = 1, 1130 + .regs_offset = 0, 1131 + .spare0_offset = 0x1000, 1132 + .axi_offset = 0x1e00, 1133 + .spare_len = 64, 1134 + .eccbytes = 0, 1135 + .eccsize = 0, 1136 + }; 1137 + 1138 + #ifdef CONFIG_OF_MTD 1139 + static const struct of_device_id mxcnd_dt_ids[] = { 1140 + { 1141 + .compatible = "fsl,imx21-nand", 1142 + .data = &imx21_nand_devtype_data, 1143 + }, { 1144 + .compatible = "fsl,imx27-nand", 1145 + .data = &imx27_nand_devtype_data, 1146 + }, { 1147 + .compatible = "fsl,imx25-nand", 1148 + .data = &imx25_nand_devtype_data, 1149 + }, { 1150 + .compatible = "fsl,imx51-nand", 1151 + .data = &imx51_nand_devtype_data, 1152 + }, 1153 + { /* sentinel */ } 1154 + }; 1155 + 1156 + static int __init mxcnd_probe_dt(struct mxc_nand_host *host) 1157 + { 1158 + struct device_node *np = host->dev->of_node; 1159 + struct mxc_nand_platform_data *pdata = &host->pdata; 1160 + const struct of_device_id *of_id = 1161 + of_match_device(mxcnd_dt_ids, host->dev); 1162 + int buswidth; 1163 + 1164 + if (!np) 1165 + return 1; 1166 + 1167 + if (of_get_nand_ecc_mode(np) >= 0) 1168 + pdata->hw_ecc = 1; 1169 + 
1170 + pdata->flash_bbt = of_get_nand_on_flash_bbt(np); 1171 + 1172 + buswidth = of_get_nand_bus_width(np); 1173 + if (buswidth < 0) 1174 + return buswidth; 1175 + 1176 + pdata->width = buswidth / 8; 1177 + 1178 + host->devtype_data = of_id->data; 1179 + 1180 + return 0; 1181 + } 1182 + #else 1183 + static int __init mxcnd_probe_dt(struct mxc_nand_host *host) 1184 + { 1185 + return 1; 1186 + } 1187 + #endif 1188 + 1189 + static int __init mxcnd_probe_pdata(struct mxc_nand_host *host) 1190 + { 1191 + struct mxc_nand_platform_data *pdata = host->dev->platform_data; 1192 + 1193 + if (!pdata) 1194 + return -ENODEV; 1195 + 1196 + host->pdata = *pdata; 1197 + 1198 + if (nfc_is_v1()) { 1199 + if (cpu_is_mx21()) 1200 + host->devtype_data = &imx21_nand_devtype_data; 1201 + else 1202 + host->devtype_data = &imx27_nand_devtype_data; 1203 + } else if (nfc_is_v21()) { 1204 + host->devtype_data = &imx25_nand_devtype_data; 1205 + } else if (nfc_is_v3_2()) { 1206 + host->devtype_data = &imx51_nand_devtype_data; 1207 + } else 1208 + BUG(); 1209 + 1210 + return 0; 1211 + } 1212 + 1139 1213 static int __init mxcnd_probe(struct platform_device *pdev) 1140 1214 { 1141 1215 struct nand_chip *this; 1142 1216 struct mtd_info *mtd; 1143 - struct mxc_nand_platform_data *pdata = pdev->dev.platform_data; 1144 1217 struct mxc_nand_host *host; 1145 1218 struct resource *res; 1146 1219 int err = 0; 1147 - struct nand_ecclayout *oob_smallpage, *oob_largepage; 1148 1220 1149 1221 /* Allocate memory for MTD device structure and private data */ 1150 1222 host = kzalloc(sizeof(struct mxc_nand_host) + NAND_MAX_PAGESIZE + ··· 1345 1065 this->priv = host; 1346 1066 this->dev_ready = mxc_nand_dev_ready; 1347 1067 this->cmdfunc = mxc_nand_command; 1348 - this->select_chip = mxc_nand_select_chip; 1349 1068 this->read_byte = mxc_nand_read_byte; 1350 1069 this->read_word = mxc_nand_read_word; 1351 1070 this->write_buf = mxc_nand_write_buf; ··· 1374 1095 1375 1096 host->main_area0 = host->base; 1376 1097 1377 
- if (nfc_is_v1() || nfc_is_v21()) { 1378 - host->preset = preset_v1_v2; 1379 - host->send_cmd = send_cmd_v1_v2; 1380 - host->send_addr = send_addr_v1_v2; 1381 - host->send_page = send_page_v1_v2; 1382 - host->send_read_id = send_read_id_v1_v2; 1383 - host->get_dev_status = get_dev_status_v1_v2; 1384 - host->check_int = check_int_v1_v2; 1385 - if (cpu_is_mx21()) 1386 - host->irq_control = irq_control_mx21; 1387 - else 1388 - host->irq_control = irq_control_v1_v2; 1389 - } 1098 + err = mxcnd_probe_dt(host); 1099 + if (err > 0) 1100 + err = mxcnd_probe_pdata(host); 1101 + if (err < 0) 1102 + goto eirq; 1390 1103 1391 - if (nfc_is_v21()) { 1392 - host->regs = host->base + 0x1e00; 1393 - host->spare0 = host->base + 0x1000; 1394 - host->spare_len = 64; 1395 - oob_smallpage = &nandv2_hw_eccoob_smallpage; 1396 - oob_largepage = &nandv2_hw_eccoob_largepage; 1397 - this->ecc.bytes = 9; 1398 - } else if (nfc_is_v1()) { 1399 - host->regs = host->base + 0xe00; 1400 - host->spare0 = host->base + 0x800; 1401 - host->spare_len = 16; 1402 - oob_smallpage = &nandv1_hw_eccoob_smallpage; 1403 - oob_largepage = &nandv1_hw_eccoob_largepage; 1404 - this->ecc.bytes = 3; 1405 - host->eccsize = 1; 1406 - } else if (nfc_is_v3_2()) { 1104 + if (host->devtype_data->regs_offset) 1105 + host->regs = host->base + host->devtype_data->regs_offset; 1106 + host->spare0 = host->base + host->devtype_data->spare0_offset; 1107 + if (host->devtype_data->axi_offset) 1108 + host->regs_axi = host->base + host->devtype_data->axi_offset; 1109 + 1110 + this->ecc.bytes = host->devtype_data->eccbytes; 1111 + host->eccsize = host->devtype_data->eccsize; 1112 + 1113 + this->select_chip = host->devtype_data->select_chip; 1114 + this->ecc.size = 512; 1115 + this->ecc.layout = host->devtype_data->ecclayout_512; 1116 + 1117 + if (host->devtype_data->needs_ip) { 1407 1118 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1408 1119 if (!res) { 1409 1120 err = -ENODEV; ··· 1404 1135 err = -ENOMEM; 1405 1136 goto 
eirq; 1406 1137 } 1407 - host->regs_axi = host->base + 0x1e00; 1408 - host->spare0 = host->base + 0x1000; 1409 - host->spare_len = 64; 1410 - host->preset = preset_v3; 1411 - host->send_cmd = send_cmd_v3; 1412 - host->send_addr = send_addr_v3; 1413 - host->send_page = send_page_v3; 1414 - host->send_read_id = send_read_id_v3; 1415 - host->check_int = check_int_v3; 1416 - host->get_dev_status = get_dev_status_v3; 1417 - host->irq_control = irq_control_v3; 1418 - oob_smallpage = &nandv2_hw_eccoob_smallpage; 1419 - oob_largepage = &nandv2_hw_eccoob_largepage; 1420 - } else 1421 - BUG(); 1138 + } 1422 1139 1423 - this->ecc.size = 512; 1424 - this->ecc.layout = oob_smallpage; 1425 - 1426 - if (pdata->hw_ecc) { 1140 + if (host->pdata.hw_ecc) { 1427 1141 this->ecc.calculate = mxc_nand_calculate_ecc; 1428 1142 this->ecc.hwctl = mxc_nand_enable_hwecc; 1429 - if (nfc_is_v1()) 1430 - this->ecc.correct = mxc_nand_correct_data_v1; 1431 - else 1432 - this->ecc.correct = mxc_nand_correct_data_v2_v3; 1143 + this->ecc.correct = host->devtype_data->correct_data; 1433 1144 this->ecc.mode = NAND_ECC_HW; 1434 1145 } else { 1435 1146 this->ecc.mode = NAND_ECC_SOFT; 1436 1147 } 1437 1148 1438 - /* NAND bus width determines access funtions used by upper layer */ 1439 - if (pdata->width == 2) 1149 + /* NAND bus width determines access functions used by upper layer */ 1150 + if (host->pdata.width == 2) 1440 1151 this->options |= NAND_BUSWIDTH_16; 1441 1152 1442 - if (pdata->flash_bbt) { 1153 + if (host->pdata.flash_bbt) { 1443 1154 this->bbt_td = &bbt_main_descr; 1444 1155 this->bbt_md = &bbt_mirror_descr; 1445 1156 /* update flash based bbt */ ··· 1431 1182 host->irq = platform_get_irq(pdev, 0); 1432 1183 1433 1184 /* 1434 - * mask the interrupt. For i.MX21 explicitely call 1435 - * irq_control_v1_v2 to use the mask bit. We can't call 1436 - * disable_irq_nosync() for an interrupt we do not own yet. 
1185 + * Use host->devtype_data->irq_control() here instead of irq_control() 1186 + * because we must not disable_irq_nosync without having requested the 1187 + * irq. 1437 1188 */ 1438 - if (cpu_is_mx21()) 1439 - irq_control_v1_v2(host, 0); 1440 - else 1441 - host->irq_control(host, 0); 1189 + host->devtype_data->irq_control(host, 0); 1442 1190 1443 1191 err = request_irq(host->irq, mxc_nfc_irq, IRQF_DISABLED, DRIVER_NAME, host); 1444 1192 if (err) 1445 1193 goto eirq; 1446 1194 1447 - host->irq_control(host, 0); 1448 - 1449 1195 /* 1450 - * Now that the interrupt is disabled make sure the interrupt 1451 - * mask bit is cleared on i.MX21. Otherwise we can't read 1452 - * the interrupt status bit on this machine. 1196 + * Now that we "own" the interrupt make sure the interrupt mask bit is 1197 + * cleared on i.MX21. Otherwise we can't read the interrupt status bit 1198 + * on this machine. 1453 1199 */ 1454 - if (cpu_is_mx21()) 1455 - irq_control_v1_v2(host, 1); 1200 + if (host->devtype_data->irqpending_quirk) { 1201 + disable_irq_nosync(host->irq); 1202 + host->devtype_data->irq_control(host, 1); 1203 + } 1456 1204 1457 1205 /* first scan to find the device and get the page size */ 1458 1206 if (nand_scan_ident(mtd, nfc_is_v21() ? 
4 : 1, NULL)) { ··· 1458 1212 } 1459 1213 1460 1214 /* Call preset again, with correct writesize this time */ 1461 - host->preset(mtd); 1215 + host->devtype_data->preset(mtd); 1462 1216 1463 1217 if (mtd->writesize == 2048) 1464 - this->ecc.layout = oob_largepage; 1465 - if (nfc_is_v21() && mtd->writesize == 4096) 1466 - this->ecc.layout = &nandv2_hw_eccoob_4k; 1467 - 1468 - /* second phase scan */ 1469 - if (nand_scan_tail(mtd)) { 1470 - err = -ENXIO; 1471 - goto escan; 1472 - } 1218 + this->ecc.layout = host->devtype_data->ecclayout_2k; 1219 + else if (mtd->writesize == 4096) 1220 + this->ecc.layout = host->devtype_data->ecclayout_4k; 1473 1221 1474 1222 if (this->ecc.mode == NAND_ECC_HW) { 1475 1223 if (nfc_is_v1()) ··· 1472 1232 this->ecc.strength = (host->eccsize == 4) ? 4 : 8; 1473 1233 } 1474 1234 1235 + /* second phase scan */ 1236 + if (nand_scan_tail(mtd)) { 1237 + err = -ENXIO; 1238 + goto escan; 1239 + } 1240 + 1475 1241 /* Register the partitions */ 1476 - mtd_device_parse_register(mtd, part_probes, NULL, pdata->parts, 1477 - pdata->nr_parts); 1242 + mtd_device_parse_register(mtd, part_probes, 1243 + &(struct mtd_part_parser_data){ 1244 + .of_node = pdev->dev.of_node, 1245 + }, 1246 + host->pdata.parts, 1247 + host->pdata.nr_parts); 1478 1248 1479 1249 platform_set_drvdata(pdev, host); 1480 1250 ··· 1525 1275 static struct platform_driver mxcnd_driver = { 1526 1276 .driver = { 1527 1277 .name = DRIVER_NAME, 1278 + .owner = THIS_MODULE, 1279 + .of_match_table = of_match_ptr(mxcnd_dt_ids), 1528 1280 }, 1529 1281 .remove = __devexit_p(mxcnd_remove), 1530 1282 };
+125 -108
drivers/mtd/nand/nand_base.c
··· 1066 1066 * @mtd: mtd info structure 1067 1067 * @chip: nand chip info structure 1068 1068 * @buf: buffer to store read data 1069 + * @oob_required: caller requires OOB data read to chip->oob_poi 1069 1070 * @page: page number to read 1070 1071 * 1071 1072 * Not for syndrome calculating ECC controllers, which use a special oob layout. 1072 1073 */ 1073 1074 static int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 1074 - uint8_t *buf, int page) 1075 + uint8_t *buf, int oob_required, int page) 1075 1076 { 1076 1077 chip->read_buf(mtd, buf, mtd->writesize); 1077 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 1078 + if (oob_required) 1079 + chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 1078 1080 return 0; 1079 1081 } 1080 1082 ··· 1085 1083 * @mtd: mtd info structure 1086 1084 * @chip: nand chip info structure 1087 1085 * @buf: buffer to store read data 1086 + * @oob_required: caller requires OOB data read to chip->oob_poi 1088 1087 * @page: page number to read 1089 1088 * 1090 1089 * We need a special oob layout and handling even when OOB isn't used. 
1091 1090 */ 1092 1091 static int nand_read_page_raw_syndrome(struct mtd_info *mtd, 1093 - struct nand_chip *chip, 1094 - uint8_t *buf, int page) 1092 + struct nand_chip *chip, uint8_t *buf, 1093 + int oob_required, int page) 1095 1094 { 1096 1095 int eccsize = chip->ecc.size; 1097 1096 int eccbytes = chip->ecc.bytes; ··· 1129 1126 * @mtd: mtd info structure 1130 1127 * @chip: nand chip info structure 1131 1128 * @buf: buffer to store read data 1129 + * @oob_required: caller requires OOB data read to chip->oob_poi 1132 1130 * @page: page number to read 1133 1131 */ 1134 1132 static int nand_read_page_swecc(struct mtd_info *mtd, struct nand_chip *chip, 1135 - uint8_t *buf, int page) 1133 + uint8_t *buf, int oob_required, int page) 1136 1134 { 1137 1135 int i, eccsize = chip->ecc.size; 1138 1136 int eccbytes = chip->ecc.bytes; ··· 1142 1138 uint8_t *ecc_calc = chip->buffers->ecccalc; 1143 1139 uint8_t *ecc_code = chip->buffers->ecccode; 1144 1140 uint32_t *eccpos = chip->ecc.layout->eccpos; 1141 + unsigned int max_bitflips = 0; 1145 1142 1146 - chip->ecc.read_page_raw(mtd, chip, buf, page); 1143 + chip->ecc.read_page_raw(mtd, chip, buf, 1, page); 1147 1144 1148 1145 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) 1149 1146 chip->ecc.calculate(mtd, p, &ecc_calc[i]); ··· 1159 1154 int stat; 1160 1155 1161 1156 stat = chip->ecc.correct(mtd, p, &ecc_code[i], &ecc_calc[i]); 1162 - if (stat < 0) 1157 + if (stat < 0) { 1163 1158 mtd->ecc_stats.failed++; 1164 - else 1159 + } else { 1165 1160 mtd->ecc_stats.corrected += stat; 1161 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 1162 + } 1166 1163 } 1167 - return 0; 1164 + return max_bitflips; 1168 1165 } 1169 1166 1170 1167 /** ··· 1187 1180 int datafrag_len, eccfrag_len, aligned_len, aligned_pos; 1188 1181 int busw = (chip->options & NAND_BUSWIDTH_16) ? 
2 : 1; 1189 1182 int index = 0; 1183 + unsigned int max_bitflips = 0; 1190 1184 1191 1185 /* Column address within the page aligned to ECC size (256bytes) */ 1192 1186 start_step = data_offs / chip->ecc.size; ··· 1252 1244 1253 1245 stat = chip->ecc.correct(mtd, p, 1254 1246 &chip->buffers->ecccode[i], &chip->buffers->ecccalc[i]); 1255 - if (stat < 0) 1247 + if (stat < 0) { 1256 1248 mtd->ecc_stats.failed++; 1257 - else 1249 + } else { 1258 1250 mtd->ecc_stats.corrected += stat; 1251 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 1252 + } 1259 1253 } 1260 - return 0; 1254 + return max_bitflips; 1261 1255 } 1262 1256 1263 1257 /** ··· 1267 1257 * @mtd: mtd info structure 1268 1258 * @chip: nand chip info structure 1269 1259 * @buf: buffer to store read data 1260 + * @oob_required: caller requires OOB data read to chip->oob_poi 1270 1261 * @page: page number to read 1271 1262 * 1272 1263 * Not for syndrome calculating ECC controllers which need a special oob layout. 1273 1264 */ 1274 1265 static int nand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, 1275 - uint8_t *buf, int page) 1266 + uint8_t *buf, int oob_required, int page) 1276 1267 { 1277 1268 int i, eccsize = chip->ecc.size; 1278 1269 int eccbytes = chip->ecc.bytes; ··· 1282 1271 uint8_t *ecc_calc = chip->buffers->ecccalc; 1283 1272 uint8_t *ecc_code = chip->buffers->ecccode; 1284 1273 uint32_t *eccpos = chip->ecc.layout->eccpos; 1274 + unsigned int max_bitflips = 0; 1285 1275 1286 1276 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 1287 1277 chip->ecc.hwctl(mtd, NAND_ECC_READ); ··· 1301 1289 int stat; 1302 1290 1303 1291 stat = chip->ecc.correct(mtd, p, &ecc_code[i], &ecc_calc[i]); 1304 - if (stat < 0) 1292 + if (stat < 0) { 1305 1293 mtd->ecc_stats.failed++; 1306 - else 1294 + } else { 1307 1295 mtd->ecc_stats.corrected += stat; 1296 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 1297 + } 1308 1298 } 1309 - return 0; 1299 + return max_bitflips; 1310 
1300 } 1311 1301 1312 1302 /** ··· 1316 1302 * @mtd: mtd info structure 1317 1303 * @chip: nand chip info structure 1318 1304 * @buf: buffer to store read data 1305 + * @oob_required: caller requires OOB data read to chip->oob_poi 1319 1306 * @page: page number to read 1320 1307 * 1321 1308 * Hardware ECC for large page chips, require OOB to be read first. For this ··· 1326 1311 * the data area, by overwriting the NAND manufacturer bad block markings. 1327 1312 */ 1328 1313 static int nand_read_page_hwecc_oob_first(struct mtd_info *mtd, 1329 - struct nand_chip *chip, uint8_t *buf, int page) 1314 + struct nand_chip *chip, uint8_t *buf, int oob_required, int page) 1330 1315 { 1331 1316 int i, eccsize = chip->ecc.size; 1332 1317 int eccbytes = chip->ecc.bytes; ··· 1335 1320 uint8_t *ecc_code = chip->buffers->ecccode; 1336 1321 uint32_t *eccpos = chip->ecc.layout->eccpos; 1337 1322 uint8_t *ecc_calc = chip->buffers->ecccalc; 1323 + unsigned int max_bitflips = 0; 1338 1324 1339 1325 /* Read the OOB area first */ 1340 1326 chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); ··· 1353 1337 chip->ecc.calculate(mtd, p, &ecc_calc[i]); 1354 1338 1355 1339 stat = chip->ecc.correct(mtd, p, &ecc_code[i], NULL); 1356 - if (stat < 0) 1340 + if (stat < 0) { 1357 1341 mtd->ecc_stats.failed++; 1358 - else 1342 + } else { 1359 1343 mtd->ecc_stats.corrected += stat; 1344 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 1345 + } 1360 1346 } 1361 - return 0; 1347 + return max_bitflips; 1362 1348 } 1363 1349 1364 1350 /** ··· 1368 1350 * @mtd: mtd info structure 1369 1351 * @chip: nand chip info structure 1370 1352 * @buf: buffer to store read data 1353 + * @oob_required: caller requires OOB data read to chip->oob_poi 1371 1354 * @page: page number to read 1372 1355 * 1373 1356 * The hw generator calculates the error syndrome automatically. Therefore we 1374 1357 * need a special oob layout and handling. 
1375 1358 */ 1376 1359 static int nand_read_page_syndrome(struct mtd_info *mtd, struct nand_chip *chip, 1377 - uint8_t *buf, int page) 1360 + uint8_t *buf, int oob_required, int page) 1378 1361 { 1379 1362 int i, eccsize = chip->ecc.size; 1380 1363 int eccbytes = chip->ecc.bytes; 1381 1364 int eccsteps = chip->ecc.steps; 1382 1365 uint8_t *p = buf; 1383 1366 uint8_t *oob = chip->oob_poi; 1367 + unsigned int max_bitflips = 0; 1384 1368 1385 1369 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 1386 1370 int stat; ··· 1399 1379 chip->read_buf(mtd, oob, eccbytes); 1400 1380 stat = chip->ecc.correct(mtd, p, oob, NULL); 1401 1381 1402 - if (stat < 0) 1382 + if (stat < 0) { 1403 1383 mtd->ecc_stats.failed++; 1404 - else 1384 + } else { 1405 1385 mtd->ecc_stats.corrected += stat; 1386 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 1387 + } 1406 1388 1407 1389 oob += eccbytes; 1408 1390 ··· 1419 1397 if (i) 1420 1398 chip->read_buf(mtd, oob, i); 1421 1399 1422 - return 0; 1400 + return max_bitflips; 1423 1401 } 1424 1402 1425 1403 /** ··· 1481 1459 static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, 1482 1460 struct mtd_oob_ops *ops) 1483 1461 { 1484 - int chipnr, page, realpage, col, bytes, aligned; 1462 + int chipnr, page, realpage, col, bytes, aligned, oob_required; 1485 1463 struct nand_chip *chip = mtd->priv; 1486 1464 struct mtd_ecc_stats stats; 1487 - int blkcheck = (1 << (chip->phys_erase_shift - chip->page_shift)) - 1; 1488 - int sndcmd = 1; 1489 1465 int ret = 0; 1490 1466 uint32_t readlen = ops->len; 1491 1467 uint32_t oobreadlen = ops->ooblen; ··· 1491 1471 mtd->oobavail : mtd->oobsize; 1492 1472 1493 1473 uint8_t *bufpoi, *oob, *buf; 1474 + unsigned int max_bitflips = 0; 1494 1475 1495 1476 stats = mtd->ecc_stats; 1496 1477 ··· 1505 1484 1506 1485 buf = ops->datbuf; 1507 1486 oob = ops->oobbuf; 1487 + oob_required = oob ? 
1 : 0; 1508 1488 1509 1489 while (1) { 1510 1490 bytes = min(mtd->writesize - col, readlen); ··· 1515 1493 if (realpage != chip->pagebuf || oob) { 1516 1494 bufpoi = aligned ? buf : chip->buffers->databuf; 1517 1495 1518 - if (likely(sndcmd)) { 1519 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page); 1520 - sndcmd = 0; 1521 - } 1496 + chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page); 1522 1497 1523 - /* Now read the page into the buffer */ 1498 + /* 1499 + * Now read the page into the buffer. Absent an error, 1500 + * the read methods return max bitflips per ecc step. 1501 + */ 1524 1502 if (unlikely(ops->mode == MTD_OPS_RAW)) 1525 - ret = chip->ecc.read_page_raw(mtd, chip, 1526 - bufpoi, page); 1503 + ret = chip->ecc.read_page_raw(mtd, chip, bufpoi, 1504 + oob_required, 1505 + page); 1527 1506 else if (!aligned && NAND_SUBPAGE_READ(chip) && !oob) 1528 1507 ret = chip->ecc.read_subpage(mtd, chip, 1529 1508 col, bytes, bufpoi); 1530 1509 else 1531 1510 ret = chip->ecc.read_page(mtd, chip, bufpoi, 1532 - page); 1511 + oob_required, page); 1533 1512 if (ret < 0) { 1534 1513 if (!aligned) 1535 1514 /* Invalidate page cache */ ··· 1538 1515 break; 1539 1516 } 1540 1517 1518 + max_bitflips = max_t(unsigned int, max_bitflips, ret); 1519 + 1541 1520 /* Transfer not aligned data */ 1542 1521 if (!aligned) { 1543 1522 if (!NAND_SUBPAGE_READ(chip) && !oob && 1544 1523 !(mtd->ecc_stats.failed - stats.failed) && 1545 - (ops->mode != MTD_OPS_RAW)) 1524 + (ops->mode != MTD_OPS_RAW)) { 1546 1525 chip->pagebuf = realpage; 1547 - else 1526 + chip->pagebuf_bitflips = ret; 1527 + } else { 1548 1528 /* Invalidate page cache */ 1549 1529 chip->pagebuf = -1; 1530 + } 1550 1531 memcpy(buf, chip->buffers->databuf + col, bytes); 1551 1532 } 1552 1533 1553 1534 buf += bytes; 1554 1535 1555 1536 if (unlikely(oob)) { 1556 - 1557 1537 int toread = min(oobreadlen, max_oobsize); 1558 1538 1559 1539 if (toread) { ··· 1567 1541 } 1568 1542 1569 1543 if (!(chip->options & NAND_NO_READRDY)) { 1570 - /* 
1571 - * Apply delay or wait for ready/busy pin. Do 1572 - * this before the AUTOINCR check, so no 1573 - * problems arise if a chip which does auto 1574 - * increment is marked as NOAUTOINCR by the 1575 - * board driver. 1576 - */ 1544 + /* Apply delay or wait for ready/busy pin */ 1577 1545 if (!chip->dev_ready) 1578 1546 udelay(chip->chip_delay); 1579 1547 else ··· 1576 1556 } else { 1577 1557 memcpy(buf, chip->buffers->databuf + col, bytes); 1578 1558 buf += bytes; 1559 + max_bitflips = max_t(unsigned int, max_bitflips, 1560 + chip->pagebuf_bitflips); 1579 1561 } 1580 1562 1581 1563 readlen -= bytes; ··· 1597 1575 chip->select_chip(mtd, -1); 1598 1576 chip->select_chip(mtd, chipnr); 1599 1577 } 1600 - 1601 - /* 1602 - * Check, if the chip supports auto page increment or if we 1603 - * have hit a block boundary. 1604 - */ 1605 - if (!NAND_CANAUTOINCR(chip) || !(page & blkcheck)) 1606 - sndcmd = 1; 1607 1578 } 1608 1579 1609 1580 ops->retlen = ops->len - (size_t) readlen; 1610 1581 if (oob) 1611 1582 ops->oobretlen = ops->ooblen - oobreadlen; 1612 1583 1613 - if (ret) 1584 + if (ret < 0) 1614 1585 return ret; 1615 1586 1616 1587 if (mtd->ecc_stats.failed - stats.failed) 1617 1588 return -EBADMSG; 1618 1589 1619 - return mtd->ecc_stats.corrected - stats.corrected ? 
-EUCLEAN : 0; 1590 + return max_bitflips; 1620 1591 } 1621 1592 1622 1593 /** ··· 1645 1630 * @mtd: mtd info structure 1646 1631 * @chip: nand chip info structure 1647 1632 * @page: page number to read 1648 - * @sndcmd: flag whether to issue read command or not 1649 1633 */ 1650 1634 static int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, 1651 - int page, int sndcmd) 1635 + int page) 1652 1636 { 1653 - if (sndcmd) { 1654 - chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 1655 - sndcmd = 0; 1656 - } 1637 + chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 1657 1638 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 1658 - return sndcmd; 1639 + return 0; 1659 1640 } 1660 1641 1661 1642 /** ··· 1660 1649 * @mtd: mtd info structure 1661 1650 * @chip: nand chip info structure 1662 1651 * @page: page number to read 1663 - * @sndcmd: flag whether to issue read command or not 1664 1652 */ 1665 1653 static int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip, 1666 - int page, int sndcmd) 1654 + int page) 1667 1655 { 1668 1656 uint8_t *buf = chip->oob_poi; 1669 1657 int length = mtd->oobsize; ··· 1689 1679 if (length > 0) 1690 1680 chip->read_buf(mtd, bufpoi, length); 1691 1681 1692 - return 1; 1682 + return 0; 1693 1683 } 1694 1684 1695 1685 /** ··· 1785 1775 static int nand_do_read_oob(struct mtd_info *mtd, loff_t from, 1786 1776 struct mtd_oob_ops *ops) 1787 1777 { 1788 - int page, realpage, chipnr, sndcmd = 1; 1778 + int page, realpage, chipnr; 1789 1779 struct nand_chip *chip = mtd->priv; 1790 1780 struct mtd_ecc_stats stats; 1791 - int blkcheck = (1 << (chip->phys_erase_shift - chip->page_shift)) - 1; 1792 1781 int readlen = ops->ooblen; 1793 1782 int len; 1794 1783 uint8_t *buf = ops->oobbuf; 1784 + int ret = 0; 1795 1785 1796 1786 pr_debug("%s: from = 0x%08Lx, len = %i\n", 1797 1787 __func__, (unsigned long long)from, readlen); ··· 1827 1817 1828 1818 while (1) { 1829 1819 if (ops->mode == MTD_OPS_RAW) 1830 - sndcmd = 
chip->ecc.read_oob_raw(mtd, chip, page, sndcmd); 1820 + ret = chip->ecc.read_oob_raw(mtd, chip, page); 1831 1821 else 1832 - sndcmd = chip->ecc.read_oob(mtd, chip, page, sndcmd); 1822 + ret = chip->ecc.read_oob(mtd, chip, page); 1823 + 1824 + if (ret < 0) 1825 + break; 1833 1826 1834 1827 len = min(len, readlen); 1835 1828 buf = nand_transfer_oob(chip, buf, ops, len); 1836 1829 1837 1830 if (!(chip->options & NAND_NO_READRDY)) { 1838 - /* 1839 - * Apply delay or wait for ready/busy pin. Do this 1840 - * before the AUTOINCR check, so no problems arise if a 1841 - * chip which does auto increment is marked as 1842 - * NOAUTOINCR by the board driver. 1843 - */ 1831 + /* Apply delay or wait for ready/busy pin */ 1844 1832 if (!chip->dev_ready) 1845 1833 udelay(chip->chip_delay); 1846 1834 else ··· 1859 1851 chip->select_chip(mtd, -1); 1860 1852 chip->select_chip(mtd, chipnr); 1861 1853 } 1862 - 1863 - /* 1864 - * Check, if the chip supports auto page increment or if we 1865 - * have hit a block boundary. 1866 - */ 1867 - if (!NAND_CANAUTOINCR(chip) || !(page & blkcheck)) 1868 - sndcmd = 1; 1869 1854 } 1870 1855 1871 - ops->oobretlen = ops->ooblen; 1856 + ops->oobretlen = ops->ooblen - readlen; 1857 + 1858 + if (ret < 0) 1859 + return ret; 1872 1860 1873 1861 if (mtd->ecc_stats.failed - stats.failed) 1874 1862 return -EBADMSG; ··· 1923 1919 * @mtd: mtd info structure 1924 1920 * @chip: nand chip info structure 1925 1921 * @buf: data buffer 1922 + * @oob_required: must write chip->oob_poi to OOB 1926 1923 * 1927 1924 * Not for syndrome calculating ECC controllers, which use a special oob layout. 
1928 1925 */ 1929 1926 static void nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 1930 - const uint8_t *buf) 1927 + const uint8_t *buf, int oob_required) 1931 1928 { 1932 1929 chip->write_buf(mtd, buf, mtd->writesize); 1933 - chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 1930 + if (oob_required) 1931 + chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 1934 1932 } 1935 1933 1936 1934 /** ··· 1940 1934 * @mtd: mtd info structure 1941 1935 * @chip: nand chip info structure 1942 1936 * @buf: data buffer 1937 + * @oob_required: must write chip->oob_poi to OOB 1943 1938 * 1944 1939 * We need a special oob layout and handling even when ECC isn't checked. 1945 1940 */ 1946 1941 static void nand_write_page_raw_syndrome(struct mtd_info *mtd, 1947 1942 struct nand_chip *chip, 1948 - const uint8_t *buf) 1943 + const uint8_t *buf, int oob_required) 1949 1944 { 1950 1945 int eccsize = chip->ecc.size; 1951 1946 int eccbytes = chip->ecc.bytes; ··· 1980 1973 * @mtd: mtd info structure 1981 1974 * @chip: nand chip info structure 1982 1975 * @buf: data buffer 1976 + * @oob_required: must write chip->oob_poi to OOB 1983 1977 */ 1984 1978 static void nand_write_page_swecc(struct mtd_info *mtd, struct nand_chip *chip, 1985 - const uint8_t *buf) 1979 + const uint8_t *buf, int oob_required) 1986 1980 { 1987 1981 int i, eccsize = chip->ecc.size; 1988 1982 int eccbytes = chip->ecc.bytes; ··· 1999 1991 for (i = 0; i < chip->ecc.total; i++) 2000 1992 chip->oob_poi[eccpos[i]] = ecc_calc[i]; 2001 1993 2002 - chip->ecc.write_page_raw(mtd, chip, buf); 1994 + chip->ecc.write_page_raw(mtd, chip, buf, 1); 2003 1995 } 2004 1996 2005 1997 /** ··· 2007 1999 * @mtd: mtd info structure 2008 2000 * @chip: nand chip info structure 2009 2001 * @buf: data buffer 2002 + * @oob_required: must write chip->oob_poi to OOB 2010 2003 */ 2011 2004 static void nand_write_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, 2012 - const uint8_t *buf) 2005 + const uint8_t *buf, int 
oob_required) 2013 2006 { 2014 2007 int i, eccsize = chip->ecc.size; 2015 2008 int eccbytes = chip->ecc.bytes; ··· 2036 2027 * @mtd: mtd info structure 2037 2028 * @chip: nand chip info structure 2038 2029 * @buf: data buffer 2030 + * @oob_required: must write chip->oob_poi to OOB 2039 2031 * 2040 2032 * The hw generator calculates the error syndrome automatically. Therefore we 2041 2033 * need a special oob layout and handling. 2042 2034 */ 2043 2035 static void nand_write_page_syndrome(struct mtd_info *mtd, 2044 - struct nand_chip *chip, const uint8_t *buf) 2036 + struct nand_chip *chip, 2037 + const uint8_t *buf, int oob_required) 2045 2038 { 2046 2039 int i, eccsize = chip->ecc.size; 2047 2040 int eccbytes = chip->ecc.bytes; ··· 2082 2071 * @mtd: MTD device structure 2083 2072 * @chip: NAND chip descriptor 2084 2073 * @buf: the data to write 2074 + * @oob_required: must write chip->oob_poi to OOB 2085 2075 * @page: page number to write 2086 2076 * @cached: cached programming 2087 2077 * @raw: use _raw version of write_page 2088 2078 */ 2089 2079 static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip, 2090 - const uint8_t *buf, int page, int cached, int raw) 2080 + const uint8_t *buf, int oob_required, int page, 2081 + int cached, int raw) 2091 2082 { 2092 2083 int status; 2093 2084 2094 2085 chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page); 2095 2086 2096 2087 if (unlikely(raw)) 2097 - chip->ecc.write_page_raw(mtd, chip, buf); 2088 + chip->ecc.write_page_raw(mtd, chip, buf, oob_required); 2098 2089 else 2099 - chip->ecc.write_page(mtd, chip, buf); 2090 + chip->ecc.write_page(mtd, chip, buf, oob_required); 2100 2091 2101 2092 /* 2102 2093 * Cached progamming disabled for now. 
Not sure if it's worth the ··· 2131 2118 2132 2119 if (chip->verify_buf(mtd, buf, mtd->writesize)) 2133 2120 return -EIO; 2121 + 2122 + /* Make sure the next page prog is preceded by a status read */ 2123 + chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1); 2134 2124 #endif 2135 2125 return 0; 2136 2126 } ··· 2218 2202 uint8_t *oob = ops->oobbuf; 2219 2203 uint8_t *buf = ops->datbuf; 2220 2204 int ret, subpage; 2205 + int oob_required = oob ? 1 : 0; 2221 2206 2222 2207 ops->retlen = 0; 2223 2208 if (!writelen) ··· 2281 2264 memset(chip->oob_poi, 0xff, mtd->oobsize); 2282 2265 } 2283 2266 2284 - ret = chip->write_page(mtd, chip, wbuf, page, cached, 2285 - (ops->mode == MTD_OPS_RAW)); 2267 + ret = chip->write_page(mtd, chip, wbuf, oob_required, page, 2268 + cached, (ops->mode == MTD_OPS_RAW)); 2286 2269 if (ret) 2287 2270 break; 2288 2271 ··· 2915 2898 *busw = NAND_BUSWIDTH_16; 2916 2899 2917 2900 chip->options &= ~NAND_CHIPOPTIONS_MSK; 2918 - chip->options |= (NAND_NO_READRDY | 2919 - NAND_NO_AUTOINCR) & NAND_CHIPOPTIONS_MSK; 2901 + chip->options |= NAND_NO_READRDY & NAND_CHIPOPTIONS_MSK; 2920 2902 2921 2903 pr_info("ONFI flash detected\n"); 2922 2904 return 1; ··· 3092 3076 chip->options &= ~NAND_SAMSUNG_LP_OPTIONS; 3093 3077 ident_done: 3094 3078 3095 - /* 3096 - * Set chip as a default. Board drivers can override it, if necessary. 3097 - */ 3098 - chip->options |= NAND_NO_AUTOINCR; 3099 - 3100 3079 /* Try to identify manufacturer */ 3101 3080 for (maf_idx = 0; nand_manuf_ids[maf_idx].id != 0x0; maf_idx++) { 3102 3081 if (nand_manuf_ids[maf_idx].id == *maf_id) ··· 3165 3154 if (mtd->writesize > 512 && chip->cmdfunc == nand_command) 3166 3155 chip->cmdfunc = nand_command_lp; 3167 3156 3168 - pr_info("NAND device: Manufacturer ID:" 3169 - " 0x%02x, Chip ID: 0x%02x (%s %s)\n", *maf_id, *dev_id, 3170 - nand_manuf_ids[maf_idx].name, 3171 - chip->onfi_version ? 
chip->onfi_params.model : type->name); 3157 + pr_info("NAND device: Manufacturer ID: 0x%02x, Chip ID: 0x%02x (%s %s)," 3158 + " page size: %d, OOB size: %d\n", 3159 + *maf_id, *dev_id, nand_manuf_ids[maf_idx].name, 3160 + chip->onfi_version ? chip->onfi_params.model : type->name, 3161 + mtd->writesize, mtd->oobsize); 3172 3162 3173 3163 return type; 3174 3164 } ··· 3341 3329 if (!chip->ecc.write_oob) 3342 3330 chip->ecc.write_oob = nand_write_oob_syndrome; 3343 3331 3344 - if (mtd->writesize >= chip->ecc.size) 3332 + if (mtd->writesize >= chip->ecc.size) { 3333 + if (!chip->ecc.strength) { 3334 + pr_warn("Driver must set ecc.strength when using hardware ECC\n"); 3335 + BUG(); 3336 + } 3345 3337 break; 3338 + } 3346 3339 pr_warn("%d byte HW ECC not possible on " 3347 3340 "%d byte page size, fallback to SW ECC\n", 3348 3341 chip->ecc.size, mtd->writesize); ··· 3402 3385 BUG(); 3403 3386 } 3404 3387 chip->ecc.strength = 3405 - chip->ecc.bytes*8 / fls(8*chip->ecc.size); 3388 + chip->ecc.bytes * 8 / fls(8 * chip->ecc.size); 3406 3389 break; 3407 3390 3408 3391 case NAND_ECC_NONE: ··· 3500 3483 3501 3484 /* propagate ecc info to mtd_info */ 3502 3485 mtd->ecclayout = chip->ecc.layout; 3503 - mtd->ecc_strength = chip->ecc.strength * chip->ecc.steps; 3486 + mtd->ecc_strength = chip->ecc.strength; 3504 3487 3505 3488 /* Check, if we should skip the bad block table scan */ 3506 3489 if (chip->options & NAND_SKIP_BBTSCAN)
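The nand_base.c rework above changes the read-path contract: instead of collapsing any corrected bitflip into -EUCLEAN, the page-read methods now return the maximum number of bitflips corrected in any single ecc step, and the comparison against bitflip_threshold happens in the caller. A minimal userspace sketch of that convention (names are illustrative, not the kernel's; 117 is the Linux value of EUCLEAN):

```c
/*
 * Sketch (not kernel code) of the new return convention: each ecc
 * step reports how many bitflips were corrected, the read path keeps
 * the per-step maximum, and the caller turns that into -EUCLEAN only
 * once it reaches bitflip_threshold.
 */
#define EUCLEAN 117	/* value used by Linux for EUCLEAN */

unsigned int max_bitflips(const unsigned int *per_step, int steps)
{
	unsigned int max = 0;
	int i;

	for (i = 0; i < steps; i++)
		if (per_step[i] > max)
			max = per_step[i];
	return max;
}

/* 0 for a clean read, -EUCLEAN once any ecc step hits the threshold */
int read_status(const unsigned int *per_step, int steps,
		unsigned int bitflip_threshold)
{
	if (max_bitflips(per_step, steps) >= bitflip_threshold)
		return -EUCLEAN;
	return 0;
}
```

With the default bitflip_threshold equal to ecc_strength, a device correcting fewer bitflips than it is capable of no longer triggers scrubbing in UBI.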
+1
drivers/mtd/nand/nand_bbt.c
··· 324 324 325 325 buf += mtd->oobsize + mtd->writesize; 326 326 len -= mtd->writesize; 327 + offs += mtd->writesize; 327 328 } 328 329 return 0; 329 330 }
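The one-line nand_bbt.c fix above adds the missing `offs += mtd->writesize`, so a bad-block-table read spanning several pages actually advances through flash instead of re-reading the starting page. An illustrative loop (not kernel code) showing the shape of the fixed loop:

```c
/*
 * Without the final "offs += writesize" the loop below would return
 * the starting offset no matter how many pages were consumed.
 * Returns the flash offset of the last page read.
 */
int bbt_last_page_offs(int offs, int len, int writesize)
{
	int last = offs;

	while (len > 0) {
		last = offs;		/* this iteration reads the page at 'offs' */
		len -= writesize;
		offs += writesize;	/* the advance added by the fix */
	}
	return last;
}
```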
+2 -4
drivers/mtd/nand/nand_ids.c
··· 70 70 * These are the new chips with large page size. The pagesize and the 71 71 * erasesize is determined from the extended id bytes 72 72 */ 73 - #define LP_OPTIONS (NAND_SAMSUNG_LP_OPTIONS | NAND_NO_READRDY | NAND_NO_AUTOINCR) 73 + #define LP_OPTIONS (NAND_SAMSUNG_LP_OPTIONS | NAND_NO_READRDY) 74 74 #define LP_OPTIONS16 (LP_OPTIONS | NAND_BUSWIDTH_16) 75 75 76 76 /* 512 Megabit */ ··· 157 157 * writes possible, but not implemented now 158 158 */ 159 159 {"AND 128MiB 3,3V 8-bit", 0x01, 2048, 128, 0x4000, 160 - NAND_IS_AND | NAND_NO_AUTOINCR |NAND_NO_READRDY | NAND_4PAGE_ARRAY | 161 - BBT_AUTO_REFRESH 162 - }, 160 + NAND_IS_AND | NAND_NO_READRDY | NAND_4PAGE_ARRAY | BBT_AUTO_REFRESH}, 163 161 164 162 {NULL,} 165 163 };
+3 -25
drivers/mtd/nand/nandsim.c
··· 268 268 #define OPT_PAGE512 0x00000002 /* 512-byte page chips */ 269 269 #define OPT_PAGE2048 0x00000008 /* 2048-byte page chips */ 270 270 #define OPT_SMARTMEDIA 0x00000010 /* SmartMedia technology chips */ 271 - #define OPT_AUTOINCR 0x00000020 /* page number auto incrementation is possible */ 272 271 #define OPT_PAGE512_8BIT 0x00000040 /* 512-byte page chips with 8-bit bus width */ 273 272 #define OPT_PAGE4096 0x00000080 /* 4096-byte page chips */ 274 273 #define OPT_LARGEPAGE (OPT_PAGE2048 | OPT_PAGE4096) /* 2048 & 4096-byte page chips */ ··· 593 594 ns->options |= OPT_PAGE256; 594 595 } 595 596 else if (ns->geom.pgsz == 512) { 596 - ns->options |= (OPT_PAGE512 | OPT_AUTOINCR); 597 + ns->options |= OPT_PAGE512; 597 598 if (ns->busw == 8) 598 599 ns->options |= OPT_PAGE512_8BIT; 599 600 } else if (ns->geom.pgsz == 2048) { ··· 662 663 for (i = 0; nand_flash_ids[i].name != NULL; i++) { 663 664 if (second_id_byte != nand_flash_ids[i].id) 664 665 continue; 665 - if (!(nand_flash_ids[i].options & NAND_NO_AUTOINCR)) 666 - ns->options |= OPT_AUTOINCR; 667 666 } 668 667 669 668 if (ns->busw == 16) ··· 1933 1936 if (ns->regs.count == ns->regs.num) { 1934 1937 NS_DBG("read_byte: all bytes were read\n"); 1935 1938 1936 - /* 1937 - * The OPT_AUTOINCR allows to read next consecutive pages without 1938 - * new read operation cycle. 
1939 - */ 1940 - if ((ns->options & OPT_AUTOINCR) && NS_STATE(ns->state) == STATE_DATAOUT) { 1941 - ns->regs.count = 0; 1942 - if (ns->regs.row + 1 < ns->geom.pgnum) 1943 - ns->regs.row += 1; 1944 - NS_DBG("read_byte: switch to the next page (%#x)\n", ns->regs.row); 1945 - do_state_action(ns, ACTION_CPY); 1946 - } 1947 - else if (NS_STATE(ns->nxstate) == STATE_READY) 1939 + if (NS_STATE(ns->nxstate) == STATE_READY) 1948 1940 switch_state(ns); 1949 - 1950 1941 } 1951 1942 1952 1943 return outb; ··· 2188 2203 ns->regs.count += len; 2189 2204 2190 2205 if (ns->regs.count == ns->regs.num) { 2191 - if ((ns->options & OPT_AUTOINCR) && NS_STATE(ns->state) == STATE_DATAOUT) { 2192 - ns->regs.count = 0; 2193 - if (ns->regs.row + 1 < ns->geom.pgnum) 2194 - ns->regs.row += 1; 2195 - NS_DBG("read_buf: switch to the next page (%#x)\n", ns->regs.row); 2196 - do_state_action(ns, ACTION_CPY); 2197 - } 2198 - else if (NS_STATE(ns->nxstate) == STATE_READY) 2206 + if (NS_STATE(ns->nxstate) == STATE_READY) 2199 2207 switch_state(ns); 2200 2208 } 2201 2209
+251 -2
drivers/mtd/nand/omap2.c
··· 21 21 #include <linux/io.h> 22 22 #include <linux/slab.h> 23 23 24 + #ifdef CONFIG_MTD_NAND_OMAP_BCH 25 + #include <linux/bch.h> 26 + #endif 27 + 24 28 #include <plat/dma.h> 25 29 #include <plat/gpmc.h> 26 30 #include <plat/nand.h> ··· 131 127 } iomode; 132 128 u_char *buf; 133 129 int buf_len; 130 + 131 + #ifdef CONFIG_MTD_NAND_OMAP_BCH 132 + struct bch_control *bch; 133 + struct nand_ecclayout ecclayout; 134 + #endif 134 135 }; 135 136 136 137 /** ··· 411 402 PREFETCH_FIFOTHRESHOLD_MAX, 0x1, len, is_write); 412 403 if (ret) 413 404 /* PFPW engine is busy, use cpu copy method */ 414 - goto out_copy; 405 + goto out_copy_unmap; 415 406 416 407 init_completion(&info->comp); 417 408 ··· 430 421 dma_unmap_single(&info->pdev->dev, dma_addr, len, dir); 431 422 return 0; 432 423 424 + out_copy_unmap: 425 + dma_unmap_single(&info->pdev->dev, dma_addr, len, dir); 433 426 out_copy: 434 427 if (info->nand.options & NAND_BUSWIDTH_16) 435 428 is_write == 0 ? omap_read_buf16(mtd, (u_char *) addr, len) ··· 890 879 struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, 891 880 mtd); 892 881 unsigned long timeo = jiffies; 893 - int status = NAND_STATUS_FAIL, state = this->state; 882 + int status, state = this->state; 894 883 895 884 if (state == FL_ERASING) 896 885 timeo += (HZ * 400) / 1000; ··· 905 894 break; 906 895 cond_resched(); 907 896 } 897 + 898 + status = gpmc_nand_read(info->gpmc_cs, GPMC_NAND_DATA); 908 899 return status; 909 900 } 910 901 ··· 937 924 938 925 return 1; 939 926 } 927 + 928 + #ifdef CONFIG_MTD_NAND_OMAP_BCH 929 + 930 + /** 931 + * omap3_enable_hwecc_bch - Program OMAP3 GPMC to perform BCH ECC correction 932 + * @mtd: MTD device structure 933 + * @mode: Read/Write mode 934 + */ 935 + static void omap3_enable_hwecc_bch(struct mtd_info *mtd, int mode) 936 + { 937 + int nerrors; 938 + unsigned int dev_width; 939 + struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, 940 + mtd); 941 + struct nand_chip *chip = mtd->priv; 
942 + 943 + nerrors = (info->nand.ecc.bytes == 13) ? 8 : 4; 944 + dev_width = (chip->options & NAND_BUSWIDTH_16) ? 1 : 0; 945 + /* 946 + * Program GPMC to perform correction on one 512-byte sector at a time. 947 + * Using 4 sectors at a time (i.e. ecc.size = 2048) is also possible and 948 + * gives a slight (5%) performance gain (but requires additional code). 949 + */ 950 + (void)gpmc_enable_hwecc_bch(info->gpmc_cs, mode, dev_width, 1, nerrors); 951 + } 952 + 953 + /** 954 + * omap3_calculate_ecc_bch4 - Generate 7 bytes of ECC bytes 955 + * @mtd: MTD device structure 956 + * @dat: The pointer to data on which ecc is computed 957 + * @ecc_code: The ecc_code buffer 958 + */ 959 + static int omap3_calculate_ecc_bch4(struct mtd_info *mtd, const u_char *dat, 960 + u_char *ecc_code) 961 + { 962 + struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, 963 + mtd); 964 + return gpmc_calculate_ecc_bch4(info->gpmc_cs, dat, ecc_code); 965 + } 966 + 967 + /** 968 + * omap3_calculate_ecc_bch8 - Generate 13 bytes of ECC bytes 969 + * @mtd: MTD device structure 970 + * @dat: The pointer to data on which ecc is computed 971 + * @ecc_code: The ecc_code buffer 972 + */ 973 + static int omap3_calculate_ecc_bch8(struct mtd_info *mtd, const u_char *dat, 974 + u_char *ecc_code) 975 + { 976 + struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, 977 + mtd); 978 + return gpmc_calculate_ecc_bch8(info->gpmc_cs, dat, ecc_code); 979 + } 980 + 981 + /** 982 + * omap3_correct_data_bch - Decode received data and correct errors 983 + * @mtd: MTD device structure 984 + * @data: page data 985 + * @read_ecc: ecc read from nand flash 986 + * @calc_ecc: ecc read from HW ECC registers 987 + */ 988 + static int omap3_correct_data_bch(struct mtd_info *mtd, u_char *data, 989 + u_char *read_ecc, u_char *calc_ecc) 990 + { 991 + int i, count; 992 + /* cannot correct more than 8 errors */ 993 + unsigned int errloc[8]; 994 + struct omap_nand_info *info = container_of(mtd, 
struct omap_nand_info, 995 + mtd); 996 + 997 + count = decode_bch(info->bch, NULL, 512, read_ecc, calc_ecc, NULL, 998 + errloc); 999 + if (count > 0) { 1000 + /* correct errors */ 1001 + for (i = 0; i < count; i++) { 1002 + /* correct data only, not ecc bytes */ 1003 + if (errloc[i] < 8*512) 1004 + data[errloc[i]/8] ^= 1 << (errloc[i] & 7); 1005 + pr_debug("corrected bitflip %u\n", errloc[i]); 1006 + } 1007 + } else if (count < 0) { 1008 + pr_err("ecc unrecoverable error\n"); 1009 + } 1010 + return count; 1011 + } 1012 + 1013 + /** 1014 + * omap3_free_bch - Release BCH ecc resources 1015 + * @mtd: MTD device structure 1016 + */ 1017 + static void omap3_free_bch(struct mtd_info *mtd) 1018 + { 1019 + struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, 1020 + mtd); 1021 + if (info->bch) { 1022 + free_bch(info->bch); 1023 + info->bch = NULL; 1024 + } 1025 + } 1026 + 1027 + /** 1028 + * omap3_init_bch - Initialize BCH ECC 1029 + * @mtd: MTD device structure 1030 + * @ecc_opt: OMAP ECC mode (OMAP_ECC_BCH4_CODE_HW or OMAP_ECC_BCH8_CODE_HW) 1031 + */ 1032 + static int omap3_init_bch(struct mtd_info *mtd, int ecc_opt) 1033 + { 1034 + int ret, max_errors; 1035 + struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, 1036 + mtd); 1037 + #ifdef CONFIG_MTD_NAND_OMAP_BCH8 1038 + const int hw_errors = 8; 1039 + #else 1040 + const int hw_errors = 4; 1041 + #endif 1042 + info->bch = NULL; 1043 + 1044 + max_errors = (ecc_opt == OMAP_ECC_BCH8_CODE_HW) ? 
8 : 4; 1045 + if (max_errors != hw_errors) { 1046 + pr_err("cannot configure %d-bit BCH ecc, only %d-bit supported", 1047 + max_errors, hw_errors); 1048 + goto fail; 1049 + } 1050 + 1051 + /* initialize GPMC BCH engine */ 1052 + ret = gpmc_init_hwecc_bch(info->gpmc_cs, 1, max_errors); 1053 + if (ret) 1054 + goto fail; 1055 + 1056 + /* software bch library is only used to detect and locate errors */ 1057 + info->bch = init_bch(13, max_errors, 0x201b /* hw polynomial */); 1058 + if (!info->bch) 1059 + goto fail; 1060 + 1061 + info->nand.ecc.size = 512; 1062 + info->nand.ecc.hwctl = omap3_enable_hwecc_bch; 1063 + info->nand.ecc.correct = omap3_correct_data_bch; 1064 + info->nand.ecc.mode = NAND_ECC_HW; 1065 + 1066 + /* 1067 + * The number of corrected errors in an ecc block that will trigger 1068 + * block scrubbing defaults to the ecc strength (4 or 8). 1069 + * Set mtd->bitflip_threshold here to define a custom threshold. 1070 + */ 1071 + 1072 + if (max_errors == 8) { 1073 + info->nand.ecc.strength = 8; 1074 + info->nand.ecc.bytes = 13; 1075 + info->nand.ecc.calculate = omap3_calculate_ecc_bch8; 1076 + } else { 1077 + info->nand.ecc.strength = 4; 1078 + info->nand.ecc.bytes = 7; 1079 + info->nand.ecc.calculate = omap3_calculate_ecc_bch4; 1080 + } 1081 + 1082 + pr_info("enabling NAND BCH ecc with %d-bit correction\n", max_errors); 1083 + return 0; 1084 + fail: 1085 + omap3_free_bch(mtd); 1086 + return -1; 1087 + } 1088 + 1089 + /** 1090 + * omap3_init_bch_tail - Build an oob layout for BCH ECC correction. 
1091 + * @mtd: MTD device structure 1092 + */ 1093 + static int omap3_init_bch_tail(struct mtd_info *mtd) 1094 + { 1095 + int i, steps; 1096 + struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, 1097 + mtd); 1098 + struct nand_ecclayout *layout = &info->ecclayout; 1099 + 1100 + /* build oob layout */ 1101 + steps = mtd->writesize/info->nand.ecc.size; 1102 + layout->eccbytes = steps*info->nand.ecc.bytes; 1103 + 1104 + /* do not bother creating special oob layouts for small page devices */ 1105 + if (mtd->oobsize < 64) { 1106 + pr_err("BCH ecc is not supported on small page devices\n"); 1107 + goto fail; 1108 + } 1109 + 1110 + /* reserve 2 bytes for bad block marker */ 1111 + if (layout->eccbytes+2 > mtd->oobsize) { 1112 + pr_err("no oob layout available for oobsize %d eccbytes %u\n", 1113 + mtd->oobsize, layout->eccbytes); 1114 + goto fail; 1115 + } 1116 + 1117 + /* put ecc bytes at oob tail */ 1118 + for (i = 0; i < layout->eccbytes; i++) 1119 + layout->eccpos[i] = mtd->oobsize-layout->eccbytes+i; 1120 + 1121 + layout->oobfree[0].offset = 2; 1122 + layout->oobfree[0].length = mtd->oobsize-2-layout->eccbytes; 1123 + info->nand.ecc.layout = layout; 1124 + 1125 + if (!(info->nand.options & NAND_BUSWIDTH_16)) 1126 + info->nand.badblock_pattern = &bb_descrip_flashbased; 1127 + return 0; 1128 + fail: 1129 + omap3_free_bch(mtd); 1130 + return -1; 1131 + } 1132 + 1133 + #else 1134 + static int omap3_init_bch(struct mtd_info *mtd, int ecc_opt) 1135 + { 1136 + pr_err("CONFIG_MTD_NAND_OMAP_BCH is not enabled\n"); 1137 + return -1; 1138 + } 1139 + static int omap3_init_bch_tail(struct mtd_info *mtd) 1140 + { 1141 + return -1; 1142 + } 1143 + static void omap3_free_bch(struct mtd_info *mtd) 1144 + { 1145 + } 1146 + #endif /* CONFIG_MTD_NAND_OMAP_BCH */ 940 1147 941 1148 static int __devinit omap_nand_probe(struct platform_device *pdev) 942 1149 { ··· 1296 1063 info->nand.ecc.hwctl = omap_enable_hwecc; 1297 1064 info->nand.ecc.correct = omap_correct_data; 1298 
1065 info->nand.ecc.mode = NAND_ECC_HW; 1066 + } else if ((pdata->ecc_opt == OMAP_ECC_BCH4_CODE_HW) || 1067 + (pdata->ecc_opt == OMAP_ECC_BCH8_CODE_HW)) { 1068 + err = omap3_init_bch(&info->mtd, pdata->ecc_opt); 1069 + if (err) { 1070 + err = -EINVAL; 1071 + goto out_release_mem_region; 1072 + } 1299 1073 } 1300 1074 1301 1075 /* DIP switches on some boards change between 8 and 16 bit ··· 1334 1094 (offset + omap_oobinfo.eccbytes); 1335 1095 1336 1096 info->nand.ecc.layout = &omap_oobinfo; 1097 + } else if ((pdata->ecc_opt == OMAP_ECC_BCH4_CODE_HW) || 1098 + (pdata->ecc_opt == OMAP_ECC_BCH8_CODE_HW)) { 1099 + /* build OOB layout for BCH ECC correction */ 1100 + err = omap3_init_bch_tail(&info->mtd); 1101 + if (err) { 1102 + err = -EINVAL; 1103 + goto out_release_mem_region; 1104 + } 1337 1105 } 1338 1106 1339 1107 /* second phase scan */ ··· 1370 1122 struct mtd_info *mtd = platform_get_drvdata(pdev); 1371 1123 struct omap_nand_info *info = container_of(mtd, struct omap_nand_info, 1372 1124 mtd); 1125 + omap3_free_bch(&info->mtd); 1373 1126 1374 1127 platform_set_drvdata(pdev, NULL); 1375 1128 if (info->dma_ch != -1)
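The core of the OMAP BCH support added above is the correction step in omap3_correct_data_bch(): decode_bch() hands back bit positions within the 512-byte sector, and each error in the data area is repaired with a single XOR. A standalone sketch of that step (not kernel code; the helper name is made up):

```c
/*
 * Sketch of BCH error application: 'errloc' holds bit indices into
 * the sector, 'data_bits' is the size of the data area in bits.
 * Returns how many flips were applied.
 */
int bch_apply_errors(unsigned char *data, int data_bits,
		     const unsigned int *errloc, int count)
{
	int i, applied = 0;

	for (i = 0; i < count; i++) {
		/* correct data only, not ecc bytes */
		if (errloc[i] < (unsigned int)data_bits) {
			data[errloc[i] / 8] ^= 1 << (errloc[i] & 7);
			applied++;
		}
	}
	return applied;
}
```

Error locations beyond the data area (i.e. inside the ecc bytes themselves) are detected but deliberately left uncorrected, matching the `errloc[i] < 8*512` guard in the driver.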
-1
drivers/mtd/nand/pasemi_nand.c
··· 155 155 chip->ecc.mode = NAND_ECC_SOFT; 156 156 157 157 /* Enable the following for a flash based bad block table */ 158 - chip->options = NAND_NO_AUTOINCR; 159 158 chip->bbt_options = NAND_BBT_USE_FLASH; 160 159 161 160 /* Scan to find existence of the device */
+21 -7
drivers/mtd/nand/plat_nand.c
··· 23 23 void __iomem *io_base; 24 24 }; 25 25 26 + static const char *part_probe_types[] = { "cmdlinepart", NULL }; 27 + 26 28 /* 27 29 * Probe for the NAND device. 28 30 */ 29 31 static int __devinit plat_nand_probe(struct platform_device *pdev) 30 32 { 31 33 struct platform_nand_data *pdata = pdev->dev.platform_data; 34 + struct mtd_part_parser_data ppdata; 32 35 struct plat_nand_data *data; 33 36 struct resource *res; 37 + const char **part_types; 34 38 int err = 0; 35 39 36 40 if (pdata->chip.nr_chips < 1) { ··· 79 75 data->chip.select_chip = pdata->ctrl.select_chip; 80 76 data->chip.write_buf = pdata->ctrl.write_buf; 81 77 data->chip.read_buf = pdata->ctrl.read_buf; 78 + data->chip.read_byte = pdata->ctrl.read_byte; 82 79 data->chip.chip_delay = pdata->chip.chip_delay; 83 80 data->chip.options |= pdata->chip.options; 84 81 data->chip.bbt_options |= pdata->chip.bbt_options; ··· 103 98 goto out; 104 99 } 105 100 106 - err = mtd_device_parse_register(&data->mtd, 107 - pdata->chip.part_probe_types, NULL, 101 + part_types = pdata->chip.part_probe_types ? : part_probe_types; 102 + 103 + ppdata.of_node = pdev->dev.of_node; 104 + err = mtd_device_parse_register(&data->mtd, part_types, &ppdata, 108 105 pdata->chip.partitions, 109 106 pdata->chip.nr_partitions); 110 107 ··· 147 140 return 0; 148 141 } 149 142 143 + static const struct of_device_id plat_nand_match[] = { 144 + { .compatible = "gen_nand" }, 145 + {}, 146 + }; 147 + MODULE_DEVICE_TABLE(of, plat_nand_match); 148 + 150 149 static struct platform_driver plat_nand_driver = { 151 - .probe = plat_nand_probe, 152 - .remove = __devexit_p(plat_nand_remove), 153 - .driver = { 154 - .name = "gen_nand", 155 - .owner = THIS_MODULE, 150 + .probe = plat_nand_probe, 151 + .remove = __devexit_p(plat_nand_remove), 152 + .driver = { 153 + .name = "gen_nand", 154 + .owner = THIS_MODULE, 155 + .of_match_table = plat_nand_match, 156 156 }, 157 157 }; 158 158
+3 -3
drivers/mtd/nand/pxa3xx_nand.c
··· 682 682 } 683 683 684 684 static void pxa3xx_nand_write_page_hwecc(struct mtd_info *mtd, 685 - struct nand_chip *chip, const uint8_t *buf) 685 + struct nand_chip *chip, const uint8_t *buf, int oob_required) 686 686 { 687 687 chip->write_buf(mtd, buf, mtd->writesize); 688 688 chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 689 689 } 690 690 691 691 static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd, 692 - struct nand_chip *chip, uint8_t *buf, int page) 692 + struct nand_chip *chip, uint8_t *buf, int oob_required, 693 + int page) 693 694 { 694 695 struct pxa3xx_nand_host *host = mtd->priv; 695 696 struct pxa3xx_nand_info *info = host->info_data; ··· 1005 1004 chip->ecc.size = host->page_size; 1006 1005 chip->ecc.strength = 1; 1007 1006 1008 - chip->options = NAND_NO_AUTOINCR; 1009 1007 chip->options |= NAND_NO_READRDY; 1010 1008 if (host->reg_ndcr & NDCR_DWIDTH_M) 1011 1009 chip->options |= NAND_BUSWIDTH_16;
+4 -18
drivers/mtd/nand/r852.c
··· 539 539 * nand_read_oob_syndrome assumes we can send column address - we can't 540 540 */ 541 541 static int r852_read_oob(struct mtd_info *mtd, struct nand_chip *chip, 542 - int page, int sndcmd) 542 + int page) 543 543 { 544 - if (sndcmd) { 545 - chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 546 - sndcmd = 0; 547 - } 544 + chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 548 545 chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 549 - return sndcmd; 546 + return 0; 550 547 } 551 548 552 549 /* ··· 1101 1104 .driver.pm = &r852_pm_ops, 1102 1105 }; 1103 1106 1104 - static __init int r852_module_init(void) 1105 - { 1106 - return pci_register_driver(&r852_pci_driver); 1107 - } 1108 - 1109 - static void __exit r852_module_exit(void) 1110 - { 1111 - pci_unregister_driver(&r852_pci_driver); 1112 - } 1113 - 1114 - module_init(r852_module_init); 1115 - module_exit(r852_module_exit); 1107 + module_pci_driver(r852_pci_driver); 1116 1108 1117 1109 MODULE_LICENSE("GPL"); 1118 1110 MODULE_AUTHOR("Maxim Levitsky <maximlevitsky@gmail.com>");
+3 -5
drivers/mtd/nand/sh_flctl.c
··· 344 344 } 345 345 346 346 static int flctl_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, 347 - uint8_t *buf, int page) 347 + uint8_t *buf, int oob_required, int page) 348 348 { 349 349 int i, eccsize = chip->ecc.size; 350 350 int eccbytes = chip->ecc.bytes; ··· 359 359 if (flctl->hwecc_cant_correct[i]) 360 360 mtd->ecc_stats.failed++; 361 361 else 362 - mtd->ecc_stats.corrected += 0; 362 + mtd->ecc_stats.corrected += 0; /* FIXME */ 363 363 } 364 364 365 365 return 0; 366 366 } 367 367 368 368 static void flctl_write_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip, 369 - const uint8_t *buf) 369 + const uint8_t *buf, int oob_required) 370 370 { 371 371 int i, eccsize = chip->ecc.size; 372 372 int eccbytes = chip->ecc.bytes; ··· 880 880 flctl->flcmncr_base = pdata->flcmncr_val; 881 881 flctl->hwecc = pdata->has_hwecc; 882 882 flctl->holden = pdata->use_holden; 883 - 884 - nand->options = NAND_NO_AUTOINCR; 885 883 886 884 /* Set address of hardware control function */ 887 885 /* 20 us command delay time */
+4 -5
drivers/mtd/nand/sm_common.c
··· 94 94 {NULL,} 95 95 }; 96 96 97 - #define XD_TYPEM (NAND_NO_AUTOINCR | NAND_BROKEN_XD) 98 97 static struct nand_flash_dev nand_xd_flash_ids[] = { 99 98 100 99 {"xD 16MiB 3,3V", 0x73, 512, 16, 0x4000, 0}, 101 100 {"xD 32MiB 3,3V", 0x75, 512, 32, 0x4000, 0}, 102 101 {"xD 64MiB 3,3V", 0x76, 512, 64, 0x4000, 0}, 103 102 {"xD 128MiB 3,3V", 0x79, 512, 128, 0x4000, 0}, 104 - {"xD 256MiB 3,3V", 0x71, 512, 256, 0x4000, XD_TYPEM}, 105 - {"xD 512MiB 3,3V", 0xdc, 512, 512, 0x4000, XD_TYPEM}, 106 - {"xD 1GiB 3,3V", 0xd3, 512, 1024, 0x4000, XD_TYPEM}, 107 - {"xD 2GiB 3,3V", 0xd5, 512, 2048, 0x4000, XD_TYPEM}, 103 + {"xD 256MiB 3,3V", 0x71, 512, 256, 0x4000, NAND_BROKEN_XD}, 104 + {"xD 512MiB 3,3V", 0xdc, 512, 512, 0x4000, NAND_BROKEN_XD}, 105 + {"xD 1GiB 3,3V", 0xd3, 512, 1024, 0x4000, NAND_BROKEN_XD}, 106 + {"xD 2GiB 3,3V", 0xd5, 512, 2048, 0x4000, NAND_BROKEN_XD}, 108 107 {NULL,} 109 108 }; 110 109
+4 -2
drivers/mtd/onenand/onenand_base.c
··· 1201 1201 if (mtd->ecc_stats.failed - stats.failed) 1202 1202 return -EBADMSG; 1203 1203 1204 - return mtd->ecc_stats.corrected - stats.corrected ? -EUCLEAN : 0; 1204 + /* return max bitflips per ecc step; ONENANDs correct 1 bit only */ 1205 + return mtd->ecc_stats.corrected != stats.corrected ? 1 : 0; 1205 1206 } 1206 1207 1207 1208 /** ··· 1334 1333 if (mtd->ecc_stats.failed - stats.failed) 1335 1334 return -EBADMSG; 1336 1335 1337 - return mtd->ecc_stats.corrected - stats.corrected ? -EUCLEAN : 0; 1336 + /* return max bitflips per ecc step; ONENANDs correct 1 bit only */ 1337 + return mtd->ecc_stats.corrected != stats.corrected ? 1 : 0; 1338 1338 } 1339 1339 1340 1340 /**
+7
fs/jffs2/jffs2_fs_sb.h
··· 32 32 struct jffs2_mount_opts { 33 33 bool override_compr; 34 34 unsigned int compr; 35 + 36 + /* The size of the reserved pool. The reserved pool is the JFFS2 flash 37 + * space which may only be used by root and may not be used by other 38 + * users. This is implemented simply by not allowing those users to 39 + * write to the file system if the amount of available space is less 40 + * than 'rp_size'. */ 41 + unsigned int rp_size; 35 42 }; 36 43 37 44 /* A struct for the overall file system control. Pointers to
+42
fs/jffs2/nodemgmt.c
··· 18 18 #include "nodelist.h" 19 19 #include "debug.h" 20 20 21 + /* 22 + * Check whether the user is allowed to write. 23 + */ 24 + static int jffs2_rp_can_write(struct jffs2_sb_info *c) 25 + { 26 + uint32_t avail; 27 + struct jffs2_mount_opts *opts = &c->mount_opts; 28 + 29 + avail = c->dirty_size + c->free_size + c->unchecked_size + 30 + c->erasing_size - c->resv_blocks_write * c->sector_size 31 + - c->nospc_dirty_size; 32 + 33 + if (avail < 2 * opts->rp_size) 34 + jffs2_dbg(1, "rpsize %u, dirty_size %u, free_size %u, " 35 + "erasing_size %u, unchecked_size %u, " 36 + "nr_erasing_blocks %u, avail %u, resrv %u\n", 37 + opts->rp_size, c->dirty_size, c->free_size, 38 + c->erasing_size, c->unchecked_size, 39 + c->nr_erasing_blocks, avail, c->nospc_dirty_size); 40 + 41 + if (avail > opts->rp_size) 42 + return 1; 43 + 44 + /* Always allow root */ 45 + if (capable(CAP_SYS_RESOURCE)) 46 + return 1; 47 + 48 + jffs2_dbg(1, "forbid writing\n"); 49 + return 0; 50 + } 51 + 21 52 /** 22 53 * jffs2_reserve_space - request physical space to write nodes to flash 23 54 * @c: superblock info ··· 85 54 jffs2_dbg(1, "%s(): alloc sem got\n", __func__); 86 55 87 56 spin_lock(&c->erase_completion_lock); 57 + 58 + /* 59 + * Check if the free space is greater than the size of the reserved pool. 60 + * If not, only allow root to proceed with writing. 61 + */ 62 + if (prio != ALLOC_DELETION && !jffs2_rp_can_write(c)) { 63 + ret = -ENOSPC; 64 + goto out; 65 + } 88 66 89 67 /* this needs a little more thought (true <tglx> :)) */ 90 68 while(ret == -EAGAIN) { ··· 198 158 jffs2_dbg(1, "%s(): ret is %d\n", __func__, ret); 199 159 } 200 160 } 161 + 162 + out: 201 163 spin_unlock(&c->erase_completion_lock); 202 164 if (!ret) 203 165 ret = jffs2_prealloc_raw_node_refs(c, c->nextblock, 1);
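The accounting behind jffs2_rp_can_write() can be modeled in userspace as follows. This is a sketch only: `struct fs_state` is an illustrative stand-in for the relevant `struct jffs2_sb_info` fields, and the `is_root` flag stands in for the kernel's `capable(CAP_SYS_RESOURCE)` check.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of the jffs2 superblock counters used by the
 * reserved-pool check; all sizes in bytes. */
struct fs_state {
	uint32_t dirty_size;
	uint32_t free_size;
	uint32_t unchecked_size;
	uint32_t erasing_size;
	uint32_t resv_blocks_write;
	uint32_t sector_size;
	uint32_t nospc_dirty_size;
	uint32_t rp_size;	/* reserved pool, from the rp_size= mount option */
};

static bool rp_can_write(const struct fs_state *c, bool is_root)
{
	uint32_t avail = c->dirty_size + c->free_size + c->unchecked_size +
			 c->erasing_size - c->resv_blocks_write * c->sector_size -
			 c->nospc_dirty_size;

	if (avail > c->rp_size)
		return true;	/* above the pool: anyone may write */
	return is_root;		/* only privileged users dip into the pool */
}
```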
+13 -6
fs/jffs2/readinode.c
··· 1266 1266 /* Symlink's inode data is the target path. Read it and 1267 1267 * keep in RAM to facilitate quick follow symlink 1268 1268 * operation. */ 1269 - f->target = kmalloc(je32_to_cpu(latest_node->csize) + 1, GFP_KERNEL); 1269 + uint32_t csize = je32_to_cpu(latest_node->csize); 1270 + if (csize > JFFS2_MAX_NAME_LEN) { 1271 + mutex_unlock(&f->sem); 1272 + jffs2_do_clear_inode(c, f); 1273 + return -ENAMETOOLONG; 1274 + } 1275 + f->target = kmalloc(csize + 1, GFP_KERNEL); 1270 1276 if (!f->target) { 1271 - JFFS2_ERROR("can't allocate %d bytes of memory for the symlink target path cache\n", je32_to_cpu(latest_node->csize)); 1277 + JFFS2_ERROR("can't allocate %u bytes of memory for the symlink target path cache\n", csize); 1272 1278 mutex_unlock(&f->sem); 1273 1279 jffs2_do_clear_inode(c, f); 1274 1280 return -ENOMEM; 1275 1281 } 1276 1282 1277 1283 ret = jffs2_flash_read(c, ref_offset(rii.latest_ref) + sizeof(*latest_node), 1278 - je32_to_cpu(latest_node->csize), &retlen, (char *)f->target); 1284 + csize, &retlen, (char *)f->target); 1279 1285 1280 - if (ret || retlen != je32_to_cpu(latest_node->csize)) { 1281 - if (retlen != je32_to_cpu(latest_node->csize)) 1286 + if (ret || retlen != csize) { 1287 + if (retlen != csize) 1282 1288 ret = -EIO; 1283 1289 kfree(f->target); 1284 1290 f->target = NULL; ··· 1293 1287 return ret; 1294 1288 } 1295 1289 1296 - f->target[je32_to_cpu(latest_node->csize)] = '\0'; 1290 + f->target[csize] = '\0'; 1297 1291 dbg_readinode("symlink's target '%s' cached\n", f->target); 1298 1292 } 1299 1293 ··· 1421 1415 mutex_unlock(&f->sem); 1422 1416 jffs2_do_clear_inode(c, f); 1423 1417 } 1418 + jffs2_xattr_do_crccheck_inode(c, ic); 1424 1419 kfree (f); 1425 1420 return ret; 1426 1421 }
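The readinode.c hunk hardens the symlink path by validating the on-flash `csize` before trusting it. The same allocate-validate-terminate pattern can be sketched in userspace; `cache_symlink_target()` is an illustrative helper, not a kernel function, though the limit matches `JFFS2_MAX_NAME_LEN` from the jffs2 headers.

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define JFFS2_MAX_NAME_LEN 254	/* same cap the kernel check uses */

/* Copy a symlink target of claimed length csize, rejecting oversized
 * (i.e. corrupt or hostile) nodes before allocating. */
static int cache_symlink_target(const char *flash, uint32_t csize, char **out)
{
	char *target;

	if (csize > JFFS2_MAX_NAME_LEN)
		return -ENAMETOOLONG;	/* don't trust on-flash size blindly */
	target = malloc(csize + 1);
	if (!target)
		return -ENOMEM;
	memcpy(target, flash, csize);
	target[csize] = '\0';		/* always NUL-terminated before use */
	*out = target;
	return 0;
}
```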
+17
fs/jffs2/super.c
··· 90 90 91 91 if (opts->override_compr) 92 92 seq_printf(s, ",compr=%s", jffs2_compr_name(opts->compr)); 93 + if (opts->rp_size) 94 + seq_printf(s, ",rp_size=%u", opts->rp_size / 1024); 93 95 94 96 return 0; 95 97 } ··· 156 154 * JFFS2 mount options. 157 155 * 158 156 * Opt_override_compr: override default compressor 157 + * Opt_rp_size: size of reserved pool in KiB 159 158 * Opt_err: just end of array marker 160 159 */ 161 160 enum { 162 161 Opt_override_compr, 162 + Opt_rp_size, 163 163 Opt_err, 164 164 }; 165 165 166 166 static const match_table_t tokens = { 167 167 {Opt_override_compr, "compr=%s"}, 168 + {Opt_rp_size, "rp_size=%u"}, 168 169 {Opt_err, NULL}, 169 170 }; 170 171 ··· 175 170 { 176 171 substring_t args[MAX_OPT_ARGS]; 177 172 char *p, *name; 173 + unsigned int opt; 178 174 179 175 if (!data) 180 176 return 0; ··· 212 206 } 213 207 kfree(name); 214 208 c->mount_opts.override_compr = true; 209 + break; 210 + case Opt_rp_size: 211 + if (match_int(&args[0], &opt)) 212 + return -EINVAL; 213 + opt *= 1024; 214 + if (opt > c->mtd->size) { 215 + pr_warn("Too large reserve pool specified, max " 216 + "is %llu KB\n", c->mtd->size / 1024); 217 + return -EINVAL; 218 + } 219 + c->mount_opts.rp_size = opt; 215 220 break; 216 221 default: 217 222 pr_err("Error: unrecognized mount option '%s' or missing value\n",
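The Opt_rp_size handling above can be approximated in userspace as follows. Assumptions in this sketch: the kernel tokenizes with `match_int()` while `sscanf` stands in here, and a 64-bit intermediate sidesteps the wraparound that `opt *= 1024` could hit for very large values in 32-bit arithmetic.

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Parse "rp_size=<KiB>", convert to bytes, and reject pools larger than
 * the flash device, mirroring the Opt_rp_size case in jffs2_parse_options(). */
static int parse_rp_size(const char *arg, uint64_t mtd_size, uint32_t *rp_bytes)
{
	unsigned int kib;

	if (sscanf(arg, "rp_size=%u", &kib) != 1)
		return -EINVAL;
	uint64_t bytes = (uint64_t)kib * 1024;	/* option is given in KiB */
	if (bytes > mtd_size)
		return -EINVAL;	/* "Too large reserve pool specified" */
	*rp_bytes = (uint32_t)bytes;
	return 0;
}
```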
+16 -7
fs/jffs2/xattr.c
··· 11 11 12 12 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 13 14 + #define JFFS2_XATTR_IS_CORRUPTED 1 15 + 14 16 #include <linux/kernel.h> 15 17 #include <linux/slab.h> 16 18 #include <linux/fs.h> ··· 155 153 JFFS2_ERROR("node CRC failed at %#08x, read=%#08x, calc=%#08x\n", 156 154 offset, je32_to_cpu(rx.hdr_crc), crc); 157 155 xd->flags |= JFFS2_XFLAGS_INVALID; 158 - return -EIO; 156 + return JFFS2_XATTR_IS_CORRUPTED; 159 157 } 160 158 totlen = PAD(sizeof(rx) + rx.name_len + 1 + je16_to_cpu(rx.value_len)); 161 159 if (je16_to_cpu(rx.magic) != JFFS2_MAGIC_BITMASK ··· 171 169 je32_to_cpu(rx.xid), xd->xid, 172 170 je32_to_cpu(rx.version), xd->version); 173 171 xd->flags |= JFFS2_XFLAGS_INVALID; 174 - return -EIO; 172 + return JFFS2_XATTR_IS_CORRUPTED; 175 173 } 176 174 xd->xprefix = rx.xprefix; 177 175 xd->name_len = rx.name_len; ··· 229 227 data[xd->name_len] = '\0'; 230 228 crc = crc32(0, data, length); 231 229 if (crc != xd->data_crc) { 232 - JFFS2_WARNING("node CRC failed (JFFS2_NODETYPE_XREF)" 230 + JFFS2_WARNING("node CRC failed (JFFS2_NODETYPE_XATTR)" 233 231 " at %#08x, read: 0x%08x calculated: 0x%08x\n", 234 232 ref_offset(xd->node), xd->data_crc, crc); 235 233 kfree(data); 236 234 xd->flags |= JFFS2_XFLAGS_INVALID; 237 - return -EIO; 235 + return JFFS2_XATTR_IS_CORRUPTED; 238 236 } 239 237 240 238 xd->flags |= JFFS2_XFLAGS_HOT; ··· 272 270 if (xd->xname) 273 271 return 0; 274 272 if (xd->flags & JFFS2_XFLAGS_INVALID) 275 - return -EIO; 273 + return JFFS2_XATTR_IS_CORRUPTED; 276 274 if (unlikely(is_xattr_datum_unchecked(c, xd))) 277 275 rc = do_verify_xattr_datum(c, xd); 278 276 if (!rc) ··· 437 435 * is called to release xattr related objects when unmounting. 438 436 * check_xattr_ref_inode(c, ic) 439 437 * is used to confirm inode does not have duplicate xattr name/value pair. 438 + * jffs2_xattr_do_crccheck_inode(c, ic) 439 + * is used to force xattr data integrity check during the initial gc scan. 
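The point of replacing -EIO with the positive sentinel JFFS2_XATTR_IS_CORRUPTED is that callers can now discriminate recoverable on-flash corruption (flag the node invalid, let GC reclaim it, keep going) from real I/O errors (negative errno, propagate). A sketch of that caller-side convention; `handle_xattr_rc()` is an illustrative helper, not a kernel function.

```c
#include <errno.h>

#define JFFS2_XATTR_IS_CORRUPTED 1	/* positive: corrupt node, recoverable */

/* Dispatch on an xattr check result: corruption is absorbed (counted so
 * GC can reclaim the node), success and hard errors pass through. */
static int handle_xattr_rc(int rc, int *invalid_nodes)
{
	if (rc == JFFS2_XATTR_IS_CORRUPTED) {
		(*invalid_nodes)++;	/* node flagged; scan/mount continues */
		return 0;
	}
	return rc;			/* 0 = ok; negative errno aborts */
}
```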
440 440 * -------------------------------------------------- */ 441 441 static int verify_xattr_ref(struct jffs2_sb_info *c, struct jffs2_xattr_ref *ref) 442 442 { ··· 466 462 if (crc != je32_to_cpu(rr.node_crc)) { 467 463 JFFS2_ERROR("node CRC failed at %#08x, read=%#08x, calc=%#08x\n", 468 464 offset, je32_to_cpu(rr.node_crc), crc); 469 - return -EIO; 465 + return JFFS2_XATTR_IS_CORRUPTED; 470 466 } 471 467 if (je16_to_cpu(rr.magic) != JFFS2_MAGIC_BITMASK 472 468 || je16_to_cpu(rr.nodetype) != JFFS2_NODETYPE_XREF ··· 476 472 offset, je16_to_cpu(rr.magic), JFFS2_MAGIC_BITMASK, 477 473 je16_to_cpu(rr.nodetype), JFFS2_NODETYPE_XREF, 478 474 je32_to_cpu(rr.totlen), PAD(sizeof(rr))); 479 - return -EIO; 475 + return JFFS2_XATTR_IS_CORRUPTED; 480 476 } 481 477 ref->ino = je32_to_cpu(rr.ino); 482 478 ref->xid = je32_to_cpu(rr.xid); ··· 684 680 up_write(&c->xattr_sem); 685 681 686 682 return rc; 683 + } 684 + 685 + void jffs2_xattr_do_crccheck_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic) 686 + { 687 + check_xattr_ref_inode(c, ic); 687 688 } 688 689 689 690 /* -------- xattr subsystem functions ---------------
+2
fs/jffs2/xattr.h
··· 77 77 extern struct jffs2_xattr_datum *jffs2_setup_xattr_datum(struct jffs2_sb_info *c, 78 78 uint32_t xid, uint32_t version); 79 79 80 + extern void jffs2_xattr_do_crccheck_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic); 80 81 extern void jffs2_xattr_delete_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic); 81 82 extern void jffs2_xattr_free_inode(struct jffs2_sb_info *c, struct jffs2_inode_cache *ic); 82 83 ··· 109 108 #define jffs2_build_xattr_subsystem(c) 110 109 #define jffs2_clear_xattr_subsystem(c) 111 110 111 + #define jffs2_xattr_do_crccheck_inode(c, ic) 112 112 #define jffs2_xattr_delete_inode(c, ic) 113 113 #define jffs2_xattr_free_inode(c, ic) 114 114 #define jffs2_verify_xattr(c) (1)
+4 -4
include/linux/mtd/gpmi-nand.h
··· 23 23 #define GPMI_NAND_RES_SIZE 6 24 24 25 25 /* Resource names for the GPMI NAND driver. */ 26 - #define GPMI_NAND_GPMI_REGS_ADDR_RES_NAME "GPMI NAND GPMI Registers" 26 + #define GPMI_NAND_GPMI_REGS_ADDR_RES_NAME "gpmi-nand" 27 27 #define GPMI_NAND_GPMI_INTERRUPT_RES_NAME "GPMI NAND GPMI Interrupt" 28 - #define GPMI_NAND_BCH_REGS_ADDR_RES_NAME "GPMI NAND BCH Registers" 29 - #define GPMI_NAND_BCH_INTERRUPT_RES_NAME "GPMI NAND BCH Interrupt" 28 + #define GPMI_NAND_BCH_REGS_ADDR_RES_NAME "bch" 29 + #define GPMI_NAND_BCH_INTERRUPT_RES_NAME "bch" 30 30 #define GPMI_NAND_DMA_CHANNELS_RES_NAME "GPMI NAND DMA Channels" 31 - #define GPMI_NAND_DMA_INTERRUPT_RES_NAME "GPMI NAND DMA Interrupt" 31 + #define GPMI_NAND_DMA_INTERRUPT_RES_NAME "gpmi-dma" 32 32 33 33 /** 34 34 * struct gpmi_nand_platform_data - GPMI NAND driver platform data.
+10 -1
include/linux/mtd/mtd.h
··· 157 157 unsigned int erasesize_mask; 158 158 unsigned int writesize_mask; 159 159 160 + /* 161 + * read ops return -EUCLEAN if max number of bitflips corrected on any 162 + * one region comprising an ecc step equals or exceeds this value. 163 + * Settable by driver, else defaults to ecc_strength. User can override 164 + * in sysfs. N.B. The meaning of the -EUCLEAN return code has changed; 165 + * see Documentation/ABI/testing/sysfs-class-mtd for more detail. 166 + */ 167 + unsigned int bitflip_threshold; 168 + 160 169 // Kernel-only stuff starts here. 161 170 const char *name; 162 171 int index; ··· 173 164 /* ECC layout structure pointer - read only! */ 174 165 struct nand_ecclayout *ecclayout; 175 166 176 - /* max number of correctible bit errors per writesize */ 167 + /* max number of correctible bit errors per ecc step */ 177 168 unsigned int ecc_strength; 178 169 179 170 /* Data for variable erase regions. If numeraseregions is zero,
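The new bitflip_threshold field changes when mtd_read() reports -EUCLEAN, as the comment above (and Documentation/ABI/testing/sysfs-class-mtd) describes: the return is triggered only when the worst ecc step reached the threshold, not on the first corrected bit. A minimal sketch of that decision, including the default-to-ecc_strength rule; the function names here are illustrative, not kernel symbols.

```c
#include <errno.h>

/* Threshold selection: a driver-supplied value wins, otherwise fall back
 * to the device's ecc_strength (the documented default). */
static unsigned int effective_threshold(unsigned int driver_threshold,
					unsigned int ecc_strength)
{
	return driver_threshold ? driver_threshold : ecc_strength;
}

/* -EUCLEAN iff the max bitflips corrected in any single ecc step of the
 * read equalled or exceeded the threshold; otherwise a clean 0. */
static int mtd_read_retcode(unsigned int max_bitflips, unsigned int threshold)
{
	return max_bitflips >= threshold ? -EUCLEAN : 0;
}
```

Higher layers such as UBI keep treating -EUCLEAN as "consider torturing/marking this eraseblock", but with this change the signal only fires at a user-tunable severity.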
+13 -12
include/linux/mtd/nand.h
··· 161 161 * Option constants for bizarre disfunctionality and real 162 162 * features. 163 163 */ 164 - /* Chip can not auto increment pages */ 165 - #define NAND_NO_AUTOINCR 0x00000001 166 164 /* Buswidth is 16 bit */ 167 165 #define NAND_BUSWIDTH_16 0x00000002 168 166 /* Device supports partial programming without padding */ ··· 205 207 (NAND_NO_PADDING | NAND_CACHEPRG | NAND_COPYBACK) 206 208 207 209 /* Macros to identify the above */ 208 - #define NAND_CANAUTOINCR(chip) (!(chip->options & NAND_NO_AUTOINCR)) 209 210 #define NAND_MUST_PAD(chip) (!(chip->options & NAND_NO_PADDING)) 210 211 #define NAND_HAS_CACHEPROG(chip) ((chip->options & NAND_CACHEPRG)) 211 212 #define NAND_HAS_COPYBACK(chip) ((chip->options & NAND_COPYBACK)) ··· 213 216 && (chip->page_shift > 9)) 214 217 215 218 /* Mask to zero out the chip options, which come from the id table */ 216 - #define NAND_CHIPOPTIONS_MSK (0x0000ffff & ~NAND_NO_AUTOINCR) 219 + #define NAND_CHIPOPTIONS_MSK 0x0000ffff 217 220 218 221 /* Non chip related options */ 219 222 /* This option skips the bbt scan during initialization. 
*/ ··· 360 363 int (*correct)(struct mtd_info *mtd, uint8_t *dat, uint8_t *read_ecc, 361 364 uint8_t *calc_ecc); 362 365 int (*read_page_raw)(struct mtd_info *mtd, struct nand_chip *chip, 363 - uint8_t *buf, int page); 366 + uint8_t *buf, int oob_required, int page); 364 367 void (*write_page_raw)(struct mtd_info *mtd, struct nand_chip *chip, 365 - const uint8_t *buf); 368 + const uint8_t *buf, int oob_required); 366 369 int (*read_page)(struct mtd_info *mtd, struct nand_chip *chip, 367 - uint8_t *buf, int page); 370 + uint8_t *buf, int oob_required, int page); 368 371 int (*read_subpage)(struct mtd_info *mtd, struct nand_chip *chip, 369 372 uint32_t offs, uint32_t len, uint8_t *buf); 370 373 void (*write_page)(struct mtd_info *mtd, struct nand_chip *chip, 371 - const uint8_t *buf); 374 + const uint8_t *buf, int oob_required); 372 375 int (*write_oob_raw)(struct mtd_info *mtd, struct nand_chip *chip, 373 376 int page); 374 377 int (*read_oob_raw)(struct mtd_info *mtd, struct nand_chip *chip, 375 - int page, int sndcmd); 376 - int (*read_oob)(struct mtd_info *mtd, struct nand_chip *chip, int page, 377 - int sndcmd); 378 + int page); 379 + int (*read_oob)(struct mtd_info *mtd, struct nand_chip *chip, int page); 378 380 int (*write_oob)(struct mtd_info *mtd, struct nand_chip *chip, 379 381 int page); 380 382 }; ··· 455 459 * @pagemask: [INTERN] page number mask = number of (pages / chip) - 1 456 460 * @pagebuf: [INTERN] holds the pagenumber which is currently in 457 461 * data_buf. 462 + * @pagebuf_bitflips: [INTERN] holds the bitflip count for the page which is 463 + * currently in data_buf. 458 464 * @subpagesize: [INTERN] holds the subpagesize 459 465 * @onfi_version: [INTERN] holds the chip ONFI version (BCD encoded), 460 466 * non 0 if ONFI supported. 
··· 503 505 int (*errstat)(struct mtd_info *mtd, struct nand_chip *this, int state, 504 506 int status, int page); 505 507 int (*write_page)(struct mtd_info *mtd, struct nand_chip *chip, 506 - const uint8_t *buf, int page, int cached, int raw); 508 + const uint8_t *buf, int oob_required, int page, 509 + int cached, int raw); 507 510 508 511 int chip_delay; 509 512 unsigned int options; ··· 518 519 uint64_t chipsize; 519 520 int pagemask; 520 521 int pagebuf; 522 + unsigned int pagebuf_bitflips; 521 523 int subpagesize; 522 524 uint8_t cellinfo; 523 525 int badblockpos; ··· 654 654 void (*cmd_ctrl)(struct mtd_info *mtd, int dat, unsigned int ctrl); 655 655 void (*write_buf)(struct mtd_info *mtd, const uint8_t *buf, int len); 656 656 void (*read_buf)(struct mtd_info *mtd, uint8_t *buf, int len); 657 + unsigned char (*read_byte)(struct mtd_info *mtd); 657 658 void *priv; 658 659 }; 659 660