Linux kernel mirror (for testing)
git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-linus-20140808' of git://git.infradead.org/linux-mtd

Pull MTD updates from Brian Norris:
"AMD-compatible CFI driver:
- Support OTP programming for Micron M29EW family
 - Increase buffer write timeout according to detected flash
   parameter info

NAND
- Add helpers for retrieving ONFI timing modes
- GPMI: provide option to disable bad block marker swapping (required
for Ka-On electronics platforms)

SPI NOR
- EON EN25QH128 support
 - Support new Flag Status Register (FSR) on a few Micron flash chips

Common
- New sysfs entries for bad block and ECC stats

And a few miscellaneous refactorings, cleanups, and driver
improvements"

* tag 'for-linus-20140808' of git://git.infradead.org/linux-mtd: (31 commits)
mtd: gpmi: make blockmark swapping optional
mtd: gpmi: remove line breaks from error messages and improve wording
mtd: gpmi: remove useless (void *) type casts and spaces between type casts and variables
mtd: atmel_nand: NFC: support multiple interrupt handling
mtd: atmel_nand: implement the nfc_device_ready() by checking the R/B bit
mtd: atmel_nand: add NFC status error check
mtd: atmel_nand: make ecc parameters same as definition
mtd: nand: add ONFI timing mode to nand_timings converter
mtd: nand: define struct nand_timings
mtd: cfi_cmdset_0002: fix do_write_buffer() timeout error
mtd: denali: use 8 bytes for READID command
mtd/ftl: fix the double free of the buffers allocated in build_maps()
mtd: phram: Fix whitespace issues
mtd: spi-nor: add support for EON EN25QH128
mtd: cfi_cmdset_0002: Add support for locking OTP memory
mtd: cfi_cmdset_0002: Add support for writing OTP memory
mtd: cfi_cmdset_0002: Invalidate cache after entering/exiting OTP memory
mtd: cfi_cmdset_0002: Add support for reading OTP
mtd: spi-nor: add support for flag status register on Micron chips
mtd: Account for BBT blocks when a partition is being allocated
...

+1033 -147
+38
Documentation/ABI/testing/sysfs-class-mtd
···
184 184
185 185		It will always be a non-negative integer. In the case of
186 186		devices lacking any ECC capability, it is 0.
187 +
188 +	What:		/sys/class/mtd/mtdX/ecc_failures
189 +	Date:		June 2014
190 +	KernelVersion:	3.17
191 +	Contact:	linux-mtd@lists.infradead.org
192 +	Description:
193 +		The number of failures reported by this device's ECC. Typically,
194 +		these failures are associated with failed read operations.
195 +
196 +		It will always be a non-negative integer. In the case of
197 +		devices lacking any ECC capability, it is 0.
198 +
199 +	What:		/sys/class/mtd/mtdX/corrected_bits
200 +	Date:		June 2014
201 +	KernelVersion:	3.17
202 +	Contact:	linux-mtd@lists.infradead.org
203 +	Description:
204 +		The number of bits that have been corrected by means of the
205 +		device's ECC.
206 +
207 +		It will always be a non-negative integer. In the case of
208 +		devices lacking any ECC capability, it is 0.
209 +
210 +	What:		/sys/class/mtd/mtdX/bad_blocks
211 +	Date:		June 2014
212 +	KernelVersion:	3.17
213 +	Contact:	linux-mtd@lists.infradead.org
214 +	Description:
215 +		The number of blocks marked as bad, if any, in this partition.
216 +
217 +	What:		/sys/class/mtd/mtdX/bbt_blocks
218 +	Date:		June 2014
219 +	KernelVersion:	3.17
220 +	Contact:	linux-mtd@lists.infradead.org
221 +	Description:
222 +		The number of blocks that are marked as reserved, if any, in
223 +		this partition. These are typically used to store the in-flash
224 +		bad block table (BBT).
+10
Documentation/devicetree/bindings/mtd/gpmi-nand.txt
···
25 25	                     discoverable or this property is not enabled,
26 26	                     the software may chooses an implementation-defined
27 27	                     ECC scheme.
28 +	- fsl,no-blockmark-swap: Don't swap the bad block marker from the OOB
29 +	                     area with the byte in the data area but rely on the
30 +	                     flash based BBT for identifying bad blocks.
31 +	                     NOTE: this is only valid in conjunction with
32 +	                     'nand-on-flash-bbt'.
33 +	                     WARNING: on i.MX28 blockmark swapping cannot be
34 +	                     disabled for the BootROM in the FCB. Thus,
35 +	                     partitions written from Linux with this feature
36 +	                     turned on may not be accessible by the BootROM
37 +	                     code.
28 38
29 39	The device tree may optionally contain sub-nodes describing partitions of the
30 40	address space. See partition.txt for more detail.
+354 -21
drivers/mtd/chips/cfi_cmdset_0002.c
··· 58 58 static int cfi_amdstd_suspend (struct mtd_info *); 59 59 static void cfi_amdstd_resume (struct mtd_info *); 60 60 static int cfi_amdstd_reboot(struct notifier_block *, unsigned long, void *); 61 + static int cfi_amdstd_get_fact_prot_info(struct mtd_info *, size_t, 62 + size_t *, struct otp_info *); 63 + static int cfi_amdstd_get_user_prot_info(struct mtd_info *, size_t, 64 + size_t *, struct otp_info *); 61 65 static int cfi_amdstd_secsi_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *); 66 + static int cfi_amdstd_read_fact_prot_reg(struct mtd_info *, loff_t, size_t, 67 + size_t *, u_char *); 68 + static int cfi_amdstd_read_user_prot_reg(struct mtd_info *, loff_t, size_t, 69 + size_t *, u_char *); 70 + static int cfi_amdstd_write_user_prot_reg(struct mtd_info *, loff_t, size_t, 71 + size_t *, u_char *); 72 + static int cfi_amdstd_lock_user_prot_reg(struct mtd_info *, loff_t, size_t); 62 73 63 74 static int cfi_amdstd_panic_write(struct mtd_info *mtd, loff_t to, size_t len, 64 75 size_t *retlen, const u_char *buf); ··· 529 518 mtd->_sync = cfi_amdstd_sync; 530 519 mtd->_suspend = cfi_amdstd_suspend; 531 520 mtd->_resume = cfi_amdstd_resume; 521 + mtd->_read_user_prot_reg = cfi_amdstd_read_user_prot_reg; 522 + mtd->_read_fact_prot_reg = cfi_amdstd_read_fact_prot_reg; 523 + mtd->_get_fact_prot_info = cfi_amdstd_get_fact_prot_info; 524 + mtd->_get_user_prot_info = cfi_amdstd_get_user_prot_info; 525 + mtd->_write_user_prot_reg = cfi_amdstd_write_user_prot_reg; 526 + mtd->_lock_user_prot_reg = cfi_amdstd_lock_user_prot_reg; 532 527 mtd->flags = MTD_CAP_NORFLASH; 533 528 mtd->name = map->name; 534 529 mtd->writesize = 1; ··· 645 628 cfi->chips[i].word_write_time = 1<<cfi->cfiq->WordWriteTimeoutTyp; 646 629 cfi->chips[i].buffer_write_time = 1<<cfi->cfiq->BufWriteTimeoutTyp; 647 630 cfi->chips[i].erase_time = 1<<cfi->cfiq->BlockEraseTimeoutTyp; 631 + /* 632 + * First calculate the timeout max according to timeout field 633 + * of struct cfi_ident that 
probed from chip's CFI aera, if 634 + * available. Specify a minimum of 2000us, in case the CFI data 635 + * is wrong. 636 + */ 637 + if (cfi->cfiq->BufWriteTimeoutTyp && 638 + cfi->cfiq->BufWriteTimeoutMax) 639 + cfi->chips[i].buffer_write_time_max = 640 + 1 << (cfi->cfiq->BufWriteTimeoutTyp + 641 + cfi->cfiq->BufWriteTimeoutMax); 642 + else 643 + cfi->chips[i].buffer_write_time_max = 0; 644 + 645 + cfi->chips[i].buffer_write_time_max = 646 + max(cfi->chips[i].buffer_write_time_max, 2000); 647 + 648 648 cfi->chips[i].ref_point_counter = 0; 649 649 init_waitqueue_head(&(cfi->chips[i].wq)); 650 650 } ··· 1171 1137 return ret; 1172 1138 } 1173 1139 1140 + typedef int (*otp_op_t)(struct map_info *map, struct flchip *chip, 1141 + loff_t adr, size_t len, u_char *buf, size_t grouplen); 1174 1142 1175 - static inline int do_read_secsi_onechip(struct map_info *map, struct flchip *chip, loff_t adr, size_t len, u_char *buf) 1143 + static inline void otp_enter(struct map_info *map, struct flchip *chip, 1144 + loff_t adr, size_t len) 1145 + { 1146 + struct cfi_private *cfi = map->fldrv_priv; 1147 + 1148 + cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, 1149 + cfi->device_type, NULL); 1150 + cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, 1151 + cfi->device_type, NULL); 1152 + cfi_send_gen_cmd(0x88, cfi->addr_unlock1, chip->start, map, cfi, 1153 + cfi->device_type, NULL); 1154 + 1155 + INVALIDATE_CACHED_RANGE(map, chip->start + adr, len); 1156 + } 1157 + 1158 + static inline void otp_exit(struct map_info *map, struct flchip *chip, 1159 + loff_t adr, size_t len) 1160 + { 1161 + struct cfi_private *cfi = map->fldrv_priv; 1162 + 1163 + cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, 1164 + cfi->device_type, NULL); 1165 + cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, 1166 + cfi->device_type, NULL); 1167 + cfi_send_gen_cmd(0x90, cfi->addr_unlock1, chip->start, map, cfi, 1168 + cfi->device_type, NULL); 1169 + 
cfi_send_gen_cmd(0x00, cfi->addr_unlock1, chip->start, map, cfi, 1170 + cfi->device_type, NULL); 1171 + 1172 + INVALIDATE_CACHED_RANGE(map, chip->start + adr, len); 1173 + } 1174 + 1175 + static inline int do_read_secsi_onechip(struct map_info *map, 1176 + struct flchip *chip, loff_t adr, 1177 + size_t len, u_char *buf, 1178 + size_t grouplen) 1176 1179 { 1177 1180 DECLARE_WAITQUEUE(wait, current); 1178 1181 unsigned long timeo = jiffies + HZ; 1179 - struct cfi_private *cfi = map->fldrv_priv; 1180 1182 1181 1183 retry: 1182 1184 mutex_lock(&chip->mutex); ··· 1234 1164 1235 1165 chip->state = FL_READY; 1236 1166 1237 - cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); 1238 - cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL); 1239 - cfi_send_gen_cmd(0x88, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); 1240 - 1167 + otp_enter(map, chip, adr, len); 1241 1168 map_copy_from(map, buf, adr, len); 1242 - 1243 - cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); 1244 - cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL); 1245 - cfi_send_gen_cmd(0x90, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); 1246 - cfi_send_gen_cmd(0x00, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); 1169 + otp_exit(map, chip, adr, len); 1247 1170 1248 1171 wake_up(&chip->wq); 1249 1172 mutex_unlock(&chip->mutex); ··· 1268 1205 else 1269 1206 thislen = len; 1270 1207 1271 - ret = do_read_secsi_onechip(map, &cfi->chips[chipnum], ofs, thislen, buf); 1208 + ret = do_read_secsi_onechip(map, &cfi->chips[chipnum], ofs, 1209 + thislen, buf, 0); 1272 1210 if (ret) 1273 1211 break; 1274 1212 ··· 1283 1219 return ret; 1284 1220 } 1285 1221 1222 + static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip, 1223 + unsigned long adr, map_word datum, 1224 + int mode); 1286 1225 1287 - 
static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip, unsigned long adr, map_word datum) 1226 + static int do_otp_write(struct map_info *map, struct flchip *chip, loff_t adr, 1227 + size_t len, u_char *buf, size_t grouplen) 1228 + { 1229 + int ret; 1230 + while (len) { 1231 + unsigned long bus_ofs = adr & ~(map_bankwidth(map)-1); 1232 + int gap = adr - bus_ofs; 1233 + int n = min_t(int, len, map_bankwidth(map) - gap); 1234 + map_word datum; 1235 + 1236 + if (n != map_bankwidth(map)) { 1237 + /* partial write of a word, load old contents */ 1238 + otp_enter(map, chip, bus_ofs, map_bankwidth(map)); 1239 + datum = map_read(map, bus_ofs); 1240 + otp_exit(map, chip, bus_ofs, map_bankwidth(map)); 1241 + } 1242 + 1243 + datum = map_word_load_partial(map, datum, buf, gap, n); 1244 + ret = do_write_oneword(map, chip, bus_ofs, datum, FL_OTP_WRITE); 1245 + if (ret) 1246 + return ret; 1247 + 1248 + adr += n; 1249 + buf += n; 1250 + len -= n; 1251 + } 1252 + 1253 + return 0; 1254 + } 1255 + 1256 + static int do_otp_lock(struct map_info *map, struct flchip *chip, loff_t adr, 1257 + size_t len, u_char *buf, size_t grouplen) 1258 + { 1259 + struct cfi_private *cfi = map->fldrv_priv; 1260 + uint8_t lockreg; 1261 + unsigned long timeo; 1262 + int ret; 1263 + 1264 + /* make sure area matches group boundaries */ 1265 + if ((adr != 0) || (len != grouplen)) 1266 + return -EINVAL; 1267 + 1268 + mutex_lock(&chip->mutex); 1269 + ret = get_chip(map, chip, chip->start, FL_LOCKING); 1270 + if (ret) { 1271 + mutex_unlock(&chip->mutex); 1272 + return ret; 1273 + } 1274 + chip->state = FL_LOCKING; 1275 + 1276 + /* Enter lock register command */ 1277 + cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, 1278 + cfi->device_type, NULL); 1279 + cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, 1280 + cfi->device_type, NULL); 1281 + cfi_send_gen_cmd(0x40, cfi->addr_unlock1, chip->start, map, cfi, 1282 + cfi->device_type, NULL); 1283 + 1284 + /* 
read lock register */ 1285 + lockreg = cfi_read_query(map, 0); 1286 + 1287 + /* set bit 0 to protect extended memory block */ 1288 + lockreg &= ~0x01; 1289 + 1290 + /* set bit 0 to protect extended memory block */ 1291 + /* write lock register */ 1292 + map_write(map, CMD(0xA0), chip->start); 1293 + map_write(map, CMD(lockreg), chip->start); 1294 + 1295 + /* wait for chip to become ready */ 1296 + timeo = jiffies + msecs_to_jiffies(2); 1297 + for (;;) { 1298 + if (chip_ready(map, adr)) 1299 + break; 1300 + 1301 + if (time_after(jiffies, timeo)) { 1302 + pr_err("Waiting for chip to be ready timed out.\n"); 1303 + ret = -EIO; 1304 + break; 1305 + } 1306 + UDELAY(map, chip, 0, 1); 1307 + } 1308 + 1309 + /* exit protection commands */ 1310 + map_write(map, CMD(0x90), chip->start); 1311 + map_write(map, CMD(0x00), chip->start); 1312 + 1313 + chip->state = FL_READY; 1314 + put_chip(map, chip, chip->start); 1315 + mutex_unlock(&chip->mutex); 1316 + 1317 + return ret; 1318 + } 1319 + 1320 + static int cfi_amdstd_otp_walk(struct mtd_info *mtd, loff_t from, size_t len, 1321 + size_t *retlen, u_char *buf, 1322 + otp_op_t action, int user_regs) 1323 + { 1324 + struct map_info *map = mtd->priv; 1325 + struct cfi_private *cfi = map->fldrv_priv; 1326 + int ofs_factor = cfi->interleave * cfi->device_type; 1327 + unsigned long base; 1328 + int chipnum; 1329 + struct flchip *chip; 1330 + uint8_t otp, lockreg; 1331 + int ret; 1332 + 1333 + size_t user_size, factory_size, otpsize; 1334 + loff_t user_offset, factory_offset, otpoffset; 1335 + int user_locked = 0, otplocked; 1336 + 1337 + *retlen = 0; 1338 + 1339 + for (chipnum = 0; chipnum < cfi->numchips; chipnum++) { 1340 + chip = &cfi->chips[chipnum]; 1341 + factory_size = 0; 1342 + user_size = 0; 1343 + 1344 + /* Micron M29EW family */ 1345 + if (is_m29ew(cfi)) { 1346 + base = chip->start; 1347 + 1348 + /* check whether secsi area is factory locked 1349 + or user lockable */ 1350 + mutex_lock(&chip->mutex); 1351 + ret = 
get_chip(map, chip, base, FL_CFI_QUERY); 1352 + if (ret) { 1353 + mutex_unlock(&chip->mutex); 1354 + return ret; 1355 + } 1356 + cfi_qry_mode_on(base, map, cfi); 1357 + otp = cfi_read_query(map, base + 0x3 * ofs_factor); 1358 + cfi_qry_mode_off(base, map, cfi); 1359 + put_chip(map, chip, base); 1360 + mutex_unlock(&chip->mutex); 1361 + 1362 + if (otp & 0x80) { 1363 + /* factory locked */ 1364 + factory_offset = 0; 1365 + factory_size = 0x100; 1366 + } else { 1367 + /* customer lockable */ 1368 + user_offset = 0; 1369 + user_size = 0x100; 1370 + 1371 + mutex_lock(&chip->mutex); 1372 + ret = get_chip(map, chip, base, FL_LOCKING); 1373 + 1374 + /* Enter lock register command */ 1375 + cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, 1376 + chip->start, map, cfi, 1377 + cfi->device_type, NULL); 1378 + cfi_send_gen_cmd(0x55, cfi->addr_unlock2, 1379 + chip->start, map, cfi, 1380 + cfi->device_type, NULL); 1381 + cfi_send_gen_cmd(0x40, cfi->addr_unlock1, 1382 + chip->start, map, cfi, 1383 + cfi->device_type, NULL); 1384 + /* read lock register */ 1385 + lockreg = cfi_read_query(map, 0); 1386 + /* exit protection commands */ 1387 + map_write(map, CMD(0x90), chip->start); 1388 + map_write(map, CMD(0x00), chip->start); 1389 + put_chip(map, chip, chip->start); 1390 + mutex_unlock(&chip->mutex); 1391 + 1392 + user_locked = ((lockreg & 0x01) == 0x00); 1393 + } 1394 + } 1395 + 1396 + otpsize = user_regs ? user_size : factory_size; 1397 + if (!otpsize) 1398 + continue; 1399 + otpoffset = user_regs ? user_offset : factory_offset; 1400 + otplocked = user_regs ? 
user_locked : 1; 1401 + 1402 + if (!action) { 1403 + /* return otpinfo */ 1404 + struct otp_info *otpinfo; 1405 + len -= sizeof(*otpinfo); 1406 + if (len <= 0) 1407 + return -ENOSPC; 1408 + otpinfo = (struct otp_info *)buf; 1409 + otpinfo->start = from; 1410 + otpinfo->length = otpsize; 1411 + otpinfo->locked = otplocked; 1412 + buf += sizeof(*otpinfo); 1413 + *retlen += sizeof(*otpinfo); 1414 + from += otpsize; 1415 + } else if ((from < otpsize) && (len > 0)) { 1416 + size_t size; 1417 + size = (len < otpsize - from) ? len : otpsize - from; 1418 + ret = action(map, chip, otpoffset + from, size, buf, 1419 + otpsize); 1420 + if (ret < 0) 1421 + return ret; 1422 + 1423 + buf += size; 1424 + len -= size; 1425 + *retlen += size; 1426 + from = 0; 1427 + } else { 1428 + from -= otpsize; 1429 + } 1430 + } 1431 + return 0; 1432 + } 1433 + 1434 + static int cfi_amdstd_get_fact_prot_info(struct mtd_info *mtd, size_t len, 1435 + size_t *retlen, struct otp_info *buf) 1436 + { 1437 + return cfi_amdstd_otp_walk(mtd, 0, len, retlen, (u_char *)buf, 1438 + NULL, 0); 1439 + } 1440 + 1441 + static int cfi_amdstd_get_user_prot_info(struct mtd_info *mtd, size_t len, 1442 + size_t *retlen, struct otp_info *buf) 1443 + { 1444 + return cfi_amdstd_otp_walk(mtd, 0, len, retlen, (u_char *)buf, 1445 + NULL, 1); 1446 + } 1447 + 1448 + static int cfi_amdstd_read_fact_prot_reg(struct mtd_info *mtd, loff_t from, 1449 + size_t len, size_t *retlen, 1450 + u_char *buf) 1451 + { 1452 + return cfi_amdstd_otp_walk(mtd, from, len, retlen, 1453 + buf, do_read_secsi_onechip, 0); 1454 + } 1455 + 1456 + static int cfi_amdstd_read_user_prot_reg(struct mtd_info *mtd, loff_t from, 1457 + size_t len, size_t *retlen, 1458 + u_char *buf) 1459 + { 1460 + return cfi_amdstd_otp_walk(mtd, from, len, retlen, 1461 + buf, do_read_secsi_onechip, 1); 1462 + } 1463 + 1464 + static int cfi_amdstd_write_user_prot_reg(struct mtd_info *mtd, loff_t from, 1465 + size_t len, size_t *retlen, 1466 + u_char *buf) 1467 + { 1468 + 
return cfi_amdstd_otp_walk(mtd, from, len, retlen, buf, 1469 + do_otp_write, 1); 1470 + } 1471 + 1472 + static int cfi_amdstd_lock_user_prot_reg(struct mtd_info *mtd, loff_t from, 1473 + size_t len) 1474 + { 1475 + size_t retlen; 1476 + return cfi_amdstd_otp_walk(mtd, from, len, &retlen, NULL, 1477 + do_otp_lock, 1); 1478 + } 1479 + 1480 + static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip, 1481 + unsigned long adr, map_word datum, 1482 + int mode) 1288 1483 { 1289 1484 struct cfi_private *cfi = map->fldrv_priv; 1290 1485 unsigned long timeo = jiffies + HZ; ··· 1564 1241 adr += chip->start; 1565 1242 1566 1243 mutex_lock(&chip->mutex); 1567 - ret = get_chip(map, chip, adr, FL_WRITING); 1244 + ret = get_chip(map, chip, adr, mode); 1568 1245 if (ret) { 1569 1246 mutex_unlock(&chip->mutex); 1570 1247 return ret; ··· 1572 1249 1573 1250 pr_debug("MTD %s(): WRITE 0x%.8lx(0x%.8lx)\n", 1574 1251 __func__, adr, datum.x[0] ); 1252 + 1253 + if (mode == FL_OTP_WRITE) 1254 + otp_enter(map, chip, adr, map_bankwidth(map)); 1575 1255 1576 1256 /* 1577 1257 * Check for a NOP for the case when the datum to write is already ··· 1592 1266 XIP_INVAL_CACHED_RANGE(map, adr, map_bankwidth(map)); 1593 1267 ENABLE_VPP(map); 1594 1268 xip_disable(map, chip, adr); 1269 + 1595 1270 retry: 1596 1271 cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); 1597 1272 cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL); 1598 1273 cfi_send_gen_cmd(0xA0, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); 1599 1274 map_write(map, datum, adr); 1600 - chip->state = FL_WRITING; 1275 + chip->state = mode; 1601 1276 1602 1277 INVALIDATE_CACHE_UDELAY(map, chip, 1603 1278 adr, map_bankwidth(map), ··· 1607 1280 /* See comment above for timeout value. 
*/ 1608 1281 timeo = jiffies + uWriteTimeout; 1609 1282 for (;;) { 1610 - if (chip->state != FL_WRITING) { 1283 + if (chip->state != mode) { 1611 1284 /* Someone's suspended the write. Sleep */ 1612 1285 DECLARE_WAITQUEUE(wait, current); 1613 1286 ··· 1647 1320 } 1648 1321 xip_enable(map, chip, adr); 1649 1322 op_done: 1323 + if (mode == FL_OTP_WRITE) 1324 + otp_exit(map, chip, adr, map_bankwidth(map)); 1650 1325 chip->state = FL_READY; 1651 1326 DISABLE_VPP(map); 1652 1327 put_chip(map, chip, adr); ··· 1704 1375 tmp_buf = map_word_load_partial(map, tmp_buf, buf, i, n); 1705 1376 1706 1377 ret = do_write_oneword(map, &cfi->chips[chipnum], 1707 - bus_ofs, tmp_buf); 1378 + bus_ofs, tmp_buf, FL_WRITING); 1708 1379 if (ret) 1709 1380 return ret; 1710 1381 ··· 1728 1399 datum = map_word_load(map, buf); 1729 1400 1730 1401 ret = do_write_oneword(map, &cfi->chips[chipnum], 1731 - ofs, datum); 1402 + ofs, datum, FL_WRITING); 1732 1403 if (ret) 1733 1404 return ret; 1734 1405 ··· 1771 1442 tmp_buf = map_word_load_partial(map, tmp_buf, buf, 0, len); 1772 1443 1773 1444 ret = do_write_oneword(map, &cfi->chips[chipnum], 1774 - ofs, tmp_buf); 1445 + ofs, tmp_buf, FL_WRITING); 1775 1446 if (ret) 1776 1447 return ret; 1777 1448 ··· 1791 1462 { 1792 1463 struct cfi_private *cfi = map->fldrv_priv; 1793 1464 unsigned long timeo = jiffies + HZ; 1794 - /* see comments in do_write_oneword() regarding uWriteTimeo. */ 1795 - unsigned long uWriteTimeout = ( HZ / 1000 ) + 1; 1465 + /* 1466 + * Timeout is calculated according to CFI data, if available. 1467 + * See more comments in cfi_cmdset_0002(). 1468 + */ 1469 + unsigned long uWriteTimeout = 1470 + usecs_to_jiffies(chip->buffer_write_time_max); 1796 1471 int ret = -EIO; 1797 1472 unsigned long cmd_adr; 1798 1473 int z, words;
+1 -1
drivers/mtd/cmdlinepart.c
···
26 26	 * <mtd-id>  := unique name used in mapping driver/device (mtd->name)
27 27	 * <size>    := standard linux memsize OR "-" to denote all remaining space
28 28	 *              size is automatically truncated at end of device
29 -	 *              if specified or trucated size is 0 the part is skipped
29 +	 *              if specified or truncated size is 0 the part is skipped
30 30	 * <offset>  := standard linux memsize
31 31	 *              if omitted the part will immediately follow the previous part
32 32	 *              or 0 if the first part
+3 -4
drivers/mtd/devices/phram.c
···
181 181		if (len > 64)
182 182			return -ENOSPC;
183 183
184 -		name = kmalloc(len, GFP_KERNEL);
184 +		name = kstrdup(token, GFP_KERNEL);
185 185		if (!name)
186 186			return -ENOMEM;
187 -
188 -		strcpy(name, token);
190 188		*pname = name;
191 189		return 0;
···
193 195	static inline void kill_final_newline(char *str)
194 196	{
195 197		char *newline = strrchr(str, '\n');
198 +
196 199		if (newline && !newline[1])
197 200			*newline = 0;
198 201	}
···
232 233		strcpy(str, val);
233 234		kill_final_newline(str);
234 235
235 -		for (i=0; i<3; i++)
236 +		for (i = 0; i < 3; i++)
236 237			token[i] = strsep(&str, ",");
237 238
238 239		if (str)
-4
drivers/mtd/ftl.c
···
111 111		struct mtd_blktrans_dev mbd;
112 112		uint32_t state;
113 113		uint32_t *VirtualBlockMap;
114 -		uint32_t *VirtualPageMap;
115 114		uint32_t FreeTotal;
116 115		struct eun_info_t {
117 116			uint32_t Offset;
···
1034 1035	{
1035 1036		vfree(part->VirtualBlockMap);
1036 1037		part->VirtualBlockMap = NULL;
1037 -		kfree(part->VirtualPageMap);
1038 -		part->VirtualPageMap = NULL;
1039 1038		kfree(part->EUNInfo);
1040 1039		part->EUNInfo = NULL;
1041 1040		kfree(part->XferInfo);
···
1072 1075		return;
1073 1076	}
1074 1077
1075 -		ftl_freepart(partition);
1076 1078		kfree(partition);
1077 1079
1078 1080
-2
drivers/mtd/maps/rbtx4939-flash.c
···
35 35		return 0;
36 36
37 37	if (info->mtd) {
38 -		struct rbtx4939_flash_data *pdata = dev_get_platdata(&dev->dev);
39 -
40 38		mtd_device_unregister(info->mtd);
41 39		map_destroy(info->mtd);
42 40	}
+58 -3
drivers/mtd/mtdcore.c
···
298 298	}
299 299	static DEVICE_ATTR(ecc_step_size, S_IRUGO, mtd_ecc_step_size_show, NULL);
300 300
301 +	static ssize_t mtd_ecc_stats_corrected_show(struct device *dev,
302 +			struct device_attribute *attr, char *buf)
303 +	{
304 +		struct mtd_info *mtd = dev_get_drvdata(dev);
305 +		struct mtd_ecc_stats *ecc_stats = &mtd->ecc_stats;
306 +
307 +		return snprintf(buf, PAGE_SIZE, "%u\n", ecc_stats->corrected);
308 +	}
309 +	static DEVICE_ATTR(corrected_bits, S_IRUGO,
310 +		       mtd_ecc_stats_corrected_show, NULL);
311 +
312 +	static ssize_t mtd_ecc_stats_errors_show(struct device *dev,
313 +			struct device_attribute *attr, char *buf)
314 +	{
315 +		struct mtd_info *mtd = dev_get_drvdata(dev);
316 +		struct mtd_ecc_stats *ecc_stats = &mtd->ecc_stats;
317 +
318 +		return snprintf(buf, PAGE_SIZE, "%u\n", ecc_stats->failed);
319 +	}
320 +	static DEVICE_ATTR(ecc_failures, S_IRUGO, mtd_ecc_stats_errors_show, NULL);
321 +
322 +	static ssize_t mtd_badblocks_show(struct device *dev,
323 +			struct device_attribute *attr, char *buf)
324 +	{
325 +		struct mtd_info *mtd = dev_get_drvdata(dev);
326 +		struct mtd_ecc_stats *ecc_stats = &mtd->ecc_stats;
327 +
328 +		return snprintf(buf, PAGE_SIZE, "%u\n", ecc_stats->badblocks);
329 +	}
330 +	static DEVICE_ATTR(bad_blocks, S_IRUGO, mtd_badblocks_show, NULL);
331 +
332 +	static ssize_t mtd_bbtblocks_show(struct device *dev,
333 +			struct device_attribute *attr, char *buf)
334 +	{
335 +		struct mtd_info *mtd = dev_get_drvdata(dev);
336 +		struct mtd_ecc_stats *ecc_stats = &mtd->ecc_stats;
337 +
338 +		return snprintf(buf, PAGE_SIZE, "%u\n", ecc_stats->bbtblocks);
339 +	}
340 +	static DEVICE_ATTR(bbt_blocks, S_IRUGO, mtd_bbtblocks_show, NULL);
341 +
301 342	static struct attribute *mtd_attrs[] = {
302 343		&dev_attr_type.attr,
303 344		&dev_attr_flags.attr,
···
351 310		&dev_attr_name.attr,
352 311		&dev_attr_ecc_strength.attr,
353 312		&dev_attr_ecc_step_size.attr,
313 +		&dev_attr_corrected_bits.attr,
314 +		&dev_attr_ecc_failures.attr,
315 +		&dev_attr_bad_blocks.attr,
316 +		&dev_attr_bbt_blocks.attr,
354 317		&dev_attr_bitflip_threshold.attr,
355 318		NULL,
356 319	};
···
1043 998	}
1044 999	EXPORT_SYMBOL_GPL(mtd_is_locked);
1045 1000
1046 -	int mtd_block_isbad(struct mtd_info *mtd, loff_t ofs)
1001 +	int mtd_block_isreserved(struct mtd_info *mtd, loff_t ofs)
1047 1002	{
1048 -		if (!mtd->_block_isbad)
1049 -			return 0;
1050 1003		if (ofs < 0 || ofs > mtd->size)
1051 1004			return -EINVAL;
1005 +		if (!mtd->_block_isreserved)
1006 +			return 0;
1007 +		return mtd->_block_isreserved(mtd, ofs);
1008 +	}
1009 +	EXPORT_SYMBOL_GPL(mtd_block_isreserved);
1010 +
1011 +	int mtd_block_isbad(struct mtd_info *mtd, loff_t ofs)
1012 +	{
1013 +		if (ofs < 0 || ofs > mtd->size)
1014 +			return -EINVAL;
1015 +		if (!mtd->_block_isbad)
1016 +			return 0;
1052 1017		return mtd->_block_isbad(mtd, ofs);
1053 1018	}
1054 1019	EXPORT_SYMBOL_GPL(mtd_block_isbad);
+12 -1
drivers/mtd/mtdpart.c
···
290 290		part->master->_resume(part->master);
291 291	}
292 292
293 +	static int part_block_isreserved(struct mtd_info *mtd, loff_t ofs)
294 +	{
295 +		struct mtd_part *part = PART(mtd);
296 +		ofs += part->offset;
297 +		return part->master->_block_isreserved(part->master, ofs);
298 +	}
299 +
293 300	static int part_block_isbad(struct mtd_info *mtd, loff_t ofs)
294 301	{
295 302		struct mtd_part *part = PART(mtd);
···
429 422		slave->mtd._unlock = part_unlock;
430 423	if (master->_is_locked)
431 424		slave->mtd._is_locked = part_is_locked;
425 +	if (master->_block_isreserved)
426 +		slave->mtd._block_isreserved = part_block_isreserved;
432 427	if (master->_block_isbad)
433 428		slave->mtd._block_isbad = part_block_isbad;
434 429	if (master->_block_markbad)
···
535 526		uint64_t offs = 0;
536 527
537 528		while (offs < slave->mtd.size) {
538 -			if (mtd_block_isbad(master, offs + slave->offset))
529 +			if (mtd_block_isreserved(master, offs + slave->offset))
530 +				slave->mtd.ecc_stats.bbtblocks++;
531 +			else if (mtd_block_isbad(master, offs + slave->offset))
539 532				slave->mtd.ecc_stats.badblocks++;
540 533			offs += slave->mtd.erasesize;
541 534		}
+1 -1
drivers/mtd/nand/Makefile
···
50 50	obj-$(CONFIG_MTD_NAND_XWAY)		+= xway_nand.o
51 51	obj-$(CONFIG_MTD_NAND_BCM47XXNFLASH)	+= bcm47xxnflash/
52 52
53 -	nand-objs := nand_base.o nand_bbt.o
53 +	nand-objs := nand_base.o nand_bbt.o nand_timings.o
+106 -36
drivers/mtd/nand/atmel_nand.c
··· 97 97 bool write_by_sram; 98 98 99 99 bool is_initialized; 100 - struct completion comp_nfc; 100 + struct completion comp_ready; 101 + struct completion comp_cmd_done; 102 + struct completion comp_xfer_done; 101 103 102 104 /* Point to the sram bank which include readed data via NFC */ 103 105 void __iomem *data_in_sram; ··· 863 861 { 864 862 struct nand_chip *nand_chip = mtd->priv; 865 863 struct atmel_nand_host *host = nand_chip->priv; 866 - int i, err_nbr, eccbytes; 864 + int i, err_nbr; 867 865 uint8_t *buf_pos; 868 866 int total_err = 0; 869 867 870 - eccbytes = nand_chip->ecc.bytes; 871 - for (i = 0; i < eccbytes; i++) 868 + for (i = 0; i < nand_chip->ecc.total; i++) 872 869 if (ecc[i] != 0xff) 873 870 goto normal_check; 874 871 /* Erased page, return OK */ ··· 929 928 struct nand_chip *chip, uint8_t *buf, int oob_required, int page) 930 929 { 931 930 struct atmel_nand_host *host = chip->priv; 932 - int eccsize = chip->ecc.size; 931 + int eccsize = chip->ecc.size * chip->ecc.steps; 933 932 uint8_t *oob = chip->oob_poi; 934 933 uint32_t *eccpos = chip->ecc.layout->eccpos; 935 934 uint32_t stat; ··· 1170 1169 goto err; 1171 1170 } 1172 1171 1173 - /* ECC is calculated for the whole page (1 step) */ 1174 - nand_chip->ecc.size = mtd->writesize; 1172 + nand_chip->ecc.size = sector_size; 1175 1173 1176 1174 /* set ECC page size and oob layout */ 1177 1175 switch (mtd->writesize) { ··· 1185 1185 host->pmecc_index_of = host->pmecc_rom_base + 1186 1186 host->pmecc_lookup_table_offset; 1187 1187 1188 - nand_chip->ecc.steps = 1; 1188 + nand_chip->ecc.steps = host->pmecc_sector_number; 1189 1189 nand_chip->ecc.strength = cap; 1190 - nand_chip->ecc.bytes = host->pmecc_bytes_per_sector * 1190 + nand_chip->ecc.bytes = host->pmecc_bytes_per_sector; 1191 + nand_chip->ecc.total = host->pmecc_bytes_per_sector * 1191 1192 host->pmecc_sector_number; 1192 - if (nand_chip->ecc.bytes > mtd->oobsize - 2) { 1193 + if (nand_chip->ecc.total > mtd->oobsize - 2) { 1193 1194 
dev_err(host->dev, "No room for ECC bytes\n"); 1194 1195 err_no = -EINVAL; 1195 1196 goto err; 1196 1197 } 1197 1198 pmecc_config_ecc_layout(&atmel_pmecc_oobinfo, 1198 1199 mtd->oobsize, 1199 - nand_chip->ecc.bytes); 1200 + nand_chip->ecc.total); 1201 + 1200 1202 nand_chip->ecc.layout = &atmel_pmecc_oobinfo; 1201 1203 break; 1202 1204 case 512: ··· 1574 1572 return 0; 1575 1573 } 1576 1574 1575 + static inline u32 nfc_read_status(struct atmel_nand_host *host) 1576 + { 1577 + u32 err_flags = NFC_SR_DTOE | NFC_SR_UNDEF | NFC_SR_AWB | NFC_SR_ASE; 1578 + u32 nfc_status = nfc_readl(host->nfc->hsmc_regs, SR); 1579 + 1580 + if (unlikely(nfc_status & err_flags)) { 1581 + if (nfc_status & NFC_SR_DTOE) 1582 + dev_err(host->dev, "NFC: Waiting Nand R/B Timeout Error\n"); 1583 + else if (nfc_status & NFC_SR_UNDEF) 1584 + dev_err(host->dev, "NFC: Access Undefined Area Error\n"); 1585 + else if (nfc_status & NFC_SR_AWB) 1586 + dev_err(host->dev, "NFC: Access memory While NFC is busy\n"); 1587 + else if (nfc_status & NFC_SR_ASE) 1588 + dev_err(host->dev, "NFC: Access memory Size Error\n"); 1589 + } 1590 + 1591 + return nfc_status; 1592 + } 1593 + 1577 1594 /* SMC interrupt service routine */ 1578 1595 static irqreturn_t hsmc_interrupt(int irq, void *dev_id) 1579 1596 { 1580 1597 struct atmel_nand_host *host = dev_id; 1581 1598 u32 status, mask, pending; 1582 - irqreturn_t ret = IRQ_HANDLED; 1599 + irqreturn_t ret = IRQ_NONE; 1583 1600 1584 - status = nfc_readl(host->nfc->hsmc_regs, SR); 1601 + status = nfc_read_status(host); 1585 1602 mask = nfc_readl(host->nfc->hsmc_regs, IMR); 1586 1603 pending = status & mask; 1587 1604 1588 1605 if (pending & NFC_SR_XFR_DONE) { 1589 - complete(&host->nfc->comp_nfc); 1606 + complete(&host->nfc->comp_xfer_done); 1590 1607 nfc_writel(host->nfc->hsmc_regs, IDR, NFC_SR_XFR_DONE); 1591 - } else if (pending & NFC_SR_RB_EDGE) { 1592 - complete(&host->nfc->comp_nfc); 1608 + ret = IRQ_HANDLED; 1609 + } 1610 + if (pending & NFC_SR_RB_EDGE) { 1611 + 
complete(&host->nfc->comp_ready); 1593 1612 nfc_writel(host->nfc->hsmc_regs, IDR, NFC_SR_RB_EDGE); 1594 - } else if (pending & NFC_SR_CMD_DONE) { 1595 - complete(&host->nfc->comp_nfc); 1613 + ret = IRQ_HANDLED; 1614 + } 1615 + if (pending & NFC_SR_CMD_DONE) { 1616 + complete(&host->nfc->comp_cmd_done); 1596 1617 nfc_writel(host->nfc->hsmc_regs, IDR, NFC_SR_CMD_DONE); 1597 - } else { 1598 - ret = IRQ_NONE; 1618 + ret = IRQ_HANDLED; 1599 1619 } 1600 1620 1601 1621 return ret; 1602 1622 } 1603 1623 1604 1624 /* NFC(Nand Flash Controller) related functions */ 1605 - static int nfc_wait_interrupt(struct atmel_nand_host *host, u32 flag) 1625 + static void nfc_prepare_interrupt(struct atmel_nand_host *host, u32 flag) 1606 1626 { 1607 - unsigned long timeout; 1608 - init_completion(&host->nfc->comp_nfc); 1627 + if (flag & NFC_SR_XFR_DONE) 1628 + init_completion(&host->nfc->comp_xfer_done); 1629 + 1630 + if (flag & NFC_SR_RB_EDGE) 1631 + init_completion(&host->nfc->comp_ready); 1632 + 1633 + if (flag & NFC_SR_CMD_DONE) 1634 + init_completion(&host->nfc->comp_cmd_done); 1609 1635 1610 1636 /* Enable interrupt that need to wait for */ 1611 1637 nfc_writel(host->nfc->hsmc_regs, IER, flag); 1638 + } 1612 1639 1613 - timeout = wait_for_completion_timeout(&host->nfc->comp_nfc, 1614 - msecs_to_jiffies(NFC_TIME_OUT_MS)); 1615 - if (timeout) 1616 - return 0; 1640 + static int nfc_wait_interrupt(struct atmel_nand_host *host, u32 flag) 1641 + { 1642 + int i, index = 0; 1643 + struct completion *comp[3]; /* Support 3 interrupt completion */ 1617 1644 1618 - /* Time out to wait for the interrupt */ 1645 + if (flag & NFC_SR_XFR_DONE) 1646 + comp[index++] = &host->nfc->comp_xfer_done; 1647 + 1648 + if (flag & NFC_SR_RB_EDGE) 1649 + comp[index++] = &host->nfc->comp_ready; 1650 + 1651 + if (flag & NFC_SR_CMD_DONE) 1652 + comp[index++] = &host->nfc->comp_cmd_done; 1653 + 1654 + if (index == 0) { 1655 + dev_err(host->dev, "Unkown interrupt flag: 0x%08x\n", flag); 1656 + return -EINVAL; 1657 + 
} 1658 + 1659 + for (i = 0; i < index; i++) { 1660 + if (wait_for_completion_timeout(comp[i], 1661 + msecs_to_jiffies(NFC_TIME_OUT_MS))) 1662 + continue; /* wait for next completion */ 1663 + else 1664 + goto err_timeout; 1665 + } 1666 + 1667 + return 0; 1668 + 1669 + err_timeout: 1619 1670 dev_err(host->dev, "Time out to wait for interrupt: 0x%08x\n", flag); 1671 + /* Disable the interrupt as it is not handled by interrupt handler */ 1672 + nfc_writel(host->nfc->hsmc_regs, IDR, flag); 1620 1673 return -ETIMEDOUT; 1621 1674 } 1622 1675 ··· 1679 1622 unsigned int cmd, unsigned int addr, unsigned char cycle0) 1680 1623 { 1681 1624 unsigned long timeout; 1625 + u32 flag = NFC_SR_CMD_DONE; 1626 + flag |= cmd & NFCADDR_CMD_DATAEN ? NFC_SR_XFR_DONE : 0; 1627 + 1682 1628 dev_dbg(host->dev, 1683 1629 "nfc_cmd: 0x%08x, addr1234: 0x%08x, cycle0: 0x%02x\n", 1684 1630 cmd, addr, cycle0); ··· 1695 1635 return -ETIMEDOUT; 1696 1636 } 1697 1637 } 1638 + 1639 + nfc_prepare_interrupt(host, flag); 1698 1640 nfc_writel(host->nfc->hsmc_regs, CYCLE0, cycle0); 1699 1641 nfc_cmd_addr1234_writel(cmd, addr, host->nfc->base_cmd_regs); 1700 - return nfc_wait_interrupt(host, NFC_SR_CMD_DONE); 1642 + return nfc_wait_interrupt(host, flag); 1701 1643 } 1702 1644 1703 1645 static int nfc_device_ready(struct mtd_info *mtd) 1704 1646 { 1647 + u32 status, mask; 1705 1648 struct nand_chip *nand_chip = mtd->priv; 1706 1649 struct atmel_nand_host *host = nand_chip->priv; 1707 - if (!nfc_wait_interrupt(host, NFC_SR_RB_EDGE)) 1708 - return 1; 1709 - return 0; 1650 + 1651 + status = nfc_read_status(host); 1652 + mask = nfc_readl(host->nfc->hsmc_regs, IMR); 1653 + 1654 + /* The mask should be 0. 
If not, we may have lost interrupts */ 1655 + if (unlikely(mask & status)) 1656 + dev_err(host->dev, "Lost the interrupt flags: 0x%08x\n", 1657 + mask & status); 1658 + 1659 + return status & NFC_SR_RB_EDGE; 1710 1660 } 1711 1661 1712 1662 static void nfc_select_chip(struct mtd_info *mtd, int chip) ··· 1865 1795 nfc_addr_cmd = cmd1 | cmd2 | vcmd2 | acycle | csid | dataen | nfcwr; 1866 1796 nfc_send_command(host, nfc_addr_cmd, addr1234, cycle0); 1867 1797 1868 - if (dataen == NFCADDR_CMD_DATAEN) 1869 - if (nfc_wait_interrupt(host, NFC_SR_XFR_DONE)) 1870 - dev_err(host->dev, "something wrong, No XFR_DONE interrupt comes.\n"); 1871 - 1872 1798 /* 1873 1799 * Program and erase have their own busy handlers status, sequential 1874 1800 * in, and deplete1 need no delay. ··· 1889 1823 } 1890 1824 /* fall through */ 1891 1825 default: 1826 + nfc_prepare_interrupt(host, NFC_SR_RB_EDGE); 1892 1827 nfc_wait_interrupt(host, NFC_SR_RB_EDGE); 1893 1828 } 1894 1829 } ··· 2275 2208 "atmel,write-by-sram"); 2276 2209 } 2277 2210 } 2211 + 2212 + nfc_writel(nfc->hsmc_regs, IDR, 0xffffffff); 2213 + nfc_readl(nfc->hsmc_regs, SR); /* clear the NFC_SR */ 2278 2214 2279 2215 nfc->is_initialized = true; 2280 2216 dev_info(&pdev->dev, "NFC is probed.\n");
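The atmel_nand change above replaces a single shared completion with one completion per interrupt source, armed in nfc_prepare_interrupt() before the command is issued and then waited on in order. A minimal userspace sketch of that waiter-selection logic — only the NFC_SR_* bit values come from the driver; the helper name is local to this sketch:

```c
#include <stdint.h>

/* Bit values match the NFC_SR_* definitions in atmel_nand_nfc.h. */
#define NFC_SR_XFR_DONE (1u << 16)
#define NFC_SR_CMD_DONE (1u << 17)
#define NFC_SR_RB_EDGE  (1u << 24)

/* Return how many completions nfc_wait_interrupt() would wait on for
 * `flag`, or -1 for a mask containing none of the known bits (the
 * driver logs "Unkown interrupt flag" and returns -EINVAL there). */
int nfc_count_waiters(uint32_t flag)
{
	int index = 0;

	if (flag & NFC_SR_XFR_DONE)
		index++;
	if (flag & NFC_SR_RB_EDGE)
		index++;
	if (flag & NFC_SR_CMD_DONE)
		index++;

	return index ? index : -1;
}
```

nfc_send_command() builds the same mask up front: NFC_SR_CMD_DONE always, plus NFC_SR_XFR_DONE when the command enables data transfer (NFCADDR_CMD_DATAEN).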
+4
drivers/mtd/nand/atmel_nand_nfc.h
··· 37 37 #define ATMEL_HSMC_NFC_SR 0x08 /* NFC Status Register */ 38 38 #define NFC_SR_XFR_DONE (1 << 16) 39 39 #define NFC_SR_CMD_DONE (1 << 17) 40 + #define NFC_SR_DTOE (1 << 20) 41 + #define NFC_SR_UNDEF (1 << 21) 42 + #define NFC_SR_AWB (1 << 22) 43 + #define NFC_SR_ASE (1 << 23) 40 44 #define NFC_SR_RB_EDGE (1 << 24) 41 45 42 46 #define ATMEL_HSMC_NFC_IER 0x0c
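The header now defines four additional status bits between CMD_DONE and RB_EDGE, which the "add NFC status error check" patch in the series tests for. A sketch of such a check — the bit positions come from the diff above, while the helper name and the one-line descriptions are my own reading of the bit names:

```c
#include <stdint.h>

#define NFC_SR_DTOE  (1u << 20)	/* presumably: data timeout error */
#define NFC_SR_UNDEF (1u << 21)	/* presumably: undefined area access */
#define NFC_SR_AWB   (1u << 22)	/* presumably: access while busy */
#define NFC_SR_ASE   (1u << 23)	/* presumably: access size error */

/* Nonzero if any of the error bits is set in an NFC status word. */
int nfc_sr_has_error(uint32_t sr)
{
	return (sr & (NFC_SR_DTOE | NFC_SR_UNDEF |
		      NFC_SR_AWB | NFC_SR_ASE)) != 0;
}
```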
-24
drivers/mtd/nand/bf5xx_nand.c
··· 830 830 return err; 831 831 } 832 832 833 - /* PM Support */ 834 - #ifdef CONFIG_PM 835 - 836 - static int bf5xx_nand_suspend(struct platform_device *dev, pm_message_t pm) 837 - { 838 - struct bf5xx_nand_info *info = platform_get_drvdata(dev); 839 - 840 - return 0; 841 - } 842 - 843 - static int bf5xx_nand_resume(struct platform_device *dev) 844 - { 845 - struct bf5xx_nand_info *info = platform_get_drvdata(dev); 846 - 847 - return 0; 848 - } 849 - 850 - #else 851 - #define bf5xx_nand_suspend NULL 852 - #define bf5xx_nand_resume NULL 853 - #endif 854 - 855 833 /* driver device registration */ 856 834 static struct platform_driver bf5xx_nand_driver = { 857 835 .probe = bf5xx_nand_probe, 858 836 .remove = bf5xx_nand_remove, 859 - .suspend = bf5xx_nand_suspend, 860 - .resume = bf5xx_nand_resume, 861 837 .driver = { 862 838 .name = DRV_NAME, 863 839 .owner = THIS_MODULE,
+3 -3
drivers/mtd/nand/denali.c
··· 473 473 static uint16_t denali_nand_timing_set(struct denali_nand_info *denali) 474 474 { 475 475 uint16_t status = PASS; 476 - uint32_t id_bytes[5], addr; 476 + uint32_t id_bytes[8], addr; 477 477 uint8_t i, maf_id, device_id; 478 478 479 479 dev_dbg(denali->dev, ··· 488 488 addr = (uint32_t)MODE_11 | BANK(denali->flash_bank); 489 489 index_addr(denali, (uint32_t)addr | 0, 0x90); 490 490 index_addr(denali, (uint32_t)addr | 1, 0); 491 - for (i = 0; i < 5; i++) 491 + for (i = 0; i < 8; i++) 492 492 index_addr_read_data(denali, addr | 2, &id_bytes[i]); 493 493 maf_id = id_bytes[0]; 494 494 device_id = id_bytes[1]; ··· 1276 1276 addr = (uint32_t)MODE_11 | BANK(denali->flash_bank); 1277 1277 index_addr(denali, (uint32_t)addr | 0, 0x90); 1278 1278 index_addr(denali, (uint32_t)addr | 1, 0); 1279 - for (i = 0; i < 5; i++) { 1279 + for (i = 0; i < 8; i++) { 1280 1280 index_addr_read_data(denali, 1281 1281 (uint32_t)addr | 2, 1282 1282 &id);
+42 -29
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
··· 285 285 geo->ecc_strength = get_ecc_strength(this); 286 286 if (!gpmi_check_ecc(this)) { 287 287 dev_err(this->dev, 288 - "We can not support this nand chip." 289 - " Its required ecc strength(%d) is beyond our" 290 - " capability(%d).\n", geo->ecc_strength, 288 + "required ecc strength of the NAND chip: %d is not supported by the GPMI controller (%d)\n", 289 + geo->ecc_strength, 291 290 this->devdata->bch_max_ecc_strength); 292 291 return -EINVAL; 293 292 } ··· 1081 1082 int first, last, marker_pos; 1082 1083 int ecc_parity_size; 1083 1084 int col = 0; 1085 + int old_swap_block_mark = this->swap_block_mark; 1084 1086 1085 1087 /* The size of ECC parity */ 1086 1088 ecc_parity_size = geo->gf_len * geo->ecc_strength / 8; ··· 1090 1090 first = offs / size; 1091 1091 last = (offs + len - 1) / size; 1092 1092 1093 - /* 1094 - * Find the chunk which contains the Block Marker. If this chunk is 1095 - * in the range of [first, last], we have to read out the whole page. 1096 - * Why? since we had swapped the data at the position of Block Marker 1097 - * to the metadata which is bound with the chunk 0. 1098 - */ 1099 - marker_pos = geo->block_mark_byte_offset / size; 1100 - if (last >= marker_pos && first <= marker_pos) { 1101 - dev_dbg(this->dev, "page:%d, first:%d, last:%d, marker at:%d\n", 1093 + if (this->swap_block_mark) { 1094 + /* 1095 + * Find the chunk which contains the Block Marker. 1096 + * If this chunk is in the range of [first, last], 1097 + * we have to read out the whole page. 1098 + * Why? since we had swapped the data at the position of Block 1099 + * Marker to the metadata which is bound with the chunk 0. 
1100 + */ 1101 + marker_pos = geo->block_mark_byte_offset / size; 1102 + if (last >= marker_pos && first <= marker_pos) { 1103 + dev_dbg(this->dev, 1104 + "page:%d, first:%d, last:%d, marker at:%d\n", 1102 1105 page, first, last, marker_pos); 1103 - return gpmi_ecc_read_page(mtd, chip, buf, 0, page); 1106 + return gpmi_ecc_read_page(mtd, chip, buf, 0, page); 1107 + } 1104 1108 } 1105 1109 1106 1110 meta = geo->metadata_size; ··· 1150 1146 writel(r1_old, bch_regs + HW_BCH_FLASH0LAYOUT0); 1151 1147 writel(r2_old, bch_regs + HW_BCH_FLASH0LAYOUT1); 1152 1148 this->bch_geometry = old_geo; 1153 - this->swap_block_mark = true; 1149 + this->swap_block_mark = old_swap_block_mark; 1154 1150 1155 1151 return max_bitflips; 1156 1152 } ··· 1184 1180 1185 1181 /* Handle block mark swapping. */ 1186 1182 block_mark_swapping(this, 1187 - (void *) payload_virt, (void *) auxiliary_virt); 1183 + (void *)payload_virt, (void *)auxiliary_virt); 1188 1184 } else { 1189 1185 /* 1190 1186 * If control arrives here, we're not doing block mark swapping, ··· 1314 1310 1315 1311 /* 1316 1312 * Now, we want to make sure the block mark is correct. In the 1317 - * Swapping/Raw case, we already have it. Otherwise, we need to 1318 - * explicitly read it. 1313 + * non-transcribing case (!GPMI_IS_MX23()), we already have it. 1314 + * Otherwise, we need to explicitly read it. 1319 1315 */ 1320 - if (!this->swap_block_mark) { 1316 + if (GPMI_IS_MX23(this)) { 1321 1317 /* Read the block mark into the first byte of the OOB buffer. */ 1322 1318 chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page); 1323 1319 chip->oob_poi[0] = chip->read_byte(mtd); ··· 1358 1354 chipnr = (int)(ofs >> chip->chip_shift); 1359 1355 chip->select_chip(mtd, chipnr); 1360 1356 1361 - column = this->swap_block_mark ? mtd->writesize : 0; 1357 + column = !GPMI_IS_MX23(this) ? mtd->writesize : 0; 1362 1358 1363 1359 /* Write the block mark. 
*/ 1364 1360 block_mark = this->data_buffer_dma; ··· 1601 1597 dev_dbg(dev, "Transcribing mark in block %u\n", block); 1602 1598 ret = chip->block_markbad(mtd, byte); 1603 1599 if (ret) 1604 - dev_err(dev, "Failed to mark block bad with " 1605 - "ret %d\n", ret); 1600 + dev_err(dev, 1601 + "Failed to mark block bad with ret %d\n", 1602 + ret); 1606 1603 } 1607 1604 } 1608 1605 ··· 1653 1648 struct nand_ecc_ctrl *ecc = &chip->ecc; 1654 1649 struct bch_geometry *bch_geo = &this->bch_geometry; 1655 1650 int ret; 1656 - 1657 - /* Set up swap_block_mark, must be set before the gpmi_set_geometry() */ 1658 - this->swap_block_mark = !GPMI_IS_MX23(this); 1659 1651 1660 1652 /* Set up the medium geometry */ 1661 1653 ret = gpmi_set_geometry(this); ··· 1717 1715 chip->badblock_pattern = &gpmi_bbt_descr; 1718 1716 chip->block_markbad = gpmi_block_markbad; 1719 1717 chip->options |= NAND_NO_SUBPAGE_WRITE; 1720 - if (of_get_nand_on_flash_bbt(this->dev->of_node)) 1718 + 1719 + /* Set up swap_block_mark, must be set before the gpmi_set_geometry() */ 1720 + this->swap_block_mark = !GPMI_IS_MX23(this); 1721 + 1722 + if (of_get_nand_on_flash_bbt(this->dev->of_node)) { 1721 1723 chip->bbt_options |= NAND_BBT_USE_FLASH | NAND_BBT_NO_OOB; 1724 + 1725 + if (of_property_read_bool(this->dev->of_node, 1726 + "fsl,no-blockmark-swap")) 1727 + this->swap_block_mark = false; 1728 + } 1729 + dev_dbg(this->dev, "Blockmark swapping %sabled\n", 1730 + this->swap_block_mark ? 
"en" : "dis"); 1722 1731 1723 1732 /* 1724 1733 * Allocate a temporary DMA buffer for reading ID in the ··· 1773 1760 static const struct of_device_id gpmi_nand_id_table[] = { 1774 1761 { 1775 1762 .compatible = "fsl,imx23-gpmi-nand", 1776 - .data = (void *)&gpmi_devdata_imx23, 1763 + .data = &gpmi_devdata_imx23, 1777 1764 }, { 1778 1765 .compatible = "fsl,imx28-gpmi-nand", 1779 - .data = (void *)&gpmi_devdata_imx28, 1766 + .data = &gpmi_devdata_imx28, 1780 1767 }, { 1781 1768 .compatible = "fsl,imx6q-gpmi-nand", 1782 - .data = (void *)&gpmi_devdata_imx6q, 1769 + .data = &gpmi_devdata_imx6q, 1783 1770 }, { 1784 1771 .compatible = "fsl,imx6sx-gpmi-nand", 1785 - .data = (void *)&gpmi_devdata_imx6sx, 1772 + .data = &gpmi_devdata_imx6sx, 1786 1773 }, {} 1787 1774 }; 1788 1775 MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
-6
drivers/mtd/nand/lpc32xx_mlc.c
··· 721 721 nand_chip->bbt_td = &lpc32xx_nand_bbt; 722 722 nand_chip->bbt_md = &lpc32xx_nand_bbt_mirror; 723 723 724 - /* bitflip_threshold's default is defined as ecc_strength anyway. 725 - * Unfortunately, it is set only later at add_mtd_device(). Meanwhile 726 - * being 0, it causes bad block table scanning errors in 727 - * nand_scan_tail(), so preparing it here. */ 728 - mtd->bitflip_threshold = nand_chip->ecc.strength; 729 - 730 724 if (use_dma) { 731 725 res = lpc32xx_dma_setup(host); 732 726 if (res) {
-6
drivers/mtd/nand/lpc32xx_slc.c
··· 840 840 chip->ecc.strength = 1; 841 841 chip->ecc.hwctl = lpc32xx_nand_ecc_enable; 842 842 843 - /* bitflip_threshold's default is defined as ecc_strength anyway. 844 - * Unfortunately, it is set only later at add_mtd_device(). Meanwhile 845 - * being 0, it causes bad block table scanning errors in 846 - * nand_scan_tail(), so preparing it here already. */ 847 - mtd->bitflip_threshold = chip->ecc.strength; 848 - 849 843 /* 850 844 * Allocate a large enough buffer for a single huge page plus 851 845 * extra space for the spare area and ECC storage area
+18
drivers/mtd/nand/nand_base.c
··· 488 488 * nand_block_checkbad - [GENERIC] Check if a block is marked bad 489 489 * @mtd: MTD device structure 490 490 * @ofs: offset from device start 491 + * 492 + * Check if the block is marked as reserved. 493 + */ 494 + static int nand_block_isreserved(struct mtd_info *mtd, loff_t ofs) 495 + { 496 + struct nand_chip *chip = mtd->priv; 497 + 498 + if (!chip->bbt) 499 + return 0; 500 + /* Return info from the table */ 501 + return nand_isreserved_bbt(mtd, ofs); 502 + } 503 + 504 + /** 505 + * nand_block_checkbad - [GENERIC] Check if a block is marked bad 506 + * @mtd: MTD device structure 507 + * @ofs: offset from device start 491 508 * @getchip: 0, if the chip is already selected 492 509 * @allowbbt: 1, if its allowed to access the bbt area 493 510 * ··· 4130 4113 mtd->_unlock = NULL; 4131 4114 mtd->_suspend = nand_suspend; 4132 4115 mtd->_resume = nand_resume; 4116 + mtd->_block_isreserved = nand_block_isreserved; 4133 4117 mtd->_block_isbad = nand_block_isbad; 4134 4118 mtd->_block_markbad = nand_block_markbad; 4135 4119 mtd->writebufsize = mtd->writesize;
+14
drivers/mtd/nand/nand_bbt.c
··· 1311 1311 } 1312 1312 1313 1313 /** 1314 + * nand_isreserved_bbt - [NAND Interface] Check if a block is reserved 1315 + * @mtd: MTD device structure 1316 + * @offs: offset in the device 1317 + */ 1318 + int nand_isreserved_bbt(struct mtd_info *mtd, loff_t offs) 1319 + { 1320 + struct nand_chip *this = mtd->priv; 1321 + int block; 1322 + 1323 + block = (int)(offs >> this->bbt_erase_shift); 1324 + return bbt_get_entry(this, block) == BBT_BLOCK_RESERVED; 1325 + } 1326 + 1327 + /** 1314 1328 * nand_isbad_bbt - [NAND Interface] Check if a block is bad 1315 1329 * @mtd: MTD device structure 1316 1330 * @offs: offset in the device
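nand_isreserved_bbt() follows the same pattern as nand_isbad_bbt(): convert the flash offset to an eraseblock number and look up that block's 2-bit entry in the in-memory table, four entries per byte. A sketch of that layout — the BBT_BLOCK_* codes mirror the kernel's nand_bbt.c values; everything else here is illustrative:

```c
#include <stdint.h>

/* 2-bit per-block status codes, as in drivers/mtd/nand/nand_bbt.c. */
#define BBT_BLOCK_GOOD		0x00
#define BBT_BLOCK_WORN		0x01
#define BBT_BLOCK_RESERVED	0x02
#define BBT_BLOCK_FACTORY_BAD	0x03

#define BBT_ENTRY_BITS	2
#define BBT_ENTRY_MASK	0x03

/* Fetch the 2-bit entry for `block`: byte index block/4, then shift by
 * 2 bits per entry within the byte. */
uint8_t bbt_get_entry(const uint8_t *bbt, int block)
{
	uint8_t entry = bbt[block >> 2];

	entry >>= (block & 0x3) * BBT_ENTRY_BITS;
	return entry & BBT_ENTRY_MASK;
}

/* Mirror of nand_isreserved_bbt(): offset -> block number via the
 * eraseblock shift, then compare against BBT_BLOCK_RESERVED. */
int block_isreserved(const uint8_t *bbt, uint64_t offs, int bbt_erase_shift)
{
	int block = (int)(offs >> bbt_erase_shift);

	return bbt_get_entry(bbt, block) == BBT_BLOCK_RESERVED;
}
```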
+253
drivers/mtd/nand/nand_timings.c
··· 1 + /* 2 + * Copyright (C) 2014 Free Electrons 3 + * 4 + * Author: Boris BREZILLON <boris.brezillon@free-electrons.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + #include <linux/kernel.h> 12 + #include <linux/err.h> 13 + #include <linux/export.h> 14 + #include <linux/mtd/nand.h> 15 + 16 + static const struct nand_sdr_timings onfi_sdr_timings[] = { 17 + /* Mode 0 */ 18 + { 19 + .tADL_min = 200000, 20 + .tALH_min = 20000, 21 + .tALS_min = 50000, 22 + .tAR_min = 25000, 23 + .tCEA_max = 100000, 24 + .tCEH_min = 20000, 25 + .tCH_min = 20000, 26 + .tCHZ_max = 100000, 27 + .tCLH_min = 20000, 28 + .tCLR_min = 20000, 29 + .tCLS_min = 50000, 30 + .tCOH_min = 0, 31 + .tCS_min = 70000, 32 + .tDH_min = 20000, 33 + .tDS_min = 40000, 34 + .tFEAT_max = 1000000, 35 + .tIR_min = 10000, 36 + .tITC_max = 1000000, 37 + .tRC_min = 100000, 38 + .tREA_max = 40000, 39 + .tREH_min = 30000, 40 + .tRHOH_min = 0, 41 + .tRHW_min = 200000, 42 + .tRHZ_max = 200000, 43 + .tRLOH_min = 0, 44 + .tRP_min = 50000, 45 + .tRST_max = 250000000000, 46 + .tWB_max = 200000, 47 + .tRR_min = 40000, 48 + .tWC_min = 100000, 49 + .tWH_min = 30000, 50 + .tWHR_min = 120000, 51 + .tWP_min = 50000, 52 + .tWW_min = 100000, 53 + }, 54 + /* Mode 1 */ 55 + { 56 + .tADL_min = 100000, 57 + .tALH_min = 10000, 58 + .tALS_min = 25000, 59 + .tAR_min = 10000, 60 + .tCEA_max = 45000, 61 + .tCEH_min = 20000, 62 + .tCH_min = 10000, 63 + .tCHZ_max = 50000, 64 + .tCLH_min = 10000, 65 + .tCLR_min = 10000, 66 + .tCLS_min = 25000, 67 + .tCOH_min = 15000, 68 + .tCS_min = 35000, 69 + .tDH_min = 10000, 70 + .tDS_min = 20000, 71 + .tFEAT_max = 1000000, 72 + .tIR_min = 0, 73 + .tITC_max = 1000000, 74 + .tRC_min = 50000, 75 + .tREA_max = 30000, 76 + .tREH_min = 15000, 77 + .tRHOH_min = 15000, 78 + .tRHW_min = 100000, 79 + .tRHZ_max = 100000, 80 + .tRLOH_min = 0, 
81 + .tRP_min = 25000, 82 + .tRR_min = 20000, 83 + .tRST_max = 500000000, 84 + .tWB_max = 100000, 85 + .tWC_min = 45000, 86 + .tWH_min = 15000, 87 + .tWHR_min = 80000, 88 + .tWP_min = 25000, 89 + .tWW_min = 100000, 90 + }, 91 + /* Mode 2 */ 92 + { 93 + .tADL_min = 100000, 94 + .tALH_min = 10000, 95 + .tALS_min = 15000, 96 + .tAR_min = 10000, 97 + .tCEA_max = 30000, 98 + .tCEH_min = 20000, 99 + .tCH_min = 10000, 100 + .tCHZ_max = 50000, 101 + .tCLH_min = 10000, 102 + .tCLR_min = 10000, 103 + .tCLS_min = 15000, 104 + .tCOH_min = 15000, 105 + .tCS_min = 25000, 106 + .tDH_min = 5000, 107 + .tDS_min = 15000, 108 + .tFEAT_max = 1000000, 109 + .tIR_min = 0, 110 + .tITC_max = 1000000, 111 + .tRC_min = 35000, 112 + .tREA_max = 25000, 113 + .tREH_min = 15000, 114 + .tRHOH_min = 15000, 115 + .tRHW_min = 100000, 116 + .tRHZ_max = 100000, 117 + .tRLOH_min = 0, 118 + .tRR_min = 20000, 119 + .tRST_max = 500000000, 120 + .tWB_max = 100000, 121 + .tRP_min = 17000, 122 + .tWC_min = 35000, 123 + .tWH_min = 15000, 124 + .tWHR_min = 80000, 125 + .tWP_min = 17000, 126 + .tWW_min = 100000, 127 + }, 128 + /* Mode 3 */ 129 + { 130 + .tADL_min = 100000, 131 + .tALH_min = 5000, 132 + .tALS_min = 10000, 133 + .tAR_min = 10000, 134 + .tCEA_max = 25000, 135 + .tCEH_min = 20000, 136 + .tCH_min = 5000, 137 + .tCHZ_max = 50000, 138 + .tCLH_min = 5000, 139 + .tCLR_min = 10000, 140 + .tCLS_min = 10000, 141 + .tCOH_min = 15000, 142 + .tCS_min = 25000, 143 + .tDH_min = 5000, 144 + .tDS_min = 10000, 145 + .tFEAT_max = 1000000, 146 + .tIR_min = 0, 147 + .tITC_max = 1000000, 148 + .tRC_min = 30000, 149 + .tREA_max = 20000, 150 + .tREH_min = 10000, 151 + .tRHOH_min = 15000, 152 + .tRHW_min = 100000, 153 + .tRHZ_max = 100000, 154 + .tRLOH_min = 0, 155 + .tRP_min = 15000, 156 + .tRR_min = 20000, 157 + .tRST_max = 500000000, 158 + .tWB_max = 100000, 159 + .tWC_min = 30000, 160 + .tWH_min = 10000, 161 + .tWHR_min = 80000, 162 + .tWP_min = 15000, 163 + .tWW_min = 100000, 164 + }, 165 + /* Mode 4 */ 166 + { 167 
+ .tADL_min = 70000, 168 + .tALH_min = 5000, 169 + .tALS_min = 10000, 170 + .tAR_min = 10000, 171 + .tCEA_max = 25000, 172 + .tCEH_min = 20000, 173 + .tCH_min = 5000, 174 + .tCHZ_max = 30000, 175 + .tCLH_min = 5000, 176 + .tCLR_min = 10000, 177 + .tCLS_min = 10000, 178 + .tCOH_min = 15000, 179 + .tCS_min = 20000, 180 + .tDH_min = 5000, 181 + .tDS_min = 10000, 182 + .tFEAT_max = 1000000, 183 + .tIR_min = 0, 184 + .tITC_max = 1000000, 185 + .tRC_min = 25000, 186 + .tREA_max = 20000, 187 + .tREH_min = 10000, 188 + .tRHOH_min = 15000, 189 + .tRHW_min = 100000, 190 + .tRHZ_max = 100000, 191 + .tRLOH_min = 5000, 192 + .tRP_min = 12000, 193 + .tRR_min = 20000, 194 + .tRST_max = 500000000, 195 + .tWB_max = 100000, 196 + .tWC_min = 25000, 197 + .tWH_min = 10000, 198 + .tWHR_min = 80000, 199 + .tWP_min = 12000, 200 + .tWW_min = 100000, 201 + }, 202 + /* Mode 5 */ 203 + { 204 + .tADL_min = 70000, 205 + .tALH_min = 5000, 206 + .tALS_min = 10000, 207 + .tAR_min = 10000, 208 + .tCEA_max = 25000, 209 + .tCEH_min = 20000, 210 + .tCH_min = 5000, 211 + .tCHZ_max = 30000, 212 + .tCLH_min = 5000, 213 + .tCLR_min = 10000, 214 + .tCLS_min = 10000, 215 + .tCOH_min = 15000, 216 + .tCS_min = 15000, 217 + .tDH_min = 5000, 218 + .tDS_min = 7000, 219 + .tFEAT_max = 1000000, 220 + .tIR_min = 0, 221 + .tITC_max = 1000000, 222 + .tRC_min = 20000, 223 + .tREA_max = 16000, 224 + .tREH_min = 7000, 225 + .tRHOH_min = 15000, 226 + .tRHW_min = 100000, 227 + .tRHZ_max = 100000, 228 + .tRLOH_min = 5000, 229 + .tRP_min = 10000, 230 + .tRR_min = 20000, 231 + .tRST_max = 500000000, 232 + .tWB_max = 100000, 233 + .tWC_min = 20000, 234 + .tWH_min = 7000, 235 + .tWHR_min = 80000, 236 + .tWP_min = 10000, 237 + .tWW_min = 100000, 238 + }, 239 + }; 240 + 241 + /** 242 + * onfi_async_timing_mode_to_sdr_timings - [NAND Interface] Retrieve NAND 243 + * timings according to the given ONFI timing mode 244 + * @mode: ONFI timing mode 245 + */ 246 + const struct nand_sdr_timings 
*onfi_async_timing_mode_to_sdr_timings(int mode) 247 + { 248 + if (mode < 0 || mode >= ARRAY_SIZE(onfi_sdr_timings)) 249 + return ERR_PTR(-EINVAL); 250 + 251 + return &onfi_sdr_timings[mode]; 252 + } 253 + EXPORT_SYMBOL(onfi_async_timing_mode_to_sdr_timings);
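The converter at the end is just a bounds-checked table lookup, with ERR_PTR(-EINVAL) for a mode outside [0, 5]. A sketch of the same shape using one representative field, tWC_min (write cycle time), with the values taken from the onfi_sdr_timings[] table above and plain NULL standing in for the kernel's ERR_PTR:

```c
#include <stddef.h>
#include <stdint.h>

/* tWC_min per ONFI asynchronous timing mode, in picoseconds, copied
 * from the table above. */
static const uint32_t onfi_tWC_min_ps[] = {
	100000,	/* mode 0 */
	45000,	/* mode 1 */
	35000,	/* mode 2 */
	30000,	/* mode 3 */
	25000,	/* mode 4 */
	20000,	/* mode 5 */
};

/* Bounds-checked lookup; NULL plays the role of ERR_PTR(-EINVAL). */
const uint32_t *onfi_mode_to_tWC_min(int mode)
{
	if (mode < 0 ||
	    (size_t)mode >= sizeof(onfi_tWC_min_ps) / sizeof(onfi_tWC_min_ps[0]))
		return NULL;

	return &onfi_tWC_min_ps[mode];
}
```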
+2 -2
drivers/mtd/nand/s3c2410.c
··· 208 208 209 209 if (info->clk_state == CLOCK_ENABLE) { 210 210 if (new_state != CLOCK_ENABLE) 211 - clk_disable(info->clk); 211 + clk_disable_unprepare(info->clk); 212 212 } else { 213 213 if (new_state == CLOCK_ENABLE) 214 - clk_enable(info->clk); 214 + clk_prepare_enable(info->clk); 215 215 } 216 216 217 217 info->clk_state = new_state;
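The s3c2410 hunk switches the clock gating to the prepare+enable / disable+unprepare pairs required by the common clock framework, still toggling only on an actual state change. A sketch of that decision — the enum and helper are local to this sketch, and the driver's third state (CLOCK_SUSPEND, not visible in the hunk) is elided:

```c
/* Which clk API pair the driver would call on a state transition. */
enum clk_action { CLK_NONE, CLK_PREPARE_ENABLE, CLK_DISABLE_UNPREPARE };

/* Only touch the clock when the cached enable state really changes,
 * mirroring the if/else structure of s3c2410_nand_clk_set_state(). */
enum clk_action clk_transition(int was_enabled, int want_enabled)
{
	if (was_enabled && !want_enabled)
		return CLK_DISABLE_UNPREPARE;
	if (!was_enabled && want_enabled)
		return CLK_PREPARE_ENABLE;
	return CLK_NONE;
}
```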
+53
drivers/mtd/spi-nor/spi-nor.c
··· 48 48 } 49 49 50 50 /* 51 + * Read the flag status register, returning its value in the location 52 + * Return the status register value. 53 + * Returns negative if error occurred. 54 + */ 55 + static int read_fsr(struct spi_nor *nor) 56 + { 57 + int ret; 58 + u8 val; 59 + 60 + ret = nor->read_reg(nor, SPINOR_OP_RDFSR, &val, 1); 61 + if (ret < 0) { 62 + pr_err("error %d reading FSR\n", ret); 63 + return ret; 64 + } 65 + 66 + return val; 67 + } 68 + 69 + /* 51 70 * Read configuration register, returning its value in the 52 71 * location. Return the configuration register value. 53 72 * Returns negative if error occured. ··· 179 160 break; 180 161 else if (!(sr & SR_WIP)) 181 162 return 0; 163 + } while (!time_after_eq(jiffies, deadline)); 164 + 165 + return -ETIMEDOUT; 166 + } 167 + 168 + static int spi_nor_wait_till_fsr_ready(struct spi_nor *nor) 169 + { 170 + unsigned long deadline; 171 + int sr; 172 + int fsr; 173 + 174 + deadline = jiffies + MAX_READY_WAIT_JIFFIES; 175 + 176 + do { 177 + cond_resched(); 178 + 179 + sr = read_sr(nor); 180 + if (sr < 0) { 181 + break; 182 + } else if (!(sr & SR_WIP)) { 183 + fsr = read_fsr(nor); 184 + if (fsr < 0) 185 + break; 186 + if (fsr & FSR_READY) 187 + return 0; 188 + } 182 189 } while (!time_after_eq(jiffies, deadline)); 183 190 184 191 return -ETIMEDOUT; ··· 447 402 #define SECT_4K_PMC 0x10 /* SPINOR_OP_BE_4K_PMC works uniformly */ 448 403 #define SPI_NOR_DUAL_READ 0x20 /* Flash supports Dual Read */ 449 404 #define SPI_NOR_QUAD_READ 0x40 /* Flash supports Quad Read */ 405 + #define USE_FSR 0x80 /* use flag status register */ 450 406 }; 451 407 452 408 #define INFO(_jedec_id, _ext_id, _sector_size, _n_sectors, _flags) \ ··· 495 449 { "en25q32b", INFO(0x1c3016, 0, 64 * 1024, 64, 0) }, 496 450 { "en25p64", INFO(0x1c2017, 0, 64 * 1024, 128, 0) }, 497 451 { "en25q64", INFO(0x1c3017, 0, 64 * 1024, 128, SECT_4K) }, 452 + { "en25qh128", INFO(0x1c7018, 0, 64 * 1024, 256, 0) }, 498 453 { "en25qh256", INFO(0x1c7019, 0, 64 * 
1024, 512, 0) }, 499 454 500 455 /* ESMT */ ··· 535 488 { "n25q128a13", INFO(0x20ba18, 0, 64 * 1024, 256, 0) }, 536 489 { "n25q256a", INFO(0x20ba19, 0, 64 * 1024, 512, SECT_4K) }, 537 490 { "n25q512a", INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K) }, 491 + { "n25q512ax3", INFO(0x20ba20, 0, 64 * 1024, 1024, USE_FSR) }, 492 + { "n25q00", INFO(0x20ba21, 0, 64 * 1024, 2048, USE_FSR) }, 538 493 539 494 /* PMC */ 540 495 { "pm25lv512", INFO(0, 0, 32 * 1024, 2, SECT_4K_PMC) }, ··· 1013 964 mtd->_write = sst_write; 1014 965 else 1015 966 mtd->_write = spi_nor_write; 967 + 968 + if ((info->flags & USE_FSR) && 969 + nor->wait_till_ready == spi_nor_wait_till_ready) 970 + nor->wait_till_ready = spi_nor_wait_till_fsr_ready; 1016 971 1017 972 /* prefer "small sector" erase if possible */ 1018 973 if (info->flags & SECT_4K) {
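For the Micron parts flagged USE_FSR, readiness needs both registers: the status register's WIP bit clear and the flag status register's ready bit set — note that FSR_READY has the opposite polarity of SR_WIP (1 means idle). A sketch of just that predicate; register values are passed in directly, whereas the driver reads them over SPI and polls with a deadline:

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit values match include/linux/mtd/spi-nor.h. */
#define SR_WIP		0x01	/* write in progress */
#define FSR_READY	0x80	/* flag status register: device ready */

/* Ready test used by spi_nor_wait_till_fsr_ready(): WIP must be clear
 * AND the FSR ready bit must be set. */
bool spi_nor_fsr_ready(uint8_t sr, uint8_t fsr)
{
	return !(sr & SR_WIP) && (fsr & FSR_READY);
}
```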
+1 -2
fs/jffs2/acl.c
··· 202 202 } else { 203 203 acl = ERR_PTR(rc); 204 204 } 205 - if (value) 206 - kfree(value); 205 + kfree(value); 207 206 if (!IS_ERR(acl)) 208 207 set_cached_acl(inode, type, acl); 209 208 return acl;
+1 -2
fs/jffs2/xattr.c
··· 756 756 for (i=0; i < XATTRINDEX_HASHSIZE; i++) { 757 757 list_for_each_entry_safe(xd, _xd, &c->xattrindex[i], xindex) { 758 758 list_del(&xd->xindex); 759 - if (xd->xname) 760 - kfree(xd->xname); 759 + kfree(xd->xname); 761 760 jffs2_free_xattr_datum(xd); 762 761 } 763 762 }
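The two jffs2 hunks above drop `if (ptr)` guards around kfree(), which is defined to be a no-op for a NULL argument. Hosted C's free() gives the same guarantee (C11 7.22.3.3), which this tiny helper leans on:

```c
#include <stdlib.h>

/* Release a buffer without an explicit NULL check: free(NULL) is
 * defined to do nothing, just like kfree(NULL) in the kernel. */
int release(char *buf)
{
	free(buf);	/* no-op when buf == NULL */
	return 0;
}
```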
+2
include/linux/mtd/mtd.h
··· 222 222 int (*_lock) (struct mtd_info *mtd, loff_t ofs, uint64_t len); 223 223 int (*_unlock) (struct mtd_info *mtd, loff_t ofs, uint64_t len); 224 224 int (*_is_locked) (struct mtd_info *mtd, loff_t ofs, uint64_t len); 225 + int (*_block_isreserved) (struct mtd_info *mtd, loff_t ofs); 225 226 int (*_block_isbad) (struct mtd_info *mtd, loff_t ofs); 226 227 int (*_block_markbad) (struct mtd_info *mtd, loff_t ofs); 227 228 int (*_suspend) (struct mtd_info *mtd); ··· 303 302 int mtd_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len); 304 303 int mtd_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len); 305 304 int mtd_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len); 305 + int mtd_block_isreserved(struct mtd_info *mtd, loff_t ofs); 306 306 int mtd_block_isbad(struct mtd_info *mtd, loff_t ofs); 307 307 int mtd_block_markbad(struct mtd_info *mtd, loff_t ofs); 308 308
+53
include/linux/mtd/nand.h
··· 810 810 extern int nand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd); 811 811 extern int nand_default_bbt(struct mtd_info *mtd); 812 812 extern int nand_markbad_bbt(struct mtd_info *mtd, loff_t offs); 813 + extern int nand_isreserved_bbt(struct mtd_info *mtd, loff_t offs); 813 814 extern int nand_isbad_bbt(struct mtd_info *mtd, loff_t offs, int allowbbt); 814 815 extern int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr, 815 816 int allowbbt); ··· 948 947 return chip->jedec_version ? le16_to_cpu(chip->jedec_params.features) 949 948 : 0; 950 949 } 950 + 951 + /** 952 + * struct nand_sdr_timings - SDR NAND chip timings 953 + * 954 + * This struct defines the timing requirements of a SDR NAND chip. 955 + * This information can be found in every NAND datasheet, and the meaning of 956 + * each timing is described in the ONFI specification: 957 + * www.onfi.org/~/media/ONFI/specs/onfi_3_1_spec.pdf (chapter 4.15 Timing 958 + * Parameters) 959 + * 960 + * All these timings are expressed in picoseconds. 961 + */ 962 + 963 + struct nand_sdr_timings { 964 + u32 tALH_min; 965 + u32 tADL_min; 966 + u32 tALS_min; 967 + u32 tAR_min; 968 + u32 tCEA_max; 969 + u32 tCEH_min; 970 + u32 tCH_min; 971 + u32 tCHZ_max; 972 + u32 tCLH_min; 973 + u32 tCLR_min; 974 + u32 tCLS_min; 975 + u32 tCOH_min; 976 + u32 tCS_min; 977 + u32 tDH_min; 978 + u32 tDS_min; 979 + u32 tFEAT_max; 980 + u32 tIR_min; 981 + u32 tITC_max; 982 + u32 tRC_min; 983 + u32 tREA_max; 984 + u32 tREH_min; 985 + u32 tRHOH_min; 986 + u32 tRHW_min; 987 + u32 tRHZ_max; 988 + u32 tRLOH_min; 989 + u32 tRP_min; 990 + u32 tRR_min; 991 + u64 tRST_max; 992 + u32 tWB_max; 993 + u32 tWC_min; 994 + u32 tWH_min; 995 + u32 tWHR_min; 996 + u32 tWP_min; 997 + u32 tWW_min; 998 + }; 999 + 1000 + /* get timing characteristics from ONFI timing mode. */ 1001 + const struct nand_sdr_timings *onfi_async_timing_mode_to_sdr_timings(int mode); 951 1002 #endif /* __LINUX_MTD_NAND_H */
+4
include/linux/mtd/spi-nor.h
··· 34 34 #define SPINOR_OP_SE 0xd8 /* Sector erase (usually 64KiB) */ 35 35 #define SPINOR_OP_RDID 0x9f /* Read JEDEC ID */ 36 36 #define SPINOR_OP_RDCR 0x35 /* Read configuration register */ 37 + #define SPINOR_OP_RDFSR 0x70 /* Read flag status register */ 37 38 38 39 /* 4-byte address opcodes - used on Spansion and some Macronix flashes. */ 39 40 #define SPINOR_OP_READ4 0x13 /* Read data bytes (low frequency) */ ··· 66 65 #define SR_SRWD 0x80 /* SR write protect */ 67 66 68 67 #define SR_QUAD_EN_MX 0x40 /* Macronix Quad I/O */ 68 + 69 + /* Flag Status Register bits */ 70 + #define FSR_READY 0x80 69 71 70 72 /* Configuration Register bits. */ 71 73 #define CR_QUAD_EN_SPAN 0x2 /* Spansion Quad I/O */