Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'upstream-3.7-rc1-fastmap' of git://git.infradead.org/linux-ubi

Pull UBI fastmap changes from Artem Bityutskiy:
"This pull request contains the UBI fastmap support implemented by
Richard Weinberger from Linutronix. Fastmap is designed to address
UBI's slow scanning issues. Namely, it introduces a new on-flash
data structure called "fastmap", which stores information about the
logical<->physical eraseblock mappings. So now, to get this
information, UBI just reads the fastmap instead of doing a full scan.
More information can be found in Richard's announcement on LKML
(Subject: UBI: Fastmap request for inclusion (v19)):

http://thread.gmane.org/gmane.linux.kernel/1364922/focus=1369109
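The scanning-vs-fastmap trade-off described above can be sketched in plain C. This is an illustrative model, not the kernel code: `PEB_COUNT`, the structure and the function names are invented here.

```c
#include <stddef.h>

#define PEB_COUNT 1024          /* assumed device size, in physical eraseblocks */
#define UNMAPPED  (-1)

/* Toy model of a PEB header: records which logical eraseblock it holds. */
struct peb_header { int lnum; };

/* Attach by scanning: one header read per PEB to rebuild the LEB->PEB table. */
void attach_by_scan(const struct peb_header *hdrs, int *leb_to_peb)
{
	int pnum;

	for (pnum = 0; pnum < PEB_COUNT; pnum++)
		leb_to_peb[pnum] = UNMAPPED;
	for (pnum = 0; pnum < PEB_COUNT; pnum++)
		if (hdrs[pnum].lnum != UNMAPPED)
			leb_to_peb[hdrs[pnum].lnum] = pnum;
}

/* Attach from a fastmap: the mapping table was checkpointed on flash as one
 * object, so attaching is a single bulk read - nearly constant time. */
void attach_by_fastmap(const int *on_flash_fastmap, int *leb_to_peb)
{
	int lnum;

	for (lnum = 0; lnum < PEB_COUNT; lnum++)
		leb_to_peb[lnum] = on_flash_fastmap[lnum];
}
```

Both paths produce the same LEB->PEB table; the difference is that the scan touches every PEB while the fastmap path reads one stored object.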

One thing I want to say explicitly is that fastmap did not have
enough linux-next exposure. It is partially my fault - I did not
respond quickly enough. I _really_ apologize for this. But it had
good testing, and it is disabled by default, so I do not expect that
we'll break anything.

Fastmap is declared as experimental so far, and it is off by default.
We did declare that the on-flash format may be changed. The reason
is that no one has used it in real production so far, so there is a
high risk that something is missing. Besides, we do not have
user-space tools supporting fastmap yet.

Nevertheless, I suggest we merge this feature. Many people want
UBI's scanning bottleneck to be fixed, and merging fastmap now should
accelerate its production use. The plan is to make it bullet-proof,
clean it up somewhat, and make it the default for UBI. I do not know
how many kernel releases that will take.

Basically, what I want to do for fastmap is something like what
Linus did for btrfs a few years ago."

* tag 'upstream-3.7-rc1-fastmap' of git://git.infradead.org/linux-ubi:
UBI: Wire-up fastmap
UBI: Add fastmap core
UBI: Add fastmap support to the WL sub-system
UBI: Add fastmap stuff to attach.c
UBI: Wire-up ->fm_sem
UBI: Add fastmap bits to build.c
UBI: Add self_check_eba()
UBI: Export next_sqnum()
UBI: Add fastmap stuff to ubi.h
UBI: Add fastmap on-flash data structures

+2823 -244
+6
MAINTAINERS
···
7457 7457       F: include/linux/mtd/ubi.h
7458 7458       F: include/mtd/ubi-user.h
7459 7459
     7460 +     UNSORTED BLOCK IMAGES (UBI) Fastmap
     7461 +     M: Richard Weinberger <richard@nod.at>
     7462 +     L: linux-mtd@lists.infradead.org
     7463 +     S: Maintained
     7464 +     F: drivers/mtd/ubi/fastmap.c
     7465 +
7460 7466       USB ACM DRIVER
7461 7467       M: Oliver Neukum <oliver@neukum.org>
7462 7468       L: linux-usb@vger.kernel.org
+21
drivers/mtd/ubi/Kconfig
···
56 56
57 57          Leave the default value if unsure.
58 58
   59 +    config MTD_UBI_FASTMAP
   60 +        bool "UBI Fastmap (Experimental feature)"
   61 +        default n
   62 +        help
   63 +           Important: this feature is experimental so far, and the on-flash
   64 +           format for fastmap may change in future kernel versions.
   65 +
   66 +           Fastmap is a mechanism which allows attaching a UBI device
   67 +           in nearly constant time. Instead of scanning the whole MTD device it
   68 +           only has to locate a checkpoint (called fastmap) on the device.
   69 +           The on-flash fastmap contains all information needed to attach
   70 +           the device. Using fastmap only makes sense on large devices where
   71 +           attaching by scanning takes a long time. UBI will not automatically
   72 +           install a fastmap on old images, but you can set the UBI module
   73 +           parameter fm_autoconvert to 1 if you want it to. Please note that
   74 +           fastmap-enabled images are still usable with UBI implementations
   75 +           without fastmap support. On typical flash devices the whole fastmap
   76 +           fits into one PEB. UBI will reserve PEBs to hold two fastmaps.
   77 +
   78 +           If in doubt, say "N".
   79 +
59 80      config MTD_UBI_GLUEBI
60 81          tristate "MTD devices emulation driver (gluebi)"
61 82          help
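Putting the Kconfig option and the `fm_autoconvert` module parameter (added later in this series in build.c) together, enabling fastmap looks roughly like this. This is a configuration sketch; the MTD device number `0` is an arbitrary example.

```shell
# Build-time: enable the experimental fastmap support (default is n).
#   CONFIG_MTD_UBI=y
#   CONFIG_MTD_UBI_FASTMAP=y

# Attach-time: ask UBI to install a fastmap on an existing non-fastmap
# image; without fm_autoconvert=1, old images keep attaching by scanning.
modprobe ubi mtd=0 fm_autoconvert=1
```

Images converted this way remain usable by UBI implementations without fastmap support, since the fastmap volumes are marked UBI_COMPAT_DELETE.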
+1
drivers/mtd/ubi/Makefile
···
2 2
3 3     ubi-y += vtbl.o vmt.o upd.o build.o cdev.o kapi.o eba.o io.o wl.o attach.o
4 4     ubi-y += misc.o debug.o
  5 +   ubi-$(CONFIG_MTD_UBI_FASTMAP) += fastmap.o
5 6
6 7     obj-$(CONFIG_MTD_UBI_GLUEBI) += gluebi.o
+291 -161
drivers/mtd/ubi/attach.c
··· 300 300 } 301 301 302 302 /** 303 - * compare_lebs - find out which logical eraseblock is newer. 303 + * ubi_compare_lebs - find out which logical eraseblock is newer. 304 304 * @ubi: UBI device description object 305 305 * @aeb: first logical eraseblock to compare 306 306 * @pnum: physical eraseblock number of the second logical eraseblock to ··· 319 319 * o bit 2 is cleared: the older LEB is not corrupted; 320 320 * o bit 2 is set: the older LEB is corrupted. 321 321 */ 322 - static int compare_lebs(struct ubi_device *ubi, const struct ubi_ainf_peb *aeb, 322 + int ubi_compare_lebs(struct ubi_device *ubi, const struct ubi_ainf_peb *aeb, 323 323 int pnum, const struct ubi_vid_hdr *vid_hdr) 324 324 { 325 325 void *buf; ··· 337 337 * support these images anymore. Well, those images still work, 338 338 * but only if no unclean reboots happened. 339 339 */ 340 - ubi_err("unsupported on-flash UBI format\n"); 340 + ubi_err("unsupported on-flash UBI format"); 341 341 return -EINVAL; 342 342 } 343 343 ··· 507 507 * sequence numbers. We still can attach these images, unless 508 508 * there is a need to distinguish between old and new 509 509 * eraseblocks, in which case we'll refuse the image in 510 - * 'compare_lebs()'. In other words, we attach old clean 510 + * 'ubi_compare_lebs()'. In other words, we attach old clean 511 511 * images, but refuse attaching old images with duplicated 512 512 * logical eraseblocks because there was an unclean reboot. 513 513 */ ··· 523 523 * Now we have to drop the older one and preserve the newer 524 524 * one. 525 525 */ 526 - cmp_res = compare_lebs(ubi, aeb, pnum, vid_hdr); 526 + cmp_res = ubi_compare_lebs(ubi, aeb, pnum, vid_hdr); 527 527 if (cmp_res < 0) 528 528 return cmp_res; 529 529 ··· 748 748 /** 749 749 * check_corruption - check the data area of PEB. 
750 750 * @ubi: UBI device description object 751 - * @vid_hrd: the (corrupted) VID header of this PEB 751 + * @vid_hdr: the (corrupted) VID header of this PEB 752 752 * @pnum: the physical eraseblock number to check 753 753 * 754 754 * This is a helper function which is used to distinguish between VID header ··· 810 810 * @ubi: UBI device description object 811 811 * @ai: attaching information 812 812 * @pnum: the physical eraseblock number 813 + * @vid: The volume ID of the found volume will be stored in this pointer 814 + * @sqnum: The sqnum of the found volume will be stored in this pointer 813 815 * 814 816 * This function reads UBI headers of PEB @pnum, checks them, and adds 815 817 * information about this PEB to the corresponding list or RB-tree in the ··· 819 817 * successfully handled and a negative error code in case of failure. 820 818 */ 821 819 static int scan_peb(struct ubi_device *ubi, struct ubi_attach_info *ai, 822 - int pnum) 820 + int pnum, int *vid, unsigned long long *sqnum) 823 821 { 824 822 long long uninitialized_var(ec); 825 - int err, bitflips = 0, vol_id, ec_err = 0; 823 + int err, bitflips = 0, vol_id = -1, ec_err = 0; 826 824 827 825 dbg_bld("scan PEB %d", pnum); 828 826 ··· 993 991 } 994 992 995 993 vol_id = be32_to_cpu(vidh->vol_id); 994 + if (vid) 995 + *vid = vol_id; 996 + if (sqnum) 997 + *sqnum = be64_to_cpu(vidh->sqnum); 996 998 if (vol_id > UBI_MAX_VOLUMES && vol_id != UBI_LAYOUT_VOLUME_ID) { 997 999 int lnum = be32_to_cpu(vidh->lnum); 998 1000 999 1001 /* Unsupported internal volume */ 1000 1002 switch (vidh->compat) { 1001 1003 case UBI_COMPAT_DELETE: 1002 - ubi_msg("\"delete\" compatible internal volume %d:%d found, will remove it", 1003 - vol_id, lnum); 1004 + if (vol_id != UBI_FM_SB_VOLUME_ID 1005 + && vol_id != UBI_FM_DATA_VOLUME_ID) { 1006 + ubi_msg("\"delete\" compatible internal volume %d:%d found, will remove it", 1007 + vol_id, lnum); 1008 + } 1004 1009 err = add_to_list(ai, pnum, vol_id, lnum, 1005 1010 ec, 1, 
&ai->erase); 1006 1011 if (err) ··· 1130 1121 } 1131 1122 1132 1123 /** 1133 - * scan_all - scan entire MTD device. 1134 - * @ubi: UBI device description object 1135 - * 1136 - * This function does full scanning of an MTD device and returns complete 1137 - * information about it in form of a "struct ubi_attach_info" object. In case 1138 - * of failure, an error code is returned. 1139 - */ 1140 - static struct ubi_attach_info *scan_all(struct ubi_device *ubi) 1141 - { 1142 - int err, pnum; 1143 - struct rb_node *rb1, *rb2; 1144 - struct ubi_ainf_volume *av; 1145 - struct ubi_ainf_peb *aeb; 1146 - struct ubi_attach_info *ai; 1147 - 1148 - ai = kzalloc(sizeof(struct ubi_attach_info), GFP_KERNEL); 1149 - if (!ai) 1150 - return ERR_PTR(-ENOMEM); 1151 - 1152 - INIT_LIST_HEAD(&ai->corr); 1153 - INIT_LIST_HEAD(&ai->free); 1154 - INIT_LIST_HEAD(&ai->erase); 1155 - INIT_LIST_HEAD(&ai->alien); 1156 - ai->volumes = RB_ROOT; 1157 - 1158 - err = -ENOMEM; 1159 - ai->aeb_slab_cache = kmem_cache_create("ubi_aeb_slab_cache", 1160 - sizeof(struct ubi_ainf_peb), 1161 - 0, 0, NULL); 1162 - if (!ai->aeb_slab_cache) 1163 - goto out_ai; 1164 - 1165 - ech = kzalloc(ubi->ec_hdr_alsize, GFP_KERNEL); 1166 - if (!ech) 1167 - goto out_ai; 1168 - 1169 - vidh = ubi_zalloc_vid_hdr(ubi, GFP_KERNEL); 1170 - if (!vidh) 1171 - goto out_ech; 1172 - 1173 - for (pnum = 0; pnum < ubi->peb_count; pnum++) { 1174 - cond_resched(); 1175 - 1176 - dbg_gen("process PEB %d", pnum); 1177 - err = scan_peb(ubi, ai, pnum); 1178 - if (err < 0) 1179 - goto out_vidh; 1180 - } 1181 - 1182 - ubi_msg("scanning is finished"); 1183 - 1184 - /* Calculate mean erase counter */ 1185 - if (ai->ec_count) 1186 - ai->mean_ec = div_u64(ai->ec_sum, ai->ec_count); 1187 - 1188 - err = late_analysis(ubi, ai); 1189 - if (err) 1190 - goto out_vidh; 1191 - 1192 - /* 1193 - * In case of unknown erase counter we use the mean erase counter 1194 - * value. 
1195 - */ 1196 - ubi_rb_for_each_entry(rb1, av, &ai->volumes, rb) { 1197 - ubi_rb_for_each_entry(rb2, aeb, &av->root, u.rb) 1198 - if (aeb->ec == UBI_UNKNOWN) 1199 - aeb->ec = ai->mean_ec; 1200 - } 1201 - 1202 - list_for_each_entry(aeb, &ai->free, u.list) { 1203 - if (aeb->ec == UBI_UNKNOWN) 1204 - aeb->ec = ai->mean_ec; 1205 - } 1206 - 1207 - list_for_each_entry(aeb, &ai->corr, u.list) 1208 - if (aeb->ec == UBI_UNKNOWN) 1209 - aeb->ec = ai->mean_ec; 1210 - 1211 - list_for_each_entry(aeb, &ai->erase, u.list) 1212 - if (aeb->ec == UBI_UNKNOWN) 1213 - aeb->ec = ai->mean_ec; 1214 - 1215 - err = self_check_ai(ubi, ai); 1216 - if (err) 1217 - goto out_vidh; 1218 - 1219 - ubi_free_vid_hdr(ubi, vidh); 1220 - kfree(ech); 1221 - 1222 - return ai; 1223 - 1224 - out_vidh: 1225 - ubi_free_vid_hdr(ubi, vidh); 1226 - out_ech: 1227 - kfree(ech); 1228 - out_ai: 1229 - ubi_destroy_ai(ai); 1230 - return ERR_PTR(err); 1231 - } 1232 - 1233 - /** 1234 - * ubi_attach - attach an MTD device. 1235 - * @ubi: UBI device descriptor 1236 - * 1237 - * This function returns zero in case of success and a negative error code in 1238 - * case of failure. 1239 - */ 1240 - int ubi_attach(struct ubi_device *ubi) 1241 - { 1242 - int err; 1243 - struct ubi_attach_info *ai; 1244 - 1245 - ai = scan_all(ubi); 1246 - if (IS_ERR(ai)) 1247 - return PTR_ERR(ai); 1248 - 1249 - ubi->bad_peb_count = ai->bad_peb_count; 1250 - ubi->good_peb_count = ubi->peb_count - ubi->bad_peb_count; 1251 - ubi->corr_peb_count = ai->corr_peb_count; 1252 - ubi->max_ec = ai->max_ec; 1253 - ubi->mean_ec = ai->mean_ec; 1254 - dbg_gen("max. 
sequence number: %llu", ai->max_sqnum); 1255 - 1256 - err = ubi_read_volume_table(ubi, ai); 1257 - if (err) 1258 - goto out_ai; 1259 - 1260 - err = ubi_wl_init(ubi, ai); 1261 - if (err) 1262 - goto out_vtbl; 1263 - 1264 - err = ubi_eba_init(ubi, ai); 1265 - if (err) 1266 - goto out_wl; 1267 - 1268 - ubi_destroy_ai(ai); 1269 - return 0; 1270 - 1271 - out_wl: 1272 - ubi_wl_close(ubi); 1273 - out_vtbl: 1274 - ubi_free_internal_volumes(ubi); 1275 - vfree(ubi->vtbl); 1276 - out_ai: 1277 - ubi_destroy_ai(ai); 1278 - return err; 1279 - } 1280 - 1281 - /** 1282 1124 * destroy_av - free volume attaching information. 1283 1125 * @av: volume attaching information 1284 1126 * @ai: attaching information ··· 1163 1303 } 1164 1304 1165 1305 /** 1166 - * ubi_destroy_ai - destroy attaching information. 1306 + * destroy_ai - destroy attaching information. 1167 1307 * @ai: attaching information 1168 1308 */ 1169 - void ubi_destroy_ai(struct ubi_attach_info *ai) 1309 + static void destroy_ai(struct ubi_attach_info *ai) 1170 1310 { 1171 1311 struct ubi_ainf_peb *aeb, *aeb_tmp; 1172 1312 struct ubi_ainf_volume *av; ··· 1215 1355 kmem_cache_destroy(ai->aeb_slab_cache); 1216 1356 1217 1357 kfree(ai); 1358 + } 1359 + 1360 + /** 1361 + * scan_all - scan entire MTD device. 1362 + * @ubi: UBI device description object 1363 + * @ai: attach info object 1364 + * @start: start scanning at this PEB 1365 + * 1366 + * This function does full scanning of an MTD device and returns complete 1367 + * information about it in form of a "struct ubi_attach_info" object. In case 1368 + * of failure, an error code is returned. 
1369 + */ 1370 + static int scan_all(struct ubi_device *ubi, struct ubi_attach_info *ai, 1371 + int start) 1372 + { 1373 + int err, pnum; 1374 + struct rb_node *rb1, *rb2; 1375 + struct ubi_ainf_volume *av; 1376 + struct ubi_ainf_peb *aeb; 1377 + 1378 + err = -ENOMEM; 1379 + 1380 + ech = kzalloc(ubi->ec_hdr_alsize, GFP_KERNEL); 1381 + if (!ech) 1382 + return err; 1383 + 1384 + vidh = ubi_zalloc_vid_hdr(ubi, GFP_KERNEL); 1385 + if (!vidh) 1386 + goto out_ech; 1387 + 1388 + for (pnum = start; pnum < ubi->peb_count; pnum++) { 1389 + cond_resched(); 1390 + 1391 + dbg_gen("process PEB %d", pnum); 1392 + err = scan_peb(ubi, ai, pnum, NULL, NULL); 1393 + if (err < 0) 1394 + goto out_vidh; 1395 + } 1396 + 1397 + ubi_msg("scanning is finished"); 1398 + 1399 + /* Calculate mean erase counter */ 1400 + if (ai->ec_count) 1401 + ai->mean_ec = div_u64(ai->ec_sum, ai->ec_count); 1402 + 1403 + err = late_analysis(ubi, ai); 1404 + if (err) 1405 + goto out_vidh; 1406 + 1407 + /* 1408 + * In case of unknown erase counter we use the mean erase counter 1409 + * value. 
1410 + */ 1411 + ubi_rb_for_each_entry(rb1, av, &ai->volumes, rb) { 1412 + ubi_rb_for_each_entry(rb2, aeb, &av->root, u.rb) 1413 + if (aeb->ec == UBI_UNKNOWN) 1414 + aeb->ec = ai->mean_ec; 1415 + } 1416 + 1417 + list_for_each_entry(aeb, &ai->free, u.list) { 1418 + if (aeb->ec == UBI_UNKNOWN) 1419 + aeb->ec = ai->mean_ec; 1420 + } 1421 + 1422 + list_for_each_entry(aeb, &ai->corr, u.list) 1423 + if (aeb->ec == UBI_UNKNOWN) 1424 + aeb->ec = ai->mean_ec; 1425 + 1426 + list_for_each_entry(aeb, &ai->erase, u.list) 1427 + if (aeb->ec == UBI_UNKNOWN) 1428 + aeb->ec = ai->mean_ec; 1429 + 1430 + err = self_check_ai(ubi, ai); 1431 + if (err) 1432 + goto out_vidh; 1433 + 1434 + ubi_free_vid_hdr(ubi, vidh); 1435 + kfree(ech); 1436 + 1437 + return 0; 1438 + 1439 + out_vidh: 1440 + ubi_free_vid_hdr(ubi, vidh); 1441 + out_ech: 1442 + kfree(ech); 1443 + return err; 1444 + } 1445 + 1446 + #ifdef CONFIG_MTD_UBI_FASTMAP 1447 + 1448 + /** 1449 + * scan_fastmap - try to find a fastmap and attach from it. 1450 + * @ubi: UBI device description object 1451 + * @ai: attach info object 1452 + * 1453 + * Returns 0 on success, negative return values indicate an internal 1454 + * error. 1455 + * UBI_NO_FASTMAP denotes that no fastmap was found. 1456 + * UBI_BAD_FASTMAP denotes that the found fastmap was invalid. 
1457 + */ 1458 + static int scan_fast(struct ubi_device *ubi, struct ubi_attach_info *ai) 1459 + { 1460 + int err, pnum, fm_anchor = -1; 1461 + unsigned long long max_sqnum = 0; 1462 + 1463 + err = -ENOMEM; 1464 + 1465 + ech = kzalloc(ubi->ec_hdr_alsize, GFP_KERNEL); 1466 + if (!ech) 1467 + goto out; 1468 + 1469 + vidh = ubi_zalloc_vid_hdr(ubi, GFP_KERNEL); 1470 + if (!vidh) 1471 + goto out_ech; 1472 + 1473 + for (pnum = 0; pnum < UBI_FM_MAX_START; pnum++) { 1474 + int vol_id = -1; 1475 + unsigned long long sqnum = -1; 1476 + cond_resched(); 1477 + 1478 + dbg_gen("process PEB %d", pnum); 1479 + err = scan_peb(ubi, ai, pnum, &vol_id, &sqnum); 1480 + if (err < 0) 1481 + goto out_vidh; 1482 + 1483 + if (vol_id == UBI_FM_SB_VOLUME_ID && sqnum > max_sqnum) { 1484 + max_sqnum = sqnum; 1485 + fm_anchor = pnum; 1486 + } 1487 + } 1488 + 1489 + ubi_free_vid_hdr(ubi, vidh); 1490 + kfree(ech); 1491 + 1492 + if (fm_anchor < 0) 1493 + return UBI_NO_FASTMAP; 1494 + 1495 + return ubi_scan_fastmap(ubi, ai, fm_anchor); 1496 + 1497 + out_vidh: 1498 + ubi_free_vid_hdr(ubi, vidh); 1499 + out_ech: 1500 + kfree(ech); 1501 + out: 1502 + return err; 1503 + } 1504 + 1505 + #endif 1506 + 1507 + static struct ubi_attach_info *alloc_ai(const char *slab_name) 1508 + { 1509 + struct ubi_attach_info *ai; 1510 + 1511 + ai = kzalloc(sizeof(struct ubi_attach_info), GFP_KERNEL); 1512 + if (!ai) 1513 + return ai; 1514 + 1515 + INIT_LIST_HEAD(&ai->corr); 1516 + INIT_LIST_HEAD(&ai->free); 1517 + INIT_LIST_HEAD(&ai->erase); 1518 + INIT_LIST_HEAD(&ai->alien); 1519 + ai->volumes = RB_ROOT; 1520 + ai->aeb_slab_cache = kmem_cache_create(slab_name, 1521 + sizeof(struct ubi_ainf_peb), 1522 + 0, 0, NULL); 1523 + if (!ai->aeb_slab_cache) { 1524 + kfree(ai); 1525 + ai = NULL; 1526 + } 1527 + 1528 + return ai; 1529 + } 1530 + 1531 + /** 1532 + * ubi_attach - attach an MTD device. 
1533 + * @ubi: UBI device descriptor 1534 + * @force_scan: if set to non-zero attach by scanning 1535 + * 1536 + * This function returns zero in case of success and a negative error code in 1537 + * case of failure. 1538 + */ 1539 + int ubi_attach(struct ubi_device *ubi, int force_scan) 1540 + { 1541 + int err; 1542 + struct ubi_attach_info *ai; 1543 + 1544 + ai = alloc_ai("ubi_aeb_slab_cache"); 1545 + if (!ai) 1546 + return -ENOMEM; 1547 + 1548 + #ifdef CONFIG_MTD_UBI_FASTMAP 1549 + /* On small flash devices we disable fastmap in any case. */ 1550 + if ((int)mtd_div_by_eb(ubi->mtd->size, ubi->mtd) <= UBI_FM_MAX_START) { 1551 + ubi->fm_disabled = 1; 1552 + force_scan = 1; 1553 + } 1554 + 1555 + if (force_scan) 1556 + err = scan_all(ubi, ai, 0); 1557 + else { 1558 + err = scan_fast(ubi, ai); 1559 + if (err > 0) { 1560 + if (err != UBI_NO_FASTMAP) { 1561 + destroy_ai(ai); 1562 + ai = alloc_ai("ubi_aeb_slab_cache2"); 1563 + if (!ai) 1564 + return -ENOMEM; 1565 + } 1566 + 1567 + err = scan_all(ubi, ai, UBI_FM_MAX_START); 1568 + } 1569 + } 1570 + #else 1571 + err = scan_all(ubi, ai, 0); 1572 + #endif 1573 + if (err) 1574 + goto out_ai; 1575 + 1576 + ubi->bad_peb_count = ai->bad_peb_count; 1577 + ubi->good_peb_count = ubi->peb_count - ubi->bad_peb_count; 1578 + ubi->corr_peb_count = ai->corr_peb_count; 1579 + ubi->max_ec = ai->max_ec; 1580 + ubi->mean_ec = ai->mean_ec; 1581 + dbg_gen("max. 
sequence number: %llu", ai->max_sqnum); 1582 + 1583 + err = ubi_read_volume_table(ubi, ai); 1584 + if (err) 1585 + goto out_ai; 1586 + 1587 + err = ubi_wl_init(ubi, ai); 1588 + if (err) 1589 + goto out_vtbl; 1590 + 1591 + err = ubi_eba_init(ubi, ai); 1592 + if (err) 1593 + goto out_wl; 1594 + 1595 + #ifdef CONFIG_MTD_UBI_FASTMAP 1596 + if (ubi->fm && ubi->dbg->chk_gen) { 1597 + struct ubi_attach_info *scan_ai; 1598 + 1599 + scan_ai = alloc_ai("ubi_ckh_aeb_slab_cache"); 1600 + if (!scan_ai) 1601 + goto out_wl; 1602 + 1603 + err = scan_all(ubi, scan_ai, 0); 1604 + if (err) { 1605 + destroy_ai(scan_ai); 1606 + goto out_wl; 1607 + } 1608 + 1609 + err = self_check_eba(ubi, ai, scan_ai); 1610 + destroy_ai(scan_ai); 1611 + 1612 + if (err) 1613 + goto out_wl; 1614 + } 1615 + #endif 1616 + 1617 + destroy_ai(ai); 1618 + return 0; 1619 + 1620 + out_wl: 1621 + ubi_wl_close(ubi); 1622 + out_vtbl: 1623 + ubi_free_internal_volumes(ubi); 1624 + vfree(ubi->vtbl); 1625 + out_ai: 1626 + destroy_ai(ai); 1627 + return err; 1218 1628 } 1219 1629 1220 1630 /**
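The decision flow that the reworked `ubi_attach()` implements can be condensed into a small pure function. This is a simplified model with invented names; `UBI_FM_MAX_START` is given an assumed value of 64 here for illustration.

```c
/* Soft-failure return codes mirroring the patch. */
enum { UBI_NO_FASTMAP = 1, UBI_BAD_FASTMAP = 2 };

#define UBI_FM_MAX_START 64	/* assumed: fastmap anchor must sit below this PEB */

/* Simplified model of the decision in ubi_attach(): given the result of
 * scan_fast(), report where the fallback full scan must start (-1 means no
 * scan is needed) and whether the partial attach info must be thrown away
 * because a fastmap was found but turned out to be invalid. */
int fallback_scan_start(int force_scan, int fast_err, int *discard_ai)
{
	*discard_ai = 0;

	if (force_scan)
		return 0;			/* scan every PEB */
	if (fast_err == 0)
		return -1;			/* fastmap attach succeeded */
	if (fast_err == UBI_BAD_FASTMAP)
		*discard_ai = 1;		/* restart with fresh ubi_attach_info */

	/* PEBs 0..UBI_FM_MAX_START-1 were already visited while searching
	 * for the fastmap anchor, so the full scan resumes after them. */
	return UBI_FM_MAX_START;
}
```

This mirrors why the hunk allocates a second slab cache ("ubi_aeb_slab_cache2") only in the bad-fastmap case: the already collected attach info is discarded there, while after UBI_NO_FASTMAP it is simply extended.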
+66 -4
drivers/mtd/ubi/build.c
··· 76 76 77 77 /* MTD devices specification parameters */ 78 78 static struct mtd_dev_param __initdata mtd_dev_param[UBI_MAX_DEVICES]; 79 - 79 + #ifdef CONFIG_MTD_UBI_FASTMAP 80 + /* UBI module parameter to enable fastmap automatically on non-fastmap images */ 81 + static bool fm_autoconvert; 82 + #endif 80 83 /* Root UBI "class" object (corresponds to '/<sysfs>/class/ubi/') */ 81 84 struct class *ubi_class; 82 85 ··· 156 153 157 154 ubi_do_get_device_info(ubi, &nt.di); 158 155 ubi_do_get_volume_info(ubi, vol, &nt.vi); 156 + 157 + #ifdef CONFIG_MTD_UBI_FASTMAP 158 + switch (ntype) { 159 + case UBI_VOLUME_ADDED: 160 + case UBI_VOLUME_REMOVED: 161 + case UBI_VOLUME_RESIZED: 162 + case UBI_VOLUME_RENAMED: 163 + if (ubi_update_fastmap(ubi)) { 164 + ubi_err("Unable to update fastmap!"); 165 + ubi_ro_mode(ubi); 166 + } 167 + } 168 + #endif 159 169 return blocking_notifier_call_chain(&ubi_notifiers, ntype, &nt); 160 170 } 161 171 ··· 934 918 ubi->vid_hdr_offset = vid_hdr_offset; 935 919 ubi->autoresize_vol_id = -1; 936 920 921 + #ifdef CONFIG_MTD_UBI_FASTMAP 922 + ubi->fm_pool.used = ubi->fm_pool.size = 0; 923 + ubi->fm_wl_pool.used = ubi->fm_wl_pool.size = 0; 924 + 925 + /* 926 + * fm_pool.max_size is 5% of the total number of PEBs but it's also 927 + * between UBI_FM_MAX_POOL_SIZE and UBI_FM_MIN_POOL_SIZE. 
928 + */ 929 + ubi->fm_pool.max_size = min(((int)mtd_div_by_eb(ubi->mtd->size, 930 + ubi->mtd) / 100) * 5, UBI_FM_MAX_POOL_SIZE); 931 + if (ubi->fm_pool.max_size < UBI_FM_MIN_POOL_SIZE) 932 + ubi->fm_pool.max_size = UBI_FM_MIN_POOL_SIZE; 933 + 934 + ubi->fm_wl_pool.max_size = UBI_FM_WL_POOL_SIZE; 935 + ubi->fm_disabled = !fm_autoconvert; 936 + 937 + if (!ubi->fm_disabled && (int)mtd_div_by_eb(ubi->mtd->size, ubi->mtd) 938 + <= UBI_FM_MAX_START) { 939 + ubi_err("More than %i PEBs are needed for fastmap, sorry.", 940 + UBI_FM_MAX_START); 941 + ubi->fm_disabled = 1; 942 + } 943 + 944 + ubi_msg("default fastmap pool size: %d", ubi->fm_pool.max_size); 945 + ubi_msg("default fastmap WL pool size: %d", ubi->fm_wl_pool.max_size); 946 + #else 947 + ubi->fm_disabled = 1; 948 + #endif 937 949 mutex_init(&ubi->buf_mutex); 938 950 mutex_init(&ubi->ckvol_mutex); 939 951 mutex_init(&ubi->device_mutex); 940 952 spin_lock_init(&ubi->volumes_lock); 953 + mutex_init(&ubi->fm_mutex); 954 + init_rwsem(&ubi->fm_sem); 941 955 942 956 ubi_msg("attaching mtd%d to ubi%d", mtd->index, ubi_num); 943 957 ··· 980 934 if (!ubi->peb_buf) 981 935 goto out_free; 982 936 937 + #ifdef CONFIG_MTD_UBI_FASTMAP 938 + ubi->fm_size = ubi_calc_fm_size(ubi); 939 + ubi->fm_buf = vzalloc(ubi->fm_size); 940 + if (!ubi->fm_buf) 941 + goto out_free; 942 + #endif 983 943 err = ubi_debugging_init_dev(ubi); 984 944 if (err) 985 945 goto out_free; 986 946 987 - err = ubi_attach(ubi); 947 + err = ubi_attach(ubi, 0); 988 948 if (err) { 989 949 ubi_err("failed to attach mtd%d, error %d", mtd->index, err); 990 950 goto out_debugging; ··· 1064 1012 ubi_debugging_exit_dev(ubi); 1065 1013 out_free: 1066 1014 vfree(ubi->peb_buf); 1015 + vfree(ubi->fm_buf); 1067 1016 if (ref) 1068 1017 put_device(&ubi->dev); 1069 1018 else ··· 1114 1061 ubi_assert(ubi_num == ubi->ubi_num); 1115 1062 ubi_notify_all(ubi, UBI_VOLUME_REMOVED, NULL); 1116 1063 ubi_msg("detaching mtd%d from ubi%d", ubi->mtd->index, ubi_num); 1117 - 1064 + #ifdef 
CONFIG_MTD_UBI_FASTMAP 1065 + /* If we don't write a new fastmap at detach time we lose all 1066 + * EC updates that have been made since the last written fastmap. */ 1067 + ubi_update_fastmap(ubi); 1068 + #endif 1118 1069 /* 1119 1070 * Before freeing anything, we have to stop the background thread to 1120 1071 * prevent it from doing anything on this device while we are freeing. ··· 1134 1077 1135 1078 ubi_debugfs_exit_dev(ubi); 1136 1079 uif_close(ubi); 1080 + 1137 1081 ubi_wl_close(ubi); 1138 1082 ubi_free_internal_volumes(ubi); 1139 1083 vfree(ubi->vtbl); 1140 1084 put_mtd_device(ubi->mtd); 1141 1085 ubi_debugging_exit_dev(ubi); 1142 1086 vfree(ubi->peb_buf); 1087 + vfree(ubi->fm_buf); 1143 1088 ubi_msg("mtd%d is detached from ubi%d", ubi->mtd->index, ubi->ubi_num); 1144 1089 put_device(&ubi->dev); 1145 1090 return 0; ··· 1463 1404 "Example 2: mtd=content,1984 mtd=4 - attach MTD device with name \"content\" using VID header offset 1984, and MTD device number 4 with default VID header offset.\n" 1464 1405 "Example 3: mtd=/dev/mtd1,0,25 - attach MTD device /dev/mtd1 using default VID header offset and reserve 25*nand_size_in_blocks/1024 erase blocks for bad block handling.\n" 1465 1406 "\t(e.g. if the NAND *chipset* has 4096 PEB, 100 will be reserved for this UBI device)."); 1466 - 1407 + #ifdef CONFIG_MTD_UBI_FASTMAP 1408 + module_param(fm_autoconvert, bool, 0644); 1409 + MODULE_PARM_DESC(fm_autoconvert, "Set this parameter to enable fastmap automatically on images without a fastmap."); 1410 + #endif 1467 1411 MODULE_VERSION(__stringify(UBI_VERSION)); 1468 1412 MODULE_DESCRIPTION("UBI - Unsorted Block Images"); 1469 1413 MODULE_AUTHOR("Artem Bityutskiy");
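The pool sizing added to build.c, 5% of the device's PEB count clamped between the two limits, can be checked in isolation. The limit values below are placeholders for illustration, not necessarily the constants defined in the UBI headers.

```c
/* Placeholder limits for illustration only. */
#define UBI_FM_MIN_POOL_SIZE 32
#define UBI_FM_MAX_POOL_SIZE 256

/* Mirror of the fm_pool.max_size computation in ubi_attach_mtd_dev():
 * 5% of the PEB count, bounded by the min/max pool sizes. */
int fm_pool_max_size(int peb_count)
{
	int size = (peb_count / 100) * 5;

	if (size > UBI_FM_MAX_POOL_SIZE)
		size = UBI_FM_MAX_POOL_SIZE;
	if (size < UBI_FM_MIN_POOL_SIZE)
		size = UBI_FM_MIN_POOL_SIZE;
	return size;
}
```

The clamp matters on both ends: tiny devices would otherwise get a useless pool of a few PEBs, and huge devices would reserve far more PEBs than the fastmap needs.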
+117 -9
drivers/mtd/ubi/eba.c
··· 57 57 * global sequence counter value. It also increases the global sequence 58 58 * counter. 59 59 */ 60 - static unsigned long long next_sqnum(struct ubi_device *ubi) 60 + unsigned long long ubi_next_sqnum(struct ubi_device *ubi) 61 61 { 62 62 unsigned long long sqnum; 63 63 ··· 340 340 341 341 dbg_eba("erase LEB %d:%d, PEB %d", vol_id, lnum, pnum); 342 342 343 + down_read(&ubi->fm_sem); 343 344 vol->eba_tbl[lnum] = UBI_LEB_UNMAPPED; 345 + up_read(&ubi->fm_sem); 344 346 err = ubi_wl_put_peb(ubi, vol_id, lnum, pnum, 0); 345 347 346 348 out_unlock: ··· 523 521 goto out_put; 524 522 } 525 523 526 - vid_hdr->sqnum = cpu_to_be64(next_sqnum(ubi)); 524 + vid_hdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 527 525 err = ubi_io_write_vid_hdr(ubi, new_pnum, vid_hdr); 528 526 if (err) 529 527 goto write_error; ··· 550 548 mutex_unlock(&ubi->buf_mutex); 551 549 ubi_free_vid_hdr(ubi, vid_hdr); 552 550 551 + down_read(&ubi->fm_sem); 553 552 vol->eba_tbl[lnum] = new_pnum; 553 + up_read(&ubi->fm_sem); 554 554 ubi_wl_put_peb(ubi, vol_id, lnum, pnum, 1); 555 555 556 556 ubi_msg("data was successfully recovered"); ··· 636 632 } 637 633 638 634 vid_hdr->vol_type = UBI_VID_DYNAMIC; 639 - vid_hdr->sqnum = cpu_to_be64(next_sqnum(ubi)); 635 + vid_hdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 640 636 vid_hdr->vol_id = cpu_to_be32(vol_id); 641 637 vid_hdr->lnum = cpu_to_be32(lnum); 642 638 vid_hdr->compat = ubi_get_compat(ubi, vol_id); ··· 669 665 } 670 666 } 671 667 668 + down_read(&ubi->fm_sem); 672 669 vol->eba_tbl[lnum] = pnum; 670 + up_read(&ubi->fm_sem); 673 671 674 672 leb_write_unlock(ubi, vol_id, lnum); 675 673 ubi_free_vid_hdr(ubi, vid_hdr); ··· 698 692 return err; 699 693 } 700 694 701 - vid_hdr->sqnum = cpu_to_be64(next_sqnum(ubi)); 695 + vid_hdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 702 696 ubi_msg("try another PEB"); 703 697 goto retry; 704 698 } ··· 751 745 return err; 752 746 } 753 747 754 - vid_hdr->sqnum = cpu_to_be64(next_sqnum(ubi)); 748 + vid_hdr->sqnum = 
cpu_to_be64(ubi_next_sqnum(ubi)); 755 749 vid_hdr->vol_id = cpu_to_be32(vol_id); 756 750 vid_hdr->lnum = cpu_to_be32(lnum); 757 751 vid_hdr->compat = ubi_get_compat(ubi, vol_id); ··· 789 783 } 790 784 791 785 ubi_assert(vol->eba_tbl[lnum] < 0); 786 + down_read(&ubi->fm_sem); 792 787 vol->eba_tbl[lnum] = pnum; 788 + up_read(&ubi->fm_sem); 793 789 794 790 leb_write_unlock(ubi, vol_id, lnum); 795 791 ubi_free_vid_hdr(ubi, vid_hdr); ··· 818 810 return err; 819 811 } 820 812 821 - vid_hdr->sqnum = cpu_to_be64(next_sqnum(ubi)); 813 + vid_hdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 822 814 ubi_msg("try another PEB"); 823 815 goto retry; 824 816 } ··· 870 862 if (err) 871 863 goto out_mutex; 872 864 873 - vid_hdr->sqnum = cpu_to_be64(next_sqnum(ubi)); 865 + vid_hdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 874 866 vid_hdr->vol_id = cpu_to_be32(vol_id); 875 867 vid_hdr->lnum = cpu_to_be32(lnum); 876 868 vid_hdr->compat = ubi_get_compat(ubi, vol_id); ··· 912 904 goto out_leb_unlock; 913 905 } 914 906 907 + down_read(&ubi->fm_sem); 915 908 vol->eba_tbl[lnum] = pnum; 909 + up_read(&ubi->fm_sem); 916 910 917 911 out_leb_unlock: 918 912 leb_write_unlock(ubi, vol_id, lnum); ··· 940 930 goto out_leb_unlock; 941 931 } 942 932 943 - vid_hdr->sqnum = cpu_to_be64(next_sqnum(ubi)); 933 + vid_hdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 944 934 ubi_msg("try another PEB"); 945 935 goto retry; 946 936 } ··· 1099 1089 vid_hdr->data_size = cpu_to_be32(data_size); 1100 1090 vid_hdr->data_crc = cpu_to_be32(crc); 1101 1091 } 1102 - vid_hdr->sqnum = cpu_to_be64(next_sqnum(ubi)); 1092 + vid_hdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 1103 1093 1104 1094 err = ubi_io_write_vid_hdr(ubi, to, vid_hdr); 1105 1095 if (err) { ··· 1161 1151 } 1162 1152 1163 1153 ubi_assert(vol->eba_tbl[lnum] == from); 1154 + down_read(&ubi->fm_sem); 1164 1155 vol->eba_tbl[lnum] = to; 1156 + up_read(&ubi->fm_sem); 1165 1157 1166 1158 out_unlock_buf: 1167 1159 mutex_unlock(&ubi->buf_mutex); ··· 1211 1199 if 
(ubi->corr_peb_count) 1212 1200 ubi_warn("%d PEBs are corrupted and not used", 1213 1201 ubi->corr_peb_count); 1202 + } 1203 + 1204 + /** 1205 + * self_check_eba - run a self check on the EBA table constructed by fastmap. 1206 + * @ubi: UBI device description object 1207 + * @ai_fastmap: UBI attach info object created by fastmap 1208 + * @ai_scan: UBI attach info object created by scanning 1209 + * 1210 + * Returns < 0 in case of an internal error, 0 otherwise. 1211 + * If a bad EBA table entry was found it will be printed out and 1212 + * ubi_assert() triggers. 1213 + */ 1214 + int self_check_eba(struct ubi_device *ubi, struct ubi_attach_info *ai_fastmap, 1215 + struct ubi_attach_info *ai_scan) 1216 + { 1217 + int i, j, num_volumes, ret = 0; 1218 + int **scan_eba, **fm_eba; 1219 + struct ubi_ainf_volume *av; 1220 + struct ubi_volume *vol; 1221 + struct ubi_ainf_peb *aeb; 1222 + struct rb_node *rb; 1223 + 1224 + num_volumes = ubi->vtbl_slots + UBI_INT_VOL_COUNT; 1225 + 1226 + scan_eba = kmalloc(sizeof(*scan_eba) * num_volumes, GFP_KERNEL); 1227 + if (!scan_eba) 1228 + return -ENOMEM; 1229 + 1230 + fm_eba = kmalloc(sizeof(*fm_eba) * num_volumes, GFP_KERNEL); 1231 + if (!fm_eba) { 1232 + kfree(scan_eba); 1233 + return -ENOMEM; 1234 + } 1235 + 1236 + for (i = 0; i < num_volumes; i++) { 1237 + vol = ubi->volumes[i]; 1238 + if (!vol) 1239 + continue; 1240 + 1241 + scan_eba[i] = kmalloc(vol->reserved_pebs * sizeof(**scan_eba), 1242 + GFP_KERNEL); 1243 + if (!scan_eba[i]) { 1244 + ret = -ENOMEM; 1245 + goto out_free; 1246 + } 1247 + 1248 + fm_eba[i] = kmalloc(vol->reserved_pebs * sizeof(**fm_eba), 1249 + GFP_KERNEL); 1250 + if (!fm_eba[i]) { 1251 + ret = -ENOMEM; 1252 + goto out_free; 1253 + } 1254 + 1255 + for (j = 0; j < vol->reserved_pebs; j++) 1256 + scan_eba[i][j] = fm_eba[i][j] = UBI_LEB_UNMAPPED; 1257 + 1258 + av = ubi_find_av(ai_scan, idx2vol_id(ubi, i)); 1259 + if (!av) 1260 + continue; 1261 + 1262 + ubi_rb_for_each_entry(rb, aeb, &av->root, u.rb) 1263 + 
scan_eba[i][aeb->lnum] = aeb->pnum; 1264 + 1265 + av = ubi_find_av(ai_fastmap, idx2vol_id(ubi, i)); 1266 + if (!av) 1267 + continue; 1268 + 1269 + ubi_rb_for_each_entry(rb, aeb, &av->root, u.rb) 1270 + fm_eba[i][aeb->lnum] = aeb->pnum; 1271 + 1272 + for (j = 0; j < vol->reserved_pebs; j++) { 1273 + if (scan_eba[i][j] != fm_eba[i][j]) { 1274 + if (scan_eba[i][j] == UBI_LEB_UNMAPPED || 1275 + fm_eba[i][j] == UBI_LEB_UNMAPPED) 1276 + continue; 1277 + 1278 + ubi_err("LEB:%i:%i is PEB:%i instead of %i!", 1279 + vol->vol_id, i, fm_eba[i][j], 1280 + scan_eba[i][j]); 1281 + ubi_assert(0); 1282 + } 1283 + } 1284 + } 1285 + 1286 + out_free: 1287 + for (i = 0; i < num_volumes; i++) { 1288 + if (!ubi->volumes[i]) 1289 + continue; 1290 + 1291 + kfree(scan_eba[i]); 1292 + kfree(fm_eba[i]); 1293 + } 1294 + 1295 + kfree(scan_eba); 1296 + kfree(fm_eba); 1297 + return ret; 1214 1298 } 1215 1299 1216 1300 /**
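The core of the new `self_check_eba()` is a per-volume comparison of two EBA tables. Stripped of the kernel allocation boilerplate, it amounts to the following illustrative helper (not the kernel function itself).

```c
#define UBI_LEB_UNMAPPED (-1)

/* Count hard mismatches between the EBA table rebuilt from the fastmap and
 * the one rebuilt by a full scan. An entry only counts as a mismatch when
 * both sides claim a mapping and disagree; a mapping present on one side
 * only is tolerated, just as in self_check_eba(). */
int count_eba_mismatches(const int *scan_eba, const int *fm_eba,
			 int reserved_pebs)
{
	int j, bad = 0;

	for (j = 0; j < reserved_pebs; j++) {
		if (scan_eba[j] == fm_eba[j])
			continue;
		if (scan_eba[j] == UBI_LEB_UNMAPPED ||
		    fm_eba[j] == UBI_LEB_UNMAPPED)
			continue;
		bad++;
	}
	return bad;
}
```

In the kernel version any hard mismatch is printed with ubi_err() and trips ubi_assert(); the half-mapped entries are skipped because PEBs sitting in the fastmap pools are not yet reflected in the on-flash table.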
+1537
drivers/mtd/ubi/fastmap.c
/*
 * Copyright (c) 2012 Linutronix GmbH
 * Author: Richard Weinberger <richard@nod.at>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; version 2.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
 * the GNU General Public License for more details.
 *
 */

#include <linux/crc32.h>
#include "ubi.h"

/**
 * ubi_calc_fm_size - calculates the fastmap size in bytes for a UBI device.
 * @ubi: UBI device description object
 */
size_t ubi_calc_fm_size(struct ubi_device *ubi)
{
	size_t size;

	size = sizeof(struct ubi_fm_hdr) +
	       sizeof(struct ubi_fm_scan_pool) +
	       sizeof(struct ubi_fm_scan_pool) +
	       (ubi->peb_count * sizeof(struct ubi_fm_ec)) +
	       (sizeof(struct ubi_fm_eba) +
	       (ubi->peb_count * sizeof(__be32))) +
	       sizeof(struct ubi_fm_volhdr) * UBI_MAX_VOLUMES;
	return roundup(size, ubi->leb_size);
}

/**
 * new_fm_vhdr - allocate a new volume header for fastmap usage.
 * @ubi: UBI device description object
 * @vol_id: the VID of the new header
 *
 * Returns a new struct ubi_vid_hdr on success.
 * NULL indicates out of memory.
 */
static struct ubi_vid_hdr *new_fm_vhdr(struct ubi_device *ubi, int vol_id)
{
	struct ubi_vid_hdr *new;

	new = ubi_zalloc_vid_hdr(ubi, GFP_KERNEL);
	if (!new)
		goto out;

	new->vol_type = UBI_VID_DYNAMIC;
	new->vol_id = cpu_to_be32(vol_id);

	/* UBI implementations without fastmap support have to delete the
	 * fastmap.
	 */
	new->compat = UBI_COMPAT_DELETE;

out:
	return new;
}

/**
 * add_aeb - create and add an attach erase block to a given list.
 * @ai: UBI attach info object
 * @list: the target list
 * @pnum: PEB number of the new attach erase block
 * @ec: erase counter of the new PEB
 * @scrub: scrub this PEB after attaching
 *
 * Returns 0 on success, < 0 indicates an internal error.
 */
static int add_aeb(struct ubi_attach_info *ai, struct list_head *list,
		   int pnum, int ec, int scrub)
{
	struct ubi_ainf_peb *aeb;

	aeb = kmem_cache_alloc(ai->aeb_slab_cache, GFP_KERNEL);
	if (!aeb)
		return -ENOMEM;

	aeb->pnum = pnum;
	aeb->ec = ec;
	aeb->lnum = -1;
	aeb->scrub = scrub;
	aeb->copy_flag = aeb->sqnum = 0;

	ai->ec_sum += aeb->ec;
	ai->ec_count++;

	if (ai->max_ec < aeb->ec)
		ai->max_ec = aeb->ec;

	if (ai->min_ec > aeb->ec)
		ai->min_ec = aeb->ec;

	list_add_tail(&aeb->u.list, list);

	return 0;
}

/**
 * add_vol - create and add a new volume to ubi_attach_info.
 * @ai: ubi_attach_info object
 * @vol_id: VID of the new volume
 * @used_ebs: number of used EBS
 * @data_pad: data padding value of the new volume
 * @vol_type: volume type
 * @last_eb_bytes: number of bytes in the last LEB
 *
 * Returns the new struct ubi_ainf_volume on success.
 * NULL indicates an error.
 */
static struct ubi_ainf_volume *add_vol(struct ubi_attach_info *ai, int vol_id,
				       int used_ebs, int data_pad, u8 vol_type,
				       int last_eb_bytes)
{
	struct ubi_ainf_volume *av;
	struct rb_node **p = &ai->volumes.rb_node, *parent = NULL;

	while (*p) {
		parent = *p;
		av = rb_entry(parent, struct ubi_ainf_volume, rb);

		if (vol_id > av->vol_id)
			p = &(*p)->rb_left;
		else if (vol_id < av->vol_id)
			p = &(*p)->rb_right;
	}

	av = kmalloc(sizeof(struct ubi_ainf_volume), GFP_KERNEL);
	if (!av)
		goto out;

	av->highest_lnum = av->leb_count = 0;
	av->vol_id = vol_id;
	av->used_ebs = used_ebs;
	av->data_pad = data_pad;
	av->last_data_size = last_eb_bytes;
	av->compat = 0;
	av->vol_type = vol_type;
	av->root = RB_ROOT;

	dbg_bld("found volume (ID %i)", vol_id);

	rb_link_node(&av->rb, parent, p);
	rb_insert_color(&av->rb, &ai->volumes);

out:
	return av;
}

/**
 * assign_aeb_to_av - assigns a SEB to a given ainf_volume and removes it
 * from its original list.
 * @ai: ubi_attach_info object
 * @aeb: the to be assigned SEB
 * @av: target scan volume
 */
static void assign_aeb_to_av(struct ubi_attach_info *ai,
			     struct ubi_ainf_peb *aeb,
			     struct ubi_ainf_volume *av)
{
	struct ubi_ainf_peb *tmp_aeb;
	struct rb_node **p = &av->root.rb_node, *parent = NULL;

	while (*p) {
		parent = *p;

		tmp_aeb = rb_entry(parent, struct ubi_ainf_peb, u.rb);
		if (aeb->lnum != tmp_aeb->lnum) {
			if (aeb->lnum < tmp_aeb->lnum)
				p = &(*p)->rb_left;
			else
				p = &(*p)->rb_right;

			continue;
		} else
			break;
	}

	list_del(&aeb->u.list);
	av->leb_count++;

	rb_link_node(&aeb->u.rb, parent, p);
	rb_insert_color(&aeb->u.rb, &av->root);
}

/**
 * update_vol - inserts or updates a LEB which was found in a pool.
 * @ubi: the UBI device object
 * @ai: attach info object
 * @av: the volume this LEB belongs to
 * @new_vh: the volume header derived from new_aeb
 * @new_aeb: the AEB to be examined
 *
 * Returns 0 on success, < 0 indicates an internal error.
 */
static int update_vol(struct ubi_device *ubi, struct ubi_attach_info *ai,
		      struct ubi_ainf_volume *av, struct ubi_vid_hdr *new_vh,
		      struct ubi_ainf_peb *new_aeb)
{
	struct rb_node **p = &av->root.rb_node, *parent = NULL;
	struct ubi_ainf_peb *aeb, *victim;
	int cmp_res;

	while (*p) {
		parent = *p;
		aeb = rb_entry(parent, struct ubi_ainf_peb, u.rb);

		if (be32_to_cpu(new_vh->lnum) != aeb->lnum) {
			if (be32_to_cpu(new_vh->lnum) < aeb->lnum)
				p = &(*p)->rb_left;
			else
				p = &(*p)->rb_right;

			continue;
		}

		/* This case can happen if the fastmap gets written
		 * because of a volume change (creation, deletion, ..).
		 * Then a PEB can be within the persistent EBA and the pool.
		 */
		if (aeb->pnum == new_aeb->pnum) {
			ubi_assert(aeb->lnum == new_aeb->lnum);
			kmem_cache_free(ai->aeb_slab_cache, new_aeb);

			return 0;
		}

		cmp_res = ubi_compare_lebs(ubi, aeb, new_aeb->pnum, new_vh);
		if (cmp_res < 0)
			return cmp_res;

		/* new_aeb is newer */
		if (cmp_res & 1) {
			victim = kmem_cache_alloc(ai->aeb_slab_cache,
						  GFP_KERNEL);
			if (!victim)
				return -ENOMEM;

			victim->ec = aeb->ec;
			victim->pnum = aeb->pnum;
			list_add_tail(&victim->u.list, &ai->erase);

			if (av->highest_lnum == be32_to_cpu(new_vh->lnum))
				av->last_data_size =
					be32_to_cpu(new_vh->data_size);

			dbg_bld("vol %i: AEB %i's PEB %i is the newer",
				av->vol_id, aeb->lnum, new_aeb->pnum);

			aeb->ec = new_aeb->ec;
			aeb->pnum = new_aeb->pnum;
			aeb->copy_flag = new_vh->copy_flag;
			aeb->scrub = new_aeb->scrub;
			kmem_cache_free(ai->aeb_slab_cache, new_aeb);

		/* new_aeb is older */
		} else {
			dbg_bld("vol %i: AEB %i's PEB %i is old, dropping it",
				av->vol_id, aeb->lnum, new_aeb->pnum);
			list_add_tail(&new_aeb->u.list, &ai->erase);
		}

		return 0;
	}
	/* This LEB is new, let's add it to the volume */

	if (av->highest_lnum <= be32_to_cpu(new_vh->lnum)) {
		av->highest_lnum = be32_to_cpu(new_vh->lnum);
		av->last_data_size = be32_to_cpu(new_vh->data_size);
	}

	if (av->vol_type == UBI_STATIC_VOLUME)
		av->used_ebs = be32_to_cpu(new_vh->used_ebs);

	av->leb_count++;

	rb_link_node(&new_aeb->u.rb, parent, p);
	rb_insert_color(&new_aeb->u.rb, &av->root);

	return 0;
}

/**
 * process_pool_aeb - we found a non-empty PEB in a pool.
 * @ubi: UBI device object
 * @ai: attach info object
 * @new_vh: the volume header derived from new_aeb
 * @new_aeb: the AEB to be examined
 *
 * Returns 0 on success, < 0 indicates an internal error.
 */
static int process_pool_aeb(struct ubi_device *ubi, struct ubi_attach_info *ai,
			    struct ubi_vid_hdr *new_vh,
			    struct ubi_ainf_peb *new_aeb)
{
	struct ubi_ainf_volume *av, *tmp_av = NULL;
	struct rb_node **p = &ai->volumes.rb_node, *parent = NULL;
	int found = 0;

	if (be32_to_cpu(new_vh->vol_id) == UBI_FM_SB_VOLUME_ID ||
	    be32_to_cpu(new_vh->vol_id) == UBI_FM_DATA_VOLUME_ID) {
		kmem_cache_free(ai->aeb_slab_cache, new_aeb);

		return 0;
	}

	/* Find the volume this SEB belongs to */
	while (*p) {
		parent = *p;
		tmp_av = rb_entry(parent, struct ubi_ainf_volume, rb);

		if (be32_to_cpu(new_vh->vol_id) > tmp_av->vol_id)
			p = &(*p)->rb_left;
		else if (be32_to_cpu(new_vh->vol_id) < tmp_av->vol_id)
			p = &(*p)->rb_right;
		else {
			found = 1;
			break;
		}
	}

	if (found)
		av = tmp_av;
	else {
		ubi_err("orphaned volume in fastmap pool!");
		return UBI_BAD_FASTMAP;
	}

	ubi_assert(be32_to_cpu(new_vh->vol_id) == av->vol_id);

	return update_vol(ubi, ai, av, new_vh, new_aeb);
}

/**
 * unmap_peb - unmap a PEB.
 * If fastmap detects a free PEB in the pool it has to check whether
 * this PEB has been unmapped after writing the fastmap.
 *
 * @ai: UBI attach info object
 * @pnum: The PEB to be unmapped
 */
static void unmap_peb(struct ubi_attach_info *ai, int pnum)
{
	struct ubi_ainf_volume *av;
	struct rb_node *node, *node2;
	struct ubi_ainf_peb *aeb;

	for (node = rb_first(&ai->volumes); node; node = rb_next(node)) {
		av = rb_entry(node, struct ubi_ainf_volume, rb);

		for (node2 = rb_first(&av->root); node2;
		     node2 = rb_next(node2)) {
			aeb = rb_entry(node2, struct ubi_ainf_peb, u.rb);
			if (aeb->pnum == pnum) {
				rb_erase(&aeb->u.rb, &av->root);
				kmem_cache_free(ai->aeb_slab_cache, aeb);
				return;
			}
		}
	}
}

/**
 * scan_pool - scans a pool for changed (no longer empty) PEBs.
 * @ubi: UBI device object
 * @ai: attach info object
 * @pebs: an array of all PEB numbers in the to be scanned pool
 * @pool_size: size of the pool (number of entries in @pebs)
 * @max_sqnum: pointer to the maximal sequence number
 * @eba_orphans: list of PEBs which need to be scanned
 * @free: list of PEBs which are most likely free (and go into @ai->free)
 *
 * Returns 0 on success, if the pool is unusable UBI_BAD_FASTMAP is returned.
 * < 0 indicates an internal error.
 */
static int scan_pool(struct ubi_device *ubi, struct ubi_attach_info *ai,
		     int *pebs, int pool_size, unsigned long long *max_sqnum,
		     struct list_head *eba_orphans, struct list_head *free)
{
	struct ubi_vid_hdr *vh;
	struct ubi_ec_hdr *ech;
	struct ubi_ainf_peb *new_aeb, *tmp_aeb;
	int i, pnum, err, found_orphan, ret = 0;

	ech = kzalloc(ubi->ec_hdr_alsize, GFP_KERNEL);
	if (!ech)
		return -ENOMEM;

	vh = ubi_zalloc_vid_hdr(ubi, GFP_KERNEL);
	if (!vh) {
		kfree(ech);
		return -ENOMEM;
	}

	dbg_bld("scanning fastmap pool: size = %i", pool_size);

	/*
	 * Now scan all PEBs in the pool to find changes which have been made
	 * after the creation of the fastmap
	 */
	for (i = 0; i < pool_size; i++) {
		int scrub = 0;

		pnum = be32_to_cpu(pebs[i]);

		if (ubi_io_is_bad(ubi, pnum)) {
			ubi_err("bad PEB in fastmap pool!");
			ret = UBI_BAD_FASTMAP;
			goto out;
		}

		err = ubi_io_read_ec_hdr(ubi, pnum, ech, 0);
		if (err && err != UBI_IO_BITFLIPS) {
			ubi_err("unable to read EC header! PEB:%i err:%i",
				pnum, err);
			ret = err > 0 ? UBI_BAD_FASTMAP : err;
			goto out;
		} else if (err == UBI_IO_BITFLIPS)
			scrub = 1;

		if (be32_to_cpu(ech->image_seq) != ubi->image_seq) {
			ubi_err("bad image seq: 0x%x, expected: 0x%x",
				be32_to_cpu(ech->image_seq), ubi->image_seq);
			ret = UBI_BAD_FASTMAP;
			goto out;
		}

		err = ubi_io_read_vid_hdr(ubi, pnum, vh, 0);
		if (err == UBI_IO_FF || err == UBI_IO_FF_BITFLIPS) {
			unsigned long long ec = be64_to_cpu(ech->ec);
			unmap_peb(ai, pnum);
			dbg_bld("Adding PEB to free: %i", pnum);
			if (err == UBI_IO_FF_BITFLIPS)
				add_aeb(ai, free, pnum, ec, 1);
			else
				add_aeb(ai, free, pnum, ec, 0);
			continue;
		} else if (err == 0 || err == UBI_IO_BITFLIPS) {
			dbg_bld("Found non empty PEB:%i in pool", pnum);

			if (err == UBI_IO_BITFLIPS)
				scrub = 1;

			found_orphan = 0;
			list_for_each_entry(tmp_aeb, eba_orphans, u.list) {
				if (tmp_aeb->pnum == pnum) {
					found_orphan = 1;
					break;
				}
			}
			if (found_orphan) {
				list_del(&tmp_aeb->u.list);
				kmem_cache_free(ai->aeb_slab_cache, tmp_aeb);
			}

			new_aeb = kmem_cache_alloc(ai->aeb_slab_cache,
						   GFP_KERNEL);
			if (!new_aeb) {
				ret = -ENOMEM;
				goto out;
			}

			new_aeb->ec = be64_to_cpu(ech->ec);
			new_aeb->pnum = pnum;
			new_aeb->lnum = be32_to_cpu(vh->lnum);
			new_aeb->sqnum = be64_to_cpu(vh->sqnum);
			new_aeb->copy_flag = vh->copy_flag;
			new_aeb->scrub = scrub;

			if (*max_sqnum < new_aeb->sqnum)
				*max_sqnum = new_aeb->sqnum;

			err = process_pool_aeb(ubi, ai, vh, new_aeb);
			if (err) {
				ret = err > 0 ? UBI_BAD_FASTMAP : err;
				goto out;
			}
		} else {
			/* We are paranoid and fall back to scanning mode */
			ubi_err("fastmap pool PEBs contains damaged PEBs!");
			ret = err > 0 ? UBI_BAD_FASTMAP : err;
			goto out;
		}

	}

out:
	ubi_free_vid_hdr(ubi, vh);
	kfree(ech);
	return ret;
}

/**
 * count_fastmap_pebs - Counts the PEBs found by fastmap.
 * @ai: The UBI attach info object
 */
static int count_fastmap_pebs(struct ubi_attach_info *ai)
{
	struct ubi_ainf_peb *aeb;
	struct ubi_ainf_volume *av;
	struct rb_node *rb1, *rb2;
	int n = 0;

	list_for_each_entry(aeb, &ai->erase, u.list)
		n++;

	list_for_each_entry(aeb, &ai->free, u.list)
		n++;

	ubi_rb_for_each_entry(rb1, av, &ai->volumes, rb)
		ubi_rb_for_each_entry(rb2, aeb, &av->root, u.rb)
			n++;

	return n;
}

/**
 * ubi_attach_fastmap - creates ubi_attach_info from a fastmap.
 * @ubi: UBI device object
 * @ai: UBI attach info object
 * @fm: the fastmap to be attached
 *
 * Returns 0 on success, UBI_BAD_FASTMAP if the found fastmap was unusable.
 * < 0 indicates an internal error.
 */
static int ubi_attach_fastmap(struct ubi_device *ubi,
			      struct ubi_attach_info *ai,
			      struct ubi_fastmap_layout *fm)
{
	struct list_head used, eba_orphans, free;
	struct ubi_ainf_volume *av;
	struct ubi_ainf_peb *aeb, *tmp_aeb, *_tmp_aeb;
	struct ubi_ec_hdr *ech;
	struct ubi_fm_sb *fmsb;
	struct ubi_fm_hdr *fmhdr;
	struct ubi_fm_scan_pool *fmpl1, *fmpl2;
	struct ubi_fm_ec *fmec;
	struct ubi_fm_volhdr *fmvhdr;
	struct ubi_fm_eba *fm_eba;
	int ret, i, j, pool_size, wl_pool_size;
	size_t fm_pos = 0, fm_size = ubi->fm_size;
	unsigned long long max_sqnum = 0;
	void *fm_raw = ubi->fm_buf;

	INIT_LIST_HEAD(&used);
	INIT_LIST_HEAD(&free);
	INIT_LIST_HEAD(&eba_orphans);
	INIT_LIST_HEAD(&ai->corr);
	INIT_LIST_HEAD(&ai->free);
	INIT_LIST_HEAD(&ai->erase);
	INIT_LIST_HEAD(&ai->alien);
	ai->volumes = RB_ROOT;
	ai->min_ec = UBI_MAX_ERASECOUNTER;

	ai->aeb_slab_cache = kmem_cache_create("ubi_ainf_peb_slab",
					       sizeof(struct ubi_ainf_peb),
					       0, 0, NULL);
	if (!ai->aeb_slab_cache) {
		ret = -ENOMEM;
		goto fail;
	}

	fmsb = (struct ubi_fm_sb *)(fm_raw);
	ai->max_sqnum = fmsb->sqnum;
	fm_pos += sizeof(struct ubi_fm_sb);
	if (fm_pos >= fm_size)
		goto fail_bad;

	fmhdr = (struct ubi_fm_hdr *)(fm_raw + fm_pos);
	fm_pos += sizeof(*fmhdr);
	if (fm_pos >= fm_size)
		goto fail_bad;

	if (be32_to_cpu(fmhdr->magic) != UBI_FM_HDR_MAGIC) {
		ubi_err("bad fastmap header magic: 0x%x, expected: 0x%x",
			be32_to_cpu(fmhdr->magic), UBI_FM_HDR_MAGIC);
		goto fail_bad;
	}

	fmpl1 = (struct ubi_fm_scan_pool *)(fm_raw + fm_pos);
	fm_pos += sizeof(*fmpl1);
	if (fm_pos >= fm_size)
		goto fail_bad;
	if (be32_to_cpu(fmpl1->magic) != UBI_FM_POOL_MAGIC) {
		ubi_err("bad fastmap pool magic: 0x%x, expected: 0x%x",
			be32_to_cpu(fmpl1->magic), UBI_FM_POOL_MAGIC);
		goto fail_bad;
	}

	fmpl2 = (struct ubi_fm_scan_pool *)(fm_raw + fm_pos);
	fm_pos += sizeof(*fmpl2);
	if (fm_pos >= fm_size)
		goto fail_bad;
	if (be32_to_cpu(fmpl2->magic) != UBI_FM_POOL_MAGIC) {
		ubi_err("bad fastmap pool magic: 0x%x, expected: 0x%x",
			be32_to_cpu(fmpl2->magic), UBI_FM_POOL_MAGIC);
		goto fail_bad;
	}

	pool_size = be16_to_cpu(fmpl1->size);
	wl_pool_size = be16_to_cpu(fmpl2->size);
	fm->max_pool_size = be16_to_cpu(fmpl1->max_size);
	fm->max_wl_pool_size = be16_to_cpu(fmpl2->max_size);

	if (pool_size > UBI_FM_MAX_POOL_SIZE || pool_size < 0) {
		ubi_err("bad pool size: %i", pool_size);
		goto fail_bad;
	}

	if (wl_pool_size > UBI_FM_MAX_POOL_SIZE || wl_pool_size < 0) {
		ubi_err("bad WL pool size: %i", wl_pool_size);
		goto fail_bad;
	}

	if (fm->max_pool_size > UBI_FM_MAX_POOL_SIZE ||
	    fm->max_pool_size < 0) {
		ubi_err("bad maximal pool size: %i", fm->max_pool_size);
		goto fail_bad;
	}

	if (fm->max_wl_pool_size > UBI_FM_MAX_POOL_SIZE ||
	    fm->max_wl_pool_size < 0) {
		ubi_err("bad maximal WL pool size: %i", fm->max_wl_pool_size);
		goto fail_bad;
	}

	/* read EC values from free list */
	for (i = 0; i < be32_to_cpu(fmhdr->free_peb_count); i++) {
		fmec = (struct ubi_fm_ec *)(fm_raw + fm_pos);
		fm_pos += sizeof(*fmec);
		if (fm_pos >= fm_size)
			goto fail_bad;

		add_aeb(ai, &ai->free, be32_to_cpu(fmec->pnum),
			be32_to_cpu(fmec->ec), 0);
	}

	/* read EC values from used list */
	for (i = 0; i < be32_to_cpu(fmhdr->used_peb_count); i++) {
		fmec = (struct ubi_fm_ec *)(fm_raw + fm_pos);
		fm_pos += sizeof(*fmec);
		if (fm_pos >= fm_size)
			goto fail_bad;

		add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
			be32_to_cpu(fmec->ec), 0);
	}

	/* read EC values from scrub list */
	for (i = 0; i < be32_to_cpu(fmhdr->scrub_peb_count); i++) {
		fmec = (struct ubi_fm_ec *)(fm_raw + fm_pos);
		fm_pos += sizeof(*fmec);
		if (fm_pos >= fm_size)
			goto fail_bad;

		add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
			be32_to_cpu(fmec->ec), 1);
	}

	/* read EC values from erase list */
	for (i = 0; i < be32_to_cpu(fmhdr->erase_peb_count); i++) {
		fmec = (struct ubi_fm_ec *)(fm_raw + fm_pos);
		fm_pos += sizeof(*fmec);
		if (fm_pos >= fm_size)
			goto fail_bad;

		add_aeb(ai, &ai->erase, be32_to_cpu(fmec->pnum),
			be32_to_cpu(fmec->ec), 1);
	}

	ai->mean_ec = div_u64(ai->ec_sum, ai->ec_count);
	ai->bad_peb_count = be32_to_cpu(fmhdr->bad_peb_count);

	/* Iterate over all volumes and read their EBA table */
	for (i = 0; i < be32_to_cpu(fmhdr->vol_count); i++) {
		fmvhdr = (struct ubi_fm_volhdr *)(fm_raw + fm_pos);
		fm_pos += sizeof(*fmvhdr);
		if (fm_pos >= fm_size)
			goto fail_bad;

		if (be32_to_cpu(fmvhdr->magic) != UBI_FM_VHDR_MAGIC) {
			ubi_err("bad fastmap vol header magic: 0x%x, expected: 0x%x",
				be32_to_cpu(fmvhdr->magic), UBI_FM_VHDR_MAGIC);
			goto fail_bad;
		}

		av = add_vol(ai, be32_to_cpu(fmvhdr->vol_id),
			     be32_to_cpu(fmvhdr->used_ebs),
			     be32_to_cpu(fmvhdr->data_pad),
			     fmvhdr->vol_type,
			     be32_to_cpu(fmvhdr->last_eb_bytes));

		if (!av)
			goto fail_bad;

		ai->vols_found++;
		if (ai->highest_vol_id < be32_to_cpu(fmvhdr->vol_id))
			ai->highest_vol_id = be32_to_cpu(fmvhdr->vol_id);

		fm_eba = (struct ubi_fm_eba *)(fm_raw + fm_pos);
		fm_pos += sizeof(*fm_eba);
		fm_pos += (sizeof(__be32) * be32_to_cpu(fm_eba->reserved_pebs));
		if (fm_pos >= fm_size)
			goto fail_bad;

		if (be32_to_cpu(fm_eba->magic) != UBI_FM_EBA_MAGIC) {
			ubi_err("bad fastmap EBA header magic: 0x%x, expected: 0x%x",
				be32_to_cpu(fm_eba->magic), UBI_FM_EBA_MAGIC);
			goto fail_bad;
		}

		for (j = 0; j < be32_to_cpu(fm_eba->reserved_pebs); j++) {
			int pnum = be32_to_cpu(fm_eba->pnum[j]);

			if (pnum < 0)
				continue;

			aeb = NULL;
			list_for_each_entry(tmp_aeb, &used, u.list) {
				if (tmp_aeb->pnum == pnum)
					aeb = tmp_aeb;
			}

			/* This can happen if a PEB is already in an EBA known
			 * by this fastmap but the PEB itself is not in the used
			 * list.
			 * In this case the PEB can be within the fastmap pool
			 * or while writing the fastmap it was in the protection
			 * queue.
			 */
			if (!aeb) {
				aeb = kmem_cache_alloc(ai->aeb_slab_cache,
						       GFP_KERNEL);
				if (!aeb) {
					ret = -ENOMEM;

					goto fail;
				}

				aeb->lnum = j;
				aeb->pnum = be32_to_cpu(fm_eba->pnum[j]);
				aeb->ec = -1;
				aeb->scrub = aeb->copy_flag = aeb->sqnum = 0;
				list_add_tail(&aeb->u.list, &eba_orphans);
				continue;
			}

			aeb->lnum = j;

			if (av->highest_lnum <= aeb->lnum)
				av->highest_lnum = aeb->lnum;

			assign_aeb_to_av(ai, aeb, av);

			dbg_bld("inserting PEB:%i (LEB %i) to vol %i",
				aeb->pnum, aeb->lnum, av->vol_id);
		}

		ech = kzalloc(ubi->ec_hdr_alsize, GFP_KERNEL);
		if (!ech) {
			ret = -ENOMEM;
			goto fail;
		}

		list_for_each_entry_safe(tmp_aeb, _tmp_aeb, &eba_orphans,
					 u.list) {
			int err;

			if (ubi_io_is_bad(ubi, tmp_aeb->pnum)) {
				ubi_err("bad PEB in fastmap EBA orphan list");
				ret = UBI_BAD_FASTMAP;
				kfree(ech);
				goto fail;
			}

			err = ubi_io_read_ec_hdr(ubi, tmp_aeb->pnum, ech, 0);
			if (err && err != UBI_IO_BITFLIPS) {
				ubi_err("unable to read EC header! PEB:%i err:%i",
					tmp_aeb->pnum, err);
				ret = err > 0 ? UBI_BAD_FASTMAP : err;
				kfree(ech);

				goto fail;
			} else if (err == UBI_IO_BITFLIPS)
				tmp_aeb->scrub = 1;

			tmp_aeb->ec = be64_to_cpu(ech->ec);
			assign_aeb_to_av(ai, tmp_aeb, av);
		}

		kfree(ech);
	}

	ret = scan_pool(ubi, ai, fmpl1->pebs, pool_size, &max_sqnum,
			&eba_orphans, &free);
	if (ret)
		goto fail;

	ret = scan_pool(ubi, ai, fmpl2->pebs, wl_pool_size, &max_sqnum,
			&eba_orphans, &free);
	if (ret)
		goto fail;

	if (max_sqnum > ai->max_sqnum)
		ai->max_sqnum = max_sqnum;

	list_for_each_entry_safe(tmp_aeb, _tmp_aeb, &free, u.list) {
		list_del(&tmp_aeb->u.list);
		list_add_tail(&tmp_aeb->u.list, &ai->free);
	}

	/*
	 * If fastmap is leaking PEBs (must not happen), raise a
	 * fat warning and fall back to scanning mode.
	 * We do this here because in ubi_wl_init() it's too late
	 * and we cannot fall back to scanning.
	 */
	if (WARN_ON(count_fastmap_pebs(ai) != ubi->peb_count -
		    ai->bad_peb_count - fm->used_blocks))
		goto fail_bad;

	return 0;

fail_bad:
	ret = UBI_BAD_FASTMAP;
fail:
	return ret;
}

/**
 * ubi_scan_fastmap - scan the fastmap.
 * @ubi: UBI device object
 * @ai: UBI attach info to be filled
 * @fm_anchor: The fastmap starts at this PEB
 *
 * Returns 0 on success, UBI_NO_FASTMAP if no fastmap was found,
 * UBI_BAD_FASTMAP if one was found but is not usable.
 * < 0 indicates an internal error.
 */
int ubi_scan_fastmap(struct ubi_device *ubi, struct ubi_attach_info *ai,
		     int fm_anchor)
{
	struct ubi_fm_sb *fmsb, *fmsb2;
	struct ubi_vid_hdr *vh;
	struct ubi_ec_hdr *ech;
	struct ubi_fastmap_layout *fm;
	int i, used_blocks, pnum, ret = 0;
	size_t fm_size;
	__be32 crc, tmp_crc;
	unsigned long long sqnum = 0;

	mutex_lock(&ubi->fm_mutex);
	memset(ubi->fm_buf, 0, ubi->fm_size);

	fmsb = kmalloc(sizeof(*fmsb), GFP_KERNEL);
	if (!fmsb) {
		ret = -ENOMEM;
		goto out;
	}

	fm = kzalloc(sizeof(*fm), GFP_KERNEL);
	if (!fm) {
		ret = -ENOMEM;
		kfree(fmsb);
		goto out;
	}

	ret = ubi_io_read(ubi, fmsb, fm_anchor, ubi->leb_start, sizeof(*fmsb));
	if (ret && ret != UBI_IO_BITFLIPS)
		goto free_fm_sb;
	else if (ret == UBI_IO_BITFLIPS)
		fm->to_be_tortured[0] = 1;

	if (be32_to_cpu(fmsb->magic) != UBI_FM_SB_MAGIC) {
		ubi_err("bad super block magic: 0x%x, expected: 0x%x",
			be32_to_cpu(fmsb->magic), UBI_FM_SB_MAGIC);
		ret = UBI_BAD_FASTMAP;
		goto free_fm_sb;
	}

	if (fmsb->version != UBI_FM_FMT_VERSION) {
		ubi_err("bad fastmap version: %i, expected: %i",
			fmsb->version, UBI_FM_FMT_VERSION);
		ret = UBI_BAD_FASTMAP;
		goto free_fm_sb;
	}

	used_blocks = be32_to_cpu(fmsb->used_blocks);
	if (used_blocks > UBI_FM_MAX_BLOCKS || used_blocks < 1) {
		ubi_err("number of fastmap blocks is invalid: %i", used_blocks);
		ret = UBI_BAD_FASTMAP;
		goto free_fm_sb;
	}

	fm_size = ubi->leb_size * used_blocks;
	if (fm_size != ubi->fm_size) {
		ubi_err("bad fastmap size: %zi, expected: %zi", fm_size,
			ubi->fm_size);
		ret = UBI_BAD_FASTMAP;
		goto free_fm_sb;
	}

	ech = kzalloc(ubi->ec_hdr_alsize, GFP_KERNEL);
	if (!ech) {
		ret = -ENOMEM;
		goto free_fm_sb;
	}
	vh = ubi_zalloc_vid_hdr(ubi, GFP_KERNEL);
	if (!vh) {
		ret = -ENOMEM;
		goto free_hdr;
	}

	for (i = 0; i < used_blocks; i++) {
		pnum = be32_to_cpu(fmsb->block_loc[i]);

		if (ubi_io_is_bad(ubi, pnum)) {
			ret = UBI_BAD_FASTMAP;
			goto free_hdr;
		}

		ret = ubi_io_read_ec_hdr(ubi, pnum, ech, 0);
		if (ret && ret != UBI_IO_BITFLIPS) {
			ubi_err("unable to read fastmap block# %i EC (PEB: %i)",
				i, pnum);
			if (ret > 0)
				ret = UBI_BAD_FASTMAP;
			goto free_hdr;
		} else if (ret == UBI_IO_BITFLIPS)
			fm->to_be_tortured[i] = 1;

		if (!ubi->image_seq)
			ubi->image_seq = be32_to_cpu(ech->image_seq);

		if (be32_to_cpu(ech->image_seq) != ubi->image_seq) {
			ret = UBI_BAD_FASTMAP;
			goto free_hdr;
		}

		ret = ubi_io_read_vid_hdr(ubi, pnum, vh, 0);
		if (ret && ret != UBI_IO_BITFLIPS) {
			ubi_err("unable to read fastmap block# %i (PEB: %i)",
				i, pnum);
			goto free_hdr;
		}

		if (i == 0) {
			if (be32_to_cpu(vh->vol_id) != UBI_FM_SB_VOLUME_ID) {
				ubi_err("bad fastmap anchor vol_id: 0x%x, expected: 0x%x",
					be32_to_cpu(vh->vol_id),
					UBI_FM_SB_VOLUME_ID);
				ret = UBI_BAD_FASTMAP;
				goto free_hdr;
			}
		} else {
			if (be32_to_cpu(vh->vol_id) != UBI_FM_DATA_VOLUME_ID) {
				ubi_err("bad fastmap data vol_id: 0x%x, expected: 0x%x",
					be32_to_cpu(vh->vol_id),
					UBI_FM_DATA_VOLUME_ID);
				ret = UBI_BAD_FASTMAP;
				goto free_hdr;
			}
		}

		if (sqnum < be64_to_cpu(vh->sqnum))
			sqnum = be64_to_cpu(vh->sqnum);

		ret = ubi_io_read(ubi, ubi->fm_buf + (ubi->leb_size * i), pnum,
				  ubi->leb_start, ubi->leb_size);
		if (ret && ret != UBI_IO_BITFLIPS) {
			ubi_err("unable to read fastmap block# %i (PEB: %i, err: %i)",
				i, pnum, ret);
			goto free_hdr;
		}
	}

	kfree(fmsb);
	fmsb = NULL;

	fmsb2 = (struct ubi_fm_sb *)(ubi->fm_buf);
	tmp_crc = be32_to_cpu(fmsb2->data_crc);
	fmsb2->data_crc = 0;
	crc = crc32(UBI_CRC32_INIT, ubi->fm_buf, fm_size);
	if (crc != tmp_crc) {
		ubi_err("fastmap data CRC is invalid");
		ubi_err("CRC should be: 0x%x, calc: 0x%x", tmp_crc, crc);
		ret = UBI_BAD_FASTMAP;
		goto free_hdr;
	}

	fmsb2->sqnum = sqnum;

	fm->used_blocks = used_blocks;

	ret = ubi_attach_fastmap(ubi, ai, fm);
	if (ret) {
		if (ret > 0)
			ret = UBI_BAD_FASTMAP;
		goto free_hdr;
	}

	for (i = 0; i < used_blocks; i++) {
		struct ubi_wl_entry *e;

		e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
		if (!e) {
			while (i--)
				kfree(fm->e[i]);

			ret = -ENOMEM;
			goto free_hdr;
		}

		e->pnum = be32_to_cpu(fmsb2->block_loc[i]);
		e->ec = be32_to_cpu(fmsb2->block_ec[i]);
		fm->e[i] = e;
	}

	ubi->fm = fm;
	ubi->fm_pool.max_size = ubi->fm->max_pool_size;
	ubi->fm_wl_pool.max_size = ubi->fm->max_wl_pool_size;
	ubi_msg("attached by fastmap");
	ubi_msg("fastmap pool size: %d", ubi->fm_pool.max_size);
	ubi_msg("fastmap WL pool size: %d", ubi->fm_wl_pool.max_size);
	ubi->fm_disabled = 0;

	ubi_free_vid_hdr(ubi, vh);
	kfree(ech);
out:
	mutex_unlock(&ubi->fm_mutex);
	if (ret == UBI_BAD_FASTMAP)
		ubi_err("Attach by fastmap failed, doing a full scan!");
	return ret;

free_hdr:
	ubi_free_vid_hdr(ubi, vh);
	kfree(ech);
free_fm_sb:
	kfree(fmsb);
	kfree(fm);
	goto out;
}

/**
 * ubi_write_fastmap - writes a fastmap.
 * @ubi: UBI device object
 * @new_fm: the to be written fastmap
 *
 * Returns 0 on success, < 0 indicates an internal error.
 */
static int ubi_write_fastmap(struct ubi_device *ubi,
			     struct ubi_fastmap_layout *new_fm)
{
	size_t fm_pos = 0;
	void *fm_raw;
	struct ubi_fm_sb *fmsb;
	struct ubi_fm_hdr *fmh;
	struct ubi_fm_scan_pool *fmpl1, *fmpl2;
	struct ubi_fm_ec *fec;
	struct ubi_fm_volhdr *fvh;
	struct ubi_fm_eba *feba;
	struct rb_node *node;
	struct ubi_wl_entry *wl_e;
	struct ubi_volume *vol;
	struct ubi_vid_hdr *avhdr, *dvhdr;
	struct ubi_work *ubi_wrk;
	int ret, i, j, free_peb_count, used_peb_count, vol_count;
	int scrub_peb_count, erase_peb_count;

	fm_raw = ubi->fm_buf;
	memset(ubi->fm_buf, 0, ubi->fm_size);

	avhdr = new_fm_vhdr(ubi, UBI_FM_SB_VOLUME_ID);
	if (!avhdr) {
		ret = -ENOMEM;
		goto out;
	}

	dvhdr = new_fm_vhdr(ubi, UBI_FM_DATA_VOLUME_ID);
	if (!dvhdr) {
		ret = -ENOMEM;
		goto out_kfree;
	}

	spin_lock(&ubi->volumes_lock);
	spin_lock(&ubi->wl_lock);

	fmsb = (struct ubi_fm_sb *)fm_raw;
	fm_pos += sizeof(*fmsb);
	ubi_assert(fm_pos <= ubi->fm_size);

	fmh = (struct ubi_fm_hdr *)(fm_raw + fm_pos);
	fm_pos += sizeof(*fmh);
	ubi_assert(fm_pos <= ubi->fm_size);

	fmsb->magic = cpu_to_be32(UBI_FM_SB_MAGIC);
	fmsb->version = UBI_FM_FMT_VERSION;
	fmsb->used_blocks = cpu_to_be32(new_fm->used_blocks);
	/* the max sqnum will be filled in while *reading* the fastmap */
	fmsb->sqnum = 0;

	fmh->magic = cpu_to_be32(UBI_FM_HDR_MAGIC);
	free_peb_count = 0;
	used_peb_count = 0;
	scrub_peb_count = 0;
	erase_peb_count = 0;
	vol_count = 0;

	fmpl1 = (struct ubi_fm_scan_pool *)(fm_raw + fm_pos);
	fm_pos += sizeof(*fmpl1);
	fmpl1->magic = cpu_to_be32(UBI_FM_POOL_MAGIC);
	fmpl1->size = cpu_to_be16(ubi->fm_pool.size);
	fmpl1->max_size = cpu_to_be16(ubi->fm_pool.max_size);

	for (i = 0; i < ubi->fm_pool.size; i++)
		fmpl1->pebs[i] = cpu_to_be32(ubi->fm_pool.pebs[i]);

	fmpl2 = (struct ubi_fm_scan_pool *)(fm_raw + fm_pos);
	fm_pos += sizeof(*fmpl2);
	fmpl2->magic = cpu_to_be32(UBI_FM_POOL_MAGIC);
	fmpl2->size = cpu_to_be16(ubi->fm_wl_pool.size);
	fmpl2->max_size = cpu_to_be16(ubi->fm_wl_pool.max_size);

	for (i = 0; i < ubi->fm_wl_pool.size; i++)
		fmpl2->pebs[i] = cpu_to_be32(ubi->fm_wl_pool.pebs[i]);

	for (node = rb_first(&ubi->free); node; node = rb_next(node)) {
		wl_e = rb_entry(node, struct ubi_wl_entry, u.rb);
		fec = (struct ubi_fm_ec *)(fm_raw + fm_pos);

		fec->pnum = cpu_to_be32(wl_e->pnum);
		fec->ec = cpu_to_be32(wl_e->ec);

		free_peb_count++;
		fm_pos += sizeof(*fec);
		ubi_assert(fm_pos <= ubi->fm_size);
	}
	fmh->free_peb_count = cpu_to_be32(free_peb_count);

	for (node = rb_first(&ubi->used); node; node = rb_next(node)) {
		wl_e = rb_entry(node, struct ubi_wl_entry, u.rb);
		fec = (struct ubi_fm_ec *)(fm_raw + fm_pos);

		fec->pnum = cpu_to_be32(wl_e->pnum);
		fec->ec = cpu_to_be32(wl_e->ec);

		used_peb_count++;
		fm_pos += sizeof(*fec);
		ubi_assert(fm_pos <= ubi->fm_size);
	}
	fmh->used_peb_count = cpu_to_be32(used_peb_count);

	for (node = rb_first(&ubi->scrub); node; node = rb_next(node)) {
		wl_e = rb_entry(node, struct ubi_wl_entry, u.rb);
		fec = (struct ubi_fm_ec *)(fm_raw + fm_pos);

		fec->pnum = cpu_to_be32(wl_e->pnum);
		fec->ec = cpu_to_be32(wl_e->ec);

		scrub_peb_count++;
		fm_pos += sizeof(*fec);
		ubi_assert(fm_pos <= ubi->fm_size);
	}
	fmh->scrub_peb_count = cpu_to_be32(scrub_peb_count);

	list_for_each_entry(ubi_wrk, &ubi->works, list) {
		if (ubi_is_erase_work(ubi_wrk)) {
			wl_e = ubi_wrk->e;
			ubi_assert(wl_e);

			fec = (struct ubi_fm_ec *)(fm_raw + fm_pos);

			fec->pnum = cpu_to_be32(wl_e->pnum);
			fec->ec = cpu_to_be32(wl_e->ec);

			erase_peb_count++;
			fm_pos += sizeof(*fec);
			ubi_assert(fm_pos <= ubi->fm_size);
		}
	}
	fmh->erase_peb_count = cpu_to_be32(erase_peb_count);

	for (i = 0; i < UBI_MAX_VOLUMES + UBI_INT_VOL_COUNT; i++) {
		vol = ubi->volumes[i];

		if (!vol)
			continue;

		vol_count++;

		fvh = (struct ubi_fm_volhdr *)(fm_raw + fm_pos);
		fm_pos += sizeof(*fvh);
		ubi_assert(fm_pos <= ubi->fm_size);

		fvh->magic = cpu_to_be32(UBI_FM_VHDR_MAGIC);
		fvh->vol_id = cpu_to_be32(vol->vol_id);
		fvh->vol_type = vol->vol_type;
		fvh->used_ebs = cpu_to_be32(vol->used_ebs);
		fvh->data_pad = cpu_to_be32(vol->data_pad);
		fvh->last_eb_bytes = cpu_to_be32(vol->last_eb_bytes);

		ubi_assert(vol->vol_type == UBI_DYNAMIC_VOLUME ||
			   vol->vol_type == UBI_STATIC_VOLUME);

		feba = (struct ubi_fm_eba *)(fm_raw + fm_pos);
		fm_pos += sizeof(*feba) + (sizeof(__be32) * vol->reserved_pebs);
		ubi_assert(fm_pos <= ubi->fm_size);

		for (j = 0; j < vol->reserved_pebs; j++)
			feba->pnum[j] = cpu_to_be32(vol->eba_tbl[j]);

		feba->reserved_pebs = cpu_to_be32(j);
		feba->magic = cpu_to_be32(UBI_FM_EBA_MAGIC);
	}
	fmh->vol_count = cpu_to_be32(vol_count);
	fmh->bad_peb_count = cpu_to_be32(ubi->bad_peb_count);

	avhdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi));
	avhdr->lnum = 0;

	spin_unlock(&ubi->wl_lock);
	spin_unlock(&ubi->volumes_lock);
dbg_bld("writing fastmap SB to PEB %i", new_fm->e[0]->pnum); 1239 + ret = ubi_io_write_vid_hdr(ubi, new_fm->e[0]->pnum, avhdr); 1240 + if (ret) { 1241 + ubi_err("unable to write vid_hdr to fastmap SB!"); 1242 + goto out_kfree; 1243 + } 1244 + 1245 + for (i = 0; i < new_fm->used_blocks; i++) { 1246 + fmsb->block_loc[i] = cpu_to_be32(new_fm->e[i]->pnum); 1247 + fmsb->block_ec[i] = cpu_to_be32(new_fm->e[i]->ec); 1248 + } 1249 + 1250 + fmsb->data_crc = 0; 1251 + fmsb->data_crc = cpu_to_be32(crc32(UBI_CRC32_INIT, fm_raw, 1252 + ubi->fm_size)); 1253 + 1254 + for (i = 1; i < new_fm->used_blocks; i++) { 1255 + dvhdr->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 1256 + dvhdr->lnum = cpu_to_be32(i); 1257 + dbg_bld("writing fastmap data to PEB %i sqnum %llu", 1258 + new_fm->e[i]->pnum, be64_to_cpu(dvhdr->sqnum)); 1259 + ret = ubi_io_write_vid_hdr(ubi, new_fm->e[i]->pnum, dvhdr); 1260 + if (ret) { 1261 + ubi_err("unable to write vid_hdr to PEB %i!", 1262 + new_fm->e[i]->pnum); 1263 + goto out_kfree; 1264 + } 1265 + } 1266 + 1267 + for (i = 0; i < new_fm->used_blocks; i++) { 1268 + ret = ubi_io_write(ubi, fm_raw + (i * ubi->leb_size), 1269 + new_fm->e[i]->pnum, ubi->leb_start, ubi->leb_size); 1270 + if (ret) { 1271 + ubi_err("unable to write fastmap to PEB %i!", 1272 + new_fm->e[i]->pnum); 1273 + goto out_kfree; 1274 + } 1275 + } 1276 + 1277 + ubi_assert(new_fm); 1278 + ubi->fm = new_fm; 1279 + 1280 + dbg_bld("fastmap written!"); 1281 + 1282 + out_kfree: 1283 + ubi_free_vid_hdr(ubi, avhdr); 1284 + ubi_free_vid_hdr(ubi, dvhdr); 1285 + out: 1286 + return ret; 1287 + } 1288 + 1289 + /** 1290 + * erase_block - Manually erase a PEB. 1291 + * @ubi: UBI device object 1292 + * @pnum: PEB to be erased 1293 + * 1294 + * Returns the new EC value on success, < 0 indicates an internal error. 
1295 + */ 1296 + static int erase_block(struct ubi_device *ubi, int pnum) 1297 + { 1298 + int ret; 1299 + struct ubi_ec_hdr *ec_hdr; 1300 + long long ec; 1301 + 1302 + ec_hdr = kzalloc(ubi->ec_hdr_alsize, GFP_KERNEL); 1303 + if (!ec_hdr) 1304 + return -ENOMEM; 1305 + 1306 + ret = ubi_io_read_ec_hdr(ubi, pnum, ec_hdr, 0); 1307 + if (ret < 0) 1308 + goto out; 1309 + else if (ret && ret != UBI_IO_BITFLIPS) { 1310 + ret = -EINVAL; 1311 + goto out; 1312 + } 1313 + 1314 + ret = ubi_io_sync_erase(ubi, pnum, 0); 1315 + if (ret < 0) 1316 + goto out; 1317 + 1318 + ec = be64_to_cpu(ec_hdr->ec); 1319 + ec += ret; 1320 + if (ec > UBI_MAX_ERASECOUNTER) { 1321 + ret = -EINVAL; 1322 + goto out; 1323 + } 1324 + 1325 + ec_hdr->ec = cpu_to_be64(ec); 1326 + ret = ubi_io_write_ec_hdr(ubi, pnum, ec_hdr); 1327 + if (ret < 0) 1328 + goto out; 1329 + 1330 + ret = ec; 1331 + out: 1332 + kfree(ec_hdr); 1333 + return ret; 1334 + } 1335 + 1336 + /** 1337 + * invalidate_fastmap - destroys a fastmap. 1338 + * @ubi: UBI device object 1339 + * @fm: the fastmap to be destroyed 1340 + * 1341 + * Returns 0 on success, < 0 indicates an internal error. 
1342 + */ 1343 + static int invalidate_fastmap(struct ubi_device *ubi, 1344 + struct ubi_fastmap_layout *fm) 1345 + { 1346 + int ret, i; 1347 + struct ubi_vid_hdr *vh; 1348 + 1349 + ret = erase_block(ubi, fm->e[0]->pnum); 1350 + if (ret < 0) 1351 + return ret; 1352 + 1353 + vh = new_fm_vhdr(ubi, UBI_FM_SB_VOLUME_ID); 1354 + if (!vh) 1355 + return -ENOMEM; 1356 + 1357 + /* deleting the current fastmap SB is not enough, an old SB may exist, 1358 + * so create a (corrupted) SB such that fastmap will find it and fall 1359 + * back to scanning mode in any case */ 1360 + vh->sqnum = cpu_to_be64(ubi_next_sqnum(ubi)); 1361 + ret = ubi_io_write_vid_hdr(ubi, fm->e[0]->pnum, vh); 1362 + 1363 + for (i = 0; i < fm->used_blocks; i++) 1364 + ubi_wl_put_fm_peb(ubi, fm->e[i], i, fm->to_be_tortured[i]); 1365 + 1366 + return ret; 1367 + } 1368 + 1369 + /** 1370 + * ubi_update_fastmap - will be called by UBI if a volume changes or 1371 + * a fastmap pool becomes full. 1372 + * @ubi: UBI device object 1373 + * 1374 + * Returns 0 on success, < 0 indicates an internal error. 
1375 + */ 1376 + int ubi_update_fastmap(struct ubi_device *ubi) 1377 + { 1378 + int ret, i; 1379 + struct ubi_fastmap_layout *new_fm, *old_fm; 1380 + struct ubi_wl_entry *tmp_e; 1381 + 1382 + mutex_lock(&ubi->fm_mutex); 1383 + 1384 + ubi_refill_pools(ubi); 1385 + 1386 + if (ubi->ro_mode || ubi->fm_disabled) { 1387 + mutex_unlock(&ubi->fm_mutex); 1388 + return 0; 1389 + } 1390 + 1391 + ret = ubi_ensure_anchor_pebs(ubi); 1392 + if (ret) { 1393 + mutex_unlock(&ubi->fm_mutex); 1394 + return ret; 1395 + } 1396 + 1397 + new_fm = kzalloc(sizeof(*new_fm), GFP_KERNEL); 1398 + if (!new_fm) { 1399 + mutex_unlock(&ubi->fm_mutex); 1400 + return -ENOMEM; 1401 + } 1402 + 1403 + new_fm->used_blocks = ubi->fm_size / ubi->leb_size; 1404 + 1405 + for (i = 0; i < new_fm->used_blocks; i++) { 1406 + new_fm->e[i] = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL); 1407 + if (!new_fm->e[i]) { 1408 + while (i--) 1409 + kfree(new_fm->e[i]); 1410 + 1411 + kfree(new_fm); 1412 + mutex_unlock(&ubi->fm_mutex); 1413 + return -ENOMEM; 1414 + } 1415 + } 1416 + 1417 + old_fm = ubi->fm; 1418 + ubi->fm = NULL; 1419 + 1420 + if (new_fm->used_blocks > UBI_FM_MAX_BLOCKS) { 1421 + ubi_err("fastmap too large"); 1422 + ret = -ENOSPC; 1423 + goto err; 1424 + } 1425 + 1426 + for (i = 1; i < new_fm->used_blocks; i++) { 1427 + spin_lock(&ubi->wl_lock); 1428 + tmp_e = ubi_wl_get_fm_peb(ubi, 0); 1429 + spin_unlock(&ubi->wl_lock); 1430 + 1431 + if (!tmp_e && !old_fm) { 1432 + int j; 1433 + ubi_err("could not get any free erase block"); 1434 + 1435 + for (j = 1; j < i; j++) 1436 + ubi_wl_put_fm_peb(ubi, new_fm->e[j], j, 0); 1437 + 1438 + ret = -ENOSPC; 1439 + goto err; 1440 + } else if (!tmp_e && old_fm) { 1441 + ret = erase_block(ubi, old_fm->e[i]->pnum); 1442 + if (ret < 0) { 1443 + int j; 1444 + 1445 + for (j = 1; j < i; j++) 1446 + ubi_wl_put_fm_peb(ubi, new_fm->e[j], 1447 + j, 0); 1448 + 1449 + ubi_err("could not erase old fastmap PEB"); 1450 + goto err; 1451 + } 1452 + 1453 + new_fm->e[i]->pnum = 
old_fm->e[i]->pnum; 1454 + new_fm->e[i]->ec = old_fm->e[i]->ec; 1455 + } else { 1456 + new_fm->e[i]->pnum = tmp_e->pnum; 1457 + new_fm->e[i]->ec = tmp_e->ec; 1458 + 1459 + if (old_fm) 1460 + ubi_wl_put_fm_peb(ubi, old_fm->e[i], i, 1461 + old_fm->to_be_tortured[i]); 1462 + } 1463 + } 1464 + 1465 + spin_lock(&ubi->wl_lock); 1466 + tmp_e = ubi_wl_get_fm_peb(ubi, 1); 1467 + spin_unlock(&ubi->wl_lock); 1468 + 1469 + if (old_fm) { 1470 + /* no fresh anchor PEB was found, reuse the old one */ 1471 + if (!tmp_e) { 1472 + ret = erase_block(ubi, old_fm->e[0]->pnum); 1473 + if (ret < 0) { 1474 + int i; 1475 + ubi_err("could not erase old anchor PEB"); 1476 + 1477 + for (i = 1; i < new_fm->used_blocks; i++) 1478 + ubi_wl_put_fm_peb(ubi, new_fm->e[i], 1479 + i, 0); 1480 + goto err; 1481 + } 1482 + 1483 + new_fm->e[0]->pnum = old_fm->e[0]->pnum; 1484 + new_fm->e[0]->ec = ret; 1485 + } else { 1486 + /* we've got a new anchor PEB, return the old one */ 1487 + ubi_wl_put_fm_peb(ubi, old_fm->e[0], 0, 1488 + old_fm->to_be_tortured[0]); 1489 + 1490 + new_fm->e[0]->pnum = tmp_e->pnum; 1491 + new_fm->e[0]->ec = tmp_e->ec; 1492 + } 1493 + } else { 1494 + if (!tmp_e) { 1495 + int i; 1496 + ubi_err("could not find any anchor PEB"); 1497 + 1498 + for (i = 1; i < new_fm->used_blocks; i++) 1499 + ubi_wl_put_fm_peb(ubi, new_fm->e[i], i, 0); 1500 + 1501 + ret = -ENOSPC; 1502 + goto err; 1503 + } 1504 + 1505 + new_fm->e[0]->pnum = tmp_e->pnum; 1506 + new_fm->e[0]->ec = tmp_e->ec; 1507 + } 1508 + 1509 + down_write(&ubi->work_sem); 1510 + down_write(&ubi->fm_sem); 1511 + ret = ubi_write_fastmap(ubi, new_fm); 1512 + up_write(&ubi->fm_sem); 1513 + up_write(&ubi->work_sem); 1514 + 1515 + if (ret) 1516 + goto err; 1517 + 1518 + out_unlock: 1519 + mutex_unlock(&ubi->fm_mutex); 1520 + kfree(old_fm); 1521 + return ret; 1522 + 1523 + err: 1524 + kfree(new_fm); 1525 + 1526 + ubi_warn("Unable to write new fastmap, err=%i", ret); 1527 + 1528 + ret = 0; 1529 + if (old_fm) { 1530 + ret = 
invalidate_fastmap(ubi, old_fm); 1531 + if (ret < 0) 1532 + ubi_err("Unable to invalidate current fastmap!"); 1533 + else if (ret) 1534 + ret = 0; 1535 + } 1536 + goto out_unlock; 1537 + }
+137
drivers/mtd/ubi/ubi-media.h
··· 375 375 __be32 crc; 376 376 } __packed; 377 377 378 + /* UBI fastmap on-flash data structures */ 379 + 380 + #define UBI_FM_SB_VOLUME_ID (UBI_LAYOUT_VOLUME_ID + 1) 381 + #define UBI_FM_DATA_VOLUME_ID (UBI_LAYOUT_VOLUME_ID + 2) 382 + 383 + /* fastmap on-flash data structure format version */ 384 + #define UBI_FM_FMT_VERSION 1 385 + 386 + #define UBI_FM_SB_MAGIC 0x7B11D69F 387 + #define UBI_FM_HDR_MAGIC 0xD4B82EF7 388 + #define UBI_FM_VHDR_MAGIC 0xFA370ED1 389 + #define UBI_FM_POOL_MAGIC 0x67AF4D08 390 + #define UBI_FM_EBA_MAGIC 0xf0c040a8 391 + 392 + /* A fastmap super block can be located between PEB 0 and 393 + * UBI_FM_MAX_START */ 394 + #define UBI_FM_MAX_START 64 395 + 396 + /* A fastmap can use up to UBI_FM_MAX_BLOCKS PEBs */ 397 + #define UBI_FM_MAX_BLOCKS 32 398 + 399 + /* 5% of the total number of PEBs have to be scanned while attaching 400 + * from a fastmap. 401 + * But the size of this pool is limited to be between UBI_FM_MIN_POOL_SIZE and 402 + * UBI_FM_MAX_POOL_SIZE */ 403 + #define UBI_FM_MIN_POOL_SIZE 8 404 + #define UBI_FM_MAX_POOL_SIZE 256 405 + 406 + #define UBI_FM_WL_POOL_SIZE 25 407 + 408 + /** 409 + * struct ubi_fm_sb - UBI fastmap super block 410 + * @magic: fastmap super block magic number (%UBI_FM_SB_MAGIC) 411 + * @version: format version of this fastmap 412 + * @data_crc: CRC over the fastmap data 413 + * @used_blocks: number of PEBs used by this fastmap 414 + * @block_loc: an array containing the location of all PEBs of the fastmap 415 + * @block_ec: the erase counter of each used PEB 416 + * @sqnum: highest sequence number value at the time the fastmap was taken 417 + * 418 + */ 419 + struct ubi_fm_sb { 420 + __be32 magic; 421 + __u8 version; 422 + __u8 padding1[3]; 423 + __be32 data_crc; 424 + __be32 used_blocks; 425 + __be32 block_loc[UBI_FM_MAX_BLOCKS]; 426 + __be32 block_ec[UBI_FM_MAX_BLOCKS]; 427 + __be64 sqnum; 428 + __u8 padding2[32]; 429 + } __packed; 430 + 431 + /** 432 + * struct ubi_fm_hdr - header of the fastmap data 
set 433 + * @magic: fastmap header magic number (%UBI_FM_HDR_MAGIC) 434 + * @free_peb_count: number of free PEBs known by this fastmap 435 + * @used_peb_count: number of used PEBs known by this fastmap 436 + * @scrub_peb_count: number of to be scrubbed PEBs known by this fastmap 437 + * @bad_peb_count: number of bad PEBs known by this fastmap 438 + * @erase_peb_count: number of PEBs which have to be erased 439 + * @vol_count: number of UBI volumes known by this fastmap 440 + */ 441 + struct ubi_fm_hdr { 442 + __be32 magic; 443 + __be32 free_peb_count; 444 + __be32 used_peb_count; 445 + __be32 scrub_peb_count; 446 + __be32 bad_peb_count; 447 + __be32 erase_peb_count; 448 + __be32 vol_count; 449 + __u8 padding[4]; 450 + } __packed; 451 + 452 + /* struct ubi_fm_hdr is followed by two struct ubi_fm_scan_pool */ 453 + 454 + /** 455 + * struct ubi_fm_scan_pool - Fastmap pool PEBs to be scanned while attaching 456 + * @magic: pool magic number (%UBI_FM_POOL_MAGIC) 457 + * @size: current pool size 458 + * @max_size: maximal pool size 459 + * @pebs: an array containing the location of all PEBs in this pool 460 + */ 461 + struct ubi_fm_scan_pool { 462 + __be32 magic; 463 + __be16 size; 464 + __be16 max_size; 465 + __be32 pebs[UBI_FM_MAX_POOL_SIZE]; 466 + __be32 padding[4]; 467 + } __packed; 468 + 469 + /* ubi_fm_scan_pool is followed by nfree+nused struct ubi_fm_ec records */ 470 + 471 + /** 472 + * struct ubi_fm_ec - stores the erase counter of a PEB 473 + * @pnum: PEB number 474 + * @ec: ec of this PEB 475 + */ 476 + struct ubi_fm_ec { 477 + __be32 pnum; 478 + __be32 ec; 479 + } __packed; 480 + 481 + /** 482 + * struct ubi_fm_volhdr - Fastmap volume header 483 + * it identifies the start of an EBA table 484 + * @magic: Fastmap volume header magic number (%UBI_FM_VHDR_MAGIC) 485 + * @vol_id: volume id of the fastmapped volume 486 + * @vol_type: type of the fastmapped volume 487 + * @data_pad: data_pad value of the fastmapped volume 488 + * @used_ebs: number of used LEBs 
within this volume 489 + * @last_eb_bytes: number of bytes used in the last LEB 490 + */ 491 + struct ubi_fm_volhdr { 492 + __be32 magic; 493 + __be32 vol_id; 494 + __u8 vol_type; 495 + __u8 padding1[3]; 496 + __be32 data_pad; 497 + __be32 used_ebs; 498 + __be32 last_eb_bytes; 499 + __u8 padding2[8]; 500 + } __packed; 501 + 502 + /* struct ubi_fm_volhdr is followed by one struct ubi_fm_eba record */ 503 + 504 + /** 505 + * struct ubi_fm_eba - denotes an association between a PEB and LEB 506 + * @magic: EBA table magic number 507 + * @reserved_pebs: number of table entries 508 + * @pnum: PEB number of LEB (LEB is the index) 509 + */ 510 + struct ubi_fm_eba { 511 + __be32 magic; 512 + __be32 reserved_pebs; 513 + __be32 pnum[0]; 514 + } __packed; 378 515 #endif /* !__UBI_MEDIA_H__ */
+116 -2
drivers/mtd/ubi/ubi.h
··· 133 133 MOVE_RETRY, 134 134 }; 135 135 136 + /* 137 + * Return codes of the fastmap sub-system 138 + * 139 + * UBI_NO_FASTMAP: No fastmap super block was found 140 + * UBI_BAD_FASTMAP: A fastmap was found but it's unusable 141 + */ 142 + enum { 143 + UBI_NO_FASTMAP = 1, 144 + UBI_BAD_FASTMAP, 145 + }; 146 + 136 147 /** 137 148 * struct ubi_wl_entry - wear-leveling entry. 138 149 * @u.rb: link in the corresponding (free/used) RB-tree ··· 208 197 }; 209 198 210 199 struct ubi_volume_desc; 200 + 201 + /** 202 + * struct ubi_fastmap_layout - in-memory fastmap data structure. 203 + * @e: PEBs used by the current fastmap 204 + * @to_be_tortured: if non-zero, this PEB has to be tortured 205 + * @used_blocks: number of used PEBs 206 + * @max_pool_size: maximal size of the user pool 207 + * @max_wl_pool_size: maximal size of the pool used by the WL sub-system 208 + */ 209 + struct ubi_fastmap_layout { 210 + struct ubi_wl_entry *e[UBI_FM_MAX_BLOCKS]; 211 + int to_be_tortured[UBI_FM_MAX_BLOCKS]; 212 + int used_blocks; 213 + int max_pool_size; 214 + int max_wl_pool_size; 215 + }; 216 + 217 + /** 218 + * struct ubi_fm_pool - in-memory fastmap pool 219 + * @pebs: PEBs in this pool 220 + * @used: number of used PEBs 221 + * @size: total number of PEBs in this pool 222 + * @max_size: maximal size of the pool 223 + * 224 + * A pool gets filled with up to @max_size PEBs. 225 + * If all PEBs within the pool are used, a new fastmap will be written 226 + * to the flash and the pool gets refilled with empty PEBs. 227 + * 228 + */ 229 + struct ubi_fm_pool { 230 + int pebs[UBI_FM_MAX_POOL_SIZE]; 231 + int used; 232 + int size; 233 + int max_size; 234 + }; 211 235 212 236 /** 213 237 * struct ubi_volume - UBI volume description data structure. 
··· 379 333 * @ltree: the lock tree 380 334 * @alc_mutex: serializes "atomic LEB change" operations 381 335 * 336 + * @fm_disabled: non-zero if fastmap is disabled (default) 337 + * @fm: in-memory data structure of the currently used fastmap 338 + * @fm_pool: in-memory data structure of the fastmap pool 339 + * @fm_wl_pool: in-memory data structure of the fastmap pool used by the WL 340 + * sub-system 341 + * @fm_mutex: serializes ubi_update_fastmap() and protects @fm_buf 342 + * @fm_buf: vmalloc()'d buffer which holds the raw fastmap 343 + * @fm_size: fastmap size in bytes 344 + * @fm_sem: allows ubi_update_fastmap() to block EBA table changes 345 + * @fm_work: fastmap work queue 346 + * 382 347 * @used: RB-tree of used physical eraseblocks 383 348 * @erroneous: RB-tree of erroneous used physical eraseblocks 384 349 * @free: RB-tree of free physical eraseblocks 350 + * @free_count: Contains the number of elements in @free 385 351 * @scrub: RB-tree of physical eraseblocks which need scrubbing 386 352 * @pq: protection queue (contain physical eraseblocks which are temporarily 387 353 * protected from the wear-leveling worker) ··· 484 426 struct rb_root ltree; 485 427 struct mutex alc_mutex; 486 428 429 + /* Fastmap stuff */ 430 + int fm_disabled; 431 + struct ubi_fastmap_layout *fm; 432 + struct ubi_fm_pool fm_pool; 433 + struct ubi_fm_pool fm_wl_pool; 434 + struct rw_semaphore fm_sem; 435 + struct mutex fm_mutex; 436 + void *fm_buf; 437 + size_t fm_size; 438 + struct work_struct fm_work; 439 + 487 440 /* Wear-leveling sub-system's stuff */ 488 441 struct rb_root used; 489 442 struct rb_root erroneous; 490 443 struct rb_root free; 444 + int free_count; 491 445 struct rb_root scrub; 492 446 struct list_head pq[UBI_PROT_QUEUE_LEN]; 493 447 int pq_head; ··· 666 596 struct kmem_cache *aeb_slab_cache; 667 597 }; 668 598 599 + /** 600 + * struct ubi_work - UBI work description data structure. 
601 + * @list: a link in the list of pending works 602 + * @func: worker function 603 + * @e: physical eraseblock to erase 604 + * @vol_id: the volume ID on which this erasure is being performed 605 + * @lnum: the logical eraseblock number 606 + * @torture: if the physical eraseblock has to be tortured 607 + * @anchor: produce an anchor PEB to be used by fastmap 608 + * 609 + * The @func pointer points to the worker function. If the @cancel argument is 610 + * not zero, the worker has to free the resources and exit immediately. The 611 + * worker has to return zero in case of success and a negative error code in 612 + * case of failure. 613 + */ 614 + struct ubi_work { 615 + struct list_head list; 616 + int (*func)(struct ubi_device *ubi, struct ubi_work *wrk, int cancel); 617 + /* The below fields are only relevant to erasure works */ 618 + struct ubi_wl_entry *e; 619 + int vol_id; 620 + int lnum; 621 + int torture; 622 + int anchor; 623 + }; 624 + 669 625 #include "debug.h" 670 626 671 627 extern struct kmem_cache *ubi_wl_entry_slab; ··· 702 606 extern struct mutex ubi_devices_mutex; 703 607 extern struct blocking_notifier_head ubi_notifiers; 704 608 705 - /* scan.c */ 609 + /* attach.c */ 706 610 int ubi_add_to_av(struct ubi_device *ubi, struct ubi_attach_info *ai, int pnum, 707 611 int ec, const struct ubi_vid_hdr *vid_hdr, int bitflips); 708 612 struct ubi_ainf_volume *ubi_find_av(const struct ubi_attach_info *ai, ··· 710 614 void ubi_remove_av(struct ubi_attach_info *ai, struct ubi_ainf_volume *av); 711 615 struct ubi_ainf_peb *ubi_early_get_peb(struct ubi_device *ubi, 712 616 struct ubi_attach_info *ai); 713 - int ubi_attach(struct ubi_device *ubi); 617 + int ubi_attach(struct ubi_device *ubi, int force_scan); 714 618 void ubi_destroy_ai(struct ubi_attach_info *ai); 715 619 716 620 /* vtbl.c */ ··· 760 664 int ubi_eba_copy_leb(struct ubi_device *ubi, int from, int to, 761 665 struct ubi_vid_hdr *vid_hdr); 762 666 int ubi_eba_init(struct ubi_device *ubi, 
struct ubi_attach_info *ai); 667 + unsigned long long ubi_next_sqnum(struct ubi_device *ubi); 668 + int self_check_eba(struct ubi_device *ubi, struct ubi_attach_info *ai_fastmap, 669 + struct ubi_attach_info *ai_scan); 763 670 764 671 /* wl.c */ 765 672 int ubi_wl_get_peb(struct ubi_device *ubi); ··· 773 674 int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai); 774 675 void ubi_wl_close(struct ubi_device *ubi); 775 676 int ubi_thread(void *u); 677 + struct ubi_wl_entry *ubi_wl_get_fm_peb(struct ubi_device *ubi, int anchor); 678 + int ubi_wl_put_fm_peb(struct ubi_device *ubi, struct ubi_wl_entry *used_e, 679 + int lnum, int torture); 680 + int ubi_is_erase_work(struct ubi_work *wrk); 681 + void ubi_refill_pools(struct ubi_device *ubi); 682 + int ubi_ensure_anchor_pebs(struct ubi_device *ubi); 776 683 777 684 /* io.c */ 778 685 int ubi_io_read(const struct ubi_device *ubi, void *buf, int pnum, int offset, ··· 816 711 void ubi_do_get_device_info(struct ubi_device *ubi, struct ubi_device_info *di); 817 712 void ubi_do_get_volume_info(struct ubi_device *ubi, struct ubi_volume *vol, 818 713 struct ubi_volume_info *vi); 714 + /* scan.c */ 715 + int ubi_compare_lebs(struct ubi_device *ubi, const struct ubi_ainf_peb *aeb, 716 + int pnum, const struct ubi_vid_hdr *vid_hdr); 717 + 718 + /* fastmap.c */ 719 + size_t ubi_calc_fm_size(struct ubi_device *ubi); 720 + int ubi_update_fastmap(struct ubi_device *ubi); 721 + int ubi_scan_fastmap(struct ubi_device *ubi, struct ubi_attach_info *ai, 722 + int fm_anchor); 819 723 820 724 /* 821 725 * ubi_rb_for_each_entry - walk an RB-tree.
+531 -68
drivers/mtd/ubi/wl.c
··· 135 135 */ 136 136 #define WL_MAX_FAILURES 32 137 137 138 - /** 139 - * struct ubi_work - UBI work description data structure. 140 - * @list: a link in the list of pending works 141 - * @func: worker function 142 - * @e: physical eraseblock to erase 143 - * @vol_id: the volume ID on which this erasure is being performed 144 - * @lnum: the logical eraseblock number 145 - * @torture: if the physical eraseblock has to be tortured 146 - * 147 - * The @func pointer points to the worker function. If the @cancel argument is 148 - * not zero, the worker has to free the resources and exit immediately. The 149 - * worker has to return zero in case of success and a negative error code in 150 - * case of failure. 151 - */ 152 - struct ubi_work { 153 - struct list_head list; 154 - int (*func)(struct ubi_device *ubi, struct ubi_work *wrk, int cancel); 155 - /* The below fields are only relevant to erasure works */ 156 - struct ubi_wl_entry *e; 157 - int vol_id; 158 - int lnum; 159 - int torture; 160 - }; 161 - 162 138 static int self_check_ec(struct ubi_device *ubi, int pnum, int ec); 163 139 static int self_check_in_wl_tree(const struct ubi_device *ubi, 164 140 struct ubi_wl_entry *e, struct rb_root *root); 165 141 static int self_check_in_pq(const struct ubi_device *ubi, 166 142 struct ubi_wl_entry *e); 143 + 144 + #ifdef CONFIG_MTD_UBI_FASTMAP 145 + /** 146 + * update_fastmap_work_fn - calls ubi_update_fastmap from a work queue 147 + * @wrk: the work description object 148 + */ 149 + static void update_fastmap_work_fn(struct work_struct *wrk) 150 + { 151 + struct ubi_device *ubi = container_of(wrk, struct ubi_device, fm_work); 152 + ubi_update_fastmap(ubi); 153 + } 154 + 155 + /** 156 + * ubi_is_fm_block - returns 1 if a PEB is currently used in a fastmap. 
157 + * @ubi: UBI device description object 158 + * @pnum: the to be checked PEB 159 + */ 160 + static int ubi_is_fm_block(struct ubi_device *ubi, int pnum) 161 + { 162 + int i; 163 + 164 + if (!ubi->fm) 165 + return 0; 166 + 167 + for (i = 0; i < ubi->fm->used_blocks; i++) 168 + if (ubi->fm->e[i]->pnum == pnum) 169 + return 1; 170 + 171 + return 0; 172 + } 173 + #else 174 + static int ubi_is_fm_block(struct ubi_device *ubi, int pnum) 175 + { 176 + return 0; 177 + } 178 + #endif 167 179 168 180 /** 169 181 * wl_tree_add - add a wear-leveling entry to a WL RB-tree. ··· 273 261 { 274 262 int err; 275 263 276 - spin_lock(&ubi->wl_lock); 277 264 while (!ubi->free.rb_node) { 278 265 spin_unlock(&ubi->wl_lock); 279 266 280 267 dbg_wl("do one work synchronously"); 281 268 err = do_work(ubi); 282 - if (err) 283 - return err; 284 269 285 270 spin_lock(&ubi->wl_lock); 271 + if (err) 272 + return err; 286 273 } 287 - spin_unlock(&ubi->wl_lock); 288 274 289 275 return 0; 290 276 } ··· 349 339 350 340 /** 351 341 * find_wl_entry - find wear-leveling entry closest to certain erase counter. 342 + * @ubi: UBI device description object 352 343 * @root: the RB-tree where to look for 353 344 * @diff: maximum possible difference from the smallest erase counter 354 345 * 355 346 * This function looks for a wear leveling entry with erase counter closest to 356 347 * min + @diff, where min is the smallest erase counter. 
357 348 */ 358 - static struct ubi_wl_entry *find_wl_entry(struct rb_root *root, int diff) 349 + static struct ubi_wl_entry *find_wl_entry(struct ubi_device *ubi, 350 + struct rb_root *root, int diff) 359 351 { 360 352 struct rb_node *p; 361 - struct ubi_wl_entry *e; 353 + struct ubi_wl_entry *e, *prev_e = NULL; 362 354 int max; 363 355 364 356 e = rb_entry(rb_first(root), struct ubi_wl_entry, u.rb); ··· 375 363 p = p->rb_left; 376 364 else { 377 365 p = p->rb_right; 366 + prev_e = e; 378 367 e = e1; 379 368 } 380 369 } 370 + 371 + /* If no fastmap has been written and this WL entry can be used 372 + * as anchor PEB, hold it back and return the second best WL entry 373 + * such that fastmap can use the anchor PEB later. */ 374 + if (prev_e && !ubi->fm_disabled && 375 + !ubi->fm && e->pnum < UBI_FM_MAX_START) 376 + return prev_e; 381 377 382 378 return e; 383 379 } 384 380 385 381 /** 386 - * ubi_wl_get_peb - get a physical eraseblock. 382 + * find_mean_wl_entry - find wear-leveling entry with medium erase counter. 383 + * @ubi: UBI device description object 384 + * @root: the RB-tree where to look for 385 + * 386 + * This function looks for a wear leveling entry with medium erase counter, 387 + * but not greater than or equal to the lowest erase counter plus 388 + * %WL_FREE_MAX_DIFF/2. 389 + */ 390 + static struct ubi_wl_entry *find_mean_wl_entry(struct ubi_device *ubi, 391 + struct rb_root *root) 392 + { 393 + struct ubi_wl_entry *e, *first, *last; 394 + 395 + first = rb_entry(rb_first(root), struct ubi_wl_entry, u.rb); 396 + last = rb_entry(rb_last(root), struct ubi_wl_entry, u.rb); 397 + 398 + if (last->ec - first->ec < WL_FREE_MAX_DIFF) { 399 + e = rb_entry(root->rb_node, struct ubi_wl_entry, u.rb); 400 + 401 + #ifdef CONFIG_MTD_UBI_FASTMAP 402 + /* If no fastmap has been written and this WL entry can be used 403 + * as anchor PEB, hold it back and return the second best 404 + * WL entry such that fastmap can use the anchor PEB later. 
*/ 405 + if (e && !ubi->fm_disabled && !ubi->fm && 406 + e->pnum < UBI_FM_MAX_START) 407 + e = rb_entry(rb_next(root->rb_node), 408 + struct ubi_wl_entry, u.rb); 409 + #endif 410 + } else 411 + e = find_wl_entry(ubi, root, WL_FREE_MAX_DIFF/2); 412 + 413 + return e; 414 + } 415 + 416 + #ifdef CONFIG_MTD_UBI_FASTMAP 417 + /** 418 + * find_anchor_wl_entry - find wear-leveling entry to be used as anchor PEB. 419 + * @root: the RB-tree where to look for 420 + */ 421 + static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root) 422 + { 423 + struct rb_node *p; 424 + struct ubi_wl_entry *e, *victim = NULL; 425 + int max_ec = UBI_MAX_ERASECOUNTER; 426 + 427 + ubi_rb_for_each_entry(p, e, root, u.rb) { 428 + if (e->pnum < UBI_FM_MAX_START && e->ec < max_ec) { 429 + victim = e; 430 + max_ec = e->ec; 431 + } 432 + } 433 + 434 + return victim; 435 + } 436 + 437 + static int anchor_pebs_avalible(struct rb_root *root) 438 + { 439 + struct rb_node *p; 440 + struct ubi_wl_entry *e; 441 + 442 + ubi_rb_for_each_entry(p, e, root, u.rb) 443 + if (e->pnum < UBI_FM_MAX_START) 444 + return 1; 445 + 446 + return 0; 447 + } 448 + 449 + /** 450 + * ubi_wl_get_fm_peb - find a physical erase block with a given maximal number. 451 + * @ubi: UBI device description object 452 + * @anchor: This PEB will be used as anchor PEB by fastmap 453 + * 454 + * The function returns a physical erase block with a given maximal number 455 + * and removes it from the wl subsystem. 456 + * Must be called with wl_lock held! 
457 + */ 458 + struct ubi_wl_entry *ubi_wl_get_fm_peb(struct ubi_device *ubi, int anchor) 459 + { 460 + struct ubi_wl_entry *e = NULL; 461 + 462 + if (!ubi->free.rb_node || (ubi->free_count - ubi->beb_rsvd_pebs < 1)) 463 + goto out; 464 + 465 + if (anchor) 466 + e = find_anchor_wl_entry(&ubi->free); 467 + else 468 + e = find_mean_wl_entry(ubi, &ubi->free); 469 + 470 + if (!e) 471 + goto out; 472 + 473 + self_check_in_wl_tree(ubi, e, &ubi->free); 474 + 475 + /* remove it from the free list, 476 + * the wl subsystem does no longer know this erase block */ 477 + rb_erase(&e->u.rb, &ubi->free); 478 + ubi->free_count--; 479 + out: 480 + return e; 481 + } 482 + #endif 483 + 484 + /** 485 + * __wl_get_peb - get a physical eraseblock. 387 486 * @ubi: UBI device description object 388 487 * 389 488 * This function returns a physical eraseblock in case of success and a 390 489 * negative error code in case of failure. Might sleep. 391 490 */ 392 - int ubi_wl_get_peb(struct ubi_device *ubi) 491 + static int __wl_get_peb(struct ubi_device *ubi) 393 492 { 394 493 int err; 395 - struct ubi_wl_entry *e, *first, *last; 494 + struct ubi_wl_entry *e; 396 495 397 496 retry: 398 - spin_lock(&ubi->wl_lock); 399 497 if (!ubi->free.rb_node) { 400 498 if (ubi->works_count == 0) { 401 - ubi_assert(list_empty(&ubi->works)); 402 499 ubi_err("no free eraseblocks"); 403 - spin_unlock(&ubi->wl_lock); 500 + ubi_assert(list_empty(&ubi->works)); 404 501 return -ENOSPC; 405 502 } 406 - spin_unlock(&ubi->wl_lock); 407 503 408 504 err = produce_free_peb(ubi); 409 505 if (err < 0) ··· 519 399 goto retry; 520 400 } 521 401 522 - first = rb_entry(rb_first(&ubi->free), struct ubi_wl_entry, u.rb); 523 - last = rb_entry(rb_last(&ubi->free), struct ubi_wl_entry, u.rb); 524 - 525 - if (last->ec - first->ec < WL_FREE_MAX_DIFF) 526 - e = rb_entry(ubi->free.rb_node, struct ubi_wl_entry, u.rb); 527 - else 528 - e = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF/2); 402 + e = find_mean_wl_entry(ubi, &ubi->free); 403 
+ if (!e) { 404 + ubi_err("no free eraseblocks"); 405 + return -ENOSPC; 406 + } 529 407 530 408 self_check_in_wl_tree(ubi, e, &ubi->free); ··· 532 414 * be protected from being moved for some time. 533 415 */ 534 416 rb_erase(&e->u.rb, &ubi->free); 417 + ubi->free_count--; 535 418 dbg_wl("PEB %d EC %d", e->pnum, e->ec); 419 + #ifndef CONFIG_MTD_UBI_FASTMAP 420 + /* We have to enqueue e only if fastmap is disabled, 421 + * if fastmap is enabled, prot_queue_add() will be called by 422 + * ubi_wl_get_peb() after removing e from the pool. */ 536 423 prot_queue_add(ubi, e); 537 - spin_unlock(&ubi->wl_lock); 538 - 424 + #endif 539 425 err = ubi_self_check_all_ff(ubi, e->pnum, ubi->vid_hdr_aloffset, 540 426 ubi->peb_size - ubi->vid_hdr_aloffset); 541 427 if (err) { ··· 549 427 550 428 return e->pnum; 551 429 } 430 + 431 + #ifdef CONFIG_MTD_UBI_FASTMAP 432 + /** 433 + * return_unused_pool_pebs - returns unused PEBs to the free tree. 434 + * @ubi: UBI device description object 435 + * @pool: fastmap pool description object 436 + */ 437 + static void return_unused_pool_pebs(struct ubi_device *ubi, 438 + struct ubi_fm_pool *pool) 439 + { 440 + int i; 441 + struct ubi_wl_entry *e; 442 + 443 + for (i = pool->used; i < pool->size; i++) { 444 + e = ubi->lookuptbl[pool->pebs[i]]; 445 + wl_tree_add(e, &ubi->free); 446 + ubi->free_count++; 447 + } 448 + } 449 + 450 + /** 451 + * refill_wl_pool - refills the fastmap pool used by the 452 + * WL sub-system. 
453 + * @ubi: UBI device description object 454 + */ 455 + static void refill_wl_pool(struct ubi_device *ubi) 456 + { 457 + struct ubi_wl_entry *e; 458 + struct ubi_fm_pool *pool = &ubi->fm_wl_pool; 459 + 460 + return_unused_pool_pebs(ubi, pool); 461 + 462 + for (pool->size = 0; pool->size < pool->max_size; pool->size++) { 463 + if (!ubi->free.rb_node || 464 + (ubi->free_count - ubi->beb_rsvd_pebs < 5)) 465 + break; 466 + 467 + e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF); 468 + self_check_in_wl_tree(ubi, e, &ubi->free); 469 + rb_erase(&e->u.rb, &ubi->free); 470 + ubi->free_count--; 471 + 472 + pool->pebs[pool->size] = e->pnum; 473 + } 474 + pool->used = 0; 475 + } 476 + 477 + /** 478 + * refill_wl_user_pool - refills the fastmap pool used by ubi_wl_get_peb. 479 + * @ubi: UBI device description object 480 + */ 481 + static void refill_wl_user_pool(struct ubi_device *ubi) 482 + { 483 + struct ubi_fm_pool *pool = &ubi->fm_pool; 484 + 485 + return_unused_pool_pebs(ubi, pool); 486 + 487 + for (pool->size = 0; pool->size < pool->max_size; pool->size++) { 488 + if (!ubi->free.rb_node || 489 + (ubi->free_count - ubi->beb_rsvd_pebs < 1)) 490 + break; 491 + 492 + pool->pebs[pool->size] = __wl_get_peb(ubi); 493 + if (pool->pebs[pool->size] < 0) 494 + break; 495 + } 496 + pool->used = 0; 497 + } 498 + 499 + /** 500 + * ubi_refill_pools - refills all fastmap PEB pools. 501 + * @ubi: UBI device description object 502 + */ 503 + void ubi_refill_pools(struct ubi_device *ubi) 504 + { 505 + spin_lock(&ubi->wl_lock); 506 + refill_wl_pool(ubi); 507 + refill_wl_user_pool(ubi); 508 + spin_unlock(&ubi->wl_lock); 509 + } 510 + 511 + /* ubi_wl_get_peb - works exactly like __wl_get_peb but keeps track of 512 + * the fastmap pool. 
513 + */ 514 + int ubi_wl_get_peb(struct ubi_device *ubi) 515 + { 516 + int ret; 517 + struct ubi_fm_pool *pool = &ubi->fm_pool; 518 + struct ubi_fm_pool *wl_pool = &ubi->fm_wl_pool; 519 + 520 + if (!pool->size || !wl_pool->size || pool->used == pool->size || 521 + wl_pool->used == wl_pool->size) 522 + ubi_update_fastmap(ubi); 523 + 524 + /* we did not get a single free PEB */ 525 + if (!pool->size) 526 + ret = -ENOSPC; 527 + else { 528 + spin_lock(&ubi->wl_lock); 529 + ret = pool->pebs[pool->used++]; 530 + prot_queue_add(ubi, ubi->lookuptbl[ret]); 531 + spin_unlock(&ubi->wl_lock); 532 + } 533 + 534 + return ret; 535 + } 536 + 537 + /* get_peb_for_wl - returns a PEB to be used internally by the WL sub-system. 538 + * 539 + * @ubi: UBI device description object 540 + */ 541 + static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi) 542 + { 543 + struct ubi_fm_pool *pool = &ubi->fm_wl_pool; 544 + int pnum; 545 + 546 + if (pool->used == pool->size || !pool->size) { 547 + /* We cannot update the fastmap here because this 548 + * function is called in atomic context. 549 + * Let's fail here and refill/update it as soon as possible. */ 550 + schedule_work(&ubi->fm_work); 551 + return NULL; 552 + } else { 553 + pnum = pool->pebs[pool->used++]; 554 + return ubi->lookuptbl[pnum]; 555 + } 556 + } 557 + #else 558 + static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi) 559 + { 560 + return find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF); 561 + } 562 + 563 + int ubi_wl_get_peb(struct ubi_device *ubi) 564 + { 565 + int peb; 566 + 567 + spin_lock(&ubi->wl_lock); 568 + peb = __wl_get_peb(ubi); 569 + spin_unlock(&ubi->wl_lock); 570 + 571 + return peb; 572 + } 573 + #endif 552 574 553 575 /** 554 576 * prot_queue_del - remove a physical eraseblock from the protection queue. ··· 824 558 } 825 559 826 560 /** 827 - * schedule_ubi_work - schedule a work. 561 + * __schedule_ubi_work - schedule a work. 
828 562 * @ubi: UBI device description object 829 563 * @wrk: the work to schedule 830 564 * 831 565 * This function adds a work defined by @wrk to the tail of the pending works 832 - * list. 566 + * list. Can only be used if ubi->work_sem is already held in read mode! 833 567 */ 834 - static void schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk) 568 + static void __schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk) 835 569 { 836 570 spin_lock(&ubi->wl_lock); 837 571 list_add_tail(&wrk->list, &ubi->works); ··· 842 576 spin_unlock(&ubi->wl_lock); 843 577 } 844 578 579 + /** 580 + * schedule_ubi_work - schedule a work. 581 + * @ubi: UBI device description object 582 + * @wrk: the work to schedule 583 + * 584 + * This function adds a work defined by @wrk to the tail of the pending works 585 + * list. 586 + */ 587 + static void schedule_ubi_work(struct ubi_device *ubi, struct ubi_work *wrk) 588 + { 589 + down_read(&ubi->work_sem); 590 + __schedule_ubi_work(ubi, wrk); 591 + up_read(&ubi->work_sem); 592 + } 593 + 845 594 static int erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk, 846 595 int cancel); 596 + 597 + #ifdef CONFIG_MTD_UBI_FASTMAP 598 + /** 599 + * ubi_is_erase_work - checks whether a work is erase work. 600 + * @wrk: The work object to be checked 601 + */ 602 + int ubi_is_erase_work(struct ubi_work *wrk) 603 + { 604 + return wrk->func == erase_worker; 605 + } 606 + #endif 847 607 848 608 /** 849 609 * schedule_erase - schedule an erase work. ··· 886 594 int vol_id, int lnum, int torture) 887 595 { 888 596 struct ubi_work *wl_wrk; 597 + 598 + ubi_assert(e); 599 + ubi_assert(!ubi_is_fm_block(ubi, e->pnum)); 889 600 890 601 dbg_wl("schedule erasure of PEB %d, EC %d, torture %d", 891 602 e->pnum, e->ec, torture); ··· 908 613 } 909 614 910 615 /** 616 + * do_sync_erase - run the erase worker synchronously. 
617 + * @ubi: UBI device description object 618 + * @e: the WL entry of the physical eraseblock to erase 619 + * @vol_id: the volume ID that last used this PEB 620 + * @lnum: the last used logical eraseblock number for the PEB 621 + * @torture: if the physical eraseblock has to be tortured 622 + * 623 + */ 624 + static int do_sync_erase(struct ubi_device *ubi, struct ubi_wl_entry *e, 625 + int vol_id, int lnum, int torture) 626 + { 627 + struct ubi_work *wl_wrk; 628 + 629 + dbg_wl("sync erase of PEB %i", e->pnum); 630 + 631 + wl_wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS); 632 + if (!wl_wrk) 633 + return -ENOMEM; 634 + 635 + wl_wrk->e = e; 636 + wl_wrk->vol_id = vol_id; 637 + wl_wrk->lnum = lnum; 638 + wl_wrk->torture = torture; 639 + 640 + return erase_worker(ubi, wl_wrk, 0); 641 + } 642 + 643 + #ifdef CONFIG_MTD_UBI_FASTMAP 644 + /** 645 + * ubi_wl_put_fm_peb - returns a PEB used in a fastmap to the wear-leveling 646 + * sub-system. 647 + * see: ubi_wl_put_peb() 648 + * 649 + * @ubi: UBI device description object 650 + * @fm_e: physical eraseblock to return 651 + * @lnum: the last used logical eraseblock number for the PEB 652 + * @torture: if this physical eraseblock has to be tortured 653 + */ 654 + int ubi_wl_put_fm_peb(struct ubi_device *ubi, struct ubi_wl_entry *fm_e, 655 + int lnum, int torture) 656 + { 657 + struct ubi_wl_entry *e; 658 + int vol_id, pnum = fm_e->pnum; 659 + 660 + dbg_wl("PEB %d", pnum); 661 + 662 + ubi_assert(pnum >= 0); 663 + ubi_assert(pnum < ubi->peb_count); 664 + 665 + spin_lock(&ubi->wl_lock); 666 + e = ubi->lookuptbl[pnum]; 667 + 668 + /* This can happen if we recovered from a fastmap the very 669 + * first time and are now writing a new one. In this case the WL sub-system 670 + * has never seen any PEB used by the original fastmap. 
671 + */ 672 + if (!e) { 673 + e = fm_e; 674 + ubi_assert(e->ec >= 0); 675 + ubi->lookuptbl[pnum] = e; 676 + } else { 677 + e->ec = fm_e->ec; 678 + kfree(fm_e); 679 + } 680 + 681 + spin_unlock(&ubi->wl_lock); 682 + 683 + vol_id = lnum ? UBI_FM_DATA_VOLUME_ID : UBI_FM_SB_VOLUME_ID; 684 + return schedule_erase(ubi, e, vol_id, lnum, torture); 685 + } 686 + #endif 687 + 688 + /** 911 689 * wear_leveling_worker - wear-leveling worker function. 912 690 * @ubi: UBI device description object 913 691 * @wrk: the work object ··· 995 627 { 996 628 int err, scrubbing = 0, torture = 0, protect = 0, erroneous = 0; 997 629 int vol_id = -1, uninitialized_var(lnum); 630 + #ifdef CONFIG_MTD_UBI_FASTMAP 631 + int anchor = wrk->anchor; 632 + #endif 998 633 struct ubi_wl_entry *e1, *e2; 999 634 struct ubi_vid_hdr *vid_hdr; 1000 635 ··· 1031 660 goto out_cancel; 1032 661 } 1033 662 663 + #ifdef CONFIG_MTD_UBI_FASTMAP 664 + /* Check whether we need to produce an anchor PEB */ 665 + if (!anchor) 666 + anchor = !anchor_pebs_avalible(&ubi->free); 667 + 668 + if (anchor) { 669 + e1 = find_anchor_wl_entry(&ubi->used); 670 + if (!e1) 671 + goto out_cancel; 672 + e2 = get_peb_for_wl(ubi); 673 + if (!e2) 674 + goto out_cancel; 675 + 676 + self_check_in_wl_tree(ubi, e1, &ubi->used); 677 + rb_erase(&e1->u.rb, &ubi->used); 678 + dbg_wl("anchor-move PEB %d to PEB %d", e1->pnum, e2->pnum); 679 + } else if (!ubi->scrub.rb_node) { 680 + #else 1034 681 if (!ubi->scrub.rb_node) { 682 + #endif 1035 683 /* 1036 684 * Now pick the least worn-out used physical eraseblock and a 1037 685 * highly worn-out free physical eraseblock. If the erase 1038 686 * counters differ much enough, start wear-leveling. 
1039 687 */ 1040 688 e1 = rb_entry(rb_first(&ubi->used), struct ubi_wl_entry, u.rb); 1041 - e2 = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF); 689 + e2 = get_peb_for_wl(ubi); 690 + if (!e2) 691 + goto out_cancel; 1042 692 1043 693 if (!(e2->ec - e1->ec >= UBI_WL_THRESHOLD)) { 1044 694 dbg_wl("no WL needed: min used EC %d, max free EC %d", ··· 1074 682 /* Perform scrubbing */ 1075 683 scrubbing = 1; 1076 684 e1 = rb_entry(rb_first(&ubi->scrub), struct ubi_wl_entry, u.rb); 1077 - e2 = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF); 685 + e2 = get_peb_for_wl(ubi); 686 + if (!e2) 687 + goto out_cancel; 688 + 1078 689 self_check_in_wl_tree(ubi, e1, &ubi->scrub); 1079 690 rb_erase(&e1->u.rb, &ubi->scrub); 1080 691 dbg_wl("scrub PEB %d to PEB %d", e1->pnum, e2->pnum); 1081 692 } 1082 693 1083 - self_check_in_wl_tree(ubi, e2, &ubi->free); 1084 - rb_erase(&e2->u.rb, &ubi->free); 1085 694 ubi->move_from = e1; 1086 695 ubi->move_to = e2; 1087 696 spin_unlock(&ubi->wl_lock); ··· 1199 806 ubi->move_to_put = ubi->wl_scheduled = 0; 1200 807 spin_unlock(&ubi->wl_lock); 1201 808 1202 - err = schedule_erase(ubi, e1, vol_id, lnum, 0); 809 + err = do_sync_erase(ubi, e1, vol_id, lnum, 0); 1203 810 if (err) { 1204 811 kmem_cache_free(ubi_wl_entry_slab, e1); 1205 812 if (e2) ··· 1214 821 */ 1215 822 dbg_wl("PEB %d (LEB %d:%d) was put meanwhile, erase", 1216 823 e2->pnum, vol_id, lnum); 1217 - err = schedule_erase(ubi, e2, vol_id, lnum, 0); 824 + err = do_sync_erase(ubi, e2, vol_id, lnum, 0); 1218 825 if (err) { 1219 826 kmem_cache_free(ubi_wl_entry_slab, e2); 1220 827 goto out_ro; ··· 1253 860 spin_unlock(&ubi->wl_lock); 1254 861 1255 862 ubi_free_vid_hdr(ubi, vid_hdr); 1256 - err = schedule_erase(ubi, e2, vol_id, lnum, torture); 863 + err = do_sync_erase(ubi, e2, vol_id, lnum, torture); 1257 864 if (err) { 1258 865 kmem_cache_free(ubi_wl_entry_slab, e2); 1259 866 goto out_ro; ··· 1294 901 /** 1295 902 * ensure_wear_leveling - schedule wear-leveling if it is needed. 
1296 903 * @ubi: UBI device description object 904 + * @nested: set to non-zero if this function is called from UBI worker 1297 905 * 1298 906 * This function checks if it is time to start wear-leveling and schedules it 1299 907 * if yes. This function returns zero in case of success and a negative error 1300 908 * code in case of failure. 1301 909 */ 1302 - static int ensure_wear_leveling(struct ubi_device *ubi) 910 + static int ensure_wear_leveling(struct ubi_device *ubi, int nested) 1303 911 { 1304 912 int err = 0; 1305 913 struct ubi_wl_entry *e1; ··· 1328 934 * %UBI_WL_THRESHOLD. 1329 935 */ 1330 936 e1 = rb_entry(rb_first(&ubi->used), struct ubi_wl_entry, u.rb); 1331 - e2 = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF); 937 + e2 = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF); 1332 938 1333 939 if (!(e2->ec - e1->ec >= UBI_WL_THRESHOLD)) 1334 940 goto out_unlock; ··· 1345 951 goto out_cancel; 1346 952 } 1347 953 954 + wrk->anchor = 0; 1348 955 wrk->func = &wear_leveling_worker; 1349 - schedule_ubi_work(ubi, wrk); 956 + if (nested) 957 + __schedule_ubi_work(ubi, wrk); 958 + else 959 + schedule_ubi_work(ubi, wrk); 1350 960 return err; 1351 961 1352 962 out_cancel: ··· 1360 962 spin_unlock(&ubi->wl_lock); 1361 963 return err; 1362 964 } 965 + 966 + #ifdef CONFIG_MTD_UBI_FASTMAP 967 + /** 968 + * ubi_ensure_anchor_pebs - schedule wear-leveling to produce an anchor PEB. 
969 + * @ubi: UBI device description object 970 + */ 971 + int ubi_ensure_anchor_pebs(struct ubi_device *ubi) 972 + { 973 + struct ubi_work *wrk; 974 + 975 + spin_lock(&ubi->wl_lock); 976 + if (ubi->wl_scheduled) { 977 + spin_unlock(&ubi->wl_lock); 978 + return 0; 979 + } 980 + ubi->wl_scheduled = 1; 981 + spin_unlock(&ubi->wl_lock); 982 + 983 + wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS); 984 + if (!wrk) { 985 + spin_lock(&ubi->wl_lock); 986 + ubi->wl_scheduled = 0; 987 + spin_unlock(&ubi->wl_lock); 988 + return -ENOMEM; 989 + } 990 + 991 + wrk->anchor = 1; 992 + wrk->func = &wear_leveling_worker; 993 + schedule_ubi_work(ubi, wrk); 994 + return 0; 995 + } 996 + #endif 1363 997 1364 998 /** 1365 999 * erase_worker - physical eraseblock erase worker function. ··· 1423 993 dbg_wl("erase PEB %d EC %d LEB %d:%d", 1424 994 pnum, e->ec, wl_wrk->vol_id, wl_wrk->lnum); 1425 995 996 + ubi_assert(!ubi_is_fm_block(ubi, e->pnum)); 997 + 1426 998 err = sync_erase(ubi, e, wl_wrk->torture); 1427 999 if (!err) { 1428 1000 /* Fine, we've erased it successfully */ ··· 1432 1000 1433 1001 spin_lock(&ubi->wl_lock); 1434 1002 wl_tree_add(e, &ubi->free); 1003 + ubi->free_count++; 1435 1004 spin_unlock(&ubi->wl_lock); 1436 1005 1437 1006 /* ··· 1442 1009 serve_prot_queue(ubi); 1443 1010 1444 1011 /* And take care about wear-leveling */ 1445 - err = ensure_wear_leveling(ubi); 1012 + err = ensure_wear_leveling(ubi, 1); 1446 1013 return err; 1447 1014 } 1448 1015 ··· 1680 1247 * Technically scrubbing is the same as wear-leveling, so it is done 1681 1248 * by the WL worker. 
1682 1249 */ 1683 - return ensure_wear_leveling(ubi); 1250 + return ensure_wear_leveling(ubi, 0); 1684 1251 } 1685 1252 1686 1253 /** ··· 1861 1428 */ 1862 1429 int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai) 1863 1430 { 1864 - int err, i; 1431 + int err, i, reserved_pebs, found_pebs = 0; 1865 1432 struct rb_node *rb1, *rb2; 1866 1433 struct ubi_ainf_volume *av; 1867 1434 struct ubi_ainf_peb *aeb, *tmp; ··· 1873 1440 init_rwsem(&ubi->work_sem); 1874 1441 ubi->max_ec = ai->max_ec; 1875 1442 INIT_LIST_HEAD(&ubi->works); 1443 + #ifdef CONFIG_MTD_UBI_FASTMAP 1444 + INIT_WORK(&ubi->fm_work, update_fastmap_work_fn); 1445 + #endif 1876 1446 1877 1447 sprintf(ubi->bgt_name, UBI_BGT_NAME_PATTERN, ubi->ubi_num); 1878 1448 ··· 1897 1461 1898 1462 e->pnum = aeb->pnum; 1899 1463 e->ec = aeb->ec; 1464 + ubi_assert(!ubi_is_fm_block(ubi, e->pnum)); 1900 1465 ubi->lookuptbl[e->pnum] = e; 1901 1466 if (schedule_erase(ubi, e, aeb->vol_id, aeb->lnum, 0)) { 1902 1467 kmem_cache_free(ubi_wl_entry_slab, e); 1903 1468 goto out_free; 1904 1469 } 1470 + 1471 + found_pebs++; 1905 1472 } 1906 1473 1474 + ubi->free_count = 0; 1907 1475 list_for_each_entry(aeb, &ai->free, u.list) { 1908 1476 cond_resched(); 1909 1477 ··· 1918 1478 e->pnum = aeb->pnum; 1919 1479 e->ec = aeb->ec; 1920 1480 ubi_assert(e->ec >= 0); 1481 + ubi_assert(!ubi_is_fm_block(ubi, e->pnum)); 1482 + 1921 1483 wl_tree_add(e, &ubi->free); 1484 + ubi->free_count++; 1485 + 1922 1486 ubi->lookuptbl[e->pnum] = e; 1487 + 1488 + found_pebs++; 1923 1489 } 1924 1490 1925 1491 ubi_rb_for_each_entry(rb1, av, &ai->volumes, rb) { ··· 1939 1493 e->pnum = aeb->pnum; 1940 1494 e->ec = aeb->ec; 1941 1495 ubi->lookuptbl[e->pnum] = e; 1496 + 1942 1497 if (!aeb->scrub) { 1943 1498 dbg_wl("add PEB %d EC %d to the used tree", 1944 1499 e->pnum, e->ec); ··· 1949 1502 e->pnum, e->ec); 1950 1503 wl_tree_add(e, &ubi->scrub); 1951 1504 } 1505 + 1506 + found_pebs++; 1952 1507 } 1953 1508 } 1954 1509 1955 - if (ubi->avail_pebs < 
WL_RESERVED_PEBS) { 1510 + dbg_wl("found %i PEBs", found_pebs); 1511 + 1512 + if (ubi->fm) 1513 + ubi_assert(ubi->good_peb_count == \ 1514 + found_pebs + ubi->fm->used_blocks); 1515 + else 1516 + ubi_assert(ubi->good_peb_count == found_pebs); 1517 + 1518 + reserved_pebs = WL_RESERVED_PEBS; 1519 + #ifdef CONFIG_MTD_UBI_FASTMAP 1520 + /* Reserve enough LEBs to store two fastmaps. */ 1521 + reserved_pebs += (ubi->fm_size / ubi->leb_size) * 2; 1522 + #endif 1523 + 1524 + if (ubi->avail_pebs < reserved_pebs) { 1956 1525 ubi_err("no enough physical eraseblocks (%d, need %d)", 1957 - ubi->avail_pebs, WL_RESERVED_PEBS); 1526 + ubi->avail_pebs, reserved_pebs); 1958 1527 if (ubi->corr_peb_count) 1959 1528 ubi_err("%d PEBs are corrupted and not used", 1960 1529 ubi->corr_peb_count); 1961 1530 goto out_free; 1962 1531 } 1963 - ubi->avail_pebs -= WL_RESERVED_PEBS; 1964 - ubi->rsvd_pebs += WL_RESERVED_PEBS; 1532 + ubi->avail_pebs -= reserved_pebs; 1533 + ubi->rsvd_pebs += reserved_pebs; 1965 1534 1966 1535 /* Schedule wear-leveling if needed */ 1967 - err = ensure_wear_leveling(ubi); 1536 + err = ensure_wear_leveling(ubi, 0); 1968 1537 if (err) 1969 1538 goto out_free; 1970 1539 ··· 2059 1596 } 2060 1597 2061 1598 read_ec = be64_to_cpu(ec_hdr->ec); 2062 - if (ec != read_ec) { 1599 + if (ec != read_ec && read_ec - ec > 1) { 2063 1600 ubi_err("self-check failed for PEB %d", pnum); 2064 1601 ubi_err("read EC is %lld, should be %d", read_ec, ec); 2065 1602 dump_stack();