Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'nand/for-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux into mtd/next

Raw NAND core changes:
* Stop using nand_release(); patch all drivers accordingly.
* Give more information about the ECC weakness when it does not match the
chip's requirement.
* MAINTAINERS updates.
* Support emulated SLC mode on MLC NANDs.
* Support "constrained" controllers, adapt the core and ONFI/JEDEC
table parsing and Micron's code.
* Take check_only into account.
* Add an invalid ECC mode to discriminate it from valid ones.
* Return an enum from of_get_nand_ecc_algo().
* Drop OOB_FIRST placement scheme.
* Introduce nand_extract_bits().
* Ensure a consistent bitflips numbering.
* BCH lib:
- Allow easy bit swapping.
- Slightly rework the exported function names.
* Fix nand_gpio_waitrdy().
* Propagate CS selection to sub operations.
* Add a NAND_NO_BBM_QUIRK flag.
* Give the possibility to verify a read operation is supported.
* Add a helper to check supported operations.
* Avoid indirect access to ->data_buf().
* Rename the use_bufpoi variables.
* Fix comments about the use of bufpoi.
* Rename a NAND chip option.
* Reorder the nand_chip->options flags.
* Translate obscure bitfields into readable macros.
* Timings:
- Fix default values.
- Add mode information to the timings structure.

Raw NAND controller driver changes:
* Fix many error paths.
* Arasan:
- New driver
* Au1550nd:
- Various cleanups
- Migration to ->exec_op()
* brcmnand:
- Misc cleanup.
- Support v2.1-v2.2 controllers.
- Remove unused include of <linux/version.h>.
- Correctly verify erased pages.
- Fix Hamming OOB layout.
* Cadence:
- Make cadence_nand_attach_chip static.
* Cafe:
- Set the NAND_NO_BBM_QUIRK flag
* cmx270:
- Remove this controller driver.
* cs553x:
- Misc cleanup
- Migration to ->exec_op()
* Davinci:
- Misc cleanup.
- Migration to ->exec_op()
* Denali:
- Add more delays before latching incoming data
* Diskonchip:
- Misc cleanup
- Migration to ->exec_op()
* Fsmc:
- Change to non-atomic bit operations.
* GPMI:
- Use nand_extract_bits()
- Fix runtime PM imbalance.
* Ingenic:
- Migration to exec_op()
- Fix the RB gpio active-high property on qi,lb60
- Make qi_lb60_ooblayout_ops static.
* Marvell:
- Misc cleanup and small fixes
* Nandsim:
- Fix the error paths, driver wide.
* Omap_elm:
- Fix runtime PM imbalance.
* STM32_FMC2:
- Misc cleanups (error cases, comments, timeout values, cosmetic
changes).

+4289 -2567
+63
Documentation/devicetree/bindings/mtd/arasan,nand-controller.yaml
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/arasan,nand-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Arasan NAND Flash Controller with ONFI 3.1 support device tree bindings

allOf:
  - $ref: "nand-controller.yaml"

maintainers:
  - Naga Sureshkumar Relli <naga.sureshkumar.relli@xilinx.com>

properties:
  compatible:
    oneOf:
      - items:
          - enum:
              - xlnx,zynqmp-nand-controller
          - enum:
              - arasan,nfc-v3p10

  reg:
    maxItems: 1

  clocks:
    items:
      - description: Controller clock
      - description: NAND bus clock

  clock-names:
    items:
      - const: controller
      - const: bus

  interrupts:
    maxItems: 1

  "#address-cells": true
  "#size-cells": true

required:
  - compatible
  - reg
  - clocks
  - clock-names
  - interrupts

additionalProperties: true

examples:
  - |
    nfc: nand-controller@ff100000 {
        compatible = "xlnx,zynqmp-nand-controller", "arasan,nfc-v3p10";
        reg = <0x0 0xff100000 0x0 0x1000>;
        clock-names = "controller", "bus";
        clocks = <&clk200>, <&clk100>;
        interrupt-parent = <&gic>;
        interrupts = <0 14 4>;
        #address-cells = <1>;
        #size-cells = <0>;
    };
+2
Documentation/devicetree/bindings/mtd/brcm,brcmnand.txt
···
      "brcm,brcmnand" and an appropriate version compatibility
      string, like "brcm,brcmnand-v7.0"
      Possible values:
+         brcm,brcmnand-v2.1
+         brcm,brcmnand-v2.2
          brcm,brcmnand-v4.0
          brcm,brcmnand-v5.0
          brcm,brcmnand-v6.0
+3
Documentation/devicetree/bindings/mtd/partition.txt
···
    clobbered.
  - lock : Do not unlock the partition at initialization time (not supported on
    all devices)
+ - slc-mode: This parameter, if present, allows one to emulate SLC mode on a
+   partition attached to an MLC NAND thus making this partition immune to
+   paired-pages corruptions

  Examples:

+4 -2
Documentation/driver-api/mtdnand.rst
···
  #ifdef MODULE
  static void __exit board_cleanup (void)
  {
-     /* Release resources, unregister device */
-     nand_release (mtd_to_nand(board_mtd));
+     /* Unregister device */
+     WARN_ON(mtd_device_unregister(board_mtd));
+     /* Release resources */
+     nand_cleanup(mtd_to_nand(board_mtd));

      /* unmap physical address */
      iounmap(baseaddr);
+9 -4
MAINTAINERS
···
  W: http://www.aquantia.com
  F: drivers/net/ethernet/aquantia/atlantic/aq_ptp*

+ ARASAN NAND CONTROLLER DRIVER
+ M: Naga Sureshkumar Relli <nagasure@xilinx.com>
+ L: linux-mtd@lists.infradead.org
+ S: Maintained
+ F: Documentation/devicetree/bindings/mtd/arasan,nand-controller.yaml
+ F: drivers/mtd/nand/raw/arasan-nand-controller.c
+
  ARC FRAMEBUFFER DRIVER
  M: Jaya Kumar <jayalk@intworks.biz>
  S: Maintained
···
  F: drivers/media/platform/cadence/cdns-csi2*

  CADENCE NAND DRIVER
- M: Piotr Sroka <piotrs@cadence.com>
  L: linux-mtd@lists.infradead.org
- S: Maintained
+ S: Orphan
  F: Documentation/devicetree/bindings/mtd/cadence-nand-controller.txt
  F: drivers/mtd/nand/raw/cadence-nand-controller.c
···
  F: drivers/i2c/busses/i2c-mt7621.c

  MEDIATEK NAND CONTROLLER DRIVER
- M: Xiaolei Li <xiaolei.li@mediatek.com>
  L: linux-mtd@lists.infradead.org
- S: Maintained
+ S: Orphan
  F: Documentation/devicetree/bindings/mtd/mtk-nand.txt
  F: drivers/mtd/nand/raw/mtk_*
+5 -5
drivers/mtd/devices/docg3.c
···
      for (i = 0; i < DOC_ECC_BCH_SIZE; i++)
          ecc[i] = bitrev8(hwecc[i]);
-     numerrs = decode_bch(docg3->cascade->bch, NULL,
+     numerrs = bch_decode(docg3->cascade->bch, NULL,
                           DOC_ECC_BCH_COVERED_BYTES,
                           NULL, ecc, NULL, errorpos);
      BUG_ON(numerrs == -EINVAL);
···
          return ret;
      cascade->base = base;
      mutex_init(&cascade->lock);
-     cascade->bch = init_bch(DOC_ECC_BCH_M, DOC_ECC_BCH_T,
-                             DOC_ECC_BCH_PRIMPOLY);
+     cascade->bch = bch_init(DOC_ECC_BCH_M, DOC_ECC_BCH_T,
+                             DOC_ECC_BCH_PRIMPOLY, false);
      if (!cascade->bch)
          return ret;
···
      ret = -ENODEV;
      dev_info(dev, "No supported DiskOnChip found\n");
  err_probe:
-     free_bch(cascade->bch);
+     bch_free(cascade->bch);
      for (floor = 0; floor < DOC_MAX_NBFLOORS; floor++)
          if (cascade->floors[floor])
              doc_release_device(cascade->floors[floor]);
···
          if (cascade->floors[floor])
              doc_release_device(cascade->floors[floor]);

-     free_bch(docg3->cascade->bch);
+     bch_free(docg3->cascade->bch);
      return 0;
  }
+174 -17
drivers/mtd/mtdcore.c
···
               !(mtd->flags & MTD_NO_ERASE)))
          return -EINVAL;

+     /*
+      * MTD_SLC_ON_MLC_EMULATION can only be set on partitions, when the
+      * master is an MLC NAND and has a proper pairing scheme defined.
+      * We also reject masters that implement ->_writev() for now, because
+      * NAND controller drivers don't implement this hook, and adding the
+      * SLC -> MLC address/length conversion to this path is useless if we
+      * don't have a user.
+      */
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION &&
+         (!mtd_is_partition(mtd) || master->type != MTD_MLCNANDFLASH ||
+          !master->pairing || master->_writev))
+         return -EINVAL;
+
      mutex_lock(&mtd_table_mutex);

      i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL);
···
      /* default value if not set by driver */
      if (mtd->bitflip_threshold == 0)
          mtd->bitflip_threshold = mtd->ecc_strength;
+
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
+         int ngroups = mtd_pairing_groups(master);
+
+         mtd->erasesize /= ngroups;
+         mtd->size = (u64)mtd_div_by_eb(mtd->size, master) *
+                     mtd->erasesize;
+     }

      if (is_power_of_2(mtd->erasesize))
          mtd->erasesize_shift = ffs(mtd->erasesize) - 1;
···
  {
      struct mtd_info *master = mtd_get_master(mtd);
      u64 mst_ofs = mtd_get_master_ofs(mtd, 0);
+     struct erase_info adjinstr;
      int ret;

      instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
+     adjinstr = *instr;

      if (!mtd->erasesize || !master->_erase)
          return -ENOTSUPP;
···
      ledtrig_mtd_activity();

-     instr->addr += mst_ofs;
-     ret = master->_erase(master, instr);
-     if (instr->fail_addr != MTD_FAIL_ADDR_UNKNOWN)
-         instr->fail_addr -= mst_ofs;
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
+         adjinstr.addr = (loff_t)mtd_div_by_eb(instr->addr, mtd) *
+                         master->erasesize;
+         adjinstr.len = ((u64)mtd_div_by_eb(instr->addr + instr->len, mtd) *
+                         master->erasesize) -
+                        adjinstr.addr;
+     }

-     instr->addr -= mst_ofs;
+     adjinstr.addr += mst_ofs;
+
+     ret = master->_erase(master, &adjinstr);
+
+     if (adjinstr.fail_addr != MTD_FAIL_ADDR_UNKNOWN) {
+         instr->fail_addr = adjinstr.fail_addr - mst_ofs;
+         if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
+             instr->fail_addr = mtd_div_by_eb(instr->fail_addr,
+                                              master);
+             instr->fail_addr *= mtd->erasesize;
+         }
+     }
+
      return ret;
  }
  EXPORT_SYMBOL_GPL(mtd_erase);
···
      return 0;
  }

+ static int mtd_read_oob_std(struct mtd_info *mtd, loff_t from,
+                             struct mtd_oob_ops *ops)
+ {
+     struct mtd_info *master = mtd_get_master(mtd);
+     int ret;
+
+     from = mtd_get_master_ofs(mtd, from);
+     if (master->_read_oob)
+         ret = master->_read_oob(master, from, ops);
+     else
+         ret = master->_read(master, from, ops->len, &ops->retlen,
+                             ops->datbuf);
+
+     return ret;
+ }
+
+ static int mtd_write_oob_std(struct mtd_info *mtd, loff_t to,
+                              struct mtd_oob_ops *ops)
+ {
+     struct mtd_info *master = mtd_get_master(mtd);
+     int ret;
+
+     to = mtd_get_master_ofs(mtd, to);
+     if (master->_write_oob)
+         ret = master->_write_oob(master, to, ops);
+     else
+         ret = master->_write(master, to, ops->len, &ops->retlen,
+                              ops->datbuf);
+
+     return ret;
+ }
+
+ static int mtd_io_emulated_slc(struct mtd_info *mtd, loff_t start, bool read,
+                                struct mtd_oob_ops *ops)
+ {
+     struct mtd_info *master = mtd_get_master(mtd);
+     int ngroups = mtd_pairing_groups(master);
+     int npairs = mtd_wunit_per_eb(master) / ngroups;
+     struct mtd_oob_ops adjops = *ops;
+     unsigned int wunit, oobavail;
+     struct mtd_pairing_info info;
+     int max_bitflips = 0;
+     u32 ebofs, pageofs;
+     loff_t base, pos;
+
+     ebofs = mtd_mod_by_eb(start, mtd);
+     base = (loff_t)mtd_div_by_eb(start, mtd) * master->erasesize;
+     info.group = 0;
+     info.pair = mtd_div_by_ws(ebofs, mtd);
+     pageofs = mtd_mod_by_ws(ebofs, mtd);
+     oobavail = mtd_oobavail(mtd, ops);
+
+     while (ops->retlen < ops->len || ops->oobretlen < ops->ooblen) {
+         int ret;
+
+         if (info.pair >= npairs) {
+             info.pair = 0;
+             base += master->erasesize;
+         }
+
+         wunit = mtd_pairing_info_to_wunit(master, &info);
+         pos = mtd_wunit_to_offset(mtd, base, wunit);
+
+         adjops.len = ops->len - ops->retlen;
+         if (adjops.len > mtd->writesize - pageofs)
+             adjops.len = mtd->writesize - pageofs;
+
+         adjops.ooblen = ops->ooblen - ops->oobretlen;
+         if (adjops.ooblen > oobavail - adjops.ooboffs)
+             adjops.ooblen = oobavail - adjops.ooboffs;
+
+         if (read) {
+             ret = mtd_read_oob_std(mtd, pos + pageofs, &adjops);
+             if (ret > 0)
+                 max_bitflips = max(max_bitflips, ret);
+         } else {
+             ret = mtd_write_oob_std(mtd, pos + pageofs, &adjops);
+         }
+
+         if (ret < 0)
+             return ret;
+
+         max_bitflips = max(max_bitflips, ret);
+         ops->retlen += adjops.retlen;
+         ops->oobretlen += adjops.oobretlen;
+         adjops.datbuf += adjops.retlen;
+         adjops.oobbuf += adjops.oobretlen;
+         adjops.ooboffs = 0;
+         pageofs = 0;
+         info.pair++;
+     }
+
+     return max_bitflips;
+ }
+
  int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
  {
      struct mtd_info *master = mtd_get_master(mtd);
···
      if (!master->_read_oob && (!master->_read || ops->oobbuf))
          return -EOPNOTSUPP;

-     from = mtd_get_master_ofs(mtd, from);
-     if (master->_read_oob)
-         ret_code = master->_read_oob(master, from, ops);
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
+         ret_code = mtd_io_emulated_slc(mtd, from, true, ops);
      else
-         ret_code = master->_read(master, from, ops->len, &ops->retlen,
-                                  ops->datbuf);
+         ret_code = mtd_read_oob_std(mtd, from, ops);

      mtd_update_ecc_stats(mtd, master, &old_stats);
···
      if (!master->_write_oob && (!master->_write || ops->oobbuf))
          return -EOPNOTSUPP;

-     to = mtd_get_master_ofs(mtd, to);
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
+         return mtd_io_emulated_slc(mtd, to, false, ops);

-     if (master->_write_oob)
-         return master->_write_oob(master, to, ops);
-     else
-         return master->_write(master, to, ops->len, &ops->retlen,
-                               ops->datbuf);
+     return mtd_write_oob_std(mtd, to, ops);
  }
  EXPORT_SYMBOL_GPL(mtd_write_oob);
···
   * @start: first ECC byte to set
   * @nbytes: number of ECC bytes to set
   *
-  * Works like mtd_ooblayout_get_bytes(), except it acts on free bytes.
+  * Works like mtd_ooblayout_set_bytes(), except it acts on free bytes.
   *
   * Returns zero on success, a negative error code otherwise.
   */
···
          return -EINVAL;
      if (!len)
          return 0;
+
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
+         ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
+         len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
+     }
+
      return master->_lock(master, mtd_get_master_ofs(mtd, ofs), len);
  }
  EXPORT_SYMBOL_GPL(mtd_lock);
···
          return -EINVAL;
      if (!len)
          return 0;
+
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
+         ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
+         len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
+     }
+
      return master->_unlock(master, mtd_get_master_ofs(mtd, ofs), len);
  }
  EXPORT_SYMBOL_GPL(mtd_unlock);
···
          return -EINVAL;
      if (!len)
          return 0;
+
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
+         ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
+         len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
+     }
+
      return master->_is_locked(master, mtd_get_master_ofs(mtd, ofs), len);
  }
  EXPORT_SYMBOL_GPL(mtd_is_locked);
···
          return -EINVAL;
      if (!master->_block_isreserved)
          return 0;
+
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
+         ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
+
      return master->_block_isreserved(master, mtd_get_master_ofs(mtd, ofs));
  }
  EXPORT_SYMBOL_GPL(mtd_block_isreserved);
···
          return -EINVAL;
      if (!master->_block_isbad)
          return 0;
+
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
+         ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
+
      return master->_block_isbad(master, mtd_get_master_ofs(mtd, ofs));
  }
  EXPORT_SYMBOL_GPL(mtd_block_isbad);
···
          return -EINVAL;
      if (!(mtd->flags & MTD_WRITEABLE))
          return -EROFS;
+
+     if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
+         ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;

      ret = master->_block_markbad(master, mtd_get_master_ofs(mtd, ofs));
      if (ret)
+31 -23
drivers/mtd/mtdpart.c
···
                    const struct mtd_partition *part,
                    int partno, uint64_t cur_offset)
  {
-     int wr_alignment = (parent->flags & MTD_NO_ERASE) ? parent->writesize :
-                        parent->erasesize;
-     struct mtd_info *child, *master = mtd_get_master(parent);
+     struct mtd_info *master = mtd_get_master(parent);
+     int wr_alignment = (parent->flags & MTD_NO_ERASE) ?
+                        master->writesize : master->erasesize;
+     u64 parent_size = mtd_is_partition(parent) ?
+                       parent->part.size : parent->size;
+     struct mtd_info *child;
      u32 remainder;
      char *name;
      u64 tmp;
···
      /* set up the MTD object for this partition */
      child->type = parent->type;
      child->part.flags = parent->flags & ~part->mask_flags;
+     child->part.flags |= part->add_flags;
      child->flags = child->part.flags;
-     child->size = part->size;
+     child->part.size = part->size;
      child->writesize = parent->writesize;
      child->writebufsize = parent->writebufsize;
      child->oobsize = parent->oobsize;
···
      }
      if (child->part.offset == MTDPART_OFS_RETAIN) {
          child->part.offset = cur_offset;
-         if (parent->size - child->part.offset >= child->size) {
-             child->size = parent->size - child->part.offset -
-                           child->size;
+         if (parent_size - child->part.offset >= child->part.size) {
+             child->part.size = parent_size - child->part.offset -
+                                child->part.size;
          } else {
              printk(KERN_ERR "mtd partition \"%s\" doesn't have enough space: %#llx < %#llx, disabled\n",
-                    part->name, parent->size - child->part.offset,
-                    child->size);
+                    part->name, parent_size - child->part.offset,
+                    child->part.size);
              /* register to preserve ordering */
              goto out_register;
          }
      }
-     if (child->size == MTDPART_SIZ_FULL)
-         child->size = parent->size - child->part.offset;
+     if (child->part.size == MTDPART_SIZ_FULL)
+         child->part.size = parent_size - child->part.offset;

      printk(KERN_NOTICE "0x%012llx-0x%012llx : \"%s\"\n",
-            child->part.offset, child->part.offset + child->size,
+            child->part.offset, child->part.offset + child->part.size,
             child->name);

      /* let's do some sanity checks */
-     if (child->part.offset >= parent->size) {
+     if (child->part.offset >= parent_size) {
          /* let's register it anyway to preserve ordering */
          child->part.offset = 0;
-         child->size = 0;
+         child->part.size = 0;

          /* Initialize ->erasesize to make add_mtd_device() happy. */
          child->erasesize = parent->erasesize;
···
                 part->name);
          goto out_register;
      }
-     if (child->part.offset + child->size > parent->size) {
-         child->size = parent->size - child->part.offset;
+     if (child->part.offset + child->part.size > parent->size) {
+         child->part.size = parent_size - child->part.offset;
          printk(KERN_WARNING"mtd: partition \"%s\" extends beyond the end of device \"%s\" -- size truncated to %#llx\n",
-                part->name, parent->name, child->size);
+                part->name, parent->name, child->part.size);
      }
+
      if (parent->numeraseregions > 1) {
          /* Deal with variable erase size stuff */
          int i, max = parent->numeraseregions;
-         u64 end = child->part.offset + child->size;
+         u64 end = child->part.offset + child->part.size;
          struct mtd_erase_region_info *regions = parent->eraseregions;

          /* Find the first erase regions which is part of this
···
          BUG_ON(child->erasesize == 0);
      } else {
          /* Single erase size */
-         child->erasesize = parent->erasesize;
+         child->erasesize = master->erasesize;
      }

      /*
···
                 part->name);
      }

-     tmp = mtd_get_master_ofs(child, 0) + child->size;
+     tmp = mtd_get_master_ofs(child, 0) + child->part.size;
      remainder = do_div(tmp, wr_alignment);
      if ((child->flags & MTD_WRITEABLE) && remainder) {
          child->flags &= ~MTD_WRITEABLE;
···
                 part->name);
      }

+     child->size = child->part.size;
      child->ecc_step_size = parent->ecc_step_size;
      child->ecc_strength = parent->ecc_strength;
      child->bitflip_threshold = parent->bitflip_threshold;
···
      if (master->_block_isbad) {
          uint64_t offs = 0;

-         while (offs < child->size) {
+         while (offs < child->part.size) {
              if (mtd_block_isreserved(child, offs))
                  child->ecc_stats.bbtblocks++;
              else if (mtd_block_isbad(child, offs))
···
                      long long offset, long long length)
  {
      struct mtd_info *master = mtd_get_master(parent);
+     u64 parent_size = mtd_is_partition(parent) ?
+                       parent->part.size : parent->size;
      struct mtd_partition part;
      struct mtd_info *child;
      int ret = 0;
···
          return -EINVAL;

      if (length == MTDPART_SIZ_FULL)
-         length = parent->size - offset;
+         length = parent_size - offset;

      if (length <= 0)
          return -EINVAL;
···
          /* Look for subpartitions */
          parse_mtd_partitions(child, parts[i].types, NULL);

-         cur_offset = child->part.offset + child->size;
+         cur_offset = child->part.offset + child->part.size;
      }

      return 0;
+8 -4
drivers/mtd/nand/raw/Kconfig
···
        Please check the actual NAND chip connected and its support
        by the MLC NAND controller.

- config MTD_NAND_CM_X270
-     tristate "CM-X270 modules NAND controller"
-     depends on MACH_ARMCORE
-
  config MTD_NAND_PASEMI
      tristate "PA Semi PWRficient NAND controller"
      depends on PPC_PASEMI
···
      help
        Enable the driver for NAND flash on platforms using a Cadence NAND
        controller.
+
+ config MTD_NAND_ARASAN
+     tristate "Support for Arasan NAND flash controller"
+     depends on HAS_IOMEM && HAS_DMA
+     select BCH
+     help
+       Enables the driver for the Arasan NAND flash controller on
+       Zynq Ultrascale+ MPSoC.

  comment "Misc"

+1 -1
drivers/mtd/nand/raw/Makefile
···
  omap2_nand-objs := omap2.o
  obj-$(CONFIG_MTD_NAND_OMAP2) += omap2_nand.o
  obj-$(CONFIG_MTD_NAND_OMAP_BCH_BUILD) += omap_elm.o
- obj-$(CONFIG_MTD_NAND_CM_X270) += cmx270_nand.o
  obj-$(CONFIG_MTD_NAND_MARVELL) += marvell_nand.o
  obj-$(CONFIG_MTD_NAND_TMIO) += tmio_nand.o
  obj-$(CONFIG_MTD_NAND_PLATFORM) += plat_nand.o
···
  obj-$(CONFIG_MTD_NAND_STM32_FMC2) += stm32_fmc2_nand.o
  obj-$(CONFIG_MTD_NAND_MESON) += meson_nand.o
  obj-$(CONFIG_MTD_NAND_CADENCE) += cadence-nand-controller.o
+ obj-$(CONFIG_MTD_NAND_ARASAN) += arasan-nand-controller.o

  nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o
  nand-objs += nand_onfi.o
+4 -1
drivers/mtd/nand/raw/ams-delta.c
···
  {
      struct gpio_nand *priv = platform_get_drvdata(pdev);
      struct mtd_info *mtd = nand_to_mtd(&priv->nand_chip);
+     int ret;

      /* Apply write protection */
      gpiod_set_value(priv->gpiod_nwp, 1);

      /* Unregister device */
-     nand_release(mtd_to_nand(mtd));
+     ret = mtd_device_unregister(mtd);
+     WARN_ON(ret);
+     nand_cleanup(mtd_to_nand(mtd));

      return 0;
  }
+1297
drivers/mtd/nand/raw/arasan-nand-controller.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Arasan NAND Flash Controller Driver 4 + * 5 + * Copyright (C) 2014 - 2020 Xilinx, Inc. 6 + * Author: 7 + * Miquel Raynal <miquel.raynal@bootlin.com> 8 + * Original work (fully rewritten): 9 + * Punnaiah Choudary Kalluri <punnaia@xilinx.com> 10 + * Naga Sureshkumar Relli <nagasure@xilinx.com> 11 + */ 12 + 13 + #include <linux/bch.h> 14 + #include <linux/bitfield.h> 15 + #include <linux/clk.h> 16 + #include <linux/delay.h> 17 + #include <linux/dma-mapping.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/iopoll.h> 20 + #include <linux/module.h> 21 + #include <linux/mtd/mtd.h> 22 + #include <linux/mtd/partitions.h> 23 + #include <linux/mtd/rawnand.h> 24 + #include <linux/of.h> 25 + #include <linux/platform_device.h> 26 + #include <linux/slab.h> 27 + 28 + #define PKT_REG 0x00 29 + #define PKT_SIZE(x) FIELD_PREP(GENMASK(10, 0), (x)) 30 + #define PKT_STEPS(x) FIELD_PREP(GENMASK(23, 12), (x)) 31 + 32 + #define MEM_ADDR1_REG 0x04 33 + 34 + #define MEM_ADDR2_REG 0x08 35 + #define ADDR2_STRENGTH(x) FIELD_PREP(GENMASK(27, 25), (x)) 36 + #define ADDR2_CS(x) FIELD_PREP(GENMASK(31, 30), (x)) 37 + 38 + #define CMD_REG 0x0C 39 + #define CMD_1(x) FIELD_PREP(GENMASK(7, 0), (x)) 40 + #define CMD_2(x) FIELD_PREP(GENMASK(15, 8), (x)) 41 + #define CMD_PAGE_SIZE(x) FIELD_PREP(GENMASK(25, 23), (x)) 42 + #define CMD_DMA_ENABLE BIT(27) 43 + #define CMD_NADDRS(x) FIELD_PREP(GENMASK(30, 28), (x)) 44 + #define CMD_ECC_ENABLE BIT(31) 45 + 46 + #define PROG_REG 0x10 47 + #define PROG_PGRD BIT(0) 48 + #define PROG_ERASE BIT(2) 49 + #define PROG_STATUS BIT(3) 50 + #define PROG_PGPROG BIT(4) 51 + #define PROG_RDID BIT(6) 52 + #define PROG_RDPARAM BIT(7) 53 + #define PROG_RST BIT(8) 54 + #define PROG_GET_FEATURE BIT(9) 55 + #define PROG_SET_FEATURE BIT(10) 56 + 57 + #define INTR_STS_EN_REG 0x14 58 + #define INTR_SIG_EN_REG 0x18 59 + #define INTR_STS_REG 0x1C 60 + #define WRITE_READY BIT(0) 61 + #define READ_READY BIT(1) 62 + #define 
XFER_COMPLETE BIT(2) 63 + #define DMA_BOUNDARY BIT(6) 64 + #define EVENT_MASK GENMASK(7, 0) 65 + 66 + #define READY_STS_REG 0x20 67 + 68 + #define DMA_ADDR0_REG 0x50 69 + #define DMA_ADDR1_REG 0x24 70 + 71 + #define FLASH_STS_REG 0x28 72 + 73 + #define DATA_PORT_REG 0x30 74 + 75 + #define ECC_CONF_REG 0x34 76 + #define ECC_CONF_COL(x) FIELD_PREP(GENMASK(15, 0), (x)) 77 + #define ECC_CONF_LEN(x) FIELD_PREP(GENMASK(26, 16), (x)) 78 + #define ECC_CONF_BCH_EN BIT(27) 79 + 80 + #define ECC_ERR_CNT_REG 0x38 81 + #define GET_PKT_ERR_CNT(x) FIELD_GET(GENMASK(7, 0), (x)) 82 + #define GET_PAGE_ERR_CNT(x) FIELD_GET(GENMASK(16, 8), (x)) 83 + 84 + #define ECC_SP_REG 0x3C 85 + #define ECC_SP_CMD1(x) FIELD_PREP(GENMASK(7, 0), (x)) 86 + #define ECC_SP_CMD2(x) FIELD_PREP(GENMASK(15, 8), (x)) 87 + #define ECC_SP_ADDRS(x) FIELD_PREP(GENMASK(30, 28), (x)) 88 + 89 + #define ECC_1ERR_CNT_REG 0x40 90 + #define ECC_2ERR_CNT_REG 0x44 91 + 92 + #define DATA_INTERFACE_REG 0x6C 93 + #define DIFACE_SDR_MODE(x) FIELD_PREP(GENMASK(2, 0), (x)) 94 + #define DIFACE_DDR_MODE(x) FIELD_PREP(GENMASK(5, 3), (X)) 95 + #define DIFACE_SDR 0 96 + #define DIFACE_NVDDR BIT(9) 97 + 98 + #define ANFC_MAX_CS 2 99 + #define ANFC_DFLT_TIMEOUT_US 1000000 100 + #define ANFC_MAX_CHUNK_SIZE SZ_1M 101 + #define ANFC_MAX_PARAM_SIZE SZ_4K 102 + #define ANFC_MAX_STEPS SZ_2K 103 + #define ANFC_MAX_PKT_SIZE (SZ_2K - 1) 104 + #define ANFC_MAX_ADDR_CYC 5U 105 + #define ANFC_RSVD_ECC_BYTES 21 106 + 107 + #define ANFC_XLNX_SDR_DFLT_CORE_CLK 100000000 108 + #define ANFC_XLNX_SDR_HS_CORE_CLK 80000000 109 + 110 + /** 111 + * struct anfc_op - Defines how to execute an operation 112 + * @pkt_reg: Packet register 113 + * @addr1_reg: Memory address 1 register 114 + * @addr2_reg: Memory address 2 register 115 + * @cmd_reg: Command register 116 + * @prog_reg: Program register 117 + * @steps: Number of "packets" to read/write 118 + * @rdy_timeout_ms: Timeout for waits on Ready/Busy pin 119 + * @len: Data transfer length 120 + * @read: 
Data transfer direction from the controller point of view 121 + */ 122 + struct anfc_op { 123 + u32 pkt_reg; 124 + u32 addr1_reg; 125 + u32 addr2_reg; 126 + u32 cmd_reg; 127 + u32 prog_reg; 128 + int steps; 129 + unsigned int rdy_timeout_ms; 130 + unsigned int len; 131 + bool read; 132 + u8 *buf; 133 + }; 134 + 135 + /** 136 + * struct anand - Defines the NAND chip related information 137 + * @node: Used to store NAND chips into a list 138 + * @chip: NAND chip information structure 139 + * @cs: Chip select line 140 + * @rb: Ready-busy line 141 + * @page_sz: Register value of the page_sz field to use 142 + * @clk: Expected clock frequency to use 143 + * @timings: Data interface timing mode to use 144 + * @ecc_conf: Hardware ECC configuration value 145 + * @strength: Register value of the ECC strength 146 + * @raddr_cycles: Row address cycle information 147 + * @caddr_cycles: Column address cycle information 148 + * @ecc_bits: Exact number of ECC bits per syndrome 149 + * @ecc_total: Total number of ECC bytes 150 + * @errloc: Array of errors located with soft BCH 151 + * @hw_ecc: Buffer to store syndromes computed by hardware 152 + * @bch: BCH structure 153 + */ 154 + struct anand { 155 + struct list_head node; 156 + struct nand_chip chip; 157 + unsigned int cs; 158 + unsigned int rb; 159 + unsigned int page_sz; 160 + unsigned long clk; 161 + u32 timings; 162 + u32 ecc_conf; 163 + u32 strength; 164 + u16 raddr_cycles; 165 + u16 caddr_cycles; 166 + unsigned int ecc_bits; 167 + unsigned int ecc_total; 168 + unsigned int *errloc; 169 + u8 *hw_ecc; 170 + struct bch_control *bch; 171 + }; 172 + 173 + /** 174 + * struct arasan_nfc - Defines the Arasan NAND flash controller driver instance 175 + * @dev: Pointer to the device structure 176 + * @base: Remapped register area 177 + * @controller_clk: Pointer to the system clock 178 + * @bus_clk: Pointer to the flash clock 179 + * @controller: Base controller structure 180 + * @chips: List of all NAND chips attached to the 
controller 181 + * @assigned_cs: Bitmask describing already assigned CS lines 182 + * @cur_clk: Current clock rate 183 + */ 184 + struct arasan_nfc { 185 + struct device *dev; 186 + void __iomem *base; 187 + struct clk *controller_clk; 188 + struct clk *bus_clk; 189 + struct nand_controller controller; 190 + struct list_head chips; 191 + unsigned long assigned_cs; 192 + unsigned int cur_clk; 193 + }; 194 + 195 + static struct anand *to_anand(struct nand_chip *nand) 196 + { 197 + return container_of(nand, struct anand, chip); 198 + } 199 + 200 + static struct arasan_nfc *to_anfc(struct nand_controller *ctrl) 201 + { 202 + return container_of(ctrl, struct arasan_nfc, controller); 203 + } 204 + 205 + static int anfc_wait_for_event(struct arasan_nfc *nfc, unsigned int event) 206 + { 207 + u32 val; 208 + int ret; 209 + 210 + ret = readl_relaxed_poll_timeout(nfc->base + INTR_STS_REG, val, 211 + val & event, 0, 212 + ANFC_DFLT_TIMEOUT_US); 213 + if (ret) { 214 + dev_err(nfc->dev, "Timeout waiting for event 0x%x\n", event); 215 + return -ETIMEDOUT; 216 + } 217 + 218 + writel_relaxed(event, nfc->base + INTR_STS_REG); 219 + 220 + return 0; 221 + } 222 + 223 + static int anfc_wait_for_rb(struct arasan_nfc *nfc, struct nand_chip *chip, 224 + unsigned int timeout_ms) 225 + { 226 + struct anand *anand = to_anand(chip); 227 + u32 val; 228 + int ret; 229 + 230 + /* There is no R/B interrupt, we must poll a register */ 231 + ret = readl_relaxed_poll_timeout(nfc->base + READY_STS_REG, val, 232 + val & BIT(anand->rb), 233 + 1, timeout_ms * 1000); 234 + if (ret) { 235 + dev_err(nfc->dev, "Timeout waiting for R/B 0x%x\n", 236 + readl_relaxed(nfc->base + READY_STS_REG)); 237 + return -ETIMEDOUT; 238 + } 239 + 240 + return 0; 241 + } 242 + 243 + static void anfc_trigger_op(struct arasan_nfc *nfc, struct anfc_op *nfc_op) 244 + { 245 + writel_relaxed(nfc_op->pkt_reg, nfc->base + PKT_REG); 246 + writel_relaxed(nfc_op->addr1_reg, nfc->base + MEM_ADDR1_REG); 247 + 
writel_relaxed(nfc_op->addr2_reg, nfc->base + MEM_ADDR2_REG); 248 + writel_relaxed(nfc_op->cmd_reg, nfc->base + CMD_REG); 249 + writel_relaxed(nfc_op->prog_reg, nfc->base + PROG_REG); 250 + } 251 + 252 + static int anfc_pkt_len_config(unsigned int len, unsigned int *steps, 253 + unsigned int *pktsize) 254 + { 255 + unsigned int nb, sz; 256 + 257 + for (nb = 1; nb < ANFC_MAX_STEPS; nb *= 2) { 258 + sz = len / nb; 259 + if (sz <= ANFC_MAX_PKT_SIZE) 260 + break; 261 + } 262 + 263 + if (sz * nb != len) 264 + return -ENOTSUPP; 265 + 266 + if (steps) 267 + *steps = nb; 268 + 269 + if (pktsize) 270 + *pktsize = sz; 271 + 272 + return 0; 273 + } 274 + 275 + /* 276 + * When using the embedded hardware ECC engine, the controller is in charge of 277 + * feeding the engine with, first, the ECC residue present in the data array. 278 + * A typical read operation is: 279 + * 1/ Assert the read operation by sending the relevant command/address cycles 280 + * but targeting the column of the first ECC bytes in the OOB area instead of 281 + * the main data directly. 282 + * 2/ After having read the relevant number of ECC bytes, the controller uses 283 + * the RNDOUT/RNDSTART commands which are set into the "ECC Spare Command 284 + * Register" to move the pointer back to the beginning of the main data. 285 + * 3/ It will read the content of the main area for a given size (pktsize) and 286 + * will feed the ECC engine with this buffer again. 287 + * 4/ The ECC engine derives the ECC bytes for the given data and compares them 288 + * with the ones already received. It eventually triggers status flags and 289 + * then sets the "Buffer Read Ready" flag. 290 + * 5/ The corrected data is then available for reading from the data port 291 + * register. 292 + * 293 + * The hardware BCH ECC engine is known to be inconsistent in BCH mode and never 294 + * reports uncorrectable errors. Because of this bug, we have to use the 295 + * software BCH implementation in the read path.
296 + */ 297 + static int anfc_read_page_hw_ecc(struct nand_chip *chip, u8 *buf, 298 + int oob_required, int page) 299 + { 300 + struct arasan_nfc *nfc = to_anfc(chip->controller); 301 + struct mtd_info *mtd = nand_to_mtd(chip); 302 + struct anand *anand = to_anand(chip); 303 + unsigned int len = mtd->writesize + (oob_required ? mtd->oobsize : 0); 304 + unsigned int max_bitflips = 0; 305 + dma_addr_t dma_addr; 306 + int step, ret; 307 + struct anfc_op nfc_op = { 308 + .pkt_reg = 309 + PKT_SIZE(chip->ecc.size) | 310 + PKT_STEPS(chip->ecc.steps), 311 + .addr1_reg = 312 + (page & 0xFF) << (8 * (anand->caddr_cycles)) | 313 + (((page >> 8) & 0xFF) << (8 * (1 + anand->caddr_cycles))), 314 + .addr2_reg = 315 + ((page >> 16) & 0xFF) | 316 + ADDR2_STRENGTH(anand->strength) | 317 + ADDR2_CS(anand->cs), 318 + .cmd_reg = 319 + CMD_1(NAND_CMD_READ0) | 320 + CMD_2(NAND_CMD_READSTART) | 321 + CMD_PAGE_SIZE(anand->page_sz) | 322 + CMD_DMA_ENABLE | 323 + CMD_NADDRS(anand->caddr_cycles + 324 + anand->raddr_cycles), 325 + .prog_reg = PROG_PGRD, 326 + }; 327 + 328 + dma_addr = dma_map_single(nfc->dev, (void *)buf, len, DMA_FROM_DEVICE); 329 + if (dma_mapping_error(nfc->dev, dma_addr)) { 330 + dev_err(nfc->dev, "Buffer mapping error"); 331 + return -EIO; 332 + } 333 + 334 + writel_relaxed(lower_32_bits(dma_addr), nfc->base + DMA_ADDR0_REG); 335 + writel_relaxed(upper_32_bits(dma_addr), nfc->base + DMA_ADDR1_REG); 336 + 337 + anfc_trigger_op(nfc, &nfc_op); 338 + 339 + ret = anfc_wait_for_event(nfc, XFER_COMPLETE); 340 + dma_unmap_single(nfc->dev, dma_addr, len, DMA_FROM_DEVICE); 341 + if (ret) { 342 + dev_err(nfc->dev, "Error reading page %d\n", page); 343 + return ret; 344 + } 345 + 346 + /* Store the raw OOB bytes as well */ 347 + ret = nand_change_read_column_op(chip, mtd->writesize, chip->oob_poi, 348 + mtd->oobsize, 0); 349 + if (ret) 350 + return ret; 351 + 352 + /* 353 + * For each step, compute by software the BCH syndrome over the raw data.
354 + * Compare the number of errors found by software with the feedback from 355 + * the hardware engine. 356 + */ 357 + for (step = 0; step < chip->ecc.steps; step++) { 358 + u8 *raw_buf = &buf[step * chip->ecc.size]; 359 + unsigned int bit, byte; 360 + int bf, i; 361 + 362 + /* Extract the syndrome; it is not necessarily aligned */ 363 + memset(anand->hw_ecc, 0, chip->ecc.bytes); 364 + nand_extract_bits(anand->hw_ecc, 0, 365 + &chip->oob_poi[mtd->oobsize - anand->ecc_total], 366 + anand->ecc_bits * step, anand->ecc_bits); 367 + 368 + bf = bch_decode(anand->bch, raw_buf, chip->ecc.size, 369 + anand->hw_ecc, NULL, NULL, anand->errloc); 370 + if (!bf) { 371 + continue; 372 + } else if (bf > 0) { 373 + for (i = 0; i < bf; i++) { 374 + /* Only correct the data, not the syndrome */ 375 + if (anand->errloc[i] < (chip->ecc.size * 8)) { 376 + bit = BIT(anand->errloc[i] & 7); 377 + byte = anand->errloc[i] >> 3; 378 + raw_buf[byte] ^= bit; 379 + } 380 + } 381 + 382 + mtd->ecc_stats.corrected += bf; 383 + max_bitflips = max_t(unsigned int, max_bitflips, bf); 384 + 385 + continue; 386 + } 387 + 388 + bf = nand_check_erased_ecc_chunk(raw_buf, chip->ecc.size, 389 + NULL, 0, NULL, 0, 390 + chip->ecc.strength); 391 + if (bf > 0) { 392 + mtd->ecc_stats.corrected += bf; 393 + max_bitflips = max_t(unsigned int, max_bitflips, bf); 394 + memset(raw_buf, 0xFF, chip->ecc.size); 395 + } else if (bf < 0) { 396 + mtd->ecc_stats.failed++; 397 + } 398 + } 399 + 400 + return max_bitflips; 401 + } 402 + 403 + static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf, 404 + int oob_required, int page) 405 + { 406 + struct anand *anand = to_anand(chip); 407 + struct arasan_nfc *nfc = to_anfc(chip->controller); 408 + struct mtd_info *mtd = nand_to_mtd(chip); 409 + unsigned int len = mtd->writesize + (oob_required ?
mtd->oobsize : 0); 410 + dma_addr_t dma_addr; 411 + int ret; 412 + struct anfc_op nfc_op = { 413 + .pkt_reg = 414 + PKT_SIZE(chip->ecc.size) | 415 + PKT_STEPS(chip->ecc.steps), 416 + .addr1_reg = 417 + (page & 0xFF) << (8 * (anand->caddr_cycles)) | 418 + (((page >> 8) & 0xFF) << (8 * (1 + anand->caddr_cycles))), 419 + .addr2_reg = 420 + ((page >> 16) & 0xFF) | 421 + ADDR2_STRENGTH(anand->strength) | 422 + ADDR2_CS(anand->cs), 423 + .cmd_reg = 424 + CMD_1(NAND_CMD_SEQIN) | 425 + CMD_2(NAND_CMD_PAGEPROG) | 426 + CMD_PAGE_SIZE(anand->page_sz) | 427 + CMD_DMA_ENABLE | 428 + CMD_NADDRS(anand->caddr_cycles + 429 + anand->raddr_cycles) | 430 + CMD_ECC_ENABLE, 431 + .prog_reg = PROG_PGPROG, 432 + }; 433 + 434 + writel_relaxed(anand->ecc_conf, nfc->base + ECC_CONF_REG); 435 + writel_relaxed(ECC_SP_CMD1(NAND_CMD_RNDIN) | 436 + ECC_SP_ADDRS(anand->caddr_cycles), 437 + nfc->base + ECC_SP_REG); 438 + 439 + dma_addr = dma_map_single(nfc->dev, (void *)buf, len, DMA_TO_DEVICE); 440 + if (dma_mapping_error(nfc->dev, dma_addr)) { 441 + dev_err(nfc->dev, "Buffer mapping error"); 442 + return -EIO; 443 + } 444 + 445 + writel_relaxed(lower_32_bits(dma_addr), nfc->base + DMA_ADDR0_REG); 446 + writel_relaxed(upper_32_bits(dma_addr), nfc->base + DMA_ADDR1_REG); 447 + 448 + anfc_trigger_op(nfc, &nfc_op); 449 + ret = anfc_wait_for_event(nfc, XFER_COMPLETE); 450 + dma_unmap_single(nfc->dev, dma_addr, len, DMA_TO_DEVICE); 451 + if (ret) { 452 + dev_err(nfc->dev, "Error writing page %d\n", page); 453 + return ret; 454 + } 455 + 456 + /* Spare data is not protected */ 457 + if (oob_required) 458 + ret = nand_write_oob_std(chip, page); 459 + 460 + return ret; 461 + } 462 + 463 + /* NAND framework ->exec_op() hooks and related helpers */ 464 + static int anfc_parse_instructions(struct nand_chip *chip, 465 + const struct nand_subop *subop, 466 + struct anfc_op *nfc_op) 467 + { 468 + struct anand *anand = to_anand(chip); 469 + const struct nand_op_instr *instr = NULL; 470 + bool first_cmd = true; 
471 + unsigned int op_id; 472 + int ret, i; 473 + 474 + memset(nfc_op, 0, sizeof(*nfc_op)); 475 + nfc_op->addr2_reg = ADDR2_CS(anand->cs); 476 + nfc_op->cmd_reg = CMD_PAGE_SIZE(anand->page_sz); 477 + 478 + for (op_id = 0; op_id < subop->ninstrs; op_id++) { 479 + unsigned int offset, naddrs, pktsize; 480 + const u8 *addrs; 481 + u8 *buf; 482 + 483 + instr = &subop->instrs[op_id]; 484 + 485 + switch (instr->type) { 486 + case NAND_OP_CMD_INSTR: 487 + if (first_cmd) 488 + nfc_op->cmd_reg |= CMD_1(instr->ctx.cmd.opcode); 489 + else 490 + nfc_op->cmd_reg |= CMD_2(instr->ctx.cmd.opcode); 491 + 492 + first_cmd = false; 493 + break; 494 + 495 + case NAND_OP_ADDR_INSTR: 496 + offset = nand_subop_get_addr_start_off(subop, op_id); 497 + naddrs = nand_subop_get_num_addr_cyc(subop, op_id); 498 + addrs = &instr->ctx.addr.addrs[offset]; 499 + nfc_op->cmd_reg |= CMD_NADDRS(naddrs); 500 + 501 + for (i = 0; i < min(ANFC_MAX_ADDR_CYC, naddrs); i++) { 502 + if (i < 4) 503 + nfc_op->addr1_reg |= (u32)addrs[i] << i * 8; 504 + else 505 + nfc_op->addr2_reg |= addrs[i]; 506 + } 507 + 508 + break; 509 + case NAND_OP_DATA_IN_INSTR: 510 + nfc_op->read = true; 511 + fallthrough; 512 + case NAND_OP_DATA_OUT_INSTR: 513 + offset = nand_subop_get_data_start_off(subop, op_id); 514 + buf = instr->ctx.data.buf.in; 515 + nfc_op->buf = &buf[offset]; 516 + nfc_op->len = nand_subop_get_data_len(subop, op_id); 517 + ret = anfc_pkt_len_config(nfc_op->len, &nfc_op->steps, 518 + &pktsize); 519 + if (ret) 520 + return ret; 521 + 522 + /* 523 + * Number of DATA cycles must be aligned on 4, this 524 + * means the controller might read/write more than 525 + * requested. This is harmless most of the time as extra 526 + * DATA are discarded in the write path and read pointer 527 + * adjusted in the read path. 
528 + * 529 + * FIXME: The core should mark operations where 530 + * reading/writing more is allowed so the exec_op() 531 + * implementation can take the right decision when the 532 + * alignment constraint is not met: adjust the number of 533 + * DATA cycles when it's allowed, reject the operation 534 + * otherwise. 535 + */ 536 + nfc_op->pkt_reg |= PKT_SIZE(round_up(pktsize, 4)) | 537 + PKT_STEPS(nfc_op->steps); 538 + break; 539 + case NAND_OP_WAITRDY_INSTR: 540 + nfc_op->rdy_timeout_ms = instr->ctx.waitrdy.timeout_ms; 541 + break; 542 + } 543 + } 544 + 545 + return 0; 546 + } 547 + 548 + static int anfc_rw_pio_op(struct arasan_nfc *nfc, struct anfc_op *nfc_op) 549 + { 550 + unsigned int dwords = (nfc_op->len / 4) / nfc_op->steps; 551 + unsigned int last_len = nfc_op->len % 4; 552 + unsigned int offset, dir; 553 + u8 *buf = nfc_op->buf; 554 + int ret, i; 555 + 556 + for (i = 0; i < nfc_op->steps; i++) { 557 + dir = nfc_op->read ? READ_READY : WRITE_READY; 558 + ret = anfc_wait_for_event(nfc, dir); 559 + if (ret) { 560 + dev_err(nfc->dev, "PIO %s ready signal not received\n", 561 + nfc_op->read ? 
"Read" : "Write"); 562 + return ret; 563 + } 564 + 565 + offset = i * (dwords * 4); 566 + if (nfc_op->read) 567 + ioread32_rep(nfc->base + DATA_PORT_REG, &buf[offset], 568 + dwords); 569 + else 570 + iowrite32_rep(nfc->base + DATA_PORT_REG, &buf[offset], 571 + dwords); 572 + } 573 + 574 + if (last_len) { 575 + u32 remainder; 576 + 577 + offset = nfc_op->len - last_len; 578 + 579 + if (nfc_op->read) { 580 + remainder = readl_relaxed(nfc->base + DATA_PORT_REG); 581 + memcpy(&buf[offset], &remainder, last_len); 582 + } else { 583 + memcpy(&remainder, &buf[offset], last_len); 584 + writel_relaxed(remainder, nfc->base + DATA_PORT_REG); 585 + } 586 + } 587 + 588 + return anfc_wait_for_event(nfc, XFER_COMPLETE); 589 + } 590 + 591 + static int anfc_misc_data_type_exec(struct nand_chip *chip, 592 + const struct nand_subop *subop, 593 + u32 prog_reg) 594 + { 595 + struct arasan_nfc *nfc = to_anfc(chip->controller); 596 + struct anfc_op nfc_op = {}; 597 + int ret; 598 + 599 + ret = anfc_parse_instructions(chip, subop, &nfc_op); 600 + if (ret) 601 + return ret; 602 + 603 + nfc_op.prog_reg = prog_reg; 604 + anfc_trigger_op(nfc, &nfc_op); 605 + 606 + if (nfc_op.rdy_timeout_ms) { 607 + ret = anfc_wait_for_rb(nfc, chip, nfc_op.rdy_timeout_ms); 608 + if (ret) 609 + return ret; 610 + } 611 + 612 + return anfc_rw_pio_op(nfc, &nfc_op); 613 + } 614 + 615 + static int anfc_param_read_type_exec(struct nand_chip *chip, 616 + const struct nand_subop *subop) 617 + { 618 + return anfc_misc_data_type_exec(chip, subop, PROG_RDPARAM); 619 + } 620 + 621 + static int anfc_data_read_type_exec(struct nand_chip *chip, 622 + const struct nand_subop *subop) 623 + { 624 + return anfc_misc_data_type_exec(chip, subop, PROG_PGRD); 625 + } 626 + 627 + static int anfc_param_write_type_exec(struct nand_chip *chip, 628 + const struct nand_subop *subop) 629 + { 630 + return anfc_misc_data_type_exec(chip, subop, PROG_SET_FEATURE); 631 + } 632 + 633 + static int anfc_data_write_type_exec(struct nand_chip *chip, 
634 + const struct nand_subop *subop) 635 + { 636 + return anfc_misc_data_type_exec(chip, subop, PROG_PGPROG); 637 + } 638 + 639 + static int anfc_misc_zerolen_type_exec(struct nand_chip *chip, 640 + const struct nand_subop *subop, 641 + u32 prog_reg) 642 + { 643 + struct arasan_nfc *nfc = to_anfc(chip->controller); 644 + struct anfc_op nfc_op = {}; 645 + int ret; 646 + 647 + ret = anfc_parse_instructions(chip, subop, &nfc_op); 648 + if (ret) 649 + return ret; 650 + 651 + nfc_op.prog_reg = prog_reg; 652 + anfc_trigger_op(nfc, &nfc_op); 653 + 654 + ret = anfc_wait_for_event(nfc, XFER_COMPLETE); 655 + if (ret) 656 + return ret; 657 + 658 + if (nfc_op.rdy_timeout_ms) 659 + ret = anfc_wait_for_rb(nfc, chip, nfc_op.rdy_timeout_ms); 660 + 661 + return ret; 662 + } 663 + 664 + static int anfc_status_type_exec(struct nand_chip *chip, 665 + const struct nand_subop *subop) 666 + { 667 + struct arasan_nfc *nfc = to_anfc(chip->controller); 668 + u32 tmp; 669 + int ret; 670 + 671 + /* See anfc_check_op() for details about this constraint */ 672 + if (subop->instrs[0].ctx.cmd.opcode != NAND_CMD_STATUS) 673 + return -ENOTSUPP; 674 + 675 + ret = anfc_misc_zerolen_type_exec(chip, subop, PROG_STATUS); 676 + if (ret) 677 + return ret; 678 + 679 + tmp = readl_relaxed(nfc->base + FLASH_STS_REG); 680 + memcpy(subop->instrs[1].ctx.data.buf.in, &tmp, 1); 681 + 682 + return 0; 683 + } 684 + 685 + static int anfc_reset_type_exec(struct nand_chip *chip, 686 + const struct nand_subop *subop) 687 + { 688 + return anfc_misc_zerolen_type_exec(chip, subop, PROG_RST); 689 + } 690 + 691 + static int anfc_erase_type_exec(struct nand_chip *chip, 692 + const struct nand_subop *subop) 693 + { 694 + return anfc_misc_zerolen_type_exec(chip, subop, PROG_ERASE); 695 + } 696 + 697 + static int anfc_wait_type_exec(struct nand_chip *chip, 698 + const struct nand_subop *subop) 699 + { 700 + struct arasan_nfc *nfc = to_anfc(chip->controller); 701 + struct anfc_op nfc_op = {}; 702 + int ret; 703 + 704 + ret = 
anfc_parse_instructions(chip, subop, &nfc_op); 705 + if (ret) 706 + return ret; 707 + 708 + return anfc_wait_for_rb(nfc, chip, nfc_op.rdy_timeout_ms); 709 + } 710 + 711 + static const struct nand_op_parser anfc_op_parser = NAND_OP_PARSER( 712 + NAND_OP_PARSER_PATTERN( 713 + anfc_param_read_type_exec, 714 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 715 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, ANFC_MAX_ADDR_CYC), 716 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true), 717 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, ANFC_MAX_CHUNK_SIZE)), 718 + NAND_OP_PARSER_PATTERN( 719 + anfc_param_write_type_exec, 720 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 721 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, ANFC_MAX_ADDR_CYC), 722 + NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, ANFC_MAX_PARAM_SIZE)), 723 + NAND_OP_PARSER_PATTERN( 724 + anfc_data_read_type_exec, 725 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 726 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, ANFC_MAX_ADDR_CYC), 727 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 728 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true), 729 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(true, ANFC_MAX_CHUNK_SIZE)), 730 + NAND_OP_PARSER_PATTERN( 731 + anfc_data_write_type_exec, 732 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 733 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, ANFC_MAX_ADDR_CYC), 734 + NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, ANFC_MAX_CHUNK_SIZE), 735 + NAND_OP_PARSER_PAT_CMD_ELEM(false)), 736 + NAND_OP_PARSER_PATTERN( 737 + anfc_reset_type_exec, 738 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 739 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 740 + NAND_OP_PARSER_PATTERN( 741 + anfc_erase_type_exec, 742 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 743 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, ANFC_MAX_ADDR_CYC), 744 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 745 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 746 + NAND_OP_PARSER_PATTERN( 747 + anfc_status_type_exec, 748 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 749 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, ANFC_MAX_CHUNK_SIZE)), 750 + NAND_OP_PARSER_PATTERN( 751 + anfc_wait_type_exec, 752 + 
NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 753 + ); 754 + 755 + static int anfc_select_target(struct nand_chip *chip, int target) 756 + { 757 + struct anand *anand = to_anand(chip); 758 + struct arasan_nfc *nfc = to_anfc(chip->controller); 759 + int ret; 760 + 761 + /* Update the controller timings and the potential ECC configuration */ 762 + writel_relaxed(anand->timings, nfc->base + DATA_INTERFACE_REG); 763 + 764 + /* Update clock frequency */ 765 + if (nfc->cur_clk != anand->clk) { 766 + clk_disable_unprepare(nfc->controller_clk); 767 + ret = clk_set_rate(nfc->controller_clk, anand->clk); 768 + if (ret) { 769 + dev_err(nfc->dev, "Failed to change clock rate\n"); 770 + return ret; 771 + } 772 + 773 + ret = clk_prepare_enable(nfc->controller_clk); 774 + if (ret) { 775 + dev_err(nfc->dev, 776 + "Failed to re-enable the controller clock\n"); 777 + return ret; 778 + } 779 + 780 + nfc->cur_clk = anand->clk; 781 + } 782 + 783 + return 0; 784 + } 785 + 786 + static int anfc_check_op(struct nand_chip *chip, 787 + const struct nand_operation *op) 788 + { 789 + const struct nand_op_instr *instr; 790 + int op_id; 791 + 792 + /* 793 + * The controller abstracts all the NAND operations and does not support 794 + * data-only operations. 795 + * 796 + * TODO: The nand_op_parser framework should be extended to 797 + * support custom checks on DATA instructions.
798 + */ 799 + for (op_id = 0; op_id < op->ninstrs; op_id++) { 800 + instr = &op->instrs[op_id]; 801 + 802 + switch (instr->type) { 803 + case NAND_OP_ADDR_INSTR: 804 + if (instr->ctx.addr.naddrs > ANFC_MAX_ADDR_CYC) 805 + return -ENOTSUPP; 806 + 807 + break; 808 + case NAND_OP_DATA_IN_INSTR: 809 + case NAND_OP_DATA_OUT_INSTR: 810 + if (instr->ctx.data.len > ANFC_MAX_CHUNK_SIZE) 811 + return -ENOTSUPP; 812 + 813 + if (anfc_pkt_len_config(instr->ctx.data.len, NULL, NULL)) 814 + return -ENOTSUPP; 815 + 816 + break; 817 + default: 818 + break; 819 + } 820 + } 821 + 822 + /* 823 + * The controller does not allow proceeding with a CMD+DATA_IN cycle 824 + * manually on the bus by reading data from the data register. Instead, 825 + * the controller abstracts a status read operation with its own status 826 + * register after ordering a read status operation. Hence, we cannot 827 + * support any CMD+DATA_IN operation other than a READ STATUS. 828 + * 829 + * TODO: The nand_op_parser() framework should be extended to describe 830 + * fixed patterns instead of open-coding this check here.
831 + */ 832 + if (op->ninstrs == 2 && 833 + op->instrs[0].type == NAND_OP_CMD_INSTR && 834 + op->instrs[0].ctx.cmd.opcode != NAND_CMD_STATUS && 835 + op->instrs[1].type == NAND_OP_DATA_IN_INSTR) 836 + return -ENOTSUPP; 837 + 838 + return nand_op_parser_exec_op(chip, &anfc_op_parser, op, true); 839 + } 840 + 841 + static int anfc_exec_op(struct nand_chip *chip, 842 + const struct nand_operation *op, 843 + bool check_only) 844 + { 845 + int ret; 846 + 847 + if (check_only) 848 + return anfc_check_op(chip, op); 849 + 850 + ret = anfc_select_target(chip, op->cs); 851 + if (ret) 852 + return ret; 853 + 854 + return nand_op_parser_exec_op(chip, &anfc_op_parser, op, check_only); 855 + } 856 + 857 + static int anfc_setup_data_interface(struct nand_chip *chip, int target, 858 + const struct nand_data_interface *conf) 859 + { 860 + struct anand *anand = to_anand(chip); 861 + struct arasan_nfc *nfc = to_anfc(chip->controller); 862 + struct device_node *np = nfc->dev->of_node; 863 + 864 + if (target < 0) 865 + return 0; 866 + 867 + anand->timings = DIFACE_SDR | DIFACE_SDR_MODE(conf->timings.mode); 868 + anand->clk = ANFC_XLNX_SDR_DFLT_CORE_CLK; 869 + 870 + /* 871 + * Due to a hardware bug in the ZynqMP SoC, SDR timing modes 0-1 work 872 + * with f > 90MHz (default clock is 100MHz) but signals are unstable 873 + * with higher modes. Hence we decrease a little bit the clock rate to 874 + * 80MHz when using modes 2-5 with this SoC. 
875 + */ 876 + if (of_device_is_compatible(np, "xlnx,zynqmp-nand-controller") && 877 + conf->timings.mode >= 2) 878 + anand->clk = ANFC_XLNX_SDR_HS_CORE_CLK; 879 + 880 + return 0; 881 + } 882 + 883 + static int anfc_calc_hw_ecc_bytes(int step_size, int strength) 884 + { 885 + unsigned int bch_gf_mag, ecc_bits; 886 + 887 + switch (step_size) { 888 + case SZ_512: 889 + bch_gf_mag = 13; 890 + break; 891 + case SZ_1K: 892 + bch_gf_mag = 14; 893 + break; 894 + default: 895 + return -EINVAL; 896 + } 897 + 898 + ecc_bits = bch_gf_mag * strength; 899 + 900 + return DIV_ROUND_UP(ecc_bits, 8); 901 + } 902 + 903 + static const int anfc_hw_ecc_512_strengths[] = {4, 8, 12}; 904 + 905 + static const int anfc_hw_ecc_1024_strengths[] = {24}; 906 + 907 + static const struct nand_ecc_step_info anfc_hw_ecc_step_infos[] = { 908 + { 909 + .stepsize = SZ_512, 910 + .strengths = anfc_hw_ecc_512_strengths, 911 + .nstrengths = ARRAY_SIZE(anfc_hw_ecc_512_strengths), 912 + }, 913 + { 914 + .stepsize = SZ_1K, 915 + .strengths = anfc_hw_ecc_1024_strengths, 916 + .nstrengths = ARRAY_SIZE(anfc_hw_ecc_1024_strengths), 917 + }, 918 + }; 919 + 920 + static const struct nand_ecc_caps anfc_hw_ecc_caps = { 921 + .stepinfos = anfc_hw_ecc_step_infos, 922 + .nstepinfos = ARRAY_SIZE(anfc_hw_ecc_step_infos), 923 + .calc_ecc_bytes = anfc_calc_hw_ecc_bytes, 924 + }; 925 + 926 + static int anfc_init_hw_ecc_controller(struct arasan_nfc *nfc, 927 + struct nand_chip *chip) 928 + { 929 + struct anand *anand = to_anand(chip); 930 + struct mtd_info *mtd = nand_to_mtd(chip); 931 + struct nand_ecc_ctrl *ecc = &chip->ecc; 932 + unsigned int bch_prim_poly = 0, bch_gf_mag = 0, ecc_offset; 933 + int ret; 934 + 935 + switch (mtd->writesize) { 936 + case SZ_512: 937 + case SZ_2K: 938 + case SZ_4K: 939 + case SZ_8K: 940 + case SZ_16K: 941 + break; 942 + default: 943 + dev_err(nfc->dev, "Unsupported page size %d\n", mtd->writesize); 944 + return -EINVAL; 945 + } 946 + 947 + ret = nand_ecc_choose_conf(chip, &anfc_hw_ecc_caps, 
mtd->oobsize); 948 + if (ret) 949 + return ret; 950 + 951 + switch (ecc->strength) { 952 + case 12: 953 + anand->strength = 0x1; 954 + break; 955 + case 8: 956 + anand->strength = 0x2; 957 + break; 958 + case 4: 959 + anand->strength = 0x3; 960 + break; 961 + case 24: 962 + anand->strength = 0x4; 963 + break; 964 + default: 965 + dev_err(nfc->dev, "Unsupported strength %d\n", ecc->strength); 966 + return -EINVAL; 967 + } 968 + 969 + switch (ecc->size) { 970 + case SZ_512: 971 + bch_gf_mag = 13; 972 + bch_prim_poly = 0x201b; 973 + break; 974 + case SZ_1K: 975 + bch_gf_mag = 14; 976 + bch_prim_poly = 0x4443; 977 + break; 978 + default: 979 + dev_err(nfc->dev, "Unsupported step size %d\n", ecc->size); 980 + return -EINVAL; 981 + } 982 + 983 + mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops); 984 + 985 + ecc->steps = mtd->writesize / ecc->size; 986 + ecc->algo = NAND_ECC_BCH; 987 + anand->ecc_bits = bch_gf_mag * ecc->strength; 988 + ecc->bytes = DIV_ROUND_UP(anand->ecc_bits, 8); 989 + anand->ecc_total = DIV_ROUND_UP(anand->ecc_bits * ecc->steps, 8); 990 + ecc_offset = mtd->writesize + mtd->oobsize - anand->ecc_total; 991 + anand->ecc_conf = ECC_CONF_COL(ecc_offset) | 992 + ECC_CONF_LEN(anand->ecc_total) | 993 + ECC_CONF_BCH_EN; 994 + 995 + anand->errloc = devm_kmalloc_array(nfc->dev, ecc->strength, 996 + sizeof(*anand->errloc), GFP_KERNEL); 997 + if (!anand->errloc) 998 + return -ENOMEM; 999 + 1000 + anand->hw_ecc = devm_kmalloc(nfc->dev, ecc->bytes, GFP_KERNEL); 1001 + if (!anand->hw_ecc) 1002 + return -ENOMEM; 1003 + 1004 + /* Enforce bit swapping to fit the hardware */ 1005 + anand->bch = bch_init(bch_gf_mag, ecc->strength, bch_prim_poly, true); 1006 + if (!anand->bch) 1007 + return -EINVAL; 1008 + 1009 + ecc->read_page = anfc_read_page_hw_ecc; 1010 + ecc->write_page = anfc_write_page_hw_ecc; 1011 + 1012 + return 0; 1013 + } 1014 + 1015 + static int anfc_attach_chip(struct nand_chip *chip) 1016 + { 1017 + struct anand *anand = to_anand(chip); 1018 + struct 
arasan_nfc *nfc = to_anfc(chip->controller); 1019 + struct mtd_info *mtd = nand_to_mtd(chip); 1020 + int ret = 0; 1021 + 1022 + if (mtd->writesize <= SZ_512) 1023 + anand->caddr_cycles = 1; 1024 + else 1025 + anand->caddr_cycles = 2; 1026 + 1027 + if (chip->options & NAND_ROW_ADDR_3) 1028 + anand->raddr_cycles = 3; 1029 + else 1030 + anand->raddr_cycles = 2; 1031 + 1032 + switch (mtd->writesize) { 1033 + case 512: 1034 + anand->page_sz = 0; 1035 + break; 1036 + case 1024: 1037 + anand->page_sz = 5; 1038 + break; 1039 + case 2048: 1040 + anand->page_sz = 1; 1041 + break; 1042 + case 4096: 1043 + anand->page_sz = 2; 1044 + break; 1045 + case 8192: 1046 + anand->page_sz = 3; 1047 + break; 1048 + case 16384: 1049 + anand->page_sz = 4; 1050 + break; 1051 + default: 1052 + return -EINVAL; 1053 + } 1054 + 1055 + /* These hooks are valid for all ECC providers */ 1056 + chip->ecc.read_page_raw = nand_monolithic_read_page_raw; 1057 + chip->ecc.write_page_raw = nand_monolithic_write_page_raw; 1058 + 1059 + switch (chip->ecc.mode) { 1060 + case NAND_ECC_NONE: 1061 + case NAND_ECC_SOFT: 1062 + case NAND_ECC_ON_DIE: 1063 + break; 1064 + case NAND_ECC_HW: 1065 + ret = anfc_init_hw_ecc_controller(nfc, chip); 1066 + break; 1067 + default: 1068 + dev_err(nfc->dev, "Unsupported ECC mode: %d\n", 1069 + chip->ecc.mode); 1070 + return -EINVAL; 1071 + } 1072 + 1073 + return ret; 1074 + } 1075 + 1076 + static void anfc_detach_chip(struct nand_chip *chip) 1077 + { 1078 + struct anand *anand = to_anand(chip); 1079 + 1080 + if (anand->bch) 1081 + bch_free(anand->bch); 1082 + } 1083 + 1084 + static const struct nand_controller_ops anfc_ops = { 1085 + .exec_op = anfc_exec_op, 1086 + .setup_data_interface = anfc_setup_data_interface, 1087 + .attach_chip = anfc_attach_chip, 1088 + .detach_chip = anfc_detach_chip, 1089 + }; 1090 + 1091 + static int anfc_chip_init(struct arasan_nfc *nfc, struct device_node *np) 1092 + { 1093 + struct anand *anand; 1094 + struct nand_chip *chip; 1095 + struct 
mtd_info *mtd; 1096 + int cs, rb, ret; 1097 + 1098 + anand = devm_kzalloc(nfc->dev, sizeof(*anand), GFP_KERNEL); 1099 + if (!anand) 1100 + return -ENOMEM; 1101 + 1102 + /* We do not support multiple CS per chip yet */ 1103 + if (of_property_count_elems_of_size(np, "reg", sizeof(u32)) != 1) { 1104 + dev_err(nfc->dev, "Invalid reg property\n"); 1105 + return -EINVAL; 1106 + } 1107 + 1108 + ret = of_property_read_u32(np, "reg", &cs); 1109 + if (ret) 1110 + return ret; 1111 + 1112 + ret = of_property_read_u32(np, "nand-rb", &rb); 1113 + if (ret) 1114 + return ret; 1115 + 1116 + if (cs >= ANFC_MAX_CS || rb >= ANFC_MAX_CS) { 1117 + dev_err(nfc->dev, "Wrong CS %d or RB %d\n", cs, rb); 1118 + return -EINVAL; 1119 + } 1120 + 1121 + if (test_and_set_bit(cs, &nfc->assigned_cs)) { 1122 + dev_err(nfc->dev, "Already assigned CS %d\n", cs); 1123 + return -EINVAL; 1124 + } 1125 + 1126 + anand->cs = cs; 1127 + anand->rb = rb; 1128 + 1129 + chip = &anand->chip; 1130 + mtd = nand_to_mtd(chip); 1131 + mtd->dev.parent = nfc->dev; 1132 + chip->controller = &nfc->controller; 1133 + chip->options = NAND_BUSWIDTH_AUTO | NAND_NO_SUBPAGE_WRITE | 1134 + NAND_USES_DMA; 1135 + 1136 + nand_set_flash_node(chip, np); 1137 + if (!mtd->name) { 1138 + dev_err(nfc->dev, "NAND label property is mandatory\n"); 1139 + return -EINVAL; 1140 + } 1141 + 1142 + ret = nand_scan(chip, 1); 1143 + if (ret) { 1144 + dev_err(nfc->dev, "Scan operation failed\n"); 1145 + return ret; 1146 + } 1147 + 1148 + ret = mtd_device_register(mtd, NULL, 0); 1149 + if (ret) { 1150 + nand_cleanup(chip); 1151 + return ret; 1152 + } 1153 + 1154 + list_add_tail(&anand->node, &nfc->chips); 1155 + 1156 + return 0; 1157 + } 1158 + 1159 + static void anfc_chips_cleanup(struct arasan_nfc *nfc) 1160 + { 1161 + struct anand *anand, *tmp; 1162 + struct nand_chip *chip; 1163 + int ret; 1164 + 1165 + list_for_each_entry_safe(anand, tmp, &nfc->chips, node) { 1166 + chip = &anand->chip; 1167 + ret = mtd_device_unregister(nand_to_mtd(chip)); 1168 
+ WARN_ON(ret); 1169 + nand_cleanup(chip); 1170 + list_del(&anand->node); 1171 + } 1172 + } 1173 + 1174 + static int anfc_chips_init(struct arasan_nfc *nfc) 1175 + { 1176 + struct device_node *np = nfc->dev->of_node, *nand_np; 1177 + int nchips = of_get_child_count(np); 1178 + int ret; 1179 + 1180 + if (!nchips || nchips > ANFC_MAX_CS) { 1181 + dev_err(nfc->dev, "Incorrect number of NAND chips (%d)\n", 1182 + nchips); 1183 + return -EINVAL; 1184 + } 1185 + 1186 + for_each_child_of_node(np, nand_np) { 1187 + ret = anfc_chip_init(nfc, nand_np); 1188 + if (ret) { 1189 + of_node_put(nand_np); 1190 + anfc_chips_cleanup(nfc); 1191 + break; 1192 + } 1193 + } 1194 + 1195 + return ret; 1196 + } 1197 + 1198 + static void anfc_reset(struct arasan_nfc *nfc) 1199 + { 1200 + /* Disable interrupt signals */ 1201 + writel_relaxed(0, nfc->base + INTR_SIG_EN_REG); 1202 + 1203 + /* Enable interrupt status */ 1204 + writel_relaxed(EVENT_MASK, nfc->base + INTR_STS_EN_REG); 1205 + } 1206 + 1207 + static int anfc_probe(struct platform_device *pdev) 1208 + { 1209 + struct arasan_nfc *nfc; 1210 + int ret; 1211 + 1212 + nfc = devm_kzalloc(&pdev->dev, sizeof(*nfc), GFP_KERNEL); 1213 + if (!nfc) 1214 + return -ENOMEM; 1215 + 1216 + nfc->dev = &pdev->dev; 1217 + nand_controller_init(&nfc->controller); 1218 + nfc->controller.ops = &anfc_ops; 1219 + INIT_LIST_HEAD(&nfc->chips); 1220 + 1221 + nfc->base = devm_platform_ioremap_resource(pdev, 0); 1222 + if (IS_ERR(nfc->base)) 1223 + return PTR_ERR(nfc->base); 1224 + 1225 + anfc_reset(nfc); 1226 + 1227 + nfc->controller_clk = devm_clk_get(&pdev->dev, "controller"); 1228 + if (IS_ERR(nfc->controller_clk)) 1229 + return PTR_ERR(nfc->controller_clk); 1230 + 1231 + nfc->bus_clk = devm_clk_get(&pdev->dev, "bus"); 1232 + if (IS_ERR(nfc->bus_clk)) 1233 + return PTR_ERR(nfc->bus_clk); 1234 + 1235 + ret = clk_prepare_enable(nfc->controller_clk); 1236 + if (ret) 1237 + return ret; 1238 + 1239 + ret = clk_prepare_enable(nfc->bus_clk); 1240 + if (ret) 1241 + 
goto disable_controller_clk; 1242 + 1243 + ret = anfc_chips_init(nfc); 1244 + if (ret) 1245 + goto disable_bus_clk; 1246 + 1247 + platform_set_drvdata(pdev, nfc); 1248 + 1249 + return 0; 1250 + 1251 + disable_bus_clk: 1252 + clk_disable_unprepare(nfc->bus_clk); 1253 + 1254 + disable_controller_clk: 1255 + clk_disable_unprepare(nfc->controller_clk); 1256 + 1257 + return ret; 1258 + } 1259 + 1260 + static int anfc_remove(struct platform_device *pdev) 1261 + { 1262 + struct arasan_nfc *nfc = platform_get_drvdata(pdev); 1263 + 1264 + anfc_chips_cleanup(nfc); 1265 + 1266 + clk_disable_unprepare(nfc->bus_clk); 1267 + clk_disable_unprepare(nfc->controller_clk); 1268 + 1269 + return 0; 1270 + } 1271 + 1272 + static const struct of_device_id anfc_ids[] = { 1273 + { 1274 + .compatible = "xlnx,zynqmp-nand-controller", 1275 + }, 1276 + { 1277 + .compatible = "arasan,nfc-v3p10", 1278 + }, 1279 + {} 1280 + }; 1281 + MODULE_DEVICE_TABLE(of, anfc_ids); 1282 + 1283 + static struct platform_driver anfc_driver = { 1284 + .driver = { 1285 + .name = "arasan-nand-controller", 1286 + .of_match_table = anfc_ids, 1287 + }, 1288 + .probe = anfc_probe, 1289 + .remove = anfc_remove, 1290 + }; 1291 + module_platform_driver(anfc_driver); 1292 + 1293 + MODULE_LICENSE("GPL v2"); 1294 + MODULE_AUTHOR("Punnaiah Choudary Kalluri <punnaia@xilinx.com>"); 1295 + MODULE_AUTHOR("Naga Sureshkumar Relli <nagasure@xilinx.com>"); 1296 + MODULE_AUTHOR("Miquel Raynal <miquel.raynal@bootlin.com>"); 1297 + MODULE_DESCRIPTION("Arasan NAND Flash Controller Driver");
+1 -1
drivers/mtd/nand/raw/atmel/nand-controller.c
···
 	 * suitable for DMA.
 	 */
 	if (nc->dmac)
-		chip->options |= NAND_USE_BOUNCE_BUFFER;
+		chip->options |= NAND_USES_DMA;
 
 	/* Default to HW ECC if pmecc is available. */
 	if (nc->pmecc)
+140 -277
drivers/mtd/nand/raw/au1550nd.c
···
 
 
 struct au1550nd_ctx {
+	struct nand_controller controller;
 	struct nand_chip chip;
 
 	int cs;
 	void __iomem *base;
-	void (*write_byte)(struct nand_chip *, u_char);
 };
 
-/**
- * au_read_byte - read one byte from the chip
- * @this: NAND chip object
- *
- * read function for 8bit buswidth
- */
-static u_char au_read_byte(struct nand_chip *this)
+static struct au1550nd_ctx *chip_to_au_ctx(struct nand_chip *this)
 {
-	u_char ret = readb(this->legacy.IO_ADDR_R);
-	wmb(); /* drain writebuffer */
-	return ret;
-}
-
-/**
- * au_write_byte - write one byte to the chip
- * @this: NAND chip object
- * @byte: pointer to data byte to write
- *
- * write function for 8it buswidth
- */
-static void au_write_byte(struct nand_chip *this, u_char byte)
-{
-	writeb(byte, this->legacy.IO_ADDR_W);
-	wmb(); /* drain writebuffer */
-}
-
-/**
- * au_read_byte16 - read one byte endianness aware from the chip
- * @this: NAND chip object
- *
- * read function for 16bit buswidth with endianness conversion
- */
-static u_char au_read_byte16(struct nand_chip *this)
-{
-	u_char ret = (u_char) cpu_to_le16(readw(this->legacy.IO_ADDR_R));
-	wmb(); /* drain writebuffer */
-	return ret;
-}
-
-/**
- * au_write_byte16 - write one byte endianness aware to the chip
- * @this: NAND chip object
- * @byte: pointer to data byte to write
- *
- * write function for 16bit buswidth with endianness conversion
- */
-static void au_write_byte16(struct nand_chip *this, u_char byte)
-{
-	writew(le16_to_cpu((u16) byte), this->legacy.IO_ADDR_W);
-	wmb(); /* drain writebuffer */
+	return container_of(this, struct au1550nd_ctx, chip);
 }
 
 /**
···
  *
  * write function for 8bit buswidth
  */
-static void au_write_buf(struct nand_chip *this, const u_char *buf, int len)
+static void au_write_buf(struct nand_chip *this, const void *buf,
+			 unsigned int len)
 {
+	struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
+	const u8 *p = buf;
 	int i;
 
 	for (i = 0; i < len; i++) {
-		writeb(buf[i], this->legacy.IO_ADDR_W);
+		writeb(p[i], ctx->base + MEM_STNAND_DATA);
 		wmb(); /* drain writebuffer */
 	}
 }
···
  *
  * read function for 8bit buswidth
  */
-static void au_read_buf(struct nand_chip *this, u_char *buf, int len)
+static void au_read_buf(struct nand_chip *this, void *buf,
+			unsigned int len)
 {
+	struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
+	u8 *p = buf;
 	int i;
 
 	for (i = 0; i < len; i++) {
-		buf[i] = readb(this->legacy.IO_ADDR_R);
+		p[i] = readb(ctx->base + MEM_STNAND_DATA);
 		wmb(); /* drain writebuffer */
 	}
 }
···
  *
  * write function for 16bit buswidth
  */
-static void au_write_buf16(struct nand_chip *this, const u_char *buf, int len)
+static void au_write_buf16(struct nand_chip *this, const void *buf,
+			   unsigned int len)
 {
-	int i;
-	u16 *p = (u16 *) buf;
-	len >>= 1;
+	struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
+	const u16 *p = buf;
+	unsigned int i;
 
+	len >>= 1;
 	for (i = 0; i < len; i++) {
-		writew(p[i], this->legacy.IO_ADDR_W);
+		writew(p[i], ctx->base + MEM_STNAND_DATA);
 		wmb(); /* drain writebuffer */
 	}
-
 }
 
 /**
···
  *
  * read function for 16bit buswidth
  */
-static void au_read_buf16(struct nand_chip *this, u_char *buf, int len)
+static void au_read_buf16(struct nand_chip *this, void *buf, unsigned int len)
 {
-	int i;
-	u16 *p = (u16 *) buf;
-	len >>= 1;
+	struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
+	unsigned int i;
+	u16 *p = buf;
 
+	len >>= 1;
 	for (i = 0; i < len; i++) {
-		p[i] = readw(this->legacy.IO_ADDR_R);
+		p[i] = readw(ctx->base + MEM_STNAND_DATA);
 		wmb(); /* drain writebuffer */
 	}
-}
-
-/* Select the chip by setting nCE to low */
-#define NAND_CTL_SETNCE		1
-/* Deselect the chip by setting nCE to high */
-#define NAND_CTL_CLRNCE		2
-/* Select the command latch by setting CLE to high */
-#define NAND_CTL_SETCLE		3
-/* Deselect the command latch by setting CLE to low */
-#define NAND_CTL_CLRCLE		4
-/* Select the address latch by setting ALE to high */
-#define NAND_CTL_SETALE		5
-/* Deselect the address latch by setting ALE to low */
-#define NAND_CTL_CLRALE		6
-
-static void au1550_hwcontrol(struct mtd_info *mtd, int cmd)
-{
-	struct nand_chip *this = mtd_to_nand(mtd);
-	struct au1550nd_ctx *ctx = container_of(this, struct au1550nd_ctx,
-						chip);
-
-	switch (cmd) {
-
-	case NAND_CTL_SETCLE:
-		this->legacy.IO_ADDR_W = ctx->base + MEM_STNAND_CMD;
-		break;
-
-	case NAND_CTL_CLRCLE:
-		this->legacy.IO_ADDR_W = ctx->base + MEM_STNAND_DATA;
-		break;
-
-	case NAND_CTL_SETALE:
-		this->legacy.IO_ADDR_W = ctx->base + MEM_STNAND_ADDR;
-		break;
-
-	case NAND_CTL_CLRALE:
-		this->legacy.IO_ADDR_W = ctx->base + MEM_STNAND_DATA;
-		/* FIXME: Nobody knows why this is necessary,
-		 * but it works only that way */
-		udelay(1);
-		break;
-
-	case NAND_CTL_SETNCE:
-		/* assert (force assert) chip enable */
-		alchemy_wrsmem((1 << (4 + ctx->cs)), AU1000_MEM_STNDCTL);
-		break;
-
-	case NAND_CTL_CLRNCE:
-		/* deassert chip enable */
-		alchemy_wrsmem(0, AU1000_MEM_STNDCTL);
-		break;
-	}
-
-	this->legacy.IO_ADDR_R = this->legacy.IO_ADDR_W;
-
-	wmb(); /* Drain the writebuffer */
-}
-
-int au1550_device_ready(struct nand_chip *this)
-{
-	return (alchemy_rdsmem(AU1000_MEM_STSTAT) & 0x1) ? 1 : 0;
-}
-
-/**
- * au1550_select_chip - control -CE line
- *	Forbid driving -CE manually permitting the NAND controller to do this.
- *	Keeping -CE asserted during the whole sector reads interferes with the
- *	NOR flash and PCMCIA drivers as it causes contention on the static bus.
- *	We only have to hold -CE low for the NAND read commands since the flash
- *	chip needs it to be asserted during chip not ready time but the NAND
- *	controller keeps it released.
- *
- * @this: NAND chip object
- * @chip: chipnumber to select, -1 for deselect
- */
-static void au1550_select_chip(struct nand_chip *this, int chip)
-{
-}
-
-/**
- * au1550_command - Send command to NAND device
- * @this: NAND chip object
- * @command: the command to be sent
- * @column: the column address for this command, -1 if none
- * @page_addr: the page address for this command, -1 if none
- */
-static void au1550_command(struct nand_chip *this, unsigned command,
-			   int column, int page_addr)
-{
-	struct mtd_info *mtd = nand_to_mtd(this);
-	struct au1550nd_ctx *ctx = container_of(this, struct au1550nd_ctx,
-						chip);
-	int ce_override = 0, i;
-	unsigned long flags = 0;
-
-	/* Begin command latch cycle */
-	au1550_hwcontrol(mtd, NAND_CTL_SETCLE);
-	/*
-	 * Write out the command to the device.
-	 */
-	if (command == NAND_CMD_SEQIN) {
-		int readcmd;
-
-		if (column >= mtd->writesize) {
-			/* OOB area */
-			column -= mtd->writesize;
-			readcmd = NAND_CMD_READOOB;
-		} else if (column < 256) {
-			/* First 256 bytes --> READ0 */
-			readcmd = NAND_CMD_READ0;
-		} else {
-			column -= 256;
-			readcmd = NAND_CMD_READ1;
-		}
-		ctx->write_byte(this, readcmd);
-	}
-	ctx->write_byte(this, command);
-
-	/* Set ALE and clear CLE to start address cycle */
-	au1550_hwcontrol(mtd, NAND_CTL_CLRCLE);
-
-	if (column != -1 || page_addr != -1) {
-		au1550_hwcontrol(mtd, NAND_CTL_SETALE);
-
-		/* Serially input address */
-		if (column != -1) {
-			/* Adjust columns for 16 bit buswidth */
-			if (this->options & NAND_BUSWIDTH_16 &&
-			    !nand_opcode_8bits(command))
-				column >>= 1;
-			ctx->write_byte(this, column);
-		}
-		if (page_addr != -1) {
-			ctx->write_byte(this, (u8)(page_addr & 0xff));
-
-			if (command == NAND_CMD_READ0 ||
-			    command == NAND_CMD_READ1 ||
-			    command == NAND_CMD_READOOB) {
-				/*
-				 * NAND controller will release -CE after
-				 * the last address byte is written, so we'll
-				 * have to forcibly assert it. No interrupts
-				 * are allowed while we do this as we don't
-				 * want the NOR flash or PCMCIA drivers to
-				 * steal our precious bytes of data...
-				 */
-				ce_override = 1;
-				local_irq_save(flags);
-				au1550_hwcontrol(mtd, NAND_CTL_SETNCE);
-			}
-
-			ctx->write_byte(this, (u8)(page_addr >> 8));
-
-			if (this->options & NAND_ROW_ADDR_3)
-				ctx->write_byte(this,
-						((page_addr >> 16) & 0x0f));
-		}
-		/* Latch in address */
-		au1550_hwcontrol(mtd, NAND_CTL_CLRALE);
-	}
-
-	/*
-	 * Program and erase have their own busy handlers.
-	 * Status and sequential in need no delay.
-	 */
-	switch (command) {
-
-	case NAND_CMD_PAGEPROG:
-	case NAND_CMD_ERASE1:
-	case NAND_CMD_ERASE2:
-	case NAND_CMD_SEQIN:
-	case NAND_CMD_STATUS:
-		return;
-
-	case NAND_CMD_RESET:
-		break;
-
-	case NAND_CMD_READ0:
-	case NAND_CMD_READ1:
-	case NAND_CMD_READOOB:
-		/* Check if we're really driving -CE low (just in case) */
-		if (unlikely(!ce_override))
-			break;
-
-		/* Apply a short delay always to ensure that we do wait tWB. */
-		ndelay(100);
-		/* Wait for a chip to become ready... */
-		for (i = this->legacy.chip_delay;
-		     !this->legacy.dev_ready(this) && i > 0; --i)
-			udelay(1);
-
-		/* Release -CE and re-enable interrupts. */
-		au1550_hwcontrol(mtd, NAND_CTL_CLRNCE);
-		local_irq_restore(flags);
-		return;
-	}
-	/* Apply this short delay always to ensure that we do wait tWB. */
-	ndelay(100);
-
-	while(!this->legacy.dev_ready(this));
 }
 
 static int find_nand_cs(unsigned long nand_base)
···
 
 	return -ENODEV;
 }
+
+static int au1550nd_waitrdy(struct nand_chip *this, unsigned int timeout_ms)
+{
+	unsigned long timeout_jiffies = jiffies;
+
+	timeout_jiffies += msecs_to_jiffies(timeout_ms) + 1;
+	do {
+		if (alchemy_rdsmem(AU1000_MEM_STSTAT) & 0x1)
+			return 0;
+
+		usleep_range(10, 100);
+	} while (time_before(jiffies, timeout_jiffies));
+
+	return -ETIMEDOUT;
+}
+
+static int au1550nd_exec_instr(struct nand_chip *this,
+			       const struct nand_op_instr *instr)
+{
+	struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
+	unsigned int i;
+	int ret = 0;
+
+	switch (instr->type) {
+	case NAND_OP_CMD_INSTR:
+		writeb(instr->ctx.cmd.opcode,
+		       ctx->base + MEM_STNAND_CMD);
+		/* Drain the writebuffer */
+		wmb();
+		break;
+
+	case NAND_OP_ADDR_INSTR:
+		for (i = 0; i < instr->ctx.addr.naddrs; i++) {
+			writeb(instr->ctx.addr.addrs[i],
+			       ctx->base + MEM_STNAND_ADDR);
+			/* Drain the writebuffer */
+			wmb();
+		}
+		break;
+
+	case NAND_OP_DATA_IN_INSTR:
+		if ((this->options & NAND_BUSWIDTH_16) &&
+		    !instr->ctx.data.force_8bit)
+			au_read_buf16(this, instr->ctx.data.buf.in,
+				      instr->ctx.data.len);
+		else
+			au_read_buf(this, instr->ctx.data.buf.in,
+				    instr->ctx.data.len);
+		break;
+
+	case NAND_OP_DATA_OUT_INSTR:
+		if ((this->options & NAND_BUSWIDTH_16) &&
+		    !instr->ctx.data.force_8bit)
+			au_write_buf16(this, instr->ctx.data.buf.out,
+				       instr->ctx.data.len);
+		else
+			au_write_buf(this, instr->ctx.data.buf.out,
+				     instr->ctx.data.len);
+		break;
+
+	case NAND_OP_WAITRDY_INSTR:
+		ret = au1550nd_waitrdy(this, instr->ctx.waitrdy.timeout_ms);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (instr->delay_ns)
+		ndelay(instr->delay_ns);
+
+	return ret;
+}
+
+static int au1550nd_exec_op(struct nand_chip *this,
+			    const struct nand_operation *op,
+			    bool check_only)
+{
+	struct au1550nd_ctx *ctx = chip_to_au_ctx(this);
+	unsigned int i;
+	int ret;
+
+	if (check_only)
+		return 0;
+
+	/* assert (force assert) chip enable */
+	alchemy_wrsmem((1 << (4 + ctx->cs)), AU1000_MEM_STNDCTL);
+	/* Drain the writebuffer */
+	wmb();
+
+	for (i = 0; i < op->ninstrs; i++) {
+		ret = au1550nd_exec_instr(this, &op->instrs[i]);
+		if (ret)
+			break;
+	}
+
+	/* deassert chip enable */
+	alchemy_wrsmem(0, AU1000_MEM_STNDCTL);
+	/* Drain the writebuffer */
+	wmb();
+
+	return ret;
+}
+
+static const struct nand_controller_ops au1550nd_ops = {
+	.exec_op = au1550nd_exec_op,
+};
 
 static int au1550nd_probe(struct platform_device *pdev)
 {
···
 	}
 	ctx->cs = cs;
 
-	this->legacy.dev_ready = au1550_device_ready;
-	this->legacy.select_chip = au1550_select_chip;
-	this->legacy.cmdfunc = au1550_command;
-
-	/* 30 us command delay time */
-	this->legacy.chip_delay = 30;
+	nand_controller_init(&ctx->controller);
+	ctx->controller.ops = &au1550nd_ops;
+	this->controller = &ctx->controller;
 	this->ecc.mode = NAND_ECC_SOFT;
 	this->ecc.algo = NAND_ECC_HAMMING;
 
 	if (pd->devwidth)
 		this->options |= NAND_BUSWIDTH_16;
-
-	this->legacy.read_byte = (pd->devwidth) ? au_read_byte16 : au_read_byte;
-	ctx->write_byte = (pd->devwidth) ? au_write_byte16 : au_write_byte;
-	this->legacy.write_buf = (pd->devwidth) ? au_write_buf16 : au_write_buf;
-	this->legacy.read_buf = (pd->devwidth) ? au_read_buf16 : au_read_buf;
 
 	ret = nand_scan(this, 1);
 	if (ret) {
···
 {
 	struct au1550nd_ctx *ctx = platform_get_drvdata(pdev);
 	struct resource *r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	struct nand_chip *chip = &ctx->chip;
+	int ret;
 
-	nand_release(&ctx->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	iounmap(ctx->base);
 	release_mem_region(r->start, 0x1000);
 	kfree(ctx);
+5 -1
drivers/mtd/nand/raw/bcm47xxnflash/main.c
···
 static int bcm47xxnflash_remove(struct platform_device *pdev)
 {
 	struct bcm47xxnflash *nflash = platform_get_drvdata(pdev);
+	struct nand_chip *chip = &nflash->nand_chip;
+	int ret;
 
-	nand_release(&nflash->nand_chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 
 	return 0;
 }
+120 -44
drivers/mtd/nand/raw/brcmnand/brcmnand.c
···
  */
 
 #include <linux/clk.h>
-#include <linux/version.h>
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/delay.h>
···
 	const unsigned int *block_sizes;
 	unsigned int max_page_size;
 	const unsigned int *page_sizes;
+	unsigned int page_size_shift;
 	unsigned int max_oob;
 	u32 features;
 
···
 	BRCMNAND_FC_BASE,
 };
 
-/* BRCMNAND v4.0 */
-static const u16 brcmnand_regs_v40[] = {
+/* BRCMNAND v2.1-v2.2 */
+static const u16 brcmnand_regs_v21[] = {
+	[BRCMNAND_CMD_START]		= 0x04,
+	[BRCMNAND_CMD_EXT_ADDRESS]	= 0x08,
+	[BRCMNAND_CMD_ADDRESS]		= 0x0c,
+	[BRCMNAND_INTFC_STATUS]		= 0x5c,
+	[BRCMNAND_CS_SELECT]		= 0x14,
+	[BRCMNAND_CS_XOR]		= 0x18,
+	[BRCMNAND_LL_OP]		= 0,
+	[BRCMNAND_CS0_BASE]		= 0x40,
+	[BRCMNAND_CS1_BASE]		= 0,
+	[BRCMNAND_CORR_THRESHOLD]	= 0,
+	[BRCMNAND_CORR_THRESHOLD_EXT]	= 0,
+	[BRCMNAND_UNCORR_COUNT]		= 0,
+	[BRCMNAND_CORR_COUNT]		= 0,
+	[BRCMNAND_CORR_EXT_ADDR]	= 0x60,
+	[BRCMNAND_CORR_ADDR]		= 0x64,
+	[BRCMNAND_UNCORR_EXT_ADDR]	= 0x68,
+	[BRCMNAND_UNCORR_ADDR]		= 0x6c,
+	[BRCMNAND_SEMAPHORE]		= 0x50,
+	[BRCMNAND_ID]			= 0x54,
+	[BRCMNAND_ID_EXT]		= 0,
+	[BRCMNAND_LL_RDATA]		= 0,
+	[BRCMNAND_OOB_READ_BASE]	= 0x20,
+	[BRCMNAND_OOB_READ_10_BASE]	= 0,
+	[BRCMNAND_OOB_WRITE_BASE]	= 0x30,
+	[BRCMNAND_OOB_WRITE_10_BASE]	= 0,
+	[BRCMNAND_FC_BASE]		= 0x200,
+};
+
+/* BRCMNAND v3.3-v4.0 */
+static const u16 brcmnand_regs_v33[] = {
 	[BRCMNAND_CMD_START]		= 0x04,
 	[BRCMNAND_CMD_EXT_ADDRESS]	= 0x08,
 	[BRCMNAND_CMD_ADDRESS]		= 0x0c,
···
 	CFG_BUS_WIDTH			= BIT(CFG_BUS_WIDTH_SHIFT),
 	CFG_DEVICE_SIZE_SHIFT		= 24,
 
+	/* Only for v2.1 */
+	CFG_PAGE_SIZE_SHIFT_v2_1	= 30,
+
 	/* Only for pre-v7.1 (with no CFG_EXT register) */
 	CFG_PAGE_SIZE_SHIFT		= 20,
 	CFG_BLK_SIZE_SHIFT		= 28,
···
 {
 	static const unsigned int block_sizes_v6[] = { 8, 16, 128, 256, 512, 1024, 2048, 0 };
 	static const unsigned int block_sizes_v4[] = { 16, 128, 8, 512, 256, 1024, 2048, 0 };
-	static const unsigned int page_sizes[] = { 512, 2048, 4096, 8192, 0 };
+	static const unsigned int block_sizes_v2_2[] = { 16, 128, 8, 512, 256, 0 };
+	static const unsigned int block_sizes_v2_1[] = { 16, 128, 8, 512, 0 };
+	static const unsigned int page_sizes_v3_4[] = { 512, 2048, 4096, 8192, 0 };
+	static const unsigned int page_sizes_v2_2[] = { 512, 2048, 4096, 0 };
+	static const unsigned int page_sizes_v2_1[] = { 512, 2048, 0 };
 
 	ctrl->nand_version = nand_readreg(ctrl, 0) & 0xffff;
 
-	/* Only support v4.0+? */
-	if (ctrl->nand_version < 0x0400) {
+	/* Only support v2.1+ */
+	if (ctrl->nand_version < 0x0201) {
 		dev_err(ctrl->dev, "version %#x not supported\n",
 			ctrl->nand_version);
 		return -ENODEV;
···
 		ctrl->reg_offsets = brcmnand_regs_v60;
 	else if (ctrl->nand_version >= 0x0500)
 		ctrl->reg_offsets = brcmnand_regs_v50;
-	else if (ctrl->nand_version >= 0x0400)
-		ctrl->reg_offsets = brcmnand_regs_v40;
+	else if (ctrl->nand_version >= 0x0303)
+		ctrl->reg_offsets = brcmnand_regs_v33;
+	else if (ctrl->nand_version >= 0x0201)
+		ctrl->reg_offsets = brcmnand_regs_v21;
 
 	/* Chip-select stride */
 	if (ctrl->nand_version >= 0x0701)
···
 	} else {
 		ctrl->cs_offsets = brcmnand_cs_offsets;
 
-		/* v5.0 and earlier has a different CS0 offset layout */
-		if (ctrl->nand_version <= 0x0500)
+		/* v3.3-5.0 have a different CS0 offset layout */
+		if (ctrl->nand_version >= 0x0303 &&
+		    ctrl->nand_version <= 0x0500)
 			ctrl->cs0_offsets = brcmnand_cs_offsets_cs0;
 	}
 
···
 		ctrl->max_page_size = 16 * 1024;
 		ctrl->max_block_size = 2 * 1024 * 1024;
 	} else {
-		ctrl->page_sizes = page_sizes;
+		if (ctrl->nand_version >= 0x0304)
+			ctrl->page_sizes = page_sizes_v3_4;
+		else if (ctrl->nand_version >= 0x0202)
+			ctrl->page_sizes = page_sizes_v2_2;
+		else
+			ctrl->page_sizes = page_sizes_v2_1;
+
+		if (ctrl->nand_version >= 0x0202)
+			ctrl->page_size_shift = CFG_PAGE_SIZE_SHIFT;
+		else
+			ctrl->page_size_shift = CFG_PAGE_SIZE_SHIFT_v2_1;
+
 		if (ctrl->nand_version >= 0x0600)
 			ctrl->block_sizes = block_sizes_v6;
-		else
+		else if (ctrl->nand_version >= 0x0400)
 			ctrl->block_sizes = block_sizes_v4;
+		else if (ctrl->nand_version >= 0x0202)
+			ctrl->block_sizes = block_sizes_v2_2;
+		else
+			ctrl->block_sizes = block_sizes_v2_1;
 
 		if (ctrl->nand_version < 0x0400) {
-			ctrl->max_page_size = 4096;
+			if (ctrl->nand_version < 0x0202)
+				ctrl->max_page_size = 2048;
+			else
+				ctrl->max_page_size = 4096;
 			ctrl->max_block_size = 512 * 1024;
 		}
 	}
···
 	enum brcmnand_reg reg = BRCMNAND_CORR_THRESHOLD;
 	int cs = host->cs;
 
+	if (!ctrl->reg_offsets[reg])
+		return;
+
 	if (ctrl->nand_version == 0x0702)
 		bits = 7;
 	else if (ctrl->nand_version >= 0x0600)
···
 		return GENMASK(7, 0);
 	else if (ctrl->nand_version >= 0x0600)
 		return GENMASK(6, 0);
-	else
+	else if (ctrl->nand_version >= 0x0303)
 		return GENMASK(5, 0);
+	else
+		return GENMASK(4, 0);
 }
 
 #define NAND_ACC_CONTROL_ECC_SHIFT	16
···
 	struct brcmnand_cfg *cfg = &host->hwcfg;
 	int sas = cfg->spare_area_size << cfg->sector_size_1k;
 	int sectors = cfg->page_size / (512 << cfg->sector_size_1k);
+	u32 next;
 
-	if (section >= sectors * 2)
+	if (section > sectors)
 		return -ERANGE;
 
-	oobregion->offset = (section / 2) * sas;
+	next = (section * sas);
+	if (section < sectors)
+		next += 6;
 
-	if (section & 1) {
-		oobregion->offset += 9;
-		oobregion->length = 7;
+	if (section) {
+		oobregion->offset = ((section - 1) * sas) + 9;
 	} else {
-		oobregion->length = 6;
-
-		/* First sector of each page may have BBI */
-		if (!section) {
-			/*
-			 * Small-page NAND use byte 6 for BBI while large-page
-			 * NAND use byte 0.
-			 */
-			if (cfg->page_size > 512)
-				oobregion->offset++;
-			oobregion->length--;
+		if (cfg->page_size > 512) {
+			/* Large page NAND uses first 2 bytes for BBI */
+			oobregion->offset = 2;
+		} else {
+			/* Small page NAND uses last byte before ECC for BBI */
+			oobregion->offset = 0;
+			next--;
 		}
 	}
+
+	oobregion->length = next - oobregion->offset;
 
 	return 0;
 }
···
 static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
 		  struct nand_chip *chip, void *buf, u64 addr)
 {
-	int i, sas;
-	void *oob = chip->oob_poi;
+	struct mtd_oob_region ecc;
+	int i;
 	int bitflips = 0;
 	int page = addr >> chip->page_shift;
 	int ret;
+	void *ecc_bytes;
 	void *ecc_chunk;
 
 	if (!buf)
 		buf = nand_get_data_buf(chip);
-
-	sas = mtd->oobsize / chip->ecc.steps;
 
 	/* read without ecc for verification */
 	ret = chip->ecc.read_page_raw(chip, buf, true, page);
 	if (ret)
 		return ret;
 
-	for (i = 0; i < chip->ecc.steps; i++, oob += sas) {
+	for (i = 0; i < chip->ecc.steps; i++) {
 		ecc_chunk = buf + chip->ecc.size * i;
-		ret = nand_check_erased_ecc_chunk(ecc_chunk,
-						  chip->ecc.size,
-						  oob, sas, NULL, 0,
+
+		mtd_ooblayout_ecc(mtd, i, &ecc);
+		ecc_bytes = chip->oob_poi + ecc.offset;
+
+		ret = nand_check_erased_ecc_chunk(ecc_chunk, chip->ecc.size,
+						  ecc_bytes, ecc.length,
+						  NULL, 0,
 						  chip->ecc.strength);
 		if (ret < 0)
 			return ret;
···
 		(!!(cfg->device_width == 16) << CFG_BUS_WIDTH_SHIFT) |
 		(device_size << CFG_DEVICE_SIZE_SHIFT);
 	if (cfg_offs == cfg_ext_offs) {
-		tmp |= (page_size << CFG_PAGE_SIZE_SHIFT) |
+		tmp |= (page_size << ctrl->page_size_shift) |
 		       (block_size << CFG_BLK_SIZE_SHIFT);
 		nand_writereg(ctrl, cfg_offs, tmp);
 	} else {
···
 
 	tmp = nand_readreg(ctrl, acc_control_offs);
 	tmp &= ~brcmnand_ecc_level_mask(ctrl);
-	tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
 	tmp &= ~brcmnand_spare_area_mask(ctrl);
-	tmp |= cfg->spare_area_size;
+	if (ctrl->nand_version >= 0x0302) {
+		tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
+		tmp |= cfg->spare_area_size;
+	}
 	nand_writereg(ctrl, acc_control_offs, tmp);
 
 	brcmnand_set_sector_size_1k(host, cfg->sector_size_1k);
···
 	 * to/from, and have nand_base pass us a bounce buffer instead, as
 	 * needed.
 	 */
-	chip->options |= NAND_USE_BOUNCE_BUFFER;
+	chip->options |= NAND_USES_DMA;
 
 	if (chip->bbt_options & NAND_BBT_USE_FLASH)
 		chip->bbt_options |= NAND_BBT_NO_OOB;
···
 EXPORT_SYMBOL_GPL(brcmnand_pm_ops);
 
 static const struct of_device_id brcmnand_of_match[] = {
+	{ .compatible = "brcm,brcmnand-v2.1" },
+	{ .compatible = "brcm,brcmnand-v2.2" },
 	{ .compatible = "brcm,brcmnand-v4.0" },
 	{ .compatible = "brcm,brcmnand-v5.0" },
 	{ .compatible = "brcm,brcmnand-v6.0" },
···
 {
 	struct brcmnand_controller *ctrl = dev_get_drvdata(&pdev->dev);
 	struct brcmnand_host *host;
+	struct nand_chip *chip;
+	int ret;
 
-	list_for_each_entry(host, &ctrl->host_list, node)
-		nand_release(&host->chip);
+	list_for_each_entry(host, &ctrl->host_list, node) {
+		chip = &host->chip;
+		ret = mtd_device_unregister(nand_to_mtd(chip));
+		WARN_ON(ret);
+		nand_cleanup(chip);
+	}
 
 	clk_disable_unprepare(ctrl->clk);
 
+12 -5
drivers/mtd/nand/raw/cadence-nand-controller.c
···
 			       const struct nand_operation *op,
 			       bool check_only)
 {
-	int status = cadence_nand_select_target(chip);
+	if (!check_only) {
+		int status = cadence_nand_select_target(chip);
 
-	if (status)
-		return status;
+		if (status)
+			return status;
+	}
 
 	return nand_op_parser_exec_op(chip, &cadence_nand_op_parser, op,
 				      check_only);
···
 	return 0;
 }
 
-int cadence_nand_attach_chip(struct nand_chip *chip)
+static int cadence_nand_attach_chip(struct nand_chip *chip)
 {
 	struct cdns_nand_ctrl *cdns_ctrl = to_cdns_nand_ctrl(chip->controller);
 	struct cdns_nand_chip *cdns_chip = to_cdns_nand_chip(chip);
···
 static void cadence_nand_chips_cleanup(struct cdns_nand_ctrl *cdns_ctrl)
 {
 	struct cdns_nand_chip *entry, *temp;
+	struct nand_chip *chip;
+	int ret;
 
 	list_for_each_entry_safe(entry, temp, &cdns_ctrl->chips, node) {
-		nand_release(&entry->chip);
+		chip = &entry->chip;
+		ret = mtd_device_unregister(nand_to_mtd(chip));
+		WARN_ON(ret);
+		nand_cleanup(chip);
 		list_del(&entry->node);
 	}
 }
+6 -10
drivers/mtd/nand/raw/cafe_nand.c
···
 	return nand_prog_page_end_op(chip);
 }
 
-static int cafe_nand_block_bad(struct nand_chip *chip, loff_t ofs)
-{
-	return 0;
-}
-
 /* F_2[X]/(X**6+X+1) */
 static unsigned short gf64_mul(u8 a, u8 b)
 {
···
 	/* Enable the following for a flash based bad block table */
 	cafe->nand.bbt_options = NAND_BBT_USE_FLASH;
 
-	if (skipbbt) {
-		cafe->nand.options |= NAND_SKIP_BBTSCAN;
-		cafe->nand.legacy.block_bad = cafe_nand_block_bad;
-	}
+	if (skipbbt)
+		cafe->nand.options |= NAND_SKIP_BBTSCAN | NAND_NO_BBM_QUIRK;
 
 	if (numtimings && numtimings != 3) {
 		dev_warn(&cafe->pdev->dev, "%d timing register values ignored; precisely three are required\n", numtimings);
···
 	struct mtd_info *mtd = pci_get_drvdata(pdev);
 	struct nand_chip *chip = mtd_to_nand(mtd);
 	struct cafe_priv *cafe = nand_get_controller_data(chip);
+	int ret;
 
 	/* Disable NAND IRQ in global IRQ mask register */
 	cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK);
 	free_irq(pdev->irq, mtd);
-	nand_release(chip);
+	ret = mtd_device_unregister(mtd);
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	free_rs(cafe->rs);
 	pci_iounmap(pdev, cafe->mmio);
 	dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr);
-236
drivers/mtd/nand/raw/cmx270_nand.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (C) 2006 Compulab, Ltd.
- * Mike Rapoport <mike@compulab.co.il>
- *
- * Derived from drivers/mtd/nand/h1910.c (removed in v3.10)
- *   Copyright (C) 2002 Marius Gröger (mag@sysgo.de)
- *   Copyright (c) 2001 Thomas Gleixner (gleixner@autronix.de)
- *
- * Overview:
- *   This is a device driver for the NAND flash device found on the
- *   CM-X270 board.
- */
-
-#include <linux/mtd/rawnand.h>
-#include <linux/mtd/partitions.h>
-#include <linux/slab.h>
-#include <linux/gpio.h>
-#include <linux/module.h>
-
-#include <asm/io.h>
-#include <asm/irq.h>
-#include <asm/mach-types.h>
-
-#include <mach/pxa2xx-regs.h>
-
-#define GPIO_NAND_CS	(11)
-#define GPIO_NAND_RB	(89)
-
-/* MTD structure for CM-X270 board */
-static struct mtd_info *cmx270_nand_mtd;
-
-/* remaped IO address of the device */
-static void __iomem *cmx270_nand_io;
-
-/*
- * Define static partitions for flash device
- */
-static const struct mtd_partition partition_info[] = {
-	[0] = {
-		.name	= "cmx270-0",
-		.offset	= 0,
-		.size	= MTDPART_SIZ_FULL
-	}
-};
-#define NUM_PARTITIONS (ARRAY_SIZE(partition_info))
-
-static u_char cmx270_read_byte(struct nand_chip *this)
-{
-	return (readl(this->legacy.IO_ADDR_R) >> 16);
-}
-
-static void cmx270_write_buf(struct nand_chip *this, const u_char *buf,
-			     int len)
-{
-	int i;
-
-	for (i=0; i<len; i++)
-		writel((*buf++ << 16), this->legacy.IO_ADDR_W);
-}
-
-static void cmx270_read_buf(struct nand_chip *this, u_char *buf, int len)
-{
-	int i;
-
-	for (i=0; i<len; i++)
-		*buf++ = readl(this->legacy.IO_ADDR_R) >> 16;
-}
-
-static inline void nand_cs_on(void)
-{
-	gpio_set_value(GPIO_NAND_CS, 0);
-}
-
-static void nand_cs_off(void)
-{
-	dsb();
-
-	gpio_set_value(GPIO_NAND_CS, 1);
-}
-
-/*
- *	hardware specific access to control-lines
- */
-static void cmx270_hwcontrol(struct nand_chip *this, int dat,
-			     unsigned int ctrl)
-{
-	unsigned int nandaddr = (unsigned int)this->legacy.IO_ADDR_W;
-
-	dsb();
-
-	if (ctrl & NAND_CTRL_CHANGE) {
-		if ( ctrl & NAND_ALE )
-			nandaddr |=  (1 << 3);
-		else
-			nandaddr &= ~(1 << 3);
-		if ( ctrl & NAND_CLE )
-			nandaddr |=  (1 << 2);
-		else
-			nandaddr &= ~(1 << 2);
-		if ( ctrl & NAND_NCE )
-			nand_cs_on();
-		else
-			nand_cs_off();
-	}
-
-	dsb();
-	this->legacy.IO_ADDR_W = (void __iomem*)nandaddr;
-	if (dat != NAND_CMD_NONE)
-		writel((dat << 16), this->legacy.IO_ADDR_W);
-
-	dsb();
-}
-
-/*
- *	read device ready pin
- */
-static int cmx270_device_ready(struct nand_chip *this)
-{
-	dsb();
-
-	return (gpio_get_value(GPIO_NAND_RB));
-}
-
-/*
- * Main initialization routine
- */
-static int __init cmx270_init(void)
-{
-	struct nand_chip *this;
-	int ret;
-
-	if (!(machine_is_armcore() && cpu_is_pxa27x()))
-		return -ENODEV;
-
-	ret = gpio_request(GPIO_NAND_CS, "NAND CS");
-	if (ret) {
-		pr_warn("CM-X270: failed to request NAND CS gpio\n");
-		return ret;
-	}
-
-	gpio_direction_output(GPIO_NAND_CS, 1);
-
-	ret = gpio_request(GPIO_NAND_RB, "NAND R/B");
-	if (ret) {
-		pr_warn("CM-X270: failed to request NAND R/B gpio\n");
-		goto err_gpio_request;
-	}
-
-	gpio_direction_input(GPIO_NAND_RB);
-
-	/* Allocate memory for MTD device structure and private data */
-	this = kzalloc(sizeof(struct nand_chip), GFP_KERNEL);
-	if (!this) {
-		ret = -ENOMEM;
-		goto err_kzalloc;
-	}
-
-	cmx270_nand_io = ioremap(PXA_CS1_PHYS, 12);
-	if (!cmx270_nand_io) {
-		pr_debug("Unable to ioremap NAND device\n");
-		ret = -EINVAL;
-		goto err_ioremap;
-	}
-
-	cmx270_nand_mtd = nand_to_mtd(this);
-
-	/* Link the private data with the MTD structure */
-	cmx270_nand_mtd->owner = THIS_MODULE;
-
-	/* insert callbacks */
-	this->legacy.IO_ADDR_R = cmx270_nand_io;
-	this->legacy.IO_ADDR_W = cmx270_nand_io;
-	this->legacy.cmd_ctrl = cmx270_hwcontrol;
-	this->legacy.dev_ready = cmx270_device_ready;
-
-	/* 15 us command delay time */
-	this->legacy.chip_delay = 20;
-	this->ecc.mode = NAND_ECC_SOFT;
-	this->ecc.algo = NAND_ECC_HAMMING;
-
-	/* read/write functions */
-	this->legacy.read_byte = cmx270_read_byte;
-	this->legacy.read_buf = cmx270_read_buf;
-	this->legacy.write_buf = cmx270_write_buf;
-
-	/* Scan to find existence of the device */
-	ret = nand_scan(this, 1);
-	if (ret) {
-		pr_notice("No NAND device\n");
-		goto err_scan;
-	}
-
-	/* Register the partitions */
-	ret = mtd_device_register(cmx270_nand_mtd, partition_info,
-				  NUM_PARTITIONS);
-	if (ret)
-		goto err_scan;
-
-	/* Return happy */
-	return 0;
-
-err_scan:
-	iounmap(cmx270_nand_io);
-err_ioremap:
-	kfree(this);
-err_kzalloc:
-	gpio_free(GPIO_NAND_RB);
-err_gpio_request:
-	gpio_free(GPIO_NAND_CS);
-
-	return ret;
-
-}
-module_init(cmx270_init);
-
-/*
- * Clean up routine
- */
-static void __exit cmx270_cleanup(void)
-{
-	/* Release resources, unregister device */
-	nand_release(mtd_to_nand(cmx270_nand_mtd));
-
-	gpio_free(GPIO_NAND_RB);
-	gpio_free(GPIO_NAND_CS);
-
-	iounmap(cmx270_nand_io);
-
-	kfree(mtd_to_nand(cmx270_nand_mtd));
-}
-module_exit(cmx270_cleanup);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Mike Rapoport <mike@compulab.co.il>");
-MODULE_DESCRIPTION("NAND flash driver for Compulab CM-X270
Module");
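The removed cmx270 driver above is a classic legacy-interface driver: the NAND chip's 8-bit data bus sits on bits 16..23 of a 32-bit static-memory window (hence `readl() >> 16` and `writel(dat << 16)`), and ALE/CLE are driven through address bits 3 and 2 of that window. A standalone userspace model of just that wiring, not kernel code; `mmio_word` stands in for the ioremapped window and the addresses are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated 32-bit chip-select window; stands in for cmx270_nand_io. */
static uint32_t mmio_word;

/* writel(dat << 16, IO_ADDR_W): the data byte travels on bits 16..23 */
static void cmx270_model_write_byte(uint8_t dat)
{
	mmio_word = (uint32_t)dat << 16;
}

/* readl(IO_ADDR_R) >> 16: recover the byte from the same lanes */
static uint8_t cmx270_model_read_byte(void)
{
	return (mmio_word >> 16) & 0xff;
}

/* cmx270_hwcontrol(): ALE/CLE are address bits 3 and 2 of the window */
static uintptr_t cmx270_model_ctrl_addr(uintptr_t base, int ale, int cle)
{
	uintptr_t addr = base & ~((uintptr_t)((1 << 3) | (1 << 2)));

	if (ale)
		addr |= 1 << 3;
	if (cle)
		addr |= 1 << 2;
	return addr;
}
```

This is also why the driver never needed per-byte accessors wider than 8 bits: every NAND cycle is one 32-bit bus access with the payload on fixed lanes.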
+138 -61
drivers/mtd/nand/raw/cs553x_nand.c
··· 21 21 #include <linux/mtd/rawnand.h> 22 22 #include <linux/mtd/nand_ecc.h> 23 23 #include <linux/mtd/partitions.h> 24 + #include <linux/iopoll.h> 24 25 25 26 #include <asm/msr.h> 26 - #include <asm/io.h> 27 27 28 28 #define NR_CS553X_CONTROLLERS 4 29 29 ··· 89 89 #define CS_NAND_ECC_CLRECC (1<<1) 90 90 #define CS_NAND_ECC_ENECC (1<<0) 91 91 92 - static void cs553x_read_buf(struct nand_chip *this, u_char *buf, int len) 92 + struct cs553x_nand_controller { 93 + struct nand_controller base; 94 + struct nand_chip chip; 95 + void __iomem *mmio; 96 + }; 97 + 98 + static struct cs553x_nand_controller * 99 + to_cs553x(struct nand_controller *controller) 93 100 { 101 + return container_of(controller, struct cs553x_nand_controller, base); 102 + } 103 + 104 + static int cs553x_write_ctrl_byte(struct cs553x_nand_controller *cs553x, 105 + u32 ctl, u8 data) 106 + { 107 + u8 status; 108 + int ret; 109 + 110 + writeb(ctl, cs553x->mmio + MM_NAND_CTL); 111 + writeb(data, cs553x->mmio + MM_NAND_IO); 112 + ret = readb_poll_timeout_atomic(cs553x->mmio + MM_NAND_STS, status, 113 + !(status & CS_NAND_CTLR_BUSY), 1, 114 + 100000); 115 + if (ret) 116 + return ret; 117 + 118 + return 0; 119 + } 120 + 121 + static void cs553x_data_in(struct cs553x_nand_controller *cs553x, void *buf, 122 + unsigned int len) 123 + { 124 + writeb(0, cs553x->mmio + MM_NAND_CTL); 94 125 while (unlikely(len > 0x800)) { 95 - memcpy_fromio(buf, this->legacy.IO_ADDR_R, 0x800); 126 + memcpy_fromio(buf, cs553x->mmio, 0x800); 96 127 buf += 0x800; 97 128 len -= 0x800; 98 129 } 99 - memcpy_fromio(buf, this->legacy.IO_ADDR_R, len); 130 + memcpy_fromio(buf, cs553x->mmio, len); 100 131 } 101 132 102 - static void cs553x_write_buf(struct nand_chip *this, const u_char *buf, int len) 133 + static void cs553x_data_out(struct cs553x_nand_controller *cs553x, 134 + const void *buf, unsigned int len) 103 135 { 136 + writeb(0, cs553x->mmio + MM_NAND_CTL); 104 137 while (unlikely(len > 0x800)) { 105 - 
memcpy_toio(this->legacy.IO_ADDR_R, buf, 0x800); 138 + memcpy_toio(cs553x->mmio, buf, 0x800); 106 139 buf += 0x800; 107 140 len -= 0x800; 108 141 } 109 - memcpy_toio(this->legacy.IO_ADDR_R, buf, len); 142 + memcpy_toio(cs553x->mmio, buf, len); 110 143 } 111 144 112 - static unsigned char cs553x_read_byte(struct nand_chip *this) 145 + static int cs553x_wait_ready(struct cs553x_nand_controller *cs553x, 146 + unsigned int timeout_ms) 113 147 { 114 - return readb(this->legacy.IO_ADDR_R); 148 + u8 mask = CS_NAND_CTLR_BUSY | CS_NAND_STS_FLASH_RDY; 149 + u8 status; 150 + 151 + return readb_poll_timeout(cs553x->mmio + MM_NAND_STS, status, 152 + (status & mask) == CS_NAND_STS_FLASH_RDY, 100, 153 + timeout_ms * 1000); 115 154 } 116 155 117 - static void cs553x_write_byte(struct nand_chip *this, u_char byte) 156 + static int cs553x_exec_instr(struct cs553x_nand_controller *cs553x, 157 + const struct nand_op_instr *instr) 118 158 { 119 - int i = 100000; 159 + unsigned int i; 160 + int ret = 0; 120 161 121 - while (i && readb(this->legacy.IO_ADDR_R + MM_NAND_STS) & CS_NAND_CTLR_BUSY) { 122 - udelay(1); 123 - i--; 162 + switch (instr->type) { 163 + case NAND_OP_CMD_INSTR: 164 + ret = cs553x_write_ctrl_byte(cs553x, CS_NAND_CTL_CLE, 165 + instr->ctx.cmd.opcode); 166 + break; 167 + 168 + case NAND_OP_ADDR_INSTR: 169 + for (i = 0; i < instr->ctx.addr.naddrs; i++) { 170 + ret = cs553x_write_ctrl_byte(cs553x, CS_NAND_CTL_ALE, 171 + instr->ctx.addr.addrs[i]); 172 + if (ret) 173 + break; 174 + } 175 + break; 176 + 177 + case NAND_OP_DATA_IN_INSTR: 178 + cs553x_data_in(cs553x, instr->ctx.data.buf.in, 179 + instr->ctx.data.len); 180 + break; 181 + 182 + case NAND_OP_DATA_OUT_INSTR: 183 + cs553x_data_out(cs553x, instr->ctx.data.buf.out, 184 + instr->ctx.data.len); 185 + break; 186 + 187 + case NAND_OP_WAITRDY_INSTR: 188 + ret = cs553x_wait_ready(cs553x, instr->ctx.waitrdy.timeout_ms); 189 + break; 124 190 } 125 - writeb(byte, this->legacy.IO_ADDR_W + 0x801); 191 + 192 + if 
(instr->delay_ns) 193 + ndelay(instr->delay_ns); 194 + 195 + return ret; 126 196 } 127 197 128 - static void cs553x_hwcontrol(struct nand_chip *this, int cmd, 129 - unsigned int ctrl) 198 + static int cs553x_exec_op(struct nand_chip *this, 199 + const struct nand_operation *op, 200 + bool check_only) 130 201 { 131 - void __iomem *mmio_base = this->legacy.IO_ADDR_R; 132 - if (ctrl & NAND_CTRL_CHANGE) { 133 - unsigned char ctl = (ctrl & ~NAND_CTRL_CHANGE ) ^ 0x01; 134 - writeb(ctl, mmio_base + MM_NAND_CTL); 202 + struct cs553x_nand_controller *cs553x = to_cs553x(this->controller); 203 + unsigned int i; 204 + int ret; 205 + 206 + if (check_only) 207 + return true; 208 + 209 + /* De-assert the CE pin */ 210 + writeb(0, cs553x->mmio + MM_NAND_CTL); 211 + for (i = 0; i < op->ninstrs; i++) { 212 + ret = cs553x_exec_instr(cs553x, &op->instrs[i]); 213 + if (ret) 214 + break; 135 215 } 136 - if (cmd != NAND_CMD_NONE) 137 - cs553x_write_byte(this, cmd); 138 - } 139 216 140 - static int cs553x_device_ready(struct nand_chip *this) 141 - { 142 - void __iomem *mmio_base = this->legacy.IO_ADDR_R; 143 - unsigned char foo = readb(mmio_base + MM_NAND_STS); 217 + /* Re-assert the CE pin. 
*/ 218 + writeb(CS_NAND_CTL_CE, cs553x->mmio + MM_NAND_CTL); 144 219 145 - return (foo & CS_NAND_STS_FLASH_RDY) && !(foo & CS_NAND_CTLR_BUSY); 220 + return ret; 146 221 } 147 222 148 223 static void cs_enable_hwecc(struct nand_chip *this, int mode) 149 224 { 150 - void __iomem *mmio_base = this->legacy.IO_ADDR_R; 225 + struct cs553x_nand_controller *cs553x = to_cs553x(this->controller); 151 226 152 - writeb(0x07, mmio_base + MM_NAND_ECC_CTL); 227 + writeb(0x07, cs553x->mmio + MM_NAND_ECC_CTL); 153 228 } 154 229 155 230 static int cs_calculate_ecc(struct nand_chip *this, const u_char *dat, 156 231 u_char *ecc_code) 157 232 { 233 + struct cs553x_nand_controller *cs553x = to_cs553x(this->controller); 158 234 uint32_t ecc; 159 - void __iomem *mmio_base = this->legacy.IO_ADDR_R; 160 235 161 - ecc = readl(mmio_base + MM_NAND_STS); 236 + ecc = readl(cs553x->mmio + MM_NAND_STS); 162 237 163 238 ecc_code[1] = ecc >> 8; 164 239 ecc_code[0] = ecc >> 16; ··· 241 166 return 0; 242 167 } 243 168 244 - static struct mtd_info *cs553x_mtd[4]; 169 + static struct cs553x_nand_controller *controllers[4]; 170 + 171 + static const struct nand_controller_ops cs553x_nand_controller_ops = { 172 + .exec_op = cs553x_exec_op, 173 + }; 245 174 246 175 static int __init cs553x_init_one(int cs, int mmio, unsigned long adr) 247 176 { 177 + struct cs553x_nand_controller *controller; 248 178 int err = 0; 249 179 struct nand_chip *this; 250 180 struct mtd_info *new_mtd; ··· 263 183 } 264 184 265 185 /* Allocate memory for MTD device structure and private data */ 266 - this = kzalloc(sizeof(struct nand_chip), GFP_KERNEL); 267 - if (!this) { 186 + controller = kzalloc(sizeof(*controller), GFP_KERNEL); 187 + if (!controller) { 268 188 err = -ENOMEM; 269 189 goto out; 270 190 } 271 191 192 + this = &controller->chip; 193 + nand_controller_init(&controller->base); 194 + controller->base.ops = &cs553x_nand_controller_ops; 195 + this->controller = &controller->base; 272 196 new_mtd = nand_to_mtd(this); 273 
197 274 198 /* Link the private data with the MTD structure */ 275 199 new_mtd->owner = THIS_MODULE; 276 200 277 201 /* map physical address */ 278 - this->legacy.IO_ADDR_R = this->legacy.IO_ADDR_W = ioremap(adr, 4096); 279 - if (!this->legacy.IO_ADDR_R) { 202 + controller->mmio = ioremap(adr, 4096); 203 + if (!controller->mmio) { 280 204 pr_warn("ioremap cs553x NAND @0x%08lx failed\n", adr); 281 205 err = -EIO; 282 206 goto out_mtd; 283 207 } 284 - 285 - this->legacy.cmd_ctrl = cs553x_hwcontrol; 286 - this->legacy.dev_ready = cs553x_device_ready; 287 - this->legacy.read_byte = cs553x_read_byte; 288 - this->legacy.read_buf = cs553x_read_buf; 289 - this->legacy.write_buf = cs553x_write_buf; 290 - 291 - this->legacy.chip_delay = 0; 292 208 293 209 this->ecc.mode = NAND_ECC_HW; 294 210 this->ecc.size = 256; ··· 308 232 if (err) 309 233 goto out_free; 310 234 311 - cs553x_mtd[cs] = new_mtd; 235 + controllers[cs] = controller; 312 236 goto out; 313 237 314 238 out_free: 315 239 kfree(new_mtd->name); 316 240 out_ior: 317 - iounmap(this->legacy.IO_ADDR_R); 241 + iounmap(controller->mmio); 318 242 out_mtd: 319 - kfree(this); 243 + kfree(controller); 320 244 out: 321 245 return err; 322 246 } ··· 371 295 /* Register all devices together here. This means we can easily hack it to 372 296 do mtdconcat etc. if we want to. */ 373 297 for (i = 0; i < NR_CS553X_CONTROLLERS; i++) { 374 - if (cs553x_mtd[i]) { 298 + if (controllers[i]) { 375 299 /* If any devices registered, return success. Else the last error. 
*/ 376 - mtd_device_register(cs553x_mtd[i], NULL, 0); 300 + mtd_device_register(nand_to_mtd(&controllers[i]->chip), 301 + NULL, 0); 377 302 err = 0; 378 303 } 379 304 } ··· 389 312 int i; 390 313 391 314 for (i = 0; i < NR_CS553X_CONTROLLERS; i++) { 392 - struct mtd_info *mtd = cs553x_mtd[i]; 393 - struct nand_chip *this; 394 - void __iomem *mmio_base; 315 + struct cs553x_nand_controller *controller = controllers[i]; 316 + struct nand_chip *this = &controller->chip; 317 + struct mtd_info *mtd = nand_to_mtd(this); 318 + int ret; 395 319 396 320 if (!mtd) 397 321 continue; 398 322 399 - this = mtd_to_nand(mtd); 400 - mmio_base = this->legacy.IO_ADDR_R; 401 - 402 323 /* Release resources, unregister device */ 403 - nand_release(this); 324 + ret = mtd_device_unregister(mtd); 325 + WARN_ON(ret); 326 + nand_cleanup(this); 404 327 kfree(mtd->name); 405 - cs553x_mtd[i] = NULL; 328 + controllers[i] = NULL; 406 329 407 330 /* unmap physical address */ 408 - iounmap(mmio_base); 331 + iounmap(controller->mmio); 409 332 410 333 /* Free the MTD device structure */ 411 - kfree(this); 334 + kfree(controller); 412 335 } 413 336 } 414 337
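The cs553x conversion above replaces the legacy cmd_ctrl/read_byte hooks with a single ->exec_op() entry point: the core hands the driver a flat array of instructions (command, address cycles, data in/out, wait-ready) and the driver walks it in order. A minimal userspace model of that dispatch loop; the enum/struct names are simplified stand-ins for the kernel's `nand_op_instr`, and the "controller" merely logs one letter per action:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum instr_type { OP_CMD, OP_ADDR, OP_DATA_IN, OP_WAITRDY };

struct instr {
	enum instr_type type;
	uint8_t cmd;          /* OP_CMD: opcode for the CLE cycle */
	const uint8_t *addrs; /* OP_ADDR: address bytes for ALE cycles */
	size_t naddrs;
	uint8_t *buf;         /* OP_DATA_IN: destination buffer */
	size_t len;
};

static char op_log[32];   /* records what the "controller" did */

static int exec_op(const struct instr *instrs, size_t ninstrs)
{
	size_t i, j, pos = 0;

	for (i = 0; i < ninstrs; i++) {
		const struct instr *in = &instrs[i];

		switch (in->type) {
		case OP_CMD:     /* like cs553x_write_ctrl_byte(CLE, cmd) */
			op_log[pos++] = 'C';
			break;
		case OP_ADDR:    /* one ALE cycle per address byte */
			for (j = 0; j < in->naddrs; j++)
				op_log[pos++] = 'A';
			break;
		case OP_DATA_IN: /* like cs553x_data_in(); fake 0xff data */
			memset(in->buf, 0xff, in->len);
			op_log[pos++] = 'D';
			break;
		case OP_WAITRDY: /* like cs553x_wait_ready() */
			op_log[pos++] = 'W';
			break;
		}
	}
	op_log[pos] = '\0';
	return 0;
}
```

A READID-style operation (command, one address byte, two data-in bytes) then dispatches as "CAD", which is exactly the shape the per-instruction `cs553x_exec_instr()` switch handles.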
+196 -116
drivers/mtd/nand/raw/davinci_nand.c
··· 14 14 #include <linux/module.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/err.h> 17 - #include <linux/io.h> 17 + #include <linux/iopoll.h> 18 18 #include <linux/mtd/rawnand.h> 19 19 #include <linux/mtd/partitions.h> 20 20 #include <linux/slab.h> ··· 38 38 * outputs in a "wire-AND" configuration, with no per-chip signals. 39 39 */ 40 40 struct davinci_nand_info { 41 + struct nand_controller controller; 41 42 struct nand_chip chip; 42 43 43 44 struct platform_device *pdev; ··· 77 76 int offset, unsigned long value) 78 77 { 79 78 __raw_writel(value, info->base + offset); 80 - } 81 - 82 - /*----------------------------------------------------------------------*/ 83 - 84 - /* 85 - * Access to hardware control lines: ALE, CLE, secondary chipselect. 86 - */ 87 - 88 - static void nand_davinci_hwcontrol(struct nand_chip *nand, int cmd, 89 - unsigned int ctrl) 90 - { 91 - struct davinci_nand_info *info = to_davinci_nand(nand_to_mtd(nand)); 92 - void __iomem *addr = info->current_cs; 93 - 94 - /* Did the control lines change? 
*/ 95 - if (ctrl & NAND_CTRL_CHANGE) { 96 - if ((ctrl & NAND_CTRL_CLE) == NAND_CTRL_CLE) 97 - addr += info->mask_cle; 98 - else if ((ctrl & NAND_CTRL_ALE) == NAND_CTRL_ALE) 99 - addr += info->mask_ale; 100 - 101 - nand->legacy.IO_ADDR_W = addr; 102 - } 103 - 104 - if (cmd != NAND_CMD_NONE) 105 - iowrite8(cmd, nand->legacy.IO_ADDR_W); 106 - } 107 - 108 - static void nand_davinci_select_chip(struct nand_chip *nand, int chip) 109 - { 110 - struct davinci_nand_info *info = to_davinci_nand(nand_to_mtd(nand)); 111 - 112 - info->current_cs = info->vaddr; 113 - 114 - /* maybe kick in a second chipselect */ 115 - if (chip > 0) 116 - info->current_cs += info->mask_chipsel; 117 - 118 - info->chip.legacy.IO_ADDR_W = info->current_cs; 119 - info->chip.legacy.IO_ADDR_R = info->chip.legacy.IO_ADDR_W; 120 79 } 121 80 122 81 /*----------------------------------------------------------------------*/ ··· 371 410 return corrected; 372 411 } 373 412 374 - /*----------------------------------------------------------------------*/ 375 - 376 - /* 377 - * NOTE: NAND boot requires ALE == EM_A[1], CLE == EM_A[2], so that's 378 - * how these chips are normally wired. This translates to both 8 and 16 379 - * bit busses using ALE == BIT(3) in byte addresses, and CLE == BIT(4). 413 + /** 414 + * nand_read_page_hwecc_oob_first - hw ecc, read oob first 415 + * @chip: nand chip info structure 416 + * @buf: buffer to store read data 417 + * @oob_required: caller requires OOB data read to chip->oob_poi 418 + * @page: page number to read 380 419 * 381 - * For now we assume that configuration, or any other one which ignores 382 - * the two LSBs for NAND access ... so we can issue 32-bit reads/writes 383 - * and have that transparently morphed into multiple NAND operations. 420 + * Hardware ECC for large page chips, require OOB to be read first. For this 421 + * ECC mode, the write_page method is re-used from ECC_HW. 
These methods 422 + * read/write ECC from the OOB area, unlike the ECC_HW_SYNDROME support with 423 + * multiple ECC steps, follows the "infix ECC" scheme and reads/writes ECC from 424 + * the data area, by overwriting the NAND manufacturer bad block markings. 384 425 */ 385 - static void nand_davinci_read_buf(struct nand_chip *chip, uint8_t *buf, 386 - int len) 426 + static int nand_davinci_read_page_hwecc_oob_first(struct nand_chip *chip, 427 + uint8_t *buf, 428 + int oob_required, int page) 387 429 { 388 - if ((0x03 & ((uintptr_t)buf)) == 0 && (0x03 & len) == 0) 389 - ioread32_rep(chip->legacy.IO_ADDR_R, buf, len >> 2); 390 - else if ((0x01 & ((uintptr_t)buf)) == 0 && (0x01 & len) == 0) 391 - ioread16_rep(chip->legacy.IO_ADDR_R, buf, len >> 1); 392 - else 393 - ioread8_rep(chip->legacy.IO_ADDR_R, buf, len); 394 - } 430 + struct mtd_info *mtd = nand_to_mtd(chip); 431 + int i, eccsize = chip->ecc.size, ret; 432 + int eccbytes = chip->ecc.bytes; 433 + int eccsteps = chip->ecc.steps; 434 + uint8_t *p = buf; 435 + uint8_t *ecc_code = chip->ecc.code_buf; 436 + uint8_t *ecc_calc = chip->ecc.calc_buf; 437 + unsigned int max_bitflips = 0; 395 438 396 - static void nand_davinci_write_buf(struct nand_chip *chip, const uint8_t *buf, 397 - int len) 398 - { 399 - if ((0x03 & ((uintptr_t)buf)) == 0 && (0x03 & len) == 0) 400 - iowrite32_rep(chip->legacy.IO_ADDR_R, buf, len >> 2); 401 - else if ((0x01 & ((uintptr_t)buf)) == 0 && (0x01 & len) == 0) 402 - iowrite16_rep(chip->legacy.IO_ADDR_R, buf, len >> 1); 403 - else 404 - iowrite8_rep(chip->legacy.IO_ADDR_R, buf, len); 405 - } 439 + /* Read the OOB area first */ 440 + ret = nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize); 441 + if (ret) 442 + return ret; 406 443 407 - /* 408 - * Check hardware register for wait status. Returns 1 if device is ready, 409 - * 0 if it is still busy. 
410 - */ 411 - static int nand_davinci_dev_ready(struct nand_chip *chip) 412 - { 413 - struct davinci_nand_info *info = to_davinci_nand(nand_to_mtd(chip)); 444 + ret = nand_read_page_op(chip, page, 0, NULL, 0); 445 + if (ret) 446 + return ret; 414 447 415 - return davinci_nand_readl(info, NANDFSR_OFFSET) & BIT(0); 448 + ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0, 449 + chip->ecc.total); 450 + if (ret) 451 + return ret; 452 + 453 + for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 454 + int stat; 455 + 456 + chip->ecc.hwctl(chip, NAND_ECC_READ); 457 + 458 + ret = nand_read_data_op(chip, p, eccsize, false, false); 459 + if (ret) 460 + return ret; 461 + 462 + chip->ecc.calculate(chip, p, &ecc_calc[i]); 463 + 464 + stat = chip->ecc.correct(chip, p, &ecc_code[i], NULL); 465 + if (stat == -EBADMSG && 466 + (chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) { 467 + /* check for empty pages with bitflips */ 468 + stat = nand_check_erased_ecc_chunk(p, eccsize, 469 + &ecc_code[i], 470 + eccbytes, NULL, 0, 471 + chip->ecc.strength); 472 + } 473 + 474 + if (stat < 0) { 475 + mtd->ecc_stats.failed++; 476 + } else { 477 + mtd->ecc_stats.corrected += stat; 478 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 479 + } 480 + } 481 + return max_bitflips; 416 482 } 417 483 418 484 /*----------------------------------------------------------------------*/ ··· 601 613 break; 602 614 case NAND_ECC_HW: 603 615 if (pdata->ecc_bits == 4) { 616 + int chunks = mtd->writesize / 512; 617 + 618 + if (!chunks || mtd->oobsize < 16) { 619 + dev_dbg(&info->pdev->dev, "too small\n"); 620 + return -EINVAL; 621 + } 622 + 604 623 /* 605 624 * No sanity checks: CPUs must support this, 606 625 * and the chips may not use NAND_BUSWIDTH_16. ··· 630 635 info->chip.ecc.bytes = 10; 631 636 info->chip.ecc.options = NAND_ECC_GENERIC_ERASED_CHECK; 632 637 info->chip.ecc.algo = NAND_ECC_BCH; 638 + 639 + /* 640 + * Update ECC layout if needed ... 
for 1-bit HW ECC, the 641 + * default is OK, but it allocates 6 bytes when only 3 642 + * are needed (for each 512 bytes). For 4-bit HW ECC, 643 + * the default is not usable: 10 bytes needed, not 6. 644 + * 645 + * For small page chips, preserve the manufacturer's 646 + * badblock marking data ... and make sure a flash BBT 647 + * table marker fits in the free bytes. 648 + */ 649 + if (chunks == 1) { 650 + mtd_set_ooblayout(mtd, 651 + &hwecc4_small_ooblayout_ops); 652 + } else if (chunks == 4 || chunks == 8) { 653 + mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops); 654 + info->chip.ecc.read_page = nand_davinci_read_page_hwecc_oob_first; 655 + } else { 656 + return -EIO; 657 + } 633 658 } else { 634 659 /* 1bit ecc hamming */ 635 660 info->chip.ecc.calculate = nand_davinci_calculate_1bit; ··· 665 650 return -EINVAL; 666 651 } 667 652 668 - /* 669 - * Update ECC layout if needed ... for 1-bit HW ECC, the default 670 - * is OK, but it allocates 6 bytes when only 3 are needed (for 671 - * each 512 bytes). For the 4-bit HW ECC, that default is not 672 - * usable: 10 bytes are needed, not 6. 673 - */ 674 - if (pdata->ecc_bits == 4) { 675 - int chunks = mtd->writesize / 512; 653 + return ret; 654 + } 676 655 677 - if (!chunks || mtd->oobsize < 16) { 678 - dev_dbg(&info->pdev->dev, "too small\n"); 679 - return -EINVAL; 680 - } 656 + static void nand_davinci_data_in(struct davinci_nand_info *info, void *buf, 657 + unsigned int len, bool force_8bit) 658 + { 659 + u32 alignment = ((uintptr_t)buf | len) & 3; 681 660 682 - /* For small page chips, preserve the manufacturer's 683 - * badblock marking data ... and make sure a flash BBT 684 - * table marker fits in the free bytes. 
685 - */ 686 - if (chunks == 1) { 687 - mtd_set_ooblayout(mtd, &hwecc4_small_ooblayout_ops); 688 - } else if (chunks == 4 || chunks == 8) { 689 - mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops); 690 - info->chip.ecc.mode = NAND_ECC_HW_OOB_FIRST; 691 - } else { 692 - return -EIO; 661 + if (force_8bit || (alignment & 1)) 662 + ioread8_rep(info->current_cs, buf, len); 663 + else if (alignment & 3) 664 + ioread16_rep(info->current_cs, buf, len >> 1); 665 + else 666 + ioread32_rep(info->current_cs, buf, len >> 2); 667 + } 668 + 669 + static void nand_davinci_data_out(struct davinci_nand_info *info, 670 + const void *buf, unsigned int len, 671 + bool force_8bit) 672 + { 673 + u32 alignment = ((uintptr_t)buf | len) & 3; 674 + 675 + if (force_8bit || (alignment & 1)) 676 + iowrite8_rep(info->current_cs, buf, len); 677 + else if (alignment & 3) 678 + iowrite16_rep(info->current_cs, buf, len >> 1); 679 + else 680 + iowrite32_rep(info->current_cs, buf, len >> 2); 681 + } 682 + 683 + static int davinci_nand_exec_instr(struct davinci_nand_info *info, 684 + const struct nand_op_instr *instr) 685 + { 686 + unsigned int i, timeout_us; 687 + u32 status; 688 + int ret; 689 + 690 + switch (instr->type) { 691 + case NAND_OP_CMD_INSTR: 692 + iowrite8(instr->ctx.cmd.opcode, 693 + info->current_cs + info->mask_cle); 694 + break; 695 + 696 + case NAND_OP_ADDR_INSTR: 697 + for (i = 0; i < instr->ctx.addr.naddrs; i++) { 698 + iowrite8(instr->ctx.addr.addrs[i], 699 + info->current_cs + info->mask_ale); 693 700 } 701 + break; 702 + 703 + case NAND_OP_DATA_IN_INSTR: 704 + nand_davinci_data_in(info, instr->ctx.data.buf.in, 705 + instr->ctx.data.len, 706 + instr->ctx.data.force_8bit); 707 + break; 708 + 709 + case NAND_OP_DATA_OUT_INSTR: 710 + nand_davinci_data_out(info, instr->ctx.data.buf.out, 711 + instr->ctx.data.len, 712 + instr->ctx.data.force_8bit); 713 + break; 714 + 715 + case NAND_OP_WAITRDY_INSTR: 716 + timeout_us = instr->ctx.waitrdy.timeout_ms * 1000; 717 + ret = 
readl_relaxed_poll_timeout(info->base + NANDFSR_OFFSET, 718 + status, status & BIT(0), 100, 719 + timeout_us); 720 + if (ret) 721 + return ret; 722 + 723 + break; 694 724 } 695 725 696 - return ret; 726 + if (instr->delay_ns) 727 + ndelay(instr->delay_ns); 728 + 729 + return 0; 730 + } 731 + 732 + static int davinci_nand_exec_op(struct nand_chip *chip, 733 + const struct nand_operation *op, 734 + bool check_only) 735 + { 736 + struct davinci_nand_info *info = to_davinci_nand(nand_to_mtd(chip)); 737 + unsigned int i; 738 + 739 + if (check_only) 740 + return 0; 741 + 742 + info->current_cs = info->vaddr + (op->cs * info->mask_chipsel); 743 + 744 + for (i = 0; i < op->ninstrs; i++) { 745 + int ret; 746 + 747 + ret = davinci_nand_exec_instr(info, &op->instrs[i]); 748 + if (ret) 749 + return ret; 750 + } 751 + 752 + return 0; 697 753 } 698 754 699 755 static const struct nand_controller_ops davinci_nand_controller_ops = { 700 756 .attach_chip = davinci_nand_attach_chip, 757 + .exec_op = davinci_nand_exec_op, 701 758 }; 702 759 703 760 static int nand_davinci_probe(struct platform_device *pdev) ··· 833 746 mtd->dev.parent = &pdev->dev; 834 747 nand_set_flash_node(&info->chip, pdev->dev.of_node); 835 748 836 - info->chip.legacy.IO_ADDR_R = vaddr; 837 - info->chip.legacy.IO_ADDR_W = vaddr; 838 - info->chip.legacy.chip_delay = 0; 839 - info->chip.legacy.select_chip = nand_davinci_select_chip; 840 - 841 749 /* options such as NAND_BBT_USE_FLASH */ 842 750 info->chip.bbt_options = pdata->bbt_options; 843 751 /* options such as 16-bit widths */ ··· 849 767 info->mask_ale = pdata->mask_ale ? : MASK_ALE; 850 768 info->mask_cle = pdata->mask_cle ? 
: MASK_CLE; 851 769 852 - /* Set address of hardware control function */ 853 - info->chip.legacy.cmd_ctrl = nand_davinci_hwcontrol; 854 - info->chip.legacy.dev_ready = nand_davinci_dev_ready; 855 - 856 - /* Speed up buffer I/O */ 857 - info->chip.legacy.read_buf = nand_davinci_read_buf; 858 - info->chip.legacy.write_buf = nand_davinci_write_buf; 859 - 860 770 /* Use board-specific ECC config */ 861 771 info->chip.ecc.mode = pdata->ecc_mode; 862 772 ··· 862 788 spin_unlock_irq(&davinci_nand_lock); 863 789 864 790 /* Scan to find existence of the device(s) */ 865 - info->chip.legacy.dummy_controller.ops = &davinci_nand_controller_ops; 791 + nand_controller_init(&info->controller); 792 + info->controller.ops = &davinci_nand_controller_ops; 793 + info->chip.controller = &info->controller; 866 794 ret = nand_scan(&info->chip, pdata->mask_chipsel ? 2 : 1); 867 795 if (ret < 0) { 868 796 dev_dbg(&pdev->dev, "no NAND chip(s) found\n"); ··· 893 817 static int nand_davinci_remove(struct platform_device *pdev) 894 818 { 895 819 struct davinci_nand_info *info = platform_get_drvdata(pdev); 820 + struct nand_chip *chip = &info->chip; 821 + int ret; 896 822 897 823 spin_lock_irq(&davinci_nand_lock); 898 824 if (info->chip.ecc.mode == NAND_ECC_HW_SYNDROME) 899 825 ecc4_busy = false; 900 826 spin_unlock_irq(&davinci_nand_lock); 901 827 902 - nand_release(&info->chip); 828 + ret = mtd_device_unregister(nand_to_mtd(chip)); 829 + WARN_ON(ret); 830 + nand_cleanup(chip); 903 831 904 832 return 0; 905 833 }
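In the davinci rework above, `nand_davinci_data_in()`/`data_out()` pick the widest MMIO access that both the buffer address and the transfer length permit, by OR-ing the two and testing the low bits (`((uintptr_t)buf | len) & 3`). A standalone model of just that width selection, returning the access size in bytes:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Mirrors the dispatch in nand_davinci_data_in()/data_out(): an odd
 * address or length forces byte access, a 2-byte-aligned combination
 * allows 16-bit access, and full 4-byte alignment allows 32-bit access.
 */
static int access_width(uintptr_t buf, unsigned int len, int force_8bit)
{
	uintptr_t alignment = (buf | len) & 3;

	if (force_8bit || (alignment & 1))
		return 1;	/* ioread8_rep / iowrite8_rep */
	else if (alignment & 3)
		return 2;	/* ioread16_rep / iowrite16_rep */
	return 4;		/* ioread32_rep / iowrite32_rep */
}
```

OR-ing address and length works because any misalignment in either operand sets a low bit in the result, so one mask test covers both conditions; `force_8bit` corresponds to the `nand_op_instr` data flag that raw-ID and parameter-page reads use.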
+45 -15
drivers/mtd/nand/raw/denali.c
··· 764 764 static int denali_setup_data_interface(struct nand_chip *chip, int chipnr, 765 765 const struct nand_data_interface *conf) 766 766 { 767 + static const unsigned int data_setup_on_host = 10000; 767 768 struct denali_controller *denali = to_denali_controller(chip); 768 769 struct denali_chip_sel *sel; 769 770 const struct nand_sdr_timings *timings; ··· 796 795 return 0; 797 796 798 797 sel = &to_denali_chip(chip)->sels[chipnr]; 799 - 800 - /* tREA -> ACC_CLKS */ 801 - acc_clks = DIV_ROUND_UP(timings->tREA_max, t_x); 802 - acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE); 803 - 804 - tmp = ioread32(denali->reg + ACC_CLKS); 805 - tmp &= ~ACC_CLKS__VALUE; 806 - tmp |= FIELD_PREP(ACC_CLKS__VALUE, acc_clks); 807 - sel->acc_clks = tmp; 808 798 809 799 /* tRWH -> RE_2_WE */ 810 800 re_2_we = DIV_ROUND_UP(timings->tRHW_min, t_x); ··· 854 862 tmp |= FIELD_PREP(RDWR_EN_HI_CNT__VALUE, rdwr_en_hi); 855 863 sel->rdwr_en_hi_cnt = tmp; 856 864 857 - /* tRP, tWP -> RDWR_EN_LO_CNT */ 865 + /* 866 + * tREA -> ACC_CLKS 867 + * tRP, tWP, tRHOH, tRC, tWC -> RDWR_EN_LO_CNT 868 + */ 869 + 870 + /* 871 + * Determine the minimum of acc_clks to meet the setup timing when 872 + * capturing the incoming data. 873 + * 874 + * The delay on the chip side is well-defined as tREA, but we need to 875 + * take additional delay into account. This includes a certain degree 876 + * of uncertainty, such as signal propagation delays on the PCB and 877 + * in the SoC, load capacity of the I/O pins, etc. 
878 + */ 879 + acc_clks = DIV_ROUND_UP(timings->tREA_max + data_setup_on_host, t_x); 880 + 881 + /* Determine the minimum of rdwr_en_lo_cnt from RE#/WE# pulse width */ 858 882 rdwr_en_lo = DIV_ROUND_UP(max(timings->tRP_min, timings->tWP_min), t_x); 883 + 884 + /* Extend rdwr_en_lo to meet the data hold timing */ 885 + rdwr_en_lo = max_t(int, rdwr_en_lo, 886 + acc_clks - timings->tRHOH_min / t_x); 887 + 888 + /* Extend rdwr_en_lo to meet the requirement for RE#/WE# cycle time */ 859 889 rdwr_en_lo_hi = DIV_ROUND_UP(max(timings->tRC_min, timings->tWC_min), 860 890 t_x); 861 - rdwr_en_lo_hi = max_t(int, rdwr_en_lo_hi, mult_x); 862 891 rdwr_en_lo = max(rdwr_en_lo, rdwr_en_lo_hi - rdwr_en_hi); 863 892 rdwr_en_lo = min_t(int, rdwr_en_lo, RDWR_EN_LO_CNT__VALUE); 893 + 894 + /* Center the data latch timing for extra safety */ 895 + acc_clks = (acc_clks + rdwr_en_lo + 896 + DIV_ROUND_UP(timings->tRHOH_min, t_x)) / 2; 897 + acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE); 898 + 899 + tmp = ioread32(denali->reg + ACC_CLKS); 900 + tmp &= ~ACC_CLKS__VALUE; 901 + tmp |= FIELD_PREP(ACC_CLKS__VALUE, acc_clks); 902 + sel->acc_clks = tmp; 864 903 865 904 tmp = ioread32(denali->reg + RDWR_EN_LO_CNT); 866 905 tmp &= ~RDWR_EN_LO_CNT__VALUE; ··· 1226 1203 mtd->name = "denali-nand"; 1227 1204 1228 1205 if (denali->dma_avail) { 1229 - chip->options |= NAND_USE_BOUNCE_BUFFER; 1206 + chip->options |= NAND_USES_DMA; 1230 1207 chip->buf_align = 16; 1231 1208 } 1232 1209 ··· 1359 1336 1360 1337 void denali_remove(struct denali_controller *denali) 1361 1338 { 1362 - struct denali_chip *dchip; 1339 + struct denali_chip *dchip, *tmp; 1340 + struct nand_chip *chip; 1341 + int ret; 1363 1342 1364 - list_for_each_entry(dchip, &denali->chips, node) 1365 - nand_release(&dchip->chip); 1343 + list_for_each_entry_safe(dchip, tmp, &denali->chips, node) { 1344 + chip = &dchip->chip; 1345 + ret = mtd_device_unregister(nand_to_mtd(chip)); 1346 + WARN_ON(ret); 1347 + nand_cleanup(chip); 1348 + 
list_del(&dchip->node); 1349 + } 1366 1350 1367 1351 denali_disable_irq(denali); 1368 1352 }
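The denali change above moves the ACC_CLKS computation after RDWR_EN_LO_CNT so the data-latch point can account for host-side setup delay and tRHOH, then centres the latch between the earliest and latest safe clock. A standalone model of that arithmetic: timings are in picoseconds, `data_setup_on_host = 10000` ps mirrors the constant the patch introduces, the register clamping (`min_t` against ACC_CLKS__VALUE) and the tRC/tWC cycle-time extension are omitted, and the numbers used in testing are illustrative rather than taken from any datasheet:

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/*
 * t_x is one controller clock in ps.  Step 1: latch no earlier than
 * tREA plus assumed host-side setup slack.  Step 2: stretch RE#/WE#
 * low so the data is still held (tRHOH) when the latch fires.
 * Step 3: centre the latch point for extra margin, as the patch does.
 */
static int denali_acc_clks(int tREA_max, int tRHOH_min, int tRP_min, int t_x)
{
	const int data_setup_on_host = 10000;	/* ps, assumed slack */
	int acc_clks, rdwr_en_lo;

	acc_clks = DIV_ROUND_UP(tREA_max + data_setup_on_host, t_x);
	rdwr_en_lo = DIV_ROUND_UP(tRP_min, t_x);
	if (rdwr_en_lo < acc_clks - tRHOH_min / t_x)
		rdwr_en_lo = acc_clks - tRHOH_min / t_x;
	acc_clks = (acc_clks + rdwr_en_lo +
		    DIV_ROUND_UP(tRHOH_min, t_x)) / 2;
	return acc_clks;
}
```

The centring step is the "add more delays before latching incoming data" part of the change: instead of latching at the minimum legal clock, the latch lands midway through the window in which the data is known valid.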
+207 -334
drivers/mtd/nand/raw/diskonchip.c
··· 58 58 static struct mtd_info *doclist = NULL; 59 59 60 60 struct doc_priv { 61 + struct nand_controller base; 61 62 void __iomem *virtadr; 62 63 unsigned long physadr; 63 64 u_char ChipID; ··· 70 69 int mh1_page; 71 70 struct rs_control *rs_decoder; 72 71 struct mtd_info *nextdoc; 72 + bool supports_32b_reads; 73 73 74 74 /* Handle the last stage of initialization (BBT scan, partitioning) */ 75 75 int (*late_init)(struct mtd_info *mtd); ··· 85 83 #define DoC_is_MillenniumPlus(doc) ((doc)->ChipID == DOC_ChipID_DocMilPlus16 || (doc)->ChipID == DOC_ChipID_DocMilPlus32) 86 84 #define DoC_is_Millennium(doc) ((doc)->ChipID == DOC_ChipID_DocMil) 87 85 #define DoC_is_2000(doc) ((doc)->ChipID == DOC_ChipID_Doc2k) 88 - 89 - static void doc200x_hwcontrol(struct nand_chip *this, int cmd, 90 - unsigned int bitmask); 91 - static void doc200x_select_chip(struct nand_chip *this, int chip); 92 86 93 87 static int debug = 0; 94 88 module_param(debug, int, 0); ··· 300 302 WriteDOC(datum, docptr, 2k_CDSN_IO); 301 303 } 302 304 303 - static u_char doc2000_read_byte(struct nand_chip *this) 304 - { 305 - struct doc_priv *doc = nand_get_controller_data(this); 306 - void __iomem *docptr = doc->virtadr; 307 - u_char ret; 308 - 309 - ReadDOC(docptr, CDSNSlowIO); 310 - DoC_Delay(doc, 2); 311 - ret = ReadDOC(docptr, 2k_CDSN_IO); 312 - if (debug) 313 - printk("read_byte returns %02x\n", ret); 314 - return ret; 315 - } 316 - 317 305 static void doc2000_writebuf(struct nand_chip *this, const u_char *buf, 318 306 int len) 319 307 { ··· 321 337 { 322 338 struct doc_priv *doc = nand_get_controller_data(this); 323 339 void __iomem *docptr = doc->virtadr; 340 + u32 *buf32 = (u32 *)buf; 324 341 int i; 325 342 326 343 if (debug) 327 344 printk("readbuf of %d bytes: ", len); 328 345 329 - for (i = 0; i < len; i++) 330 - buf[i] = ReadDOC(docptr, 2k_CDSN_IO + i); 346 + if (!doc->supports_32b_reads || 347 + ((((unsigned long)buf) | len) & 3)) { 348 + for (i = 0; i < len; i++) 349 + buf[i] = 
+			buf[i] = ReadDOC(docptr, 2k_CDSN_IO + i);
+	} else {
+		for (i = 0; i < len / 4; i++)
+			buf32[i] = readl(docptr + DoC_2k_CDSN_IO + i);
+	}
 }
 
-static void doc2000_readbuf_dword(struct nand_chip *this, u_char *buf, int len)
+/*
+ * We need our own readid() here because it's called before the NAND chip
+ * has been initialized, and calling nand_op_readid() would lead to a NULL
+ * pointer exception when dereferencing the NAND timings.
+ */
+static void doc200x_readid(struct nand_chip *this, unsigned int cs, u8 *id)
 {
-	struct doc_priv *doc = nand_get_controller_data(this);
-	void __iomem *docptr = doc->virtadr;
-	int i;
+	u8 addr = 0;
+	struct nand_op_instr instrs[] = {
+		NAND_OP_CMD(NAND_CMD_READID, 0),
+		NAND_OP_ADDR(1, &addr, 50),
+		NAND_OP_8BIT_DATA_IN(2, id, 0),
+	};
 
-	if (debug)
-		printk("readbuf_dword of %d bytes: ", len);
+	struct nand_operation op = NAND_OPERATION(cs, instrs);
 
-	if (unlikely((((unsigned long)buf) | len) & 3)) {
-		for (i = 0; i < len; i++) {
-			*(uint8_t *) (&buf[i]) = ReadDOC(docptr, 2k_CDSN_IO + i);
-		}
-	} else {
-		for (i = 0; i < len; i += 4) {
-			*(uint32_t *) (&buf[i]) = readl(docptr + DoC_2k_CDSN_IO + i);
-		}
-	}
+	if (!id)
+		op.ninstrs--;
+
+	this->controller->ops->exec_op(this, &op, false);
 }
 
 static uint16_t __init doc200x_ident_chip(struct mtd_info *mtd, int nr)
···
 	struct nand_chip *this = mtd_to_nand(mtd);
 	struct doc_priv *doc = nand_get_controller_data(this);
 	uint16_t ret;
+	u8 id[2];
 
-	doc200x_select_chip(this, nr);
-	doc200x_hwcontrol(this, NAND_CMD_READID,
-			  NAND_CTRL_CLE | NAND_CTRL_CHANGE);
-	doc200x_hwcontrol(this, 0, NAND_CTRL_ALE | NAND_CTRL_CHANGE);
-	doc200x_hwcontrol(this, NAND_CMD_NONE, NAND_NCE | NAND_CTRL_CHANGE);
+	doc200x_readid(this, nr, id);
 
-	/* We can't use dev_ready here, but at least we wait for the
-	 * command to complete
-	 */
-	udelay(50);
-
-	ret = this->legacy.read_byte(this) << 8;
-	ret |= this->legacy.read_byte(this);
+	ret = ((u16)id[0] << 8) | id[1];
 
 	if (doc->ChipID == DOC_ChipID_Doc2k && try_dword && !nr) {
 		/* First chip probe. See if we get same results by 32-bit access */
···
 		} ident;
 		void __iomem *docptr = doc->virtadr;
 
-		doc200x_hwcontrol(this, NAND_CMD_READID,
-				  NAND_CTRL_CLE | NAND_CTRL_CHANGE);
-		doc200x_hwcontrol(this, 0, NAND_CTRL_ALE | NAND_CTRL_CHANGE);
-		doc200x_hwcontrol(this, NAND_CMD_NONE,
-				  NAND_NCE | NAND_CTRL_CHANGE);
-
-		udelay(50);
+		doc200x_readid(this, nr, NULL);
 
 		ident.dword = readl(docptr + DoC_2k_CDSN_IO);
 		if (((ident.byte[0] << 8) | ident.byte[1]) == ret) {
 			pr_info("DiskOnChip 2000 responds to DWORD access\n");
-			this->legacy.read_buf = &doc2000_readbuf_dword;
+			doc->supports_32b_reads = true;
 		}
 	}
 
···
 	pr_debug("Detected %d chips per floor.\n", i);
 }
 
-static int doc200x_wait(struct nand_chip *this)
-{
-	struct doc_priv *doc = nand_get_controller_data(this);
-
-	int status;
-
-	DoC_WaitReady(doc);
-	nand_status_op(this, NULL);
-	DoC_WaitReady(doc);
-	status = (int)this->legacy.read_byte(this);
-
-	return status;
-}
-
 static void doc2001_write_byte(struct nand_chip *this, u_char datum)
 {
 	struct doc_priv *doc = nand_get_controller_data(this);
···
 	WriteDOC(datum, docptr, CDSNSlowIO);
 	WriteDOC(datum, docptr, Mil_CDSN_IO);
 	WriteDOC(datum, docptr, WritePipeTerm);
-}
-
-static u_char doc2001_read_byte(struct nand_chip *this)
-{
-	struct doc_priv *doc = nand_get_controller_data(this);
-	void __iomem *docptr = doc->virtadr;
-
-	//ReadDOC(docptr, CDSNSlowIO);
-	/* 11.4.5 -- delay twice to allow extended length cycle */
-	DoC_Delay(doc, 2);
-	ReadDOC(docptr, ReadPipeInit);
-	//return ReadDOC(docptr, Mil_CDSN_IO);
-	return ReadDOC(docptr, LastDataRead);
 }
 
 static void doc2001_writebuf(struct nand_chip *this, const u_char *buf, int len)
···
 
 	/* Terminate read pipeline */
 	buf[i] = ReadDOC(docptr, LastDataRead);
-}
-
-static u_char doc2001plus_read_byte(struct nand_chip *this)
-{
-	struct doc_priv *doc = nand_get_controller_data(this);
-	void __iomem *docptr = doc->virtadr;
-	u_char ret;
-
-	ReadDOC(docptr, Mplus_ReadPipeInit);
-	ReadDOC(docptr, Mplus_ReadPipeInit);
-	ret = ReadDOC(docptr, Mplus_LastDataRead);
-	if (debug)
-		printk("read_byte returns %02x\n", ret);
-	return ret;
 }
 
 static void doc2001plus_writebuf(struct nand_chip *this, const u_char *buf, int len)
···
 	}
 
 	/* Terminate read pipeline */
-	buf[len - 2] = ReadDOC(docptr, Mplus_LastDataRead);
-	if (debug && i < 16)
-		printk("%02x ", buf[len - 2]);
+	if (len >= 2) {
+		buf[len - 2] = ReadDOC(docptr, Mplus_LastDataRead);
+		if (debug && i < 16)
+			printk("%02x ", buf[len - 2]);
+	}
+
 	buf[len - 1] = ReadDOC(docptr, Mplus_LastDataRead);
 	if (debug && i < 16)
 		printk("%02x ", buf[len - 1]);
···
 		printk("\n");
 }
 
-static void doc2001plus_select_chip(struct nand_chip *this, int chip)
+static void doc200x_write_control(struct doc_priv *doc, u8 value)
+{
+	WriteDOC(value, doc->virtadr, CDSNControl);
+	/* 11.4.3 -- 4 NOPs after CSDNControl write */
+	DoC_Delay(doc, 4);
+}
+
+static void doc200x_exec_instr(struct nand_chip *this,
+			       const struct nand_op_instr *instr)
 {
 	struct doc_priv *doc = nand_get_controller_data(this);
-	void __iomem *docptr = doc->virtadr;
-	int floor = 0;
+	unsigned int i;
 
-	if (debug)
-		printk("select chip (%d)\n", chip);
+	switch (instr->type) {
+	case NAND_OP_CMD_INSTR:
+		doc200x_write_control(doc, CDSN_CTRL_CE | CDSN_CTRL_CLE);
+		doc2000_write_byte(this, instr->ctx.cmd.opcode);
+		break;
 
-	if (chip == -1) {
-		/* Disable flash internally */
-		WriteDOC(0, docptr, Mplus_FlashSelect);
-		return;
+	case NAND_OP_ADDR_INSTR:
+		doc200x_write_control(doc, CDSN_CTRL_CE | CDSN_CTRL_ALE);
+		for (i = 0; i < instr->ctx.addr.naddrs; i++) {
+			u8 addr = instr->ctx.addr.addrs[i];
+
+			if (DoC_is_2000(doc))
+				doc2000_write_byte(this, addr);
+			else
+				doc2001_write_byte(this, addr);
+		}
+		break;
+
+	case NAND_OP_DATA_IN_INSTR:
+		doc200x_write_control(doc, CDSN_CTRL_CE);
+		if (DoC_is_2000(doc))
+			doc2000_readbuf(this, instr->ctx.data.buf.in,
+					instr->ctx.data.len);
+		else
+			doc2001_readbuf(this, instr->ctx.data.buf.in,
+					instr->ctx.data.len);
+		break;
+
+	case NAND_OP_DATA_OUT_INSTR:
+		doc200x_write_control(doc, CDSN_CTRL_CE);
+		if (DoC_is_2000(doc))
+			doc2000_writebuf(this, instr->ctx.data.buf.out,
+					 instr->ctx.data.len);
+		else
+			doc2001_writebuf(this, instr->ctx.data.buf.out,
+					 instr->ctx.data.len);
+		break;
+
+	case NAND_OP_WAITRDY_INSTR:
+		DoC_WaitReady(doc);
+		break;
 	}
 
-	floor = chip / doc->chips_per_floor;
-	chip -= (floor * doc->chips_per_floor);
+	if (instr->delay_ns)
+		ndelay(instr->delay_ns);
+}
+
+static int doc200x_exec_op(struct nand_chip *this,
+			   const struct nand_operation *op,
+			   bool check_only)
+{
+	struct doc_priv *doc = nand_get_controller_data(this);
+	unsigned int i;
+
+	if (check_only)
+		return true;
+
+	doc->curchip = op->cs % doc->chips_per_floor;
+	doc->curfloor = op->cs / doc->chips_per_floor;
+
+	WriteDOC(doc->curfloor, doc->virtadr, FloorSelect);
+	WriteDOC(doc->curchip, doc->virtadr, CDSNDeviceSelect);
+
+	/* Assert CE pin */
+	doc200x_write_control(doc, CDSN_CTRL_CE);
+
+	for (i = 0; i < op->ninstrs; i++)
+		doc200x_exec_instr(this, &op->instrs[i]);
+
+	/* De-assert CE pin */
+	doc200x_write_control(doc, 0);
+
+	return 0;
+}
+
+static void doc2001plus_write_pipe_term(struct doc_priv *doc)
+{
+	WriteDOC(0x00, doc->virtadr, Mplus_WritePipeTerm);
+	WriteDOC(0x00, doc->virtadr, Mplus_WritePipeTerm);
+}
+
+static void doc2001plus_exec_instr(struct nand_chip *this,
+				   const struct nand_op_instr *instr)
+{
+	struct doc_priv *doc = nand_get_controller_data(this);
+	unsigned int i;
+
+	switch (instr->type) {
+	case NAND_OP_CMD_INSTR:
+		WriteDOC(instr->ctx.cmd.opcode, doc->virtadr, Mplus_FlashCmd);
+		doc2001plus_write_pipe_term(doc);
+		break;
+
+	case NAND_OP_ADDR_INSTR:
+		for (i = 0; i < instr->ctx.addr.naddrs; i++) {
+			u8 addr = instr->ctx.addr.addrs[i];
+
+			WriteDOC(addr, doc->virtadr, Mplus_FlashAddress);
+		}
+		doc2001plus_write_pipe_term(doc);
+		/* deassert ALE */
+		WriteDOC(0, doc->virtadr, Mplus_FlashControl);
+		break;
+
+	case NAND_OP_DATA_IN_INSTR:
+		doc2001plus_readbuf(this, instr->ctx.data.buf.in,
+				    instr->ctx.data.len);
+		break;
+	case NAND_OP_DATA_OUT_INSTR:
+		doc2001plus_writebuf(this, instr->ctx.data.buf.out,
+				     instr->ctx.data.len);
+		doc2001plus_write_pipe_term(doc);
+		break;
+	case NAND_OP_WAITRDY_INSTR:
+		DoC_WaitReady(doc);
+		break;
+	}
+
+	if (instr->delay_ns)
+		ndelay(instr->delay_ns);
+}
+
+static int doc2001plus_exec_op(struct nand_chip *this,
+			       const struct nand_operation *op,
+			       bool check_only)
+{
+	struct doc_priv *doc = nand_get_controller_data(this);
+	unsigned int i;
+
+	if (check_only)
+		return true;
+
+	doc->curchip = op->cs % doc->chips_per_floor;
+	doc->curfloor = op->cs / doc->chips_per_floor;
 
 	/* Assert ChipEnable and deassert WriteProtect */
-	WriteDOC((DOC_FLASH_CE), docptr, Mplus_FlashSelect);
-	nand_reset_op(this);
+	WriteDOC(DOC_FLASH_CE, doc->virtadr, Mplus_FlashSelect);
 
-	doc->curchip = chip;
-	doc->curfloor = floor;
-}
+	for (i = 0; i < op->ninstrs; i++)
+		doc2001plus_exec_instr(this, &op->instrs[i]);
 
-static void doc200x_select_chip(struct nand_chip *this, int chip)
-{
-	struct doc_priv *doc = nand_get_controller_data(this);
-	void __iomem *docptr = doc->virtadr;
-	int floor = 0;
+	/* De-assert ChipEnable */
+	WriteDOC(0, doc->virtadr, Mplus_FlashSelect);
 
-	if (debug)
-		printk("select chip (%d)\n", chip);
-
-	if (chip == -1)
-		return;
-
-	floor = chip / doc->chips_per_floor;
-	chip -= (floor * doc->chips_per_floor);
-
-	/* 11.4.4 -- deassert CE before changing chip */
-	doc200x_hwcontrol(this, NAND_CMD_NONE, 0 | NAND_CTRL_CHANGE);
-
-	WriteDOC(floor, docptr, FloorSelect);
-	WriteDOC(chip, docptr, CDSNDeviceSelect);
-
-	doc200x_hwcontrol(this, NAND_CMD_NONE, NAND_NCE | NAND_CTRL_CHANGE);
-
-	doc->curchip = chip;
-	doc->curfloor = floor;
-}
-
-#define CDSN_CTRL_MSK (CDSN_CTRL_CE | CDSN_CTRL_CLE | CDSN_CTRL_ALE)
-
-static void doc200x_hwcontrol(struct nand_chip *this, int cmd,
-			      unsigned int ctrl)
-{
-	struct doc_priv *doc = nand_get_controller_data(this);
-	void __iomem *docptr = doc->virtadr;
-
-	if (ctrl & NAND_CTRL_CHANGE) {
-		doc->CDSNControl &= ~CDSN_CTRL_MSK;
-		doc->CDSNControl |= ctrl & CDSN_CTRL_MSK;
-		if (debug)
-			printk("hwcontrol(%d): %02x\n", cmd, doc->CDSNControl);
-		WriteDOC(doc->CDSNControl, docptr, CDSNControl);
-		/* 11.4.3 -- 4 NOPs after CSDNControl write */
-		DoC_Delay(doc, 4);
-	}
-	if (cmd != NAND_CMD_NONE) {
-		if (DoC_is_2000(doc))
-			doc2000_write_byte(this, cmd);
-		else
-			doc2001_write_byte(this, cmd);
-	}
-}
-
-static void doc2001plus_command(struct nand_chip *this, unsigned command,
-				int column, int page_addr)
-{
-	struct mtd_info *mtd = nand_to_mtd(this);
-	struct doc_priv *doc = nand_get_controller_data(this);
-	void __iomem *docptr = doc->virtadr;
-
-	/*
-	 * Must terminate write pipeline before sending any commands
-	 * to the device.
-	 */
-	if (command == NAND_CMD_PAGEPROG) {
-		WriteDOC(0x00, docptr, Mplus_WritePipeTerm);
-		WriteDOC(0x00, docptr, Mplus_WritePipeTerm);
-	}
-
-	/*
-	 * Write out the command to the device.
-	 */
-	if (command == NAND_CMD_SEQIN) {
-		int readcmd;
-
-		if (column >= mtd->writesize) {
-			/* OOB area */
-			column -= mtd->writesize;
-			readcmd = NAND_CMD_READOOB;
-		} else if (column < 256) {
-			/* First 256 bytes --> READ0 */
-			readcmd = NAND_CMD_READ0;
-		} else {
-			column -= 256;
-			readcmd = NAND_CMD_READ1;
-		}
-		WriteDOC(readcmd, docptr, Mplus_FlashCmd);
-	}
-	WriteDOC(command, docptr, Mplus_FlashCmd);
-	WriteDOC(0, docptr, Mplus_WritePipeTerm);
-	WriteDOC(0, docptr, Mplus_WritePipeTerm);
-
-	if (column != -1 || page_addr != -1) {
-		/* Serially input address */
-		if (column != -1) {
-			/* Adjust columns for 16 bit buswidth */
-			if (this->options & NAND_BUSWIDTH_16 &&
-			    !nand_opcode_8bits(command))
-				column >>= 1;
-			WriteDOC(column, docptr, Mplus_FlashAddress);
-		}
-		if (page_addr != -1) {
-			WriteDOC((unsigned char)(page_addr & 0xff), docptr, Mplus_FlashAddress);
-			WriteDOC((unsigned char)((page_addr >> 8) & 0xff), docptr, Mplus_FlashAddress);
-			if (this->options & NAND_ROW_ADDR_3) {
-				WriteDOC((unsigned char)((page_addr >> 16) & 0x0f), docptr, Mplus_FlashAddress);
-				printk("high density\n");
-			}
-		}
-		WriteDOC(0, docptr, Mplus_WritePipeTerm);
-		WriteDOC(0, docptr, Mplus_WritePipeTerm);
-		/* deassert ALE */
-		if (command == NAND_CMD_READ0 || command == NAND_CMD_READ1 ||
-		    command == NAND_CMD_READOOB || command == NAND_CMD_READID)
-			WriteDOC(0, docptr, Mplus_FlashControl);
-	}
-
-	/*
-	 * program and erase have their own busy handlers
-	 * status and sequential in needs no delay
-	 */
-	switch (command) {
-
-	case NAND_CMD_PAGEPROG:
-	case NAND_CMD_ERASE1:
-	case NAND_CMD_ERASE2:
-	case NAND_CMD_SEQIN:
-	case NAND_CMD_STATUS:
-		return;
-
-	case NAND_CMD_RESET:
-		if (this->legacy.dev_ready)
-			break;
-		udelay(this->legacy.chip_delay);
-		WriteDOC(NAND_CMD_STATUS, docptr, Mplus_FlashCmd);
-		WriteDOC(0, docptr, Mplus_WritePipeTerm);
-		WriteDOC(0, docptr, Mplus_WritePipeTerm);
-		while (!(this->legacy.read_byte(this) & 0x40)) ;
-		return;
-
-	/* This applies to read commands */
-	default:
-		/*
-		 * If we don't have access to the busy pin, we apply the given
-		 * command delay
-		 */
-		if (!this->legacy.dev_ready) {
-			udelay(this->legacy.chip_delay);
-			return;
-		}
-	}
-
-	/* Apply this short delay always to ensure that we do wait tWB in
-	 * any case on any machine. */
-	ndelay(100);
-	/* wait until command is processed */
-	while (!this->legacy.dev_ready(this)) ;
-}
-
-static int doc200x_dev_ready(struct nand_chip *this)
-{
-	struct doc_priv *doc = nand_get_controller_data(this);
-	void __iomem *docptr = doc->virtadr;
-
-	if (DoC_is_MillenniumPlus(doc)) {
-		/* 11.4.2 -- must NOP four times before checking FR/B# */
-		DoC_Delay(doc, 4);
-		if ((ReadDOC(docptr, Mplus_FlashControl) & CDSN_CTRL_FR_B_MASK) != CDSN_CTRL_FR_B_MASK) {
-			if (debug)
-				printk("not ready\n");
-			return 0;
-		}
-		if (debug)
-			printk("was ready\n");
-		return 1;
-	} else {
-		/* 11.4.2 -- must NOP four times before checking FR/B# */
-		DoC_Delay(doc, 4);
-		if (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B)) {
-			if (debug)
-				printk("not ready\n");
-			return 0;
-		}
-		/* 11.4.2 -- Must NOP twice if it's ready */
-		DoC_Delay(doc, 2);
-		if (debug)
-			printk("was ready\n");
-		return 1;
-	}
-}
-
-static int doc200x_block_bad(struct nand_chip *this, loff_t ofs)
-{
-	/* This is our last resort if we couldn't find or create a BBT.  Just
-	   pretend all blocks are good. */
 	return 0;
 }
 
···
 	struct nand_chip *this = mtd_to_nand(mtd);
 	struct doc_priv *doc = nand_get_controller_data(this);
 
-	this->legacy.read_byte = doc2000_read_byte;
-	this->legacy.write_buf = doc2000_writebuf;
-	this->legacy.read_buf = doc2000_readbuf;
 	doc->late_init = nftl_scan_bbt;
 
 	doc->CDSNControl = CDSN_CTRL_FLASH_IO | CDSN_CTRL_ECC_IO;
···
 {
 	struct nand_chip *this = mtd_to_nand(mtd);
 	struct doc_priv *doc = nand_get_controller_data(this);
-
-	this->legacy.read_byte = doc2001_read_byte;
-	this->legacy.write_buf = doc2001_writebuf;
-	this->legacy.read_buf = doc2001_readbuf;
 
 	ReadDOC(doc->virtadr, ChipID);
 	ReadDOC(doc->virtadr, ChipID);
···
 	struct nand_chip *this = mtd_to_nand(mtd);
 	struct doc_priv *doc = nand_get_controller_data(this);
 
-	this->legacy.read_byte = doc2001plus_read_byte;
-	this->legacy.write_buf = doc2001plus_writebuf;
-	this->legacy.read_buf = doc2001plus_readbuf;
 	doc->late_init = inftl_scan_bbt;
-	this->legacy.cmd_ctrl = NULL;
-	this->legacy.select_chip = doc2001plus_select_chip;
-	this->legacy.cmdfunc = doc2001plus_command;
 	this->ecc.hwctl = doc2001plus_enable_hwecc;
 
 	doc->chips_per_floor = 1;
···
 
 	return 1;
 }
+
+static const struct nand_controller_ops doc200x_ops = {
+	.exec_op = doc200x_exec_op,
+};
+
+static const struct nand_controller_ops doc2001plus_ops = {
+	.exec_op = doc2001plus_exec_op,
+};
 
 static int __init doc_probe(unsigned long physadr)
 {
···
 		goto fail;
 	}
 
-
 	/*
 	 * Allocate a RS codec instance
 	 *
···
 		goto fail;
 	}
 
+	nand_controller_init(&doc->base);
+	if (ChipID == DOC_ChipID_DocMilPlus16)
+		doc->base.ops = &doc2001plus_ops;
+	else
+		doc->base.ops = &doc200x_ops;
+
 	mtd = nand_to_mtd(nand);
 	nand->bbt_td = (struct nand_bbt_descr *) (doc + 1);
 	nand->bbt_md = nand->bbt_td + 1;
···
 	mtd->owner = THIS_MODULE;
 	mtd_set_ooblayout(mtd, &doc200x_ooblayout_ops);
 
+	nand->controller = &doc->base;
 	nand_set_controller_data(nand, doc);
-	nand->legacy.select_chip = doc200x_select_chip;
-	nand->legacy.cmd_ctrl = doc200x_hwcontrol;
-	nand->legacy.dev_ready = doc200x_dev_ready;
-	nand->legacy.waitfunc = doc200x_wait;
-	nand->legacy.block_bad = doc200x_block_bad;
 	nand->ecc.hwctl = doc200x_enable_hwecc;
 	nand->ecc.calculate = doc200x_calculate_ecc;
 	nand->ecc.correct = doc200x_correct_data;
···
 	nand->ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
 	nand->bbt_options = NAND_BBT_USE_FLASH;
 	/* Skip the automatic BBT scan so we can run it manually */
-	nand->options |= NAND_SKIP_BBTSCAN;
+	nand->options |= NAND_SKIP_BBTSCAN | NAND_NO_BBM_QUIRK;
 
 	doc->physadr = physadr;
 	doc->virtadr = virtadr;
···
 		numchips = doc2001_init(mtd);
 
 	if ((ret = nand_scan(nand, numchips)) || (ret = doc->late_init(mtd))) {
-		/* DBB note: i believe nand_release is necessary here, as
+		/* DBB note: i believe nand_cleanup is necessary here, as
 		   buffers may have been allocated in nand_base.  Check with
 		   Thomas. FIX ME! */
-		/* nand_release will call mtd_device_unregister, but we
-		   haven't yet added it.  This is handled without incident by
-		   mtd_device_unregister, as far as I can tell. */
-		nand_release(nand);
+		nand_cleanup(nand);
 		goto fail;
 	}
 
···
 	struct mtd_info *mtd, *nextmtd;
 	struct nand_chip *nand;
 	struct doc_priv *doc;
+	int ret;
 
 	for (mtd = doclist; mtd; mtd = nextmtd) {
 		nand = mtd_to_nand(mtd);
 		doc = nand_get_controller_data(nand);
 
 		nextmtd = doc->nextdoc;
-		nand_release(nand);
+		ret = mtd_device_unregister(mtd);
+		WARN_ON(ret);
+		nand_cleanup(nand);
 		iounmap(doc->virtadr);
 		release_mem_region(doc->physadr, DOC_IOREMAP_LEN);
 		free_rs(doc->rs_decoder);
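The diskonchip conversion above replaces the legacy `cmdfunc`/`cmd_ctrl` hooks with the core's instruction-based `->exec_op()` interface: the NAND core hands the controller an array of instructions (command, address, data, wait-ready) and the driver dispatches on each instruction's type. As a rough standalone sketch of that dispatch shape (the enum, struct, and trace log here are illustrative stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for the kernel's nand_op_instr types. */
enum op_instr_type { OP_CMD, OP_ADDR, OP_DATA_IN, OP_WAITRDY };

struct op_instr {
	enum op_instr_type type;
	unsigned char val;	/* command opcode or address byte */
};

/* Walk the instruction array in order and dispatch on type, appending
 * a trace to 'log', the way doc200x_exec_op() walks op->instrs[]. */
static void exec_op(const struct op_instr *instrs, int n, char *log)
{
	int i;

	for (i = 0; i < n; i++) {
		switch (instrs[i].type) {
		case OP_CMD:
			sprintf(log + strlen(log), "C%02X", instrs[i].val);
			break;
		case OP_ADDR:
			sprintf(log + strlen(log), "A%02X", instrs[i].val);
			break;
		case OP_DATA_IN:
			strcat(log, "R");
			break;
		case OP_WAITRDY:
			strcat(log, "W");
			break;
		}
	}
}
```

A READID sequence (command 0x90, one address byte, then a data-in phase) traces as `C90A00R`, mirroring the instruction array built in `doc200x_readid()` above.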
+6 -1
drivers/mtd/nand/raw/fsl_elbc_nand.c
···
 {
 	struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = fsl_lbc_ctrl_dev->nand;
 	struct fsl_elbc_mtd *priv = dev_get_drvdata(&pdev->dev);
+	struct nand_chip *chip = &priv->chip;
+	int ret;
 
-	nand_release(&priv->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
+
 	fsl_elbc_chip_remove(priv);
 
 	mutex_lock(&fsl_elbc_nand_mutex);
+6 -1
drivers/mtd/nand/raw/fsl_ifc_nand.c
···
 static int fsl_ifc_nand_remove(struct platform_device *dev)
 {
 	struct fsl_ifc_mtd *priv = dev_get_drvdata(&dev->dev);
+	struct nand_chip *chip = &priv->chip;
+	int ret;
 
-	nand_release(&priv->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
+
 	fsl_ifc_chip_remove(priv);
 
 	mutex_lock(&fsl_ifc_nand_mutex);
+6 -3
drivers/mtd/nand/raw/fsl_upm.c
···
 static int fun_remove(struct platform_device *ofdev)
 {
 	struct fsl_upm_nand *fun = dev_get_drvdata(&ofdev->dev);
-	struct mtd_info *mtd = nand_to_mtd(&fun->chip);
-	int i;
+	struct nand_chip *chip = &fun->chip;
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	int ret, i;
 
-	nand_release(&fun->chip);
+	ret = mtd_device_unregister(mtd);
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	kfree(mtd->name);
 
 	for (i = 0; i < fun->mchip_count; i++) {
+14 -5
drivers/mtd/nand/raw/fsmc_nand.c
···
 	unsigned int op_id;
 	int i;
 
+	if (check_only)
+		return 0;
+
 	pr_debug("Executing operation [%d instructions]:\n", op->ninstrs);
 
 	for (op_id = 0; op_id < op->ninstrs; op_id++) {
···
 	for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
 		nand_read_page_op(chip, page, s * eccsize, NULL, 0);
 		chip->ecc.hwctl(chip, NAND_ECC_READ);
-		ret = nand_read_data_op(chip, p, eccsize, false);
+		ret = nand_read_data_op(chip, p, eccsize, false, false);
 		if (ret)
 			return ret;
 
···
 
 	i = 0;
 	while (num_err--) {
-		change_bit(0, (unsigned long *)&err_idx[i]);
-		change_bit(1, (unsigned long *)&err_idx[i]);
+		err_idx[i] ^= 3;
 
 		if (err_idx[i] < chip->ecc.size * 8) {
-			change_bit(err_idx[i], (unsigned long *)dat);
+			int err = err_idx[i];
+
+			dat[err >> 3] ^= BIT(err & 7);
 			i++;
 		}
 	}
···
 	struct fsmc_nand_data *host = platform_get_drvdata(pdev);
 
 	if (host) {
-		nand_release(&host->nand);
+		struct nand_chip *chip = &host->nand;
+		int ret;
+
+		ret = mtd_device_unregister(nand_to_mtd(chip));
+		WARN_ON(ret);
+		nand_cleanup(chip);
 		fsmc_nand_disable(host);
 
 		if (host->mode == USE_DMA_ACCESS) {
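The fsmc error-correction hunk swaps the `change_bit()` calls for plain byte arithmetic: toggling bits 0 and 1 of the error index is `idx ^= 3`, and flipping bit N of the data buffer is `dat[N >> 3] ^= BIT(N & 7)`. A minimal host-side sketch of that arithmetic (with `BIT()` redefined locally, since the kernel macro isn't available in userspace):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Flip one bit in a byte buffer, bit N living in byte N / 8 at
 * position N % 8, as the new fsmc correction code does. */
static void flip_bit(uint8_t *dat, unsigned int n)
{
	dat[n >> 3] ^= BIT(n & 7);
}
```

Flipping the same bit twice restores the original data, which is why XOR is the right primitive for ECC bit correction.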
+5 -1
drivers/mtd/nand/raw/gpio.c
···
 static int gpio_nand_remove(struct platform_device *pdev)
 {
 	struct gpiomtd *gpiomtd = platform_get_drvdata(pdev);
+	struct nand_chip *chip = &gpiomtd->nand_chip;
+	int ret;
 
-	nand_release(&gpiomtd->nand_chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 
 	/* Enable write protection and disable the chip */
 	if (gpiomtd->nwp && !IS_ERR(gpiomtd->nwp))
+22 -167
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
···
 		return ret;
 
 	ret = pm_runtime_get_sync(this->dev);
-	if (ret < 0)
+	if (ret < 0) {
+		pm_runtime_put_autosuspend(this->dev);
 		return ret;
+	}
 
 	/*
 	 * Due to erratum #2847 of the MX23, the BCH cannot be soft reset on this
···
 	dma_map_sg(this->dev, sgl, 1, dr);
 
 	return false;
-}
-
-/**
- * gpmi_copy_bits - copy bits from one memory region to another
- * @dst: destination buffer
- * @dst_bit_off: bit offset we're starting to write at
- * @src: source buffer
- * @src_bit_off: bit offset we're starting to read from
- * @nbits: number of bits to copy
- *
- * This functions copies bits from one memory region to another, and is used by
- * the GPMI driver to copy ECC sections which are not guaranteed to be byte
- * aligned.
- *
- * src and dst should not overlap.
- *
- */
-static void gpmi_copy_bits(u8 *dst, size_t dst_bit_off, const u8 *src,
-			   size_t src_bit_off, size_t nbits)
-{
-	size_t i;
-	size_t nbytes;
-	u32 src_buffer = 0;
-	size_t bits_in_src_buffer = 0;
-
-	if (!nbits)
-		return;
-
-	/*
-	 * Move src and dst pointers to the closest byte pointer and store bit
-	 * offsets within a byte.
-	 */
-	src += src_bit_off / 8;
-	src_bit_off %= 8;
-
-	dst += dst_bit_off / 8;
-	dst_bit_off %= 8;
-
-	/*
-	 * Initialize the src_buffer value with bits available in the first
-	 * byte of data so that we end up with a byte aligned src pointer.
-	 */
-	if (src_bit_off) {
-		src_buffer = src[0] >> src_bit_off;
-		if (nbits >= (8 - src_bit_off)) {
-			bits_in_src_buffer += 8 - src_bit_off;
-		} else {
-			src_buffer &= GENMASK(nbits - 1, 0);
-			bits_in_src_buffer += nbits;
-		}
-		nbits -= bits_in_src_buffer;
-		src++;
-	}
-
-	/* Calculate the number of bytes that can be copied from src to dst. */
-	nbytes = nbits / 8;
-
-	/* Try to align dst to a byte boundary. */
-	if (dst_bit_off) {
-		if (bits_in_src_buffer < (8 - dst_bit_off) && nbytes) {
-			src_buffer |= src[0] << bits_in_src_buffer;
-			bits_in_src_buffer += 8;
-			src++;
-			nbytes--;
-		}
-
-		if (bits_in_src_buffer >= (8 - dst_bit_off)) {
-			dst[0] &= GENMASK(dst_bit_off - 1, 0);
-			dst[0] |= src_buffer << dst_bit_off;
-			src_buffer >>= (8 - dst_bit_off);
-			bits_in_src_buffer -= (8 - dst_bit_off);
-			dst_bit_off = 0;
-			dst++;
-			if (bits_in_src_buffer > 7) {
-				bits_in_src_buffer -= 8;
-				dst[0] = src_buffer;
-				dst++;
-				src_buffer >>= 8;
-			}
-		}
-	}
-
-	if (!bits_in_src_buffer && !dst_bit_off) {
-		/*
-		 * Both src and dst pointers are byte aligned, thus we can
-		 * just use the optimized memcpy function.
-		 */
-		if (nbytes)
-			memcpy(dst, src, nbytes);
-	} else {
-		/*
-		 * src buffer is not byte aligned, hence we have to copy each
-		 * src byte to the src_buffer variable before extracting a byte
-		 * to store in dst.
-		 */
-		for (i = 0; i < nbytes; i++) {
-			src_buffer |= src[i] << bits_in_src_buffer;
-			dst[i] = src_buffer;
-			src_buffer >>= 8;
-		}
-	}
-	/* Update dst and src pointers */
-	dst += nbytes;
-	src += nbytes;
-
-	/*
-	 * nbits is the number of remaining bits. It should not exceed 8 as
-	 * we've already copied as much bytes as possible.
-	 */
-	nbits %= 8;
-
-	/*
-	 * If there's no more bits to copy to the destination and src buffer
-	 * was already byte aligned, then we're done.
-	 */
-	if (!nbits && !bits_in_src_buffer)
-		return;
-
-	/* Copy the remaining bits to src_buffer */
-	if (nbits)
-		src_buffer |= (*src & GENMASK(nbits - 1, 0)) <<
-			      bits_in_src_buffer;
-	bits_in_src_buffer += nbits;
-
-	/*
-	 * In case there were not enough bits to get a byte aligned dst buffer
-	 * prepare the src_buffer variable to match the dst organization (shift
-	 * src_buffer by dst_bit_off and retrieve the least significant bits
-	 * from dst).
-	 */
-	if (dst_bit_off)
-		src_buffer = (src_buffer << dst_bit_off) |
-			     (*dst & GENMASK(dst_bit_off - 1, 0));
-	bits_in_src_buffer += dst_bit_off;
-
-	/*
-	 * Keep most significant bits from dst if we end up with an unaligned
-	 * number of bits.
-	 */
-	nbytes = bits_in_src_buffer / 8;
-	if (bits_in_src_buffer % 8) {
-		src_buffer |= (dst[nbytes] &
-			       GENMASK(7, bits_in_src_buffer % 8)) <<
-			      (nbytes * 8);
-		nbytes++;
-	}
-
-	/* Copy the remaining bytes to dst */
-	for (i = 0; i < nbytes; i++) {
-		dst[i] = src_buffer;
-		src_buffer >>= 8;
-	}
 }
 
 /* add our owner bbt descriptor */
···
 * inline (interleaved with payload DATA), and do not align data chunk on
 * byte boundaries.
 * We thus need to take care moving the payload data and ECC bits stored in the
- * page into the provided buffers, which is why we're using gpmi_copy_bits.
+ * page into the provided buffers, which is why we're using nand_extract_bits().
 *
 * See set_geometry_by_ecc_info inline comments to have a full description
 * of the layout used by the GPMI controller.
···
 	/* Extract interleaved payload data and ECC bits */
 	for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
 		if (buf)
-			gpmi_copy_bits(buf, step * eccsize * 8,
-				       tmp_buf, src_bit_off,
-				       eccsize * 8);
+			nand_extract_bits(buf, step * eccsize, tmp_buf,
+					  src_bit_off, eccsize * 8);
 		src_bit_off += eccsize * 8;
 
 		/* Align last ECC block to align a byte boundary */
···
 			eccbits += 8 - ((oob_bit_off + eccbits) % 8);
 
 		if (oob_required)
-			gpmi_copy_bits(oob, oob_bit_off,
-				       tmp_buf, src_bit_off,
-				       eccbits);
+			nand_extract_bits(oob, oob_bit_off, tmp_buf,
+					  src_bit_off, eccbits);
 
 		src_bit_off += eccbits;
 		oob_bit_off += eccbits;
···
 * inline (interleaved with payload DATA), and do not align data chunk on
 * byte boundaries.
 * We thus need to take care moving the OOB area at the right place in the
- * final page, which is why we're using gpmi_copy_bits.
+ * final page, which is why we're using nand_extract_bits().
 *
 * See set_geometry_by_ecc_info inline comments to have a full description
 * of the layout used by the GPMI controller.
···
 	/* Interleave payload data and ECC bits */
 	for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
 		if (buf)
-			gpmi_copy_bits(tmp_buf, dst_bit_off,
-				       buf, step * eccsize * 8, eccsize * 8);
+			nand_extract_bits(tmp_buf, dst_bit_off, buf,
+					  step * eccsize * 8, eccsize * 8);
 		dst_bit_off += eccsize * 8;
 
 		/* Align last ECC block to align a byte boundary */
···
 			eccbits += 8 - ((oob_bit_off + eccbits) % 8);
 
 		if (oob_required)
-			gpmi_copy_bits(tmp_buf, dst_bit_off,
-				       oob, oob_bit_off, eccbits);
+			nand_extract_bits(tmp_buf, dst_bit_off, oob,
+					  oob_bit_off, eccbits);
 
 		dst_bit_off += eccbits;
 		oob_bit_off += eccbits;
···
 	struct completion *completion;
 	unsigned long to;
 
+	if (check_only)
+		return 0;
+
 	this->ntransfers = 0;
 	for (i = 0; i < GPMI_MAX_TRANSFERS; i++)
 		this->transfers[i].direction = DMA_NONE;
···
 
 	ret = __gpmi_enable_clk(this, true);
 	if (ret)
-		goto exit_nfc_init;
+		goto exit_acquire_resources;
 
 	pm_runtime_set_autosuspend_delay(&pdev->dev, 500);
 	pm_runtime_use_autosuspend(&pdev->dev);
···
 static int gpmi_nand_remove(struct platform_device *pdev)
 {
 	struct gpmi_nand_data *this = platform_get_drvdata(pdev);
+	struct nand_chip *chip = &this->nand;
+	int ret;
 
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 
-	nand_release(&this->nand);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	gpmi_free_dma_buffer(this);
 	release_resources(this);
 	return 0;
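The gpmi changes drop the driver's open-coded `gpmi_copy_bits()` in favour of the core's new `nand_extract_bits()` helper. Its contract, copying an arbitrary run of bits between byte buffers at arbitrary bit offsets (LSB-first within each byte), can be sketched with a naive bit-at-a-time version; `copy_bits()` here is an illustrative stand-in for demonstration, not the optimized kernel helper:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Naive equivalent of the helper's contract: copy nbits from src,
 * starting at bit src_off, into dst at bit dst_off. Bit i of a buffer
 * lives in byte i / 8 at position i % 8 (LSB first). */
static void copy_bits(uint8_t *dst, size_t dst_off,
		      const uint8_t *src, size_t src_off, size_t nbits)
{
	size_t i;

	for (i = 0; i < nbits; i++) {
		size_t s = src_off + i, d = dst_off + i;
		unsigned int bit = (src[s / 8] >> (s % 8)) & 1;

		if (bit)
			dst[d / 8] |= 1u << (d % 8);
		else
			dst[d / 8] &= ~(1u << (d % 8));
	}
}
```

This is exactly what the GPMI read/write page paths need, since the controller interleaves ECC sections with payload data without aligning chunks on byte boundaries; the real helper just does the same job a word at a time.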
+5 -1
drivers/mtd/nand/raw/hisi504_nand.c
···
 static int hisi_nfc_remove(struct platform_device *pdev)
 {
 	struct hinfc_host *host = platform_get_drvdata(pdev);
+	struct nand_chip *chip = &host->chip;
+	int ret;
 
-	nand_release(&host->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 
 	return 0;
 }
+107 -63
drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
···
 
 #define DRV_NAME "ingenic-nand"
 
-/* Command delay when there is no R/B pin. */
-#define RB_DELAY_US 100
-
 struct jz_soc_info {
 	unsigned long data_offset;
 	unsigned long addr_offset;
···
 	struct nand_controller controller;
 	unsigned int num_banks;
 	struct list_head chips;
-	int selected;
 	struct ingenic_nand_cs cs[];
 };
 
···
 	return 0;
 }
 
-const struct mtd_ooblayout_ops qi_lb60_ooblayout_ops = {
+static const struct mtd_ooblayout_ops qi_lb60_ooblayout_ops = {
 	.ecc = qi_lb60_ooblayout_ecc,
 	.free = qi_lb60_ooblayout_free,
 };
···
 	.ecc = jz4725b_ooblayout_ecc,
 	.free = jz4725b_ooblayout_free,
 };
-
-static void ingenic_nand_select_chip(struct nand_chip *chip, int chipnr)
-{
-	struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
-	struct ingenic_nfc *nfc = to_ingenic_nfc(nand->chip.controller);
-	struct ingenic_nand_cs *cs;
-
-	/* Ensure the currently selected chip is deasserted. */
-	if (chipnr == -1 && nfc->selected >= 0) {
-		cs = &nfc->cs[nfc->selected];
-		jz4780_nemc_assert(nfc->dev, cs->bank, false);
-	}
-
-	nfc->selected = chipnr;
-}
-
-static void ingenic_nand_cmd_ctrl(struct nand_chip *chip, int cmd,
-				  unsigned int ctrl)
-{
-	struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
-	struct ingenic_nfc *nfc = to_ingenic_nfc(nand->chip.controller);
-	struct ingenic_nand_cs *cs;
-
-	if (WARN_ON(nfc->selected < 0))
-		return;
-
-	cs = &nfc->cs[nfc->selected];
-
-	jz4780_nemc_assert(nfc->dev, cs->bank, ctrl & NAND_NCE);
-
-	if (cmd == NAND_CMD_NONE)
-		return;
-
-	if (ctrl & NAND_ALE)
-		writeb(cmd, cs->base + nfc->soc_info->addr_offset);
-	else if (ctrl & NAND_CLE)
-		writeb(cmd, cs->base + nfc->soc_info->cmd_offset);
-}
-
-static int ingenic_nand_dev_ready(struct nand_chip *chip)
-{
-	struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
-
-	return !gpiod_get_value_cansleep(nand->busy_gpio);
-}
 
 static void ingenic_nand_ecc_hwctl(struct nand_chip *chip, int mode)
 {
···
 	return 0;
 }
 
+static int ingenic_nand_exec_instr(struct nand_chip *chip,
+				   struct ingenic_nand_cs *cs,
+				   const struct nand_op_instr *instr)
+{
+	struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
+	struct ingenic_nfc *nfc = to_ingenic_nfc(chip->controller);
+	unsigned int i;
+
+	switch (instr->type) {
+	case NAND_OP_CMD_INSTR:
+		writeb(instr->ctx.cmd.opcode,
+		       cs->base + nfc->soc_info->cmd_offset);
+		return 0;
+	case NAND_OP_ADDR_INSTR:
+		for (i = 0; i < instr->ctx.addr.naddrs; i++)
+			writeb(instr->ctx.addr.addrs[i],
+			       cs->base + nfc->soc_info->addr_offset);
+		return 0;
+	case NAND_OP_DATA_IN_INSTR:
+		if (instr->ctx.data.force_8bit ||
+		    !(chip->options & NAND_BUSWIDTH_16))
+			ioread8_rep(cs->base + nfc->soc_info->data_offset,
+				    instr->ctx.data.buf.in,
+				    instr->ctx.data.len);
+		else
+			ioread16_rep(cs->base + nfc->soc_info->data_offset,
+				     instr->ctx.data.buf.in,
+				     instr->ctx.data.len);
+		return 0;
+	case NAND_OP_DATA_OUT_INSTR:
+		if (instr->ctx.data.force_8bit ||
+		    !(chip->options & NAND_BUSWIDTH_16))
+			iowrite8_rep(cs->base + nfc->soc_info->data_offset,
+				     instr->ctx.data.buf.out,
+				     instr->ctx.data.len);
+		else
+			iowrite16_rep(cs->base + nfc->soc_info->data_offset,
+				      instr->ctx.data.buf.out,
+				      instr->ctx.data.len);
+		return 0;
+	case NAND_OP_WAITRDY_INSTR:
+		if (!nand->busy_gpio)
+			return nand_soft_waitrdy(chip,
+						 instr->ctx.waitrdy.timeout_ms);
+
+		return nand_gpio_waitrdy(chip, nand->busy_gpio,
+					 instr->ctx.waitrdy.timeout_ms);
+	default:
+		break;
+	}
+
+	return -EINVAL;
+}
+
+static int ingenic_nand_exec_op(struct nand_chip *chip,
+				const struct nand_operation *op,
+				bool check_only)
+{
+	struct ingenic_nand *nand = to_ingenic_nand(nand_to_mtd(chip));
+	struct ingenic_nfc *nfc = to_ingenic_nfc(nand->chip.controller);
+	struct ingenic_nand_cs *cs;
+	unsigned int i;
+	int ret = 0;
+
+	if (check_only)
+		return 0;
+
+	cs = &nfc->cs[op->cs];
+	jz4780_nemc_assert(nfc->dev, cs->bank, true);
+	for (i = 0; i < op->ninstrs; i++) {
+		ret = ingenic_nand_exec_instr(chip, cs, &op->instrs[i]);
+		if (ret)
+			break;
+
+		if (op->instrs[i].delay_ns)
+			ndelay(op->instrs[i].delay_ns);
+	}
+	jz4780_nemc_assert(nfc->dev, cs->bank, false);
+
+	return ret;
+}
+
 static const struct nand_controller_ops ingenic_nand_controller_ops = {
 	.attach_chip = ingenic_nand_attach_chip,
+	.exec_op = ingenic_nand_exec_op,
 };
 
 static int ingenic_nand_init_chip(struct
platform_device *pdev, ··· 373 339 ret = PTR_ERR(nand->busy_gpio); 374 340 dev_err(dev, "failed to request busy GPIO: %d\n", ret); 375 341 return ret; 376 - } else if (nand->busy_gpio) { 377 - nand->chip.legacy.dev_ready = ingenic_nand_dev_ready; 378 342 } 343 + 344 + /* 345 + * The rb-gpios semantics was undocumented and qi,lb60 (along with 346 + * the ingenic driver) got it wrong. The active state encodes the 347 + * NAND ready state, which is high level. Since there's no signal 348 + * inverter on this board, it should be active-high. Let's fix that 349 + * here for older DTs so we can re-use the generic nand_gpio_waitrdy() 350 + * helper, and be consistent with what other drivers do. 351 + */ 352 + if (of_machine_is_compatible("qi,lb60") && 353 + gpiod_is_active_low(nand->busy_gpio)) 354 + gpiod_toggle_active_low(nand->busy_gpio); 379 355 380 356 nand->wp_gpio = devm_gpiod_get_optional(dev, "wp", GPIOD_OUT_LOW); 381 357 ··· 403 359 return -ENOMEM; 404 360 mtd->dev.parent = dev; 405 361 406 - chip->legacy.IO_ADDR_R = cs->base + nfc->soc_info->data_offset; 407 - chip->legacy.IO_ADDR_W = cs->base + nfc->soc_info->data_offset; 408 - chip->legacy.chip_delay = RB_DELAY_US; 409 362 chip->options = NAND_NO_SUBPAGE_WRITE; 410 - chip->legacy.select_chip = ingenic_nand_select_chip; 411 - chip->legacy.cmd_ctrl = ingenic_nand_cmd_ctrl; 412 363 chip->ecc.mode = NAND_ECC_HW; 413 364 chip->controller = &nfc->controller; 414 365 nand_set_flash_node(chip, np); ··· 415 376 416 377 ret = mtd_device_register(mtd, NULL, 0); 417 378 if (ret) { 418 - nand_release(chip); 379 + nand_cleanup(chip); 419 380 return ret; 420 381 } 421 382 ··· 426 387 427 388 static void ingenic_nand_cleanup_chips(struct ingenic_nfc *nfc) 428 389 { 429 - struct ingenic_nand *chip; 390 + struct ingenic_nand *ingenic_chip; 391 + struct nand_chip *chip; 392 + int ret; 430 393 431 394 while (!list_empty(&nfc->chips)) { 432 - chip = list_first_entry(&nfc->chips, 433 - struct ingenic_nand, chip_list); 434 - 
nand_release(&chip->chip); 435 - list_del(&chip->chip_list); 395 + ingenic_chip = list_first_entry(&nfc->chips, 396 + struct ingenic_nand, chip_list); 397 + chip = &ingenic_chip->chip; 398 + ret = mtd_device_unregister(nand_to_mtd(chip)); 399 + WARN_ON(ret); 400 + nand_cleanup(chip); 401 + list_del(&ingenic_chip->chip_list); 436 402 } 437 403 } 438 404
+12
drivers/mtd/nand/raw/internals.h
··· 75 75 extern const struct nand_manufacturer_ops samsung_nand_manuf_ops; 76 76 extern const struct nand_manufacturer_ops toshiba_nand_manuf_ops; 77 77 78 + /* MLC pairing schemes */ 79 + extern const struct mtd_pairing_scheme dist3_pairing_scheme; 80 + 78 81 /* Core functions */ 79 82 const struct nand_manufacturer *nand_get_manufacturer(u8 id); 80 83 int nand_bbm_get_next_page(struct nand_chip *chip, int page); ··· 107 104 return false; 108 105 109 106 return true; 107 + } 108 + 109 + static inline int nand_check_op(struct nand_chip *chip, 110 + const struct nand_operation *op) 111 + { 112 + if (!nand_has_exec_op(chip)) 113 + return 0; 114 + 115 + return chip->controller->ops->exec_op(chip, op, true); 110 116 } 111 117 112 118 static inline int nand_exec_op(struct nand_chip *chip,
+6 -1
drivers/mtd/nand/raw/lpc32xx_mlc.c
··· 826 826 static int lpc32xx_nand_remove(struct platform_device *pdev) 827 827 { 828 828 struct lpc32xx_nand_host *host = platform_get_drvdata(pdev); 829 + struct nand_chip *chip = &host->nand_chip; 830 + int ret; 829 831 830 - nand_release(&host->nand_chip); 832 + ret = mtd_device_unregister(nand_to_mtd(chip)); 833 + WARN_ON(ret); 834 + nand_cleanup(chip); 835 + 831 836 free_irq(host->irq, host); 832 837 if (use_dma) 833 838 dma_release_channel(host->dma_chan);
+5 -1
drivers/mtd/nand/raw/lpc32xx_slc.c
··· 947 947 { 948 948 uint32_t tmp; 949 949 struct lpc32xx_nand_host *host = platform_get_drvdata(pdev); 950 + struct nand_chip *chip = &host->nand_chip; 951 + int ret; 950 952 951 - nand_release(&host->nand_chip); 953 + ret = mtd_device_unregister(nand_to_mtd(chip)); 954 + WARN_ON(ret); 955 + nand_cleanup(chip); 952 956 dma_release_channel(host->dma_chan); 953 957 954 958 /* Force CE high */
+38 -30
drivers/mtd/nand/raw/marvell_nand.c
··· 707 707 * In case the interrupt was not served in the required time frame, 708 708 * check if the ISR was not served or if something went actually wrong. 709 709 */ 710 - if (ret && !pending) { 710 + if (!ret && !pending) { 711 711 dev_err(nfc->dev, "Timeout waiting for RB signal\n"); 712 712 return -ETIMEDOUT; 713 713 } ··· 932 932 } 933 933 934 934 /* 935 - * Check a chunk is correct or not according to hardware ECC engine. 935 + * Check if a chunk is correct or not according to the hardware ECC engine. 936 936 * mtd->ecc_stats.corrected is updated, as well as max_bitflips, however 937 937 * mtd->ecc_stats.failure is not, the function will instead return a non-zero 938 938 * value indicating that a check on the emptyness of the subpage must be 939 - * performed before declaring the subpage corrupted. 939 + * performed before actually declaring the subpage as "corrupted". 940 940 */ 941 - static int marvell_nfc_hw_ecc_correct(struct nand_chip *chip, 942 - unsigned int *max_bitflips) 941 + static int marvell_nfc_hw_ecc_check_bitflips(struct nand_chip *chip, 942 + unsigned int *max_bitflips) 943 943 { 944 944 struct mtd_info *mtd = nand_to_mtd(chip); 945 945 struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); ··· 1053 1053 marvell_nfc_enable_hw_ecc(chip); 1054 1054 marvell_nfc_hw_ecc_hmg_do_read_page(chip, buf, chip->oob_poi, false, 1055 1055 page); 1056 - ret = marvell_nfc_hw_ecc_correct(chip, &max_bitflips); 1056 + ret = marvell_nfc_hw_ecc_check_bitflips(chip, &max_bitflips); 1057 1057 marvell_nfc_disable_hw_ecc(chip); 1058 1058 1059 1059 if (!ret) ··· 1224 1224 1225 1225 /* Read spare bytes */ 1226 1226 nand_read_data_op(chip, oob + (lt->spare_bytes * chunk), 1227 - spare_len, false); 1227 + spare_len, false, false); 1228 1228 1229 1229 /* Read ECC bytes */ 1230 1230 nand_read_data_op(chip, oob + ecc_offset + 1231 1231 (ALIGN(lt->ecc_bytes, 32) * chunk), 1232 - ecc_len, false); 1232 + ecc_len, false, false); 1233 1233 } 1234 1234 1235 1235 return 0; 
··· 1336 1336 /* Read the chunk and detect number of bitflips */ 1337 1337 marvell_nfc_hw_ecc_bch_read_chunk(chip, chunk, data, data_len, 1338 1338 spare, spare_len, page); 1339 - ret = marvell_nfc_hw_ecc_correct(chip, &max_bitflips); 1339 + ret = marvell_nfc_hw_ecc_check_bitflips(chip, &max_bitflips); 1340 1340 if (ret) 1341 1341 failure_mask |= BIT(chunk); 1342 1342 ··· 1358 1358 */ 1359 1359 1360 1360 /* 1361 - * In case there is any subpage read error reported by ->correct(), we 1362 - * usually re-read only ECC bytes in raw mode and check if the whole 1363 - * page is empty. In this case, it is normal that the ECC check failed 1364 - * and we just ignore the error. 1361 + * In case there is any subpage read error, we usually re-read only ECC 1362 + * bytes in raw mode and check if the whole page is empty. In this case, 1363 + * it is normal that the ECC check failed and we just ignore the error. 1365 1364 * 1366 1365 * However, it has been empirically observed that for some layouts (e.g 1367 1366 * 2k page, 8b strength per 512B chunk), the controller tries to correct ··· 2106 2107 { 2107 2108 struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 2108 2109 2109 - marvell_nfc_select_target(chip, op->cs); 2110 + if (!check_only) 2111 + marvell_nfc_select_target(chip, op->cs); 2110 2112 2111 2113 if (nfc->caps->is_nfcv2) 2112 2114 return nand_op_parser_exec_op(chip, &marvell_nfcv2_op_parser, ··· 2166 2166 .free = marvell_nand_ooblayout_free, 2167 2167 }; 2168 2168 2169 - static int marvell_nand_hw_ecc_ctrl_init(struct mtd_info *mtd, 2170 - struct nand_ecc_ctrl *ecc) 2169 + static int marvell_nand_hw_ecc_controller_init(struct mtd_info *mtd, 2170 + struct nand_ecc_ctrl *ecc) 2171 2171 { 2172 2172 struct nand_chip *chip = mtd_to_nand(mtd); 2173 2173 struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); ··· 2261 2261 2262 2262 switch (ecc->mode) { 2263 2263 case NAND_ECC_HW: 2264 - ret = marvell_nand_hw_ecc_ctrl_init(mtd, ecc); 2264 + ret = 
marvell_nand_hw_ecc_controller_init(mtd, ecc); 2265 2265 if (ret) 2266 2266 return ret; 2267 2267 break; ··· 2664 2664 ret = mtd_device_register(mtd, NULL, 0); 2665 2665 if (ret) { 2666 2666 dev_err(dev, "failed to register mtd device: %d\n", ret); 2667 - nand_release(chip); 2667 + nand_cleanup(chip); 2668 2668 return ret; 2669 2669 } 2670 2670 2671 2671 list_add_tail(&marvell_nand->node, &nfc->chips); 2672 2672 2673 2673 return 0; 2674 + } 2675 + 2676 + static void marvell_nand_chips_cleanup(struct marvell_nfc *nfc) 2677 + { 2678 + struct marvell_nand_chip *entry, *temp; 2679 + struct nand_chip *chip; 2680 + int ret; 2681 + 2682 + list_for_each_entry_safe(entry, temp, &nfc->chips, node) { 2683 + chip = &entry->chip; 2684 + ret = mtd_device_unregister(nand_to_mtd(chip)); 2685 + WARN_ON(ret); 2686 + nand_cleanup(chip); 2687 + list_del(&entry->node); 2688 + } 2674 2689 } 2675 2690 2676 2691 static int marvell_nand_chips_init(struct device *dev, struct marvell_nfc *nfc) ··· 2722 2707 ret = marvell_nand_chip_init(dev, nfc, nand_np); 2723 2708 if (ret) { 2724 2709 of_node_put(nand_np); 2725 - return ret; 2710 + goto cleanup_chips; 2726 2711 } 2727 2712 } 2728 2713 2729 2714 return 0; 2730 - } 2731 2715 2732 - static void marvell_nand_chips_cleanup(struct marvell_nfc *nfc) 2733 - { 2734 - struct marvell_nand_chip *entry, *temp; 2716 + cleanup_chips: 2717 + marvell_nand_chips_cleanup(nfc); 2735 2718 2736 - list_for_each_entry_safe(entry, temp, &nfc->chips, node) { 2737 - nand_release(&entry->chip); 2738 - list_del(&entry->node); 2739 - } 2719 + return ret; 2740 2720 } 2741 2721 2742 2722 static int marvell_nfc_init_dma(struct marvell_nfc *nfc) ··· 2864 2854 static int marvell_nfc_probe(struct platform_device *pdev) 2865 2855 { 2866 2856 struct device *dev = &pdev->dev; 2867 - struct resource *r; 2868 2857 struct marvell_nfc *nfc; 2869 2858 int ret; 2870 2859 int irq; ··· 2878 2869 nfc->controller.ops = &marvell_nand_controller_ops; 2879 2870 INIT_LIST_HEAD(&nfc->chips); 
2880 2871 2881 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2882 - nfc->regs = devm_ioremap_resource(dev, r); 2872 + nfc->regs = devm_platform_ioremap_resource(pdev, 0); 2883 2873 if (IS_ERR(nfc->regs)) 2884 2874 return PTR_ERR(nfc->regs); 2885 2875
+4 -1
drivers/mtd/nand/raw/meson_nand.c
··· 899 899 u32 op_id, delay_idle, cmd; 900 900 int i; 901 901 902 + if (check_only) 903 + return 0; 904 + 902 905 meson_nfc_select_chip(nand, op->cs); 903 906 for (op_id = 0; op_id < op->ninstrs; op_id++) { 904 907 instr = &op->instrs[op_id]; ··· 1269 1266 nand_set_flash_node(nand, np); 1270 1267 nand_set_controller_data(nand, nfc); 1271 1268 1272 - nand->options |= NAND_USE_BOUNCE_BUFFER; 1269 + nand->options |= NAND_USES_DMA; 1273 1270 mtd = nand_to_mtd(nand); 1274 1271 mtd->owner = THIS_MODULE; 1275 1272 mtd->dev.parent = dev;
+4 -1
drivers/mtd/nand/raw/mpc5121_nfc.c
··· 805 805 { 806 806 struct device *dev = &op->dev; 807 807 struct mtd_info *mtd = dev_get_drvdata(dev); 808 + int ret; 808 809 809 - nand_release(mtd_to_nand(mtd)); 810 + ret = mtd_device_unregister(mtd); 811 + WARN_ON(ret); 812 + nand_cleanup(mtd_to_nand(mtd)); 810 813 mpc5121_nfc_free(dev, mtd); 811 814 812 815 return 0;
+12 -7
drivers/mtd/nand/raw/mtk_nand.c
··· 1380 1380 nand_set_flash_node(nand, np); 1381 1381 nand_set_controller_data(nand, nfc); 1382 1382 1383 - nand->options |= NAND_USE_BOUNCE_BUFFER | NAND_SUBPAGE_READ; 1383 + nand->options |= NAND_USES_DMA | NAND_SUBPAGE_READ; 1384 1384 nand->legacy.dev_ready = mtk_nfc_dev_ready; 1385 1385 nand->legacy.select_chip = mtk_nfc_select_chip; 1386 1386 nand->legacy.write_byte = mtk_nfc_write_byte; ··· 1419 1419 ret = mtd_device_register(mtd, NULL, 0); 1420 1420 if (ret) { 1421 1421 dev_err(dev, "mtd parse partition error\n"); 1422 - nand_release(nand); 1422 + nand_cleanup(nand); 1423 1423 return ret; 1424 1424 } 1425 1425 ··· 1578 1578 static int mtk_nfc_remove(struct platform_device *pdev) 1579 1579 { 1580 1580 struct mtk_nfc *nfc = platform_get_drvdata(pdev); 1581 - struct mtk_nfc_nand_chip *chip; 1581 + struct mtk_nfc_nand_chip *mtk_chip; 1582 + struct nand_chip *chip; 1583 + int ret; 1582 1584 1583 1585 while (!list_empty(&nfc->chips)) { 1584 - chip = list_first_entry(&nfc->chips, struct mtk_nfc_nand_chip, 1585 - node); 1586 - nand_release(&chip->nand); 1587 - list_del(&chip->node); 1586 + mtk_chip = list_first_entry(&nfc->chips, 1587 + struct mtk_nfc_nand_chip, node); 1588 + chip = &mtk_chip->nand; 1589 + ret = mtd_device_unregister(nand_to_mtd(chip)); 1590 + WARN_ON(ret); 1591 + nand_cleanup(chip); 1592 + list_del(&mtk_chip->node); 1588 1593 } 1589 1594 1590 1595 mtk_ecc_release(nfc->ecc);
+5 -1
drivers/mtd/nand/raw/mxc_nand.c
··· 1919 1919 static int mxcnd_remove(struct platform_device *pdev) 1920 1920 { 1921 1921 struct mxc_nand_host *host = platform_get_drvdata(pdev); 1922 + struct nand_chip *chip = &host->nand; 1923 + int ret; 1922 1924 1923 - nand_release(&host->nand); 1925 + ret = mtd_device_unregister(nand_to_mtd(chip)); 1926 + WARN_ON(ret); 1927 + nand_cleanup(chip); 1924 1928 if (host->clk_act) 1925 1929 clk_disable_unprepare(host->clk); 1926 1930
+9 -1
drivers/mtd/nand/raw/mxic_nand.c
··· 393 393 int ret = 0; 394 394 unsigned int op_id; 395 395 396 + if (check_only) 397 + return 0; 398 + 396 399 mxic_nfc_cs_enable(nfc); 397 400 init_completion(&nfc->complete); 398 401 for (op_id = 0; op_id < op->ninstrs; op_id++) { ··· 556 553 static int mxic_nfc_remove(struct platform_device *pdev) 557 554 { 558 555 struct mxic_nand_ctlr *nfc = platform_get_drvdata(pdev); 556 + struct nand_chip *chip = &nfc->chip; 557 + int ret; 559 558 560 - nand_release(&nfc->chip); 559 + ret = mtd_device_unregister(nand_to_mtd(chip)); 560 + WARN_ON(ret); 561 + nand_cleanup(chip); 562 + 561 563 mxic_nfc_clk_disable(nfc); 562 564 return 0; 563 565 }
+279 -166
drivers/mtd/nand/raw/nand_base.c
··· 205 205 .free = nand_ooblayout_free_lp_hamming, 206 206 }; 207 207 208 + static int nand_pairing_dist3_get_info(struct mtd_info *mtd, int page, 209 + struct mtd_pairing_info *info) 210 + { 211 + int lastpage = (mtd->erasesize / mtd->writesize) - 1; 212 + int dist = 3; 213 + 214 + if (page == lastpage) 215 + dist = 2; 216 + 217 + if (!page || (page & 1)) { 218 + info->group = 0; 219 + info->pair = (page + 1) / 2; 220 + } else { 221 + info->group = 1; 222 + info->pair = (page + 1 - dist) / 2; 223 + } 224 + 225 + return 0; 226 + } 227 + 228 + static int nand_pairing_dist3_get_wunit(struct mtd_info *mtd, 229 + const struct mtd_pairing_info *info) 230 + { 231 + int lastpair = ((mtd->erasesize / mtd->writesize) - 1) / 2; 232 + int page = info->pair * 2; 233 + int dist = 3; 234 + 235 + if (!info->group && !info->pair) 236 + return 0; 237 + 238 + if (info->pair == lastpair && info->group) 239 + dist = 2; 240 + 241 + if (!info->group) 242 + page--; 243 + else if (info->pair) 244 + page += dist - 1; 245 + 246 + if (page >= mtd->erasesize / mtd->writesize) 247 + return -EINVAL; 248 + 249 + return page; 250 + } 251 + 252 + const struct mtd_pairing_scheme dist3_pairing_scheme = { 253 + .ngroups = 2, 254 + .get_info = nand_pairing_dist3_get_info, 255 + .get_wunit = nand_pairing_dist3_get_wunit, 256 + }; 257 + 208 258 static int check_offs_len(struct nand_chip *chip, loff_t ofs, uint64_t len) 209 259 { 210 260 int ret = 0; ··· 273 223 274 224 return ret; 275 225 } 226 + 227 + /** 228 + * nand_extract_bits - Copy unaligned bits from one buffer to another one 229 + * @dst: destination buffer 230 + * @dst_off: bit offset at which the writing starts 231 + * @src: source buffer 232 + * @src_off: bit offset at which the reading starts 233 + * @nbits: number of bits to copy from @src to @dst 234 + * 235 + * Copy bits from one memory region to another (overlap authorized). 
236 + */ 237 + void nand_extract_bits(u8 *dst, unsigned int dst_off, const u8 *src, 238 + unsigned int src_off, unsigned int nbits) 239 + { 240 + unsigned int tmp, n; 241 + 242 + dst += dst_off / 8; 243 + dst_off %= 8; 244 + src += src_off / 8; 245 + src_off %= 8; 246 + 247 + while (nbits) { 248 + n = min3(8 - dst_off, 8 - src_off, nbits); 249 + 250 + tmp = (*src >> src_off) & GENMASK(n - 1, 0); 251 + *dst &= ~GENMASK(n - 1 + dst_off, dst_off); 252 + *dst |= tmp << dst_off; 253 + 254 + dst_off += n; 255 + if (dst_off >= 8) { 256 + dst++; 257 + dst_off -= 8; 258 + } 259 + 260 + src_off += n; 261 + if (src_off >= 8) { 262 + src++; 263 + src_off -= 8; 264 + } 265 + 266 + nbits -= n; 267 + } 268 + } 269 + EXPORT_SYMBOL_GPL(nand_extract_bits); 276 270 277 271 /** 278 272 * nand_select_target() - Select a NAND target (A.K.A. die) ··· 439 345 440 346 static int nand_isbad_bbm(struct nand_chip *chip, loff_t ofs) 441 347 { 348 + if (chip->options & NAND_NO_BBM_QUIRK) 349 + return 0; 350 + 442 351 if (chip->legacy.block_bad) 443 352 return chip->legacy.block_bad(chip, ofs); 444 353 ··· 787 690 */ 788 691 timeout_ms = jiffies + msecs_to_jiffies(timeout_ms) + 1; 789 692 do { 790 - ret = nand_read_data_op(chip, &status, sizeof(status), true); 693 + ret = nand_read_data_op(chip, &status, sizeof(status), true, 694 + false); 791 695 if (ret) 792 696 break; 793 697 ··· 834 736 int nand_gpio_waitrdy(struct nand_chip *chip, struct gpio_desc *gpiod, 835 737 unsigned long timeout_ms) 836 738 { 837 - /* Wait until R/B pin indicates chip is ready or timeout occurs */ 838 - timeout_ms = jiffies + msecs_to_jiffies(timeout_ms); 739 + 740 + /* 741 + * Wait until R/B pin indicates chip is ready or timeout occurs. 742 + * +1 below is necessary because if we are now in the last fraction 743 + * of jiffy and msecs_to_jiffies is 1 then we will wait only that 744 + * small jiffy fraction - possibly leading to false timeout. 
745 + */ 746 + timeout_ms = jiffies + msecs_to_jiffies(timeout_ms) + 1; 839 747 do { 840 748 if (gpiod_get_value_cansleep(gpiod)) 841 749 return 0; ··· 874 770 u8 status; 875 771 876 772 ret = nand_read_data_op(chip, &status, sizeof(status), 877 - true); 773 + true, false); 878 774 if (ret) 879 775 return; 880 776 ··· 1972 1868 * @buf: buffer used to store the data 1973 1869 * @len: length of the buffer 1974 1870 * @force_8bit: force 8-bit bus access 1871 + * @check_only: do not actually run the command, only checks if the 1872 + * controller driver supports it 1975 1873 * 1976 1874 * This function does a raw data read on the bus. Usually used after launching 1977 1875 * another NAND operation like nand_read_page_op(). ··· 1982 1876 * Returns 0 on success, a negative error code otherwise. 1983 1877 */ 1984 1878 int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len, 1985 - bool force_8bit) 1879 + bool force_8bit, bool check_only) 1986 1880 { 1987 1881 if (!len || !buf) 1988 1882 return -EINVAL; ··· 1995 1889 1996 1890 instrs[0].ctx.data.force_8bit = force_8bit; 1997 1891 1892 + if (check_only) 1893 + return nand_check_op(chip, &op); 1894 + 1998 1895 return nand_exec_op(chip, &op); 1999 1896 } 1897 + 1898 + if (check_only) 1899 + return 0; 2000 1900 2001 1901 if (force_8bit) { 2002 1902 u8 *p = buf; ··· 2224 2112 char *prefix = " "; 2225 2113 unsigned int i; 2226 2114 2227 - pr_debug("executing subop:\n"); 2115 + pr_debug("executing subop (CS%d):\n", ctx->subop.cs); 2228 2116 2229 2117 for (i = 0; i < ctx->ninstrs; i++) { 2230 2118 instr = &ctx->instrs[i]; ··· 2288 2176 const struct nand_operation *op, bool check_only) 2289 2177 { 2290 2178 struct nand_op_parser_ctx ctx = { 2179 + .subop.cs = op->cs, 2291 2180 .subop.instrs = op->instrs, 2292 2181 .instrs = op->instrs, 2293 2182 .ninstrs = op->ninstrs, ··· 2733 2620 2734 2621 if (oob_required) { 2735 2622 ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, 2736 - false); 2623 + false, 
false); 2737 2624 if (ret) 2738 2625 return ret; 2739 2626 } ··· 2741 2628 return 0; 2742 2629 } 2743 2630 EXPORT_SYMBOL(nand_read_page_raw); 2631 + 2632 + /** 2633 + * nand_monolithic_read_page_raw - Monolithic page read in raw mode 2634 + * @chip: NAND chip info structure 2635 + * @buf: buffer to store read data 2636 + * @oob_required: caller requires OOB data read to chip->oob_poi 2637 + * @page: page number to read 2638 + * 2639 + * This is a raw page read, ie. without any error detection/correction. 2640 + * Monolithic means we are requesting all the relevant data (main plus 2641 + * eventually OOB) to be loaded in the NAND cache and sent over the 2642 + * bus (from the NAND chip to the NAND controller) in a single 2643 + * operation. This is an alternative to nand_read_page_raw(), which 2644 + * first reads the main data, and if the OOB data is requested too, 2645 + * then reads more data on the bus. 2646 + */ 2647 + int nand_monolithic_read_page_raw(struct nand_chip *chip, u8 *buf, 2648 + int oob_required, int page) 2649 + { 2650 + struct mtd_info *mtd = nand_to_mtd(chip); 2651 + unsigned int size = mtd->writesize; 2652 + u8 *read_buf = buf; 2653 + int ret; 2654 + 2655 + if (oob_required) { 2656 + size += mtd->oobsize; 2657 + 2658 + if (buf != chip->data_buf) 2659 + read_buf = nand_get_data_buf(chip); 2660 + } 2661 + 2662 + ret = nand_read_page_op(chip, page, 0, read_buf, size); 2663 + if (ret) 2664 + return ret; 2665 + 2666 + if (buf != chip->data_buf) 2667 + memcpy(buf, read_buf, mtd->writesize); 2668 + 2669 + return 0; 2670 + } 2671 + EXPORT_SYMBOL(nand_monolithic_read_page_raw); 2744 2672 2745 2673 /** 2746 2674 * nand_read_page_raw_syndrome - [INTERN] read raw page data without ecc ··· 2806 2652 return ret; 2807 2653 2808 2654 for (steps = chip->ecc.steps; steps > 0; steps--) { 2809 - ret = nand_read_data_op(chip, buf, eccsize, false); 2655 + ret = nand_read_data_op(chip, buf, eccsize, false, false); 2810 2656 if (ret) 2811 2657 return ret; 2812 2658 
··· 2814 2660 2815 2661 if (chip->ecc.prepad) { 2816 2662 ret = nand_read_data_op(chip, oob, chip->ecc.prepad, 2817 - false); 2663 + false, false); 2818 2664 if (ret) 2819 2665 return ret; 2820 2666 2821 2667 oob += chip->ecc.prepad; 2822 2668 } 2823 2669 2824 - ret = nand_read_data_op(chip, oob, eccbytes, false); 2670 + ret = nand_read_data_op(chip, oob, eccbytes, false, false); 2825 2671 if (ret) 2826 2672 return ret; 2827 2673 ··· 2829 2675 2830 2676 if (chip->ecc.postpad) { 2831 2677 ret = nand_read_data_op(chip, oob, chip->ecc.postpad, 2832 - false); 2678 + false, false); 2833 2679 if (ret) 2834 2680 return ret; 2835 2681 ··· 2839 2685 2840 2686 size = mtd->oobsize - (oob - chip->oob_poi); 2841 2687 if (size) { 2842 - ret = nand_read_data_op(chip, oob, size, false); 2688 + ret = nand_read_data_op(chip, oob, size, false, false); 2843 2689 if (ret) 2844 2690 return ret; 2845 2691 } ··· 3032 2878 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 3033 2879 chip->ecc.hwctl(chip, NAND_ECC_READ); 3034 2880 3035 - ret = nand_read_data_op(chip, p, eccsize, false); 2881 + ret = nand_read_data_op(chip, p, eccsize, false, false); 3036 2882 if (ret) 3037 2883 return ret; 3038 2884 3039 2885 chip->ecc.calculate(chip, p, &ecc_calc[i]); 3040 2886 } 3041 2887 3042 - ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false); 2888 + ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false, 2889 + false); 3043 2890 if (ret) 3044 2891 return ret; 3045 2892 ··· 3056 2901 int stat; 3057 2902 3058 2903 stat = chip->ecc.correct(chip, p, &ecc_code[i], &ecc_calc[i]); 3059 - if (stat == -EBADMSG && 3060 - (chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) { 3061 - /* check for empty pages with bitflips */ 3062 - stat = nand_check_erased_ecc_chunk(p, eccsize, 3063 - &ecc_code[i], eccbytes, 3064 - NULL, 0, 3065 - chip->ecc.strength); 3066 - } 3067 - 3068 - if (stat < 0) { 3069 - mtd->ecc_stats.failed++; 3070 - } else { 3071 - mtd->ecc_stats.corrected += 
stat; 3072 - max_bitflips = max_t(unsigned int, max_bitflips, stat); 3073 - } 3074 - } 3075 - return max_bitflips; 3076 - } 3077 - 3078 - /** 3079 - * nand_read_page_hwecc_oob_first - [REPLACEABLE] hw ecc, read oob first 3080 - * @chip: nand chip info structure 3081 - * @buf: buffer to store read data 3082 - * @oob_required: caller requires OOB data read to chip->oob_poi 3083 - * @page: page number to read 3084 - * 3085 - * Hardware ECC for large page chips, require OOB to be read first. For this 3086 - * ECC mode, the write_page method is re-used from ECC_HW. These methods 3087 - * read/write ECC from the OOB area, unlike the ECC_HW_SYNDROME support with 3088 - * multiple ECC steps, follows the "infix ECC" scheme and reads/writes ECC from 3089 - * the data area, by overwriting the NAND manufacturer bad block markings. 3090 - */ 3091 - static int nand_read_page_hwecc_oob_first(struct nand_chip *chip, uint8_t *buf, 3092 - int oob_required, int page) 3093 - { 3094 - struct mtd_info *mtd = nand_to_mtd(chip); 3095 - int i, eccsize = chip->ecc.size, ret; 3096 - int eccbytes = chip->ecc.bytes; 3097 - int eccsteps = chip->ecc.steps; 3098 - uint8_t *p = buf; 3099 - uint8_t *ecc_code = chip->ecc.code_buf; 3100 - uint8_t *ecc_calc = chip->ecc.calc_buf; 3101 - unsigned int max_bitflips = 0; 3102 - 3103 - /* Read the OOB area first */ 3104 - ret = nand_read_oob_op(chip, page, 0, chip->oob_poi, mtd->oobsize); 3105 - if (ret) 3106 - return ret; 3107 - 3108 - ret = nand_read_page_op(chip, page, 0, NULL, 0); 3109 - if (ret) 3110 - return ret; 3111 - 3112 - ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0, 3113 - chip->ecc.total); 3114 - if (ret) 3115 - return ret; 3116 - 3117 - for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) { 3118 - int stat; 3119 - 3120 - chip->ecc.hwctl(chip, NAND_ECC_READ); 3121 - 3122 - ret = nand_read_data_op(chip, p, eccsize, false); 3123 - if (ret) 3124 - return ret; 3125 - 3126 - chip->ecc.calculate(chip, p, &ecc_calc[i]); 
3127 - 3128 - stat = chip->ecc.correct(chip, p, &ecc_code[i], NULL); 3129 2904 if (stat == -EBADMSG && 3130 2905 (chip->ecc.options & NAND_ECC_GENERIC_ERASED_CHECK)) { 3131 2906 /* check for empty pages with bitflips */ ··· 3106 3021 3107 3022 chip->ecc.hwctl(chip, NAND_ECC_READ); 3108 3023 3109 - ret = nand_read_data_op(chip, p, eccsize, false); 3024 + ret = nand_read_data_op(chip, p, eccsize, false, false); 3110 3025 if (ret) 3111 3026 return ret; 3112 3027 3113 3028 if (chip->ecc.prepad) { 3114 3029 ret = nand_read_data_op(chip, oob, chip->ecc.prepad, 3115 - false); 3030 + false, false); 3116 3031 if (ret) 3117 3032 return ret; 3118 3033 ··· 3121 3036 3122 3037 chip->ecc.hwctl(chip, NAND_ECC_READSYN); 3123 3038 3124 - ret = nand_read_data_op(chip, oob, eccbytes, false); 3039 + ret = nand_read_data_op(chip, oob, eccbytes, false, false); 3125 3040 if (ret) 3126 3041 return ret; 3127 3042 ··· 3131 3046 3132 3047 if (chip->ecc.postpad) { 3133 3048 ret = nand_read_data_op(chip, oob, chip->ecc.postpad, 3134 - false); 3049 + false, false); 3135 3050 if (ret) 3136 3051 return ret; 3137 3052 ··· 3159 3074 /* Calculate remaining oob bytes */ 3160 3075 i = mtd->oobsize - (oob - chip->oob_poi); 3161 3076 if (i) { 3162 - ret = nand_read_data_op(chip, oob, i, false); 3077 + ret = nand_read_data_op(chip, oob, i, false, false); 3163 3078 if (ret) 3164 3079 return ret; 3165 3080 } ··· 3251 3166 uint32_t max_oobsize = mtd_oobavail(mtd, ops); 3252 3167 3253 3168 uint8_t *bufpoi, *oob, *buf; 3254 - int use_bufpoi; 3169 + int use_bounce_buf; 3255 3170 unsigned int max_bitflips = 0; 3256 3171 int retry_mode = 0; 3257 3172 bool ecc_fail = false; ··· 3269 3184 oob_required = oob ? 
1 : 0; 3270 3185 3271 3186 while (1) { 3272 - unsigned int ecc_failures = mtd->ecc_stats.failed; 3187 + struct mtd_ecc_stats ecc_stats = mtd->ecc_stats; 3273 3188 3274 3189 bytes = min(mtd->writesize - col, readlen); 3275 3190 aligned = (bytes == mtd->writesize); 3276 3191 3277 3192 if (!aligned) 3278 - use_bufpoi = 1; 3279 - else if (chip->options & NAND_USE_BOUNCE_BUFFER) 3280 - use_bufpoi = !virt_addr_valid(buf) || 3281 - !IS_ALIGNED((unsigned long)buf, 3282 - chip->buf_align); 3193 + use_bounce_buf = 1; 3194 + else if (chip->options & NAND_USES_DMA) 3195 + use_bounce_buf = !virt_addr_valid(buf) || 3196 + !IS_ALIGNED((unsigned long)buf, 3197 + chip->buf_align); 3283 3198 else 3284 - use_bufpoi = 0; 3199 + use_bounce_buf = 0; 3285 3200 3286 3201 /* Is the current page in the buffer? */ 3287 3202 if (realpage != chip->pagecache.page || oob) { 3288 - bufpoi = use_bufpoi ? chip->data_buf : buf; 3203 + bufpoi = use_bounce_buf ? chip->data_buf : buf; 3289 3204 3290 - if (use_bufpoi && aligned) 3205 + if (use_bounce_buf && aligned) 3291 3206 pr_debug("%s: using read bounce buffer for buf@%p\n", 3292 3207 __func__, buf); 3293 3208 ··· 3308 3223 ret = chip->ecc.read_page(chip, bufpoi, 3309 3224 oob_required, page); 3310 3225 if (ret < 0) { 3311 - if (use_bufpoi) 3226 + if (use_bounce_buf) 3312 3227 /* Invalidate page cache */ 3313 3228 chip->pagecache.page = -1; 3314 3229 break; 3315 3230 } 3316 3231 3317 - /* Transfer not aligned data */ 3318 - if (use_bufpoi) { 3232 + /* 3233 + * Copy back the data in the initial buffer when reading 3234 + * partial pages or when a bounce buffer is required. 
3235 + */ 3236 + if (use_bounce_buf) { 3319 3237 if (!NAND_HAS_SUBPAGE_READ(chip) && !oob && 3320 - !(mtd->ecc_stats.failed - ecc_failures) && 3238 + !(mtd->ecc_stats.failed - ecc_stats.failed) && 3321 3239 (ops->mode != MTD_OPS_RAW)) { 3322 3240 chip->pagecache.page = realpage; 3323 3241 chip->pagecache.bitflips = ret; ··· 3328 3240 /* Invalidate page cache */ 3329 3241 chip->pagecache.page = -1; 3330 3242 } 3331 - memcpy(buf, chip->data_buf + col, bytes); 3243 + memcpy(buf, bufpoi + col, bytes); 3332 3244 } 3333 3245 3334 3246 if (unlikely(oob)) { ··· 3343 3255 3344 3256 nand_wait_readrdy(chip); 3345 3257 3346 - if (mtd->ecc_stats.failed - ecc_failures) { 3258 + if (mtd->ecc_stats.failed - ecc_stats.failed) { 3347 3259 if (retry_mode + 1 < chip->read_retries) { 3348 3260 retry_mode++; 3349 3261 ret = nand_setup_read_retry(chip, ··· 3351 3263 if (ret < 0) 3352 3264 break; 3353 3265 3354 - /* Reset failures; retry */ 3355 - mtd->ecc_stats.failed = ecc_failures; 3266 + /* Reset ecc_stats; retry */ 3267 + mtd->ecc_stats = ecc_stats; 3356 3268 goto read_retry; 3357 3269 } else { 3358 3270 /* No more retry modes; real failure */ ··· 3461 3373 sndrnd = 1; 3462 3374 toread = min_t(int, length, chunk); 3463 3375 3464 - ret = nand_read_data_op(chip, bufpoi, toread, false); 3376 + ret = nand_read_data_op(chip, bufpoi, toread, false, false); 3465 3377 if (ret) 3466 3378 return ret; 3467 3379 ··· 3469 3381 length -= toread; 3470 3382 } 3471 3383 if (length > 0) { 3472 - ret = nand_read_data_op(chip, bufpoi, length, false); 3384 + ret = nand_read_data_op(chip, bufpoi, length, false, false); 3473 3385 if (ret) 3474 3386 return ret; 3475 3387 } ··· 3720 3632 return nand_prog_page_end_op(chip); 3721 3633 } 3722 3634 EXPORT_SYMBOL(nand_write_page_raw); 3635 + 3636 + /** 3637 + * nand_monolithic_write_page_raw - Monolithic page write in raw mode 3638 + * @chip: NAND chip info structure 3639 + * @buf: data buffer to write 3640 + * @oob_required: must write chip->oob_poi to OOB 3641 
+ * @page: page number to write 3642 + * 3643 + * This is a raw page write, i.e. without any error detection/correction. 3644 + * Monolithic means we are requesting all the relevant data (main plus 3645 + * possibly OOB) to be sent over the bus and effectively programmed 3646 + * into the NAND chip arrays in a single operation. This is an 3647 + * alternative to nand_write_page_raw(), which first sends the main 3648 + * data, then possibly sends the OOB data by latching more data 3649 + * cycles on the NAND bus, and finally sends the program command to 3650 + * synchronize the NAND chip cache. 3651 + */ 3652 + int nand_monolithic_write_page_raw(struct nand_chip *chip, const u8 *buf, 3653 + int oob_required, int page) 3654 + { 3655 + struct mtd_info *mtd = nand_to_mtd(chip); 3656 + unsigned int size = mtd->writesize; 3657 + u8 *write_buf = (u8 *)buf; 3658 + 3659 + if (oob_required) { 3660 + size += mtd->oobsize; 3661 + 3662 + if (buf != chip->data_buf) { 3663 + write_buf = nand_get_data_buf(chip); 3664 + memcpy(write_buf, buf, mtd->writesize); 3665 + } 3666 + } 3667 + 3668 + return nand_prog_page_op(chip, page, 0, write_buf, size); 3669 + } 3670 + EXPORT_SYMBOL(nand_monolithic_write_page_raw); 3723 3671 3724 3672 /** 3725 3673 * nand_write_page_raw_syndrome - [INTERN] raw page write function ··· 4136 4012 while (1) { 4137 4013 int bytes = mtd->writesize; 4138 4014 uint8_t *wbuf = buf; 4139 - int use_bufpoi; 4015 + int use_bounce_buf; 4140 4016 int part_pagewr = (column || writelen < mtd->writesize); 4141 4017 4142 4018 if (part_pagewr) 4143 - use_bufpoi = 1; 4144 - else if (chip->options & NAND_USE_BOUNCE_BUFFER) 4145 - use_bufpoi = !virt_addr_valid(buf) || 4146 - !IS_ALIGNED((unsigned long)buf, 4147 - chip->buf_align); 4019 + use_bounce_buf = 1; 4020 + else if (chip->options & NAND_USES_DMA) 4021 + use_bounce_buf = !virt_addr_valid(buf) || 4022 + !IS_ALIGNED((unsigned long)buf, 4023 + chip->buf_align); 4148 4024 else 4149 - use_bufpoi = 0; 4025 + use_bounce_buf =
0; 4150 4026 4151 - /* Partial page write?, or need to use bounce buffer */ 4152 - if (use_bufpoi) { 4027 + /* 4028 + * Copy the data from the initial buffer when doing partial page 4029 + * writes or when a bounce buffer is required. 4030 + */ 4031 + if (use_bounce_buf) { 4153 4032 pr_debug("%s: using write bounce buffer for buf@%p\n", 4154 4033 __func__, buf); 4155 4034 if (part_pagewr) ··· 5010 4883 [NAND_ECC_SOFT] = "soft", 5011 4884 [NAND_ECC_HW] = "hw", 5012 4885 [NAND_ECC_HW_SYNDROME] = "hw_syndrome", 5013 - [NAND_ECC_HW_OOB_FIRST] = "hw_oob_first", 5014 4886 [NAND_ECC_ON_DIE] = "on-die", 5015 4887 }; 5016 4888 ··· 5022 4896 if (err < 0) 5023 4897 return err; 5024 4898 5025 - for (i = 0; i < ARRAY_SIZE(nand_ecc_modes); i++) 4899 + for (i = NAND_ECC_NONE; i < ARRAY_SIZE(nand_ecc_modes); i++) 5026 4900 if (!strcasecmp(pm, nand_ecc_modes[i])) 5027 4901 return i; 5028 4902 5029 4903 /* 5030 4904 * For backward compatibility we support few obsoleted values that don't 5031 - * have their mappings into nand_ecc_modes_t anymore (they were merged 5032 - * with other enums). 4905 + * have their mappings into the nand_ecc_mode enum anymore (they were 4906 + * merged with other enums). 
5033 4907 */ 5034 4908 if (!strcasecmp(pm, "soft_bch")) 5035 4909 return NAND_ECC_SOFT; ··· 5043 4917 [NAND_ECC_RS] = "rs", 5044 4918 }; 5045 4919 5046 - static int of_get_nand_ecc_algo(struct device_node *np) 4920 + static enum nand_ecc_algo of_get_nand_ecc_algo(struct device_node *np) 5047 4921 { 4922 + enum nand_ecc_algo ecc_algo; 5048 4923 const char *pm; 5049 - int err, i; 4924 + int err; 5050 4925 5051 4926 err = of_property_read_string(np, "nand-ecc-algo", &pm); 5052 4927 if (!err) { 5053 - for (i = NAND_ECC_HAMMING; i < ARRAY_SIZE(nand_ecc_algos); i++) 5054 - if (!strcasecmp(pm, nand_ecc_algos[i])) 5055 - return i; 5056 - return -ENODEV; 4928 + for (ecc_algo = NAND_ECC_HAMMING; 4929 + ecc_algo < ARRAY_SIZE(nand_ecc_algos); 4930 + ecc_algo++) { 4931 + if (!strcasecmp(pm, nand_ecc_algos[ecc_algo])) 4932 + return ecc_algo; 4933 + } 5057 4934 } 5058 4935 5059 4936 /* ··· 5064 4935 * for some obsoleted values that were specifying ECC algorithm. 5065 4936 */ 5066 4937 err = of_property_read_string(np, "nand-ecc-mode", &pm); 5067 - if (err < 0) 5068 - return err; 4938 + if (!err) { 4939 + if (!strcasecmp(pm, "soft")) 4940 + return NAND_ECC_HAMMING; 4941 + else if (!strcasecmp(pm, "soft_bch")) 4942 + return NAND_ECC_BCH; 4943 + } 5069 4944 5070 - if (!strcasecmp(pm, "soft")) 5071 - return NAND_ECC_HAMMING; 5072 - else if (!strcasecmp(pm, "soft_bch")) 5073 - return NAND_ECC_BCH; 5074 - 5075 - return -ENODEV; 4945 + return NAND_ECC_UNKNOWN; 5076 4946 } 5077 4947 5078 4948 static int of_get_nand_ecc_step_size(struct device_node *np) ··· 5116 4988 static int nand_dt_init(struct nand_chip *chip) 5117 4989 { 5118 4990 struct device_node *dn = nand_get_flash_node(chip); 5119 - int ecc_mode, ecc_algo, ecc_strength, ecc_step; 4991 + enum nand_ecc_algo ecc_algo; 4992 + int ecc_mode, ecc_strength, ecc_step; 5120 4993 5121 4994 if (!dn) 5122 4995 return 0; ··· 5139 5010 if (ecc_mode >= 0) 5140 5011 chip->ecc.mode = ecc_mode; 5141 5012 5142 - if (ecc_algo >= 0) 5013 + if 
(ecc_algo != NAND_ECC_UNKNOWN) 5143 5014 chip->ecc.algo = ecc_algo; 5144 5015 5145 5016 if (ecc_strength >= 0) ··· 5269 5140 ecc->read_page = nand_read_page_swecc; 5270 5141 ecc->read_subpage = nand_read_subpage; 5271 5142 ecc->write_page = nand_write_page_swecc; 5272 - ecc->read_page_raw = nand_read_page_raw; 5273 - ecc->write_page_raw = nand_write_page_raw; 5143 + if (!ecc->read_page_raw) 5144 + ecc->read_page_raw = nand_read_page_raw; 5145 + if (!ecc->write_page_raw) 5146 + ecc->write_page_raw = nand_write_page_raw; 5274 5147 ecc->read_oob = nand_read_oob_std; 5275 5148 ecc->write_oob = nand_write_oob_std; 5276 5149 if (!ecc->size) ··· 5294 5163 ecc->read_page = nand_read_page_swecc; 5295 5164 ecc->read_subpage = nand_read_subpage; 5296 5165 ecc->write_page = nand_write_page_swecc; 5297 - ecc->read_page_raw = nand_read_page_raw; 5298 - ecc->write_page_raw = nand_write_page_raw; 5166 + if (!ecc->read_page_raw) 5167 + ecc->read_page_raw = nand_read_page_raw; 5168 + if (!ecc->write_page_raw) 5169 + ecc->write_page_raw = nand_write_page_raw; 5299 5170 ecc->read_oob = nand_read_oob_std; 5300 5171 ecc->write_oob = nand_write_oob_std; 5301 5172 ··· 5761 5628 */ 5762 5629 5763 5630 switch (ecc->mode) { 5764 - case NAND_ECC_HW_OOB_FIRST: 5765 - /* Similar to NAND_ECC_HW, but a separate read_page handle */ 5766 - if (!ecc->calculate || !ecc->correct || !ecc->hwctl) { 5767 - WARN(1, "No ECC functions supplied; hardware ECC not possible\n"); 5768 - ret = -EINVAL; 5769 - goto err_nand_manuf_cleanup; 5770 - } 5771 - if (!ecc->read_page) 5772 - ecc->read_page = nand_read_page_hwecc_oob_first; 5773 - fallthrough; 5774 5631 case NAND_ECC_HW: 5775 5632 /* Use standard hwecc read page function? 
*/ 5776 5633 if (!ecc->read_page) ··· 5904 5781 5905 5782 /* ECC sanity check: warn if it's too weak */ 5906 5783 if (!nand_ecc_strength_good(chip)) 5907 - pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n", 5908 - mtd->name); 5784 + pr_warn("WARNING: %s: the ECC used on your system (%db/%dB) is too weak compared to the one required by the NAND chip (%db/%dB)\n", 5785 + mtd->name, chip->ecc.strength, chip->ecc.size, 5786 + chip->base.eccreq.strength, 5787 + chip->base.eccreq.step_size); 5909 5788 5910 5789 /* Allow subpage writes up to ecc.steps. Not possible for MLC flash */ 5911 5790 if (!(chip->options & NAND_NO_SUBPAGE_WRITE) && nand_is_slc(chip)) { ··· 6099 5974 } 6100 5975 6101 5976 EXPORT_SYMBOL_GPL(nand_cleanup); 6102 - 6103 - /** 6104 - * nand_release - [NAND Interface] Unregister the MTD device and free resources 6105 - * held by the NAND device 6106 - * @chip: NAND chip object 6107 - */ 6108 - void nand_release(struct nand_chip *chip) 6109 - { 6110 - mtd_device_unregister(nand_to_mtd(chip)); 6111 - nand_cleanup(chip); 6112 - } 6113 - EXPORT_SYMBOL_GPL(nand_release); 6114 5977 6115 5978 MODULE_LICENSE("GPL"); 6116 5979 MODULE_AUTHOR("Steven J. Hill <sjhill@realitydiluted.com>");
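Editor's note: the use_bufpoi → use_bounce_buf rename in the read/write paths above keeps the same decision logic — fall back to the driver's bounce buffer for partial-page accesses, or when a DMA-capable controller (NAND_USES_DMA) is handed a buffer that is not DMA-able. A minimal userspace sketch of that check, assuming an arbitrary buf_align of 16 and omitting virt_addr_valid(), which has no userspace equivalent:

```c
#include <stdint.h>

/* Kernel-style alignment check: x is aligned iff its low bits are zero. */
#define IS_ALIGNED(x, a) (((x) & ((uintptr_t)(a) - 1)) == 0)

/*
 * Decide whether a bounce buffer is needed, mirroring the use_bounce_buf
 * logic in nand_do_read_ops()/nand_do_write_ops(): partial-page accesses
 * always bounce; otherwise bounce only when the caller's buffer is
 * misaligned for the controller (buf_align value is hypothetical).
 */
static int needs_bounce_buf(const void *buf, int partial, unsigned int buf_align)
{
	if (partial)
		return 1;
	return !IS_ALIGNED((uintptr_t)buf, buf_align);
}
```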
+5 -5
drivers/mtd/nand/raw/nand_bch.c
··· 41 41 unsigned int i; 42 42 43 43 memset(code, 0, chip->ecc.bytes); 44 - encode_bch(nbc->bch, buf, chip->ecc.size, code); 44 + bch_encode(nbc->bch, buf, chip->ecc.size, code); 45 45 46 46 /* apply mask so that an erased page is a valid codeword */ 47 47 for (i = 0; i < chip->ecc.bytes; i++) ··· 67 67 unsigned int *errloc = nbc->errloc; 68 68 int i, count; 69 69 70 - count = decode_bch(nbc->bch, NULL, chip->ecc.size, read_ecc, calc_ecc, 70 + count = bch_decode(nbc->bch, NULL, chip->ecc.size, read_ecc, calc_ecc, 71 71 NULL, errloc); 72 72 if (count > 0) { 73 73 for (i = 0; i < count; i++) { ··· 130 130 if (!nbc) 131 131 goto fail; 132 132 133 - nbc->bch = init_bch(m, t, 0); 133 + nbc->bch = bch_init(m, t, 0, false); 134 134 if (!nbc->bch) 135 135 goto fail; 136 136 ··· 182 182 goto fail; 183 183 184 184 memset(erased_page, 0xff, eccsize); 185 - encode_bch(nbc->bch, erased_page, eccsize, nbc->eccmask); 185 + bch_encode(nbc->bch, erased_page, eccsize, nbc->eccmask); 186 186 kfree(erased_page); 187 187 188 188 for (i = 0; i < eccbytes; i++) ··· 205 205 void nand_bch_free(struct nand_bch_control *nbc) 206 206 { 207 207 if (nbc) { 208 - free_bch(nbc->bch); 208 + bch_free(nbc->bch); 209 209 kfree(nbc->errloc); 210 210 kfree(nbc->eccmask); 211 211 kfree(nbc);
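Editor's note: the eccmask handling kept intact above (only encode_bch() is renamed to bch_encode()) relies on BCH codes being linear: nand_bch_init() computes the ECC of an all-0xFF page, inverts it, and nand_bch_calculate_ecc() XORs that mask into every computed ECC, so an erased page — whose spare area also reads 0xFF — stays a valid codeword. A toy demonstration of the masking trick, using a plain XOR checksum as a stand-in for the real BCH encoder (the checksum, page size, and names are illustrative only):

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SZ 16 /* illustrative; real ECC steps are 256/512/1024 bytes */

/* Stand-in for bch_encode(): any *linear* code works for the mask trick. */
static uint8_t toy_ecc(const uint8_t *data, size_t len)
{
	uint8_t ecc = 0;

	while (len--)
		ecc ^= *data++;
	return ecc;
}

/* Mask = ECC(erased page) ^ 0xff, as set up in nand_bch_init(). */
static uint8_t erased_mask(void)
{
	uint8_t erased[PAGE_SZ];

	memset(erased, 0xff, sizeof(erased));
	return toy_ecc(erased, sizeof(erased)) ^ 0xff;
}

/* Stored ECC = raw ECC ^ mask, as done in nand_bch_calculate_ecc(). */
static uint8_t stored_ecc(const uint8_t *data, size_t len)
{
	return toy_ecc(data, len) ^ erased_mask();
}
```

With this mask, the stored ECC of an erased page comes out as 0xFF, matching the erased spare area.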
+20 -10
drivers/mtd/nand/raw/nand_jedec.c
··· 16 16 17 17 #include "internals.h" 18 18 19 + #define JEDEC_PARAM_PAGES 3 20 + 19 21 /* 20 22 * Check if the NAND chip is JEDEC compliant, returns 1 if it is, 0 otherwise. 21 23 */ ··· 27 25 struct nand_memory_organization *memorg; 28 26 struct nand_jedec_params *p; 29 27 struct jedec_ecc_info *ecc; 28 + bool use_datain = false; 30 29 int jedec_version = 0; 31 30 char id[5]; 32 31 int i, val, ret; 32 + u16 crc; 33 33 34 34 memorg = nanddev_get_memorg(&chip->base); 35 35 ··· 45 41 if (!p) 46 42 return -ENOMEM; 47 43 48 - ret = nand_read_param_page_op(chip, 0x40, NULL, 0); 49 - if (ret) { 50 - ret = 0; 51 - goto free_jedec_param_page; 52 - } 44 + if (!nand_has_exec_op(chip) || 45 + !nand_read_data_op(chip, p, sizeof(*p), true, true)) 46 + use_datain = true; 53 47 54 - for (i = 0; i < 3; i++) { 55 - ret = nand_read_data_op(chip, p, sizeof(*p), true); 48 + for (i = 0; i < JEDEC_PARAM_PAGES; i++) { 49 + if (!i) 50 + ret = nand_read_param_page_op(chip, 0x40, p, 51 + sizeof(*p)); 52 + else if (use_datain) 53 + ret = nand_read_data_op(chip, p, sizeof(*p), true, 54 + false); 55 + else 56 + ret = nand_change_read_column_op(chip, sizeof(*p) * i, 57 + p, sizeof(*p), true); 56 58 if (ret) { 57 59 ret = 0; 58 60 goto free_jedec_param_page; 59 61 } 60 62 61 - if (onfi_crc16(ONFI_CRC_BASE, (uint8_t *)p, 510) == 62 - le16_to_cpu(p->crc)) 63 + crc = onfi_crc16(ONFI_CRC_BASE, (u8 *)p, 510); 64 + if (crc == le16_to_cpu(p->crc)) 63 65 break; 64 66 } 65 67 66 - if (i == 3) { 68 + if (i == JEDEC_PARAM_PAGES) { 67 69 pr_err("Could not find valid JEDEC parameter page; aborting\n"); 68 70 goto free_jedec_param_page; 69 71 }
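Editor's note: both this JEDEC path and the ONFI path below validate each redundant parameter-page copy with onfi_crc16() before trusting it. The parameter-page CRC is a CRC-16 with polynomial 0x8005 seeded with ONFI_CRC_BASE (0x4F4E); a self-contained userspace version mirroring the kernel helper's behavior (a re-implementation, not the kernel source itself):

```c
#include <stddef.h>
#include <stdint.h>

#define ONFI_CRC_BASE 0x4F4E /* initial CRC value mandated by ONFI/JEDEC */

/* CRC-16 over the parameter page, polynomial x^16 + x^15 + x^2 + 1 (0x8005). */
static uint16_t onfi_crc16(uint16_t crc, const uint8_t *p, size_t len)
{
	int i;

	while (len--) {
		crc ^= *p++ << 8;
		for (i = 0; i < 8; i++)
			crc = (crc << 1) ^ ((crc & 0x8000) ? 0x8005 : 0);
	}
	return crc;
}
```

The driver compares this over the first 254 (ONFI) or 510 (JEDEC) bytes against the little-endian crc field at the end of the page.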
+5 -3
drivers/mtd/nand/raw/nand_legacy.c
··· 225 225 do { 226 226 u8 status; 227 227 228 - ret = nand_read_data_op(chip, &status, sizeof(status), true); 228 + ret = nand_read_data_op(chip, &status, sizeof(status), true, 229 + false); 229 230 if (ret) 230 231 return; 231 232 ··· 553 552 break; 554 553 } else { 555 554 ret = nand_read_data_op(chip, &status, 556 - sizeof(status), true); 555 + sizeof(status), true, 556 + false); 557 557 if (ret) 558 558 return ret; 559 559 ··· 565 563 } while (time_before(jiffies, timeo)); 566 564 } 567 565 568 - ret = nand_read_data_op(chip, &status, sizeof(status), true); 566 + ret = nand_read_data_op(chip, &status, sizeof(status), true, false); 569 567 if (ret) 570 568 return ret; 571 569
+55 -10
drivers/mtd/nand/raw/nand_micron.c
··· 192 192 struct micron_nand *micron = nand_get_manufacturer_data(chip); 193 193 struct mtd_info *mtd = nand_to_mtd(chip); 194 194 unsigned int step, max_bitflips = 0; 195 + bool use_datain = false; 195 196 int ret; 196 197 197 198 if (!(status & NAND_ECC_STATUS_WRITE_RECOMMENDED)) { ··· 212 211 * in non-raw mode, even if the user did not request those bytes. 213 212 */ 214 213 if (!oob_required) { 215 - ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, 216 - false); 214 + /* 215 + * We first check which operation is supported by the controller 216 + * before running it. This trick makes it possible to support 217 + * all controllers, even the most constrained ones, with 218 + * almost no performance hit. 219 + * 220 + * TODO: could be enhanced to avoid repeating the same check 221 + * over and over in the fast path. 222 + */ 223 + if (!nand_has_exec_op(chip) || 224 + !nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false, 225 + true)) 226 + use_datain = true; 227 + 228 + if (use_datain) 229 + ret = nand_read_data_op(chip, chip->oob_poi, 230 + mtd->oobsize, false, false); 231 + else 232 + ret = nand_change_read_column_op(chip, mtd->writesize, 233 + chip->oob_poi, 234 + mtd->oobsize, false); 217 235 if (ret) 218 236 return ret; 219 237 } ··· 305 285 int oob_required, int page) 306 286 { 307 287 struct mtd_info *mtd = nand_to_mtd(chip); 288 + bool use_datain = false; 308 289 u8 status; 309 290 int ret, max_bitflips = 0; 310 291 ··· 321 300 if (ret) 322 301 goto out; 323 302 324 - ret = nand_exit_status_op(chip); 325 - if (ret) 326 - goto out; 303 + /* 304 + * We first check which operation is supported by the controller before 305 + * running it. This trick makes it possible to support all controllers, 306 + * even the most constrained ones, with almost no performance hit. 307 + * 308 + * TODO: could be enhanced to avoid repeating the same check over and 309 + * over in the fast path.
310 + */ 311 + if (!nand_has_exec_op(chip) || 312 + !nand_read_data_op(chip, buf, mtd->writesize, false, true)) 313 + use_datain = true; 327 314 328 - ret = nand_read_data_op(chip, buf, mtd->writesize, false); 329 - if (!ret && oob_required) 330 - ret = nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, 315 + if (use_datain) { 316 + ret = nand_exit_status_op(chip); 317 + if (ret) 318 + goto out; 319 + 320 + ret = nand_read_data_op(chip, buf, mtd->writesize, false, 331 321 false); 322 + if (!ret && oob_required) 323 + ret = nand_read_data_op(chip, chip->oob_poi, 324 + mtd->oobsize, false, false); 325 + } else { 326 + ret = nand_change_read_column_op(chip, 0, buf, mtd->writesize, 327 + false); 328 + if (!ret && oob_required) 329 + ret = nand_change_read_column_op(chip, mtd->writesize, 330 + chip->oob_poi, 331 + mtd->oobsize, false); 332 + } 332 333 333 334 if (chip->ecc.strength == 4) 334 335 max_bitflips = micron_nand_on_die_ecc_status_4(chip, status, ··· 551 508 chip->ecc.read_page_raw = nand_read_page_raw_notsupp; 552 509 chip->ecc.write_page_raw = nand_write_page_raw_notsupp; 553 510 } else { 554 - chip->ecc.read_page_raw = nand_read_page_raw; 555 - chip->ecc.write_page_raw = nand_write_page_raw; 511 + if (!chip->ecc.read_page_raw) 512 + chip->ecc.read_page_raw = nand_read_page_raw; 513 + if (!chip->ecc.write_page_raw) 514 + chip->ecc.write_page_raw = nand_write_page_raw; 556 515 } 557 516 } 558 517
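Editor's note: the comment repeated in both Micron hunks describes the new check_only probing convention — nand_read_data_op() is first called with its final argument set to true, which asks the controller whether the operation is supported without executing it, and the answer selects between a plain data-in sequence and a Change Read Column. A hedged sketch of that convention (the controller structure and return values here are invented for illustration):

```c
#include <stdbool.h>

/* Hypothetical controller description; only for illustration. */
struct toy_controller {
	bool supports_data_in;
};

/*
 * Mimics the nand_read_data_op() check_only convention: with check_only
 * set, nothing touches the bus (or the buffer); the return value only
 * reports whether the controller could execute the operation.
 */
static int toy_read_data_op(struct toy_controller *ctrl, unsigned char *buf,
			    unsigned int len, bool check_only)
{
	if (!ctrl->supports_data_in)
		return -1; /* stand-in for -ENOTSUPP */
	if (check_only)
		return 0; /* supported; no data was transferred */
	if (len)
		buf[0] = 0xa5; /* pretend the first data cycle arrived */
	return 0;
}
```

A caller probes once with check_only = true, caches the result (use_datain above), then issues the real operation.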
+41 -28
drivers/mtd/nand/raw/nand_onfi.c
··· 16 16 17 17 #include "internals.h" 18 18 19 + #define ONFI_PARAM_PAGES 3 20 + 19 21 u16 onfi_crc16(u16 crc, u8 const *p, size_t len) 20 22 { 21 23 int i; ··· 47 45 if (!ep) 48 46 return -ENOMEM; 49 47 50 - /* Send our own NAND_CMD_PARAM. */ 51 - ret = nand_read_param_page_op(chip, 0, NULL, 0); 52 - if (ret) 53 - goto ext_out; 54 - 55 - /* Use the Change Read Column command to skip the ONFI param pages. */ 48 + /* 49 + * Use the Change Read Column command to skip the ONFI param pages and 50 + * ensure we read at the right location. 51 + */ 56 52 ret = nand_change_read_column_op(chip, 57 53 sizeof(*p) * p->num_of_param_pages, 58 54 ep, len, true); ··· 141 141 { 142 142 struct mtd_info *mtd = nand_to_mtd(chip); 143 143 struct nand_memory_organization *memorg; 144 - struct nand_onfi_params *p; 144 + struct nand_onfi_params *p = NULL, *pbuf; 145 145 struct onfi_params *onfi; 146 + bool use_datain = false; 146 147 int onfi_version = 0; 147 148 char id[4]; 148 149 int i, ret, val; 150 + u16 crc; 149 151 150 152 memorg = nanddev_get_memorg(&chip->base); 151 153 ··· 157 155 return 0; 158 156 159 157 /* ONFI chip: allocate a buffer to hold its parameter page */ 160 - p = kzalloc((sizeof(*p) * 3), GFP_KERNEL); 161 - if (!p) 158 + pbuf = kzalloc((sizeof(*pbuf) * ONFI_PARAM_PAGES), GFP_KERNEL); 159 + if (!pbuf) 162 160 return -ENOMEM; 163 161 164 - ret = nand_read_param_page_op(chip, 0, NULL, 0); 165 - if (ret) { 166 - ret = 0; 167 - goto free_onfi_param_page; 168 - } 162 + if (!nand_has_exec_op(chip) || 163 + !nand_read_data_op(chip, &pbuf[0], sizeof(*pbuf), true, true)) 164 + use_datain = true; 169 165 170 - for (i = 0; i < 3; i++) { 171 - ret = nand_read_data_op(chip, &p[i], sizeof(*p), true); 166 + for (i = 0; i < ONFI_PARAM_PAGES; i++) { 167 + if (!i) 168 + ret = nand_read_param_page_op(chip, 0, &pbuf[i], 169 + sizeof(*pbuf)); 170 + else if (use_datain) 171 + ret = nand_read_data_op(chip, &pbuf[i], sizeof(*pbuf), 172 + true, false); 173 + else 174 + ret = 
nand_change_read_column_op(chip, sizeof(*pbuf) * i, 175 + &pbuf[i], sizeof(*pbuf), 176 + true); 172 177 if (ret) { 173 178 ret = 0; 174 179 goto free_onfi_param_page; 175 180 } 176 181 177 - if (onfi_crc16(ONFI_CRC_BASE, (u8 *)&p[i], 254) == 178 - le16_to_cpu(p->crc)) { 179 - if (i) 180 - memcpy(p, &p[i], sizeof(*p)); 182 + crc = onfi_crc16(ONFI_CRC_BASE, (u8 *)&pbuf[i], 254); 183 + if (crc == le16_to_cpu(pbuf[i].crc)) { 184 + p = &pbuf[i]; 181 185 break; 182 186 } 183 187 } 184 188 185 - if (i == 3) { 186 - const void *srcbufs[3] = {p, p + 1, p + 2}; 189 + if (i == ONFI_PARAM_PAGES) { 190 + const void *srcbufs[ONFI_PARAM_PAGES]; 191 + unsigned int j; 192 + 193 + for (j = 0; j < ONFI_PARAM_PAGES; j++) 194 + srcbufs[j] = pbuf + j; 187 195 188 196 pr_warn("Could not find a valid ONFI parameter page, trying bit-wise majority to recover it\n"); 189 - nand_bit_wise_majority(srcbufs, ARRAY_SIZE(srcbufs), p, 190 - sizeof(*p)); 197 + nand_bit_wise_majority(srcbufs, ONFI_PARAM_PAGES, pbuf, 198 + sizeof(*pbuf)); 191 199 192 - if (onfi_crc16(ONFI_CRC_BASE, (u8 *)p, 254) != 193 - le16_to_cpu(p->crc)) { 200 + crc = onfi_crc16(ONFI_CRC_BASE, (u8 *)pbuf, 254); 201 + if (crc != le16_to_cpu(pbuf->crc)) { 194 202 pr_err("ONFI parameter recovery failed, aborting\n"); 195 203 goto free_onfi_param_page; 196 204 } 205 + p = pbuf; 197 206 } 198 207 199 208 if (chip->manufacturer.desc && chip->manufacturer.desc->ops && ··· 312 299 chip->parameters.onfi = onfi; 313 300 314 301 /* Identification done, free the full ONFI parameter page and exit */ 315 - kfree(p); 302 + kfree(pbuf); 316 303 317 304 return 1; 318 305 319 306 free_model: 320 307 kfree(chip->parameters.model); 321 308 free_onfi_param_page: 322 - kfree(p); 309 + kfree(pbuf); 323 310 324 311 return ret; 325 312 }
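Editor's note: when no copy passes the CRC, the ONFI code above falls back to nand_bit_wise_majority() to rebuild one parameter page from all ONFI_PARAM_PAGES copies, one bit at a time. A self-contained sketch of that vote (a re-implementation for illustration, not the kernel helper itself):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Rebuild dst from nsrc possibly-corrupted copies: each output bit is set
 * when more than half of the copies have it set (bit-wise majority vote).
 */
static void bit_wise_majority(const uint8_t * const *srcbufs,
			      unsigned int nsrc, uint8_t *dst, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++) {
		unsigned int bit, j, cnt;
		uint8_t byte = 0;

		for (bit = 0; bit < 8; bit++) {
			cnt = 0;
			for (j = 0; j < nsrc; j++)
				cnt += (srcbufs[j][i] >> bit) & 1;
			if (cnt > nsrc / 2)
				byte |= 1 << bit;
		}
		dst[i] = byte;
	}
}
```

With three copies, any single-bit error present in only one copy is voted away; the result is then CRC-checked once more before being used.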
+8 -3
drivers/mtd/nand/raw/nand_timings.c
··· 16 16 /* Mode 0 */ 17 17 { 18 18 .type = NAND_SDR_IFACE, 19 + .timings.mode = 0, 19 20 .timings.sdr = { 20 21 .tCCS_min = 500000, 21 22 .tR_max = 200000000, ··· 59 58 /* Mode 1 */ 60 59 { 61 60 .type = NAND_SDR_IFACE, 61 + .timings.mode = 1, 62 62 .timings.sdr = { 63 63 .tCCS_min = 500000, 64 64 .tR_max = 200000000, ··· 102 100 /* Mode 2 */ 103 101 { 104 102 .type = NAND_SDR_IFACE, 103 + .timings.mode = 2, 105 104 .timings.sdr = { 106 105 .tCCS_min = 500000, 107 106 .tR_max = 200000000, ··· 145 142 /* Mode 3 */ 146 143 { 147 144 .type = NAND_SDR_IFACE, 145 + .timings.mode = 3, 148 146 .timings.sdr = { 149 147 .tCCS_min = 500000, 150 148 .tR_max = 200000000, ··· 188 184 /* Mode 4 */ 189 185 { 190 186 .type = NAND_SDR_IFACE, 187 + .timings.mode = 4, 191 188 .timings.sdr = { 192 189 .tCCS_min = 500000, 193 190 .tR_max = 200000000, ··· 231 226 /* Mode 5 */ 232 227 { 233 228 .type = NAND_SDR_IFACE, 229 + .timings.mode = 5, 234 230 .timings.sdr = { 235 231 .tCCS_min = 500000, 236 232 .tR_max = 200000000, ··· 320 314 /* microseconds -> picoseconds */ 321 315 timings->tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX; 322 316 timings->tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX; 323 - timings->tR_max = 1000000ULL * 200000000ULL; 324 317 325 - /* nanoseconds -> picoseconds */ 326 - timings->tCCS_min = 1000UL * 500000; 318 + timings->tR_max = 200000000; 319 + timings->tCCS_min = 500000; 327 320 } 328 321 329 322 return 0;
+14
drivers/mtd/nand/raw/nand_toshiba.c
··· 194 194 } 195 195 } 196 196 197 + static int tc58teg5dclta00_init(struct nand_chip *chip) 198 + { 199 + struct mtd_info *mtd = nand_to_mtd(chip); 200 + 201 + chip->onfi_timing_mode_default = 5; 202 + chip->options |= NAND_NEED_SCRAMBLING; 203 + mtd_set_pairing_scheme(mtd, &dist3_pairing_scheme); 204 + 205 + return 0; 206 + } 207 + 197 208 static int toshiba_nand_init(struct nand_chip *chip) 198 209 { 199 210 if (nand_is_slc(chip)) ··· 214 203 if (nand_is_slc(chip) && chip->ecc.mode == NAND_ECC_ON_DIE && 215 204 chip->id.data[4] & TOSHIBA_NAND_ID4_IS_BENAND) 216 205 toshiba_nand_benand_init(chip); 206 + 207 + if (!strcmp("TC58TEG5DCLTA00", chip->parameters.model)) 208 + tc58teg5dclta00_init(chip); 217 209 218 210 return 0; 219 211 }
+255 -183
drivers/mtd/nand/raw/nandsim.c
··· 353 353 void *file_buf; 354 354 struct page *held_pages[NS_MAX_HELD_PAGES]; 355 355 int held_cnt; 356 + 357 + /* debugfs entry */ 358 + struct dentry *dent; 356 359 }; 357 360 358 361 /* ··· 435 432 /* MTD structure for NAND controller */ 436 433 static struct mtd_info *nsmtd; 437 434 438 - static int nandsim_show(struct seq_file *m, void *private) 435 + static int ns_show(struct seq_file *m, void *private) 439 436 { 440 437 unsigned long wmin = -1, wmax = 0, avg; 441 438 unsigned long deciles[10], decile_max[10], tot = 0; ··· 486 483 487 484 return 0; 488 485 } 489 - DEFINE_SHOW_ATTRIBUTE(nandsim); 486 + DEFINE_SHOW_ATTRIBUTE(ns); 490 487 491 488 /** 492 - * nandsim_debugfs_create - initialize debugfs 493 - * @dev: nandsim device description object 489 + * ns_debugfs_create - initialize debugfs 490 + * @ns: nandsim device description object 494 491 * 495 492 * This function creates all debugfs files for the nandsim device @ns. Returns 496 493 * zero in case of success and a negative error code in case of failure. 497 494 */ 498 - static int nandsim_debugfs_create(struct nandsim *dev) 495 + static int ns_debugfs_create(struct nandsim *ns) 499 496 { 500 497 struct dentry *root = nsmtd->dbg.dfs_dir; 501 - struct dentry *dent; 502 498 503 499 /* 504 500 * Just skip debugfs initialization when the debugfs directory is ··· 510 508 return 0; 511 509 } 512 510 513 - dent = debugfs_create_file("nandsim_wear_report", S_IRUSR, 514 - root, dev, &nandsim_fops); 515 - if (IS_ERR_OR_NULL(dent)) { 511 + ns->dent = debugfs_create_file("nandsim_wear_report", 0400, root, ns, 512 + &ns_fops); 513 + if (IS_ERR_OR_NULL(ns->dent)) { 516 514 NS_ERR("cannot create \"nandsim_wear_report\" debugfs entry\n"); 517 515 return -1; 518 516 } 519 517 520 518 return 0; 519 + } 520 + 521 + static void ns_debugfs_remove(struct nandsim *ns) 522 + { 523 + debugfs_remove_recursive(ns->dent); 521 524 } 522 525 523 526 /* ··· 531 524 * 532 525 * RETURNS: 0 if success, -ENOMEM if memory alloc fails.
533 526 */ 534 - static int __init alloc_device(struct nandsim *ns) 527 + static int __init ns_alloc_device(struct nandsim *ns) 535 528 { 536 529 struct file *cfile; 537 530 int i, err; ··· 543 536 if (!(cfile->f_mode & FMODE_CAN_READ)) { 544 537 NS_ERR("alloc_device: cache file not readable\n"); 545 538 err = -EINVAL; 546 - goto err_close; 539 + goto err_close_filp; 547 540 } 548 541 if (!(cfile->f_mode & FMODE_CAN_WRITE)) { 549 542 NS_ERR("alloc_device: cache file not writeable\n"); 550 543 err = -EINVAL; 551 - goto err_close; 544 + goto err_close_filp; 552 545 } 553 546 ns->pages_written = 554 547 vzalloc(array_size(sizeof(unsigned long), ··· 556 549 if (!ns->pages_written) { 557 550 NS_ERR("alloc_device: unable to allocate pages written array\n"); 558 551 err = -ENOMEM; 559 - goto err_close; 552 + goto err_close_filp; 560 553 } 561 554 ns->file_buf = kmalloc(ns->geom.pgszoob, GFP_KERNEL); 562 555 if (!ns->file_buf) { 563 556 NS_ERR("alloc_device: unable to allocate file buf\n"); 564 557 err = -ENOMEM; 565 - goto err_free; 558 + goto err_free_pw; 566 559 } 567 560 ns->cfile = cfile; 561 + 568 562 return 0; 563 + 564 + err_free_pw: 565 + vfree(ns->pages_written); 566 + err_close_filp: 567 + filp_close(cfile, NULL); 568 + 569 + return err; 569 570 } 570 571 571 572 ns->pages = vmalloc(array_size(sizeof(union ns_mem), ns->geom.pgnum)); ··· 588 573 ns->geom.pgszoob, 0, 0, NULL); 589 574 if (!ns->nand_pages_slab) { 590 575 NS_ERR("cache_create: unable to create kmem_cache\n"); 591 - return -ENOMEM; 576 + err = -ENOMEM; 577 + goto err_free_pg; 592 578 } 593 579 594 580 return 0; 595 581 596 - err_free: 597 - vfree(ns->pages_written); 598 - err_close: 599 - filp_close(cfile, NULL); 582 + err_free_pg: 583 + vfree(ns->pages); 584 + 600 585 return err; 601 586 } 602 587 603 588 /* 604 589 * Free any allocated pages, and free the array of page pointers. 
605 590 */ 606 - static void free_device(struct nandsim *ns) 591 + static void ns_free_device(struct nandsim *ns) 607 592 { 608 593 int i; 609 594 ··· 625 610 } 626 611 } 627 612 628 - static char __init *get_partition_name(int i) 613 + static char __init *ns_get_partition_name(int i) 629 614 { 630 615 return kasprintf(GFP_KERNEL, "NAND simulator partition %d", i); 631 616 } ··· 635 620 * 636 621 * RETURNS: 0 if success, -ERRNO if failure. 637 622 */ 638 - static int __init init_nandsim(struct mtd_info *mtd) 623 + static int __init ns_init(struct mtd_info *mtd) 639 624 { 640 625 struct nand_chip *chip = mtd_to_nand(mtd); 641 626 struct nandsim *ns = nand_get_controller_data(chip); ··· 708 693 NS_ERR("bad partition size.\n"); 709 694 return -EINVAL; 710 695 } 711 - ns->partitions[i].name = get_partition_name(i); 696 + ns->partitions[i].name = ns_get_partition_name(i); 712 697 if (!ns->partitions[i].name) { 713 698 NS_ERR("unable to allocate memory.\n"); 714 699 return -ENOMEM; ··· 722 707 if (remains) { 723 708 if (parts_num + 1 > ARRAY_SIZE(ns->partitions)) { 724 709 NS_ERR("too many partitions.\n"); 725 - return -EINVAL; 710 + ret = -EINVAL; 711 + goto free_partition_names; 726 712 } 727 - ns->partitions[i].name = get_partition_name(i); 713 + ns->partitions[i].name = ns_get_partition_name(i); 728 714 if (!ns->partitions[i].name) { 729 715 NS_ERR("unable to allocate memory.\n"); 730 - return -ENOMEM; 716 + ret = -ENOMEM; 717 + goto free_partition_names; 731 718 } 732 719 ns->partitions[i].offset = next_offset; 733 720 ns->partitions[i].size = remains; ··· 756 739 printk("sector address bytes: %u\n", ns->geom.secaddrbytes); 757 740 printk("options: %#x\n", ns->options); 758 741 759 - if ((ret = alloc_device(ns)) != 0) 760 - return ret; 742 + ret = ns_alloc_device(ns); 743 + if (ret) 744 + goto free_partition_names; 761 745 762 746 /* Allocate / initialize the internal buffer */ 763 747 ns->buf.byte = kmalloc(ns->geom.pgszoob, GFP_KERNEL); 764 748 if (!ns->buf.byte) 
{ 765 749 NS_ERR("init_nandsim: unable to allocate %u bytes for the internal buffer\n", 766 750 ns->geom.pgszoob); 767 - return -ENOMEM; 751 + ret = -ENOMEM; 752 + goto free_device; 768 753 } 769 754 memset(ns->buf.byte, 0xFF, ns->geom.pgszoob); 770 755 771 756 return 0; 757 + 758 + free_device: 759 + ns_free_device(ns); 760 + free_partition_names: 761 + for (i = 0; i < ARRAY_SIZE(ns->partitions); ++i) 762 + kfree(ns->partitions[i].name); 763 + 764 + return ret; 772 765 } 773 766 774 767 /* 775 768 * Free the nandsim structure. 776 769 */ 777 - static void free_nandsim(struct nandsim *ns) 770 + static void ns_free(struct nandsim *ns) 778 771 { 772 + int i; 773 + 774 + for (i = 0; i < ARRAY_SIZE(ns->partitions); ++i) 775 + kfree(ns->partitions[i].name); 776 + 779 777 kfree(ns->buf.byte); 780 - free_device(ns); 778 + ns_free_device(ns); 781 779 782 780 return; 783 781 } 784 782 785 - static int parse_badblocks(struct nandsim *ns, struct mtd_info *mtd) 783 + static int ns_parse_badblocks(struct nandsim *ns, struct mtd_info *mtd) 786 784 { 787 785 char *w; 788 786 int zero_ok; ··· 825 793 return 0; 826 794 } 827 795 828 - static int parse_weakblocks(void) 796 + static int ns_parse_weakblocks(void) 829 797 { 830 798 char *w; 831 799 int zero_ok; ··· 862 830 return 0; 863 831 } 864 832 865 - static int erase_error(unsigned int erase_block_no) 833 + static int ns_erase_error(unsigned int erase_block_no) 866 834 { 867 835 struct weak_block *wb; 868 836 ··· 876 844 return 0; 877 845 } 878 846 879 - static int parse_weakpages(void) 847 + static int ns_parse_weakpages(void) 880 848 { 881 849 char *w; 882 850 int zero_ok; ··· 913 881 return 0; 914 882 } 915 883 916 - static int write_error(unsigned int page_no) 884 + static int ns_write_error(unsigned int page_no) 917 885 { 918 886 struct weak_page *wp; 919 887 ··· 927 895 return 0; 928 896 } 929 897 930 - static int parse_gravepages(void) 898 + static int ns_parse_gravepages(void) 931 899 { 932 900 char *g; 933 901 int 
zero_ok; ··· 964 932 return 0; 965 933 } 966 934 967 - static int read_error(unsigned int page_no) 935 + static int ns_read_error(unsigned int page_no) 968 936 { 969 937 struct grave_page *gp; 970 938 ··· 978 946 return 0; 979 947 } 980 948 981 - static void free_lists(void) 982 - { 983 - struct list_head *pos, *n; 984 - list_for_each_safe(pos, n, &weak_blocks) { 985 - list_del(pos); 986 - kfree(list_entry(pos, struct weak_block, list)); 987 - } 988 - list_for_each_safe(pos, n, &weak_pages) { 989 - list_del(pos); 990 - kfree(list_entry(pos, struct weak_page, list)); 991 - } 992 - list_for_each_safe(pos, n, &grave_pages) { 993 - list_del(pos); 994 - kfree(list_entry(pos, struct grave_page, list)); 995 - } 996 - kfree(erase_block_wear); 997 - } 998 - 999 - static int setup_wear_reporting(struct mtd_info *mtd) 949 + static int ns_setup_wear_reporting(struct mtd_info *mtd) 1000 950 { 1001 951 size_t mem; 1002 952 ··· 996 982 return 0; 997 983 } 998 984 999 - static void update_wear(unsigned int erase_block_no) 985 + static void ns_update_wear(unsigned int erase_block_no) 1000 986 { 1001 987 if (!erase_block_wear) 1002 988 return; ··· 1015 1001 /* 1016 1002 * Returns the string representation of 'state' state. 1017 1003 */ 1018 - static char *get_state_name(uint32_t state) 1004 + static char *ns_get_state_name(uint32_t state) 1019 1005 { 1020 1006 switch (NS_STATE(state)) { 1021 1007 case STATE_CMD_READ0: ··· 1075 1061 * 1076 1062 * RETURNS: 1 if wrong command, 0 if right. 1077 1063 */ 1078 - static int check_command(int cmd) 1064 + static int ns_check_command(int cmd) 1079 1065 { 1080 1066 switch (cmd) { 1081 1067 ··· 1102 1088 /* 1103 1089 * Returns state after command is accepted by command number. 
1104 1090 */ 1105 - static uint32_t get_state_by_command(unsigned command) 1091 + static uint32_t ns_get_state_by_command(unsigned command) 1106 1092 { 1107 1093 switch (command) { 1108 1094 case NAND_CMD_READ0: ··· 1140 1126 /* 1141 1127 * Move an address byte to the correspondent internal register. 1142 1128 */ 1143 - static inline void accept_addr_byte(struct nandsim *ns, u_char bt) 1129 + static inline void ns_accept_addr_byte(struct nandsim *ns, u_char bt) 1144 1130 { 1145 1131 uint byte = (uint)bt; 1146 1132 ··· 1158 1144 /* 1159 1145 * Switch to STATE_READY state. 1160 1146 */ 1161 - static inline void switch_to_ready_state(struct nandsim *ns, u_char status) 1147 + static inline void ns_switch_to_ready_state(struct nandsim *ns, u_char status) 1162 1148 { 1163 - NS_DBG("switch_to_ready_state: switch to %s state\n", get_state_name(STATE_READY)); 1149 + NS_DBG("switch_to_ready_state: switch to %s state\n", 1150 + ns_get_state_name(STATE_READY)); 1164 1151 1165 1152 ns->state = STATE_READY; 1166 1153 ns->nxstate = STATE_UNKNOWN; ··· 1218 1203 * -1 - several matches. 1219 1204 * 0 - operation is found. 1220 1205 */ 1221 - static int find_operation(struct nandsim *ns, uint32_t flag) 1206 + static int ns_find_operation(struct nandsim *ns, uint32_t flag) 1222 1207 { 1223 1208 int opsfound = 0; 1224 1209 int i, j, idx = 0; ··· 1271 1256 ns->state = ns->op[ns->stateidx]; 1272 1257 ns->nxstate = ns->op[ns->stateidx + 1]; 1273 1258 NS_DBG("find_operation: operation found, index: %d, state: %s, nxstate %s\n", 1274 - idx, get_state_name(ns->state), get_state_name(ns->nxstate)); 1259 + idx, ns_get_state_name(ns->state), 1260 + ns_get_state_name(ns->nxstate)); 1275 1261 return 0; 1276 1262 } 1277 1263 ··· 1280 1264 /* Nothing was found. 
Try to ignore previous commands (if any) and search again */ 1281 1265 if (ns->npstates != 0) { 1282 1266 NS_DBG("find_operation: no operation found, try again with state %s\n", 1283 - get_state_name(ns->state)); 1267 + ns_get_state_name(ns->state)); 1284 1268 ns->npstates = 0; 1285 - return find_operation(ns, 0); 1269 + return ns_find_operation(ns, 0); 1286 1270 1287 1271 } 1288 1272 NS_DBG("find_operation: no operations found\n"); 1289 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1273 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1290 1274 return -2; 1291 1275 } 1292 1276 ··· 1303 1287 return -1; 1304 1288 } 1305 1289 1306 - static void put_pages(struct nandsim *ns) 1290 + static void ns_put_pages(struct nandsim *ns) 1307 1291 { 1308 1292 int i; 1309 1293 ··· 1312 1296 } 1313 1297 1314 1298 /* Get page cache pages in advance to provide NOFS memory allocation */ 1315 - static int get_pages(struct nandsim *ns, struct file *file, size_t count, loff_t pos) 1299 + static int ns_get_pages(struct nandsim *ns, struct file *file, size_t count, 1300 + loff_t pos) 1316 1301 { 1317 1302 pgoff_t index, start_index, end_index; 1318 1303 struct page *page; ··· 1333 1316 page = find_or_create_page(mapping, index, GFP_NOFS); 1334 1317 } 1335 1318 if (page == NULL) { 1336 - put_pages(ns); 1319 + ns_put_pages(ns); 1337 1320 return -ENOMEM; 1338 1321 } 1339 1322 unlock_page(page); ··· 1343 1326 return 0; 1344 1327 } 1345 1328 1346 - static ssize_t read_file(struct nandsim *ns, struct file *file, void *buf, size_t count, loff_t pos) 1329 + static ssize_t ns_read_file(struct nandsim *ns, struct file *file, void *buf, 1330 + size_t count, loff_t pos) 1347 1331 { 1348 1332 ssize_t tx; 1349 1333 int err; 1350 1334 unsigned int noreclaim_flag; 1351 1335 1352 - err = get_pages(ns, file, count, pos); 1336 + err = ns_get_pages(ns, file, count, pos); 1353 1337 if (err) 1354 1338 return err; 1355 1339 noreclaim_flag = memalloc_noreclaim_save(); 1356 1340 tx = kernel_read(file, 
buf, count, &pos); 1357 1341 memalloc_noreclaim_restore(noreclaim_flag); 1358 - put_pages(ns); 1342 + ns_put_pages(ns); 1359 1343 return tx; 1360 1344 } 1361 1345 1362 - static ssize_t write_file(struct nandsim *ns, struct file *file, void *buf, size_t count, loff_t pos) 1346 + static ssize_t ns_write_file(struct nandsim *ns, struct file *file, void *buf, 1347 + size_t count, loff_t pos) 1363 1348 { 1364 1349 ssize_t tx; 1365 1350 int err; 1366 1351 unsigned int noreclaim_flag; 1367 1352 1368 - err = get_pages(ns, file, count, pos); 1353 + err = ns_get_pages(ns, file, count, pos); 1369 1354 if (err) 1370 1355 return err; 1371 1356 noreclaim_flag = memalloc_noreclaim_save(); 1372 1357 tx = kernel_write(file, buf, count, &pos); 1373 1358 memalloc_noreclaim_restore(noreclaim_flag); 1374 - put_pages(ns); 1359 + ns_put_pages(ns); 1375 1360 return tx; 1376 1361 } 1377 1362 ··· 1393 1374 return NS_GET_PAGE(ns)->byte + ns->regs.column + ns->regs.off; 1394 1375 } 1395 1376 1396 - static int do_read_error(struct nandsim *ns, int num) 1377 + static int ns_do_read_error(struct nandsim *ns, int num) 1397 1378 { 1398 1379 unsigned int page_no = ns->regs.row; 1399 1380 1400 - if (read_error(page_no)) { 1381 + if (ns_read_error(page_no)) { 1401 1382 prandom_bytes(ns->buf.byte, num); 1402 1383 NS_WARN("simulating read error in page %u\n", page_no); 1403 1384 return 1; ··· 1405 1386 return 0; 1406 1387 } 1407 1388 1408 - static void do_bit_flips(struct nandsim *ns, int num) 1389 + static void ns_do_bit_flips(struct nandsim *ns, int num) 1409 1390 { 1410 1391 if (bitflips && prandom_u32() < (1 << 22)) { 1411 1392 int flips = 1; ··· 1425 1406 /* 1426 1407 * Fill the NAND buffer with data read from the specified page. 
1427 1408 */ 1428 - static void read_page(struct nandsim *ns, int num) 1409 + static void ns_read_page(struct nandsim *ns, int num) 1429 1410 { 1430 1411 union ns_mem *mypage; 1431 1412 ··· 1439 1420 1440 1421 NS_DBG("read_page: page %d written, reading from %d\n", 1441 1422 ns->regs.row, ns->regs.column + ns->regs.off); 1442 - if (do_read_error(ns, num)) 1423 + if (ns_do_read_error(ns, num)) 1443 1424 return; 1444 1425 pos = (loff_t)NS_RAW_OFFSET(ns) + ns->regs.off; 1445 - tx = read_file(ns, ns->cfile, ns->buf.byte, num, pos); 1426 + tx = ns_read_file(ns, ns->cfile, ns->buf.byte, num, 1427 + pos); 1446 1428 if (tx != num) { 1447 1429 NS_ERR("read_page: read error for page %d ret %ld\n", ns->regs.row, (long)tx); 1448 1430 return; 1449 1431 } 1450 - do_bit_flips(ns, num); 1432 + ns_do_bit_flips(ns, num); 1451 1433 } 1452 1434 return; 1453 1435 } ··· 1460 1440 } else { 1461 1441 NS_DBG("read_page: page %d allocated, reading from %d\n", 1462 1442 ns->regs.row, ns->regs.column + ns->regs.off); 1463 - if (do_read_error(ns, num)) 1443 + if (ns_do_read_error(ns, num)) 1464 1444 return; 1465 1445 memcpy(ns->buf.byte, NS_PAGE_BYTE_OFF(ns), num); 1466 - do_bit_flips(ns, num); 1446 + ns_do_bit_flips(ns, num); 1467 1447 } 1468 1448 } 1469 1449 1470 1450 /* 1471 1451 * Erase all pages in the specified sector. 1472 1452 */ 1473 - static void erase_sector(struct nandsim *ns) 1453 + static void ns_erase_sector(struct nandsim *ns) 1474 1454 { 1475 1455 union ns_mem *mypage; 1476 1456 int i; ··· 1498 1478 /* 1499 1479 * Program the specified page with the contents from the NAND buffer. 
1500 1480 */ 1501 - static int prog_page(struct nandsim *ns, int num) 1481 + static int ns_prog_page(struct nandsim *ns, int num) 1502 1482 { 1503 1483 int i; 1504 1484 union ns_mem *mypage; ··· 1517 1497 memset(ns->file_buf, 0xff, ns->geom.pgszoob); 1518 1498 } else { 1519 1499 all = 0; 1520 - tx = read_file(ns, ns->cfile, pg_off, num, off); 1500 + tx = ns_read_file(ns, ns->cfile, pg_off, num, off); 1521 1501 if (tx != num) { 1522 1502 NS_ERR("prog_page: read error for page %d ret %ld\n", ns->regs.row, (long)tx); 1523 1503 return -1; ··· 1527 1507 pg_off[i] &= ns->buf.byte[i]; 1528 1508 if (all) { 1529 1509 loff_t pos = (loff_t)ns->regs.row * ns->geom.pgszoob; 1530 - tx = write_file(ns, ns->cfile, ns->file_buf, ns->geom.pgszoob, pos); 1510 + tx = ns_write_file(ns, ns->cfile, ns->file_buf, 1511 + ns->geom.pgszoob, pos); 1531 1512 if (tx != ns->geom.pgszoob) { 1532 1513 NS_ERR("prog_page: write error for page %d ret %ld\n", ns->regs.row, (long)tx); 1533 1514 return -1; 1534 1515 } 1535 1516 __set_bit(ns->regs.row, ns->pages_written); 1536 1517 } else { 1537 - tx = write_file(ns, ns->cfile, pg_off, num, off); 1518 + tx = ns_write_file(ns, ns->cfile, pg_off, num, off); 1538 1519 if (tx != num) { 1539 1520 NS_ERR("prog_page: write error for page %d ret %ld\n", ns->regs.row, (long)tx); 1540 1521 return -1; ··· 1573 1552 * 1574 1553 * RETURNS: 0 if success, -1 if error. 1575 1554 */ 1576 - static int do_state_action(struct nandsim *ns, uint32_t action) 1555 + static int ns_do_state_action(struct nandsim *ns, uint32_t action) 1577 1556 { 1578 1557 int num; 1579 1558 int busdiv = ns->busw == 8 ? 
1 : 2; ··· 1600 1579 break; 1601 1580 } 1602 1581 num = ns->geom.pgszoob - ns->regs.off - ns->regs.column; 1603 - read_page(ns, num); 1582 + ns_read_page(ns, num); 1604 1583 1605 1584 NS_DBG("do_state_action: (ACTION_CPY:) copy %d bytes to int buf, raw offset %d\n", 1606 1585 num, NS_RAW_OFFSET(ns) + ns->regs.off); ··· 1643 1622 ns->regs.row, NS_RAW_OFFSET(ns)); 1644 1623 NS_LOG("erase sector %u\n", erase_block_no); 1645 1624 1646 - erase_sector(ns); 1625 + ns_erase_sector(ns); 1647 1626 1648 1627 NS_MDELAY(erase_delay); 1649 1628 1650 1629 if (erase_block_wear) 1651 - update_wear(erase_block_no); 1630 + ns_update_wear(erase_block_no); 1652 1631 1653 - if (erase_error(erase_block_no)) { 1632 + if (ns_erase_error(erase_block_no)) { 1654 1633 NS_WARN("simulating erase failure in erase block %u\n", erase_block_no); 1655 1634 return -1; 1656 1635 } ··· 1674 1653 return -1; 1675 1654 } 1676 1655 1677 - if (prog_page(ns, num) == -1) 1656 + if (ns_prog_page(ns, num) == -1) 1678 1657 return -1; 1679 1658 1680 1659 page_no = ns->regs.row; ··· 1686 1665 NS_UDELAY(programm_delay); 1687 1666 NS_UDELAY(output_cycle * ns->geom.pgsz / 1000 / busdiv); 1688 1667 1689 - if (write_error(page_no)) { 1668 + if (ns_write_error(page_no)) { 1690 1669 NS_WARN("simulating write failure in page %u\n", page_no); 1691 1670 return -1; 1692 1671 } ··· 1723 1702 /* 1724 1703 * Switch simulator's state. 
1725 1704 */ 1726 - static void switch_state(struct nandsim *ns) 1705 + static void ns_switch_state(struct nandsim *ns) 1727 1706 { 1728 1707 if (ns->op) { 1729 1708 /* ··· 1737 1716 1738 1717 NS_DBG("switch_state: operation is known, switch to the next state, " 1739 1718 "state: %s, nxstate: %s\n", 1740 - get_state_name(ns->state), get_state_name(ns->nxstate)); 1719 + ns_get_state_name(ns->state), 1720 + ns_get_state_name(ns->nxstate)); 1741 1721 1742 1722 /* See, whether we need to do some action */ 1743 - if ((ns->state & ACTION_MASK) && do_state_action(ns, ns->state) < 0) { 1744 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1723 + if ((ns->state & ACTION_MASK) && 1724 + ns_do_state_action(ns, ns->state) < 0) { 1725 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1745 1726 return; 1746 1727 } 1747 1728 ··· 1757 1734 * The only event causing the switch_state function to 1758 1735 * be called with yet unknown operation is new command. 1759 1736 */ 1760 - ns->state = get_state_by_command(ns->regs.command); 1737 + ns->state = ns_get_state_by_command(ns->regs.command); 1761 1738 1762 1739 NS_DBG("switch_state: operation is unknown, try to find it\n"); 1763 1740 1764 - if (find_operation(ns, 0) != 0) 1741 + if (!ns_find_operation(ns, 0)) 1765 1742 return; 1766 1743 1767 - if ((ns->state & ACTION_MASK) && do_state_action(ns, ns->state) < 0) { 1768 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1744 + if ((ns->state & ACTION_MASK) && 1745 + ns_do_state_action(ns, ns->state) < 0) { 1746 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1769 1747 return; 1770 1748 } 1771 1749 } ··· 1794 1770 1795 1771 NS_DBG("switch_state: operation complete, switch to STATE_READY state\n"); 1796 1772 1797 - switch_to_ready_state(ns, status); 1773 + ns_switch_to_ready_state(ns, status); 1798 1774 1799 1775 return; 1800 1776 } else if (ns->nxstate & (STATE_DATAIN_MASK | STATE_DATAOUT_MASK)) { ··· 1808 1784 1809 1785 NS_DBG("switch_state: the next state is data I/O, 
switch, " 1810 1786 "state: %s, nxstate: %s\n", 1811 - get_state_name(ns->state), get_state_name(ns->nxstate)); 1787 + ns_get_state_name(ns->state), 1788 + ns_get_state_name(ns->nxstate)); 1812 1789 1813 1790 /* 1814 1791 * Set the internal register to the count of bytes which ··· 1887 1862 return outb; 1888 1863 } 1889 1864 if (!(ns->state & STATE_DATAOUT_MASK)) { 1890 - NS_WARN("read_byte: unexpected data output cycle, state is %s " 1891 - "return %#x\n", get_state_name(ns->state), (uint)outb); 1865 + NS_WARN("read_byte: unexpected data output cycle, state is %s return %#x\n", 1866 + ns_get_state_name(ns->state), (uint)outb); 1892 1867 return outb; 1893 1868 } 1894 1869 ··· 1927 1902 NS_DBG("read_byte: all bytes were read\n"); 1928 1903 1929 1904 if (NS_STATE(ns->nxstate) == STATE_READY) 1930 - switch_state(ns); 1905 + ns_switch_state(ns); 1931 1906 } 1932 1907 1933 1908 return outb; ··· 1954 1929 1955 1930 if (byte == NAND_CMD_RESET) { 1956 1931 NS_LOG("reset chip\n"); 1957 - switch_to_ready_state(ns, NS_STATUS_OK(ns)); 1932 + ns_switch_to_ready_state(ns, NS_STATUS_OK(ns)); 1958 1933 return; 1959 1934 } 1960 1935 1961 1936 /* Check that the command byte is correct */ 1962 - if (check_command(byte)) { 1937 + if (ns_check_command(byte)) { 1963 1938 NS_ERR("write_byte: unknown command %#x\n", (uint)byte); 1964 1939 return; 1965 1940 } ··· 1968 1943 || NS_STATE(ns->state) == STATE_DATAOUT) { 1969 1944 int row = ns->regs.row; 1970 1945 1971 - switch_state(ns); 1946 + ns_switch_state(ns); 1972 1947 if (byte == NAND_CMD_RNDOUT) 1973 1948 ns->regs.row = row; 1974 1949 } ··· 1983 1958 * was expected but command was input. In this case ignore 1984 1959 * previous command(s)/state(s) and accept the last one. 
1985 1960 */ 1986 - NS_WARN("write_byte: command (%#x) wasn't expected, expected state is %s, " 1987 - "ignore previous states\n", (uint)byte, get_state_name(ns->nxstate)); 1961 + NS_WARN("write_byte: command (%#x) wasn't expected, expected state is %s, ignore previous states\n", 1962 + (uint)byte, 1963 + ns_get_state_name(ns->nxstate)); 1988 1964 } 1989 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1965 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1990 1966 } 1991 1967 1992 1968 NS_DBG("command byte corresponding to %s state accepted\n", 1993 - get_state_name(get_state_by_command(byte))); 1969 + ns_get_state_name(ns_get_state_by_command(byte))); 1994 1970 ns->regs.command = byte; 1995 - switch_state(ns); 1971 + ns_switch_state(ns); 1996 1972 1997 1973 } else if (ns->lines.ale == 1) { 1998 1974 /* ··· 2004 1978 2005 1979 NS_DBG("write_byte: operation isn't known yet, identify it\n"); 2006 1980 2007 - if (find_operation(ns, 1) < 0) 1981 + if (ns_find_operation(ns, 1) < 0) 2008 1982 return; 2009 1983 2010 - if ((ns->state & ACTION_MASK) && do_state_action(ns, ns->state) < 0) { 2011 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 1984 + if ((ns->state & ACTION_MASK) && 1985 + ns_do_state_action(ns, ns->state) < 0) { 1986 + ns_switch_to_ready_state(ns, 1987 + NS_STATUS_FAILED(ns)); 2012 1988 return; 2013 1989 } 2014 1990 ··· 2032 2004 2033 2005 /* Check that chip is expecting address */ 2034 2006 if (!(ns->nxstate & STATE_ADDR_MASK)) { 2035 - NS_ERR("write_byte: address (%#x) isn't expected, expected state is %s, " 2036 - "switch to STATE_READY\n", (uint)byte, get_state_name(ns->nxstate)); 2037 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2007 + NS_ERR("write_byte: address (%#x) isn't expected, expected state is %s, switch to STATE_READY\n", 2008 + (uint)byte, ns_get_state_name(ns->nxstate)); 2009 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2038 2010 return; 2039 2011 } 2040 2012 2041 2013 /* Check if this is expected byte */ 2042 2014 
if (ns->regs.count == ns->regs.num) { 2043 2015 NS_ERR("write_byte: no more address bytes expected\n"); 2044 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2016 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2045 2017 return; 2046 2018 } 2047 2019 2048 - accept_addr_byte(ns, byte); 2020 + ns_accept_addr_byte(ns, byte); 2049 2021 2050 2022 ns->regs.count += 1; 2051 2023 ··· 2054 2026 2055 2027 if (ns->regs.count == ns->regs.num) { 2056 2028 NS_DBG("address (%#x, %#x) is accepted\n", ns->regs.row, ns->regs.column); 2057 - switch_state(ns); 2029 + ns_switch_state(ns); 2058 2030 } 2059 2031 2060 2032 } else { ··· 2064 2036 2065 2037 /* Check that chip is expecting data input */ 2066 2038 if (!(ns->state & STATE_DATAIN_MASK)) { 2067 - NS_ERR("write_byte: data input (%#x) isn't expected, state is %s, " 2068 - "switch to %s\n", (uint)byte, 2069 - get_state_name(ns->state), get_state_name(STATE_READY)); 2070 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2039 + NS_ERR("write_byte: data input (%#x) isn't expected, state is %s, switch to %s\n", 2040 + (uint)byte, ns_get_state_name(ns->state), 2041 + ns_get_state_name(STATE_READY)); 2042 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2071 2043 return; 2072 2044 } 2073 2045 ··· 2097 2069 2098 2070 /* Check that chip is expecting data input */ 2099 2071 if (!(ns->state & STATE_DATAIN_MASK)) { 2100 - NS_ERR("write_buf: data input isn't expected, state is %s, " 2101 - "switch to STATE_READY\n", get_state_name(ns->state)); 2102 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2072 + NS_ERR("write_buf: data input isn't expected, state is %s, switch to STATE_READY\n", 2073 + ns_get_state_name(ns->state)); 2074 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2103 2075 return; 2104 2076 } 2105 2077 2106 2078 /* Check if these are expected bytes */ 2107 2079 if (ns->regs.count + len > ns->regs.num) { 2108 2080 NS_ERR("write_buf: too many input bytes\n"); 2109 - switch_to_ready_state(ns, 
NS_STATUS_FAILED(ns)); 2081 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2110 2082 return; 2111 2083 } 2112 2084 ··· 2133 2105 } 2134 2106 if (!(ns->state & STATE_DATAOUT_MASK)) { 2135 2107 NS_WARN("read_buf: unexpected data output cycle, current state is %s\n", 2136 - get_state_name(ns->state)); 2108 + ns_get_state_name(ns->state)); 2137 2109 return; 2138 2110 } 2139 2111 ··· 2149 2121 /* Check if these are expected bytes */ 2150 2122 if (ns->regs.count + len > ns->regs.num) { 2151 2123 NS_ERR("read_buf: too many bytes to read\n"); 2152 - switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2124 + ns_switch_to_ready_state(ns, NS_STATUS_FAILED(ns)); 2153 2125 return; 2154 2126 } 2155 2127 ··· 2158 2130 2159 2131 if (ns->regs.count == ns->regs.num) { 2160 2132 if (NS_STATE(ns->nxstate) == STATE_READY) 2161 - switch_state(ns); 2133 + ns_switch_state(ns); 2162 2134 } 2163 2135 2164 2136 return; ··· 2171 2143 unsigned int op_id; 2172 2144 const struct nand_op_instr *instr = NULL; 2173 2145 struct nandsim *ns = nand_get_controller_data(chip); 2146 + 2147 + if (check_only) 2148 + return 0; 2174 2149 2175 2150 ns->lines.ce = 1; 2176 2151 ··· 2255 2224 */ 2256 2225 static int __init ns_init_module(void) 2257 2226 { 2227 + struct list_head *pos, *n; 2258 2228 struct nand_chip *chip; 2259 2229 struct nandsim *ns; 2260 - int retval = -ENOMEM, i; 2230 + int ret; 2261 2231 2262 2232 if (bus_width != 8 && bus_width != 16) { 2263 2233 NS_ERR("wrong bus width (%d), use only 8 or 16\n", bus_width); ··· 2291 2259 break; 2292 2260 default: 2293 2261 NS_ERR("bbt has to be 0..2\n"); 2294 - retval = -EINVAL; 2295 - goto error; 2262 + ret = -EINVAL; 2263 + goto free_ns_struct; 2296 2264 } 2297 2265 /* 2298 2266 * Perform minimum nandsim structure initialization to handle ··· 2317 2285 2318 2286 nsmtd->owner = THIS_MODULE; 2319 2287 2320 - if ((retval = parse_weakblocks()) != 0) 2321 - goto error; 2288 + ret = ns_parse_weakblocks(); 2289 + if (ret) 2290 + goto free_ns_struct; 2322 
2291 2323 - if ((retval = parse_weakpages()) != 0) 2324 - goto error; 2292 + ret = ns_parse_weakpages(); 2293 + if (ret) 2294 + goto free_wb_list; 2325 2295 2326 - if ((retval = parse_gravepages()) != 0) 2327 - goto error; 2296 + ret = ns_parse_gravepages(); 2297 + if (ret) 2298 + goto free_wp_list; 2328 2299 2329 2300 nand_controller_init(&ns->base); 2330 2301 ns->base.ops = &ns_controller_ops; 2331 2302 chip->controller = &ns->base; 2332 2303 2333 - retval = nand_scan(chip, 1); 2334 - if (retval) { 2304 + ret = nand_scan(chip, 1); 2305 + if (ret) { 2335 2306 NS_ERR("Could not scan NAND Simulator device\n"); 2336 - goto error; 2307 + goto free_gp_list; 2337 2308 } 2338 2309 2339 2310 if (overridesize) { ··· 2348 2313 2349 2314 if (new_size >> overridesize != nsmtd->erasesize) { 2350 2315 NS_ERR("overridesize is too big\n"); 2351 - retval = -EINVAL; 2352 - goto err_exit; 2316 + ret = -EINVAL; 2317 + goto cleanup_nand; 2353 2318 } 2354 2319 2355 2320 /* N.B. This relies on nand_scan not doing anything with the size before we change it */ ··· 2360 2325 chip->pagemask = (targetsize >> chip->page_shift) - 1; 2361 2326 } 2362 2327 2363 - if ((retval = setup_wear_reporting(nsmtd)) != 0) 2364 - goto err_exit; 2328 + ret = ns_setup_wear_reporting(nsmtd); 2329 + if (ret) 2330 + goto cleanup_nand; 2365 2331 2366 - if ((retval = init_nandsim(nsmtd)) != 0) 2367 - goto err_exit; 2332 + ret = ns_init(nsmtd); 2333 + if (ret) 2334 + goto free_ebw; 2368 2335 2369 - if ((retval = nand_create_bbt(chip)) != 0) 2370 - goto err_exit; 2336 + ret = nand_create_bbt(chip); 2337 + if (ret) 2338 + goto free_ns_object; 2371 2339 2372 - if ((retval = parse_badblocks(ns, nsmtd)) != 0) 2373 - goto err_exit; 2340 + ret = ns_parse_badblocks(ns, nsmtd); 2341 + if (ret) 2342 + goto free_ns_object; 2374 2343 2375 2344 /* Register NAND partitions */ 2376 - retval = mtd_device_register(nsmtd, &ns->partitions[0], 2377 - ns->nbparts); 2378 - if (retval != 0) 2379 - goto err_exit; 2345 + ret = 
mtd_device_register(nsmtd, &ns->partitions[0], ns->nbparts); 2346 + if (ret) 2347 + goto free_ns_object; 2380 2348 2381 - if ((retval = nandsim_debugfs_create(ns)) != 0) 2382 - goto err_exit; 2349 + ret = ns_debugfs_create(ns); 2350 + if (ret) 2351 + goto unregister_mtd; 2383 2352 2384 2353 return 0; 2385 2354 2386 - err_exit: 2387 - free_nandsim(ns); 2388 - nand_release(chip); 2389 - for (i = 0;i < ARRAY_SIZE(ns->partitions); ++i) 2390 - kfree(ns->partitions[i].name); 2391 - error: 2355 + unregister_mtd: 2356 + WARN_ON(mtd_device_unregister(nsmtd)); 2357 + free_ns_object: 2358 + ns_free(ns); 2359 + free_ebw: 2360 + kfree(erase_block_wear); 2361 + cleanup_nand: 2362 + nand_cleanup(chip); 2363 + free_gp_list: 2364 + list_for_each_safe(pos, n, &grave_pages) { 2365 + list_del(pos); 2366 + kfree(list_entry(pos, struct grave_page, list)); 2367 + } 2368 + free_wp_list: 2369 + list_for_each_safe(pos, n, &weak_pages) { 2370 + list_del(pos); 2371 + kfree(list_entry(pos, struct weak_page, list)); 2372 + } 2373 + free_wb_list: 2374 + list_for_each_safe(pos, n, &weak_blocks) { 2375 + list_del(pos); 2376 + kfree(list_entry(pos, struct weak_block, list)); 2377 + } 2378 + free_ns_struct: 2392 2379 kfree(ns); 2393 - free_lists(); 2394 2380 2395 - return retval; 2381 + return ret; 2396 2382 } 2397 2383 2398 2384 module_init(ns_init_module); ··· 2425 2369 { 2426 2370 struct nand_chip *chip = mtd_to_nand(nsmtd); 2427 2371 struct nandsim *ns = nand_get_controller_data(chip); 2428 - int i; 2372 + struct list_head *pos, *n; 2429 2373 2430 - free_nandsim(ns); /* Free nandsim private resources */ 2431 - nand_release(chip); /* Unregister driver */ 2432 - for (i = 0;i < ARRAY_SIZE(ns->partitions); ++i) 2433 - kfree(ns->partitions[i].name); 2434 - kfree(ns); /* Free other structures */ 2435 - free_lists(); 2374 + ns_debugfs_remove(ns); 2375 + WARN_ON(mtd_device_unregister(nsmtd)); 2376 + ns_free(ns); 2377 + kfree(erase_block_wear); 2378 + nand_cleanup(chip); 2379 + 2380 + 
list_for_each_safe(pos, n, &grave_pages) { 2381 + list_del(pos); 2382 + kfree(list_entry(pos, struct grave_page, list)); 2383 + } 2384 + 2385 + list_for_each_safe(pos, n, &weak_pages) { 2386 + list_del(pos); 2387 + kfree(list_entry(pos, struct weak_page, list)); 2388 + } 2389 + 2390 + list_for_each_safe(pos, n, &weak_blocks) { 2391 + list_del(pos); 2392 + kfree(list_entry(pos, struct weak_block, list)); 2393 + } 2394 + 2395 + kfree(ns); 2436 2396 } 2437 2397 2438 2398 module_exit(ns_cleanup_module);
+6 -2
drivers/mtd/nand/raw/ndfc.c
···
 static int ndfc_remove(struct platform_device *ofdev)
 {
 	struct ndfc_controller *ndfc = dev_get_drvdata(&ofdev->dev);
-	struct mtd_info *mtd = nand_to_mtd(&ndfc->chip);
+	struct nand_chip *chip = &ndfc->chip;
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	int ret;
 
-	nand_release(&ndfc->chip);
+	ret = mtd_device_unregister(mtd);
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	kfree(mtd->name);
 
 	return 0;
+6 -2
drivers/mtd/nand/raw/omap2.c
···
 	struct mtd_info *mtd = platform_get_drvdata(pdev);
 	struct nand_chip *nand_chip = mtd_to_nand(mtd);
 	struct omap_nand_info *info = mtd_to_omap(mtd);
+	int ret;
+
 	if (nand_chip->ecc.priv) {
 		nand_bch_free(nand_chip->ecc.priv);
 		nand_chip->ecc.priv = NULL;
 	}
 	if (info->dma)
 		dma_release_channel(info->dma);
-	nand_release(nand_chip);
-	return 0;
+	ret = mtd_device_unregister(mtd);
+	WARN_ON(ret);
+	nand_cleanup(nand_chip);
+	return ret;
 }
 
 static const struct of_device_id omap_nand_ids[] = {
+1
drivers/mtd/nand/raw/omap_elm.c
···
 	pm_runtime_enable(&pdev->dev);
 	if (pm_runtime_get_sync(&pdev->dev) < 0) {
 		ret = -EINVAL;
+		pm_runtime_put_sync(&pdev->dev);
 		pm_runtime_disable(&pdev->dev);
 		dev_err(&pdev->dev, "can't enable clock\n");
 		return ret;
+6 -2
drivers/mtd/nand/raw/orion_nand.c
···
 	mtd->name = "orion_nand";
 	ret = mtd_device_register(mtd, board->parts, board->nr_parts);
 	if (ret) {
-		nand_release(nc);
+		nand_cleanup(nc);
 		goto no_dev;
 	}
 
···
 {
 	struct orion_nand_info *info = platform_get_drvdata(pdev);
 	struct nand_chip *chip = &info->chip;
+	int ret;
 
-	nand_release(chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+
+	nand_cleanup(chip);
 
 	clk_disable_unprepare(info->clk);
 
+23 -10
drivers/mtd/nand/raw/oxnas_nand.c
···
 	void __iomem *io_base;
 	struct clk *clk;
 	struct nand_chip *chips[OXNAS_NAND_MAX_CHIPS];
+	unsigned int nchips;
 };
 
 static uint8_t oxnas_nand_read_byte(struct nand_chip *chip)
···
 	struct nand_chip *chip;
 	struct mtd_info *mtd;
 	struct resource *res;
-	int nchips = 0;
 	int count = 0;
 	int err = 0;
+	int i;
 
 	/* Allocate memory for the device structure (and zero it) */
 	oxnas = devm_kzalloc(&pdev->dev, sizeof(*oxnas),
···
 			goto err_release_child;
 
 		err = mtd_device_register(mtd, NULL, 0);
-		if (err) {
-			nand_release(chip);
-			goto err_release_child;
-		}
+		if (err)
+			goto err_cleanup_nand;
 
-		oxnas->chips[nchips] = chip;
-		++nchips;
+		oxnas->chips[oxnas->nchips] = chip;
+		++oxnas->nchips;
 	}
 
 	/* Exit if no chips found */
-	if (!nchips) {
+	if (!oxnas->nchips) {
 		err = -ENODEV;
 		goto err_clk_unprepare;
 	}
···
 
 	return 0;
 
+err_cleanup_nand:
+	nand_cleanup(chip);
 err_release_child:
 	of_node_put(nand_np);
+
+	for (i = 0; i < oxnas->nchips; i++) {
+		chip = oxnas->chips[i];
+		WARN_ON(mtd_device_unregister(nand_to_mtd(chip)));
+		nand_cleanup(chip);
+	}
+
 err_clk_unprepare:
 	clk_disable_unprepare(oxnas->clk);
 	return err;
···
 static int oxnas_nand_remove(struct platform_device *pdev)
 {
 	struct oxnas_nand_ctrl *oxnas = platform_get_drvdata(pdev);
+	struct nand_chip *chip;
+	int i;
 
-	if (oxnas->chips[0])
-		nand_release(oxnas->chips[0]);
+	for (i = 0; i < oxnas->nchips; i++) {
+		chip = oxnas->chips[i];
+		WARN_ON(mtd_device_unregister(nand_to_mtd(chip)));
+		nand_cleanup(chip);
+	}
 
 	clk_disable_unprepare(oxnas->clk);
 
+7 -2
drivers/mtd/nand/raw/pasemi_nand.c
···
 	if (mtd_device_register(pasemi_nand_mtd, NULL, 0)) {
 		dev_err(dev, "Unable to register MTD device\n");
 		err = -ENODEV;
-		goto out_lpc;
+		goto out_cleanup_nand;
 	}
 
 	dev_info(dev, "PA Semi NAND flash at %pR, control at I/O %x\n", &res,
···
 
 	return 0;
 
+out_cleanup_nand:
+	nand_cleanup(chip);
 out_lpc:
 	release_region(lpcctl, 4);
 out_ior:
···
 static int pasemi_nand_remove(struct platform_device *ofdev)
 {
 	struct nand_chip *chip;
+	int ret;
 
 	if (!pasemi_nand_mtd)
 		return 0;
···
 	chip = mtd_to_nand(pasemi_nand_mtd);
 
 	/* Release resources, unregister device */
-	nand_release(chip);
+	ret = mtd_device_unregister(pasemi_nand_mtd);
+	WARN_ON(ret);
+	nand_cleanup(chip);
 
 	release_region(lpcctl, 4);
 
+6 -2
drivers/mtd/nand/raw/plat_nand.c
···
 	if (!err)
 		return err;
 
-	nand_release(&data->chip);
+	nand_cleanup(&data->chip);
 out:
 	if (pdata->ctrl.remove)
 		pdata->ctrl.remove(pdev);
···
 {
 	struct plat_nand_data *data = platform_get_drvdata(pdev);
 	struct platform_nand_data *pdata = dev_get_platdata(&pdev->dev);
+	struct nand_chip *chip = &data->chip;
+	int ret;
 
-	nand_release(&data->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	if (pdata->ctrl.remove)
 		pdata->ctrl.remove(pdev);
 
+9 -4
drivers/mtd/nand/raw/qcom_nandc.c
···
 	chip->legacy.block_markbad = qcom_nandc_block_markbad;
 
 	chip->controller = &nandc->controller;
-	chip->options |= NAND_NO_SUBPAGE_WRITE | NAND_USE_BOUNCE_BUFFER |
+	chip->options |= NAND_NO_SUBPAGE_WRITE | NAND_USES_DMA |
 			 NAND_SKIP_BBTSCAN;
 
 	/* set up initial status value */
···
 	struct qcom_nand_controller *nandc = platform_get_drvdata(pdev);
 	struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	struct qcom_nand_host *host;
+	struct nand_chip *chip;
+	int ret;
 
-	list_for_each_entry(host, &nandc->host_list, node)
-		nand_release(&host->chip);
-
+	list_for_each_entry(host, &nandc->host_list, node) {
+		chip = &host->chip;
+		ret = mtd_device_unregister(nand_to_mtd(chip));
+		WARN_ON(ret);
+		nand_cleanup(chip);
+	}
 
 	qcom_nandc_unalloc(nandc);
 
+4 -2
drivers/mtd/nand/raw/r852.c
···
 	dev->card_registered = 1;
 	return 0;
 error3:
-	nand_release(dev->chip);
+	WARN_ON(mtd_device_unregister(nand_to_mtd(dev->chip)));
+	nand_cleanup(dev->chip);
 error1:
 	/* Force card redetect */
 	dev->card_detected = 0;
···
 		return;
 
 	device_remove_file(&mtd->dev, &dev_attr_media_type);
-	nand_release(dev->chip);
+	WARN_ON(mtd_device_unregister(mtd));
+	nand_cleanup(dev->chip);
 	r852_engine_disable(dev);
 	dev->card_registered = 0;
 }
+2 -1
drivers/mtd/nand/raw/s3c2410.c
···
 
 	for (mtdno = 0; mtdno < info->mtd_count; mtdno++, ptr++) {
 		pr_debug("releasing mtd %d (%p)\n", mtdno, ptr);
-		nand_release(&ptr->chip);
+		WARN_ON(mtd_device_unregister(nand_to_mtd(&ptr->chip)));
+		nand_cleanup(&ptr->chip);
 	}
 }
 
+5 -1
drivers/mtd/nand/raw/sh_flctl.c
···
 static int flctl_remove(struct platform_device *pdev)
 {
 	struct sh_flctl *flctl = platform_get_drvdata(pdev);
+	struct nand_chip *chip = &flctl->chip;
+	int ret;
 
 	flctl_release_dma(flctl);
-	nand_release(&flctl->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	pm_runtime_disable(&pdev->dev);
 
 	return 0;
+10 -4
drivers/mtd/nand/raw/sharpsl.c
···
 	return 0;
 
 err_add:
-	nand_release(this);
+	nand_cleanup(this);
 
 err_scan:
 	iounmap(sharpsl->io);
···
 static int sharpsl_nand_remove(struct platform_device *pdev)
 {
 	struct sharpsl_nand *sharpsl = platform_get_drvdata(pdev);
+	struct nand_chip *chip = &sharpsl->chip;
+	int ret;
 
-	/* Release resources, unregister device */
-	nand_release(&sharpsl->chip);
+	/* Unregister device */
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+
+	/* Release resources */
+	nand_cleanup(chip);
 
 	iounmap(sharpsl->io);
 
-	/* Free the MTD device structure */
+	/* Free the driver's structure */
 	kfree(sharpsl);
 
 	return 0;
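Note the asymmetry in the hunk above: the probe error path (err_add) only calls nand_cleanup(), because the device was never registered, while the remove path unregisters first and then cleans up. A small userspace sketch of the two paths, using stub state and hypothetical names rather than the real kernel helpers:

```c
#include <assert.h>
#include <stdbool.h>

/* Stub state: 'scanned' models chip resources from nand_scan(),
 * 'registered' models a successful mtd_device_register(). */
static bool scanned, registered;

static void cleanup(void)
{
	scanned = false;
}

static int unregister_dev(void)
{
	if (!registered)
		return -1;
	registered = false;
	return 0;
}

/* Probe failed after nand_scan() but before registration:
 * nothing to unregister, only chip cleanup runs. */
static void probe_error_path(void)
{
	scanned = true;
	cleanup();
}

/* Remove of a fully probed device: unregister, then clean up. */
static int remove_path(void)
{
	scanned = true;
	registered = true;
	int ret = unregister_dev();	/* WARN_ON(ret) in the real drivers */
	cleanup();
	return ret;
}
```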
+6 -2
drivers/mtd/nand/raw/socrates_nand.c
···
 	if (!res)
 		return res;
 
-	nand_release(nand_chip);
+	nand_cleanup(nand_chip);
 
 out:
 	iounmap(host->io_base);
···
 static int socrates_nand_remove(struct platform_device *ofdev)
 {
 	struct socrates_nand_host *host = dev_get_drvdata(&ofdev->dev);
+	struct nand_chip *chip = &host->nand_chip;
+	int ret;
 
-	nand_release(&host->nand_chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 
 	iounmap(host->io_base);
 
+511 -554
drivers/mtd/nand/raw/stm32_fmc2_nand.c
···
  * Author: Christophe Kerello <christophe.kerello@st.com>
  */
 
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/dmaengine.h>
 #include <linux/dma-mapping.h>
···
 /* Max ECC buffer length */
 #define FMC2_MAX_ECC_BUF_LEN		(FMC2_BCHDSRS_LEN * FMC2_MAX_SG)
 
-#define FMC2_TIMEOUT_US			1000
-#define FMC2_TIMEOUT_MS			1000
+#define FMC2_TIMEOUT_MS			5000
 
 /* Timings */
 #define FMC2_THIZ			1
···
 /* Register: FMC2_PCR */
 #define FMC2_PCR_PWAITEN		BIT(1)
 #define FMC2_PCR_PBKEN			BIT(2)
-#define FMC2_PCR_PWID_MASK		GENMASK(5, 4)
-#define FMC2_PCR_PWID(x)		(((x) & 0x3) << 4)
+#define FMC2_PCR_PWID			GENMASK(5, 4)
 #define FMC2_PCR_PWID_BUSWIDTH_8	0
 #define FMC2_PCR_PWID_BUSWIDTH_16	1
 #define FMC2_PCR_ECCEN			BIT(6)
 #define FMC2_PCR_ECCALG			BIT(8)
-#define FMC2_PCR_TCLR_MASK		GENMASK(12, 9)
-#define FMC2_PCR_TCLR(x)		(((x) & 0xf) << 9)
+#define FMC2_PCR_TCLR			GENMASK(12, 9)
 #define FMC2_PCR_TCLR_DEFAULT		0xf
-#define FMC2_PCR_TAR_MASK		GENMASK(16, 13)
-#define FMC2_PCR_TAR(x)			(((x) & 0xf) << 13)
+#define FMC2_PCR_TAR			GENMASK(16, 13)
 #define FMC2_PCR_TAR_DEFAULT		0xf
-#define FMC2_PCR_ECCSS_MASK		GENMASK(19, 17)
-#define FMC2_PCR_ECCSS(x)		(((x) & 0x7) << 17)
+#define FMC2_PCR_ECCSS			GENMASK(19, 17)
 #define FMC2_PCR_ECCSS_512		1
 #define FMC2_PCR_ECCSS_2048		3
 #define FMC2_PCR_BCHECC			BIT(24)
···
 #define FMC2_SR_NWRF			BIT(6)
 
 /* Register: FMC2_PMEM */
-#define FMC2_PMEM_MEMSET(x)		(((x) & 0xff) << 0)
-#define FMC2_PMEM_MEMWAIT(x)		(((x) & 0xff) << 8)
-#define FMC2_PMEM_MEMHOLD(x)		(((x) & 0xff) << 16)
-#define FMC2_PMEM_MEMHIZ(x)		(((x) & 0xff) << 24)
+#define FMC2_PMEM_MEMSET		GENMASK(7, 0)
+#define FMC2_PMEM_MEMWAIT		GENMASK(15, 8)
+#define FMC2_PMEM_MEMHOLD		GENMASK(23, 16)
+#define FMC2_PMEM_MEMHIZ		GENMASK(31, 24)
 #define FMC2_PMEM_DEFAULT		0x0a0a0a0a
 
 /* Register: FMC2_PATT */
-#define FMC2_PATT_ATTSET(x)		(((x) & 0xff) << 0)
-#define FMC2_PATT_ATTWAIT(x)		(((x) & 0xff) << 8)
-#define FMC2_PATT_ATTHOLD(x)		(((x) & 0xff) << 16)
-#define FMC2_PATT_ATTHIZ(x)		(((x) & 0xff) << 24)
+#define FMC2_PATT_ATTSET		GENMASK(7, 0)
+#define FMC2_PATT_ATTWAIT		GENMASK(15, 8)
+#define FMC2_PATT_ATTHOLD		GENMASK(23, 16)
+#define FMC2_PATT_ATTHIZ		GENMASK(31, 24)
 #define FMC2_PATT_DEFAULT		0x0a0a0a0a
 
 /* Register: FMC2_ISR */
···
 /* Register: FMC2_CSQCFGR1 */
 #define FMC2_CSQCFGR1_CMD2EN		BIT(1)
 #define FMC2_CSQCFGR1_DMADEN		BIT(2)
-#define FMC2_CSQCFGR1_ACYNBR(x)		(((x) & 0x7) << 4)
-#define FMC2_CSQCFGR1_CMD1(x)		(((x) & 0xff) << 8)
-#define FMC2_CSQCFGR1_CMD2(x)		(((x) & 0xff) << 16)
+#define FMC2_CSQCFGR1_ACYNBR		GENMASK(6, 4)
+#define FMC2_CSQCFGR1_CMD1		GENMASK(15, 8)
+#define FMC2_CSQCFGR1_CMD2		GENMASK(23, 16)
 #define FMC2_CSQCFGR1_CMD1T		BIT(24)
 #define FMC2_CSQCFGR1_CMD2T		BIT(25)
···
 #define FMC2_CSQCFGR2_SQSDTEN		BIT(0)
 #define FMC2_CSQCFGR2_RCMD2EN		BIT(1)
 #define FMC2_CSQCFGR2_DMASEN		BIT(2)
-#define FMC2_CSQCFGR2_RCMD1(x)		(((x) & 0xff) << 8)
-#define FMC2_CSQCFGR2_RCMD2(x)		(((x) & 0xff) << 16)
+#define FMC2_CSQCFGR2_RCMD1		GENMASK(15, 8)
+#define FMC2_CSQCFGR2_RCMD2		GENMASK(23, 16)
 #define FMC2_CSQCFGR2_RCMD1T		BIT(24)
 #define FMC2_CSQCFGR2_RCMD2T		BIT(25)
 
 /* Register: FMC2_CSQCFGR3 */
-#define FMC2_CSQCFGR3_SNBR(x)		(((x) & 0x1f) << 8)
+#define FMC2_CSQCFGR3_SNBR		GENMASK(13, 8)
 #define FMC2_CSQCFGR3_AC1T		BIT(16)
 #define FMC2_CSQCFGR3_AC2T		BIT(17)
 #define FMC2_CSQCFGR3_AC3T		BIT(18)
···
 #define FMC2_CSQCFGR3_RAC2T		BIT(23)
 
 /* Register: FMC2_CSQCAR1 */
-#define FMC2_CSQCAR1_ADDC1(x)		(((x) & 0xff) << 0)
-#define FMC2_CSQCAR1_ADDC2(x)		(((x) & 0xff) << 8)
-#define FMC2_CSQCAR1_ADDC3(x)		(((x) & 0xff) << 16)
-#define FMC2_CSQCAR1_ADDC4(x)		(((x) & 0xff) << 24)
+#define FMC2_CSQCAR1_ADDC1		GENMASK(7, 0)
+#define FMC2_CSQCAR1_ADDC2		GENMASK(15, 8)
+#define FMC2_CSQCAR1_ADDC3		GENMASK(23, 16)
+#define FMC2_CSQCAR1_ADDC4		GENMASK(31, 24)
 
 /* Register: FMC2_CSQCAR2 */
-#define FMC2_CSQCAR2_ADDC5(x)		(((x) & 0xff) << 0)
-#define FMC2_CSQCAR2_NANDCEN(x)		(((x) & 0x3) << 10)
-#define FMC2_CSQCAR2_SAO(x)		(((x) & 0xffff) << 16)
+#define FMC2_CSQCAR2_ADDC5		GENMASK(7, 0)
+#define FMC2_CSQCAR2_NANDCEN		GENMASK(11, 10)
+#define FMC2_CSQCAR2_SAO		GENMASK(31, 16)
 
 /* Register: FMC2_CSQIER */
 #define FMC2_CSQIER_TCIE		BIT(0)
···
 /* Register: FMC2_BCHDSR0 */
 #define FMC2_BCHDSR0_DUE		BIT(0)
 #define FMC2_BCHDSR0_DEF		BIT(1)
-#define FMC2_BCHDSR0_DEN_MASK		GENMASK(7, 4)
-#define FMC2_BCHDSR0_DEN_SHIFT		4
+#define FMC2_BCHDSR0_DEN		GENMASK(7, 4)
 
 /* Register: FMC2_BCHDSR1 */
-#define FMC2_BCHDSR1_EBP1_MASK		GENMASK(12, 0)
-#define FMC2_BCHDSR1_EBP2_MASK		GENMASK(28, 16)
-#define FMC2_BCHDSR1_EBP2_SHIFT		16
+#define FMC2_BCHDSR1_EBP1		GENMASK(12, 0)
+#define FMC2_BCHDSR1_EBP2		GENMASK(28, 16)
 
 /* Register: FMC2_BCHDSR2 */
-#define FMC2_BCHDSR2_EBP3_MASK		GENMASK(12, 0)
-#define FMC2_BCHDSR2_EBP4_MASK		GENMASK(28, 16)
-#define FMC2_BCHDSR2_EBP4_SHIFT		16
+#define FMC2_BCHDSR2_EBP3		GENMASK(12, 0)
+#define FMC2_BCHDSR2_EBP4		GENMASK(28, 16)
 
 /* Register: FMC2_BCHDSR3 */
-#define FMC2_BCHDSR3_EBP5_MASK		GENMASK(12, 0)
-#define FMC2_BCHDSR3_EBP6_MASK		GENMASK(28, 16)
-#define FMC2_BCHDSR3_EBP6_SHIFT		16
+#define FMC2_BCHDSR3_EBP5		GENMASK(12, 0)
+#define FMC2_BCHDSR3_EBP6		GENMASK(28, 16)
 
 /* Register: FMC2_BCHDSR4 */
-#define FMC2_BCHDSR4_EBP7_MASK		GENMASK(12, 0)
-#define FMC2_BCHDSR4_EBP8_MASK		GENMASK(28, 16)
-#define FMC2_BCHDSR4_EBP8_SHIFT		16
+#define FMC2_BCHDSR4_EBP7		GENMASK(12, 0)
+#define FMC2_BCHDSR4_EBP8		GENMASK(28, 16)
 
 enum stm32_fmc2_ecc {
 	FMC2_ECC_HAM = 1,
···
 	return container_of(base, struct stm32_fmc2_nfc, base);
 }
 
-/* Timings configuration */
-static void stm32_fmc2_timings_init(struct nand_chip *chip)
+static void stm32_fmc2_nfc_timings_init(struct nand_chip *chip)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
 	struct stm32_fmc2_nand *nand = to_fmc2_nand(chip);
 	struct stm32_fmc2_timings *timings = &nand->timings;
-	u32 pcr = readl_relaxed(fmc2->io_base + FMC2_PCR);
+	u32 pcr = readl_relaxed(nfc->io_base + FMC2_PCR);
 	u32 pmem, patt;
 
 	/* Set tclr/tar timings */
-	pcr &= ~FMC2_PCR_TCLR_MASK;
-	pcr |= FMC2_PCR_TCLR(timings->tclr);
-	pcr &= ~FMC2_PCR_TAR_MASK;
-	pcr |= FMC2_PCR_TAR(timings->tar);
+	pcr &= ~FMC2_PCR_TCLR;
+	pcr |= FIELD_PREP(FMC2_PCR_TCLR, timings->tclr);
+	pcr &= ~FMC2_PCR_TAR;
+	pcr |= FIELD_PREP(FMC2_PCR_TAR, timings->tar);
 
 	/* Set tset/twait/thold/thiz timings in common bank */
-	pmem = FMC2_PMEM_MEMSET(timings->tset_mem);
-	pmem |= FMC2_PMEM_MEMWAIT(timings->twait);
-	pmem |= FMC2_PMEM_MEMHOLD(timings->thold_mem);
-	pmem |= FMC2_PMEM_MEMHIZ(timings->thiz);
+	pmem = FIELD_PREP(FMC2_PMEM_MEMSET, timings->tset_mem);
+	pmem |= FIELD_PREP(FMC2_PMEM_MEMWAIT, timings->twait);
+	pmem |= FIELD_PREP(FMC2_PMEM_MEMHOLD, timings->thold_mem);
+	pmem |= FIELD_PREP(FMC2_PMEM_MEMHIZ, timings->thiz);
 
 	/* Set tset/twait/thold/thiz timings in attribut bank */
-	patt = FMC2_PATT_ATTSET(timings->tset_att);
-	patt |= FMC2_PATT_ATTWAIT(timings->twait);
-	patt |= FMC2_PATT_ATTHOLD(timings->thold_att);
-	patt |= FMC2_PATT_ATTHIZ(timings->thiz);
+	patt = FIELD_PREP(FMC2_PATT_ATTSET, timings->tset_att);
+	patt |= FIELD_PREP(FMC2_PATT_ATTWAIT, timings->twait);
+	patt |= FIELD_PREP(FMC2_PATT_ATTHOLD, timings->thold_att);
+	patt |= FIELD_PREP(FMC2_PATT_ATTHIZ, timings->thiz);
 
-	writel_relaxed(pcr, fmc2->io_base + FMC2_PCR);
-	writel_relaxed(pmem, fmc2->io_base + FMC2_PMEM);
-	writel_relaxed(patt, fmc2->io_base + FMC2_PATT);
+	writel_relaxed(pcr, nfc->io_base + FMC2_PCR);
+	writel_relaxed(pmem, nfc->io_base + FMC2_PMEM);
+	writel_relaxed(patt, nfc->io_base + FMC2_PATT);
 }
 
-/* Controller configuration */
-static void stm32_fmc2_setup(struct nand_chip *chip)
+static void stm32_fmc2_nfc_setup(struct nand_chip *chip)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
-	u32 pcr = readl_relaxed(fmc2->io_base + FMC2_PCR);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
+	u32 pcr = readl_relaxed(nfc->io_base + FMC2_PCR);
 
 	/* Configure ECC algorithm (default configuration is Hamming) */
 	pcr &= ~FMC2_PCR_ECCALG;
···
 	}
 
 	/* Set buswidth */
-	pcr &= ~FMC2_PCR_PWID_MASK;
+	pcr &= ~FMC2_PCR_PWID;
 	if (chip->options & NAND_BUSWIDTH_16)
-		pcr |= FMC2_PCR_PWID(FMC2_PCR_PWID_BUSWIDTH_16);
+		pcr |= FIELD_PREP(FMC2_PCR_PWID, FMC2_PCR_PWID_BUSWIDTH_16);
 
 	/* Set ECC sector size */
-	pcr &= ~FMC2_PCR_ECCSS_MASK;
-	pcr |= FMC2_PCR_ECCSS(FMC2_PCR_ECCSS_512);
+	pcr &= ~FMC2_PCR_ECCSS;
+	pcr |= FIELD_PREP(FMC2_PCR_ECCSS, FMC2_PCR_ECCSS_512);
 
-	writel_relaxed(pcr, fmc2->io_base + FMC2_PCR);
+	writel_relaxed(pcr, nfc->io_base + FMC2_PCR);
 }
 
-/* Select target */
-static int stm32_fmc2_select_chip(struct nand_chip *chip, int chipnr)
+static int stm32_fmc2_nfc_select_chip(struct nand_chip *chip, int chipnr)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
 	struct stm32_fmc2_nand *nand = to_fmc2_nand(chip);
 	struct dma_slave_config dma_cfg;
 	int ret;
 
-	if (nand->cs_used[chipnr] == fmc2->cs_sel)
+	if (nand->cs_used[chipnr] == nfc->cs_sel)
 		return 0;
 
-	fmc2->cs_sel = nand->cs_used[chipnr];
+	nfc->cs_sel = nand->cs_used[chipnr];
+	stm32_fmc2_nfc_setup(chip);
+	stm32_fmc2_nfc_timings_init(chip);
 
-	/* FMC2 setup routine */
-	stm32_fmc2_setup(chip);
-
-	/* Apply timings */
-	stm32_fmc2_timings_init(chip);
-
-	if (fmc2->dma_tx_ch && fmc2->dma_rx_ch) {
+	if (nfc->dma_tx_ch && nfc->dma_rx_ch) {
 		memset(&dma_cfg, 0, sizeof(dma_cfg));
-		dma_cfg.src_addr = fmc2->data_phys_addr[fmc2->cs_sel];
-		dma_cfg.dst_addr = fmc2->data_phys_addr[fmc2->cs_sel];
+		dma_cfg.src_addr = nfc->data_phys_addr[nfc->cs_sel];
+		dma_cfg.dst_addr = nfc->data_phys_addr[nfc->cs_sel];
 		dma_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
 		dma_cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
 		dma_cfg.src_maxburst = 32;
 		dma_cfg.dst_maxburst = 32;
 
-		ret = dmaengine_slave_config(fmc2->dma_tx_ch, &dma_cfg);
+		ret = dmaengine_slave_config(nfc->dma_tx_ch, &dma_cfg);
 		if (ret) {
-			dev_err(fmc2->dev, "tx DMA engine slave config failed\n");
+			dev_err(nfc->dev, "tx DMA engine slave config failed\n");
 			return ret;
 		}
 
-		ret = dmaengine_slave_config(fmc2->dma_rx_ch, &dma_cfg);
+		ret = dmaengine_slave_config(nfc->dma_rx_ch, &dma_cfg);
 		if (ret) {
-			dev_err(fmc2->dev, "rx DMA engine slave config failed\n");
+			dev_err(nfc->dev, "rx DMA engine slave config failed\n");
 			return ret;
 		}
 	}
 
-	if (fmc2->dma_ecc_ch) {
+	if (nfc->dma_ecc_ch) {
 		/*
 		 * Hamming: we read HECCR register
 		 * BCH4/BCH8: we read BCHDSRSx registers
 		 */
 		memset(&dma_cfg, 0, sizeof(dma_cfg));
-		dma_cfg.src_addr = fmc2->io_phys_addr;
+		dma_cfg.src_addr = nfc->io_phys_addr;
 		dma_cfg.src_addr += chip->ecc.strength == FMC2_ECC_HAM ?
 				    FMC2_HECCR : FMC2_BCHDSR0;
 		dma_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
 
-		ret = dmaengine_slave_config(fmc2->dma_ecc_ch, &dma_cfg);
+		ret = dmaengine_slave_config(nfc->dma_ecc_ch, &dma_cfg);
 		if (ret) {
-			dev_err(fmc2->dev, "ECC DMA engine slave config failed\n");
+			dev_err(nfc->dev, "ECC DMA engine slave config failed\n");
 			return ret;
 		}
 
 		/* Calculate ECC length needed for one sector */
-		fmc2->dma_ecc_len = chip->ecc.strength == FMC2_ECC_HAM ?
-				    FMC2_HECCR_LEN : FMC2_BCHDSRS_LEN;
+		nfc->dma_ecc_len = chip->ecc.strength == FMC2_ECC_HAM ?
+				   FMC2_HECCR_LEN : FMC2_BCHDSRS_LEN;
 	}
 
 	return 0;
 }
 
-/* Set bus width to 16-bit or 8-bit */
-static void stm32_fmc2_set_buswidth_16(struct stm32_fmc2_nfc *fmc2, bool set)
+static void stm32_fmc2_nfc_set_buswidth_16(struct stm32_fmc2_nfc *nfc, bool set)
 {
-	u32 pcr = readl_relaxed(fmc2->io_base + FMC2_PCR);
+	u32 pcr = readl_relaxed(nfc->io_base + FMC2_PCR);
 
-	pcr &= ~FMC2_PCR_PWID_MASK;
+	pcr &= ~FMC2_PCR_PWID;
 	if (set)
-		pcr |= FMC2_PCR_PWID(FMC2_PCR_PWID_BUSWIDTH_16);
-	writel_relaxed(pcr, fmc2->io_base + FMC2_PCR);
+		pcr |= FIELD_PREP(FMC2_PCR_PWID, FMC2_PCR_PWID_BUSWIDTH_16);
+	writel_relaxed(pcr, nfc->io_base + FMC2_PCR);
 }
 
-/* Enable/disable ECC */
-static void stm32_fmc2_set_ecc(struct stm32_fmc2_nfc *fmc2, bool enable)
+static void stm32_fmc2_nfc_set_ecc(struct stm32_fmc2_nfc *nfc, bool enable)
 {
-	u32 pcr = readl(fmc2->io_base + FMC2_PCR);
+	u32 pcr = readl(nfc->io_base + FMC2_PCR);
 
 	pcr &= ~FMC2_PCR_ECCEN;
 	if (enable)
 		pcr |= FMC2_PCR_ECCEN;
-	writel(pcr, fmc2->io_base + FMC2_PCR);
+	writel(pcr, nfc->io_base + FMC2_PCR);
 }
 
-/* Enable irq sources in case of the sequencer is used */
-static inline void stm32_fmc2_enable_seq_irq(struct stm32_fmc2_nfc *fmc2)
+static inline void stm32_fmc2_nfc_enable_seq_irq(struct stm32_fmc2_nfc *nfc)
 {
-	u32 csqier = readl_relaxed(fmc2->io_base + FMC2_CSQIER);
+	u32 csqier = readl_relaxed(nfc->io_base + FMC2_CSQIER);
 
 	csqier |= FMC2_CSQIER_TCIE;
 
-	fmc2->irq_state = FMC2_IRQ_SEQ;
+	nfc->irq_state = FMC2_IRQ_SEQ;
 
-	writel_relaxed(csqier, fmc2->io_base + FMC2_CSQIER);
+	writel_relaxed(csqier, nfc->io_base + FMC2_CSQIER);
 }
 
-/* Disable irq sources in case of the sequencer is used */
-static inline void stm32_fmc2_disable_seq_irq(struct stm32_fmc2_nfc *fmc2)
+static inline void stm32_fmc2_nfc_disable_seq_irq(struct stm32_fmc2_nfc *nfc)
 {
-	u32 csqier = readl_relaxed(fmc2->io_base + FMC2_CSQIER);
+	u32 csqier = readl_relaxed(nfc->io_base + FMC2_CSQIER);
 
 	csqier &= ~FMC2_CSQIER_TCIE;
 
-	writel_relaxed(csqier, fmc2->io_base + FMC2_CSQIER);
+	writel_relaxed(csqier, nfc->io_base + FMC2_CSQIER);
 
-	fmc2->irq_state = FMC2_IRQ_UNKNOWN;
+	nfc->irq_state = FMC2_IRQ_UNKNOWN;
 }
 
-/* Clear irq sources in case of the sequencer is used */
-static inline void stm32_fmc2_clear_seq_irq(struct stm32_fmc2_nfc *fmc2)
+static inline void stm32_fmc2_nfc_clear_seq_irq(struct stm32_fmc2_nfc *nfc)
 {
-	writel_relaxed(FMC2_CSQICR_CLEAR_IRQ, fmc2->io_base + FMC2_CSQICR);
+	writel_relaxed(FMC2_CSQICR_CLEAR_IRQ, nfc->io_base + FMC2_CSQICR);
 }
 
-/* Enable irq sources in case of bch is used */
-static inline void stm32_fmc2_enable_bch_irq(struct stm32_fmc2_nfc *fmc2,
-					     int mode)
+static inline void stm32_fmc2_nfc_enable_bch_irq(struct stm32_fmc2_nfc *nfc,
+						 int mode)
 {
-	u32 bchier = readl_relaxed(fmc2->io_base + FMC2_BCHIER);
+	u32 bchier = readl_relaxed(nfc->io_base + FMC2_BCHIER);
 
 	if (mode == NAND_ECC_WRITE)
 		bchier |= FMC2_BCHIER_EPBRIE;
 	else
 		bchier |= FMC2_BCHIER_DERIE;
 
-	fmc2->irq_state = FMC2_IRQ_BCH;
+	nfc->irq_state = FMC2_IRQ_BCH;
 
-	writel_relaxed(bchier, fmc2->io_base + FMC2_BCHIER);
+	writel_relaxed(bchier, nfc->io_base + FMC2_BCHIER);
 }
 
-/* Disable irq sources in case of bch is used */
-static inline void stm32_fmc2_disable_bch_irq(struct stm32_fmc2_nfc *fmc2)
+static inline void stm32_fmc2_nfc_disable_bch_irq(struct stm32_fmc2_nfc *nfc)
 {
-	u32 bchier = readl_relaxed(fmc2->io_base + FMC2_BCHIER);
+	u32 bchier = readl_relaxed(nfc->io_base + FMC2_BCHIER);
 
 	bchier &= ~FMC2_BCHIER_DERIE;
 	bchier &= ~FMC2_BCHIER_EPBRIE;
 
-	writel_relaxed(bchier, fmc2->io_base + FMC2_BCHIER);
+	writel_relaxed(bchier, nfc->io_base + FMC2_BCHIER);
 
-	fmc2->irq_state = FMC2_IRQ_UNKNOWN;
+	nfc->irq_state = FMC2_IRQ_UNKNOWN;
 }
 
-/* Clear irq sources in case of bch is used */
-static inline void stm32_fmc2_clear_bch_irq(struct stm32_fmc2_nfc *fmc2)
+static inline void stm32_fmc2_nfc_clear_bch_irq(struct stm32_fmc2_nfc *nfc)
 {
-	writel_relaxed(FMC2_BCHICR_CLEAR_IRQ, fmc2->io_base + FMC2_BCHICR);
+	writel_relaxed(FMC2_BCHICR_CLEAR_IRQ, nfc->io_base + FMC2_BCHICR);
 }
 
 /*
  * Enable ECC logic and reset syndrome/parity bits previously calculated
  * Syndrome/parity bits is cleared by setting the ECCEN bit to 0
  */
-static void stm32_fmc2_hwctl(struct nand_chip *chip, int mode)
+static void stm32_fmc2_nfc_hwctl(struct nand_chip *chip, int mode)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
 
-	stm32_fmc2_set_ecc(fmc2, false);
+	stm32_fmc2_nfc_set_ecc(nfc, false);
 
 	if (chip->ecc.strength != FMC2_ECC_HAM) {
-		u32 pcr = readl_relaxed(fmc2->io_base + FMC2_PCR);
+		u32 pcr = readl_relaxed(nfc->io_base + FMC2_PCR);
 
 		if (mode == NAND_ECC_WRITE)
 			pcr |= FMC2_PCR_WEN;
 		else
 			pcr &= ~FMC2_PCR_WEN;
-		writel_relaxed(pcr, fmc2->io_base + FMC2_PCR);
+		writel_relaxed(pcr, nfc->io_base + FMC2_PCR);
 
-		reinit_completion(&fmc2->complete);
-		stm32_fmc2_clear_bch_irq(fmc2);
-		stm32_fmc2_enable_bch_irq(fmc2, mode);
+		reinit_completion(&nfc->complete);
+		stm32_fmc2_nfc_clear_bch_irq(nfc);
+		stm32_fmc2_nfc_enable_bch_irq(nfc, mode);
 	}
 
-	stm32_fmc2_set_ecc(fmc2, true);
+	stm32_fmc2_nfc_set_ecc(nfc, true);
 }
 
 /*
···
  * ECC is 3 bytes for 512 bytes of data (supports error correction up to
  * max of 1-bit)
  */
-static inline void stm32_fmc2_ham_set_ecc(const u32 ecc_sta, u8 *ecc)
+static inline void stm32_fmc2_nfc_ham_set_ecc(const u32 ecc_sta, u8 *ecc)
 {
 	ecc[0] = ecc_sta;
 	ecc[1] = ecc_sta >> 8;
 	ecc[2] = ecc_sta >> 16;
 }
 
-static int stm32_fmc2_ham_calculate(struct nand_chip *chip, const u8 *data,
-				    u8 *ecc)
+static int stm32_fmc2_nfc_ham_calculate(struct nand_chip *chip, const u8 *data,
+					u8 *ecc)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
 	u32 sr, heccr;
 	int ret;
 
-	ret = readl_relaxed_poll_timeout(fmc2->io_base + FMC2_SR,
-					 sr, sr & FMC2_SR_NWRF, 10,
-					 FMC2_TIMEOUT_MS);
+	ret = readl_relaxed_poll_timeout(nfc->io_base + FMC2_SR,
+					 sr, sr & FMC2_SR_NWRF, 1,
+					 1000 * FMC2_TIMEOUT_MS);
 	if (ret) {
-		dev_err(fmc2->dev, "ham timeout\n");
+		dev_err(nfc->dev, "ham timeout\n");
 		return ret;
 	}
 
-	heccr = readl_relaxed(fmc2->io_base + FMC2_HECCR);
-
-	stm32_fmc2_ham_set_ecc(heccr, ecc);
-
-	/* Disable ECC */
-	stm32_fmc2_set_ecc(fmc2, false);
+	heccr = readl_relaxed(nfc->io_base + FMC2_HECCR);
+	stm32_fmc2_nfc_ham_set_ecc(heccr, ecc);
+	stm32_fmc2_nfc_set_ecc(nfc, false);
 
 	return 0;
 }
 
-static int stm32_fmc2_ham_correct(struct nand_chip *chip, u8 *dat,
-				  u8 *read_ecc, u8 *calc_ecc)
+static int stm32_fmc2_nfc_ham_correct(struct nand_chip *chip, u8 *dat,
+				      u8 *read_ecc, u8 *calc_ecc)
 {
 	u8 bit_position = 0, b0, b1, b2;
 	u32 byte_addr = 0, b;
···
  * ECC is 7/13 bytes for 512 bytes of data (supports error correction up to
  * max of 4-bit/8-bit)
  */
-static int stm32_fmc2_bch_calculate(struct nand_chip *chip, const u8 *data,
-				    u8 *ecc)
+static int stm32_fmc2_nfc_bch_calculate(struct nand_chip *chip, const u8 *data,
+					u8 *ecc)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
 	u32 bchpbr;
 
 	/* Wait until the BCH code is ready */
-	if (!wait_for_completion_timeout(&fmc2->complete,
+	if (!wait_for_completion_timeout(&nfc->complete,
 					 msecs_to_jiffies(FMC2_TIMEOUT_MS))) {
-		dev_err(fmc2->dev, "bch timeout\n");
-		stm32_fmc2_disable_bch_irq(fmc2);
+		dev_err(nfc->dev, "bch timeout\n");
+		stm32_fmc2_nfc_disable_bch_irq(nfc);
 		return -ETIMEDOUT;
 	}
 
 	/* Read parity bits */
-	bchpbr = readl_relaxed(fmc2->io_base + FMC2_BCHPBR1);
+	bchpbr = readl_relaxed(nfc->io_base + FMC2_BCHPBR1);
 	ecc[0] = bchpbr;
 	ecc[1] = bchpbr >> 8;
 	ecc[2] = bchpbr >> 16;
 	ecc[3] = bchpbr >> 24;
 
-	bchpbr = readl_relaxed(fmc2->io_base + FMC2_BCHPBR2);
+	bchpbr = readl_relaxed(nfc->io_base + FMC2_BCHPBR2);
 	ecc[4] = bchpbr;
 	ecc[5] = bchpbr >> 8;
 	ecc[6] = bchpbr >> 16;
···
 	if (chip->ecc.strength == FMC2_ECC_BCH8) {
 		ecc[7] = bchpbr >> 24;
 
-		bchpbr = readl_relaxed(fmc2->io_base + FMC2_BCHPBR3);
+		bchpbr = readl_relaxed(nfc->io_base + FMC2_BCHPBR3);
 		ecc[8] = bchpbr;
 		ecc[9] = bchpbr >> 8;
 		ecc[10] = bchpbr >> 16;
 		ecc[11] = bchpbr >> 24;
 
-		bchpbr = readl_relaxed(fmc2->io_base + FMC2_BCHPBR4);
+		bchpbr = readl_relaxed(nfc->io_base + FMC2_BCHPBR4);
 		ecc[12] = bchpbr;
 	}
 
-	/* Disable ECC */
-	stm32_fmc2_set_ecc(fmc2, false);
+	stm32_fmc2_nfc_set_ecc(nfc, false);
 
 	return 0;
 }
 
-/* BCH algorithm correction */
-static int stm32_fmc2_bch_decode(int eccsize, u8 *dat, u32 *ecc_sta)
+static int stm32_fmc2_nfc_bch_decode(int eccsize, u8 *dat, u32 *ecc_sta)
 {
 	u32 bchdsr0 = ecc_sta[0];
 	u32 bchdsr1 = ecc_sta[1];
···
 	if (unlikely(bchdsr0 & FMC2_BCHDSR0_DUE))
 		return -EBADMSG;
 
-	pos[0] = bchdsr1 & FMC2_BCHDSR1_EBP1_MASK;
-	pos[1] = (bchdsr1 & FMC2_BCHDSR1_EBP2_MASK) >> FMC2_BCHDSR1_EBP2_SHIFT;
-	pos[2] = bchdsr2 & FMC2_BCHDSR2_EBP3_MASK;
-	pos[3] = (bchdsr2 & FMC2_BCHDSR2_EBP4_MASK) >> FMC2_BCHDSR2_EBP4_SHIFT;
-	pos[4] = bchdsr3 & FMC2_BCHDSR3_EBP5_MASK;
-	pos[5] = (bchdsr3 & FMC2_BCHDSR3_EBP6_MASK) >> FMC2_BCHDSR3_EBP6_SHIFT;
-	pos[6] = bchdsr4 & FMC2_BCHDSR4_EBP7_MASK;
-	pos[7] = (bchdsr4 & FMC2_BCHDSR4_EBP8_MASK) >> FMC2_BCHDSR4_EBP8_SHIFT;
+	pos[0] = FIELD_GET(FMC2_BCHDSR1_EBP1, bchdsr1);
+	pos[1] = FIELD_GET(FMC2_BCHDSR1_EBP2, bchdsr1);
+	pos[2] = FIELD_GET(FMC2_BCHDSR2_EBP3, bchdsr2);
+	pos[3] = FIELD_GET(FMC2_BCHDSR2_EBP4, bchdsr2);
+	pos[4] = FIELD_GET(FMC2_BCHDSR3_EBP5, bchdsr3);
+	pos[5] = FIELD_GET(FMC2_BCHDSR3_EBP6, bchdsr3);
+	pos[6] = FIELD_GET(FMC2_BCHDSR4_EBP7, bchdsr4);
+	pos[7] = FIELD_GET(FMC2_BCHDSR4_EBP8, bchdsr4);
 
-	den = (bchdsr0 & FMC2_BCHDSR0_DEN_MASK) >> FMC2_BCHDSR0_DEN_SHIFT;
+	den = FIELD_GET(FMC2_BCHDSR0_DEN, bchdsr0);
 	for (i = 0; i < den; i++) {
 		if (pos[i] < eccsize * 8) {
 			change_bit(pos[i], (unsigned long *)dat);
···
 	return nb_errs;
 }
 
-static int stm32_fmc2_bch_correct(struct nand_chip *chip, u8 *dat,
-				  u8 *read_ecc, u8 *calc_ecc)
+static int stm32_fmc2_nfc_bch_correct(struct nand_chip *chip, u8 *dat,
+				      u8 *read_ecc, u8 *calc_ecc)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
 	u32 ecc_sta[5];
 
 	/* Wait until the decoding error is ready */
-	if (!wait_for_completion_timeout(&fmc2->complete,
+	if (!wait_for_completion_timeout(&nfc->complete,
 					 msecs_to_jiffies(FMC2_TIMEOUT_MS))) {
-		dev_err(fmc2->dev, "bch timeout\n");
-		stm32_fmc2_disable_bch_irq(fmc2);
+		dev_err(nfc->dev, "bch timeout\n");
+		stm32_fmc2_nfc_disable_bch_irq(nfc);
 		return -ETIMEDOUT;
 	}
 
-	ecc_sta[0] = readl_relaxed(fmc2->io_base + FMC2_BCHDSR0);
-	ecc_sta[1] = readl_relaxed(fmc2->io_base + FMC2_BCHDSR1);
-	ecc_sta[2] = readl_relaxed(fmc2->io_base + FMC2_BCHDSR2);
-	ecc_sta[3] = readl_relaxed(fmc2->io_base + FMC2_BCHDSR3);
-	ecc_sta[4] = readl_relaxed(fmc2->io_base + FMC2_BCHDSR4);
+	ecc_sta[0] = readl_relaxed(nfc->io_base + FMC2_BCHDSR0);
+	ecc_sta[1] = readl_relaxed(nfc->io_base + FMC2_BCHDSR1);
+	ecc_sta[2] = readl_relaxed(nfc->io_base + FMC2_BCHDSR2);
+	ecc_sta[3] = readl_relaxed(nfc->io_base + FMC2_BCHDSR3);
+	ecc_sta[4] = readl_relaxed(nfc->io_base + FMC2_BCHDSR4);
 
-	/* Disable ECC */
-	stm32_fmc2_set_ecc(fmc2, false);
+	stm32_fmc2_nfc_set_ecc(nfc, false);
 
-	return stm32_fmc2_bch_decode(chip->ecc.size, dat, ecc_sta);
+	return stm32_fmc2_nfc_bch_decode(chip->ecc.size, dat, ecc_sta);
 }
 
-static int stm32_fmc2_read_page(struct nand_chip *chip, u8 *buf,
-				int oob_required, int page)
+static int stm32_fmc2_nfc_read_page(struct nand_chip *chip, u8 *buf,
+				    int oob_required, int page)
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	int ret, i, s, stat, eccsize = chip->ecc.size;
···
 }
 
 /* Sequencer read/write configuration */
-static void stm32_fmc2_rw_page_init(struct nand_chip *chip, int page,
-				    int raw, bool write_data)
+static void stm32_fmc2_nfc_rw_page_init(struct nand_chip *chip, int page,
+					int raw, bool write_data)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	u32 csqcfgr1, csqcfgr2, csqcfgr3;
 	u32 csqar1, csqar2;
 	u32 ecc_offset = mtd->writesize + FMC2_BBM_LEN;
-	u32 pcr = readl_relaxed(fmc2->io_base + FMC2_PCR);
+	u32 pcr = readl_relaxed(nfc->io_base + FMC2_PCR);
 
 	if (write_data)
 		pcr |= FMC2_PCR_WEN;
 	else
 		pcr &= ~FMC2_PCR_WEN;
-	writel_relaxed(pcr, fmc2->io_base + FMC2_PCR);
+	writel_relaxed(pcr, nfc->io_base + FMC2_PCR);
 
 	/*
 	 * - Set Program Page/Page Read command
···
 	 */
 	csqcfgr1 = FMC2_CSQCFGR1_DMADEN | FMC2_CSQCFGR1_CMD1T;
 	if (write_data)
-		csqcfgr1 |= FMC2_CSQCFGR1_CMD1(NAND_CMD_SEQIN);
+		csqcfgr1 |= FIELD_PREP(FMC2_CSQCFGR1_CMD1, NAND_CMD_SEQIN);
 	else
-		csqcfgr1 |= FMC2_CSQCFGR1_CMD1(NAND_CMD_READ0) |
+		csqcfgr1 |= FIELD_PREP(FMC2_CSQCFGR1_CMD1, NAND_CMD_READ0) |
 			    FMC2_CSQCFGR1_CMD2EN |
-			    FMC2_CSQCFGR1_CMD2(NAND_CMD_READSTART) |
+			    FIELD_PREP(FMC2_CSQCFGR1_CMD2, NAND_CMD_READSTART) |
 			    FMC2_CSQCFGR1_CMD2T;
 
 	/*
···
 	 * - Set timings
 	 */
 	if (write_data)
-		csqcfgr2 = FMC2_CSQCFGR2_RCMD1(NAND_CMD_RNDIN);
+		csqcfgr2 = FIELD_PREP(FMC2_CSQCFGR2_RCMD1, NAND_CMD_RNDIN);
 	else
-		csqcfgr2 = FMC2_CSQCFGR2_RCMD1(NAND_CMD_RNDOUT) |
+		csqcfgr2 = FIELD_PREP(FMC2_CSQCFGR2_RCMD1, NAND_CMD_RNDOUT) |
 			   FMC2_CSQCFGR2_RCMD2EN |
-			   FMC2_CSQCFGR2_RCMD2(NAND_CMD_RNDOUTSTART) |
+			   FIELD_PREP(FMC2_CSQCFGR2_RCMD2,
+				      NAND_CMD_RNDOUTSTART) |
 			   FMC2_CSQCFGR2_RCMD1T |
 			   FMC2_CSQCFGR2_RCMD2T;
 	if (!raw) {
···
 	 * - Set the number of sectors to be written
 	 * - Set timings
 	 */
-	csqcfgr3 = FMC2_CSQCFGR3_SNBR(chip->ecc.steps - 1);
+	csqcfgr3 = FIELD_PREP(FMC2_CSQCFGR3_SNBR, chip->ecc.steps - 1);
 	if (write_data) {
 		csqcfgr3 |= FMC2_CSQCFGR3_RAC2T;
 		if (chip->options & NAND_ROW_ADDR_3)
···
 	 * Byte 1 and byte 2 => column, we start at 0x0
 	 * Byte 3 and byte 4 => page
 	 */
-	csqar1 = FMC2_CSQCAR1_ADDC3(page);
-	csqar1 |= FMC2_CSQCAR1_ADDC4(page >> 8);
+	csqar1 = FIELD_PREP(FMC2_CSQCAR1_ADDC3, page);
+	csqar1 |= FIELD_PREP(FMC2_CSQCAR1_ADDC4, page >> 8);
 
 	/*
 	 * - Set chip enable number
···
 	 * - Calculate the number of address cycles to be issued
 	 * - Set byte 5 of address cycle if needed
 	 */
-	csqar2 = FMC2_CSQCAR2_NANDCEN(fmc2->cs_sel);
+	csqar2 = FIELD_PREP(FMC2_CSQCAR2_NANDCEN, nfc->cs_sel);
 	if (chip->options & NAND_BUSWIDTH_16)
-		csqar2 |= FMC2_CSQCAR2_SAO(ecc_offset >> 1);
+		csqar2 |= FIELD_PREP(FMC2_CSQCAR2_SAO, ecc_offset >> 1);
 	else
-		csqar2 |= FMC2_CSQCAR2_SAO(ecc_offset);
+		csqar2 |= FIELD_PREP(FMC2_CSQCAR2_SAO, ecc_offset);
 	if (chip->options & NAND_ROW_ADDR_3) {
-		csqcfgr1 |= FMC2_CSQCFGR1_ACYNBR(5);
-		csqar2 |= FMC2_CSQCAR2_ADDC5(page >> 16);
+		csqcfgr1 |= FIELD_PREP(FMC2_CSQCFGR1_ACYNBR, 5);
+		csqar2 |= FIELD_PREP(FMC2_CSQCAR2_ADDC5, page >> 16);
 	} else {
-		csqcfgr1 |= FMC2_CSQCFGR1_ACYNBR(4);
+		csqcfgr1 |= FIELD_PREP(FMC2_CSQCFGR1_ACYNBR, 4);
 	}
 
-	writel_relaxed(csqcfgr1, fmc2->io_base + FMC2_CSQCFGR1);
-	writel_relaxed(csqcfgr2, fmc2->io_base + FMC2_CSQCFGR2);
-	writel_relaxed(csqcfgr3, fmc2->io_base + FMC2_CSQCFGR3);
-	writel_relaxed(csqar1, fmc2->io_base + FMC2_CSQAR1);
-	writel_relaxed(csqar2, fmc2->io_base + FMC2_CSQAR2);
+	writel_relaxed(csqcfgr1, nfc->io_base + FMC2_CSQCFGR1);
+	writel_relaxed(csqcfgr2, nfc->io_base + FMC2_CSQCFGR2);
+	writel_relaxed(csqcfgr3, nfc->io_base + FMC2_CSQCFGR3);
+	writel_relaxed(csqar1, nfc->io_base + FMC2_CSQAR1);
+	writel_relaxed(csqar2, nfc->io_base + FMC2_CSQAR2);
 }
 
-static void stm32_fmc2_dma_callback(void *arg)
+static void stm32_fmc2_nfc_dma_callback(void *arg)
 {
 	complete((struct completion *)arg);
 }
 
 /* Read/write data from/to a page */
-static int stm32_fmc2_xfer(struct nand_chip *chip, const u8 *buf,
-			   int raw, bool write_data)
+static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+			       int raw, bool write_data)
 {
-	struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);
+	struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller);
 	struct dma_async_tx_descriptor *desc_data, *desc_ecc;
 	struct scatterlist *sg;
-	struct dma_chan *dma_ch = fmc2->dma_rx_ch;
+	struct dma_chan *dma_ch = nfc->dma_rx_ch;
 	enum dma_data_direction dma_data_dir = DMA_FROM_DEVICE;
 	enum dma_transfer_direction dma_transfer_dir = DMA_DEV_TO_MEM;
-	u32 csqcr = readl_relaxed(fmc2->io_base + FMC2_CSQCR);
+	u32 csqcr = readl_relaxed(nfc->io_base + FMC2_CSQCR);
 	int eccsteps = chip->ecc.steps;
 	int eccsize = chip->ecc.size;
+	unsigned long timeout = msecs_to_jiffies(FMC2_TIMEOUT_MS);
 	const u8 *p = buf;
 	int s, ret;
 
···
 	if (write_data) {
 		dma_data_dir = DMA_TO_DEVICE;
 		dma_transfer_dir = DMA_MEM_TO_DEV;
-		dma_ch = fmc2->dma_tx_ch;
+		dma_ch = nfc->dma_tx_ch;
 	}
 
-	for_each_sg(fmc2->dma_data_sg.sgl, sg, eccsteps, s) {
+	for_each_sg(nfc->dma_data_sg.sgl, sg, eccsteps, s) {
 		sg_set_buf(sg, p, eccsize);
 		p += eccsize;
 	}
 
-	ret = dma_map_sg(fmc2->dev, fmc2->dma_data_sg.sgl,
+	ret = dma_map_sg(nfc->dev, nfc->dma_data_sg.sgl,
 			 eccsteps, dma_data_dir);
 	if (ret < 0)
 		return ret;
 
-	desc_data = dmaengine_prep_slave_sg(dma_ch, fmc2->dma_data_sg.sgl,
+	desc_data = dmaengine_prep_slave_sg(dma_ch, nfc->dma_data_sg.sgl,
 					    eccsteps, dma_transfer_dir,
 					    DMA_PREP_INTERRUPT);
 	if (!desc_data) {
···
 		goto err_unmap_data;
 	}
 
-	reinit_completion(&fmc2->dma_data_complete);
-	reinit_completion(&fmc2->complete);
-	desc_data->callback = stm32_fmc2_dma_callback;
-	desc_data->callback_param = &fmc2->dma_data_complete;
+	reinit_completion(&nfc->dma_data_complete);
+	reinit_completion(&nfc->complete);
+	desc_data->callback = stm32_fmc2_nfc_dma_callback;
+	desc_data->callback_param = &nfc->dma_data_complete;
 	ret = dma_submit_error(dmaengine_submit(desc_data));
 	if (ret)
 		goto err_unmap_data;
···
 
 	if (!write_data && !raw) {
 		/* Configure DMA ECC status */
-		p = fmc2->ecc_buf;
-		for_each_sg(fmc2->dma_ecc_sg.sgl, sg, eccsteps, s) {
-			sg_set_buf(sg, p, fmc2->dma_ecc_len);
-			p += fmc2->dma_ecc_len;
+		p = nfc->ecc_buf;
+		for_each_sg(nfc->dma_ecc_sg.sgl, sg, eccsteps, s) {
+			sg_set_buf(sg, p, nfc->dma_ecc_len);
+			p += nfc->dma_ecc_len;
 		}
 
-		ret
= dma_map_sg(fmc2->dev, fmc2->dma_ecc_sg.sgl, 951 + ret = dma_map_sg(nfc->dev, nfc->dma_ecc_sg.sgl, 924 952 eccsteps, dma_data_dir); 925 953 if (ret < 0) 926 954 goto err_unmap_data; 927 955 928 - desc_ecc = dmaengine_prep_slave_sg(fmc2->dma_ecc_ch, 929 - fmc2->dma_ecc_sg.sgl, 956 + desc_ecc = dmaengine_prep_slave_sg(nfc->dma_ecc_ch, 957 + nfc->dma_ecc_sg.sgl, 930 958 eccsteps, dma_transfer_dir, 931 959 DMA_PREP_INTERRUPT); 932 960 if (!desc_ecc) { ··· 934 962 goto err_unmap_ecc; 935 963 } 936 964 937 - reinit_completion(&fmc2->dma_ecc_complete); 938 - desc_ecc->callback = stm32_fmc2_dma_callback; 939 - desc_ecc->callback_param = &fmc2->dma_ecc_complete; 965 + reinit_completion(&nfc->dma_ecc_complete); 966 + desc_ecc->callback = stm32_fmc2_nfc_dma_callback; 967 + desc_ecc->callback_param = &nfc->dma_ecc_complete; 940 968 ret = dma_submit_error(dmaengine_submit(desc_ecc)); 941 969 if (ret) 942 970 goto err_unmap_ecc; 943 971 944 - dma_async_issue_pending(fmc2->dma_ecc_ch); 972 + dma_async_issue_pending(nfc->dma_ecc_ch); 945 973 } 946 974 947 - stm32_fmc2_clear_seq_irq(fmc2); 948 - stm32_fmc2_enable_seq_irq(fmc2); 975 + stm32_fmc2_nfc_clear_seq_irq(nfc); 976 + stm32_fmc2_nfc_enable_seq_irq(nfc); 949 977 950 978 /* Start the transfer */ 951 979 csqcr |= FMC2_CSQCR_CSQSTART; 952 - writel_relaxed(csqcr, fmc2->io_base + FMC2_CSQCR); 980 + writel_relaxed(csqcr, nfc->io_base + FMC2_CSQCR); 953 981 954 982 /* Wait end of sequencer transfer */ 955 - if (!wait_for_completion_timeout(&fmc2->complete, 956 - msecs_to_jiffies(FMC2_TIMEOUT_MS))) { 957 - dev_err(fmc2->dev, "seq timeout\n"); 958 - stm32_fmc2_disable_seq_irq(fmc2); 983 + if (!wait_for_completion_timeout(&nfc->complete, timeout)) { 984 + dev_err(nfc->dev, "seq timeout\n"); 985 + stm32_fmc2_nfc_disable_seq_irq(nfc); 959 986 dmaengine_terminate_all(dma_ch); 960 987 if (!write_data && !raw) 961 - dmaengine_terminate_all(fmc2->dma_ecc_ch); 988 + dmaengine_terminate_all(nfc->dma_ecc_ch); 962 989 ret = -ETIMEDOUT; 963 990 
goto err_unmap_ecc; 964 991 } 965 992 966 993 /* Wait DMA data transfer completion */ 967 - if (!wait_for_completion_timeout(&fmc2->dma_data_complete, 968 - msecs_to_jiffies(FMC2_TIMEOUT_MS))) { 969 - dev_err(fmc2->dev, "data DMA timeout\n"); 994 + if (!wait_for_completion_timeout(&nfc->dma_data_complete, timeout)) { 995 + dev_err(nfc->dev, "data DMA timeout\n"); 970 996 dmaengine_terminate_all(dma_ch); 971 997 ret = -ETIMEDOUT; 972 998 } 973 999 974 1000 /* Wait DMA ECC transfer completion */ 975 1001 if (!write_data && !raw) { 976 - if (!wait_for_completion_timeout(&fmc2->dma_ecc_complete, 977 - msecs_to_jiffies(FMC2_TIMEOUT_MS))) { 978 - dev_err(fmc2->dev, "ECC DMA timeout\n"); 979 - dmaengine_terminate_all(fmc2->dma_ecc_ch); 1002 + if (!wait_for_completion_timeout(&nfc->dma_ecc_complete, 1003 + timeout)) { 1004 + dev_err(nfc->dev, "ECC DMA timeout\n"); 1005 + dmaengine_terminate_all(nfc->dma_ecc_ch); 980 1006 ret = -ETIMEDOUT; 981 1007 } 982 1008 } 983 1009 984 1010 err_unmap_ecc: 985 1011 if (!write_data && !raw) 986 - dma_unmap_sg(fmc2->dev, fmc2->dma_ecc_sg.sgl, 1012 + dma_unmap_sg(nfc->dev, nfc->dma_ecc_sg.sgl, 987 1013 eccsteps, dma_data_dir); 988 1014 989 1015 err_unmap_data: 990 - dma_unmap_sg(fmc2->dev, fmc2->dma_data_sg.sgl, eccsteps, dma_data_dir); 1016 + dma_unmap_sg(nfc->dev, nfc->dma_data_sg.sgl, eccsteps, dma_data_dir); 991 1017 992 1018 return ret; 993 1019 } 994 1020 995 - static int stm32_fmc2_sequencer_write(struct nand_chip *chip, 996 - const u8 *buf, int oob_required, 997 - int page, int raw) 1021 + static int stm32_fmc2_nfc_seq_write(struct nand_chip *chip, const u8 *buf, 1022 + int oob_required, int page, int raw) 998 1023 { 999 1024 struct mtd_info *mtd = nand_to_mtd(chip); 1000 1025 int ret; 1001 1026 1002 1027 /* Configure the sequencer */ 1003 - stm32_fmc2_rw_page_init(chip, page, raw, true); 1028 + stm32_fmc2_nfc_rw_page_init(chip, page, raw, true); 1004 1029 1005 1030 /* Write the page */ 1006 - ret = stm32_fmc2_xfer(chip, buf, raw, 
true); 1031 + ret = stm32_fmc2_nfc_xfer(chip, buf, raw, true); 1007 1032 if (ret) 1008 1033 return ret; 1009 1034 ··· 1016 1047 return nand_prog_page_end_op(chip); 1017 1048 } 1018 1049 1019 - static int stm32_fmc2_sequencer_write_page(struct nand_chip *chip, 1020 - const u8 *buf, 1021 - int oob_required, 1022 - int page) 1050 + static int stm32_fmc2_nfc_seq_write_page(struct nand_chip *chip, const u8 *buf, 1051 + int oob_required, int page) 1023 1052 { 1024 1053 int ret; 1025 1054 1026 - /* Select the target */ 1027 - ret = stm32_fmc2_select_chip(chip, chip->cur_cs); 1055 + ret = stm32_fmc2_nfc_select_chip(chip, chip->cur_cs); 1028 1056 if (ret) 1029 1057 return ret; 1030 1058 1031 - return stm32_fmc2_sequencer_write(chip, buf, oob_required, page, false); 1059 + return stm32_fmc2_nfc_seq_write(chip, buf, oob_required, page, false); 1032 1060 } 1033 1061 1034 - static int stm32_fmc2_sequencer_write_page_raw(struct nand_chip *chip, 1035 - const u8 *buf, 1036 - int oob_required, 1037 - int page) 1062 + static int stm32_fmc2_nfc_seq_write_page_raw(struct nand_chip *chip, 1063 + const u8 *buf, int oob_required, 1064 + int page) 1038 1065 { 1039 1066 int ret; 1040 1067 1041 - /* Select the target */ 1042 - ret = stm32_fmc2_select_chip(chip, chip->cur_cs); 1068 + ret = stm32_fmc2_nfc_select_chip(chip, chip->cur_cs); 1043 1069 if (ret) 1044 1070 return ret; 1045 1071 1046 - return stm32_fmc2_sequencer_write(chip, buf, oob_required, page, true); 1072 + return stm32_fmc2_nfc_seq_write(chip, buf, oob_required, page, true); 1047 1073 } 1048 1074 1049 1075 /* Get a status indicating which sectors have errors */ 1050 - static inline u16 stm32_fmc2_get_mapping_status(struct stm32_fmc2_nfc *fmc2) 1076 + static inline u16 stm32_fmc2_nfc_get_mapping_status(struct stm32_fmc2_nfc *nfc) 1051 1077 { 1052 - u32 csqemsr = readl_relaxed(fmc2->io_base + FMC2_CSQEMSR); 1078 + u32 csqemsr = readl_relaxed(nfc->io_base + FMC2_CSQEMSR); 1053 1079 1054 1080 return csqemsr & FMC2_CSQEMSR_SEM; 
1055 1081 } 1056 1082 1057 - static int stm32_fmc2_sequencer_correct(struct nand_chip *chip, u8 *dat, 1058 - u8 *read_ecc, u8 *calc_ecc) 1083 + static int stm32_fmc2_nfc_seq_correct(struct nand_chip *chip, u8 *dat, 1084 + u8 *read_ecc, u8 *calc_ecc) 1059 1085 { 1060 1086 struct mtd_info *mtd = nand_to_mtd(chip); 1061 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1087 + struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller); 1062 1088 int eccbytes = chip->ecc.bytes; 1063 1089 int eccsteps = chip->ecc.steps; 1064 1090 int eccstrength = chip->ecc.strength; 1065 1091 int i, s, eccsize = chip->ecc.size; 1066 - u32 *ecc_sta = (u32 *)fmc2->ecc_buf; 1067 - u16 sta_map = stm32_fmc2_get_mapping_status(fmc2); 1092 + u32 *ecc_sta = (u32 *)nfc->ecc_buf; 1093 + u16 sta_map = stm32_fmc2_nfc_get_mapping_status(nfc); 1068 1094 unsigned int max_bitflips = 0; 1069 1095 1070 1096 for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, dat += eccsize) { ··· 1068 1104 if (eccstrength == FMC2_ECC_HAM) { 1069 1105 /* Ecc_sta = FMC2_HECCR */ 1070 1106 if (sta_map & BIT(s)) { 1071 - stm32_fmc2_ham_set_ecc(*ecc_sta, &calc_ecc[i]); 1072 - stat = stm32_fmc2_ham_correct(chip, dat, 1073 - &read_ecc[i], 1074 - &calc_ecc[i]); 1107 + stm32_fmc2_nfc_ham_set_ecc(*ecc_sta, 1108 + &calc_ecc[i]); 1109 + stat = stm32_fmc2_nfc_ham_correct(chip, dat, 1110 + &read_ecc[i], 1111 + &calc_ecc[i]); 1075 1112 } 1076 1113 ecc_sta++; 1077 1114 } else { ··· 1084 1119 * Ecc_sta[4] = FMC2_BCHDSR4 1085 1120 */ 1086 1121 if (sta_map & BIT(s)) 1087 - stat = stm32_fmc2_bch_decode(eccsize, dat, 1088 - ecc_sta); 1122 + stat = stm32_fmc2_nfc_bch_decode(eccsize, dat, 1123 + ecc_sta); 1089 1124 ecc_sta += 5; 1090 1125 } 1091 1126 ··· 1108 1143 return max_bitflips; 1109 1144 } 1110 1145 1111 - static int stm32_fmc2_sequencer_read_page(struct nand_chip *chip, u8 *buf, 1112 - int oob_required, int page) 1146 + static int stm32_fmc2_nfc_seq_read_page(struct nand_chip *chip, u8 *buf, 1147 + int oob_required, int 
page) 1113 1148 { 1114 1149 struct mtd_info *mtd = nand_to_mtd(chip); 1115 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1150 + struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller); 1116 1151 u8 *ecc_calc = chip->ecc.calc_buf; 1117 1152 u8 *ecc_code = chip->ecc.code_buf; 1118 1153 u16 sta_map; 1119 1154 int ret; 1120 1155 1121 - /* Select the target */ 1122 - ret = stm32_fmc2_select_chip(chip, chip->cur_cs); 1156 + ret = stm32_fmc2_nfc_select_chip(chip, chip->cur_cs); 1123 1157 if (ret) 1124 1158 return ret; 1125 1159 1126 1160 /* Configure the sequencer */ 1127 - stm32_fmc2_rw_page_init(chip, page, 0, false); 1161 + stm32_fmc2_nfc_rw_page_init(chip, page, 0, false); 1128 1162 1129 1163 /* Read the page */ 1130 - ret = stm32_fmc2_xfer(chip, buf, 0, false); 1164 + ret = stm32_fmc2_nfc_xfer(chip, buf, 0, false); 1131 1165 if (ret) 1132 1166 return ret; 1133 1167 1134 - sta_map = stm32_fmc2_get_mapping_status(fmc2); 1168 + sta_map = stm32_fmc2_nfc_get_mapping_status(nfc); 1135 1169 1136 1170 /* Check if errors happen */ 1137 1171 if (likely(!sta_map)) { ··· 1157 1193 return chip->ecc.correct(chip, buf, ecc_code, ecc_calc); 1158 1194 } 1159 1195 1160 - static int stm32_fmc2_sequencer_read_page_raw(struct nand_chip *chip, u8 *buf, 1161 - int oob_required, int page) 1196 + static int stm32_fmc2_nfc_seq_read_page_raw(struct nand_chip *chip, u8 *buf, 1197 + int oob_required, int page) 1162 1198 { 1163 1199 struct mtd_info *mtd = nand_to_mtd(chip); 1164 1200 int ret; 1165 1201 1166 - /* Select the target */ 1167 - ret = stm32_fmc2_select_chip(chip, chip->cur_cs); 1202 + ret = stm32_fmc2_nfc_select_chip(chip, chip->cur_cs); 1168 1203 if (ret) 1169 1204 return ret; 1170 1205 1171 1206 /* Configure the sequencer */ 1172 - stm32_fmc2_rw_page_init(chip, page, 1, false); 1207 + stm32_fmc2_nfc_rw_page_init(chip, page, 1, false); 1173 1208 1174 1209 /* Read the page */ 1175 - ret = stm32_fmc2_xfer(chip, buf, 1, false); 1210 + ret = stm32_fmc2_nfc_xfer(chip, 
buf, 1, false); 1176 1211 if (ret) 1177 1212 return ret; 1178 1213 ··· 1184 1221 return 0; 1185 1222 } 1186 1223 1187 - static irqreturn_t stm32_fmc2_irq(int irq, void *dev_id) 1224 + static irqreturn_t stm32_fmc2_nfc_irq(int irq, void *dev_id) 1188 1225 { 1189 - struct stm32_fmc2_nfc *fmc2 = (struct stm32_fmc2_nfc *)dev_id; 1226 + struct stm32_fmc2_nfc *nfc = (struct stm32_fmc2_nfc *)dev_id; 1190 1227 1191 - if (fmc2->irq_state == FMC2_IRQ_SEQ) 1228 + if (nfc->irq_state == FMC2_IRQ_SEQ) 1192 1229 /* Sequencer is used */ 1193 - stm32_fmc2_disable_seq_irq(fmc2); 1194 - else if (fmc2->irq_state == FMC2_IRQ_BCH) 1230 + stm32_fmc2_nfc_disable_seq_irq(nfc); 1231 + else if (nfc->irq_state == FMC2_IRQ_BCH) 1195 1232 /* BCH is used */ 1196 - stm32_fmc2_disable_bch_irq(fmc2); 1233 + stm32_fmc2_nfc_disable_bch_irq(nfc); 1197 1234 1198 - complete(&fmc2->complete); 1235 + complete(&nfc->complete); 1199 1236 1200 1237 return IRQ_HANDLED; 1201 1238 } 1202 1239 1203 - static void stm32_fmc2_read_data(struct nand_chip *chip, void *buf, 1204 - unsigned int len, bool force_8bit) 1240 + static void stm32_fmc2_nfc_read_data(struct nand_chip *chip, void *buf, 1241 + unsigned int len, bool force_8bit) 1205 1242 { 1206 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1207 - void __iomem *io_addr_r = fmc2->data_base[fmc2->cs_sel]; 1243 + struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller); 1244 + void __iomem *io_addr_r = nfc->data_base[nfc->cs_sel]; 1208 1245 1209 1246 if (force_8bit && chip->options & NAND_BUSWIDTH_16) 1210 1247 /* Reconfigure bus width to 8-bit */ 1211 - stm32_fmc2_set_buswidth_16(fmc2, false); 1248 + stm32_fmc2_nfc_set_buswidth_16(nfc, false); 1212 1249 1213 1250 if (!IS_ALIGNED((uintptr_t)buf, sizeof(u32))) { 1214 1251 if (!IS_ALIGNED((uintptr_t)buf, sizeof(u16)) && len) { ··· 1244 1281 1245 1282 if (force_8bit && chip->options & NAND_BUSWIDTH_16) 1246 1283 /* Reconfigure bus width to 16-bit */ 1247 - stm32_fmc2_set_buswidth_16(fmc2, true); 
1284 + stm32_fmc2_nfc_set_buswidth_16(nfc, true); 1248 1285 } 1249 1286 1250 - static void stm32_fmc2_write_data(struct nand_chip *chip, const void *buf, 1251 - unsigned int len, bool force_8bit) 1287 + static void stm32_fmc2_nfc_write_data(struct nand_chip *chip, const void *buf, 1288 + unsigned int len, bool force_8bit) 1252 1289 { 1253 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1254 - void __iomem *io_addr_w = fmc2->data_base[fmc2->cs_sel]; 1290 + struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller); 1291 + void __iomem *io_addr_w = nfc->data_base[nfc->cs_sel]; 1255 1292 1256 1293 if (force_8bit && chip->options & NAND_BUSWIDTH_16) 1257 1294 /* Reconfigure bus width to 8-bit */ 1258 - stm32_fmc2_set_buswidth_16(fmc2, false); 1295 + stm32_fmc2_nfc_set_buswidth_16(nfc, false); 1259 1296 1260 1297 if (!IS_ALIGNED((uintptr_t)buf, sizeof(u32))) { 1261 1298 if (!IS_ALIGNED((uintptr_t)buf, sizeof(u16)) && len) { ··· 1291 1328 1292 1329 if (force_8bit && chip->options & NAND_BUSWIDTH_16) 1293 1330 /* Reconfigure bus width to 16-bit */ 1294 - stm32_fmc2_set_buswidth_16(fmc2, true); 1331 + stm32_fmc2_nfc_set_buswidth_16(nfc, true); 1295 1332 } 1296 1333 1297 - static int stm32_fmc2_waitrdy(struct nand_chip *chip, unsigned long timeout_ms) 1334 + static int stm32_fmc2_nfc_waitrdy(struct nand_chip *chip, 1335 + unsigned long timeout_ms) 1298 1336 { 1299 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1337 + struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller); 1300 1338 const struct nand_sdr_timings *timings; 1301 1339 u32 isr, sr; 1302 1340 1303 1341 /* Check if there is no pending requests to the NAND flash */ 1304 - if (readl_relaxed_poll_timeout_atomic(fmc2->io_base + FMC2_SR, sr, 1342 + if (readl_relaxed_poll_timeout_atomic(nfc->io_base + FMC2_SR, sr, 1305 1343 sr & FMC2_SR_NWRF, 1, 1306 - FMC2_TIMEOUT_US)) 1307 - dev_warn(fmc2->dev, "Waitrdy timeout\n"); 1344 + 1000 * FMC2_TIMEOUT_MS)) 1345 + dev_warn(nfc->dev, 
"Waitrdy timeout\n"); 1308 1346 1309 1347 /* Wait tWB before R/B# signal is low */ 1310 1348 timings = nand_get_sdr_timings(&chip->data_interface); 1311 1349 ndelay(PSEC_TO_NSEC(timings->tWB_max)); 1312 1350 1313 1351 /* R/B# signal is low, clear high level flag */ 1314 - writel_relaxed(FMC2_ICR_CIHLF, fmc2->io_base + FMC2_ICR); 1352 + writel_relaxed(FMC2_ICR_CIHLF, nfc->io_base + FMC2_ICR); 1315 1353 1316 1354 /* Wait R/B# signal is high */ 1317 - return readl_relaxed_poll_timeout_atomic(fmc2->io_base + FMC2_ISR, 1355 + return readl_relaxed_poll_timeout_atomic(nfc->io_base + FMC2_ISR, 1318 1356 isr, isr & FMC2_ISR_IHLF, 1319 1357 5, 1000 * timeout_ms); 1320 1358 } 1321 1359 1322 - static int stm32_fmc2_exec_op(struct nand_chip *chip, 1323 - const struct nand_operation *op, 1324 - bool check_only) 1360 + static int stm32_fmc2_nfc_exec_op(struct nand_chip *chip, 1361 + const struct nand_operation *op, 1362 + bool check_only) 1325 1363 { 1326 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1364 + struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller); 1327 1365 const struct nand_op_instr *instr = NULL; 1328 - unsigned int op_id, i; 1366 + unsigned int op_id, i, timeout; 1329 1367 int ret; 1330 1368 1331 - ret = stm32_fmc2_select_chip(chip, op->cs); 1332 - if (ret) 1333 - return ret; 1334 - 1335 1369 if (check_only) 1370 + return 0; 1371 + 1372 + ret = stm32_fmc2_nfc_select_chip(chip, op->cs); 1373 + if (ret) 1336 1374 return ret; 1337 1375 1338 1376 for (op_id = 0; op_id < op->ninstrs; op_id++) { ··· 1342 1378 switch (instr->type) { 1343 1379 case NAND_OP_CMD_INSTR: 1344 1380 writeb_relaxed(instr->ctx.cmd.opcode, 1345 - fmc2->cmd_base[fmc2->cs_sel]); 1381 + nfc->cmd_base[nfc->cs_sel]); 1346 1382 break; 1347 1383 1348 1384 case NAND_OP_ADDR_INSTR: 1349 1385 for (i = 0; i < instr->ctx.addr.naddrs; i++) 1350 1386 writeb_relaxed(instr->ctx.addr.addrs[i], 1351 - fmc2->addr_base[fmc2->cs_sel]); 1387 + nfc->addr_base[nfc->cs_sel]); 1352 1388 break; 1353 
1389 1354 1390 case NAND_OP_DATA_IN_INSTR: 1355 - stm32_fmc2_read_data(chip, instr->ctx.data.buf.in, 1356 - instr->ctx.data.len, 1357 - instr->ctx.data.force_8bit); 1391 + stm32_fmc2_nfc_read_data(chip, instr->ctx.data.buf.in, 1392 + instr->ctx.data.len, 1393 + instr->ctx.data.force_8bit); 1358 1394 break; 1359 1395 1360 1396 case NAND_OP_DATA_OUT_INSTR: 1361 - stm32_fmc2_write_data(chip, instr->ctx.data.buf.out, 1362 - instr->ctx.data.len, 1363 - instr->ctx.data.force_8bit); 1397 + stm32_fmc2_nfc_write_data(chip, instr->ctx.data.buf.out, 1398 + instr->ctx.data.len, 1399 + instr->ctx.data.force_8bit); 1364 1400 break; 1365 1401 1366 1402 case NAND_OP_WAITRDY_INSTR: 1367 - ret = stm32_fmc2_waitrdy(chip, 1368 - instr->ctx.waitrdy.timeout_ms); 1403 + timeout = instr->ctx.waitrdy.timeout_ms; 1404 + ret = stm32_fmc2_nfc_waitrdy(chip, timeout); 1369 1405 break; 1370 1406 } 1371 1407 } ··· 1373 1409 return ret; 1374 1410 } 1375 1411 1376 - /* Controller initialization */ 1377 - static void stm32_fmc2_init(struct stm32_fmc2_nfc *fmc2) 1412 + static void stm32_fmc2_nfc_init(struct stm32_fmc2_nfc *nfc) 1378 1413 { 1379 - u32 pcr = readl_relaxed(fmc2->io_base + FMC2_PCR); 1380 - u32 bcr1 = readl_relaxed(fmc2->io_base + FMC2_BCR1); 1414 + u32 pcr = readl_relaxed(nfc->io_base + FMC2_PCR); 1415 + u32 bcr1 = readl_relaxed(nfc->io_base + FMC2_BCR1); 1381 1416 1382 1417 /* Set CS used to undefined */ 1383 - fmc2->cs_sel = -1; 1418 + nfc->cs_sel = -1; 1384 1419 1385 1420 /* Enable wait feature and nand flash memory bank */ 1386 1421 pcr |= FMC2_PCR_PWAITEN; 1387 1422 pcr |= FMC2_PCR_PBKEN; 1388 1423 1389 1424 /* Set buswidth to 8 bits mode for identification */ 1390 - pcr &= ~FMC2_PCR_PWID_MASK; 1425 + pcr &= ~FMC2_PCR_PWID; 1391 1426 1392 1427 /* ECC logic is disabled */ 1393 1428 pcr &= ~FMC2_PCR_ECCEN; ··· 1397 1434 pcr &= ~FMC2_PCR_WEN; 1398 1435 1399 1436 /* Set default ECC sector size */ 1400 - pcr &= ~FMC2_PCR_ECCSS_MASK; 1401 - pcr |= FMC2_PCR_ECCSS(FMC2_PCR_ECCSS_2048); 
1437 + pcr &= ~FMC2_PCR_ECCSS; 1438 + pcr |= FIELD_PREP(FMC2_PCR_ECCSS, FMC2_PCR_ECCSS_2048); 1402 1439 1403 1440 /* Set default tclr/tar timings */ 1404 - pcr &= ~FMC2_PCR_TCLR_MASK; 1405 - pcr |= FMC2_PCR_TCLR(FMC2_PCR_TCLR_DEFAULT); 1406 - pcr &= ~FMC2_PCR_TAR_MASK; 1407 - pcr |= FMC2_PCR_TAR(FMC2_PCR_TAR_DEFAULT); 1441 + pcr &= ~FMC2_PCR_TCLR; 1442 + pcr |= FIELD_PREP(FMC2_PCR_TCLR, FMC2_PCR_TCLR_DEFAULT); 1443 + pcr &= ~FMC2_PCR_TAR; 1444 + pcr |= FIELD_PREP(FMC2_PCR_TAR, FMC2_PCR_TAR_DEFAULT); 1408 1445 1409 1446 /* Enable FMC2 controller */ 1410 1447 bcr1 |= FMC2_BCR1_FMC2EN; 1411 1448 1412 - writel_relaxed(bcr1, fmc2->io_base + FMC2_BCR1); 1413 - writel_relaxed(pcr, fmc2->io_base + FMC2_PCR); 1414 - writel_relaxed(FMC2_PMEM_DEFAULT, fmc2->io_base + FMC2_PMEM); 1415 - writel_relaxed(FMC2_PATT_DEFAULT, fmc2->io_base + FMC2_PATT); 1449 + writel_relaxed(bcr1, nfc->io_base + FMC2_BCR1); 1450 + writel_relaxed(pcr, nfc->io_base + FMC2_PCR); 1451 + writel_relaxed(FMC2_PMEM_DEFAULT, nfc->io_base + FMC2_PMEM); 1452 + writel_relaxed(FMC2_PATT_DEFAULT, nfc->io_base + FMC2_PATT); 1416 1453 } 1417 1454 1418 - /* Controller timings */ 1419 - static void stm32_fmc2_calc_timings(struct nand_chip *chip, 1420 - const struct nand_sdr_timings *sdrt) 1455 + static void stm32_fmc2_nfc_calc_timings(struct nand_chip *chip, 1456 + const struct nand_sdr_timings *sdrt) 1421 1457 { 1422 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1458 + struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller); 1423 1459 struct stm32_fmc2_nand *nand = to_fmc2_nand(chip); 1424 1460 struct stm32_fmc2_timings *tims = &nand->timings; 1425 - unsigned long hclk = clk_get_rate(fmc2->clk); 1461 + unsigned long hclk = clk_get_rate(nfc->clk); 1426 1462 unsigned long hclkp = NSEC_PER_SEC / (hclk / 1000); 1427 1463 unsigned long timing, tar, tclr, thiz, twait; 1428 1464 unsigned long tset_mem, tset_att, thold_mem, thold_att; ··· 1545 1583 tims->thold_att = clamp_val(timing, 1, 
FMC2_PMEM_PATT_TIMING_MASK); 1546 1584 } 1547 1585 1548 - static int stm32_fmc2_setup_interface(struct nand_chip *chip, int chipnr, 1549 - const struct nand_data_interface *conf) 1586 + static int stm32_fmc2_nfc_setup_interface(struct nand_chip *chip, int chipnr, 1587 + const struct nand_data_interface *conf) 1550 1588 { 1551 1589 const struct nand_sdr_timings *sdrt; 1552 1590 ··· 1557 1595 if (chipnr == NAND_DATA_IFACE_CHECK_ONLY) 1558 1596 return 0; 1559 1597 1560 - stm32_fmc2_calc_timings(chip, sdrt); 1561 - 1562 - /* Apply timings */ 1563 - stm32_fmc2_timings_init(chip); 1598 + stm32_fmc2_nfc_calc_timings(chip, sdrt); 1599 + stm32_fmc2_nfc_timings_init(chip); 1564 1600 1565 1601 return 0; 1566 1602 } 1567 1603 1568 - /* DMA configuration */ 1569 - static int stm32_fmc2_dma_setup(struct stm32_fmc2_nfc *fmc2) 1604 + static int stm32_fmc2_nfc_dma_setup(struct stm32_fmc2_nfc *nfc) 1570 1605 { 1571 1606 int ret = 0; 1572 1607 1573 - fmc2->dma_tx_ch = dma_request_chan(fmc2->dev, "tx"); 1574 - if (IS_ERR(fmc2->dma_tx_ch)) { 1575 - ret = PTR_ERR(fmc2->dma_tx_ch); 1608 + nfc->dma_tx_ch = dma_request_chan(nfc->dev, "tx"); 1609 + if (IS_ERR(nfc->dma_tx_ch)) { 1610 + ret = PTR_ERR(nfc->dma_tx_ch); 1576 1611 if (ret != -ENODEV) 1577 - dev_err(fmc2->dev, 1612 + dev_err(nfc->dev, 1578 1613 "failed to request tx DMA channel: %d\n", ret); 1579 - fmc2->dma_tx_ch = NULL; 1614 + nfc->dma_tx_ch = NULL; 1580 1615 goto err_dma; 1581 1616 } 1582 1617 1583 - fmc2->dma_rx_ch = dma_request_chan(fmc2->dev, "rx"); 1584 - if (IS_ERR(fmc2->dma_rx_ch)) { 1585 - ret = PTR_ERR(fmc2->dma_rx_ch); 1618 + nfc->dma_rx_ch = dma_request_chan(nfc->dev, "rx"); 1619 + if (IS_ERR(nfc->dma_rx_ch)) { 1620 + ret = PTR_ERR(nfc->dma_rx_ch); 1586 1621 if (ret != -ENODEV) 1587 - dev_err(fmc2->dev, 1622 + dev_err(nfc->dev, 1588 1623 "failed to request rx DMA channel: %d\n", ret); 1589 - fmc2->dma_rx_ch = NULL; 1624 + nfc->dma_rx_ch = NULL; 1590 1625 goto err_dma; 1591 1626 } 1592 1627 1593 - fmc2->dma_ecc_ch = 
dma_request_chan(fmc2->dev, "ecc"); 1594 - if (IS_ERR(fmc2->dma_ecc_ch)) { 1595 - ret = PTR_ERR(fmc2->dma_ecc_ch); 1628 + nfc->dma_ecc_ch = dma_request_chan(nfc->dev, "ecc"); 1629 + if (IS_ERR(nfc->dma_ecc_ch)) { 1630 + ret = PTR_ERR(nfc->dma_ecc_ch); 1596 1631 if (ret != -ENODEV) 1597 - dev_err(fmc2->dev, 1632 + dev_err(nfc->dev, 1598 1633 "failed to request ecc DMA channel: %d\n", ret); 1599 - fmc2->dma_ecc_ch = NULL; 1634 + nfc->dma_ecc_ch = NULL; 1600 1635 goto err_dma; 1601 1636 } 1602 1637 1603 - ret = sg_alloc_table(&fmc2->dma_ecc_sg, FMC2_MAX_SG, GFP_KERNEL); 1638 + ret = sg_alloc_table(&nfc->dma_ecc_sg, FMC2_MAX_SG, GFP_KERNEL); 1604 1639 if (ret) 1605 1640 return ret; 1606 1641 1607 1642 /* Allocate a buffer to store ECC status registers */ 1608 - fmc2->ecc_buf = devm_kzalloc(fmc2->dev, FMC2_MAX_ECC_BUF_LEN, 1609 - GFP_KERNEL); 1610 - if (!fmc2->ecc_buf) 1643 + nfc->ecc_buf = devm_kzalloc(nfc->dev, FMC2_MAX_ECC_BUF_LEN, GFP_KERNEL); 1644 + if (!nfc->ecc_buf) 1611 1645 return -ENOMEM; 1612 1646 1613 - ret = sg_alloc_table(&fmc2->dma_data_sg, FMC2_MAX_SG, GFP_KERNEL); 1647 + ret = sg_alloc_table(&nfc->dma_data_sg, FMC2_MAX_SG, GFP_KERNEL); 1614 1648 if (ret) 1615 1649 return ret; 1616 1650 1617 - init_completion(&fmc2->dma_data_complete); 1618 - init_completion(&fmc2->dma_ecc_complete); 1651 + init_completion(&nfc->dma_data_complete); 1652 + init_completion(&nfc->dma_ecc_complete); 1619 1653 1620 1654 return 0; 1621 1655 1622 1656 err_dma: 1623 1657 if (ret == -ENODEV) { 1624 - dev_warn(fmc2->dev, 1658 + dev_warn(nfc->dev, 1625 1659 "DMAs not defined in the DT, polling mode is used\n"); 1626 1660 ret = 0; 1627 1661 } ··· 1625 1667 return ret; 1626 1668 } 1627 1669 1628 - /* NAND callbacks setup */ 1629 - static void stm32_fmc2_nand_callbacks_setup(struct nand_chip *chip) 1670 + static void stm32_fmc2_nfc_nand_callbacks_setup(struct nand_chip *chip) 1630 1671 { 1631 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1672 + struct stm32_fmc2_nfc 
*nfc = to_stm32_nfc(chip->controller); 1632 1673 1633 1674 /* 1634 1675 * Specific callbacks to read/write a page depending on 1635 1676 * the mode (polling/sequencer) and the algo used (Hamming, BCH). 1636 1677 */ 1637 - if (fmc2->dma_tx_ch && fmc2->dma_rx_ch && fmc2->dma_ecc_ch) { 1678 + if (nfc->dma_tx_ch && nfc->dma_rx_ch && nfc->dma_ecc_ch) { 1638 1679 /* DMA => use sequencer mode callbacks */ 1639 - chip->ecc.correct = stm32_fmc2_sequencer_correct; 1640 - chip->ecc.write_page = stm32_fmc2_sequencer_write_page; 1641 - chip->ecc.read_page = stm32_fmc2_sequencer_read_page; 1642 - chip->ecc.write_page_raw = stm32_fmc2_sequencer_write_page_raw; 1643 - chip->ecc.read_page_raw = stm32_fmc2_sequencer_read_page_raw; 1680 + chip->ecc.correct = stm32_fmc2_nfc_seq_correct; 1681 + chip->ecc.write_page = stm32_fmc2_nfc_seq_write_page; 1682 + chip->ecc.read_page = stm32_fmc2_nfc_seq_read_page; 1683 + chip->ecc.write_page_raw = stm32_fmc2_nfc_seq_write_page_raw; 1684 + chip->ecc.read_page_raw = stm32_fmc2_nfc_seq_read_page_raw; 1644 1685 } else { 1645 1686 /* No DMA => use polling mode callbacks */ 1646 - chip->ecc.hwctl = stm32_fmc2_hwctl; 1687 + chip->ecc.hwctl = stm32_fmc2_nfc_hwctl; 1647 1688 if (chip->ecc.strength == FMC2_ECC_HAM) { 1648 1689 /* Hamming is used */ 1649 - chip->ecc.calculate = stm32_fmc2_ham_calculate; 1650 - chip->ecc.correct = stm32_fmc2_ham_correct; 1690 + chip->ecc.calculate = stm32_fmc2_nfc_ham_calculate; 1691 + chip->ecc.correct = stm32_fmc2_nfc_ham_correct; 1651 1692 chip->ecc.options |= NAND_ECC_GENERIC_ERASED_CHECK; 1652 1693 } else { 1653 1694 /* BCH is used */ 1654 - chip->ecc.calculate = stm32_fmc2_bch_calculate; 1655 - chip->ecc.correct = stm32_fmc2_bch_correct; 1656 - chip->ecc.read_page = stm32_fmc2_read_page; 1695 + chip->ecc.calculate = stm32_fmc2_nfc_bch_calculate; 1696 + chip->ecc.correct = stm32_fmc2_nfc_bch_correct; 1697 + chip->ecc.read_page = stm32_fmc2_nfc_read_page; 1657 1698 } 1658 1699 } 1659 1700 ··· 1665 1708 chip->ecc.bytes 
= chip->options & NAND_BUSWIDTH_16 ? 8 : 7; 1666 1709 } 1667 1710 1668 - /* FMC2 layout */ 1669 - static int stm32_fmc2_nand_ooblayout_ecc(struct mtd_info *mtd, int section, 1670 - struct mtd_oob_region *oobregion) 1711 + static int stm32_fmc2_nfc_ooblayout_ecc(struct mtd_info *mtd, int section, 1712 + struct mtd_oob_region *oobregion) 1671 1713 { 1672 1714 struct nand_chip *chip = mtd_to_nand(mtd); 1673 1715 struct nand_ecc_ctrl *ecc = &chip->ecc; ··· 1680 1724 return 0; 1681 1725 } 1682 1726 1683 - static int stm32_fmc2_nand_ooblayout_free(struct mtd_info *mtd, int section, 1684 - struct mtd_oob_region *oobregion) 1727 + static int stm32_fmc2_nfc_ooblayout_free(struct mtd_info *mtd, int section, 1728 + struct mtd_oob_region *oobregion) 1685 1729 { 1686 1730 struct nand_chip *chip = mtd_to_nand(mtd); 1687 1731 struct nand_ecc_ctrl *ecc = &chip->ecc; ··· 1695 1739 return 0; 1696 1740 } 1697 1741 1698 - static const struct mtd_ooblayout_ops stm32_fmc2_nand_ooblayout_ops = { 1699 - .ecc = stm32_fmc2_nand_ooblayout_ecc, 1700 - .free = stm32_fmc2_nand_ooblayout_free, 1742 + static const struct mtd_ooblayout_ops stm32_fmc2_nfc_ooblayout_ops = { 1743 + .ecc = stm32_fmc2_nfc_ooblayout_ecc, 1744 + .free = stm32_fmc2_nfc_ooblayout_free, 1701 1745 }; 1702 1746 1703 - /* FMC2 caps */ 1704 - static int stm32_fmc2_calc_ecc_bytes(int step_size, int strength) 1747 + static int stm32_fmc2_nfc_calc_ecc_bytes(int step_size, int strength) 1705 1748 { 1706 1749 /* Hamming */ 1707 1750 if (strength == FMC2_ECC_HAM) ··· 1714 1759 return 8; 1715 1760 } 1716 1761 1717 - NAND_ECC_CAPS_SINGLE(stm32_fmc2_ecc_caps, stm32_fmc2_calc_ecc_bytes, 1762 + NAND_ECC_CAPS_SINGLE(stm32_fmc2_nfc_ecc_caps, stm32_fmc2_nfc_calc_ecc_bytes, 1718 1763 FMC2_ECC_STEP_SIZE, 1719 1764 FMC2_ECC_HAM, FMC2_ECC_BCH4, FMC2_ECC_BCH8); 1720 1765 1721 - /* FMC2 controller ops */ 1722 - static int stm32_fmc2_attach_chip(struct nand_chip *chip) 1766 + static int stm32_fmc2_nfc_attach_chip(struct nand_chip *chip) 1723 1767 { 
1724 - struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller); 1768 + struct stm32_fmc2_nfc *nfc = to_stm32_nfc(chip->controller); 1725 1769 struct mtd_info *mtd = nand_to_mtd(chip); 1726 1770 int ret; 1727 1771 ··· 1732 1778 * ECC sector size = 512 1733 1779 */ 1734 1780 if (chip->ecc.mode != NAND_ECC_HW) { 1735 - dev_err(fmc2->dev, "nand_ecc_mode is not well defined in the DT\n"); 1781 + dev_err(nfc->dev, "nand_ecc_mode is not well defined in the DT\n"); 1736 1782 return -EINVAL; 1737 1783 } 1738 1784 1739 - ret = nand_ecc_choose_conf(chip, &stm32_fmc2_ecc_caps, 1785 + ret = nand_ecc_choose_conf(chip, &stm32_fmc2_nfc_ecc_caps, 1740 1786 mtd->oobsize - FMC2_BBM_LEN); 1741 1787 if (ret) { 1742 - dev_err(fmc2->dev, "no valid ECC settings set\n"); 1788 + dev_err(nfc->dev, "no valid ECC settings set\n"); 1743 1789 return ret; 1744 1790 } 1745 1791 1746 1792 if (mtd->writesize / chip->ecc.size > FMC2_MAX_SG) { 1747 - dev_err(fmc2->dev, "nand page size is not supported\n"); 1793 + dev_err(nfc->dev, "nand page size is not supported\n"); 1748 1794 return -EINVAL; 1749 1795 } 1750 1796 1751 1797 if (chip->bbt_options & NAND_BBT_USE_FLASH) 1752 1798 chip->bbt_options |= NAND_BBT_NO_OOB; 1753 1799 1754 - /* NAND callbacks setup */ 1755 - stm32_fmc2_nand_callbacks_setup(chip); 1800 + stm32_fmc2_nfc_nand_callbacks_setup(chip); 1756 1801 1757 - /* Define ECC layout */ 1758 - mtd_set_ooblayout(mtd, &stm32_fmc2_nand_ooblayout_ops); 1802 + mtd_set_ooblayout(mtd, &stm32_fmc2_nfc_ooblayout_ops); 1759 1803 1760 - /* Configure bus width to 16-bit */ 1761 1804 if (chip->options & NAND_BUSWIDTH_16) 1762 - stm32_fmc2_set_buswidth_16(fmc2, true); 1805 + stm32_fmc2_nfc_set_buswidth_16(nfc, true); 1763 1806 1764 1807 return 0; 1765 1808 } 1766 1809 1767 - static const struct nand_controller_ops stm32_fmc2_nand_controller_ops = { 1768 - .attach_chip = stm32_fmc2_attach_chip, 1769 - .exec_op = stm32_fmc2_exec_op, 1770 - .setup_data_interface = stm32_fmc2_setup_interface, 1810 + static 
const struct nand_controller_ops stm32_fmc2_nfc_controller_ops = {
+	.attach_chip = stm32_fmc2_nfc_attach_chip,
+	.exec_op = stm32_fmc2_nfc_exec_op,
+	.setup_data_interface = stm32_fmc2_nfc_setup_interface,
 };
 
-/* FMC2 probe */
-static int stm32_fmc2_parse_child(struct stm32_fmc2_nfc *fmc2,
-				  struct device_node *dn)
+static int stm32_fmc2_nfc_parse_child(struct stm32_fmc2_nfc *nfc,
+				      struct device_node *dn)
 {
-	struct stm32_fmc2_nand *nand = &fmc2->nand;
+	struct stm32_fmc2_nand *nand = &nfc->nand;
 	u32 cs;
 	int ret, i;
···
 	nand->ncs /= sizeof(u32);
 	if (!nand->ncs) {
-		dev_err(fmc2->dev, "invalid reg property size\n");
+		dev_err(nfc->dev, "invalid reg property size\n");
 		return -EINVAL;
 	}
 
 	for (i = 0; i < nand->ncs; i++) {
 		ret = of_property_read_u32_index(dn, "reg", i, &cs);
 		if (ret) {
-			dev_err(fmc2->dev, "could not retrieve reg property: %d\n",
+			dev_err(nfc->dev, "could not retrieve reg property: %d\n",
 				ret);
 			return ret;
 		}
 
 		if (cs > FMC2_MAX_CE) {
-			dev_err(fmc2->dev, "invalid reg value: %d\n", cs);
+			dev_err(nfc->dev, "invalid reg value: %d\n", cs);
 			return -EINVAL;
 		}
 
-		if (fmc2->cs_assigned & BIT(cs)) {
-			dev_err(fmc2->dev, "cs already assigned: %d\n", cs);
+		if (nfc->cs_assigned & BIT(cs)) {
+			dev_err(nfc->dev, "cs already assigned: %d\n", cs);
 			return -EINVAL;
 		}
 
-		fmc2->cs_assigned |= BIT(cs);
+		nfc->cs_assigned |= BIT(cs);
 		nand->cs_used[i] = cs;
 	}
···
 	return 0;
 }
 
-static int stm32_fmc2_parse_dt(struct stm32_fmc2_nfc *fmc2)
+static int stm32_fmc2_nfc_parse_dt(struct stm32_fmc2_nfc *nfc)
 {
-	struct device_node *dn = fmc2->dev->of_node;
+	struct device_node *dn = nfc->dev->of_node;
 	struct device_node *child;
 	int nchips = of_get_child_count(dn);
 	int ret = 0;
 
 	if (!nchips) {
-		dev_err(fmc2->dev, "NAND chip not defined\n");
+		dev_err(nfc->dev, "NAND chip not defined\n");
 		return -EINVAL;
 	}
 
 	if (nchips > 1) {
-		dev_err(fmc2->dev, "too many NAND chips defined\n");
+		dev_err(nfc->dev, "too many NAND chips defined\n");
 		return -EINVAL;
 	}
 
 	for_each_child_of_node(dn, child) {
-		ret = stm32_fmc2_parse_child(fmc2, child);
+		ret = stm32_fmc2_nfc_parse_child(nfc, child);
 		if (ret < 0) {
 			of_node_put(child);
 			return ret;
···
 	return ret;
 }
 
-static int stm32_fmc2_probe(struct platform_device *pdev)
+static int stm32_fmc2_nfc_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct reset_control *rstc;
-	struct stm32_fmc2_nfc *fmc2;
+	struct stm32_fmc2_nfc *nfc;
 	struct stm32_fmc2_nand *nand;
 	struct resource *res;
 	struct mtd_info *mtd;
 	struct nand_chip *chip;
 	int chip_cs, mem_region, ret, irq;
 
-	fmc2 = devm_kzalloc(dev, sizeof(*fmc2), GFP_KERNEL);
-	if (!fmc2)
+	nfc = devm_kzalloc(dev, sizeof(*nfc), GFP_KERNEL);
+	if (!nfc)
 		return -ENOMEM;
 
-	fmc2->dev = dev;
-	nand_controller_init(&fmc2->base);
-	fmc2->base.ops = &stm32_fmc2_nand_controller_ops;
+	nfc->dev = dev;
+	nand_controller_init(&nfc->base);
+	nfc->base.ops = &stm32_fmc2_nfc_controller_ops;
 
-	ret = stm32_fmc2_parse_dt(fmc2);
+	ret = stm32_fmc2_nfc_parse_dt(nfc);
 	if (ret)
 		return ret;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	fmc2->io_base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(fmc2->io_base))
-		return PTR_ERR(fmc2->io_base);
+	nfc->io_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(nfc->io_base))
+		return PTR_ERR(nfc->io_base);
 
-	fmc2->io_phys_addr = res->start;
+	nfc->io_phys_addr = res->start;
 
 	for (chip_cs = 0, mem_region = 1; chip_cs < FMC2_MAX_CE;
 	     chip_cs++, mem_region += 3) {
-		if (!(fmc2->cs_assigned & BIT(chip_cs)))
+		if (!(nfc->cs_assigned & BIT(chip_cs)))
 			continue;
 
 		res = platform_get_resource(pdev, IORESOURCE_MEM, mem_region);
-		fmc2->data_base[chip_cs] = devm_ioremap_resource(dev, res);
-		if (IS_ERR(fmc2->data_base[chip_cs]))
-			return PTR_ERR(fmc2->data_base[chip_cs]);
+		nfc->data_base[chip_cs] = devm_ioremap_resource(dev, res);
+		if (IS_ERR(nfc->data_base[chip_cs]))
+			return PTR_ERR(nfc->data_base[chip_cs]);
 
-		fmc2->data_phys_addr[chip_cs] = res->start;
+		nfc->data_phys_addr[chip_cs] = res->start;
 
 		res = platform_get_resource(pdev, IORESOURCE_MEM,
 					    mem_region + 1);
-		fmc2->cmd_base[chip_cs] = devm_ioremap_resource(dev, res);
-		if (IS_ERR(fmc2->cmd_base[chip_cs]))
-			return PTR_ERR(fmc2->cmd_base[chip_cs]);
+		nfc->cmd_base[chip_cs] = devm_ioremap_resource(dev, res);
+		if (IS_ERR(nfc->cmd_base[chip_cs]))
+			return PTR_ERR(nfc->cmd_base[chip_cs]);
 
 		res = platform_get_resource(pdev, IORESOURCE_MEM,
 					    mem_region + 2);
-		fmc2->addr_base[chip_cs] = devm_ioremap_resource(dev, res);
-		if (IS_ERR(fmc2->addr_base[chip_cs]))
-			return PTR_ERR(fmc2->addr_base[chip_cs]);
+		nfc->addr_base[chip_cs] = devm_ioremap_resource(dev, res);
+		if (IS_ERR(nfc->addr_base[chip_cs]))
+			return PTR_ERR(nfc->addr_base[chip_cs]);
 	}
 
 	irq = platform_get_irq(pdev, 0);
 	if (irq < 0)
 		return irq;
 
-	ret = devm_request_irq(dev, irq, stm32_fmc2_irq, 0,
-			       dev_name(dev), fmc2);
+	ret = devm_request_irq(dev, irq, stm32_fmc2_nfc_irq, 0,
+			       dev_name(dev), nfc);
 	if (ret) {
 		dev_err(dev, "failed to request irq\n");
 		return ret;
 	}
 
-	init_completion(&fmc2->complete);
+	init_completion(&nfc->complete);
 
-	fmc2->clk = devm_clk_get(dev, NULL);
-	if (IS_ERR(fmc2->clk))
-		return PTR_ERR(fmc2->clk);
+	nfc->clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(nfc->clk))
+		return PTR_ERR(nfc->clk);
 
-	ret = clk_prepare_enable(fmc2->clk);
+	ret = clk_prepare_enable(nfc->clk);
 	if (ret) {
 		dev_err(dev, "can not enable the clock\n");
 		return ret;
 	}
 
 	rstc = devm_reset_control_get(dev, NULL);
-	if (!IS_ERR(rstc)) {
+	if (IS_ERR(rstc)) {
+		ret = PTR_ERR(rstc);
+		if (ret == -EPROBE_DEFER)
+			goto err_clk_disable;
+	} else {
 		reset_control_assert(rstc);
 		reset_control_deassert(rstc);
 	}
 
-	/* DMA setup */
-	ret = stm32_fmc2_dma_setup(fmc2);
+	ret = stm32_fmc2_nfc_dma_setup(nfc);
 	if (ret)
-		return ret;
+		goto err_release_dma;
 
-	/* FMC2 init routine */
-	stm32_fmc2_init(fmc2);
+	stm32_fmc2_nfc_init(nfc);
 
-	nand = &fmc2->nand;
+	nand = &nfc->nand;
 	chip = &nand->chip;
 	mtd = nand_to_mtd(chip);
 	mtd->dev.parent = dev;
 
-	chip->controller = &fmc2->base;
+	chip->controller = &nfc->base;
 	chip->options |= NAND_BUSWIDTH_AUTO | NAND_NO_SUBPAGE_WRITE |
-			 NAND_USE_BOUNCE_BUFFER;
+			 NAND_USES_DMA;
 
 	/* Default ECC settings */
 	chip->ecc.mode = NAND_ECC_HW;
···
 	/* Scan to find existence of the device */
 	ret = nand_scan(chip, nand->ncs);
 	if (ret)
-		goto err_scan;
+		goto err_release_dma;
 
 	ret = mtd_device_register(mtd, NULL, 0);
 	if (ret)
-		goto err_device_register;
+		goto err_nand_cleanup;
 
-	platform_set_drvdata(pdev, fmc2);
+	platform_set_drvdata(pdev, nfc);
 
 	return 0;
 
-err_device_register:
+err_nand_cleanup:
 	nand_cleanup(chip);
 
-err_scan:
-	if (fmc2->dma_ecc_ch)
-		dma_release_channel(fmc2->dma_ecc_ch);
-	if (fmc2->dma_tx_ch)
-		dma_release_channel(fmc2->dma_tx_ch);
-	if (fmc2->dma_rx_ch)
-		dma_release_channel(fmc2->dma_rx_ch);
+err_release_dma:
+	if (nfc->dma_ecc_ch)
+		dma_release_channel(nfc->dma_ecc_ch);
+	if (nfc->dma_tx_ch)
+		dma_release_channel(nfc->dma_tx_ch);
+	if (nfc->dma_rx_ch)
+		dma_release_channel(nfc->dma_rx_ch);
 
-	sg_free_table(&fmc2->dma_data_sg);
-	sg_free_table(&fmc2->dma_ecc_sg);
+	sg_free_table(&nfc->dma_data_sg);
+	sg_free_table(&nfc->dma_ecc_sg);
 
-	clk_disable_unprepare(fmc2->clk);
+err_clk_disable:
+	clk_disable_unprepare(nfc->clk);
 
 	return ret;
 }
 
-static int stm32_fmc2_remove(struct platform_device *pdev)
+static int stm32_fmc2_nfc_remove(struct platform_device *pdev)
 {
-	struct stm32_fmc2_nfc *fmc2 = platform_get_drvdata(pdev);
-	struct stm32_fmc2_nand *nand = &fmc2->nand;
+	struct stm32_fmc2_nfc *nfc = platform_get_drvdata(pdev);
+	struct stm32_fmc2_nand *nand = &nfc->nand;
+	struct nand_chip *chip = &nand->chip;
+	int ret;
 
-	nand_release(&nand->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 
-	if (fmc2->dma_ecc_ch)
-		dma_release_channel(fmc2->dma_ecc_ch);
-	if (fmc2->dma_tx_ch)
-		dma_release_channel(fmc2->dma_tx_ch);
-	if (fmc2->dma_rx_ch)
-		dma_release_channel(fmc2->dma_rx_ch);
+	if (nfc->dma_ecc_ch)
+		dma_release_channel(nfc->dma_ecc_ch);
+	if (nfc->dma_tx_ch)
+		dma_release_channel(nfc->dma_tx_ch);
+	if (nfc->dma_rx_ch)
+		dma_release_channel(nfc->dma_rx_ch);
 
-	sg_free_table(&fmc2->dma_data_sg);
-	sg_free_table(&fmc2->dma_ecc_sg);
+	sg_free_table(&nfc->dma_data_sg);
+	sg_free_table(&nfc->dma_ecc_sg);
 
-	clk_disable_unprepare(fmc2->clk);
+	clk_disable_unprepare(nfc->clk);
 
 	return 0;
 }
 
-static int __maybe_unused stm32_fmc2_suspend(struct device *dev)
+static int __maybe_unused stm32_fmc2_nfc_suspend(struct device *dev)
 {
-	struct stm32_fmc2_nfc *fmc2 = dev_get_drvdata(dev);
+	struct stm32_fmc2_nfc *nfc = dev_get_drvdata(dev);
 
-	clk_disable_unprepare(fmc2->clk);
+	clk_disable_unprepare(nfc->clk);
 
 	pinctrl_pm_select_sleep_state(dev);
 
 	return 0;
 }
 
-static int __maybe_unused stm32_fmc2_resume(struct device *dev)
+static int __maybe_unused stm32_fmc2_nfc_resume(struct device *dev)
 {
-	struct stm32_fmc2_nfc *fmc2 = dev_get_drvdata(dev);
-	struct stm32_fmc2_nand *nand = &fmc2->nand;
+	struct stm32_fmc2_nfc *nfc = dev_get_drvdata(dev);
+	struct stm32_fmc2_nand *nand = &nfc->nand;
 	int chip_cs, ret;
 
 	pinctrl_pm_select_default_state(dev);
 
-	ret = clk_prepare_enable(fmc2->clk);
+	ret = clk_prepare_enable(nfc->clk);
 	if (ret) {
 		dev_err(dev, "can not enable the clock\n");
 		return ret;
 	}
 
-	stm32_fmc2_init(fmc2);
+	stm32_fmc2_nfc_init(nfc);
 
 	for (chip_cs = 0; chip_cs < FMC2_MAX_CE; chip_cs++) {
-		if (!(fmc2->cs_assigned & BIT(chip_cs)))
+		if (!(nfc->cs_assigned & BIT(chip_cs)))
 			continue;
 
 		nand_reset(&nand->chip, chip_cs);
···
 	return 0;
 }
 
-static SIMPLE_DEV_PM_OPS(stm32_fmc2_pm_ops, stm32_fmc2_suspend,
-			 stm32_fmc2_resume);
+static SIMPLE_DEV_PM_OPS(stm32_fmc2_nfc_pm_ops, stm32_fmc2_nfc_suspend,
+			 stm32_fmc2_nfc_resume);
 
-static const struct of_device_id stm32_fmc2_match[] = {
+static const struct of_device_id stm32_fmc2_nfc_match[] = {
 	{.compatible = "st,stm32mp15-fmc2"},
 	{}
 };
-MODULE_DEVICE_TABLE(of, stm32_fmc2_match);
+MODULE_DEVICE_TABLE(of, stm32_fmc2_nfc_match);
 
-static struct platform_driver stm32_fmc2_driver = {
-	.probe = stm32_fmc2_probe,
-	.remove = stm32_fmc2_remove,
+static struct platform_driver stm32_fmc2_nfc_driver = {
+	.probe = stm32_fmc2_nfc_probe,
+	.remove = stm32_fmc2_nfc_remove,
 	.driver = {
-		.name = "stm32_fmc2_nand",
-		.of_match_table = stm32_fmc2_match,
-		.pm = &stm32_fmc2_pm_ops,
+		.name = "stm32_fmc2_nfc",
+		.of_match_table = stm32_fmc2_nfc_match,
+		.pm = &stm32_fmc2_nfc_pm_ops,
 	},
 };
-module_platform_driver(stm32_fmc2_driver);
+module_platform_driver(stm32_fmc2_nfc_driver);
 
-MODULE_ALIAS("platform:stm32_fmc2_nand");
+MODULE_ALIAS("platform:stm32_fmc2_nfc");
 MODULE_AUTHOR("Christophe Kerello <christophe.kerello@st.com>");
-MODULE_DESCRIPTION("STMicroelectronics STM32 FMC2 nand driver");
+MODULE_DESCRIPTION("STMicroelectronics STM32 FMC2 NFC driver");
 MODULE_LICENSE("GPL v2");
+11 -5
drivers/mtd/nand/raw/sunxi_nand.c
···
 		ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma;
 		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma;
 		ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma;
-		nand->options |= NAND_USE_BOUNCE_BUFFER;
+		nand->options |= NAND_USES_DMA;
 	} else {
 		ecc->read_page = sunxi_nfc_hw_ecc_read_page;
 		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
···
 	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
 	const struct nand_op_parser *parser;
 
-	sunxi_nfc_select_chip(nand, op->cs);
+	if (!check_only)
+		sunxi_nfc_select_chip(nand, op->cs);
 
 	if (sunxi_nand->sels[op->cs].rb >= 0)
 		parser = &sunxi_nfc_op_parser;
···
 	ret = mtd_device_register(mtd, NULL, 0);
 	if (ret) {
 		dev_err(dev, "failed to register mtd device: %d\n", ret);
-		nand_release(nand);
+		nand_cleanup(nand);
 		return ret;
 	}
···
 static void sunxi_nand_chips_cleanup(struct sunxi_nfc *nfc)
 {
 	struct sunxi_nand_chip *sunxi_nand;
+	struct nand_chip *chip;
+	int ret;
 
 	while (!list_empty(&nfc->chips)) {
 		sunxi_nand = list_first_entry(&nfc->chips,
 					      struct sunxi_nand_chip,
 					      node);
-		nand_release(&sunxi_nand->nand);
-		sunxi_nand_ecc_cleanup(&sunxi_nand->nand.ecc);
+		chip = &sunxi_nand->nand;
+		ret = mtd_device_unregister(nand_to_mtd(chip));
+		WARN_ON(ret);
+		nand_cleanup(chip);
+		sunxi_nand_ecc_cleanup(&chip->ecc);
 		list_del(&sunxi_nand->node);
 	}
 }
+9 -4
drivers/mtd/nand/raw/tango_nand.c
···
 	chip->legacy.select_chip = tango_select_chip;
 	chip->legacy.cmd_ctrl = tango_cmd_ctrl;
 	chip->legacy.dev_ready = tango_dev_ready;
-	chip->options = NAND_USE_BOUNCE_BUFFER |
+	chip->options = NAND_USES_DMA |
 			NAND_NO_SUBPAGE_WRITE |
 			NAND_WAIT_TCCS;
 	chip->controller = &nfc->hw;
···
 
 static int tango_nand_remove(struct platform_device *pdev)
 {
-	int cs;
 	struct tango_nfc *nfc = platform_get_drvdata(pdev);
+	struct nand_chip *chip;
+	int cs, ret;
 
 	dma_release_channel(nfc->chan);
 
 	for (cs = 0; cs < MAX_CS; ++cs) {
-		if (nfc->chips[cs])
-			nand_release(&nfc->chips[cs]->nand_chip);
+		if (nfc->chips[cs]) {
+			chip = &nfc->chips[cs]->nand_chip;
+			ret = mtd_device_unregister(nand_to_mtd(chip));
+			WARN_ON(ret);
+			nand_cleanup(chip);
+		}
 	}
 
 	return 0;
+4 -2
drivers/mtd/nand/raw/tegra_nand.c
···
 			       const struct nand_operation *op,
 			       bool check_only)
 {
-	tegra_nand_select_target(chip, op->cs);
+	if (!check_only)
+		tegra_nand_select_target(chip, op->cs);
+
 	return nand_op_parser_exec_op(chip, &tegra_nand_op_parser, op,
 				      check_only);
 }
···
 	if (!mtd->name)
 		mtd->name = "tegra_nand";
 
-	chip->options = NAND_NO_SUBPAGE_WRITE | NAND_USE_BOUNCE_BUFFER;
+	chip->options = NAND_NO_SUBPAGE_WRITE | NAND_USES_DMA;
 
 	ret = nand_scan(chip, 1);
 	if (ret)
+6 -2
drivers/mtd/nand/raw/tmio_nand.c
···
 	if (!retval)
 		return retval;
 
-	nand_release(nand_chip);
+	nand_cleanup(nand_chip);
 
 err_irq:
 	tmio_hw_stop(dev, tmio);
···
 static int tmio_remove(struct platform_device *dev)
 {
 	struct tmio_nand *tmio = platform_get_drvdata(dev);
+	struct nand_chip *chip = &tmio->chip;
+	int ret;
 
-	nand_release(&tmio->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	tmio_hw_stop(dev, tmio);
 	return 0;
 }
+4 -2
drivers/mtd/nand/raw/txx9ndfmc.c
···
 static int __exit txx9ndfmc_remove(struct platform_device *dev)
 {
 	struct txx9ndfmc_drvdata *drvdata = platform_get_drvdata(dev);
-	int i;
+	int ret, i;
 
 	if (!drvdata)
 		return 0;
···
 		chip = mtd_to_nand(mtd);
 		txx9_priv = nand_get_controller_data(chip);
 
-		nand_release(chip);
+		ret = mtd_device_unregister(nand_to_mtd(chip));
+		WARN_ON(ret);
+		nand_cleanup(chip);
 		kfree(txx9_priv->mtdname);
 		kfree(txx9_priv);
 	}
+8 -2
drivers/mtd/nand/raw/vf610_nfc.c
···
 			     const struct nand_operation *op,
 			     bool check_only)
 {
-	vf610_nfc_select_target(chip, op->cs);
+	if (!check_only)
+		vf610_nfc_select_target(chip, op->cs);
+
 	return nand_op_parser_exec_op(chip, &vf610_nfc_op_parser, op,
 				      check_only);
 }
···
 static int vf610_nfc_remove(struct platform_device *pdev)
 {
 	struct vf610_nfc *nfc = platform_get_drvdata(pdev);
+	struct nand_chip *chip = &nfc->chip;
+	int ret;
 
-	nand_release(&nfc->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 	clk_disable_unprepare(nfc->clk);
 	return 0;
 }
+6 -2
drivers/mtd/nand/raw/xway_nand.c
···
 
 	err = mtd_device_register(mtd, NULL, 0);
 	if (err)
-		nand_release(&data->chip);
+		nand_cleanup(&data->chip);
 
 	return err;
 }
···
 static int xway_nand_remove(struct platform_device *pdev)
 {
 	struct xway_nand_data *data = platform_get_drvdata(pdev);
+	struct nand_chip *chip = &data->chip;
+	int ret;
 
-	nand_release(&data->chip);
+	ret = mtd_device_unregister(nand_to_mtd(chip));
+	WARN_ON(ret);
+	nand_cleanup(chip);
 
 	return 0;
 }
+10 -2
drivers/mtd/parsers/cmdlinepart.c
···
  *
  * mtdparts=<mtddef>[;<mtddef]
  * <mtddef>  := <mtd-id>:<partdef>[,<partdef>]
- * <partdef> := <size>[@<offset>][<name>][ro][lk]
+ * <partdef> := <size>[@<offset>][<name>][ro][lk][slc]
  * <mtd-id>  := unique name used in mapping driver/device (mtd->name)
  * <size>    := standard linux memsize OR "-" to denote all remaining space
  *              size is automatically truncated at end of device
···
 	int name_len;
 	unsigned char *extra_mem;
 	char delim;
-	unsigned int mask_flags;
+	unsigned int mask_flags, add_flags;
 
 	/* fetch the partition size */
 	if (*s == '-') {
···
 
 	/* fetch partition name and flags */
 	mask_flags = 0; /* this is going to be a regular partition */
+	add_flags = 0;
 	delim = 0;
 
 	/* check for offset */
···
 		s += 2;
 	}
 
+	/* if slc is found use emulated SLC mode on this partition*/
+	if (!strncmp(s, "slc", 3)) {
+		add_flags |= MTD_SLC_ON_MLC_EMULATION;
+		s += 3;
+	}
+
 	/* test if more partitions are following */
 	if (*s == ',') {
 		if (size == SIZE_REMAINING) {
···
 	parts[this_part].size = size;
 	parts[this_part].offset = offset;
 	parts[this_part].mask_flags = mask_flags;
+	parts[this_part].add_flags = add_flags;
 	if (name)
 		strlcpy(extra_mem, name, name_len + 1);
 	else
+3
drivers/mtd/parsers/ofpart.c
···
 		if (of_get_property(pp, "lock", &len))
 			parts[i].mask_flags |= MTD_POWERUP_LOCK;
 
+		if (of_property_read_bool(pp, "slc-mode"))
+			parts[i].add_flags |= MTD_SLC_ON_MLC_EMULATION;
+
 		i++;
 	}
+4 -1
drivers/mtd/ubi/build.c
···
 	 * Both UBI and UBIFS have been designed for SLC NAND and NOR flashes.
 	 * MLC NAND is different and needs special care, otherwise UBI or UBIFS
 	 * will die soon and you will lose all your data.
+	 * Relax this rule if the partition we're attaching to operates in SLC
+	 * mode.
 	 */
-	if (mtd->type == MTD_MLCNANDFLASH) {
+	if (mtd->type == MTD_MLCNANDFLASH &&
+	    !(mtd->flags & MTD_SLC_ON_MLC_EMULATION)) {
 		pr_err("ubi: refuse attaching mtd%d - MLC NAND is not supported\n",
 		       mtd->index);
 		return -EINVAL;
+7 -4
include/linux/bch.h
···
  * @cache:   log-based polynomial representation buffer
  * @elp:     error locator polynomial
  * @poly_2t: temporary polynomials of degree 2t
+ * @swap_bits: swap bits within data and syndrome bytes
  */
 struct bch_control {
 	unsigned int    m;
···
 	int            *cache;
 	struct gf_poly *elp;
 	struct gf_poly *poly_2t[4];
+	bool		swap_bits;
 };
 
-struct bch_control *init_bch(int m, int t, unsigned int prim_poly);
+struct bch_control *bch_init(int m, int t, unsigned int prim_poly,
+			     bool swap_bits);
 
-void free_bch(struct bch_control *bch);
+void bch_free(struct bch_control *bch);
 
-void encode_bch(struct bch_control *bch, const uint8_t *data,
+void bch_encode(struct bch_control *bch, const uint8_t *data,
 		unsigned int len, uint8_t *ecc);
 
-int decode_bch(struct bch_control *bch, const uint8_t *data, unsigned int len,
+int bch_decode(struct bch_control *bch, const uint8_t *data, unsigned int len,
 	       const uint8_t *recv_ecc, const uint8_t *calc_ecc,
 	       const unsigned int *syn, unsigned int *errloc);
+1 -1
include/linux/mtd/bbm.h
···
 
 /*
  * Flag set by nand_create_default_bbt_descr(), marking that the nand_bbt_descr
- * was allocated dynamicaly and must be freed in nand_release(). Has no meaning
+ * was allocated dynamicaly and must be freed in nand_cleanup(). Has no meaning
  * in nand_chip.bbt_options.
  */
 #define NAND_BBT_DYNAMICSTRUCT	0x80000000
+6 -1
include/linux/mtd/mtd.h
···
  *
  * @node: list node used to add an MTD partition to the parent partition list
  * @offset: offset of the partition relatively to the parent offset
+ * @size: partition size. Should be equal to mtd->size unless
+ *	  MTD_SLC_ON_MLC_EMULATION is set
  * @flags: original flags (before the mtdpart logic decided to tweak them based
  *	   on flash constraints, like eraseblock/pagesize alignment)
  *
···
 struct mtd_part {
 	struct list_head node;
 	u64 offset;
+	u64 size;
 	u32 flags;
 };
 
···
 
 static inline int mtd_wunit_per_eb(struct mtd_info *mtd)
 {
-	return mtd->erasesize / mtd->writesize;
+	struct mtd_info *master = mtd_get_master(mtd);
+
+	return master->erasesize / mtd->writesize;
 }
 
 static inline int mtd_offset_to_wunit(struct mtd_info *mtd, loff_t offs)
+2
include/linux/mtd/partitions.h
···
  *	master MTD flag set for the corresponding MTD partition.
  *	For example, to force a read-only partition, simply adding
  *	MTD_WRITEABLE to the mask_flags will do the trick.
+ * add_flags: contains flags to add to the parent flags
  *
  * Note: writeable partitions require their size and offset be
  * erasesize aligned (e.g. use MTDPART_OFS_NEXTBLK).
···
 	uint64_t size;			/* partition size */
 	uint64_t offset;		/* offset within the master MTD space */
 	uint32_t mask_flags;		/* master MTD flags to mask out for this partition */
+	uint32_t add_flags;		/* flags to add to the partition */
 	struct device_node *of_node;
 };
+77 -54
include/linux/mtd/rawnand.h
···
 /*
  * Constants for ECC_MODES
  */
-typedef enum {
+enum nand_ecc_mode {
+	NAND_ECC_INVALID,
 	NAND_ECC_NONE,
 	NAND_ECC_SOFT,
 	NAND_ECC_HW,
 	NAND_ECC_HW_SYNDROME,
-	NAND_ECC_HW_OOB_FIRST,
 	NAND_ECC_ON_DIE,
-} nand_ecc_modes_t;
+};
 
 enum nand_ecc_algo {
 	NAND_ECC_UNKNOWN,
···
 #define NAND_ECC_MAXIMIZE	BIT(1)
 
 /*
+ * Option constants for bizarre disfunctionality and real
+ * features.
+ */
+
+/* Buswidth is 16 bit */
+#define NAND_BUSWIDTH_16	BIT(1)
+
+/*
  * When using software implementation of Hamming, we can specify which byte
  * ordering should be used.
  */
 #define NAND_ECC_SOFT_HAMMING_SM_ORDER	BIT(2)
 
-/*
- * Option constants for bizarre disfunctionality and real
- * features.
- */
-/* Buswidth is 16 bit */
-#define NAND_BUSWIDTH_16	0x00000002
 /* Chip has cache program function */
-#define NAND_CACHEPRG		0x00000008
+#define NAND_CACHEPRG		BIT(3)
+/* Options valid for Samsung large page devices */
+#define NAND_SAMSUNG_LP_OPTIONS	NAND_CACHEPRG
+
 /*
  * Chip requires ready check on read (for auto-incremented sequential read).
  * True only for small page devices; large page devices do not support
  * autoincrement.
  */
-#define NAND_NEED_READRDY	0x00000100
+#define NAND_NEED_READRDY	BIT(8)
 
 /* Chip does not allow subpage writes */
-#define NAND_NO_SUBPAGE_WRITE	0x00000200
+#define NAND_NO_SUBPAGE_WRITE	BIT(9)
 
 /* Device is one of 'new' xD cards that expose fake nand command set */
-#define NAND_BROKEN_XD		0x00000400
+#define NAND_BROKEN_XD		BIT(10)
 
 /* Device behaves just like nand, but is readonly */
-#define NAND_ROM		0x00000800
+#define NAND_ROM		BIT(11)
 
 /* Device supports subpage reads */
-#define NAND_SUBPAGE_READ	0x00001000
+#define NAND_SUBPAGE_READ	BIT(12)
+/* Macros to identify the above */
+#define NAND_HAS_SUBPAGE_READ(chip) ((chip->options & NAND_SUBPAGE_READ))
 
 /*
  * Some MLC NANDs need data scrambling to limit bitflips caused by repeated
  * patterns.
  */
-#define NAND_NEED_SCRAMBLING	0x00002000
+#define NAND_NEED_SCRAMBLING	BIT(13)
 
 /* Device needs 3rd row address cycle */
-#define NAND_ROW_ADDR_3		0x00004000
-
-/* Options valid for Samsung large page devices */
-#define NAND_SAMSUNG_LP_OPTIONS NAND_CACHEPRG
-
-/* Macros to identify the above */
-#define NAND_HAS_SUBPAGE_READ(chip) ((chip->options & NAND_SUBPAGE_READ))
-
-/*
- * There are different places where the manufacturer stores the factory bad
- * block markers.
- *
- * Position within the block: Each of these pages needs to be checked for a
- * bad block marking pattern.
- */
-#define NAND_BBM_FIRSTPAGE	0x01000000
-#define NAND_BBM_SECONDPAGE	0x02000000
-#define NAND_BBM_LASTPAGE	0x04000000
-
-/* Position within the OOB data of the page */
-#define NAND_BBM_POS_SMALL	5
-#define NAND_BBM_POS_LARGE	0
+#define NAND_ROW_ADDR_3		BIT(14)
 
 /* Non chip related options */
 /* This option skips the bbt scan during initialization. */
-#define NAND_SKIP_BBTSCAN	0x00010000
+#define NAND_SKIP_BBTSCAN	BIT(16)
 /* Chip may not exist, so silence any errors in scan */
-#define NAND_SCAN_SILENT_NODEV	0x00040000
+#define NAND_SCAN_SILENT_NODEV	BIT(18)
+
 /*
  * Autodetect nand buswidth with readid/onfi.
  * This suppose the driver will configure the hardware in 8 bits mode
  * when calling nand_scan_ident, and update its configuration
  * before calling nand_scan_tail.
  */
-#define NAND_BUSWIDTH_AUTO	0x00080000
+#define NAND_BUSWIDTH_AUTO	BIT(19)
+
 /*
  * This option could be defined by controller drivers to protect against
  * kmap'ed, vmalloc'ed highmem buffers being passed from upper layers
  */
-#define NAND_USE_BOUNCE_BUFFER	0x00100000
+#define NAND_USES_DMA		BIT(20)
 
 /*
  * In case your controller is implementing ->legacy.cmd_ctrl() and is relying
···
  * If your controller already takes care of this delay, you don't need to set
  * this flag.
  */
-#define NAND_WAIT_TCCS		0x00200000
+#define NAND_WAIT_TCCS		BIT(21)
 
 /*
  * Whether the NAND chip is a boot medium. Drivers might use this information
  * to select ECC algorithms supported by the boot ROM or similar restrictions.
  */
-#define NAND_IS_BOOT_MEDIUM	0x00400000
+#define NAND_IS_BOOT_MEDIUM	BIT(22)
 
 /*
  * Do not try to tweak the timings at runtime. This is needed when the
  * controller initializes the timings on itself or when it relies on
  * configuration done by the bootloader.
  */
-#define NAND_KEEP_TIMINGS	0x00800000
+#define NAND_KEEP_TIMINGS	BIT(23)
+
+/*
+ * There are different places where the manufacturer stores the factory bad
+ * block markers.
+ *
+ * Position within the block: Each of these pages needs to be checked for a
+ * bad block marking pattern.
+ */
+#define NAND_BBM_FIRSTPAGE	BIT(24)
+#define NAND_BBM_SECONDPAGE	BIT(25)
+#define NAND_BBM_LASTPAGE	BIT(26)
+
+/*
+ * Some controllers with pipelined ECC engines override the BBM marker with
+ * data or ECC bytes, thus making bad block detection through bad block marker
+ * impossible. Let's flag those chips so the core knows it shouldn't check the
+ * BBM and consider all blocks good.
+ */
+#define NAND_NO_BBM_QUIRK	BIT(27)
 
 /* Cell info constants */
 #define NAND_CI_CHIPNR_MSK	0x03
 #define NAND_CI_CELLTYPE_MSK	0x0C
 #define NAND_CI_CELLTYPE_SHIFT	2
+
+/* Position within the OOB data of the page */
+#define NAND_BBM_POS_SMALL	5
+#define NAND_BBM_POS_LARGE	0
 
 /**
  * struct nand_parameters - NAND generic parameters from the parameter page
···
  * @write_oob:	function to write chip OOB data
  */
 struct nand_ecc_ctrl {
-	nand_ecc_modes_t mode;
+	enum nand_ecc_mode mode;
 	enum nand_ecc_algo algo;
 	int steps;
 	int size;
···
 /**
  * struct nand_data_interface - NAND interface timing
  * @type:	 type of the timing
- * @timings:	 The timing, type according to @type
+ * @timings:	 The timing information
+ * @timings.mode: Timing mode as defined in the specification
  * @timings.sdr: Use it when @type is %NAND_SDR_IFACE.
  */
 struct nand_data_interface {
 	enum nand_data_interface_type type;
-	union {
-		struct nand_sdr_timings sdr;
+	struct nand_timings {
+		unsigned int mode;
+		union {
+			struct nand_sdr_timings sdr;
+		};
 	} timings;
 };
···
 
 /**
  * struct nand_subop - a sub operation
+ * @cs: the CS line to select for this NAND sub-operation
  * @instrs: array of instructions
  * @ninstrs: length of the @instrs array
  * @first_instr_start_off: offset to start from for the first instruction
···
  * controller driver.
  */
 struct nand_subop {
+	unsigned int cs;
 	const struct nand_op_instr *instrs;
 	unsigned int ninstrs;
 	unsigned int first_instr_start_off;
···
 int nand_get_set_features_notsupp(struct nand_chip *chip, int addr,
 				  u8 *subfeature_param);
 
-/* Default read_page_raw implementation */
+/* read_page_raw implementations */
 int nand_read_page_raw(struct nand_chip *chip, uint8_t *buf, int oob_required,
 		       int page);
+int nand_monolithic_read_page_raw(struct nand_chip *chip, uint8_t *buf,
+				  int oob_required, int page);
 
-/* Default write_page_raw implementation */
+/* write_page_raw implementations */
 int nand_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
 			int oob_required, int page);
+int nand_monolithic_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
+				   int oob_required, int page);
 
 /* Reset and initialize a NAND device */
 int nand_reset(struct nand_chip *chip, int chipnr);
···
 		       unsigned int offset_in_page, const void *buf,
 		       unsigned int len, bool force_8bit);
 int nand_read_data_op(struct nand_chip *chip, void *buf, unsigned int len,
-		      bool force_8bit);
+		      bool force_8bit, bool check_only);
 int nand_write_data_op(struct nand_chip *chip, const void *buf,
 		       unsigned int len, bool force_8bit);
···
  * sucessful nand_scan().
  */
 void nand_cleanup(struct nand_chip *chip);
-/* Unregister the MTD device and calls nand_cleanup() */
-void nand_release(struct nand_chip *chip);
 
 /*
  * External helper for controller drivers that have to implement the WAITRDY
···
 /* Select/deselect a NAND target. */
 void nand_select_target(struct nand_chip *chip, unsigned int cs);
 void nand_deselect_target(struct nand_chip *chip);
+
+/* Bitops */
+void nand_extract_bits(u8 *dst, unsigned int dst_off, const u8 *src,
+		       unsigned int src_off, unsigned int nbits);
 
 /**
  * nand_get_data_buf() - Get the internal page buffer
+1 -1
include/linux/platform_data/mtd-davinci.h
···
 	 * Newer ones also support 4-bit ECC, but are awkward
 	 * using it with large page chips.
 	 */
-	nand_ecc_modes_t	ecc_mode;
+	enum nand_ecc_mode	ecc_mode;
 	u8			ecc_bits;
 
 	/* e.g. NAND_BUSWIDTH_16 */
+1 -1
include/linux/platform_data/mtd-nand-s3c2410.h
···
 
 	unsigned int	ignore_unset_ecc:1;
 
-	nand_ecc_modes_t	ecc_mode;
+	enum nand_ecc_mode	ecc_mode;
 
 	int			nr_sets;
 	struct s3c2410_nand_set *sets;
+1
include/uapi/mtd/mtd-abi.h
···
 #define MTD_BIT_WRITEABLE	0x800	/* Single bits can be flipped */
 #define MTD_NO_ERASE		0x1000	/* No erase necessary */
 #define MTD_POWERUP_LOCK	0x2000	/* Always locked after reset */
+#define MTD_SLC_ON_MLC_EMULATION 0x4000	/* Emulate SLC behavior on MLC NANDs */
 
 /* Some common devices / combinations of capabilities */
 #define MTD_CAP_ROM		0
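MTD_SLC_ON_MLC_EMULATION takes the next free bit (0x4000) in the existing capability bitmask. Since every capability occupies its own power-of-two bit, flags combine with `|` and are queried with `&`. A tiny sketch; `mtd_emulates_slc` is a hypothetical helper name, not an MTD API:

```c
#include <assert.h>

/* Flag values copied from the mtd-abi.h hunk above. */
#define MTD_BIT_WRITEABLE	0x800
#define MTD_NO_ERASE		0x1000
#define MTD_POWERUP_LOCK	0x2000
#define MTD_SLC_ON_MLC_EMULATION 0x4000

/* Hypothetical helper: nonzero when SLC emulation is requested. */
static int mtd_emulates_slc(unsigned int flags)
{
	return (flags & MTD_SLC_ON_MLC_EMULATION) != 0;
}
```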
+107 -45
lib/bch.c
···
  * This library provides runtime configurable encoding/decoding of binary
  * Bose-Chaudhuri-Hocquenghem (BCH) codes.
  *
- * Call init_bch to get a pointer to a newly allocated bch_control structure for
+ * Call bch_init to get a pointer to a newly allocated bch_control structure for
  * the given m (Galois field order), t (error correction capability) and
  * (optional) primitive polynomial parameters.
  *
- * Call encode_bch to compute and store ecc parity bytes to a given buffer.
- * Call decode_bch to detect and locate errors in received data.
+ * Call bch_encode to compute and store ecc parity bytes to a given buffer.
+ * Call bch_decode to detect and locate errors in received data.
  *
  * On systems supporting hw BCH features, intermediate results may be provided
- * to decode_bch in order to skip certain steps. See decode_bch() documentation
+ * to bch_decode in order to skip certain steps. See bch_decode() documentation
  * for details.
  *
  * Option CONFIG_BCH_CONST_PARAMS can be used to force fixed values of
···
 	unsigned int c[2];
 };
 
+static u8 swap_bits_table[] = {
+	0x00, 0x80, 0x40, 0xc0, 0x20, 0xa0, 0x60, 0xe0,
+	0x10, 0x90, 0x50, 0xd0, 0x30, 0xb0, 0x70, 0xf0,
+	0x08, 0x88, 0x48, 0xc8, 0x28, 0xa8, 0x68, 0xe8,
+	0x18, 0x98, 0x58, 0xd8, 0x38, 0xb8, 0x78, 0xf8,
+	0x04, 0x84, 0x44, 0xc4, 0x24, 0xa4, 0x64, 0xe4,
+	0x14, 0x94, 0x54, 0xd4, 0x34, 0xb4, 0x74, 0xf4,
+	0x0c, 0x8c, 0x4c, 0xcc, 0x2c, 0xac, 0x6c, 0xec,
+	0x1c, 0x9c, 0x5c, 0xdc, 0x3c, 0xbc, 0x7c, 0xfc,
+	0x02, 0x82, 0x42, 0xc2, 0x22, 0xa2, 0x62, 0xe2,
+	0x12, 0x92, 0x52, 0xd2, 0x32, 0xb2, 0x72, 0xf2,
+	0x0a, 0x8a, 0x4a, 0xca, 0x2a, 0xaa, 0x6a, 0xea,
+	0x1a, 0x9a, 0x5a, 0xda, 0x3a, 0xba, 0x7a, 0xfa,
+	0x06, 0x86, 0x46, 0xc6, 0x26, 0xa6, 0x66, 0xe6,
+	0x16, 0x96, 0x56, 0xd6, 0x36, 0xb6, 0x76, 0xf6,
+	0x0e, 0x8e, 0x4e, 0xce, 0x2e, 0xae, 0x6e, 0xee,
+	0x1e, 0x9e, 0x5e, 0xde, 0x3e, 0xbe, 0x7e, 0xfe,
+	0x01, 0x81, 0x41, 0xc1, 0x21, 0xa1, 0x61, 0xe1,
+	0x11, 0x91, 0x51, 0xd1, 0x31, 0xb1, 0x71, 0xf1,
+	0x09, 0x89, 0x49, 0xc9, 0x29, 0xa9, 0x69, 0xe9,
+	0x19, 0x99, 0x59, 0xd9, 0x39, 0xb9, 0x79, 0xf9,
+	0x05, 0x85, 0x45, 0xc5, 0x25, 0xa5, 0x65, 0xe5,
+	0x15, 0x95, 0x55, 0xd5, 0x35, 0xb5, 0x75, 0xf5,
+	0x0d, 0x8d, 0x4d, 0xcd, 0x2d, 0xad, 0x6d, 0xed,
+	0x1d, 0x9d, 0x5d, 0xdd, 0x3d, 0xbd, 0x7d, 0xfd,
+	0x03, 0x83, 0x43, 0xc3, 0x23, 0xa3, 0x63, 0xe3,
+	0x13, 0x93, 0x53, 0xd3, 0x33, 0xb3, 0x73, 0xf3,
+	0x0b, 0x8b, 0x4b, 0xcb, 0x2b, 0xab, 0x6b, 0xeb,
+	0x1b, 0x9b, 0x5b, 0xdb, 0x3b, 0xbb, 0x7b, 0xfb,
+	0x07, 0x87, 0x47, 0xc7, 0x27, 0xa7, 0x67, 0xe7,
+	0x17, 0x97, 0x57, 0xd7, 0x37, 0xb7, 0x77, 0xf7,
+	0x0f, 0x8f, 0x4f, 0xcf, 0x2f, 0xaf, 0x6f, 0xef,
+	0x1f, 0x9f, 0x5f, 0xdf, 0x3f, 0xbf, 0x7f, 0xff,
+};
+
+static u8 swap_bits(struct bch_control *bch, u8 in)
+{
+	if (!bch->swap_bits)
+		return in;
+
+	return swap_bits_table[in];
+}
+
 /*
- * same as encode_bch(), but process input data one byte at a time
+ * same as bch_encode(), but process input data one byte at a time
  */
-static void encode_bch_unaligned(struct bch_control *bch,
+static void bch_encode_unaligned(struct bch_control *bch,
 				 const unsigned char *data, unsigned int len,
 				 uint32_t *ecc)
 {
···
 	const int l = BCH_ECC_WORDS(bch)-1;
 
 	while (len--) {
-		p = bch->mod8_tab + (l+1)*(((ecc[0] >> 24)^(*data++)) & 0xff);
+		u8 tmp = swap_bits(bch, *data++);
+
+		p = bch->mod8_tab + (l+1)*(((ecc[0] >> 24)^(tmp)) & 0xff);
 
 		for (i = 0; i < l; i++)
 			ecc[i] = ((ecc[i] << 8)|(ecc[i+1] >> 24))^(*p++);
···
 	unsigned int i, nwords = BCH_ECC_WORDS(bch)-1;
 
 	for (i = 0; i < nwords; i++, src += 4)
-		dst[i] = (src[0] << 24)|(src[1] << 16)|(src[2] << 8)|src[3];
+		dst[i] = ((u32)swap_bits(bch, src[0]) << 24) |
+			((u32)swap_bits(bch, src[1]) << 16) |
+			((u32)swap_bits(bch, src[2]) << 8) |
+			swap_bits(bch, src[3]);
 
 	memcpy(pad, src, BCH_ECC_BYTES(bch)-4*nwords);
-	dst[nwords] = (pad[0] << 24)|(pad[1] << 16)|(pad[2] << 8)|pad[3];
+	dst[nwords] = ((u32)swap_bits(bch, pad[0]) << 24) |
+		((u32)swap_bits(bch, pad[1]) << 16) |
+		((u32)swap_bits(bch, pad[2]) << 8) |
+		swap_bits(bch, pad[3]);
 }
 
 /*
···
 	unsigned int i, nwords = BCH_ECC_WORDS(bch)-1;
 
 	for (i = 0; i < nwords; i++) {
-		*dst++ = (src[i] >> 24);
-		*dst++ = (src[i] >> 16) & 0xff;
-		*dst++ = (src[i] >>  8) & 0xff;
-		*dst++ = (src[i] >>  0) & 0xff;
+		*dst++ = swap_bits(bch, src[i] >> 24);
+		*dst++ = swap_bits(bch, src[i] >> 16);
+		*dst++ = swap_bits(bch, src[i] >> 8);
+		*dst++ = swap_bits(bch, src[i]);
 	}
-	pad[0] = (src[nwords] >> 24);
-	pad[1] = (src[nwords] >> 16) & 0xff;
-	pad[2] = (src[nwords] >>  8) & 0xff;
-	pad[3] = (src[nwords] >>  0) & 0xff;
+	pad[0] = swap_bits(bch, src[nwords] >> 24);
+	pad[1] = swap_bits(bch, src[nwords] >> 16);
+	pad[2] = swap_bits(bch, src[nwords] >> 8);
+	pad[3] = swap_bits(bch, src[nwords]);
 	memcpy(dst, pad, BCH_ECC_BYTES(bch)-4*nwords);
 }
 
 /**
- * encode_bch - calculate BCH ecc parity of data
+ * bch_encode - calculate BCH ecc parity of data
  * @bch:   BCH control structure
  * @data:  data to encode
  * @len:   data length in bytes
···
  * The exact number of computed ecc parity bits is given by member @ecc_bits of
  * @bch; it may be less than m*t for large values of t.
  */
-void encode_bch(struct bch_control *bch, const uint8_t *data,
+void bch_encode(struct bch_control *bch, const uint8_t *data,
 		unsigned int len, uint8_t *ecc)
 {
 	const unsigned int l = BCH_ECC_WORDS(bch)-1;
···
 	m = ((unsigned long)data) & 3;
 	if (m) {
 		mlen = (len < (4-m)) ? len : 4-m;
-		encode_bch_unaligned(bch, data, mlen, bch->ecc_buf);
+		bch_encode_unaligned(bch, data, mlen, bch->ecc_buf);
 		data += mlen;
 		len  -= mlen;
 	}
···
 	 */
 	while (mlen--) {
 		/* input data is read in big-endian format */
-		w = r[0]^cpu_to_be32(*pdata++);
+		w = cpu_to_be32(*pdata++);
+		if (bch->swap_bits)
+			w = (u32)swap_bits(bch, w) |
+			    ((u32)swap_bits(bch, w >> 8) << 8) |
+			    ((u32)swap_bits(bch, w >> 16) << 16) |
+			    ((u32)swap_bits(bch, w >> 24) << 24);
+		w ^= r[0];
 		p0 = tab0 + (l+1)*((w >>  0) & 0xff);
 		p1 = tab1 + (l+1)*((w >>  8) & 0xff);
 		p2 = tab2 + (l+1)*((w >> 16) & 0xff);
···
 
 	/* process last unaligned bytes */
 	if (len)
-		encode_bch_unaligned(bch, data, len, bch->ecc_buf);
+		bch_encode_unaligned(bch, data, len, bch->ecc_buf);
 
 	/* store ecc parity bytes into original parity buffer */
 	if (ecc)
 		store_ecc8(bch, ecc, bch->ecc_buf);
 }
-EXPORT_SYMBOL_GPL(encode_bch);
+EXPORT_SYMBOL_GPL(bch_encode);
 
 static inline int modulo(struct bch_control *bch, unsigned int v)
 {
···
 #endif /* USE_CHIEN_SEARCH */
 
 /**
- * decode_bch - decode received codeword and find bit error locations
+ * bch_decode - decode received codeword and find bit error locations
  * @bch:      BCH control structure
  * @data:     received data, ignored if @calc_ecc is provided
  * @len:      data length in bytes, must always be provided
···
  *  invalid parameters were provided
  *
  * Depending on the available hw BCH support and the need to compute @calc_ecc
- * separately (using encode_bch()), this function should be called with one of
+ * separately (using bch_encode()), this function should be called with one of
  * the following parameter configurations -
  *
  * by providing @data and @recv_ecc only:
- *   decode_bch(@bch, @data, @len, @recv_ecc, NULL, NULL, @errloc)
+ *   bch_decode(@bch, @data, @len, @recv_ecc, NULL, NULL, @errloc)
  *
  * by providing @recv_ecc and @calc_ecc:
- *   decode_bch(@bch, NULL, @len, @recv_ecc, @calc_ecc, NULL, @errloc)
+ *   bch_decode(@bch, NULL, @len, @recv_ecc, @calc_ecc, NULL, @errloc)
  *
  * by providing ecc = recv_ecc XOR calc_ecc:
- *   decode_bch(@bch, NULL, @len, NULL, ecc, NULL, @errloc)
+ *   bch_decode(@bch, NULL, @len, NULL, ecc, NULL, @errloc)
  *
  * by providing syndrome results @syn:
- *   decode_bch(@bch, NULL, @len, NULL, NULL, @syn, @errloc)
+ *   bch_decode(@bch, NULL, @len, NULL, NULL, @syn, @errloc)
  *
- * Once decode_bch() has successfully returned with a positive value, error
+ * Once bch_decode() has successfully returned with a positive value, error
  * locations returned in array @errloc should be interpreted as follows -
  *
  * if (errloc[n] >= 8*len), then n-th error is located in ecc (no need for
···
  * Note that this function does not perform any data correction by itself, it
  * merely indicates error locations.
  */
-int decode_bch(struct bch_control *bch, const uint8_t *data, unsigned int len,
+int bch_decode(struct bch_control *bch, const uint8_t *data, unsigned int len,
 	       const uint8_t *recv_ecc, const uint8_t *calc_ecc,
 	       const unsigned int *syn, unsigned int *errloc)
 {
···
 		/* compute received data ecc into an internal buffer */
 		if (!data || !recv_ecc)
 			return -EINVAL;
-		encode_bch(bch, data, len, NULL);
+		bch_encode(bch, data, len, NULL);
 	} else {
 		/* load provided calculated ecc */
 		load_ecc8(bch, bch->ecc_buf, calc_ecc);
···
 				break;
 			}
 			errloc[i] = nbits-1-errloc[i];
-			errloc[i] = (errloc[i] & ~7)|(7-(errloc[i] & 7));
+			if (!bch->swap_bits)
+				errloc[i] = (errloc[i] & ~7) |
+					    (7-(errloc[i] & 7));
 		}
 	}
 	return (err >= 0) ? err : -EBADMSG;
 }
-EXPORT_SYMBOL_GPL(decode_bch);
+EXPORT_SYMBOL_GPL(bch_decode);
 
 /*
  * generate Galois field lookup tables
···
 }
 
 /**
- * init_bch - initialize a BCH encoder/decoder
+ * bch_init - initialize a BCH encoder/decoder
  * @m:          Galois field order, should be in the range 5-15
  * @t:          maximum error correction capability, in bits
  * @prim_poly:  user-provided primitive polynomial (or 0 to use default)
+ * @swap_bits:  swap bits within data and syndrome bytes
  *
  * Returns:
  *  a newly allocated BCH control structure if successful, NULL otherwise
  *
  * This initialization can take some time, as lookup tables are built for fast
  * encoding/decoding; make sure not to call this function from a time critical
- * path. Usually, init_bch() should be called on module/driver init and
- * free_bch() should be called to release memory on exit.
+ * path. Usually, bch_init() should be called on module/driver init and
+ * bch_free() should be called to release memory on exit.
  *
  * You may provide your own primitive polynomial of degree @m in argument
- * @prim_poly, or let init_bch() use its default polynomial.
+ * @prim_poly, or let bch_init() use its default polynomial.
  *
- * Once init_bch() has successfully returned a pointer to a newly allocated
+ * Once bch_init() has successfully returned a pointer to a newly allocated
  * BCH control structure, ecc length in bytes is given by member @ecc_bytes of
  * the structure.
  */
-struct bch_control *init_bch(int m, int t, unsigned int prim_poly)
+struct bch_control *bch_init(int m, int t, unsigned int prim_poly,
+			     bool swap_bits)
 {
 	int err = 0;
 	unsigned int i, words;
···
 	bch->syn       = bch_alloc(2*t*sizeof(*bch->syn), &err);
 	bch->cache     = bch_alloc(2*t*sizeof(*bch->cache), &err);
 	bch->elp       = bch_alloc((t+1)*sizeof(struct gf_poly_deg1), &err);
+	bch->swap_bits = swap_bits;
 
 	for (i = 0; i < ARRAY_SIZE(bch->poly_2t); i++)
 		bch->poly_2t[i] = bch_alloc(GF_POLY_SZ(2*t), &err);
···
 	return bch;
 
 fail:
-	free_bch(bch);
+	bch_free(bch);
 	return NULL;
 }
-EXPORT_SYMBOL_GPL(init_bch);
+EXPORT_SYMBOL_GPL(bch_init);
 
 /**
- * free_bch - free the BCH control structure
+ * bch_free - free the BCH control structure
  * @bch: BCH control structure to release
  */
-void free_bch(struct bch_control *bch)
+void bch_free(struct bch_control *bch)
 {
 	unsigned int i;
 
···
 		kfree(bch);
 	}
 }
-EXPORT_SYMBOL_GPL(free_bch);
+EXPORT_SYMBOL_GPL(bch_free);
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Ivan Djelic <ivan.djelic@parrot.com>");
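The bch_init() swap_bits option reverses the bit order within each data, ECC and syndrome byte, so controllers that number bits MSB-first can use the library without pre/post-processing. The swap_bits_table[] added above is simply a per-byte bit reversal precomputed for all 256 values, and with swapping enabled bch_decode() skips its usual in-byte error-location remap. A small sketch of both properties; reverse8 and errloc_remap are illustrative names for this example, not kernel symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Arithmetic equivalent of swap_bits_table[]: reverse the 8 bits of a byte
 * (bit 0 becomes bit 7, bit 1 becomes bit 6, and so on). */
static uint8_t reverse8(uint8_t in)
{
	uint8_t out = 0;
	int i;

	for (i = 0; i < 8; i++)
		out |= ((in >> i) & 1) << (7 - i);

	return out;
}

/* Mirrors the remap bch_decode() applies to each error location when
 * swap_bits is off: keep the byte index, flip the in-byte bit index. */
static unsigned int errloc_remap(unsigned int errloc)
{
	return (errloc & ~7u) | (7 - (errloc & 7));
}
```

For instance, reverse8(0x01) is 0x80, matching swap_bits_table[0x01] above, and errloc_remap(0) is 7: bit 0 of a byte in LSB-first numbering is bit 7 in MSB-first numbering, which is exactly why the remap becomes unnecessary once the bytes themselves are bit-swapped.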