Linux kernel mirror: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-linus-20160324' of git://git.infradead.org/linux-mtd

Pull MTD updates from Brian Norris:
"NAND:
- Add sunxi_nand randomizer support
- begin refactoring NAND ecclayout structs
- fix pxa3xx_nand dmaengine usage
- brcmnand: fix support for v7.1 controller
- add Qualcomm NAND controller driver

SPI NOR:
- add new ls1021a, ls2080a support to Freescale QuadSPI
- add new flash ID entries
- support bottom-block protection for Winbond flash
- support Status Register Write Protect
- remove broken QPI support for Micron SPI flash

JFFS2:
- improve post-mount CRC scan efficiency

General:
- refactor bcm63xxpart parser, to later extend for NAND
- add writebuf size parameter to mtdram

Other minor code quality improvements"

* tag 'for-linus-20160324' of git://git.infradead.org/linux-mtd: (72 commits)
mtd: nand: remove kerneldoc for removed function parameter
mtd: nand: Qualcomm NAND controller driver
dt/bindings: qcom_nandc: Add DT bindings
mtd: nand: don't select chip in nand_chip's block_bad op
mtd: spi-nor: support lock/unlock for a few Winbond chips
mtd: spi-nor: add TB (Top/Bottom) protect support
mtd: spi-nor: add SPI_NOR_HAS_LOCK flag
mtd: spi-nor: use BIT() for flash_info flags
mtd: spi-nor: disallow further writes to SR if WP# is low
mtd: spi-nor: make lock/unlock bounds checks more obvious and robust
mtd: spi-nor: silently drop lock/unlock for already locked/unlocked region
mtd: spi-nor: wait for SR_WIP to clear on initial unlock
mtd: nand: simplify nand_bch_init() usage
mtd: mtdswap: remove useless if (!mtd->ecclayout) test
mtd: create an mtd_oobavail() helper and make use of it
mtd: kill the ecclayout->oobavail field
mtd: nand: check status before reporting timeout
mtd: bcm63xxpart: give width specifier an 'int', not 'size_t'
mtd: mtdram: Add parameter for setting writebuf size
mtd: nand: pxa3xx_nand: kill unused field 'drcmr_cmd'
...

+3522 -590
+18 -13
Documentation/devicetree/bindings/mtd/atmel-nand.txt
···
 Atmel NAND flash
 
 Required properties:
-- compatible : should be "atmel,at91rm9200-nand" or "atmel,sama5d4-nand".
+- compatible: The possible values are:
+    "atmel,at91rm9200-nand"
+    "atmel,sama5d2-nand"
+    "atmel,sama5d4-nand"
 - reg : should specify localbus address and size used for the chip,
     and hardware ECC controller if available.
     If the hardware ECC is PMECC, it should contain address and size for
···
 - nand-ecc-mode : String, operation mode of the NAND ecc mode, soft by default.
     Supported values are: "none", "soft", "hw", "hw_syndrome", "hw_oob_first",
     "soft_bch".
-- atmel,has-pmecc : boolean to enable Programmable Multibit ECC hardware.
-    Only supported by at91sam9x5 or later sam9 product.
+- atmel,has-pmecc : boolean to enable Programmable Multibit ECC hardware,
+    capable of BCH encoding and decoding, on devices where it is present.
 - atmel,pmecc-cap : error correct capability for Programmable Multibit ECC
-    Controller. Supported values are: 2, 4, 8, 12, 24.
+    Controller. Supported values are: 2, 4, 8, 12, 24. If the compatible string
+    is "atmel,sama5d2-nand", 32 is also valid.
 - atmel,pmecc-sector-size : sector size for ECC computation. Supported values
     are: 512, 1024.
 - atmel,pmecc-lookup-table-offset : includes two offsets of lookup table in ROM
···
     sector size 1024. If not specified, driver will build the table in runtime.
 - nand-bus-width : 8 or 16 bus width if not present 8
 - nand-on-flash-bbt: boolean to enable on flash bbt option if not present false
-- Nand Flash Controller(NFC) is a slave driver under Atmel nand flash
-  - Required properties:
-    - compatible : "atmel,sama5d3-nfc".
-    - reg : should specify the address and size used for NFC command registers,
-        NFC registers and NFC Sram. NFC Sram address and size can be absent
-        if don't want to use it.
-    - clocks: phandle to the peripheral clock
-  - Optional properties:
-    - atmel,write-by-sram: boolean to enable NFC write by sram.
+
+Nand Flash Controller(NFC) is an optional sub-node
+Required properties:
+- compatible : "atmel,sama5d3-nfc" or "atmel,sama5d4-nfc".
+- reg : should specify the address and size used for NFC command registers,
+    NFC registers and NFC SRAM. NFC SRAM address and size can be absent
+    if don't want to use it.
+- clocks: phandle to the peripheral clock
+Optional properties:
+- atmel,write-by-sram: boolean to enable NFC write by SRAM.
 
 Examples:
 nand0: nand@40000000,0 {
+4 -1
Documentation/devicetree/bindings/mtd/fsl-quadspi.txt
···
 Required properties:
   - compatible : Should be "fsl,vf610-qspi", "fsl,imx6sx-qspi",
         "fsl,imx7d-qspi", "fsl,imx6ul-qspi",
-        "fsl,ls1021-qspi"
+        "fsl,ls1021a-qspi"
+        or
+        "fsl,ls2080a-qspi" followed by "fsl,ls1021a-qspi"
   - reg : the first contains the register location and length,
           the second contains the memory mapping address and length
   - reg-names: Should contain the reg names "QuadSPI" and "QuadSPI-memory"
···
               But if there are two NOR flashes connected to the
               bus, you should enable this property.
               (Please check the board's schematic.)
+  - big-endian : That means the IP register is big endian
 
 Example:
 
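The LS2080A entry is the first in this binding to use a fallback compatible, so older kernels that only know the LS1021A string can still bind. A minimal node using that scheme might look like the following (a sketch only: the unit address, register ranges, and omission of the clock and interrupt properties are placeholders for illustration, not taken from a real board):

```dts
qspi: quadspi@20c0000 {
	/* placeholder addresses and sizes, for illustration only */
	compatible = "fsl,ls2080a-qspi", "fsl,ls1021a-qspi";
	reg = <0x20c0000 0x10000>, <0x20000000 0x10000000>;
	reg-names = "QuadSPI", "QuadSPI-memory";
	big-endian;
};
```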
+86
Documentation/devicetree/bindings/mtd/qcom_nandc.txt
···
+* Qualcomm NAND controller
+
+Required properties:
+- compatible:        should be "qcom,ipq806x-nand"
+- reg:               MMIO address range
+- clocks:            must contain core clock and always on clock
+- clock-names:       must contain "core" for the core clock and "aon" for the
+                     always on clock
+- dmas:              DMA specifier, consisting of a phandle to the ADM DMA
+                     controller node and the channel number to be used for
+                     NAND. Refer to dma.txt and qcom_adm.txt for more details
+- dma-names:         must be "rxtx"
+- qcom,cmd-crci:     must contain the ADM command type CRCI block instance
+                     number specified for the NAND controller on the given
+                     platform
+- qcom,data-crci:    must contain the ADM data type CRCI block instance
+                     number specified for the NAND controller on the given
+                     platform
+- #address-cells:    <1> - subnodes give the chip-select number
+- #size-cells:       <0>
+
+* NAND chip-select
+
+Each controller may contain one or more subnodes to represent enabled
+chip-selects which (may) contain NAND flash chips. Their properties are as
+follows.
+
+Required properties:
+- compatible:        should contain "qcom,nandcs"
+- reg:               a single integer representing the chip-select
+                     number (e.g., 0, 1, 2, etc.)
+- #address-cells:    see partition.txt
+- #size-cells:       see partition.txt
+- nand-ecc-strength: see nand.txt
+- nand-ecc-step-size: must be 512. see nand.txt for more details.
+
+Optional properties:
+- nand-bus-width:    see nand.txt
+
+Each nandcs device node may optionally contain a 'partitions' sub-node, which
+further contains sub-nodes describing the flash partition mapping. See
+partition.txt for more detail.
+
+Example:
+
+nand@1ac00000 {
+    compatible = "qcom,ebi2-nandc";
+    reg = <0x1ac00000 0x800>;
+
+    clocks = <&gcc EBI2_CLK>,
+             <&gcc EBI2_AON_CLK>;
+    clock-names = "core", "aon";
+
+    dmas = <&adm_dma 3>;
+    dma-names = "rxtx";
+    qcom,cmd-crci = <15>;
+    qcom,data-crci = <3>;
+
+    #address-cells = <1>;
+    #size-cells = <0>;
+
+    nandcs@0 {
+        compatible = "qcom,nandcs";
+        reg = <0>;
+
+        nand-ecc-strength = <4>;
+        nand-ecc-step-size = <512>;
+        nand-bus-width = <8>;
+
+        partitions {
+            compatible = "fixed-partitions";
+            #address-cells = <1>;
+            #size-cells = <1>;
+
+            partition@0 {
+                label = "boot-nand";
+                reg = <0 0x58a0000>;
+            };
+
+            partition@58a0000 {
+                label = "fs-nand";
+                reg = <0x58a0000 0x4000000>;
+            };
+        };
+    };
+};
-9
arch/arm/plat-samsung/devs.c
···
         return -ENOMEM;
     }
 
-    if (set->ecc_layout) {
-        ptr = kmemdup(set->ecc_layout,
-                      sizeof(struct nand_ecclayout), GFP_KERNEL);
-        set->ecc_layout = ptr;
-
-        if (!ptr)
-            return -ENOMEM;
-    }
-
     return 0;
 }
 
-2
arch/mips/include/asm/mach-jz4740/jz4740_nand.h
···
     int num_partitions;
     struct mtd_partition *partitions;
 
-    struct nand_ecclayout *ecc_layout;
-
     unsigned char banks[JZ_NAND_NUM_BANKS];
 
     void (*ident_callback)(struct platform_device *, struct nand_chip *,
+1 -1
drivers/memory/fsl_ifc.c
···
 
     /* get the Controller level irq */
     fsl_ifc_ctrl_dev->irq = irq_of_parse_and_map(dev->dev.of_node, 0);
-    if (fsl_ifc_ctrl_dev->irq == NO_IRQ) {
+    if (fsl_ifc_ctrl_dev->irq == 0) {
         dev_err(&dev->dev, "failed to get irq resource "
                 "for IFC\n");
         ret = -ENODEV;
+1 -1
drivers/mtd/Kconfig
···
 
 config MTD_BCM63XX_PARTS
     tristate "BCM63XX CFE partitioning support"
-    depends on BCM63XX
+    depends on BCM63XX || BMIPS_GENERIC || COMPILE_TEST
     select CRC32
     help
       This provides partions parsing for BCM63xx devices with CFE
+24 -18
drivers/mtd/bcm47xxpart.c
···
 {
     uint32_t buf;
     size_t bytes_read;
+    int err;
 
-    if (mtd_read(master, offset, sizeof(buf), &bytes_read,
-            (uint8_t *)&buf) < 0) {
-        pr_err("mtd_read error while parsing (offset: 0x%X)!\n",
-            offset);
+    err = mtd_read(master, offset, sizeof(buf), &bytes_read,
+                   (uint8_t *)&buf);
+    if (err && !mtd_is_bitflip(err)) {
+        pr_err("mtd_read error while parsing (offset: 0x%X): %d\n",
+               offset, err);
         goto out_default;
     }
 
···
     int trx_part = -1;
     int last_trx_part = -1;
     int possible_nvram_sizes[] = { 0x8000, 0xF000, 0x10000, };
+    int err;
 
     /*
      * Some really old flashes (like AT45DB*) had smaller erasesize-s, but
···
     /* Parse block by block looking for magics */
     for (offset = 0; offset <= master->size - blocksize;
          offset += blocksize) {
-        /* Nothing more in higher memory */
-        if (offset >= 0x2000000)
+        /* Nothing more in higher memory on BCM47XX (MIPS) */
+        if (config_enabled(CONFIG_BCM47XX) && offset >= 0x2000000)
             break;
 
         if (curr_part >= BCM47XXPART_MAX_PARTS) {
···
         }
 
         /* Read beginning of the block */
-        if (mtd_read(master, offset, BCM47XXPART_BYTES_TO_READ,
-                &bytes_read, (uint8_t *)buf) < 0) {
-            pr_err("mtd_read error while parsing (offset: 0x%X)!\n",
-                offset);
+        err = mtd_read(master, offset, BCM47XXPART_BYTES_TO_READ,
+                       &bytes_read, (uint8_t *)buf);
+        if (err && !mtd_is_bitflip(err)) {
+            pr_err("mtd_read error while parsing (offset: 0x%X): %d\n",
+                   offset, err);
             continue;
         }
 
···
         }
 
         /* Read middle of the block */
-        if (mtd_read(master, offset + 0x8000, 0x4,
-                &bytes_read, (uint8_t *)buf) < 0) {
-            pr_err("mtd_read error while parsing (offset: 0x%X)!\n",
-                offset);
+        err = mtd_read(master, offset + 0x8000, 0x4, &bytes_read,
+                       (uint8_t *)buf);
+        if (err && !mtd_is_bitflip(err)) {
+            pr_err("mtd_read error while parsing (offset: 0x%X): %d\n",
+                   offset, err);
             continue;
         }
 
···
     }
 
     offset = master->size - possible_nvram_sizes[i];
-    if (mtd_read(master, offset, 0x4, &bytes_read,
-            (uint8_t *)buf) < 0) {
-        pr_err("mtd_read error while reading at offset 0x%X!\n",
-            offset);
+    err = mtd_read(master, offset, 0x4, &bytes_read,
+                   (uint8_t *)buf);
+    if (err && !mtd_is_bitflip(err)) {
+        pr_err("mtd_read error while reading (offset 0x%X): %d\n",
+               offset, err);
         continue;
     }
 
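The key behavioral change in this diff is that a read which only saw ECC-corrected bitflips is no longer treated as a failure. A small userspace sketch of that classification, modeled on `mtd_is_bitflip()`/`mtd_is_eccerr()` from `include/linux/mtd/mtd.h` (the stand-in functions here are illustrative, not the kernel's inline definitions):

```c
#include <errno.h>

/* mtd_read() returning -EUCLEAN means the data is valid but ECC had
 * to correct bitflips; -EBADMSG means an uncorrectable ECC error. */
static int mtd_is_bitflip(int err)
{
	return err == -EUCLEAN;
}

static int mtd_is_eccerr(int err)
{
	return err == -EBADMSG;
}

/* The check used at every read site in the diff above: only real
 * failures abort, corrected bitflips fall through to the parser. */
static int read_failed(int err)
{
	return err && !mtd_is_bitflip(err);
}
```

A partition parser only cares that the bytes it got back are usable, which is exactly what `-EUCLEAN` guarantees; bailing out on it (as the old `< 0` check did) made parsing fail on media that merely needs scrubbing.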
+141 -41
drivers/mtd/bcm63xxpart.c
···
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/bcm963xx_nvram.h>
 #include <linux/bcm963xx_tag.h>
 #include <linux/crc32.h>
 #include <linux/module.h>
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/partitions.h>
 
-#include <asm/mach-bcm63xx/bcm63xx_nvram.h>
-#include <asm/mach-bcm63xx/board_bcm963xx.h>
+#define BCM963XX_CFE_BLOCK_SIZE SZ_64K /* always at least 64KiB */
 
-#define BCM63XX_CFE_BLOCK_SIZE SZ_64K /* always at least 64KiB */
+#define BCM963XX_CFE_MAGIC_OFFSET 0x4e0
+#define BCM963XX_CFE_VERSION_OFFSET 0x570
+#define BCM963XX_NVRAM_OFFSET 0x580
 
-#define BCM63XX_CFE_MAGIC_OFFSET 0x4e0
+/* Ensure strings read from flash structs are null terminated */
+#define STR_NULL_TERMINATE(x) \
+    do { char *_str = (x); _str[sizeof(x) - 1] = 0; } while (0)
 
 static int bcm63xx_detect_cfe(struct mtd_info *master)
 {
···
         return 0;
 
     /* very old CFE's do not have the cfe-v string, so check for magic */
-    ret = mtd_read(master, BCM63XX_CFE_MAGIC_OFFSET, 8, &retlen,
+    ret = mtd_read(master, BCM963XX_CFE_MAGIC_OFFSET, 8, &retlen,
                    (void *)buf);
     buf[retlen] = 0;
 
     return strncmp("CFE1CFE1", buf, 8);
 }
 
-static int bcm63xx_parse_cfe_partitions(struct mtd_info *master,
-                                        const struct mtd_partition **pparts,
-                                        struct mtd_part_parser_data *data)
+static int bcm63xx_read_nvram(struct mtd_info *master,
+    struct bcm963xx_nvram *nvram)
+{
+    u32 actual_crc, expected_crc;
+    size_t retlen;
+    int ret;
+
+    /* extract nvram data */
+    ret = mtd_read(master, BCM963XX_NVRAM_OFFSET, BCM963XX_NVRAM_V5_SIZE,
+                   &retlen, (void *)nvram);
+    if (ret)
+        return ret;
+
+    ret = bcm963xx_nvram_checksum(nvram, &expected_crc, &actual_crc);
+    if (ret)
+        pr_warn("nvram checksum failed, contents may be invalid (expected %08x, got %08x)\n",
+                expected_crc, actual_crc);
+
+    if (!nvram->psi_size)
+        nvram->psi_size = BCM963XX_DEFAULT_PSI_SIZE;
+
+    return 0;
+}
+
+static int bcm63xx_read_image_tag(struct mtd_info *master, const char *name,
+    loff_t tag_offset, struct bcm_tag *buf)
+{
+    int ret;
+    size_t retlen;
+    u32 computed_crc;
+
+    ret = mtd_read(master, tag_offset, sizeof(*buf), &retlen, (void *)buf);
+    if (ret)
+        return ret;
+
+    if (retlen != sizeof(*buf))
+        return -EIO;
+
+    computed_crc = crc32_le(IMAGETAG_CRC_START, (u8 *)buf,
+                            offsetof(struct bcm_tag, header_crc));
+    if (computed_crc == buf->header_crc) {
+        STR_NULL_TERMINATE(buf->board_id);
+        STR_NULL_TERMINATE(buf->tag_version);
+
+        pr_info("%s: CFE image tag found at 0x%llx with version %s, board type %s\n",
+                name, tag_offset, buf->tag_version, buf->board_id);
+
+        return 0;
+    }
+
+    pr_warn("%s: CFE image tag at 0x%llx CRC invalid (expected %08x, actual %08x)\n",
+            name, tag_offset, buf->header_crc, computed_crc);
+    return 1;
+}
+
+static int bcm63xx_parse_cfe_nor_partitions(struct mtd_info *master,
+    const struct mtd_partition **pparts, struct bcm963xx_nvram *nvram)
 {
     /* CFE, NVRAM and global Linux are always present */
     int nrparts = 3, curpart = 0;
-    struct bcm_tag *buf;
+    struct bcm_tag *buf = NULL;
     struct mtd_partition *parts;
     int ret;
-    size_t retlen;
     unsigned int rootfsaddr, kerneladdr, spareaddr;
     unsigned int rootfslen, kernellen, sparelen, totallen;
     unsigned int cfelen, nvramlen;
     unsigned int cfe_erasesize;
     int i;
-    u32 computed_crc;
     bool rootfs_first = false;
 
-    if (bcm63xx_detect_cfe(master))
-        return -EINVAL;
-
     cfe_erasesize = max_t(uint32_t, master->erasesize,
-                          BCM63XX_CFE_BLOCK_SIZE);
+                          BCM963XX_CFE_BLOCK_SIZE);
 
     cfelen = cfe_erasesize;
-    nvramlen = bcm63xx_nvram_get_psi_size() * SZ_1K;
+    nvramlen = nvram->psi_size * SZ_1K;
     nvramlen = roundup(nvramlen, cfe_erasesize);
 
-    /* Allocate memory for buffer */
     buf = vmalloc(sizeof(struct bcm_tag));
     if (!buf)
         return -ENOMEM;
 
     /* Get the tag */
-    ret = mtd_read(master, cfelen, sizeof(struct bcm_tag), &retlen,
-                   (void *)buf);
+    ret = bcm63xx_read_image_tag(master, "rootfs", cfelen, buf);
+    if (!ret) {
+        STR_NULL_TERMINATE(buf->flash_image_start);
+        if (kstrtouint(buf->flash_image_start, 10, &rootfsaddr) ||
+                rootfsaddr < BCM963XX_EXTENDED_SIZE) {
+            pr_err("invalid rootfs address: %*ph\n",
+                   (int)sizeof(buf->flash_image_start),
+                   buf->flash_image_start);
+            goto invalid_tag;
+        }
 
-    if (retlen != sizeof(struct bcm_tag)) {
-        vfree(buf);
-        return -EIO;
-    }
+        STR_NULL_TERMINATE(buf->kernel_address);
+        if (kstrtouint(buf->kernel_address, 10, &kerneladdr) ||
+                kerneladdr < BCM963XX_EXTENDED_SIZE) {
+            pr_err("invalid kernel address: %*ph\n",
+                   (int)sizeof(buf->kernel_address),
+                   buf->kernel_address);
+            goto invalid_tag;
+        }
 
-    computed_crc = crc32_le(IMAGETAG_CRC_START, (u8 *)buf,
-                            offsetof(struct bcm_tag, header_crc));
-    if (computed_crc == buf->header_crc) {
-        char *boardid = &(buf->board_id[0]);
-        char *tagversion = &(buf->tag_version[0]);
+        STR_NULL_TERMINATE(buf->kernel_length);
+        if (kstrtouint(buf->kernel_length, 10, &kernellen)) {
+            pr_err("invalid kernel length: %*ph\n",
+                   (int)sizeof(buf->kernel_length),
+                   buf->kernel_length);
+            goto invalid_tag;
+        }
 
-        sscanf(buf->flash_image_start, "%u", &rootfsaddr);
-        sscanf(buf->kernel_address, "%u", &kerneladdr);
-        sscanf(buf->kernel_length, "%u", &kernellen);
-        sscanf(buf->total_length, "%u", &totallen);
-
-        pr_info("CFE boot tag found with version %s and board type %s\n",
-                tagversion, boardid);
+        STR_NULL_TERMINATE(buf->total_length);
+        if (kstrtouint(buf->total_length, 10, &totallen)) {
+            pr_err("invalid total length: %*ph\n",
+                   (int)sizeof(buf->total_length),
+                   buf->total_length);
+            goto invalid_tag;
+        }
 
         kerneladdr = kerneladdr - BCM963XX_EXTENDED_SIZE;
         rootfsaddr = rootfsaddr - BCM963XX_EXTENDED_SIZE;
···
             rootfsaddr = kerneladdr + kernellen;
             rootfslen = spareaddr - rootfsaddr;
         }
-    } else {
-        pr_warn("CFE boot tag CRC invalid (expected %08x, actual %08x)\n",
-                buf->header_crc, computed_crc);
+    } else if (ret > 0) {
+invalid_tag:
         kernellen = 0;
         rootfslen = 0;
         rootfsaddr = 0;
         spareaddr = cfelen;
+    } else {
+        goto out;
     }
     sparelen = master->size - spareaddr - nvramlen;
 
···
     if (kernellen > 0)
         nrparts++;
 
-    /* Ask kernel for more memory */
     parts = kzalloc(sizeof(*parts) * nrparts + 10 * nrparts, GFP_KERNEL);
     if (!parts) {
-        vfree(buf);
-        return -ENOMEM;
+        ret = -ENOMEM;
+        goto out;
     }
 
     /* Start building partition list */
···
             sparelen);
 
     *pparts = parts;
+    ret = 0;
+
+out:
     vfree(buf);
 
+    if (ret)
+        return ret;
+
     return nrparts;
+}
+
+static int bcm63xx_parse_cfe_partitions(struct mtd_info *master,
+                                        const struct mtd_partition **pparts,
+                                        struct mtd_part_parser_data *data)
+{
+    struct bcm963xx_nvram *nvram = NULL;
+    int ret;
+
+    if (bcm63xx_detect_cfe(master))
+        return -EINVAL;
+
+    nvram = vzalloc(sizeof(*nvram));
+    if (!nvram)
+        return -ENOMEM;
+
+    ret = bcm63xx_read_nvram(master, nvram);
+    if (ret)
+        goto out;
+
+    if (!mtd_type_is_nand(master))
+        ret = bcm63xx_parse_cfe_nor_partitions(master, pparts, nvram);
+    else
+        ret = -EINVAL;
+
+out:
+    vfree(nvram);
+    return ret;
 };
 
 static struct mtd_part_parser bcm63xx_cfe_parser = {
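A notable hardening detail in the rework above is `STR_NULL_TERMINATE()`: the `bcm_tag` fields are fixed-width char arrays copied straight off flash, so nothing guarantees a terminating NUL before they reach `kstrtouint()` or a printk. The sketch below reproduces the macro and the strict-parse idea in userspace; `struct fake_tag` and `parse_uint()` are illustrative stand-ins (the real layout is in `include/linux/bcm963xx_tag.h`, and the kernel uses `kstrtouint()`):

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Same shape as the parser's macro: force the last byte of a
 * fixed-width array to NUL so every later string access is bounded. */
#define STR_NULL_TERMINATE(x) \
	do { char *_str = (x); _str[sizeof(x) - 1] = 0; } while (0)

/* Cut-down stand-in for struct bcm_tag; one field modeled. */
struct fake_tag {
	char kernel_address[11];
};

/* Userspace stand-in for kstrtouint(): strict base-10 parse that
 * rejects trailing junk, unlike the sscanf("%u") it replaced. */
static int parse_uint(const char *s, unsigned int *out)
{
	char *end;
	unsigned long val;

	val = strtoul(s, &end, 10);
	if (end == s || *end != '\0')
		return -1;
	*out = (unsigned int)val;
	return 0;
}
```

The combination means a tag whose numeric field is unterminated or padded with garbage fails cleanly into the `invalid_tag` path instead of yielding whatever `sscanf` happened to scrape out.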
+2 -3
drivers/mtd/devices/docg3.c
···
  * @eccbytes: 8 bytes are used (1 for Hamming ECC, 7 for BCH ECC)
  * @eccpos: ecc positions (byte 7 is Hamming ECC, byte 8-14 are BCH ECC)
  * @oobfree: free pageinfo bytes (byte 0 until byte 6, byte 15
- * @oobavail: 8 available bytes remaining after ECC toll
  */
 static struct nand_ecclayout docg3_oobinfo = {
     .eccbytes = 8,
     .eccpos = {7, 8, 9, 10, 11, 12, 13, 14},
     .oobfree = {{0, 7}, {15, 1} },
-    .oobavail = 8,
 };
 
 static inline u8 doc_readb(struct docg3 *docg3, u16 reg)
···
         oobdelta = mtd->oobsize;
         break;
     case MTD_OPS_AUTO_OOB:
-        oobdelta = mtd->ecclayout->oobavail;
+        oobdelta = mtd->oobavail;
         break;
     default:
         return -EINVAL;
···
     mtd->_write_oob = doc_write_oob;
     mtd->_block_isbad = doc_block_isbad;
     mtd->ecclayout = &docg3_oobinfo;
+    mtd->oobavail = 8;
     mtd->ecc_strength = DOC_ECC_BCH_T;
 
     return 0;
+4 -1
drivers/mtd/devices/mtdram.c
···
 
 static unsigned long total_size = CONFIG_MTDRAM_TOTAL_SIZE;
 static unsigned long erase_size = CONFIG_MTDRAM_ERASE_SIZE;
+static unsigned long writebuf_size = 64;
 #define MTDRAM_TOTAL_SIZE (total_size * 1024)
 #define MTDRAM_ERASE_SIZE (erase_size * 1024)
 
···
 MODULE_PARM_DESC(total_size, "Total device size in KiB");
 module_param(erase_size, ulong, 0);
 MODULE_PARM_DESC(erase_size, "Device erase block size in KiB");
+module_param(writebuf_size, ulong, 0);
+MODULE_PARM_DESC(writebuf_size, "Device write buf size in Bytes (Default: 64)");
 #endif
 
 // We could store these in the mtd structure, but we only support 1 device..
···
     mtd->flags = MTD_CAP_RAM;
     mtd->size = size;
     mtd->writesize = 1;
-    mtd->writebufsize = 64; /* Mimic CFI NOR flashes */
+    mtd->writebufsize = writebuf_size;
     mtd->erasesize = MTDRAM_ERASE_SIZE;
     mtd->priv = mapped_address;
 
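With `writebuf_size` exposed as a module parameter, the write buffer can be tuned at load time instead of being hardwired to the CFI-NOR-like 64 bytes. A typical invocation looks like this (a configuration sketch: the sizes are arbitrary, and the sysfs path assumes the device probes as `mtd0`):

```sh
# 4 MiB RAM-backed MTD, 128 KiB erase blocks, 512-byte write buffer
# (total_size/erase_size are in KiB, writebuf_size in bytes, default 64)
modprobe mtdram total_size=4096 erase_size=128 writebuf_size=512

# check what the MTD core registered (path assumed)
cat /sys/class/mtd/mtd0/writebufsize
```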
+1 -4
drivers/mtd/mtdpart.c
···
     if (ops->oobbuf) {
         size_t len, pages;
 
-        if (ops->mode == MTD_OPS_AUTO_OOB)
-            len = mtd->oobavail;
-        else
-            len = mtd->oobsize;
+        len = mtd_oobavail(mtd, ops);
         pages = mtd_div_by_ws(mtd->size, mtd);
         pages -= mtd_div_by_ws(from, mtd);
         if (ops->ooboffs + ops->ooblen > pages * len)
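The `mtd_oobavail()` helper this hunk switches to simply centralizes the mode-dependent OOB size the open-coded if/else used to compute. A userspace sketch of its semantics, with minimal stand-ins for the kernel structures (the real definitions live in `include/linux/mtd/mtd.h`):

```c
/* Minimal stand-ins: only the fields the helper reads are modeled. */
enum { MTD_OPS_PLACE_OOB = 0, MTD_OPS_AUTO_OOB = 1, MTD_OPS_RAW = 2 };

struct mtd_info {
	unsigned int oobsize;	/* total OOB bytes per page */
	unsigned int oobavail;	/* OOB bytes left over after ECC */
};

struct mtd_oob_ops {
	int mode;
};

/* In MTD_OPS_AUTO_OOB mode only the ECC-free bytes are addressable;
 * in the place/raw modes the whole OOB area is. */
static unsigned int mtd_oobavail(struct mtd_info *mtd,
				 struct mtd_oob_ops *ops)
{
	return ops->mode == MTD_OPS_AUTO_OOB ? mtd->oobavail : mtd->oobsize;
}
```

The same pattern was duplicated in mtdpart.c, mtdswap.c, and the NAND core, which is why the series can replace `mtd->ecclayout->oobavail` with the cached `mtd->oobavail` everywhere and then kill the ecclayout field.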
+8 -16
drivers/mtd/mtdswap.c
···
     if (mtd_can_have_bb(d->mtd) && mtd_block_isbad(d->mtd, offset))
         return MTDSWAP_SCANNED_BAD;
 
-    ops.ooblen = 2 * d->mtd->ecclayout->oobavail;
+    ops.ooblen = 2 * d->mtd->oobavail;
     ops.oobbuf = d->oob_buf;
     ops.ooboffs = 0;
     ops.datbuf = NULL;
···
 
     data = (struct mtdswap_oobdata *)d->oob_buf;
     data2 = (struct mtdswap_oobdata *)
-        (d->oob_buf + d->mtd->ecclayout->oobavail);
+        (d->oob_buf + d->mtd->oobavail);
 
     if (le16_to_cpu(data->magic) == MTDSWAP_MAGIC_CLEAN) {
         eb->erase_count = le32_to_cpu(data->count);
···
 
     ops.mode = MTD_OPS_AUTO_OOB;
     ops.len = mtd->writesize;
-    ops.ooblen = mtd->ecclayout->oobavail;
+    ops.ooblen = mtd->oobavail;
     ops.ooboffs = 0;
     ops.datbuf = d->page_buf;
     ops.oobbuf = d->oob_buf;
···
     for (i = 0; i < mtd_pages; i++) {
         patt = mtdswap_test_patt(test + i);
         memset(d->page_buf, patt, mtd->writesize);
-        memset(d->oob_buf, patt, mtd->ecclayout->oobavail);
+        memset(d->oob_buf, patt, mtd->oobavail);
         ret = mtd_write_oob(mtd, pos, &ops);
         if (ret)
             goto error;
···
             if (p1[j] != patt)
                 goto error;
 
-        for (j = 0; j < mtd->ecclayout->oobavail; j++)
+        for (j = 0; j < mtd->oobavail; j++)
             if (p2[j] != (unsigned char)patt)
                 goto error;
 
···
     if (!d->page_buf)
         goto page_buf_fail;
 
-    d->oob_buf = kmalloc(2 * mtd->ecclayout->oobavail, GFP_KERNEL);
+    d->oob_buf = kmalloc(2 * mtd->oobavail, GFP_KERNEL);
     if (!d->oob_buf)
         goto oob_buf_fail;
 
···
     unsigned long part;
     unsigned int eblocks, eavailable, bad_blocks, spare_cnt;
     uint64_t swap_size, use_size, size_limit;
-    struct nand_ecclayout *oinfo;
     int ret;
 
     parts = &partitions[0];
···
         return;
     }
 
-    oinfo = mtd->ecclayout;
-    if (!oinfo) {
-        printk(KERN_ERR "%s: mtd%d does not have OOB\n",
-            MTDSWAP_PREFIX, mtd->index);
-        return;
-    }
-
-    if (!mtd->oobsize || oinfo->oobavail < MTDSWAP_OOBSIZE) {
+    if (!mtd->oobsize || mtd->oobavail < MTDSWAP_OOBSIZE) {
         printk(KERN_ERR "%s: Not enough free bytes in OOB, "
             "%d available, %zu needed.\n",
-            MTDSWAP_PREFIX, oinfo->oobavail, MTDSWAP_OOBSIZE);
+            MTDSWAP_PREFIX, mtd->oobavail, MTDSWAP_OOBSIZE);
         return;
     }
 
+10
drivers/mtd/nand/Kconfig
···
 config MTD_NAND_GPIO
     tristate "GPIO assisted NAND Flash driver"
     depends on GPIOLIB || COMPILE_TEST
+    depends on HAS_IOMEM
     help
       This enables a NAND flash driver where control signals are
       connected to GPIO pins, and commands and data are communicated
···
 config MTD_NAND_CS553X
     tristate "NAND support for CS5535/CS5536 (AMD Geode companion chip)"
     depends on X86_32
+    depends on !UML && HAS_IOMEM
     help
       The CS553x companion chips for the AMD Geode processor
       include NAND flash controllers with built-in hardware ECC
···
 config MTD_NAND_VF610_NFC
     tristate "Support for Freescale NFC for VF610/MPC5125"
     depends on (SOC_VF610 || COMPILE_TEST)
+    depends on HAS_IOMEM
     help
       Enables support for NAND Flash Controller on some Freescale
       processors like the VF610, MPC5125, MCF54418 or Kinetis K70.
···
     depends on HAS_DMA
     help
       Enables support for NAND controller on Hisilicon SoC Hip04.
+
+config MTD_NAND_QCOM
+    tristate "Support for NAND on QCOM SoCs"
+    depends on ARCH_QCOM
+    help
+      Enables support for NAND flash chips on SoCs containing the EBI2 NAND
+      controller. This controller is found on IPQ806x SoC.
 
 endif # MTD_NAND
+1
drivers/mtd/nand/Makefile
···
 obj-$(CONFIG_MTD_NAND_SUNXI)    += sunxi_nand.o
 obj-$(CONFIG_MTD_NAND_HISI504)  += hisi504_nand.o
 obj-$(CONFIG_MTD_NAND_BRCMNAND) += brcmnand/
+obj-$(CONFIG_MTD_NAND_QCOM)     += qcom_nandc.o
 
 nand-objs := nand_base.o nand_bbt.o nand_timings.o
+71 -18
drivers/mtd/nand/atmel_nand.c
··· 65 65 66 66 struct atmel_nand_caps { 67 67 bool pmecc_correct_erase_page; 68 + uint8_t pmecc_max_correction; 69 + }; 70 + 71 + struct atmel_nand_nfc_caps { 72 + uint32_t rb_mask; 68 73 }; 69 74 70 75 /* oob layout for large page size ··· 116 111 /* Point to the sram bank which include readed data via NFC */ 117 112 void *data_in_sram; 118 113 bool will_write_sram; 114 + const struct atmel_nand_nfc_caps *caps; 119 115 }; 120 116 static struct atmel_nfc nand_nfc; 121 117 ··· 146 140 int pmecc_cw_len; /* Length of codeword */ 147 141 148 142 void __iomem *pmerrloc_base; 143 + void __iomem *pmerrloc_el_base; 149 144 void __iomem *pmecc_rom_base; 150 145 151 146 /* lookup table for alpha_to and index_of */ ··· 475 468 * 8-bits 13-bytes 14-bytes 476 469 * 12-bits 20-bytes 21-bytes 477 470 * 24-bits 39-bytes 42-bytes 471 + * 32-bits 52-bytes 56-bytes 478 472 */ 479 473 static int pmecc_get_ecc_bytes(int cap, int sector_size) 480 474 { ··· 821 813 sector_size = host->pmecc_sector_size; 822 814 823 815 while (err_nbr) { 824 - tmp = pmerrloc_readl_el_relaxed(host->pmerrloc_base, i) - 1; 816 + tmp = pmerrloc_readl_el_relaxed(host->pmerrloc_el_base, i) - 1; 825 817 byte_pos = tmp / 8; 826 818 bit_pos = tmp % 8; 827 819 ··· 833 825 *(buf + byte_pos) ^= (1 << bit_pos); 834 826 835 827 pos = sector_num * host->pmecc_sector_size + byte_pos; 836 - dev_info(host->dev, "Bit flip in data area, byte_pos: %d, bit_pos: %d, 0x%02x -> 0x%02x\n", 828 + dev_dbg(host->dev, "Bit flip in data area, byte_pos: %d, bit_pos: %d, 0x%02x -> 0x%02x\n", 837 829 pos, bit_pos, err_byte, *(buf + byte_pos)); 838 830 } else { 839 831 /* Bit flip in OOB area */ ··· 843 835 ecc[tmp] ^= (1 << bit_pos); 844 836 845 837 pos = tmp + nand_chip->ecc.layout->eccpos[0]; 846 - dev_info(host->dev, "Bit flip in OOB, oob_byte_pos: %d, bit_pos: %d, 0x%02x -> 0x%02x\n", 838 + dev_dbg(host->dev, "Bit flip in OOB, oob_byte_pos: %d, bit_pos: %d, 0x%02x -> 0x%02x\n", 847 839 pos, bit_pos, err_byte, ecc[tmp]); 848 840 } 849 
841 ··· 1025 1017 case 24: 1026 1018 val = PMECC_CFG_BCH_ERR24; 1027 1019 break; 1020 + case 32: 1021 + val = PMECC_CFG_BCH_ERR32; 1022 + break; 1028 1023 } 1029 1024 1030 1025 if (host->pmecc_sector_size == 512) ··· 1089 1078 1090 1079 /* If device tree doesn't specify, use NAND's minimum ECC parameters */ 1091 1080 if (host->pmecc_corr_cap == 0) { 1081 + if (*cap > host->caps->pmecc_max_correction) 1082 + return -EINVAL; 1083 + 1092 1084 /* use the most fitable ecc bits (the near bigger one ) */ 1093 1085 if (*cap <= 2) 1094 1086 host->pmecc_corr_cap = 2; ··· 1103 1089 host->pmecc_corr_cap = 12; 1104 1090 else if (*cap <= 24) 1105 1091 host->pmecc_corr_cap = 24; 1092 + else if (*cap <= 32) 1093 + host->pmecc_corr_cap = 32; 1106 1094 else 1107 1095 return -EINVAL; 1108 1096 } ··· 1221 1205 err_no = PTR_ERR(host->pmerrloc_base); 1222 1206 goto err; 1223 1207 } 1208 + host->pmerrloc_el_base = host->pmerrloc_base + ATMEL_PMERRLOC_SIGMAx + 1209 + (host->caps->pmecc_max_correction + 1) * 4; 1224 1210 1225 1211 if (!host->has_no_lookup_table) { 1226 1212 regs_rom = platform_get_resource(pdev, IORESOURCE_MEM, 3); ··· 1504 1486 ecc_writel(host->ecc, CR, ATMEL_ECC_RST); 1505 1487 } 1506 1488 1507 - static const struct of_device_id atmel_nand_dt_ids[]; 1508 - 1509 1489 static int atmel_of_init_port(struct atmel_nand_host *host, 1510 1490 struct device_node *np) 1511 1491 { ··· 1514 1498 enum of_gpio_flags flags = 0; 1515 1499 1516 1500 host->caps = (struct atmel_nand_caps *) 1517 - of_match_device(atmel_nand_dt_ids, host->dev)->data; 1501 + of_device_get_match_data(host->dev); 1518 1502 1519 1503 if (of_property_read_u32(np, "atmel,nand-addr-offset", &val) == 0) { 1520 1504 if (val >= 32) { ··· 1563 1547 * them from NAND ONFI parameters. 
1564 1548 */ 1565 1549 if (of_property_read_u32(np, "atmel,pmecc-cap", &val) == 0) { 1566 - if ((val != 2) && (val != 4) && (val != 8) && (val != 12) && 1567 - (val != 24)) { 1550 + if (val > host->caps->pmecc_max_correction) { 1568 1551 dev_err(host->dev, 1569 - "Unsupported PMECC correction capability: %d; should be 2, 4, 8, 12 or 24\n", 1552 + "Required ECC strength too high: %u max %u\n", 1553 + val, host->caps->pmecc_max_correction); 1554 + return -EINVAL; 1555 + } 1556 + if ((val != 2) && (val != 4) && (val != 8) && 1557 + (val != 12) && (val != 24) && (val != 32)) { 1558 + dev_err(host->dev, 1559 + "Required ECC strength not supported: %u\n", 1570 1560 val); 1571 1561 return -EINVAL; 1572 1562 } ··· 1582 1560 if (of_property_read_u32(np, "atmel,pmecc-sector-size", &val) == 0) { 1583 1561 if ((val != 512) && (val != 1024)) { 1584 1562 dev_err(host->dev, 1585 - "Unsupported PMECC sector size: %d; should be 512 or 1024 bytes\n", 1563 + "Required ECC sector size not supported: %u\n", 1586 1564 val); 1587 1565 return -EINVAL; 1588 1566 } ··· 1699 1677 nfc_writel(host->nfc->hsmc_regs, IDR, NFC_SR_XFR_DONE); 1700 1678 ret = IRQ_HANDLED; 1701 1679 } 1702 - if (pending & NFC_SR_RB_EDGE) { 1680 + if (pending & host->nfc->caps->rb_mask) { 1703 1681 complete(&host->nfc->comp_ready); 1704 - nfc_writel(host->nfc->hsmc_regs, IDR, NFC_SR_RB_EDGE); 1682 + nfc_writel(host->nfc->hsmc_regs, IDR, host->nfc->caps->rb_mask); 1705 1683 ret = IRQ_HANDLED; 1706 1684 } 1707 1685 if (pending & NFC_SR_CMD_DONE) { ··· 1719 1697 if (flag & NFC_SR_XFR_DONE) 1720 1698 init_completion(&host->nfc->comp_xfer_done); 1721 1699 1722 - if (flag & NFC_SR_RB_EDGE) 1700 + if (flag & host->nfc->caps->rb_mask) 1723 1701 init_completion(&host->nfc->comp_ready); 1724 1702 1725 1703 if (flag & NFC_SR_CMD_DONE) ··· 1737 1715 if (flag & NFC_SR_XFR_DONE) 1738 1716 comp[index++] = &host->nfc->comp_xfer_done; 1739 1717 1740 - if (flag & NFC_SR_RB_EDGE) 1718 + if (flag & host->nfc->caps->rb_mask) 1741 1719 
comp[index++] = &host->nfc->comp_ready; 1742 1720 1743 1721 if (flag & NFC_SR_CMD_DONE) ··· 1805 1783 dev_err(host->dev, "Lost the interrupt flags: 0x%08x\n", 1806 1784 mask & status); 1807 1785 1808 - return status & NFC_SR_RB_EDGE; 1786 + return status & host->nfc->caps->rb_mask; 1809 1787 } 1810 1788 1811 1789 static void nfc_select_chip(struct mtd_info *mtd, int chip) ··· 1978 1956 } 1979 1957 /* fall through */ 1980 1958 default: 1981 - nfc_prepare_interrupt(host, NFC_SR_RB_EDGE); 1982 - nfc_wait_interrupt(host, NFC_SR_RB_EDGE); 1959 + nfc_prepare_interrupt(host, host->nfc->caps->rb_mask); 1960 + nfc_wait_interrupt(host, host->nfc->caps->rb_mask); 1983 1961 } 1984 1962 } 1985 1963 ··· 2326 2304 return 0; 2327 2305 } 2328 2306 2307 + /* 2308 + * AT91RM9200 does not have PMECC or PMECC Errloc peripherals for 2309 + * BCH ECC. Combined with the "atmel,has-pmecc", it is used to describe 2310 + * devices from the SAM9 family that have those. 2311 + */ 2329 2312 static const struct atmel_nand_caps at91rm9200_caps = { 2330 2313 .pmecc_correct_erase_page = false, 2314 + .pmecc_max_correction = 24, 2331 2315 }; 2332 2316 2333 2317 static const struct atmel_nand_caps sama5d4_caps = { 2334 2318 .pmecc_correct_erase_page = true, 2319 + .pmecc_max_correction = 24, 2320 + }; 2321 + 2322 + /* 2323 + * The PMECC Errloc controller starting in SAMA5D2 is not compatible, 2324 + * as the increased correction strength requires more registers. 
2325 + */ 2326 + static const struct atmel_nand_caps sama5d2_caps = { 2327 + .pmecc_correct_erase_page = true, 2328 + .pmecc_max_correction = 32, 2335 2329 }; 2336 2330 2337 2331 static const struct of_device_id atmel_nand_dt_ids[] = { 2338 2332 { .compatible = "atmel,at91rm9200-nand", .data = &at91rm9200_caps }, 2339 2333 { .compatible = "atmel,sama5d4-nand", .data = &sama5d4_caps }, 2334 + { .compatible = "atmel,sama5d2-nand", .data = &sama5d2_caps }, 2340 2335 { /* sentinel */ } 2341 2336 }; 2342 2337 ··· 2393 2354 } 2394 2355 } 2395 2356 2357 + nfc->caps = (const struct atmel_nand_nfc_caps *) 2358 + of_device_get_match_data(&pdev->dev); 2359 + if (!nfc->caps) 2360 + return -ENODEV; 2361 + 2396 2362 nfc_writel(nfc->hsmc_regs, IDR, 0xffffffff); 2397 2363 nfc_readl(nfc->hsmc_regs, SR); /* clear the NFC_SR */ 2398 2364 ··· 2426 2382 return 0; 2427 2383 } 2428 2384 2385 + static const struct atmel_nand_nfc_caps sama5d3_nfc_caps = { 2386 + .rb_mask = NFC_SR_RB_EDGE0, 2387 + }; 2388 + 2389 + static const struct atmel_nand_nfc_caps sama5d4_nfc_caps = { 2390 + .rb_mask = NFC_SR_RB_EDGE3, 2391 + }; 2392 + 2429 2393 static const struct of_device_id atmel_nand_nfc_match[] = { 2430 - { .compatible = "atmel,sama5d3-nfc" }, 2394 + { .compatible = "atmel,sama5d3-nfc", .data = &sama5d3_nfc_caps }, 2395 + { .compatible = "atmel,sama5d4-nfc", .data = &sama5d4_nfc_caps }, 2431 2396 { /* sentinel */ } 2432 2397 }; 2433 2398 MODULE_DEVICE_TABLE(of, atmel_nand_nfc_match);
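The per-SoC `pmecc_max_correction` capability introduced above also determines where the error-location (EL) registers start, because the SIGMA register bank grows with the correction strength. A minimal sketch of that offset arithmetic, with register offsets taken from this diff (the helper name is hypothetical, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define ATMEL_PMERRLOC_SIGMAx	0x028	/* first SIGMA register */

/*
 * The EL registers follow the SIGMA bank, which holds
 * (max_correction + 1) 32-bit words. This mirrors the
 * pmerrloc_el_base computation added in the diff.
 */
static uint32_t pmerrloc_el_offset(unsigned int max_correction)
{
	return ATMEL_PMERRLOC_SIGMAx + (max_correction + 1) * 4;
}
```

For 24-bit parts this yields 0x8c, matching the fixed `ATMEL_PMERRLOC_ELx` define the series removes; for the 32-bit-capable SAMA5D2 it yields 0xac, which is why a single hard-coded offset no longer works.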
+7 -2
drivers/mtd/nand/atmel_nand_ecc.h
··· 43 43 #define PMECC_CFG_BCH_ERR8 (2 << 0) 44 44 #define PMECC_CFG_BCH_ERR12 (3 << 0) 45 45 #define PMECC_CFG_BCH_ERR24 (4 << 0) 46 + #define PMECC_CFG_BCH_ERR32 (5 << 0) 46 47 47 48 #define PMECC_CFG_SECTOR512 (0 << 4) 48 49 #define PMECC_CFG_SECTOR1024 (1 << 4) ··· 109 108 #define PMERRLOC_ERR_NUM_MASK (0x1f << 8) 110 109 #define PMERRLOC_CALC_DONE (1 << 0) 111 110 #define ATMEL_PMERRLOC_SIGMAx 0x028 /* Error location SIGMA x */ 112 - #define ATMEL_PMERRLOC_ELx 0x08c /* Error location x */ 111 + 112 + /* 113 + * The ATMEL_PMERRLOC_ELx register location depends from the number of 114 + * bits corrected by the PMECC controller. Do not use it. 115 + */ 113 116 114 117 /* Register access macros for PMECC */ 115 118 #define pmecc_readl_relaxed(addr, reg) \ ··· 141 136 readl_relaxed((addr) + ATMEL_PMERRLOC_SIGMAx + ((n) * 4)) 142 137 143 138 #define pmerrloc_readl_el_relaxed(addr, n) \ 144 - readl_relaxed((addr) + ATMEL_PMERRLOC_ELx + ((n) * 4)) 139 + readl_relaxed((addr) + ((n) * 4)) 145 140 146 141 /* Galois field dimension */ 147 142 #define PMECC_GF_DIMENSION_13 13
+2 -1
drivers/mtd/nand/atmel_nand_nfc.h
··· 42 42 #define NFC_SR_UNDEF (1 << 21) 43 43 #define NFC_SR_AWB (1 << 22) 44 44 #define NFC_SR_ASE (1 << 23) 45 - #define NFC_SR_RB_EDGE (1 << 24) 45 + #define NFC_SR_RB_EDGE0 (1 << 24) 46 + #define NFC_SR_RB_EDGE3 (1 << 27) 46 47 47 48 #define ATMEL_HSMC_NFC_IER 0x0c 48 49 #define ATMEL_HSMC_NFC_IDR 0x10
+36 -6
drivers/mtd/nand/brcmnand/brcmnand.c
··· 311 311 [BRCMNAND_FC_BASE] = 0x400, 312 312 }; 313 313 314 + /* BRCMNAND v7.1 */ 315 + static const u16 brcmnand_regs_v71[] = { 316 + [BRCMNAND_CMD_START] = 0x04, 317 + [BRCMNAND_CMD_EXT_ADDRESS] = 0x08, 318 + [BRCMNAND_CMD_ADDRESS] = 0x0c, 319 + [BRCMNAND_INTFC_STATUS] = 0x14, 320 + [BRCMNAND_CS_SELECT] = 0x18, 321 + [BRCMNAND_CS_XOR] = 0x1c, 322 + [BRCMNAND_LL_OP] = 0x20, 323 + [BRCMNAND_CS0_BASE] = 0x50, 324 + [BRCMNAND_CS1_BASE] = 0, 325 + [BRCMNAND_CORR_THRESHOLD] = 0xdc, 326 + [BRCMNAND_CORR_THRESHOLD_EXT] = 0xe0, 327 + [BRCMNAND_UNCORR_COUNT] = 0xfc, 328 + [BRCMNAND_CORR_COUNT] = 0x100, 329 + [BRCMNAND_CORR_EXT_ADDR] = 0x10c, 330 + [BRCMNAND_CORR_ADDR] = 0x110, 331 + [BRCMNAND_UNCORR_EXT_ADDR] = 0x114, 332 + [BRCMNAND_UNCORR_ADDR] = 0x118, 333 + [BRCMNAND_SEMAPHORE] = 0x150, 334 + [BRCMNAND_ID] = 0x194, 335 + [BRCMNAND_ID_EXT] = 0x198, 336 + [BRCMNAND_LL_RDATA] = 0x19c, 337 + [BRCMNAND_OOB_READ_BASE] = 0x200, 338 + [BRCMNAND_OOB_READ_10_BASE] = 0, 339 + [BRCMNAND_OOB_WRITE_BASE] = 0x280, 340 + [BRCMNAND_OOB_WRITE_10_BASE] = 0, 341 + [BRCMNAND_FC_BASE] = 0x400, 342 + }; 343 + 314 344 enum brcmnand_cs_reg { 315 345 BRCMNAND_CS_CFG_EXT = 0, 316 346 BRCMNAND_CS_CFG, ··· 436 406 } 437 407 438 408 /* Register offsets */ 439 - if (ctrl->nand_version >= 0x0600) 409 + if (ctrl->nand_version >= 0x0701) 410 + ctrl->reg_offsets = brcmnand_regs_v71; 411 + else if (ctrl->nand_version >= 0x0600) 440 412 ctrl->reg_offsets = brcmnand_regs_v60; 441 413 else if (ctrl->nand_version >= 0x0500) 442 414 ctrl->reg_offsets = brcmnand_regs_v50; ··· 828 796 idx2 >= MTD_MAX_OOBFREE_ENTRIES_LARGE - 1) 829 797 break; 830 798 } 831 - goto out; 799 + 800 + return layout; 832 801 } 833 802 834 803 /* ··· 880 847 idx2 >= MTD_MAX_OOBFREE_ENTRIES_LARGE - 1) 881 848 break; 882 849 } 883 - out: 884 - /* Sum available OOB */ 885 - for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES_LARGE; i++) 886 - layout->oobavail += layout->oobfree[i].length; 850 + 887 851 return layout; 888 852 } 889 853
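The brcmnand hunk above selects a register-offset table by controller version. One subtlety worth noting: the new check is `>= 0x0701`, so a v7.0 controller (0x0700) still falls through to the v6.0 table. A sketch of that dispatch, returning the layout revision picked (the function is illustrative, not from the driver):

```c
#include <assert.h>

/*
 * Hypothetical dispatcher mirroring the version checks in the diff:
 * returns which register-layout revision would be selected.
 */
static unsigned int brcmnand_regs_table(unsigned int nand_version)
{
	if (nand_version >= 0x0701)
		return 0x0701;	/* brcmnand_regs_v71 */
	else if (nand_version >= 0x0600)
		return 0x0600;	/* brcmnand_regs_v60 */
	else if (nand_version >= 0x0500)
		return 0x0500;	/* brcmnand_regs_v50 */
	return 0;		/* older layouts */
}
```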
+1 -1
drivers/mtd/nand/cafe_nand.c
··· 537 537 return 0; 538 538 } 539 539 540 - static int cafe_nand_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip) 540 + static int cafe_nand_block_bad(struct mtd_info *mtd, loff_t ofs) 541 541 { 542 542 return 0; 543 543 }
+1 -1
drivers/mtd/nand/diskonchip.c
··· 794 794 } 795 795 } 796 796 797 - static int doc200x_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip) 797 + static int doc200x_block_bad(struct mtd_info *mtd, loff_t ofs) 798 798 { 799 799 /* This is our last resort if we couldn't find or create a BBT. Just 800 800 pretend all blocks are good. */
+1 -2
drivers/mtd/nand/docg4.c
··· 225 225 static struct nand_ecclayout docg4_oobinfo = { 226 226 .eccbytes = 9, 227 227 .eccpos = {7, 8, 9, 10, 11, 12, 13, 14, 15}, 228 - .oobavail = 5, 229 228 .oobfree = { {.offset = 2, .length = 5} } 230 229 }; 231 230 ··· 1120 1121 return ret; 1121 1122 } 1122 1123 1123 - static int docg4_block_neverbad(struct mtd_info *mtd, loff_t ofs, int getchip) 1124 + static int docg4_block_neverbad(struct mtd_info *mtd, loff_t ofs) 1124 1125 { 1125 1126 /* only called when module_param ignore_badblocks is set */ 1126 1127 return 0;
+60 -13
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
··· 1 1 /* 2 2 * Freescale GPMI NAND Flash Driver 3 3 * 4 - * Copyright (C) 2010-2011 Freescale Semiconductor, Inc. 4 + * Copyright (C) 2010-2015 Freescale Semiconductor, Inc. 5 5 * Copyright (C) 2008 Embedded Alley Solutions, Inc. 6 6 * 7 7 * This program is free software; you can redistribute it and/or modify ··· 136 136 * 137 137 * We may have available oob space in this case. 138 138 */ 139 - static bool set_geometry_by_ecc_info(struct gpmi_nand_data *this) 139 + static int set_geometry_by_ecc_info(struct gpmi_nand_data *this) 140 140 { 141 141 struct bch_geometry *geo = &this->bch_geometry; 142 142 struct nand_chip *chip = &this->nand; ··· 145 145 unsigned int block_mark_bit_offset; 146 146 147 147 if (!(chip->ecc_strength_ds > 0 && chip->ecc_step_ds > 0)) 148 - return false; 148 + return -EINVAL; 149 149 150 150 switch (chip->ecc_step_ds) { 151 151 case SZ_512: ··· 158 158 dev_err(this->dev, 159 159 "unsupported nand chip. ecc bits : %d, ecc size : %d\n", 160 160 chip->ecc_strength_ds, chip->ecc_step_ds); 161 - return false; 161 + return -EINVAL; 162 162 } 163 163 geo->ecc_chunk_size = chip->ecc_step_ds; 164 164 geo->ecc_strength = round_up(chip->ecc_strength_ds, 2); 165 165 if (!gpmi_check_ecc(this)) 166 - return false; 166 + return -EINVAL; 167 167 168 168 /* Keep the C >= O */ 169 169 if (geo->ecc_chunk_size < mtd->oobsize) { 170 170 dev_err(this->dev, 171 171 "unsupported nand chip. ecc size: %d, oob size : %d\n", 172 172 chip->ecc_step_ds, mtd->oobsize); 173 - return false; 173 + return -EINVAL; 174 174 } 175 175 176 176 /* The default value, see comment in the legacy_set_geometry(). */ ··· 242 242 + ALIGN(geo->ecc_chunk_count, 4); 243 243 244 244 if (!this->swap_block_mark) 245 - return true; 245 + return 0; 246 246 247 247 /* For bit swap. 
*/ 248 248 block_mark_bit_offset = mtd->writesize * 8 - ··· 251 251 252 252 geo->block_mark_byte_offset = block_mark_bit_offset / 8; 253 253 geo->block_mark_bit_offset = block_mark_bit_offset % 8; 254 - return true; 254 + return 0; 255 255 } 256 256 257 257 static int legacy_set_geometry(struct gpmi_nand_data *this) ··· 285 285 geo->ecc_strength = get_ecc_strength(this); 286 286 if (!gpmi_check_ecc(this)) { 287 287 dev_err(this->dev, 288 - "required ecc strength of the NAND chip: %d is not supported by the GPMI controller (%d)\n", 288 + "ecc strength: %d cannot be supported by the controller (%d)\n" 289 + "try to use minimum ecc strength that NAND chip required\n", 289 290 geo->ecc_strength, 290 291 this->devdata->bch_max_ecc_strength); 291 292 return -EINVAL; ··· 367 366 368 367 int common_nfc_set_geometry(struct gpmi_nand_data *this) 369 368 { 370 - if (of_property_read_bool(this->dev->of_node, "fsl,use-minimum-ecc") 371 - && set_geometry_by_ecc_info(this)) 372 - return 0; 373 - return legacy_set_geometry(this); 369 + if ((of_property_read_bool(this->dev->of_node, "fsl,use-minimum-ecc")) 370 + || legacy_set_geometry(this)) 371 + return set_geometry_by_ecc_info(this); 372 + 373 + return 0; 374 374 } 375 375 376 376 struct dma_chan *get_dma_chan(struct gpmi_nand_data *this) ··· 2035 2033 return 0; 2036 2034 } 2037 2035 2036 + #ifdef CONFIG_PM_SLEEP 2037 + static int gpmi_pm_suspend(struct device *dev) 2038 + { 2039 + struct gpmi_nand_data *this = dev_get_drvdata(dev); 2040 + 2041 + release_dma_channels(this); 2042 + return 0; 2043 + } 2044 + 2045 + static int gpmi_pm_resume(struct device *dev) 2046 + { 2047 + struct gpmi_nand_data *this = dev_get_drvdata(dev); 2048 + int ret; 2049 + 2050 + ret = acquire_dma_channels(this); 2051 + if (ret < 0) 2052 + return ret; 2053 + 2054 + /* re-init the GPMI registers */ 2055 + this->flags &= ~GPMI_TIMING_INIT_OK; 2056 + ret = gpmi_init(this); 2057 + if (ret) { 2058 + dev_err(this->dev, "Error setting GPMI : %d\n", ret); 2059 + 
return ret; 2060 + } 2061 + 2062 + /* re-init the BCH registers */ 2063 + ret = bch_set_geometry(this); 2064 + if (ret) { 2065 + dev_err(this->dev, "Error setting BCH : %d\n", ret); 2066 + return ret; 2067 + } 2068 + 2069 + /* re-init others */ 2070 + gpmi_extra_init(this); 2071 + 2072 + return 0; 2073 + } 2074 + #endif /* CONFIG_PM_SLEEP */ 2075 + 2076 + static const struct dev_pm_ops gpmi_pm_ops = { 2077 + SET_SYSTEM_SLEEP_PM_OPS(gpmi_pm_suspend, gpmi_pm_resume) 2078 + }; 2079 + 2038 2080 static struct platform_driver gpmi_nand_driver = { 2039 2081 .driver = { 2040 2082 .name = "gpmi-nand", 2083 + .pm = &gpmi_pm_ops, 2041 2084 .of_match_table = gpmi_nand_id_table, 2042 2085 }, 2043 2086 .probe = gpmi_nand_probe,
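The gpmi-nand hunk inverts the geometry-selection order: the legacy geometry is preferred unless `fsl,use-minimum-ecc` is set or the legacy computation fails, in which case the driver falls back to the ECC-info-based geometry (both callbacks now return 0 on success). A sketch of that control flow under stand-in callbacks (names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the two geometry computations; 0 means success. */
static int geo_ok(void)   { return 0; }
static int geo_fail(void) { return -1; }

/*
 * Mirrors the reworked common_nfc_set_geometry(): try the legacy
 * path first, falling back to the ECC-info path when the DT asks
 * for minimum ECC or the legacy computation fails.
 */
static int set_geometry(bool use_minimum_ecc,
			int (*legacy)(void), int (*by_ecc_info)(void))
{
	if (use_minimum_ecc || legacy())
		return by_ecc_info();
	return 0;
}
```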
-1
drivers/mtd/nand/hisi504_nand.c
··· 632 632 } 633 633 634 634 static struct nand_ecclayout nand_ecc_2K_16bits = { 635 - .oobavail = 6, 636 635 .oobfree = { {2, 6} }, 637 636 }; 638 637
-3
drivers/mtd/nand/jz4740_nand.c
··· 427 427 chip->ecc.strength = 4; 428 428 chip->ecc.options = NAND_ECC_GENERIC_ERASED_CHECK; 429 429 430 - if (pdata) 431 - chip->ecc.layout = pdata->ecc_layout; 432 - 433 430 chip->chip_delay = 50; 434 431 chip->cmd_ctrl = jz_nand_cmd_ctrl; 435 432 chip->select_chip = jz_nand_select_chip;
+1 -1
drivers/mtd/nand/lpc32xx_mlc.c
··· 750 750 } 751 751 752 752 nand_chip->ecc.mode = NAND_ECC_HW; 753 - nand_chip->ecc.size = mtd->writesize; 753 + nand_chip->ecc.size = 512; 754 754 nand_chip->ecc.layout = &lpc32xx_nand_oob; 755 755 host->mlcsubpages = mtd->writesize / 512; 756 756
+2 -5
drivers/mtd/nand/mpc5121_nfc.c
··· 626 626 627 627 static int mpc5121_nfc_probe(struct platform_device *op) 628 628 { 629 - struct device_node *rootnode, *dn = op->dev.of_node; 629 + struct device_node *dn = op->dev.of_node; 630 630 struct clk *clk; 631 631 struct device *dev = &op->dev; 632 632 struct mpc5121_nfc_prv *prv; ··· 712 712 chip->ecc.mode = NAND_ECC_SOFT; 713 713 714 714 /* Support external chip-select logic on ADS5121 board */ 715 - rootnode = of_find_node_by_path("/"); 716 - if (of_device_is_compatible(rootnode, "fsl,mpc5121ads")) { 715 + if (of_machine_is_compatible("fsl,mpc5121ads")) { 717 716 retval = ads5121_chipselect_init(mtd); 718 717 if (retval) { 719 718 dev_err(dev, "Chipselect init error!\n"); 720 - of_node_put(rootnode); 721 719 return retval; 722 720 } 723 721 724 722 chip->select_chip = ads5121_select_chip; 725 723 } 726 - of_node_put(rootnode); 727 724 728 725 /* Enable NFC clock */ 729 726 clk = devm_clk_get(dev, "ipg");
+32 -46
drivers/mtd/nand/nand_base.c
··· 313 313 * nand_block_bad - [DEFAULT] Read bad block marker from the chip 314 314 * @mtd: MTD device structure 315 315 * @ofs: offset from device start 316 - * @getchip: 0, if the chip is already selected 317 316 * 318 317 * Check, if the block is bad. 319 318 */ 320 - static int nand_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip) 319 + static int nand_block_bad(struct mtd_info *mtd, loff_t ofs) 321 320 { 322 - int page, chipnr, res = 0, i = 0; 321 + int page, res = 0, i = 0; 323 322 struct nand_chip *chip = mtd_to_nand(mtd); 324 323 u16 bad; 325 324 ··· 326 327 ofs += mtd->erasesize - mtd->writesize; 327 328 328 329 page = (int)(ofs >> chip->page_shift) & chip->pagemask; 329 - 330 - if (getchip) { 331 - chipnr = (int)(ofs >> chip->chip_shift); 332 - 333 - nand_get_device(mtd, FL_READING); 334 - 335 - /* Select the NAND device */ 336 - chip->select_chip(mtd, chipnr); 337 - } 338 330 339 331 do { 340 332 if (chip->options & NAND_BUSWIDTH_16) { ··· 350 360 page = (int)(ofs >> chip->page_shift) & chip->pagemask; 351 361 i++; 352 362 } while (!res && i < 2 && (chip->bbt_options & NAND_BBT_SCAN2NDPAGE)); 353 - 354 - if (getchip) { 355 - chip->select_chip(mtd, -1); 356 - nand_release_device(mtd); 357 - } 358 363 359 364 return res; 360 365 } ··· 488 503 * nand_block_checkbad - [GENERIC] Check if a block is marked bad 489 504 * @mtd: MTD device structure 490 505 * @ofs: offset from device start 491 - * @getchip: 0, if the chip is already selected 492 506 * @allowbbt: 1, if its allowed to access the bbt area 493 507 * 494 508 * Check, if the block is bad. Either by reading the bad block table or 495 509 * calling of the scan function. 
496 510 */ 497 - static int nand_block_checkbad(struct mtd_info *mtd, loff_t ofs, int getchip, 498 - int allowbbt) 511 + static int nand_block_checkbad(struct mtd_info *mtd, loff_t ofs, int allowbbt) 499 512 { 500 513 struct nand_chip *chip = mtd_to_nand(mtd); 501 514 502 515 if (!chip->bbt) 503 - return chip->block_bad(mtd, ofs, getchip); 516 + return chip->block_bad(mtd, ofs); 504 517 505 518 /* Return info from the table */ 506 519 return nand_isbad_bbt(mtd, ofs, allowbbt); ··· 549 566 cond_resched(); 550 567 } while (time_before(jiffies, timeo)); 551 568 552 - pr_warn_ratelimited( 553 - "timeout while waiting for chip to become ready\n"); 569 + if (!chip->dev_ready(mtd)) 570 + pr_warn_ratelimited("timeout while waiting for chip to become ready\n"); 554 571 out: 555 572 led_trigger_event(nand_led_trigger, LED_OFF); 556 573 } ··· 1706 1723 int ret = 0; 1707 1724 uint32_t readlen = ops->len; 1708 1725 uint32_t oobreadlen = ops->ooblen; 1709 - uint32_t max_oobsize = ops->mode == MTD_OPS_AUTO_OOB ? 1710 - mtd->oobavail : mtd->oobsize; 1726 + uint32_t max_oobsize = mtd_oobavail(mtd, ops); 1711 1727 1712 1728 uint8_t *bufpoi, *oob, *buf; 1713 1729 int use_bufpoi; ··· 2057 2075 2058 2076 stats = mtd->ecc_stats; 2059 2077 2060 - if (ops->mode == MTD_OPS_AUTO_OOB) 2061 - len = chip->ecc.layout->oobavail; 2062 - else 2063 - len = mtd->oobsize; 2078 + len = mtd_oobavail(mtd, ops); 2064 2079 2065 2080 if (unlikely(ops->ooboffs >= len)) { 2066 2081 pr_debug("%s: attempt to start read outside oob\n", ··· 2554 2575 uint32_t writelen = ops->len; 2555 2576 2556 2577 uint32_t oobwritelen = ops->ooblen; 2557 - uint32_t oobmaxlen = ops->mode == MTD_OPS_AUTO_OOB ? 
2558 - mtd->oobavail : mtd->oobsize; 2578 + uint32_t oobmaxlen = mtd_oobavail(mtd, ops); 2559 2579 2560 2580 uint8_t *oob = ops->oobbuf; 2561 2581 uint8_t *buf = ops->datbuf; ··· 2744 2766 pr_debug("%s: to = 0x%08x, len = %i\n", 2745 2767 __func__, (unsigned int)to, (int)ops->ooblen); 2746 2768 2747 - if (ops->mode == MTD_OPS_AUTO_OOB) 2748 - len = chip->ecc.layout->oobavail; 2749 - else 2750 - len = mtd->oobsize; 2769 + len = mtd_oobavail(mtd, ops); 2751 2770 2752 2771 /* Do not allow write past end of page */ 2753 2772 if ((ops->ooboffs + ops->ooblen) > len) { ··· 2932 2957 while (len) { 2933 2958 /* Check if we have a bad block, we do not erase bad blocks! */ 2934 2959 if (nand_block_checkbad(mtd, ((loff_t) page) << 2935 - chip->page_shift, 0, allowbbt)) { 2960 + chip->page_shift, allowbbt)) { 2936 2961 pr_warn("%s: attempt to erase a bad block at page 0x%08x\n", 2937 2962 __func__, page); 2938 2963 instr->state = MTD_ERASE_FAILED; ··· 3019 3044 */ 3020 3045 static int nand_block_isbad(struct mtd_info *mtd, loff_t offs) 3021 3046 { 3022 - return nand_block_checkbad(mtd, offs, 1, 0); 3047 + struct nand_chip *chip = mtd_to_nand(mtd); 3048 + int chipnr = (int)(offs >> chip->chip_shift); 3049 + int ret; 3050 + 3051 + /* Select the NAND device */ 3052 + nand_get_device(mtd, FL_READING); 3053 + chip->select_chip(mtd, chipnr); 3054 + 3055 + ret = nand_block_checkbad(mtd, offs, 0); 3056 + 3057 + chip->select_chip(mtd, -1); 3058 + nand_release_device(mtd); 3059 + 3060 + return ret; 3023 3061 } 3024 3062 3025 3063 /** ··· 4275 4287 } 4276 4288 4277 4289 /* See nand_bch_init() for details. 
*/ 4278 - ecc->bytes = DIV_ROUND_UP( 4279 - ecc->strength * fls(8 * ecc->size), 8); 4280 - ecc->priv = nand_bch_init(mtd, ecc->size, ecc->bytes, 4281 - &ecc->layout); 4290 + ecc->bytes = 0; 4291 + ecc->priv = nand_bch_init(mtd); 4282 4292 if (!ecc->priv) { 4283 4293 pr_warn("BCH ECC initialization failed!\n"); 4284 4294 BUG(); ··· 4311 4325 * The number of bytes available for a client to place data into 4312 4326 * the out of band area. 4313 4327 */ 4314 - ecc->layout->oobavail = 0; 4315 - for (i = 0; ecc->layout->oobfree[i].length 4316 - && i < ARRAY_SIZE(ecc->layout->oobfree); i++) 4317 - ecc->layout->oobavail += ecc->layout->oobfree[i].length; 4318 - mtd->oobavail = ecc->layout->oobavail; 4328 + mtd->oobavail = 0; 4329 + if (ecc->layout) { 4330 + for (i = 0; ecc->layout->oobfree[i].length; i++) 4331 + mtd->oobavail += ecc->layout->oobfree[i].length; 4332 + } 4319 4333 4320 4334 /* ECC sanity check: warn if it's too weak */ 4321 4335 if (!nand_ecc_strength_good(mtd))
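Several hunks in nand_base.c collapse the repeated `ops->mode == MTD_OPS_AUTO_OOB ? mtd->oobavail : mtd->oobsize` expression into the new `mtd_oobavail()` helper. A self-contained sketch of that helper with minimal stand-in types (the real kernel structs carry many more fields):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-ins for the MTD types involved (illustrative only). */
enum { MTD_OPS_PLACE_OOB, MTD_OPS_AUTO_OOB, MTD_OPS_RAW };

struct mtd_info { uint32_t oobsize, oobavail; };
struct mtd_oob_ops { int mode; };

/* The helper factored out in this series. */
static uint32_t mtd_oobavail(struct mtd_info *mtd, struct mtd_oob_ops *ops)
{
	return ops->mode == MTD_OPS_AUTO_OOB ? mtd->oobavail : mtd->oobsize;
}
```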
-2
drivers/mtd/nand/nand_bbt.c
··· 1373 1373 1374 1374 return ret; 1375 1375 } 1376 - 1377 - EXPORT_SYMBOL(nand_scan_bbt);
+17 -10
drivers/mtd/nand/nand_bch.c
··· 107 107 /** 108 108 * nand_bch_init - [NAND Interface] Initialize NAND BCH error correction 109 109 * @mtd: MTD block structure 110 - * @eccsize: ecc block size in bytes 111 - * @eccbytes: ecc length in bytes 112 - * @ecclayout: output default layout 113 110 * 114 111 * Returns: 115 112 * a pointer to a new NAND BCH control structure, or NULL upon failure ··· 120 123 * @eccsize = 512 (thus, m=13 is the smallest integer such that 2^m-1 > 512*8) 121 124 * @eccbytes = 7 (7 bytes are required to store m*t = 13*4 = 52 bits) 122 125 */ 123 - struct nand_bch_control * 124 - nand_bch_init(struct mtd_info *mtd, unsigned int eccsize, unsigned int eccbytes, 125 - struct nand_ecclayout **ecclayout) 126 + struct nand_bch_control *nand_bch_init(struct mtd_info *mtd) 126 127 { 128 + struct nand_chip *nand = mtd_to_nand(mtd); 127 129 unsigned int m, t, eccsteps, i; 128 - struct nand_ecclayout *layout; 130 + struct nand_ecclayout *layout = nand->ecc.layout; 129 131 struct nand_bch_control *nbc = NULL; 130 132 unsigned char *erased_page; 133 + unsigned int eccsize = nand->ecc.size; 134 + unsigned int eccbytes = nand->ecc.bytes; 135 + unsigned int eccstrength = nand->ecc.strength; 136 + 137 + if (!eccbytes && eccstrength) { 138 + eccbytes = DIV_ROUND_UP(eccstrength * fls(8 * eccsize), 8); 139 + nand->ecc.bytes = eccbytes; 140 + } 131 141 132 142 if (!eccsize || !eccbytes) { 133 143 printk(KERN_WARNING "ecc parameters not supplied\n"); ··· 162 158 eccsteps = mtd->writesize/eccsize; 163 159 164 160 /* if no ecc placement scheme was provided, build one */ 165 - if (!*ecclayout) { 161 + if (!layout) { 166 162 167 163 /* handle large page devices only */ 168 164 if (mtd->oobsize < 64) { ··· 188 184 layout->oobfree[0].offset = 2; 189 185 layout->oobfree[0].length = mtd->oobsize-2-layout->eccbytes; 190 186 191 - *ecclayout = layout; 187 + nand->ecc.layout = layout; 192 188 } 193 189 194 190 /* sanity checks */ ··· 196 192 printk(KERN_WARNING "eccsize %u is too large\n", eccsize); 197 
193 goto fail; 198 194 } 199 - if ((*ecclayout)->eccbytes != (eccsteps*eccbytes)) { 195 + if (layout->eccbytes != (eccsteps*eccbytes)) { 200 196 printk(KERN_WARNING "invalid ecc layout\n"); 201 197 goto fail; 202 198 } ··· 219 215 220 216 for (i = 0; i < eccbytes; i++) 221 217 nbc->eccmask[i] ^= 0xff; 218 + 219 + if (!eccstrength) 220 + nand->ecc.strength = (eccbytes * 8) / fls(8 * eccsize); 222 221 223 222 return nbc; 224 223 fail:
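With the simplified `nand_bch_init(mtd)` signature, the eccbytes/strength arithmetic that callers used to do now lives inside the function: bytes are derived from strength when only strength is given, and strength is back-computed when only bytes are given. A sketch of both derivations, using the kerneldoc's own example values (`fls_` is a stand-in for the kernel's `fls()`):

```c
#include <assert.h>

/* Stand-in for the kernel's fls(): index of the highest set bit. */
static int fls_(unsigned int x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* eccbytes needed to store m*t bits, m = fls(8 * eccsize). */
static unsigned int bch_eccbytes(unsigned int strength, unsigned int eccsize)
{
	return DIV_ROUND_UP(strength * fls_(8 * eccsize), 8);
}

/* Inverse: strength recoverable from a given byte budget. */
static unsigned int bch_strength(unsigned int eccbytes, unsigned int eccsize)
{
	return (eccbytes * 8) / fls_(8 * eccsize);
}
```

For the documented case of eccsize = 512 and t = 4: m = fls(4096) = 13, so 13 * 4 = 52 bits need 7 bytes, matching the `@eccbytes = 7` example retained in the kerneldoc.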
+2 -2
drivers/mtd/nand/nand_ids.c
··· 50 50 SZ_16K, SZ_8K, SZ_4M, 0, 6, 1280, NAND_ECC_INFO(40, SZ_1K) }, 51 51 {"H27UCG8T2ATR-BC 64G 3.3V 8-bit", 52 52 { .id = {0xad, 0xde, 0x94, 0xda, 0x74, 0xc4} }, 53 - SZ_8K, SZ_8K, SZ_2M, 0, 6, 640, NAND_ECC_INFO(40, SZ_1K), 54 - 4 }, 53 + SZ_8K, SZ_8K, SZ_2M, NAND_NEED_SCRAMBLING, 6, 640, 54 + NAND_ECC_INFO(40, SZ_1K), 4 }, 55 55 56 56 LEGACY_ID_NAND("NAND 4MiB 5V 8-bit", 0x6B, 4, SZ_8K, SP_OPTIONS), 57 57 LEGACY_ID_NAND("NAND 4MiB 3,3V 8-bit", 0xE3, 4, SZ_8K, SP_OPTIONS),
+1 -1
drivers/mtd/nand/nuc900_nand.c
··· 113 113 { 114 114 unsigned int val; 115 115 spin_lock(&nand->lock); 116 - val = __raw_readl(REG_SMISR); 116 + val = __raw_readl(nand->reg + REG_SMISR); 117 117 val &= READYBUSY; 118 118 spin_unlock(&nand->lock); 119 119
+12 -16
drivers/mtd/nand/omap2.c
··· 1807 1807 goto return_error; 1808 1808 } 1809 1809 1810 + /* 1811 + * Bail out earlier to let NAND_ECC_SOFT code create its own 1812 + * ecclayout instead of using ours. 1813 + */ 1814 + if (info->ecc_opt == OMAP_ECC_HAM1_CODE_SW) { 1815 + nand_chip->ecc.mode = NAND_ECC_SOFT; 1816 + goto scan_tail; 1817 + } 1818 + 1810 1819 /* populate MTD interface based on ECC scheme */ 1811 1820 ecclayout = &info->oobinfo; 1821 + nand_chip->ecc.layout = ecclayout; 1812 1822 switch (info->ecc_opt) { 1813 - case OMAP_ECC_HAM1_CODE_SW: 1814 - nand_chip->ecc.mode = NAND_ECC_SOFT; 1815 - break; 1816 - 1817 1823 case OMAP_ECC_HAM1_CODE_HW: 1818 1824 pr_info("nand: using OMAP_ECC_HAM1_CODE_HW\n"); 1819 1825 nand_chip->ecc.mode = NAND_ECC_HW; ··· 1867 1861 ecclayout->oobfree->offset = 1 + 1868 1862 ecclayout->eccpos[ecclayout->eccbytes - 1] + 1; 1869 1863 /* software bch library is used for locating errors */ 1870 - nand_chip->ecc.priv = nand_bch_init(mtd, 1871 - nand_chip->ecc.size, 1872 - nand_chip->ecc.bytes, 1873 - &ecclayout); 1864 + nand_chip->ecc.priv = nand_bch_init(mtd); 1874 1865 if (!nand_chip->ecc.priv) { 1875 1866 dev_err(&info->pdev->dev, "unable to use BCH library\n"); 1876 1867 err = -EINVAL; ··· 1928 1925 ecclayout->oobfree->offset = 1 + 1929 1926 ecclayout->eccpos[ecclayout->eccbytes - 1] + 1; 1930 1927 /* software bch library is used for locating errors */ 1931 - nand_chip->ecc.priv = nand_bch_init(mtd, 1932 - nand_chip->ecc.size, 1933 - nand_chip->ecc.bytes, 1934 - &ecclayout); 1928 + nand_chip->ecc.priv = nand_bch_init(mtd); 1935 1929 if (!nand_chip->ecc.priv) { 1936 1930 dev_err(&info->pdev->dev, "unable to use BCH library\n"); 1937 1931 err = -EINVAL; ··· 2002 2002 goto return_error; 2003 2003 } 2004 2004 2005 - if (info->ecc_opt == OMAP_ECC_HAM1_CODE_SW) 2006 - goto scan_tail; 2007 - 2008 2005 /* all OOB bytes from oobfree->offset till end off OOB are free */ 2009 2006 ecclayout->oobfree->length = mtd->oobsize - ecclayout->oobfree->offset; 2010 2007 /* check 
if NAND device's OOB is enough to store ECC signatures */ ··· 2012 2015 err = -EINVAL; 2013 2016 goto return_error; 2014 2017 } 2015 - nand_chip->ecc.layout = ecclayout; 2016 2018 2017 2019 scan_tail: 2018 2020 /* second phase scan */
-1
drivers/mtd/nand/plat_nand.c
··· 73 73 data->chip.bbt_options |= pdata->chip.bbt_options; 74 74 75 75 data->chip.ecc.hwctl = pdata->ctrl.hwcontrol; 76 - data->chip.ecc.layout = pdata->chip.ecclayout; 77 76 data->chip.ecc.mode = NAND_ECC_SOFT; 78 77 79 78 platform_set_drvdata(pdev, data);
+118 -71
drivers/mtd/nand/pxa3xx_nand.c
··· 131 131 #define READ_ID_BYTES 7 132 132 133 133 /* macros for registers read/write */ 134 - #define nand_writel(info, off, val) \ 135 - writel_relaxed((val), (info)->mmio_base + (off)) 134 + #define nand_writel(info, off, val) \ 135 + do { \ 136 + dev_vdbg(&info->pdev->dev, \ 137 + "%s():%d nand_writel(0x%x, 0x%04x)\n", \ 138 + __func__, __LINE__, (val), (off)); \ 139 + writel_relaxed((val), (info)->mmio_base + (off)); \ 140 + } while (0) 136 141 137 - #define nand_readl(info, off) \ 138 - readl_relaxed((info)->mmio_base + (off)) 142 + #define nand_readl(info, off) \ 143 + ({ \ 144 + unsigned int _v; \ 145 + _v = readl_relaxed((info)->mmio_base + (off)); \ 146 + dev_vdbg(&info->pdev->dev, \ 147 + "%s():%d nand_readl(0x%04x) = 0x%x\n", \ 148 + __func__, __LINE__, (off), _v); \ 149 + _v; \ 150 + }) 139 151 140 152 /* error code and state */ 141 153 enum { ··· 211 199 struct dma_chan *dma_chan; 212 200 dma_cookie_t dma_cookie; 213 201 int drcmr_dat; 214 - int drcmr_cmd; 215 202 216 203 unsigned char *data_buff; 217 204 unsigned char *oob_buff; ··· 233 222 int use_spare; /* use spare ? */ 234 223 int need_wait; 235 224 236 - unsigned int data_size; /* data to be read from FIFO */ 237 - unsigned int chunk_size; /* split commands chunk size */ 238 - unsigned int oob_size; 225 + /* Amount of real data per full chunk */ 226 + unsigned int chunk_size; 227 + 228 + /* Amount of spare data per full chunk */ 239 229 unsigned int spare_size; 230 + 231 + /* Number of full chunks (i.e chunk_size + spare_size) */ 232 + unsigned int nfullchunks; 233 + 234 + /* 235 + * Total number of chunks. If equal to nfullchunks, then there 236 + * are only full chunks. 
Otherwise, there is one last chunk of 237 + * size (last_chunk_size + last_spare_size) 238 + */ 239 + unsigned int ntotalchunks; 240 + 241 + /* Amount of real data in the last chunk */ 242 + unsigned int last_chunk_size; 243 + 244 + /* Amount of spare data in the last chunk */ 245 + unsigned int last_spare_size; 246 + 240 247 unsigned int ecc_size; 241 248 unsigned int ecc_err_cnt; 242 249 unsigned int max_bitflips; 243 250 int retcode; 251 + 252 + /* 253 + * Variables only valid during command 254 + * execution. step_chunk_size and step_spare_size is the 255 + * amount of real data and spare data in the current 256 + * chunk. cur_chunk is the current chunk being 257 + * read/programmed. 258 + */ 259 + unsigned int step_chunk_size; 260 + unsigned int step_spare_size; 261 + unsigned int cur_chunk; 244 262 245 263 /* cached register value */ 246 264 uint32_t reg_ndcr; ··· 566 526 return 0; 567 527 } 568 528 569 - /* 570 - * Set the data and OOB size, depending on the selected 571 - * spare and ECC configuration. 572 - * Only applicable to READ0, READOOB and PAGEPROG commands. 573 - */ 574 - static void pxa3xx_set_datasize(struct pxa3xx_nand_info *info, 575 - struct mtd_info *mtd) 576 - { 577 - int oob_enable = info->reg_ndcr & NDCR_SPARE_EN; 578 - 579 - info->data_size = mtd->writesize; 580 - if (!oob_enable) 581 - return; 582 - 583 - info->oob_size = info->spare_size; 584 - if (!info->use_ecc) 585 - info->oob_size += info->ecc_size; 586 - } 587 - 588 529 /** 589 530 * NOTE: it is a must to set ND_RUN firstly, then write 590 531 * command buffer, otherwise, it does not work. 
···
 static void handle_data_pio(struct pxa3xx_nand_info *info)
 {
-	unsigned int do_bytes = min(info->data_size, info->chunk_size);
-
 	switch (info->state) {
 	case STATE_PIO_WRITING:
-		writesl(info->mmio_base + NDDB,
-			info->data_buff + info->data_buff_pos,
-			DIV_ROUND_UP(do_bytes, 4));
+		if (info->step_chunk_size)
+			writesl(info->mmio_base + NDDB,
+				info->data_buff + info->data_buff_pos,
+				DIV_ROUND_UP(info->step_chunk_size, 4));

-		if (info->oob_size > 0)
+		if (info->step_spare_size)
 			writesl(info->mmio_base + NDDB,
 				info->oob_buff + info->oob_buff_pos,
-				DIV_ROUND_UP(info->oob_size, 4));
+				DIV_ROUND_UP(info->step_spare_size, 4));
 		break;
 	case STATE_PIO_READING:
-		drain_fifo(info,
-			   info->data_buff + info->data_buff_pos,
-			   DIV_ROUND_UP(do_bytes, 4));
+		if (info->step_chunk_size)
+			drain_fifo(info,
+				   info->data_buff + info->data_buff_pos,
+				   DIV_ROUND_UP(info->step_chunk_size, 4));

-		if (info->oob_size > 0)
+		if (info->step_spare_size)
 			drain_fifo(info,
 				   info->oob_buff + info->oob_buff_pos,
-				   DIV_ROUND_UP(info->oob_size, 4));
+				   DIV_ROUND_UP(info->step_spare_size, 4));
 		break;
 	default:
 		dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__,
···
 	}

 	/* Update buffer pointers for multi-page read/write */
-	info->data_buff_pos += do_bytes;
-	info->oob_buff_pos += info->oob_size;
-	info->data_size -= do_bytes;
+	info->data_buff_pos += info->step_chunk_size;
+	info->oob_buff_pos += info->step_spare_size;
 }

 static void pxa3xx_nand_data_dma_irq(void *data)
···
 			info->state);
 		BUG();
 	}
-	info->sg.length = info->data_size +
-		(info->oob_size ? info->spare_size + info->ecc_size : 0);
+	info->sg.length = info->chunk_size;
+	if (info->use_spare)
+		info->sg.length += info->spare_size + info->ecc_size;
 	dma_map_sg(info->dma_chan->device->dev, &info->sg, 1, info->dma_dir);

 	tx = dmaengine_prep_slave_sg(info->dma_chan, &info->sg, 1, direction,
···
 	/* reset data and oob column point to handle data */
 	info->buf_start		= 0;
 	info->buf_count		= 0;
-	info->oob_size		= 0;
 	info->data_buff_pos	= 0;
 	info->oob_buff_pos	= 0;
+	info->step_chunk_size	= 0;
+	info->step_spare_size	= 0;
+	info->cur_chunk		= 0;
 	info->use_ecc		= 0;
 	info->use_spare		= 1;
 	info->retcode		= ERR_NONE;
···
 	case NAND_CMD_READ0:
 	case NAND_CMD_PAGEPROG:
 		info->use_ecc = 1;
-	case NAND_CMD_READOOB:
-		pxa3xx_set_datasize(info, mtd);
 		break;
 	case NAND_CMD_PARAM:
 		info->use_spare = 0;
···
 		if (command == NAND_CMD_READOOB)
 			info->buf_start += mtd->writesize;

+		if (info->cur_chunk < info->nfullchunks) {
+			info->step_chunk_size = info->chunk_size;
+			info->step_spare_size = info->spare_size;
+		} else {
+			info->step_chunk_size = info->last_chunk_size;
+			info->step_spare_size = info->last_spare_size;
+		}
+
 		/*
 		 * Multiple page read needs an 'extended command type' field,
 		 * which is either naked-read or last-read according to the
···
 			info->ndcb0 |= NDCB0_DBC | (NAND_CMD_READSTART << 8)
 					| NDCB0_LEN_OVRD
 					| NDCB0_EXT_CMD_TYPE(ext_cmd_type);
-			info->ndcb3 = info->chunk_size +
-				      info->oob_size;
+			info->ndcb3 = info->step_chunk_size +
+				      info->step_spare_size;
 		}

 		set_command_address(info, mtd->writesize, column, page_addr);
···
 				| NDCB0_EXT_CMD_TYPE(ext_cmd_type)
 				| addr_cycle
 				| command;
-			/* No data transfer in this case */
-			info->data_size = 0;
 			exec_cmd = 1;
 		}
 		break;
···
 		    (mtd->writesize + mtd->oobsize))) {
 			exec_cmd = 0;
 			break;
+		}
+
+		if (info->cur_chunk < info->nfullchunks) {
+			info->step_chunk_size = info->chunk_size;
+			info->step_spare_size = info->spare_size;
+		} else {
+			info->step_chunk_size = info->last_chunk_size;
+			info->step_spare_size = info->last_spare_size;
 		}

 		/* Second command setting for large pages */
···
 			info->ndcb0 |= NDCB0_CMD_TYPE(0x1)
 					| NDCB0_LEN_OVRD
 					| NDCB0_EXT_CMD_TYPE(ext_cmd_type);
-			info->ndcb3 = info->chunk_size +
-				      info->oob_size;
+			info->ndcb3 = info->step_chunk_size +
+				      info->step_spare_size;

 			/*
 			 * This is the command dispatch that completes a chunked
 			 * page program operation.
 			 */
-			if (info->data_size == 0) {
+			if (info->cur_chunk == info->ntotalchunks) {
 				info->ndcb0 = NDCB0_CMD_TYPE(0x1)
 					| NDCB0_EXT_CMD_TYPE(ext_cmd_type)
 					| command;
···
 			| command;
 		info->ndcb1 = (column & 0xFF);
 		info->ndcb3 = INIT_BUFFER_SIZE;
-		info->data_size = INIT_BUFFER_SIZE;
+		info->step_chunk_size = INIT_BUFFER_SIZE;
 		break;

 	case NAND_CMD_READID:
···
 			| command;
 		info->ndcb1 = (column & 0xFF);

-		info->data_size = 8;
+		info->step_chunk_size = 8;
 		break;
 	case NAND_CMD_STATUS:
 		info->buf_count = 1;
···
 			| NDCB0_ADDR_CYC(1)
 			| command;

-		info->data_size = 8;
+		info->step_chunk_size = 8;
 		break;

 	case NAND_CMD_ERASE1:
···
 	init_completion(&info->dev_ready);
 	do {
 		info->state = STATE_PREPARED;
+
 		exec_cmd = prepare_set_command(info, command, ext_cmd_type,
 					       column, page_addr);
 		if (!exec_cmd) {
···
 			break;
 		}

+		/* Only a few commands need several steps */
+		if (command != NAND_CMD_PAGEPROG &&
+		    command != NAND_CMD_READ0 &&
+		    command != NAND_CMD_READOOB)
+			break;
+
+		info->cur_chunk++;
+
 		/* Check if the sequence is complete */
-		if (info->data_size == 0 && command != NAND_CMD_PAGEPROG)
+		if (info->cur_chunk == info->ntotalchunks &&
+		    command != NAND_CMD_PAGEPROG)
 			break;

 		/*
		 * After a split program command sequence has issued
 		 * the command dispatch, the command sequence is complete.
 		 */
-		if (info->data_size == 0 &&
+		if (info->cur_chunk == (info->ntotalchunks + 1) &&
 		    command == NAND_CMD_PAGEPROG &&
 		    ext_cmd_type == EXT_CMD_TYPE_DISPATCH)
 			break;

 		if (command == NAND_CMD_READ0 || command == NAND_CMD_READOOB) {
 			/* Last read: issue a 'last naked read' */
-			if (info->data_size == info->chunk_size)
+			if (info->cur_chunk == info->ntotalchunks - 1)
 				ext_cmd_type = EXT_CMD_TYPE_LAST_RW;
 			else
 				ext_cmd_type = EXT_CMD_TYPE_NAKED_RW;
···
 		 * the command dispatch must be issued to complete.
 		 */
 		} else if (command == NAND_CMD_PAGEPROG &&
-			   info->data_size == 0) {
+			   info->cur_chunk == info->ntotalchunks) {
 			ext_cmd_type = EXT_CMD_TYPE_DISPATCH;
 		}
 	} while (1);
···
 				 int strength, int ecc_stepsize, int page_size)
 {
 	if (strength == 1 && ecc_stepsize == 512 && page_size == 2048) {
+		info->nfullchunks = 1;
+		info->ntotalchunks = 1;
 		info->chunk_size = 2048;
 		info->spare_size = 40;
 		info->ecc_size = 24;
···
 		ecc->strength = 1;

 	} else if (strength == 1 && ecc_stepsize == 512 && page_size == 512) {
+		info->nfullchunks = 1;
+		info->ntotalchunks = 1;
 		info->chunk_size = 512;
 		info->spare_size = 8;
 		info->ecc_size = 8;
···
 	 */
 	} else if (strength == 4 && ecc_stepsize == 512 && page_size == 2048) {
 		info->ecc_bch = 1;
+		info->nfullchunks = 1;
+		info->ntotalchunks = 1;
 		info->chunk_size = 2048;
 		info->spare_size = 32;
 		info->ecc_size = 32;
···

 	} else if (strength == 4 && ecc_stepsize == 512 && page_size == 4096) {
 		info->ecc_bch = 1;
+		info->nfullchunks = 2;
+		info->ntotalchunks = 2;
 		info->chunk_size = 2048;
 		info->spare_size = 32;
 		info->ecc_size = 32;
···
 	 */
 	} else if (strength == 8 && ecc_stepsize == 512 && page_size == 4096) {
 		info->ecc_bch = 1;
+		info->nfullchunks = 4;
+		info->ntotalchunks = 5;
 		info->chunk_size = 1024;
 		info->spare_size = 0;
+		info->last_chunk_size = 0;
+		info->last_spare_size = 64;
 		info->ecc_size = 32;
 		ecc->mode = NAND_ECC_HW;
 		ecc->size = info->chunk_size;
···
 	if (ret < 0)
 		return ret;

-	if (use_dma) {
+	if (!np && use_dma) {
 		r = platform_get_resource(pdev, IORESOURCE_DMA, 0);
 		if (r == NULL) {
 			dev_err(&pdev->dev,
···
 			goto fail_disable_clk;
 		}
 		info->drcmr_dat = r->start;
-
-		r = platform_get_resource(pdev, IORESOURCE_DMA, 1);
-		if (r == NULL) {
-			dev_err(&pdev->dev,
-				"no resource defined for cmd DMA\n");
-			ret = -ENXIO;
-			goto fail_disable_clk;
-		}
-		info->drcmr_cmd = r->start;
 	}

 	irq = platform_get_irq(pdev, 0);
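The pxa3xx change above replaces the running `data_size`/`oob_size` counters with explicit per-step accounting: the first `nfullchunks` steps of a page transfer the full chunk and spare sizes, and any trailing step (when `ntotalchunks > nfullchunks`) transfers the "last" sizes, as in the 8-bit ECC layout where a fifth step carries only the 64-byte spare area. A minimal standalone sketch of that selection logic (the `chunk_layout` struct and `step_sizes()` helper are illustrative names, not part of the driver):

```c
#include <assert.h>

/* Hypothetical mirror of the per-step accounting the patch introduces. */
struct chunk_layout {
	unsigned int nfullchunks, ntotalchunks;
	unsigned int chunk_size, spare_size;
	unsigned int last_chunk_size, last_spare_size;
};

/* Pick the data/spare transfer sizes for step 'cur_chunk' of a page. */
static void step_sizes(const struct chunk_layout *l, unsigned int cur_chunk,
		       unsigned int *step_chunk, unsigned int *step_spare)
{
	if (cur_chunk < l->nfullchunks) {
		*step_chunk = l->chunk_size;
		*step_spare = l->spare_size;
	} else {
		*step_chunk = l->last_chunk_size;
		*step_spare = l->last_spare_size;
	}
}
```

With the 8-bit ECC layout from the patch (`nfullchunks = 4`, `ntotalchunks = 5`, 1024-byte chunks, 64-byte last spare), steps 0-3 move 1024 data bytes each and step 4 moves only the spare area.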
+2223
drivers/mtd/nand/qcom_nandc.c
···
+/*
+ * Copyright (c) 2016, The Linux Foundation. All rights reserved.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk.h>
+#include <linux/slab.h>
+#include <linux/bitops.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
+#include <linux/module.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_mtd.h>
+#include <linux/delay.h>
+
+/* NANDc reg offsets */
+#define	NAND_FLASH_CMD			0x00
+#define	NAND_ADDR0			0x04
+#define	NAND_ADDR1			0x08
+#define	NAND_FLASH_CHIP_SELECT		0x0c
+#define	NAND_EXEC_CMD			0x10
+#define	NAND_FLASH_STATUS		0x14
+#define	NAND_BUFFER_STATUS		0x18
+#define	NAND_DEV0_CFG0			0x20
+#define	NAND_DEV0_CFG1			0x24
+#define	NAND_DEV0_ECC_CFG		0x28
+#define	NAND_DEV1_ECC_CFG		0x2c
+#define	NAND_DEV1_CFG0			0x30
+#define	NAND_DEV1_CFG1			0x34
+#define	NAND_READ_ID			0x40
+#define	NAND_READ_STATUS		0x44
+#define	NAND_DEV_CMD0			0xa0
+#define	NAND_DEV_CMD1			0xa4
+#define	NAND_DEV_CMD2			0xa8
+#define	NAND_DEV_CMD_VLD		0xac
+#define	SFLASHC_BURST_CFG		0xe0
+#define	NAND_ERASED_CW_DETECT_CFG	0xe8
+#define	NAND_ERASED_CW_DETECT_STATUS	0xec
+#define	NAND_EBI2_ECC_BUF_CFG		0xf0
+#define	FLASH_BUF_ACC			0x100
+
+#define	NAND_CTRL			0xf00
+#define	NAND_VERSION			0xf08
+#define	NAND_READ_LOCATION_0		0xf20
+#define	NAND_READ_LOCATION_1		0xf24
+
+/* dummy register offsets, used by write_reg_dma */
+#define	NAND_DEV_CMD1_RESTORE		0xdead
+#define	NAND_DEV_CMD_VLD_RESTORE	0xbeef
+
+/* NAND_FLASH_CMD bits */
+#define	PAGE_ACC			BIT(4)
+#define	LAST_PAGE			BIT(5)
+
+/* NAND_FLASH_CHIP_SELECT bits */
+#define	NAND_DEV_SEL			0
+#define	DM_EN				BIT(2)
+
+/* NAND_FLASH_STATUS bits */
+#define	FS_OP_ERR			BIT(4)
+#define	FS_READY_BSY_N			BIT(5)
+#define	FS_MPU_ERR			BIT(8)
+#define	FS_DEVICE_STS_ERR		BIT(16)
+#define	FS_DEVICE_WP			BIT(23)
+
+/* NAND_BUFFER_STATUS bits */
+#define	BS_UNCORRECTABLE_BIT		BIT(8)
+#define	BS_CORRECTABLE_ERR_MSK		0x1f
+
+/* NAND_DEVn_CFG0 bits */
+#define	DISABLE_STATUS_AFTER_WRITE	4
+#define	CW_PER_PAGE			6
+#define	UD_SIZE_BYTES			9
+#define	ECC_PARITY_SIZE_BYTES_RS	19
+#define	SPARE_SIZE_BYTES		23
+#define	NUM_ADDR_CYCLES			27
+#define	STATUS_BFR_READ			30
+#define	SET_RD_MODE_AFTER_STATUS	31
+
+/* NAND_DEVn_CFG1 bits */
+#define	DEV0_CFG1_ECC_DISABLE		0
+#define	WIDE_FLASH			1
+#define	NAND_RECOVERY_CYCLES		2
+#define	CS_ACTIVE_BSY			5
+#define	BAD_BLOCK_BYTE_NUM		6
+#define	BAD_BLOCK_IN_SPARE_AREA		16
+#define	WR_RD_BSY_GAP			17
+#define	ENABLE_BCH_ECC			27
+
+/* NAND_DEV0_ECC_CFG bits */
+#define	ECC_CFG_ECC_DISABLE		0
+#define	ECC_SW_RESET			1
+#define	ECC_MODE			4
+#define	ECC_PARITY_SIZE_BYTES_BCH	8
+#define	ECC_NUM_DATA_BYTES		16
+#define	ECC_FORCE_CLK_OPEN		30
+
+/* NAND_DEV_CMD1 bits */
+#define	READ_ADDR			0
+
+/* NAND_DEV_CMD_VLD bits */
+#define	READ_START_VLD			0
+
+/* NAND_EBI2_ECC_BUF_CFG bits */
+#define	NUM_STEPS			0
+
+/* NAND_ERASED_CW_DETECT_CFG bits */
+#define	ERASED_CW_ECC_MASK		1
+#define	AUTO_DETECT_RES			0
+#define	MASK_ECC			(1 << ERASED_CW_ECC_MASK)
+#define	RESET_ERASED_DET		(1 << AUTO_DETECT_RES)
+#define	ACTIVE_ERASED_DET		(0 << AUTO_DETECT_RES)
+#define	CLR_ERASED_PAGE_DET		(RESET_ERASED_DET | MASK_ECC)
+#define	SET_ERASED_PAGE_DET		(ACTIVE_ERASED_DET | MASK_ECC)
+
+/* NAND_ERASED_CW_DETECT_STATUS bits */
+#define	PAGE_ALL_ERASED			BIT(7)
+#define	CODEWORD_ALL_ERASED		BIT(6)
+#define	PAGE_ERASED			BIT(5)
+#define	CODEWORD_ERASED			BIT(4)
+#define	ERASED_PAGE			(PAGE_ALL_ERASED | PAGE_ERASED)
+#define	ERASED_CW			(CODEWORD_ALL_ERASED | CODEWORD_ERASED)
+
+/* Version Mask */
+#define	NAND_VERSION_MAJOR_MASK		0xf0000000
+#define	NAND_VERSION_MAJOR_SHIFT	28
+#define	NAND_VERSION_MINOR_MASK		0x0fff0000
+#define	NAND_VERSION_MINOR_SHIFT	16
+
+/* NAND OP_CMDs */
+#define	PAGE_READ			0x2
+#define	PAGE_READ_WITH_ECC		0x3
+#define	PAGE_READ_WITH_ECC_SPARE	0x4
+#define	PROGRAM_PAGE			0x6
+#define	PAGE_PROGRAM_WITH_ECC		0x7
+#define	PROGRAM_PAGE_SPARE		0x9
+#define	BLOCK_ERASE			0xa
+#define	FETCH_ID			0xb
+#define	RESET_DEVICE			0xd
+
+/*
+ * the NAND controller performs reads/writes with ECC in 516 byte chunks.
+ * the driver calls the chunks 'step' or 'codeword' interchangeably
+ */
+#define	NANDC_STEP_SIZE			512
+
+/*
+ * the largest page size we support is 8K, this will have 16 steps/codewords
+ * of 512 bytes each
+ */
+#define	MAX_NUM_STEPS			(SZ_8K / NANDC_STEP_SIZE)
+
+/* we read at most 3 registers per codeword scan */
+#define	MAX_REG_RD			(3 * MAX_NUM_STEPS)
+
+/* ECC modes supported by the controller */
+#define	ECC_NONE	BIT(0)
+#define	ECC_RS_4BIT	BIT(1)
+#define	ECC_BCH_4BIT	BIT(2)
+#define	ECC_BCH_8BIT	BIT(3)
+
+struct desc_info {
+	struct list_head node;
+
+	enum dma_data_direction dir;
+	struct scatterlist sgl;
+	struct dma_async_tx_descriptor *dma_desc;
+};
+
+/*
+ * holds the current register values that we want to write. acts as a
+ * contiguous chunk of memory which we use to write the controller registers
+ * through DMA.
+ */
+struct nandc_regs {
+	__le32 cmd;
+	__le32 addr0;
+	__le32 addr1;
+	__le32 chip_sel;
+	__le32 exec;
+
+	__le32 cfg0;
+	__le32 cfg1;
+	__le32 ecc_bch_cfg;
+
+	__le32 clrflashstatus;
+	__le32 clrreadstatus;
+
+	__le32 cmd1;
+	__le32 vld;
+
+	__le32 orig_cmd1;
+	__le32 orig_vld;
+
+	__le32 ecc_buf_cfg;
+};
+
+/*
+ * NAND controller data struct
+ *
+ * @controller:			base controller structure
+ * @host_list:			list containing all the chips attached to the
+ *				controller
+ * @dev:			parent device
+ * @base:			MMIO base
+ * @base_dma:			physical base address of controller registers
+ * @core_clk:			controller clock
+ * @aon_clk:			another controller clock
+ *
+ * @chan:			dma channel
+ * @cmd_crci:			ADM DMA CRCI for command flow control
+ * @data_crci:			ADM DMA CRCI for data flow control
+ * @desc_list:			DMA descriptor list (list of desc_infos)
+ *
+ * @data_buffer:		our local DMA buffer for page read/writes,
+ *				used when we can't use the buffer provided
+ *				by upper layers directly
+ * @buf_size/count/start:	markers for chip->read_buf/write_buf functions
+ * @reg_read_buf:		local buffer for reading back registers via DMA
+ * @reg_read_pos:		marker for data read in reg_read_buf
+ *
+ * @regs:			a contiguous chunk of memory for DMA register
+ *				writes. contains the register values to be
+ *				written to controller
+ * @cmd1/vld:			some fixed controller register values
+ * @ecc_modes:			supported ECC modes by the current controller,
+ *				initialized via DT match data
+ */
+struct qcom_nand_controller {
+	struct nand_hw_control controller;
+	struct list_head host_list;
+
+	struct device *dev;
+
+	void __iomem *base;
+	dma_addr_t base_dma;
+
+	struct clk *core_clk;
+	struct clk *aon_clk;
+
+	struct dma_chan *chan;
+	unsigned int cmd_crci;
+	unsigned int data_crci;
+	struct list_head desc_list;
+
+	u8		*data_buffer;
+	int		buf_size;
+	int		buf_count;
+	int		buf_start;
+
+	__le32 *reg_read_buf;
+	int reg_read_pos;
+
+	struct nandc_regs *regs;
+
+	u32 cmd1, vld;
+	u32 ecc_modes;
+};
+
+/*
+ * NAND chip structure
+ *
+ * @chip:			base NAND chip structure
+ * @node:			list node to add itself to host_list in
+ *				qcom_nand_controller
+ *
+ * @cs:				chip select value for this chip
+ * @cw_size:			the number of bytes in a single step/codeword
+ *				of a page, consisting of all data, ecc, spare
+ *				and reserved bytes
+ * @cw_data:			the number of bytes within a codeword protected
+ *				by ECC
+ * @use_ecc:			request the controller to use ECC for the
+ *				upcoming read/write
+ * @bch_enabled:		flag to tell whether BCH ECC mode is used
+ * @ecc_bytes_hw:		ECC bytes used by controller hardware for this
+ *				chip
+ * @status:			value to be returned if NAND_CMD_STATUS command
+ *				is executed
+ * @last_command:		keeps track of last command on this chip. used
+ *				for reading correct status
+ *
+ * @cfg0, cfg1, cfg0_raw..:	NANDc register configurations needed for
+ *				ecc/non-ecc mode for the current nand flash
+ *				device
+ */
+struct qcom_nand_host {
+	struct nand_chip chip;
+	struct list_head node;
+
+	int cs;
+	int cw_size;
+	int cw_data;
+	bool use_ecc;
+	bool bch_enabled;
+	int ecc_bytes_hw;
+	int spare_bytes;
+	int bbm_size;
+	u8 status;
+	int last_command;
+
+	u32 cfg0, cfg1;
+	u32 cfg0_raw, cfg1_raw;
+	u32 ecc_buf_cfg;
+	u32 ecc_bch_cfg;
+	u32 clrflashstatus;
+	u32 clrreadstatus;
+};
+
+static inline struct qcom_nand_host *to_qcom_nand_host(struct nand_chip *chip)
+{
+	return container_of(chip, struct qcom_nand_host, chip);
+}
+
+static inline struct qcom_nand_controller *
+get_qcom_nand_controller(struct nand_chip *chip)
+{
+	return container_of(chip->controller, struct qcom_nand_controller,
+			    controller);
+}
+
+static inline u32 nandc_read(struct qcom_nand_controller *nandc, int offset)
+{
+	return ioread32(nandc->base + offset);
+}
+
+static inline void nandc_write(struct qcom_nand_controller *nandc, int offset,
+			       u32 val)
+{
+	iowrite32(val, nandc->base + offset);
+}
+
+static __le32 *offset_to_nandc_reg(struct nandc_regs *regs, int offset)
+{
+	switch (offset) {
+	case NAND_FLASH_CMD:
+		return &regs->cmd;
+	case NAND_ADDR0:
+		return &regs->addr0;
+	case NAND_ADDR1:
+		return &regs->addr1;
+	case NAND_FLASH_CHIP_SELECT:
+		return &regs->chip_sel;
+	case NAND_EXEC_CMD:
+		return &regs->exec;
+	case NAND_FLASH_STATUS:
+		return &regs->clrflashstatus;
+	case NAND_DEV0_CFG0:
+		return &regs->cfg0;
+	case NAND_DEV0_CFG1:
+		return &regs->cfg1;
+	case NAND_DEV0_ECC_CFG:
+		return &regs->ecc_bch_cfg;
+	case NAND_READ_STATUS:
+		return &regs->clrreadstatus;
+	case NAND_DEV_CMD1:
+		return &regs->cmd1;
+	case NAND_DEV_CMD1_RESTORE:
+		return &regs->orig_cmd1;
+	case NAND_DEV_CMD_VLD:
+		return &regs->vld;
+	case NAND_DEV_CMD_VLD_RESTORE:
+		return &regs->orig_vld;
+	case NAND_EBI2_ECC_BUF_CFG:
+		return &regs->ecc_buf_cfg;
+	default:
+		return NULL;
+	}
+}
+
+static void nandc_set_reg(struct qcom_nand_controller *nandc, int offset,
+			  u32 val)
+{
+	struct nandc_regs *regs = nandc->regs;
+	__le32 *reg;
+
+	reg = offset_to_nandc_reg(regs, offset);
+
+	if (reg)
+		*reg = cpu_to_le32(val);
+}
+
+/* helper to configure address register values */
+static void set_address(struct qcom_nand_host *host, u16 column, int page)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+
+	if (chip->options & NAND_BUSWIDTH_16)
+		column >>= 1;
+
+	nandc_set_reg(nandc, NAND_ADDR0, page << 16 | column);
+	nandc_set_reg(nandc, NAND_ADDR1, page >> 16 & 0xff);
+}
+
+/*
+ * update_rw_regs:	set up read/write register values, these will be
+ *			written to the NAND controller registers via DMA
+ *
+ * @num_cw:		number of steps for the read/write operation
+ * @read:		read or write operation
+ */
+static void update_rw_regs(struct qcom_nand_host *host, int num_cw, bool read)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+	u32 cmd, cfg0, cfg1, ecc_bch_cfg;
+
+	if (read) {
+		if (host->use_ecc)
+			cmd = PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE;
+		else
+			cmd = PAGE_READ | PAGE_ACC | LAST_PAGE;
+	} else {
+		cmd = PROGRAM_PAGE | PAGE_ACC | LAST_PAGE;
+	}
+
+	if (host->use_ecc) {
+		cfg0 = (host->cfg0 & ~(7U << CW_PER_PAGE)) |
+				(num_cw - 1) << CW_PER_PAGE;
+
+		cfg1 = host->cfg1;
+		ecc_bch_cfg = host->ecc_bch_cfg;
+	} else {
+		cfg0 = (host->cfg0_raw & ~(7U << CW_PER_PAGE)) |
+				(num_cw - 1) << CW_PER_PAGE;
+
+		cfg1 = host->cfg1_raw;
+		ecc_bch_cfg = 1 << ECC_CFG_ECC_DISABLE;
+	}
+
+	nandc_set_reg(nandc, NAND_FLASH_CMD, cmd);
+	nandc_set_reg(nandc, NAND_DEV0_CFG0, cfg0);
+	nandc_set_reg(nandc, NAND_DEV0_CFG1, cfg1);
+	nandc_set_reg(nandc, NAND_DEV0_ECC_CFG, ecc_bch_cfg);
+	nandc_set_reg(nandc, NAND_EBI2_ECC_BUF_CFG, host->ecc_buf_cfg);
+	nandc_set_reg(nandc, NAND_FLASH_STATUS, host->clrflashstatus);
+	nandc_set_reg(nandc, NAND_READ_STATUS, host->clrreadstatus);
+	nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
+}
+
+static int prep_dma_desc(struct qcom_nand_controller *nandc, bool read,
+			 int reg_off, const void *vaddr, int size,
+			 bool flow_control)
+{
+	struct desc_info *desc;
+	struct dma_async_tx_descriptor *dma_desc;
+	struct scatterlist *sgl;
+	struct dma_slave_config slave_conf;
+	enum dma_transfer_direction dir_eng;
+	int ret;
+
+	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+	if (!desc)
+		return -ENOMEM;
+
+	sgl = &desc->sgl;
+
+	sg_init_one(sgl, vaddr, size);
+
+	if (read) {
+		dir_eng = DMA_DEV_TO_MEM;
+		desc->dir = DMA_FROM_DEVICE;
+	} else {
+		dir_eng = DMA_MEM_TO_DEV;
+		desc->dir = DMA_TO_DEVICE;
+	}
+
+	ret = dma_map_sg(nandc->dev, sgl, 1, desc->dir);
+	if (ret == 0) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	memset(&slave_conf, 0x00, sizeof(slave_conf));
+
+	slave_conf.device_fc = flow_control;
+	if (read) {
+		slave_conf.src_maxburst = 16;
+		slave_conf.src_addr = nandc->base_dma + reg_off;
+		slave_conf.slave_id = nandc->data_crci;
+	} else {
+		slave_conf.dst_maxburst = 16;
+		slave_conf.dst_addr = nandc->base_dma + reg_off;
+		slave_conf.slave_id = nandc->cmd_crci;
+	}
+
+	ret = dmaengine_slave_config(nandc->chan, &slave_conf);
+	if (ret) {
+		dev_err(nandc->dev, "failed to configure dma channel\n");
+		goto err;
+	}
+
+	dma_desc = dmaengine_prep_slave_sg(nandc->chan, sgl, 1, dir_eng, 0);
+	if (!dma_desc) {
+		dev_err(nandc->dev, "failed to prepare desc\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	desc->dma_desc = dma_desc;
+
+	list_add_tail(&desc->node, &nandc->desc_list);
+
+	return 0;
+err:
+	kfree(desc);
+
+	return ret;
+}
+
+/*
+ * read_reg_dma:	prepares a descriptor to read a given number of
+ *			contiguous registers to the reg_read_buf pointer
+ *
+ * @first:		offset of the first register in the contiguous block
+ * @num_regs:		number of registers to read
+ */
+static int read_reg_dma(struct qcom_nand_controller *nandc, int first,
+			int num_regs)
+{
+	bool flow_control = false;
+	void *vaddr;
+	int size;
+
+	if (first == NAND_READ_ID || first == NAND_FLASH_STATUS)
+		flow_control = true;
+
+	size = num_regs * sizeof(u32);
+	vaddr = nandc->reg_read_buf + nandc->reg_read_pos;
+	nandc->reg_read_pos += num_regs;
+
+	return prep_dma_desc(nandc, true, first, vaddr, size, flow_control);
+}
+
+/*
+ * write_reg_dma:	prepares a descriptor to write a given number of
+ *			contiguous registers
+ *
+ * @first:		offset of the first register in the contiguous block
+ * @num_regs:		number of registers to write
+ */
+static int write_reg_dma(struct qcom_nand_controller *nandc, int first,
+			 int num_regs)
+{
+	bool flow_control = false;
+	struct nandc_regs *regs = nandc->regs;
+	void *vaddr;
+	int size;
+
+	vaddr = offset_to_nandc_reg(regs, first);
+
+	if (first == NAND_FLASH_CMD)
+		flow_control = true;
+
+	if (first == NAND_DEV_CMD1_RESTORE)
+		first = NAND_DEV_CMD1;
+
+	if (first == NAND_DEV_CMD_VLD_RESTORE)
+		first = NAND_DEV_CMD_VLD;
+
+	size = num_regs * sizeof(u32);
+
+	return prep_dma_desc(nandc, false, first, vaddr, size, flow_control);
+}
+
+/*
+ * read_data_dma:	prepares a DMA descriptor to transfer data from the
+ *			controller's internal buffer to the buffer 'vaddr'
+ *
+ * @reg_off:		offset within the controller's data buffer
+ * @vaddr:		virtual address of the buffer we want to write to
+ * @size:		DMA transaction size in bytes
+ */
+static int read_data_dma(struct qcom_nand_controller *nandc, int reg_off,
+			 const u8 *vaddr, int size)
+{
+	return prep_dma_desc(nandc, true, reg_off, vaddr, size, false);
+}
+
+/*
+ * write_data_dma:	prepares a DMA descriptor to transfer data from
+ *			'vaddr' to the controller's internal buffer
+ *
+ * @reg_off:		offset within the controller's data buffer
+ * @vaddr:		virtual address of the buffer we want to read from
+ * @size:		DMA transaction size in bytes
+ */
+static int write_data_dma(struct qcom_nand_controller *nandc, int reg_off,
+			  const u8 *vaddr, int size)
+{
+	return prep_dma_desc(nandc, false, reg_off, vaddr, size, false);
+}
+
+/*
+ * helper to prepare dma descriptors to configure registers needed for reading a
+ * codeword/step in a page
+ */
+static void config_cw_read(struct qcom_nand_controller *nandc)
+{
+	write_reg_dma(nandc, NAND_FLASH_CMD, 3);
+	write_reg_dma(nandc, NAND_DEV0_CFG0, 3);
+	write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1);
+
+	write_reg_dma(nandc, NAND_EXEC_CMD, 1);
+
+	read_reg_dma(nandc, NAND_FLASH_STATUS, 2);
+	read_reg_dma(nandc, NAND_ERASED_CW_DETECT_STATUS, 1);
+}
+
+/*
+ * helpers to prepare dma descriptors used to configure registers needed for
+ * writing a codeword/step in a page
+ */
+static void config_cw_write_pre(struct qcom_nand_controller *nandc)
+{
+	write_reg_dma(nandc, NAND_FLASH_CMD, 3);
+	write_reg_dma(nandc, NAND_DEV0_CFG0, 3);
+	write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1);
+}
+
+static void config_cw_write_post(struct qcom_nand_controller *nandc)
+{
+	write_reg_dma(nandc, NAND_EXEC_CMD, 1);
+
+	read_reg_dma(nandc, NAND_FLASH_STATUS, 1);
+
+	write_reg_dma(nandc, NAND_FLASH_STATUS, 1);
+	write_reg_dma(nandc, NAND_READ_STATUS, 1);
+}
+
+/*
+ * the following functions are used within chip->cmdfunc() to perform different
+ * NAND_CMD_* commands
+ */
+
+/* sets up descriptors for NAND_CMD_PARAM */
+static int nandc_param(struct qcom_nand_host *host)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+
+	/*
+	 * NAND_CMD_PARAM is called before we know much about the FLASH chip
+	 * in use. we configure the controller to perform a raw read of 512
+	 * bytes to read onfi params
+	 */
+	nandc_set_reg(nandc, NAND_FLASH_CMD, PAGE_READ | PAGE_ACC | LAST_PAGE);
+	nandc_set_reg(nandc, NAND_ADDR0, 0);
+	nandc_set_reg(nandc, NAND_ADDR1, 0);
+	nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE
+					| 512 << UD_SIZE_BYTES
+					| 5 << NUM_ADDR_CYCLES
+					| 0 << SPARE_SIZE_BYTES);
+	nandc_set_reg(nandc, NAND_DEV0_CFG1, 7 << NAND_RECOVERY_CYCLES
+					| 0 << CS_ACTIVE_BSY
+					| 17 << BAD_BLOCK_BYTE_NUM
+					| 1 << BAD_BLOCK_IN_SPARE_AREA
+					| 2 << WR_RD_BSY_GAP
+					| 0 << WIDE_FLASH
+					| 1 << DEV0_CFG1_ECC_DISABLE);
+	nandc_set_reg(nandc, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE);
+
+	/* configure CMD1 and VLD for ONFI param probing */
+	nandc_set_reg(nandc, NAND_DEV_CMD_VLD,
+		      (nandc->vld & ~(1 << READ_START_VLD))
+		      | 0 << READ_START_VLD);
+	nandc_set_reg(nandc, NAND_DEV_CMD1,
+		      (nandc->cmd1 & ~(0xFF << READ_ADDR))
+		      | NAND_CMD_PARAM << READ_ADDR);
+
+	nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
+
+	nandc_set_reg(nandc, NAND_DEV_CMD1_RESTORE, nandc->cmd1);
+	nandc_set_reg(nandc, NAND_DEV_CMD_VLD_RESTORE, nandc->vld);
+
+	write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1);
+	write_reg_dma(nandc, NAND_DEV_CMD1, 1);
+
+	nandc->buf_count = 512;
+	memset(nandc->data_buffer, 0xff, nandc->buf_count);
+
+	config_cw_read(nandc);
+
+	read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer,
+		      nandc->buf_count);
+
+	/* restore CMD1 and VLD regs */
+	write_reg_dma(nandc, NAND_DEV_CMD1_RESTORE, 1);
+	write_reg_dma(nandc, NAND_DEV_CMD_VLD_RESTORE, 1);
+
+	return 0;
+}
+
+/* sets up descriptors for NAND_CMD_ERASE1 */
+static int erase_block(struct qcom_nand_host *host, int page_addr)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+
+	nandc_set_reg(nandc, NAND_FLASH_CMD,
+		      BLOCK_ERASE | PAGE_ACC | LAST_PAGE);
+	nandc_set_reg(nandc, NAND_ADDR0, page_addr);
+	nandc_set_reg(nandc, NAND_ADDR1, 0);
+	nandc_set_reg(nandc, NAND_DEV0_CFG0,
+		      host->cfg0_raw & ~(7 << CW_PER_PAGE));
+	nandc_set_reg(nandc, NAND_DEV0_CFG1, host->cfg1_raw);
+	nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
+	nandc_set_reg(nandc, NAND_FLASH_STATUS, host->clrflashstatus);
+	nandc_set_reg(nandc, NAND_READ_STATUS, host->clrreadstatus);
+
+	write_reg_dma(nandc, NAND_FLASH_CMD, 3);
+	write_reg_dma(nandc, NAND_DEV0_CFG0, 2);
+	write_reg_dma(nandc, NAND_EXEC_CMD, 1);
+
+	read_reg_dma(nandc, NAND_FLASH_STATUS, 1);
+
+	write_reg_dma(nandc, NAND_FLASH_STATUS, 1);
+	write_reg_dma(nandc, NAND_READ_STATUS, 1);
+
+	return 0;
+}
+
+/* sets up descriptors for NAND_CMD_READID */
+static int read_id(struct qcom_nand_host *host, int column)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+
+	if (column == -1)
+		return 0;
+
+	nandc_set_reg(nandc, NAND_FLASH_CMD, FETCH_ID);
+	nandc_set_reg(nandc, NAND_ADDR0, column);
+	nandc_set_reg(nandc, NAND_ADDR1, 0);
+	nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT, DM_EN);
+	nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
+
+	write_reg_dma(nandc, NAND_FLASH_CMD, 4);
+	write_reg_dma(nandc, NAND_EXEC_CMD, 1);
+
+	read_reg_dma(nandc, NAND_READ_ID, 1);
+
+	return 0;
+}
+
+/* sets up descriptors for NAND_CMD_RESET */
+static int reset(struct qcom_nand_host *host)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+
+	nandc_set_reg(nandc, NAND_FLASH_CMD, RESET_DEVICE);
+	nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
+
+	write_reg_dma(nandc, NAND_FLASH_CMD, 1);
+	write_reg_dma(nandc, NAND_EXEC_CMD, 1);
+
+	read_reg_dma(nandc, NAND_FLASH_STATUS, 1);
+
+	return 0;
+}
+
+/* helpers to submit/free our list of dma descriptors */
+static int submit_descs(struct qcom_nand_controller *nandc)
+{
+	struct desc_info *desc;
+	dma_cookie_t cookie = 0;
+
+	list_for_each_entry(desc, &nandc->desc_list, node)
+		cookie = dmaengine_submit(desc->dma_desc);
+
+	if (dma_sync_wait(nandc->chan, cookie) != DMA_COMPLETE)
+		return -ETIMEDOUT;
+
+	return 0;
+}
+
+static void free_descs(struct qcom_nand_controller *nandc)
+{
+	struct desc_info *desc, *n;
+
+	list_for_each_entry_safe(desc, n, &nandc->desc_list, node) {
+		list_del(&desc->node);
+		dma_unmap_sg(nandc->dev, &desc->sgl, 1, desc->dir);
+		kfree(desc);
+	}
+}
+
+/* reset the register read buffer for next NAND operation */
+static void clear_read_regs(struct qcom_nand_controller *nandc)
+{
+	nandc->reg_read_pos = 0;
+	memset(nandc->reg_read_buf, 0,
+	       MAX_REG_RD * sizeof(*nandc->reg_read_buf));
+}
+
+static void pre_command(struct qcom_nand_host *host, int command)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+
+	nandc->buf_count = 0;
+	nandc->buf_start = 0;
+	host->use_ecc = false;
+	host->last_command = command;
+
+	clear_read_regs(nandc);
+}
+
+/*
+ * this is called after NAND_CMD_PAGEPROG and NAND_CMD_ERASE1 to set our
+ * privately maintained status byte, this status byte can be read after
+ * NAND_CMD_STATUS is called
+ */
+static void parse_erase_write_errors(struct qcom_nand_host *host, int command)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+	struct nand_ecc_ctrl *ecc = &chip->ecc;
+	int num_cw;
+	int i;
+
+	num_cw = command == NAND_CMD_PAGEPROG ? ecc->steps : 1;
+
+	for (i = 0; i < num_cw; i++) {
+		u32 flash_status = le32_to_cpu(nandc->reg_read_buf[i]);
+
+		if (flash_status & FS_MPU_ERR)
+			host->status &= ~NAND_STATUS_WP;
+
+		if (flash_status & FS_OP_ERR ||
+		    (i == (num_cw - 1) && (flash_status & FS_DEVICE_STS_ERR)))
+			host->status |= NAND_STATUS_FAIL;
+	}
+}
+
+static void post_command(struct qcom_nand_host *host, int command)
+{
+	struct nand_chip *chip = &host->chip;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+
+	switch (command) {
+	case NAND_CMD_READID:
+		memcpy(nandc->data_buffer, nandc->reg_read_buf,
+		       nandc->buf_count);
+		break;
+	case NAND_CMD_PAGEPROG:
+	case NAND_CMD_ERASE1:
+		parse_erase_write_errors(host, command);
+		break;
+	default:
+		break;
+	}
+}
+
+/*
+ * Implements chip->cmdfunc. It's  only used for a limited set of commands.
+ * The rest of the commands wouldn't be called by upper layers. For example,
+ * NAND_CMD_READOOB would never be called because we have our own versions
+ * of read_oob ops for nand_ecc_ctrl.
+ */
+static void qcom_nandc_command(struct mtd_info *mtd, unsigned int command,
+			       int column, int page_addr)
+{
+	struct nand_chip *chip = mtd_to_nand(mtd);
+	struct qcom_nand_host *host = to_qcom_nand_host(chip);
+	struct nand_ecc_ctrl *ecc = &chip->ecc;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+	bool wait = false;
+	int ret = 0;
+
+	pre_command(host, command);
+
+	switch (command) {
+	case NAND_CMD_RESET:
+		ret = reset(host);
+		wait = true;
+		break;
+
+	case NAND_CMD_READID:
+		nandc->buf_count = 4;
+		ret = read_id(host, column);
+		wait = true;
+		break;
+
+	case NAND_CMD_PARAM:
+		ret = nandc_param(host);
+		wait = true;
+		break;
+
+	case NAND_CMD_ERASE1:
+		ret = erase_block(host, page_addr);
+		wait = true;
+		break;
+
+	case NAND_CMD_READ0:
+		/* we read the entire page for now */
+		WARN_ON(column != 0);
+
+		host->use_ecc = true;
+		set_address(host, 0, page_addr);
+		update_rw_regs(host, ecc->steps, true);
+		break;
+
+	case NAND_CMD_SEQIN:
+		WARN_ON(column != 0);
+		set_address(host, 0, page_addr);
+		break;
+
+	case NAND_CMD_PAGEPROG:
+	case NAND_CMD_STATUS:
+	case NAND_CMD_NONE:
+	default:
+		break;
+	}
+
+	if (ret) {
+		dev_err(nandc->dev, "failure executing command %d\n",
+			command);
+		free_descs(nandc);
+		return;
+	}
+
+	if (wait) {
+		ret = submit_descs(nandc);
+		if (ret)
+			dev_err(nandc->dev,
+				"failure submitting descs for command %d\n",
+				command);
+	}
+
+	free_descs(nandc);
+
+	post_command(host, command);
+}
+
+/*
+ * when using BCH ECC, the HW flags an error in NAND_FLASH_STATUS if it read
+ * an erased CW, and reports an erased CW in NAND_ERASED_CW_DETECT_STATUS.
952 + * 953 + * when using RS ECC, the HW reports the same errors when reading an erased CW, 954 + * but it notifies that it is an erased CW by placing special characters at 955 + * certain offsets in the buffer. 956 + * 957 + * verify if the page is erased or not, and fix up the page for RS ECC by 958 + * replacing the special characters with 0xff. 959 + */ 960 + static bool erased_chunk_check_and_fixup(u8 *data_buf, int data_len) 961 + { 962 + u8 empty1, empty2; 963 + 964 + /* 965 + * an erased page flags an error in NAND_FLASH_STATUS, check if the page 966 + * is erased by looking for 0x54s at offsets 3 and 175 from the 967 + * beginning of each codeword 968 + */ 969 + 970 + empty1 = data_buf[3]; 971 + empty2 = data_buf[175]; 972 + 973 + /* 974 + * if the erased codeword markers exist, override them with 975 + * 0xffs 976 + */ 977 + if ((empty1 == 0x54 && empty2 == 0xff) || 978 + (empty1 == 0xff && empty2 == 0x54)) { 979 + data_buf[3] = 0xff; 980 + data_buf[175] = 0xff; 981 + } 982 + 983 + /* 984 + * check if the entire chunk contains 0xffs or not. if it doesn't, then 985 + * restore the original values at the special offsets 986 + */ 987 + if (memchr_inv(data_buf, 0xff, data_len)) { 988 + data_buf[3] = empty1; 989 + data_buf[175] = empty2; 990 + 991 + return false; 992 + } 993 + 994 + return true; 995 + } 996 + 997 + struct read_stats { 998 + __le32 flash; 999 + __le32 buffer; 1000 + __le32 erased_cw; 1001 + }; 1002 + 1003 + /* 1004 + * reads back status registers set by the controller to notify page read 1005 + * errors. this is equivalent to what 'ecc->correct()' would do.
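The RS erased-codeword heuristic above can be exercised outside the kernel. The sketch below is a re-implementation for illustration, not the driver code: `is_erased_rs_chunk` is a hypothetical name, and the open-coded byte scan stands in for the kernel's memchr_inv().

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Standalone sketch (not the driver function): an erased RS codeword
 * carries 0x54 markers at offsets 3 and 175. Overwrite them with 0xff;
 * if the whole chunk then reads as 0xff, the codeword is erased,
 * otherwise restore the saved marker bytes and report real data.
 */
static bool is_erased_rs_chunk(unsigned char *data_buf, size_t data_len)
{
	unsigned char empty1 = data_buf[3];
	unsigned char empty2 = data_buf[175];
	size_t i;

	if ((empty1 == 0x54 && empty2 == 0xff) ||
	    (empty1 == 0xff && empty2 == 0x54)) {
		data_buf[3] = 0xff;
		data_buf[175] = 0xff;
	}

	/* open-coded stand-in for the kernel's memchr_inv() */
	for (i = 0; i < data_len; i++) {
		if (data_buf[i] != 0xff) {
			/* real data present: undo the marker fixup */
			data_buf[3] = empty1;
			data_buf[175] = empty2;
			return false;
		}
	}

	return true;
}
```

An erased chunk with a marker byte is reported erased and fixed up to all-0xff; any genuine data byte causes the markers to be restored in place.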
1006 + */ 1007 + static int parse_read_errors(struct qcom_nand_host *host, u8 *data_buf, 1008 + u8 *oob_buf) 1009 + { 1010 + struct nand_chip *chip = &host->chip; 1011 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1012 + struct mtd_info *mtd = nand_to_mtd(chip); 1013 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1014 + unsigned int max_bitflips = 0; 1015 + struct read_stats *buf; 1016 + int i; 1017 + 1018 + buf = (struct read_stats *)nandc->reg_read_buf; 1019 + 1020 + for (i = 0; i < ecc->steps; i++, buf++) { 1021 + u32 flash, buffer, erased_cw; 1022 + int data_len, oob_len; 1023 + 1024 + if (i == (ecc->steps - 1)) { 1025 + data_len = ecc->size - ((ecc->steps - 1) << 2); 1026 + oob_len = ecc->steps << 2; 1027 + } else { 1028 + data_len = host->cw_data; 1029 + oob_len = 0; 1030 + } 1031 + 1032 + flash = le32_to_cpu(buf->flash); 1033 + buffer = le32_to_cpu(buf->buffer); 1034 + erased_cw = le32_to_cpu(buf->erased_cw); 1035 + 1036 + if (flash & (FS_OP_ERR | FS_MPU_ERR)) { 1037 + bool erased; 1038 + 1039 + /* ignore erased codeword errors */ 1040 + if (host->bch_enabled) { 1041 + erased = (erased_cw & ERASED_CW) == ERASED_CW ? 1042 + true : false; 1043 + } else { 1044 + erased = erased_chunk_check_and_fixup(data_buf, 1045 + data_len); 1046 + } 1047 + 1048 + if (erased) { 1049 + data_buf += data_len; 1050 + if (oob_buf) 1051 + oob_buf += oob_len + ecc->bytes; 1052 + continue; 1053 + } 1054 + 1055 + if (buffer & BS_UNCORRECTABLE_BIT) { 1056 + int ret, ecclen, extraooblen; 1057 + void *eccbuf; 1058 + 1059 + eccbuf = oob_buf ? oob_buf + oob_len : NULL; 1060 + ecclen = oob_buf ? host->ecc_bytes_hw : 0; 1061 + extraooblen = oob_buf ? 
oob_len : 0; 1062 + 1063 + /* 1064 + * make sure it isn't an erased page reported 1065 + * as not-erased by HW because of a few bitflips 1066 + */ 1067 + ret = nand_check_erased_ecc_chunk(data_buf, 1068 + data_len, eccbuf, ecclen, oob_buf, 1069 + extraooblen, ecc->strength); 1070 + if (ret < 0) { 1071 + mtd->ecc_stats.failed++; 1072 + } else { 1073 + mtd->ecc_stats.corrected += ret; 1074 + max_bitflips = 1075 + max_t(unsigned int, max_bitflips, ret); 1076 + } 1077 + } 1078 + } else { 1079 + unsigned int stat; 1080 + 1081 + stat = buffer & BS_CORRECTABLE_ERR_MSK; 1082 + mtd->ecc_stats.corrected += stat; 1083 + max_bitflips = max(max_bitflips, stat); 1084 + } 1085 + 1086 + data_buf += data_len; 1087 + if (oob_buf) 1088 + oob_buf += oob_len + ecc->bytes; 1089 + } 1090 + 1091 + return max_bitflips; 1092 + } 1093 + 1094 + /* 1095 + * helper to perform the actual page read operation, used by ecc->read_page(), 1096 + * ecc->read_oob() 1097 + */ 1098 + static int read_page_ecc(struct qcom_nand_host *host, u8 *data_buf, 1099 + u8 *oob_buf) 1100 + { 1101 + struct nand_chip *chip = &host->chip; 1102 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1103 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1104 + int i, ret; 1105 + 1106 + /* queue cmd descs for each codeword */ 1107 + for (i = 0; i < ecc->steps; i++) { 1108 + int data_size, oob_size; 1109 + 1110 + if (i == (ecc->steps - 1)) { 1111 + data_size = ecc->size - ((ecc->steps - 1) << 2); 1112 + oob_size = (ecc->steps << 2) + host->ecc_bytes_hw + 1113 + host->spare_bytes; 1114 + } else { 1115 + data_size = host->cw_data; 1116 + oob_size = host->ecc_bytes_hw + host->spare_bytes; 1117 + } 1118 + 1119 + config_cw_read(nandc); 1120 + 1121 + if (data_buf) 1122 + read_data_dma(nandc, FLASH_BUF_ACC, data_buf, 1123 + data_size); 1124 + 1125 + /* 1126 + * when ecc is enabled, the controller doesn't read the real 1127 + * or dummy bad block markers in each chunk. 
To maintain a 1128 + * consistent layout across RAW and ECC reads, we just 1129 + * leave the real/dummy BBM offsets empty (i.e, filled with 1130 + * 0xffs) 1131 + */ 1132 + if (oob_buf) { 1133 + int j; 1134 + 1135 + for (j = 0; j < host->bbm_size; j++) 1136 + *oob_buf++ = 0xff; 1137 + 1138 + read_data_dma(nandc, FLASH_BUF_ACC + data_size, 1139 + oob_buf, oob_size); 1140 + } 1141 + 1142 + if (data_buf) 1143 + data_buf += data_size; 1144 + if (oob_buf) 1145 + oob_buf += oob_size; 1146 + } 1147 + 1148 + ret = submit_descs(nandc); 1149 + if (ret) 1150 + dev_err(nandc->dev, "failure to read page/oob\n"); 1151 + 1152 + free_descs(nandc); 1153 + 1154 + return ret; 1155 + } 1156 + 1157 + /* 1158 + * a helper that copies the last step/codeword of a page (containing free oob) 1159 + * into our local buffer 1160 + */ 1161 + static int copy_last_cw(struct qcom_nand_host *host, int page) 1162 + { 1163 + struct nand_chip *chip = &host->chip; 1164 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1165 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1166 + int size; 1167 + int ret; 1168 + 1169 + clear_read_regs(nandc); 1170 + 1171 + size = host->use_ecc ? 
host->cw_data : host->cw_size; 1172 + 1173 + /* prepare a clean read buffer */ 1174 + memset(nandc->data_buffer, 0xff, size); 1175 + 1176 + set_address(host, host->cw_size * (ecc->steps - 1), page); 1177 + update_rw_regs(host, 1, true); 1178 + 1179 + config_cw_read(nandc); 1180 + 1181 + read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, size); 1182 + 1183 + ret = submit_descs(nandc); 1184 + if (ret) 1185 + dev_err(nandc->dev, "failed to copy last codeword\n"); 1186 + 1187 + free_descs(nandc); 1188 + 1189 + return ret; 1190 + } 1191 + 1192 + /* implements ecc->read_page() */ 1193 + static int qcom_nandc_read_page(struct mtd_info *mtd, struct nand_chip *chip, 1194 + uint8_t *buf, int oob_required, int page) 1195 + { 1196 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1197 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1198 + u8 *data_buf, *oob_buf = NULL; 1199 + int ret; 1200 + 1201 + data_buf = buf; 1202 + oob_buf = oob_required ? chip->oob_poi : NULL; 1203 + 1204 + ret = read_page_ecc(host, data_buf, oob_buf); 1205 + if (ret) { 1206 + dev_err(nandc->dev, "failure to read page\n"); 1207 + return ret; 1208 + } 1209 + 1210 + return parse_read_errors(host, data_buf, oob_buf); 1211 + } 1212 + 1213 + /* implements ecc->read_page_raw() */ 1214 + static int qcom_nandc_read_page_raw(struct mtd_info *mtd, 1215 + struct nand_chip *chip, uint8_t *buf, 1216 + int oob_required, int page) 1217 + { 1218 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1219 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1220 + u8 *data_buf, *oob_buf; 1221 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1222 + int i, ret; 1223 + 1224 + data_buf = buf; 1225 + oob_buf = chip->oob_poi; 1226 + 1227 + host->use_ecc = false; 1228 + update_rw_regs(host, ecc->steps, true); 1229 + 1230 + for (i = 0; i < ecc->steps; i++) { 1231 + int data_size1, data_size2, oob_size1, oob_size2; 1232 + int reg_off = FLASH_BUF_ACC; 1233 + 1234 + data_size1 = 
mtd->writesize - host->cw_size * (ecc->steps - 1); 1235 + oob_size1 = host->bbm_size; 1236 + 1237 + if (i == (ecc->steps - 1)) { 1238 + data_size2 = ecc->size - data_size1 - 1239 + ((ecc->steps - 1) << 2); 1240 + oob_size2 = (ecc->steps << 2) + host->ecc_bytes_hw + 1241 + host->spare_bytes; 1242 + } else { 1243 + data_size2 = host->cw_data - data_size1; 1244 + oob_size2 = host->ecc_bytes_hw + host->spare_bytes; 1245 + } 1246 + 1247 + config_cw_read(nandc); 1248 + 1249 + read_data_dma(nandc, reg_off, data_buf, data_size1); 1250 + reg_off += data_size1; 1251 + data_buf += data_size1; 1252 + 1253 + read_data_dma(nandc, reg_off, oob_buf, oob_size1); 1254 + reg_off += oob_size1; 1255 + oob_buf += oob_size1; 1256 + 1257 + read_data_dma(nandc, reg_off, data_buf, data_size2); 1258 + reg_off += data_size2; 1259 + data_buf += data_size2; 1260 + 1261 + read_data_dma(nandc, reg_off, oob_buf, oob_size2); 1262 + oob_buf += oob_size2; 1263 + } 1264 + 1265 + ret = submit_descs(nandc); 1266 + if (ret) 1267 + dev_err(nandc->dev, "failure to read raw page\n"); 1268 + 1269 + free_descs(nandc); 1270 + 1271 + return 0; 1272 + } 1273 + 1274 + /* implements ecc->read_oob() */ 1275 + static int qcom_nandc_read_oob(struct mtd_info *mtd, struct nand_chip *chip, 1276 + int page) 1277 + { 1278 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1279 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1280 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1281 + int ret; 1282 + 1283 + clear_read_regs(nandc); 1284 + 1285 + host->use_ecc = true; 1286 + set_address(host, 0, page); 1287 + update_rw_regs(host, ecc->steps, true); 1288 + 1289 + ret = read_page_ecc(host, NULL, chip->oob_poi); 1290 + if (ret) 1291 + dev_err(nandc->dev, "failure to read oob\n"); 1292 + 1293 + return ret; 1294 + } 1295 + 1296 + /* implements ecc->write_page() */ 1297 + static int qcom_nandc_write_page(struct mtd_info *mtd, struct nand_chip *chip, 1298 + const uint8_t *buf, int oob_required, int page) 1299 
+ { 1300 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1301 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1302 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1303 + u8 *data_buf, *oob_buf; 1304 + int i, ret; 1305 + 1306 + clear_read_regs(nandc); 1307 + 1308 + data_buf = (u8 *)buf; 1309 + oob_buf = chip->oob_poi; 1310 + 1311 + host->use_ecc = true; 1312 + update_rw_regs(host, ecc->steps, false); 1313 + 1314 + for (i = 0; i < ecc->steps; i++) { 1315 + int data_size, oob_size; 1316 + 1317 + if (i == (ecc->steps - 1)) { 1318 + data_size = ecc->size - ((ecc->steps - 1) << 2); 1319 + oob_size = (ecc->steps << 2) + host->ecc_bytes_hw + 1320 + host->spare_bytes; 1321 + } else { 1322 + data_size = host->cw_data; 1323 + oob_size = ecc->bytes; 1324 + } 1325 + 1326 + config_cw_write_pre(nandc); 1327 + 1328 + write_data_dma(nandc, FLASH_BUF_ACC, data_buf, data_size); 1329 + 1330 + /* 1331 + * when ECC is enabled, we don't really need to write anything 1332 + * to oob for the first n - 1 codewords since these oob regions 1333 + * just contain ECC bytes that's written by the controller 1334 + * itself. For the last codeword, we skip the bbm positions and 1335 + * write to the free oob area. 
1336 + */ 1337 + if (i == (ecc->steps - 1)) { 1338 + oob_buf += host->bbm_size; 1339 + 1340 + write_data_dma(nandc, FLASH_BUF_ACC + data_size, 1341 + oob_buf, oob_size); 1342 + } 1343 + 1344 + config_cw_write_post(nandc); 1345 + 1346 + data_buf += data_size; 1347 + oob_buf += oob_size; 1348 + } 1349 + 1350 + ret = submit_descs(nandc); 1351 + if (ret) 1352 + dev_err(nandc->dev, "failure to write page\n"); 1353 + 1354 + free_descs(nandc); 1355 + 1356 + return ret; 1357 + } 1358 + 1359 + /* implements ecc->write_page_raw() */ 1360 + static int qcom_nandc_write_page_raw(struct mtd_info *mtd, 1361 + struct nand_chip *chip, const uint8_t *buf, 1362 + int oob_required, int page) 1363 + { 1364 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1365 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1366 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1367 + u8 *data_buf, *oob_buf; 1368 + int i, ret; 1369 + 1370 + clear_read_regs(nandc); 1371 + 1372 + data_buf = (u8 *)buf; 1373 + oob_buf = chip->oob_poi; 1374 + 1375 + host->use_ecc = false; 1376 + update_rw_regs(host, ecc->steps, false); 1377 + 1378 + for (i = 0; i < ecc->steps; i++) { 1379 + int data_size1, data_size2, oob_size1, oob_size2; 1380 + int reg_off = FLASH_BUF_ACC; 1381 + 1382 + data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1); 1383 + oob_size1 = host->bbm_size; 1384 + 1385 + if (i == (ecc->steps - 1)) { 1386 + data_size2 = ecc->size - data_size1 - 1387 + ((ecc->steps - 1) << 2); 1388 + oob_size2 = (ecc->steps << 2) + host->ecc_bytes_hw + 1389 + host->spare_bytes; 1390 + } else { 1391 + data_size2 = host->cw_data - data_size1; 1392 + oob_size2 = host->ecc_bytes_hw + host->spare_bytes; 1393 + } 1394 + 1395 + config_cw_write_pre(nandc); 1396 + 1397 + write_data_dma(nandc, reg_off, data_buf, data_size1); 1398 + reg_off += data_size1; 1399 + data_buf += data_size1; 1400 + 1401 + write_data_dma(nandc, reg_off, oob_buf, oob_size1); 1402 + reg_off += oob_size1; 1403 + oob_buf += 
oob_size1; 1404 + 1405 + write_data_dma(nandc, reg_off, data_buf, data_size2); 1406 + reg_off += data_size2; 1407 + data_buf += data_size2; 1408 + 1409 + write_data_dma(nandc, reg_off, oob_buf, oob_size2); 1410 + oob_buf += oob_size2; 1411 + 1412 + config_cw_write_post(nandc); 1413 + } 1414 + 1415 + ret = submit_descs(nandc); 1416 + if (ret) 1417 + dev_err(nandc->dev, "failure to write raw page\n"); 1418 + 1419 + free_descs(nandc); 1420 + 1421 + return ret; 1422 + } 1423 + 1424 + /* 1425 + * implements ecc->write_oob() 1426 + * 1427 + * the NAND controller cannot write only data or only oob within a codeword, 1428 + * since ecc is calculated for the combined codeword. we first copy the 1429 + * entire contents for the last codeword(data + oob), replace the old oob 1430 + * with the new one in chip->oob_poi, and then write the entire codeword. 1431 + * this read-copy-write operation results in a slight performance loss. 1432 + */ 1433 + static int qcom_nandc_write_oob(struct mtd_info *mtd, struct nand_chip *chip, 1434 + int page) 1435 + { 1436 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1437 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1438 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1439 + u8 *oob = chip->oob_poi; 1440 + int free_boff; 1441 + int data_size, oob_size; 1442 + int ret, status = 0; 1443 + 1444 + host->use_ecc = true; 1445 + 1446 + ret = copy_last_cw(host, page); 1447 + if (ret) 1448 + return ret; 1449 + 1450 + clear_read_regs(nandc); 1451 + 1452 + /* calculate the data and oob size for the last codeword/step */ 1453 + data_size = ecc->size - ((ecc->steps - 1) << 2); 1454 + oob_size = ecc->steps << 2; 1455 + 1456 + free_boff = ecc->layout->oobfree[0].offset; 1457 + 1458 + /* override new oob content to last codeword */ 1459 + memcpy(nandc->data_buffer + data_size, oob + free_boff, oob_size); 1460 + 1461 + set_address(host, host->cw_size * (ecc->steps - 1), page); 1462 + update_rw_regs(host, 1, false); 1463 + 1464 + 
config_cw_write_pre(nandc); 1465 + write_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, 1466 + data_size + oob_size); 1467 + config_cw_write_post(nandc); 1468 + 1469 + ret = submit_descs(nandc); 1470 + 1471 + free_descs(nandc); 1472 + 1473 + if (ret) { 1474 + dev_err(nandc->dev, "failure to write oob\n"); 1475 + return -EIO; 1476 + } 1477 + 1478 + chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 1479 + 1480 + status = chip->waitfunc(mtd, chip); 1481 + 1482 + return status & NAND_STATUS_FAIL ? -EIO : 0; 1483 + } 1484 + 1485 + static int qcom_nandc_block_bad(struct mtd_info *mtd, loff_t ofs) 1486 + { 1487 + struct nand_chip *chip = mtd_to_nand(mtd); 1488 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1489 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1490 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1491 + int page, ret, bbpos, bad = 0; 1492 + u32 flash_status; 1493 + 1494 + page = (int)(ofs >> chip->page_shift) & chip->pagemask; 1495 + 1496 + /* 1497 + * configure registers for a raw sub page read, the address is set to 1498 + * the beginning of the last codeword, we don't care about reading ecc 1499 + * portion of oob. 
we just want the first few bytes from this codeword 1500 + * that contains the BBM 1501 + */ 1502 + host->use_ecc = false; 1503 + 1504 + ret = copy_last_cw(host, page); 1505 + if (ret) 1506 + goto err; 1507 + 1508 + flash_status = le32_to_cpu(nandc->reg_read_buf[0]); 1509 + 1510 + if (flash_status & (FS_OP_ERR | FS_MPU_ERR)) { 1511 + dev_warn(nandc->dev, "error when trying to read BBM\n"); 1512 + goto err; 1513 + } 1514 + 1515 + bbpos = mtd->writesize - host->cw_size * (ecc->steps - 1); 1516 + 1517 + bad = nandc->data_buffer[bbpos] != 0xff; 1518 + 1519 + if (chip->options & NAND_BUSWIDTH_16) 1520 + bad = bad || (nandc->data_buffer[bbpos + 1] != 0xff); 1521 + err: 1522 + return bad; 1523 + } 1524 + 1525 + static int qcom_nandc_block_markbad(struct mtd_info *mtd, loff_t ofs) 1526 + { 1527 + struct nand_chip *chip = mtd_to_nand(mtd); 1528 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1529 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1530 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1531 + int page, ret, status = 0; 1532 + 1533 + clear_read_regs(nandc); 1534 + 1535 + /* 1536 + * to mark the BBM as bad, we flash the entire last codeword with 0s. 
1537 + * we don't care about the rest of the content in the codeword since 1538 + * we aren't going to use this block again 1539 + */ 1540 + memset(nandc->data_buffer, 0x00, host->cw_size); 1541 + 1542 + page = (int)(ofs >> chip->page_shift) & chip->pagemask; 1543 + 1544 + /* prepare write */ 1545 + host->use_ecc = false; 1546 + set_address(host, host->cw_size * (ecc->steps - 1), page); 1547 + update_rw_regs(host, 1, false); 1548 + 1549 + config_cw_write_pre(nandc); 1550 + write_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, host->cw_size); 1551 + config_cw_write_post(nandc); 1552 + 1553 + ret = submit_descs(nandc); 1554 + 1555 + free_descs(nandc); 1556 + 1557 + if (ret) { 1558 + dev_err(nandc->dev, "failure to update BBM\n"); 1559 + return -EIO; 1560 + } 1561 + 1562 + chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 1563 + 1564 + status = chip->waitfunc(mtd, chip); 1565 + 1566 + return status & NAND_STATUS_FAIL ? -EIO : 0; 1567 + } 1568 + 1569 + /* 1570 + * the three functions below implement chip->read_byte(), chip->read_buf() 1571 + * and chip->write_buf() respectively. 
these aren't used for 1572 + * reading/writing page data, they are used for smaller data like reading 1573 + * id, status etc 1574 + */ 1575 + static uint8_t qcom_nandc_read_byte(struct mtd_info *mtd) 1576 + { 1577 + struct nand_chip *chip = mtd_to_nand(mtd); 1578 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 1579 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1580 + u8 *buf = nandc->data_buffer; 1581 + u8 ret = 0x0; 1582 + 1583 + if (host->last_command == NAND_CMD_STATUS) { 1584 + ret = host->status; 1585 + 1586 + host->status = NAND_STATUS_READY | NAND_STATUS_WP; 1587 + 1588 + return ret; 1589 + } 1590 + 1591 + if (nandc->buf_start < nandc->buf_count) 1592 + ret = buf[nandc->buf_start++]; 1593 + 1594 + return ret; 1595 + } 1596 + 1597 + static void qcom_nandc_read_buf(struct mtd_info *mtd, uint8_t *buf, int len) 1598 + { 1599 + struct nand_chip *chip = mtd_to_nand(mtd); 1600 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1601 + int real_len = min_t(size_t, len, nandc->buf_count - nandc->buf_start); 1602 + 1603 + memcpy(buf, nandc->data_buffer + nandc->buf_start, real_len); 1604 + nandc->buf_start += real_len; 1605 + } 1606 + 1607 + static void qcom_nandc_write_buf(struct mtd_info *mtd, const uint8_t *buf, 1608 + int len) 1609 + { 1610 + struct nand_chip *chip = mtd_to_nand(mtd); 1611 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1612 + int real_len = min_t(size_t, len, nandc->buf_count - nandc->buf_start); 1613 + 1614 + memcpy(nandc->data_buffer + nandc->buf_start, buf, real_len); 1615 + 1616 + nandc->buf_start += real_len; 1617 + } 1618 + 1619 + /* we support only one external chip for now */ 1620 + static void qcom_nandc_select_chip(struct mtd_info *mtd, int chipnr) 1621 + { 1622 + struct nand_chip *chip = mtd_to_nand(mtd); 1623 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1624 + 1625 + if (chipnr <= 0) 1626 + return; 1627 + 1628 + dev_warn(nandc->dev, 
"invalid chip select\n"); 1629 + } 1630 + 1631 + /* 1632 + * NAND controller page layout info 1633 + * 1634 + * Layout with ECC enabled: 1635 + * 1636 + * |----------------------| |---------------------------------| 1637 + * | xx.......yy| | *********xx.......yy| 1638 + * | DATA xx..ECC..yy| | DATA **SPARE**xx..ECC..yy| 1639 + * | (516) xx.......yy| | (516-n*4) **(n*4)**xx.......yy| 1640 + * | xx.......yy| | *********xx.......yy| 1641 + * |----------------------| |---------------------------------| 1642 + * codeword 1,2..n-1 codeword n 1643 + * <---(528/532 Bytes)--> <-------(528/532 Bytes)---------> 1644 + * 1645 + * n = Number of codewords in the page 1646 + * . = ECC bytes 1647 + * * = Spare/free bytes 1648 + * x = Unused byte(s) 1649 + * y = Reserved byte(s) 1650 + * 1651 + * 2K page: n = 4, spare = 16 bytes 1652 + * 4K page: n = 8, spare = 32 bytes 1653 + * 8K page: n = 16, spare = 64 bytes 1654 + * 1655 + * the qcom nand controller operates at a sub page/codeword level. each 1656 + * codeword is 528 and 532 bytes for 4 bit and 8 bit ECC modes respectively. 1657 + * the number of ECC bytes varies based on the ECC strength and the bus width. 1658 + * 1659 + * the first n - 1 codewords contain 516 bytes of user data, the remaining 1660 + * 12/16 bytes consist of ECC and reserved data. The nth codeword contains 1661 + * both user data and spare (oobavail) bytes that sum up to 516 bytes. 1662 + * 1663 + * When we access a page with ECC enabled, the reserved byte(s) are not 1664 + * accessible at all. When reading, we fill up these unreadable positions 1665 + * with 0xffs. When writing, the controller skips writing the inaccessible 1666 + * bytes.
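The geometry in the table above follows directly from the page size. This illustrative helper (hypothetical, not part of the driver; the 512-byte step size and the 516-byte protected codeword are taken from the comment) reproduces the n/spare figures listed for 2K, 4K and 8K pages:

```c
#include <assert.h>

/*
 * Illustrative helper (not in the driver): derive the codeword
 * geometry described above from the page size. Each ECC step covers
 * 512 data bytes; with spare protection each of the first n - 1
 * codewords carries 516 bytes of user data, and the last codeword
 * trades n * 4 data bytes for the free OOB (spare) area.
 */
struct cw_geometry {
	int steps;		/* n, codewords per page */
	int spare_bytes;	/* free OOB bytes, n * 4 */
	int last_cw_data;	/* user data bytes in codeword n */
};

static struct cw_geometry qcom_cw_geometry(int writesize)
{
	struct cw_geometry g;

	g.steps = writesize / 512;
	g.spare_bytes = g.steps * 4;
	g.last_cw_data = 516 - g.spare_bytes;
	return g;
}
```

Note that (n - 1) * 516 plus the last codeword's data always sums back to the page size, which is exactly why the driver can carve 516-byte chunks out of a 2048/4096/8192-byte page.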
1667 + * 1668 + * Layout with ECC disabled: 1669 + * 1670 + * |------------------------------| |---------------------------------------| 1671 + * | yy xx.......| | bb *********xx.......| 1672 + * | DATA1 yy DATA2 xx..ECC..| | DATA1 bb DATA2 **SPARE**xx..ECC..| 1673 + * | (size1) yy (size2) xx.......| | (size1) bb (size2) **(n*4)**xx.......| 1674 + * | yy xx.......| | bb *********xx.......| 1675 + * |------------------------------| |---------------------------------------| 1676 + * codeword 1,2..n-1 codeword n 1677 + * <-------(528/532 Bytes)------> <-----------(528/532 Bytes)-----------> 1678 + * 1679 + * n = Number of codewords in the page 1680 + * . = ECC bytes 1681 + * * = Spare/free bytes 1682 + * x = Unused byte(s) 1683 + * y = Dummy Bad Block byte(s) 1684 + * b = Real Bad Block byte(s) 1685 + * size1/size2 = function of codeword size and 'n' 1686 + * 1687 + * when the ECC block is disabled, one reserved byte (or two for 16 bit bus 1688 + * width) is now accessible. For the first n - 1 codewords, these are dummy Bad 1689 + * Block Markers. In the last codeword, this position contains the real BBM. 1690 + * 1691 + * In order to have a consistent layout between RAW and ECC modes, we assume 1692 + * the following OOB layout arrangement: 1693 + * 1694 + * |-----------| |--------------------| 1695 + * |yyxx.......| |bb*********xx.......| 1696 + * |yyxx..ECC..| |bb*FREEOOB*xx..ECC..| 1697 + * |yyxx.......| |bb*********xx.......| 1698 + * |yyxx.......| |bb*********xx.......| 1699 + * |-----------| |--------------------| 1700 + * first n - 1 nth OOB region 1701 + * OOB regions 1702 + * 1703 + * n = Number of codewords in the page 1704 + * . = ECC bytes 1705 + * * = FREE OOB bytes 1706 + * y = Dummy bad block byte(s) (inaccessible when ECC enabled) 1707 + * x = Unused byte(s) 1708 + * b = Real bad block byte(s) (inaccessible when ECC enabled) 1709 + * 1710 + * This layout is read as is when ECC is disabled.
When ECC is enabled, the 1711 + * inaccessible Bad Block byte(s) are ignored when we write to a page/oob, 1712 + * and assumed as 0xffs when we read a page/oob. The ECC, unused and 1713 + * dummy/real bad block bytes are grouped as ecc bytes in nand_ecclayout (i.e., 1714 + * ecc->bytes is the sum of the three). 1715 + */ 1716 + 1717 + static struct nand_ecclayout * 1718 + qcom_nand_create_layout(struct qcom_nand_host *host) 1719 + { 1720 + struct nand_chip *chip = &host->chip; 1721 + struct mtd_info *mtd = nand_to_mtd(chip); 1722 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1723 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1724 + struct nand_ecclayout *layout; 1725 + int i, j, steps, pos = 0, shift = 0; 1726 + 1727 + layout = devm_kzalloc(nandc->dev, sizeof(*layout), GFP_KERNEL); 1728 + if (!layout) 1729 + return NULL; 1730 + 1731 + steps = mtd->writesize / ecc->size; 1732 + layout->eccbytes = steps * ecc->bytes; 1733 + 1734 + layout->oobfree[0].offset = (steps - 1) * ecc->bytes + host->bbm_size; 1735 + layout->oobfree[0].length = steps << 2; 1736 + 1737 + /* 1738 + * the oob bytes in the first n - 1 codewords are all grouped together 1739 + * in the format: 1740 + * DUMMY_BBM + UNUSED + ECC 1741 + */ 1742 + for (i = 0; i < steps - 1; i++) { 1743 + for (j = 0; j < ecc->bytes; j++) 1744 + layout->eccpos[pos++] = i * ecc->bytes + j; 1745 + } 1746 + 1747 + /* 1748 + * the oob bytes in the last codeword are grouped in the format: 1749 + * BBM + FREE OOB + UNUSED + ECC 1750 + */ 1751 + 1752 + /* fill up the bbm positions */ 1753 + for (j = 0; j < host->bbm_size; j++) 1754 + layout->eccpos[pos++] = i * ecc->bytes + j; 1755 + 1756 + /* 1757 + * fill up the ecc and reserved positions, their indices are offset 1758 + * by the free oob region 1759 + */ 1760 + shift = layout->oobfree[0].length + host->bbm_size; 1761 + 1762 + for (j = 0; j < (host->ecc_bytes_hw + host->spare_bytes); j++) 1763 + layout->eccpos[pos++] = i * ecc->bytes + shift + j; 1764 +
1765 + return layout; 1766 + } 1767 + 1768 + static int qcom_nand_host_setup(struct qcom_nand_host *host) 1769 + { 1770 + struct nand_chip *chip = &host->chip; 1771 + struct mtd_info *mtd = nand_to_mtd(chip); 1772 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1773 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 1774 + int cwperpage, bad_block_byte; 1775 + bool wide_bus; 1776 + int ecc_mode = 1; 1777 + 1778 + /* 1779 + * the controller requires each step consists of 512 bytes of data. 1780 + * bail out if DT has populated a wrong step size. 1781 + */ 1782 + if (ecc->size != NANDC_STEP_SIZE) { 1783 + dev_err(nandc->dev, "invalid ecc size\n"); 1784 + return -EINVAL; 1785 + } 1786 + 1787 + wide_bus = chip->options & NAND_BUSWIDTH_16 ? true : false; 1788 + 1789 + if (ecc->strength >= 8) { 1790 + /* 8 bit ECC defaults to BCH ECC on all platforms */ 1791 + host->bch_enabled = true; 1792 + ecc_mode = 1; 1793 + 1794 + if (wide_bus) { 1795 + host->ecc_bytes_hw = 14; 1796 + host->spare_bytes = 0; 1797 + host->bbm_size = 2; 1798 + } else { 1799 + host->ecc_bytes_hw = 13; 1800 + host->spare_bytes = 2; 1801 + host->bbm_size = 1; 1802 + } 1803 + } else { 1804 + /* 1805 + * if the controller supports BCH for 4 bit ECC, the controller 1806 + * uses lesser bytes for ECC. 
If RS is used, the ECC bytes are 1807 + * always 10 bytes 1808 + */ 1809 + if (nandc->ecc_modes & ECC_BCH_4BIT) { 1810 + /* BCH */ 1811 + host->bch_enabled = true; 1812 + ecc_mode = 0; 1813 + 1814 + if (wide_bus) { 1815 + host->ecc_bytes_hw = 8; 1816 + host->spare_bytes = 2; 1817 + host->bbm_size = 2; 1818 + } else { 1819 + host->ecc_bytes_hw = 7; 1820 + host->spare_bytes = 4; 1821 + host->bbm_size = 1; 1822 + } 1823 + } else { 1824 + /* RS */ 1825 + host->ecc_bytes_hw = 10; 1826 + 1827 + if (wide_bus) { 1828 + host->spare_bytes = 0; 1829 + host->bbm_size = 2; 1830 + } else { 1831 + host->spare_bytes = 1; 1832 + host->bbm_size = 1; 1833 + } 1834 + } 1835 + } 1836 + 1837 + /* 1838 + * we consider ecc->bytes as the sum of all the non-data content in a 1839 + * step. It gives us a clean representation of the oob area (even if 1840 + * all the bytes aren't used for ECC). It is always 16 bytes for 8 bit 1841 + * ECC and 12 bytes for 4 bit ECC 1842 + */ 1843 + ecc->bytes = host->ecc_bytes_hw + host->spare_bytes + host->bbm_size; 1844 + 1845 + ecc->read_page = qcom_nandc_read_page; 1846 + ecc->read_page_raw = qcom_nandc_read_page_raw; 1847 + ecc->read_oob = qcom_nandc_read_oob; 1848 + ecc->write_page = qcom_nandc_write_page; 1849 + ecc->write_page_raw = qcom_nandc_write_page_raw; 1850 + ecc->write_oob = qcom_nandc_write_oob; 1851 + 1852 + ecc->mode = NAND_ECC_HW; 1853 + 1854 + ecc->layout = qcom_nand_create_layout(host); 1855 + if (!ecc->layout) 1856 + return -ENOMEM; 1857 + 1858 + cwperpage = mtd->writesize / ecc->size; 1859 + 1860 + /* 1861 + * DATA_UD_BYTES varies based on whether the read/write command protects 1862 + * spare data with ECC too. We protect spare data by default, so we set 1863 + * it to main + spare data, which are 512 and 4 bytes respectively.
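As a numeric cross-check of the ecc->bytes composition described above, the per-mode splits assigned earlier in this function for an 8-bit bus do sum to the stated 16/12-byte totals. The enum below is a throwaway illustration with made-up constant names, not driver code:

```c
#include <assert.h>

/*
 * Throwaway sanity check (not driver code): ecc->bytes is
 * ecc_bytes_hw + spare_bytes + bbm_size per codeword. The addends
 * are the 8-bit-bus splits assigned above for each ECC mode.
 */
enum {
	ECC_BYTES_BCH8_X8 = 13 + 2 + 1,	/* 8 bit BCH ECC */
	ECC_BYTES_BCH4_X8 = 7 + 4 + 1,	/* 4 bit BCH ECC */
	ECC_BYTES_RS_X8   = 10 + 1 + 1,	/* 4 bit RS ECC */
};
```

The 16-bit-bus splits (14+0+2, 8+2+2, 10+0+2) land on the same totals, which is what lets the layout stay bus-width independent.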
1864 + */ 1865 + host->cw_data = 516; 1866 + 1867 + /* 1868 + * total bytes in a step, either 528 bytes for 4 bit ECC, or 532 bytes 1869 + * for 8 bit ECC 1870 + */ 1871 + host->cw_size = host->cw_data + ecc->bytes; 1872 + 1873 + if (ecc->bytes * (mtd->writesize / ecc->size) > mtd->oobsize) { 1874 + dev_err(nandc->dev, "ecc data doesn't fit in OOB area\n"); 1875 + return -EINVAL; 1876 + } 1877 + 1878 + bad_block_byte = mtd->writesize - host->cw_size * (cwperpage - 1) + 1; 1879 + 1880 + host->cfg0 = (cwperpage - 1) << CW_PER_PAGE 1881 + | host->cw_data << UD_SIZE_BYTES 1882 + | 0 << DISABLE_STATUS_AFTER_WRITE 1883 + | 5 << NUM_ADDR_CYCLES 1884 + | host->ecc_bytes_hw << ECC_PARITY_SIZE_BYTES_RS 1885 + | 0 << STATUS_BFR_READ 1886 + | 1 << SET_RD_MODE_AFTER_STATUS 1887 + | host->spare_bytes << SPARE_SIZE_BYTES; 1888 + 1889 + host->cfg1 = 7 << NAND_RECOVERY_CYCLES 1890 + | 0 << CS_ACTIVE_BSY 1891 + | bad_block_byte << BAD_BLOCK_BYTE_NUM 1892 + | 0 << BAD_BLOCK_IN_SPARE_AREA 1893 + | 2 << WR_RD_BSY_GAP 1894 + | wide_bus << WIDE_FLASH 1895 + | host->bch_enabled << ENABLE_BCH_ECC; 1896 + 1897 + host->cfg0_raw = (cwperpage - 1) << CW_PER_PAGE 1898 + | host->cw_size << UD_SIZE_BYTES 1899 + | 5 << NUM_ADDR_CYCLES 1900 + | 0 << SPARE_SIZE_BYTES; 1901 + 1902 + host->cfg1_raw = 7 << NAND_RECOVERY_CYCLES 1903 + | 0 << CS_ACTIVE_BSY 1904 + | 17 << BAD_BLOCK_BYTE_NUM 1905 + | 1 << BAD_BLOCK_IN_SPARE_AREA 1906 + | 2 << WR_RD_BSY_GAP 1907 + | wide_bus << WIDE_FLASH 1908 + | 1 << DEV0_CFG1_ECC_DISABLE; 1909 + 1910 + host->ecc_bch_cfg = host->bch_enabled << ECC_CFG_ECC_DISABLE 1911 + | 0 << ECC_SW_RESET 1912 + | host->cw_data << ECC_NUM_DATA_BYTES 1913 + | 1 << ECC_FORCE_CLK_OPEN 1914 + | ecc_mode << ECC_MODE 1915 + | host->ecc_bytes_hw << ECC_PARITY_SIZE_BYTES_BCH; 1916 + 1917 + host->ecc_buf_cfg = 0x203 << NUM_STEPS; 1918 + 1919 + host->clrflashstatus = FS_READY_BSY_N; 1920 + host->clrreadstatus = 0xc0; 1921 + 1922 + dev_dbg(nandc->dev, 1923 + "cfg0 %x cfg1 %x ecc_buf_cfg %x ecc_bch 
cfg %x cw_size %d cw_data %d strength %d parity_bytes %d steps %d\n", 1924 + host->cfg0, host->cfg1, host->ecc_buf_cfg, host->ecc_bch_cfg, 1925 + host->cw_size, host->cw_data, ecc->strength, ecc->bytes, 1926 + cwperpage); 1927 + 1928 + return 0; 1929 + } 1930 + 1931 + static int qcom_nandc_alloc(struct qcom_nand_controller *nandc) 1932 + { 1933 + int ret; 1934 + 1935 + ret = dma_set_coherent_mask(nandc->dev, DMA_BIT_MASK(32)); 1936 + if (ret) { 1937 + dev_err(nandc->dev, "failed to set DMA mask\n"); 1938 + return ret; 1939 + } 1940 + 1941 + /* 1942 + * we use the internal buffer for reading ONFI params, reading small 1943 + * data like ID and status, and performing read-copy-write operations 1944 + * when writing to a codeword partially. 532 is the maximum possible 1945 + * size of a codeword for our nand controller 1946 + */ 1947 + nandc->buf_size = 532; 1948 + 1949 + nandc->data_buffer = devm_kzalloc(nandc->dev, nandc->buf_size, 1950 + GFP_KERNEL); 1951 + if (!nandc->data_buffer) 1952 + return -ENOMEM; 1953 + 1954 + nandc->regs = devm_kzalloc(nandc->dev, sizeof(*nandc->regs), 1955 + GFP_KERNEL); 1956 + if (!nandc->regs) 1957 + return -ENOMEM; 1958 + 1959 + nandc->reg_read_buf = devm_kzalloc(nandc->dev, 1960 + MAX_REG_RD * sizeof(*nandc->reg_read_buf), 1961 + GFP_KERNEL); 1962 + if (!nandc->reg_read_buf) 1963 + return -ENOMEM; 1964 + 1965 + nandc->chan = dma_request_slave_channel(nandc->dev, "rxtx"); 1966 + if (!nandc->chan) { 1967 + dev_err(nandc->dev, "failed to request slave channel\n"); 1968 + return -ENODEV; 1969 + } 1970 + 1971 + INIT_LIST_HEAD(&nandc->desc_list); 1972 + INIT_LIST_HEAD(&nandc->host_list); 1973 + 1974 + spin_lock_init(&nandc->controller.lock); 1975 + init_waitqueue_head(&nandc->controller.wq); 1976 + 1977 + return 0; 1978 + } 1979 + 1980 + static void qcom_nandc_unalloc(struct qcom_nand_controller *nandc) 1981 + { 1982 + dma_release_channel(nandc->chan); 1983 + } 1984 + 1985 + /* one time setup of a few nand controller registers */ 1986 +
static int qcom_nandc_setup(struct qcom_nand_controller *nandc) 1987 + { 1988 + /* kill onenand */ 1989 + nandc_write(nandc, SFLASHC_BURST_CFG, 0); 1990 + 1991 + /* enable ADM DMA */ 1992 + nandc_write(nandc, NAND_FLASH_CHIP_SELECT, DM_EN); 1993 + 1994 + /* save the original values of these registers */ 1995 + nandc->cmd1 = nandc_read(nandc, NAND_DEV_CMD1); 1996 + nandc->vld = nandc_read(nandc, NAND_DEV_CMD_VLD); 1997 + 1998 + return 0; 1999 + } 2000 + 2001 + static int qcom_nand_host_init(struct qcom_nand_controller *nandc, 2002 + struct qcom_nand_host *host, 2003 + struct device_node *dn) 2004 + { 2005 + struct nand_chip *chip = &host->chip; 2006 + struct mtd_info *mtd = nand_to_mtd(chip); 2007 + struct device *dev = nandc->dev; 2008 + int ret; 2009 + 2010 + ret = of_property_read_u32(dn, "reg", &host->cs); 2011 + if (ret) { 2012 + dev_err(dev, "can't get chip-select\n"); 2013 + return -ENXIO; 2014 + } 2015 + 2016 + nand_set_flash_node(chip, dn); 2017 + mtd->name = devm_kasprintf(dev, GFP_KERNEL, "qcom_nand.%d", host->cs); 2018 + mtd->owner = THIS_MODULE; 2019 + mtd->dev.parent = dev; 2020 + 2021 + chip->cmdfunc = qcom_nandc_command; 2022 + chip->select_chip = qcom_nandc_select_chip; 2023 + chip->read_byte = qcom_nandc_read_byte; 2024 + chip->read_buf = qcom_nandc_read_buf; 2025 + chip->write_buf = qcom_nandc_write_buf; 2026 + 2027 + /* 2028 + * the bad block marker is readable only when we read the last codeword 2029 + * of a page with ECC disabled. currently, the nand_base and nand_bbt 2030 + * helpers don't allow us to read BB from a nand chip with ECC 2031 + * disabled (MTD_OPS_PLACE_OOB is set by default). 
use the block_bad 2032 + * and block_markbad helpers until we permanently switch to using 2033 + * MTD_OPS_RAW for all drivers (with the help of badblockbits) 2034 + */ 2035 + chip->block_bad = qcom_nandc_block_bad; 2036 + chip->block_markbad = qcom_nandc_block_markbad; 2037 + 2038 + chip->controller = &nandc->controller; 2039 + chip->options |= NAND_NO_SUBPAGE_WRITE | NAND_USE_BOUNCE_BUFFER | 2040 + NAND_SKIP_BBTSCAN; 2041 + 2042 + /* set up initial status value */ 2043 + host->status = NAND_STATUS_READY | NAND_STATUS_WP; 2044 + 2045 + ret = nand_scan_ident(mtd, 1, NULL); 2046 + if (ret) 2047 + return ret; 2048 + 2049 + ret = qcom_nand_host_setup(host); 2050 + if (ret) 2051 + return ret; 2052 + 2053 + ret = nand_scan_tail(mtd); 2054 + if (ret) 2055 + return ret; 2056 + 2057 + return mtd_device_register(mtd, NULL, 0); 2058 + } 2059 + 2060 + /* parse custom DT properties here */ 2061 + static int qcom_nandc_parse_dt(struct platform_device *pdev) 2062 + { 2063 + struct qcom_nand_controller *nandc = platform_get_drvdata(pdev); 2064 + struct device_node *np = nandc->dev->of_node; 2065 + int ret; 2066 + 2067 + ret = of_property_read_u32(np, "qcom,cmd-crci", &nandc->cmd_crci); 2068 + if (ret) { 2069 + dev_err(nandc->dev, "command CRCI unspecified\n"); 2070 + return ret; 2071 + } 2072 + 2073 + ret = of_property_read_u32(np, "qcom,data-crci", &nandc->data_crci); 2074 + if (ret) { 2075 + dev_err(nandc->dev, "data CRCI unspecified\n"); 2076 + return ret; 2077 + } 2078 + 2079 + return 0; 2080 + } 2081 + 2082 + static int qcom_nandc_probe(struct platform_device *pdev) 2083 + { 2084 + struct qcom_nand_controller *nandc; 2085 + struct qcom_nand_host *host; 2086 + const void *dev_data; 2087 + struct device *dev = &pdev->dev; 2088 + struct device_node *dn = dev->of_node, *child; 2089 + struct resource *res; 2090 + int ret; 2091 + 2092 + nandc = devm_kzalloc(&pdev->dev, sizeof(*nandc), GFP_KERNEL); 2093 + if (!nandc) 2094 + return -ENOMEM; 2095 + 2096 + platform_set_drvdata(pdev, 
nandc); 2097 + nandc->dev = dev; 2098 + 2099 + dev_data = of_device_get_match_data(dev); 2100 + if (!dev_data) { 2101 + dev_err(&pdev->dev, "failed to get device data\n"); 2102 + return -ENODEV; 2103 + } 2104 + 2105 + nandc->ecc_modes = (unsigned long)dev_data; 2106 + 2107 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2108 + nandc->base = devm_ioremap_resource(dev, res); 2109 + if (IS_ERR(nandc->base)) 2110 + return PTR_ERR(nandc->base); 2111 + 2112 + nandc->base_dma = phys_to_dma(dev, (phys_addr_t)res->start); 2113 + 2114 + nandc->core_clk = devm_clk_get(dev, "core"); 2115 + if (IS_ERR(nandc->core_clk)) 2116 + return PTR_ERR(nandc->core_clk); 2117 + 2118 + nandc->aon_clk = devm_clk_get(dev, "aon"); 2119 + if (IS_ERR(nandc->aon_clk)) 2120 + return PTR_ERR(nandc->aon_clk); 2121 + 2122 + ret = qcom_nandc_parse_dt(pdev); 2123 + if (ret) 2124 + return ret; 2125 + 2126 + ret = qcom_nandc_alloc(nandc); 2127 + if (ret) 2128 + return ret; 2129 + 2130 + ret = clk_prepare_enable(nandc->core_clk); 2131 + if (ret) 2132 + goto err_core_clk; 2133 + 2134 + ret = clk_prepare_enable(nandc->aon_clk); 2135 + if (ret) 2136 + goto err_aon_clk; 2137 + 2138 + ret = qcom_nandc_setup(nandc); 2139 + if (ret) 2140 + goto err_setup; 2141 + 2142 + for_each_available_child_of_node(dn, child) { 2143 + if (of_device_is_compatible(child, "qcom,nandcs")) { 2144 + host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL); 2145 + if (!host) { 2146 + of_node_put(child); 2147 + ret = -ENOMEM; 2148 + goto err_cs_init; 2149 + } 2150 + 2151 + ret = qcom_nand_host_init(nandc, host, child); 2152 + if (ret) { 2153 + devm_kfree(dev, host); 2154 + continue; 2155 + } 2156 + 2157 + list_add_tail(&host->node, &nandc->host_list); 2158 + } 2159 + } 2160 + 2161 + if (list_empty(&nandc->host_list)) { 2162 + ret = -ENODEV; 2163 + goto err_cs_init; 2164 + } 2165 + 2166 + return 0; 2167 + 2168 + err_cs_init: 2169 + list_for_each_entry(host, &nandc->host_list, node) 2170 + nand_release(nand_to_mtd(&host->chip)); 2171 
+ err_setup: 2172 + clk_disable_unprepare(nandc->aon_clk); 2173 + err_aon_clk: 2174 + clk_disable_unprepare(nandc->core_clk); 2175 + err_core_clk: 2176 + qcom_nandc_unalloc(nandc); 2177 + 2178 + return ret; 2179 + } 2180 + 2181 + static int qcom_nandc_remove(struct platform_device *pdev) 2182 + { 2183 + struct qcom_nand_controller *nandc = platform_get_drvdata(pdev); 2184 + struct qcom_nand_host *host; 2185 + 2186 + list_for_each_entry(host, &nandc->host_list, node) 2187 + nand_release(nand_to_mtd(&host->chip)); 2188 + 2189 + qcom_nandc_unalloc(nandc); 2190 + 2191 + clk_disable_unprepare(nandc->aon_clk); 2192 + clk_disable_unprepare(nandc->core_clk); 2193 + 2194 + return 0; 2195 + } 2196 + 2197 + #define EBI2_NANDC_ECC_MODES (ECC_RS_4BIT | ECC_BCH_8BIT) 2198 + 2199 + /* 2200 + * data will hold a struct pointer containing more differences once we support 2201 + * more controller variants 2202 + */ 2203 + static const struct of_device_id qcom_nandc_of_match[] = { 2204 + { .compatible = "qcom,ipq806x-nand", 2205 + .data = (void *)EBI2_NANDC_ECC_MODES, 2206 + }, 2207 + {} 2208 + }; 2209 + MODULE_DEVICE_TABLE(of, qcom_nandc_of_match); 2210 + 2211 + static struct platform_driver qcom_nandc_driver = { 2212 + .driver = { 2213 + .name = "qcom-nandc", 2214 + .of_match_table = qcom_nandc_of_match, 2215 + }, 2216 + .probe = qcom_nandc_probe, 2217 + .remove = qcom_nandc_remove, 2218 + }; 2219 + module_platform_driver(qcom_nandc_driver); 2220 + 2221 + MODULE_AUTHOR("Archit Taneja <architt@codeaurora.org>"); 2222 + MODULE_DESCRIPTION("Qualcomm NAND Controller driver"); 2223 + MODULE_LICENSE("GPL v2");
-3
drivers/mtd/nand/s3c2410.c
··· 861 861 chip->ecc.mode = NAND_ECC_SOFT; 862 862 #endif 863 863 864 - if (set->ecc_layout != NULL) 865 - chip->ecc.layout = set->ecc_layout; 866 - 867 864 if (set->disable_ecc) 868 865 chip->ecc.mode = NAND_ECC_NONE; 869 866
+261 -26
drivers/mtd/nand/sunxi_nand.c
··· 60 60 #define NFC_REG_ECC_ERR_CNT(x) ((0x0040 + (x)) & ~0x3) 61 61 #define NFC_REG_USER_DATA(x) (0x0050 + ((x) * 4)) 62 62 #define NFC_REG_SPARE_AREA 0x00A0 63 + #define NFC_REG_PAT_ID 0x00A4 63 64 #define NFC_RAM0_BASE 0x0400 64 65 #define NFC_RAM1_BASE 0x0800 65 66 ··· 539 538 sunxi_nfc_wait_int(nfc, NFC_CMD_INT_FLAG, 0); 540 539 } 541 540 541 + /* These seed values have been extracted from Allwinner's BSP */ 542 + static const u16 sunxi_nfc_randomizer_page_seeds[] = { 543 + 0x2b75, 0x0bd0, 0x5ca3, 0x62d1, 0x1c93, 0x07e9, 0x2162, 0x3a72, 544 + 0x0d67, 0x67f9, 0x1be7, 0x077d, 0x032f, 0x0dac, 0x2716, 0x2436, 545 + 0x7922, 0x1510, 0x3860, 0x5287, 0x480f, 0x4252, 0x1789, 0x5a2d, 546 + 0x2a49, 0x5e10, 0x437f, 0x4b4e, 0x2f45, 0x216e, 0x5cb7, 0x7130, 547 + 0x2a3f, 0x60e4, 0x4dc9, 0x0ef0, 0x0f52, 0x1bb9, 0x6211, 0x7a56, 548 + 0x226d, 0x4ea7, 0x6f36, 0x3692, 0x38bf, 0x0c62, 0x05eb, 0x4c55, 549 + 0x60f4, 0x728c, 0x3b6f, 0x2037, 0x7f69, 0x0936, 0x651a, 0x4ceb, 550 + 0x6218, 0x79f3, 0x383f, 0x18d9, 0x4f05, 0x5c82, 0x2912, 0x6f17, 551 + 0x6856, 0x5938, 0x1007, 0x61ab, 0x3e7f, 0x57c2, 0x542f, 0x4f62, 552 + 0x7454, 0x2eac, 0x7739, 0x42d4, 0x2f90, 0x435a, 0x2e52, 0x2064, 553 + 0x637c, 0x66ad, 0x2c90, 0x0bad, 0x759c, 0x0029, 0x0986, 0x7126, 554 + 0x1ca7, 0x1605, 0x386a, 0x27f5, 0x1380, 0x6d75, 0x24c3, 0x0f8e, 555 + 0x2b7a, 0x1418, 0x1fd1, 0x7dc1, 0x2d8e, 0x43af, 0x2267, 0x7da3, 556 + 0x4e3d, 0x1338, 0x50db, 0x454d, 0x764d, 0x40a3, 0x42e6, 0x262b, 557 + 0x2d2e, 0x1aea, 0x2e17, 0x173d, 0x3a6e, 0x71bf, 0x25f9, 0x0a5d, 558 + 0x7c57, 0x0fbe, 0x46ce, 0x4939, 0x6b17, 0x37bb, 0x3e91, 0x76db, 559 + }; 560 + 561 + /* 562 + * sunxi_nfc_randomizer_ecc512_seeds and sunxi_nfc_randomizer_ecc1024_seeds 563 + * have been generated using 564 + * sunxi_nfc_randomizer_step(seed, (step_size * 8) + 15), which is what 565 + * the randomizer engine does internally before de/scrambling OOB data. 
566 + * 567 + * Those tables are statically defined to avoid calculating randomizer state 568 + * at runtime. 569 + */ 570 + static const u16 sunxi_nfc_randomizer_ecc512_seeds[] = { 571 + 0x3346, 0x367f, 0x1f18, 0x769a, 0x4f64, 0x068c, 0x2ef1, 0x6b64, 572 + 0x28a9, 0x15d7, 0x30f8, 0x3659, 0x53db, 0x7c5f, 0x71d4, 0x4409, 573 + 0x26eb, 0x03cc, 0x655d, 0x47d4, 0x4daa, 0x0877, 0x712d, 0x3617, 574 + 0x3264, 0x49aa, 0x7f9e, 0x588e, 0x4fbc, 0x7176, 0x7f91, 0x6c6d, 575 + 0x4b95, 0x5fb7, 0x3844, 0x4037, 0x0184, 0x081b, 0x0ee8, 0x5b91, 576 + 0x293d, 0x1f71, 0x0e6f, 0x402b, 0x5122, 0x1e52, 0x22be, 0x3d2d, 577 + 0x75bc, 0x7c60, 0x6291, 0x1a2f, 0x61d4, 0x74aa, 0x4140, 0x29ab, 578 + 0x472d, 0x2852, 0x017e, 0x15e8, 0x5ec2, 0x17cf, 0x7d0f, 0x06b8, 579 + 0x117a, 0x6b94, 0x789b, 0x3126, 0x6ac5, 0x5be7, 0x150f, 0x51f8, 580 + 0x7889, 0x0aa5, 0x663d, 0x77e8, 0x0b87, 0x3dcb, 0x360d, 0x218b, 581 + 0x512f, 0x7dc9, 0x6a4d, 0x630a, 0x3547, 0x1dd2, 0x5aea, 0x69a5, 582 + 0x7bfa, 0x5e4f, 0x1519, 0x6430, 0x3a0e, 0x5eb3, 0x5425, 0x0c7a, 583 + 0x5540, 0x3670, 0x63c1, 0x31e9, 0x5a39, 0x2de7, 0x5979, 0x2891, 584 + 0x1562, 0x014b, 0x5b05, 0x2756, 0x5a34, 0x13aa, 0x6cb5, 0x2c36, 585 + 0x5e72, 0x1306, 0x0861, 0x15ef, 0x1ee8, 0x5a37, 0x7ac4, 0x45dd, 586 + 0x44c4, 0x7266, 0x2f41, 0x3ccc, 0x045e, 0x7d40, 0x7c66, 0x0fa0, 587 + }; 588 + 589 + static const u16 sunxi_nfc_randomizer_ecc1024_seeds[] = { 590 + 0x2cf5, 0x35f1, 0x63a4, 0x5274, 0x2bd2, 0x778b, 0x7285, 0x32b6, 591 + 0x6a5c, 0x70d6, 0x757d, 0x6769, 0x5375, 0x1e81, 0x0cf3, 0x3982, 592 + 0x6787, 0x042a, 0x6c49, 0x1925, 0x56a8, 0x40a9, 0x063e, 0x7bd9, 593 + 0x4dbf, 0x55ec, 0x672e, 0x7334, 0x5185, 0x4d00, 0x232a, 0x7e07, 594 + 0x445d, 0x6b92, 0x528f, 0x4255, 0x53ba, 0x7d82, 0x2a2e, 0x3a4e, 595 + 0x75eb, 0x450c, 0x6844, 0x1b5d, 0x581a, 0x4cc6, 0x0379, 0x37b2, 596 + 0x419f, 0x0e92, 0x6b27, 0x5624, 0x01e3, 0x07c1, 0x44a5, 0x130c, 597 + 0x13e8, 0x5910, 0x0876, 0x60c5, 0x54e3, 0x5b7f, 0x2269, 0x509f, 598 + 0x7665, 0x36fd, 0x3e9a, 0x0579, 0x6295, 0x14ef, 
0x0a81, 0x1bcc, 599 + 0x4b16, 0x64db, 0x0514, 0x4f07, 0x0591, 0x3576, 0x6853, 0x0d9e, 600 + 0x259f, 0x38b7, 0x64fb, 0x3094, 0x4693, 0x6ddd, 0x29bb, 0x0bc8, 601 + 0x3f47, 0x490e, 0x0c0e, 0x7933, 0x3c9e, 0x5840, 0x398d, 0x3e68, 602 + 0x4af1, 0x71f5, 0x57cf, 0x1121, 0x64eb, 0x3579, 0x15ac, 0x584d, 603 + 0x5f2a, 0x47e2, 0x6528, 0x6eac, 0x196e, 0x6b96, 0x0450, 0x0179, 604 + 0x609c, 0x06e1, 0x4626, 0x42c7, 0x273e, 0x486f, 0x0705, 0x1601, 605 + 0x145b, 0x407e, 0x062b, 0x57a5, 0x53f9, 0x5659, 0x4410, 0x3ccd, 606 + }; 607 + 608 + static u16 sunxi_nfc_randomizer_step(u16 state, int count) 609 + { 610 + state &= 0x7fff; 611 + 612 + /* 613 + * This loop is just a simple implementation of a Fibonacci LFSR using 614 + * the x16 + x15 + 1 polynomial. 615 + */ 616 + while (count--) 617 + state = ((state >> 1) | 618 + (((state ^ (state >> 1)) & 1) << 14)) & 0x7fff; 619 + 620 + return state; 621 + } 622 + 623 + static u16 sunxi_nfc_randomizer_state(struct mtd_info *mtd, int page, bool ecc) 624 + { 625 + const u16 *seeds = sunxi_nfc_randomizer_page_seeds; 626 + int mod = mtd_div_by_ws(mtd->erasesize, mtd); 627 + 628 + if (mod > ARRAY_SIZE(sunxi_nfc_randomizer_page_seeds)) 629 + mod = ARRAY_SIZE(sunxi_nfc_randomizer_page_seeds); 630 + 631 + if (ecc) { 632 + if (mtd->ecc_step_size == 512) 633 + seeds = sunxi_nfc_randomizer_ecc512_seeds; 634 + else 635 + seeds = sunxi_nfc_randomizer_ecc1024_seeds; 636 + } 637 + 638 + return seeds[page % mod]; 639 + } 640 + 641 + static void sunxi_nfc_randomizer_config(struct mtd_info *mtd, 642 + int page, bool ecc) 643 + { 644 + struct nand_chip *nand = mtd_to_nand(mtd); 645 + struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 646 + u32 ecc_ctl = readl(nfc->regs + NFC_REG_ECC_CTL); 647 + u16 state; 648 + 649 + if (!(nand->options & NAND_NEED_SCRAMBLING)) 650 + return; 651 + 652 + ecc_ctl = readl(nfc->regs + NFC_REG_ECC_CTL); 653 + state = sunxi_nfc_randomizer_state(mtd, page, ecc); 654 + ecc_ctl = readl(nfc->regs + NFC_REG_ECC_CTL) & 
~NFC_RANDOM_SEED_MSK; 655 + writel(ecc_ctl | NFC_RANDOM_SEED(state), nfc->regs + NFC_REG_ECC_CTL); 656 + } 657 + 658 + static void sunxi_nfc_randomizer_enable(struct mtd_info *mtd) 659 + { 660 + struct nand_chip *nand = mtd_to_nand(mtd); 661 + struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 662 + 663 + if (!(nand->options & NAND_NEED_SCRAMBLING)) 664 + return; 665 + 666 + writel(readl(nfc->regs + NFC_REG_ECC_CTL) | NFC_RANDOM_EN, 667 + nfc->regs + NFC_REG_ECC_CTL); 668 + } 669 + 670 + static void sunxi_nfc_randomizer_disable(struct mtd_info *mtd) 671 + { 672 + struct nand_chip *nand = mtd_to_nand(mtd); 673 + struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 674 + 675 + if (!(nand->options & NAND_NEED_SCRAMBLING)) 676 + return; 677 + 678 + writel(readl(nfc->regs + NFC_REG_ECC_CTL) & ~NFC_RANDOM_EN, 679 + nfc->regs + NFC_REG_ECC_CTL); 680 + } 681 + 682 + static void sunxi_nfc_randomize_bbm(struct mtd_info *mtd, int page, u8 *bbm) 683 + { 684 + u16 state = sunxi_nfc_randomizer_state(mtd, page, true); 685 + 686 + bbm[0] ^= state; 687 + bbm[1] ^= sunxi_nfc_randomizer_step(state, 8); 688 + } 689 + 690 + static void sunxi_nfc_randomizer_write_buf(struct mtd_info *mtd, 691 + const uint8_t *buf, int len, 692 + bool ecc, int page) 693 + { 694 + sunxi_nfc_randomizer_config(mtd, page, ecc); 695 + sunxi_nfc_randomizer_enable(mtd); 696 + sunxi_nfc_write_buf(mtd, buf, len); 697 + sunxi_nfc_randomizer_disable(mtd); 698 + } 699 + 700 + static void sunxi_nfc_randomizer_read_buf(struct mtd_info *mtd, uint8_t *buf, 701 + int len, bool ecc, int page) 702 + { 703 + sunxi_nfc_randomizer_config(mtd, page, ecc); 704 + sunxi_nfc_randomizer_enable(mtd); 705 + sunxi_nfc_read_buf(mtd, buf, len); 706 + sunxi_nfc_randomizer_disable(mtd); 707 + } 708 + 542 709 static void sunxi_nfc_hw_ecc_enable(struct mtd_info *mtd) 543 710 { 544 711 struct nand_chip *nand = mtd_to_nand(mtd); ··· 743 574 u8 *data, int data_off, 744 575 u8 *oob, int oob_off, 745 576 int *cur_off, 746 - unsigned int 
*max_bitflips) 577 + unsigned int *max_bitflips, 578 + bool bbm, int page) 747 579 { 748 580 struct nand_chip *nand = mtd_to_nand(mtd); 749 581 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 750 582 struct nand_ecc_ctrl *ecc = &nand->ecc; 583 + int raw_mode = 0; 751 584 u32 status; 752 585 int ret; 753 586 754 587 if (*cur_off != data_off) 755 588 nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1); 756 589 757 - sunxi_nfc_read_buf(mtd, NULL, ecc->size); 590 + sunxi_nfc_randomizer_read_buf(mtd, NULL, ecc->size, false, page); 758 591 759 592 if (data_off + ecc->size != oob_off) 760 593 nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1); ··· 765 594 if (ret) 766 595 return ret; 767 596 597 + sunxi_nfc_randomizer_enable(mtd); 768 598 writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ECC_OP, 769 599 nfc->regs + NFC_REG_CMD); 770 600 771 601 ret = sunxi_nfc_wait_int(nfc, NFC_CMD_INT_FLAG, 0); 602 + sunxi_nfc_randomizer_disable(mtd); 772 603 if (ret) 773 604 return ret; 774 605 606 + *cur_off = oob_off + ecc->bytes + 4; 607 + 775 608 status = readl(nfc->regs + NFC_REG_ECC_ST); 609 + if (status & NFC_ECC_PAT_FOUND(0)) { 610 + u8 pattern = 0xff; 611 + 612 + if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1))) 613 + pattern = 0x0; 614 + 615 + memset(data, pattern, ecc->size); 616 + memset(oob, pattern, ecc->bytes + 4); 617 + 618 + return 1; 619 + } 620 + 776 621 ret = NFC_ECC_ERR_CNT(0, readl(nfc->regs + NFC_REG_ECC_ERR_CNT(0))); 777 622 778 623 memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size); 779 624 780 625 nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1); 781 - sunxi_nfc_read_buf(mtd, oob, ecc->bytes + 4); 626 + sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4, true, page); 782 627 783 628 if (status & NFC_ECC_ERR(0)) { 629 + /* 630 + * Re-read the data with the randomizer disabled to identify 631 + * bitflips in erased pages. 
632 + */ 633 + if (nand->options & NAND_NEED_SCRAMBLING) { 634 + nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1); 635 + nand->read_buf(mtd, data, ecc->size); 636 + nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1); 637 + nand->read_buf(mtd, oob, ecc->bytes + 4); 638 + } 639 + 784 640 ret = nand_check_erased_ecc_chunk(data, ecc->size, 785 641 oob, ecc->bytes + 4, 786 642 NULL, 0, ecc->strength); 643 + if (ret >= 0) 644 + raw_mode = 1; 787 645 } else { 788 646 /* 789 647 * The engine protects 4 bytes of OOB data per chunk. ··· 820 620 */ 821 621 sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(0)), 822 622 oob); 623 + 624 + /* De-randomize the Bad Block Marker. */ 625 + if (bbm && nand->options & NAND_NEED_SCRAMBLING) 626 + sunxi_nfc_randomize_bbm(mtd, page, oob); 823 627 } 824 628 825 629 if (ret < 0) { ··· 833 629 *max_bitflips = max_t(unsigned int, *max_bitflips, ret); 834 630 } 835 631 836 - *cur_off = oob_off + ecc->bytes + 4; 837 - 838 - return 0; 632 + return raw_mode; 839 633 } 840 634 841 635 static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd, 842 - u8 *oob, int *cur_off) 636 + u8 *oob, int *cur_off, 637 + bool randomize, int page) 843 638 { 844 639 struct nand_chip *nand = mtd_to_nand(mtd); 845 640 struct nand_ecc_ctrl *ecc = &nand->ecc; ··· 852 649 nand->cmdfunc(mtd, NAND_CMD_RNDOUT, 853 650 offset + mtd->writesize, -1); 854 651 855 - sunxi_nfc_read_buf(mtd, oob + offset, len); 652 + if (!randomize) 653 + sunxi_nfc_read_buf(mtd, oob + offset, len); 654 + else 655 + sunxi_nfc_randomizer_read_buf(mtd, oob + offset, len, 656 + false, page); 856 657 857 658 *cur_off = mtd->oobsize + mtd->writesize; 858 659 } ··· 869 662 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd, 870 663 const u8 *data, int data_off, 871 664 const u8 *oob, int oob_off, 872 - int *cur_off) 665 + int *cur_off, bool bbm, 666 + int page) 873 667 { 874 668 struct nand_chip *nand = mtd_to_nand(mtd); 875 669 struct sunxi_nfc *nfc = 
to_sunxi_nfc(nand->controller); ··· 880 672 if (data_off != *cur_off) 881 673 nand->cmdfunc(mtd, NAND_CMD_RNDIN, data_off, -1); 882 674 883 - sunxi_nfc_write_buf(mtd, data, ecc->size); 675 + sunxi_nfc_randomizer_write_buf(mtd, data, ecc->size, false, page); 884 676 885 677 /* Fill OOB data in */ 886 - writel(sunxi_nfc_buf_to_user_data(oob), 887 - nfc->regs + NFC_REG_USER_DATA(0)); 678 + if ((nand->options & NAND_NEED_SCRAMBLING) && bbm) { 679 + u8 user_data[4]; 680 + 681 + memcpy(user_data, oob, 4); 682 + sunxi_nfc_randomize_bbm(mtd, page, user_data); 683 + writel(sunxi_nfc_buf_to_user_data(user_data), 684 + nfc->regs + NFC_REG_USER_DATA(0)); 685 + } else { 686 + writel(sunxi_nfc_buf_to_user_data(oob), 687 + nfc->regs + NFC_REG_USER_DATA(0)); 688 + } 888 689 889 690 if (data_off + ecc->size != oob_off) 890 691 nand->cmdfunc(mtd, NAND_CMD_RNDIN, oob_off, -1); ··· 902 685 if (ret) 903 686 return ret; 904 687 688 + sunxi_nfc_randomizer_enable(mtd); 905 689 writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | 906 690 NFC_ACCESS_DIR | NFC_ECC_OP, 907 691 nfc->regs + NFC_REG_CMD); 908 692 909 693 ret = sunxi_nfc_wait_int(nfc, NFC_CMD_INT_FLAG, 0); 694 + sunxi_nfc_randomizer_disable(mtd); 910 695 if (ret) 911 696 return ret; 912 697 ··· 918 699 } 919 700 920 701 static void sunxi_nfc_hw_ecc_write_extra_oob(struct mtd_info *mtd, 921 - u8 *oob, int *cur_off) 702 + u8 *oob, int *cur_off, 703 + int page) 922 704 { 923 705 struct nand_chip *nand = mtd_to_nand(mtd); 924 706 struct nand_ecc_ctrl *ecc = &nand->ecc; ··· 933 713 nand->cmdfunc(mtd, NAND_CMD_RNDIN, 934 714 offset + mtd->writesize, -1); 935 715 936 - sunxi_nfc_write_buf(mtd, oob + offset, len); 716 + sunxi_nfc_randomizer_write_buf(mtd, oob + offset, len, false, page); 937 717 938 718 *cur_off = mtd->oobsize + mtd->writesize; 939 719 } ··· 945 725 struct nand_ecc_ctrl *ecc = &chip->ecc; 946 726 unsigned int max_bitflips = 0; 947 727 int ret, i, cur_off = 0; 728 + bool raw_mode = false; 948 729 949 730 
sunxi_nfc_hw_ecc_enable(mtd); 950 731 ··· 957 736 958 737 ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob, 959 738 oob_off + mtd->writesize, 960 - &cur_off, &max_bitflips); 961 - if (ret) 739 + &cur_off, &max_bitflips, 740 + !i, page); 741 + if (ret < 0) 962 742 return ret; 743 + else if (ret) 744 + raw_mode = true; 963 745 } 964 746 965 747 if (oob_required) 966 - sunxi_nfc_hw_ecc_read_extra_oob(mtd, chip->oob_poi, &cur_off); 748 + sunxi_nfc_hw_ecc_read_extra_oob(mtd, chip->oob_poi, &cur_off, 749 + !raw_mode, page); 967 750 968 751 sunxi_nfc_hw_ecc_disable(mtd); 969 752 ··· 992 767 993 768 ret = sunxi_nfc_hw_ecc_write_chunk(mtd, data, data_off, oob, 994 769 oob_off + mtd->writesize, 995 - &cur_off); 770 + &cur_off, !i, page); 996 771 if (ret) 997 772 return ret; 998 773 } 999 774 1000 - if (oob_required) 1001 - sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi, &cur_off); 775 + if (oob_required || (chip->options & NAND_NEED_SCRAMBLING)) 776 + sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi, 777 + &cur_off, page); 1002 778 1003 779 sunxi_nfc_hw_ecc_disable(mtd); 1004 780 ··· 1014 788 struct nand_ecc_ctrl *ecc = &chip->ecc; 1015 789 unsigned int max_bitflips = 0; 1016 790 int ret, i, cur_off = 0; 791 + bool raw_mode = false; 1017 792 1018 793 sunxi_nfc_hw_ecc_enable(mtd); 1019 794 ··· 1026 799 1027 800 ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob, 1028 801 oob_off, &cur_off, 1029 - &max_bitflips); 1030 - if (ret) 802 + &max_bitflips, !i, page); 803 + if (ret < 0) 1031 804 return ret; 805 + else if (ret) 806 + raw_mode = true; 1032 807 } 1033 808 1034 809 if (oob_required) 1035 - sunxi_nfc_hw_ecc_read_extra_oob(mtd, chip->oob_poi, &cur_off); 810 + sunxi_nfc_hw_ecc_read_extra_oob(mtd, chip->oob_poi, &cur_off, 811 + !raw_mode, page); 1036 812 1037 813 sunxi_nfc_hw_ecc_disable(mtd); 1038 814 ··· 1059 829 const u8 *oob = chip->oob_poi + (i * (ecc->bytes + 4)); 1060 830 1061 831 ret = sunxi_nfc_hw_ecc_write_chunk(mtd, data, data_off, 1062 - 
oob, oob_off, &cur_off); 832 + oob, oob_off, &cur_off, 833 + false, page); 1063 834 if (ret) 1064 835 return ret; 1065 836 } 1066 837 1067 - if (oob_required) 1068 - sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi, &cur_off); 838 + if (oob_required || (chip->options & NAND_NEED_SCRAMBLING)) 839 + sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi, 840 + &cur_off, page); 1069 841 1070 842 sunxi_nfc_hw_ecc_disable(mtd); 1071 843 ··· 1576 1344 1577 1345 if (nand->bbt_options & NAND_BBT_USE_FLASH) 1578 1346 nand->bbt_options |= NAND_BBT_NO_OOB; 1347 + 1348 + if (nand->options & NAND_NEED_SCRAMBLING) 1349 + nand->options |= NAND_NO_SUBPAGE_WRITE; 1579 1350 1580 1351 ret = sunxi_nand_chip_init_timings(chip, np); 1581 1352 if (ret) {
-2
drivers/mtd/nand/vf610_nfc.c
··· 795 795 goto error; 796 796 } 797 797 798 - /* propagate ecc.layout to mtd_info */ 799 - mtd->ecclayout = chip->ecc.layout; 800 798 chip->ecc.read_page = vf610_nfc_read_page; 801 799 chip->ecc.write_page = vf610_nfc_write_page; 802 800
+9 -23
drivers/mtd/onenand/onenand_base.c
··· 1124 1124 pr_debug("%s: from = 0x%08x, len = %i\n", __func__, (unsigned int)from, 1125 1125 (int)len); 1126 1126 1127 - if (ops->mode == MTD_OPS_AUTO_OOB) 1128 - oobsize = this->ecclayout->oobavail; 1129 - else 1130 - oobsize = mtd->oobsize; 1131 - 1127 + oobsize = mtd_oobavail(mtd, ops); 1132 1128 oobcolumn = from & (mtd->oobsize - 1); 1133 1129 1134 1130 /* Do not allow reads past end of device */ ··· 1225 1229 pr_debug("%s: from = 0x%08x, len = %i\n", __func__, (unsigned int)from, 1226 1230 (int)len); 1227 1231 1228 - if (ops->mode == MTD_OPS_AUTO_OOB) 1229 - oobsize = this->ecclayout->oobavail; 1230 - else 1231 - oobsize = mtd->oobsize; 1232 - 1232 + oobsize = mtd_oobavail(mtd, ops); 1233 1233 oobcolumn = from & (mtd->oobsize - 1); 1234 1234 1235 1235 /* Do not allow reads past end of device */ ··· 1357 1365 ops->oobretlen = 0; 1358 1366 1359 1367 if (mode == MTD_OPS_AUTO_OOB) 1360 - oobsize = this->ecclayout->oobavail; 1368 + oobsize = mtd->oobavail; 1361 1369 else 1362 1370 oobsize = mtd->oobsize; 1363 1371 ··· 1877 1885 /* Check zero length */ 1878 1886 if (!len) 1879 1887 return 0; 1880 - 1881 - if (ops->mode == MTD_OPS_AUTO_OOB) 1882 - oobsize = this->ecclayout->oobavail; 1883 - else 1884 - oobsize = mtd->oobsize; 1885 - 1888 + oobsize = mtd_oobavail(mtd, ops); 1886 1889 oobcolumn = to & (mtd->oobsize - 1); 1887 1890 1888 1891 column = to & (mtd->writesize - 1); ··· 2050 2063 ops->oobretlen = 0; 2051 2064 2052 2065 if (mode == MTD_OPS_AUTO_OOB) 2053 - oobsize = this->ecclayout->oobavail; 2066 + oobsize = mtd->oobavail; 2054 2067 else 2055 2068 oobsize = mtd->oobsize; 2056 2069 ··· 2586 2599 */ 2587 2600 static int onenand_block_markbad(struct mtd_info *mtd, loff_t ofs) 2588 2601 { 2602 + struct onenand_chip *this = mtd->priv; 2589 2603 int ret; 2590 2604 2591 2605 ret = onenand_block_isbad(mtd, ofs); ··· 2598 2610 } 2599 2611 2600 2612 onenand_get_device(mtd, FL_WRITING); 2601 - ret = mtd_block_markbad(mtd, ofs); 2613 + ret = this->block_markbad(mtd, 
ofs); 2602 2614 onenand_release_device(mtd); 2603 2615 return ret; 2604 2616 } ··· 4037 4049 * The number of bytes available for a client to place data into 4038 4050 * the out of band area 4039 4051 */ 4040 - this->ecclayout->oobavail = 0; 4052 + mtd->oobavail = 0; 4041 4053 for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES && 4042 4054 this->ecclayout->oobfree[i].length; i++) 4043 - this->ecclayout->oobavail += 4044 - this->ecclayout->oobfree[i].length; 4045 - mtd->oobavail = this->ecclayout->oobavail; 4055 + mtd->oobavail += this->ecclayout->oobfree[i].length; 4046 4056 4047 4057 mtd->ecclayout = this->ecclayout; 4048 4058 mtd->ecc_strength = 1;
+1 -4
drivers/mtd/onenand/onenand_bbt.c
··· 179 179 * by the onenand_release function. 180 180 * 181 181 */ 182 - int onenand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd) 182 + static int onenand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd) 183 183 { 184 184 struct onenand_chip *this = mtd->priv; 185 185 struct bbm_info *bbm = this->bbm; ··· 247 247 248 248 return onenand_scan_bbt(mtd, bbm->badblock_pattern); 249 249 } 250 - 251 - EXPORT_SYMBOL(onenand_scan_bbt); 252 - EXPORT_SYMBOL(onenand_default_bbt);
+2 -1
drivers/mtd/spi-nor/Kconfig
··· 9 9 10 10 config MTD_MT81xx_NOR 11 11 tristate "Mediatek MT81xx SPI NOR flash controller" 12 + depends on HAS_IOMEM 12 13 help 13 14 This enables access to SPI NOR flash, using MT81xx SPI NOR flash 14 15 controller. This controller does not support generic SPI BUS, it only ··· 31 30 32 31 config SPI_FSL_QUADSPI 33 32 tristate "Freescale Quad SPI controller" 34 - depends on ARCH_MXC || COMPILE_TEST 33 + depends on ARCH_MXC || SOC_LS1021A || ARCH_LAYERSCAPE || COMPILE_TEST 35 34 depends on HAS_IOMEM 36 35 help 37 36 This enables support for the Quad SPI controller in master mode.
+107 -60
drivers/mtd/spi-nor/fsl-quadspi.c
··· 213 213 FSL_QUADSPI_IMX6SX, 214 214 FSL_QUADSPI_IMX7D, 215 215 FSL_QUADSPI_IMX6UL, 216 + FSL_QUADSPI_LS1021A, 216 217 }; 217 218 218 219 struct fsl_qspi_devtype_data { ··· 259 258 | QUADSPI_QUIRK_4X_INT_CLK, 260 259 }; 261 260 261 + static struct fsl_qspi_devtype_data ls1021a_data = { 262 + .devtype = FSL_QUADSPI_LS1021A, 263 + .rxfifo = 128, 264 + .txfifo = 64, 265 + .ahb_buf_size = 1024, 266 + .driver_data = 0, 267 + }; 268 + 262 269 #define FSL_QSPI_MAX_CHIP 4 263 270 struct fsl_qspi { 264 271 struct spi_nor nor[FSL_QSPI_MAX_CHIP]; ··· 284 275 u32 clk_rate; 285 276 unsigned int chip_base_addr; /* We may support two chips. */ 286 277 bool has_second_chip; 278 + bool big_endian; 287 279 struct mutex lock; 288 280 struct pm_qos_request pm_qos_req; 289 281 }; ··· 310 300 } 311 301 312 302 /* 303 + * R/W functions for big- or little-endian registers: 304 + * The qSPI controller's endian is independent of the CPU core's endian. 305 + * So far, although the CPU core is little-endian but the qSPI have two 306 + * versions for big-endian and little-endian. 307 + */ 308 + static void qspi_writel(struct fsl_qspi *q, u32 val, void __iomem *addr) 309 + { 310 + if (q->big_endian) 311 + iowrite32be(val, addr); 312 + else 313 + iowrite32(val, addr); 314 + } 315 + 316 + static u32 qspi_readl(struct fsl_qspi *q, void __iomem *addr) 317 + { 318 + if (q->big_endian) 319 + return ioread32be(addr); 320 + else 321 + return ioread32(addr); 322 + } 323 + 324 + /* 313 325 * An IC bug makes us to re-arrange the 32-bit data. 314 326 * The following chips, such as IMX6SLX, have fixed this bug. 
315 327 */ ··· 342 310 343 311 static inline void fsl_qspi_unlock_lut(struct fsl_qspi *q) 344 312 { 345 - writel(QUADSPI_LUTKEY_VALUE, q->iobase + QUADSPI_LUTKEY); 346 - writel(QUADSPI_LCKER_UNLOCK, q->iobase + QUADSPI_LCKCR); 313 + qspi_writel(q, QUADSPI_LUTKEY_VALUE, q->iobase + QUADSPI_LUTKEY); 314 + qspi_writel(q, QUADSPI_LCKER_UNLOCK, q->iobase + QUADSPI_LCKCR); 347 315 } 348 316 349 317 static inline void fsl_qspi_lock_lut(struct fsl_qspi *q) 350 318 { 351 - writel(QUADSPI_LUTKEY_VALUE, q->iobase + QUADSPI_LUTKEY); 352 - writel(QUADSPI_LCKER_LOCK, q->iobase + QUADSPI_LCKCR); 319 + qspi_writel(q, QUADSPI_LUTKEY_VALUE, q->iobase + QUADSPI_LUTKEY); 320 + qspi_writel(q, QUADSPI_LCKER_LOCK, q->iobase + QUADSPI_LCKCR); 353 321 } 354 322 355 323 static irqreturn_t fsl_qspi_irq_handler(int irq, void *dev_id) ··· 358 326 u32 reg; 359 327 360 328 /* clear interrupt */ 361 - reg = readl(q->iobase + QUADSPI_FR); 362 - writel(reg, q->iobase + QUADSPI_FR); 329 + reg = qspi_readl(q, q->iobase + QUADSPI_FR); 330 + qspi_writel(q, reg, q->iobase + QUADSPI_FR); 363 331 364 332 if (reg & QUADSPI_FR_TFF_MASK) 365 333 complete(&q->c); ··· 380 348 381 349 /* Clear all the LUT table */ 382 350 for (i = 0; i < QUADSPI_LUT_NUM; i++) 383 - writel(0, base + QUADSPI_LUT_BASE + i * 4); 351 + qspi_writel(q, 0, base + QUADSPI_LUT_BASE + i * 4); 384 352 385 353 /* Quad Read */ 386 354 lut_base = SEQID_QUAD_READ * 4; ··· 396 364 dummy = 8; 397 365 } 398 366 399 - writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen), 367 + qspi_writel(q, LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen), 400 368 base + QUADSPI_LUT(lut_base)); 401 - writel(LUT0(DUMMY, PAD1, dummy) | LUT1(FSL_READ, PAD4, rxfifo), 369 + qspi_writel(q, LUT0(DUMMY, PAD1, dummy) | LUT1(FSL_READ, PAD4, rxfifo), 402 370 base + QUADSPI_LUT(lut_base + 1)); 403 371 404 372 /* Write enable */ 405 373 lut_base = SEQID_WREN * 4; 406 - writel(LUT0(CMD, PAD1, SPINOR_OP_WREN), base + QUADSPI_LUT(lut_base)); 374 + qspi_writel(q, LUT0(CMD, 
PAD1, SPINOR_OP_WREN), 375 + base + QUADSPI_LUT(lut_base)); 407 376 408 377 /* Page Program */ 409 378 lut_base = SEQID_PP * 4; ··· 418 385 addrlen = ADDR32BIT; 419 386 } 420 387 421 - writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen), 388 + qspi_writel(q, LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen), 422 389 base + QUADSPI_LUT(lut_base)); 423 - writel(LUT0(FSL_WRITE, PAD1, 0), base + QUADSPI_LUT(lut_base + 1)); 390 + qspi_writel(q, LUT0(FSL_WRITE, PAD1, 0), 391 + base + QUADSPI_LUT(lut_base + 1)); 424 392 425 393 /* Read Status */ 426 394 lut_base = SEQID_RDSR * 4; 427 - writel(LUT0(CMD, PAD1, SPINOR_OP_RDSR) | LUT1(FSL_READ, PAD1, 0x1), 395 + qspi_writel(q, LUT0(CMD, PAD1, SPINOR_OP_RDSR) | 396 + LUT1(FSL_READ, PAD1, 0x1), 428 397 base + QUADSPI_LUT(lut_base)); 429 398 430 399 /* Erase a sector */ ··· 435 400 cmd = q->nor[0].erase_opcode; 436 401 addrlen = q->nor_size <= SZ_16M ? ADDR24BIT : ADDR32BIT; 437 402 438 - writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen), 403 + qspi_writel(q, LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen), 439 404 base + QUADSPI_LUT(lut_base)); 440 405 441 406 /* Erase the whole chip */ 442 407 lut_base = SEQID_CHIP_ERASE * 4; 443 - writel(LUT0(CMD, PAD1, SPINOR_OP_CHIP_ERASE), 408 + qspi_writel(q, LUT0(CMD, PAD1, SPINOR_OP_CHIP_ERASE), 444 409 base + QUADSPI_LUT(lut_base)); 445 410 446 411 /* READ ID */ 447 412 lut_base = SEQID_RDID * 4; 448 - writel(LUT0(CMD, PAD1, SPINOR_OP_RDID) | LUT1(FSL_READ, PAD1, 0x8), 413 + qspi_writel(q, LUT0(CMD, PAD1, SPINOR_OP_RDID) | 414 + LUT1(FSL_READ, PAD1, 0x8), 449 415 base + QUADSPI_LUT(lut_base)); 450 416 451 417 /* Write Register */ 452 418 lut_base = SEQID_WRSR * 4; 453 - writel(LUT0(CMD, PAD1, SPINOR_OP_WRSR) | LUT1(FSL_WRITE, PAD1, 0x2), 419 + qspi_writel(q, LUT0(CMD, PAD1, SPINOR_OP_WRSR) | 420 + LUT1(FSL_WRITE, PAD1, 0x2), 454 421 base + QUADSPI_LUT(lut_base)); 455 422 456 423 /* Read Configuration Register */ 457 424 lut_base = SEQID_RDCR * 4; 458 - writel(LUT0(CMD, PAD1, 
SPINOR_OP_RDCR) | LUT1(FSL_READ, PAD1, 0x1), 425 + qspi_writel(q, LUT0(CMD, PAD1, SPINOR_OP_RDCR) | 426 + LUT1(FSL_READ, PAD1, 0x1), 459 427 base + QUADSPI_LUT(lut_base)); 460 428 461 429 /* Write disable */ 462 430 lut_base = SEQID_WRDI * 4; 463 - writel(LUT0(CMD, PAD1, SPINOR_OP_WRDI), base + QUADSPI_LUT(lut_base)); 431 + qspi_writel(q, LUT0(CMD, PAD1, SPINOR_OP_WRDI), 432 + base + QUADSPI_LUT(lut_base)); 464 433 465 434 /* Enter 4 Byte Mode (Micron) */ 466 435 lut_base = SEQID_EN4B * 4; 467 - writel(LUT0(CMD, PAD1, SPINOR_OP_EN4B), base + QUADSPI_LUT(lut_base)); 436 + qspi_writel(q, LUT0(CMD, PAD1, SPINOR_OP_EN4B), 437 + base + QUADSPI_LUT(lut_base)); 468 438 469 439 /* Enter 4 Byte Mode (Spansion) */ 470 440 lut_base = SEQID_BRWR * 4; 471 - writel(LUT0(CMD, PAD1, SPINOR_OP_BRWR), base + QUADSPI_LUT(lut_base)); 441 + qspi_writel(q, LUT0(CMD, PAD1, SPINOR_OP_BRWR), 442 + base + QUADSPI_LUT(lut_base)); 472 443 473 444 fsl_qspi_lock_lut(q); 474 445 } ··· 529 488 q->chip_base_addr, addr, len, cmd); 530 489 531 490 /* save the reg */ 532 - reg = readl(base + QUADSPI_MCR); 491 + reg = qspi_readl(q, base + QUADSPI_MCR); 533 492 534 - writel(q->memmap_phy + q->chip_base_addr + addr, base + QUADSPI_SFAR); 535 - writel(QUADSPI_RBCT_WMRK_MASK | QUADSPI_RBCT_RXBRD_USEIPS, 493 + qspi_writel(q, q->memmap_phy + q->chip_base_addr + addr, 494 + base + QUADSPI_SFAR); 495 + qspi_writel(q, QUADSPI_RBCT_WMRK_MASK | QUADSPI_RBCT_RXBRD_USEIPS, 536 496 base + QUADSPI_RBCT); 537 - writel(reg | QUADSPI_MCR_CLR_RXF_MASK, base + QUADSPI_MCR); 497 + qspi_writel(q, reg | QUADSPI_MCR_CLR_RXF_MASK, base + QUADSPI_MCR); 538 498 539 499 do { 540 - reg2 = readl(base + QUADSPI_SR); 500 + reg2 = qspi_readl(q, base + QUADSPI_SR); 541 501 if (reg2 & (QUADSPI_SR_IP_ACC_MASK | QUADSPI_SR_AHB_ACC_MASK)) { 542 502 udelay(1); 543 503 dev_dbg(q->dev, "The controller is busy, 0x%x\n", reg2); ··· 549 507 550 508 /* trigger the LUT now */ 551 509 seqid = fsl_qspi_get_seqid(q, cmd); 552 - writel((seqid << 
QUADSPI_IPCR_SEQID_SHIFT) | len, base + QUADSPI_IPCR); 510 + qspi_writel(q, (seqid << QUADSPI_IPCR_SEQID_SHIFT) | len, 511 + base + QUADSPI_IPCR); 553 512 554 513 /* Wait for the interrupt. */ 555 514 if (!wait_for_completion_timeout(&q->c, msecs_to_jiffies(1000))) { 556 515 dev_err(q->dev, 557 516 "cmd 0x%.2x timeout, addr@%.8x, FR:0x%.8x, SR:0x%.8x\n", 558 - cmd, addr, readl(base + QUADSPI_FR), 559 - readl(base + QUADSPI_SR)); 517 + cmd, addr, qspi_readl(q, base + QUADSPI_FR), 518 + qspi_readl(q, base + QUADSPI_SR)); 560 519 err = -ETIMEDOUT; 561 520 } else { 562 521 err = 0; 563 522 } 564 523 565 524 /* restore the MCR */ 566 - writel(reg, base + QUADSPI_MCR); 525 + qspi_writel(q, reg, base + QUADSPI_MCR); 567 526 568 527 return err; 569 528 } ··· 576 533 int i = 0; 577 534 578 535 while (len > 0) { 579 - tmp = readl(q->iobase + QUADSPI_RBDR + i * 4); 536 + tmp = qspi_readl(q, q->iobase + QUADSPI_RBDR + i * 4); 580 537 tmp = fsl_qspi_endian_xchg(q, tmp); 581 538 dev_dbg(q->dev, "chip addr:0x%.8x, rcv:0x%.8x\n", 582 539 q->chip_base_addr, tmp); ··· 604 561 { 605 562 u32 reg; 606 563 607 - reg = readl(q->iobase + QUADSPI_MCR); 564 + reg = qspi_readl(q, q->iobase + QUADSPI_MCR); 608 565 reg |= QUADSPI_MCR_SWRSTHD_MASK | QUADSPI_MCR_SWRSTSD_MASK; 609 - writel(reg, q->iobase + QUADSPI_MCR); 566 + qspi_writel(q, reg, q->iobase + QUADSPI_MCR); 610 567 611 568 /* 612 569 * The minimum delay : 1 AHB + 2 SFCK clocks. ··· 615 572 udelay(1); 616 573 617 574 reg &= ~(QUADSPI_MCR_SWRSTHD_MASK | QUADSPI_MCR_SWRSTSD_MASK); 618 - writel(reg, q->iobase + QUADSPI_MCR); 575 + qspi_writel(q, reg, q->iobase + QUADSPI_MCR); 619 576 } 620 577 621 578 static int fsl_qspi_nor_write(struct fsl_qspi *q, struct spi_nor *nor, ··· 629 586 q->chip_base_addr, to, count); 630 587 631 588 /* clear the TX FIFO. 
*/ 632 - tmp = readl(q->iobase + QUADSPI_MCR); 633 - writel(tmp | QUADSPI_MCR_CLR_TXF_MASK, q->iobase + QUADSPI_MCR); 589 + tmp = qspi_readl(q, q->iobase + QUADSPI_MCR); 590 + qspi_writel(q, tmp | QUADSPI_MCR_CLR_TXF_MASK, q->iobase + QUADSPI_MCR); 634 591 635 592 /* fill the TX data to the FIFO */ 636 593 for (j = 0, i = ((count + 3) / 4); j < i; j++) { 637 594 tmp = fsl_qspi_endian_xchg(q, *txbuf); 638 - writel(tmp, q->iobase + QUADSPI_TBDR); 595 + qspi_writel(q, tmp, q->iobase + QUADSPI_TBDR); 639 596 txbuf++; 640 597 } 641 598 642 599 /* fill the TXFIFO upto 16 bytes for i.MX7d */ 643 600 if (needs_fill_txfifo(q)) 644 601 for (; i < 4; i++) 645 - writel(tmp, q->iobase + QUADSPI_TBDR); 602 + qspi_writel(q, tmp, q->iobase + QUADSPI_TBDR); 646 603 647 604 /* Trigger it */ 648 605 ret = fsl_qspi_runcmd(q, opcode, to, count); ··· 658 615 int nor_size = q->nor_size; 659 616 void __iomem *base = q->iobase; 660 617 661 - writel(nor_size + q->memmap_phy, base + QUADSPI_SFA1AD); 662 - writel(nor_size * 2 + q->memmap_phy, base + QUADSPI_SFA2AD); 663 - writel(nor_size * 3 + q->memmap_phy, base + QUADSPI_SFB1AD); 664 - writel(nor_size * 4 + q->memmap_phy, base + QUADSPI_SFB2AD); 618 + qspi_writel(q, nor_size + q->memmap_phy, base + QUADSPI_SFA1AD); 619 + qspi_writel(q, nor_size * 2 + q->memmap_phy, base + QUADSPI_SFA2AD); 620 + qspi_writel(q, nor_size * 3 + q->memmap_phy, base + QUADSPI_SFB1AD); 621 + qspi_writel(q, nor_size * 4 + q->memmap_phy, base + QUADSPI_SFB2AD); 665 622 } 666 623 667 624 /* ··· 683 640 int seqid; 684 641 685 642 /* AHB configuration for access buffer 0/1/2 .*/ 686 - writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF0CR); 687 - writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF1CR); 688 - writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF2CR); 643 + qspi_writel(q, QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF0CR); 644 + qspi_writel(q, QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF1CR); 645 + qspi_writel(q, 
QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF2CR); 689 646 /* 690 647 * Set ADATSZ with the maximum AHB buffer size to improve the 691 648 * read performance. 692 649 */ 693 - writel(QUADSPI_BUF3CR_ALLMST_MASK | ((q->devtype_data->ahb_buf_size / 8) 694 - << QUADSPI_BUF3CR_ADATSZ_SHIFT), base + QUADSPI_BUF3CR); 650 + qspi_writel(q, QUADSPI_BUF3CR_ALLMST_MASK | 651 + ((q->devtype_data->ahb_buf_size / 8) 652 + << QUADSPI_BUF3CR_ADATSZ_SHIFT), 653 + base + QUADSPI_BUF3CR); 695 654 696 655 /* We only use the buffer3 */ 697 - writel(0, base + QUADSPI_BUF0IND); 698 - writel(0, base + QUADSPI_BUF1IND); 699 - writel(0, base + QUADSPI_BUF2IND); 656 + qspi_writel(q, 0, base + QUADSPI_BUF0IND); 657 + qspi_writel(q, 0, base + QUADSPI_BUF1IND); 658 + qspi_writel(q, 0, base + QUADSPI_BUF2IND); 700 659 701 660 /* Set the default lut sequence for AHB Read. */ 702 661 seqid = fsl_qspi_get_seqid(q, q->nor[0].read_opcode); 703 - writel(seqid << QUADSPI_BFGENCR_SEQID_SHIFT, 662 + qspi_writel(q, seqid << QUADSPI_BFGENCR_SEQID_SHIFT, 704 663 q->iobase + QUADSPI_BFGENCR); 705 664 } 706 665 ··· 758 713 return ret; 759 714 760 715 /* Reset the module */ 761 - writel(QUADSPI_MCR_SWRSTSD_MASK | QUADSPI_MCR_SWRSTHD_MASK, 716 + qspi_writel(q, QUADSPI_MCR_SWRSTSD_MASK | QUADSPI_MCR_SWRSTHD_MASK, 762 717 base + QUADSPI_MCR); 763 718 udelay(1); 764 719 ··· 766 721 fsl_qspi_init_lut(q); 767 722 768 723 /* Disable the module */ 769 - writel(QUADSPI_MCR_MDIS_MASK | QUADSPI_MCR_RESERVED_MASK, 724 + qspi_writel(q, QUADSPI_MCR_MDIS_MASK | QUADSPI_MCR_RESERVED_MASK, 770 725 base + QUADSPI_MCR); 771 726 772 - reg = readl(base + QUADSPI_SMPR); 773 - writel(reg & ~(QUADSPI_SMPR_FSDLY_MASK 727 + reg = qspi_readl(q, base + QUADSPI_SMPR); 728 + qspi_writel(q, reg & ~(QUADSPI_SMPR_FSDLY_MASK 774 729 | QUADSPI_SMPR_FSPHS_MASK 775 730 | QUADSPI_SMPR_HSENA_MASK 776 731 | QUADSPI_SMPR_DDRSMP_MASK), base + QUADSPI_SMPR); 777 732 778 733 /* Enable the module */ 779 - writel(QUADSPI_MCR_RESERVED_MASK | 
QUADSPI_MCR_END_CFG_MASK, 734 + qspi_writel(q, QUADSPI_MCR_RESERVED_MASK | QUADSPI_MCR_END_CFG_MASK, 780 735 base + QUADSPI_MCR); 781 736 782 737 /* clear all interrupt status */ 783 - writel(0xffffffff, q->iobase + QUADSPI_FR); 738 + qspi_writel(q, 0xffffffff, q->iobase + QUADSPI_FR); 784 739 785 740 /* enable the interrupt */ 786 - writel(QUADSPI_RSER_TFIE, q->iobase + QUADSPI_RSER); 741 + qspi_writel(q, QUADSPI_RSER_TFIE, q->iobase + QUADSPI_RSER); 787 742 788 743 return 0; 789 744 } ··· 821 776 { .compatible = "fsl,imx6sx-qspi", .data = (void *)&imx6sx_data, }, 822 777 { .compatible = "fsl,imx7d-qspi", .data = (void *)&imx7d_data, }, 823 778 { .compatible = "fsl,imx6ul-qspi", .data = (void *)&imx6ul_data, }, 779 + { .compatible = "fsl,ls1021a-qspi", .data = (void *)&ls1021a_data, }, 824 780 { /* sentinel */ } 825 781 }; 826 782 MODULE_DEVICE_TABLE(of, fsl_qspi_dt_ids); ··· 1000 954 if (IS_ERR(q->iobase)) 1001 955 return PTR_ERR(q->iobase); 1002 956 957 + q->big_endian = of_property_read_bool(np, "big-endian"); 1003 958 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 1004 959 "QuadSPI-memory"); 1005 960 if (!devm_request_mem_region(dev, res->start, resource_size(res), ··· 1148 1101 } 1149 1102 1150 1103 /* disable the hardware */ 1151 - writel(QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR); 1152 - writel(0x0, q->iobase + QUADSPI_RSER); 1104 + qspi_writel(q, QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR); 1105 + qspi_writel(q, 0x0, q->iobase + QUADSPI_RSER); 1153 1106 1154 1107 mutex_destroy(&q->lock); 1155 1108
+2 -2
drivers/mtd/spi-nor/mtk-quadspi.c
··· 371 371 return ret; 372 372 } 373 373 374 - static int __init mtk_nor_init(struct mt8173_nor *mt8173_nor, 375 - struct device_node *flash_node) 374 + static int mtk_nor_init(struct mt8173_nor *mt8173_nor, 375 + struct device_node *flash_node) 376 376 { 377 377 int ret; 378 378 struct spi_nor *nor;
+161 -85
drivers/mtd/spi-nor/spi-nor.c
··· 61 61 u16 addr_width; 62 62 63 63 u16 flags; 64 - #define SECT_4K 0x01 /* SPINOR_OP_BE_4K works uniformly */ 65 - #define SPI_NOR_NO_ERASE 0x02 /* No erase command needed */ 66 - #define SST_WRITE 0x04 /* use SST byte programming */ 67 - #define SPI_NOR_NO_FR 0x08 /* Can't do fastread */ 68 - #define SECT_4K_PMC 0x10 /* SPINOR_OP_BE_4K_PMC works uniformly */ 69 - #define SPI_NOR_DUAL_READ 0x20 /* Flash supports Dual Read */ 70 - #define SPI_NOR_QUAD_READ 0x40 /* Flash supports Quad Read */ 71 - #define USE_FSR 0x80 /* use flag status register */ 64 + #define SECT_4K BIT(0) /* SPINOR_OP_BE_4K works uniformly */ 65 + #define SPI_NOR_NO_ERASE BIT(1) /* No erase command needed */ 66 + #define SST_WRITE BIT(2) /* use SST byte programming */ 67 + #define SPI_NOR_NO_FR BIT(3) /* Can't do fastread */ 68 + #define SECT_4K_PMC BIT(4) /* SPINOR_OP_BE_4K_PMC works uniformly */ 69 + #define SPI_NOR_DUAL_READ BIT(5) /* Flash supports Dual Read */ 70 + #define SPI_NOR_QUAD_READ BIT(6) /* Flash supports Quad Read */ 71 + #define USE_FSR BIT(7) /* use flag status register */ 72 + #define SPI_NOR_HAS_LOCK BIT(8) /* Flash supports lock/unlock via SR */ 73 + #define SPI_NOR_HAS_TB BIT(9) /* 74 + * Flash SR has Top/Bottom (TB) protect 75 + * bit. Must be used with 76 + * SPI_NOR_HAS_LOCK. 
77 + */ 72 78 }; 73 79 74 80 #define JEDEC_MFR(info) ((info)->id[0]) ··· 440 434 } else { 441 435 pow = ((sr & mask) ^ mask) >> shift; 442 436 *len = mtd->size >> pow; 443 - *ofs = mtd->size - *len; 437 + if (nor->flags & SNOR_F_HAS_SR_TB && sr & SR_TB) 438 + *ofs = 0; 439 + else 440 + *ofs = mtd->size - *len; 444 441 } 445 442 } 446 443 447 444 /* 448 - * Return 1 if the entire region is locked, 0 otherwise 445 + * Return 1 if the entire region is locked (if @locked is true) or unlocked (if 446 + * @locked is false); 0 otherwise 449 447 */ 450 - static int stm_is_locked_sr(struct spi_nor *nor, loff_t ofs, uint64_t len, 451 - u8 sr) 448 + static int stm_check_lock_status_sr(struct spi_nor *nor, loff_t ofs, uint64_t len, 449 + u8 sr, bool locked) 452 450 { 453 451 loff_t lock_offs; 454 452 uint64_t lock_len; 455 453 454 + if (!len) 455 + return 1; 456 + 456 457 stm_get_locked_range(nor, sr, &lock_offs, &lock_len); 457 458 458 - return (ofs + len <= lock_offs + lock_len) && (ofs >= lock_offs); 459 + if (locked) 460 + /* Requested range is a sub-range of locked range */ 461 + return (ofs + len <= lock_offs + lock_len) && (ofs >= lock_offs); 462 + else 463 + /* Requested range does not overlap with locked range */ 464 + return (ofs >= lock_offs + lock_len) || (ofs + len <= lock_offs); 465 + } 466 + 467 + static int stm_is_locked_sr(struct spi_nor *nor, loff_t ofs, uint64_t len, 468 + u8 sr) 469 + { 470 + return stm_check_lock_status_sr(nor, ofs, len, sr, true); 471 + } 472 + 473 + static int stm_is_unlocked_sr(struct spi_nor *nor, loff_t ofs, uint64_t len, 474 + u8 sr) 475 + { 476 + return stm_check_lock_status_sr(nor, ofs, len, sr, false); 459 477 } 460 478 461 479 /* 462 480 * Lock a region of the flash. Compatible with ST Micro and similar flash. 463 - * Supports only the block protection bits BP{0,1,2} in the status register 481 + * Supports the block protection bits BP{0,1,2} in the status register 464 482 * (SR). 
Does not support these features found in newer SR bitfields: 465 - * - TB: top/bottom protect - only handle TB=0 (top protect) 466 483 * - SEC: sector/block protect - only handle SEC=0 (block protect) 467 484 * - CMP: complement protect - only support CMP=0 (range is not complemented) 485 + * 486 + * Support for the following is provided conditionally for some flash: 487 + * - TB: top/bottom protect 468 488 * 469 489 * Sample table portion for 8MB flash (Winbond w25q64fw): 470 490 * ··· 504 472 * 0 | 0 | 1 | 0 | 1 | 2 MB | Upper 1/4 505 473 * 0 | 0 | 1 | 1 | 0 | 4 MB | Upper 1/2 506 474 * X | X | 1 | 1 | 1 | 8 MB | ALL 475 + * ------|-------|-------|-------|-------|---------------|------------------- 476 + * 0 | 1 | 0 | 0 | 1 | 128 KB | Lower 1/64 477 + * 0 | 1 | 0 | 1 | 0 | 256 KB | Lower 1/32 478 + * 0 | 1 | 0 | 1 | 1 | 512 KB | Lower 1/16 479 + * 0 | 1 | 1 | 0 | 0 | 1 MB | Lower 1/8 480 + * 0 | 1 | 1 | 0 | 1 | 2 MB | Lower 1/4 481 + * 0 | 1 | 1 | 1 | 0 | 4 MB | Lower 1/2 507 482 * 508 483 * Returns negative on errors, 0 on success. 509 484 */ ··· 520 481 int status_old, status_new; 521 482 u8 mask = SR_BP2 | SR_BP1 | SR_BP0; 522 483 u8 shift = ffs(mask) - 1, pow, val; 484 + loff_t lock_len; 485 + bool can_be_top = true, can_be_bottom = nor->flags & SNOR_F_HAS_SR_TB; 486 + bool use_top; 523 487 int ret; 524 488 525 489 status_old = read_sr(nor); 526 490 if (status_old < 0) 527 491 return status_old; 528 492 529 - /* SPI NOR always locks to the end */ 530 - if (ofs + len != mtd->size) { 531 - /* Does combined region extend to end? 
*/ 532 - if (!stm_is_locked_sr(nor, ofs + len, mtd->size - ofs - len, 533 - status_old)) 534 - return -EINVAL; 535 - len = mtd->size - ofs; 536 - } 493 + /* If nothing in our range is unlocked, we don't need to do anything */ 494 + if (stm_is_locked_sr(nor, ofs, len, status_old)) 495 + return 0; 496 + 497 + /* If anything below us is unlocked, we can't use 'bottom' protection */ 498 + if (!stm_is_locked_sr(nor, 0, ofs, status_old)) 499 + can_be_bottom = false; 500 + 501 + /* If anything above us is unlocked, we can't use 'top' protection */ 502 + if (!stm_is_locked_sr(nor, ofs + len, mtd->size - (ofs + len), 503 + status_old)) 504 + can_be_top = false; 505 + 506 + if (!can_be_bottom && !can_be_top) 507 + return -EINVAL; 508 + 509 + /* Prefer top, if both are valid */ 510 + use_top = can_be_top; 511 + 512 + /* lock_len: length of region that should end up locked */ 513 + if (use_top) 514 + lock_len = mtd->size - ofs; 515 + else 516 + lock_len = ofs + len; 537 517 538 518 /* 539 519 * Need smallest pow such that: ··· 563 505 * 564 506 * pow = ceil(log2(size / len)) = log2(size) - floor(log2(len)) 565 507 */ 566 - pow = ilog2(mtd->size) - ilog2(len); 508 + pow = ilog2(mtd->size) - ilog2(lock_len); 567 509 val = mask - (pow << shift); 568 510 if (val & ~mask) 569 511 return -EINVAL; ··· 571 513 if (!(val & mask)) 572 514 return -EINVAL; 573 515 574 - status_new = (status_old & ~mask) | val; 516 + status_new = (status_old & ~mask & ~SR_TB) | val; 517 + 518 + /* Disallow further writes if WP pin is asserted */ 519 + status_new |= SR_SRWD; 520 + 521 + if (!use_top) 522 + status_new |= SR_TB; 523 + 524 + /* Don't bother if they're the same */ 525 + if (status_new == status_old) 526 + return 0; 575 527 576 528 /* Only modify protection if it will not unlock other areas */ 577 - if ((status_new & mask) <= (status_old & mask)) 529 + if ((status_new & mask) < (status_old & mask)) 578 530 return -EINVAL; 579 531 580 532 write_enable(nor); ··· 605 537 int status_old, status_new; 
606 538 u8 mask = SR_BP2 | SR_BP1 | SR_BP0; 607 539 u8 shift = ffs(mask) - 1, pow, val; 540 + loff_t lock_len; 541 + bool can_be_top = true, can_be_bottom = nor->flags & SNOR_F_HAS_SR_TB; 542 + bool use_top; 608 543 int ret; 609 544 610 545 status_old = read_sr(nor); 611 546 if (status_old < 0) 612 547 return status_old; 613 548 614 - /* Cannot unlock; would unlock larger region than requested */ 615 - if (stm_is_locked_sr(nor, ofs - mtd->erasesize, mtd->erasesize, 616 - status_old)) 549 + /* If nothing in our range is locked, we don't need to do anything */ 550 + if (stm_is_unlocked_sr(nor, ofs, len, status_old)) 551 + return 0; 552 + 553 + /* If anything below us is locked, we can't use 'top' protection */ 554 + if (!stm_is_unlocked_sr(nor, 0, ofs, status_old)) 555 + can_be_top = false; 556 + 557 + /* If anything above us is locked, we can't use 'bottom' protection */ 558 + if (!stm_is_unlocked_sr(nor, ofs + len, mtd->size - (ofs + len), 559 + status_old)) 560 + can_be_bottom = false; 561 + 562 + if (!can_be_bottom && !can_be_top) 617 563 return -EINVAL; 564 + 565 + /* Prefer top, if both are valid */ 566 + use_top = can_be_top; 567 + 568 + /* lock_len: length of region that should remain locked */ 569 + if (use_top) 570 + lock_len = mtd->size - (ofs + len); 571 + else 572 + lock_len = ofs; 618 573 619 574 /* 620 575 * Need largest pow such that: ··· 648 557 * 649 558 * pow = floor(log2(size / len)) = log2(size) - ceil(log2(len)) 650 559 */ 651 - pow = ilog2(mtd->size) - order_base_2(mtd->size - (ofs + len)); 652 - if (ofs + len == mtd->size) { 560 + pow = ilog2(mtd->size) - order_base_2(lock_len); 561 + if (lock_len == 0) { 653 562 val = 0; /* fully unlocked */ 654 563 } else { 655 564 val = mask - (pow << shift); ··· 658 567 return -EINVAL; 659 568 } 660 569 661 - status_new = (status_old & ~mask) | val; 570 + status_new = (status_old & ~mask & ~SR_TB) | val; 571 + 572 + /* Don't protect status register if we're fully unlocked */ 573 + if (lock_len == 
mtd->size) 574 + status_new &= ~SR_SRWD; 575 + 576 + if (!use_top) 577 + status_new |= SR_TB; 578 + 579 + /* Don't bother if they're the same */ 580 + if (status_new == status_old) 581 + return 0; 662 582 663 583 /* Only modify protection if it will not lock other areas */ 664 - if ((status_new & mask) >= (status_old & mask)) 584 + if ((status_new & mask) > (status_old & mask)) 665 585 return -EINVAL; 666 586 667 587 write_enable(nor); ··· 864 762 { "n25q032a", INFO(0x20bb16, 0, 64 * 1024, 64, SPI_NOR_QUAD_READ) }, 865 763 { "n25q064", INFO(0x20ba17, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_QUAD_READ) }, 866 764 { "n25q064a", INFO(0x20bb17, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_QUAD_READ) }, 867 - { "n25q128a11", INFO(0x20bb18, 0, 64 * 1024, 256, SPI_NOR_QUAD_READ) }, 868 - { "n25q128a13", INFO(0x20ba18, 0, 64 * 1024, 256, SPI_NOR_QUAD_READ) }, 765 + { "n25q128a11", INFO(0x20bb18, 0, 64 * 1024, 256, SECT_4K | SPI_NOR_QUAD_READ) }, 766 + { "n25q128a13", INFO(0x20ba18, 0, 64 * 1024, 256, SECT_4K | SPI_NOR_QUAD_READ) }, 869 767 { "n25q256a", INFO(0x20ba19, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_QUAD_READ) }, 870 768 { "n25q512a", INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) }, 871 769 { "n25q512ax3", INFO(0x20ba20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) }, ··· 899 797 { "s25fl008k", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) }, 900 798 { "s25fl016k", INFO(0xef4015, 0, 64 * 1024, 32, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) }, 901 799 { "s25fl064k", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) }, 800 + { "s25fl116k", INFO(0x014015, 0, 64 * 1024, 32, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) }, 902 801 { "s25fl132k", INFO(0x014016, 0, 64 * 1024, 64, SECT_4K) }, 903 802 { "s25fl164k", INFO(0x014017, 0, 64 * 1024, 128, SECT_4K) }, 904 803 { "s25fl204k", INFO(0x014013, 0, 64 * 1024, 8, SECT_4K | SPI_NOR_DUAL_READ) }, ··· 963 860 { "w25x16", INFO(0xef3015, 0, 64 * 1024, 32, 
SECT_4K) }, 964 861 { "w25x32", INFO(0xef3016, 0, 64 * 1024, 64, SECT_4K) }, 965 862 { "w25q32", INFO(0xef4016, 0, 64 * 1024, 64, SECT_4K) }, 966 - { "w25q32dw", INFO(0xef6016, 0, 64 * 1024, 64, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) }, 863 + { 864 + "w25q32dw", INFO(0xef6016, 0, 64 * 1024, 64, 865 + SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | 866 + SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB) 867 + }, 967 868 { "w25x64", INFO(0xef3017, 0, 64 * 1024, 128, SECT_4K) }, 968 869 { "w25q64", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) }, 969 - { "w25q64dw", INFO(0xef6017, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) }, 970 - { "w25q128fw", INFO(0xef6018, 0, 64 * 1024, 256, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) }, 870 + { 871 + "w25q64dw", INFO(0xef6017, 0, 64 * 1024, 128, 872 + SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | 873 + SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB) 874 + }, 875 + { 876 + "w25q128fw", INFO(0xef6018, 0, 64 * 1024, 256, 877 + SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | 878 + SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB) 879 + }, 971 880 { "w25q80", INFO(0xef5014, 0, 64 * 1024, 16, SECT_4K) }, 972 881 { "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) }, 973 882 { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) }, ··· 1215 1100 return 0; 1216 1101 } 1217 1102 1218 - static int micron_quad_enable(struct spi_nor *nor) 1219 - { 1220 - int ret; 1221 - u8 val; 1222 - 1223 - ret = nor->read_reg(nor, SPINOR_OP_RD_EVCR, &val, 1); 1224 - if (ret < 0) { 1225 - dev_err(nor->dev, "error %d reading EVCR\n", ret); 1226 - return ret; 1227 - } 1228 - 1229 - write_enable(nor); 1230 - 1231 - /* set EVCR, enable quad I/O */ 1232 - nor->cmd_buf[0] = val & ~EVCR_QUAD_EN_MICRON; 1233 - ret = nor->write_reg(nor, SPINOR_OP_WD_EVCR, nor->cmd_buf, 1); 1234 - if (ret < 0) { 1235 - dev_err(nor->dev, "error while writing EVCR register\n"); 1236 - return ret; 1237 - } 1238 - 1239 - ret = spi_nor_wait_till_ready(nor); 1240 - if (ret) 1241 - 
return ret; 1242 - 1243 - /* read EVCR and check it */ 1244 - ret = nor->read_reg(nor, SPINOR_OP_RD_EVCR, &val, 1); 1245 - if (ret < 0) { 1246 - dev_err(nor->dev, "error %d reading EVCR\n", ret); 1247 - return ret; 1248 - } 1249 - if (val & EVCR_QUAD_EN_MICRON) { 1250 - dev_err(nor->dev, "Micron EVCR Quad bit not clear\n"); 1251 - return -EINVAL; 1252 - } 1253 - 1254 - return 0; 1255 - } 1256 - 1257 1103 static int set_quad_mode(struct spi_nor *nor, const struct flash_info *info) 1258 1104 { 1259 1105 int status; ··· 1228 1152 } 1229 1153 return status; 1230 1154 case SNOR_MFR_MICRON: 1231 - status = micron_quad_enable(nor); 1232 - if (status) { 1233 - dev_err(nor->dev, "Micron quad-read not enabled\n"); 1234 - return -EINVAL; 1235 - } 1236 - return status; 1155 + return 0; 1237 1156 default: 1238 1157 status = spansion_quad_enable(nor); 1239 1158 if (status) { ··· 1304 1233 1305 1234 if (JEDEC_MFR(info) == SNOR_MFR_ATMEL || 1306 1235 JEDEC_MFR(info) == SNOR_MFR_INTEL || 1307 - JEDEC_MFR(info) == SNOR_MFR_SST) { 1236 + JEDEC_MFR(info) == SNOR_MFR_SST || 1237 + info->flags & SPI_NOR_HAS_LOCK) { 1308 1238 write_enable(nor); 1309 1239 write_sr(nor, 0); 1240 + spi_nor_wait_till_ready(nor); 1310 1241 } 1311 1242 1312 1243 if (!mtd->name) ··· 1322 1249 mtd->_read = spi_nor_read; 1323 1250 1324 1251 /* NOR protection support for STmicro/Micron chips and similar */ 1325 - if (JEDEC_MFR(info) == SNOR_MFR_MICRON) { 1252 + if (JEDEC_MFR(info) == SNOR_MFR_MICRON || 1253 + info->flags & SPI_NOR_HAS_LOCK) { 1326 1254 nor->flash_lock = stm_lock; 1327 1255 nor->flash_unlock = stm_unlock; 1328 1256 nor->flash_is_locked = stm_is_locked; ··· 1343 1269 1344 1270 if (info->flags & USE_FSR) 1345 1271 nor->flags |= SNOR_F_USE_FSR; 1272 + if (info->flags & SPI_NOR_HAS_TB) 1273 + nor->flags |= SNOR_F_HAS_SR_TB; 1346 1274 1347 1275 #ifdef CONFIG_MTD_SPI_NOR_USE_4K_SECTORS 1348 1276 /* prefer "small sector" erase if possible */
+24 -25
drivers/mtd/tests/oobtest.c
···
 		pr_info("ignoring error as within bitflip_limit\n");
 	}
 
-	if (use_offset != 0 || use_len < mtd->ecclayout->oobavail) {
+	if (use_offset != 0 || use_len < mtd->oobavail) {
 		int k;
 
 		ops.mode = MTD_OPS_AUTO_OOB;
 		ops.len = 0;
 		ops.retlen = 0;
-		ops.ooblen = mtd->ecclayout->oobavail;
+		ops.ooblen = mtd->oobavail;
 		ops.oobretlen = 0;
 		ops.ooboffs = 0;
 		ops.datbuf = NULL;
 		ops.oobbuf = readbuf;
 		err = mtd_read_oob(mtd, addr, &ops);
-		if (err || ops.oobretlen != mtd->ecclayout->oobavail) {
+		if (err || ops.oobretlen != mtd->oobavail) {
 			pr_err("error: readoob failed at %#llx\n",
 			       (long long)addr);
 			errcnt += 1;
···
 		/* verify post-(use_offset + use_len) area for 0xff */
 		k = use_offset + use_len;
 		bitflips += memffshow(addr, k, readbuf + k,
-				      mtd->ecclayout->oobavail - k);
+				      mtd->oobavail - k);
 
 		if (bitflips > bitflip_limit) {
 			pr_err("error: verify failed at %#llx\n",
···
 	struct mtd_oob_ops ops;
 	int err = 0;
 	loff_t addr = (loff_t)ebnum * mtd->erasesize;
-	size_t len = mtd->ecclayout->oobavail * pgcnt;
-	size_t oobavail = mtd->ecclayout->oobavail;
+	size_t len = mtd->oobavail * pgcnt;
+	size_t oobavail = mtd->oobavail;
 	size_t bitflips;
 	int i;
···
 		goto out;
 
 	use_offset = 0;
-	use_len = mtd->ecclayout->oobavail;
-	use_len_max = mtd->ecclayout->oobavail;
+	use_len = mtd->oobavail;
+	use_len_max = mtd->oobavail;
 	vary_offset = 0;
 
 	/* First test: write all OOB, read it back and verify */
···
 
 	/* Write all eraseblocks */
 	use_offset = 0;
-	use_len = mtd->ecclayout->oobavail;
-	use_len_max = mtd->ecclayout->oobavail;
+	use_len = mtd->oobavail;
+	use_len_max = mtd->oobavail;
 	vary_offset = 1;
 	prandom_seed_state(&rnd_state, 5);
 
···
 
 	/* Check all eraseblocks */
 	use_offset = 0;
-	use_len = mtd->ecclayout->oobavail;
-	use_len_max = mtd->ecclayout->oobavail;
+	use_len = mtd->oobavail;
+	use_len_max = mtd->oobavail;
 	vary_offset = 1;
 	prandom_seed_state(&rnd_state, 5);
 	err = verify_all_eraseblocks();
···
 		goto out;
 
 	use_offset = 0;
-	use_len = mtd->ecclayout->oobavail;
-	use_len_max = mtd->ecclayout->oobavail;
+	use_len = mtd->oobavail;
+	use_len_max = mtd->oobavail;
 	vary_offset = 0;
 
 	/* Fourth test: try to write off end of device */
···
 	ops.retlen = 0;
 	ops.ooblen = 1;
 	ops.oobretlen = 0;
-	ops.ooboffs = mtd->ecclayout->oobavail;
+	ops.ooboffs = mtd->oobavail;
 	ops.datbuf = NULL;
 	ops.oobbuf = writebuf;
 	pr_info("attempting to start write past end of OOB\n");
···
 	ops.retlen = 0;
 	ops.ooblen = 1;
 	ops.oobretlen = 0;
-	ops.ooboffs = mtd->ecclayout->oobavail;
+	ops.ooboffs = mtd->oobavail;
 	ops.datbuf = NULL;
 	ops.oobbuf = readbuf;
 	pr_info("attempting to start read past end of OOB\n");
···
 	ops.mode = MTD_OPS_AUTO_OOB;
 	ops.len = 0;
 	ops.retlen = 0;
-	ops.ooblen = mtd->ecclayout->oobavail + 1;
+	ops.ooblen = mtd->oobavail + 1;
 	ops.oobretlen = 0;
 	ops.ooboffs = 0;
 	ops.datbuf = NULL;
···
 	ops.mode = MTD_OPS_AUTO_OOB;
 	ops.len = 0;
 	ops.retlen = 0;
-	ops.ooblen = mtd->ecclayout->oobavail + 1;
+	ops.ooblen = mtd->oobavail + 1;
 	ops.oobretlen = 0;
 	ops.ooboffs = 0;
 	ops.datbuf = NULL;
···
 	ops.mode = MTD_OPS_AUTO_OOB;
 	ops.len = 0;
 	ops.retlen = 0;
-	ops.ooblen = mtd->ecclayout->oobavail;
+	ops.ooblen = mtd->oobavail;
 	ops.oobretlen = 0;
 	ops.ooboffs = 1;
 	ops.datbuf = NULL;
···
 	ops.mode = MTD_OPS_AUTO_OOB;
 	ops.len = 0;
 	ops.retlen = 0;
-	ops.ooblen = mtd->ecclayout->oobavail;
+	ops.ooblen = mtd->oobavail;
 	ops.oobretlen = 0;
 	ops.ooboffs = 1;
 	ops.datbuf = NULL;
···
 	for (i = 0; i < ebcnt - 1; ++i) {
 		int cnt = 2;
 		int pg;
-		size_t sz = mtd->ecclayout->oobavail;
+		size_t sz = mtd->oobavail;
 		if (bbt[i] || bbt[i + 1])
 			continue;
 		addr = (loff_t)(i + 1) * mtd->erasesize - mtd->writesize;
···
 	for (i = 0; i < ebcnt - 1; ++i) {
 		if (bbt[i] || bbt[i + 1])
 			continue;
-		prandom_bytes_state(&rnd_state, writebuf,
-				    mtd->ecclayout->oobavail * 2);
+		prandom_bytes_state(&rnd_state, writebuf, mtd->oobavail * 2);
 		addr = (loff_t)(i + 1) * mtd->erasesize - mtd->writesize;
 		ops.mode = MTD_OPS_AUTO_OOB;
 		ops.len = 0;
 		ops.retlen = 0;
-		ops.ooblen = mtd->ecclayout->oobavail * 2;
+		ops.ooblen = mtd->oobavail * 2;
 		ops.oobretlen = 0;
 		ops.ooboffs = 0;
 		ops.datbuf = NULL;
···
 		if (err)
 			goto out;
 		if (memcmpshow(addr, readbuf, writebuf,
-			       mtd->ecclayout->oobavail * 2)) {
+			       mtd->oobavail * 2)) {
 			pr_err("error: verify failed at %#llx\n",
 			       (long long)addr);
 			errcnt += 1;
-1
drivers/staging/mt29f_spinand/mt29f_spinand.c
···
 	17, 18, 19, 20, 21, 22,
 	33, 34, 35, 36, 37, 38,
 	49, 50, 51, 52, 53, 54, },
-	.oobavail = 32,
 	.oobfree = {
 		{.offset = 8,
 		 .length = 8},
-1
drivers/staging/mt29f_spinand/mt29f_spinand.h
···
 #define BL_ALL_UNLOCKED 0
 
 struct spinand_info {
-	struct nand_ecclayout *ecclayout;
 	struct spi_device *spi;
 	void *priv;
 };
+41 -21
fs/jffs2/gc.c
···
 	if (mutex_lock_interruptible(&c->alloc_sem))
 		return -EINTR;
 
+
 	for (;;) {
+		/* We can't start doing GC until we've finished checking
+		   the node CRCs etc. */
+		int bucket, want_ino;
+
 		spin_lock(&c->erase_completion_lock);
 		if (!c->unchecked_size)
 			break;
-
-		/* We can't start doing GC yet. We haven't finished checking
-		   the node CRCs etc. Do it now. */
-
-		/* checked_ino is protected by the alloc_sem */
-		if (c->checked_ino > c->highest_ino && xattr) {
-			pr_crit("Checked all inodes but still 0x%x bytes of unchecked space?\n",
-				c->unchecked_size);
-			jffs2_dbg_dump_block_lists_nolock(c);
-			spin_unlock(&c->erase_completion_lock);
-			mutex_unlock(&c->alloc_sem);
-			return -ENOSPC;
-		}
-
 		spin_unlock(&c->erase_completion_lock);
 
 		if (!xattr)
 			xattr = jffs2_verify_xattr(c);
 
 		spin_lock(&c->inocache_lock);
+		/* Instead of doing the inodes in numeric order, doing a lookup
+		 * in the hash for each possible number, just walk the hash
+		 * buckets of *existing* inodes. This means that we process
+		 * them out-of-order, but it can be a lot faster if there's
+		 * a sparse inode# space. Which there often is. */
+		want_ino = c->check_ino;
+		for (bucket = c->check_ino % c->inocache_hashsize ; bucket < c->inocache_hashsize; bucket++) {
+			for (ic = c->inocache_list[bucket]; ic; ic = ic->next) {
+				if (ic->ino < want_ino)
+					continue;
 
-		ic = jffs2_get_ino_cache(c, c->checked_ino++);
+				if (ic->state != INO_STATE_CHECKEDABSENT &&
+				    ic->state != INO_STATE_PRESENT)
+					goto got_next; /* with inocache_lock held */
 
-		if (!ic) {
-			spin_unlock(&c->inocache_lock);
-			continue;
+				jffs2_dbg(1, "Skipping ino #%u already checked\n",
+					  ic->ino);
+			}
+			want_ino = 0;
 		}
+
+		/* Point c->check_ino past the end of the last bucket. */
+		c->check_ino = ((c->highest_ino + c->inocache_hashsize + 1) &
+				~c->inocache_hashsize) - 1;
+
+		spin_unlock(&c->inocache_lock);
+
+		pr_crit("Checked all inodes but still 0x%x bytes of unchecked space?\n",
+			c->unchecked_size);
+		jffs2_dbg_dump_block_lists_nolock(c);
+		mutex_unlock(&c->alloc_sem);
+		return -ENOSPC;
+
+	got_next:
+		/* For next time round the loop, we want c->checked_ino to indicate
+		 * the *next* one we want to check. And since we're walking the
+		 * buckets rather than doing it sequentially, it's: */
+		c->check_ino = ic->ino + c->inocache_hashsize;
 
 		if (!ic->pino_nlink) {
 			jffs2_dbg(1, "Skipping check of ino #%d with nlink/pino zero\n",
···
 		switch(ic->state) {
 		case INO_STATE_CHECKEDABSENT:
 		case INO_STATE_PRESENT:
-			jffs2_dbg(1, "Skipping ino #%u already checked\n",
-				  ic->ino);
 			spin_unlock(&c->inocache_lock);
 			continue;
···
 				  ic->ino);
 			/* We need to come back again for the _same_ inode. We've
 			   made no progress in this case, but that should be OK */
-			c->checked_ino--;
+			c->check_ino = ic->ino;
 
 			mutex_unlock(&c->alloc_sem);
 			sleep_on_spinunlock(&c->inocache_wq, &c->inocache_lock);
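The gc.c change above replaces a per-inode-number cache probe with a direct walk over the inocache hash buckets: inodes are visited bucket by bucket rather than in numeric order, so a sparse inode-number space no longer costs one failed lookup per missing number. A minimal userspace sketch of that walk, with a hypothetical `inocache` struct and `next_to_check()` helper standing in for the kernel's (state filtering and locking omitted):

```c
#include <assert.h>
#include <stddef.h>

#define HASHSIZE 4	/* stand-in for c->inocache_hashsize */

struct inocache {
	unsigned int ino;
	struct inocache *next;	/* bucket chains sorted by ino, as in JFFS2 */
};

/*
 * Find the next inode with ino >= want_ino by walking hash buckets
 * directly instead of probing every possible inode number. The scan
 * starts in want_ino's own bucket; in every later bucket, any entry
 * qualifies, so want_ino drops to 0 once the first bucket is done.
 */
static struct inocache *next_to_check(struct inocache **list,
				      unsigned int want_ino)
{
	for (unsigned int bucket = want_ino % HASHSIZE;
	     bucket < HASHSIZE; bucket++) {
		for (struct inocache *ic = list[bucket]; ic; ic = ic->next) {
			if (ic->ino >= want_ino)
				return ic;
		}
		want_ino = 0;	/* later buckets: take the first entry */
	}
	return NULL;	/* nothing left at or above want_ino */
}
```

After a hit, the caller advances `want_ino` to `ino + HASHSIZE` (mirroring `c->check_ino = ic->ino + c->inocache_hashsize`), which steps to the next entry of the same bucket; the resulting visit order is out of numeric order but still covers every cached inode exactly once.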
+1 -1
fs/jffs2/jffs2_fs_sb.h
···
 	struct mtd_info *mtd;
 
 	uint32_t highest_ino;
-	uint32_t checked_ino;
+	uint32_t check_ino;	/* *NEXT* inode to be checked */
 
 	unsigned int flags;
 
+2 -2
fs/jffs2/nodemgmt.c
···
 		return 1;
 
 	if (c->unchecked_size) {
-		jffs2_dbg(1, "jffs2_thread_should_wake(): unchecked_size %d, checked_ino #%d\n",
-			  c->unchecked_size, c->checked_ino);
+		jffs2_dbg(1, "jffs2_thread_should_wake(): unchecked_size %d, check_ino #%d\n",
+			  c->unchecked_size, c->check_ino);
 		return 1;
 	}
 
+2 -4
fs/jffs2/wbuf.c
···
 
 int jffs2_nand_flash_setup(struct jffs2_sb_info *c)
 {
-	struct nand_ecclayout *oinfo = c->mtd->ecclayout;
-
 	if (!c->mtd->oobsize)
 		return 0;
 
 	/* Cleanmarker is out-of-band, so inline size zero */
 	c->cleanmarker_size = 0;
 
-	if (!oinfo || oinfo->oobavail == 0) {
+	if (c->mtd->oobavail == 0) {
 		pr_err("inconsistent device description\n");
 		return -EINVAL;
 	}
 
 	jffs2_dbg(1, "using OOB on NAND\n");
 
-	c->oobavail = oinfo->oobavail;
+	c->oobavail = c->mtd->oobavail;
 
 	/* Initialise write buffer */
 	init_rwsem(&c->wbuf_sem);
-1
include/linux/mtd/bbm.h
···
 };
 
 /* OneNAND BBT interface */
-extern int onenand_scan_bbt(struct mtd_info *mtd, struct nand_bbt_descr *bd);
 extern int onenand_default_bbt(struct mtd_info *mtd);
 
 #endif /* __LINUX_MTD_BBM_H */
-1
include/linux/mtd/inftl.h
···
 	unsigned int nb_blocks;		/* number of physical blocks */
 	unsigned int nb_boot_blocks;	/* number of blocks used by the bios */
 	struct erase_info instr;
-	struct nand_ecclayout oobinfo;
 };
 
 int INFTL_mount(struct INFTLrecord *s);
+5 -2
include/linux/mtd/map.h
···
 	   If there is no cache to care about this can be set to NULL. */
 	void (*inval_cache)(struct map_info *, unsigned long, ssize_t);
 
-	/* set_vpp() must handle being reentered -- enable, enable, disable
-	   must leave it enabled. */
+	/* This will be called with 1 as parameter when the first map user
+	 * needs VPP, and called with 0 when the last user exits. The map
+	 * core maintains a reference counter, and assumes that VPP is a
+	 * global resource applying to all mapped flash chips on the system.
+	 */
 	void (*set_vpp)(struct map_info *, int);
 
 	unsigned long pfow_base;
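The new map.h comment changes the `set_vpp()` contract: drivers no longer have to tolerate nested enables themselves, because the map core keeps a reference count and only invokes the hook on 0-to-1 and 1-to-0 transitions. A minimal sketch of that refcounting scheme (hypothetical `map_vpp_*` wrapper names; the kernel additionally serializes this with a lock and tracks the count per the whole system):

```c
#include <assert.h>

/* Global VPP user count, as the comment says the map core maintains. */
static int vpp_refcnt;
static int vpp_on;	/* state of the (hypothetical) VPP supply */

/* driver hook: now only ever sees the edges, never nested enables */
static void set_vpp(int state)
{
	vpp_on = state;
}

/* map-core-side wrappers implementing the refcounted contract */
static void map_vpp_enable(void)
{
	if (vpp_refcnt++ == 0)
		set_vpp(1);	/* first user: turn VPP on */
}

static void map_vpp_disable(void)
{
	assert(vpp_refcnt > 0);
	if (--vpp_refcnt == 0)
		set_vpp(0);	/* last user gone: turn VPP off */
}
```

With this split, enable/enable/disable leaves VPP on automatically, which is exactly the behavior the old comment asked every driver to implement by hand.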
+5 -1
include/linux/mtd/mtd.h
···
 struct nand_ecclayout {
 	__u32 eccbytes;
 	__u32 eccpos[MTD_MAX_ECCPOS_ENTRIES_LARGE];
-	__u32 oobavail;
 	struct nand_oobfree oobfree[MTD_MAX_OOBFREE_ENTRIES_LARGE];
 };
···
 static inline struct device_node *mtd_get_of_node(struct mtd_info *mtd)
 {
 	return mtd->dev.of_node;
+}
+
+static inline int mtd_oobavail(struct mtd_info *mtd, struct mtd_oob_ops *ops)
+{
+	return ops->mode == MTD_OPS_AUTO_OOB ? mtd->oobavail : mtd->oobsize;
 }
 
 int mtd_erase(struct mtd_info *mtd, struct erase_info *instr);
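The new `mtd_oobavail()` helper centralizes a distinction that was open-coded all over the tree (see the oobtest.c hunk above): in `MTD_OPS_AUTO_OOB` mode only the free OOB bytes (those not reserved for ECC or bad-block markers) are addressable, while other modes expose the whole OOB area. A userspace-compilable sketch with pared-down stand-ins for the kernel structures:

```c
#include <assert.h>
#include <stdint.h>

/* pared-down stand-ins for the kernel definitions in linux/mtd/mtd.h */
enum { MTD_OPS_PLACE_OOB, MTD_OPS_AUTO_OOB, MTD_OPS_RAW };

struct mtd_info {
	uint32_t oobsize;	/* total OOB bytes per page, e.g. 64 */
	uint32_t oobavail;	/* OOB bytes free of ECC/BBM data, e.g. 40 */
};

struct mtd_oob_ops {
	unsigned int mode;	/* one of the MTD_OPS_* modes */
};

/* mirrors the helper added in this merge */
static inline int mtd_oobavail(struct mtd_info *mtd, struct mtd_oob_ops *ops)
{
	return ops->mode == MTD_OPS_AUTO_OOB ? mtd->oobavail : mtd->oobsize;
}
```

Callers that previously reached through `mtd->ecclayout->oobavail` now either use `mtd->oobavail` directly or, when the answer depends on the operation mode, call this helper.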
+7 -3
include/linux/mtd/nand.h
···
 /* Device supports subpage reads */
 #define NAND_SUBPAGE_READ	0x00001000
 
+/*
+ * Some MLC NANDs need data scrambling to limit bitflips caused by repeated
+ * patterns.
+ */
+#define NAND_NEED_SCRAMBLING	0x00002000
+
 /* Options valid for Samsung large page devices */
 #define NAND_SAMSUNG_LP_OPTIONS NAND_CACHEPRG
 
···
 	void (*write_buf)(struct mtd_info *mtd, const uint8_t *buf, int len);
 	void (*read_buf)(struct mtd_info *mtd, uint8_t *buf, int len);
 	void (*select_chip)(struct mtd_info *mtd, int chip);
-	int (*block_bad)(struct mtd_info *mtd, loff_t ofs, int getchip);
+	int (*block_bad)(struct mtd_info *mtd, loff_t ofs);
 	int (*block_markbad)(struct mtd_info *mtd, loff_t ofs);
 	void (*cmd_ctrl)(struct mtd_info *mtd, int dat, unsigned int ctrl);
 	int (*dev_ready)(struct mtd_info *mtd);
···
  * @chip_delay:		R/B delay value in us
  * @options:		Option flags, e.g. 16bit buswidth
  * @bbt_options:	BBT option flags, e.g. NAND_BBT_USE_FLASH
- * @ecclayout:		ECC layout info structure
  * @part_probe_types:	NULL-terminated array of probe types
  */
 struct platform_nand_chip {
···
 	int chip_offset;
 	int nr_partitions;
 	struct mtd_partition *partitions;
-	struct nand_ecclayout *ecclayout;
 	int chip_delay;
 	unsigned int options;
 	unsigned int bbt_options;
+2 -6
include/linux/mtd/nand_bch.h
···
 /*
  * Initialize BCH encoder/decoder
  */
-struct nand_bch_control *
-nand_bch_init(struct mtd_info *mtd, unsigned int eccsize,
-	      unsigned int eccbytes, struct nand_ecclayout **ecclayout);
+struct nand_bch_control *nand_bch_init(struct mtd_info *mtd);
 /*
  * Release BCH encoder/decoder resources
  */
···
 	return -ENOTSUPP;
 }
 
-static inline struct nand_bch_control *
-nand_bch_init(struct mtd_info *mtd, unsigned int eccsize,
-	      unsigned int eccbytes, struct nand_ecclayout **ecclayout)
+static inline struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
 {
 	return NULL;
 }
-1
include/linux/mtd/nftl.h
···
 	unsigned int nb_blocks;		/* number of physical blocks */
 	unsigned int nb_boot_blocks;	/* number of blocks used by the bios */
 	struct erase_info instr;
-	struct nand_ecclayout oobinfo;
 };
 
 int NFTL_mount(struct NFTLrecord *s);
+2
include/linux/mtd/spi-nor.h
···
 #define SR_BP0			BIT(2)	/* Block protect 0 */
 #define SR_BP1			BIT(3)	/* Block protect 1 */
 #define SR_BP2			BIT(4)	/* Block protect 2 */
+#define SR_TB			BIT(5)	/* Top/Bottom protect */
 #define SR_SRWD			BIT(7)	/* SR write protect */
 
 #define SR_QUAD_EN_MX		BIT(6)	/* Macronix Quad I/O */
···
 
 enum spi_nor_option_flags {
 	SNOR_F_USE_FSR		= BIT(0),
+	SNOR_F_HAS_SR_TB	= BIT(1),
 };
 
 /**
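The new SR_TB bit enables the merge's bottom-block protection support: BP2..BP0 select what power-of-two fraction of the flash is protected, and on chips flagged SNOR_F_HAS_SR_TB the TB bit anchors that region at the bottom of the device instead of the default top. A simplified sketch of the range decoding (hypothetical `get_locked_range()` helper; the driver's actual version lives in drivers/mtd/spi-nor/spi-nor.c and handles more corner cases):

```c
#include <assert.h>
#include <stdint.h>

#define SR_BP0	(1 << 2)	/* Block protect 0 */
#define SR_BP1	(1 << 3)	/* Block protect 1 */
#define SR_BP2	(1 << 4)	/* Block protect 2 */
#define SR_TB	(1 << 5)	/* Top/Bottom protect */

/*
 * Decode the locked byte range from a status register value:
 * BP = n (1..7) protects size >> (7 - n) bytes, i.e. BP = 7 protects
 * the whole chip. TB (when supported) flips the region to the bottom.
 */
static void get_locked_range(uint64_t size, uint8_t sr, int has_tb,
			     uint64_t *ofs, uint64_t *len)
{
	uint8_t bp = (sr & (SR_BP2 | SR_BP1 | SR_BP0)) >> 2;

	if (!bp) {
		*ofs = 0;
		*len = 0;	/* no protection */
		return;
	}
	*len = size >> (7 - bp);
	if (has_tb && (sr & SR_TB))
		*ofs = 0;		/* bottom-anchored region */
	else
		*ofs = size - *len;	/* default: top-anchored region */
}
```

For example, on a 16 MiB part, BP = 1 protects the top 256 KiB, and the same BP value with TB set protects the bottom 256 KiB instead.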
-1
include/linux/platform_data/mtd-nand-s3c2410.h
···
 	char			*name;
 	int			*nr_map;
 	struct mtd_partition	*partitions;
-	struct nand_ecclayout	*ecc_layout;
 };
 
 struct s3c2410_platform_nand {