Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-linus-20150216' of git://git.infradead.org/linux-mtd

Pull MTD updates from Brian Norris:
"NAND:

- Add new Hisilicon NAND driver for Hip04
- Add default reboot handler, to ensure all outstanding erase
transactions complete before the system reboots
- jz4740: convert to use GPIO descriptor API
- Atmel: add support for sama5d4
- Change default bitflip threshold to 75% of correction strength
- Miscellaneous cleanups and bugfixes

SPI NOR:

- Freescale QuadSPI:
- Fix a few probe() and remove() issues
- Add a MAINTAINERS entry for this driver
- Tweak transfer size to increase read performance
- Add suspend/resume support
- Add Micron quad I/O support
- ST FSM SPI: miscellaneous fixes

JFFS2:

- gracefully handle corrupted 'offset' field found on flash

Other:

- bcm47xxpart: add tweaks for a few new devices
- mtdconcat: set return lengths properly for mtd_write_oob()
- map_ram: enable use with mtdoops
- maps: support fallback to ROM/UBI for write-protected NOR flash"

* tag 'for-linus-20150216' of git://git.infradead.org/linux-mtd: (46 commits)
mtd: hisilicon: && vs & typo
jffs2: fix handling of corrupted summary length
mtd: hisilicon: add device tree binding documentation
mtd: hisilicon: add a new NAND controller driver for hisilicon hip04 Soc
mtd: avoid registering reboot notifier twice
mtd: concat: set the return lengths properly
mtd: kconfig: replace PPC_OF with PPC
mtd: denali: remove unnecessary stubs
mtd: nand: remove redundant local variable
MAINTAINERS: add maintainer entry for FREESCALE QUAD SPI driver
mtd: fsl-quadspi: improve read performance by increase AHB transfer size
mtd: fsl-quadspi: Remove unnecessary 'map_failed' label
mtd: fsl-quadspi: Remove unneeded success/error messages
mtd: fsl-quadspi: Fix the error paths
mtd: nand: omap: drop condition with no effect
mtd: nand: jz4740: Convert to GPIO descriptor API
mtd: nand: Request strength instead of bytes for soft BCH
mtd: nand: default bitflip-reporting threshold to 75% of correction strength
mtd: atmel_nand: introduce a new compatible string for sama5d4 chip
mtd: atmel_nand: return max bitflips in all sectors in pmecc_correction()
...

+1385 -214
+1 -1
Documentation/devicetree/bindings/mtd/atmel-nand.txt
···
 Atmel NAND flash
 
 Required properties:
-- compatible : "atmel,at91rm9200-nand".
+- compatible : should be "atmel,at91rm9200-nand" or "atmel,sama5d4-nand".
 - reg : should specify localbus address and size used for the chip,
   and hardware ECC controller if available.
   If the hardware ECC is PMECC, it should contain address and size for
+1 -1
Documentation/devicetree/bindings/mtd/fsl-quadspi.txt
···
 * Freescale Quad Serial Peripheral Interface(QuadSPI)
 
 Required properties:
-- compatible : Should be "fsl,vf610-qspi"
+- compatible : Should be "fsl,vf610-qspi" or "fsl,imx6sx-qspi"
 - reg : the first contains the register location and length,
   the second contains the memory mapping address and length
 - reg-names: Should contain the reg names "QuadSPI" and "QuadSPI-memory"
+1 -1
Documentation/devicetree/bindings/mtd/gpmi-nand.txt
···
 * Freescale General-Purpose Media Interface (GPMI)
 
 The GPMI nand controller provides an interface to control the
-NAND flash chips. We support only one NAND chip now.
+NAND flash chips.
 
 Required properties:
 - compatible : should be "fsl,<chip>-gpmi-nand"
+47
Documentation/devicetree/bindings/mtd/hisi504-nand.txt
···
+Hisilicon Hip04 Soc NAND controller DT binding
+
+Required properties:
+
+- compatible: Should be "hisilicon,504-nfc".
+- reg: The first contains base physical address and size of
+  NAND controller's registers. The second contains base
+  physical address and size of NAND controller's buffer.
+- interrupts: Interrupt number for nfc.
+- nand-bus-width: See nand.txt.
+- nand-ecc-mode: Support none and hw ecc mode.
+- #address-cells: Partition address, should be set 1.
+- #size-cells: Partition size, should be set 1.
+
+Optional properties:
+
+- nand-ecc-strength: Number of bits to correct per ECC step.
+- nand-ecc-step-size: Number of data bytes covered by a single ECC step.
+
+The following ECC strength and step size are currently supported:
+
+- nand-ecc-strength = <16>, nand-ecc-step-size = <1024>
+
+Flash chip may optionally contain additional sub-nodes describing partitions of
+the address space. See partition.txt for more detail.
+
+Example:
+
+	nand: nand@4020000 {
+		compatible = "hisilicon,504-nfc";
+		reg = <0x4020000 0x10000>, <0x5000000 0x1000>;
+		interrupts = <0 379 4>;
+		nand-bus-width = <8>;
+		nand-ecc-mode = "hw";
+		nand-ecc-strength = <16>;
+		nand-ecc-step-size = <1024>;
+		#address-cells = <1>;
+		#size-cells = <1>;
+
+		partition@0 {
+			label = "nand_text";
+			reg = <0x00000000 0x00400000>;
+		};
+
+		...
+
+	};
+5
Documentation/devicetree/bindings/mtd/mtd-physmap.txt
···
 - vendor-id : Contains the flash chip's vendor id (1 byte).
 - device-id : Contains the flash chip's device id (1 byte).
 
+For ROM compatible devices (and ROM fallback from cfi-flash), the following
+additional (optional) property is defined:
+
+- erase-size : The chip's physical erase block size in bytes.
+
 The device tree may optionally contain sub-nodes describing partitions of the
 address space. See partition.txt for more detail.
 
+6
MAINTAINERS
···
 F:	include/linux/platform_data/video-imxfb.h
 F:	drivers/video/fbdev/imxfb.c
 
+FREESCALE QUAD SPI DRIVER
+M:	Han Xu <han.xu@freescale.com>
+L:	linux-mtd@lists.infradead.org
+S:	Maintained
+F:	drivers/mtd/spi-nor/fsl-quadspi.c
+
 FREESCALE SOC FS_ENET DRIVER
 M:	Pantelis Antoniou <pantelis.antoniou@gmail.com>
 M:	Vitaly Bordug <vbordug@ru.mvista.com>
-2
arch/mips/include/asm/mach-jz4740/jz4740_nand.h
···
 
 	struct nand_ecclayout *ecc_layout;
 
-	unsigned int busy_gpio;
-
 	unsigned char banks[JZ_NAND_NUM_BANKS];
 
 	void (*ident_callback)(struct platform_device *, struct nand_chip *,
+10 -1
arch/mips/jz4740/board-qi_lb60.c
···
 
 static struct jz_nand_platform_data qi_lb60_nand_pdata = {
 	.ident_callback = qi_lb60_nand_ident,
-	.busy_gpio = 94,
 	.banks = { 1 },
 };
+
+static struct gpiod_lookup_table qi_lb60_nand_gpio_table = {
+	.dev_id = "jz4740-nand.0",
+	.table = {
+		GPIO_LOOKUP("Bank C", 30, "busy", 0),
+		{ },
+	},
+};
+
 
 /* Keyboard*/
 
···
 	jz4740_mmc_device.dev.platform_data = &qi_lb60_mmc_pdata;
 
 	gpiod_add_lookup_table(&qi_lb60_audio_gpio_table);
+	gpiod_add_lookup_table(&qi_lb60_nand_gpio_table);
 
 	jz4740_serial_device_register();
 
+37 -6
drivers/mtd/bcm47xxpart.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/partitions.h>
 
+#include <uapi/linux/magic.h>
+
 /*
  * NAND flash on Netgear R6250 was verified to contain 15 partitions.
  * This will result in allocating too big array for some old devices, but the
···
 #define ML_MAGIC1 0x39685a42
 #define ML_MAGIC2 0x26594131
 #define TRX_MAGIC 0x30524448
-#define SQSH_MAGIC 0x71736873 /* shsq */
+#define SHSQ_MAGIC 0x71736873 /* shsq (weird ZTE H218N endianness) */
+#define UBI_EC_MAGIC 0x23494255 /* UBI# */
 
 struct trx_header {
 	uint32_t magic;
···
 	uint32_t offset[3];
 } __packed;
 
-static void bcm47xxpart_add_part(struct mtd_partition *part, char *name,
+static void bcm47xxpart_add_part(struct mtd_partition *part, const char *name,
 				 u64 offset, uint32_t mask_flags)
 {
 	part->name = name;
 	part->offset = offset;
 	part->mask_flags = mask_flags;
+}
+
+static const char *bcm47xxpart_trx_data_part_name(struct mtd_info *master,
+						  size_t offset)
+{
+	uint32_t buf;
+	size_t bytes_read;
+
+	if (mtd_read(master, offset, sizeof(buf), &bytes_read,
+		     (uint8_t *)&buf) < 0) {
+		pr_err("mtd_read error while parsing (offset: 0x%X)!\n",
+		       offset);
+		goto out_default;
+	}
+
+	if (buf == UBI_EC_MAGIC)
+		return "ubi";
+
+out_default:
+	return "rootfs";
 }
 
 static int bcm47xxpart_parse(struct mtd_info *master,
···
 	int last_trx_part = -1;
 	int possible_nvram_sizes[] = { 0x8000, 0xF000, 0x10000, };
 
-	if (blocksize <= 0x10000)
-		blocksize = 0x10000;
+	/*
+	 * Some really old flashes (like AT45DB*) had smaller erasesize-s, but
+	 * partitions were aligned to at least 0x1000 anyway.
+	 */
+	if (blocksize < 0x1000)
+		blocksize = 0x1000;
 
 	/* Alloc */
 	parts = kzalloc(sizeof(struct mtd_partition) * BCM47XXPART_MAX_PARTS,
···
 	 * we want to have jffs2 (overlay) in the same mtd.
 	 */
 	if (trx->offset[i]) {
+		const char *name;
+
+		name = bcm47xxpart_trx_data_part_name(master, offset + trx->offset[i]);
 		bcm47xxpart_add_part(&parts[curr_part++],
-				     "rootfs",
+				     name,
 				     offset + trx->offset[i],
 				     0);
 		i++;
···
 	}
 
 	/* Squashfs on devices not using TRX */
-	if (buf[0x000 / 4] == SQSH_MAGIC) {
+	if (le32_to_cpu(buf[0x000 / 4]) == SQUASHFS_MAGIC ||
+	    buf[0x000 / 4] == SHSQ_MAGIC) {
 		bcm47xxpart_add_part(&parts[curr_part++], "rootfs",
 				     offset, 0);
 		continue;
+1
drivers/mtd/chips/map_ram.c
···
 	mtd->_get_unmapped_area = mapram_unmapped_area;
 	mtd->_read = mapram_read;
 	mtd->_write = mapram_write;
+	mtd->_panic_write = mapram_write;
 	mtd->_sync = mapram_nop;
 	mtd->flags = MTD_CAP_RAM;
 	mtd->writesize = 1;
+12 -1
drivers/mtd/chips/map_rom.c
···
 #include <linux/errno.h>
 #include <linux/slab.h>
 #include <linux/init.h>
+#include <linux/of.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/map.h>
 
···
 	.name = "map_rom",
 	.module = THIS_MODULE
 };
+
+static unsigned int default_erasesize(struct map_info *map)
+{
+	const __be32 *erase_size = NULL;
+
+	erase_size = of_get_property(map->device_node, "erase-size", NULL);
+
+	return !erase_size ? map->size : be32_to_cpu(*erase_size);
+}
 
 static struct mtd_info *map_rom_probe(struct map_info *map)
 {
···
 	mtd->_sync = maprom_nop;
 	mtd->_erase = maprom_erase;
 	mtd->flags = MTD_CAP_ROM;
-	mtd->erasesize = map->size;
+	mtd->erasesize = default_erasesize(map);
 	mtd->writesize = 1;
+	mtd->writebufsize = 1;
 
 	__module_get(THIS_MODULE);
 	return mtd;
+118 -19
drivers/mtd/devices/st_spi_fsm.c
···
 #include <linux/delay.h>
 #include <linux/io.h>
 #include <linux/of.h>
+#include <linux/clk.h>
 
 #include "serial_flash_cmds.h"
 
···
 	struct mtd_info mtd;
 	struct mutex lock;
 	struct flash_info *info;
+	struct clk *clk;
 
 	uint32_t configuration;
 	uint32_t fifo_dir_delay;
···
 		   SEQ_CFG_STARTSEQ),
 };
 
+/* Dummy sequence to read one byte of data from flash into the FIFO */
+static const struct stfsm_seq stfsm_seq_load_fifo_byte = {
+	.data_size = TRANSFER_SIZE(1),
+	.seq_opc[0] = (SEQ_OPC_PADS_1 |
+		       SEQ_OPC_CYCLES(8) |
+		       SEQ_OPC_OPCODE(SPINOR_OP_RDID)),
+	.seq = {
+		STFSM_INST_CMD1,
+		STFSM_INST_DATA_READ,
+		STFSM_INST_STOP,
+	},
+	.seq_cfg = (SEQ_CFG_PADS_1 |
+		    SEQ_CFG_READNOTWRITE |
+		    SEQ_CFG_CSDEASSERT |
+		    SEQ_CFG_STARTSEQ),
+};
+
 static int stfsm_n25q_en_32bit_addr_seq(struct stfsm_seq *seq)
 {
 	seq->seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) |
···
 static inline uint32_t stfsm_fifo_available(struct stfsm *fsm)
 {
 	return (readl(fsm->base + SPI_FAST_SEQ_STA) >> 5) & 0x7f;
-}
-
-static void stfsm_clear_fifo(struct stfsm *fsm)
-{
-	uint32_t avail;
-
-	for (;;) {
-		avail = stfsm_fifo_available(fsm);
-		if (!avail)
-			break;
-
-		while (avail) {
-			readl(fsm->base + SPI_FAST_SEQ_DATA_REG);
-			avail--;
-		}
-	}
 }
 
 static inline void stfsm_load_seq(struct stfsm *fsm,
···
 		readsl(fsm->base + SPI_FAST_SEQ_DATA_REG, buf, words);
 		buf += words;
 	}
+}
+
+/*
+ * Clear the data FIFO
+ *
+ * Typically, this is only required during driver initialisation, where no
+ * assumptions can be made regarding the state of the FIFO.
+ *
+ * The process of clearing the FIFO is complicated by fact that while it is
+ * possible for the FIFO to contain an arbitrary number of bytes [1], the
+ * SPI_FAST_SEQ_STA register only reports the number of complete 32-bit words
+ * present. Furthermore, data can only be drained from the FIFO by reading
+ * complete 32-bit words.
+ *
+ * With this in mind, a two stage process is used to the clear the FIFO:
+ *
+ *   1. Read any complete 32-bit words from the FIFO, as reported by the
+ *      SPI_FAST_SEQ_STA register.
+ *
+ *   2. Mop up any remaining bytes. At this point, it is not known if there
+ *      are 0, 1, 2, or 3 bytes in the FIFO. To handle all cases, a dummy FSM
+ *      sequence is used to load one byte at a time, until a complete 32-bit
+ *      word is formed; at most, 4 bytes will need to be loaded.
+ *
+ * [1] It is theoretically possible for the FIFO to contain an arbitrary number
+ *     of bits. However, since there are no known use-cases that leave
+ *     incomplete bytes in the FIFO, only words and bytes are considered here.
+ */
+static void stfsm_clear_fifo(struct stfsm *fsm)
+{
+	const struct stfsm_seq *seq = &stfsm_seq_load_fifo_byte;
+	uint32_t words, i;
+
+	/* 1. Clear any 32-bit words */
+	words = stfsm_fifo_available(fsm);
+	if (words) {
+		for (i = 0; i < words; i++)
+			readl(fsm->base + SPI_FAST_SEQ_DATA_REG);
+		dev_dbg(fsm->dev, "cleared %d words from FIFO\n", words);
+	}
+
+	/*
+	 * 2. Clear any remaining bytes
+	 *    - Load the FIFO, one byte at a time, until a complete 32-bit word
+	 *      is available.
+	 */
+	for (i = 0, words = 0; i < 4 && !words; i++) {
+		stfsm_load_seq(fsm, seq);
+		stfsm_wait_seq(fsm);
+		words = stfsm_fifo_available(fsm);
+	}
+
+	/* - A single word must be available now */
+	if (words != 1) {
+		dev_err(fsm->dev, "failed to clear bytes from the data FIFO\n");
+		return;
+	}
+
+	/* - Read the 32-bit word */
+	readl(fsm->base + SPI_FAST_SEQ_DATA_REG);
+
+	dev_dbg(fsm->dev, "cleared %d byte(s) from the data FIFO\n", 4 - i);
 }
 
 static int stfsm_write_fifo(struct stfsm *fsm, const uint32_t *buf,
···
 	uint32_t size_lb;
 	uint32_t size_mop;
 	uint32_t tmp[4];
+	uint32_t i;
 	uint32_t page_buf[FLASH_PAGESIZE_32];
 	uint8_t *t = (uint8_t *)&tmp;
 	const uint8_t *p;
 	int ret;
-	int i;
 
 	dev_dbg(fsm->dev, "writing %d bytes to 0x%08x\n", size, offset);
 
···
 	uint32_t emi_freq;
 	uint32_t clk_div;
 
-	/* TODO: Make this dynamic */
-	emi_freq = STFSM_DEFAULT_EMI_FREQ;
+	emi_freq = clk_get_rate(fsm->clk);
 
 	/*
 	 * Calculate clk_div - values between 2 and 128
···
 		return PTR_ERR(fsm->base);
 	}
 
+	fsm->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(fsm->clk)) {
+		dev_err(fsm->dev, "Couldn't find EMI clock.\n");
+		return PTR_ERR(fsm->clk);
+	}
+
+	ret = clk_prepare_enable(fsm->clk);
+	if (ret) {
+		dev_err(fsm->dev, "Failed to enable EMI clock.\n");
+		return ret;
+	}
+
 	mutex_init(&fsm->lock);
 
 	ret = stfsm_init(fsm);
···
 	return mtd_device_unregister(&fsm->mtd);
 }
 
+#ifdef CONFIG_PM_SLEEP
+static int stfsmfsm_suspend(struct device *dev)
+{
+	struct stfsm *fsm = dev_get_drvdata(dev);
+
+	clk_disable_unprepare(fsm->clk);
+
+	return 0;
+}
+
+static int stfsmfsm_resume(struct device *dev)
+{
+	struct stfsm *fsm = dev_get_drvdata(dev);
+
+	clk_prepare_enable(fsm->clk);
+
+	return 0;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(stfsm_pm_ops, stfsmfsm_suspend, stfsmfsm_resume);
+
 static const struct of_device_id stfsm_match[] = {
 	{ .compatible = "st,spi-fsm", },
 	{},
···
 	.driver = {
 		.name = "st-spi-fsm",
 		.of_match_table = stfsm_match,
+		.pm = &stfsm_pm_ops,
 	},
 };
 module_platform_driver(stfsm_driver);
+10
drivers/mtd/maps/physmap_of.c
···
 		info->list[i].mtd = obsolete_probe(dev,
 						   &info->list[i].map);
 	}
+
+	/* Fall back to mapping region as ROM */
+	if (!info->list[i].mtd) {
+		dev_warn(&dev->dev,
+			 "do_map_probe() failed for type %s\n",
+			 probe_type);
+
+		info->list[i].mtd = do_map_probe("map_rom",
+						 &info->list[i].map);
+	}
 	mtd_list[i] = info->list[i].mtd;
 
 	err = -ENXIO;
-10
drivers/mtd/mtdblock.c
···
 	enum { STATE_EMPTY, STATE_CLEAN, STATE_DIRTY } cache_state;
 };
 
-static DEFINE_MUTEX(mtdblks_lock);
-
 /*
  * Cache stuff...
  *
···
 
 	pr_debug("mtdblock_open\n");
 
-	mutex_lock(&mtdblks_lock);
 	if (mtdblk->count) {
 		mtdblk->count++;
-		mutex_unlock(&mtdblks_lock);
 		return 0;
 	}
 
···
 		mtdblk->cache_data = NULL;
 	}
 
-	mutex_unlock(&mtdblks_lock);
-
 	pr_debug("ok\n");
 
 	return 0;
···
 	struct mtdblk_dev *mtdblk = container_of(mbd, struct mtdblk_dev, mbd);
 
 	pr_debug("mtdblock_release\n");
-
-	mutex_lock(&mtdblks_lock);
 
 	mutex_lock(&mtdblk->cache_mutex);
 	write_cached_data(mtdblk);
···
 		mtd_sync(mbd->mtd);
 		vfree(mtdblk->cache_data);
 	}
-
-	mutex_unlock(&mtdblks_lock);
 
 	pr_debug("ok\n");
 }
+2 -1
drivers/mtd/mtdconcat.c
···
 		devops.len = subdev->size - to;
 
 	err = mtd_write_oob(subdev, to, &devops);
-	ops->retlen += devops.oobretlen;
+	ops->retlen += devops.retlen;
+	ops->oobretlen += devops.oobretlen;
 	if (err)
 		return err;
 
+28
drivers/mtd/mtdcore.c
···
 #include <linux/backing-dev.h>
 #include <linux/gfp.h>
 #include <linux/slab.h>
+#include <linux/reboot.h>
 
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/partitions.h>
···
 EXPORT_SYMBOL_GPL(mtd_mmap_capabilities);
 #endif
 
+static int mtd_reboot_notifier(struct notifier_block *n, unsigned long state,
+			       void *cmd)
+{
+	struct mtd_info *mtd;
+
+	mtd = container_of(n, struct mtd_info, reboot_notifier);
+	mtd->_reboot(mtd);
+
+	return NOTIFY_DONE;
+}
+
 /**
  * add_mtd_device - register an MTD device
  * @mtd: pointer to new MTD device info structure
···
 		err = -ENODEV;
 	}
 
+	/*
+	 * FIXME: some drivers unfortunately call this function more than once.
+	 * So we have to check if we've already assigned the reboot notifier.
+	 *
+	 * Generally, we can make multiple calls work for most cases, but it
+	 * does cause problems with parse_mtd_partitions() above (e.g.,
+	 * cmdlineparts will register partitions more than once).
+	 */
+	if (mtd->_reboot && !mtd->reboot_notifier.notifier_call) {
+		mtd->reboot_notifier.notifier_call = mtd_reboot_notifier;
+		register_reboot_notifier(&mtd->reboot_notifier);
+	}
+
 	return err;
 }
 EXPORT_SYMBOL_GPL(mtd_device_parse_register);
···
 int mtd_device_unregister(struct mtd_info *master)
 {
 	int err;
+
+	if (master->_reboot)
+		unregister_reboot_notifier(&master->reboot_notifier);
 
 	err = del_mtd_partitions(master);
 	if (err)
+6 -1
drivers/mtd/nand/Kconfig
···
 
 config MTD_NAND_FSL_ELBC
 	tristate "NAND support for Freescale eLBC controllers"
-	depends on PPC_OF
+	depends on PPC
 	select FSL_LBC
 	help
 	  Various Freescale chips, including the 8313, include a NAND Flash
···
 	depends on ARCH_SUNXI
 	help
 	  Enables support for NAND Flash chips on Allwinner SoCs.
+
+config MTD_NAND_HISI504
+	tristate "Support for NAND controller on Hisilicon SoC Hip04"
+	help
+	  Enables support for NAND controller on Hisilicon SoC Hip04.
 
 endif # MTD_NAND
+1
drivers/mtd/nand/Makefile
···
 obj-$(CONFIG_MTD_NAND_XWAY) += xway_nand.o
 obj-$(CONFIG_MTD_NAND_BCM47XXNFLASH) += bcm47xxnflash/
 obj-$(CONFIG_MTD_NAND_SUNXI) += sunxi_nand.o
+obj-$(CONFIG_MTD_NAND_HISI504) += hisi504_nand.o
 
 nand-objs := nand_base.o nand_bbt.o nand_timings.o
+1 -5
drivers/mtd/nand/ams-delta.c
···
 		return -ENXIO;
 
 	/* Allocate memory for MTD device structure and private data */
-	ams_delta_mtd = kmalloc(sizeof(struct mtd_info) +
+	ams_delta_mtd = kzalloc(sizeof(struct mtd_info) +
 				sizeof(struct nand_chip), GFP_KERNEL);
 	if (!ams_delta_mtd) {
 		printk (KERN_WARNING "Unable to allocate E3 NAND MTD device structure.\n");
···
 
 	/* Get pointer to private data */
 	this = (struct nand_chip *) (&ams_delta_mtd[1]);
-
-	/* Initialize structures */
-	memset(ams_delta_mtd, 0, sizeof(struct mtd_info));
-	memset(this, 0, sizeof(struct nand_chip));
 
 	/* Link the private data with the MTD structure */
 	ams_delta_mtd->priv = this;
+27 -4
drivers/mtd/nand/atmel_nand.c
···
 #include "atmel_nand_ecc.h" /* Hardware ECC registers */
 #include "atmel_nand_nfc.h" /* Nand Flash Controller definition */
 
+struct atmel_nand_caps {
+	bool pmecc_correct_erase_page;
+};
+
 /* oob layout for large page size
  * bad block info is on bytes 0 and 1
  * the bytes have to be consecutives to avoid
···
 
 	struct atmel_nfc *nfc;
 
+	struct atmel_nand_caps *caps;
 	bool has_pmecc;
 	u8 pmecc_corr_cap;
 	u16 pmecc_sector_size;
···
 	struct atmel_nand_host *host = nand_chip->priv;
 	int i, err_nbr;
 	uint8_t *buf_pos;
-	int total_err = 0;
+	int max_bitflips = 0;
+
+	/* If can correct bitfilps from erased page, do the normal check */
+	if (host->caps->pmecc_correct_erase_page)
+		goto normal_check;
 
 	for (i = 0; i < nand_chip->ecc.total; i++)
 		if (ecc[i] != 0xff)
···
 				pmecc_correct_data(mtd, buf_pos, ecc, i,
 					nand_chip->ecc.bytes, err_nbr);
 				mtd->ecc_stats.corrected += err_nbr;
-				total_err += err_nbr;
+				max_bitflips = max_t(int, max_bitflips, err_nbr);
 			}
 		}
 		pmecc_stat >>= 1;
 	}
 
-	return total_err;
+	return max_bitflips;
 }
 
 static void pmecc_enable(struct atmel_nand_host *host, int ecc_op)
···
 	ecc_writel(host->ecc, CR, ATMEL_ECC_RST);
 }
 
+static const struct of_device_id atmel_nand_dt_ids[];
+
 static int atmel_of_init_port(struct atmel_nand_host *host,
 			      struct device_node *np)
 {
···
 	int ecc_mode;
 	struct atmel_nand_data *board = &host->board;
 	enum of_gpio_flags flags = 0;
+
+	host->caps = (struct atmel_nand_caps *)
+		of_match_device(atmel_nand_dt_ids, host->dev)->data;
 
 	if (of_property_read_u32(np, "atmel,nand-addr-offset", &val) == 0) {
 		if (val >= 32) {
···
 	return 0;
 }
 
+static struct atmel_nand_caps at91rm9200_caps = {
+	.pmecc_correct_erase_page = false,
+};
+
+static struct atmel_nand_caps sama5d4_caps = {
+	.pmecc_correct_erase_page = true,
+};
+
 static const struct of_device_id atmel_nand_dt_ids[] = {
-	{ .compatible = "atmel,at91rm9200-nand" },
+	{ .compatible = "atmel,at91rm9200-nand", .data = &at91rm9200_caps },
+	{ .compatible = "atmel,sama5d4-nand", .data = &sama5d4_caps },
 	{ /* sentinel */ }
 };
 
+1 -39
drivers/mtd/nand/denali.c
···
 	index_addr(denali, mode | ((addr >> 16) << 8), 0x2200);
 
 	/* 3. set memory low address bits 23:8 */
-	index_addr(denali, mode | ((addr & 0xff) << 8), 0x2300);
+	index_addr(denali, mode | ((addr & 0xffff) << 8), 0x2300);
 
 	/* 4. interrupt when complete, burst len = 64 bytes */
 	index_addr(denali, mode | 0x14000, 0x2400);
···
 		break;
 	}
 }
-
-/* stubs for ECC functions not used by the NAND core */
-static int denali_ecc_calculate(struct mtd_info *mtd, const uint8_t *data,
-				uint8_t *ecc_code)
-{
-	struct denali_nand_info *denali = mtd_to_denali(mtd);
-
-	dev_err(denali->dev, "denali_ecc_calculate called unexpectedly\n");
-	BUG();
-	return -EIO;
-}
-
-static int denali_ecc_correct(struct mtd_info *mtd, uint8_t *data,
-			      uint8_t *read_ecc, uint8_t *calc_ecc)
-{
-	struct denali_nand_info *denali = mtd_to_denali(mtd);
-
-	dev_err(denali->dev, "denali_ecc_correct called unexpectedly\n");
-	BUG();
-	return -EIO;
-}
-
-static void denali_ecc_hwctl(struct mtd_info *mtd, int mode)
-{
-	struct denali_nand_info *denali = mtd_to_denali(mtd);
-
-	dev_err(denali->dev, "denali_ecc_hwctl called unexpectedly\n");
-	BUG();
-}
 /* end NAND core entry points */
 
 /* Initialization code to bring the device up to a known good state */
···
 	 */
 	denali->totalblks = denali->mtd.size >> denali->nand.phys_erase_shift;
 	denali->blksperchip = denali->totalblks / denali->nand.numchips;
-
-	/*
-	 * These functions are required by the NAND core framework, otherwise,
-	 * the NAND core will assert. However, we don't need them, so we'll stub
-	 * them out.
-	 */
-	denali->nand.ecc.calculate = denali_ecc_calculate;
-	denali->nand.ecc.correct = denali_ecc_correct;
-	denali->nand.ecc.hwctl = denali_ecc_hwctl;
 
 	/* override the default read operations */
 	denali->nand.ecc.size = ECC_SECTOR_SIZE * denali->devnum;
-9
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
···
  * ecc.read_page or ecc.read_page_raw function. Thus, the fact that MTD wants an
  * ECC-based or raw view of the page is implicit in which function it calls
  * (there is a similar pair of ECC-based/raw functions for writing).
- *
- * FIXME: The following paragraph is incorrect, now that there exist
- * ecc.read_oob_raw and ecc.write_oob_raw functions.
- *
- * Since MTD assumes the OOB is not covered by ECC, there is no pair of
- * ECC-based/raw functions for reading or or writing the OOB. The fact that the
- * caller wants an ECC-based or raw view of the page is not propagated down to
- * this driver.
 */
 static int gpmi_ecc_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
 			     int page)
···
 exit_nfc_init:
 	release_resources(this);
 exit_acquire_resources:
-	dev_err(this->dev, "driver registration failed: %d\n", ret);
 
 	return ret;
 }
+891
drivers/mtd/nand/hisi504_nand.c
···
+/*
+ * Hisilicon NAND Flash controller driver
+ *
+ * Copyright © 2012-2014 HiSilicon Technologies Co., Ltd.
+ * http://www.hisilicon.com
+ *
+ * Author: Zhou Wang <wangzhou.bry@gmail.com>
+ * The initial developer of the original code is Zhiyong Cai
+ * <caizhiyong@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/of.h>
+#include <linux/of_mtd.h>
+#include <linux/mtd/mtd.h>
+#include <linux/sizes.h>
+#include <linux/clk.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/mtd/nand.h>
+#include <linux/dma-mapping.h>
+#include <linux/platform_device.h>
+#include <linux/mtd/partitions.h>
+
+#define HINFC504_MAX_CHIP (4)
+#define HINFC504_W_LATCH (5)
+#define HINFC504_R_LATCH (7)
+#define HINFC504_RW_LATCH (3)
+
+#define HINFC504_NFC_TIMEOUT (2 * HZ)
+#define HINFC504_NFC_PM_TIMEOUT (1 * HZ)
+#define HINFC504_NFC_DMA_TIMEOUT (5 * HZ)
+#define HINFC504_CHIP_DELAY (25)
+
+#define HINFC504_REG_BASE_ADDRESS_LEN (0x100)
+#define HINFC504_BUFFER_BASE_ADDRESS_LEN (2048 + 128)
+
+#define HINFC504_ADDR_CYCLE_MASK 0x4
+
+#define HINFC504_CON 0x00
+#define HINFC504_CON_OP_MODE_NORMAL BIT(0)
+#define HINFC504_CON_PAGEISZE_SHIFT (1)
+#define HINFC504_CON_PAGESIZE_MASK (0x07)
+#define HINFC504_CON_BUS_WIDTH BIT(4)
+#define HINFC504_CON_READY_BUSY_SEL BIT(8)
+#define HINFC504_CON_ECCTYPE_SHIFT (9)
+#define HINFC504_CON_ECCTYPE_MASK (0x07)
+
+#define HINFC504_PWIDTH 0x04
+#define SET_HINFC504_PWIDTH(_w_lcnt, _r_lcnt, _rw_hcnt) \
+	((_w_lcnt) | (((_r_lcnt) & 0x0F) << 4) | (((_rw_hcnt) & 0x0F) << 8))
+
+#define HINFC504_CMD 0x0C
+#define HINFC504_ADDRL 0x10
+#define HINFC504_ADDRH 0x14
+#define HINFC504_DATA_NUM 0x18
+
+#define HINFC504_OP 0x1C
+#define HINFC504_OP_READ_DATA_EN BIT(1)
+#define HINFC504_OP_WAIT_READY_EN BIT(2)
+#define HINFC504_OP_CMD2_EN BIT(3)
+#define HINFC504_OP_WRITE_DATA_EN BIT(4)
+#define HINFC504_OP_ADDR_EN BIT(5)
+#define HINFC504_OP_CMD1_EN BIT(6)
+#define HINFC504_OP_NF_CS_SHIFT (7)
+#define HINFC504_OP_NF_CS_MASK (3)
+#define HINFC504_OP_ADDR_CYCLE_SHIFT (9)
+#define HINFC504_OP_ADDR_CYCLE_MASK (7)
+
+#define HINFC504_STATUS 0x20
+#define HINFC504_READY BIT(0)
+
+#define HINFC504_INTEN 0x24
+#define HINFC504_INTEN_DMA BIT(9)
+#define HINFC504_INTEN_UE BIT(6)
+#define HINFC504_INTEN_CE BIT(5)
+
+#define HINFC504_INTS 0x28
+#define HINFC504_INTS_DMA BIT(9)
+#define HINFC504_INTS_UE BIT(6)
+#define HINFC504_INTS_CE BIT(5)
+
+#define HINFC504_INTCLR 0x2C
+#define HINFC504_INTCLR_DMA BIT(9)
+#define HINFC504_INTCLR_UE BIT(6)
+#define HINFC504_INTCLR_CE BIT(5)
+
+#define HINFC504_ECC_STATUS 0x5C
+#define HINFC504_ECC_16_BIT_SHIFT 12
+
+#define HINFC504_DMA_CTRL 0x60
+#define HINFC504_DMA_CTRL_DMA_START BIT(0)
+#define HINFC504_DMA_CTRL_WE BIT(1)
+#define HINFC504_DMA_CTRL_DATA_AREA_EN BIT(2)
+#define HINFC504_DMA_CTRL_OOB_AREA_EN BIT(3)
+#define HINFC504_DMA_CTRL_BURST4_EN BIT(4)
+#define HINFC504_DMA_CTRL_BURST8_EN BIT(5)
+#define HINFC504_DMA_CTRL_BURST16_EN BIT(6)
+#define HINFC504_DMA_CTRL_ADDR_NUM_SHIFT (7)
+#define HINFC504_DMA_CTRL_ADDR_NUM_MASK (1)
+#define HINFC504_DMA_CTRL_CS_SHIFT (8)
+#define HINFC504_DMA_CTRL_CS_MASK (0x03)
+
+#define HINFC504_DMA_ADDR_DATA 0x64
+#define HINFC504_DMA_ADDR_OOB 0x68
+
+#define HINFC504_DMA_LEN 0x6C
+#define HINFC504_DMA_LEN_OOB_SHIFT (16)
+#define HINFC504_DMA_LEN_OOB_MASK (0xFFF)
+
+#define HINFC504_DMA_PARA 0x70
+#define HINFC504_DMA_PARA_DATA_RW_EN BIT(0)
+#define HINFC504_DMA_PARA_OOB_RW_EN BIT(1)
+#define HINFC504_DMA_PARA_DATA_EDC_EN BIT(2)
+#define HINFC504_DMA_PARA_OOB_EDC_EN BIT(3)
+#define HINFC504_DMA_PARA_DATA_ECC_EN BIT(4)
+#define HINFC504_DMA_PARA_OOB_ECC_EN BIT(5)
+
+#define HINFC_VERSION 0x74
+#define HINFC504_LOG_READ_ADDR 0x7C
+#define HINFC504_LOG_READ_LEN 0x80
+
+#define HINFC504_NANDINFO_LEN 0x10
+
+struct hinfc_host {
+	struct nand_chip chip;
+	struct mtd_info mtd;
+	struct device *dev;
+	void __iomem *iobase;
+	void __iomem *mmio;
+	struct completion cmd_complete;
+	unsigned int offset;
+	unsigned int command;
+	int chipselect;
+	unsigned int addr_cycle;
+	u32 addr_value[2];
+	u32 cache_addr_value[2];
+	char *buffer;
+	dma_addr_t dma_buffer;
+	dma_addr_t dma_oob;
+	int version;
+	unsigned int irq_status; /* interrupt status */
+};
+
+static inline unsigned int hinfc_read(struct hinfc_host *host, unsigned int reg)
+{
+	return readl(host->iobase + reg);
+}
+
+static inline void hinfc_write(struct hinfc_host *host, unsigned int value,
+			       unsigned int reg)
+{
+	writel(value, host->iobase + reg);
+}
+
+static void wait_controller_finished(struct hinfc_host *host)
+{
+	unsigned long timeout = jiffies + HINFC504_NFC_TIMEOUT;
+	int val;
+
+	while (time_before(jiffies, timeout)) {
+		val = hinfc_read(host, HINFC504_STATUS);
+		if (host->command == NAND_CMD_ERASE2) {
+			/* nfc is ready */
+			while (!(val & HINFC504_READY)) {
+				usleep_range(500, 1000);
+				val = hinfc_read(host, HINFC504_STATUS);
+			}
+			return;
+		}
+
+		if (val & HINFC504_READY)
+			return;
+	}
+
+	/* wait cmd timeout */
+	dev_err(host->dev, "Wait NAND controller exec cmd timeout.\n");
+}
+
+static void hisi_nfc_dma_transfer(struct hinfc_host *host, int todev)
+{
+	struct mtd_info *mtd = &host->mtd;
+	struct nand_chip *chip = mtd->priv;
+	unsigned long val;
+	int ret;
+
+	hinfc_write(host, host->dma_buffer, HINFC504_DMA_ADDR_DATA);
+	hinfc_write(host, host->dma_oob, HINFC504_DMA_ADDR_OOB);
+
+	if (chip->ecc.mode == NAND_ECC_NONE) {
+		hinfc_write(host, ((mtd->oobsize & HINFC504_DMA_LEN_OOB_MASK)
+			<< HINFC504_DMA_LEN_OOB_SHIFT), HINFC504_DMA_LEN);
+
+		hinfc_write(host, HINFC504_DMA_PARA_DATA_RW_EN
+			| HINFC504_DMA_PARA_OOB_RW_EN, HINFC504_DMA_PARA);
+	} else {
+		if (host->command == NAND_CMD_READOOB)
+			hinfc_write(host, HINFC504_DMA_PARA_OOB_RW_EN
+			| HINFC504_DMA_PARA_OOB_EDC_EN
+			| HINFC504_DMA_PARA_OOB_ECC_EN, HINFC504_DMA_PARA);
+		else
+			hinfc_write(host, HINFC504_DMA_PARA_DATA_RW_EN
+			| HINFC504_DMA_PARA_OOB_RW_EN
+			| HINFC504_DMA_PARA_DATA_EDC_EN
+			| HINFC504_DMA_PARA_OOB_EDC_EN
+			| HINFC504_DMA_PARA_DATA_ECC_EN
+			| HINFC504_DMA_PARA_OOB_ECC_EN, HINFC504_DMA_PARA);
+
+	}
+
+	val = (HINFC504_DMA_CTRL_DMA_START | HINFC504_DMA_CTRL_BURST4_EN
+		| HINFC504_DMA_CTRL_BURST8_EN | HINFC504_DMA_CTRL_BURST16_EN
+		| HINFC504_DMA_CTRL_DATA_AREA_EN | HINFC504_DMA_CTRL_OOB_AREA_EN
+		| ((host->addr_cycle == 4 ?
1 : 0) 225 + << HINFC504_DMA_CTRL_ADDR_NUM_SHIFT) 226 + | ((host->chipselect & HINFC504_DMA_CTRL_CS_MASK) 227 + << HINFC504_DMA_CTRL_CS_SHIFT)); 228 + 229 + if (todev) 230 + val |= HINFC504_DMA_CTRL_WE; 231 + 232 + init_completion(&host->cmd_complete); 233 + 234 + hinfc_write(host, val, HINFC504_DMA_CTRL); 235 + ret = wait_for_completion_timeout(&host->cmd_complete, 236 + HINFC504_NFC_DMA_TIMEOUT); 237 + 238 + if (!ret) { 239 + dev_err(host->dev, "DMA operation(irq) timeout!\n"); 240 + /* sanity check */ 241 + val = hinfc_read(host, HINFC504_DMA_CTRL); 242 + if (!(val & HINFC504_DMA_CTRL_DMA_START)) 243 + dev_err(host->dev, "DMA is already done but without irq ACK!\n"); 244 + else 245 + dev_err(host->dev, "DMA is really timeout!\n"); 246 + } 247 + } 248 + 249 + static int hisi_nfc_send_cmd_pageprog(struct hinfc_host *host) 250 + { 251 + host->addr_value[0] &= 0xffff0000; 252 + 253 + hinfc_write(host, host->addr_value[0], HINFC504_ADDRL); 254 + hinfc_write(host, host->addr_value[1], HINFC504_ADDRH); 255 + hinfc_write(host, NAND_CMD_PAGEPROG << 8 | NAND_CMD_SEQIN, 256 + HINFC504_CMD); 257 + 258 + hisi_nfc_dma_transfer(host, 1); 259 + 260 + return 0; 261 + } 262 + 263 + static int hisi_nfc_send_cmd_readstart(struct hinfc_host *host) 264 + { 265 + struct mtd_info *mtd = &host->mtd; 266 + 267 + if ((host->addr_value[0] == host->cache_addr_value[0]) && 268 + (host->addr_value[1] == host->cache_addr_value[1])) 269 + return 0; 270 + 271 + host->addr_value[0] &= 0xffff0000; 272 + 273 + hinfc_write(host, host->addr_value[0], HINFC504_ADDRL); 274 + hinfc_write(host, host->addr_value[1], HINFC504_ADDRH); 275 + hinfc_write(host, NAND_CMD_READSTART << 8 | NAND_CMD_READ0, 276 + HINFC504_CMD); 277 + 278 + hinfc_write(host, 0, HINFC504_LOG_READ_ADDR); 279 + hinfc_write(host, mtd->writesize + mtd->oobsize, 280 + HINFC504_LOG_READ_LEN); 281 + 282 + hisi_nfc_dma_transfer(host, 0); 283 + 284 + host->cache_addr_value[0] = host->addr_value[0]; 285 + host->cache_addr_value[1] = 
host->addr_value[1]; 286 + 287 + return 0; 288 + } 289 + 290 + static int hisi_nfc_send_cmd_erase(struct hinfc_host *host) 291 + { 292 + hinfc_write(host, host->addr_value[0], HINFC504_ADDRL); 293 + hinfc_write(host, (NAND_CMD_ERASE2 << 8) | NAND_CMD_ERASE1, 294 + HINFC504_CMD); 295 + 296 + hinfc_write(host, HINFC504_OP_WAIT_READY_EN 297 + | HINFC504_OP_CMD2_EN 298 + | HINFC504_OP_CMD1_EN 299 + | HINFC504_OP_ADDR_EN 300 + | ((host->chipselect & HINFC504_OP_NF_CS_MASK) 301 + << HINFC504_OP_NF_CS_SHIFT) 302 + | ((host->addr_cycle & HINFC504_OP_ADDR_CYCLE_MASK) 303 + << HINFC504_OP_ADDR_CYCLE_SHIFT), 304 + HINFC504_OP); 305 + 306 + wait_controller_finished(host); 307 + 308 + return 0; 309 + } 310 + 311 + static int hisi_nfc_send_cmd_readid(struct hinfc_host *host) 312 + { 313 + hinfc_write(host, HINFC504_NANDINFO_LEN, HINFC504_DATA_NUM); 314 + hinfc_write(host, NAND_CMD_READID, HINFC504_CMD); 315 + hinfc_write(host, 0, HINFC504_ADDRL); 316 + 317 + hinfc_write(host, HINFC504_OP_CMD1_EN | HINFC504_OP_ADDR_EN 318 + | HINFC504_OP_READ_DATA_EN 319 + | ((host->chipselect & HINFC504_OP_NF_CS_MASK) 320 + << HINFC504_OP_NF_CS_SHIFT) 321 + | 1 << HINFC504_OP_ADDR_CYCLE_SHIFT, HINFC504_OP); 322 + 323 + wait_controller_finished(host); 324 + 325 + return 0; 326 + } 327 + 328 + static int hisi_nfc_send_cmd_status(struct hinfc_host *host) 329 + { 330 + hinfc_write(host, HINFC504_NANDINFO_LEN, HINFC504_DATA_NUM); 331 + hinfc_write(host, NAND_CMD_STATUS, HINFC504_CMD); 332 + hinfc_write(host, HINFC504_OP_CMD1_EN 333 + | HINFC504_OP_READ_DATA_EN 334 + | ((host->chipselect & HINFC504_OP_NF_CS_MASK) 335 + << HINFC504_OP_NF_CS_SHIFT), 336 + HINFC504_OP); 337 + 338 + wait_controller_finished(host); 339 + 340 + return 0; 341 + } 342 + 343 + static int hisi_nfc_send_cmd_reset(struct hinfc_host *host, int chipselect) 344 + { 345 + hinfc_write(host, NAND_CMD_RESET, HINFC504_CMD); 346 + 347 + hinfc_write(host, HINFC504_OP_CMD1_EN 348 + | ((chipselect & HINFC504_OP_NF_CS_MASK) 349 + << 
HINFC504_OP_NF_CS_SHIFT) 350 + | HINFC504_OP_WAIT_READY_EN, 351 + HINFC504_OP); 352 + 353 + wait_controller_finished(host); 354 + 355 + return 0; 356 + } 357 + 358 + static void hisi_nfc_select_chip(struct mtd_info *mtd, int chipselect) 359 + { 360 + struct nand_chip *chip = mtd->priv; 361 + struct hinfc_host *host = chip->priv; 362 + 363 + if (chipselect < 0) 364 + return; 365 + 366 + host->chipselect = chipselect; 367 + } 368 + 369 + static uint8_t hisi_nfc_read_byte(struct mtd_info *mtd) 370 + { 371 + struct nand_chip *chip = mtd->priv; 372 + struct hinfc_host *host = chip->priv; 373 + 374 + if (host->command == NAND_CMD_STATUS) 375 + return *(uint8_t *)(host->mmio); 376 + 377 + host->offset++; 378 + 379 + if (host->command == NAND_CMD_READID) 380 + return *(uint8_t *)(host->mmio + host->offset - 1); 381 + 382 + return *(uint8_t *)(host->buffer + host->offset - 1); 383 + } 384 + 385 + static u16 hisi_nfc_read_word(struct mtd_info *mtd) 386 + { 387 + struct nand_chip *chip = mtd->priv; 388 + struct hinfc_host *host = chip->priv; 389 + 390 + host->offset += 2; 391 + return *(u16 *)(host->buffer + host->offset - 2); 392 + } 393 + 394 + static void 395 + hisi_nfc_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len) 396 + { 397 + struct nand_chip *chip = mtd->priv; 398 + struct hinfc_host *host = chip->priv; 399 + 400 + memcpy(host->buffer + host->offset, buf, len); 401 + host->offset += len; 402 + } 403 + 404 + static void hisi_nfc_read_buf(struct mtd_info *mtd, uint8_t *buf, int len) 405 + { 406 + struct nand_chip *chip = mtd->priv; 407 + struct hinfc_host *host = chip->priv; 408 + 409 + memcpy(buf, host->buffer + host->offset, len); 410 + host->offset += len; 411 + } 412 + 413 + static void set_addr(struct mtd_info *mtd, int column, int page_addr) 414 + { 415 + struct nand_chip *chip = mtd->priv; 416 + struct hinfc_host *host = chip->priv; 417 + unsigned int command = host->command; 418 + 419 + host->addr_cycle = 0; 420 + host->addr_value[0] = 0; 421 + 
host->addr_value[1] = 0; 422 + 423 + /* Serially input address */ 424 + if (column != -1) { 425 + /* Adjust columns for 16 bit buswidth */ 426 + if (chip->options & NAND_BUSWIDTH_16 && 427 + !nand_opcode_8bits(command)) 428 + column >>= 1; 429 + 430 + host->addr_value[0] = column & 0xffff; 431 + host->addr_cycle = 2; 432 + } 433 + if (page_addr != -1) { 434 + host->addr_value[0] |= (page_addr & 0xffff) 435 + << (host->addr_cycle * 8); 436 + host->addr_cycle += 2; 437 + /* One more address cycle for devices > 128MiB */ 438 + if (chip->chipsize > (128 << 20)) { 439 + host->addr_cycle += 1; 440 + if (host->command == NAND_CMD_ERASE1) 441 + host->addr_value[0] |= ((page_addr >> 16) & 0xff) << 16; 442 + else 443 + host->addr_value[1] |= ((page_addr >> 16) & 0xff); 444 + } 445 + } 446 + } 447 + 448 + static void hisi_nfc_cmdfunc(struct mtd_info *mtd, unsigned command, int column, 449 + int page_addr) 450 + { 451 + struct nand_chip *chip = mtd->priv; 452 + struct hinfc_host *host = chip->priv; 453 + int is_cache_invalid = 1; 454 + unsigned int flag = 0; 455 + 456 + host->command = command; 457 + 458 + switch (command) { 459 + case NAND_CMD_READ0: 460 + case NAND_CMD_READOOB: 461 + if (command == NAND_CMD_READ0) 462 + host->offset = column; 463 + else 464 + host->offset = column + mtd->writesize; 465 + 466 + is_cache_invalid = 0; 467 + set_addr(mtd, column, page_addr); 468 + hisi_nfc_send_cmd_readstart(host); 469 + break; 470 + 471 + case NAND_CMD_SEQIN: 472 + host->offset = column; 473 + set_addr(mtd, column, page_addr); 474 + break; 475 + 476 + case NAND_CMD_ERASE1: 477 + set_addr(mtd, column, page_addr); 478 + break; 479 + 480 + case NAND_CMD_PAGEPROG: 481 + hisi_nfc_send_cmd_pageprog(host); 482 + break; 483 + 484 + case NAND_CMD_ERASE2: 485 + hisi_nfc_send_cmd_erase(host); 486 + break; 487 + 488 + case NAND_CMD_READID: 489 + host->offset = column; 490 + memset(host->mmio, 0, 0x10); 491 + hisi_nfc_send_cmd_readid(host); 492 + break; 493 + 494 + case NAND_CMD_STATUS: 495 
+ flag = hinfc_read(host, HINFC504_CON); 496 + if (chip->ecc.mode == NAND_ECC_HW) 497 + hinfc_write(host, 498 + flag & ~(HINFC504_CON_ECCTYPE_MASK << 499 + HINFC504_CON_ECCTYPE_SHIFT), HINFC504_CON); 500 + 501 + host->offset = 0; 502 + memset(host->mmio, 0, 0x10); 503 + hisi_nfc_send_cmd_status(host); 504 + hinfc_write(host, flag, HINFC504_CON); 505 + break; 506 + 507 + case NAND_CMD_RESET: 508 + hisi_nfc_send_cmd_reset(host, host->chipselect); 509 + break; 510 + 511 + default: 512 + dev_err(host->dev, "Error: unsupported cmd(cmd=%x, col=%x, page=%x)\n", 513 + command, column, page_addr); 514 + } 515 + 516 + if (is_cache_invalid) { 517 + host->cache_addr_value[0] = ~0; 518 + host->cache_addr_value[1] = ~0; 519 + } 520 + } 521 + 522 + static irqreturn_t hinfc_irq_handle(int irq, void *devid) 523 + { 524 + struct hinfc_host *host = devid; 525 + unsigned int flag; 526 + 527 + flag = hinfc_read(host, HINFC504_INTS); 528 + /* store interrupts state */ 529 + host->irq_status |= flag; 530 + 531 + if (flag & HINFC504_INTS_DMA) { 532 + hinfc_write(host, HINFC504_INTCLR_DMA, HINFC504_INTCLR); 533 + complete(&host->cmd_complete); 534 + } else if (flag & HINFC504_INTS_CE) { 535 + hinfc_write(host, HINFC504_INTCLR_CE, HINFC504_INTCLR); 536 + } else if (flag & HINFC504_INTS_UE) { 537 + hinfc_write(host, HINFC504_INTCLR_UE, HINFC504_INTCLR); 538 + } 539 + 540 + return IRQ_HANDLED; 541 + } 542 + 543 + static int hisi_nand_read_page_hwecc(struct mtd_info *mtd, 544 + struct nand_chip *chip, uint8_t *buf, int oob_required, int page) 545 + { 546 + struct hinfc_host *host = chip->priv; 547 + int max_bitflips = 0, stat = 0, stat_max = 0, status_ecc; 548 + int stat_1, stat_2; 549 + 550 + chip->read_buf(mtd, buf, mtd->writesize); 551 + chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 552 + 553 + /* errors which can not be corrected by ECC */ 554 + if (host->irq_status & HINFC504_INTS_UE) { 555 + mtd->ecc_stats.failed++; 556 + } else if (host->irq_status & HINFC504_INTS_CE) { 557 + /* 
TODO: need add other ECC modes! */ 558 + switch (chip->ecc.strength) { 559 + case 16: 560 + status_ecc = hinfc_read(host, HINFC504_ECC_STATUS) >> 561 + HINFC504_ECC_16_BIT_SHIFT & 0x0fff; 562 + stat_2 = status_ecc & 0x3f; 563 + stat_1 = status_ecc >> 6 & 0x3f; 564 + stat = stat_1 + stat_2; 565 + stat_max = max_t(int, stat_1, stat_2); 566 + } 567 + mtd->ecc_stats.corrected += stat; 568 + max_bitflips = max_t(int, max_bitflips, stat_max); 569 + } 570 + host->irq_status = 0; 571 + 572 + return max_bitflips; 573 + } 574 + 575 + static int hisi_nand_read_oob(struct mtd_info *mtd, struct nand_chip *chip, 576 + int page) 577 + { 578 + struct hinfc_host *host = chip->priv; 579 + 580 + chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page); 581 + chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 582 + 583 + if (host->irq_status & HINFC504_INTS_UE) { 584 + host->irq_status = 0; 585 + return -EBADMSG; 586 + } 587 + 588 + host->irq_status = 0; 589 + return 0; 590 + } 591 + 592 + static int hisi_nand_write_page_hwecc(struct mtd_info *mtd, 593 + struct nand_chip *chip, const uint8_t *buf, int oob_required) 594 + { 595 + chip->write_buf(mtd, buf, mtd->writesize); 596 + if (oob_required) 597 + chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 598 + 599 + return 0; 600 + } 601 + 602 + static void hisi_nfc_host_init(struct hinfc_host *host) 603 + { 604 + struct nand_chip *chip = &host->chip; 605 + unsigned int flag = 0; 606 + 607 + host->version = hinfc_read(host, HINFC_VERSION); 608 + host->addr_cycle = 0; 609 + host->addr_value[0] = 0; 610 + host->addr_value[1] = 0; 611 + host->cache_addr_value[0] = ~0; 612 + host->cache_addr_value[1] = ~0; 613 + host->chipselect = 0; 614 + 615 + /* default page size: 2K, ecc_none. 
need modify */ 616 + flag = HINFC504_CON_OP_MODE_NORMAL | HINFC504_CON_READY_BUSY_SEL 617 + | ((0x001 & HINFC504_CON_PAGESIZE_MASK) 618 + << HINFC504_CON_PAGEISZE_SHIFT) 619 + | ((0x0 & HINFC504_CON_ECCTYPE_MASK) 620 + << HINFC504_CON_ECCTYPE_SHIFT) 621 + | ((chip->options & NAND_BUSWIDTH_16) ? 622 + HINFC504_CON_BUS_WIDTH : 0); 623 + hinfc_write(host, flag, HINFC504_CON); 624 + 625 + memset(host->mmio, 0xff, HINFC504_BUFFER_BASE_ADDRESS_LEN); 626 + 627 + hinfc_write(host, SET_HINFC504_PWIDTH(HINFC504_W_LATCH, 628 + HINFC504_R_LATCH, HINFC504_RW_LATCH), HINFC504_PWIDTH); 629 + 630 + /* enable DMA irq */ 631 + hinfc_write(host, HINFC504_INTEN_DMA, HINFC504_INTEN); 632 + } 633 + 634 + static struct nand_ecclayout nand_ecc_2K_16bits = { 635 + .oobavail = 6, 636 + .oobfree = { {2, 6} }, 637 + }; 638 + 639 + static int hisi_nfc_ecc_probe(struct hinfc_host *host) 640 + { 641 + unsigned int flag; 642 + int size, strength, ecc_bits; 643 + struct device *dev = host->dev; 644 + struct nand_chip *chip = &host->chip; 645 + struct mtd_info *mtd = &host->mtd; 646 + struct device_node *np = host->dev->of_node; 647 + 648 + size = of_get_nand_ecc_step_size(np); 649 + strength = of_get_nand_ecc_strength(np); 650 + if (size != 1024) { 651 + dev_err(dev, "error ecc size: %d\n", size); 652 + return -EINVAL; 653 + } 654 + 655 + if ((size == 1024) && ((strength != 8) && (strength != 16) && 656 + (strength != 24) && (strength != 40))) { 657 + dev_err(dev, "ecc size and strength do not match\n"); 658 + return -EINVAL; 659 + } 660 + 661 + chip->ecc.size = size; 662 + chip->ecc.strength = strength; 663 + 664 + chip->ecc.read_page = hisi_nand_read_page_hwecc; 665 + chip->ecc.read_oob = hisi_nand_read_oob; 666 + chip->ecc.write_page = hisi_nand_write_page_hwecc; 667 + 668 + switch (chip->ecc.strength) { 669 + case 16: 670 + ecc_bits = 6; 671 + if (mtd->writesize == 2048) 672 + chip->ecc.layout = &nand_ecc_2K_16bits; 673 + 674 + /* TODO: add more page size support */ 675 + break; 676 + 677 + /* 
TODO: add more ecc strength support */ 678 + default: 679 + dev_err(dev, "not support strength: %d\n", chip->ecc.strength); 680 + return -EINVAL; 681 + } 682 + 683 + flag = hinfc_read(host, HINFC504_CON); 684 + /* add ecc type configure */ 685 + flag |= ((ecc_bits & HINFC504_CON_ECCTYPE_MASK) 686 + << HINFC504_CON_ECCTYPE_SHIFT); 687 + hinfc_write(host, flag, HINFC504_CON); 688 + 689 + /* enable ecc irq */ 690 + flag = hinfc_read(host, HINFC504_INTEN) & 0xfff; 691 + hinfc_write(host, flag | HINFC504_INTEN_UE | HINFC504_INTEN_CE, 692 + HINFC504_INTEN); 693 + 694 + return 0; 695 + } 696 + 697 + static int hisi_nfc_probe(struct platform_device *pdev) 698 + { 699 + int ret = 0, irq, buswidth, flag, max_chips = HINFC504_MAX_CHIP; 700 + struct device *dev = &pdev->dev; 701 + struct hinfc_host *host; 702 + struct nand_chip *chip; 703 + struct mtd_info *mtd; 704 + struct resource *res; 705 + struct device_node *np = dev->of_node; 706 + struct mtd_part_parser_data ppdata; 707 + 708 + host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL); 709 + if (!host) 710 + return -ENOMEM; 711 + host->dev = dev; 712 + 713 + platform_set_drvdata(pdev, host); 714 + chip = &host->chip; 715 + mtd = &host->mtd; 716 + 717 + irq = platform_get_irq(pdev, 0); 718 + if (irq < 0) { 719 + dev_err(dev, "no IRQ resource defined\n"); 720 + ret = -ENXIO; 721 + goto err_res; 722 + } 723 + 724 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 725 + host->iobase = devm_ioremap_resource(dev, res); 726 + if (IS_ERR(host->iobase)) { 727 + ret = PTR_ERR(host->iobase); 728 + goto err_res; 729 + } 730 + 731 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 732 + host->mmio = devm_ioremap_resource(dev, res); 733 + if (IS_ERR(host->mmio)) { 734 + ret = PTR_ERR(host->mmio); 735 + dev_err(dev, "devm_ioremap_resource[1] fail\n"); 736 + goto err_res; 737 + } 738 + 739 + mtd->priv = chip; 740 + mtd->owner = THIS_MODULE; 741 + mtd->name = "hisi_nand"; 742 + mtd->dev.parent = &pdev->dev; 743 + 744 + chip->priv 
= host; 745 + chip->cmdfunc = hisi_nfc_cmdfunc; 746 + chip->select_chip = hisi_nfc_select_chip; 747 + chip->read_byte = hisi_nfc_read_byte; 748 + chip->read_word = hisi_nfc_read_word; 749 + chip->write_buf = hisi_nfc_write_buf; 750 + chip->read_buf = hisi_nfc_read_buf; 751 + chip->chip_delay = HINFC504_CHIP_DELAY; 752 + 753 + chip->ecc.mode = of_get_nand_ecc_mode(np); 754 + 755 + buswidth = of_get_nand_bus_width(np); 756 + if (buswidth == 16) 757 + chip->options |= NAND_BUSWIDTH_16; 758 + 759 + hisi_nfc_host_init(host); 760 + 761 + ret = devm_request_irq(dev, irq, hinfc_irq_handle, IRQF_DISABLED, 762 + "nandc", host); 763 + if (ret) { 764 + dev_err(dev, "failed to request IRQ\n"); 765 + goto err_res; 766 + } 767 + 768 + ret = nand_scan_ident(mtd, max_chips, NULL); 769 + if (ret) { 770 + ret = -ENODEV; 771 + goto err_res; 772 + } 773 + 774 + host->buffer = dmam_alloc_coherent(dev, mtd->writesize + mtd->oobsize, 775 + &host->dma_buffer, GFP_KERNEL); 776 + if (!host->buffer) { 777 + ret = -ENOMEM; 778 + goto err_res; 779 + } 780 + 781 + host->dma_oob = host->dma_buffer + mtd->writesize; 782 + memset(host->buffer, 0xff, mtd->writesize + mtd->oobsize); 783 + 784 + flag = hinfc_read(host, HINFC504_CON); 785 + flag &= ~(HINFC504_CON_PAGESIZE_MASK << HINFC504_CON_PAGEISZE_SHIFT); 786 + switch (mtd->writesize) { 787 + case 2048: 788 + flag |= (0x001 << HINFC504_CON_PAGEISZE_SHIFT); break; 789 + /* 790 + * TODO: add more pagesize support, 791 + * default pagesize has been set in hisi_nfc_host_init 792 + */ 793 + default: 794 + dev_err(dev, "NON-2KB page size nand flash\n"); 795 + ret = -EINVAL; 796 + goto err_res; 797 + } 798 + hinfc_write(host, flag, HINFC504_CON); 799 + 800 + if (chip->ecc.mode == NAND_ECC_HW) 801 + hisi_nfc_ecc_probe(host); 802 + 803 + ret = nand_scan_tail(mtd); 804 + if (ret) { 805 + dev_err(dev, "nand_scan_tail failed: %d\n", ret); 806 + goto err_res; 807 + } 808 + 809 + ppdata.of_node = np; 810 + ret = mtd_device_parse_register(mtd, NULL, &ppdata, 
NULL, 0); 811 + if (ret) { 812 + dev_err(dev, "Err MTD partition=%d\n", ret); 813 + goto err_mtd; 814 + } 815 + 816 + return 0; 817 + 818 + err_mtd: 819 + nand_release(mtd); 820 + err_res: 821 + return ret; 822 + } 823 + 824 + static int hisi_nfc_remove(struct platform_device *pdev) 825 + { 826 + struct hinfc_host *host = platform_get_drvdata(pdev); 827 + struct mtd_info *mtd = &host->mtd; 828 + 829 + nand_release(mtd); 830 + 831 + return 0; 832 + } 833 + 834 + #ifdef CONFIG_PM_SLEEP 835 + static int hisi_nfc_suspend(struct device *dev) 836 + { 837 + struct hinfc_host *host = dev_get_drvdata(dev); 838 + unsigned long timeout = jiffies + HINFC504_NFC_PM_TIMEOUT; 839 + 840 + while (time_before(jiffies, timeout)) { 841 + if (((hinfc_read(host, HINFC504_STATUS) & 0x1) == 0x0) && 842 + (hinfc_read(host, HINFC504_DMA_CTRL) & 843 + HINFC504_DMA_CTRL_DMA_START)) { 844 + cond_resched(); 845 + return 0; 846 + } 847 + } 848 + 849 + dev_err(host->dev, "nand controller suspend timeout.\n"); 850 + 851 + return -EAGAIN; 852 + } 853 + 854 + static int hisi_nfc_resume(struct device *dev) 855 + { 856 + int cs; 857 + struct hinfc_host *host = dev_get_drvdata(dev); 858 + struct nand_chip *chip = &host->chip; 859 + 860 + for (cs = 0; cs < chip->numchips; cs++) 861 + hisi_nfc_send_cmd_reset(host, cs); 862 + hinfc_write(host, SET_HINFC504_PWIDTH(HINFC504_W_LATCH, 863 + HINFC504_R_LATCH, HINFC504_RW_LATCH), HINFC504_PWIDTH); 864 + 865 + return 0; 866 + } 867 + #endif 868 + static SIMPLE_DEV_PM_OPS(hisi_nfc_pm_ops, hisi_nfc_suspend, hisi_nfc_resume); 869 + 870 + static const struct of_device_id nfc_id_table[] = { 871 + { .compatible = "hisilicon,504-nfc" }, 872 + {} 873 + }; 874 + MODULE_DEVICE_TABLE(of, nfc_id_table); 875 + 876 + static struct platform_driver hisi_nfc_driver = { 877 + .driver = { 878 + .name = "hisi_nand", 879 + .of_match_table = nfc_id_table, 880 + .pm = &hisi_nfc_pm_ops, 881 + }, 882 + .probe = hisi_nfc_probe, 883 + .remove = hisi_nfc_remove, 884 + }; 885 + 886 + 
module_platform_driver(hisi_nfc_driver); 887 + 888 + MODULE_LICENSE("GPL"); 889 + MODULE_AUTHOR("Zhou Wang"); 890 + MODULE_AUTHOR("Zhiyong Cai"); 891 + MODULE_DESCRIPTION("Hisilicon Nand Flash Controller Driver");
+10 -19
drivers/mtd/nand/jz4740_nand.c
··· 69 69 70 70 int selected_bank; 71 71 72 - struct jz_nand_platform_data *pdata; 72 + struct gpio_desc *busy_gpio; 73 73 bool is_reading; 74 74 }; 75 75 ··· 131 131 static int jz_nand_dev_ready(struct mtd_info *mtd) 132 132 { 133 133 struct jz_nand *nand = mtd_to_jz_nand(mtd); 134 - return gpio_get_value_cansleep(nand->pdata->busy_gpio); 134 + return gpiod_get_value_cansleep(nand->busy_gpio); 135 135 } 136 136 137 137 static void jz_nand_hwctl(struct mtd_info *mtd, int mode) ··· 423 423 if (ret) 424 424 goto err_free; 425 425 426 - if (pdata && gpio_is_valid(pdata->busy_gpio)) { 427 - ret = gpio_request(pdata->busy_gpio, "NAND busy pin"); 428 - if (ret) { 429 - dev_err(&pdev->dev, 430 - "Failed to request busy gpio %d: %d\n", 431 - pdata->busy_gpio, ret); 432 - goto err_iounmap_mmio; 433 - } 426 + nand->busy_gpio = devm_gpiod_get_optional(&pdev->dev, "busy", GPIOD_IN); 427 + if (IS_ERR(nand->busy_gpio)) { 428 + ret = PTR_ERR(nand->busy_gpio); 429 + dev_err(&pdev->dev, "Failed to request busy gpio %d\n", 430 + ret); 431 + goto err_iounmap_mmio; 434 432 } 435 433 436 434 mtd = &nand->mtd; ··· 452 454 chip->cmd_ctrl = jz_nand_cmd_ctrl; 453 455 chip->select_chip = jz_nand_select_chip; 454 456 455 - if (pdata && gpio_is_valid(pdata->busy_gpio)) 457 + if (nand->busy_gpio) 456 458 chip->dev_ready = jz_nand_dev_ready; 457 459 458 - nand->pdata = pdata; 459 460 platform_set_drvdata(pdev, nand); 460 461 461 462 /* We are going to autodetect NAND chips in the banks specified in the ··· 493 496 } 494 497 if (chipnr == 0) { 495 498 dev_err(&pdev->dev, "No NAND chips found\n"); 496 - goto err_gpio_busy; 499 + goto err_iounmap_mmio; 497 500 } 498 501 499 502 if (pdata && pdata->ident_callback) { ··· 530 533 nand->bank_base[bank - 1]); 531 534 } 532 535 writel(0, nand->base + JZ_REG_NAND_CTRL); 533 - err_gpio_busy: 534 - if (pdata && gpio_is_valid(pdata->busy_gpio)) 535 - gpio_free(pdata->busy_gpio); 536 536 err_iounmap_mmio: 537 537 jz_nand_iounmap_resource(nand->mem, 
nand->base); 538 538 err_free: ··· 540 546 static int jz_nand_remove(struct platform_device *pdev) 541 547 { 542 548 struct jz_nand *nand = platform_get_drvdata(pdev); 543 - struct jz_nand_platform_data *pdata = dev_get_platdata(&pdev->dev); 544 549 size_t i; 545 550 546 551 nand_release(&nand->mtd); ··· 555 562 gpio_free(JZ_GPIO_MEM_CS0 + bank - 1); 556 563 } 557 564 } 558 - if (pdata && gpio_is_valid(pdata->busy_gpio)) 559 - gpio_free(pdata->busy_gpio); 560 565 561 566 jz_nand_iounmap_resource(nand->mem, nand->base); 562 567
+21 -10
drivers/mtd/nand/nand_base.c
··· 157 157 158 158 /** 159 159 * nand_read_byte16 - [DEFAULT] read one byte endianness aware from the chip 160 - * nand_read_byte16 - [DEFAULT] read one byte endianness aware from the chip 161 160 * @mtd: MTD device structure 162 161 * 163 162 * Default read function for 16bit buswidth with endianness conversion. ··· 1750 1751 static int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip, 1751 1752 int page) 1752 1753 { 1753 - uint8_t *buf = chip->oob_poi; 1754 1754 int length = mtd->oobsize; 1755 1755 int chunk = chip->ecc.bytes + chip->ecc.prepad + chip->ecc.postpad; 1756 1756 int eccsize = chip->ecc.size; 1757 - uint8_t *bufpoi = buf; 1757 + uint8_t *bufpoi = chip->oob_poi; 1758 1758 int i, toread, sndrnd = 0, pos; 1759 1759 1760 1760 chip->cmdfunc(mtd, NAND_CMD_READ0, chip->ecc.size, page); ··· 2942 2944 __func__); 2943 2945 } 2944 2946 2947 + /** 2948 + * nand_shutdown - [MTD Interface] Finish the current NAND operation and 2949 + * prevent further operations 2950 + * @mtd: MTD device structure 2951 + */ 2952 + static void nand_shutdown(struct mtd_info *mtd) 2953 + { 2954 + nand_get_device(mtd, FL_SHUTDOWN); 2955 + } 2956 + 2945 2957 /* Set default functions */ 2946 2958 static void nand_set_defaults(struct nand_chip *chip, int busw) 2947 2959 { ··· 4036 4028 ecc->read_oob = nand_read_oob_std; 4037 4029 ecc->write_oob = nand_write_oob_std; 4038 4030 /* 4039 - * Board driver should supply ecc.size and ecc.bytes values to 4040 - * select how many bits are correctable; see nand_bch_init() 4041 - * for details. Otherwise, default to 4 bits for large page 4042 - * devices. 4031 + * Board driver should supply ecc.size and ecc.strength values 4032 + * to select how many bits are correctable. Otherwise, default 4033 + * to 4 bits for large page devices. 
4043 4034 */ 4044 4035 if (!ecc->size && (mtd->oobsize >= 64)) { 4045 4036 ecc->size = 512; 4046 - ecc->bytes = DIV_ROUND_UP(13 * ecc->strength, 8); 4037 + ecc->strength = 4; 4047 4038 } 4039 + 4040 + /* See nand_bch_init() for details. */ 4041 + ecc->bytes = DIV_ROUND_UP( 4042 + ecc->strength * fls(8 * ecc->size), 8); 4048 4043 ecc->priv = nand_bch_init(mtd, ecc->size, ecc->bytes, 4049 4044 &ecc->layout); 4050 4045 if (!ecc->priv) { 4051 4046 pr_warn("BCH ECC initialization failed!\n"); 4052 4047 BUG(); 4053 4048 } 4054 - ecc->strength = ecc->bytes * 8 / fls(8 * ecc->size); 4055 4049 break; 4056 4050 4057 4051 case NAND_ECC_NONE: ··· 4156 4146 mtd->_unlock = NULL; 4157 4147 mtd->_suspend = nand_suspend; 4158 4148 mtd->_resume = nand_resume; 4149 + mtd->_reboot = nand_shutdown; 4159 4150 mtd->_block_isreserved = nand_block_isreserved; 4160 4151 mtd->_block_isbad = nand_block_isbad; 4161 4152 mtd->_block_markbad = nand_block_markbad; ··· 4172 4161 * properly set. 4173 4162 */ 4174 4163 if (!mtd->bitflip_threshold) 4175 - mtd->bitflip_threshold = mtd->ecc_strength; 4164 + mtd->bitflip_threshold = DIV_ROUND_UP(mtd->ecc_strength * 3, 4); 4176 4165 4177 4166 /* Check, if we should skip the bad block table scan */ 4178 4167 if (chip->options & NAND_SKIP_BBTSCAN)
+1 -6
drivers/mtd/nand/nandsim.c
··· 245 245 #define STATE_DATAOUT 0x00001000 /* waiting for page data output */ 246 246 #define STATE_DATAOUT_ID 0x00002000 /* waiting for ID bytes output */ 247 247 #define STATE_DATAOUT_STATUS 0x00003000 /* waiting for status output */ 248 - #define STATE_DATAOUT_STATUS_M 0x00004000 /* waiting for multi-plane status output */ 249 248 #define STATE_DATAOUT_MASK 0x00007000 /* data output states mask */ 250 249 251 250 /* Previous operation is done, ready to accept new requests */ ··· 268 269 #define OPT_ANY 0xFFFFFFFF /* any chip supports this operation */ 269 270 #define OPT_PAGE512 0x00000002 /* 512-byte page chips */ 270 271 #define OPT_PAGE2048 0x00000008 /* 2048-byte page chips */ 271 - #define OPT_SMARTMEDIA 0x00000010 /* SmartMedia technology chips */ 272 272 #define OPT_PAGE512_8BIT 0x00000040 /* 512-byte page chips with 8-bit bus width */ 273 273 #define OPT_PAGE4096 0x00000080 /* 4096-byte page chips */ 274 274 #define OPT_LARGEPAGE (OPT_PAGE2048 | OPT_PAGE4096) /* 2048 & 4096-byte page chips */ ··· 1094 1096 return "STATE_DATAOUT_ID"; 1095 1097 case STATE_DATAOUT_STATUS: 1096 1098 return "STATE_DATAOUT_STATUS"; 1097 - case STATE_DATAOUT_STATUS_M: 1098 - return "STATE_DATAOUT_STATUS_M"; 1099 1099 case STATE_READY: 1100 1100 return "STATE_READY"; 1101 1101 case STATE_UNKNOWN: ··· 1861 1865 break; 1862 1866 1863 1867 case STATE_DATAOUT_STATUS: 1864 - case STATE_DATAOUT_STATUS_M: 1865 1868 ns->regs.count = ns->regs.num = 0; 1866 1869 break; 1867 1870 ··· 2000 2005 } 2001 2006 2002 2007 if (NS_STATE(ns->state) == STATE_DATAOUT_STATUS 2003 - || NS_STATE(ns->state) == STATE_DATAOUT_STATUS_M 2004 2008 || NS_STATE(ns->state) == STATE_DATAOUT) { 2005 2009 int row = ns->regs.row; 2006 2010 ··· 2337 2343 } 2338 2344 chip->ecc.mode = NAND_ECC_SOFT_BCH; 2339 2345 chip->ecc.size = 512; 2346 + chip->ecc.strength = bch; 2340 2347 chip->ecc.bytes = eccbytes; 2341 2348 NS_INFO("using %u-bit/%u bytes BCH ECC\n", bch, chip->ecc.size); 2342 2349 }
+9 -22
drivers/mtd/nand/omap2.c
··· 1048 1048 * @mtd: MTD device structure 1049 1049 * @mode: Read/Write mode 1050 1050 * 1051 - * When using BCH, sector size is hardcoded to 512 bytes. 1052 - * Using wrapping mode 6 both for reading and writing if ELM module not uses 1053 - * for error correction. 1054 - * On writing, 1051 + * When using BCH with SW correction (i.e. no ELM), sector size is set 1052 + * to 512 bytes and we use BCH_WRAPMODE_6 wrapping mode 1053 + * for both reading and writing with: 1055 1054 * eccsize0 = 0 (no additional protected byte in spare area) 1056 1055 * eccsize1 = 32 (skip 32 nibbles = 16 bytes per sector in spare area) 1057 1056 */ ··· 1070 1071 case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW: 1071 1072 bch_type = 0; 1072 1073 nsectors = 1; 1073 - if (mode == NAND_ECC_READ) { 1074 - wr_mode = BCH_WRAPMODE_6; 1075 - ecc_size0 = BCH_ECC_SIZE0; 1076 - ecc_size1 = BCH_ECC_SIZE1; 1077 - } else { 1078 - wr_mode = BCH_WRAPMODE_6; 1079 - ecc_size0 = BCH_ECC_SIZE0; 1080 - ecc_size1 = BCH_ECC_SIZE1; 1081 - } 1074 + wr_mode = BCH_WRAPMODE_6; 1075 + ecc_size0 = BCH_ECC_SIZE0; 1076 + ecc_size1 = BCH_ECC_SIZE1; 1082 1077 break; 1083 1078 case OMAP_ECC_BCH4_CODE_HW: 1084 1079 bch_type = 0; ··· 1090 1097 case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW: 1091 1098 bch_type = 1; 1092 1099 nsectors = 1; 1093 - if (mode == NAND_ECC_READ) { 1094 - wr_mode = BCH_WRAPMODE_6; 1095 - ecc_size0 = BCH_ECC_SIZE0; 1096 - ecc_size1 = BCH_ECC_SIZE1; 1097 - } else { 1098 - wr_mode = BCH_WRAPMODE_6; 1099 - ecc_size0 = BCH_ECC_SIZE0; 1100 - ecc_size1 = BCH_ECC_SIZE1; 1101 - } 1100 + wr_mode = BCH_WRAPMODE_6; 1101 + ecc_size0 = BCH_ECC_SIZE0; 1102 + ecc_size1 = BCH_ECC_SIZE1; 1102 1103 break; 1103 1104 case OMAP_ECC_BCH8_CODE_HW: 1104 1105 bch_type = 1;
-2
drivers/mtd/nand/sunxi_nand.c
···

 	switch (ecc->mode) {
 	case NAND_ECC_SOFT_BCH:
-		ecc->bytes = DIV_ROUND_UP(ecc->strength * fls(8 * ecc->size),
-					  8);
 		break;
 	case NAND_ECC_HW:
 		ret = sunxi_nand_hw_ecc_ctrl_init(mtd, ecc, np);
+11 -7
drivers/mtd/nftlmount.c
···
 	}

 	/* To be safer with BIOS, also use erase mark as discriminant */
-	if ((ret = nftl_read_oob(mtd, block * nftl->EraseSize +
-				 SECTORSIZE + 8, 8, &retlen,
-				 (char *)&h1) < 0)) {
+	ret = nftl_read_oob(mtd, block * nftl->EraseSize +
+			    SECTORSIZE + 8, 8, &retlen,
+			    (char *)&h1);
+	if (ret < 0) {
 		printk(KERN_WARNING "ANAND header found at 0x%x in mtd%d, but OOB data read failed (err %d)\n",
 		       block * nftl->EraseSize, nftl->mbd.mtd->index, ret);
 		continue;
···
 	}

 	/* Finally reread to check ECC */
-	if ((ret = mtd->read(mtd, block * nftl->EraseSize, SECTORSIZE,
-			     &retlen, buf) < 0)) {
+	ret = mtd->read(mtd, block * nftl->EraseSize, SECTORSIZE,
+			&retlen, buf);
+	if (ret < 0) {
 		printk(KERN_NOTICE "ANAND header found at 0x%x in mtd%d, but ECC read failed (err %d)\n",
 		       block * nftl->EraseSize, nftl->mbd.mtd->index, ret);
 		continue;
···
 	   The new DiskOnChip driver already scanned the bad block table.  Just query it.
 	   if ((i & (SECTORSIZE - 1)) == 0) {
 		/* read one sector for every SECTORSIZE of blocks */
-		if ((ret = mtd->read(nftl->mbd.mtd, block * nftl->EraseSize +
-				     i + SECTORSIZE, SECTORSIZE, &retlen,
-				     buf)) < 0) {
+		ret = mtd->read(nftl->mbd.mtd,
+				block * nftl->EraseSize + i +
+				SECTORSIZE, SECTORSIZE,
+				&retlen, buf);
+		if (ret < 0) {
 			printk(KERN_NOTICE "Read of bad sector table failed (err %d)\n",
 			       ret);
 			kfree(nftl->ReplUnitTable);
+59 -34
drivers/mtd/spi-nor/fsl-quadspi.c
···

 #define QUADSPI_BUF3CR			0x1c
 #define QUADSPI_BUF3CR_ALLMST_SHIFT	31
-#define QUADSPI_BUF3CR_ALLMST		(1 << QUADSPI_BUF3CR_ALLMST_SHIFT)
+#define QUADSPI_BUF3CR_ALLMST_MASK	(1 << QUADSPI_BUF3CR_ALLMST_SHIFT)
+#define QUADSPI_BUF3CR_ADATSZ_SHIFT	8
+#define QUADSPI_BUF3CR_ADATSZ_MASK	(0xFF << QUADSPI_BUF3CR_ADATSZ_SHIFT)

 #define QUADSPI_BFGENCR			0x20
 #define QUADSPI_BFGENCR_PAR_EN_SHIFT	16
···
 	enum fsl_qspi_devtype devtype;
 	int rxfifo;
 	int txfifo;
+	int ahb_buf_size;
 };

 static struct fsl_qspi_devtype_data vybrid_data = {
 	.devtype = FSL_QUADSPI_VYBRID,
 	.rxfifo = 128,
-	.txfifo = 64
+	.txfifo = 64,
+	.ahb_buf_size = 1024
 };

 static struct fsl_qspi_devtype_data imx6sx_data = {
 	.devtype = FSL_QUADSPI_IMX6SX,
 	.rxfifo = 128,
-	.txfifo = 512
+	.txfifo = 512,
+	.ahb_buf_size = 1024
 };

 #define FSL_QSPI_MAX_CHIP	4
···
 	u32 nor_num;
 	u32 clk_rate;
 	unsigned int chip_base_addr; /* We may support two chips. */
+	bool has_second_chip;
 };

 static inline int is_vybrid_qspi(struct fsl_qspi *q)
···
 	writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF0CR);
 	writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF1CR);
 	writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF2CR);
-	writel(QUADSPI_BUF3CR_ALLMST, base + QUADSPI_BUF3CR);
+	/*
+	 * Set ADATSZ with the maximum AHB buffer size to improve the
+	 * read performance.
+	 */
+	writel(QUADSPI_BUF3CR_ALLMST_MASK | ((q->devtype_data->ahb_buf_size / 8)
+			<< QUADSPI_BUF3CR_ADATSZ_SHIFT), base + QUADSPI_BUF3CR);

 	/* We only use the buffer3 */
 	writel(0, base + QUADSPI_BUF0IND);
···
 	struct spi_nor *nor;
 	struct mtd_info *mtd;
 	int ret, i = 0;
-	bool has_second_chip = false;
 	const struct of_device_id *of_id =
 			of_match_device(fsl_qspi_dt_ids, &pdev->dev);
···
 	/* find the resources */
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "QuadSPI");
 	q->iobase = devm_ioremap_resource(dev, res);
-	if (IS_ERR(q->iobase)) {
-		ret = PTR_ERR(q->iobase);
-		goto map_failed;
-	}
+	if (IS_ERR(q->iobase))
+		return PTR_ERR(q->iobase);

 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
 					   "QuadSPI-memory");
 	q->ahb_base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(q->ahb_base)) {
-		ret = PTR_ERR(q->ahb_base);
-		goto map_failed;
-	}
+	if (IS_ERR(q->ahb_base))
+		return PTR_ERR(q->ahb_base);
+
 	q->memmap_phy = res->start;

 	/* find the clocks */
 	q->clk_en = devm_clk_get(dev, "qspi_en");
-	if (IS_ERR(q->clk_en)) {
-		ret = PTR_ERR(q->clk_en);
-		goto map_failed;
-	}
+	if (IS_ERR(q->clk_en))
+		return PTR_ERR(q->clk_en);

 	q->clk = devm_clk_get(dev, "qspi");
-	if (IS_ERR(q->clk)) {
-		ret = PTR_ERR(q->clk);
-		goto map_failed;
-	}
+	if (IS_ERR(q->clk))
+		return PTR_ERR(q->clk);

 	ret = clk_prepare_enable(q->clk_en);
 	if (ret) {
 		dev_err(dev, "can not enable the qspi_en clock\n");
-		goto map_failed;
+		return ret;
 	}

 	ret = clk_prepare_enable(q->clk);
···
 		goto irq_failed;

 	if (of_get_property(np, "fsl,qspi-has-second-chip", NULL))
-		has_second_chip = true;
+		q->has_second_chip = true;

 	/* iterate the subnodes. */
 	for_each_available_child_of_node(dev->of_node, np) {
 		char modalias[40];

 		/* skip the holes */
-		if (!has_second_chip)
+		if (!q->has_second_chip)
 			i *= 2;

 		nor = &q->nor[i];
···
 		ret = of_modalias_node(np, modalias, sizeof(modalias));
 		if (ret < 0)
-			goto map_failed;
+			goto irq_failed;

 		ret = of_property_read_u32(np, "spi-max-frequency",
 				&q->clk_rate);
 		if (ret < 0)
-			goto map_failed;
+			goto irq_failed;

 		/* set the chip address for READID */
 		fsl_qspi_set_base_addr(q, nor);

 		ret = spi_nor_scan(nor, modalias, SPI_NOR_QUAD);
 		if (ret)
-			goto map_failed;
+			goto irq_failed;

 		ppdata.of_node = np;
 		ret = mtd_device_parse_register(mtd, NULL, &ppdata, NULL, 0);
 		if (ret)
-			goto map_failed;
+			goto irq_failed;

 		/* Set the correct NOR size now. */
 		if (q->nor_size == 0) {
···

 	clk_disable(q->clk);
 	clk_disable(q->clk_en);
-	dev_info(dev, "QuadSPI SPI NOR flash driver\n");
 	return 0;

 last_init_failed:
-	for (i = 0; i < q->nor_num; i++)
+	for (i = 0; i < q->nor_num; i++) {
+		/* skip the holes */
+		if (!q->has_second_chip)
+			i *= 2;
 		mtd_device_unregister(&q->mtd[i]);
-
+	}
 irq_failed:
 	clk_disable_unprepare(q->clk);
 clk_failed:
 	clk_disable_unprepare(q->clk_en);
-map_failed:
-	dev_err(dev, "Freescale QuadSPI probe failed\n");
 	return ret;
 }
···
 	struct fsl_qspi *q = platform_get_drvdata(pdev);
 	int i;

-	for (i = 0; i < q->nor_num; i++)
+	for (i = 0; i < q->nor_num; i++) {
+		/* skip the holes */
+		if (!q->has_second_chip)
+			i *= 2;
 		mtd_device_unregister(&q->mtd[i]);
+	}

 	/* disable the hardware */
 	writel(QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR);
···

 	clk_unprepare(q->clk);
 	clk_unprepare(q->clk_en);
+	return 0;
+}
+
+static int fsl_qspi_suspend(struct platform_device *pdev, pm_message_t state)
+{
+	return 0;
+}
+
+static int fsl_qspi_resume(struct platform_device *pdev)
+{
+	struct fsl_qspi *q = platform_get_drvdata(pdev);
+
+	fsl_qspi_nor_setup(q);
+	fsl_qspi_set_map_addr(q);
+	fsl_qspi_nor_setup_last(q);
+
 	return 0;
 }
···
 	},
 	.probe          = fsl_qspi_probe,
 	.remove		= fsl_qspi_remove,
+	.suspend	= fsl_qspi_suspend,
+	.resume		= fsl_qspi_resume,
 };
 module_platform_driver(fsl_qspi_driver);
+55 -8
drivers/mtd/spi-nor/spi-nor.c
···
 	/* GigaDevice */
 	{ "gd25q32", INFO(0xc84016, 0, 64 * 1024,  64, SECT_4K) },
 	{ "gd25q64", INFO(0xc84017, 0, 64 * 1024, 128, SECT_4K) },
+	{ "gd25q128", INFO(0xc84018, 0, 64 * 1024, 256, SECT_4K) },

 	/* Intel/Numonyx -- xxxs33b */
 	{ "160s33b",  INFO(0x898911, 0, 64 * 1024,  32, 0) },
···
 	{ "mx66l1g55g",  INFO(0xc2261b, 0, 64 * 1024, 2048, SPI_NOR_QUAD_READ) },

 	/* Micron */
-	{ "n25q032",	 INFO(0x20ba16, 0, 64 * 1024,   64, 0) },
-	{ "n25q064",	 INFO(0x20ba17, 0, 64 * 1024,  128, 0) },
-	{ "n25q128a11",  INFO(0x20bb18, 0, 64 * 1024,  256, 0) },
-	{ "n25q128a13",  INFO(0x20ba18, 0, 64 * 1024,  256, 0) },
-	{ "n25q256a",    INFO(0x20ba19, 0, 64 * 1024,  512, SECT_4K) },
-	{ "n25q512a",    INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K) },
-	{ "n25q512ax3",  INFO(0x20ba20, 0, 64 * 1024, 1024, USE_FSR) },
-	{ "n25q00",      INFO(0x20ba21, 0, 64 * 1024, 2048, USE_FSR) },
+	{ "n25q032",	 INFO(0x20ba16, 0, 64 * 1024,   64, SPI_NOR_QUAD_READ) },
+	{ "n25q064",	 INFO(0x20ba17, 0, 64 * 1024,  128, SPI_NOR_QUAD_READ) },
+	{ "n25q128a11",  INFO(0x20bb18, 0, 64 * 1024,  256, SPI_NOR_QUAD_READ) },
+	{ "n25q128a13",  INFO(0x20ba18, 0, 64 * 1024,  256, SPI_NOR_QUAD_READ) },
+	{ "n25q256a",    INFO(0x20ba19, 0, 64 * 1024,  512, SECT_4K | SPI_NOR_QUAD_READ) },
+	{ "n25q512a",    INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
+	{ "n25q512ax3",  INFO(0x20ba20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
+	{ "n25q00",      INFO(0x20ba21, 0, 64 * 1024, 2048, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },

 	/* PMC */
 	{ "pm25lv512",   INFO(0,        0, 32 * 1024,    2, SECT_4K_PMC) },
···
 	return 0;
 }

+static int micron_quad_enable(struct spi_nor *nor)
+{
+	int ret;
+	u8 val;
+
+	ret = nor->read_reg(nor, SPINOR_OP_RD_EVCR, &val, 1);
+	if (ret < 0) {
+		dev_err(nor->dev, "error %d reading EVCR\n", ret);
+		return ret;
+	}
+
+	write_enable(nor);
+
+	/* set EVCR, enable quad I/O */
+	nor->cmd_buf[0] = val & ~EVCR_QUAD_EN_MICRON;
+	ret = nor->write_reg(nor, SPINOR_OP_WD_EVCR, nor->cmd_buf, 1, 0);
+	if (ret < 0) {
+		dev_err(nor->dev, "error while writing EVCR register\n");
+		return ret;
+	}
+
+	ret = spi_nor_wait_till_ready(nor);
+	if (ret)
+		return ret;
+
+	/* read EVCR and check it */
+	ret = nor->read_reg(nor, SPINOR_OP_RD_EVCR, &val, 1);
+	if (ret < 0) {
+		dev_err(nor->dev, "error %d reading EVCR\n", ret);
+		return ret;
+	}
+	if (val & EVCR_QUAD_EN_MICRON) {
+		dev_err(nor->dev, "Micron EVCR Quad bit not clear\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int set_quad_mode(struct spi_nor *nor, struct flash_info *info)
 {
 	int status;
···
 		status = macronix_quad_enable(nor);
 		if (status) {
 			dev_err(nor->dev, "Macronix quad-read not enabled\n");
+			return -EINVAL;
+		}
+		return status;
+	case CFI_MFR_ST:
+		status = micron_quad_enable(nor);
+		if (status) {
+			dev_err(nor->dev, "Micron quad-read not enabled\n");
 			return -EINVAL;
 		}
 		return status;
-5
fs/jffs2/compr_rubin.c
···
 	return bit;
 }

-static inline int pulledbits(struct pushpull *pp)
-{
-	return pp->ofs;
-}
-

 static void init_rubin(struct rubin_state *rs, int div, int *bits)
 {
+5
fs/jffs2/scan.c
···
 		sumlen = c->sector_size - je32_to_cpu(sm->offset);
 		sumptr = buf + buf_size - sumlen;

+		/* sm->offset maybe wrong but MAGIC maybe right */
+		if (sumlen > c->sector_size)
+			goto full_scan;
+
 		/* Now, make sure the summary itself is available */
 		if (sumlen > buf_size) {
 			/* Need to kmalloc for this. */
···
 		}
 	}

+full_scan:
 	buf_ofs = jeb->offset;

 	if (!buf_size) {
+1
include/linux/mtd/mtd.h
···
 	int (*_block_markbad) (struct mtd_info *mtd, loff_t ofs);
 	int (*_suspend) (struct mtd_info *mtd);
 	void (*_resume) (struct mtd_info *mtd);
+	void (*_reboot) (struct mtd_info *mtd);
 	/*
 	 * If the driver is something smart, like UBI, it may need to maintain
 	 * its own reference counting. The below functions are only for driver.
+7
include/linux/mtd/spi-nor.h
···

 /* Used for Spansion flashes only. */
 #define SPINOR_OP_BRWR		0x17	/* Bank register write */

+/* Used for Micron flashes only. */
+#define SPINOR_OP_RD_EVCR	0x65	/* Read EVCR register */
+#define SPINOR_OP_WD_EVCR	0x61	/* Write EVCR register */
+
 /* Status Register bits. */
 #define SR_WIP			1	/* Write in progress */
 #define SR_WEL			2	/* Write enable latch */
···
 #define SR_SRWD			0x80	/* SR write protect */

 #define SR_QUAD_EN_MX		0x40	/* Macronix Quad I/O */
+
+/* Enhanced Volatile Configuration Register bits */
+#define EVCR_QUAD_EN_MICRON	0x80	/* Micron Quad I/O */

 /* Flag Status Register bits */
 #define FSR_READY		0x80