Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-linus-20140610' of git://git.infradead.org/linux-mtd

Pull MTD updates from Brian Norris:
- refactor m25p80.c driver for use as a general SPI NOR framework for
other drivers which may speak to SPI NOR flash without providing full
SPI support (i.e., not part of drivers/spi/)
- new Freescale QuadSPI driver (utilizing new SPI NOR framework)
- updates for the STMicro "FSM" SPI NOR driver
- fix sync/flush behavior on mtd_blkdevs
- fixup subpage write support on a few NAND drivers
- correct the MTD OOB test for odd-sized OOB areas
- add BCH-16 support for OMAP NAND
- fix warnings and trivial refactoring
- utilize new ECC DT bindings in pxa3xx NAND driver
- new LPDDR NVM driver
- address a few assorted bugs caught by Coverity
- add new imx6sx support for GPMI NAND
- use a bounce buffer for NAND when non-DMA-able buffers are used

* tag 'for-linus-20140610' of git://git.infradead.org/linux-mtd: (77 commits)
mtd: gpmi: add gpmi support for imx6sx
mtd: maps: remove check for CONFIG_MTD_SUPERH_RESERVE
mtd: bf5xx_nand: use the managed version of kzalloc
mtd: pxa3xx_nand: make the driver work on big-endian systems
mtd: nand: omap: fix omap_calculate_ecc_bch() for-loop error
mtd: nand: r852: correct write_buf loop bounds
mtd: nand_bbt: handle error case for nand_create_badblock_pattern()
mtd: nand_bbt: remove unused variable
mtd: maps: sc520cdp: fix warnings
mtd: slram: fix unused variable warning
mtd: pfow: remove unused variable
mtd: lpddr: fix Kconfig dependency, for I/O accessors
mtd: nand: pxa3xx: Add supported ECC strength and step size to the DT binding
mtd: nand: pxa3xx: Use ECC strength and step size devicetree binding
mtd: nand: pxa3xx: Clean pxa_ecc_init() error handling
mtd: nand: Warn the user if the selected ECC strength is too weak
mtd: nand: omap: Documentation: How to select correct ECC scheme for your device ?
mtd: nand: omap: add support for BCH16_ECC - NAND driver updates
mtd: nand: omap: add support for BCH16_ECC - ELM driver updates
mtd: nand: omap: add support for BCH16_ECC - GPMC driver updates
...

+3760 -1619
+35
Documentation/devicetree/bindings/mtd/fsl-quadspi.txt
···
+ * Freescale Quad Serial Peripheral Interface (QuadSPI)
+ 
+ Required properties:
+   - compatible : Should be "fsl,vf610-qspi"
+   - reg : the first contains the register location and length,
+           the second contains the memory mapping address and length
+   - reg-names: Should contain the reg names "QuadSPI" and "QuadSPI-memory"
+   - interrupts : Should contain the interrupt for the device
+   - clocks : The clocks needed by the QuadSPI controller
+   - clock-names : the name of the clocks
+ 
+ Optional properties:
+   - fsl,qspi-has-second-chip: The controller has two buses, bus A and bus B.
+                               Each bus can be connected with two NOR flashes.
+                               Most of the time, each bus only has one NOR flash
+                               connected, this is the default case.
+                               But if there are two NOR flashes connected to the
+                               bus, you should enable this property.
+                               (Please check the board's schematic.)
+ 
+ Example:
+ 
+ qspi0: quadspi@40044000 {
+ 	compatible = "fsl,vf610-qspi";
+ 	reg = <0x40044000 0x1000>, <0x20000000 0x10000000>;
+ 	reg-names = "QuadSPI", "QuadSPI-memory";
+ 	interrupts = <0 24 IRQ_TYPE_LEVEL_HIGH>;
+ 	clocks = <&clks VF610_CLK_QSPI0_EN>,
+ 		<&clks VF610_CLK_QSPI0>;
+ 	clock-names = "qspi_en", "qspi";
+ 
+ 	flash0: s25fl128s@0 {
+ 		....
+ 	};
+ };
+45
Documentation/devicetree/bindings/mtd/gpmc-nand.txt
···
  	"ham1"		1-bit Hamming ecc code
  	"bch4"		4-bit BCH ecc code
  	"bch8"		8-bit BCH ecc code
+ 	"bch16"		16-bit BCH ECC code
+ 	Refer below "How to select correct ECC scheme for your device ?"
  
  - ti,nand-xfer-type:		A string setting the data transfer type. One of:
···
  };
  };
  
+ How to select correct ECC scheme for your device ?
+ --------------------------------------------------
+ A higher ECC scheme usually means better protection against bit-flips and
+ increased system lifetime. However, the selection of an ECC scheme also
+ depends on various other factors, like:
+ 
+ (1) Support of built-in hardware engines.
+     Some legacy OMAP SoCs do not have an ELM hardware engine, so those SoCs
+     cannot support ecc-schemes with hardware error-correction (BCHx_HW).
+     However, such SoCs can use ecc-schemes with a software library for
+     error-correction (BCHx_HW_DETECTION_SW). The error correction capability
+     of the software library remains equivalent to its hardware counterpart,
+     but there is a slight CPU penalty when too many bit-flips are detected
+     during reads.
+ 
+ (2) Device parameters like OOBSIZE.
+     Another factor which governs the selection of an ecc-scheme is oob-size.
+     Higher ECC schemes require more OOB/Spare area to store the ECC syndrome,
+     so the device should have enough free bytes available in its OOB/Spare
+     area to accommodate ECC for the entire page. In general the following
+     expression helps in determining if a given device can accommodate an
+     ECC syndrome:
+ 	"2 + (PAGESIZE / 512) * ECC_BYTES" <= OOBSIZE
+ 	where
+ 		OOBSIZE		number of bytes in OOB/spare area
+ 		PAGESIZE	number of bytes in main-area of device page
+ 		ECC_BYTES	number of ECC bytes generated to protect
+ 				512 bytes of data, which is:
+ 			'3' for HAM1_xx ecc schemes
+ 			'7' for BCH4_xx ecc schemes
+ 			'14' for BCH8_xx ecc schemes
+ 			'26' for BCH16_xx ecc schemes
+ 
+     Example(a): For a device with PAGESIZE = 2048 and OOBSIZE = 64, trying
+ 		to use the BCH16 (ECC_BYTES = 26) ecc-scheme:
+ 		Number of ECC bytes per page = (2 + (2048 / 512) * 26) = 106 B
+ 		which is greater than the capacity of the NAND device
+ 		(OOBSIZE = 64). Hence, BCH16 cannot be supported on this
+ 		device, but it can probably use a lower ecc-scheme like BCH8.
+ 
+     Example(b): For a device with PAGESIZE = 2048 and OOBSIZE = 128, trying
+ 		to use the BCH16 (ECC_BYTES = 26) ecc-scheme:
+ 		Number of ECC bytes per page = (2 + (2048 / 512) * 26) = 106 B
+ 		which can be accommodated in the OOB/Spare area of this device
+ 		(OOBSIZE = 128). So this device can use the BCH16 ecc-scheme.
+2 -2
Documentation/devicetree/bindings/mtd/m25p80.txt
···
  	representing partitions.
  - compatible : Should be the manufacturer and the name of the chip. Bear in mind
  	       the DT binding is not Linux-only, but in case of Linux, see the
- 	       "m25p_ids" table in drivers/mtd/devices/m25p80.c for the list of
- 	       supported chips.
+ 	       "spi_nor_ids" table in drivers/mtd/spi-nor/spi-nor.c for the list
+ 	       of supported chips.
  - reg : Chip-Select number
  - spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at
+8
Documentation/devicetree/bindings/mtd/pxa3xx-nand.txt
···
  - num-cs:		Number of chipselect lines to use
  - nand-on-flash-bbt:	boolean to enable on flash bbt option if
  			not present false
+ - nand-ecc-strength:	number of bits to correct per ECC step
+ - nand-ecc-step-size:	number of data bytes covered by a single ECC step
+ 
+ The following ECC strength and step size are currently supported:
+ 
+  - nand-ecc-strength = <1>, nand-ecc-step-size = <512>
+  - nand-ecc-strength = <4>, nand-ecc-step-size = <512>
+  - nand-ecc-strength = <8>, nand-ecc-step-size = <512>
  
  Example:
+62
Documentation/mtd/spi-nor.txt
···
+ SPI NOR framework
+ ============================================
+ 
+ Part I - Why do we need this framework?
+ ---------------------------------------
+ 
+ SPI bus controllers (drivers/spi/) only deal with streams of bytes; the bus
+ controller operates agnostic of the specific device attached. However, some
+ controllers (such as Freescale's QuadSPI controller) cannot easily handle
+ arbitrary streams of bytes, but rather are designed specifically for SPI NOR.
+ 
+ In particular, Freescale's QuadSPI controller must know the NOR commands to
+ find the right LUT sequence. Unfortunately, the SPI subsystem has no notion of
+ opcodes, addresses, or data payloads; a SPI controller simply knows to send or
+ receive bytes (Tx and Rx). Therefore, we must define a new layering scheme under
+ which the controller driver is aware of the opcodes, addressing, and other
+ details of the SPI NOR protocol.
+ 
+ Part II - How does the framework work?
+ --------------------------------------
+ 
+ This framework just adds a new layer between the MTD and the SPI bus driver.
+ With this new layer, the SPI NOR controller driver does not depend on the
+ m25p80 code anymore.
+ 
+ Before this framework, the layering was:
+ 
+ 	MTD
+ 	------------------------
+ 	m25p80
+ 	------------------------
+ 	SPI bus driver
+ 	------------------------
+ 	SPI NOR chip
+ 
+ With this framework, the layering becomes:
+ 
+ 	MTD
+ 	------------------------
+ 	SPI NOR framework
+ 	------------------------
+ 	m25p80
+ 	------------------------
+ 	SPI bus driver
+ 	------------------------
+ 	SPI NOR chip
+ 
+ With a dedicated SPI NOR controller driver (such as Freescale QuadSPI), it
+ looks like:
+ 
+ 	MTD
+ 	------------------------
+ 	SPI NOR framework
+ 	------------------------
+ 	fsl-quadspi
+ 	------------------------
+ 	SPI NOR chip
+ 
+ Part III - How can drivers use the framework?
+ ---------------------------------------------
+ 
+ The main API is spi_nor_scan(). Before calling this hook, a driver should
+ initialize the necessary fields of struct spi_nor. Please see
+ drivers/mtd/spi-nor/spi-nor.c for details. Please also refer to fsl-quadspi.c
+ when you want to write a new driver for a SPI NOR controller.
+15
arch/arm/mach-omap2/gpmc.c
···
  #define GPMC_ECC_BCH_RESULT_1	0x244	/* not available on OMAP2 */
  #define GPMC_ECC_BCH_RESULT_2	0x248	/* not available on OMAP2 */
  #define GPMC_ECC_BCH_RESULT_3	0x24c	/* not available on OMAP2 */
+ #define GPMC_ECC_BCH_RESULT_4	0x300	/* not available on OMAP2 */
+ #define GPMC_ECC_BCH_RESULT_5	0x304	/* not available on OMAP2 */
+ #define GPMC_ECC_BCH_RESULT_6	0x308	/* not available on OMAP2 */
  
  /* GPMC ECC control settings */
  #define GPMC_ECC_CTRL_ECCCLEAR		0x100
···
  			GPMC_BCH_SIZE * i;
  		reg->gpmc_bch_result3[i] = gpmc_base + GPMC_ECC_BCH_RESULT_3 +
  			GPMC_BCH_SIZE * i;
+ 		reg->gpmc_bch_result4[i] = gpmc_base + GPMC_ECC_BCH_RESULT_4 +
+ 			i * GPMC_BCH_SIZE;
+ 		reg->gpmc_bch_result5[i] = gpmc_base + GPMC_ECC_BCH_RESULT_5 +
+ 			i * GPMC_BCH_SIZE;
+ 		reg->gpmc_bch_result6[i] = gpmc_base + GPMC_ECC_BCH_RESULT_6 +
+ 			i * GPMC_BCH_SIZE;
  	}
  }
···
  		else
  			gpmc_nand_data->ecc_opt =
  				OMAP_ECC_BCH8_CODE_HW_DETECTION_SW;
+ 	else if (!strcmp(s, "bch16"))
+ 		if (gpmc_nand_data->elm_of_node)
+ 			gpmc_nand_data->ecc_opt =
+ 				OMAP_ECC_BCH16_CODE_HW;
+ 		else
+ 			pr_err("%s: BCH16 requires ELM support\n", __func__);
  	else
  		pr_err("%s: ti,nand-ecc-opt invalid value\n", __func__);
  
+2
drivers/mtd/Kconfig
···
  
  source "drivers/mtd/lpddr/Kconfig"
  
+ source "drivers/mtd/spi-nor/Kconfig"
+ 
  source "drivers/mtd/ubi/Kconfig"
  
  endif # MTD
+1
drivers/mtd/Makefile
···
  
  obj-y		+= chips/ lpddr/ maps/ devices/ nand/ onenand/ tests/
  
+ obj-$(CONFIG_MTD_SPI_NOR)	+= spi-nor/
  obj-$(CONFIG_MTD_UBI)		+= ubi/
+8 -8
drivers/mtd/chips/Kconfig
···
  	  in the programming of OTP bits will waste them.
  
  config MTD_CFI_INTELEXT
- 	tristate "Support for Intel/Sharp flash chips"
+ 	tristate "Support for CFI command set 0001 (Intel/Sharp chips)"
  	depends on MTD_GEN_PROBE
  	select MTD_CFI_UTIL
  	help
  	  The Common Flash Interface defines a number of different command
  	  sets which a CFI-compliant chip may claim to implement. This code
- 	  provides support for one of those command sets, used on Intel
- 	  StrataFlash and other parts.
+ 	  provides support for command set 0001, used on Intel StrataFlash
+ 	  and other parts.
  
  config MTD_CFI_AMDSTD
- 	tristate "Support for AMD/Fujitsu/Spansion flash chips"
+ 	tristate "Support for CFI command set 0002 (AMD/Fujitsu/Spansion chips)"
  	depends on MTD_GEN_PROBE
  	select MTD_CFI_UTIL
  	help
  	  The Common Flash Interface defines a number of different command
  	  sets which a CFI-compliant chip may claim to implement. This code
- 	  provides support for one of those command sets, used on chips
- 	  including the AMD Am29LV320.
+ 	  provides support for command set 0002, used on chips including
+ 	  the AMD Am29LV320.
  
  config MTD_CFI_STAA
- 	tristate "Support for ST (Advanced Architecture) flash chips"
+ 	tristate "Support for CFI command set 0020 (ST (Advanced Architecture) chips)"
  	depends on MTD_GEN_PROBE
  	select MTD_CFI_UTIL
  	help
  	  The Common Flash Interface defines a number of different command
  	  sets which a CFI-compliant chip may claim to implement. This code
- 	  provides support for one of those command sets.
+ 	  provides support for command set 0020.
  
  config MTD_CFI_UTIL
  	tristate
+2 -2
drivers/mtd/chips/cfi_cmdset_0020.c
···
  		chipnum++;
  
  		if (chipnum >= cfi->numchips)
- 		break;
+ 			break;
  	}
  }
···
  		chipnum++;
  
  		if (chipnum >= cfi->numchips)
- 		break;
+ 			break;
  	}
  }
  return 0;
+1 -1
drivers/mtd/chips/cfi_util.c
···
  		chipnum++;
  
  		if (chipnum >= cfi->numchips)
- 		break;
+ 			break;
  	}
  }
  
+2 -2
drivers/mtd/devices/Kconfig
···
  
  config MTD_M25P80
  	tristate "Support most SPI Flash chips (AT26DF, M25P, W25X, ...)"
- 	depends on SPI_MASTER
+ 	depends on SPI_MASTER && MTD_SPI_NOR
  	help
  	  This enables access to most modern SPI flash chips, used for
  	  program and data storage. Series supported include Atmel AT26DF,
···
  
  config MTD_ST_SPI_FSM
  	tristate "ST Microelectronics SPI FSM Serial Flash Controller"
- 	depends on ARM || SH
+ 	depends on ARCH_STI
  	help
  	  This provides an MTD device driver for the ST Microelectronics
  	  SPI Fast Sequence Mode (FSM) Serial Flash Controller and support
+38
drivers/mtd/devices/elm.c
···
  		val = cpu_to_be32(*(u32 *) &ecc[0]) >> 12;
  		elm_write_reg(info, offset, val);
  		break;
+ 	case BCH16_ECC:
+ 		val = cpu_to_be32(*(u32 *) &ecc[22]);
+ 		elm_write_reg(info, offset, val);
+ 		offset += 4;
+ 		val = cpu_to_be32(*(u32 *) &ecc[18]);
+ 		elm_write_reg(info, offset, val);
+ 		offset += 4;
+ 		val = cpu_to_be32(*(u32 *) &ecc[14]);
+ 		elm_write_reg(info, offset, val);
+ 		offset += 4;
+ 		val = cpu_to_be32(*(u32 *) &ecc[10]);
+ 		elm_write_reg(info, offset, val);
+ 		offset += 4;
+ 		val = cpu_to_be32(*(u32 *) &ecc[6]);
+ 		elm_write_reg(info, offset, val);
+ 		offset += 4;
+ 		val = cpu_to_be32(*(u32 *) &ecc[2]);
+ 		elm_write_reg(info, offset, val);
+ 		offset += 4;
+ 		val = cpu_to_be32(*(u32 *) &ecc[0]) >> 16;
+ 		elm_write_reg(info, offset, val);
+ 		break;
  	default:
  		pr_err("invalid config bch_type\n");
  	}
···
  	return 0;
  }
  
+ #ifdef CONFIG_PM_SLEEP
  /**
   * elm_context_save
   * saves ELM configurations to preserve them across Hardware powered-down
···
  	for (i = 0; i < ERROR_VECTOR_MAX; i++) {
  		offset = i * SYNDROME_FRAGMENT_REG_SIZE;
  		switch (bch_type) {
+ 		case BCH16_ECC:
+ 			regs->elm_syndrome_fragment_6[i] = elm_read_reg(info,
+ 					ELM_SYNDROME_FRAGMENT_6 + offset);
+ 			regs->elm_syndrome_fragment_5[i] = elm_read_reg(info,
+ 					ELM_SYNDROME_FRAGMENT_5 + offset);
+ 			regs->elm_syndrome_fragment_4[i] = elm_read_reg(info,
+ 					ELM_SYNDROME_FRAGMENT_4 + offset);
  		case BCH8_ECC:
  			regs->elm_syndrome_fragment_3[i] = elm_read_reg(info,
  					ELM_SYNDROME_FRAGMENT_3 + offset);
···
  	for (i = 0; i < ERROR_VECTOR_MAX; i++) {
  		offset = i * SYNDROME_FRAGMENT_REG_SIZE;
  		switch (bch_type) {
+ 		case BCH16_ECC:
+ 			elm_write_reg(info, ELM_SYNDROME_FRAGMENT_6 + offset,
+ 					regs->elm_syndrome_fragment_6[i]);
+ 			elm_write_reg(info, ELM_SYNDROME_FRAGMENT_5 + offset,
+ 					regs->elm_syndrome_fragment_5[i]);
+ 			elm_write_reg(info, ELM_SYNDROME_FRAGMENT_4 + offset,
+ 					regs->elm_syndrome_fragment_4[i]);
  		case BCH8_ECC:
  			elm_write_reg(info, ELM_SYNDROME_FRAGMENT_3 + offset,
  					regs->elm_syndrome_fragment_3[i]);
···
  	elm_context_restore(info);
  	return 0;
  }
+ #endif
  
  static SIMPLE_DEV_PM_OPS(elm_pm_ops, elm_suspend, elm_resume);
  
+109 -1200
drivers/mtd/devices/m25p80.c
···
  #include <linux/errno.h>
  #include <linux/module.h>
  #include <linux/device.h>
- #include <linux/interrupt.h>
- #include <linux/mutex.h>
- #include <linux/math64.h>
- #include <linux/slab.h>
- #include <linux/sched.h>
- #include <linux/mod_devicetable.h>
  
- #include <linux/mtd/cfi.h>
  #include <linux/mtd/mtd.h>
  #include <linux/mtd/partitions.h>
- #include <linux/of_platform.h>
  
  #include <linux/spi/spi.h>
  #include <linux/spi/flash.h>
+ #include <linux/mtd/spi-nor.h>
  
- /* Flash opcodes. */
- #define	OPCODE_WREN		0x06	/* Write enable */
- #define	OPCODE_RDSR		0x05	/* Read status register */
- #define	OPCODE_WRSR		0x01	/* Write status register 1 byte */
- #define	OPCODE_NORM_READ	0x03	/* Read data bytes (low frequency) */
- #define	OPCODE_FAST_READ	0x0b	/* Read data bytes (high frequency) */
- #define	OPCODE_DUAL_READ	0x3b	/* Read data bytes (Dual SPI) */
- #define	OPCODE_QUAD_READ	0x6b	/* Read data bytes (Quad SPI) */
- #define	OPCODE_PP		0x02	/* Page program (up to 256 bytes) */
- #define	OPCODE_BE_4K		0x20	/* Erase 4KiB block */
- #define	OPCODE_BE_4K_PMC	0xd7	/* Erase 4KiB block on PMC chips */
- #define	OPCODE_BE_32K		0x52	/* Erase 32KiB block */
- #define	OPCODE_CHIP_ERASE	0xc7	/* Erase whole flash chip */
- #define	OPCODE_SE		0xd8	/* Sector erase (usually 64KiB) */
- #define	OPCODE_RDID		0x9f	/* Read JEDEC ID */
- #define	OPCODE_RDCR		0x35	/* Read configuration register */
- 
- /* 4-byte address opcodes - used on Spansion and some Macronix flashes. */
- #define	OPCODE_NORM_READ_4B	0x13	/* Read data bytes (low frequency) */
- #define	OPCODE_FAST_READ_4B	0x0c	/* Read data bytes (high frequency) */
- #define	OPCODE_DUAL_READ_4B	0x3c	/* Read data bytes (Dual SPI) */
- #define	OPCODE_QUAD_READ_4B	0x6c	/* Read data bytes (Quad SPI) */
- #define	OPCODE_PP_4B		0x12	/* Page program (up to 256 bytes) */
- #define	OPCODE_SE_4B		0xdc	/* Sector erase (usually 64KiB) */
- 
- /* Used for SST flashes only. */
- #define	OPCODE_BP		0x02	/* Byte program */
- #define	OPCODE_WRDI		0x04	/* Write disable */
- #define	OPCODE_AAI_WP		0xad	/* Auto address increment word program */
- 
- /* Used for Macronix and Winbond flashes. */
- #define	OPCODE_EN4B		0xb7	/* Enter 4-byte mode */
- #define	OPCODE_EX4B		0xe9	/* Exit 4-byte mode */
- 
- /* Used for Spansion flashes only. */
- #define	OPCODE_BRWR		0x17	/* Bank register write */
- 
- /* Status Register bits. */
- #define	SR_WIP			1	/* Write in progress */
- #define	SR_WEL			2	/* Write enable latch */
- /* meaning of other SR_* bits may differ between vendors */
- #define	SR_BP0			4	/* Block protect 0 */
- #define	SR_BP1			8	/* Block protect 1 */
- #define	SR_BP2			0x10	/* Block protect 2 */
- #define	SR_SRWD			0x80	/* SR write protect */
- 
- #define SR_QUAD_EN_MX		0x40	/* Macronix Quad I/O */
- 
- /* Configuration Register bits. */
- #define CR_QUAD_EN_SPAN		0x2	/* Spansion Quad I/O */
- 
- /* Define max times to check status register before we give up. */
- #define	MAX_READY_WAIT_JIFFIES	(40 * HZ)	/* M25P16 specs 40s max chip erase */
  #define	MAX_CMD_SIZE		6
- 
- #define JEDEC_MFR(_jedec_id)	((_jedec_id) >> 16)
- 
- /****************************************************************************/
- 
- enum read_type {
- 	M25P80_NORMAL = 0,
- 	M25P80_FAST,
- 	M25P80_DUAL,
- 	M25P80_QUAD,
- };
- 
  struct m25p {
  	struct spi_device	*spi;
- 	struct mutex		lock;
+ 	struct spi_nor		spi_nor;
  	struct mtd_info		mtd;
- 	u16			page_size;
- 	u16			addr_width;
- 	u8			erase_opcode;
- 	u8			read_opcode;
- 	u8			program_opcode;
- 	u8			*command;
- 	enum read_type		flash_read;
+ 	u8			command[MAX_CMD_SIZE];
  };
  
- static inline struct m25p *mtd_to_m25p(struct mtd_info *mtd)
+ static int m25p80_read_reg(struct spi_nor *nor, u8 code, u8 *val, int len)
  {
- 	return container_of(mtd, struct m25p, mtd);
- }
- 
- /****************************************************************************/
- 
- /*
-  * Internal helper functions
-  */
- 
- /*
-  * Read the status register, returning its value in the location
-  * Return the status register value.
-  * Returns negative if error occurred.
-  */
- static int read_sr(struct m25p *flash)
- {
- 	ssize_t retval;
- 	u8 code = OPCODE_RDSR;
- 	u8 val;
- 
- 	retval = spi_write_then_read(flash->spi, &code, 1, &val, 1);
- 
- 	if (retval < 0) {
- 		dev_err(&flash->spi->dev, "error %d reading SR\n",
- 				(int) retval);
- 		return retval;
- 	}
- 
- 	return val;
- }
- 
- /*
-  * Read configuration register, returning its value in the
-  * location. Return the configuration register value.
-  * Returns negative if error occured.
-  */
- static int read_cr(struct m25p *flash)
- {
- 	u8 code = OPCODE_RDCR;
+ 	struct m25p *flash = nor->priv;
+ 	struct spi_device *spi = flash->spi;
  	int ret;
- 	u8 val;
  
- 	ret = spi_write_then_read(flash->spi, &code, 1, &val, 1);
- 	if (ret < 0) {
- 		dev_err(&flash->spi->dev, "error %d reading CR\n", ret);
- 		return ret;
- 	}
+ 	ret = spi_write_then_read(spi, &code, 1, val, len);
+ 	if (ret < 0)
+ 		dev_err(&spi->dev, "error %d reading %x\n", ret, code);
  
- 	return val;
+ 	return ret;
  }
  
- /*
-  * Write status register 1 byte
-  * Returns negative if error occurred.
-  */
- static int write_sr(struct m25p *flash, u8 val)
- {
- 	flash->command[0] = OPCODE_WRSR;
- 	flash->command[1] = val;
- 
- 	return spi_write(flash->spi, flash->command, 2);
- }
- 
- /*
-  * Set write enable latch with Write Enable command.
-  * Returns negative if error occurred.
-  */
- static inline int write_enable(struct m25p *flash)
- {
- 	u8	code = OPCODE_WREN;
- 
- 	return spi_write_then_read(flash->spi, &code, 1, NULL, 0);
- }
- 
- /*
-  * Send write disble instruction to the chip.
-  */
- static inline int write_disable(struct m25p *flash)
- {
- 	u8	code = OPCODE_WRDI;
- 
- 	return spi_write_then_read(flash->spi, &code, 1, NULL, 0);
- }
- 
- /*
-  * Enable/disable 4-byte addressing mode.
-  */
- static inline int set_4byte(struct m25p *flash, u32 jedec_id, int enable)
- {
- 	int status;
- 	bool need_wren = false;
- 
- 	switch (JEDEC_MFR(jedec_id)) {
- 	case CFI_MFR_ST: /* Micron, actually */
- 		/* Some Micron need WREN command; all will accept it */
- 		need_wren = true;
- 	case CFI_MFR_MACRONIX:
- 	case 0xEF /* winbond */:
- 		if (need_wren)
- 			write_enable(flash);
- 
- 		flash->command[0] = enable ? OPCODE_EN4B : OPCODE_EX4B;
- 		status = spi_write(flash->spi, flash->command, 1);
- 
- 		if (need_wren)
- 			write_disable(flash);
- 
- 		return status;
- 	default:
- 		/* Spansion style */
- 		flash->command[0] = OPCODE_BRWR;
- 		flash->command[1] = enable << 7;
- 		return spi_write(flash->spi, flash->command, 2);
- 	}
- }
- 
- /*
-  * Service routine to read status register until ready, or timeout occurs.
-  * Returns non-zero if error.
-  */
- static int wait_till_ready(struct m25p *flash)
- {
- 	unsigned long deadline;
- 	int sr;
- 
- 	deadline = jiffies + MAX_READY_WAIT_JIFFIES;
- 
- 	do {
- 		if ((sr = read_sr(flash)) < 0)
- 			break;
- 		else if (!(sr & SR_WIP))
- 			return 0;
- 
- 		cond_resched();
- 
- 	} while (!time_after_eq(jiffies, deadline));
- 
- 	return 1;
- }
- 
- /*
-  * Write status Register and configuration register with 2 bytes
-  * The first byte will be written to the status register, while the
-  * second byte will be written to the configuration register.
-  * Return negative if error occured.
-  */
- static int write_sr_cr(struct m25p *flash, u16 val)
- {
- 	flash->command[0] = OPCODE_WRSR;
- 	flash->command[1] = val & 0xff;
- 	flash->command[2] = (val >> 8);
- 
- 	return spi_write(flash->spi, flash->command, 3);
- }
- 
- static int macronix_quad_enable(struct m25p *flash)
- {
- 	int ret, val;
- 	u8 cmd[2];
- 	cmd[0] = OPCODE_WRSR;
- 
- 	val = read_sr(flash);
- 	cmd[1] = val | SR_QUAD_EN_MX;
- 	write_enable(flash);
- 
- 	spi_write(flash->spi, &cmd, 2);
- 
- 	if (wait_till_ready(flash))
- 		return 1;
- 
- 	ret = read_sr(flash);
- 	if (!(ret > 0 && (ret & SR_QUAD_EN_MX))) {
- 		dev_err(&flash->spi->dev, "Macronix Quad bit not set\n");
- 		return -EINVAL;
- 	}
- 
- 	return 0;
- }
- 
- static int spansion_quad_enable(struct m25p *flash)
- {
- 	int ret;
- 	int quad_en = CR_QUAD_EN_SPAN << 8;
- 
- 	write_enable(flash);
- 
- 	ret = write_sr_cr(flash, quad_en);
- 	if (ret < 0) {
- 		dev_err(&flash->spi->dev,
- 			"error while writing configuration register\n");
- 		return -EINVAL;
- 	}
- 
- 	/* read back and check it */
- 	ret = read_cr(flash);
- 	if (!(ret > 0 && (ret & CR_QUAD_EN_SPAN))) {
- 		dev_err(&flash->spi->dev, "Spansion Quad bit not set\n");
- 		return -EINVAL;
- 	}
- 
- 	return 0;
- }
- 
- static int set_quad_mode(struct m25p *flash, u32 jedec_id)
- {
- 	int status;
- 
- 	switch (JEDEC_MFR(jedec_id)) {
- 	case CFI_MFR_MACRONIX:
- 		status = macronix_quad_enable(flash);
- 		if (status) {
- 			dev_err(&flash->spi->dev,
- 				"Macronix quad-read not enabled\n");
- 			return -EINVAL;
- 		}
- 		return status;
- 	default:
- 		status = spansion_quad_enable(flash);
- 		if (status) {
- 			dev_err(&flash->spi->dev,
- 				"Spansion quad-read not enabled\n");
- 			return -EINVAL;
- 		}
- 		return status;
- 	}
- }
- 
- /*
-  * Erase the whole flash memory
-  *
-  * Returns 0 if successful, non-zero otherwise.
-  */
- static int erase_chip(struct m25p *flash)
- {
- 	pr_debug("%s: %s %lldKiB\n", dev_name(&flash->spi->dev), __func__,
- 			(long long)(flash->mtd.size >> 10));
- 
- 	/* Wait until finished previous write command. */
- 	if (wait_till_ready(flash))
- 		return 1;
- 
- 	/* Send write enable, then erase commands. */
- 	write_enable(flash);
- 
- 	/* Set up command buffer. */
- 	flash->command[0] = OPCODE_CHIP_ERASE;
- 
- 	spi_write(flash->spi, flash->command, 1);
- 
- 	return 0;
- }
- 
- static void m25p_addr2cmd(struct m25p *flash, unsigned int addr, u8 *cmd)
+ static void m25p_addr2cmd(struct spi_nor *nor, unsigned int addr, u8 *cmd)
  {
  	/* opcode is in cmd[0] */
- 	cmd[1] = addr >> (flash->addr_width * 8 - 8);
- 	cmd[2] = addr >> (flash->addr_width * 8 - 16);
- 	cmd[3] = addr >> (flash->addr_width * 8 - 24);
- 	cmd[4] = addr >> (flash->addr_width * 8 - 32);
+ 	cmd[1] = addr >> (nor->addr_width * 8 - 8);
+ 	cmd[2] = addr >> (nor->addr_width * 8 - 16);
+ 	cmd[3] = addr >> (nor->addr_width * 8 - 24);
+ 	cmd[4] = addr >> (nor->addr_width * 8 - 32);
  }
  
- static int m25p_cmdsz(struct m25p *flash)
+ static int m25p_cmdsz(struct spi_nor *nor)
  {
- 	return 1 + flash->addr_width;
+ 	return 1 + nor->addr_width;
  }
  
- /*
-  * Erase one sector of flash memory at offset ``offset'' which is any
-  * address within the sector which should be erased.
-  *
-  * Returns 0 if successful, non-zero otherwise.
-  */
- static int erase_sector(struct m25p *flash, u32 offset)
+ static int m25p80_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len,
+ 			int wr_en)
  {
- 	pr_debug("%s: %s %dKiB at 0x%08x\n", dev_name(&flash->spi->dev),
- 			__func__, flash->mtd.erasesize / 1024, offset);
+ 	struct m25p *flash = nor->priv;
+ 	struct spi_device *spi = flash->spi;
  
- 	/* Wait until finished previous write command. */
- 	if (wait_till_ready(flash))
- 		return 1;
+ 	flash->command[0] = opcode;
+ 	if (buf)
+ 		memcpy(&flash->command[1], buf, len);
  
- 	/* Send write enable, then erase commands. */
- 	write_enable(flash);
- 
- 	/* Set up command buffer. */
- 	flash->command[0] = flash->erase_opcode;
- 	m25p_addr2cmd(flash, offset, flash->command);
- 
- 	spi_write(flash->spi, flash->command, m25p_cmdsz(flash));
- 
- 	return 0;
+ 	return spi_write(spi, flash->command, len + 1);
  }
  
- /****************************************************************************/
- 
- /*
-  * MTD implementation
-  */
- 
- /*
-  * Erase an address range on the flash chip. The address range may extend
-  * one or more erase sectors. Return an error is there is a problem erasing.
-  */
- static int m25p80_erase(struct mtd_info *mtd, struct erase_info *instr)
+ static void m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
+ 			size_t *retlen, const u_char *buf)
  {
- 	struct m25p *flash = mtd_to_m25p(mtd);
- 	u32 addr,len;
- 	uint32_t rem;
+ 	struct m25p *flash = nor->priv;
+ 	struct spi_device *spi = flash->spi;
+ 	struct spi_transfer t[2] = {};
+ 	struct spi_message m;
+ 	int cmd_sz = m25p_cmdsz(nor);
  
- 	pr_debug("%s: %s at 0x%llx, len %lld\n", dev_name(&flash->spi->dev),
- 			__func__, (long long)instr->addr,
- 			(long long)instr->len);
+ 	spi_message_init(&m);
  
- 	div_u64_rem(instr->len, mtd->erasesize, &rem);
- 	if (rem)
- 		return -EINVAL;
+ 	if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second)
+ 		cmd_sz = 1;
  
- 	addr = instr->addr;
- 	len = instr->len;
+ 	flash->command[0] = nor->program_opcode;
+ 	m25p_addr2cmd(nor, to, flash->command);
  
- 	mutex_lock(&flash->lock);
+ 	t[0].tx_buf = flash->command;
+ 	t[0].len = cmd_sz;
+ 	spi_message_add_tail(&t[0], &m);
  
- 	/* whole-chip erase? */
- 	if (len == flash->mtd.size) {
- 		if (erase_chip(flash)) {
- 			instr->state = MTD_ERASE_FAILED;
- 			mutex_unlock(&flash->lock);
- 			return -EIO;
- 		}
+ 	t[1].tx_buf = buf;
+ 	t[1].len = len;
+ 	spi_message_add_tail(&t[1], &m);
  
- 		/* REVISIT in some cases we could speed up erasing large regions
- 		 * by using OPCODE_SE instead of OPCODE_BE_4K.  We may have set up
- 		 * to use "small sector erase", but that's not always optimal.
- 		 */
+ 	spi_sync(spi, &m);
  
- 	/* "sector"-at-a-time erase */
- 	} else {
- 		while (len) {
- 			if (erase_sector(flash, addr)) {
- 				instr->state = MTD_ERASE_FAILED;
- 				mutex_unlock(&flash->lock);
- 				return -EIO;
- 			}
- 
- 			addr += mtd->erasesize;
- 			len -= mtd->erasesize;
- 		}
- 	}
- 
- 	mutex_unlock(&flash->lock);
- 
- 	instr->state = MTD_ERASE_DONE;
- 	mtd_erase_callback(instr);
- 
- 	return 0;
+ 	*retlen += m.actual_length - cmd_sz;
  }
  
- /*
-  * Dummy Cycle calculation for different type of read.
-  * It can be used to support more commands with
-  * different dummy cycle requirements.
-  */
- static inline int m25p80_dummy_cycles_read(struct m25p *flash)
+ static inline unsigned int m25p80_rx_nbits(struct spi_nor *nor)
  {
- 	switch (flash->flash_read) {
- 	case M25P80_FAST:
- 	case M25P80_DUAL:
- 	case M25P80_QUAD:
- 		return 1;
- 	case M25P80_NORMAL:
- 		return 0;
- 	default:
- 		dev_err(&flash->spi->dev, "No valid read type supported\n");
- 		return -1;
- 	}
- }
- 
- static inline unsigned int m25p80_rx_nbits(const struct m25p *flash)
- {
- 	switch (flash->flash_read) {
- 	case M25P80_DUAL:
+ 	switch (nor->flash_read) {
+ 	case SPI_NOR_DUAL:
  		return 2;
- 	case M25P80_QUAD:
+ 	case SPI_NOR_QUAD:
  		return 4;
  	default:
  		return 0;
···
  }
  
  /*
-  * Read an address range from the flash chip.  The address range
+  * Read an address range from the nor chip.  The address range
   * may be any size provided it is within the physical boundaries.
   */
- static int m25p80_read(struct mtd_info *mtd, loff_t from, size_t len,
- 	size_t *retlen, u_char *buf)
+ static int m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
+ 			size_t *retlen, u_char *buf)
  {
- 	struct m25p *flash = mtd_to_m25p(mtd);
+ 	struct m25p *flash = nor->priv;
+ 	struct spi_device *spi = flash->spi;
  	struct spi_transfer t[2];
  	struct spi_message m;
- 	uint8_t opcode;
- 	int dummy;
+ 	int dummy = nor->read_dummy;
+ 	int ret;
  
- 	pr_debug("%s: %s from 0x%08x, len %zd\n", dev_name(&flash->spi->dev),
- 			__func__, (u32)from, len);
+ 	/* Wait till previous write/erase is done. */
+ 	ret = nor->wait_till_ready(nor);
+ 	if (ret)
+ 		return ret;
  
  	spi_message_init(&m);
  	memset(t, 0, (sizeof t));
  
- 	dummy = m25p80_dummy_cycles_read(flash);
- 	if (dummy < 0) {
- 		dev_err(&flash->spi->dev, "No valid read command supported\n");
- 		return -EINVAL;
- 	}
+ 	flash->command[0] = nor->read_opcode;
+ 	m25p_addr2cmd(nor, from, flash->command);
  
  	t[0].tx_buf = flash->command;
- 	t[0].len = m25p_cmdsz(flash) + dummy;
+ 	t[0].len = m25p_cmdsz(nor) + dummy;
  	spi_message_add_tail(&t[0], &m);
  
  	t[1].rx_buf = buf;
- 	t[1].rx_nbits = m25p80_rx_nbits(flash);
+ 	t[1].rx_nbits = m25p80_rx_nbits(nor);
  	t[1].len = len;
  	spi_message_add_tail(&t[1], &m);
  
- 	mutex_lock(&flash->lock);
+ 	spi_sync(spi, &m);
  
- 	/* Wait till previous write/erase is done. */
- 	if (wait_till_ready(flash)) {
- 		/* REVISIT status return?? */
- 		mutex_unlock(&flash->lock);
- 		return 1;
- 	}
- 
- 	/* Set up the write data buffer. */
- 	opcode = flash->read_opcode;
- 	flash->command[0] = opcode;
- 	m25p_addr2cmd(flash, from, flash->command);
- 
- 	spi_sync(flash->spi, &m);
- 
- 	*retlen = m.actual_length - m25p_cmdsz(flash) - dummy;
- 
- 	mutex_unlock(&flash->lock);
- 
+ 	*retlen = m.actual_length - m25p_cmdsz(nor) - dummy;
  	return 0;
  }
  
- /*
-  * Write an address range to the flash chip.  Data must be written in
-  * FLASH_PAGESIZE chunks.  The address range may be any size provided
-  * it is within the physical boundaries.
-  */
- static int m25p80_write(struct mtd_info *mtd, loff_t to, size_t len,
- 	size_t *retlen, const u_char *buf)
+ static int m25p80_erase(struct spi_nor *nor, loff_t offset)
  {
- 	struct m25p *flash = mtd_to_m25p(mtd);
- 	u32 page_offset, page_size;
- 	struct spi_transfer t[2];
- 	struct spi_message m;
+ 	struct m25p *flash = nor->priv;
+ 	int ret;
  
- 	pr_debug("%s: %s to 0x%08x, len %zd\n", dev_name(&flash->spi->dev),
- 			__func__, (u32)to, len);
- 
- 	spi_message_init(&m);
- 	memset(t, 0, (sizeof t));
- 
- 	t[0].tx_buf = flash->command;
- 	t[0].len = m25p_cmdsz(flash);
- 	spi_message_add_tail(&t[0], &m);
- 
- 	t[1].tx_buf = buf;
- 	spi_message_add_tail(&t[1], &m);
- 
- 	mutex_lock(&flash->lock);
+ 	dev_dbg(nor->dev, "%dKiB at 0x%08x\n",
+ 		flash->mtd.erasesize / 1024, (u32)offset);
  
  	/* Wait until finished previous write command. */
- 	if (wait_till_ready(flash)) {
- 		mutex_unlock(&flash->lock);
- 		return 1;
- 	}
+ 	ret = nor->wait_till_ready(nor);
+ 	if (ret)
+ 		return ret;
  
- 	write_enable(flash);
+ 	/* Send write enable, then erase commands. */
+ 	ret = nor->write_reg(nor, SPINOR_OP_WREN, NULL, 0, 0);
+ 	if (ret)
+ 		return ret;
  
- 	/* Set up the opcode in the write buffer.
*/ 214 - flash->command[0] = flash->program_opcode; 215 - m25p_addr2cmd(flash, to, flash->command); 565 + /* Set up command buffer. */ 566 + flash->command[0] = nor->erase_opcode; 567 + m25p_addr2cmd(nor, offset, flash->command); 216 568 217 - page_offset = to & (flash->page_size - 1); 218 - 219 - /* do all the bytes fit onto one page? */ 220 - if (page_offset + len <= flash->page_size) { 221 - t[1].len = len; 222 - 223 - spi_sync(flash->spi, &m); 224 - 225 - *retlen = m.actual_length - m25p_cmdsz(flash); 226 - } else { 227 - u32 i; 228 - 229 - /* the size of data remaining on the first page */ 230 - page_size = flash->page_size - page_offset; 231 - 232 - t[1].len = page_size; 233 - spi_sync(flash->spi, &m); 234 - 235 - *retlen = m.actual_length - m25p_cmdsz(flash); 236 - 237 - /* write everything in flash->page_size chunks */ 238 - for (i = page_size; i < len; i += page_size) { 239 - page_size = len - i; 240 - if (page_size > flash->page_size) 241 - page_size = flash->page_size; 242 - 243 - /* write the next page to flash */ 244 - m25p_addr2cmd(flash, to + i, flash->command); 245 - 246 - t[1].tx_buf = buf + i; 247 - t[1].len = page_size; 248 - 249 - wait_till_ready(flash); 250 - 251 - write_enable(flash); 252 - 253 - spi_sync(flash->spi, &m); 254 - 255 - *retlen += m.actual_length - m25p_cmdsz(flash); 256 - } 257 - } 258 - 259 - mutex_unlock(&flash->lock); 569 + spi_write(flash->spi, flash->command, m25p_cmdsz(nor)); 260 570 261 571 return 0; 262 572 } 263 - 264 - static int sst_write(struct mtd_info *mtd, loff_t to, size_t len, 265 - size_t *retlen, const u_char *buf) 266 - { 267 - struct m25p *flash = mtd_to_m25p(mtd); 268 - struct spi_transfer t[2]; 269 - struct spi_message m; 270 - size_t actual; 271 - int cmd_sz, ret; 272 - 273 - pr_debug("%s: %s to 0x%08x, len %zd\n", dev_name(&flash->spi->dev), 274 - __func__, (u32)to, len); 275 - 276 - spi_message_init(&m); 277 - memset(t, 0, (sizeof t)); 278 - 279 - t[0].tx_buf = flash->command; 280 - t[0].len = 
m25p_cmdsz(flash); 281 - spi_message_add_tail(&t[0], &m); 282 - 283 - t[1].tx_buf = buf; 284 - spi_message_add_tail(&t[1], &m); 285 - 286 - mutex_lock(&flash->lock); 287 - 288 - /* Wait until finished previous write command. */ 289 - ret = wait_till_ready(flash); 290 - if (ret) 291 - goto time_out; 292 - 293 - write_enable(flash); 294 - 295 - actual = to % 2; 296 - /* Start write from odd address. */ 297 - if (actual) { 298 - flash->command[0] = OPCODE_BP; 299 - m25p_addr2cmd(flash, to, flash->command); 300 - 301 - /* write one byte. */ 302 - t[1].len = 1; 303 - spi_sync(flash->spi, &m); 304 - ret = wait_till_ready(flash); 305 - if (ret) 306 - goto time_out; 307 - *retlen += m.actual_length - m25p_cmdsz(flash); 308 - } 309 - to += actual; 310 - 311 - flash->command[0] = OPCODE_AAI_WP; 312 - m25p_addr2cmd(flash, to, flash->command); 313 - 314 - /* Write out most of the data here. */ 315 - cmd_sz = m25p_cmdsz(flash); 316 - for (; actual < len - 1; actual += 2) { 317 - t[0].len = cmd_sz; 318 - /* write two bytes. */ 319 - t[1].len = 2; 320 - t[1].tx_buf = buf + actual; 321 - 322 - spi_sync(flash->spi, &m); 323 - ret = wait_till_ready(flash); 324 - if (ret) 325 - goto time_out; 326 - *retlen += m.actual_length - cmd_sz; 327 - cmd_sz = 1; 328 - to += 2; 329 - } 330 - write_disable(flash); 331 - ret = wait_till_ready(flash); 332 - if (ret) 333 - goto time_out; 334 - 335 - /* Write out trailing byte if it exists. 
*/ 336 - if (actual != len) { 337 - write_enable(flash); 338 - flash->command[0] = OPCODE_BP; 339 - m25p_addr2cmd(flash, to, flash->command); 340 - t[0].len = m25p_cmdsz(flash); 341 - t[1].len = 1; 342 - t[1].tx_buf = buf + actual; 343 - 344 - spi_sync(flash->spi, &m); 345 - ret = wait_till_ready(flash); 346 - if (ret) 347 - goto time_out; 348 - *retlen += m.actual_length - m25p_cmdsz(flash); 349 - write_disable(flash); 350 - } 351 - 352 - time_out: 353 - mutex_unlock(&flash->lock); 354 - return ret; 355 - } 356 - 357 - static int m25p80_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) 358 - { 359 - struct m25p *flash = mtd_to_m25p(mtd); 360 - uint32_t offset = ofs; 361 - uint8_t status_old, status_new; 362 - int res = 0; 363 - 364 - mutex_lock(&flash->lock); 365 - /* Wait until finished previous command */ 366 - if (wait_till_ready(flash)) { 367 - res = 1; 368 - goto err; 369 - } 370 - 371 - status_old = read_sr(flash); 372 - 373 - if (offset < flash->mtd.size-(flash->mtd.size/2)) 374 - status_new = status_old | SR_BP2 | SR_BP1 | SR_BP0; 375 - else if (offset < flash->mtd.size-(flash->mtd.size/4)) 376 - status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1; 377 - else if (offset < flash->mtd.size-(flash->mtd.size/8)) 378 - status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0; 379 - else if (offset < flash->mtd.size-(flash->mtd.size/16)) 380 - status_new = (status_old & ~(SR_BP0|SR_BP1)) | SR_BP2; 381 - else if (offset < flash->mtd.size-(flash->mtd.size/32)) 382 - status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0; 383 - else if (offset < flash->mtd.size-(flash->mtd.size/64)) 384 - status_new = (status_old & ~(SR_BP2|SR_BP0)) | SR_BP1; 385 - else 386 - status_new = (status_old & ~(SR_BP2|SR_BP1)) | SR_BP0; 387 - 388 - /* Only modify protection if it will not unlock other areas */ 389 - if ((status_new&(SR_BP2|SR_BP1|SR_BP0)) > 390 - (status_old&(SR_BP2|SR_BP1|SR_BP0))) { 391 - write_enable(flash); 392 - if (write_sr(flash, status_new) < 0) { 393 - res = 1; 
394 - goto err; 395 - } 396 - } 397 - 398 - err: mutex_unlock(&flash->lock); 399 - return res; 400 - } 401 - 402 - static int m25p80_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len) 403 - { 404 - struct m25p *flash = mtd_to_m25p(mtd); 405 - uint32_t offset = ofs; 406 - uint8_t status_old, status_new; 407 - int res = 0; 408 - 409 - mutex_lock(&flash->lock); 410 - /* Wait until finished previous command */ 411 - if (wait_till_ready(flash)) { 412 - res = 1; 413 - goto err; 414 - } 415 - 416 - status_old = read_sr(flash); 417 - 418 - if (offset+len > flash->mtd.size-(flash->mtd.size/64)) 419 - status_new = status_old & ~(SR_BP2|SR_BP1|SR_BP0); 420 - else if (offset+len > flash->mtd.size-(flash->mtd.size/32)) 421 - status_new = (status_old & ~(SR_BP2|SR_BP1)) | SR_BP0; 422 - else if (offset+len > flash->mtd.size-(flash->mtd.size/16)) 423 - status_new = (status_old & ~(SR_BP2|SR_BP0)) | SR_BP1; 424 - else if (offset+len > flash->mtd.size-(flash->mtd.size/8)) 425 - status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0; 426 - else if (offset+len > flash->mtd.size-(flash->mtd.size/4)) 427 - status_new = (status_old & ~(SR_BP0|SR_BP1)) | SR_BP2; 428 - else if (offset+len > flash->mtd.size-(flash->mtd.size/2)) 429 - status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0; 430 - else 431 - status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1; 432 - 433 - /* Only modify protection if it will not lock other areas */ 434 - if ((status_new&(SR_BP2|SR_BP1|SR_BP0)) < 435 - (status_old&(SR_BP2|SR_BP1|SR_BP0))) { 436 - write_enable(flash); 437 - if (write_sr(flash, status_new) < 0) { 438 - res = 1; 439 - goto err; 440 - } 441 - } 442 - 443 - err: mutex_unlock(&flash->lock); 444 - return res; 445 - } 446 - 447 - /****************************************************************************/ 448 - 449 - /* 450 - * SPI device driver setup and teardown 451 - */ 452 - 453 - struct flash_info { 454 - /* JEDEC id zero means "no ID" (most older chips); otherwise it has 455 - * a high byte 
of zero plus three data bytes: the manufacturer id, 456 - * then a two byte device id. 457 - */ 458 - u32 jedec_id; 459 - u16 ext_id; 460 - 461 - /* The size listed here is what works with OPCODE_SE, which isn't 462 - * necessarily called a "sector" by the vendor. 463 - */ 464 - unsigned sector_size; 465 - u16 n_sectors; 466 - 467 - u16 page_size; 468 - u16 addr_width; 469 - 470 - u16 flags; 471 - #define SECT_4K 0x01 /* OPCODE_BE_4K works uniformly */ 472 - #define M25P_NO_ERASE 0x02 /* No erase command needed */ 473 - #define SST_WRITE 0x04 /* use SST byte programming */ 474 - #define M25P_NO_FR 0x08 /* Can't do fastread */ 475 - #define SECT_4K_PMC 0x10 /* OPCODE_BE_4K_PMC works uniformly */ 476 - #define M25P80_DUAL_READ 0x20 /* Flash supports Dual Read */ 477 - #define M25P80_QUAD_READ 0x40 /* Flash supports Quad Read */ 478 - }; 479 - 480 - #define INFO(_jedec_id, _ext_id, _sector_size, _n_sectors, _flags) \ 481 - ((kernel_ulong_t)&(struct flash_info) { \ 482 - .jedec_id = (_jedec_id), \ 483 - .ext_id = (_ext_id), \ 484 - .sector_size = (_sector_size), \ 485 - .n_sectors = (_n_sectors), \ 486 - .page_size = 256, \ 487 - .flags = (_flags), \ 488 - }) 489 - 490 - #define CAT25_INFO(_sector_size, _n_sectors, _page_size, _addr_width, _flags) \ 491 - ((kernel_ulong_t)&(struct flash_info) { \ 492 - .sector_size = (_sector_size), \ 493 - .n_sectors = (_n_sectors), \ 494 - .page_size = (_page_size), \ 495 - .addr_width = (_addr_width), \ 496 - .flags = (_flags), \ 497 - }) 498 - 499 - /* NOTE: double check command sets and memory organization when you add 500 - * more flash chips. This current list focusses on newer chips, which 501 - * have been converging on command sets which including JEDEC ID. 
502 - */ 503 - static const struct spi_device_id m25p_ids[] = { 504 - /* Atmel -- some are (confusingly) marketed as "DataFlash" */ 505 - { "at25fs010", INFO(0x1f6601, 0, 32 * 1024, 4, SECT_4K) }, 506 - { "at25fs040", INFO(0x1f6604, 0, 64 * 1024, 8, SECT_4K) }, 507 - 508 - { "at25df041a", INFO(0x1f4401, 0, 64 * 1024, 8, SECT_4K) }, 509 - { "at25df321a", INFO(0x1f4701, 0, 64 * 1024, 64, SECT_4K) }, 510 - { "at25df641", INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K) }, 511 - 512 - { "at26f004", INFO(0x1f0400, 0, 64 * 1024, 8, SECT_4K) }, 513 - { "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16, SECT_4K) }, 514 - { "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32, SECT_4K) }, 515 - { "at26df321", INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K) }, 516 - 517 - { "at45db081d", INFO(0x1f2500, 0, 64 * 1024, 16, SECT_4K) }, 518 - 519 - /* EON -- en25xxx */ 520 - { "en25f32", INFO(0x1c3116, 0, 64 * 1024, 64, SECT_4K) }, 521 - { "en25p32", INFO(0x1c2016, 0, 64 * 1024, 64, 0) }, 522 - { "en25q32b", INFO(0x1c3016, 0, 64 * 1024, 64, 0) }, 523 - { "en25p64", INFO(0x1c2017, 0, 64 * 1024, 128, 0) }, 524 - { "en25q64", INFO(0x1c3017, 0, 64 * 1024, 128, SECT_4K) }, 525 - { "en25qh256", INFO(0x1c7019, 0, 64 * 1024, 512, 0) }, 526 - 527 - /* ESMT */ 528 - { "f25l32pa", INFO(0x8c2016, 0, 64 * 1024, 64, SECT_4K) }, 529 - 530 - /* Everspin */ 531 - { "mr25h256", CAT25_INFO( 32 * 1024, 1, 256, 2, M25P_NO_ERASE | M25P_NO_FR) }, 532 - { "mr25h10", CAT25_INFO(128 * 1024, 1, 256, 3, M25P_NO_ERASE | M25P_NO_FR) }, 533 - 534 - /* GigaDevice */ 535 - { "gd25q32", INFO(0xc84016, 0, 64 * 1024, 64, SECT_4K) }, 536 - { "gd25q64", INFO(0xc84017, 0, 64 * 1024, 128, SECT_4K) }, 537 - 538 - /* Intel/Numonyx -- xxxs33b */ 539 - { "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) }, 540 - { "320s33b", INFO(0x898912, 0, 64 * 1024, 64, 0) }, 541 - { "640s33b", INFO(0x898913, 0, 64 * 1024, 128, 0) }, 542 - 543 - /* Macronix */ 544 - { "mx25l2005a", INFO(0xc22012, 0, 64 * 1024, 4, SECT_4K) }, 545 - { "mx25l4005a", INFO(0xc22013, 
0, 64 * 1024, 8, SECT_4K) }, 546 - { "mx25l8005", INFO(0xc22014, 0, 64 * 1024, 16, 0) }, 547 - { "mx25l1606e", INFO(0xc22015, 0, 64 * 1024, 32, SECT_4K) }, 548 - { "mx25l3205d", INFO(0xc22016, 0, 64 * 1024, 64, 0) }, 549 - { "mx25l3255e", INFO(0xc29e16, 0, 64 * 1024, 64, SECT_4K) }, 550 - { "mx25l6405d", INFO(0xc22017, 0, 64 * 1024, 128, 0) }, 551 - { "mx25l12805d", INFO(0xc22018, 0, 64 * 1024, 256, 0) }, 552 - { "mx25l12855e", INFO(0xc22618, 0, 64 * 1024, 256, 0) }, 553 - { "mx25l25635e", INFO(0xc22019, 0, 64 * 1024, 512, 0) }, 554 - { "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) }, 555 - { "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, M25P80_QUAD_READ) }, 556 - { "mx66l1g55g", INFO(0xc2261b, 0, 64 * 1024, 2048, M25P80_QUAD_READ) }, 557 - 558 - /* Micron */ 559 - { "n25q064", INFO(0x20ba17, 0, 64 * 1024, 128, 0) }, 560 - { "n25q128a11", INFO(0x20bb18, 0, 64 * 1024, 256, 0) }, 561 - { "n25q128a13", INFO(0x20ba18, 0, 64 * 1024, 256, 0) }, 562 - { "n25q256a", INFO(0x20ba19, 0, 64 * 1024, 512, SECT_4K) }, 563 - { "n25q512a", INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K) }, 564 - 565 - /* PMC */ 566 - { "pm25lv512", INFO(0, 0, 32 * 1024, 2, SECT_4K_PMC) }, 567 - { "pm25lv010", INFO(0, 0, 32 * 1024, 4, SECT_4K_PMC) }, 568 - { "pm25lq032", INFO(0x7f9d46, 0, 64 * 1024, 64, SECT_4K) }, 569 - 570 - /* Spansion -- single (large) sector size only, at least 571 - * for the chips listed here (without boot sectors). 
572 - */ 573 - { "s25sl032p", INFO(0x010215, 0x4d00, 64 * 1024, 64, 0) }, 574 - { "s25sl064p", INFO(0x010216, 0x4d00, 64 * 1024, 128, 0) }, 575 - { "s25fl256s0", INFO(0x010219, 0x4d00, 256 * 1024, 128, 0) }, 576 - { "s25fl256s1", INFO(0x010219, 0x4d01, 64 * 1024, 512, M25P80_DUAL_READ | M25P80_QUAD_READ) }, 577 - { "s25fl512s", INFO(0x010220, 0x4d00, 256 * 1024, 256, M25P80_DUAL_READ | M25P80_QUAD_READ) }, 578 - { "s70fl01gs", INFO(0x010221, 0x4d00, 256 * 1024, 256, 0) }, 579 - { "s25sl12800", INFO(0x012018, 0x0300, 256 * 1024, 64, 0) }, 580 - { "s25sl12801", INFO(0x012018, 0x0301, 64 * 1024, 256, 0) }, 581 - { "s25fl129p0", INFO(0x012018, 0x4d00, 256 * 1024, 64, 0) }, 582 - { "s25fl129p1", INFO(0x012018, 0x4d01, 64 * 1024, 256, 0) }, 583 - { "s25sl004a", INFO(0x010212, 0, 64 * 1024, 8, 0) }, 584 - { "s25sl008a", INFO(0x010213, 0, 64 * 1024, 16, 0) }, 585 - { "s25sl016a", INFO(0x010214, 0, 64 * 1024, 32, 0) }, 586 - { "s25sl032a", INFO(0x010215, 0, 64 * 1024, 64, 0) }, 587 - { "s25sl064a", INFO(0x010216, 0, 64 * 1024, 128, 0) }, 588 - { "s25fl008k", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) }, 589 - { "s25fl016k", INFO(0xef4015, 0, 64 * 1024, 32, SECT_4K) }, 590 - { "s25fl064k", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) }, 591 - 592 - /* SST -- large erase sizes are "overlays", "sectors" are 4K */ 593 - { "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024, 8, SECT_4K | SST_WRITE) }, 594 - { "sst25vf080b", INFO(0xbf258e, 0, 64 * 1024, 16, SECT_4K | SST_WRITE) }, 595 - { "sst25vf016b", INFO(0xbf2541, 0, 64 * 1024, 32, SECT_4K | SST_WRITE) }, 596 - { "sst25vf032b", INFO(0xbf254a, 0, 64 * 1024, 64, SECT_4K | SST_WRITE) }, 597 - { "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128, SECT_4K) }, 598 - { "sst25wf512", INFO(0xbf2501, 0, 64 * 1024, 1, SECT_4K | SST_WRITE) }, 599 - { "sst25wf010", INFO(0xbf2502, 0, 64 * 1024, 2, SECT_4K | SST_WRITE) }, 600 - { "sst25wf020", INFO(0xbf2503, 0, 64 * 1024, 4, SECT_4K | SST_WRITE) }, 601 - { "sst25wf040", INFO(0xbf2504, 0, 64 * 1024, 8, 
SECT_4K | SST_WRITE) }, 602 - 603 - /* ST Microelectronics -- newer production may have feature updates */ 604 - { "m25p05", INFO(0x202010, 0, 32 * 1024, 2, 0) }, 605 - { "m25p10", INFO(0x202011, 0, 32 * 1024, 4, 0) }, 606 - { "m25p20", INFO(0x202012, 0, 64 * 1024, 4, 0) }, 607 - { "m25p40", INFO(0x202013, 0, 64 * 1024, 8, 0) }, 608 - { "m25p80", INFO(0x202014, 0, 64 * 1024, 16, 0) }, 609 - { "m25p16", INFO(0x202015, 0, 64 * 1024, 32, 0) }, 610 - { "m25p32", INFO(0x202016, 0, 64 * 1024, 64, 0) }, 611 - { "m25p64", INFO(0x202017, 0, 64 * 1024, 128, 0) }, 612 - { "m25p128", INFO(0x202018, 0, 256 * 1024, 64, 0) }, 613 - { "n25q032", INFO(0x20ba16, 0, 64 * 1024, 64, 0) }, 614 - 615 - { "m25p05-nonjedec", INFO(0, 0, 32 * 1024, 2, 0) }, 616 - { "m25p10-nonjedec", INFO(0, 0, 32 * 1024, 4, 0) }, 617 - { "m25p20-nonjedec", INFO(0, 0, 64 * 1024, 4, 0) }, 618 - { "m25p40-nonjedec", INFO(0, 0, 64 * 1024, 8, 0) }, 619 - { "m25p80-nonjedec", INFO(0, 0, 64 * 1024, 16, 0) }, 620 - { "m25p16-nonjedec", INFO(0, 0, 64 * 1024, 32, 0) }, 621 - { "m25p32-nonjedec", INFO(0, 0, 64 * 1024, 64, 0) }, 622 - { "m25p64-nonjedec", INFO(0, 0, 64 * 1024, 128, 0) }, 623 - { "m25p128-nonjedec", INFO(0, 0, 256 * 1024, 64, 0) }, 624 - 625 - { "m45pe10", INFO(0x204011, 0, 64 * 1024, 2, 0) }, 626 - { "m45pe80", INFO(0x204014, 0, 64 * 1024, 16, 0) }, 627 - { "m45pe16", INFO(0x204015, 0, 64 * 1024, 32, 0) }, 628 - 629 - { "m25pe20", INFO(0x208012, 0, 64 * 1024, 4, 0) }, 630 - { "m25pe80", INFO(0x208014, 0, 64 * 1024, 16, 0) }, 631 - { "m25pe16", INFO(0x208015, 0, 64 * 1024, 32, SECT_4K) }, 632 - 633 - { "m25px16", INFO(0x207115, 0, 64 * 1024, 32, SECT_4K) }, 634 - { "m25px32", INFO(0x207116, 0, 64 * 1024, 64, SECT_4K) }, 635 - { "m25px32-s0", INFO(0x207316, 0, 64 * 1024, 64, SECT_4K) }, 636 - { "m25px32-s1", INFO(0x206316, 0, 64 * 1024, 64, SECT_4K) }, 637 - { "m25px64", INFO(0x207117, 0, 64 * 1024, 128, 0) }, 638 - 639 - /* Winbond -- w25x "blocks" are 64K, "sectors" are 4KiB */ 640 - { "w25x10", 
INFO(0xef3011, 0, 64 * 1024, 2, SECT_4K) }, 641 - { "w25x20", INFO(0xef3012, 0, 64 * 1024, 4, SECT_4K) }, 642 - { "w25x40", INFO(0xef3013, 0, 64 * 1024, 8, SECT_4K) }, 643 - { "w25x80", INFO(0xef3014, 0, 64 * 1024, 16, SECT_4K) }, 644 - { "w25x16", INFO(0xef3015, 0, 64 * 1024, 32, SECT_4K) }, 645 - { "w25x32", INFO(0xef3016, 0, 64 * 1024, 64, SECT_4K) }, 646 - { "w25q32", INFO(0xef4016, 0, 64 * 1024, 64, SECT_4K) }, 647 - { "w25q32dw", INFO(0xef6016, 0, 64 * 1024, 64, SECT_4K) }, 648 - { "w25x64", INFO(0xef3017, 0, 64 * 1024, 128, SECT_4K) }, 649 - { "w25q64", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) }, 650 - { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) }, 651 - { "w25q80", INFO(0xef5014, 0, 64 * 1024, 16, SECT_4K) }, 652 - { "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) }, 653 - { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) }, 654 - { "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K) }, 655 - 656 - /* Catalyst / On Semiconductor -- non-JEDEC */ 657 - { "cat25c11", CAT25_INFO( 16, 8, 16, 1, M25P_NO_ERASE | M25P_NO_FR) }, 658 - { "cat25c03", CAT25_INFO( 32, 8, 16, 2, M25P_NO_ERASE | M25P_NO_FR) }, 659 - { "cat25c09", CAT25_INFO( 128, 8, 32, 2, M25P_NO_ERASE | M25P_NO_FR) }, 660 - { "cat25c17", CAT25_INFO( 256, 8, 32, 2, M25P_NO_ERASE | M25P_NO_FR) }, 661 - { "cat25128", CAT25_INFO(2048, 8, 64, 2, M25P_NO_ERASE | M25P_NO_FR) }, 662 - { }, 663 - }; 664 - MODULE_DEVICE_TABLE(spi, m25p_ids); 665 - 666 - static const struct spi_device_id *jedec_probe(struct spi_device *spi) 667 - { 668 - int tmp; 669 - u8 code = OPCODE_RDID; 670 - u8 id[5]; 671 - u32 jedec; 672 - u16 ext_jedec; 673 - struct flash_info *info; 674 - 675 - /* JEDEC also defines an optional "extended device information" 676 - * string for after vendor-specific data, after the three bytes 677 - * we use here. Supporting some chips might require using it. 
678 - */ 679 - tmp = spi_write_then_read(spi, &code, 1, id, 5); 680 - if (tmp < 0) { 681 - pr_debug("%s: error %d reading JEDEC ID\n", 682 - dev_name(&spi->dev), tmp); 683 - return ERR_PTR(tmp); 684 - } 685 - jedec = id[0]; 686 - jedec = jedec << 8; 687 - jedec |= id[1]; 688 - jedec = jedec << 8; 689 - jedec |= id[2]; 690 - 691 - ext_jedec = id[3] << 8 | id[4]; 692 - 693 - for (tmp = 0; tmp < ARRAY_SIZE(m25p_ids) - 1; tmp++) { 694 - info = (void *)m25p_ids[tmp].driver_data; 695 - if (info->jedec_id == jedec) { 696 - if (info->ext_id == 0 || info->ext_id == ext_jedec) 697 - return &m25p_ids[tmp]; 698 - } 699 - } 700 - dev_err(&spi->dev, "unrecognized JEDEC id %06x\n", jedec); 701 - return ERR_PTR(-ENODEV); 702 - } 703 - 704 573 705 574 /* 706 575 * board specific setup should have ensured the SPI clock used here ··· 191 1096 */ 192 1097 static int m25p_probe(struct spi_device *spi) 193 1098 { 194 - const struct spi_device_id *id = spi_get_device_id(spi); 195 - struct flash_platform_data *data; 196 - struct m25p *flash; 197 - struct flash_info *info; 198 - unsigned i; 199 1099 struct mtd_part_parser_data ppdata; 200 - struct device_node *np = spi->dev.of_node; 1100 + struct flash_platform_data *data; 1101 + struct m25p *flash; 1102 + struct spi_nor *nor; 1103 + enum read_mode mode = SPI_NOR_NORMAL; 201 1104 int ret; 202 - 203 - /* Platform data helps sort out which chip type we have, as 204 - * well as how this board partitions it. If we don't have 205 - * a chip ID, try the JEDEC id commands; they'll work for most 206 - * newer chips, even if we don't recognize the particular chip. 
207 - */ 208 - data = dev_get_platdata(&spi->dev); 209 - if (data && data->type) { 210 - const struct spi_device_id *plat_id; 211 - 212 - for (i = 0; i < ARRAY_SIZE(m25p_ids) - 1; i++) { 213 - plat_id = &m25p_ids[i]; 214 - if (strcmp(data->type, plat_id->name)) 215 - continue; 216 - break; 217 - } 218 - 219 - if (i < ARRAY_SIZE(m25p_ids) - 1) 220 - id = plat_id; 221 - else 222 - dev_warn(&spi->dev, "unrecognized id %s\n", data->type); 223 - } 224 - 225 - info = (void *)id->driver_data; 226 - 227 - if (info->jedec_id) { 228 - const struct spi_device_id *jid; 229 - 230 - jid = jedec_probe(spi); 231 - if (IS_ERR(jid)) { 232 - return PTR_ERR(jid); 233 - } else if (jid != id) { 234 - /* 235 - * JEDEC knows better, so overwrite platform ID. We 236 - * can't trust partitions any longer, but we'll let 237 - * mtd apply them anyway, since some partitions may be 238 - * marked read-only, and we don't want to lose that 239 - * information, even if it's not 100% accurate. 240 - */ 241 - dev_warn(&spi->dev, "found %s, expected %s\n", 242 - jid->name, id->name); 243 - id = jid; 244 - info = (void *)jid->driver_data; 245 - } 246 - } 247 1105 248 1106 flash = devm_kzalloc(&spi->dev, sizeof(*flash), GFP_KERNEL); 249 1107 if (!flash) 250 1108 return -ENOMEM; 251 1109 252 - flash->command = devm_kzalloc(&spi->dev, MAX_CMD_SIZE, GFP_KERNEL); 253 - if (!flash->command) 254 - return -ENOMEM; 1110 + nor = &flash->spi_nor; 255 1111 256 - flash->spi = spi; 257 - mutex_init(&flash->lock); 1112 + /* install the hooks */ 1113 + nor->read = m25p80_read; 1114 + nor->write = m25p80_write; 1115 + nor->erase = m25p80_erase; 1116 + nor->write_reg = m25p80_write_reg; 1117 + nor->read_reg = m25p80_read_reg; 1118 + 1119 + nor->dev = &spi->dev; 1120 + nor->mtd = &flash->mtd; 1121 + nor->priv = flash; 1122 + 258 1123 spi_set_drvdata(spi, flash); 1124 + flash->mtd.priv = nor; 1125 + flash->spi = spi; 259 1126 260 - /* 261 - * Atmel, SST and Intel/Numonyx serial flash tend to power 262 - * up with the 
software protection bits set 263 - */ 1127 + if (spi->mode & SPI_RX_QUAD) 1128 + mode = SPI_NOR_QUAD; 1129 + else if (spi->mode & SPI_RX_DUAL) 1130 + mode = SPI_NOR_DUAL; 1131 + ret = spi_nor_scan(nor, spi_get_device_id(spi), mode); 1132 + if (ret) 1133 + return ret; 264 1134 265 - if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ATMEL || 266 - JEDEC_MFR(info->jedec_id) == CFI_MFR_INTEL || 267 - JEDEC_MFR(info->jedec_id) == CFI_MFR_SST) { 268 - write_enable(flash); 269 - write_sr(flash, 0); 270 - } 271 - 272 - if (data && data->name) 273 - flash->mtd.name = data->name; 274 - else 275 - flash->mtd.name = dev_name(&spi->dev); 276 - 277 - flash->mtd.type = MTD_NORFLASH; 278 - flash->mtd.writesize = 1; 279 - flash->mtd.flags = MTD_CAP_NORFLASH; 280 - flash->mtd.size = info->sector_size * info->n_sectors; 281 - flash->mtd._erase = m25p80_erase; 282 - flash->mtd._read = m25p80_read; 283 - 284 - /* flash protection support for STmicro chips */ 285 - if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ST) { 286 - flash->mtd._lock = m25p80_lock; 287 - flash->mtd._unlock = m25p80_unlock; 288 - } 289 - 290 - /* sst flash chips use AAI word program */ 291 - if (info->flags & SST_WRITE) 292 - flash->mtd._write = sst_write; 293 - else 294 - flash->mtd._write = m25p80_write; 295 - 296 - /* prefer "small sector" erase if possible */ 297 - if (info->flags & SECT_4K) { 298 - flash->erase_opcode = OPCODE_BE_4K; 299 - flash->mtd.erasesize = 4096; 300 - } else if (info->flags & SECT_4K_PMC) { 301 - flash->erase_opcode = OPCODE_BE_4K_PMC; 302 - flash->mtd.erasesize = 4096; 303 - } else { 304 - flash->erase_opcode = OPCODE_SE; 305 - flash->mtd.erasesize = info->sector_size; 306 - } 307 - 308 - if (info->flags & M25P_NO_ERASE) 309 - flash->mtd.flags |= MTD_NO_ERASE; 310 - 1135 + data = dev_get_platdata(&spi->dev); 311 1136 ppdata.of_node = spi->dev.of_node; 312 - flash->mtd.dev.parent = &spi->dev; 313 - flash->page_size = info->page_size; 314 - flash->mtd.writebufsize = flash->page_size; 315 1137 316 - if 
(np) { 317 - /* If we were instantiated by DT, use it */ 318 - if (of_property_read_bool(np, "m25p,fast-read")) 319 - flash->flash_read = M25P80_FAST; 320 - else 321 - flash->flash_read = M25P80_NORMAL; 322 - } else { 323 - /* If we weren't instantiated by DT, default to fast-read */ 324 - flash->flash_read = M25P80_FAST; 325 - } 326 - 327 - /* Some devices cannot do fast-read, no matter what DT tells us */ 328 - if (info->flags & M25P_NO_FR) 329 - flash->flash_read = M25P80_NORMAL; 330 - 331 - /* Quad/Dual-read mode takes precedence over fast/normal */ 332 - if (spi->mode & SPI_RX_QUAD && info->flags & M25P80_QUAD_READ) { 333 - ret = set_quad_mode(flash, info->jedec_id); 334 - if (ret) { 335 - dev_err(&flash->spi->dev, "quad mode not supported\n"); 336 - return ret; 337 - } 338 - flash->flash_read = M25P80_QUAD; 339 - } else if (spi->mode & SPI_RX_DUAL && info->flags & M25P80_DUAL_READ) { 340 - flash->flash_read = M25P80_DUAL; 341 - } 342 - 343 - /* Default commands */ 344 - switch (flash->flash_read) { 345 - case M25P80_QUAD: 346 - flash->read_opcode = OPCODE_QUAD_READ; 347 - break; 348 - case M25P80_DUAL: 349 - flash->read_opcode = OPCODE_DUAL_READ; 350 - break; 351 - case M25P80_FAST: 352 - flash->read_opcode = OPCODE_FAST_READ; 353 - break; 354 - case M25P80_NORMAL: 355 - flash->read_opcode = OPCODE_NORM_READ; 356 - break; 357 - default: 358 - dev_err(&flash->spi->dev, "No Read opcode defined\n"); 359 - return -EINVAL; 360 - } 361 - 362 - flash->program_opcode = OPCODE_PP; 363 - 364 - if (info->addr_width) 365 - flash->addr_width = info->addr_width; 366 - else if (flash->mtd.size > 0x1000000) { 367 - /* enable 4-byte addressing if the device exceeds 16MiB */ 368 - flash->addr_width = 4; 369 - if (JEDEC_MFR(info->jedec_id) == CFI_MFR_AMD) { 370 - /* Dedicated 4-byte command set */ 371 - switch (flash->flash_read) { 372 - case M25P80_QUAD: 373 - flash->read_opcode = OPCODE_QUAD_READ_4B; 374 - break; 375 - case M25P80_DUAL: 376 - flash->read_opcode = 
OPCODE_DUAL_READ_4B; 377 - break; 378 - case M25P80_FAST: 379 - flash->read_opcode = OPCODE_FAST_READ_4B; 380 - break; 381 - case M25P80_NORMAL: 382 - flash->read_opcode = OPCODE_NORM_READ_4B; 383 - break; 384 - } 385 - flash->program_opcode = OPCODE_PP_4B; 386 - /* No small sector erase for 4-byte command set */ 387 - flash->erase_opcode = OPCODE_SE_4B; 388 - flash->mtd.erasesize = info->sector_size; 389 - } else 390 - set_4byte(flash, info->jedec_id, 1); 391 - } else { 392 - flash->addr_width = 3; 393 - } 394 - 395 - dev_info(&spi->dev, "%s (%lld Kbytes)\n", id->name, 396 - (long long)flash->mtd.size >> 10); 397 - 398 - pr_debug("mtd .name = %s, .size = 0x%llx (%lldMiB) " 399 - ".erasesize = 0x%.8x (%uKiB) .numeraseregions = %d\n", 400 - flash->mtd.name, 401 - (long long)flash->mtd.size, (long long)(flash->mtd.size >> 20), 402 - flash->mtd.erasesize, flash->mtd.erasesize / 1024, 403 - flash->mtd.numeraseregions); 404 - 405 - if (flash->mtd.numeraseregions) 406 - for (i = 0; i < flash->mtd.numeraseregions; i++) 407 - pr_debug("mtd.eraseregions[%d] = { .offset = 0x%llx, " 408 - ".erasesize = 0x%.8x (%uKiB), " 409 - ".numblocks = %d }\n", 410 - i, (long long)flash->mtd.eraseregions[i].offset, 411 - flash->mtd.eraseregions[i].erasesize, 412 - flash->mtd.eraseregions[i].erasesize / 1024, 413 - flash->mtd.eraseregions[i].numblocks); 414 - 415 - 416 - /* partitions should match sector boundaries; and it may be good to 417 - * use readonly partitions for writeprotected sectors (BP2..BP0). 418 - */ 419 1138 return mtd_device_parse_register(&flash->mtd, NULL, &ppdata, 420 1139 data ? data->parts : NULL, 421 1140 data ? data->nr_parts : 0); ··· 250 1341 .name = "m25p80", 251 1342 .owner = THIS_MODULE, 252 1343 }, 253 - .id_table = m25p_ids, 1344 + .id_table = spi_nor_ids, 254 1345 .probe = m25p_probe, 255 1346 .remove = m25p_remove, 256 1347
+12 -32
drivers/mtd/devices/serial_flash_cmds.h
···
 #define _MTD_SERIAL_FLASH_CMDS_H
 
 /* Generic Flash Commands/OPCODEs */
-#define FLASH_CMD_WREN		0x06
-#define FLASH_CMD_WRDI		0x04
-#define FLASH_CMD_RDID		0x9f
-#define FLASH_CMD_RDSR		0x05
-#define FLASH_CMD_RDSR2		0x35
-#define FLASH_CMD_WRSR		0x01
-#define FLASH_CMD_SE_4K		0x20
-#define FLASH_CMD_SE_32K	0x52
-#define FLASH_CMD_SE		0xd8
-#define FLASH_CMD_CHIPERASE	0xc7
-#define FLASH_CMD_WRVCR		0x81
-#define FLASH_CMD_RDVCR		0x85
+#define SPINOR_OP_RDSR2		0x35
+#define SPINOR_OP_WRVCR		0x81
+#define SPINOR_OP_RDVCR		0x85
 
 /* JEDEC Standard - Serial Flash Discoverable Parmeters (SFDP) Commands */
-#define FLASH_CMD_READ		0x03	/* READ */
-#define FLASH_CMD_READ_FAST	0x0b	/* FAST READ */
-#define FLASH_CMD_READ_1_1_2	0x3b	/* DUAL OUTPUT READ */
-#define FLASH_CMD_READ_1_2_2	0xbb	/* DUAL I/O READ */
-#define FLASH_CMD_READ_1_1_4	0x6b	/* QUAD OUTPUT READ */
-#define FLASH_CMD_READ_1_4_4	0xeb	/* QUAD I/O READ */
+#define SPINOR_OP_READ_1_2_2	0xbb	/* DUAL I/O READ */
+#define SPINOR_OP_READ_1_4_4	0xeb	/* QUAD I/O READ */
 
-#define FLASH_CMD_WRITE		0x02	/* PAGE PROGRAM */
-#define FLASH_CMD_WRITE_1_1_2	0xa2	/* DUAL INPUT PROGRAM */
-#define FLASH_CMD_WRITE_1_2_2	0xd2	/* DUAL INPUT EXT PROGRAM */
-#define FLASH_CMD_WRITE_1_1_4	0x32	/* QUAD INPUT PROGRAM */
-#define FLASH_CMD_WRITE_1_4_4	0x12	/* QUAD INPUT EXT PROGRAM */
-
-#define FLASH_CMD_EN4B_ADDR	0xb7	/* Enter 4-byte address mode */
-#define FLASH_CMD_EX4B_ADDR	0xe9	/* Exit 4-byte address mode */
+#define SPINOR_OP_WRITE		0x02	/* PAGE PROGRAM */
+#define SPINOR_OP_WRITE_1_1_2	0xa2	/* DUAL INPUT PROGRAM */
+#define SPINOR_OP_WRITE_1_2_2	0xd2	/* DUAL INPUT EXT PROGRAM */
+#define SPINOR_OP_WRITE_1_1_4	0x32	/* QUAD INPUT PROGRAM */
+#define SPINOR_OP_WRITE_1_4_4	0x12	/* QUAD INPUT EXT PROGRAM */
 
 /* READ commands with 32-bit addressing */
-#define FLASH_CMD_READ4		0x13
-#define FLASH_CMD_READ4_FAST	0x0c
-#define FLASH_CMD_READ4_1_1_2	0x3c
-#define FLASH_CMD_READ4_1_2_2	0xbc
-#define FLASH_CMD_READ4_1_1_4	0x6c
-#define FLASH_CMD_READ4_1_4_4	0xec
+#define SPINOR_OP_READ4_1_2_2	0xbc
+#define SPINOR_OP_READ4_1_4_4	0xec
 
 /* Configuration flags */
 #define FLASH_FLAG_SINGLE	0x000000ff
+1 -3
drivers/mtd/devices/slram.c
···
280 280   static int __init init_slram(void)
281 281   {
282 282   	char *devname;
283      - 	int i;
284 283
285 284   #ifndef MODULE
286 285   	char *devstart;
287 286   	char *devlength;
288      -
289      - 	i = 0;
290 287
291 288   	if (!map) {
292 289   		E("slram: not enough parameters.\n");
···
311 314   	}
312 315   #else
313 316   	int count;
    317  + 	int i;
314 318
315 319   	for (count = 0; count < SLRAM_MAX_DEVICES_PARAMS && map[count];
316 320   		count++) {
+156 -184
drivers/mtd/devices/st_spi_fsm.c
··· 19 19 #include <linux/mfd/syscon.h> 20 20 #include <linux/mtd/mtd.h> 21 21 #include <linux/mtd/partitions.h> 22 + #include <linux/mtd/spi-nor.h> 22 23 #include <linux/sched.h> 23 24 #include <linux/delay.h> 24 25 #include <linux/io.h> ··· 202 201 203 202 #define STFSM_MAX_WAIT_SEQ_MS 1000 /* FSM execution time */ 204 203 205 - /* Flash Commands */ 206 - #define FLASH_CMD_WREN 0x06 207 - #define FLASH_CMD_WRDI 0x04 208 - #define FLASH_CMD_RDID 0x9f 209 - #define FLASH_CMD_RDSR 0x05 210 - #define FLASH_CMD_RDSR2 0x35 211 - #define FLASH_CMD_WRSR 0x01 212 - #define FLASH_CMD_SE_4K 0x20 213 - #define FLASH_CMD_SE_32K 0x52 214 - #define FLASH_CMD_SE 0xd8 215 - #define FLASH_CMD_CHIPERASE 0xc7 216 - #define FLASH_CMD_WRVCR 0x81 217 - #define FLASH_CMD_RDVCR 0x85 218 - 219 - #define FLASH_CMD_READ 0x03 /* READ */ 220 - #define FLASH_CMD_READ_FAST 0x0b /* FAST READ */ 221 - #define FLASH_CMD_READ_1_1_2 0x3b /* DUAL OUTPUT READ */ 222 - #define FLASH_CMD_READ_1_2_2 0xbb /* DUAL I/O READ */ 223 - #define FLASH_CMD_READ_1_1_4 0x6b /* QUAD OUTPUT READ */ 224 - #define FLASH_CMD_READ_1_4_4 0xeb /* QUAD I/O READ */ 225 - 226 - #define FLASH_CMD_WRITE 0x02 /* PAGE PROGRAM */ 227 - #define FLASH_CMD_WRITE_1_1_2 0xa2 /* DUAL INPUT PROGRAM */ 228 - #define FLASH_CMD_WRITE_1_2_2 0xd2 /* DUAL INPUT EXT PROGRAM */ 229 - #define FLASH_CMD_WRITE_1_1_4 0x32 /* QUAD INPUT PROGRAM */ 230 - #define FLASH_CMD_WRITE_1_4_4 0x12 /* QUAD INPUT EXT PROGRAM */ 231 - 232 - #define FLASH_CMD_EN4B_ADDR 0xb7 /* Enter 4-byte address mode */ 233 - #define FLASH_CMD_EX4B_ADDR 0xe9 /* Exit 4-byte address mode */ 234 - 235 - /* READ commands with 32-bit addressing (N25Q256 and S25FLxxxS) */ 236 - #define FLASH_CMD_READ4 0x13 237 - #define FLASH_CMD_READ4_FAST 0x0c 238 - #define FLASH_CMD_READ4_1_1_2 0x3c 239 - #define FLASH_CMD_READ4_1_2_2 0xbc 240 - #define FLASH_CMD_READ4_1_1_4 0x6c 241 - #define FLASH_CMD_READ4_1_4_4 0xec 242 - 243 204 /* S25FLxxxS commands */ 244 205 #define S25FL_CMD_WRITE4_1_1_4 
0x34 245 206 #define S25FL_CMD_SE4 0xdc ··· 209 246 #define S25FL_CMD_DYBWR 0xe1 210 247 #define S25FL_CMD_DYBRD 0xe0 211 248 #define S25FL_CMD_WRITE4 0x12 /* Note, opcode clashes with 212 - * 'FLASH_CMD_WRITE_1_4_4' 249 + * 'SPINOR_OP_WRITE_1_4_4' 213 250 * as found on N25Qxxx devices! */ 214 251 215 252 /* Status register */ ··· 224 261 #define S25FL_STATUS_E_ERR 0x20 225 262 #define S25FL_STATUS_P_ERR 0x40 226 263 264 + #define N25Q_CMD_WRVCR 0x81 265 + #define N25Q_CMD_RDVCR 0x85 266 + #define N25Q_CMD_RDVECR 0x65 267 + #define N25Q_CMD_RDNVCR 0xb5 268 + #define N25Q_CMD_WRNVCR 0xb1 269 + 227 270 #define FLASH_PAGESIZE 256 /* In Bytes */ 228 271 #define FLASH_PAGESIZE_32 (FLASH_PAGESIZE / 4) /* In uint32_t */ 229 272 #define FLASH_MAX_BUSY_WAIT (300 * HZ) /* Maximum 'CHIPERASE' time */ ··· 239 270 */ 240 271 #define CFG_READ_TOGGLE_32BIT_ADDR 0x00000001 241 272 #define CFG_WRITE_TOGGLE_32BIT_ADDR 0x00000002 242 - #define CFG_WRITE_EX_32BIT_ADDR_DELAY 0x00000004 243 273 #define CFG_ERASESEC_TOGGLE_32BIT_ADDR 0x00000008 244 274 #define CFG_S25FL_CHECK_ERROR_FLAGS 0x00000010 245 275 ··· 297 329 u32 jedec_id; 298 330 u16 ext_id; 299 331 /* 300 - * The size listed here is what works with FLASH_CMD_SE, which isn't 332 + * The size listed here is what works with SPINOR_OP_SE, which isn't 301 333 * necessarily called a "sector" by the vendor. 302 334 */ 303 335 unsigned sector_size; ··· 337 369 { "m25px32", 0x207116, 0, 64 * 1024, 64, M25PX_FLAG, 75, NULL }, 338 370 { "m25px64", 0x207117, 0, 64 * 1024, 128, M25PX_FLAG, 75, NULL }, 339 371 372 + /* Macronix MX25xxx 373 + * - Support for 'FLASH_FLAG_WRITE_1_4_4' is omitted for devices 374 + * where operating frequency must be reduced. 
375 + */ 340 376 #define MX25_FLAG (FLASH_FLAG_READ_WRITE | \ 341 377 FLASH_FLAG_READ_FAST | \ 342 378 FLASH_FLAG_READ_1_1_2 | \ 343 379 FLASH_FLAG_READ_1_2_2 | \ 344 380 FLASH_FLAG_READ_1_1_4 | \ 345 - FLASH_FLAG_READ_1_4_4 | \ 346 381 FLASH_FLAG_SE_4K | \ 347 382 FLASH_FLAG_SE_32K) 383 + { "mx25l3255e", 0xc29e16, 0, 64 * 1024, 64, 384 + (MX25_FLAG | FLASH_FLAG_WRITE_1_4_4), 86, 385 + stfsm_mx25_config}, 348 386 { "mx25l25635e", 0xc22019, 0, 64*1024, 512, 349 387 (MX25_FLAG | FLASH_FLAG_32BIT_ADDR | FLASH_FLAG_RESET), 70, 350 388 stfsm_mx25_config }, 389 + { "mx25l25655e", 0xc22619, 0, 64*1024, 512, 390 + (MX25_FLAG | FLASH_FLAG_32BIT_ADDR | FLASH_FLAG_RESET), 70, 391 + stfsm_mx25_config}, 351 392 352 393 #define N25Q_FLAG (FLASH_FLAG_READ_WRITE | \ 353 394 FLASH_FLAG_READ_FAST | \ ··· 384 407 FLASH_FLAG_READ_1_4_4 | \ 385 408 FLASH_FLAG_WRITE_1_1_4 | \ 386 409 FLASH_FLAG_READ_FAST) 410 + { "s25fl032p", 0x010215, 0x4d00, 64 * 1024, 64, S25FLXXXP_FLAG, 80, 411 + stfsm_s25fl_config}, 387 412 { "s25fl129p0", 0x012018, 0x4d00, 256 * 1024, 64, S25FLXXXP_FLAG, 80, 388 413 stfsm_s25fl_config }, 389 414 { "s25fl129p1", 0x012018, 0x4d01, 64 * 1024, 256, S25FLXXXP_FLAG, 80, ··· 452 473 453 474 /* Default READ configurations, in order of preference */ 454 475 static struct seq_rw_config default_read_configs[] = { 455 - {FLASH_FLAG_READ_1_4_4, FLASH_CMD_READ_1_4_4, 0, 4, 4, 0x00, 2, 4}, 456 - {FLASH_FLAG_READ_1_1_4, FLASH_CMD_READ_1_1_4, 0, 1, 4, 0x00, 4, 0}, 457 - {FLASH_FLAG_READ_1_2_2, FLASH_CMD_READ_1_2_2, 0, 2, 2, 0x00, 4, 0}, 458 - {FLASH_FLAG_READ_1_1_2, FLASH_CMD_READ_1_1_2, 0, 1, 2, 0x00, 0, 8}, 459 - {FLASH_FLAG_READ_FAST, FLASH_CMD_READ_FAST, 0, 1, 1, 0x00, 0, 8}, 460 - {FLASH_FLAG_READ_WRITE, FLASH_CMD_READ, 0, 1, 1, 0x00, 0, 0}, 476 + {FLASH_FLAG_READ_1_4_4, SPINOR_OP_READ_1_4_4, 0, 4, 4, 0x00, 2, 4}, 477 + {FLASH_FLAG_READ_1_1_4, SPINOR_OP_READ_1_1_4, 0, 1, 4, 0x00, 4, 0}, 478 + {FLASH_FLAG_READ_1_2_2, SPINOR_OP_READ_1_2_2, 0, 2, 2, 0x00, 4, 0}, 479 + 
{FLASH_FLAG_READ_1_1_2, SPINOR_OP_READ_1_1_2, 0, 1, 2, 0x00, 0, 8}, 480 + {FLASH_FLAG_READ_FAST, SPINOR_OP_READ_FAST, 0, 1, 1, 0x00, 0, 8}, 481 + {FLASH_FLAG_READ_WRITE, SPINOR_OP_READ, 0, 1, 1, 0x00, 0, 0}, 461 482 {0x00, 0, 0, 0, 0, 0x00, 0, 0}, 462 483 }; 463 484 464 485 /* Default WRITE configurations */ 465 486 static struct seq_rw_config default_write_configs[] = { 466 - {FLASH_FLAG_WRITE_1_4_4, FLASH_CMD_WRITE_1_4_4, 1, 4, 4, 0x00, 0, 0}, 467 - {FLASH_FLAG_WRITE_1_1_4, FLASH_CMD_WRITE_1_1_4, 1, 1, 4, 0x00, 0, 0}, 468 - {FLASH_FLAG_WRITE_1_2_2, FLASH_CMD_WRITE_1_2_2, 1, 2, 2, 0x00, 0, 0}, 469 - {FLASH_FLAG_WRITE_1_1_2, FLASH_CMD_WRITE_1_1_2, 1, 1, 2, 0x00, 0, 0}, 470 - {FLASH_FLAG_READ_WRITE, FLASH_CMD_WRITE, 1, 1, 1, 0x00, 0, 0}, 487 + {FLASH_FLAG_WRITE_1_4_4, SPINOR_OP_WRITE_1_4_4, 1, 4, 4, 0x00, 0, 0}, 488 + {FLASH_FLAG_WRITE_1_1_4, SPINOR_OP_WRITE_1_1_4, 1, 1, 4, 0x00, 0, 0}, 489 + {FLASH_FLAG_WRITE_1_2_2, SPINOR_OP_WRITE_1_2_2, 1, 2, 2, 0x00, 0, 0}, 490 + {FLASH_FLAG_WRITE_1_1_2, SPINOR_OP_WRITE_1_1_2, 1, 1, 2, 0x00, 0, 0}, 491 + {FLASH_FLAG_READ_WRITE, SPINOR_OP_WRITE, 1, 1, 1, 0x00, 0, 0}, 471 492 {0x00, 0, 0, 0, 0, 0x00, 0, 0}, 472 493 }; 473 494 ··· 490 511 * cycles. 
491 512 */ 492 513 static struct seq_rw_config n25q_read3_configs[] = { 493 - {FLASH_FLAG_READ_1_4_4, FLASH_CMD_READ_1_4_4, 0, 4, 4, 0x00, 0, 8}, 494 - {FLASH_FLAG_READ_1_1_4, FLASH_CMD_READ_1_1_4, 0, 1, 4, 0x00, 0, 8}, 495 - {FLASH_FLAG_READ_1_2_2, FLASH_CMD_READ_1_2_2, 0, 2, 2, 0x00, 0, 8}, 496 - {FLASH_FLAG_READ_1_1_2, FLASH_CMD_READ_1_1_2, 0, 1, 2, 0x00, 0, 8}, 497 - {FLASH_FLAG_READ_FAST, FLASH_CMD_READ_FAST, 0, 1, 1, 0x00, 0, 8}, 498 - {FLASH_FLAG_READ_WRITE, FLASH_CMD_READ, 0, 1, 1, 0x00, 0, 0}, 514 + {FLASH_FLAG_READ_1_4_4, SPINOR_OP_READ_1_4_4, 0, 4, 4, 0x00, 0, 8}, 515 + {FLASH_FLAG_READ_1_1_4, SPINOR_OP_READ_1_1_4, 0, 1, 4, 0x00, 0, 8}, 516 + {FLASH_FLAG_READ_1_2_2, SPINOR_OP_READ_1_2_2, 0, 2, 2, 0x00, 0, 8}, 517 + {FLASH_FLAG_READ_1_1_2, SPINOR_OP_READ_1_1_2, 0, 1, 2, 0x00, 0, 8}, 518 + {FLASH_FLAG_READ_FAST, SPINOR_OP_READ_FAST, 0, 1, 1, 0x00, 0, 8}, 519 + {FLASH_FLAG_READ_WRITE, SPINOR_OP_READ, 0, 1, 1, 0x00, 0, 0}, 499 520 {0x00, 0, 0, 0, 0, 0x00, 0, 0}, 500 521 }; 501 522 ··· 505 526 * - 'FAST' variants configured for 8 dummy cycles (see note above.) 
506 527 */ 507 528 static struct seq_rw_config n25q_read4_configs[] = { 508 - {FLASH_FLAG_READ_1_4_4, FLASH_CMD_READ4_1_4_4, 0, 4, 4, 0x00, 0, 8}, 509 - {FLASH_FLAG_READ_1_1_4, FLASH_CMD_READ4_1_1_4, 0, 1, 4, 0x00, 0, 8}, 510 - {FLASH_FLAG_READ_1_2_2, FLASH_CMD_READ4_1_2_2, 0, 2, 2, 0x00, 0, 8}, 511 - {FLASH_FLAG_READ_1_1_2, FLASH_CMD_READ4_1_1_2, 0, 1, 2, 0x00, 0, 8}, 512 - {FLASH_FLAG_READ_FAST, FLASH_CMD_READ4_FAST, 0, 1, 1, 0x00, 0, 8}, 513 - {FLASH_FLAG_READ_WRITE, FLASH_CMD_READ4, 0, 1, 1, 0x00, 0, 0}, 529 + {FLASH_FLAG_READ_1_4_4, SPINOR_OP_READ4_1_4_4, 0, 4, 4, 0x00, 0, 8}, 530 + {FLASH_FLAG_READ_1_1_4, SPINOR_OP_READ4_1_1_4, 0, 1, 4, 0x00, 0, 8}, 531 + {FLASH_FLAG_READ_1_2_2, SPINOR_OP_READ4_1_2_2, 0, 2, 2, 0x00, 0, 8}, 532 + {FLASH_FLAG_READ_1_1_2, SPINOR_OP_READ4_1_1_2, 0, 1, 2, 0x00, 0, 8}, 533 + {FLASH_FLAG_READ_FAST, SPINOR_OP_READ4_FAST, 0, 1, 1, 0x00, 0, 8}, 534 + {FLASH_FLAG_READ_WRITE, SPINOR_OP_READ4, 0, 1, 1, 0x00, 0, 0}, 514 535 {0x00, 0, 0, 0, 0, 0x00, 0, 0}, 515 536 }; 516 537 ··· 523 544 { 524 545 seq->seq_opc[0] = (SEQ_OPC_PADS_1 | 525 546 SEQ_OPC_CYCLES(8) | 526 - SEQ_OPC_OPCODE(FLASH_CMD_EN4B_ADDR) | 547 + SEQ_OPC_OPCODE(SPINOR_OP_EN4B) | 527 548 SEQ_OPC_CSDEASSERT); 528 549 529 550 seq->seq[0] = STFSM_INST_CMD1; ··· 551 572 * entering a state that is incompatible with the SPIBoot Controller. 
552 573 */ 553 574 static struct seq_rw_config stfsm_s25fl_read4_configs[] = { 554 - {FLASH_FLAG_READ_1_4_4, FLASH_CMD_READ4_1_4_4, 0, 4, 4, 0x00, 2, 4}, 555 - {FLASH_FLAG_READ_1_1_4, FLASH_CMD_READ4_1_1_4, 0, 1, 4, 0x00, 0, 8}, 556 - {FLASH_FLAG_READ_1_2_2, FLASH_CMD_READ4_1_2_2, 0, 2, 2, 0x00, 4, 0}, 557 - {FLASH_FLAG_READ_1_1_2, FLASH_CMD_READ4_1_1_2, 0, 1, 2, 0x00, 0, 8}, 558 - {FLASH_FLAG_READ_FAST, FLASH_CMD_READ4_FAST, 0, 1, 1, 0x00, 0, 8}, 559 - {FLASH_FLAG_READ_WRITE, FLASH_CMD_READ4, 0, 1, 1, 0x00, 0, 0}, 575 + {FLASH_FLAG_READ_1_4_4, SPINOR_OP_READ4_1_4_4, 0, 4, 4, 0x00, 2, 4}, 576 + {FLASH_FLAG_READ_1_1_4, SPINOR_OP_READ4_1_1_4, 0, 1, 4, 0x00, 0, 8}, 577 + {FLASH_FLAG_READ_1_2_2, SPINOR_OP_READ4_1_2_2, 0, 2, 2, 0x00, 4, 0}, 578 + {FLASH_FLAG_READ_1_1_2, SPINOR_OP_READ4_1_1_2, 0, 1, 2, 0x00, 0, 8}, 579 + {FLASH_FLAG_READ_FAST, SPINOR_OP_READ4_FAST, 0, 1, 1, 0x00, 0, 8}, 580 + {FLASH_FLAG_READ_WRITE, SPINOR_OP_READ4, 0, 1, 1, 0x00, 0, 0}, 560 581 {0x00, 0, 0, 0, 0, 0x00, 0, 0}, 561 582 }; 562 583 ··· 569 590 /* 570 591 * [W25Qxxx] Configuration 571 592 */ 572 - #define W25Q_STATUS_QE (0x1 << 9) 593 + #define W25Q_STATUS_QE (0x1 << 1) 573 594 574 595 static struct stfsm_seq stfsm_seq_read_jedec = { 575 596 .data_size = TRANSFER_SIZE(8), 576 597 .seq_opc[0] = (SEQ_OPC_PADS_1 | 577 598 SEQ_OPC_CYCLES(8) | 578 - SEQ_OPC_OPCODE(FLASH_CMD_RDID)), 599 + SEQ_OPC_OPCODE(SPINOR_OP_RDID)), 579 600 .seq = { 580 601 STFSM_INST_CMD1, 581 602 STFSM_INST_DATA_READ, ··· 591 612 .data_size = TRANSFER_SIZE(4), 592 613 .seq_opc[0] = (SEQ_OPC_PADS_1 | 593 614 SEQ_OPC_CYCLES(8) | 594 - SEQ_OPC_OPCODE(FLASH_CMD_RDSR)), 615 + SEQ_OPC_OPCODE(SPINOR_OP_RDSR)), 595 616 .seq = { 596 617 STFSM_INST_CMD1, 597 618 STFSM_INST_DATA_READ, ··· 607 628 /* 'addr_cfg' configured during initialisation */ 608 629 .seq_opc = { 609 630 (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 610 - SEQ_OPC_OPCODE(FLASH_CMD_WREN) | SEQ_OPC_CSDEASSERT), 631 + SEQ_OPC_OPCODE(SPINOR_OP_WREN) | SEQ_OPC_CSDEASSERT), 611 
632 612 633 (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 613 - SEQ_OPC_OPCODE(FLASH_CMD_SE)), 634 + SEQ_OPC_OPCODE(SPINOR_OP_SE)), 614 635 }, 615 636 .seq = { 616 637 STFSM_INST_CMD1, ··· 628 649 static struct stfsm_seq stfsm_seq_erase_chip = { 629 650 .seq_opc = { 630 651 (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 631 - SEQ_OPC_OPCODE(FLASH_CMD_WREN) | SEQ_OPC_CSDEASSERT), 652 + SEQ_OPC_OPCODE(SPINOR_OP_WREN) | SEQ_OPC_CSDEASSERT), 632 653 633 654 (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 634 - SEQ_OPC_OPCODE(FLASH_CMD_CHIPERASE) | SEQ_OPC_CSDEASSERT), 655 + SEQ_OPC_OPCODE(SPINOR_OP_CHIP_ERASE) | SEQ_OPC_CSDEASSERT), 635 656 }, 636 657 .seq = { 637 658 STFSM_INST_CMD1, ··· 648 669 649 670 static struct stfsm_seq stfsm_seq_write_status = { 650 671 .seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 651 - SEQ_OPC_OPCODE(FLASH_CMD_WREN) | SEQ_OPC_CSDEASSERT), 672 + SEQ_OPC_OPCODE(SPINOR_OP_WREN) | SEQ_OPC_CSDEASSERT), 652 673 .seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 653 - SEQ_OPC_OPCODE(FLASH_CMD_WRSR)), 654 - .seq = { 655 - STFSM_INST_CMD1, 656 - STFSM_INST_CMD2, 657 - STFSM_INST_STA_WR1, 658 - STFSM_INST_STOP, 659 - }, 660 - .seq_cfg = (SEQ_CFG_PADS_1 | 661 - SEQ_CFG_READNOTWRITE | 662 - SEQ_CFG_CSDEASSERT | 663 - SEQ_CFG_STARTSEQ), 664 - }; 665 - 666 - static struct stfsm_seq stfsm_seq_wrvcr = { 667 - .seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 668 - SEQ_OPC_OPCODE(FLASH_CMD_WREN) | SEQ_OPC_CSDEASSERT), 669 - .seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 670 - SEQ_OPC_OPCODE(FLASH_CMD_WRVCR)), 674 + SEQ_OPC_OPCODE(SPINOR_OP_WRSR)), 671 675 .seq = { 672 676 STFSM_INST_CMD1, 673 677 STFSM_INST_CMD2, ··· 666 704 static int stfsm_n25q_en_32bit_addr_seq(struct stfsm_seq *seq) 667 705 { 668 706 seq->seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 669 - SEQ_OPC_OPCODE(FLASH_CMD_EN4B_ADDR)); 707 + SEQ_OPC_OPCODE(SPINOR_OP_EN4B)); 670 708 seq->seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 671 - SEQ_OPC_OPCODE(FLASH_CMD_WREN) | 709 + 
SEQ_OPC_OPCODE(SPINOR_OP_WREN) | 672 710 SEQ_OPC_CSDEASSERT); 673 711 674 712 seq->seq[0] = STFSM_INST_CMD2; ··· 755 793 756 794 dev_dbg(fsm->dev, "Reading %d bytes from FIFO\n", size); 757 795 758 - BUG_ON((((uint32_t)buf) & 0x3) || (size & 0x3)); 796 + BUG_ON((((uintptr_t)buf) & 0x3) || (size & 0x3)); 759 797 760 798 while (remaining) { 761 799 for (;;) { ··· 779 817 780 818 dev_dbg(fsm->dev, "writing %d bytes to FIFO\n", size); 781 819 782 - BUG_ON((((uint32_t)buf) & 0x3) || (size & 0x3)); 820 + BUG_ON((((uintptr_t)buf) & 0x3) || (size & 0x3)); 783 821 784 822 writesl(fsm->base + SPI_FAST_SEQ_DATA_REG, buf, words); 785 823 ··· 789 827 static int stfsm_enter_32bit_addr(struct stfsm *fsm, int enter) 790 828 { 791 829 struct stfsm_seq *seq = &fsm->stfsm_seq_en_32bit_addr; 792 - uint32_t cmd = enter ? FLASH_CMD_EN4B_ADDR : FLASH_CMD_EX4B_ADDR; 830 + uint32_t cmd = enter ? SPINOR_OP_EN4B : SPINOR_OP_EX4B; 793 831 794 832 seq->seq_opc[0] = (SEQ_OPC_PADS_1 | 795 833 SEQ_OPC_CYCLES(8) | ··· 813 851 /* Use RDRS1 */ 814 852 seq->seq_opc[0] = (SEQ_OPC_PADS_1 | 815 853 SEQ_OPC_CYCLES(8) | 816 - SEQ_OPC_OPCODE(FLASH_CMD_RDSR)); 854 + SEQ_OPC_OPCODE(SPINOR_OP_RDSR)); 817 855 818 856 /* Load read_status sequence */ 819 857 stfsm_load_seq(fsm, seq); ··· 851 889 } 852 890 853 891 static int stfsm_read_status(struct stfsm *fsm, uint8_t cmd, 854 - uint8_t *status) 892 + uint8_t *data, int bytes) 855 893 { 856 894 struct stfsm_seq *seq = &stfsm_seq_read_status_fifo; 857 895 uint32_t tmp; 896 + uint8_t *t = (uint8_t *)&tmp; 897 + int i; 858 898 859 - dev_dbg(fsm->dev, "reading STA[%s]\n", 860 - (cmd == FLASH_CMD_RDSR) ? 
"1" : "2"); 899 + dev_dbg(fsm->dev, "read 'status' register [0x%02x], %d byte(s)\n", 900 + cmd, bytes); 861 901 862 - seq->seq_opc[0] = (SEQ_OPC_PADS_1 | 863 - SEQ_OPC_CYCLES(8) | 902 + BUG_ON(bytes != 1 && bytes != 2); 903 + 904 + seq->seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 864 905 SEQ_OPC_OPCODE(cmd)), 865 906 866 907 stfsm_load_seq(fsm, seq); 867 908 868 909 stfsm_read_fifo(fsm, &tmp, 4); 869 910 870 - *status = (uint8_t)(tmp >> 24); 911 + for (i = 0; i < bytes; i++) 912 + data[i] = t[i]; 871 913 872 914 stfsm_wait_seq(fsm); 873 915 874 916 return 0; 875 917 } 876 918 877 - static int stfsm_write_status(struct stfsm *fsm, uint16_t status, 878 - int sta_bytes) 919 + static int stfsm_write_status(struct stfsm *fsm, uint8_t cmd, 920 + uint16_t data, int bytes, int wait_busy) 879 921 { 880 922 struct stfsm_seq *seq = &stfsm_seq_write_status; 881 923 882 - dev_dbg(fsm->dev, "writing STA[%s] 0x%04x\n", 883 - (sta_bytes == 1) ? "1" : "1+2", status); 924 + dev_dbg(fsm->dev, 925 + "write 'status' register [0x%02x], %d byte(s), 0x%04x\n" 926 + " %s wait-busy\n", cmd, bytes, data, wait_busy ? "with" : "no"); 884 927 885 - seq->status = (uint32_t)status | STA_PADS_1 | STA_CSDEASSERT; 886 - seq->seq[2] = (sta_bytes == 1) ? 887 - STFSM_INST_STA_WR1 : STFSM_INST_STA_WR1_2; 928 + BUG_ON(bytes != 1 && bytes != 2); 888 929 889 - stfsm_load_seq(fsm, seq); 930 + seq->seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 931 + SEQ_OPC_OPCODE(cmd)); 890 932 891 - stfsm_wait_seq(fsm); 892 - 893 - return 0; 894 - }; 895 - 896 - static int stfsm_wrvcr(struct stfsm *fsm, uint8_t data) 897 - { 898 - struct stfsm_seq *seq = &stfsm_seq_wrvcr; 899 - 900 - dev_dbg(fsm->dev, "writing VCR 0x%02x\n", data); 901 - 902 - seq->status = (STA_DATA_BYTE1(data) | STA_PADS_1 | STA_CSDEASSERT); 933 + seq->status = (uint32_t)data | STA_PADS_1 | STA_CSDEASSERT; 934 + seq->seq[2] = (bytes == 1) ? 
STFSM_INST_STA_WR1 : STFSM_INST_STA_WR1_2; 903 935 904 936 stfsm_load_seq(fsm, seq); 905 937 906 938 stfsm_wait_seq(fsm); 939 + 940 + if (wait_busy) 941 + stfsm_wait_busy(fsm); 907 942 908 943 return 0; 909 944 } ··· 986 1027 if (cfg->write) 987 1028 seq->seq_opc[i++] = (SEQ_OPC_PADS_1 | 988 1029 SEQ_OPC_CYCLES(8) | 989 - SEQ_OPC_OPCODE(FLASH_CMD_WREN) | 1030 + SEQ_OPC_OPCODE(SPINOR_OP_WREN) | 990 1031 SEQ_OPC_CSDEASSERT); 991 1032 992 1033 /* Address configuration (24 or 32-bit addresses) */ ··· 1108 1149 stfsm_mx25_en_32bit_addr_seq(&fsm->stfsm_seq_en_32bit_addr); 1109 1150 1110 1151 soc_reset = stfsm_can_handle_soc_reset(fsm); 1111 - if (soc_reset || !fsm->booted_from_spi) { 1152 + if (soc_reset || !fsm->booted_from_spi) 1112 1153 /* If we can handle SoC resets, we enable 32-bit address 1113 1154 * mode pervasively */ 1114 1155 stfsm_enter_32bit_addr(fsm, 1); 1115 1156 1116 - } else { 1157 + else 1117 1158 /* Else, enable/disable 32-bit addressing before/after 1118 1159 * each operation */ 1119 1160 fsm->configuration = (CFG_READ_TOGGLE_32BIT_ADDR | 1120 1161 CFG_WRITE_TOGGLE_32BIT_ADDR | 1121 1162 CFG_ERASESEC_TOGGLE_32BIT_ADDR); 1122 - /* It seems a small delay is required after exiting 1123 - * 32-bit mode following a write operation. The issue 1124 - * is under investigation. 1125 - */ 1126 - fsm->configuration |= CFG_WRITE_EX_32BIT_ADDR_DELAY; 1127 - } 1128 1163 } 1129 1164 1130 - /* For QUAD mode, set 'QE' STATUS bit */ 1165 + /* Check status of 'QE' bit, update if required. 
*/ 1166 + stfsm_read_status(fsm, SPINOR_OP_RDSR, &sta, 1); 1131 1167 data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1; 1132 1168 if (data_pads == 4) { 1133 - stfsm_read_status(fsm, FLASH_CMD_RDSR, &sta); 1134 - sta |= MX25_STATUS_QE; 1135 - stfsm_write_status(fsm, sta, 1); 1169 + if (!(sta & MX25_STATUS_QE)) { 1170 + /* Set 'QE' */ 1171 + sta |= MX25_STATUS_QE; 1172 + 1173 + stfsm_write_status(fsm, SPINOR_OP_WRSR, sta, 1, 1); 1174 + } 1175 + } else { 1176 + if (sta & MX25_STATUS_QE) { 1177 + /* Clear 'QE' */ 1178 + sta &= ~MX25_STATUS_QE; 1179 + 1180 + stfsm_write_status(fsm, SPINOR_OP_WRSR, sta, 1, 1); 1181 + } 1136 1182 } 1137 1183 1138 1184 return 0; ··· 1203 1239 */ 1204 1240 vcr = (N25Q_VCR_DUMMY_CYCLES(8) | N25Q_VCR_XIP_DISABLED | 1205 1241 N25Q_VCR_WRAP_CONT); 1206 - stfsm_wrvcr(fsm, vcr); 1242 + stfsm_write_status(fsm, N25Q_CMD_WRVCR, vcr, 1, 0); 1207 1243 1208 1244 return 0; 1209 1245 } ··· 1261 1297 { 1262 1298 struct stfsm_seq seq = { 1263 1299 .seq_opc[0] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 1264 - SEQ_OPC_OPCODE(FLASH_CMD_WREN) | 1300 + SEQ_OPC_OPCODE(SPINOR_OP_WREN) | 1265 1301 SEQ_OPC_CSDEASSERT), 1266 1302 .seq_opc[1] = (SEQ_OPC_PADS_1 | SEQ_OPC_CYCLES(8) | 1267 1303 SEQ_OPC_OPCODE(S25FL_CMD_DYBWR)), ··· 1301 1337 SEQ_OPC_CSDEASSERT), 1302 1338 .seq_opc[1] = (SEQ_OPC_PADS_1 | 1303 1339 SEQ_OPC_CYCLES(8) | 1304 - SEQ_OPC_OPCODE(FLASH_CMD_WRDI) | 1340 + SEQ_OPC_OPCODE(SPINOR_OP_WRDI) | 1305 1341 SEQ_OPC_CSDEASSERT), 1306 1342 .seq = { 1307 1343 STFSM_INST_CMD1, ··· 1331 1367 uint32_t offs; 1332 1368 uint16_t sta_wr; 1333 1369 uint8_t sr1, cr1, dyb; 1370 + int update_sr = 0; 1334 1371 int ret; 1335 1372 1336 1373 if (flags & FLASH_FLAG_32BIT_ADDR) { ··· 1379 1414 } 1380 1415 } 1381 1416 1382 - /* Check status of 'QE' bit */ 1417 + /* Check status of 'QE' bit, update if required. 
*/ 1418 + stfsm_read_status(fsm, SPINOR_OP_RDSR2, &cr1, 1); 1383 1419 data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1; 1384 - stfsm_read_status(fsm, FLASH_CMD_RDSR2, &cr1); 1385 1420 if (data_pads == 4) { 1386 1421 if (!(cr1 & STFSM_S25FL_CONFIG_QE)) { 1387 1422 /* Set 'QE' */ 1388 1423 cr1 |= STFSM_S25FL_CONFIG_QE; 1389 1424 1390 - stfsm_read_status(fsm, FLASH_CMD_RDSR, &sr1); 1391 - sta_wr = ((uint16_t)cr1 << 8) | sr1; 1392 - 1393 - stfsm_write_status(fsm, sta_wr, 2); 1394 - 1395 - stfsm_wait_busy(fsm); 1425 + update_sr = 1; 1396 1426 } 1397 1427 } else { 1398 - if ((cr1 & STFSM_S25FL_CONFIG_QE)) { 1428 + if (cr1 & STFSM_S25FL_CONFIG_QE) { 1399 1429 /* Clear 'QE' */ 1400 1430 cr1 &= ~STFSM_S25FL_CONFIG_QE; 1401 1431 1402 - stfsm_read_status(fsm, FLASH_CMD_RDSR, &sr1); 1403 - sta_wr = ((uint16_t)cr1 << 8) | sr1; 1404 - 1405 - stfsm_write_status(fsm, sta_wr, 2); 1406 - 1407 - stfsm_wait_busy(fsm); 1432 + update_sr = 1; 1408 1433 } 1409 - 1434 + } 1435 + if (update_sr) { 1436 + stfsm_read_status(fsm, SPINOR_OP_RDSR, &sr1, 1); 1437 + sta_wr = ((uint16_t)cr1 << 8) | sr1; 1438 + stfsm_write_status(fsm, SPINOR_OP_WRSR, sta_wr, 2, 1); 1410 1439 } 1411 1440 1412 1441 /* ··· 1415 1456 static int stfsm_w25q_config(struct stfsm *fsm) 1416 1457 { 1417 1458 uint32_t data_pads; 1418 - uint16_t sta_wr; 1419 - uint8_t sta1, sta2; 1459 + uint8_t sr1, sr2; 1460 + uint16_t sr_wr; 1461 + int update_sr = 0; 1420 1462 int ret; 1421 1463 1422 1464 ret = stfsm_prepare_rwe_seqs_default(fsm); 1423 1465 if (ret) 1424 1466 return ret; 1425 1467 1426 - /* If using QUAD mode, set QE STATUS bit */ 1468 + /* Check status of 'QE' bit, update if required. 
*/ 1469 + stfsm_read_status(fsm, SPINOR_OP_RDSR2, &sr2, 1); 1427 1470 data_pads = ((fsm->stfsm_seq_read.seq_cfg >> 16) & 0x3) + 1; 1428 1471 if (data_pads == 4) { 1429 - stfsm_read_status(fsm, FLASH_CMD_RDSR, &sta1); 1430 - stfsm_read_status(fsm, FLASH_CMD_RDSR2, &sta2); 1431 - 1432 - sta_wr = ((uint16_t)sta2 << 8) | sta1; 1433 - 1434 - sta_wr |= W25Q_STATUS_QE; 1435 - 1436 - stfsm_write_status(fsm, sta_wr, 2); 1437 - 1438 - stfsm_wait_busy(fsm); 1472 + if (!(sr2 & W25Q_STATUS_QE)) { 1473 + /* Set 'QE' */ 1474 + sr2 |= W25Q_STATUS_QE; 1475 + update_sr = 1; 1476 + } 1477 + } else { 1478 + if (sr2 & W25Q_STATUS_QE) { 1479 + /* Clear 'QE' */ 1480 + sr2 &= ~W25Q_STATUS_QE; 1481 + update_sr = 1; 1482 + } 1483 + } 1484 + if (update_sr) { 1485 + /* Write status register */ 1486 + stfsm_read_status(fsm, SPINOR_OP_RDSR, &sr1, 1); 1487 + sr_wr = ((uint16_t)sr2 << 8) | sr1; 1488 + stfsm_write_status(fsm, SPINOR_OP_WRSR, sr_wr, 2, 1); 1439 1489 } 1440 1490 1441 1491 return 0; ··· 1474 1506 read_mask = (data_pads << 2) - 1; 1475 1507 1476 1508 /* Handle non-aligned buf */ 1477 - p = ((uint32_t)buf & 0x3) ? (uint8_t *)page_buf : buf; 1509 + p = ((uintptr_t)buf & 0x3) ? 
(uint8_t *)page_buf : buf; 1478 1510 1479 1511 /* Handle non-aligned size */ 1480 1512 size_ub = (size + read_mask) & ~read_mask; ··· 1496 1528 } 1497 1529 1498 1530 /* Handle non-aligned buf */ 1499 - if ((uint32_t)buf & 0x3) 1531 + if ((uintptr_t)buf & 0x3) 1500 1532 memcpy(buf, page_buf, size); 1501 1533 1502 1534 /* Wait for sequence to finish */ ··· 1538 1570 write_mask = (data_pads << 2) - 1; 1539 1571 1540 1572 /* Handle non-aligned buf */ 1541 - if ((uint32_t)buf & 0x3) { 1573 + if ((uintptr_t)buf & 0x3) { 1542 1574 memcpy(page_buf, buf, size); 1543 1575 p = (uint8_t *)page_buf; 1544 1576 } else { ··· 1596 1628 stfsm_s25fl_clear_status_reg(fsm); 1597 1629 1598 1630 /* Exit 32-bit address mode, if required */ 1599 - if (fsm->configuration & CFG_WRITE_TOGGLE_32BIT_ADDR) { 1631 + if (fsm->configuration & CFG_WRITE_TOGGLE_32BIT_ADDR) 1600 1632 stfsm_enter_32bit_addr(fsm, 0); 1601 - if (fsm->configuration & CFG_WRITE_EX_32BIT_ADDR_DELAY) 1602 - udelay(1); 1603 - } 1604 1633 1605 1634 return 0; 1606 1635 } ··· 1701 1736 1702 1737 while (len) { 1703 1738 /* Write up to page boundary */ 1704 - bytes = min(FLASH_PAGESIZE - page_offs, len); 1739 + bytes = min_t(size_t, FLASH_PAGESIZE - page_offs, len); 1705 1740 1706 1741 ret = stfsm_write(fsm, b, bytes, to); 1707 1742 if (ret) ··· 1900 1935 fsm->base + SPI_CONFIGDATA); 1901 1936 writel(STFSM_DEFAULT_WR_TIME, fsm->base + SPI_STATUS_WR_TIME_REG); 1902 1937 1938 + /* 1939 + * Set the FSM 'WAIT' delay to the minimum workable value. Note, for 1940 + * our purposes, the WAIT instruction is used purely to achieve 1941 + * "sequence validity" rather than actually implement a delay. 
1942 + */ 1943 + writel(0x00000001, fsm->base + SPI_PROGRAM_ERASE_TIME); 1944 + 1903 1945 /* Clear FIFO, just in case */ 1904 1946 stfsm_clear_fifo(fsm); 1905 1947 ··· 2058 2086 return mtd_device_unregister(&fsm->mtd); 2059 2087 } 2060 2088 2061 - static struct of_device_id stfsm_match[] = { 2089 + static const struct of_device_id stfsm_match[] = { 2062 2090 { .compatible = "st,spi-fsm", }, 2063 2091 {}, 2064 2092 };
+11 -2
drivers/mtd/lpddr/Kconfig
···
 1      - menu "LPDDR flash memory drivers"
 2      - 	depends on MTD!=n
     1  + menu "LPDDR & LPDDR2 PCM memory drivers"
     2  + 	depends on MTD
 3   3
 4   4   config MTD_LPDDR
 5   5   	tristate "Support for LPDDR flash chips"
···
17  17   	  Window QINFO interface, permits software to be used for entire
18  18   	  families of devices. This serves similar purpose of CFI on legacy
19  19   	  Flash products
    20  +
    21  + config MTD_LPDDR2_NVM
    22  + 	# ARM dependency is only for writel_relaxed()
    23  + 	depends on MTD && ARM
    24  + 	tristate "Support for LPDDR2-NVM flash chips"
    25  + 	help
    26  + 	  This option enables support of PCM memories with a LPDDR2-NVM
    27  + 	  (Low power double data rate 2) interface.
    28  +
20  29   endmenu
+1
drivers/mtd/lpddr/Makefile
···
 4   4
 5   5   obj-$(CONFIG_MTD_QINFO_PROBE)	+= qinfo_probe.o
 6   6   obj-$(CONFIG_MTD_LPDDR)		+= lpddr_cmds.o
     7  + obj-$(CONFIG_MTD_LPDDR2_NVM)	+= lpddr2_nvm.o
+507
drivers/mtd/lpddr/lpddr2_nvm.c
··· 1 + /* 2 + * LPDDR2-NVM MTD driver. This module provides read, write, erase, lock/unlock 3 + * support for LPDDR2-NVM PCM memories 4 + * 5 + * Copyright © 2012 Micron Technology, Inc. 6 + * 7 + * Vincenzo Aliberti <vincenzo.aliberti@gmail.com> 8 + * Domenico Manna <domenico.manna@gmail.com> 9 + * Many thanks to Andrea Vigilante for initial enabling 10 + * 11 + * This program is free software; you can redistribute it and/or 12 + * modify it under the terms of the GNU General Public License 13 + * as published by the Free Software Foundation; either version 2 14 + * of the License, or (at your option) any later version. 15 + * 16 + * This program is distributed in the hope that it will be useful, 17 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 + * GNU General Public License for more details. 20 + */ 21 + 22 + #define pr_fmt(fmt) KBUILD_MODNAME ": %s: " fmt, __func__ 23 + 24 + #include <linux/init.h> 25 + #include <linux/io.h> 26 + #include <linux/module.h> 27 + #include <linux/kernel.h> 28 + #include <linux/mtd/map.h> 29 + #include <linux/mtd/mtd.h> 30 + #include <linux/mtd/partitions.h> 31 + #include <linux/slab.h> 32 + #include <linux/platform_device.h> 33 + #include <linux/ioport.h> 34 + #include <linux/err.h> 35 + 36 + /* Parameters */ 37 + #define ERASE_BLOCKSIZE (0x00020000/2) /* in Word */ 38 + #define WRITE_BUFFSIZE (0x00000400/2) /* in Word */ 39 + #define OW_BASE_ADDRESS 0x00000000 /* OW offset */ 40 + #define BUS_WIDTH 0x00000020 /* x32 devices */ 41 + 42 + /* PFOW symbols address offset */ 43 + #define PFOW_QUERY_STRING_P (0x0000/2) /* in Word */ 44 + #define PFOW_QUERY_STRING_F (0x0002/2) /* in Word */ 45 + #define PFOW_QUERY_STRING_O (0x0004/2) /* in Word */ 46 + #define PFOW_QUERY_STRING_W (0x0006/2) /* in Word */ 47 + 48 + /* OW registers address */ 49 + #define CMD_CODE_OFS (0x0080/2) /* in Word */ 50 + #define CMD_DATA_OFS (0x0084/2) /* in Word */ 51 + 
#define CMD_ADD_L_OFS (0x0088/2) /* in Word */ 52 + #define CMD_ADD_H_OFS (0x008A/2) /* in Word */ 53 + #define MPR_L_OFS (0x0090/2) /* in Word */ 54 + #define MPR_H_OFS (0x0092/2) /* in Word */ 55 + #define CMD_EXEC_OFS (0x00C0/2) /* in Word */ 56 + #define STATUS_REG_OFS (0x00CC/2) /* in Word */ 57 + #define PRG_BUFFER_OFS (0x0010/2) /* in Word */ 58 + 59 + /* Datamask */ 60 + #define MR_CFGMASK 0x8000 61 + #define SR_OK_DATAMASK 0x0080 62 + 63 + /* LPDDR2-NVM Commands */ 64 + #define LPDDR2_NVM_LOCK 0x0061 65 + #define LPDDR2_NVM_UNLOCK 0x0062 66 + #define LPDDR2_NVM_SW_PROGRAM 0x0041 67 + #define LPDDR2_NVM_SW_OVERWRITE 0x0042 68 + #define LPDDR2_NVM_BUF_PROGRAM 0x00E9 69 + #define LPDDR2_NVM_BUF_OVERWRITE 0x00EA 70 + #define LPDDR2_NVM_ERASE 0x0020 71 + 72 + /* LPDDR2-NVM Registers offset */ 73 + #define LPDDR2_MODE_REG_DATA 0x0040 74 + #define LPDDR2_MODE_REG_CFG 0x0050 75 + 76 + /* 77 + * Internal Type Definitions 78 + * pcm_int_data contains memory controller details: 79 + * @reg_data : LPDDR2_MODE_REG_DATA register address after remapping 80 + * @reg_cfg : LPDDR2_MODE_REG_CFG register address after remapping 81 + * &bus_width: memory bus-width (eg: x16 2 Bytes, x32 4 Bytes) 82 + */ 83 + struct pcm_int_data { 84 + void __iomem *ctl_regs; 85 + int bus_width; 86 + }; 87 + 88 + static DEFINE_MUTEX(lpdd2_nvm_mutex); 89 + 90 + /* 91 + * Build a map_word starting from an u_long 92 + */ 93 + static inline map_word build_map_word(u_long myword) 94 + { 95 + map_word val = { {0} }; 96 + val.x[0] = myword; 97 + return val; 98 + } 99 + 100 + /* 101 + * Build Mode Register Configuration DataMask based on device bus-width 102 + */ 103 + static inline u_int build_mr_cfgmask(u_int bus_width) 104 + { 105 + u_int val = MR_CFGMASK; 106 + 107 + if (bus_width == 0x0004) /* x32 device */ 108 + val = val << 16; 109 + 110 + return val; 111 + } 112 + 113 + /* 114 + * Build Status Register OK DataMask based on device bus-width 115 + */ 116 + static inline u_int 
build_sr_ok_datamask(u_int bus_width) 117 + { 118 + u_int val = SR_OK_DATAMASK; 119 + 120 + if (bus_width == 0x0004) /* x32 device */ 121 + val = (val << 16)+val; 122 + 123 + return val; 124 + } 125 + 126 + /* 127 + * Evaluates Overlay Window Control Registers address 128 + */ 129 + static inline u_long ow_reg_add(struct map_info *map, u_long offset) 130 + { 131 + u_long val = 0; 132 + struct pcm_int_data *pcm_data = map->fldrv_priv; 133 + 134 + val = map->pfow_base + offset*pcm_data->bus_width; 135 + 136 + return val; 137 + } 138 + 139 + /* 140 + * Enable lpddr2-nvm Overlay Window 141 + * Overlay Window is a memory mapped area containing all LPDDR2-NVM registers 142 + * used by device commands as well as uservisible resources like Device Status 143 + * Register, Device ID, etc 144 + */ 145 + static inline void ow_enable(struct map_info *map) 146 + { 147 + struct pcm_int_data *pcm_data = map->fldrv_priv; 148 + 149 + writel_relaxed(build_mr_cfgmask(pcm_data->bus_width) | 0x18, 150 + pcm_data->ctl_regs + LPDDR2_MODE_REG_CFG); 151 + writel_relaxed(0x01, pcm_data->ctl_regs + LPDDR2_MODE_REG_DATA); 152 + } 153 + 154 + /* 155 + * Disable lpddr2-nvm Overlay Window 156 + * Overlay Window is a memory mapped area containing all LPDDR2-NVM registers 157 + * used by device commands as well as uservisible resources like Device Status 158 + * Register, Device ID, etc 159 + */ 160 + static inline void ow_disable(struct map_info *map) 161 + { 162 + struct pcm_int_data *pcm_data = map->fldrv_priv; 163 + 164 + writel_relaxed(build_mr_cfgmask(pcm_data->bus_width) | 0x18, 165 + pcm_data->ctl_regs + LPDDR2_MODE_REG_CFG); 166 + writel_relaxed(0x02, pcm_data->ctl_regs + LPDDR2_MODE_REG_DATA); 167 + } 168 + 169 + /* 170 + * Execute lpddr2-nvm operations 171 + */ 172 + static int lpddr2_nvm_do_op(struct map_info *map, u_long cmd_code, 173 + u_long cmd_data, u_long cmd_add, u_long cmd_mpr, u_char *buf) 174 + { 175 + map_word add_l = { {0} }, add_h = { {0} }, mpr_l = { {0} }, 176 + mpr_h = { 
{0} }, data_l = { {0} }, cmd = { {0} }, 177 + exec_cmd = { {0} }, sr; 178 + map_word data_h = { {0} }; /* only for 2x x16 devices stacked */ 179 + u_long i, status_reg, prg_buff_ofs; 180 + struct pcm_int_data *pcm_data = map->fldrv_priv; 181 + u_int sr_ok_datamask = build_sr_ok_datamask(pcm_data->bus_width); 182 + 183 + /* Builds low and high words for OW Control Registers */ 184 + add_l.x[0] = cmd_add & 0x0000FFFF; 185 + add_h.x[0] = (cmd_add >> 16) & 0x0000FFFF; 186 + mpr_l.x[0] = cmd_mpr & 0x0000FFFF; 187 + mpr_h.x[0] = (cmd_mpr >> 16) & 0x0000FFFF; 188 + cmd.x[0] = cmd_code & 0x0000FFFF; 189 + exec_cmd.x[0] = 0x0001; 190 + data_l.x[0] = cmd_data & 0x0000FFFF; 191 + data_h.x[0] = (cmd_data >> 16) & 0x0000FFFF; /* only for 2x x16 */ 192 + 193 + /* Set Overlay Window Control Registers */ 194 + map_write(map, cmd, ow_reg_add(map, CMD_CODE_OFS)); 195 + map_write(map, data_l, ow_reg_add(map, CMD_DATA_OFS)); 196 + map_write(map, add_l, ow_reg_add(map, CMD_ADD_L_OFS)); 197 + map_write(map, add_h, ow_reg_add(map, CMD_ADD_H_OFS)); 198 + map_write(map, mpr_l, ow_reg_add(map, MPR_L_OFS)); 199 + map_write(map, mpr_h, ow_reg_add(map, MPR_H_OFS)); 200 + if (pcm_data->bus_width == 0x0004) { /* 2x16 devices stacked */ 201 + map_write(map, cmd, ow_reg_add(map, CMD_CODE_OFS) + 2); 202 + map_write(map, data_h, ow_reg_add(map, CMD_DATA_OFS) + 2); 203 + map_write(map, add_l, ow_reg_add(map, CMD_ADD_L_OFS) + 2); 204 + map_write(map, add_h, ow_reg_add(map, CMD_ADD_H_OFS) + 2); 205 + map_write(map, mpr_l, ow_reg_add(map, MPR_L_OFS) + 2); 206 + map_write(map, mpr_h, ow_reg_add(map, MPR_H_OFS) + 2); 207 + } 208 + 209 + /* Fill Program Buffer */ 210 + if ((cmd_code == LPDDR2_NVM_BUF_PROGRAM) || 211 + (cmd_code == LPDDR2_NVM_BUF_OVERWRITE)) { 212 + prg_buff_ofs = (map_read(map, 213 + ow_reg_add(map, PRG_BUFFER_OFS))).x[0]; 214 + for (i = 0; i < cmd_mpr; i++) { 215 + map_write(map, build_map_word(buf[i]), map->pfow_base + 216 + prg_buff_ofs + i); 217 + } 218 + } 219 + 220 + /* Command 
Execute */ 221 + map_write(map, exec_cmd, ow_reg_add(map, CMD_EXEC_OFS)); 222 + if (pcm_data->bus_width == 0x0004) /* 2x16 devices stacked */ 223 + map_write(map, exec_cmd, ow_reg_add(map, CMD_EXEC_OFS) + 2); 224 + 225 + /* Status Register Check */ 226 + do { 227 + sr = map_read(map, ow_reg_add(map, STATUS_REG_OFS)); 228 + status_reg = sr.x[0]; 229 + if (pcm_data->bus_width == 0x0004) {/* 2x16 devices stacked */ 230 + sr = map_read(map, ow_reg_add(map, 231 + STATUS_REG_OFS) + 2); 232 + status_reg += sr.x[0] << 16; 233 + } 234 + } while ((status_reg & sr_ok_datamask) != sr_ok_datamask); 235 + 236 + return (((status_reg & sr_ok_datamask) == sr_ok_datamask) ? 0 : -EIO); 237 + } 238 + 239 + /* 240 + * Execute lpddr2-nvm operations @ block level 241 + */ 242 + static int lpddr2_nvm_do_block_op(struct mtd_info *mtd, loff_t start_add, 243 + uint64_t len, u_char block_op) 244 + { 245 + struct map_info *map = mtd->priv; 246 + u_long add, end_add; 247 + int ret = 0; 248 + 249 + mutex_lock(&lpdd2_nvm_mutex); 250 + 251 + ow_enable(map); 252 + 253 + add = start_add; 254 + end_add = add + len; 255 + 256 + do { 257 + ret = lpddr2_nvm_do_op(map, block_op, 0x00, add, add, NULL); 258 + if (ret) 259 + goto out; 260 + add += mtd->erasesize; 261 + } while (add < end_add); 262 + 263 + out: 264 + ow_disable(map); 265 + mutex_unlock(&lpdd2_nvm_mutex); 266 + return ret; 267 + } 268 + 269 + /* 270 + * verify presence of PFOW string 271 + */ 272 + static int lpddr2_nvm_pfow_present(struct map_info *map) 273 + { 274 + map_word pfow_val[4]; 275 + unsigned int found = 1; 276 + 277 + mutex_lock(&lpdd2_nvm_mutex); 278 + 279 + ow_enable(map); 280 + 281 + /* Load string from array */ 282 + pfow_val[0] = map_read(map, ow_reg_add(map, PFOW_QUERY_STRING_P)); 283 + pfow_val[1] = map_read(map, ow_reg_add(map, PFOW_QUERY_STRING_F)); 284 + pfow_val[2] = map_read(map, ow_reg_add(map, PFOW_QUERY_STRING_O)); 285 + pfow_val[3] = map_read(map, ow_reg_add(map, PFOW_QUERY_STRING_W)); 286 + 287 + /* Verify the 
string loaded vs expected */ 288 + if (!map_word_equal(map, build_map_word('P'), pfow_val[0])) 289 + found = 0; 290 + if (!map_word_equal(map, build_map_word('F'), pfow_val[1])) 291 + found = 0; 292 + if (!map_word_equal(map, build_map_word('O'), pfow_val[2])) 293 + found = 0; 294 + if (!map_word_equal(map, build_map_word('W'), pfow_val[3])) 295 + found = 0; 296 + 297 + ow_disable(map); 298 + 299 + mutex_unlock(&lpdd2_nvm_mutex); 300 + 301 + return found; 302 + } 303 + 304 + /* 305 + * lpddr2_nvm driver read method 306 + */ 307 + static int lpddr2_nvm_read(struct mtd_info *mtd, loff_t start_add, 308 + size_t len, size_t *retlen, u_char *buf) 309 + { 310 + struct map_info *map = mtd->priv; 311 + 312 + mutex_lock(&lpdd2_nvm_mutex); 313 + 314 + *retlen = len; 315 + 316 + map_copy_from(map, buf, start_add, *retlen); 317 + 318 + mutex_unlock(&lpdd2_nvm_mutex); 319 + return 0; 320 + } 321 + 322 + /* 323 + * lpddr2_nvm driver write method 324 + */ 325 + static int lpddr2_nvm_write(struct mtd_info *mtd, loff_t start_add, 326 + size_t len, size_t *retlen, const u_char *buf) 327 + { 328 + struct map_info *map = mtd->priv; 329 + struct pcm_int_data *pcm_data = map->fldrv_priv; 330 + u_long add, current_len, tot_len, target_len, my_data; 331 + u_char *write_buf = (u_char *)buf; 332 + int ret = 0; 333 + 334 + mutex_lock(&lpdd2_nvm_mutex); 335 + 336 + ow_enable(map); 337 + 338 + /* Set start value for the variables */ 339 + add = start_add; 340 + target_len = len; 341 + tot_len = 0; 342 + 343 + while (tot_len < target_len) { 344 + if (!(IS_ALIGNED(add, mtd->writesize))) { /* do sw program */ 345 + my_data = write_buf[tot_len]; 346 + my_data += (write_buf[tot_len+1]) << 8; 347 + if (pcm_data->bus_width == 0x0004) {/* 2x16 devices */ 348 + my_data += (write_buf[tot_len+2]) << 16; 349 + my_data += (write_buf[tot_len+3]) << 24; 350 + } 351 + ret = lpddr2_nvm_do_op(map, LPDDR2_NVM_SW_OVERWRITE, 352 + my_data, add, 0x00, NULL); 353 + if (ret) 354 + goto out; 355 + 356 + add += 
pcm_data->bus_width; 357 + tot_len += pcm_data->bus_width; 358 + } else { /* do buffer program */ 359 + current_len = min(target_len - tot_len, 360 + (u_long) mtd->writesize); 361 + ret = lpddr2_nvm_do_op(map, LPDDR2_NVM_BUF_OVERWRITE, 362 + 0x00, add, current_len, write_buf + tot_len); 363 + if (ret) 364 + goto out; 365 + 366 + add += current_len; 367 + tot_len += current_len; 368 + } 369 + } 370 + 371 + out: 372 + *retlen = tot_len; 373 + ow_disable(map); 374 + mutex_unlock(&lpdd2_nvm_mutex); 375 + return ret; 376 + } 377 + 378 + /* 379 + * lpddr2_nvm driver erase method 380 + */ 381 + static int lpddr2_nvm_erase(struct mtd_info *mtd, struct erase_info *instr) 382 + { 383 + int ret = lpddr2_nvm_do_block_op(mtd, instr->addr, instr->len, 384 + LPDDR2_NVM_ERASE); 385 + if (!ret) { 386 + instr->state = MTD_ERASE_DONE; 387 + mtd_erase_callback(instr); 388 + } 389 + 390 + return ret; 391 + } 392 + 393 + /* 394 + * lpddr2_nvm driver unlock method 395 + */ 396 + static int lpddr2_nvm_unlock(struct mtd_info *mtd, loff_t start_add, 397 + uint64_t len) 398 + { 399 + return lpddr2_nvm_do_block_op(mtd, start_add, len, LPDDR2_NVM_UNLOCK); 400 + } 401 + 402 + /* 403 + * lpddr2_nvm driver lock method 404 + */ 405 + static int lpddr2_nvm_lock(struct mtd_info *mtd, loff_t start_add, 406 + uint64_t len) 407 + { 408 + return lpddr2_nvm_do_block_op(mtd, start_add, len, LPDDR2_NVM_LOCK); 409 + } 410 + 411 + /* 412 + * lpddr2_nvm driver probe method 413 + */ 414 + static int lpddr2_nvm_probe(struct platform_device *pdev) 415 + { 416 + struct map_info *map; 417 + struct mtd_info *mtd; 418 + struct resource *add_range; 419 + struct resource *control_regs; 420 + struct pcm_int_data *pcm_data; 421 + 422 + /* Allocate memory control_regs data structures */ 423 + pcm_data = devm_kzalloc(&pdev->dev, sizeof(*pcm_data), GFP_KERNEL); 424 + if (!pcm_data) 425 + return -ENOMEM; 426 + 427 + pcm_data->bus_width = BUS_WIDTH; 428 + 429 + /* Allocate memory for map_info & mtd_info data structures */ 
430 + map = devm_kzalloc(&pdev->dev, sizeof(*map), GFP_KERNEL); 431 + if (!map) 432 + return -ENOMEM; 433 + 434 + mtd = devm_kzalloc(&pdev->dev, sizeof(*mtd), GFP_KERNEL); 435 + if (!mtd) 436 + return -ENOMEM; 437 + 438 + /* lpddr2_nvm address range */ 439 + add_range = platform_get_resource(pdev, IORESOURCE_MEM, 0); 440 + 441 + /* Populate map_info data structure */ 442 + *map = (struct map_info) { 443 + .virt = devm_ioremap_resource(&pdev->dev, add_range), 444 + .name = pdev->dev.init_name, 445 + .phys = add_range->start, 446 + .size = resource_size(add_range), 447 + .bankwidth = pcm_data->bus_width / 2, 448 + .pfow_base = OW_BASE_ADDRESS, 449 + .fldrv_priv = pcm_data, 450 + }; 451 + if (IS_ERR(map->virt)) 452 + return PTR_ERR(map->virt); 453 + 454 + simple_map_init(map); /* fill with default methods */ 455 + 456 + control_regs = platform_get_resource(pdev, IORESOURCE_MEM, 1); 457 + pcm_data->ctl_regs = devm_ioremap_resource(&pdev->dev, control_regs); 458 + if (IS_ERR(pcm_data->ctl_regs)) 459 + return PTR_ERR(pcm_data->ctl_regs); 460 + 461 + /* Populate mtd_info data structure */ 462 + *mtd = (struct mtd_info) { 463 + .name = pdev->dev.init_name, 464 + .type = MTD_RAM, 465 + .priv = map, 466 + .size = resource_size(add_range), 467 + .erasesize = ERASE_BLOCKSIZE * pcm_data->bus_width, 468 + .writesize = 1, 469 + .writebufsize = WRITE_BUFFSIZE * pcm_data->bus_width, 470 + .flags = (MTD_CAP_NVRAM | MTD_POWERUP_LOCK), 471 + ._read = lpddr2_nvm_read, 472 + ._write = lpddr2_nvm_write, 473 + ._erase = lpddr2_nvm_erase, 474 + ._unlock = lpddr2_nvm_unlock, 475 + ._lock = lpddr2_nvm_lock, 476 + }; 477 + 478 + /* Verify the presence of the device looking for PFOW string */ 479 + if (!lpddr2_nvm_pfow_present(map)) { 480 + pr_err("device not recognized\n"); 481 + return -EINVAL; 482 + } 483 + /* Parse partitions and register the MTD device */ 484 + return mtd_device_parse_register(mtd, NULL, NULL, NULL, 0); 485 + } 486 + 487 + /* 488 + * lpddr2_nvm driver remove method 489 + 
*/ 490 + static int lpddr2_nvm_remove(struct platform_device *pdev) 491 + { 492 + return mtd_device_unregister(dev_get_drvdata(&pdev->dev)); 493 + } 494 + 495 + /* Initialize platform_driver data structure for lpddr2_nvm */ 496 + static struct platform_driver lpddr2_nvm_drv = { 497 + .driver = { 498 + .name = "lpddr2_nvm", 499 + }, 500 + .probe = lpddr2_nvm_probe, 501 + .remove = lpddr2_nvm_remove, 502 + }; 503 + 504 + module_platform_driver(lpddr2_nvm_drv); 505 + MODULE_LICENSE("GPL"); 506 + MODULE_AUTHOR("Vincenzo Aliberti <vincenzo.aliberti@gmail.com>"); 507 + MODULE_DESCRIPTION("MTD driver for LPDDR2-NVM PCM memories");
+2 -2
drivers/mtd/maps/Kconfig
··· 108 108 109 109 config MTD_SC520CDP 110 110 tristate "CFI Flash device mapped on AMD SC520 CDP" 111 - depends on X86 && MTD_CFI 111 + depends on (MELAN || COMPILE_TEST) && MTD_CFI 112 112 help 113 113 The SC520 CDP board has two banks of CFI-compliant chips and one 114 114 Dual-in-line JEDEC chip. This 'mapping' driver supports that ··· 116 116 117 117 config MTD_NETSC520 118 118 tristate "CFI Flash device mapped on AMD NetSc520" 119 - depends on X86 && MTD_CFI 119 + depends on (MELAN || COMPILE_TEST) && MTD_CFI 120 120 help 121 121 This enables access routines for the flash chips on the AMD NetSc520 122 122 demonstration board. If you have one of these boards and would like
+3 -3
drivers/mtd/maps/sc520cdp.c
··· 183 183 184 184 static void sc520cdp_setup_par(void) 185 185 { 186 - volatile unsigned long __iomem *mmcr; 186 + unsigned long __iomem *mmcr; 187 187 unsigned long mmcr_val; 188 188 int i, j; 189 189 ··· 203 203 */ 204 204 for(i = 0; i < NUM_FLASH_BANKS; i++) { /* for each par_table entry */ 205 205 for(j = 0; j < NUM_SC520_PAR; j++) { /* for each PAR register */ 206 - mmcr_val = mmcr[SC520_PAR(j)]; 206 + mmcr_val = readl(&mmcr[SC520_PAR(j)]); 207 207 /* if target device field matches, reprogram the PAR */ 208 208 if((mmcr_val & SC520_PAR_TRGDEV) == par_table[i].trgdev) 209 209 { 210 - mmcr[SC520_PAR(j)] = par_table[i].new_par; 210 + writel(par_table[i].new_par, &mmcr[SC520_PAR(j)]); 211 211 break; 212 212 } 213 213 }
+1 -24
drivers/mtd/maps/solutionengine.c
··· 33 33 34 34 static const char * const probes[] = { "RedBoot", "cmdlinepart", NULL }; 35 35 36 - #ifdef CONFIG_MTD_SUPERH_RESERVE 37 - static struct mtd_partition superh_se_partitions[] = { 38 - /* Reserved for boot code, read-only */ 39 - { 40 - .name = "flash_boot", 41 - .offset = 0x00000000, 42 - .size = CONFIG_MTD_SUPERH_RESERVE, 43 - .mask_flags = MTD_WRITEABLE, 44 - }, 45 - /* All else is writable (e.g. JFFS) */ 46 - { 47 - .name = "Flash FS", 48 - .offset = MTDPART_OFS_NXTBLK, 49 - .size = MTDPART_SIZ_FULL, 50 - } 51 - }; 52 - #define NUM_PARTITIONS ARRAY_SIZE(superh_se_partitions) 53 - #else 54 - #define superh_se_partitions NULL 55 - #define NUM_PARTITIONS 0 56 - #endif /* CONFIG_MTD_SUPERH_RESERVE */ 57 - 58 36 static int __init init_soleng_maps(void) 59 37 { 60 38 /* First probe at offset 0 */ ··· 70 92 mtd_device_register(eprom_mtd, NULL, 0); 71 93 } 72 94 73 - mtd_device_parse_register(flash_mtd, probes, NULL, 74 - superh_se_partitions, NUM_PARTITIONS); 95 + mtd_device_parse_register(flash_mtd, probes, NULL, NULL, 0); 75 96 76 97 return 0; 77 98 }
+6
drivers/mtd/mtd_blkdevs.c
··· 87 87 if (req->cmd_type != REQ_TYPE_FS) 88 88 return -EIO; 89 89 90 + if (req->cmd_flags & REQ_FLUSH) 91 + return tr->flush(dev); 92 + 90 93 if (blk_rq_pos(req) + blk_rq_cur_sectors(req) > 91 94 get_capacity(req->rq_disk)) 92 95 return -EIO; ··· 409 406 410 407 if (!new->rq) 411 408 goto error3; 409 + 410 + if (tr->flush) 411 + blk_queue_flush(new->rq, REQ_FLUSH); 412 412 413 413 new->rq->queuedata = new; 414 414 blk_queue_logical_block_size(new->rq, tr->blksize);
+11 -9
drivers/mtd/mtdchar.c
··· 568 568 { 569 569 struct mtd_write_req req; 570 570 struct mtd_oob_ops ops; 571 - void __user *usr_data, *usr_oob; 571 + const void __user *usr_data, *usr_oob; 572 572 int ret; 573 573 574 - if (copy_from_user(&req, argp, sizeof(req)) || 575 - !access_ok(VERIFY_READ, req.usr_data, req.len) || 576 - !access_ok(VERIFY_READ, req.usr_oob, req.ooblen)) 574 + if (copy_from_user(&req, argp, sizeof(req))) 577 575 return -EFAULT; 576 + 577 + usr_data = (const void __user *)(uintptr_t)req.usr_data; 578 + usr_oob = (const void __user *)(uintptr_t)req.usr_oob; 579 + if (!access_ok(VERIFY_READ, usr_data, req.len) || 580 + !access_ok(VERIFY_READ, usr_oob, req.ooblen)) 581 + return -EFAULT; 582 + 578 583 if (!mtd->_write_oob) 579 584 return -EOPNOTSUPP; 580 585 ··· 588 583 ops.ooblen = (size_t)req.ooblen; 589 584 ops.ooboffs = 0; 590 585 591 - usr_data = (void __user *)(uintptr_t)req.usr_data; 592 - usr_oob = (void __user *)(uintptr_t)req.usr_oob; 593 - 594 - if (req.usr_data) { 586 + if (usr_data) { 595 587 ops.datbuf = memdup_user(usr_data, ops.len); 596 588 if (IS_ERR(ops.datbuf)) 597 589 return PTR_ERR(ops.datbuf); ··· 596 594 ops.datbuf = NULL; 597 595 } 598 596 599 - if (req.usr_oob) { 597 + if (usr_oob) { 600 598 ops.oobbuf = memdup_user(usr_oob, ops.ooblen); 601 599 if (IS_ERR(ops.oobbuf)) { 602 600 kfree(ops.datbuf);
+4 -9
drivers/mtd/nand/bf5xx_nand.c
··· 679 679 peripheral_free_list(bfin_nfc_pin_req); 680 680 bf5xx_nand_dma_remove(info); 681 681 682 - /* free the common resources */ 683 - kfree(info); 684 - 685 682 return 0; 686 683 } 687 684 ··· 739 742 return -EFAULT; 740 743 } 741 744 742 - info = kzalloc(sizeof(*info), GFP_KERNEL); 745 + info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL); 743 746 if (info == NULL) { 744 747 err = -ENOMEM; 745 - goto out_err_kzalloc; 748 + goto out_err; 746 749 } 747 750 748 751 platform_set_drvdata(pdev, info); ··· 787 790 /* initialise the hardware */ 788 791 err = bf5xx_nand_hw_init(info); 789 792 if (err) 790 - goto out_err_hw_init; 793 + goto out_err; 791 794 792 795 /* setup hardware ECC data struct */ 793 796 if (hardware_ecc) { ··· 824 827 825 828 out_err_nand_scan: 826 829 bf5xx_nand_dma_remove(info); 827 - out_err_hw_init: 828 - kfree(info); 829 - out_err_kzalloc: 830 + out_err: 830 831 peripheral_free_list(bfin_nfc_pin_req); 831 832 832 833 return err;
+3 -4
drivers/mtd/nand/denali.c
··· 1233 1233 return status; 1234 1234 } 1235 1235 1236 - static void denali_erase(struct mtd_info *mtd, int page) 1236 + static int denali_erase(struct mtd_info *mtd, int page) 1237 1237 { 1238 1238 struct denali_nand_info *denali = mtd_to_denali(mtd); 1239 1239 ··· 1250 1250 irq_status = wait_for_irq(denali, INTR_STATUS__ERASE_COMP | 1251 1251 INTR_STATUS__ERASE_FAIL); 1252 1252 1253 - denali->status = (irq_status & INTR_STATUS__ERASE_FAIL) ? 1254 - NAND_STATUS_FAIL : PASS; 1253 + return (irq_status & INTR_STATUS__ERASE_FAIL) ? NAND_STATUS_FAIL : PASS; 1255 1254 } 1256 1255 1257 1256 static void denali_cmdfunc(struct mtd_info *mtd, unsigned int cmd, int col, ··· 1583 1584 denali->nand.ecc.write_page_raw = denali_write_page_raw; 1584 1585 denali->nand.ecc.read_oob = denali_read_oob; 1585 1586 denali->nand.ecc.write_oob = denali_write_oob; 1586 - denali->nand.erase_cmd = denali_erase; 1587 + denali->nand.erase = denali_erase; 1587 1588 1588 1589 if (nand_scan_tail(&denali->mtd)) { 1589 1590 ret = -ENXIO;
+4 -2
drivers/mtd/nand/docg4.c
··· 872 872 return 0; 873 873 } 874 874 875 - static void docg4_erase_block(struct mtd_info *mtd, int page) 875 + static int docg4_erase_block(struct mtd_info *mtd, int page) 876 876 { 877 877 struct nand_chip *nand = mtd->priv; 878 878 struct docg4_priv *doc = nand->priv; ··· 916 916 write_nop(docptr); 917 917 poll_status(doc); 918 918 write_nop(docptr); 919 + 920 + return nand->waitfunc(mtd, nand); 919 921 } 920 922 921 923 static int write_page(struct mtd_info *mtd, struct nand_chip *nand, ··· 1238 1236 nand->block_markbad = docg4_block_markbad; 1239 1237 nand->read_buf = docg4_read_buf; 1240 1238 nand->write_buf = docg4_write_buf16; 1241 - nand->erase_cmd = docg4_erase_block; 1239 + nand->erase = docg4_erase_block; 1242 1240 nand->ecc.read_page = docg4_read_page; 1243 1241 nand->ecc.write_page = docg4_write_page; 1244 1242 nand->ecc.read_page_raw = docg4_read_page_raw;
+14
drivers/mtd/nand/fsl_elbc_nand.c
··· 723 723 return 0; 724 724 } 725 725 726 + /* ECC will be calculated automatically, and errors will be detected in 727 + * waitfunc. 728 + */ 729 + static int fsl_elbc_write_subpage(struct mtd_info *mtd, struct nand_chip *chip, 730 + uint32_t offset, uint32_t data_len, 731 + const uint8_t *buf, int oob_required) 732 + { 733 + fsl_elbc_write_buf(mtd, buf, mtd->writesize); 734 + fsl_elbc_write_buf(mtd, chip->oob_poi, mtd->oobsize); 735 + 736 + return 0; 737 + } 738 + 726 739 static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv) 727 740 { 728 741 struct fsl_lbc_ctrl *ctrl = priv->ctrl; ··· 774 761 775 762 chip->ecc.read_page = fsl_elbc_read_page; 776 763 chip->ecc.write_page = fsl_elbc_write_page; 764 + chip->ecc.write_subpage = fsl_elbc_write_subpage; 777 765 778 766 /* If CS Base Register selects full hardware ECC then use it */ 779 767 if ((in_be32(&lbc->bank[priv->bank].br) & BR_DECC) ==
+13 -8
drivers/mtd/nand/fsl_ifc_nand.c
··· 56 56 struct nand_hw_control controller; 57 57 struct fsl_ifc_mtd *chips[FSL_IFC_BANK_COUNT]; 58 58 59 - u8 __iomem *addr; /* Address of assigned IFC buffer */ 59 + void __iomem *addr; /* Address of assigned IFC buffer */ 60 60 unsigned int page; /* Last page written to / read from */ 61 61 unsigned int read_bytes;/* Number of bytes read during command */ 62 62 unsigned int column; /* Saved column from SEQIN */ ··· 591 591 * The chip always seems to report that it is 592 592 * write-protected, even when it is not. 593 593 */ 594 - setbits8(ifc_nand_ctrl->addr, NAND_STATUS_WP); 594 + if (chip->options & NAND_BUSWIDTH_16) 595 + setbits16(ifc_nand_ctrl->addr, NAND_STATUS_WP); 596 + else 597 + setbits8(ifc_nand_ctrl->addr, NAND_STATUS_WP); 595 598 return; 596 599 597 600 case NAND_CMD_RESET: ··· 639 636 len = bufsize - ifc_nand_ctrl->index; 640 637 } 641 638 642 - memcpy_toio(&ifc_nand_ctrl->addr[ifc_nand_ctrl->index], buf, len); 639 + memcpy_toio(ifc_nand_ctrl->addr + ifc_nand_ctrl->index, buf, len); 643 640 ifc_nand_ctrl->index += len; 644 641 } 645 642 ··· 651 648 { 652 649 struct nand_chip *chip = mtd->priv; 653 650 struct fsl_ifc_mtd *priv = chip->priv; 651 + unsigned int offset; 654 652 655 653 /* 656 654 * If there are still bytes in the IFC buffer, then use the 657 655 * next byte. 658 656 */ 659 - if (ifc_nand_ctrl->index < ifc_nand_ctrl->read_bytes) 660 - return in_8(&ifc_nand_ctrl->addr[ifc_nand_ctrl->index++]); 657 + if (ifc_nand_ctrl->index < ifc_nand_ctrl->read_bytes) { 658 + offset = ifc_nand_ctrl->index++; 659 + return in_8(ifc_nand_ctrl->addr + offset); 660 + } 661 661 662 662 dev_err(priv->dev, "%s: beyond end of buffer\n", __func__); 663 663 return ERR_BYTE; ··· 681 675 * next byte. 
682 676 */ 683 677 if (ifc_nand_ctrl->index < ifc_nand_ctrl->read_bytes) { 684 - data = in_be16((uint16_t __iomem *)&ifc_nand_ctrl-> 685 - addr[ifc_nand_ctrl->index]); 678 + data = in_be16(ifc_nand_ctrl->addr + ifc_nand_ctrl->index); 686 679 ifc_nand_ctrl->index += 2; 687 680 return (uint8_t) data; 688 681 } ··· 706 701 707 702 avail = min((unsigned int)len, 708 703 ifc_nand_ctrl->read_bytes - ifc_nand_ctrl->index); 709 - memcpy_fromio(buf, &ifc_nand_ctrl->addr[ifc_nand_ctrl->index], avail); 704 + memcpy_fromio(buf, ifc_nand_ctrl->addr + ifc_nand_ctrl->index, avail); 710 705 ifc_nand_ctrl->index += avail; 711 706 712 707 if (len > avail)
+6 -6
drivers/mtd/nand/gpmi-nand/bch-regs.h
··· 54 54 #define MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0 11 55 55 #define MX6Q_BM_BCH_FLASH0LAYOUT0_ECC0 (0x1f << MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0) 56 56 #define BF_BCH_FLASH0LAYOUT0_ECC0(v, x) \ 57 - (GPMI_IS_MX6Q(x) \ 57 + (GPMI_IS_MX6(x) \ 58 58 ? (((v) << MX6Q_BP_BCH_FLASH0LAYOUT0_ECC0) \ 59 59 & MX6Q_BM_BCH_FLASH0LAYOUT0_ECC0) \ 60 60 : (((v) << BP_BCH_FLASH0LAYOUT0_ECC0) \ ··· 65 65 #define MX6Q_BM_BCH_FLASH0LAYOUT0_GF_13_14 \ 66 66 (0x1 << MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14) 67 67 #define BF_BCH_FLASH0LAYOUT0_GF(v, x) \ 68 - ((GPMI_IS_MX6Q(x) && ((v) == 14)) \ 68 + ((GPMI_IS_MX6(x) && ((v) == 14)) \ 69 69 ? (((1) << MX6Q_BP_BCH_FLASH0LAYOUT0_GF_13_14) \ 70 70 & MX6Q_BM_BCH_FLASH0LAYOUT0_GF_13_14) \ 71 71 : 0 \ ··· 77 77 #define MX6Q_BM_BCH_FLASH0LAYOUT0_DATA0_SIZE \ 78 78 (0x3ff << BP_BCH_FLASH0LAYOUT0_DATA0_SIZE) 79 79 #define BF_BCH_FLASH0LAYOUT0_DATA0_SIZE(v, x) \ 80 - (GPMI_IS_MX6Q(x) \ 80 + (GPMI_IS_MX6(x) \ 81 81 ? (((v) >> 2) & MX6Q_BM_BCH_FLASH0LAYOUT0_DATA0_SIZE) \ 82 82 : ((v) & BM_BCH_FLASH0LAYOUT0_DATA0_SIZE) \ 83 83 ) ··· 96 96 #define MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN 11 97 97 #define MX6Q_BM_BCH_FLASH0LAYOUT1_ECCN (0x1f << MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN) 98 98 #define BF_BCH_FLASH0LAYOUT1_ECCN(v, x) \ 99 - (GPMI_IS_MX6Q(x) \ 99 + (GPMI_IS_MX6(x) \ 100 100 ? (((v) << MX6Q_BP_BCH_FLASH0LAYOUT1_ECCN) \ 101 101 & MX6Q_BM_BCH_FLASH0LAYOUT1_ECCN) \ 102 102 : (((v) << BP_BCH_FLASH0LAYOUT1_ECCN) \ ··· 107 107 #define MX6Q_BM_BCH_FLASH0LAYOUT1_GF_13_14 \ 108 108 (0x1 << MX6Q_BP_BCH_FLASH0LAYOUT1_GF_13_14) 109 109 #define BF_BCH_FLASH0LAYOUT1_GF(v, x) \ 110 - ((GPMI_IS_MX6Q(x) && ((v) == 14)) \ 110 + ((GPMI_IS_MX6(x) && ((v) == 14)) \ 111 111 ? 
(((1) << MX6Q_BP_BCH_FLASH0LAYOUT1_GF_13_14) \ 112 112 & MX6Q_BM_BCH_FLASH0LAYOUT1_GF_13_14) \ 113 113 : 0 \ ··· 119 119 #define MX6Q_BM_BCH_FLASH0LAYOUT1_DATAN_SIZE \ 120 120 (0x3ff << BP_BCH_FLASH0LAYOUT1_DATAN_SIZE) 121 121 #define BF_BCH_FLASH0LAYOUT1_DATAN_SIZE(v, x) \ 122 - (GPMI_IS_MX6Q(x) \ 122 + (GPMI_IS_MX6(x) \ 123 123 ? (((v) >> 2) & MX6Q_BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) \ 124 124 : ((v) & BM_BCH_FLASH0LAYOUT1_DATAN_SIZE) \ 125 125 )
+4 -7
drivers/mtd/nand/gpmi-nand/gpmi-lib.c
··· 861 861 struct resources *r = &this->resources; 862 862 unsigned long rate = clk_get_rate(r->clock[0]); 863 863 int mode = this->timing_mode; 864 - int dll_threshold = 16; /* in ns */ 864 + int dll_threshold = this->devdata->max_chain_delay; 865 865 unsigned long delay; 866 866 unsigned long clk_period; 867 867 int t_rea; ··· 885 885 886 886 /* [3] for GPMI_HW_GPMI_CTRL1 */ 887 887 hw->wrn_dly_sel = BV_GPMI_CTRL1_WRN_DLY_SEL_NO_DELAY; 888 - 889 - if (GPMI_IS_MX6Q(this)) 890 - dll_threshold = 12; 891 888 892 889 /* 893 890 * Enlarge 10 times for the numerator and denominator in {3}. ··· 971 974 struct nand_chip *chip = &this->nand; 972 975 973 976 /* Enable the asynchronous EDO feature. */ 974 - if (GPMI_IS_MX6Q(this) && chip->onfi_version) { 977 + if (GPMI_IS_MX6(this) && chip->onfi_version) { 975 978 int mode = onfi_get_async_timing_mode(chip); 976 979 977 980 /* We only support the timing mode 4 and mode 5. */ ··· 1093 1096 if (GPMI_IS_MX23(this)) { 1094 1097 mask = MX23_BM_GPMI_DEBUG_READY0 << chip; 1095 1098 reg = readl(r->gpmi_regs + HW_GPMI_DEBUG); 1096 - } else if (GPMI_IS_MX28(this) || GPMI_IS_MX6Q(this)) { 1099 + } else if (GPMI_IS_MX28(this) || GPMI_IS_MX6(this)) { 1097 1100 /* 1098 1101 * In the imx6, all the ready/busy pins are bound 1099 1102 * together. So we only need to check chip 0. 1100 1103 */ 1101 - if (GPMI_IS_MX6Q(this)) 1104 + if (GPMI_IS_MX6(this)) 1102 1105 chip = 0; 1103 1106 1104 1107 /* MX28 shares the same R/B register as MX6Q. */
+42 -30
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
··· 53 53 .oobfree = { {.offset = 0, .length = 0} } 54 54 }; 55 55 56 + static const struct gpmi_devdata gpmi_devdata_imx23 = { 57 + .type = IS_MX23, 58 + .bch_max_ecc_strength = 20, 59 + .max_chain_delay = 16, 60 + }; 61 + 62 + static const struct gpmi_devdata gpmi_devdata_imx28 = { 63 + .type = IS_MX28, 64 + .bch_max_ecc_strength = 20, 65 + .max_chain_delay = 16, 66 + }; 67 + 68 + static const struct gpmi_devdata gpmi_devdata_imx6q = { 69 + .type = IS_MX6Q, 70 + .bch_max_ecc_strength = 40, 71 + .max_chain_delay = 12, 72 + }; 73 + 74 + static const struct gpmi_devdata gpmi_devdata_imx6sx = { 75 + .type = IS_MX6SX, 76 + .bch_max_ecc_strength = 62, 77 + .max_chain_delay = 12, 78 + }; 79 + 56 80 static irqreturn_t bch_irq(int irq, void *cookie) 57 81 { 58 82 struct gpmi_nand_data *this = cookie; ··· 126 102 /* The mx23/mx28 only support the GF13. */ 127 103 if (geo->gf_len == 14) 128 104 return false; 129 - 130 - if (geo->ecc_strength > MXS_ECC_STRENGTH_MAX) 131 - return false; 132 - } else if (GPMI_IS_MX6Q(this)) { 133 - if (geo->ecc_strength > MX6_ECC_STRENGTH_MAX) 134 - return false; 135 105 } 136 - return true; 106 + return geo->ecc_strength <= this->devdata->bch_max_ecc_strength; 137 107 } 138 108 139 109 /* ··· 288 270 "We can not support this nand chip." 289 271 " Its required ecc strength(%d) is beyond our" 290 272 " capability(%d).\n", geo->ecc_strength, 291 - (GPMI_IS_MX6Q(this) ? MX6_ECC_STRENGTH_MAX 292 - : MXS_ECC_STRENGTH_MAX)); 273 + this->devdata->bch_max_ecc_strength); 293 274 return -EINVAL; 294 275 } 295 276 ··· 589 572 } 590 573 591 574 /* Get extra clocks */ 592 - if (GPMI_IS_MX6Q(this)) 575 + if (GPMI_IS_MX6(this)) 593 576 extra_clks = extra_clks_for_mx6q; 594 577 if (!extra_clks) 595 578 return 0; ··· 607 590 r->clock[i] = clk; 608 591 } 609 592 610 - if (GPMI_IS_MX6Q(this)) 593 + if (GPMI_IS_MX6(this)) 611 594 /* 612 - * Set the default value for the gpmi clock in mx6q: 595 + * Set the default value for the gpmi clock. 
613 596 * 614 597 * If you want to use the ONFI nand which is in the 615 598 * Synchronous Mode, you should change the clock as you need. ··· 1672 1655 * (1) the chip is imx6, and 1673 1656 * (2) the size of the ECC parity is byte aligned. 1674 1657 */ 1675 - if (GPMI_IS_MX6Q(this) && 1658 + if (GPMI_IS_MX6(this) && 1676 1659 ((bch_geo->gf_len * bch_geo->ecc_strength) % 8) == 0) { 1677 1660 ecc->read_subpage = gpmi_ecc_read_subpage; 1678 1661 chip->options |= NAND_SUBPAGE_READ; ··· 1728 1711 if (ret) 1729 1712 goto err_out; 1730 1713 1731 - ret = nand_scan_ident(mtd, GPMI_IS_MX6Q(this) ? 2 : 1, NULL); 1714 + ret = nand_scan_ident(mtd, GPMI_IS_MX6(this) ? 2 : 1, NULL); 1732 1715 if (ret) 1733 1716 goto err_out; 1734 1717 ··· 1757 1740 return ret; 1758 1741 } 1759 1742 1760 - static const struct platform_device_id gpmi_ids[] = { 1761 - { .name = "imx23-gpmi-nand", .driver_data = IS_MX23, }, 1762 - { .name = "imx28-gpmi-nand", .driver_data = IS_MX28, }, 1763 - { .name = "imx6q-gpmi-nand", .driver_data = IS_MX6Q, }, 1764 - {} 1765 - }; 1766 - 1767 1743 static const struct of_device_id gpmi_nand_id_table[] = { 1768 1744 { 1769 1745 .compatible = "fsl,imx23-gpmi-nand", 1770 - .data = (void *)&gpmi_ids[IS_MX23], 1746 + .data = (void *)&gpmi_devdata_imx23, 1771 1747 }, { 1772 1748 .compatible = "fsl,imx28-gpmi-nand", 1773 - .data = (void *)&gpmi_ids[IS_MX28], 1749 + .data = (void *)&gpmi_devdata_imx28, 1774 1750 }, { 1775 1751 .compatible = "fsl,imx6q-gpmi-nand", 1776 - .data = (void *)&gpmi_ids[IS_MX6Q], 1752 + .data = (void *)&gpmi_devdata_imx6q, 1753 + }, { 1754 + .compatible = "fsl,imx6sx-gpmi-nand", 1755 + .data = (void *)&gpmi_devdata_imx6sx, 1777 1756 }, {} 1778 1757 }; 1779 1758 MODULE_DEVICE_TABLE(of, gpmi_nand_id_table); ··· 1780 1767 const struct of_device_id *of_id; 1781 1768 int ret; 1782 1769 1770 + this = devm_kzalloc(&pdev->dev, sizeof(*this), GFP_KERNEL); 1771 + if (!this) 1772 + return -ENOMEM; 1773 + 1783 1774 of_id = of_match_device(gpmi_nand_id_table, 
&pdev->dev); 1784 1775 if (of_id) { 1785 - pdev->id_entry = of_id->data; 1776 + this->devdata = of_id->data; 1786 1777 } else { 1787 1778 dev_err(&pdev->dev, "Failed to find the right device id.\n"); 1788 1779 return -ENODEV; 1789 1780 } 1790 - 1791 - this = devm_kzalloc(&pdev->dev, sizeof(*this), GFP_KERNEL); 1792 - if (!this) 1793 - return -ENOMEM; 1794 1781 1795 1782 platform_set_drvdata(pdev, this); 1796 1783 this->pdev = pdev; ··· 1836 1823 }, 1837 1824 .probe = gpmi_nand_probe, 1838 1825 .remove = gpmi_nand_remove, 1839 - .id_table = gpmi_ids, 1840 1826 }; 1841 1827 module_platform_driver(gpmi_nand_driver); 1842 1828
+20 -10
drivers/mtd/nand/gpmi-nand/gpmi-nand.h
··· 119 119 int8_t tRHOH_in_ns; 120 120 }; 121 121 122 + enum gpmi_type { 123 + IS_MX23, 124 + IS_MX28, 125 + IS_MX6Q, 126 + IS_MX6SX 127 + }; 128 + 129 + struct gpmi_devdata { 130 + enum gpmi_type type; 131 + int bch_max_ecc_strength; 132 + int max_chain_delay; /* See the async EDO mode */ 133 + }; 134 + 122 135 struct gpmi_nand_data { 123 136 /* flags */ 124 137 #define GPMI_ASYNC_EDO_ENABLED (1 << 0) 125 138 #define GPMI_TIMING_INIT_OK (1 << 1) 126 139 int flags; 140 + const struct gpmi_devdata *devdata; 127 141 128 142 /* System Interface */ 129 143 struct device *dev; ··· 295 281 #define STATUS_ERASED 0xff 296 282 #define STATUS_UNCORRECTABLE 0xfe 297 283 298 - /* BCH's bit correction capability. */ 299 - #define MXS_ECC_STRENGTH_MAX 20 /* mx23 and mx28 */ 300 - #define MX6_ECC_STRENGTH_MAX 40 284 + /* Use the devdata to distinguish different Archs. */ 285 + #define GPMI_IS_MX23(x) ((x)->devdata->type == IS_MX23) 286 + #define GPMI_IS_MX28(x) ((x)->devdata->type == IS_MX28) 287 + #define GPMI_IS_MX6Q(x) ((x)->devdata->type == IS_MX6Q) 288 + #define GPMI_IS_MX6SX(x) ((x)->devdata->type == IS_MX6SX) 301 289 302 - /* Use the platform_id to distinguish different Archs. */ 303 - #define IS_MX23 0x0 304 - #define IS_MX28 0x1 305 - #define IS_MX6Q 0x2 306 - #define GPMI_IS_MX23(x) ((x)->pdev->id_entry->driver_data == IS_MX23) 307 - #define GPMI_IS_MX28(x) ((x)->pdev->id_entry->driver_data == IS_MX28) 308 - #define GPMI_IS_MX6Q(x) ((x)->pdev->id_entry->driver_data == IS_MX6Q) 290 + #define GPMI_IS_MX6(x) (GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x)) 309 291 #endif
+86 -18
drivers/mtd/nand/nand_base.c
··· 37 37 #include <linux/err.h> 38 38 #include <linux/sched.h> 39 39 #include <linux/slab.h> 40 + #include <linux/mm.h> 40 41 #include <linux/types.h> 41 42 #include <linux/mtd/mtd.h> 42 43 #include <linux/mtd/nand.h> ··· 1205 1204 * ecc.pos. Let's make sure that there are no gaps in ECC positions. 1206 1205 */ 1207 1206 for (i = 0; i < eccfrag_len - 1; i++) { 1208 - if (eccpos[i + start_step * chip->ecc.bytes] + 1 != 1209 - eccpos[i + start_step * chip->ecc.bytes + 1]) { 1207 + if (eccpos[i + index] + 1 != eccpos[i + index + 1]) { 1210 1208 gaps = 1; 1211 1209 break; 1212 1210 } ··· 1501 1501 mtd->oobavail : mtd->oobsize; 1502 1502 1503 1503 uint8_t *bufpoi, *oob, *buf; 1504 + int use_bufpoi; 1504 1505 unsigned int max_bitflips = 0; 1505 1506 int retry_mode = 0; 1506 1507 bool ecc_fail = false; ··· 1524 1523 bytes = min(mtd->writesize - col, readlen); 1525 1524 aligned = (bytes == mtd->writesize); 1526 1525 1526 + if (!aligned) 1527 + use_bufpoi = 1; 1528 + else if (chip->options & NAND_USE_BOUNCE_BUFFER) 1529 + use_bufpoi = !virt_addr_valid(buf); 1530 + else 1531 + use_bufpoi = 0; 1532 + 1527 1533 /* Is the current page in the buffer? */ 1528 1534 if (realpage != chip->pagebuf || oob) { 1529 - bufpoi = aligned ? buf : chip->buffers->databuf; 1535 + bufpoi = use_bufpoi ? 
chip->buffers->databuf : buf; 1536 + 1537 + if (use_bufpoi && aligned) 1538 + pr_debug("%s: using read bounce buffer for buf@%p\n", 1539 + __func__, buf); 1530 1540 1531 1541 read_retry: 1532 1542 chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page); ··· 1559 1547 ret = chip->ecc.read_page(mtd, chip, bufpoi, 1560 1548 oob_required, page); 1561 1549 if (ret < 0) { 1562 - if (!aligned) 1550 + if (use_bufpoi) 1563 1551 /* Invalidate page cache */ 1564 1552 chip->pagebuf = -1; 1565 1553 break; ··· 1568 1556 max_bitflips = max_t(unsigned int, max_bitflips, ret); 1569 1557 1570 1558 /* Transfer not aligned data */ 1571 - if (!aligned) { 1559 + if (use_bufpoi) { 1572 1560 if (!NAND_HAS_SUBPAGE_READ(chip) && !oob && 1573 1561 !(mtd->ecc_stats.failed - ecc_failures) && 1574 1562 (ops->mode != MTD_OPS_RAW)) { ··· 2388 2376 int bytes = mtd->writesize; 2389 2377 int cached = writelen > bytes && page != blockmask; 2390 2378 uint8_t *wbuf = buf; 2379 + int use_bufpoi; 2380 + int part_pagewr = (column || writelen < (mtd->writesize - 1)); 2391 2381 2392 - /* Partial page write? 
*/ 2393 - if (unlikely(column || writelen < (mtd->writesize - 1))) { 2382 + if (part_pagewr) 2383 + use_bufpoi = 1; 2384 + else if (chip->options & NAND_USE_BOUNCE_BUFFER) 2385 + use_bufpoi = !virt_addr_valid(buf); 2386 + else 2387 + use_bufpoi = 0; 2388 + 2389 + /* Partial page write?, or need to use bounce buffer */ 2390 + if (use_bufpoi) { 2391 + pr_debug("%s: using write bounce buffer for buf@%p\n", 2392 + __func__, buf); 2394 2393 cached = 0; 2395 - bytes = min_t(int, bytes - column, (int) writelen); 2394 + if (part_pagewr) 2395 + bytes = min_t(int, bytes - column, writelen); 2396 2396 chip->pagebuf = -1; 2397 2397 memset(chip->buffers->databuf, 0xff, mtd->writesize); 2398 2398 memcpy(&chip->buffers->databuf[column], buf, bytes); ··· 2642 2618 } 2643 2619 2644 2620 /** 2645 - * single_erase_cmd - [GENERIC] NAND standard block erase command function 2621 + * single_erase - [GENERIC] NAND standard block erase command function 2646 2622 * @mtd: MTD device structure 2647 2623 * @page: the page address of the block which will be erased 2648 2624 * 2649 - * Standard erase command for NAND chips. 2625 + * Standard erase command for NAND chips. Returns NAND status. 
2650 2626 */ 2651 - static void single_erase_cmd(struct mtd_info *mtd, int page) 2627 + static int single_erase(struct mtd_info *mtd, int page) 2652 2628 { 2653 2629 struct nand_chip *chip = mtd->priv; 2654 2630 /* Send commands to erase a block */ 2655 2631 chip->cmdfunc(mtd, NAND_CMD_ERASE1, -1, page); 2656 2632 chip->cmdfunc(mtd, NAND_CMD_ERASE2, -1, -1); 2633 + 2634 + return chip->waitfunc(mtd, chip); 2657 2635 } 2658 2636 2659 2637 /** ··· 2736 2710 (page + pages_per_block)) 2737 2711 chip->pagebuf = -1; 2738 2712 2739 - chip->erase_cmd(mtd, page & chip->pagemask); 2740 - 2741 - status = chip->waitfunc(mtd, chip); 2713 + status = chip->erase(mtd, page & chip->pagemask); 2742 2714 2743 2715 /* 2744 2716 * See if operation failed and additional status checks are ··· 3631 3607 3632 3608 chip->onfi_version = 0; 3633 3609 if (!type->name || !type->pagesize) { 3634 - /* Check is chip is ONFI compliant */ 3610 + /* Check if the chip is ONFI compliant */ 3635 3611 if (nand_flash_detect_onfi(mtd, chip, &busw)) 3636 3612 goto ident_done; 3637 3613 ··· 3709 3685 } 3710 3686 3711 3687 chip->badblockbits = 8; 3712 - chip->erase_cmd = single_erase_cmd; 3688 + chip->erase = single_erase; 3713 3689 3714 3690 /* Do not replace user supplied command function! */ 3715 3691 if (mtd->writesize > 512 && chip->cmdfunc == nand_command) ··· 3794 3770 } 3795 3771 EXPORT_SYMBOL(nand_scan_ident); 3796 3772 3773 + /* 3774 + * Check if the chip configuration meet the datasheet requirements. 3775 + 3776 + * If our configuration corrects A bits per B bytes and the minimum 3777 + * required correction level is X bits per Y bytes, then we must ensure 3778 + * both of the following are true: 3779 + * 3780 + * (1) A / B >= X / Y 3781 + * (2) A >= X 3782 + * 3783 + * Requirement (1) ensures we can correct for the required bitflip density. 3784 + * Requirement (2) ensures we can correct even when all bitflips are clumped 3785 + * in the same sector. 
3786 + */ 3787 + static bool nand_ecc_strength_good(struct mtd_info *mtd) 3788 + { 3789 + struct nand_chip *chip = mtd->priv; 3790 + struct nand_ecc_ctrl *ecc = &chip->ecc; 3791 + int corr, ds_corr; 3792 + 3793 + if (ecc->size == 0 || chip->ecc_step_ds == 0) 3794 + /* Not enough information */ 3795 + return true; 3796 + 3797 + /* 3798 + * We get the number of corrected bits per page to compare 3799 + * the correction density. 3800 + */ 3801 + corr = (mtd->writesize * ecc->strength) / ecc->size; 3802 + ds_corr = (mtd->writesize * chip->ecc_strength_ds) / chip->ecc_step_ds; 3803 + 3804 + return corr >= ds_corr && ecc->strength >= chip->ecc_strength_ds; 3805 + } 3797 3806 3798 3807 /** 3799 3808 * nand_scan_tail - [NAND Interface] Scan for the NAND device ··· 4047 3990 ecc->layout->oobavail += ecc->layout->oobfree[i].length; 4048 3991 mtd->oobavail = ecc->layout->oobavail; 4049 3992 3993 + /* ECC sanity check: warn noisily if it's too weak */ 3994 + WARN_ON(!nand_ecc_strength_good(mtd)); 3995 + 4050 3996 /* 4051 3997 * Set the number of read / write steps for one page depending on ECC 4052 3998 * mode. ··· 4083 4023 chip->pagebuf = -1; 4084 4024 4085 4025 /* Large page NAND with SOFT_ECC should support subpage reads */ 4086 - if ((ecc->mode == NAND_ECC_SOFT) && (chip->page_shift > 9)) 4087 - chip->options |= NAND_SUBPAGE_READ; 4026 + switch (ecc->mode) { 4027 + case NAND_ECC_SOFT: 4028 + case NAND_ECC_SOFT_BCH: 4029 + if (chip->page_shift > 9) 4030 + chip->options |= NAND_SUBPAGE_READ; 4031 + break; 4032 + 4033 + default: 4034 + break; 4035 + } 4088 4036 4089 4037 /* Fill in remaining MTD driver data */ 4090 4038 mtd->type = nand_is_slc(chip) ? MTD_NANDFLASH : MTD_MLCNANDFLASH;
+7 -6
drivers/mtd/nand/nand_bbt.c
··· 528 528 { 529 529 struct nand_chip *this = mtd->priv; 530 530 int i, chips; 531 - int bits, startblock, block, dir; 531 + int startblock, block, dir; 532 532 int scanlen = mtd->writesize + mtd->oobsize; 533 533 int bbtblocks; 534 534 int blocktopage = this->bbt_erase_shift - this->page_shift; ··· 551 551 chips = 1; 552 552 bbtblocks = mtd->size >> this->bbt_erase_shift; 553 553 } 554 - 555 - /* Number of bits for each erase block in the bbt */ 556 - bits = td->options & NAND_BBT_NRBITS_MSK; 557 554 558 555 for (i = 0; i < chips; i++) { 559 556 /* Reset version information */ ··· 1282 1285 int nand_default_bbt(struct mtd_info *mtd) 1283 1286 { 1284 1287 struct nand_chip *this = mtd->priv; 1288 + int ret; 1285 1289 1286 1290 /* Is a flash based bad block table requested? */ 1287 1291 if (this->bbt_options & NAND_BBT_USE_FLASH) { ··· 1301 1303 this->bbt_md = NULL; 1302 1304 } 1303 1305 1304 - if (!this->badblock_pattern) 1305 - nand_create_badblock_pattern(this); 1306 + if (!this->badblock_pattern) { 1307 + ret = nand_create_badblock_pattern(this); 1308 + if (ret) 1309 + return ret; 1310 + } 1306 1311 1307 1312 return nand_scan_bbt(mtd, this->badblock_pattern); 1308 1313 }
+1 -1
drivers/mtd/nand/nand_ecc.c
··· 506 506 if ((bitsperbyte[b0] + bitsperbyte[b1] + bitsperbyte[b2]) == 1) 507 507 return 1; /* error in ECC data; no action needed */ 508 508 509 - pr_err("%s: uncorrectable ECC error", __func__); 509 + pr_err("%s: uncorrectable ECC error\n", __func__); 510 510 return -1; 511 511 } 512 512 EXPORT_SYMBOL(__nand_correct_data);
+101 -7
drivers/mtd/nand/omap2.c
··· 137 137 #define BADBLOCK_MARKER_LENGTH 2 138 138 139 139 #ifdef CONFIG_MTD_NAND_OMAP_BCH 140 + static u_char bch16_vector[] = {0xf5, 0x24, 0x1c, 0xd0, 0x61, 0xb3, 0xf1, 0x55, 141 + 0x2e, 0x2c, 0x86, 0xa3, 0xed, 0x36, 0x1b, 0x78, 142 + 0x48, 0x76, 0xa9, 0x3b, 0x97, 0xd1, 0x7a, 0x93, 143 + 0x07, 0x0e}; 140 144 static u_char bch8_vector[] = {0xf3, 0xdb, 0x14, 0x16, 0x8b, 0xd2, 0xbe, 0xcc, 141 145 0xac, 0x6b, 0xff, 0x99, 0x7b}; 142 146 static u_char bch4_vector[] = {0x00, 0x6b, 0x31, 0xdd, 0x41, 0xbc, 0x10}; ··· 1118 1114 ecc_size1 = BCH_ECC_SIZE1; 1119 1115 } 1120 1116 break; 1117 + case OMAP_ECC_BCH16_CODE_HW: 1118 + bch_type = 0x2; 1119 + nsectors = chip->ecc.steps; 1120 + if (mode == NAND_ECC_READ) { 1121 + wr_mode = 0x01; 1122 + ecc_size0 = 52; /* ECC bits in nibbles per sector */ 1123 + ecc_size1 = 0; /* non-ECC bits in nibbles per sector */ 1124 + } else { 1125 + wr_mode = 0x01; 1126 + ecc_size0 = 0; /* extra bits in nibbles per sector */ 1127 + ecc_size1 = 52; /* OOB bits in nibbles per sector */ 1128 + } 1129 + break; 1121 1130 default: 1122 1131 return; 1123 1132 } ··· 1179 1162 struct gpmc_nand_regs *gpmc_regs = &info->reg; 1180 1163 u8 *ecc_code; 1181 1164 unsigned long nsectors, bch_val1, bch_val2, bch_val3, bch_val4; 1182 - int i; 1165 + u32 val; 1166 + int i, j; 1183 1167 1184 1168 nsectors = ((readl(info->reg.gpmc_ecc_config) >> 4) & 0x7) + 1; 1185 1169 for (i = 0; i < nsectors; i++) { ··· 1219 1201 *ecc_code++ = ((bch_val1 >> 4) & 0xFF); 1220 1202 *ecc_code++ = ((bch_val1 & 0xF) << 4); 1221 1203 break; 1204 + case OMAP_ECC_BCH16_CODE_HW: 1205 + val = readl(gpmc_regs->gpmc_bch_result6[i]); 1206 + ecc_code[0] = ((val >> 8) & 0xFF); 1207 + ecc_code[1] = ((val >> 0) & 0xFF); 1208 + val = readl(gpmc_regs->gpmc_bch_result5[i]); 1209 + ecc_code[2] = ((val >> 24) & 0xFF); 1210 + ecc_code[3] = ((val >> 16) & 0xFF); 1211 + ecc_code[4] = ((val >> 8) & 0xFF); 1212 + ecc_code[5] = ((val >> 0) & 0xFF); 1213 + val = readl(gpmc_regs->gpmc_bch_result4[i]); 1214 + 
ecc_code[6] = ((val >> 24) & 0xFF); 1215 + ecc_code[7] = ((val >> 16) & 0xFF); 1216 + ecc_code[8] = ((val >> 8) & 0xFF); 1217 + ecc_code[9] = ((val >> 0) & 0xFF); 1218 + val = readl(gpmc_regs->gpmc_bch_result3[i]); 1219 + ecc_code[10] = ((val >> 24) & 0xFF); 1220 + ecc_code[11] = ((val >> 16) & 0xFF); 1221 + ecc_code[12] = ((val >> 8) & 0xFF); 1222 + ecc_code[13] = ((val >> 0) & 0xFF); 1223 + val = readl(gpmc_regs->gpmc_bch_result2[i]); 1224 + ecc_code[14] = ((val >> 24) & 0xFF); 1225 + ecc_code[15] = ((val >> 16) & 0xFF); 1226 + ecc_code[16] = ((val >> 8) & 0xFF); 1227 + ecc_code[17] = ((val >> 0) & 0xFF); 1228 + val = readl(gpmc_regs->gpmc_bch_result1[i]); 1229 + ecc_code[18] = ((val >> 24) & 0xFF); 1230 + ecc_code[19] = ((val >> 16) & 0xFF); 1231 + ecc_code[20] = ((val >> 8) & 0xFF); 1232 + ecc_code[21] = ((val >> 0) & 0xFF); 1233 + val = readl(gpmc_regs->gpmc_bch_result0[i]); 1234 + ecc_code[22] = ((val >> 24) & 0xFF); 1235 + ecc_code[23] = ((val >> 16) & 0xFF); 1236 + ecc_code[24] = ((val >> 8) & 0xFF); 1237 + ecc_code[25] = ((val >> 0) & 0xFF); 1238 + break; 1222 1239 default: 1223 1240 return -EINVAL; 1224 1241 } ··· 1263 1210 case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW: 1264 1211 /* Add constant polynomial to remainder, so that 1265 1212 * ECC of blank pages results in 0x0 on reading back */ 1266 - for (i = 0; i < eccbytes; i++) 1267 - ecc_calc[i] ^= bch4_polynomial[i]; 1213 + for (j = 0; j < eccbytes; j++) 1214 + ecc_calc[j] ^= bch4_polynomial[j]; 1268 1215 break; 1269 1216 case OMAP_ECC_BCH4_CODE_HW: 1270 1217 /* Set 8th ECC byte as 0x0 for ROM compatibility */ ··· 1273 1220 case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW: 1274 1221 /* Add constant polynomial to remainder, so that 1275 1222 * ECC of blank pages results in 0x0 on reading back */ 1276 - for (i = 0; i < eccbytes; i++) 1277 - ecc_calc[i] ^= bch8_polynomial[i]; 1223 + for (j = 0; j < eccbytes; j++) 1224 + ecc_calc[j] ^= bch8_polynomial[j]; 1278 1225 break; 1279 1226 case OMAP_ECC_BCH8_CODE_HW: 1280 1227 
/* Set 14th ECC byte as 0x0 for ROM compatibility */ 1281 1228 ecc_calc[eccbytes - 1] = 0x0; 1229 + break; 1230 + case OMAP_ECC_BCH16_CODE_HW: 1282 1231 break; 1283 1232 default: 1284 1233 return -EINVAL; ··· 1292 1237 return 0; 1293 1238 } 1294 1239 1240 + #ifdef CONFIG_MTD_NAND_OMAP_BCH 1295 1241 /** 1296 1242 * erased_sector_bitflips - count bit flips 1297 1243 * @data: data sector buffer ··· 1332 1276 return flip_bits; 1333 1277 } 1334 1278 1335 - #ifdef CONFIG_MTD_NAND_OMAP_BCH 1336 1279 /** 1337 1280 * omap_elm_correct_data - corrects page data area in case error reported 1338 1281 * @mtd: MTD device structure ··· 1372 1317 /* omit 14th ECC byte reserved for ROM code compatibility */ 1373 1318 actual_eccbytes = ecc->bytes - 1; 1374 1319 erased_ecc_vec = bch8_vector; 1320 + break; 1321 + case OMAP_ECC_BCH16_CODE_HW: 1322 + actual_eccbytes = ecc->bytes; 1323 + erased_ecc_vec = bch16_vector; 1375 1324 break; 1376 1325 default: 1377 1326 pr_err("invalid driver configuration\n"); ··· 1441 1382 1442 1383 /* Check if any error reported */ 1443 1384 if (!is_error_reported) 1444 - return 0; 1385 + return stat; 1445 1386 1446 1387 /* Decode BCH error using ELM module */ 1447 1388 elm_decode_bch_error_page(info->elm_dev, ecc_vec, err_vec); ··· 1460 1401 BCH4_BIT_PAD; 1461 1402 break; 1462 1403 case OMAP_ECC_BCH8_CODE_HW: 1404 + case OMAP_ECC_BCH16_CODE_HW: 1463 1405 pos = err_vec[i].error_loc[j]; 1464 1406 break; 1465 1407 default: ··· 1972 1912 goto return_error; 1973 1913 #endif 1974 1914 1915 + case OMAP_ECC_BCH16_CODE_HW: 1916 + #ifdef CONFIG_MTD_NAND_OMAP_BCH 1917 + pr_info("using OMAP_ECC_BCH16_CODE_HW ECC scheme\n"); 1918 + nand_chip->ecc.mode = NAND_ECC_HW; 1919 + nand_chip->ecc.size = 512; 1920 + nand_chip->ecc.bytes = 26; 1921 + nand_chip->ecc.strength = 16; 1922 + nand_chip->ecc.hwctl = omap_enable_hwecc_bch; 1923 + nand_chip->ecc.correct = omap_elm_correct_data; 1924 + nand_chip->ecc.calculate = omap_calculate_ecc_bch; 1925 + nand_chip->ecc.read_page = 
omap_read_page_bch; 1926 + nand_chip->ecc.write_page = omap_write_page_bch; 1927 + /* This ECC scheme requires ELM H/W block */ 1928 + err = is_elm_present(info, pdata->elm_of_node, BCH16_ECC); 1929 + if (err < 0) { 1930 + pr_err("ELM is required for this ECC scheme\n"); 1931 + goto return_error; 1932 + } 1933 + /* define ECC layout */ 1934 + ecclayout->eccbytes = nand_chip->ecc.bytes * 1935 + (mtd->writesize / 1936 + nand_chip->ecc.size); 1937 + oob_index = BADBLOCK_MARKER_LENGTH; 1938 + for (i = 0; i < ecclayout->eccbytes; i++, oob_index++) 1939 + ecclayout->eccpos[i] = oob_index; 1940 + /* reserved marker already included in ecclayout->eccbytes */ 1941 + ecclayout->oobfree->offset = 1942 + ecclayout->eccpos[ecclayout->eccbytes - 1] + 1; 1943 + break; 1944 + #else 1945 + pr_err("nand: error: CONFIG_MTD_NAND_OMAP_BCH not enabled\n"); 1946 + err = -EINVAL; 1947 + goto return_error; 1948 + #endif 1975 1949 default: 1976 1950 pr_err("nand: error: invalid or unsupported ECC scheme\n"); 1977 1951 err = -EINVAL;
+1 -1
drivers/mtd/nand/orion_nand.c
··· 214 214 } 215 215 216 216 #ifdef CONFIG_OF 217 - static struct of_device_id orion_nand_of_match_table[] = { 217 + static const struct of_device_id orion_nand_of_match_table[] = { 218 218 { .compatible = "marvell,orion-nand", }, 219 219 {}, 220 220 };
+28 -16
drivers/mtd/nand/pxa3xx_nand.c
··· 127 127 128 128 /* macros for registers read/write */ 129 129 #define nand_writel(info, off, val) \ 130 - __raw_writel((val), (info)->mmio_base + (off)) 130 + writel_relaxed((val), (info)->mmio_base + (off)) 131 131 132 132 #define nand_readl(info, off) \ 133 - __raw_readl((info)->mmio_base + (off)) 133 + readl_relaxed((info)->mmio_base + (off)) 134 134 135 135 /* error code and state */ 136 136 enum { ··· 337 337 /* convert nano-seconds to nand flash controller clock cycles */ 338 338 #define ns2cycle(ns, clk) (int)((ns) * (clk / 1000000) / 1000) 339 339 340 - static struct of_device_id pxa3xx_nand_dt_ids[] = { 340 + static const struct of_device_id pxa3xx_nand_dt_ids[] = { 341 341 { 342 342 .compatible = "marvell,pxa3xx-nand", 343 343 .data = (void *)PXA3XX_NAND_VARIANT_PXA, ··· 1354 1354 ecc->mode = NAND_ECC_HW; 1355 1355 ecc->size = 512; 1356 1356 ecc->strength = 1; 1357 - return 1; 1358 1357 1359 1358 } else if (strength == 1 && ecc_stepsize == 512 && page_size == 512) { 1360 1359 info->chunk_size = 512; ··· 1362 1363 ecc->mode = NAND_ECC_HW; 1363 1364 ecc->size = 512; 1364 1365 ecc->strength = 1; 1365 - return 1; 1366 1366 1367 1367 /* 1368 1368 * Required ECC: 4-bit correction per 512 bytes ··· 1376 1378 ecc->size = info->chunk_size; 1377 1379 ecc->layout = &ecc_layout_2KB_bch4bit; 1378 1380 ecc->strength = 16; 1379 - return 1; 1380 1381 1381 1382 } else if (strength == 4 && ecc_stepsize == 512 && page_size == 4096) { 1382 1383 info->ecc_bch = 1; ··· 1386 1389 ecc->size = info->chunk_size; 1387 1390 ecc->layout = &ecc_layout_4KB_bch4bit; 1388 1391 ecc->strength = 16; 1389 - return 1; 1390 1392 1391 1393 /* 1392 1394 * Required ECC: 8-bit correction per 512 bytes ··· 1400 1404 ecc->size = info->chunk_size; 1401 1405 ecc->layout = &ecc_layout_4KB_bch8bit; 1402 1406 ecc->strength = 16; 1403 - return 1; 1407 + } else { 1408 + dev_err(&info->pdev->dev, 1409 + "ECC strength %d at page size %d is not supported\n", 1410 + strength, page_size); 1411 + return 
-ENODEV; 1404 1412 } 1413 + 1414 + dev_info(&info->pdev->dev, "ECC strength %d, ECC step size %d\n", 1415 + ecc->strength, ecc->size); 1405 1416 return 0; 1406 1417 } 1407 1418 ··· 1519 1516 } 1520 1517 } 1521 1518 1522 - ecc_strength = chip->ecc_strength_ds; 1523 - ecc_step = chip->ecc_step_ds; 1519 + if (pdata->ecc_strength && pdata->ecc_step_size) { 1520 + ecc_strength = pdata->ecc_strength; 1521 + ecc_step = pdata->ecc_step_size; 1522 + } else { 1523 + ecc_strength = chip->ecc_strength_ds; 1524 + ecc_step = chip->ecc_step_ds; 1525 + } 1524 1526 1525 1527 /* Set default ECC strength requirements on non-ONFI devices */ 1526 1528 if (ecc_strength < 1 && ecc_step < 1) { ··· 1535 1527 1536 1528 ret = pxa_ecc_init(info, &chip->ecc, ecc_strength, 1537 1529 ecc_step, mtd->writesize); 1538 - if (!ret) { 1539 - dev_err(&info->pdev->dev, 1540 - "ECC strength %d at page size %d is not supported\n", 1541 - ecc_strength, mtd->writesize); 1542 - return -ENODEV; 1543 - } 1530 + if (ret) 1531 + return ret; 1544 1532 1545 1533 /* calculate addressing information */ 1546 1534 if (mtd->writesize >= 2048) ··· 1733 1729 pdata->keep_config = 1; 1734 1730 of_property_read_u32(np, "num-cs", &pdata->num_cs); 1735 1731 pdata->flash_bbt = of_get_nand_on_flash_bbt(np); 1732 + 1733 + pdata->ecc_strength = of_get_nand_ecc_strength(np); 1734 + if (pdata->ecc_strength < 0) 1735 + pdata->ecc_strength = 0; 1736 + 1737 + pdata->ecc_step_size = of_get_nand_ecc_step_size(np); 1738 + if (pdata->ecc_step_size < 0) 1739 + pdata->ecc_step_size = 0; 1736 1740 1737 1741 pdev->dev.platform_data = pdata; 1738 1742
+4 -2
drivers/mtd/nand/r852.c
··· 245 245 } 246 246 247 247 /* write DWORD chinks - faster */ 248 - while (len) { 248 + while (len >= 4) { 249 249 reg = buf[0] | buf[1] << 8 | buf[2] << 16 | buf[3] << 24; 250 250 r852_write_reg_dword(dev, R852_DATALINE, reg); 251 251 buf += 4; ··· 254 254 } 255 255 256 256 /* write rest */ 257 - while (len) 257 + while (len > 0) { 258 258 r852_write_reg(dev, R852_DATALINE, *buf++); 259 + len--; 260 + } 259 261 } 260 262 261 263 /*
+4 -4
drivers/mtd/onenand/samsung.c
··· 537 537 return 0; 538 538 } 539 539 540 - static int (*s5pc110_dma_ops)(void *dst, void *src, size_t count, int direction); 540 + static int (*s5pc110_dma_ops)(dma_addr_t dst, dma_addr_t src, size_t count, int direction); 541 541 542 - static int s5pc110_dma_poll(void *dst, void *src, size_t count, int direction) 542 + static int s5pc110_dma_poll(dma_addr_t dst, dma_addr_t src, size_t count, int direction) 543 543 { 544 544 void __iomem *base = onenand->dma_addr; 545 545 int status; ··· 605 605 return IRQ_HANDLED; 606 606 } 607 607 608 - static int s5pc110_dma_irq(void *dst, void *src, size_t count, int direction) 608 + static int s5pc110_dma_irq(dma_addr_t dst, dma_addr_t src, size_t count, int direction) 609 609 { 610 610 void __iomem *base = onenand->dma_addr; 611 611 int status; ··· 686 686 dev_err(dev, "Couldn't map a %d byte buffer for DMA\n", count); 687 687 goto normal; 688 688 } 689 - err = s5pc110_dma_ops((void *) dma_dst, (void *) dma_src, 689 + err = s5pc110_dma_ops(dma_dst, dma_src, 690 690 count, S5PC110_DMA_DIR_READ); 691 691 692 692 if (page_dma)
+17
drivers/mtd/spi-nor/Kconfig
··· 1 + menuconfig MTD_SPI_NOR 2 + tristate "SPI-NOR device support" 3 + depends on MTD 4 + help 5 + This is the framework for the SPI NOR which can be used by the SPI 6 + device drivers and the SPI-NOR device driver. 7 + 8 + if MTD_SPI_NOR 9 + 10 + config SPI_FSL_QUADSPI 11 + tristate "Freescale Quad SPI controller" 12 + depends on ARCH_MXC 13 + help 14 + This enables support for the Quad SPI controller in master mode. 15 + We only connect the NOR to this controller now. 16 + 17 + endif # MTD_SPI_NOR
+2
drivers/mtd/spi-nor/Makefile
··· 1 + obj-$(CONFIG_MTD_SPI_NOR) += spi-nor.o 2 + obj-$(CONFIG_SPI_FSL_QUADSPI) += fsl-quadspi.o
+1009
drivers/mtd/spi-nor/fsl-quadspi.c
··· 1 + /* 2 + * Freescale QuadSPI driver. 3 + * 4 + * Copyright (C) 2013 Freescale Semiconductor, Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License as published by 8 + * the Free Software Foundation; either version 2 of the License, or 9 + * (at your option) any later version. 10 + */ 11 + #include <linux/kernel.h> 12 + #include <linux/module.h> 13 + #include <linux/interrupt.h> 14 + #include <linux/errno.h> 15 + #include <linux/platform_device.h> 16 + #include <linux/sched.h> 17 + #include <linux/delay.h> 18 + #include <linux/io.h> 19 + #include <linux/clk.h> 20 + #include <linux/err.h> 21 + #include <linux/of.h> 22 + #include <linux/of_device.h> 23 + #include <linux/timer.h> 24 + #include <linux/jiffies.h> 25 + #include <linux/completion.h> 26 + #include <linux/mtd/mtd.h> 27 + #include <linux/mtd/partitions.h> 28 + #include <linux/mtd/spi-nor.h> 29 + 30 + /* The registers */ 31 + #define QUADSPI_MCR 0x00 32 + #define QUADSPI_MCR_RESERVED_SHIFT 16 33 + #define QUADSPI_MCR_RESERVED_MASK (0xF << QUADSPI_MCR_RESERVED_SHIFT) 34 + #define QUADSPI_MCR_MDIS_SHIFT 14 35 + #define QUADSPI_MCR_MDIS_MASK (1 << QUADSPI_MCR_MDIS_SHIFT) 36 + #define QUADSPI_MCR_CLR_TXF_SHIFT 11 37 + #define QUADSPI_MCR_CLR_TXF_MASK (1 << QUADSPI_MCR_CLR_TXF_SHIFT) 38 + #define QUADSPI_MCR_CLR_RXF_SHIFT 10 39 + #define QUADSPI_MCR_CLR_RXF_MASK (1 << QUADSPI_MCR_CLR_RXF_SHIFT) 40 + #define QUADSPI_MCR_DDR_EN_SHIFT 7 41 + #define QUADSPI_MCR_DDR_EN_MASK (1 << QUADSPI_MCR_DDR_EN_SHIFT) 42 + #define QUADSPI_MCR_END_CFG_SHIFT 2 43 + #define QUADSPI_MCR_END_CFG_MASK (3 << QUADSPI_MCR_END_CFG_SHIFT) 44 + #define QUADSPI_MCR_SWRSTHD_SHIFT 1 45 + #define QUADSPI_MCR_SWRSTHD_MASK (1 << QUADSPI_MCR_SWRSTHD_SHIFT) 46 + #define QUADSPI_MCR_SWRSTSD_SHIFT 0 47 + #define QUADSPI_MCR_SWRSTSD_MASK (1 << QUADSPI_MCR_SWRSTSD_SHIFT) 48 + 49 + #define QUADSPI_IPCR 0x08 50 + #define QUADSPI_IPCR_SEQID_SHIFT 24 51 + #define 
QUADSPI_IPCR_SEQID_MASK (0xF << QUADSPI_IPCR_SEQID_SHIFT) 52 + 53 + #define QUADSPI_BUF0CR 0x10 54 + #define QUADSPI_BUF1CR 0x14 55 + #define QUADSPI_BUF2CR 0x18 56 + #define QUADSPI_BUFXCR_INVALID_MSTRID 0xe 57 + 58 + #define QUADSPI_BUF3CR 0x1c 59 + #define QUADSPI_BUF3CR_ALLMST_SHIFT 31 60 + #define QUADSPI_BUF3CR_ALLMST (1 << QUADSPI_BUF3CR_ALLMST_SHIFT) 61 + 62 + #define QUADSPI_BFGENCR 0x20 63 + #define QUADSPI_BFGENCR_PAR_EN_SHIFT 16 64 + #define QUADSPI_BFGENCR_PAR_EN_MASK (1 << (QUADSPI_BFGENCR_PAR_EN_SHIFT)) 65 + #define QUADSPI_BFGENCR_SEQID_SHIFT 12 66 + #define QUADSPI_BFGENCR_SEQID_MASK (0xF << QUADSPI_BFGENCR_SEQID_SHIFT) 67 + 68 + #define QUADSPI_BUF0IND 0x30 69 + #define QUADSPI_BUF1IND 0x34 70 + #define QUADSPI_BUF2IND 0x38 71 + #define QUADSPI_SFAR 0x100 72 + 73 + #define QUADSPI_SMPR 0x108 74 + #define QUADSPI_SMPR_DDRSMP_SHIFT 16 75 + #define QUADSPI_SMPR_DDRSMP_MASK (7 << QUADSPI_SMPR_DDRSMP_SHIFT) 76 + #define QUADSPI_SMPR_FSDLY_SHIFT 6 77 + #define QUADSPI_SMPR_FSDLY_MASK (1 << QUADSPI_SMPR_FSDLY_SHIFT) 78 + #define QUADSPI_SMPR_FSPHS_SHIFT 5 79 + #define QUADSPI_SMPR_FSPHS_MASK (1 << QUADSPI_SMPR_FSPHS_SHIFT) 80 + #define QUADSPI_SMPR_HSENA_SHIFT 0 81 + #define QUADSPI_SMPR_HSENA_MASK (1 << QUADSPI_SMPR_HSENA_SHIFT) 82 + 83 + #define QUADSPI_RBSR 0x10c 84 + #define QUADSPI_RBSR_RDBFL_SHIFT 8 85 + #define QUADSPI_RBSR_RDBFL_MASK (0x3F << QUADSPI_RBSR_RDBFL_SHIFT) 86 + 87 + #define QUADSPI_RBCT 0x110 88 + #define QUADSPI_RBCT_WMRK_MASK 0x1F 89 + #define QUADSPI_RBCT_RXBRD_SHIFT 8 90 + #define QUADSPI_RBCT_RXBRD_USEIPS (0x1 << QUADSPI_RBCT_RXBRD_SHIFT) 91 + 92 + #define QUADSPI_TBSR 0x150 93 + #define QUADSPI_TBDR 0x154 94 + #define QUADSPI_SR 0x15c 95 + #define QUADSPI_SR_IP_ACC_SHIFT 1 96 + #define QUADSPI_SR_IP_ACC_MASK (0x1 << QUADSPI_SR_IP_ACC_SHIFT) 97 + #define QUADSPI_SR_AHB_ACC_SHIFT 2 98 + #define QUADSPI_SR_AHB_ACC_MASK (0x1 << QUADSPI_SR_AHB_ACC_SHIFT) 99 + 100 + #define QUADSPI_FR 0x160 101 + #define QUADSPI_FR_TFF_MASK 0x1 102 + 
103 + #define QUADSPI_SFA1AD 0x180 104 + #define QUADSPI_SFA2AD 0x184 105 + #define QUADSPI_SFB1AD 0x188 106 + #define QUADSPI_SFB2AD 0x18c 107 + #define QUADSPI_RBDR 0x200 108 + 109 + #define QUADSPI_LUTKEY 0x300 110 + #define QUADSPI_LUTKEY_VALUE 0x5AF05AF0 111 + 112 + #define QUADSPI_LCKCR 0x304 113 + #define QUADSPI_LCKER_LOCK 0x1 114 + #define QUADSPI_LCKER_UNLOCK 0x2 115 + 116 + #define QUADSPI_RSER 0x164 117 + #define QUADSPI_RSER_TFIE (0x1 << 0) 118 + 119 + #define QUADSPI_LUT_BASE 0x310 120 + 121 + /* 122 + * The definition of the LUT register shows below: 123 + * 124 + * --------------------------------------------------- 125 + * | INSTR1 | PAD1 | OPRND1 | INSTR0 | PAD0 | OPRND0 | 126 + * --------------------------------------------------- 127 + */ 128 + #define OPRND0_SHIFT 0 129 + #define PAD0_SHIFT 8 130 + #define INSTR0_SHIFT 10 131 + #define OPRND1_SHIFT 16 132 + 133 + /* Instruction set for the LUT register. */ 134 + #define LUT_STOP 0 135 + #define LUT_CMD 1 136 + #define LUT_ADDR 2 137 + #define LUT_DUMMY 3 138 + #define LUT_MODE 4 139 + #define LUT_MODE2 5 140 + #define LUT_MODE4 6 141 + #define LUT_READ 7 142 + #define LUT_WRITE 8 143 + #define LUT_JMP_ON_CS 9 144 + #define LUT_ADDR_DDR 10 145 + #define LUT_MODE_DDR 11 146 + #define LUT_MODE2_DDR 12 147 + #define LUT_MODE4_DDR 13 148 + #define LUT_READ_DDR 14 149 + #define LUT_WRITE_DDR 15 150 + #define LUT_DATA_LEARN 16 151 + 152 + /* 153 + * The PAD definitions for LUT register. 154 + * 155 + * The pad stands for the lines number of IO[0:3]. 156 + * For example, the Quad read need four IO lines, so you should 157 + * set LUT_PAD4 which means we use four IO lines. 158 + */ 159 + #define LUT_PAD1 0 160 + #define LUT_PAD2 1 161 + #define LUT_PAD4 2 162 + 163 + /* Oprands for the LUT register. */ 164 + #define ADDR24BIT 0x18 165 + #define ADDR32BIT 0x20 166 + 167 + /* Macros for constructing the LUT register. 
*/ 168 + #define LUT0(ins, pad, opr) \ 169 + (((opr) << OPRND0_SHIFT) | ((LUT_##pad) << PAD0_SHIFT) | \ 170 + ((LUT_##ins) << INSTR0_SHIFT)) 171 + 172 + #define LUT1(ins, pad, opr) (LUT0(ins, pad, opr) << OPRND1_SHIFT) 173 + 174 + /* other macros for LUT register. */ 175 + #define QUADSPI_LUT(x) (QUADSPI_LUT_BASE + (x) * 4) 176 + #define QUADSPI_LUT_NUM 64 177 + 178 + /* SEQID -- we can have 16 seqids at most. */ 179 + #define SEQID_QUAD_READ 0 180 + #define SEQID_WREN 1 181 + #define SEQID_WRDI 2 182 + #define SEQID_RDSR 3 183 + #define SEQID_SE 4 184 + #define SEQID_CHIP_ERASE 5 185 + #define SEQID_PP 6 186 + #define SEQID_RDID 7 187 + #define SEQID_WRSR 8 188 + #define SEQID_RDCR 9 189 + #define SEQID_EN4B 10 190 + #define SEQID_BRWR 11 191 + 192 + enum fsl_qspi_devtype { 193 + FSL_QUADSPI_VYBRID, 194 + FSL_QUADSPI_IMX6SX, 195 + }; 196 + 197 + struct fsl_qspi_devtype_data { 198 + enum fsl_qspi_devtype devtype; 199 + int rxfifo; 200 + int txfifo; 201 + }; 202 + 203 + static struct fsl_qspi_devtype_data vybrid_data = { 204 + .devtype = FSL_QUADSPI_VYBRID, 205 + .rxfifo = 128, 206 + .txfifo = 64 207 + }; 208 + 209 + static struct fsl_qspi_devtype_data imx6sx_data = { 210 + .devtype = FSL_QUADSPI_IMX6SX, 211 + .rxfifo = 128, 212 + .txfifo = 512 213 + }; 214 + 215 + #define FSL_QSPI_MAX_CHIP 4 216 + struct fsl_qspi { 217 + struct mtd_info mtd[FSL_QSPI_MAX_CHIP]; 218 + struct spi_nor nor[FSL_QSPI_MAX_CHIP]; 219 + void __iomem *iobase; 220 + void __iomem *ahb_base; /* Used when read from AHB bus */ 221 + u32 memmap_phy; 222 + struct clk *clk, *clk_en; 223 + struct device *dev; 224 + struct completion c; 225 + struct fsl_qspi_devtype_data *devtype_data; 226 + u32 nor_size; 227 + u32 nor_num; 228 + u32 clk_rate; 229 + unsigned int chip_base_addr; /* We may support two chips. 
 */
};

static inline int is_vybrid_qspi(struct fsl_qspi *q)
{
	return q->devtype_data->devtype == FSL_QUADSPI_VYBRID;
}

static inline int is_imx6sx_qspi(struct fsl_qspi *q)
{
	return q->devtype_data->devtype == FSL_QUADSPI_IMX6SX;
}

/*
 * An IC bug makes us re-arrange the 32-bit data.
 * Later chips, such as the IMX6SLX, have fixed this bug.
 */
static inline u32 fsl_qspi_endian_xchg(struct fsl_qspi *q, u32 a)
{
	return is_vybrid_qspi(q) ? __swab32(a) : a;
}

static inline void fsl_qspi_unlock_lut(struct fsl_qspi *q)
{
	writel(QUADSPI_LUTKEY_VALUE, q->iobase + QUADSPI_LUTKEY);
	writel(QUADSPI_LCKER_UNLOCK, q->iobase + QUADSPI_LCKCR);
}

static inline void fsl_qspi_lock_lut(struct fsl_qspi *q)
{
	writel(QUADSPI_LUTKEY_VALUE, q->iobase + QUADSPI_LUTKEY);
	writel(QUADSPI_LCKER_LOCK, q->iobase + QUADSPI_LCKCR);
}

static irqreturn_t fsl_qspi_irq_handler(int irq, void *dev_id)
{
	struct fsl_qspi *q = dev_id;
	u32 reg;

	/* clear interrupt */
	reg = readl(q->iobase + QUADSPI_FR);
	writel(reg, q->iobase + QUADSPI_FR);

	if (reg & QUADSPI_FR_TFF_MASK)
		complete(&q->c);

	dev_dbg(q->dev, "QUADSPI_FR : 0x%.8x:0x%.8x\n", q->chip_base_addr, reg);
	return IRQ_HANDLED;
}

static void fsl_qspi_init_lut(struct fsl_qspi *q)
{
	void __iomem *base = q->iobase;
	int rxfifo = q->devtype_data->rxfifo;
	u32 lut_base;
	u8 cmd, addrlen, dummy;
	int i;

	fsl_qspi_unlock_lut(q);

	/* Clear the whole LUT table */
	for (i = 0; i < QUADSPI_LUT_NUM; i++)
		writel(0, base + QUADSPI_LUT_BASE + i * 4);

	/* Quad Read */
	lut_base = SEQID_QUAD_READ * 4;

	if (q->nor_size <= SZ_16M) {
		cmd = SPINOR_OP_READ_1_1_4;
		addrlen = ADDR24BIT;
		dummy = 8;
	} else {
		/* use the 4-byte address */
		cmd = SPINOR_OP_READ_1_1_4;
		addrlen = ADDR32BIT;
		dummy = 8;
	}

	writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen),
			base + QUADSPI_LUT(lut_base));
	writel(LUT0(DUMMY, PAD1, dummy) | LUT1(READ, PAD4, rxfifo),
			base + QUADSPI_LUT(lut_base + 1));

	/* Write enable */
	lut_base = SEQID_WREN * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_WREN), base + QUADSPI_LUT(lut_base));

	/* Page Program */
	lut_base = SEQID_PP * 4;

	if (q->nor_size <= SZ_16M) {
		cmd = SPINOR_OP_PP;
		addrlen = ADDR24BIT;
	} else {
		/* use the 4-byte address */
		cmd = SPINOR_OP_PP;
		addrlen = ADDR32BIT;
	}

	writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen),
			base + QUADSPI_LUT(lut_base));
	writel(LUT0(WRITE, PAD1, 0), base + QUADSPI_LUT(lut_base + 1));

	/* Read Status */
	lut_base = SEQID_RDSR * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_RDSR) | LUT1(READ, PAD1, 0x1),
			base + QUADSPI_LUT(lut_base));

	/* Erase a sector */
	lut_base = SEQID_SE * 4;

	if (q->nor_size <= SZ_16M) {
		cmd = SPINOR_OP_SE;
		addrlen = ADDR24BIT;
	} else {
		/* use the 4-byte address */
		cmd = SPINOR_OP_SE;
		addrlen = ADDR32BIT;
	}

	writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen),
			base + QUADSPI_LUT(lut_base));

	/* Erase the whole chip */
	lut_base = SEQID_CHIP_ERASE * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_CHIP_ERASE),
			base + QUADSPI_LUT(lut_base));

	/* READ ID */
	lut_base = SEQID_RDID * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_RDID) | LUT1(READ, PAD1, 0x8),
			base + QUADSPI_LUT(lut_base));

	/* Write Register */
	lut_base = SEQID_WRSR * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_WRSR) | LUT1(WRITE, PAD1, 0x2),
			base + QUADSPI_LUT(lut_base));

	/* Read Configuration Register */
	lut_base = SEQID_RDCR * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_RDCR) | LUT1(READ, PAD1, 0x1),
			base + QUADSPI_LUT(lut_base));

	/* Write disable */
	lut_base = SEQID_WRDI * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_WRDI), base + QUADSPI_LUT(lut_base));

	/* Enter 4 Byte Mode (Micron) */
	lut_base = SEQID_EN4B * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_EN4B), base + QUADSPI_LUT(lut_base));

	/* Enter 4 Byte Mode (Spansion) */
	lut_base = SEQID_BRWR * 4;
	writel(LUT0(CMD, PAD1, SPINOR_OP_BRWR), base + QUADSPI_LUT(lut_base));

	fsl_qspi_lock_lut(q);
}

/* Get the SEQID for the command */
static int fsl_qspi_get_seqid(struct fsl_qspi *q, u8 cmd)
{
	switch (cmd) {
	case SPINOR_OP_READ_1_1_4:
		return SEQID_QUAD_READ;
	case SPINOR_OP_WREN:
		return SEQID_WREN;
	case SPINOR_OP_WRDI:
		return SEQID_WRDI;
	case SPINOR_OP_RDSR:
		return SEQID_RDSR;
	case SPINOR_OP_SE:
		return SEQID_SE;
	case SPINOR_OP_CHIP_ERASE:
		return SEQID_CHIP_ERASE;
	case SPINOR_OP_PP:
		return SEQID_PP;
	case SPINOR_OP_RDID:
		return SEQID_RDID;
	case SPINOR_OP_WRSR:
		return SEQID_WRSR;
	case SPINOR_OP_RDCR:
		return SEQID_RDCR;
	case SPINOR_OP_EN4B:
		return SEQID_EN4B;
	case SPINOR_OP_BRWR:
		return SEQID_BRWR;
	default:
		dev_err(q->dev, "Unsupported cmd 0x%.2x\n", cmd);
		break;
	}
	return -EINVAL;
}

static int
fsl_qspi_runcmd(struct fsl_qspi *q, u8 cmd, unsigned int addr, int len)
{
	void __iomem *base = q->iobase;
	int seqid;
	u32 reg, reg2;
	int err;

	init_completion(&q->c);
	dev_dbg(q->dev, "to 0x%.8x:0x%.8x, len:%d, cmd:%.2x\n",
			q->chip_base_addr, addr, len, cmd);

	/* save the MCR register value */
	reg = readl(base + QUADSPI_MCR);

	writel(q->memmap_phy + q->chip_base_addr + addr, base + QUADSPI_SFAR);
	writel(QUADSPI_RBCT_WMRK_MASK | QUADSPI_RBCT_RXBRD_USEIPS,
			base + QUADSPI_RBCT);
	writel(reg | QUADSPI_MCR_CLR_RXF_MASK, base + QUADSPI_MCR);

	do {
		reg2 = readl(base + QUADSPI_SR);
		if (reg2 & (QUADSPI_SR_IP_ACC_MASK | QUADSPI_SR_AHB_ACC_MASK)) {
			udelay(1);
			dev_dbg(q->dev, "The controller is busy, 0x%x\n", reg2);
			continue;
		}
		break;
	} while (1);

	/* trigger the LUT now */
	seqid = fsl_qspi_get_seqid(q, cmd);
	writel((seqid << QUADSPI_IPCR_SEQID_SHIFT) | len, base + QUADSPI_IPCR);

	/* Wait for the interrupt. */
	err = wait_for_completion_timeout(&q->c, msecs_to_jiffies(1000));
	if (!err) {
		dev_err(q->dev,
			"cmd 0x%.2x timeout, addr@%.8x, FR:0x%.8x, SR:0x%.8x\n",
			cmd, addr, readl(base + QUADSPI_FR),
			readl(base + QUADSPI_SR));
		err = -ETIMEDOUT;
	} else {
		err = 0;
	}

	/* restore the MCR */
	writel(reg, base + QUADSPI_MCR);

	return err;
}

/* Read out the data from the QUADSPI_RBDR buffer registers. */
static void fsl_qspi_read_data(struct fsl_qspi *q, int len, u8 *rxbuf)
{
	u32 tmp;
	int i = 0;

	while (len > 0) {
		tmp = readl(q->iobase + QUADSPI_RBDR + i * 4);
		tmp = fsl_qspi_endian_xchg(q, tmp);
		dev_dbg(q->dev, "chip addr:0x%.8x, rcv:0x%.8x\n",
				q->chip_base_addr, tmp);

		if (len >= 4) {
			*((u32 *)rxbuf) = tmp;
			rxbuf += 4;
		} else {
			memcpy(rxbuf, &tmp, len);
			break;
		}

		len -= 4;
		i++;
	}
}

/*
 * If we have changed the content of the flash by writing or erasing,
 * we need to invalidate the AHB buffer. If we do not do so, we may read out
 * the wrong data. The spec tells us to reset the AHB domain and the serial
 * flash domain at the same time.
 */
static inline void fsl_qspi_invalid(struct fsl_qspi *q)
{
	u32 reg;

	reg = readl(q->iobase + QUADSPI_MCR);
	reg |= QUADSPI_MCR_SWRSTHD_MASK | QUADSPI_MCR_SWRSTSD_MASK;
	writel(reg, q->iobase + QUADSPI_MCR);

	/*
	 * The minimum delay : 1 AHB + 2 SFCK clocks.
	 * Delaying 1 us is enough.
	 */
	udelay(1);

	reg &= ~(QUADSPI_MCR_SWRSTHD_MASK | QUADSPI_MCR_SWRSTSD_MASK);
	writel(reg, q->iobase + QUADSPI_MCR);
}

static int fsl_qspi_nor_write(struct fsl_qspi *q, struct spi_nor *nor,
				u8 opcode, unsigned int to, u32 *txbuf,
				unsigned count, size_t *retlen)
{
	int ret, i, j;
	u32 tmp;

	dev_dbg(q->dev, "to 0x%.8x:0x%.8x, len : %d\n",
		q->chip_base_addr, to, count);

	/* clear the TX FIFO. */
	tmp = readl(q->iobase + QUADSPI_MCR);
	writel(tmp | QUADSPI_MCR_CLR_TXF_MASK, q->iobase + QUADSPI_MCR);

	/* fill the TX data to the FIFO */
	for (j = 0, i = ((count + 3) / 4); j < i; j++) {
		tmp = fsl_qspi_endian_xchg(q, *txbuf);
		writel(tmp, q->iobase + QUADSPI_TBDR);
		txbuf++;
	}

	/* Trigger it */
	ret = fsl_qspi_runcmd(q, opcode, to, count);

	if (ret == 0 && retlen)
		*retlen += count;

	return ret;
}

static void fsl_qspi_set_map_addr(struct fsl_qspi *q)
{
	int nor_size = q->nor_size;
	void __iomem *base = q->iobase;

	writel(nor_size + q->memmap_phy, base + QUADSPI_SFA1AD);
	writel(nor_size * 2 + q->memmap_phy, base + QUADSPI_SFA2AD);
	writel(nor_size * 3 + q->memmap_phy, base + QUADSPI_SFB1AD);
	writel(nor_size * 4 + q->memmap_phy, base + QUADSPI_SFB2AD);
}

/*
 * There are two different ways to read out the data from the flash:
 * the "IP Command Read" and the "AHB Command Read".
 *
 * The IC guy suggests we use the "AHB Command Read" which is faster
 * than the "IP Command Read". (What's more, there is a bug in the
 * "IP Command Read" in the Vybrid.)
 *
 * After we set up the registers for the "AHB Command Read", we can use
 * memcpy to read the data directly. A "missed" access to the buffer
 * causes the controller to clear the buffer, and use the sequence pointed
 * to by QUADSPI_BFGENCR[SEQID] to initiate a read from the flash.
 */
static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
{
	void __iomem *base = q->iobase;
	int seqid;

	/* AHB configuration for access buffer 0/1/2. */
	writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF0CR);
	writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF1CR);
	writel(QUADSPI_BUFXCR_INVALID_MSTRID, base + QUADSPI_BUF2CR);
	writel(QUADSPI_BUF3CR_ALLMST, base + QUADSPI_BUF3CR);

	/* We only use buffer3 */
	writel(0, base + QUADSPI_BUF0IND);
	writel(0, base + QUADSPI_BUF1IND);
	writel(0, base + QUADSPI_BUF2IND);

	/* Set the default lut sequence for AHB Read. */
	seqid = fsl_qspi_get_seqid(q, q->nor[0].read_opcode);
	writel(seqid << QUADSPI_BFGENCR_SEQID_SHIFT,
		q->iobase + QUADSPI_BFGENCR);
}

/* We use this function to do some basic init for spi_nor_scan(). */
static int fsl_qspi_nor_setup(struct fsl_qspi *q)
{
	void __iomem *base = q->iobase;
	u32 reg;
	int ret;

	/* the default frequency; we will change it later. */
	ret = clk_set_rate(q->clk, 66000000);
	if (ret)
		return ret;

	/* Init the LUT table. */
	fsl_qspi_init_lut(q);

	/* Disable the module */
	writel(QUADSPI_MCR_MDIS_MASK | QUADSPI_MCR_RESERVED_MASK,
			base + QUADSPI_MCR);

	reg = readl(base + QUADSPI_SMPR);
	writel(reg & ~(QUADSPI_SMPR_FSDLY_MASK
			| QUADSPI_SMPR_FSPHS_MASK
			| QUADSPI_SMPR_HSENA_MASK
			| QUADSPI_SMPR_DDRSMP_MASK), base + QUADSPI_SMPR);

	/* Enable the module */
	writel(QUADSPI_MCR_RESERVED_MASK | QUADSPI_MCR_END_CFG_MASK,
			base + QUADSPI_MCR);

	/* enable the interrupt */
	writel(QUADSPI_RSER_TFIE, q->iobase + QUADSPI_RSER);

	return 0;
}

static int fsl_qspi_nor_setup_last(struct fsl_qspi *q)
{
	unsigned long rate = q->clk_rate;
	int ret;

	if (is_imx6sx_qspi(q))
		rate *= 4;

	ret = clk_set_rate(q->clk, rate);
	if (ret)
		return ret;

	/* Init the LUT table again. */
	fsl_qspi_init_lut(q);

	/* Init for AHB read */
	fsl_qspi_init_ahb_read(q);

	return 0;
}

static struct of_device_id fsl_qspi_dt_ids[] = {
	{ .compatible = "fsl,vf610-qspi", .data = (void *)&vybrid_data, },
	{ .compatible = "fsl,imx6sx-qspi", .data = (void *)&imx6sx_data, },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, fsl_qspi_dt_ids);

static void fsl_qspi_set_base_addr(struct fsl_qspi *q, struct spi_nor *nor)
{
	q->chip_base_addr = q->nor_size * (nor - q->nor);
}

static int fsl_qspi_read_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
{
	int ret;
	struct fsl_qspi *q = nor->priv;

	ret = fsl_qspi_runcmd(q, opcode, 0, len);
	if (ret)
		return ret;

	fsl_qspi_read_data(q, len, buf);
	return 0;
}

static int fsl_qspi_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len,
			int write_enable)
{
	struct fsl_qspi *q = nor->priv;
	int ret;

	if (!buf) {
		ret = fsl_qspi_runcmd(q, opcode, 0, 1);
		if (ret)
			return ret;

		if (opcode == SPINOR_OP_CHIP_ERASE)
			fsl_qspi_invalid(q);

	} else if (len > 0) {
		ret = fsl_qspi_nor_write(q, nor, opcode, 0,
					(u32 *)buf, len, NULL);
	} else {
		dev_err(q->dev, "invalid cmd %d\n", opcode);
		ret = -EINVAL;
	}

	return ret;
}

static void fsl_qspi_write(struct spi_nor *nor, loff_t to,
		size_t len, size_t *retlen, const u_char *buf)
{
	struct fsl_qspi *q = nor->priv;

	fsl_qspi_nor_write(q, nor, nor->program_opcode, to,
				(u32 *)buf, len, retlen);

	/* invalidate the data in the AHB buffer. */
	fsl_qspi_invalid(q);
}

static int fsl_qspi_read(struct spi_nor *nor, loff_t from,
		size_t len, size_t *retlen, u_char *buf)
{
	struct fsl_qspi *q = nor->priv;
	u8 cmd = nor->read_opcode;
	int ret;

	dev_dbg(q->dev, "cmd [%x],read from (0x%p, 0x%.8x, 0x%.8x),len:%d\n",
		cmd, q->ahb_base, q->chip_base_addr, (unsigned int)from, len);

	/* Wait until the previous command is finished. */
	ret = nor->wait_till_ready(nor);
	if (ret)
		return ret;

	/* Read out the data directly from the AHB buffer. */
	memcpy(buf, q->ahb_base + q->chip_base_addr + from, len);

	*retlen += len;
	return 0;
}

static int fsl_qspi_erase(struct spi_nor *nor, loff_t offs)
{
	struct fsl_qspi *q = nor->priv;
	int ret;

	dev_dbg(nor->dev, "%dKiB at 0x%08x:0x%08x\n",
		nor->mtd->erasesize / 1024, q->chip_base_addr, (u32)offs);

	/* Wait until the previous write command is finished. */
	ret = nor->wait_till_ready(nor);
	if (ret)
		return ret;

	/* Send write enable, then erase commands. */
	ret = nor->write_reg(nor, SPINOR_OP_WREN, NULL, 0, 0);
	if (ret)
		return ret;

	ret = fsl_qspi_runcmd(q, nor->erase_opcode, offs, 0);
	if (ret)
		return ret;

	fsl_qspi_invalid(q);
	return 0;
}

static int fsl_qspi_prep(struct spi_nor *nor, enum spi_nor_ops ops)
{
	struct fsl_qspi *q = nor->priv;
	int ret;

	ret = clk_enable(q->clk_en);
	if (ret)
		return ret;

	ret = clk_enable(q->clk);
	if (ret) {
		clk_disable(q->clk_en);
		return ret;
	}

	fsl_qspi_set_base_addr(q, nor);
	return 0;
}

static void fsl_qspi_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
{
	struct fsl_qspi *q = nor->priv;

	clk_disable(q->clk);
	clk_disable(q->clk_en);
}

static int fsl_qspi_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	struct mtd_part_parser_data ppdata;
	struct device *dev = &pdev->dev;
	struct fsl_qspi *q;
	struct resource *res;
	struct spi_nor *nor;
	struct mtd_info *mtd;
	int ret, i = 0;
	bool has_second_chip = false;
	const struct of_device_id *of_id =
			of_match_device(fsl_qspi_dt_ids, &pdev->dev);

	q = devm_kzalloc(dev, sizeof(*q), GFP_KERNEL);
	if (!q)
		return -ENOMEM;

	q->nor_num = of_get_child_count(dev->of_node);
	if (!q->nor_num || q->nor_num > FSL_QSPI_MAX_CHIP)
		return -ENODEV;

	/* find the resources */
	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "QuadSPI");
	q->iobase = devm_ioremap_resource(dev, res);
	if (IS_ERR(q->iobase)) {
		ret = PTR_ERR(q->iobase);
		goto map_failed;
	}

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
					"QuadSPI-memory");
	q->ahb_base = devm_ioremap_resource(dev, res);
	if (IS_ERR(q->ahb_base)) {
		ret = PTR_ERR(q->ahb_base);
		goto map_failed;
	}
	q->memmap_phy = res->start;

	/* find the clocks */
	q->clk_en = devm_clk_get(dev, "qspi_en");
	if (IS_ERR(q->clk_en)) {
		ret = PTR_ERR(q->clk_en);
		goto map_failed;
	}

	q->clk = devm_clk_get(dev, "qspi");
	if (IS_ERR(q->clk)) {
		ret = PTR_ERR(q->clk);
		goto map_failed;
	}

	ret = clk_prepare_enable(q->clk_en);
	if (ret) {
		dev_err(dev, "cannot enable the qspi_en clock\n");
		goto map_failed;
	}

	ret = clk_prepare_enable(q->clk);
	if (ret) {
		clk_disable_unprepare(q->clk_en);
		dev_err(dev, "cannot enable the qspi clock\n");
		goto map_failed;
	}

	/* find the irq */
	ret = platform_get_irq(pdev, 0);
	if (ret < 0) {
		dev_err(dev, "failed to get the irq\n");
		goto irq_failed;
	}

	ret = devm_request_irq(dev, ret,
			fsl_qspi_irq_handler, 0, pdev->name, q);
	if (ret) {
		dev_err(dev, "failed to request irq.\n");
		goto irq_failed;
	}

	q->dev = dev;
	q->devtype_data = (struct fsl_qspi_devtype_data *)of_id->data;
	platform_set_drvdata(pdev, q);

	ret = fsl_qspi_nor_setup(q);
	if (ret)
		goto irq_failed;

	if (of_get_property(np, "fsl,qspi-has-second-chip", NULL))
		has_second_chip = true;

	/* iterate the subnodes. */
	for_each_available_child_of_node(dev->of_node, np) {
		const struct spi_device_id *id;
		char modalias[40];

		/* skip the holes */
		if (!has_second_chip)
			i *= 2;

		nor = &q->nor[i];
		mtd = &q->mtd[i];

		nor->mtd = mtd;
		nor->dev = dev;
		nor->priv = q;
		mtd->priv = nor;

		/* fill the hooks */
		nor->read_reg = fsl_qspi_read_reg;
		nor->write_reg = fsl_qspi_write_reg;
		nor->read = fsl_qspi_read;
		nor->write = fsl_qspi_write;
		nor->erase = fsl_qspi_erase;

		nor->prepare = fsl_qspi_prep;
		nor->unprepare = fsl_qspi_unprep;

		ret = of_modalias_node(np, modalias, sizeof(modalias));
		if (ret < 0)
			goto map_failed;

		id = spi_nor_match_id(modalias);
		if (!id) {
			ret = -ENODEV;
			goto map_failed;
		}

		ret = of_property_read_u32(np, "spi-max-frequency",
				&q->clk_rate);
		if (ret < 0)
			goto map_failed;

		/* set the chip address for READID */
		fsl_qspi_set_base_addr(q, nor);

		ret = spi_nor_scan(nor, id, SPI_NOR_QUAD);
		if (ret)
			goto map_failed;

		ppdata.of_node = np;
		ret = mtd_device_parse_register(mtd, NULL, &ppdata, NULL, 0);
		if (ret)
			goto map_failed;

		/* Set the correct NOR size now. */
		if (q->nor_size == 0) {
			q->nor_size = mtd->size;

			/* Map the SPI NOR to an accessible address */
			fsl_qspi_set_map_addr(q);
		}

		/*
		 * The TX FIFO is 64 bytes in the Vybrid, but the Page Program
		 * may write 256 bytes at a time. The write works in units of
		 * the TX FIFO, not in units of the SPI NOR's page size.
		 *
		 * So shrink the spi_nor->page_size if it is larger than the
		 * TX FIFO.
		 */
		if (nor->page_size > q->devtype_data->txfifo)
			nor->page_size = q->devtype_data->txfifo;

		i++;
	}

	/* finish the rest of the init. */
	ret = fsl_qspi_nor_setup_last(q);
	if (ret)
		goto last_init_failed;

	clk_disable(q->clk);
	clk_disable(q->clk_en);
	dev_info(dev, "QuadSPI SPI NOR flash driver\n");
	return 0;

last_init_failed:
	for (i = 0; i < q->nor_num; i++)
		mtd_device_unregister(&q->mtd[i]);

irq_failed:
	clk_disable_unprepare(q->clk);
	clk_disable_unprepare(q->clk_en);
map_failed:
	dev_err(dev, "Freescale QuadSPI probe failed\n");
	return ret;
}

static int fsl_qspi_remove(struct platform_device *pdev)
{
	struct fsl_qspi *q = platform_get_drvdata(pdev);
	int i;

	for (i = 0; i < q->nor_num; i++)
		mtd_device_unregister(&q->mtd[i]);

	/* disable the hardware */
	writel(QUADSPI_MCR_MDIS_MASK, q->iobase + QUADSPI_MCR);
	writel(0x0, q->iobase + QUADSPI_RSER);

	clk_unprepare(q->clk);
	clk_unprepare(q->clk_en);
	return 0;
}

static struct platform_driver fsl_qspi_driver = {
	.driver = {
		.name	= "fsl-quadspi",
		.bus	= &platform_bus_type,
		.owner	= THIS_MODULE,
		.of_match_table = fsl_qspi_dt_ids,
	},
	.probe          = fsl_qspi_probe,
	.remove		= fsl_qspi_remove,
};
module_platform_driver(fsl_qspi_driver);

MODULE_DESCRIPTION("Freescale QuadSPI Controller Driver");
MODULE_AUTHOR("Freescale Semiconductor Inc.");
MODULE_LICENSE("GPL v2");
drivers/mtd/spi-nor/spi-nor.c
/*
 * Based on m25p80.c, by Mike Lavender (mike@steroidmicros.com), with
 * influence from lart.c (Abraham Van Der Merwe) and mtd_dataflash.c
 *
 * Copyright (C) 2005, Intec Automation Inc.
 * Copyright (C) 2014, Freescale Semiconductor, Inc.
 *
 * This code is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/math64.h>

#include <linux/mtd/cfi.h>
#include <linux/mtd/mtd.h>
#include <linux/of_platform.h>
#include <linux/spi/flash.h>
#include <linux/mtd/spi-nor.h>

/* Define max times to check status register before we give up. */
#define MAX_READY_WAIT_JIFFIES	(40 * HZ) /* M25P16 specs 40s max chip erase */

#define JEDEC_MFR(_jedec_id)	((_jedec_id) >> 16)

/*
 * Read the status register.
 * Returns the status register value, or a negative errno if an
 * error occurred.
 */
static int read_sr(struct spi_nor *nor)
{
	int ret;
	u8 val;

	ret = nor->read_reg(nor, SPINOR_OP_RDSR, &val, 1);
	if (ret < 0) {
		pr_err("error %d reading SR\n", (int) ret);
		return ret;
	}

	return val;
}

/*
 * Read the configuration register.
 * Returns the configuration register value, or a negative errno if an
 * error occurred.
 */
static int read_cr(struct spi_nor *nor)
{
	int ret;
	u8 val;

	ret = nor->read_reg(nor, SPINOR_OP_RDCR, &val, 1);
	if (ret < 0) {
		dev_err(nor->dev, "error %d reading CR\n", ret);
		return ret;
	}

	return val;
}

/*
 * Dummy-cycle calculation for the different read types.
 * It can be extended to support more commands with
 * different dummy-cycle requirements.
 */
static inline int spi_nor_read_dummy_cycles(struct spi_nor *nor)
{
	switch (nor->flash_read) {
	case SPI_NOR_FAST:
	case SPI_NOR_DUAL:
	case SPI_NOR_QUAD:
		return 1;
	case SPI_NOR_NORMAL:
		return 0;
	}
	return 0;
}

/*
 * Write status register 1 byte
 * Returns negative if error occurred.
 */
static inline int write_sr(struct spi_nor *nor, u8 val)
{
	nor->cmd_buf[0] = val;
	return nor->write_reg(nor, SPINOR_OP_WRSR, nor->cmd_buf, 1, 0);
}

/*
 * Set write enable latch with Write Enable command.
 * Returns negative if error occurred.
 */
static inline int write_enable(struct spi_nor *nor)
{
	return nor->write_reg(nor, SPINOR_OP_WREN, NULL, 0, 0);
}

/*
 * Send write disable instruction to the chip.
 */
static inline int write_disable(struct spi_nor *nor)
{
	return nor->write_reg(nor, SPINOR_OP_WRDI, NULL, 0, 0);
}

static inline struct spi_nor *mtd_to_spi_nor(struct mtd_info *mtd)
{
	return mtd->priv;
}

/* Enable/disable 4-byte addressing mode. */
static inline int set_4byte(struct spi_nor *nor, u32 jedec_id, int enable)
{
	int status;
	bool need_wren = false;
	u8 cmd;

	switch (JEDEC_MFR(jedec_id)) {
	case CFI_MFR_ST: /* Micron, actually */
		/* Some Micron need WREN command; all will accept it */
		need_wren = true;
		/* fall through */
	case CFI_MFR_MACRONIX:
	case 0xEF /* winbond */:
		if (need_wren)
			write_enable(nor);

		cmd = enable ? SPINOR_OP_EN4B : SPINOR_OP_EX4B;
		status = nor->write_reg(nor, cmd, NULL, 0, 0);
		if (need_wren)
			write_disable(nor);

		return status;
	default:
		/* Spansion style */
		nor->cmd_buf[0] = enable << 7;
		return nor->write_reg(nor, SPINOR_OP_BRWR, nor->cmd_buf, 1, 0);
	}
}

static int spi_nor_wait_till_ready(struct spi_nor *nor)
{
	unsigned long deadline;
	int sr;

	deadline = jiffies + MAX_READY_WAIT_JIFFIES;

	do {
		cond_resched();

		sr = read_sr(nor);
		if (sr < 0)
			break;
		else if (!(sr & SR_WIP))
			return 0;
	} while (!time_after_eq(jiffies, deadline));

	return -ETIMEDOUT;
}

/*
 * Service routine to read the status register until ready, or timeout occurs.
 * Returns non-zero if error.
 */
static int wait_till_ready(struct spi_nor *nor)
{
	return nor->wait_till_ready(nor);
}

/*
 * Erase the whole flash memory
 *
 * Returns 0 if successful, non-zero otherwise.
 */
static int erase_chip(struct spi_nor *nor)
{
	int ret;

	dev_dbg(nor->dev, " %lldKiB\n", (long long)(nor->mtd->size >> 10));

	/* Wait until the previous write command is finished. */
	ret = wait_till_ready(nor);
	if (ret)
		return ret;

	/* Send write enable, then erase commands. */
	write_enable(nor);

	return nor->write_reg(nor, SPINOR_OP_CHIP_ERASE, NULL, 0, 0);
}

static int spi_nor_lock_and_prep(struct spi_nor *nor, enum spi_nor_ops ops)
{
	int ret = 0;

	mutex_lock(&nor->lock);

	if (nor->prepare) {
		ret = nor->prepare(nor, ops);
		if (ret) {
			dev_err(nor->dev, "failed in the preparation.\n");
			mutex_unlock(&nor->lock);
			return ret;
		}
	}
	return ret;
}

static void spi_nor_unlock_and_unprep(struct spi_nor *nor, enum spi_nor_ops ops)
{
	if (nor->unprepare)
		nor->unprepare(nor, ops);
	mutex_unlock(&nor->lock);
}

/*
 * Erase an address range on the nor chip. The address range may span
 * one or more erase sectors. Return an error if there is a problem erasing.
 */
static int spi_nor_erase(struct mtd_info *mtd, struct erase_info *instr)
{
	struct spi_nor *nor = mtd_to_spi_nor(mtd);
	u32 addr, len;
	uint32_t rem;
	int ret;

	dev_dbg(nor->dev, "at 0x%llx, len %lld\n", (long long)instr->addr,
			(long long)instr->len);

	div_u64_rem(instr->len, mtd->erasesize, &rem);
	if (rem)
		return -EINVAL;

	addr = instr->addr;
	len = instr->len;

	ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_ERASE);
	if (ret)
		return ret;

	/* whole-chip erase? */
	if (len == mtd->size) {
		if (erase_chip(nor)) {
			ret = -EIO;
			goto erase_err;
		}

	/* REVISIT in some cases we could speed up erasing large regions
	 * by using SPINOR_OP_SE instead of SPINOR_OP_BE_4K. We may have set up
	 * to use "small sector erase", but that's not always optimal.
	 */

	/* "sector"-at-a-time erase */
	} else {
		while (len) {
			if (nor->erase(nor, addr)) {
				ret = -EIO;
				goto erase_err;
			}

			addr += mtd->erasesize;
			len -= mtd->erasesize;
		}
	}

	spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_ERASE);

	instr->state = MTD_ERASE_DONE;
	mtd_erase_callback(instr);

	return ret;

erase_err:
	spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_ERASE);
	instr->state = MTD_ERASE_FAILED;
	return ret;
}

static int spi_nor_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
	struct spi_nor *nor = mtd_to_spi_nor(mtd);
	uint32_t offset = ofs;
	uint8_t status_old, status_new;
	int ret = 0;

	ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_LOCK);
	if (ret)
		return ret;

	/* Wait until the previous command is finished */
	ret = wait_till_ready(nor);
	if (ret)
		goto err;

	status_old = read_sr(nor);

	if (offset < mtd->size - (mtd->size / 2))
		status_new = status_old | SR_BP2 | SR_BP1 | SR_BP0;
	else if (offset < mtd->size - (mtd->size / 4))
		status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1;
	else if (offset < mtd->size - (mtd->size / 8))
		status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0;
	else if (offset < mtd->size - (mtd->size / 16))
		status_new = (status_old & ~(SR_BP0 | SR_BP1)) | SR_BP2;
	else if (offset < mtd->size - (mtd->size / 32))
		status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0;
	else if (offset < mtd->size - (mtd->size / 64))
		status_new = (status_old & ~(SR_BP2 | SR_BP0)) | SR_BP1;
	else
		status_new = (status_old & ~(SR_BP2 | SR_BP1)) | SR_BP0;

	/* Only modify protection if it will not unlock other areas */
	if ((status_new & (SR_BP2 | SR_BP1 | SR_BP0)) >
			(status_old & (SR_BP2 | SR_BP1 | SR_BP0))) {
		write_enable(nor);
		ret = write_sr(nor, status_new);
		if (ret)
			goto err;
	}

err:
	spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_LOCK);
	return ret;
}

static int spi_nor_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
{
	struct spi_nor *nor = mtd_to_spi_nor(mtd);
	uint32_t offset = ofs;
	uint8_t status_old, status_new;
	int ret = 0;

	ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_UNLOCK);
	if (ret)
		return ret;

	/* Wait until the previous command is finished */
	ret = wait_till_ready(nor);
	if (ret)
		goto err;

	status_old = read_sr(nor);

	if (offset+len > mtd->size - (mtd->size / 64))
		status_new = status_old & ~(SR_BP2 | SR_BP1 | SR_BP0);
	else if (offset+len > mtd->size - (mtd->size / 32))
		status_new = (status_old & ~(SR_BP2 | SR_BP1)) | SR_BP0;
	else if (offset+len > mtd->size - (mtd->size / 16))
		status_new = (status_old & ~(SR_BP2 | SR_BP0)) | SR_BP1;
	else if (offset+len > mtd->size - (mtd->size / 8))
		status_new = (status_old & ~SR_BP2) | SR_BP1 | SR_BP0;
	else if (offset+len > mtd->size - (mtd->size / 4))
		status_new = (status_old & ~(SR_BP0 | SR_BP1)) | SR_BP2;
	else if (offset+len > mtd->size - (mtd->size / 2))
		status_new = (status_old & ~SR_BP1) | SR_BP2 | SR_BP0;
	else
		status_new = (status_old & ~SR_BP0) | SR_BP2 | SR_BP1;

	/* Only modify protection if it will not lock other areas */
	if ((status_new & (SR_BP2 | SR_BP1 | SR_BP0)) <
			(status_old & (SR_BP2 | SR_BP1 | SR_BP0))) {
		write_enable(nor);
		ret = write_sr(nor, status_new);
		if (ret)
			goto err;
	}

err:
	spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_UNLOCK);
	return ret;
}

struct flash_info {
	/* JEDEC id zero means "no ID" (most older chips); otherwise it has
	 * a high byte of zero plus three data bytes: the manufacturer id,
	 * then a two byte device id.
	 */
	u32		jedec_id;
	u16             ext_id;

	/* The size listed here is what works with SPINOR_OP_SE, which isn't
	 * necessarily called a "sector" by the vendor.
	 */
	unsigned	sector_size;
	u16		n_sectors;

	u16		page_size;
	u16		addr_width;

	u16		flags;
#define	SECT_4K			0x01	/* SPINOR_OP_BE_4K works uniformly */
#define	SPI_NOR_NO_ERASE	0x02	/* No erase command needed */
#define	SST_WRITE		0x04	/* use SST byte programming */
#define	SPI_NOR_NO_FR		0x08	/* Can't do fastread */
#define	SECT_4K_PMC		0x10	/* SPINOR_OP_BE_4K_PMC works uniformly */
#define	SPI_NOR_DUAL_READ	0x20    /* Flash supports Dual Read */
#define	SPI_NOR_QUAD_READ	0x40    /* Flash supports Quad Read */
};

#define INFO(_jedec_id, _ext_id, _sector_size, _n_sectors, _flags)	\
	((kernel_ulong_t)&(struct flash_info) {				\
		.jedec_id = (_jedec_id),				\
		.ext_id = (_ext_id),					\
		.sector_size = (_sector_size),				\
		.n_sectors = (_n_sectors),				\
		.page_size = 256,					\
		.flags = (_flags),					\
	})

#define CAT25_INFO(_sector_size, _n_sectors, _page_size, _addr_width, _flags)	\
	((kernel_ulong_t)&(struct flash_info) {				\
		.sector_size = (_sector_size),				\
		.n_sectors = (_n_sectors),				\
		.page_size = (_page_size),				\
		.addr_width = (_addr_width),				\
		.flags = (_flags),					\
	})

/* NOTE: double check command sets and memory organization when you add
 * more nor chips. The current list focuses on newer chips, which
 * have been converging on command sets which include JEDEC ID.
 */
const struct spi_device_id spi_nor_ids[] = {
	/* Atmel -- some are (confusingly) marketed as "DataFlash" */
	{ "at25fs010",  INFO(0x1f6601, 0, 32 * 1024,   4, SECT_4K) },
	{ "at25fs040",  INFO(0x1f6604, 0, 64 * 1024,   8, SECT_4K) },

	{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024,   8, SECT_4K) },
	{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024,  64, SECT_4K) },
	{ "at25df641",  INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K) },

	{ "at26f004",   INFO(0x1f0400, 0, 64 * 1024,  8, SECT_4K) },
	{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16, SECT_4K) },
	{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32, SECT_4K) },
	{ "at26df321",  INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K) },

	{ "at45db081d", INFO(0x1f2500, 0, 64 * 1024, 16, SECT_4K) },

	/* EON -- en25xxx */
	{ "en25f32",    INFO(0x1c3116, 0, 64 * 1024,   64, SECT_4K) },
	{ "en25p32",    INFO(0x1c2016, 0, 64 * 1024,   64, 0) },
	{ "en25q32b",   INFO(0x1c3016, 0, 64 * 1024,   64, 0) },
	{ "en25p64",    INFO(0x1c2017, 0, 64 * 1024,  128, 0) },
	{ "en25q64",    INFO(0x1c3017, 0, 64 * 1024,  128, SECT_4K) },
	{ "en25qh256",  INFO(0x1c7019, 0, 64 * 1024,  512, 0) },

	/* ESMT */
	{ "f25l32pa", INFO(0x8c2016, 0, 64 * 1024, 64, SECT_4K) },

	/* Everspin */
	{ "mr25h256", CAT25_INFO( 32 * 1024, 1, 256, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },
	{ "mr25h10",  CAT25_INFO(128 * 1024, 1, 256, 3, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) },

	/* GigaDevice */
	{ "gd25q32", INFO(0xc84016, 0, 64 * 1024,  64, SECT_4K) },
	{ "gd25q64", INFO(0xc84017, 0, 64 * 1024, 128, SECT_4K) },

	/* Intel/Numonyx -- xxxs33b */
	{ "160s33b",  INFO(0x898911, 0, 64 * 1024,  32, 0) },
	{ "320s33b",  INFO(0x898912, 0, 64 * 1024,  64, 0) },
	{ "640s33b",  INFO(0x898913, 0, 64 * 1024, 128, 0) },

	/* Macronix */
	{ "mx25l2005a",  INFO(0xc22012, 0, 64 * 1024,   4, SECT_4K) },
	{ "mx25l4005a",  INFO(0xc22013, 0, 64 * 1024,   8, SECT_4K) },
	{ "mx25l8005",   INFO(0xc22014, 0, 64 * 1024,  16, 0) },
	{ "mx25l1606e",  INFO(0xc22015, 0, 64 * 1024,  32, SECT_4K) },
	{ "mx25l3205d",  INFO(0xc22016, 0, 64 * 1024,  64, 0) },
	{ "mx25l3255e",  INFO(0xc29e16, 0, 64 * 1024,  64, SECT_4K) },
	{ "mx25l6405d",  INFO(0xc22017, 0, 64 * 1024, 128, 0) },
	{ "mx25l12805d", INFO(0xc22018, 0, 64 * 1024, 256, 0) },
	{ "mx25l12855e", INFO(0xc22618, 0, 64 * 1024, 256, 0) },
	{ "mx25l25635e", INFO(0xc22019, 0, 64 * 1024, 512, 0) },
	{ "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) },
	{ "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, SPI_NOR_QUAD_READ) },
	{ "mx66l1g55g",  INFO(0xc2261b, 0, 64 * 1024, 2048, SPI_NOR_QUAD_READ) },

	/* Micron */
	{ "n25q064",     INFO(0x20ba17, 0, 64 * 1024,  128, 0) },
	{ "n25q128a11",  INFO(0x20bb18, 0, 64 * 1024,  256, 0) },
	{ "n25q128a13",  INFO(0x20ba18, 0, 64 * 1024,  256, 0) },
	{ "n25q256a",    INFO(0x20ba19, 0, 64 * 1024,  512, SECT_4K) },
	{ "n25q512a",    INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K) },

	/* PMC */
	{ "pm25lv512",   INFO(0,        0, 32 * 1024,    2, SECT_4K_PMC) },
	{ "pm25lv010",   INFO(0,        0, 32 * 1024,    4, SECT_4K_PMC) },
	{ "pm25lq032",   INFO(0x7f9d46, 0, 64 * 1024,   64, SECT_4K) },

	/* Spansion -- single (large) sector size only, at least
	 * for the chips listed here (without boot sectors).
	 */
	{ "s25sl032p",  INFO(0x010215, 0x4d00,  64 * 1024,  64, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
	{ "s25sl064p",  INFO(0x010216, 0x4d00,  64 * 1024, 128, 0) },
	{ "s25fl256s0", INFO(0x010219, 0x4d00, 256 * 1024, 128, 0) },
	{ "s25fl256s1", INFO(0x010219, 0x4d01,  64 * 1024, 512, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
	{ "s25fl512s",  INFO(0x010220, 0x4d00, 256 * 1024, 256, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
	{ "s70fl01gs",  INFO(0x010221, 0x4d00, 256 * 1024, 256, 0) },
	{ "s25sl12800", INFO(0x012018, 0x0300, 256 * 1024,  64, 0) },
	{ "s25sl12801", INFO(0x012018, 0x0301,  64 * 1024, 256, 0) },
	{ "s25fl129p0", INFO(0x012018, 0x4d00, 256 * 1024,  64, 0) },
	{ "s25fl129p1", INFO(0x012018, 0x4d01,  64 * 1024, 256, 0) },
	{ "s25sl004a",  INFO(0x010212,      0,  64 * 1024,   8, 0) },
	{ "s25sl008a",  INFO(0x010213,      0,  64 * 1024,  16, 0) },
	{ "s25sl016a",  INFO(0x010214,      0,  64 * 1024,  32, 0) },
	{ "s25sl032a",  INFO(0x010215,      0,  64 * 1024,  64, 0) },
	{ "s25sl064a",  INFO(0x010216,      0,  64 * 1024, 128, 0) },
	{ "s25fl008k",  INFO(0xef4014,      0,  64 * 1024,  16, SECT_4K) },
	{ "s25fl016k",  INFO(0xef4015,      0,  64 * 1024,  32, SECT_4K) },
	{ "s25fl064k",  INFO(0xef4017,      0,  64 * 1024, 128, SECT_4K) },

	/* SST -- large erase sizes are "overlays", "sectors" are 4K */
	{ "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024,  8, SECT_4K | SST_WRITE) },
	{ "sst25vf080b", INFO(0xbf258e, 0, 64 * 1024, 16, SECT_4K | SST_WRITE) },
	{ "sst25vf016b", INFO(0xbf2541, 0, 64 * 1024, 32, SECT_4K | SST_WRITE) },
	{ "sst25vf032b", INFO(0xbf254a, 0, 64 * 1024, 64, SECT_4K | SST_WRITE) },
	{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128, SECT_4K) },
	{ "sst25wf512",  INFO(0xbf2501, 0, 64 * 1024,  1, SECT_4K | SST_WRITE) },
	{ "sst25wf010",  INFO(0xbf2502, 0, 64 * 1024,  2, SECT_4K | SST_WRITE) },
	{ "sst25wf020",  INFO(0xbf2503, 0, 64 * 1024,  4, SECT_4K | SST_WRITE) },
	{
"sst25wf040", INFO(0xbf2504, 0, 64 * 1024, 8, SECT_4K | SST_WRITE) }, 529 + 530 + /* ST Microelectronics -- newer production may have feature updates */ 531 + { "m25p05", INFO(0x202010, 0, 32 * 1024, 2, 0) }, 532 + { "m25p10", INFO(0x202011, 0, 32 * 1024, 4, 0) }, 533 + { "m25p20", INFO(0x202012, 0, 64 * 1024, 4, 0) }, 534 + { "m25p40", INFO(0x202013, 0, 64 * 1024, 8, 0) }, 535 + { "m25p80", INFO(0x202014, 0, 64 * 1024, 16, 0) }, 536 + { "m25p16", INFO(0x202015, 0, 64 * 1024, 32, 0) }, 537 + { "m25p32", INFO(0x202016, 0, 64 * 1024, 64, 0) }, 538 + { "m25p64", INFO(0x202017, 0, 64 * 1024, 128, 0) }, 539 + { "m25p128", INFO(0x202018, 0, 256 * 1024, 64, 0) }, 540 + { "n25q032", INFO(0x20ba16, 0, 64 * 1024, 64, 0) }, 541 + 542 + { "m25p05-nonjedec", INFO(0, 0, 32 * 1024, 2, 0) }, 543 + { "m25p10-nonjedec", INFO(0, 0, 32 * 1024, 4, 0) }, 544 + { "m25p20-nonjedec", INFO(0, 0, 64 * 1024, 4, 0) }, 545 + { "m25p40-nonjedec", INFO(0, 0, 64 * 1024, 8, 0) }, 546 + { "m25p80-nonjedec", INFO(0, 0, 64 * 1024, 16, 0) }, 547 + { "m25p16-nonjedec", INFO(0, 0, 64 * 1024, 32, 0) }, 548 + { "m25p32-nonjedec", INFO(0, 0, 64 * 1024, 64, 0) }, 549 + { "m25p64-nonjedec", INFO(0, 0, 64 * 1024, 128, 0) }, 550 + { "m25p128-nonjedec", INFO(0, 0, 256 * 1024, 64, 0) }, 551 + 552 + { "m45pe10", INFO(0x204011, 0, 64 * 1024, 2, 0) }, 553 + { "m45pe80", INFO(0x204014, 0, 64 * 1024, 16, 0) }, 554 + { "m45pe16", INFO(0x204015, 0, 64 * 1024, 32, 0) }, 555 + 556 + { "m25pe20", INFO(0x208012, 0, 64 * 1024, 4, 0) }, 557 + { "m25pe80", INFO(0x208014, 0, 64 * 1024, 16, 0) }, 558 + { "m25pe16", INFO(0x208015, 0, 64 * 1024, 32, SECT_4K) }, 559 + 560 + { "m25px16", INFO(0x207115, 0, 64 * 1024, 32, SECT_4K) }, 561 + { "m25px32", INFO(0x207116, 0, 64 * 1024, 64, SECT_4K) }, 562 + { "m25px32-s0", INFO(0x207316, 0, 64 * 1024, 64, SECT_4K) }, 563 + { "m25px32-s1", INFO(0x206316, 0, 64 * 1024, 64, SECT_4K) }, 564 + { "m25px64", INFO(0x207117, 0, 64 * 1024, 128, 0) }, 565 + 566 + /* Winbond -- w25x "blocks" are 64K, 
"sectors" are 4KiB */ 567 + { "w25x10", INFO(0xef3011, 0, 64 * 1024, 2, SECT_4K) }, 568 + { "w25x20", INFO(0xef3012, 0, 64 * 1024, 4, SECT_4K) }, 569 + { "w25x40", INFO(0xef3013, 0, 64 * 1024, 8, SECT_4K) }, 570 + { "w25x80", INFO(0xef3014, 0, 64 * 1024, 16, SECT_4K) }, 571 + { "w25x16", INFO(0xef3015, 0, 64 * 1024, 32, SECT_4K) }, 572 + { "w25x32", INFO(0xef3016, 0, 64 * 1024, 64, SECT_4K) }, 573 + { "w25q32", INFO(0xef4016, 0, 64 * 1024, 64, SECT_4K) }, 574 + { "w25q32dw", INFO(0xef6016, 0, 64 * 1024, 64, SECT_4K) }, 575 + { "w25x64", INFO(0xef3017, 0, 64 * 1024, 128, SECT_4K) }, 576 + { "w25q64", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) }, 577 + { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) }, 578 + { "w25q80", INFO(0xef5014, 0, 64 * 1024, 16, SECT_4K) }, 579 + { "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16, SECT_4K) }, 580 + { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) }, 581 + { "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K) }, 582 + 583 + /* Catalyst / On Semiconductor -- non-JEDEC */ 584 + { "cat25c11", CAT25_INFO( 16, 8, 16, 1, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) }, 585 + { "cat25c03", CAT25_INFO( 32, 8, 16, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) }, 586 + { "cat25c09", CAT25_INFO( 128, 8, 32, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) }, 587 + { "cat25c17", CAT25_INFO( 256, 8, 32, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) }, 588 + { "cat25128", CAT25_INFO(2048, 8, 64, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) }, 589 + { }, 590 + }; 591 + EXPORT_SYMBOL_GPL(spi_nor_ids); 592 + 593 + static const struct spi_device_id *spi_nor_read_id(struct spi_nor *nor) 594 + { 595 + int tmp; 596 + u8 id[5]; 597 + u32 jedec; 598 + u16 ext_jedec; 599 + struct flash_info *info; 600 + 601 + tmp = nor->read_reg(nor, SPINOR_OP_RDID, id, 5); 602 + if (tmp < 0) { 603 + dev_dbg(nor->dev, " error %d reading JEDEC ID\n", tmp); 604 + return ERR_PTR(tmp); 605 + } 606 + jedec = id[0]; 607 + jedec = jedec << 8; 608 + jedec |= id[1]; 609 + jedec = jedec << 8; 610 + jedec |= id[2]; 
611 + 612 + ext_jedec = id[3] << 8 | id[4]; 613 + 614 + for (tmp = 0; tmp < ARRAY_SIZE(spi_nor_ids) - 1; tmp++) { 615 + info = (void *)spi_nor_ids[tmp].driver_data; 616 + if (info->jedec_id == jedec) { 617 + if (info->ext_id == 0 || info->ext_id == ext_jedec) 618 + return &spi_nor_ids[tmp]; 619 + } 620 + } 621 + dev_err(nor->dev, "unrecognized JEDEC id %06x\n", jedec); 622 + return ERR_PTR(-ENODEV); 623 + } 624 + 625 + static const struct spi_device_id *jedec_probe(struct spi_nor *nor) 626 + { 627 + return nor->read_id(nor); 628 + } 629 + 630 + static int spi_nor_read(struct mtd_info *mtd, loff_t from, size_t len, 631 + size_t *retlen, u_char *buf) 632 + { 633 + struct spi_nor *nor = mtd_to_spi_nor(mtd); 634 + int ret; 635 + 636 + dev_dbg(nor->dev, "from 0x%08x, len %zd\n", (u32)from, len); 637 + 638 + ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_READ); 639 + if (ret) 640 + return ret; 641 + 642 + ret = nor->read(nor, from, len, retlen, buf); 643 + 644 + spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_READ); 645 + return ret; 646 + } 647 + 648 + static int sst_write(struct mtd_info *mtd, loff_t to, size_t len, 649 + size_t *retlen, const u_char *buf) 650 + { 651 + struct spi_nor *nor = mtd_to_spi_nor(mtd); 652 + size_t actual; 653 + int ret; 654 + 655 + dev_dbg(nor->dev, "to 0x%08x, len %zd\n", (u32)to, len); 656 + 657 + ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_WRITE); 658 + if (ret) 659 + return ret; 660 + 661 + /* Wait until finished previous write command. */ 662 + ret = wait_till_ready(nor); 663 + if (ret) 664 + goto time_out; 665 + 666 + write_enable(nor); 667 + 668 + nor->sst_write_second = false; 669 + 670 + actual = to % 2; 671 + /* Start write from odd address. */ 672 + if (actual) { 673 + nor->program_opcode = SPINOR_OP_BP; 674 + 675 + /* write one byte. */ 676 + nor->write(nor, to, 1, retlen, buf); 677 + ret = wait_till_ready(nor); 678 + if (ret) 679 + goto time_out; 680 + } 681 + to += actual; 682 + 683 + /* Write out most of the data here. 
*/ 684 + for (; actual < len - 1; actual += 2) { 685 + nor->program_opcode = SPINOR_OP_AAI_WP; 686 + 687 + /* write two bytes. */ 688 + nor->write(nor, to, 2, retlen, buf + actual); 689 + ret = wait_till_ready(nor); 690 + if (ret) 691 + goto time_out; 692 + to += 2; 693 + nor->sst_write_second = true; 694 + } 695 + nor->sst_write_second = false; 696 + 697 + write_disable(nor); 698 + ret = wait_till_ready(nor); 699 + if (ret) 700 + goto time_out; 701 + 702 + /* Write out trailing byte if it exists. */ 703 + if (actual != len) { 704 + write_enable(nor); 705 + 706 + nor->program_opcode = SPINOR_OP_BP; 707 + nor->write(nor, to, 1, retlen, buf + actual); 708 + 709 + ret = wait_till_ready(nor); 710 + if (ret) 711 + goto time_out; 712 + write_disable(nor); 713 + } 714 + time_out: 715 + spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_WRITE); 716 + return ret; 717 + } 718 + 719 + /* 720 + * Write an address range to the nor chip. Data must be written in 721 + * FLASH_PAGESIZE chunks. The address range may be any size provided 722 + * it is within the physical boundaries. 723 + */ 724 + static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len, 725 + size_t *retlen, const u_char *buf) 726 + { 727 + struct spi_nor *nor = mtd_to_spi_nor(mtd); 728 + u32 page_offset, page_size, i; 729 + int ret; 730 + 731 + dev_dbg(nor->dev, "to 0x%08x, len %zd\n", (u32)to, len); 732 + 733 + ret = spi_nor_lock_and_prep(nor, SPI_NOR_OPS_WRITE); 734 + if (ret) 735 + return ret; 736 + 737 + /* Wait until finished previous write command. */ 738 + ret = wait_till_ready(nor); 739 + if (ret) 740 + goto write_err; 741 + 742 + write_enable(nor); 743 + 744 + page_offset = to & (nor->page_size - 1); 745 + 746 + /* do all the bytes fit onto one page? 
*/ 747 + if (page_offset + len <= nor->page_size) { 748 + nor->write(nor, to, len, retlen, buf); 749 + } else { 750 + /* the size of data remaining on the first page */ 751 + page_size = nor->page_size - page_offset; 752 + nor->write(nor, to, page_size, retlen, buf); 753 + 754 + /* write everything in nor->page_size chunks */ 755 + for (i = page_size; i < len; i += page_size) { 756 + page_size = len - i; 757 + if (page_size > nor->page_size) 758 + page_size = nor->page_size; 759 + 760 + wait_till_ready(nor); 761 + write_enable(nor); 762 + 763 + nor->write(nor, to + i, page_size, retlen, buf + i); 764 + } 765 + } 766 + 767 + write_err: 768 + spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_WRITE); 769 + return ret; 770 + } 771 + 772 + static int macronix_quad_enable(struct spi_nor *nor) 773 + { 774 + int ret, val; 775 + 776 + val = read_sr(nor); 777 + write_enable(nor); 778 + 779 + nor->cmd_buf[0] = val | SR_QUAD_EN_MX; 780 + nor->write_reg(nor, SPINOR_OP_WRSR, nor->cmd_buf, 1, 0); 781 + 782 + if (wait_till_ready(nor)) 783 + return 1; 784 + 785 + ret = read_sr(nor); 786 + if (!(ret > 0 && (ret & SR_QUAD_EN_MX))) { 787 + dev_err(nor->dev, "Macronix Quad bit not set\n"); 788 + return -EINVAL; 789 + } 790 + 791 + return 0; 792 + } 793 + 794 + /* 795 + * Write status register and configuration register with 2 bytes. 796 + * The first byte will be written to the status register, while the 797 + * second byte will be written to the configuration register. 798 + * Return negative if error occurred.
799 + */ 800 + static int write_sr_cr(struct spi_nor *nor, u16 val) 801 + { 802 + nor->cmd_buf[0] = val & 0xff; 803 + nor->cmd_buf[1] = (val >> 8); 804 + 805 + return nor->write_reg(nor, SPINOR_OP_WRSR, nor->cmd_buf, 2, 0); 806 + } 807 + 808 + static int spansion_quad_enable(struct spi_nor *nor) 809 + { 810 + int ret; 811 + int quad_en = CR_QUAD_EN_SPAN << 8; 812 + 813 + write_enable(nor); 814 + 815 + ret = write_sr_cr(nor, quad_en); 816 + if (ret < 0) { 817 + dev_err(nor->dev, 818 + "error while writing configuration register\n"); 819 + return -EINVAL; 820 + } 821 + 822 + /* read back and check it */ 823 + ret = read_cr(nor); 824 + if (!(ret > 0 && (ret & CR_QUAD_EN_SPAN))) { 825 + dev_err(nor->dev, "Spansion Quad bit not set\n"); 826 + return -EINVAL; 827 + } 828 + 829 + return 0; 830 + } 831 + 832 + static int set_quad_mode(struct spi_nor *nor, u32 jedec_id) 833 + { 834 + int status; 835 + 836 + switch (JEDEC_MFR(jedec_id)) { 837 + case CFI_MFR_MACRONIX: 838 + status = macronix_quad_enable(nor); 839 + if (status) { 840 + dev_err(nor->dev, "Macronix quad-read not enabled\n"); 841 + return -EINVAL; 842 + } 843 + return status; 844 + default: 845 + status = spansion_quad_enable(nor); 846 + if (status) { 847 + dev_err(nor->dev, "Spansion quad-read not enabled\n"); 848 + return -EINVAL; 849 + } 850 + return status; 851 + } 852 + } 853 + 854 + static int spi_nor_check(struct spi_nor *nor) 855 + { 856 + if (!nor->dev || !nor->read || !nor->write || 857 + !nor->read_reg || !nor->write_reg || !nor->erase) { 858 + pr_err("spi-nor: please fill all the necessary fields!\n"); 859 + return -EINVAL; 860 + } 861 + 862 + if (!nor->read_id) 863 + nor->read_id = spi_nor_read_id; 864 + if (!nor->wait_till_ready) 865 + nor->wait_till_ready = spi_nor_wait_till_ready; 866 + 867 + return 0; 868 + } 869 + 870 + int spi_nor_scan(struct spi_nor *nor, const struct spi_device_id *id, 871 + enum read_mode mode) 872 + { 873 + struct flash_info *info; 874 + struct flash_platform_data *data; 
875 + struct device *dev = nor->dev; 876 + struct mtd_info *mtd = nor->mtd; 877 + struct device_node *np = dev->of_node; 878 + int ret; 879 + int i; 880 + 881 + ret = spi_nor_check(nor); 882 + if (ret) 883 + return ret; 884 + 885 + /* Platform data helps sort out which chip type we have, as 886 + * well as how this board partitions it. If we don't have 887 + * a chip ID, try the JEDEC id commands; they'll work for most 888 + * newer chips, even if we don't recognize the particular chip. 889 + */ 890 + data = dev_get_platdata(dev); 891 + if (data && data->type) { 892 + const struct spi_device_id *plat_id; 893 + 894 + for (i = 0; i < ARRAY_SIZE(spi_nor_ids) - 1; i++) { 895 + plat_id = &spi_nor_ids[i]; 896 + if (strcmp(data->type, plat_id->name)) 897 + continue; 898 + break; 899 + } 900 + 901 + if (i < ARRAY_SIZE(spi_nor_ids) - 1) 902 + id = plat_id; 903 + else 904 + dev_warn(dev, "unrecognized id %s\n", data->type); 905 + } 906 + 907 + info = (void *)id->driver_data; 908 + 909 + if (info->jedec_id) { 910 + const struct spi_device_id *jid; 911 + 912 + jid = jedec_probe(nor); 913 + if (IS_ERR(jid)) { 914 + return PTR_ERR(jid); 915 + } else if (jid != id) { 916 + /* 917 + * JEDEC knows better, so overwrite platform ID. We 918 + * can't trust partitions any longer, but we'll let 919 + * mtd apply them anyway, since some partitions may be 920 + * marked read-only, and we don't want to lose that 921 + * information, even if it's not 100% accurate. 
922 + */ 923 + dev_warn(dev, "found %s, expected %s\n", 924 + jid->name, id->name); 925 + id = jid; 926 + info = (void *)jid->driver_data; 927 + } 928 + } 929 + 930 + mutex_init(&nor->lock); 931 + 932 + /* 933 + * Atmel, SST and Intel/Numonyx serial nor tend to power 934 + * up with the software protection bits set 935 + */ 936 + 937 + if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ATMEL || 938 + JEDEC_MFR(info->jedec_id) == CFI_MFR_INTEL || 939 + JEDEC_MFR(info->jedec_id) == CFI_MFR_SST) { 940 + write_enable(nor); 941 + write_sr(nor, 0); 942 + } 943 + 944 + if (data && data->name) 945 + mtd->name = data->name; 946 + else 947 + mtd->name = dev_name(dev); 948 + 949 + mtd->type = MTD_NORFLASH; 950 + mtd->writesize = 1; 951 + mtd->flags = MTD_CAP_NORFLASH; 952 + mtd->size = info->sector_size * info->n_sectors; 953 + mtd->_erase = spi_nor_erase; 954 + mtd->_read = spi_nor_read; 955 + 956 + /* nor protection support for STmicro chips */ 957 + if (JEDEC_MFR(info->jedec_id) == CFI_MFR_ST) { 958 + mtd->_lock = spi_nor_lock; 959 + mtd->_unlock = spi_nor_unlock; 960 + } 961 + 962 + /* sst nor chips use AAI word program */ 963 + if (info->flags & SST_WRITE) 964 + mtd->_write = sst_write; 965 + else 966 + mtd->_write = spi_nor_write; 967 + 968 + /* prefer "small sector" erase if possible */ 969 + if (info->flags & SECT_4K) { 970 + nor->erase_opcode = SPINOR_OP_BE_4K; 971 + mtd->erasesize = 4096; 972 + } else if (info->flags & SECT_4K_PMC) { 973 + nor->erase_opcode = SPINOR_OP_BE_4K_PMC; 974 + mtd->erasesize = 4096; 975 + } else { 976 + nor->erase_opcode = SPINOR_OP_SE; 977 + mtd->erasesize = info->sector_size; 978 + } 979 + 980 + if (info->flags & SPI_NOR_NO_ERASE) 981 + mtd->flags |= MTD_NO_ERASE; 982 + 983 + mtd->dev.parent = dev; 984 + nor->page_size = info->page_size; 985 + mtd->writebufsize = nor->page_size; 986 + 987 + if (np) { 988 + /* If we were instantiated by DT, use it */ 989 + if (of_property_read_bool(np, "m25p,fast-read")) 990 + nor->flash_read = SPI_NOR_FAST; 991 + 
else 992 + nor->flash_read = SPI_NOR_NORMAL; 993 + } else { 994 + /* If we weren't instantiated by DT, default to fast-read */ 995 + nor->flash_read = SPI_NOR_FAST; 996 + } 997 + 998 + /* Some devices cannot do fast-read, no matter what DT tells us */ 999 + if (info->flags & SPI_NOR_NO_FR) 1000 + nor->flash_read = SPI_NOR_NORMAL; 1001 + 1002 + /* Quad/Dual-read mode takes precedence over fast/normal */ 1003 + if (mode == SPI_NOR_QUAD && info->flags & SPI_NOR_QUAD_READ) { 1004 + ret = set_quad_mode(nor, info->jedec_id); 1005 + if (ret) { 1006 + dev_err(dev, "quad mode not supported\n"); 1007 + return ret; 1008 + } 1009 + nor->flash_read = SPI_NOR_QUAD; 1010 + } else if (mode == SPI_NOR_DUAL && info->flags & SPI_NOR_DUAL_READ) { 1011 + nor->flash_read = SPI_NOR_DUAL; 1012 + } 1013 + 1014 + /* Default commands */ 1015 + switch (nor->flash_read) { 1016 + case SPI_NOR_QUAD: 1017 + nor->read_opcode = SPINOR_OP_READ_1_1_4; 1018 + break; 1019 + case SPI_NOR_DUAL: 1020 + nor->read_opcode = SPINOR_OP_READ_1_1_2; 1021 + break; 1022 + case SPI_NOR_FAST: 1023 + nor->read_opcode = SPINOR_OP_READ_FAST; 1024 + break; 1025 + case SPI_NOR_NORMAL: 1026 + nor->read_opcode = SPINOR_OP_READ; 1027 + break; 1028 + default: 1029 + dev_err(dev, "No Read opcode defined\n"); 1030 + return -EINVAL; 1031 + } 1032 + 1033 + nor->program_opcode = SPINOR_OP_PP; 1034 + 1035 + if (info->addr_width) 1036 + nor->addr_width = info->addr_width; 1037 + else if (mtd->size > 0x1000000) { 1038 + /* enable 4-byte addressing if the device exceeds 16MiB */ 1039 + nor->addr_width = 4; 1040 + if (JEDEC_MFR(info->jedec_id) == CFI_MFR_AMD) { 1041 + /* Dedicated 4-byte command set */ 1042 + switch (nor->flash_read) { 1043 + case SPI_NOR_QUAD: 1044 + nor->read_opcode = SPINOR_OP_READ4_1_1_4; 1045 + break; 1046 + case SPI_NOR_DUAL: 1047 + nor->read_opcode = SPINOR_OP_READ4_1_1_2; 1048 + break; 1049 + case SPI_NOR_FAST: 1050 + nor->read_opcode = SPINOR_OP_READ4_FAST; 1051 + break; 1052 + case SPI_NOR_NORMAL: 1053 + 
nor->read_opcode = SPINOR_OP_READ4; 1054 + break; 1055 + } 1056 + nor->program_opcode = SPINOR_OP_PP_4B; 1057 + /* No small sector erase for 4-byte command set */ 1058 + nor->erase_opcode = SPINOR_OP_SE_4B; 1059 + mtd->erasesize = info->sector_size; 1060 + } else 1061 + set_4byte(nor, info->jedec_id, 1); 1062 + } else { 1063 + nor->addr_width = 3; 1064 + } 1065 + 1066 + nor->read_dummy = spi_nor_read_dummy_cycles(nor); 1067 + 1068 + dev_info(dev, "%s (%lld Kbytes)\n", id->name, 1069 + (long long)mtd->size >> 10); 1070 + 1071 + dev_dbg(dev, 1072 + "mtd .name = %s, .size = 0x%llx (%lldMiB), " 1073 + ".erasesize = 0x%.8x (%uKiB) .numeraseregions = %d\n", 1074 + mtd->name, (long long)mtd->size, (long long)(mtd->size >> 20), 1075 + mtd->erasesize, mtd->erasesize / 1024, mtd->numeraseregions); 1076 + 1077 + if (mtd->numeraseregions) 1078 + for (i = 0; i < mtd->numeraseregions; i++) 1079 + dev_dbg(dev, 1080 + "mtd.eraseregions[%d] = { .offset = 0x%llx, " 1081 + ".erasesize = 0x%.8x (%uKiB), " 1082 + ".numblocks = %d }\n", 1083 + i, (long long)mtd->eraseregions[i].offset, 1084 + mtd->eraseregions[i].erasesize, 1085 + mtd->eraseregions[i].erasesize / 1024, 1086 + mtd->eraseregions[i].numblocks); 1087 + return 0; 1088 + } 1089 + EXPORT_SYMBOL_GPL(spi_nor_scan); 1090 + 1091 + const struct spi_device_id *spi_nor_match_id(char *name) 1092 + { 1093 + const struct spi_device_id *id = spi_nor_ids; 1094 + 1095 + while (id->name[0]) { 1096 + if (!strcmp(name, id->name)) 1097 + return id; 1098 + id++; 1099 + } 1100 + return NULL; 1101 + } 1102 + EXPORT_SYMBOL_GPL(spi_nor_match_id); 1103 + 1104 + MODULE_LICENSE("GPL"); 1105 + MODULE_AUTHOR("Huang Shijie <shijie8@gmail.com>"); 1106 + MODULE_AUTHOR("Mike Lavender"); 1107 + MODULE_DESCRIPTION("framework for SPI NOR");
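The table matching in spi_nor_read_id() hinges on how the five bytes returned by the RDID (0x9f) command are packed: byte 0 is the manufacturer ID, bytes 1-2 the device ID, and bytes 3-4 the optional extended ID. A standalone sketch of that packing (plain C, not kernel code; the helper names are ours):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the byte packing done by spi_nor_read_id(): manufacturer ID
 * and two device ID bytes go into the 24-bit JEDEC ID, the last two
 * bytes form the extended ID. Illustrative only. */
static uint32_t pack_jedec_id(const uint8_t id[5])
{
	return ((uint32_t)id[0] << 16) | ((uint32_t)id[1] << 8) | id[2];
}

static uint16_t pack_ext_id(const uint8_t id[5])
{
	return (uint16_t)((id[3] << 8) | id[4]);
}
```

For example, ID bytes 01 02 19 4d 01 pack to 0x010219 with extended ID 0x4d01, which matches the "s25fl256s1" entry in the table above.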
+10 -7
drivers/mtd/tests/oobtest.c
··· 69 69 int err = 0; 70 70 loff_t addr = ebnum * mtd->erasesize; 71 71 72 + prandom_bytes_state(&rnd_state, writebuf, use_len_max * pgcnt); 72 73 for (i = 0; i < pgcnt; ++i, addr += mtd->writesize) { 73 - prandom_bytes_state(&rnd_state, writebuf, use_len); 74 74 ops.mode = MTD_OPS_AUTO_OOB; 75 75 ops.len = 0; 76 76 ops.retlen = 0; ··· 78 78 ops.oobretlen = 0; 79 79 ops.ooboffs = use_offset; 80 80 ops.datbuf = NULL; 81 - ops.oobbuf = writebuf; 81 + ops.oobbuf = writebuf + (use_len_max * i) + use_offset; 82 82 err = mtd_write_oob(mtd, addr, &ops); 83 83 if (err || ops.oobretlen != use_len) { 84 84 pr_err("error: writeoob failed at %#llx\n", ··· 122 122 int err = 0; 123 123 loff_t addr = ebnum * mtd->erasesize; 124 124 125 + prandom_bytes_state(&rnd_state, writebuf, use_len_max * pgcnt); 125 126 for (i = 0; i < pgcnt; ++i, addr += mtd->writesize) { 126 - prandom_bytes_state(&rnd_state, writebuf, use_len); 127 127 ops.mode = MTD_OPS_AUTO_OOB; 128 128 ops.len = 0; 129 129 ops.retlen = 0; ··· 139 139 errcnt += 1; 140 140 return err ? err : -1; 141 141 } 142 - if (memcmp(readbuf, writebuf, use_len)) { 142 + if (memcmp(readbuf, writebuf + (use_len_max * i) + use_offset, 143 + use_len)) { 143 144 pr_err("error: verify failed at %#llx\n", 144 145 (long long)addr); 145 146 errcnt += 1; ··· 167 166 errcnt += 1; 168 167 return err ? 
err : -1; 169 168 } 170 - if (memcmp(readbuf + use_offset, writebuf, use_len)) { 169 + if (memcmp(readbuf + use_offset, 170 + writebuf + (use_len_max * i) + use_offset, 171 + use_len)) { 171 172 pr_err("error: verify failed at %#llx\n", 172 173 (long long)addr); 173 174 errcnt += 1; ··· 569 566 if (bbt[i] || bbt[i + 1]) 570 567 continue; 571 568 addr = (i + 1) * mtd->erasesize - mtd->writesize; 569 + prandom_bytes_state(&rnd_state, writebuf, sz * cnt); 572 570 for (pg = 0; pg < cnt; ++pg) { 573 - prandom_bytes_state(&rnd_state, writebuf, sz); 574 571 ops.mode = MTD_OPS_AUTO_OOB; 575 572 ops.len = 0; 576 573 ops.retlen = 0; ··· 578 575 ops.oobretlen = 0; 579 576 ops.ooboffs = 0; 580 577 ops.datbuf = NULL; 581 - ops.oobbuf = writebuf; 578 + ops.oobbuf = writebuf + pg * sz; 582 579 err = mtd_write_oob(mtd, addr, &ops); 583 580 if (err) 584 581 goto out;
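The oobtest fix works because the random data is now generated once per eraseblock (use_len_max bytes per page) instead of use_len bytes per page, so write and verify index into the same buffer even when use_offset or an odd-sized OOB area is in play. The indexing can be sketched as (illustrative C; variable names mirror the test's):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the indexing introduced above: page i's OOB write/verify
 * starts at this offset into the per-eraseblock writebuf. The helper
 * function is ours, for illustration only. */
static size_t oob_buf_offset(size_t use_len_max, size_t page, size_t use_offset)
{
	return use_len_max * page + use_offset;
}
```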
+7 -3
include/linux/mtd/nand.h
··· 176 176 /* Chip may not exist, so silence any errors in scan */ 177 177 #define NAND_SCAN_SILENT_NODEV 0x00040000 178 178 /* 179 + * This option could be defined by controller drivers to protect against 180 + * kmap'ed, vmalloc'ed highmem buffers being passed from upper layers 181 + */ 182 + #define NAND_USE_BOUNCE_BUFFER 0x00080000 183 + /* 179 184 * Autodetect nand buswidth with readid/onfi. 180 185 * This supposes the driver will configure the hardware in 8 bits mode 181 186 * when calling nand_scan_ident, and update its configuration ··· 557 552 * @ecc: [BOARDSPECIFIC] ECC control structure 558 553 * @buffers: buffer structure for read/write 559 554 * @hwcontrol: platform-specific hardware control structure 560 - * @erase_cmd: [INTERN] erase command write function, selectable due 561 - * to AND support. 555 + * @erase: [REPLACEABLE] erase function 562 556 * @scan_bbt: [REPLACEABLE] function to scan bad block table 563 557 * @chip_delay: [BOARDSPECIFIC] chip dependent delay for transferring 564 558 * data from array to read regs (tR). ··· 641 637 void (*cmdfunc)(struct mtd_info *mtd, unsigned command, int column, 642 638 int page_addr); 643 639 int(*waitfunc)(struct mtd_info *mtd, struct nand_chip *this); 644 - void (*erase_cmd)(struct mtd_info *mtd, int page); 640 + int (*erase)(struct mtd_info *mtd, int page); 645 641 int (*scan_bbt)(struct mtd_info *mtd); 646 642 int (*errstat)(struct mtd_info *mtd, struct nand_chip *this, int state, 647 643 int status, int page);
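NAND_USE_BOUNCE_BUFFER is an ordinary bit in the chip options word, tested the same way as the other NAND_* flags (minimal sketch; only the flag value comes from the hunk above, the helper is ours):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Flag value copied from the diff above; the check itself is an
 * illustrative sketch, not kernel code. */
#define NAND_USE_BOUNCE_BUFFER	0x00080000

static bool needs_bounce_buffer(uint32_t chip_options)
{
	return (chip_options & NAND_USE_BOUNCE_BUFFER) != 0;
}
```

A controller driver that cannot DMA from highmem would set this flag so the NAND core substitutes a DMA-able buffer.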
-3
include/linux/mtd/pfow.h
··· 101 101 unsigned long len, map_word *datum) 102 102 { 103 103 int bits_per_chip = map_bankwidth(map) * 8; 104 - int chipnum; 105 - struct lpddr_private *lpddr = map->fldrv_priv; 106 - chipnum = adr >> lpddr->chipshift; 107 104 108 105 map_write(map, CMD(cmd_code), map->pfow_base + PFOW_COMMAND_CODE); 109 106 map_write(map, CMD(adr & ((1<<bits_per_chip) - 1)),
+214
include/linux/mtd/spi-nor.h
··· 1 + /* 2 + * Copyright (C) 2014 Freescale Semiconductor, Inc. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + */ 9 + 10 + #ifndef __LINUX_MTD_SPI_NOR_H 11 + #define __LINUX_MTD_SPI_NOR_H 12 + 13 + /* 14 + * Note on opcode nomenclature: some opcodes have a format like 15 + * SPINOR_OP_FUNCTION{4,}_x_y_z. The numbers x, y, and z stand for the number 16 + * of I/O lines used for the opcode, address, and data (respectively). The 17 + * FUNCTION has an optional suffix of '4', to represent an opcode which 18 + * requires a 4-byte (32-bit) address. 19 + */ 20 + 21 + /* Flash opcodes. */ 22 + #define SPINOR_OP_WREN 0x06 /* Write enable */ 23 + #define SPINOR_OP_RDSR 0x05 /* Read status register */ 24 + #define SPINOR_OP_WRSR 0x01 /* Write status register 1 byte */ 25 + #define SPINOR_OP_READ 0x03 /* Read data bytes (low frequency) */ 26 + #define SPINOR_OP_READ_FAST 0x0b /* Read data bytes (high frequency) */ 27 + #define SPINOR_OP_READ_1_1_2 0x3b /* Read data bytes (Dual SPI) */ 28 + #define SPINOR_OP_READ_1_1_4 0x6b /* Read data bytes (Quad SPI) */ 29 + #define SPINOR_OP_PP 0x02 /* Page program (up to 256 bytes) */ 30 + #define SPINOR_OP_BE_4K 0x20 /* Erase 4KiB block */ 31 + #define SPINOR_OP_BE_4K_PMC 0xd7 /* Erase 4KiB block on PMC chips */ 32 + #define SPINOR_OP_BE_32K 0x52 /* Erase 32KiB block */ 33 + #define SPINOR_OP_CHIP_ERASE 0xc7 /* Erase whole flash chip */ 34 + #define SPINOR_OP_SE 0xd8 /* Sector erase (usually 64KiB) */ 35 + #define SPINOR_OP_RDID 0x9f /* Read JEDEC ID */ 36 + #define SPINOR_OP_RDCR 0x35 /* Read configuration register */ 37 + 38 + /* 4-byte address opcodes - used on Spansion and some Macronix flashes. 
*/ 39 + #define SPINOR_OP_READ4 0x13 /* Read data bytes (low frequency) */ 40 + #define SPINOR_OP_READ4_FAST 0x0c /* Read data bytes (high frequency) */ 41 + #define SPINOR_OP_READ4_1_1_2 0x3c /* Read data bytes (Dual SPI) */ 42 + #define SPINOR_OP_READ4_1_1_4 0x6c /* Read data bytes (Quad SPI) */ 43 + #define SPINOR_OP_PP_4B 0x12 /* Page program (up to 256 bytes) */ 44 + #define SPINOR_OP_SE_4B 0xdc /* Sector erase (usually 64KiB) */ 45 + 46 + /* Used for SST flashes only. */ 47 + #define SPINOR_OP_BP 0x02 /* Byte program */ 48 + #define SPINOR_OP_WRDI 0x04 /* Write disable */ 49 + #define SPINOR_OP_AAI_WP 0xad /* Auto address increment word program */ 50 + 51 + /* Used for Macronix and Winbond flashes. */ 52 + #define SPINOR_OP_EN4B 0xb7 /* Enter 4-byte mode */ 53 + #define SPINOR_OP_EX4B 0xe9 /* Exit 4-byte mode */ 54 + 55 + /* Used for Spansion flashes only. */ 56 + #define SPINOR_OP_BRWR 0x17 /* Bank register write */ 57 + 58 + /* Status Register bits. */ 59 + #define SR_WIP 1 /* Write in progress */ 60 + #define SR_WEL 2 /* Write enable latch */ 61 + /* meaning of other SR_* bits may differ between vendors */ 62 + #define SR_BP0 4 /* Block protect 0 */ 63 + #define SR_BP1 8 /* Block protect 1 */ 64 + #define SR_BP2 0x10 /* Block protect 2 */ 65 + #define SR_SRWD 0x80 /* SR write protect */ 66 + 67 + #define SR_QUAD_EN_MX 0x40 /* Macronix Quad I/O */ 68 + 69 + /* Configuration Register bits. 
*/ 70 + #define CR_QUAD_EN_SPAN 0x2 /* Spansion Quad I/O */ 71 + 72 + enum read_mode { 73 + SPI_NOR_NORMAL = 0, 74 + SPI_NOR_FAST, 75 + SPI_NOR_DUAL, 76 + SPI_NOR_QUAD, 77 + }; 78 + 79 + /** 80 + * struct spi_nor_xfer_cfg - Structure for defining a Serial Flash transfer 81 + * @wren: command for "Write Enable", or 0x00 for not required 82 + * @cmd: command for operation 83 + * @cmd_pins: number of pins to send @cmd (1, 2, 4) 84 + * @addr: address for operation 85 + * @addr_pins: number of pins to send @addr (1, 2, 4) 86 + * @addr_width: number of address bytes 87 + * (3, 4, or 0 for address not required) 88 + * @mode: mode data 89 + * @mode_pins: number of pins to send @mode (1, 2, 4) 90 + * @mode_cycles: number of mode cycles (0 for mode not required) 91 + * @dummy_cycles: number of dummy cycles (0 for dummy not required) 92 + */ 93 + struct spi_nor_xfer_cfg { 94 + u8 wren; 95 + u8 cmd; 96 + u8 cmd_pins; 97 + u32 addr; 98 + u8 addr_pins; 99 + u8 addr_width; 100 + u8 mode; 101 + u8 mode_pins; 102 + u8 mode_cycles; 103 + u8 dummy_cycles; 104 + }; 105 + 106 + #define SPI_NOR_MAX_CMD_SIZE 8 107 + enum spi_nor_ops { 108 + SPI_NOR_OPS_READ = 0, 109 + SPI_NOR_OPS_WRITE, 110 + SPI_NOR_OPS_ERASE, 111 + SPI_NOR_OPS_LOCK, 112 + SPI_NOR_OPS_UNLOCK, 113 + }; 114 + 115 + /** 116 + * struct spi_nor - Structure for defining the SPI NOR layer 117 + * @mtd: pointer to an mtd_info structure 118 + * @lock: the lock for the read/write/erase/lock/unlock operations 119 + * @dev: pointer to a spi device, or a spi nor controller device.
120 + * @page_size: the page size of the SPI NOR 121 + * @addr_width: number of address bytes 122 + * @erase_opcode: the opcode for erasing a sector 123 + * @read_opcode: the read opcode 124 + * @read_dummy: the dummy needed by the read operation 125 + * @program_opcode: the program opcode 126 + * @flash_read: the mode of the read 127 + * @sst_write_second: used by the SST write operation 128 + * @cfg: used by the read_xfer/write_xfer 129 + * @cmd_buf: used by the write_reg 130 + * @prepare: [OPTIONAL] do some preparations for the 131 + * read/write/erase/lock/unlock operations 132 + * @unprepare: [OPTIONAL] do some post work after the 133 + * read/write/erase/lock/unlock operations 134 + * @read_xfer: [OPTIONAL] the read fundamental primitive 135 + * @write_xfer: [OPTIONAL] the write fundamental primitive 136 + * @read_reg: [DRIVER-SPECIFIC] read out the register 137 + * @write_reg: [DRIVER-SPECIFIC] write data to the register 138 + * @read_id: [REPLACEABLE] read out the ID data, and find 139 + * the proper spi_device_id 140 + * @wait_till_ready: [REPLACEABLE] wait till the NOR becomes ready 141 + * @read: [DRIVER-SPECIFIC] read data from the SPI NOR 142 + * @write: [DRIVER-SPECIFIC] write data to the SPI NOR 143 + * @erase: [DRIVER-SPECIFIC] erase a sector of the SPI NOR 144 + * at the offset @offs 145 + * @priv: the private data 146 + */ 147 + struct spi_nor { 148 + struct mtd_info *mtd; 149 + struct mutex lock; 150 + struct device *dev; 151 + u32 page_size; 152 + u8 addr_width; 153 + u8 erase_opcode; 154 + u8 read_opcode; 155 + u8 read_dummy; 156 + u8 program_opcode; 157 + enum read_mode flash_read; 158 + bool sst_write_second; 159 + struct spi_nor_xfer_cfg cfg; 160 + u8 cmd_buf[SPI_NOR_MAX_CMD_SIZE]; 161 + 162 + int (*prepare)(struct spi_nor *nor, enum spi_nor_ops ops); 163 + void (*unprepare)(struct spi_nor *nor, enum spi_nor_ops ops); 164 + int (*read_xfer)(struct spi_nor *nor, struct spi_nor_xfer_cfg *cfg, 165 + u8 *buf, size_t len); 166 + int
(*write_xfer)(struct spi_nor *nor, struct spi_nor_xfer_cfg *cfg, 167 + u8 *buf, size_t len); 168 + int (*read_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len); 169 + int (*write_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len, 170 + int write_enable); 171 + const struct spi_device_id *(*read_id)(struct spi_nor *nor); 172 + int (*wait_till_ready)(struct spi_nor *nor); 173 + 174 + int (*read)(struct spi_nor *nor, loff_t from, 175 + size_t len, size_t *retlen, u_char *read_buf); 176 + void (*write)(struct spi_nor *nor, loff_t to, 177 + size_t len, size_t *retlen, const u_char *write_buf); 178 + int (*erase)(struct spi_nor *nor, loff_t offs); 179 + 180 + void *priv; 181 + }; 182 + 183 + /** 184 + * spi_nor_scan() - scan the SPI NOR 185 + * @nor: the spi_nor structure 186 + * @id: the spi_device_id provided by the driver 187 + * @mode: the read mode supported by the driver 188 + * 189 + * The drivers can use this fuction to scan the SPI NOR. 190 + * In the scanning, it will try to get all the necessary information to 191 + * fill the mtd_info{} and the spi_nor{}. 192 + * 193 + * The board may assigns a spi_device_id with @id which be used to compared with 194 + * the spi_device_id detected by the scanning. 195 + * 196 + * Return: 0 for success, others for failure. 197 + */ 198 + int spi_nor_scan(struct spi_nor *nor, const struct spi_device_id *id, 199 + enum read_mode mode); 200 + extern const struct spi_device_id spi_nor_ids[]; 201 + 202 + /** 203 + * spi_nor_match_id() - find the spi_device_id by the name 204 + * @name: the name of the spi_device_id 205 + * 206 + * The drivers use this function to find the spi_device_id 207 + * specified by the @name. 208 + * 209 + * Return: returns the right spi_device_id pointer on success, 210 + * and returns NULL on failure. 211 + */ 212 + const struct spi_device_id *spi_nor_match_id(char *name); 213 + 214 + #endif
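The hooks above are the contract between the new framework and a controller driver. A hedged sketch of how a driver might wire itself up (modeled loosely on the new fsl-quadspi driver): all `my_*` names are invented here, error handling is elided, and this is not a standalone, compilable example.

```c
/* Sketch only: my_controller, my_controller_cmd and the other my_* hooks
 * are hypothetical; the flash name is assumed to be in spi_nor_ids[]. */
static int my_nor_read_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
{
	struct my_controller *ctrl = nor->priv;

	/* issue @opcode on the bus, then read @len bytes back into @buf */
	return my_controller_cmd(ctrl, opcode, buf, len);
}

static int my_nor_probe(struct platform_device *pdev)
{
	const struct spi_device_id *id;
	struct spi_nor *nor;
	int ret;

	/* ... allocate nor, mtd_info and controller state ... */

	nor->dev = &pdev->dev;
	nor->priv = my_ctrl;

	/* the [DRIVER-SPECIFIC] hooks the framework requires */
	nor->read_reg = my_nor_read_reg;
	nor->write_reg = my_nor_write_reg;
	nor->read = my_nor_read;
	nor->write = my_nor_write;
	nor->erase = my_nor_erase;

	/* detect the chip and fill in mtd_info{} and spi_nor{} */
	id = spi_nor_match_id("s25fl128s");
	if (!id)
		return -ENODEV;
	ret = spi_nor_scan(nor, id, SPI_NOR_QUAD);
	if (ret)
		return ret;

	return mtd_device_parse_register(nor->mtd, NULL, NULL, NULL, 0);
}
```

Because read_xfer/write_xfer and prepare/unprepare are [OPTIONAL], a minimal driver like this can leave them NULL and supply only the register and data-path hooks.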
include/linux/platform_data/elm.h (+2 -1)
···
 enum bch_ecc {
 	BCH4_ECC = 0,
 	BCH8_ECC,
+	BCH16_ECC,
 };

 /* ELM support 8 error syndrome process */
···
 	bool error_reported;
 	bool error_uncorrectable;
 	int error_count;
-	int error_loc[ERROR_VECTOR_MAX];
+	int error_loc[16];
 };

 void elm_decode_bch_error_page(struct device *dev, u8 *ecc_calc,
include/linux/platform_data/mtd-nand-omap2.h (+5)
···
 	OMAP_ECC_BCH8_CODE_HW_DETECTION_SW,
 	/* 8-bit ECC calculation by GPMC, Error detection by ELM */
 	OMAP_ECC_BCH8_CODE_HW,
+	/* 16-bit ECC calculation by GPMC, Error detection by ELM */
+	OMAP_ECC_BCH16_CODE_HW,
 };

 struct gpmc_nand_regs {
···
 	void __iomem *gpmc_bch_result1[GPMC_BCH_NUM_REMAINDER];
 	void __iomem *gpmc_bch_result2[GPMC_BCH_NUM_REMAINDER];
 	void __iomem *gpmc_bch_result3[GPMC_BCH_NUM_REMAINDER];
+	void __iomem *gpmc_bch_result4[GPMC_BCH_NUM_REMAINDER];
+	void __iomem *gpmc_bch_result5[GPMC_BCH_NUM_REMAINDER];
+	void __iomem *gpmc_bch_result6[GPMC_BCH_NUM_REMAINDER];
 };

 struct omap_nand_platform_data {
include/linux/platform_data/mtd-nand-pxa3xx.h (+3)
···
 	/* use a flash-based bad block table */
 	bool flash_bbt;

+	/* requested ECC strength and ECC step size */
+	int ecc_strength, ecc_step_size;
+
 	const struct mtd_partition *parts[NUM_CHIP_SELECT];
 	unsigned int nr_parts[NUM_CHIP_SELECT];
include/uapi/mtd/mtd-abi.h (+1)
···
 #define MTD_CAP_RAM		(MTD_WRITEABLE | MTD_BIT_WRITEABLE | MTD_NO_ERASE)
 #define MTD_CAP_NORFLASH	(MTD_WRITEABLE | MTD_BIT_WRITEABLE)
 #define MTD_CAP_NANDFLASH	(MTD_WRITEABLE)
+#define MTD_CAP_NVRAM		(MTD_WRITEABLE | MTD_BIT_WRITEABLE | MTD_NO_ERASE)

 /* Obsolete ECC byte placement modes (used with obsolete MEMGETOOBSEL) */
 #define MTD_NANDECC_OFF		0	// Switch off ECC (Not recommended)