Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mtd/for-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Richard Weinberger:
"MTD core changes:
- Convert list_for_each to entry variant
- Use MTD_DEVICE_ATTR_RO/RW() helper macros
- Remove unnecessary OOM messages
- Fix potential NULL dereference in mtd_otp_size()
- Fix freeing of otp_info buffer
- Create partname and partid debug files for child MTDs
- tests:
- Remove redundant assignment to err
- Fix error return code in mtd_oobtest_init()
- Add OTP NVMEM provider support
- Allow specifying of_node
- Convert sysfs sprintf/snprintf family to sysfs_emit

Bindings changes:
- Convert ti,am654-hbmc.txt to YAML schema
- spi-nor: add otp property
- Add OTP bindings
- Add YAML schema for the generic MTD bindings
- Add brcm,trx-magic
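The new OTP bindings model one-time-programmable areas as child nodes of the flash node, with compatible "user-otp" or "factory-otp" and NVMEM cells inside. A sketch (the cell names and sizes are illustrative):

```dts
flash@0 {
	compatible = "jedec,spi-nor";
	reg = <0>;

	otp-1 {
		compatible = "factory-otp";
		#address-cells = <1>;
		#size-cells = <1>;

		electronic-serial-number@0 {
			reg = <0 8>;
		};
	};

	otp-2 {
		compatible = "user-otp";
		#address-cells = <1>;
		#size-cells = <1>;

		mac-address@0 {
			reg = <0 6>;
		};
	};
};
```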

MTD device drivers changes:
- Add support for microchip 48l640 EERAM
- Remove superfluous "break"
- sm_ftl:
- Fix alignment of block comment
- nftl:
- Return -ENOMEM when kmalloc failed
- nftlcore:
- Remove variables that are set but immediately overwritten
- phram:
- Fix error return code in phram_setup()
- plat-ram:
- Remove redundant dev_err call in platram_probe()

MTD parsers changes:
- Qcom:
- Fix leaking of partition name
- Redboot:
- Fix style issues
- Seek fis-index-block in the right node
- trx:
- Allow the TRX parser to be used on Mediatek SoCs
- Allow brcm,trx-magic to be specified in DT
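The brcm,trx-magic property lets a device tree override the TRX parser's default magic, 0x30524448 ("HDR0"). A hypothetical partition node (the override value here is made up):

```dts
partitions {
	compatible = "brcm,trx";
	/* only needed when the vendor image uses a non-default magic */
	brcm,trx-magic = <0x32524448>;
};
```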

Raw NAND core:
- Allow SDR timings to be nacked
- Bring support for NV-DDR timings, which involved a number of small
preparation changes to add new helpers, properly introduce NV-DDR
structures, fill them, differentiate them and pick the best timing
set.
- Add the necessary infrastructure to parse the new gpio-cs property,
which aims to enlarge the number of available CS lines when a
hardware controller is too constrained.
- Update dead URL
- Silence static checker warning in nand_setup_interface()
- BBT:
- Fix corner case in bad block table handling
- onfi:
- Use more recent ONFI specification wording
- Use the BIT() macro when possible

Raw NAND controller drivers:
- Atmel:
- Ensure the data interface is supported
- Arasan:
- Finer grain NV-DDR configuration
- Rename the data interface register
- Use the right DMA mask
- Leverage additional GPIO CS
- Ensure proper configuration for the asserted target
- Add support for the NV-DDR interface
- Fix a macro parameter
- brcmnand:
- Convert bindings to json-schema
- OMAP:
- Various fixes and style improvements
- Add larger page NAND chips support
- PL35X:
- New driver
- QCOM:
- Avoid writing to obsolete register
- Delete an unneeded bool conversion
- Allow override of partition parser
- Marvell:
- Minor documentation correction
- Add missing clk_disable_unprepare() on error in
marvell_nfc_resume()
- R852:
- Use DEVICE_ATTR_RO() helper macro
- MTK:
- Remove redundant dev_err call in mtk_ecc_probe()
- HISI504:
- Remove redundant dev_err call in probe

SPI-NAND core:
- Light reorganisation for the introduction of a core resume handler
- Fix double counting of ECC stats

SPI-NAND manufacturer drivers:
- Macronix:
- Add support for serial NAND flash

SPI NOR core changes:
- Ability to dump SFDP tables via sysfs
- Support for erasing OTP regions on Winbond and similar flashes
- Few API doc updates and fixes
- Locking support for MX25L12805D
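The SFDP dump and related attributes appear under the flash device's sysfs directory. A quick way to inspect them from userspace; "spi0.0" is a placeholder for your bus/chip-select, so adjust the path for your board:

```shell
# Inspect the new SPI NOR sysfs attributes ("spi0.0" is a placeholder).
dev=${SPI_NOR_SYSFS:-/sys/bus/spi/devices/spi0.0/spi-nor}

for f in jedec_id manufacturer partname; do
	if [ -r "$dev/$f" ]; then
		printf '%s: %s\n' "$f" "$(cat "$dev/$f")"
	fi
done

# sfdp is present only if the flash supports the Read SFDP command (5Ah)
if [ -r "$dev/sfdp" ]; then
	od -A x -t x1z "$dev/sfdp" | head
fi
```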

SPI NOR controller drivers changes:
- Use SPI_MODE_X_MASK in nxp-spifi
- Intel Alder Lake-M SPI serial flash support"

* tag 'mtd/for-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (125 commits)
mtd: spi-nor: remove redundant continue statement
mtd: rawnand: omap: Add larger page NAND chips support
mtd: rawnand: omap: Various style fixes
mtd: rawnand: omap: Check return values
mtd: rawnand: omap: Rename a macro
mtd: rawnand: omap: Aggregate the HW configuration of the ELM
mtd: rawnand: pl353: Add support for the ARM PL353 SMC NAND controller
dt-bindings: mtd: pl353-nand: Describe this hardware controller
MAINTAINERS: Add PL353 NAND controller entry
mtd: rawnand: qcom: avoid writing to obsolete register
mtd: rawnand: marvell: Minor documentation correction
mtd: rawnand: r852: use DEVICE_ATTR_RO() helper macro
mtd: spinand: add SPI-NAND MTD resume handler
mtd: spinand: Add spinand_init_flash() helper
mtd: spinand: add spinand_read_cfg() helper
mtd: rawnand: marvell: add missing clk_disable_unprepare() on error in marvell_nfc_resume()
mtd: rawnand: arasan: Finer grain NV-DDR configuration
mtd: rawnand: arasan: Rename the data interface register
mtd: rawnand: onfi: Fix endianness when reading NV-DDR values
mtd: rawnand: arasan: Use the right DMA mask
...

 4465 insertions(+), 1167 deletions(-)

 Documentation/ABI/testing/sysfs-bus-spi-devices-spi-nor                 |  31 +
 Documentation/devicetree/bindings/memory-controllers/arm,pl353-smc.yaml | 131 +
 Documentation/devicetree/bindings/memory-controllers/pl353-smc.txt      |  47 -
 Documentation/devicetree/bindings/mtd/arm,pl353-nand-r2p1.yaml          |  53 +
 Documentation/devicetree/bindings/mtd/brcm,brcmnand.txt                 | 186 -
 Documentation/devicetree/bindings/mtd/brcm,brcmnand.yaml                | 242 +
 Documentation/devicetree/bindings/mtd/common.txt                        |  16 +-
 Documentation/devicetree/bindings/mtd/jedec,spi-nor.yaml                |   6 +
 Documentation/devicetree/bindings/mtd/microchip,mchp48l640.yaml         |  45 +
 Documentation/devicetree/bindings/mtd/mtd.yaml                          |  89 +
 Documentation/devicetree/bindings/mtd/nand-controller.yaml              |  18 +-
 Documentation/devicetree/bindings/mtd/partitions/brcm,trx.txt           |   5 +
 Documentation/devicetree/bindings/mtd/ti,am654-hbmc.txt                 |  51 -
+69
Documentation/devicetree/bindings/mtd/ti,am654-hbmc.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mtd/ti,am654-hbmc.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: HyperBus Memory Controller (HBMC) on TI's K3 family of SoCs 8 + 9 + maintainers: 10 + - Vignesh Raghavendra <vigneshr@ti.com> 11 + 12 + properties: 13 + compatible: 14 + const: ti,am654-hbmc 15 + 16 + reg: 17 + maxItems: 2 18 + 19 + power-domains: true 20 + '#address-cells': true 21 + '#size-cells': true 22 + ranges: true 23 + 24 + mux-controls: 25 + description: MMIO mux controller node to select b/w OSPI and HBMC. 26 + 27 + clocks: 28 + maxItems: 1 29 + 30 + patternProperties: 31 + "^flash@[0-1],[0-9a-f]+$": 32 + type: object 33 + 34 + required: 35 + - compatible 36 + - reg 37 + - ranges 38 + - clocks 39 + - '#address-cells' 40 + - '#size-cells' 41 + 42 + additionalProperties: false 43 + 44 + examples: 45 + - | 46 + bus { 47 + #address-cells = <2>; 48 + #size-cells = <2>; 49 + 50 + hbmc: memory-controller@47034000 { 51 + compatible = "ti,am654-hbmc"; 52 + reg = <0x0 0x47034000 0x0 0x100>, 53 + <0x5 0x00000000 0x1 0x0000000>; 54 + ranges = <0x0 0x0 0x5 0x00000000 0x4000000>, /* CS0 - 64MB */ 55 + <0x1 0x0 0x5 0x04000000 0x4000000>; /* CS1 - 64MB */ 56 + clocks = <&k3_clks 102 0>; 57 + #address-cells = <2>; 58 + #size-cells = <1>; 59 + power-domains = <&k3_pds 55>; 60 + mux-controls = <&hbmc_mux 0>; 61 + 62 + flash@0,0 { 63 + compatible = "cypress,hyperflash", "cfi-flash"; 64 + reg = <0x0 0x0 0x4000000>; 65 + #address-cells = <1>; 66 + #size-cells = <1>; 67 + }; 68 + }; 69 + };
+17
MAINTAINERS
··· 1319 1319 F: drivers/net/ethernet/aquantia/atlantic/aq_ptp* 1320 1320 1321 1321 ARASAN NAND CONTROLLER DRIVER 1322 + M: Miquel Raynal <miquel.raynal@bootlin.com> 1322 1323 M: Naga Sureshkumar Relli <nagasure@xilinx.com> 1323 1324 L: linux-mtd@lists.infradead.org 1324 1325 S: Maintained ··· 1460 1459 S: Odd Fixes 1461 1460 F: drivers/amba/ 1462 1461 F: include/linux/amba/bus.h 1462 + 1463 + ARM PRIMECELL PL35X NAND CONTROLLER DRIVER 1464 + M: Miquel Raynal <miquel.raynal@bootlin.com> 1465 + M: Naga Sureshkumar Relli <nagasure@xilinx.com> 1466 + L: linux-mtd@lists.infradead.org 1467 + S: Maintained 1468 + F: Documentation/devicetree/bindings/mtd/arm,pl353-nand-r2p1.yaml 1469 + F: drivers/mtd/nand/raw/pl35x-nand-controller.c 1470 + 1471 + ARM PRIMECELL PL35X SMC DRIVER 1472 + M: Miquel Raynal <miquel.raynal@bootlin.com> 1473 + M: Naga Sureshkumar Relli <nagasure@xilinx.com> 1474 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1475 + S: Maintained 1476 + F: Documentation/devicetree/bindings/mtd/arm,pl353-smc.yaml 1477 + F: drivers/memory/pl353-smc.c 1463 1478 1464 1479 ARM PRIMECELL CLCD PL110 DRIVER 1465 1480 M: Russell King <linux@armlinux.org.uk>
+10 -304
drivers/memory/pl353-smc.c
··· 8 8 */ 9 9 10 10 #include <linux/clk.h> 11 - #include <linux/io.h> 12 11 #include <linux/kernel.h> 13 12 #include <linux/module.h> 14 13 #include <linux/of_platform.h> 15 14 #include <linux/platform_device.h> 16 - #include <linux/slab.h> 17 - #include <linux/pl353-smc.h> 18 15 #include <linux/amba/bus.h> 19 16 20 - /* Register definitions */ 21 - #define PL353_SMC_MEMC_STATUS_OFFS 0 /* Controller status reg, RO */ 22 - #define PL353_SMC_CFG_CLR_OFFS 0xC /* Clear config reg, WO */ 23 - #define PL353_SMC_DIRECT_CMD_OFFS 0x10 /* Direct command reg, WO */ 24 - #define PL353_SMC_SET_CYCLES_OFFS 0x14 /* Set cycles register, WO */ 25 - #define PL353_SMC_SET_OPMODE_OFFS 0x18 /* Set opmode register, WO */ 26 - #define PL353_SMC_ECC_STATUS_OFFS 0x400 /* ECC status register */ 27 - #define PL353_SMC_ECC_MEMCFG_OFFS 0x404 /* ECC mem config reg */ 28 - #define PL353_SMC_ECC_MEMCMD1_OFFS 0x408 /* ECC mem cmd1 reg */ 29 - #define PL353_SMC_ECC_MEMCMD2_OFFS 0x40C /* ECC mem cmd2 reg */ 30 - #define PL353_SMC_ECC_VALUE0_OFFS 0x418 /* ECC value 0 reg */ 31 - 32 - /* Controller status register specific constants */ 33 - #define PL353_SMC_MEMC_STATUS_RAW_INT_1_SHIFT 6 34 - 35 - /* Clear configuration register specific constants */ 36 - #define PL353_SMC_CFG_CLR_INT_CLR_1 0x10 37 - #define PL353_SMC_CFG_CLR_ECC_INT_DIS_1 0x40 38 - #define PL353_SMC_CFG_CLR_INT_DIS_1 0x2 39 - #define PL353_SMC_CFG_CLR_DEFAULT_MASK (PL353_SMC_CFG_CLR_INT_CLR_1 | \ 40 - PL353_SMC_CFG_CLR_ECC_INT_DIS_1 | \ 41 - PL353_SMC_CFG_CLR_INT_DIS_1) 42 - 43 - /* Set cycles register specific constants */ 44 - #define PL353_SMC_SET_CYCLES_T0_MASK 0xF 45 - #define PL353_SMC_SET_CYCLES_T0_SHIFT 0 46 - #define PL353_SMC_SET_CYCLES_T1_MASK 0xF 47 - #define PL353_SMC_SET_CYCLES_T1_SHIFT 4 48 - #define PL353_SMC_SET_CYCLES_T2_MASK 0x7 49 - #define PL353_SMC_SET_CYCLES_T2_SHIFT 8 50 - #define PL353_SMC_SET_CYCLES_T3_MASK 0x7 51 - #define PL353_SMC_SET_CYCLES_T3_SHIFT 11 52 - #define PL353_SMC_SET_CYCLES_T4_MASK 0x7 53 - 
#define PL353_SMC_SET_CYCLES_T4_SHIFT 14 54 - #define PL353_SMC_SET_CYCLES_T5_MASK 0x7 55 - #define PL353_SMC_SET_CYCLES_T5_SHIFT 17 56 - #define PL353_SMC_SET_CYCLES_T6_MASK 0xF 57 - #define PL353_SMC_SET_CYCLES_T6_SHIFT 20 58 - 59 - /* ECC status register specific constants */ 60 - #define PL353_SMC_ECC_STATUS_BUSY BIT(6) 61 - #define PL353_SMC_ECC_REG_SIZE_OFFS 4 62 - 63 - /* ECC memory config register specific constants */ 64 - #define PL353_SMC_ECC_MEMCFG_MODE_MASK 0xC 65 - #define PL353_SMC_ECC_MEMCFG_MODE_SHIFT 2 66 - #define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK 0x3 67 - 68 - #define PL353_SMC_DC_UPT_NAND_REGS ((4 << 23) | /* CS: NAND chip */ \ 69 - (2 << 21)) /* UpdateRegs operation */ 70 - 71 - #define PL353_NAND_ECC_CMD1 ((0x80) | /* Write command */ \ 72 - (0 << 8) | /* Read command */ \ 73 - (0x30 << 16) | /* Read End command */ \ 74 - (1 << 24)) /* Read End command calid */ 75 - 76 - #define PL353_NAND_ECC_CMD2 ((0x85) | /* Write col change cmd */ \ 77 - (5 << 8) | /* Read col change cmd */ \ 78 - (0xE0 << 16) | /* Read col change end cmd */ \ 79 - (1 << 24)) /* Read col change end cmd valid */ 80 - #define PL353_NAND_ECC_BUSY_TIMEOUT (1 * HZ) 81 17 /** 82 18 * struct pl353_smc_data - Private smc driver structure 83 19 * @memclk: Pointer to the peripheral clock 84 - * @aclk: Pointer to the APER clock 20 + * @aclk: Pointer to the AXI peripheral clock 85 21 */ 86 22 struct pl353_smc_data { 87 23 struct clk *memclk; 88 24 struct clk *aclk; 89 25 }; 90 - 91 - /* SMC virtual register base */ 92 - static void __iomem *pl353_smc_base; 93 - 94 - /** 95 - * pl353_smc_set_buswidth - Set memory buswidth 96 - * @bw: Memory buswidth (8 | 16) 97 - * Return: 0 on success or negative errno. 
98 - */ 99 - int pl353_smc_set_buswidth(unsigned int bw) 100 - { 101 - if (bw != PL353_SMC_MEM_WIDTH_8 && bw != PL353_SMC_MEM_WIDTH_16) 102 - return -EINVAL; 103 - 104 - writel(bw, pl353_smc_base + PL353_SMC_SET_OPMODE_OFFS); 105 - writel(PL353_SMC_DC_UPT_NAND_REGS, pl353_smc_base + 106 - PL353_SMC_DIRECT_CMD_OFFS); 107 - 108 - return 0; 109 - } 110 - EXPORT_SYMBOL_GPL(pl353_smc_set_buswidth); 111 - 112 - /** 113 - * pl353_smc_set_cycles - Set memory timing parameters 114 - * @timings: NAND controller timing parameters 115 - * 116 - * Sets NAND chip specific timing parameters. 117 - */ 118 - void pl353_smc_set_cycles(u32 timings[]) 119 - { 120 - /* 121 - * Set write pulse timing. This one is easy to extract: 122 - * 123 - * NWE_PULSE = tWP 124 - */ 125 - timings[0] &= PL353_SMC_SET_CYCLES_T0_MASK; 126 - timings[1] = (timings[1] & PL353_SMC_SET_CYCLES_T1_MASK) << 127 - PL353_SMC_SET_CYCLES_T1_SHIFT; 128 - timings[2] = (timings[2] & PL353_SMC_SET_CYCLES_T2_MASK) << 129 - PL353_SMC_SET_CYCLES_T2_SHIFT; 130 - timings[3] = (timings[3] & PL353_SMC_SET_CYCLES_T3_MASK) << 131 - PL353_SMC_SET_CYCLES_T3_SHIFT; 132 - timings[4] = (timings[4] & PL353_SMC_SET_CYCLES_T4_MASK) << 133 - PL353_SMC_SET_CYCLES_T4_SHIFT; 134 - timings[5] = (timings[5] & PL353_SMC_SET_CYCLES_T5_MASK) << 135 - PL353_SMC_SET_CYCLES_T5_SHIFT; 136 - timings[6] = (timings[6] & PL353_SMC_SET_CYCLES_T6_MASK) << 137 - PL353_SMC_SET_CYCLES_T6_SHIFT; 138 - timings[0] |= timings[1] | timings[2] | timings[3] | 139 - timings[4] | timings[5] | timings[6]; 140 - 141 - writel(timings[0], pl353_smc_base + PL353_SMC_SET_CYCLES_OFFS); 142 - writel(PL353_SMC_DC_UPT_NAND_REGS, pl353_smc_base + 143 - PL353_SMC_DIRECT_CMD_OFFS); 144 - } 145 - EXPORT_SYMBOL_GPL(pl353_smc_set_cycles); 146 - 147 - /** 148 - * pl353_smc_ecc_is_busy - Read ecc busy flag 149 - * Return: the ecc_status bit from the ecc_status register. 
1 = busy, 0 = idle 150 - */ 151 - bool pl353_smc_ecc_is_busy(void) 152 - { 153 - return ((readl(pl353_smc_base + PL353_SMC_ECC_STATUS_OFFS) & 154 - PL353_SMC_ECC_STATUS_BUSY) == PL353_SMC_ECC_STATUS_BUSY); 155 - } 156 - EXPORT_SYMBOL_GPL(pl353_smc_ecc_is_busy); 157 - 158 - /** 159 - * pl353_smc_get_ecc_val - Read ecc_valueN registers 160 - * @ecc_reg: Index of the ecc_value reg (0..3) 161 - * Return: the content of the requested ecc_value register. 162 - * 163 - * There are four valid ecc_value registers. The argument is truncated to stay 164 - * within this valid boundary. 165 - */ 166 - u32 pl353_smc_get_ecc_val(int ecc_reg) 167 - { 168 - u32 addr, reg; 169 - 170 - addr = PL353_SMC_ECC_VALUE0_OFFS + 171 - (ecc_reg * PL353_SMC_ECC_REG_SIZE_OFFS); 172 - reg = readl(pl353_smc_base + addr); 173 - 174 - return reg; 175 - } 176 - EXPORT_SYMBOL_GPL(pl353_smc_get_ecc_val); 177 - 178 - /** 179 - * pl353_smc_get_nand_int_status_raw - Get NAND interrupt status bit 180 - * Return: the raw_int_status1 bit from the memc_status register 181 - */ 182 - int pl353_smc_get_nand_int_status_raw(void) 183 - { 184 - u32 reg; 185 - 186 - reg = readl(pl353_smc_base + PL353_SMC_MEMC_STATUS_OFFS); 187 - reg >>= PL353_SMC_MEMC_STATUS_RAW_INT_1_SHIFT; 188 - reg &= 1; 189 - 190 - return reg; 191 - } 192 - EXPORT_SYMBOL_GPL(pl353_smc_get_nand_int_status_raw); 193 - 194 - /** 195 - * pl353_smc_clr_nand_int - Clear NAND interrupt 196 - */ 197 - void pl353_smc_clr_nand_int(void) 198 - { 199 - writel(PL353_SMC_CFG_CLR_INT_CLR_1, 200 - pl353_smc_base + PL353_SMC_CFG_CLR_OFFS); 201 - } 202 - EXPORT_SYMBOL_GPL(pl353_smc_clr_nand_int); 203 - 204 - /** 205 - * pl353_smc_set_ecc_mode - Set SMC ECC mode 206 - * @mode: ECC mode (BYPASS, APB, MEM) 207 - * Return: 0 on success or negative errno. 
208 - */ 209 - int pl353_smc_set_ecc_mode(enum pl353_smc_ecc_mode mode) 210 - { 211 - u32 reg; 212 - int ret = 0; 213 - 214 - switch (mode) { 215 - case PL353_SMC_ECCMODE_BYPASS: 216 - case PL353_SMC_ECCMODE_APB: 217 - case PL353_SMC_ECCMODE_MEM: 218 - 219 - reg = readl(pl353_smc_base + PL353_SMC_ECC_MEMCFG_OFFS); 220 - reg &= ~PL353_SMC_ECC_MEMCFG_MODE_MASK; 221 - reg |= mode << PL353_SMC_ECC_MEMCFG_MODE_SHIFT; 222 - writel(reg, pl353_smc_base + PL353_SMC_ECC_MEMCFG_OFFS); 223 - 224 - break; 225 - default: 226 - ret = -EINVAL; 227 - } 228 - 229 - return ret; 230 - } 231 - EXPORT_SYMBOL_GPL(pl353_smc_set_ecc_mode); 232 - 233 - /** 234 - * pl353_smc_set_ecc_pg_size - Set SMC ECC page size 235 - * @pg_sz: ECC page size 236 - * Return: 0 on success or negative errno. 237 - */ 238 - int pl353_smc_set_ecc_pg_size(unsigned int pg_sz) 239 - { 240 - u32 reg, sz; 241 - 242 - switch (pg_sz) { 243 - case 0: 244 - sz = 0; 245 - break; 246 - case SZ_512: 247 - sz = 1; 248 - break; 249 - case SZ_1K: 250 - sz = 2; 251 - break; 252 - case SZ_2K: 253 - sz = 3; 254 - break; 255 - default: 256 - return -EINVAL; 257 - } 258 - 259 - reg = readl(pl353_smc_base + PL353_SMC_ECC_MEMCFG_OFFS); 260 - reg &= ~PL353_SMC_ECC_MEMCFG_PGSIZE_MASK; 261 - reg |= sz; 262 - writel(reg, pl353_smc_base + PL353_SMC_ECC_MEMCFG_OFFS); 263 - 264 - return 0; 265 - } 266 - EXPORT_SYMBOL_GPL(pl353_smc_set_ecc_pg_size); 267 26 268 27 static int __maybe_unused pl353_smc_suspend(struct device *dev) 269 28 { ··· 36 277 37 278 static int __maybe_unused pl353_smc_resume(struct device *dev) 38 279 { 39 - int ret; 40 280 struct pl353_smc_data *pl353_smc = dev_get_drvdata(dev); 281 + int ret; 41 282 42 283 ret = clk_enable(pl353_smc->aclk); 43 284 if (ret) { ··· 55 296 return ret; 56 297 } 57 298 58 - static struct amba_driver pl353_smc_driver; 59 - 60 299 static SIMPLE_DEV_PM_OPS(pl353_smc_dev_pm_ops, pl353_smc_suspend, 61 300 pl353_smc_resume); 62 - 63 - /** 64 - * pl353_smc_init_nand_interface - Initialize the NAND 
interface 65 - * @adev: Pointer to the amba_device struct 66 - * @nand_node: Pointer to the pl353_nand device_node struct 67 - */ 68 - static void pl353_smc_init_nand_interface(struct amba_device *adev, 69 - struct device_node *nand_node) 70 - { 71 - unsigned long timeout; 72 - 73 - pl353_smc_set_buswidth(PL353_SMC_MEM_WIDTH_8); 74 - writel(PL353_SMC_CFG_CLR_INT_CLR_1, 75 - pl353_smc_base + PL353_SMC_CFG_CLR_OFFS); 76 - writel(PL353_SMC_DC_UPT_NAND_REGS, pl353_smc_base + 77 - PL353_SMC_DIRECT_CMD_OFFS); 78 - 79 - timeout = jiffies + PL353_NAND_ECC_BUSY_TIMEOUT; 80 - /* Wait till the ECC operation is complete */ 81 - do { 82 - if (pl353_smc_ecc_is_busy()) 83 - cpu_relax(); 84 - else 85 - break; 86 - } while (!time_after_eq(jiffies, timeout)); 87 - 88 - if (time_after_eq(jiffies, timeout)) 89 - return; 90 - 91 - writel(PL353_NAND_ECC_CMD1, 92 - pl353_smc_base + PL353_SMC_ECC_MEMCMD1_OFFS); 93 - writel(PL353_NAND_ECC_CMD2, 94 - pl353_smc_base + PL353_SMC_ECC_MEMCMD2_OFFS); 95 - } 96 301 97 302 static const struct of_device_id pl353_smc_supported_children[] = { 98 303 { ··· 64 341 }, 65 342 { 66 343 .compatible = "arm,pl353-nand-r2p1", 67 - .data = pl353_smc_init_nand_interface 68 344 }, 69 345 {} 70 346 }; 71 347 72 348 static int pl353_smc_probe(struct amba_device *adev, const struct amba_id *id) 73 349 { 350 + struct device_node *of_node = adev->dev.of_node; 351 + const struct of_device_id *match = NULL; 74 352 struct pl353_smc_data *pl353_smc; 75 353 struct device_node *child; 76 - struct resource *res; 77 354 int err; 78 - struct device_node *of_node = adev->dev.of_node; 79 - static void (*init)(struct amba_device *adev, 80 - struct device_node *nand_node); 81 - const struct of_device_id *match = NULL; 82 355 83 356 pl353_smc = devm_kzalloc(&adev->dev, sizeof(*pl353_smc), GFP_KERNEL); 84 357 if (!pl353_smc) 85 358 return -ENOMEM; 86 - 87 - /* Get the NAND controller virtual address */ 88 - res = &adev->res; 89 - pl353_smc_base = devm_ioremap_resource(&adev->dev, 
res); 90 - if (IS_ERR(pl353_smc_base)) 91 - return PTR_ERR(pl353_smc_base); 92 359 93 360 pl353_smc->aclk = devm_clk_get(&adev->dev, "apb_pclk"); 94 361 if (IS_ERR(pl353_smc->aclk)) { ··· 101 388 err = clk_prepare_enable(pl353_smc->memclk); 102 389 if (err) { 103 390 dev_err(&adev->dev, "Unable to enable memory clock.\n"); 104 - goto out_clk_dis_aper; 391 + goto disable_axi_clk; 105 392 } 106 393 107 394 amba_set_drvdata(adev, pl353_smc); 108 - 109 - /* clear interrupts */ 110 - writel(PL353_SMC_CFG_CLR_DEFAULT_MASK, 111 - pl353_smc_base + PL353_SMC_CFG_CLR_OFFS); 112 395 113 396 /* Find compatible children. Only a single child is supported */ 114 397 for_each_available_child_of_node(of_node, child) { ··· 117 408 } 118 409 if (!match) { 119 410 dev_err(&adev->dev, "no matching children\n"); 120 - goto out_clk_disable; 411 + goto disable_mem_clk; 121 412 } 122 413 123 - init = match->data; 124 - if (init) 125 - init(adev, child); 126 414 of_platform_device_create(child, NULL, &adev->dev); 127 415 128 416 return 0; 129 417 130 - out_clk_disable: 418 + disable_mem_clk: 131 419 clk_disable_unprepare(pl353_smc->memclk); 132 - out_clk_dis_aper: 420 + disable_axi_clk: 133 421 clk_disable_unprepare(pl353_smc->aclk); 134 422 135 423 return err; ··· 142 436 143 437 static const struct amba_id pl353_ids[] = { 144 438 { 145 - .id = 0x00041353, 146 - .mask = 0x000fffff, 439 + .id = 0x00041353, 440 + .mask = 0x000fffff, 147 441 }, 148 442 { 0, 0 }, 149 443 };
+1 -4
drivers/mtd/chips/chipreg.c
··· 31 31 32 32 static struct mtd_chip_driver *get_mtd_chip_driver (const char *name) 33 33 { 34 - struct list_head *pos; 35 34 struct mtd_chip_driver *ret = NULL, *this; 36 35 37 36 spin_lock(&chip_drvs_lock); 38 37 39 - list_for_each(pos, &chip_drvs_list) { 40 - this = list_entry(pos, typeof(*this), list); 41 - 38 + list_for_each_entry(this, &chip_drvs_list, list) { 42 39 if (!strcmp(this->name, name)) { 43 40 ret = this; 44 41 break;
+6
drivers/mtd/devices/Kconfig
··· 89 89 platform data, or a device tree description if you want to 90 90 specify device partitioning 91 91 92 + config MTD_MCHP48L640 93 + tristate "Microchip 48L640 EERAM" 94 + depends on SPI_MASTER 95 + help 96 + This enables access to Microchip 48L640 EERAM chips, using SPI. 97 + 92 98 config MTD_SPEAR_SMI 93 99 tristate "SPEAR MTD NOR Support through SMI controller" 94 100 depends on PLAT_SPEAR || COMPILE_TEST
+1
drivers/mtd/devices/Makefile
··· 13 13 obj-$(CONFIG_MTD_BLOCK2MTD) += block2mtd.o 14 14 obj-$(CONFIG_MTD_DATAFLASH) += mtd_dataflash.o 15 15 obj-$(CONFIG_MTD_MCHP23K256) += mchp23k256.o 16 + obj-$(CONFIG_MTD_MCHP48L640) += mchp48l640.o 16 17 obj-$(CONFIG_MTD_SPEAR_SMI) += spear_smi.o 17 18 obj-$(CONFIG_MTD_SST25L) += sst25l.o 18 19 obj-$(CONFIG_MTD_BCM47XXSFLASH) += bcm47xxsflash.o
+373
drivers/mtd/devices/mchp48l640.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for Microchip 48L640 64 Kb SPI Serial EERAM 4 + * 5 + * Copyright Heiko Schocher <hs@denx.de> 6 + * 7 + * datasheet: http://ww1.microchip.com/downloads/en/DeviceDoc/20006055B.pdf 8 + * 9 + * we set continuous mode but reading/writing more bytes than 10 + * pagesize seems to bring chip into state where readden values 11 + * are wrong ... no idea why. 12 + * 13 + */ 14 + #include <linux/delay.h> 15 + #include <linux/device.h> 16 + #include <linux/jiffies.h> 17 + #include <linux/module.h> 18 + #include <linux/mtd/mtd.h> 19 + #include <linux/mtd/partitions.h> 20 + #include <linux/mutex.h> 21 + #include <linux/sched.h> 22 + #include <linux/sizes.h> 23 + #include <linux/spi/flash.h> 24 + #include <linux/spi/spi.h> 25 + #include <linux/of_device.h> 26 + 27 + struct mchp48_caps { 28 + unsigned int size; 29 + unsigned int page_size; 30 + }; 31 + 32 + struct mchp48l640_flash { 33 + struct spi_device *spi; 34 + struct mutex lock; 35 + struct mtd_info mtd; 36 + const struct mchp48_caps *caps; 37 + }; 38 + 39 + #define MCHP48L640_CMD_WREN 0x06 40 + #define MCHP48L640_CMD_WRDI 0x04 41 + #define MCHP48L640_CMD_WRITE 0x02 42 + #define MCHP48L640_CMD_READ 0x03 43 + #define MCHP48L640_CMD_WRSR 0x01 44 + #define MCHP48L640_CMD_RDSR 0x05 45 + 46 + #define MCHP48L640_STATUS_RDY 0x01 47 + #define MCHP48L640_STATUS_WEL 0x02 48 + #define MCHP48L640_STATUS_BP0 0x04 49 + #define MCHP48L640_STATUS_BP1 0x08 50 + #define MCHP48L640_STATUS_SWM 0x10 51 + #define MCHP48L640_STATUS_PRO 0x20 52 + #define MCHP48L640_STATUS_ASE 0x40 53 + 54 + #define MCHP48L640_TIMEOUT 100 55 + 56 + #define MAX_CMD_SIZE 0x10 57 + 58 + #define to_mchp48l640_flash(x) container_of(x, struct mchp48l640_flash, mtd) 59 + 60 + static int mchp48l640_mkcmd(struct mchp48l640_flash *flash, u8 cmd, loff_t addr, char *buf) 61 + { 62 + buf[0] = cmd; 63 + buf[1] = addr >> 8; 64 + buf[2] = addr; 65 + 66 + return 3; 67 + } 68 + 69 + static int 
mchp48l640_read_status(struct mchp48l640_flash *flash, int *status) 70 + { 71 + unsigned char cmd[2]; 72 + int ret; 73 + 74 + cmd[0] = MCHP48L640_CMD_RDSR; 75 + cmd[1] = 0x00; 76 + mutex_lock(&flash->lock); 77 + ret = spi_write_then_read(flash->spi, &cmd[0], 1, &cmd[1], 1); 78 + mutex_unlock(&flash->lock); 79 + if (!ret) 80 + *status = cmd[1]; 81 + dev_dbg(&flash->spi->dev, "read status ret: %d status: %x", ret, *status); 82 + 83 + return ret; 84 + } 85 + 86 + static int mchp48l640_waitforbit(struct mchp48l640_flash *flash, int bit, bool set) 87 + { 88 + int ret, status; 89 + unsigned long deadline; 90 + 91 + deadline = jiffies + msecs_to_jiffies(MCHP48L640_TIMEOUT); 92 + do { 93 + ret = mchp48l640_read_status(flash, &status); 94 + dev_dbg(&flash->spi->dev, "read status ret: %d bit: %x %sset status: %x", 95 + ret, bit, (set ? "" : "not"), status); 96 + if (ret) 97 + return ret; 98 + 99 + if (set) { 100 + if ((status & bit) == bit) 101 + return 0; 102 + } else { 103 + if ((status & bit) == 0) 104 + return 0; 105 + } 106 + 107 + usleep_range(1000, 2000); 108 + } while (!time_after_eq(jiffies, deadline)); 109 + 110 + dev_err(&flash->spi->dev, "Timeout waiting for bit %x %s set in status register.", 111 + bit, (set ? "" : "not")); 112 + return -ETIMEDOUT; 113 + } 114 + 115 + static int mchp48l640_write_prepare(struct mchp48l640_flash *flash, bool enable) 116 + { 117 + unsigned char cmd[2]; 118 + int ret; 119 + 120 + if (enable) 121 + cmd[0] = MCHP48L640_CMD_WREN; 122 + else 123 + cmd[0] = MCHP48L640_CMD_WRDI; 124 + 125 + mutex_lock(&flash->lock); 126 + ret = spi_write(flash->spi, cmd, 1); 127 + mutex_unlock(&flash->lock); 128 + 129 + if (ret) 130 + dev_err(&flash->spi->dev, "write %sable failed ret: %d", 131 + (enable ? "en" : "dis"), ret); 132 + 133 + dev_dbg(&flash->spi->dev, "write %sable success ret: %d", 134 + (enable ? 
"en" : "dis"), ret); 135 + if (enable) 136 + return mchp48l640_waitforbit(flash, MCHP48L640_STATUS_WEL, true); 137 + 138 + return ret; 139 + } 140 + 141 + static int mchp48l640_set_mode(struct mchp48l640_flash *flash) 142 + { 143 + unsigned char cmd[2]; 144 + int ret; 145 + 146 + ret = mchp48l640_write_prepare(flash, true); 147 + if (ret) 148 + return ret; 149 + 150 + cmd[0] = MCHP48L640_CMD_WRSR; 151 + cmd[1] = MCHP48L640_STATUS_PRO; 152 + 153 + mutex_lock(&flash->lock); 154 + ret = spi_write(flash->spi, cmd, 2); 155 + mutex_unlock(&flash->lock); 156 + if (ret) 157 + dev_err(&flash->spi->dev, "Could not set continuous mode ret: %d", ret); 158 + 159 + return mchp48l640_waitforbit(flash, MCHP48L640_STATUS_PRO, true); 160 + } 161 + 162 + static int mchp48l640_wait_rdy(struct mchp48l640_flash *flash) 163 + { 164 + return mchp48l640_waitforbit(flash, MCHP48L640_STATUS_RDY, false); 165 + }; 166 + 167 + static int mchp48l640_write_page(struct mtd_info *mtd, loff_t to, size_t len, 168 + size_t *retlen, const unsigned char *buf) 169 + { 170 + struct mchp48l640_flash *flash = to_mchp48l640_flash(mtd); 171 + unsigned char *cmd; 172 + int ret; 173 + int cmdlen; 174 + 175 + cmd = kmalloc((3 + len), GFP_KERNEL | GFP_DMA); 176 + if (!cmd) 177 + return -ENOMEM; 178 + 179 + ret = mchp48l640_wait_rdy(flash); 180 + if (ret) 181 + goto fail; 182 + 183 + ret = mchp48l640_write_prepare(flash, true); 184 + if (ret) 185 + goto fail; 186 + 187 + mutex_lock(&flash->lock); 188 + cmdlen = mchp48l640_mkcmd(flash, MCHP48L640_CMD_WRITE, to, cmd); 189 + memcpy(&cmd[cmdlen], buf, len); 190 + ret = spi_write(flash->spi, cmd, cmdlen + len); 191 + mutex_unlock(&flash->lock); 192 + if (!ret) 193 + *retlen += len; 194 + else 195 + goto fail; 196 + 197 + ret = mchp48l640_waitforbit(flash, MCHP48L640_STATUS_WEL, false); 198 + if (ret) 199 + goto fail; 200 + 201 + kfree(cmd); 202 + return 0; 203 + fail: 204 + kfree(cmd); 205 + dev_err(&flash->spi->dev, "write fail with: %d", ret); 206 + return ret; 207 + 
}; 208 + 209 + static int mchp48l640_write(struct mtd_info *mtd, loff_t to, size_t len, 210 + size_t *retlen, const unsigned char *buf) 211 + { 212 + struct mchp48l640_flash *flash = to_mchp48l640_flash(mtd); 213 + int ret; 214 + size_t wlen = 0; 215 + loff_t woff = to; 216 + size_t ws; 217 + size_t page_sz = flash->caps->page_size; 218 + 219 + /* 220 + * we set PRO bit (page rollover), but writing length > page size 221 + * does result in total chaos, so write in 32 byte chunks. 222 + */ 223 + while (wlen < len) { 224 + ws = min((len - wlen), page_sz); 225 + ret = mchp48l640_write_page(mtd, woff, ws, retlen, &buf[wlen]); 226 + if (ret) 227 + return ret; 228 + wlen += ws; 229 + woff += ws; 230 + } 231 + 232 + return ret; 233 + } 234 + 235 + static int mchp48l640_read_page(struct mtd_info *mtd, loff_t from, size_t len, 236 + size_t *retlen, unsigned char *buf) 237 + { 238 + struct mchp48l640_flash *flash = to_mchp48l640_flash(mtd); 239 + unsigned char *cmd; 240 + int ret; 241 + int cmdlen; 242 + 243 + cmd = kmalloc((3 + len), GFP_KERNEL | GFP_DMA); 244 + if (!cmd) 245 + return -ENOMEM; 246 + 247 + ret = mchp48l640_wait_rdy(flash); 248 + if (ret) 249 + goto fail; 250 + 251 + mutex_lock(&flash->lock); 252 + cmdlen = mchp48l640_mkcmd(flash, MCHP48L640_CMD_READ, from, cmd); 253 + ret = spi_write_then_read(flash->spi, cmd, cmdlen, buf, len); 254 + mutex_unlock(&flash->lock); 255 + if (!ret) 256 + *retlen += len; 257 + 258 + return ret; 259 + 260 + fail: 261 + kfree(cmd); 262 + dev_err(&flash->spi->dev, "read fail with: %d", ret); 263 + return ret; 264 + } 265 + 266 + static int mchp48l640_read(struct mtd_info *mtd, loff_t from, size_t len, 267 + size_t *retlen, unsigned char *buf) 268 + { 269 + struct mchp48l640_flash *flash = to_mchp48l640_flash(mtd); 270 + int ret; 271 + size_t wlen = 0; 272 + loff_t woff = from; 273 + size_t ws; 274 + size_t page_sz = flash->caps->page_size; 275 + 276 + /* 277 + * we set PRO bit (page rollover), but if read length > page size 278 + * 
does result in total chaos in result ... 279 + */ 280 + while (wlen < len) { 281 + ws = min((len - wlen), page_sz); 282 + ret = mchp48l640_read_page(mtd, woff, ws, retlen, &buf[wlen]); 283 + if (ret) 284 + return ret; 285 + wlen += ws; 286 + woff += ws; 287 + } 288 + 289 + return ret; 290 + }; 291 + 292 + static const struct mchp48_caps mchp48l640_caps = { 293 + .size = SZ_8K, 294 + .page_size = 32, 295 + }; 296 + 297 + static int mchp48l640_probe(struct spi_device *spi) 298 + { 299 + struct mchp48l640_flash *flash; 300 + struct flash_platform_data *data; 301 + int err; 302 + int status; 303 + 304 + flash = devm_kzalloc(&spi->dev, sizeof(*flash), GFP_KERNEL); 305 + if (!flash) 306 + return -ENOMEM; 307 + 308 + flash->spi = spi; 309 + mutex_init(&flash->lock); 310 + spi_set_drvdata(spi, flash); 311 + 312 + err = mchp48l640_read_status(flash, &status); 313 + if (err) 314 + return err; 315 + 316 + err = mchp48l640_set_mode(flash); 317 + if (err) 318 + return err; 319 + 320 + data = dev_get_platdata(&spi->dev); 321 + 322 + flash->caps = of_device_get_match_data(&spi->dev); 323 + if (!flash->caps) 324 + flash->caps = &mchp48l640_caps; 325 + 326 + mtd_set_of_node(&flash->mtd, spi->dev.of_node); 327 + flash->mtd.dev.parent = &spi->dev; 328 + flash->mtd.type = MTD_RAM; 329 + flash->mtd.flags = MTD_CAP_RAM; 330 + flash->mtd.writesize = flash->caps->page_size; 331 + flash->mtd.size = flash->caps->size; 332 + flash->mtd._read = mchp48l640_read; 333 + flash->mtd._write = mchp48l640_write; 334 + 335 + err = mtd_device_register(&flash->mtd, data ? data->parts : NULL, 336 + data ? 
data->nr_parts : 0); 337 + if (err) 338 + return err; 339 + 340 + return 0; 341 + } 342 + 343 + static int mchp48l640_remove(struct spi_device *spi) 344 + { 345 + struct mchp48l640_flash *flash = spi_get_drvdata(spi); 346 + 347 + return mtd_device_unregister(&flash->mtd); 348 + } 349 + 350 + static const struct of_device_id mchp48l640_of_table[] = { 351 + { 352 + .compatible = "microchip,48l640", 353 + .data = &mchp48l640_caps, 354 + }, 355 + {} 356 + }; 357 + MODULE_DEVICE_TABLE(of, mchp48l640_of_table); 358 + 359 + static struct spi_driver mchp48l640_driver = { 360 + .driver = { 361 + .name = "mchp48l640", 362 + .of_match_table = of_match_ptr(mchp48l640_of_table), 363 + }, 364 + .probe = mchp48l640_probe, 365 + .remove = mchp48l640_remove, 366 + }; 367 + 368 + module_spi_driver(mchp48l640_driver); 369 + 370 + MODULE_DESCRIPTION("MTD SPI driver for Microchip 48l640 EERAM chips"); 371 + MODULE_AUTHOR("Heiko Schocher <hs@denx.de>"); 372 + MODULE_LICENSE("GPL v2"); 373 + MODULE_ALIAS("spi:mchp48l640");
-1
drivers/mtd/devices/ms02-nv.c
··· 286 286 break; 287 287 default: 288 288 return -ENODEV; 289 - break; 290 289 } 291 290 292 291 for (i = 0; i < ARRAY_SIZE(ms02nv_addrs); i++)
+1
drivers/mtd/devices/phram.c
··· 270 270 if (len == 0 || erasesize == 0 || erasesize > len 271 271 || erasesize > UINT_MAX || rem) { 272 272 parse_err("illegal erasesize or len\n"); 273 + ret = -EINVAL; 273 274 goto error; 274 275 } 275 276
+3 -14
drivers/mtd/inftlmount.c
··· 259 259 /* Memory alloc */ 260 260 inftl->PUtable = kmalloc_array(inftl->nb_blocks, sizeof(u16), 261 261 GFP_KERNEL); 262 - if (!inftl->PUtable) { 263 - printk(KERN_WARNING "INFTL: allocation of PUtable " 264 - "failed (%zd bytes)\n", 265 - inftl->nb_blocks * sizeof(u16)); 262 + if (!inftl->PUtable) 266 263 return -ENOMEM; 267 - } 268 264 269 265 inftl->VUtable = kmalloc_array(inftl->nb_blocks, sizeof(u16), 270 266 GFP_KERNEL); 271 267 if (!inftl->VUtable) { 272 268 kfree(inftl->PUtable); 273 - printk(KERN_WARNING "INFTL: allocation of VUtable " 274 - "failed (%zd bytes)\n", 275 - inftl->nb_blocks * sizeof(u16)); 276 269 return -ENOMEM; 277 270 } 278 271 ··· 323 330 324 331 buf = kmalloc(SECTORSIZE + mtd->oobsize, GFP_KERNEL); 325 332 if (!buf) 326 - return -1; 333 + return -ENOMEM; 327 334 328 335 ret = -1; 329 336 for (i = 0; i < len; i += SECTORSIZE) { ··· 551 558 552 559 /* Temporary buffer to store ANAC numbers. */ 553 560 ANACtable = kcalloc(s->nb_blocks, sizeof(u8), GFP_KERNEL); 554 - if (!ANACtable) { 555 - printk(KERN_WARNING "INFTL: allocation of ANACtable " 556 - "failed (%zd bytes)\n", 557 - s->nb_blocks * sizeof(u8)); 561 + if (!ANACtable) 558 562 return -ENOMEM; 559 - } 560 563 561 564 /* 562 565 * First pass is to explore each physical unit, and construct the
+2 -4
drivers/mtd/maps/amd76xrom.c
··· 189 189 190 190 if (!map) { 191 191 map = kmalloc(sizeof(*map), GFP_KERNEL); 192 - } 193 - if (!map) { 194 - printk(KERN_ERR MOD_NAME ": kmalloc failed"); 195 - goto out; 192 + if (!map) 193 + goto out; 196 194 } 197 195 memset(map, 0, sizeof(*map)); 198 196 INIT_LIST_HEAD(&map->list);
+3 -5
drivers/mtd/maps/ck804xrom.c
··· 217 217 unsigned long offset; 218 218 int i; 219 219 220 - if (!map) 221 - map = kmalloc(sizeof(*map), GFP_KERNEL); 222 - 223 220 if (!map) { 224 - printk(KERN_ERR MOD_NAME ": kmalloc failed"); 225 - goto out; 221 + map = kmalloc(sizeof(*map), GFP_KERNEL); 222 + if (!map) 223 + goto out; 226 224 } 227 225 memset(map, 0, sizeof(*map)); 228 226 INIT_LIST_HEAD(&map->list);
+3 -4
drivers/mtd/maps/esb2rom.c
···
277 277 	unsigned long offset;
278 278 	int i;
279 279 
280   - 	if (!map)
281   - 		map = kmalloc(sizeof(*map), GFP_KERNEL);
282 280 	if (!map) {
283   - 		printk(KERN_ERR MOD_NAME ": kmalloc failed");
284   - 		goto out;
281   + 		map = kmalloc(sizeof(*map), GFP_KERNEL);
282   + 		if (!map)
283   + 			goto out;
285 284 	}
286 285 	memset(map, 0, sizeof(*map));
287 286 	INIT_LIST_HEAD(&map->list);
+2 -4
drivers/mtd/maps/ichxrom.c
···
213 213 
214 214 	if (!map) {
215 215 		map = kmalloc(sizeof(*map), GFP_KERNEL);
216   - 	}
217   - 	if (!map) {
218   - 		printk(KERN_ERR MOD_NAME ": kmalloc failed");
219   - 		goto out;
216   + 		if (!map)
217   + 			goto out;
220 218 	}
221 219 	memset(map, 0, sizeof(*map));
222 220 	INIT_LIST_HEAD(&map->list);
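All four *xrom map drivers receive the same restructure: the failure check moves inside the lazy-allocation branch and the redundant `printk` goes away (the allocator already logs failures). A minimal user-space sketch of the resulting shape (`get_map` is an illustrative name; `malloc` stands in for `kmalloc`, returning NULL for `goto out`):

```c
#include <stdlib.h>
#include <string.h>

struct flash_map { int in_use; };

/* Allocate a map only when the caller did not pass one in, then reset
 * it either way - the shape of the patched *xrom window-init code. */
struct flash_map *get_map(struct flash_map *map)
{
	if (!map) {
		map = malloc(sizeof(*map));
		if (!map)
			return NULL;	/* was: a second !map check + printk */
	}
	memset(map, 0, sizeof(*map));
	return map;
}
```

Scoping the check inside the branch makes it obvious that only a freshly attempted allocation can be NULL at that point.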
-1
drivers/mtd/maps/plat-ram.c
···
127 127 	info->map.virt = devm_ioremap_resource(&pdev->dev, res);
128 128 	if (IS_ERR(info->map.virt)) {
129 129 		err = PTR_ERR(info->map.virt);
130   - 		dev_err(&pdev->dev, "failed to ioremap() region\n");
131 130 		goto exit_free;
132 131 	}
133 132 
+1 -3
drivers/mtd/maps/sun_uflash.c
···
 62  62 	}
 63  63 
 64  64 	up = kzalloc(sizeof(struct uflash_dev), GFP_KERNEL);
 65   - 	if (!up) {
 66   - 		printk(KERN_ERR PFX "Cannot allocate struct uflash_dev\n");
 65   + 	if (!up)
 67  66 		return -ENOMEM;
 68   - 	}
 69  67 
 70  68 	/* copy defaults and tweak parameters */
 71  69 	memcpy(&up->map, &uflash_map_templ, sizeof(uflash_map_templ));
+199 -47
drivers/mtd/mtdcore.c
··· 96 96 device_destroy(&mtd_class, index + 1); 97 97 } 98 98 99 + #define MTD_DEVICE_ATTR_RO(name) \ 100 + static DEVICE_ATTR(name, 0444, mtd_##name##_show, NULL) 101 + 102 + #define MTD_DEVICE_ATTR_RW(name) \ 103 + static DEVICE_ATTR(name, 0644, mtd_##name##_show, mtd_##name##_store) 104 + 99 105 static ssize_t mtd_type_show(struct device *dev, 100 106 struct device_attribute *attr, char *buf) 101 107 { ··· 137 131 type = "unknown"; 138 132 } 139 133 140 - return snprintf(buf, PAGE_SIZE, "%s\n", type); 134 + return sysfs_emit(buf, "%s\n", type); 141 135 } 142 - static DEVICE_ATTR(type, S_IRUGO, mtd_type_show, NULL); 136 + MTD_DEVICE_ATTR_RO(type); 143 137 144 138 static ssize_t mtd_flags_show(struct device *dev, 145 139 struct device_attribute *attr, char *buf) 146 140 { 147 141 struct mtd_info *mtd = dev_get_drvdata(dev); 148 142 149 - return snprintf(buf, PAGE_SIZE, "0x%lx\n", (unsigned long)mtd->flags); 143 + return sysfs_emit(buf, "0x%lx\n", (unsigned long)mtd->flags); 150 144 } 151 - static DEVICE_ATTR(flags, S_IRUGO, mtd_flags_show, NULL); 145 + MTD_DEVICE_ATTR_RO(flags); 152 146 153 147 static ssize_t mtd_size_show(struct device *dev, 154 148 struct device_attribute *attr, char *buf) 155 149 { 156 150 struct mtd_info *mtd = dev_get_drvdata(dev); 157 151 158 - return snprintf(buf, PAGE_SIZE, "%llu\n", 159 - (unsigned long long)mtd->size); 152 + return sysfs_emit(buf, "%llu\n", (unsigned long long)mtd->size); 160 153 } 161 - static DEVICE_ATTR(size, S_IRUGO, mtd_size_show, NULL); 154 + MTD_DEVICE_ATTR_RO(size); 162 155 163 156 static ssize_t mtd_erasesize_show(struct device *dev, 164 157 struct device_attribute *attr, char *buf) 165 158 { 166 159 struct mtd_info *mtd = dev_get_drvdata(dev); 167 160 168 - return snprintf(buf, PAGE_SIZE, "%lu\n", (unsigned long)mtd->erasesize); 161 + return sysfs_emit(buf, "%lu\n", (unsigned long)mtd->erasesize); 169 162 } 170 - static DEVICE_ATTR(erasesize, S_IRUGO, mtd_erasesize_show, NULL); 163 + 
MTD_DEVICE_ATTR_RO(erasesize); 171 164 172 165 static ssize_t mtd_writesize_show(struct device *dev, 173 166 struct device_attribute *attr, char *buf) 174 167 { 175 168 struct mtd_info *mtd = dev_get_drvdata(dev); 176 169 177 - return snprintf(buf, PAGE_SIZE, "%lu\n", (unsigned long)mtd->writesize); 170 + return sysfs_emit(buf, "%lu\n", (unsigned long)mtd->writesize); 178 171 } 179 - static DEVICE_ATTR(writesize, S_IRUGO, mtd_writesize_show, NULL); 172 + MTD_DEVICE_ATTR_RO(writesize); 180 173 181 174 static ssize_t mtd_subpagesize_show(struct device *dev, 182 175 struct device_attribute *attr, char *buf) ··· 183 178 struct mtd_info *mtd = dev_get_drvdata(dev); 184 179 unsigned int subpagesize = mtd->writesize >> mtd->subpage_sft; 185 180 186 - return snprintf(buf, PAGE_SIZE, "%u\n", subpagesize); 181 + return sysfs_emit(buf, "%u\n", subpagesize); 187 182 } 188 - static DEVICE_ATTR(subpagesize, S_IRUGO, mtd_subpagesize_show, NULL); 183 + MTD_DEVICE_ATTR_RO(subpagesize); 189 184 190 185 static ssize_t mtd_oobsize_show(struct device *dev, 191 186 struct device_attribute *attr, char *buf) 192 187 { 193 188 struct mtd_info *mtd = dev_get_drvdata(dev); 194 189 195 - return snprintf(buf, PAGE_SIZE, "%lu\n", (unsigned long)mtd->oobsize); 190 + return sysfs_emit(buf, "%lu\n", (unsigned long)mtd->oobsize); 196 191 } 197 - static DEVICE_ATTR(oobsize, S_IRUGO, mtd_oobsize_show, NULL); 192 + MTD_DEVICE_ATTR_RO(oobsize); 198 193 199 194 static ssize_t mtd_oobavail_show(struct device *dev, 200 195 struct device_attribute *attr, char *buf) 201 196 { 202 197 struct mtd_info *mtd = dev_get_drvdata(dev); 203 198 204 - return snprintf(buf, PAGE_SIZE, "%u\n", mtd->oobavail); 199 + return sysfs_emit(buf, "%u\n", mtd->oobavail); 205 200 } 206 - static DEVICE_ATTR(oobavail, S_IRUGO, mtd_oobavail_show, NULL); 201 + MTD_DEVICE_ATTR_RO(oobavail); 207 202 208 203 static ssize_t mtd_numeraseregions_show(struct device *dev, 209 204 struct device_attribute *attr, char *buf) 210 205 { 211 206 
struct mtd_info *mtd = dev_get_drvdata(dev); 212 207 213 - return snprintf(buf, PAGE_SIZE, "%u\n", mtd->numeraseregions); 208 + return sysfs_emit(buf, "%u\n", mtd->numeraseregions); 214 209 } 215 - static DEVICE_ATTR(numeraseregions, S_IRUGO, mtd_numeraseregions_show, 216 - NULL); 210 + MTD_DEVICE_ATTR_RO(numeraseregions); 217 211 218 212 static ssize_t mtd_name_show(struct device *dev, 219 213 struct device_attribute *attr, char *buf) 220 214 { 221 215 struct mtd_info *mtd = dev_get_drvdata(dev); 222 216 223 - return snprintf(buf, PAGE_SIZE, "%s\n", mtd->name); 217 + return sysfs_emit(buf, "%s\n", mtd->name); 224 218 } 225 - static DEVICE_ATTR(name, S_IRUGO, mtd_name_show, NULL); 219 + MTD_DEVICE_ATTR_RO(name); 226 220 227 221 static ssize_t mtd_ecc_strength_show(struct device *dev, 228 222 struct device_attribute *attr, char *buf) 229 223 { 230 224 struct mtd_info *mtd = dev_get_drvdata(dev); 231 225 232 - return snprintf(buf, PAGE_SIZE, "%u\n", mtd->ecc_strength); 226 + return sysfs_emit(buf, "%u\n", mtd->ecc_strength); 233 227 } 234 - static DEVICE_ATTR(ecc_strength, S_IRUGO, mtd_ecc_strength_show, NULL); 228 + MTD_DEVICE_ATTR_RO(ecc_strength); 235 229 236 230 static ssize_t mtd_bitflip_threshold_show(struct device *dev, 237 231 struct device_attribute *attr, ··· 238 234 { 239 235 struct mtd_info *mtd = dev_get_drvdata(dev); 240 236 241 - return snprintf(buf, PAGE_SIZE, "%u\n", mtd->bitflip_threshold); 237 + return sysfs_emit(buf, "%u\n", mtd->bitflip_threshold); 242 238 } 243 239 244 240 static ssize_t mtd_bitflip_threshold_store(struct device *dev, ··· 256 252 mtd->bitflip_threshold = bitflip_threshold; 257 253 return count; 258 254 } 259 - static DEVICE_ATTR(bitflip_threshold, S_IRUGO | S_IWUSR, 260 - mtd_bitflip_threshold_show, 261 - mtd_bitflip_threshold_store); 255 + MTD_DEVICE_ATTR_RW(bitflip_threshold); 262 256 263 257 static ssize_t mtd_ecc_step_size_show(struct device *dev, 264 258 struct device_attribute *attr, char *buf) 265 259 { 266 260 struct 
mtd_info *mtd = dev_get_drvdata(dev); 267 261 268 - return snprintf(buf, PAGE_SIZE, "%u\n", mtd->ecc_step_size); 262 + return sysfs_emit(buf, "%u\n", mtd->ecc_step_size); 269 263 270 264 } 271 - static DEVICE_ATTR(ecc_step_size, S_IRUGO, mtd_ecc_step_size_show, NULL); 265 + MTD_DEVICE_ATTR_RO(ecc_step_size); 272 266 273 - static ssize_t mtd_ecc_stats_corrected_show(struct device *dev, 267 + static ssize_t mtd_corrected_bits_show(struct device *dev, 274 268 struct device_attribute *attr, char *buf) 275 269 { 276 270 struct mtd_info *mtd = dev_get_drvdata(dev); 277 271 struct mtd_ecc_stats *ecc_stats = &mtd->ecc_stats; 278 272 279 - return snprintf(buf, PAGE_SIZE, "%u\n", ecc_stats->corrected); 273 + return sysfs_emit(buf, "%u\n", ecc_stats->corrected); 280 274 } 281 - static DEVICE_ATTR(corrected_bits, S_IRUGO, 282 - mtd_ecc_stats_corrected_show, NULL); 275 + MTD_DEVICE_ATTR_RO(corrected_bits); /* ecc stats corrected */ 283 276 284 - static ssize_t mtd_ecc_stats_errors_show(struct device *dev, 277 + static ssize_t mtd_ecc_failures_show(struct device *dev, 285 278 struct device_attribute *attr, char *buf) 286 279 { 287 280 struct mtd_info *mtd = dev_get_drvdata(dev); 288 281 struct mtd_ecc_stats *ecc_stats = &mtd->ecc_stats; 289 282 290 - return snprintf(buf, PAGE_SIZE, "%u\n", ecc_stats->failed); 283 + return sysfs_emit(buf, "%u\n", ecc_stats->failed); 291 284 } 292 - static DEVICE_ATTR(ecc_failures, S_IRUGO, mtd_ecc_stats_errors_show, NULL); 285 + MTD_DEVICE_ATTR_RO(ecc_failures); /* ecc stats errors */ 293 286 294 - static ssize_t mtd_badblocks_show(struct device *dev, 287 + static ssize_t mtd_bad_blocks_show(struct device *dev, 295 288 struct device_attribute *attr, char *buf) 296 289 { 297 290 struct mtd_info *mtd = dev_get_drvdata(dev); 298 291 struct mtd_ecc_stats *ecc_stats = &mtd->ecc_stats; 299 292 300 - return snprintf(buf, PAGE_SIZE, "%u\n", ecc_stats->badblocks); 293 + return sysfs_emit(buf, "%u\n", ecc_stats->badblocks); 301 294 } 302 - static 
DEVICE_ATTR(bad_blocks, S_IRUGO, mtd_badblocks_show, NULL); 295 + MTD_DEVICE_ATTR_RO(bad_blocks); 303 296 304 - static ssize_t mtd_bbtblocks_show(struct device *dev, 297 + static ssize_t mtd_bbt_blocks_show(struct device *dev, 305 298 struct device_attribute *attr, char *buf) 306 299 { 307 300 struct mtd_info *mtd = dev_get_drvdata(dev); 308 301 struct mtd_ecc_stats *ecc_stats = &mtd->ecc_stats; 309 302 310 - return snprintf(buf, PAGE_SIZE, "%u\n", ecc_stats->bbtblocks); 303 + return sysfs_emit(buf, "%u\n", ecc_stats->bbtblocks); 311 304 } 312 - static DEVICE_ATTR(bbt_blocks, S_IRUGO, mtd_bbtblocks_show, NULL); 305 + MTD_DEVICE_ATTR_RO(bbt_blocks); 313 306 314 307 static struct attribute *mtd_attrs[] = { 315 308 &dev_attr_type.attr, ··· 362 361 363 362 static void mtd_debugfs_populate(struct mtd_info *mtd) 364 363 { 364 + struct mtd_info *master = mtd_get_master(mtd); 365 365 struct device *dev = &mtd->dev; 366 366 struct dentry *root; 367 367 ··· 372 370 root = debugfs_create_dir(dev_name(dev), dfs_dir_mtd); 373 371 mtd->dbg.dfs_dir = root; 374 372 375 - if (mtd->dbg.partid) 376 - debugfs_create_file("partid", 0400, root, mtd, 373 + if (master->dbg.partid) 374 + debugfs_create_file("partid", 0400, root, master, 377 375 &mtd_partid_debug_fops); 378 376 379 - if (mtd->dbg.partname) 380 - debugfs_create_file("partname", 0400, root, mtd, 377 + if (master->dbg.partname) 378 + debugfs_create_file("partname", 0400, root, master, 381 379 &mtd_partname_debug_fops); 382 380 } 383 381 ··· 779 777 mutex_init(&mtd->master.chrdev_lock); 780 778 } 781 779 780 + static ssize_t mtd_otp_size(struct mtd_info *mtd, bool is_user) 781 + { 782 + struct otp_info *info; 783 + ssize_t size = 0; 784 + unsigned int i; 785 + size_t retlen; 786 + int ret; 787 + 788 + info = kmalloc(PAGE_SIZE, GFP_KERNEL); 789 + if (!info) 790 + return -ENOMEM; 791 + 792 + if (is_user) 793 + ret = mtd_get_user_prot_info(mtd, PAGE_SIZE, &retlen, info); 794 + else 795 + ret = mtd_get_fact_prot_info(mtd, 
PAGE_SIZE, &retlen, info); 796 + if (ret) 797 + goto err; 798 + 799 + for (i = 0; i < retlen / sizeof(*info); i++) 800 + size += info[i].length; 801 + 802 + kfree(info); 803 + return size; 804 + 805 + err: 806 + kfree(info); 807 + return ret; 808 + } 809 + 810 + static struct nvmem_device *mtd_otp_nvmem_register(struct mtd_info *mtd, 811 + const char *compatible, 812 + int size, 813 + nvmem_reg_read_t reg_read) 814 + { 815 + struct nvmem_device *nvmem = NULL; 816 + struct nvmem_config config = {}; 817 + struct device_node *np; 818 + 819 + /* DT binding is optional */ 820 + np = of_get_compatible_child(mtd->dev.of_node, compatible); 821 + 822 + /* OTP nvmem will be registered on the physical device */ 823 + config.dev = mtd->dev.parent; 824 + /* just reuse the compatible as name */ 825 + config.name = compatible; 826 + config.id = NVMEM_DEVID_NONE; 827 + config.owner = THIS_MODULE; 828 + config.type = NVMEM_TYPE_OTP; 829 + config.root_only = true; 830 + config.reg_read = reg_read; 831 + config.size = size; 832 + config.of_node = np; 833 + config.priv = mtd; 834 + 835 + nvmem = nvmem_register(&config); 836 + /* Just ignore if there is no NVMEM support in the kernel */ 837 + if (IS_ERR(nvmem) && PTR_ERR(nvmem) == -EOPNOTSUPP) 838 + nvmem = NULL; 839 + 840 + of_node_put(np); 841 + 842 + return nvmem; 843 + } 844 + 845 + static int mtd_nvmem_user_otp_reg_read(void *priv, unsigned int offset, 846 + void *val, size_t bytes) 847 + { 848 + struct mtd_info *mtd = priv; 849 + size_t retlen; 850 + int ret; 851 + 852 + ret = mtd_read_user_prot_reg(mtd, offset, bytes, &retlen, val); 853 + if (ret) 854 + return ret; 855 + 856 + return retlen == bytes ? 
0 : -EIO; 857 + } 858 + 859 + static int mtd_nvmem_fact_otp_reg_read(void *priv, unsigned int offset, 860 + void *val, size_t bytes) 861 + { 862 + struct mtd_info *mtd = priv; 863 + size_t retlen; 864 + int ret; 865 + 866 + ret = mtd_read_fact_prot_reg(mtd, offset, bytes, &retlen, val); 867 + if (ret) 868 + return ret; 869 + 870 + return retlen == bytes ? 0 : -EIO; 871 + } 872 + 873 + static int mtd_otp_nvmem_add(struct mtd_info *mtd) 874 + { 875 + struct nvmem_device *nvmem; 876 + ssize_t size; 877 + int err; 878 + 879 + if (mtd->_get_user_prot_info && mtd->_read_user_prot_reg) { 880 + size = mtd_otp_size(mtd, true); 881 + if (size < 0) 882 + return size; 883 + 884 + if (size > 0) { 885 + nvmem = mtd_otp_nvmem_register(mtd, "user-otp", size, 886 + mtd_nvmem_user_otp_reg_read); 887 + if (IS_ERR(nvmem)) { 888 + dev_err(&mtd->dev, "Failed to register OTP NVMEM device\n"); 889 + return PTR_ERR(nvmem); 890 + } 891 + mtd->otp_user_nvmem = nvmem; 892 + } 893 + } 894 + 895 + if (mtd->_get_fact_prot_info && mtd->_read_fact_prot_reg) { 896 + size = mtd_otp_size(mtd, false); 897 + if (size < 0) { 898 + err = size; 899 + goto err; 900 + } 901 + 902 + if (size > 0) { 903 + nvmem = mtd_otp_nvmem_register(mtd, "factory-otp", size, 904 + mtd_nvmem_fact_otp_reg_read); 905 + if (IS_ERR(nvmem)) { 906 + dev_err(&mtd->dev, "Failed to register OTP NVMEM device\n"); 907 + err = PTR_ERR(nvmem); 908 + goto err; 909 + } 910 + mtd->otp_factory_nvmem = nvmem; 911 + } 912 + } 913 + 914 + return 0; 915 + 916 + err: 917 + if (mtd->otp_user_nvmem) 918 + nvmem_unregister(mtd->otp_user_nvmem); 919 + return err; 920 + } 921 + 782 922 /** 783 923 * mtd_device_parse_register - parse partitions and register an MTD device. 
784 924 * ··· 996 852 register_reboot_notifier(&mtd->reboot_notifier); 997 853 } 998 854 855 + ret = mtd_otp_nvmem_add(mtd); 856 + 999 857 out: 1000 858 if (ret && device_is_registered(&mtd->dev)) 1001 859 del_mtd_device(mtd); ··· 1018 872 1019 873 if (master->_reboot) 1020 874 unregister_reboot_notifier(&master->reboot_notifier); 875 + 876 + if (master->otp_user_nvmem) 877 + nvmem_unregister(master->otp_user_nvmem); 878 + 879 + if (master->otp_factory_nvmem) 880 + nvmem_unregister(master->otp_factory_nvmem); 1021 881 1022 882 err = del_mtd_partitions(master); 1023 883 if (err)
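The new mtd_otp_size() above sizes the OTP NVMEM device by summing the length of every OTP region the chip reports. A user-space model of just that summation (struct layout simplified; `otp_total_size` is an illustrative name, not kernel API):

```c
#include <stddef.h>

struct otp_info { unsigned int start, length, locked; };

/* retlen is the byte count filled in by mtd_get_user_prot_info() /
 * mtd_get_fact_prot_info(); the total is the sum of region lengths. */
long otp_total_size(const struct otp_info *info, size_t retlen)
{
	long size = 0;
	size_t i;

	for (i = 0; i < retlen / sizeof(*info); i++)
		size += info[i].length;
	return size;
}
```

A size of zero simply means the chip exposes no regions of that kind, in which case mtd_otp_nvmem_add() skips registration.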
+1 -3
drivers/mtd/mtdoops.c
···
401 401 	cxt->mtd_index = mtd_index;
402 402 
403 403 	cxt->oops_buf = vmalloc(record_size);
404   - 	if (!cxt->oops_buf) {
405   - 		printk(KERN_ERR "mtdoops: failed to allocate buffer workspace\n");
404   + 	if (!cxt->oops_buf)
406 405 		return -ENOMEM;
407   - 	}
408 406 	memset(cxt->oops_buf, 0xff, record_size);
409 407 	cxt->oops_buf_busy = 0;
410 408 
+4 -5
drivers/mtd/mtdpart.c
···
212 212 	return child;
213 213 }
214 214 
215   - static ssize_t mtd_partition_offset_show(struct device *dev,
216   - 		struct device_attribute *attr, char *buf)
215   + static ssize_t offset_show(struct device *dev,
216   + 			   struct device_attribute *attr, char *buf)
217 217 {
218 218 	struct mtd_info *mtd = dev_get_drvdata(dev);
219 219 
220   - 	return snprintf(buf, PAGE_SIZE, "%lld\n", mtd->part.offset);
220   + 	return sysfs_emit(buf, "%lld\n", mtd->part.offset);
221 221 }
222   - 
223   - static DEVICE_ATTR(offset, S_IRUGO, mtd_partition_offset_show, NULL);
222   + static DEVICE_ATTR_RO(offset); /* mtd partition offset */
224 223 
225 224 static const struct attribute *mtd_partition_attrs[] = {
226 225 	&dev_attr_offset.attr,
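The rename to offset_show() is forced by DEVICE_ATTR_RO(), which token-pastes the attribute name with `_show` to locate its callback. A simplified model of that coupling (`MY_DEVICE_ATTR_RO` is a stand-in, not the kernel header):

```c
#include <stdio.h>

struct device_attribute {
	const char *name;
	int (*show)(char *buf);
};

/* Like the kernel's DEVICE_ATTR_RO(): the callback MUST be named
 * <name>_show for the ## paste below to resolve. */
#define MY_DEVICE_ATTR_RO(_name) \
	struct device_attribute dev_attr_##_name = { #_name, _name##_show }

static int offset_show(char *buf)
{
	return sprintf(buf, "%lld\n", 4096LL);	/* cf. mtd->part.offset */
}

MY_DEVICE_ATTR_RO(offset);	/* defines dev_attr_offset */
```

The same pasting scheme is what the MTD_DEVICE_ATTR_RO/RW() helpers in the mtdcore.c hunk rely on.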
+1 -1
drivers/mtd/nand/bbt.c
···
123 123 		unsigned int rbits = bits_per_block + offs - BITS_PER_LONG;
124 124 
125 125 		pos[1] &= ~GENMASK(rbits - 1, 0);
126   - 		pos[1] |= val >> rbits;
126   + 		pos[1] |= val >> (bits_per_block - rbits);
127 127 	}
128 128 
129 129 	return 0;
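The BBT corner case being fixed: an entry of bits_per_block bits can straddle two unsigned longs, and the bits that spill into the second word are the *top* rbits bits of val, so the shift must be `bits_per_block - rbits` (the old `val >> rbits` dropped the wrong bits unless the two quantities happened to be equal). A self-contained model of the fixed write path (`bbt_set_status` and `mask_of` are illustrative names):

```c
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long mask_of(unsigned int nbits)
{
	return nbits >= BITS_PER_LONG ? ~0UL : (1UL << nbits) - 1;
}

/* Store a bits_per_block-wide status for a block in a packed word
 * array, handling the case where the field straddles two words. */
void bbt_set_status(unsigned long *words, unsigned int block,
		    unsigned int bits_per_block, unsigned long val)
{
	unsigned int bit = block * bits_per_block;
	unsigned long *pos = &words[bit / BITS_PER_LONG];
	unsigned int offs = bit % BITS_PER_LONG;

	pos[0] &= ~(mask_of(bits_per_block) << offs);
	pos[0] |= val << offs;

	if (bits_per_block + offs > BITS_PER_LONG) {
		unsigned int rbits = bits_per_block + offs - BITS_PER_LONG;

		pos[1] &= ~mask_of(rbits);
		/* the spilled bits are the TOP rbits bits of val */
		pos[1] |= val >> (bits_per_block - rbits);
	}
}
```

With 6 bits per block, block 10 starts at bit 60 of a 64-bit word: 4 bits land in the first word and the top 2 bits of val in the second.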
+8
drivers/mtd/nand/raw/Kconfig
···
453 453 	  NFC v800: RK3308, RV1108
454 454 	  NFC v900: PX30, RK3326
455 455 
    456 + config MTD_NAND_PL35X
    457 + 	tristate "ARM PL35X NAND controller"
    458 + 	depends on OF || COMPILE_TEST
    459 + 	depends on PL353_SMC
    460 + 	help
    461 + 	  Enables support for PrimeCell SMC PL351 and PL353 NAND
    462 + 	  controller found on Zynq7000.
    463 + 
456 464 comment "Misc"
457 465 
458 466 config MTD_SM_COMMON
+1
drivers/mtd/nand/raw/Makefile
···
 57  57 obj-$(CONFIG_MTD_NAND_ARASAN) += arasan-nand-controller.o
 58  58 obj-$(CONFIG_MTD_NAND_INTEL_LGM) += intel-nand-controller.o
 59  59 obj-$(CONFIG_MTD_NAND_ROCKCHIP) += rockchip-nand-controller.o
     60 + obj-$(CONFIG_MTD_NAND_PL35X) += pl35x-nand-controller.o
 60  61 
 61  62 nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o
 62  63 nand-objs += nand_onfi.o
+279 -62
drivers/mtd/nand/raw/arasan-nand-controller.c
··· 15 15 #include <linux/clk.h> 16 16 #include <linux/delay.h> 17 17 #include <linux/dma-mapping.h> 18 + #include <linux/gpio/consumer.h> 18 19 #include <linux/interrupt.h> 19 20 #include <linux/iopoll.h> 20 21 #include <linux/module.h> ··· 54 53 #define PROG_RST BIT(8) 55 54 #define PROG_GET_FEATURE BIT(9) 56 55 #define PROG_SET_FEATURE BIT(10) 56 + #define PROG_CHG_RD_COL_ENH BIT(14) 57 57 58 58 #define INTR_STS_EN_REG 0x14 59 59 #define INTR_SIG_EN_REG 0x18 ··· 71 69 #define DMA_ADDR1_REG 0x24 72 70 73 71 #define FLASH_STS_REG 0x28 72 + 73 + #define TIMING_REG 0x2C 74 + #define TCCS_TIME_500NS 0 75 + #define TCCS_TIME_300NS 3 76 + #define TCCS_TIME_200NS 2 77 + #define TCCS_TIME_100NS 1 78 + #define FAST_TCAD BIT(2) 79 + #define DQS_BUFF_SEL_IN(x) FIELD_PREP(GENMASK(6, 3), (x)) 80 + #define DQS_BUFF_SEL_OUT(x) FIELD_PREP(GENMASK(18, 15), (x)) 74 81 75 82 #define DATA_PORT_REG 0x30 76 83 ··· 102 91 103 92 #define DATA_INTERFACE_REG 0x6C 104 93 #define DIFACE_SDR_MODE(x) FIELD_PREP(GENMASK(2, 0), (x)) 105 - #define DIFACE_DDR_MODE(x) FIELD_PREP(GENMASK(5, 3), (X)) 94 + #define DIFACE_DDR_MODE(x) FIELD_PREP(GENMASK(5, 3), (x)) 106 95 #define DIFACE_SDR 0 107 96 #define DIFACE_NVDDR BIT(9) 108 97 ··· 117 106 118 107 #define ANFC_XLNX_SDR_DFLT_CORE_CLK 100000000 119 108 #define ANFC_XLNX_SDR_HS_CORE_CLK 80000000 109 + 110 + static struct gpio_desc *anfc_default_cs_array[2] = {NULL, NULL}; 120 111 121 112 /** 122 113 * struct anfc_op - Defines how to execute an operation ··· 150 137 * struct anand - Defines the NAND chip related information 151 138 * @node: Used to store NAND chips into a list 152 139 * @chip: NAND chip information structure 153 - * @cs: Chip select line 154 140 * @rb: Ready-busy line 155 141 * @page_sz: Register value of the page_sz field to use 156 142 * @clk: Expected clock frequency to use 157 - * @timings: Data interface timing mode to use 143 + * @data_iface: Data interface timing mode to use 144 + * @timings: NV-DDR specific timings to use 158 
145 * @ecc_conf: Hardware ECC configuration value 159 146 * @strength: Register value of the ECC strength 160 147 * @raddr_cycles: Row address cycle information ··· 164 151 * @errloc: Array of errors located with soft BCH 165 152 * @hw_ecc: Buffer to store syndromes computed by hardware 166 153 * @bch: BCH structure 154 + * @cs_idx: Array of chip-select for this device, values are indexes 155 + * of the controller structure @gpio_cs array 156 + * @ncs_idx: Size of the @cs_idx array 167 157 */ 168 158 struct anand { 169 159 struct list_head node; 170 160 struct nand_chip chip; 171 - unsigned int cs; 172 161 unsigned int rb; 173 162 unsigned int page_sz; 174 163 unsigned long clk; 164 + u32 data_iface; 175 165 u32 timings; 176 166 u32 ecc_conf; 177 167 u32 strength; ··· 185 169 unsigned int *errloc; 186 170 u8 *hw_ecc; 187 171 struct bch_control *bch; 172 + int *cs_idx; 173 + int ncs_idx; 188 174 }; 189 175 190 176 /** ··· 197 179 * @bus_clk: Pointer to the flash clock 198 180 * @controller: Base controller structure 199 181 * @chips: List of all NAND chips attached to the controller 200 - * @assigned_cs: Bitmask describing already assigned CS lines 201 182 * @cur_clk: Current clock rate 183 + * @cs_array: CS array. Native CS are left empty, the other cells are 184 + * populated with their corresponding GPIO descriptor. 
185 + * @ncs: Size of @cs_array 186 + * @cur_cs: Index in @cs_array of the currently in use CS 187 + * @native_cs: Currently selected native CS 188 + * @spare_cs: Native CS that is not wired (may be selected when a GPIO 189 + * CS is in use) 202 190 */ 203 191 struct arasan_nfc { 204 192 struct device *dev; ··· 213 189 struct clk *bus_clk; 214 190 struct nand_controller controller; 215 191 struct list_head chips; 216 - unsigned long assigned_cs; 217 192 unsigned int cur_clk; 193 + struct gpio_desc **cs_array; 194 + unsigned int ncs; 195 + int cur_cs; 196 + unsigned int native_cs; 197 + unsigned int spare_cs; 218 198 }; 219 199 220 200 static struct anand *to_anand(struct nand_chip *nand) ··· 301 273 return 0; 302 274 } 303 275 276 + static bool anfc_is_gpio_cs(struct arasan_nfc *nfc, int nfc_cs) 277 + { 278 + return nfc_cs >= 0 && nfc->cs_array[nfc_cs]; 279 + } 280 + 281 + static int anfc_relative_to_absolute_cs(struct anand *anand, int num) 282 + { 283 + return anand->cs_idx[num]; 284 + } 285 + 286 + static void anfc_assert_cs(struct arasan_nfc *nfc, unsigned int nfc_cs_idx) 287 + { 288 + /* CS did not change: do nothing */ 289 + if (nfc->cur_cs == nfc_cs_idx) 290 + return; 291 + 292 + /* Deassert the previous CS if it was a GPIO */ 293 + if (anfc_is_gpio_cs(nfc, nfc->cur_cs)) 294 + gpiod_set_value_cansleep(nfc->cs_array[nfc->cur_cs], 1); 295 + 296 + /* Assert the new one */ 297 + if (anfc_is_gpio_cs(nfc, nfc_cs_idx)) { 298 + nfc->native_cs = nfc->spare_cs; 299 + gpiod_set_value_cansleep(nfc->cs_array[nfc_cs_idx], 0); 300 + } else { 301 + nfc->native_cs = nfc_cs_idx; 302 + } 303 + 304 + nfc->cur_cs = nfc_cs_idx; 305 + } 306 + 307 + static int anfc_select_target(struct nand_chip *chip, int target) 308 + { 309 + struct anand *anand = to_anand(chip); 310 + struct arasan_nfc *nfc = to_anfc(chip->controller); 311 + unsigned int nfc_cs_idx = anfc_relative_to_absolute_cs(anand, target); 312 + int ret; 313 + 314 + anfc_assert_cs(nfc, nfc_cs_idx); 315 + 316 + /* Update the 
controller timings and the potential ECC configuration */ 317 + writel_relaxed(anand->data_iface, nfc->base + DATA_INTERFACE_REG); 318 + writel_relaxed(anand->timings, nfc->base + TIMING_REG); 319 + 320 + /* Update clock frequency */ 321 + if (nfc->cur_clk != anand->clk) { 322 + clk_disable_unprepare(nfc->controller_clk); 323 + ret = clk_set_rate(nfc->controller_clk, anand->clk); 324 + if (ret) { 325 + dev_err(nfc->dev, "Failed to change clock rate\n"); 326 + return ret; 327 + } 328 + 329 + ret = clk_prepare_enable(nfc->controller_clk); 330 + if (ret) { 331 + dev_err(nfc->dev, 332 + "Failed to re-enable the controller clock\n"); 333 + return ret; 334 + } 335 + 336 + nfc->cur_clk = anand->clk; 337 + } 338 + 339 + return 0; 340 + } 341 + 304 342 /* 305 343 * When using the embedded hardware ECC engine, the controller is in charge of 306 344 * feeding the engine with, first, the ECC residue present in the data array. ··· 409 315 .addr2_reg = 410 316 ((page >> 16) & 0xFF) | 411 317 ADDR2_STRENGTH(anand->strength) | 412 - ADDR2_CS(anand->cs), 318 + ADDR2_CS(nfc->native_cs), 413 319 .cmd_reg = 414 320 CMD_1(NAND_CMD_READ0) | 415 321 CMD_2(NAND_CMD_READSTART) | ··· 495 401 return 0; 496 402 } 497 403 404 + static int anfc_sel_read_page_hw_ecc(struct nand_chip *chip, u8 *buf, 405 + int oob_required, int page) 406 + { 407 + int ret; 408 + 409 + ret = anfc_select_target(chip, chip->cur_cs); 410 + if (ret) 411 + return ret; 412 + 413 + return anfc_read_page_hw_ecc(chip, buf, oob_required, page); 414 + }; 415 + 498 416 static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf, 499 417 int oob_required, int page) 500 418 { ··· 526 420 .addr2_reg = 527 421 ((page >> 16) & 0xFF) | 528 422 ADDR2_STRENGTH(anand->strength) | 529 - ADDR2_CS(anand->cs), 423 + ADDR2_CS(nfc->native_cs), 530 424 .cmd_reg = 531 425 CMD_1(NAND_CMD_SEQIN) | 532 426 CMD_2(NAND_CMD_PAGEPROG) | ··· 567 461 return ret; 568 462 } 569 463 464 + static int anfc_sel_write_page_hw_ecc(struct nand_chip 
*chip, const u8 *buf, 465 + int oob_required, int page) 466 + { 467 + int ret; 468 + 469 + ret = anfc_select_target(chip, chip->cur_cs); 470 + if (ret) 471 + return ret; 472 + 473 + return anfc_write_page_hw_ecc(chip, buf, oob_required, page); 474 + }; 475 + 570 476 /* NAND framework ->exec_op() hooks and related helpers */ 571 477 static int anfc_parse_instructions(struct nand_chip *chip, 572 478 const struct nand_subop *subop, 573 479 struct anfc_op *nfc_op) 574 480 { 481 + struct arasan_nfc *nfc = to_anfc(chip->controller); 575 482 struct anand *anand = to_anand(chip); 576 483 const struct nand_op_instr *instr = NULL; 577 484 bool first_cmd = true; ··· 592 473 int ret, i; 593 474 594 475 memset(nfc_op, 0, sizeof(*nfc_op)); 595 - nfc_op->addr2_reg = ADDR2_CS(anand->cs); 476 + nfc_op->addr2_reg = ADDR2_CS(nfc->native_cs); 596 477 nfc_op->cmd_reg = CMD_PAGE_SIZE(anand->page_sz); 597 478 598 479 for (op_id = 0; op_id < subop->ninstrs; op_id++) { ··· 741 622 static int anfc_data_read_type_exec(struct nand_chip *chip, 742 623 const struct nand_subop *subop) 743 624 { 744 - return anfc_misc_data_type_exec(chip, subop, PROG_PGRD); 625 + u32 prog_reg = PROG_PGRD; 626 + 627 + /* 628 + * Experience shows that while in SDR mode sending a CHANGE READ COLUMN 629 + * command through the READ PAGE "type" always works fine, when in 630 + * NV-DDR mode the same command simply fails. However, it was also 631 + * spotted that any CHANGE READ COLUMN command sent through the CHANGE 632 + * READ COLUMN ENHANCED "type" would correctly work in both cases (SDR 633 + * and NV-DDR). So, for simplicity, let's program the controller with 634 + * the CHANGE READ COLUMN ENHANCED "type" whenever we are requested to 635 + * perform a CHANGE READ COLUMN operation. 
636 + */ 637 + if (subop->instrs[0].ctx.cmd.opcode == NAND_CMD_RNDOUT && 638 + subop->instrs[2].ctx.cmd.opcode == NAND_CMD_RNDOUTSTART) 639 + prog_reg = PROG_CHG_RD_COL_ENH; 640 + 641 + return anfc_misc_data_type_exec(chip, subop, prog_reg); 745 642 } 746 643 747 644 static int anfc_param_write_type_exec(struct nand_chip *chip, ··· 888 753 NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 889 754 ); 890 755 891 - static int anfc_select_target(struct nand_chip *chip, int target) 892 - { 893 - struct anand *anand = to_anand(chip); 894 - struct arasan_nfc *nfc = to_anfc(chip->controller); 895 - int ret; 896 - 897 - /* Update the controller timings and the potential ECC configuration */ 898 - writel_relaxed(anand->timings, nfc->base + DATA_INTERFACE_REG); 899 - 900 - /* Update clock frequency */ 901 - if (nfc->cur_clk != anand->clk) { 902 - clk_disable_unprepare(nfc->controller_clk); 903 - ret = clk_set_rate(nfc->controller_clk, anand->clk); 904 - if (ret) { 905 - dev_err(nfc->dev, "Failed to change clock rate\n"); 906 - return ret; 907 - } 908 - 909 - ret = clk_prepare_enable(nfc->controller_clk); 910 - if (ret) { 911 - dev_err(nfc->dev, 912 - "Failed to re-enable the controller clock\n"); 913 - return ret; 914 - } 915 - 916 - nfc->cur_clk = anand->clk; 917 - } 918 - 919 - return 0; 920 - } 921 - 922 756 static int anfc_check_op(struct nand_chip *chip, 923 757 const struct nand_operation *op) 924 758 { ··· 965 861 struct anand *anand = to_anand(chip); 966 862 struct arasan_nfc *nfc = to_anfc(chip->controller); 967 863 struct device_node *np = nfc->dev->of_node; 864 + const struct nand_sdr_timings *sdr; 865 + const struct nand_nvddr_timings *nvddr; 866 + unsigned int tccs_min, dqs_mode, fast_tcad; 867 + 868 + if (nand_interface_is_nvddr(conf)) { 869 + nvddr = nand_get_nvddr_timings(conf); 870 + if (IS_ERR(nvddr)) 871 + return PTR_ERR(nvddr); 872 + } else { 873 + sdr = nand_get_sdr_timings(conf); 874 + if (IS_ERR(sdr)) 875 + return PTR_ERR(sdr); 876 + } 968 877 969 878 if 
	    (target < 0)
		return 0;

-	anand->timings = DIFACE_SDR | DIFACE_SDR_MODE(conf->timings.mode);
+	if (nand_interface_is_sdr(conf)) {
+		anand->data_iface = DIFACE_SDR |
+				    DIFACE_SDR_MODE(conf->timings.mode);
+		anand->timings = 0;
+	} else {
+		anand->data_iface = DIFACE_NVDDR |
+				    DIFACE_DDR_MODE(conf->timings.mode);
+
+		if (conf->timings.nvddr.tCCS_min <= 100000)
+			tccs_min = TCCS_TIME_100NS;
+		else if (conf->timings.nvddr.tCCS_min <= 200000)
+			tccs_min = TCCS_TIME_200NS;
+		else if (conf->timings.nvddr.tCCS_min <= 300000)
+			tccs_min = TCCS_TIME_300NS;
+		else
+			tccs_min = TCCS_TIME_500NS;
+
+		fast_tcad = 0;
+		if (conf->timings.nvddr.tCAD_min < 45000)
+			fast_tcad = FAST_TCAD;
+
+		switch (conf->timings.mode) {
+		case 5:
+		case 4:
+			dqs_mode = 2;
+			break;
+		case 3:
+			dqs_mode = 3;
+			break;
+		case 2:
+			dqs_mode = 4;
+			break;
+		case 1:
+			dqs_mode = 5;
+			break;
+		case 0:
+		default:
+			dqs_mode = 6;
+			break;
+		}
+
+		anand->timings = tccs_min | fast_tcad |
+				 DQS_BUFF_SEL_IN(dqs_mode) |
+				 DQS_BUFF_SEL_OUT(dqs_mode);
+	}
+
	anand->clk = ANFC_XLNX_SDR_DFLT_CORE_CLK;

	/*
	 * Due to a hardware bug in the ZynqMP SoC, SDR timing modes 0-1 work
	 * with f > 90MHz (default clock is 100MHz) but signals are unstable
	 * with higher modes. Hence we decrease a little bit the clock rate to
-	 * 80MHz when using modes 2-5 with this SoC.
+	 * 80MHz when using SDR modes 2-5 with this SoC.
	 */
	if (of_device_is_compatible(np, "xlnx,zynqmp-nand-controller") &&
-	    conf->timings.mode >= 2)
+	    nand_interface_is_sdr(conf) && conf->timings.mode >= 2)
		anand->clk = ANFC_XLNX_SDR_HS_CORE_CLK;

	return 0;
···
	if (!anand->bch)
		return -EINVAL;

-	ecc->read_page = anfc_read_page_hw_ecc;
-	ecc->write_page = anfc_write_page_hw_ecc;
+	ecc->read_page = anfc_sel_read_page_hw_ecc;
+	ecc->write_page = anfc_sel_write_page_hw_ecc;

	return 0;
}
···
	struct anand *anand;
	struct nand_chip *chip;
	struct mtd_info *mtd;
-	int cs, rb, ret;
+	int rb, ret, i;

	anand = devm_kzalloc(nfc->dev, sizeof(*anand), GFP_KERNEL);
	if (!anand)
		return -ENOMEM;

-	/* We do not support multiple CS per chip yet */
-	if (of_property_count_elems_of_size(np, "reg", sizeof(u32)) != 1) {
+	/* Chip-select init */
+	anand->ncs_idx = of_property_count_elems_of_size(np, "reg", sizeof(u32));
+	if (anand->ncs_idx <= 0 || anand->ncs_idx > nfc->ncs) {
		dev_err(nfc->dev, "Invalid reg property\n");
		return -EINVAL;
	}

-	ret = of_property_read_u32(np, "reg", &cs);
-	if (ret)
-		return ret;
+	anand->cs_idx = devm_kcalloc(nfc->dev, anand->ncs_idx,
+				     sizeof(*anand->cs_idx), GFP_KERNEL);
+	if (!anand->cs_idx)
+		return -ENOMEM;

+	for (i = 0; i < anand->ncs_idx; i++) {
+		ret = of_property_read_u32_index(np, "reg", i,
+						 &anand->cs_idx[i]);
+		if (ret) {
+			dev_err(nfc->dev, "invalid CS property: %d\n", ret);
+			return ret;
+		}
+	}
+
+	/* Ready-busy init */
	ret = of_property_read_u32(np, "nand-rb", &rb);
	if (ret)
		return ret;

-	if (cs >= ANFC_MAX_CS || rb >= ANFC_MAX_CS) {
-		dev_err(nfc->dev, "Wrong CS %d or RB %d\n", cs, rb);
+	if (rb >= ANFC_MAX_CS) {
+		dev_err(nfc->dev, "Wrong RB %d\n", rb);
		return -EINVAL;
	}

-	if (test_and_set_bit(cs, &nfc->assigned_cs)) {
-		dev_err(nfc->dev, "Already assigned CS %d\n", cs);
-		return -EINVAL;
-	}
-
-	anand->cs = cs;
	anand->rb = rb;

	chip = &anand->chip;
···
		return -EINVAL;
	}

-	ret = nand_scan(chip, 1);
+	ret = nand_scan(chip, anand->ncs_idx);
	if (ret) {
		dev_err(nfc->dev, "Scan operation failed\n");
		return ret;
···
	int nchips = of_get_child_count(np);
	int ret;

-	if (!nchips || nchips > ANFC_MAX_CS) {
+	if (!nchips) {
		dev_err(nfc->dev, "Incorrect number of NAND chips (%d)\n",
			nchips);
		return -EINVAL;
···

	/* Enable interrupt status */
	writel_relaxed(EVENT_MASK, nfc->base + INTR_STS_EN_REG);
+
+	nfc->cur_cs = -1;
+}
+
+static int anfc_parse_cs(struct arasan_nfc *nfc)
+{
+	int ret;
+
+	/* Check the gpio-cs property */
+	ret = rawnand_dt_parse_gpio_cs(nfc->dev, &nfc->cs_array, &nfc->ncs);
+	if (ret)
+		return ret;
+
+	/*
+	 * The controller native CS cannot be both disabled at the same time.
+	 * Hence, only one native CS can be used if GPIO CS are needed, so that
+	 * the other is selected when a non-native CS must be asserted (not
+	 * wired physically or configured as GPIO instead of NAND CS). In this
+	 * case, the "not" chosen CS is assigned to nfc->spare_cs and selected
+	 * whenever a GPIO CS must be asserted.
+	 */
+	if (nfc->cs_array && nfc->ncs > 2) {
+		if (!nfc->cs_array[0] && !nfc->cs_array[1]) {
+			dev_err(nfc->dev,
+				"Assign a single native CS when using GPIOs\n");
+			return -EINVAL;
+		}
+
+		if (nfc->cs_array[0])
+			nfc->spare_cs = 0;
+		else
+			nfc->spare_cs = 1;
+	}
+
+	if (!nfc->cs_array) {
+		nfc->cs_array = anfc_default_cs_array;
+		nfc->ncs = ANFC_MAX_CS;
+		return 0;
+	}
+
+	return 0;
}

static int anfc_probe(struct platform_device *pdev)
···
	ret = clk_prepare_enable(nfc->bus_clk);
	if (ret)
		goto disable_controller_clk;
+
+	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret)
+		goto disable_bus_clk;
+
+	ret = anfc_parse_cs(nfc);
+	if (ret)
+		goto disable_bus_clk;

	ret = anfc_chips_init(nfc);
	if (ret)
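The NV-DDR branch above maps the chip's tCCS_min (given in picoseconds) onto one of four register encodings and derives a DQS buffer setting from the timing mode. A standalone sketch of those two selections follows; the `TCCS_TIME_*` enum values and function names here are illustrative only, not the real register bit definitions from the driver:

```c
#include <assert.h>

/* Hypothetical encodings; the real TCCS_TIME_* register values live in the
 * driver header and are not reproduced here. */
enum tccs_time {
	TCCS_TIME_100NS,
	TCCS_TIME_200NS,
	TCCS_TIME_300NS,
	TCCS_TIME_500NS,
};

/* Same threshold ladder as the diff: pick the smallest slot that still
 * covers the chip's tCCS_min (picoseconds). */
static enum tccs_time pick_tccs(unsigned long tccs_min_ps)
{
	if (tccs_min_ps <= 100000)
		return TCCS_TIME_100NS;
	else if (tccs_min_ps <= 200000)
		return TCCS_TIME_200NS;
	else if (tccs_min_ps <= 300000)
		return TCCS_TIME_300NS;
	return TCCS_TIME_500NS;
}

/* Faster NV-DDR modes get a shorter DQS buffer delay, mirroring the
 * switch statement in the diff. */
static int pick_dqs_mode(unsigned int mode)
{
	switch (mode) {
	case 5:
	case 4:
		return 2;
	case 3:
		return 3;
	case 2:
		return 4;
	case 1:
		return 5;
	case 0:
	default:
		return 6;
	}
}
```

Note that the ladder rounds the chip requirement *up* to the next available register slot, so the controller never violates the minimum tCCS the chip advertises.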
+7 -4
drivers/mtd/nand/raw/atmel/nand-controller.c
···
	nc = to_nand_controller(nand->base.controller);

	/* DDR interface not supported. */
-	if (conf->type != NAND_SDR_IFACE)
+	if (!nand_interface_is_sdr(conf))
		return -ENOTSUPP;

	/*
···
			     const struct nand_interface_config *conf)
{
	struct atmel_nand *nand = to_atmel_nand(chip);
+	const struct nand_sdr_timings *sdr;
	struct atmel_nand_controller *nc;
+
+	sdr = nand_get_sdr_timings(conf);
+	if (IS_ERR(sdr))
+		return PTR_ERR(sdr);

	nc = to_nand_controller(nand->base.controller);
···
	}

	nand = devm_kzalloc(nc->dev, struct_size(nand, cs, numcs), GFP_KERNEL);
-	if (!nand) {
-		dev_err(nc->dev, "Failed to allocate NAND object\n");
+	if (!nand)
		return ERR_PTR(-ENOMEM);
-	}

	nand->numcs = numcs;
+3 -3
drivers/mtd/nand/raw/cadence-nand-controller.c
···
	 * for tRP and tRH timings. If it is NOT possible to sample data
	 * with optimal tRP/tRH settings, the parameters will be extended.
	 * If clk_period is 50ns (the lowest value) this condition is met
-	 * for asynchronous timing modes 1, 2, 3, 4 and 5.
-	 * If clk_period is 20ns the condition is met only
-	 * for asynchronous timing mode 5.
+	 * for SDR timing modes 1, 2, 3, 4 and 5.
+	 * If clk_period is 20ns the condition is met only for SDR timing
+	 * mode 5.
	 */
	if (sdr->tRC_min <= clk_period &&
	    sdr->tRP_min <= (clk_period / 2) &&
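The condition the comment describes can be checked in isolation: optimal sampling needs tRC to fit one clock period and tRP/tRH to each fit half of it. A minimal sketch, using ONFI SDR mode timing values (in picoseconds) quoted from the specification from memory:

```c
#include <assert.h>
#include <stdbool.h>

/* Just the three read-cycle timings the Cadence check looks at. */
struct sdr_subset {
	unsigned long tRC_min;	/* ps */
	unsigned long tRP_min;	/* ps */
	unsigned long tRH_min;	/* ps */
};

/* True when data can be sampled with optimal tRP/tRH settings at the
 * given controller clock period (ps), per the comment above. */
static bool optimal_sampling(const struct sdr_subset *t,
			     unsigned long clk_period_ps)
{
	return t->tRC_min <= clk_period_ps &&
	       t->tRP_min <= clk_period_ps / 2 &&
	       t->tRH_min <= clk_period_ps / 2;
}
```

With a 20 ns clock (20000 ps), SDR mode 5 (tRC = 20 ns, tRP = 10 ns, tRH = 7 ns) passes while mode 4 (tRC = 25 ns) already fails, matching the comment's "only SDR timing mode 5" claim.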
+1 -1
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.h
···
struct gpmi_devdata {
	enum gpmi_type type;
	int bch_max_ecc_strength;
-	int max_chain_delay; /* See the async EDO mode */
+	int max_chain_delay; /* See the SDR EDO mode */
	const char * const *clks;
	const int clks_count;
};
+1 -3
drivers/mtd/nand/raw/hisi504_nand.c
···

	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
	host->mmio = devm_ioremap_resource(dev, res);
-	if (IS_ERR(host->mmio)) {
-		dev_err(dev, "devm_ioremap_resource[1] fail\n");
+	if (IS_ERR(host->mmio))
		return PTR_ERR(host->mmio);
-	}

	mtd->name = "hisi_nand";
	mtd->dev.parent = &pdev->dev;
+5
drivers/mtd/nand/raw/internals.h
···
			 unsigned int timing_mode);
unsigned int
onfi_find_closest_sdr_mode(const struct nand_sdr_timings *spec_timings);
+unsigned int
+onfi_find_closest_nvddr_mode(const struct nand_nvddr_timings *spec_timings);
int nand_choose_best_sdr_timings(struct nand_chip *chip,
				 struct nand_interface_config *iface,
				 struct nand_sdr_timings *spec_timings);
+int nand_choose_best_nvddr_timings(struct nand_chip *chip,
+				   struct nand_interface_config *iface,
+				   struct nand_nvddr_timings *spec_timings);
const struct nand_interface_config *nand_get_reset_interface_config(void);
int nand_get_features(struct nand_chip *chip, int addr, u8 *subfeature_param);
int nand_set_features(struct nand_chip *chip, int addr, u8 *subfeature_param);
+4 -2
drivers/mtd/nand/raw/marvell_nand.c
···
};

/**
- * Derives a duration in numbers of clock cycles.
+ * TO_CYCLES() - Derives a duration in numbers of clock cycles.
 *
 * @ps: Duration in pico-seconds
 * @period_ns: Clock period in nano-seconds
···
	return ret;

	ret = clk_prepare_enable(nfc->reg_clk);
-	if (ret < 0)
+	if (ret < 0) {
+		clk_disable_unprepare(nfc->core_clk);
		return ret;
+	}

	/*
	 * Reset nfc->selected_chip so the next command will cause the timing
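The kernel-doc fix above documents a picoseconds-to-cycles conversion. The exact macro body is not shown in this hunk, so the sketch below is an assumption about its shape: a round-up division so the programmed cycle count always covers at least the requested duration.

```c
/* Assumed shape of a TO_CYCLES()-style helper: convert a duration in
 * picoseconds to clock cycles at a given period in nanoseconds,
 * rounding up so timing minimums are never violated. */
static unsigned int to_cycles(unsigned long ps, unsigned long period_ns)
{
	unsigned long ns = ps / 1000;

	/* Round-up division: partial cycles still cost a full cycle. */
	return (ns + period_ns - 1) / period_ns;
}
```

For example, tADL_min = 400000 ps (400 ns) on a 10 ns clock needs 40 cycles, while 25 ns on the same clock needs 3 cycles rather than 2, because rounding down would undershoot the minimum.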
+1 -3
drivers/mtd/nand/raw/mtk_ecc.c
···

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	ecc->regs = devm_ioremap_resource(dev, res);
-	if (IS_ERR(ecc->regs)) {
-		dev_err(dev, "failed to map regs: %ld\n", PTR_ERR(ecc->regs));
+	if (IS_ERR(ecc->regs))
		return PTR_ERR(ecc->regs);
-	}

	ecc->clk = devm_clk_get(dev, NULL);
	if (IS_ERR(ecc->clk)) {
+287 -77
drivers/mtd/nand/raw/nand_base.c
···
#include <linux/io.h>
#include <linux/mtd/partitions.h>
#include <linux/of.h>
+#include <linux/of_gpio.h>
#include <linux/gpio/consumer.h>

#include "internals.h"
···
 */
int nand_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms)
{
-	const struct nand_sdr_timings *timings;
+	const struct nand_interface_config *conf;
	u8 status = 0;
	int ret;
···
		return -ENOTSUPP;

	/* Wait tWB before polling the STATUS reg. */
-	timings = nand_get_sdr_timings(nand_get_interface_config(chip));
-	ndelay(PSEC_TO_NSEC(timings->tWB_max));
+	conf = nand_get_interface_config(chip);
+	ndelay(NAND_COMMON_TIMING_NS(conf, tWB_max));

	ret = nand_status_op(chip, NULL);
	if (ret)
···
static int nand_setup_interface(struct nand_chip *chip, int chipnr)
{
	const struct nand_controller_ops *ops = chip->controller->ops;
-	u8 tmode_param[ONFI_SUBFEATURE_PARAM_LEN] = { };
+	u8 tmode_param[ONFI_SUBFEATURE_PARAM_LEN] = { }, request;
	int ret;

	if (!nand_controller_can_setup_interface(chip))
···
	if (!chip->best_interface_config)
		return 0;

-	tmode_param[0] = chip->best_interface_config->timings.mode;
+	request = chip->best_interface_config->timings.mode;
+	if (nand_interface_is_sdr(chip->best_interface_config))
+		request |= ONFI_DATA_INTERFACE_SDR;
+	else
+		request |= ONFI_DATA_INTERFACE_NVDDR;
+	tmode_param[0] = request;

	/* Change the mode on the chip side (if supported by the NAND chip) */
	if (nand_supports_set_features(chip, ONFI_FEATURE_ADDR_TIMING_MODE)) {
···
	if (ret)
		goto err_reset_chip;

-	if (tmode_param[0] != chip->best_interface_config->timings.mode) {
-		pr_warn("timing mode %d not acknowledged by the NAND chip\n",
+	if (request != tmode_param[0]) {
+		pr_warn("%s timing mode %d not acknowledged by the NAND chip\n",
+			nand_interface_is_nvddr(chip->best_interface_config) ? "NV-DDR" : "SDR",
			chip->best_interface_config->timings.mode);
+		pr_debug("NAND chip would work in %s timing mode %d\n",
+			 tmode_param[0] & ONFI_DATA_INTERFACE_NVDDR ? "NV-DDR" : "SDR",
+			 (unsigned int)ONFI_TIMING_MODE_PARAM(tmode_param[0]));
		goto err_reset_chip;
	}
···
		/* Fallback to slower modes */
		best_mode = iface->timings.mode;
	} else if (chip->parameters.onfi) {
-		best_mode = fls(chip->parameters.onfi->async_timing_mode) - 1;
+		best_mode = fls(chip->parameters.onfi->sdr_timing_modes) - 1;
	}

	for (mode = best_mode; mode >= 0; mode--) {
···
		ret = ops->setup_interface(chip, NAND_DATA_IFACE_CHECK_ONLY,
					   iface);
-		if (!ret)
+		if (!ret) {
+			chip->best_interface_config = iface;
			break;
+		}
	}

-	chip->best_interface_config = iface;
+	return ret;
+}

-	return 0;
+/**
+ * nand_choose_best_nvddr_timings - Pick up the best NVDDR timings that both the
+ *                                  NAND controller and the NAND chip support
+ * @chip: the NAND chip
+ * @iface: the interface configuration (can eventually be updated)
+ * @spec_timings: specific timings, when not fitting the ONFI specification
+ *
+ * If specific timings are provided, use them. Otherwise, retrieve supported
+ * timing modes from ONFI information.
+ */
+int nand_choose_best_nvddr_timings(struct nand_chip *chip,
+				   struct nand_interface_config *iface,
+				   struct nand_nvddr_timings *spec_timings)
+{
+	const struct nand_controller_ops *ops = chip->controller->ops;
+	int best_mode = 0, mode, ret;
+
+	iface->type = NAND_NVDDR_IFACE;
+
+	if (spec_timings) {
+		iface->timings.nvddr = *spec_timings;
+		iface->timings.mode = onfi_find_closest_nvddr_mode(spec_timings);
+
+		/* Verify the controller supports the requested interface */
+		ret = ops->setup_interface(chip, NAND_DATA_IFACE_CHECK_ONLY,
+					   iface);
+		if (!ret) {
+			chip->best_interface_config = iface;
+			return ret;
+		}
+
+		/* Fallback to slower modes */
+		best_mode = iface->timings.mode;
+	} else if (chip->parameters.onfi) {
+		best_mode = fls(chip->parameters.onfi->nvddr_timing_modes) - 1;
+	}
+
+	for (mode = best_mode; mode >= 0; mode--) {
+		onfi_fill_interface_config(chip, iface, NAND_NVDDR_IFACE, mode);
+
+		ret = ops->setup_interface(chip, NAND_DATA_IFACE_CHECK_ONLY,
+					   iface);
+		if (!ret) {
+			chip->best_interface_config = iface;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+/**
+ * nand_choose_best_timings - Pick up the best NVDDR or SDR timings that both
+ *                            NAND controller and the NAND chip support
+ * @chip: the NAND chip
+ * @iface: the interface configuration (can eventually be updated)
+ *
+ * If specific timings are provided, use them. Otherwise, retrieve supported
+ * timing modes from ONFI information.
+ */
+static int nand_choose_best_timings(struct nand_chip *chip,
+				    struct nand_interface_config *iface)
+{
+	int ret;
+
+	/* Try the fastest timings: NV-DDR */
+	ret = nand_choose_best_nvddr_timings(chip, iface, NULL);
+	if (!ret)
+		return 0;
+
+	/* Fallback to SDR timings otherwise */
+	return nand_choose_best_sdr_timings(chip, iface, NULL);
}

/**
···
	if (chip->ops.choose_interface_config)
		ret = chip->ops.choose_interface_config(chip, iface);
	else
-		ret = nand_choose_best_sdr_timings(chip, iface, NULL);
+		ret = nand_choose_best_timings(chip, iface);

	if (ret)
		kfree(iface);
···
			     unsigned int offset_in_page, void *buf,
			     unsigned int len)
{
-	const struct nand_sdr_timings *sdr =
-		nand_get_sdr_timings(nand_get_interface_config(chip));
+	const struct nand_interface_config *conf =
+		nand_get_interface_config(chip);
	struct mtd_info *mtd = nand_to_mtd(chip);
	u8 addrs[4];
	struct nand_op_instr instrs[] = {
		NAND_OP_CMD(NAND_CMD_READ0, 0),
-		NAND_OP_ADDR(3, addrs, PSEC_TO_NSEC(sdr->tWB_max)),
-		NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tR_max),
-				 PSEC_TO_NSEC(sdr->tRR_min)),
+		NAND_OP_ADDR(3, addrs, NAND_COMMON_TIMING_NS(conf, tWB_max)),
+		NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tR_max),
+				 NAND_COMMON_TIMING_NS(conf, tRR_min)),
		NAND_OP_DATA_IN(len, buf, 0),
	};
	struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
			     unsigned int offset_in_page, void *buf,
			     unsigned int len)
{
-	const struct nand_sdr_timings *sdr =
-		nand_get_sdr_timings(nand_get_interface_config(chip));
+	const struct nand_interface_config *conf =
+		nand_get_interface_config(chip);
	u8 addrs[5];
	struct nand_op_instr instrs[] = {
		NAND_OP_CMD(NAND_CMD_READ0, 0),
		NAND_OP_ADDR(4, addrs, 0),
-		NAND_OP_CMD(NAND_CMD_READSTART, PSEC_TO_NSEC(sdr->tWB_max)),
-		NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tR_max),
-				 PSEC_TO_NSEC(sdr->tRR_min)),
+		NAND_OP_CMD(NAND_CMD_READSTART, NAND_COMMON_TIMING_NS(conf, tWB_max)),
+		NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tR_max),
+				 NAND_COMMON_TIMING_NS(conf, tRR_min)),
		NAND_OP_DATA_IN(len, buf, 0),
	};
	struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
		return -EINVAL;

	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_PARAM, 0),
-			NAND_OP_ADDR(1, &page, PSEC_TO_NSEC(sdr->tWB_max)),
-			NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tR_max),
-					 PSEC_TO_NSEC(sdr->tRR_min)),
+			NAND_OP_ADDR(1, &page,
+				     NAND_COMMON_TIMING_NS(conf, tWB_max)),
+			NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tR_max),
+					 NAND_COMMON_TIMING_NS(conf, tRR_min)),
			NAND_OP_8BIT_DATA_IN(len, buf, 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
		return -ENOTSUPP;

	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		u8 addrs[2] = {};
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_RNDOUT, 0),
			NAND_OP_ADDR(2, addrs, 0),
			NAND_OP_CMD(NAND_CMD_RNDOUTSTART,
-				    PSEC_TO_NSEC(sdr->tCCS_min)),
+				    NAND_COMMON_TIMING_NS(conf, tCCS_min)),
			NAND_OP_DATA_IN(len, buf, 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
		      unsigned int offset_in_page, const void *buf,
		      unsigned int len, bool prog)
{
-	const struct nand_sdr_timings *sdr =
-		nand_get_sdr_timings(nand_get_interface_config(chip));
+	const struct nand_interface_config *conf =
+		nand_get_interface_config(chip);
	struct mtd_info *mtd = nand_to_mtd(chip);
	u8 addrs[5] = {};
	struct nand_op_instr instrs[] = {
···
		 */
		NAND_OP_CMD(NAND_CMD_READ0, 0),
		NAND_OP_CMD(NAND_CMD_SEQIN, 0),
-		NAND_OP_ADDR(0, addrs, PSEC_TO_NSEC(sdr->tADL_min)),
+		NAND_OP_ADDR(0, addrs, NAND_COMMON_TIMING_NS(conf, tADL_min)),
		NAND_OP_DATA_OUT(len, buf, 0),
-		NAND_OP_CMD(NAND_CMD_PAGEPROG, PSEC_TO_NSEC(sdr->tWB_max)),
-		NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tPROG_max), 0),
+		NAND_OP_CMD(NAND_CMD_PAGEPROG,
+			    NAND_COMMON_TIMING_NS(conf, tWB_max)),
+		NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tPROG_max), 0),
	};
	struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
	int naddrs = nand_fill_column_cycles(chip, addrs, offset_in_page);
···
	u8 status;

	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_PAGEPROG,
-				    PSEC_TO_NSEC(sdr->tWB_max)),
-			NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tPROG_max), 0),
+				    NAND_COMMON_TIMING_NS(conf, tWB_max)),
+			NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tPROG_max),
+					 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
		return -ENOTSUPP;

	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		u8 addrs[2];
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_RNDIN, 0),
-			NAND_OP_ADDR(2, addrs, PSEC_TO_NSEC(sdr->tCCS_min)),
+			NAND_OP_ADDR(2, addrs, NAND_COMMON_TIMING_NS(conf, tCCS_min)),
			NAND_OP_DATA_OUT(len, buf, 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
		 unsigned int len)
{
	unsigned int i;
-	u8 *id = buf;
+	u8 *id = buf, *ddrbuf = NULL;

	if (len && !buf)
		return -EINVAL;

	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_READID, 0),
-			NAND_OP_ADDR(1, &addr, PSEC_TO_NSEC(sdr->tADL_min)),
+			NAND_OP_ADDR(1, &addr,
+				     NAND_COMMON_TIMING_NS(conf, tADL_min)),
			NAND_OP_8BIT_DATA_IN(len, buf, 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
+		int ret;
+
+		/* READ_ID data bytes are received twice in NV-DDR mode */
+		if (len && nand_interface_is_nvddr(conf)) {
+			ddrbuf = kzalloc(len * 2, GFP_KERNEL);
+			if (!ddrbuf)
+				return -ENOMEM;
+
+			instrs[2].ctx.data.len *= 2;
+			instrs[2].ctx.data.buf.in = ddrbuf;
+		}

		/* Drop the DATA_IN instruction if len is set to 0. */
		if (!len)
			op.ninstrs--;

-		return nand_exec_op(chip, &op);
+		ret = nand_exec_op(chip, &op);
+		if (!ret && len && nand_interface_is_nvddr(conf)) {
+			for (i = 0; i < len; i++)
+				id[i] = ddrbuf[i * 2];
+		}
+
+		kfree(ddrbuf);
+
+		return ret;
	}

	chip->legacy.cmdfunc(chip, NAND_CMD_READID, addr, -1);
···
int nand_status_op(struct nand_chip *chip, u8 *status)
{
	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
+		u8 ddrstatus[2];
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_STATUS,
-				    PSEC_TO_NSEC(sdr->tADL_min)),
+				    NAND_COMMON_TIMING_NS(conf, tADL_min)),
			NAND_OP_8BIT_DATA_IN(1, status, 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
+		int ret;
+
+		/* The status data byte will be received twice in NV-DDR mode */
+		if (status && nand_interface_is_nvddr(conf)) {
+			instrs[1].ctx.data.len *= 2;
+			instrs[1].ctx.data.buf.in = ddrstatus;
+		}

		if (!status)
			op.ninstrs--;

-		return nand_exec_op(chip, &op);
+		ret = nand_exec_op(chip, &op);
+		if (!ret && status && nand_interface_is_nvddr(conf))
+			*status = ddrstatus[0];
+
+		return ret;
	}

	chip->legacy.cmdfunc(chip, NAND_CMD_STATUS, -1, -1);
···
	u8 status;

	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		u8 addrs[3] = { page, page >> 8, page >> 16 };
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_ERASE1, 0),
			NAND_OP_ADDR(2, addrs, 0),
			NAND_OP_CMD(NAND_CMD_ERASE2,
-				    PSEC_TO_MSEC(sdr->tWB_max)),
-			NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tBERS_max), 0),
+				    NAND_COMMON_TIMING_MS(conf, tWB_max)),
+			NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tBERS_max),
+					 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
	int i, ret;

	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_SET_FEATURES, 0),
-			NAND_OP_ADDR(1, &feature, PSEC_TO_NSEC(sdr->tADL_min)),
+			NAND_OP_ADDR(1, &feature, NAND_COMMON_TIMING_NS(conf,
+									tADL_min)),
			NAND_OP_8BIT_DATA_OUT(ONFI_SUBFEATURE_PARAM_LEN, data,
-					      PSEC_TO_NSEC(sdr->tWB_max)),
-			NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tFEAT_max), 0),
+					      NAND_COMMON_TIMING_NS(conf,
+								    tWB_max)),
+			NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tFEAT_max),
+					 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
static int nand_get_features_op(struct nand_chip *chip, u8 feature,
				void *data)
{
-	u8 *params = data;
+	u8 *params = data, ddrbuf[ONFI_SUBFEATURE_PARAM_LEN * 2];
	int i;

	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		struct nand_op_instr instrs[] = {
			NAND_OP_CMD(NAND_CMD_GET_FEATURES, 0),
-			NAND_OP_ADDR(1, &feature, PSEC_TO_NSEC(sdr->tWB_max)),
-			NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tFEAT_max),
-					 PSEC_TO_NSEC(sdr->tRR_min)),
+			NAND_OP_ADDR(1, &feature,
+				     NAND_COMMON_TIMING_NS(conf, tWB_max)),
+			NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tFEAT_max),
+					 NAND_COMMON_TIMING_NS(conf, tRR_min)),
			NAND_OP_8BIT_DATA_IN(ONFI_SUBFEATURE_PARAM_LEN,
					     data, 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
+		int ret;

-		return nand_exec_op(chip, &op);
+		/* GET_FEATURE data bytes are received twice in NV-DDR mode */
+		if (nand_interface_is_nvddr(conf)) {
+			instrs[3].ctx.data.len *= 2;
+			instrs[3].ctx.data.buf.in = ddrbuf;
+		}
+
+		ret = nand_exec_op(chip, &op);
+		if (nand_interface_is_nvddr(conf)) {
+			for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; i++)
+				params[i] = ddrbuf[i * 2];
+		}
+
+		return ret;
	}

	chip->legacy.cmdfunc(chip, NAND_CMD_GET_FEATURES, feature, -1);
···
int nand_reset_op(struct nand_chip *chip)
{
	if (nand_has_exec_op(chip)) {
-		const struct nand_sdr_timings *sdr =
-			nand_get_sdr_timings(nand_get_interface_config(chip));
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		struct nand_op_instr instrs[] = {
-			NAND_OP_CMD(NAND_CMD_RESET, PSEC_TO_NSEC(sdr->tWB_max)),
-			NAND_OP_WAIT_RDY(PSEC_TO_MSEC(sdr->tRST_max), 0),
+			NAND_OP_CMD(NAND_CMD_RESET,
+				    NAND_COMMON_TIMING_NS(conf, tWB_max)),
+			NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tRST_max),
+					 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
···
		return -EINVAL;

	if (nand_has_exec_op(chip)) {
+		const struct nand_interface_config *conf =
+			nand_get_interface_config(chip);
		struct nand_op_instr instrs[] = {
			NAND_OP_DATA_IN(len, buf, 0),
		};
		struct nand_operation op = NAND_OPERATION(chip->cur_cs, instrs);
+		u8 *ddrbuf = NULL;
+		int ret, i;

		instrs[0].ctx.data.force_8bit = force_8bit;

-		if (check_only)
-			return nand_check_op(chip, &op);
+		/*
+		 * Parameter payloads (ID, status, features, etc) do not go
+		 * through the same pipeline as regular data, hence the
+		 * force_8bit flag must be set and this also indicates that in
+		 * case NV-DDR timings are being used the data will be received
+		 * twice.
+		 */
+		if (force_8bit && nand_interface_is_nvddr(conf)) {
+			ddrbuf = kzalloc(len * 2, GFP_KERNEL);
+			if (!ddrbuf)
+				return -ENOMEM;

-		return nand_exec_op(chip, &op);
+			instrs[0].ctx.data.len *= 2;
+			instrs[0].ctx.data.buf.in = ddrbuf;
+		}
+
+		if (check_only) {
+			ret = nand_check_op(chip, &op);
+			kfree(ddrbuf);
+			return ret;
+		}
+
+		ret = nand_exec_op(chip, &op);
+		if (!ret && force_8bit && nand_interface_is_nvddr(conf)) {
+			u8 *dst = buf;
+
+			for (i = 0; i < len; i++)
+				dst[i] = ddrbuf[i * 2];
+		}
+
+		kfree(ddrbuf);
+
+		return ret;
	}

	if (check_only)
···

static void nand_wait_readrdy(struct nand_chip *chip)
{
-	const struct nand_sdr_timings *sdr;
+	const struct nand_interface_config *conf;

	if (!(chip->options & NAND_NEED_READRDY))
		return;

-	sdr = nand_get_sdr_timings(nand_get_interface_config(chip));
-	WARN_ON(nand_wait_rdy_op(chip, PSEC_TO_MSEC(sdr->tR_max), 0));
+	conf = nand_get_interface_config(chip);
+	WARN_ON(nand_wait_rdy_op(chip, NAND_COMMON_TIMING_MS(conf, tR_max), 0));
}

/**
···

	return 0;
}
+
+/**
+ * rawnand_dt_parse_gpio_cs - Parse the gpio-cs property of a controller
+ * @dev: Device that will be parsed. Also used for managed allocations.
+ * @cs_array: Array of GPIO desc pointers allocated on success
+ * @ncs_array: Number of entries in @cs_array updated on success.
+ * @return 0 on success, an error otherwise.
+ */
+int rawnand_dt_parse_gpio_cs(struct device *dev, struct gpio_desc ***cs_array,
+			     unsigned int *ncs_array)
+{
+	struct device_node *np = dev->of_node;
+	struct gpio_desc **descs;
+	int ndescs, i;
+
+	ndescs = of_gpio_named_count(np, "cs-gpios");
+	if (ndescs < 0) {
+		dev_dbg(dev, "No valid cs-gpios property\n");
+		return 0;
+	}
+
+	descs = devm_kcalloc(dev, ndescs, sizeof(*descs), GFP_KERNEL);
+	if (!descs)
+		return -ENOMEM;
+
+	for (i = 0; i < ndescs; i++) {
+		descs[i] = gpiod_get_index_optional(dev, "cs", i,
+						    GPIOD_OUT_HIGH);
+		if (IS_ERR(descs[i]))
+			return PTR_ERR(descs[i]);
+	}
+
+	*ncs_array = ndescs;
+	*cs_array = descs;
+
+	return 0;
+}
+EXPORT_SYMBOL(rawnand_dt_parse_gpio_cs);

static int rawnand_dt_init(struct nand_chip *chip)
{
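Several of the nand_base.c hunks above share one pattern: in NV-DDR mode, 8-bit parameter payloads (READ_ID, STATUS, GET_FEATURES, raw 8-bit reads) arrive with every byte duplicated on the two DDR edges, so the driver reads into a double-sized buffer and keeps only the even-indexed bytes. A minimal standalone sketch of that deduplication (buffer names are illustrative):

```c
#include <assert.h>

/* Keep every other byte: each logical byte of an NV-DDR 8-bit payload
 * is received twice, so the real value sits at even indexes of the
 * double-sized receive buffer, exactly as in the id[i] = ddrbuf[i * 2]
 * loops in the diff. */
static void nvddr_dedup(unsigned char *dst, const unsigned char *ddrbuf,
			unsigned int len)
{
	unsigned int i;

	for (i = 0; i < len; i++)
		dst[i] = ddrbuf[i * 2];
}
```

This is why the diff allocates `len * 2` bytes and doubles `ctx.data.len` before executing the operation, then copies back into the caller's buffer afterwards.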
+1 -1
drivers/mtd/nand/raw/nand_legacy.c
···
	 * Wait tCCS_min if it is correctly defined, otherwise wait 500ns
	 * (which should be safe for all NANDs).
	 */
-	if (nand_controller_can_setup_interface(chip))
+	if (!IS_ERR(sdr) && nand_controller_can_setup_interface(chip))
		ndelay(sdr->tCCS_min / 1000);
	else
		ndelay(500);
+4 -1
drivers/mtd/nand/raw/nand_onfi.c
···
	onfi->tBERS = le16_to_cpu(p->t_bers);
	onfi->tR = le16_to_cpu(p->t_r);
	onfi->tCCS = le16_to_cpu(p->t_ccs);
-	onfi->async_timing_mode = le16_to_cpu(p->async_timing_mode);
+	onfi->fast_tCAD = le16_to_cpu(p->nvddr_nvddr2_features) & BIT(0);
+	onfi->sdr_timing_modes = le16_to_cpu(p->sdr_timing_modes);
+	if (le16_to_cpu(p->features) & ONFI_FEATURE_NV_DDR)
+		onfi->nvddr_timing_modes = le16_to_cpu(p->nvddr_timing_modes);
	onfi->vendor_revision = le16_to_cpu(p->vendor_revision);
	memcpy(onfi->vendor, p->vendor, sizeof(p->vendor));
	chip->parameters.onfi = onfi;
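The `sdr_timing_modes` and `nvddr_timing_modes` fields parsed here are ONFI bitfields where bit N means "timing mode N supported", which is why the nand_base.c hunks pick the best mode with `fls(...) - 1`. A standalone sketch of that lookup (`fls()` reimplemented here so the snippet compiles outside the kernel):

```c
#include <assert.h>

/* Userspace stand-in for the kernel's fls(): index (1-based) of the
 * most significant set bit, 0 if no bit is set. */
static int fls_portable(unsigned int x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Highest timing mode advertised in an ONFI mode bitfield,
 * or -1 when the bitfield is empty. */
static int best_timing_mode(unsigned int timing_modes)
{
	return fls_portable(timing_modes) - 1;
}
```

So a chip advertising `sdr_timing_modes = 0x3f` (modes 0-5) starts the negotiation at mode 5, and the loops in `nand_choose_best_*_timings()` walk downwards from there until the controller acknowledges a mode.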
+360 -10
drivers/mtd/nand/raw/nand_timings.c
···
	},
};

+static const struct nand_interface_config onfi_nvddr_timings[] = {
+	/* Mode 0 */
+	{
+		.type = NAND_NVDDR_IFACE,
+		.timings.mode = 0,
+		.timings.nvddr = {
+			.tCCS_min = 500000,
+			.tR_max = 200000000,
+			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tAC_min = 3000,
+			.tAC_max = 25000,
+			.tADL_min = 400000,
+			.tCAD_min = 45000,
+			.tCAH_min = 10000,
+			.tCALH_min = 10000,
+			.tCALS_min = 10000,
+			.tCAS_min = 10000,
+			.tCEH_min = 20000,
+			.tCH_min = 10000,
+			.tCK_min = 50000,
+			.tCS_min = 35000,
+			.tDH_min = 5000,
+			.tDQSCK_min = 3000,
+			.tDQSCK_max = 25000,
+			.tDQSD_min = 0,
+			.tDQSD_max = 18000,
+			.tDQSHZ_max = 20000,
+			.tDQSQ_max = 5000,
+			.tDS_min = 5000,
+			.tDSC_min = 50000,
+			.tFEAT_max = 1000000,
+			.tITC_max = 1000000,
+			.tQHS_max = 6000,
+			.tRHW_min = 100000,
+			.tRR_min = 20000,
+			.tRST_max = 500000000,
+			.tWB_max = 100000,
+			.tWHR_min = 80000,
+			.tWRCK_min = 20000,
+			.tWW_min = 100000,
+		},
+	},
+	/* Mode 1 */
+	{
+		.type = NAND_NVDDR_IFACE,
+		.timings.mode = 1,
+		.timings.nvddr = {
+			.tCCS_min = 500000,
+			.tR_max = 200000000,
+			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tAC_min = 3000,
+			.tAC_max = 25000,
+			.tADL_min = 400000,
+			.tCAD_min = 45000,
+			.tCAH_min = 5000,
+			.tCALH_min = 5000,
+			.tCALS_min = 5000,
+			.tCAS_min = 5000,
+			.tCEH_min = 20000,
+			.tCH_min = 5000,
+			.tCK_min = 30000,
+			.tCS_min = 25000,
+			.tDH_min = 2500,
+			.tDQSCK_min = 3000,
+			.tDQSCK_max = 25000,
+			.tDQSD_min = 0,
+			.tDQSD_max = 18000,
+			.tDQSHZ_max = 20000,
+			.tDQSQ_max = 2500,
+			.tDS_min = 3000,
+			.tDSC_min = 30000,
+			.tFEAT_max = 1000000,
+			.tITC_max = 1000000,
+			.tQHS_max = 3000,
+			.tRHW_min = 100000,
+			.tRR_min = 20000,
+			.tRST_max = 500000000,
+			.tWB_max = 100000,
+			.tWHR_min = 80000,
+			.tWRCK_min = 20000,
+			.tWW_min = 100000,
+		},
+	},
+	/* Mode 2 */
+	{
+		.type = NAND_NVDDR_IFACE,
+		.timings.mode = 2,
+		.timings.nvddr = {
+			.tCCS_min = 500000,
+			.tR_max = 200000000,
+			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tAC_min = 3000,
+			.tAC_max = 25000,
+			.tADL_min = 400000,
+			.tCAD_min = 45000,
+			.tCAH_min = 4000,
+			.tCALH_min = 4000,
+			.tCALS_min = 4000,
+			.tCAS_min = 4000,
+			.tCEH_min = 20000,
+			.tCH_min = 4000,
+			.tCK_min = 20000,
+			.tCS_min = 15000,
+			.tDH_min = 1700,
+			.tDQSCK_min = 3000,
+			.tDQSCK_max = 25000,
+			.tDQSD_min = 0,
+			.tDQSD_max = 18000,
+			.tDQSHZ_max = 20000,
+			.tDQSQ_max = 1700,
+			.tDS_min = 2000,
+			.tDSC_min = 20000,
+			.tFEAT_max = 1000000,
+			.tITC_max = 1000000,
+			.tQHS_max = 2000,
+			.tRHW_min = 100000,
+			.tRR_min = 20000,
+			.tRST_max = 500000000,
+			.tWB_max = 100000,
+			.tWHR_min = 80000,
+			.tWRCK_min = 20000,
+			.tWW_min = 100000,
+		},
+	},
+	/* Mode 3 */
+	{
+		.type = NAND_NVDDR_IFACE,
+		.timings.mode = 3,
+		.timings.nvddr = {
+			.tCCS_min = 500000,
+			.tR_max = 200000000,
+			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tAC_min = 3000,
+			.tAC_max = 25000,
+			.tADL_min = 400000,
+			.tCAD_min = 45000,
+			.tCAH_min = 3000,
+			.tCALH_min = 3000,
+			.tCALS_min = 3000,
+			.tCAS_min = 3000,
+			.tCEH_min = 20000,
+			.tCH_min = 3000,
+			.tCK_min = 15000,
+			.tCS_min = 15000,
+			.tDH_min = 1300,
+			.tDQSCK_min = 3000,
+			.tDQSCK_max = 25000,
+			.tDQSD_min = 0,
+			.tDQSD_max = 18000,
+			.tDQSHZ_max = 20000,
+			.tDQSQ_max = 1300,
+			.tDS_min = 1500,
+			.tDSC_min = 15000,
+			.tFEAT_max = 1000000,
+			.tITC_max = 1000000,
+			.tQHS_max = 1500,
+			.tRHW_min = 100000,
+			.tRR_min = 20000,
+			.tRST_max = 500000000,
+			.tWB_max = 100000,
+			.tWHR_min = 80000,
+			.tWRCK_min = 20000,
+			.tWW_min = 100000,
+		},
+	},
+	/* Mode 4 */
+	{
+		.type = NAND_NVDDR_IFACE,
+		.timings.mode = 4,
+		.timings.nvddr = {
+			.tCCS_min = 500000,
+			.tR_max = 200000000,
+			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tAC_min = 3000,
+			.tAC_max = 25000,
+			.tADL_min = 400000,
+			.tCAD_min = 45000,
+			.tCAH_min = 2500,
+			.tCALH_min = 2500,
+			.tCALS_min = 2500,
+			.tCAS_min = 2500,
+			.tCEH_min = 20000,
+			.tCH_min = 2500,
+			.tCK_min = 12000,
+			.tCS_min = 15000,
+			.tDH_min = 1100,
+			.tDQSCK_min = 3000,
+			.tDQSCK_max = 25000,
+			.tDQSD_min = 0,
+			.tDQSD_max = 18000,
+			.tDQSHZ_max = 20000,
+			.tDQSQ_max = 1000,
+			.tDS_min = 1100,
+			.tDSC_min = 12000,
+			.tFEAT_max = 1000000,
+			.tITC_max = 1000000,
+			.tQHS_max = 1200,
+			.tRHW_min = 100000,
+			.tRR_min = 20000,
+			.tRST_max = 500000000,
+			.tWB_max = 100000,
+			.tWHR_min = 80000,
+			.tWRCK_min = 20000,
+			.tWW_min = 100000,
+		},
+	},
+	/* Mode 5 */
+	{
+		.type = NAND_NVDDR_IFACE,
+		.timings.mode = 5,
+		.timings.nvddr = {
+			.tCCS_min = 500000,
+			.tR_max = 200000000,
+			.tPROG_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tBERS_max = 1000000ULL * ONFI_DYN_TIMING_MAX,
+			.tAC_min = 3000,
+			.tAC_max = 25000,
+			.tADL_min = 400000,
+			.tCAD_min = 45000,
+			.tCAH_min = 2000,
+			.tCALH_min = 2000,
+			.tCALS_min = 2000,
+			.tCAS_min = 2000,
+			.tCEH_min = 20000,
+			.tCH_min = 2000,
+			.tCK_min = 10000,
+			.tCS_min = 15000,
.tDH_min = 900, 528 + .tDQSCK_min = 3000, 529 + .tDQSCK_max = 25000, 530 + .tDQSD_min = 0, 531 + .tDQSD_max = 18000, 532 + .tDQSHZ_max = 20000, 533 + .tDQSQ_max = 850, 534 + .tDS_min = 900, 535 + .tDSC_min = 10000, 536 + .tFEAT_max = 1000000, 537 + .tITC_max = 1000000, 538 + .tQHS_max = 1000, 539 + .tRHW_min = 100000, 540 + .tRR_min = 20000, 541 + .tRST_max = 500000000, 542 + .tWB_max = 100000, 543 + .tWHR_min = 80000, 544 + .tWRCK_min = 20000, 545 + .tWW_min = 100000, 546 + }, 547 + }, 548 + }; 549 + 295 550 /* All NAND chips share the same reset data interface: SDR mode 0 */ 296 551 const struct nand_interface_config *nand_get_reset_interface_config(void) 297 552 { ··· 601 346 } 602 347 603 348 /** 604 - * onfi_fill_interface_config - Initialize an interface config from a given 605 - * ONFI mode 349 + * onfi_find_closest_nvddr_mode - Derive the closest ONFI NVDDR timing mode 350 + * given a set of timings 351 + * @spec_timings: the timings to challenge 352 + */ 353 + unsigned int 354 + onfi_find_closest_nvddr_mode(const struct nand_nvddr_timings *spec_timings) 355 + { 356 + const struct nand_nvddr_timings *onfi_timings; 357 + int mode; 358 + 359 + for (mode = ARRAY_SIZE(onfi_nvddr_timings) - 1; mode > 0; mode--) { 360 + onfi_timings = &onfi_nvddr_timings[mode].timings.nvddr; 361 + 362 + if (spec_timings->tCCS_min <= onfi_timings->tCCS_min && 363 + spec_timings->tAC_min <= onfi_timings->tAC_min && 364 + spec_timings->tADL_min <= onfi_timings->tADL_min && 365 + spec_timings->tCAD_min <= onfi_timings->tCAD_min && 366 + spec_timings->tCAH_min <= onfi_timings->tCAH_min && 367 + spec_timings->tCALH_min <= onfi_timings->tCALH_min && 368 + spec_timings->tCALS_min <= onfi_timings->tCALS_min && 369 + spec_timings->tCAS_min <= onfi_timings->tCAS_min && 370 + spec_timings->tCEH_min <= onfi_timings->tCEH_min && 371 + spec_timings->tCH_min <= onfi_timings->tCH_min && 372 + spec_timings->tCK_min <= onfi_timings->tCK_min && 373 + spec_timings->tCS_min <= onfi_timings->tCS_min && 
374 + spec_timings->tDH_min <= onfi_timings->tDH_min && 375 + spec_timings->tDQSCK_min <= onfi_timings->tDQSCK_min && 376 + spec_timings->tDQSD_min <= onfi_timings->tDQSD_min && 377 + spec_timings->tDS_min <= onfi_timings->tDS_min && 378 + spec_timings->tDSC_min <= onfi_timings->tDSC_min && 379 + spec_timings->tRHW_min <= onfi_timings->tRHW_min && 380 + spec_timings->tRR_min <= onfi_timings->tRR_min && 381 + spec_timings->tWHR_min <= onfi_timings->tWHR_min && 382 + spec_timings->tWRCK_min <= onfi_timings->tWRCK_min && 383 + spec_timings->tWW_min <= onfi_timings->tWW_min) 384 + return mode; 385 + } 386 + 387 + return 0; 388 + } 389 + 390 + /* 391 + * onfi_fill_sdr_interface_config - Initialize a SDR interface config from a 392 + * given ONFI mode 606 393 * @chip: The NAND chip 607 394 * @iface: The interface configuration to fill 608 - * @type: The interface type 609 395 * @timing_mode: The ONFI timing mode 610 396 */ 611 - void onfi_fill_interface_config(struct nand_chip *chip, 612 - struct nand_interface_config *iface, 613 - enum nand_interface_type type, 614 - unsigned int timing_mode) 397 + static void onfi_fill_sdr_interface_config(struct nand_chip *chip, 398 + struct nand_interface_config *iface, 399 + unsigned int timing_mode) 615 400 { 616 401 struct onfi_params *onfi = chip->parameters.onfi; 617 - 618 - if (WARN_ON(type != NAND_SDR_IFACE)) 619 - return; 620 402 621 403 if (WARN_ON(timing_mode >= ARRAY_SIZE(onfi_sdr_timings))) 622 404 return; ··· 676 384 /* nanoseconds -> picoseconds */ 677 385 timings->tCCS_min = 1000UL * onfi->tCCS; 678 386 } 387 + } 388 + 389 + /** 390 + * onfi_fill_nvddr_interface_config - Initialize a NVDDR interface config from a 391 + * given ONFI mode 392 + * @chip: The NAND chip 393 + * @iface: The interface configuration to fill 394 + * @timing_mode: The ONFI timing mode 395 + */ 396 + static void onfi_fill_nvddr_interface_config(struct nand_chip *chip, 397 + struct nand_interface_config *iface, 398 + unsigned int timing_mode) 399 
+ { 400 + struct onfi_params *onfi = chip->parameters.onfi; 401 + 402 + if (WARN_ON(timing_mode >= ARRAY_SIZE(onfi_nvddr_timings))) 403 + return; 404 + 405 + *iface = onfi_nvddr_timings[timing_mode]; 406 + 407 + /* 408 + * Initialize timings that cannot be deduced from timing mode: 409 + * tPROG, tBERS, tR, tCCS and tCAD. 410 + * This information is part of the ONFI parameter page. 411 + */ 412 + if (onfi) { 413 + struct nand_nvddr_timings *timings = &iface->timings.nvddr; 414 + 415 + /* microseconds -> picoseconds */ 416 + timings->tPROG_max = 1000000ULL * onfi->tPROG; 417 + timings->tBERS_max = 1000000ULL * onfi->tBERS; 418 + timings->tR_max = 1000000ULL * onfi->tR; 419 + 420 + /* nanoseconds -> picoseconds */ 421 + timings->tCCS_min = 1000UL * onfi->tCCS; 422 + 423 + if (onfi->fast_tCAD) 424 + timings->tCAD_min = 25000; 425 + } 426 + } 427 + 428 + /** 429 + * onfi_fill_interface_config - Initialize an interface config from a given 430 + * ONFI mode 431 + * @chip: The NAND chip 432 + * @iface: The interface configuration to fill 433 + * @type: The interface type 434 + * @timing_mode: The ONFI timing mode 435 + */ 436 + void onfi_fill_interface_config(struct nand_chip *chip, 437 + struct nand_interface_config *iface, 438 + enum nand_interface_type type, 439 + unsigned int timing_mode) 440 + { 441 + if (type == NAND_SDR_IFACE) 442 + return onfi_fill_sdr_interface_config(chip, iface, timing_mode); 443 + else 444 + return onfi_fill_nvddr_interface_config(chip, iface, timing_mode); 679 445 }
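The new onfi_find_closest_nvddr_mode() helper above walks the NV-DDR modes from fastest to slowest and returns the first one whose minimum timings are all satisfied by the chip's datasheet values, falling back to mode 0. A minimal standalone sketch of that selection idea, using a hypothetical reduced timing struct with only three fields (the kernel's struct nand_nvddr_timings carries roughly thirty) and the tCK/tDS/tDH minimums of ONFI NV-DDR modes 0, 1 and 2 from the table above:

```c
#include <assert.h>

/*
 * Reduced, hypothetical timing set in picoseconds -- a stand-in for the
 * kernel's much larger struct nand_nvddr_timings.
 */
struct nvddr_timings {
	unsigned long tCK_min;
	unsigned long tDS_min;
	unsigned long tDH_min;
};

/* Minimums for ONFI NV-DDR modes 0, 1 and 2, taken from the table above. */
static const struct nvddr_timings onfi_modes[] = {
	{ .tCK_min = 50000, .tDS_min = 5000, .tDH_min = 5000 }, /* mode 0 */
	{ .tCK_min = 30000, .tDS_min = 3000, .tDH_min = 2500 }, /* mode 1 */
	{ .tCK_min = 20000, .tDS_min = 2000, .tDH_min = 1700 }, /* mode 2 */
};

/*
 * Walk the modes from fastest to slowest and return the first one whose
 * minimum timings are all met by the chip's own minimums; mode 0 is the
 * fallback and is never explicitly checked, mirroring the loop above.
 */
unsigned int find_closest_mode(const struct nvddr_timings *spec)
{
	int mode;

	for (mode = 2; mode > 0; mode--) {
		const struct nvddr_timings *onfi = &onfi_modes[mode];

		if (spec->tCK_min <= onfi->tCK_min &&
		    spec->tDS_min <= onfi->tDS_min &&
		    spec->tDH_min <= onfi->tDH_min)
			return mode;
	}

	return 0;
}
```

A chip whose minimums exactly match mode 1 gets mode 1; a chip too slow for mode 1 falls back to mode 0.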
+148 -95
drivers/mtd/nand/raw/omap2.c
··· 131 131 #define BCH_ECC_SIZE0 0x0 /* ecc_size0 = 0, no oob protection */ 132 132 #define BCH_ECC_SIZE1 0x20 /* ecc_size1 = 32 */ 133 133 134 - #define BADBLOCK_MARKER_LENGTH 2 134 + #define BBM_LEN 2 135 135 136 136 static u_char bch16_vector[] = {0xf5, 0x24, 0x1c, 0xd0, 0x61, 0xb3, 0xf1, 0x55, 137 137 0x2e, 0x2c, 0x86, 0xa3, 0xed, 0x36, 0x1b, 0x78, ··· 171 171 struct device *elm_dev; 172 172 /* NAND ready gpio */ 173 173 struct gpio_desc *ready_gpiod; 174 + unsigned int neccpg; 175 + unsigned int nsteps_per_eccpg; 176 + unsigned int eccpg_size; 177 + unsigned int eccpg_bytes; 174 178 }; 175 179 176 180 static inline struct omap_nand_info *mtd_to_omap(struct mtd_info *mtd) ··· 1359 1355 { 1360 1356 struct omap_nand_info *info = mtd_to_omap(nand_to_mtd(chip)); 1361 1357 struct nand_ecc_ctrl *ecc = &info->nand.ecc; 1362 - int eccsteps = info->nand.ecc.steps; 1358 + int eccsteps = info->nsteps_per_eccpg; 1363 1359 int i , j, stat = 0; 1364 1360 int eccflag, actual_eccbytes; 1365 1361 struct elm_errorvec err_vec[ERROR_VECTOR_MAX]; ··· 1529 1525 int oob_required, int page) 1530 1526 { 1531 1527 struct mtd_info *mtd = nand_to_mtd(chip); 1532 - int ret; 1528 + struct omap_nand_info *info = mtd_to_omap(mtd); 1533 1529 uint8_t *ecc_calc = chip->ecc.calc_buf; 1530 + unsigned int eccpg; 1531 + int ret; 1534 1532 1535 - nand_prog_page_begin_op(chip, page, 0, NULL, 0); 1536 - 1537 - /* Enable GPMC ecc engine */ 1538 - chip->ecc.hwctl(chip, NAND_ECC_WRITE); 1539 - 1540 - /* Write data */ 1541 - chip->legacy.write_buf(chip, buf, mtd->writesize); 1542 - 1543 - /* Update ecc vector from GPMC result registers */ 1544 - omap_calculate_ecc_bch_multi(mtd, buf, &ecc_calc[0]); 1545 - 1546 - ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0, 1547 - chip->ecc.total); 1533 + ret = nand_prog_page_begin_op(chip, page, 0, NULL, 0); 1548 1534 if (ret) 1549 1535 return ret; 1536 + 1537 + for (eccpg = 0; eccpg < info->neccpg; eccpg++) { 1538 + /* Enable GPMC ecc engine */ 1539 
+ chip->ecc.hwctl(chip, NAND_ECC_WRITE); 1540 + 1541 + /* Write data */ 1542 + chip->legacy.write_buf(chip, buf + (eccpg * info->eccpg_size), 1543 + info->eccpg_size); 1544 + 1545 + /* Update ecc vector from GPMC result registers */ 1546 + ret = omap_calculate_ecc_bch_multi(mtd, 1547 + buf + (eccpg * info->eccpg_size), 1548 + ecc_calc); 1549 + if (ret) 1550 + return ret; 1551 + 1552 + ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, 1553 + chip->oob_poi, 1554 + eccpg * info->eccpg_bytes, 1555 + info->eccpg_bytes); 1556 + if (ret) 1557 + return ret; 1558 + } 1550 1559 1551 1560 /* Write ecc vector to OOB area */ 1552 1561 chip->legacy.write_buf(chip, chip->oob_poi, mtd->oobsize); ··· 1583 1566 int oob_required, int page) 1584 1567 { 1585 1568 struct mtd_info *mtd = nand_to_mtd(chip); 1569 + struct omap_nand_info *info = mtd_to_omap(mtd); 1586 1570 u8 *ecc_calc = chip->ecc.calc_buf; 1587 1571 int ecc_size = chip->ecc.size; 1588 1572 int ecc_bytes = chip->ecc.bytes; 1589 - int ecc_steps = chip->ecc.steps; 1590 1573 u32 start_step = offset / ecc_size; 1591 1574 u32 end_step = (offset + data_len - 1) / ecc_size; 1575 + unsigned int eccpg; 1592 1576 int step, ret = 0; 1593 1577 1594 1578 /* ··· 1598 1580 * ECC is calculated for all subpages but we choose 1599 1581 * only what we want. 
1600 1582 */ 1601 - nand_prog_page_begin_op(chip, page, 0, NULL, 0); 1602 - 1603 - /* Enable GPMC ECC engine */ 1604 - chip->ecc.hwctl(chip, NAND_ECC_WRITE); 1605 - 1606 - /* Write data */ 1607 - chip->legacy.write_buf(chip, buf, mtd->writesize); 1608 - 1609 - for (step = 0; step < ecc_steps; step++) { 1610 - /* mask ECC of un-touched subpages by padding 0xFF */ 1611 - if (step < start_step || step > end_step) 1612 - memset(ecc_calc, 0xff, ecc_bytes); 1613 - else 1614 - ret = _omap_calculate_ecc_bch(mtd, buf, ecc_calc, step); 1615 - 1616 - if (ret) 1617 - return ret; 1618 - 1619 - buf += ecc_size; 1620 - ecc_calc += ecc_bytes; 1621 - } 1622 - 1623 - /* copy calculated ECC for whole page to chip->buffer->oob */ 1624 - /* this include masked-value(0xFF) for unwritten subpages */ 1625 - ecc_calc = chip->ecc.calc_buf; 1626 - ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0, 1627 - chip->ecc.total); 1583 + ret = nand_prog_page_begin_op(chip, page, 0, NULL, 0); 1628 1584 if (ret) 1629 1585 return ret; 1586 + 1587 + for (eccpg = 0; eccpg < info->neccpg; eccpg++) { 1588 + /* Enable GPMC ECC engine */ 1589 + chip->ecc.hwctl(chip, NAND_ECC_WRITE); 1590 + 1591 + /* Write data */ 1592 + chip->legacy.write_buf(chip, buf + (eccpg * info->eccpg_size), 1593 + info->eccpg_size); 1594 + 1595 + for (step = 0; step < info->nsteps_per_eccpg; step++) { 1596 + unsigned int base_step = eccpg * info->nsteps_per_eccpg; 1597 + const u8 *bufoffs = buf + (eccpg * info->eccpg_size); 1598 + 1599 + /* Mask ECC of un-touched subpages with 0xFFs */ 1600 + if ((step + base_step) < start_step || 1601 + (step + base_step) > end_step) 1602 + memset(ecc_calc + (step * ecc_bytes), 0xff, 1603 + ecc_bytes); 1604 + else 1605 + ret = _omap_calculate_ecc_bch(mtd, 1606 + bufoffs + (step * ecc_size), 1607 + ecc_calc + (step * ecc_bytes), 1608 + step); 1609 + 1610 + if (ret) 1611 + return ret; 1612 + } 1613 + 1614 + /* 1615 + * Copy the calculated ECC for the whole page including the 1616 + * 
masked values (0xFF) corresponding to unwritten subpages. 1617 + */ 1618 + ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 1619 + eccpg * info->eccpg_bytes, 1620 + info->eccpg_bytes); 1621 + if (ret) 1622 + return ret; 1623 + } 1630 1624 1631 1625 /* write OOB buffer to NAND device */ 1632 1626 chip->legacy.write_buf(chip, chip->oob_poi, mtd->oobsize); ··· 1664 1634 int oob_required, int page) 1665 1635 { 1666 1636 struct mtd_info *mtd = nand_to_mtd(chip); 1637 + struct omap_nand_info *info = mtd_to_omap(mtd); 1667 1638 uint8_t *ecc_calc = chip->ecc.calc_buf; 1668 1639 uint8_t *ecc_code = chip->ecc.code_buf; 1640 + unsigned int max_bitflips = 0, eccpg; 1669 1641 int stat, ret; 1670 - unsigned int max_bitflips = 0; 1671 1642 1672 - nand_read_page_op(chip, page, 0, NULL, 0); 1673 - 1674 - /* Enable GPMC ecc engine */ 1675 - chip->ecc.hwctl(chip, NAND_ECC_READ); 1676 - 1677 - /* Read data */ 1678 - chip->legacy.read_buf(chip, buf, mtd->writesize); 1679 - 1680 - /* Read oob bytes */ 1681 - nand_change_read_column_op(chip, 1682 - mtd->writesize + BADBLOCK_MARKER_LENGTH, 1683 - chip->oob_poi + BADBLOCK_MARKER_LENGTH, 1684 - chip->ecc.total, false); 1685 - 1686 - /* Calculate ecc bytes */ 1687 - omap_calculate_ecc_bch_multi(mtd, buf, ecc_calc); 1688 - 1689 - ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0, 1690 - chip->ecc.total); 1643 + ret = nand_read_page_op(chip, page, 0, NULL, 0); 1691 1644 if (ret) 1692 1645 return ret; 1693 1646 1694 - stat = chip->ecc.correct(chip, buf, ecc_code, ecc_calc); 1647 + for (eccpg = 0; eccpg < info->neccpg; eccpg++) { 1648 + /* Enable GPMC ecc engine */ 1649 + chip->ecc.hwctl(chip, NAND_ECC_READ); 1695 1650 1696 - if (stat < 0) { 1697 - mtd->ecc_stats.failed++; 1698 - } else { 1699 - mtd->ecc_stats.corrected += stat; 1700 - max_bitflips = max_t(unsigned int, max_bitflips, stat); 1651 + /* Read data */ 1652 + ret = nand_change_read_column_op(chip, eccpg * info->eccpg_size, 1653 + buf + (eccpg * 
info->eccpg_size), 1654 + info->eccpg_size, false); 1655 + if (ret) 1656 + return ret; 1657 + 1658 + /* Read oob bytes */ 1659 + ret = nand_change_read_column_op(chip, 1660 + mtd->writesize + BBM_LEN + 1661 + (eccpg * info->eccpg_bytes), 1662 + chip->oob_poi + BBM_LEN + 1663 + (eccpg * info->eccpg_bytes), 1664 + info->eccpg_bytes, false); 1665 + if (ret) 1666 + return ret; 1667 + 1668 + /* Calculate ecc bytes */ 1669 + ret = omap_calculate_ecc_bch_multi(mtd, 1670 + buf + (eccpg * info->eccpg_size), 1671 + ecc_calc); 1672 + if (ret) 1673 + return ret; 1674 + 1675 + ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, 1676 + chip->oob_poi, 1677 + eccpg * info->eccpg_bytes, 1678 + info->eccpg_bytes); 1679 + if (ret) 1680 + return ret; 1681 + 1682 + stat = chip->ecc.correct(chip, 1683 + buf + (eccpg * info->eccpg_size), 1684 + ecc_code, ecc_calc); 1685 + if (stat < 0) { 1686 + mtd->ecc_stats.failed++; 1687 + } else { 1688 + mtd->ecc_stats.corrected += stat; 1689 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 1690 + } 1701 1691 } 1702 1692 1703 1693 return max_bitflips; ··· 1870 1820 { 1871 1821 struct omap_nand_info *info = mtd_to_omap(mtd); 1872 1822 struct nand_chip *chip = &info->nand; 1873 - int off = BADBLOCK_MARKER_LENGTH; 1823 + int off = BBM_LEN; 1874 1824 1875 1825 if (info->ecc_opt == OMAP_ECC_HAM1_CODE_HW && 1876 1826 !(chip->options & NAND_BUSWIDTH_16)) ··· 1890 1840 { 1891 1841 struct omap_nand_info *info = mtd_to_omap(mtd); 1892 1842 struct nand_chip *chip = &info->nand; 1893 - int off = BADBLOCK_MARKER_LENGTH; 1843 + int off = BBM_LEN; 1894 1844 1895 1845 if (info->ecc_opt == OMAP_ECC_HAM1_CODE_HW && 1896 1846 !(chip->options & NAND_BUSWIDTH_16)) ··· 1920 1870 struct nand_device *nand = mtd_to_nanddev(mtd); 1921 1871 unsigned int nsteps = nanddev_get_ecc_nsteps(nand); 1922 1872 unsigned int ecc_bytes = nanddev_get_ecc_bytes_per_step(nand); 1923 - int off = BADBLOCK_MARKER_LENGTH; 1873 + int off = BBM_LEN; 1924 1874 1925 1875 if (section >= nsteps) 
1926 1876 return -ERANGE; ··· 1941 1891 struct nand_device *nand = mtd_to_nanddev(mtd); 1942 1892 unsigned int nsteps = nanddev_get_ecc_nsteps(nand); 1943 1893 unsigned int ecc_bytes = nanddev_get_ecc_bytes_per_step(nand); 1944 - int off = BADBLOCK_MARKER_LENGTH; 1894 + int off = BBM_LEN; 1945 1895 1946 1896 if (section) 1947 1897 return -ERANGE; ··· 1970 1920 struct mtd_info *mtd = nand_to_mtd(chip); 1971 1921 struct omap_nand_info *info = mtd_to_omap(mtd); 1972 1922 struct device *dev = &info->pdev->dev; 1973 - int min_oobbytes = BADBLOCK_MARKER_LENGTH; 1923 + int min_oobbytes = BBM_LEN; 1924 + int elm_bch_strength = -1; 1974 1925 int oobbytes_per_step; 1975 1926 dma_cap_mask_t mask; 1976 1927 int err; ··· 2125 2074 chip->ecc.write_subpage = omap_write_subpage_bch; 2126 2075 mtd_set_ooblayout(mtd, &omap_ooblayout_ops); 2127 2076 oobbytes_per_step = chip->ecc.bytes; 2128 - 2129 - err = elm_config(info->elm_dev, BCH4_ECC, 2130 - mtd->writesize / chip->ecc.size, 2131 - chip->ecc.size, chip->ecc.bytes); 2132 - if (err < 0) 2133 - return err; 2077 + elm_bch_strength = BCH4_ECC; 2134 2078 break; 2135 2079 2136 2080 case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW: ··· 2162 2116 chip->ecc.write_subpage = omap_write_subpage_bch; 2163 2117 mtd_set_ooblayout(mtd, &omap_ooblayout_ops); 2164 2118 oobbytes_per_step = chip->ecc.bytes; 2165 - 2166 - err = elm_config(info->elm_dev, BCH8_ECC, 2167 - mtd->writesize / chip->ecc.size, 2168 - chip->ecc.size, chip->ecc.bytes); 2169 - if (err < 0) 2170 - return err; 2171 - 2119 + elm_bch_strength = BCH8_ECC; 2172 2120 break; 2173 2121 2174 2122 case OMAP_ECC_BCH16_CODE_HW: ··· 2178 2138 chip->ecc.write_subpage = omap_write_subpage_bch; 2179 2139 mtd_set_ooblayout(mtd, &omap_ooblayout_ops); 2180 2140 oobbytes_per_step = chip->ecc.bytes; 2181 - 2182 - err = elm_config(info->elm_dev, BCH16_ECC, 2183 - mtd->writesize / chip->ecc.size, 2184 - chip->ecc.size, chip->ecc.bytes); 2185 - if (err < 0) 2186 - return err; 2187 - 2141 + elm_bch_strength = 
BCH16_ECC; 2188 2142 break; 2189 2143 default: 2190 2144 dev_err(dev, "Invalid or unsupported ECC scheme\n"); 2191 2145 return -EINVAL; 2146 + } 2147 + 2148 + if (elm_bch_strength >= 0) { 2149 + chip->ecc.steps = mtd->writesize / chip->ecc.size; 2150 + info->neccpg = chip->ecc.steps / ERROR_VECTOR_MAX; 2151 + if (info->neccpg) { 2152 + info->nsteps_per_eccpg = ERROR_VECTOR_MAX; 2153 + } else { 2154 + info->neccpg = 1; 2155 + info->nsteps_per_eccpg = chip->ecc.steps; 2156 + } 2157 + info->eccpg_size = info->nsteps_per_eccpg * chip->ecc.size; 2158 + info->eccpg_bytes = info->nsteps_per_eccpg * chip->ecc.bytes; 2159 + 2160 + err = elm_config(info->elm_dev, elm_bch_strength, 2161 + info->nsteps_per_eccpg, chip->ecc.size, 2162 + chip->ecc.bytes); 2163 + if (err < 0) 2164 + return err; 2192 2165 } 2193 2166 2194 2167 /* Check if NAND device's OOB is enough to store ECC signatures */
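The omap2.c probe change above splits a page's ECC steps into "ECC page" groups so that each ELM pass handles at most ERROR_VECTOR_MAX (8) error vectors. A small sketch of just that grouping computation, assuming (as the relaxed elm_config() check enforces) that the step count is either at most 8 or a multiple of 8; the struct and function names here are illustrative, not the kernel's:

```c
#include <assert.h>

#define ERROR_VECTOR_MAX 8 /* the ELM processes at most 8 syndromes at once */

struct eccpg_layout {
	unsigned int neccpg;           /* number of ECC page groups */
	unsigned int nsteps_per_eccpg; /* ECC steps handled per group */
};

/* Same grouping logic as the omap_nand_attach_chip() change above. */
struct eccpg_layout split_ecc_steps(unsigned int ecc_steps)
{
	struct eccpg_layout l;

	l.neccpg = ecc_steps / ERROR_VECTOR_MAX;
	if (l.neccpg) {
		l.nsteps_per_eccpg = ERROR_VECTOR_MAX;
	} else {
		l.neccpg = 1;
		l.nsteps_per_eccpg = ecc_steps;
	}

	return l;
}
```

For example, a 4 KiB page with 512-byte ECC steps has 8 steps and fits in a single group, while an 8 KiB page has 16 steps and is processed as two groups, each one read or written and ELM-corrected in its own loop iteration of the reworked page accessors above.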
+1 -1
drivers/mtd/nand/raw/omap_elm.c
··· 116 116 return -EINVAL; 117 117 } 118 118 /* ELM supports processing 8 error syndromes at a time */ 119 - if (ecc_steps > ERROR_VECTOR_MAX) { 119 + if (ecc_steps > ERROR_VECTOR_MAX && ecc_steps % ERROR_VECTOR_MAX) { 120 120 dev_err(dev, "unsupported config ecc-step=%d\n", ecc_steps); 121 121 return -EINVAL; 122 122 }
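The one-line omap_elm.c change above relaxes the old "at most 8 steps" limit: larger step counts are now accepted as long as they split evenly into groups of 8, matching the ECC page grouping done in omap2.c. The accepted set can be expressed as a tiny predicate (a sketch with a hypothetical helper name, mirroring the condition above):

```c
#include <assert.h>
#include <stdbool.h>

#define ERROR_VECTOR_MAX 8

/*
 * Equivalent of the relaxed elm_config() check: step counts beyond 8 are
 * accepted only when they divide evenly into groups of ERROR_VECTOR_MAX.
 */
bool elm_steps_supported(int ecc_steps)
{
	return ecc_steps <= ERROR_VECTOR_MAX ||
	       !(ecc_steps % ERROR_VECTOR_MAX);
}
```

So 16 steps (an 8 KiB page with 512-byte steps) now passes, while an uneven count such as 12 is still rejected.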
+1194
drivers/mtd/nand/raw/pl35x-nand-controller.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * ARM PL35X NAND flash controller driver 4 + * 5 + * Copyright (C) 2017 Xilinx, Inc 6 + * Author: 7 + * Miquel Raynal <miquel.raynal@bootlin.com> 8 + * Original work (rewritten): 9 + * Punnaiah Choudary Kalluri <punnaia@xilinx.com> 10 + * Naga Sureshkumar Relli <nagasure@xilinx.com> 11 + */ 12 + 13 + #include <linux/amba/bus.h> 14 + #include <linux/err.h> 15 + #include <linux/delay.h> 16 + #include <linux/interrupt.h> 17 + #include <linux/io.h> 18 + #include <linux/ioport.h> 19 + #include <linux/iopoll.h> 20 + #include <linux/irq.h> 21 + #include <linux/module.h> 22 + #include <linux/moduleparam.h> 23 + #include <linux/mtd/mtd.h> 24 + #include <linux/mtd/rawnand.h> 25 + #include <linux/mtd/partitions.h> 26 + #include <linux/of_address.h> 27 + #include <linux/of_device.h> 28 + #include <linux/of_platform.h> 29 + #include <linux/platform_device.h> 30 + #include <linux/slab.h> 31 + #include <linux/clk.h> 32 + 33 + #define PL35X_NANDC_DRIVER_NAME "pl35x-nand-controller" 34 + 35 + /* SMC controller status register (RO) */ 36 + #define PL35X_SMC_MEMC_STATUS 0x0 37 + #define PL35X_SMC_MEMC_STATUS_RAW_INT_STATUS1 BIT(6) 38 + /* SMC clear config register (WO) */ 39 + #define PL35X_SMC_MEMC_CFG_CLR 0xC 40 + #define PL35X_SMC_MEMC_CFG_CLR_INT_DIS_1 BIT(1) 41 + #define PL35X_SMC_MEMC_CFG_CLR_INT_CLR_1 BIT(4) 42 + #define PL35X_SMC_MEMC_CFG_CLR_ECC_INT_DIS_1 BIT(6) 43 + /* SMC direct command register (WO) */ 44 + #define PL35X_SMC_DIRECT_CMD 0x10 45 + #define PL35X_SMC_DIRECT_CMD_NAND_CS (0x4 << 23) 46 + #define PL35X_SMC_DIRECT_CMD_UPD_REGS (0x2 << 21) 47 + /* SMC set cycles register (WO) */ 48 + #define PL35X_SMC_CYCLES 0x14 49 + #define PL35X_SMC_NAND_TRC_CYCLES(x) ((x) << 0) 50 + #define PL35X_SMC_NAND_TWC_CYCLES(x) ((x) << 4) 51 + #define PL35X_SMC_NAND_TREA_CYCLES(x) ((x) << 8) 52 + #define PL35X_SMC_NAND_TWP_CYCLES(x) ((x) << 11) 53 + #define PL35X_SMC_NAND_TCLR_CYCLES(x) ((x) << 14) 54 + #define 
PL35X_SMC_NAND_TAR_CYCLES(x) ((x) << 17) 55 + #define PL35X_SMC_NAND_TRR_CYCLES(x) ((x) << 20) 56 + /* SMC set opmode register (WO) */ 57 + #define PL35X_SMC_OPMODE 0x18 58 + #define PL35X_SMC_OPMODE_BW_8 0 59 + #define PL35X_SMC_OPMODE_BW_16 1 60 + /* SMC ECC status register (RO) */ 61 + #define PL35X_SMC_ECC_STATUS 0x400 62 + #define PL35X_SMC_ECC_STATUS_ECC_BUSY BIT(6) 63 + /* SMC ECC configuration register */ 64 + #define PL35X_SMC_ECC_CFG 0x404 65 + #define PL35X_SMC_ECC_CFG_MODE_MASK 0xC 66 + #define PL35X_SMC_ECC_CFG_MODE_BYPASS 0 67 + #define PL35X_SMC_ECC_CFG_MODE_APB BIT(2) 68 + #define PL35X_SMC_ECC_CFG_MODE_MEM BIT(3) 69 + #define PL35X_SMC_ECC_CFG_PGSIZE_MASK 0x3 70 + /* SMC ECC command 1 register */ 71 + #define PL35X_SMC_ECC_CMD1 0x408 72 + #define PL35X_SMC_ECC_CMD1_WRITE(x) ((x) << 0) 73 + #define PL35X_SMC_ECC_CMD1_READ(x) ((x) << 8) 74 + #define PL35X_SMC_ECC_CMD1_READ_END(x) ((x) << 16) 75 + #define PL35X_SMC_ECC_CMD1_READ_END_VALID(x) ((x) << 24) 76 + /* SMC ECC command 2 register */ 77 + #define PL35X_SMC_ECC_CMD2 0x40C 78 + #define PL35X_SMC_ECC_CMD2_WRITE_COL_CHG(x) ((x) << 0) 79 + #define PL35X_SMC_ECC_CMD2_READ_COL_CHG(x) ((x) << 8) 80 + #define PL35X_SMC_ECC_CMD2_READ_COL_CHG_END(x) ((x) << 16) 81 + #define PL35X_SMC_ECC_CMD2_READ_COL_CHG_END_VALID(x) ((x) << 24) 82 + /* SMC ECC value registers (RO) */ 83 + #define PL35X_SMC_ECC_VALUE(x) (0x418 + (4 * (x))) 84 + #define PL35X_SMC_ECC_VALUE_IS_CORRECTABLE(x) ((x) & BIT(27)) 85 + #define PL35X_SMC_ECC_VALUE_HAS_FAILED(x) ((x) & BIT(28)) 86 + #define PL35X_SMC_ECC_VALUE_IS_VALID(x) ((x) & BIT(30)) 87 + 88 + /* NAND AXI interface */ 89 + #define PL35X_SMC_CMD_PHASE 0 90 + #define PL35X_SMC_CMD_PHASE_CMD0(x) ((x) << 3) 91 + #define PL35X_SMC_CMD_PHASE_CMD1(x) ((x) << 11) 92 + #define PL35X_SMC_CMD_PHASE_CMD1_VALID BIT(20) 93 + #define PL35X_SMC_CMD_PHASE_ADDR(pos, x) ((x) << (8 * (pos))) 94 + #define PL35X_SMC_CMD_PHASE_NADDRS(x) ((x) << 21) 95 + #define PL35X_SMC_DATA_PHASE BIT(19) 96 + 
#define PL35X_SMC_DATA_PHASE_ECC_LAST BIT(10) 97 + #define PL35X_SMC_DATA_PHASE_CLEAR_CS BIT(21) 98 + 99 + #define PL35X_NAND_MAX_CS 1 100 + #define PL35X_NAND_LAST_XFER_SZ 4 101 + #define TO_CYCLES(ps, period_ns) (DIV_ROUND_UP((ps) / 1000, period_ns)) 102 + 103 + #define PL35X_NAND_ECC_BITS_MASK 0xFFF 104 + #define PL35X_NAND_ECC_BYTE_OFF_MASK 0x1FF 105 + #define PL35X_NAND_ECC_BIT_OFF_MASK 0x7 106 + 107 + struct pl35x_nand_timings { 108 + unsigned int t_rc:4; 109 + unsigned int t_wc:4; 110 + unsigned int t_rea:3; 111 + unsigned int t_wp:3; 112 + unsigned int t_clr:3; 113 + unsigned int t_ar:3; 114 + unsigned int t_rr:4; 115 + unsigned int rsvd:8; 116 + }; 117 + 118 + struct pl35x_nand { 119 + struct list_head node; 120 + struct nand_chip chip; 121 + unsigned int cs; 122 + unsigned int addr_cycles; 123 + u32 ecc_cfg; 124 + u32 timings; 125 + }; 126 + 127 + /** 128 + * struct pl35x_nandc - NAND flash controller driver structure 129 + * @dev: Kernel device 130 + * @conf_regs: SMC configuration registers for command phase 131 + * @io_regs: NAND data registers for data phase 132 + * @controller: Core NAND controller structure 133 + * @chip: NAND chip information structure 134 + * @selected_chip: NAND chip currently selected by the controller 135 + * @assigned_cs: List of assigned CS 136 + * @ecc_buf: Temporary buffer to extract ECC bytes 137 + */ 138 + struct pl35x_nandc { 139 + struct device *dev; 140 + void __iomem *conf_regs; 141 + void __iomem *io_regs; 142 + struct nand_controller controller; 143 + struct list_head chips; 144 + struct nand_chip *selected_chip; 145 + unsigned long assigned_cs; 146 + u8 *ecc_buf; 147 + }; 148 + 149 + static inline struct pl35x_nandc *to_pl35x_nandc(struct nand_controller *ctrl) 150 + { 151 + return container_of(ctrl, struct pl35x_nandc, controller); 152 + } 153 + 154 + static inline struct pl35x_nand *to_pl35x_nand(struct nand_chip *chip) 155 + { 156 + return container_of(chip, struct pl35x_nand, chip); 157 + } 158 + 159 + static 
int pl35x_ecc_ooblayout16_ecc(struct mtd_info *mtd, int section, 160 + struct mtd_oob_region *oobregion) 161 + { 162 + struct nand_chip *chip = mtd_to_nand(mtd); 163 + 164 + if (section >= chip->ecc.steps) 165 + return -ERANGE; 166 + 167 + oobregion->offset = (section * chip->ecc.bytes); 168 + oobregion->length = chip->ecc.bytes; 169 + 170 + return 0; 171 + } 172 + 173 + static int pl35x_ecc_ooblayout16_free(struct mtd_info *mtd, int section, 174 + struct mtd_oob_region *oobregion) 175 + { 176 + struct nand_chip *chip = mtd_to_nand(mtd); 177 + 178 + if (section >= chip->ecc.steps) 179 + return -ERANGE; 180 + 181 + oobregion->offset = (section * chip->ecc.bytes) + 8; 182 + oobregion->length = 8; 183 + 184 + return 0; 185 + } 186 + 187 + static const struct mtd_ooblayout_ops pl35x_ecc_ooblayout16_ops = { 188 + .ecc = pl35x_ecc_ooblayout16_ecc, 189 + .free = pl35x_ecc_ooblayout16_free, 190 + }; 191 + 192 + /* Generic flash bbt descriptors */ 193 + static u8 bbt_pattern[] = { 'B', 'b', 't', '0' }; 194 + static u8 mirror_pattern[] = { '1', 't', 'b', 'B' }; 195 + 196 + static struct nand_bbt_descr bbt_main_descr = { 197 + .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE 198 + | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP, 199 + .offs = 4, 200 + .len = 4, 201 + .veroffs = 20, 202 + .maxblocks = 4, 203 + .pattern = bbt_pattern 204 + }; 205 + 206 + static struct nand_bbt_descr bbt_mirror_descr = { 207 + .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE 208 + | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP, 209 + .offs = 4, 210 + .len = 4, 211 + .veroffs = 20, 212 + .maxblocks = 4, 213 + .pattern = mirror_pattern 214 + }; 215 + 216 + static void pl35x_smc_update_regs(struct pl35x_nandc *nfc) 217 + { 218 + writel(PL35X_SMC_DIRECT_CMD_NAND_CS | 219 + PL35X_SMC_DIRECT_CMD_UPD_REGS, 220 + nfc->conf_regs + PL35X_SMC_DIRECT_CMD); 221 + } 222 + 223 + static int pl35x_smc_set_buswidth(struct pl35x_nandc *nfc, unsigned int bw) 224 + { 225 + 
if (bw != PL35X_SMC_OPMODE_BW_8 && bw != PL35X_SMC_OPMODE_BW_16) 226 + return -EINVAL; 227 + 228 + writel(bw, nfc->conf_regs + PL35X_SMC_OPMODE); 229 + pl35x_smc_update_regs(nfc); 230 + 231 + return 0; 232 + } 233 + 234 + static void pl35x_smc_clear_irq(struct pl35x_nandc *nfc) 235 + { 236 + writel(PL35X_SMC_MEMC_CFG_CLR_INT_CLR_1, 237 + nfc->conf_regs + PL35X_SMC_MEMC_CFG_CLR); 238 + } 239 + 240 + static int pl35x_smc_wait_for_irq(struct pl35x_nandc *nfc) 241 + { 242 + u32 reg; 243 + int ret; 244 + 245 + ret = readl_poll_timeout(nfc->conf_regs + PL35X_SMC_MEMC_STATUS, reg, 246 + reg & PL35X_SMC_MEMC_STATUS_RAW_INT_STATUS1, 247 + 10, 1000000); 248 + if (ret) 249 + dev_err(nfc->dev, 250 + "Timeout polling on NAND controller interrupt (0x%x)\n", 251 + reg); 252 + 253 + pl35x_smc_clear_irq(nfc); 254 + 255 + return ret; 256 + } 257 + 258 + static int pl35x_smc_wait_for_ecc_done(struct pl35x_nandc *nfc) 259 + { 260 + u32 reg; 261 + int ret; 262 + 263 + ret = readl_poll_timeout(nfc->conf_regs + PL35X_SMC_ECC_STATUS, reg, 264 + !(reg & PL35X_SMC_ECC_STATUS_ECC_BUSY), 265 + 10, 1000000); 266 + if (ret) 267 + dev_err(nfc->dev, 268 + "Timeout polling on ECC controller interrupt\n"); 269 + 270 + return ret; 271 + } 272 + 273 + static int pl35x_smc_set_ecc_mode(struct pl35x_nandc *nfc, 274 + struct nand_chip *chip, 275 + unsigned int mode) 276 + { 277 + struct pl35x_nand *plnand; 278 + u32 ecc_cfg; 279 + 280 + ecc_cfg = readl(nfc->conf_regs + PL35X_SMC_ECC_CFG); 281 + ecc_cfg &= ~PL35X_SMC_ECC_CFG_MODE_MASK; 282 + ecc_cfg |= mode; 283 + writel(ecc_cfg, nfc->conf_regs + PL35X_SMC_ECC_CFG); 284 + 285 + if (chip) { 286 + plnand = to_pl35x_nand(chip); 287 + plnand->ecc_cfg = ecc_cfg; 288 + } 289 + 290 + if (mode != PL35X_SMC_ECC_CFG_MODE_BYPASS) 291 + return pl35x_smc_wait_for_ecc_done(nfc); 292 + 293 + return 0; 294 + } 295 + 296 + static void pl35x_smc_force_byte_access(struct nand_chip *chip, 297 + bool force_8bit) 298 + { 299 + struct pl35x_nandc *nfc = 
to_pl35x_nandc(chip->controller); 300 + int ret; 301 + 302 + if (!(chip->options & NAND_BUSWIDTH_16)) 303 + return; 304 + 305 + if (force_8bit) 306 + ret = pl35x_smc_set_buswidth(nfc, PL35X_SMC_OPMODE_BW_8); 307 + else 308 + ret = pl35x_smc_set_buswidth(nfc, PL35X_SMC_OPMODE_BW_16); 309 + 310 + if (ret) 311 + dev_err(nfc->dev, "Error in Buswidth\n"); 312 + } 313 + 314 + static void pl35x_nand_select_target(struct nand_chip *chip, 315 + unsigned int die_nr) 316 + { 317 + struct pl35x_nandc *nfc = to_pl35x_nandc(chip->controller); 318 + struct pl35x_nand *plnand = to_pl35x_nand(chip); 319 + 320 + if (chip == nfc->selected_chip) 321 + return; 322 + 323 + /* Setup the timings */ 324 + writel(plnand->timings, nfc->conf_regs + PL35X_SMC_CYCLES); 325 + pl35x_smc_update_regs(nfc); 326 + 327 + /* Configure the ECC engine */ 328 + writel(plnand->ecc_cfg, nfc->conf_regs + PL35X_SMC_ECC_CFG); 329 + 330 + nfc->selected_chip = chip; 331 + } 332 + 333 + static void pl35x_nand_read_data_op(struct nand_chip *chip, u8 *in, 334 + unsigned int len, bool force_8bit, 335 + unsigned int flags, unsigned int last_flags) 336 + { 337 + struct pl35x_nandc *nfc = to_pl35x_nandc(chip->controller); 338 + unsigned int buf_end = len / 4; 339 + unsigned int in_start = round_down(len, 4); 340 + unsigned int data_phase_addr; 341 + u32 *buf32 = (u32 *)in; 342 + u8 *buf8 = (u8 *)in; 343 + int i; 344 + 345 + if (force_8bit) 346 + pl35x_smc_force_byte_access(chip, true); 347 + 348 + for (i = 0; i < buf_end; i++) { 349 + data_phase_addr = PL35X_SMC_DATA_PHASE + flags; 350 + if (i + 1 == buf_end) 351 + data_phase_addr = PL35X_SMC_DATA_PHASE + last_flags; 352 + 353 + buf32[i] = readl(nfc->io_regs + data_phase_addr); 354 + } 355 + 356 + /* No working extra flags on unaligned data accesses */ 357 + for (i = in_start; i < len; i++) 358 + buf8[i] = readb(nfc->io_regs + PL35X_SMC_DATA_PHASE); 359 + 360 + if (force_8bit) 361 + pl35x_smc_force_byte_access(chip, false); 362 + } 363 + 364 + static void 
pl35x_nand_write_data_op(struct nand_chip *chip, const u8 *out, 365 + int len, bool force_8bit, 366 + unsigned int flags, 367 + unsigned int last_flags) 368 + { 369 + struct pl35x_nandc *nfc = to_pl35x_nandc(chip->controller); 370 + unsigned int buf_end = len / 4; 371 + unsigned int in_start = round_down(len, 4); 372 + const u32 *buf32 = (const u32 *)out; 373 + const u8 *buf8 = (const u8 *)out; 374 + unsigned int data_phase_addr; 375 + int i; 376 + 377 + if (force_8bit) 378 + pl35x_smc_force_byte_access(chip, true); 379 + 380 + for (i = 0; i < buf_end; i++) { 381 + data_phase_addr = PL35X_SMC_DATA_PHASE + flags; 382 + if (i + 1 == buf_end) 383 + data_phase_addr = PL35X_SMC_DATA_PHASE + last_flags; 384 + 385 + writel(buf32[i], nfc->io_regs + data_phase_addr); 386 + } 387 + 388 + /* No working extra flags on unaligned data accesses */ 389 + for (i = in_start; i < len; i++) 390 + writeb(buf8[i], nfc->io_regs + PL35X_SMC_DATA_PHASE); 391 + 392 + if (force_8bit) 393 + pl35x_smc_force_byte_access(chip, false); 394 + } 395 + 396 + static int pl35x_nand_correct_data(struct pl35x_nandc *nfc, unsigned char *buf, 397 + unsigned char *read_ecc, 398 + unsigned char *calc_ecc) 399 + { 400 + unsigned short ecc_odd, ecc_even, read_ecc_lower, read_ecc_upper; 401 + unsigned short calc_ecc_lower, calc_ecc_upper; 402 + unsigned short byte_addr, bit_addr; 403 + 404 + read_ecc_lower = (read_ecc[0] | (read_ecc[1] << 8)) & 405 + PL35X_NAND_ECC_BITS_MASK; 406 + read_ecc_upper = ((read_ecc[1] >> 4) | (read_ecc[2] << 4)) & 407 + PL35X_NAND_ECC_BITS_MASK; 408 + 409 + calc_ecc_lower = (calc_ecc[0] | (calc_ecc[1] << 8)) & 410 + PL35X_NAND_ECC_BITS_MASK; 411 + calc_ecc_upper = ((calc_ecc[1] >> 4) | (calc_ecc[2] << 4)) & 412 + PL35X_NAND_ECC_BITS_MASK; 413 + 414 + ecc_odd = read_ecc_lower ^ calc_ecc_lower; 415 + ecc_even = read_ecc_upper ^ calc_ecc_upper; 416 + 417 + /* No error */ 418 + if (likely(!ecc_odd && !ecc_even)) 419 + return 0; 420 + 421 + /* One error in the main data; to be corrected 
*/ 422 + if (ecc_odd == (~ecc_even & PL35X_NAND_ECC_BITS_MASK)) { 423 + /* Bits [11:3] of error code give the byte offset */ 424 + byte_addr = (ecc_odd >> 3) & PL35X_NAND_ECC_BYTE_OFF_MASK; 425 + /* Bits [2:0] of error code give the bit offset */ 426 + bit_addr = ecc_odd & PL35X_NAND_ECC_BIT_OFF_MASK; 427 + /* Toggle the faulty bit */ 428 + buf[byte_addr] ^= (BIT(bit_addr)); 429 + 430 + return 1; 431 + } 432 + 433 + /* One error in the ECC data; no action needed */ 434 + if (hweight32(ecc_odd | ecc_even) == 1) 435 + return 1; 436 + 437 + return -EBADMSG; 438 + } 439 + 440 + static void pl35x_nand_ecc_reg_to_array(struct nand_chip *chip, u32 ecc_reg, 441 + u8 *ecc_array) 442 + { 443 + u32 ecc_value = ~ecc_reg; 444 + unsigned int ecc_byte; 445 + 446 + for (ecc_byte = 0; ecc_byte < chip->ecc.bytes; ecc_byte++) 447 + ecc_array[ecc_byte] = ecc_value >> (8 * ecc_byte); 448 + } 449 + 450 + static int pl35x_nand_read_eccbytes(struct pl35x_nandc *nfc, 451 + struct nand_chip *chip, u8 *read_ecc) 452 + { 453 + u32 ecc_value; 454 + int chunk; 455 + 456 + for (chunk = 0; chunk < chip->ecc.steps; 457 + chunk++, read_ecc += chip->ecc.bytes) { 458 + ecc_value = readl(nfc->conf_regs + PL35X_SMC_ECC_VALUE(chunk)); 459 + if (!PL35X_SMC_ECC_VALUE_IS_VALID(ecc_value)) 460 + return -EINVAL; 461 + 462 + pl35x_nand_ecc_reg_to_array(chip, ecc_value, read_ecc); 463 + } 464 + 465 + return 0; 466 + } 467 + 468 + static int pl35x_nand_recover_data_hwecc(struct pl35x_nandc *nfc, 469 + struct nand_chip *chip, u8 *data, 470 + u8 *read_ecc) 471 + { 472 + struct mtd_info *mtd = nand_to_mtd(chip); 473 + unsigned int max_bitflips = 0, chunk; 474 + u8 calc_ecc[3]; 475 + u32 ecc_value; 476 + int stats; 477 + 478 + for (chunk = 0; chunk < chip->ecc.steps; 479 + chunk++, data += chip->ecc.size, read_ecc += chip->ecc.bytes) { 480 + /* Read ECC value for each chunk */ 481 + ecc_value = readl(nfc->conf_regs + PL35X_SMC_ECC_VALUE(chunk)); 482 + 483 + if (!PL35X_SMC_ECC_VALUE_IS_VALID(ecc_value)) 484 + return 
-EINVAL; 485 + 486 + if (PL35X_SMC_ECC_VALUE_HAS_FAILED(ecc_value)) { 487 + mtd->ecc_stats.failed++; 488 + continue; 489 + } 490 + 491 + pl35x_nand_ecc_reg_to_array(chip, ecc_value, calc_ecc); 492 + stats = pl35x_nand_correct_data(nfc, data, read_ecc, calc_ecc); 493 + if (stats < 0) { 494 + mtd->ecc_stats.failed++; 495 + } else { 496 + mtd->ecc_stats.corrected += stats; 497 + max_bitflips = max_t(unsigned int, max_bitflips, stats); 498 + } 499 + } 500 + 501 + return max_bitflips; 502 + } 503 + 504 + static int pl35x_nand_write_page_hwecc(struct nand_chip *chip, 505 + const u8 *buf, int oob_required, 506 + int page) 507 + { 508 + struct pl35x_nandc *nfc = to_pl35x_nandc(chip->controller); 509 + struct pl35x_nand *plnand = to_pl35x_nand(chip); 510 + struct mtd_info *mtd = nand_to_mtd(chip); 511 + unsigned int first_row = (mtd->writesize <= 512) ? 1 : 2; 512 + unsigned int nrows = plnand->addr_cycles; 513 + u32 addr1 = 0, addr2 = 0, row; 514 + u32 cmd_addr; 515 + int i, ret; 516 + 517 + ret = pl35x_smc_set_ecc_mode(nfc, chip, PL35X_SMC_ECC_CFG_MODE_APB); 518 + if (ret) 519 + return ret; 520 + 521 + cmd_addr = PL35X_SMC_CMD_PHASE | 522 + PL35X_SMC_CMD_PHASE_NADDRS(plnand->addr_cycles) | 523 + PL35X_SMC_CMD_PHASE_CMD0(NAND_CMD_SEQIN); 524 + 525 + for (i = 0, row = first_row; row < nrows; i++, row++) { 526 + u8 addr = page >> ((i * 8) & 0xFF); 527 + 528 + if (row < 4) 529 + addr1 |= PL35X_SMC_CMD_PHASE_ADDR(row, addr); 530 + else 531 + addr2 |= PL35X_SMC_CMD_PHASE_ADDR(row - 4, addr); 532 + } 533 + 534 + /* Send the command and address cycles */ 535 + writel(addr1, nfc->io_regs + cmd_addr); 536 + if (plnand->addr_cycles > 4) 537 + writel(addr2, nfc->io_regs + cmd_addr); 538 + 539 + /* Write the data with the engine enabled */ 540 + pl35x_nand_write_data_op(chip, buf, mtd->writesize, false, 541 + 0, PL35X_SMC_DATA_PHASE_ECC_LAST); 542 + ret = pl35x_smc_wait_for_ecc_done(nfc); 543 + if (ret) 544 + goto disable_ecc_engine; 545 + 546 + /* Copy the HW calculated ECC bytes in 
the OOB buffer */ 547 + ret = pl35x_nand_read_eccbytes(nfc, chip, nfc->ecc_buf); 548 + if (ret) 549 + goto disable_ecc_engine; 550 + 551 + if (!oob_required) 552 + memset(chip->oob_poi, 0xFF, mtd->oobsize); 553 + 554 + ret = mtd_ooblayout_set_eccbytes(mtd, nfc->ecc_buf, chip->oob_poi, 555 + 0, chip->ecc.total); 556 + if (ret) 557 + goto disable_ecc_engine; 558 + 559 + /* Write the spare area with ECC bytes */ 560 + pl35x_nand_write_data_op(chip, chip->oob_poi, mtd->oobsize, false, 0, 561 + PL35X_SMC_CMD_PHASE_CMD1(NAND_CMD_PAGEPROG) | 562 + PL35X_SMC_CMD_PHASE_CMD1_VALID | 563 + PL35X_SMC_DATA_PHASE_CLEAR_CS); 564 + ret = pl35x_smc_wait_for_irq(nfc); 565 + if (ret) 566 + goto disable_ecc_engine; 567 + 568 + disable_ecc_engine: 569 + pl35x_smc_set_ecc_mode(nfc, chip, PL35X_SMC_ECC_CFG_MODE_BYPASS); 570 + 571 + return ret; 572 + } 573 + 574 + /* 575 + * This functions reads data and checks the data integrity by comparing hardware 576 + * generated ECC values and read ECC values from spare area. 577 + * 578 + * There is a limitation with SMC controller: ECC_LAST must be set on the 579 + * last data access to tell the ECC engine not to expect any further data. 580 + * In practice, this implies to shrink the last data transfert by eg. 4 bytes, 581 + * and doing a last 4-byte transfer with the additional bit set. The last block 582 + * should be aligned with the end of an ECC block. Because of this limitation, 583 + * it is not possible to use the core routines. 584 + */ 585 + static int pl35x_nand_read_page_hwecc(struct nand_chip *chip, 586 + u8 *buf, int oob_required, int page) 587 + { 588 + const struct nand_sdr_timings *sdr = 589 + nand_get_sdr_timings(nand_get_interface_config(chip)); 590 + struct pl35x_nandc *nfc = to_pl35x_nandc(chip->controller); 591 + struct pl35x_nand *plnand = to_pl35x_nand(chip); 592 + struct mtd_info *mtd = nand_to_mtd(chip); 593 + unsigned int first_row = (mtd->writesize <= 512) ? 
1 : 2; 594 + unsigned int nrows = plnand->addr_cycles; 595 + unsigned int addr1 = 0, addr2 = 0, row; 596 + u32 cmd_addr; 597 + int i, ret; 598 + 599 + ret = pl35x_smc_set_ecc_mode(nfc, chip, PL35X_SMC_ECC_CFG_MODE_APB); 600 + if (ret) 601 + return ret; 602 + 603 + cmd_addr = PL35X_SMC_CMD_PHASE | 604 + PL35X_SMC_CMD_PHASE_NADDRS(plnand->addr_cycles) | 605 + PL35X_SMC_CMD_PHASE_CMD0(NAND_CMD_READ0) | 606 + PL35X_SMC_CMD_PHASE_CMD1(NAND_CMD_READSTART) | 607 + PL35X_SMC_CMD_PHASE_CMD1_VALID; 608 + 609 + for (i = 0, row = first_row; row < nrows; i++, row++) { 610 + u8 addr = page >> ((i * 8) & 0xFF); 611 + 612 + if (row < 4) 613 + addr1 |= PL35X_SMC_CMD_PHASE_ADDR(row, addr); 614 + else 615 + addr2 |= PL35X_SMC_CMD_PHASE_ADDR(row - 4, addr); 616 + } 617 + 618 + /* Send the command and address cycles */ 619 + writel(addr1, nfc->io_regs + cmd_addr); 620 + if (plnand->addr_cycles > 4) 621 + writel(addr2, nfc->io_regs + cmd_addr); 622 + 623 + /* Wait the data to be available in the NAND cache */ 624 + ndelay(PSEC_TO_NSEC(sdr->tRR_min)); 625 + ret = pl35x_smc_wait_for_irq(nfc); 626 + if (ret) 627 + goto disable_ecc_engine; 628 + 629 + /* Retrieve the raw data with the engine enabled */ 630 + pl35x_nand_read_data_op(chip, buf, mtd->writesize, false, 631 + 0, PL35X_SMC_DATA_PHASE_ECC_LAST); 632 + ret = pl35x_smc_wait_for_ecc_done(nfc); 633 + if (ret) 634 + goto disable_ecc_engine; 635 + 636 + /* Retrieve the stored ECC bytes */ 637 + pl35x_nand_read_data_op(chip, chip->oob_poi, mtd->oobsize, false, 638 + 0, PL35X_SMC_DATA_PHASE_CLEAR_CS); 639 + ret = mtd_ooblayout_get_eccbytes(mtd, nfc->ecc_buf, chip->oob_poi, 0, 640 + chip->ecc.total); 641 + if (ret) 642 + goto disable_ecc_engine; 643 + 644 + pl35x_smc_set_ecc_mode(nfc, chip, PL35X_SMC_ECC_CFG_MODE_BYPASS); 645 + 646 + /* Correct the data and report failures */ 647 + return pl35x_nand_recover_data_hwecc(nfc, chip, buf, nfc->ecc_buf); 648 + 649 + disable_ecc_engine: 650 + pl35x_smc_set_ecc_mode(nfc, chip, 
PL35X_SMC_ECC_CFG_MODE_BYPASS); 651 + 652 + return ret; 653 + } 654 + 655 + static int pl35x_nand_exec_op(struct nand_chip *chip, 656 + const struct nand_subop *subop) 657 + { 658 + struct pl35x_nandc *nfc = to_pl35x_nandc(chip->controller); 659 + const struct nand_op_instr *instr, *data_instr = NULL; 660 + unsigned int rdy_tim_ms = 0, naddrs = 0, cmds = 0, last_flags = 0; 661 + u32 addr1 = 0, addr2 = 0, cmd0 = 0, cmd1 = 0, cmd_addr = 0; 662 + unsigned int op_id, len, offset, rdy_del_ns; 663 + int last_instr_type = -1; 664 + bool cmd1_valid = false; 665 + const u8 *addrs; 666 + int i, ret; 667 + 668 + for (op_id = 0; op_id < subop->ninstrs; op_id++) { 669 + instr = &subop->instrs[op_id]; 670 + 671 + switch (instr->type) { 672 + case NAND_OP_CMD_INSTR: 673 + if (!cmds) { 674 + cmd0 = PL35X_SMC_CMD_PHASE_CMD0(instr->ctx.cmd.opcode); 675 + } else { 676 + cmd1 = PL35X_SMC_CMD_PHASE_CMD1(instr->ctx.cmd.opcode); 677 + if (last_instr_type != NAND_OP_DATA_OUT_INSTR) 678 + cmd1_valid = true; 679 + } 680 + cmds++; 681 + break; 682 + 683 + case NAND_OP_ADDR_INSTR: 684 + offset = nand_subop_get_addr_start_off(subop, op_id); 685 + naddrs = nand_subop_get_num_addr_cyc(subop, op_id); 686 + addrs = &instr->ctx.addr.addrs[offset]; 687 + cmd_addr |= PL35X_SMC_CMD_PHASE_NADDRS(naddrs); 688 + 689 + for (i = offset; i < naddrs; i++) { 690 + if (i < 4) 691 + addr1 |= PL35X_SMC_CMD_PHASE_ADDR(i, addrs[i]); 692 + else 693 + addr2 |= PL35X_SMC_CMD_PHASE_ADDR(i - 4, addrs[i]); 694 + } 695 + break; 696 + 697 + case NAND_OP_DATA_IN_INSTR: 698 + case NAND_OP_DATA_OUT_INSTR: 699 + data_instr = instr; 700 + len = nand_subop_get_data_len(subop, op_id); 701 + break; 702 + 703 + case NAND_OP_WAITRDY_INSTR: 704 + rdy_tim_ms = instr->ctx.waitrdy.timeout_ms; 705 + rdy_del_ns = instr->delay_ns; 706 + break; 707 + } 708 + 709 + last_instr_type = instr->type; 710 + } 711 + 712 + /* Command phase */ 713 + cmd_addr |= PL35X_SMC_CMD_PHASE | cmd0 | cmd1 | 714 + (cmd1_valid ? 
PL35X_SMC_CMD_PHASE_CMD1_VALID : 0); 715 + writel(addr1, nfc->io_regs + cmd_addr); 716 + if (naddrs > 4) 717 + writel(addr2, nfc->io_regs + cmd_addr); 718 + 719 + /* Data phase */ 720 + if (data_instr && data_instr->type == NAND_OP_DATA_OUT_INSTR) { 721 + last_flags = PL35X_SMC_DATA_PHASE_CLEAR_CS; 722 + if (cmds == 2) 723 + last_flags |= cmd1 | PL35X_SMC_CMD_PHASE_CMD1_VALID; 724 + 725 + pl35x_nand_write_data_op(chip, data_instr->ctx.data.buf.out, 726 + len, data_instr->ctx.data.force_8bit, 727 + 0, last_flags); 728 + } 729 + 730 + if (rdy_tim_ms) { 731 + ndelay(rdy_del_ns); 732 + ret = pl35x_smc_wait_for_irq(nfc); 733 + if (ret) 734 + return ret; 735 + } 736 + 737 + if (data_instr && data_instr->type == NAND_OP_DATA_IN_INSTR) 738 + pl35x_nand_read_data_op(chip, data_instr->ctx.data.buf.in, 739 + len, data_instr->ctx.data.force_8bit, 740 + 0, PL35X_SMC_DATA_PHASE_CLEAR_CS); 741 + 742 + return 0; 743 + } 744 + 745 + static const struct nand_op_parser pl35x_nandc_op_parser = NAND_OP_PARSER( 746 + NAND_OP_PARSER_PATTERN(pl35x_nand_exec_op, 747 + NAND_OP_PARSER_PAT_CMD_ELEM(true), 748 + NAND_OP_PARSER_PAT_ADDR_ELEM(true, 7), 749 + NAND_OP_PARSER_PAT_CMD_ELEM(true), 750 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true), 751 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(true, 2112)), 752 + NAND_OP_PARSER_PATTERN(pl35x_nand_exec_op, 753 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 754 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, 7), 755 + NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, 2112), 756 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 757 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)), 758 + NAND_OP_PARSER_PATTERN(pl35x_nand_exec_op, 759 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 760 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, 7), 761 + NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, 2112), 762 + NAND_OP_PARSER_PAT_CMD_ELEM(true), 763 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)), 764 + ); 765 + 766 + static int pl35x_nfc_exec_op(struct nand_chip *chip, 767 + const struct nand_operation *op, 768 + bool check_only) 769 + { 770 + if (!check_only) 
771 + pl35x_nand_select_target(chip, op->cs); 772 + 773 + return nand_op_parser_exec_op(chip, &pl35x_nandc_op_parser, 774 + op, check_only); 775 + } 776 + 777 + static int pl35x_nfc_setup_interface(struct nand_chip *chip, int cs, 778 + const struct nand_interface_config *conf) 779 + { 780 + struct pl35x_nandc *nfc = to_pl35x_nandc(chip->controller); 781 + struct pl35x_nand *plnand = to_pl35x_nand(chip); 782 + struct pl35x_nand_timings tmgs = {}; 783 + const struct nand_sdr_timings *sdr; 784 + unsigned int period_ns, val; 785 + struct clk *mclk; 786 + 787 + sdr = nand_get_sdr_timings(conf); 788 + if (IS_ERR(sdr)) 789 + return PTR_ERR(sdr); 790 + 791 + mclk = of_clk_get_by_name(nfc->dev->parent->of_node, "memclk"); 792 + if (IS_ERR(mclk)) { 793 + dev_err(nfc->dev, "Failed to retrieve SMC memclk\n"); 794 + return PTR_ERR(mclk); 795 + } 796 + 797 + /* 798 + * SDR timings are given in pico-seconds while NFC timings must be 799 + * expressed in NAND controller clock cycles. We use the TO_CYCLE() 800 + * macro to convert from one to the other. 801 + */ 802 + period_ns = NSEC_PER_SEC / clk_get_rate(mclk); 803 + 804 + /* 805 + * PL35X SMC needs one extra read cycle in SDR Mode 5. This is not 806 + * written anywhere in the datasheet but is an empirical observation. 807 + */ 808 + val = TO_CYCLES(sdr->tRC_min, period_ns); 809 + if (sdr->tRC_min <= 20000) 810 + val++; 811 + 812 + tmgs.t_rc = val; 813 + if (tmgs.t_rc != val || tmgs.t_rc < 2) 814 + return -EINVAL; 815 + 816 + val = TO_CYCLES(sdr->tWC_min, period_ns); 817 + tmgs.t_wc = val; 818 + if (tmgs.t_wc != val || tmgs.t_wc < 2) 819 + return -EINVAL; 820 + 821 + /* 822 + * For all SDR modes, PL35X SMC needs tREA_max being 1, 823 + * this is also an empirical result. 
824 + */ 825 + tmgs.t_rea = 1; 826 + 827 + val = TO_CYCLES(sdr->tWP_min, period_ns); 828 + tmgs.t_wp = val; 829 + if (tmgs.t_wp != val || tmgs.t_wp < 1) 830 + return -EINVAL; 831 + 832 + val = TO_CYCLES(sdr->tCLR_min, period_ns); 833 + tmgs.t_clr = val; 834 + if (tmgs.t_clr != val) 835 + return -EINVAL; 836 + 837 + val = TO_CYCLES(sdr->tAR_min, period_ns); 838 + tmgs.t_ar = val; 839 + if (tmgs.t_ar != val) 840 + return -EINVAL; 841 + 842 + val = TO_CYCLES(sdr->tRR_min, period_ns); 843 + tmgs.t_rr = val; 844 + if (tmgs.t_rr != val) 845 + return -EINVAL; 846 + 847 + if (cs == NAND_DATA_IFACE_CHECK_ONLY) 848 + return 0; 849 + 850 + plnand->timings = PL35X_SMC_NAND_TRC_CYCLES(tmgs.t_rc) | 851 + PL35X_SMC_NAND_TWC_CYCLES(tmgs.t_wc) | 852 + PL35X_SMC_NAND_TREA_CYCLES(tmgs.t_rea) | 853 + PL35X_SMC_NAND_TWP_CYCLES(tmgs.t_wp) | 854 + PL35X_SMC_NAND_TCLR_CYCLES(tmgs.t_clr) | 855 + PL35X_SMC_NAND_TAR_CYCLES(tmgs.t_ar) | 856 + PL35X_SMC_NAND_TRR_CYCLES(tmgs.t_rr); 857 + 858 + return 0; 859 + } 860 + 861 + static void pl35x_smc_set_ecc_pg_size(struct pl35x_nandc *nfc, 862 + struct nand_chip *chip, 863 + unsigned int pg_sz) 864 + { 865 + struct pl35x_nand *plnand = to_pl35x_nand(chip); 866 + u32 sz; 867 + 868 + switch (pg_sz) { 869 + case SZ_512: 870 + sz = 1; 871 + break; 872 + case SZ_1K: 873 + sz = 2; 874 + break; 875 + case SZ_2K: 876 + sz = 3; 877 + break; 878 + default: 879 + sz = 0; 880 + break; 881 + } 882 + 883 + plnand->ecc_cfg = readl(nfc->conf_regs + PL35X_SMC_ECC_CFG); 884 + plnand->ecc_cfg &= ~PL35X_SMC_ECC_CFG_PGSIZE_MASK; 885 + plnand->ecc_cfg |= sz; 886 + writel(plnand->ecc_cfg, nfc->conf_regs + PL35X_SMC_ECC_CFG); 887 + } 888 + 889 + static int pl35x_nand_init_hw_ecc_controller(struct pl35x_nandc *nfc, 890 + struct nand_chip *chip) 891 + { 892 + struct mtd_info *mtd = nand_to_mtd(chip); 893 + int ret = 0; 894 + 895 + if (mtd->writesize < SZ_512 || mtd->writesize > SZ_2K) { 896 + dev_err(nfc->dev, 897 + "The hardware ECC engine is limited to pages up to 
2kiB\n"); 898 + return -EOPNOTSUPP; 899 + } 900 + 901 + chip->ecc.strength = 1; 902 + chip->ecc.bytes = 3; 903 + chip->ecc.size = SZ_512; 904 + chip->ecc.steps = mtd->writesize / chip->ecc.size; 905 + chip->ecc.read_page = pl35x_nand_read_page_hwecc; 906 + chip->ecc.write_page = pl35x_nand_write_page_hwecc; 907 + chip->ecc.write_page_raw = nand_monolithic_write_page_raw; 908 + pl35x_smc_set_ecc_pg_size(nfc, chip, mtd->writesize); 909 + 910 + nfc->ecc_buf = devm_kmalloc(nfc->dev, chip->ecc.bytes * chip->ecc.steps, 911 + GFP_KERNEL); 912 + if (!nfc->ecc_buf) 913 + return -ENOMEM; 914 + 915 + switch (mtd->oobsize) { 916 + case 16: 917 + /* Legacy Xilinx layout */ 918 + mtd_set_ooblayout(mtd, &pl35x_ecc_ooblayout16_ops); 919 + chip->bbt_options |= NAND_BBT_NO_OOB_BBM; 920 + break; 921 + case 64: 922 + mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout()); 923 + break; 924 + default: 925 + dev_err(nfc->dev, "Unsupported OOB size\n"); 926 + return -EOPNOTSUPP; 927 + } 928 + 929 + return ret; 930 + } 931 + 932 + static int pl35x_nand_attach_chip(struct nand_chip *chip) 933 + { 934 + const struct nand_ecc_props *requirements = 935 + nanddev_get_ecc_requirements(&chip->base); 936 + struct pl35x_nandc *nfc = to_pl35x_nandc(chip->controller); 937 + struct pl35x_nand *plnand = to_pl35x_nand(chip); 938 + struct mtd_info *mtd = nand_to_mtd(chip); 939 + int ret; 940 + 941 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_NONE && 942 + (!chip->ecc.size || !chip->ecc.strength)) { 943 + if (requirements->step_size && requirements->strength) { 944 + chip->ecc.size = requirements->step_size; 945 + chip->ecc.strength = requirements->strength; 946 + } else { 947 + dev_info(nfc->dev, 948 + "No minimum ECC strength, using 1b/512B\n"); 949 + chip->ecc.size = 512; 950 + chip->ecc.strength = 1; 951 + } 952 + } 953 + 954 + if (mtd->writesize <= SZ_512) 955 + plnand->addr_cycles = 1; 956 + else 957 + plnand->addr_cycles = 2; 958 + 959 + if (chip->options & NAND_ROW_ADDR_3) 960 + 
plnand->addr_cycles += 3; 961 + else 962 + plnand->addr_cycles += 2; 963 + 964 + switch (chip->ecc.engine_type) { 965 + case NAND_ECC_ENGINE_TYPE_ON_DIE: 966 + /* Keep these legacy BBT descriptors for ON_DIE situations */ 967 + chip->bbt_td = &bbt_main_descr; 968 + chip->bbt_md = &bbt_mirror_descr; 969 + fallthrough; 970 + case NAND_ECC_ENGINE_TYPE_NONE: 971 + case NAND_ECC_ENGINE_TYPE_SOFT: 972 + break; 973 + case NAND_ECC_ENGINE_TYPE_ON_HOST: 974 + ret = pl35x_nand_init_hw_ecc_controller(nfc, chip); 975 + if (ret) 976 + return ret; 977 + break; 978 + default: 979 + dev_err(nfc->dev, "Unsupported ECC mode: %d\n", 980 + chip->ecc.engine_type); 981 + return -EINVAL; 982 + } 983 + 984 + return 0; 985 + } 986 + 987 + static const struct nand_controller_ops pl35x_nandc_ops = { 988 + .attach_chip = pl35x_nand_attach_chip, 989 + .exec_op = pl35x_nfc_exec_op, 990 + .setup_interface = pl35x_nfc_setup_interface, 991 + }; 992 + 993 + static int pl35x_nand_reset_state(struct pl35x_nandc *nfc) 994 + { 995 + int ret; 996 + 997 + /* Disable interrupts and clear their status */ 998 + writel(PL35X_SMC_MEMC_CFG_CLR_INT_CLR_1 | 999 + PL35X_SMC_MEMC_CFG_CLR_ECC_INT_DIS_1 | 1000 + PL35X_SMC_MEMC_CFG_CLR_INT_DIS_1, 1001 + nfc->conf_regs + PL35X_SMC_MEMC_CFG_CLR); 1002 + 1003 + /* Set default bus width to 8-bit */ 1004 + ret = pl35x_smc_set_buswidth(nfc, PL35X_SMC_OPMODE_BW_8); 1005 + if (ret) 1006 + return ret; 1007 + 1008 + /* Ensure the ECC controller is bypassed by default */ 1009 + ret = pl35x_smc_set_ecc_mode(nfc, NULL, PL35X_SMC_ECC_CFG_MODE_BYPASS); 1010 + if (ret) 1011 + return ret; 1012 + 1013 + /* 1014 + * Configure the commands that the ECC block uses to detect the 1015 + * operations it should start/end. 
1016 + */ 1017 + writel(PL35X_SMC_ECC_CMD1_WRITE(NAND_CMD_SEQIN) | 1018 + PL35X_SMC_ECC_CMD1_READ(NAND_CMD_READ0) | 1019 + PL35X_SMC_ECC_CMD1_READ_END(NAND_CMD_READSTART) | 1020 + PL35X_SMC_ECC_CMD1_READ_END_VALID(NAND_CMD_READ1), 1021 + nfc->conf_regs + PL35X_SMC_ECC_CMD1); 1022 + writel(PL35X_SMC_ECC_CMD2_WRITE_COL_CHG(NAND_CMD_RNDIN) | 1023 + PL35X_SMC_ECC_CMD2_READ_COL_CHG(NAND_CMD_RNDOUT) | 1024 + PL35X_SMC_ECC_CMD2_READ_COL_CHG_END(NAND_CMD_RNDOUTSTART) | 1025 + PL35X_SMC_ECC_CMD2_READ_COL_CHG_END_VALID(NAND_CMD_READ1), 1026 + nfc->conf_regs + PL35X_SMC_ECC_CMD2); 1027 + 1028 + return 0; 1029 + } 1030 + 1031 + static int pl35x_nand_chip_init(struct pl35x_nandc *nfc, 1032 + struct device_node *np) 1033 + { 1034 + struct pl35x_nand *plnand; 1035 + struct nand_chip *chip; 1036 + struct mtd_info *mtd; 1037 + int cs, ret; 1038 + 1039 + plnand = devm_kzalloc(nfc->dev, sizeof(*plnand), GFP_KERNEL); 1040 + if (!plnand) 1041 + return -ENOMEM; 1042 + 1043 + ret = of_property_read_u32(np, "reg", &cs); 1044 + if (ret) 1045 + return ret; 1046 + 1047 + if (cs >= PL35X_NAND_MAX_CS) { 1048 + dev_err(nfc->dev, "Wrong CS %d\n", cs); 1049 + return -EINVAL; 1050 + } 1051 + 1052 + if (test_and_set_bit(cs, &nfc->assigned_cs)) { 1053 + dev_err(nfc->dev, "Already assigned CS %d\n", cs); 1054 + return -EINVAL; 1055 + } 1056 + 1057 + plnand->cs = cs; 1058 + 1059 + chip = &plnand->chip; 1060 + chip->options = NAND_BUSWIDTH_AUTO | NAND_USES_DMA | NAND_NO_SUBPAGE_WRITE; 1061 + chip->bbt_options = NAND_BBT_USE_FLASH; 1062 + chip->controller = &nfc->controller; 1063 + mtd = nand_to_mtd(chip); 1064 + mtd->dev.parent = nfc->dev; 1065 + nand_set_flash_node(chip, nfc->dev->of_node); 1066 + if (!mtd->name) { 1067 + mtd->name = devm_kasprintf(nfc->dev, GFP_KERNEL, 1068 + "%s", PL35X_NANDC_DRIVER_NAME); 1069 + if (!mtd->name) { 1070 + dev_err(nfc->dev, "Failed to allocate mtd->name\n"); 1071 + return -ENOMEM; 1072 + } 1073 + } 1074 + 1075 + ret = nand_scan(chip, 1); 1076 + if (ret) 1077 + return 
ret; 1078 + 1079 + ret = mtd_device_register(mtd, NULL, 0); 1080 + if (ret) { 1081 + nand_cleanup(chip); 1082 + return ret; 1083 + } 1084 + 1085 + list_add_tail(&plnand->node, &nfc->chips); 1086 + 1087 + return ret; 1088 + } 1089 + 1090 + static void pl35x_nand_chips_cleanup(struct pl35x_nandc *nfc) 1091 + { 1092 + struct pl35x_nand *plnand, *tmp; 1093 + struct nand_chip *chip; 1094 + int ret; 1095 + 1096 + list_for_each_entry_safe(plnand, tmp, &nfc->chips, node) { 1097 + chip = &plnand->chip; 1098 + ret = mtd_device_unregister(nand_to_mtd(chip)); 1099 + WARN_ON(ret); 1100 + nand_cleanup(chip); 1101 + list_del(&plnand->node); 1102 + } 1103 + } 1104 + 1105 + static int pl35x_nand_chips_init(struct pl35x_nandc *nfc) 1106 + { 1107 + struct device_node *np = nfc->dev->of_node, *nand_np; 1108 + int nchips = of_get_child_count(np); 1109 + int ret; 1110 + 1111 + if (!nchips || nchips > PL35X_NAND_MAX_CS) { 1112 + dev_err(nfc->dev, "Incorrect number of NAND chips (%d)\n", 1113 + nchips); 1114 + return -EINVAL; 1115 + } 1116 + 1117 + for_each_child_of_node(np, nand_np) { 1118 + ret = pl35x_nand_chip_init(nfc, nand_np); 1119 + if (ret) { 1120 + of_node_put(nand_np); 1121 + pl35x_nand_chips_cleanup(nfc); 1122 + break; 1123 + } 1124 + } 1125 + 1126 + return ret; 1127 + } 1128 + 1129 + static int pl35x_nand_probe(struct platform_device *pdev) 1130 + { 1131 + struct device *smc_dev = pdev->dev.parent; 1132 + struct amba_device *smc_amba = to_amba_device(smc_dev); 1133 + struct pl35x_nandc *nfc; 1134 + u32 ret; 1135 + 1136 + nfc = devm_kzalloc(&pdev->dev, sizeof(*nfc), GFP_KERNEL); 1137 + if (!nfc) 1138 + return -ENOMEM; 1139 + 1140 + nfc->dev = &pdev->dev; 1141 + nand_controller_init(&nfc->controller); 1142 + nfc->controller.ops = &pl35x_nandc_ops; 1143 + INIT_LIST_HEAD(&nfc->chips); 1144 + 1145 + nfc->conf_regs = devm_ioremap_resource(&smc_amba->dev, &smc_amba->res); 1146 + if (IS_ERR(nfc->conf_regs)) 1147 + return PTR_ERR(nfc->conf_regs); 1148 + 1149 + nfc->io_regs = 
devm_platform_ioremap_resource(pdev, 0); 1150 + if (IS_ERR(nfc->io_regs)) 1151 + return PTR_ERR(nfc->io_regs); 1152 + 1153 + ret = pl35x_nand_reset_state(nfc); 1154 + if (ret) 1155 + return ret; 1156 + 1157 + ret = pl35x_nand_chips_init(nfc); 1158 + if (ret) 1159 + return ret; 1160 + 1161 + platform_set_drvdata(pdev, nfc); 1162 + 1163 + return 0; 1164 + } 1165 + 1166 + static int pl35x_nand_remove(struct platform_device *pdev) 1167 + { 1168 + struct pl35x_nandc *nfc = platform_get_drvdata(pdev); 1169 + 1170 + pl35x_nand_chips_cleanup(nfc); 1171 + 1172 + return 0; 1173 + } 1174 + 1175 + static const struct of_device_id pl35x_nand_of_match[] = { 1176 + { .compatible = "arm,pl353-nand-r2p1" }, 1177 + {}, 1178 + }; 1179 + MODULE_DEVICE_TABLE(of, pl35x_nand_of_match); 1180 + 1181 + static struct platform_driver pl35x_nandc_driver = { 1182 + .probe = pl35x_nand_probe, 1183 + .remove = pl35x_nand_remove, 1184 + .driver = { 1185 + .name = PL35X_NANDC_DRIVER_NAME, 1186 + .of_match_table = pl35x_nand_of_match, 1187 + }, 1188 + }; 1189 + module_platform_driver(pl35x_nandc_driver); 1190 + 1191 + MODULE_AUTHOR("Xilinx, Inc."); 1192 + MODULE_ALIAS("platform:" PL35X_NANDC_DRIVER_NAME); 1193 + MODULE_DESCRIPTION("ARM PL35X NAND controller driver"); 1194 + MODULE_LICENSE("GPL");
+14 -9
drivers/mtd/nand/raw/qcom_nandc.c
···
 {
 	struct nand_chip *chip = &host->chip;
 	u32 cmd, cfg0, cfg1, ecc_bch_cfg;
+	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
 
 	if (read) {
 		if (host->use_ecc)
···
 	nandc_set_reg(chip, NAND_DEV0_CFG0, cfg0);
 	nandc_set_reg(chip, NAND_DEV0_CFG1, cfg1);
 	nandc_set_reg(chip, NAND_DEV0_ECC_CFG, ecc_bch_cfg);
-	nandc_set_reg(chip, NAND_EBI2_ECC_BUF_CFG, host->ecc_buf_cfg);
+	if (!nandc->props->qpic_v2)
+		nandc_set_reg(chip, NAND_EBI2_ECC_BUF_CFG, host->ecc_buf_cfg);
 	nandc_set_reg(chip, NAND_FLASH_STATUS, host->clrflashstatus);
 	nandc_set_reg(chip, NAND_READ_STATUS, host->clrreadstatus);
 	nandc_set_reg(chip, NAND_EXEC_CMD, 1);
···
 	write_reg_dma(nandc, NAND_ADDR0, 2, 0);
 	write_reg_dma(nandc, NAND_DEV0_CFG0, 3, 0);
-	write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1, 0);
+	if (!nandc->props->qpic_v2)
+		write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1, 0);
 	write_reg_dma(nandc, NAND_ERASED_CW_DETECT_CFG, 1, 0);
 	write_reg_dma(nandc, NAND_ERASED_CW_DETECT_CFG, 1,
 		      NAND_ERASED_CW_SET | NAND_BAM_NEXT_SGL);
···
 	write_reg_dma(nandc, NAND_ADDR0, 2, 0);
 	write_reg_dma(nandc, NAND_DEV0_CFG0, 3, 0);
-	write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1,
-		      NAND_BAM_NEXT_SGL);
+	if (!nandc->props->qpic_v2)
+		write_reg_dma(nandc, NAND_EBI2_ECC_BUF_CFG, 1,
+			      NAND_BAM_NEXT_SGL);
 }
 
 /*
···
 		| 2 << WR_RD_BSY_GAP
 		| 0 << WIDE_FLASH
 		| 1 << DEV0_CFG1_ECC_DISABLE);
-	nandc_set_reg(chip, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE);
+	if (!nandc->props->qpic_v2)
+		nandc_set_reg(chip, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE);
 
 	/* configure CMD1 and VLD for ONFI param probing in QPIC v1 */
 	if (!nandc->props->qpic_v2) {
···
 	 * ERASED_CW bits are set.
 	 */
 	if (host->bch_enabled) {
-		erased = (erased_cw & ERASED_CW) == ERASED_CW ?
-			 true : false;
+		erased = (erased_cw & ERASED_CW) == ERASED_CW;
 		/*
 		 * For RS ECC, HW reports the erased CW by placing
 		 * special characters at certain offsets in the buffer.
···
 		| ecc_mode << ECC_MODE
 		| host->ecc_bytes_hw << ECC_PARITY_SIZE_BYTES_BCH;
 
-	host->ecc_buf_cfg = 0x203 << NUM_STEPS;
+	if (!nandc->props->qpic_v2)
+		host->ecc_buf_cfg = 0x203 << NUM_STEPS;
 
 	host->clrflashstatus = FS_READY_BSY_N;
 	host->clrreadstatus = 0xc0;
···
 	return 0;
 }
 
-static const char * const probes[] = { "qcomsmem", NULL };
+static const char * const probes[] = { "cmdlinepart", "ofpart", "qcomsmem", NULL };
 
 static int qcom_nand_host_init_and_register(struct qcom_nand_controller *nandc,
 					    struct qcom_nand_host *host,
+3 -4
drivers/mtd/nand/raw/r852.c
···
 	r852_write_reg(dev, R852_CARD_IRQ_ENABLE, card_detect_reg);
 }
 
-static ssize_t r852_media_type_show(struct device *sys_dev,
-				    struct device_attribute *attr, char *buf)
+static ssize_t media_type_show(struct device *sys_dev,
+			       struct device_attribute *attr, char *buf)
 {
 	struct mtd_info *mtd = container_of(sys_dev, struct mtd_info, dev);
 	struct r852_device *dev = r852_get_dev(mtd);
···
 	strcpy(buf, data);
 	return strlen(data);
 }
-
-static DEVICE_ATTR(media_type, S_IRUGO, r852_media_type_show, NULL);
+static DEVICE_ATTR_RO(media_type);
 
 
 /* Detect properties of card in slot */
+1 -3
drivers/mtd/nand/raw/sunxi_nand.c
··· 1972 1972 1973 1973 sunxi_nand = devm_kzalloc(dev, struct_size(sunxi_nand, sels, nsels), 1974 1974 GFP_KERNEL); 1975 - if (!sunxi_nand) { 1976 - dev_err(dev, "could not allocate chip\n"); 1975 + if (!sunxi_nand) 1977 1976 return -ENOMEM; 1978 - } 1979 1977 1980 1978 sunxi_nand->nsels = nsels; 1981 1979
+89 -40
drivers/mtd/nand/spi/core.c
··· 138 138 return 0; 139 139 } 140 140 141 - static int spinand_init_cfg_cache(struct spinand_device *spinand) 141 + static int spinand_read_cfg(struct spinand_device *spinand) 142 142 { 143 143 struct nand_device *nand = spinand_to_nand(spinand); 144 - struct device *dev = &spinand->spimem->spi->dev; 145 144 unsigned int target; 146 145 int ret; 147 - 148 - spinand->cfg_cache = devm_kcalloc(dev, 149 - nand->memorg.ntargets, 150 - sizeof(*spinand->cfg_cache), 151 - GFP_KERNEL); 152 - if (!spinand->cfg_cache) 153 - return -ENOMEM; 154 146 155 147 for (target = 0; target < nand->memorg.ntargets; target++) { 156 148 ret = spinand_select_target(spinand, target); ··· 158 166 if (ret) 159 167 return ret; 160 168 } 169 + 170 + return 0; 171 + } 172 + 173 + static int spinand_init_cfg_cache(struct spinand_device *spinand) 174 + { 175 + struct nand_device *nand = spinand_to_nand(spinand); 176 + struct device *dev = &spinand->spimem->spi->dev; 177 + 178 + spinand->cfg_cache = devm_kcalloc(dev, 179 + nand->memorg.ntargets, 180 + sizeof(*spinand->cfg_cache), 181 + GFP_KERNEL); 182 + if (!spinand->cfg_cache) 183 + return -ENOMEM; 161 184 162 185 return 0; 163 186 } ··· 297 290 { 298 291 struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv; 299 292 struct spinand_device *spinand = nand_to_spinand(nand); 293 + struct mtd_info *mtd = spinand_to_mtd(spinand); 294 + int ret; 300 295 301 296 if (req->mode == MTD_OPS_RAW) 302 297 return 0; ··· 308 299 return 0; 309 300 310 301 /* Finish a page write: check the status, report errors/bitflips */ 311 - return spinand_check_ecc_status(spinand, engine_conf->status); 302 + ret = spinand_check_ecc_status(spinand, engine_conf->status); 303 + if (ret == -EBADMSG) 304 + mtd->ecc_stats.failed++; 305 + else if (ret > 0) 306 + mtd->ecc_stats.corrected += ret; 307 + 308 + return ret; 312 309 } 313 310 314 311 static struct nand_ecc_engine_ops spinand_ondie_ecc_engine_ops = { ··· 650 635 if (ret < 0 && ret != -EBADMSG) 651 636 break; 652 
637 653 - if (ret == -EBADMSG) { 638 + if (ret == -EBADMSG) 654 639 ecc_failed = true; 655 - mtd->ecc_stats.failed++; 656 - } else { 657 - mtd->ecc_stats.corrected += ret; 640 + else 658 641 max_bitflips = max_t(unsigned int, max_bitflips, ret); 659 - } 660 642 661 643 ret = 0; 662 644 ops->retlen += iter.req.datalen; ··· 1105 1093 return 0; 1106 1094 } 1107 1095 1096 + static int spinand_init_flash(struct spinand_device *spinand) 1097 + { 1098 + struct device *dev = &spinand->spimem->spi->dev; 1099 + struct nand_device *nand = spinand_to_nand(spinand); 1100 + int ret, i; 1101 + 1102 + ret = spinand_read_cfg(spinand); 1103 + if (ret) 1104 + return ret; 1105 + 1106 + ret = spinand_init_quad_enable(spinand); 1107 + if (ret) 1108 + return ret; 1109 + 1110 + ret = spinand_upd_cfg(spinand, CFG_OTP_ENABLE, 0); 1111 + if (ret) 1112 + return ret; 1113 + 1114 + ret = spinand_manufacturer_init(spinand); 1115 + if (ret) { 1116 + dev_err(dev, 1117 + "Failed to initialize the SPI NAND chip (err = %d)\n", 1118 + ret); 1119 + return ret; 1120 + } 1121 + 1122 + /* After power up, all blocks are locked, so unlock them here. 
*/ 1123 + for (i = 0; i < nand->memorg.ntargets; i++) { 1124 + ret = spinand_select_target(spinand, i); 1125 + if (ret) 1126 + break; 1127 + 1128 + ret = spinand_lock_block(spinand, BL_ALL_UNLOCKED); 1129 + if (ret) 1130 + break; 1131 + } 1132 + 1133 + if (ret) 1134 + spinand_manufacturer_cleanup(spinand); 1135 + 1136 + return ret; 1137 + } 1138 + 1139 + static void spinand_mtd_resume(struct mtd_info *mtd) 1140 + { 1141 + struct spinand_device *spinand = mtd_to_spinand(mtd); 1142 + int ret; 1143 + 1144 + ret = spinand_reset_op(spinand); 1145 + if (ret) 1146 + return; 1147 + 1148 + ret = spinand_init_flash(spinand); 1149 + if (ret) 1150 + return; 1151 + 1152 + spinand_ecc_enable(spinand, false); 1153 + } 1154 + 1108 1155 static int spinand_init(struct spinand_device *spinand) 1109 1156 { 1110 1157 struct device *dev = &spinand->spimem->spi->dev; 1111 1158 struct mtd_info *mtd = spinand_to_mtd(spinand); 1112 1159 struct nand_device *nand = mtd_to_nanddev(mtd); 1113 - int ret, i; 1160 + int ret; 1114 1161 1115 1162 /* 1116 1163 * We need a scratch buffer because the spi_mem interface requires that ··· 1202 1131 if (ret) 1203 1132 goto err_free_bufs; 1204 1133 1205 - ret = spinand_init_quad_enable(spinand); 1134 + ret = spinand_init_flash(spinand); 1206 1135 if (ret) 1207 1136 goto err_free_bufs; 1208 - 1209 - ret = spinand_upd_cfg(spinand, CFG_OTP_ENABLE, 0); 1210 - if (ret) 1211 - goto err_free_bufs; 1212 - 1213 - ret = spinand_manufacturer_init(spinand); 1214 - if (ret) { 1215 - dev_err(dev, 1216 - "Failed to initialize the SPI NAND chip (err = %d)\n", 1217 - ret); 1218 - goto err_free_bufs; 1219 - } 1220 1137 1221 1138 ret = spinand_create_dirmaps(spinand); 1222 1139 if (ret) { ··· 1212 1153 "Failed to create direct mappings for read/write operations (err = %d)\n", 1213 1154 ret); 1214 1155 goto err_manuf_cleanup; 1215 - } 1216 - 1217 - /* After power up, all blocks are locked, so unlock them here. 
*/ 1218 - for (i = 0; i < nand->memorg.ntargets; i++) { 1219 - ret = spinand_select_target(spinand, i); 1220 - if (ret) 1221 - goto err_manuf_cleanup; 1222 - 1223 - ret = spinand_lock_block(spinand, BL_ALL_UNLOCKED); 1224 - if (ret) 1225 - goto err_manuf_cleanup; 1226 1156 } 1227 1157 1228 1158 ret = nanddev_init(nand, &spinand_ops, THIS_MODULE); ··· 1234 1186 mtd->_block_isreserved = spinand_mtd_block_isreserved; 1235 1187 mtd->_erase = spinand_mtd_erase; 1236 1188 mtd->_max_bad_blocks = nanddev_mtd_max_bad_blocks; 1189 + mtd->_resume = spinand_mtd_resume; 1237 1190 1238 1191 if (nand->ecc.engine) { 1239 1192 ret = mtd_ooblayout_count_freebytes(mtd);
+112
drivers/mtd/nand/spi/macronix.c
··· 186 186 0 /*SPINAND_HAS_QE_BIT*/, 187 187 SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 188 188 mx35lf1ge4ab_ecc_get_status)), 189 + 190 + SPINAND_INFO("MX35LF2G14AC", 191 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x20), 192 + NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 2, 1, 1), 193 + NAND_ECCREQ(4, 512), 194 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 195 + &write_cache_variants, 196 + &update_cache_variants), 197 + SPINAND_HAS_QE_BIT, 198 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 199 + mx35lf1ge4ab_ecc_get_status)), 200 + SPINAND_INFO("MX35UF4G24AD", 201 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb5), 202 + NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 2, 1, 1), 203 + NAND_ECCREQ(8, 512), 204 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 205 + &write_cache_variants, 206 + &update_cache_variants), 207 + SPINAND_HAS_QE_BIT, 208 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 209 + mx35lf1ge4ab_ecc_get_status)), 210 + SPINAND_INFO("MX35UF4GE4AD", 211 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb7), 212 + NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1), 213 + NAND_ECCREQ(8, 512), 214 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 215 + &write_cache_variants, 216 + &update_cache_variants), 217 + SPINAND_HAS_QE_BIT, 218 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 219 + mx35lf1ge4ab_ecc_get_status)), 220 + SPINAND_INFO("MX35UF2G14AC", 221 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa0), 222 + NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 2, 1, 1), 223 + NAND_ECCREQ(4, 512), 224 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 225 + &write_cache_variants, 226 + &update_cache_variants), 227 + SPINAND_HAS_QE_BIT, 228 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 229 + mx35lf1ge4ab_ecc_get_status)), 230 + SPINAND_INFO("MX35UF2G24AD", 231 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa4), 232 + NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1), 233 + NAND_ECCREQ(8, 512), 234 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 235 + &write_cache_variants, 236 + 
&update_cache_variants), 237 + SPINAND_HAS_QE_BIT, 238 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 239 + mx35lf1ge4ab_ecc_get_status)), 240 + SPINAND_INFO("MX35UF2GE4AD", 241 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa6), 242 + NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1), 243 + NAND_ECCREQ(8, 512), 244 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 245 + &write_cache_variants, 246 + &update_cache_variants), 247 + SPINAND_HAS_QE_BIT, 248 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 249 + mx35lf1ge4ab_ecc_get_status)), 250 + SPINAND_INFO("MX35UF2GE4AC", 251 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa2), 252 + NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1), 253 + NAND_ECCREQ(4, 512), 254 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 255 + &write_cache_variants, 256 + &update_cache_variants), 257 + SPINAND_HAS_QE_BIT, 258 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 259 + mx35lf1ge4ab_ecc_get_status)), 260 + SPINAND_INFO("MX35UF1G14AC", 261 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x90), 262 + NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1), 263 + NAND_ECCREQ(4, 512), 264 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 265 + &write_cache_variants, 266 + &update_cache_variants), 267 + SPINAND_HAS_QE_BIT, 268 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 269 + mx35lf1ge4ab_ecc_get_status)), 270 + SPINAND_INFO("MX35UF1G24AD", 271 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x94), 272 + NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1), 273 + NAND_ECCREQ(8, 512), 274 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 275 + &write_cache_variants, 276 + &update_cache_variants), 277 + SPINAND_HAS_QE_BIT, 278 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 279 + mx35lf1ge4ab_ecc_get_status)), 280 + SPINAND_INFO("MX35UF1GE4AD", 281 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x96), 282 + NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1), 283 + NAND_ECCREQ(8, 512), 284 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 285 + &write_cache_variants, 
286 + &update_cache_variants), 287 + SPINAND_HAS_QE_BIT, 288 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 289 + mx35lf1ge4ab_ecc_get_status)), 290 + SPINAND_INFO("MX35UF1GE4AC", 291 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x92), 292 + NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1), 293 + NAND_ECCREQ(4, 512), 294 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 295 + &write_cache_variants, 296 + &update_cache_variants), 297 + SPINAND_HAS_QE_BIT, 298 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 299 + mx35lf1ge4ab_ecc_get_status)), 300 + 189 301 }; 190 302 191 303 static const struct spinand_manufacturer_ops macronix_spinand_manuf_ops = {
-1
drivers/mtd/nftlcore.c
··· 619 619 return BLOCK_NIL; 620 620 } 621 621 //printk("Restarting scan\n"); 622 - lastEUN = BLOCK_NIL; 623 622 continue; 624 623 } 625 624
+2 -5
drivers/mtd/nftlmount.c
··· 188 188 /* memory alloc */ 189 189 nftl->EUNtable = kmalloc_array(nftl->nb_blocks, sizeof(u16), 190 190 GFP_KERNEL); 191 - if (!nftl->EUNtable) { 192 - printk(KERN_NOTICE "NFTL: allocation of EUNtable failed\n"); 191 + if (!nftl->EUNtable) 193 192 return -ENOMEM; 194 - } 195 193 196 194 nftl->ReplUnitTable = kmalloc_array(nftl->nb_blocks, 197 195 sizeof(u16), 198 196 GFP_KERNEL); 199 197 if (!nftl->ReplUnitTable) { 200 198 kfree(nftl->EUNtable); 201 - printk(KERN_NOTICE "NFTL: allocation of ReplUnitTable failed\n"); 202 199 return -ENOMEM; 203 200 } 204 201 ··· 266 269 267 270 buf = kmalloc(SECTORSIZE + mtd->oobsize, GFP_KERNEL); 268 271 if (!buf) 269 - return -1; 272 + return -ENOMEM; 270 273 271 274 ret = -1; 272 275 for (i = 0; i < len; i += SECTORSIZE) {
+1 -1
drivers/mtd/parsers/Kconfig
··· 115 115 116 116 config MTD_PARSER_TRX 117 117 tristate "Parser for TRX format partitions" 118 - depends on MTD && (BCM47XX || ARCH_BCM_5301X || COMPILE_TEST) 118 + depends on MTD && (BCM47XX || ARCH_BCM_5301X || ARCH_MEDIATEK || COMPILE_TEST) 119 119 help 120 120 TRX is a firmware format used by Broadcom on their devices. It 121 121 may contain up to 3/4 partitions (depending on the version).
+8 -1
drivers/mtd/parsers/parser_trx.c
··· 51 51 const struct mtd_partition **pparts, 52 52 struct mtd_part_parser_data *data) 53 53 { 54 + struct device_node *np = mtd_get_of_node(mtd); 54 55 struct mtd_partition *parts; 55 56 struct mtd_partition *part; 56 57 struct trx_header trx; 57 58 size_t bytes_read; 58 59 uint8_t curr_part = 0, i = 0; 60 + uint32_t trx_magic = TRX_MAGIC; 59 61 int err; 62 + 63 + /* Get different magic from device tree if specified */ 64 + err = of_property_read_u32(np, "brcm,trx-magic", &trx_magic); 65 + if (err != 0 && err != -EINVAL) 66 + pr_err("failed to parse \"brcm,trx-magic\" DT attribute, using default: %d\n", err); 60 67 61 68 parts = kcalloc(TRX_PARSER_MAX_PARTS, sizeof(struct mtd_partition), 62 69 GFP_KERNEL); ··· 77 70 return err; 78 71 } 79 72 80 - if (trx.magic != TRX_MAGIC) { 73 + if (trx.magic != trx_magic) { 81 74 kfree(parts); 82 75 return -ENOENT; 83 76 }
+10
drivers/mtd/parsers/qcomsmempart.c
··· 159 159 return ret; 160 160 } 161 161 162 + static void parse_qcomsmem_cleanup(const struct mtd_partition *pparts, 163 + int nr_parts) 164 + { 165 + int i; 166 + 167 + for (i = 0; i < nr_parts; i++) 168 + kfree(pparts[i].name); 169 + } 170 + 162 171 static const struct of_device_id qcomsmem_of_match_table[] = { 163 172 { .compatible = "qcom,smem-part" }, 164 173 {}, ··· 176 167 177 168 static struct mtd_part_parser mtd_parser_qcomsmem = { 178 169 .parse_fn = parse_qcomsmem_part, 170 + .cleanup = parse_qcomsmem_cleanup, 179 171 .name = "qcomsmem", 180 172 .of_match_table = qcomsmem_of_match_table, 181 173 };
+40 -36
drivers/mtd/parsers/redboot.c
··· 17 17 #include <linux/module.h> 18 18 19 19 struct fis_image_desc { 20 - unsigned char name[16]; // Null terminated name 21 - uint32_t flash_base; // Address within FLASH of image 22 - uint32_t mem_base; // Address in memory where it executes 23 - uint32_t size; // Length of image 24 - uint32_t entry_point; // Execution entry point 25 - uint32_t data_length; // Length of actual data 26 - unsigned char _pad[256-(16+7*sizeof(uint32_t))]; 27 - uint32_t desc_cksum; // Checksum over image descriptor 28 - uint32_t file_cksum; // Checksum over image data 20 + unsigned char name[16]; // Null terminated name 21 + u32 flash_base; // Address within FLASH of image 22 + u32 mem_base; // Address in memory where it executes 23 + u32 size; // Length of image 24 + u32 entry_point; // Execution entry point 25 + u32 data_length; // Length of actual data 26 + unsigned char _pad[256 - (16 + 7 * sizeof(u32))]; 27 + u32 desc_cksum; // Checksum over image descriptor 28 + u32 file_cksum; // Checksum over image data 29 29 }; 30 30 31 31 struct fis_list { ··· 45 45 static void parse_redboot_of(struct mtd_info *master) 46 46 { 47 47 struct device_node *np; 48 + struct device_node *npart; 48 49 u32 dirblock; 49 50 int ret; 50 51 ··· 53 52 if (!np) 54 53 return; 55 54 56 - ret = of_property_read_u32(np, "fis-index-block", &dirblock); 55 + npart = of_get_child_by_name(np, "partitions"); 56 + if (!npart) 57 + return; 58 + 59 + ret = of_property_read_u32(npart, "fis-index-block", &dirblock); 57 60 if (ret) 58 61 return; 59 62 ··· 90 85 91 86 parse_redboot_of(master); 92 87 93 - if ( directory < 0 ) { 88 + if (directory < 0) { 94 89 offset = master->size + directory * master->erasesize; 95 90 while (mtd_block_isbad(master, offset)) { 96 91 if (!offset) { 97 - nogood: 98 - printk(KERN_NOTICE "Failed to find a non-bad block to check for RedBoot partition table\n"); 92 + nogood: 93 + pr_notice("Failed to find a non-bad block to check for RedBoot partition table\n"); 99 94 return -EIO; 100 95 } 101 
96 offset -= master->erasesize; ··· 113 108 if (!buf) 114 109 return -ENOMEM; 115 110 116 - printk(KERN_NOTICE "Searching for RedBoot partition table in %s at offset 0x%lx\n", 117 - master->name, offset); 111 + pr_notice("Searching for RedBoot partition table in %s at offset 0x%lx\n", 112 + master->name, offset); 118 113 119 114 ret = mtd_read(master, offset, master->erasesize, &retlen, 120 115 (void *)buf); ··· 150 145 && swab32(buf[i].size) < master->erasesize)) { 151 146 int j; 152 147 /* Update numslots based on actual FIS directory size */ 153 - numslots = swab32(buf[i].size) / sizeof (struct fis_image_desc); 148 + numslots = swab32(buf[i].size) / sizeof(struct fis_image_desc); 154 149 for (j = 0; j < numslots; ++j) { 155 - 156 150 /* A single 0xff denotes a deleted entry. 157 151 * Two of them in a row is the end of the table. 158 152 */ 159 153 if (buf[j].name[0] == 0xff) { 160 - if (buf[j].name[1] == 0xff) { 154 + if (buf[j].name[1] == 0xff) { 161 155 break; 162 156 } else { 163 157 continue; ··· 183 179 } 184 180 if (i == numslots) { 185 181 /* Didn't find it */ 186 - printk(KERN_NOTICE "No RedBoot partition table detected in %s\n", 187 - master->name); 182 + pr_notice("No RedBoot partition table detected in %s\n", 183 + master->name); 188 184 ret = 0; 189 185 goto out; 190 186 } ··· 203 199 break; 204 200 205 201 new_fl = kmalloc(sizeof(struct fis_list), GFP_KERNEL); 206 - namelen += strlen(buf[i].name)+1; 202 + namelen += strlen(buf[i].name) + 1; 207 203 if (!new_fl) { 208 204 ret = -ENOMEM; 209 205 goto out; ··· 212 208 if (data && data->origin) 213 209 buf[i].flash_base -= data->origin; 214 210 else 215 - buf[i].flash_base &= master->size-1; 211 + buf[i].flash_base &= master->size - 1; 216 212 217 213 /* I'm sure the JFFS2 code has done me permanent damage. 
218 214 * I now think the following is _normal_ 219 215 */ 220 216 prev = &fl; 221 - while(*prev && (*prev)->img->flash_base < new_fl->img->flash_base) 217 + while (*prev && (*prev)->img->flash_base < new_fl->img->flash_base) 222 218 prev = &(*prev)->next; 223 219 new_fl->next = *prev; 224 220 *prev = new_fl; ··· 238 234 } 239 235 } 240 236 #endif 241 - parts = kzalloc(sizeof(*parts)*nrparts + nulllen + namelen, GFP_KERNEL); 237 + parts = kzalloc(sizeof(*parts) * nrparts + nulllen + namelen, GFP_KERNEL); 242 238 243 239 if (!parts) { 244 240 ret = -ENOMEM; ··· 247 243 248 244 nullname = (char *)&parts[nrparts]; 249 245 #ifdef CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED 250 - if (nulllen > 0) { 246 + if (nulllen > 0) 251 247 strcpy(nullname, nullstring); 252 - } 253 248 #endif 254 249 names = nullname + nulllen; 255 250 256 - i=0; 251 + i = 0; 257 252 258 253 #ifdef CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED 259 254 if (fl->img->flash_base) { 260 - parts[0].name = nullname; 261 - parts[0].size = fl->img->flash_base; 262 - parts[0].offset = 0; 255 + parts[0].name = nullname; 256 + parts[0].size = fl->img->flash_base; 257 + parts[0].offset = 0; 263 258 i++; 264 259 } 265 260 #endif 266 - for ( ; i<nrparts; i++) { 261 + for ( ; i < nrparts; i++) { 267 262 parts[i].size = fl->img->size; 268 263 parts[i].offset = fl->img->flash_base; 269 264 parts[i].name = names; ··· 270 267 strcpy(names, fl->img->name); 271 268 #ifdef CONFIG_MTD_REDBOOT_PARTS_READONLY 272 269 if (!memcmp(names, "RedBoot", 8) || 273 - !memcmp(names, "RedBoot config", 15) || 274 - !memcmp(names, "FIS directory", 14)) { 270 + !memcmp(names, "RedBoot config", 15) || 271 + !memcmp(names, "FIS directory", 14)) { 275 272 parts[i].mask_flags = MTD_WRITEABLE; 276 273 } 277 274 #endif 278 - names += strlen(names)+1; 275 + names += strlen(names) + 1; 279 276 280 277 #ifdef CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED 281 - if(fl->next && fl->img->flash_base + fl->img->size + master->erasesize <= fl->next->img->flash_base) { 278 + if 
(fl->next && fl->img->flash_base + fl->img->size + master->erasesize <= fl->next->img->flash_base) { 282 279 i++; 283 - parts[i].offset = parts[i-1].size + parts[i-1].offset; 280 + parts[i].offset = parts[i - 1].size + parts[i - 1].offset; 284 281 parts[i].size = fl->next->img->flash_base - parts[i].offset; 285 282 parts[i].name = nullname; 286 283 } ··· 294 291 out: 295 292 while (fl) { 296 293 struct fis_list *old = fl; 294 + 297 295 fl = fl->next; 298 296 kfree(old); 299 297 }
+1 -4
drivers/mtd/rfd_ftl.c
··· 192 192 193 193 part->sector_map = vmalloc(array_size(sizeof(u_long), 194 194 part->sector_count)); 195 - if (!part->sector_map) { 196 - printk(KERN_ERR PREFIX "'%s': unable to allocate memory for " 197 - "sector map", part->mbd.mtd->name); 195 + if (!part->sector_map) 198 196 goto err; 199 - } 200 197 201 198 for (i=0; i<part->sector_count; i++) 202 199 part->sector_map[i] = -1;
+32 -19
drivers/mtd/sm_ftl.c
··· 265 265 again: 266 266 if (try++) { 267 267 /* Avoid infinite recursion on CIS reads, sm_recheck_media 268 - won't help anyway */ 268 + * won't help anyway 269 + */ 269 270 if (zone == 0 && block == ftl->cis_block && boffset == 270 271 ftl->cis_boffset) 271 272 return ret; ··· 277 276 } 278 277 279 278 /* Unfortunately, oob read will _always_ succeed, 280 - despite card removal..... */ 279 + * despite card removal..... 280 + */ 281 281 ret = mtd_read_oob(mtd, sm_mkoffset(ftl, zone, block, boffset), &ops); 282 282 283 283 /* Test for unknown errors */ ··· 413 411 414 412 /* If write fails. try to erase the block */ 415 413 /* This is safe, because we never write in blocks 416 - that contain valuable data. 417 - This is intended to repair block that are marked 418 - as erased, but that isn't fully erased*/ 414 + * that contain valuable data. 415 + * This is intended to repair block that are marked 416 + * as erased, but that isn't fully erased 417 + */ 419 418 420 419 if (sm_erase_block(ftl, zone, block, 0)) 421 420 return -EIO; ··· 451 448 452 449 /* We aren't checking the return value, because we don't care */ 453 450 /* This also fails on fake xD cards, but I guess these won't expose 454 - any bad blocks till fail completely */ 451 + * any bad blocks till fail completely 452 + */ 455 453 for (boffset = 0; boffset < ftl->block_size; boffset += SM_SECTOR_SIZE) 456 454 sm_write_sector(ftl, zone, block, boffset, NULL, &oob); 457 455 } ··· 509 505 510 506 /* First just check that block doesn't look fishy */ 511 507 /* Only blocks that are valid or are sliced in two parts, are 512 - accepted */ 508 + * accepted 509 + */ 513 510 for (boffset = 0; boffset < ftl->block_size; 514 511 boffset += SM_SECTOR_SIZE) { 515 512 ··· 559 554 0x01, 0x03, 0xD9, 0x01, 0xFF, 0x18, 0x02, 0xDF, 0x01, 0x20 560 555 }; 561 556 /* Find out media parameters. 
562 - * This ideally has to be based on nand id, but for now device size is enough */ 557 + * This ideally has to be based on nand id, but for now device size is enough 558 + */ 563 559 static int sm_get_media_info(struct sm_ftl *ftl, struct mtd_info *mtd) 564 560 { 565 561 int i; ··· 613 607 } 614 608 615 609 /* Minimum xD size is 16MiB. Also, all xD cards have standard zone 616 - sizes. SmartMedia cards exist up to 128 MiB and have same layout*/ 610 + * sizes. SmartMedia cards exist up to 128 MiB and have same layout 611 + */ 617 612 if (size_in_megs >= 16) { 618 613 ftl->zone_count = size_in_megs / 16; 619 614 ftl->zone_size = 1024; ··· 789 782 } 790 783 791 784 /* Test to see if block is erased. It is enough to test 792 - first sector, because erase happens in one shot */ 785 + * first sector, because erase happens in one shot 786 + */ 793 787 if (sm_block_erased(&oob)) { 794 788 kfifo_in(&zone->free_sectors, 795 789 (unsigned char *)&block, 2); ··· 800 792 /* If block is marked as bad, skip it */ 801 793 /* This assumes we can trust first sector*/ 802 794 /* However the way the block valid status is defined, ensures 803 - very low probability of failure here */ 795 + * very low probability of failure here 796 + */ 804 797 if (!sm_block_valid(&oob)) { 805 798 dbg("PH %04d <-> <marked bad>", block); 806 799 continue; ··· 812 803 813 804 /* Invalid LBA means that block is damaged. 
*/ 814 805 /* We can try to erase it, or mark it as bad, but 815 - lets leave that to recovery application */ 806 + * lets leave that to recovery application 807 + */ 816 808 if (lba == -2 || lba >= ftl->max_lba) { 817 809 dbg("PH %04d <-> LBA %04d(bad)", block, lba); 818 810 continue; ··· 821 811 822 812 823 813 /* If there is no collision, 824 - just put the sector in the FTL table */ 814 + * just put the sector in the FTL table 815 + */ 825 816 if (zone->lba_to_phys_table[lba] < 0) { 826 817 dbg_verbose("PH %04d <-> LBA %04d", block, lba); 827 818 zone->lba_to_phys_table[lba] = block; ··· 845 834 } 846 835 847 836 /* If both blocks are valid and share same LBA, it means that 848 - they hold different versions of same data. It not 849 - known which is more recent, thus just erase one of them 850 - */ 837 + * they hold different versions of same data. It not 838 + * known which is more recent, thus just erase one of them 839 + */ 851 840 sm_printk("both blocks are valid, erasing the later"); 852 841 sm_erase_block(ftl, zone_num, block, 1); 853 842 } ··· 856 845 zone->initialized = 1; 857 846 858 847 /* No free sectors, means that the zone is heavily damaged, write won't 859 - work, but it can still can be (partially) read */ 848 + * work, but it can still can be (partially) read 849 + */ 860 850 if (!kfifo_len(&zone->free_sectors)) { 861 851 sm_printk("no free blocks in zone %d", zone_num); 862 852 return 0; ··· 964 952 965 953 /* If there are no spare blocks, */ 966 954 /* we could still continue by erasing/writing the current block, 967 - but for such worn out media it doesn't worth the trouble, 968 - and the dangers */ 955 + * but for such worn out media it doesn't worth the trouble, 956 + * and the dangers 957 + */ 969 958 if (kfifo_out(&zone->free_sectors, 970 959 (unsigned char *)&write_sector, 2) != 2) { 971 960 dbg("no free sectors for write!");
+1 -1
drivers/mtd/spi-nor/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - spi-nor-objs := core.o sfdp.o swp.o otp.o 3 + spi-nor-objs := core.o sfdp.o swp.o otp.o sysfs.o 4 4 spi-nor-objs += atmel.o 5 5 spi-nor-objs += catalyst.o 6 6 spi-nor-objs += eon.o
+1
drivers/mtd/spi-nor/controllers/intel-spi-pci.c
··· 74 74 { PCI_VDEVICE(INTEL, 0x4b24), (unsigned long)&bxt_info }, 75 75 { PCI_VDEVICE(INTEL, 0x4da4), (unsigned long)&bxt_info }, 76 76 { PCI_VDEVICE(INTEL, 0x51a4), (unsigned long)&cnl_info }, 77 + { PCI_VDEVICE(INTEL, 0x54a4), (unsigned long)&cnl_info }, 77 78 { PCI_VDEVICE(INTEL, 0x7aa4), (unsigned long)&cnl_info }, 78 79 { PCI_VDEVICE(INTEL, 0xa0a4), (unsigned long)&bxt_info }, 79 80 { PCI_VDEVICE(INTEL, 0xa1a4), (unsigned long)&bxt_info },
+1 -1
drivers/mtd/spi-nor/controllers/nxp-spifi.c
··· 326 326 ctrl |= SPIFI_CTRL_DUAL; 327 327 } 328 328 329 - switch (mode & (SPI_CPHA | SPI_CPOL)) { 329 + switch (mode & SPI_MODE_X_MASK) { 330 330 case SPI_MODE_0: 331 331 ctrl &= ~SPIFI_CTRL_MODE3; 332 332 break;
+18 -4
drivers/mtd/spi-nor/core.c
··· 1318 1318 /* 1319 1319 * Initiate the erasure of a single sector 1320 1320 */ 1321 - static int spi_nor_erase_sector(struct spi_nor *nor, u32 addr) 1321 + int spi_nor_erase_sector(struct spi_nor *nor, u32 addr) 1322 1322 { 1323 1323 int i; 1324 1324 ··· 1411 1411 continue; 1412 1412 1413 1413 spi_nor_div_by_erase_size(erase, addr, &rem); 1414 - if (rem) 1415 - continue; 1416 - else 1414 + if (!rem) 1417 1415 return erase; 1418 1416 } 1419 1417 ··· 2837 2839 return 0; 2838 2840 } 2839 2841 2842 + /** 2843 + * spi_nor_soft_reset() - Perform a software reset 2844 + * @nor: pointer to 'struct spi_nor' 2845 + * 2846 + * Performs a "Soft Reset and Enter Default Protocol Mode" sequence which resets 2847 + * the device to its power-on-reset state. This is useful when the software has 2848 + * made some changes to device (volatile) registers and needs to reset it before 2849 + * shutting down, for example. 2850 + * 2851 + * Not every flash supports this sequence. The same set of opcodes might be used 2852 + * for some other operation on a flash that does not support this. Support for 2853 + * this sequence can be discovered via SFDP in the BFPT table. 2854 + * 2855 + * Return: 0 on success, -errno otherwise. 2856 + */ 2840 2857 static void spi_nor_soft_reset(struct spi_nor *nor) 2841 2858 { 2842 2859 struct spi_mem_op op; ··· 3457 3444 .driver = { 3458 3445 .name = "spi-nor", 3459 3446 .of_match_table = spi_nor_of_table, 3447 + .dev_groups = spi_nor_sysfs_groups, 3460 3448 }, 3461 3449 .id_table = spi_nor_dev_ids, 3462 3450 },
+16
drivers/mtd/spi-nor/core.h
··· 207 207 * @read: read from the SPI NOR OTP area. 208 208 * @write: write to the SPI NOR OTP area. 209 209 * @lock: lock an OTP region. 210 + * @erase: erase an OTP region. 210 211 * @is_locked: check if an OTP region of the SPI NOR is locked. 211 212 */ 212 213 struct spi_nor_otp_ops { ··· 215 214 int (*write)(struct spi_nor *nor, loff_t addr, size_t len, 216 215 const u8 *buf); 217 216 int (*lock)(struct spi_nor *nor, unsigned int region); 217 + int (*erase)(struct spi_nor *nor, loff_t addr); 218 218 int (*is_locked)(struct spi_nor *nor, unsigned int region); 219 219 }; 220 220 ··· 461 459 const struct spi_nor_fixups *fixups; 462 460 }; 463 461 462 + /** 463 + * struct sfdp - SFDP data 464 + * @num_dwords: number of entries in the dwords array 465 + * @dwords: array of double words of the SFDP data 466 + */ 467 + struct sfdp { 468 + size_t num_dwords; 469 + u32 *dwords; 470 + }; 471 + 464 472 /* Manufacturer drivers. */ 465 473 extern const struct spi_nor_manufacturer spi_nor_atmel; 466 474 extern const struct spi_nor_manufacturer spi_nor_catalyst; ··· 489 477 extern const struct spi_nor_manufacturer spi_nor_winbond; 490 478 extern const struct spi_nor_manufacturer spi_nor_xilinx; 491 479 extern const struct spi_nor_manufacturer spi_nor_xmc; 480 + 481 + extern const struct attribute_group *spi_nor_sysfs_groups[]; 492 482 493 483 void spi_nor_spimem_setup_op(const struct spi_nor *nor, 494 484 struct spi_mem_op *op, ··· 517 503 u8 *buf); 518 504 ssize_t spi_nor_write_data(struct spi_nor *nor, loff_t to, size_t len, 519 505 const u8 *buf); 506 + int spi_nor_erase_sector(struct spi_nor *nor, u32 addr); 520 507 521 508 int spi_nor_otp_read_secr(struct spi_nor *nor, loff_t addr, size_t len, u8 *buf); 522 509 int spi_nor_otp_write_secr(struct spi_nor *nor, loff_t addr, size_t len, 523 510 const u8 *buf); 511 + int spi_nor_otp_erase_secr(struct spi_nor *nor, loff_t addr); 524 512 int spi_nor_otp_lock_sr2(struct spi_nor *nor, unsigned int region); 525 513 int 
spi_nor_otp_is_locked_sr2(struct spi_nor *nor, unsigned int region); 526 514
+3 -2
drivers/mtd/spi-nor/macronix.c
··· 49 49 { "mx25u4035", INFO(0xc22533, 0, 64 * 1024, 8, SECT_4K) }, 50 50 { "mx25u8035", INFO(0xc22534, 0, 64 * 1024, 16, SECT_4K) }, 51 51 { "mx25u6435f", INFO(0xc22537, 0, 64 * 1024, 128, SECT_4K) }, 52 - { "mx25l12805d", INFO(0xc22018, 0, 64 * 1024, 256, SECT_4K) }, 52 + { "mx25l12805d", INFO(0xc22018, 0, 64 * 1024, 256, SECT_4K | 53 + SPI_NOR_HAS_LOCK | SPI_NOR_4BIT_BP) }, 53 54 { "mx25l12855e", INFO(0xc22618, 0, 64 * 1024, 256, 0) }, 54 55 { "mx25r1635f", INFO(0xc22815, 0, 64 * 1024, 32, 55 56 SECT_4K | SPI_NOR_DUAL_READ | ··· 73 72 SECT_4K | SPI_NOR_DUAL_READ | 74 73 SPI_NOR_QUAD_READ) }, 75 74 { "mx25l25655e", INFO(0xc22619, 0, 64 * 1024, 512, 0) }, 76 - { "mx66l51235l", INFO(0xc2201a, 0, 64 * 1024, 1024, 75 + { "mx66l51235f", INFO(0xc2201a, 0, 64 * 1024, 1024, 77 76 SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | 78 77 SPI_NOR_4B_OPCODES) }, 79 78 { "mx66u51235f", INFO(0xc2253a, 0, 64 * 1024, 1024,
+145 -15
drivers/mtd/spi-nor/otp.c
···
 #define spi_nor_otp_n_regions(nor) ((nor)->params->otp.org->n_regions)

 /**
- * spi_nor_otp_read_secr() - read OTP data
+ * spi_nor_otp_read_secr() - read security register
  * @nor:	pointer to 'struct spi_nor'
- * @from:	offset to read from
+ * @addr:	offset to read from
  * @len:	number of bytes to read
  * @buf:	pointer to dst buffer
  *
- * Read OTP data from one region by using the SPINOR_OP_RSECR commands. This
- * method is used on GigaDevice and Winbond flashes.
+ * Read a security register by using the SPINOR_OP_RSECR commands.
+ *
+ * In Winbond/GigaDevice datasheets the term "security register" stands for
+ * a one-time-programmable memory area, consisting of multiple bytes (usually
+ * 256). Thus one "security register" maps to one OTP region.
+ *
+ * This method is used on GigaDevice and Winbond flashes.
+ *
+ * Please note, the read must not span multiple registers.
  *
  * Return: number of bytes read successfully, -errno otherwise
  */
···
 	rdesc = nor->dirmap.rdesc;

 	nor->read_opcode = SPINOR_OP_RSECR;
-	nor->addr_width = 3;
 	nor->read_dummy = 8;
 	nor->read_proto = SNOR_PROTO_1_1_1;
 	nor->dirmap.rdesc = NULL;
···
 }

 /**
- * spi_nor_otp_write_secr() - write OTP data
+ * spi_nor_otp_write_secr() - write security register
  * @nor:	pointer to 'struct spi_nor'
- * @to:		offset to write to
+ * @addr:	offset to write to
  * @len:	number of bytes to write
  * @buf:	pointer to src buffer
  *
- * Write OTP data to one region by using the SPINOR_OP_PSECR commands. This
- * method is used on GigaDevice and Winbond flashes.
+ * Write a security register by using the SPINOR_OP_PSECR commands.
  *
- * Please note, the write must not span multiple OTP regions.
+ * For more information on the term "security register", see the documentation
+ * of spi_nor_otp_read_secr().
+ *
+ * This method is used on GigaDevice and Winbond flashes.
+ *
+ * Please note, the write must not span multiple registers.
  *
  * Return: number of bytes written successfully, -errno otherwise
  */
···
 	wdesc = nor->dirmap.wdesc;

 	nor->program_opcode = SPINOR_OP_PSECR;
-	nor->addr_width = 3;
 	nor->write_proto = SNOR_PROTO_1_1_1;
 	nor->dirmap.wdesc = NULL;

 	/*
 	 * We only support a write to one single page. For now all winbond
-	 * flashes only have one page per OTP region.
+	 * flashes only have one page per security register.
 	 */
 	ret = spi_nor_write_enable(nor);
 	if (ret)
···
 	nor->dirmap.wdesc = wdesc;

 	return ret ?: written;
+}
+
+/**
+ * spi_nor_otp_erase_secr() - erase a security register
+ * @nor:	pointer to 'struct spi_nor'
+ * @addr:	offset of the security register to be erased
+ *
+ * Erase a security register by using the SPINOR_OP_ESECR command.
+ *
+ * For more information on the term "security register", see the documentation
+ * of spi_nor_otp_read_secr().
+ *
+ * This method is used on GigaDevice and Winbond flashes.
+ *
+ * Return: 0 on success, -errno otherwise
+ */
+int spi_nor_otp_erase_secr(struct spi_nor *nor, loff_t addr)
+{
+	u8 erase_opcode = nor->erase_opcode;
+	int ret;
+
+	ret = spi_nor_write_enable(nor);
+	if (ret)
+		return ret;
+
+	nor->erase_opcode = SPINOR_OP_ESECR;
+	ret = spi_nor_erase_sector(nor, addr);
+	nor->erase_opcode = erase_opcode;
+	if (ret)
+		return ret;
+
+	return spi_nor_wait_till_ready(nor);
 }

 static int spi_nor_otp_lock_bit_cr(unsigned int region)
···
 	return ret;
 }

+static int spi_nor_mtd_otp_range_is_locked(struct spi_nor *nor, loff_t ofs,
+					   size_t len)
+{
+	const struct spi_nor_otp_ops *ops = nor->params->otp.ops;
+	unsigned int region;
+	int locked;
+
+	/*
+	 * If any of the affected OTP regions are locked the entire range is
+	 * considered locked.
+	 */
+	for (region = spi_nor_otp_offset_to_region(nor, ofs);
+	     region <= spi_nor_otp_offset_to_region(nor, ofs + len - 1);
+	     region++) {
+		locked = ops->is_locked(nor, region);
+		/* take the branch if it is locked or in case of an error */
+		if (locked)
+			return locked;
+	}
+
+	return 0;
+}
+
 static int spi_nor_mtd_otp_read_write(struct mtd_info *mtd, loff_t ofs,
 				      size_t total_len, size_t *retlen,
 				      const u8 *buf, bool is_write)
···
 	if (ofs < 0 || ofs >= spi_nor_otp_size(nor))
 		return 0;

+	/* don't access beyond the end */
+	total_len = min_t(size_t, total_len, spi_nor_otp_size(nor) - ofs);
+
+	if (!total_len)
+		return 0;
+
 	ret = spi_nor_lock_and_prep(nor);
 	if (ret)
 		return ret;

-	/* don't access beyond the end */
-	total_len = min_t(size_t, total_len, spi_nor_otp_size(nor) - ofs);
+	if (is_write) {
+		ret = spi_nor_mtd_otp_range_is_locked(nor, ofs, total_len);
+		if (ret < 0) {
+			goto out;
+		} else if (ret) {
+			ret = -EROFS;
+			goto out;
+		}
+	}

-	*retlen = 0;
 	while (total_len) {
 		/*
 		 * The OTP regions are mapped into a contiguous area starting
···
 			  size_t *retlen, const u8 *buf)
 {
 	return spi_nor_mtd_otp_read_write(mtd, to, len, retlen, buf, true);
+}
+
+static int spi_nor_mtd_otp_erase(struct mtd_info *mtd, loff_t from, size_t len)
+{
+	struct spi_nor *nor = mtd_to_spi_nor(mtd);
+	const struct spi_nor_otp_ops *ops = nor->params->otp.ops;
+	const size_t rlen = spi_nor_otp_region_len(nor);
+	unsigned int region;
+	loff_t rstart;
+	int ret;
+
+	/* OTP erase is optional */
+	if (!ops->erase)
+		return -EOPNOTSUPP;
+
+	if (!len)
+		return 0;
+
+	if (from < 0 || (from + len) > spi_nor_otp_size(nor))
+		return -EINVAL;
+
+	/* the user has to explicitly ask for whole regions */
+	if (!IS_ALIGNED(len, rlen) || !IS_ALIGNED(from, rlen))
+		return -EINVAL;
+
+	ret = spi_nor_lock_and_prep(nor);
+	if (ret)
+		return ret;
+
+	ret = spi_nor_mtd_otp_range_is_locked(nor, from, len);
+	if (ret < 0) {
+		goto out;
+	} else if (ret) {
+		ret = -EROFS;
+		goto out;
+	}
+
+	while (len) {
+		region = spi_nor_otp_offset_to_region(nor, from);
+		rstart = spi_nor_otp_region_start(nor, region);
+
+		ret = ops->erase(nor, rstart);
+		if (ret)
+			goto out;
+
+		len -= rlen;
+		from += rlen;
+	}
+
+out:
+	spi_nor_unlock_and_unprep(nor);
+
+	return ret;
 }

 static int spi_nor_mtd_otp_lock(struct mtd_info *mtd, loff_t from, size_t len)
···
 	mtd->_read_user_prot_reg = spi_nor_mtd_otp_read;
 	mtd->_write_user_prot_reg = spi_nor_mtd_otp_write;
 	mtd->_lock_user_prot_reg = spi_nor_mtd_otp_lock;
+	mtd->_erase_user_prot_reg = spi_nor_mtd_otp_erase;
 }
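The new spi_nor_mtd_otp_erase() only accepts whole, region-aligned ranges and walks them region by region. A minimal userspace model of those checks, with made-up sizes (a 256-byte security register, three regions) rather than values from any real flash:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Userspace model of the bounds/alignment checks spi_nor_mtd_otp_erase()
 * performs: the caller has to erase whole "security registers" (OTP
 * regions), so both offset and length must be region-aligned and inside
 * the OTP area. REGION_LEN/OTP_SIZE are illustrative assumptions.
 */
#define REGION_LEN 256u			/* assumed 256-byte security register */
#define OTP_SIZE   (3u * REGION_LEN)	/* assumed three regions */

static int otp_erase_check(long from, size_t len)
{
	if (!len)
		return 0;			/* nothing to do */
	if (from < 0 || (size_t)from + len > OTP_SIZE)
		return -1;			/* -EINVAL: out of bounds */
	if (len % REGION_LEN || from % REGION_LEN)
		return -1;			/* -EINVAL: partial region */
	return 0;
}

/* Offset-to-region mapping, like spi_nor_otp_offset_to_region() */
static unsigned int otp_offset_to_region(long ofs)
{
	return (unsigned int)(ofs / REGION_LEN);
}
```

A request that passes these checks is then forwarded one region at a time to the driver's ->erase() hook, after the lock check has ruled out -EROFS.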
+58
drivers/mtd/spi-nor/sfdp.c
···
 	(((p)->parameter_table_pointer[2] << 16) | \
 	 ((p)->parameter_table_pointer[1] << 8) | \
 	 ((p)->parameter_table_pointer[0] << 0))
+#define SFDP_PARAM_HEADER_PARAM_LEN(p) ((p)->length * 4)

 #define SFDP_BFPT_ID		0xff00	/* Basic Flash Parameter Table */
 #define SFDP_SECTOR_MAP_ID	0xff81	/* Sector Map Table */
···
 	struct sfdp_parameter_header *param_headers = NULL;
 	struct sfdp_header header;
 	struct device *dev = nor->dev;
+	struct sfdp *sfdp;
+	size_t sfdp_size;
 	size_t psize;
 	int i, err;
···
 	if (SFDP_PARAM_HEADER_ID(bfpt_header) != SFDP_BFPT_ID ||
 	    bfpt_header->major != SFDP_JESD216_MAJOR)
 		return -EINVAL;
+
+	sfdp_size = SFDP_PARAM_HEADER_PTP(bfpt_header) +
+		    SFDP_PARAM_HEADER_PARAM_LEN(bfpt_header);

 	/*
 	 * Allocate memory then read all parameter headers with a single
···
 			goto exit;
 		}
 	}
+
+	/*
+	 * Cache the complete SFDP data. It is not (easily) possible to fetch
+	 * SFDP after probe time and we need it for the sysfs access.
+	 */
+	for (i = 0; i < header.nph; i++) {
+		param_header = &param_headers[i];
+		sfdp_size = max_t(size_t, sfdp_size,
+				  SFDP_PARAM_HEADER_PTP(param_header) +
+				  SFDP_PARAM_HEADER_PARAM_LEN(param_header));
+	}
+
+	/*
+	 * Limit the total size to a reasonable value to avoid allocating too
+	 * much memory just because the flash returned some insane values.
+	 */
+	if (sfdp_size > PAGE_SIZE) {
+		dev_dbg(dev, "SFDP data (%zu) too big, truncating\n",
+			sfdp_size);
+		sfdp_size = PAGE_SIZE;
+	}
+
+	sfdp = devm_kzalloc(dev, sizeof(*sfdp), GFP_KERNEL);
+	if (!sfdp) {
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/*
+	 * The SFDP is organized in chunks of DWORDs. Thus, in theory, the
+	 * sfdp_size should be a multiple of DWORDs. But in case a flash
+	 * is not spec compliant, make sure that we have enough space to store
+	 * the complete SFDP data.
+	 */
+	sfdp->num_dwords = DIV_ROUND_UP(sfdp_size, sizeof(*sfdp->dwords));
+	sfdp->dwords = devm_kcalloc(dev, sfdp->num_dwords,
+				    sizeof(*sfdp->dwords), GFP_KERNEL);
+	if (!sfdp->dwords) {
+		err = -ENOMEM;
+		devm_kfree(dev, sfdp);
+		goto exit;
+	}
+
+	err = spi_nor_read_sfdp(nor, 0, sfdp_size, sfdp->dwords);
+	if (err < 0) {
+		dev_dbg(dev, "failed to read SFDP data\n");
+		devm_kfree(dev, sfdp->dwords);
+		devm_kfree(dev, sfdp);
+		goto exit;
+	}
+
+	nor->sfdp = sfdp;

 	/*
 	 * Check other parameter headers to get the latest revision of
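The cache sizing above can be modeled standalone: the blob must reach the end of the farthest parameter table (table pointer plus length in DWORDs), is capped at PAGE_SIZE so a misbehaving flash cannot force a huge allocation, and is stored as a whole number of DWORDs. A sketch with illustrative names (not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the sfdp_size computation in spi_nor_parse_sfdp() */
#define MODEL_PAGE_SIZE 4096u

struct param_header {
	size_t ptp;		/* parameter table pointer (byte offset) */
	size_t len_dwords;	/* table length in DWORDs */
};

static size_t sfdp_cache_dwords(const struct param_header *h, int nph)
{
	size_t size = 0;
	int i;

	for (i = 0; i < nph; i++) {
		size_t end = h[i].ptp + h[i].len_dwords * 4;

		if (end > size)
			size = end;	/* max_t() over all headers */
	}

	if (size > MODEL_PAGE_SIZE)	/* truncate, like the kernel does */
		size = MODEL_PAGE_SIZE;

	return (size + 3) / 4;		/* DIV_ROUND_UP to whole DWORDs */
}
```

The rounding up matters for flashes whose advertised table end is not DWORD-aligned: the cache still holds the complete data.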
+93
drivers/mtd/spi-nor/sysfs.c
···
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/mtd/spi-nor.h>
+#include <linux/spi/spi.h>
+#include <linux/spi/spi-mem.h>
+#include <linux/sysfs.h>
+
+#include "core.h"
+
+static ssize_t manufacturer_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct spi_device *spi = to_spi_device(dev);
+	struct spi_mem *spimem = spi_get_drvdata(spi);
+	struct spi_nor *nor = spi_mem_get_drvdata(spimem);
+
+	return sysfs_emit(buf, "%s\n", nor->manufacturer->name);
+}
+static DEVICE_ATTR_RO(manufacturer);
+
+static ssize_t partname_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
+{
+	struct spi_device *spi = to_spi_device(dev);
+	struct spi_mem *spimem = spi_get_drvdata(spi);
+	struct spi_nor *nor = spi_mem_get_drvdata(spimem);
+
+	return sysfs_emit(buf, "%s\n", nor->info->name);
+}
+static DEVICE_ATTR_RO(partname);
+
+static ssize_t jedec_id_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
+{
+	struct spi_device *spi = to_spi_device(dev);
+	struct spi_mem *spimem = spi_get_drvdata(spi);
+	struct spi_nor *nor = spi_mem_get_drvdata(spimem);
+
+	return sysfs_emit(buf, "%*phN\n", nor->info->id_len, nor->info->id);
+}
+static DEVICE_ATTR_RO(jedec_id);
+
+static struct attribute *spi_nor_sysfs_entries[] = {
+	&dev_attr_manufacturer.attr,
+	&dev_attr_partname.attr,
+	&dev_attr_jedec_id.attr,
+	NULL
+};
+
+static ssize_t sfdp_read(struct file *filp, struct kobject *kobj,
+			 struct bin_attribute *bin_attr, char *buf,
+			 loff_t off, size_t count)
+{
+	struct spi_device *spi = to_spi_device(kobj_to_dev(kobj));
+	struct spi_mem *spimem = spi_get_drvdata(spi);
+	struct spi_nor *nor = spi_mem_get_drvdata(spimem);
+	struct sfdp *sfdp = nor->sfdp;
+	size_t sfdp_size = sfdp->num_dwords * sizeof(*sfdp->dwords);
+
+	return memory_read_from_buffer(buf, count, &off, nor->sfdp->dwords,
+				       sfdp_size);
+}
+static BIN_ATTR_RO(sfdp, 0);
+
+static struct bin_attribute *spi_nor_sysfs_bin_entries[] = {
+	&bin_attr_sfdp,
+	NULL
+};
+
+static umode_t spi_nor_sysfs_is_bin_visible(struct kobject *kobj,
+					    struct bin_attribute *attr, int n)
+{
+	struct spi_device *spi = to_spi_device(kobj_to_dev(kobj));
+	struct spi_mem *spimem = spi_get_drvdata(spi);
+	struct spi_nor *nor = spi_mem_get_drvdata(spimem);
+
+	if (attr == &bin_attr_sfdp && nor->sfdp)
+		return 0444;
+
+	return 0;
+}
+
+static const struct attribute_group spi_nor_sysfs_group = {
+	.name		= "spi-nor",
+	.is_bin_visible	= spi_nor_sysfs_is_bin_visible,
+	.attrs		= spi_nor_sysfs_entries,
+	.bin_attrs	= spi_nor_sysfs_bin_entries,
+};
+
+const struct attribute_group *spi_nor_sysfs_groups[] = {
+	&spi_nor_sysfs_group,
+	NULL
+};
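The sfdp binary attribute leans on memory_read_from_buffer() semantics: a read past the end returns 0 (EOF), a read crossing the end is truncated, and the position advances by the amount copied. A minimal userspace re-implementation for illustration (the kernel helper returns ssize_t; long is used here for simplicity):

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/* Userspace model of memory_read_from_buffer() */
static long read_from_buffer(void *to, size_t count, long *ppos,
			     const void *from, size_t available)
{
	long pos = *ppos;

	if (pos < 0)
		return -1;			/* -EINVAL in the kernel */
	if ((size_t)pos >= available)
		return 0;			/* EOF */
	if (count > available - (size_t)pos)
		count = available - (size_t)pos;	/* truncate */

	memcpy(to, (const char *)from + pos, count);
	*ppos = pos + (long)count;

	return (long)count;
}

/* Drain a 10-byte source in 'chunk'-sized reads; return total bytes read */
static size_t read_all(size_t chunk)
{
	static const char src[10] = "0123456789";
	char buf[16];
	long pos = 0, n;
	size_t total = 0;

	while ((n = read_from_buffer(buf, chunk, &pos, src, sizeof(src))) > 0)
		total += (size_t)n;
	return total;
}
```

These semantics are what let userspace dump the whole blob with plain read() loops, regardless of the chunk size it chooses.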
+1
drivers/mtd/spi-nor/winbond.c
···
 static const struct spi_nor_otp_ops winbond_otp_ops = {
 	.read = spi_nor_otp_read_secr,
 	.write = spi_nor_otp_write_secr,
+	.erase = spi_nor_otp_erase_secr,
 	.lock = spi_nor_otp_lock_sr2,
 	.is_locked = spi_nor_otp_is_locked_sr2,
 };
+1 -6
drivers/mtd/tests/oobtest.c
···
 	err = mtd_write_oob(mtd, addr0, &ops);
 	if (err) {
 		pr_info("error occurred as expected\n");
-		err = 0;
 	} else {
 		pr_err("error: can write past end of OOB\n");
 		errcnt += 1;
···

 	if (err) {
 		pr_info("error occurred as expected\n");
-		err = 0;
 	} else {
 		pr_err("error: can read past end of OOB\n");
 		errcnt += 1;
···
 	err = mtd_write_oob(mtd, mtd->size - mtd->writesize, &ops);
 	if (err) {
 		pr_info("error occurred as expected\n");
-		err = 0;
 	} else {
 		pr_err("error: wrote past end of device\n");
 		errcnt += 1;
···

 	if (err) {
 		pr_info("error occurred as expected\n");
-		err = 0;
 	} else {
 		pr_err("error: read past end of device\n");
 		errcnt += 1;
···
 	err = mtd_write_oob(mtd, mtd->size - mtd->writesize, &ops);
 	if (err) {
 		pr_info("error occurred as expected\n");
-		err = 0;
 	} else {
 		pr_err("error: wrote past end of device\n");
 		errcnt += 1;
···

 	if (err) {
 		pr_info("error occurred as expected\n");
-		err = 0;
 	} else {
 		pr_err("error: read past end of device\n");
 		errcnt += 1;
···
 			    (long long)addr);
 			errcnt += 1;
 			if (errcnt > 1000) {
+				err = -EINVAL;
 				pr_err("error: too many errors\n");
 				goto out;
 			}
-2
drivers/mtd/tests/torturetest.c
···
 	if (!bad_ebs)
 		goto out_check_buf;

-	err = 0;
-
 	/* Initialize patterns */
 	memset(patt_FF, 0xFF, mtd->erasesize);
 	for (i = 0; i < mtd->erasesize / pgsize; i++) {
+3 -1
drivers/nvmem/core.c
···
 	nvmem->reg_write = config->reg_write;
 	nvmem->keepout = config->keepout;
 	nvmem->nkeepout = config->nkeepout;
-	if (!config->no_of_node)
+	if (config->of_node)
+		nvmem->dev.of_node = config->of_node;
+	else if (!config->no_of_node)
 		nvmem->dev.of_node = config->dev->of_node;

 	switch (config->id) {
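The of_node selection added to nvmem_register() can be modeled standalone: an explicit config->of_node wins, otherwise the parent device's node is inherited unless no_of_node suppresses it. struct node below is a stand-in for the kernel's struct device_node:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in type for struct device_node */
struct node { int dummy; };

static struct node cfg_node, parent_node;	/* placeholder DT nodes */

static const struct node *pick_of_node(const struct node *explicit_node,
				       int no_of_node,
				       const struct node *parent)
{
	if (explicit_node)
		return explicit_node;	/* config->of_node takes precedence */
	if (!no_of_node)
		return parent;		/* default: inherit from the parent */
	return NULL;			/* detached from the device tree */
}
```

This is what lets the MTD OTP code hand its own DT node to the NVMEM provider instead of the parent SPI device's node.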
+2
include/linux/mtd/mtd.h
···
 	int usecount;
 	struct mtd_debug_info dbg;
 	struct nvmem_device *nvmem;
+	struct nvmem_device *otp_user_nvmem;
+	struct nvmem_device *otp_factory_nvmem;

 	/*
 	 * Parent device from the MTD partition point of view.
+26 -15
include/linux/mtd/onfi.h
···
 #define __LINUX_MTD_ONFI_H

 #include <linux/types.h>
+#include <linux/bitfield.h>

 /* ONFI version bits */
 #define ONFI_VERSION_1_0		BIT(1)
···
 #define ONFI_VERSION_4_0		BIT(9)

 /* ONFI features */
-#define ONFI_FEATURE_16_BIT_BUS		(1 << 0)
-#define ONFI_FEATURE_EXT_PARAM_PAGE	(1 << 7)
+#define ONFI_FEATURE_16_BIT_BUS		BIT(0)
+#define ONFI_FEATURE_NV_DDR		BIT(5)
+#define ONFI_FEATURE_EXT_PARAM_PAGE	BIT(7)

 /* ONFI timing mode, used in both asynchronous and synchronous mode */
-#define ONFI_TIMING_MODE_0		(1 << 0)
-#define ONFI_TIMING_MODE_1		(1 << 1)
-#define ONFI_TIMING_MODE_2		(1 << 2)
-#define ONFI_TIMING_MODE_3		(1 << 3)
-#define ONFI_TIMING_MODE_4		(1 << 4)
-#define ONFI_TIMING_MODE_5		(1 << 5)
-#define ONFI_TIMING_MODE_UNKNOWN	(1 << 6)
+#define ONFI_DATA_INTERFACE_SDR		0
+#define ONFI_DATA_INTERFACE_NVDDR	BIT(4)
+#define ONFI_DATA_INTERFACE_NVDDR2	BIT(5)
+#define ONFI_TIMING_MODE_0		BIT(0)
+#define ONFI_TIMING_MODE_1		BIT(1)
+#define ONFI_TIMING_MODE_2		BIT(2)
+#define ONFI_TIMING_MODE_3		BIT(3)
+#define ONFI_TIMING_MODE_4		BIT(4)
+#define ONFI_TIMING_MODE_5		BIT(5)
+#define ONFI_TIMING_MODE_UNKNOWN	BIT(6)
+#define ONFI_TIMING_MODE_PARAM(x)	FIELD_GET(GENMASK(3, 0), (x))

 /* ONFI feature number/address */
 #define ONFI_FEATURE_NUMBER		256
···
 #define ONFI_SUBFEATURE_PARAM_LEN	4

 /* ONFI optional commands SET/GET FEATURES supported? */
-#define ONFI_OPT_CMD_SET_GET_FEATURES	(1 << 2)
+#define ONFI_OPT_CMD_SET_GET_FEATURES	BIT(2)

 struct nand_onfi_params {
 	/* rev info and features block */
···

 	/* electrical parameter block */
 	u8 io_pin_capacitance_max;
-	__le16 async_timing_mode;
+	__le16 sdr_timing_modes;
 	__le16 program_cache_timing_mode;
 	__le16 t_prog;
 	__le16 t_bers;
 	__le16 t_r;
 	__le16 t_ccs;
-	__le16 src_sync_timing_mode;
-	u8 src_ssync_features;
+	u8 nvddr_timing_modes;
+	u8 nvddr2_timing_modes;
+	u8 nvddr_nvddr2_features;
 	__le16 clk_pin_capacitance_typ;
 	__le16 io_pin_capacitance_typ;
 	__le16 input_pin_capacitance_typ;
···
 * @tBERS: Block erase time
 * @tR: Page read time
 * @tCCS: Change column setup time
- * @async_timing_mode: Supported asynchronous timing mode
+ * @fast_tCAD: Command/Address/Data slow or fast delay (NV-DDR only)
+ * @sdr_timing_modes: Supported asynchronous/SDR timing modes
+ * @nvddr_timing_modes: Supported source synchronous/NV-DDR timing modes
 * @vendor_revision: Vendor specific revision number
 * @vendor: Vendor specific data
 */
···
 	u16 tBERS;
 	u16 tR;
 	u16 tCCS;
-	u16 async_timing_mode;
+	bool fast_tCAD;
+	u16 sdr_timing_modes;
+	u16 nvddr_timing_modes;
 	u16 vendor_revision;
 	u8 vendor[88];
 };
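What the new ONFI_TIMING_MODE_PARAM(x) macro computes: FIELD_GET(GENMASK(3, 0), x) pulls the timing mode number out of the low nibble of a SET/GET FEATURES payload. Re-implemented here without `<linux/bitfield.h>` for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace equivalent of the kernel's GENMASK() for 32-bit masks */
#define MY_GENMASK(h, l) \
	((uint32_t)(((~0u) >> (31 - (h))) & ((~0u) << (l))))

static uint32_t my_field_get(uint32_t mask, uint32_t val)
{
	/* mask, then shift the field down to bit 0, like FIELD_GET() */
	val &= mask;
	while (mask && !(mask & 1)) {
		mask >>= 1;
		val >>= 1;
	}
	return val;
}

/* Model of ONFI_TIMING_MODE_PARAM(x) */
static unsigned int onfi_timing_mode_param(uint32_t x)
{
	return my_field_get(MY_GENMASK(3, 0), x);
}
```

The kernel version does the shift at compile time via FIELD_GET(); the loop here is just the runtime equivalent.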
+157 -4
include/linux/mtd/rawnand.h
···
 #include <linux/types.h>

 struct nand_chip;
+struct gpio_desc;

 /* The maximum number of NAND chips in an array */
 #define NAND_MAX_CHIPS		8
···
 * This struct defines the timing requirements of a SDR NAND chip.
 * This information can be found in every NAND datasheet, and the meaning of
 * the timings is described in the ONFI specification:
- * www.onfi.org/~/media/ONFI/specs/onfi_3_1_spec.pdf (chapter 4.15 Timing
- * Parameters)
+ * https://media-www.micron.com/-/media/client/onfi/specs/onfi_3_1_spec.pdf
+ * (chapter 4.15 Timing Parameters)
 *
 * All these timings are expressed in picoseconds.
 *
···
 };

 /**
+ * struct nand_nvddr_timings - NV-DDR NAND chip timings
+ *
+ * This struct defines the timing requirements of a NV-DDR NAND data interface.
+ * This information can be found in every NAND datasheet, and the meaning of
+ * the timings is described in the ONFI specification:
+ * https://media-www.micron.com/-/media/client/onfi/specs/onfi_4_1_gold.pdf
+ * (chapter 4.18.2 NV-DDR)
+ *
+ * All these timings are expressed in picoseconds.
+ *
+ * @tBERS_max: Block erase time
+ * @tCCS_min: Change column setup time
+ * @tPROG_max: Page program time
+ * @tR_max: Page read time
+ * @tAC_min: Access window of DQ[7:0] from CLK
+ * @tAC_max: Access window of DQ[7:0] from CLK
+ * @tADL_min: ALE to data loading time
+ * @tCAD_min: Command, Address, Data delay
+ * @tCAH_min: Command/Address DQ hold time
+ * @tCALH_min: W/R_n, CLE and ALE hold time
+ * @tCALS_min: W/R_n, CLE and ALE setup time
+ * @tCAS_min: Command/address DQ setup time
+ * @tCEH_min: CE# high hold time
+ * @tCH_min: CE# hold time
+ * @tCK_min: Average clock cycle time
+ * @tCS_min: CE# setup time
+ * @tDH_min: Data hold time
+ * @tDQSCK_min: Start of the access window of DQS from CLK
+ * @tDQSCK_max: End of the access window of DQS from CLK
+ * @tDQSD_min: Min W/R_n low to DQS/DQ driven by device
+ * @tDQSD_max: Max W/R_n low to DQS/DQ driven by device
+ * @tDQSHZ_max: W/R_n high to DQS/DQ tri-state by device
+ * @tDQSQ_max: DQS-DQ skew, DQS to last DQ valid, per access
+ * @tDS_min: Data setup time
+ * @tDSC_min: DQS cycle time
+ * @tFEAT_max: Busy time for Set Features and Get Features
+ * @tITC_max: Interface and Timing Mode Change time
+ * @tQHS_max: Data hold skew factor
+ * @tRHW_min: Data output cycle to command, address, or data input cycle
+ * @tRR_min: Ready to RE# low (data only)
+ * @tRST_max: Device reset time, measured from the falling edge of R/B# to the
+ *	      rising edge of R/B#.
+ * @tWB_max: WE# high to SR[6] low
+ * @tWHR_min: WE# high to RE# low
+ * @tWRCK_min: W/R_n low to data output cycle
+ * @tWW_min: WP# transition to WE# low
+ */
+struct nand_nvddr_timings {
+	u64 tBERS_max;
+	u32 tCCS_min;
+	u64 tPROG_max;
+	u64 tR_max;
+	u32 tAC_min;
+	u32 tAC_max;
+	u32 tADL_min;
+	u32 tCAD_min;
+	u32 tCAH_min;
+	u32 tCALH_min;
+	u32 tCALS_min;
+	u32 tCAS_min;
+	u32 tCEH_min;
+	u32 tCH_min;
+	u32 tCK_min;
+	u32 tCS_min;
+	u32 tDH_min;
+	u32 tDQSCK_min;
+	u32 tDQSCK_max;
+	u32 tDQSD_min;
+	u32 tDQSD_max;
+	u32 tDQSHZ_max;
+	u32 tDQSQ_max;
+	u32 tDS_min;
+	u32 tDSC_min;
+	u32 tFEAT_max;
+	u32 tITC_max;
+	u32 tQHS_max;
+	u32 tRHW_min;
+	u32 tRR_min;
+	u32 tRST_max;
+	u32 tWB_max;
+	u32 tWHR_min;
+	u32 tWRCK_min;
+	u32 tWW_min;
+};
+
+/*
+ * While timings related to the data interface itself are mostly different
+ * between SDR and NV-DDR, timings related to the internal chip behavior are
+ * common. IOW, the following entries which describe the internal delays have
+ * the same definition and are shared in both SDR and NV-DDR timing structures:
+ * - tADL_min
+ * - tBERS_max
+ * - tCCS_min
+ * - tFEAT_max
+ * - tPROG_max
+ * - tR_max
+ * - tRR_min
+ * - tRST_max
+ * - tWB_max
+ *
+ * The below macros return the value of a given timing, no matter the interface.
+ */
+#define NAND_COMMON_TIMING_PS(conf, timing_name)		\
+	nand_interface_is_sdr(conf) ?				\
+		nand_get_sdr_timings(conf)->timing_name :	\
+		nand_get_nvddr_timings(conf)->timing_name
+
+#define NAND_COMMON_TIMING_MS(conf, timing_name) \
+	PSEC_TO_MSEC(NAND_COMMON_TIMING_PS((conf), timing_name))
+
+#define NAND_COMMON_TIMING_NS(conf, timing_name) \
+	PSEC_TO_NSEC(NAND_COMMON_TIMING_PS((conf), timing_name))
+
+/**
 * enum nand_interface_type - NAND interface type
 * @NAND_SDR_IFACE:	Single Data Rate interface
+ * @NAND_NVDDR_IFACE:	Double Data Rate interface
 */
 enum nand_interface_type {
 	NAND_SDR_IFACE,
+	NAND_NVDDR_IFACE,
 };

 /**
···
 * @timings:		The timing information
 * @timings.mode:	Timing mode as defined in the specification
 * @timings.sdr:	Use it when @type is %NAND_SDR_IFACE.
+ * @timings.nvddr:	Use it when @type is %NAND_NVDDR_IFACE.
 */
 struct nand_interface_config {
 	enum nand_interface_type type;
···
 		unsigned int mode;
 		union {
 			struct nand_sdr_timings sdr;
+			struct nand_nvddr_timings nvddr;
 		};
 	} timings;
 };
+
+/**
+ * nand_interface_is_sdr - get the interface type
+ * @conf:	The data interface
+ */
+static bool nand_interface_is_sdr(const struct nand_interface_config *conf)
+{
+	return conf->type == NAND_SDR_IFACE;
+}
+
+/**
+ * nand_interface_is_nvddr - get the interface type
+ * @conf:	The data interface
+ */
+static bool nand_interface_is_nvddr(const struct nand_interface_config *conf)
+{
+	return conf->type == NAND_NVDDR_IFACE;
+}

 /**
 * nand_get_sdr_timings - get SDR timing from data interface
···
 static inline const struct nand_sdr_timings *
 nand_get_sdr_timings(const struct nand_interface_config *conf)
 {
-	if (conf->type != NAND_SDR_IFACE)
+	if (!nand_interface_is_sdr(conf))
 		return ERR_PTR(-EINVAL);

 	return &conf->timings.sdr;
+}
+
+/**
+ * nand_get_nvddr_timings - get NV-DDR timing from data interface
+ * @conf:	The data interface
+ */
+static inline const struct nand_nvddr_timings *
+nand_get_nvddr_timings(const struct nand_interface_config *conf)
+{
+	if (!nand_interface_is_nvddr(conf))
+		return ERR_PTR(-EINVAL);
+
+	return &conf->timings.nvddr;
 }

 /**
···
 * instruction and have no physical pin to check it.
 */
 int nand_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms);
-struct gpio_desc;
 int nand_gpio_waitrdy(struct nand_chip *chip, struct gpio_desc *gpiod,
 		      unsigned long timeout_ms);
···

 	return chip->data_buf;
 }
+
+/* Parse the gpio-cs property */
+int rawnand_dt_parse_gpio_cs(struct device *dev, struct gpio_desc ***cs_array,
+			     unsigned int *ncs_array);

 #endif /* __LINUX_MTD_RAWNAND_H */
+2
include/linux/mtd/spi-nor.h
···
 * @read_proto:		the SPI protocol for read operations
 * @write_proto:	the SPI protocol for write operations
 * @reg_proto:		the SPI protocol for read_reg/write_reg/erase operations
+ * @sfdp:		the SFDP data of the flash
 * @controller_ops:	SPI NOR controller driver specific operations.
 * @params:		[FLASH-SPECIFIC] SPI NOR flash parameters and settings.
 *			The structure includes legacy flash parameters and
···
 	bool sst_write_second;
 	u32 flags;
 	enum spi_nor_cmd_ext cmd_ext_type;
+	struct sfdp *sfdp;

 	const struct spi_nor_controller_ops *controller_ops;

+2
include/linux/nvmem-provider.h
···
 * @type:	Type of the nvmem storage
 * @read_only:	Device is read-only.
 * @root_only:	Device is accessible to root only.
+ * @of_node:	If given, this will be used instead of the parent's of_node.
 * @no_of_node:	Device should not use the parent's of_node even if it's !NULL.
 * @reg_read:	Callback to read data.
 * @reg_write:	Callback to write data.
···
 	enum nvmem_type type;
 	bool read_only;
 	bool root_only;
+	struct device_node *of_node;
 	bool no_of_node;
 	nvmem_reg_read_t reg_read;
 	nvmem_reg_write_t reg_write;
-30
include/linux/pl353-smc.h
···
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * ARM PL353 SMC Driver Header
- *
- * Copyright (C) 2012 - 2018 Xilinx, Inc
- */
-
-#ifndef __LINUX_PL353_SMC_H
-#define __LINUX_PL353_SMC_H
-
-enum pl353_smc_ecc_mode {
-	PL353_SMC_ECCMODE_BYPASS = 0,
-	PL353_SMC_ECCMODE_APB = 1,
-	PL353_SMC_ECCMODE_MEM = 2
-};
-
-enum pl353_smc_mem_width {
-	PL353_SMC_MEM_WIDTH_8 = 0,
-	PL353_SMC_MEM_WIDTH_16 = 1
-};
-
-u32 pl353_smc_get_ecc_val(int ecc_reg);
-bool pl353_smc_ecc_is_busy(void);
-int pl353_smc_get_nand_int_status_raw(void);
-void pl353_smc_clr_nand_int(void);
-int pl353_smc_set_ecc_mode(enum pl353_smc_ecc_mode mode);
-int pl353_smc_set_ecc_pg_size(unsigned int pg_sz);
-int pl353_smc_set_buswidth(unsigned int bw);
-#endif