
Merge tag 'mtd/for-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Miquel Raynal:
"MTD core:
- Fix refcounting for unpartitioned MTDs
- Fix misspelled function parameter 'section'
- Remove unneeded break
- cmdline parser: Fix parsing of part-names with colons
- mtdpart: Fix misdocumented function parameter 'mtd'

MTD devices:
- phram:
- Allow the user to set the erase page size
- File headers are not good candidates for kernel-doc
- physmap-bt1-rom: Fix __iomem addrspace removal warning
- plat-ram: correctly free memory on error path in platram_probe()
- powernv_flash: Add function names to headers and fix 'dev'
- docg3: Fix kernel-doc 'bad line' and 'excessive doc' issues

UBI cleanup fixes:
- gluebi: Fix misnamed function parameter documentation
- wl: Fix a couple of kernel-doc issues
- eba: Fix a couple of misdocumentation issues
- kapi: Correct documentation for 'ubi_leb_read_sg's 'sgl' parameter
- Document 'ubi_num' in struct mtd_dev_param

Generic NAND core ECC management:
- Add an I/O request tweaking mechanism
- Entire rework of the software BCH ECC driver: creation of a real
ECC engine, getting rid of raw NAND structures, migration to more
generic prototypes, misc fixes and style cleanup. Now moved to the
generic NAND layer.
- Entire rework of the software Hamming ECC driver: creation of a
real ECC engine, getting rid of raw NAND structures, misc renames,
comment updates, cleanup, and style fixes. Now moved to the
generic NAND layer.
- Necessary plumbing at the NAND level to retrieve generic NAND ECC
engines (software and on-die).
- Update of the bindings.

Raw NAND core:
- Getting rid of the chip->ecc.priv entry.
- Fix miscellaneous typos in kernel-doc

Raw NAND controller drivers:
- Arasan: Document 'anfc_op's 'buf' member
- AU1550: Ensure the presence of the right includes
- Brcmnand: Demote non-conformant kernel-doc headers
- Cafe: Remove superfluous param doc and add another
- Davinci: Do not use extra dereferencing
- Diskonchip: Marking unused variables as __always_unused
- GPMI:
- Fix the driver only sense CS0 R/B issue
- Fix the random DMA timeout issue
- Use a single line for of_device_id
- Use of_device_get_match_data()
- Fix reference count leak in gpmi ops
- Cleanup makefile
- Fix binding matching of clocks on different SoCs
- Ingenic: remove redundant get_device() in ingenic_ecc_get()
- Intel LGM: New NAND controller driver
- Marvell: Drop useless line
- Meson:
- Fix a resource leak in init
- Fix meson_nfc_dma_buffer_release() arguments
- mxc:
- Use device_get_match_data()
- Use a single line for of_device_id
- Remove platform data support
- Omap:
- Fix a bunch of kernel-doc misdemeanours
- Finish ELM half populated function header, demote empty ones
- s3c2410: Add documentation for 2 missing struct members
- Sunxi: Document 'sunxi_nfc's 'caps' member
- Qcom:
- Add support for SDX55
- Support for IPQ6018 QPIC NAND controller
- Fix DMA sync on FLASH_STATUS register read
- Rockchip: New NAND controller driver for RK3308, RK2928 and others
- Sunxi: Add MDMA support

ONENAND:
- bbt: Fix expected kernel-doc formatting
- Fix some kernel-doc misdemeanours
- Fix expected kernel-doc formatting
- Use mtd->oops_panic_write as condition

SPI-NAND core:
- Creation of a SPI-NAND on-die ECC engine
- Move ECC related definitions earlier in the driver
- Fix typo in comment
- Fill a default ECC provider/algorithm
- Remove outdated comment
- Fix OOB read
- Allow the case where there is no ECC engine
- Use the external ECC engine logic

SPI-NAND chip drivers:
- Micron:
- Add support for MT29F2G01AAAED
- Use more specific names
- Macronix:
- Add support for MX35LFxG24AD
- Add support for MX35LFxGE4AD
- Toshiba: Demote non-conformant kernel-doc header

SPI-NOR core:
- Initial support for stateful Octal DTR mode using volatile settings
- Preliminary support for JEDEC 251 (xSPI) and JEDEC 216D standards
- Support for Cypress Semper flash
- Support to specify ECC block size of SPI NOR flashes
- Fixes to avoid clearing of non-volatile Block Protection bits at
probe
- hisi-sfc: Demote non-conformant kernel-doc"
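
The software Hamming engine reworked above corrects a single bitflip per ECC step and detects double errors. The principle can be sketched with a toy Hamming(7,4) code; this is purely illustrative and is not the kernel's ecc-sw-hamming implementation, which works on 256/512-byte steps:

```c
#include <assert.h>

/* Bit at 1-based codeword position p (positions 1..7). */
static unsigned int bit(unsigned int c, unsigned int p)
{
	return (c >> (p - 1)) & 1;
}

/* Encode 4 data bits into a 7-bit Hamming(7,4) codeword: data sits at
 * positions 3, 5, 6, 7; parity at positions 1, 2, 4. */
static unsigned int hamming74_encode(unsigned int d)
{
	unsigned int c = ((d & 1) << 2) | (((d >> 1) & 1) << 4) |
			 (((d >> 2) & 1) << 5) | (((d >> 3) & 1) << 6);

	c |= (bit(c, 3) ^ bit(c, 5) ^ bit(c, 7)) << 0;	/* p1 */
	c |= (bit(c, 3) ^ bit(c, 6) ^ bit(c, 7)) << 1;	/* p2 */
	c |= (bit(c, 5) ^ bit(c, 6) ^ bit(c, 7)) << 3;	/* p4 */
	return c;
}

/* Correct at most one flipped bit: the parity syndrome is the 1-based
 * position of the error (0 means the codeword is clean). */
static unsigned int hamming74_correct(unsigned int c)
{
	unsigned int s = (bit(c, 1) ^ bit(c, 3) ^ bit(c, 5) ^ bit(c, 7)) |
			 ((bit(c, 2) ^ bit(c, 3) ^ bit(c, 6) ^ bit(c, 7)) << 1) |
			 ((bit(c, 4) ^ bit(c, 5) ^ bit(c, 6) ^ bit(c, 7)) << 2);

	return s ? c ^ (1u << (s - 1)) : c;
}
```

Any single bitflip in the 7-bit word maps to a non-zero syndrome naming its position, which is why one aligned parity set per step suffices for 1-bit correction.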

* tag 'mtd/for-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (120 commits)
mtd: spinand: macronix: Add support for MX35LFxG24AD
mtd: rawnand: rockchip: NFC driver for RK3308, RK2928 and others
dt-bindings: mtd: Describe Rockchip RK3xxx NAND flash controller
mtd: rawnand: gpmi: Use a single line for of_device_id
mtd: rawnand: gpmi: Fix the random DMA timeout issue
mtd: rawnand: gpmi: Fix the driver only sense CS0 R/B issue
mtd: rawnand: qcom: Add NAND controller support for SDX55
dt-bindings: qcom_nandc: Add SDX55 QPIC NAND documentation
mtd: rawnand: mxc: Use a single line for of_device_id
mtd: rawnand: mxc: Use device_get_match_data()
mtd: rawnand: meson: Fix a resource leak in init
mtd: rawnand: gpmi: Use of_device_get_match_data()
mtd: rawnand: Add NAND controller support on Intel LGM SoC
dt-bindings: mtd: Add Nand Flash Controller support for Intel LGM SoC
mtd: spinand: micron: Add support for MT29F2G01AAAED
mtd: spinand: micron: Use more specific names
mtd: rawnand: gpmi: fix reference count leak in gpmi ops
dt-bindings: mtd: gpmi-nand: Fix matching of clocks on different SoCs
mtd: spinand: macronix: Add support for MX35LFxGE4AD
mtd: plat-ram: correctly free memory on error path in platram_probe()
...

+6110 -1503
+61 -15
Documentation/devicetree/bindings/mtd/gpmi-nand.yaml
··· 9 9 maintainers: 10 10 - Han Xu <han.xu@nxp.com> 11 11 12 - allOf: 13 - - $ref: "nand-controller.yaml" 14 - 15 12 description: | 16 13 The GPMI nand controller provides an interface to control the NAND 17 14 flash chips. The device tree may optionally contain sub-nodes ··· 55 58 clocks: 56 59 minItems: 1 57 60 maxItems: 5 58 - items: 59 - - description: SoC gpmi io clock 60 - - description: SoC gpmi apb clock 61 - - description: SoC gpmi bch clock 62 - - description: SoC gpmi bch apb clock 63 - - description: SoC per1 bch clock 64 61 65 62 clock-names: 66 63 minItems: 1 67 64 maxItems: 5 68 - items: 69 - - const: gpmi_io 70 - - const: gpmi_apb 71 - - const: gpmi_bch 72 - - const: gpmi_bch_apb 73 - - const: per1_bch 74 65 75 66 fsl,use-minimum-ecc: 76 67 type: boolean ··· 91 106 - dma-names 92 107 93 108 unevaluatedProperties: false 109 + 110 + allOf: 111 + - $ref: "nand-controller.yaml" 112 + 113 + - if: 114 + properties: 115 + compatible: 116 + contains: 117 + enum: 118 + - fsl,imx23-gpmi-nand 119 + - fsl,imx28-gpmi-nand 120 + then: 121 + properties: 122 + clocks: 123 + items: 124 + - description: SoC gpmi io clock 125 + clock-names: 126 + items: 127 + - const: gpmi_io 128 + 129 + - if: 130 + properties: 131 + compatible: 132 + contains: 133 + enum: 134 + - fsl,imx6q-gpmi-nand 135 + - fsl,imx6sx-gpmi-nand 136 + then: 137 + properties: 138 + clocks: 139 + items: 140 + - description: SoC gpmi io clock 141 + - description: SoC gpmi apb clock 142 + - description: SoC gpmi bch clock 143 + - description: SoC gpmi bch apb clock 144 + - description: SoC per1 bch clock 145 + clock-names: 146 + items: 147 + - const: gpmi_io 148 + - const: gpmi_apb 149 + - const: gpmi_bch 150 + - const: gpmi_bch_apb 151 + - const: per1_bch 152 + 153 + - if: 154 + properties: 155 + compatible: 156 + contains: 157 + const: fsl,imx7d-gpmi-nand 158 + then: 159 + properties: 160 + clocks: 161 + items: 162 + - description: SoC gpmi io clock 163 + - description: SoC gpmi bch apb clock 164 + 
clock-names: 165 + minItems: 2 166 + maxItems: 2 167 + items: 168 + - const: gpmi_io 169 + - const: gpmi_bch_apb 94 170 95 171 examples: 96 172 - |
+99
Documentation/devicetree/bindings/mtd/intel,lgm-nand.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mtd/intel,lgm-nand.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Intel LGM SoC NAND Controller Device Tree Bindings 8 + 9 + allOf: 10 + - $ref: "nand-controller.yaml" 11 + 12 + maintainers: 13 + - Ramuthevar Vadivel Murugan <vadivel.muruganx.ramuthevar@linux.intel.com> 14 + 15 + properties: 16 + compatible: 17 + const: intel,lgm-nand 18 + 19 + reg: 20 + maxItems: 6 21 + 22 + reg-names: 23 + items: 24 + - const: ebunand 25 + - const: hsnand 26 + - const: nand_cs0 27 + - const: nand_cs1 28 + - const: addr_sel0 29 + - const: addr_sel1 30 + 31 + clocks: 32 + maxItems: 1 33 + 34 + dmas: 35 + maxItems: 2 36 + 37 + dma-names: 38 + items: 39 + - const: tx 40 + - const: rx 41 + 42 + "#address-cells": 43 + const: 1 44 + 45 + "#size-cells": 46 + const: 0 47 + 48 + patternProperties: 49 + "^nand@[a-f0-9]+$": 50 + type: object 51 + properties: 52 + reg: 53 + minimum: 0 54 + maximum: 7 55 + 56 + nand-ecc-mode: true 57 + 58 + nand-ecc-algo: 59 + const: hw 60 + 61 + additionalProperties: false 62 + 63 + required: 64 + - compatible 65 + - reg 66 + - reg-names 67 + - clocks 68 + - dmas 69 + - dma-names 70 + - "#address-cells" 71 + - "#size-cells" 72 + 73 + additionalProperties: false 74 + 75 + examples: 76 + - | 77 + nand-controller@e0f00000 { 78 + compatible = "intel,lgm-nand"; 79 + reg = <0xe0f00000 0x100>, 80 + <0xe1000000 0x300>, 81 + <0xe1400000 0x8000>, 82 + <0xe1c00000 0x1000>, 83 + <0x17400000 0x4>, 84 + <0x17c00000 0x4>; 85 + reg-names = "ebunand", "hsnand", "nand_cs0", "nand_cs1", 86 + "addr_sel0", "addr_sel1"; 87 + clocks = <&cgu0 125>; 88 + dmas = <&dma0 8>, <&dma0 9>; 89 + dma-names = "tx", "rx"; 90 + #address-cells = <1>; 91 + #size-cells = <0>; 92 + 93 + nand@0 { 94 + reg = <0>; 95 + nand-ecc-mode = "hw"; 96 + }; 97 + }; 98 + 99 + ...
+1 -10
Documentation/devicetree/bindings/mtd/nand-controller.yaml
···
     description:
       Contains the native Ready/Busy IDs.
 
-  nand-ecc-mode:
-    description:
-      Desired ECC engine, either hardware (most of the time
-      embedded in the NAND controller) or software correction
-      (Linux will handle the calculations). soft_bch is deprecated
-      and should be replaced by soft and nand-ecc-algo.
-    $ref: /schemas/types.yaml#/definitions/string
-    enum: [none, soft, hw, hw_syndrome, hw_oob_first, on-die]
-
   nand-ecc-engine:
     allOf:
       - $ref: /schemas/types.yaml#/definitions/phandle
···
     nand@0 {
       reg = <0>;
-      nand-ecc-mode = "soft";
+      nand-use-soft-ecc-engine;
       nand-ecc-algo = "bch";
 
       /* controller specific properties */
+4
Documentation/devicetree/bindings/mtd/qcom_nandc.txt
···
 				SoC and it uses ADM DMA
 * "qcom,ipq4019-nand" - for QPIC NAND controller v1.4.0 being used in
 				IPQ4019 SoC and it uses BAM DMA
+* "qcom,ipq6018-nand" - for QPIC NAND controller v1.5.0 being used in
+				IPQ6018 SoC and it uses BAM DMA
 * "qcom,ipq8074-nand" - for QPIC NAND controller v1.5.0 being used in
 				IPQ8074 SoC and it uses BAM DMA
+* "qcom,sdx55-nand" - for QPIC NAND controller v2.0.0 being used in
+				SDX55 SoC and it uses BAM DMA
 
 - reg: MMIO address range
 - clocks: must contain core clock and always on clock
+161
Documentation/devicetree/bindings/mtd/rockchip,nand-controller.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mtd/rockchip,nand-controller.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Rockchip SoCs NAND FLASH Controller (NFC) 8 + 9 + allOf: 10 + - $ref: "nand-controller.yaml#" 11 + 12 + maintainers: 13 + - Heiko Stuebner <heiko@sntech.de> 14 + 15 + properties: 16 + compatible: 17 + oneOf: 18 + - const: rockchip,px30-nfc 19 + - const: rockchip,rk2928-nfc 20 + - const: rockchip,rv1108-nfc 21 + - items: 22 + - const: rockchip,rk3036-nfc 23 + - const: rockchip,rk2928-nfc 24 + - items: 25 + - const: rockchip,rk3308-nfc 26 + - const: rockchip,rv1108-nfc 27 + 28 + reg: 29 + maxItems: 1 30 + 31 + interrupts: 32 + maxItems: 1 33 + 34 + clocks: 35 + minItems: 1 36 + items: 37 + - description: Bus Clock 38 + - description: Module Clock 39 + 40 + clock-names: 41 + minItems: 1 42 + items: 43 + - const: ahb 44 + - const: nfc 45 + 46 + assigned-clocks: 47 + maxItems: 1 48 + 49 + assigned-clock-rates: 50 + maxItems: 1 51 + 52 + power-domains: 53 + maxItems: 1 54 + 55 + patternProperties: 56 + "^nand@[0-7]$": 57 + type: object 58 + properties: 59 + reg: 60 + minimum: 0 61 + maximum: 7 62 + 63 + nand-ecc-mode: 64 + const: hw 65 + 66 + nand-ecc-step-size: 67 + const: 1024 68 + 69 + nand-ecc-strength: 70 + enum: [16, 24, 40, 60, 70] 71 + description: | 72 + The ECC configurations that can be supported are as follows. 73 + NFC v600 ECC 16, 24, 40, 60 74 + RK2928, RK3066, RK3188 75 + 76 + NFC v622 ECC 16, 24, 40, 60 77 + RK3036, RK3128 78 + 79 + NFC v800 ECC 16 80 + RK3308, RV1108 81 + 82 + NFC v900 ECC 16, 40, 60, 70 83 + RK3326, PX30 84 + 85 + nand-bus-width: 86 + const: 8 87 + 88 + rockchip,boot-blks: 89 + $ref: /schemas/types.yaml#/definitions/uint32 90 + minimum: 2 91 + default: 16 92 + description: 93 + The NFC driver need this information to select ECC 94 + algorithms supported by the boot ROM. 
95 + Only used in combination with 'nand-is-boot-medium'. 96 + 97 + rockchip,boot-ecc-strength: 98 + enum: [16, 24, 40, 60, 70] 99 + allOf: 100 + - $ref: /schemas/types.yaml#/definitions/uint32 101 + description: | 102 + If specified it indicates that a different BCH/ECC setting is 103 + supported by the boot ROM. 104 + NFC v600 ECC 16, 24 105 + RK2928, RK3066, RK3188 106 + 107 + NFC v622 ECC 16, 24, 40, 60 108 + RK3036, RK3128 109 + 110 + NFC v800 ECC 16 111 + RK3308, RV1108 112 + 113 + NFC v900 ECC 16, 70 114 + RK3326, PX30 115 + 116 + Only used in combination with 'nand-is-boot-medium'. 117 + 118 + required: 119 + - compatible 120 + - reg 121 + - interrupts 122 + - clocks 123 + - clock-names 124 + 125 + unevaluatedProperties: false 126 + 127 + examples: 128 + - | 129 + #include <dt-bindings/clock/rk3308-cru.h> 130 + #include <dt-bindings/interrupt-controller/arm-gic.h> 131 + nfc: nand-controller@ff4b0000 { 132 + compatible = "rockchip,rk3308-nfc", 133 + "rockchip,rv1108-nfc"; 134 + reg = <0xff4b0000 0x4000>; 135 + interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>; 136 + clocks = <&cru HCLK_NANDC>, <&cru SCLK_NANDC>; 137 + clock-names = "ahb", "nfc"; 138 + assigned-clocks = <&clks SCLK_NANDC>; 139 + assigned-clock-rates = <150000000>; 140 + 141 + pinctrl-0 = <&flash_ale &flash_bus8 &flash_cle &flash_csn0 142 + &flash_rdn &flash_rdy &flash_wrn>; 143 + pinctrl-names = "default"; 144 + 145 + #address-cells = <1>; 146 + #size-cells = <0>; 147 + 148 + nand@0 { 149 + reg = <0>; 150 + label = "rk-nand"; 151 + nand-bus-width = <8>; 152 + nand-ecc-mode = "hw"; 153 + nand-ecc-step-size = <1024>; 154 + nand-ecc-strength = <16>; 155 + nand-is-boot-medium; 156 + rockchip,boot-blks = <8>; 157 + rockchip,boot-ecc-strength = <16>; 158 + }; 159 + }; 160 + 161 + ...
+1 -1
Documentation/driver-api/mtd/nand_ecc.rst
···
 Introduction
 ============
 
-Having looked at the linux mtd/nand driver and more specific at nand_ecc.c
+Having looked at the linux mtd/nand Hamming software ECC engine driver
 I felt there was room for optimisation. I bashed the code for a few hours
 performing tricks like table lookup removing superfluous code etc.
 After that the speed was increased by 35-40%.
-3
Documentation/driver-api/mtdnand.rst
···
 .. kernel-doc:: drivers/mtd/nand/raw/nand_base.c
    :export:
 
-.. kernel-doc:: drivers/mtd/nand/raw/nand_ecc.c
-   :export:
-
 Internal Functions Provided
 ===========================
+1 -1
arch/arm/mach-s3c/common-smdk-s3c24xx.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 #include <linux/io.h>
+1 -1
arch/arm/mach-s3c/mach-anubis.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 
 #include <net/ax88796.h>
+1 -1
arch/arm/mach-s3c/mach-at2440evb.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 
 #include "devs.h"
+1 -1
arch/arm/mach-s3c/mach-bast.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 
 #include <linux/platform_data/asoc-s3c24xx_simtec.h>
+1 -1
arch/arm/mach-s3c/mach-gta02.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 #include <linux/mtd/physmap.h>
+1 -1
arch/arm/mach-s3c/mach-jive.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 
 #include "gpio-cfg.h"
+1 -1
arch/arm/mach-s3c/mach-mini2440.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 
 #include "gpio-cfg.h"
+1 -1
arch/arm/mach-s3c/mach-osiris.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 
 #include "cpu.h"
+1 -1
arch/arm/mach-s3c/mach-qt2410.c
···
 #include <linux/io.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 
 #include <asm/mach/arch.h>
+1 -1
arch/arm/mach-s3c/mach-rx3715.c
···
 #include <linux/io.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 
 #include <asm/mach/arch.h>
+1 -1
arch/arm/mach-s3c/mach-vstms.c
···
 #include <linux/io.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/partitions.h>
 #include <linux/memblock.h>
+1
drivers/mtd/Kconfig
···
 	tristate "SmartMedia/xD new translation layer"
 	depends on BLOCK
 	select MTD_BLKDEVS
+	select MTD_NAND_CORE
 	select MTD_NAND_ECC_SW_HAMMING
 	help
 	  This enables EXPERIMENTAL R/W support for SmartMedia/xD
+2 -3
drivers/mtd/devices/docg3.c
···
 /**
  * calc_block_sector - Calculate blocks, pages and ofs.
-
+ *
  * @from: offset in flash
  * @block0: first plane block index calculated
  * @block1: second plane block index calculated
···
 /**
  * doc_probe_device - Check if a device is available
- * @base: the io space where the device is probed
+ * @cascade: the cascade of chips this devices will belong to
  * @floor: the floor of the probed device
  * @dev: the device
- * @cascade: the cascade of chips this devices will belong to
  *
  * Checks whether a device at the specified IO range, and floor is available.
+35 -19
drivers/mtd/devices/phram.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - /** 2 + /* 3 3 * Copyright (c) ???? Jochen Schäuble <psionic@psionic.de> 4 4 * Copyright (c) 2003-2004 Joern Engel <joern@wh.fh-wedel.de> 5 5 * 6 6 * Usage: 7 7 * 8 8 * one commend line parameter per device, each in the form: 9 - * phram=<name>,<start>,<len> 9 + * phram=<name>,<start>,<len>[,<erasesize>] 10 10 * <name> may be up to 63 characters. 11 - * <start> and <len> can be octal, decimal or hexadecimal. If followed 11 + * <start>, <len>, and <erasesize> can be octal, decimal or hexadecimal. If followed 12 12 * by "ki", "Mi" or "Gi", the numbers will be interpreted as kilo, mega or 13 - * gigabytes. 13 + * gigabytes. <erasesize> is optional and defaults to PAGE_SIZE. 14 14 * 15 15 * Example: 16 - * phram=swap,64Mi,128Mi phram=test,900Mi,1Mi 16 + * phram=swap,64Mi,128Mi phram=test,900Mi,1Mi,64Ki 17 17 */ 18 18 19 19 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt ··· 26 26 #include <linux/moduleparam.h> 27 27 #include <linux/slab.h> 28 28 #include <linux/mtd/mtd.h> 29 + #include <asm/div64.h> 29 30 30 31 struct phram_mtd_list { 31 32 struct mtd_info mtd; ··· 89 88 } 90 89 } 91 90 92 - static int register_device(char *name, phys_addr_t start, size_t len) 91 + static int register_device(char *name, phys_addr_t start, size_t len, uint32_t erasesize) 93 92 { 94 93 struct phram_mtd_list *new; 95 94 int ret = -ENOMEM; ··· 116 115 new->mtd._write = phram_write; 117 116 new->mtd.owner = THIS_MODULE; 118 117 new->mtd.type = MTD_RAM; 119 - new->mtd.erasesize = PAGE_SIZE; 118 + new->mtd.erasesize = erasesize; 120 119 new->mtd.writesize = 1; 121 120 122 121 ret = -EAGAIN; ··· 205 204 static int phram_init_called; 206 205 /* 207 206 * This shall contain the module parameter if any. It is of the form: 208 - * - phram=<device>,<address>,<size> for module case 209 - * - phram.phram=<device>,<address>,<size> for built-in case 210 - * We leave 64 bytes for the device name, 20 for the address and 20 for the 211 - * size. 
212 - * Example: phram.phram=rootfs,0xa0000000,512Mi 207 + * - phram=<device>,<address>,<size>[,<erasesize>] for module case 208 + * - phram.phram=<device>,<address>,<size>[,<erasesize>] for built-in case 209 + * We leave 64 bytes for the device name, 20 for the address , 20 for the 210 + * size and 20 for the erasesize. 211 + * Example: phram.phram=rootfs,0xa0000000,512Mi,65536 213 212 */ 214 - static char phram_paramline[64 + 20 + 20]; 213 + static char phram_paramline[64 + 20 + 20 + 20]; 215 214 #endif 216 215 217 216 static int phram_setup(const char *val) 218 217 { 219 - char buf[64 + 20 + 20], *str = buf; 220 - char *token[3]; 218 + char buf[64 + 20 + 20 + 20], *str = buf; 219 + char *token[4]; 221 220 char *name; 222 221 uint64_t start; 223 222 uint64_t len; 223 + uint64_t erasesize = PAGE_SIZE; 224 224 int i, ret; 225 225 226 226 if (strnlen(val, sizeof(buf)) >= sizeof(buf)) ··· 230 228 strcpy(str, val); 231 229 kill_final_newline(str); 232 230 233 - for (i = 0; i < 3; i++) 231 + for (i = 0; i < 4; i++) 234 232 token[i] = strsep(&str, ","); 235 233 236 234 if (str) ··· 255 253 goto error; 256 254 } 257 255 258 - ret = register_device(name, start, len); 256 + if (token[3]) { 257 + ret = parse_num64(&erasesize, token[3]); 258 + if (ret) { 259 + parse_err("illegal erasesize\n"); 260 + goto error; 261 + } 262 + } 263 + 264 + if (len == 0 || erasesize == 0 || erasesize > len 265 + || erasesize > UINT_MAX || do_div(len, (uint32_t)erasesize) != 0) { 266 + parse_err("illegal erasesize or len\n"); 267 + goto error; 268 + } 269 + 270 + ret = register_device(name, start, len, (uint32_t)erasesize); 259 271 if (ret) 260 272 goto error; 261 273 262 - pr_info("%s device: %#llx at %#llx\n", name, len, start); 274 + pr_info("%s device: %#llx at %#llx for erasesize %#llx\n", name, len, start, erasesize); 263 275 return 0; 264 276 265 277 error: ··· 314 298 } 315 299 316 300 module_param_call(phram, phram_param_call, NULL, NULL, 0200); 317 - MODULE_PARM_DESC(phram, "Memory 
region to map. \"phram=<name>,<start>,<length>\""); 301 + MODULE_PARM_DESC(phram, "Memory region to map. \"phram=<name>,<start>,<length>[,<erasesize>]\""); 318 302 319 303 320 304 static int __init init_phram(void)
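
The new optional fourth phram parameter must be a ki/Mi/Gi-suffixed number that divides the region length evenly. A userspace sketch of that parsing and sanity check (function names here are illustrative; the driver does this via its parse_num64() helper and do_div()):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a number that may be octal, decimal or hex, optionally
 * followed by "ki", "Mi" or "Gi", as the phram parameters allow. */
static int parse_size(const char *s, uint64_t *out)
{
	char *end;
	uint64_t v = strtoull(s, &end, 0);
	unsigned int shift = 0;

	if (end[0] != '\0' && end[1] == 'i' && end[2] == '\0') {
		switch (end[0]) {
		case 'k': shift = 10; break;
		case 'M': shift = 20; break;
		case 'G': shift = 30; break;
		default: return -1;
		}
	} else if (end[0] != '\0') {
		return -1;
	}

	*out = v << shift;
	return 0;
}

/* The validity rule the new <erasesize> must satisfy: non-zero, no
 * bigger than the region, and dividing the region length evenly. */
static int erasesize_ok(uint64_t len, uint64_t erasesize)
{
	return len != 0 && erasesize != 0 && erasesize <= len &&
	       len % erasesize == 0;
}
```

With this check, `phram=test,900Mi,1Mi,64Ki` passes (1Mi is a multiple of 64Ki) while a 1000-byte region with a 64-byte erase block is rejected.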
+4 -1
drivers/mtd/devices/powernv_flash.c
···
 /**
+ * powernv_flash_read
  * @mtd: the device
  * @from: the offset to read from
  * @len: the number of bytes to read
···
 /**
+ * powernv_flash_write
  * @mtd: the device
  * @to: the offset to write to
  * @len: the number of bytes to write
···
 /**
+ * powernv_flash_erase
  * @mtd: the device
  * @erase: the erase info
  * Returns 0 if erase successful or -ERRNO if an error occurred
···
 /**
  * powernv_flash_set_driver_info - Fill the mtd_info structure and docg3
- * structure @pdev: The platform device
+ * @dev: The device structure
  * @mtd: The structure to fill
  */
 static int powernv_flash_set_driver_info(struct device *dev,
+4 -4
drivers/mtd/maps/physmap-bt1-rom.c
···
 		       unsigned long ofs)
 {
 	void __iomem *src = map->virt + ofs;
-	unsigned long shift;
+	unsigned int shift;
 	map_word ret;
 	u32 data;
 
 	/* Read data within offset dword. */
-	shift = (unsigned long)src & 0x3;
+	shift = (uintptr_t)src & 0x3;
 	data = readl_relaxed(src - shift);
 	if (!shift) {
 		ret.x[0] = data;
···
 		       ssize_t len)
 {
 	void __iomem *src = map->virt + from;
-	ssize_t shift, chunk;
+	unsigned int shift, chunk;
 	u32 data;
 
 	if (len <= 0 || from >= map->size)
···
 	 * up into the next three stages: unaligned head, aligned body,
 	 * unaligned tail.
 	 */
-	shift = (ssize_t)src & 0x3;
+	shift = (uintptr_t)src & 0x3;
 	if (shift) {
 		chunk = min_t(ssize_t, 4 - shift, len);
 		data = readl_relaxed(src - shift);
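
The access pattern this driver relies on can be shown in plain userspace C: the Bt1 ROM only answers aligned 32-bit reads, so a byte at an arbitrary address is extracted from the enclosing aligned word. Note the (uintptr_t) cast the fix introduces; the buffer and function name below are made up, and the extract assumes a little-endian host:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Fetch one byte using only an aligned 32-bit load, the way
 * bt1_rom_map_read() does with readl_relaxed(src - shift). */
static uint8_t byte_via_aligned_word(const uint8_t *src)
{
	unsigned int shift = (uintptr_t)src & 0x3;	/* offset inside dword */
	uint32_t data;

	/* Stand-in for readl_relaxed(src - shift): one aligned load. */
	memcpy(&data, src - shift, sizeof(data));

	return (uint8_t)(data >> (shift * 8));		/* little-endian extract */
}
```

Casting the pointer through uintptr_t rather than unsigned long (or ssize_t) is what silences the __iomem address-space warning from sparse while keeping the arithmetic well-defined.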
+8 -3
drivers/mtd/maps/plat-ram.c
···
 	err = mtd_device_parse_register(info->mtd, pdata->probes, NULL,
 					pdata->partitions,
 					pdata->nr_partitions);
-	if (!err)
-		dev_info(&pdev->dev, "registered mtd device\n");
+	if (err) {
+		dev_err(&pdev->dev, "failed to register mtd device\n");
+		goto exit_free;
+	}
+
+	dev_info(&pdev->dev, "registered mtd device\n");
 
 	if (pdata->nr_partitions) {
 		/* add the whole device. */
···
 		if (err) {
 			dev_err(&pdev->dev,
 				"failed to register the entire device\n");
+			goto exit_free;
 		}
 	}
 
-	return err;
+	return 0;
 
 exit_free:
 	platram_remove(pdev);
-2
drivers/mtd/mtdchar.c
···
 		if (copy_from_user(&offs, argp, sizeof(loff_t)))
 			return -EFAULT;
 		return mtd_block_isbad(mtd, offs);
-		break;
 	}
 
 	case MEMSETBADBLOCK:
···
 		if (copy_from_user(&offs, argp, sizeof(loff_t)))
 			return -EFAULT;
 		return mtd_block_markbad(mtd, offs);
-		break;
 	}
 
 	case OTPSELECT:
+5 -1
drivers/mtd/mtdcore.c
···
 		}
 	}
 
+	master->usecount++;
+
 	while (mtd->parent) {
 		mtd->usecount++;
 		mtd = mtd->parent;
···
 		BUG_ON(mtd->usecount < 0);
 		mtd = mtd->parent;
 	}
+
+	master->usecount--;
 
 	if (master->_put_device)
 		master->_put_device(master);
···
  * ECC byte
  * @mtd: mtd info structure
  * @eccbyte: the byte we are searching for
- * @sectionp: pointer where the section id will be stored
+ * @section: pointer where the section id will be stored
  * @oobregion: OOB region information
  *
  * Works like mtd_ooblayout_find_region() except it searches for a specific ECC
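
The refcounting bug being fixed is easy to model: get/put only bumped usecount while walking mtd->parent links, so the master device itself was never counted and an unpartitioned MTD could look idle while in use. A toy model of the fixed behaviour (structure and names simplified, not the real struct mtd_info):

```c
#include <assert.h>
#include <stddef.h>

struct toy_mtd {
	struct toy_mtd *parent;
	int usecount;
};

static struct toy_mtd *master_of(struct toy_mtd *mtd)
{
	while (mtd->parent)
		mtd = mtd->parent;
	return mtd;
}

static void toy_get_device(struct toy_mtd *mtd)
{
	master_of(mtd)->usecount++;	/* the fix: always count the master */
	while (mtd->parent) {		/* plus each partition on the path */
		mtd->usecount++;
		mtd = mtd->parent;
	}
}

static void toy_put_device(struct toy_mtd *mtd)
{
	struct toy_mtd *master = master_of(mtd);

	while (mtd->parent) {
		mtd->usecount--;
		mtd = mtd->parent;
	}
	master->usecount--;		/* mirror of the get-side fix */
}
```

For an unpartitioned device the while loop never runs, which is exactly why the explicit master increment is needed.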
+1 -1
drivers/mtd/mtdpart.c
···
 /**
  * __mtd_del_partition - delete MTD partition
  *
- * @priv: MTD structure to be deleted
+ * @mtd: MTD structure to be deleted
  *
  * This function must be called with the partitions mutex locked.
  */
+32 -1
drivers/mtd/nand/Kconfig
···
 config MTD_NAND_ECC
 	bool
-	depends on MTD_NAND_CORE
+	select MTD_NAND_CORE
+
+config MTD_NAND_ECC_SW_HAMMING
+	bool "Software Hamming ECC engine"
+	default y if MTD_RAW_NAND
+	select MTD_NAND_ECC
+	help
+	  This enables support for software Hamming error
+	  correction. This correction can correct up to 1 bit error
+	  per chunk and detect up to 2 bit errors. While it used to be
+	  widely used with old parts, newer NAND chips usually require
+	  more strength correction and in this case BCH or RS will be
+	  preferred.
+
+config MTD_NAND_ECC_SW_HAMMING_SMC
+	bool "NAND ECC Smart Media byte order"
+	depends on MTD_NAND_ECC_SW_HAMMING
+	default n
+	help
+	  Software ECC according to the Smart Media Specification.
+	  The original Linux implementation had byte 0 and 1 swapped.
+
+config MTD_NAND_ECC_SW_BCH
+	bool "Software BCH ECC engine"
+	select BCH
+	select MTD_NAND_ECC
+	default n
+	help
+	  This enables support for software BCH error correction. Binary BCH
+	  codes are more powerful and cpu intensive than traditional Hamming
+	  ECC codes. They are used with NAND devices requiring more than 1 bit
+	  of error correction.
 
 endmenu
+2
drivers/mtd/nand/Makefile
···
 obj-y += spi/
 
 nandcore-$(CONFIG_MTD_NAND_ECC) += ecc.o
+nandcore-$(CONFIG_MTD_NAND_ECC_SW_HAMMING) += ecc-sw-hamming.o
+nandcore-$(CONFIG_MTD_NAND_ECC_SW_BCH) += ecc-sw-bch.o
+124
drivers/mtd/nand/core.c
··· 208 208 EXPORT_SYMBOL_GPL(nanddev_mtd_max_bad_blocks); 209 209 210 210 /** 211 + * nanddev_get_ecc_engine() - Find and get a suitable ECC engine 212 + * @nand: NAND device 213 + */ 214 + static int nanddev_get_ecc_engine(struct nand_device *nand) 215 + { 216 + int engine_type; 217 + 218 + /* Read the user desires in terms of ECC engine/configuration */ 219 + of_get_nand_ecc_user_config(nand); 220 + 221 + engine_type = nand->ecc.user_conf.engine_type; 222 + if (engine_type == NAND_ECC_ENGINE_TYPE_INVALID) 223 + engine_type = nand->ecc.defaults.engine_type; 224 + 225 + switch (engine_type) { 226 + case NAND_ECC_ENGINE_TYPE_NONE: 227 + return 0; 228 + case NAND_ECC_ENGINE_TYPE_SOFT: 229 + nand->ecc.engine = nand_ecc_get_sw_engine(nand); 230 + break; 231 + case NAND_ECC_ENGINE_TYPE_ON_DIE: 232 + nand->ecc.engine = nand_ecc_get_on_die_hw_engine(nand); 233 + break; 234 + case NAND_ECC_ENGINE_TYPE_ON_HOST: 235 + pr_err("On-host hardware ECC engines not supported yet\n"); 236 + break; 237 + default: 238 + pr_err("Missing ECC engine type\n"); 239 + } 240 + 241 + if (!nand->ecc.engine) 242 + return -EINVAL; 243 + 244 + return 0; 245 + } 246 + 247 + /** 248 + * nanddev_put_ecc_engine() - Dettach and put the in-use ECC engine 249 + * @nand: NAND device 250 + */ 251 + static int nanddev_put_ecc_engine(struct nand_device *nand) 252 + { 253 + switch (nand->ecc.ctx.conf.engine_type) { 254 + case NAND_ECC_ENGINE_TYPE_ON_HOST: 255 + pr_err("On-host hardware ECC engines not supported yet\n"); 256 + break; 257 + case NAND_ECC_ENGINE_TYPE_NONE: 258 + case NAND_ECC_ENGINE_TYPE_SOFT: 259 + case NAND_ECC_ENGINE_TYPE_ON_DIE: 260 + default: 261 + break; 262 + } 263 + 264 + return 0; 265 + } 266 + 267 + /** 268 + * nanddev_find_ecc_configuration() - Find a suitable ECC configuration 269 + * @nand: NAND device 270 + */ 271 + static int nanddev_find_ecc_configuration(struct nand_device *nand) 272 + { 273 + int ret; 274 + 275 + if (!nand->ecc.engine) 276 + return -ENOTSUPP; 277 + 278 + ret 
= nand_ecc_init_ctx(nand); 279 + if (ret) 280 + return ret; 281 + 282 + if (!nand_ecc_is_strong_enough(nand)) 283 + pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n", 284 + nand->mtd.name); 285 + 286 + return 0; 287 + } 288 + 289 + /** 290 + * nanddev_ecc_engine_init() - Initialize an ECC engine for the chip 291 + * @nand: NAND device 292 + */ 293 + int nanddev_ecc_engine_init(struct nand_device *nand) 294 + { 295 + int ret; 296 + 297 + /* Look for the ECC engine to use */ 298 + ret = nanddev_get_ecc_engine(nand); 299 + if (ret) { 300 + pr_err("No ECC engine found\n"); 301 + return ret; 302 + } 303 + 304 + /* No ECC engine requested */ 305 + if (!nand->ecc.engine) 306 + return 0; 307 + 308 + /* Configure the engine: balance user input and chip requirements */ 309 + ret = nanddev_find_ecc_configuration(nand); 310 + if (ret) { 311 + pr_err("No suitable ECC configuration\n"); 312 + nanddev_put_ecc_engine(nand); 313 + 314 + return ret; 315 + } 316 + 317 + return 0; 318 + } 319 + EXPORT_SYMBOL_GPL(nanddev_ecc_engine_init); 320 + 321 + /** 322 + * nanddev_ecc_engine_cleanup() - Cleanup ECC engine initializations 323 + * @nand: NAND device 324 + */ 325 + void nanddev_ecc_engine_cleanup(struct nand_device *nand) 326 + { 327 + if (nand->ecc.engine) 328 + nand_ecc_cleanup_ctx(nand); 329 + 330 + nanddev_put_ecc_engine(nand); 331 + } 332 + EXPORT_SYMBOL_GPL(nanddev_ecc_engine_cleanup); 333 + 334 + /** 211 335 * nanddev_init() - Initialize a NAND device 212 336 * @nand: NAND device 213 337 * @ops: NAND device operations
+406
drivers/mtd/nand/ecc-sw-bch.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * This file provides ECC correction for more than 1 bit per block of data, 4 + * using binary BCH codes. It relies on the generic BCH library lib/bch.c. 5 + * 6 + * Copyright © 2011 Ivan Djelic <ivan.djelic@parrot.com> 7 + */ 8 + 9 + #include <linux/types.h> 10 + #include <linux/kernel.h> 11 + #include <linux/module.h> 12 + #include <linux/slab.h> 13 + #include <linux/bitops.h> 14 + #include <linux/mtd/nand.h> 15 + #include <linux/mtd/nand-ecc-sw-bch.h> 16 + 17 + /** 18 + * nand_ecc_sw_bch_calculate - Calculate the ECC corresponding to a data block 19 + * @nand: NAND device 20 + * @buf: Input buffer with raw data 21 + * @code: Output buffer with ECC 22 + */ 23 + int nand_ecc_sw_bch_calculate(struct nand_device *nand, 24 + const unsigned char *buf, unsigned char *code) 25 + { 26 + struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 27 + unsigned int i; 28 + 29 + memset(code, 0, engine_conf->code_size); 30 + bch_encode(engine_conf->bch, buf, nand->ecc.ctx.conf.step_size, code); 31 + 32 + /* apply mask so that an erased page is a valid codeword */ 33 + for (i = 0; i < engine_conf->code_size; i++) 34 + code[i] ^= engine_conf->eccmask[i]; 35 + 36 + return 0; 37 + } 38 + EXPORT_SYMBOL(nand_ecc_sw_bch_calculate); 39 + 40 + /** 41 + * nand_ecc_sw_bch_correct - Detect, correct and report bit error(s) 42 + * @nand: NAND device 43 + * @buf: Raw data read from the chip 44 + * @read_ecc: ECC bytes from the chip 45 + * @calc_ecc: ECC calculated from the raw data 46 + * 47 + * Detect and correct bit errors for a data block. 
48 + */ 49 + int nand_ecc_sw_bch_correct(struct nand_device *nand, unsigned char *buf, 50 + unsigned char *read_ecc, unsigned char *calc_ecc) 51 + { 52 + struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 53 + unsigned int step_size = nand->ecc.ctx.conf.step_size; 54 + unsigned int *errloc = engine_conf->errloc; 55 + int i, count; 56 + 57 + count = bch_decode(engine_conf->bch, NULL, step_size, read_ecc, 58 + calc_ecc, NULL, errloc); 59 + if (count > 0) { 60 + for (i = 0; i < count; i++) { 61 + if (errloc[i] < (step_size * 8)) 62 + /* The error is in the data area: correct it */ 63 + buf[errloc[i] >> 3] ^= (1 << (errloc[i] & 7)); 64 + 65 + /* Otherwise the error is in the ECC area: nothing to do */ 66 + pr_debug("%s: corrected bitflip %u\n", __func__, 67 + errloc[i]); 68 + } 69 + } else if (count < 0) { 70 + pr_err("ECC unrecoverable error\n"); 71 + count = -EBADMSG; 72 + } 73 + 74 + return count; 75 + } 76 + EXPORT_SYMBOL(nand_ecc_sw_bch_correct); 77 + 78 + /** 79 + * nand_ecc_sw_bch_cleanup - Cleanup software BCH ECC resources 80 + * @nand: NAND device 81 + */ 82 + static void nand_ecc_sw_bch_cleanup(struct nand_device *nand) 83 + { 84 + struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 85 + 86 + bch_free(engine_conf->bch); 87 + kfree(engine_conf->errloc); 88 + kfree(engine_conf->eccmask); 89 + } 90 + 91 + /** 92 + * nand_ecc_sw_bch_init - Initialize software BCH ECC engine 93 + * @nand: NAND device 94 + * 95 + * Returns: 0 on success, a negative error code otherwise 96 + * 97 + * Initialize NAND BCH error correction. @nand.ecc parameters 'step_size' and 98 + * 'bytes' are used to compute the following BCH parameters: 99 + * m, the Galois field order 100 + * t, the error correction capability 101 + * 'bytes' should be equal to the number of bytes required to store m * t 102 + * bits, where m is such that 2^m - 1 > step_size * 8.
103 + * 104 + * Example: to configure 4 bit correction per 512 bytes, you should pass 105 + * step_size = 512 (thus, m = 13 is the smallest integer such that 2^m - 1 > 512 * 8) 106 + * bytes = 7 (7 bytes are required to store m * t = 13 * 4 = 52 bits) 107 + */ 108 + static int nand_ecc_sw_bch_init(struct nand_device *nand) 109 + { 110 + struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 111 + unsigned int eccsize = nand->ecc.ctx.conf.step_size; 112 + unsigned int eccbytes = engine_conf->code_size; 113 + unsigned int m, t, i; 114 + unsigned char *erased_page; 115 + int ret; 116 + 117 + m = fls(1 + (8 * eccsize)); 118 + t = (eccbytes * 8) / m; 119 + 120 + engine_conf->bch = bch_init(m, t, 0, false); 121 + if (!engine_conf->bch) 122 + return -EINVAL; 123 + 124 + engine_conf->eccmask = kzalloc(eccbytes, GFP_KERNEL); 125 + engine_conf->errloc = kmalloc_array(t, sizeof(*engine_conf->errloc), 126 + GFP_KERNEL); 127 + if (!engine_conf->eccmask || !engine_conf->errloc) { 128 + ret = -ENOMEM; 129 + goto cleanup; 130 + } 131 + 132 + /* Compute and store the inverted ECC of an erased step */ 133 + erased_page = kmalloc(eccsize, GFP_KERNEL); 134 + if (!erased_page) { 135 + ret = -ENOMEM; 136 + goto cleanup; 137 + } 138 + 139 + memset(erased_page, 0xff, eccsize); 140 + bch_encode(engine_conf->bch, erased_page, eccsize, 141 + engine_conf->eccmask); 142 + kfree(erased_page); 143 + 144 + for (i = 0; i < eccbytes; i++) 145 + engine_conf->eccmask[i] ^= 0xff; 146 + 147 + /* Verify that the number of code bytes has the expected value */ 148 + if (engine_conf->bch->ecc_bytes != eccbytes) { 149 + pr_err("Invalid number of ECC bytes: %u, expected: %u\n", 150 + eccbytes, engine_conf->bch->ecc_bytes); 151 + ret = -EINVAL; 152 + goto cleanup; 153 + } 154 + 155 + /* Sanity checks */ 156 + if (8 * (eccsize + eccbytes) >= (1 << m)) { 157 + pr_err("ECC step size is too large (%u)\n", eccsize); 158 + ret = -EINVAL; 159 + goto cleanup; 160 + } 161 + 162 + return 0; 163 + 164 + cleanup: 
165 + nand_ecc_sw_bch_cleanup(nand); 166 + 167 + return ret; 168 + } 169 + 170 + int nand_ecc_sw_bch_init_ctx(struct nand_device *nand) 171 + { 172 + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; 173 + struct mtd_info *mtd = nanddev_to_mtd(nand); 174 + struct nand_ecc_sw_bch_conf *engine_conf; 175 + unsigned int code_size = 0, nsteps; 176 + int ret; 177 + 178 + /* Only large page NAND chips may use BCH */ 179 + if (mtd->oobsize < 64) { 180 + pr_err("BCH cannot be used with small page NAND chips\n"); 181 + return -EINVAL; 182 + } 183 + 184 + if (!mtd->ooblayout) 185 + mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout()); 186 + 187 + conf->engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 188 + conf->algo = NAND_ECC_ALGO_BCH; 189 + conf->step_size = nand->ecc.user_conf.step_size; 190 + conf->strength = nand->ecc.user_conf.strength; 191 + 192 + /* 193 + * Board driver should supply ECC size and ECC strength 194 + * values to select how many bits are correctable. 195 + * Otherwise, default to 512 bytes for large page devices and 256 for 196 + * small page devices. 
197 + */ 198 + if (!conf->step_size) { 199 + if (mtd->oobsize >= 64) 200 + conf->step_size = 512; 201 + else 202 + conf->step_size = 256; 203 + 204 + conf->strength = 4; 205 + } 206 + 207 + nsteps = mtd->writesize / conf->step_size; 208 + 209 + /* Maximize */ 210 + if (nand->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH) { 211 + conf->step_size = 1024; 212 + nsteps = mtd->writesize / conf->step_size; 213 + /* Reserve 2 bytes for the BBM */ 214 + code_size = (mtd->oobsize - 2) / nsteps; 215 + conf->strength = code_size * 8 / fls(8 * conf->step_size); 216 + } 217 + 218 + if (!code_size) 219 + code_size = DIV_ROUND_UP(conf->strength * 220 + fls(8 * conf->step_size), 8); 221 + 222 + if (!conf->strength) 223 + conf->strength = (code_size * 8) / fls(8 * conf->step_size); 224 + 225 + if (!code_size && !conf->strength) { 226 + pr_err("Missing ECC parameters\n"); 227 + return -EINVAL; 228 + } 229 + 230 + engine_conf = kzalloc(sizeof(*engine_conf), GFP_KERNEL); 231 + if (!engine_conf) 232 + return -ENOMEM; 233 + 234 + ret = nand_ecc_init_req_tweaking(&engine_conf->req_ctx, nand); 235 + if (ret) 236 + goto free_engine_conf; 237 + 238 + engine_conf->code_size = code_size; 239 + engine_conf->nsteps = nsteps; 240 + engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL); 241 + engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL); 242 + if (!engine_conf->calc_buf || !engine_conf->code_buf) { 243 + ret = -ENOMEM; 244 + goto free_bufs; 245 + } 246 + 247 + nand->ecc.ctx.priv = engine_conf; 248 + nand->ecc.ctx.total = nsteps * code_size; 249 + 250 + ret = nand_ecc_sw_bch_init(nand); 251 + if (ret) 252 + goto free_bufs; 253 + 254 + /* Verify the layout validity */ 255 + if (mtd_ooblayout_count_eccbytes(mtd) != 256 + engine_conf->nsteps * engine_conf->code_size) { 257 + pr_err("Invalid ECC layout\n"); 258 + ret = -EINVAL; 259 + goto cleanup_bch_ctx; 260 + } 261 + 262 + return 0; 263 + 264 + cleanup_bch_ctx: 265 + nand_ecc_sw_bch_cleanup(nand); 266 + free_bufs: 267 + 
nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx); 268 + kfree(engine_conf->calc_buf); 269 + kfree(engine_conf->code_buf); 270 + free_engine_conf: 271 + kfree(engine_conf); 272 + 273 + return ret; 274 + } 275 + EXPORT_SYMBOL(nand_ecc_sw_bch_init_ctx); 276 + 277 + void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand) 278 + { 279 + struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 280 + 281 + if (engine_conf) { 282 + nand_ecc_sw_bch_cleanup(nand); 283 + nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx); 284 + kfree(engine_conf->calc_buf); 285 + kfree(engine_conf->code_buf); 286 + kfree(engine_conf); 287 + } 288 + } 289 + EXPORT_SYMBOL(nand_ecc_sw_bch_cleanup_ctx); 290 + 291 + static int nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand, 292 + struct nand_page_io_req *req) 293 + { 294 + struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 295 + struct mtd_info *mtd = nanddev_to_mtd(nand); 296 + int eccsize = nand->ecc.ctx.conf.step_size; 297 + int eccbytes = engine_conf->code_size; 298 + int eccsteps = engine_conf->nsteps; 299 + int total = nand->ecc.ctx.total; 300 + u8 *ecccalc = engine_conf->calc_buf; 301 + const u8 *data; 302 + int i; 303 + 304 + /* Nothing to do for a raw operation */ 305 + if (req->mode == MTD_OPS_RAW) 306 + return 0; 307 + 308 + /* This engine does not provide BBM/free OOB bytes protection */ 309 + if (!req->datalen) 310 + return 0; 311 + 312 + nand_ecc_tweak_req(&engine_conf->req_ctx, req); 313 + 314 + /* No more preparation for page read */ 315 + if (req->type == NAND_PAGE_READ) 316 + return 0; 317 + 318 + /* Preparation for page write: derive the ECC bytes and place them */ 319 + for (i = 0, data = req->databuf.out; 320 + eccsteps; 321 + eccsteps--, i += eccbytes, data += eccsize) 322 + nand_ecc_sw_bch_calculate(nand, data, &ecccalc[i]); 323 + 324 + return mtd_ooblayout_set_eccbytes(mtd, ecccalc, (void *)req->oobbuf.out, 325 + 0, total); 326 + } 327 + 328 + static int 
nand_ecc_sw_bch_finish_io_req(struct nand_device *nand, 329 + struct nand_page_io_req *req) 330 + { 331 + struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 332 + struct mtd_info *mtd = nanddev_to_mtd(nand); 333 + int eccsize = nand->ecc.ctx.conf.step_size; 334 + int total = nand->ecc.ctx.total; 335 + int eccbytes = engine_conf->code_size; 336 + int eccsteps = engine_conf->nsteps; 337 + u8 *ecccalc = engine_conf->calc_buf; 338 + u8 *ecccode = engine_conf->code_buf; 339 + unsigned int max_bitflips = 0; 340 + u8 *data = req->databuf.in; 341 + int i, ret; 342 + 343 + /* Nothing to do for a raw operation */ 344 + if (req->mode == MTD_OPS_RAW) 345 + return 0; 346 + 347 + /* This engine does not provide BBM/free OOB bytes protection */ 348 + if (!req->datalen) 349 + return 0; 350 + 351 + /* No more preparation for page write */ 352 + if (req->type == NAND_PAGE_WRITE) { 353 + nand_ecc_restore_req(&engine_conf->req_ctx, req); 354 + return 0; 355 + } 356 + 357 + /* Finish a page read: retrieve the (raw) ECC bytes*/ 358 + ret = mtd_ooblayout_get_eccbytes(mtd, ecccode, req->oobbuf.in, 0, 359 + total); 360 + if (ret) 361 + return ret; 362 + 363 + /* Calculate the ECC bytes */ 364 + for (i = 0; eccsteps; eccsteps--, i += eccbytes, data += eccsize) 365 + nand_ecc_sw_bch_calculate(nand, data, &ecccalc[i]); 366 + 367 + /* Finish a page read: compare and correct */ 368 + for (eccsteps = engine_conf->nsteps, i = 0, data = req->databuf.in; 369 + eccsteps; 370 + eccsteps--, i += eccbytes, data += eccsize) { 371 + int stat = nand_ecc_sw_bch_correct(nand, data, 372 + &ecccode[i], 373 + &ecccalc[i]); 374 + if (stat < 0) { 375 + mtd->ecc_stats.failed++; 376 + } else { 377 + mtd->ecc_stats.corrected += stat; 378 + max_bitflips = max_t(unsigned int, max_bitflips, stat); 379 + } 380 + } 381 + 382 + nand_ecc_restore_req(&engine_conf->req_ctx, req); 383 + 384 + return max_bitflips; 385 + } 386 + 387 + static struct nand_ecc_engine_ops nand_ecc_sw_bch_engine_ops = { 388 + .init_ctx 
= nand_ecc_sw_bch_init_ctx, 389 + .cleanup_ctx = nand_ecc_sw_bch_cleanup_ctx, 390 + .prepare_io_req = nand_ecc_sw_bch_prepare_io_req, 391 + .finish_io_req = nand_ecc_sw_bch_finish_io_req, 392 + }; 393 + 394 + static struct nand_ecc_engine nand_ecc_sw_bch_engine = { 395 + .ops = &nand_ecc_sw_bch_engine_ops, 396 + }; 397 + 398 + struct nand_ecc_engine *nand_ecc_sw_bch_get_engine(void) 399 + { 400 + return &nand_ecc_sw_bch_engine; 401 + } 402 + EXPORT_SYMBOL(nand_ecc_sw_bch_get_engine); 403 + 404 + MODULE_LICENSE("GPL"); 405 + MODULE_AUTHOR("Ivan Djelic <ivan.djelic@parrot.com>"); 406 + MODULE_DESCRIPTION("NAND software BCH ECC support");
+136 -4
drivers/mtd/nand/ecc.c
··· 95 95 96 96 #include <linux/module.h> 97 97 #include <linux/mtd/nand.h> 98 + #include <linux/slab.h> 98 99 99 100 /** 100 101 * nand_ecc_init_ctx - Init the ECC engine context ··· 105 104 */ 106 105 int nand_ecc_init_ctx(struct nand_device *nand) 107 106 { 108 - if (!nand->ecc.engine->ops->init_ctx) 107 + if (!nand->ecc.engine || !nand->ecc.engine->ops->init_ctx) 109 108 return 0; 110 109 111 110 return nand->ecc.engine->ops->init_ctx(nand); ··· 118 117 */ 119 118 void nand_ecc_cleanup_ctx(struct nand_device *nand) 120 119 { 121 - if (nand->ecc.engine->ops->cleanup_ctx) 120 + if (nand->ecc.engine && nand->ecc.engine->ops->cleanup_ctx) 122 121 nand->ecc.engine->ops->cleanup_ctx(nand); 123 122 } 124 123 EXPORT_SYMBOL(nand_ecc_cleanup_ctx); ··· 131 130 int nand_ecc_prepare_io_req(struct nand_device *nand, 132 131 struct nand_page_io_req *req) 133 132 { 134 - if (!nand->ecc.engine->ops->prepare_io_req) 133 + if (!nand->ecc.engine || !nand->ecc.engine->ops->prepare_io_req) 135 134 return 0; 136 135 137 136 return nand->ecc.engine->ops->prepare_io_req(nand, req); ··· 146 145 int nand_ecc_finish_io_req(struct nand_device *nand, 147 146 struct nand_page_io_req *req) 148 147 { 149 - if (!nand->ecc.engine->ops->finish_io_req) 148 + if (!nand->ecc.engine || !nand->ecc.engine->ops->finish_io_req) 150 149 return 0; 151 150 152 151 return nand->ecc.engine->ops->finish_io_req(nand, req); ··· 479 478 return corr >= ds_corr && conf->strength >= reqs->strength; 480 479 } 481 480 EXPORT_SYMBOL(nand_ecc_is_strong_enough); 481 + 482 + /* ECC engine driver internal helpers */ 483 + int nand_ecc_init_req_tweaking(struct nand_ecc_req_tweak_ctx *ctx, 484 + struct nand_device *nand) 485 + { 486 + unsigned int total_buffer_size; 487 + 488 + ctx->nand = nand; 489 + 490 + /* Let the user decide the exact length of each buffer */ 491 + if (!ctx->page_buffer_size) 492 + ctx->page_buffer_size = nanddev_page_size(nand); 493 + if (!ctx->oob_buffer_size) 494 + ctx->oob_buffer_size = 
nanddev_per_page_oobsize(nand); 495 + 496 + total_buffer_size = ctx->page_buffer_size + ctx->oob_buffer_size; 497 + 498 + ctx->spare_databuf = kzalloc(total_buffer_size, GFP_KERNEL); 499 + if (!ctx->spare_databuf) 500 + return -ENOMEM; 501 + 502 + ctx->spare_oobbuf = ctx->spare_databuf + ctx->page_buffer_size; 503 + 504 + return 0; 505 + } 506 + EXPORT_SYMBOL_GPL(nand_ecc_init_req_tweaking); 507 + 508 + void nand_ecc_cleanup_req_tweaking(struct nand_ecc_req_tweak_ctx *ctx) 509 + { 510 + kfree(ctx->spare_databuf); 511 + } 512 + EXPORT_SYMBOL_GPL(nand_ecc_cleanup_req_tweaking); 513 + 514 + /* 515 + * Ensure the data and OOB areas are fully read/written, otherwise the correction 516 + * might not work as expected. 517 + */ 518 + void nand_ecc_tweak_req(struct nand_ecc_req_tweak_ctx *ctx, 519 + struct nand_page_io_req *req) 520 + { 521 + struct nand_device *nand = ctx->nand; 522 + struct nand_page_io_req *orig, *tweak; 523 + 524 + /* Save the original request */ 525 + ctx->orig_req = *req; 526 + ctx->bounce_data = false; 527 + ctx->bounce_oob = false; 528 + orig = &ctx->orig_req; 529 + tweak = req; 530 + 531 + /* Ensure the request covers the entire page */ 532 + if (orig->datalen < nanddev_page_size(nand)) { 533 + ctx->bounce_data = true; 534 + tweak->dataoffs = 0; 535 + tweak->datalen = nanddev_page_size(nand); 536 + tweak->databuf.in = ctx->spare_databuf; 537 + memset(tweak->databuf.in, 0xFF, ctx->page_buffer_size); 538 + } 539 + 540 + if (orig->ooblen < nanddev_per_page_oobsize(nand)) { 541 + ctx->bounce_oob = true; 542 + tweak->ooboffs = 0; 543 + tweak->ooblen = nanddev_per_page_oobsize(nand); 544 + tweak->oobbuf.in = ctx->spare_oobbuf; 545 + memset(tweak->oobbuf.in, 0xFF, ctx->oob_buffer_size); 546 + } 547 + 548 + /* Copy the data that must be written in the bounce buffers, if needed */ 549 + if (orig->type == NAND_PAGE_WRITE) { 550 + if (ctx->bounce_data) 551 + memcpy((void *)tweak->databuf.out + orig->dataoffs, 552 + orig->databuf.out, orig->datalen); 553 + 554 + if
(ctx->bounce_oob) 555 + memcpy((void *)tweak->oobbuf.out + orig->ooboffs, 556 + orig->oobbuf.out, orig->ooblen); 557 + } 558 + } 559 + EXPORT_SYMBOL_GPL(nand_ecc_tweak_req); 560 + 561 + void nand_ecc_restore_req(struct nand_ecc_req_tweak_ctx *ctx, 562 + struct nand_page_io_req *req) 563 + { 564 + struct nand_page_io_req *orig, *tweak; 565 + 566 + orig = &ctx->orig_req; 567 + tweak = req; 568 + 569 + /* Restore the data read from the bounce buffers, if needed */ 570 + if (orig->type == NAND_PAGE_READ) { 571 + if (ctx->bounce_data) 572 + memcpy(orig->databuf.in, 573 + tweak->databuf.in + orig->dataoffs, 574 + orig->datalen); 575 + 576 + if (ctx->bounce_oob) 577 + memcpy(orig->oobbuf.in, 578 + tweak->oobbuf.in + orig->ooboffs, 579 + orig->ooblen); 580 + } 581 + 582 + /* Ensure the original request is restored */ 583 + *req = *orig; 584 + } 585 + EXPORT_SYMBOL_GPL(nand_ecc_restore_req); 586 + 587 + struct nand_ecc_engine *nand_ecc_get_sw_engine(struct nand_device *nand) 588 + { 589 + unsigned int algo = nand->ecc.user_conf.algo; 590 + 591 + if (algo == NAND_ECC_ALGO_UNKNOWN) 592 + algo = nand->ecc.defaults.algo; 593 + 594 + switch (algo) { 595 + case NAND_ECC_ALGO_HAMMING: 596 + return nand_ecc_sw_hamming_get_engine(); 597 + case NAND_ECC_ALGO_BCH: 598 + return nand_ecc_sw_bch_get_engine(); 599 + default: 600 + break; 601 + } 602 + 603 + return NULL; 604 + } 605 + EXPORT_SYMBOL(nand_ecc_get_sw_engine); 606 + 607 + struct nand_ecc_engine *nand_ecc_get_on_die_hw_engine(struct nand_device *nand) 608 + { 609 + return nand->ecc.ondie_engine; 610 + } 611 + EXPORT_SYMBOL(nand_ecc_get_on_die_hw_engine); 482 612 483 613 MODULE_LICENSE("GPL"); 484 614 MODULE_AUTHOR("Miquel Raynal <miquel.raynal@bootlin.com>");
+219 -225
drivers/mtd/nand/onenand/onenand_base.c
··· 132 132 .free = onenand_ooblayout_128_free, 133 133 }; 134 134 135 - /** 135 + /* 136 136 * onenand_oob_32_64 - oob info for large (2KB) page 137 137 */ 138 138 static int onenand_ooblayout_32_64_ecc(struct mtd_info *mtd, int section, ··· 192 192 193 193 /** 194 194 * onenand_readw - [OneNAND Interface] Read OneNAND register 195 - * @param addr address to read 195 + * @addr: address to read 196 196 * 197 197 * Read OneNAND register 198 198 */ ··· 203 203 204 204 /** 205 205 * onenand_writew - [OneNAND Interface] Write OneNAND register with value 206 - * @param value value to write 207 - * @param addr address to write 206 + * @value: value to write 207 + * @addr: address to write 208 208 * 209 209 * Write OneNAND register with value 210 210 */ ··· 215 215 216 216 /** 217 217 * onenand_block_address - [DEFAULT] Get block address 218 - * @param this onenand chip data structure 219 - * @param block the block 218 + * @this: onenand chip data structure 219 + * @block: the block 220 220 * @return translated block address if DDP, otherwise same 221 221 * 222 222 * Setup Start Address 1 Register (F100h) ··· 232 232 233 233 /** 234 234 * onenand_bufferram_address - [DEFAULT] Get bufferram address 235 - * @param this onenand chip data structure 236 - * @param block the block 235 + * @this: onenand chip data structure 236 + * @block: the block 237 237 * @return set DBS value if DDP, otherwise 0 238 238 * 239 239 * Setup Start Address 2 Register (F101h) for DDP ··· 249 249 250 250 /** 251 251 * onenand_page_address - [DEFAULT] Get page address 252 - * @param page the page address 253 - * @param sector the sector address 252 + * @page: the page address 253 + * @sector: the sector address 254 254 * @return combined page and sector address 255 255 * 256 256 * Setup Start Address 8 Register (F107h) ··· 268 268 269 269 /** 270 270 * onenand_buffer_address - [DEFAULT] Get buffer address 271 - * @param dataram1 DataRAM index 272 - * @param sectors the sector address 273 - * @param 
count the number of sectors 274 - * @return the start buffer value 271 + * @dataram1: DataRAM index 272 + * @sectors: the sector address 273 + * @count: the number of sectors 274 + * Return: the start buffer value 275 275 * 276 276 * Setup Start Buffer Register (F200h) 277 277 */ ··· 295 295 296 296 /** 297 297 * flexonenand_block- For given address return block number 298 - * @param this - OneNAND device structure 299 - * @param addr - Address for which block number is needed 298 + * @this: - OneNAND device structure 299 + * @addr: - Address for which block number is needed 300 300 */ 301 301 static unsigned flexonenand_block(struct onenand_chip *this, loff_t addr) 302 302 { ··· 359 359 360 360 /** 361 361 * onenand_get_density - [DEFAULT] Get OneNAND density 362 - * @param dev_id OneNAND device ID 362 + * @dev_id: OneNAND device ID 363 363 * 364 364 * Get OneNAND density from device ID 365 365 */ ··· 371 371 372 372 /** 373 373 * flexonenand_region - [Flex-OneNAND] Return erase region of addr 374 - * @param mtd MTD device structure 375 - * @param addr address whose erase region needs to be identified 374 + * @mtd: MTD device structure 375 + * @addr: address whose erase region needs to be identified 376 376 */ 377 377 int flexonenand_region(struct mtd_info *mtd, loff_t addr) 378 378 { ··· 387 387 388 388 /** 389 389 * onenand_command - [DEFAULT] Send command to OneNAND device 390 - * @param mtd MTD device structure 391 - * @param cmd the command to be sent 392 - * @param addr offset to read from or write to 393 - * @param len number of bytes to read or write 390 + * @mtd: MTD device structure 391 + * @cmd: the command to be sent 392 + * @addr: offset to read from or write to 393 + * @len: number of bytes to read or write 394 394 * 395 395 * Send command to OneNAND device. 
This function is used for middle/large page 396 396 * devices (1KB/2KB Bytes per page) ··· 519 519 520 520 /** 521 521 * onenand_read_ecc - return ecc status 522 - * @param this onenand chip structure 522 + * @this: onenand chip structure 523 523 */ 524 524 static inline int onenand_read_ecc(struct onenand_chip *this) 525 525 { ··· 543 543 544 544 /** 545 545 * onenand_wait - [DEFAULT] wait until the command is done 546 - * @param mtd MTD device structure 547 - * @param state state to select the max. timeout value 546 + * @mtd: MTD device structure 547 + * @state: state to select the max. timeout value 548 548 * 549 549 * Wait for command done. This applies to all OneNAND command 550 550 * Read can take up to 30us, erase up to 2ms and program up to 350us ··· 625 625 626 626 /* 627 627 * onenand_interrupt - [DEFAULT] onenand interrupt handler 628 - * @param irq onenand interrupt number 629 - * @param dev_id interrupt data 628 + * @irq: onenand interrupt number 629 + * @dev_id: interrupt data 630 630 * 631 631 * complete the work 632 632 */ ··· 643 643 644 644 /* 645 645 * onenand_interrupt_wait - [DEFAULT] wait until the command is done 646 - * @param mtd MTD device structure 647 - * @param state state to select the max. timeout value 646 + * @mtd: MTD device structure 647 + * @state: state to select the max. timeout value 648 648 * 649 649 * Wait for command done. 650 650 */ ··· 659 659 660 660 /* 661 661 * onenand_try_interrupt_wait - [DEFAULT] try interrupt wait 662 - * @param mtd MTD device structure 663 - * @param state state to select the max. timeout value 662 + * @mtd: MTD device structure 663 + * @state: state to select the max. 
timeout value 664 664 * 665 665 * Try interrupt based wait (It is used one-time) 666 666 */ ··· 689 689 690 690 /* 691 691 * onenand_setup_wait - [OneNAND Interface] setup onenand wait method 692 - * @param mtd MTD device structure 692 + * @mtd: MTD device structure 693 693 * 694 694 * There's two method to wait onenand work 695 695 * 1. polling - read interrupt status register ··· 724 724 725 725 /** 726 726 * onenand_bufferram_offset - [DEFAULT] BufferRAM offset 727 - * @param mtd MTD data structure 728 - * @param area BufferRAM area 727 + * @mtd: MTD data structure 728 + * @area: BufferRAM area 729 729 * @return offset given area 730 730 * 731 731 * Return BufferRAM offset given area ··· 747 747 748 748 /** 749 749 * onenand_read_bufferram - [OneNAND Interface] Read the bufferram area 750 - * @param mtd MTD data structure 751 - * @param area BufferRAM area 752 - * @param buffer the databuffer to put/get data 753 - * @param offset offset to read from or write to 754 - * @param count number of bytes to read/write 750 + * @mtd: MTD data structure 751 + * @area: BufferRAM area 752 + * @buffer: the databuffer to put/get data 753 + * @offset: offset to read from or write to 754 + * @count: number of bytes to read/write 755 755 * 756 756 * Read the BufferRAM area 757 757 */ ··· 783 783 784 784 /** 785 785 * onenand_sync_read_bufferram - [OneNAND Interface] Read the bufferram area with Sync. Burst mode 786 - * @param mtd MTD data structure 787 - * @param area BufferRAM area 788 - * @param buffer the databuffer to put/get data 789 - * @param offset offset to read from or write to 790 - * @param count number of bytes to read/write 786 + * @mtd: MTD data structure 787 + * @area: BufferRAM area 788 + * @buffer: the databuffer to put/get data 789 + * @offset: offset to read from or write to 790 + * @count: number of bytes to read/write 791 791 * 792 792 * Read the BufferRAM area with Sync. 
Burst Mode 793 793 */ ··· 823 823 824 824 /** 825 825 * onenand_write_bufferram - [OneNAND Interface] Write the bufferram area 826 - * @param mtd MTD data structure 827 - * @param area BufferRAM area 828 - * @param buffer the databuffer to put/get data 829 - * @param offset offset to read from or write to 830 - * @param count number of bytes to read/write 826 + * @mtd: MTD data structure 827 + * @area: BufferRAM area 828 + * @buffer: the databuffer to put/get data 829 + * @offset: offset to read from or write to 830 + * @count: number of bytes to read/write 831 831 * 832 832 * Write the BufferRAM area 833 833 */ ··· 864 864 865 865 /** 866 866 * onenand_get_2x_blockpage - [GENERIC] Get blockpage at 2x program mode 867 - * @param mtd MTD data structure 868 - * @param addr address to check 867 + * @mtd: MTD data structure 868 + * @addr: address to check 869 869 * @return blockpage address 870 870 * 871 871 * Get blockpage address at 2x program mode ··· 888 888 889 889 /** 890 890 * onenand_check_bufferram - [GENERIC] Check BufferRAM information 891 - * @param mtd MTD data structure 892 - * @param addr address to check 891 + * @mtd: MTD data structure 892 + * @addr: address to check 893 893 * @return 1 if there are valid data, otherwise 0 894 894 * 895 895 * Check bufferram if there is data we required ··· 930 930 931 931 /** 932 932 * onenand_update_bufferram - [GENERIC] Update BufferRAM information 933 - * @param mtd MTD data structure 934 - * @param addr address to update 935 - * @param valid valid flag 933 + * @mtd: MTD data structure 934 + * @addr: address to update 935 + * @valid: valid flag 936 936 * 937 937 * Update BufferRAM information 938 938 */ ··· 963 963 964 964 /** 965 965 * onenand_invalidate_bufferram - [GENERIC] Invalidate BufferRAM information 966 - * @param mtd MTD data structure 967 - * @param addr start address to invalidate 968 - * @param len length to invalidate 966 + * @mtd: MTD data structure 967 + * @addr: start address to invalidate 968 + * 
@len: length to invalidate 969 969 * 970 970 * Invalidate BufferRAM information 971 971 */ ··· 986 986 987 987 /** 988 988 * onenand_get_device - [GENERIC] Get chip for selected access 989 - * @param mtd MTD device structure 990 - * @param new_state the state which is requested 989 + * @mtd: MTD device structure 990 + * @new_state: the state which is requested 991 991 * 992 992 * Get the device and lock it for exclusive access 993 993 */ ··· 1024 1024 1025 1025 /** 1026 1026 * onenand_release_device - [GENERIC] release chip 1027 - * @param mtd MTD device structure 1027 + * @mtd: MTD device structure 1028 1028 * 1029 1029 * Deselect, release chip lock and wake up anyone waiting on the device 1030 1030 */ ··· 1043 1043 1044 1044 /** 1045 1045 * onenand_transfer_auto_oob - [INTERN] oob auto-placement transfer 1046 - * @param mtd MTD device structure 1047 - * @param buf destination address 1048 - * @param column oob offset to read from 1049 - * @param thislen oob length to read 1046 + * @mtd: MTD device structure 1047 + * @buf: destination address 1048 + * @column: oob offset to read from 1049 + * @thislen: oob length to read 1050 1050 */ 1051 1051 static int onenand_transfer_auto_oob(struct mtd_info *mtd, uint8_t *buf, int column, 1052 1052 int thislen) ··· 1061 1061 1062 1062 /** 1063 1063 * onenand_recover_lsb - [Flex-OneNAND] Recover LSB page data 1064 - * @param mtd MTD device structure 1065 - * @param addr address to recover 1066 - * @param status return value from onenand_wait / onenand_bbt_wait 1064 + * @mtd: MTD device structure 1065 + * @addr: address to recover 1066 + * @status: return value from onenand_wait / onenand_bbt_wait 1067 1067 * 1068 1068 * MLC NAND Flash cell has paired pages - LSB page and MSB page. LSB page has 1069 1069 * lower page address and MSB page has higher page address in paired pages. 
··· 1104 1104 1105 1105 /** 1106 1106 * onenand_mlc_read_ops_nolock - MLC OneNAND read main and/or out-of-band 1107 - * @param mtd MTD device structure 1108 - * @param from offset to read from 1109 - * @param ops: oob operation description structure 1107 + * @mtd: MTD device structure 1108 + * @from: offset to read from 1109 + * @ops: oob operation description structure 1110 1110 * 1111 1111 * MLC OneNAND / Flex-OneNAND has 4KB page size and 4KB dataram. 1112 1112 * So, read-while-load is not present. ··· 1206 1206 1207 1207 /** 1208 1208 * onenand_read_ops_nolock - [OneNAND Interface] OneNAND read main and/or out-of-band 1209 - * @param mtd MTD device structure 1210 - * @param from offset to read from 1211 - * @param ops: oob operation description structure 1209 + * @mtd: MTD device structure 1210 + * @from: offset to read from 1211 + * @ops: oob operation description structure 1212 1212 * 1213 1213 * OneNAND read main and/or out-of-band data 1214 1214 */ ··· 1335 1335 1336 1336 /** 1337 1337 * onenand_read_oob_nolock - [MTD Interface] OneNAND read out-of-band 1338 - * @param mtd MTD device structure 1339 - * @param from offset to read from 1340 - * @param ops: oob operation description structure 1338 + * @mtd: MTD device structure 1339 + * @from: offset to read from 1340 + * @ops: oob operation description structure 1341 1341 * 1342 1342 * OneNAND read out-of-band data from the spare area 1343 1343 */ ··· 1430 1430 1431 1431 /** 1432 1432 * onenand_read_oob - [MTD Interface] Read main and/or out-of-band 1433 - * @param mtd: MTD device structure 1434 - * @param from: offset to read from 1435 - * @param ops: oob operation description structure 1436 - 1433 + * @mtd: MTD device structure 1434 + * @from: offset to read from 1435 + * @ops: oob operation description structure 1436 + * 1437 1437 * Read main and/or out-of-band 1438 1438 */ 1439 1439 static int onenand_read_oob(struct mtd_info *mtd, loff_t from, ··· 1466 1466 1467 1467 /** 1468 1468 * onenand_bbt_wait - 
[DEFAULT] wait until the command is done 1469 - * @param mtd MTD device structure 1470 - * @param state state to select the max. timeout value 1469 + * @mtd: MTD device structure 1470 + * @state: state to select the max. timeout value 1471 1471 * 1472 1472 * Wait for command done. 1473 1473 */ ··· 1517 1517 1518 1518 /** 1519 1519 * onenand_bbt_read_oob - [MTD Interface] OneNAND read out-of-band for bbt scan 1520 - * @param mtd MTD device structure 1521 - * @param from offset to read from 1522 - * @param ops oob operation description structure 1520 + * @mtd: MTD device structure 1521 + * @from: offset to read from 1522 + * @ops: oob operation description structure 1523 1523 * 1524 1524 * OneNAND read out-of-band data from the spare area for bbt scan 1525 1525 */ ··· 1594 1594 #ifdef CONFIG_MTD_ONENAND_VERIFY_WRITE 1595 1595 /** 1596 1596 * onenand_verify_oob - [GENERIC] verify the oob contents after a write 1597 - * @param mtd MTD device structure 1598 - * @param buf the databuffer to verify 1599 - * @param to offset to read from 1597 + * @mtd: MTD device structure 1598 + * @buf: the databuffer to verify 1599 + * @to: offset to read from 1600 1600 */ 1601 1601 static int onenand_verify_oob(struct mtd_info *mtd, const u_char *buf, loff_t to) 1602 1602 { ··· 1622 1622 1623 1623 /** 1624 1624 * onenand_verify - [GENERIC] verify the chip contents after a write 1625 - * @param mtd MTD device structure 1626 - * @param buf the databuffer to verify 1627 - * @param addr offset to read from 1628 - * @param len number of bytes to read and compare 1625 + * @mtd: MTD device structure 1626 + * @buf: the databuffer to verify 1627 + * @addr: offset to read from 1628 + * @len: number of bytes to read and compare 1629 1629 */ 1630 1630 static int onenand_verify(struct mtd_info *mtd, const u_char *buf, loff_t addr, size_t len) 1631 1631 { ··· 1684 1684 1685 1685 /** 1686 1686 * onenand_panic_write - [MTD Interface] write buffer to FLASH in a panic context 1687 - * @param mtd MTD 
device structure 1688 - * @param to offset to write to 1689 - * @param len number of bytes to write 1690 - * @param retlen pointer to variable to store the number of written bytes 1691 - * @param buf the data to write 1687 + * @mtd: MTD device structure 1688 + * @to: offset to write to 1689 + * @len: number of bytes to write 1690 + * @retlen: pointer to variable to store the number of written bytes 1691 + * @buf: the data to write 1692 1692 * 1693 1693 * Write with ECC 1694 1694 */ ··· 1762 1762 1763 1763 /** 1764 1764 * onenand_fill_auto_oob - [INTERN] oob auto-placement transfer 1765 - * @param mtd MTD device structure 1766 - * @param oob_buf oob buffer 1767 - * @param buf source address 1768 - * @param column oob offset to write to 1769 - * @param thislen oob length to write 1765 + * @mtd: MTD device structure 1766 + * @oob_buf: oob buffer 1767 + * @buf: source address 1768 + * @column: oob offset to write to 1769 + * @thislen: oob length to write 1770 1770 */ 1771 1771 static int onenand_fill_auto_oob(struct mtd_info *mtd, u_char *oob_buf, 1772 1772 const u_char *buf, int column, int thislen) ··· 1776 1776 1777 1777 /** 1778 1778 * onenand_write_ops_nolock - [OneNAND Interface] write main and/or out-of-band 1779 - * @param mtd MTD device structure 1780 - * @param to offset to write to 1781 - * @param ops oob operation description structure 1779 + * @mtd: MTD device structure 1780 + * @to: offset to write to 1781 + * @ops: oob operation description structure 1782 1782 * 1783 1783 * Write main and/or oob with ECC 1784 1784 */ ··· 1957 1957 1958 1958 /** 1959 1959 * onenand_write_oob_nolock - [INTERN] OneNAND write out-of-band 1960 - * @param mtd MTD device structure 1961 - * @param to offset to write to 1962 - * @param len number of bytes to write 1963 - * @param retlen pointer to variable to store the number of written bytes 1964 - * @param buf the data to write 1965 - * @param mode operation mode 1960 + * @mtd: MTD device structure 1961 + * @to: offset to write 
to 1962 + * @ops: oob operation description structure 1966 1963 * 1967 1964 * OneNAND write out-of-band 1968 1965 */ ··· 2067 2070 2068 2071 /** 2069 2072 * onenand_write_oob - [MTD Interface] NAND write data and/or out-of-band 2070 - * @param mtd: MTD device structure 2071 - * @param to: offset to write 2072 - * @param ops: oob operation description structure 2073 + * @mtd: MTD device structure 2074 + * @to: offset to write 2075 + * @ops: oob operation description structure 2073 2076 */ 2074 2077 static int onenand_write_oob(struct mtd_info *mtd, loff_t to, 2075 2078 struct mtd_oob_ops *ops) ··· 2098 2101 2099 2102 /** 2100 2103 * onenand_block_isbad_nolock - [GENERIC] Check if a block is marked bad 2101 - * @param mtd MTD device structure 2102 - * @param ofs offset from device start 2103 - * @param allowbbt 1, if its allowed to access the bbt area 2104 + * @mtd: MTD device structure 2105 + * @ofs: offset from device start 2106 + * @allowbbt: 1, if its allowed to access the bbt area 2104 2107 * 2105 2108 * Check, if the block is bad. Either by reading the bad block table or 2106 2109 * calling of the scan function. 
··· 2141 2144 2142 2145 /** 2143 2146 * onenand_multiblock_erase - [INTERN] erase block(s) using multiblock erase 2144 - * @param mtd MTD device structure 2145 - * @param instr erase instruction 2146 - * @param region erase region 2147 + * @mtd: MTD device structure 2148 + * @instr: erase instruction 2149 + * @block_size: block size 2147 2150 * 2148 2151 * Erase one or more blocks up to 64 block at a time 2149 2152 */ ··· 2251 2254 2252 2255 /** 2253 2256 * onenand_block_by_block_erase - [INTERN] erase block(s) using regular erase 2254 - * @param mtd MTD device structure 2255 - * @param instr erase instruction 2256 - * @param region erase region 2257 - * @param block_size erase block size 2257 + * @mtd: MTD device structure 2258 + * @instr: erase instruction 2259 + * @region: erase region 2260 + * @block_size: erase block size 2258 2261 * 2259 2262 * Erase one or more blocks one block at a time 2260 2263 */ ··· 2323 2326 2324 2327 /** 2325 2328 * onenand_erase - [MTD Interface] erase block(s) 2326 - * @param mtd MTD device structure 2327 - * @param instr erase instruction 2329 + * @mtd: MTD device structure 2330 + * @instr: erase instruction 2328 2331 * 2329 2332 * Erase one or more blocks 2330 2333 */ ··· 2388 2391 2389 2392 /** 2390 2393 * onenand_sync - [MTD Interface] sync 2391 - * @param mtd MTD device structure 2394 + * @mtd: MTD device structure 2392 2395 * 2393 2396 * Sync is actually a wait for chip ready function 2394 2397 */ ··· 2405 2408 2406 2409 /** 2407 2410 * onenand_block_isbad - [MTD Interface] Check whether the block at the given offset is bad 2408 - * @param mtd MTD device structure 2409 - * @param ofs offset relative to mtd start 2411 + * @mtd: MTD device structure 2412 + * @ofs: offset relative to mtd start 2410 2413 * 2411 2414 * Check whether the block is bad 2412 2415 */ ··· 2422 2425 2423 2426 /** 2424 2427 * onenand_default_block_markbad - [DEFAULT] mark a block bad 2425 - * @param mtd MTD device structure 2426 - * @param ofs offset from 
device start 2428 + * @mtd: MTD device structure 2429 + * @ofs: offset from device start 2427 2430 * 2428 2431 * This is the default implementation, which can be overridden by 2429 2432 * a hardware specific driver. ··· 2457 2460 2458 2461 /** 2459 2462 * onenand_block_markbad - [MTD Interface] Mark the block at the given offset as bad 2460 - * @param mtd MTD device structure 2461 - * @param ofs offset relative to mtd start 2463 + * @mtd: MTD device structure 2464 + * @ofs: offset relative to mtd start 2462 2465 * 2463 2466 * Mark the block as bad 2464 2467 */ ··· 2483 2486 2484 2487 /** 2485 2488 * onenand_do_lock_cmd - [OneNAND Interface] Lock or unlock block(s) 2486 - * @param mtd MTD device structure 2487 - * @param ofs offset relative to mtd start 2488 - * @param len number of bytes to lock or unlock 2489 - * @param cmd lock or unlock command 2489 + * @mtd: MTD device structure 2490 + * @ofs: offset relative to mtd start 2491 + * @len: number of bytes to lock or unlock 2492 + * @cmd: lock or unlock command 2490 2493 * 2491 2494 * Lock or unlock one or more blocks 2492 2495 */ ··· 2563 2566 2564 2567 /** 2565 2568 * onenand_lock - [MTD Interface] Lock block(s) 2566 - * @param mtd MTD device structure 2567 - * @param ofs offset relative to mtd start 2568 - * @param len number of bytes to unlock 2569 + * @mtd: MTD device structure 2570 + * @ofs: offset relative to mtd start 2571 + * @len: number of bytes to unlock 2569 2572 * 2570 2573 * Lock one or more blocks 2571 2574 */ ··· 2581 2584 2582 2585 /** 2583 2586 * onenand_unlock - [MTD Interface] Unlock block(s) 2584 - * @param mtd MTD device structure 2585 - * @param ofs offset relative to mtd start 2586 - * @param len number of bytes to unlock 2587 + * @mtd: MTD device structure 2588 + * @ofs: offset relative to mtd start 2589 + * @len: number of bytes to unlock 2587 2590 * 2588 2591 * Unlock one or more blocks 2589 2592 */ ··· 2599 2602 2600 2603 /** 2601 2604 * onenand_check_lock_status - [OneNAND Interface] 
Check lock status 2602 - * @param this onenand chip data structure 2605 + * @this: onenand chip data structure 2603 2606 * 2604 2607 * Check lock status 2605 2608 */ ··· 2633 2636 2634 2637 /** 2635 2638 * onenand_unlock_all - [OneNAND Interface] unlock all blocks 2636 - * @param mtd MTD device structure 2639 + * @mtd: MTD device structure 2637 2640 * 2638 2641 * Unlock all blocks 2639 2642 */ ··· 2680 2683 2681 2684 /** 2682 2685 * onenand_otp_command - Send OTP specific command to OneNAND device 2683 - * @param mtd MTD device structure 2684 - * @param cmd the command to be sent 2685 - * @param addr offset to read from or write to 2686 - * @param len number of bytes to read or write 2686 + * @mtd: MTD device structure 2687 + * @cmd: the command to be sent 2688 + * @addr: offset to read from or write to 2689 + * @len: number of bytes to read or write 2687 2690 */ 2688 2691 static int onenand_otp_command(struct mtd_info *mtd, int cmd, loff_t addr, 2689 2692 size_t len) ··· 2755 2758 2756 2759 /** 2757 2760 * onenand_otp_write_oob_nolock - [INTERN] OneNAND write out-of-band, specific to OTP 2758 - * @param mtd MTD device structure 2759 - * @param to offset to write to 2760 - * @param len number of bytes to write 2761 - * @param retlen pointer to variable to store the number of written bytes 2762 - * @param buf the data to write 2761 + * @mtd: MTD device structure 2762 + * @to: offset to write to 2763 + * @ops: oob operation description structure 2763 2764 * 2764 2765 * OneNAND write out-of-band only for OTP 2765 2766 */ ··· 2884 2889 2885 2890 /** 2886 2891 * do_otp_read - [DEFAULT] Read OTP block area 2887 - * @param mtd MTD device structure 2888 - * @param from The offset to read 2889 - * @param len number of bytes to read 2890 - * @param retlen pointer to variable to store the number of readbytes 2891 - * @param buf the databuffer to put/get data 2892 + * @mtd: MTD device structure 2893 + * @from: The offset to read 2894 + * @len: number of bytes to read 2895 + * 
@retlen: pointer to variable to store the number of readbytes 2896 + * @buf: the databuffer to put/get data 2892 2897 * 2893 2898 * Read OTP block area. 2894 2899 */ ··· 2921 2926 2922 2927 /** 2923 2928 * do_otp_write - [DEFAULT] Write OTP block area 2924 - * @param mtd MTD device structure 2925 - * @param to The offset to write 2926 - * @param len number of bytes to write 2927 - * @param retlen pointer to variable to store the number of write bytes 2928 - * @param buf the databuffer to put/get data 2929 + * @mtd: MTD device structure 2930 + * @to: The offset to write 2931 + * @len: number of bytes to write 2932 + * @retlen: pointer to variable to store the number of write bytes 2933 + * @buf: the databuffer to put/get data 2929 2934 * 2930 2935 * Write OTP block area. 2931 2936 */ ··· 2965 2970 2966 2971 /** 2967 2972 * do_otp_lock - [DEFAULT] Lock OTP block area 2968 - * @param mtd MTD device structure 2969 - * @param from The offset to lock 2970 - * @param len number of bytes to lock 2971 - * @param retlen pointer to variable to store the number of lock bytes 2972 - * @param buf the databuffer to put/get data 2973 + * @mtd: MTD device structure 2974 + * @from: The offset to lock 2975 + * @len: number of bytes to lock 2976 + * @retlen: pointer to variable to store the number of lock bytes 2977 + * @buf: the databuffer to put/get data 2973 2978 * 2974 2979 * Lock OTP block area. 
2975 2980 */ ··· 3013 3018 3014 3019 /** 3015 3020 * onenand_otp_walk - [DEFAULT] Handle OTP operation 3016 - * @param mtd MTD device structure 3017 - * @param from The offset to read/write 3018 - * @param len number of bytes to read/write 3019 - * @param retlen pointer to variable to store the number of read bytes 3020 - * @param buf the databuffer to put/get data 3021 - * @param action do given action 3022 - * @param mode specify user and factory 3021 + * @mtd: MTD device structure 3022 + * @from: The offset to read/write 3023 + * @len: number of bytes to read/write 3024 + * @retlen: pointer to variable to store the number of read bytes 3025 + * @buf: the databuffer to put/get data 3026 + * @action: do given action 3027 + * @mode: specify user and factory 3023 3028 * 3024 3029 * Handle OTP operation. 3025 3030 */ ··· 3094 3099 3095 3100 /** 3096 3101 * onenand_get_fact_prot_info - [MTD Interface] Read factory OTP info 3097 - * @param mtd MTD device structure 3098 - * @param len number of bytes to read 3099 - * @param retlen pointer to variable to store the number of read bytes 3100 - * @param buf the databuffer to put/get data 3102 + * @mtd: MTD device structure 3103 + * @len: number of bytes to read 3104 + * @retlen: pointer to variable to store the number of read bytes 3105 + * @buf: the databuffer to put/get data 3101 3106 * 3102 3107 * Read factory OTP info. 
3103 3108 */ ··· 3110 3115 3111 3116 /** 3112 3117 * onenand_read_fact_prot_reg - [MTD Interface] Read factory OTP area 3113 - * @param mtd MTD device structure 3114 - * @param from The offset to read 3115 - * @param len number of bytes to read 3116 - * @param retlen pointer to variable to store the number of read bytes 3117 - * @param buf the databuffer to put/get data 3118 + * @mtd: MTD device structure 3119 + * @from: The offset to read 3120 + * @len: number of bytes to read 3121 + * @retlen: pointer to variable to store the number of read bytes 3122 + * @buf: the databuffer to put/get data 3118 3123 * 3119 3124 * Read factory OTP area. 3120 3125 */ ··· 3126 3131 3127 3132 /** 3128 3133 * onenand_get_user_prot_info - [MTD Interface] Read user OTP info 3129 - * @param mtd MTD device structure 3130 - * @param retlen pointer to variable to store the number of read bytes 3131 - * @param len number of bytes to read 3132 - * @param buf the databuffer to put/get data 3134 + * @mtd: MTD device structure 3135 + * @retlen: pointer to variable to store the number of read bytes 3136 + * @len: number of bytes to read 3137 + * @buf: the databuffer to put/get data 3133 3138 * 3134 3139 * Read user OTP info. 3135 3140 */ ··· 3142 3147 3143 3148 /** 3144 3149 * onenand_read_user_prot_reg - [MTD Interface] Read user OTP area 3145 - * @param mtd MTD device structure 3146 - * @param from The offset to read 3147 - * @param len number of bytes to read 3148 - * @param retlen pointer to variable to store the number of read bytes 3149 - * @param buf the databuffer to put/get data 3150 + * @mtd: MTD device structure 3151 + * @from: The offset to read 3152 + * @len: number of bytes to read 3153 + * @retlen: pointer to variable to store the number of read bytes 3154 + * @buf: the databuffer to put/get data 3150 3155 * 3151 3156 * Read user OTP area. 
3152 3157 */ ··· 3158 3163 3159 3164 /** 3160 3165 * onenand_write_user_prot_reg - [MTD Interface] Write user OTP area 3161 - * @param mtd MTD device structure 3162 - * @param from The offset to write 3163 - * @param len number of bytes to write 3164 - * @param retlen pointer to variable to store the number of write bytes 3165 - * @param buf the databuffer to put/get data 3166 + * @mtd: MTD device structure 3167 + * @from: The offset to write 3168 + * @len: number of bytes to write 3169 + * @retlen: pointer to variable to store the number of write bytes 3170 + * @buf: the databuffer to put/get data 3166 3171 * 3167 3172 * Write user OTP area. 3168 3173 */ ··· 3174 3179 3175 3180 /** 3176 3181 * onenand_lock_user_prot_reg - [MTD Interface] Lock user OTP area 3177 - * @param mtd MTD device structure 3178 - * @param from The offset to lock 3179 - * @param len number of bytes to unlock 3182 + * @mtd: MTD device structure 3183 + * @from: The offset to lock 3184 + * @len: number of bytes to unlock 3180 3185 * 3181 3186 * Write lock mark on spare area in page 0 in OTP block 3182 3187 */ ··· 3229 3234 3230 3235 /** 3231 3236 * onenand_check_features - Check and set OneNAND features 3232 - * @param mtd MTD data structure 3237 + * @mtd: MTD data structure 3233 3238 * 3234 3239 * Check and set OneNAND features 3235 3240 * - lock scheme ··· 3319 3324 3320 3325 /** 3321 3326 * onenand_print_device_info - Print device & version ID 3322 - * @param device device ID 3323 - * @param version version ID 3327 + * @device: device ID 3328 + * @version: version ID 3324 3329 * 3325 3330 * Print device & version ID 3326 3331 */ ··· 3350 3355 3351 3356 /** 3352 3357 * onenand_check_maf - Check manufacturer ID 3353 - * @param manuf manufacturer ID 3358 + * @manuf: manufacturer ID 3354 3359 * 3355 3360 * Check manufacturer ID 3356 3361 */ ··· 3375 3380 } 3376 3381 3377 3382 /** 3378 - * flexonenand_get_boundary - Reads the SLC boundary 3379 - * @param onenand_info - onenand info structure 3380 
- **/ 3383 + * flexonenand_get_boundary - Reads the SLC boundary 3384 + * @mtd: MTD data structure 3385 + */ 3381 3386 static int flexonenand_get_boundary(struct mtd_info *mtd) 3382 3387 { 3383 3388 struct onenand_chip *this = mtd->priv; ··· 3417 3422 /** 3418 3423 * flexonenand_get_size - Fill up fields in onenand_chip and mtd_info 3419 3424 * boundary[], diesize[], mtd->size, mtd->erasesize 3420 - * @param mtd - MTD device structure 3425 + * @mtd: - MTD device structure 3421 3426 */ 3422 3427 static void flexonenand_get_size(struct mtd_info *mtd) 3423 3428 { ··· 3488 3493 3489 3494 /** 3490 3495 * flexonenand_check_blocks_erased - Check if blocks are erased 3491 - * @param mtd_info - mtd info structure 3492 - * @param start - first erase block to check 3493 - * @param end - last erase block to check 3496 + * @mtd: mtd info structure 3497 + * @start: first erase block to check 3498 + * @end: last erase block to check 3494 3499 * 3495 3500 * Converting an unerased block from MLC to SLC 3496 3501 * causes byte values to change. 
Since both data and its ECC ··· 3543 3548 return 0; 3544 3549 } 3545 3550 3546 - /** 3551 + /* 3547 3552 * flexonenand_set_boundary - Writes the SLC boundary 3548 - * @param mtd - mtd info structure 3549 3553 */ 3550 3554 static int flexonenand_set_boundary(struct mtd_info *mtd, int die, 3551 3555 int boundary, int lock) ··· 3634 3640 3635 3641 /** 3636 3642 * onenand_chip_probe - [OneNAND Interface] The generic chip probe 3637 - * @param mtd MTD device structure 3643 + * @mtd: MTD device structure 3638 3644 * 3639 3645 * OneNAND detection method: 3640 3646 * Compare the values from command with ones from register ··· 3682 3688 3683 3689 /** 3684 3690 * onenand_probe - [OneNAND Interface] Probe the OneNAND device 3685 - * @param mtd MTD device structure 3691 + * @mtd: MTD device structure 3686 3692 */ 3687 3693 static int onenand_probe(struct mtd_info *mtd) 3688 3694 { ··· 3777 3783 3778 3784 /** 3779 3785 * onenand_suspend - [MTD Interface] Suspend the OneNAND flash 3780 - * @param mtd MTD device structure 3786 + * @mtd: MTD device structure 3781 3787 */ 3782 3788 static int onenand_suspend(struct mtd_info *mtd) 3783 3789 { ··· 3786 3792 3787 3793 /** 3788 3794 * onenand_resume - [MTD Interface] Resume the OneNAND flash 3789 - * @param mtd MTD device structure 3795 + * @mtd: MTD device structure 3790 3796 */ 3791 3797 static void onenand_resume(struct mtd_info *mtd) 3792 3798 { ··· 3801 3807 3802 3808 /** 3803 3809 * onenand_scan - [OneNAND Interface] Scan for the OneNAND device 3804 - * @param mtd MTD device structure 3805 - * @param maxchips Number of chips to scan for 3810 + * @mtd: MTD device structure 3811 + * @maxchips: Number of chips to scan for 3806 3812 * 3807 3813 * This fills out all the not initialized function pointers 3808 3814 * with the defaults. 
··· 3979 3985 3980 3986 /** 3981 3987 * onenand_release - [OneNAND Interface] Free resources held by the OneNAND device 3982 - * @param mtd MTD device structure 3988 + * @mtd: MTD device structure 3983 3989 */ 3984 3990 void onenand_release(struct mtd_info *mtd) 3985 3991 {
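The bulk of the hunks above mechanically convert the old `@param name description` spelling to the `@name: description` form that `scripts/kernel-doc` actually parses. A minimal sketch of the corrected style (the function and its parameters here are hypothetical, invented only to illustrate the comment format — they are not from the driver):

```c
/**
 * onenand_example_op - hypothetical helper, shown only to illustrate the
 *	kernel-doc style these fixes converge on
 * @base: starting offset
 * @len: number of bytes
 *
 * Each parameter uses the "@name: description" form. The older
 * "@param name description" spelling is not recognized by
 * scripts/kernel-doc and produces warnings during a W=1 build.
 */
static int onenand_example_op(int base, int len)
{
	return base + len;
}
```

The same conversion also requires the parameter list in the comment to match the real signature, which is why several hunks (e.g. `onenand_write_oob_nolock`) drop stale `@len`/`@retlen`/`@buf` entries in favor of a single `@ops`.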
+16 -16
drivers/mtd/nand/onenand/onenand_bbt.c
···
 
 /**
  * check_short_pattern - [GENERIC] check if a pattern is in the buffer
- * @param buf the buffer to search
- * @param len the length of buffer to search
- * @param paglen the pagelength
- * @param td search pattern descriptor
+ * @buf: the buffer to search
+ * @len: the length of buffer to search
+ * @paglen: the pagelength
+ * @td: search pattern descriptor
  *
  * Check for a pattern at the given place. Used to search bad block
  * tables and good / bad block identifiers. Same as check_pattern, but
···
 
 /**
  * create_bbt - [GENERIC] Create a bad block table by scanning the device
- * @param mtd MTD device structure
- * @param buf temporary buffer
- * @param bd descriptor for the good/bad block search pattern
- * @param chip create the table for a specific chip, -1 read all chips.
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @bd: descriptor for the good/bad block search pattern
+ * @chip: create the table for a specific chip, -1 read all chips.
  *	Applies only if NAND_BBT_PERCHIP option is set
  *
  * Create a bad block table by scanning the device
···
 
 /**
  * onenand_memory_bbt - [GENERIC] create a memory based bad block table
- * @param mtd MTD device structure
- * @param bd descriptor for the good/bad block search pattern
+ * @mtd: MTD device structure
+ * @bd: descriptor for the good/bad block search pattern
  *
  * The function creates a memory based bbt by scanning the device
  * for manufacturer / software marked good / bad blocks
···
 
 /**
  * onenand_isbad_bbt - [OneNAND Interface] Check if a block is bad
- * @param mtd MTD device structure
- * @param offs offset in the device
- * @param allowbbt allow access to bad block table region
+ * @mtd: MTD device structure
+ * @offs: offset in the device
+ * @allowbbt: allow access to bad block table region
  */
 static int onenand_isbad_bbt(struct mtd_info *mtd, loff_t offs, int allowbbt)
 {
···
 
 /**
  * onenand_scan_bbt - [OneNAND Interface] scan, find, read and maybe create bad block table(s)
- * @param mtd MTD device structure
- * @param bd descriptor for the good/bad block search pattern
+ * @mtd: MTD device structure
+ * @bd: descriptor for the good/bad block search pattern
  *
  * The function checks, if a bad block table(s) is/are already
  * available. If not it scans the device for manufacturer
···
 
 /**
  * onenand_default_bbt - [OneNAND Interface] Select a default bad block table for the device
- * @param mtd MTD device structure
+ * @mtd: MTD device structure
  *
  * This function selects the default bad block table
  * support for the device and calls the onenand_scan_bbt function
+8 -8
drivers/mtd/nand/onenand/onenand_omap2.c
···
 
 	bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
 	/*
-	 * If the buffer address is not DMA-able, len is not long enough to make
-	 * DMA transfers profitable or panic_write() may be in an interrupt
-	 * context fallback to PIO mode.
+	 * If the buffer address is not DMA-able, len is not long enough to
+	 * make DMA transfers profitable or if invoked from panic_write()
+	 * fallback to PIO mode.
 	 */
 	if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||
-	    count < 384 || in_interrupt() || oops_in_progress)
+	    count < 384 || mtd->oops_panic_write)
 		goto out_copy;
 
 	xtra = count & 3;
···
 
 	bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
 	/*
-	 * If the buffer address is not DMA-able, len is not long enough to make
-	 * DMA transfers profitable or panic_write() may be in an interrupt
-	 * context fallback to PIO mode.
+	 * If the buffer address is not DMA-able, len is not long enough to
+	 * make DMA transfers profitable or if invoked from panic_write()
+	 * fallback to PIO mode.
 	 */
 	if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||
-	    count < 384 || in_interrupt() || oops_in_progress)
+	    count < 384 || mtd->oops_panic_write)
 		goto out_copy;
 
 	dma_src = dma_map_single(dev, buf, count, DMA_TO_DEVICE);
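The functional change in this file replaces the `in_interrupt() || oops_in_progress` test with the `mtd->oops_panic_write` flag that the MTD core sets while `mtd_panic_write()` runs. The fallback predicate can be sketched in isolation as below; `struct fake_mtd` is a stand-in for the real `struct mtd_info`, the `!virt_addr_valid(buf)` test is left out because it has no userspace equivalent, and the 384-byte threshold is the one used by the driver:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the one field the check reads; in the kernel this is
 * struct mtd_info::oops_panic_write, set by mtd_panic_write(). */
struct fake_mtd {
	bool oops_panic_write;
};

/*
 * Fall back to PIO when DMA is unsafe or not worthwhile: the buffer or
 * BufferRAM offset is not 4-byte aligned, the transfer is too short to
 * amortize the DMA setup cost, or a panic write is in progress (waiting
 * for DMA completion would require sleeping, which a panic context
 * cannot do).
 */
static bool onenand_omap2_use_pio(const struct fake_mtd *mtd, uintptr_t buf,
				  unsigned int bram_offset, size_t count)
{
	return (bram_offset & 3) || (buf & 3) || count < 384 ||
	       mtd->oops_panic_write;
}
```

The point of the swap is that `in_interrupt()` describes the *current context*, while `oops_panic_write` describes the *operation*: a panic write must take the PIO path even when it happens to run in process context.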
+22 -23
drivers/mtd/nand/raw/Kconfig
···
 # SPDX-License-Identifier: GPL-2.0-only
-config MTD_NAND_ECC_SW_HAMMING
-	tristate
-
-config MTD_NAND_ECC_SW_HAMMING_SMC
-	bool "NAND ECC Smart Media byte order"
-	depends on MTD_NAND_ECC_SW_HAMMING
-	default n
-	help
-	  Software ECC according to the Smart Media Specification.
-	  The original Linux implementation had byte 0 and 1 swapped.
-
 menuconfig MTD_RAW_NAND
 	tristate "Raw/Parallel NAND Device Support"
 	select MTD_NAND_CORE
 	select MTD_NAND_ECC
-	select MTD_NAND_ECC_SW_HAMMING
 	help
 	  This enables support for accessing all type of raw/parallel
 	  NAND flash devices. For further information see
 	  <http://www.linux-mtd.infradead.org/doc/nand.html>.
 
 if MTD_RAW_NAND
-
-config MTD_NAND_ECC_SW_BCH
-	bool "Support software BCH ECC"
-	select BCH
-	default n
-	help
-	  This enables support for software BCH error correction. Binary BCH
-	  codes are more powerful and cpu intensive than traditional Hamming
-	  ECC codes. They are used with NAND devices requiring more than 1 bit
-	  of error correction.
 
 comment "Raw/parallel NAND flash controllers"
 
···
 config MTD_NAND_NDFC
 	tristate "IBM/MCC 4xx NAND controller"
 	depends on 4xx
+	select MTD_NAND_ECC_SW_HAMMING
 	select MTD_NAND_ECC_SW_HAMMING_SMC
 	help
 	  NDFC Nand Flash Controllers are integrated in IBM/AMCC's 4xx SoCs
···
 config MTD_NAND_MXC
 	tristate "Freescale MXC NAND controller"
 	depends on ARCH_MXC || COMPILE_TEST
-	depends on HAS_IOMEM
+	depends on HAS_IOMEM && OF
 	help
 	  This enables the driver for the NAND flash controller on the
 	  MXC processors.
···
 	help
 	  Enables the driver for the Arasan NAND flash controller on
 	  Zynq Ultrascale+ MPSoC.
+
+config MTD_NAND_INTEL_LGM
+	tristate "Support for NAND controller on Intel LGM SoC"
+	depends on OF || COMPILE_TEST
+	depends on HAS_IOMEM
+	help
+	  Enables support for NAND Flash chips on Intel's LGM SoC.
+	  NAND flash controller interfaced through the External Bus Unit.
+
+config MTD_NAND_ROCKCHIP
+	tristate "Rockchip NAND controller"
+	depends on ARCH_ROCKCHIP && HAS_IOMEM
+	help
+	  Enables support for NAND controller on Rockchip SoCs.
+	  There are four different versions of NAND FLASH Controllers,
+	  including:
+	  NFC v600: RK2928, RK3066, RK3188
+	  NFC v622: RK3036, RK3128
+	  NFC v800: RK3308, RV1108
+	  NFC v900: PX30, RK3326
 
 comment "Misc"
 
+2 -2
drivers/mtd/nand/raw/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 
 obj-$(CONFIG_MTD_RAW_NAND)	+= nand.o
-obj-$(CONFIG_MTD_NAND_ECC_SW_HAMMING) += nand_ecc.o
-nand-$(CONFIG_MTD_NAND_ECC_SW_BCH) += nand_bch.o
 obj-$(CONFIG_MTD_SM_COMMON)	+= sm_common.o
 
 obj-$(CONFIG_MTD_NAND_CAFE)	+= cafe_nand.o
···
 obj-$(CONFIG_MTD_NAND_MESON)	+= meson_nand.o
 obj-$(CONFIG_MTD_NAND_CADENCE)	+= cadence-nand-controller.o
 obj-$(CONFIG_MTD_NAND_ARASAN)	+= arasan-nand-controller.o
+obj-$(CONFIG_MTD_NAND_INTEL_LGM)	+= intel-nand-controller.o
+obj-$(CONFIG_MTD_NAND_ROCKCHIP)	+= rockchip-nand-controller.o
 
 nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o
 nand-objs += nand_onfi.o
+1
drivers/mtd/nand/raw/arasan-nand-controller.c
···
 * @rdy_timeout_ms: Timeout for waits on Ready/Busy pin
 * @len: Data transfer length
 * @read: Data transfer direction from the controller point of view
+ * @buf: Data buffer
 */
 struct anfc_op {
 	u32 pkt_reg;
+1
drivers/mtd/nand/raw/au1550nd.c
···
 * Copyright (C) 2004 Embedded Edge, LLC
 */
 
+#include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/interrupt.h>
+3 -3
drivers/mtd/nand/raw/brcmnand/brcmnand.c
···
 	}
 }
 
-/**
+/*
  * Kick EDU engine
  */
 static int brcmnand_edu_trans(struct brcmnand_host *host, u64 addr, u32 *buf,
···
 	return ret;
 }
 
-/**
+/*
  * Construct a FLASH_DMA descriptor as part of a linked list. You must know the
  * following ahead of time:
  * - Is this descriptor the beginning or end of a linked list?
···
 	return 0;
 }
 
-/**
+/*
  * Kick the FLASH_DMA engine, with a given DMA descriptor
  */
 static void brcmnand_dma_run(struct brcmnand_host *host, dma_addr_t desc)
+1 -1
drivers/mtd/nand/raw/cafe_nand.c
···
 	}
 }
 /**
  * cafe_nand_read_page_syndrome - [REPLACEABLE] hardware ecc syndrome based page read
- * @mtd: mtd info structure
  * @chip: nand chip info structure
  * @buf: buffer to store read data
  * @oob_required: caller expects OOB data read to chip->oob_poi
+ * @page: page number to read
  *
  * The hw generator calculates the error syndrome automatically. Therefore
  * we need a special oob layout and handling.
+1 -2
drivers/mtd/nand/raw/cs553x_nand.c
···
 #include <linux/delay.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/mtd/partitions.h>
 #include <linux/iopoll.h>
···
 	chip->ecc.bytes = 3;
 	chip->ecc.hwctl = cs_enable_hwecc;
 	chip->ecc.calculate = cs_calculate_ecc;
-	chip->ecc.correct = nand_correct_data;
+	chip->ecc.correct = rawnand_sw_hamming_correct;
 	chip->ecc.strength = 1;
 
 	return 0;
+19 -19
drivers/mtd/nand/raw/davinci_nand.c
···
 		return PTR_ERR(pdata);
 
 	/* Use board-specific ECC config */
-	info->chip.ecc.engine_type = pdata->engine_type;
-	info->chip.ecc.placement = pdata->ecc_placement;
+	chip->ecc.engine_type = pdata->engine_type;
+	chip->ecc.placement = pdata->ecc_placement;
 
-	switch (info->chip.ecc.engine_type) {
+	switch (chip->ecc.engine_type) {
 	case NAND_ECC_ENGINE_TYPE_NONE:
 		pdata->ecc_bits = 0;
 		break;
···
 		 * NAND_ECC_ALGO_HAMMING to avoid adding an extra ->ecc_algo
 		 * field to davinci_nand_pdata.
 		 */
-		info->chip.ecc.algo = NAND_ECC_ALGO_HAMMING;
+		chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
 		break;
 	case NAND_ECC_ENGINE_TYPE_ON_HOST:
 		if (pdata->ecc_bits == 4) {
···
 			if (ret == -EBUSY)
 				return ret;
 
-			info->chip.ecc.calculate = nand_davinci_calculate_4bit;
-			info->chip.ecc.correct = nand_davinci_correct_4bit;
-			info->chip.ecc.hwctl = nand_davinci_hwctl_4bit;
-			info->chip.ecc.bytes = 10;
-			info->chip.ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
-			info->chip.ecc.algo = NAND_ECC_ALGO_BCH;
+			chip->ecc.calculate = nand_davinci_calculate_4bit;
+			chip->ecc.correct = nand_davinci_correct_4bit;
+			chip->ecc.hwctl = nand_davinci_hwctl_4bit;
+			chip->ecc.bytes = 10;
+			chip->ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
+			chip->ecc.algo = NAND_ECC_ALGO_BCH;
 
 			/*
 			 * Update ECC layout if needed ... for 1-bit HW ECC, the
···
 			} else if (chunks == 4 || chunks == 8) {
 				mtd_set_ooblayout(mtd,
 						  nand_get_large_page_ooblayout());
-				info->chip.ecc.read_page = nand_davinci_read_page_hwecc_oob_first;
+				chip->ecc.read_page = nand_davinci_read_page_hwecc_oob_first;
 			} else {
 				return -EIO;
 			}
 		} else {
 			/* 1bit ecc hamming */
-			info->chip.ecc.calculate = nand_davinci_calculate_1bit;
-			info->chip.ecc.correct = nand_davinci_correct_1bit;
-			info->chip.ecc.hwctl = nand_davinci_hwctl_1bit;
-			info->chip.ecc.bytes = 3;
-			info->chip.ecc.algo = NAND_ECC_ALGO_HAMMING;
+			chip->ecc.calculate = nand_davinci_calculate_1bit;
+			chip->ecc.correct = nand_davinci_correct_1bit;
+			chip->ecc.hwctl = nand_davinci_hwctl_1bit;
+			chip->ecc.bytes = 3;
+			chip->ecc.algo = NAND_ECC_ALGO_HAMMING;
 		}
-		info->chip.ecc.size = 512;
-		info->chip.ecc.strength = pdata->ecc_bits;
+		chip->ecc.size = 512;
+		chip->ecc.strength = pdata->ecc_bits;
 		break;
 	default:
 		return -EINVAL;
···
 	int ret;
 
 	spin_lock_irq(&davinci_nand_lock);
-	if (info->chip.ecc.placement == NAND_ECC_PLACEMENT_INTERLEAVED)
+	if (chip->ecc.placement == NAND_ECC_PLACEMENT_INTERLEAVED)
 		ecc4_busy = false;
 	spin_unlock_irq(&davinci_nand_lock);
 
+2 -2
drivers/mtd/nand/raw/diskonchip.c
···
 static void DoC_Delay(struct doc_priv *doc, unsigned short cycles)
 {
-	volatile char dummy;
+	volatile char __always_unused dummy;
 	int i;

 	for (i = 0; i < cycles; i++) {
···
 	struct doc_priv *doc = nand_get_controller_data(this);
 	void __iomem *docptr = doc->virtadr;
 	int i;
-	int emptymatch = 1;
+	int __always_unused emptymatch = 1;

 	/* flush the pipeline */
 	if (DoC_is_2000(doc)) {
-1
drivers/mtd/nand/raw/fsl_elbc_nand.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/mtd/partitions.h>

 #include <asm/io.h>
-1
drivers/mtd/nand/raw/fsl_ifc_nand.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
 #include <linux/mtd/partitions.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/fsl_ifc.h>
 #include <linux/iopoll.h>
-1
drivers/mtd/nand/raw/fsl_upm.c
···
 #include <linux/module.h>
 #include <linux/delay.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/mtd/partitions.h>
 #include <linux/mtd/mtd.h>
 #include <linux/of_platform.h>
+2 -3
drivers/mtd/nand/raw/fsmc_nand.c
···
 #include <linux/types.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/platform_device.h>
 #include <linux/of.h>
 #include <linux/mtd/partitions.h>
···
 	case NAND_ECC_ENGINE_TYPE_ON_HOST:
 		dev_info(host->dev, "Using 1-bit HW ECC scheme\n");
 		nand->ecc.calculate = fsmc_read_hwecc_ecc1;
-		nand->ecc.correct = nand_correct_data;
+		nand->ecc.correct = rawnand_sw_hamming_correct;
 		nand->ecc.hwctl = fsmc_enable_hwecc;
 		nand->ecc.bytes = 3;
 		nand->ecc.strength = 1;
···
 	/*
 	 * Don't set layout for BCH4 SW ECC. This will be
-	 * generated later in nand_bch_init() later.
+	 * generated later during BCH initialization.
 	 */
 	if (nand->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) {
 		switch (mtd->oobsize) {
+1 -2
drivers/mtd/nand/raw/gpmi-nand/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_MTD_NAND_GPMI_NAND)	+= gpmi_nand.o
-gpmi_nand-objs += gpmi-nand.o
+obj-$(CONFIG_MTD_NAND_GPMI_NAND)	+= gpmi-nand.o
+38 -38
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
···
 	int ret;

 	ret = pm_runtime_get_sync(this->dev);
-	if (ret < 0)
+	if (ret < 0) {
+		pm_runtime_put_noidle(this->dev);
 		return ret;
+	}

 	ret = gpmi_reset_block(r->gpmi_regs, false);
 	if (ret)
···
 	/*
 	 * Decouple the chip select from dma channel. We use dma0 for all
-	 * the chips.
+	 * the chips, force all NAND RDY_BUSY inputs to be sourced from
+	 * RDY_BUSY0.
 	 */
-	writel(BM_GPMI_CTRL1_DECOUPLE_CS, r->gpmi_regs + HW_GPMI_CTRL1_SET);
+	writel(BM_GPMI_CTRL1_DECOUPLE_CS | BM_GPMI_CTRL1_GANGED_RDYBUSY,
+	       r->gpmi_regs + HW_GPMI_CTRL1_SET);

 err_out:
 	pm_runtime_mark_last_busy(this->dev);
···
 	void *buf_read = NULL;
 	const void *buf_write = NULL;
 	bool direct = false;
-	struct completion *completion;
+	struct completion *dma_completion, *bch_completion;
 	unsigned long to;

 	if (check_only)
···
 		this->transfers[i].direction = DMA_NONE;

 	ret = pm_runtime_get_sync(this->dev);
-	if (ret < 0)
+	if (ret < 0) {
+		pm_runtime_put_noidle(this->dev);
 		return ret;
+	}

 	/*
 	 * This driver currently supports only one NAND chip. Plus, dies share
···
 		       this->resources.bch_regs + HW_BCH_FLASH0LAYOUT1);
 	}

+	desc->callback = dma_irq_callback;
+	desc->callback_param = this;
+	dma_completion = &this->dma_done;
+	bch_completion = NULL;
+
+	init_completion(dma_completion);
+
 	if (this->bch && buf_read) {
 		writel(BM_BCH_CTRL_COMPLETE_IRQ_EN,
 		       this->resources.bch_regs + HW_BCH_CTRL_SET);
-		completion = &this->bch_done;
-	} else {
-		desc->callback = dma_irq_callback;
-		desc->callback_param = this;
-		completion = &this->dma_done;
+		bch_completion = &this->bch_done;
+		init_completion(bch_completion);
 	}
-
-	init_completion(completion);

 	dmaengine_submit(desc);
 	dma_async_issue_pending(get_dma_chan(this));

-	to = wait_for_completion_timeout(completion, msecs_to_jiffies(1000));
+	to = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
 	if (!to) {
 		dev_err(this->dev, "DMA timeout, last DMA\n");
 		gpmi_dump_info(this);
 		ret = -ETIMEDOUT;
 		goto unmap;
+	}
+
+	if (this->bch && buf_read) {
+		to = wait_for_completion_timeout(bch_completion, msecs_to_jiffies(1000));
+		if (!to) {
+			dev_err(this->dev, "BCH timeout, last DMA\n");
+			gpmi_dump_info(this);
+			ret = -ETIMEDOUT;
+			goto unmap;
+		}
 	}

 	writel(BM_BCH_CTRL_COMPLETE_IRQ_EN,
···
 }

 static const struct of_device_id gpmi_nand_id_table[] = {
-	{
-		.compatible = "fsl,imx23-gpmi-nand",
-		.data = &gpmi_devdata_imx23,
-	}, {
-		.compatible = "fsl,imx28-gpmi-nand",
-		.data = &gpmi_devdata_imx28,
-	}, {
-		.compatible = "fsl,imx6q-gpmi-nand",
-		.data = &gpmi_devdata_imx6q,
-	}, {
-		.compatible = "fsl,imx6sx-gpmi-nand",
-		.data = &gpmi_devdata_imx6sx,
-	}, {
-		.compatible = "fsl,imx7d-gpmi-nand",
-		.data = &gpmi_devdata_imx7d,
-	}, {}
+	{ .compatible = "fsl,imx23-gpmi-nand", .data = &gpmi_devdata_imx23, },
+	{ .compatible = "fsl,imx28-gpmi-nand", .data = &gpmi_devdata_imx28, },
+	{ .compatible = "fsl,imx6q-gpmi-nand", .data = &gpmi_devdata_imx6q, },
+	{ .compatible = "fsl,imx6sx-gpmi-nand", .data = &gpmi_devdata_imx6sx, },
+	{ .compatible = "fsl,imx7d-gpmi-nand", .data = &gpmi_devdata_imx7d, },
+	{}
 };
 MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);

 static int gpmi_nand_probe(struct platform_device *pdev)
 {
 	struct gpmi_nand_data *this;
-	const struct of_device_id *of_id;
 	int ret;

 	this = devm_kzalloc(&pdev->dev, sizeof(*this), GFP_KERNEL);
 	if (!this)
 		return -ENOMEM;

-	of_id = of_match_device(gpmi_nand_id_table, &pdev->dev);
-	if (of_id) {
-		this->devdata = of_id->data;
-	} else {
-		dev_err(&pdev->dev, "Failed to find the right device id.\n");
-		return -ENODEV;
-	}
-
+	this->devdata = of_device_get_match_data(&pdev->dev);
 	platform_set_drvdata(pdev, this);
 	this->pdev = pdev;
 	this->dev = &pdev->dev;
+1
drivers/mtd/nand/raw/gpmi-nand/gpmi-regs.h
···
 #define BV_GPMI_CTRL1_WRN_DLY_SEL_7_TO_12NS	0x2
 #define BV_GPMI_CTRL1_WRN_DLY_SEL_NO_DELAY	0x3

+#define BM_GPMI_CTRL1_GANGED_RDYBUSY		(1 << 19)
 #define BM_GPMI_CTRL1_BCH_MODE			(1 << 18)

 #define BP_GPMI_CTRL1_DLL_ENABLE		17
-2
drivers/mtd/nand/raw/ingenic/ingenic_ecc.c
···
 	if (!pdev || !platform_get_drvdata(pdev))
 		return ERR_PTR(-EPROBE_DEFER);

-	get_device(&pdev->dev);
-
 	ecc = platform_get_drvdata(pdev);
 	clk_prepare_enable(ecc->clk);
+721
drivers/mtd/nand/raw/intel-nand-controller.c
···
+// SPDX-License-Identifier: GPL-2.0+
+/* Copyright (c) 2020 Intel Corporation. */
+
+#include <linux/clk.h>
+#include <linux/completion.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/rawnand.h>
+#include <linux/mtd/nand.h>
+
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <asm/unaligned.h>
+
+#define EBU_CLC			0x000
+#define EBU_CLC_RST		0x00000000u
+
+#define EBU_ADDR_SEL(n)		(0x020 + (n) * 4)
+/* 5 bits 26:22 included for comparison in the ADDR_SELx */
+#define EBU_ADDR_MASK(x)	((x) << 4)
+#define EBU_ADDR_SEL_REGEN	0x1
+
+#define EBU_BUSCON(n)		(0x060 + (n) * 4)
+#define EBU_BUSCON_CMULT_V4	0x1
+#define EBU_BUSCON_RECOVC(n)	((n) << 2)
+#define EBU_BUSCON_HOLDC(n)	((n) << 4)
+#define EBU_BUSCON_WAITRDC(n)	((n) << 6)
+#define EBU_BUSCON_WAITWRC(n)	((n) << 8)
+#define EBU_BUSCON_BCGEN_CS	0x0
+#define EBU_BUSCON_SETUP_EN	BIT(22)
+#define EBU_BUSCON_ALEC		0xC000
+
+#define EBU_CON			0x0B0
+#define EBU_CON_NANDM_EN	BIT(0)
+#define EBU_CON_NANDM_DIS	0x0
+#define EBU_CON_CSMUX_E_EN	BIT(1)
+#define EBU_CON_ALE_P_LOW	BIT(2)
+#define EBU_CON_CLE_P_LOW	BIT(3)
+#define EBU_CON_CS_P_LOW	BIT(4)
+#define EBU_CON_SE_P_LOW	BIT(5)
+#define EBU_CON_WP_P_LOW	BIT(6)
+#define EBU_CON_PRE_P_LOW	BIT(7)
+#define EBU_CON_IN_CS_S(n)	((n) << 8)
+#define EBU_CON_OUT_CS_S(n)	((n) << 10)
+#define EBU_CON_LAT_EN_CS_P	((0x3D) << 18)
+
+#define EBU_WAIT		0x0B4
+#define EBU_WAIT_RDBY		BIT(0)
+#define EBU_WAIT_WR_C		BIT(3)
+
+#define HSNAND_CTL1		0x110
+#define HSNAND_CTL1_ADDR_SHIFT	24
+
+#define HSNAND_CTL2		0x114
+#define HSNAND_CTL2_ADDR_SHIFT	8
+#define HSNAND_CTL2_CYC_N_V5	(0x2 << 16)
+
+#define HSNAND_INT_MSK_CTL	0x124
+#define HSNAND_INT_MSK_CTL_WR_C	BIT(4)
+
+#define HSNAND_INT_STA		0x128
+#define HSNAND_INT_STA_WR_C	BIT(4)
+
+#define HSNAND_CTL		0x130
+#define HSNAND_CTL_ENABLE_ECC	BIT(0)
+#define HSNAND_CTL_GO		BIT(2)
+#define HSNAND_CTL_CE_SEL_CS(n)	BIT(3 + (n))
+#define HSNAND_CTL_RW_READ	0x0
+#define HSNAND_CTL_RW_WRITE	BIT(10)
+#define HSNAND_CTL_ECC_OFF_V8TH	BIT(11)
+#define HSNAND_CTL_CKFF_EN	0x0
+#define HSNAND_CTL_MSG_EN	BIT(17)
+
+#define HSNAND_PARA0		0x13c
+#define HSNAND_PARA0_PAGE_V8192	0x3
+#define HSNAND_PARA0_PIB_V256	(0x3 << 4)
+#define HSNAND_PARA0_BYP_EN_NP	0x0
+#define HSNAND_PARA0_BYP_DEC_NP	0x0
+#define HSNAND_PARA0_TYPE_ONFI	BIT(18)
+#define HSNAND_PARA0_ADEP_EN	BIT(21)
+
+#define HSNAND_CMSG_0		0x150
+#define HSNAND_CMSG_1		0x154
+
+#define HSNAND_ALE_OFFS		BIT(2)
+#define HSNAND_CLE_OFFS		BIT(3)
+#define HSNAND_CS_OFFS		BIT(4)
+
+#define HSNAND_ECC_OFFSET	0x008
+
+#define NAND_DATA_IFACE_CHECK_ONLY	-1
+
+#define MAX_CS	2
+
+#define HZ_PER_MHZ	1000000L
+#define USEC_PER_SEC	1000000L
+
+struct ebu_nand_cs {
+	void __iomem *chipaddr;
+	dma_addr_t nand_pa;
+	u32 addr_sel;
+};
+
+struct ebu_nand_controller {
+	struct nand_controller controller;
+	struct nand_chip chip;
+	struct device *dev;
+	void __iomem *ebu;
+	void __iomem *hsnand;
+	struct dma_chan *dma_tx;
+	struct dma_chan *dma_rx;
+	struct completion dma_access_complete;
+	unsigned long clk_rate;
+	struct clk *clk;
+	u32 nd_para0;
+	u8 cs_num;
+	struct ebu_nand_cs cs[MAX_CS];
+};
+
+static inline struct ebu_nand_controller *nand_to_ebu(struct nand_chip *chip)
+{
+	return container_of(chip, struct ebu_nand_controller, chip);
+}
+
+static int ebu_nand_waitrdy(struct nand_chip *chip, int timeout_ms)
+{
+	struct ebu_nand_controller *ctrl = nand_to_ebu(chip);
+	u32 status;
+
+	return readl_poll_timeout(ctrl->ebu + EBU_WAIT, status,
+				  (status & EBU_WAIT_RDBY) ||
+				  (status & EBU_WAIT_WR_C), 20, timeout_ms);
+}
+
+static u8 ebu_nand_readb(struct nand_chip *chip)
+{
+	struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
+	u8 cs_num = ebu_host->cs_num;
+	u8 val;
+
+	val = readb(ebu_host->cs[cs_num].chipaddr + HSNAND_CS_OFFS);
+	ebu_nand_waitrdy(chip, 1000);
+	return val;
+}
+
+static void ebu_nand_writeb(struct nand_chip *chip, u32 offset, u8 value)
+{
+	struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
+	u8 cs_num = ebu_host->cs_num;
+
+	writeb(value, ebu_host->cs[cs_num].chipaddr + offset);
+	ebu_nand_waitrdy(chip, 1000);
+}
+
+static void ebu_read_buf(struct nand_chip *chip, u_char *buf, unsigned int len)
+{
+	int i;
+
+	for (i = 0; i < len; i++)
+		buf[i] = ebu_nand_readb(chip);
+}
+
+static void ebu_write_buf(struct nand_chip *chip, const u_char *buf, int len)
+{
+	int i;
+
+	for (i = 0; i < len; i++)
+		ebu_nand_writeb(chip, HSNAND_CS_OFFS, buf[i]);
+}
+
+static void ebu_nand_disable(struct nand_chip *chip)
+{
+	struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
+
+	writel(0, ebu_host->ebu + EBU_CON);
+}
+
+static void ebu_select_chip(struct nand_chip *chip)
+{
+	struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
+	void __iomem *nand_con = ebu_host->ebu + EBU_CON;
+	u32 cs = ebu_host->cs_num;
+
+	writel(EBU_CON_NANDM_EN | EBU_CON_CSMUX_E_EN | EBU_CON_CS_P_LOW |
+	       EBU_CON_SE_P_LOW | EBU_CON_WP_P_LOW | EBU_CON_PRE_P_LOW |
+	       EBU_CON_IN_CS_S(cs) | EBU_CON_OUT_CS_S(cs) |
+	       EBU_CON_LAT_EN_CS_P, nand_con);
+}
+
+static int ebu_nand_set_timings(struct nand_chip *chip, int csline,
+				const struct nand_interface_config *conf)
+{
+	struct ebu_nand_controller *ctrl = nand_to_ebu(chip);
+	unsigned int rate = clk_get_rate(ctrl->clk) / HZ_PER_MHZ;
+	unsigned int period = DIV_ROUND_UP(USEC_PER_SEC, rate);
+	const struct nand_sdr_timings *timings;
+	u32 trecov, thold, twrwait, trdwait;
+	u32 reg = 0;
+
+	timings = nand_get_sdr_timings(conf);
+	if (IS_ERR(timings))
+		return PTR_ERR(timings);
+
+	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+		return 0;
+
+	trecov = DIV_ROUND_UP(max(timings->tREA_max, timings->tREH_min),
+			      period);
+	reg |= EBU_BUSCON_RECOVC(trecov);
+
+	thold = DIV_ROUND_UP(max(timings->tDH_min, timings->tDS_min), period);
+	reg |= EBU_BUSCON_HOLDC(thold);
+
+	trdwait = DIV_ROUND_UP(max(timings->tRC_min, timings->tREH_min),
+			       period);
+	reg |= EBU_BUSCON_WAITRDC(trdwait);
+
+	twrwait = DIV_ROUND_UP(max(timings->tWC_min, timings->tWH_min), period);
+	reg |= EBU_BUSCON_WAITWRC(twrwait);
+
+	reg |= EBU_BUSCON_CMULT_V4 | EBU_BUSCON_BCGEN_CS | EBU_BUSCON_ALEC |
+	       EBU_BUSCON_SETUP_EN;
+
+	writel(reg, ctrl->ebu + EBU_BUSCON(ctrl->cs_num));
+
+	return 0;
+}
+
+static int ebu_nand_ooblayout_ecc(struct mtd_info *mtd, int section,
+				  struct mtd_oob_region *oobregion)
+{
+	struct nand_chip *chip = mtd_to_nand(mtd);
+
+	if (section)
+		return -ERANGE;
+
+	oobregion->offset = HSNAND_ECC_OFFSET;
+	oobregion->length = chip->ecc.total;
+
+	return 0;
+}
+
+static int ebu_nand_ooblayout_free(struct mtd_info *mtd, int section,
+				   struct mtd_oob_region *oobregion)
+{
+	struct nand_chip *chip = mtd_to_nand(mtd);
+
+	if (section)
+		return -ERANGE;
+
+	oobregion->offset = chip->ecc.total + HSNAND_ECC_OFFSET;
+	oobregion->length = mtd->oobsize - oobregion->offset;
+
+	return 0;
+}
+
+static const struct mtd_ooblayout_ops ebu_nand_ooblayout_ops = {
+	.ecc = ebu_nand_ooblayout_ecc,
+	.free = ebu_nand_ooblayout_free,
+};
+
+static void ebu_dma_rx_callback(void *cookie)
+{
+	struct ebu_nand_controller *ebu_host = cookie;
+
+	dmaengine_terminate_async(ebu_host->dma_rx);
+
+	complete(&ebu_host->dma_access_complete);
+}
+
+static void ebu_dma_tx_callback(void *cookie)
+{
+	struct ebu_nand_controller *ebu_host = cookie;
+
+	dmaengine_terminate_async(ebu_host->dma_tx);
+
+	complete(&ebu_host->dma_access_complete);
+}
+
+static int ebu_dma_start(struct ebu_nand_controller *ebu_host, u32 dir,
+			 const u8 *buf, u32 len)
+{
+	struct dma_async_tx_descriptor *tx;
+	struct completion *dma_completion;
+	dma_async_tx_callback callback;
+	struct dma_chan *chan;
+	dma_cookie_t cookie;
+	unsigned long flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+	dma_addr_t buf_dma;
+	int ret;
+	u32 timeout;
+
+	if (dir == DMA_DEV_TO_MEM) {
+		chan = ebu_host->dma_rx;
+		dma_completion = &ebu_host->dma_access_complete;
+		callback = ebu_dma_rx_callback;
+	} else {
+		chan = ebu_host->dma_tx;
+		dma_completion = &ebu_host->dma_access_complete;
+		callback = ebu_dma_tx_callback;
+	}
+
+	buf_dma = dma_map_single(chan->device->dev, (void *)buf, len, dir);
+	if (dma_mapping_error(chan->device->dev, buf_dma)) {
+		dev_err(ebu_host->dev, "Failed to map DMA buffer\n");
+		return -EIO;
+	}
+
+	tx = dmaengine_prep_slave_single(chan, buf_dma, len, dir, flags);
+	if (!tx) {
+		ret = -ENXIO;
+		goto err_unmap;
+	}
+
+	tx->callback = callback;
+	tx->callback_param = ebu_host;
+	cookie = tx->tx_submit(tx);
+
+	ret = dma_submit_error(cookie);
+	if (ret) {
+		dev_err(ebu_host->dev, "dma_submit_error %d\n", cookie);
+		ret = -EIO;
+		goto err_unmap;
+	}
+
+	init_completion(dma_completion);
+	dma_async_issue_pending(chan);
+
+	/* Wait DMA to finish the data transfer.*/
+	timeout = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
+	if (!timeout) {
+		dev_err(ebu_host->dev, "I/O Error in DMA RX (status %d)\n",
+			dmaengine_tx_status(chan, cookie, NULL));
+		dmaengine_terminate_sync(chan);
+		ret = -ETIMEDOUT;
+		goto err_unmap;
+	}
+
+	return 0;
+
+err_unmap:
+	dma_unmap_single(chan->device->dev, buf_dma, len, dir);
+
+	return ret;
+}
+
+static void ebu_nand_trigger(struct ebu_nand_controller *ebu_host,
+			     int page, u32 cmd)
+{
+	unsigned int val;
+
+	val = cmd | (page & 0xFF) << HSNAND_CTL1_ADDR_SHIFT;
+	writel(val, ebu_host->hsnand + HSNAND_CTL1);
+	val = (page & 0xFFFF00) >> 8 | HSNAND_CTL2_CYC_N_V5;
+	writel(val, ebu_host->hsnand + HSNAND_CTL2);
+
+	writel(ebu_host->nd_para0, ebu_host->hsnand + HSNAND_PARA0);
+
+	/* clear first, will update later */
+	writel(0xFFFFFFFF, ebu_host->hsnand + HSNAND_CMSG_0);
+	writel(0xFFFFFFFF, ebu_host->hsnand + HSNAND_CMSG_1);
+
+	writel(HSNAND_INT_MSK_CTL_WR_C,
+	       ebu_host->hsnand + HSNAND_INT_MSK_CTL);
+
+	if (!cmd)
+		val = HSNAND_CTL_RW_READ;
+	else
+		val = HSNAND_CTL_RW_WRITE;
+
+	writel(HSNAND_CTL_MSG_EN | HSNAND_CTL_CKFF_EN |
+	       HSNAND_CTL_ECC_OFF_V8TH | HSNAND_CTL_CE_SEL_CS(ebu_host->cs_num) |
+	       HSNAND_CTL_ENABLE_ECC | HSNAND_CTL_GO | val,
+	       ebu_host->hsnand + HSNAND_CTL);
+}
+
+static int ebu_nand_read_page_hwecc(struct nand_chip *chip, u8 *buf,
+				    int oob_required, int page)
+{
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
+	int ret, reg_data;
+
+	ebu_nand_trigger(ebu_host, page, NAND_CMD_READ0);
+
+	ret = ebu_dma_start(ebu_host, DMA_DEV_TO_MEM, buf, mtd->writesize);
+	if (ret)
+		return ret;
+
+	if (oob_required)
+		chip->ecc.read_oob(chip, page);
+
+	reg_data = readl(ebu_host->hsnand + HSNAND_CTL);
+	reg_data &= ~HSNAND_CTL_GO;
+	writel(reg_data, ebu_host->hsnand + HSNAND_CTL);
+
+	return 0;
+}
+
+static int ebu_nand_write_page_hwecc(struct nand_chip *chip, const u8 *buf,
+				     int oob_required, int page)
+{
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
+	void __iomem *int_sta = ebu_host->hsnand + HSNAND_INT_STA;
+	int reg_data, ret, val;
+	u32 reg;
+
+	ebu_nand_trigger(ebu_host, page, NAND_CMD_SEQIN);
+
+	ret = ebu_dma_start(ebu_host, DMA_MEM_TO_DEV, buf, mtd->writesize);
+	if (ret)
+		return ret;
+
+	if (oob_required) {
+		reg = get_unaligned_le32(chip->oob_poi);
+		writel(reg, ebu_host->hsnand + HSNAND_CMSG_0);
+
+		reg = get_unaligned_le32(chip->oob_poi + 4);
+		writel(reg, ebu_host->hsnand + HSNAND_CMSG_1);
+	}
+
+	ret = readl_poll_timeout_atomic(int_sta, val, !(val & HSNAND_INT_STA_WR_C),
+					10, 1000);
+	if (ret)
+		return ret;
+
+	reg_data = readl(ebu_host->hsnand + HSNAND_CTL);
+	reg_data &= ~HSNAND_CTL_GO;
+	writel(reg_data, ebu_host->hsnand + HSNAND_CTL);
+
+	return 0;
+}
+
+static const u8 ecc_strength[] = { 1, 1, 4, 8, 24, 32, 40, 60, };
+
+static int ebu_nand_attach_chip(struct nand_chip *chip)
+{
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	struct ebu_nand_controller *ebu_host = nand_get_controller_data(chip);
+	u32 ecc_steps, ecc_bytes, ecc_total, pagesize, pg_per_blk;
+	u32 ecc_strength_ds = chip->ecc.strength;
+	u32 ecc_size = chip->ecc.size;
+	u32 writesize = mtd->writesize;
+	u32 blocksize = mtd->erasesize;
+	int bch_algo, start, val;
+
+	/* Default to an ECC size of 512 */
+	if (!chip->ecc.size)
+		chip->ecc.size = 512;
+
+	switch (ecc_size) {
+	case 512:
+		start = 1;
+		if (!ecc_strength_ds)
+			ecc_strength_ds = 4;
+		break;
+	case 1024:
+		start = 4;
+		if (!ecc_strength_ds)
+			ecc_strength_ds = 32;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* BCH ECC algorithm Settings for number of bits per 512B/1024B */
+	bch_algo = round_up(start + 1, 4);
+	for (val = start; val < bch_algo; val++) {
+		if (ecc_strength_ds == ecc_strength[val])
+			break;
+	}
+	if (val == bch_algo)
+		return -EINVAL;
+
+	if (ecc_strength_ds == 8)
+		ecc_bytes = 14;
+	else
+		ecc_bytes = DIV_ROUND_UP(ecc_strength_ds * fls(8 * ecc_size), 8);
+
+	ecc_steps = writesize / ecc_size;
+	ecc_total = ecc_steps * ecc_bytes;
+	if ((ecc_total + 8) > mtd->oobsize)
+		return -ERANGE;
+
+	chip->ecc.total = ecc_total;
+	pagesize = fls(writesize >> 11);
+	if (pagesize > HSNAND_PARA0_PAGE_V8192)
+		return -ERANGE;
+
+	pg_per_blk = fls((blocksize / writesize) >> 6) / 8;
+	if (pg_per_blk > HSNAND_PARA0_PIB_V256)
+		return -ERANGE;
+
+	ebu_host->nd_para0 = pagesize | pg_per_blk | HSNAND_PARA0_BYP_EN_NP |
+			     HSNAND_PARA0_BYP_DEC_NP | HSNAND_PARA0_ADEP_EN |
+			     HSNAND_PARA0_TYPE_ONFI | (val << 29);
+
+	mtd_set_ooblayout(mtd, &ebu_nand_ooblayout_ops);
+	chip->ecc.read_page = ebu_nand_read_page_hwecc;
+	chip->ecc.write_page = ebu_nand_write_page_hwecc;
+
+	return 0;
+}
+
+static int ebu_nand_exec_op(struct nand_chip *chip,
+			    const struct nand_operation *op, bool check_only)
+{
+	const struct nand_op_instr *instr = NULL;
+	unsigned int op_id;
+	int i, timeout_ms, ret = 0;
+
+	if (check_only)
+		return 0;
+
+	ebu_select_chip(chip);
+	for (op_id = 0; op_id < op->ninstrs; op_id++) {
+		instr = &op->instrs[op_id];
+
+		switch (instr->type) {
+		case NAND_OP_CMD_INSTR:
+			ebu_nand_writeb(chip, HSNAND_CLE_OFFS | HSNAND_CS_OFFS,
+					instr->ctx.cmd.opcode);
+			break;
+
+		case NAND_OP_ADDR_INSTR:
+			for (i = 0; i < instr->ctx.addr.naddrs; i++)
+				ebu_nand_writeb(chip,
+						HSNAND_ALE_OFFS | HSNAND_CS_OFFS,
+						instr->ctx.addr.addrs[i]);
+			break;
+
+		case NAND_OP_DATA_IN_INSTR:
+			ebu_read_buf(chip, instr->ctx.data.buf.in,
+				     instr->ctx.data.len);
+			break;
+
+		case NAND_OP_DATA_OUT_INSTR:
+			ebu_write_buf(chip, instr->ctx.data.buf.out,
+				      instr->ctx.data.len);
+			break;
+
+		case NAND_OP_WAITRDY_INSTR:
+			timeout_ms = instr->ctx.waitrdy.timeout_ms * 1000;
+			ret = ebu_nand_waitrdy(chip, timeout_ms);
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static const struct nand_controller_ops ebu_nand_controller_ops = {
+	.attach_chip = ebu_nand_attach_chip,
+	.setup_interface = ebu_nand_set_timings,
+	.exec_op = ebu_nand_exec_op,
+};
+
+static void ebu_dma_cleanup(struct ebu_nand_controller *ebu_host)
+{
+	if (ebu_host->dma_rx)
+		dma_release_channel(ebu_host->dma_rx);
+
+	if (ebu_host->dma_tx)
+		dma_release_channel(ebu_host->dma_tx);
+}
+
+static int ebu_nand_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct ebu_nand_controller *ebu_host;
+	struct nand_chip *nand;
+	struct mtd_info *mtd;
+	struct resource *res;
+	char *resname;
+	int ret;
+	u32 cs;
+
+	ebu_host = devm_kzalloc(dev, sizeof(*ebu_host), GFP_KERNEL);
+	if (!ebu_host)
+		return -ENOMEM;
+
+	ebu_host->dev = dev;
+	nand_controller_init(&ebu_host->controller);
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ebunand");
+	ebu_host->ebu = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(ebu_host->ebu))
+		return PTR_ERR(ebu_host->ebu);
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "hsnand");
+	ebu_host->hsnand = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(ebu_host->hsnand))
+		return PTR_ERR(ebu_host->hsnand);
+
+	ret = device_property_read_u32(dev, "reg", &cs);
+	if (ret) {
+		dev_err(dev, "failed to get chip select: %d\n", ret);
+		return ret;
+	}
+	ebu_host->cs_num = cs;
+
+	resname = devm_kasprintf(dev, GFP_KERNEL, "nand_cs%d", cs);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
+	ebu_host->cs[cs].chipaddr = devm_ioremap_resource(dev, res);
+	if (IS_ERR(ebu_host->cs[cs].chipaddr))
+		return PTR_ERR(ebu_host->cs[cs].chipaddr);
+	ebu_host->cs[cs].nand_pa = res->start;
+
+	ebu_host->clk = devm_clk_get(dev, NULL);
+	if (IS_ERR(ebu_host->clk))
+		return dev_err_probe(dev, PTR_ERR(ebu_host->clk),
+				     "failed to get clock\n");
+
+	ret = clk_prepare_enable(ebu_host->clk);
+	if (ret) {
+		dev_err(dev, "failed to enable clock: %d\n", ret);
+		return ret;
+	}
+	ebu_host->clk_rate = clk_get_rate(ebu_host->clk);
+
+	ebu_host->dma_tx = dma_request_chan(dev, "tx");
+	if (IS_ERR(ebu_host->dma_tx))
+		return dev_err_probe(dev, PTR_ERR(ebu_host->dma_tx),
+				     "failed to request DMA tx chan!.\n");
+
+	ebu_host->dma_rx = dma_request_chan(dev, "rx");
+	if (IS_ERR(ebu_host->dma_rx))
+		return dev_err_probe(dev, PTR_ERR(ebu_host->dma_rx),
+				     "failed to request DMA rx chan!.\n");
+
+	resname = devm_kasprintf(dev, GFP_KERNEL, "addr_sel%d", cs);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
+	if (!res)
+		return -EINVAL;
+	ebu_host->cs[cs].addr_sel = res->start;
+	writel(ebu_host->cs[cs].addr_sel | EBU_ADDR_MASK(5) | EBU_ADDR_SEL_REGEN,
+	       ebu_host->ebu + EBU_ADDR_SEL(cs));
+
+	nand_set_flash_node(&ebu_host->chip, dev->of_node);
+	mtd = nand_to_mtd(&ebu_host->chip);
+	if (!mtd->name) {
+		dev_err(ebu_host->dev, "NAND label property is mandatory\n");
+		return -EINVAL;
+	}
+
+	mtd->dev.parent = dev;
+	ebu_host->dev = dev;
+
+	platform_set_drvdata(pdev, ebu_host);
+	nand_set_controller_data(&ebu_host->chip, ebu_host);
+
+	nand = &ebu_host->chip;
+	nand->controller = &ebu_host->controller;
+	nand->controller->ops = &ebu_nand_controller_ops;
+
+	/* Scan to find existence of the device */
+	ret = nand_scan(&ebu_host->chip, 1);
+	if (ret)
+		goto err_cleanup_dma;
+
+	ret = mtd_device_register(mtd, NULL, 0);
+	if (ret)
+		goto err_clean_nand;
+
+	return 0;
+
+err_clean_nand:
+	nand_cleanup(&ebu_host->chip);
+err_cleanup_dma:
+	ebu_dma_cleanup(ebu_host);
+	clk_disable_unprepare(ebu_host->clk);
+
+	return ret;
+}
+
+static int ebu_nand_remove(struct platform_device *pdev)
+{
+	struct ebu_nand_controller *ebu_host = platform_get_drvdata(pdev);
+	int ret;
+
+	ret = mtd_device_unregister(nand_to_mtd(&ebu_host->chip));
+	WARN_ON(ret);
+	nand_cleanup(&ebu_host->chip);
+	ebu_nand_disable(&ebu_host->chip);
+	ebu_dma_cleanup(ebu_host);
+	clk_disable_unprepare(ebu_host->clk);
+
+	return 0;
+}
+
+static const struct of_device_id ebu_nand_match[] = {
+	{ .compatible = "intel,nand-controller" },
+	{ .compatible = "intel,lgm-ebunand" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, ebu_nand_match);
+
+static struct platform_driver ebu_nand_driver = {
+	.probe = ebu_nand_probe,
+	.remove = ebu_nand_remove,
+	.driver = {
+		.name = "intel-nand-controller",
+		.of_match_table = ebu_nand_match,
+	},
+};
+module_platform_driver(ebu_nand_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Vadivel Murugan R <vadivel.muruganx.ramuthevar@intel.com>");
+MODULE_DESCRIPTION("Intel's LGM External Bus NAND Controller driver");
-1
drivers/mtd/nand/raw/lpc32xx_mlc.c
···
 #include <linux/mm.h>
 #include <linux/dma-mapping.h>
 #include <linux/dmaengine.h>
-#include <linux/mtd/nand_ecc.h>

 #define DRV_NAME "lpc32xx_mlc"
+1 -2
drivers/mtd/nand/raw/lpc32xx_slc.c
···
 #include <linux/mm.h>
 #include <linux/dma-mapping.h>
 #include <linux/dmaengine.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/gpio.h>
 #include <linux/of.h>
 #include <linux/of_gpio.h>
···
 	chip->ecc.write_oob = lpc32xx_nand_write_oob_syndrome;
 	chip->ecc.read_oob = lpc32xx_nand_read_oob_syndrome;
 	chip->ecc.calculate = lpc32xx_nand_ecc_calculate;
-	chip->ecc.correct = nand_correct_data;
+	chip->ecc.correct = rawnand_sw_hamming_correct;
 	chip->ecc.hwctl = lpc32xx_nand_ecc_enable;

 	/*
drivers/mtd/nand/raw/marvell_nand.c
···
 	mtd->dev.parent = dev;

-	/*
-	 * Default to HW ECC engine mode. If the nand-ecc-mode property is given
-	 * in the DT node, this entry will be overwritten in nand_scan_ident().
-	 */
-	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
-
 	/*
 	 * Save a reference value for timing registers before
 	 * ->setup_interface() is called.
 	 */
drivers/mtd/nand/raw/meson_nand.c
···
 }

 static void meson_nfc_dma_buffer_release(struct nand_chip *nand,
-					 int infolen, int datalen,
+					 int datalen, int infolen,
 					 enum dma_data_direction dir)
 {
 	struct meson_nfc *nfc = nand_get_controller_data(nand);
···
 	ret = clk_set_rate(nfc->device_clk, 24000000);
 	if (ret)
-		goto err_phase_rx;
+		goto err_disable_rx;

 	return 0;
+
+err_disable_rx:
+	clk_disable_unprepare(nfc->phase_rx);
 err_phase_rx:
 	clk_disable_unprepare(nfc->phase_tx);
 err_phase_tx:
drivers/mtd/nand/raw/mxc_nand.c
···
 #include <linux/completion.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
-#include <linux/platform_data/mtd-mxc_nand.h>

 #define DRIVER_NAME "mxc_nand"
···
 	unsigned int buf_start;

 	const struct mxc_nand_devtype_data *devtype_data;
-	struct mxc_nand_platform_data pdata;
 };

 static const char * const part_probes[] = {
···
 	return host->devtype_data == &imx53_nand_devtype_data;
 }

-static const struct platform_device_id mxcnd_devtype[] = {
-	{
-		.name = "imx21-nand",
-		.driver_data = (kernel_ulong_t) &imx21_nand_devtype_data,
-	}, {
-		.name = "imx27-nand",
-		.driver_data = (kernel_ulong_t) &imx27_nand_devtype_data,
-	}, {
-		.name = "imx25-nand",
-		.driver_data = (kernel_ulong_t) &imx25_nand_devtype_data,
-	}, {
-		.name = "imx51-nand",
-		.driver_data = (kernel_ulong_t) &imx51_nand_devtype_data,
-	}, {
-		.name = "imx53-nand",
-		.driver_data = (kernel_ulong_t) &imx53_nand_devtype_data,
-	}, {
-		/* sentinel */
-	}
-};
-MODULE_DEVICE_TABLE(platform, mxcnd_devtype);
-
-#ifdef CONFIG_OF
 static const struct of_device_id mxcnd_dt_ids[] = {
-	{
-		.compatible = "fsl,imx21-nand",
-		.data = &imx21_nand_devtype_data,
-	}, {
-		.compatible = "fsl,imx27-nand",
-		.data = &imx27_nand_devtype_data,
-	}, {
-		.compatible = "fsl,imx25-nand",
-		.data = &imx25_nand_devtype_data,
-	}, {
-		.compatible = "fsl,imx51-nand",
-		.data = &imx51_nand_devtype_data,
-	}, {
-		.compatible = "fsl,imx53-nand",
-		.data = &imx53_nand_devtype_data,
-	},
+	{ .compatible = "fsl,imx21-nand", .data = &imx21_nand_devtype_data, },
+	{ .compatible = "fsl,imx27-nand", .data = &imx27_nand_devtype_data, },
+	{ .compatible = "fsl,imx25-nand", .data = &imx25_nand_devtype_data, },
+	{ .compatible = "fsl,imx51-nand", .data = &imx51_nand_devtype_data, },
+	{ .compatible = "fsl,imx53-nand", .data = &imx53_nand_devtype_data, },
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, mxcnd_dt_ids);
-
-static int mxcnd_probe_dt(struct mxc_nand_host *host)
-{
-	struct device_node *np = host->dev->of_node;
-	const struct of_device_id *of_id =
-		of_match_device(mxcnd_dt_ids, host->dev);
-
-	if (!np)
-		return 1;
-
-	host->devtype_data = of_id->data;
-
-	return 0;
-}
-#else
-static int mxcnd_probe_dt(struct mxc_nand_host *host)
-{
-	return 1;
-}
-#endif

 static int mxcnd_attach_chip(struct nand_chip *chip)
 {
···
 	if (IS_ERR(host->clk))
 		return PTR_ERR(host->clk);

-	err = mxcnd_probe_dt(host);
-	if (err > 0) {
-		struct mxc_nand_platform_data *pdata =
-			dev_get_platdata(&pdev->dev);
-		if (pdata) {
-			host->pdata = *pdata;
-			host->devtype_data = (struct mxc_nand_devtype_data *)
-				pdev->id_entry->driver_data;
-		} else {
-			err = -ENODEV;
-		}
-	}
-	if (err < 0)
-		return err;
+	host->devtype_data = device_get_match_data(&pdev->dev);

 	if (!host->devtype_data->setup_interface)
 		this->options |= NAND_KEEP_TIMINGS;
···
 	host->regs_axi = host->base + host->devtype_data->axi_offset;

 	this->legacy.select_chip = host->devtype_data->select_chip;
-
-	/* NAND bus width determines access functions used by upper layer */
-	if (host->pdata.width == 2)
-		this->options |= NAND_BUSWIDTH_16;
-
-	/* update flash based bbt */
-	if (host->pdata.flash_bbt)
-		this->bbt_options |= NAND_BBT_USE_FLASH;

 	init_completion(&host->op_completion);
···
 		goto escan;

 	/* Register the partitions */
-	err = mtd_device_parse_register(mtd, part_probes, NULL,
-					host->pdata.parts,
-					host->pdata.nr_parts);
+	err = mtd_device_parse_register(mtd, part_probes, NULL, NULL, 0);
 	if (err)
 		goto cleanup_nand;
···
 		.name = DRIVER_NAME,
 		.of_match_table = of_match_ptr(mxcnd_dt_ids),
 	},
-	.id_table = mxcnd_devtype,
 	.probe = mxcnd_probe,
 	.remove = mxcnd_remove,
 };
drivers/mtd/nand/raw/mxic_nand.c
···
 #include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/mtd/mtd.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/platform_device.h>

 #include "internals.h"
drivers/mtd/nand/raw/nand_base.c
···
 #include <linux/types.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/nand.h>
-#include <linux/mtd/nand_ecc.h>
-#include <linux/mtd/nand_bch.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
+#include <linux/mtd/nand-ecc-sw-bch.h>
 #include <linux/interrupt.h>
 #include <linux/bitops.h>
 #include <linux/io.h>
···
 	kfree(chip->parameters.onfi);
 }

+int rawnand_sw_hamming_init(struct nand_chip *chip)
+{
+	struct nand_ecc_sw_hamming_conf *engine_conf;
+	struct nand_device *base = &chip->base;
+	int ret;
+
+	base->ecc.user_conf.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+	base->ecc.user_conf.algo = NAND_ECC_ALGO_HAMMING;
+	base->ecc.user_conf.strength = chip->ecc.strength;
+	base->ecc.user_conf.step_size = chip->ecc.size;
+
+	ret = nand_ecc_sw_hamming_init_ctx(base);
+	if (ret)
+		return ret;
+
+	engine_conf = base->ecc.ctx.priv;
+
+	if (chip->ecc.options & NAND_ECC_SOFT_HAMMING_SM_ORDER)
+		engine_conf->sm_order = true;
+
+	chip->ecc.size = base->ecc.ctx.conf.step_size;
+	chip->ecc.strength = base->ecc.ctx.conf.strength;
+	chip->ecc.total = base->ecc.ctx.total;
+	chip->ecc.steps = engine_conf->nsteps;
+	chip->ecc.bytes = engine_conf->code_size;
+
+	return 0;
+}
+EXPORT_SYMBOL(rawnand_sw_hamming_init);
+
+int rawnand_sw_hamming_calculate(struct nand_chip *chip,
+				 const unsigned char *buf,
+				 unsigned char *code)
+{
+	struct nand_device *base = &chip->base;
+
+	return nand_ecc_sw_hamming_calculate(base, buf, code);
+}
+EXPORT_SYMBOL(rawnand_sw_hamming_calculate);
+
+int rawnand_sw_hamming_correct(struct nand_chip *chip,
+			       unsigned char *buf,
+			       unsigned char *read_ecc,
+			       unsigned char *calc_ecc)
+{
+	struct nand_device *base = &chip->base;
+
+	return nand_ecc_sw_hamming_correct(base, buf, read_ecc, calc_ecc);
+}
+EXPORT_SYMBOL(rawnand_sw_hamming_correct);
+
+void rawnand_sw_hamming_cleanup(struct nand_chip *chip)
+{
+	struct nand_device *base = &chip->base;
+
+	nand_ecc_sw_hamming_cleanup_ctx(base);
+}
+EXPORT_SYMBOL(rawnand_sw_hamming_cleanup);
+
+int rawnand_sw_bch_init(struct nand_chip *chip)
+{
+	struct nand_device *base = &chip->base;
+	struct nand_ecc_sw_bch_conf *engine_conf;
+	int ret;
+
+	base->ecc.user_conf.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+	base->ecc.user_conf.algo = NAND_ECC_ALGO_BCH;
+	base->ecc.user_conf.step_size = chip->ecc.size;
+	base->ecc.user_conf.strength = chip->ecc.strength;
+
+	ret = nand_ecc_sw_bch_init_ctx(base);
+	if (ret)
+		return ret;
+
+	engine_conf = base->ecc.ctx.priv;
+
+	chip->ecc.size = base->ecc.ctx.conf.step_size;
+	chip->ecc.strength = base->ecc.ctx.conf.strength;
+	chip->ecc.total = base->ecc.ctx.total;
+	chip->ecc.steps = engine_conf->nsteps;
+	chip->ecc.bytes = engine_conf->code_size;
+
+	return 0;
+}
+EXPORT_SYMBOL(rawnand_sw_bch_init);
+
+static int rawnand_sw_bch_calculate(struct nand_chip *chip,
+				    const unsigned char *buf,
+				    unsigned char *code)
+{
+	struct nand_device *base = &chip->base;
+
+	return nand_ecc_sw_bch_calculate(base, buf, code);
+}
+
+int rawnand_sw_bch_correct(struct nand_chip *chip, unsigned char *buf,
+			   unsigned char *read_ecc, unsigned char *calc_ecc)
+{
+	struct nand_device *base = &chip->base;
+
+	return nand_ecc_sw_bch_correct(base, buf, read_ecc, calc_ecc);
+}
+EXPORT_SYMBOL(rawnand_sw_bch_correct);
+
+void rawnand_sw_bch_cleanup(struct nand_chip *chip)
+{
+	struct nand_device *base = &chip->base;
+
+	nand_ecc_sw_bch_cleanup_ctx(base);
+}
+EXPORT_SYMBOL(rawnand_sw_bch_cleanup);
+
 static int nand_set_ecc_on_host_ops(struct nand_chip *chip)
 {
 	struct nand_ecc_ctrl *ecc = &chip->ecc;
···
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct nand_device *nanddev = mtd_to_nanddev(mtd);
 	struct nand_ecc_ctrl *ecc = &chip->ecc;
+	int ret;

 	if (WARN_ON(ecc->engine_type != NAND_ECC_ENGINE_TYPE_SOFT))
 		return -EINVAL;

 	switch (ecc->algo) {
 	case NAND_ECC_ALGO_HAMMING:
-		ecc->calculate = nand_calculate_ecc;
-		ecc->correct = nand_correct_data;
+		ecc->calculate = rawnand_sw_hamming_calculate;
+		ecc->correct = rawnand_sw_hamming_correct;
 		ecc->read_page = nand_read_page_swecc;
 		ecc->read_subpage = nand_read_subpage;
 		ecc->write_page = nand_write_page_swecc;
···
 		if (IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC))
 			ecc->options |= NAND_ECC_SOFT_HAMMING_SM_ORDER;

+		ret = rawnand_sw_hamming_init(chip);
+		if (ret) {
+			WARN(1, "Hamming ECC initialization failed!\n");
+			return ret;
+		}
+
 		return 0;
 	case NAND_ECC_ALGO_BCH:
-		if (!mtd_nand_has_bch()) {
+		if (!IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_BCH)) {
 			WARN(1, "CONFIG_MTD_NAND_ECC_SW_BCH not enabled\n");
 			return -EINVAL;
 		}
-		ecc->calculate = nand_bch_calculate_ecc;
-		ecc->correct = nand_bch_correct_data;
+		ecc->calculate = rawnand_sw_bch_calculate;
+		ecc->correct = rawnand_sw_bch_correct;
 		ecc->read_page = nand_read_page_swecc;
 		ecc->read_subpage = nand_read_subpage;
 		ecc->write_page = nand_write_page_swecc;
···
 		ecc->write_oob = nand_write_oob_std;

 		/*
-		 * Board driver should supply ecc.size and ecc.strength
-		 * values to select how many bits are correctable.
-		 * Otherwise, default to 4 bits for large page devices.
-		 */
-		if (!ecc->size && (mtd->oobsize >= 64)) {
-			ecc->size = 512;
-			ecc->strength = 4;
-		}
-
-		/*
-		 * if no ecc placement scheme was provided pickup the default
-		 * large page one.
-		 */
-		if (!mtd->ooblayout) {
-			/* handle large page devices only */
-			if (mtd->oobsize < 64) {
-				WARN(1, "OOB layout is required when using software BCH on small pages\n");
-				return -EINVAL;
-			}
-
-			mtd_set_ooblayout(mtd, nand_get_large_page_ooblayout());
-		}
-
-		/*
 		 * We can only maximize ECC config when the default layout is
 		 * used, otherwise we don't know how many bytes can really be
 		 * used.
 		 */
-		if (mtd->ooblayout == nand_get_large_page_ooblayout() &&
-		    nanddev->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH) {
-			int steps, bytes;
+		if (nanddev->ecc.user_conf.flags & NAND_ECC_MAXIMIZE_STRENGTH &&
+		    mtd->ooblayout != nand_get_large_page_ooblayout())
+			nanddev->ecc.user_conf.flags &= ~NAND_ECC_MAXIMIZE_STRENGTH;

-			/* Always prefer 1k blocks over 512bytes ones */
-			ecc->size = 1024;
-			steps = mtd->writesize / ecc->size;
-
-			/* Reserve 2 bytes for the BBM */
-			bytes = (mtd->oobsize - 2) / steps;
-			ecc->strength = bytes * 8 / fls(8 * ecc->size);
-		}
-
-		/* See nand_bch_init() for details. */
-		ecc->bytes = 0;
-		ecc->priv = nand_bch_init(mtd);
-		if (!ecc->priv) {
+		ret = rawnand_sw_bch_init(chip);
+		if (ret) {
 			WARN(1, "BCH ECC initialization failed!\n");
-			return -EINVAL;
+			return ret;
 		}
+
 		return 0;
 	default:
 		WARN(1, "Unsupported ECC algorithm!\n");
···
 	 */
 	if (!mtd->ooblayout &&
 	    !(ecc->engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
-	      ecc->algo == NAND_ECC_ALGO_BCH)) {
+	      ecc->algo == NAND_ECC_ALGO_BCH) &&
+	    !(ecc->engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
+	      ecc->algo == NAND_ECC_ALGO_HAMMING)) {
 		switch (mtd->oobsize) {
 		case 8:
 		case 16:
···
 	 * Set the number of read / write steps for one page depending on ECC
 	 * mode.
 	 */
-	ecc->steps = mtd->writesize / ecc->size;
+	if (!ecc->steps)
+		ecc->steps = mtd->writesize / ecc->size;
 	if (ecc->steps * ecc->size != mtd->writesize) {
 		WARN(1, "Invalid ECC parameters\n");
 		ret = -EINVAL;
 		goto err_nand_manuf_cleanup;
 	}

-	ecc->total = ecc->steps * ecc->bytes;
-	chip->base.ecc.ctx.total = ecc->total;
+	if (!ecc->total) {
+		ecc->total = ecc->steps * ecc->bytes;
+		chip->base.ecc.ctx.total = ecc->total;
+	}

 	if (ecc->total > mtd->oobsize) {
 		WARN(1, "Total number of ECC bytes exceeded oobsize\n");
···
  */
 void nand_cleanup(struct nand_chip *chip)
 {
-	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT &&
-	    chip->ecc.algo == NAND_ECC_ALGO_BCH)
-		nand_bch_free((struct nand_bch_control *)chip->ecc.priv);
+	if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_SOFT) {
+		if (chip->ecc.algo == NAND_ECC_ALGO_HAMMING)
+			rawnand_sw_hamming_cleanup(chip);
+		else if (chip->ecc.algo == NAND_ECC_ALGO_BCH)
+			rawnand_sw_bch_cleanup(chip);
+	}

 	nanddev_cleanup(&chip->base);
drivers/mtd/nand/raw/nand_bbt.c
···
 }

 /**
- * mark_bbt_regions - [GENERIC] mark the bad block table regions
+ * mark_bbt_region - [GENERIC] mark the bad block table regions
  * @this: the NAND device
  * @td: bad block table descriptor
  *
drivers/mtd/nand/raw/nand_bch.c
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * This file provides ECC correction for more than 1 bit per block of data,
- * using binary BCH codes. It relies on the generic BCH library lib/bch.c.
- *
- * Copyright © 2011 Ivan Djelic <ivan.djelic@parrot.com>
- */
-
-#include <linux/types.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/slab.h>
-#include <linux/bitops.h>
-#include <linux/mtd/mtd.h>
-#include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_bch.h>
-#include <linux/bch.h>
-
-/**
- * struct nand_bch_control - private NAND BCH control structure
- * @bch: BCH control structure
- * @errloc: error location array
- * @eccmask: XOR ecc mask, allows erased pages to be decoded as valid
- */
-struct nand_bch_control {
-	struct bch_control *bch;
-	unsigned int *errloc;
-	unsigned char *eccmask;
-};
-
-/**
- * nand_bch_calculate_ecc - [NAND Interface] Calculate ECC for data block
- * @chip: NAND chip object
- * @buf: input buffer with raw data
- * @code: output buffer with ECC
- */
-int nand_bch_calculate_ecc(struct nand_chip *chip, const unsigned char *buf,
-			   unsigned char *code)
-{
-	struct nand_bch_control *nbc = chip->ecc.priv;
-	unsigned int i;
-
-	memset(code, 0, chip->ecc.bytes);
-	bch_encode(nbc->bch, buf, chip->ecc.size, code);
-
-	/* apply mask so that an erased page is a valid codeword */
-	for (i = 0; i < chip->ecc.bytes; i++)
-		code[i] ^= nbc->eccmask[i];
-
-	return 0;
-}
-EXPORT_SYMBOL(nand_bch_calculate_ecc);
-
-/**
- * nand_bch_correct_data - [NAND Interface] Detect and correct bit error(s)
- * @chip: NAND chip object
- * @buf: raw data read from the chip
- * @read_ecc: ECC from the chip
- * @calc_ecc: the ECC calculated from raw data
- *
- * Detect and correct bit errors for a data byte block
- */
-int nand_bch_correct_data(struct nand_chip *chip, unsigned char *buf,
-			  unsigned char *read_ecc, unsigned char *calc_ecc)
-{
-	struct nand_bch_control *nbc = chip->ecc.priv;
-	unsigned int *errloc = nbc->errloc;
-	int i, count;
-
-	count = bch_decode(nbc->bch, NULL, chip->ecc.size, read_ecc, calc_ecc,
-			   NULL, errloc);
-	if (count > 0) {
-		for (i = 0; i < count; i++) {
-			if (errloc[i] < (chip->ecc.size*8))
-				/* error is located in data, correct it */
-				buf[errloc[i] >> 3] ^= (1 << (errloc[i] & 7));
-			/* else error in ecc, no action needed */
-
-			pr_debug("%s: corrected bitflip %u\n", __func__,
-				 errloc[i]);
-		}
-	} else if (count < 0) {
-		pr_err("ecc unrecoverable error\n");
-		count = -EBADMSG;
-	}
-	return count;
-}
-EXPORT_SYMBOL(nand_bch_correct_data);
-
-/**
- * nand_bch_init - [NAND Interface] Initialize NAND BCH error correction
- * @mtd: MTD block structure
- *
- * Returns:
- *	a pointer to a new NAND BCH control structure, or NULL upon failure
- *
- * Initialize NAND BCH error correction. Parameters @eccsize and @eccbytes
- * are used to compute BCH parameters m (Galois field order) and t (error
- * correction capability). @eccbytes should be equal to the number of bytes
- * required to store m*t bits, where m is such that 2^m-1 > @eccsize*8.
- *
- * Example: to configure 4 bit correction per 512 bytes, you should pass
- * @eccsize = 512 (thus, m=13 is the smallest integer such that 2^m-1 > 512*8)
- * @eccbytes = 7 (7 bytes are required to store m*t = 13*4 = 52 bits)
- */
-struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
-{
-	struct nand_chip *nand = mtd_to_nand(mtd);
-	unsigned int m, t, eccsteps, i;
-	struct nand_bch_control *nbc = NULL;
-	unsigned char *erased_page;
-	unsigned int eccsize = nand->ecc.size;
-	unsigned int eccbytes = nand->ecc.bytes;
-	unsigned int eccstrength = nand->ecc.strength;
-
-	if (!eccbytes && eccstrength) {
-		eccbytes = DIV_ROUND_UP(eccstrength * fls(8 * eccsize), 8);
-		nand->ecc.bytes = eccbytes;
-	}
-
-	if (!eccsize || !eccbytes) {
-		pr_warn("ecc parameters not supplied\n");
-		goto fail;
-	}
-
-	m = fls(1+8*eccsize);
-	t = (eccbytes*8)/m;
-
-	nbc = kzalloc(sizeof(*nbc), GFP_KERNEL);
-	if (!nbc)
-		goto fail;
-
-	nbc->bch = bch_init(m, t, 0, false);
-	if (!nbc->bch)
-		goto fail;
-
-	/* verify that eccbytes has the expected value */
-	if (nbc->bch->ecc_bytes != eccbytes) {
-		pr_warn("invalid eccbytes %u, should be %u\n",
-			eccbytes, nbc->bch->ecc_bytes);
-		goto fail;
-	}
-
-	eccsteps = mtd->writesize/eccsize;
-
-	/* Check that we have an oob layout description. */
-	if (!mtd->ooblayout) {
-		pr_warn("missing oob scheme");
-		goto fail;
-	}
-
-	/* sanity checks */
-	if (8*(eccsize+eccbytes) >= (1 << m)) {
-		pr_warn("eccsize %u is too large\n", eccsize);
-		goto fail;
-	}
-
-	/*
-	 * ecc->steps and ecc->total might be used by mtd->ooblayout->ecc(),
-	 * which is called by mtd_ooblayout_count_eccbytes().
-	 * Make sure they are properly initialized before calling
-	 * mtd_ooblayout_count_eccbytes().
-	 * FIXME: we should probably rework the sequencing in nand_scan_tail()
-	 * to avoid setting those fields twice.
-	 */
-	nand->ecc.steps = eccsteps;
-	nand->ecc.total = eccsteps * eccbytes;
-	nand->base.ecc.ctx.total = nand->ecc.total;
-	if (mtd_ooblayout_count_eccbytes(mtd) != (eccsteps*eccbytes)) {
-		pr_warn("invalid ecc layout\n");
-		goto fail;
-	}
-
-	nbc->eccmask = kzalloc(eccbytes, GFP_KERNEL);
-	nbc->errloc = kmalloc_array(t, sizeof(*nbc->errloc), GFP_KERNEL);
-	if (!nbc->eccmask || !nbc->errloc)
-		goto fail;
-	/*
-	 * compute and store the inverted ecc of an erased ecc block
-	 */
-	erased_page = kmalloc(eccsize, GFP_KERNEL);
-	if (!erased_page)
-		goto fail;
-
-	memset(erased_page, 0xff, eccsize);
-	bch_encode(nbc->bch, erased_page, eccsize, nbc->eccmask);
-	kfree(erased_page);
-
-	for (i = 0; i < eccbytes; i++)
-		nbc->eccmask[i] ^= 0xff;
-
-	if (!eccstrength)
-		nand->ecc.strength = (eccbytes * 8) / fls(8 * eccsize);
-
-	return nbc;
-fail:
-	nand_bch_free(nbc);
-	return NULL;
-}
-EXPORT_SYMBOL(nand_bch_init);
-
-/**
- * nand_bch_free - [NAND Interface] Release NAND BCH ECC resources
- * @nbc: NAND BCH control structure
- */
-void nand_bch_free(struct nand_bch_control *nbc)
-{
-	if (nbc) {
-		bch_free(nbc->bch);
-		kfree(nbc->errloc);
-		kfree(nbc->eccmask);
-		kfree(nbc);
-	}
-}
-EXPORT_SYMBOL(nand_bch_free);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Ivan Djelic <ivan.djelic@parrot.com>");
-MODULE_DESCRIPTION("NAND software BCH ECC support");
drivers/mtd/nand/raw/nand_ecc.c → drivers/mtd/nand/ecc-sw-hamming.c
···
 #include <linux/types.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/mtd/mtd.h>
-#include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
+#include <linux/slab.h>
 #include <asm/byteorder.h>

 /*
···
  * addressbits is a lookup table to filter out the bits from the xor-ed
  * ECC data that identify the faulty location.
  * this is only used for repairing parity
- * see the comments in nand_correct_data for more details
+ * see the comments in nand_ecc_sw_hamming_correct for more details
  */
 static const char addressbits[256] = {
 	0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x01, 0x01,
···
 	0x0e, 0x0e, 0x0f, 0x0f, 0x0e, 0x0e, 0x0f, 0x0f
 };

-/**
- * __nand_calculate_ecc - [NAND Interface] Calculate 3-byte ECC for 256/512-byte
- *			  block
- * @buf: input buffer with raw data
- * @eccsize: data bytes per ECC step (256 or 512)
- * @code: output buffer with ECC
- * @sm_order: Smart Media byte ordering
- */
-void __nand_calculate_ecc(const unsigned char *buf, unsigned int eccsize,
-			  unsigned char *code, bool sm_order)
+int ecc_sw_hamming_calculate(const unsigned char *buf, unsigned int step_size,
+			     unsigned char *code, bool sm_order)
 {
+	const u32 *bp = (uint32_t *)buf;
+	const u32 eccsize_mult = (step_size == 256) ? 1 : 2;
+	/* current value in buffer */
+	u32 cur;
+	/* rp0..rp17 are the various accumulated parities (per byte) */
+	u32 rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7, rp8, rp9, rp10, rp11, rp12,
+	    rp13, rp14, rp15, rp16, rp17;
+	/* Cumulative parity for all data */
+	u32 par;
+	/* Cumulative parity at the end of the loop (rp12, rp14, rp16) */
+	u32 tmppar;
 	int i;
-	const uint32_t *bp = (uint32_t *)buf;
-	/* 256 or 512 bytes/ecc */
-	const uint32_t eccsize_mult = eccsize >> 8;
-	uint32_t cur;		/* current value in buffer */
-	/* rp0..rp15..rp17 are the various accumulated parities (per byte) */
-	uint32_t rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
-	uint32_t rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15, rp16;
-	uint32_t rp17;
-	uint32_t par;		/* the cumulative parity for all data */
-	uint32_t tmppar;	/* the cumulative parity for this iteration;
-				   for rp12, rp14 and rp16 at the end of the
-				   loop */

 	par = 0;
 	rp4 = 0;
···
 	rp12 = 0;
 	rp14 = 0;
 	rp16 = 0;
+	rp17 = 0;

 	/*
 	 * The loop is unrolled a number of times;
···
 		(invparity[par & 0x55] << 2) |
 		(invparity[rp17] << 1) |
 		(invparity[rp16] << 0);
-}
-EXPORT_SYMBOL(__nand_calculate_ecc);
-
-/**
- * nand_calculate_ecc - [NAND Interface] Calculate 3-byte ECC for 256/512-byte
- *			block
- * @chip: NAND chip object
- * @buf: input buffer with raw data
- * @code: output buffer with ECC
- */
-int nand_calculate_ecc(struct nand_chip *chip, const unsigned char *buf,
-		       unsigned char *code)
-{
-	bool sm_order = chip->ecc.options & NAND_ECC_SOFT_HAMMING_SM_ORDER;
-
-	__nand_calculate_ecc(buf, chip->ecc.size, code, sm_order);

 	return 0;
 }
-EXPORT_SYMBOL(nand_calculate_ecc);
+EXPORT_SYMBOL(ecc_sw_hamming_calculate);

 /**
- * __nand_correct_data - [NAND Interface] Detect and correct bit error(s)
- * @buf: raw data read from the chip
- * @read_ecc: ECC from the chip
- * @calc_ecc: the ECC calculated from raw data
- * @eccsize: data bytes per ECC step (256 or 512)
- * @sm_order: Smart Media byte order
- *
- * Detect and correct a 1 bit error for eccsize byte block
+ * nand_ecc_sw_hamming_calculate - Calculate 3-byte ECC for 256/512-byte block
+ * @nand: NAND device
+ * @buf: Input buffer with raw data
+ * @code: Output buffer with ECC
  */
-int __nand_correct_data(unsigned char *buf,
-			unsigned char *read_ecc, unsigned char *calc_ecc,
-			unsigned int eccsize, bool sm_order)
+int nand_ecc_sw_hamming_calculate(struct nand_device *nand,
+				  const unsigned char *buf, unsigned char *code)
 {
+	struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
+	unsigned int step_size = nand->ecc.ctx.conf.step_size;
+
+	return ecc_sw_hamming_calculate(buf, step_size, code,
+					engine_conf->sm_order);
+}
+EXPORT_SYMBOL(nand_ecc_sw_hamming_calculate);
+
+int ecc_sw_hamming_correct(unsigned char *buf, unsigned char *read_ecc,
+			   unsigned char *calc_ecc, unsigned int step_size,
+			   bool sm_order)
+{
+	const u32 eccsize_mult = step_size >> 8;
 	unsigned char b0, b1, b2, bit_addr;
 	unsigned int byte_addr;
-	/* 256 or 512 bytes/ecc */
-	const uint32_t eccsize_mult = eccsize >> 8;

 	/*
 	 * b0 to b2 indicate which bit is faulty (if any)
···
 	pr_err("%s: uncorrectable ECC error\n", __func__);
 	return -EBADMSG;
 }
-EXPORT_SYMBOL(__nand_correct_data);
+EXPORT_SYMBOL(ecc_sw_hamming_correct);

 /**
- * nand_correct_data - [NAND Interface] Detect and correct bit error(s)
- * @chip: NAND chip object
- * @buf: raw data read from the chip
- * @read_ecc: ECC from the chip
- * @calc_ecc: the ECC calculated from raw data
+ * nand_ecc_sw_hamming_correct - Detect and correct bit error(s)
+ * @nand: NAND device
+ * @buf: Raw data read from the chip
+ * @read_ecc: ECC bytes read from the chip
+ * @calc_ecc: ECC calculated from the raw data
  *
- * Detect and correct a 1 bit error for 256/512 byte block
+ * Detect and correct up to 1 bit error per 256/512-byte block.
  */
-int nand_correct_data(struct nand_chip *chip, unsigned char *buf,
-		      unsigned char *read_ecc, unsigned char *calc_ecc)
+int nand_ecc_sw_hamming_correct(struct nand_device *nand, unsigned char *buf,
+				unsigned char *read_ecc,
+				unsigned char *calc_ecc)
 {
-	bool sm_order = chip->ecc.options & NAND_ECC_SOFT_HAMMING_SM_ORDER;
+	struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
+	unsigned int step_size = nand->ecc.ctx.conf.step_size;

-	return __nand_correct_data(buf, read_ecc, calc_ecc, chip->ecc.size,
-				   sm_order);
+	return ecc_sw_hamming_correct(buf, read_ecc, calc_ecc, step_size,
+				      engine_conf->sm_order);
 }
-EXPORT_SYMBOL(nand_correct_data);
+EXPORT_SYMBOL(nand_ecc_sw_hamming_correct);
+
+int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
+{
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct nand_ecc_sw_hamming_conf *engine_conf;
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	int ret;
+
+	if (!mtd->ooblayout) {
+		switch (mtd->oobsize) {
+		case 8:
+		case 16:
+			mtd_set_ooblayout(mtd, nand_get_small_page_ooblayout());
+			break;
+		case 64:
+		case 128:
+			mtd_set_ooblayout(mtd,
+					  nand_get_large_page_hamming_ooblayout());
+			break;
+		default:
+			return -ENOTSUPP;
+		}
+	}
+
+	conf->engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
+	conf->algo = NAND_ECC_ALGO_HAMMING;
+	conf->step_size = nand->ecc.user_conf.step_size;
+	conf->strength = 1;
+
+	/* Use the strongest configuration by default */
+	if (conf->step_size != 256 && conf->step_size != 512)
+		conf->step_size = 256;
+
+	engine_conf = kzalloc(sizeof(*engine_conf), GFP_KERNEL);
+	if (!engine_conf)
+		return -ENOMEM;
+
+	ret = nand_ecc_init_req_tweaking(&engine_conf->req_ctx, nand);
+	if (ret)
+		goto free_engine_conf;
+
+	engine_conf->code_size = 3;
+	engine_conf->nsteps = mtd->writesize / conf->step_size;
+	engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
+	engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
+	if (!engine_conf->calc_buf || !engine_conf->code_buf) {
+		ret = -ENOMEM;
+		goto free_bufs;
+	}
+
+	nand->ecc.ctx.priv = engine_conf;
+	nand->ecc.ctx.total = engine_conf->nsteps * engine_conf->code_size;
+
+	return 0;
+
+free_bufs:
+	nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
+	kfree(engine_conf->calc_buf);
+	kfree(engine_conf->code_buf);
+free_engine_conf:
+	kfree(engine_conf);
+
+	return ret;
+}
+EXPORT_SYMBOL(nand_ecc_sw_hamming_init_ctx);
+
+void nand_ecc_sw_hamming_cleanup_ctx(struct nand_device *nand)
+{
+	struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
+
+	if (engine_conf) {
+		nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
+		kfree(engine_conf->calc_buf);
+		kfree(engine_conf->code_buf);
+		kfree(engine_conf);
+	}
+}
+EXPORT_SYMBOL(nand_ecc_sw_hamming_cleanup_ctx);
+
+static int nand_ecc_sw_hamming_prepare_io_req(struct nand_device *nand,
+					      struct nand_page_io_req *req)
+{
+	struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	int eccsize = nand->ecc.ctx.conf.step_size;
+	int eccbytes = engine_conf->code_size;
+	int eccsteps = engine_conf->nsteps;
+	int total = nand->ecc.ctx.total;
+	u8 *ecccalc = engine_conf->calc_buf;
+	const u8 *data;
+	int i;
+
+	/* Nothing to do for a raw operation */
+	if (req->mode == MTD_OPS_RAW)
+		return 0;
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen)
+		return 0;
+
+	nand_ecc_tweak_req(&engine_conf->req_ctx, req);
+
+	/* No more preparation for page read */
+	if (req->type == NAND_PAGE_READ)
+		return 0;
+
+	/* Preparation for page write: derive the ECC bytes and place them */
+	for (i = 0, data = req->databuf.out;
+	     eccsteps;
+	     eccsteps--, i += eccbytes, data += eccsize)
+		nand_ecc_sw_hamming_calculate(nand, data, &ecccalc[i]);
+
+	return mtd_ooblayout_set_eccbytes(mtd, ecccalc, (void *)req->oobbuf.out,
+					  0, total);
+}
+
+static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand,
+					     struct nand_page_io_req *req)
+{
+	struct nand_ecc_sw_hamming_conf *engine_conf = nand->ecc.ctx.priv;
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	int eccsize = nand->ecc.ctx.conf.step_size;
+	int total = nand->ecc.ctx.total;
+	int eccbytes = engine_conf->code_size;
+	int eccsteps = engine_conf->nsteps;
+	u8 *ecccalc = engine_conf->calc_buf;
+	u8 *ecccode = engine_conf->code_buf;
+	unsigned int max_bitflips = 0;
+	u8 *data = req->databuf.in;
+	int i, ret;
+
+	/* Nothing to do for a raw operation */
+	if (req->mode == MTD_OPS_RAW)
+		return 0;
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen)
+		return 0;
+
+	/* No more preparation for page write */
+	if (req->type == NAND_PAGE_WRITE) {
+		nand_ecc_restore_req(&engine_conf->req_ctx, req);
+		return 0;
+	}
+
+	/* Finish a page read: retrieve the (raw) ECC bytes */
+	ret = mtd_ooblayout_get_eccbytes(mtd, ecccode, req->oobbuf.in, 0,
+					 total);
+	if (ret)
+		return ret;
+
+	/* Calculate the ECC bytes */
+	for (i = 0; eccsteps; eccsteps--, i += eccbytes, data += eccsize)
+		nand_ecc_sw_hamming_calculate(nand, data, &ecccalc[i]);
+
+	/* Finish a page read: compare and correct */
+	for (eccsteps = engine_conf->nsteps, i = 0, data = req->databuf.in;
+	     eccsteps;
+	     eccsteps--, i += eccbytes, data += eccsize) {
+		int stat = nand_ecc_sw_hamming_correct(nand, data,
+						       &ecccode[i],
+						       &ecccalc[i]);
+		if (stat < 0) {
+			mtd->ecc_stats.failed++;
+		} else {
+			mtd->ecc_stats.corrected += stat;
+			max_bitflips = max_t(unsigned int, max_bitflips, stat);
+		}
+	}
+
+	nand_ecc_restore_req(&engine_conf->req_ctx, req);
+
+	return max_bitflips;
+}
+
+static struct nand_ecc_engine_ops nand_ecc_sw_hamming_engine_ops = {
+	.init_ctx = nand_ecc_sw_hamming_init_ctx,
+	.cleanup_ctx = nand_ecc_sw_hamming_cleanup_ctx,
+	.prepare_io_req = nand_ecc_sw_hamming_prepare_io_req,
+	.finish_io_req = nand_ecc_sw_hamming_finish_io_req,
+};
+
+static struct nand_ecc_engine nand_ecc_sw_hamming_engine = {
+	.ops = &nand_ecc_sw_hamming_engine_ops,
+};
+
+struct nand_ecc_engine *nand_ecc_sw_hamming_get_engine(void)
+{
+	return &nand_ecc_sw_hamming_engine;
+}
+EXPORT_SYMBOL(nand_ecc_sw_hamming_get_engine);

 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Frans Meulenbroeks <fransmeulenbroeks@gmail.com>");
-MODULE_DESCRIPTION("Generic NAND ECC support");
+MODULE_DESCRIPTION("NAND software Hamming ECC support");
+5 -4
drivers/mtd/nand/raw/nand_legacy.c
···
  */
 void nand_wait_ready(struct nand_chip *chip)
 {
+	struct mtd_info *mtd = nand_to_mtd(chip);
 	unsigned long timeo = 400;
 
-	if (in_interrupt() || oops_in_progress)
+	if (mtd->oops_panic_write)
 		return panic_nand_wait_ready(chip, timeo);
 
 	/* Wait until command is processed or timeout occurs */
···
  */
 static int nand_wait(struct nand_chip *chip)
 {
-
+	struct mtd_info *mtd = nand_to_mtd(chip);
 	unsigned long timeo = 400;
 	u8 status;
 	int ret;
···
 	if (ret)
 		return ret;
 
-	if (in_interrupt() || oops_in_progress)
+	if (mtd->oops_panic_write) {
 		panic_nand_wait(chip, timeo);
-	else {
+	} else {
 		timeo = jiffies + msecs_to_jiffies(timeo);
 		do {
 			if (chip->legacy.dev_ready) {
+1 -2
drivers/mtd/nand/raw/nandsim.c
···
 #include <linux/string.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_bch.h>
 #include <linux/mtd/partitions.h>
 #include <linux/delay.h>
 #include <linux/list.h>
···
 	if (!bch)
 		return 0;
 
-	if (!mtd_nand_has_bch()) {
+	if (!IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_BCH)) {
 		NS_ERR("BCH ECC support is disabled\n");
 		return -EINVAL;
 	}
+1 -2
drivers/mtd/nand/raw/ndfc.c
···
  */
 #include <linux/module.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/mtd/partitions.h>
 #include <linux/mtd/ndfc.h>
 #include <linux/slab.h>
···
 	chip->controller = &ndfc->ndfc_control;
 	chip->legacy.read_buf = ndfc_read_buf;
 	chip->legacy.write_buf = ndfc_write_buf;
-	chip->ecc.correct = nand_correct_data;
+	chip->ecc.correct = rawnand_sw_hamming_correct;
 	chip->ecc.hwctl = ndfc_enable_hwecc;
 	chip->ecc.calculate = ndfc_calculate_ecc;
 	chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
+23 -26
drivers/mtd/nand/raw/omap2.c
···
 #include <linux/of.h>
 #include <linux/of_device.h>
 
-#include <linux/mtd/nand_bch.h>
 #include <linux/platform_data/elm.h>
 
 #include <linux/omap-gpmc.h>
···
  * @dma_mode: dma mode enable (1) or disable (0)
  * @u32_count: number of bytes to be transferred
  * @is_write: prefetch read(0) or write post(1) mode
+ * @info: NAND device structure containing platform data
  */
 static int omap_prefetch_enable(int cs, int fifo_th, int dma_mode,
 	unsigned int u32_count, int is_write, struct omap_nand_info *info)
···
 	return 0;
 }
 
-/**
+/*
  * omap_prefetch_reset - disables and stops the prefetch engine
  */
 static int omap_prefetch_reset(int cs, struct omap_nand_info *info)
···
 
 /**
  * omap_enable_hwecc - This function enables the hardware ecc functionality
- * @mtd: MTD device structure
+ * @chip: NAND chip object
  * @mode: Read/Write mode
  */
 static void omap_enable_hwecc(struct nand_chip *chip, int mode)
···
 
 /**
  * omap_dev_ready - checks the NAND Ready GPIO line
- * @mtd: MTD device structure
+ * @chip: NAND chip object
  *
  * Returns true if ready and false if busy.
  */
···
 
 /**
  * omap_enable_hwecc_bch - Program GPMC to perform BCH ECC calculation
- * @mtd: MTD device structure
+ * @chip: NAND chip object
  * @mode: Read/Write mode
  *
  * When using BCH with SW correction (i.e. no ELM), sector size is set
···
  * _omap_calculate_ecc_bch - Generate ECC bytes for one sector
  * @mtd: MTD device structure
  * @dat: The pointer to data on which ecc is computed
- * @ecc_code: The ecc_code buffer
+ * @ecc_calc: The ecc_code buffer
  * @i: The sector number (for a multi sector page)
  *
  * Support calculating of BCH4/8/16 ECC vectors for one sector
···
  * omap_calculate_ecc_bch_sw - ECC generator for sector for SW based correction
  * @chip: NAND chip object
  * @dat: The pointer to data on which ecc is computed
- * @ecc_code: The ecc_code buffer
+ * @ecc_calc: Buffer storing the calculated ECC bytes
  *
  * Support calculating of BCH4/8/16 ECC vectors for one sector. This is used
  * when SW based correction is required as ECC is required for one sector
···
  * omap_calculate_ecc_bch_multi - Generate ECC for multiple sectors
  * @mtd: MTD device structure
  * @dat: The pointer to data on which ecc is computed
- * @ecc_code: The ecc_code buffer
+ * @ecc_calc: Buffer storing the calculated ECC bytes
  *
  * Support calculating of BCH4/8/16 ecc vectors for the entire page in one go.
  */
···
 
 /**
  * is_elm_present - checks for presence of ELM module by scanning DT nodes
- * @omap_nand_info: NAND device structure containing platform data
+ * @info: NAND device structure containing platform data
+ * @elm_node: ELM's DT node
  */
 static bool is_elm_present(struct omap_nand_info *info,
 			   struct device_node *elm_node)
···
 		chip->ecc.bytes = 7;
 		chip->ecc.strength = 4;
 		chip->ecc.hwctl = omap_enable_hwecc_bch;
-		chip->ecc.correct = nand_bch_correct_data;
+		chip->ecc.correct = rawnand_sw_bch_correct;
 		chip->ecc.calculate = omap_calculate_ecc_bch_sw;
 		mtd_set_ooblayout(mtd, &omap_sw_ooblayout_ops);
 		/* Reserve one byte for the OMAP marker */
 		oobbytes_per_step = chip->ecc.bytes + 1;
 		/* Software BCH library is used for locating errors */
-		chip->ecc.priv = nand_bch_init(mtd);
-		if (!chip->ecc.priv) {
+		err = rawnand_sw_bch_init(chip);
+		if (err) {
 			dev_err(dev, "Unable to use BCH library\n");
-			return -EINVAL;
+			return err;
 		}
 		break;
···
 		chip->ecc.bytes = 13;
 		chip->ecc.strength = 8;
 		chip->ecc.hwctl = omap_enable_hwecc_bch;
-		chip->ecc.correct = nand_bch_correct_data;
+		chip->ecc.correct = rawnand_sw_bch_correct;
 		chip->ecc.calculate = omap_calculate_ecc_bch_sw;
 		mtd_set_ooblayout(mtd, &omap_sw_ooblayout_ops);
 		/* Reserve one byte for the OMAP marker */
 		oobbytes_per_step = chip->ecc.bytes + 1;
 		/* Software BCH library is used for locating errors */
-		chip->ecc.priv = nand_bch_init(mtd);
-		if (!chip->ecc.priv) {
+		err = rawnand_sw_bch_init(chip);
+		if (err) {
 			dev_err(dev, "unable to use BCH library\n");
-			return -EINVAL;
+			return err;
 		}
 		break;
···
 	nand_chip = &info->nand;
 	mtd = nand_to_mtd(nand_chip);
 	mtd->dev.parent = &pdev->dev;
-	nand_chip->ecc.priv = NULL;
 	nand_set_flash_node(nand_chip, dev->of_node);
 
 	if (!mtd->name) {
···
 return_error:
 	if (!IS_ERR_OR_NULL(info->dma))
 		dma_release_channel(info->dma);
-	if (nand_chip->ecc.priv) {
-		nand_bch_free(nand_chip->ecc.priv);
-		nand_chip->ecc.priv = NULL;
-	}
+
+	rawnand_sw_bch_cleanup(nand_chip);
+
 	return err;
 }
···
 	struct omap_nand_info *info = mtd_to_omap(mtd);
 	int ret;
 
-	if (nand_chip->ecc.priv) {
-		nand_bch_free(nand_chip->ecc.priv);
-		nand_chip->ecc.priv = NULL;
-	}
+	rawnand_sw_bch_cleanup(nand_chip);
+
 	if (info->dma)
 		dma_release_channel(info->dma);
 	ret = mtd_device_unregister(mtd);
+5 -2
drivers/mtd/nand/raw/omap_elm.c
···
  * elm_config - Configure ELM module
  * @dev: ELM device
  * @bch_type: Type of BCH ecc
+ * @ecc_steps: ECC steps to assign to config
+ * @ecc_step_size: ECC step size to assign to config
+ * @ecc_syndrome_size: ECC syndrome size to assign to config
  */
 int elm_config(struct device *dev, enum bch_ecc bch_type,
 	       int ecc_steps, int ecc_step_size, int ecc_syndrome_size)
···
 }
 
 #ifdef CONFIG_PM_SLEEP
-/**
+/*
  * elm_context_save
  * saves ELM configurations to preserve them across Hardware powered-down
  */
···
 	return 0;
 }
 
-/**
+/*
  * elm_context_restore
  * writes configurations saved duing power-down back into ELM registers
  */
-1
drivers/mtd/nand/raw/pasemi_nand.c
···
 #include <linux/module.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+57 -17
drivers/mtd/nand/raw/qcom_nandc.c
···
 #define OP_PAGE_READ			0x2
 #define OP_PAGE_READ_WITH_ECC		0x3
 #define OP_PAGE_READ_WITH_ECC_SPARE	0x4
+#define OP_PAGE_READ_ONFI_READ		0x5
 #define OP_PROGRAM_PAGE			0x6
 #define OP_PAGE_PROGRAM_WITH_ECC	0x7
 #define OP_PROGRAM_PAGE_SPARE		0x9
···
  * @ecc_modes - ecc mode for NAND
  * @is_bam - whether NAND controller is using BAM
  * @is_qpic - whether NAND CTRL is part of qpic IP
+ * @qpic_v2 - flag to indicate QPIC IP version 2
  * @dev_cmd_reg_start - NAND_DEV_CMD_* registers starting offset
  */
 struct qcom_nandc_props {
 	u32 ecc_modes;
 	bool is_bam;
 	bool is_qpic;
+	bool qpic_v2;
 	u32 dev_cmd_reg_start;
 };
···
 	 * in use. we configure the controller to perform a raw read of 512
 	 * bytes to read onfi params
 	 */
-	nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE);
+	if (nandc->props->qpic_v2)
+		nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ_ONFI_READ |
+			      PAGE_ACC | LAST_PAGE);
+	else
+		nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ |
+			      PAGE_ACC | LAST_PAGE);
+
 	nandc_set_reg(nandc, NAND_ADDR0, 0);
 	nandc_set_reg(nandc, NAND_ADDR1, 0);
 	nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE
···
 		      | 1 << DEV0_CFG1_ECC_DISABLE);
 	nandc_set_reg(nandc, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE);
 
-	/* configure CMD1 and VLD for ONFI param probing */
-	nandc_set_reg(nandc, NAND_DEV_CMD_VLD,
-		      (nandc->vld & ~READ_START_VLD));
-	nandc_set_reg(nandc, NAND_DEV_CMD1,
-		      (nandc->cmd1 & ~(0xFF << READ_ADDR))
-		      | NAND_CMD_PARAM << READ_ADDR);
+	/* configure CMD1 and VLD for ONFI param probing in QPIC v1 */
+	if (!nandc->props->qpic_v2) {
+		nandc_set_reg(nandc, NAND_DEV_CMD_VLD,
+			      (nandc->vld & ~READ_START_VLD));
+		nandc_set_reg(nandc, NAND_DEV_CMD1,
+			      (nandc->cmd1 & ~(0xFF << READ_ADDR))
+			      | NAND_CMD_PARAM << READ_ADDR);
+	}
 
 	nandc_set_reg(nandc, NAND_EXEC_CMD, 1);
 
-	nandc_set_reg(nandc, NAND_DEV_CMD1_RESTORE, nandc->cmd1);
-	nandc_set_reg(nandc, NAND_DEV_CMD_VLD_RESTORE, nandc->vld);
+	if (!nandc->props->qpic_v2) {
+		nandc_set_reg(nandc, NAND_DEV_CMD1_RESTORE, nandc->cmd1);
+		nandc_set_reg(nandc, NAND_DEV_CMD_VLD_RESTORE, nandc->vld);
+	}
+
 	nandc_set_read_loc(nandc, 0, 0, 512, 1);
 
-	write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0);
-	write_reg_dma(nandc, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
+	if (!nandc->props->qpic_v2) {
+		write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0);
+		write_reg_dma(nandc, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
+	}
 
 	nandc->buf_count = 512;
 	memset(nandc->data_buffer, 0xff, nandc->buf_count);
···
 			nandc->buf_count, 0);
 
 	/* restore CMD1 and VLD regs */
-	write_reg_dma(nandc, NAND_DEV_CMD1_RESTORE, 1, 0);
-	write_reg_dma(nandc, NAND_DEV_CMD_VLD_RESTORE, 1, NAND_BAM_NEXT_SGL);
+	if (!nandc->props->qpic_v2) {
+		write_reg_dma(nandc, NAND_DEV_CMD1_RESTORE, 1, 0);
+		write_reg_dma(nandc, NAND_DEV_CMD_VLD_RESTORE, 1, NAND_BAM_NEXT_SGL);
+	}
 
 	return 0;
 }
···
 	struct nand_chip *chip = &host->chip;
 	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
 	int i;
+
+	nandc_read_buffer_sync(nandc, true);
 
 	for (i = 0; i < cw_cnt; i++) {
 		u32 flash = le32_to_cpu(nandc->reg_read_buf[i]);
···
 	/* kill onenand */
 	if (!nandc->props->is_qpic)
 		nandc_write(nandc, SFLASHC_BURST_CFG, 0);
-	nandc_write(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD_VLD),
-		    NAND_DEV_CMD_VLD_VAL);
+
+	if (!nandc->props->qpic_v2)
+		nandc_write(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD_VLD),
+			    NAND_DEV_CMD_VLD_VAL);
 
 	/* enable ADM or BAM DMA */
 	if (nandc->props->is_bam) {
···
 	}
 
 	/* save the original values of these registers */
-	nandc->cmd1 = nandc_read(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD1));
-	nandc->vld = NAND_DEV_CMD_VLD_VAL;
+	if (!nandc->props->qpic_v2) {
+		nandc->cmd1 = nandc_read(nandc, dev_cmd_reg_addr(nandc, NAND_DEV_CMD1));
+		nandc->vld = NAND_DEV_CMD_VLD_VAL;
+	}
 
 	return 0;
 }
···
 	.dev_cmd_reg_start = 0x7000,
 };
 
+static const struct qcom_nandc_props sdx55_nandc_props = {
+	.ecc_modes = (ECC_BCH_4BIT | ECC_BCH_8BIT),
+	.is_bam = true,
+	.is_qpic = true,
+	.qpic_v2 = true,
+	.dev_cmd_reg_start = 0x7000,
+};
+
 /*
  * data will hold a struct pointer containing more differences once we support
  * more controller variants
···
 		.data = &ipq4019_nandc_props,
 	},
 	{
+		.compatible = "qcom,ipq6018-nand",
+		.data = &ipq8074_nandc_props,
+	},
+	{
 		.compatible = "qcom,ipq8074-nand",
 		.data = &ipq8074_nandc_props,
+	},
+	{
+		.compatible = "qcom,sdx55-nand",
+		.data = &sdx55_nandc_props,
 	},
 	{}
 };
+1495
drivers/mtd/nand/raw/rockchip-nand-controller.c
···
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/*
+ * Rockchip NAND Flash controller driver.
+ * Copyright (C) 2020 Rockchip Inc.
+ * Author: Yifeng Zhao <yifeng.zhao@rock-chips.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/rawnand.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+/*
+ * NFC Page Data Layout:
+ *	1024 bytes data + 4Bytes sys data + 28Bytes~124Bytes ECC data +
+ *	1024 bytes data + 4Bytes sys data + 28Bytes~124Bytes ECC data +
+ *	......
+ * NAND Page Data Layout:
+ *	1024 * n data + m Bytes oob
+ * Original Bad Block Mask Location:
+ *	First byte of oob(spare).
+ * nand_chip->oob_poi data layout:
+ *	4Bytes sys data + .... + 4Bytes sys data + ECC data.
+ */
+
+/* NAND controller register definition */
+#define NFC_READ			(0)
+#define NFC_WRITE			(1)
+
+#define NFC_FMCTL			(0x00)
+#define   FMCTL_CE_SEL_M		0xFF
+#define   FMCTL_CE_SEL(x)		(1 << (x))
+#define   FMCTL_WP			BIT(8)
+#define   FMCTL_RDY			BIT(9)
+
+#define NFC_FMWAIT			(0x04)
+#define   FLCTL_RST			BIT(0)
+#define   FLCTL_WR			(1)	/* 0: read, 1: write */
+#define   FLCTL_XFER_ST			BIT(2)
+#define   FLCTL_XFER_EN			BIT(3)
+#define   FLCTL_ACORRECT		BIT(10) /* Auto correct error bits. */
+#define   FLCTL_XFER_READY		BIT(20)
+#define   FLCTL_XFER_SECTOR		(22)
+#define   FLCTL_TOG_FIX			BIT(29)
+
+#define   BCHCTL_BANK_M			(7 << 5)
+#define   BCHCTL_BANK			(5)
+
+#define   DMA_ST			BIT(0)
+#define   DMA_WR			(1)	/* 0: write, 1: read */
+#define   DMA_EN			BIT(2)
+#define   DMA_AHB_SIZE			(3)	/* 0: 1, 1: 2, 2: 4 */
+#define   DMA_BURST_SIZE		(6)	/* 0: 1, 3: 4, 5: 8, 7: 16 */
+#define   DMA_INC_NUM			(9)	/* 1 - 16 */
+
+#define ECC_ERR_CNT(x, e) ((((x) >> (e).low) & (e).low_mask) |\
+	      (((x) >> (e).high) & (e).high_mask) << (e).low_bn)
+#define   INT_DMA			BIT(0)
+#define NFC_BANK			(0x800)
+#define NFC_BANK_STEP			(0x100)
+#define   BANK_DATA			(0x00)
+#define   BANK_ADDR			(0x04)
+#define   BANK_CMD			(0x08)
+#define NFC_SRAM0			(0x1000)
+#define NFC_SRAM1			(0x1400)
+#define NFC_SRAM_SIZE			(0x400)
+#define NFC_TIMEOUT			(500000)
+#define NFC_MAX_OOB_PER_STEP		128
+#define NFC_MIN_OOB_PER_STEP		64
+#define MAX_DATA_SIZE			0xFFFC
+#define MAX_ADDRESS_CYC			6
+#define NFC_ECC_MAX_MODES		4
+#define NFC_MAX_NSELS			(8) /* Some Socs only have 1 or 2 CSs. */
+#define NFC_SYS_DATA_SIZE		(4) /* 4 bytes sys data in oob pre 1024 data.*/
+#define RK_DEFAULT_CLOCK_RATE		(150 * 1000 * 1000) /* 150 Mhz */
+#define ACCTIMING(csrw, rwpw, rwcs)	((csrw) << 12 | (rwpw) << 5 | (rwcs))
+
+enum nfc_type {
+	NFC_V6,
+	NFC_V8,
+	NFC_V9,
+};
+
+/**
+ * struct rk_ecc_cnt_status: represent a ecc status data.
+ * @err_flag_bit: error flag bit index at register.
+ * @low: ECC count low bit index at register.
+ * @low_mask: mask bit.
+ * @low_bn: ECC count low bit number.
+ * @high: ECC count high bit index at register.
+ * @high_mask: mask bit
+ */
+struct ecc_cnt_status {
+	u8 err_flag_bit;
+	u8 low;
+	u8 low_mask;
+	u8 low_bn;
+	u8 high;
+	u8 high_mask;
+};
+
+/**
+ * @type: NFC version
+ * @ecc_strengths: ECC strengths
+ * @ecc_cfgs: ECC config values
+ * @flctl_off: FLCTL register offset
+ * @bchctl_off: BCHCTL register offset
+ * @dma_data_buf_off: DMA_DATA_BUF register offset
+ * @dma_oob_buf_off: DMA_OOB_BUF register offset
+ * @dma_cfg_off: DMA_CFG register offset
+ * @dma_st_off: DMA_ST register offset
+ * @bch_st_off: BCG_ST register offset
+ * @randmz_off: RANDMZ register offset
+ * @int_en_off: interrupt enable register offset
+ * @int_clr_off: interrupt clean register offset
+ * @int_st_off: interrupt status register offset
+ * @oob0_off: oob0 register offset
+ * @oob1_off: oob1 register offset
+ * @ecc0: represent ECC0 status data
+ * @ecc1: represent ECC1 status data
+ */
+struct nfc_cfg {
+	enum nfc_type type;
+	u8 ecc_strengths[NFC_ECC_MAX_MODES];
+	u32 ecc_cfgs[NFC_ECC_MAX_MODES];
+	u32 flctl_off;
+	u32 bchctl_off;
+	u32 dma_cfg_off;
+	u32 dma_data_buf_off;
+	u32 dma_oob_buf_off;
+	u32 dma_st_off;
+	u32 bch_st_off;
+	u32 randmz_off;
+	u32 int_en_off;
+	u32 int_clr_off;
+	u32 int_st_off;
+	u32 oob0_off;
+	u32 oob1_off;
+	struct ecc_cnt_status ecc0;
+	struct ecc_cnt_status ecc1;
+};
+
+struct rk_nfc_nand_chip {
+	struct list_head node;
+	struct nand_chip chip;
+
+	u16 boot_blks;
+	u16 metadata_size;
+	u32 boot_ecc;
+	u32 timing;
+
+	u8 nsels;
+	u8 sels[0];
+	/* Nothing after this field. */
+};
+
+struct rk_nfc {
+	struct nand_controller controller;
+	const struct nfc_cfg *cfg;
+	struct device *dev;
+
+	struct clk *nfc_clk;
+	struct clk *ahb_clk;
+	void __iomem *regs;
+
+	u32 selected_bank;
+	u32 band_offset;
+	u32 cur_ecc;
+	u32 cur_timing;
+
+	struct completion done;
+	struct list_head chips;
+
+	u8 *page_buf;
+	u32 *oob_buf;
+	u32 page_buf_size;
+	u32 oob_buf_size;
+
+	unsigned long assigned_cs;
+};
+
+static inline struct rk_nfc_nand_chip *rk_nfc_to_rknand(struct nand_chip *chip)
+{
+	return container_of(chip, struct rk_nfc_nand_chip, chip);
+}
+
+static inline u8 *rk_nfc_buf_to_data_ptr(struct nand_chip *chip, const u8 *p, int i)
+{
+	return (u8 *)p + i * chip->ecc.size;
+}
+
+static inline u8 *rk_nfc_buf_to_oob_ptr(struct nand_chip *chip, int i)
+{
+	u8 *poi;
+
+	poi = chip->oob_poi + i * NFC_SYS_DATA_SIZE;
+
+	return poi;
+}
+
+static inline u8 *rk_nfc_buf_to_oob_ecc_ptr(struct nand_chip *chip, int i)
+{
+	struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip);
+	u8 *poi;
+
+	poi = chip->oob_poi + rknand->metadata_size + chip->ecc.bytes * i;
+
+	return poi;
+}
+
+static inline int rk_nfc_data_len(struct nand_chip *chip)
+{
+	return chip->ecc.size + chip->ecc.bytes + NFC_SYS_DATA_SIZE;
+}
+
+static inline u8 *rk_nfc_data_ptr(struct nand_chip *chip, int i)
+{
+	struct rk_nfc *nfc = nand_get_controller_data(chip);
+
+	return nfc->page_buf + i * rk_nfc_data_len(chip);
+}
+
+static inline u8 *rk_nfc_oob_ptr(struct nand_chip *chip, int i)
+{
+	struct rk_nfc *nfc = nand_get_controller_data(chip);
+
+	return nfc->page_buf + i * rk_nfc_data_len(chip) + chip->ecc.size;
+}
+
+static int rk_nfc_hw_ecc_setup(struct nand_chip *chip, u32 strength)
+{
+	struct rk_nfc *nfc = nand_get_controller_data(chip);
+	u32 reg, i;
+
+	for (i = 0; i < NFC_ECC_MAX_MODES; i++) {
+		if (strength == nfc->cfg->ecc_strengths[i]) {
+			reg = nfc->cfg->ecc_cfgs[i];
+			break;
+		}
+	}
+
+	if (i >= NFC_ECC_MAX_MODES)
+		return -EINVAL;
+
+	writel(reg, nfc->regs + nfc->cfg->bchctl_off);
+
+	/* Save chip ECC setting */
+	nfc->cur_ecc = strength;
+
+	return 0;
+}
+
+static void rk_nfc_select_chip(struct nand_chip *chip, int cs)
+{
+	struct rk_nfc *nfc = nand_get_controller_data(chip);
+	struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip);
+	struct nand_ecc_ctrl *ecc = &chip->ecc;
+	u32 val;
+
+	if (cs < 0) {
+		nfc->selected_bank = -1;
+		/* Deselect the currently selected target. */
+		val = readl_relaxed(nfc->regs + NFC_FMCTL);
+		val &= ~FMCTL_CE_SEL_M;
+		writel(val, nfc->regs + NFC_FMCTL);
+		return;
+	}
+
+	nfc->selected_bank = rknand->sels[cs];
+	nfc->band_offset = NFC_BANK + nfc->selected_bank * NFC_BANK_STEP;
+
+	val = readl_relaxed(nfc->regs + NFC_FMCTL);
+	val &= ~FMCTL_CE_SEL_M;
+	val |= FMCTL_CE_SEL(nfc->selected_bank);
+
+	writel(val, nfc->regs + NFC_FMCTL);
+
+	/*
+	 * Compare current chip timing with selected chip timing and
+	 * change if needed.
+	 */
+	if (nfc->cur_timing != rknand->timing) {
+		writel(rknand->timing, nfc->regs + NFC_FMWAIT);
+		nfc->cur_timing = rknand->timing;
+	}
+
+	/*
+	 * Compare current chip ECC setting with selected chip ECC setting and
+	 * change if needed.
+	 */
+	if (nfc->cur_ecc != ecc->strength)
+		rk_nfc_hw_ecc_setup(chip, ecc->strength);
+}
+
+static inline int rk_nfc_wait_ioready(struct rk_nfc *nfc)
+{
+	int rc;
+	u32 val;
+
+	rc = readl_relaxed_poll_timeout(nfc->regs + NFC_FMCTL, val,
+					val & FMCTL_RDY, 10, NFC_TIMEOUT);
+
+	return rc;
+}
+
+static void rk_nfc_read_buf(struct rk_nfc *nfc, u8 *buf, int len)
+{
+	int i;
+
+	for (i = 0; i < len; i++)
+		buf[i] = readb_relaxed(nfc->regs + nfc->band_offset +
+				       BANK_DATA);
+}
+
+static void rk_nfc_write_buf(struct rk_nfc *nfc, const u8 *buf, int len)
+{
+	int i;
+
+	for (i = 0; i < len; i++)
+		writeb(buf[i], nfc->regs + nfc->band_offset + BANK_DATA);
+}
+
+static int rk_nfc_cmd(struct nand_chip *chip,
+		      const struct nand_subop *subop)
+{
+	struct rk_nfc *nfc = nand_get_controller_data(chip);
+	unsigned int i, j, remaining, start;
+	int reg_offset = nfc->band_offset;
+	u8 *inbuf = NULL;
+	const u8 *outbuf;
+	u32 cnt = 0;
+	int ret = 0;
+
+	for (i = 0; i < subop->ninstrs; i++) {
+		const struct nand_op_instr *instr = &subop->instrs[i];
+
+		switch (instr->type) {
+		case NAND_OP_CMD_INSTR:
+			writeb(instr->ctx.cmd.opcode,
+			       nfc->regs + reg_offset + BANK_CMD);
+			break;
+
+		case NAND_OP_ADDR_INSTR:
+			remaining = nand_subop_get_num_addr_cyc(subop, i);
+			start = nand_subop_get_addr_start_off(subop, i);
+
+			for (j = 0; j < 8 && j + start < remaining; j++)
+				writeb(instr->ctx.addr.addrs[j + start],
+				       nfc->regs + reg_offset + BANK_ADDR);
+			break;
+
+		case NAND_OP_DATA_IN_INSTR:
+		case NAND_OP_DATA_OUT_INSTR:
+			start = nand_subop_get_data_start_off(subop, i);
+			cnt = nand_subop_get_data_len(subop, i);
+
+			if (instr->type == NAND_OP_DATA_OUT_INSTR) {
+				outbuf = instr->ctx.data.buf.out + start;
+				rk_nfc_write_buf(nfc, outbuf, cnt);
+			} else {
+				inbuf = instr->ctx.data.buf.in + start;
+				rk_nfc_read_buf(nfc, inbuf, cnt);
+			}
+			break;
+
+		case NAND_OP_WAITRDY_INSTR:
+			if (rk_nfc_wait_ioready(nfc) < 0) {
+				ret = -ETIMEDOUT;
+				dev_err(nfc->dev, "IO not ready\n");
+			}
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static const struct nand_op_parser rk_nfc_op_parser = NAND_OP_PARSER(
+	NAND_OP_PARSER_PATTERN(
+		rk_nfc_cmd,
+		NAND_OP_PARSER_PAT_CMD_ELEM(true),
+		NAND_OP_PARSER_PAT_ADDR_ELEM(true, MAX_ADDRESS_CYC),
+		NAND_OP_PARSER_PAT_CMD_ELEM(true),
+		NAND_OP_PARSER_PAT_WAITRDY_ELEM(true),
+		NAND_OP_PARSER_PAT_DATA_IN_ELEM(true, MAX_DATA_SIZE)),
+	NAND_OP_PARSER_PATTERN(
+		rk_nfc_cmd,
+		NAND_OP_PARSER_PAT_CMD_ELEM(true),
+		NAND_OP_PARSER_PAT_ADDR_ELEM(true, MAX_ADDRESS_CYC),
+		NAND_OP_PARSER_PAT_DATA_OUT_ELEM(true, MAX_DATA_SIZE),
+		NAND_OP_PARSER_PAT_CMD_ELEM(true),
+		NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)),
+);
+
+static int rk_nfc_exec_op(struct nand_chip *chip,
+			  const struct nand_operation *op,
+			  bool check_only)
+{
+	if (!check_only)
+		rk_nfc_select_chip(chip, op->cs);
+
+	return nand_op_parser_exec_op(chip, &rk_nfc_op_parser, op,
+				      check_only);
+}
+
+static int rk_nfc_setup_interface(struct nand_chip *chip, int target,
+				  const struct nand_interface_config *conf)
+{
+	struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip);
+	struct rk_nfc *nfc = nand_get_controller_data(chip);
+	const struct nand_sdr_timings *timings;
+	u32 rate, tc2rw, trwpw, trw2c;
+	u32 temp;
+
+	if (target < 0)
+		return 0;
+
+	timings = nand_get_sdr_timings(conf);
+	if (IS_ERR(timings))
+		return -EOPNOTSUPP;
+
+	if (IS_ERR(nfc->nfc_clk))
+		rate = clk_get_rate(nfc->ahb_clk);
+	else
+		rate = clk_get_rate(nfc->nfc_clk);
+
+	/* Turn clock rate into kHz. */
+	rate /= 1000;
+
+	tc2rw = 1;
+	trw2c = 1;
+
+	trwpw = max(timings->tWC_min, timings->tRC_min) / 1000;
+	trwpw = DIV_ROUND_UP(trwpw * rate, 1000000);
+
+	temp = timings->tREA_max / 1000;
+	temp = DIV_ROUND_UP(temp * rate, 1000000);
+
+	if (trwpw < temp)
+		trwpw = temp;
+
+	/*
+	 * ACCON: access timing control register
+	 * -------------------------------------
+	 * 31:18: reserved
+	 * 17:12: csrw, clock cycles from the falling edge of CSn to the
+	 *	  falling edge of RDn or WRn
+	 * 11:11: reserved
+	 * 10:05: rwpw, the width of RDn or WRn in processor clock cycles
+	 * 04:00: rwcs, clock cycles from the rising edge of RDn or WRn to the
+	 *	  rising edge of CSn
+	 */
+
+	/* Save chip timing */
+	rknand->timing = ACCTIMING(tc2rw, trwpw, trw2c);
+
+	return 0;
+}
+
+static void rk_nfc_xfer_start(struct rk_nfc *nfc, u8 rw, u8 n_KB,
+			      dma_addr_t dma_data, dma_addr_t dma_oob)
+{
+	u32 dma_reg, fl_reg, bch_reg;
+
+	dma_reg = DMA_ST | ((!rw) << DMA_WR) | DMA_EN | (2 << DMA_AHB_SIZE) |
+		  (7 << DMA_BURST_SIZE) | (16 << DMA_INC_NUM);
+
+	fl_reg = (rw << FLCTL_WR) | FLCTL_XFER_EN | FLCTL_ACORRECT |
+		 (n_KB << FLCTL_XFER_SECTOR) | FLCTL_TOG_FIX;
+
+	if (nfc->cfg->type == NFC_V6 || nfc->cfg->type == NFC_V8) {
+		bch_reg = readl_relaxed(nfc->regs + nfc->cfg->bchctl_off);
+		bch_reg = (bch_reg & (~BCHCTL_BANK_M)) |
+			  (nfc->selected_bank << BCHCTL_BANK);
+		writel(bch_reg, nfc->regs + nfc->cfg->bchctl_off);
+	}
+
+	writel(dma_reg, nfc->regs + nfc->cfg->dma_cfg_off);
+	writel((u32)dma_data, nfc->regs + nfc->cfg->dma_data_buf_off);
+	writel((u32)dma_oob, nfc->regs + nfc->cfg->dma_oob_buf_off);
+	writel(fl_reg, nfc->regs + nfc->cfg->flctl_off);
+	fl_reg |= FLCTL_XFER_ST;
+	writel(fl_reg, nfc->regs + nfc->cfg->flctl_off);
+}
+
static int rk_nfc_wait_for_xfer_done(struct rk_nfc *nfc) 496 + { 497 + void __iomem *ptr; 498 + u32 reg; 499 + 500 + ptr = nfc->regs + nfc->cfg->flctl_off; 501 + 502 + return readl_relaxed_poll_timeout(ptr, reg, 503 + reg & FLCTL_XFER_READY, 504 + 10, NFC_TIMEOUT); 505 + } 506 + 507 + static int rk_nfc_write_page_raw(struct nand_chip *chip, const u8 *buf, 508 + int oob_on, int page) 509 + { 510 + struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip); 511 + struct rk_nfc *nfc = nand_get_controller_data(chip); 512 + struct mtd_info *mtd = nand_to_mtd(chip); 513 + struct nand_ecc_ctrl *ecc = &chip->ecc; 514 + int i, pages_per_blk; 515 + 516 + pages_per_blk = mtd->erasesize / mtd->writesize; 517 + if ((chip->options & NAND_IS_BOOT_MEDIUM) && 518 + (page < (pages_per_blk * rknand->boot_blks)) && 519 + rknand->boot_ecc != ecc->strength) { 520 + /* 521 + * There's currently no method to notify the MTD framework that 522 + * a different ECC strength is in use for the boot blocks. 523 + */ 524 + return -EIO; 525 + } 526 + 527 + if (!buf) 528 + memset(nfc->page_buf, 0xff, mtd->writesize + mtd->oobsize); 529 + 530 + for (i = 0; i < ecc->steps; i++) { 531 + /* Copy data to the NFC buffer. */ 532 + if (buf) 533 + memcpy(rk_nfc_data_ptr(chip, i), 534 + rk_nfc_buf_to_data_ptr(chip, buf, i), 535 + ecc->size); 536 + /* 537 + * The first four bytes of OOB are reserved for the 538 + * boot ROM. In some debugging cases, such as with a 539 + * read, erase and write back test these 4 bytes stored 540 + * in OOB also need to be written back. 541 + * 542 + * The function nand_block_bad detects bad blocks like: 543 + * 544 + * bad = chip->oob_poi[chip->badblockpos]; 545 + * 546 + * chip->badblockpos == 0 for a large page NAND Flash, 547 + * so chip->oob_poi[0] is the bad block mask (BBM). 548 + * 549 + * The OOB data layout on the NFC is: 550 + * 551 + * PA0 PA1 PA2 PA3 | BBM OOB1 OOB2 OOB3 | ... 552 + * 553 + * or 554 + * 555 + * 0xFF 0xFF 0xFF 0xFF | BBM OOB1 OOB2 OOB3 | ... 
556 + * 557 + * The code here just swaps the first 4 bytes with the last 558 + * 4 bytes without losing any data. 559 + * 560 + * The chip->oob_poi data layout: 561 + * 562 + * BBM OOB1 OOB2 OOB3 |......| PA0 PA1 PA2 PA3 563 + * 564 + * The rk_nfc_ooblayout_free() function already has reserved 565 + * these 4 bytes with: 566 + * 567 + * oob_region->offset = NFC_SYS_DATA_SIZE + 2; 568 + */ 569 + if (!i) 570 + memcpy(rk_nfc_oob_ptr(chip, i), 571 + rk_nfc_buf_to_oob_ptr(chip, ecc->steps - 1), 572 + NFC_SYS_DATA_SIZE); 573 + else 574 + memcpy(rk_nfc_oob_ptr(chip, i), 575 + rk_nfc_buf_to_oob_ptr(chip, i - 1), 576 + NFC_SYS_DATA_SIZE); 577 + /* Copy ECC data to the NFC buffer. */ 578 + memcpy(rk_nfc_oob_ptr(chip, i) + NFC_SYS_DATA_SIZE, 579 + rk_nfc_buf_to_oob_ecc_ptr(chip, i), 580 + ecc->bytes); 581 + } 582 + 583 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 584 + rk_nfc_write_buf(nfc, buf, mtd->writesize + mtd->oobsize); 585 + return nand_prog_page_end_op(chip); 586 + } 587 + 588 + static int rk_nfc_write_page_hwecc(struct nand_chip *chip, const u8 *buf, 589 + int oob_on, int page) 590 + { 591 + struct mtd_info *mtd = nand_to_mtd(chip); 592 + struct rk_nfc *nfc = nand_get_controller_data(chip); 593 + struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip); 594 + struct nand_ecc_ctrl *ecc = &chip->ecc; 595 + int oob_step = (ecc->bytes > 60) ? NFC_MAX_OOB_PER_STEP : 596 + NFC_MIN_OOB_PER_STEP; 597 + int pages_per_blk = mtd->erasesize / mtd->writesize; 598 + int ret = 0, i, boot_rom_mode = 0; 599 + dma_addr_t dma_data, dma_oob; 600 + u32 reg; 601 + u8 *oob; 602 + 603 + nand_prog_page_begin_op(chip, page, 0, NULL, 0); 604 + 605 + if (buf) 606 + memcpy(nfc->page_buf, buf, mtd->writesize); 607 + else 608 + memset(nfc->page_buf, 0xFF, mtd->writesize); 609 + 610 + /* 611 + * The first blocks (4, 8 or 16 depending on the device) are used 612 + * by the boot ROM and the first 32 bits of OOB need to link to 613 + * the next page address in the same block. 
We can't directly copy 614 + * OOB data from the MTD framework, because this page address 615 + * conflicts for example with the bad block marker (BBM), 616 + * so we shift all OOB data including the BBM with 4 byte positions. 617 + * As a consequence the OOB size available to the MTD framework is 618 + * also reduced with 4 bytes. 619 + * 620 + * PA0 PA1 PA2 PA3 | BBM OOB1 OOB2 OOB3 | ... 621 + * 622 + * If a NAND is not a boot medium or the page is not a boot block, 623 + * the first 4 bytes are left untouched by writing 0xFF to them. 624 + * 625 + * 0xFF 0xFF 0xFF 0xFF | BBM OOB1 OOB2 OOB3 | ... 626 + * 627 + * Configure the ECC algorithm supported by the boot ROM. 628 + */ 629 + if ((page < (pages_per_blk * rknand->boot_blks)) && 630 + (chip->options & NAND_IS_BOOT_MEDIUM)) { 631 + boot_rom_mode = 1; 632 + if (rknand->boot_ecc != ecc->strength) 633 + rk_nfc_hw_ecc_setup(chip, rknand->boot_ecc); 634 + } 635 + 636 + for (i = 0; i < ecc->steps; i++) { 637 + if (!i) { 638 + reg = 0xFFFFFFFF; 639 + } else { 640 + oob = chip->oob_poi + (i - 1) * NFC_SYS_DATA_SIZE; 641 + reg = oob[0] | oob[1] << 8 | oob[2] << 16 | 642 + oob[3] << 24; 643 + } 644 + 645 + if (!i && boot_rom_mode) 646 + reg = (page & (pages_per_blk - 1)) * 4; 647 + 648 + if (nfc->cfg->type == NFC_V9) 649 + nfc->oob_buf[i] = reg; 650 + else 651 + nfc->oob_buf[i * (oob_step / 4)] = reg; 652 + } 653 + 654 + dma_data = dma_map_single(nfc->dev, (void *)nfc->page_buf, 655 + mtd->writesize, DMA_TO_DEVICE); 656 + dma_oob = dma_map_single(nfc->dev, nfc->oob_buf, 657 + ecc->steps * oob_step, 658 + DMA_TO_DEVICE); 659 + 660 + reinit_completion(&nfc->done); 661 + writel(INT_DMA, nfc->regs + nfc->cfg->int_en_off); 662 + 663 + rk_nfc_xfer_start(nfc, NFC_WRITE, ecc->steps, dma_data, 664 + dma_oob); 665 + ret = wait_for_completion_timeout(&nfc->done, 666 + msecs_to_jiffies(100)); 667 + if (!ret) 668 + dev_warn(nfc->dev, "write: wait dma done timeout.\n"); 669 + /* 670 + * Whether the DMA transfer is completed or not. 
The driver 671 + * needs to check the NFC`s status register to see if the data 672 + * transfer was completed. 673 + */ 674 + ret = rk_nfc_wait_for_xfer_done(nfc); 675 + 676 + dma_unmap_single(nfc->dev, dma_data, mtd->writesize, 677 + DMA_TO_DEVICE); 678 + dma_unmap_single(nfc->dev, dma_oob, ecc->steps * oob_step, 679 + DMA_TO_DEVICE); 680 + 681 + if (boot_rom_mode && rknand->boot_ecc != ecc->strength) 682 + rk_nfc_hw_ecc_setup(chip, ecc->strength); 683 + 684 + if (ret) { 685 + dev_err(nfc->dev, "write: wait transfer done timeout.\n"); 686 + return -ETIMEDOUT; 687 + } 688 + 689 + return nand_prog_page_end_op(chip); 690 + } 691 + 692 + static int rk_nfc_write_oob(struct nand_chip *chip, int page) 693 + { 694 + return rk_nfc_write_page_hwecc(chip, NULL, 1, page); 695 + } 696 + 697 + static int rk_nfc_read_page_raw(struct nand_chip *chip, u8 *buf, int oob_on, 698 + int page) 699 + { 700 + struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip); 701 + struct rk_nfc *nfc = nand_get_controller_data(chip); 702 + struct mtd_info *mtd = nand_to_mtd(chip); 703 + struct nand_ecc_ctrl *ecc = &chip->ecc; 704 + int i, pages_per_blk; 705 + 706 + pages_per_blk = mtd->erasesize / mtd->writesize; 707 + if ((chip->options & NAND_IS_BOOT_MEDIUM) && 708 + (page < (pages_per_blk * rknand->boot_blks)) && 709 + rknand->boot_ecc != ecc->strength) { 710 + /* 711 + * There's currently no method to notify the MTD framework that 712 + * a different ECC strength is in use for the boot blocks. 713 + */ 714 + return -EIO; 715 + } 716 + 717 + nand_read_page_op(chip, page, 0, NULL, 0); 718 + rk_nfc_read_buf(nfc, nfc->page_buf, mtd->writesize + mtd->oobsize); 719 + for (i = 0; i < ecc->steps; i++) { 720 + /* 721 + * The first four bytes of OOB are reserved for the 722 + * boot ROM. In some debugging cases, such as with a read, 723 + * erase and write back test, these 4 bytes also must be 724 + * saved somewhere, otherwise this information will be 725 + * lost during a write back. 
726 + */ 727 + if (!i) 728 + memcpy(rk_nfc_buf_to_oob_ptr(chip, ecc->steps - 1), 729 + rk_nfc_oob_ptr(chip, i), 730 + NFC_SYS_DATA_SIZE); 731 + else 732 + memcpy(rk_nfc_buf_to_oob_ptr(chip, i - 1), 733 + rk_nfc_oob_ptr(chip, i), 734 + NFC_SYS_DATA_SIZE); 735 + 736 + /* Copy ECC data from the NFC buffer. */ 737 + memcpy(rk_nfc_buf_to_oob_ecc_ptr(chip, i), 738 + rk_nfc_oob_ptr(chip, i) + NFC_SYS_DATA_SIZE, 739 + ecc->bytes); 740 + 741 + /* Copy data from the NFC buffer. */ 742 + if (buf) 743 + memcpy(rk_nfc_buf_to_data_ptr(chip, buf, i), 744 + rk_nfc_data_ptr(chip, i), 745 + ecc->size); 746 + } 747 + 748 + return 0; 749 + } 750 + 751 + static int rk_nfc_read_page_hwecc(struct nand_chip *chip, u8 *buf, int oob_on, 752 + int page) 753 + { 754 + struct mtd_info *mtd = nand_to_mtd(chip); 755 + struct rk_nfc *nfc = nand_get_controller_data(chip); 756 + struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip); 757 + struct nand_ecc_ctrl *ecc = &chip->ecc; 758 + int oob_step = (ecc->bytes > 60) ? NFC_MAX_OOB_PER_STEP : 759 + NFC_MIN_OOB_PER_STEP; 760 + int pages_per_blk = mtd->erasesize / mtd->writesize; 761 + dma_addr_t dma_data, dma_oob; 762 + int ret = 0, i, cnt, boot_rom_mode = 0; 763 + int max_bitflips = 0, bch_st, ecc_fail = 0; 764 + u8 *oob; 765 + u32 tmp; 766 + 767 + nand_read_page_op(chip, page, 0, NULL, 0); 768 + 769 + dma_data = dma_map_single(nfc->dev, nfc->page_buf, 770 + mtd->writesize, 771 + DMA_FROM_DEVICE); 772 + dma_oob = dma_map_single(nfc->dev, nfc->oob_buf, 773 + ecc->steps * oob_step, 774 + DMA_FROM_DEVICE); 775 + 776 + /* 777 + * The first blocks (4, 8 or 16 depending on the device) 778 + * are used by the boot ROM. 779 + * Configure the ECC algorithm supported by the boot ROM. 
780 + */ 781 + if ((page < (pages_per_blk * rknand->boot_blks)) && 782 + (chip->options & NAND_IS_BOOT_MEDIUM)) { 783 + boot_rom_mode = 1; 784 + if (rknand->boot_ecc != ecc->strength) 785 + rk_nfc_hw_ecc_setup(chip, rknand->boot_ecc); 786 + } 787 + 788 + reinit_completion(&nfc->done); 789 + writel(INT_DMA, nfc->regs + nfc->cfg->int_en_off); 790 + rk_nfc_xfer_start(nfc, NFC_READ, ecc->steps, dma_data, 791 + dma_oob); 792 + ret = wait_for_completion_timeout(&nfc->done, 793 + msecs_to_jiffies(100)); 794 + if (!ret) 795 + dev_warn(nfc->dev, "read: wait dma done timeout.\n"); 796 + /* 797 + * Whether the DMA transfer is completed or not. The driver 798 + * needs to check the NFC`s status register to see if the data 799 + * transfer was completed. 800 + */ 801 + ret = rk_nfc_wait_for_xfer_done(nfc); 802 + 803 + dma_unmap_single(nfc->dev, dma_data, mtd->writesize, 804 + DMA_FROM_DEVICE); 805 + dma_unmap_single(nfc->dev, dma_oob, ecc->steps * oob_step, 806 + DMA_FROM_DEVICE); 807 + 808 + if (ret) { 809 + ret = -ETIMEDOUT; 810 + dev_err(nfc->dev, "read: wait transfer done timeout.\n"); 811 + goto timeout_err; 812 + } 813 + 814 + for (i = 1; i < ecc->steps; i++) { 815 + oob = chip->oob_poi + (i - 1) * NFC_SYS_DATA_SIZE; 816 + if (nfc->cfg->type == NFC_V9) 817 + tmp = nfc->oob_buf[i]; 818 + else 819 + tmp = nfc->oob_buf[i * (oob_step / 4)]; 820 + *oob++ = (u8)tmp; 821 + *oob++ = (u8)(tmp >> 8); 822 + *oob++ = (u8)(tmp >> 16); 823 + *oob++ = (u8)(tmp >> 24); 824 + } 825 + 826 + for (i = 0; i < (ecc->steps / 2); i++) { 827 + bch_st = readl_relaxed(nfc->regs + 828 + nfc->cfg->bch_st_off + i * 4); 829 + if (bch_st & BIT(nfc->cfg->ecc0.err_flag_bit) || 830 + bch_st & BIT(nfc->cfg->ecc1.err_flag_bit)) { 831 + mtd->ecc_stats.failed++; 832 + ecc_fail = 1; 833 + } else { 834 + cnt = ECC_ERR_CNT(bch_st, nfc->cfg->ecc0); 835 + mtd->ecc_stats.corrected += cnt; 836 + max_bitflips = max_t(u32, max_bitflips, cnt); 837 + 838 + cnt = ECC_ERR_CNT(bch_st, nfc->cfg->ecc1); 839 + 
mtd->ecc_stats.corrected += cnt; 840 + max_bitflips = max_t(u32, max_bitflips, cnt); 841 + } 842 + } 843 + 844 + if (buf) 845 + memcpy(buf, nfc->page_buf, mtd->writesize); 846 + 847 + timeout_err: 848 + if (boot_rom_mode && rknand->boot_ecc != ecc->strength) 849 + rk_nfc_hw_ecc_setup(chip, ecc->strength); 850 + 851 + if (ret) 852 + return ret; 853 + 854 + if (ecc_fail) { 855 + dev_err(nfc->dev, "read page: %x ecc error!\n", page); 856 + return 0; 857 + } 858 + 859 + return max_bitflips; 860 + } 861 + 862 + static int rk_nfc_read_oob(struct nand_chip *chip, int page) 863 + { 864 + return rk_nfc_read_page_hwecc(chip, NULL, 1, page); 865 + } 866 + 867 + static inline void rk_nfc_hw_init(struct rk_nfc *nfc) 868 + { 869 + /* Disable flash wp. */ 870 + writel(FMCTL_WP, nfc->regs + NFC_FMCTL); 871 + /* Config default timing 40ns at 150 Mhz NFC clock. */ 872 + writel(0x1081, nfc->regs + NFC_FMWAIT); 873 + nfc->cur_timing = 0x1081; 874 + /* Disable randomizer and DMA. */ 875 + writel(0, nfc->regs + nfc->cfg->randmz_off); 876 + writel(0, nfc->regs + nfc->cfg->dma_cfg_off); 877 + writel(FLCTL_RST, nfc->regs + nfc->cfg->flctl_off); 878 + } 879 + 880 + static irqreturn_t rk_nfc_irq(int irq, void *id) 881 + { 882 + struct rk_nfc *nfc = id; 883 + u32 sta, ien; 884 + 885 + sta = readl_relaxed(nfc->regs + nfc->cfg->int_st_off); 886 + ien = readl_relaxed(nfc->regs + nfc->cfg->int_en_off); 887 + 888 + if (!(sta & ien)) 889 + return IRQ_NONE; 890 + 891 + writel(sta, nfc->regs + nfc->cfg->int_clr_off); 892 + writel(~sta & ien, nfc->regs + nfc->cfg->int_en_off); 893 + 894 + complete(&nfc->done); 895 + 896 + return IRQ_HANDLED; 897 + } 898 + 899 + static int rk_nfc_enable_clks(struct device *dev, struct rk_nfc *nfc) 900 + { 901 + int ret; 902 + 903 + if (!IS_ERR(nfc->nfc_clk)) { 904 + ret = clk_prepare_enable(nfc->nfc_clk); 905 + if (ret) { 906 + dev_err(dev, "failed to enable NFC clk\n"); 907 + return ret; 908 + } 909 + } 910 + 911 + ret = clk_prepare_enable(nfc->ahb_clk); 912 + if 
(ret) { 913 + dev_err(dev, "failed to enable ahb clk\n"); 914 + if (!IS_ERR(nfc->nfc_clk)) 915 + clk_disable_unprepare(nfc->nfc_clk); 916 + return ret; 917 + } 918 + 919 + return 0; 920 + } 921 + 922 + static void rk_nfc_disable_clks(struct rk_nfc *nfc) 923 + { 924 + if (!IS_ERR(nfc->nfc_clk)) 925 + clk_disable_unprepare(nfc->nfc_clk); 926 + clk_disable_unprepare(nfc->ahb_clk); 927 + } 928 + 929 + static int rk_nfc_ooblayout_free(struct mtd_info *mtd, int section, 930 + struct mtd_oob_region *oob_region) 931 + { 932 + struct nand_chip *chip = mtd_to_nand(mtd); 933 + struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip); 934 + 935 + if (section) 936 + return -ERANGE; 937 + 938 + /* 939 + * The beginning of the OOB area stores the reserved data for the NFC, 940 + * the size of the reserved data is NFC_SYS_DATA_SIZE bytes. 941 + */ 942 + oob_region->length = rknand->metadata_size - NFC_SYS_DATA_SIZE - 2; 943 + oob_region->offset = NFC_SYS_DATA_SIZE + 2; 944 + 945 + return 0; 946 + } 947 + 948 + static int rk_nfc_ooblayout_ecc(struct mtd_info *mtd, int section, 949 + struct mtd_oob_region *oob_region) 950 + { 951 + struct nand_chip *chip = mtd_to_nand(mtd); 952 + struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip); 953 + 954 + if (section) 955 + return -ERANGE; 956 + 957 + oob_region->length = mtd->oobsize - rknand->metadata_size; 958 + oob_region->offset = rknand->metadata_size; 959 + 960 + return 0; 961 + } 962 + 963 + static const struct mtd_ooblayout_ops rk_nfc_ooblayout_ops = { 964 + .free = rk_nfc_ooblayout_free, 965 + .ecc = rk_nfc_ooblayout_ecc, 966 + }; 967 + 968 + static int rk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd) 969 + { 970 + struct nand_chip *chip = mtd_to_nand(mtd); 971 + struct rk_nfc *nfc = nand_get_controller_data(chip); 972 + struct nand_ecc_ctrl *ecc = &chip->ecc; 973 + const u8 *strengths = nfc->cfg->ecc_strengths; 974 + u8 max_strength, nfc_max_strength; 975 + int i; 976 + 977 + nfc_max_strength = 
nfc->cfg->ecc_strengths[0]; 978 + /* If optional dt settings not present. */ 979 + if (!ecc->size || !ecc->strength || 980 + ecc->strength > nfc_max_strength) { 981 + chip->ecc.size = 1024; 982 + ecc->steps = mtd->writesize / ecc->size; 983 + 984 + /* 985 + * HW ECC always requests the number of ECC bytes per 1024 byte 986 + * blocks. The first 4 OOB bytes are reserved for sys data. 987 + */ 988 + max_strength = ((mtd->oobsize / ecc->steps) - 4) * 8 / 989 + fls(8 * 1024); 990 + if (max_strength > nfc_max_strength) 991 + max_strength = nfc_max_strength; 992 + 993 + for (i = 0; i < 4; i++) { 994 + if (max_strength >= strengths[i]) 995 + break; 996 + } 997 + 998 + if (i >= 4) { 999 + dev_err(nfc->dev, "unsupported ECC strength\n"); 1000 + return -EOPNOTSUPP; 1001 + } 1002 + 1003 + ecc->strength = strengths[i]; 1004 + } 1005 + ecc->steps = mtd->writesize / ecc->size; 1006 + ecc->bytes = DIV_ROUND_UP(ecc->strength * fls(8 * chip->ecc.size), 8); 1007 + 1008 + return 0; 1009 + } 1010 + 1011 + static int rk_nfc_attach_chip(struct nand_chip *chip) 1012 + { 1013 + struct mtd_info *mtd = nand_to_mtd(chip); 1014 + struct device *dev = mtd->dev.parent; 1015 + struct rk_nfc *nfc = nand_get_controller_data(chip); 1016 + struct rk_nfc_nand_chip *rknand = rk_nfc_to_rknand(chip); 1017 + struct nand_ecc_ctrl *ecc = &chip->ecc; 1018 + int new_page_len, new_oob_len; 1019 + void *buf; 1020 + int ret; 1021 + 1022 + if (chip->options & NAND_BUSWIDTH_16) { 1023 + dev_err(dev, "16 bits bus width not supported"); 1024 + return -EINVAL; 1025 + } 1026 + 1027 + if (ecc->engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 1028 + return 0; 1029 + 1030 + ret = rk_nfc_ecc_init(dev, mtd); 1031 + if (ret) 1032 + return ret; 1033 + 1034 + rknand->metadata_size = NFC_SYS_DATA_SIZE * ecc->steps; 1035 + 1036 + if (rknand->metadata_size < NFC_SYS_DATA_SIZE + 2) { 1037 + dev_err(dev, 1038 + "driver needs at least %d bytes of meta data\n", 1039 + NFC_SYS_DATA_SIZE + 2); 1040 + return -EIO; 1041 + } 1042 + 1043 + 
/* Check buffer first, avoid duplicate alloc buffer. */ 1044 + new_page_len = mtd->writesize + mtd->oobsize; 1045 + if (nfc->page_buf && new_page_len > nfc->page_buf_size) { 1046 + buf = krealloc(nfc->page_buf, new_page_len, 1047 + GFP_KERNEL | GFP_DMA); 1048 + if (!buf) 1049 + return -ENOMEM; 1050 + nfc->page_buf = buf; 1051 + nfc->page_buf_size = new_page_len; 1052 + } 1053 + 1054 + new_oob_len = ecc->steps * NFC_MAX_OOB_PER_STEP; 1055 + if (nfc->oob_buf && new_oob_len > nfc->oob_buf_size) { 1056 + buf = krealloc(nfc->oob_buf, new_oob_len, 1057 + GFP_KERNEL | GFP_DMA); 1058 + if (!buf) { 1059 + kfree(nfc->page_buf); 1060 + nfc->page_buf = NULL; 1061 + return -ENOMEM; 1062 + } 1063 + nfc->oob_buf = buf; 1064 + nfc->oob_buf_size = new_oob_len; 1065 + } 1066 + 1067 + if (!nfc->page_buf) { 1068 + nfc->page_buf = kzalloc(new_page_len, GFP_KERNEL | GFP_DMA); 1069 + if (!nfc->page_buf) 1070 + return -ENOMEM; 1071 + nfc->page_buf_size = new_page_len; 1072 + } 1073 + 1074 + if (!nfc->oob_buf) { 1075 + nfc->oob_buf = kzalloc(new_oob_len, GFP_KERNEL | GFP_DMA); 1076 + if (!nfc->oob_buf) { 1077 + kfree(nfc->page_buf); 1078 + nfc->page_buf = NULL; 1079 + return -ENOMEM; 1080 + } 1081 + nfc->oob_buf_size = new_oob_len; 1082 + } 1083 + 1084 + chip->ecc.write_page_raw = rk_nfc_write_page_raw; 1085 + chip->ecc.write_page = rk_nfc_write_page_hwecc; 1086 + chip->ecc.write_oob = rk_nfc_write_oob; 1087 + 1088 + chip->ecc.read_page_raw = rk_nfc_read_page_raw; 1089 + chip->ecc.read_page = rk_nfc_read_page_hwecc; 1090 + chip->ecc.read_oob = rk_nfc_read_oob; 1091 + 1092 + return 0; 1093 + } 1094 + 1095 + static const struct nand_controller_ops rk_nfc_controller_ops = { 1096 + .attach_chip = rk_nfc_attach_chip, 1097 + .exec_op = rk_nfc_exec_op, 1098 + .setup_interface = rk_nfc_setup_interface, 1099 + }; 1100 + 1101 + static int rk_nfc_nand_chip_init(struct device *dev, struct rk_nfc *nfc, 1102 + struct device_node *np) 1103 + { 1104 + struct rk_nfc_nand_chip *rknand; 1105 + struct 
nand_chip *chip; 1106 + struct mtd_info *mtd; 1107 + int nsels; 1108 + u32 tmp; 1109 + int ret; 1110 + int i; 1111 + 1112 + if (!of_get_property(np, "reg", &nsels)) 1113 + return -ENODEV; 1114 + nsels /= sizeof(u32); 1115 + if (!nsels || nsels > NFC_MAX_NSELS) { 1116 + dev_err(dev, "invalid reg property size %d\n", nsels); 1117 + return -EINVAL; 1118 + } 1119 + 1120 + rknand = devm_kzalloc(dev, sizeof(*rknand) + nsels * sizeof(u8), 1121 + GFP_KERNEL); 1122 + if (!rknand) 1123 + return -ENOMEM; 1124 + 1125 + rknand->nsels = nsels; 1126 + for (i = 0; i < nsels; i++) { 1127 + ret = of_property_read_u32_index(np, "reg", i, &tmp); 1128 + if (ret) { 1129 + dev_err(dev, "reg property failure : %d\n", ret); 1130 + return ret; 1131 + } 1132 + 1133 + if (tmp >= NFC_MAX_NSELS) { 1134 + dev_err(dev, "invalid CS: %u\n", tmp); 1135 + return -EINVAL; 1136 + } 1137 + 1138 + if (test_and_set_bit(tmp, &nfc->assigned_cs)) { 1139 + dev_err(dev, "CS %u already assigned\n", tmp); 1140 + return -EINVAL; 1141 + } 1142 + 1143 + rknand->sels[i] = tmp; 1144 + } 1145 + 1146 + chip = &rknand->chip; 1147 + chip->controller = &nfc->controller; 1148 + 1149 + nand_set_flash_node(chip, np); 1150 + 1151 + nand_set_controller_data(chip, nfc); 1152 + 1153 + chip->options |= NAND_USES_DMA | NAND_NO_SUBPAGE_WRITE; 1154 + chip->bbt_options = NAND_BBT_USE_FLASH | NAND_BBT_NO_OOB; 1155 + 1156 + /* Set default mode in case dt entry is missing. 
*/ 1157 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 1158 + 1159 + mtd = nand_to_mtd(chip); 1160 + mtd->owner = THIS_MODULE; 1161 + mtd->dev.parent = dev; 1162 + 1163 + if (!mtd->name) { 1164 + dev_err(nfc->dev, "NAND label property is mandatory\n"); 1165 + return -EINVAL; 1166 + } 1167 + 1168 + mtd_set_ooblayout(mtd, &rk_nfc_ooblayout_ops); 1169 + rk_nfc_hw_init(nfc); 1170 + ret = nand_scan(chip, nsels); 1171 + if (ret) 1172 + return ret; 1173 + 1174 + if (chip->options & NAND_IS_BOOT_MEDIUM) { 1175 + ret = of_property_read_u32(np, "rockchip,boot-blks", &tmp); 1176 + rknand->boot_blks = ret ? 0 : tmp; 1177 + 1178 + ret = of_property_read_u32(np, "rockchip,boot-ecc-strength", 1179 + &tmp); 1180 + rknand->boot_ecc = ret ? chip->ecc.strength : tmp; 1181 + } 1182 + 1183 + ret = mtd_device_register(mtd, NULL, 0); 1184 + if (ret) { 1185 + dev_err(dev, "MTD parse partition error\n"); 1186 + nand_cleanup(chip); 1187 + return ret; 1188 + } 1189 + 1190 + list_add_tail(&rknand->node, &nfc->chips); 1191 + 1192 + return 0; 1193 + } 1194 + 1195 + static void rk_nfc_chips_cleanup(struct rk_nfc *nfc) 1196 + { 1197 + struct rk_nfc_nand_chip *rknand, *tmp; 1198 + struct nand_chip *chip; 1199 + int ret; 1200 + 1201 + list_for_each_entry_safe(rknand, tmp, &nfc->chips, node) { 1202 + chip = &rknand->chip; 1203 + ret = mtd_device_unregister(nand_to_mtd(chip)); 1204 + WARN_ON(ret); 1205 + nand_cleanup(chip); 1206 + list_del(&rknand->node); 1207 + } 1208 + } 1209 + 1210 + static int rk_nfc_nand_chips_init(struct device *dev, struct rk_nfc *nfc) 1211 + { 1212 + struct device_node *np = dev->of_node, *nand_np; 1213 + int nchips = of_get_child_count(np); 1214 + int ret; 1215 + 1216 + if (!nchips || nchips > NFC_MAX_NSELS) { 1217 + dev_err(nfc->dev, "incorrect number of NAND chips (%d)\n", 1218 + nchips); 1219 + return -EINVAL; 1220 + } 1221 + 1222 + for_each_child_of_node(np, nand_np) { 1223 + ret = rk_nfc_nand_chip_init(dev, nfc, nand_np); 1224 + if (ret) { 1225 + 
of_node_put(nand_np); 1226 + rk_nfc_chips_cleanup(nfc); 1227 + return ret; 1228 + } 1229 + } 1230 + 1231 + return 0; 1232 + } 1233 + 1234 + static struct nfc_cfg nfc_v6_cfg = { 1235 + .type = NFC_V6, 1236 + .ecc_strengths = {60, 40, 24, 16}, 1237 + .ecc_cfgs = { 1238 + 0x00040011, 0x00040001, 0x00000011, 0x00000001, 1239 + }, 1240 + .flctl_off = 0x08, 1241 + .bchctl_off = 0x0C, 1242 + .dma_cfg_off = 0x10, 1243 + .dma_data_buf_off = 0x14, 1244 + .dma_oob_buf_off = 0x18, 1245 + .dma_st_off = 0x1C, 1246 + .bch_st_off = 0x20, 1247 + .randmz_off = 0x150, 1248 + .int_en_off = 0x16C, 1249 + .int_clr_off = 0x170, 1250 + .int_st_off = 0x174, 1251 + .oob0_off = 0x200, 1252 + .oob1_off = 0x230, 1253 + .ecc0 = { 1254 + .err_flag_bit = 2, 1255 + .low = 3, 1256 + .low_mask = 0x1F, 1257 + .low_bn = 5, 1258 + .high = 27, 1259 + .high_mask = 0x1, 1260 + }, 1261 + .ecc1 = { 1262 + .err_flag_bit = 15, 1263 + .low = 16, 1264 + .low_mask = 0x1F, 1265 + .low_bn = 5, 1266 + .high = 29, 1267 + .high_mask = 0x1, 1268 + }, 1269 + }; 1270 + 1271 + static struct nfc_cfg nfc_v8_cfg = { 1272 + .type = NFC_V8, 1273 + .ecc_strengths = {16, 16, 16, 16}, 1274 + .ecc_cfgs = { 1275 + 0x00000001, 0x00000001, 0x00000001, 0x00000001, 1276 + }, 1277 + .flctl_off = 0x08, 1278 + .bchctl_off = 0x0C, 1279 + .dma_cfg_off = 0x10, 1280 + .dma_data_buf_off = 0x14, 1281 + .dma_oob_buf_off = 0x18, 1282 + .dma_st_off = 0x1C, 1283 + .bch_st_off = 0x20, 1284 + .randmz_off = 0x150, 1285 + .int_en_off = 0x16C, 1286 + .int_clr_off = 0x170, 1287 + .int_st_off = 0x174, 1288 + .oob0_off = 0x200, 1289 + .oob1_off = 0x230, 1290 + .ecc0 = { 1291 + .err_flag_bit = 2, 1292 + .low = 3, 1293 + .low_mask = 0x1F, 1294 + .low_bn = 5, 1295 + .high = 27, 1296 + .high_mask = 0x1, 1297 + }, 1298 + .ecc1 = { 1299 + .err_flag_bit = 15, 1300 + .low = 16, 1301 + .low_mask = 0x1F, 1302 + .low_bn = 5, 1303 + .high = 29, 1304 + .high_mask = 0x1, 1305 + }, 1306 + }; 1307 + 1308 + static struct nfc_cfg nfc_v9_cfg = { 1309 + .type = NFC_V9, 1310 
+ .ecc_strengths = {70, 60, 40, 16}, 1311 + .ecc_cfgs = { 1312 + 0x00000001, 0x06000001, 0x04000001, 0x02000001, 1313 + }, 1314 + .flctl_off = 0x10, 1315 + .bchctl_off = 0x20, 1316 + .dma_cfg_off = 0x30, 1317 + .dma_data_buf_off = 0x34, 1318 + .dma_oob_buf_off = 0x38, 1319 + .dma_st_off = 0x3C, 1320 + .bch_st_off = 0x150, 1321 + .randmz_off = 0x208, 1322 + .int_en_off = 0x120, 1323 + .int_clr_off = 0x124, 1324 + .int_st_off = 0x128, 1325 + .oob0_off = 0x200, 1326 + .oob1_off = 0x204, 1327 + .ecc0 = { 1328 + .err_flag_bit = 2, 1329 + .low = 3, 1330 + .low_mask = 0x7F, 1331 + .low_bn = 7, 1332 + .high = 0, 1333 + .high_mask = 0x0, 1334 + }, 1335 + .ecc1 = { 1336 + .err_flag_bit = 18, 1337 + .low = 19, 1338 + .low_mask = 0x7F, 1339 + .low_bn = 7, 1340 + .high = 0, 1341 + .high_mask = 0x0, 1342 + }, 1343 + }; 1344 + 1345 + static const struct of_device_id rk_nfc_id_table[] = { 1346 + { 1347 + .compatible = "rockchip,px30-nfc", 1348 + .data = &nfc_v9_cfg 1349 + }, 1350 + { 1351 + .compatible = "rockchip,rk2928-nfc", 1352 + .data = &nfc_v6_cfg 1353 + }, 1354 + { 1355 + .compatible = "rockchip,rv1108-nfc", 1356 + .data = &nfc_v8_cfg 1357 + }, 1358 + { /* sentinel */ } 1359 + }; 1360 + MODULE_DEVICE_TABLE(of, rk_nfc_id_table); 1361 + 1362 + static int rk_nfc_probe(struct platform_device *pdev) 1363 + { 1364 + struct device *dev = &pdev->dev; 1365 + struct rk_nfc *nfc; 1366 + int ret, irq; 1367 + 1368 + nfc = devm_kzalloc(dev, sizeof(*nfc), GFP_KERNEL); 1369 + if (!nfc) 1370 + return -ENOMEM; 1371 + 1372 + nand_controller_init(&nfc->controller); 1373 + INIT_LIST_HEAD(&nfc->chips); 1374 + nfc->controller.ops = &rk_nfc_controller_ops; 1375 + 1376 + nfc->cfg = of_device_get_match_data(dev); 1377 + nfc->dev = dev; 1378 + 1379 + init_completion(&nfc->done); 1380 + 1381 + nfc->regs = devm_platform_ioremap_resource(pdev, 0); 1382 + if (IS_ERR(nfc->regs)) { 1383 + ret = PTR_ERR(nfc->regs); 1384 + goto release_nfc; 1385 + } 1386 + 1387 + nfc->nfc_clk = devm_clk_get(dev, "nfc"); 1388 
+ if (IS_ERR(nfc->nfc_clk)) { 1389 + dev_dbg(dev, "no NFC clk\n"); 1390 + /* Some earlier models, such as rk3066, have no NFC clk. */ 1391 + } 1392 + 1393 + nfc->ahb_clk = devm_clk_get(dev, "ahb"); 1394 + if (IS_ERR(nfc->ahb_clk)) { 1395 + dev_err(dev, "no ahb clk\n"); 1396 + ret = PTR_ERR(nfc->ahb_clk); 1397 + goto release_nfc; 1398 + } 1399 + 1400 + ret = rk_nfc_enable_clks(dev, nfc); 1401 + if (ret) 1402 + goto release_nfc; 1403 + 1404 + irq = platform_get_irq(pdev, 0); 1405 + if (irq < 0) { 1406 + dev_err(dev, "no NFC irq resource\n"); 1407 + ret = -EINVAL; 1408 + goto clk_disable; 1409 + } 1410 + 1411 + writel(0, nfc->regs + nfc->cfg->int_en_off); 1412 + ret = devm_request_irq(dev, irq, rk_nfc_irq, 0x0, "rk-nand", nfc); 1413 + if (ret) { 1414 + dev_err(dev, "failed to request NFC irq\n"); 1415 + goto clk_disable; 1416 + } 1417 + 1418 + platform_set_drvdata(pdev, nfc); 1419 + 1420 + ret = rk_nfc_nand_chips_init(dev, nfc); 1421 + if (ret) { 1422 + dev_err(dev, "failed to init NAND chips\n"); 1423 + goto clk_disable; 1424 + } 1425 + return 0; 1426 + 1427 + clk_disable: 1428 + rk_nfc_disable_clks(nfc); 1429 + release_nfc: 1430 + return ret; 1431 + } 1432 + 1433 + static int rk_nfc_remove(struct platform_device *pdev) 1434 + { 1435 + struct rk_nfc *nfc = platform_get_drvdata(pdev); 1436 + 1437 + kfree(nfc->page_buf); 1438 + kfree(nfc->oob_buf); 1439 + rk_nfc_chips_cleanup(nfc); 1440 + rk_nfc_disable_clks(nfc); 1441 + 1442 + return 0; 1443 + } 1444 + 1445 + static int __maybe_unused rk_nfc_suspend(struct device *dev) 1446 + { 1447 + struct rk_nfc *nfc = dev_get_drvdata(dev); 1448 + 1449 + rk_nfc_disable_clks(nfc); 1450 + 1451 + return 0; 1452 + } 1453 + 1454 + static int __maybe_unused rk_nfc_resume(struct device *dev) 1455 + { 1456 + struct rk_nfc *nfc = dev_get_drvdata(dev); 1457 + struct rk_nfc_nand_chip *rknand; 1458 + struct nand_chip *chip; 1459 + int ret; 1460 + u32 i; 1461 + 1462 + ret = rk_nfc_enable_clks(dev, nfc); 1463 + if (ret) 1464 + return ret; 1465 + 
1466 + /* Reset NAND chip if VCC was powered off. */ 1467 + list_for_each_entry(rknand, &nfc->chips, node) { 1468 + chip = &rknand->chip; 1469 + for (i = 0; i < rknand->nsels; i++) 1470 + nand_reset(chip, i); 1471 + } 1472 + 1473 + return 0; 1474 + } 1475 + 1476 + static const struct dev_pm_ops rk_nfc_pm_ops = { 1477 + SET_SYSTEM_SLEEP_PM_OPS(rk_nfc_suspend, rk_nfc_resume) 1478 + }; 1479 + 1480 + static struct platform_driver rk_nfc_driver = { 1481 + .probe = rk_nfc_probe, 1482 + .remove = rk_nfc_remove, 1483 + .driver = { 1484 + .name = "rockchip-nfc", 1485 + .of_match_table = rk_nfc_id_table, 1486 + .pm = &rk_nfc_pm_ops, 1487 + }, 1488 + }; 1489 + 1490 + module_platform_driver(rk_nfc_driver); 1491 + 1492 + MODULE_LICENSE("Dual MIT/GPL"); 1493 + MODULE_AUTHOR("Yifeng Zhao <yifeng.zhao@rock-chips.com>"); 1494 + MODULE_DESCRIPTION("Rockchip Nand Flash Controller Driver"); 1495 + MODULE_ALIAS("platform:rockchip-nand-controller");
+3 -2
drivers/mtd/nand/raw/s3c2410.c
···
 
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/mtd/partitions.h>
 
 #include <linux/platform_data/mtd-nand-s3c2410.h>
···
 
 /**
  * struct s3c2410_nand_info - NAND controller state.
- * @mtds: An array of MTD instances on this controoler.
+ * @controller: Base controller structure.
+ * @mtds: An array of MTD instances on this controller.
  * @platform: The platform data for this board.
  * @device: The platform device we bound to.
  * @clk: The clock resource for this controller.
···
  * @clk_rate: The clock rate from @clk.
  * @clk_state: The current clock state.
  * @cpu_type: The exact type of this controller.
+ * @freq_transition: CPUFreq notifier block
  */
 struct s3c2410_nand_info {
 	/* mtd info */
+1 -2
drivers/mtd/nand/raw/sharpsl.c
···
 #include <linux/delay.h>
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/rawnand.h>
-#include <linux/mtd/nand_ecc.h>
 #include <linux/mtd/partitions.h>
 #include <linux/mtd/sharpsl.h>
 #include <linux/interrupt.h>
···
 	chip->ecc.strength = 1;
 	chip->ecc.hwctl = sharpsl_nand_enable_hwecc;
 	chip->ecc.calculate = sharpsl_nand_calculate_ecc;
-	chip->ecc.correct = nand_correct_data;
+	chip->ecc.correct = rawnand_sw_hamming_correct;
 
 	return 0;
 }
+91 -58
drivers/mtd/nand/raw/sunxi_nand.c
··· 51 51 #define NFC_REG_USER_DATA(x) (0x0050 + ((x) * 4)) 52 52 #define NFC_REG_SPARE_AREA 0x00A0 53 53 #define NFC_REG_PAT_ID 0x00A4 54 + #define NFC_REG_MDMA_ADDR 0x00C0 54 55 #define NFC_REG_MDMA_CNT 0x00C4 55 56 #define NFC_RAM0_BASE 0x0400 56 57 #define NFC_RAM1_BASE 0x0800 ··· 183 182 * 184 183 * @node: used to store NAND chips into a list 185 184 * @nand: base NAND chip structure 185 + * @ecc: ECC controller structure 186 186 * @clk_rate: clk_rate required for this NAND chip 187 187 * @timing_cfg: TIMING_CFG register value for this NAND chip 188 188 * @timing_ctl: TIMING_CTL register value for this NAND chip ··· 193 191 struct sunxi_nand_chip { 194 192 struct list_head node; 195 193 struct nand_chip nand; 194 + struct sunxi_nand_hw_ecc *ecc; 196 195 unsigned long clk_rate; 197 196 u32 timing_cfg; 198 197 u32 timing_ctl; ··· 210 207 * NAND Controller capabilities structure: stores NAND controller capabilities 211 208 * for distinction between compatible strings. 212 209 * 213 - * @extra_mbus_conf: Contrary to A10, A10s and A13, accessing internal RAM 210 + * @has_mdma: Use mbus dma mode, otherwise general dma 214 211 * through MBUS on A23/A33 needs extra configuration. 
215 212 * @reg_io_data: I/O data register 216 213 * @dma_maxburst: DMA maxburst 217 214 */ 218 215 struct sunxi_nfc_caps { 219 - bool extra_mbus_conf; 216 + bool has_mdma; 220 217 unsigned int reg_io_data; 221 218 unsigned int dma_maxburst; 222 219 }; ··· 236 233 * controller 237 234 * @complete: a completion object used to wait for NAND controller events 238 235 * @dmac: the DMA channel attached to the NAND controller 236 + * @caps: NAND Controller capabilities 239 237 */ 240 238 struct sunxi_nfc { 241 239 struct nand_controller controller; ··· 367 363 if (!ret) 368 364 return -ENOMEM; 369 365 370 - dmad = dmaengine_prep_slave_sg(nfc->dmac, sg, 1, tdir, DMA_CTRL_ACK); 371 - if (!dmad) { 372 - ret = -EINVAL; 373 - goto err_unmap_buf; 366 + if (!nfc->caps->has_mdma) { 367 + dmad = dmaengine_prep_slave_sg(nfc->dmac, sg, 1, tdir, DMA_CTRL_ACK); 368 + if (!dmad) { 369 + ret = -EINVAL; 370 + goto err_unmap_buf; 371 + } 374 372 } 375 373 376 374 writel(readl(nfc->regs + NFC_REG_CTL) | NFC_RAM_METHOD, 377 375 nfc->regs + NFC_REG_CTL); 378 376 writel(nchunks, nfc->regs + NFC_REG_SECTOR_NUM); 379 377 writel(chunksize, nfc->regs + NFC_REG_CNT); 380 - if (nfc->caps->extra_mbus_conf) 378 + 379 + if (nfc->caps->has_mdma) { 380 + writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_DMA_TYPE_NORMAL, 381 + nfc->regs + NFC_REG_CTL); 381 382 writel(chunksize * nchunks, nfc->regs + NFC_REG_MDMA_CNT); 383 + writel(sg_dma_address(sg), nfc->regs + NFC_REG_MDMA_ADDR); 384 + } else { 385 + dmat = dmaengine_submit(dmad); 382 386 383 - dmat = dmaengine_submit(dmad); 384 - 385 - ret = dma_submit_error(dmat); 386 - if (ret) 387 - goto err_clr_dma_flag; 387 + ret = dma_submit_error(dmat); 388 + if (ret) 389 + goto err_clr_dma_flag; 390 + } 388 391 389 392 return 0; 390 393 ··· 687 676 688 677 static void sunxi_nfc_hw_ecc_enable(struct nand_chip *nand) 689 678 { 679 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 690 680 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 691 - struct sunxi_nand_hw_ecc *data = nand->ecc.priv; 692 681 u32 ecc_ctl; 693 682 694 683 ecc_ctl = readl(nfc->regs + NFC_REG_ECC_CTL); 695 684 ecc_ctl &= ~(NFC_ECC_MODE_MSK | NFC_ECC_PIPELINE | 696 685 NFC_ECC_BLOCK_SIZE_MSK); 697 - ecc_ctl |= NFC_ECC_EN | NFC_ECC_MODE(data->mode) | NFC_ECC_EXCEPTION | 698 - NFC_ECC_PIPELINE; 686 + ecc_ctl |= NFC_ECC_EN | NFC_ECC_MODE(sunxi_nand->ecc->mode) | 687 + NFC_ECC_EXCEPTION | NFC_ECC_PIPELINE; 699 688 700 689 if (nand->ecc.size == 512) 701 690 ecc_ctl |= NFC_ECC_BLOCK_512; ··· 922 911 unsigned int max_bitflips = 0; 923 912 int ret, i, raw_mode = 0; 924 913 struct scatterlist sg; 925 - u32 status; 914 + u32 status, wait; 926 915 927 916 ret = sunxi_nfc_wait_cmd_fifo_empty(nfc); 928 917 if (ret) ··· 940 929 writel((NAND_CMD_RNDOUTSTART << 16) | (NAND_CMD_RNDOUT << 8) | 941 930 NAND_CMD_READSTART, nfc->regs + NFC_REG_RCMD_SET); 942 931 943 - dma_async_issue_pending(nfc->dmac); 932 + wait = NFC_CMD_INT_FLAG; 933 + 934 + if (nfc->caps->has_mdma) 935 + wait |= NFC_DMA_INT_FLAG; 936 + else 937 + dma_async_issue_pending(nfc->dmac); 944 938 945 939 writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | NFC_DATA_TRANS, 946 940 nfc->regs + NFC_REG_CMD); 947 941 948 - ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, false, 0); 949 - if (ret) 942 + ret = sunxi_nfc_wait_events(nfc, wait, false, 0); 943 + if (ret && !nfc->caps->has_mdma) 950 944 dmaengine_terminate_all(nfc->dmac); 951 945 952 946 sunxi_nfc_randomizer_disable(nand); ··· 1292 1276 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1293 1277 struct nand_ecc_ctrl *ecc = &nand->ecc; 1294 1278 struct scatterlist sg; 1279 + u32 wait; 1295 1280 int ret, i; 1296 1281 1297 1282 sunxi_nfc_select_chip(nand, nand->cur_cs); ··· 1321 1304 writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG, 1322 1305 nfc->regs + NFC_REG_WCMD_SET); 1323 1306 1324 - dma_async_issue_pending(nfc->dmac); 1307 + wait = NFC_CMD_INT_FLAG; 1308 + 1309 + if (nfc->caps->has_mdma) 1310 + wait |= NFC_DMA_INT_FLAG; 1311 + else 1312 + dma_async_issue_pending(nfc->dmac); 1325 1313 1326 1314 writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | 1327 1315 NFC_DATA_TRANS | NFC_ACCESS_DIR, 1328 1316 nfc->regs + NFC_REG_CMD); 1329 1317 1330 - ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, false, 0); 1331 - if (ret) 1318 + ret = sunxi_nfc_wait_events(nfc, wait, false, 0); 1319 + if (ret && !nfc->caps->has_mdma) 1332 1320 dmaengine_terminate_all(nfc->dmac); 1333 1321 1334 1322 sunxi_nfc_randomizer_disable(nand); ··· 1619 1597 .free = sunxi_nand_ooblayout_free, 1620 1598 }; 1621 1599 1622 - static void sunxi_nand_hw_ecc_ctrl_cleanup(struct nand_ecc_ctrl *ecc) 1600 + static void sunxi_nand_hw_ecc_ctrl_cleanup(struct sunxi_nand_chip *sunxi_nand) 1623 1601 { 1624 - kfree(ecc->priv); 1602 + kfree(sunxi_nand->ecc); 1625 1603 } 1626 1604 1627 1605 static int sunxi_nand_hw_ecc_ctrl_init(struct nand_chip *nand, ··· 1629 1607 struct device_node *np) 1630 1608 { 1631 1609 static const u8 strengths[] = { 16, 24, 28, 32, 40, 48, 56, 60, 64 }; 1610 + struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand); 1632 1611 struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller); 1633 1612 struct mtd_info *mtd = nand_to_mtd(nand); 1634 1613 struct nand_device *nanddev = mtd_to_nanddev(mtd); 1635 - struct sunxi_nand_hw_ecc *data; 1636 1614 int nsectors; 1637 1615 int ret; 1638 1616 int i; ··· 1669 1647 if (ecc->size != 512 && ecc->size != 1024) 1670 1648 return -EINVAL; 1671 1649 1672 - data = kzalloc(sizeof(*data), GFP_KERNEL); 1673 - if (!data) 1650 + sunxi_nand->ecc = kzalloc(sizeof(*sunxi_nand->ecc), GFP_KERNEL); 1651 + if (!sunxi_nand->ecc) 1674 1652 return -ENOMEM; 1675 1653 1676 1654 /* Prefer 1k ECC chunk over 512 ones */ ··· 1697 1675 goto err; 1698 1676 } 1699 1677 1700 - data->mode = i; 1678 + sunxi_nand->ecc->mode = i; 1701 1679 1702 1680 /* HW ECC always request ECC bytes for 1024 bytes blocks */ 1703 1681 ecc->bytes = DIV_ROUND_UP(ecc->strength * fls(8 * 1024), 8); ··· 1715 1693 ecc->read_oob = sunxi_nfc_hw_ecc_read_oob; 1716 1694 ecc->write_oob = sunxi_nfc_hw_ecc_write_oob; 1717 1695 mtd_set_ooblayout(mtd, &sunxi_nand_ooblayout_ops); 1718 - ecc->priv = data; 1719 1696 1720 - if (nfc->dmac) { 1697 + if (nfc->dmac || nfc->caps->has_mdma) { 1721 1698 ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma; 1722 1699 ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma; 1723 1700 ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma; ··· 1735 1714 return 0; 1736 1715 1737 1716 err: 1738 - kfree(data); 1717 + kfree(sunxi_nand->ecc); 1739 1718 1740 1719 return ret; 1741 1720 } 1742 1721 1743 - static void sunxi_nand_ecc_cleanup(struct nand_ecc_ctrl *ecc) 1722 + static void sunxi_nand_ecc_cleanup(struct sunxi_nand_chip *sunxi_nand) 1744 1723 { 1724 + struct nand_ecc_ctrl *ecc = &sunxi_nand->nand.ecc; 1725 + 1745 1726 switch (ecc->engine_type) { 1746 1727 case NAND_ECC_ENGINE_TYPE_ON_HOST: 1747 - sunxi_nand_hw_ecc_ctrl_cleanup(ecc); 1728 + sunxi_nand_hw_ecc_ctrl_cleanup(sunxi_nand); 1748 1729 break; 1749 1730 case NAND_ECC_ENGINE_TYPE_NONE: 1750 1731 default: ··· 2076 2053 ret = mtd_device_unregister(nand_to_mtd(chip)); 2077 2054 WARN_ON(ret); 2078 2055 nand_cleanup(chip); 2079 - sunxi_nand_ecc_cleanup(&chip->ecc); 2056 + sunxi_nand_ecc_cleanup(sunxi_nand); 2080 2057 list_del(&sunxi_nand->node); 2081 2058 } 2059 + } 2060 + 2061 + static int sunxi_nfc_dma_init(struct sunxi_nfc *nfc, struct resource *r) 2062 + { 2063 + int ret; 2064 + 2065 + if (nfc->caps->has_mdma) 2066 + return 0; 2067 + 2068 + nfc->dmac = dma_request_chan(nfc->dev, "rxtx"); 2069 + if (IS_ERR(nfc->dmac)) { 2070 + ret = PTR_ERR(nfc->dmac); 2071 + if (ret == -EPROBE_DEFER) 2072 + return ret; 2073 + 2074 + /* Ignore errors to fall back to PIO mode */ 2075 + dev_warn(nfc->dev, "failed to request rxtx DMA channel: %d\n", ret); 2076 + nfc->dmac = NULL; 2077 + } else { 2078 + struct dma_slave_config dmac_cfg = { }; 2079 + 2080 + dmac_cfg.src_addr = r->start + nfc->caps->reg_io_data; 2081 + dmac_cfg.dst_addr = dmac_cfg.src_addr; 2082 + dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 2083 + dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width; 2084 + dmac_cfg.src_maxburst = nfc->caps->dma_maxburst; 2085 + dmac_cfg.dst_maxburst = nfc->caps->dma_maxburst; 2086 + dmaengine_slave_config(nfc->dmac, &dmac_cfg); 2087 + } 2088 + return 0; 2082 2089 } 2083 2090 2084 2091 static int sunxi_nfc_probe(struct platform_device *pdev) ··· 2185 2132 if (ret) 2186 2133 goto out_ahb_reset_reassert; 2187 2134 2188 - nfc->dmac = dma_request_chan(dev, "rxtx"); 2189 - if (IS_ERR(nfc->dmac)) { 2190 - ret = PTR_ERR(nfc->dmac); 2191 - if (ret == -EPROBE_DEFER) 2192 - goto out_ahb_reset_reassert; 2135 + ret = sunxi_nfc_dma_init(nfc, r); 2193 2136 2194 - /* Ignore errors to fall back to PIO mode */ 2195 - dev_warn(dev, "failed to request rxtx DMA channel: %d\n", ret); 2196 - nfc->dmac = NULL; 2197 - } else { 2198 - struct dma_slave_config dmac_cfg = { }; 2199 - 2200 - dmac_cfg.src_addr = r->start + nfc->caps->reg_io_data; 2201 - dmac_cfg.dst_addr = dmac_cfg.src_addr; 2202 - dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 2203 - dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width; 2204 - dmac_cfg.src_maxburst = nfc->caps->dma_maxburst; 2205 - dmac_cfg.dst_maxburst = nfc->caps->dma_maxburst; 2206 - dmaengine_slave_config(nfc->dmac, &dmac_cfg); 2207 - 2208 - if (nfc->caps->extra_mbus_conf) 2209 - writel(readl(nfc->regs + NFC_REG_CTL) | 2210 - NFC_DMA_TYPE_NORMAL, nfc->regs + NFC_REG_CTL); 2211 - } 2137 + if (ret) 2138 + goto out_ahb_reset_reassert; 2212 2139 2213 2140 platform_set_drvdata(pdev, nfc); 2214 2141 ··· 2235 2202 }; 2236 2203 2237 2204 static const struct sunxi_nfc_caps sunxi_nfc_a23_caps = { 2238 - .extra_mbus_conf = true, 2205 + .has_mdma = true, 2239 2206 .reg_io_data = NFC_REG_A23_IO_DATA, 2240 2207 .dma_maxburst = 8, 2241 2208 };
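The new `sunxi_nfc_dma_init()` helper isolates the channel-request policy from the probe path: `-EPROBE_DEFER` is still propagated so the probe can be retried once the DMA controller shows up, while any other `dma_request_chan()` failure merely disables DMA and lets the driver fall back to PIO. A minimal user-space sketch of that decision logic (the `request_chan()` stub and `struct nfc` fields are hypothetical stand-ins, not the driver's real API):

```c
#include <assert.h>

#define EPROBE_DEFER 517 /* same value as the kernel's extended errno */
#define EIO          5

/* Hypothetical stand-in for dma_request_chan(): 0 on success, -errno on
 * failure. */
static int request_chan_result;
static int request_chan(void) { return request_chan_result; }

struct nfc { int has_mdma; int use_dma; };

/* Mirrors the control flow of sunxi_nfc_dma_init(): MDMA-capable
 * controllers skip the dmaengine channel entirely; -EPROBE_DEFER aborts
 * the probe; any other error just disables DMA (PIO fallback). */
static int nfc_dma_init(struct nfc *nfc)
{
    int ret;

    if (nfc->has_mdma)
        return 0;

    ret = request_chan();
    if (ret < 0) {
        if (ret == -EPROBE_DEFER)
            return ret;   /* retry probe later */
        nfc->use_dma = 0; /* fall back to PIO mode */
        return 0;
    }

    nfc->use_dma = 1;
    return 0;
}
```

The factored-out helper also makes the probe function's error handling a single `if (ret) goto out_ahb_reset_reassert;`, as the hunk above shows.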
+3 -4
drivers/mtd/nand/raw/tmio_nand.c
··· 35 35 #include <linux/ioport.h> 36 36 #include <linux/mtd/mtd.h> 37 37 #include <linux/mtd/rawnand.h> 38 - #include <linux/mtd/nand_ecc.h> 39 38 #include <linux/mtd/partitions.h> 40 39 #include <linux/slab.h> 41 40 ··· 292 293 int r0, r1; 293 294 294 295 /* assume ecc.size = 512 and ecc.bytes = 6 */ 295 - r0 = __nand_correct_data(buf, read_ecc, calc_ecc, 256, false); 296 + r0 = rawnand_sw_hamming_correct(chip, buf, read_ecc, calc_ecc); 296 297 if (r0 < 0) 297 298 return r0; 298 - r1 = __nand_correct_data(buf + 256, read_ecc + 3, calc_ecc + 3, 256, 299 - false); 299 + r1 = rawnand_sw_hamming_correct(chip, buf + 256, read_ecc + 3, 300 + calc_ecc + 3); 300 301 if (r1 < 0) 301 302 return r1; 302 303 return r0 + r1;
+2 -3
drivers/mtd/nand/raw/txx9ndfmc.c
··· 14 14 #include <linux/delay.h> 15 15 #include <linux/mtd/mtd.h> 16 16 #include <linux/mtd/rawnand.h> 17 - #include <linux/mtd/nand_ecc.h> 18 17 #include <linux/mtd/partitions.h> 19 18 #include <linux/io.h> 20 19 #include <linux/platform_data/txx9/ndfmc.h> ··· 193 194 int stat; 194 195 195 196 for (eccsize = chip->ecc.size; eccsize > 0; eccsize -= 256) { 196 - stat = __nand_correct_data(buf, read_ecc, calc_ecc, 256, 197 - false); 197 + stat = rawnand_sw_hamming_correct(chip, buf, read_ecc, 198 + calc_ecc); 198 199 if (stat < 0) 199 200 return stat; 200 201 corrected += stat;
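The sharpsl, tmio and txx9 conversions above replace the old `nand_correct_data()`/`__nand_correct_data()` helpers with `rawnand_sw_hamming_correct()` from the reworked software Hamming engine. The single-bit correction idea behind both APIs can be sketched as follows; this is a simplified illustration with its own bit layout, not the kernel's exact ECC byte format:

```c
#include <assert.h>
#include <stdint.h>

#define DATA_BYTES 256 /* 2048 bits -> 11 index bits */
#define IDX_BITS   11

/* For each index bit k keep two parities: P[k] over data bits whose bit
 * index has bit k set, N[k] over those where it is clear. Pack them as
 * 22 bits: N[k] at 2k, P[k] at 2k+1. */
static uint32_t hamming_calc(const uint8_t *data)
{
    uint32_t ecc = 0;

    for (int i = 0; i < DATA_BYTES * 8; i++) {
        if ((data[i >> 3] >> (i & 7)) & 1)
            for (int k = 0; k < IDX_BITS; k++)
                ecc ^= 1u << (2 * k + (((i >> k) & 1) ? 1 : 0));
    }
    return ecc;
}

/* Returns the number of corrected bitflips, or -1 if uncorrectable. */
static int hamming_correct(uint8_t *data, uint32_t stored, uint32_t calc)
{
    uint32_t syn = stored ^ calc;
    int one_per_pair = 1, idx = 0, set = 0;

    if (!syn)
        return 0;

    /* A single data-bit flip toggles exactly one parity of each pair,
     * and the toggled P[k] bits spell out the faulty bit's index. */
    for (int k = 0; k < IDX_BITS; k++) {
        int p = (syn >> (2 * k + 1)) & 1;
        int n = (syn >> (2 * k)) & 1;

        if (p ^ n) {
            if (p)
                idx |= 1 << k;
        } else if (p | n) {
            one_per_pair = 0; /* both set: not a single data flip */
        }
        set += p + n;
    }

    if (one_per_pair && set == IDX_BITS) {
        data[idx >> 3] ^= 1 << (idx & 7); /* repair the data bit */
        return 1;
    }
    if (set == 1)
        return 1; /* the flip was inside the stored ECC itself */
    return -1;    /* multiple bitflips: uncorrectable */
}
```

The paired P/N parities are what let the scheme distinguish "bit 0 flipped" from "no error" and a data flip from an ECC-byte flip, which is also why the tmio and txx9 drivers can keep calling the corrector once per 256-byte chunk.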
+1
drivers/mtd/nand/spi/Kconfig
··· 2 2 menuconfig MTD_SPI_NAND 3 3 tristate "SPI NAND device Support" 4 4 select MTD_NAND_CORE 5 + select MTD_NAND_ECC 5 6 depends on SPI_MASTER 6 7 select SPI_MEM 7 8 help
+184 -102
drivers/mtd/nand/spi/core.c
··· 193 193 enable ? CFG_ECC_ENABLE : 0); 194 194 } 195 195 196 + static int spinand_check_ecc_status(struct spinand_device *spinand, u8 status) 197 + { 198 + struct nand_device *nand = spinand_to_nand(spinand); 199 + 200 + if (spinand->eccinfo.get_status) 201 + return spinand->eccinfo.get_status(spinand, status); 202 + 203 + switch (status & STATUS_ECC_MASK) { 204 + case STATUS_ECC_NO_BITFLIPS: 205 + return 0; 206 + 207 + case STATUS_ECC_HAS_BITFLIPS: 208 + /* 209 + * We have no way to know exactly how many bitflips have been 210 + * fixed, so let's return the maximum possible value so that 211 + * wear-leveling layers move the data immediately. 212 + */ 213 + return nanddev_get_ecc_conf(nand)->strength; 214 + 215 + case STATUS_ECC_UNCOR_ERROR: 216 + return -EBADMSG; 217 + 218 + default: 219 + break; 220 + } 221 + 222 + return -EINVAL; 223 + } 224 + 225 + static int spinand_noecc_ooblayout_ecc(struct mtd_info *mtd, int section, 226 + struct mtd_oob_region *region) 227 + { 228 + return -ERANGE; 229 + } 230 + 231 + static int spinand_noecc_ooblayout_free(struct mtd_info *mtd, int section, 232 + struct mtd_oob_region *region) 233 + { 234 + if (section) 235 + return -ERANGE; 236 + 237 + /* Reserve 2 bytes for the BBM. */ 238 + region->offset = 2; 239 + region->length = 62; 240 + 241 + return 0; 242 + } 243 + 244 + static const struct mtd_ooblayout_ops spinand_noecc_ooblayout = { 245 + .ecc = spinand_noecc_ooblayout_ecc, 246 + .free = spinand_noecc_ooblayout_free, 247 + }; 248 + 249 + static int spinand_ondie_ecc_init_ctx(struct nand_device *nand) 250 + { 251 + struct spinand_device *spinand = nand_to_spinand(nand); 252 + struct mtd_info *mtd = nanddev_to_mtd(nand); 253 + struct spinand_ondie_ecc_conf *engine_conf; 254 + 255 + nand->ecc.ctx.conf.engine_type = NAND_ECC_ENGINE_TYPE_ON_DIE; 256 + nand->ecc.ctx.conf.step_size = nand->ecc.requirements.step_size; 257 + nand->ecc.ctx.conf.strength = nand->ecc.requirements.strength; 258 + 259 + engine_conf = kzalloc(sizeof(*engine_conf), GFP_KERNEL); 260 + if (!engine_conf) 261 + return -ENOMEM; 262 + 263 + nand->ecc.ctx.priv = engine_conf; 264 + 265 + if (spinand->eccinfo.ooblayout) 266 + mtd_set_ooblayout(mtd, spinand->eccinfo.ooblayout); 267 + else 268 + mtd_set_ooblayout(mtd, &spinand_noecc_ooblayout); 269 + 270 + return 0; 271 + } 272 + 273 + static void spinand_ondie_ecc_cleanup_ctx(struct nand_device *nand) 274 + { 275 + kfree(nand->ecc.ctx.priv); 276 + } 277 + 278 + static int spinand_ondie_ecc_prepare_io_req(struct nand_device *nand, 279 + struct nand_page_io_req *req) 280 + { 281 + struct spinand_device *spinand = nand_to_spinand(nand); 282 + bool enable = (req->mode != MTD_OPS_RAW); 283 + 284 + /* Only enable or disable the engine */ 285 + return spinand_ecc_enable(spinand, enable); 286 + } 287 + 288 + static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand, 289 + struct nand_page_io_req *req) 290 + { 291 + struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv; 292 + struct spinand_device *spinand = nand_to_spinand(nand); 293 + 294 + if (req->mode == MTD_OPS_RAW) 295 + return 0; 296 + 297 + /* Nothing to do when finishing a page write */ 298 + if (req->type == NAND_PAGE_WRITE) 299 + return 0; 300 + 301 + /* Finish a page read: check the status, report errors/bitflips */ 302 + return spinand_check_ecc_status(spinand, engine_conf->status); 303 + } 304 + 305 + static struct nand_ecc_engine_ops spinand_ondie_ecc_engine_ops = { 306 + .init_ctx = spinand_ondie_ecc_init_ctx, 307 + .cleanup_ctx = spinand_ondie_ecc_cleanup_ctx, 308 + .prepare_io_req = spinand_ondie_ecc_prepare_io_req, 309 + .finish_io_req = spinand_ondie_ecc_finish_io_req, 310 + }; 311 + 312 + static struct nand_ecc_engine spinand_ondie_ecc_engine = { 313 + .ops = &spinand_ondie_ecc_engine_ops, 314 + }; 315 + 316 + static void spinand_ondie_ecc_save_status(struct nand_device *nand, u8 status) 317 + { 318 + struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv; 319 + 320 + if (nand->ecc.ctx.conf.engine_type == NAND_ECC_ENGINE_TYPE_ON_DIE && 321 + engine_conf) 322 + engine_conf->status = status; 323 + } 324 + 196 325 static int spinand_write_enable_op(struct spinand_device *spinand) 197 326 { 198 327 struct spi_mem_op op = SPINAND_WR_EN_DIS_OP(true); ··· 343 214 const struct nand_page_io_req *req) 344 215 { 345 216 struct nand_device *nand = spinand_to_nand(spinand); 346 - struct mtd_info *mtd = nanddev_to_mtd(nand); 347 217 struct spi_mem_dirmap_desc *rdesc; 348 218 unsigned int nbytes = 0; 349 219 void *buf = NULL; ··· 382 254 memcpy(req->databuf.in, spinand->databuf + req->dataoffs, 383 255 req->datalen); 384 256 385 - if (req->ooblen) { 386 - if (req->mode == MTD_OPS_AUTO_OOB) 387 - mtd_ooblayout_get_databytes(mtd, req->oobbuf.in, 388 - spinand->oobbuf, 389 - req->ooboffs, 390 - req->ooblen); 391 - else 392 - memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs, 393 - req->ooblen); 394 - } 257 + if (req->ooblen) 258 + memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs, 259 + req->ooblen); 395 260 396 261 return 0; 397 262 } ··· 393 272 const struct nand_page_io_req *req) 394 273 { 395 274 struct nand_device *nand = spinand_to_nand(spinand); 396 - struct mtd_info *mtd = nanddev_to_mtd(nand);
275 + struct mtd_info *mtd = spinand_to_mtd(spinand); 397 276 struct spi_mem_dirmap_desc *wdesc; 398 277 unsigned int nbytes, column = 0; 399 278 void *buf = spinand->databuf; ··· 405 284 * must fill the page cache entirely even if we only want to program 406 285 * the data portion of the page, otherwise we might corrupt the BBM or 407 286 * user data previously programmed in OOB area. 287 + * 288 + * Only reset the data buffer manually, the OOB buffer is prepared by 289 + * ECC engines ->prepare_io_req() callback. 408 290 */ 409 291 nbytes = nanddev_page_size(nand) + nanddev_per_page_oobsize(nand); 410 - memset(spinand->databuf, 0xff, nbytes); 292 + memset(spinand->databuf, 0xff, nanddev_page_size(nand)); 411 293 412 294 if (req->datalen) 413 295 memcpy(spinand->databuf + req->dataoffs, req->databuf.out, ··· 526 402 return spinand_write_reg_op(spinand, REG_BLOCK_LOCK, lock); 527 403 } 528 404 529 - static int spinand_check_ecc_status(struct spinand_device *spinand, u8 status) 405 + static int spinand_read_page(struct spinand_device *spinand, 406 + const struct nand_page_io_req *req) 530 407 { 531 408 struct nand_device *nand = spinand_to_nand(spinand); 532 - 533 - if (spinand->eccinfo.get_status) 534 - return spinand->eccinfo.get_status(spinand, status); 535 - 536 - switch (status & STATUS_ECC_MASK) { 537 - case STATUS_ECC_NO_BITFLIPS: 538 - return 0; 539 - 540 - case STATUS_ECC_HAS_BITFLIPS: 541 - /* 542 - * We have no way to know exactly how many bitflips have been 543 - * fixed, so let's return the maximum possible value so that 544 - * wear-leveling layers move the data immediately. 
545 - */ 546 - return nanddev_get_ecc_conf(nand)->strength; 547 - 548 - case STATUS_ECC_UNCOR_ERROR: 549 - return -EBADMSG; 550 - 551 - default: 552 - break; 553 - } 554 - 555 - return -EINVAL; 556 - } 557 - 558 - static int spinand_read_page(struct spinand_device *spinand, 559 - const struct nand_page_io_req *req, 560 - bool ecc_enabled) 561 - { 562 409 u8 status; 563 410 int ret; 411 + 412 + ret = nand_ecc_prepare_io_req(nand, (struct nand_page_io_req *)req); 413 + if (ret) 414 + return ret; 564 415 565 416 ret = spinand_load_page_op(spinand, req); 566 417 if (ret) ··· 545 446 if (ret < 0) 546 447 return ret; 547 448 449 + spinand_ondie_ecc_save_status(nand, status); 450 + 548 451 ret = spinand_read_from_cache_op(spinand, req); 549 452 if (ret) 550 453 return ret; 551 454 552 - if (!ecc_enabled) 553 - return 0; 554 - 555 - return spinand_check_ecc_status(spinand, status); 455 + return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req); 556 456 } 557 457 558 458 static int spinand_write_page(struct spinand_device *spinand, 559 459 const struct nand_page_io_req *req) 560 460 { 461 + struct nand_device *nand = spinand_to_nand(spinand); 561 462 u8 status; 562 463 int ret; 464 + 465 + ret = nand_ecc_prepare_io_req(nand, (struct nand_page_io_req *)req); 466 + if (ret) 467 + return ret; 563 468 564 469 ret = spinand_write_enable_op(spinand); 565 470 if (ret) ··· 579 476 580 477 ret = spinand_wait(spinand, &status); 581 478 if (!ret && (status & STATUS_PROG_FAILED)) 582 - ret = -EIO; 479 + return -EIO; 583 480 584 - return ret; 481 + return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req); 585 482 } 586 483 587 484 static int spinand_mtd_read(struct mtd_info *mtd, loff_t from, ··· 591 488 struct nand_device *nand = mtd_to_nanddev(mtd); 592 489 unsigned int max_bitflips = 0; 593 490 struct nand_io_iter iter; 594 - bool enable_ecc = false; 491 + bool disable_ecc = false; 595 492 bool ecc_failed = false; 596 493 int ret = 0; 597 494 598 - if (ops->mode != MTD_OPS_RAW && spinand->eccinfo.ooblayout) 599 - enable_ecc = true; 495 + if (ops->mode == MTD_OPS_RAW || !spinand->eccinfo.ooblayout) 496 + disable_ecc = true; 600 497 601 498 mutex_lock(&spinand->lock); 602 499 603 500 nanddev_io_for_each_page(nand, NAND_PAGE_READ, from, ops, &iter) { 501 + if (disable_ecc) 502 + iter.req.mode = MTD_OPS_RAW; 503 + 604 504 ret = spinand_select_target(spinand, iter.req.pos.target); 605 505 if (ret) 606 506 break; 607 507 608 - ret = spinand_ecc_enable(spinand, enable_ecc); 609 - if (ret) 610 - break; 611 - 612 - ret = spinand_read_page(spinand, &iter.req, enable_ecc); 508 + ret = spinand_read_page(spinand, &iter.req); 613 509 if (ret < 0 && ret != -EBADMSG) 614 510 break; 615 511 ··· 639 537 struct spinand_device *spinand = mtd_to_spinand(mtd); 640 538 struct nand_device *nand = mtd_to_nanddev(mtd); 641 539 struct nand_io_iter iter; 642 - bool enable_ecc = false; 540 + bool disable_ecc = false; 643 541 int ret = 0; 644 542 645 - if (ops->mode != MTD_OPS_RAW && mtd->ooblayout) 646 - enable_ecc = true; 543 + if (ops->mode == MTD_OPS_RAW || !mtd->ooblayout) 544 + disable_ecc = true; 647 545 648 546 mutex_lock(&spinand->lock); 649 547 650 548 nanddev_io_for_each_page(nand, NAND_PAGE_WRITE, to, ops, &iter) { 651 - ret = spinand_select_target(spinand, iter.req.pos.target); 652 - if (ret) 653 - break; 549 + if (disable_ecc) 550 + iter.req.mode = MTD_OPS_RAW; 654 551 655 - ret = spinand_ecc_enable(spinand, enable_ecc); 552 + ret = spinand_select_target(spinand, iter.req.pos.target); 656 553 if (ret) 657 554 break; 658 555 ··· 681 580 }; 682 581 683 582 spinand_select_target(spinand, pos->target); 684 - spinand_read_page(spinand, &req, false); 583 + spinand_read_page(spinand, &req); 685 584 if (marker[0] != 0xff || marker[1] != 0xff) 686 585 return true; 687 586 ··· 1066 965 return 0; 1067 966 } 1068 967 1069 - static int spinand_noecc_ooblayout_ecc(struct mtd_info *mtd, int section, 1070 - struct mtd_oob_region *region) 1071 - { 1072 - return -ERANGE; 1073 - } 1074 - 1075 - static int spinand_noecc_ooblayout_free(struct mtd_info *mtd, int section, 1076 - struct mtd_oob_region *region) 1077 - { 1078 - if (section) 1079 - return -ERANGE; 1080 - 1081 - /* Reserve 2 bytes for the BBM. */ 1082 - region->offset = 2; 1083 - region->length = 62; 1084 - 1085 - return 0; 1086 - } 1087 - 1088 - static const struct mtd_ooblayout_ops spinand_noecc_ooblayout = { 1089 - .ecc = spinand_noecc_ooblayout_ecc, 1090 - .free = spinand_noecc_ooblayout_free, 1091 - }; 1092 - 1093 968 static int spinand_init(struct spinand_device *spinand) 1094 969 { 1095 970 struct device *dev = &spinand->spimem->spi->dev; ··· 1143 1066 if (ret) 1144 1067 goto err_manuf_cleanup; 1145 1068 1146 - /* 1147 - * Right now, we don't support ECC, so let the whole oob 1148 - * area is available for user. 1149 - */ 1069 + /* SPI-NAND default ECC engine is on-die */ 1070 + nand->ecc.defaults.engine_type = NAND_ECC_ENGINE_TYPE_ON_DIE; 1071 + nand->ecc.ondie_engine = &spinand_ondie_ecc_engine; 1072 + 1073 + spinand_ecc_enable(spinand, false); 1074 + ret = nanddev_ecc_engine_init(nand); 1075 + if (ret) 1076 + goto err_cleanup_nanddev; 1077 + 1150 1078 mtd->_read_oob = spinand_mtd_read; 1151 1079 mtd->_write_oob = spinand_mtd_write; 1152 1080 mtd->_block_isbad = spinand_mtd_block_isbad; ··· 1160 1078 mtd->_erase = spinand_mtd_erase; 1161 1079 mtd->_max_bad_blocks = nanddev_mtd_max_bad_blocks; 1162 1080 1163 - if (spinand->eccinfo.ooblayout) 1164 - mtd_set_ooblayout(mtd, spinand->eccinfo.ooblayout); 1165 - else 1166 - mtd_set_ooblayout(mtd, &spinand_noecc_ooblayout); 1167 - 1168 - ret = mtd_ooblayout_count_freebytes(mtd); 1169 - if (ret < 0) 1170 - goto err_cleanup_nanddev; 1081 + if (nand->ecc.engine) { 1082 + ret = mtd_ooblayout_count_freebytes(mtd); 1083 + if (ret < 0) 1084 + goto err_cleanup_ecc_engine; 1085 + } 1171 1086 1172 1087 mtd->oobavail = ret; 1173 1088 ··· 1173 1094 mtd->ecc_step_size = nanddev_get_ecc_conf(nand)->step_size; 1174 1095
1175 1096 return 0; 1097 + 1098 + err_cleanup_ecc_engine: 1099 + nanddev_ecc_engine_cleanup(nand); 1176 1100 1177 1101 err_cleanup_nanddev: 1178 1102 nanddev_cleanup(nand);
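The spi/core.c rework above routes reads and writes through the generic `nand_ecc_prepare_io_req()`/`nand_ecc_finish_io_req()` pair instead of open-coded `spinand_ecc_enable()`/`spinand_check_ecc_status()` calls. A reduced user-space sketch of that two-hook contract (all names and the status values here are illustrative stand-ins, not the kernel structures):

```c
#include <assert.h>

/* Simplified status codes standing in for the SPI-NAND on-die ECC field */
#define ECC_NO_BITFLIPS  0
#define ECC_HAS_BITFLIPS 1
#define ECC_UNCOR        2
#define EBADMSG          74

enum req_type { PAGE_READ, PAGE_WRITE };

struct io_req { enum req_type type; int raw; };

struct engine_state { int ecc_on; int status; int max_strength; };

/* prepare: runs before the page I/O and only switches the engine on/off,
 * mirroring spinand_ondie_ecc_prepare_io_req(). */
static int prepare_io_req(struct engine_state *e, struct io_req *req)
{
    e->ecc_on = !req->raw;
    return 0;
}

/* finish: runs after the I/O and turns the status saved during the read
 * into a bitflip count or an error, mirroring
 * spinand_ondie_ecc_finish_io_req(). */
static int finish_io_req(struct engine_state *e, struct io_req *req)
{
    if (req->raw || req->type == PAGE_WRITE)
        return 0; /* nothing to decode */

    switch (e->status) {
    case ECC_NO_BITFLIPS:
        return 0;
    case ECC_HAS_BITFLIPS:
        /* exact count unknown: report the maximum strength so
         * wear-leveling layers migrate the data */
        return e->max_strength;
    default:
        return -EBADMSG;
    }
}
```

Splitting the work this way is what lets `spinand_read_page()`/`spinand_write_page()` stay engine-agnostic: they just bracket the I/O with the two calls, and `spinand_ondie_ecc_save_status()` stashes the status byte in between.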
+47
drivers/mtd/nand/spi/macronix.c
··· 119 119 &update_cache_variants), 120 120 SPINAND_HAS_QE_BIT, 121 121 SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)), 122 + SPINAND_INFO("MX35LF2GE4AD", 123 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x26), 124 + NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1), 125 + NAND_ECCREQ(8, 512), 126 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 127 + &write_cache_variants, 128 + &update_cache_variants), 129 + 0, 130 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 131 + mx35lf1ge4ab_ecc_get_status)), 132 + SPINAND_INFO("MX35LF4GE4AD", 133 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x37), 134 + NAND_MEMORG(1, 4096, 128, 64, 2048, 40, 1, 1, 1), 135 + NAND_ECCREQ(8, 512), 136 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 137 + &write_cache_variants, 138 + &update_cache_variants), 139 + 0, 140 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, 141 + mx35lf1ge4ab_ecc_get_status)), 142 + SPINAND_INFO("MX35LF1G24AD", 143 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x14), 144 + NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1), 145 + NAND_ECCREQ(8, 512), 146 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 147 + &write_cache_variants, 148 + &update_cache_variants), 149 + 0, 150 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)), 151 + SPINAND_INFO("MX35LF2G24AD", 152 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x24), 153 + NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1), 154 + NAND_ECCREQ(8, 512), 155 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 156 + &write_cache_variants, 157 + &update_cache_variants), 158 + 0, 159 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)), 160 + SPINAND_INFO("MX35LF4G24AD", 161 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x35), 162 + NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 2, 1, 1), 163 + NAND_ECCREQ(8, 512), 164 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 165 + &write_cache_variants, 166 + &update_cache_variants), 167 + 0, 168 + SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)), 122 169 SPINAND_INFO("MX31LF1GE4BC", 123 170 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x1e), 124 171 NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
+94 -30
drivers/mtd/nand/spi/micron.c
··· 28 28 29 29 #define MICRON_SELECT_DIE(x) ((x) << 6) 30 30 31 - static SPINAND_OP_VARIANTS(read_cache_variants, 31 + static SPINAND_OP_VARIANTS(quadio_read_cache_variants, 32 32 SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0), 33 33 SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0), 34 34 SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0), ··· 36 36 SPINAND_PAGE_READ_FROM_CACHE_OP(true, 0, 1, NULL, 0), 37 37 SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0)); 38 38 39 - static SPINAND_OP_VARIANTS(write_cache_variants, 39 + static SPINAND_OP_VARIANTS(x4_write_cache_variants, 40 40 SPINAND_PROG_LOAD_X4(true, 0, NULL, 0), 41 41 SPINAND_PROG_LOAD(true, 0, NULL, 0)); 42 42 43 - static SPINAND_OP_VARIANTS(update_cache_variants, 43 + static SPINAND_OP_VARIANTS(x4_update_cache_variants, 44 44 SPINAND_PROG_LOAD_X4(false, 0, NULL, 0), 45 45 SPINAND_PROG_LOAD(false, 0, NULL, 0)); 46 + 47 + /* Micron MT29F2G01AAAED Device */ 48 + static SPINAND_OP_VARIANTS(x4_read_cache_variants, 49 + SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0), 50 + SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0), 51 + SPINAND_PAGE_READ_FROM_CACHE_OP(true, 0, 1, NULL, 0), 52 + SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0)); 53 + 54 + static SPINAND_OP_VARIANTS(x1_write_cache_variants, 55 + SPINAND_PROG_LOAD(true, 0, NULL, 0)); 56 + 57 + static SPINAND_OP_VARIANTS(x1_update_cache_variants, 58 + SPINAND_PROG_LOAD(false, 0, NULL, 0)); 46 59 47 60 static int micron_8_ooblayout_ecc(struct mtd_info *mtd, int section, 48 61 struct mtd_oob_region *region) ··· 85 72 static const struct mtd_ooblayout_ops micron_8_ooblayout = { 86 73 .ecc = micron_8_ooblayout_ecc, 87 74 .free = micron_8_ooblayout_free, 75 + }; 76 + 77 + static int micron_4_ooblayout_ecc(struct mtd_info *mtd, int section, 78 + struct mtd_oob_region *region) 79 + { 80 + struct spinand_device *spinand = mtd_to_spinand(mtd); 81 + 82 + if (section >= spinand->base.memorg.pagesize / 83 + mtd->ecc_step_size) 84 + return -ERANGE; 
85 + 86 + region->offset = (section * 16) + 8; 87 + region->length = 8; 88 + 89 + return 0; 90 + } 91 + 92 + static int micron_4_ooblayout_free(struct mtd_info *mtd, int section, 93 + struct mtd_oob_region *region) 94 + { 95 + struct spinand_device *spinand = mtd_to_spinand(mtd); 96 + 97 + if (section >= spinand->base.memorg.pagesize / 98 + mtd->ecc_step_size) 99 + return -ERANGE; 100 + 101 + if (section) { 102 + region->offset = 16 * section; 103 + region->length = 8; 104 + } else { 105 + /* section 0 has two bytes reserved for the BBM */ 106 + region->offset = 2; 107 + region->length = 6; 108 + } 109 + 110 + return 0; 111 + } 112 + 113 + static const struct mtd_ooblayout_ops micron_4_ooblayout = { 114 + .ecc = micron_4_ooblayout_ecc, 115 + .free = micron_4_ooblayout_free, 88 116 }; 89 117 90 118 static int micron_select_target(struct spinand_device *spinand, ··· 174 120 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x24), 175 121 NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1), 176 122 NAND_ECCREQ(8, 512), 177 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 178 - &write_cache_variants, 179 - &update_cache_variants), 123 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 124 + &x4_write_cache_variants, 125 + &x4_update_cache_variants), 180 126 0, 181 127 SPINAND_ECCINFO(&micron_8_ooblayout, 182 128 micron_8_ecc_get_status)), ··· 185 131 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x25), 186 132 NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1), 187 133 NAND_ECCREQ(8, 512), 188 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 189 - &write_cache_variants, 190 - &update_cache_variants), 134 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 135 + &x4_write_cache_variants, 136 + &x4_update_cache_variants), 191 137 0, 192 138 SPINAND_ECCINFO(&micron_8_ooblayout, 193 139 micron_8_ecc_get_status)), ··· 196 142 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x14), 197 143 NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1), 198 144 NAND_ECCREQ(8, 512), 199 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 200 - &write_cache_variants, 201 - &update_cache_variants), 145 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 146 + &x4_write_cache_variants, 147 + &x4_update_cache_variants), 202 148 0, 203 149 SPINAND_ECCINFO(&micron_8_ooblayout, 204 150 micron_8_ecc_get_status)), ··· 207 153 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x15), 208 154 NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1), 209 155 NAND_ECCREQ(8, 512), 210 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 211 - &write_cache_variants, 212 - &update_cache_variants), 156 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 157 + &x4_write_cache_variants, 158 + &x4_update_cache_variants), 213 159 0, 214 160 SPINAND_ECCINFO(&micron_8_ooblayout, 215 161 micron_8_ecc_get_status)), ··· 218 164 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x36), 219 165 NAND_MEMORG(1, 2048, 128, 64, 2048, 80, 2, 1, 2), 220 166 NAND_ECCREQ(8, 512), 221 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 222 - &write_cache_variants, 223 - &update_cache_variants), 167 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 168 + &x4_write_cache_variants, 169 + &x4_update_cache_variants), 224 170 0, 225 171 SPINAND_ECCINFO(&micron_8_ooblayout, 226 172 micron_8_ecc_get_status), ··· 230 176 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x34), 231 177 NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1), 232 178 NAND_ECCREQ(8, 512), 233 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 234 - &write_cache_variants, 235 - &update_cache_variants), 179 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 180 + &x4_write_cache_variants, 181 + &x4_update_cache_variants), 236 182 SPINAND_HAS_CR_FEAT_BIT, 237 183 SPINAND_ECCINFO(&micron_8_ooblayout, 238 184 micron_8_ecc_get_status)), ··· 241 187 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x35), 242 188 NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1), 243 189 NAND_ECCREQ(8, 512), 244 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
245 - &write_cache_variants, 246 - &update_cache_variants), 190 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 191 + &x4_write_cache_variants, 192 + &x4_update_cache_variants), 247 193 SPINAND_HAS_CR_FEAT_BIT, 248 194 SPINAND_ECCINFO(&micron_8_ooblayout, 249 195 micron_8_ecc_get_status)), ··· 252 198 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x46), 253 199 NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 2), 254 200 NAND_ECCREQ(8, 512), 255 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 256 - &write_cache_variants, 257 - &update_cache_variants), 201 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 202 + &x4_write_cache_variants, 203 + &x4_update_cache_variants), 258 204 SPINAND_HAS_CR_FEAT_BIT, 259 205 SPINAND_ECCINFO(&micron_8_ooblayout, 260 206 micron_8_ecc_get_status), ··· 264 210 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x47), 265 211 NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 2), 266 212 NAND_ECCREQ(8, 512), 267 - SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 268 - &write_cache_variants, 269 - &update_cache_variants), 213 + SPINAND_INFO_OP_VARIANTS(&quadio_read_cache_variants, 214 + &x4_write_cache_variants, 215 + &x4_update_cache_variants), 270 216 SPINAND_HAS_CR_FEAT_BIT, 271 217 SPINAND_ECCINFO(&micron_8_ooblayout, 272 218 micron_8_ecc_get_status), 273 219 SPINAND_SELECT_TARGET(micron_select_target)), 220 + /* M69A 2Gb 3.3V */ 221 + SPINAND_INFO("MT29F2G01AAAED", 222 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x9F), 223 + NAND_MEMORG(1, 2048, 64, 64, 2048, 80, 2, 1, 1), 224 + NAND_ECCREQ(4, 512), 225 + SPINAND_INFO_OP_VARIANTS(&x4_read_cache_variants, 226 + &x1_write_cache_variants, 227 + &x1_update_cache_variants), 228 + 0, 229 + SPINAND_ECCINFO(&micron_4_ooblayout, NULL)), 274 230 }; 275 231 276 232 static int micron_spinand_init(struct spinand_device *spinand)
+1 -1
drivers/mtd/nand/spi/toshiba.c
···
 		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
 		SPINAND_PROG_LOAD(false, 0, NULL, 0));
 
-/**
+/*
  * Backward compatibility for 1st generation Serial NAND devices
  * which don't support Quad Program Load operation.
  */
+13 -1
drivers/mtd/parsers/cmdlinepart.c
···
 	struct cmdline_mtd_partition *this_mtd;
 	struct mtd_partition *parts;
 	int mtd_id_len, num_parts;
-	char *p, *mtd_id, *semicol;
+	char *p, *mtd_id, *semicol, *open_parenth;
 
 	/*
 	 * Replace the first ';' by a NULL char so strrchr can work
···
 	if (semicol)
 		*semicol = '\0';
 
+	/*
+	 * make sure that part-names with ":" will not be handled as
+	 * part of the mtd-id with an ":"
+	 */
+	open_parenth = strchr(s, '(');
+	if (open_parenth)
+		*open_parenth = '\0';
+
 	mtd_id = s;
 
 	/*
···
 	 * as an <mtd-id>/<part-definition> separator.
 	 */
 	p = strrchr(s, ':');
+
+	/* Restore the '(' now. */
+	if (open_parenth)
+		*open_parenth = '(';
 
 	/* Restore the ';' now. */
 	if (semicol)
+15 -15
drivers/mtd/sm_ftl.c
···
 #include <linux/sysfs.h>
 #include <linux/bitops.h>
 #include <linux/slab.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>
 #include "nand/raw/sm_common.h"
 #include "sm_ftl.h"
···
 
 static int sm_correct_sector(uint8_t *buffer, struct sm_oob *oob)
 {
+	bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
 	uint8_t ecc[3];
 
-	__nand_calculate_ecc(buffer, SM_SMALL_PAGE, ecc,
-			     IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-	if (__nand_correct_data(buffer, ecc, oob->ecc1, SM_SMALL_PAGE,
-				IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC)) < 0)
+	ecc_sw_hamming_calculate(buffer, SM_SMALL_PAGE, ecc, sm_order);
+	if (ecc_sw_hamming_correct(buffer, ecc, oob->ecc1, SM_SMALL_PAGE,
+				   sm_order) < 0)
 		return -EIO;
 
 	buffer += SM_SMALL_PAGE;
 
-	__nand_calculate_ecc(buffer, SM_SMALL_PAGE, ecc,
-			     IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-	if (__nand_correct_data(buffer, ecc, oob->ecc2, SM_SMALL_PAGE,
-				IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC)) < 0)
+	ecc_sw_hamming_calculate(buffer, SM_SMALL_PAGE, ecc, sm_order);
+	if (ecc_sw_hamming_correct(buffer, ecc, oob->ecc2, SM_SMALL_PAGE,
+				   sm_order) < 0)
 		return -EIO;
 	return 0;
 }
···
 			  int zone, int block, int lba,
 			  unsigned long invalid_bitmap)
 {
+	bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
 	struct sm_oob oob;
 	int boffset;
 	int retry = 0;
···
 	}
 
 	if (ftl->smallpagenand) {
-		__nand_calculate_ecc(buf + boffset, SM_SMALL_PAGE,
-				     oob.ecc1,
-				     IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+		ecc_sw_hamming_calculate(buf + boffset,
+					 SM_SMALL_PAGE, oob.ecc1,
+					 sm_order);
 
-		__nand_calculate_ecc(buf + boffset + SM_SMALL_PAGE,
-				     SM_SMALL_PAGE, oob.ecc2,
-				     IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+		ecc_sw_hamming_calculate(buf + boffset + SM_SMALL_PAGE,
+					 SM_SMALL_PAGE, oob.ecc2,
+					 sm_order);
 	}
 	if (!sm_write_sector(ftl, zone, block, boffset,
			     buf + boffset, &oob))
+44
drivers/mtd/spi-nor/Kconfig
···
 	  Please note that some tools/drivers/filesystems may not work with
 	  4096 B erase size (e.g. UBIFS requires 15 KiB as a minimum).
 
+choice
+	prompt "Software write protection at boot"
+	default MTD_SPI_NOR_SWP_DISABLE_ON_VOLATILE
+
+config MTD_SPI_NOR_SWP_DISABLE
+	bool "Disable SWP on any flashes (legacy behavior)"
+	help
+	  This option disables the software write protection on any SPI
+	  flashes at boot-up.
+
+	  Depending on the flash chip this either clears the block protection
+	  bits or does a "Global Unprotect" command.
+
+	  Don't use this if you intend to use the software write protection
+	  of your SPI flash. This is only to keep backwards compatibility.
+
+config MTD_SPI_NOR_SWP_DISABLE_ON_VOLATILE
+	bool "Disable SWP on flashes w/ volatile protection bits"
+	help
+	  Some SPI flashes have volatile block protection bits, i.e. after a
+	  power-up or a reset the flash is software write protected by
+	  default.
+
+	  This option disables the software write protection for these kinds
+	  of flashes while keeping it enabled for any other SPI flashes
+	  which have non-volatile write protection bits.
+
+	  If the software write protection is disabled then, depending on
+	  the flash, either the block protection bits are cleared or a
+	  "Global Unprotect" command is issued.
+
+	  If you are unsure, select this option.
+
+config MTD_SPI_NOR_SWP_KEEP
+	bool "Keep software write protection as is"
+	help
+	  If you select this option the software write protection of any
+	  SPI flashes will not be changed. If your flash is software write
+	  protected or will be automatically software write protected after
+	  power-up you have to manually unlock it before you are able to
+	  write to it.
+
+endchoice
+
 source "drivers/mtd/spi-nor/controllers/Kconfig"
 
 endif # MTD_SPI_NOR
+172 -19
drivers/mtd/spi-nor/atmel.c
···
 
 #include "core.h"
 
+#define ATMEL_SR_GLOBAL_PROTECT_MASK GENMASK(5, 2)
+
+/*
+ * The Atmel AT25FS010/AT25FS040 parts have some weird configuration for the
+ * block protection bits. We don't support them. But legacy behavior in Linux
+ * is to unlock the whole flash array on startup. Therefore, we have to support
+ * exactly this operation.
+ */
+static int atmel_at25fs_lock(struct spi_nor *nor, loff_t ofs, uint64_t len)
+{
+	return -EOPNOTSUPP;
+}
+
+static int atmel_at25fs_unlock(struct spi_nor *nor, loff_t ofs, uint64_t len)
+{
+	int ret;
+
+	/* We only support unlocking the whole flash array */
+	if (ofs || len != nor->params->size)
+		return -EINVAL;
+
+	/* Write 0x00 to the status register to disable write protection */
+	ret = spi_nor_write_sr_and_check(nor, 0);
+	if (ret)
+		dev_dbg(nor->dev, "unable to clear BP bits, WP# asserted?\n");
+
+	return ret;
+}
+
+static int atmel_at25fs_is_locked(struct spi_nor *nor, loff_t ofs, uint64_t len)
+{
+	return -EOPNOTSUPP;
+}
+
+static const struct spi_nor_locking_ops atmel_at25fs_locking_ops = {
+	.lock = atmel_at25fs_lock,
+	.unlock = atmel_at25fs_unlock,
+	.is_locked = atmel_at25fs_is_locked,
+};
+
+static void atmel_at25fs_default_init(struct spi_nor *nor)
+{
+	nor->params->locking_ops = &atmel_at25fs_locking_ops;
+}
+
+static const struct spi_nor_fixups atmel_at25fs_fixups = {
+	.default_init = atmel_at25fs_default_init,
+};
+
+/**
+ * atmel_set_global_protection - Do a Global Protect or Unprotect command
+ * @nor:	pointer to 'struct spi_nor'
+ * @ofs:	offset in bytes
+ * @len:	len in bytes
+ * @is_protect:	if true do a Global Protect otherwise it is a Global Unprotect
+ *
+ * Return: 0 on success, -error otherwise.
+ */
+static int atmel_set_global_protection(struct spi_nor *nor, loff_t ofs,
+				       uint64_t len, bool is_protect)
+{
+	int ret;
+	u8 sr;
+
+	/* We only support locking the whole flash array */
+	if (ofs || len != nor->params->size)
+		return -EINVAL;
+
+	ret = spi_nor_read_sr(nor, nor->bouncebuf);
+	if (ret)
+		return ret;
+
+	sr = nor->bouncebuf[0];
+
+	/* SRWD bit needs to be cleared, otherwise the protection doesn't change */
+	if (sr & SR_SRWD) {
+		sr &= ~SR_SRWD;
+		ret = spi_nor_write_sr_and_check(nor, sr);
+		if (ret) {
+			dev_dbg(nor->dev, "unable to clear SRWD bit, WP# asserted?\n");
+			return ret;
+		}
+	}
+
+	if (is_protect) {
+		sr |= ATMEL_SR_GLOBAL_PROTECT_MASK;
+		/*
+		 * Set the SRWD bit again as soon as we are protecting
+		 * anything. This will ensure that the WP# pin is working
+		 * correctly. By doing this we also behave the same as
+		 * spi_nor_sr_lock(), which sets SRWD if any block protection
+		 * is active.
+		 */
+		sr |= SR_SRWD;
+	} else {
+		sr &= ~ATMEL_SR_GLOBAL_PROTECT_MASK;
+	}
+
+	nor->bouncebuf[0] = sr;
+
+	/*
+	 * We cannot use spi_nor_write_sr_and_check() because this command
+	 * isn't really setting any bits; instead it is a pseudo command for
+	 * "Global Unprotect" or "Global Protect".
+	 */
+	return spi_nor_write_sr(nor, nor->bouncebuf, 1);
+}
+
+static int atmel_global_protect(struct spi_nor *nor, loff_t ofs, uint64_t len)
+{
+	return atmel_set_global_protection(nor, ofs, len, true);
+}
+
+static int atmel_global_unprotect(struct spi_nor *nor, loff_t ofs, uint64_t len)
+{
+	return atmel_set_global_protection(nor, ofs, len, false);
+}
+
+static int atmel_is_global_protected(struct spi_nor *nor, loff_t ofs, uint64_t len)
+{
+	int ret;
+
+	if (ofs >= nor->params->size || (ofs + len) > nor->params->size)
+		return -EINVAL;
+
+	ret = spi_nor_read_sr(nor, nor->bouncebuf);
+	if (ret)
+		return ret;
+
+	return ((nor->bouncebuf[0] & ATMEL_SR_GLOBAL_PROTECT_MASK) == ATMEL_SR_GLOBAL_PROTECT_MASK);
+}
+
+static const struct spi_nor_locking_ops atmel_global_protection_ops = {
+	.lock = atmel_global_protect,
+	.unlock = atmel_global_unprotect,
+	.is_locked = atmel_is_global_protected,
+};
+
+static void atmel_global_protection_default_init(struct spi_nor *nor)
+{
+	nor->params->locking_ops = &atmel_global_protection_ops;
+}
+
+static const struct spi_nor_fixups atmel_global_protection_fixups = {
+	.default_init = atmel_global_protection_default_init,
+};
+
 static const struct flash_info atmel_parts[] = {
 	/* Atmel -- some are (confusingly) marketed as "DataFlash" */
-	{ "at25fs010",  INFO(0x1f6601, 0, 32 * 1024,   4, SECT_4K) },
-	{ "at25fs040",  INFO(0x1f6604, 0, 64 * 1024,   8, SECT_4K) },
+	{ "at25fs010",  INFO(0x1f6601, 0, 32 * 1024,   4, SECT_4K | SPI_NOR_HAS_LOCK)
+		.fixups = &atmel_at25fs_fixups },
+	{ "at25fs040",  INFO(0x1f6604, 0, 64 * 1024,   8, SECT_4K | SPI_NOR_HAS_LOCK)
+		.fixups = &atmel_at25fs_fixups },
 
-	{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024,   8, SECT_4K) },
-	{ "at25df321",  INFO(0x1f4700, 0, 64 * 1024,  64, SECT_4K) },
-	{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024,  64, SECT_4K) },
-	{ "at25df641",  INFO(0x1f4800, 0, 64 * 1024, 128, SECT_4K) },
+	{ "at25df041a", INFO(0x1f4401, 0, 64 * 1024,   8,
+			     SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
+		.fixups = &atmel_global_protection_fixups },
+	{ "at25df321",  INFO(0x1f4700, 0, 64 * 1024,  64,
+			     SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
+		.fixups = &atmel_global_protection_fixups },
+	{ "at25df321a", INFO(0x1f4701, 0, 64 * 1024,  64,
+			     SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
+		.fixups = &atmel_global_protection_fixups },
+	{ "at25df641",  INFO(0x1f4800, 0, 64 * 1024, 128,
+			     SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
+		.fixups = &atmel_global_protection_fixups },
 
 	{ "at25sl321",  INFO(0x1f4216, 0, 64 * 1024,  64,
			     SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
 
 	{ "at26f004",   INFO(0x1f0400, 0, 64 * 1024,  8, SECT_4K) },
-	{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16, SECT_4K) },
-	{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32, SECT_4K) },
-	{ "at26df321",  INFO(0x1f4700, 0, 64 * 1024, 64, SECT_4K) },
+	{ "at26df081a", INFO(0x1f4501, 0, 64 * 1024, 16,
+			     SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
+		.fixups = &atmel_global_protection_fixups },
+	{ "at26df161a", INFO(0x1f4601, 0, 64 * 1024, 32,
+			     SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
+		.fixups = &atmel_global_protection_fixups },
+	{ "at26df321",  INFO(0x1f4700, 0, 64 * 1024, 64,
+			     SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE)
+		.fixups = &atmel_global_protection_fixups },
 
 	{ "at45db081d", INFO(0x1f2500, 0, 64 * 1024, 16, SECT_4K) },
-};
-
-static void atmel_default_init(struct spi_nor *nor)
-{
-	nor->flags |= SNOR_F_HAS_LOCK;
-}
-
-static const struct spi_nor_fixups atmel_fixups = {
-	.default_init = atmel_default_init,
 };
 
 const struct spi_nor_manufacturer spi_nor_atmel = {
 	.name = "atmel",
 	.parts = atmel_parts,
 	.nparts = ARRAY_SIZE(atmel_parts),
-	.fixups = &atmel_fixups,
 };
+1 -1
drivers/mtd/spi-nor/controllers/hisi-sfc.c
···
 	.write = hisi_spi_nor_write,
 };
 
-/**
+/*
  * Get spi flash device information and register it as a mtd device.
  */
 static int hisi_spi_nor_register(struct device_node *np,
+457 -140
drivers/mtd/spi-nor/core.c
···
 
 #define SPI_NOR_MAX_ADDR_WIDTH	4
 
+#define SPI_NOR_SRST_SLEEP_MIN 200
+#define SPI_NOR_SRST_SLEEP_MAX 400
+
+/**
+ * spi_nor_get_cmd_ext() - Get the command opcode extension based on the
+ *			   extension type.
+ * @nor:		pointer to a 'struct spi_nor'
+ * @op:			pointer to the 'struct spi_mem_op' whose properties
+ *			need to be initialized.
+ *
+ * Right now, only "repeat" and "invert" are supported.
+ *
+ * Return: The opcode extension.
+ */
+static u8 spi_nor_get_cmd_ext(const struct spi_nor *nor,
+			      const struct spi_mem_op *op)
+{
+	switch (nor->cmd_ext_type) {
+	case SPI_NOR_EXT_INVERT:
+		return ~op->cmd.opcode;
+
+	case SPI_NOR_EXT_REPEAT:
+		return op->cmd.opcode;
+
+	default:
+		dev_err(nor->dev, "Unknown command extension type\n");
+		return 0;
+	}
+}
+
+/**
+ * spi_nor_spimem_setup_op() - Set up common properties of a spi-mem op.
+ * @nor:		pointer to a 'struct spi_nor'
+ * @op:			pointer to the 'struct spi_mem_op' whose properties
+ *			need to be initialized.
+ * @proto:		the protocol from which the properties need to be set.
+ */
+void spi_nor_spimem_setup_op(const struct spi_nor *nor,
+			     struct spi_mem_op *op,
+			     const enum spi_nor_protocol proto)
+{
+	u8 ext;
+
+	op->cmd.buswidth = spi_nor_get_protocol_inst_nbits(proto);
+
+	if (op->addr.nbytes)
+		op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+	if (op->dummy.nbytes)
+		op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+	if (op->data.nbytes)
+		op->data.buswidth = spi_nor_get_protocol_data_nbits(proto);
+
+	if (spi_nor_protocol_is_dtr(proto)) {
+		/*
+		 * SPIMEM supports mixed DTR modes, but right now we can only
+		 * have all phases either DTR or STR. IOW, SPIMEM can have
+		 * something like 4S-4D-4D, but SPI NOR can't. So, set all 4
+		 * phases to either DTR or STR.
+		 */
+		op->cmd.dtr = true;
+		op->addr.dtr = true;
+		op->dummy.dtr = true;
+		op->data.dtr = true;
+
+		/* 2 bytes per clock cycle in DTR mode. */
+		op->dummy.nbytes *= 2;
+
+		ext = spi_nor_get_cmd_ext(nor, op);
+		op->cmd.opcode = (op->cmd.opcode << 8) | ext;
+		op->cmd.nbytes = 2;
+	}
+}
+
 /**
  * spi_nor_spimem_bounce() - check if a bounce buffer is needed for the data
  * transfer
···
 	return spi_mem_exec_op(nor->spimem, op);
 }
 
+static int spi_nor_controller_ops_read_reg(struct spi_nor *nor, u8 opcode,
+					   u8 *buf, size_t len)
+{
+	if (spi_nor_protocol_is_dtr(nor->reg_proto))
+		return -EOPNOTSUPP;
+
+	return nor->controller_ops->read_reg(nor, opcode, buf, len);
+}
+
+static int spi_nor_controller_ops_write_reg(struct spi_nor *nor, u8 opcode,
+					    const u8 *buf, size_t len)
+{
+	if (spi_nor_protocol_is_dtr(nor->reg_proto))
+		return -EOPNOTSUPP;
+
+	return nor->controller_ops->write_reg(nor, opcode, buf, len);
+}
+
+static int spi_nor_controller_ops_erase(struct spi_nor *nor, loff_t offs)
+{
+	if (spi_nor_protocol_is_dtr(nor->write_proto))
+		return -EOPNOTSUPP;
+
+	return nor->controller_ops->erase(nor, offs);
+}
+
 /**
  * spi_nor_spimem_read_data() - read data from flash's memory region via
  * spi-mem
···
 			  size_t len, u8 *buf)
 {
 	struct spi_mem_op op =
-		SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 1),
-			   SPI_MEM_OP_ADDR(nor->addr_width, from, 1),
-			   SPI_MEM_OP_DUMMY(nor->read_dummy, 1),
-			   SPI_MEM_OP_DATA_IN(len, buf, 1));
+		SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 0),
+			   SPI_MEM_OP_ADDR(nor->addr_width, from, 0),
+			   SPI_MEM_OP_DUMMY(nor->read_dummy, 0),
+			   SPI_MEM_OP_DATA_IN(len, buf, 0));
 	bool usebouncebuf;
 	ssize_t nbytes;
 	int error;
 
-	/* get transfer protocols. */
-	op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->read_proto);
-	op.addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->read_proto);
-	op.dummy.buswidth = op.addr.buswidth;
-	op.data.buswidth = spi_nor_get_protocol_data_nbits(nor->read_proto);
+	spi_nor_spimem_setup_op(nor, &op, nor->read_proto);
 
 	/* convert the dummy cycles to the number of bytes */
 	op.dummy.nbytes = (nor->read_dummy * op.dummy.buswidth) / 8;
+	if (spi_nor_protocol_is_dtr(nor->read_proto))
+		op.dummy.nbytes *= 2;
 
 	usebouncebuf = spi_nor_spimem_bounce(nor, &op);
···
 			  size_t len, const u8 *buf)
 {
 	struct spi_mem_op op =
-		SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 1),
-			   SPI_MEM_OP_ADDR(nor->addr_width, to, 1),
+		SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 0),
+			   SPI_MEM_OP_ADDR(nor->addr_width, to, 0),
 			   SPI_MEM_OP_NO_DUMMY,
-			   SPI_MEM_OP_DATA_OUT(len, buf, 1));
+			   SPI_MEM_OP_DATA_OUT(len, buf, 0));
 	ssize_t nbytes;
 	int error;
 
-	op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->write_proto);
-	op.addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->write_proto);
-	op.data.buswidth = spi_nor_get_protocol_data_nbits(nor->write_proto);
-
 	if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second)
 		op.addr.nbytes = 0;
+
+	spi_nor_spimem_setup_op(nor, &op, nor->write_proto);
 
 	if (spi_nor_spimem_bounce(nor, &op))
 		memcpy(nor->bouncebuf, buf, op.data.nbytes);
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WREN, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WREN, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
 				   SPI_MEM_OP_NO_DATA);
 
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
+
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_WREN,
-						     NULL, 0);
+		ret = spi_nor_controller_ops_write_reg(nor, SPINOR_OP_WREN,
+						       NULL, 0);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRDI, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRDI, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
 				   SPI_MEM_OP_NO_DATA);
 
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
+
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_WRDI,
-						     NULL, 0);
+		ret = spi_nor_controller_ops_write_reg(nor, SPINOR_OP_WRDI,
+						       NULL, 0);
 	}
 
 	if (ret)
···
 * spi_nor_read_sr() - Read the Status Register.
 * @nor:	pointer to 'struct spi_nor'.
 * @sr:		pointer to a DMA-able buffer where the value of the
- *		Status Register will be written.
+ *		Status Register will be written. Should be at least 2 bytes.
 *
 * Return: 0 on success, -errno otherwise.
 */
-static int spi_nor_read_sr(struct spi_nor *nor, u8 *sr)
+int spi_nor_read_sr(struct spi_nor *nor, u8 *sr)
 {
 	int ret;
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDSR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDSR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_IN(1, sr, 1));
+				   SPI_MEM_OP_DATA_IN(1, sr, 0));
+
+		if (nor->reg_proto == SNOR_PROTO_8_8_8_DTR) {
+			op.addr.nbytes = nor->params->rdsr_addr_nbytes;
+			op.dummy.nbytes = nor->params->rdsr_dummy;
+			/*
+			 * We don't want to read only one byte in DTR mode. So,
+			 * read 2 and then discard the second byte.
+			 */
+			op.data.nbytes = 2;
+		}
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->read_reg(nor, SPINOR_OP_RDSR,
-						    sr, 1);
+		ret = spi_nor_controller_ops_read_reg(nor, SPINOR_OP_RDSR, sr,
+						      1);
 	}
 
 	if (ret)
···
 * spi_nor_read_fsr() - Read the Flag Status Register.
 * @nor:	pointer to 'struct spi_nor'
 * @fsr:	pointer to a DMA-able buffer where the value of the
- *		Flag Status Register will be written.
+ *		Flag Status Register will be written. Should be at least 2
+ *		bytes.
 *
 * Return: 0 on success, -errno otherwise.
 */
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDFSR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDFSR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_IN(1, fsr, 1));
+				   SPI_MEM_OP_DATA_IN(1, fsr, 0));
+
+		if (nor->reg_proto == SNOR_PROTO_8_8_8_DTR) {
+			op.addr.nbytes = nor->params->rdsr_addr_nbytes;
+			op.dummy.nbytes = nor->params->rdsr_dummy;
+			/*
+			 * We don't want to read only one byte in DTR mode. So,
+			 * read 2 and then discard the second byte.
+			 */
+			op.data.nbytes = 2;
+		}
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->read_reg(nor, SPINOR_OP_RDFSR,
-						    fsr, 1);
+		ret = spi_nor_controller_ops_read_reg(nor, SPINOR_OP_RDFSR, fsr,
+						      1);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDCR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDCR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_IN(1, cr, 1));
+				   SPI_MEM_OP_DATA_IN(1, cr, 0));
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->read_reg(nor, SPINOR_OP_RDCR, cr, 1);
+		ret = spi_nor_controller_ops_read_reg(nor, SPINOR_OP_RDCR, cr,
+						      1);
 	}
 
 	if (ret)
···
 			SPI_MEM_OP(SPI_MEM_OP_CMD(enable ?
 						  SPINOR_OP_EN4B :
 						  SPINOR_OP_EX4B,
-						  1),
+						  0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
 				   SPI_MEM_OP_NO_DATA);
 
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
+
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor,
-						     enable ? SPINOR_OP_EN4B :
-							      SPINOR_OP_EX4B,
-						     NULL, 0);
+		ret = spi_nor_controller_ops_write_reg(nor,
+						       enable ? SPINOR_OP_EN4B :
+								SPINOR_OP_EX4B,
+						       NULL, 0);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_BRWR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_BRWR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_OUT(1, nor->bouncebuf, 1));
+				   SPI_MEM_OP_DATA_OUT(1, nor->bouncebuf, 0));
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_BRWR,
-						     nor->bouncebuf, 1);
+		ret = spi_nor_controller_ops_write_reg(nor, SPINOR_OP_BRWR,
+						       nor->bouncebuf, 1);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WREAR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WREAR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_OUT(1, nor->bouncebuf, 1));
+				   SPI_MEM_OP_DATA_OUT(1, nor->bouncebuf, 0));
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_WREAR,
-						     nor->bouncebuf, 1);
+		ret = spi_nor_controller_ops_write_reg(nor, SPINOR_OP_WREAR,
+						       nor->bouncebuf, 1);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_XRDSR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_XRDSR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_IN(1, sr, 1));
+				   SPI_MEM_OP_DATA_IN(1, sr, 0));
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->read_reg(nor, SPINOR_OP_XRDSR,
-						    sr, 1);
+		ret = spi_nor_controller_ops_read_reg(nor, SPINOR_OP_XRDSR, sr,
+						      1);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CLSR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CLSR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
 				   SPI_MEM_OP_NO_DATA);
 
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
+
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_CLSR,
-						     NULL, 0);
+		ret = spi_nor_controller_ops_write_reg(nor, SPINOR_OP_CLSR,
+						       NULL, 0);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CLFSR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CLFSR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
 				   SPI_MEM_OP_NO_DATA);
 
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
+
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_CLFSR,
-						     NULL, 0);
+		ret = spi_nor_controller_ops_write_reg(nor, SPINOR_OP_CLFSR,
+						       NULL, 0);
 	}
 
 	if (ret)
···
 *
 * Return: 0 on success, -errno otherwise.
 */
-static int spi_nor_write_sr(struct spi_nor *nor, const u8 *sr, size_t len)
+int spi_nor_write_sr(struct spi_nor *nor, const u8 *sr, size_t len)
 {
 	int ret;
 
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRSR, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRSR, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_OUT(len, sr, 1));
+				   SPI_MEM_OP_DATA_OUT(len, sr, 0));
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_WRSR,
-						     sr, len);
+		ret = spi_nor_controller_ops_write_reg(nor, SPINOR_OP_WRSR, sr,
+						       len);
 	}
 
 	if (ret) {
···
 *
 * Return: 0 on success, -errno otherwise.
 */
-static int spi_nor_write_sr_and_check(struct spi_nor *nor, u8 sr1)
+int spi_nor_write_sr_and_check(struct spi_nor *nor, u8 sr1)
 {
 	if (nor->flags & SNOR_F_HAS_16BIT_SR)
 		return spi_nor_write_16bit_sr_and_check(nor, sr1);
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRSR2, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRSR2, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_OUT(1, sr2, 1));
+				   SPI_MEM_OP_DATA_OUT(1, sr2, 0));
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_WRSR2,
-						     sr2, 1);
+		ret = spi_nor_controller_ops_write_reg(nor, SPINOR_OP_WRSR2,
+						       sr2, 1);
 	}
 
 	if (ret) {
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDSR2, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDSR2, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
-				   SPI_MEM_OP_DATA_IN(1, sr2, 1));
+				   SPI_MEM_OP_DATA_IN(1, sr2, 0));
+
+		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
 
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->read_reg(nor, SPINOR_OP_RDSR2,
-						    sr2, 1);
+		ret = spi_nor_controller_ops_read_reg(nor, SPINOR_OP_RDSR2, sr2,
+						      1);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CHIP_ERASE, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CHIP_ERASE, 0),
 				   SPI_MEM_OP_NO_ADDR,
 				   SPI_MEM_OP_NO_DUMMY,
 				   SPI_MEM_OP_NO_DATA);
 
+		spi_nor_spimem_setup_op(nor, &op, nor->write_proto);
+
 		ret = spi_mem_exec_op(nor->spimem, &op);
 	} else {
-		ret = nor->controller_ops->write_reg(nor, SPINOR_OP_CHIP_ERASE,
-						     NULL, 0);
+		ret = spi_nor_controller_ops_write_reg(nor,
+						       SPINOR_OP_CHIP_ERASE,
+						       NULL, 0);
 	}
 
 	if (ret)
···
 
 	if (nor->spimem) {
 		struct spi_mem_op op =
-			SPI_MEM_OP(SPI_MEM_OP_CMD(nor->erase_opcode, 1),
-				   SPI_MEM_OP_ADDR(nor->addr_width, addr, 1),
+			SPI_MEM_OP(SPI_MEM_OP_CMD(nor->erase_opcode, 0),
+				   SPI_MEM_OP_ADDR(nor->addr_width, addr, 0),
 				   SPI_MEM_OP_NO_DUMMY,
 				   SPI_MEM_OP_NO_DATA);
 
+		spi_nor_spimem_setup_op(nor, &op, nor->write_proto);
+
 		return spi_mem_exec_op(nor->spimem, &op);
 	} else if (nor->controller_ops->erase) {
-		return nor->controller_ops->erase(nor, addr);
+		return spi_nor_controller_ops_erase(nor, addr);
 	}
 
 	/*
···
 		addr >>= 8;
 	}
 
-	return nor->controller_ops->write_reg(nor, nor->erase_opcode,
-					      nor->bouncebuf, nor->addr_width);
+	return spi_nor_controller_ops_write_reg(nor, nor->erase_opcode,
+						nor->bouncebuf, nor->addr_width);
 }
 
 /**
···
 
 /*
  * Erase an address range on the nor chip. The address range may extend
- * one or more erase sectors. Return an error is there is a problem erasing.
+ * one or more erase sectors. Return an error if there is a problem erasing.
 */
 static int spi_nor_erase(struct mtd_info *mtd, struct erase_info *instr)
 {
···
 	return 0;
 }
 
-static void
+void
 spi_nor_set_read_settings(struct spi_nor_read_command *read,
 			  u8 num_mode_clocks,
 			  u8 num_wait_states,
···
 		{ SNOR_HWCAPS_READ_1_8_8,	SNOR_CMD_READ_1_8_8 },
 		{ SNOR_HWCAPS_READ_8_8_8,	SNOR_CMD_READ_8_8_8 },
 		{ SNOR_HWCAPS_READ_1_8_8_DTR,	SNOR_CMD_READ_1_8_8_DTR },
+		{ SNOR_HWCAPS_READ_8_8_8_DTR,	SNOR_CMD_READ_8_8_8_DTR },
 	};
 
 	return spi_nor_hwcaps2cmd(hwcaps, hwcaps_read2cmd,
···
 		{ SNOR_HWCAPS_PP_1_1_8,		SNOR_CMD_PP_1_1_8 },
 		{ SNOR_HWCAPS_PP_1_8_8,		SNOR_CMD_PP_1_8_8 },
 		{ SNOR_HWCAPS_PP_8_8_8,		SNOR_CMD_PP_8_8_8 },
+		{ SNOR_HWCAPS_PP_8_8_8_DTR,	SNOR_CMD_PP_8_8_8_DTR },
 	};
 
 	return spi_nor_hwcaps2cmd(hwcaps, hwcaps_pp2cmd,
···
 *@nor:	pointer to a 'struct spi_nor'
 *@op:	pointer to op template to be checked
 *
- * Returns 0 if operation is supported, -ENOTSUPP otherwise.
+ * Returns 0 if operation is supported, -EOPNOTSUPP otherwise.
 */
 static int spi_nor_spimem_check_op(struct spi_nor *nor,
				    struct spi_mem_op *op)
···
 		op->addr.nbytes = 4;
 		if (!spi_mem_supports_op(nor->spimem, op)) {
 			if (nor->mtd.size > SZ_16M)
-				return -ENOTSUPP;
+				return -EOPNOTSUPP;
 
 			/* If flash size <= 16MB, 3 address bytes are sufficient */
 			op->addr.nbytes = 3;
 			if (!spi_mem_supports_op(nor->spimem, op))
-				return -ENOTSUPP;
+				return -EOPNOTSUPP;
 		}
 
 		return 0;
···
 *@nor:	pointer to a 'struct spi_nor'
 *@read:	pointer to op template to be checked
 *
- * Returns 0 if operation is supported, -ENOTSUPP otherwise.
+ * Returns 0 if operation is supported, -EOPNOTSUPP otherwise.
 */
 static int spi_nor_spimem_check_readop(struct spi_nor *nor,
					const struct spi_nor_read_command *read)
 {
-	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(read->opcode, 1),
-					  SPI_MEM_OP_ADDR(3, 0, 1),
-					  SPI_MEM_OP_DUMMY(0, 1),
-					  SPI_MEM_OP_DATA_IN(0, NULL, 1));
+	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(read->opcode, 0),
+					  SPI_MEM_OP_ADDR(3, 0, 0),
+					  SPI_MEM_OP_DUMMY(1, 0),
+					  SPI_MEM_OP_DATA_IN(1, NULL, 0));
 
-	op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(read->proto);
-	op.addr.buswidth = spi_nor_get_protocol_addr_nbits(read->proto);
-	op.data.buswidth = spi_nor_get_protocol_data_nbits(read->proto);
-	op.dummy.buswidth = op.addr.buswidth;
-	op.dummy.nbytes = (read->num_mode_clocks + read->num_wait_states) *
-			  op.dummy.buswidth / 8;
+	spi_nor_spimem_setup_op(nor, &op, read->proto);
+
+	/* convert the dummy cycles to the number of bytes */
+	op.dummy.nbytes = (nor->read_dummy * op.dummy.buswidth) / 8;
+	if (spi_nor_protocol_is_dtr(nor->read_proto))
+		op.dummy.nbytes *= 2;
 
 	return
spi_nor_spimem_check_op(nor, &op); 2487 2333 } ··· 2492 2338 *@nor: pointer to a 'struct spi_nor' 2493 2339 *@pp: pointer to op template to be checked 2494 2340 * 2495 - * Returns 0 if operation is supported, -ENOTSUPP otherwise. 2341 + * Returns 0 if operation is supported, -EOPNOTSUPP otherwise. 2496 2342 */ 2497 2343 static int spi_nor_spimem_check_pp(struct spi_nor *nor, 2498 2344 const struct spi_nor_pp_command *pp) 2499 2345 { 2500 - struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(pp->opcode, 1), 2501 - SPI_MEM_OP_ADDR(3, 0, 1), 2346 + struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(pp->opcode, 0), 2347 + SPI_MEM_OP_ADDR(3, 0, 0), 2502 2348 SPI_MEM_OP_NO_DUMMY, 2503 - SPI_MEM_OP_DATA_OUT(0, NULL, 1)); 2349 + SPI_MEM_OP_DATA_OUT(1, NULL, 0)); 2504 2350 2505 - op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(pp->proto); 2506 - op.addr.buswidth = spi_nor_get_protocol_addr_nbits(pp->proto); 2507 - op.data.buswidth = spi_nor_get_protocol_data_nbits(pp->proto); 2351 + spi_nor_spimem_setup_op(nor, &op, pp->proto); 2508 2352 2509 2353 return spi_nor_spimem_check_op(nor, &op); 2510 2354 } ··· 2520 2368 struct spi_nor_flash_parameter *params = nor->params; 2521 2369 unsigned int cap; 2522 2370 2523 - /* DTR modes are not supported yet, mask them all. */ 2524 - *hwcaps &= ~SNOR_HWCAPS_DTR; 2525 - 2526 2371 /* X-X-X modes are not supported yet, mask them all. */ 2527 2372 *hwcaps &= ~SNOR_HWCAPS_X_X_X; 2373 + 2374 + /* 2375 + * If the reset line is broken, we do not want to enter a stateful 2376 + * mode. 2377 + */ 2378 + if (nor->flags & SNOR_F_BROKEN_RESET) 2379 + *hwcaps &= ~(SNOR_HWCAPS_X_X_X | SNOR_HWCAPS_X_X_X_DTR); 2528 2380 2529 2381 for (cap = 0; cap < sizeof(*hwcaps) * BITS_PER_BYTE; cap++) { 2530 2382 int rdidx, ppidx; ··· 2693 2537 } 2694 2538 2695 2539 /* 2696 - * Otherwise, the current erase size is still a valid canditate. 2540 + * Otherwise, the current erase size is still a valid candidate. 2697 2541 * Select the biggest valid candidate. 
2698 2542 */ 2699 2543 if (!erase && tested_erase->size) ··· 2784 2628 * controller directly implements the spi_nor interface. 2785 2629 * Yet another reason to switch to spi-mem. 2786 2630 */ 2787 - ignored_mask = SNOR_HWCAPS_X_X_X; 2631 + ignored_mask = SNOR_HWCAPS_X_X_X | SNOR_HWCAPS_X_X_X_DTR; 2788 2632 if (shared_mask & ignored_mask) { 2789 2633 dev_dbg(nor->dev, 2790 2634 "SPI n-n-n protocols are not supported.\n"); ··· 2885 2729 nor->flags |= SNOR_F_HAS_16BIT_SR; 2886 2730 2887 2731 /* Set SPI NOR sizes. */ 2732 + params->writesize = 1; 2888 2733 params->size = (u64)info->sector_size * info->n_sectors; 2889 2734 params->page_size = info->page_size; 2890 2735 ··· 2930 2773 SNOR_PROTO_1_1_8); 2931 2774 } 2932 2775 2776 + if (info->flags & SPI_NOR_OCTAL_DTR_READ) { 2777 + params->hwcaps.mask |= SNOR_HWCAPS_READ_8_8_8_DTR; 2778 + spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_8_8_8_DTR], 2779 + 0, 20, SPINOR_OP_READ_FAST, 2780 + SNOR_PROTO_8_8_8_DTR); 2781 + } 2782 + 2933 2783 /* Page Program settings. */ 2934 2784 params->hwcaps.mask |= SNOR_HWCAPS_PP; 2935 2785 spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP], 2936 2786 SPINOR_OP_PP, SNOR_PROTO_1_1_1); 2787 + 2788 + if (info->flags & SPI_NOR_OCTAL_DTR_PP) { 2789 + params->hwcaps.mask |= SNOR_HWCAPS_PP_8_8_8_DTR; 2790 + /* 2791 + * Since xSPI Page Program opcode is backward compatible with 2792 + * Legacy SPI, use Legacy SPI opcode there as well. 2793 + */ 2794 + spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP_8_8_8_DTR], 2795 + SPINOR_OP_PP, SNOR_PROTO_8_8_8_DTR); 2796 + } 2937 2797 2938 2798 /* 2939 2799 * Sector Erase settings. 
Sort Erase Types in ascending order, with the ··· 3059 2885 3060 2886 spi_nor_manufacturer_init_params(nor); 3061 2887 3062 - if ((nor->info->flags & (SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ)) && 2888 + if ((nor->info->flags & (SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | 2889 + SPI_NOR_OCTAL_READ | SPI_NOR_OCTAL_DTR_READ)) && 3063 2890 !(nor->info->flags & SPI_NOR_SKIP_SFDP)) 3064 2891 spi_nor_sfdp_init_params(nor); 3065 2892 3066 2893 spi_nor_post_sfdp_fixups(nor); 3067 2894 3068 2895 spi_nor_late_init_params(nor); 2896 + 2897 + return 0; 2898 + } 2899 + 2900 + /** spi_nor_octal_dtr_enable() - enable Octal DTR I/O if needed 2901 + * @nor: pointer to a 'struct spi_nor' 2902 + * @enable: whether to enable or disable Octal DTR 2903 + * 2904 + * Return: 0 on success, -errno otherwise. 2905 + */ 2906 + static int spi_nor_octal_dtr_enable(struct spi_nor *nor, bool enable) 2907 + { 2908 + int ret; 2909 + 2910 + if (!nor->params->octal_dtr_enable) 2911 + return 0; 2912 + 2913 + if (!(nor->read_proto == SNOR_PROTO_8_8_8_DTR && 2914 + nor->write_proto == SNOR_PROTO_8_8_8_DTR)) 2915 + return 0; 2916 + 2917 + if (!(nor->flags & SNOR_F_IO_MODE_EN_VOLATILE)) 2918 + return 0; 2919 + 2920 + ret = nor->params->octal_dtr_enable(nor, enable); 2921 + if (ret) 2922 + return ret; 2923 + 2924 + if (enable) 2925 + nor->reg_proto = SNOR_PROTO_8_8_8_DTR; 2926 + else 2927 + nor->reg_proto = SNOR_PROTO_1_1_1; 3069 2928 3070 2929 return 0; 3071 2930 } ··· 3122 2915 } 3123 2916 3124 2917 /** 3125 - * spi_nor_unlock_all() - Unlocks the entire flash memory array. 2918 + * spi_nor_try_unlock_all() - Tries to unlock the entire flash memory array. 3126 2919 * @nor: pointer to a 'struct spi_nor'. 3127 2920 * 3128 2921 * Some SPI NOR flashes are write protected by default after a power-on reset 3129 2922 * cycle, in order to avoid inadvertent writes during power-up. Backward 3130 2923 * compatibility imposes to unlock the entire flash memory array at power-up 3131 2924 * by default. 
2925 + * 2926 + * Unprotecting the entire flash array will fail for boards which are hardware 2927 + * write-protected. Thus any errors are ignored. 3132 2928 */ 3133 - static int spi_nor_unlock_all(struct spi_nor *nor) 2929 + static void spi_nor_try_unlock_all(struct spi_nor *nor) 3134 2930 { 3135 - if (nor->flags & SNOR_F_HAS_LOCK) 3136 - return spi_nor_unlock(&nor->mtd, 0, nor->params->size); 2931 + int ret; 3137 2932 3138 - return 0; 2933 + if (!(nor->flags & SNOR_F_HAS_LOCK)) 2934 + return; 2935 + 2936 + dev_dbg(nor->dev, "Unprotecting entire flash array\n"); 2937 + 2938 + ret = spi_nor_unlock(&nor->mtd, 0, nor->params->size); 2939 + if (ret) 2940 + dev_dbg(nor->dev, "Failed to unlock the entire flash memory array\n"); 3139 2941 } 3140 2942 3141 2943 static int spi_nor_init(struct spi_nor *nor) 3142 2944 { 3143 2945 int err; 2946 + 2947 + err = spi_nor_octal_dtr_enable(nor, true); 2948 + if (err) { 2949 + dev_dbg(nor->dev, "octal mode not supported\n"); 2950 + return err; 2951 + } 3144 2952 3145 2953 err = spi_nor_quad_enable(nor); 3146 2954 if (err) { ··· 3163 2941 return err; 3164 2942 } 3165 2943 3166 - err = spi_nor_unlock_all(nor); 3167 - if (err) { 3168 - dev_dbg(nor->dev, "Failed to unlock the entire flash memory array\n"); 3169 - return err; 3170 - } 2944 + /* 2945 + * Some SPI NOR flashes are write protected by default after a power-on 2946 + * reset cycle, in order to avoid inadvertent writes during power-up. 2947 + * Backward compatibility imposes to unlock the entire flash memory 2948 + * array at power-up by default. Depending on the kernel configuration 2949 + * (1) do nothing, (2) always unlock the entire flash array or (3) 2950 + * unlock the entire flash array only when the software write 2951 + * protection bits are volatile. The latter is indicated by 2952 + * SNOR_F_SWP_IS_VOLATILE. 
2953 + */ 2954 + if (IS_ENABLED(CONFIG_MTD_SPI_NOR_SWP_DISABLE) || 2955 + (IS_ENABLED(CONFIG_MTD_SPI_NOR_SWP_DISABLE_ON_VOLATILE) && 2956 + nor->flags & SNOR_F_SWP_IS_VOLATILE)) 2957 + spi_nor_try_unlock_all(nor); 3171 2958 3172 - if (nor->addr_width == 4 && !(nor->flags & SNOR_F_4B_OPCODES)) { 2959 + if (nor->addr_width == 4 && 2960 + nor->read_proto != SNOR_PROTO_8_8_8_DTR && 2961 + !(nor->flags & SNOR_F_4B_OPCODES)) { 3173 2962 /* 3174 2963 * If the RESET# pin isn't hooked up properly, or the system 3175 2964 * otherwise doesn't perform a reset command in the boot ··· 3194 2961 } 3195 2962 3196 2963 return 0; 2964 + } 2965 + 2966 + static void spi_nor_soft_reset(struct spi_nor *nor) 2967 + { 2968 + struct spi_mem_op op; 2969 + int ret; 2970 + 2971 + op = (struct spi_mem_op)SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_SRSTEN, 0), 2972 + SPI_MEM_OP_NO_DUMMY, 2973 + SPI_MEM_OP_NO_ADDR, 2974 + SPI_MEM_OP_NO_DATA); 2975 + 2976 + spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); 2977 + 2978 + ret = spi_mem_exec_op(nor->spimem, &op); 2979 + if (ret) { 2980 + dev_warn(nor->dev, "Software reset failed: %d\n", ret); 2981 + return; 2982 + } 2983 + 2984 + op = (struct spi_mem_op)SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_SRST, 0), 2985 + SPI_MEM_OP_NO_DUMMY, 2986 + SPI_MEM_OP_NO_ADDR, 2987 + SPI_MEM_OP_NO_DATA); 2988 + 2989 + spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); 2990 + 2991 + ret = spi_mem_exec_op(nor->spimem, &op); 2992 + if (ret) { 2993 + dev_warn(nor->dev, "Software reset failed: %d\n", ret); 2994 + return; 2995 + } 2996 + 2997 + /* 2998 + * Software Reset is not instant, and the delay varies from flash to 2999 + * flash. Looking at a few flashes, most range somewhere below 100 3000 + * microseconds. So, sleep for a range of 200-400 us. 
3001 + */ 3002 + usleep_range(SPI_NOR_SRST_SLEEP_MIN, SPI_NOR_SRST_SLEEP_MAX); 3003 + } 3004 + 3005 + /* mtd suspend handler */ 3006 + static int spi_nor_suspend(struct mtd_info *mtd) 3007 + { 3008 + struct spi_nor *nor = mtd_to_spi_nor(mtd); 3009 + int ret; 3010 + 3011 + /* Disable octal DTR mode if we enabled it. */ 3012 + ret = spi_nor_octal_dtr_enable(nor, false); 3013 + if (ret) 3014 + dev_err(nor->dev, "suspend() failed\n"); 3015 + 3016 + return ret; 3197 3017 } 3198 3018 3199 3019 /* mtd resume handler */ ··· 3268 2982 if (nor->addr_width == 4 && !(nor->flags & SNOR_F_4B_OPCODES) && 3269 2983 nor->flags & SNOR_F_BROKEN_RESET) 3270 2984 nor->params->set_4byte_addr_mode(nor, false); 2985 + 2986 + if (nor->flags & SNOR_F_SOFT_RESET) 2987 + spi_nor_soft_reset(nor); 3271 2988 } 3272 2989 EXPORT_SYMBOL_GPL(spi_nor_restore); 3273 2990 ··· 3295 3006 { 3296 3007 if (nor->addr_width) { 3297 3008 /* already configured from SFDP */ 3009 + } else if (nor->read_proto == SNOR_PROTO_8_8_8_DTR) { 3010 + /* 3011 + * In 8D-8D-8D mode, one byte takes half a cycle to transfer. So 3012 + * in this protocol an odd address width cannot be used because 3013 + * then the address phase would only span a cycle and a half. 3014 + * Half a cycle would be left over. We would then have to start 3015 + * the dummy phase in the middle of a cycle and so too the data 3016 + * phase, and we will end the transaction with half a cycle left 3017 + * over. 3018 + * 3019 + * Force all 8D-8D-8D flashes to use an address width of 4 to 3020 + * avoid this situation. 
3021 + */ 3022 + nor->addr_width = 4; 3298 3023 } else if (nor->info->addr_width) { 3299 3024 nor->addr_width = nor->info->addr_width; 3300 3025 } else { ··· 3449 3146 mtd->name = dev_name(dev); 3450 3147 mtd->priv = nor; 3451 3148 mtd->type = MTD_NORFLASH; 3452 - mtd->writesize = 1; 3149 + mtd->writesize = nor->params->writesize; 3453 3150 mtd->flags = MTD_CAP_NORFLASH; 3454 3151 mtd->size = nor->params->size; 3455 3152 mtd->_erase = spi_nor_erase; 3456 3153 mtd->_read = spi_nor_read; 3154 + mtd->_suspend = spi_nor_suspend; 3457 3155 mtd->_resume = spi_nor_resume; 3458 3156 3459 3157 if (nor->params->locking_ops) { ··· 3475 3171 nor->flags |= SNOR_F_NO_OP_CHIP_ERASE; 3476 3172 if (info->flags & USE_CLSR) 3477 3173 nor->flags |= SNOR_F_USE_CLSR; 3174 + if (info->flags & SPI_NOR_SWP_IS_VOLATILE) 3175 + nor->flags |= SNOR_F_SWP_IS_VOLATILE; 3478 3176 3479 3177 if (info->flags & SPI_NOR_4BIT_BP) { 3480 3178 nor->flags |= SNOR_F_HAS_4BIT_BP; ··· 3506 3200 3507 3201 if (info->flags & SPI_NOR_4B_OPCODES) 3508 3202 nor->flags |= SNOR_F_4B_OPCODES; 3203 + 3204 + if (info->flags & SPI_NOR_IO_MODE_EN_VOLATILE) 3205 + nor->flags |= SNOR_F_IO_MODE_EN_VOLATILE; 3509 3206 3510 3207 ret = spi_nor_set_addr_width(nor); 3511 3208 if (ret) ··· 3545 3236 static int spi_nor_create_read_dirmap(struct spi_nor *nor) 3546 3237 { 3547 3238 struct spi_mem_dirmap_info info = { 3548 - .op_tmpl = SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 1), 3549 - SPI_MEM_OP_ADDR(nor->addr_width, 0, 1), 3550 - SPI_MEM_OP_DUMMY(nor->read_dummy, 1), 3551 - SPI_MEM_OP_DATA_IN(0, NULL, 1)), 3239 + .op_tmpl = SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 0), 3240 + SPI_MEM_OP_ADDR(nor->addr_width, 0, 0), 3241 + SPI_MEM_OP_DUMMY(nor->read_dummy, 0), 3242 + SPI_MEM_OP_DATA_IN(0, NULL, 0)), 3552 3243 .offset = 0, 3553 3244 .length = nor->mtd.size, 3554 3245 }; 3555 3246 struct spi_mem_op *op = &info.op_tmpl; 3556 3247 3557 - /* get transfer protocols. 
*/ 3558 - op->cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->read_proto); 3559 - op->addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->read_proto); 3560 - op->dummy.buswidth = op->addr.buswidth; 3561 - op->data.buswidth = spi_nor_get_protocol_data_nbits(nor->read_proto); 3248 + spi_nor_spimem_setup_op(nor, op, nor->read_proto); 3562 3249 3563 3250 /* convert the dummy cycles to the number of bytes */ 3564 3251 op->dummy.nbytes = (nor->read_dummy * op->dummy.buswidth) / 8; 3252 + if (spi_nor_protocol_is_dtr(nor->read_proto)) 3253 + op->dummy.nbytes *= 2; 3254 + 3255 + /* 3256 + * Since spi_nor_spimem_setup_op() only sets buswidth when the number 3257 + * of data bytes is non-zero, the data buswidth won't be set here. So, 3258 + * do it explicitly. 3259 + */ 3260 + op->data.buswidth = spi_nor_get_protocol_data_nbits(nor->read_proto); 3565 3261 3566 3262 nor->dirmap.rdesc = devm_spi_mem_dirmap_create(nor->dev, nor->spimem, 3567 3263 &info); ··· 3576 3262 static int spi_nor_create_write_dirmap(struct spi_nor *nor) 3577 3263 { 3578 3264 struct spi_mem_dirmap_info info = { 3579 - .op_tmpl = SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 1), 3580 - SPI_MEM_OP_ADDR(nor->addr_width, 0, 1), 3265 + .op_tmpl = SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 0), 3266 + SPI_MEM_OP_ADDR(nor->addr_width, 0, 0), 3581 3267 SPI_MEM_OP_NO_DUMMY, 3582 - SPI_MEM_OP_DATA_OUT(0, NULL, 1)), 3268 + SPI_MEM_OP_DATA_OUT(0, NULL, 0)), 3583 3269 .offset = 0, 3584 3270 .length = nor->mtd.size, 3585 3271 }; 3586 3272 struct spi_mem_op *op = &info.op_tmpl; 3587 3273 3588 - /* get transfer protocols. 
*/ 3589 - op->cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->write_proto); 3590 - op->addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->write_proto); 3591 - op->dummy.buswidth = op->addr.buswidth; 3592 - op->data.buswidth = spi_nor_get_protocol_data_nbits(nor->write_proto); 3593 - 3594 3274 if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second) 3595 3275 op->addr.nbytes = 0; 3276 + 3277 + spi_nor_spimem_setup_op(nor, op, nor->write_proto); 3278 + 3279 + /* 3280 + * Since spi_nor_spimem_setup_op() only sets buswidth when the number 3281 + * of data bytes is non-zero, the data buswidth won't be set here. So, 3282 + * do it explicitly. 3283 + */ 3284 + op->data.buswidth = spi_nor_get_protocol_data_nbits(nor->write_proto); 3596 3285 3597 3286 nor->dirmap.wdesc = devm_spi_mem_dirmap_create(nor->dev, nor->spimem, 3598 3287 &info);
+38
drivers/mtd/spi-nor/core.h
··· 26 26 SNOR_F_HAS_SR_TB_BIT6 = BIT(11), 27 27 SNOR_F_HAS_4BIT_BP = BIT(12), 28 28 SNOR_F_HAS_SR_BP3_BIT6 = BIT(13), 29 + SNOR_F_IO_MODE_EN_VOLATILE = BIT(14), 30 + SNOR_F_SOFT_RESET = BIT(15), 31 + SNOR_F_SWP_IS_VOLATILE = BIT(16), 29 32 }; 30 33 31 34 struct spi_nor_read_command { ··· 65 62 SNOR_CMD_READ_1_8_8, 66 63 SNOR_CMD_READ_8_8_8, 67 64 SNOR_CMD_READ_1_8_8_DTR, 65 + SNOR_CMD_READ_8_8_8_DTR, 68 66 69 67 SNOR_CMD_READ_MAX 70 68 }; ··· 82 78 SNOR_CMD_PP_1_1_8, 83 79 SNOR_CMD_PP_1_8_8, 84 80 SNOR_CMD_PP_8_8_8, 81 + SNOR_CMD_PP_8_8_8_DTR, 85 82 86 83 SNOR_CMD_PP_MAX 87 84 }; ··· 194 189 * Serial Flash Discoverable Parameters (SFDP) tables. 195 190 * 196 191 * @size: the flash memory density in bytes. 192 + * @writesize Minimal writable flash unit size. Defaults to 1. Set to 193 + * ECC unit size for ECC-ed flashes. 197 194 * @page_size: the page size of the SPI NOR flash memory. 195 + * @rdsr_dummy: dummy cycles needed for Read Status Register command. 196 + * @rdsr_addr_nbytes: dummy address bytes needed for Read Status Register 197 + * command. 198 198 * @hwcaps: describes the read and page program hardware 199 199 * capabilities. 200 200 * @reads: read capabilities ordered by priority: the higher index ··· 208 198 * higher index in the array, the higher priority. 209 199 * @erase_map: the erase map parsed from the SFDP Sector Map Parameter 210 200 * Table. 201 + * @octal_dtr_enable: enables SPI NOR octal DTR mode. 211 202 * @quad_enable: enables SPI NOR quad mode. 212 203 * @set_4byte_addr_mode: puts the SPI NOR in 4 byte addressing mode. 
213 204 * @convert_addr: converts an absolute address into something the flash ··· 222 211 */ 223 212 struct spi_nor_flash_parameter { 224 213 u64 size; 214 + u32 writesize; 225 215 u32 page_size; 216 + u8 rdsr_dummy; 217 + u8 rdsr_addr_nbytes; 226 218 227 219 struct spi_nor_hwcaps hwcaps; 228 220 struct spi_nor_read_command reads[SNOR_CMD_READ_MAX]; ··· 233 219 234 220 struct spi_nor_erase_map erase_map; 235 221 222 + int (*octal_dtr_enable)(struct spi_nor *nor, bool enable); 236 223 int (*quad_enable)(struct spi_nor *nor); 237 224 int (*set_4byte_addr_mode)(struct spi_nor *nor, bool enable); 238 225 u32 (*convert_addr)(struct spi_nor *nor, u32 addr); ··· 326 311 * BP3 is bit 6 of status register. 327 312 * Must be used with SPI_NOR_4BIT_BP. 328 313 */ 314 + #define SPI_NOR_OCTAL_DTR_READ BIT(19) /* Flash supports octal DTR Read. */ 315 + #define SPI_NOR_OCTAL_DTR_PP BIT(20) /* Flash supports Octal DTR Page Program */ 316 + #define SPI_NOR_IO_MODE_EN_VOLATILE BIT(21) /* 317 + * Flash enables the best 318 + * available I/O mode via a 319 + * volatile bit. 320 + */ 321 + #define SPI_NOR_SWP_IS_VOLATILE BIT(22) /* 322 + * Flash has volatile software write 323 + * protection bits. Usually these will 324 + * power-up in a write-protected state. 325 + */ 329 326 330 327 /* Part specific fixup hooks. 
*/ 331 328 const struct spi_nor_fixups *fixups; ··· 426 399 extern const struct spi_nor_manufacturer spi_nor_xilinx; 427 400 extern const struct spi_nor_manufacturer spi_nor_xmc; 428 401 402 + void spi_nor_spimem_setup_op(const struct spi_nor *nor, 403 + struct spi_mem_op *op, 404 + const enum spi_nor_protocol proto); 429 405 int spi_nor_write_enable(struct spi_nor *nor); 430 406 int spi_nor_write_disable(struct spi_nor *nor); 431 407 int spi_nor_set_4byte_addr_mode(struct spi_nor *nor, bool enable); ··· 439 409 int spi_nor_sr1_bit6_quad_enable(struct spi_nor *nor); 440 410 int spi_nor_sr2_bit1_quad_enable(struct spi_nor *nor); 441 411 int spi_nor_sr2_bit7_quad_enable(struct spi_nor *nor); 412 + int spi_nor_read_sr(struct spi_nor *nor, u8 *sr); 413 + int spi_nor_write_sr(struct spi_nor *nor, const u8 *sr, size_t len); 414 + int spi_nor_write_sr_and_check(struct spi_nor *nor, u8 sr1); 442 415 443 416 int spi_nor_xread_sr(struct spi_nor *nor, u8 *sr); 444 417 ssize_t spi_nor_read_data(struct spi_nor *nor, loff_t from, size_t len, ··· 451 418 452 419 int spi_nor_hwcaps_read2cmd(u32 hwcaps); 453 420 u8 spi_nor_convert_3to4_read(u8 opcode); 421 + void spi_nor_set_read_settings(struct spi_nor_read_command *read, 422 + u8 num_mode_clocks, 423 + u8 num_wait_states, 424 + u8 opcode, 425 + enum spi_nor_protocol proto); 454 426 void spi_nor_set_pp_settings(struct spi_nor_pp_command *pp, u8 opcode, 455 427 enum spi_nor_protocol proto); 456 428
+1 -1
drivers/mtd/spi-nor/esmt.c
@@ -11,7 +11,7 @@
 static const struct flash_info esmt_parts[] = {
 	/* ESMT */
 	{ "f25l32pa", INFO(0x8c2016, 0, 64 * 1024, 64,
-		      SECT_4K | SPI_NOR_HAS_LOCK) },
+		      SECT_4K | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 	{ "f25l32qa", INFO(0x8c4116, 0, 64 * 1024, 64,
 		      SECT_4K | SPI_NOR_HAS_LOCK) },
 	{ "f25l64qa", INFO(0x8c4117, 0, 64 * 1024, 128,
+6 -13
drivers/mtd/spi-nor/intel.c
@@ -10,23 +10,16 @@
 
 static const struct flash_info intel_parts[] = {
 	/* Intel/Numonyx -- xxxs33b */
-	{ "160s33b", INFO(0x898911, 0, 64 * 1024, 32, 0) },
-	{ "320s33b", INFO(0x898912, 0, 64 * 1024, 64, 0) },
-	{ "640s33b", INFO(0x898913, 0, 64 * 1024, 128, 0) },
-};
-
-static void intel_default_init(struct spi_nor *nor)
-{
-	nor->flags |= SNOR_F_HAS_LOCK;
-}
-
-static const struct spi_nor_fixups intel_fixups = {
-	.default_init = intel_default_init,
+	{ "160s33b", INFO(0x898911, 0, 64 * 1024, 32,
+			  SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
+	{ "320s33b", INFO(0x898912, 0, 64 * 1024, 64,
+			  SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
+	{ "640s33b", INFO(0x898913, 0, 64 * 1024, 128,
+			  SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 };
 
 const struct spi_nor_manufacturer spi_nor_intel = {
 	.name = "intel",
 	.parts = intel_parts,
 	.nparts = ARRAY_SIZE(intel_parts),
-	.fixups = &intel_fixups,
 };
+114 -1
drivers/mtd/spi-nor/micron-st.c
··· 8 8 9 9 #include "core.h" 10 10 11 + #define SPINOR_OP_MT_DTR_RD 0xfd /* Fast Read opcode in DTR mode */ 12 + #define SPINOR_OP_MT_RD_ANY_REG 0x85 /* Read volatile register */ 13 + #define SPINOR_OP_MT_WR_ANY_REG 0x81 /* Write volatile register */ 14 + #define SPINOR_REG_MT_CFR0V 0x00 /* For setting octal DTR mode */ 15 + #define SPINOR_REG_MT_CFR1V 0x01 /* For setting dummy cycles */ 16 + #define SPINOR_MT_OCT_DTR 0xe7 /* Enable Octal DTR. */ 17 + #define SPINOR_MT_EXSPI 0xff /* Enable Extended SPI (default) */ 18 + 19 + static int spi_nor_micron_octal_dtr_enable(struct spi_nor *nor, bool enable) 20 + { 21 + struct spi_mem_op op; 22 + u8 *buf = nor->bouncebuf; 23 + int ret; 24 + 25 + if (enable) { 26 + /* Use 20 dummy cycles for memory array reads. */ 27 + ret = spi_nor_write_enable(nor); 28 + if (ret) 29 + return ret; 30 + 31 + *buf = 20; 32 + op = (struct spi_mem_op) 33 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_MT_WR_ANY_REG, 1), 34 + SPI_MEM_OP_ADDR(3, SPINOR_REG_MT_CFR1V, 1), 35 + SPI_MEM_OP_NO_DUMMY, 36 + SPI_MEM_OP_DATA_OUT(1, buf, 1)); 37 + 38 + ret = spi_mem_exec_op(nor->spimem, &op); 39 + if (ret) 40 + return ret; 41 + 42 + ret = spi_nor_wait_till_ready(nor); 43 + if (ret) 44 + return ret; 45 + } 46 + 47 + ret = spi_nor_write_enable(nor); 48 + if (ret) 49 + return ret; 50 + 51 + if (enable) 52 + *buf = SPINOR_MT_OCT_DTR; 53 + else 54 + *buf = SPINOR_MT_EXSPI; 55 + 56 + op = (struct spi_mem_op) 57 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_MT_WR_ANY_REG, 1), 58 + SPI_MEM_OP_ADDR(enable ? 3 : 4, 59 + SPINOR_REG_MT_CFR0V, 1), 60 + SPI_MEM_OP_NO_DUMMY, 61 + SPI_MEM_OP_DATA_OUT(1, buf, 1)); 62 + 63 + if (!enable) 64 + spi_nor_spimem_setup_op(nor, &op, SNOR_PROTO_8_8_8_DTR); 65 + 66 + ret = spi_mem_exec_op(nor->spimem, &op); 67 + if (ret) 68 + return ret; 69 + 70 + /* Read flash ID to make sure the switch was successful. */ 71 + op = (struct spi_mem_op) 72 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDID, 1), 73 + SPI_MEM_OP_NO_ADDR, 74 + SPI_MEM_OP_DUMMY(enable ? 
8 : 0, 1), 75 + SPI_MEM_OP_DATA_IN(round_up(nor->info->id_len, 2), 76 + buf, 1)); 77 + 78 + if (enable) 79 + spi_nor_spimem_setup_op(nor, &op, SNOR_PROTO_8_8_8_DTR); 80 + 81 + ret = spi_mem_exec_op(nor->spimem, &op); 82 + if (ret) 83 + return ret; 84 + 85 + if (memcmp(buf, nor->info->id, nor->info->id_len)) 86 + return -EINVAL; 87 + 88 + return 0; 89 + } 90 + 91 + static void mt35xu512aba_default_init(struct spi_nor *nor) 92 + { 93 + nor->params->octal_dtr_enable = spi_nor_micron_octal_dtr_enable; 94 + } 95 + 96 + static void mt35xu512aba_post_sfdp_fixup(struct spi_nor *nor) 97 + { 98 + /* Set the Fast Read settings. */ 99 + nor->params->hwcaps.mask |= SNOR_HWCAPS_READ_8_8_8_DTR; 100 + spi_nor_set_read_settings(&nor->params->reads[SNOR_CMD_READ_8_8_8_DTR], 101 + 0, 20, SPINOR_OP_MT_DTR_RD, 102 + SNOR_PROTO_8_8_8_DTR); 103 + 104 + nor->cmd_ext_type = SPI_NOR_EXT_REPEAT; 105 + nor->params->rdsr_dummy = 8; 106 + nor->params->rdsr_addr_nbytes = 0; 107 + 108 + /* 109 + * The BFPT quad enable field is set to a reserved value so the quad 110 + * enable function is ignored by spi_nor_parse_bfpt(). Make sure we 111 + * disable it. 112 + */ 113 + nor->params->quad_enable = NULL; 114 + } 115 + 116 + static struct spi_nor_fixups mt35xu512aba_fixups = { 117 + .default_init = mt35xu512aba_default_init, 118 + .post_sfdp = mt35xu512aba_post_sfdp_fixup, 119 + }; 120 + 11 121 static const struct flash_info micron_parts[] = { 12 122 { "mt35xu512aba", INFO(0x2c5b1a, 0, 128 * 1024, 512, 13 123 SECT_4K | USE_FSR | SPI_NOR_OCTAL_READ | 14 - SPI_NOR_4B_OPCODES) }, 124 + SPI_NOR_4B_OPCODES | SPI_NOR_OCTAL_DTR_READ | 125 + SPI_NOR_OCTAL_DTR_PP | 126 + SPI_NOR_IO_MODE_EN_VOLATILE) 127 + .fixups = &mt35xu512aba_fixups}, 15 128 { "mt35xu02g", INFO(0x2c5b1c, 0, 128 * 1024, 2048, 16 129 SECT_4K | USE_FSR | SPI_NOR_OCTAL_READ | 17 130 SPI_NOR_4B_OPCODES) },
+170 -2
drivers/mtd/spi-nor/sfdp.c
··· 4 4 * Copyright (C) 2014, Freescale Semiconductor, Inc. 5 5 */ 6 6 7 + #include <linux/bitfield.h> 7 8 #include <linux/slab.h> 8 9 #include <linux/sort.h> 9 10 #include <linux/mtd/spi-nor.h> ··· 20 19 #define SFDP_BFPT_ID 0xff00 /* Basic Flash Parameter Table */ 21 20 #define SFDP_SECTOR_MAP_ID 0xff81 /* Sector Map Table */ 22 21 #define SFDP_4BAIT_ID 0xff84 /* 4-byte Address Instruction Table */ 22 + #define SFDP_PROFILE1_ID 0xff05 /* xSPI Profile 1.0 table. */ 23 + #define SFDP_SCCR_MAP_ID 0xff87 /* 24 + * Status, Control and Configuration 25 + * Register Map. 26 + */ 23 27 24 28 #define SFDP_SIGNATURE 0x50444653U 25 29 ··· 65 59 66 60 struct sfdp_bfpt_erase { 67 61 /* 68 - * The half-word at offset <shift> in DWORD <dwoard> encodes the 62 + * The half-word at offset <shift> in DWORD <dword> encodes the 69 63 * op code and erase sector size to be used by Sector Erase commands. 70 64 */ 71 65 u32 dword; ··· 608 602 break; 609 603 } 610 604 605 + /* Soft Reset support. */ 606 + if (bfpt.dwords[BFPT_DWORD(16)] & BFPT_DWORD16_SWRST_EN_RST) 607 + nor->flags |= SNOR_F_SOFT_RESET; 608 + 611 609 /* Stop here if not JESD216 rev C or later. */ 612 610 if (bfpt_header->length == BFPT_DWORD_MAX_JESD216B) 613 611 return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt, 614 612 params); 613 + /* 8D-8D-8D command extension. 
 */
+	switch (bfpt.dwords[BFPT_DWORD(18)] & BFPT_DWORD18_CMD_EXT_MASK) {
+	case BFPT_DWORD18_CMD_EXT_REP:
+		nor->cmd_ext_type = SPI_NOR_EXT_REPEAT;
+		break;
+
+	case BFPT_DWORD18_CMD_EXT_INV:
+		nor->cmd_ext_type = SPI_NOR_EXT_INVERT;
+		break;
+
+	case BFPT_DWORD18_CMD_EXT_RES:
+		dev_dbg(nor->dev, "Reserved command extension used\n");
+		break;
+
+	case BFPT_DWORD18_CMD_EXT_16B:
+		dev_dbg(nor->dev, "16-bit opcodes not supported\n");
+		return -EOPNOTSUPP;
+	}

 	return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt, params);
 }
···
 }

 	/* 4BAIT is the only SFDP table that indicates page program support. */
-	if (pp_hwcaps & SNOR_HWCAPS_PP)
+	if (pp_hwcaps & SNOR_HWCAPS_PP) {
 		spi_nor_set_pp_settings(&params_pp[SNOR_CMD_PP],
 					SPINOR_OP_PP_4B, SNOR_PROTO_1_1_1);
+		/*
+		 * Since xSPI Page Program opcode is backward compatible with
+		 * Legacy SPI, use Legacy SPI opcode there as well.
+		 */
+		spi_nor_set_pp_settings(&params_pp[SNOR_CMD_PP_8_8_8_DTR],
+					SPINOR_OP_PP_4B, SNOR_PROTO_8_8_8_DTR);
+	}
 	if (pp_hwcaps & SNOR_HWCAPS_PP_1_1_4)
 		spi_nor_set_pp_settings(&params_pp[SNOR_CMD_PP_1_1_4],
 					SPINOR_OP_PP_1_1_4_4B,
···
 	nor->flags |= SNOR_F_4B_OPCODES | SNOR_F_HAS_4BAIT;

 	/* fall through */
+out:
+	kfree(dwords);
+	return ret;
+}
+
+#define PROFILE1_DWORD1_RDSR_ADDR_BYTES		BIT(29)
+#define PROFILE1_DWORD1_RDSR_DUMMY		BIT(28)
+#define PROFILE1_DWORD1_RD_FAST_CMD		GENMASK(15, 8)
+#define PROFILE1_DWORD4_DUMMY_200MHZ		GENMASK(11, 7)
+#define PROFILE1_DWORD5_DUMMY_166MHZ		GENMASK(31, 27)
+#define PROFILE1_DWORD5_DUMMY_133MHZ		GENMASK(21, 17)
+#define PROFILE1_DWORD5_DUMMY_100MHZ		GENMASK(11, 7)
+
+/**
+ * spi_nor_parse_profile1() - parse the xSPI Profile 1.0 table
+ * @nor:		pointer to a 'struct spi_nor'
+ * @profile1_header:	pointer to the 'struct sfdp_parameter_header' describing
+ *			the Profile 1.0 Table length and version.
+ * @params:		pointer to the 'struct spi_nor_flash_parameter' to be.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+static int spi_nor_parse_profile1(struct spi_nor *nor,
+				  const struct sfdp_parameter_header *profile1_header,
+				  struct spi_nor_flash_parameter *params)
+{
+	u32 *dwords, addr;
+	size_t len;
+	int ret;
+	u8 dummy, opcode;
+
+	len = profile1_header->length * sizeof(*dwords);
+	dwords = kmalloc(len, GFP_KERNEL);
+	if (!dwords)
+		return -ENOMEM;
+
+	addr = SFDP_PARAM_HEADER_PTP(profile1_header);
+	ret = spi_nor_read_sfdp(nor, addr, len, dwords);
+	if (ret)
+		goto out;
+
+	le32_to_cpu_array(dwords, profile1_header->length);
+
+	/* Get 8D-8D-8D fast read opcode and dummy cycles. */
+	opcode = FIELD_GET(PROFILE1_DWORD1_RD_FAST_CMD, dwords[0]);
+
+	/* Set the Read Status Register dummy cycles and dummy address bytes. */
+	if (dwords[0] & PROFILE1_DWORD1_RDSR_DUMMY)
+		params->rdsr_dummy = 8;
+	else
+		params->rdsr_dummy = 4;
+
+	if (dwords[0] & PROFILE1_DWORD1_RDSR_ADDR_BYTES)
+		params->rdsr_addr_nbytes = 4;
+	else
+		params->rdsr_addr_nbytes = 0;
+
+	/*
+	 * We don't know what speed the controller is running at. Find the
+	 * dummy cycles for the fastest frequency the flash can run at to be
+	 * sure we are never short of dummy cycles. A value of 0 means the
+	 * frequency is not supported.
+	 *
+	 * Default to PROFILE1_DUMMY_DEFAULT if we don't find anything, and let
+	 * flashes set the correct value if needed in their fixup hooks.
+	 */
+	dummy = FIELD_GET(PROFILE1_DWORD4_DUMMY_200MHZ, dwords[3]);
+	if (!dummy)
+		dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_166MHZ, dwords[4]);
+	if (!dummy)
+		dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_133MHZ, dwords[4]);
+	if (!dummy)
+		dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_100MHZ, dwords[4]);
+	if (!dummy)
+		dev_dbg(nor->dev,
+			"Can't find dummy cycles from Profile 1.0 table\n");
+
+	/* Round up to an even value to avoid tripping controllers up. */
+	dummy = round_up(dummy, 2);
+
+	/* Update the fast read settings. */
+	spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_8_8_8_DTR],
+				  0, dummy, opcode,
+				  SNOR_PROTO_8_8_8_DTR);
+
+out:
+	kfree(dwords);
+	return ret;
+}
+
+#define SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE	BIT(31)
+
+/**
+ * spi_nor_parse_sccr() - Parse the Status, Control and Configuration Register
+ *			  Map.
+ * @nor:		pointer to a 'struct spi_nor'
+ * @sccr_header:	pointer to the 'struct sfdp_parameter_header' describing
+ *			the SCCR Map table length and version.
+ * @params:		pointer to the 'struct spi_nor_flash_parameter' to be.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+static int spi_nor_parse_sccr(struct spi_nor *nor,
+			      const struct sfdp_parameter_header *sccr_header,
+			      struct spi_nor_flash_parameter *params)
+{
+	u32 *dwords, addr;
+	size_t len;
+	int ret;
+
+	len = sccr_header->length * sizeof(*dwords);
+	dwords = kmalloc(len, GFP_KERNEL);
+	if (!dwords)
+		return -ENOMEM;
+
+	addr = SFDP_PARAM_HEADER_PTP(sccr_header);
+	ret = spi_nor_read_sfdp(nor, addr, len, dwords);
+	if (ret)
+		goto out;
+
+	le32_to_cpu_array(dwords, sccr_header->length);
+
+	if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE, dwords[22]))
+		nor->flags |= SNOR_F_IO_MODE_EN_VOLATILE;
+
 out:
 	kfree(dwords);
 	return ret;
···

 	case SFDP_4BAIT_ID:
 		err = spi_nor_parse_4bait(nor, param_header, params);
+		break;
+
+	case SFDP_PROFILE1_ID:
+		err = spi_nor_parse_profile1(nor, param_header, params);
+		break;
+
+	case SFDP_SCCR_MAP_ID:
+		err = spi_nor_parse_sccr(nor, param_header, params);
 		break;

 	default:
drivers/mtd/spi-nor/sfdp.h (+8)
···
 #define BFPT_DWORD15_QER_SR2_BIT1_NO_RD	(0x4UL << 20)
 #define BFPT_DWORD15_QER_SR2_BIT1	(0x5UL << 20) /* Spansion */

+#define BFPT_DWORD16_SWRST_EN_RST	BIT(12)
+
+#define BFPT_DWORD18_CMD_EXT_MASK	GENMASK(30, 29)
+#define BFPT_DWORD18_CMD_EXT_REP	(0x0UL << 29) /* Repeat */
+#define BFPT_DWORD18_CMD_EXT_INV	(0x1UL << 29) /* Invert */
+#define BFPT_DWORD18_CMD_EXT_RES	(0x2UL << 29) /* Reserved */
+#define BFPT_DWORD18_CMD_EXT_16B	(0x3UL << 29) /* 16-bit opcode */
+
 struct sfdp_parameter_header {
 	u8 id_lsb;
 	u8 minor;
drivers/mtd/spi-nor/spansion.c (+172)
···

 #include "core.h"

+#define SPINOR_OP_RD_ANY_REG			0x65	/* Read any register */
+#define SPINOR_OP_WR_ANY_REG			0x71	/* Write any register */
+#define SPINOR_REG_CYPRESS_CFR2V		0x00800003
+#define SPINOR_REG_CYPRESS_CFR2V_MEMLAT_11_24	0xb
+#define SPINOR_REG_CYPRESS_CFR3V		0x00800004
+#define SPINOR_REG_CYPRESS_CFR3V_PGSZ		BIT(4) /* Page size. */
+#define SPINOR_REG_CYPRESS_CFR5V		0x00800006
+#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN	0x3
+#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS	0
+#define SPINOR_OP_CYPRESS_RD_FAST		0xee
+
+/**
+ * spi_nor_cypress_octal_dtr_enable() - Enable octal DTR on Cypress flashes.
+ * @nor:	pointer to a 'struct spi_nor'
+ * @enable:	whether to enable or disable Octal DTR
+ *
+ * This also sets the memory access latency cycles to 24 to allow the flash to
+ * run at up to 200MHz.
+ *
+ * Return: 0 on success, -errno otherwise.
+ */
+static int spi_nor_cypress_octal_dtr_enable(struct spi_nor *nor, bool enable)
+{
+	struct spi_mem_op op;
+	u8 *buf = nor->bouncebuf;
+	int ret;
+
+	if (enable) {
+		/* Use 24 dummy cycles for memory array reads. */
+		ret = spi_nor_write_enable(nor);
+		if (ret)
+			return ret;
+
+		*buf = SPINOR_REG_CYPRESS_CFR2V_MEMLAT_11_24;
+		op = (struct spi_mem_op)
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WR_ANY_REG, 1),
+				   SPI_MEM_OP_ADDR(3, SPINOR_REG_CYPRESS_CFR2V,
+						   1),
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_DATA_OUT(1, buf, 1));
+
+		ret = spi_mem_exec_op(nor->spimem, &op);
+		if (ret)
+			return ret;
+
+		ret = spi_nor_wait_till_ready(nor);
+		if (ret)
+			return ret;
+
+		nor->read_dummy = 24;
+	}
+
+	/* Set/unset the octal and DTR enable bits. */
+	ret = spi_nor_write_enable(nor);
+	if (ret)
+		return ret;
+
+	if (enable)
+		*buf = SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN;
+	else
+		*buf = SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS;
+
+	op = (struct spi_mem_op)
+		SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WR_ANY_REG, 1),
+			   SPI_MEM_OP_ADDR(enable ? 3 : 4,
+					   SPINOR_REG_CYPRESS_CFR5V,
+					   1),
+			   SPI_MEM_OP_NO_DUMMY,
+			   SPI_MEM_OP_DATA_OUT(1, buf, 1));
+
+	if (!enable)
+		spi_nor_spimem_setup_op(nor, &op, SNOR_PROTO_8_8_8_DTR);
+
+	ret = spi_mem_exec_op(nor->spimem, &op);
+	if (ret)
+		return ret;
+
+	/* Read flash ID to make sure the switch was successful. */
+	op = (struct spi_mem_op)
+		SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDID, 1),
+			   SPI_MEM_OP_ADDR(enable ? 4 : 0, 0, 1),
+			   SPI_MEM_OP_DUMMY(enable ? 3 : 0, 1),
+			   SPI_MEM_OP_DATA_IN(round_up(nor->info->id_len, 2),
+					      buf, 1));
+
+	if (enable)
+		spi_nor_spimem_setup_op(nor, &op, SNOR_PROTO_8_8_8_DTR);
+
+	ret = spi_mem_exec_op(nor->spimem, &op);
+	if (ret)
+		return ret;
+
+	if (memcmp(buf, nor->info->id, nor->info->id_len))
+		return -EINVAL;
+
+	return 0;
+}
+
+static void s28hs512t_default_init(struct spi_nor *nor)
+{
+	nor->params->octal_dtr_enable = spi_nor_cypress_octal_dtr_enable;
+	nor->params->writesize = 16;
+}
+
+static void s28hs512t_post_sfdp_fixup(struct spi_nor *nor)
+{
+	/*
+	 * On older versions of the flash the xSPI Profile 1.0 table has the
+	 * 8D-8D-8D Fast Read opcode as 0x00. But it actually should be 0xEE.
+	 */
+	if (nor->params->reads[SNOR_CMD_READ_8_8_8_DTR].opcode == 0)
+		nor->params->reads[SNOR_CMD_READ_8_8_8_DTR].opcode =
+			SPINOR_OP_CYPRESS_RD_FAST;
+
+	/* This flash is also missing the 4-byte Page Program opcode bit. */
+	spi_nor_set_pp_settings(&nor->params->page_programs[SNOR_CMD_PP],
+				SPINOR_OP_PP_4B, SNOR_PROTO_1_1_1);
+	/*
+	 * Since xSPI Page Program opcode is backward compatible with
+	 * Legacy SPI, use Legacy SPI opcode there as well.
+	 */
+	spi_nor_set_pp_settings(&nor->params->page_programs[SNOR_CMD_PP_8_8_8_DTR],
+				SPINOR_OP_PP_4B, SNOR_PROTO_8_8_8_DTR);
+
+	/*
+	 * The xSPI Profile 1.0 table advertises the number of additional
+	 * address bytes needed for Read Status Register command as 0 but the
+	 * actual value for that is 4.
+	 */
+	nor->params->rdsr_addr_nbytes = 4;
+}
+
+static int s28hs512t_post_bfpt_fixup(struct spi_nor *nor,
+				     const struct sfdp_parameter_header *bfpt_header,
+				     const struct sfdp_bfpt *bfpt,
+				     struct spi_nor_flash_parameter *params)
+{
+	/*
+	 * The BFPT table advertises a 512B page size but the page size is
+	 * actually configurable (with the default being 256B). Read from
+	 * CFR3V[4] and set the correct size.
+	 */
+	struct spi_mem_op op =
+		SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RD_ANY_REG, 1),
+			   SPI_MEM_OP_ADDR(3, SPINOR_REG_CYPRESS_CFR3V, 1),
+			   SPI_MEM_OP_NO_DUMMY,
+			   SPI_MEM_OP_DATA_IN(1, nor->bouncebuf, 1));
+	int ret;
+
+	ret = spi_mem_exec_op(nor->spimem, &op);
+	if (ret)
+		return ret;
+
+	if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR3V_PGSZ)
+		params->page_size = 512;
+	else
+		params->page_size = 256;
+
+	return 0;
+}
+
+static struct spi_nor_fixups s28hs512t_fixups = {
+	.default_init = s28hs512t_default_init,
+	.post_sfdp = s28hs512t_post_sfdp_fixup,
+	.post_bfpt = s28hs512t_post_bfpt_fixup,
+};
+
 static int
 s25fs_s_post_bfpt_fixups(struct spi_nor *nor,
 			 const struct sfdp_parameter_header *bfpt_header,
···
 			       SPI_NOR_4B_OPCODES) },
 	{ "cy15x104q",  INFO6(0x042cc2, 0x7f7f7f, 512 * 1024, 1,
 			      SPI_NOR_NO_ERASE) },
+	{ "s28hs512t",  INFO(0x345b1a, 0, 256 * 1024, 256,
+			     SECT_4K | SPI_NOR_OCTAL_DTR_READ |
+			     SPI_NOR_OCTAL_DTR_PP)
+	  .fixups = &s28hs512t_fixups,
+	},
 };

 static void spansion_post_sfdp_fixups(struct spi_nor *nor)
drivers/mtd/spi-nor/sst.c (+14 -18)
···
 static const struct flash_info sst_parts[] = {
 	/* SST -- large erase sizes are "overlays", "sectors" are 4K */
 	{ "sst25vf040b", INFO(0xbf258d, 0, 64 * 1024, 8,
-			      SECT_4K | SST_WRITE) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 	{ "sst25vf080b", INFO(0xbf258e, 0, 64 * 1024, 16,
-			      SECT_4K | SST_WRITE) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 	{ "sst25vf016b", INFO(0xbf2541, 0, 64 * 1024, 32,
-			      SECT_4K | SST_WRITE) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 	{ "sst25vf032b", INFO(0xbf254a, 0, 64 * 1024, 64,
-			      SECT_4K | SST_WRITE) },
-	{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128, SECT_4K) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
+	{ "sst25vf064c", INFO(0xbf254b, 0, 64 * 1024, 128,
+			      SECT_4K | SPI_NOR_4BIT_BP | SPI_NOR_HAS_LOCK |
+			      SPI_NOR_SWP_IS_VOLATILE) },
 	{ "sst25wf512",  INFO(0xbf2501, 0, 64 * 1024, 1,
-			      SECT_4K | SST_WRITE) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 	{ "sst25wf010",  INFO(0xbf2502, 0, 64 * 1024, 2,
-			      SECT_4K | SST_WRITE) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 	{ "sst25wf020",  INFO(0xbf2503, 0, 64 * 1024, 4,
-			      SECT_4K | SST_WRITE) },
-	{ "sst25wf020a", INFO(0x621612, 0, 64 * 1024, 4, SECT_4K) },
-	{ "sst25wf040b", INFO(0x621613, 0, 64 * 1024, 8, SECT_4K) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
+	{ "sst25wf020a", INFO(0x621612, 0, 64 * 1024, 4, SECT_4K | SPI_NOR_HAS_LOCK) },
+	{ "sst25wf040b", INFO(0x621613, 0, 64 * 1024, 8, SECT_4K | SPI_NOR_HAS_LOCK) },
 	{ "sst25wf040",  INFO(0xbf2504, 0, 64 * 1024, 8,
-			      SECT_4K | SST_WRITE) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 	{ "sst25wf080",  INFO(0xbf2505, 0, 64 * 1024, 16,
-			      SECT_4K | SST_WRITE) },
+			      SECT_4K | SST_WRITE | SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) },
 	{ "sst26wf016b", INFO(0xbf2651, 0, 64 * 1024, 32,
 			      SECT_4K | SPI_NOR_DUAL_READ |
 			      SPI_NOR_QUAD_READ) },
···
 	return ret;
 }

-static void sst_default_init(struct spi_nor *nor)
-{
-	nor->flags |= SNOR_F_HAS_LOCK;
-}
-
 static void sst_post_sfdp_fixups(struct spi_nor *nor)
 {
 	if (nor->info->flags & SST_WRITE)
···
 }

 static const struct spi_nor_fixups sst_fixups = {
-	.default_init = sst_default_init,
 	.post_sfdp = sst_post_sfdp_fixups,
 };
drivers/mtd/tests/mtd_nandecctest.c (+15 -16)
···
 #include <linux/string.h>
 #include <linux/bitops.h>
 #include <linux/slab.h>
-#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/nand-ecc-sw-hamming.h>

 #include "mtd_test.h"
···
 static int no_bit_error_verify(void *error_data, void *error_ecc,
 			       void *correct_data, const size_t size)
 {
+	bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
 	unsigned char calc_ecc[3];
 	int ret;

-	__nand_calculate_ecc(error_data, size, calc_ecc,
-			     IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-	ret = __nand_correct_data(error_data, error_ecc, calc_ecc, size,
-				  IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+	ecc_sw_hamming_calculate(error_data, size, calc_ecc, sm_order);
+	ret = ecc_sw_hamming_correct(error_data, error_ecc, calc_ecc, size,
+				     sm_order);
 	if (ret == 0 && !memcmp(correct_data, error_data, size))
 		return 0;
···
 static int single_bit_error_correct(void *error_data, void *error_ecc,
 				    void *correct_data, const size_t size)
 {
+	bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
 	unsigned char calc_ecc[3];
 	int ret;

-	__nand_calculate_ecc(error_data, size, calc_ecc,
-			     IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-	ret = __nand_correct_data(error_data, error_ecc, calc_ecc, size,
-				  IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+	ecc_sw_hamming_calculate(error_data, size, calc_ecc, sm_order);
+	ret = ecc_sw_hamming_correct(error_data, error_ecc, calc_ecc, size,
+				     sm_order);
 	if (ret == 1 && !memcmp(correct_data, error_data, size))
 		return 0;
···
 static int double_bit_error_detect(void *error_data, void *error_ecc,
 				   void *correct_data, const size_t size)
 {
+	bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
 	unsigned char calc_ecc[3];
 	int ret;

-	__nand_calculate_ecc(error_data, size, calc_ecc,
-			     IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-	ret = __nand_correct_data(error_data, error_ecc, calc_ecc, size,
-				  IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
+	ecc_sw_hamming_calculate(error_data, size, calc_ecc, sm_order);
+	ret = ecc_sw_hamming_correct(error_data, error_ecc, calc_ecc, size,
+				     sm_order);

 	return (ret == -EBADMSG) ? 0 : -EINVAL;
 }
···
 static int nand_ecc_test_run(const size_t size)
 {
+	bool sm_order = IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC);
 	int i;
 	int err = 0;
 	void *error_data;
···
 	}

 	prandom_bytes(correct_data, size);
-	__nand_calculate_ecc(correct_data, size, correct_ecc,
-			     IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING_SMC));
-
+	ecc_sw_hamming_calculate(correct_data, size, correct_ecc, sm_order);
 	for (i = 0; i < ARRAY_SIZE(nand_ecc_test); i++) {
 		nand_ecc_test[i].prepare(error_data, error_ecc,
 					 correct_data, correct_ecc, size);
drivers/mtd/ubi/build.c (+1)
···
  * struct mtd_dev_param - MTD device parameter description data structure.
  * @name: MTD character device node path, MTD device name, or MTD device number
  *	  string
+ * @ubi_num: UBI number
  * @vid_hdr_offs: VID header offset
  * @max_beb_per1024: maximum expected number of bad PEBs per 1024 PEBs
  */
drivers/mtd/ubi/eba.c (+2 -1)
···
  * @ubi: UBI device description object
  * @from: physical eraseblock number from where to copy
  * @to: physical eraseblock number where to copy
- * @vid_hdr: VID header of the @from physical eraseblock
+ * @vidb: data structure from where the VID header is derived
  *
  * This function copies logical eraseblock from physical eraseblock @from to
  * physical eraseblock @to. The @vid_hdr buffer may be changed by this
···
 /**
  * print_rsvd_warning - warn about not having enough reserved PEBs.
  * @ubi: UBI device description object
+ * @ai: UBI attach info object
  *
  * This is a helper function for 'ubi_eba_init()' which is called when UBI
  * cannot reserve enough PEBs for bad block handling. This function makes a
drivers/mtd/ubi/gluebi.c (+1 -1)
···
  * gluebi_notify - UBI notification handler.
  * @nb: registered notifier block
  * @l: notification type
- * @ptr: pointer to the &struct ubi_notification object
+ * @ns_ptr: pointer to the &struct ubi_notification object
  */
 static int gluebi_notify(struct notifier_block *nb, unsigned long l,
 			 void *ns_ptr)
drivers/mtd/ubi/kapi.c (+1 -1)
···
  * ubi_leb_read_sg - read data into a scatter gather list.
  * @desc: volume descriptor
  * @lnum: logical eraseblock number to read from
- * @buf: buffer where to store the read data
+ * @sgl: UBI scatter gather list to store the read data
  * @offset: offset within the logical eraseblock to read from
  * @len: how many bytes to read
  * @check: whether UBI has to check the read data's CRC or not.
drivers/mtd/ubi/wl.c (+1 -2)
···
  * @vol_id: the volume ID that last used this PEB
  * @lnum: the last used logical eraseblock number for the PEB
  * @torture: if the physical eraseblock has to be tortured
+ * @nested: denotes whether the work_sem is already held in read mode
  *
  * This function returns zero in case of success and a %-ENOMEM in case of
  * failure.
···
  * __erase_worker - physical eraseblock erase worker function.
  * @ubi: UBI device description object
  * @wl_wrk: the work object
- * @shutdown: non-zero if the worker has to free memory and exit
- *	      because the WL sub-system is shutting down
  *
  * This function erases a physical eraseblock and perform torture testing if
  * needed. It also takes care about marking the physical eraseblock bad if
include/linux/mtd/nand-ecc-sw-bch.h (+73)
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright © 2011 Ivan Djelic <ivan.djelic@parrot.com>
+ *
+ * This file is the header for the NAND BCH ECC implementation.
+ */
+
+#ifndef __MTD_NAND_ECC_SW_BCH_H__
+#define __MTD_NAND_ECC_SW_BCH_H__
+
+#include <linux/mtd/nand.h>
+#include <linux/bch.h>
+
+/**
+ * struct nand_ecc_sw_bch_conf - private software BCH ECC engine structure
+ * @req_ctx: Save request context and tweak the original request to fit the
+ *	     engine needs
+ * @code_size: Number of bytes needed to store a code (one code per step)
+ * @nsteps: Number of steps
+ * @calc_buf: Buffer to use when calculating ECC bytes
+ * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip
+ * @bch: BCH control structure
+ * @errloc: error location array
+ * @eccmask: XOR ecc mask, allows erased pages to be decoded as valid
+ */
+struct nand_ecc_sw_bch_conf {
+	struct nand_ecc_req_tweak_ctx req_ctx;
+	unsigned int code_size;
+	unsigned int nsteps;
+	u8 *calc_buf;
+	u8 *code_buf;
+	struct bch_control *bch;
+	unsigned int *errloc;
+	unsigned char *eccmask;
+};
+
+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_BCH)
+
+int nand_ecc_sw_bch_calculate(struct nand_device *nand,
+			      const unsigned char *buf, unsigned char *code);
+int nand_ecc_sw_bch_correct(struct nand_device *nand, unsigned char *buf,
+			    unsigned char *read_ecc, unsigned char *calc_ecc);
+int nand_ecc_sw_bch_init_ctx(struct nand_device *nand);
+void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand);
+struct nand_ecc_engine *nand_ecc_sw_bch_get_engine(void);
+
+#else /* !CONFIG_MTD_NAND_ECC_SW_BCH */
+
+static inline int nand_ecc_sw_bch_calculate(struct nand_device *nand,
+					    const unsigned char *buf,
+					    unsigned char *code)
+{
+	return -ENOTSUPP;
+}
+
+static inline int nand_ecc_sw_bch_correct(struct nand_device *nand,
+					  unsigned char *buf,
+					  unsigned char *read_ecc,
+					  unsigned char *calc_ecc)
+{
+	return -ENOTSUPP;
+}
+
+static inline int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
+{
+	return -ENOTSUPP;
+}
+
+static inline void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand) {}
+
+#endif /* CONFIG_MTD_NAND_ECC_SW_BCH */
+
+#endif /* __MTD_NAND_ECC_SW_BCH_H__ */
include/linux/mtd/nand-ecc-sw-hamming.h (+91)
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2000-2010 Steven J. Hill <sjhill@realitydiluted.com>
+ *			   David Woodhouse <dwmw2@infradead.org>
+ *			   Thomas Gleixner <tglx@linutronix.de>
+ *
+ * This file is the header for the NAND Hamming ECC implementation.
+ */
+
+#ifndef __MTD_NAND_ECC_SW_HAMMING_H__
+#define __MTD_NAND_ECC_SW_HAMMING_H__
+
+#include <linux/mtd/nand.h>
+
+/**
+ * struct nand_ecc_sw_hamming_conf - private software Hamming ECC engine structure
+ * @req_ctx: Save request context and tweak the original request to fit the
+ *	     engine needs
+ * @code_size: Number of bytes needed to store a code (one code per step)
+ * @nsteps: Number of steps
+ * @calc_buf: Buffer to use when calculating ECC bytes
+ * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip
+ * @sm_order: Smart Media special ordering
+ */
+struct nand_ecc_sw_hamming_conf {
+	struct nand_ecc_req_tweak_ctx req_ctx;
+	unsigned int code_size;
+	unsigned int nsteps;
+	u8 *calc_buf;
+	u8 *code_buf;
+	unsigned int sm_order;
+};
+
+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING)
+
+int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand);
+void nand_ecc_sw_hamming_cleanup_ctx(struct nand_device *nand);
+int ecc_sw_hamming_calculate(const unsigned char *buf, unsigned int step_size,
+			     unsigned char *code, bool sm_order);
+int nand_ecc_sw_hamming_calculate(struct nand_device *nand,
+				  const unsigned char *buf,
+				  unsigned char *code);
+int ecc_sw_hamming_correct(unsigned char *buf, unsigned char *read_ecc,
+			   unsigned char *calc_ecc, unsigned int step_size,
+			   bool sm_order);
+int nand_ecc_sw_hamming_correct(struct nand_device *nand, unsigned char *buf,
+				unsigned char *read_ecc,
+				unsigned char *calc_ecc);
+
+#else /* !CONFIG_MTD_NAND_ECC_SW_HAMMING */
+
+static inline int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
+{
+	return -ENOTSUPP;
+}
+
+static inline void nand_ecc_sw_hamming_cleanup_ctx(struct nand_device *nand) {}
+
+static inline int ecc_sw_hamming_calculate(const unsigned char *buf,
+					   unsigned int step_size,
+					   unsigned char *code, bool sm_order)
+{
+	return -ENOTSUPP;
+}
+
+static inline int nand_ecc_sw_hamming_calculate(struct nand_device *nand,
+						const unsigned char *buf,
+						unsigned char *code)
+{
+	return -ENOTSUPP;
+}
+
+static inline int ecc_sw_hamming_correct(unsigned char *buf,
+					 unsigned char *read_ecc,
+					 unsigned char *calc_ecc,
+					 unsigned int step_size, bool sm_order)
+{
+	return -ENOTSUPP;
+}
+
+static inline int nand_ecc_sw_hamming_correct(struct nand_device *nand,
+					      unsigned char *buf,
+					      unsigned char *read_ecc,
+					      unsigned char *calc_ecc)
+{
+	return -ENOTSUPP;
+}
+
+#endif /* CONFIG_MTD_NAND_ECC_SW_HAMMING */
+
+#endif /* __MTD_NAND_ECC_SW_HAMMING_H__ */
include/linux/mtd/nand.h (+56)
···
 int nand_ecc_finish_io_req(struct nand_device *nand,
 			   struct nand_page_io_req *req);
 bool nand_ecc_is_strong_enough(struct nand_device *nand);
+struct nand_ecc_engine *nand_ecc_get_sw_engine(struct nand_device *nand);
+struct nand_ecc_engine *nand_ecc_get_on_die_hw_engine(struct nand_device *nand);
+
+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING)
+struct nand_ecc_engine *nand_ecc_sw_hamming_get_engine(void);
+#else
+static inline struct nand_ecc_engine *nand_ecc_sw_hamming_get_engine(void)
+{
+	return NULL;
+}
+#endif /* CONFIG_MTD_NAND_ECC_SW_HAMMING */
+
+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_BCH)
+struct nand_ecc_engine *nand_ecc_sw_bch_get_engine(void);
+#else
+static inline struct nand_ecc_engine *nand_ecc_sw_bch_get_engine(void)
+{
+	return NULL;
+}
+#endif /* CONFIG_MTD_NAND_ECC_SW_BCH */
+
+/**
+ * struct nand_ecc_req_tweak_ctx - Help for automatically tweaking requests
+ * @orig_req: Pointer to the original IO request
+ * @nand: Related NAND device, to have access to its memory organization
+ * @page_buffer_size: Real size of the page buffer to use (can be set by the
+ *		      user before the tweaking mechanism initialization)
+ * @oob_buffer_size: Real size of the OOB buffer to use (can be set by the
+ *		     user before the tweaking mechanism initialization)
+ * @spare_databuf: Data bounce buffer
+ * @spare_oobbuf: OOB bounce buffer
+ * @bounce_data: Flag indicating a data bounce buffer is used
+ * @bounce_oob: Flag indicating an OOB bounce buffer is used
+ */
+struct nand_ecc_req_tweak_ctx {
+	struct nand_page_io_req orig_req;
+	struct nand_device *nand;
+	unsigned int page_buffer_size;
+	unsigned int oob_buffer_size;
+	void *spare_databuf;
+	void *spare_oobbuf;
+	bool bounce_data;
+	bool bounce_oob;
+};
+
+int nand_ecc_init_req_tweaking(struct nand_ecc_req_tweak_ctx *ctx,
+			       struct nand_device *nand);
+void nand_ecc_cleanup_req_tweaking(struct nand_ecc_req_tweak_ctx *ctx);
+void nand_ecc_tweak_req(struct nand_ecc_req_tweak_ctx *ctx,
+			struct nand_page_io_req *req);
+void nand_ecc_restore_req(struct nand_ecc_req_tweak_ctx *ctx,
+			  struct nand_page_io_req *req);

 /**
  * struct nand_ecc - Information relative to the ECC
···
 bool nanddev_isreserved(struct nand_device *nand, const struct nand_pos *pos);
 int nanddev_erase(struct nand_device *nand, const struct nand_pos *pos);
 int nanddev_markbad(struct nand_device *nand, const struct nand_pos *pos);
+
+/* ECC related functions */
+int nanddev_ecc_engine_init(struct nand_device *nand);
+void nanddev_ecc_engine_cleanup(struct nand_device *nand);

 /* BBT related functions */
 enum nand_bbt_block_status {
include/linux/mtd/nand_bch.h (-66)
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright © 2011 Ivan Djelic <ivan.djelic@parrot.com>
- *
- * This file is the header for the NAND BCH ECC implementation.
- */
-
-#ifndef __MTD_NAND_BCH_H__
-#define __MTD_NAND_BCH_H__
-
-struct mtd_info;
-struct nand_chip;
-struct nand_bch_control;
-
-#if IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_BCH)
-
-static inline int mtd_nand_has_bch(void) { return 1; }
-
-/*
- * Calculate BCH ecc code
- */
-int nand_bch_calculate_ecc(struct nand_chip *chip, const u_char *dat,
-			   u_char *ecc_code);
-
-/*
- * Detect and correct bit errors
- */
-int nand_bch_correct_data(struct nand_chip *chip, u_char *dat,
-			  u_char *read_ecc, u_char *calc_ecc);
-/*
- * Initialize BCH encoder/decoder
- */
-struct nand_bch_control *nand_bch_init(struct mtd_info *mtd);
-/*
- * Release BCH encoder/decoder resources
- */
-void nand_bch_free(struct nand_bch_control *nbc);
-
-#else /* !CONFIG_MTD_NAND_ECC_SW_BCH */
-
-static inline int mtd_nand_has_bch(void) { return 0; }
-
-static inline int
-nand_bch_calculate_ecc(struct nand_chip *chip, const u_char *dat,
-		       u_char *ecc_code)
-{
-	return -1;
-}
-
-static inline int
-nand_bch_correct_data(struct nand_chip *chip, unsigned char *buf,
-		      unsigned char *read_ecc, unsigned char *calc_ecc)
-{
-	return -ENOTSUPP;
-}
-
-static inline struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
-{
-	return NULL;
-}
-
-static inline void nand_bch_free(struct nand_bch_control *nbc) {}
-
-#endif /* CONFIG_MTD_NAND_ECC_SW_BCH */
-
-#endif /* __MTD_NAND_BCH_H__ */
include/linux/mtd/nand_ecc.h (-39)
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2000-2010 Steven J. Hill <sjhill@realitydiluted.com>
- *			   David Woodhouse <dwmw2@infradead.org>
- *			   Thomas Gleixner <tglx@linutronix.de>
- *
- * This file is the header for the ECC algorithm.
- */
-
-#ifndef __MTD_NAND_ECC_H__
-#define __MTD_NAND_ECC_H__
-
-struct nand_chip;
-
-/*
- * Calculate 3 byte ECC code for eccsize byte block
- */
-void __nand_calculate_ecc(const u_char *dat, unsigned int eccsize,
-			  u_char *ecc_code, bool sm_order);
-
-/*
- * Calculate 3 byte ECC code for 256/512 byte block
- */
-int nand_calculate_ecc(struct nand_chip *chip, const u_char *dat,
-		       u_char *ecc_code);
-
-/*
- * Detect and correct a 1 bit error for eccsize byte block
- */
-int __nand_correct_data(u_char *dat, u_char *read_ecc, u_char *calc_ecc,
-			unsigned int eccsize, bool sm_order);
-
-/*
- * Detect and correct a 1 bit error for 256/512 byte block
- */
-int nand_correct_data(struct nand_chip *chip, u_char *dat, u_char *read_ecc,
-		      u_char *calc_ecc);
-
-#endif /* __MTD_NAND_ECC_H__ */
include/linux/mtd/rawnand.h (+16 -3)
···
  * @prepad:	padding information for syndrome based ECC generators
  * @postpad:	padding information for syndrome based ECC generators
  * @options:	ECC specific options (see NAND_ECC_XXX flags defined above)
- * @priv:	pointer to private ECC control data
  * @calc_buf:	buffer for calculated ECC, size is oobsize.
  * @code_buf:	buffer for ECC read from flash, size is oobsize.
  * @hwctl:	function to control hardware ECC generator. Must only
···
 	int prepad;
 	int postpad;
 	unsigned int options;
-	void *priv;
 	u8 *calc_buf;
 	u8 *code_buf;
 	void (*hwctl)(struct nand_chip *chip, int mode);
···
 }

 /**
- * Check if the opcode's address should be sent only on the lower 8 bits
+ * nand_opcode_8bits - Check if the opcode's address should be sent only on the
+ *		       lower 8 bits
  * @command: opcode to check
  */
 static inline int nand_opcode_8bits(unsigned int command)
···
 	}
 	return 0;
 }
+
+int rawnand_sw_hamming_init(struct nand_chip *chip);
+int rawnand_sw_hamming_calculate(struct nand_chip *chip,
+				 const unsigned char *buf,
+				 unsigned char *code);
+int rawnand_sw_hamming_correct(struct nand_chip *chip,
+			       unsigned char *buf,
+			       unsigned char *read_ecc,
+			       unsigned char *calc_ecc);
+void rawnand_sw_hamming_cleanup(struct nand_chip *chip);
+int rawnand_sw_bch_init(struct nand_chip *chip);
+int rawnand_sw_bch_correct(struct nand_chip *chip, unsigned char *buf,
+			   unsigned char *read_ecc, unsigned char *calc_ecc);
+void rawnand_sw_bch_cleanup(struct nand_chip *chip);

 int nand_check_erased_ecc_chunk(void *data, int datalen,
 				void *ecc, int ecclen,
-1
include/linux/mtd/sharpsl.h
···
  #define _MTD_SHARPSL_H

  #include <linux/mtd/rawnand.h>
- #include <linux/mtd/nand_ecc.h>
  #include <linux/mtd/partitions.h>

  struct sharpsl_nand_platform_data {
+41 -14
include/linux/mtd/spi-nor.h
···
  #define SPINOR_OP_CLFSR		0x50	/* Clear flag status register */
  #define SPINOR_OP_RDEAR		0xc8	/* Read Extended Address Register */
  #define SPINOR_OP_WREAR		0xc5	/* Write Extended Address Register */
+ #define SPINOR_OP_SRSTEN	0x66	/* Software Reset Enable */
+ #define SPINOR_OP_SRST		0x99	/* Software Reset */

  /* 4-byte address opcodes - used on Spansion and some Macronix flashes. */
  #define SPINOR_OP_READ_4B	0x13	/* Read data bytes (low frequency) */
···
	SNOR_PROTO_1_2_2_DTR = SNOR_PROTO_DTR(1, 2, 2),
	SNOR_PROTO_1_4_4_DTR = SNOR_PROTO_DTR(1, 4, 4),
	SNOR_PROTO_1_8_8_DTR = SNOR_PROTO_DTR(1, 8, 8),
+	SNOR_PROTO_8_8_8_DTR = SNOR_PROTO_DTR(8, 8, 8),
  };

  static inline bool spi_nor_protocol_is_dtr(enum spi_nor_protocol proto)
···
   * then Quad SPI protocols before Dual SPI protocols, Fast Read and lastly
   * (Slow) Read.
   */
- #define SNOR_HWCAPS_READ_MASK		GENMASK(14, 0)
+ #define SNOR_HWCAPS_READ_MASK		GENMASK(15, 0)
  #define SNOR_HWCAPS_READ		BIT(0)
  #define SNOR_HWCAPS_READ_FAST		BIT(1)
  #define SNOR_HWCAPS_READ_1_1_1_DTR	BIT(2)
···
  #define SNOR_HWCAPS_READ_4_4_4		BIT(9)
  #define SNOR_HWCAPS_READ_1_4_4_DTR	BIT(10)

- #define SNOR_HWCAPS_READ_OCTAL		GENMASK(14, 11)
+ #define SNOR_HWCAPS_READ_OCTAL		GENMASK(15, 11)
  #define SNOR_HWCAPS_READ_1_1_8		BIT(11)
  #define SNOR_HWCAPS_READ_1_8_8		BIT(12)
  #define SNOR_HWCAPS_READ_8_8_8		BIT(13)
  #define SNOR_HWCAPS_READ_1_8_8_DTR	BIT(14)
+ #define SNOR_HWCAPS_READ_8_8_8_DTR	BIT(15)

  /*
   * Page Program capabilities.
···
   * JEDEC/SFDP standard to define them. Also at this moment no SPI flash memory
   * implements such commands.
   */
- #define SNOR_HWCAPS_PP_MASK		GENMASK(22, 16)
- #define SNOR_HWCAPS_PP			BIT(16)
+ #define SNOR_HWCAPS_PP_MASK		GENMASK(23, 16)
+ #define SNOR_HWCAPS_PP			BIT(16)

- #define SNOR_HWCAPS_PP_QUAD		GENMASK(19, 17)
- #define SNOR_HWCAPS_PP_1_1_4		BIT(17)
- #define SNOR_HWCAPS_PP_1_4_4		BIT(18)
- #define SNOR_HWCAPS_PP_4_4_4		BIT(19)
+ #define SNOR_HWCAPS_PP_QUAD		GENMASK(19, 17)
+ #define SNOR_HWCAPS_PP_1_1_4		BIT(17)
+ #define SNOR_HWCAPS_PP_1_4_4		BIT(18)
+ #define SNOR_HWCAPS_PP_4_4_4		BIT(19)

- #define SNOR_HWCAPS_PP_OCTAL		GENMASK(22, 20)
- #define SNOR_HWCAPS_PP_1_1_8		BIT(20)
- #define SNOR_HWCAPS_PP_1_8_8		BIT(21)
- #define SNOR_HWCAPS_PP_8_8_8		BIT(22)
+ #define SNOR_HWCAPS_PP_OCTAL		GENMASK(23, 20)
+ #define SNOR_HWCAPS_PP_1_1_8		BIT(20)
+ #define SNOR_HWCAPS_PP_1_8_8		BIT(21)
+ #define SNOR_HWCAPS_PP_8_8_8		BIT(22)
+ #define SNOR_HWCAPS_PP_8_8_8_DTR	BIT(23)

  #define SNOR_HWCAPS_X_X_X	(SNOR_HWCAPS_READ_2_2_2 |	\
				 SNOR_HWCAPS_READ_4_4_4 |	\
···
				 SNOR_HWCAPS_PP_4_4_4 |		\
				 SNOR_HWCAPS_PP_8_8_8)

+ #define SNOR_HWCAPS_X_X_X_DTR	(SNOR_HWCAPS_READ_8_8_8_DTR |	\
+				 SNOR_HWCAPS_PP_8_8_8_DTR)
+
  #define SNOR_HWCAPS_DTR	(SNOR_HWCAPS_READ_1_1_1_DTR |	\
			 SNOR_HWCAPS_READ_1_2_2_DTR |	\
			 SNOR_HWCAPS_READ_1_4_4_DTR |	\
-			 SNOR_HWCAPS_READ_1_8_8_DTR)
+			 SNOR_HWCAPS_READ_1_8_8_DTR |	\
+			 SNOR_HWCAPS_READ_8_8_8_DTR)

  #define SNOR_HWCAPS_ALL	(SNOR_HWCAPS_READ_MASK |	\
			 SNOR_HWCAPS_PP_MASK)
···
	int (*erase)(struct spi_nor *nor, loff_t offs);
  };

+ /**
+  * enum spi_nor_cmd_ext - describes the command opcode extension in DTR mode
+  * @SPI_NOR_EXT_NONE: no extension. This is the default, and is used in Legacy
+  *		      SPI mode
+  * @SPI_NOR_EXT_REPEAT: the extension is same as the opcode
+  * @SPI_NOR_EXT_INVERT: the extension is the bitwise inverse of the opcode
+  * @SPI_NOR_EXT_HEX: the extension is any hex value. The command and opcode
+  *		     combine to form a 16-bit opcode.
+  */
+ enum spi_nor_cmd_ext {
+	SPI_NOR_EXT_NONE = 0,
+	SPI_NOR_EXT_REPEAT,
+	SPI_NOR_EXT_INVERT,
+	SPI_NOR_EXT_HEX,
+ };
+
  /*
   * Forward declarations that are used internally by the core and manufacturer
   * drivers.
···
   * @program_opcode: the program opcode
   * @sst_write_second: used by the SST write operation
   * @flags: flag options for the current SPI NOR (SNOR_F_*)
+  * @cmd_ext_type: the command opcode extension type for DTR mode.
   * @read_proto: the SPI protocol for read operations
   * @write_proto: the SPI protocol for write operations
   * @reg_proto: the SPI protocol for read_reg/write_reg/erase operations
···
	enum spi_nor_protocol reg_proto;
	bool sst_write_second;
	u32 flags;
+	enum spi_nor_cmd_ext cmd_ext_type;

	const struct spi_nor_controller_ops *controller_ops;
···
   * @name: the chip type name
   * @hwcaps: the hardware capabilities supported by the controller driver
   *
-  * The drivers can use this fuction to scan the SPI NOR.
+  * The drivers can use this function to scan the SPI NOR.
   * In the scanning, it will try to get all the necessary information to
   * fill the mtd_info{} and the spi_nor{}.
   *
+9
include/linux/mtd/spinand.h
···
  #define SPINAND_HAS_CR_FEAT_BIT		BIT(1)

  /**
+  * struct spinand_ondie_ecc_conf - private SPI-NAND on-die ECC engine structure
+  * @status: status of the last wait operation that will be used in case
+  *	    ->get_status() is not populated by the spinand device.
+  */
+ struct spinand_ondie_ecc_conf {
+	u8 status;
+ };
+
+ /**
   * struct spinand_info - Structure used to describe SPI NAND chips
   * @model: model name
   * @devid: device ID
-19
include/linux/platform_data/mtd-mxc_nand.h
···
- /* SPDX-License-Identifier: GPL-2.0-or-later */
- /*
-  * Copyright 2004-2007 Freescale Semiconductor, Inc. All Rights Reserved.
-  * Copyright 2008 Sascha Hauer, kernel@pengutronix.de
-  */
-
- #ifndef __ASM_ARCH_NAND_H
- #define __ASM_ARCH_NAND_H
-
- #include <linux/mtd/partitions.h>
-
- struct mxc_nand_platform_data {
-	unsigned int width;		/* data bus width in bytes */
-	unsigned int hw_ecc:1;		/* 0 if suppress hardware ECC */
-	unsigned int flash_bbt:1;	/* set to 1 to use a flash based bbt */
-	struct mtd_partition *parts;	/* partition table */
-	int nr_parts;			/* size of parts */
- };
- #endif /* __ASM_ARCH_NAND_H */