Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mtd/spi-mem-ecc-for-5.18' into mtd/next

Topic branch bringing in changes related to the support of ECC engines
that can be used by SPI controllers to manage SPI NANDs, and possibly
by parallel NAND controllers as well. In particular, it brings support
for the Macronix ECC engine, which can be used with the Macronix SPI
controller.

The changes touch the NAND core, the NAND ECC core, the spi-mem layer
and an SPI controller driver; they also add a new NAND ECC driver, as
well as a number of binding updates.

Binding changes:
* Vendor prefixes: Clarify Macronix prefix
* SPI NAND: Convert spi-nand description file to yaml
* Raw NAND chip: Create a NAND chip description
* Raw NAND controller:
- Harmonize the property types
- Fix a comment in the examples
- Fix the reg property description
* Describe Macronix NAND ECC engine
* Macronix SPI controller:
- Document the nand-ecc-engine property
- Convert to yaml
- The interrupt property is not mandatory

NAND core changes:
* ECC:
- Add infrastructure to support hardware engines
- Add a new helper to retrieve the ECC context
- Provide a helper to retrieve a pipelined engine device

NAND-ECC changes:
* Macronix ECC engine:
- Add Macronix external ECC engine support
- Support SPI pipelined mode

SPI-NAND core changes:
* Delay a little bit the dirmap creation
* Create direct mapping descriptors for ECC operations

SPI-NAND driver changes:
* macronix: Use random program load

SPI changes:
* Macronix SPI controller:
- Fix the transmit path
- Create a helper to configure the controller before an operation
- Create a helper to ease the start of an operation
- Add support for direct mapping
- Add support for pipelined ECC operations
* spi-mem:
- Introduce a capability structure
- Check the controller extra capabilities
- cadence-quadspi/mxic: Provide capability structures
- Kill the spi_mem_dtr_supports_op() helper
- Add an ecc parameter to the spi_mem_op structure

+1730 -201
+77
Documentation/devicetree/bindings/mtd/mxicy,nand-ecc-engine.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/mtd/mxicy,nand-ecc-engine.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Macronix NAND ECC engine device tree bindings
+
+maintainers:
+  - Miquel Raynal <miquel.raynal@bootlin.com>
+
+properties:
+  compatible:
+    const: mxicy,nand-ecc-engine-rev3
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    /* External configuration */
+    spi_controller0: spi@43c30000 {
+        compatible = "mxicy,mx25f0a-spi";
+        reg = <0x43c30000 0x10000>, <0xa0000000 0x4000000>;
+        reg-names = "regs", "dirmap";
+        clocks = <&clkwizard 0>, <&clkwizard 1>, <&clkc 15>;
+        clock-names = "send_clk", "send_dly_clk", "ps_clk";
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        flash@0 {
+            compatible = "spi-nand";
+            reg = <0>;
+            nand-ecc-engine = <&ecc_engine0>;
+        };
+    };
+
+    ecc_engine0: ecc@43c40000 {
+        compatible = "mxicy,nand-ecc-engine-rev3";
+        reg = <0x43c40000 0x10000>;
+    };
+
+  - |
+    /* Pipelined configuration */
+    spi_controller1: spi@43c30000 {
+        compatible = "mxicy,mx25f0a-spi";
+        reg = <0x43c30000 0x10000>, <0xa0000000 0x4000000>;
+        reg-names = "regs", "dirmap";
+        clocks = <&clkwizard 0>, <&clkwizard 1>, <&clkc 15>;
+        clock-names = "send_clk", "send_dly_clk", "ps_clk";
+        #address-cells = <1>;
+        #size-cells = <0>;
+        nand-ecc-engine = <&ecc_engine1>;
+
+        flash@0 {
+            compatible = "spi-nand";
+            reg = <0>;
+            nand-ecc-engine = <&spi_controller1>;
+        };
+    };
+
+    ecc_engine1: ecc@43c40000 {
+        compatible = "mxicy,nand-ecc-engine-rev3";
+        reg = <0x43c40000 0x10000>;
+    };
+70
Documentation/devicetree/bindings/mtd/nand-chip.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/mtd/nand-chip.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: NAND Chip and NAND Controller Generic Binding
+
+maintainers:
+  - Miquel Raynal <miquel.raynal@bootlin.com>
+
+description: |
+  This file covers the generic description of a NAND chip. It implies that the
+  bus interface should not be taken into account: both raw NAND devices and
+  SPI-NAND devices are concerned by this description.
+
+properties:
+  reg:
+    description:
+      Contains the chip-select IDs.
+
+  nand-ecc-engine:
+    description: |
+      A phandle on the hardware ECC engine if any. There are
+      basically three possibilities:
+      1/ The ECC engine is part of the NAND controller, in this
+         case the phandle should reference the parent node.
+      2/ The ECC engine is part of the NAND part (on-die), in this
+         case the phandle should reference the node itself.
+      3/ The ECC engine is external, in this case the phandle should
+         reference the specific ECC engine node.
+    $ref: /schemas/types.yaml#/definitions/phandle
+
+  nand-use-soft-ecc-engine:
+    description: Use a software ECC engine.
+    type: boolean
+
+  nand-no-ecc-engine:
+    description: Do not use any ECC correction.
+    type: boolean
+
+  nand-ecc-algo:
+    description:
+      Desired ECC algorithm.
+    $ref: /schemas/types.yaml#/definitions/string
+    enum: [hamming, bch, rs]
+
+  nand-ecc-strength:
+    description:
+      Maximum number of bits that can be corrected per ECC step.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 1
+
+  nand-ecc-step-size:
+    description:
+      Number of data bytes covered by a single ECC step.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 1
+
+  secure-regions:
+    description:
+      Regions in the NAND chip which are protected using a secure element
+      like Trustzone. This property contains the start address and size of
+      the secure regions present.
+    $ref: /schemas/types.yaml#/definitions/uint64-matrix
+
+required:
+  - reg
+
+additionalProperties: true
+11 -59
Documentation/devicetree/bindings/mtd/nand-controller.yaml
···
   ranges: true

   cs-gpios:
-    minItems: 1
-    maxItems: 8
     description:
       Array of chip-select available to the controller. The first
       entries are a 1:1 mapping of the available chip-select on the
···
       chip-select as needed may follow and should be phandles of GPIO
       lines. 'reg' entries of the NAND chip subnodes become indexes of
       this array when this property is present.
+    minItems: 1
+    maxItems: 8

 patternProperties:
   "^nand@[a-f0-9]$":
     type: object
+    $ref: "nand-chip.yaml#"
+
     properties:
       reg:
         description:
-          Contains the native Ready/Busy IDs.
-
-      nand-ecc-engine:
-        allOf:
-          - $ref: /schemas/types.yaml#/definitions/phandle
-        description: |
-          A phandle on the hardware ECC engine if any. There are
-          basically three possibilities:
-          1/ The ECC engine is part of the NAND controller, in this
-             case the phandle should reference the parent node.
-          2/ The ECC engine is part of the NAND part (on-die), in this
-             case the phandle should reference the node itself.
-          3/ The ECC engine is external, in this case the phandle should
-             reference the specific ECC engine node.
-
-      nand-use-soft-ecc-engine:
-        type: boolean
-        description: Use a software ECC engine.
-
-      nand-no-ecc-engine:
-        type: boolean
-        description: Do not use any ECC correction.
+          Contains the chip-select IDs.

       nand-ecc-placement:
-        allOf:
-          - $ref: /schemas/types.yaml#/definitions/string
-          - enum: [ oob, interleaved ]
         description:
           Location of the ECC bytes. This location is unknown by default
           but can be explicitly set to "oob", if all ECC bytes are
           known to be stored in the OOB area, or "interleaved" if ECC
           bytes will be interleaved with regular data in the main area.
-
-      nand-ecc-algo:
-        description:
-          Desired ECC algorithm.
         $ref: /schemas/types.yaml#/definitions/string
-        enum: [hamming, bch, rs]
+        enum: [ oob, interleaved ]

       nand-bus-width:
         description:
···
         default: 8

       nand-on-flash-bbt:
-        $ref: /schemas/types.yaml#/definitions/flag
         description:
           With this property, the OS will search the device for a Bad
           Block Table (BBT). If not found, it will create one, reserve
···
           few pages of all the blocks will be scanned at boot time to
           find Bad Block Markers (BBM). These markers will help to
           build a volatile BBT in RAM.
-
-      nand-ecc-strength:
-        description:
-          Maximum number of bits that can be corrected per ECC step.
-        $ref: /schemas/types.yaml#/definitions/uint32
-        minimum: 1
-
-      nand-ecc-step-size:
-        description:
-          Number of data bytes covered by a single ECC step.
-        $ref: /schemas/types.yaml#/definitions/uint32
-        minimum: 1
+        $ref: /schemas/types.yaml#/definitions/flag

       nand-ecc-maximize:
-        $ref: /schemas/types.yaml#/definitions/flag
         description:
           Whether or not the ECC strength should be maximized. The
           maximum ECC strength is both controller and chip
···
           constraint into account. This is particularly useful when
           only the in-band area is used by the upper layers, and you
           want to make your NAND as reliable as possible.
+        $ref: /schemas/types.yaml#/definitions/flag

       nand-is-boot-medium:
-        $ref: /schemas/types.yaml#/definitions/flag
         description:
           Whether or not the NAND chip is a boot medium. Drivers might
           use this information to select ECC algorithms supported by
           the boot ROM or similar restrictions.
+        $ref: /schemas/types.yaml#/definitions/flag

       nand-rb:
-        $ref: /schemas/types.yaml#/definitions/uint32-array
         description:
           Contains the native Ready/Busy IDs.
+        $ref: /schemas/types.yaml#/definitions/uint32-array

       rb-gpios:
         description:
···
           depends on the number of R/B pins exposed by the flash) for the
           Ready/Busy pins. Active state refers to the NAND ready state and
           should be set to GPIOD_ACTIVE_HIGH unless the signal is inverted.
-
-      secure-regions:
-        $ref: /schemas/types.yaml#/definitions/uint64-matrix
-        description:
-          Regions in the NAND chip which are protected using a secure element
-          like Trustzone. This property contains the start address and size of
-          the secure regions present.

     required:
       - reg
···

     nand@0 {
       reg = <0>; /* Native CS */
-      nand-use-soft-ecc-engine;
-      nand-ecc-algo = "bch";
-
-      /* controller specific properties */
+      /* NAND chip specific properties */
     };

     nand@1 {
-5
Documentation/devicetree/bindings/mtd/spi-nand.txt
-SPI NAND flash
-
-Required properties:
-- compatible: should be "spi-nand"
-- reg: should encode the chip-select line used to access the NAND chip
+27
Documentation/devicetree/bindings/mtd/spi-nand.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/mtd/spi-nand.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: SPI-NAND flash device tree bindings
+
+maintainers:
+  - Miquel Raynal <miquel.raynal@bootlin.com>
+
+allOf:
+  - $ref: "nand-chip.yaml#"
+
+properties:
+  compatible:
+    const: spi-nand
+
+  reg:
+    description: Encode the chip-select line on the SPI bus
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+unevaluatedProperties: false
+65
Documentation/devicetree/bindings/spi/mxicy,mx25f0a-spi.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/spi/mxicy,mx25f0a-spi.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Macronix SPI controller device tree bindings
+
+maintainers:
+  - Miquel Raynal <miquel.raynal@bootlin.com>
+
+allOf:
+  - $ref: "spi-controller.yaml#"
+
+properties:
+  compatible:
+    const: mxicy,mx25f0a-spi
+
+  reg:
+    minItems: 2
+    maxItems: 2
+
+  reg-names:
+    items:
+      - const: regs
+      - const: dirmap
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    minItems: 3
+    maxItems: 3
+
+  clock-names:
+    items:
+      - const: send_clk
+      - const: send_dly_clk
+      - const: ps_clk
+
+  nand-ecc-engine:
+    description: NAND ECC engine used by the SPI controller in order to perform
+      on-the-fly correction when using a SPI-NAND memory.
+    $ref: /schemas/types.yaml#/definitions/phandle
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - clocks
+  - clock-names
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    spi@43c30000 {
+        compatible = "mxicy,mx25f0a-spi";
+        reg = <0x43c30000 0x10000>, <0xa0000000 0x20000000>;
+        reg-names = "regs", "dirmap";
+        clocks = <&clkwizard 0>, <&clkwizard 1>, <&clkc 18>;
+        clock-names = "send_clk", "send_dly_clk", "ps_clk";
+        #address-cells = <1>;
+        #size-cells = <0>;
+    };
-34
Documentation/devicetree/bindings/spi/spi-mxic.txt
-Macronix SPI controller Device Tree Bindings
---------------------------------------------
-
-Required properties:
-- compatible: should be "mxicy,mx25f0a-spi"
-- #address-cells: should be 1
-- #size-cells: should be 0
-- reg: should contain 2 entries, one for the registers and one for the direct
-  mapping area
-- reg-names: should contain "regs" and "dirmap"
-- interrupts: interrupt line connected to the SPI controller
-- clock-names: should contain "ps_clk", "send_clk" and "send_dly_clk"
-- clocks: should contain 3 entries for the "ps_clk", "send_clk" and
-  "send_dly_clk" clocks
-
-Example:
-
-	spi@43c30000 {
-		compatible = "mxicy,mx25f0a-spi";
-		reg = <0x43c30000 0x10000>, <0xa0000000 0x20000000>;
-		reg-names = "regs", "dirmap";
-		clocks = <&clkwizard 0>, <&clkwizard 1>, <&clkc 18>;
-		clock-names = "send_clk", "send_dly_clk", "ps_clk";
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		flash@0 {
-			compatible = "jedec,spi-nor";
-			reg = <0>;
-			spi-max-frequency = <25000000>;
-			spi-tx-bus-width = <4>;
-			spi-rx-bus-width = <4>;
-		};
-	};
+3
Documentation/devicetree/bindings/vendor-prefixes.yaml
···
     description: Mundo Reader S.L.
   "^murata,.*":
     description: Murata Manufacturing Co., Ltd.
+  "^mxic,.*":
+    description: Macronix International Co., Ltd.
+    deprecated: true
   "^mxicy,.*":
     description: Macronix International Co., Ltd.
   "^myir,.*":
+6
drivers/mtd/nand/Kconfig
···
 	  ECC codes. They are used with NAND devices requiring more than 1 bit
 	  of error correction.

+config MTD_NAND_ECC_MXIC
+	bool "Macronix external hardware ECC engine"
+	select MTD_NAND_ECC
+	help
+	  This enables support for the hardware ECC engine from Macronix.
+
 endmenu

 endmenu
+1
drivers/mtd/nand/Makefile
···
 nandcore-$(CONFIG_MTD_NAND_ECC) += ecc.o
 nandcore-$(CONFIG_MTD_NAND_ECC_SW_HAMMING) += ecc-sw-hamming.o
 nandcore-$(CONFIG_MTD_NAND_ECC_SW_BCH) += ecc-sw-bch.o
+nandcore-$(CONFIG_MTD_NAND_ECC_MXIC) += ecc-mxic.o
+7 -3
drivers/mtd/nand/core.c
···
 		nand->ecc.engine = nand_ecc_get_on_die_hw_engine(nand);
 		break;
 	case NAND_ECC_ENGINE_TYPE_ON_HOST:
-		pr_err("On-host hardware ECC engines not supported yet\n");
+		nand->ecc.engine = nand_ecc_get_on_host_hw_engine(nand);
+		if (PTR_ERR(nand->ecc.engine) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
 		break;
 	default:
 		pr_err("Missing ECC engine type\n");
···
 {
 	switch (nand->ecc.ctx.conf.engine_type) {
 	case NAND_ECC_ENGINE_TYPE_ON_HOST:
-		pr_err("On-host hardware ECC engines not supported yet\n");
+		nand_ecc_put_on_host_hw_engine(nand);
 		break;
 	case NAND_ECC_ENGINE_TYPE_NONE:
 	case NAND_ECC_ENGINE_TYPE_SOFT:
···
 	/* Look for the ECC engine to use */
 	ret = nanddev_get_ecc_engine(nand);
 	if (ret) {
-		pr_err("No ECC engine found\n");
+		if (ret != -EPROBE_DEFER)
+			pr_err("No ECC engine found\n");
+
 		return ret;
 	}
+879
drivers/mtd/nand/ecc-mxic.c
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Support for Macronix external hardware ECC engine for NAND devices, also
+ * called DPE for Data Processing Engine.
+ *
+ * Copyright © 2019 Macronix
+ * Author: Miquel Raynal <miquel.raynal@bootlin.com>
+ */
+
+#include <linux/dma-mapping.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand-ecc-mxic.h>
+#include <linux/mutex.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+/* DPE Configuration */
+#define DP_CONFIG 0x00
+#define ECC_EN BIT(0)
+#define ECC_TYP(idx) (((idx) << 3) & GENMASK(6, 3))
+/* DPE Interrupt Status */
+#define INTRPT_STS 0x04
+#define TRANS_CMPLT BIT(0)
+#define SDMA_MAIN BIT(1)
+#define SDMA_SPARE BIT(2)
+#define ECC_ERR BIT(3)
+#define TO_SPARE BIT(4)
+#define TO_MAIN BIT(5)
+/* DPE Interrupt Status Enable */
+#define INTRPT_STS_EN 0x08
+/* DPE Interrupt Signal Enable */
+#define INTRPT_SIG_EN 0x0C
+/* Host Controller Configuration */
+#define HC_CONFIG 0x10
+#define DEV2MEM 0 /* TRANS_TYP_DMA in the spec */
+#define MEM2MEM BIT(4) /* TRANS_TYP_IO in the spec */
+#define MAPPING BIT(5) /* TRANS_TYP_MAPPING in the spec */
+#define ECC_PACKED 0 /* LAYOUT_TYP_INTEGRATED in the spec */
+#define ECC_INTERLEAVED BIT(2) /* LAYOUT_TYP_DISTRIBUTED in the spec */
+#define BURST_TYP_FIXED 0
+#define BURST_TYP_INCREASING BIT(0)
+/* Host Controller Slave Address */
+#define HC_SLV_ADDR 0x14
+/* ECC Chunk Size */
+#define CHUNK_SIZE 0x20
+/* Main Data Size */
+#define MAIN_SIZE 0x24
+/* Spare Data Size */
+#define SPARE_SIZE 0x28
+#define META_SZ(reg) ((reg) & GENMASK(7, 0))
+#define PARITY_SZ(reg) (((reg) & GENMASK(15, 8)) >> 8)
+#define RSV_SZ(reg) (((reg) & GENMASK(23, 16)) >> 16)
+#define SPARE_SZ(reg) ((reg) >> 24)
+/* ECC Chunk Count */
+#define CHUNK_CNT 0x30
+/* SDMA Control */
+#define SDMA_CTRL 0x40
+#define WRITE_NAND 0
+#define READ_NAND BIT(1)
+#define CONT_NAND BIT(29)
+#define CONT_SYSM BIT(30) /* Continue System Memory? */
+#define SDMA_STRT BIT(31)
+/* SDMA Address of Main Data */
+#define SDMA_MAIN_ADDR 0x44
+/* SDMA Address of Spare Data */
+#define SDMA_SPARE_ADDR 0x48
+/* DPE Version Number */
+#define DP_VER 0xD0
+#define DP_VER_OFFSET 16
+
+/* Status bytes between each chunk of spare data */
+#define STAT_BYTES 4
+#define NO_ERR 0x00
+#define MAX_CORR_ERR 0x28
+#define UNCORR_ERR 0xFE
+#define ERASED_CHUNK 0xFF
+
+struct mxic_ecc_engine {
+	struct device *dev;
+	void __iomem *regs;
+	int irq;
+	struct completion complete;
+	struct nand_ecc_engine external_engine;
+	struct nand_ecc_engine pipelined_engine;
+	struct mutex lock;
+};
+
+struct mxic_ecc_ctx {
+	/* ECC machinery */
+	unsigned int data_step_sz;
+	unsigned int oob_step_sz;
+	unsigned int parity_sz;
+	unsigned int meta_sz;
+	u8 *status;
+	int steps;
+
+	/* DMA boilerplate */
+	struct nand_ecc_req_tweak_ctx req_ctx;
+	u8 *oobwithstat;
+	struct scatterlist sg[2];
+	struct nand_page_io_req *req;
+	unsigned int pageoffs;
+};
+
+static struct mxic_ecc_engine *ext_ecc_eng_to_mxic(struct nand_ecc_engine *eng)
+{
+	return container_of(eng, struct mxic_ecc_engine, external_engine);
+}
+
+static struct mxic_ecc_engine *pip_ecc_eng_to_mxic(struct nand_ecc_engine *eng)
+{
+	return container_of(eng, struct mxic_ecc_engine, pipelined_engine);
+}
+
+static struct mxic_ecc_engine *nand_to_mxic(struct nand_device *nand)
+{
+	struct nand_ecc_engine *eng = nand->ecc.engine;
+
+	if (eng->integration == NAND_ECC_ENGINE_INTEGRATION_EXTERNAL)
+		return ext_ecc_eng_to_mxic(eng);
+	else
+		return pip_ecc_eng_to_mxic(eng);
+}
+
+static int mxic_ecc_ooblayout_ecc(struct mtd_info *mtd, int section,
+				  struct mtd_oob_region *oobregion)
+{
+	struct nand_device *nand = mtd_to_nanddev(mtd);
+	struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand);
+
+	if (section < 0 || section >= ctx->steps)
+		return -ERANGE;
+
+	oobregion->offset = (section * ctx->oob_step_sz) + ctx->meta_sz;
+	oobregion->length = ctx->parity_sz;
+
+	return 0;
+}
+
+static int mxic_ecc_ooblayout_free(struct mtd_info *mtd, int section,
+				   struct mtd_oob_region *oobregion)
+{
+	struct nand_device *nand = mtd_to_nanddev(mtd);
+	struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand);
+
+	if (section < 0 || section >= ctx->steps)
+		return -ERANGE;
+
+	if (!section) {
+		oobregion->offset = 2;
+		oobregion->length = ctx->meta_sz - 2;
+	} else {
+		oobregion->offset = section * ctx->oob_step_sz;
+		oobregion->length = ctx->meta_sz;
+	}
+
+	return 0;
+}
+
+static const struct mtd_ooblayout_ops mxic_ecc_ooblayout_ops = {
+	.ecc = mxic_ecc_ooblayout_ecc,
+	.free = mxic_ecc_ooblayout_free,
+};
+
+static void mxic_ecc_disable_engine(struct mxic_ecc_engine *mxic)
+{
+	u32 reg;
+
+	reg = readl(mxic->regs + DP_CONFIG);
+	reg &= ~ECC_EN;
+	writel(reg, mxic->regs + DP_CONFIG);
+}
+
+static void mxic_ecc_enable_engine(struct mxic_ecc_engine *mxic)
+{
+	u32 reg;
+
+	reg = readl(mxic->regs + DP_CONFIG);
+	reg |= ECC_EN;
+	writel(reg, mxic->regs + DP_CONFIG);
+}
+
+static void mxic_ecc_disable_int(struct mxic_ecc_engine *mxic)
+{
+	writel(0, mxic->regs + INTRPT_SIG_EN);
+}
+
+static void mxic_ecc_enable_int(struct mxic_ecc_engine *mxic)
+{
+	writel(TRANS_CMPLT, mxic->regs + INTRPT_SIG_EN);
+}
+
+static irqreturn_t mxic_ecc_isr(int irq, void *dev_id)
+{
+	struct mxic_ecc_engine *mxic = dev_id;
+	u32 sts;
+
+	sts = readl(mxic->regs + INTRPT_STS);
+	if (!sts)
+		return IRQ_NONE;
+
+	if (sts & TRANS_CMPLT)
+		complete(&mxic->complete);
+
+	writel(sts, mxic->regs + INTRPT_STS);
+
+	return IRQ_HANDLED;
+}
+
+static int mxic_ecc_init_ctx(struct nand_device *nand, struct device *dev)
+{
+	struct mxic_ecc_engine *mxic = nand_to_mxic(nand);
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct nand_ecc_props *reqs = &nand->ecc.requirements;
+	struct nand_ecc_props *user = &nand->ecc.user_conf;
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	int step_size = 0, strength = 0, desired_correction = 0, steps, idx;
+	int possible_strength[] = {4, 8, 40, 48};
+	int spare_size[] = {32, 32, 96, 96};
+	struct mxic_ecc_ctx *ctx;
+	u32 spare_reg;
+	int ret;
+
+	ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+
+	nand->ecc.ctx.priv = ctx;
+
+	/* Only large page NAND chips may use BCH */
+	if (mtd->oobsize < 64) {
+		pr_err("BCH cannot be used with small page NAND chips\n");
+		return -EINVAL;
+	}
+
+	mtd_set_ooblayout(mtd, &mxic_ecc_ooblayout_ops);
+
+	/* Enable all status bits */
+	writel(TRANS_CMPLT | SDMA_MAIN | SDMA_SPARE | ECC_ERR |
+	       TO_SPARE | TO_MAIN, mxic->regs + INTRPT_STS_EN);
+
+	/* Configure the correction depending on the NAND device topology */
+	if (user->step_size && user->strength) {
+		step_size = user->step_size;
+		strength = user->strength;
+	} else if (reqs->step_size && reqs->strength) {
+		step_size = reqs->step_size;
+		strength = reqs->strength;
+	}
+
+	if (step_size && strength) {
+		steps = mtd->writesize / step_size;
+		desired_correction = steps * strength;
+	}
+
+	/* Step size is fixed to 1kiB, strength may vary (4 possible values) */
+	conf->step_size = SZ_1K;
+	steps = mtd->writesize / conf->step_size;
+
+	ctx->status = devm_kzalloc(dev, steps * sizeof(u8), GFP_KERNEL);
+	if (!ctx->status)
+		return -ENOMEM;
+
+	if (desired_correction) {
+		strength = desired_correction / steps;
+
+		for (idx = 0; idx < ARRAY_SIZE(possible_strength); idx++)
+			if (possible_strength[idx] >= strength)
+				break;
+
+		idx = min_t(unsigned int, idx,
+			    ARRAY_SIZE(possible_strength) - 1);
+	} else {
+		/* Missing data, maximize the correction */
+		idx = ARRAY_SIZE(possible_strength) - 1;
+	}
+
+	/* Tune the selected strength until it fits in the OOB area */
+	for (; idx >= 0; idx--) {
+		if (spare_size[idx] * steps <= mtd->oobsize)
+			break;
+	}
+
+	/* This engine cannot be used with this NAND device */
+	if (idx < 0)
+		return -EINVAL;
+
+	/* Configure the engine for the desired strength */
+	writel(ECC_TYP(idx), mxic->regs + DP_CONFIG);
+	conf->strength = possible_strength[idx];
+	spare_reg = readl(mxic->regs + SPARE_SIZE);
+
+	ctx->steps = steps;
+	ctx->data_step_sz = mtd->writesize / steps;
+	ctx->oob_step_sz = mtd->oobsize / steps;
+	ctx->parity_sz = PARITY_SZ(spare_reg);
+	ctx->meta_sz = META_SZ(spare_reg);
+
+	/* Ensure buffers will contain enough bytes to store the STAT_BYTES */
+	ctx->req_ctx.oob_buffer_size = nanddev_per_page_oobsize(nand) +
+				       (ctx->steps * STAT_BYTES);
+	ret = nand_ecc_init_req_tweaking(&ctx->req_ctx, nand);
+	if (ret)
+		return ret;
+
+	ctx->oobwithstat = kmalloc(mtd->oobsize + (ctx->steps * STAT_BYTES),
+				   GFP_KERNEL);
+	if (!ctx->oobwithstat) {
+		ret = -ENOMEM;
+		goto cleanup_req_tweak;
+	}
+
+	sg_init_table(ctx->sg, 2);
+
+	/* Configuration dump and sanity checks */
+	dev_err(dev, "DPE version number: %d\n",
+		readl(mxic->regs + DP_VER) >> DP_VER_OFFSET);
+	dev_err(dev, "Chunk size: %d\n", readl(mxic->regs + CHUNK_SIZE));
+	dev_err(dev, "Main size: %d\n", readl(mxic->regs + MAIN_SIZE));
+	dev_err(dev, "Spare size: %d\n", SPARE_SZ(spare_reg));
+	dev_err(dev, "Rsv size: %ld\n", RSV_SZ(spare_reg));
+	dev_err(dev, "Parity size: %d\n", ctx->parity_sz);
+	dev_err(dev, "Meta size: %d\n", ctx->meta_sz);
+
+	if ((ctx->meta_sz + ctx->parity_sz + RSV_SZ(spare_reg)) !=
+	    SPARE_SZ(spare_reg)) {
+		dev_err(dev, "Wrong OOB configuration: %d + %d + %ld != %d\n",
+			ctx->meta_sz, ctx->parity_sz, RSV_SZ(spare_reg),
+			SPARE_SZ(spare_reg));
+		ret = -EINVAL;
+		goto free_oobwithstat;
+	}
+
+	if (ctx->oob_step_sz != SPARE_SZ(spare_reg)) {
+		dev_err(dev, "Wrong OOB configuration: %d != %d\n",
+			ctx->oob_step_sz, SPARE_SZ(spare_reg));
+		ret = -EINVAL;
+		goto free_oobwithstat;
+	}
+
+	return 0;
+
+free_oobwithstat:
+	kfree(ctx->oobwithstat);
+cleanup_req_tweak:
+	nand_ecc_cleanup_req_tweaking(&ctx->req_ctx);
+
+	return ret;
+}
+
+static int mxic_ecc_init_ctx_external(struct nand_device *nand)
+{
+	struct mxic_ecc_engine *mxic = nand_to_mxic(nand);
+	struct device *dev = nand->ecc.engine->dev;
+	int ret;
+
+	dev_info(dev, "Macronix ECC engine in external mode\n");
+
+	ret = mxic_ecc_init_ctx(nand, dev);
+	if (ret)
+		return ret;
+
+	/* Trigger each step manually */
+	writel(1, mxic->regs + CHUNK_CNT);
+	writel(BURST_TYP_INCREASING | ECC_PACKED | MEM2MEM,
+	       mxic->regs + HC_CONFIG);
+
+	return 0;
+}
+
+static int mxic_ecc_init_ctx_pipelined(struct nand_device *nand)
+{
+	struct mxic_ecc_engine *mxic = nand_to_mxic(nand);
+	struct mxic_ecc_ctx *ctx;
+	struct device *dev;
+	int ret;
+
+	dev = nand_ecc_get_engine_dev(nand->ecc.engine->dev);
+	if (!dev)
+		return -EINVAL;
+
+	dev_info(dev, "Macronix ECC engine in pipelined/mapping mode\n");
+
+	ret = mxic_ecc_init_ctx(nand, dev);
+	if (ret)
+		return ret;
+
+	ctx = nand_to_ecc_ctx(nand);
+
+	/* All steps should be handled in one go directly by the internal DMA */
+	writel(ctx->steps, mxic->regs + CHUNK_CNT);
+
+	/*
+	 * Interleaved ECC scheme cannot be used otherwise factory bad block
+	 * markers would be lost. A packed layout is mandatory.
+	 */
+	writel(BURST_TYP_INCREASING | ECC_PACKED | MAPPING,
+	       mxic->regs + HC_CONFIG);
+
+	return 0;
+}
+
+static void mxic_ecc_cleanup_ctx(struct nand_device *nand)
+{
+	struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand);
+
+	if (ctx) {
+		nand_ecc_cleanup_req_tweaking(&ctx->req_ctx);
+		kfree(ctx->oobwithstat);
+	}
+}
+
+static int mxic_ecc_data_xfer_wait_for_completion(struct mxic_ecc_engine *mxic)
+{
+	u32 val;
+	int ret;
+
+	if (mxic->irq) {
+		reinit_completion(&mxic->complete);
+		mxic_ecc_enable_int(mxic);
+		ret = wait_for_completion_timeout(&mxic->complete,
+						  msecs_to_jiffies(1000));
+		mxic_ecc_disable_int(mxic);
+	} else {
+		ret = readl_poll_timeout(mxic->regs + INTRPT_STS, val,
+					 val & TRANS_CMPLT, 10, USEC_PER_SEC);
+		writel(val, mxic->regs + INTRPT_STS);
+	}
+
+	if (ret) {
+		dev_err(mxic->dev, "Timeout on data xfer completion\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int mxic_ecc_process_data(struct mxic_ecc_engine *mxic,
+				 unsigned int direction)
+{
+	unsigned int dir = (direction == NAND_PAGE_READ) ?
+			   READ_NAND : WRITE_NAND;
+	int ret;
+
+	mxic_ecc_enable_engine(mxic);
+
+	/* Trigger processing */
+	writel(SDMA_STRT | dir, mxic->regs + SDMA_CTRL);
+
+	/* Wait for completion */
+	ret = mxic_ecc_data_xfer_wait_for_completion(mxic);
+
+	mxic_ecc_disable_engine(mxic);
+
+	return ret;
+}
+
+int mxic_ecc_process_data_pipelined(struct nand_ecc_engine *eng,
+				    unsigned int direction, dma_addr_t dirmap)
+{
+	struct mxic_ecc_engine *mxic = pip_ecc_eng_to_mxic(eng);
+
+	if (dirmap)
+		writel(dirmap, mxic->regs + HC_SLV_ADDR);
+
+	return mxic_ecc_process_data(mxic, direction);
+}
+EXPORT_SYMBOL_GPL(mxic_ecc_process_data_pipelined);
+
+static void mxic_ecc_extract_status_bytes(struct mxic_ecc_ctx *ctx)
+{
+	u8 *buf = ctx->oobwithstat;
+	int next_stat_pos;
+	int step;
+
+	/* Extract the ECC status */
+	for (step = 0; step < ctx->steps; step++) {
+		next_stat_pos = ctx->oob_step_sz +
+				((STAT_BYTES + ctx->oob_step_sz) * step);
+
+		ctx->status[step] = buf[next_stat_pos];
+	}
+}
+
+static void mxic_ecc_reconstruct_oobbuf(struct mxic_ecc_ctx *ctx,
+					u8 *dst, const u8 *src)
+{
+	int step;
+
+	/* Reconstruct the OOB buffer linearly (without the ECC status bytes) */
+	for (step = 0; step < ctx->steps; step++)
+		memcpy(dst + (step * ctx->oob_step_sz),
+		       src + (step * (ctx->oob_step_sz + STAT_BYTES)),
+		       ctx->oob_step_sz);
+}
+
+static void mxic_ecc_add_room_in_oobbuf(struct mxic_ecc_ctx *ctx,
+					u8 *dst, const u8 *src)
+{
+	int step;
+
+	/* Add some space in the OOB buffer for the status bytes */
+	for (step = 0; step < ctx->steps; step++)
+		memcpy(dst + (step * (ctx->oob_step_sz + STAT_BYTES)),
+		       src + (step * ctx->oob_step_sz),
+		       ctx->oob_step_sz);
+}
+
+static int mxic_ecc_count_biterrs(struct mxic_ecc_engine *mxic,
+				  struct nand_device *nand)
+{
+	struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand);
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	struct device *dev = mxic->dev;
+	unsigned int max_bf = 0;
+	bool failure = false;
+	int step;
+
+	for (step = 0; step < ctx->steps; step++) {
+		u8 stat = ctx->status[step];
+
+		if (stat == NO_ERR) {
+			dev_dbg(dev, "ECC step %d: no error\n", step);
+		} else if (stat == ERASED_CHUNK) {
+			dev_dbg(dev, "ECC step %d: erased\n", step);
+		} else if (stat == UNCORR_ERR || stat > MAX_CORR_ERR) {
+			dev_dbg(dev, "ECC step %d: uncorrectable\n", step);
+			mtd->ecc_stats.failed++;
+			failure = true;
+		} else {
+			dev_dbg(dev, "ECC step %d: %d bits corrected\n",
+				step, stat);
+			max_bf = max_t(unsigned int, max_bf, stat);
+			mtd->ecc_stats.corrected += stat;
+		}
+	}
+
+	return failure ? -EBADMSG : max_bf;
+}
+
+/* External ECC engine helpers */
+static int mxic_ecc_prepare_io_req_external(struct nand_device *nand,
+					    struct nand_page_io_req *req)
+{
+	struct mxic_ecc_engine *mxic = nand_to_mxic(nand);
+	struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand);
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	int offset, nents, step, ret;
+
+	if (req->mode == MTD_OPS_RAW)
+		return 0;
+
+	nand_ecc_tweak_req(&ctx->req_ctx, req);
+	ctx->req = req;
+
+	if (req->type == NAND_PAGE_READ)
+		return 0;
+
+	mxic_ecc_add_room_in_oobbuf(ctx, ctx->oobwithstat,
+				    ctx->req->oobbuf.out);
+
+	sg_set_buf(&ctx->sg[0], req->databuf.out, req->datalen);
+	sg_set_buf(&ctx->sg[1], ctx->oobwithstat,
+		   req->ooblen + (ctx->steps * STAT_BYTES));
+
+	nents = dma_map_sg(mxic->dev, ctx->sg, 2, DMA_BIDIRECTIONAL);
+	if (!nents)
+		return -EINVAL;
+
+	mutex_lock(&mxic->lock);
+
+	for (step = 0; step < ctx->steps; step++) {
+		writel(sg_dma_address(&ctx->sg[0]) + (step * ctx->data_step_sz),
+		       mxic->regs + SDMA_MAIN_ADDR);
+		writel(sg_dma_address(&ctx->sg[1]) + (step * (ctx->oob_step_sz + STAT_BYTES)),
+		       mxic->regs + SDMA_SPARE_ADDR);
+		ret = mxic_ecc_process_data(mxic, ctx->req->type);
+		if (ret)
+			break;
+	}
+
+	mutex_unlock(&mxic->lock);
+
+	dma_unmap_sg(mxic->dev, ctx->sg, 2, DMA_BIDIRECTIONAL);
+
+	if (ret)
+		return ret;
+
+	/* Retrieve the calculated ECC bytes */
+	for (step = 0; step < ctx->steps; step++) {
+		offset = ctx->meta_sz + (step * ctx->oob_step_sz);
+		mtd_ooblayout_get_eccbytes(mtd,
+					   (u8 *)ctx->req->oobbuf.out + offset,
+					   ctx->oobwithstat + (step * STAT_BYTES),
+					   step * ctx->parity_sz,
+					   ctx->parity_sz);
+	}
+
+	return 0;
+}
+
+static int mxic_ecc_finish_io_req_external(struct nand_device *nand,
+					   struct nand_page_io_req *req)
+{
+	struct mxic_ecc_engine *mxic = nand_to_mxic(nand);
+	struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand);
+	int nents, step, ret;
+
+	if (req->mode == MTD_OPS_RAW)
+		return 0;
+
+	if (req->type == NAND_PAGE_WRITE) {
+		nand_ecc_restore_req(&ctx->req_ctx, req);
+		return 0;
+	}
+
+	/* Copy the OOB buffer and add room for the ECC engine status bytes */
+	mxic_ecc_add_room_in_oobbuf(ctx, ctx->oobwithstat, ctx->req->oobbuf.in);
+
+	sg_set_buf(&ctx->sg[0], req->databuf.in, req->datalen);
+	sg_set_buf(&ctx->sg[1], ctx->oobwithstat,
+		   req->ooblen + (ctx->steps * STAT_BYTES));
+	nents = dma_map_sg(mxic->dev, ctx->sg, 2, DMA_BIDIRECTIONAL);
+	if (!nents)
+		return -EINVAL;
+
+	mutex_lock(&mxic->lock);
+
+	for (step = 0; step < ctx->steps; step++) {
+		writel(sg_dma_address(&ctx->sg[0]) + (step * ctx->data_step_sz),
+		       mxic->regs + SDMA_MAIN_ADDR);
+		writel(sg_dma_address(&ctx->sg[1]) +
(step * (ctx->oob_step_sz + STAT_BYTES)), 642 + mxic->regs + SDMA_SPARE_ADDR); 643 + ret = mxic_ecc_process_data(mxic, ctx->req->type); 644 + if (ret) 645 + break; 646 + } 647 + 648 + mutex_unlock(&mxic->lock); 649 + 650 + dma_unmap_sg(mxic->dev, ctx->sg, 2, DMA_BIDIRECTIONAL); 651 + 652 + if (ret) { 653 + nand_ecc_restore_req(&ctx->req_ctx, req); 654 + return ret; 655 + } 656 + 657 + /* Extract the status bytes and reconstruct the buffer */ 658 + mxic_ecc_extract_status_bytes(ctx); 659 + mxic_ecc_reconstruct_oobbuf(ctx, ctx->req->oobbuf.in, ctx->oobwithstat); 660 + 661 + nand_ecc_restore_req(&ctx->req_ctx, req); 662 + 663 + return mxic_ecc_count_biterrs(mxic, nand); 664 + } 665 + 666 + /* Pipelined ECC engine helpers */ 667 + static int mxic_ecc_prepare_io_req_pipelined(struct nand_device *nand, 668 + struct nand_page_io_req *req) 669 + { 670 + struct mxic_ecc_engine *mxic = nand_to_mxic(nand); 671 + struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand); 672 + int nents; 673 + 674 + if (req->mode == MTD_OPS_RAW) 675 + return 0; 676 + 677 + nand_ecc_tweak_req(&ctx->req_ctx, req); 678 + ctx->req = req; 679 + 680 + /* Copy the OOB buffer and add room for the ECC engine status bytes */ 681 + mxic_ecc_add_room_in_oobbuf(ctx, ctx->oobwithstat, ctx->req->oobbuf.in); 682 + 683 + sg_set_buf(&ctx->sg[0], req->databuf.in, req->datalen); 684 + sg_set_buf(&ctx->sg[1], ctx->oobwithstat, 685 + req->ooblen + (ctx->steps * STAT_BYTES)); 686 + 687 + nents = dma_map_sg(mxic->dev, ctx->sg, 2, DMA_BIDIRECTIONAL); 688 + if (!nents) 689 + return -EINVAL; 690 + 691 + mutex_lock(&mxic->lock); 692 + 693 + writel(sg_dma_address(&ctx->sg[0]), mxic->regs + SDMA_MAIN_ADDR); 694 + writel(sg_dma_address(&ctx->sg[1]), mxic->regs + SDMA_SPARE_ADDR); 695 + 696 + return 0; 697 + } 698 + 699 + static int mxic_ecc_finish_io_req_pipelined(struct nand_device *nand, 700 + struct nand_page_io_req *req) 701 + { 702 + struct mxic_ecc_engine *mxic = nand_to_mxic(nand); 703 + struct mxic_ecc_ctx *ctx = 
nand_to_ecc_ctx(nand); 704 + int ret = 0; 705 + 706 + if (req->mode == MTD_OPS_RAW) 707 + return 0; 708 + 709 + mutex_unlock(&mxic->lock); 710 + 711 + dma_unmap_sg(mxic->dev, ctx->sg, 2, DMA_BIDIRECTIONAL); 712 + 713 + if (req->type == NAND_PAGE_READ) { 714 + mxic_ecc_extract_status_bytes(ctx); 715 + mxic_ecc_reconstruct_oobbuf(ctx, ctx->req->oobbuf.in, 716 + ctx->oobwithstat); 717 + ret = mxic_ecc_count_biterrs(mxic, nand); 718 + } 719 + 720 + nand_ecc_restore_req(&ctx->req_ctx, req); 721 + 722 + return ret; 723 + } 724 + 725 + static struct nand_ecc_engine_ops mxic_ecc_engine_external_ops = { 726 + .init_ctx = mxic_ecc_init_ctx_external, 727 + .cleanup_ctx = mxic_ecc_cleanup_ctx, 728 + .prepare_io_req = mxic_ecc_prepare_io_req_external, 729 + .finish_io_req = mxic_ecc_finish_io_req_external, 730 + }; 731 + 732 + static struct nand_ecc_engine_ops mxic_ecc_engine_pipelined_ops = { 733 + .init_ctx = mxic_ecc_init_ctx_pipelined, 734 + .cleanup_ctx = mxic_ecc_cleanup_ctx, 735 + .prepare_io_req = mxic_ecc_prepare_io_req_pipelined, 736 + .finish_io_req = mxic_ecc_finish_io_req_pipelined, 737 + }; 738 + 739 + struct nand_ecc_engine_ops *mxic_ecc_get_pipelined_ops(void) 740 + { 741 + return &mxic_ecc_engine_pipelined_ops; 742 + } 743 + EXPORT_SYMBOL_GPL(mxic_ecc_get_pipelined_ops); 744 + 745 + static struct platform_device * 746 + mxic_ecc_get_pdev(struct platform_device *spi_pdev) 747 + { 748 + struct platform_device *eng_pdev; 749 + struct device_node *np; 750 + 751 + /* Retrieve the nand-ecc-engine phandle */ 752 + np = of_parse_phandle(spi_pdev->dev.of_node, "nand-ecc-engine", 0); 753 + if (!np) 754 + return NULL; 755 + 756 + /* Jump to the engine's device node */ 757 + eng_pdev = of_find_device_by_node(np); 758 + of_node_put(np); 759 + 760 + return eng_pdev; 761 + } 762 + 763 + void mxic_ecc_put_pipelined_engine(struct nand_ecc_engine *eng) 764 + { 765 + struct mxic_ecc_engine *mxic = pip_ecc_eng_to_mxic(eng); 766 + 767 + 
platform_device_put(to_platform_device(mxic->dev)); 768 + } 769 + EXPORT_SYMBOL_GPL(mxic_ecc_put_pipelined_engine); 770 + 771 + struct nand_ecc_engine * 772 + mxic_ecc_get_pipelined_engine(struct platform_device *spi_pdev) 773 + { 774 + struct platform_device *eng_pdev; 775 + struct mxic_ecc_engine *mxic; 776 + 777 + eng_pdev = mxic_ecc_get_pdev(spi_pdev); 778 + if (!eng_pdev) 779 + return ERR_PTR(-ENODEV); 780 + 781 + mxic = platform_get_drvdata(eng_pdev); 782 + if (!mxic) { 783 + platform_device_put(eng_pdev); 784 + return ERR_PTR(-EPROBE_DEFER); 785 + } 786 + 787 + return &mxic->pipelined_engine; 788 + } 789 + EXPORT_SYMBOL_GPL(mxic_ecc_get_pipelined_engine); 790 + 791 + /* 792 + * Only the external ECC engine is exported as the pipelined is SoC specific, so 793 + * it is registered directly by the drivers that wrap it. 794 + */ 795 + static int mxic_ecc_probe(struct platform_device *pdev) 796 + { 797 + struct device *dev = &pdev->dev; 798 + struct mxic_ecc_engine *mxic; 799 + int ret; 800 + 801 + mxic = devm_kzalloc(&pdev->dev, sizeof(*mxic), GFP_KERNEL); 802 + if (!mxic) 803 + return -ENOMEM; 804 + 805 + mxic->dev = &pdev->dev; 806 + 807 + /* 808 + * Both memory regions for the ECC engine itself and the AXI slave 809 + * address are mandatory. 
810 + */ 811 + mxic->regs = devm_platform_ioremap_resource(pdev, 0); 812 + if (IS_ERR(mxic->regs)) { 813 + dev_err(&pdev->dev, "Missing memory region\n"); 814 + return PTR_ERR(mxic->regs); 815 + } 816 + 817 + mxic_ecc_disable_engine(mxic); 818 + mxic_ecc_disable_int(mxic); 819 + 820 + /* IRQ is optional yet much more efficient */ 821 + mxic->irq = platform_get_irq_byname_optional(pdev, "ecc-engine"); 822 + if (mxic->irq > 0) { 823 + ret = devm_request_irq(&pdev->dev, mxic->irq, mxic_ecc_isr, 0, 824 + "mxic-ecc", mxic); 825 + if (ret) 826 + return ret; 827 + } else { 828 + dev_info(dev, "Invalid or missing IRQ, fallback to polling\n"); 829 + mxic->irq = 0; 830 + } 831 + 832 + mutex_init(&mxic->lock); 833 + 834 + /* 835 + * In external mode, the device is the ECC engine. In pipelined mode, 836 + * the device is the host controller. The device is used to match the 837 + * right ECC engine based on the DT properties. 838 + */ 839 + mxic->external_engine.dev = &pdev->dev; 840 + mxic->external_engine.integration = NAND_ECC_ENGINE_INTEGRATION_EXTERNAL; 841 + mxic->external_engine.ops = &mxic_ecc_engine_external_ops; 842 + 843 + nand_ecc_register_on_host_hw_engine(&mxic->external_engine); 844 + 845 + platform_set_drvdata(pdev, mxic); 846 + 847 + return 0; 848 + } 849 + 850 + static int mxic_ecc_remove(struct platform_device *pdev) 851 + { 852 + struct mxic_ecc_engine *mxic = platform_get_drvdata(pdev); 853 + 854 + nand_ecc_unregister_on_host_hw_engine(&mxic->external_engine); 855 + 856 + return 0; 857 + } 858 + 859 + static const struct of_device_id mxic_ecc_of_ids[] = { 860 + { 861 + .compatible = "mxicy,nand-ecc-engine-rev3", 862 + }, 863 + { /* sentinel */ }, 864 + }; 865 + MODULE_DEVICE_TABLE(of, mxic_ecc_of_ids); 866 + 867 + static struct platform_driver mxic_ecc_driver = { 868 + .driver = { 869 + .name = "mxic-nand-ecc-engine", 870 + .of_match_table = mxic_ecc_of_ids, 871 + }, 872 + .probe = mxic_ecc_probe, 873 + .remove = mxic_ecc_remove, 874 + }; 875 + 
module_platform_driver(mxic_ecc_driver); 876 + 877 + MODULE_LICENSE("GPL"); 878 + MODULE_AUTHOR("Miquel Raynal <miquel.raynal@bootlin.com>"); 879 + MODULE_DESCRIPTION("Macronix NAND hardware ECC controller");
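The `mxic_ecc_add_room_in_oobbuf()` / `mxic_ecc_reconstruct_oobbuf()` / `mxic_ecc_extract_status_bytes()` helpers above interleave `STAT_BYTES` of per-step engine status after each OOB chunk and strip them back out. A userspace model of that arithmetic (the sizes below are hypothetical, chosen only for illustration — they are not the driver's real geometry):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical geometry for illustration only */
#define STEPS       4   /* ECC steps per page */
#define OOB_STEP_SZ 8   /* OOB bytes per step */
#define STAT_BYTES  1   /* status byte appended to each step */

/* Model of mxic_ecc_add_room_in_oobbuf(): spread a linear OOB buffer
 * so each step is followed by room for STAT_BYTES of ECC status. */
static void add_room(unsigned char *dst, const unsigned char *src)
{
	for (int step = 0; step < STEPS; step++)
		memcpy(dst + step * (OOB_STEP_SZ + STAT_BYTES),
		       src + step * OOB_STEP_SZ, OOB_STEP_SZ);
}

/* Model of mxic_ecc_reconstruct_oobbuf(): inverse, dropping status. */
static void reconstruct(unsigned char *dst, const unsigned char *src)
{
	for (int step = 0; step < STEPS; step++)
		memcpy(dst + step * OOB_STEP_SZ,
		       src + step * (OOB_STEP_SZ + STAT_BYTES), OOB_STEP_SZ);
}

/* Model of mxic_ecc_extract_status_bytes(): the status of step N sits
 * right after that step's OOB data in the interleaved layout. */
static unsigned char status_of(const unsigned char *withstat, int step)
{
	return withstat[OOB_STEP_SZ + (OOB_STEP_SZ + STAT_BYTES) * step];
}
```

The two copies are exact inverses, which is why the driver can hand the interleaved buffer to the DMA engine and still return a linear OOB buffer to MTD.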
+119
drivers/mtd/nand/ecc.c
··· 96 96 #include <linux/module.h> 97 97 #include <linux/mtd/nand.h> 98 98 #include <linux/slab.h> 99 + #include <linux/of.h> 100 + #include <linux/of_device.h> 101 + #include <linux/of_platform.h> 102 + 103 + static LIST_HEAD(on_host_hw_engines); 104 + static DEFINE_MUTEX(on_host_hw_engines_mutex); 99 105 100 106 /** 101 107 * nand_ecc_init_ctx - Init the ECC engine context ··· 616 610 return nand->ecc.ondie_engine; 617 611 } 618 612 EXPORT_SYMBOL(nand_ecc_get_on_die_hw_engine); 613 + 614 + int nand_ecc_register_on_host_hw_engine(struct nand_ecc_engine *engine) 615 + { 616 + struct nand_ecc_engine *item; 617 + 618 + if (!engine) 619 + return -EINVAL; 620 + 621 + /* Prevent multiple registrations of one engine */ 622 + list_for_each_entry(item, &on_host_hw_engines, node) 623 + if (item == engine) 624 + return 0; 625 + 626 + mutex_lock(&on_host_hw_engines_mutex); 627 + list_add_tail(&engine->node, &on_host_hw_engines); 628 + mutex_unlock(&on_host_hw_engines_mutex); 629 + 630 + return 0; 631 + } 632 + EXPORT_SYMBOL(nand_ecc_register_on_host_hw_engine); 633 + 634 + int nand_ecc_unregister_on_host_hw_engine(struct nand_ecc_engine *engine) 635 + { 636 + if (!engine) 637 + return -EINVAL; 638 + 639 + mutex_lock(&on_host_hw_engines_mutex); 640 + list_del(&engine->node); 641 + mutex_unlock(&on_host_hw_engines_mutex); 642 + 643 + return 0; 644 + } 645 + EXPORT_SYMBOL(nand_ecc_unregister_on_host_hw_engine); 646 + 647 + static struct nand_ecc_engine *nand_ecc_match_on_host_hw_engine(struct device *dev) 648 + { 649 + struct nand_ecc_engine *item; 650 + 651 + list_for_each_entry(item, &on_host_hw_engines, node) 652 + if (item->dev == dev) 653 + return item; 654 + 655 + return NULL; 656 + } 657 + 658 + struct nand_ecc_engine *nand_ecc_get_on_host_hw_engine(struct nand_device *nand) 659 + { 660 + struct nand_ecc_engine *engine = NULL; 661 + struct device *dev = &nand->mtd.dev; 662 + struct platform_device *pdev; 663 + struct device_node *np; 664 + 665 + if 
(list_empty(&on_host_hw_engines)) 666 + return NULL; 667 + 668 + /* Check for an explicit nand-ecc-engine property */ 669 + np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0); 670 + if (np) { 671 + pdev = of_find_device_by_node(np); 672 + if (!pdev) 673 + return ERR_PTR(-EPROBE_DEFER); 674 + 675 + engine = nand_ecc_match_on_host_hw_engine(&pdev->dev); 676 + platform_device_put(pdev); 677 + of_node_put(np); 678 + 679 + if (!engine) 680 + return ERR_PTR(-EPROBE_DEFER); 681 + } 682 + 683 + if (engine) 684 + get_device(engine->dev); 685 + 686 + return engine; 687 + } 688 + EXPORT_SYMBOL(nand_ecc_get_on_host_hw_engine); 689 + 690 + void nand_ecc_put_on_host_hw_engine(struct nand_device *nand) 691 + { 692 + put_device(nand->ecc.engine->dev); 693 + } 694 + EXPORT_SYMBOL(nand_ecc_put_on_host_hw_engine); 695 + 696 + /* 697 + * In the case of a pipelined engine, the device registering the ECC 698 + * engine is not necessarily the ECC engine itself but may be a host controller. 699 + * It is then useful to provide a helper to retrieve the right device object 700 + * which actually represents the ECC engine. 701 + */ 702 + struct device *nand_ecc_get_engine_dev(struct device *host) 703 + { 704 + struct platform_device *ecc_pdev; 705 + struct device_node *np; 706 + 707 + /* 708 + * If the device node contains this property, it means we need to follow 709 + * it in order to get the right ECC engine device we are looking for. 710 + */ 711 + np = of_parse_phandle(host->of_node, "nand-ecc-engine", 0); 712 + if (!np) 713 + return host; 714 + 715 + ecc_pdev = of_find_device_by_node(np); 716 + if (!ecc_pdev) { 717 + of_node_put(np); 718 + return NULL; 719 + } 720 + 721 + platform_device_put(ecc_pdev); 722 + of_node_put(np); 723 + 724 + return &ecc_pdev->dev; 725 + } 619 726 620 727 MODULE_LICENSE("GPL"); 621 728 MODULE_AUTHOR("Miquel Raynal <miquel.raynal@bootlin.com>");
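The on-host engine registry added to `ecc.c` above is a mutex-protected list whose entries are matched by their `dev` pointer, with double registration treated as a no-op. The structure of that pattern can be sketched in plain C (a singly linked list standing in for `list_head`, a `const void *` for `struct device *` — both stand-ins, not the kernel types):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of the on-host ECC engine registry. */
struct ecc_engine {
	const void *dev;          /* stand-in for struct device * */
	struct ecc_engine *next;  /* stand-in for the list_head */
};

static struct ecc_engine *engines;

static int engine_register(struct ecc_engine *eng)
{
	struct ecc_engine *it;

	if (!eng)
		return -1;
	/* Prevent multiple registrations of one engine */
	for (it = engines; it; it = it->next)
		if (it == eng)
			return 0;
	eng->next = engines;
	engines = eng;
	return 0;
}

/* Model of nand_ecc_match_on_host_hw_engine(): lookup by device. */
static struct ecc_engine *engine_match(const void *dev)
{
	struct ecc_engine *it;

	for (it = engines; it; it = it->next)
		if (it->dev == dev)
			return it;
	return NULL;
}
```

The lookup by device pointer is what lets `nand_ecc_get_on_host_hw_engine()` resolve a `nand-ecc-engine` phandle to an engine, and why an unmatched phandle maps to `-EPROBE_DEFER` rather than a hard failure.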
+41 -10
drivers/mtd/nand/spi/core.c
··· 381 381 } 382 382 } 383 383 384 - rdesc = spinand->dirmaps[req->pos.plane].rdesc; 384 + if (req->mode == MTD_OPS_RAW) 385 + rdesc = spinand->dirmaps[req->pos.plane].rdesc; 386 + else 387 + rdesc = spinand->dirmaps[req->pos.plane].rdesc_ecc; 385 388 386 389 while (nbytes) { 387 390 ret = spi_mem_dirmap_read(rdesc, column, nbytes, buf); ··· 455 452 req->ooblen); 456 453 } 457 454 458 - wdesc = spinand->dirmaps[req->pos.plane].wdesc; 455 + if (req->mode == MTD_OPS_RAW) 456 + wdesc = spinand->dirmaps[req->pos.plane].wdesc; 457 + else 458 + wdesc = spinand->dirmaps[req->pos.plane].wdesc_ecc; 459 459 460 460 while (nbytes) { 461 461 ret = spi_mem_dirmap_write(wdesc, column, nbytes, buf); ··· 871 865 872 866 spinand->dirmaps[plane].rdesc = desc; 873 867 868 + if (nand->ecc.engine->integration != NAND_ECC_ENGINE_INTEGRATION_PIPELINED) { 869 + spinand->dirmaps[plane].wdesc_ecc = spinand->dirmaps[plane].wdesc; 870 + spinand->dirmaps[plane].rdesc_ecc = spinand->dirmaps[plane].rdesc; 871 + 872 + return 0; 873 + } 874 + 875 + info.op_tmpl = *spinand->op_templates.update_cache; 876 + info.op_tmpl.data.ecc = true; 877 + desc = devm_spi_mem_dirmap_create(&spinand->spimem->spi->dev, 878 + spinand->spimem, &info); 879 + if (IS_ERR(desc)) 880 + return PTR_ERR(desc); 881 + 882 + spinand->dirmaps[plane].wdesc_ecc = desc; 883 + 884 + info.op_tmpl = *spinand->op_templates.read_cache; 885 + info.op_tmpl.data.ecc = true; 886 + desc = devm_spi_mem_dirmap_create(&spinand->spimem->spi->dev, 887 + spinand->spimem, &info); 888 + if (IS_ERR(desc)) 889 + return PTR_ERR(desc); 890 + 891 + spinand->dirmaps[plane].rdesc_ecc = desc; 892 + 874 893 return 0; 875 894 } 876 895 ··· 1239 1208 if (ret) 1240 1209 goto err_free_bufs; 1241 1210 1242 - ret = spinand_create_dirmaps(spinand); 1243 - if (ret) { 1244 - dev_err(dev, 1245 - "Failed to create direct mappings for read/write operations (err = %d)\n", 1246 - ret); 1247 - goto err_manuf_cleanup; 1248 - } 1249 - 1250 1211 ret = nanddev_init(nand, 
&spinand_ops, THIS_MODULE); 1251 1212 if (ret) 1252 1213 goto err_manuf_cleanup; ··· 1272 1249 /* Propagate ECC information to mtd_info */ 1273 1250 mtd->ecc_strength = nanddev_get_ecc_conf(nand)->strength; 1274 1251 mtd->ecc_step_size = nanddev_get_ecc_conf(nand)->step_size; 1252 + 1253 + ret = spinand_create_dirmaps(spinand); 1254 + if (ret) { 1255 + dev_err(dev, 1256 + "Failed to create direct mappings for read/write operations (err = %d)\n", 1257 + ret); 1258 + goto err_cleanup_ecc_engine; 1259 + } 1275 1260 1276 1261 return 0; 1277 1262
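The dirmap rework above keeps two descriptor pairs per plane and only creates distinct `*_ecc` descriptors (with `data.ecc` set in the template) when the engine is pipelined; otherwise they alias the raw descriptors, so the read/write paths can select by `req->mode` unconditionally. A toy model of that selection logic (integer handles stand in for real dirmap descriptors; everything here is illustrative):

```c
#include <assert.h>

enum integration { EXTERNAL, PIPELINED };
enum mode { OPS_RAW, OPS_AUTO_OOB };

struct dirmaps {
	int rdesc, wdesc;          /* stand-ins for dirmap descriptors */
	int rdesc_ecc, wdesc_ecc;
};

static void create_dirmaps(struct dirmaps *d, enum integration integ)
{
	d->rdesc = 1;
	d->wdesc = 2;
	if (integ != PIPELINED) {
		/* No on-the-fly correction: reuse the raw mappings */
		d->rdesc_ecc = d->rdesc;
		d->wdesc_ecc = d->wdesc;
		return;
	}
	/* Distinct mappings carrying the data.ecc flag */
	d->rdesc_ecc = 3;
	d->wdesc_ecc = 4;
}

/* The read path's descriptor choice, keyed only on the request mode */
static int pick_rdesc(const struct dirmaps *d, enum mode m)
{
	return m == OPS_RAW ? d->rdesc : d->rdesc_ecc;
}
```

This also explains why `spinand_create_dirmaps()` had to move after ECC engine initialization in the probe path: the integration type must be known before the descriptors are created.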
+1 -1
drivers/mtd/nand/spi/macronix.c
···
 20  20  
 21  21  static SPINAND_OP_VARIANTS(write_cache_variants,
 22  22  		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
 23      -		SPINAND_PROG_LOAD(true, 0, NULL, 0));
     23  +		SPINAND_PROG_LOAD(false, 0, NULL, 0));
 24  24  
 25  25  static SPINAND_OP_VARIANTS(update_cache_variants,
 26  26  		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
+1
drivers/spi/Kconfig
···
 879  879  config SPI_MXIC
 880  880  	tristate "Macronix MX25F0A SPI controller"
 881  881  	depends on SPI_MASTER
      882  +	imply MTD_NAND_ECC_MXIC
 882  883  	help
 883  884  	  This selects the Macronix MX25F0A SPI controller driver.
 884  885  
+6 -4
drivers/spi/spi-cadence-quadspi.c
···
 1441  1441  	if (!(all_true || all_false))
 1442  1442  		return false;
 1443  1443  
 1444       -	if (all_true)
 1445       -		return spi_mem_dtr_supports_op(mem, op);
 1446       -	else
 1447       -		return spi_mem_default_supports_op(mem, op);
       1444  +	return spi_mem_default_supports_op(mem, op);
 1448  1445  }
 1449  1446  
 1450  1447  static int cqspi_of_get_flash_pdata(struct platform_device *pdev,
···
 1592  1595  	.supports_op = cqspi_supports_mem_op,
 1593  1596  };
 1594  1597  
       1598  +static const struct spi_controller_mem_caps cqspi_mem_caps = {
       1599  +	.dtr = true,
       1600  +};
       1601  +
 1595  1602  static int cqspi_setup_flash(struct cqspi_st *cqspi)
 1596  1603  {
 1597  1604  	struct platform_device *pdev = cqspi->pdev;
···
 1653  1652  	}
 1654  1653  	master->mode_bits = SPI_RX_QUAD | SPI_RX_DUAL;
 1655  1654  	master->mem_ops = &cqspi_mem_ops;
       1655  +	master->mem_caps = &cqspi_mem_caps;
 1656  1656  	master->dev.of_node = pdev->dev.of_node;
 1657  1657  
 1658  1658  	cqspi = spi_master_get_devdata(master);
+18 -14
drivers/spi/spi-mem.c
···
 160  160  	return true;
 161  161  }
 162  162  
 163       -bool spi_mem_dtr_supports_op(struct spi_mem *mem,
 164       -			     const struct spi_mem_op *op)
 165       -{
 166       -	if (op->cmd.nbytes != 2)
 167       -		return false;
 168       -
 169       -	return spi_mem_check_buswidth(mem, op);
 170       -}
 171       -EXPORT_SYMBOL_GPL(spi_mem_dtr_supports_op);
 172       -
 173  163  bool spi_mem_default_supports_op(struct spi_mem *mem,
 174  164  				 const struct spi_mem_op *op)
 175  165  {
 176       -	if (op->cmd.dtr || op->addr.dtr || op->dummy.dtr || op->data.dtr)
 177       -		return false;
      166  +	struct spi_controller *ctlr = mem->spi->controller;
      167  +	bool op_is_dtr =
      168  +		op->cmd.dtr || op->addr.dtr || op->dummy.dtr || op->data.dtr;
 178  169  
 179       -	if (op->cmd.nbytes != 1)
 180       -		return false;
      170  +	if (op_is_dtr) {
      171  +		if (!spi_mem_controller_is_capable(ctlr, dtr))
      172  +			return false;
      173  +
      174  +		if (op->cmd.nbytes != 2)
      175  +			return false;
      176  +	} else {
      177  +		if (op->cmd.nbytes != 1)
      178  +			return false;
      179  +	}
      180  +
      181  +	if (op->data.ecc) {
      182  +		if (!spi_mem_controller_is_capable(ctlr, ecc))
      183  +			return false;
      184  +	}
 181  185  
 182  186  	return spi_mem_check_buswidth(mem, op);
 183  187  }
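The reworked check above folds the old `spi_mem_dtr_supports_op()` into the default path: a DTR op requires the controller's `dtr` capability and a 2-byte command, a non-DTR op a 1-byte command, and `data.ecc` requires the `ecc` capability. The decision tree can be modeled standalone (field names mirror the kernel's, but this is a sketch, not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for struct spi_controller_mem_caps */
struct mem_caps { bool dtr, ecc; };

/* Flattened stand-in for struct spi_mem_op */
struct mem_op {
	int cmd_nbytes;
	bool cmd_dtr, addr_dtr, dummy_dtr, data_dtr;
	bool data_ecc;
};

static bool default_supports_op(const struct mem_caps *caps,
				const struct mem_op *op)
{
	bool op_is_dtr = op->cmd_dtr || op->addr_dtr ||
			 op->dummy_dtr || op->data_dtr;

	if (op_is_dtr) {
		if (!caps->dtr)
			return false;
		if (op->cmd_nbytes != 2)  /* DTR opcodes are two bytes */
			return false;
	} else {
		if (op->cmd_nbytes != 1)
			return false;
	}

	if (op->data_ecc && !caps->ecc)
		return false;

	return true;  /* buswidth checks elided from this sketch */
}
```

With the capability structure in place, drivers such as cadence-quadspi no longer need a custom DTR branch in `supports_op`; declaring `.dtr = true` in their `spi_controller_mem_caps` is enough.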
+280 -60
drivers/spi/spi-mxic.c
··· 12 12 #include <linux/io.h> 13 13 #include <linux/iopoll.h> 14 14 #include <linux/module.h> 15 + #include <linux/mtd/nand.h> 16 + #include <linux/mtd/nand-ecc-mxic.h> 15 17 #include <linux/platform_device.h> 16 18 #include <linux/pm_runtime.h> 17 19 #include <linux/spi/spi.h> ··· 169 167 #define HW_TEST(x) (0xe0 + ((x) * 4)) 170 168 171 169 struct mxic_spi { 170 + struct device *dev; 172 171 struct clk *ps_clk; 173 172 struct clk *send_clk; 174 173 struct clk *send_dly_clk; 175 174 void __iomem *regs; 176 175 u32 cur_speed_hz; 176 + struct { 177 + void __iomem *map; 178 + dma_addr_t dma; 179 + size_t size; 180 + } linear; 181 + 182 + struct { 183 + bool use_pipelined_conf; 184 + struct nand_ecc_engine *pipelined_engine; 185 + void *ctx; 186 + } ecc; 177 187 }; 178 188 179 189 static int mxic_spi_clk_enable(struct mxic_spi *mxic) ··· 294 280 mxic->regs + HC_CFG); 295 281 } 296 282 283 + static u32 mxic_spi_prep_hc_cfg(struct spi_device *spi, u32 flags) 284 + { 285 + int nio = 1; 286 + 287 + if (spi->mode & (SPI_TX_OCTAL | SPI_RX_OCTAL)) 288 + nio = 8; 289 + else if (spi->mode & (SPI_TX_QUAD | SPI_RX_QUAD)) 290 + nio = 4; 291 + else if (spi->mode & (SPI_TX_DUAL | SPI_RX_DUAL)) 292 + nio = 2; 293 + 294 + return flags | HC_CFG_NIO(nio) | 295 + HC_CFG_TYPE(spi->chip_select, HC_CFG_TYPE_SPI_NOR) | 296 + HC_CFG_SLV_ACT(spi->chip_select) | HC_CFG_IDLE_SIO_LVL(1); 297 + } 298 + 299 + static u32 mxic_spi_mem_prep_op_cfg(const struct spi_mem_op *op, 300 + unsigned int data_len) 301 + { 302 + u32 cfg = OP_CMD_BYTES(op->cmd.nbytes) | 303 + OP_CMD_BUSW(fls(op->cmd.buswidth) - 1) | 304 + (op->cmd.dtr ? OP_CMD_DDR : 0); 305 + 306 + if (op->addr.nbytes) 307 + cfg |= OP_ADDR_BYTES(op->addr.nbytes) | 308 + OP_ADDR_BUSW(fls(op->addr.buswidth) - 1) | 309 + (op->addr.dtr ? 
OP_ADDR_DDR : 0); 310 + 311 + if (op->dummy.nbytes) 312 + cfg |= OP_DUMMY_CYC(op->dummy.nbytes); 313 + 314 + /* Direct mapping data.nbytes field is not populated */ 315 + if (data_len) { 316 + cfg |= OP_DATA_BUSW(fls(op->data.buswidth) - 1) | 317 + (op->data.dtr ? OP_DATA_DDR : 0); 318 + if (op->data.dir == SPI_MEM_DATA_IN) { 319 + cfg |= OP_READ; 320 + if (op->data.dtr) 321 + cfg |= OP_DQS_EN; 322 + } 323 + } 324 + 325 + return cfg; 326 + } 327 + 297 328 static int mxic_spi_data_xfer(struct mxic_spi *mxic, const void *txbuf, 298 329 void *rxbuf, unsigned int len) 299 330 { ··· 363 304 364 305 writel(data, mxic->regs + TXD(nbytes % 4)); 365 306 307 + ret = readl_poll_timeout(mxic->regs + INT_STS, sts, 308 + sts & INT_TX_EMPTY, 0, USEC_PER_SEC); 309 + if (ret) 310 + return ret; 311 + 312 + ret = readl_poll_timeout(mxic->regs + INT_STS, sts, 313 + sts & INT_RX_NOT_EMPTY, 0, 314 + USEC_PER_SEC); 315 + if (ret) 316 + return ret; 317 + 318 + data = readl(mxic->regs + RXD); 366 319 if (rxbuf) { 367 - ret = readl_poll_timeout(mxic->regs + INT_STS, sts, 368 - sts & INT_TX_EMPTY, 0, 369 - USEC_PER_SEC); 370 - if (ret) 371 - return ret; 372 - 373 - ret = readl_poll_timeout(mxic->regs + INT_STS, sts, 374 - sts & INT_RX_NOT_EMPTY, 0, 375 - USEC_PER_SEC); 376 - if (ret) 377 - return ret; 378 - 379 - data = readl(mxic->regs + RXD); 380 320 data >>= (8 * (4 - nbytes)); 381 321 memcpy(rxbuf + pos, &data, nbytes); 382 - WARN_ON(readl(mxic->regs + INT_STS) & INT_RX_NOT_EMPTY); 383 - } else { 384 - readl(mxic->regs + RXD); 385 322 } 386 323 WARN_ON(readl(mxic->regs + INT_STS) & INT_RX_NOT_EMPTY); 387 324 ··· 387 332 return 0; 388 333 } 389 334 335 + static ssize_t mxic_spi_mem_dirmap_read(struct spi_mem_dirmap_desc *desc, 336 + u64 offs, size_t len, void *buf) 337 + { 338 + struct mxic_spi *mxic = spi_master_get_devdata(desc->mem->spi->master); 339 + int ret; 340 + u32 sts; 341 + 342 + if (WARN_ON(offs + desc->info.offset + len > U32_MAX)) 343 + return -EINVAL; 344 + 345 + 
writel(mxic_spi_prep_hc_cfg(desc->mem->spi, 0), mxic->regs + HC_CFG); 346 + 347 + writel(mxic_spi_mem_prep_op_cfg(&desc->info.op_tmpl, len), 348 + mxic->regs + LRD_CFG); 349 + writel(desc->info.offset + offs, mxic->regs + LRD_ADDR); 350 + len = min_t(size_t, len, mxic->linear.size); 351 + writel(len, mxic->regs + LRD_RANGE); 352 + writel(LMODE_CMD0(desc->info.op_tmpl.cmd.opcode) | 353 + LMODE_SLV_ACT(desc->mem->spi->chip_select) | 354 + LMODE_EN, 355 + mxic->regs + LRD_CTRL); 356 + 357 + if (mxic->ecc.use_pipelined_conf && desc->info.op_tmpl.data.ecc) { 358 + ret = mxic_ecc_process_data_pipelined(mxic->ecc.pipelined_engine, 359 + NAND_PAGE_READ, 360 + mxic->linear.dma + offs); 361 + if (ret) 362 + return ret; 363 + } else { 364 + memcpy_fromio(buf, mxic->linear.map, len); 365 + } 366 + 367 + writel(INT_LRD_DIS, mxic->regs + INT_STS); 368 + writel(0, mxic->regs + LRD_CTRL); 369 + 370 + ret = readl_poll_timeout(mxic->regs + INT_STS, sts, 371 + sts & INT_LRD_DIS, 0, USEC_PER_SEC); 372 + if (ret) 373 + return ret; 374 + 375 + return len; 376 + } 377 + 378 + static ssize_t mxic_spi_mem_dirmap_write(struct spi_mem_dirmap_desc *desc, 379 + u64 offs, size_t len, 380 + const void *buf) 381 + { 382 + struct mxic_spi *mxic = spi_master_get_devdata(desc->mem->spi->master); 383 + u32 sts; 384 + int ret; 385 + 386 + if (WARN_ON(offs + desc->info.offset + len > U32_MAX)) 387 + return -EINVAL; 388 + 389 + writel(mxic_spi_prep_hc_cfg(desc->mem->spi, 0), mxic->regs + HC_CFG); 390 + 391 + writel(mxic_spi_mem_prep_op_cfg(&desc->info.op_tmpl, len), 392 + mxic->regs + LWR_CFG); 393 + writel(desc->info.offset + offs, mxic->regs + LWR_ADDR); 394 + len = min_t(size_t, len, mxic->linear.size); 395 + writel(len, mxic->regs + LWR_RANGE); 396 + writel(LMODE_CMD0(desc->info.op_tmpl.cmd.opcode) | 397 + LMODE_SLV_ACT(desc->mem->spi->chip_select) | 398 + LMODE_EN, 399 + mxic->regs + LWR_CTRL); 400 + 401 + if (mxic->ecc.use_pipelined_conf && desc->info.op_tmpl.data.ecc) { 402 + ret = 
mxic_ecc_process_data_pipelined(mxic->ecc.pipelined_engine, 403 + NAND_PAGE_WRITE, 404 + mxic->linear.dma + offs); 405 + if (ret) 406 + return ret; 407 + } else { 408 + memcpy_toio(mxic->linear.map, buf, len); 409 + } 410 + 411 + writel(INT_LWR_DIS, mxic->regs + INT_STS); 412 + writel(0, mxic->regs + LWR_CTRL); 413 + 414 + ret = readl_poll_timeout(mxic->regs + INT_STS, sts, 415 + sts & INT_LWR_DIS, 0, USEC_PER_SEC); 416 + if (ret) 417 + return ret; 418 + 419 + return len; 420 + } 421 + 390 422 static bool mxic_spi_mem_supports_op(struct spi_mem *mem, 391 423 const struct spi_mem_op *op) 392 424 { 393 - bool all_false; 394 - 395 425 if (op->data.buswidth > 8 || op->addr.buswidth > 8 || 396 426 op->dummy.buswidth > 8 || op->cmd.buswidth > 8) 397 427 return false; ··· 488 348 if (op->addr.nbytes > 7) 489 349 return false; 490 350 491 - all_false = !op->cmd.dtr && !op->addr.dtr && !op->dummy.dtr && 492 - !op->data.dtr; 351 + return spi_mem_default_supports_op(mem, op); 352 + } 493 353 494 - if (all_false) 495 - return spi_mem_default_supports_op(mem, op); 496 - else 497 - return spi_mem_dtr_supports_op(mem, op); 354 + static int mxic_spi_mem_dirmap_create(struct spi_mem_dirmap_desc *desc) 355 + { 356 + struct mxic_spi *mxic = spi_master_get_devdata(desc->mem->spi->master); 357 + 358 + if (!mxic->linear.map) 359 + return -EINVAL; 360 + 361 + if (desc->info.offset + desc->info.length > U32_MAX) 362 + return -EINVAL; 363 + 364 + if (!mxic_spi_mem_supports_op(desc->mem, &desc->info.op_tmpl)) 365 + return -EOPNOTSUPP; 366 + 367 + return 0; 498 368 } 499 369 500 370 static int mxic_spi_mem_exec_op(struct spi_mem *mem, 501 371 const struct spi_mem_op *op) 502 372 { 503 373 struct mxic_spi *mxic = spi_master_get_devdata(mem->spi->master); 504 - int nio = 1, i, ret; 505 - u32 ss_ctrl; 374 + int i, ret; 506 375 u8 addr[8], cmd[2]; 507 376 508 377 ret = mxic_spi_set_freq(mxic, mem->spi->max_speed_hz); 509 378 if (ret) 510 379 return ret; 511 380 512 - if (mem->spi->mode & 
(SPI_TX_OCTAL | SPI_RX_OCTAL)) 513 - nio = 8; 514 - else if (mem->spi->mode & (SPI_TX_QUAD | SPI_RX_QUAD)) 515 - nio = 4; 516 - else if (mem->spi->mode & (SPI_TX_DUAL | SPI_RX_DUAL)) 517 - nio = 2; 518 - 519 - writel(HC_CFG_NIO(nio) | 520 - HC_CFG_TYPE(mem->spi->chip_select, HC_CFG_TYPE_SPI_NOR) | 521 - HC_CFG_SLV_ACT(mem->spi->chip_select) | HC_CFG_IDLE_SIO_LVL(1) | 522 - HC_CFG_MAN_CS_EN, 381 + writel(mxic_spi_prep_hc_cfg(mem->spi, HC_CFG_MAN_CS_EN), 523 382 mxic->regs + HC_CFG); 383 + 524 384 writel(HC_EN_BIT, mxic->regs + HC_EN); 525 385 526 - ss_ctrl = OP_CMD_BYTES(op->cmd.nbytes) | 527 - OP_CMD_BUSW(fls(op->cmd.buswidth) - 1) | 528 - (op->cmd.dtr ? OP_CMD_DDR : 0); 529 - 530 - if (op->addr.nbytes) 531 - ss_ctrl |= OP_ADDR_BYTES(op->addr.nbytes) | 532 - OP_ADDR_BUSW(fls(op->addr.buswidth) - 1) | 533 - (op->addr.dtr ? OP_ADDR_DDR : 0); 534 - 535 - if (op->dummy.nbytes) 536 - ss_ctrl |= OP_DUMMY_CYC(op->dummy.nbytes); 537 - 538 - if (op->data.nbytes) { 539 - ss_ctrl |= OP_DATA_BUSW(fls(op->data.buswidth) - 1) | 540 - (op->data.dtr ? 
 				   OP_DATA_DDR : 0);
-		if (op->data.dir == SPI_MEM_DATA_IN) {
-			ss_ctrl |= OP_READ;
-			if (op->data.dtr)
-				ss_ctrl |= OP_DQS_EN;
-		}
-	}
-
-	writel(ss_ctrl, mxic->regs + SS_CTRL(mem->spi->chip_select));
+	writel(mxic_spi_mem_prep_op_cfg(op, op->data.nbytes),
+	       mxic->regs + SS_CTRL(mem->spi->chip_select));
 
 	writel(readl(mxic->regs + HC_CFG) | HC_CFG_MAN_CS_ASSERT,
 	       mxic->regs + HC_CFG);
···
 static const struct spi_controller_mem_ops mxic_spi_mem_ops = {
 	.supports_op = mxic_spi_mem_supports_op,
 	.exec_op = mxic_spi_mem_exec_op,
+	.dirmap_create = mxic_spi_mem_dirmap_create,
+	.dirmap_read = mxic_spi_mem_dirmap_read,
+	.dirmap_write = mxic_spi_mem_dirmap_write,
+};
+
+static const struct spi_controller_mem_caps mxic_spi_mem_caps = {
+	.dtr = true,
+	.ecc = true,
 };
 
 static void mxic_spi_set_cs(struct spi_device *spi, bool lvl)
···
 	return 0;
 }
 
+/* ECC wrapper */
+static int mxic_spi_mem_ecc_init_ctx(struct nand_device *nand)
+{
+	struct nand_ecc_engine_ops *ops = mxic_ecc_get_pipelined_ops();
+	struct mxic_spi *mxic = nand->ecc.engine->priv;
+
+	mxic->ecc.use_pipelined_conf = true;
+
+	return ops->init_ctx(nand);
+}
+
+static void mxic_spi_mem_ecc_cleanup_ctx(struct nand_device *nand)
+{
+	struct nand_ecc_engine_ops *ops = mxic_ecc_get_pipelined_ops();
+	struct mxic_spi *mxic = nand->ecc.engine->priv;
+
+	mxic->ecc.use_pipelined_conf = false;
+
+	ops->cleanup_ctx(nand);
+}
+
+static int mxic_spi_mem_ecc_prepare_io_req(struct nand_device *nand,
+					   struct nand_page_io_req *req)
+{
+	struct nand_ecc_engine_ops *ops = mxic_ecc_get_pipelined_ops();
+
+	return ops->prepare_io_req(nand, req);
+}
+
+static int mxic_spi_mem_ecc_finish_io_req(struct nand_device *nand,
+					  struct nand_page_io_req *req)
+{
+	struct nand_ecc_engine_ops *ops = mxic_ecc_get_pipelined_ops();
+
+	return ops->finish_io_req(nand, req);
+}
+
+static struct nand_ecc_engine_ops mxic_spi_mem_ecc_engine_pipelined_ops = {
+	.init_ctx = mxic_spi_mem_ecc_init_ctx,
+	.cleanup_ctx = mxic_spi_mem_ecc_cleanup_ctx,
+	.prepare_io_req = mxic_spi_mem_ecc_prepare_io_req,
+	.finish_io_req = mxic_spi_mem_ecc_finish_io_req,
+};
+
+static void mxic_spi_mem_ecc_remove(struct mxic_spi *mxic)
+{
+	if (mxic->ecc.pipelined_engine) {
+		mxic_ecc_put_pipelined_engine(mxic->ecc.pipelined_engine);
+		nand_ecc_unregister_on_host_hw_engine(mxic->ecc.pipelined_engine);
+	}
+}
+
+static int mxic_spi_mem_ecc_probe(struct platform_device *pdev,
+				  struct mxic_spi *mxic)
+{
+	struct nand_ecc_engine *eng;
+
+	if (!mxic_ecc_get_pipelined_ops())
+		return -EOPNOTSUPP;
+
+	eng = mxic_ecc_get_pipelined_engine(pdev);
+	if (IS_ERR(eng))
+		return PTR_ERR(eng);
+
+	eng->dev = &pdev->dev;
+	eng->integration = NAND_ECC_ENGINE_INTEGRATION_PIPELINED;
+	eng->ops = &mxic_spi_mem_ecc_engine_pipelined_ops;
+	eng->priv = mxic;
+	mxic->ecc.pipelined_engine = eng;
+	nand_ecc_register_on_host_hw_engine(eng);
+
+	return 0;
+}
+
 static int __maybe_unused mxic_spi_runtime_suspend(struct device *dev)
 {
 	struct spi_master *master = dev_get_drvdata(dev);
···
 	platform_set_drvdata(pdev, master);
 
 	mxic = spi_master_get_devdata(master);
+	mxic->dev = &pdev->dev;
 
 	master->dev.of_node = pdev->dev.of_node;
···
 	if (IS_ERR(mxic->regs))
 		return PTR_ERR(mxic->regs);
 
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
+	mxic->linear.map = devm_ioremap_resource(&pdev->dev, res);
+	if (!IS_ERR(mxic->linear.map)) {
+		mxic->linear.dma = res->start;
+		mxic->linear.size = resource_size(res);
+	} else {
+		mxic->linear.map = NULL;
+	}
+
 	pm_runtime_enable(&pdev->dev);
 	master->auto_runtime_pm = true;
 
 	master->num_chipselect = 1;
 	master->mem_ops = &mxic_spi_mem_ops;
+	master->mem_caps = &mxic_spi_mem_caps;
 
 	master->set_cs = mxic_spi_set_cs;
 	master->transfer_one = mxic_spi_transfer_one;
···
 			  SPI_RX_OCTAL | SPI_TX_OCTAL;
 
 	mxic_spi_hw_init(mxic);
+
+	ret = mxic_spi_mem_ecc_probe(pdev, mxic);
+	if (ret == -EPROBE_DEFER) {
+		pm_runtime_disable(&pdev->dev);
+		return ret;
+	}
 
 	ret = spi_register_master(master);
 	if (ret) {
···
 static int mxic_spi_remove(struct platform_device *pdev)
 {
 	struct spi_master *master = platform_get_drvdata(pdev);
+	struct mxic_spi *mxic = spi_master_get_devdata(master);
 
 	pm_runtime_disable(&pdev->dev);
+	mxic_spi_mem_ecc_remove(mxic);
 	spi_unregister_master(master);
 
 	return 0;
include/linux/mtd/nand-ecc-mxic.h (+49)
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright © 2019 Macronix
+ * Author: Miquèl Raynal <miquel.raynal@bootlin.com>
+ *
+ * Header for the Macronix external ECC engine.
+ */
+
+#ifndef __MTD_NAND_ECC_MXIC_H__
+#define __MTD_NAND_ECC_MXIC_H__
+
+#include <linux/platform_device.h>
+#include <linux/device.h>
+
+struct mxic_ecc_engine;
+
+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_MXIC) && IS_REACHABLE(CONFIG_MTD_NAND_CORE)
+
+struct nand_ecc_engine_ops *mxic_ecc_get_pipelined_ops(void);
+struct nand_ecc_engine *mxic_ecc_get_pipelined_engine(struct platform_device *spi_pdev);
+void mxic_ecc_put_pipelined_engine(struct nand_ecc_engine *eng);
+int mxic_ecc_process_data_pipelined(struct nand_ecc_engine *eng,
+				    unsigned int direction, dma_addr_t dirmap);
+
+#else /* !CONFIG_MTD_NAND_ECC_MXIC */
+
+static inline struct nand_ecc_engine_ops *mxic_ecc_get_pipelined_ops(void)
+{
+	return NULL;
+}
+
+static inline struct nand_ecc_engine *
+mxic_ecc_get_pipelined_engine(struct platform_device *spi_pdev)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline void mxic_ecc_put_pipelined_engine(struct nand_ecc_engine *eng) {}
+
+static inline int mxic_ecc_process_data_pipelined(struct nand_ecc_engine *eng,
+						  unsigned int direction,
+						  dma_addr_t dirmap)
+{
+	return -EOPNOTSUPP;
+}
+
+#endif /* CONFIG_MTD_NAND_ECC_MXIC */
+
+#endif /* __MTD_NAND_ECC_MXIC_H__ */
include/linux/mtd/nand.h (+49)
···
 };
 
 /**
+ * enum nand_ecc_engine_integration - How the NAND ECC engine is integrated
+ * @NAND_ECC_ENGINE_INTEGRATION_INVALID: Invalid value
+ * @NAND_ECC_ENGINE_INTEGRATION_PIPELINED: Pipelined engine, performs on-the-fly
+ *                                         correction, does not need to copy
+ *                                         data around
+ * @NAND_ECC_ENGINE_INTEGRATION_EXTERNAL: External engine, needs to bring the
+ *                                        data into its own area before use
+ */
+enum nand_ecc_engine_integration {
+	NAND_ECC_ENGINE_INTEGRATION_INVALID,
+	NAND_ECC_ENGINE_INTEGRATION_PIPELINED,
+	NAND_ECC_ENGINE_INTEGRATION_EXTERNAL,
+};
+
+/**
  * struct nand_ecc_engine - ECC engine abstraction for NAND devices
+ * @dev: Host device
+ * @node: Private field for registration time
  * @ops: ECC engine operations
+ * @integration: How the engine is integrated with the host
+ *               (only relevant on %NAND_ECC_ENGINE_TYPE_ON_HOST engines)
+ * @priv: Private data
  */
 struct nand_ecc_engine {
+	struct device *dev;
+	struct list_head node;
 	struct nand_ecc_engine_ops *ops;
+	enum nand_ecc_engine_integration integration;
+	void *priv;
 };
 
 void of_get_nand_ecc_user_config(struct nand_device *nand);
···
 int nand_ecc_finish_io_req(struct nand_device *nand,
 			   struct nand_page_io_req *req);
 bool nand_ecc_is_strong_enough(struct nand_device *nand);
+
+#if IS_REACHABLE(CONFIG_MTD_NAND_CORE)
+int nand_ecc_register_on_host_hw_engine(struct nand_ecc_engine *engine);
+int nand_ecc_unregister_on_host_hw_engine(struct nand_ecc_engine *engine);
+#else
+static inline int
+nand_ecc_register_on_host_hw_engine(struct nand_ecc_engine *engine)
+{
+	return -ENOTSUPP;
+}
+static inline int
+nand_ecc_unregister_on_host_hw_engine(struct nand_ecc_engine *engine)
+{
+	return -ENOTSUPP;
+}
+#endif
+
 struct nand_ecc_engine *nand_ecc_get_sw_engine(struct nand_device *nand);
 struct nand_ecc_engine *nand_ecc_get_on_die_hw_engine(struct nand_device *nand);
+struct nand_ecc_engine *nand_ecc_get_on_host_hw_engine(struct nand_device *nand);
+void nand_ecc_put_on_host_hw_engine(struct nand_device *nand);
+struct device *nand_ecc_get_engine_dev(struct device *host);
 
 #if IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING)
 struct nand_ecc_engine *nand_ecc_sw_hamming_get_engine(void);
···
 /* ECC related functions */
 int nanddev_ecc_engine_init(struct nand_device *nand);
 void nanddev_ecc_engine_cleanup(struct nand_device *nand);
+
+static inline void *nand_to_ecc_ctx(struct nand_device *nand)
+{
+	return nand->ecc.ctx.priv;
+}
 
 /* BBT related functions */
 enum nand_bbt_block_status {
include/linux/mtd/spinand.h (+2)
···
 struct spinand_dirmap {
 	struct spi_mem_dirmap_desc *wdesc;
 	struct spi_mem_dirmap_desc *rdesc;
+	struct spi_mem_dirmap_desc *wdesc_ecc;
+	struct spi_mem_dirmap_desc *rdesc_ecc;
 };
 
 /**
include/linux/spi/spi-mem.h (+15 -11)
···
  * @dummy.dtr: whether the dummy bytes should be sent in DTR mode or not
  * @data.buswidth: number of IO lanes used to send/receive the data
  * @data.dtr: whether the data should be sent in DTR mode or not
+ * @data.ecc: whether error correction is required or not
  * @data.dir: direction of the transfer
  * @data.nbytes: number of data bytes to send/receive. Can be zero if the
  *               operation does not involve transferring data
···
 	struct {
 		u8 buswidth;
 		u8 dtr : 1;
+		u8 ecc : 1;
 		enum spi_mem_data_dir dir;
 		unsigned int nbytes;
 		union {
···
 };
 
 /**
+ * struct spi_controller_mem_caps - SPI memory controller capabilities
+ * @dtr: Supports DTR operations
+ * @ecc: Supports operations with error correction
+ */
+struct spi_controller_mem_caps {
+	bool dtr;
+	bool ecc;
+};
+
+#define spi_mem_controller_is_capable(ctlr, cap)	\
+	((ctlr)->mem_caps && (ctlr)->mem_caps->cap)
+
+/**
  * struct spi_mem_driver - SPI memory driver
  * @spidrv: inherit from a SPI driver
  * @probe: probe a SPI memory. Usually where detection/initialization takes
···
 
 bool spi_mem_default_supports_op(struct spi_mem *mem,
 				 const struct spi_mem_op *op);
-
-bool spi_mem_dtr_supports_op(struct spi_mem *mem,
-			     const struct spi_mem_op *op);
-
 #else
 static inline int
 spi_controller_dma_map_mem_op_data(struct spi_controller *ctlr,
···
 static inline
 bool spi_mem_default_supports_op(struct spi_mem *mem,
 				 const struct spi_mem_op *op)
 {
 	return false;
 }
-
-static inline
-bool spi_mem_dtr_supports_op(struct spi_mem *mem,
-			     const struct spi_mem_op *op)
-{
-	return false;
-}
include/linux/spi/spi.h (+3)
···
 struct spi_controller;
 struct spi_transfer;
 struct spi_controller_mem_ops;
+struct spi_controller_mem_caps;
 
 /*
  * INTERFACES between SPI master-side drivers and SPI slave protocol handlers,
···
  * @mem_ops: optimized/dedicated operations for interactions with SPI memory.
  *           This field is optional and should only be implemented if the
  *           controller has native support for memory like operations.
+ * @mem_caps: controller capabilities for the handling of memory operations.
  * @unprepare_message: undo any work done by prepare_message().
  * @slave_abort: abort the ongoing transfer request on an SPI slave controller
  * @cs_gpios: LEGACY: array of GPIO descs to use as chip select lines; one per
···
 
 	/* Optimized handlers for SPI memory-like operations. */
 	const struct spi_controller_mem_ops *mem_ops;
+	const struct spi_controller_mem_caps *mem_caps;
 
 	/* gpio chip select */
 	int *cs_gpios;