Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mtd/for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Miquel Raynal:
"Nothing stands out for this merge window, mostly minor fixes, such as
module descriptions, the use of debug macros and Makefile
improvements.

Raw NAND changes:

- The Freescale MXC driver has been converted to the newer
'->exec_op()' interface

- The meson driver now supports handling the boot ROM area with very
specific ECC needs

- Support for the iMX8QXP has been added to the GPMI driver

- The lpc32xx driver can now get the DMA channels using DT entries

- The Qcom binding has been improved by Rob to be more future-proof

- And then there is the usual load of misc and minor changes

SPI-NAND changes:

- The Macronix vendor driver has been improved to support an extended
ID to avoid conflicting with older devices after an ID reuse issue

SPI NOR changes:

- Drop support for Xilinx S3AN flashes. These flashes are for the
very old Xilinx Spartan 3 FPGAs, and supporting them required some
awkward code in the core.

Drop support for these flashes, along with the special handling we
needed for them in the core like non-power-of-2 page size handling
and the .setup() callback.

- Fix regression for old w25q128 flashes without SFDP tables.

Commit 83e824a4a595 ("mtd: spi-nor: Correct flags for Winbond
w25q128") dropped support for such devices under the assumption
that they aren't being used anymore. Users have now surfaced [0] so
fix the regression by supporting both kinds of devices.

- Core cleanups including removal of SPI_NOR_NO_FR flag and
simplification of spi_nor_get_flash_info()"

Link: https://lore.kernel.org/r/CALxbwRo_-9CaJmt7r7ELgu+vOcgk=xZcGHobnKf=oT2=u4d4aA@mail.gmail.com/ [0]

* tag 'mtd/for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (28 commits)
mtd: rawnand: lpx32xx: Fix dma_request_chan() error checks
mtd: spinand: macronix: Add support for serial NAND flash
mtd: spinand: macronix: Add support for reading Device ID 2
mtd: rawnand: lpx32xx: Request DMA channels using DT entries
dt-bindings: mtd: qcom,nandc: Define properties at top-level
mtd: rawnand: intel: use 'time_left' variable with wait_for_completion_timeout()
mtd: rawnand: mxc: use 'time_left' variable with wait_for_completion_timeout()
mtd: rawnand: gpmi: add iMX8QXP support.
mtd: rawnand: gpmi: add 'support_edo_timing' in gpmi_devdata
mtd: cmdlinepart: Replace `dbg()` macro with `pr_debug()`
mtd: add missing MODULE_DESCRIPTION() macros
mtd: make mtd_test.c a separate module
dt-bindings: mtd: gpmi-nand: Add 'fsl,imx8qxp-gpmi-nand' compatible string
mtd: rawnand: cadence: remove unused struct 'ecc_info'
mtd: rawnand: mxc: support software ECC
mtd: rawnand: mxc: implement exec_op
mtd: rawnand: mxc: separate page read from ecc calc
mtd: spi-nor: winbond: fix w25q128 regression
mtd: spi-nor: simplify spi_nor_get_flash_info()
mtd: spi-nor: get rid of SPI_NOR_NO_FR
...

+715 -810
+18
Documentation/devicetree/bindings/mtd/amlogic,meson-nand.yaml
···
   items:
     maximum: 0
 
+  amlogic,boot-pages:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+      Number of pages starting from offset 0, where a special ECC
+      configuration must be used because it is accessed by the ROM
+      code. This ECC configuration uses 384 bytes data blocks.
+      Also scrambling mode is enabled for such pages.
+
+  amlogic,boot-page-step:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+      Interval between pages, accessed by the ROM code. For example
+      we have 8 pages [0, 7]. Pages 0,2,4,6 are accessed by the
+      ROM code, so this field will be 2 (e.g. every 2nd page). Rest
+      of pages - 1,3,5,7 are read/written without this mode.
+
 unevaluatedProperties: false
 
 dependencies:
   nand-ecc-strength: [nand-ecc-step-size]
   nand-ecc-step-size: [nand-ecc-strength]
+  amlogic,boot-pages: [nand-is-boot-medium, "amlogic,boot-page-step"]
+  amlogic,boot-page-step: [nand-is-boot-medium, "amlogic,boot-pages"]
 
 
 required:
+22
Documentation/devicetree/bindings/mtd/gpmi-nand.yaml
···
         - fsl,imx6q-gpmi-nand
         - fsl,imx6sx-gpmi-nand
         - fsl,imx7d-gpmi-nand
+        - fsl,imx8qxp-gpmi-nand
     - items:
         - enum:
             - fsl,imx8mm-gpmi-nand
···
         clock-names:
           items:
             - const: gpmi_io
+            - const: gpmi_bch_apb
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - fsl,imx8qxp-gpmi-nand
+    then:
+      properties:
+        clocks:
+          items:
+            - description: SoC gpmi io clock
+            - description: SoC gpmi apb clock
+            - description: SoC gpmi bch clock
+            - description: SoC gpmi bch apb clock
+        clock-names:
+          items:
+            - const: gpmi_io
+            - const: gpmi_apb
+            - const: gpmi_bch
             - const: gpmi_bch_apb
 
 examples:
+14 -24
Documentation/devicetree/bindings/mtd/qcom,nandc.yaml
···
       - const: core
       - const: aon
 
+  qcom,cmd-crci:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+      Must contain the ADM command type CRCI block instance number specified for
+      the NAND controller on the given platform
+
+  qcom,data-crci:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+      Must contain the ADM data type CRCI block instance number specified for
+      the NAND controller on the given platform
+
 patternProperties:
   "^nand@[a-f0-9]$":
     type: object
···
           items:
             - const: rxtx
 
-      qcom,cmd-crci:
-        $ref: /schemas/types.yaml#/definitions/uint32
-        description:
-          Must contain the ADM command type CRCI block instance number
-          specified for the NAND controller on the given platform
-
-      qcom,data-crci:
-        $ref: /schemas/types.yaml#/definitions/uint32
-        description:
-          Must contain the ADM data type CRCI block instance number
-          specified for the NAND controller on the given platform
-
   - if:
       properties:
         compatible:
···
             - const: rx
             - const: cmd
 
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,ipq806x-nand
+        qcom,cmd-crci: false
+        qcom,data-crci: false
 
-    then:
-      patternProperties:
-        "^nand@[a-f0-9]$":
-          properties:
-            qcom,boot-partitions: true
-    else:
       patternProperties:
         "^nand@[a-f0-9]$":
           properties:
+1
drivers/mtd/chips/cfi_cmdset_0020.c
···
 	kfree(cfi);
 }
 
+MODULE_DESCRIPTION("MTD chip driver for ST Advanced Architecture Command Set (ID 0x0020)");
 MODULE_LICENSE("GPL");
+1
drivers/mtd/chips/cfi_util.c
···
 
 EXPORT_SYMBOL(cfi_varsize_frob);
 
+MODULE_DESCRIPTION("Common Flash Interface Generic utility functions");
 MODULE_LICENSE("GPL");
+5 -6
drivers/mtd/maps/Makefile
···
 obj-$(CONFIG_MTD_CK804XROM)	+= ck804xrom.o
 obj-$(CONFIG_MTD_TSUNAMI)	+= tsunami_flash.o
 obj-$(CONFIG_MTD_PXA2XX)	+= pxa2xx-flash.o
-physmap-objs-y			+= physmap-core.o
-physmap-objs-$(CONFIG_MTD_PHYSMAP_BT1_ROM) += physmap-bt1-rom.o
-physmap-objs-$(CONFIG_MTD_PHYSMAP_VERSATILE) += physmap-versatile.o
-physmap-objs-$(CONFIG_MTD_PHYSMAP_GEMINI) += physmap-gemini.o
-physmap-objs-$(CONFIG_MTD_PHYSMAP_IXP4XX) += physmap-ixp4xx.o
-physmap-objs			:= $(physmap-objs-y)
 obj-$(CONFIG_MTD_PHYSMAP)	+= physmap.o
+physmap-y			:= physmap-core.o
+physmap-$(CONFIG_MTD_PHYSMAP_BT1_ROM)	+= physmap-bt1-rom.o
+physmap-$(CONFIG_MTD_PHYSMAP_VERSATILE)	+= physmap-versatile.o
+physmap-$(CONFIG_MTD_PHYSMAP_GEMINI)	+= physmap-gemini.o
+physmap-$(CONFIG_MTD_PHYSMAP_IXP4XX)	+= physmap-ixp4xx.o
 obj-$(CONFIG_MTD_PISMO)		+= pismo.o
 obj-$(CONFIG_MTD_PCMCIA)	+= pcmciamtd.o
 obj-$(CONFIG_MTD_SA1100)	+= sa1100-flash.o
+1
drivers/mtd/maps/map_funcs.c
···
 }
 
 EXPORT_SYMBOL(simple_map_init);
+MODULE_DESCRIPTION("Out-of-line map I/O");
 MODULE_LICENSE("GPL");
-5
drivers/mtd/nand/raw/cadence-nand-controller.c
···
 	u8 cs[] __counted_by(nsels);
 };
 
-struct ecc_info {
-	int (*calc_ecc_bytes)(int step_size, int strength);
-	int max_step_size;
-};
-
 static inline struct
 cdns_nand_chip *to_cdns_nand_chip(struct nand_chip *chip)
 {
+19 -1
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
···
 		return PTR_ERR(sdr);
 
 	/* Only MX28/MX6 GPMI controller can reach EDO timings */
-	if (sdr->tRC_min <= 25000 && !GPMI_IS_MX28(this) && !GPMI_IS_MX6(this))
+	if (sdr->tRC_min <= 25000 && !this->devdata->support_edo_timing)
 		return -ENOTSUPP;
 
 	/* Stop here if this call was just a check */
···
 	.type = IS_MX28,
 	.bch_max_ecc_strength = 20,
 	.max_chain_delay = 16000,
+	.support_edo_timing = true,
 	.clks = gpmi_clks_for_mx2x,
 	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx2x),
 };
···
 	.type = IS_MX6Q,
 	.bch_max_ecc_strength = 40,
 	.max_chain_delay = 12000,
+	.support_edo_timing = true,
 	.clks = gpmi_clks_for_mx6,
 	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
 };
···
 	.type = IS_MX6SX,
 	.bch_max_ecc_strength = 62,
 	.max_chain_delay = 12000,
+	.support_edo_timing = true,
 	.clks = gpmi_clks_for_mx6,
 	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
 };
···
 	.type = IS_MX7D,
 	.bch_max_ecc_strength = 62,
 	.max_chain_delay = 12000,
+	.support_edo_timing = true,
 	.clks = gpmi_clks_for_mx7d,
 	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx7d),
+};
+
+static const char *gpmi_clks_for_mx8qxp[GPMI_CLK_MAX] = {
+	"gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb",
+};
+
+static const struct gpmi_devdata gpmi_devdata_imx8qxp = {
+	.type = IS_MX8QXP,
+	.bch_max_ecc_strength = 62,
+	.max_chain_delay = 12000,
+	.support_edo_timing = true,
+	.clks = gpmi_clks_for_mx8qxp,
+	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx8qxp),
 };
 
 static int acquire_register_block(struct gpmi_nand_data *this,
···
 	{ .compatible = "fsl,imx6q-gpmi-nand", .data = &gpmi_devdata_imx6q, },
 	{ .compatible = "fsl,imx6sx-gpmi-nand", .data = &gpmi_devdata_imx6sx, },
 	{ .compatible = "fsl,imx7d-gpmi-nand", .data = &gpmi_devdata_imx7d,},
+	{ .compatible = "fsl,imx8qxp-gpmi-nand", .data = &gpmi_devdata_imx8qxp, },
 	{}
 };
 MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
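The EDO gate in this diff replaces a growing chain of `GPMI_IS_MX28()`/`GPMI_IS_MX6()` checks with a per-SoC capability flag looked up from a compatible-keyed table. A minimal userspace sketch of the same pattern (the table contents and every name here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Per-SoC capability table, mirroring the role of struct gpmi_devdata's
 * new support_edo_timing flag (entries chosen for illustration). */
struct soc_caps {
	const char *compatible;
	bool support_edo_timing;
};

static const struct soc_caps caps_table[] = {
	{ "fsl,imx23-gpmi-nand",   false },
	{ "fsl,imx28-gpmi-nand",   true  },
	{ "fsl,imx8qxp-gpmi-nand", true  },
};

static const struct soc_caps *lookup_caps(const char *compatible)
{
	for (size_t i = 0; i < sizeof(caps_table) / sizeof(caps_table[0]); i++)
		if (!strcmp(caps_table[i].compatible, compatible))
			return &caps_table[i];
	return NULL;
}

/* EDO timings need tRC_min <= 25000 ps *and* controller support;
 * -1 stands in for the kernel's -ENOTSUPP. */
static int check_edo(const struct soc_caps *caps, long trc_min_ps)
{
	if (trc_min_ps <= 25000 && !caps->support_edo_timing)
		return -1;
	return 0;
}
```

Adding a new SoC then only touches the table, not the timing check, which is why the i.MX8QXP entry above is a one-struct addition.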
+5 -1
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.h
···
 	IS_MX6Q,
 	IS_MX6SX,
 	IS_MX7D,
+	IS_MX8QXP,
 };
 
 struct gpmi_devdata {
···
 	int max_chain_delay; /* See the SDR EDO mode */
 	const char * const *clks;
 	const int clks_count;
+	bool support_edo_timing;
 };
 
 /**
···
 #define GPMI_IS_MX6Q(x)		((x)->devdata->type == IS_MX6Q)
 #define GPMI_IS_MX6SX(x)	((x)->devdata->type == IS_MX6SX)
 #define GPMI_IS_MX7D(x)		((x)->devdata->type == IS_MX7D)
+#define GPMI_IS_MX8QXP(x)	((x)->devdata->type == IS_MX8QXP)
 
 #define GPMI_IS_MX6(x)		(GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x) || \
-				 GPMI_IS_MX7D(x))
+				 GPMI_IS_MX7D(x) || GPMI_IS_MX8QXP(x))
+
 #define GPMI_IS_MXS(x)		(GPMI_IS_MX23(x) || GPMI_IS_MX28(x))
 #endif
+3 -3
drivers/mtd/nand/raw/intel-nand-controller.c
···
 	unsigned long flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
 	dma_addr_t buf_dma;
 	int ret;
-	u32 timeout;
+	unsigned long time_left;
 
 	if (dir == DMA_DEV_TO_MEM) {
 		chan = ebu_host->dma_rx;
···
 	dma_async_issue_pending(chan);
 
 	/* Wait DMA to finish the data transfer.*/
-	timeout = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
-	if (!timeout) {
+	time_left = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
+	if (!time_left) {
 		dev_err(ebu_host->dev, "I/O Error in DMA RX (status %d)\n",
 			dmaengine_tx_status(chan, cookie, NULL));
 		dmaengine_terminate_sync(chan);
+15 -11
drivers/mtd/nand/raw/lpc32xx_mlc.c
···
 	struct mtd_info *mtd = nand_to_mtd(&host->nand_chip);
 	dma_cap_mask_t mask;
 
-	if (!host->pdata || !host->pdata->dma_filter) {
-		dev_err(mtd->dev.parent, "no DMA platform data\n");
-		return -ENOENT;
-	}
+	host->dma_chan = dma_request_chan(mtd->dev.parent, "rx-tx");
+	if (IS_ERR(host->dma_chan)) {
+		/* fallback to request using platform data */
+		if (!host->pdata || !host->pdata->dma_filter) {
+			dev_err(mtd->dev.parent, "no DMA platform data\n");
+			return -ENOENT;
+		}
 
-	dma_cap_zero(mask);
-	dma_cap_set(DMA_SLAVE, mask);
-	host->dma_chan = dma_request_channel(mask, host->pdata->dma_filter,
-					     "nand-mlc");
-	if (!host->dma_chan) {
-		dev_err(mtd->dev.parent, "Failed to request DMA channel\n");
-		return -EBUSY;
+		dma_cap_zero(mask);
+		dma_cap_set(DMA_SLAVE, mask);
+		host->dma_chan = dma_request_channel(mask, host->pdata->dma_filter, "nand-mlc");
+
+		if (!host->dma_chan) {
+			dev_err(mtd->dev.parent, "Failed to request DMA channel\n");
+			return -EBUSY;
+		}
 	}
 
 	/*
+15 -11
drivers/mtd/nand/raw/lpc32xx_slc.c
···
 	struct mtd_info *mtd = nand_to_mtd(&host->nand_chip);
 	dma_cap_mask_t mask;
 
-	if (!host->pdata || !host->pdata->dma_filter) {
-		dev_err(mtd->dev.parent, "no DMA platform data\n");
-		return -ENOENT;
-	}
+	host->dma_chan = dma_request_chan(mtd->dev.parent, "rx-tx");
+	if (IS_ERR(host->dma_chan)) {
+		/* fallback to request using platform data */
+		if (!host->pdata || !host->pdata->dma_filter) {
+			dev_err(mtd->dev.parent, "no DMA platform data\n");
+			return -ENOENT;
+		}
 
-	dma_cap_zero(mask);
-	dma_cap_set(DMA_SLAVE, mask);
-	host->dma_chan = dma_request_channel(mask, host->pdata->dma_filter,
-					     "nand-slc");
-	if (!host->dma_chan) {
-		dev_err(mtd->dev.parent, "Failed to request DMA channel\n");
-		return -EBUSY;
+		dma_cap_zero(mask);
+		dma_cap_set(DMA_SLAVE, mask);
+		host->dma_chan = dma_request_channel(mask, host->pdata->dma_filter, "nand-slc");
+
+		if (!host->dma_chan) {
+			dev_err(mtd->dev.parent, "Failed to request DMA channel\n");
+			return -EBUSY;
+		}
 	}
 
 	return 0;
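Both lpc32xx changes share one shape: try the DT-described channel first (`dma_request_chan()`), and only fall back to the legacy board-file filter when that fails. A userspace miniature of that control flow, with hypothetical stub functions standing in for the two request paths:

```c
#include <assert.h>
#include <stddef.h>

/* Stub for DT-based dma_request_chan(); returns a channel name or NULL.
 * The names and availability flags are illustrative only. */
static const char *request_chan_dt(int dt_has_channel)
{
	return dt_has_channel ? "dma0chan3" : NULL;
}

/* Stub for the legacy filter-based dma_request_channel() path. */
static const char *request_chan_pdata(int pdata_has_filter)
{
	return pdata_has_filter ? "nand-mlc" : NULL;
}

/* DT first, platform-data fallback: the shape of the reworked probe. */
static const char *acquire_dma_channel(int dt_has_channel, int pdata_has_filter)
{
	const char *chan = request_chan_dt(dt_has_channel);

	if (!chan) /* fallback to request using platform data */
		chan = request_chan_pdata(pdata_has_filter);
	return chan;
}
```

Keeping the old path as a fallback rather than deleting it is what lets boards without the new DT entries keep working.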
+59 -27
drivers/mtd/nand/raw/meson_nand.c
··· 35 35 #define NFC_CMD_RB BIT(20) 36 36 #define NFC_CMD_SCRAMBLER_ENABLE BIT(19) 37 37 #define NFC_CMD_SCRAMBLER_DISABLE 0 38 + #define NFC_CMD_SHORTMODE_ENABLE 1 38 39 #define NFC_CMD_SHORTMODE_DISABLE 0 39 40 #define NFC_CMD_RB_INT BIT(14) 40 41 #define NFC_CMD_RB_INT_NO_PIN ((0xb << 10) | BIT(18) | BIT(16)) ··· 78 77 79 78 #define DMA_DIR(dir) ((dir) ? NFC_CMD_N2M : NFC_CMD_M2N) 80 79 #define DMA_ADDR_ALIGN 8 80 + 81 + #define NFC_SHORT_MODE_ECC_SZ 384 81 82 82 83 #define ECC_CHECK_RETURN_FF (-1) 83 84 ··· 128 125 u32 twb; 129 126 u32 tadl; 130 127 u32 tbers_max; 128 + u32 boot_pages; 129 + u32 boot_page_step; 131 130 132 131 u32 bch_mode; 133 132 u8 *data_buf; ··· 303 298 nfc->reg_base + NFC_REG_CMD); 304 299 } 305 300 306 - static void meson_nfc_cmd_access(struct nand_chip *nand, int raw, bool dir, 307 - int scrambler) 301 + static int meson_nfc_is_boot_page(struct nand_chip *nand, int page) 308 302 { 303 + const struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand); 304 + 305 + return (nand->options & NAND_IS_BOOT_MEDIUM) && 306 + !(page % meson_chip->boot_page_step) && 307 + (page < meson_chip->boot_pages); 308 + } 309 + 310 + static void meson_nfc_cmd_access(struct nand_chip *nand, int raw, bool dir, int page) 311 + { 312 + const struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand); 309 313 struct mtd_info *mtd = nand_to_mtd(nand); 310 314 struct meson_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd)); 311 - struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand); 312 - u32 bch = meson_chip->bch_mode, cmd; 313 315 int len = mtd->writesize, pagesize, pages; 316 + int scrambler; 317 + u32 cmd; 314 318 315 - pagesize = nand->ecc.size; 319 + if (nand->options & NAND_NEED_SCRAMBLING) 320 + scrambler = NFC_CMD_SCRAMBLER_ENABLE; 321 + else 322 + scrambler = NFC_CMD_SCRAMBLER_DISABLE; 316 323 317 324 if (raw) { 318 325 len = mtd->writesize + mtd->oobsize; 319 326 cmd = len | scrambler | DMA_DIR(dir); 320 - writel(cmd, nfc->reg_base + 
NFC_REG_CMD); 321 - return; 327 + } else if (meson_nfc_is_boot_page(nand, page)) { 328 + pagesize = NFC_SHORT_MODE_ECC_SZ >> 3; 329 + pages = mtd->writesize / 512; 330 + 331 + scrambler = NFC_CMD_SCRAMBLER_ENABLE; 332 + cmd = CMDRWGEN(DMA_DIR(dir), scrambler, NFC_ECC_BCH8_1K, 333 + NFC_CMD_SHORTMODE_ENABLE, pagesize, pages); 334 + } else { 335 + pagesize = nand->ecc.size >> 3; 336 + pages = len / nand->ecc.size; 337 + 338 + cmd = CMDRWGEN(DMA_DIR(dir), scrambler, meson_chip->bch_mode, 339 + NFC_CMD_SHORTMODE_DISABLE, pagesize, pages); 322 340 } 323 341 324 - pages = len / nand->ecc.size; 325 - 326 - cmd = CMDRWGEN(DMA_DIR(dir), scrambler, bch, 327 - NFC_CMD_SHORTMODE_DISABLE, pagesize, pages); 342 + if (scrambler == NFC_CMD_SCRAMBLER_ENABLE) 343 + meson_nfc_cmd_seed(nfc, page); 328 344 329 345 writel(cmd, nfc->reg_base + NFC_REG_CMD); 330 346 } ··· 769 743 if (ret) 770 744 return ret; 771 745 772 - if (nand->options & NAND_NEED_SCRAMBLING) { 773 - meson_nfc_cmd_seed(nfc, page); 774 - meson_nfc_cmd_access(nand, raw, DIRWRITE, 775 - NFC_CMD_SCRAMBLER_ENABLE); 776 - } else { 777 - meson_nfc_cmd_access(nand, raw, DIRWRITE, 778 - NFC_CMD_SCRAMBLER_DISABLE); 779 - } 746 + meson_nfc_cmd_access(nand, raw, DIRWRITE, page); 780 747 781 748 cmd = nfc->param.chip_select | NFC_CMD_CLE | NAND_CMD_PAGEPROG; 782 749 writel(cmd, nfc->reg_base + NFC_REG_CMD); ··· 848 829 if (ret) 849 830 return ret; 850 831 851 - if (nand->options & NAND_NEED_SCRAMBLING) { 852 - meson_nfc_cmd_seed(nfc, page); 853 - meson_nfc_cmd_access(nand, raw, DIRREAD, 854 - NFC_CMD_SCRAMBLER_ENABLE); 855 - } else { 856 - meson_nfc_cmd_access(nand, raw, DIRREAD, 857 - NFC_CMD_SCRAMBLER_DISABLE); 858 - } 832 + meson_nfc_cmd_access(nand, raw, DIRREAD, page); 859 833 860 834 ret = meson_nfc_wait_dma_finish(nfc); 861 835 meson_nfc_check_ecc_pages_valid(nfc, nand, raw); ··· 1442 1430 ret = nand_scan(nand, nsels); 1443 1431 if (ret) 1444 1432 return ret; 1433 + 1434 + if (nand->options & NAND_IS_BOOT_MEDIUM) { 1435 + 
ret = of_property_read_u32(np, "amlogic,boot-pages", 1436 + &meson_chip->boot_pages); 1437 + if (ret) { 1438 + dev_err(dev, "could not retrieve 'amlogic,boot-pages' property: %d", 1439 + ret); 1440 + nand_cleanup(nand); 1441 + return ret; 1442 + } 1443 + 1444 + ret = of_property_read_u32(np, "amlogic,boot-page-step", 1445 + &meson_chip->boot_page_step); 1446 + if (ret) { 1447 + dev_err(dev, "could not retrieve 'amlogic,boot-page-step' property: %d", 1448 + ret); 1449 + nand_cleanup(nand); 1450 + return ret; 1451 + } 1452 + } 1445 1453 1446 1454 ret = mtd_device_register(mtd, NULL, 0); 1447 1455 if (ret) {
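The boot-area handling above hinges on `meson_nfc_is_boot_page()`: a page gets the ROM's 384-byte/scrambled short-mode ECC layout only if the chip is a boot medium, the page index is below `amlogic,boot-pages`, and it falls on an `amlogic,boot-page-step` boundary. The predicate in isolation (plain C, kernel types and option flags replaced by plain parameters):

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the logic of meson_nfc_is_boot_page(): boot pages are the
 * pages the ROM code reads, i.e. every boot_page_step-th page with an
 * index below boot_pages. Assumes boot_page_step is nonzero, as the
 * DT binding requires both properties together. */
static bool is_boot_page(bool is_boot_medium, unsigned int page,
			 unsigned int boot_pages, unsigned int boot_page_step)
{
	return is_boot_medium &&
	       !(page % boot_page_step) &&
	       (page < boot_pages);
}
```

With the binding's own example (8 pages, step 2), pages 0, 2, 4 and 6 match and pages 1, 3, 5, 7 use the normal ECC layout.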
+348 -360
drivers/mtd/nand/raw/mxc_nand.c
···
 #include <linux/irq.h>
 #include <linux/completion.h>
 #include <linux/of.h>
+#include <linux/bitfield.h>
 
 #define DRIVER_NAME "mxc_nand"
···
 #define NFC_V1_V2_NF_WRPRST	(host->regs + 0x18)
 #define NFC_V1_V2_CONFIG1	(host->regs + 0x1a)
 #define NFC_V1_V2_CONFIG2	(host->regs + 0x1c)
+
+#define NFC_V1_V2_ECC_STATUS_RESULT_ERM	GENMASK(3, 2)
 
 #define NFC_V2_CONFIG1_ECC_MODE_4	(1 << 0)
 #define NFC_V1_V2_CONFIG1_SP_EN		(1 << 2)
···
 
 struct mxc_nand_devtype_data {
 	void (*preset)(struct mtd_info *);
-	int (*read_page)(struct nand_chip *chip, void *buf, void *oob, bool ecc,
-			 int page);
+	int (*read_page)(struct nand_chip *chip);
 	void (*send_cmd)(struct mxc_nand_host *, uint16_t, int);
 	void (*send_addr)(struct mxc_nand_host *, uint16_t, int);
 	void (*send_page)(struct mtd_info *, unsigned int);
···
 	uint16_t (*get_dev_status)(struct mxc_nand_host *);
 	int (*check_int)(struct mxc_nand_host *);
 	void (*irq_control)(struct mxc_nand_host *, int);
-	u32 (*get_ecc_status)(struct mxc_nand_host *);
+	u32 (*get_ecc_status)(struct nand_chip *);
 	const struct mtd_ooblayout_ops *ooblayout;
 	void (*select_chip)(struct nand_chip *chip, int cs);
 	int (*setup_interface)(struct nand_chip *chip, int csline,
···
 	int eccsize;
 	int used_oobsize;
 	int active_cs;
+	unsigned int ecc_stats_v1;
 
 	struct completion op_completion;
 
-	uint8_t *data_buf;
-	unsigned int buf_start;
+	void *data_buf;
 
 	const struct mxc_nand_devtype_data *devtype_data;
 };
···
 	}
 }
 
-/*
- * MXC NANDFC can only perform full page+spare or spare-only read/write. When
- * the upper layers perform a read/write buf operation, the saved column address
- * is used to index into the full page. So usually this function is called with
- * column == 0 (unless no column cycle is needed indicated by column == -1)
- */
-static void mxc_do_addr_cycle(struct mtd_info *mtd, int column, int page_addr)
-{
-	struct nand_chip *nand_chip = mtd_to_nand(mtd);
-	struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
-
-	/* Write out column address, if necessary */
-	if (column != -1) {
-		host->devtype_data->send_addr(host, column & 0xff,
-					      page_addr == -1);
-		if (mtd->writesize > 512)
-			/* another col addr cycle for 2k page */
-			host->devtype_data->send_addr(host,
-						      (column >> 8) & 0xff,
-						      false);
-	}
-
-	/* Write out page address, if necessary */
-	if (page_addr != -1) {
-		/* paddr_0 - p_addr_7 */
-		host->devtype_data->send_addr(host, (page_addr & 0xff), false);
-
-		if (mtd->writesize > 512) {
-			if (mtd->size >= 0x10000000) {
-				/* paddr_8 - paddr_15 */
-				host->devtype_data->send_addr(host,
-						(page_addr >> 8) & 0xff,
-						false);
-				host->devtype_data->send_addr(host,
-						(page_addr >> 16) & 0xff,
-						true);
-			} else
-				/* paddr_8 - paddr_15 */
-				host->devtype_data->send_addr(host,
-						(page_addr >> 8) & 0xff, true);
-		} else {
-			if (nand_chip->options & NAND_ROW_ADDR_3) {
-				/* paddr_8 - paddr_15 */
-				host->devtype_data->send_addr(host,
-						(page_addr >> 8) & 0xff,
-						false);
-				host->devtype_data->send_addr(host,
-						(page_addr >> 16) & 0xff,
-						true);
-			} else
-				/* paddr_8 - paddr_15 */
-				host->devtype_data->send_addr(host,
-						(page_addr >> 8) & 0xff, true);
-		}
-	}
-}
-
 static int check_int_v3(struct mxc_nand_host *host)
 {
 	uint32_t tmp;
···
 	}
 }
 
-static u32 get_ecc_status_v1(struct mxc_nand_host *host)
+static u32 get_ecc_status_v1(struct nand_chip *chip)
 {
-	return readw(NFC_V1_V2_ECC_STATUS_RESULT);
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	struct mxc_nand_host *host = nand_get_controller_data(chip);
+	unsigned int ecc_stats, max_bitflips = 0;
+	int no_subpages, i;
+
+	no_subpages = mtd->writesize >> 9;
+
+	ecc_stats = host->ecc_stats_v1;
+
+	for (i = 0; i < no_subpages; i++) {
+		switch (ecc_stats & 0x3) {
+		case 0:
+		default:
+			break;
+		case 1:
+			mtd->ecc_stats.corrected++;
+			max_bitflips = 1;
+			break;
+		case 2:
+			mtd->ecc_stats.failed++;
+			break;
+		}
+
+		ecc_stats >>= 2;
+	}
+
+	return max_bitflips;
 }
 
-static u32 get_ecc_status_v2(struct mxc_nand_host *host)
+static u32 get_ecc_status_v2_v3(struct nand_chip *chip, unsigned int ecc_stat)
 {
-	return readl(NFC_V1_V2_ECC_STATUS_RESULT);
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	struct mxc_nand_host *host = nand_get_controller_data(chip);
+	u8 ecc_bit_mask, err_limit;
+	unsigned int max_bitflips = 0;
+	int no_subpages, err;
+
+	ecc_bit_mask = (host->eccsize == 4) ? 0x7 : 0xf;
+	err_limit = (host->eccsize == 4) ? 0x4 : 0x8;
+
+	no_subpages = mtd->writesize >> 9;
+
+	do {
+		err = ecc_stat & ecc_bit_mask;
+		if (err > err_limit) {
+			mtd->ecc_stats.failed++;
+		} else {
+			mtd->ecc_stats.corrected += err;
+			max_bitflips = max_t(unsigned int, max_bitflips, err);
+		}
+
+		ecc_stat >>= 4;
+	} while (--no_subpages);
+
+	return max_bitflips;
 }
 
-static u32 get_ecc_status_v3(struct mxc_nand_host *host)
+static u32 get_ecc_status_v2(struct nand_chip *chip)
 {
-	return readl(NFC_V3_ECC_STATUS_RESULT);
+	struct mxc_nand_host *host = nand_get_controller_data(chip);
+
+	u32 ecc_stat = readl(NFC_V1_V2_ECC_STATUS_RESULT);
+
+	return get_ecc_status_v2_v3(chip, ecc_stat);
+}
+
+static u32 get_ecc_status_v3(struct nand_chip *chip)
+{
+	struct mxc_nand_host *host = nand_get_controller_data(chip);
+
+	u32 ecc_stat = readl(NFC_V3_ECC_STATUS_RESULT);
+
+	return get_ecc_status_v2_v3(chip, ecc_stat);
 }
 
 static irqreturn_t mxc_nfc_irq(int irq, void *dev_id)
···
 		return 0;
 
 	if (useirq) {
-		unsigned long timeout;
+		unsigned long time_left;
 
 		reinit_completion(&host->op_completion);
 
 		irq_control(host, 1);
 
-		timeout = wait_for_completion_timeout(&host->op_completion, HZ);
-		if (!timeout && !host->devtype_data->check_int(host)) {
+		time_left = wait_for_completion_timeout(&host->op_completion, HZ);
+		if (!time_left && !host->devtype_data->check_int(host)) {
 			dev_dbg(host->dev, "timeout waiting for irq\n");
 			ret = -ETIMEDOUT;
 		}
···
 	writel(config2, NFC_V3_CONFIG2);
 }
 
-/* This functions is used by upper layer to checks if device is ready */
-static int mxc_nand_dev_ready(struct nand_chip *chip)
-{
-	/*
-	 * NFC handles R/B internally. Therefore, this function
-	 * always returns status as ready.
-	 */
-	return 1;
-}
-
-static int mxc_nand_read_page_v1(struct nand_chip *chip, void *buf, void *oob,
-				 bool ecc, int page)
+static int mxc_nand_read_page_v1(struct nand_chip *chip)
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mxc_nand_host *host = nand_get_controller_data(chip);
-	unsigned int bitflips_corrected = 0;
 	int no_subpages;
 	int i;
+	unsigned int ecc_stats = 0;
 
-	host->devtype_data->enable_hwecc(chip, ecc);
-
-	host->devtype_data->send_cmd(host, NAND_CMD_READ0, false);
-	mxc_do_addr_cycle(mtd, 0, page);
-
-	if (mtd->writesize > 512)
-		host->devtype_data->send_cmd(host, NAND_CMD_READSTART, true);
-
-	no_subpages = mtd->writesize >> 9;
+	if (mtd->writesize)
+		no_subpages = mtd->writesize >> 9;
+	else
+		/* READ PARAMETER PAGE is called when mtd->writesize is not yet set */
+		no_subpages = 1;
 
 	for (i = 0; i < no_subpages; i++) {
-		uint16_t ecc_stats;
-
 		/* NANDFC buffer 0 is used for page read/write */
 		writew((host->active_cs << 4) | i, NFC_V1_V2_BUF_ADDR);
 
···
 		/* Wait for operation to complete */
 		wait_op_done(host, true);
 
-		ecc_stats = get_ecc_status_v1(host);
-
-		ecc_stats >>= 2;
-
-		if (buf && ecc) {
-			switch (ecc_stats & 0x3) {
-			case 0:
-			default:
-				break;
-			case 1:
-				mtd->ecc_stats.corrected++;
-				bitflips_corrected = 1;
-				break;
-			case 2:
-				mtd->ecc_stats.failed++;
-				break;
-			}
-		}
+		ecc_stats |= FIELD_GET(NFC_V1_V2_ECC_STATUS_RESULT_ERM,
+				       readw(NFC_V1_V2_ECC_STATUS_RESULT)) << i * 2;
 	}
 
-	if (buf)
-		memcpy32_fromio(buf, host->main_area0, mtd->writesize);
-	if (oob)
-		copy_spare(mtd, true, oob);
+	host->ecc_stats_v1 = ecc_stats;
 
-	return bitflips_corrected;
+	return 0;
 }
 
-static int mxc_nand_read_page_v2_v3(struct nand_chip *chip, void *buf,
-				    void *oob, bool ecc, int page)
+static int mxc_nand_read_page_v2_v3(struct nand_chip *chip)
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mxc_nand_host *host = nand_get_controller_data(chip);
-	unsigned int max_bitflips = 0;
-	u32 ecc_stat, err;
-	int no_subpages;
-	u8 ecc_bit_mask, err_limit;
-
-	host->devtype_data->enable_hwecc(chip, ecc);
-
-	host->devtype_data->send_cmd(host, NAND_CMD_READ0, false);
-	mxc_do_addr_cycle(mtd, 0, page);
-
-	if (mtd->writesize > 512)
-		host->devtype_data->send_cmd(host,
-					     NAND_CMD_READSTART, true);
 
 	host->devtype_data->send_page(mtd, NFC_OUTPUT);
 
-	if (buf)
-		memcpy32_fromio(buf, host->main_area0, mtd->writesize);
-	if (oob)
-		copy_spare(mtd, true, oob);
-
-	ecc_bit_mask = (host->eccsize == 4) ? 0x7 : 0xf;
-	err_limit = (host->eccsize == 4) ? 0x4 : 0x8;
-
-	no_subpages = mtd->writesize >> 9;
-
-	ecc_stat = host->devtype_data->get_ecc_status(host);
-
-	do {
-		err = ecc_stat & ecc_bit_mask;
-		if (err > err_limit) {
-			mtd->ecc_stats.failed++;
-		} else {
-			mtd->ecc_stats.corrected += err;
-			max_bitflips = max_t(unsigned int, max_bitflips, err);
-		}
-
-		ecc_stat >>= 4;
-	} while (--no_subpages);
-
-	return max_bitflips;
+	return 0;
 }
 
 static int mxc_nand_read_page(struct nand_chip *chip, uint8_t *buf,
 			      int oob_required, int page)
 {
+	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mxc_nand_host *host = nand_get_controller_data(chip);
-	void *oob_buf;
+	int ret;
+
+	host->devtype_data->enable_hwecc(chip, true);
+
+	ret = nand_read_page_op(chip, page, 0, buf, mtd->writesize);
+
+	host->devtype_data->enable_hwecc(chip, false);
+
+	if (ret)
+		return ret;
 
 	if (oob_required)
-		oob_buf = chip->oob_poi;
-	else
-		oob_buf = NULL;
+		copy_spare(mtd, true, chip->oob_poi);
 
-	return host->devtype_data->read_page(chip, buf, oob_buf, 1, page);
+	return host->devtype_data->get_ecc_status(chip);
 }
 
 static int mxc_nand_read_page_raw(struct nand_chip *chip, uint8_t *buf,
 				  int oob_required, int page)
 {
-	struct mxc_nand_host *host = nand_get_controller_data(chip);
-	void *oob_buf;
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	int ret;
+
+	ret = nand_read_page_op(chip, page, 0, buf, mtd->writesize);
+	if (ret)
+		return ret;
 
 	if (oob_required)
-		oob_buf = chip->oob_poi;
-	else
-		oob_buf = NULL;
+		copy_spare(mtd, true, chip->oob_poi);
 
-	return host->devtype_data->read_page(chip, buf, oob_buf, 0, page);
+	return 0;
 }
 
 static int mxc_nand_read_oob(struct nand_chip *chip, int page)
 {
-	struct mxc_nand_host *host = nand_get_controller_data(chip);
-
-	return host->devtype_data->read_page(chip, NULL, chip->oob_poi, 0,
-					     page);
-}
-
-static int mxc_nand_write_page(struct nand_chip *chip, const uint8_t *buf,
-			       bool ecc, int page)
-{
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mxc_nand_host *host = nand_get_controller_data(chip);
+	int ret;
 
-	host->devtype_data->enable_hwecc(chip, ecc);
+	ret = nand_read_page_op(chip, page, 0, host->data_buf, mtd->writesize);
+	if (ret)
+		return ret;
 
-	host->devtype_data->send_cmd(host, NAND_CMD_SEQIN, false);
-	mxc_do_addr_cycle(mtd, 0, page);
-
-	memcpy32_toio(host->main_area0, buf, mtd->writesize);
-	copy_spare(mtd, false, chip->oob_poi);
-
-	host->devtype_data->send_page(mtd, NFC_INPUT);
-	host->devtype_data->send_cmd(host, NAND_CMD_PAGEPROG, true);
-	mxc_do_addr_cycle(mtd, 0, page);
+	copy_spare(mtd, true, chip->oob_poi);
 
 	return 0;
 }
···
 static int mxc_nand_write_page_ecc(struct nand_chip *chip, const uint8_t *buf,
 				   int oob_required, int page)
 {
-	return mxc_nand_write_page(chip, buf, true, page);
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	struct mxc_nand_host *host = nand_get_controller_data(chip);
+	int ret;
+
+	copy_spare(mtd, false, chip->oob_poi);
+
+	host->devtype_data->enable_hwecc(chip, true);
+
+	ret = nand_prog_page_op(chip, page, 0, buf, mtd->writesize);
+
+	host->devtype_data->enable_hwecc(chip, false);
+
+	return ret;
 }
 
 static int mxc_nand_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
 				   int oob_required, int page)
 {
-	return mxc_nand_write_page(chip, buf, false, page);
+	struct mtd_info *mtd = nand_to_mtd(chip);
+
+	copy_spare(mtd, false, chip->oob_poi);
+
+	return nand_prog_page_op(chip, page, 0, buf, mtd->writesize);
 }
 
 static int mxc_nand_write_oob(struct nand_chip *chip, int page)
···
 	struct mxc_nand_host *host = nand_get_controller_data(chip);
 
 	memset(host->data_buf, 0xff, mtd->writesize);
+	copy_spare(mtd, false, chip->oob_poi);
 
-	return mxc_nand_write_page(chip, host->data_buf, false, page);
-}
-
-static u_char mxc_nand_read_byte(struct nand_chip *nand_chip)
-{
-	struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
-	uint8_t ret;
-
-	/* Check for status request */
-	if (host->status_request)
-		return host->devtype_data->get_dev_status(host) & 0xFF;
-
-	if (nand_chip->options & NAND_BUSWIDTH_16) {
-		/* only take the lower byte of each word */
-		ret = *(uint16_t *)(host->data_buf + host->buf_start);
-
-		host->buf_start += 2;
-	} else {
-		ret = *(uint8_t *)(host->data_buf + host->buf_start);
-		host->buf_start++;
-	}
-
-	dev_dbg(host->dev, "%s: ret=0x%hhx (start=%u)\n", __func__, ret, host->buf_start);
-	return ret;
-}
-
-/* Write data of length len to buffer buf. The data to be
- * written on NAND Flash is first copied to RAMbuffer. After the Data Input
- * Operation by the NFC, the data is written to NAND Flash */
-static void mxc_nand_write_buf(struct nand_chip *nand_chip, const u_char *buf,
-			       int len)
-{
-	struct mtd_info *mtd = nand_to_mtd(nand_chip);
-	struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
-	u16 col = host->buf_start;
-	int n = mtd->oobsize + mtd->writesize - col;
-
-	n = min(n, len);
-
-	memcpy(host->data_buf + col, buf, n);
-
-	host->buf_start += n;
-}
-
-/* Read the data buffer from the NAND Flash. To read the data from NAND
- * Flash first the data output cycle is initiated by the NFC, which copies
- * the data to RAMbuffer. This data of length len is then copied to buffer buf.
- */
-static void mxc_nand_read_buf(struct nand_chip *nand_chip, u_char *buf,
-			      int len)
-{
-	struct mtd_info *mtd = nand_to_mtd(nand_chip);
-	struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
-	u16 col = host->buf_start;
-	int n = mtd->oobsize + mtd->writesize - col;
-
-	n = min(n, len);
-
-	memcpy(buf, host->data_buf + col, n);
-
-	host->buf_start += n;
+	return nand_prog_page_op(chip, page, 0, host->data_buf, mtd->writesize);
 }
 
 /* This function is used by upper layer for select and
···
 	writel(0, NFC_V3_DELAY_LINE);
 }
 
-/* Used by the upper layer to write command to NAND Flash for
- * different operations to be carried out on NAND Flash */
-static void mxc_nand_command(struct nand_chip *nand_chip, unsigned command,
-			     int column, int page_addr)
-{
-	struct mtd_info *mtd = nand_to_mtd(nand_chip);
-	struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
-
-	dev_dbg(host->dev, "mxc_nand_command (cmd = 0x%x, col = 0x%x, page = 0x%x)\n",
-		command, column, page_addr);
-
-	/* Reset command state information */
-	host->status_request = false;
-
-	/* Command pre-processing step */
-	switch (command) {
-	case NAND_CMD_RESET:
-		host->devtype_data->preset(mtd);
-		host->devtype_data->send_cmd(host, command, false);
-		break;
-
-	case NAND_CMD_STATUS:
-		host->buf_start = 0;
-		host->status_request = true;
-
-		host->devtype_data->send_cmd(host, command, true);
-		WARN_ONCE(column != -1 || page_addr != -1,
-			  "Unexpected column/row value (cmd=%u, col=%d, row=%d)\n",
mxc_do_addr_cycle(mtd, column, page_addr); 1247 - break; 1248 - 1249 - case NAND_CMD_READID: 1250 - host->devtype_data->send_cmd(host, command, true); 1251 - mxc_do_addr_cycle(mtd, column, page_addr); 1252 - host->devtype_data->send_read_id(host); 1253 - host->buf_start = 0; 1254 - break; 1255 - 1256 - case NAND_CMD_ERASE1: 1257 - case NAND_CMD_ERASE2: 1258 - host->devtype_data->send_cmd(host, command, false); 1259 - WARN_ONCE(column != -1, 1260 - "Unexpected column value (cmd=%u, col=%d)\n", 1261 - command, column); 1262 - mxc_do_addr_cycle(mtd, column, page_addr); 1263 - 1264 - break; 1265 - case NAND_CMD_PARAM: 1266 - host->devtype_data->send_cmd(host, command, false); 1267 - mxc_do_addr_cycle(mtd, column, page_addr); 1268 - host->devtype_data->send_page(mtd, NFC_OUTPUT); 1269 - memcpy32_fromio(host->data_buf, host->main_area0, 512); 1270 - host->buf_start = 0; 1271 - break; 1272 - default: 1273 - WARN_ONCE(1, "Unimplemented command (cmd=%u)\n", 1274 - command); 1275 - break; 1276 - } 1277 - } 1278 - 1279 - static int mxc_nand_set_features(struct nand_chip *chip, int addr, 1280 - u8 *subfeature_param) 1281 - { 1282 - struct mtd_info *mtd = nand_to_mtd(chip); 1283 - struct mxc_nand_host *host = nand_get_controller_data(chip); 1284 - int i; 1285 - 1286 - host->buf_start = 0; 1287 - 1288 - for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i) 1289 - chip->legacy.write_byte(chip, subfeature_param[i]); 1290 - 1291 - memcpy32_toio(host->main_area0, host->data_buf, mtd->writesize); 1292 - host->devtype_data->send_cmd(host, NAND_CMD_SET_FEATURES, false); 1293 - mxc_do_addr_cycle(mtd, addr, -1); 1294 - host->devtype_data->send_page(mtd, NFC_INPUT); 1295 - 1296 - return 0; 1297 - } 1298 - 1299 - static int mxc_nand_get_features(struct nand_chip *chip, int addr, 1300 - u8 *subfeature_param) 1301 - { 1302 - struct mtd_info *mtd = nand_to_mtd(chip); 1303 - struct mxc_nand_host *host = nand_get_controller_data(chip); 1304 - int i; 1305 - 1306 - host->devtype_data->send_cmd(host, 
NAND_CMD_GET_FEATURES, false); 1307 - mxc_do_addr_cycle(mtd, addr, -1); 1308 - host->devtype_data->send_page(mtd, NFC_OUTPUT); 1309 - memcpy32_fromio(host->data_buf, host->main_area0, 512); 1310 - host->buf_start = 0; 1311 - 1312 - for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i) 1313 - *subfeature_param++ = chip->legacy.read_byte(chip); 1314 - 1315 - return 0; 1316 - } 1317 - 1318 1331 /* 1319 1332 * The generic flash bbt descriptors overlap with our ecc 1320 1333 * hardware, so define some i.MX specific ones. ··· 1402 1617 chip->ecc.bytes = host->devtype_data->eccbytes; 1403 1618 host->eccsize = host->devtype_data->eccsize; 1404 1619 chip->ecc.size = 512; 1405 - mtd_set_ooblayout(mtd, host->devtype_data->ooblayout); 1406 1620 1407 1621 switch (chip->ecc.engine_type) { 1408 1622 case NAND_ECC_ENGINE_TYPE_ON_HOST: 1623 + mtd_set_ooblayout(mtd, host->devtype_data->ooblayout); 1409 1624 chip->ecc.read_page = mxc_nand_read_page; 1410 1625 chip->ecc.read_page_raw = mxc_nand_read_page_raw; 1411 1626 chip->ecc.read_oob = mxc_nand_read_oob; ··· 1415 1630 break; 1416 1631 1417 1632 case NAND_ECC_ENGINE_TYPE_SOFT: 1633 + chip->ecc.write_page_raw = nand_monolithic_write_page_raw; 1634 + chip->ecc.read_page_raw = nand_monolithic_read_page_raw; 1418 1635 break; 1419 1636 1420 1637 default: ··· 1472 1685 return host->devtype_data->setup_interface(chip, chipnr, conf); 1473 1686 } 1474 1687 1688 + static void memff16_toio(void *buf, int n) 1689 + { 1690 + __iomem u16 *t = buf; 1691 + int i; 1692 + 1693 + for (i = 0; i < (n >> 1); i++) 1694 + __raw_writew(0xffff, t++); 1695 + } 1696 + 1697 + static void copy_page_to_sram(struct mtd_info *mtd, const void *buf, int buf_len) 1698 + { 1699 + struct nand_chip *this = mtd_to_nand(mtd); 1700 + struct mxc_nand_host *host = nand_get_controller_data(this); 1701 + unsigned int no_subpages = mtd->writesize / 512; 1702 + int oob_per_subpage, i; 1703 + 1704 + oob_per_subpage = (mtd->oobsize / no_subpages) & ~1; 1705 + 1706 + /* 1707 + * During a 
page write the i.MX NAND controller will read 512b from 1708 + * main_area0 SRAM, then oob_per_subpage bytes from spare0 SRAM, then 1709 + * 512b from main_area1 SRAM and so on until the full page is written. 1710 + * For software ECC we want to have a 1:1 mapping between the raw page 1711 + * data on the NAND chip and the view of the NAND core. This is 1712 + * necessary to make the NAND_CMD_RNDOUT read the data it expects. 1713 + * To accomplish this we have to write the data in the order the controller 1714 + * reads it. This is reversed in copy_page_from_sram() below. 1715 + * 1716 + * buf_len can either be the full page including the OOB or user data only. 1717 + * When it's user data only make sure that we fill up the rest of the 1718 + * SRAM with 0xff. 1719 + */ 1720 + for (i = 0; i < no_subpages; i++) { 1721 + int now = min(buf_len, 512); 1722 + 1723 + if (now) 1724 + memcpy16_toio(host->main_area0 + i * 512, buf, now); 1725 + 1726 + if (now < 512) 1727 + memff16_toio(host->main_area0 + i * 512 + now, 512 - now); 1728 + 1729 + buf += 512; 1730 + buf_len -= now; 1731 + 1732 + now = min(buf_len, oob_per_subpage); 1733 + if (now) 1734 + memcpy16_toio(host->spare0 + i * host->devtype_data->spare_len, 1735 + buf, now); 1736 + 1737 + if (now < oob_per_subpage) 1738 + memff16_toio(host->spare0 + i * host->devtype_data->spare_len + now, 1739 + oob_per_subpage - now); 1740 + 1741 + buf += oob_per_subpage; 1742 + buf_len -= now; 1743 + } 1744 + } 1745 + 1746 + static void copy_page_from_sram(struct mtd_info *mtd) 1747 + { 1748 + struct nand_chip *this = mtd_to_nand(mtd); 1749 + struct mxc_nand_host *host = nand_get_controller_data(this); 1750 + void *buf = host->data_buf; 1751 + unsigned int no_subpages = mtd->writesize / 512; 1752 + int oob_per_subpage, i; 1753 + 1754 + /* mtd->writesize is not set during ident scanning */ 1755 + if (!no_subpages) 1756 + no_subpages = 1; 1757 + 1758 + oob_per_subpage = (mtd->oobsize / no_subpages) & ~1; 1759 + 1760 + for (i = 0; i 
< no_subpages; i++) { 1761 + memcpy16_fromio(buf, host->main_area0 + i * 512, 512); 1762 + buf += 512; 1763 + 1764 + memcpy16_fromio(buf, host->spare0 + i * host->devtype_data->spare_len, 1765 + oob_per_subpage); 1766 + buf += oob_per_subpage; 1767 + } 1768 + } 1769 + 1770 + static int mxcnd_do_exec_op(struct nand_chip *chip, 1771 + const struct nand_subop *op) 1772 + { 1773 + struct mxc_nand_host *host = nand_get_controller_data(chip); 1774 + struct mtd_info *mtd = nand_to_mtd(chip); 1775 + int i, j, buf_len; 1776 + void *buf_read = NULL; 1777 + const void *buf_write = NULL; 1778 + const struct nand_op_instr *instr; 1779 + bool readid = false; 1780 + bool statusreq = false; 1781 + 1782 + for (i = 0; i < op->ninstrs; i++) { 1783 + instr = &op->instrs[i]; 1784 + 1785 + switch (instr->type) { 1786 + case NAND_OP_WAITRDY_INSTR: 1787 + /* NFC handles R/B internally, nothing to do here */ 1788 + break; 1789 + case NAND_OP_CMD_INSTR: 1790 + host->devtype_data->send_cmd(host, instr->ctx.cmd.opcode, true); 1791 + 1792 + if (instr->ctx.cmd.opcode == NAND_CMD_READID) 1793 + readid = true; 1794 + if (instr->ctx.cmd.opcode == NAND_CMD_STATUS) 1795 + statusreq = true; 1796 + 1797 + break; 1798 + case NAND_OP_ADDR_INSTR: 1799 + for (j = 0; j < instr->ctx.addr.naddrs; j++) { 1800 + bool islast = j == instr->ctx.addr.naddrs - 1; 1801 + host->devtype_data->send_addr(host, instr->ctx.addr.addrs[j], islast); 1802 + } 1803 + break; 1804 + case NAND_OP_DATA_OUT_INSTR: 1805 + buf_write = instr->ctx.data.buf.out; 1806 + buf_len = instr->ctx.data.len; 1807 + 1808 + if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) 1809 + memcpy32_toio(host->main_area0, buf_write, buf_len); 1810 + else 1811 + copy_page_to_sram(mtd, buf_write, buf_len); 1812 + 1813 + host->devtype_data->send_page(mtd, NFC_INPUT); 1814 + 1815 + break; 1816 + case NAND_OP_DATA_IN_INSTR: 1817 + 1818 + buf_read = instr->ctx.data.buf.in; 1819 + buf_len = instr->ctx.data.len; 1820 + 1821 + if (readid) { 1822 + 
host->devtype_data->send_read_id(host); 1823 + readid = false; 1824 + 1825 + memcpy32_fromio(host->data_buf, host->main_area0, buf_len * 2); 1826 + 1827 + if (chip->options & NAND_BUSWIDTH_16) { 1828 + u8 *bufr = buf_read; 1829 + u16 *bufw = host->data_buf; 1830 + for (j = 0; j < buf_len; j++) 1831 + bufr[j] = bufw[j]; 1832 + } else { 1833 + memcpy(buf_read, host->data_buf, buf_len); 1834 + } 1835 + break; 1836 + } 1837 + 1838 + if (statusreq) { 1839 + *(u8*)buf_read = host->devtype_data->get_dev_status(host); 1840 + statusreq = false; 1841 + break; 1842 + } 1843 + 1844 + host->devtype_data->read_page(chip); 1845 + 1846 + if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) { 1847 + if (IS_ALIGNED(buf_len, 4)) { 1848 + memcpy32_fromio(buf_read, host->main_area0, buf_len); 1849 + } else { 1850 + memcpy32_fromio(host->data_buf, host->main_area0, mtd->writesize); 1851 + memcpy(buf_read, host->data_buf, buf_len); 1852 + } 1853 + } else { 1854 + copy_page_from_sram(mtd); 1855 + memcpy(buf_read, host->data_buf, buf_len); 1856 + } 1857 + 1858 + break; 1859 + } 1860 + } 1861 + 1862 + return 0; 1863 + } 1864 + 1865 + #define MAX_DATA_SIZE (4096 + 512) 1866 + 1867 + static const struct nand_op_parser mxcnd_op_parser = NAND_OP_PARSER( 1868 + NAND_OP_PARSER_PATTERN(mxcnd_do_exec_op, 1869 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1870 + NAND_OP_PARSER_PAT_ADDR_ELEM(true, 7), 1871 + NAND_OP_PARSER_PAT_CMD_ELEM(true), 1872 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true), 1873 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(true, MAX_DATA_SIZE)), 1874 + NAND_OP_PARSER_PATTERN(mxcnd_do_exec_op, 1875 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1876 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, 7), 1877 + NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, MAX_DATA_SIZE), 1878 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1879 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)), 1880 + NAND_OP_PARSER_PATTERN(mxcnd_do_exec_op, 1881 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 1882 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, 7), 1883 + 
NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, MAX_DATA_SIZE), 1884 + NAND_OP_PARSER_PAT_CMD_ELEM(true), 1885 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)), 1886 + ); 1887 + 1888 + static int mxcnd_exec_op(struct nand_chip *chip, 1889 + const struct nand_operation *op, bool check_only) 1890 + { 1891 + return nand_op_parser_exec_op(chip, &mxcnd_op_parser, 1892 + op, check_only); 1893 + } 1894 + 1475 1895 static const struct nand_controller_ops mxcnd_controller_ops = { 1476 1896 .attach_chip = mxcnd_attach_chip, 1477 1897 .setup_interface = mxcnd_setup_interface, 1898 + .exec_op = mxcnd_exec_op, 1478 1899 }; 1479 1900 1480 1901 static int mxcnd_probe(struct platform_device *pdev) ··· 1715 1720 1716 1721 nand_set_controller_data(this, host); 1717 1722 nand_set_flash_node(this, pdev->dev.of_node); 1718 - this->legacy.dev_ready = mxc_nand_dev_ready; 1719 - this->legacy.cmdfunc = mxc_nand_command; 1720 - this->legacy.read_byte = mxc_nand_read_byte; 1721 - this->legacy.write_buf = mxc_nand_write_buf; 1722 - this->legacy.read_buf = mxc_nand_read_buf; 1723 - this->legacy.set_features = mxc_nand_set_features; 1724 - this->legacy.get_features = mxc_nand_get_features; 1725 1723 1726 1724 host->clk = devm_clk_get(&pdev->dev, NULL); 1727 1725 if (IS_ERR(host->clk))
+51 -13
drivers/mtd/nand/spi/macronix.c
···
         SPINAND_HAS_QE_BIT,
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
     SPINAND_INFO("MX35LF2GE4AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x26),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x26, 0x03),
         NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35LF4GE4AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x37),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x37, 0x03),
         NAND_MEMORG(1, 4096, 128, 64, 2048, 40, 1, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35LF1G24AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x14),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x14, 0x03),
         NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_HAS_QE_BIT,
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
     SPINAND_INFO("MX35LF2G24AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x24),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x24, 0x03),
         NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
                                  &update_cache_variants),
         SPINAND_HAS_QE_BIT,
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
+    SPINAND_INFO("MX35LF2G24AD-Z4I8",
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x64, 0x03),
+        NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
+        NAND_ECCREQ(8, 512),
+        SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+                                 &write_cache_variants,
+                                 &update_cache_variants),
+        SPINAND_HAS_QE_BIT,
+        SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
     SPINAND_INFO("MX35LF4G24AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x35),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x35, 0x03),
         NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 2, 1, 1),
+        NAND_ECCREQ(8, 512),
+        SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+                                 &write_cache_variants,
+                                 &update_cache_variants),
+        SPINAND_HAS_QE_BIT,
+        SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
+    SPINAND_INFO("MX35LF4G24AD-Z4I8",
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x75, 0x03),
+        NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
                                  &write_cache_variants,
···
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35UF4G24AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb5),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb5, 0x03),
         NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 2, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_HAS_QE_BIT,
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
+    SPINAND_INFO("MX35UF4G24AD-Z4I8",
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xf5, 0x03),
+        NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
+        NAND_ECCREQ(8, 512),
+        SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+                                 &write_cache_variants,
+                                 &update_cache_variants),
+        SPINAND_HAS_QE_BIT,
+        SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
+                        mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35UF4GE4AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb7),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb7, 0x03),
         NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35UF2G24AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa4),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa4, 0x03),
         NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_HAS_QE_BIT,
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
+    SPINAND_INFO("MX35UF2G24AD-Z4I8",
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xe4, 0x03),
+        NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
+        NAND_ECCREQ(8, 512),
+        SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+                                 &write_cache_variants,
+                                 &update_cache_variants),
+        SPINAND_HAS_QE_BIT,
+        SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
+                        mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35UF2GE4AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa6),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa6, 0x03),
         NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35UF2GE4AC",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa2),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa2, 0x01),
         NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1),
         NAND_ECCREQ(4, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35UF1G24AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x94),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x94, 0x03),
         NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35UF1GE4AD",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x96),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x96, 0x03),
         NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
         NAND_ECCREQ(8, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
···
         SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
                         mx35lf1ge4ab_ecc_get_status)),
     SPINAND_INFO("MX35UF1GE4AC",
-        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x92),
+        SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x92, 0x01),
         NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
         NAND_ECCREQ(4, 512),
         SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+1
drivers/mtd/parsers/brcm_u-boot.c
···
 };
 module_mtd_part_parser(brcm_u_boot_mtd_parser);
 
+MODULE_DESCRIPTION("Broadcom's U-Boot partition parser");
 MODULE_LICENSE("GPL");
+5 -13
drivers/mtd/parsers/cmdlinepart.c
···
 #include <linux/module.h>
 #include <linux/err.h>
 
-/* debug macro */
-#if 0
-#define dbg(x) do { printk("DEBUG-CMDLINE-PART: "); printk x; } while(0)
-#else
-#define dbg(x)
-#endif
-
-
 /* special size referring to all the remaining space in a partition */
 #define SIZE_REMAINING ULLONG_MAX
 #define OFFSET_CONTINUOUS ULLONG_MAX
···
     parts[this_part].name = extra_mem;
     extra_mem += name_len + 1;
 
-    dbg(("partition %d: name <%s>, offset %llx, size %llx, mask flags %x\n",
+    pr_debug("partition %d: name <%s>, offset %llx, size %llx, mask flags %x\n",
         this_part, parts[this_part].name, parts[this_part].offset,
-        parts[this_part].size, parts[this_part].mask_flags));
+        parts[this_part].size, parts[this_part].mask_flags);
 
     /* return (updated) pointer to extra_mem memory */
     if (extra_mem_ptr)
···
     }
     mtd_id_len = p - mtd_id;
 
-    dbg(("parsing <%s>\n", p+1));
+    pr_debug("parsing <%s>\n", p+1);
 
     /*
      * parse one mtd. have it reserve memory for the
···
     this_mtd->next = partitions;
     partitions = this_mtd;
 
-    dbg(("mtdid=<%s> num_parts=<%d>\n",
-        this_mtd->mtd_id, this_mtd->num_parts));
+    pr_debug("mtdid=<%s> num_parts=<%d>\n",
+        this_mtd->mtd_id, this_mtd->num_parts);
 
 
     /* EOS - we're done */
-1
drivers/mtd/spi-nor/Makefile
···
 spi-nor-objs += spansion.o
 spi-nor-objs += sst.o
 spi-nor-objs += winbond.o
-spi-nor-objs += xilinx.o
 spi-nor-objs += xmc.o
 spi-nor-$(CONFIG_DEBUG_FS) += debugfs.o
 obj-$(CONFIG_MTD_SPI_NOR) += spi-nor.o
+72 -116
drivers/mtd/spi-nor/core.c
···
     spi_nor_unprep(nor);
 }
 
-static u32 spi_nor_convert_addr(struct spi_nor *nor, loff_t addr)
-{
-    if (!nor->params->convert_addr)
-        return addr;
-
-    return nor->params->convert_addr(nor, addr);
-}
-
 /*
  * Initiate the erasure of a single sector
  */
 int spi_nor_erase_sector(struct spi_nor *nor, u32 addr)
 {
     int i;
-
-    addr = spi_nor_convert_addr(nor, addr);
 
     if (nor->spimem) {
         struct spi_mem_op op =
···
     &spi_nor_spansion,
     &spi_nor_sst,
     &spi_nor_winbond,
-    &spi_nor_xilinx,
     &spi_nor_xmc,
 };
···
     while (len) {
         loff_t addr = from;
 
-        addr = spi_nor_convert_addr(nor, addr);
-
         ret = spi_nor_read_data(nor, addr, len, buf);
         if (ret == 0) {
             /* We shouldn't see 0-length reads */
···
                          size_t *retlen, const u_char *buf)
 {
     struct spi_nor *nor = mtd_to_spi_nor(mtd);
-    size_t page_offset, page_remain, i;
+    size_t i;
     ssize_t ret;
     u32 page_size = nor->params->page_size;
···
     for (i = 0; i < len; ) {
         ssize_t written;
         loff_t addr = to + i;
-
-        /*
-         * If page_size is a power of two, the offset can be quickly
-         * calculated with an AND operation. On the other cases we
-         * need to do a modulus operation (more expensive).
-         */
-        if (is_power_of_2(page_size)) {
-            page_offset = addr & (page_size - 1);
-        } else {
-            u64 aux = addr;
-
-            page_offset = do_div(aux, page_size);
-        }
+        size_t page_offset = addr & (page_size - 1);
         /* the size of data remaining on the first page */
-        page_remain = min_t(size_t, page_size - page_offset, len - i);
-
-        addr = spi_nor_convert_addr(nor, addr);
+        size_t page_remain = min_t(size_t, page_size - page_offset, len - i);
 
         ret = spi_nor_lock_device(nor);
         if (ret)
···
     return 0;
 }
 
-static int spi_nor_default_setup(struct spi_nor *nor,
-                                 const struct spi_nor_hwcaps *hwcaps)
+static int spi_nor_set_addr_nbytes(struct spi_nor *nor)
+{
+    if (nor->params->addr_nbytes) {
+        nor->addr_nbytes = nor->params->addr_nbytes;
+    } else if (nor->read_proto == SNOR_PROTO_8_8_8_DTR) {
+        /*
+         * In 8D-8D-8D mode, one byte takes half a cycle to transfer. So
+         * in this protocol an odd addr_nbytes cannot be used because
+         * then the address phase would only span a cycle and a half.
+         * Half a cycle would be left over. We would then have to start
+         * the dummy phase in the middle of a cycle and so too the data
+         * phase, and we will end the transaction with half a cycle left
+         * over.
+         *
+         * Force all 8D-8D-8D flashes to use an addr_nbytes of 4 to
+         * avoid this situation.
+         */
+        nor->addr_nbytes = 4;
+    } else if (nor->info->addr_nbytes) {
+        nor->addr_nbytes = nor->info->addr_nbytes;
+    } else {
+        nor->addr_nbytes = 3;
+    }
+
+    if (nor->addr_nbytes == 3 && nor->params->size > 0x1000000) {
+        /* enable 4-byte addressing if the device exceeds 16MiB */
+        nor->addr_nbytes = 4;
+    }
+
+    if (nor->addr_nbytes > SPI_NOR_MAX_ADDR_NBYTES) {
+        dev_dbg(nor->dev, "The number of address bytes is too large: %u\n",
+                nor->addr_nbytes);
+        return -EINVAL;
+    }
+
+    /* Set 4byte opcodes when possible. */
+    if (nor->addr_nbytes == 4 && nor->flags & SNOR_F_4B_OPCODES &&
+        !(nor->flags & SNOR_F_HAS_4BAIT))
+        spi_nor_set_4byte_opcodes(nor);
+
+    return 0;
+}
+
+static int spi_nor_setup(struct spi_nor *nor,
+                         const struct spi_nor_hwcaps *hwcaps)
 {
     struct spi_nor_flash_parameter *params = nor->params;
     u32 ignored_mask, shared_mask;
···
             "can't select erase settings supported by both the SPI controller and memory.\n");
         return err;
     }
-
-    return 0;
-}
-
-static int spi_nor_set_addr_nbytes(struct spi_nor *nor)
-{
-    if (nor->params->addr_nbytes) {
-        nor->addr_nbytes = nor->params->addr_nbytes;
-    } else if (nor->read_proto == SNOR_PROTO_8_8_8_DTR) {
-        /*
-         * In 8D-8D-8D mode, one byte takes half a cycle to transfer. So
-         * in this protocol an odd addr_nbytes cannot be used because
-         * then the address phase would only span a cycle and a half.
-         * Half a cycle would be left over. We would then have to start
-         * the dummy phase in the middle of a cycle and so too the data
-         * phase, and we will end the transaction with half a cycle left
-         * over.
-         *
-         * Force all 8D-8D-8D flashes to use an addr_nbytes of 4 to
-         * avoid this situation.
-         */
-        nor->addr_nbytes = 4;
-    } else if (nor->info->addr_nbytes) {
-        nor->addr_nbytes = nor->info->addr_nbytes;
-    } else {
-        nor->addr_nbytes = 3;
-    }
-
-    if (nor->addr_nbytes == 3 && nor->params->size > 0x1000000) {
-        /* enable 4-byte addressing if the device exceeds 16MiB */
-        nor->addr_nbytes = 4;
-    }
-
-    if (nor->addr_nbytes > SPI_NOR_MAX_ADDR_NBYTES) {
-        dev_dbg(nor->dev, "The number of address bytes is too large: %u\n",
-                nor->addr_nbytes);
-        return -EINVAL;
-    }
-
-    /* Set 4byte opcodes when possible. */
-    if (nor->addr_nbytes == 4 && nor->flags & SNOR_F_4B_OPCODES &&
-        !(nor->flags & SNOR_F_HAS_4BAIT))
-        spi_nor_set_4byte_opcodes(nor);
-
-    return 0;
-}
-
-static int spi_nor_setup(struct spi_nor *nor,
-                         const struct spi_nor_hwcaps *hwcaps)
-{
-    int ret;
-
-    if (nor->params->setup)
-        ret = nor->params->setup(nor, hwcaps);
-    else
-        ret = spi_nor_default_setup(nor, hwcaps);
-    if (ret)
-        return ret;
 
     return spi_nor_set_addr_nbytes(nor);
 }
···
     params->page_size = info->page_size ?: SPI_NOR_DEFAULT_PAGE_SIZE;
     params->n_banks = info->n_banks ?: SPI_NOR_DEFAULT_N_BANKS;
 
-    if (!(info->flags & SPI_NOR_NO_FR)) {
-        /* Default to Fast Read for DT and non-DT platform devices. */
+    /* Default to Fast Read for non-DT and enable it if requested by DT. */
+    if (!np || of_property_read_bool(np, "m25p,fast-read"))
         params->hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
-
-        /* Mask out Fast Read if not requested at DT instantiation. */
-        if (np && !of_property_read_bool(np, "m25p,fast-read"))
-            params->hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST;
-    }
 
     /* (Fast) Read settings. */
     params->hwcaps.mask |= SNOR_HWCAPS_READ;
···
         spi_nor_init_params_deprecated(nor);
     }
 
-    return spi_nor_late_init_params(nor);
+    ret = spi_nor_late_init_params(nor);
+    if (ret)
+        return ret;
+
+    if (WARN_ON(!is_power_of_2(nor->params->page_size)))
+        return -EINVAL;
+
+    return 0;
 }
 
 /** spi_nor_set_octal_dtr() - enable or disable Octal DTR I/O.
···
     if (name)
         info = spi_nor_match_name(nor, name);
-    /* Try to auto-detect if chip name wasn't specified or not found */
-    if (!info)
-        return spi_nor_detect(nor);
-
     /*
-     * If caller has specified name of flash model that can normally be
-     * detected using JEDEC, let's verify it.
+     * Auto-detect if chip name wasn't specified or not found, or the chip
+     * has an ID. If the chip supposedly has an ID, we also do an
+     * auto-detection to compare it later.
      */
-    if (name && info->id) {
+    if (!info || info->id) {
         const struct flash_info *jinfo;
 
         jinfo = spi_nor_detect(nor);
-        if (IS_ERR(jinfo)) {
+        if (IS_ERR(jinfo))
             return jinfo;
-        } else if (jinfo != info) {
-            /*
-             * JEDEC knows better, so overwrite platform ID. We
-             * can't trust partitions any longer, but we'll let
-             * mtd apply them anyway, since some partitions may be
-             * marked read-only, and we don't want to loose that
-             * information, even if it's not 100% accurate.
-             */
+
+        /*
+         * If caller has specified name of flash model that can normally
+         * be detected using JEDEC, let's verify it.
+         */
+        if (info && jinfo != info)
             dev_warn(nor->dev, "found %s, expected %s\n",
                      jinfo->name, info->name);
-            info = jinfo;
-        }
+
+        /* If info was set before, JEDEC knows better. */
+        info = jinfo;
     }
 
     return info;
-12
drivers/mtd/spi-nor/core.h
···
  * @set_octal_dtr:	enables or disables SPI NOR octal DTR mode.
  * @quad_enable:	enables SPI NOR quad mode.
  * @set_4byte_addr_mode: puts the SPI NOR in 4 byte addressing mode.
- * @convert_addr:	converts an absolute address into something the flash
- *			will understand. Particularly useful when pagesize is
- *			not a power-of-2.
- * @setup:		(optional) configures the SPI NOR memory. Useful for
- *			SPI NOR flashes that have peculiarities to the SPI NOR
- *			standard e.g. different opcodes, specific address
- *			calculation, page size, etc.
  * @ready:		(optional) flashes might use a different mechanism
  *			than reading the status register to indicate they
  *			are ready for a new command
···
 	int (*set_octal_dtr)(struct spi_nor *nor, bool enable);
 	int (*quad_enable)(struct spi_nor *nor);
 	int (*set_4byte_addr_mode)(struct spi_nor *nor, bool enable);
-	u32 (*convert_addr)(struct spi_nor *nor, u32 addr);
-	int (*setup)(struct spi_nor *nor, const struct spi_nor_hwcaps *hwcaps);
 	int (*ready)(struct spi_nor *nor);
 
 	const struct spi_nor_locking_ops *locking_ops;
···
  *   Usually these will power-up in a write-protected
  *   state.
  * SPI_NOR_NO_ERASE:        no erase command needed.
- * SPI_NOR_NO_FR:           can't do fastread.
  * SPI_NOR_QUAD_PP:         flash supports Quad Input Page Program.
  * SPI_NOR_RWW:             flash supports reads while write.
  *
···
 #define SPI_NOR_BP3_SR_BIT6	BIT(4)
 #define SPI_NOR_SWP_IS_VOLATILE	BIT(5)
 #define SPI_NOR_NO_ERASE	BIT(6)
-#define SPI_NOR_NO_FR		BIT(7)
 #define SPI_NOR_QUAD_PP		BIT(8)
 #define SPI_NOR_RWW		BIT(9)
···
 extern const struct spi_nor_manufacturer spi_nor_spansion;
 extern const struct spi_nor_manufacturer spi_nor_sst;
 extern const struct spi_nor_manufacturer spi_nor_winbond;
-extern const struct spi_nor_manufacturer spi_nor_xilinx;
 extern const struct spi_nor_manufacturer spi_nor_xmc;
 
 extern const struct attribute_group *spi_nor_sysfs_groups[];
+15 -4
drivers/mtd/spi-nor/everspin.c
···
 		.size = SZ_16K,
 		.sector_size = SZ_16K,
 		.addr_nbytes = 2,
-		.flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR,
+		.flags = SPI_NOR_NO_ERASE,
 	}, {
 		.name = "mr25h256",
 		.size = SZ_32K,
 		.sector_size = SZ_32K,
 		.addr_nbytes = 2,
-		.flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR,
+		.flags = SPI_NOR_NO_ERASE,
 	}, {
 		.name = "mr25h10",
 		.size = SZ_128K,
 		.sector_size = SZ_128K,
-		.flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR,
+		.flags = SPI_NOR_NO_ERASE,
 	}, {
 		.name = "mr25h40",
 		.size = SZ_512K,
 		.sector_size = SZ_512K,
-		.flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR,
+		.flags = SPI_NOR_NO_ERASE,
 	}
+};
+
+static void everspin_nor_default_init(struct spi_nor *nor)
+{
+	/* Everspin FRAMs don't support the fast read opcode. */
+	nor->params->hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST;
+}
+
+static const struct spi_nor_fixups everspin_nor_fixups = {
+	.default_init = everspin_nor_default_init,
 };
 
 const struct spi_nor_manufacturer spi_nor_everspin = {
 	.name = "everspin",
 	.parts = everspin_nor_parts,
 	.nparts = ARRAY_SIZE(everspin_nor_parts),
+	.fixups = &everspin_nor_fixups,
 };
+2
drivers/mtd/spi-nor/winbond.c
···
 	}, {
 		.id = SNOR_ID(0xef, 0x40, 0x18),
 		.name = "w25q128",
+		.size = SZ_16M,
 		.flags = SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB,
+		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
 	}, {
 		.id = SNOR_ID(0xef, 0x40, 0x19),
 		.name = "w25q256",
-169
drivers/mtd/spi-nor/xilinx.c
···
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (C) 2005, Intec Automation Inc.
- * Copyright (C) 2014, Freescale Semiconductor, Inc.
- */
-
-#include <linux/mtd/spi-nor.h>
-
-#include "core.h"
-
-#define XILINX_OP_SE		0x50	/* Sector erase */
-#define XILINX_OP_PP		0x82	/* Page program */
-#define XILINX_OP_RDSR		0xd7	/* Read status register */
-
-#define XSR_PAGESIZE		BIT(0)	/* Page size in Po2 or Linear */
-#define XSR_RDY			BIT(7)	/* Ready */
-
-#define XILINX_RDSR_OP(buf)					\
-	SPI_MEM_OP(SPI_MEM_OP_CMD(XILINX_OP_RDSR, 0),		\
-		   SPI_MEM_OP_NO_ADDR,				\
-		   SPI_MEM_OP_NO_DUMMY,				\
-		   SPI_MEM_OP_DATA_IN(1, buf, 0))
-
-#define S3AN_FLASH(_id, _name, _n_sectors, _page_size)	\
-	.id = _id,					\
-	.name = _name,					\
-	.size = 8 * (_page_size) * (_n_sectors),	\
-	.sector_size = (8 * (_page_size)),		\
-	.page_size = (_page_size),			\
-	.flags = SPI_NOR_NO_FR
-
-/* Xilinx S3AN share MFR with Atmel SPI NOR */
-static const struct flash_info xilinx_nor_parts[] = {
-	/* Xilinx S3AN Internal Flash */
-	{ S3AN_FLASH(SNOR_ID(0x1f, 0x22, 0x00), "3S50AN", 64, 264) },
-	{ S3AN_FLASH(SNOR_ID(0x1f, 0x24, 0x00), "3S200AN", 256, 264) },
-	{ S3AN_FLASH(SNOR_ID(0x1f, 0x24, 0x00), "3S400AN", 256, 264) },
-	{ S3AN_FLASH(SNOR_ID(0x1f, 0x25, 0x00), "3S700AN", 512, 264) },
-	{ S3AN_FLASH(SNOR_ID(0x1f, 0x26, 0x00), "3S1400AN", 512, 528) },
-};
-
-/*
- * This code converts an address to the Default Address Mode, that has non
- * power of two page sizes. We must support this mode because it is the default
- * mode supported by Xilinx tools, it can access the whole flash area and
- * changing over to the Power-of-two mode is irreversible and corrupts the
- * original data.
- * Addr can safely be unsigned int, the biggest S3AN device is smaller than
- * 4 MiB.
- */
-static u32 s3an_nor_convert_addr(struct spi_nor *nor, u32 addr)
-{
-	u32 page_size = nor->params->page_size;
-	u32 offset, page;
-
-	offset = addr % page_size;
-	page = addr / page_size;
-	page <<= (page_size > 512) ? 10 : 9;
-
-	return page | offset;
-}
-
-/**
- * xilinx_nor_read_sr() - Read the Status Register on S3AN flashes.
- * @nor:	pointer to 'struct spi_nor'.
- * @sr:		pointer to a DMA-able buffer where the value of the
- *		Status Register will be written.
- *
- * Return: 0 on success, -errno otherwise.
- */
-static int xilinx_nor_read_sr(struct spi_nor *nor, u8 *sr)
-{
-	int ret;
-
-	if (nor->spimem) {
-		struct spi_mem_op op = XILINX_RDSR_OP(sr);
-
-		spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
-
-		ret = spi_mem_exec_op(nor->spimem, &op);
-	} else {
-		ret = spi_nor_controller_ops_read_reg(nor, XILINX_OP_RDSR, sr,
-						      1);
-	}
-
-	if (ret)
-		dev_dbg(nor->dev, "error %d reading SR\n", ret);
-
-	return ret;
-}
-
-/**
- * xilinx_nor_sr_ready() - Query the Status Register of the S3AN flash to see
- * if the flash is ready for new commands.
- * @nor:	pointer to 'struct spi_nor'.
- *
- * Return: 1 if ready, 0 if not ready, -errno on errors.
- */
-static int xilinx_nor_sr_ready(struct spi_nor *nor)
-{
-	int ret;
-
-	ret = xilinx_nor_read_sr(nor, nor->bouncebuf);
-	if (ret)
-		return ret;
-
-	return !!(nor->bouncebuf[0] & XSR_RDY);
-}
-
-static int xilinx_nor_setup(struct spi_nor *nor,
-			    const struct spi_nor_hwcaps *hwcaps)
-{
-	u32 page_size;
-	int ret;
-
-	ret = xilinx_nor_read_sr(nor, nor->bouncebuf);
-	if (ret)
-		return ret;
-
-	nor->erase_opcode = XILINX_OP_SE;
-	nor->program_opcode = XILINX_OP_PP;
-	nor->read_opcode = SPINOR_OP_READ;
-	nor->flags |= SNOR_F_NO_OP_CHIP_ERASE;
-
-	/*
-	 * This flashes have a page size of 264 or 528 bytes (known as
-	 * Default addressing mode). It can be changed to a more standard
-	 * Power of two mode where the page size is 256/512. This comes
-	 * with a price: there is 3% less of space, the data is corrupted
-	 * and the page size cannot be changed back to default addressing
-	 * mode.
-	 *
-	 * The current addressing mode can be read from the XRDSR register
-	 * and should not be changed, because is a destructive operation.
-	 */
-	if (nor->bouncebuf[0] & XSR_PAGESIZE) {
-		/* Flash in Power of 2 mode */
-		page_size = (nor->params->page_size == 264) ? 256 : 512;
-		nor->params->page_size = page_size;
-		nor->mtd.writebufsize = page_size;
-		nor->params->size = nor->info->size;
-		nor->mtd.erasesize = 8 * page_size;
-	} else {
-		/* Flash in Default addressing mode */
-		nor->params->convert_addr = s3an_nor_convert_addr;
-		nor->mtd.erasesize = nor->info->sector_size;
-	}
-
-	return 0;
-}
-
-static int xilinx_nor_late_init(struct spi_nor *nor)
-{
-	nor->params->setup = xilinx_nor_setup;
-	nor->params->ready = xilinx_nor_sr_ready;
-
-	return 0;
-}
-
-static const struct spi_nor_fixups xilinx_nor_fixups = {
-	.late_init = xilinx_nor_late_init,
-};
-
-const struct spi_nor_manufacturer spi_nor_xilinx = {
-	.name = "xilinx",
-	.parts = xilinx_nor_parts,
-	.nparts = ARRAY_SIZE(xilinx_nor_parts),
-	.fixups = &xilinx_nor_fixups,
-};
+17 -17
drivers/mtd/tests/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_MTD_TESTS) += mtd_oobtest.o
-obj-$(CONFIG_MTD_TESTS) += mtd_pagetest.o
-obj-$(CONFIG_MTD_TESTS) += mtd_readtest.o
-obj-$(CONFIG_MTD_TESTS) += mtd_speedtest.o
-obj-$(CONFIG_MTD_TESTS) += mtd_stresstest.o
-obj-$(CONFIG_MTD_TESTS) += mtd_subpagetest.o
-obj-$(CONFIG_MTD_TESTS) += mtd_torturetest.o
-obj-$(CONFIG_MTD_TESTS) += mtd_nandecctest.o
-obj-$(CONFIG_MTD_TESTS) += mtd_nandbiterrs.o
+obj-$(CONFIG_MTD_TESTS) += mtd_oobtest.o mtd_test.o
+obj-$(CONFIG_MTD_TESTS) += mtd_pagetest.o mtd_test.o
+obj-$(CONFIG_MTD_TESTS) += mtd_readtest.o mtd_test.o
+obj-$(CONFIG_MTD_TESTS) += mtd_speedtest.o mtd_test.o
+obj-$(CONFIG_MTD_TESTS) += mtd_stresstest.o mtd_test.o
+obj-$(CONFIG_MTD_TESTS) += mtd_subpagetest.o mtd_test.o
+obj-$(CONFIG_MTD_TESTS) += mtd_torturetest.o mtd_test.o
+obj-$(CONFIG_MTD_TESTS) += mtd_nandecctest.o mtd_test.o
+obj-$(CONFIG_MTD_TESTS) += mtd_nandbiterrs.o mtd_test.o
 
-mtd_oobtest-objs := oobtest.o mtd_test.o
-mtd_pagetest-objs := pagetest.o mtd_test.o
-mtd_readtest-objs := readtest.o mtd_test.o
-mtd_speedtest-objs := speedtest.o mtd_test.o
-mtd_stresstest-objs := stresstest.o mtd_test.o
-mtd_subpagetest-objs := subpagetest.o mtd_test.o
-mtd_torturetest-objs := torturetest.o mtd_test.o
-mtd_nandbiterrs-objs := nandbiterrs.o mtd_test.o
+mtd_oobtest-objs := oobtest.o
+mtd_pagetest-objs := pagetest.o
+mtd_readtest-objs := readtest.o
+mtd_speedtest-objs := speedtest.o
+mtd_stresstest-objs := stresstest.o
+mtd_subpagetest-objs := subpagetest.o
+mtd_torturetest-objs := torturetest.o
+mtd_nandbiterrs-objs := nandbiterrs.o
+9
drivers/mtd/tests/mtd_test.c
···
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(mtdtest_erase_eraseblock);
 
 static int is_block_bad(struct mtd_info *mtd, unsigned int ebnum)
 {
···
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(mtdtest_scan_for_bad_eraseblocks);
 
 int mtdtest_erase_good_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
 				   unsigned int eb, int ebcnt)
···
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(mtdtest_erase_good_eraseblocks);
 
 int mtdtest_read(struct mtd_info *mtd, loff_t addr, size_t size, void *buf)
 {
···
 
 	return err;
 }
+EXPORT_SYMBOL_GPL(mtdtest_read);
 
 int mtdtest_write(struct mtd_info *mtd, loff_t addr, size_t size,
 		  const void *buf)
···
 
 	return err;
 }
+EXPORT_SYMBOL_GPL(mtdtest_write);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("MTD function test helpers");
+MODULE_AUTHOR("Akinobu Mita");
+16 -16
include/linux/mtd/cfi.h
···
 {
 	map_word val = map_read(map, addr);
 
-	if (map_bankwidth_is_1(map)) {
+	if (map_bankwidth_is_1(map))
 		return val.x[0];
-	} else if (map_bankwidth_is_2(map)) {
+	if (map_bankwidth_is_2(map))
 		return cfi16_to_cpu(map, val.x[0]);
-	} else {
-		/* No point in a 64-bit byteswap since that would just be
-		   swapping the responses from different chips, and we are
-		   only interested in one chip (a representative sample) */
-		return cfi32_to_cpu(map, val.x[0]);
-	}
+	/*
+	 * No point in a 64-bit byteswap since that would just be
+	 * swapping the responses from different chips, and we are
+	 * only interested in one chip (a representative sample)
+	 */
+	return cfi32_to_cpu(map, val.x[0]);
 }
 
 static inline uint16_t cfi_read_query16(struct map_info *map, uint32_t addr)
 {
 	map_word val = map_read(map, addr);
 
-	if (map_bankwidth_is_1(map)) {
+	if (map_bankwidth_is_1(map))
 		return val.x[0] & 0xff;
-	} else if (map_bankwidth_is_2(map)) {
+	if (map_bankwidth_is_2(map))
 		return cfi16_to_cpu(map, val.x[0]);
-	} else {
-		/* No point in a 64-bit byteswap since that would just be
-		   swapping the responses from different chips, and we are
-		   only interested in one chip (a representative sample) */
-		return cfi32_to_cpu(map, val.x[0]);
-	}
+	/*
+	 * No point in a 64-bit byteswap since that would just be
+	 * swapping the responses from different chips, and we are
+	 * only interested in one chip (a representative sample)
+	 */
+	return cfi32_to_cpu(map, val.x[0]);
 }
 
 void cfi_udelay(int us);