
Merge tag 'mtd/for-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull mtd updates from Miquel Raynal:
"MTD changes:

- Apart from a binding conversion to YAML, only minor changes/small
fixes have been merged.

Raw NAND changes:

- Minor fixes for various controller drivers like DMA mapping checks,
better timing derivations or bitflip statistics.

- Some Hynix NAND flashes do not support read-retry, so don't even
try it

SPI NAND changes:

- In order to support high-speed modes, certain chips need extra
configuration like adding more dummy cycles. This is now possible,
especially on Winbond chips.

- Aside from that, Gigadevice gets support for a new chip (GD5F1GM9).

SPI NOR changes:

- A notable change is the fix for exiting 4-byte addressing on
Infineon SEMPER flashes. These flashes do not support the standard
EX4B opcode (E9h), and use a vendor-specific opcode (B8h) instead.

- There is also a fix for unlocking flashes that are write-protected
at power-on. This was caused by using an uninitialized mtd_info in
spi_nor_try_unlock_all()"

* tag 'mtd/for-6.17' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (26 commits)
mtd: spinand: winbond: Add comment about the maximum frequency
mtd: spinand: winbond: Enable high-speed modes on w35n0xjw
mtd: spinand: winbond: Enable high-speed modes on w25n0xjw
mtd: spinand: Add a ->configure_chip() hook
mtd: spinand: Add a frequency field to all READ_FROM_CACHE variants
mtd: spinand: Fix macro alignment
spi: spi-mem: Take into account the actual maximum frequency
spi: spi-mem: Use picoseconds for calculating the op durations
mtd: rawnand: atmel: set pmecc data setup time
mtd: spinand: propagate spinand_wait() errors from spinand_write_page()
mtd: rawnand: fsmc: Add missing check after DMA map
mtd: rawnand: rockchip: Add missing check after DMA map
mtd: rawnand: hynix: don't try read-retry on SLC NANDs
mtd: rawnand: atmel: Fix dma_mapping_error() address
mtd: nand: brcmnand: fix mtd corrected bits stat
mtd: rawnand: renesas: Add missing check after DMA map
mtd: spinand: gigadevice: Add support for GD5F1GM9 chips
mtd: nand: brcmnand: replace manual string choices with standard helpers
mtd: map: Don't use "proxy" headers
mtd: spi-nor: Fix spi_nor_try_unlock_all()
...

+556 -258
+1 -1
Documentation/devicetree/bindings/mtd/jedec,spi-nor.yaml
··· 20 20 - pattern: "^((((micron|spansion|st),)?\ 21 21 (m25p(40|80|16|32|64|128)|\ 22 22 n25q(32b|064|128a11|128a13|256a|512a|164k)))|\ 23 - atmel,at25df(321a|641|081a)|\ 23 + atmel,at(25|26)df(321a|641|081a)|\ 24 24 everspin,mr25h(10|40|128|256)|\ 25 25 (mxicy|macronix),mx25l(4005a|1606e|6405d|8005|12805d|25635e)|\ 26 26 (mxicy|macronix),mx25u(4033|4035)|\
+74
Documentation/devicetree/bindings/mtd/nxp,lpc1773-spifi.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mtd/nxp,lpc1773-spifi.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: NXP SPI Flash Interface (SPIFI) 8 + 9 + description: 10 + NXP SPIFI is a specialized SPI interface for serial Flash devices. 11 + It supports one Flash device with 1-, 2- and 4-bits width in SPI 12 + mode 0 or 3. The controller operates in either command or memory 13 + mode. In memory mode the Flash is accessible from the CPU as 14 + normal memory. 15 + 16 + maintainers: 17 + - Frank Li <Frank.Li@nxp.com> 18 + 19 + properties: 20 + compatible: 21 + const: nxp,lpc1773-spifi 22 + 23 + reg: 24 + maxItems: 2 25 + 26 + reg-names: 27 + items: 28 + - const: spifi 29 + - const: flash 30 + 31 + interrupts: 32 + maxItems: 1 33 + 34 + clocks: 35 + maxItems: 2 36 + 37 + clock-names: 38 + items: 39 + - const: spifi 40 + - const: reg 41 + 42 + resets: 43 + maxItems: 1 44 + 45 + spi-cpol: 46 + enum: [0, 3] 47 + 48 + required: 49 + - compatible 50 + - reg 51 + - reg-names 52 + - interrupts 53 + - clocks 54 + - clock-names 55 + 56 + allOf: 57 + - $ref: /schemas/spi/spi-controller.yaml# 58 + 59 + unevaluatedProperties: false 60 + 61 + examples: 62 + - | 63 + #include <dt-bindings/clock/lpc18xx-ccu.h> 64 + 65 + spi@40003000 { 66 + compatible = "nxp,lpc1773-spifi"; 67 + reg = <0x40003000 0x1000>, <0x14000000 0x4000000>; 68 + reg-names = "spifi", "flash"; 69 + interrupts = <30>; 70 + clocks = <&ccu1 CLK_SPIFI>, <&ccu1 CLK_CPU_SPIFI>; 71 + clock-names = "spifi", "reg"; 72 + resets = <&rgu 53>; 73 + }; 74 +
-58
Documentation/devicetree/bindings/mtd/nxp-spifi.txt
··· 1 - * NXP SPI Flash Interface (SPIFI) 2 - 3 - NXP SPIFI is a specialized SPI interface for serial Flash devices. 4 - It supports one Flash device with 1-, 2- and 4-bits width in SPI 5 - mode 0 or 3. The controller operates in either command or memory 6 - mode. In memory mode the Flash is accessible from the CPU as 7 - normal memory. 8 - 9 - Required properties: 10 - - compatible : Should be "nxp,lpc1773-spifi" 11 - - reg : the first contains the register location and length, 12 - the second contains the memory mapping address and length 13 - - reg-names: Should contain the reg names "spifi" and "flash" 14 - - interrupts : Should contain the interrupt for the device 15 - - clocks : The clocks needed by the SPIFI controller 16 - - clock-names : Should contain the clock names "spifi" and "reg" 17 - 18 - Optional properties: 19 - - resets : phandle + reset specifier 20 - 21 - The SPI Flash must be a child of the SPIFI node and must have a 22 - compatible property as specified in bindings/mtd/jedec,spi-nor.txt 23 - 24 - Optionally it can also contain the following properties. 25 - - spi-cpol : Controller only supports mode 0 and 3 so either 26 - both spi-cpol and spi-cpha should be present or 27 - none of them 28 - - spi-cpha : See above 29 - - spi-rx-bus-width : Used to select how many pins that are used 30 - for input on the controller 31 - 32 - See bindings/spi/spi-bus.txt for more information. 
33 - 34 - Example: 35 - spifi: spifi@40003000 { 36 - compatible = "nxp,lpc1773-spifi"; 37 - reg = <0x40003000 0x1000>, <0x14000000 0x4000000>; 38 - reg-names = "spifi", "flash"; 39 - interrupts = <30>; 40 - clocks = <&ccu1 CLK_SPIFI>, <&ccu1 CLK_CPU_SPIFI>; 41 - clock-names = "spifi", "reg"; 42 - resets = <&rgu 53>; 43 - 44 - flash@0 { 45 - compatible = "jedec,spi-nor"; 46 - spi-cpol; 47 - spi-cpha; 48 - spi-rx-bus-width = <4>; 49 - #address-cells = <1>; 50 - #size-cells = <1>; 51 - 52 - partition@0 { 53 - label = "data"; 54 - reg = <0 0x200000>; 55 - }; 56 - }; 57 - }; 58 -
+1 -1
drivers/mtd/ftl.c
··· 344 344 return -ENOMEM; 345 345 346 346 erase->addr = xfer->Offset; 347 - erase->len = 1 << part->header.EraseUnitSize; 347 + erase->len = 1ULL << part->header.EraseUnitSize; 348 348 349 349 ret = mtd_erase(part->mbd.mtd, erase); 350 350 if (!ret) {
+1 -1
drivers/mtd/nand/raw/atmel/nand-controller.c
··· 373 373 dma_cookie_t cookie; 374 374 375 375 buf_dma = dma_map_single(nc->dev, buf, len, dir); 376 - if (dma_mapping_error(nc->dev, dev_dma)) { 376 + if (dma_mapping_error(nc->dev, buf_dma)) { 377 377 dev_err(nc->dev, 378 378 "Failed to prepare a buffer for DMA access\n"); 379 379 goto err;
+6
drivers/mtd/nand/raw/atmel/pmecc.c
··· 143 143 int nstrengths; 144 144 int el_offset; 145 145 bool correct_erased_chunks; 146 + bool clk_ctrl; 146 147 }; 147 148 148 149 struct atmel_pmecc { ··· 844 843 if (IS_ERR(pmecc->regs.errloc)) 845 844 return ERR_CAST(pmecc->regs.errloc); 846 845 846 + /* pmecc data setup time */ 847 + if (caps->clk_ctrl) 848 + writel(PMECC_CLK_133MHZ, pmecc->regs.base + ATMEL_PMECC_CLK); 849 + 847 850 /* Disable all interrupts before registering the PMECC handler. */ 848 851 writel(0xffffffff, pmecc->regs.base + ATMEL_PMECC_IDR); 849 852 atmel_pmecc_reset(pmecc); ··· 901 896 .strengths = atmel_pmecc_strengths, 902 897 .nstrengths = 5, 903 898 .el_offset = 0x8c, 899 + .clk_ctrl = true, 904 900 }; 905 901 906 902 static struct atmel_pmecc_caps sama5d4_caps = {
+43 -19
drivers/mtd/nand/raw/brcmnand/brcmnand.c
··· 29 29 #include <linux/static_key.h> 30 30 #include <linux/list.h> 31 31 #include <linux/log2.h> 32 + #include <linux/string_choices.h> 32 33 33 34 #include "brcmnand.h" 34 35 ··· 360 359 BRCMNAND_CORR_THRESHOLD_EXT, 361 360 BRCMNAND_UNCORR_COUNT, 362 361 BRCMNAND_CORR_COUNT, 362 + BRCMNAND_READ_ERROR_COUNT, 363 363 BRCMNAND_CORR_EXT_ADDR, 364 364 BRCMNAND_CORR_ADDR, 365 365 BRCMNAND_UNCORR_EXT_ADDR, ··· 391 389 [BRCMNAND_CORR_THRESHOLD_EXT] = 0, 392 390 [BRCMNAND_UNCORR_COUNT] = 0, 393 391 [BRCMNAND_CORR_COUNT] = 0, 392 + [BRCMNAND_READ_ERROR_COUNT] = 0, 394 393 [BRCMNAND_CORR_EXT_ADDR] = 0x60, 395 394 [BRCMNAND_CORR_ADDR] = 0x64, 396 395 [BRCMNAND_UNCORR_EXT_ADDR] = 0x68, ··· 422 419 [BRCMNAND_CORR_THRESHOLD_EXT] = 0, 423 420 [BRCMNAND_UNCORR_COUNT] = 0, 424 421 [BRCMNAND_CORR_COUNT] = 0, 422 + [BRCMNAND_READ_ERROR_COUNT] = 0x80, 425 423 [BRCMNAND_CORR_EXT_ADDR] = 0x70, 426 424 [BRCMNAND_CORR_ADDR] = 0x74, 427 425 [BRCMNAND_UNCORR_EXT_ADDR] = 0x78, ··· 453 449 [BRCMNAND_CORR_THRESHOLD_EXT] = 0, 454 450 [BRCMNAND_UNCORR_COUNT] = 0, 455 451 [BRCMNAND_CORR_COUNT] = 0, 452 + [BRCMNAND_READ_ERROR_COUNT] = 0x80, 456 453 [BRCMNAND_CORR_EXT_ADDR] = 0x70, 457 454 [BRCMNAND_CORR_ADDR] = 0x74, 458 455 [BRCMNAND_UNCORR_EXT_ADDR] = 0x78, ··· 484 479 [BRCMNAND_CORR_THRESHOLD_EXT] = 0xc4, 485 480 [BRCMNAND_UNCORR_COUNT] = 0xfc, 486 481 [BRCMNAND_CORR_COUNT] = 0x100, 482 + [BRCMNAND_READ_ERROR_COUNT] = 0x104, 487 483 [BRCMNAND_CORR_EXT_ADDR] = 0x10c, 488 484 [BRCMNAND_CORR_ADDR] = 0x110, 489 485 [BRCMNAND_UNCORR_EXT_ADDR] = 0x114, ··· 515 509 [BRCMNAND_CORR_THRESHOLD_EXT] = 0xe0, 516 510 [BRCMNAND_UNCORR_COUNT] = 0xfc, 517 511 [BRCMNAND_CORR_COUNT] = 0x100, 512 + [BRCMNAND_READ_ERROR_COUNT] = 0x104, 518 513 [BRCMNAND_CORR_EXT_ADDR] = 0x10c, 519 514 [BRCMNAND_CORR_ADDR] = 0x110, 520 515 [BRCMNAND_UNCORR_EXT_ADDR] = 0x114, ··· 546 539 [BRCMNAND_CORR_THRESHOLD_EXT] = 0xe0, 547 540 [BRCMNAND_UNCORR_COUNT] = 0xfc, 548 541 [BRCMNAND_CORR_COUNT] = 0x100, 542 + 
[BRCMNAND_READ_ERROR_COUNT] = 0x104, 549 543 [BRCMNAND_CORR_EXT_ADDR] = 0x10c, 550 544 [BRCMNAND_CORR_ADDR] = 0x110, 551 545 [BRCMNAND_UNCORR_EXT_ADDR] = 0x114, ··· 967 959 return offs_cs0 + cs * ctrl->reg_spacing + cs_offs; 968 960 } 969 961 970 - static inline u32 brcmnand_count_corrected(struct brcmnand_controller *ctrl) 962 + static inline u32 brcmnand_corr_total(struct brcmnand_controller *ctrl) 971 963 { 972 - if (ctrl->nand_version < 0x0600) 973 - return 1; 974 - return brcmnand_read_reg(ctrl, BRCMNAND_CORR_COUNT); 964 + if (ctrl->nand_version < 0x400) 965 + return 0; 966 + return brcmnand_read_reg(ctrl, BRCMNAND_READ_ERROR_COUNT); 975 967 } 976 968 977 969 static void brcmnand_wr_corr_thresh(struct brcmnand_host *host, u8 val) ··· 1470 1462 int ret; 1471 1463 1472 1464 if (old_wp != wp) { 1473 - dev_dbg(ctrl->dev, "WP %s\n", wp ? "on" : "off"); 1465 + dev_dbg(ctrl->dev, "WP %s\n", str_on_off(wp)); 1474 1466 old_wp = wp; 1475 1467 } 1476 1468 ··· 1500 1492 if (ret) 1501 1493 dev_err_ratelimited(&host->pdev->dev, 1502 1494 "nand #WP expected %s\n", 1503 - wp ? "on" : "off"); 1495 + str_on_off(wp)); 1504 1496 } 1505 1497 } 1506 1498 ··· 1877 1869 unsigned int trans = len >> FC_SHIFT; 1878 1870 dma_addr_t pa; 1879 1871 1880 - dev_dbg(ctrl->dev, "EDU %s %p:%p\n", ((edu_cmd == EDU_CMD_READ) ? 
1881 - "read" : "write"), buf, oob); 1872 + dev_dbg(ctrl->dev, "EDU %s %p:%p\n", 1873 + str_read_write(edu_cmd == EDU_CMD_READ), buf, oob); 1882 1874 1883 1875 pa = dma_map_single(ctrl->dev, buf, len, dir); 1884 1876 if (dma_mapping_error(ctrl->dev, pa)) { ··· 2074 2066 */ 2075 2067 static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip, 2076 2068 u64 addr, unsigned int trans, u32 *buf, 2077 - u8 *oob, u64 *err_addr) 2069 + u8 *oob, u64 *err_addr, unsigned int *corr) 2078 2070 { 2079 2071 struct brcmnand_host *host = nand_get_controller_data(chip); 2080 2072 struct brcmnand_controller *ctrl = host->ctrl; 2081 2073 int i, ret = 0; 2074 + unsigned int prev_corr; 2075 + 2076 + if (corr) 2077 + *corr = 0; 2082 2078 2083 2079 brcmnand_clear_ecc_addr(ctrl); 2084 2080 2085 2081 for (i = 0; i < trans; i++, addr += FC_BYTES) { 2082 + prev_corr = brcmnand_corr_total(ctrl); 2086 2083 brcmnand_set_cmd_addr(mtd, addr); 2087 2084 /* SPARE_AREA_READ does not use ECC, so just use PAGE_READ */ 2088 2085 brcmnand_send_cmd(host, CMD_PAGE_READ); ··· 2112 2099 2113 2100 if (*err_addr) 2114 2101 ret = -EBADMSG; 2115 - } 2102 + else { 2103 + *err_addr = brcmnand_get_correcc_addr(ctrl); 2116 2104 2117 - if (!ret) { 2118 - *err_addr = brcmnand_get_correcc_addr(ctrl); 2105 + if (*err_addr) { 2106 + ret = -EUCLEAN; 2119 2107 2120 - if (*err_addr) 2121 - ret = -EUCLEAN; 2108 + if (corr && (brcmnand_corr_total(ctrl) - prev_corr) > *corr) 2109 + *corr = brcmnand_corr_total(ctrl) - prev_corr; 2110 + } 2111 + } 2122 2112 } 2123 2113 } 2124 2114 ··· 2189 2173 int err; 2190 2174 bool retry = true; 2191 2175 bool edu_err = false; 2176 + unsigned int corrected = 0; /* max corrected bits per subpage */ 2177 + unsigned int prev_tot = brcmnand_corr_total(ctrl); 2192 2178 2193 2179 dev_dbg(ctrl->dev, "read %llx -> %p\n", (unsigned long long)addr, buf); 2194 2180 ··· 2218 2200 memset(oob, 0x99, mtd->oobsize); 2219 2201 2220 2202 err = brcmnand_read_by_pio(mtd, chip, addr, trans, buf, 
2221 - oob, &err_addr); 2203 + oob, &err_addr, &corrected); 2222 2204 } 2205 + 2206 + mtd->ecc_stats.corrected += brcmnand_corr_total(ctrl) - prev_tot; 2223 2207 2224 2208 if (mtd_is_eccerr(err)) { 2225 2209 /* ··· 2260 2240 } 2261 2241 2262 2242 if (mtd_is_bitflip(err)) { 2263 - unsigned int corrected = brcmnand_count_corrected(ctrl); 2264 - 2265 2243 /* in case of EDU correctable error we read again using PIO */ 2266 2244 if (edu_err) 2267 2245 err = brcmnand_read_by_pio(mtd, chip, addr, trans, buf, 2268 - oob, &err_addr); 2246 + oob, &err_addr, &corrected); 2269 2247 2270 2248 dev_dbg(ctrl->dev, "corrected error at 0x%llx\n", 2271 2249 (unsigned long long)err_addr); 2272 - mtd->ecc_stats.corrected += corrected; 2250 + /* 2251 + * if flipped bits accumulator is not supported but we detected 2252 + * a correction, increase stat by 1 to match previous behavior. 2253 + */ 2254 + if (brcmnand_corr_total(ctrl) == prev_tot) 2255 + mtd->ecc_stats.corrected++; 2256 + 2273 2257 /* Always exceed the software-imposed threshold */ 2274 2258 return max(mtd->bitflip_threshold, corrected); 2275 2259 }
+2
drivers/mtd/nand/raw/fsmc_nand.c
··· 503 503 504 504 dma_dev = chan->device; 505 505 dma_addr = dma_map_single(dma_dev->dev, buffer, len, direction); 506 + if (dma_mapping_error(dma_dev->dev, dma_addr)) 507 + return -EINVAL; 506 508 507 509 if (direction == DMA_TO_DEVICE) { 508 510 dma_src = dma_addr;
+2 -2
drivers/mtd/nand/raw/nand_hynix.c
··· 377 377 378 378 /* 379 379 * We only support read-retry for 1xnm NANDs, and those NANDs all 380 - * expose a valid JEDEC ID. 380 + * expose a valid JEDEC ID. SLC NANDs don't require read-retry. 381 381 */ 382 - if (valid_jedecid) { 382 + if (valid_jedecid && nanddev_bits_per_cell(&chip->base) > 1) { 383 383 u8 nand_tech = chip->id.data[5] >> 4; 384 384 385 385 /* 1xnm technology */
+6
drivers/mtd/nand/raw/renesas-nand-controller.c
··· 426 426 /* Configure DMA */ 427 427 dma_addr = dma_map_single(rnandc->dev, rnandc->buf, mtd->writesize, 428 428 DMA_FROM_DEVICE); 429 + if (dma_mapping_error(rnandc->dev, dma_addr)) 430 + return -ENOMEM; 431 + 429 432 writel(dma_addr, rnandc->regs + DMA_ADDR_LOW_REG); 430 433 writel(mtd->writesize, rnandc->regs + DMA_CNT_REG); 431 434 writel(DMA_TLVL_MAX, rnandc->regs + DMA_TLVL_REG); ··· 609 606 /* Configure DMA */ 610 607 dma_addr = dma_map_single(rnandc->dev, (void *)rnandc->buf, mtd->writesize, 611 608 DMA_TO_DEVICE); 609 + if (dma_mapping_error(rnandc->dev, dma_addr)) 610 + return -ENOMEM; 611 + 612 612 writel(dma_addr, rnandc->regs + DMA_ADDR_LOW_REG); 613 613 writel(mtd->writesize, rnandc->regs + DMA_CNT_REG); 614 614 writel(DMA_TLVL_MAX, rnandc->regs + DMA_TLVL_REG);
+15
drivers/mtd/nand/raw/rockchip-nand-controller.c
··· 656 656 657 657 dma_data = dma_map_single(nfc->dev, (void *)nfc->page_buf, 658 658 mtd->writesize, DMA_TO_DEVICE); 659 + if (dma_mapping_error(nfc->dev, dma_data)) 660 + return -ENOMEM; 661 + 659 662 dma_oob = dma_map_single(nfc->dev, nfc->oob_buf, 660 663 ecc->steps * oob_step, 661 664 DMA_TO_DEVICE); 665 + if (dma_mapping_error(nfc->dev, dma_oob)) { 666 + dma_unmap_single(nfc->dev, dma_data, mtd->writesize, DMA_TO_DEVICE); 667 + return -ENOMEM; 668 + } 662 669 663 670 reinit_completion(&nfc->done); 664 671 writel(INT_DMA, nfc->regs + nfc->cfg->int_en_off); ··· 779 772 dma_data = dma_map_single(nfc->dev, nfc->page_buf, 780 773 mtd->writesize, 781 774 DMA_FROM_DEVICE); 775 + if (dma_mapping_error(nfc->dev, dma_data)) 776 + return -ENOMEM; 777 + 782 778 dma_oob = dma_map_single(nfc->dev, nfc->oob_buf, 783 779 ecc->steps * oob_step, 784 780 DMA_FROM_DEVICE); 781 + if (dma_mapping_error(nfc->dev, dma_oob)) { 782 + dma_unmap_single(nfc->dev, dma_data, mtd->writesize, 783 + DMA_FROM_DEVICE); 784 + return -ENOMEM; 785 + } 785 786 786 787 /* 787 788 * The first blocks (4, 8 or 16 depending on the device)
+6 -6
drivers/mtd/nand/spi/alliancememory.c
··· 17 17 #define AM_STATUS_ECC_MAX_CORRECTED (3 << 4) 18 18 19 19 static SPINAND_OP_VARIANTS(read_cache_variants, 20 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0), 21 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 22 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0), 23 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 24 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 25 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 20 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0, 0), 21 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 22 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 0), 23 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 24 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 25 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 26 26 27 27 static SPINAND_OP_VARIANTS(write_cache_variants, 28 28 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+3 -3
drivers/mtd/nand/spi/ato.c
··· 14 14 15 15 16 16 static SPINAND_OP_VARIANTS(read_cache_variants, 17 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 18 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 19 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 17 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 18 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 19 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 20 20 21 21 static SPINAND_OP_VARIANTS(write_cache_variants, 22 22 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+21 -6
drivers/mtd/nand/spi/core.c
··· 20 20 #include <linux/spi/spi.h> 21 21 #include <linux/spi/spi-mem.h> 22 22 23 - static int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val) 23 + int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val) 24 24 { 25 25 struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(reg, 26 26 spinand->scratchbuf); ··· 360 360 engine_conf->status = status; 361 361 } 362 362 363 - static int spinand_write_enable_op(struct spinand_device *spinand) 363 + int spinand_write_enable_op(struct spinand_device *spinand) 364 364 { 365 365 struct spi_mem_op op = SPINAND_WR_EN_DIS_1S_0_0_OP(true); 366 366 ··· 688 688 SPINAND_WRITE_INITIAL_DELAY_US, 689 689 SPINAND_WRITE_POLL_DELAY_US, 690 690 &status); 691 - if (!ret && (status & STATUS_PROG_FAILED)) 691 + if (ret) 692 + return ret; 693 + 694 + if (status & STATUS_PROG_FAILED) 692 695 return -EIO; 693 696 694 697 return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req); ··· 1253 1250 1254 1251 static int spinand_manufacturer_init(struct spinand_device *spinand) 1255 1252 { 1256 - if (spinand->manufacturer->ops->init) 1257 - return spinand->manufacturer->ops->init(spinand); 1253 + int ret; 1254 + 1255 + if (spinand->manufacturer->ops->init) { 1256 + ret = spinand->manufacturer->ops->init(spinand); 1257 + if (ret) 1258 + return ret; 1259 + } 1260 + 1261 + if (spinand->configure_chip) { 1262 + ret = spinand->configure_chip(spinand); 1263 + if (ret) 1264 + return ret; 1265 + } 1258 1266 1259 1267 return 0; 1260 1268 } ··· 1308 1294 1309 1295 nbytes -= op.data.nbytes; 1310 1296 1311 - op_duration_ns += spi_mem_calc_op_duration(&op); 1297 + op_duration_ns += spi_mem_calc_op_duration(spinand->spimem, &op); 1312 1298 } 1313 1299 1314 1300 if (!nbytes && op_duration_ns < best_op_duration_ns) { ··· 1360 1346 spinand->flags = table[i].flags; 1361 1347 spinand->id.len = 1 + table[i].devid.len; 1362 1348 spinand->select_target = table[i].select_target; 1349 + spinand->configure_chip = 
table[i].configure_chip; 1363 1350 spinand->set_cont_read = table[i].set_cont_read; 1364 1351 spinand->fact_otp = &table[i].fact_otp; 1365 1352 spinand->user_otp = &table[i].user_otp;
+4 -4
drivers/mtd/nand/spi/esmt.c
··· 18 18 (CFG_OTP_ENABLE | ESMT_F50L1G41LB_CFG_OTP_PROTECT) 19 19 20 20 static SPINAND_OP_VARIANTS(read_cache_variants, 21 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 22 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 23 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 24 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 21 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 22 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 23 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 24 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 25 25 26 26 static SPINAND_OP_VARIANTS(write_cache_variants, 27 27 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+4 -4
drivers/mtd/nand/spi/foresee.c
··· 12 12 #define SPINAND_MFR_FORESEE 0xCD 13 13 14 14 static SPINAND_OP_VARIANTS(read_cache_variants, 15 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 16 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 17 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 18 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 15 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 16 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 17 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 18 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 19 19 20 20 static SPINAND_OP_VARIANTS(write_cache_variants, 21 21 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+44 -24
drivers/mtd/nand/spi/gigadevice.c
··· 24 24 #define GD5FXGQ4UXFXXG_STATUS_ECC_UNCOR_ERROR (7 << 4) 25 25 26 26 static SPINAND_OP_VARIANTS(read_cache_variants, 27 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0), 28 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 29 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0), 30 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 31 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 32 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 27 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0, 0), 28 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 29 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 0), 30 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 31 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 32 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 33 33 34 34 static SPINAND_OP_VARIANTS(read_cache_variants_f, 35 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0), 36 - SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_4S_OP(0, 1, NULL, 0), 37 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0), 38 - SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_2S_OP(0, 1, NULL, 0), 39 - SPINAND_PAGE_READ_FROM_CACHE_FAST_3A_1S_1S_1S_OP(0, 1, NULL, 0), 40 - SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_1S_OP(0, 0, NULL, 0)); 35 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0, 0), 36 + SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_4S_OP(0, 1, NULL, 0, 0), 37 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 0), 38 + SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_2S_OP(0, 1, NULL, 0, 0), 39 + SPINAND_PAGE_READ_FROM_CACHE_FAST_3A_1S_1S_1S_OP(0, 1, NULL, 0, 0), 40 + SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_1S_OP(0, 0, NULL, 0, 0)); 41 41 42 42 static SPINAND_OP_VARIANTS(read_cache_variants_1gq5, 43 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0), 44 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 45 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, 
NULL, 0), 46 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 47 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 48 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 43 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 0), 44 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 45 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 0), 46 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 47 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 48 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 49 49 50 50 static SPINAND_OP_VARIANTS(read_cache_variants_2gq5, 51 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0), 52 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 53 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0), 54 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 55 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 56 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 51 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0, 0), 52 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 53 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0, 0), 54 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 55 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 56 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 57 57 58 58 static SPINAND_OP_VARIANTS(write_cache_variants, 59 59 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0), ··· 527 527 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xc9), 528 528 NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1), 529 529 NAND_ECCREQ(4, 512), 530 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants_1gq5, 531 + &write_cache_variants, 532 + &update_cache_variants), 533 + SPINAND_HAS_QE_BIT, 534 + SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout, 535 + gd5fxgq4uexxg_ecc_get_status)), 536 + SPINAND_INFO("GD5F1GM9UExxG", 537 + 
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x91, 0x01), 538 + NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1), 539 + NAND_ECCREQ(8, 512), 540 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants_1gq5, 541 + &write_cache_variants, 542 + &update_cache_variants), 543 + SPINAND_HAS_QE_BIT, 544 + SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout, 545 + gd5fxgq4uexxg_ecc_get_status)), 546 + SPINAND_INFO("GD5F1GM9RExxG", 547 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x81, 0x01), 548 + NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1), 549 + NAND_ECCREQ(8, 512), 530 550 SPINAND_INFO_OP_VARIANTS(&read_cache_variants_1gq5, 531 551 &write_cache_variants, 532 552 &update_cache_variants),
+4 -4
drivers/mtd/nand/spi/macronix.c
··· 28 28 }; 29 29 30 30 static SPINAND_OP_VARIANTS(read_cache_variants, 31 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 32 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 33 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 34 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 31 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 32 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 33 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 34 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 35 35 36 36 static SPINAND_OP_VARIANTS(write_cache_variants, 37 37 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+10 -10
drivers/mtd/nand/spi/micron.c
··· 35 35 (CFG_OTP_ENABLE | MICRON_MT29F2G01ABAGD_CFG_OTP_STATE) 36 36 37 37 static SPINAND_OP_VARIANTS(quadio_read_cache_variants, 38 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0), 39 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 40 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0), 41 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 42 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 43 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 38 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 0), 39 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 40 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 0), 41 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 42 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 43 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 44 44 45 45 static SPINAND_OP_VARIANTS(x4_write_cache_variants, 46 46 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0), ··· 52 52 53 53 /* Micron MT29F2G01AAAED Device */ 54 54 static SPINAND_OP_VARIANTS(x4_read_cache_variants, 55 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 56 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 57 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 58 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 55 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 56 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 57 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 58 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 59 59 60 60 static SPINAND_OP_VARIANTS(x1_write_cache_variants, 61 61 SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
+6 -6
drivers/mtd/nand/spi/paragon.c
··· 22 22 23 23 24 24 static SPINAND_OP_VARIANTS(read_cache_variants, 25 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0), 26 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 27 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0), 28 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 29 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 30 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 25 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 0), 26 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 27 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 0), 28 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 29 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 30 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 31 31 32 32 static SPINAND_OP_VARIANTS(write_cache_variants, 33 33 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+6 -6
drivers/mtd/nand/spi/skyhigh.c
··· 17 17 #define SKYHIGH_CONFIG_PROTECT_EN BIT(1) 18 18 19 19 static SPINAND_OP_VARIANTS(read_cache_variants, 20 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0), 21 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 22 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0), 23 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 24 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 25 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 20 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0, 0), 21 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 22 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0, 0), 23 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 24 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 25 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 26 26 27 27 static SPINAND_OP_VARIANTS(write_cache_variants, 28 28 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+4 -4
drivers/mtd/nand/spi/toshiba.c
··· 15 15 #define TOSH_STATUS_ECC_HAS_BITFLIPS_T (3 << 4) 16 16 17 17 static SPINAND_OP_VARIANTS(read_cache_variants, 18 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 19 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 20 - SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 21 - SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); 18 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0), 19 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0), 20 + SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0), 21 + SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0)); 22 22 23 23 static SPINAND_OP_VARIANTS(write_cache_x4_variants, 24 24 SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+147 -16
drivers/mtd/nand/spi/winbond.c
···
 #include <linux/kernel.h>
 #include <linux/mtd/spinand.h>
 #include <linux/units.h>
+#include <linux/delay.h>

 #define SPINAND_MFR_WINBOND		0xEF
···
 #define W25N04KV_STATUS_ECC_5_8_BITFLIPS	(3 << 4)

+#define W25N0XJW_SR4			0xD0
+#define W25N0XJW_SR4_HS			BIT(2)
+
+#define W35N01JW_VCR_IO_MODE			0x00
+#define W35N01JW_VCR_IO_MODE_SINGLE_SDR	0xFF
+#define W35N01JW_VCR_IO_MODE_OCTAL_SDR		0xDF
+#define W35N01JW_VCR_IO_MODE_OCTAL_DDR_DS	0xE7
+#define W35N01JW_VCR_IO_MODE_OCTAL_DDR		0xC7
+#define W35N01JW_VCR_DUMMY_CLOCK_REG		0x01
+
 /*
  * "X2" in the core is equivalent to "dual output" in the datasheets,
  * "X4" in the core is equivalent to "quad output" in the datasheets.
+ * Quad and octal capable chips feature an absolute maximum frequency of 166MHz.
  */

 static SPINAND_OP_VARIANTS(read_cache_octal_variants,
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_8D_OP(0, 3, NULL, 0, 120 * HZ_PER_MHZ),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_8D_OP(0, 2, NULL, 0, 105 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 20, NULL, 0, 0),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 16, NULL, 0, 162 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 12, NULL, 0, 124 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 8, NULL, 0, 86 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_8S_OP(0, 2, NULL, 0, 0),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_8S_OP(0, 1, NULL, 0, 133 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0));

 static SPINAND_OP_VARIANTS(write_cache_octal_variants,
 		SPINAND_PROG_LOAD_1S_8S_8S_OP(true, 0, NULL, 0),
···
 static SPINAND_OP_VARIANTS(read_cache_dual_quad_dtr_variants,
 		SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0, 0),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 104 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0, 0),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 104 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0),
 		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 54 * HZ_PER_MHZ));

 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0));

 static SPINAND_OP_VARIANTS(write_cache_variants,
 		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
···
 		return -EINVAL;
 }

+static int w25n0xjw_hs_cfg(struct spinand_device *spinand)
+{
+	const struct spi_mem_op *op;
+	bool hs;
+	u8 sr4;
+	int ret;
+
+	op = spinand->op_templates.read_cache;
+	if (op->cmd.dtr || op->addr.dtr || op->dummy.dtr || op->data.dtr)
+		hs = false;
+	else if (op->cmd.buswidth == 1 && op->addr.buswidth == 1 &&
+		 op->dummy.buswidth == 1 && op->data.buswidth == 1)
+		hs = false;
+	else if (!op->max_freq)
+		hs = true;
+	else
+		hs = false;
+
+	ret = spinand_read_reg_op(spinand, W25N0XJW_SR4, &sr4);
+	if (ret)
+		return ret;
+
+	if (hs)
+		sr4 |= W25N0XJW_SR4_HS;
+	else
+		sr4 &= ~W25N0XJW_SR4_HS;
+
+	ret = spinand_write_reg_op(spinand, W25N0XJW_SR4, sr4);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int w35n0xjw_write_vcr(struct spinand_device *spinand, u8 reg, u8 val)
+{
+	struct spi_mem_op op =
+		SPI_MEM_OP(SPI_MEM_OP_CMD(0x81, 1),
+			   SPI_MEM_OP_ADDR(3, reg, 1),
+			   SPI_MEM_OP_NO_DUMMY,
+			   SPI_MEM_OP_DATA_OUT(1, spinand->scratchbuf, 1));
+	int ret;
+
+	*spinand->scratchbuf = val;
+
+	ret = spinand_write_enable_op(spinand);
+	if (ret)
+		return ret;
+
+	ret = spi_mem_exec_op(spinand->spimem, &op);
+	if (ret)
+		return ret;
+
+	/*
+	 * Write VCR operation doesn't set the busy bit in SR, which means we
+	 * cannot perform a status poll. Minimum time of 50ns is needed to
+	 * complete the write.
+	 */
+	ndelay(50);
+
+	return 0;
+}
+
+static int w35n0xjw_vcr_cfg(struct spinand_device *spinand)
+{
+	const struct spi_mem_op *op;
+	unsigned int dummy_cycles;
+	bool dtr, single;
+	u8 io_mode;
+	int ret;
+
+	op = spinand->op_templates.read_cache;
+
+	single = (op->cmd.buswidth == 1 && op->addr.buswidth == 1 && op->data.buswidth == 1);
+	dtr = (op->cmd.dtr || op->addr.dtr || op->data.dtr);
+	if (single && !dtr)
+		io_mode = W35N01JW_VCR_IO_MODE_SINGLE_SDR;
+	else if (!single && !dtr)
+		io_mode = W35N01JW_VCR_IO_MODE_OCTAL_SDR;
+	else if (!single && dtr)
+		io_mode = W35N01JW_VCR_IO_MODE_OCTAL_DDR;
+	else
+		return -EINVAL;
+
+	ret = w35n0xjw_write_vcr(spinand, W35N01JW_VCR_IO_MODE, io_mode);
+	if (ret)
+		return ret;
+
+	dummy_cycles = ((op->dummy.nbytes * 8) / op->dummy.buswidth) / (op->dummy.dtr ? 2 : 1);
+	switch (dummy_cycles) {
+	case 8:
+	case 12:
+	case 16:
+	case 20:
+	case 24:
+	case 28:
+		break;
+	default:
+		return -EINVAL;
+	}
+	ret = w35n0xjw_write_vcr(spinand, W35N01JW_VCR_DUMMY_CLOCK_REG, dummy_cycles);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 static const struct spinand_info winbond_spinand_table[] = {
 	/* 512M-bit densities */
 	SPINAND_INFO("W25N512GW", /* 1.8V */
···
 					      &write_cache_variants,
 					      &update_cache_variants),
 		     0,
-		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL),
+		     SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
 	SPINAND_INFO("W25N01KV", /* 3.3V */
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xae, 0x21),
 		     NAND_MEMORG(1, 2048, 96, 64, 1024, 20, 1, 1, 1),
···
 					      &write_cache_octal_variants,
 					      &update_cache_octal_variants),
 		     0,
-		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
+		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL),
+		     SPINAND_CONFIGURE_CHIP(w35n0xjw_vcr_cfg)),
 	SPINAND_INFO("W35N02JW", /* 1.8V */
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x22),
 		     NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 2, 1),
···
 					      &write_cache_octal_variants,
 					      &update_cache_octal_variants),
 		     0,
-		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
+		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL),
+		     SPINAND_CONFIGURE_CHIP(w35n0xjw_vcr_cfg)),
 	SPINAND_INFO("W35N04JW", /* 1.8V */
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x23),
 		     NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 4, 1),
···
 					      &write_cache_octal_variants,
 					      &update_cache_octal_variants),
 		     0,
-		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
+		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL),
+		     SPINAND_CONFIGURE_CHIP(w35n0xjw_vcr_cfg)),
 	/* 2G-bit densities */
 	SPINAND_INFO("W25M02GV", /* 2x1G-bit 3.3V */
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xab, 0x21),
···
 					      &write_cache_variants,
 					      &update_cache_variants),
 		     0,
-		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+		     SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL),
+		     SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
 	SPINAND_INFO("W25N02KV", /* 3.3V */
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xaa, 0x22),
 		     NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
+6 -6
drivers/mtd/nand/spi/xtx.c
···
 #define XT26XXXD_STATUS_ECC_UNCOR_ERROR	(2)

 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 0));

 static SPINAND_OP_VARIANTS(write_cache_variants,
 		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+21 -22
drivers/mtd/nftlcore.c
···
 	return BLOCK_NIL;
 }

+static noinline_for_stack void NFTL_move_block(struct mtd_info *mtd, loff_t src, loff_t dst)
+{
+	unsigned char movebuf[512];
+	struct nftl_oob oob;
+	size_t retlen;
+	int ret;
+
+	ret = mtd_read(mtd, src, 512, &retlen, movebuf);
+	if (ret < 0 && !mtd_is_bitflip(ret)) {
+		ret = mtd_read(mtd, src, 512, &retlen, movebuf);
+		if (ret != -EIO)
+			printk("Error went away on retry.\n");
+	}
+	memset(&oob, 0xff, sizeof(struct nftl_oob));
+	oob.b.Status = oob.b.Status1 = SECTOR_USED;
+
+	nftl_write(mtd, dst, 512, &retlen, movebuf, (char *)&oob);
+}
+
 static u16 NFTL_foldchain (struct NFTLrecord *nftl, unsigned thisVUC, unsigned pendingblock )
 {
 	struct mtd_info *mtd = nftl->mbd.mtd;
···
 	 */
 	pr_debug("Folding chain %d into unit %d\n", thisVUC, targetEUN);
 	for (block = 0; block < nftl->EraseSize / 512 ; block++) {
-		unsigned char movebuf[512];
-		int ret;
-
 		/* If it's in the target EUN already, or if it's pending write, do nothing */
 		if (BlockMap[block] == targetEUN ||
 		    (pendingblock == (thisVUC * (nftl->EraseSize / 512) + block))) {
···
 		if (BlockMap[block] == BLOCK_NIL)
 			continue;

-		ret = mtd_read(mtd,
-			       (nftl->EraseSize * BlockMap[block]) + (block * 512),
-			       512,
-			       &retlen,
-			       movebuf);
-		if (ret < 0 && !mtd_is_bitflip(ret)) {
-			ret = mtd_read(mtd,
-				       (nftl->EraseSize * BlockMap[block]) + (block * 512),
-				       512,
-				       &retlen,
-				       movebuf);
-			if (ret != -EIO)
-				printk("Error went away on retry.\n");
-		}
-		memset(&oob, 0xff, sizeof(struct nftl_oob));
-		oob.b.Status = oob.b.Status1 = SECTOR_USED;
-
-		nftl_write(nftl->mbd.mtd, (nftl->EraseSize * targetEUN) +
-			   (block * 512), 512, &retlen, movebuf, (char *)&oob);
+		NFTL_move_block(mtd, (nftl->EraseSize * BlockMap[block]) + (block * 512),
+				(nftl->EraseSize * targetEUN) + (block * 512));
 	}

 	/* add the header so that it is now a valid chain */
+4 -4
drivers/mtd/spi-nor/micron-st.c
···
 	return 0;
 }

-static struct spi_nor_fixups mt25qu512a_fixups = {
+static const struct spi_nor_fixups mt25qu512a_fixups = {
 	.post_bfpt = mt25qu512a_post_bfpt_fixup,
 };
···
 	return spi_nor_set_4byte_addr_mode(nor, true);
 }

-static struct spi_nor_fixups n25q00_fixups = {
+static const struct spi_nor_fixups n25q00_fixups = {
 	.late_init = st_nor_four_die_late_init,
 };

-static struct spi_nor_fixups mt25q01_fixups = {
+static const struct spi_nor_fixups mt25q01_fixups = {
 	.late_init = st_nor_two_die_late_init,
 };

-static struct spi_nor_fixups mt25q02_fixups = {
+static const struct spi_nor_fixups mt25q02_fixups = {
 	.late_init = st_nor_four_die_late_init,
 };
+33 -2
drivers/mtd/spi-nor/spansion.c
···
 #define SPINOR_OP_CLSR		0x30	/* Clear status register 1 */
 #define SPINOR_OP_CLPEF		0x82	/* Clear program/erase failure flags */
+#define SPINOR_OP_CYPRESS_EX4B	0xB8	/* Exit 4-byte address mode */
 #define SPINOR_OP_CYPRESS_DIE_ERASE	0x61	/* Chip (die) erase */
 #define SPINOR_OP_RD_ANY_REG	0x65	/* Read any register */
 #define SPINOR_OP_WR_ANY_REG	0x71	/* Write any register */
···
 		   SPI_MEM_OP_ADDR(naddr, addr, 0),			\
 		   SPI_MEM_OP_DUMMY(ndummy, 0),				\
 		   SPI_MEM_OP_DATA_IN(1, buf, 0))
+
+#define CYPRESS_NOR_EN4B_EX4B_OP(enable)				\
+	SPI_MEM_OP(SPI_MEM_OP_CMD(enable ? SPINOR_OP_EN4B :		\
+				  SPINOR_OP_CYPRESS_EX4B, 0),		\
+		   SPI_MEM_OP_NO_ADDR,					\
+		   SPI_MEM_OP_NO_DUMMY,					\
+		   SPI_MEM_OP_NO_DATA)

 #define SPANSION_OP(opcode)						\
 	SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 0),				\
···
 	return 0;
 }

+static int cypress_nor_set_4byte_addr_mode(struct spi_nor *nor, bool enable)
+{
+	int ret;
+	struct spi_mem_op op = CYPRESS_NOR_EN4B_EX4B_OP(enable);
+
+	spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
+
+	ret = spi_mem_exec_op(nor->spimem, &op);
+	if (ret)
+		dev_dbg(nor->dev, "error %d setting 4-byte mode\n", ret);
+
+	return ret;
+}
+
 /**
  * cypress_nor_determine_addr_mode_by_sr1() - Determine current address mode
  *					      (3 or 4-byte) by querying status
···
 	struct spi_mem_op op;
 	int ret;

+	/* Assign 4-byte address mode method that is not determined in BFPT */
+	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;
+
 	ret = cypress_nor_set_addr_mode_nbytes(nor);
 	if (ret)
 		return ret;
···
 	return 0;
 }

-static struct spi_nor_fixups s25fs256t_fixups = {
+static const struct spi_nor_fixups s25fs256t_fixups = {
 	.post_bfpt = s25fs256t_post_bfpt_fixup,
 	.post_sfdp = s25fs256t_post_sfdp_fixup,
 	.late_init = s25fs256t_late_init,
···
 			      const struct sfdp_bfpt *bfpt)
 {
 	int ret;
+
+	/* Assign 4-byte address mode method that is not determined in BFPT */
+	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;

 	ret = cypress_nor_set_addr_mode_nbytes(nor);
 	if (ret)
···
 	return 0;
 }

-static struct spi_nor_fixups s25hx_t_fixups = {
+static const struct spi_nor_fixups s25hx_t_fixups = {
 	.post_bfpt = s25hx_t_post_bfpt_fixup,
 	.post_sfdp = s25hx_t_post_sfdp_fixup,
 	.late_init = s25hx_t_late_init,
···
 				    const struct sfdp_parameter_header *bfpt_header,
 				    const struct sfdp_bfpt *bfpt)
 {
+	/* Assign 4-byte address mode method that is not determined in BFPT */
+	nor->params->set_4byte_addr_mode = cypress_nor_set_4byte_addr_mode;
+
 	return cypress_nor_set_addr_mode_nbytes(nor);
 }
+8 -11
drivers/mtd/spi-nor/swp.c
···
 static void spi_nor_get_locked_range_sr(struct spi_nor *nor, u8 sr, loff_t *ofs,
 					u64 *len)
 {
-	struct mtd_info *mtd = &nor->mtd;
 	u64 min_prot_len;
 	u8 mask = spi_nor_get_sr_bp_mask(nor);
 	u8 tb_mask = spi_nor_get_sr_tb_mask(nor);
···
 	min_prot_len = spi_nor_get_min_prot_length_sr(nor);
 	*len = min_prot_len << (bp - 1);

-	if (*len > mtd->size)
-		*len = mtd->size;
+	if (*len > nor->params->size)
+		*len = nor->params->size;

 	if (nor->flags & SNOR_F_HAS_SR_TB && sr & tb_mask)
 		*ofs = 0;
 	else
-		*ofs = mtd->size - *len;
+		*ofs = nor->params->size - *len;
 }

 /*
···
  */
 static int spi_nor_sr_lock(struct spi_nor *nor, loff_t ofs, u64 len)
 {
-	struct mtd_info *mtd = &nor->mtd;
 	u64 min_prot_len;
 	int ret, status_old, status_new;
 	u8 mask = spi_nor_get_sr_bp_mask(nor);
···
 		can_be_bottom = false;

 	/* If anything above us is unlocked, we can't use 'top' protection */
-	if (!spi_nor_is_locked_sr(nor, ofs + len, mtd->size - (ofs + len),
+	if (!spi_nor_is_locked_sr(nor, ofs + len, nor->params->size - (ofs + len),
 				  status_old))
 		can_be_top = false;
···
 	/* lock_len: length of region that should end up locked */
 	if (use_top)
-		lock_len = mtd->size - ofs;
+		lock_len = nor->params->size - ofs;
 	else
 		lock_len = ofs + len;

-	if (lock_len == mtd->size) {
+	if (lock_len == nor->params->size) {
 		val = mask;
 	} else {
 		min_prot_len = spi_nor_get_min_prot_length_sr(nor);
···
  */
 static int spi_nor_sr_unlock(struct spi_nor *nor, loff_t ofs, u64 len)
 {
-	struct mtd_info *mtd = &nor->mtd;
 	u64 min_prot_len;
 	int ret, status_old, status_new;
 	u8 mask = spi_nor_get_sr_bp_mask(nor);
···
 		can_be_top = false;

 	/* If anything above us is locked, we can't use 'bottom' protection */
-	if (!spi_nor_is_unlocked_sr(nor, ofs + len, mtd->size - (ofs + len),
+	if (!spi_nor_is_unlocked_sr(nor, ofs + len, nor->params->size - (ofs + len),
 				    status_old))
 		can_be_bottom = false;
···
 	/* lock_len: length of region that should remain locked */
 	if (use_top)
-		lock_len = mtd->size - (ofs + len);
+		lock_len = nor->params->size - (ofs + len);
 	else
 		lock_len = ofs;
+22 -5
drivers/spi/spi-mem.c
···
  * accurate, all these combinations should be rated (eg. with a time estimate)
  * and the best pick should be taken based on these calculations.
  *
- * Returns a ns estimate for the time this op would take.
+ * Returns a ns estimate for the time this op would take, except if no
+ * frequency limit has been set, in this case we return the number of
+ * cycles nevertheless to allow callers to distinguish which operation
+ * would be the fastest at iso-frequency.
  */
-u64 spi_mem_calc_op_duration(struct spi_mem_op *op)
+u64 spi_mem_calc_op_duration(struct spi_mem *mem, struct spi_mem_op *op)
 {
 	u64 ncycles = 0;
-	u32 ns_per_cycles;
+	u64 ps_per_cycles, duration;

-	ns_per_cycles = 1000000000 / op->max_freq;
+	spi_mem_adjust_op_freq(mem, op);
+
+	if (op->max_freq) {
+		ps_per_cycles = 1000000000000ULL;
+		do_div(ps_per_cycles, op->max_freq);
+	} else {
+		/* In this case, the unit is no longer a time unit */
+		ps_per_cycles = 1;
+	}
+
 	ncycles += ((op->cmd.nbytes * 8) / op->cmd.buswidth) / (op->cmd.dtr ? 2 : 1);
 	ncycles += ((op->addr.nbytes * 8) / op->addr.buswidth) / (op->addr.dtr ? 2 : 1);
···
 	ncycles += ((op->data.nbytes * 8) / op->data.buswidth) / (op->data.dtr ? 2 : 1);

-	return ncycles * ns_per_cycles;
+	/* Derive the duration in ps */
+	duration = ncycles * ps_per_cycles;
+	/* Convert into ns */
+	do_div(duration, 1000);
+
+	return duration;
 }
 EXPORT_SYMBOL_GPL(spi_mem_calc_op_duration);
+7 -6
include/linux/mtd/map.h
···
 #ifndef __LINUX_MTD_MAP_H__
 #define __LINUX_MTD_MAP_H__

-#include <linux/types.h>
-#include <linux/list.h>
-#include <linux/string.h>
 #include <linux/bug.h>
-#include <linux/kernel.h>
 #include <linux/io.h>
-
+#include <linux/ioport.h>
+#include <linux/string.h>
+#include <linux/types.h>
 #include <linux/unaligned.h>
-#include <asm/barrier.h>
+
+struct device_node;
+struct module;

 #ifdef CONFIG_MTD_MAP_BANK_WIDTH_1
 #define map_bankwidth(map) 1
···
    of living.
 */

+struct mtd_chip_driver;
 struct map_info {
 	const char *name;
 	unsigned long size;
+44 -26
include/linux/mtd/spinand.h
···
 		   SPI_MEM_OP_NO_DUMMY,					\
 		   SPI_MEM_OP_NO_DATA)

-#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(addr, ndummy, buf, len, ...) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x03, 1),				\
 		   SPI_MEM_OP_ADDR(2, addr, 1),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
 		   SPI_MEM_OP_DATA_IN(len, buf, 1),			\
-		   SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0))
+		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1),				\
-		   SPI_MEM_OP_ADDR(2, addr, 1),				\
-		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 1))
+		   SPI_MEM_OP_ADDR(2, addr, 1),				\
+		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
+		   SPI_MEM_OP_DATA_IN(len, buf, 1),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_1S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_1S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x03, 1),				\
 		   SPI_MEM_OP_ADDR(3, addr, 1),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 1))
+		   SPI_MEM_OP_DATA_IN(len, buf, 1),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_FAST_3A_1S_1S_1S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_FAST_3A_1S_1S_1S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1),				\
 		   SPI_MEM_OP_ADDR(3, addr, 1),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 1))
+		   SPI_MEM_OP_DATA_IN(len, buf, 1),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

 #define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0d, 1),				\
···
 		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 1),			\
 		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3b, 1),				\
 		   SPI_MEM_OP_ADDR(2, addr, 1),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 2))
+		   SPI_MEM_OP_DATA_IN(len, buf, 2),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_2S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_2S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3b, 1),				\
 		   SPI_MEM_OP_ADDR(3, addr, 1),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 2))
+		   SPI_MEM_OP_DATA_IN(len, buf, 2),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

 #define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3d, 1),				\
···
 		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 2),			\
 		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(addr, ndummy, buf, len, ...) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1),				\
 		   SPI_MEM_OP_ADDR(2, addr, 2),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 2),				\
 		   SPI_MEM_OP_DATA_IN(len, buf, 2),			\
-		   SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0))
+		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_2S_2S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_2S_2S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1),				\
 		   SPI_MEM_OP_ADDR(3, addr, 2),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 2),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 2))
+		   SPI_MEM_OP_DATA_IN(len, buf, 2),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

 #define SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbd, 1),				\
···
 		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 2),			\
 		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6b, 1),				\
 		   SPI_MEM_OP_ADDR(2, addr, 1),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 4))
+		   SPI_MEM_OP_DATA_IN(len, buf, 4),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_4S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_4S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6b, 1),				\
 		   SPI_MEM_OP_ADDR(3, addr, 1),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 1),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 4))
+		   SPI_MEM_OP_DATA_IN(len, buf, 4),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

 #define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6d, 1),				\
···
 		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 4),			\
 		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(addr, ndummy, buf, len, ...) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1),				\
 		   SPI_MEM_OP_ADDR(2, addr, 4),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 4),				\
 		   SPI_MEM_OP_DATA_IN(len, buf, 4),			\
-		   SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0))
+		   SPI_MEM_OP_MAX_FREQ(freq))

-#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_4S_4S_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_4S_4S_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1),				\
 		   SPI_MEM_OP_ADDR(3, addr, 4),				\
 		   SPI_MEM_OP_DUMMY(ndummy, 4),				\
-		   SPI_MEM_OP_DATA_IN(len, buf, 4))
+		   SPI_MEM_OP_DATA_IN(len, buf, 4),			\
+		   SPI_MEM_OP_MAX_FREQ(freq))

 #define SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(addr, ndummy, buf, len, freq) \
 	SPI_MEM_OP(SPI_MEM_OP_CMD(0xed, 1),				\
···
  * @op_variants.update_cache: variants of the update-cache operation
  * @select_target: function used to select a target/die. Required only for
  *		   multi-die chips
+ * @configure_chip: Align the chip configuration with the core settings
  * @set_cont_read: enable/disable continuous cached reads
  * @fact_otp: SPI NAND factory OTP info.
  * @user_otp: SPI NAND user OTP info.
···
 	} op_variants;
 	int (*select_target)(struct spinand_device *spinand,
 			     unsigned int target);
+	int (*configure_chip)(struct spinand_device *spinand);
 	int (*set_cont_read)(struct spinand_device *spinand,
 			     bool enable);
 	struct spinand_fact_otp fact_otp;
···
 #define SPINAND_SELECT_TARGET(__func)					\
 	.select_target = __func
+
+#define SPINAND_CONFIGURE_CHIP(__configure_chip)			\
+	.configure_chip = __configure_chip

 #define SPINAND_CONT_READ(__set_cont_read)				\
 	.set_cont_read = __set_cont_read
···
  *	passed in spi_mem_op be DMA-able, so we can't based the bufs on
  *	the stack
  * @manufacturer: SPI NAND manufacturer information
+ * @configure_chip: Align the chip configuration with the core settings
  * @cont_read_possible: Field filled by the core once the whole system
  *	configuration is known to tell whether continuous reads are
  *	suitable to use or not in general with this chip/configuration.
···
 	const struct spinand_manufacturer *manufacturer;
 	void *priv;

+	int (*configure_chip)(struct spinand_device *spinand);
 	bool cont_read_possible;
 	int (*set_cont_read)(struct spinand_device *spinand,
 			     bool enable);
···
 			    enum spinand_readid_method rdid_method);

 int spinand_upd_cfg(struct spinand_device *spinand, u8 mask, u8 val);
+int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val);
 int spinand_write_reg_op(struct spinand_device *spinand, u8 reg, u8 val);
+int spinand_write_enable_op(struct spinand_device *spinand);
 int spinand_select_target(struct spinand_device *spinand, unsigned int target);

 int spinand_wait(struct spinand_device *spinand, unsigned long initial_delay_us,
+1 -1
include/linux/spi/spi-mem.h
···
 int spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op);
 void spi_mem_adjust_op_freq(struct spi_mem *mem, struct spi_mem_op *op);
-u64 spi_mem_calc_op_duration(struct spi_mem_op *op);
+u64 spi_mem_calc_op_duration(struct spi_mem *mem, struct spi_mem_op *op);

 bool spi_mem_supports_op(struct spi_mem *mem,
 			 const struct spi_mem_op *op);