
Merge tag 'mtd/for-5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Richard Weinberger:
"MTD core changes:

- Dynamic partition support

- Fix deadlock in sm_ftl

- Various refcount fixes in maps, partitions and parser code

- Integer overflow fixes in mtdchar

- Support for Sercomm partitions

NAND driver changes:

- Clockrate fix for arasan

- Add ATO25D1GA support

- Double free fix for meson driver

- Fix probe/remove methods in cafe NAND

- Support unprotected spare data pages in qcom_nandc

SPI NOR core changes:

- move SECT_4K_PMC flag out of the core as it's a vendor specific
flag

- s/addr_width/addr_nbytes/g: address width means the number of IO
lines used for the address, whereas in the code it is used as the
number of address bytes.

- do not change nor->addr_nbytes at SFDP parsing time. At the SFDP
parsing time we should not change members of struct spi_nor, but
instead fill members of struct spi_nor_flash_parameters which could
later on be used by the callers.

- track flash's internal address mode so that we can use 4B opcodes
together with opcodes that don't have a 4B opcode correspondent.

SPI NOR manufacturer drivers changes:

- esmt: Rename "f25l32qa" flash name to "f25l32qa-2s".

- micron-st: Skip FSR reading if SPI controller does not support it
to allow flashes that support FSR to work even when attached to
such SPI controllers.

- spansion: Add s25hl-t/s25hs-t IDs and fixups"

* tag 'mtd/for-5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (53 commits)
mtd: core: check partition before dereference
mtd: spi-nor: fix spi_nor_spimem_setup_op() call in spi_nor_erase_{sector,chip}()
mtd: spi-nor: spansion: Add s25hl-t/s25hs-t IDs and fixups
mtd: spi-nor: spansion: Add local function to discover page size
mtd: spi-nor: core: Track flash's internal address mode
mtd: spi-nor: core: Return error code from set_4byte_addr_mode()
mtd: spi-nor: Do not change nor->addr_nbytes at SFDP parsing time
mtd: spi-nor: core: Shrink the storage size of the flash_info's addr_nbytes
mtd: spi-nor: s/addr_width/addr_nbytes
mtd: spi-nor: esmt: Use correct name of f25l32qa
mtd: spi-nor: micron-st: Skip FSR reading if SPI controller does not support it
MAINTAINERS: Use my kernel.org email
mtd: rawnand: arasan: Fix clock rate in NV-DDR
mtd: rawnand: arasan: Update NAND bus clock instead of system clock
mtd: core: introduce of support for dynamic partitions
dt-bindings: mtd: partitions: add additional example for qcom,smem-part
dt-bindings: mtd: partitions: support label/name only partition
mtd: spi-nor: move SECT_4K_PMC special handling
mtd: dataflash: Add SPI ID table
mtd: hyperbus: rpc-if: Fix RPM imbalance in probe error path
...

+1110 -252
-2
Documentation/devicetree/bindings/mtd/mxc-nand.yaml
···
  compatible = "fsl,imx27-nand";
  reg = <0xd8000000 0x1000>;
  interrupts = <29>;
- nand-bus-width = <8>;
- nand-ecc-mode = "hw";
  };
+18 -2
Documentation/devicetree/bindings/mtd/partitions/partition.yaml
···
  relative offset and size specified. Depending on partition function extra
  properties can be used.

+ A partition may be dynamically allocated by a specific parser at runtime.
+ In this specific case, a specific suffix is required to the node name.
+ Everything after 'partition-' will be used as the partition name to compare
+ with the one dynamically allocated by the specific parser.
+ If the partition contains invalid char a label can be provided that will
+ be used instead of the node name to make the comparison.
+ This is used to assign an OF node to the dynamiccally allocated partition
+ so that subsystem like NVMEM can provide an OF node and declare NVMEM cells.
+ The OF node will be assigned only if the partition label declared match the
+ one assigned by the parser at runtime.
+
  maintainers:
    - Rafał Miłecki <rafal@milecki.pl>
···
      immune to paired-pages corruptions
    type: boolean

- required:
-   - reg
+ if:
+   not:
+     required: [ reg ]
+ then:
+   properties:
+     $nodename:
+       pattern: '^partition-.*$'

  additionalProperties: true
+27
Documentation/devicetree/bindings/mtd/partitions/qcom,smem-part.yaml
···
  compatible:
    const: qcom,smem-part

+ patternProperties:
+   "^partition-[0-9a-z]+$":
+     $ref: partition.yaml#
+
  required:
    - compatible
···
    partitions {
      compatible = "qcom,smem-part";
    };
+ };
+
+ - |
+   /* Example declaring dynamic partition */
+   flash {
+     partitions {
+       compatible = "qcom,smem-part";
+
+       partition-art {
+         compatible = "nvmem-cells";
+         #address-cells = <1>;
+         #size-cells = <1>;
+         label = "0:art";
+
+         macaddr_art_0: macaddr@0 {
+           reg = <0x0 0x6>;
+         };
+
+         macaddr_art_6: macaddr@6 {
+           reg = <0x6 0x6>;
+         };
+       };
+     };
  };
+27
Documentation/devicetree/bindings/mtd/qcom,nandc.yaml
···
      - const: rx
      - const: cmd

+ - if:
+     properties:
+       compatible:
+         contains:
+           enum:
+             - qcom,ipq806x-nand
+
+   then:
+     properties:
+       qcom,boot-partitions:
+         $ref: /schemas/types.yaml#/definitions/uint32-matrix
+         items:
+           items:
+             - description: offset
+             - description: size
+         description:
+           Boot partition use a different layout where the 4 bytes of spare
+           data are not protected by ECC. Use this to declare these special
+           partitions by defining first the offset and then the size.
+
+           It's in the form of <offset1 size1 offset2 size2 offset3 ...>
+           and should be declared in ascending order.
+
+           Refer to the ipq8064 example on how to use this special binding.
+
  required:
    - compatible
    - reg
···
    nand-ecc-strength = <4>;
    nand-bus-width = <8>;
+
+   qcom,boot-partitions = <0x0 0x58a0000>;

    partitions {
      compatible = "fixed-partitions";
+1 -1
MAINTAINERS
···
  SPI NOR SUBSYSTEM
  M: Tudor Ambarus <tudor.ambarus@microchip.com>
- M: Pratyush Yadav <p.yadav@ti.com>
+ M: Pratyush Yadav <pratyush@kernel.org>
  R: Michael Walle <michael@walle.cc>
  L: linux-mtd@lists.infradead.org
  S: Maintained
+8
drivers/mtd/devices/mtd_dataflash.c
···
  MODULE_DEVICE_TABLE(of, dataflash_dt_ids);
  #endif

+ static const struct spi_device_id dataflash_spi_ids[] = {
+     { .name = "at45", },
+     { .name = "dataflash", },
+     { /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(spi, dataflash_spi_ids);
+
  /* ......................................................................... */
···
      .probe = dataflash_probe,
      .remove = dataflash_remove,
+     .id_table = dataflash_spi_ids,

      /* FIXME: investigate suspend and resume... */
  };
+3 -1
drivers/mtd/devices/powernv_flash.c
···
      struct powernv_flash *data = dev_get_drvdata(&(pdev->dev));

      /* All resources should be freed automatically */
-     return mtd_device_unregister(&(data->mtd));
+     WARN_ON(mtd_device_unregister(&data->mtd));
+
+     return 0;
  }

  static const struct of_device_id powernv_flash_match[] = {
+2 -8
drivers/mtd/devices/spear_smi.c
···
  {
      struct spear_smi *dev;
      struct spear_snor_flash *flash;
-     int ret, i;
+     int i;

      dev = platform_get_drvdata(pdev);
-     if (!dev) {
-         dev_err(&pdev->dev, "dev is null\n");
-         return -ENODEV;
-     }

      /* clean up for all nor flash */
      for (i = 0; i < dev->num_flashes; i++) {
···
              continue;

          /* clean up mtd stuff */
-         ret = mtd_device_unregister(&flash->mtd);
-         if (ret)
-             dev_err(&pdev->dev, "error removing mtd\n");
+         WARN_ON(mtd_device_unregister(&flash->mtd));
      }

      clk_disable_unprepare(dev->clk);
+12 -11
drivers/mtd/devices/st_spi_fsm.c
···
      /*
       * Configure READ/WRITE/ERASE sequences according to platform and
       * device flags.
       */
-     if (info->config) {
+     if (info->config)
          ret = info->config(fsm);
-         if (ret)
-             goto err_clk_unprepare;
-     } else {
+     else
          ret = stfsm_prepare_rwe_seqs_default(fsm);
-         if (ret)
-             goto err_clk_unprepare;
-     }
+     if (ret)
+         goto err_clk_unprepare;

      fsm->mtd.name = info->name;
      fsm->mtd.dev.parent = &pdev->dev;
···
          (long long)fsm->mtd.size, (long long)(fsm->mtd.size >> 20),
          fsm->mtd.erasesize, (fsm->mtd.erasesize >> 10));

-     return mtd_device_register(&fsm->mtd, NULL, 0);
-
+     ret = mtd_device_register(&fsm->mtd, NULL, 0);
+     if (ret) {
  err_clk_unprepare:
-     clk_disable_unprepare(fsm->clk);
+         clk_disable_unprepare(fsm->clk);
+     }
+
      return ret;
  }
···
  {
      struct stfsm *fsm = platform_get_drvdata(pdev);

+     WARN_ON(mtd_device_unregister(&fsm->mtd));
+
      clk_disable_unprepare(fsm->clk);

-     return mtd_device_unregister(&fsm->mtd);
+     return 0;
  }

  #ifdef CONFIG_PM_SLEEP
+3 -3
drivers/mtd/hyperbus/hbmc-am654.c
···
  {
      struct am654_hbmc_priv *priv = platform_get_drvdata(pdev);
      struct am654_hbmc_device_priv *dev_priv = priv->hbdev.priv;
-     int ret;

-     ret = hyperbus_unregister_device(&priv->hbdev);
+     hyperbus_unregister_device(&priv->hbdev);
+
      if (priv->mux_ctrl)
          mux_control_deselect(priv->mux_ctrl);

      if (dev_priv->rx_chan)
          dma_release_channel(dev_priv->rx_chan);

-     return ret;
+     return 0;
  }

  static const struct of_device_id am654_hbmc_dt_ids[] = {
+2 -6
drivers/mtd/hyperbus/hyperbus-core.c
···
  }
  EXPORT_SYMBOL_GPL(hyperbus_register_device);

- int hyperbus_unregister_device(struct hyperbus_device *hbdev)
+ void hyperbus_unregister_device(struct hyperbus_device *hbdev)
  {
-     int ret = 0;
-
      if (hbdev && hbdev->mtd) {
-         ret = mtd_device_unregister(hbdev->mtd);
+         WARN_ON(mtd_device_unregister(hbdev->mtd));
          map_destroy(hbdev->mtd);
      }
-
-     return ret;
  }
  EXPORT_SYMBOL_GPL(hyperbus_unregister_device);
+9 -4
drivers/mtd/hyperbus/rpc-if.c
···
      error = rpcif_hw_init(&hyperbus->rpc, true);
      if (error)
-         return error;
+         goto out_disable_rpm;

      hyperbus->hbdev.map.size = hyperbus->rpc.size;
      hyperbus->hbdev.map.virt = hyperbus->rpc.dirmap;
···
      hyperbus->hbdev.np = of_get_next_child(pdev->dev.parent->of_node, NULL);
      error = hyperbus_register_device(&hyperbus->hbdev);
      if (error)
-         rpcif_disable_rpm(&hyperbus->rpc);
+         goto out_disable_rpm;

+     return 0;
+
+ out_disable_rpm:
+     rpcif_disable_rpm(&hyperbus->rpc);
      return error;
  }

  static int rpcif_hb_remove(struct platform_device *pdev)
  {
      struct rpcif_hyperbus *hyperbus = platform_get_drvdata(pdev);
-     int error = hyperbus_unregister_device(&hyperbus->hbdev);
+
+     hyperbus_unregister_device(&hyperbus->hbdev);

      rpcif_disable_rpm(&hyperbus->rpc);

-     return error;
+     return 0;
  }

  static struct platform_driver rpcif_platform_driver = {
+3 -1
drivers/mtd/lpddr/lpddr2_nvm.c
···
   */
  static int lpddr2_nvm_remove(struct platform_device *pdev)
  {
-     return mtd_device_unregister(dev_get_drvdata(&pdev->dev));
+     WARN_ON(mtd_device_unregister(dev_get_drvdata(&pdev->dev)));
+
+     return 0;
  }

  /* Initialize platform_driver data structure for lpddr2_nvm */
+3 -10
drivers/mtd/maps/physmap-core.c
···
  {
      struct physmap_flash_info *info;
      struct physmap_flash_data *physmap_data;
-     int i, err = 0;
+     int i;

      info = platform_get_drvdata(dev);
-     if (!info) {
-         err = -EINVAL;
-         goto out;
-     }

      if (info->cmtd) {
-         err = mtd_device_unregister(info->cmtd);
-         if (err)
-             goto out;
+         WARN_ON(mtd_device_unregister(info->cmtd));

          if (info->cmtd != info->mtds[0])
              mtd_concat_destroy(info->cmtd);
···
      if (physmap_data && physmap_data->exit)
          physmap_data->exit(dev);

- out:
      pm_runtime_put(&dev->dev);
      pm_runtime_disable(&dev->dev);
-     return err;
+     return 0;
  }

  static void physmap_set_vpp(struct map_info *map, int state)
+2
drivers/mtd/maps/physmap-versatile.c
···
          return -ENODEV;
      }
      ebi_base = of_iomap(ebi, 0);
+     of_node_put(ebi);
      if (!ebi_base)
          return -ENODEV;
···
      versatile_flashprot = (enum versatile_flashprot)devid->data;
      rmap = syscon_node_to_regmap(sysnp);
+     of_node_put(sysnp);
      if (IS_ERR(rmap))
          return PTR_ERR(rmap);
+8 -5
drivers/mtd/mtdchar.c
···
      if (!usr_oob)
          req.ooblen = 0;

+     req.len &= 0xffffffff;
+     req.ooblen &= 0xffffffff;
+
      if (req.start + req.len > mtd->size)
          return -EINVAL;

      datbuf_len = min_t(size_t, req.len, mtd->erasesize);
      if (datbuf_len > 0) {
-         datbuf = kmalloc(datbuf_len, GFP_KERNEL);
+         datbuf = kvmalloc(datbuf_len, GFP_KERNEL);
          if (!datbuf)
              return -ENOMEM;
      }

      oobbuf_len = min_t(size_t, req.ooblen, mtd->erasesize);
      if (oobbuf_len > 0) {
-         oobbuf = kmalloc(oobbuf_len, GFP_KERNEL);
+         oobbuf = kvmalloc(oobbuf_len, GFP_KERNEL);
          if (!oobbuf) {
-             kfree(datbuf);
+             kvfree(datbuf);
              return -ENOMEM;
          }
      }
···
          usr_oob += ops.oobretlen;
      }

-     kfree(datbuf);
-     kfree(oobbuf);
+     kvfree(datbuf);
+     kvfree(oobbuf);

      return ret;
  }
+63
drivers/mtd/mtdcore.c
···
      return 0;
  }

+ static void mtd_check_of_node(struct mtd_info *mtd)
+ {
+     struct device_node *partitions, *parent_dn, *mtd_dn = NULL;
+     const char *pname, *prefix = "partition-";
+     int plen, mtd_name_len, offset, prefix_len;
+     struct mtd_info *parent;
+     bool found = false;
+
+     /* Check if MTD already has a device node */
+     if (dev_of_node(&mtd->dev))
+         return;
+
+     /* Check if a partitions node exist */
+     if (!mtd_is_partition(mtd))
+         return;
+     parent = mtd->parent;
+     parent_dn = dev_of_node(&parent->dev);
+     if (!parent_dn)
+         return;
+
+     partitions = of_get_child_by_name(parent_dn, "partitions");
+     if (!partitions)
+         goto exit_parent;
+
+     prefix_len = strlen(prefix);
+     mtd_name_len = strlen(mtd->name);
+
+     /* Search if a partition is defined with the same name */
+     for_each_child_of_node(partitions, mtd_dn) {
+         offset = 0;
+
+         /* Skip partition with no/wrong prefix */
+         if (!of_node_name_prefix(mtd_dn, "partition-"))
+             continue;
+
+         /* Label have priority. Check that first */
+         if (of_property_read_string(mtd_dn, "label", &pname)) {
+             of_property_read_string(mtd_dn, "name", &pname);
+             offset = prefix_len;
+         }
+
+         plen = strlen(pname) - offset;
+         if (plen == mtd_name_len &&
+             !strncmp(mtd->name, pname + offset, plen)) {
+             found = true;
+             break;
+         }
+     }
+
+     if (!found)
+         goto exit_partitions;
+
+     /* Set of_node only for nvmem */
+     if (of_device_is_compatible(mtd_dn, "nvmem-cells"))
+         mtd_set_of_node(mtd, mtd_dn);
+
+ exit_partitions:
+     of_node_put(partitions);
+ exit_parent:
+     of_node_put(parent_dn);
+ }
+
  /**
   * add_mtd_device - register an MTD device
   * @mtd: pointer to new MTD device info structure
···
      mtd->dev.devt = MTD_DEVT(i);
      dev_set_name(&mtd->dev, "mtd%d", i);
      dev_set_drvdata(&mtd->dev, mtd);
+     mtd_check_of_node(mtd);
      of_node_get(mtd_get_of_node(mtd));
      error = device_register(&mtd->dev);
      if (error)
+11 -5
drivers/mtd/nand/raw/arasan-nand-controller.c
···
      /* Update clock frequency */
      if (nfc->cur_clk != anand->clk) {
-         clk_disable_unprepare(nfc->controller_clk);
-         ret = clk_set_rate(nfc->controller_clk, anand->clk);
+         clk_disable_unprepare(nfc->bus_clk);
+         ret = clk_set_rate(nfc->bus_clk, anand->clk);
          if (ret) {
              dev_err(nfc->dev, "Failed to change clock rate\n");
              return ret;
          }

-         ret = clk_prepare_enable(nfc->controller_clk);
+         ret = clk_prepare_enable(nfc->bus_clk);
          if (ret) {
              dev_err(nfc->dev,
-                 "Failed to re-enable the controller clock\n");
+                 "Failed to re-enable the bus clock\n");
              return ret;
          }
···
              DQS_BUFF_SEL_OUT(dqs_mode);
      }

-     anand->clk = ANFC_XLNX_SDR_DFLT_CORE_CLK;
+     if (nand_interface_is_sdr(conf)) {
+         anand->clk = ANFC_XLNX_SDR_DFLT_CORE_CLK;
+     } else {
+         /* ONFI timings are defined in picoseconds */
+         anand->clk = div_u64((u64)NSEC_PER_SEC * 1000,
+                              conf->timings.nvddr.tCK_min);
+     }

      /*
       * Due to a hardware bug in the ZynqMP SoC, SDR timing modes 0-1 work
+3 -1
drivers/mtd/nand/raw/atmel/nand-controller.c
···
  {
      struct atmel_nand_controller *nc = platform_get_drvdata(pdev);

-     return nc->caps->ops->remove(nc);
+     WARN_ON(nc->caps->ops->remove(nc));
+
+     return 0;
  }

  static __maybe_unused int atmel_nand_controller_resume(struct device *dev)
+7 -2
drivers/mtd/nand/raw/cafe_nand.c
···
      pci_set_master(pdev);

      cafe = kzalloc(sizeof(*cafe), GFP_KERNEL);
-     if (!cafe)
-         return -ENOMEM;
+     if (!cafe) {
+         err = -ENOMEM;
+         goto out_disable_device;
+     }

      mtd = nand_to_mtd(&cafe->nand);
      mtd->dev.parent = &pdev->dev;
···
      pci_iounmap(pdev, cafe->mmio);
  out_free_mtd:
      kfree(cafe);
+ out_disable_device:
+     pci_disable_device(pdev);
  out:
      return err;
  }
···
      pci_iounmap(pdev, cafe->mmio);
      dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr);
      kfree(cafe);
+     pci_disable_device(pdev);
  }

  static const struct pci_device_id cafe_nand_tbl[] = {
+3 -14
drivers/mtd/nand/raw/meson_nand.c
···
      return 0;
  }

- static int meson_nfc_nand_chip_cleanup(struct meson_nfc *nfc)
+ static void meson_nfc_nand_chip_cleanup(struct meson_nfc *nfc)
  {
      struct meson_nfc_nand_chip *meson_chip;
      struct mtd_info *mtd;
-     int ret;

      while (!list_empty(&nfc->chips)) {
          meson_chip = list_first_entry(&nfc->chips,
                                        struct meson_nfc_nand_chip, node);
          mtd = nand_to_mtd(&meson_chip->nand);
-         ret = mtd_device_unregister(mtd);
-         if (ret)
-             return ret;
+         WARN_ON(mtd_device_unregister(mtd));

-         meson_nfc_free_buffer(&meson_chip->nand);
          nand_cleanup(&meson_chip->nand);
          list_del(&meson_chip->node);
      }
-
-     return 0;
  }

  static int meson_nfc_nand_chips_init(struct device *dev,
···
  static int meson_nfc_remove(struct platform_device *pdev)
  {
      struct meson_nfc *nfc = platform_get_drvdata(pdev);
-     int ret;

-     ret = meson_nfc_nand_chip_cleanup(nfc);
-     if (ret)
-         return ret;
+     meson_nfc_nand_chip_cleanup(nfc);

      meson_nfc_disable_clk(nfc);
-
-     platform_set_drvdata(pdev, NULL);

      return 0;
  }
+2 -4
drivers/mtd/nand/raw/omap2.c
···
      struct mtd_info *mtd = platform_get_drvdata(pdev);
      struct nand_chip *nand_chip = mtd_to_nand(mtd);
      struct omap_nand_info *info = mtd_to_omap(mtd);
-     int ret;

      rawnand_sw_bch_cleanup(nand_chip);

      if (info->dma)
          dma_release_channel(info->dma);
-     ret = mtd_device_unregister(mtd);
-     WARN_ON(ret);
+     WARN_ON(mtd_device_unregister(mtd));
      nand_cleanup(nand_chip);
-     return ret;
+     return 0;
  }

  /* omap_nand_ids defined in linux/platform_data/mtd-nand-omap2.h */
+258 -52
drivers/mtd/nand/raw/qcom_nandc.c
···
  #define DISABLE_STATUS_AFTER_WRITE 4
  #define CW_PER_PAGE 6
  #define UD_SIZE_BYTES 9
+ #define UD_SIZE_BYTES_MASK GENMASK(18, 9)
  #define ECC_PARITY_SIZE_BYTES_RS 19
  #define SPARE_SIZE_BYTES 23
+ #define SPARE_SIZE_BYTES_MASK GENMASK(26, 23)
  #define NUM_ADDR_CYCLES 27
  #define STATUS_BFR_READ 30
  #define SET_RD_MODE_AFTER_STATUS 31
···
  #define ECC_MODE 4
  #define ECC_PARITY_SIZE_BYTES_BCH 8
  #define ECC_NUM_DATA_BYTES 16
+ #define ECC_NUM_DATA_BYTES_MASK GENMASK(25, 16)
  #define ECC_FORCE_CLK_OPEN 30

  /* NAND_DEV_CMD1 bits */
···
   * @bam_ce - the array of BAM command elements
   * @cmd_sgl - sgl for NAND BAM command pipe
   * @data_sgl - sgl for NAND BAM consumer/producer pipe
+  * @last_data_desc - last DMA desc in data channel (tx/rx).
+  * @last_cmd_desc - last DMA desc in command channel.
+  * @txn_done - completion for NAND transfer.
   * @bam_ce_pos - the index in bam_ce which is available for next sgl
   * @bam_ce_start - the index in bam_ce which marks the start position ce
   *                for current sgl. It will be used for size calculation
···
   * @rx_sgl_start - start index in data sgl for rx.
   * @wait_second_completion - wait for second DMA desc completion before making
   *                          the NAND transfer completion.
-  * @txn_done - completion for NAND transfer.
-  * @last_data_desc - last DMA desc in data channel (tx/rx).
-  * @last_cmd_desc - last DMA desc in command channel.
   */
  struct bam_transaction {
      struct bam_cmd_element *bam_ce;
      struct scatterlist *cmd_sgl;
      struct scatterlist *data_sgl;
+     struct dma_async_tx_descriptor *last_data_desc;
+     struct dma_async_tx_descriptor *last_cmd_desc;
+     struct completion txn_done;
      u32 bam_ce_pos;
      u32 bam_ce_start;
      u32 cmd_sgl_pos;
···
      u32 rx_sgl_pos;
      u32 rx_sgl_start;
      bool wait_second_completion;
-     struct completion txn_done;
-     struct dma_async_tx_descriptor *last_data_desc;
-     struct dma_async_tx_descriptor *last_cmd_desc;
  };

  /*
   * This data type corresponds to the nand dma descriptor
+  * @dma_desc - low level DMA engine descriptor
   * @list - list for desc_info
-  * @dir - DMA transfer direction
+  *
   * @adm_sgl - sgl which will be used for single sgl dma descriptor. Only used by
   *            ADM
   * @bam_sgl - sgl which will be used for dma descriptor. Only used by BAM
   * @sgl_cnt - number of SGL in bam_sgl. Only used by BAM
-  * @dma_desc - low level DMA engine descriptor
+  * @dir - DMA transfer direction
   */
  struct desc_info {
+     struct dma_async_tx_descriptor *dma_desc;
      struct list_head node;

-     enum dma_data_direction dir;
      union {
          struct scatterlist adm_sgl;
          struct {
···
              int sgl_cnt;
          };
      };
-     struct dma_async_tx_descriptor *dma_desc;
+     enum dma_data_direction dir;
  };

  /*
···
  /*
   * NAND controller data struct
   *
+  * @dev:			parent device
+  *
+  * @base:			MMIO base
+  *
+  * @core_clk:			controller clock
+  * @aon_clk:			another controller clock
+  *
+  * @regs:			a contiguous chunk of memory for DMA register
+  *				writes. contains the register values to be
+  *				written to controller
+  *
+  * @props:			properties of current NAND controller,
+  *				initialized via DT match data
+  *
   * @controller:		base controller structure
   * @host_list:			list containing all the chips attached to the
   *				controller
-  * @dev:			parent device
-  * @base:			MMIO base
-  * @base_phys:			physical base address of controller registers
-  * @base_dma:			dma base address of controller registers
-  * @core_clk:			controller clock
-  * @aon_clk:			another controller clock
   *
   * @chan:			dma channel
   * @cmd_crci:			ADM DMA CRCI for command flow control
   * @data_crci:			ADM DMA CRCI for data flow control
+  *
   * @desc_list:			DMA descriptor list (list of desc_infos)
   *
   * @data_buffer:		our local DMA buffer for page read/writes,
   *				used when we can't use the buffer provided
   *				by upper layers directly
+  * @reg_read_buf:		local buffer for reading back registers via DMA
+  *
+  * @base_phys:			physical base address of controller registers
+  * @base_dma:			dma base address of controller registers
+  * @reg_read_dma:		contains dma address for register read buffer
+  *
   * @buf_size/count/start:	markers for chip->legacy.read_buf/write_buf
   *				functions
-  * @reg_read_buf:		local buffer for reading back registers via DMA
-  * @reg_read_dma:		contains dma address for register read buffer
-  * @reg_read_pos:		marker for data read in reg_read_buf
-  *
-  * @regs:			a contiguous chunk of memory for DMA register
-  *				writes. contains the register values to be
-  *				written to controller
-  * @cmd1/vld:			some fixed controller register values
-  * @props:			properties of current NAND controller,
-  *				initialized via DT match data
   * @max_cwperpage:		maximum QPIC codewords required. calculated
   *				from all connected NAND devices pagesize
+  *
+  * @reg_read_pos:		marker for data read in reg_read_buf
+  *
+  * @cmd1/vld:			some fixed controller register values
   */
  struct qcom_nand_controller {
-     struct nand_controller controller;
-     struct list_head host_list;
-
      struct device *dev;

      void __iomem *base;
-     phys_addr_t base_phys;
-     dma_addr_t base_dma;

      struct clk *core_clk;
      struct clk *aon_clk;
+
+     struct nandc_regs *regs;
+     struct bam_transaction *bam_txn;
+
+     const struct qcom_nandc_props *props;
+
+     struct nand_controller controller;
+     struct list_head host_list;

      union {
          /* will be used only by QPIC for BAM DMA */
···
      };

      struct list_head desc_list;
-     struct bam_transaction *bam_txn;

      u8 *data_buffer;
+     __le32 *reg_read_buf;
+
+     phys_addr_t base_phys;
+     dma_addr_t base_dma;
+     dma_addr_t reg_read_dma;
+
      int buf_size;
      int buf_count;
      int buf_start;
      unsigned int max_cwperpage;

-     __le32 *reg_read_buf;
-     dma_addr_t reg_read_dma;
      int reg_read_pos;

-     struct nandc_regs *regs;
-
      u32 cmd1, vld;
-     const struct qcom_nandc_props *props;
+ };
+
+ /*
+  * NAND special boot partitions
+  *
+  * @page_offset:		offset of the partition where spare data is not protected
+  *				by ECC (value in pages)
+  * @page_offset:		size of the partition where spare data is not protected
+  *				by ECC (value in pages)
+  */
+ struct qcom_nand_boot_partition {
+     u32 page_offset;
+     u32 page_size;
  };

  /*
   * NAND chip structure
   *
+  * @boot_partitions:		array of boot partitions where offset and size of the
+  *				boot partitions are stored
+  *
   * @chip:			base NAND chip structure
   * @node:			list node to add itself to host_list in
   *				qcom_nand_controller
+  *
+  * @nr_boot_partitions:	count of the boot partitions where spare data is not
+  *				protected by ECC
   *
   * @cs:			chip select value for this chip
   * @cw_size:			the number of bytes in a single step/codeword
···
   *				and reserved bytes
   * @cw_data:			the number of bytes within a codeword protected
   *				by ECC
-  * @use_ecc:			request the controller to use ECC for the
-  *				upcoming read/write
-  * @bch_enabled:		flag to tell whether BCH ECC mode is used
   * @ecc_bytes_hw:		ECC bytes used by controller hardware for this
   *				chip
-  * @status:			value to be returned if NAND_CMD_STATUS command
-  *				is executed
+  *
   * @last_command:		keeps track of last command on this chip. used
   *				for reading correct status
   *
   * @cfg0, cfg1, cfg0_raw..:	NANDc register configurations needed for
   *				ecc/non-ecc mode for the current nand flash
   *				device
+  *
+  * @status:			value to be returned if NAND_CMD_STATUS command
+  *				is executed
+  * @codeword_fixup:		keep track of the current layout used by
+  *				the driver for read/write operation.
+  * @use_ecc:			request the controller to use ECC for the
+  *				upcoming read/write
+  * @bch_enabled:		flag to tell whether BCH ECC mode is used
   */
  struct qcom_nand_host {
+     struct qcom_nand_boot_partition *boot_partitions;
+
      struct nand_chip chip;
      struct list_head node;
+
+     int nr_boot_partitions;

      int cs;
      int cw_size;
      int cw_data;
-     bool use_ecc;
-     bool bch_enabled;
      int ecc_bytes_hw;
      int spare_bytes;
      int bbm_size;
-     u8 status;
+
      int last_command;

      u32 cfg0, cfg1;
···
      u32 ecc_bch_cfg;
      u32 clrflashstatus;
      u32 clrreadstatus;
+
+     u8 status;
+     bool codeword_fixup;
+     bool use_ecc;
+     bool bch_enabled;
  };

  /*
   * This data type corresponds to the NAND controller properties which varies
   * among different NAND controllers.
   * @ecc_modes - ecc mode for NAND
+  * @dev_cmd_reg_start - NAND_DEV_CMD_* registers starting offset
   * @is_bam - whether NAND controller is using BAM
   * @is_qpic - whether NAND CTRL is part of qpic IP
   * @qpic_v2 - flag to indicate QPIC IP version 2
-  * @dev_cmd_reg_start - NAND_DEV_CMD_* registers starting offset
+  * @use_codeword_fixup - whether NAND has different layout for boot partitions
   */
  struct qcom_nandc_props {
      u32 ecc_modes;
+     u32 dev_cmd_reg_start;
      bool is_bam;
      bool is_qpic;
      bool qpic_v2;
-     u32 dev_cmd_reg_start;
+     bool use_codeword_fixup;
  };

  /* Frees the BAM transaction memory */
···
      data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1);
      oob_size1 = host->bbm_size;

-     if (qcom_nandc_is_last_cw(ecc, cw)) {
+     if (qcom_nandc_is_last_cw(ecc, cw) && !host->codeword_fixup) {
          data_size2 = ecc->size - data_size1 -
                       ((ecc->steps - 1) * 4);
          oob_size2 = (ecc->steps * 4) + host->ecc_bytes_hw +
···
      }

      for_each_set_bit(cw, &uncorrectable_cws, ecc->steps) {
-         if (qcom_nandc_is_last_cw(ecc, cw)) {
+         if (qcom_nandc_is_last_cw(ecc, cw) && !host->codeword_fixup) {
              data_size = ecc->size - ((ecc->steps - 1) * 4);
              oob_size = (ecc->steps * 4) + host->ecc_bytes_hw;
          } else {
···
      for (i = 0; i < ecc->steps; i++) {
          int data_size, oob_size;

-         if (qcom_nandc_is_last_cw(ecc, i)) {
+         if (qcom_nandc_is_last_cw(ecc, i) && !host->codeword_fixup) {
              data_size = ecc->size - ((ecc->steps - 1) << 2);
              oob_size = (ecc->steps << 2) + host->ecc_bytes_hw +
                         host->spare_bytes;
···
      return ret;
  }

+ static bool qcom_nandc_is_boot_partition(struct qcom_nand_host *host, int page)
+ {
+     struct qcom_nand_boot_partition *boot_partition;
+     u32 start, end;
+     int i;
+
+     /*
+      * Since the frequent access will be to the non-boot partitions like rootfs,
+      * optimize the page check by:
+      *
+      * 1. Checking if the page lies after the last boot partition.
+      * 2. Checking from the boot partition end.
+      */
+
+     /* First check the last boot partition */
+     boot_partition = &host->boot_partitions[host->nr_boot_partitions - 1];
+     start = boot_partition->page_offset;
+     end = start + boot_partition->page_size;
+
+     /* Page is after the last boot partition end. This is NOT a boot partition */
+     if (page > end)
+         return false;
+
+     /* Actually check if it's a boot partition */
+     if (page < end && page >= start)
+         return true;
+
+     /* Check the other boot partitions starting from the second-last partition */
+     for (i = host->nr_boot_partitions - 2; i >= 0; i--) {
+         boot_partition = &host->boot_partitions[i];
+         start = boot_partition->page_offset;
+         end = start + boot_partition->page_size;
+
+         if (page < end && page >= start)
+             return true;
+     }
+
+     return false;
+ }
+
+ static void qcom_nandc_codeword_fixup(struct qcom_nand_host *host, int page)
+ {
+     bool codeword_fixup = qcom_nandc_is_boot_partition(host, page);
+
+     /* Skip conf write if we are already in the correct mode */
+     if (codeword_fixup == host->codeword_fixup)
+         return;
+
+     host->codeword_fixup = codeword_fixup;
+
+     host->cw_data = codeword_fixup ? 512 : 516;
+     host->spare_bytes = host->cw_size - host->ecc_bytes_hw -
+                         host->bbm_size - host->cw_data;
+
+     host->cfg0 &= ~(SPARE_SIZE_BYTES_MASK | UD_SIZE_BYTES_MASK);
+     host->cfg0 |= host->spare_bytes << SPARE_SIZE_BYTES |
+                   host->cw_data << UD_SIZE_BYTES;
+
+     host->ecc_bch_cfg &= ~ECC_NUM_DATA_BYTES_MASK;
+     host->ecc_bch_cfg |= host->cw_data << ECC_NUM_DATA_BYTES;
+     host->ecc_buf_cfg = (host->cw_data - 1) << NUM_STEPS;
+ }
+
  /* implements ecc->read_page() */
  static int qcom_nandc_read_page(struct nand_chip *chip, uint8_t *buf,
                                  int oob_required, int page)
···
      struct qcom_nand_host *host = to_qcom_nand_host(chip);
      struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
      u8 *data_buf, *oob_buf = NULL;

+     if (host->nr_boot_partitions)
+         qcom_nandc_codeword_fixup(host, page);
+
      nand_read_page_op(chip, page, 0, NULL, 0);
      data_buf = buf;
···
      struct nand_ecc_ctrl *ecc = &chip->ecc;
      int cw, ret;
      u8 *data_buf = buf, *oob_buf = chip->oob_poi;

+     if (host->nr_boot_partitions)
+         qcom_nandc_codeword_fixup(host, page);
+
      for (cw = 0; cw < ecc->steps; cw++) {
          ret = qcom_nandc_read_cw_raw(mtd, chip, data_buf, oob_buf,
···
      struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
      struct nand_ecc_ctrl *ecc = &chip->ecc;

+     if (host->nr_boot_partitions)
+         qcom_nandc_codeword_fixup(host, page);
+
      clear_read_regs(nandc);
      clear_bam_transaction(nandc);
···
      u8 *data_buf, *oob_buf;
      int i, ret;

+     if (host->nr_boot_partitions)
+         qcom_nandc_codeword_fixup(host, page);
+
      nand_prog_page_begin_op(chip, page, 0, NULL, 0);

      clear_read_regs(nandc);
···
      for (i = 0; i < ecc->steps; i++)
{ 2243 2120 int data_size, oob_size; 2244 2121 2245 - if (qcom_nandc_is_last_cw(ecc, i)) { 2122 + if (qcom_nandc_is_last_cw(ecc, i) && !host->codeword_fixup) { 2246 2123 data_size = ecc->size - ((ecc->steps - 1) << 2); 2247 2124 oob_size = (ecc->steps << 2) + host->ecc_bytes_hw + 2248 2125 host->spare_bytes; ··· 2299 2176 u8 *data_buf, *oob_buf; 2300 2177 int i, ret; 2301 2178 2179 + if (host->nr_boot_partitions) 2180 + qcom_nandc_codeword_fixup(host, page); 2181 + 2302 2182 nand_prog_page_begin_op(chip, page, 0, NULL, 0); 2303 2183 clear_read_regs(nandc); 2304 2184 clear_bam_transaction(nandc); ··· 2320 2194 data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1); 2321 2195 oob_size1 = host->bbm_size; 2322 2196 2323 - if (qcom_nandc_is_last_cw(ecc, i)) { 2197 + if (qcom_nandc_is_last_cw(ecc, i) && !host->codeword_fixup) { 2324 2198 data_size2 = ecc->size - data_size1 - 2325 2199 ((ecc->steps - 1) << 2); 2326 2200 oob_size2 = (ecc->steps << 2) + host->ecc_bytes_hw + ··· 2379 2253 u8 *oob = chip->oob_poi; 2380 2254 int data_size, oob_size; 2381 2255 int ret; 2256 + 2257 + if (host->nr_boot_partitions) 2258 + qcom_nandc_codeword_fixup(host, page); 2382 2259 2383 2260 host->use_ecc = true; 2384 2261 clear_bam_transaction(nandc); ··· 3044 2915 3045 2916 static const char * const probes[] = { "cmdlinepart", "ofpart", "qcomsmem", NULL }; 3046 2917 2918 + static int qcom_nand_host_parse_boot_partitions(struct qcom_nand_controller *nandc, 2919 + struct qcom_nand_host *host, 2920 + struct device_node *dn) 2921 + { 2922 + struct nand_chip *chip = &host->chip; 2923 + struct mtd_info *mtd = nand_to_mtd(chip); 2924 + struct qcom_nand_boot_partition *boot_partition; 2925 + struct device *dev = nandc->dev; 2926 + int partitions_count, i, j, ret; 2927 + 2928 + if (!of_find_property(dn, "qcom,boot-partitions", NULL)) 2929 + return 0; 2930 + 2931 + partitions_count = of_property_count_u32_elems(dn, "qcom,boot-partitions"); 2932 + if (partitions_count <= 0) { 2933 + 
dev_err(dev, "Error parsing boot partition\n"); 2934 + return partitions_count ? partitions_count : -EINVAL; 2935 + } 2936 + 2937 + host->nr_boot_partitions = partitions_count / 2; 2938 + host->boot_partitions = devm_kcalloc(dev, host->nr_boot_partitions, 2939 + sizeof(*host->boot_partitions), GFP_KERNEL); 2940 + if (!host->boot_partitions) { 2941 + host->nr_boot_partitions = 0; 2942 + return -ENOMEM; 2943 + } 2944 + 2945 + for (i = 0, j = 0; i < host->nr_boot_partitions; i++, j += 2) { 2946 + boot_partition = &host->boot_partitions[i]; 2947 + 2948 + ret = of_property_read_u32_index(dn, "qcom,boot-partitions", j, 2949 + &boot_partition->page_offset); 2950 + if (ret) { 2951 + dev_err(dev, "Error parsing boot partition offset at index %d\n", i); 2952 + host->nr_boot_partitions = 0; 2953 + return ret; 2954 + } 2955 + 2956 + if (boot_partition->page_offset % mtd->writesize) { 2957 + dev_err(dev, "Boot partition offset not multiple of writesize at index %i\n", 2958 + i); 2959 + host->nr_boot_partitions = 0; 2960 + return -EINVAL; 2961 + } 2962 + /* Convert offset to nand pages */ 2963 + boot_partition->page_offset /= mtd->writesize; 2964 + 2965 + ret = of_property_read_u32_index(dn, "qcom,boot-partitions", j + 1, 2966 + &boot_partition->page_size); 2967 + if (ret) { 2968 + dev_err(dev, "Error parsing boot partition size at index %d\n", i); 2969 + host->nr_boot_partitions = 0; 2970 + return ret; 2971 + } 2972 + 2973 + if (boot_partition->page_size % mtd->writesize) { 2974 + dev_err(dev, "Boot partition size not multiple of writesize at index %i\n", 2975 + i); 2976 + host->nr_boot_partitions = 0; 2977 + return -EINVAL; 2978 + } 2979 + /* Convert size to nand pages */ 2980 + boot_partition->page_size /= mtd->writesize; 2981 + } 2982 + 2983 + return 0; 2984 + } 2985 + 3047 2986 static int qcom_nand_host_init_and_register(struct qcom_nand_controller *nandc, 3048 2987 struct qcom_nand_host *host, 3049 2988 struct device_node *dn) ··· 3168 2971 ret = 
mtd_device_parse_register(mtd, probes, NULL, NULL, 0); 3169 2972 if (ret) 3170 2973 nand_cleanup(chip); 2974 + 2975 + if (nandc->props->use_codeword_fixup) { 2976 + ret = qcom_nand_host_parse_boot_partitions(nandc, host, dn); 2977 + if (ret) { 2978 + nand_cleanup(chip); 2979 + return ret; 2980 + } 2981 + } 3171 2982 3172 2983 return ret; 3173 2984 } ··· 3342 3137 static const struct qcom_nandc_props ipq806x_nandc_props = { 3343 3138 .ecc_modes = (ECC_RS_4BIT | ECC_BCH_8BIT), 3344 3139 .is_bam = false, 3140 + .use_codeword_fixup = true, 3345 3141 .dev_cmd_reg_start = 0x0, 3346 3142 }; 3347 3143
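The comment block in `qcom_nandc_is_boot_partition()` above explains the lookup order: since most accesses hit non-boot partitions such as rootfs, the last boot partition is checked first so the common case returns early, and the remaining partitions are only scanned (backwards) when the page falls inside the boot region. A minimal standalone sketch of that lookup — an illustrative re-implementation, not the driver code itself:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of qcom_nandc_is_boot_partition(): boot partitions
 * are given as (first page, length in pages) pairs, sorted by offset. */
struct boot_part {
	unsigned int page_offset;	/* first page of the partition */
	unsigned int page_size;		/* length in pages */
};

static bool page_in_boot_partition(const struct boot_part *parts,
				   int nr_parts, unsigned int page)
{
	unsigned int start = parts[nr_parts - 1].page_offset;
	unsigned int end = start + parts[nr_parts - 1].page_size;
	int i;

	/* Common case: page lies after the last boot partition. */
	if (page > end)
		return false;

	if (page < end && page >= start)
		return true;

	/* Fall back to scanning from the second-last partition down. */
	for (i = nr_parts - 2; i >= 0; i--) {
		start = parts[i].page_offset;
		end = start + parts[i].page_size;
		if (page < end && page >= start)
			return true;
	}

	return false;
}
```

When a page does land in a boot partition, `qcom_nandc_codeword_fixup()` shrinks the per-codeword user-data size from 516 to 512 bytes and rewrites `cfg0`/`ecc_bch_cfg` accordingly, which is why every read/write path gains the `host->nr_boot_partitions` check.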
+1 -1
drivers/mtd/nand/raw/sm_common.c
··· 52 52 .free = oob_sm_ooblayout_free, 53 53 }; 54 54 55 - /* NOTE: This layout is is not compatabable with SmartMedia, */ 55 + /* NOTE: This layout is not compatabable with SmartMedia, */ 56 56 /* because the 256 byte devices have page depenent oob layout */ 57 57 /* However it does preserve the bad block markers */ 58 58 /* If you use smftl, it will bypass this and work correctly */
+1 -4
drivers/mtd/nand/raw/tegra_nand.c
··· 1223 1223 struct tegra_nand_controller *ctrl = platform_get_drvdata(pdev); 1224 1224 struct nand_chip *chip = ctrl->chip; 1225 1225 struct mtd_info *mtd = nand_to_mtd(chip); 1226 - int ret; 1227 1226 1228 - ret = mtd_device_unregister(mtd); 1229 - if (ret) 1230 - return ret; 1227 + WARN_ON(mtd_device_unregister(mtd)); 1231 1228 1232 1229 nand_cleanup(chip); 1233 1230
+1 -1
drivers/mtd/nand/spi/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - spinand-objs := core.o gigadevice.o macronix.o micron.o paragon.o toshiba.o winbond.o xtx.o 2 + spinand-objs := core.o ato.o gigadevice.o macronix.o micron.o paragon.o toshiba.o winbond.o xtx.o 3 3 obj-$(CONFIG_MTD_SPI_NAND) += spinand.o
+86
drivers/mtd/nand/spi/ato.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2022 Aidan MacDonald 4 + * 5 + * Author: Aidan MacDonald <aidanmacdonald.0x0@gmail.com> 6 + */ 7 + 8 + #include <linux/device.h> 9 + #include <linux/kernel.h> 10 + #include <linux/mtd/spinand.h> 11 + 12 + 13 + #define SPINAND_MFR_ATO 0x9b 14 + 15 + 16 + static SPINAND_OP_VARIANTS(read_cache_variants, 17 + SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0), 18 + SPINAND_PAGE_READ_FROM_CACHE_OP(true, 0, 1, NULL, 0), 19 + SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0)); 20 + 21 + static SPINAND_OP_VARIANTS(write_cache_variants, 22 + SPINAND_PROG_LOAD_X4(true, 0, NULL, 0), 23 + SPINAND_PROG_LOAD(true, 0, NULL, 0)); 24 + 25 + static SPINAND_OP_VARIANTS(update_cache_variants, 26 + SPINAND_PROG_LOAD_X4(false, 0, NULL, 0), 27 + SPINAND_PROG_LOAD(false, 0, NULL, 0)); 28 + 29 + 30 + static int ato25d1ga_ooblayout_ecc(struct mtd_info *mtd, int section, 31 + struct mtd_oob_region *region) 32 + { 33 + if (section > 3) 34 + return -ERANGE; 35 + 36 + region->offset = (16 * section) + 8; 37 + region->length = 8; 38 + return 0; 39 + } 40 + 41 + static int ato25d1ga_ooblayout_free(struct mtd_info *mtd, int section, 42 + struct mtd_oob_region *region) 43 + { 44 + if (section > 3) 45 + return -ERANGE; 46 + 47 + if (section) { 48 + region->offset = (16 * section); 49 + region->length = 8; 50 + } else { 51 + /* first byte of section 0 is reserved for the BBM */ 52 + region->offset = 1; 53 + region->length = 7; 54 + } 55 + 56 + return 0; 57 + } 58 + 59 + static const struct mtd_ooblayout_ops ato25d1ga_ooblayout = { 60 + .ecc = ato25d1ga_ooblayout_ecc, 61 + .free = ato25d1ga_ooblayout_free, 62 + }; 63 + 64 + 65 + static const struct spinand_info ato_spinand_table[] = { 66 + SPINAND_INFO("ATO25D1GA", 67 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x12), 68 + NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1), 69 + NAND_ECCREQ(1, 512), 70 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 71 + &write_cache_variants, 
72 + &update_cache_variants), 73 + SPINAND_HAS_QE_BIT, 74 + SPINAND_ECCINFO(&ato25d1ga_ooblayout, NULL)), 75 + }; 76 + 77 + static const struct spinand_manufacturer_ops ato_spinand_manuf_ops = { 78 + }; 79 + 80 + const struct spinand_manufacturer ato_spinand_manufacturer = { 81 + .id = SPINAND_MFR_ATO, 82 + .name = "ATO", 83 + .chips = ato_spinand_table, 84 + .nchips = ARRAY_SIZE(ato_spinand_table), 85 + .ops = &ato_spinand_manuf_ops, 86 + };
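The two ooblayout callbacks above encode the ATO25D1GA's 64-byte OOB area as four 16-byte sections: the upper 8 bytes of each section carry ECC, the lower 8 are free, except that byte 0 of section 0 is reserved for the bad-block marker, leaving 7 free bytes there. A hedged sketch of the same geometry as plain functions (names are illustrative, not the driver's API; the driver returns `-ERANGE` where these return -1):

```c
#include <assert.h>

/* ATO25D1GA OOB geometry: 4 sections x 16 bytes. */
static int ato_ecc_offset(int section)
{
	return section > 3 ? -1 : 16 * section + 8;	/* upper 8 bytes */
}

static int ato_free_offset(int section)
{
	if (section > 3)
		return -1;
	return section ? 16 * section : 1;	/* byte 0 holds the BBM */
}

static int ato_free_length(int section)
{
	if (section > 3)
		return -1;
	return section ? 8 : 7;		/* section 0 loses the BBM byte */
}
```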
+1
drivers/mtd/nand/spi/core.c
··· 927 927 }; 928 928 929 929 static const struct spinand_manufacturer *spinand_manufacturers[] = { 930 + &ato_spinand_manufacturer, 930 931 &gigadevice_spinand_manufacturer, 931 932 &macronix_spinand_manufacturer, 932 933 &micron_spinand_manufacturer,
+9
drivers/mtd/parsers/Kconfig
··· 186 186 help 187 187 This provides support for parsing partitions from Shared Memory (SMEM) 188 188 for NAND and SPI flash on Qualcomm platforms. 189 + 190 + config MTD_SERCOMM_PARTS 191 + tristate "Sercomm partition table parser" 192 + depends on MTD && RALINK 193 + help 194 + This provides partitions table parser for devices with Sercomm 195 + partition map. This partition table contains real partition 196 + offsets, which may differ from device to device depending on the 197 + number and location of bad blocks on NAND.
+1
drivers/mtd/parsers/Makefile
··· 10 10 obj-$(CONFIG_MTD_PARSER_IMAGETAG) += parser_imagetag.o 11 11 obj-$(CONFIG_MTD_AFS_PARTS) += afs.o 12 12 obj-$(CONFIG_MTD_PARSER_TRX) += parser_trx.o 13 + obj-$(CONFIG_MTD_SERCOMM_PARTS) += scpart.o 13 14 obj-$(CONFIG_MTD_SHARPSL_PARTS) += sharpslpart.o 14 15 obj-$(CONFIG_MTD_REDBOOT_PARTS) += redboot.o 15 16 obj-$(CONFIG_MTD_QCOMSMEM_PARTS) += qcomsmempart.o
+3
drivers/mtd/parsers/ofpart_bcm4908.c
··· 35 35 err = kstrtoul(s + len + 1, 0, &offset); 36 36 if (err) { 37 37 pr_err("failed to parse %s\n", s + len + 1); 38 + of_node_put(root); 38 39 return err; 39 40 } 40 41 42 + of_node_put(root); 41 43 return offset << 10; 42 44 } 43 45 46 + of_node_put(root); 44 47 return -ENOENT; 45 48 } 46 49
+1
drivers/mtd/parsers/redboot.c
··· 58 58 return; 59 59 60 60 ret = of_property_read_u32(npart, "fis-index-block", &dirblock); 61 + of_node_put(npart); 61 62 if (ret) 62 63 return; 63 64
+249
drivers/mtd/parsers/scpart.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * drivers/mtd/scpart.c: Sercomm Partition Parser 4 + * 5 + * Copyright (C) 2018 NOGUCHI Hiroshi 6 + * Copyright (C) 2022 Mikhail Zhilkin 7 + */ 8 + 9 + #include <linux/kernel.h> 10 + #include <linux/slab.h> 11 + #include <linux/mtd/mtd.h> 12 + #include <linux/mtd/partitions.h> 13 + #include <linux/module.h> 14 + 15 + #define MOD_NAME "scpart" 16 + 17 + #ifdef pr_fmt 18 + #undef pr_fmt 19 + #endif 20 + 21 + #define pr_fmt(fmt) MOD_NAME ": " fmt 22 + 23 + #define ID_ALREADY_FOUND 0xffffffffUL 24 + 25 + #define MAP_OFFS_IN_BLK 0x800 26 + #define MAP_MIRROR_NUM 2 27 + 28 + static const char sc_part_magic[] = { 29 + 'S', 'C', 'F', 'L', 'M', 'A', 'P', 'O', 'K', '\0', 30 + }; 31 + #define PART_MAGIC_LEN sizeof(sc_part_magic) 32 + 33 + /* assumes that all fields are set by CPU native endian */ 34 + struct sc_part_desc { 35 + uint32_t part_id; 36 + uint32_t part_offs; 37 + uint32_t part_bytes; 38 + }; 39 + 40 + static uint32_t scpart_desc_is_valid(struct sc_part_desc *pdesc) 41 + { 42 + return ((pdesc->part_id != 0xffffffffUL) && 43 + (pdesc->part_offs != 0xffffffffUL) && 44 + (pdesc->part_bytes != 0xffffffffUL)); 45 + } 46 + 47 + static int scpart_scan_partmap(struct mtd_info *master, loff_t partmap_offs, 48 + struct sc_part_desc **ppdesc) 49 + { 50 + int cnt = 0; 51 + int res = 0; 52 + int res2; 53 + loff_t offs; 54 + size_t retlen; 55 + struct sc_part_desc *pdesc = NULL; 56 + struct sc_part_desc *tmpdesc; 57 + uint8_t *buf; 58 + 59 + buf = kzalloc(master->erasesize, GFP_KERNEL); 60 + if (!buf) { 61 + res = -ENOMEM; 62 + goto out; 63 + } 64 + 65 + res2 = mtd_read(master, partmap_offs, master->erasesize, &retlen, buf); 66 + if (res2 || retlen != master->erasesize) { 67 + res = -EIO; 68 + goto free; 69 + } 70 + 71 + for (offs = MAP_OFFS_IN_BLK; 72 + offs < master->erasesize - sizeof(*tmpdesc); 73 + offs += sizeof(*tmpdesc)) { 74 + tmpdesc = (struct sc_part_desc *)&buf[offs]; 75 + if 
(!scpart_desc_is_valid(tmpdesc)) 76 + break; 77 + cnt++; 78 + } 79 + 80 + if (cnt > 0) { 81 + int bytes = cnt * sizeof(*pdesc); 82 + 83 + pdesc = kcalloc(cnt, sizeof(*pdesc), GFP_KERNEL); 84 + if (!pdesc) { 85 + res = -ENOMEM; 86 + goto free; 87 + } 88 + memcpy(pdesc, &(buf[MAP_OFFS_IN_BLK]), bytes); 89 + 90 + *ppdesc = pdesc; 91 + res = cnt; 92 + } 93 + 94 + free: 95 + kfree(buf); 96 + 97 + out: 98 + return res; 99 + } 100 + 101 + static int scpart_find_partmap(struct mtd_info *master, 102 + struct sc_part_desc **ppdesc) 103 + { 104 + int magic_found = 0; 105 + int res = 0; 106 + int res2; 107 + loff_t offs = 0; 108 + size_t retlen; 109 + uint8_t rdbuf[PART_MAGIC_LEN]; 110 + 111 + while ((magic_found < MAP_MIRROR_NUM) && 112 + (offs < master->size) && 113 + !mtd_block_isbad(master, offs)) { 114 + res2 = mtd_read(master, offs, PART_MAGIC_LEN, &retlen, rdbuf); 115 + if (res2 || retlen != PART_MAGIC_LEN) { 116 + res = -EIO; 117 + goto out; 118 + } 119 + if (!memcmp(rdbuf, sc_part_magic, PART_MAGIC_LEN)) { 120 + pr_debug("Signature found at 0x%llx\n", offs); 121 + magic_found++; 122 + res = scpart_scan_partmap(master, offs, ppdesc); 123 + if (res > 0) 124 + goto out; 125 + } 126 + offs += master->erasesize; 127 + } 128 + 129 + out: 130 + if (res > 0) 131 + pr_info("Valid 'SC PART MAP' (%d partitions) found at 0x%llx\n", res, offs); 132 + else 133 + pr_info("No valid 'SC PART MAP' was found\n"); 134 + 135 + return res; 136 + } 137 + 138 + static int scpart_parse(struct mtd_info *master, 139 + const struct mtd_partition **pparts, 140 + struct mtd_part_parser_data *data) 141 + { 142 + const char *partname; 143 + int n; 144 + int nr_scparts; 145 + int nr_parts = 0; 146 + int res = 0; 147 + struct sc_part_desc *scpart_map = NULL; 148 + struct mtd_partition *parts = NULL; 149 + struct device_node *mtd_node; 150 + struct device_node *ofpart_node; 151 + struct device_node *pp; 152 + 153 + mtd_node = mtd_get_of_node(master); 154 + if (!mtd_node) { 155 + res = -ENOENT; 156 + 
goto out; 157 + } 158 + 159 + ofpart_node = of_get_child_by_name(mtd_node, "partitions"); 160 + if (!ofpart_node) { 161 + pr_info("%s: 'partitions' subnode not found on %pOF.\n", 162 + master->name, mtd_node); 163 + res = -ENOENT; 164 + goto out; 165 + } 166 + 167 + nr_scparts = scpart_find_partmap(master, &scpart_map); 168 + if (nr_scparts <= 0) { 169 + pr_info("No any partitions was found in 'SC PART MAP'.\n"); 170 + res = -ENOENT; 171 + goto free; 172 + } 173 + 174 + parts = kcalloc(of_get_child_count(ofpart_node), sizeof(*parts), 175 + GFP_KERNEL); 176 + if (!parts) { 177 + res = -ENOMEM; 178 + goto free; 179 + } 180 + 181 + for_each_child_of_node(ofpart_node, pp) { 182 + u32 scpart_id; 183 + 184 + if (of_property_read_u32(pp, "sercomm,scpart-id", &scpart_id)) 185 + continue; 186 + 187 + for (n = 0 ; n < nr_scparts ; n++) 188 + if ((scpart_map[n].part_id != ID_ALREADY_FOUND) && 189 + (scpart_id == scpart_map[n].part_id)) 190 + break; 191 + if (n >= nr_scparts) 192 + /* not match */ 193 + continue; 194 + 195 + /* add the partition found in OF into MTD partition array */ 196 + parts[nr_parts].offset = scpart_map[n].part_offs; 197 + parts[nr_parts].size = scpart_map[n].part_bytes; 198 + parts[nr_parts].of_node = pp; 199 + 200 + if (!of_property_read_string(pp, "label", &partname)) 201 + parts[nr_parts].name = partname; 202 + if (of_property_read_bool(pp, "read-only")) 203 + parts[nr_parts].mask_flags |= MTD_WRITEABLE; 204 + if (of_property_read_bool(pp, "lock")) 205 + parts[nr_parts].mask_flags |= MTD_POWERUP_LOCK; 206 + 207 + /* mark as 'done' */ 208 + scpart_map[n].part_id = ID_ALREADY_FOUND; 209 + 210 + nr_parts++; 211 + } 212 + 213 + if (nr_parts > 0) { 214 + *pparts = parts; 215 + res = nr_parts; 216 + } else 217 + pr_info("No partition in OF matches partition ID with 'SC PART MAP'.\n"); 218 + 219 + of_node_put(pp); 220 + 221 + free: 222 + of_node_put(ofpart_node); 223 + kfree(scpart_map); 224 + if (res <= 0) 225 + kfree(parts); 226 + 227 + out: 228 + return 
res; 229 + } 230 + 231 + static const struct of_device_id scpart_parser_of_match_table[] = { 232 + { .compatible = "sercomm,sc-partitions" }, 233 + {}, 234 + }; 235 + MODULE_DEVICE_TABLE(of, scpart_parser_of_match_table); 236 + 237 + static struct mtd_part_parser scpart_parser = { 238 + .parse_fn = scpart_parse, 239 + .name = "scpart", 240 + .of_match_table = scpart_parser_of_match_table, 241 + }; 242 + module_mtd_part_parser(scpart_parser); 243 + 244 + /* mtd parsers will request the module by parser name */ 245 + MODULE_ALIAS("scpart"); 246 + MODULE_LICENSE("GPL"); 247 + MODULE_AUTHOR("NOGUCHI Hiroshi <drvlabo@gmail.com>"); 248 + MODULE_AUTHOR("Mikhail Zhilkin <csharper2005@gmail.com>"); 249 + MODULE_DESCRIPTION("Sercomm partition parser");
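The core of the Sercomm parser is the descriptor walk in `scpart_scan_partmap()`: starting at a fixed offset inside the erase block holding the `SC PART MAP` signature, 12-byte `{id, offset, size}` records are counted until one with any field still erased (all 0xff) is reached. A condensed model of that scan, under the assumption of a descriptor array already read into memory (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors struct sc_part_desc: all fields 0xffffffff marks end-of-table. */
struct sc_desc {
	uint32_t part_id;
	uint32_t part_offs;
	uint32_t part_bytes;
};

/* Count consecutive valid descriptors, stopping at the first erased one. */
static int count_sc_descs(const struct sc_desc *d, int max)
{
	int n = 0;

	while (n < max &&
	       d[n].part_id != 0xffffffffu &&
	       d[n].part_offs != 0xffffffffu &&
	       d[n].part_bytes != 0xffffffffu)
		n++;

	return n;
}
```

The parser then matches each counted descriptor's `part_id` against the `sercomm,scpart-id` properties of the OF partition subnodes, taking offsets and sizes from the on-flash map rather than the device tree — which is the point of the parser, since those offsets shift with the NAND's bad-block locations.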
+1 -1
drivers/mtd/sm_ftl.c
··· 1111 1111 { 1112 1112 struct sm_ftl *ftl = dev->priv; 1113 1113 1114 - mutex_lock(&ftl->mutex); 1115 1114 del_timer_sync(&ftl->timer); 1116 1115 cancel_work_sync(&ftl->flush_work); 1116 + mutex_lock(&ftl->mutex); 1117 1117 sm_cache_flush(ftl); 1118 1118 mutex_unlock(&ftl->mutex); 1119 1119 }
+1 -1
drivers/mtd/spi-nor/controllers/hisi-sfc.c
··· 237 237 reg = readl(host->regbase + FMC_CFG); 238 238 reg &= ~(FMC_CFG_OP_MODE_MASK | SPI_NOR_ADDR_MODE_MASK); 239 239 reg |= FMC_CFG_OP_MODE_NORMAL; 240 - reg |= (nor->addr_width == 4) ? SPI_NOR_ADDR_MODE_4BYTES 240 + reg |= (nor->addr_nbytes == 4) ? SPI_NOR_ADDR_MODE_4BYTES 241 241 : SPI_NOR_ADDR_MODE_3BYTES; 242 242 writel(reg, host->regbase + FMC_CFG); 243 243
+4 -4
drivers/mtd/spi-nor/controllers/nxp-spifi.c
··· 203 203 SPIFI_CMD_DATALEN(len) | 204 204 SPIFI_CMD_FIELDFORM_ALL_SERIAL | 205 205 SPIFI_CMD_OPCODE(nor->program_opcode) | 206 - SPIFI_CMD_FRAMEFORM(spifi->nor.addr_width + 1); 206 + SPIFI_CMD_FRAMEFORM(spifi->nor.addr_nbytes + 1); 207 207 writel(cmd, spifi->io_base + SPIFI_CMD); 208 208 209 209 for (i = 0; i < len; i++) ··· 230 230 231 231 cmd = SPIFI_CMD_FIELDFORM_ALL_SERIAL | 232 232 SPIFI_CMD_OPCODE(nor->erase_opcode) | 233 - SPIFI_CMD_FRAMEFORM(spifi->nor.addr_width + 1); 233 + SPIFI_CMD_FRAMEFORM(spifi->nor.addr_nbytes + 1); 234 234 writel(cmd, spifi->io_base + SPIFI_CMD); 235 235 236 236 return nxp_spifi_wait_for_cmd(spifi); ··· 252 252 } 253 253 254 254 /* Memory mode supports address length between 1 and 4 */ 255 - if (spifi->nor.addr_width < 1 || spifi->nor.addr_width > 4) 255 + if (spifi->nor.addr_nbytes < 1 || spifi->nor.addr_nbytes > 4) 256 256 return -EINVAL; 257 257 258 258 spifi->mcmd |= SPIFI_CMD_OPCODE(spifi->nor.read_opcode) | 259 259 SPIFI_CMD_INTLEN(spifi->nor.read_dummy / 8) | 260 - SPIFI_CMD_FRAMEFORM(spifi->nor.addr_width + 1); 260 + SPIFI_CMD_FRAMEFORM(spifi->nor.addr_nbytes + 1); 261 261 262 262 return 0; 263 263 }
+32 -38
drivers/mtd/spi-nor/core.c
··· 38 38 */ 39 39 #define CHIP_ERASE_2MB_READY_WAIT_JIFFIES (40UL * HZ) 40 40 41 - #define SPI_NOR_MAX_ADDR_WIDTH 4 41 + #define SPI_NOR_MAX_ADDR_NBYTES 4 42 42 43 43 #define SPI_NOR_SRST_SLEEP_MIN 200 44 44 #define SPI_NOR_SRST_SLEEP_MAX 400 ··· 177 177 178 178 static int spi_nor_controller_ops_erase(struct spi_nor *nor, loff_t offs) 179 179 { 180 - if (spi_nor_protocol_is_dtr(nor->write_proto)) 180 + if (spi_nor_protocol_is_dtr(nor->reg_proto)) 181 181 return -EOPNOTSUPP; 182 182 183 183 return nor->controller_ops->erase(nor, offs); ··· 198 198 { 199 199 struct spi_mem_op op = 200 200 SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 0), 201 - SPI_MEM_OP_ADDR(nor->addr_width, from, 0), 201 + SPI_MEM_OP_ADDR(nor->addr_nbytes, from, 0), 202 202 SPI_MEM_OP_DUMMY(nor->read_dummy, 0), 203 203 SPI_MEM_OP_DATA_IN(len, buf, 0)); 204 204 bool usebouncebuf; ··· 262 262 { 263 263 struct spi_mem_op op = 264 264 SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 0), 265 - SPI_MEM_OP_ADDR(nor->addr_width, to, 0), 265 + SPI_MEM_OP_ADDR(nor->addr_nbytes, to, 0), 266 266 SPI_MEM_OP_NO_DUMMY, 267 267 SPI_MEM_OP_DATA_OUT(len, buf, 0)); 268 268 ssize_t nbytes; ··· 972 972 if (nor->spimem) { 973 973 struct spi_mem_op op = SPI_NOR_CHIP_ERASE_OP; 974 974 975 - spi_nor_spimem_setup_op(nor, &op, nor->write_proto); 975 + spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); 976 976 977 977 ret = spi_mem_exec_op(nor->spimem, &op); 978 978 } else { ··· 1113 1113 if (nor->spimem) { 1114 1114 struct spi_mem_op op = 1115 1115 SPI_NOR_SECTOR_ERASE_OP(nor->erase_opcode, 1116 - nor->addr_width, addr); 1116 + nor->addr_nbytes, addr); 1117 1117 1118 - spi_nor_spimem_setup_op(nor, &op, nor->write_proto); 1118 + spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); 1119 1119 1120 1120 return spi_mem_exec_op(nor->spimem, &op); 1121 1121 } else if (nor->controller_ops->erase) { ··· 1126 1126 * Default implementation, if driver doesn't have a specialized HW 1127 1127 * control 1128 1128 */ 1129 - for (i = 
nor->addr_width - 1; i >= 0; i--) { 1129 + for (i = nor->addr_nbytes - 1; i >= 0; i--) { 1130 1130 nor->bouncebuf[i] = addr & 0xff; 1131 1131 addr >>= 8; 1132 1132 } 1133 1133 1134 1134 return spi_nor_controller_ops_write_reg(nor, nor->erase_opcode, 1135 - nor->bouncebuf, nor->addr_width); 1135 + nor->bouncebuf, nor->addr_nbytes); 1136 1136 } 1137 1137 1138 1138 /** ··· 2249 2249 return 0; 2250 2250 } 2251 2251 2252 - static int spi_nor_set_addr_width(struct spi_nor *nor) 2252 + static int spi_nor_set_addr_nbytes(struct spi_nor *nor) 2253 2253 { 2254 - if (nor->addr_width) { 2255 - /* already configured from SFDP */ 2254 + if (nor->params->addr_nbytes) { 2255 + nor->addr_nbytes = nor->params->addr_nbytes; 2256 2256 } else if (nor->read_proto == SNOR_PROTO_8_8_8_DTR) { 2257 2257 /* 2258 2258 * In 8D-8D-8D mode, one byte takes half a cycle to transfer. So 2259 - * in this protocol an odd address width cannot be used because 2259 + * in this protocol an odd addr_nbytes cannot be used because 2260 2260 * then the address phase would only span a cycle and a half. 2261 2261 * Half a cycle would be left over. We would then have to start 2262 2262 * the dummy phase in the middle of a cycle and so too the data 2263 2263 * phase, and we will end the transaction with half a cycle left 2264 2264 * over. 2265 2265 * 2266 - * Force all 8D-8D-8D flashes to use an address width of 4 to 2266 + * Force all 8D-8D-8D flashes to use an addr_nbytes of 4 to 2267 2267 * avoid this situation. 
2268 2268 */ 2269 - nor->addr_width = 4; 2270 - } else if (nor->info->addr_width) { 2271 - nor->addr_width = nor->info->addr_width; 2269 + nor->addr_nbytes = 4; 2270 + } else if (nor->info->addr_nbytes) { 2271 + nor->addr_nbytes = nor->info->addr_nbytes; 2272 2272 } else { 2273 - nor->addr_width = 3; 2273 + nor->addr_nbytes = 3; 2274 2274 } 2275 2275 2276 - if (nor->addr_width == 3 && nor->params->size > 0x1000000) { 2276 + if (nor->addr_nbytes == 3 && nor->params->size > 0x1000000) { 2277 2277 /* enable 4-byte addressing if the device exceeds 16MiB */ 2278 - nor->addr_width = 4; 2278 + nor->addr_nbytes = 4; 2279 2279 } 2280 2280 2281 - if (nor->addr_width > SPI_NOR_MAX_ADDR_WIDTH) { 2282 - dev_dbg(nor->dev, "address width is too large: %u\n", 2283 - nor->addr_width); 2281 + if (nor->addr_nbytes > SPI_NOR_MAX_ADDR_NBYTES) { 2282 + dev_dbg(nor->dev, "The number of address bytes is too large: %u\n", 2283 + nor->addr_nbytes); 2284 2284 return -EINVAL; 2285 2285 } 2286 2286 2287 2287 /* Set 4byte opcodes when possible. 
*/ 2288 - if (nor->addr_width == 4 && nor->flags & SNOR_F_4B_OPCODES && 2288 + if (nor->addr_nbytes == 4 && nor->flags & SNOR_F_4B_OPCODES && 2289 2289 !(nor->flags & SNOR_F_HAS_4BAIT)) 2290 2290 spi_nor_set_4byte_opcodes(nor); 2291 2291 ··· 2304 2304 if (ret) 2305 2305 return ret; 2306 2306 2307 - return spi_nor_set_addr_width(nor); 2307 + return spi_nor_set_addr_nbytes(nor); 2308 2308 } 2309 2309 2310 2310 /** ··· 2382 2382 */ 2383 2383 erase_mask = 0; 2384 2384 i = 0; 2385 - if (no_sfdp_flags & SECT_4K_PMC) { 2386 - erase_mask |= BIT(i); 2387 - spi_nor_set_erase_type(&map->erase_type[i], 4096u, 2388 - SPINOR_OP_BE_4K_PMC); 2389 - i++; 2390 - } else if (no_sfdp_flags & SECT_4K) { 2385 + if (no_sfdp_flags & SECT_4K) { 2391 2386 erase_mask |= BIT(i); 2392 2387 spi_nor_set_erase_type(&map->erase_type[i], 4096u, 2393 2388 SPINOR_OP_BE_4K); ··· 2492 2497 2493 2498 if (spi_nor_parse_sfdp(nor)) { 2494 2499 memcpy(nor->params, &sfdp_params, sizeof(*nor->params)); 2495 - nor->addr_width = 0; 2496 2500 nor->flags &= ~SNOR_F_4B_OPCODES; 2497 2501 } 2498 2502 } ··· 2712 2718 nor->flags & SNOR_F_SWP_IS_VOLATILE)) 2713 2719 spi_nor_try_unlock_all(nor); 2714 2720 2715 - if (nor->addr_width == 4 && 2721 + if (nor->addr_nbytes == 4 && 2716 2722 nor->read_proto != SNOR_PROTO_8_8_8_DTR && 2717 2723 !(nor->flags & SNOR_F_4B_OPCODES)) { 2718 2724 /* ··· 2724 2730 */ 2725 2731 WARN_ONCE(nor->flags & SNOR_F_BROKEN_RESET, 2726 2732 "enabling reset hack; may not recover from unexpected reboots\n"); 2727 - nor->params->set_4byte_addr_mode(nor, true); 2733 + return nor->params->set_4byte_addr_mode(nor, true); 2728 2734 } 2729 2735 2730 2736 return 0; ··· 2839 2845 void spi_nor_restore(struct spi_nor *nor) 2840 2846 { 2841 2847 /* restore the addressing mode */ 2842 - if (nor->addr_width == 4 && !(nor->flags & SNOR_F_4B_OPCODES) && 2848 + if (nor->addr_nbytes == 4 && !(nor->flags & SNOR_F_4B_OPCODES) && 2843 2849 nor->flags & SNOR_F_BROKEN_RESET) 2844 2850 
nor->params->set_4byte_addr_mode(nor, false); 2845 2851 ··· 2983 2989 * - select op codes for (Fast) Read, Page Program and Sector Erase. 2984 2990 * - set the number of dummy cycles (mode cycles + wait states). 2985 2991 * - set the SPI protocols for register and memory accesses. 2986 - * - set the address width. 2992 + * - set the number of address bytes. 2987 2993 */ 2988 2994 ret = spi_nor_setup(nor, hwcaps); 2989 2995 if (ret) ··· 3024 3030 { 3025 3031 struct spi_mem_dirmap_info info = { 3026 3032 .op_tmpl = SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 0), 3027 - SPI_MEM_OP_ADDR(nor->addr_width, 0, 0), 3033 + SPI_MEM_OP_ADDR(nor->addr_nbytes, 0, 0), 3028 3034 SPI_MEM_OP_DUMMY(nor->read_dummy, 0), 3029 3035 SPI_MEM_OP_DATA_IN(0, NULL, 0)), 3030 3036 .offset = 0, ··· 3055 3061 { 3056 3062 struct spi_mem_dirmap_info info = { 3057 3063 .op_tmpl = SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 0), 3058 - SPI_MEM_OP_ADDR(nor->addr_width, 0, 0), 3064 + SPI_MEM_OP_ADDR(nor->addr_nbytes, 0, 0), 3059 3065 SPI_MEM_OP_NO_DUMMY, 3060 3066 SPI_MEM_OP_DATA_OUT(0, NULL, 0)), 3061 3067 .offset = 0,
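The renamed `spi_nor_set_addr_nbytes()` above resolves the number of address bytes through a precedence chain: SFDP-filled `params->addr_nbytes` wins; 8D-8D-8D flashes are forced to 4 bytes (an odd count would split a DTR cycle in half); then the `flash_info` value; then a 3-byte default, upgraded to 4 for flashes larger than 16 MiB. A condensed sketch of that decision chain (illustrative helper, returning -1 where the driver returns `-EINVAL`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Condensed decision chain of spi_nor_set_addr_nbytes(). */
static int pick_addr_nbytes(uint8_t params_nbytes, bool octal_dtr,
			    uint8_t info_nbytes, uint64_t flash_size)
{
	int nbytes;

	if (params_nbytes)
		nbytes = params_nbytes;		/* e.g. set at SFDP parse */
	else if (octal_dtr)
		nbytes = 4;			/* 8D-8D-8D: even count only */
	else if (info_nbytes)
		nbytes = info_nbytes;		/* flash_info override */
	else
		nbytes = 3;

	/* Devices above 16 MiB need 4-byte addressing. */
	if (nbytes == 3 && flash_size > 0x1000000)
		nbytes = 4;

	return nbytes <= 4 ? nbytes : -1;	/* SPI_NOR_MAX_ADDR_NBYTES */
}
```

Note how this dovetails with the "do not change nor->addr_nbytes at SFDP parsing time" change from the merge message: SFDP now fills `params->addr_nbytes`, and only this setup step copies it into `nor->addr_nbytes`.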
+13 -8
drivers/mtd/spi-nor/core.h
··· 84 84 SPI_MEM_OP_NO_DUMMY, \ 85 85 SPI_MEM_OP_NO_DATA) 86 86 87 - #define SPI_NOR_SECTOR_ERASE_OP(opcode, addr_width, addr) \ 87 + #define SPI_NOR_SECTOR_ERASE_OP(opcode, addr_nbytes, addr) \ 88 88 SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 0), \ 89 - SPI_MEM_OP_ADDR(addr_width, addr, 0), \ 89 + SPI_MEM_OP_ADDR(addr_nbytes, addr, 0), \ 90 90 SPI_MEM_OP_NO_DUMMY, \ 91 91 SPI_MEM_OP_NO_DATA) 92 92 ··· 340 340 * @writesize Minimal writable flash unit size. Defaults to 1. Set to 341 341 * ECC unit size for ECC-ed flashes. 342 342 * @page_size: the page size of the SPI NOR flash memory. 343 + * @addr_nbytes: number of address bytes to send. 344 + * @addr_mode_nbytes: number of address bytes of current address mode. Useful 345 + * when the flash operates with 4B opcodes but needs the 346 + * internal address mode for opcodes that don't have a 4B 347 + * opcode correspondent. 343 348 * @rdsr_dummy: dummy cycles needed for Read Status Register command 344 349 * in octal DTR mode. 345 350 * @rdsr_addr_nbytes: dummy address bytes needed for Read Status Register ··· 377 372 u64 size; 378 373 u32 writesize; 379 374 u32 page_size; 375 + u8 addr_nbytes; 376 + u8 addr_mode_nbytes; 380 377 u8 rdsr_dummy; 381 378 u8 rdsr_addr_nbytes; 382 379 ··· 436 429 * isn't necessarily called a "sector" by the vendor. 437 430 * @n_sectors: the number of sectors. 438 431 * @page_size: the flash's page size. 439 - * @addr_width: the flash's address width. 432 + * @addr_nbytes: number of address bytes to send. 440 433 * 441 434 * @parse_sfdp: true when flash supports SFDP tables. The false value has no 442 435 * meaning. If one wants to skip the SFDP tables, one should ··· 464 457 * flags are used together with the SPI_NOR_SKIP_SFDP flag. 465 458 * SPI_NOR_SKIP_SFDP: skip parsing of SFDP tables. 466 459 * SECT_4K: SPINOR_OP_BE_4K works uniformly. 467 - * SECT_4K_PMC: SPINOR_OP_BE_4K_PMC works uniformly. 468 460 * SPI_NOR_DUAL_READ: flash supports Dual Read. 
469 461 * SPI_NOR_QUAD_READ: flash supports Quad Read. 470 462 * SPI_NOR_OCTAL_READ: flash supports Octal Read. ··· 494 488 unsigned sector_size; 495 489 u16 n_sectors; 496 490 u16 page_size; 497 - u16 addr_width; 491 + u8 addr_nbytes; 498 492 499 493 bool parse_sfdp; 500 494 u16 flags; ··· 511 505 u8 no_sfdp_flags; 512 506 #define SPI_NOR_SKIP_SFDP BIT(0) 513 507 #define SECT_4K BIT(1) 514 - #define SECT_4K_PMC BIT(2) 515 508 #define SPI_NOR_DUAL_READ BIT(3) 516 509 #define SPI_NOR_QUAD_READ BIT(4) 517 510 #define SPI_NOR_OCTAL_READ BIT(5) ··· 555 550 .n_sectors = (_n_sectors), \ 556 551 .page_size = 256, \ 557 552 558 - #define CAT25_INFO(_sector_size, _n_sectors, _page_size, _addr_width) \ 553 + #define CAT25_INFO(_sector_size, _n_sectors, _page_size, _addr_nbytes) \ 559 554 .sector_size = (_sector_size), \ 560 555 .n_sectors = (_n_sectors), \ 561 556 .page_size = (_page_size), \ 562 - .addr_width = (_addr_width), \ 557 + .addr_nbytes = (_addr_nbytes), \ 563 558 .flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR, \ 564 559 565 560 #define OTP_INFO(_len, _n_regions, _base, _offset) \
+1 -1
drivers/mtd/spi-nor/debugfs.c
··· 86 86 seq_printf(s, "size\t\t%s\n", buf); 87 87 seq_printf(s, "write size\t%u\n", params->writesize); 88 88 seq_printf(s, "page size\t%u\n", params->page_size); 89 - seq_printf(s, "address width\t%u\n", nor->addr_width); 89 + seq_printf(s, "address nbytes\t%u\n", nor->addr_nbytes); 90 90 91 91 seq_puts(s, "flags\t\t"); 92 92 spi_nor_print_flags(s, nor->flags, snor_f_names, sizeof(snor_f_names));
+1 -1
drivers/mtd/spi-nor/esmt.c
··· 13 13 { "f25l32pa", INFO(0x8c2016, 0, 64 * 1024, 64) 14 14 FLAGS(SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) 15 15 NO_SFDP_FLAGS(SECT_4K) }, 16 - { "f25l32qa", INFO(0x8c4116, 0, 64 * 1024, 64) 16 + { "f25l32qa-2s", INFO(0x8c4116, 0, 64 * 1024, 64) 17 17 FLAGS(SPI_NOR_HAS_LOCK) 18 18 NO_SFDP_FLAGS(SECT_4K) }, 19 19 { "f25l64qa", INFO(0x8c4117, 0, 64 * 1024, 128)
+25 -6
drivers/mtd/spi-nor/issi.c
··· 14 14 const struct sfdp_bfpt *bfpt) 15 15 { 16 16 /* 17 - * IS25LP256 supports 4B opcodes, but the BFPT advertises a 18 - * BFPT_DWORD1_ADDRESS_BYTES_3_ONLY address width. 19 - * Overwrite the address width advertised by the BFPT. 17 + * IS25LP256 supports 4B opcodes, but the BFPT advertises 18 + * BFPT_DWORD1_ADDRESS_BYTES_3_ONLY. 19 + * Overwrite the number of address bytes advertised by the BFPT. 20 20 */ 21 21 if ((bfpt->dwords[BFPT_DWORD(1)] & BFPT_DWORD1_ADDRESS_BYTES_MASK) == 22 22 BFPT_DWORD1_ADDRESS_BYTES_3_ONLY) 23 - nor->addr_width = 4; 23 + nor->params->addr_nbytes = 4; 24 24 25 25 return 0; 26 26 } 27 27 28 28 static const struct spi_nor_fixups is25lp256_fixups = { 29 29 .post_bfpt = is25lp256_post_bfpt_fixups, 30 + }; 31 + 32 + static void pm25lv_nor_late_init(struct spi_nor *nor) 33 + { 34 + struct spi_nor_erase_map *map = &nor->params->erase_map; 35 + int i; 36 + 37 + /* The PM25LV series has a different 4k sector erase opcode */ 38 + for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) 39 + if (map->erase_type[i].size == 4096) 40 + map->erase_type[i].opcode = SPINOR_OP_BE_4K_PMC; 41 + } 42 + 43 + static const struct spi_nor_fixups pm25lv_nor_fixups = { 44 + .late_init = pm25lv_nor_late_init, 30 45 }; 31 46 32 47 static const struct flash_info issi_nor_parts[] = { ··· 77 62 78 63 /* PMC */ 79 64 { "pm25lv512", INFO(0, 0, 32 * 1024, 2) 80 - NO_SFDP_FLAGS(SECT_4K_PMC) }, 65 + NO_SFDP_FLAGS(SECT_4K) 66 + .fixups = &pm25lv_nor_fixups 67 + }, 81 68 { "pm25lv010", INFO(0, 0, 32 * 1024, 4) 82 - NO_SFDP_FLAGS(SECT_4K_PMC) }, 69 + NO_SFDP_FLAGS(SECT_4K) 70 + .fixups = &pm25lv_nor_fixups 71 + }, 83 72 { "pm25lq032", INFO(0x7f9d46, 0, 64 * 1024, 64) 84 73 NO_SFDP_FLAGS(SECT_4K) }, 85 74 };
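The issi.c hunk above replaces the removed core flag SECT_4K_PMC with a per-flash `late_init` fixup that patches the erase map. A standalone sketch of that fixup pattern, using stub types rather than the kernel's structs (the opcode values match the kernel's SPINOR_OP_BE_4K/SPINOR_OP_BE_4K_PMC definitions):

```c
#include <assert.h>
#include <stdint.h>

#define SPINOR_OP_BE_4K		0x20	/* generic 4 KiB erase opcode */
#define SPINOR_OP_BE_4K_PMC	0xd7	/* PMC-specific 4 KiB erase opcode */
#define SNOR_ERASE_TYPE_MAX	4

/* Stub of the kernel's erase-type descriptor, just enough for the sketch. */
struct erase_type {
	uint32_t size;
	uint8_t opcode;
};

/*
 * Walk the erase map and swap in the vendor opcode for every 4 KiB
 * erase type -- the job pm25lv_nor_late_init() now does instead of a
 * core-wide flag.
 */
static void pm25lv_fixup(struct erase_type *et)
{
	for (int i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
		if (et[i].size == 4096)
			et[i].opcode = SPINOR_OP_BE_4K_PMC;
}
```

Keeping vendor quirks in fixups rather than core flags is the design point of this series: the core stays generic and each oddball flash pays for its own behavior.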
+10 -2
drivers/mtd/spi-nor/micron-st.c
··· 399 399 return sr_ready; 400 400 401 401 ret = micron_st_nor_read_fsr(nor, nor->bouncebuf); 402 - if (ret) 403 - return ret; 402 + if (ret) { 403 + /* 404 + * Some controllers, such as Intel SPI, do not support low 405 + * level operations such as reading the flag status 406 + * register. They only expose small amount of high level 407 + * operations to the software. If this is the case we use 408 + * only the status register value. 409 + */ 410 + return ret == -EOPNOTSUPP ? sr_ready : ret; 411 + } 404 412 405 413 if (nor->bouncebuf[0] & (FSR_E_ERR | FSR_P_ERR)) { 406 414 if (nor->bouncebuf[0] & FSR_E_ERR)
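The micron-st.c hunk above makes FSR polling degrade gracefully: controllers that cannot issue low-level register reads return -EOPNOTSUPP, and the driver then trusts the plain status register. A userspace sketch of just that decision (the real function additionally checks the FSR_E_ERR/FSR_P_ERR bits when the read succeeds):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * sr_ready is the result already obtained from the status register;
 * fsr_read_ret is the return code of the attempted FSR read.
 * -EOPNOTSUPP means "controller can't do it", so fall back to sr_ready;
 * any other error is still propagated.
 */
static int micron_ready(int fsr_read_ret, bool sr_ready)
{
	if (fsr_read_ret)
		return fsr_read_ret == -EOPNOTSUPP ? sr_ready : fsr_read_ret;
	return sr_ready;
}
```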
+6 -6
drivers/mtd/spi-nor/otp.c
··· 35 35 */ 36 36 int spi_nor_otp_read_secr(struct spi_nor *nor, loff_t addr, size_t len, u8 *buf) 37 37 { 38 - u8 addr_width, read_opcode, read_dummy; 38 + u8 addr_nbytes, read_opcode, read_dummy; 39 39 struct spi_mem_dirmap_desc *rdesc; 40 40 enum spi_nor_protocol read_proto; 41 41 int ret; 42 42 43 43 read_opcode = nor->read_opcode; 44 - addr_width = nor->addr_width; 44 + addr_nbytes = nor->addr_nbytes; 45 45 read_dummy = nor->read_dummy; 46 46 read_proto = nor->read_proto; 47 47 rdesc = nor->dirmap.rdesc; ··· 54 54 ret = spi_nor_read_data(nor, addr, len, buf); 55 55 56 56 nor->read_opcode = read_opcode; 57 - nor->addr_width = addr_width; 57 + nor->addr_nbytes = addr_nbytes; 58 58 nor->read_dummy = read_dummy; 59 59 nor->read_proto = read_proto; 60 60 nor->dirmap.rdesc = rdesc; ··· 85 85 { 86 86 enum spi_nor_protocol write_proto; 87 87 struct spi_mem_dirmap_desc *wdesc; 88 - u8 addr_width, program_opcode; 88 + u8 addr_nbytes, program_opcode; 89 89 int ret, written; 90 90 91 91 program_opcode = nor->program_opcode; 92 - addr_width = nor->addr_width; 92 + addr_nbytes = nor->addr_nbytes; 93 93 write_proto = nor->write_proto; 94 94 wdesc = nor->dirmap.wdesc; 95 95 ··· 113 113 114 114 out: 115 115 nor->program_opcode = program_opcode; 116 - nor->addr_width = addr_width; 116 + nor->addr_nbytes = addr_nbytes; 117 117 nor->write_proto = write_proto; 118 118 nor->dirmap.wdesc = wdesc; 119 119
+18 -16
drivers/mtd/spi-nor/sfdp.c
··· 134 134 135 135 /** 136 136 * spi_nor_read_raw() - raw read of serial flash memory. read_opcode, 137 - * addr_width and read_dummy members of the struct spi_nor 137 + * addr_nbytes and read_dummy members of the struct spi_nor 138 138 * should be previously 139 139 * set. 140 140 * @nor: pointer to a 'struct spi_nor' ··· 178 178 static int spi_nor_read_sfdp(struct spi_nor *nor, u32 addr, 179 179 size_t len, void *buf) 180 180 { 181 - u8 addr_width, read_opcode, read_dummy; 181 + u8 addr_nbytes, read_opcode, read_dummy; 182 182 int ret; 183 183 184 184 read_opcode = nor->read_opcode; 185 - addr_width = nor->addr_width; 185 + addr_nbytes = nor->addr_nbytes; 186 186 read_dummy = nor->read_dummy; 187 187 188 188 nor->read_opcode = SPINOR_OP_RDSFDP; 189 - nor->addr_width = 3; 189 + nor->addr_nbytes = 3; 190 190 nor->read_dummy = 8; 191 191 192 192 ret = spi_nor_read_raw(nor, addr, len, buf); 193 193 194 194 nor->read_opcode = read_opcode; 195 - nor->addr_width = addr_width; 195 + nor->addr_nbytes = addr_nbytes; 196 196 nor->read_dummy = read_dummy; 197 197 198 198 return ret; ··· 462 462 switch (bfpt.dwords[BFPT_DWORD(1)] & BFPT_DWORD1_ADDRESS_BYTES_MASK) { 463 463 case BFPT_DWORD1_ADDRESS_BYTES_3_ONLY: 464 464 case BFPT_DWORD1_ADDRESS_BYTES_3_OR_4: 465 - nor->addr_width = 3; 465 + params->addr_nbytes = 3; 466 + params->addr_mode_nbytes = 3; 466 467 break; 467 468 468 469 case BFPT_DWORD1_ADDRESS_BYTES_4_ONLY: 469 - nor->addr_width = 4; 470 + params->addr_nbytes = 4; 471 + params->addr_mode_nbytes = 4; 470 472 break; 471 473 472 474 default: ··· 639 637 } 640 638 641 639 /** 642 - * spi_nor_smpt_addr_width() - return the address width used in the 640 + * spi_nor_smpt_addr_nbytes() - return the number of address bytes used in the 643 641 * configuration detection command. 
644 642 * @nor: pointer to a 'struct spi_nor' 645 643 * @settings: configuration detection command descriptor, dword1 646 644 */ 647 - static u8 spi_nor_smpt_addr_width(const struct spi_nor *nor, const u32 settings) 645 + static u8 spi_nor_smpt_addr_nbytes(const struct spi_nor *nor, const u32 settings) 648 646 { 649 647 switch (settings & SMPT_CMD_ADDRESS_LEN_MASK) { 650 648 case SMPT_CMD_ADDRESS_LEN_0: ··· 655 653 return 4; 656 654 case SMPT_CMD_ADDRESS_LEN_USE_CURRENT: 657 655 default: 658 - return nor->addr_width; 656 + return nor->params->addr_mode_nbytes; 659 657 } 660 658 } 661 659 ··· 692 690 u32 addr; 693 691 int err; 694 692 u8 i; 695 - u8 addr_width, read_opcode, read_dummy; 693 + u8 addr_nbytes, read_opcode, read_dummy; 696 694 u8 read_data_mask, map_id; 697 695 698 696 /* Use a kmalloc'ed bounce buffer to guarantee it is DMA-able. */ ··· 700 698 if (!buf) 701 699 return ERR_PTR(-ENOMEM); 702 700 703 - addr_width = nor->addr_width; 701 + addr_nbytes = nor->addr_nbytes; 704 702 read_dummy = nor->read_dummy; 705 703 read_opcode = nor->read_opcode; 706 704 ··· 711 709 break; 712 710 713 711 read_data_mask = SMPT_CMD_READ_DATA(smpt[i]); 714 - nor->addr_width = spi_nor_smpt_addr_width(nor, smpt[i]); 712 + nor->addr_nbytes = spi_nor_smpt_addr_nbytes(nor, smpt[i]); 715 713 nor->read_dummy = spi_nor_smpt_read_dummy(nor, smpt[i]); 716 714 nor->read_opcode = SMPT_CMD_OPCODE(smpt[i]); 717 715 addr = smpt[i + 1]; ··· 758 756 /* fall through */ 759 757 out: 760 758 kfree(buf); 761 - nor->addr_width = addr_width; 759 + nor->addr_nbytes = addr_nbytes; 762 760 nor->read_dummy = read_dummy; 763 761 nor->read_opcode = read_opcode; 764 762 return ret; ··· 1046 1044 /* 1047 1045 * We need at least one 4-byte op code per read, program and erase 1048 1046 * operation; the .read(), .write() and .erase() hooks share the 1049 - * nor->addr_width value. 1047 + * nor->addr_nbytes value. 
1050 1048 */ 1051 1049 if (!read_hwcaps || !pp_hwcaps || !erase_mask) 1052 1050 goto out; ··· 1100 1098 * Spansion memory. However this quirk is no longer needed with new 1101 1099 * SFDP compliant memories. 1102 1100 */ 1103 - nor->addr_width = 4; 1101 + params->addr_nbytes = 4; 1104 1102 nor->flags |= SNOR_F_4B_OPCODES | SNOR_F_HAS_4BAIT; 1105 1103 1106 1104 /* fall through */
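The sfdp.c hunks above stop writing `nor->addr_width` during BFPT parsing and fill `params->addr_nbytes`/`addr_mode_nbytes` instead. A sketch of the DWORD-1 decode itself; the bit positions follow the kernel's BFPT_DWORD1_ADDRESS_BYTES_* macros (a two-bit field at bits 18:17) and should be treated as an assumption of this sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Decode the BFPT 1st-DWORD address-bytes field into a byte count. */
static int bfpt_addr_nbytes(uint32_t dword1)
{
	switch ((dword1 >> 17) & 0x3) {
	case 0x0:	/* 3-byte addressing only */
	case 0x1:	/* 3- or 4-byte; start in 3-byte mode */
		return 3;
	case 0x2:	/* 4-byte addressing only */
		return 4;
	default:	/* reserved encoding */
		return -1;
	}
}
```

The split into `addr_nbytes` vs `addr_mode_nbytes` is what lets the core use 4B opcodes while still knowing the flash's internal address mode for opcodes that have no 4B variant.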
+163 -22
drivers/mtd/spi-nor/spansion.c
··· 14 14 #define SPINOR_OP_CLSR 0x30 /* Clear status register 1 */ 15 15 #define SPINOR_OP_RD_ANY_REG 0x65 /* Read any register */ 16 16 #define SPINOR_OP_WR_ANY_REG 0x71 /* Write any register */ 17 + #define SPINOR_REG_CYPRESS_CFR1V 0x00800002 18 + #define SPINOR_REG_CYPRESS_CFR1V_QUAD_EN BIT(1) /* Quad Enable */ 17 19 #define SPINOR_REG_CYPRESS_CFR2V 0x00800003 18 20 #define SPINOR_REG_CYPRESS_CFR2V_MEMLAT_11_24 0xb 19 21 #define SPINOR_REG_CYPRESS_CFR3V 0x00800004 ··· 116 114 } 117 115 118 116 /** 117 + * cypress_nor_quad_enable_volatile() - enable Quad I/O mode in volatile 118 + * register. 119 + * @nor: pointer to a 'struct spi_nor' 120 + * 121 + * It is recommended to update volatile registers in the field application due 122 + * to a risk of the non-volatile registers corruption by power interrupt. This 123 + * function sets Quad Enable bit in CFR1 volatile. If users set the Quad Enable 124 + * bit in the CFR1 non-volatile in advance (typically by a Flash programmer 125 + * before mounting Flash on PCB), the Quad Enable bit in the CFR1 volatile is 126 + * also set during Flash power-up. 127 + * 128 + * Return: 0 on success, -errno otherwise. 129 + */ 130 + static int cypress_nor_quad_enable_volatile(struct spi_nor *nor) 131 + { 132 + struct spi_mem_op op; 133 + u8 addr_mode_nbytes = nor->params->addr_mode_nbytes; 134 + u8 cfr1v_written; 135 + int ret; 136 + 137 + op = (struct spi_mem_op) 138 + CYPRESS_NOR_RD_ANY_REG_OP(addr_mode_nbytes, 139 + SPINOR_REG_CYPRESS_CFR1V, 140 + nor->bouncebuf); 141 + 142 + ret = spi_nor_read_any_reg(nor, &op, nor->reg_proto); 143 + if (ret) 144 + return ret; 145 + 146 + if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR1V_QUAD_EN) 147 + return 0; 148 + 149 + /* Update the Quad Enable bit. 
*/ 150 + nor->bouncebuf[0] |= SPINOR_REG_CYPRESS_CFR1V_QUAD_EN; 151 + op = (struct spi_mem_op) 152 + CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes, 153 + SPINOR_REG_CYPRESS_CFR1V, 1, 154 + nor->bouncebuf); 155 + ret = spi_nor_write_any_volatile_reg(nor, &op, nor->reg_proto); 156 + if (ret) 157 + return ret; 158 + 159 + cfr1v_written = nor->bouncebuf[0]; 160 + 161 + /* Read back and check it. */ 162 + op = (struct spi_mem_op) 163 + CYPRESS_NOR_RD_ANY_REG_OP(addr_mode_nbytes, 164 + SPINOR_REG_CYPRESS_CFR1V, 165 + nor->bouncebuf); 166 + ret = spi_nor_read_any_reg(nor, &op, nor->reg_proto); 167 + if (ret) 168 + return ret; 169 + 170 + if (nor->bouncebuf[0] != cfr1v_written) { 171 + dev_err(nor->dev, "CFR1: Read back test failed\n"); 172 + return -EIO; 173 + } 174 + 175 + return 0; 176 + } 177 + 178 + /** 179 + * cypress_nor_set_page_size() - Set page size which corresponds to the flash 180 + * configuration. 181 + * @nor: pointer to a 'struct spi_nor' 182 + * 183 + * The BFPT table advertises a 512B or 256B page size depending on part but the 184 + * page size is actually configurable (with the default being 256B). Read from 185 + * CFR3V[4] and set the correct size. 186 + * 187 + * Return: 0 on success, -errno otherwise. 
188 + */ 189 + static int cypress_nor_set_page_size(struct spi_nor *nor) 190 + { 191 + struct spi_mem_op op = 192 + CYPRESS_NOR_RD_ANY_REG_OP(3, SPINOR_REG_CYPRESS_CFR3V, 193 + nor->bouncebuf); 194 + int ret; 195 + 196 + ret = spi_nor_read_any_reg(nor, &op, nor->reg_proto); 197 + if (ret) 198 + return ret; 199 + 200 + if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR3V_PGSZ) 201 + nor->params->page_size = 512; 202 + else 203 + nor->params->page_size = 256; 204 + 205 + return 0; 206 + } 207 + 208 + static int 209 + s25hx_t_post_bfpt_fixup(struct spi_nor *nor, 210 + const struct sfdp_parameter_header *bfpt_header, 211 + const struct sfdp_bfpt *bfpt) 212 + { 213 + /* Replace Quad Enable with volatile version */ 214 + nor->params->quad_enable = cypress_nor_quad_enable_volatile; 215 + 216 + return cypress_nor_set_page_size(nor); 217 + } 218 + 219 + static void s25hx_t_post_sfdp_fixup(struct spi_nor *nor) 220 + { 221 + struct spi_nor_erase_type *erase_type = 222 + nor->params->erase_map.erase_type; 223 + unsigned int i; 224 + 225 + /* 226 + * In some parts, 3byte erase opcodes are advertised by 4BAIT. 227 + * Convert them to 4byte erase opcodes. 
228 + */ 229 + for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) { 230 + switch (erase_type[i].opcode) { 231 + case SPINOR_OP_SE: 232 + erase_type[i].opcode = SPINOR_OP_SE_4B; 233 + break; 234 + case SPINOR_OP_BE_4K: 235 + erase_type[i].opcode = SPINOR_OP_BE_4K_4B; 236 + break; 237 + default: 238 + break; 239 + } 240 + } 241 + } 242 + 243 + static void s25hx_t_late_init(struct spi_nor *nor) 244 + { 245 + struct spi_nor_flash_parameter *params = nor->params; 246 + 247 + /* Fast Read 4B requires mode cycles */ 248 + params->reads[SNOR_CMD_READ_FAST].num_mode_clocks = 8; 249 + 250 + /* The writesize should be ECC data unit size */ 251 + params->writesize = 16; 252 + } 253 + 254 + static struct spi_nor_fixups s25hx_t_fixups = { 255 + .post_bfpt = s25hx_t_post_bfpt_fixup, 256 + .post_sfdp = s25hx_t_post_sfdp_fixup, 257 + .late_init = s25hx_t_late_init, 258 + }; 259 + 260 + /** 119 261 * cypress_nor_octal_dtr_enable() - Enable octal DTR on Cypress flashes. 120 262 * @nor: pointer to a 'struct spi_nor' 121 263 * @enable: whether to enable or disable Octal DTR ··· 313 167 const struct sfdp_parameter_header *bfpt_header, 314 168 const struct sfdp_bfpt *bfpt) 315 169 { 316 - /* 317 - * The BFPT table advertises a 512B page size but the page size is 318 - * actually configurable (with the default being 256B). Read from 319 - * CFR3V[4] and set the correct size. 
320 - */ 321 - struct spi_mem_op op = 322 - CYPRESS_NOR_RD_ANY_REG_OP(3, SPINOR_REG_CYPRESS_CFR3V, 323 - nor->bouncebuf); 324 - int ret; 325 - 326 - spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); 327 - 328 - ret = spi_mem_exec_op(nor->spimem, &op); 329 - if (ret) 330 - return ret; 331 - 332 - if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR3V_PGSZ) 333 - nor->params->page_size = 512; 334 - else 335 - nor->params->page_size = 256; 336 - 337 - return 0; 170 + return cypress_nor_set_page_size(nor); 338 171 } 339 172 340 173 static const struct spi_nor_fixups s28hs512t_fixups = { ··· 435 310 { "s25fl256l", INFO(0x016019, 0, 64 * 1024, 512) 436 311 NO_SFDP_FLAGS(SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) 437 312 FIXUP_FLAGS(SPI_NOR_4B_OPCODES) }, 313 + { "s25hl512t", INFO6(0x342a1a, 0x0f0390, 256 * 1024, 256) 314 + PARSE_SFDP 315 + MFR_FLAGS(USE_CLSR) 316 + .fixups = &s25hx_t_fixups }, 317 + { "s25hl01gt", INFO6(0x342a1b, 0x0f0390, 256 * 1024, 512) 318 + PARSE_SFDP 319 + MFR_FLAGS(USE_CLSR) 320 + .fixups = &s25hx_t_fixups }, 321 + { "s25hs512t", INFO6(0x342b1a, 0x0f0390, 256 * 1024, 256) 322 + PARSE_SFDP 323 + MFR_FLAGS(USE_CLSR) 324 + .fixups = &s25hx_t_fixups }, 325 + { "s25hs01gt", INFO6(0x342b1b, 0x0f0390, 256 * 1024, 512) 326 + PARSE_SFDP 327 + MFR_FLAGS(USE_CLSR) 328 + .fixups = &s25hx_t_fixups }, 438 329 { "cy15x104q", INFO6(0x042cc2, 0x7f7f7f, 512 * 1024, 1) 439 330 FLAGS(SPI_NOR_NO_ERASE) }, 440 331 { "s28hs512t", INFO(0x345b1a, 0, 256 * 1024, 256)
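The spansion.c hunks above add `cypress_nor_quad_enable_volatile()`, whose core is a read-modify-write-verify sequence on the volatile CFR1 register. A userspace sketch of that pattern with the register accesses stubbed out (`reg_read`/`reg_write` and the static register are stand-ins, not kernel APIs):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define QUAD_EN (1u << 1)	/* CFR1V Quad Enable bit, per the patch */

static uint8_t fake_cfr1v;	/* stand-in for the volatile CFR1 register */

static int reg_read(uint8_t *val)  { *val = fake_cfr1v; return 0; }
static int reg_write(uint8_t val)  { fake_cfr1v = val;  return 0; }

/* Read, skip if already set, write, then read back and verify. */
static int set_quad_enable(void)
{
	uint8_t v, written;
	int ret = reg_read(&v);

	if (ret)
		return ret;
	if (v & QUAD_EN)
		return 0;		/* already enabled */

	written = v | QUAD_EN;
	ret = reg_write(written);
	if (ret)
		return ret;

	ret = reg_read(&v);		/* read-back check */
	if (ret)
		return ret;
	return v == written ? 0 : -EIO;
}
```

Writing the volatile copy (rather than the non-volatile one) avoids the risk of corrupting non-volatile configuration on power interruption, which is the rationale the kernel-doc in the hunk gives.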
+1 -1
drivers/mtd/spi-nor/xilinx.c
··· 31 31 .sector_size = (8 * (_page_size)), \ 32 32 .n_sectors = (_n_sectors), \ 33 33 .page_size = (_page_size), \ 34 - .addr_width = 3, \ 34 + .addr_nbytes = 3, \ 35 35 .flags = SPI_NOR_NO_FR 36 36 37 37 /* Xilinx S3AN share MFR with Atmel SPI NOR */
+1 -3
include/linux/mtd/hyperbus.h
··· 89 89 /** 90 90 * hyperbus_unregister_device - deregister HyperBus slave memory device 91 91 * @hbdev: hyperbus_device to be unregistered 92 - * 93 - * Return: 0 for success, others for failure. 94 92 */ 95 - int hyperbus_unregister_device(struct hyperbus_device *hbdev); 93 + void hyperbus_unregister_device(struct hyperbus_device *hbdev); 96 94 97 95 #endif /* __LINUX_MTD_HYPERBUS_H__ */
+2 -2
include/linux/mtd/spi-nor.h
··· 351 351 * @bouncebuf_size: size of the bounce buffer 352 352 * @info: SPI NOR part JEDEC MFR ID and other info 353 353 * @manufacturer: SPI NOR manufacturer 354 - * @addr_width: number of address bytes 354 + * @addr_nbytes: number of address bytes 355 355 * @erase_opcode: the opcode for erasing a sector 356 356 * @read_opcode: the read opcode 357 357 * @read_dummy: the dummy needed by the read operation ··· 381 381 size_t bouncebuf_size; 382 382 const struct flash_info *info; 383 383 const struct spi_nor_manufacturer *manufacturer; 384 - u8 addr_width; 384 + u8 addr_nbytes; 385 385 u8 erase_opcode; 386 386 u8 read_opcode; 387 387 u8 read_dummy;
+1
include/linux/mtd/spinand.h
··· 260 260 }; 261 261 262 262 /* SPI NAND manufacturers */ 263 + extern const struct spinand_manufacturer ato_spinand_manufacturer; 263 264 extern const struct spinand_manufacturer gigadevice_spinand_manufacturer; 264 265 extern const struct spinand_manufacturer macronix_spinand_manufacturer; 265 266 extern const struct spinand_manufacturer micron_spinand_manufacturer;
+2 -2
include/uapi/mtd/mtd-abi.h
··· 69 69 * struct mtd_write_req - data structure for requesting a write operation 70 70 * 71 71 * @start: start address 72 - * @len: length of data buffer 73 - * @ooblen: length of OOB buffer 72 + * @len: length of data buffer (only lower 32 bits are used) 73 + * @ooblen: length of OOB buffer (only lower 32 bits are used) 74 74 * @usr_data: user-provided data buffer 75 75 * @usr_oob: user-provided OOB buffer 76 76 * @mode: MTD mode (see "MTD operation modes")
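The mtd-abi.h hunk above documents the mtdchar integer-overflow fix: the u64 `len`/`ooblen` fields of `struct mtd_write_req` are used only in their lower 32 bits. A sketch of the documented truncation (`clamp_req_len()` is a hypothetical helper for illustration, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Only the lower 32 bits of the request length are consumed. */
static uint32_t clamp_req_len(uint64_t len)
{
	return (uint32_t)len;
}
```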