
Merge tag 'mtd/for-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Miquel Raynal:
"Core MTD changes:
- Use refcount to prevent corruption
- Call external _get and _put in right order
- Fix use-after-free in mtd release
- Explicitly include correct DT includes
- Clean refcounting with MTD_PARTITIONED_MASTER
- mtdblock: make warning messages ratelimited
- dt-bindings: Add SEAMA partition bindings

Device driver changes:
- Use devm helper functions
- Fix a questionable cast, remove pointless ones
- Error handling fixes
- Add support for new chip versions
- Update DT bindings
- Misc cleanups: fix typos, whitespace, indentation"

* tag 'mtd/for-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (105 commits)
dt-bindings: mtd: amlogic,meson-nand: drop unneeded quotes
mtd: spear_smi: Use helper function devm_clk_get_enabled()
mtd: rawnand: orion: Use helper function devm_clk_get_optional_enabled()
mtd: rawnand: vf610_nfc: Use helper function devm_clk_get_enabled()
mtd: rawnand: sunxi: Use helper function devm_clk_get_enabled()
mtd: rawnand: stm32_fmc2: Use helper function devm_clk_get_enabled()
mtd: rawnand: mtk: Use helper function devm_clk_get_enabled()
mtd: rawnand: mpc5121: Use helper function devm_clk_get_enabled()
mtd: rawnand: lpc32xx_slc: Use helper function devm_clk_get_enabled()
mtd: rawnand: intel: Use helper function devm_clk_get_enabled()
mtd: rawnand: fsmc: Use helper function devm_clk_get_enabled()
mtd: rawnand: arasan: Use helper function devm_clk_get_enabled()
mtd: rawnand: qcom: Add read/read_start ops in exec_op path
mtd: rawnand: qcom: Clear buf_count and buf_start in raw read
mtd: maps: fix -Wvoid-pointer-to-enum-cast warning
mtd: rawnand: fix -Wvoid-pointer-to-enum-cast warning
mtd: rawnand: fsmc: handle clk prepare error in fsmc_nand_resume()
mtd: rawnand: Propagate error and simplify ternary operators for brcmstb_nand_wait_for_completion()
mtd: rawnand: qcom: Sort includes alphabetically
mtd: rawnand: qcom: Do not override the error no of submit_descs()
...

+1352 -1311
+5 -1
Documentation/devicetree/bindings/mtd/amlogic,meson-nand.yaml
···
        const: hw

    nand-ecc-step-size:
-     const: 1024
+     enum: [512, 1024]

    nand-ecc-strength:
      enum: [8, 16, 24, 30, 40, 50, 60]
···
      maximum: 0

  unevaluatedProperties: false
+
+ dependencies:
+   nand-ecc-strength: [nand-ecc-step-size]
+   nand-ecc-step-size: [nand-ecc-strength]


required:
+19 -2
Documentation/devicetree/bindings/mtd/jedec,spi-nor.yaml
···
          - const: jedec,spi-nor
      - const: jedec,spi-nor
        description:
-         Must also include "jedec,spi-nor" for any SPI NOR flash that can be
-         identified by the JEDEC READ ID opcode (0x9F).
+         SPI NOR flashes compatible with the JEDEC SFDP standard or which may be
+         identified with the READ ID opcode (0x9F) do not deserve a specific
+         compatible. They should instead only be matched against the generic
+         "jedec,spi-nor" compatible.

  reg:
    minItems: 1
···
      properly if the flash is left in the "wrong" state. This boolean flag can
      be used on such systems, to denote the absence of a reliable reset
      mechanism.
+
+ no-wp:
+   type: boolean
+   description:
+     The status register write disable (SRWD) bit in status register, combined
+     with the WP# signal, provides hardware data protection for the device. When
+     the SRWD bit is set to 1, and the WP# signal is either driven LOW or hard
+     strapped to LOW, the status register nonvolatile bits become read-only and
+     the WRITE STATUS REGISTER operation will not execute. The only way to exit
+     this hardware-protected mode is to drive WP# HIGH. If the WP# signal of the
+     flash device is not connected or is wrongly tied to GND (that includes internal
+     pull-downs) then status register permanently becomes read-only as the SRWD bit
+     cannot be reset. This boolean flag can be used on such systems to avoid setting
+     the SRWD bit while writing the status register. WP# signal hard strapped to GND
+     can be a valid use case.

  reset-gpios:
    description:
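As a usage sketch of the new binding property (a hypothetical board snippet, not part of this pull), `no-wp` sits alongside the generic compatible in the flash node; the controller label and frequency below are made up for illustration:

```dts
/* Hypothetical SPI controller node; only the flash child is relevant. */
&spi0 {
	flash@0 {
		compatible = "jedec,spi-nor";
		reg = <0>;
		spi-max-frequency = <50000000>;
		/* WP# is hard strapped to GND on this board, so the
		 * driver must never set the SRWD bit. */
		no-wp;
	};
};
```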
+1
Documentation/devicetree/bindings/mtd/marvell,nand-controller.yaml
···
          - const: marvell,armada-8k-nand-controller
          - const: marvell,armada370-nand-controller
      - enum:
+         - marvell,ac5-nand-controller
          - marvell,armada370-nand-controller
          - marvell,pxa3xx-nand-controller
      - description: legacy bindings
+1 -1
Documentation/devicetree/bindings/mtd/nand-controller.yaml
···
-# SPDX-License-Identifier: GPL-2.0
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
 %YAML 1.2
 ---
 $id: http://devicetree.org/schemas/mtd/nand-controller.yaml#
-41
Documentation/devicetree/bindings/mtd/oxnas-nand.txt
···
-* Oxford Semiconductor OXNAS NAND Controller
-
-Please refer to nand-controller.yaml for generic information regarding MTD NAND bindings.
-
-Required properties:
- - compatible: "oxsemi,ox820-nand"
- - reg: Base address and length for NAND mapped memory.
-
-Optional Properties:
- - clocks: phandle to the NAND gate clock if needed.
- - resets: phandle to the NAND reset control if needed.
-
-Example:
-
-	nandc: nand-controller@41000000 {
-		compatible = "oxsemi,ox820-nand";
-		reg = <0x41000000 0x100000>;
-		clocks = <&stdclk CLK_820_NAND>;
-		resets = <&reset RESET_NAND>;
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		nand@0 {
-			reg = <0>;
-			#address-cells = <1>;
-			#size-cells = <1>;
-			nand-ecc-mode = "soft";
-			nand-ecc-algo = "hamming";
-
-			partition@0 {
-				label = "boot";
-				reg = <0x00000000 0x00e00000>;
-				read-only;
-			};
-
-			partition@e00000 {
-				label = "ubi";
-				reg = <0x00e00000 0x07200000>;
-			};
-		};
-	};
+44
Documentation/devicetree/bindings/mtd/partitions/seama.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/mtd/partitions/seama.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Seattle Image Partitions
+
+description: The SEAttle iMAge (SEAMA) partition is a type of partition
+  used for NAND flash devices. This type of flash image is found in some
+  D-Link routers such as DIR-645, DIR-842, DIR-859, DIR-860L, DIR-885L,
+  DIR890L and DCH-M225, as well as in WD and NEC routers on the ath79
+  (MIPS), Broadcom BCM53xx, and RAMIPS platforms. This partition type
+  does not have children defined in the device tree, they need to be
+  detected by software.
+
+allOf:
+  - $ref: partition.yaml#
+
+maintainers:
+  - Linus Walleij <linus.walleij@linaro.org>
+
+properties:
+  compatible:
+    const: seama
+
+required:
+  - compatible
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    partitions {
+        compatible = "fixed-partitions";
+        #address-cells = <1>;
+        #size-cells = <1>;
+
+        partition@0 {
+            compatible = "seama";
+            reg = <0x0 0x800000>;
+            label = "firmware";
+        };
+    };
+4 -4
drivers/mtd/devices/docg3.c
···
  */
 static int flashcontrol_show(struct seq_file *s, void *p)
 {
-        struct docg3 *docg3 = (struct docg3 *)s->private;
+        struct docg3 *docg3 = s->private;

         u8 fctrl;

···
 static int asic_mode_show(struct seq_file *s, void *p)
 {
-        struct docg3 *docg3 = (struct docg3 *)s->private;
+        struct docg3 *docg3 = s->private;

         int pctrl, mode;

···
 static int device_id_show(struct seq_file *s, void *p)
 {
-        struct docg3 *docg3 = (struct docg3 *)s->private;
+        struct docg3 *docg3 = s->private;
         int id;

         mutex_lock(&docg3->cascade->lock);
···
 static int protection_show(struct seq_file *s, void *p)
 {
-        struct docg3 *docg3 = (struct docg3 *)s->private;
+        struct docg3 *docg3 = s->private;
         int protect, dps0, dps0_low, dps0_high, dps1, dps1_low, dps1_high;

         mutex_lock(&docg3->cascade->lock);
+1 -1
drivers/mtd/devices/mchp23k256.c
···
 #include <linux/sizes.h>
 #include <linux/spi/flash.h>
 #include <linux/spi/spi.h>
-#include <linux/of_device.h>
+#include <linux/of.h>

 #define MAX_CMD_SIZE		4
+1 -1
drivers/mtd/devices/mchp48l640.c
···
 #include <linux/sizes.h>
 #include <linux/spi/flash.h>
 #include <linux/spi/spi.h>
-#include <linux/of_device.h>
+#include <linux/of.h>

 struct mchp48_caps {
	unsigned int size;
-1
drivers/mtd/devices/mtd_dataflash.c
···
 #include <linux/err.h>
 #include <linux/math64.h>
 #include <linux/of.h>
-#include <linux/of_device.h>

 #include <linux/spi/spi.h>
 #include <linux/spi/flash.h>
+4 -16
drivers/mtd/devices/spear_smi.c
···
        struct device_node *np = pdev->dev.of_node;
        struct spear_smi_plat_data *pdata = NULL;
        struct spear_smi *dev;
-       struct resource *smi_base;
        int irq, ret = 0;
        int i;

···
                goto err;
        }

-       smi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-
-       dev->io_base = devm_ioremap_resource(&pdev->dev, smi_base);
+       dev->io_base = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(dev->io_base)) {
                ret = PTR_ERR(dev->io_base);
                goto err;
···
                dev->num_flashes = MAX_NUM_FLASH_CHIP;
        }

-       dev->clk = devm_clk_get(&pdev->dev, NULL);
+       dev->clk = devm_clk_get_enabled(&pdev->dev, NULL);
        if (IS_ERR(dev->clk)) {
                ret = PTR_ERR(dev->clk);
                goto err;
        }

-       ret = clk_prepare_enable(dev->clk);
-       if (ret)
-               goto err;
-
        ret = devm_request_irq(&pdev->dev, irq, spear_smi_int_handler, 0,
                               pdev->name, dev);
        if (ret) {
                dev_err(&dev->pdev->dev, "SMI IRQ allocation failed\n");
-               goto err_irq;
+               goto err;
        }

        mutex_init(&dev->lock);
···
                ret = spear_smi_setup_banks(pdev, i, pdata->np[i]);
                if (ret) {
                        dev_err(&dev->pdev->dev, "bank setup failed\n");
-                       goto err_irq;
+                       goto err;
                }
        }

        return 0;
-
-err_irq:
-       clk_disable_unprepare(dev->clk);
err:
        return ret;
}
···
                /* clean up mtd stuff */
                WARN_ON(mtd_device_unregister(&flash->mtd));
        }
-
-       clk_disable_unprepare(dev->clk);

        return 0;
}
+2 -12
drivers/mtd/devices/st_spi_fsm.c
···
 {
        struct device_node *np = pdev->dev.of_node;
        struct flash_info *info;
-       struct resource *res;
        struct stfsm *fsm;
        int ret;

···

        platform_set_drvdata(pdev, fsm);

-       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-       if (!res) {
-               dev_err(&pdev->dev, "Resource not found\n");
-               return -ENODEV;
-       }
-
-       fsm->base = devm_ioremap_resource(&pdev->dev, res);
-       if (IS_ERR(fsm->base)) {
-               dev_err(&pdev->dev,
-                       "Failed to reserve memory region %pR\n", res);
+       fsm->base = devm_platform_ioremap_resource(pdev, 0);
+       if (IS_ERR(fsm->base))
                return PTR_ERR(fsm->base);
-       }

        fsm->clk = devm_clk_get_enabled(&pdev->dev, NULL);
        if (IS_ERR(fsm->clk)) {
+1 -3
drivers/mtd/lpddr/lpddr2_nvm.c
···
        struct map_info *map;
        struct mtd_info *mtd;
        struct resource *add_range;
-       struct resource *control_regs;
        struct pcm_int_data *pcm_data;

        /* Allocate memory control_regs data structures */
···

        simple_map_init(map);   /* fill with default methods */

-       control_regs = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-       pcm_data->ctl_regs = devm_ioremap_resource(&pdev->dev, control_regs);
+       pcm_data->ctl_regs = devm_platform_ioremap_resource(pdev, 1);
        if (IS_ERR(pcm_data->ctl_regs))
                return PTR_ERR(pcm_data->ctl_regs);

+3 -8
drivers/mtd/maps/lantiq-flash.c
···

        platform_set_drvdata(pdev, ltq_mtd);

-       ltq_mtd->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-       if (!ltq_mtd->res) {
-               dev_err(&pdev->dev, "failed to get memory resource\n");
-               return -ENOENT;
-       }
+       ltq_mtd->map->virt = devm_platform_get_and_ioremap_resource(pdev, 0, &ltq_mtd->res);
+       if (IS_ERR(ltq_mtd->map->virt))
+               return PTR_ERR(ltq_mtd->map->virt);

        ltq_mtd->map = devm_kzalloc(&pdev->dev, sizeof(struct map_info),
                                    GFP_KERNEL);
···

        ltq_mtd->map->phys = ltq_mtd->res->start;
        ltq_mtd->map->size = resource_size(ltq_mtd->res);
-       ltq_mtd->map->virt = devm_ioremap_resource(&pdev->dev, ltq_mtd->res);
-       if (IS_ERR(ltq_mtd->map->virt))
-               return PTR_ERR(ltq_mtd->map->virt);

        ltq_mtd->map->name = ltq_map_name;
        ltq_mtd->map->bankwidth = 2;
-1
drivers/mtd/maps/physmap-bt1-rom.c
···
 #include <linux/mtd/xip.h>
 #include <linux/mux/consumer.h>
 #include <linux/of.h>
-#include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <linux/string.h>
 #include <linux/types.h>
+1 -2
drivers/mtd/maps/physmap-core.c
···
        for (i = 0; i < info->nmaps; i++) {
                struct resource *res;

-               res = platform_get_resource(dev, IORESOURCE_MEM, i);
-               info->maps[i].virt = devm_ioremap_resource(&dev->dev, res);
+               info->maps[i].virt = devm_platform_get_and_ioremap_resource(dev, i, &res);
                if (IS_ERR(info->maps[i].virt)) {
                        err = PTR_ERR(info->maps[i].virt);
                        goto err_out;
+1 -1
drivers/mtd/maps/physmap-gemini.c
···
  */
 #include <linux/export.h>
 #include <linux/of.h>
-#include <linux/of_device.h>
 #include <linux/mtd/map.h>
 #include <linux/mtd/xip.h>
 #include <linux/mfd/syscon.h>
+#include <linux/platform_device.h>
 #include <linux/regmap.h>
 #include <linux/bitops.h>
 #include <linux/pinctrl/consumer.h>
+1 -1
drivers/mtd/maps/physmap-ixp4xx.c
···
  */
 #include <linux/export.h>
 #include <linux/of.h>
-#include <linux/of_device.h>
+#include <linux/platform_device.h>
 #include <linux/mtd/map.h>
 #include <linux/mtd/xip.h>
 #include "physmap-ixp4xx.h"
+1
drivers/mtd/maps/physmap-ixp4xx.h
···
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <linux/of.h>
+#include <linux/platform_device.h>
 #include <linux/mtd/map.h>

 #ifdef CONFIG_MTD_PHYSMAP_IXP4XX
+2 -2
drivers/mtd/maps/physmap-versatile.c
···
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
-#include <linux/of_device.h>
 #include <linux/mtd/map.h>
 #include <linux/mfd/syscon.h>
+#include <linux/platform_device.h>
 #include <linux/regmap.h>
 #include <linux/bitops.h>
 #include "physmap-versatile.h"
···
        if (!sysnp)
                return -ENODEV;

-       versatile_flashprot = (enum versatile_flashprot)devid->data;
+       versatile_flashprot = (uintptr_t)devid->data;
        rmap = syscon_node_to_regmap(sysnp);
        of_node_put(sysnp);
        if (IS_ERR(rmap))
+1 -2
drivers/mtd/maps/plat-ram.c
···
        info->pdata = pdata;

        /* get the resource for the memory mapping */
-       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-       info->map.virt = devm_ioremap_resource(&pdev->dev, res);
+       info->map.virt = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
        if (IS_ERR(info->map.virt)) {
                err = PTR_ERR(info->map.virt);
                goto exit_free;
+1 -1
drivers/mtd/maps/sun_uflash.c
···
 #include <linux/errno.h>
 #include <linux/ioport.h>
 #include <linux/of.h>
-#include <linux/of_device.h>
+#include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <asm/prom.h>
 #include <linux/uaccess.h>
+1 -1
drivers/mtd/mtdblock.c
···
        }

        if (mtd_type_is_nand(mbd->mtd))
-               pr_warn("%s: MTD device '%s' is NAND, please consider using UBI block devices instead.\n",
+               pr_warn_ratelimited("%s: MTD device '%s' is NAND, please consider using UBI block devices instead.\n",
                        mbd->tr->name, mbd->mtd->name);

        /* OK, it's not open. Create cache info for it */
+1 -1
drivers/mtd/mtdblock_ro.c
···
        dev->readonly = 1;

        if (mtd_type_is_nand(mtd))
-               pr_warn("%s: MTD device '%s' is NAND, please consider using UBI block devices instead.\n",
+               pr_warn_ratelimited("%s: MTD device '%s' is NAND, please consider using UBI block devices instead.\n",
                        tr->name, mtd->name);

        if (add_mtd_blktrans_dev(dev))
+57 -42
drivers/mtd/mtdcore.c
···
        struct mtd_info *mtd = dev_get_drvdata(dev);
        dev_t index = MTD_DEVT(mtd->index);

+       idr_remove(&mtd_idr, mtd->index);
+       of_node_put(mtd_get_of_node(mtd));
+
+       if (mtd_is_partition(mtd))
+               release_mtd_partition(mtd);
+
        /* remove /dev/mtdXro node */
        device_destroy(&mtd_class, index + 1);
+}
+
+static void mtd_device_release(struct kref *kref)
+{
+       struct mtd_info *mtd = container_of(kref, struct mtd_info, refcnt);
+       bool is_partition = mtd_is_partition(mtd);
+
+       debugfs_remove_recursive(mtd->dbg.dfs_dir);
+
+       /* Try to remove the NVMEM provider */
+       nvmem_unregister(mtd->nvmem);
+
+       device_unregister(&mtd->dev);
+
+       /*
+        * Clear dev so mtd can be safely re-registered later if desired.
+        * Should not be done for partitions, as they were already
+        * destroyed in device_unregister().
+        */
+       if (!is_partition)
+               memset(&mtd->dev, 0, sizeof(mtd->dev));
+
+       module_put(THIS_MODULE);
 }

 #define MTD_DEVICE_ATTR_RO(name) \
···
        }

        mtd->index = i;
-       mtd->usecount = 0;
+       kref_init(&mtd->refcnt);

        /* default value if not set by driver */
        if (mtd->bitflip_threshold == 0)
···
 {
        int ret;
        struct mtd_notifier *not;
-       struct device_node *mtd_of_node;

        mutex_lock(&mtd_table_mutex);

···
        list_for_each_entry(not, &mtd_notifiers, list)
                not->remove(mtd);

-       if (mtd->usecount) {
-               printk(KERN_NOTICE "Removing MTD device #%d (%s) with use count %d\n",
-                      mtd->index, mtd->name, mtd->usecount);
-               ret = -EBUSY;
-       } else {
-               mtd_of_node = mtd_get_of_node(mtd);
-               debugfs_remove_recursive(mtd->dbg.dfs_dir);
-
-               /* Try to remove the NVMEM provider */
-               nvmem_unregister(mtd->nvmem);
-
-               device_unregister(&mtd->dev);
-
-               /* Clear dev so mtd can be safely
-                * re-registered later if desired */
-               memset(&mtd->dev, 0, sizeof(mtd->dev));
-
-               idr_remove(&mtd_idr, mtd->index);
-               of_node_put(mtd_of_node);
-
-               module_put(THIS_MODULE);
-               ret = 0;
-       }
+       kref_put(&mtd->refcnt, mtd_device_release);
+       ret = 0;

 out_error:
        mutex_unlock(&mtd_table_mutex);
···
        struct mtd_info *master = mtd_get_master(mtd);
        int err;

-       if (!try_module_get(master->owner))
-               return -ENODEV;
-
        if (master->_get_device) {
                err = master->_get_device(mtd);
-
-               if (err) {
-                       module_put(master->owner);
+               if (err)
                        return err;
-               }
        }

-       master->usecount++;
+       if (!try_module_get(master->owner)) {
+               if (master->_put_device)
+                       master->_put_device(master);
+               return -ENODEV;
+       }

-       while (mtd->parent) {
-               mtd->usecount++;
+       while (mtd) {
+               if (mtd != master)
+                       kref_get(&mtd->refcnt);
                mtd = mtd->parent;
        }
+
+       if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER))
+               kref_get(&master->refcnt);

        return 0;
 }
···
 {
        struct mtd_info *master = mtd_get_master(mtd);

-       while (mtd->parent) {
-               --mtd->usecount;
-               BUG_ON(mtd->usecount < 0);
-               mtd = mtd->parent;
+       while (mtd) {
+               /* kref_put() can release mtd, so keep a reference to mtd->parent */
+               struct mtd_info *parent = mtd->parent;
+
+               if (mtd != master)
+                       kref_put(&mtd->refcnt, mtd_device_release);
+               mtd = parent;
        }

-       master->usecount--;
-
-       if (master->_put_device)
-               master->_put_device(master);
+       if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER))
+               kref_put(&master->refcnt, mtd_device_release);

        module_put(master->owner);
+
+       /* must be last, as master can be freed in the _put_device callback */
+       if (master->_put_device)
+               master->_put_device(master);
 }
 EXPORT_SYMBOL_GPL(__put_mtd_device);

+1
drivers/mtd/mtdcore.h
···
 int del_mtd_device(struct mtd_info *mtd);
 int add_mtd_partitions(struct mtd_info *, const struct mtd_partition *, int);
 int del_mtd_partitions(struct mtd_info *);
+void release_mtd_partition(struct mtd_info *mtd);

 struct mtd_partitions;
+8 -6
drivers/mtd/mtdpart.c
···
        kfree(mtd);
 }

+void release_mtd_partition(struct mtd_info *mtd)
+{
+       WARN_ON(!list_empty(&mtd->part.node));
+       free_partition(mtd);
+}
+
 static struct mtd_info *allocate_partition(struct mtd_info *parent,
                                           const struct mtd_partition *part,
                                           int partno, uint64_t cur_offset)
···

        sysfs_remove_files(&mtd->dev.kobj, mtd_partition_attrs);

+       list_del_init(&mtd->part.node);
        err = del_mtd_device(mtd);
        if (err)
                return err;
-
-       list_del(&mtd->part.node);
-       free_partition(mtd);

        return 0;
 }
···
                        __del_mtd_partitions(child);

                pr_info("Deleting %s MTD partition\n", child->name);
+               list_del_init(&child->part.node);
                ret = del_mtd_device(child);
                if (ret < 0) {
                        pr_err("Error when deleting partition \"%s\" (%d)\n",
···
                        err = ret;
                        continue;
                }
-
-               list_del(&child->part.node);
-               free_partition(child);
        }

        return err;
+1 -1
drivers/mtd/nand/ecc-mxic.c
···
 #include <linux/mtd/nand.h>
 #include <linux/mtd/nand-ecc-mxic.h>
 #include <linux/mutex.h>
-#include <linux/of_device.h>
+#include <linux/of.h>
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
+1 -1
drivers/mtd/nand/ecc.c
···

 #include <linux/module.h>
 #include <linux/mtd/nand.h>
+#include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/of.h>
-#include <linux/of_device.h>
 #include <linux/of_platform.h>

 static LIST_HEAD(on_host_hw_engines);
+3 -9
drivers/mtd/nand/onenand/onenand_omap2.c
···
 #include <linux/mtd/mtd.h>
 #include <linux/mtd/onenand.h>
 #include <linux/mtd/partitions.h>
-#include <linux/of_device.h>
+#include <linux/of.h>
 #include <linux/omap-gpmc.h>
 #include <linux/platform_device.h>
 #include <linux/interrupt.h>
···
        struct device *dev = &pdev->dev;
        struct device_node *np = dev->of_node;

-       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-       if (!res) {
-               dev_err(dev, "error getting memory resource\n");
-               return -EINVAL;
-       }
-
        r = of_property_read_u32(np, "reg", &val);
        if (r) {
                dev_err(dev, "reg not found in DT\n");
···
        init_completion(&c->irq_done);
        init_completion(&c->dma_done);
        c->gpmc_cs = val;
-       c->phys_base = res->start;

-       c->onenand.base = devm_ioremap_resource(dev, res);
+       c->onenand.base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
        if (IS_ERR(c->onenand.base))
                return PTR_ERR(c->onenand.base);
+       c->phys_base = res->start;

        c->int_gpiod = devm_gpiod_get_optional(dev, "int", GPIOD_IN);
        if (IS_ERR(c->int_gpiod)) {
+3 -6
drivers/mtd/nand/onenand/onenand_samsung.c
···

        s3c_onenand_setup(mtd);

-       r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-       onenand->base = devm_ioremap_resource(&pdev->dev, r);
+       onenand->base = devm_platform_get_and_ioremap_resource(pdev, 0, &r);
        if (IS_ERR(onenand->base))
                return PTR_ERR(onenand->base);

···
                this->options |= ONENAND_SKIP_UNLOCK_CHECK;

        if (onenand->type != TYPE_S5PC110) {
-               r = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-               onenand->ahb_addr = devm_ioremap_resource(&pdev->dev, r);
+               onenand->ahb_addr = devm_platform_ioremap_resource(pdev, 1);
                if (IS_ERR(onenand->ahb_addr))
                        return PTR_ERR(onenand->ahb_addr);

···
                this->subpagesize = mtd->writesize;

        } else { /* S5PC110 */
-               r = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-               onenand->dma_addr = devm_ioremap_resource(&pdev->dev, r);
+               onenand->dma_addr = devm_platform_ioremap_resource(pdev, 1);
                if (IS_ERR(onenand->dma_addr))
                        return PTR_ERR(onenand->dma_addr);

+1 -8
drivers/mtd/nand/raw/Kconfig
···
          including:
          - PXA3xx processors (NFCv1)
          - 32-bit Armada platforms (XP, 37x, 38x, 39x) (NFCv2)
-         - 64-bit Aramda platforms (7k, 8k) (NFCv2)
+         - 64-bit Aramda platforms (7k, 8k, ac5) (NFCv2)

 config MTD_NAND_SLC_LPC32XX
        tristate "NXP LPC32xx SLC NAND controller"
···
          BCMA bus can have various flash memories attached, they are
          registered by bcma as platform devices. This enables driver for
          NAND flash memories. For now only BCM4706 is supported.
-
-config MTD_NAND_OXNAS
-       tristate "Oxford Semiconductor NAND controller"
-       depends on ARCH_OXNAS || COMPILE_TEST
-       depends on HAS_IOMEM
-       help
-         This enables the NAND flash controller on Oxford Semiconductor SoCs.

 config MTD_NAND_MPC5121_NFC
        tristate "MPC5121 NAND controller"
-1
drivers/mtd/nand/raw/Makefile
···
 obj-$(CONFIG_MTD_NAND_PLATFORM)		+= plat_nand.o
 obj-$(CONFIG_MTD_NAND_PASEMI)		+= pasemi_nand.o
 obj-$(CONFIG_MTD_NAND_ORION)		+= orion_nand.o
-obj-$(CONFIG_MTD_NAND_OXNAS)		+= oxnas_nand.o
 obj-$(CONFIG_MTD_NAND_FSL_ELBC)		+= fsl_elbc_nand.o
 obj-$(CONFIG_MTD_NAND_FSL_IFC)		+= fsl_ifc_nand.o
 obj-$(CONFIG_MTD_NAND_FSL_UPM)		+= fsl_upm.o
+1 -1
drivers/mtd/nand/raw/ams-delta.c
···
 #include <linux/mtd/nand-gpio.h>
 #include <linux/mtd/rawnand.h>
 #include <linux/mtd/partitions.h>
-#include <linux/of_device.h>
+#include <linux/of.h>
 #include <linux/platform_device.h>
 #include <linux/sizes.h>

+5 -24
drivers/mtd/nand/raw/arasan-nand-controller.c
···

        anfc_reset(nfc);

-       nfc->controller_clk = devm_clk_get(&pdev->dev, "controller");
+       nfc->controller_clk = devm_clk_get_enabled(&pdev->dev, "controller");
        if (IS_ERR(nfc->controller_clk))
                return PTR_ERR(nfc->controller_clk);

-       nfc->bus_clk = devm_clk_get(&pdev->dev, "bus");
+       nfc->bus_clk = devm_clk_get_enabled(&pdev->dev, "bus");
        if (IS_ERR(nfc->bus_clk))
                return PTR_ERR(nfc->bus_clk);

-       ret = clk_prepare_enable(nfc->controller_clk);
+       ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
        if (ret)
                return ret;

-       ret = clk_prepare_enable(nfc->bus_clk);
-       if (ret)
-               goto disable_controller_clk;
-
-       ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
-       if (ret)
-               goto disable_bus_clk;
-
        ret = anfc_parse_cs(nfc);
        if (ret)
-               goto disable_bus_clk;
+               return ret;

        ret = anfc_chips_init(nfc);
        if (ret)
-               goto disable_bus_clk;
+               return ret;

        platform_set_drvdata(pdev, nfc);

        return 0;
-
-disable_bus_clk:
-       clk_disable_unprepare(nfc->bus_clk);
-
-disable_controller_clk:
-       clk_disable_unprepare(nfc->controller_clk);
-
-       return ret;
 }

 static void anfc_remove(struct platform_device *pdev)
···
        struct arasan_nfc *nfc = platform_get_drvdata(pdev);

        anfc_chips_cleanup(nfc);
-
-       clk_disable_unprepare(nfc->bus_clk);
-       clk_disable_unprepare(nfc->controller_clk);
 }

 static const struct of_device_id anfc_ids[] = {
+1 -2
drivers/mtd/nand/raw/atmel/nand-controller.c
···

        nand->numcs = 1;

-       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-       nand->cs[0].io.virt = devm_ioremap_resource(dev, res);
+       nand->cs[0].io.virt = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
        if (IS_ERR(nand->cs[0].io.virt))
                return PTR_ERR(nand->cs[0].io.virt);

+1 -3
drivers/mtd/nand/raw/brcmnand/bcm63138_nand.c
···
        struct device *dev = &pdev->dev;
        struct bcm63138_nand_soc *priv;
        struct brcmnand_soc *soc;
-       struct resource *res;

        priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
                return -ENOMEM;
        soc = &priv->soc;

-       res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "nand-int-base");
-       priv->base = devm_ioremap_resource(dev, res);
+       priv->base = devm_platform_ioremap_resource_byname(pdev, "nand-int-base");
        if (IS_ERR(priv->base))
                return PTR_ERR(priv->base);

+89 -41
drivers/mtd/nand/raw/brcmnand/brcmnand.c
···
        const unsigned int *page_sizes;
        unsigned int page_size_shift;
        unsigned int max_oob;
+       u32 ecc_level_shift;
        u32 features;

        /* for low-power standby/resume only */
···
        INTFC_CTLR_READY                        = BIT(31),
 };

+/***********************************************************************
+ * NAND ACC CONTROL bitfield
+ *
+ * Some bits have remained constant throughout hardware revision, while
+ * others have shifted around.
+ ***********************************************************************/
+
+/* Constant for all versions (where supported) */
+enum {
+       /* See BRCMNAND_HAS_CACHE_MODE */
+       ACC_CONTROL_CACHE_MODE          = BIT(22),
+
+       /* See BRCMNAND_HAS_PREFETCH */
+       ACC_CONTROL_PREFETCH            = BIT(23),
+
+       ACC_CONTROL_PAGE_HIT            = BIT(24),
+       ACC_CONTROL_WR_PREEMPT          = BIT(25),
+       ACC_CONTROL_PARTIAL_PAGE        = BIT(26),
+       ACC_CONTROL_RD_ERASED           = BIT(27),
+       ACC_CONTROL_FAST_PGM_RDIN       = BIT(28),
+       ACC_CONTROL_WR_ECC              = BIT(30),
+       ACC_CONTROL_RD_ECC              = BIT(31),
+};
+
+#define ACC_CONTROL_ECC_SHIFT          16
+/* Only for v7.2 */
+#define ACC_CONTROL_ECC_EXT_SHIFT      13
+
 static inline bool brcmnand_non_mmio_ops(struct brcmnand_controller *ctrl)
 {
 #if IS_ENABLED(CONFIG_MTD_NAND_BRCMNAND_BCMA)
···
                ctrl->features |= BRCMNAND_HAS_WP;
        else if (of_property_read_bool(ctrl->dev->of_node, "brcm,nand-has-wp"))
                ctrl->features |= BRCMNAND_HAS_WP;
+
+       /* v7.2 has different ecc level shift in the acc register */
+       if (ctrl->nand_version == 0x0702)
+               ctrl->ecc_level_shift = ACC_CONTROL_ECC_EXT_SHIFT;
+       else
+               ctrl->ecc_level_shift = ACC_CONTROL_ECC_SHIFT;

        return 0;
 }
···
        return 0;
 }

-/***********************************************************************
- * NAND ACC CONTROL bitfield
- *
- * Some bits have remained constant throughout hardware revision, while
- * others have shifted around.
- ***********************************************************************/
-
-/* Constant for all versions (where supported) */
-enum {
-       /* See BRCMNAND_HAS_CACHE_MODE */
-       ACC_CONTROL_CACHE_MODE          = BIT(22),
-
-       /* See BRCMNAND_HAS_PREFETCH */
-       ACC_CONTROL_PREFETCH            = BIT(23),
-
-       ACC_CONTROL_PAGE_HIT            = BIT(24),
-       ACC_CONTROL_WR_PREEMPT          = BIT(25),
-       ACC_CONTROL_PARTIAL_PAGE        = BIT(26),
-       ACC_CONTROL_RD_ERASED           = BIT(27),
-       ACC_CONTROL_FAST_PGM_RDIN       = BIT(28),
-       ACC_CONTROL_WR_ECC              = BIT(30),
-       ACC_CONTROL_RD_ECC              = BIT(31),
-};
-
 static inline u32 brcmnand_spare_area_mask(struct brcmnand_controller *ctrl)
 {
        if (ctrl->nand_version == 0x0702)
···
        return GENMASK(4, 0);
 }

-#define NAND_ACC_CONTROL_ECC_SHIFT     16
-#define NAND_ACC_CONTROL_ECC_EXT_SHIFT 13
-
 static inline u32 brcmnand_ecc_level_mask(struct brcmnand_controller *ctrl)
 {
        u32 mask = (ctrl->nand_version >= 0x0600) ? 0x1f : 0x0f;

-       mask <<= NAND_ACC_CONTROL_ECC_SHIFT;
+       mask <<= ACC_CONTROL_ECC_SHIFT;

        /* v7.2 includes additional ECC levels */
-       if (ctrl->nand_version >= 0x0702)
-               mask |= 0x7 << NAND_ACC_CONTROL_ECC_EXT_SHIFT;
+       if (ctrl->nand_version == 0x0702)
+               mask |= 0x7 << ACC_CONTROL_ECC_EXT_SHIFT;

        return mask;
 }
···
        if (en) {
                acc_control |= ecc_flags; /* enable RD/WR ECC */
-               acc_control |= host->hwcfg.ecc_level
-                              << NAND_ACC_CONTROL_ECC_SHIFT;
+               acc_control &= ~brcmnand_ecc_level_mask(ctrl);
+               acc_control |= host->hwcfg.ecc_level << ctrl->ecc_level_shift;
        } else {
                acc_control &= ~ecc_flags; /* disable RD/WR ECC */
                acc_control &= ~brcmnand_ecc_level_mask(ctrl);
···

                cpu_relax();
        } while (time_after(limit, jiffies));
+
+       /*
+        * do a final check after time out in case the CPU was busy and the driver
+        * did not get enough time to perform the polling to avoid false alarms
+        */
+       val = brcmnand_read_reg(ctrl, BRCMNAND_INTFC_STATUS);
+       if ((val & mask) == expected_val)
+               return 0;

        dev_warn(ctrl->dev, "timeout on status poll (expected %x got %x)\n",
                 expected_val, val & mask);
···
                             const u8 *oob, int sas, int sector_1k)
 {
        int tbytes = sas << sector_1k;
-       int j;
+       int j, k = 0;
+       u32 last = 0xffffffff;
+       u8 *plast = (u8 *)&last;

        /* Adjust OOB values for 1K sector size */
        if (sector_1k && (i & 0x01))
                tbytes = max(0, tbytes - (int)ctrl->max_oob);
        tbytes = min_t(int, tbytes, ctrl->max_oob);

-       for (j = 0; j < tbytes; j += 4)
+       /*
+        * tbytes may not be multiple of words. Make sure we don't read out of
+        * the boundary and stop at last word.
1476 + */ 1477 + for (j = 0; (j + 3) < tbytes; j += 4) 1488 1478 oob_reg_write(ctrl, j, 1489 1479 (oob[j + 0] << 24) | 1490 1480 (oob[j + 1] << 16) | 1491 1481 (oob[j + 2] << 8) | 1492 1482 (oob[j + 3] << 0)); 1483 + 1484 + /* handle the remaining bytes */ 1485 + while (j < tbytes) 1486 + plast[k++] = oob[j++]; 1487 + 1488 + if (tbytes & 0x3) 1489 + oob_reg_write(ctrl, (tbytes & ~0x3), (__force u32)cpu_to_be32(last)); 1490 + 1493 1491 return tbytes; 1494 1492 } 1495 1493 ··· 1622 1592 1623 1593 dev_dbg(ctrl->dev, "send native cmd %d addr 0x%llx\n", cmd, cmd_addr); 1624 1594 1625 - BUG_ON(ctrl->cmd_pending != 0); 1595 + /* 1596 + * If we came here through _panic_write and there is a pending 1597 + * command, try to wait for it. If it times out, rather than 1598 + * hitting BUG_ON, just return so we don't crash while crashing. 1599 + */ 1600 + if (oops_in_progress) { 1601 + if (ctrl->cmd_pending && 1602 + bcmnand_ctrl_poll_status(ctrl, NAND_CTRL_RDY, NAND_CTRL_RDY, 0)) 1603 + return; 1604 + } else 1605 + BUG_ON(ctrl->cmd_pending != 0); 1626 1606 ctrl->cmd_pending = cmd; 1627 1607 1628 1608 ret = bcmnand_ctrl_poll_status(ctrl, NAND_CTRL_RDY, NAND_CTRL_RDY, 0); ··· 1666 1626 disable_ctrl_irqs(ctrl); 1667 1627 sts = bcmnand_ctrl_poll_status(ctrl, NAND_CTRL_RDY, 1668 1628 NAND_CTRL_RDY, 0); 1669 - err = (sts < 0) ? true : false; 1629 + err = sts < 0; 1670 1630 } else { 1671 1631 unsigned long timeo = msecs_to_jiffies( 1672 1632 NAND_POLL_STATUS_TIMEOUT_MS); 1673 1633 /* wait for completion interrupt */ 1674 1634 sts = wait_for_completion_timeout(&ctrl->done, timeo); 1675 - err = (sts <= 0) ? 
true : false; 1635 + err = !sts; 1676 1636 } 1677 1637 1678 1638 return err; ··· 1688 1648 if (ctrl->cmd_pending) 1689 1649 err = brcmstb_nand_wait_for_completion(chip); 1690 1650 1651 + ctrl->cmd_pending = 0; 1691 1652 if (err) { 1692 1653 u32 cmd = brcmnand_read_reg(ctrl, BRCMNAND_CMD_START) 1693 1654 >> brcmnand_cmd_shift(ctrl); ··· 1697 1656 "timeout waiting for command %#02x\n", cmd); 1698 1657 dev_err_ratelimited(ctrl->dev, "intfc status %08x\n", 1699 1658 brcmnand_read_reg(ctrl, BRCMNAND_INTFC_STATUS)); 1659 + return -ETIMEDOUT; 1700 1660 } 1701 - ctrl->cmd_pending = 0; 1702 1661 return brcmnand_read_reg(ctrl, BRCMNAND_INTFC_STATUS) & 1703 1662 INTFC_FLASH_STATUS; 1704 1663 } ··· 2602 2561 tmp &= ~brcmnand_ecc_level_mask(ctrl); 2603 2562 tmp &= ~brcmnand_spare_area_mask(ctrl); 2604 2563 if (ctrl->nand_version >= 0x0302) { 2605 - tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT; 2564 + tmp |= cfg->ecc_level << ctrl->ecc_level_shift; 2606 2565 tmp |= cfg->spare_area_size; 2607 2566 } 2608 2567 nand_writereg(ctrl, acc_control_offs, tmp); ··· 2653 2612 struct nand_chip *chip = &host->chip; 2654 2613 const struct nand_ecc_props *requirements = 2655 2614 nanddev_get_ecc_requirements(&chip->base); 2615 + struct nand_memory_organization *memorg = 2616 + nanddev_get_memorg(&chip->base); 2656 2617 struct brcmnand_controller *ctrl = host->ctrl; 2657 2618 struct brcmnand_cfg *cfg = &host->hwcfg; 2658 2619 char msg[128]; ··· 2676 2633 if (cfg->spare_area_size > ctrl->max_oob) 2677 2634 cfg->spare_area_size = ctrl->max_oob; 2678 2635 /* 2679 - * Set oobsize to be consistent with controller's spare_area_size, as 2680 - * the rest is inaccessible. 2636 + * Set mtd and memorg oobsize to be consistent with controller's 2637 + * spare_area_size, as the rest is inaccessible. 
2681 2638 */ 2682 2639 mtd->oobsize = cfg->spare_area_size * (mtd->writesize >> FC_SHIFT); 2640 + memorg->oobsize = mtd->oobsize; 2683 2641 2684 2642 cfg->device_size = mtd->size; 2685 2643 cfg->block_size = mtd->erasesize; ··· 3246 3202 3247 3203 ret = brcmnand_init_cs(host, NULL); 3248 3204 if (ret) { 3205 + if (ret == -EPROBE_DEFER) { 3206 + of_node_put(child); 3207 + goto err; 3208 + } 3249 3209 devm_kfree(dev, host); 3250 3210 continue; /* Try all chip-selects */ 3251 3211 }
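The brcmnand OOB fix above stops the word loop before it can index past the end of the buffer and flushes the trailing one to three bytes through a 0xff-filled staging word. That packing logic can be exercised outside the kernel; the sketch below is a hypothetical user-space helper (`pack_oob_words()` is not the kernel function, and register I/O is replaced by an output array) that reproduces the fixed `(j + 3) < tbytes` bound and the erased-value padding:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Pack a byte stream into big-endian 32-bit words, as the controller
 * expects, without ever reading past 'len'. Trailing bytes share a
 * final word padded with 0xff (the NAND erased value). Returns the
 * number of words written to 'out'. Illustrative helper only. */
static size_t pack_oob_words(const uint8_t *oob, size_t len, uint32_t *out)
{
	size_t j = 0, w = 0;

	/* full words only: mirrors the corrected loop bound */
	for (; j + 3 < len; j += 4)
		out[w++] = ((uint32_t)oob[j + 0] << 24) |
			   ((uint32_t)oob[j + 1] << 16) |
			   ((uint32_t)oob[j + 2] << 8) |
			   ((uint32_t)oob[j + 3] << 0);

	if (j < len) {			/* 1-3 leftover bytes */
		uint8_t last[4] = { 0xff, 0xff, 0xff, 0xff };

		memcpy(last, oob + j, len - j);
		out[w++] = ((uint32_t)last[0] << 24) |
			   ((uint32_t)last[1] << 16) |
			   ((uint32_t)last[2] << 8) |
			   ((uint32_t)last[3] << 0);
	}
	return w;
}
```

For a 5-byte OOB area this emits two words, the second carrying one data byte and three 0xff pad bytes, which matches what the driver's `cpu_to_be32(last)` write produces on the bus.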
+2 -5
drivers/mtd/nand/raw/brcmnand/iproc_nand.c
··· 103 103 struct device *dev = &pdev->dev; 104 104 struct iproc_nand_soc *priv; 105 105 struct brcmnand_soc *soc; 106 - struct resource *res; 107 106 108 107 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 109 108 if (!priv) ··· 111 112 112 113 spin_lock_init(&priv->idm_lock); 113 114 114 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "iproc-idm"); 115 - priv->idm_base = devm_ioremap_resource(dev, res); 115 + priv->idm_base = devm_platform_ioremap_resource_byname(pdev, "iproc-idm"); 116 116 if (IS_ERR(priv->idm_base)) 117 117 return PTR_ERR(priv->idm_base); 118 118 119 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "iproc-ext"); 120 - priv->ext_base = devm_ioremap_resource(dev, res); 119 + priv->ext_base = devm_platform_ioremap_resource_byname(pdev, "iproc-ext"); 121 120 if (IS_ERR(priv->ext_base)) 122 121 return PTR_ERR(priv->ext_base); 123 122
-1
drivers/mtd/nand/raw/davinci_nand.c
··· 18 18 #include <linux/mtd/rawnand.h> 19 19 #include <linux/mtd/partitions.h> 20 20 #include <linux/slab.h> 21 - #include <linux/of_device.h> 22 21 #include <linux/of.h> 23 22 24 23 #include <linux/platform_data/mtd-davinci.h>
-1
drivers/mtd/nand/raw/denali_dt.c
··· 13 13 #include <linux/kernel.h> 14 14 #include <linux/module.h> 15 15 #include <linux/of.h> 16 - #include <linux/of_device.h> 17 16 #include <linux/platform_device.h> 18 17 #include <linux/reset.h> 19 18
+1
drivers/mtd/nand/raw/fsl_ifc_nand.c
··· 8 8 */ 9 9 10 10 #include <linux/module.h> 11 + #include <linux/platform_device.h> 11 12 #include <linux/types.h> 12 13 #include <linux/kernel.h> 13 14 #include <linux/of_address.h>
+3 -3
drivers/mtd/nand/raw/fsl_upm.c
··· 13 13 #include <linux/mtd/rawnand.h> 14 14 #include <linux/mtd/partitions.h> 15 15 #include <linux/mtd/mtd.h> 16 - #include <linux/of_platform.h> 16 + #include <linux/of.h> 17 + #include <linux/platform_device.h> 17 18 #include <linux/io.h> 18 19 #include <linux/slab.h> 19 20 #include <asm/fsl_lbc.h> ··· 173 172 if (!fun) 174 173 return -ENOMEM; 175 174 176 - io_res = platform_get_resource(ofdev, IORESOURCE_MEM, 0); 177 - fun->io_base = devm_ioremap_resource(&ofdev->dev, io_res); 175 + fun->io_base = devm_platform_get_and_ioremap_resource(ofdev, 0, &io_res); 178 176 if (IS_ERR(fun->io_base)) 179 177 return PTR_ERR(fun->io_base); 180 178
+9 -10
drivers/mtd/nand/raw/fsmc_nand.c
··· 1066 1066 host->regs_va = base + FSMC_NOR_REG_SIZE + 1067 1067 (host->bank * FSMC_NAND_BANK_SZ); 1068 1068 1069 - host->clk = devm_clk_get(&pdev->dev, NULL); 1069 + host->clk = devm_clk_get_enabled(&pdev->dev, NULL); 1070 1070 if (IS_ERR(host->clk)) { 1071 1071 dev_err(&pdev->dev, "failed to fetch block clock\n"); 1072 1072 return PTR_ERR(host->clk); 1073 1073 } 1074 - 1075 - ret = clk_prepare_enable(host->clk); 1076 - if (ret) 1077 - return ret; 1078 1074 1079 1075 /* 1080 1076 * This device ID is actually a common AMBA ID as used on the ··· 1107 1111 if (!host->read_dma_chan) { 1108 1112 dev_err(&pdev->dev, "Unable to get read dma channel\n"); 1109 1113 ret = -ENODEV; 1110 - goto disable_clk; 1114 + goto disable_fsmc; 1111 1115 } 1112 1116 host->write_dma_chan = dma_request_channel(mask, filter, NULL); 1113 1117 if (!host->write_dma_chan) { ··· 1151 1155 release_dma_read_chan: 1152 1156 if (host->mode == USE_DMA_ACCESS) 1153 1157 dma_release_channel(host->read_dma_chan); 1154 - disable_clk: 1158 + disable_fsmc: 1155 1159 fsmc_nand_disable(host); 1156 - clk_disable_unprepare(host->clk); 1157 1160 1158 1161 return ret; 1159 1162 } ··· 1177 1182 dma_release_channel(host->write_dma_chan); 1178 1183 dma_release_channel(host->read_dma_chan); 1179 1184 } 1180 - clk_disable_unprepare(host->clk); 1181 1185 } 1182 1186 } 1183 1187 ··· 1194 1200 static int fsmc_nand_resume(struct device *dev) 1195 1201 { 1196 1202 struct fsmc_nand_data *host = dev_get_drvdata(dev); 1203 + int ret; 1197 1204 1198 1205 if (host) { 1199 - clk_prepare_enable(host->clk); 1206 + ret = clk_prepare_enable(host->clk); 1207 + if (ret) { 1208 + dev_err(dev, "failed to enable clk\n"); 1209 + return ret; 1210 + } 1200 1211 if (host->dev_timings) 1201 1212 fsmc_nand_setup(host, host->dev_timings); 1202 1213 nand_reset(&host->nand, 0);
+1 -1
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
··· 13 13 #include <linux/module.h> 14 14 #include <linux/mtd/partitions.h> 15 15 #include <linux/of.h> 16 - #include <linux/of_device.h> 16 + #include <linux/platform_device.h> 17 17 #include <linux/pm_runtime.h> 18 18 #include <linux/dma/mxs-dma.h> 19 19 #include "gpmi-nand.h"
+1
drivers/mtd/nand/raw/ingenic/ingenic_ecc.c
··· 9 9 #include <linux/clk.h> 10 10 #include <linux/init.h> 11 11 #include <linux/module.h> 12 + #include <linux/of.h> 12 13 #include <linux/of_platform.h> 13 14 #include <linux/platform_device.h> 14 15
-1
drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
··· 13 13 #include <linux/module.h> 14 14 #include <linux/of.h> 15 15 #include <linux/of_address.h> 16 - #include <linux/of_device.h> 17 16 #include <linux/gpio/consumer.h> 18 17 #include <linux/platform_device.h> 19 18 #include <linux/slab.h>
+3 -12
drivers/mtd/nand/raw/intel-nand-controller.c
··· 626 626 goto err_of_node_put; 627 627 } 628 628 629 - ebu_host->clk = devm_clk_get(dev, NULL); 629 + ebu_host->clk = devm_clk_get_enabled(dev, NULL); 630 630 if (IS_ERR(ebu_host->clk)) { 631 631 ret = dev_err_probe(dev, PTR_ERR(ebu_host->clk), 632 - "failed to get clock\n"); 633 - goto err_of_node_put; 634 - } 635 - 636 - ret = clk_prepare_enable(ebu_host->clk); 637 - if (ret) { 638 - dev_err(dev, "failed to enable clock: %d\n", ret); 632 + "failed to get and enable clock\n"); 639 633 goto err_of_node_put; 640 634 } 641 635 ··· 637 643 if (IS_ERR(ebu_host->dma_tx)) { 638 644 ret = dev_err_probe(dev, PTR_ERR(ebu_host->dma_tx), 639 645 "failed to request DMA tx chan!.\n"); 640 - goto err_disable_unprepare_clk; 646 + goto err_of_node_put; 641 647 } 642 648 643 649 ebu_host->dma_rx = dma_request_chan(dev, "rx"); ··· 692 698 nand_cleanup(&ebu_host->chip); 693 699 err_cleanup_dma: 694 700 ebu_dma_cleanup(ebu_host); 695 - err_disable_unprepare_clk: 696 - clk_disable_unprepare(ebu_host->clk); 697 701 err_of_node_put: 698 702 of_node_put(chip_np); 699 703 ··· 708 716 nand_cleanup(&ebu_host->chip); 709 717 ebu_nand_disable(&ebu_host->chip); 710 718 ebu_dma_cleanup(ebu_host); 711 - clk_disable_unprepare(ebu_host->clk); 712 719 } 713 720 714 721 static const struct of_device_id ebu_nand_match[] = {
+1 -2
drivers/mtd/nand/raw/lpc32xx_mlc.c
··· 695 695 696 696 host->pdev = pdev; 697 697 698 - rc = platform_get_resource(pdev, IORESOURCE_MEM, 0); 699 - host->io_base = devm_ioremap_resource(&pdev->dev, rc); 698 + host->io_base = devm_platform_get_and_ioremap_resource(pdev, 0, &rc); 700 699 if (IS_ERR(host->io_base)) 701 700 return PTR_ERR(host->io_base); 702 701
+4 -11
drivers/mtd/nand/raw/lpc32xx_slc.c
··· 836 836 if (!host) 837 837 return -ENOMEM; 838 838 839 - rc = platform_get_resource(pdev, IORESOURCE_MEM, 0); 840 - host->io_base = devm_ioremap_resource(&pdev->dev, rc); 839 + host->io_base = devm_platform_get_and_ioremap_resource(pdev, 0, &rc); 841 840 if (IS_ERR(host->io_base)) 842 841 return PTR_ERR(host->io_base); 843 842 ··· 871 872 mtd->dev.parent = &pdev->dev; 872 873 873 874 /* Get NAND clock */ 874 - host->clk = devm_clk_get(&pdev->dev, NULL); 875 + host->clk = devm_clk_get_enabled(&pdev->dev, NULL); 875 876 if (IS_ERR(host->clk)) { 876 877 dev_err(&pdev->dev, "Clock failure\n"); 877 878 res = -ENOENT; 878 879 goto enable_wp; 879 880 } 880 - res = clk_prepare_enable(host->clk); 881 - if (res) 882 - goto enable_wp; 883 881 884 882 /* Set NAND IO addresses and command/ready functions */ 885 883 chip->legacy.IO_ADDR_R = SLC_DATA(host->io_base); ··· 904 908 GFP_KERNEL); 905 909 if (host->data_buf == NULL) { 906 910 res = -ENOMEM; 907 - goto unprepare_clk; 911 + goto enable_wp; 908 912 } 909 913 910 914 res = lpc32xx_nand_dma_setup(host); 911 915 if (res) { 912 916 res = -EIO; 913 - goto unprepare_clk; 917 + goto enable_wp; 914 918 } 915 919 916 920 /* Find NAND device */ ··· 931 935 nand_cleanup(chip); 932 936 release_dma: 933 937 dma_release_channel(host->dma_chan); 934 - unprepare_clk: 935 - clk_disable_unprepare(host->clk); 936 938 enable_wp: 937 939 lpc32xx_wp_enable(host); 938 940 ··· 957 963 tmp &= ~SLCCFG_CE_LOW; 958 964 writel(tmp, SLC_CTRL(host->io_base)); 959 965 960 - clk_disable_unprepare(host->clk); 961 966 lpc32xx_wp_enable(host); 962 967 } 963 968
+18 -1
drivers/mtd/nand/raw/marvell_nand.c
··· 77 77 #include <linux/module.h> 78 78 #include <linux/clk.h> 79 79 #include <linux/mtd/rawnand.h> 80 - #include <linux/of_platform.h> 80 + #include <linux/of.h> 81 81 #include <linux/iopoll.h> 82 82 #include <linux/interrupt.h> 83 + #include <linux/platform_device.h> 83 84 #include <linux/slab.h> 84 85 #include <linux/mfd/syscon.h> 85 86 #include <linux/regmap.h> ··· 376 375 * BCH error detection and correction algorithm, 377 376 * NDCB3 register has been added 378 377 * @use_dma: Use dma for data transfers 378 + * @max_mode_number: Maximum timing mode supported by the controller 379 379 */ 380 380 struct marvell_nfc_caps { 381 381 unsigned int max_cs_nb; ··· 385 383 bool legacy_of_bindings; 386 384 bool is_nfcv2; 387 385 bool use_dma; 386 + unsigned int max_mode_number; 388 387 }; 389 388 390 389 /** ··· 2379 2376 if (IS_ERR(sdr)) 2380 2377 return PTR_ERR(sdr); 2381 2378 2379 + if (nfc->caps->max_mode_number && nfc->caps->max_mode_number < conf->timings.mode) 2380 + return -EOPNOTSUPP; 2381 + 2382 2382 /* 2383 2383 * SDR timings are given in pico-seconds while NFC timings must be 2384 2384 * expressed in NAND controller clock cycles, which is half of the ··· 3079 3073 .is_nfcv2 = true, 3080 3074 }; 3081 3075 3076 + static const struct marvell_nfc_caps marvell_ac5_caps = { 3077 + .max_cs_nb = 2, 3078 + .max_rb_nb = 1, 3079 + .is_nfcv2 = true, 3080 + .max_mode_number = 3, 3081 + }; 3082 + 3082 3083 static const struct marvell_nfc_caps marvell_armada370_nfc_caps = { 3083 3084 .max_cs_nb = 4, 3084 3085 .max_rb_nb = 2, ··· 3133 3120 { 3134 3121 .compatible = "marvell,armada-8k-nand-controller", 3135 3122 .data = &marvell_armada_8k_nfc_caps, 3123 + }, 3124 + { 3125 + .compatible = "marvell,ac5-nand-controller", 3126 + .data = &marvell_ac5_caps, 3136 3127 }, 3137 3128 { 3138 3129 .compatible = "marvell,armada370-nand-controller",
+46 -26
drivers/mtd/nand/raw/meson_nand.c
··· 19 19 #include <linux/module.h> 20 20 #include <linux/iopoll.h> 21 21 #include <linux/of.h> 22 - #include <linux/of_device.h> 23 22 #include <linux/sched/task_stack.h> 24 23 25 24 #define NFC_REG_CMD 0x00 ··· 134 135 struct meson_nand_ecc { 135 136 u32 bch; 136 137 u32 strength; 138 + u32 size; 137 139 }; 138 140 139 141 struct meson_nfc_data { ··· 190 190 }; 191 191 192 192 enum { 193 - NFC_ECC_BCH8_1K = 2, 193 + NFC_ECC_BCH8_512 = 1, 194 + NFC_ECC_BCH8_1K, 194 195 NFC_ECC_BCH24_1K, 195 196 NFC_ECC_BCH30_1K, 196 197 NFC_ECC_BCH40_1K, ··· 199 198 NFC_ECC_BCH60_1K, 200 199 }; 201 200 202 - #define MESON_ECC_DATA(b, s) { .bch = (b), .strength = (s)} 201 + #define MESON_ECC_DATA(b, s, sz) { .bch = (b), .strength = (s), .size = (sz) } 203 202 204 203 static struct meson_nand_ecc meson_ecc[] = { 205 - MESON_ECC_DATA(NFC_ECC_BCH8_1K, 8), 206 - MESON_ECC_DATA(NFC_ECC_BCH24_1K, 24), 207 - MESON_ECC_DATA(NFC_ECC_BCH30_1K, 30), 208 - MESON_ECC_DATA(NFC_ECC_BCH40_1K, 40), 209 - MESON_ECC_DATA(NFC_ECC_BCH50_1K, 50), 210 - MESON_ECC_DATA(NFC_ECC_BCH60_1K, 60), 204 + MESON_ECC_DATA(NFC_ECC_BCH8_512, 8, 512), 205 + MESON_ECC_DATA(NFC_ECC_BCH8_1K, 8, 1024), 206 + MESON_ECC_DATA(NFC_ECC_BCH24_1K, 24, 1024), 207 + MESON_ECC_DATA(NFC_ECC_BCH30_1K, 30, 1024), 208 + MESON_ECC_DATA(NFC_ECC_BCH40_1K, 40, 1024), 209 + MESON_ECC_DATA(NFC_ECC_BCH50_1K, 50, 1024), 210 + MESON_ECC_DATA(NFC_ECC_BCH60_1K, 60, 1024), 211 211 }; 212 212 213 213 static int meson_nand_calc_ecc_bytes(int step_size, int strength) ··· 226 224 227 225 NAND_ECC_CAPS_SINGLE(meson_gxl_ecc_caps, 228 226 meson_nand_calc_ecc_bytes, 1024, 8, 24, 30, 40, 50, 60); 229 - NAND_ECC_CAPS_SINGLE(meson_axg_ecc_caps, 230 - meson_nand_calc_ecc_bytes, 1024, 8); 227 + 228 + static const int axg_stepinfo_strengths[] = { 8 }; 229 + 230 + static const struct nand_ecc_step_info axg_stepinfo[] = { 231 + { 232 + .stepsize = 1024, 233 + .strengths = axg_stepinfo_strengths, 234 + .nstrengths = ARRAY_SIZE(axg_stepinfo_strengths) 235 + }, 236 
+ { 237 + .stepsize = 512, 238 + .strengths = axg_stepinfo_strengths, 239 + .nstrengths = ARRAY_SIZE(axg_stepinfo_strengths) 240 + }, 241 + }; 242 + 243 + static const struct nand_ecc_caps meson_axg_ecc_caps = { 244 + .stepinfos = axg_stepinfo, 245 + .nstepinfos = ARRAY_SIZE(axg_stepinfo), 246 + .calc_ecc_bytes = meson_nand_calc_ecc_bytes, 247 + }; 231 248 232 249 static struct meson_nfc_nand_chip *to_meson_nand(struct nand_chip *nand) 233 250 { ··· 421 400 } 422 401 } 423 402 424 - static int meson_nfc_wait_no_rb_pin(struct meson_nfc *nfc, int timeout_ms, 403 + static int meson_nfc_wait_no_rb_pin(struct nand_chip *nand, int timeout_ms, 425 404 bool need_cmd_read0) 426 405 { 406 + struct meson_nfc *nfc = nand_get_controller_data(nand); 427 407 u32 cmd, cfg; 428 408 429 409 meson_nfc_cmd_idle(nfc, nfc->timing.twb); ··· 436 414 writel(cfg, nfc->reg_base + NFC_REG_CFG); 437 415 438 416 reinit_completion(&nfc->completion); 439 - cmd = nfc->param.chip_select | NFC_CMD_CLE | NAND_CMD_STATUS; 440 - writel(cmd, nfc->reg_base + NFC_REG_CMD); 417 + nand_status_op(nand, NULL); 441 418 442 419 /* use the max erase time as the maximum clock for waiting R/B */ 443 420 cmd = NFC_CMD_RB | NFC_CMD_RB_INT_NO_PIN | nfc->timing.tbers_max; ··· 446 425 msecs_to_jiffies(timeout_ms))) 447 426 return -ETIMEDOUT; 448 427 449 - if (need_cmd_read0) { 450 - cmd = nfc->param.chip_select | NFC_CMD_CLE | NAND_CMD_READ0; 451 - writel(cmd, nfc->reg_base + NFC_REG_CMD); 452 - meson_nfc_drain_cmd(nfc); 453 - meson_nfc_wait_cmd_finish(nfc, CMD_FIFO_EMPTY_TIMEOUT); 454 - } 428 + if (need_cmd_read0) 429 + nand_exit_status_op(nand); 455 430 456 431 return 0; 457 432 } ··· 480 463 return ret; 481 464 } 482 465 483 - static int meson_nfc_queue_rb(struct meson_nfc *nfc, int timeout_ms, 466 + static int meson_nfc_queue_rb(struct nand_chip *nand, int timeout_ms, 484 467 bool need_cmd_read0) 485 468 { 469 + struct meson_nfc *nfc = nand_get_controller_data(nand); 470 + 486 471 if (nfc->no_rb_pin) { 487 472 /* 
This mode is used when there is no wired R/B pin. 488 473 * It works like 'nand_soft_waitrdy()', but instead of ··· 496 477 * needed (for all cases except page programming - this 497 478 * is reason of 'need_cmd_read0' flag). 498 479 */ 499 - return meson_nfc_wait_no_rb_pin(nfc, timeout_ms, 480 + return meson_nfc_wait_no_rb_pin(nand, timeout_ms, 500 481 need_cmd_read0); 501 482 } else { 502 483 return meson_nfc_wait_rb_pin(nfc, timeout_ms); ··· 706 687 if (in) { 707 688 nfc->cmdfifo.rw.cmd1 = cs | NFC_CMD_CLE | NAND_CMD_READSTART; 708 689 writel(nfc->cmdfifo.rw.cmd1, nfc->reg_base + NFC_REG_CMD); 709 - meson_nfc_queue_rb(nfc, PSEC_TO_MSEC(sdr->tR_max), true); 690 + meson_nfc_queue_rb(nand, PSEC_TO_MSEC(sdr->tR_max), true); 710 691 } else { 711 692 meson_nfc_cmd_idle(nfc, nfc->timing.tadl); 712 693 } ··· 752 733 753 734 cmd = nfc->param.chip_select | NFC_CMD_CLE | NAND_CMD_PAGEPROG; 754 735 writel(cmd, nfc->reg_base + NFC_REG_CMD); 755 - meson_nfc_queue_rb(nfc, PSEC_TO_MSEC(sdr->tPROG_max), false); 736 + meson_nfc_queue_rb(nand, PSEC_TO_MSEC(sdr->tPROG_max), false); 756 737 757 738 meson_nfc_dma_buffer_release(nand, data_len, info_len, DMA_TO_DEVICE); 758 739 ··· 1068 1049 break; 1069 1050 1070 1051 case NAND_OP_WAITRDY_INSTR: 1071 - meson_nfc_queue_rb(nfc, instr->ctx.waitrdy.timeout_ms, 1052 + meson_nfc_queue_rb(nand, instr->ctx.waitrdy.timeout_ms, 1072 1053 true); 1073 1054 if (instr->delay_ns) 1074 1055 meson_nfc_cmd_idle(nfc, delay_idle); ··· 1278 1259 return -EINVAL; 1279 1260 1280 1261 for (i = 0; i < ARRAY_SIZE(meson_ecc); i++) { 1281 - if (meson_ecc[i].strength == nand->ecc.strength) { 1262 + if (meson_ecc[i].strength == nand->ecc.strength && 1263 + meson_ecc[i].size == nand->ecc.size) { 1282 1264 meson_chip->bch_mode = meson_ecc[i].bch; 1283 1265 return 0; 1284 1266 }
+4 -11
drivers/mtd/nand/raw/mpc5121_nfc.c
··· 21 21 #include <linux/mtd/mtd.h> 22 22 #include <linux/mtd/rawnand.h> 23 23 #include <linux/mtd/partitions.h> 24 + #include <linux/of.h> 24 25 #include <linux/of_address.h> 25 - #include <linux/of_device.h> 26 26 #include <linux/of_irq.h> 27 - #include <linux/of_platform.h> 27 + #include <linux/platform_device.h> 28 28 29 29 #include <asm/mpc5121.h> 30 30 ··· 595 595 struct nand_chip *chip = mtd_to_nand(mtd); 596 596 struct mpc5121_nfc_prv *prv = nand_get_controller_data(chip); 597 597 598 - clk_disable_unprepare(prv->clk); 599 - 600 598 if (prv->csreg) 601 599 iounmap(prv->csreg); 602 600 } ··· 715 717 } 716 718 717 719 /* Enable NFC clock */ 718 - clk = devm_clk_get(dev, "ipg"); 720 + clk = devm_clk_get_enabled(dev, "ipg"); 719 721 if (IS_ERR(clk)) { 720 - dev_err(dev, "Unable to acquire NFC clock!\n"); 722 + dev_err(dev, "Unable to acquire and enable NFC clock!\n"); 721 723 retval = PTR_ERR(clk); 722 - goto error; 723 - } 724 - retval = clk_prepare_enable(clk); 725 - if (retval) { 726 - dev_err(dev, "Unable to enable NFC clock!\n"); 727 724 goto error; 728 725 } 729 726 prv->clk = clk;
+19 -44
drivers/mtd/nand/raw/mtk_nand.c
··· 16 16 #include <linux/module.h> 17 17 #include <linux/iopoll.h> 18 18 #include <linux/of.h> 19 - #include <linux/of_device.h> 20 19 #include <linux/mtd/nand-ecc-mtk.h> 21 20 22 21 /* NAND controller register definition */ ··· 1118 1119 return IRQ_HANDLED; 1119 1120 } 1120 1121 1121 - static int mtk_nfc_enable_clk(struct device *dev, struct mtk_nfc_clk *clk) 1122 - { 1123 - int ret; 1124 - 1125 - ret = clk_prepare_enable(clk->nfi_clk); 1126 - if (ret) { 1127 - dev_err(dev, "failed to enable nfi clk\n"); 1128 - return ret; 1129 - } 1130 - 1131 - ret = clk_prepare_enable(clk->pad_clk); 1132 - if (ret) { 1133 - dev_err(dev, "failed to enable pad clk\n"); 1134 - clk_disable_unprepare(clk->nfi_clk); 1135 - return ret; 1136 - } 1137 - 1138 - return 0; 1139 - } 1140 - 1141 - static void mtk_nfc_disable_clk(struct mtk_nfc_clk *clk) 1142 - { 1143 - clk_disable_unprepare(clk->nfi_clk); 1144 - clk_disable_unprepare(clk->pad_clk); 1145 - } 1146 - 1147 1122 static int mtk_nfc_ooblayout_free(struct mtd_info *mtd, int section, 1148 1123 struct mtd_oob_region *oob_region) 1149 1124 { ··· 1519 1546 goto release_ecc; 1520 1547 } 1521 1548 1522 - nfc->clk.nfi_clk = devm_clk_get(dev, "nfi_clk"); 1549 + nfc->clk.nfi_clk = devm_clk_get_enabled(dev, "nfi_clk"); 1523 1550 if (IS_ERR(nfc->clk.nfi_clk)) { 1524 1551 dev_err(dev, "no clk\n"); 1525 1552 ret = PTR_ERR(nfc->clk.nfi_clk); 1526 1553 goto release_ecc; 1527 1554 } 1528 1555 1529 - nfc->clk.pad_clk = devm_clk_get(dev, "pad_clk"); 1556 + nfc->clk.pad_clk = devm_clk_get_enabled(dev, "pad_clk"); 1530 1557 if (IS_ERR(nfc->clk.pad_clk)) { 1531 1558 dev_err(dev, "no pad clk\n"); 1532 1559 ret = PTR_ERR(nfc->clk.pad_clk); 1533 1560 goto release_ecc; 1534 1561 } 1535 1562 1536 - ret = mtk_nfc_enable_clk(dev, &nfc->clk); 1537 - if (ret) 1538 - goto release_ecc; 1539 - 1540 1563 irq = platform_get_irq(pdev, 0); 1541 1564 if (irq < 0) { 1542 1565 ret = -EINVAL; 1543 - goto clk_disable; 1566 + goto release_ecc; 1544 1567 } 1545 1568 1546 1569 
ret = devm_request_irq(dev, irq, mtk_nfc_irq, 0x0, "mtk-nand", nfc); 1547 1570 if (ret) { 1548 1571 dev_err(dev, "failed to request nfi irq\n"); 1549 - goto clk_disable; 1572 + goto release_ecc; 1550 1573 } 1551 1574 1552 1575 ret = dma_set_mask(dev, DMA_BIT_MASK(32)); 1553 1576 if (ret) { 1554 1577 dev_err(dev, "failed to set dma mask\n"); 1555 - goto clk_disable; 1578 + goto release_ecc; 1556 1579 } 1557 1580 1558 1581 platform_set_drvdata(pdev, nfc); ··· 1556 1587 ret = mtk_nfc_nand_chips_init(dev, nfc); 1557 1588 if (ret) { 1558 1589 dev_err(dev, "failed to init nand chips\n"); 1559 - goto clk_disable; 1590 + goto release_ecc; 1560 1591 } 1561 1592 1562 1593 return 0; 1563 - 1564 - clk_disable: 1565 - mtk_nfc_disable_clk(&nfc->clk); 1566 1594 1567 1595 release_ecc: 1568 1596 mtk_ecc_release(nfc->ecc); ··· 1585 1619 } 1586 1620 1587 1621 mtk_ecc_release(nfc->ecc); 1588 - mtk_nfc_disable_clk(&nfc->clk); 1589 1622 } 1590 1623 1591 1624 #ifdef CONFIG_PM_SLEEP ··· 1592 1627 { 1593 1628 struct mtk_nfc *nfc = dev_get_drvdata(dev); 1594 1629 1595 - mtk_nfc_disable_clk(&nfc->clk); 1630 + clk_disable_unprepare(nfc->clk.nfi_clk); 1631 + clk_disable_unprepare(nfc->clk.pad_clk); 1596 1632 1597 1633 return 0; 1598 1634 } ··· 1608 1642 1609 1643 udelay(200); 1610 1644 1611 - ret = mtk_nfc_enable_clk(dev, &nfc->clk); 1612 - if (ret) 1645 + ret = clk_prepare_enable(nfc->clk.nfi_clk); 1646 + if (ret) { 1647 + dev_err(dev, "failed to enable nfi clk\n"); 1613 1648 return ret; 1649 + } 1650 + 1651 + ret = clk_prepare_enable(nfc->clk.pad_clk); 1652 + if (ret) { 1653 + dev_err(dev, "failed to enable pad clk\n"); 1654 + clk_disable_unprepare(nfc->clk.nfi_clk); 1655 + return ret; 1656 + } 1614 1657 1615 1658 /* reset NAND chip if VCC was powered off */ 1616 1659 list_for_each_entry(chip, &nfc->chips, node) {
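With `mtk_nfc_enable_clk()`/`mtk_nfc_disable_clk()` removed, probe uses `devm_clk_get_enabled()` and only the suspend/resume path still enables the clocks by hand, unwinding `nfi_clk` when `pad_clk` fails. The generic rollback shape, as a small self-contained sketch with a toy clock type (not the kernel clk API):

```c
#include <assert.h>

struct toy_clk {
	int enabled;
	int fail_next;	/* test hook: force the next enable to fail */
};

static int toy_clk_enable(struct toy_clk *c)
{
	if (c->fail_next)
		return -1;
	c->enabled = 1;
	return 0;
}

static void toy_clk_disable(struct toy_clk *c)
{
	c->enabled = 0;
}

/* Enable two clocks in order; on failure of the second, roll back the
 * first so the function leaves no resource half-acquired. */
static int resume_clks(struct toy_clk *nfi, struct toy_clk *pad)
{
	int ret;

	ret = toy_clk_enable(nfi);
	if (ret)
		return ret;

	ret = toy_clk_enable(pad);
	if (ret) {
		toy_clk_disable(nfi);	/* undo in reverse order */
		return ret;
	}
	return 0;
}
```

This is the same invariant the old `mtk_nfc_enable_clk()` helper maintained; inlining it just moves the unwind next to the failing call.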
+3 -7
drivers/mtd/nand/raw/mxc_nand.c
··· 20 20 #include <linux/irq.h> 21 21 #include <linux/completion.h> 22 22 #include <linux/of.h> 23 - #include <linux/of_device.h> 24 23 25 24 #define DRIVER_NAME "mxc_nand" 26 25 ··· 1695 1696 struct nand_chip *this; 1696 1697 struct mtd_info *mtd; 1697 1698 struct mxc_nand_host *host; 1698 - struct resource *res; 1699 1699 int err = 0; 1700 1700 1701 1701 /* Allocate memory for MTD device structure and private data */ ··· 1738 1740 this->options |= NAND_KEEP_TIMINGS; 1739 1741 1740 1742 if (host->devtype_data->needs_ip) { 1741 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1742 - host->regs_ip = devm_ioremap_resource(&pdev->dev, res); 1743 + host->regs_ip = devm_platform_ioremap_resource(pdev, 0); 1743 1744 if (IS_ERR(host->regs_ip)) 1744 1745 return PTR_ERR(host->regs_ip); 1745 1746 1746 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1747 + host->base = devm_platform_ioremap_resource(pdev, 1); 1747 1748 } else { 1748 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1749 + host->base = devm_platform_ioremap_resource(pdev, 0); 1749 1750 } 1750 1751 1751 - host->base = devm_ioremap_resource(&pdev->dev, res); 1752 1752 if (IS_ERR(host->base)) 1753 1753 return PTR_ERR(host->base); 1754 1754
+1
drivers/mtd/nand/raw/nand_base.c
··· 1885 1885 1886 1886 return 0; 1887 1887 } 1888 + EXPORT_SYMBOL_GPL(nand_exit_status_op); 1888 1889 1889 1890 /** 1890 1891 * nand_erase_op - Do an erase operation
+2 -1
drivers/mtd/nand/raw/ndfc.c
··· 22 22 #include <linux/mtd/ndfc.h> 23 23 #include <linux/slab.h> 24 24 #include <linux/mtd/mtd.h> 25 + #include <linux/of.h> 25 26 #include <linux/of_address.h> 26 - #include <linux/of_platform.h> 27 + #include <linux/platform_device.h> 27 28 #include <asm/io.h> 28 29 29 30 #define NDFC_MAX_CS 4
+2 -3
drivers/mtd/nand/raw/omap2.c
··· 22 22 #include <linux/iopoll.h> 23 23 #include <linux/slab.h> 24 24 #include <linux/of.h> 25 - #include <linux/of_device.h> 25 + #include <linux/of_platform.h> 26 26 27 27 #include <linux/platform_data/elm.h> 28 28 ··· 2219 2219 } 2220 2220 } 2221 2221 2222 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2223 - vaddr = devm_ioremap_resource(&pdev->dev, res); 2222 + vaddr = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 2224 2223 if (IS_ERR(vaddr)) 2225 2224 return PTR_ERR(vaddr); 2226 2225
+4 -18
drivers/mtd/nand/raw/orion_nand.c
··· 169 169 platform_set_drvdata(pdev, info); 170 170 171 171 /* Not all platforms can gate the clock, so it is optional. */ 172 - info->clk = devm_clk_get_optional(&pdev->dev, NULL); 172 + info->clk = devm_clk_get_optional_enabled(&pdev->dev, NULL); 173 173 if (IS_ERR(info->clk)) 174 174 return dev_err_probe(&pdev->dev, PTR_ERR(info->clk), 175 - "failed to get clock!\n"); 176 - 177 - ret = clk_prepare_enable(info->clk); 178 - if (ret) { 179 - dev_err(&pdev->dev, "failed to prepare clock!\n"); 180 - return ret; 181 - } 175 + "failed to get and enable clock!\n"); 182 176 183 177 /* 184 178 * This driver assumes that the default ECC engine should be TYPE_SOFT. ··· 183 189 184 190 ret = nand_scan(nc, 1); 185 191 if (ret) 186 - goto no_dev; 192 + return ret; 187 193 188 194 mtd->name = "orion_nand"; 189 195 ret = mtd_device_register(mtd, board->parts, board->nr_parts); 190 - if (ret) { 196 + if (ret) 191 197 nand_cleanup(nc); 192 - goto no_dev; 193 - } 194 198 195 - return 0; 196 - 197 - no_dev: 198 - clk_disable_unprepare(info->clk); 199 199 return ret; 200 200 } 201 201 ··· 203 215 WARN_ON(ret); 204 216 205 217 nand_cleanup(chip); 206 - 207 - clk_disable_unprepare(info->clk); 208 218 } 209 219 210 220 #ifdef CONFIG_OF
-209
drivers/mtd/nand/raw/oxnas_nand.c
···
- // SPDX-License-Identifier: GPL-2.0-only
- /*
-  * Oxford Semiconductor OXNAS NAND driver
-
-  * Copyright (C) 2016 Neil Armstrong <narmstrong@baylibre.com>
-  * Heavily based on plat_nand.c :
-  * Author: Vitaly Wool <vitalywool@gmail.com>
-  * Copyright (C) 2013 Ma Haijun <mahaijuns@gmail.com>
-  * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
-  */
-
- #include <linux/err.h>
- #include <linux/io.h>
- #include <linux/module.h>
- #include <linux/platform_device.h>
- #include <linux/slab.h>
- #include <linux/clk.h>
- #include <linux/reset.h>
- #include <linux/mtd/mtd.h>
- #include <linux/mtd/rawnand.h>
- #include <linux/mtd/partitions.h>
- #include <linux/of.h>
-
- /* Nand commands */
- #define OXNAS_NAND_CMD_ALE	BIT(18)
- #define OXNAS_NAND_CMD_CLE	BIT(19)
-
- #define OXNAS_NAND_MAX_CHIPS	1
-
- struct oxnas_nand_ctrl {
-         struct nand_controller base;
-         void __iomem *io_base;
-         struct clk *clk;
-         struct nand_chip *chips[OXNAS_NAND_MAX_CHIPS];
-         unsigned int nchips;
- };
-
- static uint8_t oxnas_nand_read_byte(struct nand_chip *chip)
- {
-         struct oxnas_nand_ctrl *oxnas = nand_get_controller_data(chip);
-
-         return readb(oxnas->io_base);
- }
-
- static void oxnas_nand_read_buf(struct nand_chip *chip, u8 *buf, int len)
- {
-         struct oxnas_nand_ctrl *oxnas = nand_get_controller_data(chip);
-
-         ioread8_rep(oxnas->io_base, buf, len);
- }
-
- static void oxnas_nand_write_buf(struct nand_chip *chip, const u8 *buf,
-                                  int len)
- {
-         struct oxnas_nand_ctrl *oxnas = nand_get_controller_data(chip);
-
-         iowrite8_rep(oxnas->io_base, buf, len);
- }
-
- /* Single CS command control */
- static void oxnas_nand_cmd_ctrl(struct nand_chip *chip, int cmd,
-                                 unsigned int ctrl)
- {
-         struct oxnas_nand_ctrl *oxnas = nand_get_controller_data(chip);
-
-         if (ctrl & NAND_CLE)
-                 writeb(cmd, oxnas->io_base + OXNAS_NAND_CMD_CLE);
-         else if (ctrl & NAND_ALE)
-                 writeb(cmd, oxnas->io_base + OXNAS_NAND_CMD_ALE);
- }
-
- /*
-  * Probe for the NAND device.
-  */
- static int oxnas_nand_probe(struct platform_device *pdev)
- {
-         struct device_node *np = pdev->dev.of_node;
-         struct device_node *nand_np;
-         struct oxnas_nand_ctrl *oxnas;
-         struct nand_chip *chip;
-         struct mtd_info *mtd;
-         int count = 0;
-         int err = 0;
-         int i;
-
-         /* Allocate memory for the device structure (and zero it) */
-         oxnas = devm_kzalloc(&pdev->dev, sizeof(*oxnas),
-                              GFP_KERNEL);
-         if (!oxnas)
-                 return -ENOMEM;
-
-         nand_controller_init(&oxnas->base);
-
-         oxnas->io_base = devm_platform_ioremap_resource(pdev, 0);
-         if (IS_ERR(oxnas->io_base))
-                 return PTR_ERR(oxnas->io_base);
-
-         oxnas->clk = devm_clk_get(&pdev->dev, NULL);
-         if (IS_ERR(oxnas->clk))
-                 oxnas->clk = NULL;
-
-         /* Only a single chip node is supported */
-         count = of_get_child_count(np);
-         if (count > 1)
-                 return -EINVAL;
-
-         err = clk_prepare_enable(oxnas->clk);
-         if (err)
-                 return err;
-
-         device_reset_optional(&pdev->dev);
-
-         for_each_child_of_node(np, nand_np) {
-                 chip = devm_kzalloc(&pdev->dev, sizeof(struct nand_chip),
-                                     GFP_KERNEL);
-                 if (!chip) {
-                         err = -ENOMEM;
-                         goto err_release_child;
-                 }
-
-                 chip->controller = &oxnas->base;
-
-                 nand_set_flash_node(chip, nand_np);
-                 nand_set_controller_data(chip, oxnas);
-
-                 mtd = nand_to_mtd(chip);
-                 mtd->dev.parent = &pdev->dev;
-                 mtd->priv = chip;
-
-                 chip->legacy.cmd_ctrl = oxnas_nand_cmd_ctrl;
-                 chip->legacy.read_buf = oxnas_nand_read_buf;
-                 chip->legacy.read_byte = oxnas_nand_read_byte;
-                 chip->legacy.write_buf = oxnas_nand_write_buf;
-                 chip->legacy.chip_delay = 30;
-
-                 /* Scan to find existence of the device */
-                 err = nand_scan(chip, 1);
-                 if (err)
-                         goto err_release_child;
-
-                 err = mtd_device_register(mtd, NULL, 0);
-                 if (err)
-                         goto err_cleanup_nand;
-
-                 oxnas->chips[oxnas->nchips++] = chip;
-         }
-
-         /* Exit if no chips found */
-         if (!oxnas->nchips) {
-                 err = -ENODEV;
-                 goto err_clk_unprepare;
-         }
-
-         platform_set_drvdata(pdev, oxnas);
-
-         return 0;
-
- err_cleanup_nand:
-         nand_cleanup(chip);
- err_release_child:
-         of_node_put(nand_np);
-
-         for (i = 0; i < oxnas->nchips; i++) {
-                 chip = oxnas->chips[i];
-                 WARN_ON(mtd_device_unregister(nand_to_mtd(chip)));
-                 nand_cleanup(chip);
-         }
-
- err_clk_unprepare:
-         clk_disable_unprepare(oxnas->clk);
-         return err;
- }
-
- static void oxnas_nand_remove(struct platform_device *pdev)
- {
-         struct oxnas_nand_ctrl *oxnas = platform_get_drvdata(pdev);
-         struct nand_chip *chip;
-         int i;
-
-         for (i = 0; i < oxnas->nchips; i++) {
-                 chip = oxnas->chips[i];
-                 WARN_ON(mtd_device_unregister(nand_to_mtd(chip)));
-                 nand_cleanup(chip);
-         }
-
-         clk_disable_unprepare(oxnas->clk);
- }
-
- static const struct of_device_id oxnas_nand_match[] = {
-         { .compatible = "oxsemi,ox820-nand" },
-         {},
- };
- MODULE_DEVICE_TABLE(of, oxnas_nand_match);
-
- static struct platform_driver oxnas_nand_driver = {
-         .probe	= oxnas_nand_probe,
-         .remove_new = oxnas_nand_remove,
-         .driver	= {
-                 .name		= "oxnas_nand",
-                 .of_match_table = oxnas_nand_match,
-         },
- };
-
- module_platform_driver(oxnas_nand_driver);
-
- MODULE_LICENSE("GPL");
- MODULE_AUTHOR("Neil Armstrong <narmstrong@baylibre.com>");
- MODULE_DESCRIPTION("Oxnas NAND driver");
- MODULE_ALIAS("platform:oxnas_nand");
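The one hardware quirk worth noting in the deleted driver is how `oxnas_nand_cmd_ctrl()` steers bytes: the OX820 controller decodes bits 18/19 of the MMIO address to latch a command (CLE) or address (ALE) byte, so the driver simply writes to `io_base + OXNAS_NAND_CMD_CLE/ALE`. A minimal userspace sketch of that address selection, with mock base values; the `NAND_CLE`/`NAND_ALE` flag values mirror the rawnand legacy control-line flags:

```c
#include <stdint.h>

/* Control-line flags, as passed to a legacy cmd_ctrl() hook. */
#define NAND_CLE 0x02 /* command latch enable */
#define NAND_ALE 0x04 /* address latch enable */

/* The OX820 decodes these address bits to route the written byte. */
#define OXNAS_NAND_CMD_ALE (1u << 18)
#define OXNAS_NAND_CMD_CLE (1u << 19)

/*
 * Pick the MMIO target for a cmd_ctrl() write, mirroring the driver:
 * CLE writes land at io_base + BIT(19), ALE writes at io_base + BIT(18),
 * and plain data I/O uses io_base itself.
 */
static uint32_t oxnas_cmd_target(uint32_t io_base, unsigned int ctrl)
{
        if (ctrl & NAND_CLE)
                return io_base + OXNAS_NAND_CMD_CLE;
        if (ctrl & NAND_ALE)
                return io_base + OXNAS_NAND_CMD_ALE;
        return io_base;
}
```

Because the routing is done by the bus decoder rather than a register write, a single `writeb()` per byte is enough; this is why the driver could reuse the generic `plat_nand.c` structure almost unchanged.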
+1 -3
drivers/mtd/nand/raw/pl35x-nand-controller.c
···
  #include <linux/mtd/mtd.h>
  #include <linux/mtd/rawnand.h>
  #include <linux/mtd/partitions.h>
- #include <linux/of_address.h>
- #include <linux/of_device.h>
- #include <linux/of_platform.h>
+ #include <linux/of.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>
  #include <linux/clk.h>
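Alongside the include cleanups, many drivers in this pull are converted to managed ("devm") helpers such as `devm_clk_get_enabled()`, which folds `clk_get()` + `clk_prepare_enable()` into one call and registers the disable/unprepare as a cleanup action run automatically at unbind. A small userspace mock of that idea, with hypothetical names (`devm_clk_get_enabled_mock`, `device_unbind`) standing in for the driver core:

```c
#include <stddef.h>

/* Hypothetical devres mock: a stack of cleanup actions run in
 * reverse registration order when the device is torn down. */
struct mock_clk { int enabled; };

typedef void (*action_fn)(void *);
static action_fn actions[8];
static void *action_data[8];
static int n_actions;

static void devm_add_action_mock(action_fn fn, void *data)
{
        actions[n_actions] = fn;
        action_data[n_actions] = data;
        n_actions++;
}

static void clk_disable_action(void *data)
{
        ((struct mock_clk *)data)->enabled = 0;
}

/*
 * Mock of the devm_clk_get_enabled() idea: acquiring the clock also
 * enables it and registers the disable as a managed action, so probe
 * error paths and remove() need no explicit clk_disable_unprepare().
 */
static struct mock_clk *devm_clk_get_enabled_mock(struct mock_clk *clk)
{
        clk->enabled = 1;
        devm_add_action_mock(clk_disable_action, clk);
        return clk;
}

/* Driver-core mock: run the managed actions in reverse order. */
static void device_unbind(void)
{
        while (n_actions > 0) {
                n_actions--;
                actions[n_actions](action_data[n_actions]);
        }
}
```

The payoff in the real conversions is purely structural: the `err_clk_unprepare:`-style labels and the matching calls in `remove()` disappear, since the cleanup ordering is handled by devres.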
+571 -436
drivers/mtd/nand/raw/qcom_nandc.c
···
  /*
   * Copyright (c) 2016, The Linux Foundation. All rights reserved.
   */
- #include <linux/clk.h>
- #include <linux/slab.h>
  #include <linux/bitops.h>
- #include <linux/dma/qcom_adm.h>
- #include <linux/dma-mapping.h>
- #include <linux/dmaengine.h>
- #include <linux/module.h>
- #include <linux/mtd/rawnand.h>
- #include <linux/mtd/partitions.h>
- #include <linux/of.h>
- #include <linux/of_device.h>
+ #include <linux/clk.h>
  #include <linux/delay.h>
+ #include <linux/dmaengine.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/dma/qcom_adm.h>
  #include <linux/dma/qcom_bam_dma.h>
+ #include <linux/module.h>
+ #include <linux/mtd/partitions.h>
+ #include <linux/mtd/rawnand.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>

  /* NANDc reg offsets */
  #define NAND_FLASH_CMD		0x00
···
  /* NAND_ERASED_CW_DETECT_CFG bits */
  #define ERASED_CW_ECC_MASK	1
  #define AUTO_DETECT_RES	0
- #define MASK_ECC		(1 << ERASED_CW_ECC_MASK)
- #define RESET_ERASED_DET	(1 << AUTO_DETECT_RES)
+ #define MASK_ECC		BIT(ERASED_CW_ECC_MASK)
+ #define RESET_ERASED_DET	BIT(AUTO_DETECT_RES)
  #define ACTIVE_ERASED_DET	(0 << AUTO_DETECT_RES)
  #define CLR_ERASED_PAGE_DET	(RESET_ERASED_DET | MASK_ECC)
  #define SET_ERASED_PAGE_DET	(ACTIVE_ERASED_DET | MASK_ECC)
···
  #define OP_PAGE_PROGRAM_WITH_ECC	0x7
  #define OP_PROGRAM_PAGE_SPARE		0x9
  #define OP_BLOCK_ERASE			0xa
+ #define OP_CHECK_STATUS		0xc
  #define OP_FETCH_ID			0xb
  #define OP_RESET_DEVICE		0xd
···
  /* Returns the dma address for reg read buffer */
  #define reg_buf_dma_addr(chip, vaddr) \
          ((chip)->reg_read_dma + \
-          ((uint8_t *)(vaddr) - (uint8_t *)(chip)->reg_read_buf))
+          ((u8 *)(vaddr) - (u8 *)(chip)->reg_read_buf))

  #define QPIC_PER_CW_CMD_ELEMENTS	32
  #define QPIC_PER_CW_CMD_SGL		32
···
   * flag will determine the current value of erased codeword status register
   */
  #define NAND_ERASED_CW_SET		BIT(4)
+
+ #define MAX_ADDRESS_CYCLE		5

  /*
   * This data type corresponds to the BAM transaction which will be used for all
···
   * @reg_read_pos:	marker for data read in reg_read_buf
   *
   * @cmd1/vld:		some fixed controller register values
+  *
+  * @exec_opwrite:	flag to select correct number of code word
+  *			while reading status
   */
  struct qcom_nand_controller {
          struct device *dev;
···
          int reg_read_pos;

          u32 cmd1, vld;
+         bool exec_opwrite;
  };

  /*
···
  struct qcom_nand_boot_partition {
          u32 page_offset;
          u32 page_size;
+ };
+
+ /*
+  * Qcom op for each exec_op transfer
+  *
+  * @data_instr:	data instruction pointer
+  * @data_instr_idx:	data instruction index
+  * @rdy_timeout_ms:	wait ready timeout in ms
+  * @rdy_delay_ns:	Additional delay in ns
+  * @addr1_reg:		Address1 register value
+  * @addr2_reg:		Address2 register value
+  * @cmd_reg:		CMD register value
+  * @flag:		flag for misc instruction
+  */
+ struct qcom_op {
+         const struct nand_op_instr *data_instr;
+         unsigned int data_instr_idx;
+         unsigned int rdy_timeout_ms;
+         unsigned int rdy_delay_ns;
+         u32 addr1_reg;
+         u32 addr2_reg;
+         u32 cmd_reg;
+         u8 flag;
  };

  /*
···
          write_reg_dma(nandc, NAND_READ_STATUS, 1, NAND_BAM_NEXT_SGL);
  }

- /*
-  * the following functions are used within chip->legacy.cmdfunc() to
-  * perform different NAND_CMD_* commands
-  */
-
- /* sets up descriptors for NAND_CMD_PARAM */
- static int nandc_param(struct qcom_nand_host *host)
- {
-         struct nand_chip *chip = &host->chip;
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-
-         /*
-          * NAND_CMD_PARAM is called before we know much about the FLASH chip
-          * in use. we configure the controller to perform a raw read of 512
-          * bytes to read onfi params
-          */
-         if (nandc->props->qpic_v2)
-                 nandc_set_reg(chip, NAND_FLASH_CMD, OP_PAGE_READ_ONFI_READ |
-                               PAGE_ACC | LAST_PAGE);
-         else
-                 nandc_set_reg(chip, NAND_FLASH_CMD, OP_PAGE_READ |
-                               PAGE_ACC | LAST_PAGE);
-
-         nandc_set_reg(chip, NAND_ADDR0, 0);
-         nandc_set_reg(chip, NAND_ADDR1, 0);
-         nandc_set_reg(chip, NAND_DEV0_CFG0, 0 << CW_PER_PAGE
-                                             | 512 << UD_SIZE_BYTES
-                                             | 5 << NUM_ADDR_CYCLES
-                                             | 0 << SPARE_SIZE_BYTES);
-         nandc_set_reg(chip, NAND_DEV0_CFG1, 7 << NAND_RECOVERY_CYCLES
-                                             | 0 << CS_ACTIVE_BSY
-                                             | 17 << BAD_BLOCK_BYTE_NUM
-                                             | 1 << BAD_BLOCK_IN_SPARE_AREA
-                                             | 2 << WR_RD_BSY_GAP
-                                             | 0 << WIDE_FLASH
-                                             | 1 << DEV0_CFG1_ECC_DISABLE);
-         if (!nandc->props->qpic_v2)
-                 nandc_set_reg(chip, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE);
-
-         /* configure CMD1 and VLD for ONFI param probing in QPIC v1 */
-         if (!nandc->props->qpic_v2) {
-                 nandc_set_reg(chip, NAND_DEV_CMD_VLD,
-                               (nandc->vld & ~READ_START_VLD));
-                 nandc_set_reg(chip, NAND_DEV_CMD1,
-                               (nandc->cmd1 & ~(0xFF << READ_ADDR))
-                               | NAND_CMD_PARAM << READ_ADDR);
-         }
-
-         nandc_set_reg(chip, NAND_EXEC_CMD, 1);
-
-         if (!nandc->props->qpic_v2) {
-                 nandc_set_reg(chip, NAND_DEV_CMD1_RESTORE, nandc->cmd1);
-                 nandc_set_reg(chip, NAND_DEV_CMD_VLD_RESTORE, nandc->vld);
-         }
-
-         nandc_set_read_loc(chip, 0, 0, 0, 512, 1);
-
-         if (!nandc->props->qpic_v2) {
-                 write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0);
-                 write_reg_dma(nandc, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
-         }
-
-         nandc->buf_count = 512;
-         memset(nandc->data_buffer, 0xff, nandc->buf_count);
-
-         config_nand_single_cw_page_read(chip, false, 0);
-
-         read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer,
-                       nandc->buf_count, 0);
-
-         /* restore CMD1 and VLD regs */
-         if (!nandc->props->qpic_v2) {
-                 write_reg_dma(nandc, NAND_DEV_CMD1_RESTORE, 1, 0);
-                 write_reg_dma(nandc, NAND_DEV_CMD_VLD_RESTORE, 1, NAND_BAM_NEXT_SGL);
-         }
-
-         return 0;
- }
-
- /* sets up descriptors for NAND_CMD_ERASE1 */
- static int erase_block(struct qcom_nand_host *host, int page_addr)
- {
-         struct nand_chip *chip = &host->chip;
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-
-         nandc_set_reg(chip, NAND_FLASH_CMD,
-                       OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE);
-         nandc_set_reg(chip, NAND_ADDR0, page_addr);
-         nandc_set_reg(chip, NAND_ADDR1, 0);
-         nandc_set_reg(chip, NAND_DEV0_CFG0,
-                       host->cfg0_raw & ~(7 << CW_PER_PAGE));
-         nandc_set_reg(chip, NAND_DEV0_CFG1, host->cfg1_raw);
-         nandc_set_reg(chip, NAND_EXEC_CMD, 1);
-         nandc_set_reg(chip, NAND_FLASH_STATUS, host->clrflashstatus);
-         nandc_set_reg(chip, NAND_READ_STATUS, host->clrreadstatus);
-
-         write_reg_dma(nandc, NAND_FLASH_CMD, 3, NAND_BAM_NEXT_SGL);
-         write_reg_dma(nandc, NAND_DEV0_CFG0, 2, NAND_BAM_NEXT_SGL);
-         write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
-
-         read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
-
-         write_reg_dma(nandc, NAND_FLASH_STATUS, 1, 0);
-         write_reg_dma(nandc, NAND_READ_STATUS, 1, NAND_BAM_NEXT_SGL);
-
-         return 0;
- }
-
- /* sets up descriptors for NAND_CMD_READID */
- static int read_id(struct qcom_nand_host *host, int column)
- {
-         struct nand_chip *chip = &host->chip;
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-
-         if (column == -1)
-                 return 0;
-
-         nandc_set_reg(chip, NAND_FLASH_CMD, OP_FETCH_ID);
-         nandc_set_reg(chip, NAND_ADDR0, column);
-         nandc_set_reg(chip, NAND_ADDR1, 0);
-         nandc_set_reg(chip, NAND_FLASH_CHIP_SELECT,
-                       nandc->props->is_bam ? 0 : DM_EN);
-         nandc_set_reg(chip, NAND_EXEC_CMD, 1);
-
-         write_reg_dma(nandc, NAND_FLASH_CMD, 4, NAND_BAM_NEXT_SGL);
-         write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
-
-         read_reg_dma(nandc, NAND_READ_ID, 1, NAND_BAM_NEXT_SGL);
-
-         return 0;
- }
-
- /* sets up descriptors for NAND_CMD_RESET */
- static int reset(struct qcom_nand_host *host)
- {
-         struct nand_chip *chip = &host->chip;
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-
-         nandc_set_reg(chip, NAND_FLASH_CMD, OP_RESET_DEVICE);
-         nandc_set_reg(chip, NAND_EXEC_CMD, 1);
-
-         write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
-         write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
-
-         read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
-
-         return 0;
- }
-
  /* helpers to submit/free our list of dma descriptors */
  static int submit_descs(struct qcom_nand_controller *nandc)
  {
-         struct desc_info *desc;
+         struct desc_info *desc, *n;
          dma_cookie_t cookie = 0;
          struct bam_transaction *bam_txn = nandc->bam_txn;
-         int r;
+         int ret = 0;

          if (nandc->props->is_bam) {
                  if (bam_txn->rx_sgl_pos > bam_txn->rx_sgl_start) {
-                         r = prepare_bam_async_desc(nandc, nandc->rx_chan, 0);
-                         if (r)
-                                 return r;
+                         ret = prepare_bam_async_desc(nandc, nandc->rx_chan, 0);
+                         if (ret)
+                                 goto err_unmap_free_desc;
                  }

                  if (bam_txn->tx_sgl_pos > bam_txn->tx_sgl_start) {
-                         r = prepare_bam_async_desc(nandc, nandc->tx_chan,
+                         ret = prepare_bam_async_desc(nandc, nandc->tx_chan,
                                                     DMA_PREP_INTERRUPT);
-                         if (r)
-                                 return r;
+                         if (ret)
+                                 goto err_unmap_free_desc;
                  }

                  if (bam_txn->cmd_sgl_pos > bam_txn->cmd_sgl_start) {
-                         r = prepare_bam_async_desc(nandc, nandc->cmd_chan,
+                         ret = prepare_bam_async_desc(nandc, nandc->cmd_chan,
                                                     DMA_PREP_CMD);
-                         if (r)
-                                 return r;
+                         if (ret)
+                                 goto err_unmap_free_desc;
                  }
          }
···
                  if (!wait_for_completion_timeout(&bam_txn->txn_done,
                                                   QPIC_NAND_COMPLETION_TIMEOUT))
-                         return -ETIMEDOUT;
+                         ret = -ETIMEDOUT;
          } else {
                  if (dma_sync_wait(nandc->chan, cookie) != DMA_COMPLETE)
-                         return -ETIMEDOUT;
+                         ret = -ETIMEDOUT;
          }

-         return 0;
- }
-
- static void free_descs(struct qcom_nand_controller *nandc)
- {
-         struct desc_info *desc, *n;
-
+ err_unmap_free_desc:
+         /*
+          * Unmap the dma sg_list and free the desc allocated by both
+          * prepare_bam_async_desc() and prep_adm_dma_desc() functions.
+          */
          list_for_each_entry_safe(desc, n, &nandc->desc_list, node) {
                  list_del(&desc->node);
···
                  kfree(desc);
          }
+
+         return ret;
  }

  /* reset the register read buffer for next NAND operation */
···
  {
          nandc->reg_read_pos = 0;
          nandc_read_buffer_sync(nandc, false);
- }
-
- static void pre_command(struct qcom_nand_host *host, int command)
- {
-         struct nand_chip *chip = &host->chip;
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-
-         nandc->buf_count = 0;
-         nandc->buf_start = 0;
-         host->use_ecc = false;
-         host->last_command = command;
-
-         clear_read_regs(nandc);
-
-         if (command == NAND_CMD_RESET || command == NAND_CMD_READID ||
-             command == NAND_CMD_PARAM || command == NAND_CMD_ERASE1)
-                 clear_bam_transaction(nandc);
- }
-
- /*
-  * this is called after NAND_CMD_PAGEPROG and NAND_CMD_ERASE1 to set our
-  * privately maintained status byte, this status byte can be read after
-  * NAND_CMD_STATUS is called
-  */
- static void parse_erase_write_errors(struct qcom_nand_host *host, int command)
- {
-         struct nand_chip *chip = &host->chip;
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-         struct nand_ecc_ctrl *ecc = &chip->ecc;
-         int num_cw;
-         int i;
-
-         num_cw = command == NAND_CMD_PAGEPROG ? ecc->steps : 1;
-         nandc_read_buffer_sync(nandc, true);
-
-         for (i = 0; i < num_cw; i++) {
-                 u32 flash_status = le32_to_cpu(nandc->reg_read_buf[i]);
-
-                 if (flash_status & FS_MPU_ERR)
-                         host->status &= ~NAND_STATUS_WP;
-
-                 if (flash_status & FS_OP_ERR || (i == (num_cw - 1) &&
-                                                  (flash_status &
-                                                   FS_DEVICE_STS_ERR)))
-                         host->status |= NAND_STATUS_FAIL;
-         }
- }
-
- static void post_command(struct qcom_nand_host *host, int command)
- {
-         struct nand_chip *chip = &host->chip;
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-
-         switch (command) {
-         case NAND_CMD_READID:
-                 nandc_read_buffer_sync(nandc, true);
-                 memcpy(nandc->data_buffer, nandc->reg_read_buf,
-                        nandc->buf_count);
-                 break;
-         case NAND_CMD_PAGEPROG:
-         case NAND_CMD_ERASE1:
-                 parse_erase_write_errors(host, command);
-                 break;
-         default:
-                 break;
-         }
- }
-
- /*
-  * Implements chip->legacy.cmdfunc. It's only used for a limited set of
-  * commands. The rest of the commands wouldn't be called by upper layers.
-  * For example, NAND_CMD_READOOB would never be called because we have our own
-  * versions of read_oob ops for nand_ecc_ctrl.
-  */
- static void qcom_nandc_command(struct nand_chip *chip, unsigned int command,
-                                int column, int page_addr)
- {
-         struct qcom_nand_host *host = to_qcom_nand_host(chip);
-         struct nand_ecc_ctrl *ecc = &chip->ecc;
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-         bool wait = false;
-         int ret = 0;
-
-         pre_command(host, command);
-
-         switch (command) {
-         case NAND_CMD_RESET:
-                 ret = reset(host);
-                 wait = true;
-                 break;
-
-         case NAND_CMD_READID:
-                 nandc->buf_count = 4;
-                 ret = read_id(host, column);
-                 wait = true;
-                 break;
-
-         case NAND_CMD_PARAM:
-                 ret = nandc_param(host);
-                 wait = true;
-                 break;
-
-         case NAND_CMD_ERASE1:
-                 ret = erase_block(host, page_addr);
-                 wait = true;
-                 break;
-
-         case NAND_CMD_READ0:
-                 /* we read the entire page for now */
-                 WARN_ON(column != 0);
-
-                 host->use_ecc = true;
-                 set_address(host, 0, page_addr);
-                 update_rw_regs(host, ecc->steps, true, 0);
-                 break;
-
-         case NAND_CMD_SEQIN:
-                 WARN_ON(column != 0);
-                 set_address(host, 0, page_addr);
-                 break;
-
-         case NAND_CMD_PAGEPROG:
-         case NAND_CMD_STATUS:
-         case NAND_CMD_NONE:
-         default:
-                 break;
-         }
-
-         if (ret) {
-                 dev_err(nandc->dev, "failure executing command %d\n",
-                         command);
-                 free_descs(nandc);
-                 return;
-         }
-
-         if (wait) {
-                 ret = submit_descs(nandc);
-                 if (ret)
-                         dev_err(nandc->dev,
-                                 "failure submitting descs for command %d\n",
-                                 command);
-         }
-
-         free_descs(nandc);
-
-         post_command(host, command);
  }

  /*
···
          int raw_cw = cw;

          nand_read_page_op(chip, page, 0, NULL, 0);
+         nandc->buf_count = 0;
+         nandc->buf_start = 0;
+         clear_read_regs(nandc);
          host->use_ecc = false;

          if (nandc->props->qpic_v2)
···
          read_data_dma(nandc, reg_off, oob_buf + oob_size1, oob_size2, 0);

          ret = submit_descs(nandc);
-         free_descs(nandc);
          if (ret) {
                  dev_err(nandc->dev, "failure to read raw cw %d\n", cw);
                  return ret;
···
          struct mtd_info *mtd = nand_to_mtd(chip);
          struct nand_ecc_ctrl *ecc = &chip->ecc;
          u8 *cw_data_buf, *cw_oob_buf;
-         int cw, data_size, oob_size, ret = 0;
+         int cw, data_size, oob_size, ret;

          if (!data_buf)
                  data_buf = nand_get_data_buf(chip);
···
          }

          ret = submit_descs(nandc);
-         free_descs(nandc);
-
          if (ret) {
                  dev_err(nandc->dev, "failure to read page/oob\n");
                  return ret;
···
          ret = submit_descs(nandc);
          if (ret)
                  dev_err(nandc->dev, "failed to copy last codeword\n");
-
-         free_descs(nandc);

          return ret;
  }
···
  }

  /* implements ecc->read_page() */
- static int qcom_nandc_read_page(struct nand_chip *chip, uint8_t *buf,
+ static int qcom_nandc_read_page(struct nand_chip *chip, u8 *buf,
                                  int oob_required, int page)
  {
          struct qcom_nand_host *host = to_qcom_nand_host(chip);
          struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+         struct nand_ecc_ctrl *ecc = &chip->ecc;
          u8 *data_buf, *oob_buf = NULL;

          if (host->nr_boot_partitions)
                  qcom_nandc_codeword_fixup(host, page);

          nand_read_page_op(chip, page, 0, NULL, 0);
+         nandc->buf_count = 0;
+         nandc->buf_start = 0;
+         host->use_ecc = true;
+         clear_read_regs(nandc);
+         set_address(host, 0, page);
+         update_rw_regs(host, ecc->steps, true, 0);
+
          data_buf = buf;
          oob_buf = oob_required ? chip->oob_poi : NULL;
···
  }

  /* implements ecc->read_page_raw() */
- static int qcom_nandc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
+ static int qcom_nandc_read_page_raw(struct nand_chip *chip, u8 *buf,
                                      int oob_required, int page)
  {
          struct mtd_info *mtd = nand_to_mtd(chip);
···
  }

  /* implements ecc->write_page() */
- static int qcom_nandc_write_page(struct nand_chip *chip, const uint8_t *buf,
+ static int qcom_nandc_write_page(struct nand_chip *chip, const u8 *buf,
                                   int oob_required, int page)
  {
          struct qcom_nand_host *host = to_qcom_nand_host(chip);
···
          nand_prog_page_begin_op(chip, page, 0, NULL, 0);

+         set_address(host, 0, page);
+         nandc->buf_count = 0;
+         nandc->buf_start = 0;
          clear_read_regs(nandc);
          clear_bam_transaction(nandc);
···
                  data_size = host->cw_data;
                  oob_size = ecc->bytes;
          }
-

          write_data_dma(nandc, FLASH_BUF_ACC, data_buf, data_size,
                         i == (ecc->steps - 1) ? NAND_BAM_NO_EOT : 0);
···
          }

          ret = submit_descs(nandc);
-         if (ret)
+         if (ret) {
                  dev_err(nandc->dev, "failure to write page\n");
+                 return ret;
+         }

-         free_descs(nandc);
-
-         if (!ret)
-                 ret = nand_prog_page_end_op(chip);
-
-         return ret;
+         return nand_prog_page_end_op(chip);
  }

  /* implements ecc->write_page_raw() */
  static int qcom_nandc_write_page_raw(struct nand_chip *chip,
-                                      const uint8_t *buf, int oob_required,
+                                      const u8 *buf, int oob_required,
                                       int page)
  {
          struct mtd_info *mtd = nand_to_mtd(chip);
···
          }

          ret = submit_descs(nandc);
-         if (ret)
+         if (ret) {
                  dev_err(nandc->dev, "failure to write raw page\n");
+                 return ret;
+         }

-         free_descs(nandc);
-
-         if (!ret)
-                 ret = nand_prog_page_end_op(chip);
-
-         return ret;
+         return nand_prog_page_end_op(chip);
  }

  /*
···
          config_nand_cw_write(chip);

          ret = submit_descs(nandc);
-
-         free_descs(nandc);
-
          if (ret) {
                  dev_err(nandc->dev, "failure to write oob\n");
-                 return -EIO;
+                 return ret;
          }

          return nand_prog_page_end_op(chip);
···
          config_nand_cw_write(chip);

          ret = submit_descs(nandc);
-
-         free_descs(nandc);
-
          if (ret) {
                  dev_err(nandc->dev, "failure to update BBM\n");
-                 return -EIO;
-         }
-
-         return nand_prog_page_end_op(chip);
- }
-
- /*
-  * the three functions below implement chip->legacy.read_byte(),
-  * chip->legacy.read_buf() and chip->legacy.write_buf() respectively. these
-  * aren't used for reading/writing page data, they are used for smaller data
-  * like reading id, status etc
-  */
- static uint8_t qcom_nandc_read_byte(struct nand_chip *chip)
- {
-         struct qcom_nand_host *host = to_qcom_nand_host(chip);
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-         u8 *buf = nandc->data_buffer;
-         u8 ret = 0x0;
-
-         if (host->last_command == NAND_CMD_STATUS) {
-                 ret = host->status;
-
-                 host->status = NAND_STATUS_READY | NAND_STATUS_WP;
-
                  return ret;
          }

-         if (nandc->buf_start < nandc->buf_count)
-                 ret = buf[nandc->buf_start++];
-
-         return ret;
- }
-
- static void qcom_nandc_read_buf(struct nand_chip *chip, uint8_t *buf, int len)
- {
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-         int real_len = min_t(size_t, len, nandc->buf_count - nandc->buf_start);
-
-         memcpy(buf, nandc->data_buffer + nandc->buf_start, real_len);
-         nandc->buf_start += real_len;
- }
-
- static void qcom_nandc_write_buf(struct nand_chip *chip, const uint8_t *buf,
-                                  int len)
- {
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-         int real_len = min_t(size_t, len, nandc->buf_count - nandc->buf_start);
-
-         memcpy(nandc->data_buffer + nandc->buf_start, buf, real_len);
-
-         nandc->buf_start += real_len;
- }
-
- /* we support only one external chip for now */
- static void qcom_nandc_select_chip(struct nand_chip *chip, int chipnr)
- {
-         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
-
-         if (chipnr <= 0)
-                 return;
-
-         dev_warn(nandc->dev, "invalid chip select\n");
+         return nand_prog_page_end_op(chip);
  }

  /*
···
  }

  static int qcom_nand_ooblayout_free(struct mtd_info *mtd, int section,
-                                    struct mtd_oob_region *oobregion)
+                                     struct mtd_oob_region *oobregion)
  {
          struct nand_chip *chip = mtd_to_nand(mtd);
          struct qcom_nand_host *host = to_qcom_nand_host(chip);
···
  {
          return strength == 4 ? 12 : 16;
  }
+
  NAND_ECC_CAPS_SINGLE(qcom_nandc_ecc_caps, qcom_nandc_calc_ecc_bytes,
                       NANDC_STEP_SIZE, 4, 8);
···
          return 0;
  }

+ static int qcom_op_cmd_mapping(struct nand_chip *chip, u8 opcode,
+                                struct qcom_op *q_op)
+ {
+         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+         struct qcom_nand_host *host = to_qcom_nand_host(chip);
+         int cmd;
+
+         switch (opcode) {
+         case NAND_CMD_RESET:
+                 cmd = OP_RESET_DEVICE;
+                 break;
+         case NAND_CMD_READID:
+                 cmd = OP_FETCH_ID;
+                 break;
+         case NAND_CMD_PARAM:
+                 if (nandc->props->qpic_v2)
+                         cmd = OP_PAGE_READ_ONFI_READ;
+                 else
+                         cmd = OP_PAGE_READ;
+                 break;
+         case NAND_CMD_ERASE1:
+         case NAND_CMD_ERASE2:
+                 cmd = OP_BLOCK_ERASE;
+                 break;
+         case NAND_CMD_STATUS:
+                 cmd = OP_CHECK_STATUS;
+                 break;
+         case NAND_CMD_PAGEPROG:
+                 cmd = OP_PROGRAM_PAGE;
+                 q_op->flag = OP_PROGRAM_PAGE;
+                 nandc->exec_opwrite = true;
+                 break;
+         case NAND_CMD_READ0:
+         case NAND_CMD_READSTART:
+                 if (host->use_ecc)
+                         cmd = OP_PAGE_READ_WITH_ECC;
+                 else
+                         cmd = OP_PAGE_READ;
+                 break;
+         default:
+                 dev_err(nandc->dev, "Opcode not supported: %u\n", opcode);
+                 return -EOPNOTSUPP;
+         }
+
+         return cmd;
+ }
+
+ /* NAND framework ->exec_op() hooks and related helpers */
+ static int qcom_parse_instructions(struct nand_chip *chip,
+                                    const struct nand_subop *subop,
+                                    struct qcom_op *q_op)
+ {
+         const struct nand_op_instr *instr = NULL;
+         unsigned int op_id;
+         int i, ret;
+
+         for (op_id = 0; op_id < subop->ninstrs; op_id++) {
+                 unsigned int offset, naddrs;
+                 const u8 *addrs;
+
+                 instr = &subop->instrs[op_id];
+
+                 switch (instr->type) {
+                 case NAND_OP_CMD_INSTR:
+                         ret = qcom_op_cmd_mapping(chip, instr->ctx.cmd.opcode, q_op);
+                         if (ret < 0)
+                                 return ret;
+
+                         q_op->cmd_reg = ret;
+                         q_op->rdy_delay_ns = instr->delay_ns;
+                         break;
+
+                 case NAND_OP_ADDR_INSTR:
+                         offset = nand_subop_get_addr_start_off(subop, op_id);
+                         naddrs = nand_subop_get_num_addr_cyc(subop, op_id);
+                         addrs = &instr->ctx.addr.addrs[offset];
+
+                         for (i = 0; i < min_t(unsigned int, 4, naddrs); i++)
+                                 q_op->addr1_reg |= addrs[i] << (i * 8);
+
+                         if (naddrs > 4)
+                                 q_op->addr2_reg |= addrs[4];
+
+                         q_op->rdy_delay_ns = instr->delay_ns;
+                         break;
+
+                 case NAND_OP_DATA_IN_INSTR:
+                         q_op->data_instr = instr;
+                         q_op->data_instr_idx = op_id;
+                         q_op->rdy_delay_ns = instr->delay_ns;
+                         fallthrough;
+                 case NAND_OP_DATA_OUT_INSTR:
+                         q_op->rdy_delay_ns = instr->delay_ns;
+                         break;
+
+                 case NAND_OP_WAITRDY_INSTR:
+                         q_op->rdy_timeout_ms = instr->ctx.waitrdy.timeout_ms;
+                         q_op->rdy_delay_ns = instr->delay_ns;
+                         break;
+                 }
+         }
+
+         return 0;
+ }
+
+ static void qcom_delay_ns(unsigned int ns)
+ {
+         if (!ns)
+                 return;
+
+         if (ns < 10000)
+                 ndelay(ns);
+         else
+                 udelay(DIV_ROUND_UP(ns, 1000));
+ }
+
+ static int qcom_wait_rdy_poll(struct nand_chip *chip, unsigned int time_ms)
+ {
+         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+         unsigned long start = jiffies + msecs_to_jiffies(time_ms);
+         u32 flash;
+
+         nandc_read_buffer_sync(nandc, true);
+
+         do {
+                 flash = le32_to_cpu(nandc->reg_read_buf[0]);
+                 if (flash & FS_READY_BSY_N)
+                         return 0;
+                 cpu_relax();
+         } while (time_after(start, jiffies));
+
+         dev_err(nandc->dev, "Timeout waiting for device to be ready:0x%08x\n", flash);
+
+         return -ETIMEDOUT;
+ }
+
+ static int qcom_read_status_exec(struct nand_chip *chip,
+                                  const struct nand_subop *subop)
+ {
+         struct qcom_nand_host *host = to_qcom_nand_host(chip);
+         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+         struct nand_ecc_ctrl *ecc = &chip->ecc;
+         struct qcom_op q_op = {};
+         const struct nand_op_instr *instr = NULL;
+         unsigned int op_id = 0;
+         unsigned int len = 0;
+         int ret, num_cw, i;
+         u32 flash_status;
+
+         host->status = NAND_STATUS_READY | NAND_STATUS_WP;
+
+         ret = qcom_parse_instructions(chip, subop, &q_op);
+         if (ret)
+                 return ret;
+
+         num_cw = nandc->exec_opwrite ? ecc->steps : 1;
+         nandc->exec_opwrite = false;
+
+         nandc->buf_count = 0;
+         nandc->buf_start = 0;
+         host->use_ecc = false;
+
+         clear_read_regs(nandc);
+         clear_bam_transaction(nandc);
+
+         nandc_set_reg(chip, NAND_FLASH_CMD, q_op.cmd_reg);
+         nandc_set_reg(chip, NAND_EXEC_CMD, 1);
+
+         write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
+         write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
+         read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
+
+         ret = submit_descs(nandc);
+         if (ret) {
+                 dev_err(nandc->dev, "failure in submitting status descriptor\n");
+                 goto err_out;
+         }
+
+         nandc_read_buffer_sync(nandc, true);
+
+         for (i = 0; i < num_cw; i++) {
+                 flash_status = le32_to_cpu(nandc->reg_read_buf[i]);
+
+                 if (flash_status & FS_MPU_ERR)
+                         host->status &= ~NAND_STATUS_WP;
+
+                 if (flash_status & FS_OP_ERR ||
+                     (i == (num_cw - 1) && (flash_status & FS_DEVICE_STS_ERR)))
+                         host->status |= NAND_STATUS_FAIL;
+         }
+
+         flash_status = host->status;
+         instr = q_op.data_instr;
+         op_id = q_op.data_instr_idx;
+         len = nand_subop_get_data_len(subop, op_id);
+         memcpy(instr->ctx.data.buf.in, &flash_status, len);
+
+ err_out:
+         return ret;
+ }
+
+ static int qcom_read_id_type_exec(struct nand_chip *chip, const struct nand_subop *subop)
+ {
+         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+         struct qcom_nand_host *host = to_qcom_nand_host(chip);
+         struct qcom_op q_op = {};
+         const struct nand_op_instr *instr = NULL;
+         unsigned int op_id = 0;
+         unsigned int len = 0;
+         int ret;
+
+         ret = qcom_parse_instructions(chip, subop, &q_op);
+         if (ret)
+                 return ret;
+
+         nandc->buf_count = 0;
+         nandc->buf_start = 0;
+         host->use_ecc = false;
+
+         clear_read_regs(nandc);
+         clear_bam_transaction(nandc);
+
+         nandc_set_reg(chip, NAND_FLASH_CMD, q_op.cmd_reg);
+         nandc_set_reg(chip, NAND_ADDR0, q_op.addr1_reg);
+         nandc_set_reg(chip, NAND_ADDR1, q_op.addr2_reg);
+         nandc_set_reg(chip, NAND_FLASH_CHIP_SELECT,
+                       nandc->props->is_bam ? 0 : DM_EN);
+
+         nandc_set_reg(chip, NAND_EXEC_CMD, 1);
+
+         write_reg_dma(nandc, NAND_FLASH_CMD, 4, NAND_BAM_NEXT_SGL);
+         write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
+
+         read_reg_dma(nandc, NAND_READ_ID, 1, NAND_BAM_NEXT_SGL);
+
+         ret = submit_descs(nandc);
+         if (ret) {
+                 dev_err(nandc->dev, "failure in submitting read id descriptor\n");
+                 goto err_out;
+         }
+
+         instr = q_op.data_instr;
+         op_id = q_op.data_instr_idx;
+         len = nand_subop_get_data_len(subop, op_id);
+
+         nandc_read_buffer_sync(nandc, true);
+         memcpy(instr->ctx.data.buf.in, nandc->reg_read_buf, len);
+
+ err_out:
+         return ret;
+ }
+
+ static int qcom_misc_cmd_type_exec(struct nand_chip *chip, const struct nand_subop *subop)
+ {
+         struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+         struct qcom_nand_host *host = to_qcom_nand_host(chip);
+         struct qcom_op q_op = {};
+         int ret;
+         int instrs = 1;
+
+         ret = qcom_parse_instructions(chip, subop, &q_op);
+         if (ret)
+                 return ret;
+
+         if (q_op.flag == OP_PROGRAM_PAGE) {
+                 goto wait_rdy;
+         } else if (q_op.cmd_reg == OP_BLOCK_ERASE) {
+                 q_op.cmd_reg |= PAGE_ACC | LAST_PAGE;
+                 nandc_set_reg(chip, NAND_ADDR0, q_op.addr1_reg);
+                 nandc_set_reg(chip, NAND_ADDR1, q_op.addr2_reg);
+                 nandc_set_reg(chip, NAND_DEV0_CFG0,
+                               host->cfg0_raw & ~(7 << CW_PER_PAGE));
+                 nandc_set_reg(chip, NAND_DEV0_CFG1, host->cfg1_raw);
+                 instrs = 3;
+         } else {
+                 return 0;
+         }
+
+         nandc->buf_count = 0;
+         nandc->buf_start = 0;
+         host->use_ecc = false;
+
+         clear_read_regs(nandc);
+         clear_bam_transaction(nandc);
+
+         nandc_set_reg(chip, NAND_FLASH_CMD, q_op.cmd_reg);
+         nandc_set_reg(chip, NAND_EXEC_CMD, 1);
+
+         write_reg_dma(nandc, NAND_FLASH_CMD, instrs,
NAND_BAM_NEXT_SGL); 3159 + (q_op.cmd_reg == OP_BLOCK_ERASE) ? write_reg_dma(nandc, NAND_DEV0_CFG0, 3160 + 2, NAND_BAM_NEXT_SGL) : read_reg_dma(nandc, 3161 + NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL); 3162 + 3163 + write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL); 3164 + read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL); 3165 + 3166 + ret = submit_descs(nandc); 3167 + if (ret) { 3168 + dev_err(nandc->dev, "failure in submitting misc descriptor\n"); 3169 + goto err_out; 3170 + } 3171 + 3172 + wait_rdy: 3173 + qcom_delay_ns(q_op.rdy_delay_ns); 3174 + ret = qcom_wait_rdy_poll(chip, q_op.rdy_timeout_ms); 3175 + 3176 + err_out: 3177 + return ret; 3178 + } 3179 + 3180 + static int qcom_param_page_type_exec(struct nand_chip *chip, const struct nand_subop *subop) 3181 + { 3182 + struct qcom_nand_host *host = to_qcom_nand_host(chip); 3183 + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); 3184 + struct qcom_op q_op = {}; 3185 + const struct nand_op_instr *instr = NULL; 3186 + unsigned int op_id = 0; 3187 + unsigned int len = 0; 3188 + int ret; 3189 + 3190 + ret = qcom_parse_instructions(chip, subop, &q_op); 3191 + if (ret) 3192 + return ret; 3193 + 3194 + q_op.cmd_reg |= PAGE_ACC | LAST_PAGE; 3195 + 3196 + nandc->buf_count = 0; 3197 + nandc->buf_start = 0; 3198 + host->use_ecc = false; 3199 + clear_read_regs(nandc); 3200 + clear_bam_transaction(nandc); 3201 + 3202 + nandc_set_reg(chip, NAND_FLASH_CMD, q_op.cmd_reg); 3203 + 3204 + nandc_set_reg(chip, NAND_ADDR0, 0); 3205 + nandc_set_reg(chip, NAND_ADDR1, 0); 3206 + nandc_set_reg(chip, NAND_DEV0_CFG0, 0 << CW_PER_PAGE 3207 + | 512 << UD_SIZE_BYTES 3208 + | 5 << NUM_ADDR_CYCLES 3209 + | 0 << SPARE_SIZE_BYTES); 3210 + nandc_set_reg(chip, NAND_DEV0_CFG1, 7 << NAND_RECOVERY_CYCLES 3211 + | 0 << CS_ACTIVE_BSY 3212 + | 17 << BAD_BLOCK_BYTE_NUM 3213 + | 1 << BAD_BLOCK_IN_SPARE_AREA 3214 + | 2 << WR_RD_BSY_GAP 3215 + | 0 << WIDE_FLASH 3216 + | 1 << DEV0_CFG1_ECC_DISABLE); 3217 + if 
(!nandc->props->qpic_v2) 3218 + nandc_set_reg(chip, NAND_EBI2_ECC_BUF_CFG, 1 << ECC_CFG_ECC_DISABLE); 3219 + 3220 + /* configure CMD1 and VLD for ONFI param probing in QPIC v1 */ 3221 + if (!nandc->props->qpic_v2) { 3222 + nandc_set_reg(chip, NAND_DEV_CMD_VLD, 3223 + (nandc->vld & ~READ_START_VLD)); 3224 + nandc_set_reg(chip, NAND_DEV_CMD1, 3225 + (nandc->cmd1 & ~(0xFF << READ_ADDR)) 3226 + | NAND_CMD_PARAM << READ_ADDR); 3227 + } 3228 + 3229 + nandc_set_reg(chip, NAND_EXEC_CMD, 1); 3230 + 3231 + if (!nandc->props->qpic_v2) { 3232 + nandc_set_reg(chip, NAND_DEV_CMD1_RESTORE, nandc->cmd1); 3233 + nandc_set_reg(chip, NAND_DEV_CMD_VLD_RESTORE, nandc->vld); 3234 + } 3235 + 3236 + instr = q_op.data_instr; 3237 + op_id = q_op.data_instr_idx; 3238 + len = nand_subop_get_data_len(subop, op_id); 3239 + 3240 + nandc_set_read_loc(chip, 0, 0, 0, len, 1); 3241 + 3242 + if (!nandc->props->qpic_v2) { 3243 + write_reg_dma(nandc, NAND_DEV_CMD_VLD, 1, 0); 3244 + write_reg_dma(nandc, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL); 3245 + } 3246 + 3247 + nandc->buf_count = len; 3248 + memset(nandc->data_buffer, 0xff, nandc->buf_count); 3249 + 3250 + config_nand_single_cw_page_read(chip, false, 0); 3251 + 3252 + read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, 3253 + nandc->buf_count, 0); 3254 + 3255 + /* restore CMD1 and VLD regs */ 3256 + if (!nandc->props->qpic_v2) { 3257 + write_reg_dma(nandc, NAND_DEV_CMD1_RESTORE, 1, 0); 3258 + write_reg_dma(nandc, NAND_DEV_CMD_VLD_RESTORE, 1, NAND_BAM_NEXT_SGL); 3259 + } 3260 + 3261 + ret = submit_descs(nandc); 3262 + if (ret) { 3263 + dev_err(nandc->dev, "failure in submitting param page descriptor\n"); 3264 + goto err_out; 3265 + } 3266 + 3267 + ret = qcom_wait_rdy_poll(chip, q_op.rdy_timeout_ms); 3268 + if (ret) 3269 + goto err_out; 3270 + 3271 + memcpy(instr->ctx.data.buf.in, nandc->data_buffer, len); 3272 + 3273 + err_out: 3274 + return ret; 3275 + } 3276 + 3277 + static const struct nand_op_parser qcom_op_parser = NAND_OP_PARSER( 3278 + 
NAND_OP_PARSER_PATTERN( 3279 + qcom_read_id_type_exec, 3280 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 3281 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, MAX_ADDRESS_CYCLE), 3282 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 8)), 3283 + NAND_OP_PARSER_PATTERN( 3284 + qcom_read_status_exec, 3285 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 3286 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 1)), 3287 + NAND_OP_PARSER_PATTERN( 3288 + qcom_param_page_type_exec, 3289 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 3290 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, MAX_ADDRESS_CYCLE), 3291 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true), 3292 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 512)), 3293 + NAND_OP_PARSER_PATTERN( 3294 + qcom_misc_cmd_type_exec, 3295 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 3296 + NAND_OP_PARSER_PAT_ADDR_ELEM(true, MAX_ADDRESS_CYCLE), 3297 + NAND_OP_PARSER_PAT_CMD_ELEM(true), 3298 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 3299 + ); 3300 + 3301 + static int qcom_check_op(struct nand_chip *chip, 3302 + const struct nand_operation *op) 3303 + { 3304 + const struct nand_op_instr *instr; 3305 + int op_id; 3306 + 3307 + for (op_id = 0; op_id < op->ninstrs; op_id++) { 3308 + instr = &op->instrs[op_id]; 3309 + 3310 + switch (instr->type) { 3311 + case NAND_OP_CMD_INSTR: 3312 + if (instr->ctx.cmd.opcode != NAND_CMD_RESET && 3313 + instr->ctx.cmd.opcode != NAND_CMD_READID && 3314 + instr->ctx.cmd.opcode != NAND_CMD_PARAM && 3315 + instr->ctx.cmd.opcode != NAND_CMD_ERASE1 && 3316 + instr->ctx.cmd.opcode != NAND_CMD_ERASE2 && 3317 + instr->ctx.cmd.opcode != NAND_CMD_STATUS && 3318 + instr->ctx.cmd.opcode != NAND_CMD_PAGEPROG && 3319 + instr->ctx.cmd.opcode != NAND_CMD_READ0 && 3320 + instr->ctx.cmd.opcode != NAND_CMD_READSTART) 3321 + return -EOPNOTSUPP; 3322 + break; 3323 + default: 3324 + break; 3325 + } 3326 + } 3327 + 3328 + return 0; 3329 + } 3330 + 3331 + static int qcom_nand_exec_op(struct nand_chip *chip, 3332 + const struct nand_operation *op, bool check_only) 3333 + { 3334 + if (check_only) 3335 + return 
qcom_check_op(chip, op); 3336 + 3337 + return nand_op_parser_exec_op(chip, &qcom_op_parser, op, check_only); 3338 + } 3339 + 2544 3340 static const struct nand_controller_ops qcom_nandc_ops = { 2545 3341 .attach_chip = qcom_nand_attach_chip, 3342 + .exec_op = qcom_nand_exec_op, 2546 3343 }; 2547 3344 2548 3345 static void qcom_nandc_unalloc(struct qcom_nand_controller *nandc) ··· 3057 2912 */ 3058 2913 nandc->buf_size = 532; 3059 2914 3060 - nandc->data_buffer = devm_kzalloc(nandc->dev, nandc->buf_size, 3061 - GFP_KERNEL); 2915 + nandc->data_buffer = devm_kzalloc(nandc->dev, nandc->buf_size, GFP_KERNEL); 3062 2916 if (!nandc->data_buffer) 3063 2917 return -ENOMEM; 3064 2918 3065 - nandc->regs = devm_kzalloc(nandc->dev, sizeof(*nandc->regs), 3066 - GFP_KERNEL); 2919 + nandc->regs = devm_kzalloc(nandc->dev, sizeof(*nandc->regs), GFP_KERNEL); 3067 2920 if (!nandc->regs) 3068 2921 return -ENOMEM; 3069 2922 3070 - nandc->reg_read_buf = devm_kcalloc(nandc->dev, 3071 - MAX_REG_RD, sizeof(*nandc->reg_read_buf), 3072 - GFP_KERNEL); 2923 + nandc->reg_read_buf = devm_kcalloc(nandc->dev, MAX_REG_RD, 2924 + sizeof(*nandc->reg_read_buf), 2925 + GFP_KERNEL); 3073 2926 if (!nandc->reg_read_buf) 3074 2927 return -ENOMEM; 3075 2928 ··· 3112 2969 /* 3113 2970 * Initially allocate BAM transaction to read ONFI param page. 
3114 2971 * After detecting all the devices, this BAM transaction will 3115 - * be freed and the next BAM tranasction will be allocated with 2972 + * be freed and the next BAM transaction will be allocated with 3116 2973 * maximum codeword size 3117 2974 */ 3118 2975 nandc->max_cwperpage = 1; ··· 3277 3134 3278 3135 mtd->owner = THIS_MODULE; 3279 3136 mtd->dev.parent = dev; 3280 - 3281 - chip->legacy.cmdfunc = qcom_nandc_command; 3282 - chip->legacy.select_chip = qcom_nandc_select_chip; 3283 - chip->legacy.read_byte = qcom_nandc_read_byte; 3284 - chip->legacy.read_buf = qcom_nandc_read_buf; 3285 - chip->legacy.write_buf = qcom_nandc_write_buf; 3286 - chip->legacy.set_features = nand_get_set_features_notsupp; 3287 - chip->legacy.get_features = nand_get_set_features_notsupp; 3288 3137 3289 3138 /* 3290 3139 * the bad block marker is readable only when we read the last codeword
-1
drivers/mtd/nand/raw/rockchip-nand-controller.c
··· 15 15 #include <linux/mtd/mtd.h> 16 16 #include <linux/mtd/rawnand.h> 17 17 #include <linux/of.h> 18 - #include <linux/of_device.h> 19 18 #include <linux/platform_device.h> 20 19 #include <linux/slab.h> 21 20
-1
drivers/mtd/nand/raw/s3c2410.c
··· 26 26 #include <linux/clk.h> 27 27 #include <linux/cpufreq.h> 28 28 #include <linux/of.h> 29 - #include <linux/of_device.h> 30 29 31 30 #include <linux/mtd/mtd.h> 32 31 #include <linux/mtd/rawnand.h>
+1 -3
drivers/mtd/nand/raw/sh_flctl.c
··· 17 17 #include <linux/interrupt.h> 18 18 #include <linux/io.h> 19 19 #include <linux/of.h> 20 - #include <linux/of_device.h> 21 20 #include <linux/platform_device.h> 22 21 #include <linux/pm_runtime.h> 23 22 #include <linux/sh_dma.h> ··· 1123 1124 if (!flctl) 1124 1125 return -ENOMEM; 1125 1126 1126 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1127 - flctl->reg = devm_ioremap_resource(&pdev->dev, res); 1127 + flctl->reg = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 1128 1128 if (IS_ERR(flctl->reg)) 1129 1129 return PTR_ERR(flctl->reg); 1130 1130 flctl->fifo = res->start + 0x24; /* FLDTFIFO */
+2 -1
drivers/mtd/nand/raw/socrates_nand.c
··· 8 8 #include <linux/mtd/mtd.h> 9 9 #include <linux/mtd/rawnand.h> 10 10 #include <linux/mtd/partitions.h> 11 + #include <linux/of.h> 11 12 #include <linux/of_address.h> 12 - #include <linux/of_platform.h> 13 + #include <linux/platform_device.h> 13 14 #include <linux/io.h> 14 15 15 16 #define FPGA_NAND_CMD_MASK (0x7 << 28)
+6 -15
drivers/mtd/nand/raw/stm32_fmc2_nand.c
··· 1922 1922 if (!(nfc->cs_assigned & BIT(chip_cs))) 1923 1923 continue; 1924 1924 1925 - res = platform_get_resource(pdev, IORESOURCE_MEM, mem_region); 1926 - nfc->data_base[chip_cs] = devm_ioremap_resource(dev, res); 1925 + nfc->data_base[chip_cs] = devm_platform_get_and_ioremap_resource(pdev, 1926 + mem_region, &res); 1927 1927 if (IS_ERR(nfc->data_base[chip_cs])) 1928 1928 return PTR_ERR(nfc->data_base[chip_cs]); 1929 1929 ··· 1951 1951 1952 1952 init_completion(&nfc->complete); 1953 1953 1954 - nfc->clk = devm_clk_get(nfc->cdev, NULL); 1955 - if (IS_ERR(nfc->clk)) 1954 + nfc->clk = devm_clk_get_enabled(nfc->cdev, NULL); 1955 + if (IS_ERR(nfc->clk)) { 1956 + dev_err(dev, "can not get and enable the clock\n"); 1956 1957 return PTR_ERR(nfc->clk); 1957 - 1958 - ret = clk_prepare_enable(nfc->clk); 1959 - if (ret) { 1960 - dev_err(dev, "can not enable the clock\n"); 1961 - return ret; 1962 1958 } 1963 1959 1964 1960 rstc = devm_reset_control_get(dev, NULL); 1965 1961 if (IS_ERR(rstc)) { 1966 1962 ret = PTR_ERR(rstc); 1967 1963 if (ret == -EPROBE_DEFER) 1968 - goto err_clk_disable; 1964 + return ret; 1969 1965 } else { 1970 1966 reset_control_assert(rstc); 1971 1967 reset_control_deassert(rstc); ··· 2014 2018 sg_free_table(&nfc->dma_data_sg); 2015 2019 sg_free_table(&nfc->dma_ecc_sg); 2016 2020 2017 - err_clk_disable: 2018 - clk_disable_unprepare(nfc->clk); 2019 - 2020 2021 return ret; 2021 2022 } 2022 2023 ··· 2037 2044 2038 2045 sg_free_table(&nfc->dma_data_sg); 2039 2046 sg_free_table(&nfc->dma_ecc_sg); 2040 - 2041 - clk_disable_unprepare(nfc->clk); 2042 2047 2043 2048 stm32_fmc2_nfc_wp_enable(nand); 2044 2049 }
+7 -26
drivers/mtd/nand/raw/sunxi_nand.c
··· 19 19 #include <linux/moduleparam.h> 20 20 #include <linux/platform_device.h> 21 21 #include <linux/of.h> 22 - #include <linux/of_device.h> 23 22 #include <linux/mtd/mtd.h> 24 23 #include <linux/mtd/rawnand.h> 25 24 #include <linux/mtd/partitions.h> ··· 2086 2087 nand_controller_init(&nfc->controller); 2087 2088 INIT_LIST_HEAD(&nfc->chips); 2088 2089 2089 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2090 - nfc->regs = devm_ioremap_resource(dev, r); 2090 + nfc->regs = devm_platform_get_and_ioremap_resource(pdev, 0, &r); 2091 2091 if (IS_ERR(nfc->regs)) 2092 2092 return PTR_ERR(nfc->regs); 2093 2093 ··· 2094 2096 if (irq < 0) 2095 2097 return irq; 2096 2098 2097 - nfc->ahb_clk = devm_clk_get(dev, "ahb"); 2099 + nfc->ahb_clk = devm_clk_get_enabled(dev, "ahb"); 2098 2100 if (IS_ERR(nfc->ahb_clk)) { 2099 2101 dev_err(dev, "failed to retrieve ahb clk\n"); 2100 2102 return PTR_ERR(nfc->ahb_clk); 2101 2103 } 2102 2104 2103 - ret = clk_prepare_enable(nfc->ahb_clk); 2104 - if (ret) 2105 - return ret; 2106 - 2107 - nfc->mod_clk = devm_clk_get(dev, "mod"); 2105 + nfc->mod_clk = devm_clk_get_enabled(dev, "mod"); 2108 2106 if (IS_ERR(nfc->mod_clk)) { 2109 2107 dev_err(dev, "failed to retrieve mod clk\n"); 2110 - ret = PTR_ERR(nfc->mod_clk); 2111 - goto out_ahb_clk_unprepare; 2108 + return PTR_ERR(nfc->mod_clk); 2112 2109 } 2113 - 2114 - ret = clk_prepare_enable(nfc->mod_clk); 2115 - if (ret) 2116 - goto out_ahb_clk_unprepare; 2117 2110 2118 2111 nfc->reset = devm_reset_control_get_optional_exclusive(dev, "ahb"); 2119 - if (IS_ERR(nfc->reset)) { 2120 - ret = PTR_ERR(nfc->reset); 2121 - goto out_mod_clk_unprepare; 2122 - } 2112 + if (IS_ERR(nfc->reset)) 2113 + return PTR_ERR(nfc->reset); 2123 2114 2124 2115 ret = reset_control_deassert(nfc->reset); 2125 2116 if (ret) { 2126 2117 dev_err(dev, "reset err %d\n", ret); 2127 - goto out_mod_clk_unprepare; 2118 + return ret; 2128 2119 } 2129 2120 2130 2121 nfc->caps = of_device_get_match_data(&pdev->dev); ··· 2152 2165 
dma_release_channel(nfc->dmac); 2153 2166 out_ahb_reset_reassert: 2154 2167 reset_control_assert(nfc->reset); 2155 - out_mod_clk_unprepare: 2156 - clk_disable_unprepare(nfc->mod_clk); 2157 - out_ahb_clk_unprepare: 2158 - clk_disable_unprepare(nfc->ahb_clk); 2159 2168 2160 2169 return ret; 2161 2170 } ··· 2166 2183 2167 2184 if (nfc->dmac) 2168 2185 dma_release_channel(nfc->dmac); 2169 - clk_disable_unprepare(nfc->mod_clk); 2170 - clk_disable_unprepare(nfc->ahb_clk); 2171 2186 } 2172 2187 2173 2188 static const struct sunxi_nfc_caps sunxi_nfc_a10_caps = {
+12 -23
drivers/mtd/nand/raw/vf610_nfc.c
··· 827 827 mtd->name = DRV_NAME; 828 828 829 829 irq = platform_get_irq(pdev, 0); 830 - if (irq <= 0) 831 - return -EINVAL; 830 + if (irq < 0) 831 + return irq; 832 832 833 833 nfc->regs = devm_platform_ioremap_resource(pdev, 0); 834 834 if (IS_ERR(nfc->regs)) 835 835 return PTR_ERR(nfc->regs); 836 836 837 - nfc->clk = devm_clk_get(&pdev->dev, NULL); 838 - if (IS_ERR(nfc->clk)) 837 + nfc->clk = devm_clk_get_enabled(&pdev->dev, NULL); 838 + if (IS_ERR(nfc->clk)) { 839 + dev_err(nfc->dev, "Unable to get and enable clock!\n"); 839 840 return PTR_ERR(nfc->clk); 840 - 841 - err = clk_prepare_enable(nfc->clk); 842 - if (err) { 843 - dev_err(nfc->dev, "Unable to enable clock!\n"); 844 - return err; 845 841 } 846 842 847 843 of_id = of_match_device(vf610_nfc_dt_ids, &pdev->dev); 848 - if (!of_id) { 849 - err = -ENODEV; 850 - goto err_disable_clk; 851 - } 844 + if (!of_id) 845 + return -ENODEV; 852 846 853 - nfc->variant = (enum vf610_nfc_variant)of_id->data; 847 + nfc->variant = (uintptr_t)of_id->data; 854 848 855 849 for_each_available_child_of_node(nfc->dev->of_node, child) { 856 850 if (of_device_is_compatible(child, "fsl,vf610-nfc-nandcs")) { ··· 852 858 if (nand_get_flash_node(chip)) { 853 859 dev_err(nfc->dev, 854 860 "Only one NAND chip supported!\n"); 855 - err = -EINVAL; 856 861 of_node_put(child); 857 - goto err_disable_clk; 862 + return -EINVAL; 858 863 } 859 864 860 865 nand_set_flash_node(chip, child); ··· 862 869 863 870 if (!nand_get_flash_node(chip)) { 864 871 dev_err(nfc->dev, "NAND chip sub-node missing!\n"); 865 - err = -ENODEV; 866 - goto err_disable_clk; 872 + return -ENODEV; 867 873 } 868 874 869 875 chip->options |= NAND_NO_SUBPAGE_WRITE; ··· 872 880 err = devm_request_irq(nfc->dev, irq, vf610_nfc_irq, 0, DRV_NAME, nfc); 873 881 if (err) { 874 882 dev_err(nfc->dev, "Error requesting IRQ!\n"); 875 - goto err_disable_clk; 883 + return err; 876 884 } 877 885 878 886 vf610_nfc_preinit_controller(nfc); ··· 884 892 /* Scan the NAND chip */ 885 893 err = 
nand_scan(chip, 1); 886 894 if (err) 887 - goto err_disable_clk; 895 + return err; 888 896 889 897 platform_set_drvdata(pdev, nfc); 890 898 ··· 896 904 897 905 err_cleanup_nand: 898 906 nand_cleanup(chip); 899 - err_disable_clk: 900 - clk_disable_unprepare(nfc->clk); 901 907 return err; 902 908 } 903 909 ··· 908 918 ret = mtd_device_unregister(nand_to_mtd(chip)); 909 919 WARN_ON(ret); 910 920 nand_cleanup(chip); 911 - clk_disable_unprepare(nfc->clk); 912 921 } 913 922 914 923 #ifdef CONFIG_PM_SLEEP
+2 -1
drivers/mtd/nand/raw/xway_nand.c
··· 7 7 8 8 #include <linux/mtd/rawnand.h> 9 9 #include <linux/of_gpio.h> 10 - #include <linux/of_platform.h> 10 + #include <linux/of.h> 11 + #include <linux/platform_device.h> 11 12 12 13 #include <lantiq_soc.h> 13 14
+9
drivers/mtd/nand/spi/esmt.c
··· 121 121 &update_cache_variants), 122 122 0, 123 123 SPINAND_ECCINFO(&f50l1g41lb_ooblayout, NULL)), 124 + SPINAND_INFO("F50D2G41KA", 125 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x51), 126 + NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1), 127 + NAND_ECCREQ(8, 512), 128 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 129 + &write_cache_variants, 130 + &update_cache_variants), 131 + 0, 132 + SPINAND_ECCINFO(&f50l1g41lb_ooblayout, NULL)), 124 133 }; 125 134 126 135 static const struct spinand_manufacturer_ops esmt_spinand_manuf_ops = {
+20
drivers/mtd/nand/spi/gigadevice.c
··· 511 511 SPINAND_HAS_QE_BIT, 512 512 SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout, 513 513 gd5fxgq4uexxg_ecc_get_status)), 514 + SPINAND_INFO("GD5F1GQ5RExxH", 515 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x21), 516 + NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1), 517 + NAND_ECCREQ(4, 512), 518 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants_1gq5, 519 + &write_cache_variants, 520 + &update_cache_variants), 521 + SPINAND_HAS_QE_BIT, 522 + SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout, 523 + gd5fxgq4uexxg_ecc_get_status)), 524 + SPINAND_INFO("GD5F1GQ4RExxH", 525 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xc9), 526 + NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1), 527 + NAND_ECCREQ(4, 512), 528 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants_1gq5, 529 + &write_cache_variants, 530 + &update_cache_variants), 531 + SPINAND_HAS_QE_BIT, 532 + SPINAND_ECCINFO(&gd5fxgqx_variant2_ooblayout, 533 + gd5fxgq4uexxg_ecc_get_status)), 514 534 }; 515 535 516 536 static const struct spinand_manufacturer_ops gigadevice_spinand_manuf_ops = {
+33
drivers/mtd/nand/spi/toshiba.c
··· 266 266 SPINAND_HAS_QE_BIT, 267 267 SPINAND_ECCINFO(&tx58cxgxsxraix_ooblayout, 268 268 tx58cxgxsxraix_ecc_get_status)), 269 + /* 1.8V 1Gb (1st generation) */ 270 + SPINAND_INFO("TC58NYG0S3HBAI4", 271 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xA1), 272 + NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1), 273 + NAND_ECCREQ(8, 512), 274 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 275 + &write_cache_variants, 276 + &update_cache_variants), 277 + 0, 278 + SPINAND_ECCINFO(&tx58cxgxsxraix_ooblayout, 279 + tx58cxgxsxraix_ecc_get_status)), 280 + /* 1.8V 4Gb (1st generation) */ 281 + SPINAND_INFO("TH58NYG2S3HBAI4", 282 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xAC), 283 + NAND_MEMORG(1, 2048, 128, 64, 4096, 80, 1, 2, 1), 284 + NAND_ECCREQ(8, 512), 285 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 286 + &write_cache_x4_variants, 287 + &update_cache_x4_variants), 288 + SPINAND_HAS_QE_BIT, 289 + SPINAND_ECCINFO(&tx58cxgxsxraix_ooblayout, 290 + tx58cxgxsxraix_ecc_get_status)), 291 + /* 1.8V 8Gb (1st generation) */ 292 + SPINAND_INFO("TH58NYG3S0HBAI6", 293 + SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xA3), 294 + NAND_MEMORG(1, 4096, 256, 64, 4096, 80, 1, 1, 1), 295 + NAND_ECCREQ(8, 512), 296 + SPINAND_INFO_OP_VARIANTS(&read_cache_variants, 297 + &write_cache_x4_variants, 298 + &update_cache_x4_variants), 299 + SPINAND_HAS_QE_BIT, 300 + SPINAND_ECCINFO(&tx58cxgxsxraix_ooblayout, 301 + tx58cxgxsxraix_ecc_get_status)), 269 302 }; 270 303 271 304 static const struct spinand_manufacturer_ops toshiba_spinand_manuf_ops = {
+6 -2
drivers/mtd/spi-nor/atmel.c
··· 48 48 .is_locked = at25fs_nor_is_locked, 49 49 }; 50 50 51 - static void at25fs_nor_late_init(struct spi_nor *nor) 51 + static int at25fs_nor_late_init(struct spi_nor *nor) 52 52 { 53 53 nor->params->locking_ops = &at25fs_nor_locking_ops; 54 + 55 + return 0; 54 56 } 55 57 56 58 static const struct spi_nor_fixups at25fs_nor_fixups = { ··· 151 149 .is_locked = atmel_nor_is_global_protected, 152 150 }; 153 151 154 - static void atmel_nor_global_protection_late_init(struct spi_nor *nor) 152 + static int atmel_nor_global_protection_late_init(struct spi_nor *nor) 155 153 { 156 154 nor->params->locking_ops = &atmel_nor_global_protection_ops; 155 + 156 + return 0; 157 157 } 158 158 159 159 static const struct spi_nor_fixups atmel_nor_global_protection_fixups = {
+6 -28
drivers/mtd/spi-nor/controllers/nxp-spifi.c
··· 17 17 #include <linux/mtd/partitions.h> 18 18 #include <linux/mtd/spi-nor.h> 19 19 #include <linux/of.h> 20 - #include <linux/of_device.h> 21 20 #include <linux/platform_device.h> 22 21 #include <linux/spi/spi.h> 23 22 ··· 394 395 if (IS_ERR(spifi->flash_base)) 395 396 return PTR_ERR(spifi->flash_base); 396 397 397 - spifi->clk_spifi = devm_clk_get(&pdev->dev, "spifi"); 398 + spifi->clk_spifi = devm_clk_get_enabled(&pdev->dev, "spifi"); 398 399 if (IS_ERR(spifi->clk_spifi)) { 399 - dev_err(&pdev->dev, "spifi clock not found\n"); 400 + dev_err(&pdev->dev, "spifi clock not found or unable to enable\n"); 400 401 return PTR_ERR(spifi->clk_spifi); 401 402 } 402 403 403 - spifi->clk_reg = devm_clk_get(&pdev->dev, "reg"); 404 + spifi->clk_reg = devm_clk_get_enabled(&pdev->dev, "reg"); 404 405 if (IS_ERR(spifi->clk_reg)) { 405 - dev_err(&pdev->dev, "reg clock not found\n"); 406 + dev_err(&pdev->dev, "reg clock not found or unable to enable\n"); 406 407 return PTR_ERR(spifi->clk_reg); 407 - } 408 - 409 - ret = clk_prepare_enable(spifi->clk_reg); 410 - if (ret) { 411 - dev_err(&pdev->dev, "unable to enable reg clock\n"); 412 - return ret; 413 - } 414 - 415 - ret = clk_prepare_enable(spifi->clk_spifi); 416 - if (ret) { 417 - dev_err(&pdev->dev, "unable to enable spifi clock\n"); 418 - goto dis_clk_reg; 419 408 } 420 409 421 410 spifi->dev = &pdev->dev; ··· 418 431 flash_np = of_get_next_available_child(pdev->dev.of_node, NULL); 419 432 if (!flash_np) { 420 433 dev_err(&pdev->dev, "no SPI flash device to configure\n"); 421 - ret = -ENODEV; 422 - goto dis_clks; 434 + return -ENODEV; 423 435 } 424 436 425 437 ret = nxp_spifi_setup_flash(spifi, flash_np); 426 438 of_node_put(flash_np); 427 439 if (ret) { 428 440 dev_err(&pdev->dev, "unable to setup flash chip\n"); 429 - goto dis_clks; 441 + return ret; 430 442 } 431 443 432 444 return 0; 433 - 434 - dis_clks: 435 - clk_disable_unprepare(spifi->clk_spifi); 436 - dis_clk_reg: 437 - clk_disable_unprepare(spifi->clk_reg); 438 - 
return ret; 439 445 } 440 446 441 447 static int nxp_spifi_remove(struct platform_device *pdev) ··· 436 456 struct nxp_spifi *spifi = platform_get_drvdata(pdev); 437 457 438 458 mtd_device_unregister(&spifi->nor.mtd); 439 - clk_disable_unprepare(spifi->clk_spifi); 440 - clk_disable_unprepare(spifi->clk_reg); 441 459 442 460 return 0; 443 461 }
+34 -23
drivers/mtd/spi-nor/core.c
··· 870 870 ret = spi_nor_read_cr(nor, &sr_cr[1]); 871 871 if (ret) 872 872 return ret; 873 - } else if (nor->params->quad_enable) { 873 + } else if (spi_nor_get_protocol_width(nor->read_proto) == 4 && 874 + spi_nor_get_protocol_width(nor->write_proto) == 4 && 875 + nor->params->quad_enable) { 874 876 /* 875 877 * If the Status Register 2 Read command (35h) is not 876 878 * supported, we should at least be sure we don't 877 879 * change the value of the SR2 Quad Enable bit. 878 880 * 879 - * We can safely assume that when the Quad Enable method is 880 - * set, the value of the QE bit is one, as a consequence of the 881 - * nor->params->quad_enable() call. 881 + * When the Quad Enable method is set and the buswidth is 4, we 882 + * can safely assume that the value of the QE bit is one, as a 883 + * consequence of the nor->params->quad_enable() call. 882 884 * 883 - * We can safely assume that the Quad Enable bit is present in 884 - * the Status Register 2 at BIT(1). According to the JESD216 885 - * revB standard, BFPT DWORDS[15], bits 22:20, the 16-bit 886 - * Write Status (01h) command is available just for the cases 887 - * in which the QE bit is described in SR2 at BIT(1). 885 + * According to the JESD216 revB standard, BFPT DWORDS[15], 886 + * bits 22:20, the 16-bit Write Status (01h) command is 887 + * available just for the cases in which the QE bit is 888 + * described in SR2 at BIT(1). 888 889 */ 889 890 sr_cr[1] = SR2_QUAD_EN_BIT1; 890 891 } else { ··· 2845 2844 if (of_property_read_bool(np, "broken-flash-reset")) 2846 2845 nor->flags |= SNOR_F_BROKEN_RESET; 2847 2846 2847 + if (of_property_read_bool(np, "no-wp")) 2848 + nor->flags |= SNOR_F_NO_WP; 2849 + 2848 2850 if (flags & SPI_NOR_SWP_IS_VOLATILE) 2849 2851 nor->flags |= SNOR_F_SWP_IS_VOLATILE; 2850 2852 ··· 2901 2897 * SFDP standard, or where SFDP tables are not defined at all. 2902 2898 * Will replace the spi_nor_manufacturer_init_params() method. 
2903 2899 */ 2904 - static void spi_nor_late_init_params(struct spi_nor *nor) 2900 + static int spi_nor_late_init_params(struct spi_nor *nor) 2905 2901 { 2906 2902 struct spi_nor_flash_parameter *params = nor->params; 2903 + int ret; 2907 2904 2908 2905 if (nor->manufacturer && nor->manufacturer->fixups && 2909 - nor->manufacturer->fixups->late_init) 2910 - nor->manufacturer->fixups->late_init(nor); 2906 + nor->manufacturer->fixups->late_init) { 2907 + ret = nor->manufacturer->fixups->late_init(nor); 2908 + if (ret) 2909 + return ret; 2910 + } 2911 2911 2912 - if (nor->info->fixups && nor->info->fixups->late_init) 2913 - nor->info->fixups->late_init(nor); 2912 + if (nor->info->fixups && nor->info->fixups->late_init) { 2913 + ret = nor->info->fixups->late_init(nor); 2914 + if (ret) 2915 + return ret; 2916 + } 2914 2917 2915 2918 /* Default method kept for backward compatibility. */ 2916 2919 if (!params->set_4byte_addr_mode) ··· 2935 2924 2936 2925 if (nor->info->n_banks > 1) 2937 2926 params->bank_size = div64_u64(params->size, nor->info->n_banks); 2927 + 2928 + return 0; 2938 2929 } 2939 2930 2940 2931 /** ··· 3095 3082 spi_nor_init_params_deprecated(nor); 3096 3083 } 3097 3084 3098 - spi_nor_late_init_params(nor); 3099 - 3100 - return 0; 3085 + return spi_nor_late_init_params(nor); 3101 3086 } 3102 3087 3103 - /** spi_nor_octal_dtr_enable() - enable Octal DTR I/O if needed 3088 + /** spi_nor_set_octal_dtr() - enable or disable Octal DTR I/O. 3104 3089 * @nor: pointer to a 'struct spi_nor' 3105 3090 * @enable: whether to enable or disable Octal DTR 3106 3091 * 3107 3092 * Return: 0 on success, -errno otherwise. 
3108 3093 */ 3109 - static int spi_nor_octal_dtr_enable(struct spi_nor *nor, bool enable) 3094 + static int spi_nor_set_octal_dtr(struct spi_nor *nor, bool enable) 3110 3095 { 3111 3096 int ret; 3112 3097 3113 - if (!nor->params->octal_dtr_enable) 3098 + if (!nor->params->set_octal_dtr) 3114 3099 return 0; 3115 3100 3116 3101 if (!(nor->read_proto == SNOR_PROTO_8_8_8_DTR && ··· 3118 3107 if (!(nor->flags & SNOR_F_IO_MODE_EN_VOLATILE)) 3119 3108 return 0; 3120 3109 3121 - ret = nor->params->octal_dtr_enable(nor, enable); 3110 + ret = nor->params->set_octal_dtr(nor, enable); 3122 3111 if (ret) 3123 3112 return ret; 3124 3113 ··· 3179 3168 { 3180 3169 int err; 3181 3170 3182 - err = spi_nor_octal_dtr_enable(nor, true); 3171 + err = spi_nor_set_octal_dtr(nor, true); 3183 3172 if (err) { 3184 3173 dev_dbg(nor->dev, "octal mode not supported\n"); 3185 3174 return err; ··· 3281 3270 int ret; 3282 3271 3283 3272 /* Disable octal DTR mode if we enabled it. */ 3284 - ret = spi_nor_octal_dtr_enable(nor, false); 3273 + ret = spi_nor_set_octal_dtr(nor, false); 3285 3274 if (ret) 3286 3275 dev_err(nor->dev, "suspend() failed\n"); 3287 3276
+6 -3
drivers/mtd/spi-nor/core.h
··· 132 132 SNOR_F_SWP_IS_VOLATILE = BIT(13), 133 133 SNOR_F_RWW = BIT(14), 134 134 SNOR_F_ECC = BIT(15), 135 + SNOR_F_NO_WP = BIT(16), 135 136 }; 136 137 137 138 struct spi_nor_read_command { ··· 364 363 * @erase_map: the erase map parsed from the SFDP Sector Map Parameter 365 364 * Table. 366 365 * @otp: SPI NOR OTP info. 367 - * @octal_dtr_enable: enables SPI NOR octal DTR mode. 366 + * @set_octal_dtr: enables or disables SPI NOR octal DTR mode. 368 367 * @quad_enable: enables SPI NOR quad mode. 369 368 * @set_4byte_addr_mode: puts the SPI NOR in 4 byte addressing mode. 370 369 * @convert_addr: converts an absolute address into something the flash ··· 378 377 * than reading the status register to indicate they 379 378 * are ready for a new command 380 379 * @locking_ops: SPI NOR locking methods. 380 + * @priv: flash's private data. 381 381 */ 382 382 struct spi_nor_flash_parameter { 383 383 u64 bank_size; ··· 399 397 struct spi_nor_erase_map erase_map; 400 398 struct spi_nor_otp otp; 401 399 402 - int (*octal_dtr_enable)(struct spi_nor *nor, bool enable); 400 + int (*set_octal_dtr)(struct spi_nor *nor, bool enable); 403 401 int (*quad_enable)(struct spi_nor *nor); 404 402 int (*set_4byte_addr_mode)(struct spi_nor *nor, bool enable); 405 403 u32 (*convert_addr)(struct spi_nor *nor, u32 addr); ··· 407 405 int (*ready)(struct spi_nor *nor); 408 406 409 407 const struct spi_nor_locking_ops *locking_ops; 408 + void *priv; 410 409 }; 411 410 412 411 /** ··· 434 431 const struct sfdp_parameter_header *bfpt_header, 435 432 const struct sfdp_bfpt *bfpt); 436 433 int (*post_sfdp)(struct spi_nor *nor); 437 - void (*late_init)(struct spi_nor *nor); 434 + int (*late_init)(struct spi_nor *nor); 438 435 }; 439 436 440 437 /**
+1
drivers/mtd/spi-nor/debugfs.c
··· 27 27 SNOR_F_NAME(SWP_IS_VOLATILE), 28 28 SNOR_F_NAME(RWW), 29 29 SNOR_F_NAME(ECC), 30 + SNOR_F_NAME(NO_WP), 30 31 }; 31 32 #undef SNOR_F_NAME 32 33
+3 -1
drivers/mtd/spi-nor/issi.c
··· 29 29 .post_bfpt = is25lp256_post_bfpt_fixups, 30 30 }; 31 31 32 - static void pm25lv_nor_late_init(struct spi_nor *nor) 32 + static int pm25lv_nor_late_init(struct spi_nor *nor) 33 33 { 34 34 struct spi_nor_erase_map *map = &nor->params->erase_map; 35 35 int i; ··· 38 38 for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) 39 39 if (map->erase_type[i].size == 4096) 40 40 map->erase_type[i].opcode = SPINOR_OP_BE_4K_PMC; 41 + 42 + return 0; 41 43 } 42 44 43 45 static const struct spi_nor_fixups pm25lv_nor_fixups = {
+3 -1
drivers/mtd/spi-nor/macronix.c
··· 110 110 nor->params->quad_enable = spi_nor_sr1_bit6_quad_enable; 111 111 } 112 112 113 - static void macronix_nor_late_init(struct spi_nor *nor) 113 + static int macronix_nor_late_init(struct spi_nor *nor) 114 114 { 115 115 if (!nor->params->set_4byte_addr_mode) 116 116 nor->params->set_4byte_addr_mode = spi_nor_set_4byte_addr_mode_en4b_ex4b; 117 + 118 + return 0; 117 119 } 118 120 119 121 static const struct spi_nor_fixups macronix_nor_fixups = {
+5 -3
drivers/mtd/spi-nor/micron-st.c
··· 120 120 return 0; 121 121 } 122 122 123 - static int micron_st_nor_octal_dtr_enable(struct spi_nor *nor, bool enable) 123 + static int micron_st_nor_set_octal_dtr(struct spi_nor *nor, bool enable) 124 124 { 125 125 return enable ? micron_st_nor_octal_dtr_en(nor) : 126 126 micron_st_nor_octal_dtr_dis(nor); ··· 128 128 129 129 static void mt35xu512aba_default_init(struct spi_nor *nor) 130 130 { 131 - nor->params->octal_dtr_enable = micron_st_nor_octal_dtr_enable; 131 + nor->params->set_octal_dtr = micron_st_nor_set_octal_dtr; 132 132 } 133 133 134 134 static int mt35xu512aba_post_sfdp_fixup(struct spi_nor *nor) ··· 429 429 nor->params->quad_enable = NULL; 430 430 } 431 431 432 - static void micron_st_nor_late_init(struct spi_nor *nor) 432 + static int micron_st_nor_late_init(struct spi_nor *nor) 433 433 { 434 434 struct spi_nor_flash_parameter *params = nor->params; 435 435 ··· 438 438 439 439 if (!params->set_4byte_addr_mode) 440 440 params->set_4byte_addr_mode = spi_nor_set_4byte_addr_mode_wren_en4b_ex4b; 441 + 442 + return 0; 441 443 } 442 444 443 445 static const struct spi_nor_fixups micron_st_nor_fixups = {
+204 -116
drivers/mtd/spi-nor/spansion.c
··· 4 4 * Copyright (C) 2014, Freescale Semiconductor, Inc. 5 5 */ 6 6 7 + #include <linux/bitfield.h> 8 + #include <linux/device.h> 9 + #include <linux/errno.h> 7 10 #include <linux/mtd/spi-nor.h> 8 11 9 12 #include "core.h" 10 13 11 14 /* flash_info mfr_flag. Used to clear sticky proprietary SR bits. */ 12 15 #define USE_CLSR BIT(0) 16 + #define USE_CLPEF BIT(1) 13 17 14 18 #define SPINOR_OP_CLSR 0x30 /* Clear status register 1 */ 19 + #define SPINOR_OP_CLPEF 0x82 /* Clear program/erase failure flags */ 15 20 #define SPINOR_OP_RD_ANY_REG 0x65 /* Read any register */ 16 21 #define SPINOR_OP_WR_ANY_REG 0x71 /* Write any register */ 17 22 #define SPINOR_REG_CYPRESS_VREG 0x00800000 ··· 24 19 #define SPINOR_REG_CYPRESS_STR1V \ 25 20 (SPINOR_REG_CYPRESS_VREG + SPINOR_REG_CYPRESS_STR1) 26 21 #define SPINOR_REG_CYPRESS_CFR1 0x2 27 - #define SPINOR_REG_CYPRESS_CFR1V \ 28 - (SPINOR_REG_CYPRESS_VREG + SPINOR_REG_CYPRESS_CFR1) 29 22 #define SPINOR_REG_CYPRESS_CFR1_QUAD_EN BIT(1) /* Quad Enable */ 30 23 #define SPINOR_REG_CYPRESS_CFR2 0x3 31 24 #define SPINOR_REG_CYPRESS_CFR2V \ 32 25 (SPINOR_REG_CYPRESS_VREG + SPINOR_REG_CYPRESS_CFR2) 26 + #define SPINOR_REG_CYPRESS_CFR2_MEMLAT_MASK GENMASK(3, 0) 33 27 #define SPINOR_REG_CYPRESS_CFR2_MEMLAT_11_24 0xb 34 28 #define SPINOR_REG_CYPRESS_CFR2_ADRBYT BIT(7) 35 29 #define SPINOR_REG_CYPRESS_CFR3 0x4 36 - #define SPINOR_REG_CYPRESS_CFR3V \ 37 - (SPINOR_REG_CYPRESS_VREG + SPINOR_REG_CYPRESS_CFR3) 38 30 #define SPINOR_REG_CYPRESS_CFR3_PGSZ BIT(4) /* Page size.
*/ 39 31 #define SPINOR_REG_CYPRESS_CFR5 0x6 40 - #define SPINOR_REG_CYPRESS_CFR5V \ 41 - (SPINOR_REG_CYPRESS_VREG + SPINOR_REG_CYPRESS_CFR5) 42 32 #define SPINOR_REG_CYPRESS_CFR5_BIT6 BIT(6) 43 33 #define SPINOR_REG_CYPRESS_CFR5_DDR BIT(1) 44 34 #define SPINOR_REG_CYPRESS_CFR5_OPI BIT(0) ··· 57 57 SPI_MEM_OP_DUMMY(ndummy, 0), \ 58 58 SPI_MEM_OP_DATA_IN(1, buf, 0)) 59 59 60 - #define SPANSION_CLSR_OP \ 61 - SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CLSR, 0), \ 60 + #define SPANSION_OP(opcode) \ 61 + SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 0), \ 62 62 SPI_MEM_OP_NO_ADDR, \ 63 63 SPI_MEM_OP_NO_DUMMY, \ 64 64 SPI_MEM_OP_NO_DATA) 65 + 66 + /** 67 + * struct spansion_nor_params - Spansion private parameters. 68 + * @clsr: Clear Status Register or Clear Program and Erase Failure Flag 69 + * opcode. 70 + */ 71 + struct spansion_nor_params { 72 + u8 clsr; 73 + }; 65 74 66 75 /** 67 76 * spansion_nor_clear_sr() - Clear the Status Register. ··· 78 69 */ 79 70 static void spansion_nor_clear_sr(struct spi_nor *nor) 80 71 { 72 + const struct spansion_nor_params *priv_params = nor->params->priv; 81 73 int ret; 82 74 83 75 if (nor->spimem) { 84 - struct spi_mem_op op = SPANSION_CLSR_OP; 76 + struct spi_mem_op op = SPANSION_OP(priv_params->clsr); 85 77 86 78 spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); 87 79 ··· 98 88 99 89 static int cypress_nor_sr_ready_and_clear_reg(struct spi_nor *nor, u64 addr) 100 90 { 91 + struct spi_nor_flash_parameter *params = nor->params; 101 92 struct spi_mem_op op = 102 - CYPRESS_NOR_RD_ANY_REG_OP(nor->params->addr_mode_nbytes, addr, 93 + CYPRESS_NOR_RD_ANY_REG_OP(params->addr_mode_nbytes, addr, 103 94 0, nor->bouncebuf); 104 95 int ret; 96 + 97 + if (nor->reg_proto == SNOR_PROTO_8_8_8_DTR) { 98 + op.dummy.nbytes = params->rdsr_dummy; 99 + op.data.nbytes = 2; 100 + } 105 101 106 102 ret = spi_nor_read_any_reg(nor, &op, nor->reg_proto); 107 103 if (ret) ··· 157 141 return 1; 158 142 } 159 143 160 - static int cypress_nor_octal_dtr_en(struct spi_nor *nor) 144
+ static int cypress_nor_set_memlat(struct spi_nor *nor, u64 addr) 161 145 { 162 146 struct spi_mem_op op; 163 147 u8 *buf = nor->bouncebuf; 164 148 int ret; 165 149 u8 addr_mode_nbytes = nor->params->addr_mode_nbytes; 166 150 167 - /* Use 24 dummy cycles for memory array reads. */ 168 - *buf = SPINOR_REG_CYPRESS_CFR2_MEMLAT_11_24; 169 151 op = (struct spi_mem_op) 170 - CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes, 171 - SPINOR_REG_CYPRESS_CFR2V, 1, buf); 152 + CYPRESS_NOR_RD_ANY_REG_OP(addr_mode_nbytes, addr, 0, buf); 153 + 154 + ret = spi_nor_read_any_reg(nor, &op, nor->reg_proto); 155 + if (ret) 156 + return ret; 157 + 158 + /* Use 24 dummy cycles for memory array reads. */ 159 + *buf &= ~SPINOR_REG_CYPRESS_CFR2_MEMLAT_MASK; 160 + *buf |= FIELD_PREP(SPINOR_REG_CYPRESS_CFR2_MEMLAT_MASK, 161 + SPINOR_REG_CYPRESS_CFR2_MEMLAT_11_24); 162 + op = (struct spi_mem_op) 163 + CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes, addr, 1, buf); 172 164 173 165 ret = spi_nor_write_any_volatile_reg(nor, &op, nor->reg_proto); 174 166 if (ret) ··· 184 160 185 161 nor->read_dummy = 24; 186 162 163 + return 0; 164 + } 165 + 166 + static int cypress_nor_set_octal_dtr_bits(struct spi_nor *nor, u64 addr) 167 + { 168 + struct spi_mem_op op; 169 + u8 *buf = nor->bouncebuf; 170 + 187 171 /* Set the octal and DTR enable bits.
*/ 188 172 buf[0] = SPINOR_REG_CYPRESS_CFR5_OCT_DTR_EN; 189 173 op = (struct spi_mem_op) 190 - CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes, 191 - SPINOR_REG_CYPRESS_CFR5V, 1, buf); 174 + CYPRESS_NOR_WR_ANY_REG_OP(nor->params->addr_mode_nbytes, 175 + addr, 1, buf); 192 176 193 - ret = spi_nor_write_any_volatile_reg(nor, &op, nor->reg_proto); 194 - if (ret) 195 - return ret; 177 + return spi_nor_write_any_volatile_reg(nor, &op, nor->reg_proto); 178 + } 179 + 180 + static int cypress_nor_octal_dtr_en(struct spi_nor *nor) 181 + { 182 + const struct spi_nor_flash_parameter *params = nor->params; 183 + u8 *buf = nor->bouncebuf; 184 + u64 addr; 185 + int i, ret; 186 + 187 + for (i = 0; i < params->n_dice; i++) { 188 + addr = params->vreg_offset[i] + SPINOR_REG_CYPRESS_CFR2; 189 + ret = cypress_nor_set_memlat(nor, addr); 190 + if (ret) 191 + return ret; 192 + 193 + addr = params->vreg_offset[i] + SPINOR_REG_CYPRESS_CFR5; 194 + ret = cypress_nor_set_octal_dtr_bits(nor, addr); 195 + if (ret) 196 + return ret; 197 + } 196 198 197 199 /* Read flash ID to make sure the switch was successful.
*/ 198 200 ret = spi_nor_read_id(nor, nor->addr_nbytes, 3, buf, ··· 234 184 return 0; 235 185 } 236 186 237 - static int cypress_nor_octal_dtr_dis(struct spi_nor *nor) 187 + static int cypress_nor_set_single_spi_bits(struct spi_nor *nor, u64 addr) 238 188 { 239 189 struct spi_mem_op op; 240 190 u8 *buf = nor->bouncebuf; 241 - int ret; 242 191 243 192 /* 244 193 * The register is 1-byte wide, but 1-byte transactions are not allowed ··· 247 198 buf[0] = SPINOR_REG_CYPRESS_CFR5_OCT_DTR_DS; 248 199 buf[1] = 0; 249 200 op = (struct spi_mem_op) 250 - CYPRESS_NOR_WR_ANY_REG_OP(nor->addr_nbytes, 251 - SPINOR_REG_CYPRESS_CFR5V, 2, buf); 252 - ret = spi_nor_write_any_volatile_reg(nor, &op, SNOR_PROTO_8_8_8_DTR); 253 - if (ret) 254 - return ret; 201 + CYPRESS_NOR_WR_ANY_REG_OP(nor->addr_nbytes, addr, 2, buf); 202 + return spi_nor_write_any_volatile_reg(nor, &op, SNOR_PROTO_8_8_8_DTR); 203 + } 204 + 205 + static int cypress_nor_octal_dtr_dis(struct spi_nor *nor) 206 + { 207 + const struct spi_nor_flash_parameter *params = nor->params; 208 + u8 *buf = nor->bouncebuf; 209 + u64 addr; 210 + int i, ret; 211 + 212 + for (i = 0; i < params->n_dice; i++) { 213 + addr = params->vreg_offset[i] + SPINOR_REG_CYPRESS_CFR5; 214 + ret = cypress_nor_set_single_spi_bits(nor, addr); 215 + if (ret) 216 + return ret; 217 + } 255 218 256 219 /* Read flash ID to make sure the switch was successful.
*/ 257 220 ret = spi_nor_read_id(nor, 0, 0, buf, SNOR_PROTO_1_1_1); ··· 343 282 u64 addr; 344 283 u8 i; 345 284 int ret; 346 - 347 - if (!params->n_dice) 348 - return cypress_nor_quad_enable_volatile_reg(nor, 349 - SPINOR_REG_CYPRESS_CFR1V); 350 285 351 286 for (i = 0; i < params->n_dice; i++) { 352 287 addr = params->vreg_offset[i] + SPINOR_REG_CYPRESS_CFR1; ··· 465 408 return 0; 466 409 } 467 410 468 - static int cypress_nor_get_page_size_single_chip(struct spi_nor *nor) 469 - { 470 - struct spi_mem_op op = 471 - CYPRESS_NOR_RD_ANY_REG_OP(nor->params->addr_mode_nbytes, 472 - SPINOR_REG_CYPRESS_CFR3V, 0, 473 - nor->bouncebuf); 474 - int ret; 475 - 476 - ret = spi_nor_read_any_reg(nor, &op, nor->reg_proto); 477 - if (ret) 478 - return ret; 479 - 480 - if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR3_PGSZ) 481 - nor->params->page_size = 512; 482 - else 483 - nor->params->page_size = 256; 484 - 485 - return 0; 486 - } 487 - 488 - 489 - static int cypress_nor_get_page_size_mcp(struct spi_nor *nor) 411 + /** 412 + * cypress_nor_get_page_size() - Get flash page size configuration. 413 + * @nor: pointer to a 'struct spi_nor' 414 + * 415 + * The BFPT table advertises a 512B or 256B page size depending on part but the 416 + * page size is actually configurable (with the default being 256B). Read from 417 + * CFR3V[4] and set the correct size. 418 + * 419 + * Return: 0 on success, -errno otherwise. 420 + */ 421 + static int cypress_nor_get_page_size(struct spi_nor *nor) 490 422 { 491 423 struct spi_mem_op op = 492 424 CYPRESS_NOR_RD_ANY_REG_OP(nor->params->addr_mode_nbytes, ··· 503 457 params->page_size = 512; 504 458 505 459 return 0; 506 - } 507 - 508 - /** 509 - * cypress_nor_get_page_size() - Get flash page size configuration. 510 - * @nor: pointer to a 'struct spi_nor' 511 - * 512 - * The BFPT table advertises a 512B or 256B page size depending on part but the 513 - * page size is actually configurable (with the default being 256B).
Read from 514 - * CFR3V[4] and set the correct size. 515 - * 516 - * Return: 0 on success, -errno otherwise. 517 - */ 518 - static int cypress_nor_get_page_size(struct spi_nor *nor) 519 - { 520 - if (nor->params->n_dice) 521 - return cypress_nor_get_page_size_mcp(nor); 522 - return cypress_nor_get_page_size_single_chip(nor); 523 460 } 524 461 525 462 static void cypress_nor_ecc_init(struct spi_nor *nor) ··· 541 512 if (nor->bouncebuf[0]) 542 513 return -ENODEV; 543 514 544 - return cypress_nor_get_page_size(nor); 515 + return 0; 545 516 } 546 517 547 518 static int s25fs256t_post_sfdp_fixup(struct spi_nor *nor) 548 519 { 549 520 struct spi_nor_flash_parameter *params = nor->params; 521 + 522 + /* 523 + * S25FS256T does not define the SCCR map, but we would like to use the 524 + * same code base for both single and multi chip package devices, thus 525 + * set the vreg_offset and n_dice to be able to do so. 526 + */ 527 + params->vreg_offset = devm_kmalloc(nor->dev, sizeof(u32), GFP_KERNEL); 528 + if (!params->vreg_offset) 529 + return -ENOMEM; 530 + 531 + params->vreg_offset[0] = SPINOR_REG_CYPRESS_VREG; 532 + params->n_dice = 1; 550 533 551 534 /* PP_1_1_4_4B is supported but missing in 4BAIT.
*/ 552 535 params->hwcaps.mask |= SNOR_HWCAPS_PP_1_1_4; ··· 566 525 SPINOR_OP_PP_1_1_4_4B, 567 526 SNOR_PROTO_1_1_4); 568 527 569 - return 0; 528 + return cypress_nor_get_page_size(nor); 570 529 } 571 530 572 - static void s25fs256t_late_init(struct spi_nor *nor) 531 + static int s25fs256t_late_init(struct spi_nor *nor) 573 532 { 574 533 cypress_nor_ecc_init(nor); 534 + 535 + return 0; 575 536 } 576 537 577 538 static struct spi_nor_fixups s25fs256t_fixups = { ··· 601 558 602 559 static int s25hx_t_post_sfdp_fixup(struct spi_nor *nor) 603 560 { 604 - struct spi_nor_erase_type *erase_type = 605 - nor->params->erase_map.erase_type; 561 + struct spi_nor_flash_parameter *params = nor->params; 562 + struct spi_nor_erase_type *erase_type = params->erase_map.erase_type; 606 563 unsigned int i; 564 + 565 + if (!params->n_dice || !params->vreg_offset) { 566 + dev_err(nor->dev, "%s failed. The volatile register offset could not be retrieved from SFDP.\n", 567 + __func__); 568 + return -EOPNOTSUPP; 569 + } 570 + 571 + /* The 2 Gb parts duplicate info and advertise 4 dice instead of 2. */ 572 + if (params->size == SZ_256M) 573 + params->n_dice = 2; 607 574 608 575 /* 609 576 * In some parts, 3byte erase opcodes are advertised by 4BAIT. ··· 632 579 } 633 580 } 634 581 635 - /* The 2 Gb parts duplicate info and advertise 4 dice instead of 2.
*/ 636 - if (nor->params->size == SZ_256M) 637 - nor->params->n_dice = 2; 638 - 639 582 return cypress_nor_get_page_size(nor); 640 583 } 641 584 642 - static void s25hx_t_late_init(struct spi_nor *nor) 585 + static int s25hx_t_late_init(struct spi_nor *nor) 643 586 { 644 587 struct spi_nor_flash_parameter *params = nor->params; 645 588 646 589 /* Fast Read 4B requires mode cycles */ 647 590 params->reads[SNOR_CMD_READ_FAST].num_mode_clocks = 8; 648 - 591 + params->ready = cypress_nor_sr_ready_and_clear; 649 592 cypress_nor_ecc_init(nor); 650 593 651 - /* Replace ready() with multi die version */ 652 - if (params->n_dice) 653 - params->ready = cypress_nor_sr_ready_and_clear; 594 + return 0; 654 595 } 655 596 656 597 static struct spi_nor_fixups s25hx_t_fixups = { ··· 654 607 }; 655 608 656 609 /** 657 - * cypress_nor_octal_dtr_enable() - Enable octal DTR on Cypress flashes. 610 + * cypress_nor_set_octal_dtr() - Enable or disable octal DTR on Cypress flashes. 658 611 * @nor: pointer to a 'struct spi_nor' 659 612 * @enable: whether to enable or disable Octal DTR 660 613 * ··· 663 616 * 664 617 * Return: 0 on success, -errno otherwise. 665 618 */ 666 - static int cypress_nor_octal_dtr_enable(struct spi_nor *nor, bool enable) 619 + static int cypress_nor_set_octal_dtr(struct spi_nor *nor, bool enable) 667 620 { 668 621 return enable ? cypress_nor_octal_dtr_en(nor) : 669 622 cypress_nor_octal_dtr_dis(nor); ··· 671 624 672 625 static int s28hx_t_post_sfdp_fixup(struct spi_nor *nor) 673 626 { 627 + struct spi_nor_flash_parameter *params = nor->params; 628 + 629 + if (!params->n_dice || !params->vreg_offset) { 630 + dev_err(nor->dev, "%s failed. The volatile register offset could not be retrieved from SFDP.\n", 631 + __func__); 632 + return -EOPNOTSUPP; 633 + } 634 + 635 + /* The 2 Gb parts duplicate info and advertise 4 dice instead of 2.
*/ 636 + if (params->size == SZ_256M) 637 + params->n_dice = 2; 638 + 674 639 /* 675 640 * On older versions of the flash the xSPI Profile 1.0 table has the 676 641 * 8D-8D-8D Fast Read opcode as 0x00. But it actually should be 0xEE. 677 642 */ 678 - if (nor->params->reads[SNOR_CMD_READ_8_8_8_DTR].opcode == 0) 679 - nor->params->reads[SNOR_CMD_READ_8_8_8_DTR].opcode = 643 + if (params->reads[SNOR_CMD_READ_8_8_8_DTR].opcode == 0) 644 + params->reads[SNOR_CMD_READ_8_8_8_DTR].opcode = 680 645 SPINOR_OP_CYPRESS_RD_FAST; 681 646 682 647 /* This flash is also missing the 4-byte Page Program opcode bit. */ 683 - spi_nor_set_pp_settings(&nor->params->page_programs[SNOR_CMD_PP], 648 + spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP], 684 649 SPINOR_OP_PP_4B, SNOR_PROTO_1_1_1); 685 650 /* 686 651 * Since xSPI Page Program opcode is backward compatible with 687 652 * Legacy SPI, use Legacy SPI opcode there as well. 688 653 */ 689 - spi_nor_set_pp_settings(&nor->params->page_programs[SNOR_CMD_PP_8_8_8_DTR], 654 + spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP_8_8_8_DTR], 690 655 SPINOR_OP_PP_4B, SNOR_PROTO_8_8_8_DTR); 691 656 692 657 /* ··· 706 647 * address bytes needed for Read Status Register command as 0 but the 707 648 * actual value for that is 4.
708 649 */ 709 - nor->params->rdsr_addr_nbytes = 4; 650 + params->rdsr_addr_nbytes = 4; 710 651 711 652 return cypress_nor_get_page_size(nor); 712 653 } ··· 715 656 const struct sfdp_parameter_header *bfpt_header, 716 657 const struct sfdp_bfpt *bfpt) 717 658 { 718 - int ret; 719 - 720 - ret = cypress_nor_set_addr_mode_nbytes(nor); 721 - if (ret) 722 - return ret; 723 - 724 - return 0; 659 + return cypress_nor_set_addr_mode_nbytes(nor); 725 660 } 726 661 727 - static void s28hx_t_late_init(struct spi_nor *nor) 662 + static int s28hx_t_late_init(struct spi_nor *nor) 728 663 { 729 - nor->params->octal_dtr_enable = cypress_nor_octal_dtr_enable; 664 + struct spi_nor_flash_parameter *params = nor->params; 665 + 666 + params->set_octal_dtr = cypress_nor_set_octal_dtr; 667 + params->ready = cypress_nor_sr_ready_and_clear; 730 668 cypress_nor_ecc_init(nor); 669 + 670 + return 0; 731 671 } 732 672 733 673 static const struct spi_nor_fixups s28hx_t_fixups = { ··· 850 792 FIXUP_FLAGS(SPI_NOR_4B_OPCODES) }, 851 793 { "s25fs256t", INFO6(0x342b19, 0x0f0890, 0, 0) 852 794 PARSE_SFDP 795 + MFR_FLAGS(USE_CLPEF) 853 796 .fixups = &s25fs256t_fixups }, 854 - { "s25hl512t", INFO6(0x342a1a, 0x0f0390, 256 * 1024, 256) 797 + { "s25hl512t", INFO6(0x342a1a, 0x0f0390, 0, 0) 855 798 PARSE_SFDP 856 - MFR_FLAGS(USE_CLSR) 799 + MFR_FLAGS(USE_CLPEF) 857 800 .fixups = &s25hx_t_fixups }, 858 - { "s25hl01gt", INFO6(0x342a1b, 0x0f0390, 256 * 1024, 512) 801 + { "s25hl01gt", INFO6(0x342a1b, 0x0f0390, 0, 0) 859 802 PARSE_SFDP 860 - MFR_FLAGS(USE_CLSR) 803 + MFR_FLAGS(USE_CLPEF) 861 804 .fixups = &s25hx_t_fixups }, 862 805 { "s25hl02gt", INFO6(0x342a1c, 0x0f0090, 0, 0) 863 806 PARSE_SFDP 807 + MFR_FLAGS(USE_CLPEF) 864 808 FLAGS(NO_CHIP_ERASE) 865 809 .fixups = &s25hx_t_fixups }, 866 - { "s25hs512t", INFO6(0x342b1a, 0x0f0390, 256 * 1024, 256) 810 + { "s25hs512t", INFO6(0x342b1a, 0x0f0390, 0, 0) 867 811 PARSE_SFDP 868 - MFR_FLAGS(USE_CLSR) 812 + MFR_FLAGS(USE_CLPEF) 869 813 .fixups = &s25hx_t_fixups }, 870
- { "s25hs01gt", INFO6(0x342b1b, 0x0f0390, 256 * 1024, 512) 814 + { "s25hs01gt", INFO6(0x342b1b, 0x0f0390, 0, 0) 871 815 PARSE_SFDP 872 - MFR_FLAGS(USE_CLSR) 816 + MFR_FLAGS(USE_CLPEF) 873 817 .fixups = &s25hx_t_fixups }, 874 818 { "s25hs02gt", INFO6(0x342b1c, 0x0f0090, 0, 0) 875 819 PARSE_SFDP 820 + MFR_FLAGS(USE_CLPEF) 876 821 FLAGS(NO_CHIP_ERASE) 877 822 .fixups = &s25hx_t_fixups }, 878 823 { "cy15x104q", INFO6(0x042cc2, 0x7f7f7f, 512 * 1024, 1) 879 824 FLAGS(SPI_NOR_NO_ERASE) }, 880 - { "s28hl512t", INFO(0x345a1a, 0, 256 * 1024, 256) 825 + { "s28hl512t", INFO(0x345a1a, 0, 0, 0) 881 826 PARSE_SFDP 827 + MFR_FLAGS(USE_CLPEF) 882 828 .fixups = &s28hx_t_fixups, 883 829 }, 884 - { "s28hl01gt", INFO(0x345a1b, 0, 256 * 1024, 512) 830 + { "s28hl01gt", INFO(0x345a1b, 0, 0, 0) 885 831 PARSE_SFDP 832 + MFR_FLAGS(USE_CLPEF) 886 833 .fixups = &s28hx_t_fixups, 887 834 }, 888 - { "s28hs512t", INFO(0x345b1a, 0, 256 * 1024, 256) 835 + { "s28hs512t", INFO(0x345b1a, 0, 0, 0) 889 836 PARSE_SFDP 837 + MFR_FLAGS(USE_CLPEF) 890 838 .fixups = &s28hx_t_fixups, 891 839 }, 892 - { "s28hs01gt", INFO(0x345b1b, 0, 256 * 1024, 512) 840 + { "s28hs01gt", INFO(0x345b1b, 0, 0, 0) 893 841 PARSE_SFDP 842 + MFR_FLAGS(USE_CLPEF) 843 + .fixups = &s28hx_t_fixups, 844 + }, 845 + { "s28hs02gt", INFO(0x345b1c, 0, 0, 0) 846 + PARSE_SFDP 847 + MFR_FLAGS(USE_CLPEF) 894 848 .fixups = &s28hx_t_fixups, 895 849 }, 896 850 }; ··· 946 876 return !(nor->bouncebuf[0] & SR_WIP); 947 877 } 948 878 949 - static void spansion_nor_late_init(struct spi_nor *nor) 879 + static int spansion_nor_late_init(struct spi_nor *nor) 950 880 { 951 - if (nor->params->size > SZ_16M) { 881 + struct spi_nor_flash_parameter *params = nor->params; 882 + struct spansion_nor_params *priv_params; 883 + u8 mfr_flags = nor->info->mfr_flags; 884 + 885 + if (params->size > SZ_16M) { 952 886 nor->flags |= SNOR_F_4B_OPCODES; 953 887 /* No small sector erase for 4-byte command set */ 954 888 nor->erase_opcode = SPINOR_OP_SE; 955 889
nor->mtd.erasesize = nor->info->sector_size; 956 890 } 957 891 958 - if (nor->info->mfr_flags & USE_CLSR) 959 - nor->params->ready = spansion_nor_sr_ready_and_clear; 892 + if (mfr_flags & (USE_CLSR | USE_CLPEF)) { 893 + priv_params = devm_kmalloc(nor->dev, sizeof(*priv_params), 894 + GFP_KERNEL); 895 + if (!priv_params) 896 + return -ENOMEM; 897 + 898 + if (mfr_flags & USE_CLSR) 899 + priv_params->clsr = SPINOR_OP_CLSR; 900 + else if (mfr_flags & USE_CLPEF) 901 + priv_params->clsr = SPINOR_OP_CLPEF; 902 + 903 + params->priv = priv_params; 904 + params->ready = spansion_nor_sr_ready_and_clear; 905 + } 906 + 907 + return 0; 960 908 } 961 909 962 910 static const struct spi_nor_fixups spansion_nor_fixups = {
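The spansion.c hunk above replaces the fixed CFR*V register addresses with per-die offsets taken from params->vreg_offset[] and params->n_dice, so single-chip and multi-chip packages share one code path. A standalone sketch of that addressing scheme (struct and helper names simplified here, not the actual driver API; the second-die offset in the usage example is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Register offsets as defined in the diff above. */
#define SPINOR_REG_CYPRESS_VREG 0x00800000ULL
#define SPINOR_REG_CYPRESS_CFR5 0x6ULL

/* Simplified stand-in for the relevant spi_nor_flash_parameter fields. */
struct flash_params {
	uint64_t vreg_offset[4]; /* volatile register base, one entry per die */
	int n_dice;              /* number of dice in the package */
};

/* Address of a given die's CFR5 register, as the octal-DTR loops compute it. */
static uint64_t cfr5_addr(const struct flash_params *p, int die)
{
	return p->vreg_offset[die] + SPINOR_REG_CYPRESS_CFR5;
}
```

Each iteration of the loops in cypress_nor_octal_dtr_en()/cypress_nor_octal_dtr_dis() writes the same register image at this per-die address for i in [0, n_dice).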
+10 -2
drivers/mtd/spi-nor/sst.c
··· 49 49 .is_locked = sst26vf_nor_is_locked, 50 50 }; 51 51 52 - static void sst26vf_nor_late_init(struct spi_nor *nor) 52 + static int sst26vf_nor_late_init(struct spi_nor *nor) 53 53 { 54 54 nor->params->locking_ops = &sst26vf_nor_locking_ops; 55 + 56 + return 0; 55 57 } 56 58 57 59 static const struct spi_nor_fixups sst26vf_nor_fixups = { ··· 113 111 SPI_NOR_QUAD_READ) }, 114 112 { "sst26vf016b", INFO(0xbf2641, 0, 64 * 1024, 32) 115 113 NO_SFDP_FLAGS(SECT_4K | SPI_NOR_DUAL_READ) }, 114 + { "sst26vf032b", INFO(0xbf2642, 0, 0, 0) 115 + FLAGS(SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) 116 + PARSE_SFDP 117 + .fixups = &sst26vf_nor_fixups }, 116 118 { "sst26vf064b", INFO(0xbf2643, 0, 64 * 1024, 128) 117 119 FLAGS(SPI_NOR_HAS_LOCK | SPI_NOR_SWP_IS_VOLATILE) 118 120 NO_SFDP_FLAGS(SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) ··· 209 203 return ret; 210 204 } 211 205 212 - static void sst_nor_late_init(struct spi_nor *nor) 206 + static int sst_nor_late_init(struct spi_nor *nor) 213 207 { 214 208 if (nor->info->mfr_flags & SST_WRITE) 215 209 nor->mtd._write = sst_nor_write; 210 + 211 + return 0; 216 212 } 217 213 218 214 static const struct spi_nor_fixups sst_nor_fixups = {
+7 -2
drivers/mtd/spi-nor/swp.c
··· 214 214 215 215 status_new = (status_old & ~mask & ~tb_mask) | val; 216 216 217 - /* Disallow further writes if WP pin is asserted */ 218 - status_new |= SR_SRWD; 217 + /* 218 + * Disallow further writes if WP# pin is neither left floating nor 219 + * wrongly tied to GND (that includes internal pull-downs). 220 + * WP# pin hard strapped to GND can be a valid use case. 221 + */ 222 + if (!(nor->flags & SNOR_F_NO_WP)) 223 + status_new |= SR_SRWD; 219 224 220 225 if (!use_top) 221 226 status_new |= tb_mask;
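The swp.c hunk makes setting SR_SRWD conditional on the new SNOR_F_NO_WP flag. A minimal model of that status-byte update (bit values mirror the kernel's SR_SRWD = BIT(7) and SNOR_F_NO_WP = BIT(16); the helper itself is illustrative, not the driver function):

```c
#include <assert.h>
#include <stdint.h>

#define SR_SRWD      (1u << 7)   /* Status Register Write Disable */
#define SNOR_F_NO_WP (1u << 16)  /* WP# pin cannot protect the SR */

/* Only set SRWD when the WP# pin can actually be used for protection. */
static uint8_t apply_srwd(uint8_t status_new, uint32_t flags)
{
	if (!(flags & SNOR_F_NO_WP))
		status_new |= SR_SRWD;
	return status_new;
}
```

With SNOR_F_NO_WP set (WP# hard strapped to GND), the status byte is left unchanged instead of locking out further writes.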
+6 -3
drivers/mtd/spi-nor/winbond.c
··· 120 120 NO_SFDP_FLAGS(SECT_4K) }, 121 121 { "w25q80bl", INFO(0xef4014, 0, 64 * 1024, 16) 122 122 NO_SFDP_FLAGS(SECT_4K) }, 123 - { "w25q128", INFO(0xef4018, 0, 64 * 1024, 256) 124 - NO_SFDP_FLAGS(SECT_4K) }, 123 + { "w25q128", INFO(0xef4018, 0, 0, 0) 124 + PARSE_SFDP 125 + FLAGS(SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB) }, 125 126 { "w25q256", INFO(0xef4019, 0, 64 * 1024, 512) 126 127 NO_SFDP_FLAGS(SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) 127 128 .fixups = &w25q256_fixups }, ··· 217 216 .is_locked = spi_nor_otp_is_locked_sr2, 218 217 }; 219 218 220 - static void winbond_nor_late_init(struct spi_nor *nor) 219 + static int winbond_nor_late_init(struct spi_nor *nor) 221 220 { 222 221 struct spi_nor_flash_parameter *params = nor->params; 223 222 ··· 233 232 * from BFPT, if any. 234 233 */ 235 234 params->set_4byte_addr_mode = winbond_nor_set_4byte_addr_mode; 235 + 236 + return 0; 236 237 } 237 238 238 239 static const struct spi_nor_fixups winbond_nor_fixups = {
+3 -1
drivers/mtd/spi-nor/xilinx.c
··· 155 155 return 0; 156 156 } 157 157 158 - static void xilinx_nor_late_init(struct spi_nor *nor) 158 + static int xilinx_nor_late_init(struct spi_nor *nor) 159 159 { 160 160 nor->params->setup = xilinx_nor_setup; 161 161 nor->params->ready = xilinx_nor_sr_ready; 162 + 163 + return 0; 162 164 } 163 165 164 166 static const struct spi_nor_fixups xilinx_nor_fixups = {
+1 -1
include/linux/mtd/mtd.h
··· 379 379 380 380 struct module *owner; 381 381 struct device dev; 382 - int usecount; 382 + struct kref refcnt; 383 383 struct mtd_debug_info dbg; 384 384 struct nvmem_device *nvmem; 385 385 struct nvmem_device *otp_user_nvmem;
+1
include/linux/mtd/rawnand.h
··· 1540 1540 int nand_readid_op(struct nand_chip *chip, u8 addr, void *buf, 1541 1541 unsigned int len); 1542 1542 int nand_status_op(struct nand_chip *chip, u8 *status); 1543 + int nand_exit_status_op(struct nand_chip *chip); 1543 1544 int nand_erase_op(struct nand_chip *chip, unsigned int eraseblock); 1544 1545 int nand_read_page_op(struct nand_chip *chip, unsigned int page, 1545 1546 unsigned int offset_in_page, void *buf, unsigned int len);
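A recurring change across the vendor drivers in this pull is the ->late_init() fixup hook switching from void to int, so fixups that now allocate (e.g. the devm_kmalloc() in spansion_nor_late_init()) can report failure. A userspace sketch of the resulting calling convention (struct and function names simplified; only the error value mirrors kernel usage):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct nor; /* opaque stand-in for struct spi_nor */

/* The hook used to be 'void (*late_init)(...)'; now failures propagate. */
struct nor_fixups {
	int (*late_init)(struct nor *nor);
};

static int failing_late_init(struct nor *nor)
{
	(void)nor;
	return -ENOMEM; /* e.g. an allocation failing inside the fixup */
}

/* Callers must now check and forward the hook's return value. */
static int run_late_init(const struct nor_fixups *f, struct nor *nor)
{
	if (f && f->late_init)
		return f->late_init(nor);
	return 0; /* hooks remain optional */
}
```

This is why every late_init implementation in the diffs above gained a `return 0;` even when it cannot fail.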