Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mtd/for-6.16' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Miquel Raynal:
"A big core MTD change is the introduction of a new class to always
register a master device. This is a problem that has been there
forever: the "master" device was not always present depending on a
number of heuristics such as the presence of fixed partitions and the
absence of a Kconfig symbol to force its presence. This was a problem
for runtime PM operations, which might not have the "master" device
available in all situations.

The SPI NAND subsystem has seen the introduction of DTR operations
(the equivalent of DDR transfers), which involved quite a few
preparation patches to clarify macro names.

In the raw NAND subsystem, the brcmnand driver has been "fixed" for
old legacy SoCs with an update of the ->exec_op() hook, a new
controller driver named Loongson-1 has been introduced, and the
Qualcomm driver has received quite a few misc fixes as well as a new
compatible.

Finally, Macronix SPI NOR entries have been cleaned up and some SFDP
table fixups for the Macronix MX25L3255E have been merged.

Aside from this, there is the usual load of misc improvements, fixes,
and YAML conversions"

* tag 'mtd/for-6.16' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (42 commits)
mtd: rawnand: brcmnand: legacy exec_op implementation
mtd: rawnand: sunxi: Add randomizer configuration in sunxi_nfc_hw_ecc_write_chunk
mtd: nand: brcmnand: fix NAND timeout when accessing eMMC
mtd: nand: sunxi: Add randomizer configuration before randomizer enable
mtd: spinand: esmt: fix id code for F50D1G41LB
mtd: rawnand: brcmnand: remove unused parameters
mtd: core: always create master device
mtd: rawnand: loongson1: Fix inconsistent refcounting in ls1x_nand_chip_init()
mtd: rawnand: loongson1: Fix error code in ls1x_nand_dma_transfer()
mtd: rawnand: qcom: Fix read len for onfi param page
mtd: rawnand: qcom: Fix last codeword read in qcom_param_page_type_exec()
mtd: rawnand: qcom: Pass 18 bit offset from NANDc base to BAM base
dt-bindings: mtd: qcom,nandc: Document the SDX75 NAND controller
mtd: bcm47xxnflash: Add error handling for bcm47xxnflash_ops_bcm4706_ctl_cmd()
mtd: rawnand: Use non-hybrid PCI devres API
mtd: nand: ecc-mxic: Fix use of uninitialized variable ret
mtd: spinand: winbond: Add support for W35N02JW and W35N04JW chips
mtd: spinand: winbond: Add octal support
mtd: spinand: winbond: Add support for W35N01JW in single mode
mtd: spinand: winbond: Rename DTR variants
...

+1774 -414
+89
Documentation/devicetree/bindings/mtd/fsl,vf610-nfc.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mtd/fsl,vf610-nfc.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Freescale's NAND flash controller (NFC) 8 + 9 + description: 10 + This variant of the Freescale NAND flash controller (NFC) can be found on 11 + Vybrid (vf610), MPC5125, MCF54418 and Kinetis K70. 12 + 13 + maintainers: 14 + - Frank Li <Frank.Li@nxp.com> 15 + 16 + properties: 17 + compatible: 18 + enum: 19 + - fsl,vf610-nfc 20 + 21 + reg: 22 + maxItems: 1 23 + 24 + interrupts: 25 + maxItems: 1 26 + 27 + clocks: 28 + maxItems: 1 29 + 30 + clock-names: 31 + items: 32 + - const: nfc 33 + 34 + patternProperties: 35 + "^nand@[a-f0-9]$": 36 + type: object 37 + $ref: raw-nand-chip.yaml 38 + 39 + properties: 40 + compatible: 41 + const: fsl,vf610-nfc-nandcs 42 + 43 + reg: 44 + const: 0 45 + 46 + nand-ecc-strength: 47 + enum: [24, 32] 48 + 49 + nand-ecc-step-size: 50 + const: 2048 51 + 52 + unevaluatedProperties: false 53 + 54 + required: 55 + - compatible 56 + - reg 57 + - interrupts 58 + 59 + allOf: 60 + - $ref: nand-controller.yaml 61 + 62 + unevaluatedProperties: false 63 + 64 + examples: 65 + - | 66 + #include <dt-bindings/interrupt-controller/arm-gic.h> 67 + #include <dt-bindings/clock/vf610-clock.h> 68 + 69 + nand-controller@400e0000 { 70 + compatible = "fsl,vf610-nfc"; 71 + reg = <0x400e0000 0x4000>; 72 + #address-cells = <1>; 73 + #size-cells = <0>; 74 + interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>; 75 + clocks = <&clks VF610_CLK_NFC>; 76 + clock-names = "nfc"; 77 + assigned-clocks = <&clks VF610_CLK_NFC>; 78 + assigned-clock-rates = <33000000>; 79 + 80 + nand@0 { 81 + compatible = "fsl,vf610-nfc-nandcs"; 82 + reg = <0>; 83 + nand-bus-width = <8>; 84 + nand-ecc-mode = "hw"; 85 + nand-ecc-strength = <32>; 86 + nand-ecc-step-size = <2048>; 87 + nand-on-flash-bbt; 88 + }; 89 + };
+72
Documentation/devicetree/bindings/mtd/loongson,ls1b-nand-controller.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mtd/loongson,ls1b-nand-controller.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Loongson-1 NAND Controller 8 + 9 + maintainers: 10 + - Keguang Zhang <keguang.zhang@gmail.com> 11 + 12 + description: 13 + The Loongson-1 NAND controller abstracts all supported operations, 14 + meaning it does not support low-level access to raw NAND flash chips. 15 + Moreover, the controller is paired with the DMA engine to perform 16 + READ and PROGRAM functions. 17 + 18 + allOf: 19 + - $ref: nand-controller.yaml 20 + 21 + properties: 22 + compatible: 23 + oneOf: 24 + - enum: 25 + - loongson,ls1b-nand-controller 26 + - loongson,ls1c-nand-controller 27 + - items: 28 + - enum: 29 + - loongson,ls1a-nand-controller 30 + - const: loongson,ls1b-nand-controller 31 + 32 + reg: 33 + maxItems: 2 34 + 35 + reg-names: 36 + items: 37 + - const: nand 38 + - const: nand-dma 39 + 40 + dmas: 41 + maxItems: 1 42 + 43 + dma-names: 44 + const: rxtx 45 + 46 + required: 47 + - compatible 48 + - reg 49 + - reg-names 50 + - dmas 51 + - dma-names 52 + 53 + unevaluatedProperties: false 54 + 55 + examples: 56 + - | 57 + nand-controller@1fe78000 { 58 + compatible = "loongson,ls1b-nand-controller"; 59 + reg = <0x1fe78000 0x24>, <0x1fe78040 0x4>; 60 + reg-names = "nand", "nand-dma"; 61 + dmas = <&dma 0>; 62 + dma-names = "rxtx"; 63 + #address-cells = <1>; 64 + #size-cells = <0>; 65 + 66 + nand@0 { 67 + reg = <0>; 68 + label = "ls1x-nand"; 69 + nand-use-soft-ecc-engine; 70 + nand-ecc-algo = "hamming"; 71 + }; 72 + };
+24 -6
Documentation/devicetree/bindings/mtd/qcom,nandc.yaml
··· 11 11 12 12 properties: 13 13 compatible: 14 - enum: 15 - - qcom,ipq806x-nand 16 - - qcom,ipq4019-nand 17 - - qcom,ipq6018-nand 18 - - qcom,ipq8074-nand 19 - - qcom,sdx55-nand 14 + oneOf: 15 + - items: 16 + - enum: 17 + - qcom,sdx75-nand 18 + - const: qcom,sdx55-nand 19 + - items: 20 + - enum: 21 + - qcom,ipq806x-nand 22 + - qcom,ipq4019-nand 23 + - qcom,ipq6018-nand 24 + - qcom,ipq8074-nand 25 + - qcom,sdx55-nand 20 26 21 27 reg: 22 28 maxItems: 1 ··· 100 94 dma-names: 101 95 items: 102 96 - const: rxtx 97 + 98 + - if: 99 + properties: 100 + compatible: 101 + contains: 102 + enum: 103 + - qcom,sdx75-nand 104 + 105 + then: 106 + properties: 107 + iommus: 108 + maxItems: 1 103 109 104 110 - if: 105 111 properties:
-59
Documentation/devicetree/bindings/mtd/vf610-nfc.txt
··· 1 - Freescale's NAND flash controller (NFC) 2 - 3 - This variant of the Freescale NAND flash controller (NFC) can be found on 4 - Vybrid (vf610), MPC5125, MCF54418 and Kinetis K70. 5 - 6 - Required properties: 7 - - compatible: Should be set to "fsl,vf610-nfc". 8 - - reg: address range of the NFC. 9 - - interrupts: interrupt of the NFC. 10 - - #address-cells: shall be set to 1. Encode the nand CS. 11 - - #size-cells : shall be set to 0. 12 - - assigned-clocks: main clock from the SoC, for Vybrid <&clks VF610_CLK_NFC>; 13 - - assigned-clock-rates: The NAND bus timing is derived from this clock 14 - rate and should not exceed maximum timing for any NAND memory chip 15 - in a board stuffing. Typical NAND memory timings derived from this 16 - clock are found in the SoC hardware reference manual. Furthermore, 17 - there might be restrictions on maximum rates when using hardware ECC. 18 - 19 - - #address-cells, #size-cells : Must be present if the device has sub-nodes 20 - representing partitions. 21 - 22 - Required children nodes: 23 - Children nodes represent the available nand chips. Currently the driver can 24 - only handle one NAND chip. 25 - 26 - Required properties: 27 - - compatible: Should be set to "fsl,vf610-nfc-cs". 
28 - - nand-bus-width: see nand-controller.yaml 29 - - nand-ecc-mode: see nand-controller.yaml 30 - 31 - Required properties for hardware ECC: 32 - - nand-ecc-strength: supported strengths are 24 and 32 bit (see nand-controller.yaml) 33 - - nand-ecc-step-size: step size equals page size, currently only 2k pages are 34 - supported 35 - - nand-on-flash-bbt: see nand-controller.yaml 36 - 37 - Example: 38 - 39 - nfc: nand@400e0000 { 40 - compatible = "fsl,vf610-nfc"; 41 - #address-cells = <1>; 42 - #size-cells = <0>; 43 - reg = <0x400e0000 0x4000>; 44 - interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>; 45 - clocks = <&clks VF610_CLK_NFC>; 46 - clock-names = "nfc"; 47 - assigned-clocks = <&clks VF610_CLK_NFC>; 48 - assigned-clock-rates = <33000000>; 49 - 50 - nand@0 { 51 - compatible = "fsl,vf610-nfc-nandcs"; 52 - reg = <0>; 53 - nand-bus-width = <8>; 54 - nand-ecc-mode = "hw"; 55 - nand-ecc-strength = <32>; 56 - nand-ecc-step-size = <2048>; 57 - nand-on-flash-bbt; 58 - }; 59 - };
+1
MAINTAINERS
··· 16589 16589 F: arch/mips/include/asm/mach-loongson32/ 16590 16590 F: arch/mips/loongson32/ 16591 16591 F: drivers/*/*loongson1* 16592 + F: drivers/mtd/nand/raw/loongson1-nand-controller.c 16592 16593 F: drivers/net/ethernet/stmicro/stmmac/dwmac-loongson1.c 16593 16594 F: sound/soc/loongson/loongson1_ac97.c 16594 16595
+1 -1
drivers/mtd/devices/Kconfig
··· 98 98 config MTD_SPEAR_SMI 99 99 tristate "SPEAR MTD NOR Support through SMI controller" 100 100 depends on PLAT_SPEAR || COMPILE_TEST 101 - default y 101 + default PLAT_SPEAR 102 102 help 103 103 This enable SNOR support on SPEAR platforms using SMI controller 104 104
+1 -1
drivers/mtd/mtdchar.c
··· 559 559 /* Sanitize user input */ 560 560 p.devname[BLKPG_DEVNAMELTH - 1] = '\0'; 561 561 562 - return mtd_add_partition(mtd, p.devname, p.start, p.length); 562 + return mtd_add_partition(mtd, p.devname, p.start, p.length, NULL); 563 563 564 564 case BLKPG_DEL_PARTITION: 565 565
+112 -40
drivers/mtd/mtdcore.c
··· 68 68 .pm = MTD_CLS_PM_OPS, 69 69 }; 70 70 71 + static struct class mtd_master_class = { 72 + .name = "mtd_master", 73 + .pm = MTD_CLS_PM_OPS, 74 + }; 75 + 71 76 static DEFINE_IDR(mtd_idr); 77 + static DEFINE_IDR(mtd_master_idr); 72 78 73 79 /* These are exported solely for the purpose of mtd_blkdevs.c. You 74 80 should not use them for _anything_ else */ ··· 89 83 90 84 static LIST_HEAD(mtd_notifiers); 91 85 92 - 86 + #define MTD_MASTER_DEVS 255 93 87 #define MTD_DEVT(index) MKDEV(MTD_CHAR_MAJOR, (index)*2) 88 + static dev_t mtd_master_devt; 94 89 95 90 /* REVISIT once MTD uses the driver model better, whoever allocates 96 91 * the mtd_info will probably want to use the release() hook... ··· 109 102 110 103 /* remove /dev/mtdXro node */ 111 104 device_destroy(&mtd_class, index + 1); 105 + } 106 + 107 + static void mtd_master_release(struct device *dev) 108 + { 109 + struct mtd_info *mtd = dev_get_drvdata(dev); 110 + 111 + idr_remove(&mtd_master_idr, mtd->index); 112 + of_node_put(mtd_get_of_node(mtd)); 113 + 114 + if (mtd_is_partition(mtd)) 115 + release_mtd_partition(mtd); 112 116 } 113 117 114 118 static void mtd_device_release(struct kref *kref) ··· 383 365 .name = "mtd", 384 366 .groups = mtd_groups, 385 367 .release = mtd_release, 368 + }; 369 + 370 + static const struct device_type mtd_master_devtype = { 371 + .name = "mtd_master", 372 + .release = mtd_master_release, 386 373 }; 387 374 388 375 static bool mtd_expert_analysis_mode; ··· 657 634 /** 658 635 * add_mtd_device - register an MTD device 659 636 * @mtd: pointer to new MTD device info structure 637 + * @partitioned: create partitioned device 660 638 * 661 639 * Add a device to the list of MTD devices present in the system, and 662 640 * notify each currently active MTD 'user' of its arrival. Returns 663 641 * zero on success or non-zero on failure. 
664 642 */ 665 - 666 - int add_mtd_device(struct mtd_info *mtd) 643 + int add_mtd_device(struct mtd_info *mtd, bool partitioned) 667 644 { 668 645 struct device_node *np = mtd_get_of_node(mtd); 669 646 struct mtd_info *master = mtd_get_master(mtd); ··· 710 687 ofidx = -1; 711 688 if (np) 712 689 ofidx = of_alias_get_id(np, "mtd"); 713 - if (ofidx >= 0) 714 - i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); 715 - else 716 - i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); 690 + if (partitioned) { 691 + if (ofidx >= 0) 692 + i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); 693 + else 694 + i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); 695 + } else { 696 + if (ofidx >= 0) 697 + i = idr_alloc(&mtd_master_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); 698 + else 699 + i = idr_alloc(&mtd_master_idr, mtd, 0, 0, GFP_KERNEL); 700 + } 717 701 if (i < 0) { 718 702 error = i; 719 703 goto fail_locked; ··· 768 738 /* Caller should have set dev.parent to match the 769 739 * physical device, if appropriate. 
770 740 */ 771 - mtd->dev.type = &mtd_devtype; 772 - mtd->dev.class = &mtd_class; 773 - mtd->dev.devt = MTD_DEVT(i); 774 - error = dev_set_name(&mtd->dev, "mtd%d", i); 741 + if (partitioned) { 742 + mtd->dev.type = &mtd_devtype; 743 + mtd->dev.class = &mtd_class; 744 + mtd->dev.devt = MTD_DEVT(i); 745 + dev_set_name(&mtd->dev, "mtd%d", i); 746 + error = dev_set_name(&mtd->dev, "mtd%d", i); 747 + } else { 748 + mtd->dev.type = &mtd_master_devtype; 749 + mtd->dev.class = &mtd_master_class; 750 + mtd->dev.devt = MKDEV(MAJOR(mtd_master_devt), i); 751 + error = dev_set_name(&mtd->dev, "mtd_master%d", i); 752 + } 775 753 if (error) 776 754 goto fail_devname; 777 755 dev_set_drvdata(&mtd->dev, mtd); ··· 787 749 of_node_get(mtd_get_of_node(mtd)); 788 750 error = device_register(&mtd->dev); 789 751 if (error) { 752 + pr_err("mtd: %s device_register fail %d\n", mtd->name, error); 790 753 put_device(&mtd->dev); 791 754 goto fail_added; 792 755 } ··· 799 760 800 761 mtd_debugfs_populate(mtd); 801 762 802 - device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL, 803 - "mtd%dro", i); 763 + if (partitioned) { 764 + device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL, 765 + "mtd%dro", i); 766 + } 804 767 805 - pr_debug("mtd: Giving out device %d to %s\n", i, mtd->name); 768 + pr_debug("mtd: Giving out %spartitioned device %d to %s\n", 769 + partitioned ? 
"" : "un-", i, mtd->name); 806 770 /* No need to get a refcount on the module containing 807 771 the notifier, since we hold the mtd_table_mutex */ 808 772 list_for_each_entry(not, &mtd_notifiers, list) ··· 813 771 814 772 mutex_unlock(&mtd_table_mutex); 815 773 816 - if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) { 817 - if (IS_BUILTIN(CONFIG_MTD)) { 818 - pr_info("mtd: setting mtd%d (%s) as root device\n", mtd->index, mtd->name); 819 - ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index); 820 - } else { 821 - pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n", 822 - mtd->index, mtd->name); 774 + if (partitioned) { 775 + if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) { 776 + if (IS_BUILTIN(CONFIG_MTD)) { 777 + pr_info("mtd: setting mtd%d (%s) as root device\n", 778 + mtd->index, mtd->name); 779 + ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index); 780 + } else { 781 + pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n", 782 + mtd->index, mtd->name); 783 + } 823 784 } 824 785 } 825 786 ··· 838 793 fail_added: 839 794 of_node_put(mtd_get_of_node(mtd)); 840 795 fail_devname: 841 - idr_remove(&mtd_idr, i); 796 + if (partitioned) 797 + idr_remove(&mtd_idr, i); 798 + else 799 + idr_remove(&mtd_master_idr, i); 842 800 fail_locked: 843 801 mutex_unlock(&mtd_table_mutex); 844 802 return error; ··· 859 811 860 812 int del_mtd_device(struct mtd_info *mtd) 861 813 { 862 - int ret; 863 814 struct mtd_notifier *not; 815 + struct idr *idr; 816 + int ret; 864 817 865 818 mutex_lock(&mtd_table_mutex); 866 819 867 - if (idr_find(&mtd_idr, mtd->index) != mtd) { 820 + idr = mtd->dev.class == &mtd_class ? 
&mtd_idr : &mtd_master_idr; 821 + if (idr_find(idr, mtd->index) != mtd) { 868 822 ret = -ENODEV; 869 823 goto out_error; 870 824 } ··· 1106 1056 const struct mtd_partition *parts, 1107 1057 int nr_parts) 1108 1058 { 1059 + struct mtd_info *parent; 1109 1060 int ret, err; 1110 1061 1111 1062 mtd_set_dev_defaults(mtd); ··· 1115 1064 if (ret) 1116 1065 goto out; 1117 1066 1067 + ret = add_mtd_device(mtd, false); 1068 + if (ret) 1069 + goto out; 1070 + 1118 1071 if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) { 1119 - ret = add_mtd_device(mtd); 1072 + ret = mtd_add_partition(mtd, mtd->name, 0, MTDPART_SIZ_FULL, &parent); 1120 1073 if (ret) 1121 1074 goto out; 1075 + 1076 + } else { 1077 + parent = mtd; 1122 1078 } 1123 1079 1124 1080 /* Prefer parsed partitions over driver-provided fallback */ 1125 - ret = parse_mtd_partitions(mtd, types, parser_data); 1081 + ret = parse_mtd_partitions(parent, types, parser_data); 1126 1082 if (ret == -EPROBE_DEFER) 1127 1083 goto out; 1128 1084 1129 1085 if (ret > 0) 1130 1086 ret = 0; 1131 1087 else if (nr_parts) 1132 - ret = add_mtd_partitions(mtd, parts, nr_parts); 1133 - else if (!device_is_registered(&mtd->dev)) 1134 - ret = add_mtd_device(mtd); 1135 - else 1136 - ret = 0; 1088 + ret = add_mtd_partitions(parent, parts, nr_parts); 1089 + else if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) 1090 + ret = mtd_add_partition(parent, mtd->name, 0, MTDPART_SIZ_FULL, NULL); 1137 1091 1138 1092 if (ret) 1139 1093 goto out; ··· 1158 1102 register_reboot_notifier(&mtd->reboot_notifier); 1159 1103 } 1160 1104 1105 + return 0; 1161 1106 out: 1162 - if (ret) { 1163 - nvmem_unregister(mtd->otp_user_nvmem); 1164 - nvmem_unregister(mtd->otp_factory_nvmem); 1165 - } 1107 + nvmem_unregister(mtd->otp_user_nvmem); 1108 + nvmem_unregister(mtd->otp_factory_nvmem); 1166 1109 1167 - if (ret && device_is_registered(&mtd->dev)) { 1110 + del_mtd_partitions(mtd); 1111 + 1112 + if (device_is_registered(&mtd->dev)) { 1168 1113 err = del_mtd_device(mtd); 1169 
1114 if (err) 1170 1115 pr_err("Error when deleting MTD device (%d)\n", err); ··· 1324 1267 mtd = mtd->parent; 1325 1268 } 1326 1269 1327 - if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) 1328 - kref_get(&master->refcnt); 1270 + kref_get(&master->refcnt); 1329 1271 1330 1272 return 0; 1331 1273 } ··· 1418 1362 mtd = parent; 1419 1363 } 1420 1364 1421 - if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) 1422 - kref_put(&master->refcnt, mtd_device_release); 1365 + kref_put(&master->refcnt, mtd_device_release); 1423 1366 1424 1367 module_put(master->owner); 1425 1368 ··· 2585 2530 if (ret) 2586 2531 goto err_reg; 2587 2532 2533 + ret = class_register(&mtd_master_class); 2534 + if (ret) 2535 + goto err_reg2; 2536 + 2537 + ret = alloc_chrdev_region(&mtd_master_devt, 0, MTD_MASTER_DEVS, "mtd_master"); 2538 + if (ret < 0) { 2539 + pr_err("unable to allocate char dev region\n"); 2540 + goto err_chrdev; 2541 + } 2542 + 2588 2543 mtd_bdi = mtd_bdi_init("mtd"); 2589 2544 if (IS_ERR(mtd_bdi)) { 2590 2545 ret = PTR_ERR(mtd_bdi); ··· 2619 2554 bdi_unregister(mtd_bdi); 2620 2555 bdi_put(mtd_bdi); 2621 2556 err_bdi: 2557 + unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS); 2558 + err_chrdev: 2559 + class_unregister(&mtd_master_class); 2560 + err_reg2: 2622 2561 class_unregister(&mtd_class); 2623 2562 err_reg: 2624 2563 pr_err("Error registering mtd class or bdi: %d\n", ret); ··· 2636 2567 if (proc_mtd) 2637 2568 remove_proc_entry("mtd", NULL); 2638 2569 class_unregister(&mtd_class); 2570 + class_unregister(&mtd_master_class); 2571 + unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS); 2639 2572 bdi_unregister(mtd_bdi); 2640 2573 bdi_put(mtd_bdi); 2641 2574 idr_destroy(&mtd_idr); 2575 + idr_destroy(&mtd_master_idr); 2642 2576 } 2643 2577 2644 2578 module_init(init_mtd);
+1 -1
drivers/mtd/mtdcore.h
··· 8 8 extern struct backing_dev_info *mtd_bdi; 9 9 10 10 struct mtd_info *__mtd_next_device(int i); 11 - int __must_check add_mtd_device(struct mtd_info *mtd); 11 + int __must_check add_mtd_device(struct mtd_info *mtd, bool partitioned); 12 12 int del_mtd_device(struct mtd_info *mtd); 13 13 int add_mtd_partitions(struct mtd_info *, const struct mtd_partition *, int); 14 14 int del_mtd_partitions(struct mtd_info *);
+8 -8
drivers/mtd/mtdpart.c
··· 86 86 * parent conditional on that option. Note, this is a way to 87 87 * distinguish between the parent and its partitions in sysfs. 88 88 */ 89 - child->dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) || mtd_is_partition(parent) ? 90 - &parent->dev : parent->dev.parent; 89 + child->dev.parent = &parent->dev; 91 90 child->dev.of_node = part->of_node; 92 91 child->parent = parent; 93 92 child->part.offset = part->offset; ··· 242 243 } 243 244 244 245 int mtd_add_partition(struct mtd_info *parent, const char *name, 245 - long long offset, long long length) 246 + long long offset, long long length, struct mtd_info **out) 246 247 { 247 248 struct mtd_info *master = mtd_get_master(parent); 248 249 u64 parent_size = mtd_is_partition(parent) ? ··· 275 276 list_add_tail(&child->part.node, &parent->partitions); 276 277 mutex_unlock(&master->master.partitions_lock); 277 278 278 - ret = add_mtd_device(child); 279 + ret = add_mtd_device(child, true); 279 280 if (ret) 280 281 goto err_remove_part; 281 282 282 283 mtd_add_partition_attrs(child); 284 + 285 + if (out) 286 + *out = child; 283 287 284 288 return 0; 285 289 ··· 415 413 list_add_tail(&child->part.node, &parent->partitions); 416 414 mutex_unlock(&master->master.partitions_lock); 417 415 418 - ret = add_mtd_device(child); 416 + ret = add_mtd_device(child, true); 419 417 if (ret) { 420 418 mutex_lock(&master->master.partitions_lock); 421 419 list_del(&child->part.node); ··· 592 590 int ret, err = 0; 593 591 594 592 dev = &master->dev; 595 - /* Use parent device (controller) if the top level MTD is not registered */ 596 - if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) && !mtd_is_partition(master)) 597 - dev = master->dev.parent; 598 593 599 594 np = mtd_get_of_node(master); 600 595 if (mtd_is_partition(master)) ··· 710 711 if (ret < 0 && !err) 711 712 err = ret; 712 713 } 714 + 713 715 return err; 714 716 } 715 717
+1 -1
drivers/mtd/nand/ecc-mxic.c
··· 614 614 { 615 615 struct mxic_ecc_engine *mxic = nand_to_mxic(nand); 616 616 struct mxic_ecc_ctx *ctx = nand_to_ecc_ctx(nand); 617 - int nents, step, ret; 617 + int nents, step, ret = 0; 618 618 619 619 if (req->mode == MTD_OPS_RAW) 620 620 return 0;
+4 -4
drivers/mtd/nand/qpic_common.c
··· 236 236 int i, ret; 237 237 struct bam_cmd_element *bam_ce_buffer; 238 238 struct bam_transaction *bam_txn = nandc->bam_txn; 239 + u32 offset; 239 240 240 241 bam_ce_buffer = &bam_txn->bam_ce[bam_txn->bam_ce_pos]; 241 242 242 243 /* fill the command desc */ 243 244 for (i = 0; i < size; i++) { 245 + offset = nandc->props->bam_offset + reg_off + 4 * i; 244 246 if (read) 245 247 bam_prep_ce(&bam_ce_buffer[i], 246 - nandc_reg_phys(nandc, reg_off + 4 * i), 247 - BAM_READ_COMMAND, 248 + offset, BAM_READ_COMMAND, 248 249 reg_buf_dma_addr(nandc, 249 250 (__le32 *)vaddr + i)); 250 251 else 251 252 bam_prep_ce_le32(&bam_ce_buffer[i], 252 - nandc_reg_phys(nandc, reg_off + 4 * i), 253 - BAM_WRITE_COMMAND, 253 + offset, BAM_WRITE_COMMAND, 254 254 *((__le32 *)vaddr + i)); 255 255 } 256 256
+8 -1
drivers/mtd/nand/raw/Kconfig
··· 34 34 config MTD_NAND_AMS_DELTA 35 35 tristate "Amstrad E3 NAND controller" 36 36 depends on MACH_AMS_DELTA || COMPILE_TEST 37 - default y 37 + default MACH_AMS_DELTA 38 38 help 39 39 Support for NAND flash on Amstrad E3 (Delta). 40 40 ··· 461 461 help 462 462 Enables support for the NAND controller found on 463 463 the Nuvoton MA35 series SoCs. 464 + 465 + config MTD_NAND_LOONGSON1 466 + tristate "Loongson1 NAND controller" 467 + depends on LOONGSON1_APB_DMA || COMPILE_TEST 468 + select REGMAP_MMIO 469 + help 470 + Enables support for NAND controller on Loongson1 SoCs. 464 471 465 472 comment "Misc" 466 473
+1
drivers/mtd/nand/raw/Makefile
··· 59 59 obj-$(CONFIG_MTD_NAND_PL35X) += pl35x-nand-controller.o 60 60 obj-$(CONFIG_MTD_NAND_RENESAS) += renesas-nand-controller.o 61 61 obj-$(CONFIG_MTD_NAND_NUVOTON_MA35) += nuvoton-ma35d1-nand-controller.o 62 + obj-$(CONFIG_MTD_NAND_LOONGSON1) += loongson1-nand-controller.o 62 63 63 64 nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o 64 65 nand-objs += nand_onfi.o
+4 -1
drivers/mtd/nand/raw/bcm47xxnflash/ops_bcm4706.c
··· 171 171 { 172 172 struct bcm47xxnflash *b47n = nand_get_controller_data(nand_chip); 173 173 u32 code = 0; 174 + int rc; 174 175 175 176 if (cmd == NAND_CMD_NONE) 176 177 return; ··· 183 182 if (cmd != NAND_CMD_RESET) 184 183 code |= NCTL_CSA; 185 184 186 - bcm47xxnflash_ops_bcm4706_ctl_cmd(b47n->cc, code); 185 + rc = bcm47xxnflash_ops_bcm4706_ctl_cmd(b47n->cc, code); 186 + if (rc) 187 + pr_err("ctl_cmd didn't work with error %d\n", rc); 187 188 } 188 189 189 190 /* Default nand_select_chip calls cmd_ctrl, which is not used in BCM4706 */
+222 -26
drivers/mtd/nand/raw/brcmnand/brcmnand.c
··· 65 65 #define CMD_PARAMETER_READ 0x0e 66 66 #define CMD_PARAMETER_CHANGE_COL 0x0f 67 67 #define CMD_LOW_LEVEL_OP 0x10 68 + #define CMD_NOT_SUPPORTED 0xff 68 69 69 70 struct brcm_nand_dma_desc { 70 71 u32 next_desc; ··· 102 101 #define BRCMNAND_MIN_DEVSIZE (4ULL * 1024 * 1024) 103 102 104 103 #define NAND_CTRL_RDY (INTFC_CTLR_READY | INTFC_FLASH_READY) 105 - #define NAND_POLL_STATUS_TIMEOUT_MS 100 104 + #define NAND_POLL_STATUS_TIMEOUT_MS 500 106 105 107 106 #define EDU_CMD_WRITE 0x00 108 107 #define EDU_CMD_READ 0x01 ··· 200 199 [FLASH_DMA_CURRENT_DESC_EXT] = 0x34, 201 200 }; 202 201 202 + /* Native command conversion for legacy controllers (< v5.0) */ 203 + static const u8 native_cmd_conv[] = { 204 + [NAND_CMD_READ0] = CMD_NOT_SUPPORTED, 205 + [NAND_CMD_READ1] = CMD_NOT_SUPPORTED, 206 + [NAND_CMD_RNDOUT] = CMD_PARAMETER_CHANGE_COL, 207 + [NAND_CMD_PAGEPROG] = CMD_NOT_SUPPORTED, 208 + [NAND_CMD_READOOB] = CMD_NOT_SUPPORTED, 209 + [NAND_CMD_ERASE1] = CMD_BLOCK_ERASE, 210 + [NAND_CMD_STATUS] = CMD_NOT_SUPPORTED, 211 + [NAND_CMD_SEQIN] = CMD_NOT_SUPPORTED, 212 + [NAND_CMD_RNDIN] = CMD_NOT_SUPPORTED, 213 + [NAND_CMD_READID] = CMD_DEVICE_ID_READ, 214 + [NAND_CMD_ERASE2] = CMD_NULL, 215 + [NAND_CMD_PARAM] = CMD_PARAMETER_READ, 216 + [NAND_CMD_GET_FEATURES] = CMD_NOT_SUPPORTED, 217 + [NAND_CMD_SET_FEATURES] = CMD_NOT_SUPPORTED, 218 + [NAND_CMD_RESET] = CMD_NOT_SUPPORTED, 219 + [NAND_CMD_READSTART] = CMD_NOT_SUPPORTED, 220 + [NAND_CMD_READCACHESEQ] = CMD_NOT_SUPPORTED, 221 + [NAND_CMD_READCACHEEND] = CMD_NOT_SUPPORTED, 222 + [NAND_CMD_RNDOUTSTART] = CMD_NULL, 223 + [NAND_CMD_CACHEDPROG] = CMD_NOT_SUPPORTED, 224 + }; 225 + 203 226 /* Controller feature flags */ 204 227 enum { 205 228 BRCMNAND_HAS_1K_SECTORS = BIT(0), ··· 261 236 262 237 /* List of NAND hosts (one for each chip-select) */ 263 238 struct list_head host_list; 239 + 240 + /* Functions to be called from exec_op */ 241 + int (*check_instr)(struct nand_chip *chip, 242 + const struct nand_operation *op); 243 + 
int (*exec_instr)(struct nand_chip *chip, 244 + const struct nand_operation *op); 264 245 265 246 /* EDU info, per-transaction */ 266 247 const u16 *edu_offsets; ··· 341 310 struct platform_device *pdev; 342 311 int cs; 343 312 344 - unsigned int last_cmd; 345 - unsigned int last_byte; 346 - u64 last_addr; 347 313 struct brcmnand_cfg hwcfg; 348 314 struct brcmnand_controller *ctrl; 349 315 }; ··· 2261 2233 int oob_required, int page) 2262 2234 { 2263 2235 struct mtd_info *mtd = nand_to_mtd(chip); 2264 - struct brcmnand_host *host = nand_get_controller_data(chip); 2265 2236 u8 *oob = oob_required ? (u8 *)chip->oob_poi : NULL; 2266 2237 u64 addr = (u64)page << chip->page_shift; 2267 2238 2268 - host->last_addr = addr; 2269 - 2270 - return brcmnand_read(mtd, chip, host->last_addr, 2271 - mtd->writesize >> FC_SHIFT, (u32 *)buf, oob); 2239 + return brcmnand_read(mtd, chip, addr, mtd->writesize >> FC_SHIFT, 2240 + (u32 *)buf, oob); 2272 2241 } 2273 2242 2274 2243 static int brcmnand_read_page_raw(struct nand_chip *chip, uint8_t *buf, ··· 2277 2252 int ret; 2278 2253 u64 addr = (u64)page << chip->page_shift; 2279 2254 2280 - host->last_addr = addr; 2281 - 2282 2255 brcmnand_set_ecc_enabled(host, 0); 2283 - ret = brcmnand_read(mtd, chip, host->last_addr, 2284 - mtd->writesize >> FC_SHIFT, (u32 *)buf, oob); 2256 + ret = brcmnand_read(mtd, chip, addr, mtd->writesize >> FC_SHIFT, 2257 + (u32 *)buf, oob); 2285 2258 brcmnand_set_ecc_enabled(host, 1); 2286 2259 return ret; 2287 2260 } ··· 2386 2363 int oob_required, int page) 2387 2364 { 2388 2365 struct mtd_info *mtd = nand_to_mtd(chip); 2389 - struct brcmnand_host *host = nand_get_controller_data(chip); 2390 2366 void *oob = oob_required ? 
chip->oob_poi : NULL; 2391 2367 u64 addr = (u64)page << chip->page_shift; 2392 2368 2393 - host->last_addr = addr; 2394 - 2395 - return brcmnand_write(mtd, chip, host->last_addr, (const u32 *)buf, oob); 2369 + return brcmnand_write(mtd, chip, addr, (const u32 *)buf, oob); 2396 2370 } 2397 2371 2398 2372 static int brcmnand_write_page_raw(struct nand_chip *chip, const uint8_t *buf, ··· 2401 2381 u64 addr = (u64)page << chip->page_shift; 2402 2382 int ret = 0; 2403 2383 2404 - host->last_addr = addr; 2405 2384 brcmnand_set_ecc_enabled(host, 0); 2406 - ret = brcmnand_write(mtd, chip, host->last_addr, (const u32 *)buf, oob); 2385 + ret = brcmnand_write(mtd, chip, addr, (const u32 *)buf, oob); 2407 2386 brcmnand_set_ecc_enabled(host, 1); 2408 2387 2409 2388 return ret; ··· 2509 2490 return 0; 2510 2491 } 2511 2492 2493 + static int brcmnand_check_instructions(struct nand_chip *chip, 2494 + const struct nand_operation *op) 2495 + { 2496 + return 0; 2497 + } 2498 + 2499 + static int brcmnand_exec_instructions(struct nand_chip *chip, 2500 + const struct nand_operation *op) 2501 + { 2502 + struct brcmnand_host *host = nand_get_controller_data(chip); 2503 + unsigned int i; 2504 + int ret = 0; 2505 + 2506 + for (i = 0; i < op->ninstrs; i++) { 2507 + ret = brcmnand_exec_instr(host, i, op); 2508 + if (ret) 2509 + break; 2510 + } 2511 + 2512 + return ret; 2513 + } 2514 + 2515 + static int brcmnand_check_instructions_legacy(struct nand_chip *chip, 2516 + const struct nand_operation *op) 2517 + { 2518 + const struct nand_op_instr *instr; 2519 + unsigned int i; 2520 + u8 cmd; 2521 + 2522 + for (i = 0; i < op->ninstrs; i++) { 2523 + instr = &op->instrs[i]; 2524 + 2525 + switch (instr->type) { 2526 + case NAND_OP_CMD_INSTR: 2527 + cmd = native_cmd_conv[instr->ctx.cmd.opcode]; 2528 + if (cmd == CMD_NOT_SUPPORTED) 2529 + return -EOPNOTSUPP; 2530 + break; 2531 + case NAND_OP_ADDR_INSTR: 2532 + case NAND_OP_DATA_IN_INSTR: 2533 + case NAND_OP_WAITRDY_INSTR: 2534 + break; 2535 + default: 
2536 + return -EOPNOTSUPP; 2537 + } 2538 + } 2539 + 2540 + return 0; 2541 + } 2542 + 2543 + static int brcmnand_exec_instructions_legacy(struct nand_chip *chip, 2544 + const struct nand_operation *op) 2545 + { 2546 + struct mtd_info *mtd = nand_to_mtd(chip); 2547 + struct brcmnand_host *host = nand_get_controller_data(chip); 2548 + struct brcmnand_controller *ctrl = host->ctrl; 2549 + const struct nand_op_instr *instr; 2550 + unsigned int i, j; 2551 + u8 cmd = CMD_NULL, last_cmd = CMD_NULL; 2552 + int ret = 0; 2553 + u64 last_addr; 2554 + 2555 + for (i = 0; i < op->ninstrs; i++) { 2556 + instr = &op->instrs[i]; 2557 + 2558 + if (instr->type == NAND_OP_CMD_INSTR) { 2559 + cmd = native_cmd_conv[instr->ctx.cmd.opcode]; 2560 + if (cmd == CMD_NOT_SUPPORTED) { 2561 + dev_err(ctrl->dev, "unsupported cmd=%d\n", 2562 + instr->ctx.cmd.opcode); 2563 + ret = -EOPNOTSUPP; 2564 + break; 2565 + } 2566 + } else if (instr->type == NAND_OP_ADDR_INSTR) { 2567 + u64 addr = 0; 2568 + 2569 + if (cmd == CMD_NULL) 2570 + continue; 2571 + 2572 + if (instr->ctx.addr.naddrs > 8) { 2573 + dev_err(ctrl->dev, "unsupported naddrs=%u\n", 2574 + instr->ctx.addr.naddrs); 2575 + ret = -EOPNOTSUPP; 2576 + break; 2577 + } 2578 + 2579 + for (j = 0; j < instr->ctx.addr.naddrs; j++) 2580 + addr |= (instr->ctx.addr.addrs[j]) << (j << 3); 2581 + 2582 + if (cmd == CMD_BLOCK_ERASE) 2583 + addr <<= chip->page_shift; 2584 + else if (cmd == CMD_PARAMETER_CHANGE_COL) 2585 + addr &= ~((u64)(FC_BYTES - 1)); 2586 + 2587 + brcmnand_set_cmd_addr(mtd, addr); 2588 + brcmnand_send_cmd(host, cmd); 2589 + last_addr = addr; 2590 + last_cmd = cmd; 2591 + cmd = CMD_NULL; 2592 + brcmnand_waitfunc(chip); 2593 + 2594 + if (last_cmd == CMD_PARAMETER_READ || 2595 + last_cmd == CMD_PARAMETER_CHANGE_COL) { 2596 + /* Copy flash cache word-wise */ 2597 + u32 *flash_cache = (u32 *)ctrl->flash_cache; 2598 + 2599 + brcmnand_soc_data_bus_prepare(ctrl->soc, true); 2600 + 2601 + /* 2602 + * Must cache the FLASH_CACHE now, since changes in 
2603 + * SECTOR_SIZE_1K may invalidate it 2604 + */ 2605 + for (j = 0; j < FC_WORDS; j++) 2606 + /* 2607 + * Flash cache is big endian for parameter pages, at 2608 + * least on STB SoCs 2609 + */ 2610 + flash_cache[j] = be32_to_cpu(brcmnand_read_fc(ctrl, j)); 2611 + 2612 + brcmnand_soc_data_bus_unprepare(ctrl->soc, true); 2613 + } 2614 + } else if (instr->type == NAND_OP_DATA_IN_INSTR) { 2615 + u8 *in = instr->ctx.data.buf.in; 2616 + 2617 + if (last_cmd == CMD_DEVICE_ID_READ) { 2618 + u32 val; 2619 + 2620 + if (instr->ctx.data.len > 8) { 2621 + dev_err(ctrl->dev, "unsupported len=%u\n", 2622 + instr->ctx.data.len); 2623 + ret = -EOPNOTSUPP; 2624 + break; 2625 + } 2626 + 2627 + for (j = 0; j < instr->ctx.data.len; j++) { 2628 + if (j == 0) 2629 + val = brcmnand_read_reg(ctrl, BRCMNAND_ID); 2630 + else if (j == 4) 2631 + val = brcmnand_read_reg(ctrl, BRCMNAND_ID_EXT); 2632 + 2633 + in[j] = (val >> (24 - ((j % 4) << 3))) & 0xff; 2634 + } 2635 + } else if (last_cmd == CMD_PARAMETER_READ || 2636 + last_cmd == CMD_PARAMETER_CHANGE_COL) { 2637 + u64 addr; 2638 + u32 offs; 2639 + 2640 + for (j = 0; j < instr->ctx.data.len; j++) { 2641 + addr = last_addr + j; 2642 + offs = addr & (FC_BYTES - 1); 2643 + 2644 + if (j > 0 && offs == 0) 2645 + nand_change_read_column_op(chip, addr, NULL, 0, 2646 + false); 2647 + 2648 + in[j] = ctrl->flash_cache[offs]; 2649 + } 2650 + } 2651 + } else if (instr->type == NAND_OP_WAITRDY_INSTR) { 2652 + ret = bcmnand_ctrl_poll_status(host, NAND_CTRL_RDY, NAND_CTRL_RDY, 0); 2653 + if (ret) 2654 + break; 2655 + } else { 2656 + dev_err(ctrl->dev, "unsupported instruction type: %d\n", instr->type); 2657 + ret = -EOPNOTSUPP; 2658 + break; 2659 + } 2660 + } 2661 + 2662 + return ret; 2663 + } 2664 + 2512 2665 static int brcmnand_exec_op(struct nand_chip *chip, 2513 2666 const struct nand_operation *op, 2514 2667 bool check_only) 2515 2668 { 2516 2669 struct brcmnand_host *host = nand_get_controller_data(chip); 2670 + struct brcmnand_controller *ctrl = 
host->ctrl; 2517 2671 struct mtd_info *mtd = nand_to_mtd(chip); 2518 2672 u8 *status; 2519 - unsigned int i; 2520 2673 int ret = 0; 2521 2674 2522 2675 if (check_only) 2523 - return 0; 2676 + return ctrl->check_instr(chip, op); 2524 2677 2525 2678 if (brcmnand_op_is_status(op)) { 2526 2679 status = op->instrs[1].ctx.data.buf.in; ··· 2716 2525 if (op->deassert_wp) 2717 2526 brcmnand_wp(mtd, 0); 2718 2527 2719 - for (i = 0; i < op->ninstrs; i++) { 2720 - ret = brcmnand_exec_instr(host, i, op); 2721 - if (ret) 2722 - break; 2723 - } 2528 + ret = ctrl->exec_instr(chip, op); 2724 2529 2725 2530 if (op->deassert_wp) 2726 2531 brcmnand_wp(mtd, 1); ··· 3328 3141 ret = brcmnand_revision_init(ctrl); 3329 3142 if (ret) 3330 3143 goto err; 3144 + 3145 + /* Only v5.0+ controllers have low level ops support */ 3146 + if (ctrl->nand_version >= 0x0500) { 3147 + ctrl->check_instr = brcmnand_check_instructions; 3148 + ctrl->exec_instr = brcmnand_exec_instructions; 3149 + } else { 3150 + ctrl->check_instr = brcmnand_check_instructions_legacy; 3151 + ctrl->exec_instr = brcmnand_exec_instructions_legacy; 3152 + } 3331 3153 3332 3154 /* 3333 3155 * Most chips have this cache at a fixed offset within 'nand' block.
+4 -9
drivers/mtd/nand/raw/denali_pci.c
··· 68 68 denali->clk_rate = 50000000; /* 50 MHz */ 69 69 denali->clk_x_rate = 200000000; /* 200 MHz */ 70 70 71 - ret = pci_request_regions(dev, DENALI_NAND_NAME); 71 + ret = pcim_request_all_regions(dev, DENALI_NAND_NAME); 72 72 if (ret) { 73 73 dev_err(&dev->dev, "Spectra: Unable to request memory regions\n"); 74 74 return ret; ··· 77 77 denali->reg = devm_ioremap(denali->dev, csr_base, csr_len); 78 78 if (!denali->reg) { 79 79 dev_err(&dev->dev, "Spectra: Unable to remap memory region\n"); 80 - ret = -ENOMEM; 81 - goto regions_release; 80 + return -ENOMEM; 82 81 } 83 82 84 83 denali->host = devm_ioremap(denali->dev, mem_base, mem_len); 85 84 if (!denali->host) { 86 85 dev_err(&dev->dev, "Spectra: ioremap failed!"); 87 - ret = -ENOMEM; 88 - goto regions_release; 86 + return -ENOMEM; 89 87 } 90 88 91 89 ret = denali_init(denali); 92 90 if (ret) 93 - goto regions_release; 91 + return ret; 94 92 95 93 nsels = denali->nbanks; 96 94 ··· 116 118 117 119 out_remove_denali: 118 120 denali_remove(denali); 119 - regions_release: 120 - pci_release_regions(dev); 121 121 return ret; 122 122 } 123 123 ··· 123 127 { 124 128 struct denali_controller *denali = pci_get_drvdata(dev); 125 129 126 - pci_release_regions(dev); 127 130 denali_remove(denali); 128 131 } 129 132
+836
drivers/mtd/nand/raw/loongson1-nand-controller.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * NAND Controller Driver for Loongson-1 SoC 4 + * 5 + * Copyright (C) 2015-2025 Keguang Zhang <keguang.zhang@gmail.com> 6 + */ 7 + 8 + #include <linux/kernel.h> 9 + #include <linux/module.h> 10 + #include <linux/dmaengine.h> 11 + #include <linux/dma-mapping.h> 12 + #include <linux/iopoll.h> 13 + #include <linux/mtd/mtd.h> 14 + #include <linux/mtd/rawnand.h> 15 + #include <linux/of.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/regmap.h> 18 + #include <linux/sizes.h> 19 + 20 + /* Loongson-1 NAND Controller Registers */ 21 + #define LS1X_NAND_CMD 0x0 22 + #define LS1X_NAND_ADDR1 0x4 23 + #define LS1X_NAND_ADDR2 0x8 24 + #define LS1X_NAND_TIMING 0xc 25 + #define LS1X_NAND_IDL 0x10 26 + #define LS1X_NAND_IDH_STATUS 0x14 27 + #define LS1X_NAND_PARAM 0x18 28 + #define LS1X_NAND_OP_NUM 0x1c 29 + 30 + /* NAND Command Register Bits */ 31 + #define LS1X_NAND_CMD_OP_DONE BIT(10) 32 + #define LS1X_NAND_CMD_OP_SPARE BIT(9) 33 + #define LS1X_NAND_CMD_OP_MAIN BIT(8) 34 + #define LS1X_NAND_CMD_STATUS BIT(7) 35 + #define LS1X_NAND_CMD_RESET BIT(6) 36 + #define LS1X_NAND_CMD_READID BIT(5) 37 + #define LS1X_NAND_CMD_BLOCKS_ERASE BIT(4) 38 + #define LS1X_NAND_CMD_ERASE BIT(3) 39 + #define LS1X_NAND_CMD_WRITE BIT(2) 40 + #define LS1X_NAND_CMD_READ BIT(1) 41 + #define LS1X_NAND_CMD_VALID BIT(0) 42 + 43 + #define LS1X_NAND_WAIT_CYCLE_MASK GENMASK(7, 0) 44 + #define LS1X_NAND_HOLD_CYCLE_MASK GENMASK(15, 8) 45 + #define LS1X_NAND_CELL_SIZE_MASK GENMASK(11, 8) 46 + 47 + #define LS1X_NAND_COL_ADDR_CYC 2U 48 + #define LS1X_NAND_MAX_ADDR_CYC 5U 49 + 50 + #define BITS_PER_WORD (4 * BITS_PER_BYTE) 51 + 52 + struct ls1x_nand_host; 53 + 54 + struct ls1x_nand_op { 55 + char addrs[LS1X_NAND_MAX_ADDR_CYC]; 56 + unsigned int naddrs; 57 + unsigned int addrs_offset; 58 + unsigned int aligned_offset; 59 + unsigned int cmd_reg; 60 + unsigned int row_start; 61 + unsigned int rdy_timeout_ms; 62 + unsigned int orig_len; 63 + 
bool is_readid; 64 + bool is_erase; 65 + bool is_write; 66 + bool is_read; 67 + bool is_change_column; 68 + size_t len; 69 + char *buf; 70 + }; 71 + 72 + struct ls1x_nand_data { 73 + unsigned int status_field; 74 + unsigned int op_scope_field; 75 + unsigned int hold_cycle; 76 + unsigned int wait_cycle; 77 + void (*set_addr)(struct ls1x_nand_host *host, struct ls1x_nand_op *op); 78 + }; 79 + 80 + struct ls1x_nand_host { 81 + struct device *dev; 82 + struct nand_chip chip; 83 + struct nand_controller controller; 84 + const struct ls1x_nand_data *data; 85 + void __iomem *reg_base; 86 + struct regmap *regmap; 87 + /* DMA Engine stuff */ 88 + dma_addr_t dma_base; 89 + struct dma_chan *dma_chan; 90 + dma_cookie_t dma_cookie; 91 + struct completion dma_complete; 92 + }; 93 + 94 + static const struct regmap_config ls1x_nand_regmap_config = { 95 + .reg_bits = 32, 96 + .val_bits = 32, 97 + .reg_stride = 4, 98 + }; 99 + 100 + static int ls1x_nand_op_cmd_mapping(struct nand_chip *chip, struct ls1x_nand_op *op, u8 opcode) 101 + { 102 + struct ls1x_nand_host *host = nand_get_controller_data(chip); 103 + 104 + op->row_start = chip->page_shift + 1; 105 + 106 + /* The controller abstracts the following NAND operations. */ 107 + switch (opcode) { 108 + case NAND_CMD_STATUS: 109 + op->cmd_reg = LS1X_NAND_CMD_STATUS; 110 + break; 111 + case NAND_CMD_RESET: 112 + op->cmd_reg = LS1X_NAND_CMD_RESET; 113 + break; 114 + case NAND_CMD_READID: 115 + op->is_readid = true; 116 + op->cmd_reg = LS1X_NAND_CMD_READID; 117 + break; 118 + case NAND_CMD_ERASE1: 119 + op->is_erase = true; 120 + op->addrs_offset = LS1X_NAND_COL_ADDR_CYC; 121 + break; 122 + case NAND_CMD_ERASE2: 123 + if (!op->is_erase) 124 + return -EOPNOTSUPP; 125 + /* During erasing, row_start differs from the default value. 
*/ 126 + op->row_start = chip->page_shift; 127 + op->cmd_reg = LS1X_NAND_CMD_ERASE; 128 + break; 129 + case NAND_CMD_SEQIN: 130 + op->is_write = true; 131 + break; 132 + case NAND_CMD_PAGEPROG: 133 + if (!op->is_write) 134 + return -EOPNOTSUPP; 135 + op->cmd_reg = LS1X_NAND_CMD_WRITE; 136 + break; 137 + case NAND_CMD_READ0: 138 + op->is_read = true; 139 + break; 140 + case NAND_CMD_READSTART: 141 + if (!op->is_read) 142 + return -EOPNOTSUPP; 143 + op->cmd_reg = LS1X_NAND_CMD_READ; 144 + break; 145 + case NAND_CMD_RNDOUT: 146 + op->is_change_column = true; 147 + break; 148 + case NAND_CMD_RNDOUTSTART: 149 + if (!op->is_change_column) 150 + return -EOPNOTSUPP; 151 + op->cmd_reg = LS1X_NAND_CMD_READ; 152 + break; 153 + default: 154 + dev_dbg(host->dev, "unsupported opcode: %u\n", opcode); 155 + return -EOPNOTSUPP; 156 + } 157 + 158 + return 0; 159 + } 160 + 161 + static int ls1x_nand_parse_instructions(struct nand_chip *chip, 162 + const struct nand_subop *subop, struct ls1x_nand_op *op) 163 + { 164 + unsigned int op_id; 165 + int ret; 166 + 167 + for (op_id = 0; op_id < subop->ninstrs; op_id++) { 168 + const struct nand_op_instr *instr = &subop->instrs[op_id]; 169 + unsigned int offset, naddrs; 170 + const u8 *addrs; 171 + 172 + switch (instr->type) { 173 + case NAND_OP_CMD_INSTR: 174 + ret = ls1x_nand_op_cmd_mapping(chip, op, instr->ctx.cmd.opcode); 175 + if (ret < 0) 176 + return ret; 177 + 178 + break; 179 + case NAND_OP_ADDR_INSTR: 180 + naddrs = nand_subop_get_num_addr_cyc(subop, op_id); 181 + if (naddrs > LS1X_NAND_MAX_ADDR_CYC) 182 + return -EOPNOTSUPP; 183 + op->naddrs = naddrs; 184 + offset = nand_subop_get_addr_start_off(subop, op_id); 185 + addrs = &instr->ctx.addr.addrs[offset]; 186 + memcpy(op->addrs + op->addrs_offset, addrs, naddrs); 187 + break; 188 + case NAND_OP_DATA_IN_INSTR: 189 + case NAND_OP_DATA_OUT_INSTR: 190 + offset = nand_subop_get_data_start_off(subop, op_id); 191 + op->orig_len = nand_subop_get_data_len(subop, op_id); 192 + if 
(instr->type == NAND_OP_DATA_IN_INSTR) 193 + op->buf = instr->ctx.data.buf.in + offset; 194 + else if (instr->type == NAND_OP_DATA_OUT_INSTR) 195 + op->buf = (void *)instr->ctx.data.buf.out + offset; 196 + 197 + break; 198 + case NAND_OP_WAITRDY_INSTR: 199 + op->rdy_timeout_ms = instr->ctx.waitrdy.timeout_ms; 200 + break; 201 + default: 202 + break; 203 + } 204 + } 205 + 206 + return 0; 207 + } 208 + 209 + static void ls1b_nand_set_addr(struct ls1x_nand_host *host, struct ls1x_nand_op *op) 210 + { 211 + struct nand_chip *chip = &host->chip; 212 + int i; 213 + 214 + for (i = 0; i < LS1X_NAND_MAX_ADDR_CYC; i++) { 215 + int shift, mask, val; 216 + 217 + if (i < LS1X_NAND_COL_ADDR_CYC) { 218 + shift = i * BITS_PER_BYTE; 219 + mask = (u32)0xff << shift; 220 + mask &= GENMASK(chip->page_shift, 0); 221 + val = (u32)op->addrs[i] << shift; 222 + regmap_update_bits(host->regmap, LS1X_NAND_ADDR1, mask, val); 223 + } else if (!op->is_change_column) { 224 + shift = op->row_start + (i - LS1X_NAND_COL_ADDR_CYC) * BITS_PER_BYTE; 225 + mask = (u32)0xff << shift; 226 + val = (u32)op->addrs[i] << shift; 227 + regmap_update_bits(host->regmap, LS1X_NAND_ADDR1, mask, val); 228 + 229 + if (i == 4) { 230 + mask = (u32)0xff >> (BITS_PER_WORD - shift); 231 + val = (u32)op->addrs[i] >> (BITS_PER_WORD - shift); 232 + regmap_update_bits(host->regmap, LS1X_NAND_ADDR2, mask, val); 233 + } 234 + } 235 + } 236 + } 237 + 238 + static void ls1c_nand_set_addr(struct ls1x_nand_host *host, struct ls1x_nand_op *op) 239 + { 240 + int i; 241 + 242 + for (i = 0; i < LS1X_NAND_MAX_ADDR_CYC; i++) { 243 + int shift, mask, val; 244 + 245 + if (i < LS1X_NAND_COL_ADDR_CYC) { 246 + shift = i * BITS_PER_BYTE; 247 + mask = (u32)0xff << shift; 248 + val = (u32)op->addrs[i] << shift; 249 + regmap_update_bits(host->regmap, LS1X_NAND_ADDR1, mask, val); 250 + } else if (!op->is_change_column) { 251 + shift = (i - LS1X_NAND_COL_ADDR_CYC) * BITS_PER_BYTE; 252 + mask = (u32)0xff << shift; 253 + val = (u32)op->addrs[i] << 
shift; 254 + regmap_update_bits(host->regmap, LS1X_NAND_ADDR2, mask, val); 255 + } 256 + } 257 + } 258 + 259 + static void ls1x_nand_trigger_op(struct ls1x_nand_host *host, struct ls1x_nand_op *op) 260 + { 261 + struct nand_chip *chip = &host->chip; 262 + struct mtd_info *mtd = nand_to_mtd(chip); 263 + int col0 = op->addrs[0]; 264 + short col; 265 + 266 + if (!IS_ALIGNED(col0, chip->buf_align)) { 267 + col0 = ALIGN_DOWN(op->addrs[0], chip->buf_align); 268 + op->aligned_offset = op->addrs[0] - col0; 269 + op->addrs[0] = col0; 270 + } 271 + 272 + if (host->data->set_addr) 273 + host->data->set_addr(host, op); 274 + 275 + /* set operation length */ 276 + if (op->is_write || op->is_read || op->is_change_column) 277 + op->len = ALIGN(op->orig_len + op->aligned_offset, chip->buf_align); 278 + else if (op->is_erase) 279 + op->len = 1; 280 + else 281 + op->len = op->orig_len; 282 + 283 + writel(op->len, host->reg_base + LS1X_NAND_OP_NUM); 284 + 285 + /* set operation area and scope */ 286 + col = op->addrs[1] << BITS_PER_BYTE | op->addrs[0]; 287 + if (op->orig_len && !op->is_readid) { 288 + unsigned int op_scope = 0; 289 + 290 + if (col < mtd->writesize) { 291 + op->cmd_reg |= LS1X_NAND_CMD_OP_MAIN; 292 + op_scope = mtd->writesize; 293 + } 294 + 295 + op->cmd_reg |= LS1X_NAND_CMD_OP_SPARE; 296 + op_scope += mtd->oobsize; 297 + 298 + op_scope <<= __ffs(host->data->op_scope_field); 299 + regmap_update_bits(host->regmap, LS1X_NAND_PARAM, 300 + host->data->op_scope_field, op_scope); 301 + } 302 + 303 + /* set command */ 304 + writel(op->cmd_reg, host->reg_base + LS1X_NAND_CMD); 305 + 306 + /* trigger operation */ 307 + regmap_write_bits(host->regmap, LS1X_NAND_CMD, LS1X_NAND_CMD_VALID, LS1X_NAND_CMD_VALID); 308 + } 309 + 310 + static int ls1x_nand_wait_for_op_done(struct ls1x_nand_host *host, struct ls1x_nand_op *op) 311 + { 312 + unsigned int val; 313 + int ret = 0; 314 + 315 + if (op->rdy_timeout_ms) { 316 + ret = regmap_read_poll_timeout(host->regmap, LS1X_NAND_CMD, 317 + 
val, val & LS1X_NAND_CMD_OP_DONE, 318 + 0, op->rdy_timeout_ms * MSEC_PER_SEC); 319 + if (ret) 320 + dev_err(host->dev, "operation failed\n"); 321 + } 322 + 323 + return ret; 324 + } 325 + 326 + static void ls1x_nand_dma_callback(void *data) 327 + { 328 + struct ls1x_nand_host *host = (struct ls1x_nand_host *)data; 329 + struct dma_chan *chan = host->dma_chan; 330 + struct device *dev = chan->device->dev; 331 + enum dma_status status; 332 + 333 + status = dmaengine_tx_status(chan, host->dma_cookie, NULL); 334 + if (likely(status == DMA_COMPLETE)) { 335 + dev_dbg(dev, "DMA complete with cookie=%d\n", host->dma_cookie); 336 + complete(&host->dma_complete); 337 + } else { 338 + dev_err(dev, "DMA error with cookie=%d\n", host->dma_cookie); 339 + } 340 + } 341 + 342 + static int ls1x_nand_dma_transfer(struct ls1x_nand_host *host, struct ls1x_nand_op *op) 343 + { 344 + struct nand_chip *chip = &host->chip; 345 + struct dma_chan *chan = host->dma_chan; 346 + struct device *dev = chan->device->dev; 347 + struct dma_async_tx_descriptor *desc; 348 + enum dma_data_direction data_dir = op->is_write ? DMA_TO_DEVICE : DMA_FROM_DEVICE; 349 + enum dma_transfer_direction xfer_dir = op->is_write ? 
DMA_MEM_TO_DEV : DMA_DEV_TO_MEM; 350 + void *buf = op->buf; 351 + char *dma_buf = NULL; 352 + dma_addr_t dma_addr; 353 + int ret; 354 + 355 + if (IS_ALIGNED((uintptr_t)buf, chip->buf_align) && 356 + IS_ALIGNED(op->orig_len, chip->buf_align)) { 357 + dma_addr = dma_map_single(dev, buf, op->orig_len, data_dir); 358 + if (dma_mapping_error(dev, dma_addr)) { 359 + dev_err(dev, "failed to map DMA buffer\n"); 360 + return -ENXIO; 361 + } 362 + } else if (!op->is_write) { 363 + dma_buf = dma_alloc_coherent(dev, op->len, &dma_addr, GFP_KERNEL); 364 + if (!dma_buf) 365 + return -ENOMEM; 366 + } else { 367 + dev_err(dev, "subpage writing not supported\n"); 368 + return -EOPNOTSUPP; 369 + } 370 + 371 + desc = dmaengine_prep_slave_single(chan, dma_addr, op->len, xfer_dir, DMA_PREP_INTERRUPT); 372 + if (!desc) { 373 + dev_err(dev, "failed to prepare DMA descriptor\n"); 374 + ret = -ENOMEM; 375 + goto err; 376 + } 377 + desc->callback = ls1x_nand_dma_callback; 378 + desc->callback_param = host; 379 + 380 + host->dma_cookie = dmaengine_submit(desc); 381 + ret = dma_submit_error(host->dma_cookie); 382 + if (ret) { 383 + dev_err(dev, "failed to submit DMA descriptor\n"); 384 + goto err; 385 + } 386 + 387 + dev_dbg(dev, "issue DMA with cookie=%d\n", host->dma_cookie); 388 + dma_async_issue_pending(chan); 389 + 390 + if (!wait_for_completion_timeout(&host->dma_complete, msecs_to_jiffies(1000))) { 391 + dmaengine_terminate_sync(chan); 392 + reinit_completion(&host->dma_complete); 393 + ret = -ETIMEDOUT; 394 + goto err; 395 + } 396 + 397 + if (dma_buf) 398 + memcpy(buf, dma_buf + op->aligned_offset, op->orig_len); 399 + err: 400 + if (dma_buf) 401 + dma_free_coherent(dev, op->len, dma_buf, dma_addr); 402 + else 403 + dma_unmap_single(dev, dma_addr, op->orig_len, data_dir); 404 + 405 + return ret; 406 + } 407 + 408 + static int ls1x_nand_data_type_exec(struct nand_chip *chip, const struct nand_subop *subop) 409 + { 410 + struct ls1x_nand_host *host = nand_get_controller_data(chip); 411 
+ struct ls1x_nand_op op = {}; 412 + int ret; 413 + 414 + ret = ls1x_nand_parse_instructions(chip, subop, &op); 415 + if (ret) 416 + return ret; 417 + 418 + ls1x_nand_trigger_op(host, &op); 419 + 420 + ret = ls1x_nand_dma_transfer(host, &op); 421 + if (ret) 422 + return ret; 423 + 424 + return ls1x_nand_wait_for_op_done(host, &op); 425 + } 426 + 427 + static int ls1x_nand_misc_type_exec(struct nand_chip *chip, 428 + const struct nand_subop *subop, struct ls1x_nand_op *op) 429 + { 430 + struct ls1x_nand_host *host = nand_get_controller_data(chip); 431 + int ret; 432 + 433 + ret = ls1x_nand_parse_instructions(chip, subop, op); 434 + if (ret) 435 + return ret; 436 + 437 + ls1x_nand_trigger_op(host, op); 438 + 439 + return ls1x_nand_wait_for_op_done(host, op); 440 + } 441 + 442 + static int ls1x_nand_zerolen_type_exec(struct nand_chip *chip, const struct nand_subop *subop) 443 + { 444 + struct ls1x_nand_op op = {}; 445 + 446 + return ls1x_nand_misc_type_exec(chip, subop, &op); 447 + } 448 + 449 + static int ls1x_nand_read_id_type_exec(struct nand_chip *chip, const struct nand_subop *subop) 450 + { 451 + struct ls1x_nand_host *host = nand_get_controller_data(chip); 452 + struct ls1x_nand_op op = {}; 453 + int i, ret; 454 + union { 455 + char ids[5]; 456 + struct { 457 + int idl; 458 + char idh; 459 + }; 460 + } nand_id; 461 + 462 + ret = ls1x_nand_misc_type_exec(chip, subop, &op); 463 + if (ret) 464 + return ret; 465 + 466 + nand_id.idl = readl(host->reg_base + LS1X_NAND_IDL); 467 + nand_id.idh = readb(host->reg_base + LS1X_NAND_IDH_STATUS); 468 + 469 + for (i = 0; i < min(sizeof(nand_id.ids), op.orig_len); i++) 470 + op.buf[i] = nand_id.ids[sizeof(nand_id.ids) - 1 - i]; 471 + 472 + return ret; 473 + } 474 + 475 + static int ls1x_nand_read_status_type_exec(struct nand_chip *chip, const struct nand_subop *subop) 476 + { 477 + struct ls1x_nand_host *host = nand_get_controller_data(chip); 478 + struct ls1x_nand_op op = {}; 479 + int val, ret; 480 + 481 + ret = 
ls1x_nand_misc_type_exec(chip, subop, &op); 482 + if (ret) 483 + return ret; 484 + 485 + val = readl(host->reg_base + LS1X_NAND_IDH_STATUS); 486 + val &= ~host->data->status_field; 487 + op.buf[0] = val << ffs(host->data->status_field); 488 + 489 + return ret; 490 + } 491 + 492 + static const struct nand_op_parser ls1x_nand_op_parser = NAND_OP_PARSER( 493 + NAND_OP_PARSER_PATTERN( 494 + ls1x_nand_read_id_type_exec, 495 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 496 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, LS1X_NAND_MAX_ADDR_CYC), 497 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 8)), 498 + NAND_OP_PARSER_PATTERN( 499 + ls1x_nand_read_status_type_exec, 500 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 501 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 1)), 502 + NAND_OP_PARSER_PATTERN( 503 + ls1x_nand_zerolen_type_exec, 504 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 505 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 506 + NAND_OP_PARSER_PATTERN( 507 + ls1x_nand_zerolen_type_exec, 508 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 509 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, LS1X_NAND_MAX_ADDR_CYC), 510 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 511 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(false)), 512 + NAND_OP_PARSER_PATTERN( 513 + ls1x_nand_data_type_exec, 514 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 515 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, LS1X_NAND_MAX_ADDR_CYC), 516 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 517 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true), 518 + NAND_OP_PARSER_PAT_DATA_IN_ELEM(false, 0)), 519 + NAND_OP_PARSER_PATTERN( 520 + ls1x_nand_data_type_exec, 521 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 522 + NAND_OP_PARSER_PAT_ADDR_ELEM(false, LS1X_NAND_MAX_ADDR_CYC), 523 + NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, 0), 524 + NAND_OP_PARSER_PAT_CMD_ELEM(false), 525 + NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)), 526 + ); 527 + 528 + static int ls1x_nand_is_valid_cmd(u8 opcode) 529 + { 530 + if (opcode == NAND_CMD_STATUS || opcode == NAND_CMD_RESET || opcode == NAND_CMD_READID) 531 + return 0; 532 + 533 + return -EOPNOTSUPP; 534 + } 
535 + 536 + static int ls1x_nand_is_valid_cmd_seq(u8 opcode1, u8 opcode2) 537 + { 538 + if (opcode1 == NAND_CMD_RNDOUT && opcode2 == NAND_CMD_RNDOUTSTART) 539 + return 0; 540 + 541 + if (opcode1 == NAND_CMD_READ0 && opcode2 == NAND_CMD_READSTART) 542 + return 0; 543 + 544 + if (opcode1 == NAND_CMD_ERASE1 && opcode2 == NAND_CMD_ERASE2) 545 + return 0; 546 + 547 + if (opcode1 == NAND_CMD_SEQIN && opcode2 == NAND_CMD_PAGEPROG) 548 + return 0; 549 + 550 + return -EOPNOTSUPP; 551 + } 552 + 553 + static int ls1x_nand_check_op(struct nand_chip *chip, const struct nand_operation *op) 554 + { 555 + const struct nand_op_instr *instr1 = NULL, *instr2 = NULL; 556 + int op_id; 557 + 558 + for (op_id = 0; op_id < op->ninstrs; op_id++) { 559 + const struct nand_op_instr *instr = &op->instrs[op_id]; 560 + 561 + if (instr->type == NAND_OP_CMD_INSTR) { 562 + if (!instr1) 563 + instr1 = instr; 564 + else if (!instr2) 565 + instr2 = instr; 566 + else 567 + break; 568 + } 569 + } 570 + 571 + if (!instr1) 572 + return -EOPNOTSUPP; 573 + 574 + if (!instr2) 575 + return ls1x_nand_is_valid_cmd(instr1->ctx.cmd.opcode); 576 + 577 + return ls1x_nand_is_valid_cmd_seq(instr1->ctx.cmd.opcode, instr2->ctx.cmd.opcode); 578 + } 579 + 580 + static int ls1x_nand_exec_op(struct nand_chip *chip, 581 + const struct nand_operation *op, bool check_only) 582 + { 583 + if (check_only) 584 + return ls1x_nand_check_op(chip, op); 585 + 586 + return nand_op_parser_exec_op(chip, &ls1x_nand_op_parser, op, check_only); 587 + } 588 + 589 + static int ls1x_nand_attach_chip(struct nand_chip *chip) 590 + { 591 + struct ls1x_nand_host *host = nand_get_controller_data(chip); 592 + u64 chipsize = nanddev_target_size(&chip->base); 593 + int cell_size = 0; 594 + 595 + switch (chipsize) { 596 + case SZ_128M: 597 + cell_size = 0x0; 598 + break; 599 + case SZ_256M: 600 + cell_size = 0x1; 601 + break; 602 + case SZ_512M: 603 + cell_size = 0x2; 604 + break; 605 + case SZ_1G: 606 + cell_size = 0x3; 607 + break; 608 + case SZ_2G: 
609 + cell_size = 0x4; 610 + break; 611 + case SZ_4G: 612 + cell_size = 0x5; 613 + break; 614 + case SZ_8G: 615 + cell_size = 0x6; 616 + break; 617 + case SZ_16G: 618 + cell_size = 0x7; 619 + break; 620 + default: 621 + dev_err(host->dev, "unsupported chip size: %llu MB\n", chipsize); 622 + return -EINVAL; 623 + } 624 + 625 + switch (chip->ecc.engine_type) { 626 + case NAND_ECC_ENGINE_TYPE_NONE: 627 + break; 628 + case NAND_ECC_ENGINE_TYPE_SOFT: 629 + break; 630 + default: 631 + return -EINVAL; 632 + } 633 + 634 + /* set cell size */ 635 + regmap_update_bits(host->regmap, LS1X_NAND_PARAM, LS1X_NAND_CELL_SIZE_MASK, 636 + FIELD_PREP(LS1X_NAND_CELL_SIZE_MASK, cell_size)); 637 + 638 + regmap_update_bits(host->regmap, LS1X_NAND_TIMING, LS1X_NAND_HOLD_CYCLE_MASK, 639 + FIELD_PREP(LS1X_NAND_HOLD_CYCLE_MASK, host->data->hold_cycle)); 640 + 641 + regmap_update_bits(host->regmap, LS1X_NAND_TIMING, LS1X_NAND_WAIT_CYCLE_MASK, 642 + FIELD_PREP(LS1X_NAND_WAIT_CYCLE_MASK, host->data->wait_cycle)); 643 + 644 + chip->ecc.read_page_raw = nand_monolithic_read_page_raw; 645 + chip->ecc.write_page_raw = nand_monolithic_write_page_raw; 646 + 647 + return 0; 648 + } 649 + 650 + static const struct nand_controller_ops ls1x_nand_controller_ops = { 651 + .exec_op = ls1x_nand_exec_op, 652 + .attach_chip = ls1x_nand_attach_chip, 653 + }; 654 + 655 + static void ls1x_nand_controller_cleanup(struct ls1x_nand_host *host) 656 + { 657 + if (host->dma_chan) 658 + dma_release_channel(host->dma_chan); 659 + } 660 + 661 + static int ls1x_nand_controller_init(struct ls1x_nand_host *host) 662 + { 663 + struct device *dev = host->dev; 664 + struct dma_chan *chan; 665 + struct dma_slave_config cfg = {}; 666 + int ret; 667 + 668 + host->regmap = devm_regmap_init_mmio(dev, host->reg_base, &ls1x_nand_regmap_config); 669 + if (IS_ERR(host->regmap)) 670 + return dev_err_probe(dev, PTR_ERR(host->regmap), "failed to init regmap\n"); 671 + 672 + chan = dma_request_chan(dev, "rxtx"); 673 + if (IS_ERR(chan)) 674 + 
return dev_err_probe(dev, PTR_ERR(chan), "failed to request DMA channel\n"); 675 + host->dma_chan = chan; 676 + 677 + cfg.src_addr = host->dma_base; 678 + cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 679 + cfg.dst_addr = host->dma_base; 680 + cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 681 + ret = dmaengine_slave_config(host->dma_chan, &cfg); 682 + if (ret) 683 + return dev_err_probe(dev, ret, "failed to config DMA channel\n"); 684 + 685 + init_completion(&host->dma_complete); 686 + 687 + return 0; 688 + } 689 + 690 + static int ls1x_nand_chip_init(struct ls1x_nand_host *host) 691 + { 692 + struct device *dev = host->dev; 693 + int nchips = of_get_child_count(dev->of_node); 694 + struct device_node *chip_np; 695 + struct nand_chip *chip = &host->chip; 696 + struct mtd_info *mtd = nand_to_mtd(chip); 697 + int ret; 698 + 699 + if (nchips != 1) 700 + return dev_err_probe(dev, -EINVAL, "Currently one NAND chip supported\n"); 701 + 702 + chip_np = of_get_next_child(dev->of_node, NULL); 703 + if (!chip_np) 704 + return dev_err_probe(dev, -ENODEV, "failed to get child node for NAND chip\n"); 705 + 706 + nand_set_flash_node(chip, chip_np); 707 + of_node_put(chip_np); 708 + if (!mtd->name) 709 + return dev_err_probe(dev, -EINVAL, "Missing MTD label\n"); 710 + 711 + nand_set_controller_data(chip, host); 712 + chip->controller = &host->controller; 713 + chip->options = NAND_NO_SUBPAGE_WRITE | NAND_USES_DMA | NAND_BROKEN_XD; 714 + chip->buf_align = 16; 715 + mtd->dev.parent = dev; 716 + mtd->owner = THIS_MODULE; 717 + 718 + ret = nand_scan(chip, 1); 719 + if (ret) 720 + return dev_err_probe(dev, ret, "failed to scan NAND chip\n"); 721 + 722 + ret = mtd_device_register(mtd, NULL, 0); 723 + if (ret) { 724 + nand_cleanup(chip); 725 + return dev_err_probe(dev, ret, "failed to register MTD device\n"); 726 + } 727 + 728 + return 0; 729 + } 730 + 731 + static int ls1x_nand_probe(struct platform_device *pdev) 732 + { 733 + struct device *dev = &pdev->dev; 734 + const struct 
ls1x_nand_data *data; 735 + struct ls1x_nand_host *host; 736 + struct resource *res; 737 + int ret; 738 + 739 + data = of_device_get_match_data(dev); 740 + if (!data) 741 + return -ENODEV; 742 + 743 + host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL); 744 + if (!host) 745 + return -ENOMEM; 746 + 747 + host->reg_base = devm_platform_ioremap_resource(pdev, 0); 748 + if (IS_ERR(host->reg_base)) 749 + return PTR_ERR(host->reg_base); 750 + 751 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "nand-dma"); 752 + if (!res) 753 + return dev_err_probe(dev, -EINVAL, "Missing 'nand-dma' in reg-names property\n"); 754 + 755 + host->dma_base = dma_map_resource(dev, res->start, resource_size(res), 756 + DMA_BIDIRECTIONAL, 0); 757 + if (dma_mapping_error(dev, host->dma_base)) 758 + return -ENXIO; 759 + 760 + host->dev = dev; 761 + host->data = data; 762 + host->controller.ops = &ls1x_nand_controller_ops; 763 + 764 + nand_controller_init(&host->controller); 765 + 766 + ret = ls1x_nand_controller_init(host); 767 + if (ret) 768 + goto err; 769 + 770 + ret = ls1x_nand_chip_init(host); 771 + if (ret) 772 + goto err; 773 + 774 + platform_set_drvdata(pdev, host); 775 + 776 + return 0; 777 + err: 778 + ls1x_nand_controller_cleanup(host); 779 + 780 + return ret; 781 + } 782 + 783 + static void ls1x_nand_remove(struct platform_device *pdev) 784 + { 785 + struct ls1x_nand_host *host = platform_get_drvdata(pdev); 786 + struct nand_chip *chip = &host->chip; 787 + int ret; 788 + 789 + ret = mtd_device_unregister(nand_to_mtd(chip)); 790 + WARN_ON(ret); 791 + nand_cleanup(chip); 792 + ls1x_nand_controller_cleanup(host); 793 + } 794 + 795 + static const struct ls1x_nand_data ls1b_nand_data = { 796 + .status_field = GENMASK(15, 8), 797 + .hold_cycle = 0x2, 798 + .wait_cycle = 0xc, 799 + .set_addr = ls1b_nand_set_addr, 800 + }; 801 + 802 + static const struct ls1x_nand_data ls1c_nand_data = { 803 + .status_field = GENMASK(23, 16), 804 + .op_scope_field = GENMASK(29, 16), 805 + 
.hold_cycle = 0x2, 806 + .wait_cycle = 0xc, 807 + .set_addr = ls1c_nand_set_addr, 808 + }; 809 + 810 + static const struct of_device_id ls1x_nand_match[] = { 811 + { 812 + .compatible = "loongson,ls1b-nand-controller", 813 + .data = &ls1b_nand_data, 814 + }, 815 + { 816 + .compatible = "loongson,ls1c-nand-controller", 817 + .data = &ls1c_nand_data, 818 + }, 819 + { /* sentinel */ } 820 + }; 821 + MODULE_DEVICE_TABLE(of, ls1x_nand_match); 822 + 823 + static struct platform_driver ls1x_nand_driver = { 824 + .probe = ls1x_nand_probe, 825 + .remove = ls1x_nand_remove, 826 + .driver = { 827 + .name = KBUILD_MODNAME, 828 + .of_match_table = ls1x_nand_match, 829 + }, 830 + }; 831 + 832 + module_platform_driver(ls1x_nand_driver); 833 + 834 + MODULE_AUTHOR("Keguang Zhang <keguang.zhang@gmail.com>"); 835 + MODULE_DESCRIPTION("Loongson-1 NAND Controller Driver"); 836 + MODULE_LICENSE("GPL");
+15 -3
drivers/mtd/nand/raw/qcom_nandc.c
···
 	const struct nand_op_instr *instr = NULL;
 	unsigned int op_id = 0;
 	unsigned int len = 0;
-	int ret;
+	int ret, reg_base;
+
+	reg_base = NAND_READ_LOCATION_0;
+
+	if (nandc->props->qpic_version2)
+		reg_base = NAND_READ_LOCATION_LAST_CW_0;
 
 	ret = qcom_parse_instructions(chip, subop, &q_op);
 	if (ret)
···
 	op_id = q_op.data_instr_idx;
 	len = nand_subop_get_data_len(subop, op_id);
 
-	nandc_set_read_loc(chip, 0, 0, 0, len, 1);
+	if (nandc->props->qpic_version2)
+		nandc_set_read_loc_last(chip, reg_base, 0, len, 1);
+	else
+		nandc_set_read_loc_first(chip, reg_base, 0, len, 1);
 
 	if (!nandc->props->qpic_version2) {
 		qcom_write_reg_dma(nandc, &nandc->regs->vld, NAND_DEV_CMD_VLD, 1, 0);
 		qcom_write_reg_dma(nandc, &nandc->regs->cmd1, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
 	}
 
-	nandc->buf_count = len;
+	nandc->buf_count = 512;
 	memset(nandc->data_buffer, 0xff, nandc->buf_count);
 
 	config_nand_single_cw_page_read(chip, false, 0);
···
 	.supports_bam = false,
 	.use_codeword_fixup = true,
 	.dev_cmd_reg_start = 0x0,
+	.bam_offset = 0x30000,
 };
 
 static const struct qcom_nandc_props ipq4019_nandc_props = {
···
 	.supports_bam = true,
 	.nandc_part_of_qpic = true,
 	.dev_cmd_reg_start = 0x0,
+	.bam_offset = 0x30000,
 };
 
 static const struct qcom_nandc_props ipq8074_nandc_props = {
···
 	.supports_bam = true,
 	.nandc_part_of_qpic = true,
 	.dev_cmd_reg_start = 0x7000,
+	.bam_offset = 0x30000,
 };
 
 static const struct qcom_nandc_props sdx55_nandc_props = {
···
 	.nandc_part_of_qpic = true,
 	.qpic_version2 = true,
 	.dev_cmd_reg_start = 0x7000,
+	.bam_offset = 0x30000,
 };
 
 /*
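As a side note, the version check that picks the read-location register base is easy to exercise on its own. The sketch below is illustrative only: the `qcom_nandc_props` struct is pared down to one field and the register constants are placeholder values, not the driver's real offsets.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder register bases; the real offsets live in the driver. */
#define NAND_READ_LOCATION_0		0x01
#define NAND_READ_LOCATION_LAST_CW_0	0x02

/* Pared-down stand-in for the driver's property struct. */
struct qcom_nandc_props {
	bool qpic_version2;
};

/* Same selection as the hunk above: QPIC v2 controllers program the
 * last-codeword read-location registers, earlier ones the plain set. */
static uint8_t read_loc_base(const struct qcom_nandc_props *props)
{
	uint8_t reg_base = NAND_READ_LOCATION_0;

	if (props->qpic_version2)
		reg_base = NAND_READ_LOCATION_LAST_CW_0;

	return reg_base;
}
```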
drivers/mtd/nand/raw/sunxi_nand.c (+2)

···
 	if (ret)
 		return ret;
 
+	sunxi_nfc_randomizer_config(nand, page, false);
 	sunxi_nfc_randomizer_enable(nand);
 	writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ECC_OP,
 	       nfc->regs + NFC_REG_CMD);
···
 	if (ret)
 		return ret;
 
+	sunxi_nfc_randomizer_config(nand, page, false);
 	sunxi_nfc_randomizer_enable(nand);
 	sunxi_nfc_hw_ecc_set_prot_oob_bytes(nand, oob, 0, bbm, page);
 
drivers/mtd/nand/spi/alliancememory.c (+10 -10)

···
 #define AM_STATUS_ECC_MAX_CORRECTED	(3 << 4)
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int am_get_eccsize(struct mtd_info *mtd)
 {
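The renames in this and the following SPI NAND drivers move to the spi-mem style `x-y-z` suffix, which spells out the bus width of the command, address, and data phases (`S` for single data rate, `D` for DTR/double data rate), so `1S_4S_4S` means a 1-bit command with 4-bit address and data phases. A small host-side parser sketch of that convention, using names of my own choosing:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* One transfer phase: bus width plus single/double data rate. */
struct phase {
	int width;
	bool dtr;
};

/* Parse a suffix such as "1S_4S_4S" or "1S_1D_8D" into its
 * command/address/data phases. Returns 0 on success, -1 on bad input. */
static int parse_variant(const char *s, struct phase out[3])
{
	for (int i = 0; i < 3; i++) {
		if (sscanf(s, "%d", &out[i].width) != 1)
			return -1;
		s += (out[i].width >= 10) ? 2 : 1;	/* skip the digits */
		if (*s != 'S' && *s != 'D')
			return -1;
		out[i].dtr = (*s == 'D');
		s++;
		if (i < 2 && *s++ != '_')
			return -1;
	}
	return 0;
}
```

So the old `QUADIO`/`X4`/`DUALIO`/`X2` shorthands map to `1S_4S_4S`, `1S_1S_4S`, `1S_2S_2S`, and `1S_1S_2S` respectively.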
drivers/mtd/nand/spi/ato.c (+7 -7)

···
 
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 
 static int ato25d1ga_ooblayout_ecc(struct mtd_info *mtd, int section,
drivers/mtd/nand/spi/core.c (+10 -10)

···
 
 static int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val)
 {
-	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(reg,
+	struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(reg,
 						      spinand->scratchbuf);
 	int ret;
 
···
 
 int spinand_write_reg_op(struct spinand_device *spinand, u8 reg, u8 val)
 {
-	struct spi_mem_op op = SPINAND_SET_FEATURE_OP(reg,
+	struct spi_mem_op op = SPINAND_SET_FEATURE_1S_1S_1S_OP(reg,
 						      spinand->scratchbuf);
 
 	*spinand->scratchbuf = val;
···
 
 static int spinand_write_enable_op(struct spinand_device *spinand)
 {
-	struct spi_mem_op op = SPINAND_WR_EN_DIS_OP(true);
+	struct spi_mem_op op = SPINAND_WR_EN_DIS_1S_0_0_OP(true);
 
 	return spi_mem_exec_op(spinand->spimem, &op);
 }
···
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
 	unsigned int row = nanddev_pos_to_row(nand, &req->pos);
-	struct spi_mem_op op = SPINAND_PAGE_READ_OP(row);
+	struct spi_mem_op op = SPINAND_PAGE_READ_1S_1S_0_OP(row);
 
 	return spi_mem_exec_op(spinand->spimem, &op);
 }
···
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
 	unsigned int row = nanddev_pos_to_row(nand, &req->pos);
-	struct spi_mem_op op = SPINAND_PROG_EXEC_OP(row);
+	struct spi_mem_op op = SPINAND_PROG_EXEC_1S_1S_0_OP(row);
 
 	return spi_mem_exec_op(spinand->spimem, &op);
 }
···
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
 	unsigned int row = nanddev_pos_to_row(nand, pos);
-	struct spi_mem_op op = SPINAND_BLK_ERASE_OP(row);
+	struct spi_mem_op op = SPINAND_BLK_ERASE_1S_1S_0_OP(row);
 
 	return spi_mem_exec_op(spinand->spimem, &op);
 }
···
 int spinand_wait(struct spinand_device *spinand, unsigned long initial_delay_us,
 		 unsigned long poll_delay_us, u8 *s)
 {
-	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(REG_STATUS,
-						      spinand->scratchbuf);
+	struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(REG_STATUS,
+							       spinand->scratchbuf);
 	u8 status;
 	int ret;
 
···
 static int spinand_read_id_op(struct spinand_device *spinand, u8 naddr,
 			      u8 ndummy, u8 *buf)
 {
-	struct spi_mem_op op = SPINAND_READID_OP(
+	struct spi_mem_op op = SPINAND_READID_1S_1S_1S_OP(
 		naddr, ndummy, spinand->scratchbuf, SPINAND_MAX_ID_LEN);
 	int ret;
 
···
 
 static int spinand_reset_op(struct spinand_device *spinand)
 {
-	struct spi_mem_op op = SPINAND_RESET_OP;
+	struct spi_mem_op op = SPINAND_RESET_1S_0_0_OP;
 	int ret;
 
 	ret = spi_mem_exec_op(spinand->spimem, &op);
drivers/mtd/nand/spi/esmt.c (+11 -11)

···
 	(CFG_OTP_ENABLE | ESMT_F50L1G41LB_CFG_OTP_PROTECT)
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 /*
  * OOB spare area map (64 bytes)
···
 static int f50l1g41lb_otp_lock(struct spinand_device *spinand, loff_t from,
 			       size_t len)
 {
-	struct spi_mem_op write_op = SPINAND_WR_EN_DIS_OP(true);
-	struct spi_mem_op exec_op = SPINAND_PROG_EXEC_OP(0);
+	struct spi_mem_op write_op = SPINAND_WR_EN_DIS_1S_0_0_OP(true);
+	struct spi_mem_op exec_op = SPINAND_PROG_EXEC_1S_1S_0_OP(0);
 	u8 status;
 	int ret;
 
···
 		     SPINAND_FACT_OTP_INFO(2, 0, &f50l1g41lb_fact_otp_ops)),
 	SPINAND_INFO("F50D1G41LB",
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_ADDR, 0x11, 0x7f,
-				0x7f, 0x7f),
+				0x7f),
 		     NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
 		     NAND_ECCREQ(1, 512),
 		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
drivers/mtd/nand/spi/foresee.c (+8 -8)

···
 #define SPINAND_MFR_FORESEE		0xCD
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int f35sqa002g_ooblayout_ecc(struct mtd_info *mtd, int section,
 				    struct mtd_oob_region *region)
drivers/mtd/nand/spi/gigadevice.c (+30 -30)

···
 #define GD5FXGQ4UXFXXG_STATUS_ECC_UNCOR_ERROR	(7 << 4)
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(read_cache_variants_f,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP_3A(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP_3A(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP_3A(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP_3A(0, 0, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_3A_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_1S_OP(0, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(read_cache_variants_1gq5,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(read_cache_variants_2gq5,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 4, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 2, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int gd5fxgq4xa_ooblayout_ecc(struct mtd_info *mtd, int section,
 				    struct mtd_oob_region *region)
···
 					      u8 status)
 {
 	u8 status2;
-	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQXXEXXG_REG_STATUS2,
+	struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(GD5FXGQXXEXXG_REG_STATUS2,
 						      spinand->scratchbuf);
 	int ret;
 
···
 					u8 status)
 {
 	u8 status2;
-	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(GD5FXGQXXEXXG_REG_STATUS2,
+	struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(GD5FXGQXXEXXG_REG_STATUS2,
 						      spinand->scratchbuf);
 	int ret;
 
drivers/mtd/nand/spi/macronix.c (+10 -10)

···
 };
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int mx35lfxge4ab_ooblayout_ecc(struct mtd_info *mtd, int section,
 				      struct mtd_oob_region *region)
···
 static int macronix_set_read_retry(struct spinand_device *spinand,
 				   unsigned int retry_mode)
 {
-	struct spi_mem_op op = SPINAND_SET_FEATURE_OP(MACRONIX_FEATURE_ADDR_READ_RETRY,
-						      spinand->scratchbuf);
+	struct spi_mem_op op = SPINAND_SET_FEATURE_1S_1S_1S_OP(MACRONIX_FEATURE_ADDR_READ_RETRY,
+							       spinand->scratchbuf);
 
 	*spinand->scratchbuf = retry_mode;
 	return spi_mem_exec_op(spinand->spimem, &op);
drivers/mtd/nand/spi/micron.c (+19 -19)

···
 	(CFG_OTP_ENABLE | MICRON_MT29F2G01ABAGD_CFG_OTP_STATE)
 
 static SPINAND_OP_VARIANTS(quadio_read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(x4_write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(x4_update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 /* Micron MT29F2G01AAAED Device */
 static SPINAND_OP_VARIANTS(x4_read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(x1_write_cache_variants,
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(x1_update_cache_variants,
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int micron_8_ooblayout_ecc(struct mtd_info *mtd, int section,
 				  struct mtd_oob_region *region)
···
 static int micron_select_target(struct spinand_device *spinand,
 				unsigned int target)
 {
-	struct spi_mem_op op = SPINAND_SET_FEATURE_OP(MICRON_DIE_SELECT_REG,
+	struct spi_mem_op op = SPINAND_SET_FEATURE_1S_1S_1S_OP(MICRON_DIE_SELECT_REG,
 						      spinand->scratchbuf);
 
 	if (target > 1)
···
 static int mt29f2g01abagd_otp_lock(struct spinand_device *spinand, loff_t from,
 				   size_t len)
 {
-	struct spi_mem_op write_op = SPINAND_WR_EN_DIS_OP(true);
-	struct spi_mem_op exec_op = SPINAND_PROG_EXEC_OP(0);
+	struct spi_mem_op write_op = SPINAND_WR_EN_DIS_1S_0_0_OP(true);
+	struct spi_mem_op exec_op = SPINAND_PROG_EXEC_1S_1S_0_OP(0);
 	u8 status;
 	int ret;
 
drivers/mtd/nand/spi/paragon.c (+10 -10)

···
 
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 
 static int pn26g0xa_ooblayout_ecc(struct mtd_info *mtd, int section,
drivers/mtd/nand/spi/skyhigh.c (+10 -10)

···
 #define SKYHIGH_CONFIG_PROTECT_EN		BIT(1)
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 4, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 2, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int skyhigh_spinand_ooblayout_ecc(struct mtd_info *mtd, int section,
 					 struct mtd_oob_region *region)
drivers/mtd/nand/spi/toshiba.c (+11 -11)

···
 #define TOSH_STATUS_ECC_HAS_BITFLIPS_T	(3 << 4)
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_x4_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_x4_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 /*
  * Backward compatibility for 1st generation Serial NAND devices
  * which don't support Quad Program Load operation.
  */
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int tx58cxgxsxraix_ooblayout_ecc(struct mtd_info *mtd, int section,
 					struct mtd_oob_region *region)
···
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
 	u8 mbf = 0;
-	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, spinand->scratchbuf);
+	struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(0x30, spinand->scratchbuf);
 
 	switch (status & STATUS_ECC_MASK) {
 	case STATUS_ECC_NO_BITFLIPS:
drivers/mtd/nand/spi/winbond.c (+103 -25)

···
  * "X4" in the core is equivalent to "quad output" in the datasheets.
  */
 
-static SPINAND_OP_VARIANTS(read_cache_dtr_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_DTR_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_DTR_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_DTR_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_DTR_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DTR_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0, 54 * HZ_PER_MHZ));
+static SPINAND_OP_VARIANTS(read_cache_octal_variants,
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_8D_OP(0, 2, NULL, 0, 105 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 16, NULL, 0, 86 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_8S_OP(0, 1, NULL, 0, 133 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
+
+static SPINAND_OP_VARIANTS(write_cache_octal_variants,
+		SPINAND_PROG_LOAD_1S_8S_8S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_8S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
+
+static SPINAND_OP_VARIANTS(update_cache_octal_variants,
+		SPINAND_PROG_LOAD_1S_8S_8S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
+
+static SPINAND_OP_VARIANTS(read_cache_dual_quad_dtr_variants,
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0, 54 * HZ_PER_MHZ));
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int w25m02gv_ooblayout_ecc(struct mtd_info *mtd, int section,
 				  struct mtd_oob_region *region)
···
 	.free = w25n02kv_ooblayout_free,
 };
 
+static int w35n01jw_ooblayout_ecc(struct mtd_info *mtd, int section,
+				  struct mtd_oob_region *region)
+{
+	if (section > 7)
+		return -ERANGE;
+
+	region->offset = (16 * section) + 12;
+	region->length = 4;
+
+	return 0;
+}
+
+static int w35n01jw_ooblayout_free(struct mtd_info *mtd, int section,
+				   struct mtd_oob_region *region)
+{
+	if (section > 7)
+		return -ERANGE;
+
+	region->offset = 16 * section;
+	region->length = 12;
+
+	/* Extract BBM */
+	if (!section) {
+		region->offset += 2;
+		region->length -= 2;
+	}
+
+	return 0;
+}
+
+static const struct mtd_ooblayout_ops w35n01jw_ooblayout = {
+	.ecc = w35n01jw_ooblayout_ecc,
+	.free = w35n01jw_ooblayout_free,
+};
+
 static int w25n02kv_ecc_get_status(struct spinand_device *spinand,
 				   u8 status)
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
 	u8 mbf = 0;
-	struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, spinand->scratchbuf);
+	struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(0x30, spinand->scratchbuf);
 
 	switch (status & STATUS_ECC_MASK) {
 	case STATUS_ECC_NO_BITFLIPS:
···
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xbc, 0x21),
 		     NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
 		     NAND_ECCREQ(1, 512),
-		     SPINAND_INFO_OP_VARIANTS(&read_cache_dtr_variants,
+		     SPINAND_INFO_OP_VARIANTS(&read_cache_dual_quad_dtr_variants,
 					      &write_cache_variants,
 					      &update_cache_variants),
 		     0,
···
 					      &update_cache_variants),
 		     0,
 		     SPINAND_ECCINFO(&w25n01kv_ooblayout, w25n02kv_ecc_get_status)),
+	SPINAND_INFO("W35N01JW", /* 1.8V */
+		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdc, 0x21),
+		     NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 1, 1),
+		     NAND_ECCREQ(1, 512),
+		     SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants,
+					      &write_cache_octal_variants,
+					      &update_cache_octal_variants),
+		     0,
+		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
+	SPINAND_INFO("W35N02JW", /* 1.8V */
+		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x22),
+		     NAND_MEMORG(1, 4096, 128, 64, 512, 10, 2, 1, 1),
+		     NAND_ECCREQ(1, 512),
+		     SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants,
+					      &write_cache_octal_variants,
+					      &update_cache_octal_variants),
+		     0,
+		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
+	SPINAND_INFO("W35N04JW", /* 1.8V */
+		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x23),
+		     NAND_MEMORG(1, 4096, 128, 64, 512, 10, 4, 1, 1),
+		     NAND_ECCREQ(1, 512),
+		     SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants,
+					      &write_cache_octal_variants,
+					      &update_cache_octal_variants),
+		     0,
+		     SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
 	/* 2G-bit densities */
 	SPINAND_INFO("W25M02GV", /* 2x1G-bit 3.3V */
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xab, 0x21),
···
 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xbf, 0x22),
 		     NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 2, 1),
 		     NAND_ECCREQ(1, 512),
-		     SPINAND_INFO_OP_VARIANTS(&read_cache_dtr_variants,
+		     SPINAND_INFO_OP_VARIANTS(&read_cache_dual_quad_dtr_variants,
 					      &write_cache_variants,
 					      &update_cache_variants),
 		     0,
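The OOB layout callbacks added for the W35N parts are plain arithmetic over the 128-byte spare area (8 sections of 16 bytes, 12 free plus 4 ECC bytes each, with the first two bytes of section 0 reserved for the bad-block marker), so they can be checked outside the kernel. Below is a host-side mirror under assumed simplifications (no `struct mtd_info` argument), not the driver code itself:

```c
#include <assert.h>
#include <errno.h>

/* Minimal stand-in for the kernel's struct of the same name. */
struct mtd_oob_region {
	int offset;
	int length;
};

/* ECC bytes: the last 4 bytes of each 16-byte section. */
static int w35n01jw_ecc(int section, struct mtd_oob_region *r)
{
	if (section > 7)
		return -ERANGE;
	r->offset = (16 * section) + 12;
	r->length = 4;
	return 0;
}

/* Free bytes: the first 12 bytes of each section, minus the two
 * bad-block-marker bytes at the very start of the spare area. */
static int w35n01jw_free(int section, struct mtd_oob_region *r)
{
	if (section > 7)
		return -ERANGE;
	r->offset = 16 * section;
	r->length = 12;
	if (!section) {
		r->offset += 2;
		r->length -= 2;
	}
	return 0;
}
```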
drivers/mtd/nand/spi/xtx.c (+10 -10)

···
 #define XT26XXXD_STATUS_ECC_UNCOR_ERROR		(2)
 
 static SPINAND_OP_VARIANTS(read_cache_variants,
-		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(0, 1, NULL, 0),
-		SPINAND_PAGE_READ_FROM_CACHE_OP(0, 1, NULL, 0));
+		SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
+		SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
 
 static SPINAND_OP_VARIANTS(write_cache_variants,
-		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
-		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(true, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(true, 0, NULL, 0));
 
 static SPINAND_OP_VARIANTS(update_cache_variants,
-		SPINAND_PROG_LOAD_X4(false, 0, NULL, 0),
-		SPINAND_PROG_LOAD(false, 0, NULL, 0));
+		SPINAND_PROG_LOAD_1S_1S_4S_OP(false, 0, NULL, 0),
+		SPINAND_PROG_LOAD_1S_1S_1S_OP(false, 0, NULL, 0));
 
 static int xt26g0xa_ooblayout_ecc(struct mtd_info *mtd, int section,
 				  struct mtd_oob_region *region)
+39 -34
drivers/mtd/spi-nor/macronix.c
···
 		return 0;
 }
 
+static int
+mx25l3255e_late_init_fixups(struct spi_nor *nor)
+{
+	struct spi_nor_flash_parameter *params = nor->params;
+
+	/*
+	 * SFDP of MX25L3255E is JESD216, which does not include the Quad
+	 * Enable bit Requirement in BFPT. As a result, during BFPT parsing,
+	 * the quad_enable method is not set to spi_nor_sr1_bit6_quad_enable.
+	 * Therefore, it is necessary to correct this setting by late_init.
+	 */
+	params->quad_enable = spi_nor_sr1_bit6_quad_enable;
+
+	/*
+	 * In addition, MX25L3255E also supports 1-4-4 page program in 3-byte
+	 * address mode. However, since the 3-byte address 1-4-4 page program
+	 * is not defined in SFDP, it needs to be configured in late_init.
+	 */
+	params->hwcaps.mask |= SNOR_HWCAPS_PP_1_4_4;
+	spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP_1_4_4],
+				SPINOR_OP_PP_1_4_4, SNOR_PROTO_1_4_4);
+
+	return 0;
+}
+
 static const struct spi_nor_fixups mx25l25635_fixups = {
 	.post_bfpt = mx25l25635_post_bfpt_fixups,
 	.post_sfdp = macronix_qpp4b_post_sfdp_fixups,
···
 
 static const struct spi_nor_fixups macronix_qpp4b_fixups = {
 	.post_sfdp = macronix_qpp4b_post_sfdp_fixups,
+};
+
+static const struct spi_nor_fixups mx25l3255e_fixups = {
+	.late_init = mx25l3255e_late_init_fixups,
 };
 
 static const struct flash_info macronix_nor_parts[] = {
···
 		.name = "mx25l8005",
 		.size = SZ_1M,
 	}, {
+		/* MX25L1606E */
 		.id = SNOR_ID(0xc2, 0x20, 0x15),
-		.name = "mx25l1606e",
-		.size = SZ_2M,
-		.no_sfdp_flags = SECT_4K,
 	}, {
 		.id = SNOR_ID(0xc2, 0x20, 0x16),
 		.name = "mx25l3205d",
···
 		.size = SZ_8M,
 		.no_sfdp_flags = SECT_4K,
 	}, {
+		/* MX25L12805D */
 		.id = SNOR_ID(0xc2, 0x20, 0x18),
-		.name = "mx25l12805d",
-		.size = SZ_16M,
 		.flags = SPI_NOR_HAS_LOCK | SPI_NOR_4BIT_BP,
-		.no_sfdp_flags = SECT_4K,
 	}, {
+		/* MX25L25635E, MX25L25645G */
 		.id = SNOR_ID(0xc2, 0x20, 0x19),
-		.name = "mx25l25635e",
-		.size = SZ_32M,
-		.no_sfdp_flags = SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
 		.fixups = &mx25l25635_fixups
 	}, {
+		/* MX66L51235F */
 		.id = SNOR_ID(0xc2, 0x20, 0x1a),
-		.name = "mx66l51235f",
-		.size = SZ_64M,
-		.no_sfdp_flags = SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
 		.fixup_flags = SPI_NOR_4B_OPCODES,
 		.fixups = &macronix_qpp4b_fixups,
 	}, {
+		/* MX66L1G45G */
 		.id = SNOR_ID(0xc2, 0x20, 0x1b),
-		.name = "mx66l1g45g",
-		.size = SZ_128M,
-		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
 		.fixups = &macronix_qpp4b_fixups,
 	}, {
 		/* MX66L2G45G */
···
 		.size = SZ_16M,
 		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
 	}, {
+		/* MX25U51245G */
 		.id = SNOR_ID(0xc2, 0x25, 0x3a),
-		.name = "mx25u51245g",
-		.size = SZ_64M,
-		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
-		.fixup_flags = SPI_NOR_4B_OPCODES,
-		.fixups = &macronix_qpp4b_fixups,
-	}, {
-		.id = SNOR_ID(0xc2, 0x25, 0x3a),
-		.name = "mx66u51235f",
-		.size = SZ_64M,
-		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
-		.fixup_flags = SPI_NOR_4B_OPCODES,
 		.fixups = &macronix_qpp4b_fixups,
 	}, {
 		/* MX66U1G45G */
 		.id = SNOR_ID(0xc2, 0x25, 0x3b),
 		.fixups = &macronix_qpp4b_fixups,
 	}, {
+		/* MX66U2G45G */
 		.id = SNOR_ID(0xc2, 0x25, 0x3c),
-		.name = "mx66u2g45g",
-		.size = SZ_256M,
-		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
-		.fixup_flags = SPI_NOR_4B_OPCODES,
 		.fixups = &macronix_qpp4b_fixups,
 	}, {
 		.id = SNOR_ID(0xc2, 0x26, 0x18),
···
 		.size = SZ_4M,
 		.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
 	}, {
+		/* MX25UW51245G */
 		.id = SNOR_ID(0xc2, 0x81, 0x3a),
-		.name = "mx25uw51245g",
 		.n_banks = 4,
 		.flags = SPI_NOR_RWW,
 	}, {
+		/* MX25L3255E */
 		.id = SNOR_ID(0xc2, 0x9e, 0x16),
-		.name = "mx25l3255e",
-		.size = SZ_4M,
-		.no_sfdp_flags = SECT_4K,
+		.fixups = &mx25l3255e_fixups,
 	},
 	/*
	 * This spares us of adding new flash entries for flashes that can be
+1
drivers/spi/spi-qpic-snand.c
···
 
 static const struct qcom_nandc_props ipq9574_snandc_props = {
 	.dev_cmd_reg_start = 0x7000,
+	.bam_offset = 0x30000,
 	.supports_bam = true,
 };
 
+1 -3
include/linux/mtd/nand-qpic-common.h
···
  */
 #define dev_cmd_reg_addr(nandc, reg) ((nandc)->props->dev_cmd_reg_start + (reg))
 
-/* Returns the NAND register physical address */
-#define nandc_reg_phys(chip, offset) ((chip)->base_phys + (offset))
-
 /* Returns the dma address for reg read buffer */
 #define reg_buf_dma_addr(chip, vaddr) \
	((chip)->reg_read_dma + \
···
 struct qcom_nandc_props {
	u32 ecc_modes;
	u32 dev_cmd_reg_start;
+	u32 bam_offset;
	bool supports_bam;
	bool nandc_part_of_qpic;
	bool qpic_version2;
+1 -1
include/linux/mtd/partitions.h
···
			      deregister_mtd_parser)
 
 int mtd_add_partition(struct mtd_info *master, const char *name,
-		      long long offset, long long length);
+		      long long offset, long long length, struct mtd_info **part);
 int mtd_del_partition(struct mtd_info *master, int partno);
 uint64_t mtd_get_device_size(const struct mtd_info *mtd);
 
+77 -44
include/linux/mtd/spinand.h
···
  * Standard SPI NAND flash operations
  */
 
-#define SPINAND_RESET_OP \
+#define SPINAND_RESET_1S_0_0_OP \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0xff, 1), \
		   SPI_MEM_OP_NO_ADDR, \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_NO_DATA)
 
-#define SPINAND_WR_EN_DIS_OP(enable) \
+#define SPINAND_WR_EN_DIS_1S_0_0_OP(enable) \
	SPI_MEM_OP(SPI_MEM_OP_CMD((enable) ? 0x06 : 0x04, 1), \
		   SPI_MEM_OP_NO_ADDR, \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_NO_DATA)
 
-#define SPINAND_READID_OP(naddr, ndummy, buf, len) \
+#define SPINAND_READID_1S_1S_1S_OP(naddr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x9f, 1), \
		   SPI_MEM_OP_ADDR(naddr, 0, 1), \
		   SPI_MEM_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_OP_DATA_IN(len, buf, 1))
 
-#define SPINAND_SET_FEATURE_OP(reg, valptr) \
+#define SPINAND_SET_FEATURE_1S_1S_1S_OP(reg, valptr) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x1f, 1), \
		   SPI_MEM_OP_ADDR(1, reg, 1), \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_DATA_OUT(1, valptr, 1))
 
-#define SPINAND_GET_FEATURE_OP(reg, valptr) \
+#define SPINAND_GET_FEATURE_1S_1S_1S_OP(reg, valptr) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0f, 1), \
		   SPI_MEM_OP_ADDR(1, reg, 1), \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_DATA_IN(1, valptr, 1))
 
-#define SPINAND_BLK_ERASE_OP(addr) \
+#define SPINAND_BLK_ERASE_1S_1S_0_OP(addr) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0xd8, 1), \
		   SPI_MEM_OP_ADDR(3, addr, 1), \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_NO_DATA)
 
-#define SPINAND_PAGE_READ_OP(addr) \
+#define SPINAND_PAGE_READ_1S_1S_0_OP(addr) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x13, 1), \
		   SPI_MEM_OP_ADDR(3, addr, 1), \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_NO_DATA)
 
-#define SPINAND_PAGE_READ_FROM_CACHE_OP(addr, ndummy, buf, len, ...) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(addr, ndummy, buf, len, ...) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x03, 1), \
		   SPI_MEM_OP_ADDR(2, addr, 1), \
		   SPI_MEM_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_OP_DATA_IN(len, buf, 1), \
		   SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_FAST_OP(addr, ndummy, buf, len) \
-	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1), \
+#define SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(addr, ndummy, buf, len) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1), \
		   SPI_MEM_OP_ADDR(2, addr, 1), \
		   SPI_MEM_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_OP_DATA_IN(len, buf, 1))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_OP_3A(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_1S_OP(addr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x03, 1), \
		   SPI_MEM_OP_ADDR(3, addr, 1), \
		   SPI_MEM_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_OP_DATA_IN(len, buf, 1))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_FAST_OP_3A(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_FAST_3A_1S_1S_1S_OP(addr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1), \
		   SPI_MEM_OP_ADDR(3, addr, 1), \
		   SPI_MEM_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_OP_DATA_IN(len, buf, 1))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_DTR_OP(addr, ndummy, buf, len, freq) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(addr, ndummy, buf, len, freq) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x0d, 1), \
		   SPI_MEM_DTR_OP_ADDR(2, addr, 1), \
		   SPI_MEM_DTR_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 1), \
		   SPI_MEM_OP_MAX_FREQ(freq))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_X2_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(addr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3b, 1), \
		   SPI_MEM_OP_ADDR(2, addr, 1), \
		   SPI_MEM_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_OP_DATA_IN(len, buf, 2))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_X2_OP_3A(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_2S_OP(addr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3b, 1), \
		   SPI_MEM_OP_ADDR(3, addr, 1), \
		   SPI_MEM_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_OP_DATA_IN(len, buf, 2))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_X2_DTR_OP(addr, ndummy, buf, len, freq) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(addr, ndummy, buf, len, freq) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x3d, 1), \
		   SPI_MEM_DTR_OP_ADDR(2, addr, 1), \
		   SPI_MEM_DTR_OP_DUMMY(ndummy, 1), \
		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 2), \
		   SPI_MEM_OP_MAX_FREQ(freq))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_X4_OP(addr, ndummy, buf, len) \
-	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6b, 1), \
-		   SPI_MEM_OP_ADDR(2, addr, 1), \
-		   SPI_MEM_OP_DUMMY(ndummy, 1), \
-		   SPI_MEM_OP_DATA_IN(len, buf, 4))
-
-#define SPINAND_PAGE_READ_FROM_CACHE_X4_OP_3A(addr, ndummy, buf, len) \
-	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6b, 1), \
-		   SPI_MEM_OP_ADDR(3, addr, 1), \
-		   SPI_MEM_OP_DUMMY(ndummy, 1), \
-		   SPI_MEM_OP_DATA_IN(len, buf, 4))
-
-#define SPINAND_PAGE_READ_FROM_CACHE_X4_DTR_OP(addr, ndummy, buf, len, freq) \
-	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6d, 1), \
-		   SPI_MEM_DTR_OP_ADDR(2, addr, 1), \
-		   SPI_MEM_DTR_OP_DUMMY(ndummy, 1), \
-		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 4), \
-		   SPI_MEM_OP_MAX_FREQ(freq))
-
-#define SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(addr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1), \
		   SPI_MEM_OP_ADDR(2, addr, 2), \
		   SPI_MEM_OP_DUMMY(ndummy, 2), \
		   SPI_MEM_OP_DATA_IN(len, buf, 2))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP_3A(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_2S_2S_OP(addr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1), \
		   SPI_MEM_OP_ADDR(3, addr, 2), \
		   SPI_MEM_OP_DUMMY(ndummy, 2), \
		   SPI_MEM_OP_DATA_IN(len, buf, 2))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_DUALIO_DTR_OP(addr, ndummy, buf, len, freq) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(addr, ndummy, buf, len, freq) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0xbd, 1), \
		   SPI_MEM_DTR_OP_ADDR(2, addr, 2), \
		   SPI_MEM_DTR_OP_DUMMY(ndummy, 2), \
		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 2), \
		   SPI_MEM_OP_MAX_FREQ(freq))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(addr, ndummy, buf, len) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6b, 1), \
+		   SPI_MEM_OP_ADDR(2, addr, 1), \
+		   SPI_MEM_OP_DUMMY(ndummy, 1), \
+		   SPI_MEM_OP_DATA_IN(len, buf, 4))
+
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_1S_4S_OP(addr, ndummy, buf, len) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6b, 1), \
+		   SPI_MEM_OP_ADDR(3, addr, 1), \
+		   SPI_MEM_OP_DUMMY(ndummy, 1), \
+		   SPI_MEM_OP_DATA_IN(len, buf, 4))
+
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(addr, ndummy, buf, len, freq) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(0x6d, 1), \
+		   SPI_MEM_DTR_OP_ADDR(2, addr, 1), \
+		   SPI_MEM_DTR_OP_DUMMY(ndummy, 1), \
+		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 4), \
+		   SPI_MEM_OP_MAX_FREQ(freq))
+
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(addr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1), \
		   SPI_MEM_OP_ADDR(2, addr, 4), \
		   SPI_MEM_OP_DUMMY(ndummy, 4), \
		   SPI_MEM_OP_DATA_IN(len, buf, 4))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP_3A(addr, ndummy, buf, len) \
+#define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_4S_4S_OP(addr, ndummy, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1), \
		   SPI_MEM_OP_ADDR(3, addr, 4), \
		   SPI_MEM_OP_DUMMY(ndummy, 4), \
		   SPI_MEM_OP_DATA_IN(len, buf, 4))
 
-#define SPINAND_PAGE_READ_FROM_CACHE_QUADIO_DTR_OP(addr, ndummy, buf, len, freq) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(addr, ndummy, buf, len, freq) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0xed, 1), \
		   SPI_MEM_DTR_OP_ADDR(2, addr, 4), \
		   SPI_MEM_DTR_OP_DUMMY(ndummy, 4), \
		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 4), \
		   SPI_MEM_OP_MAX_FREQ(freq))
 
-#define SPINAND_PROG_EXEC_OP(addr) \
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1S_8S_OP(addr, ndummy, buf, len, freq) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(0x8b, 1), \
+		   SPI_MEM_OP_ADDR(2, addr, 1), \
+		   SPI_MEM_OP_DUMMY(ndummy, 1), \
+		   SPI_MEM_OP_DATA_IN(len, buf, 8), \
+		   SPI_MEM_OP_MAX_FREQ(freq))
+
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(addr, ndummy, buf, len, freq) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(0xcb, 1), \
+		   SPI_MEM_OP_ADDR(2, addr, 8), \
+		   SPI_MEM_OP_DUMMY(ndummy, 8), \
+		   SPI_MEM_OP_DATA_IN(len, buf, 8), \
+		   SPI_MEM_OP_MAX_FREQ(freq))
+
+#define SPINAND_PAGE_READ_FROM_CACHE_1S_1D_8D_OP(addr, ndummy, buf, len, freq) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(0x9d, 1), \
+		   SPI_MEM_DTR_OP_ADDR(2, addr, 1), \
+		   SPI_MEM_DTR_OP_DUMMY(ndummy, 1), \
+		   SPI_MEM_DTR_OP_DATA_IN(len, buf, 8), \
+		   SPI_MEM_OP_MAX_FREQ(freq))
+
+#define SPINAND_PROG_EXEC_1S_1S_0_OP(addr) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(0x10, 1), \
		   SPI_MEM_OP_ADDR(3, addr, 1), \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_NO_DATA)
 
-#define SPINAND_PROG_LOAD(reset, addr, buf, len) \
+#define SPINAND_PROG_LOAD_1S_1S_1S_OP(reset, addr, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(reset ? 0x02 : 0x84, 1), \
		   SPI_MEM_OP_ADDR(2, addr, 1), \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_DATA_OUT(len, buf, 1))
 
-#define SPINAND_PROG_LOAD_X4(reset, addr, buf, len) \
+#define SPINAND_PROG_LOAD_1S_1S_4S_OP(reset, addr, buf, len) \
	SPI_MEM_OP(SPI_MEM_OP_CMD(reset ? 0x32 : 0x34, 1), \
		   SPI_MEM_OP_ADDR(2, addr, 1), \
		   SPI_MEM_OP_NO_DUMMY, \
		   SPI_MEM_OP_DATA_OUT(len, buf, 4))
+
+#define SPINAND_PROG_LOAD_1S_1S_8S_OP(addr, buf, len) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(0x82, 1), \
+		   SPI_MEM_OP_ADDR(2, addr, 1), \
+		   SPI_MEM_OP_NO_DUMMY, \
+		   SPI_MEM_OP_DATA_OUT(len, buf, 8))
+
+#define SPINAND_PROG_LOAD_1S_8S_8S_OP(reset, addr, buf, len) \
+	SPI_MEM_OP(SPI_MEM_OP_CMD(reset ? 0xc2 : 0xc4, 1), \
+		   SPI_MEM_OP_ADDR(2, addr, 8), \
+		   SPI_MEM_OP_NO_DUMMY, \
+		   SPI_MEM_OP_DATA_OUT(len, buf, 8))
 
 /**
  * Standard SPI NAND flash commands