Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mtd/for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Richard Weinberger:
"MTD core changes:
- add debugfs nodes for querying the flash name and id
- mtd parser reorganization

SPI NOR core changes:
- always use a bounce buffer for register reads and writes
- move the m25p80 driver code into spi-nor.c
- rework hwcaps selection for the spi-mem case
- rework the core in order to move the manufacturer specific code out
of it:
- regroup flash parameters in 'struct spi_nor_flash_parameter'
- add default_init() and post_sfdp() hooks to tweak the flash
parameters
- introduce the ->set_4byte(), ->convert_addr() and ->setup()
methods, to deal with manufacturer specific code
- rework the SPI NOR lock/unlock logic
- fix an error code in spi_nor_read_raw()
- fix a memory leak bug
- enable the debugfs for the partname and partid
- add support for a few flashes

SPI NOR controller drivers changes:
- intel-spi:
- Whitelist 4B read commands
- Add support for Intel Tiger Lake SPI serial flash
- aspeed-smc: Add of_node_put()
- hisi-sfc: Add of_node_put()
- cadence-quadspi: Fix QSPI RCU Schedule Stall

NAND core:
- Fix typos
- Add missing of_node_put() calls in various drivers

Raw NAND controller drivers:
- Macronix: new controller driver
- Omap2: fix the number of bitflips returned
- Brcmnand: fix a pointer not iterating over all the page chunks
- W90x900: driver removed
- Onenand: fix a memory leak
- Sharpsl: add a missing include guard
- STM32: avoid warnings when building with W=1
- Ingenic: fix a coccinelle warning
- r852: call a helper to simplify the code

CFI core:
- Kill useless initializer in mtd_do_chip_probe()
- Fix a rare write failure seen on some cfi_cmdset_0002 compliant
Parallel NORs
- Bunch of cleanups for cfi_cmdset_0002 driver's write functions by
Tokunori Ikegami"

* tag 'mtd/for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (77 commits)
mtd: pmc551: Remove set but not used variable 'soff_lo'
mtd: cfi_cmdset_0002: Fix do_erase_chip() to get chip as erasing mode
mtd: sm_ftl: Fix memory leak in sm_init_zone() error path
mtd: parsers: Move CMDLINE parser
mtd: parsers: Move OF parser
mtd: parsers: Move BCM63xx parser
mtd: parsers: Move BCM47xx parser
mtd: parsers: Move TI AR7 parser
mtd: pismo: Simplify getting the adapter of a client
mtd: phram: Module parameters add writable permissions
mtd: pxa2xx: Use ioremap_cache insted of ioremap_cached
mtd: spi-nor: Rename "n25q512a" to "mt25qu512a (n25q512a)"
mtd: spi-nor: Add support for mt35xu02g
mtd: rawnand: omap2: Fix number of bitflips reporting with ELM
mtd: rawnand: brcmnand: Fix ecc chunk calculation for erased page bitfips
mtd: spi-nor: remove superfluous pass of nor->info->sector_size
mtd: spi-nor: enable the debugfs for the partname and partid
mtd: mtdcore: add debugfs nodes for querying the flash name and id
mtd: spi-nor: hisi-sfc: Add of_node_put() before break
mtd: spi-nor: aspeed-smc: Add of_node_put()
...

+2590 -1560
+36
Documentation/devicetree/bindings/mtd/mxic-nand.txt
···
+ Macronix Raw NAND Controller Device Tree Bindings
+ -------------------------------------------------
+
+ Required properties:
+ - compatible: should be "mxic,multi-itfc-v009-nand-controller"
+ - reg: should contain 1 entry for the registers
+ - #address-cells: should be set to 1
+ - #size-cells: should be set to 0
+ - interrupts: interrupt line connected to this raw NAND controller
+ - clock-names: should contain "ps", "send" and "send_dly"
+ - clocks: should contain 3 phandles for the "ps", "send" and
+   "send_dly" clocks
+
+ Children nodes:
+ - children nodes represent the available NAND chips.
+
+ See Documentation/devicetree/bindings/mtd/nand-controller.yaml
+ for more details on generic bindings.
+
+ Example:
+
+ nand: nand-controller@43c30000 {
+ 	compatible = "mxic,multi-itfc-v009-nand-controller";
+ 	reg = <0x43c30000 0x10000>;
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+ 	interrupts = <GIC_SPI 0x1d IRQ_TYPE_EDGE_RISING>;
+ 	clocks = <&clkwizard 0>, <&clkwizard 1>, <&clkc 15>;
+ 	clock-names = "send", "send_dly", "ps";
+
+ 	nand@0 {
+ 		reg = <0>;
+ 		nand-ecc-mode = "soft";
+ 		nand-ecc-algo = "bch";
+ 	};
+ };
-67
drivers/mtd/Kconfig
···
  	  WARNING: some of the tests will ERASE entire MTD device which they
  	  test. Do not use these tests unless you really know what you do.

- config MTD_CMDLINE_PARTS
- 	tristate "Command line partition table parsing"
- 	depends on MTD
- 	help
- 	  Allow generic configuration of the MTD partition tables via the kernel
- 	  command line. Multiple flash resources are supported for hardware where
- 	  different kinds of flash memory are available.
-
- 	  You will still need the parsing functions to be called by the driver
- 	  for your particular device. It won't happen automatically. The
- 	  SA1100 map driver (CONFIG_MTD_SA1100) has an option for this, for
- 	  example.
-
- 	  The format for the command line is as follows:
-
- 	  mtdparts=<mtddef>[;<mtddef]
- 	  <mtddef>  := <mtd-id>:<partdef>[,<partdef>]
- 	  <partdef> := <size>[@offset][<name>][ro]
- 	  <mtd-id>  := unique id used in mapping driver/device
- 	  <size>    := standard linux memsize OR "-" to denote all
- 	               remaining space
- 	  <name>    := (NAME)
-
- 	  Due to the way Linux handles the command line, no spaces are
- 	  allowed in the partition definition, including mtd id's and partition
- 	  names.
-
- 	  Examples:
-
- 	  1 flash resource (mtd-id "sa1100"), with 1 single writable partition:
- 	  mtdparts=sa1100:-
-
- 	  Same flash, but 2 named partitions, the first one being read-only:
- 	  mtdparts=sa1100:256k(ARMboot)ro,-(root)
-
- 	  If unsure, say 'N'.
-
- config MTD_OF_PARTS
- 	tristate "OpenFirmware partitioning information support"
- 	default y
- 	depends on OF
- 	help
- 	  This provides a partition parsing function which derives
- 	  the partition map from the children of the flash node,
- 	  as described in Documentation/devicetree/bindings/mtd/partition.txt.
-
- config MTD_AR7_PARTS
- 	tristate "TI AR7 partitioning support"
- 	help
- 	  TI AR7 partitioning support
-
- config MTD_BCM63XX_PARTS
- 	tristate "BCM63XX CFE partitioning support"
- 	depends on BCM63XX || BMIPS_GENERIC || COMPILE_TEST
- 	select CRC32
- 	select MTD_PARSER_IMAGETAG
- 	help
- 	  This provides partition parsing for BCM63xx devices with CFE
- 	  bootloaders.
-
- config MTD_BCM47XX_PARTS
- 	tristate "BCM47XX partitioning support"
- 	depends on BCM47XX || ARCH_BCM_5301X
- 	help
- 	  This provides partitions parser for devices based on BCM47xx
- 	  boards.
-
  menu "Partition parsers"
  source "drivers/mtd/parsers/Kconfig"
  endmenu
-5
drivers/mtd/Makefile
···
  obj-$(CONFIG_MTD)		+= mtd.o
  mtd-y				:= mtdcore.o mtdsuper.o mtdconcat.o mtdpart.o mtdchar.o

- obj-$(CONFIG_MTD_OF_PARTS)	+= ofpart.o
- obj-$(CONFIG_MTD_CMDLINE_PARTS) += cmdlinepart.o
- obj-$(CONFIG_MTD_AR7_PARTS)	+= ar7part.o
- obj-$(CONFIG_MTD_BCM63XX_PARTS)	+= bcm63xxpart.o
- obj-$(CONFIG_MTD_BCM47XX_PARTS)	+= bcm47xxpart.o
  obj-y				+= parsers/

  # 'Users' - code which presents functionality to userspace.
drivers/mtd/ar7part.c drivers/mtd/parsers/ar7part.c
drivers/mtd/bcm47xxpart.c drivers/mtd/parsers/bcm47xxpart.c
drivers/mtd/bcm63xxpart.c drivers/mtd/parsers/bcm63xxpart.c
+187 -114
drivers/mtd/chips/cfi_cmdset_0002.c
···
  static int cfi_amdstd_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *);
  static int cfi_amdstd_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *);
+ #if !FORCE_WORD_WRITE
  static int cfi_amdstd_write_buffers(struct mtd_info *, loff_t, size_t, size_t *, const u_char *);
+ #endif
  static int cfi_amdstd_erase_chip(struct mtd_info *, struct erase_info *);
  static int cfi_amdstd_erase_varsize(struct mtd_info *, struct erase_info *);
  static void cfi_amdstd_sync (struct mtd_info *);
···
  }
  #endif

+ #if !FORCE_WORD_WRITE
  static void fixup_use_write_buffers(struct mtd_info *mtd)
  {
  	struct map_info *map = mtd->priv;
···
  		mtd->_write = cfi_amdstd_write_buffers;
  	}
  }
+ #endif /* !FORCE_WORD_WRITE */

  /* Atmel chips don't use the same PRI format as AMD chips */
  static void fixup_convert_atmel_pri(struct mtd_info *mtd)
···
  			 do_otp_lock, 1);
  }

- static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip,
- 				     unsigned long adr, map_word datum,
- 				     int mode)
+ static int __xipram do_write_oneword_once(struct map_info *map,
+ 					  struct flchip *chip,
+ 					  unsigned long adr, map_word datum,
+ 					  int mode, struct cfi_private *cfi)
  {
- 	struct cfi_private *cfi = map->fldrv_priv;
  	unsigned long timeo = jiffies + HZ;
  	/*
  	 * We use a 1ms + 1 jiffies generic timeout for writes (most devices
···
  	 */
  	unsigned long uWriteTimeout = (HZ / 1000) + 1;
  	int ret = 0;
- 	map_word oldd;
- 	int retry_cnt = 0;

- 	adr += chip->start;
-
- 	mutex_lock(&chip->mutex);
- 	ret = get_chip(map, chip, adr, mode);
- 	if (ret) {
- 		mutex_unlock(&chip->mutex);
- 		return ret;
- 	}
-
- 	pr_debug("MTD %s(): WRITE 0x%.8lx(0x%.8lx)\n",
- 		 __func__, adr, datum.x[0]);
-
- 	if (mode == FL_OTP_WRITE)
- 		otp_enter(map, chip, adr, map_bankwidth(map));
-
- 	/*
- 	 * Check for a NOP for the case when the datum to write is already
- 	 * present - it saves time and works around buggy chips that corrupt
- 	 * data at other locations when 0xff is written to a location that
- 	 * already contains 0xff.
- 	 */
- 	oldd = map_read(map, adr);
- 	if (map_word_equal(map, oldd, datum)) {
- 		pr_debug("MTD %s(): NOP\n",
- 			 __func__);
- 		goto op_done;
- 	}
-
- 	XIP_INVAL_CACHED_RANGE(map, adr, map_bankwidth(map));
- 	ENABLE_VPP(map);
- 	xip_disable(map, chip, adr);
-
-  retry:
  	cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
  	cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi, cfi->device_type, NULL);
  	cfi_send_gen_cmd(0xA0, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL);
···
  			continue;
  		}

+ 		/*
+ 		 * We check "time_after" and "!chip_good" before checking
+ 		 * "chip_good" to avoid the failure due to scheduling.
+ 		 */
  		if (time_after(jiffies, timeo) &&
- 		    !chip_ready(map, chip, adr)) {
+ 		    !chip_good(map, chip, adr, datum)) {
  			xip_enable(map, chip, adr);
  			printk(KERN_WARNING "MTD %s(): software timeout\n", __func__);
  			xip_disable(map, chip, adr);
+ 			ret = -EIO;
  			break;
  		}

- 		if (chip_ready(map, chip, adr))
+ 		if (chip_good(map, chip, adr, datum))
  			break;

  		/* Latency issues. Drop the lock, wait a while and retry */
  		UDELAY(map, chip, adr, 1);
  	}
- 	/* Did we succeed? */
- 	if (!chip_good(map, chip, adr, datum)) {
+
+ 	return ret;
+ }
+
+ static int __xipram do_write_oneword_start(struct map_info *map,
+ 					   struct flchip *chip,
+ 					   unsigned long adr, int mode)
+ {
+ 	int ret = 0;
+
+ 	mutex_lock(&chip->mutex);
+
+ 	ret = get_chip(map, chip, adr, mode);
+ 	if (ret) {
+ 		mutex_unlock(&chip->mutex);
+ 		return ret;
+ 	}
+
+ 	if (mode == FL_OTP_WRITE)
+ 		otp_enter(map, chip, adr, map_bankwidth(map));
+
+ 	return ret;
+ }
+
+ static void __xipram do_write_oneword_done(struct map_info *map,
+ 					   struct flchip *chip,
+ 					   unsigned long adr, int mode)
+ {
+ 	if (mode == FL_OTP_WRITE)
+ 		otp_exit(map, chip, adr, map_bankwidth(map));
+
+ 	chip->state = FL_READY;
+ 	DISABLE_VPP(map);
+ 	put_chip(map, chip, adr);
+
+ 	mutex_unlock(&chip->mutex);
+ }
+
+ static int __xipram do_write_oneword_retry(struct map_info *map,
+ 					   struct flchip *chip,
+ 					   unsigned long adr, map_word datum,
+ 					   int mode)
+ {
+ 	struct cfi_private *cfi = map->fldrv_priv;
+ 	int ret = 0;
+ 	map_word oldd;
+ 	int retry_cnt = 0;
+
+ 	/*
+ 	 * Check for a NOP for the case when the datum to write is already
+ 	 * present - it saves time and works around buggy chips that corrupt
+ 	 * data at other locations when 0xff is written to a location that
+ 	 * already contains 0xff.
+ 	 */
+ 	oldd = map_read(map, adr);
+ 	if (map_word_equal(map, oldd, datum)) {
+ 		pr_debug("MTD %s(): NOP\n", __func__);
+ 		return ret;
+ 	}
+
+ 	XIP_INVAL_CACHED_RANGE(map, adr, map_bankwidth(map));
+ 	ENABLE_VPP(map);
+ 	xip_disable(map, chip, adr);
+
+  retry:
+ 	ret = do_write_oneword_once(map, chip, adr, datum, mode, cfi);
+ 	if (ret) {
  		/* reset on all failures. */
  		cfi_check_err_status(map, chip, adr);
  		map_write(map, CMD(0xF0), chip->start);
  		/* FIXME - should have reset delay before continuing */

- 		if (++retry_cnt <= MAX_RETRIES)
+ 		if (++retry_cnt <= MAX_RETRIES) {
+ 			ret = 0;
  			goto retry;
-
- 		ret = -EIO;
+ 		}
  	}
  	xip_enable(map, chip, adr);
- op_done:
- 	if (mode == FL_OTP_WRITE)
- 		otp_exit(map, chip, adr, map_bankwidth(map));
- 	chip->state = FL_READY;
- 	DISABLE_VPP(map);
- 	put_chip(map, chip, adr);
- 	mutex_unlock(&chip->mutex);
+
+ 	return ret;
+ }
+
+ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip,
+ 				     unsigned long adr, map_word datum,
+ 				     int mode)
+ {
+ 	int ret = 0;
+
+ 	adr += chip->start;
+
+ 	pr_debug("MTD %s(): WRITE 0x%.8lx(0x%.8lx)\n", __func__, adr,
+ 		 datum.x[0]);
+
+ 	ret = do_write_oneword_start(map, chip, adr, mode);
+ 	if (ret)
+ 		return ret;
+
+ 	ret = do_write_oneword_retry(map, chip, adr, datum, mode);
+
+ 	do_write_oneword_done(map, chip, adr, mode);

  	return ret;
  }
···
  	return 0;
  }

+ #if !FORCE_WORD_WRITE
+ static int __xipram do_write_buffer_wait(struct map_info *map,
+ 					 struct flchip *chip, unsigned long adr,
+ 					 map_word datum)
+ {
+ 	unsigned long timeo;
+ 	unsigned long u_write_timeout;
+ 	int ret = 0;
+
+ 	/*
+ 	 * Timeout is calculated according to CFI data, if available.
+ 	 * See more comments in cfi_cmdset_0002().
+ 	 */
+ 	u_write_timeout = usecs_to_jiffies(chip->buffer_write_time_max);
+ 	timeo = jiffies + u_write_timeout;
+
+ 	for (;;) {
+ 		if (chip->state != FL_WRITING) {
+ 			/* Someone's suspended the write. Sleep */
+ 			DECLARE_WAITQUEUE(wait, current);
+
+ 			set_current_state(TASK_UNINTERRUPTIBLE);
+ 			add_wait_queue(&chip->wq, &wait);
+ 			mutex_unlock(&chip->mutex);
+ 			schedule();
+ 			remove_wait_queue(&chip->wq, &wait);
+ 			timeo = jiffies + (HZ / 2); /* FIXME */
+ 			mutex_lock(&chip->mutex);
+ 			continue;
+ 		}
+
+ 		/*
+ 		 * We check "time_after" and "!chip_good" before checking
+ 		 * "chip_good" to avoid the failure due to scheduling.
+ 		 */
+ 		if (time_after(jiffies, timeo) &&
+ 		    !chip_good(map, chip, adr, datum)) {
+ 			ret = -EIO;
+ 			break;
+ 		}
+
+ 		if (chip_good(map, chip, adr, datum))
+ 			break;
+
+ 		/* Latency issues. Drop the lock, wait a while and retry */
+ 		UDELAY(map, chip, adr, 1);
+ 	}
+
+ 	return ret;
+ }
+
+ static void __xipram do_write_buffer_reset(struct map_info *map,
+ 					   struct flchip *chip,
+ 					   struct cfi_private *cfi)
+ {
+ 	/*
+ 	 * Recovery from write-buffer programming failures requires
+ 	 * the write-to-buffer-reset sequence. Since the last part
+ 	 * of the sequence also works as a normal reset, we can run
+ 	 * the same commands regardless of why we are here.
+ 	 * See e.g.
+ 	 * http://www.spansion.com/Support/Application%20Notes/MirrorBit_Write_Buffer_Prog_Page_Buffer_Read_AN.pdf
+ 	 */
+ 	cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi,
+ 			 cfi->device_type, NULL);
+ 	cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi,
+ 			 cfi->device_type, NULL);
+ 	cfi_send_gen_cmd(0xF0, cfi->addr_unlock1, chip->start, map, cfi,
+ 			 cfi->device_type, NULL);
+
+ 	/* FIXME - should have reset delay before continuing */
+ }

  /*
   * FIXME: interleaved mode not tested, and probably not supported!
···
  			   int len)
  {
  	struct cfi_private *cfi = map->fldrv_priv;
- 	unsigned long timeo = jiffies + HZ;
- 	/*
- 	 * Timeout is calculated according to CFI data, if available.
- 	 * See more comments in cfi_cmdset_0002().
- 	 */
- 	unsigned long uWriteTimeout =
- 		usecs_to_jiffies(chip->buffer_write_time_max);
  	int ret = -EIO;
  	unsigned long cmd_adr;
  	int z, words;
···
  				adr, map_bankwidth(map),
  				chip->word_write_time);

- 	timeo = jiffies + uWriteTimeout;
-
- 	for (;;) {
- 		if (chip->state != FL_WRITING) {
- 			/* Someone's suspended the write. Sleep */
- 			DECLARE_WAITQUEUE(wait, current);
-
- 			set_current_state(TASK_UNINTERRUPTIBLE);
- 			add_wait_queue(&chip->wq, &wait);
- 			mutex_unlock(&chip->mutex);
- 			schedule();
- 			remove_wait_queue(&chip->wq, &wait);
- 			timeo = jiffies + (HZ / 2); /* FIXME */
- 			mutex_lock(&chip->mutex);
- 			continue;
- 		}
-
- 		/*
- 		 * We check "time_after" and "!chip_good" before checking "chip_good" to avoid
- 		 * the failure due to scheduling.
- 		 */
- 		if (time_after(jiffies, timeo) &&
- 		    !chip_good(map, chip, adr, datum))
- 			break;
-
- 		if (chip_good(map, chip, adr, datum)) {
- 			xip_enable(map, chip, adr);
- 			goto op_done;
- 		}
-
- 		/* Latency issues. Drop the lock, wait a while and retry */
- 		UDELAY(map, chip, adr, 1);
+ 	ret = do_write_buffer_wait(map, chip, adr, datum);
+ 	if (ret) {
+ 		cfi_check_err_status(map, chip, adr);
+ 		do_write_buffer_reset(map, chip, cfi);
+ 		pr_err("MTD %s(): software timeout, address:0x%.8lx.\n",
+ 		       __func__, adr);
  	}

- 	/*
- 	 * Recovery from write-buffer programming failures requires
- 	 * the write-to-buffer-reset sequence. Since the last part
- 	 * of the sequence also works as a normal reset, we can run
- 	 * the same commands regardless of why we are here.
- 	 * See e.g.
- 	 * http://www.spansion.com/Support/Application%20Notes/MirrorBit_Write_Buffer_Prog_Page_Buffer_Read_AN.pdf
- 	 */
- 	cfi_check_err_status(map, chip, adr);
- 	cfi_send_gen_cmd(0xAA, cfi->addr_unlock1, chip->start, map, cfi,
- 			 cfi->device_type, NULL);
- 	cfi_send_gen_cmd(0x55, cfi->addr_unlock2, chip->start, map, cfi,
- 			 cfi->device_type, NULL);
- 	cfi_send_gen_cmd(0xF0, cfi->addr_unlock1, chip->start, map, cfi,
- 			 cfi->device_type, NULL);
  	xip_enable(map, chip, adr);
- 	/* FIXME - should have reset delay before continuing */

- 	printk(KERN_WARNING "MTD %s(): software timeout, address:0x%.8lx.\n",
- 	       __func__, adr);
-
- 	ret = -EIO;
- op_done:
  	chip->state = FL_READY;
  	DISABLE_VPP(map);
  	put_chip(map, chip, adr);
···
  	return 0;
  }
+ #endif /* !FORCE_WORD_WRITE */

  /*
   * Wait for the flash chip to become ready to write data
···
  	adr = cfi->addr_unlock1;

  	mutex_lock(&chip->mutex);
- 	ret = get_chip(map, chip, adr, FL_WRITING);
+ 	ret = get_chip(map, chip, adr, FL_ERASING);
  	if (ret) {
  		mutex_unlock(&chip->mutex);
  		return ret;
+1 -1
drivers/mtd/chips/gen_probe.c
···

  struct mtd_info *mtd_do_chip_probe(struct map_info *map, struct chip_probe *cp)
  {
- 	struct mtd_info *mtd = NULL;
+ 	struct mtd_info *mtd;
  	struct cfi_private *cfi;

  	/* First probe the map to see if we have CFI stuff there. */
drivers/mtd/cmdlinepart.c drivers/mtd/parsers/cmdlinepart.c
-18
drivers/mtd/devices/Kconfig
···
  	  other key product data. The second half is programmed with a
  	  unique-to-each-chip bit pattern at the factory.

- config MTD_M25P80
- 	tristate "Support most SPI Flash chips (AT26DF, M25P, W25X, ...)"
- 	depends on SPI_MASTER && MTD_SPI_NOR
- 	select SPI_MEM
- 	help
- 	  This enables access to most modern SPI flash chips, used for
- 	  program and data storage. Series supported include Atmel AT26DF,
- 	  Spansion S25SL, SST 25VF, ST M25P, and Winbond W25X. Other chips
- 	  are supported as well. See the driver source for the current list,
- 	  or to add other chips.
-
- 	  Note that the original DataFlash chips (AT45 series, not AT26DF),
- 	  need an entirely different driver.
-
- 	  Set up your spi devices with the right board-specific platform data,
- 	  if you want to specify device partitioning or to use a device which
- 	  doesn't support the JEDEC ID instruction.
-
  config MTD_MCHP23K256
  	tristate "Microchip 23K256 SRAM"
  	depends on SPI_MASTER
-1
drivers/mtd/devices/Makefile
···
  obj-$(CONFIG_MTD_LART)		+= lart.o
  obj-$(CONFIG_MTD_BLOCK2MTD)	+= block2mtd.o
  obj-$(CONFIG_MTD_DATAFLASH)	+= mtd_dataflash.o
- obj-$(CONFIG_MTD_M25P80)	+= m25p80.o
  obj-$(CONFIG_MTD_MCHP23K256)	+= mchp23k256.o
  obj-$(CONFIG_MTD_SPEAR_SMI)	+= spear_smi.o
  obj-$(CONFIG_MTD_SST25L)	+= sst25l.o
-347
drivers/mtd/devices/m25p80.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * MTD SPI driver for ST M25Pxx (and similar) serial flash chips 4 - * 5 - * Author: Mike Lavender, mike@steroidmicros.com 6 - * 7 - * Copyright (c) 2005, Intec Automation Inc. 8 - * 9 - * Some parts are based on lart.c by Abraham Van Der Merwe 10 - * 11 - * Cleaned up and generalized based on mtd_dataflash.c 12 - */ 13 - 14 - #include <linux/err.h> 15 - #include <linux/errno.h> 16 - #include <linux/module.h> 17 - #include <linux/device.h> 18 - 19 - #include <linux/mtd/mtd.h> 20 - #include <linux/mtd/partitions.h> 21 - 22 - #include <linux/spi/spi.h> 23 - #include <linux/spi/spi-mem.h> 24 - #include <linux/spi/flash.h> 25 - #include <linux/mtd/spi-nor.h> 26 - 27 - struct m25p { 28 - struct spi_mem *spimem; 29 - struct spi_nor spi_nor; 30 - }; 31 - 32 - static int m25p80_read_reg(struct spi_nor *nor, u8 code, u8 *val, int len) 33 - { 34 - struct m25p *flash = nor->priv; 35 - struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(code, 1), 36 - SPI_MEM_OP_NO_ADDR, 37 - SPI_MEM_OP_NO_DUMMY, 38 - SPI_MEM_OP_DATA_IN(len, NULL, 1)); 39 - void *scratchbuf; 40 - int ret; 41 - 42 - scratchbuf = kmalloc(len, GFP_KERNEL); 43 - if (!scratchbuf) 44 - return -ENOMEM; 45 - 46 - op.data.buf.in = scratchbuf; 47 - ret = spi_mem_exec_op(flash->spimem, &op); 48 - if (ret < 0) 49 - dev_err(&flash->spimem->spi->dev, "error %d reading %x\n", ret, 50 - code); 51 - else 52 - memcpy(val, scratchbuf, len); 53 - 54 - kfree(scratchbuf); 55 - 56 - return ret; 57 - } 58 - 59 - static int m25p80_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len) 60 - { 61 - struct m25p *flash = nor->priv; 62 - struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 1), 63 - SPI_MEM_OP_NO_ADDR, 64 - SPI_MEM_OP_NO_DUMMY, 65 - SPI_MEM_OP_DATA_OUT(len, NULL, 1)); 66 - void *scratchbuf; 67 - int ret; 68 - 69 - scratchbuf = kmemdup(buf, len, GFP_KERNEL); 70 - if (!scratchbuf) 71 - return -ENOMEM; 72 - 73 - op.data.buf.out = scratchbuf; 74 - ret = 
spi_mem_exec_op(flash->spimem, &op); 75 - kfree(scratchbuf); 76 - 77 - return ret; 78 - } 79 - 80 - static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len, 81 - const u_char *buf) 82 - { 83 - struct m25p *flash = nor->priv; 84 - struct spi_mem_op op = 85 - SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 1), 86 - SPI_MEM_OP_ADDR(nor->addr_width, to, 1), 87 - SPI_MEM_OP_NO_DUMMY, 88 - SPI_MEM_OP_DATA_OUT(len, buf, 1)); 89 - int ret; 90 - 91 - /* get transfer protocols. */ 92 - op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->write_proto); 93 - op.addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->write_proto); 94 - op.data.buswidth = spi_nor_get_protocol_data_nbits(nor->write_proto); 95 - 96 - if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second) 97 - op.addr.nbytes = 0; 98 - 99 - ret = spi_mem_adjust_op_size(flash->spimem, &op); 100 - if (ret) 101 - return ret; 102 - op.data.nbytes = len < op.data.nbytes ? len : op.data.nbytes; 103 - 104 - ret = spi_mem_exec_op(flash->spimem, &op); 105 - if (ret) 106 - return ret; 107 - 108 - return op.data.nbytes; 109 - } 110 - 111 - /* 112 - * Read an address range from the nor chip. The address range 113 - * may be any size provided it is within the physical boundaries. 114 - */ 115 - static ssize_t m25p80_read(struct spi_nor *nor, loff_t from, size_t len, 116 - u_char *buf) 117 - { 118 - struct m25p *flash = nor->priv; 119 - struct spi_mem_op op = 120 - SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 1), 121 - SPI_MEM_OP_ADDR(nor->addr_width, from, 1), 122 - SPI_MEM_OP_DUMMY(nor->read_dummy, 1), 123 - SPI_MEM_OP_DATA_IN(len, buf, 1)); 124 - size_t remaining = len; 125 - int ret; 126 - 127 - /* get transfer protocols. 
*/ 128 - op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->read_proto); 129 - op.addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->read_proto); 130 - op.dummy.buswidth = op.addr.buswidth; 131 - op.data.buswidth = spi_nor_get_protocol_data_nbits(nor->read_proto); 132 - 133 - /* convert the dummy cycles to the number of bytes */ 134 - op.dummy.nbytes = (nor->read_dummy * op.dummy.buswidth) / 8; 135 - 136 - while (remaining) { 137 - op.data.nbytes = remaining < UINT_MAX ? remaining : UINT_MAX; 138 - ret = spi_mem_adjust_op_size(flash->spimem, &op); 139 - if (ret) 140 - return ret; 141 - 142 - ret = spi_mem_exec_op(flash->spimem, &op); 143 - if (ret) 144 - return ret; 145 - 146 - op.addr.val += op.data.nbytes; 147 - remaining -= op.data.nbytes; 148 - op.data.buf.in += op.data.nbytes; 149 - } 150 - 151 - return len; 152 - } 153 - 154 - /* 155 - * board specific setup should have ensured the SPI clock used here 156 - * matches what the READ command supports, at least until this driver 157 - * understands FAST_READ (for clocks over 25 MHz). 
158 - */ 159 - static int m25p_probe(struct spi_mem *spimem) 160 - { 161 - struct spi_device *spi = spimem->spi; 162 - struct flash_platform_data *data; 163 - struct m25p *flash; 164 - struct spi_nor *nor; 165 - struct spi_nor_hwcaps hwcaps = { 166 - .mask = SNOR_HWCAPS_READ | 167 - SNOR_HWCAPS_READ_FAST | 168 - SNOR_HWCAPS_PP, 169 - }; 170 - char *flash_name; 171 - int ret; 172 - 173 - data = dev_get_platdata(&spimem->spi->dev); 174 - 175 - flash = devm_kzalloc(&spimem->spi->dev, sizeof(*flash), GFP_KERNEL); 176 - if (!flash) 177 - return -ENOMEM; 178 - 179 - nor = &flash->spi_nor; 180 - 181 - /* install the hooks */ 182 - nor->read = m25p80_read; 183 - nor->write = m25p80_write; 184 - nor->write_reg = m25p80_write_reg; 185 - nor->read_reg = m25p80_read_reg; 186 - 187 - nor->dev = &spimem->spi->dev; 188 - spi_nor_set_flash_node(nor, spi->dev.of_node); 189 - nor->priv = flash; 190 - 191 - spi_mem_set_drvdata(spimem, flash); 192 - flash->spimem = spimem; 193 - 194 - if (spi->mode & SPI_RX_OCTAL) { 195 - hwcaps.mask |= SNOR_HWCAPS_READ_1_1_8; 196 - 197 - if (spi->mode & SPI_TX_OCTAL) 198 - hwcaps.mask |= (SNOR_HWCAPS_READ_1_8_8 | 199 - SNOR_HWCAPS_PP_1_1_8 | 200 - SNOR_HWCAPS_PP_1_8_8); 201 - } else if (spi->mode & SPI_RX_QUAD) { 202 - hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4; 203 - 204 - if (spi->mode & SPI_TX_QUAD) 205 - hwcaps.mask |= (SNOR_HWCAPS_READ_1_4_4 | 206 - SNOR_HWCAPS_PP_1_1_4 | 207 - SNOR_HWCAPS_PP_1_4_4); 208 - } else if (spi->mode & SPI_RX_DUAL) { 209 - hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2; 210 - 211 - if (spi->mode & SPI_TX_DUAL) 212 - hwcaps.mask |= SNOR_HWCAPS_READ_1_2_2; 213 - } 214 - 215 - if (data && data->name) 216 - nor->mtd.name = data->name; 217 - 218 - if (!nor->mtd.name) 219 - nor->mtd.name = spi_mem_get_name(spimem); 220 - 221 - /* For some (historical?) reason many platforms provide two different 222 - * names in flash_platform_data: "name" and "type". Quite often name is 223 - * set to "m25p80" and then "type" provides a real chip name. 
224 -  * If that's the case, respect "type" and ignore a "name".
225 -  */
226 - if (data && data->type)
227 - 	flash_name = data->type;
228 - else if (!strcmp(spi->modalias, "spi-nor"))
229 - 	flash_name = NULL; /* auto-detect */
230 - else
231 - 	flash_name = spi->modalias;
232 -
233 - ret = spi_nor_scan(nor, flash_name, &hwcaps);
234 - if (ret)
235 - 	return ret;
236 -
237 - return mtd_device_register(&nor->mtd, data ? data->parts : NULL,
238 - 			   data ? data->nr_parts : 0);
239 - }
240 -
241 -
242 - static int m25p_remove(struct spi_mem *spimem)
243 - {
244 - 	struct m25p *flash = spi_mem_get_drvdata(spimem);
245 -
246 - 	spi_nor_restore(&flash->spi_nor);
247 -
248 - 	/* Clean up MTD stuff. */
249 - 	return mtd_device_unregister(&flash->spi_nor.mtd);
250 - }
251 -
252 - static void m25p_shutdown(struct spi_mem *spimem)
253 - {
254 - 	struct m25p *flash = spi_mem_get_drvdata(spimem);
255 -
256 - 	spi_nor_restore(&flash->spi_nor);
257 - }
258 - /*
259 -  * Do NOT add to this array without reading the following:
260 -  *
261 -  * Historically, many flash devices are bound to this driver by their name. But
262 -  * since most of these flash are compatible to some extent, and their
263 -  * differences can often be differentiated by the JEDEC read-ID command, we
264 -  * encourage new users to add support to the spi-nor library, and simply bind
265 -  * against a generic string here (e.g., "jedec,spi-nor").
266 -  *
267 -  * Many flash names are kept here in this list (as well as in spi-nor.c) to
268 -  * keep them available as module aliases for existing platforms.
269 -  */
270 - static const struct spi_device_id m25p_ids[] = {
271 - 	/*
272 - 	 * Allow non-DT platform devices to bind to the "spi-nor" modalias, and
273 - 	 * hack around the fact that the SPI core does not provide uevent
274 - 	 * matching for .of_match_table
275 - 	 */
276 - 	{"spi-nor"},
277 -
278 - 	/*
279 - 	 * Entries not used in DTs that should be safe to drop after replacing
280 - 	 * them with "spi-nor" in platform data.
281 - 	 */
282 - 	{"s25sl064a"},	{"w25x16"},	{"m25p10"},	{"m25px64"},
283 -
284 - 	/*
285 - 	 * Entries that were used in DTs without "jedec,spi-nor" fallback and
286 - 	 * should be kept for backward compatibility.
287 - 	 */
288 - 	{"at25df321a"},	{"at25df641"},	{"at26df081a"},
289 - 	{"mx25l4005a"},	{"mx25l1606e"},	{"mx25l6405d"},	{"mx25l12805d"},
290 - 	{"mx25l25635e"},{"mx66l51235l"},
291 - 	{"n25q064"},	{"n25q128a11"},	{"n25q128a13"},	{"n25q512a"},
292 - 	{"s25fl256s1"},	{"s25fl512s"},	{"s25sl12801"},	{"s25fl008k"},
293 - 	{"s25fl064k"},
294 - 	{"sst25vf040b"},{"sst25vf016b"},{"sst25vf032b"},{"sst25wf040"},
295 - 	{"m25p40"},	{"m25p80"},	{"m25p16"},	{"m25p32"},
296 - 	{"m25p64"},	{"m25p128"},
297 - 	{"w25x80"},	{"w25x32"},	{"w25q32"},	{"w25q32dw"},
298 - 	{"w25q80bl"},	{"w25q128"},	{"w25q256"},
299 -
300 - 	/* Flashes that can't be detected using JEDEC */
301 - 	{"m25p05-nonjedec"},	{"m25p10-nonjedec"},	{"m25p20-nonjedec"},
302 - 	{"m25p40-nonjedec"},	{"m25p80-nonjedec"},	{"m25p16-nonjedec"},
303 - 	{"m25p32-nonjedec"},	{"m25p64-nonjedec"},	{"m25p128-nonjedec"},
304 -
305 - 	/* Everspin MRAMs (non-JEDEC) */
306 - 	{ "mr25h128" },	/* 128 Kib, 40 MHz */
307 - 	{ "mr25h256" },	/* 256 Kib, 40 MHz */
308 - 	{ "mr25h10" },	/*   1 Mib, 40 MHz */
309 - 	{ "mr25h40" },	/*   4 Mib, 40 MHz */
310 -
311 - 	{ },
312 - };
313 - MODULE_DEVICE_TABLE(spi, m25p_ids);
314 -
315 - static const struct of_device_id m25p_of_table[] = {
316 - 	/*
317 - 	 * Generic compatibility for SPI NOR that can be identified by the
318 - 	 * JEDEC READ ID opcode (0x9F). Use this, if possible.
319 - 	 */
320 - 	{ .compatible = "jedec,spi-nor" },
321 - 	{}
322 - };
323 - MODULE_DEVICE_TABLE(of, m25p_of_table);
324 -
325 - static struct spi_mem_driver m25p80_driver = {
326 - 	.spidrv = {
327 - 		.driver = {
328 - 			.name	= "m25p80",
329 - 			.of_match_table = m25p_of_table,
330 - 		},
331 - 		.id_table	= m25p_ids,
332 - 	},
333 - 	.probe	= m25p_probe,
334 - 	.remove	= m25p_remove,
335 - 	.shutdown	= m25p_shutdown,
336 -
337 - 	/* REVISIT: many of these chips have deep power-down modes, which
338 - 	 * should clearly be entered on suspend() to minimize power use.
339 - 	 * And also when they're otherwise idle...
340 - 	 */
341 - };
342 -
343 - module_spi_mem_driver(m25p80_driver);
344 -
345 - MODULE_LICENSE("GPL");
346 - MODULE_AUTHOR("Mike Lavender");
347 - MODULE_DESCRIPTION("MTD SPI driver for ST M25Pxx flash chips");
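The removed m25p_probe() picked the flash name from platform data, fell back to auto-detection for the generic "spi-nor" modalias, and otherwise bound by device name. A minimal userspace model of that selection order, where `flash_name_for()` is a hypothetical helper (not a kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Model of the name selection in the removed m25p_probe():
 * platform data "type" wins; the generic "spi-nor" modalias requests
 * auto-detection via the JEDEC ID (NULL); any other modalias is used
 * as an explicit legacy flash name.
 */
static const char *flash_name_for(const char *pdata_type, const char *modalias)
{
	if (pdata_type)
		return pdata_type;		/* respect "type", ignore name */
	if (!strcmp(modalias, "spi-nor"))
		return NULL;			/* auto-detect via JEDEC READ ID */
	return modalias;			/* legacy: bind by device name */
}
```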
+1 -1
drivers/mtd/devices/phram.c
···
294 294 #endif
295 295 }
296 296
297     - module_param_call(phram, phram_param_call, NULL, NULL, 000);
    297 + module_param_call(phram, phram_param_call, NULL, NULL, 0200);
298 298 MODULE_PARM_DESC(phram, "Memory region to map. \"phram=<name>,<start>,<length>\"");
299 299
300 300
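The phram change replaces mode 000 (no sysfs access at all) with 0200 (owner write-only), so root can still feed `phram=<name>,<start>,<length>` through the module's sysfs parameter. A small illustration of what the octal mode encodes; `mode_allows_owner_write()` is just a demonstration helper, not kernel code:

```c
#include <assert.h>
#include <sys/stat.h>

/* 0200 sets only the owner-write bit (S_IWUSR); 000 grants nothing,
 * which is why the parameter could never be set at runtime before. */
static int mode_allows_owner_write(int mode)
{
	return (mode & S_IWUSR) != 0;
}
```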
+3 -6
drivers/mtd/devices/pmc551.c
···
135 135 static int pmc551_erase(struct mtd_info *mtd, struct erase_info *instr)
136 136 {
137 137 	struct mypriv *priv = mtd->priv;
138     - 	u32 soff_hi, soff_lo; /* start address offset hi/lo */
    138 + 	u32 soff_hi; /* start address offset hi */
139 139 	u32 eoff_hi, eoff_lo; /* end address offset hi/lo */
140 140 	unsigned long end;
141 141 	u_char *ptr;
···
150 150 	eoff_hi = end & ~(priv->asize - 1);
151 151 	soff_hi = instr->addr & ~(priv->asize - 1);
152 152 	eoff_lo = end & (priv->asize - 1);
153     - 	soff_lo = instr->addr & (priv->asize - 1);
154 153
155 154 	pmc551_point(mtd, instr->addr, instr->len, &retlen,
156 155 		     (void **)&ptr, NULL);
···
224 225 			size_t * retlen, u_char * buf)
225 226 {
226 227 	struct mypriv *priv = mtd->priv;
227     - 	u32 soff_hi, soff_lo; /* start address offset hi/lo */
    228 + 	u32 soff_hi; /* start address offset hi */
228 229 	u32 eoff_hi, eoff_lo; /* end address offset hi/lo */
229 230 	unsigned long end;
230 231 	u_char *ptr;
···
238 239 	end = from + len - 1;
239 240 	soff_hi = from & ~(priv->asize - 1);
240 241 	eoff_hi = end & ~(priv->asize - 1);
241     - 	soff_lo = from & (priv->asize - 1);
242 242 	eoff_lo = end & (priv->asize - 1);
243 243
244 244 	pmc551_point(mtd, from, len, retlen, (void **)&ptr, NULL);
···
280 282 			 size_t * retlen, const u_char * buf)
281 283 {
282 284 	struct mypriv *priv = mtd->priv;
283     - 	u32 soff_hi, soff_lo; /* start address offset hi/lo */
    285 + 	u32 soff_hi; /* start address offset hi */
284 286 	u32 eoff_hi, eoff_lo; /* end address offset hi/lo */
285 287 	unsigned long end;
286 288 	u_char *ptr;
···
294 296 	end = to + len - 1;
295 297 	soff_hi = to & ~(priv->asize - 1);
296 298 	eoff_hi = end & ~(priv->asize - 1);
297     - 	soff_lo = to & (priv->asize - 1);
298 299 	eoff_lo = end & (priv->asize - 1);
299 300
300 301 	pmc551_point(mtd, to, len, retlen, (void **)&ptr, NULL);
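The pmc551 cleanup deletes the `soff_lo` values because only the aperture-aligned high part of the start offset is ever consumed. With `asize` a power of two, the hi/lo split is a pair of masks; a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Aperture-aligned base of an address (the part pmc551 keeps). */
static uint32_t off_hi(uint32_t addr, uint32_t asize)
{
	return addr & ~(asize - 1);
}

/* Offset inside the aperture (the part the patch shows was unused). */
static uint32_t off_lo(uint32_t addr, uint32_t asize)
{
	return addr & (asize - 1);
}
```

The two parts always recombine to the original address, which is why dropping an unread one is safe.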
+1 -2
drivers/mtd/maps/pismo.c
···
211 211 static int pismo_probe(struct i2c_client *client,
212 212 		       const struct i2c_device_id *id)
213 213 {
214     - 	struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
215 214 	struct pismo_pdata *pdata = client->dev.platform_data;
216 215 	struct pismo_eeprom eeprom;
217 216 	struct pismo_data *pismo;
218 217 	int ret, i;
219 218
220     - 	if (!i2c_check_functionality(adapter, I2C_FUNC_I2C)) {
    219 + 	if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
221 220 		dev_err(&client->dev, "functionality mismatch\n");
222 221 		return -EIO;
223 222 	}
+1 -2
drivers/mtd/maps/pxa2xx-flash.c
···
68 68 		       info->map.name);
69 69 		return -ENOMEM;
70 70 	}
71     - 	info->map.cached =
72     - 		ioremap_cached(info->map.phys, info->map.size);
    71 + 	info->map.cached = ioremap_cache(info->map.phys, info->map.size);
73 72 	if (!info->map.cached)
74 73 		printk(KERN_WARNING "Failed to ioremap cached %s\n",
75 74 		       info->map.name);
+77 -9
drivers/mtd/mtdcore.c
···
335 335 	.release	= mtd_release,
336 336 };
337 337
    338 + static int mtd_partid_show(struct seq_file *s, void *p)
    339 + {
    340 + 	struct mtd_info *mtd = s->private;
    341 +
    342 + 	seq_printf(s, "%s\n", mtd->dbg.partid);
    343 +
    344 + 	return 0;
    345 + }
    346 +
    347 + static int mtd_partid_debugfs_open(struct inode *inode, struct file *file)
    348 + {
    349 + 	return single_open(file, mtd_partid_show, inode->i_private);
    350 + }
    351 +
    352 + static const struct file_operations mtd_partid_debug_fops = {
    353 + 	.open           = mtd_partid_debugfs_open,
    354 + 	.read           = seq_read,
    355 + 	.llseek         = seq_lseek,
    356 + 	.release        = single_release,
    357 + };
    358 +
    359 + static int mtd_partname_show(struct seq_file *s, void *p)
    360 + {
    361 + 	struct mtd_info *mtd = s->private;
    362 +
    363 + 	seq_printf(s, "%s\n", mtd->dbg.partname);
    364 +
    365 + 	return 0;
    366 + }
    367 +
    368 + static int mtd_partname_debugfs_open(struct inode *inode, struct file *file)
    369 + {
    370 + 	return single_open(file, mtd_partname_show, inode->i_private);
    371 + }
    372 +
    373 + static const struct file_operations mtd_partname_debug_fops = {
    374 + 	.open           = mtd_partname_debugfs_open,
    375 + 	.read           = seq_read,
    376 + 	.llseek         = seq_lseek,
    377 + 	.release        = single_release,
    378 + };
    379 +
    380 + static struct dentry *dfs_dir_mtd;
    381 +
    382 + static void mtd_debugfs_populate(struct mtd_info *mtd)
    383 + {
    384 + 	struct device *dev = &mtd->dev;
    385 + 	struct dentry *root, *dent;
    386 +
    387 + 	if (IS_ERR_OR_NULL(dfs_dir_mtd))
    388 + 		return;
    389 +
    390 + 	root = debugfs_create_dir(dev_name(dev), dfs_dir_mtd);
    391 + 	if (IS_ERR_OR_NULL(root)) {
    392 + 		dev_dbg(dev, "won't show data in debugfs\n");
    393 + 		return;
    394 + 	}
    395 +
    396 + 	mtd->dbg.dfs_dir = root;
    397 +
    398 + 	if (mtd->dbg.partid) {
    399 + 		dent = debugfs_create_file("partid", 0400, root, mtd,
    400 + 					   &mtd_partid_debug_fops);
    401 + 		if (IS_ERR_OR_NULL(dent))
    402 + 			dev_err(dev, "can't create debugfs entry for partid\n");
    403 + 	}
    404 +
    405 + 	if (mtd->dbg.partname) {
    406 + 		dent = debugfs_create_file("partname", 0400, root, mtd,
    407 + 					   &mtd_partname_debug_fops);
    408 + 		if (IS_ERR_OR_NULL(dent))
    409 + 			dev_err(dev,
    410 + 				"can't create debugfs entry for partname\n");
    411 + 	}
    412 + }
    413 +
338 414 #ifndef CONFIG_MMU
339 415 unsigned mtd_mmap_capabilities(struct mtd_info *mtd)
340 416 {
···
588 512 	return 0;
589 513 }
590 514
591     - static struct dentry *dfs_dir_mtd;
592     -
593 515 /**
594 516  * add_mtd_device - register an MTD device
595 517  * @mtd: pointer to new MTD device info structure
···
681 607 	if (error)
682 608 		goto fail_nvmem_add;
683 609
684     - 	if (!IS_ERR_OR_NULL(dfs_dir_mtd)) {
685     - 		mtd->dbg.dfs_dir = debugfs_create_dir(dev_name(&mtd->dev), dfs_dir_mtd);
686     - 		if (IS_ERR_OR_NULL(mtd->dbg.dfs_dir)) {
687     - 			pr_debug("mtd device %s won't show data in debugfs\n",
688     - 				 dev_name(&mtd->dev));
689     - 		}
690     - 	}
    610 + 	mtd_debugfs_populate(mtd);
691 611
692 612 	device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL,
693 613 		      "mtd%dro", i);
+3
drivers/mtd/nand/onenand/onenand_base.c
···
3880 3880 	if (!this->oob_buf) {
3881 3881 		if (this->options & ONENAND_PAGEBUF_ALLOC) {
3882 3882 			this->options &= ~ONENAND_PAGEBUF_ALLOC;
     3883 + #ifdef CONFIG_MTD_ONENAND_VERIFY_WRITE
     3884 + 			kfree(this->verify_buf);
     3885 + #endif
3883 3886 			kfree(this->page_buf);
3884 3887 		}
3885 3888 		return -ENOMEM;
+6 -8
drivers/mtd/nand/raw/Kconfig
···
351 351 	help
352 352 	  Enables support for NAND Flash chips wired onto Socrates board.
353 353
354     - config MTD_NAND_NUC900
355     - 	tristate "Nuvoton NUC9xx/w90p910 NAND controller"
356     - 	depends on ARCH_W90X900 || COMPILE_TEST
357     - 	depends on HAS_IOMEM
358     - 	help
359     - 	  This enables the driver for the NAND Flash on evaluation board based
360     - 	  on w90p910 / NUC9xx.
361     -
362 354 source "drivers/mtd/nand/raw/ingenic/Kconfig"
363 355
364 356 config MTD_NAND_FSMC
···
398 406 	help
399 407 	  Enables support for NAND controller on MTK SoCs.
400 408 	  This controller is found on mt27xx, mt81xx, mt65xx SoCs.
    409 +
    410 + config MTD_NAND_MXIC
    411 + 	tristate "Macronix raw NAND controller"
    412 + 	depends on HAS_IOMEM || COMPILE_TEST
    413 + 	help
    414 + 	  This selects the Macronix raw NAND controller driver.
401 415
402 416 config MTD_NAND_TEGRA
403 417 	tristate "NVIDIA Tegra NAND controller"
+1 -1
drivers/mtd/nand/raw/Makefile
···
41 41 obj-$(CONFIG_MTD_NAND_MXC)		+= mxc_nand.o
42 42 obj-$(CONFIG_MTD_NAND_SOCRATES)		+= socrates_nand.o
43 43 obj-$(CONFIG_MTD_NAND_TXX9NDFMC)	+= txx9ndfmc.o
44     - obj-$(CONFIG_MTD_NAND_NUC900)		+= nuc900_nand.o
45 44 obj-$(CONFIG_MTD_NAND_MPC5121_NFC)	+= mpc5121_nfc.o
46 45 obj-$(CONFIG_MTD_NAND_VF610_NFC)	+= vf610_nfc.o
47 46 obj-$(CONFIG_MTD_NAND_RICOH)		+= r852.o
···
53 54 obj-$(CONFIG_MTD_NAND_BRCMNAND)		+= brcmnand/
54 55 obj-$(CONFIG_MTD_NAND_QCOM)		+= qcom_nandc.o
55 56 obj-$(CONFIG_MTD_NAND_MTK)		+= mtk_ecc.o mtk_nand.o
    57 + obj-$(CONFIG_MTD_NAND_MXIC)		+= mxic_nand.o
56 58 obj-$(CONFIG_MTD_NAND_TEGRA)		+= tegra_nand.o
57 59 obj-$(CONFIG_MTD_NAND_STM32_FMC2)	+= stm32_fmc2_nand.o
58 60 obj-$(CONFIG_MTD_NAND_MESON)		+= meson_nand.o
+4 -1
drivers/mtd/nand/raw/brcmnand/brcmnand.c
···
1792 1792 	int bitflips = 0;
1793 1793 	int page = addr >> chip->page_shift;
1794 1794 	int ret;
     1795 + 	void *ecc_chunk;
1795 1796
1796 1797 	if (!buf)
1797 1798 		buf = nand_get_data_buf(chip);
···
1805 1804 		return ret;
1806 1805
1807 1806 	for (i = 0; i < chip->ecc.steps; i++, oob += sas) {
1808     - 		ret = nand_check_erased_ecc_chunk(buf, chip->ecc.size,
     1807 + 		ecc_chunk = buf + chip->ecc.size * i;
     1808 + 		ret = nand_check_erased_ecc_chunk(ecc_chunk,
     1809 + 						  chip->ecc.size,
1809 1810 						  oob, sas, NULL, 0,
1810 1811 						  chip->ecc.strength);
1811 1812 		if (ret < 0)
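The brcmnand bug was that the erased-page check passed `buf` for every ECC step, rechecking chunk 0 instead of advancing through the page. A standalone model of the pointer arithmetic the fix introduces:

```c
#include <assert.h>
#include <stddef.h>

/* Chunk pointer for ECC step i; before the fix this was always "buf",
 * so only the first chunk of the page was ever inspected. */
static const unsigned char *ecc_chunk(const unsigned char *buf,
				      size_t ecc_size, size_t i)
{
	return buf + ecc_size * i;
}
```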
+2 -3
drivers/mtd/nand/raw/ingenic/ingenic_nand_drv.c
···
310 310 	struct device *dev = &pdev->dev;
311 311 	struct ingenic_nand *nand;
312 312 	struct ingenic_nand_cs *cs;
313     - 	struct resource *res;
314 313 	struct nand_chip *chip;
315 314 	struct mtd_info *mtd;
316 315 	const __be32 *reg;
···
325 326
326 327 	jz4780_nemc_set_type(nfc->dev, cs->bank, JZ4780_NEMC_BANK_NAND);
327 328
328     - 	res = platform_get_resource(pdev, IORESOURCE_MEM, chipnr);
329     - 	cs->base = devm_ioremap_resource(dev, res);
    329 + 	cs->base = devm_platform_ioremap_resource(pdev, chipnr);
330 330 	if (IS_ERR(cs->base))
331 331 		return PTR_ERR(cs->base);
···
416 418 		ret = ingenic_nand_init_chip(pdev, nfc, np, i);
417 419 		if (ret) {
418 420 			ingenic_nand_cleanup_chips(nfc);
    421 + 			of_node_put(np);
419 422 			return ret;
420 423 		}
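Both the ingenic and meson fixes add `of_node_put()` on the error path of a `for_each_child_of_node()`-style loop: the iterator holds a reference to the current child, so an early `return` would leak it. A toy refcount model of the balanced pattern, where `node_get`/`node_put` stand in for `of_node_get()`/`of_node_put()`:

```c
#include <assert.h>

struct toy_node { int refcount; };

static void node_get(struct toy_node *n) { n->refcount++; }
static void node_put(struct toy_node *n) { n->refcount--; }

/* Visit each child; fail_at simulates an init error on that child.
 * Returns -1 on error, dropping the loop's reference first (the fix). */
static int visit_children(struct toy_node *nodes, int n, int fail_at)
{
	int i;

	for (i = 0; i < n; i++) {
		node_get(&nodes[i]);		/* taken by the iterator */
		if (i == fail_at) {
			node_put(&nodes[i]);	/* the fix: put before return */
			return -1;
		}
		node_put(&nodes[i]);		/* put when advancing */
	}
	return 0;
}
```

Without the put on the error branch, the failing node would end with a nonzero refcount and never be released.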
+1
drivers/mtd/nand/raw/meson_nand.c
···
1320 1320 		ret = meson_nfc_nand_chip_init(dev, nfc, nand_np);
1321 1321 		if (ret) {
1322 1322 			meson_nfc_nand_chip_cleanup(nfc);
     1323 + 			of_node_put(nand_np);
1323 1324 			return ret;
1324 1325 		}
1325 1326 	}
+582
drivers/mtd/nand/raw/mxic_nand.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2019 Macronix International Co., Ltd. 4 + * 5 + * Author: 6 + * Mason Yang <masonccyang@mxic.com.tw> 7 + */ 8 + 9 + #include <linux/clk.h> 10 + #include <linux/io.h> 11 + #include <linux/iopoll.h> 12 + #include <linux/interrupt.h> 13 + #include <linux/module.h> 14 + #include <linux/mtd/mtd.h> 15 + #include <linux/mtd/rawnand.h> 16 + #include <linux/mtd/nand_ecc.h> 17 + #include <linux/platform_device.h> 18 + 19 + #include "internals.h" 20 + 21 + #define HC_CFG 0x0 22 + #define HC_CFG_IF_CFG(x) ((x) << 27) 23 + #define HC_CFG_DUAL_SLAVE BIT(31) 24 + #define HC_CFG_INDIVIDUAL BIT(30) 25 + #define HC_CFG_NIO(x) (((x) / 4) << 27) 26 + #define HC_CFG_TYPE(s, t) ((t) << (23 + ((s) * 2))) 27 + #define HC_CFG_TYPE_SPI_NOR 0 28 + #define HC_CFG_TYPE_SPI_NAND 1 29 + #define HC_CFG_TYPE_SPI_RAM 2 30 + #define HC_CFG_TYPE_RAW_NAND 3 31 + #define HC_CFG_SLV_ACT(x) ((x) << 21) 32 + #define HC_CFG_CLK_PH_EN BIT(20) 33 + #define HC_CFG_CLK_POL_INV BIT(19) 34 + #define HC_CFG_BIG_ENDIAN BIT(18) 35 + #define HC_CFG_DATA_PASS BIT(17) 36 + #define HC_CFG_IDLE_SIO_LVL(x) ((x) << 16) 37 + #define HC_CFG_MAN_START_EN BIT(3) 38 + #define HC_CFG_MAN_START BIT(2) 39 + #define HC_CFG_MAN_CS_EN BIT(1) 40 + #define HC_CFG_MAN_CS_ASSERT BIT(0) 41 + 42 + #define INT_STS 0x4 43 + #define INT_STS_EN 0x8 44 + #define INT_SIG_EN 0xc 45 + #define INT_STS_ALL GENMASK(31, 0) 46 + #define INT_RDY_PIN BIT(26) 47 + #define INT_RDY_SR BIT(25) 48 + #define INT_LNR_SUSP BIT(24) 49 + #define INT_ECC_ERR BIT(17) 50 + #define INT_CRC_ERR BIT(16) 51 + #define INT_LWR_DIS BIT(12) 52 + #define INT_LRD_DIS BIT(11) 53 + #define INT_SDMA_INT BIT(10) 54 + #define INT_DMA_FINISH BIT(9) 55 + #define INT_RX_NOT_FULL BIT(3) 56 + #define INT_RX_NOT_EMPTY BIT(2) 57 + #define INT_TX_NOT_FULL BIT(1) 58 + #define INT_TX_EMPTY BIT(0) 59 + 60 + #define HC_EN 0x10 61 + #define HC_EN_BIT BIT(0) 62 + 63 + #define TXD(x) (0x14 + ((x) * 4)) 64 + #define RXD 0x24 
65 + 66 + #define SS_CTRL(s) (0x30 + ((s) * 4)) 67 + #define LRD_CFG 0x44 68 + #define LWR_CFG 0x80 69 + #define RWW_CFG 0x70 70 + #define OP_READ BIT(23) 71 + #define OP_DUMMY_CYC(x) ((x) << 17) 72 + #define OP_ADDR_BYTES(x) ((x) << 14) 73 + #define OP_CMD_BYTES(x) (((x) - 1) << 13) 74 + #define OP_OCTA_CRC_EN BIT(12) 75 + #define OP_DQS_EN BIT(11) 76 + #define OP_ENHC_EN BIT(10) 77 + #define OP_PREAMBLE_EN BIT(9) 78 + #define OP_DATA_DDR BIT(8) 79 + #define OP_DATA_BUSW(x) ((x) << 6) 80 + #define OP_ADDR_DDR BIT(5) 81 + #define OP_ADDR_BUSW(x) ((x) << 3) 82 + #define OP_CMD_DDR BIT(2) 83 + #define OP_CMD_BUSW(x) (x) 84 + #define OP_BUSW_1 0 85 + #define OP_BUSW_2 1 86 + #define OP_BUSW_4 2 87 + #define OP_BUSW_8 3 88 + 89 + #define OCTA_CRC 0x38 90 + #define OCTA_CRC_IN_EN(s) BIT(3 + ((s) * 16)) 91 + #define OCTA_CRC_CHUNK(s, x) ((fls((x) / 32)) << (1 + ((s) * 16))) 92 + #define OCTA_CRC_OUT_EN(s) BIT(0 + ((s) * 16)) 93 + 94 + #define ONFI_DIN_CNT(s) (0x3c + (s)) 95 + 96 + #define LRD_CTRL 0x48 97 + #define RWW_CTRL 0x74 98 + #define LWR_CTRL 0x84 99 + #define LMODE_EN BIT(31) 100 + #define LMODE_SLV_ACT(x) ((x) << 21) 101 + #define LMODE_CMD1(x) ((x) << 8) 102 + #define LMODE_CMD0(x) (x) 103 + 104 + #define LRD_ADDR 0x4c 105 + #define LWR_ADDR 0x88 106 + #define LRD_RANGE 0x50 107 + #define LWR_RANGE 0x8c 108 + 109 + #define AXI_SLV_ADDR 0x54 110 + 111 + #define DMAC_RD_CFG 0x58 112 + #define DMAC_WR_CFG 0x94 113 + #define DMAC_CFG_PERIPH_EN BIT(31) 114 + #define DMAC_CFG_ALLFLUSH_EN BIT(30) 115 + #define DMAC_CFG_LASTFLUSH_EN BIT(29) 116 + #define DMAC_CFG_QE(x) (((x) + 1) << 16) 117 + #define DMAC_CFG_BURST_LEN(x) (((x) + 1) << 12) 118 + #define DMAC_CFG_BURST_SZ(x) ((x) << 8) 119 + #define DMAC_CFG_DIR_READ BIT(1) 120 + #define DMAC_CFG_START BIT(0) 121 + 122 + #define DMAC_RD_CNT 0x5c 123 + #define DMAC_WR_CNT 0x98 124 + 125 + #define SDMA_ADDR 0x60 126 + 127 + #define DMAM_CFG 0x64 128 + #define DMAM_CFG_START BIT(31) 129 + #define DMAM_CFG_CONT BIT(30) 130 
+ #define DMAM_CFG_SDMA_GAP(x) (fls((x) / 8192) << 2) 131 + #define DMAM_CFG_DIR_READ BIT(1) 132 + #define DMAM_CFG_EN BIT(0) 133 + 134 + #define DMAM_CNT 0x68 135 + 136 + #define LNR_TIMER_TH 0x6c 137 + 138 + #define RDM_CFG0 0x78 139 + #define RDM_CFG0_POLY(x) (x) 140 + 141 + #define RDM_CFG1 0x7c 142 + #define RDM_CFG1_RDM_EN BIT(31) 143 + #define RDM_CFG1_SEED(x) (x) 144 + 145 + #define LWR_SUSP_CTRL 0x90 146 + #define LWR_SUSP_CTRL_EN BIT(31) 147 + 148 + #define DMAS_CTRL 0x9c 149 + #define DMAS_CTRL_EN BIT(31) 150 + #define DMAS_CTRL_DIR_READ BIT(30) 151 + 152 + #define DATA_STROB 0xa0 153 + #define DATA_STROB_EDO_EN BIT(2) 154 + #define DATA_STROB_INV_POL BIT(1) 155 + #define DATA_STROB_DELAY_2CYC BIT(0) 156 + 157 + #define IDLY_CODE(x) (0xa4 + ((x) * 4)) 158 + #define IDLY_CODE_VAL(x, v) ((v) << (((x) % 4) * 8)) 159 + 160 + #define GPIO 0xc4 161 + #define GPIO_PT(x) BIT(3 + ((x) * 16)) 162 + #define GPIO_RESET(x) BIT(2 + ((x) * 16)) 163 + #define GPIO_HOLDB(x) BIT(1 + ((x) * 16)) 164 + #define GPIO_WPB(x) BIT((x) * 16) 165 + 166 + #define HC_VER 0xd0 167 + 168 + #define HW_TEST(x) (0xe0 + ((x) * 4)) 169 + 170 + #define MXIC_NFC_MAX_CLK_HZ 50000000 171 + #define IRQ_TIMEOUT 1000 172 + 173 + struct mxic_nand_ctlr { 174 + struct clk *ps_clk; 175 + struct clk *send_clk; 176 + struct clk *send_dly_clk; 177 + struct completion complete; 178 + void __iomem *regs; 179 + struct nand_controller controller; 180 + struct device *dev; 181 + struct nand_chip chip; 182 + }; 183 + 184 + static int mxic_nfc_clk_enable(struct mxic_nand_ctlr *nfc) 185 + { 186 + int ret; 187 + 188 + ret = clk_prepare_enable(nfc->ps_clk); 189 + if (ret) 190 + return ret; 191 + 192 + ret = clk_prepare_enable(nfc->send_clk); 193 + if (ret) 194 + goto err_ps_clk; 195 + 196 + ret = clk_prepare_enable(nfc->send_dly_clk); 197 + if (ret) 198 + goto err_send_dly_clk; 199 + 200 + return ret; 201 + 202 + err_send_dly_clk: 203 + clk_disable_unprepare(nfc->send_clk); 204 + err_ps_clk: 205 + 
clk_disable_unprepare(nfc->ps_clk); 206 + 207 + return ret; 208 + } 209 + 210 + static void mxic_nfc_clk_disable(struct mxic_nand_ctlr *nfc) 211 + { 212 + clk_disable_unprepare(nfc->send_clk); 213 + clk_disable_unprepare(nfc->send_dly_clk); 214 + clk_disable_unprepare(nfc->ps_clk); 215 + } 216 + 217 + static void mxic_nfc_set_input_delay(struct mxic_nand_ctlr *nfc, u8 idly_code) 218 + { 219 + writel(IDLY_CODE_VAL(0, idly_code) | 220 + IDLY_CODE_VAL(1, idly_code) | 221 + IDLY_CODE_VAL(2, idly_code) | 222 + IDLY_CODE_VAL(3, idly_code), 223 + nfc->regs + IDLY_CODE(0)); 224 + writel(IDLY_CODE_VAL(4, idly_code) | 225 + IDLY_CODE_VAL(5, idly_code) | 226 + IDLY_CODE_VAL(6, idly_code) | 227 + IDLY_CODE_VAL(7, idly_code), 228 + nfc->regs + IDLY_CODE(1)); 229 + } 230 + 231 + static int mxic_nfc_clk_setup(struct mxic_nand_ctlr *nfc, unsigned long freq) 232 + { 233 + int ret; 234 + 235 + ret = clk_set_rate(nfc->send_clk, freq); 236 + if (ret) 237 + return ret; 238 + 239 + ret = clk_set_rate(nfc->send_dly_clk, freq); 240 + if (ret) 241 + return ret; 242 + 243 + /* 244 + * A constant delay range from 0x0 ~ 0x1F for input delay, 245 + * the unit is 78 ps, the max input delay is 2.418 ns. 246 + */ 247 + mxic_nfc_set_input_delay(nfc, 0xf); 248 + 249 + /* 250 + * Phase degree = 360 * freq * output-delay 251 + * where output-delay is a constant value 1 ns in FPGA. 
252 + * 253 + * Get Phase degree = 360 * freq * 1 ns 254 + * = 360 * freq * 1 sec / 1000000000 255 + * = 9 * freq / 25000000 256 + */ 257 + ret = clk_set_phase(nfc->send_dly_clk, 9 * freq / 25000000); 258 + if (ret) 259 + return ret; 260 + 261 + return 0; 262 + } 263 + 264 + static int mxic_nfc_set_freq(struct mxic_nand_ctlr *nfc, unsigned long freq) 265 + { 266 + int ret; 267 + 268 + if (freq > MXIC_NFC_MAX_CLK_HZ) 269 + freq = MXIC_NFC_MAX_CLK_HZ; 270 + 271 + mxic_nfc_clk_disable(nfc); 272 + ret = mxic_nfc_clk_setup(nfc, freq); 273 + if (ret) 274 + return ret; 275 + 276 + ret = mxic_nfc_clk_enable(nfc); 277 + if (ret) 278 + return ret; 279 + 280 + return 0; 281 + } 282 + 283 + static irqreturn_t mxic_nfc_isr(int irq, void *dev_id) 284 + { 285 + struct mxic_nand_ctlr *nfc = dev_id; 286 + u32 sts; 287 + 288 + sts = readl(nfc->regs + INT_STS); 289 + if (sts & INT_RDY_PIN) 290 + complete(&nfc->complete); 291 + else 292 + return IRQ_NONE; 293 + 294 + return IRQ_HANDLED; 295 + } 296 + 297 + static void mxic_nfc_hw_init(struct mxic_nand_ctlr *nfc) 298 + { 299 + writel(HC_CFG_NIO(8) | HC_CFG_TYPE(1, HC_CFG_TYPE_RAW_NAND) | 300 + HC_CFG_SLV_ACT(0) | HC_CFG_MAN_CS_EN | 301 + HC_CFG_IDLE_SIO_LVL(1), nfc->regs + HC_CFG); 302 + writel(INT_STS_ALL, nfc->regs + INT_STS_EN); 303 + writel(INT_RDY_PIN, nfc->regs + INT_SIG_EN); 304 + writel(0x0, nfc->regs + ONFI_DIN_CNT(0)); 305 + writel(0, nfc->regs + LRD_CFG); 306 + writel(0, nfc->regs + LRD_CTRL); 307 + writel(0x0, nfc->regs + HC_EN); 308 + } 309 + 310 + static void mxic_nfc_cs_enable(struct mxic_nand_ctlr *nfc) 311 + { 312 + writel(readl(nfc->regs + HC_CFG) | HC_CFG_MAN_CS_EN, 313 + nfc->regs + HC_CFG); 314 + writel(HC_CFG_MAN_CS_ASSERT | readl(nfc->regs + HC_CFG), 315 + nfc->regs + HC_CFG); 316 + } 317 + 318 + static void mxic_nfc_cs_disable(struct mxic_nand_ctlr *nfc) 319 + { 320 + writel(~HC_CFG_MAN_CS_ASSERT & readl(nfc->regs + HC_CFG), 321 + nfc->regs + HC_CFG); 322 + } 323 + 324 + static int mxic_nfc_wait_ready(struct 
nand_chip *chip) 325 + { 326 + struct mxic_nand_ctlr *nfc = nand_get_controller_data(chip); 327 + int ret; 328 + 329 + ret = wait_for_completion_timeout(&nfc->complete, 330 + msecs_to_jiffies(IRQ_TIMEOUT)); 331 + if (!ret) { 332 + dev_err(nfc->dev, "nand device timeout\n"); 333 + return -ETIMEDOUT; 334 + } 335 + 336 + return 0; 337 + } 338 + 339 + static int mxic_nfc_data_xfer(struct mxic_nand_ctlr *nfc, const void *txbuf, 340 + void *rxbuf, unsigned int len) 341 + { 342 + unsigned int pos = 0; 343 + 344 + while (pos < len) { 345 + unsigned int nbytes = len - pos; 346 + u32 data = 0xffffffff; 347 + u32 sts; 348 + int ret; 349 + 350 + if (nbytes > 4) 351 + nbytes = 4; 352 + 353 + if (txbuf) 354 + memcpy(&data, txbuf + pos, nbytes); 355 + 356 + ret = readl_poll_timeout(nfc->regs + INT_STS, sts, 357 + sts & INT_TX_EMPTY, 0, USEC_PER_SEC); 358 + if (ret) 359 + return ret; 360 + 361 + writel(data, nfc->regs + TXD(nbytes % 4)); 362 + 363 + ret = readl_poll_timeout(nfc->regs + INT_STS, sts, 364 + sts & INT_TX_EMPTY, 0, USEC_PER_SEC); 365 + if (ret) 366 + return ret; 367 + 368 + ret = readl_poll_timeout(nfc->regs + INT_STS, sts, 369 + sts & INT_RX_NOT_EMPTY, 0, 370 + USEC_PER_SEC); 371 + if (ret) 372 + return ret; 373 + 374 + data = readl(nfc->regs + RXD); 375 + if (rxbuf) { 376 + data >>= (8 * (4 - nbytes)); 377 + memcpy(rxbuf + pos, &data, nbytes); 378 + } 379 + if (readl(nfc->regs + INT_STS) & INT_RX_NOT_EMPTY) 380 + dev_warn(nfc->dev, "RX FIFO not empty\n"); 381 + 382 + pos += nbytes; 383 + } 384 + 385 + return 0; 386 + } 387 + 388 + static int mxic_nfc_exec_op(struct nand_chip *chip, 389 + const struct nand_operation *op, bool check_only) 390 + { 391 + struct mxic_nand_ctlr *nfc = nand_get_controller_data(chip); 392 + const struct nand_op_instr *instr = NULL; 393 + int ret = 0; 394 + unsigned int op_id; 395 + 396 + mxic_nfc_cs_enable(nfc); 397 + init_completion(&nfc->complete); 398 + for (op_id = 0; op_id < op->ninstrs; op_id++) { 399 + instr = &op->instrs[op_id]; 400 
+ 401 + switch (instr->type) { 402 + case NAND_OP_CMD_INSTR: 403 + writel(0, nfc->regs + HC_EN); 404 + writel(HC_EN_BIT, nfc->regs + HC_EN); 405 + writel(OP_CMD_BUSW(OP_BUSW_8) | OP_DUMMY_CYC(0x3F) | 406 + OP_CMD_BYTES(0), nfc->regs + SS_CTRL(0)); 407 + 408 + ret = mxic_nfc_data_xfer(nfc, 409 + &instr->ctx.cmd.opcode, 410 + NULL, 1); 411 + break; 412 + 413 + case NAND_OP_ADDR_INSTR: 414 + writel(OP_ADDR_BUSW(OP_BUSW_8) | OP_DUMMY_CYC(0x3F) | 415 + OP_ADDR_BYTES(instr->ctx.addr.naddrs), 416 + nfc->regs + SS_CTRL(0)); 417 + ret = mxic_nfc_data_xfer(nfc, 418 + instr->ctx.addr.addrs, NULL, 419 + instr->ctx.addr.naddrs); 420 + break; 421 + 422 + case NAND_OP_DATA_IN_INSTR: 423 + writel(0x0, nfc->regs + ONFI_DIN_CNT(0)); 424 + writel(OP_DATA_BUSW(OP_BUSW_8) | OP_DUMMY_CYC(0x3F) | 425 + OP_READ, nfc->regs + SS_CTRL(0)); 426 + ret = mxic_nfc_data_xfer(nfc, NULL, 427 + instr->ctx.data.buf.in, 428 + instr->ctx.data.len); 429 + break; 430 + 431 + case NAND_OP_DATA_OUT_INSTR: 432 + writel(instr->ctx.data.len, 433 + nfc->regs + ONFI_DIN_CNT(0)); 434 + writel(OP_DATA_BUSW(OP_BUSW_8) | OP_DUMMY_CYC(0x3F), 435 + nfc->regs + SS_CTRL(0)); 436 + ret = mxic_nfc_data_xfer(nfc, 437 + instr->ctx.data.buf.out, NULL, 438 + instr->ctx.data.len); 439 + break; 440 + 441 + case NAND_OP_WAITRDY_INSTR: 442 + ret = mxic_nfc_wait_ready(chip); 443 + break; 444 + } 445 + } 446 + mxic_nfc_cs_disable(nfc); 447 + 448 + return ret; 449 + } 450 + 451 + static int mxic_nfc_setup_data_interface(struct nand_chip *chip, int chipnr, 452 + const struct nand_data_interface *conf) 453 + { 454 + struct mxic_nand_ctlr *nfc = nand_get_controller_data(chip); 455 + const struct nand_sdr_timings *sdr; 456 + unsigned long freq; 457 + int ret; 458 + 459 + sdr = nand_get_sdr_timings(conf); 460 + if (IS_ERR(sdr)) 461 + return PTR_ERR(sdr); 462 + 463 + if (chipnr == NAND_DATA_IFACE_CHECK_ONLY) 464 + return 0; 465 + 466 + freq = NSEC_PER_SEC / (sdr->tRC_min / 1000); 467 + 468 + ret = mxic_nfc_set_freq(nfc, freq); 469 + if 
(ret) 470 + dev_err(nfc->dev, "set freq:%ld failed\n", freq); 471 + 472 + if (sdr->tRC_min < 30000) 473 + writel(DATA_STROB_EDO_EN, nfc->regs + DATA_STROB); 474 + 475 + return 0; 476 + } 477 + 478 + static const struct nand_controller_ops mxic_nand_controller_ops = { 479 + .exec_op = mxic_nfc_exec_op, 480 + .setup_data_interface = mxic_nfc_setup_data_interface, 481 + }; 482 + 483 + static int mxic_nfc_probe(struct platform_device *pdev) 484 + { 485 + struct device_node *nand_np, *np = pdev->dev.of_node; 486 + struct mtd_info *mtd; 487 + struct mxic_nand_ctlr *nfc; 488 + struct nand_chip *nand_chip; 489 + int err; 490 + int irq; 491 + 492 + nfc = devm_kzalloc(&pdev->dev, sizeof(struct mxic_nand_ctlr), 493 + GFP_KERNEL); 494 + if (!nfc) 495 + return -ENOMEM; 496 + 497 + nfc->ps_clk = devm_clk_get(&pdev->dev, "ps"); 498 + if (IS_ERR(nfc->ps_clk)) 499 + return PTR_ERR(nfc->ps_clk); 500 + 501 + nfc->send_clk = devm_clk_get(&pdev->dev, "send"); 502 + if (IS_ERR(nfc->send_clk)) 503 + return PTR_ERR(nfc->send_clk); 504 + 505 + nfc->send_dly_clk = devm_clk_get(&pdev->dev, "send_dly"); 506 + if (IS_ERR(nfc->send_dly_clk)) 507 + return PTR_ERR(nfc->send_dly_clk); 508 + 509 + nfc->regs = devm_platform_ioremap_resource(pdev, 0); 510 + if (IS_ERR(nfc->regs)) 511 + return PTR_ERR(nfc->regs); 512 + 513 + nand_chip = &nfc->chip; 514 + mtd = nand_to_mtd(nand_chip); 515 + mtd->dev.parent = &pdev->dev; 516 + 517 + for_each_child_of_node(np, nand_np) 518 + nand_set_flash_node(nand_chip, nand_np); 519 + 520 + nand_chip->priv = nfc; 521 + nfc->dev = &pdev->dev; 522 + nfc->controller.ops = &mxic_nand_controller_ops; 523 + nand_controller_init(&nfc->controller); 524 + nand_chip->controller = &nfc->controller; 525 + 526 + irq = platform_get_irq(pdev, 0); 527 + if (irq < 0) { 528 + dev_err(&pdev->dev, "failed to retrieve irq\n"); 529 + return irq; 530 + } 531 + 532 + mxic_nfc_hw_init(nfc); 533 + 534 + err = devm_request_irq(&pdev->dev, irq, mxic_nfc_isr, 535 + 0, "mxic-nfc", nfc); 536 + if 
(err) 537 + goto fail; 538 + 539 + err = nand_scan(nand_chip, 1); 540 + if (err) 541 + goto fail; 542 + 543 + err = mtd_device_register(mtd, NULL, 0); 544 + if (err) 545 + goto fail; 546 + 547 + platform_set_drvdata(pdev, nfc); 548 + return 0; 549 + 550 + fail: 551 + mxic_nfc_clk_disable(nfc); 552 + return err; 553 + } 554 + 555 + static int mxic_nfc_remove(struct platform_device *pdev) 556 + { 557 + struct mxic_nand_ctlr *nfc = platform_get_drvdata(pdev); 558 + 559 + nand_release(&nfc->chip); 560 + mxic_nfc_clk_disable(nfc); 561 + return 0; 562 + } 563 + 564 + static const struct of_device_id mxic_nfc_of_ids[] = { 565 + { .compatible = "mxic,multi-itfc-v009-nand-controller", }, 566 + {}, 567 + }; 568 + MODULE_DEVICE_TABLE(of, mxic_nfc_of_ids); 569 + 570 + static struct platform_driver mxic_nfc_driver = { 571 + .probe = mxic_nfc_probe, 572 + .remove = mxic_nfc_remove, 573 + .driver = { 574 + .name = "mxic-nfc", 575 + .of_match_table = mxic_nfc_of_ids, 576 + }, 577 + }; 578 + module_platform_driver(mxic_nfc_driver); 579 + 580 + MODULE_AUTHOR("Mason Yang <masonccyang@mxic.com.tw>"); 581 + MODULE_DESCRIPTION("Macronix raw NAND controller driver"); 582 + MODULE_LICENSE("GPL v2");
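The new driver's mxic_nfc_data_xfer() moves data through a 32-bit FIFO at most 4 bytes per iteration, padding a short final transmit chunk with 0xff. A simplified userspace model of that chunking loop (the hardware TXD/RXD registers and the short-read shift are replaced by a plain loopback copy, so this is only a sketch of the buffer walk, not of the register protocol):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Move len bytes in chunks of at most 4, as the controller's FIFO does. */
static void xfer_model(const uint8_t *txbuf, uint8_t *rxbuf, unsigned int len)
{
	unsigned int pos = 0;

	while (pos < len) {
		unsigned int nbytes = len - pos;
		uint32_t data = 0xffffffff;	/* pad short chunks with 0xff */

		if (nbytes > 4)
			nbytes = 4;

		if (txbuf)
			memcpy(&data, txbuf + pos, nbytes);
		/* the real driver writes TXD(nbytes % 4) and reads RXD here */
		if (rxbuf)
			memcpy(rxbuf + pos, &data, nbytes);
		pos += nbytes;
	}
}
```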
+1 -1
drivers/mtd/nand/raw/nand_base.c
···
4112 4112 			struct mtd_oob_ops *ops)
4113 4113 {
4114 4114 	struct nand_chip *chip = mtd_to_nand(mtd);
4115     - 	int ret = -ENOTSUPP;
    4115 + 	int ret;
4116 4116
4117 4117 	ops->retlen = 0;
+6 -4
drivers/mtd/nand/raw/nand_bbt.c
···
1232 1232 	if (!td) {
1233 1233 		if ((res = nand_memory_bbt(this, bd))) {
1234 1234 			pr_err("nand_bbt: can't scan flash and build the RAM-based BBT\n");
1235     - 			goto err;
    1235 + 			goto err_free_bbt;
1236 1236 		}
1237 1237 		return 0;
1238 1238 	}
···
1245 1245 	buf = vmalloc(len);
1246 1246 	if (!buf) {
1247 1247 		res = -ENOMEM;
1248     - 		goto err;
    1248 + 		goto err_free_bbt;
1249 1249 	}
1250 1250
1251 1251 	/* Is the bbt at a given page? */
···
1258 1258
1259 1259 	res = check_create(this, buf, bd);
1260 1260 	if (res)
1261     - 		goto err;
    1261 + 		goto err_free_buf;
1262 1262
1263 1263 	/* Prevent the bbt regions from erasing / writing */
1264 1264 	mark_bbt_region(this, td);
···
1268 1268 	vfree(buf);
1269 1269 	return 0;
1270 1270
1271     - err:
    1271 + err_free_buf:
    1272 + 	vfree(buf);
    1273 + err_free_bbt:
1272 1274 	kfree(this->bbt);
1273 1275 	this->bbt = NULL;
1274 1276 	return res;
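The nand_bbt fix splits the single `err:` label in two so each failure point frees exactly what has been allocated so far: failures before the `vmalloc()` jump to `err_free_bbt`, later ones to `err_free_buf`, which falls through. A minimal userspace sketch of the same ordered-unwind pattern using `malloc()` in place of the kernel allocators:

```c
#include <assert.h>
#include <stdlib.h>

/* Two-stage allocation with reverse-order unwind labels. */
static int build_tables(int fail_early, int fail_late)
{
	int res = 0;
	char *bbt, *buf = NULL;

	bbt = malloc(64);
	if (!bbt)
		return -1;

	if (fail_early) {
		res = -1;
		goto err_free_bbt;	/* buf not allocated yet */
	}

	buf = malloc(4096);
	if (!buf) {
		res = -1;
		goto err_free_bbt;
	}

	if (fail_late) {
		res = -1;
		goto err_free_buf;	/* unwind in reverse order */
	}

	free(buf);
	free(bbt);
	return 0;

err_free_buf:
	free(buf);
err_free_bbt:
	free(bbt);
	return res;
}
```

The old single label freed both objects unconditionally, which is wrong for paths where the second allocation never happened.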
-304
drivers/mtd/nand/raw/nuc900_nand.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Copyright © 2009 Nuvoton technology corporation. 4 - * 5 - * Wan ZongShun <mcuos.com@gmail.com> 6 - */ 7 - 8 - #include <linux/slab.h> 9 - #include <linux/module.h> 10 - #include <linux/interrupt.h> 11 - #include <linux/io.h> 12 - #include <linux/platform_device.h> 13 - #include <linux/delay.h> 14 - #include <linux/clk.h> 15 - #include <linux/err.h> 16 - 17 - #include <linux/mtd/mtd.h> 18 - #include <linux/mtd/rawnand.h> 19 - #include <linux/mtd/partitions.h> 20 - 21 - #define REG_FMICSR 0x00 22 - #define REG_SMCSR 0xa0 23 - #define REG_SMISR 0xac 24 - #define REG_SMCMD 0xb0 25 - #define REG_SMADDR 0xb4 26 - #define REG_SMDATA 0xb8 27 - 28 - #define RESET_FMI 0x01 29 - #define NAND_EN 0x08 30 - #define READYBUSY (0x01 << 18) 31 - 32 - #define SWRST 0x01 33 - #define PSIZE (0x01 << 3) 34 - #define DMARWEN (0x03 << 1) 35 - #define BUSWID (0x01 << 4) 36 - #define ECC4EN (0x01 << 5) 37 - #define WP (0x01 << 24) 38 - #define NANDCS (0x01 << 25) 39 - #define ENDADDR (0x01 << 31) 40 - 41 - #define read_data_reg(dev) \ 42 - __raw_readl((dev)->reg + REG_SMDATA) 43 - 44 - #define write_data_reg(dev, val) \ 45 - __raw_writel((val), (dev)->reg + REG_SMDATA) 46 - 47 - #define write_cmd_reg(dev, val) \ 48 - __raw_writel((val), (dev)->reg + REG_SMCMD) 49 - 50 - #define write_addr_reg(dev, val) \ 51 - __raw_writel((val), (dev)->reg + REG_SMADDR) 52 - 53 - struct nuc900_nand { 54 - struct nand_chip chip; 55 - void __iomem *reg; 56 - struct clk *clk; 57 - spinlock_t lock; 58 - }; 59 - 60 - static inline struct nuc900_nand *mtd_to_nuc900(struct mtd_info *mtd) 61 - { 62 - return container_of(mtd_to_nand(mtd), struct nuc900_nand, chip); 63 - } 64 - 65 - static const struct mtd_partition partitions[] = { 66 - { 67 - .name = "NAND FS 0", 68 - .offset = 0, 69 - .size = 8 * 1024 * 1024 70 - }, 71 - { 72 - .name = "NAND FS 1", 73 - .offset = MTDPART_OFS_APPEND, 74 - .size = MTDPART_SIZ_FULL 75 - } 76 - }; 77 - 78 - static 
unsigned char nuc900_nand_read_byte(struct nand_chip *chip) 79 - { 80 - unsigned char ret; 81 - struct nuc900_nand *nand = mtd_to_nuc900(nand_to_mtd(chip)); 82 - 83 - ret = (unsigned char)read_data_reg(nand); 84 - 85 - return ret; 86 - } 87 - 88 - static void nuc900_nand_read_buf(struct nand_chip *chip, 89 - unsigned char *buf, int len) 90 - { 91 - int i; 92 - struct nuc900_nand *nand = mtd_to_nuc900(nand_to_mtd(chip)); 93 - 94 - for (i = 0; i < len; i++) 95 - buf[i] = (unsigned char)read_data_reg(nand); 96 - } 97 - 98 - static void nuc900_nand_write_buf(struct nand_chip *chip, 99 - const unsigned char *buf, int len) 100 - { 101 - int i; 102 - struct nuc900_nand *nand = mtd_to_nuc900(nand_to_mtd(chip)); 103 - 104 - for (i = 0; i < len; i++) 105 - write_data_reg(nand, buf[i]); 106 - } 107 - 108 - static int nuc900_check_rb(struct nuc900_nand *nand) 109 - { 110 - unsigned int val; 111 - spin_lock(&nand->lock); 112 - val = __raw_readl(nand->reg + REG_SMISR); 113 - val &= READYBUSY; 114 - spin_unlock(&nand->lock); 115 - 116 - return val; 117 - } 118 - 119 - static int nuc900_nand_devready(struct nand_chip *chip) 120 - { 121 - struct nuc900_nand *nand = mtd_to_nuc900(nand_to_mtd(chip)); 122 - int ready; 123 - 124 - ready = (nuc900_check_rb(nand)) ? 
1 : 0; 125 - return ready; 126 - } 127 - 128 - static void nuc900_nand_command_lp(struct nand_chip *chip, 129 - unsigned int command, 130 - int column, int page_addr) 131 - { 132 - struct mtd_info *mtd = nand_to_mtd(chip); 133 - struct nuc900_nand *nand = mtd_to_nuc900(mtd); 134 - 135 - if (command == NAND_CMD_READOOB) { 136 - column += mtd->writesize; 137 - command = NAND_CMD_READ0; 138 - } 139 - 140 - write_cmd_reg(nand, command & 0xff); 141 - 142 - if (column != -1 || page_addr != -1) { 143 - 144 - if (column != -1) { 145 - if (chip->options & NAND_BUSWIDTH_16 && 146 - !nand_opcode_8bits(command)) 147 - column >>= 1; 148 - write_addr_reg(nand, column); 149 - write_addr_reg(nand, column >> 8 | ENDADDR); 150 - } 151 - if (page_addr != -1) { 152 - write_addr_reg(nand, page_addr); 153 - 154 - if (chip->options & NAND_ROW_ADDR_3) { 155 - write_addr_reg(nand, page_addr >> 8); 156 - write_addr_reg(nand, page_addr >> 16 | ENDADDR); 157 - } else { 158 - write_addr_reg(nand, page_addr >> 8 | ENDADDR); 159 - } 160 - } 161 - } 162 - 163 - switch (command) { 164 - case NAND_CMD_CACHEDPROG: 165 - case NAND_CMD_PAGEPROG: 166 - case NAND_CMD_ERASE1: 167 - case NAND_CMD_ERASE2: 168 - case NAND_CMD_SEQIN: 169 - case NAND_CMD_RNDIN: 170 - case NAND_CMD_STATUS: 171 - return; 172 - 173 - case NAND_CMD_RESET: 174 - if (chip->legacy.dev_ready) 175 - break; 176 - udelay(chip->legacy.chip_delay); 177 - 178 - write_cmd_reg(nand, NAND_CMD_STATUS); 179 - write_cmd_reg(nand, command); 180 - 181 - while (!nuc900_check_rb(nand)) 182 - ; 183 - 184 - return; 185 - 186 - case NAND_CMD_RNDOUT: 187 - write_cmd_reg(nand, NAND_CMD_RNDOUTSTART); 188 - return; 189 - 190 - case NAND_CMD_READ0: 191 - write_cmd_reg(nand, NAND_CMD_READSTART); 192 - /* fall through */ 193 - 194 - default: 195 - 196 - if (!chip->legacy.dev_ready) { 197 - udelay(chip->legacy.chip_delay); 198 - return; 199 - } 200 - } 201 - 202 - /* Apply this short delay always to ensure that we do wait tWB in 203 - * any case on any 
machine. */ 204 - ndelay(100); 205 - 206 - while (!chip->legacy.dev_ready(chip)) 207 - ; 208 - } 209 - 210 - 211 - static void nuc900_nand_enable(struct nuc900_nand *nand) 212 - { 213 - unsigned int val; 214 - spin_lock(&nand->lock); 215 - __raw_writel(RESET_FMI, (nand->reg + REG_FMICSR)); 216 - 217 - val = __raw_readl(nand->reg + REG_FMICSR); 218 - 219 - if (!(val & NAND_EN)) 220 - __raw_writel(val | NAND_EN, nand->reg + REG_FMICSR); 221 - 222 - val = __raw_readl(nand->reg + REG_SMCSR); 223 - 224 - val &= ~(SWRST|PSIZE|DMARWEN|BUSWID|ECC4EN|NANDCS); 225 - val |= WP; 226 - 227 - __raw_writel(val, nand->reg + REG_SMCSR); 228 - 229 - spin_unlock(&nand->lock); 230 - } 231 - 232 - static int nuc900_nand_probe(struct platform_device *pdev) 233 - { 234 - struct nuc900_nand *nuc900_nand; 235 - struct nand_chip *chip; 236 - struct mtd_info *mtd; 237 - struct resource *res; 238 - 239 - nuc900_nand = devm_kzalloc(&pdev->dev, sizeof(struct nuc900_nand), 240 - GFP_KERNEL); 241 - if (!nuc900_nand) 242 - return -ENOMEM; 243 - chip = &(nuc900_nand->chip); 244 - mtd = nand_to_mtd(chip); 245 - 246 - mtd->dev.parent = &pdev->dev; 247 - spin_lock_init(&nuc900_nand->lock); 248 - 249 - nuc900_nand->clk = devm_clk_get(&pdev->dev, NULL); 250 - if (IS_ERR(nuc900_nand->clk)) 251 - return -ENOENT; 252 - clk_enable(nuc900_nand->clk); 253 - 254 - chip->legacy.cmdfunc = nuc900_nand_command_lp; 255 - chip->legacy.dev_ready = nuc900_nand_devready; 256 - chip->legacy.read_byte = nuc900_nand_read_byte; 257 - chip->legacy.write_buf = nuc900_nand_write_buf; 258 - chip->legacy.read_buf = nuc900_nand_read_buf; 259 - chip->legacy.chip_delay = 50; 260 - chip->options = 0; 261 - chip->ecc.mode = NAND_ECC_SOFT; 262 - chip->ecc.algo = NAND_ECC_HAMMING; 263 - 264 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 265 - nuc900_nand->reg = devm_ioremap_resource(&pdev->dev, res); 266 - if (IS_ERR(nuc900_nand->reg)) 267 - return PTR_ERR(nuc900_nand->reg); 268 - 269 - nuc900_nand_enable(nuc900_nand); 270 - 
271 - if (nand_scan(chip, 1)) 272 - return -ENXIO; 273 - 274 - mtd_device_register(mtd, partitions, ARRAY_SIZE(partitions)); 275 - 276 - platform_set_drvdata(pdev, nuc900_nand); 277 - 278 - return 0; 279 - } 280 - 281 - static int nuc900_nand_remove(struct platform_device *pdev) 282 - { 283 - struct nuc900_nand *nuc900_nand = platform_get_drvdata(pdev); 284 - 285 - nand_release(&nuc900_nand->chip); 286 - clk_disable(nuc900_nand->clk); 287 - 288 - return 0; 289 - } 290 - 291 - static struct platform_driver nuc900_nand_driver = { 292 - .probe = nuc900_nand_probe, 293 - .remove = nuc900_nand_remove, 294 - .driver = { 295 - .name = "nuc900-fmi", 296 - }, 297 - }; 298 - 299 - module_platform_driver(nuc900_nand_driver); 300 - 301 - MODULE_AUTHOR("Wan ZongShun <mcuos.com@gmail.com>"); 302 - MODULE_DESCRIPTION("w90p910/NUC9xx nand driver!"); 303 - MODULE_LICENSE("GPL"); 304 - MODULE_ALIAS("platform:nuc900-fmi");
+1 -1
drivers/mtd/nand/raw/omap2.c
··· 1501 1501 } 1502 1502 1503 1503 /* Update number of correctable errors */ 1504 - stat += err_vec[i].error_count; 1504 + stat = max_t(unsigned int, stat, err_vec[i].error_count); 1505 1505 1506 1506 /* Update page data with sector size */ 1507 1507 data += ecc->size;
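The omap2.c hunk above reflects the raw NAND contract: a page-read callback reports the *maximum* number of bitflips seen in any single ECC sector, not the sum across sectors, because the MTD core compares that value against `mtd->bitflip_threshold` per read. A minimal sketch of the corrected accounting (function and parameter names are invented for illustration; the error counts are made-up sample data):

```c
#include <assert.h>

/*
 * Report the worst sector, mirroring the switch from "stat +=" to
 * "stat = max_t(...)" in the hunk above.
 */
static unsigned int worst_sector_bitflips(const unsigned int *error_count,
					  int nsectors)
{
	unsigned int stat = 0;
	int i;

	for (i = 0; i < nsectors; i++)
		if (error_count[i] > stat)
			stat = error_count[i];
	return stat;
}
```

With the old `+=` accounting, four sectors with one bitflip each would have been reported as four bitflips in one step, tripping the threshold too early.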
+5 -3
drivers/mtd/nand/raw/oxnas_nand.c
··· 116 116 GFP_KERNEL); 117 117 if (!chip) { 118 118 err = -ENOMEM; 119 - goto err_clk_unprepare; 119 + goto err_release_child; 120 120 } 121 121 122 122 chip->controller = &oxnas->base; ··· 137 137 /* Scan to find existence of the device */ 138 138 err = nand_scan(chip, 1); 139 139 if (err) 140 - goto err_clk_unprepare; 140 + goto err_release_child; 141 141 142 142 err = mtd_device_register(mtd, NULL, 0); 143 143 if (err) { 144 144 nand_release(chip); 145 - goto err_clk_unprepare; 145 + goto err_release_child; 146 146 } 147 147 148 148 oxnas->chips[nchips] = chip; ··· 159 159 160 160 return 0; 161 161 162 + err_release_child: 163 + of_node_put(nand_np); 162 164 err_clk_unprepare: 163 165 clk_disable_unprepare(oxnas->clk); 164 166 return err;
+2 -2
drivers/mtd/nand/raw/r852.c
··· 998 998 #ifdef CONFIG_PM_SLEEP 999 999 static int r852_suspend(struct device *device) 1000 1000 { 1001 - struct r852_device *dev = pci_get_drvdata(to_pci_dev(device)); 1001 + struct r852_device *dev = dev_get_drvdata(device); 1002 1002 1003 1003 if (dev->ctlreg & R852_CTL_CARDENABLE) 1004 1004 return -EBUSY; ··· 1019 1019 1020 1020 static int r852_resume(struct device *device) 1021 1021 { 1022 - struct r852_device *dev = pci_get_drvdata(to_pci_dev(device)); 1022 + struct r852_device *dev = dev_get_drvdata(device); 1023 1023 1024 1024 r852_disable_irqs(dev); 1025 1025 r852_card_update_present(dev);
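The r852.c hunk swaps `pci_get_drvdata(to_pci_dev(device))` for the shorter `dev_get_drvdata(device)`. The two are equivalent because the PCI accessor just operates on the `struct device` embedded in `struct pci_dev`. A toy model of that layering (structures heavily simplified, for illustration only):

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structures. */
struct device {
	void *driver_data;
};

struct pci_dev {
	struct device dev;	/* drvdata lives here */
};

static void *dev_get_drvdata(const struct device *dev)
{
	return dev->driver_data;
}

/* The PCI accessor is just the generic one on the embedded device. */
static void *pci_get_drvdata(struct pci_dev *pdev)
{
	return dev_get_drvdata(&pdev->dev);
}

static int drvdata_paths_agree(void)
{
	struct pci_dev pdev;
	int payload;

	pdev.dev.driver_data = &payload;
	return pci_get_drvdata(&pdev) == dev_get_drvdata(&pdev.dev);
}
```

Dropping the `to_pci_dev()` round trip removes a needless container_of-style conversion in the PM callbacks, which already receive the `struct device`.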
+27 -59
drivers/mtd/nand/raw/stm32_fmc2_nand.c
··· 1427 1427 struct stm32_fmc2_timings *tims = &nand->timings; 1428 1428 unsigned long hclk = clk_get_rate(fmc2->clk); 1429 1429 unsigned long hclkp = NSEC_PER_SEC / (hclk / 1000); 1430 - int tar, tclr, thiz, twait, tset_mem, tset_att, thold_mem, thold_att; 1430 + unsigned long timing, tar, tclr, thiz, twait; 1431 + unsigned long tset_mem, tset_att, thold_mem, thold_att; 1431 1432 1432 - tar = hclkp; 1433 - if (tar < sdrt->tAR_min) 1434 - tar = sdrt->tAR_min; 1435 - tims->tar = DIV_ROUND_UP(tar, hclkp) - 1; 1436 - if (tims->tar > FMC2_PCR_TIMING_MASK) 1437 - tims->tar = FMC2_PCR_TIMING_MASK; 1433 + tar = max_t(unsigned long, hclkp, sdrt->tAR_min); 1434 + timing = DIV_ROUND_UP(tar, hclkp) - 1; 1435 + tims->tar = min_t(unsigned long, timing, FMC2_PCR_TIMING_MASK); 1438 1436 1439 - tclr = hclkp; 1440 - if (tclr < sdrt->tCLR_min) 1441 - tclr = sdrt->tCLR_min; 1442 - tims->tclr = DIV_ROUND_UP(tclr, hclkp) - 1; 1443 - if (tims->tclr > FMC2_PCR_TIMING_MASK) 1444 - tims->tclr = FMC2_PCR_TIMING_MASK; 1437 + tclr = max_t(unsigned long, hclkp, sdrt->tCLR_min); 1438 + timing = DIV_ROUND_UP(tclr, hclkp) - 1; 1439 + tims->tclr = min_t(unsigned long, timing, FMC2_PCR_TIMING_MASK); 1445 1440 1446 1441 tims->thiz = FMC2_THIZ; 1447 1442 thiz = (tims->thiz + 1) * hclkp; ··· 1446 1451 * tWAIT > tWP 1447 1452 * tWAIT > tREA + tIO 1448 1453 */ 1449 - twait = hclkp; 1450 - if (twait < sdrt->tRP_min) 1451 - twait = sdrt->tRP_min; 1452 - if (twait < sdrt->tWP_min) 1453 - twait = sdrt->tWP_min; 1454 - if (twait < sdrt->tREA_max + FMC2_TIO) 1455 - twait = sdrt->tREA_max + FMC2_TIO; 1456 - tims->twait = DIV_ROUND_UP(twait, hclkp); 1457 - if (tims->twait == 0) 1458 - tims->twait = 1; 1459 - else if (tims->twait > FMC2_PMEM_PATT_TIMING_MASK) 1460 - tims->twait = FMC2_PMEM_PATT_TIMING_MASK; 1454 + twait = max_t(unsigned long, hclkp, sdrt->tRP_min); 1455 + twait = max_t(unsigned long, twait, sdrt->tWP_min); 1456 + twait = max_t(unsigned long, twait, sdrt->tREA_max + FMC2_TIO); 1457 + timing = 
DIV_ROUND_UP(twait, hclkp); 1458 + tims->twait = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK); 1461 1459 1462 1460 /* 1463 1461 * tSETUP_MEM > tCS - tWAIT ··· 1465 1477 if (twait > thiz && (sdrt->tDS_min > twait - thiz) && 1466 1478 (tset_mem < sdrt->tDS_min - (twait - thiz))) 1467 1479 tset_mem = sdrt->tDS_min - (twait - thiz); 1468 - tims->tset_mem = DIV_ROUND_UP(tset_mem, hclkp); 1469 - if (tims->tset_mem == 0) 1470 - tims->tset_mem = 1; 1471 - else if (tims->tset_mem > FMC2_PMEM_PATT_TIMING_MASK) 1472 - tims->tset_mem = FMC2_PMEM_PATT_TIMING_MASK; 1480 + timing = DIV_ROUND_UP(tset_mem, hclkp); 1481 + tims->tset_mem = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK); 1473 1482 1474 1483 /* 1475 1484 * tHOLD_MEM > tCH 1476 1485 * tHOLD_MEM > tREH - tSETUP_MEM 1477 1486 * tHOLD_MEM > max(tRC, tWC) - (tSETUP_MEM + tWAIT) 1478 1487 */ 1479 - thold_mem = hclkp; 1480 - if (thold_mem < sdrt->tCH_min) 1481 - thold_mem = sdrt->tCH_min; 1488 + thold_mem = max_t(unsigned long, hclkp, sdrt->tCH_min); 1482 1489 if (sdrt->tREH_min > tset_mem && 1483 1490 (thold_mem < sdrt->tREH_min - tset_mem)) 1484 1491 thold_mem = sdrt->tREH_min - tset_mem; ··· 1483 1500 if ((sdrt->tWC_min > tset_mem + twait) && 1484 1501 (thold_mem < sdrt->tWC_min - (tset_mem + twait))) 1485 1502 thold_mem = sdrt->tWC_min - (tset_mem + twait); 1486 - tims->thold_mem = DIV_ROUND_UP(thold_mem, hclkp); 1487 - if (tims->thold_mem == 0) 1488 - tims->thold_mem = 1; 1489 - else if (tims->thold_mem > FMC2_PMEM_PATT_TIMING_MASK) 1490 - tims->thold_mem = FMC2_PMEM_PATT_TIMING_MASK; 1503 + timing = DIV_ROUND_UP(thold_mem, hclkp); 1504 + tims->thold_mem = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK); 1491 1505 1492 1506 /* 1493 1507 * tSETUP_ATT > tCS - tWAIT ··· 1506 1526 if (twait > thiz && (sdrt->tDS_min > twait - thiz) && 1507 1527 (tset_att < sdrt->tDS_min - (twait - thiz))) 1508 1528 tset_att = sdrt->tDS_min - (twait - thiz); 1509 - tims->tset_att = DIV_ROUND_UP(tset_att, hclkp); 1510 - if 
(tims->tset_att == 0) 1511 - tims->tset_att = 1; 1512 - else if (tims->tset_att > FMC2_PMEM_PATT_TIMING_MASK) 1513 - tims->tset_att = FMC2_PMEM_PATT_TIMING_MASK; 1529 + timing = DIV_ROUND_UP(tset_att, hclkp); 1530 + tims->tset_att = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK); 1514 1531 1515 1532 /* 1516 1533 * tHOLD_ATT > tALH ··· 1522 1545 * tHOLD_ATT > tRC - (tSETUP_ATT + tWAIT) 1523 1546 * tHOLD_ATT > tWC - (tSETUP_ATT + tWAIT) 1524 1547 */ 1525 - thold_att = hclkp; 1526 - if (thold_att < sdrt->tALH_min) 1527 - thold_att = sdrt->tALH_min; 1528 - if (thold_att < sdrt->tCH_min) 1529 - thold_att = sdrt->tCH_min; 1530 - if (thold_att < sdrt->tCLH_min) 1531 - thold_att = sdrt->tCLH_min; 1532 - if (thold_att < sdrt->tCOH_min) 1533 - thold_att = sdrt->tCOH_min; 1534 - if (thold_att < sdrt->tDH_min) 1535 - thold_att = sdrt->tDH_min; 1548 + thold_att = max_t(unsigned long, hclkp, sdrt->tALH_min); 1549 + thold_att = max_t(unsigned long, thold_att, sdrt->tCH_min); 1550 + thold_att = max_t(unsigned long, thold_att, sdrt->tCLH_min); 1551 + thold_att = max_t(unsigned long, thold_att, sdrt->tCOH_min); 1552 + thold_att = max_t(unsigned long, thold_att, sdrt->tDH_min); 1536 1553 if ((sdrt->tWB_max + FMC2_TIO + FMC2_TSYNC > tset_mem) && 1537 1554 (thold_att < sdrt->tWB_max + FMC2_TIO + FMC2_TSYNC - tset_mem)) 1538 1555 thold_att = sdrt->tWB_max + FMC2_TIO + FMC2_TSYNC - tset_mem; ··· 1545 1574 if ((sdrt->tWC_min > tset_att + twait) && 1546 1575 (thold_att < sdrt->tWC_min - (tset_att + twait))) 1547 1576 thold_att = sdrt->tWC_min - (tset_att + twait); 1548 - tims->thold_att = DIV_ROUND_UP(thold_att, hclkp); 1549 - if (tims->thold_att == 0) 1550 - tims->thold_att = 1; 1551 - else if (tims->thold_att > FMC2_PMEM_PATT_TIMING_MASK) 1552 - tims->thold_att = FMC2_PMEM_PATT_TIMING_MASK; 1577 + timing = DIV_ROUND_UP(thold_att, hclkp); 1578 + tims->thold_att = clamp_val(timing, 1, FMC2_PMEM_PATT_TIMING_MASK); 1553 1579 } 1554 1580 1555 1581 static int 
stm32_fmc2_setup_interface(struct nand_chip *chip, int chipnr,
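The stm32_fmc2_nand.c rework replaces the repeated "raise with an if-ladder, then bound" pattern with `max_t()`/`clamp_val()`. Both forms compute the same timing value; a sketch of the equivalence for the `tar` case, with local macro stand-ins for the kernel helpers (values below are illustrative, not real FMC2 timings):

```c
#include <assert.h>

#define MAX(a, b)		((a) > (b) ? (a) : (b))
#define MIN(a, b)		((a) < (b) ? (a) : (b))
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Old shape: grow the value with ifs, then cap the derived timing. */
static unsigned long tar_old(unsigned long hclkp, unsigned long tar_min,
			     unsigned long mask)
{
	unsigned long tar = hclkp, timing;

	if (tar < tar_min)
		tar = tar_min;
	timing = DIV_ROUND_UP(tar, hclkp) - 1;
	if (timing > mask)
		timing = mask;
	return timing;
}

/* New shape: the same arithmetic via max/min helpers. */
static unsigned long tar_new(unsigned long hclkp, unsigned long tar_min,
			     unsigned long mask)
{
	unsigned long tar = MAX(hclkp, tar_min);

	return MIN(DIV_ROUND_UP(tar, hclkp) - 1, mask);
}
```

The refactor also widens the locals from `int` to `unsigned long`, matching the types of `hclkp` and the `sdrt` fields they are compared against.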
+1
drivers/mtd/nand/raw/tango_nand.c
··· 659 659 err = chip_init(&pdev->dev, np); 660 660 if (err) { 661 661 tango_nand_remove(pdev); 662 + of_node_put(np); 662 663 return err; 663 664 } 664 665 }
+1
drivers/mtd/nand/raw/vf610_nfc.c
··· 862 862 dev_err(nfc->dev, 863 863 "Only one NAND chip supported!\n"); 864 864 err = -EINVAL; 865 + of_node_put(child); 865 866 goto err_disable_clk; 866 867 } 867 868
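Several hunks in this pull (oxnas, tango, vf610, plus aspeed-smc and hisi-sfc below) add the same kind of fix: the `for_each_*child_of_node()` iterators hold a reference on the current child, so breaking out of the loop early must drop it with `of_node_put()`. A toy refcount model of the pattern (types and names invented for this sketch; real `device_node` refcounting uses kobjects):

```c
#include <assert.h>

/* Toy refcounted node standing in for struct device_node. */
struct node {
	int refcount;
};

static void node_get(struct node *n) { n->refcount++; }
static void node_put(struct node *n) { n->refcount--; }

/*
 * Iterate like for_each_available_child_of_node(): a reference is taken
 * on each child and dropped when advancing.  On an early exit the loop
 * no longer drops it, so the error path must - that is the added
 * of_node_put() in the hunks above.
 */
static int scan_children(struct node *children, int n, int bail_at)
{
	int i;

	for (i = 0; i < n; i++) {
		node_get(&children[i]);
		if (i == bail_at) {
			node_put(&children[i]);	/* the fix */
			return -1;
		}
		node_put(&children[i]);
	}
	return 0;
}

static int refs_balanced(void)
{
	struct node kids[3] = { {0}, {0}, {0} };
	int i;

	scan_children(kids, 3, 1);
	for (i = 0; i < 3; i++)
		if (kids[i].refcount != 0)
			return 0;
	return 1;
}
```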
drivers/mtd/ofpart.c drivers/mtd/parsers/ofpart.c
+68
drivers/mtd/parsers/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 + config MTD_AR7_PARTS 3 + tristate "TI AR7 partitioning parser" 4 + help 5 + TI AR7 partitioning parser support 6 + 7 + config MTD_BCM47XX_PARTS 8 + tristate "BCM47XX partitioning parser" 9 + depends on BCM47XX || ARCH_BCM_5301X 10 + help 11 + This provides partitions parser for devices based on BCM47xx 12 + boards. 13 + 14 + config MTD_BCM63XX_PARTS 15 + tristate "BCM63XX CFE partitioning parser" 16 + depends on BCM63XX || BMIPS_GENERIC || COMPILE_TEST 17 + select CRC32 18 + select MTD_PARSER_IMAGETAG 19 + help 20 + This provides partition parsing for BCM63xx devices with CFE 21 + bootloaders. 22 + 23 + config MTD_CMDLINE_PARTS 24 + tristate "Command line partition table parsing" 25 + depends on MTD 26 + help 27 + Allow generic configuration of the MTD partition tables via the kernel 28 + command line. Multiple flash resources are supported for hardware where 29 + different kinds of flash memory are available. 30 + 31 + You will still need the parsing functions to be called by the driver 32 + for your particular device. It won't happen automatically. The 33 + SA1100 map driver (CONFIG_MTD_SA1100) has an option for this, for 34 + example. 35 + 36 + The format for the command line is as follows: 37 + 38 + mtdparts=<mtddef>[;<mtddef] 39 + <mtddef> := <mtd-id>:<partdef>[,<partdef>] 40 + <partdef> := <size>[@offset][<name>][ro] 41 + <mtd-id> := unique id used in mapping driver/device 42 + <size> := standard linux memsize OR "-" to denote all 43 + remaining space 44 + <name> := (NAME) 45 + 46 + Due to the way Linux handles the command line, no spaces are 47 + allowed in the partition definition, including mtd id's and partition 48 + names. 49 + 50 + Examples: 51 + 52 + 1 flash resource (mtd-id "sa1100"), with 1 single writable partition: 53 + mtdparts=sa1100:- 54 + 55 + Same flash, but 2 named partitions, the first one being read-only: 56 + mtdparts=sa1100:256k(ARMboot)ro,-(root) 57 + 58 + If unsure, say 'N'. 
59 + 60 + config MTD_OF_PARTS 61 + tristate "OpenFirmware (device tree) partitioning parser" 62 + default y 63 + depends on OF 64 + help 65 + This provides a open firmware device tree partition parser 66 + which derives the partition map from the children of the 67 + flash memory node, as described in 68 + Documentation/devicetree/bindings/mtd/partition.txt. 69 + 2 70 config MTD_PARSER_IMAGETAG 3 71 tristate "Parser for BCM963XX Image Tag format partitions" 4 72 depends on BCM63XX || BMIPS_GENERIC || COMPILE_TEST
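The `<size>` field in the mtdparts syntax documented above takes the standard Linux memsize suffixes. In the kernel this is handled by `memparse()`; a hypothetical standalone helper for just the suffixes that syntax uses (e.g. the `256k` in `256k(ARMboot)ro`):

```c
#include <assert.h>
#include <stdlib.h>

/* Parse "256k", "4m", "1g" or a plain (possibly hex) number. */
static unsigned long long parse_memsize(const char *s)
{
	char *end;
	unsigned long long v = strtoull(s, &end, 0);

	switch (*end) {
	case 'g': case 'G': return v << 30;
	case 'm': case 'M': return v << 20;
	case 'k': case 'K': return v << 10;
	default:            return v;
	}
}
```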
+5
drivers/mtd/parsers/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 + obj-$(CONFIG_MTD_AR7_PARTS) += ar7part.o 3 + obj-$(CONFIG_MTD_BCM47XX_PARTS) += bcm47xxpart.o 4 + obj-$(CONFIG_MTD_BCM63XX_PARTS) += bcm63xxpart.o 5 + obj-$(CONFIG_MTD_CMDLINE_PARTS) += cmdlinepart.o 6 + obj-$(CONFIG_MTD_OF_PARTS) += ofpart.o 2 7 obj-$(CONFIG_MTD_PARSER_IMAGETAG) += parser_imagetag.o 3 8 obj-$(CONFIG_MTD_AFS_PARTS) += afs.o 4 9 obj-$(CONFIG_MTD_PARSER_TRX) += parser_trx.o
+4 -1
drivers/mtd/sm_ftl.c
··· 774 774 continue; 775 775 776 776 /* Read the oob of first sector */ 777 - if (sm_read_sector(ftl, zone_num, block, 0, NULL, &oob)) 777 + if (sm_read_sector(ftl, zone_num, block, 0, NULL, &oob)) { 778 + kfifo_free(&zone->free_sectors); 779 + kfree(zone->lba_to_phys_table); 778 780 return -EIO; 781 + } 779 782 780 783 /* Test to see if block is erased. It is enough to test 781 784 first sector, because erase happens in one shot */
+2
drivers/mtd/spi-nor/Kconfig
··· 2 2 menuconfig MTD_SPI_NOR 3 3 tristate "SPI-NOR device support" 4 4 depends on MTD 5 + depends on MTD && SPI_MASTER 6 + select SPI_MEM 5 7 help 6 8 This is the framework for the SPI NOR which can be used by the SPI 7 9 device drivers and the SPI-NOR device driver.
+3 -1
drivers/mtd/spi-nor/aspeed-smc.c
··· 836 836 controller->chips[cs] = chip; 837 837 } 838 838 839 - if (ret) 839 + if (ret) { 840 + of_node_put(child); 840 841 aspeed_smc_unregister(controller); 842 + } 841 843 842 844 return ret; 843 845 }
+5 -14
drivers/mtd/spi-nor/cadence-quadspi.c
··· 13 13 #include <linux/errno.h> 14 14 #include <linux/interrupt.h> 15 15 #include <linux/io.h> 16 + #include <linux/iopoll.h> 16 17 #include <linux/jiffies.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/module.h> ··· 242 241 243 242 #define CQSPI_IRQ_STATUS_MASK 0x1FFFF 244 243 245 - static int cqspi_wait_for_bit(void __iomem *reg, const u32 mask, bool clear) 244 + static int cqspi_wait_for_bit(void __iomem *reg, const u32 mask, bool clr) 246 245 { 247 - unsigned long end = jiffies + msecs_to_jiffies(CQSPI_TIMEOUT_MS); 248 246 u32 val; 249 247 250 - while (1) { 251 - val = readl(reg); 252 - if (clear) 253 - val = ~val; 254 - val &= mask; 255 - 256 - if (val == mask) 257 - return 0; 258 - 259 - if (time_after(jiffies, end)) 260 - return -ETIMEDOUT; 261 - } 248 + return readl_relaxed_poll_timeout(reg, val, 249 + (((clr ? ~val : val) & mask) == mask), 250 + 10, CQSPI_TIMEOUT_MS * 1000); 262 251 } 263 252 264 253 static bool cqspi_is_idle(struct cqspi_st *cqspi)
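The cadence-quadspi hunk replaces an open-coded jiffies busy-wait with `readl_relaxed_poll_timeout()`, which sleeps between register reads instead of spinning; that is what resolves the RCU schedule stall named in the merge message. A simplified userspace model of the helper's shape (the kernel version additionally handles the delay/timeout in microseconds; this sketch just counts polls):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Spin reading a "register" until the condition holds or the poll
 * budget is spent, returning 0 or a timeout error, like the new
 * cqspi_wait_for_bit().
 */
static int poll_for_bits(const volatile uint32_t *reg, uint32_t mask,
			 bool clr, unsigned int max_polls)
{
	while (max_polls--) {
		uint32_t val = *reg;

		/* Same condition the driver passes to the helper. */
		if (((clr ? ~val : val) & mask) == mask)
			return 0;
	}
	return -1;	/* standing in for -ETIMEDOUT */
}

static int demo(void)
{
	uint32_t reg = 0x0f;

	return poll_for_bits(&reg, 0x05, false, 4) == 0 &&
	       poll_for_bits(&reg, 0x10, false, 4) == -1 &&
	       poll_for_bits(&reg, 0x30, true, 4) == 0;
}
```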
+1
drivers/mtd/spi-nor/hisi-sfc.c
··· 401 401 402 402 if (host->num_chip == HIFMC_MAX_CHIP_NUM) { 403 403 dev_warn(dev, "Flash device number exceeds the maximum chipselect number\n"); 404 + of_node_put(np); 404 405 break; 405 406 } 406 407 }
+1
drivers/mtd/spi-nor/intel-spi-pci.c
··· 65 65 { PCI_VDEVICE(INTEL, 0x19e0), (unsigned long)&bxt_info }, 66 66 { PCI_VDEVICE(INTEL, 0x34a4), (unsigned long)&bxt_info }, 67 67 { PCI_VDEVICE(INTEL, 0x4b24), (unsigned long)&bxt_info }, 68 + { PCI_VDEVICE(INTEL, 0xa0a4), (unsigned long)&bxt_info }, 68 69 { PCI_VDEVICE(INTEL, 0xa1a4), (unsigned long)&bxt_info }, 69 70 { PCI_VDEVICE(INTEL, 0xa224), (unsigned long)&bxt_info }, 70 71 { },
+2
drivers/mtd/spi-nor/intel-spi.c
··· 621 621 switch (nor->read_opcode) { 622 622 case SPINOR_OP_READ: 623 623 case SPINOR_OP_READ_FAST: 624 + case SPINOR_OP_READ_4B: 625 + case SPINOR_OP_READ_FAST_4B: 624 626 break; 625 627 default: 626 628 return -EINVAL;
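The intel-spi hunk above whitelists the 4-byte-address read opcodes alongside the existing 3-byte ones, which is what lets the driver work with flashes larger than 16 MiB. The shape of the resulting check, with the opcode values taken from `include/linux/mtd/spi-nor.h` (macro names here are shortened stand-ins for the `SPINOR_OP_*` definitions):

```c
#include <assert.h>
#include <stdbool.h>

#define OP_READ		0x03	/* SPINOR_OP_READ */
#define OP_READ_FAST	0x0b	/* SPINOR_OP_READ_FAST */
#define OP_READ_4B	0x13	/* SPINOR_OP_READ_4B */
#define OP_READ_FAST_4B	0x0c	/* SPINOR_OP_READ_FAST_4B */

/* Only whitelisted read opcodes pass; the 4B cases are the new entries. */
static bool read_opcode_supported(unsigned char opcode)
{
	switch (opcode) {
	case OP_READ:
	case OP_READ_FAST:
	case OP_READ_4B:
	case OP_READ_FAST_4B:
		return true;
	default:
		return false;
	}
}
```

Opcodes outside the whitelist (e.g. the quad-read variants) still make the driver return `-EINVAL`, as in the surrounding context.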
+1282 -452
drivers/mtd/spi-nor/spi-nor.c
··· 19 19 20 20 #include <linux/mtd/mtd.h> 21 21 #include <linux/of_platform.h> 22 + #include <linux/sched/task_stack.h> 22 23 #include <linux/spi/flash.h> 23 24 #include <linux/mtd/spi-nor.h> 24 25 ··· 39 38 40 39 #define SPI_NOR_MAX_ID_LEN 6 41 40 #define SPI_NOR_MAX_ADDR_WIDTH 4 42 - 43 - struct spi_nor_read_command { 44 - u8 num_mode_clocks; 45 - u8 num_wait_states; 46 - u8 opcode; 47 - enum spi_nor_protocol proto; 48 - }; 49 - 50 - struct spi_nor_pp_command { 51 - u8 opcode; 52 - enum spi_nor_protocol proto; 53 - }; 54 - 55 - enum spi_nor_read_command_index { 56 - SNOR_CMD_READ, 57 - SNOR_CMD_READ_FAST, 58 - SNOR_CMD_READ_1_1_1_DTR, 59 - 60 - /* Dual SPI */ 61 - SNOR_CMD_READ_1_1_2, 62 - SNOR_CMD_READ_1_2_2, 63 - SNOR_CMD_READ_2_2_2, 64 - SNOR_CMD_READ_1_2_2_DTR, 65 - 66 - /* Quad SPI */ 67 - SNOR_CMD_READ_1_1_4, 68 - SNOR_CMD_READ_1_4_4, 69 - SNOR_CMD_READ_4_4_4, 70 - SNOR_CMD_READ_1_4_4_DTR, 71 - 72 - /* Octal SPI */ 73 - SNOR_CMD_READ_1_1_8, 74 - SNOR_CMD_READ_1_8_8, 75 - SNOR_CMD_READ_8_8_8, 76 - SNOR_CMD_READ_1_8_8_DTR, 77 - 78 - SNOR_CMD_READ_MAX 79 - }; 80 - 81 - enum spi_nor_pp_command_index { 82 - SNOR_CMD_PP, 83 - 84 - /* Quad SPI */ 85 - SNOR_CMD_PP_1_1_4, 86 - SNOR_CMD_PP_1_4_4, 87 - SNOR_CMD_PP_4_4_4, 88 - 89 - /* Octal SPI */ 90 - SNOR_CMD_PP_1_1_8, 91 - SNOR_CMD_PP_1_8_8, 92 - SNOR_CMD_PP_8_8_8, 93 - 94 - SNOR_CMD_PP_MAX 95 - }; 96 - 97 - struct spi_nor_flash_parameter { 98 - u64 size; 99 - u32 page_size; 100 - 101 - struct spi_nor_hwcaps hwcaps; 102 - struct spi_nor_read_command reads[SNOR_CMD_READ_MAX]; 103 - struct spi_nor_pp_command page_programs[SNOR_CMD_PP_MAX]; 104 - 105 - int (*quad_enable)(struct spi_nor *nor); 106 - }; 107 41 108 42 struct sfdp_parameter_header { 109 43 u8 id_lsb; ··· 154 218 155 219 /** 156 220 * struct spi_nor_fixups - SPI NOR fixup hooks 221 + * @default_init: called after default flash parameters init. 
Used to tweak 222 + * flash parameters when information provided by the flash_info 223 + * table is incomplete or wrong. 157 224 * @post_bfpt: called after the BFPT table has been parsed 225 + * @post_sfdp: called after SFDP has been parsed (is also called for SPI NORs 226 + * that do not support RDSFDP). Typically used to tweak various 227 + * parameters that could not be extracted by other means (i.e. 228 + * when information provided by the SFDP/flash_info tables are 229 + * incomplete or wrong). 158 230 * 159 231 * Those hooks can be used to tweak the SPI NOR configuration when the SFDP 160 232 * table is broken or not available. 161 233 */ 162 234 struct spi_nor_fixups { 235 + void (*default_init)(struct spi_nor *nor); 163 236 int (*post_bfpt)(struct spi_nor *nor, 164 237 const struct sfdp_parameter_header *bfpt_header, 165 238 const struct sfdp_bfpt *bfpt, 166 239 struct spi_nor_flash_parameter *params); 240 + void (*post_sfdp)(struct spi_nor *nor); 167 241 }; 168 242 169 243 struct flash_info { ··· 211 265 * bit. Must be used with 212 266 * SPI_NOR_HAS_LOCK. 213 267 */ 268 + #define SPI_NOR_XSR_RDY BIT(10) /* 269 + * S3AN flashes have specific opcode to 270 + * read the status register. 271 + * Flags SPI_NOR_XSR_RDY and SPI_S3AN 272 + * use the same bit as one implies the 273 + * other, but we will get rid of 274 + * SPI_S3AN soon. 275 + */ 214 276 #define SPI_S3AN BIT(10) /* 215 277 * Xilinx Spartan 3AN In-System Flash 216 278 * (MFR cannot be used for probing ··· 236 282 237 283 /* Part specific fixup hooks. 
*/ 238 284 const struct spi_nor_fixups *fixups; 239 - 240 - int (*quad_enable)(struct spi_nor *nor); 241 285 }; 242 286 243 287 #define JEDEC_MFR(info) ((info)->id[0]) 288 + 289 + /** 290 + * spi_nor_spimem_xfer_data() - helper function to read/write data to 291 + * flash's memory region 292 + * @nor: pointer to 'struct spi_nor' 293 + * @op: pointer to 'struct spi_mem_op' template for transfer 294 + * 295 + * Return: number of bytes transferred on success, -errno otherwise 296 + */ 297 + static ssize_t spi_nor_spimem_xfer_data(struct spi_nor *nor, 298 + struct spi_mem_op *op) 299 + { 300 + bool usebouncebuf = false; 301 + void *rdbuf = NULL; 302 + const void *buf; 303 + int ret; 304 + 305 + if (op->data.dir == SPI_MEM_DATA_IN) 306 + buf = op->data.buf.in; 307 + else 308 + buf = op->data.buf.out; 309 + 310 + if (object_is_on_stack(buf) || !virt_addr_valid(buf)) 311 + usebouncebuf = true; 312 + 313 + if (usebouncebuf) { 314 + if (op->data.nbytes > nor->bouncebuf_size) 315 + op->data.nbytes = nor->bouncebuf_size; 316 + 317 + if (op->data.dir == SPI_MEM_DATA_IN) { 318 + rdbuf = op->data.buf.in; 319 + op->data.buf.in = nor->bouncebuf; 320 + } else { 321 + op->data.buf.out = nor->bouncebuf; 322 + memcpy(nor->bouncebuf, buf, 323 + op->data.nbytes); 324 + } 325 + } 326 + 327 + ret = spi_mem_adjust_op_size(nor->spimem, op); 328 + if (ret) 329 + return ret; 330 + 331 + ret = spi_mem_exec_op(nor->spimem, op); 332 + if (ret) 333 + return ret; 334 + 335 + if (usebouncebuf && op->data.dir == SPI_MEM_DATA_IN) 336 + memcpy(rdbuf, nor->bouncebuf, op->data.nbytes); 337 + 338 + return op->data.nbytes; 339 + } 340 + 341 + /** 342 + * spi_nor_spimem_read_data() - read data from flash's memory region via 343 + * spi-mem 344 + * @nor: pointer to 'struct spi_nor' 345 + * @from: offset to read from 346 + * @len: number of bytes to read 347 + * @buf: pointer to dst buffer 348 + * 349 + * Return: number of bytes read successfully, -errno otherwise 350 + */ 351 + static ssize_t 
spi_nor_spimem_read_data(struct spi_nor *nor, loff_t from, 352 + size_t len, u8 *buf) 353 + { 354 + struct spi_mem_op op = 355 + SPI_MEM_OP(SPI_MEM_OP_CMD(nor->read_opcode, 1), 356 + SPI_MEM_OP_ADDR(nor->addr_width, from, 1), 357 + SPI_MEM_OP_DUMMY(nor->read_dummy, 1), 358 + SPI_MEM_OP_DATA_IN(len, buf, 1)); 359 + 360 + /* get transfer protocols. */ 361 + op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->read_proto); 362 + op.addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->read_proto); 363 + op.dummy.buswidth = op.addr.buswidth; 364 + op.data.buswidth = spi_nor_get_protocol_data_nbits(nor->read_proto); 365 + 366 + /* convert the dummy cycles to the number of bytes */ 367 + op.dummy.nbytes = (nor->read_dummy * op.dummy.buswidth) / 8; 368 + 369 + return spi_nor_spimem_xfer_data(nor, &op); 370 + } 371 + 372 + /** 373 + * spi_nor_read_data() - read data from flash memory 374 + * @nor: pointer to 'struct spi_nor' 375 + * @from: offset to read from 376 + * @len: number of bytes to read 377 + * @buf: pointer to dst buffer 378 + * 379 + * Return: number of bytes read successfully, -errno otherwise 380 + */ 381 + static ssize_t spi_nor_read_data(struct spi_nor *nor, loff_t from, size_t len, 382 + u8 *buf) 383 + { 384 + if (nor->spimem) 385 + return spi_nor_spimem_read_data(nor, from, len, buf); 386 + 387 + return nor->read(nor, from, len, buf); 388 + } 389 + 390 + /** 391 + * spi_nor_spimem_write_data() - write data to flash memory via 392 + * spi-mem 393 + * @nor: pointer to 'struct spi_nor' 394 + * @to: offset to write to 395 + * @len: number of bytes to write 396 + * @buf: pointer to src buffer 397 + * 398 + * Return: number of bytes written successfully, -errno otherwise 399 + */ 400 + static ssize_t spi_nor_spimem_write_data(struct spi_nor *nor, loff_t to, 401 + size_t len, const u8 *buf) 402 + { 403 + struct spi_mem_op op = 404 + SPI_MEM_OP(SPI_MEM_OP_CMD(nor->program_opcode, 1), 405 + SPI_MEM_OP_ADDR(nor->addr_width, to, 1), 406 + SPI_MEM_OP_NO_DUMMY, 407 + 
SPI_MEM_OP_DATA_OUT(len, buf, 1)); 408 + 409 + op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(nor->write_proto); 410 + op.addr.buswidth = spi_nor_get_protocol_addr_nbits(nor->write_proto); 411 + op.data.buswidth = spi_nor_get_protocol_data_nbits(nor->write_proto); 412 + 413 + if (nor->program_opcode == SPINOR_OP_AAI_WP && nor->sst_write_second) 414 + op.addr.nbytes = 0; 415 + 416 + return spi_nor_spimem_xfer_data(nor, &op); 417 + } 418 + 419 + /** 420 + * spi_nor_write_data() - write data to flash memory 421 + * @nor: pointer to 'struct spi_nor' 422 + * @to: offset to write to 423 + * @len: number of bytes to write 424 + * @buf: pointer to src buffer 425 + * 426 + * Return: number of bytes written successfully, -errno otherwise 427 + */ 428 + static ssize_t spi_nor_write_data(struct spi_nor *nor, loff_t to, size_t len, 429 + const u8 *buf) 430 + { 431 + if (nor->spimem) 432 + return spi_nor_spimem_write_data(nor, to, len, buf); 433 + 434 + return nor->write(nor, to, len, buf); 435 + } 244 436 245 437 /* 246 438 * Read the status register, returning its value in the location ··· 396 296 static int read_sr(struct spi_nor *nor) 397 297 { 398 298 int ret; 399 - u8 val; 400 299 401 - ret = nor->read_reg(nor, SPINOR_OP_RDSR, &val, 1); 300 + if (nor->spimem) { 301 + struct spi_mem_op op = 302 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDSR, 1), 303 + SPI_MEM_OP_NO_ADDR, 304 + SPI_MEM_OP_NO_DUMMY, 305 + SPI_MEM_OP_DATA_IN(1, nor->bouncebuf, 1)); 306 + 307 + ret = spi_mem_exec_op(nor->spimem, &op); 308 + } else { 309 + ret = nor->read_reg(nor, SPINOR_OP_RDSR, nor->bouncebuf, 1); 310 + } 311 + 402 312 if (ret < 0) { 403 313 pr_err("error %d reading SR\n", (int) ret); 404 314 return ret; 405 315 } 406 316 407 - return val; 317 + return nor->bouncebuf[0]; 408 318 } 409 319 410 320 /* ··· 425 315 static int read_fsr(struct spi_nor *nor) 426 316 { 427 317 int ret; 428 - u8 val; 429 318 430 - ret = nor->read_reg(nor, SPINOR_OP_RDFSR, &val, 1); 319 + if (nor->spimem) { 320 + struct 
spi_mem_op op = 321 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDFSR, 1), 322 + SPI_MEM_OP_NO_ADDR, 323 + SPI_MEM_OP_NO_DUMMY, 324 + SPI_MEM_OP_DATA_IN(1, nor->bouncebuf, 1)); 325 + 326 + ret = spi_mem_exec_op(nor->spimem, &op); 327 + } else { 328 + ret = nor->read_reg(nor, SPINOR_OP_RDFSR, nor->bouncebuf, 1); 329 + } 330 + 431 331 if (ret < 0) { 432 332 pr_err("error %d reading FSR\n", ret); 433 333 return ret; 434 334 } 435 335 436 - return val; 336 + return nor->bouncebuf[0]; 437 337 } 438 338 439 339 /* ··· 454 334 static int read_cr(struct spi_nor *nor) 455 335 { 456 336 int ret; 457 - u8 val; 458 337 459 - ret = nor->read_reg(nor, SPINOR_OP_RDCR, &val, 1); 338 + if (nor->spimem) { 339 + struct spi_mem_op op = 340 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDCR, 1), 341 + SPI_MEM_OP_NO_ADDR, 342 + SPI_MEM_OP_NO_DUMMY, 343 + SPI_MEM_OP_DATA_IN(1, nor->bouncebuf, 1)); 344 + 345 + ret = spi_mem_exec_op(nor->spimem, &op); 346 + } else { 347 + ret = nor->read_reg(nor, SPINOR_OP_RDCR, nor->bouncebuf, 1); 348 + } 349 + 460 350 if (ret < 0) { 461 351 dev_err(nor->dev, "error %d reading CR\n", ret); 462 352 return ret; 463 353 } 464 354 465 - return val; 355 + return nor->bouncebuf[0]; 466 356 } 467 357 468 358 /* ··· 481 351 */ 482 352 static int write_sr(struct spi_nor *nor, u8 val) 483 353 { 484 - nor->cmd_buf[0] = val; 485 - return nor->write_reg(nor, SPINOR_OP_WRSR, nor->cmd_buf, 1); 354 + nor->bouncebuf[0] = val; 355 + if (nor->spimem) { 356 + struct spi_mem_op op = 357 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRSR, 1), 358 + SPI_MEM_OP_NO_ADDR, 359 + SPI_MEM_OP_NO_DUMMY, 360 + SPI_MEM_OP_DATA_IN(1, nor->bouncebuf, 1)); 361 + 362 + return spi_mem_exec_op(nor->spimem, &op); 363 + } 364 + 365 + return nor->write_reg(nor, SPINOR_OP_WRSR, nor->bouncebuf, 1); 486 366 } 487 367 488 368 /* ··· 501 361 */ 502 362 static int write_enable(struct spi_nor *nor) 503 363 { 364 + if (nor->spimem) { 365 + struct spi_mem_op op = 366 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WREN, 1), 367 + 
SPI_MEM_OP_NO_ADDR, 368 + SPI_MEM_OP_NO_DUMMY, 369 + SPI_MEM_OP_NO_DATA); 370 + 371 + return spi_mem_exec_op(nor->spimem, &op); 372 + } 373 + 504 374 return nor->write_reg(nor, SPINOR_OP_WREN, NULL, 0); 505 375 } 506 376 ··· 519 369 */ 520 370 static int write_disable(struct spi_nor *nor) 521 371 { 372 + if (nor->spimem) { 373 + struct spi_mem_op op = 374 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRDI, 1), 375 + SPI_MEM_OP_NO_ADDR, 376 + SPI_MEM_OP_NO_DUMMY, 377 + SPI_MEM_OP_NO_DATA); 378 + 379 + return spi_mem_exec_op(nor->spimem, &op); 380 + } 381 + 522 382 return nor->write_reg(nor, SPINOR_OP_WRDI, NULL, 0); 523 383 } 524 384 ··· 599 439 600 440 static void spi_nor_set_4byte_opcodes(struct spi_nor *nor) 601 441 { 602 - /* Do some manufacturer fixups first */ 603 - switch (JEDEC_MFR(nor->info)) { 604 - case SNOR_MFR_SPANSION: 605 - /* No small sector erase for 4-byte command set */ 606 - nor->erase_opcode = SPINOR_OP_SE; 607 - nor->mtd.erasesize = nor->info->sector_size; 608 - break; 609 - 610 - default: 611 - break; 612 - } 613 - 614 442 nor->read_opcode = spi_nor_convert_3to4_read(nor->read_opcode); 615 443 nor->program_opcode = spi_nor_convert_3to4_program(nor->program_opcode); 616 444 nor->erase_opcode = spi_nor_convert_3to4_erase(nor->erase_opcode); 617 445 618 446 if (!spi_nor_has_uniform_erase(nor)) { 619 - struct spi_nor_erase_map *map = &nor->erase_map; 447 + struct spi_nor_erase_map *map = &nor->params.erase_map; 620 448 struct spi_nor_erase_type *erase; 621 449 int i; 622 450 ··· 616 468 } 617 469 } 618 470 619 - /* Enable/disable 4-byte addressing mode. */ 620 - static int set_4byte(struct spi_nor *nor, bool enable) 471 + static int macronix_set_4byte(struct spi_nor *nor, bool enable) 621 472 { 622 - int status; 623 - bool need_wren = false; 624 - u8 cmd; 473 + if (nor->spimem) { 474 + struct spi_mem_op op = 475 + SPI_MEM_OP(SPI_MEM_OP_CMD(enable ? 
+						  SPINOR_OP_EN4B :
+						  SPINOR_OP_EX4B,
+						  1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_NO_DATA);
 
-	switch (JEDEC_MFR(nor->info)) {
-	case SNOR_MFR_ST:
-	case SNOR_MFR_MICRON:
-		/* Some Micron need WREN command; all will accept it */
-		need_wren = true;
-		/* fall through */
-	case SNOR_MFR_MACRONIX:
-	case SNOR_MFR_WINBOND:
-		if (need_wren)
-			write_enable(nor);
-
-		cmd = enable ? SPINOR_OP_EN4B : SPINOR_OP_EX4B;
-		status = nor->write_reg(nor, cmd, NULL, 0);
-		if (need_wren)
-			write_disable(nor);
-
-		if (!status && !enable &&
-		    JEDEC_MFR(nor->info) == SNOR_MFR_WINBOND) {
-			/*
-			 * On Winbond W25Q256FV, leaving 4byte mode causes
-			 * the Extended Address Register to be set to 1, so all
-			 * 3-byte-address reads come from the second 16M.
-			 * We must clear the register to enable normal behavior.
-			 */
-			write_enable(nor);
-			nor->cmd_buf[0] = 0;
-			nor->write_reg(nor, SPINOR_OP_WREAR, nor->cmd_buf, 1);
-			write_disable(nor);
-		}
-
-		return status;
-	default:
-		/* Spansion style */
-		nor->cmd_buf[0] = enable << 7;
-		return nor->write_reg(nor, SPINOR_OP_BRWR, nor->cmd_buf, 1);
+		return spi_mem_exec_op(nor->spimem, &op);
 	}
+
+	return nor->write_reg(nor, enable ? SPINOR_OP_EN4B : SPINOR_OP_EX4B,
+			      NULL, 0);
+}
+
+static int st_micron_set_4byte(struct spi_nor *nor, bool enable)
+{
+	int ret;
+
+	write_enable(nor);
+	ret = macronix_set_4byte(nor, enable);
+	write_disable(nor);
+
+	return ret;
+}
+
+static int spansion_set_4byte(struct spi_nor *nor, bool enable)
+{
+	nor->bouncebuf[0] = enable << 7;
+
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_BRWR, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_DATA_OUT(1, nor->bouncebuf, 1));
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
+
+	return nor->write_reg(nor, SPINOR_OP_BRWR, nor->bouncebuf, 1);
+}
+
+static int spi_nor_write_ear(struct spi_nor *nor, u8 ear)
+{
+	nor->bouncebuf[0] = ear;
+
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WREAR, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_DATA_OUT(1, nor->bouncebuf, 1));
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
+
+	return nor->write_reg(nor, SPINOR_OP_WREAR, nor->bouncebuf, 1);
+}
+
+static int winbond_set_4byte(struct spi_nor *nor, bool enable)
+{
+	int ret;
+
+	ret = macronix_set_4byte(nor, enable);
+	if (ret || enable)
+		return ret;
+
+	/*
+	 * On Winbond W25Q256FV, leaving 4byte mode causes the Extended Address
+	 * Register to be set to 1, so all 3-byte-address reads come from the
+	 * second 16M. We must clear the register to enable normal behavior.
+	 */
+	write_enable(nor);
+	ret = spi_nor_write_ear(nor, 0);
+	write_disable(nor);
+
+	return ret;
+}
+
+static int spi_nor_xread_sr(struct spi_nor *nor, u8 *sr)
+{
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_XRDSR, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_DATA_IN(1, sr, 1));
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
+
+	return nor->read_reg(nor, SPINOR_OP_XRDSR, sr, 1);
 }
 
 static int s3an_sr_ready(struct spi_nor *nor)
 {
 	int ret;
-	u8 val;
 
-	ret = nor->read_reg(nor, SPINOR_OP_XRDSR, &val, 1);
+	ret = spi_nor_xread_sr(nor, nor->bouncebuf);
 	if (ret < 0) {
 		dev_err(nor->dev, "error %d reading XRDSR\n", (int) ret);
 		return ret;
 	}
 
-	return !!(val & XSR_RDY);
+	return !!(nor->bouncebuf[0] & XSR_RDY);
+}
+
+static int spi_nor_clear_sr(struct spi_nor *nor)
+{
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CLSR, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_NO_DATA);
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
+
+	return nor->write_reg(nor, SPINOR_OP_CLSR, NULL, 0);
 }
 
 static int spi_nor_sr_ready(struct spi_nor *nor)
···
 		else
 			dev_err(nor->dev, "Programming Error occurred\n");
 
-		nor->write_reg(nor, SPINOR_OP_CLSR, NULL, 0);
+		spi_nor_clear_sr(nor);
 		return -EIO;
 	}
 
 	return !(sr & SR_WIP);
+}
+
+static int spi_nor_clear_fsr(struct spi_nor *nor)
+{
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CLFSR, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_NO_DATA);
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
+
+	return nor->write_reg(nor, SPINOR_OP_CLFSR, NULL, 0);
 }
 
 static int spi_nor_fsr_ready(struct spi_nor *nor)
···
 		dev_err(nor->dev,
 			"Attempted to modify a protected sector.\n");
 
-		nor->write_reg(nor, SPINOR_OP_CLFSR, NULL, 0);
+		spi_nor_clear_fsr(nor);
 		return -EIO;
 	}
 
···
 {
 	dev_dbg(nor->dev, " %lldKiB\n", (long long)(nor->mtd.size >> 10));
 
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_CHIP_ERASE, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_NO_DATA);
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
+
 	return nor->write_reg(nor, SPINOR_OP_CHIP_ERASE, NULL, 0);
 }
 
···
  * Addr can safely be unsigned int, the biggest S3AN device is smaller than
  * 4 MiB.
  */
-static loff_t spi_nor_s3an_addr_convert(struct spi_nor *nor, unsigned int addr)
+static u32 s3an_convert_addr(struct spi_nor *nor, u32 addr)
 {
-	unsigned int offset;
-	unsigned int page;
+	u32 offset, page;
 
 	offset = addr % nor->page_size;
 	page = addr / nor->page_size;
···
 	return page | offset;
 }
 
+static u32 spi_nor_convert_addr(struct spi_nor *nor, loff_t addr)
+{
+	if (!nor->params.convert_addr)
+		return addr;
+
+	return nor->params.convert_addr(nor, addr);
+}
+
 /*
  * Initiate the erasure of a single sector
  */
 static int spi_nor_erase_sector(struct spi_nor *nor, u32 addr)
 {
-	u8 buf[SPI_NOR_MAX_ADDR_WIDTH];
 	int i;
 
-	if (nor->flags & SNOR_F_S3AN_ADDR_DEFAULT)
-		addr = spi_nor_s3an_addr_convert(nor, addr);
+	addr = spi_nor_convert_addr(nor, addr);
 
 	if (nor->erase)
 		return nor->erase(nor, addr);
+
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(nor->erase_opcode, 1),
+				   SPI_MEM_OP_ADDR(nor->addr_width, addr, 1),
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_NO_DATA);
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
 
 	/*
 	 * Default implementation, if driver doesn't have a specialized HW
 	 * control
 	 */
 	for (i = nor->addr_width - 1; i >= 0; i--) {
-		buf[i] = addr & 0xff;
+		nor->bouncebuf[i] = addr & 0xff;
 		addr >>= 8;
 	}
 
-	return nor->write_reg(nor, nor->erase_opcode, buf, nor->addr_width);
+	return nor->write_reg(nor, nor->erase_opcode, nor->bouncebuf,
+			      nor->addr_width);
 }
 
 /**
···
 				      struct list_head *erase_list,
 				      u64 addr, u32 len)
 {
-	const struct spi_nor_erase_map *map = &nor->erase_map;
+	const struct spi_nor_erase_map *map = &nor->params.erase_map;
 	const struct spi_nor_erase_type *erase, *prev_erase = NULL;
 	struct spi_nor_erase_region *region;
 	struct spi_nor_erase_command *cmd = NULL;
···
 	return stm_is_locked_sr(nor, ofs, len, status);
 }
 
+static const struct spi_nor_locking_ops stm_locking_ops = {
+	.lock = stm_lock,
+	.unlock = stm_unlock,
+	.is_locked = stm_is_locked,
+};
+
 static int spi_nor_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
 {
 	struct spi_nor *nor = mtd_to_spi_nor(mtd);
···
 	if (ret)
 		return ret;
 
-	ret = nor->flash_lock(nor, ofs, len);
+	ret = nor->params.locking_ops->lock(nor, ofs, len);
 
 	spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_UNLOCK);
 	return ret;
···
 	if (ret)
 		return ret;
 
-	ret = nor->flash_unlock(nor, ofs, len);
+	ret = nor->params.locking_ops->unlock(nor, ofs, len);
 
 	spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_LOCK);
 	return ret;
···
 	if (ret)
 		return ret;
 
-	ret = nor->flash_is_locked(nor, ofs, len);
+	ret = nor->params.locking_ops->is_locked(nor, ofs, len);
 
 	spi_nor_unlock_and_unprep(nor, SPI_NOR_OPS_LOCK);
 	return ret;
···
 
 	write_enable(nor);
 
-	ret = nor->write_reg(nor, SPINOR_OP_WRSR, sr_cr, 2);
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRSR, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_DATA_OUT(2, sr_cr, 1));
+
+		ret = spi_mem_exec_op(nor->spimem, &op);
+	} else {
+		ret = nor->write_reg(nor, SPINOR_OP_WRSR, sr_cr, 2);
+	}
+
 	if (ret < 0) {
 		dev_err(nor->dev,
 			"error while writing configuration register\n");
···
  */
 static int spansion_quad_enable(struct spi_nor *nor)
 {
-	u8 sr_cr[2] = {0, CR_QUAD_EN_SPAN};
+	u8 *sr_cr = nor->bouncebuf;
 	int ret;
 
+	sr_cr[0] = 0;
+	sr_cr[1] = CR_QUAD_EN_SPAN;
 	ret = write_sr_cr(nor, sr_cr);
 	if (ret)
 		return ret;
···
  */
 static int spansion_no_read_cr_quad_enable(struct spi_nor *nor)
 {
-	u8 sr_cr[2];
+	u8 *sr_cr = nor->bouncebuf;
 	int ret;
 
 	/* Keep the current value of the Status Register. */
···
 static int spansion_read_cr_quad_enable(struct spi_nor *nor)
 {
 	struct device *dev = nor->dev;
-	u8 sr_cr[2];
+	u8 *sr_cr = nor->bouncebuf;
 	int ret;
 
 	/* Check current Quad Enable bit value. */
···
 	return 0;
 }
 
+static int spi_nor_write_sr2(struct spi_nor *nor, u8 *sr2)
+{
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_WRSR2, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_DATA_OUT(1, sr2, 1));
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
+
+	return nor->write_reg(nor, SPINOR_OP_WRSR2, sr2, 1);
+}
+
+static int spi_nor_read_sr2(struct spi_nor *nor, u8 *sr2)
+{
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDSR2, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_DATA_IN(1, sr2, 1));
+
+		return spi_mem_exec_op(nor->spimem, &op);
+	}
+
+	return nor->read_reg(nor, SPINOR_OP_RDSR2, sr2, 1);
+}
+
 /**
  * sr2_bit7_quad_enable() - set QE bit in Status Register 2.
  * @nor: pointer to a 'struct spi_nor'
···
  */
 static int sr2_bit7_quad_enable(struct spi_nor *nor)
 {
-	u8 sr2;
+	u8 *sr2 = nor->bouncebuf;
 	int ret;
 
 	/* Check current Quad Enable bit value. */
-	ret = nor->read_reg(nor, SPINOR_OP_RDSR2, &sr2, 1);
+	ret = spi_nor_read_sr2(nor, sr2);
 	if (ret)
 		return ret;
-	if (sr2 & SR2_QUAD_EN_BIT7)
+	if (*sr2 & SR2_QUAD_EN_BIT7)
 		return 0;
 
 	/* Update the Quad Enable bit. */
-	sr2 |= SR2_QUAD_EN_BIT7;
+	*sr2 |= SR2_QUAD_EN_BIT7;
 
 	write_enable(nor);
 
-	ret = nor->write_reg(nor, SPINOR_OP_WRSR2, &sr2, 1);
+	ret = spi_nor_write_sr2(nor, sr2);
 	if (ret < 0) {
 		dev_err(nor->dev, "error while writing status register 2\n");
 		return -EINVAL;
···
 	}
 
 	/* Read back and check it. */
-	ret = nor->read_reg(nor, SPINOR_OP_RDSR2, &sr2, 1);
-	if (!(ret > 0 && (sr2 & SR2_QUAD_EN_BIT7))) {
+	ret = spi_nor_read_sr2(nor, sr2);
+	if (!(ret > 0 && (*sr2 & SR2_QUAD_EN_BIT7))) {
 		dev_err(nor->dev, "SR2 Quad bit not set\n");
 		return -EINVAL;
 	}
···
 {
 	int ret;
 	u8 mask = SR_BP2 | SR_BP1 | SR_BP0;
-	u8 sr_cr[2] = {0};
+	u8 *sr_cr = nor->bouncebuf;
 
 	/* Check current Quad Enable bit value. */
 	ret = read_cr(nor);
···
 	.post_bfpt = mx25l25635_post_bfpt_fixups,
 };
 
+static void gd25q256_default_init(struct spi_nor *nor)
+{
+	/*
+	 * Some manufacturer like GigaDevice may use different
+	 * bit to set QE on different memories, so the MFR can't
+	 * indicate the quad_enable method for this case, we need
+	 * to set it in the default_init fixup hook.
+	 */
+	nor->params.quad_enable = macronix_quad_enable;
+}
+
+static struct spi_nor_fixups gd25q256_fixups = {
+	.default_init = gd25q256_default_init,
+};
+
 /* NOTE: double check command sets and memory organization when you add
  * more nor chips.  This current list focusses on newer chips, which
  * have been converging on command sets which including JEDEC ID.
···
 		"gd25q256", INFO(0xc84019, 0, 64 * 1024, 512,
 			SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
 			SPI_NOR_4B_OPCODES | SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB)
-			.quad_enable = macronix_quad_enable,
+		.fixups = &gd25q256_fixups,
 	},
 
 	/* Intel/Numonyx -- xxxs33b */
···
 	{ "n25q128a13",  INFO(0x20ba18, 0, 64 * 1024,  256, SECT_4K | SPI_NOR_QUAD_READ) },
 	{ "n25q256a",    INFO(0x20ba19, 0, 64 * 1024,  512, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
 	{ "n25q256ax1",  INFO(0x20bb19, 0, 64 * 1024,  512, SECT_4K | SPI_NOR_QUAD_READ) },
-	{ "n25q512a",    INFO(0x20bb20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
 	{ "n25q512ax3",  INFO(0x20ba20, 0, 64 * 1024, 1024, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ) },
 	{ "n25q00",      INFO(0x20ba21, 0, 64 * 1024, 2048, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ | NO_CHIP_ERASE) },
 	{ "n25q00a",     INFO(0x20bb21, 0, 64 * 1024, 2048, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ | NO_CHIP_ERASE) },
 	{ "mt25ql02g",   INFO(0x20ba22, 0, 64 * 1024, 4096,
 			      SECT_4K | USE_FSR | SPI_NOR_QUAD_READ |
 			      NO_CHIP_ERASE) },
+	{ "mt25qu512a (n25q512a)", INFO(0x20bb20, 0, 64 * 1024, 1024,
+					SECT_4K | USE_FSR | SPI_NOR_DUAL_READ |
+					SPI_NOR_QUAD_READ |
+					SPI_NOR_4B_OPCODES) },
 	{ "mt25qu02g",   INFO(0x20bb22, 0, 64 * 1024, 4096, SECT_4K | USE_FSR | SPI_NOR_QUAD_READ | NO_CHIP_ERASE) },
 
 	/* Micron */
···
 		     SECT_4K | USE_FSR | SPI_NOR_OCTAL_READ |
 		     SPI_NOR_4B_OPCODES)
 	},
+	{ "mt35xu02g",  INFO(0x2c5b1c, 0, 128 * 1024, 2048,
+			     SECT_4K | USE_FSR | SPI_NOR_OCTAL_READ |
+			     SPI_NOR_4B_OPCODES) },
 
 	/* PMC */
 	{ "pm25lv512",   INFO(0,        0, 32 * 1024,    2, SECT_4K_PMC) },
···
 	{ "s25fl256s1", INFO(0x010219, 0x4d01,  64 * 1024, 512, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | USE_CLSR) },
 	{ "s25fl512s",  INFO6(0x010220, 0x4d0080, 256 * 1024, 256,
 			SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ |
-			SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB | USE_CLSR) },
+			SPI_NOR_HAS_LOCK | USE_CLSR) },
 	{ "s25fs512s",  INFO6(0x010220, 0x4d0081, 256 * 1024, 256, SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ | USE_CLSR) },
 	{ "s70fl01gs",  INFO(0x010221, 0x4d00, 256 * 1024, 256, 0) },
 	{ "s25sl12800", INFO(0x012018, 0x0300, 256 * 1024,  64, 0) },
···
 	{ "sst25wf040b", INFO(0x621613, 0, 64 * 1024,  8, SECT_4K) },
 	{ "sst25wf040",  INFO(0xbf2504, 0, 64 * 1024,  8, SECT_4K | SST_WRITE) },
 	{ "sst25wf080",  INFO(0xbf2505, 0, 64 * 1024, 16, SECT_4K | SST_WRITE) },
+	{ "sst26wf016b", INFO(0xbf2651, 0, 64 * 1024, 32, SECT_4K |
+			      SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
 	{ "sst26vf064b", INFO(0xbf2643, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
 
 	/* ST Microelectronics -- newer production may have feature updates */
···
 	{ "w25q80bl", INFO(0xef4014, 0, 64 * 1024,  16, SECT_4K) },
 	{ "w25q128", INFO(0xef4018, 0, 64 * 1024, 256, SECT_4K) },
 	{ "w25q256", INFO(0xef4019, 0, 64 * 1024, 512, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
+	{ "w25q256jvm", INFO(0xef7019, 0, 64 * 1024, 512,
+			     SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
 	{ "w25m512jv", INFO(0xef7119, 0, 64 * 1024, 1024,
 			SECT_4K | SPI_NOR_QUAD_READ | SPI_NOR_DUAL_READ) },
 
···
 static const struct flash_info *spi_nor_read_id(struct spi_nor *nor)
 {
 	int			tmp;
-	u8			id[SPI_NOR_MAX_ID_LEN];
+	u8			*id = nor->bouncebuf;
 	const struct flash_info	*info;
 
-	tmp = nor->read_reg(nor, SPINOR_OP_RDID, id, SPI_NOR_MAX_ID_LEN);
+	if (nor->spimem) {
+		struct spi_mem_op op =
+			SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_RDID, 1),
+				   SPI_MEM_OP_NO_ADDR,
+				   SPI_MEM_OP_NO_DUMMY,
+				   SPI_MEM_OP_DATA_IN(SPI_NOR_MAX_ID_LEN, id, 1));
+
+		tmp = spi_mem_exec_op(nor->spimem, &op);
+	} else {
+		tmp = nor->read_reg(nor, SPINOR_OP_RDID, id,
+				    SPI_NOR_MAX_ID_LEN);
+	}
 	if (tmp < 0) {
 		dev_err(nor->dev, "error %d reading JEDEC ID\n", tmp);
 		return ERR_PTR(tmp);
···
 	while (len) {
 		loff_t addr = from;
 
-		if (nor->flags & SNOR_F_S3AN_ADDR_DEFAULT)
-			addr = spi_nor_s3an_addr_convert(nor, addr);
+		addr = spi_nor_convert_addr(nor, addr);
 
-		ret = nor->read(nor, addr, len, buf);
+		ret = spi_nor_read_data(nor, addr, len, buf);
 		if (ret == 0) {
 			/* We shouldn't see 0-length reads */
 			ret = -EIO;
···
 		nor->program_opcode = SPINOR_OP_BP;
 
 		/* write one byte. */
-		ret = nor->write(nor, to, 1, buf);
+		ret = spi_nor_write_data(nor, to, 1, buf);
 		if (ret < 0)
 			goto sst_write_err;
 		WARN(ret != 1, "While writing 1 byte written %i bytes\n",
···
 	nor->program_opcode = SPINOR_OP_AAI_WP;
 
 		/* write two bytes. */
-		ret = nor->write(nor, to, 2, buf + actual);
+		ret = spi_nor_write_data(nor, to, 2, buf + actual);
 		if (ret < 0)
 			goto sst_write_err;
 		WARN(ret != 2, "While writing 2 bytes written %i bytes\n",
···
 		write_enable(nor);
 
 		nor->program_opcode = SPINOR_OP_BP;
-		ret = nor->write(nor, to, 1, buf + actual);
+		ret = spi_nor_write_data(nor, to, 1, buf + actual);
 		if (ret < 0)
 			goto sst_write_err;
 		WARN(ret != 1, "While writing 1 byte written %i bytes\n",
···
 		page_remain = min_t(size_t,
 				    nor->page_size - page_offset, len - i);
 
-		if (nor->flags & SNOR_F_S3AN_ADDR_DEFAULT)
-			addr = spi_nor_s3an_addr_convert(nor, addr);
+		addr = spi_nor_convert_addr(nor, addr);
 
 		write_enable(nor);
-		ret = nor->write(nor, addr, page_remain, buf + i);
+		ret = spi_nor_write_data(nor, addr, page_remain, buf + i);
 		if (ret < 0)
 			goto write_err;
 		written = ret;
···
 
 static int spi_nor_check(struct spi_nor *nor)
 {
-	if (!nor->dev || !nor->read || !nor->write ||
-	    !nor->read_reg || !nor->write_reg) {
+	if (!nor->dev ||
+	    (!nor->spimem &&
+	     (!nor->read || !nor->write || !nor->read_reg ||
+	      !nor->write_reg))) {
 		pr_err("spi-nor: please fill all the necessary fields!\n");
 		return -EINVAL;
 	}
···
 	return 0;
 }
 
-static int s3an_nor_scan(struct spi_nor *nor)
+static int s3an_nor_setup(struct spi_nor *nor,
+			  const struct spi_nor_hwcaps *hwcaps)
 {
 	int ret;
-	u8 val;
 
-	ret = nor->read_reg(nor, SPINOR_OP_XRDSR, &val, 1);
+	ret = spi_nor_xread_sr(nor, nor->bouncebuf);
 	if (ret < 0) {
 		dev_err(nor->dev, "error %d reading XRDSR\n", (int) ret);
 		return ret;
···
 	 * The current addressing mode can be read from the XRDSR register
 	 * and should not be changed, because is a destructive operation.
 	 */
-	if (val & XSR_PAGESIZE) {
+	if (nor->bouncebuf[0] & XSR_PAGESIZE) {
 		/* Flash in Power of 2 mode */
 		nor->page_size = (nor->page_size == 264) ? 256 : 512;
 		nor->mtd.writebufsize = nor->page_size;
···
 		nor->mtd.erasesize = 8 * nor->page_size;
 	} else {
 		/* Flash in Default addressing mode */
-		nor->flags |= SNOR_F_S3AN_ADDR_DEFAULT;
+		nor->params.convert_addr = s3an_convert_addr;
+		nor->mtd.erasesize = nor->info->sector_size;
 	}
 
 	return 0;
···
 	int ret;
 
 	while (len) {
-		ret = nor->read(nor, addr, len, buf);
-		if (!ret || ret > len)
-			return -EIO;
+		ret = spi_nor_read_data(nor, addr, len, buf);
 		if (ret < 0)
 			return ret;
+		if (!ret || ret > len)
+			return -EIO;
 
 		buf += ret;
 		addr += ret;
···
 	nor->read_dummy = read_dummy;
 
 	return ret;
+}
+
+/**
+ * spi_nor_spimem_check_op - check if the operation is supported
+ *                           by controller
+ *@nor:        pointer to a 'struct spi_nor'
+ *@op:         pointer to op template to be checked
+ *
+ * Returns 0 if operation is supported, -ENOTSUPP otherwise.
+ */
+static int spi_nor_spimem_check_op(struct spi_nor *nor,
+				   struct spi_mem_op *op)
+{
+	/*
+	 * First test with 4 address bytes. The opcode itself might
+	 * be a 3B addressing opcode but we don't care, because
+	 * SPI controller implementation should not check the opcode,
+	 * but just the sequence.
+	 */
+	op->addr.nbytes = 4;
+	if (!spi_mem_supports_op(nor->spimem, op)) {
+		if (nor->mtd.size > SZ_16M)
+			return -ENOTSUPP;
+
+		/* If flash size <= 16MB, 3 address bytes are sufficient */
+		op->addr.nbytes = 3;
+		if (!spi_mem_supports_op(nor->spimem, op))
+			return -ENOTSUPP;
+	}
+
+	return 0;
+}
+
+/**
+ * spi_nor_spimem_check_readop - check if the read op is supported
+ *                               by controller
+ *@nor:         pointer to a 'struct spi_nor'
+ *@read:        pointer to op template to be checked
+ *
+ * Returns 0 if operation is supported, -ENOTSUPP otherwise.
+ */
+static int spi_nor_spimem_check_readop(struct spi_nor *nor,
+				       const struct spi_nor_read_command *read)
+{
+	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(read->opcode, 1),
+					  SPI_MEM_OP_ADDR(3, 0, 1),
+					  SPI_MEM_OP_DUMMY(0, 1),
+					  SPI_MEM_OP_DATA_IN(0, NULL, 1));
+
+	op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(read->proto);
+	op.addr.buswidth = spi_nor_get_protocol_addr_nbits(read->proto);
+	op.data.buswidth = spi_nor_get_protocol_data_nbits(read->proto);
+	op.dummy.buswidth = op.addr.buswidth;
+	op.dummy.nbytes = (read->num_mode_clocks + read->num_wait_states) *
+			  op.dummy.buswidth / 8;
+
+	return spi_nor_spimem_check_op(nor, &op);
+}
+
+/**
+ * spi_nor_spimem_check_pp - check if the page program op is supported
+ *                           by controller
+ *@nor:         pointer to a 'struct spi_nor'
+ *@pp:          pointer to op template to be checked
+ *
+ * Returns 0 if operation is supported, -ENOTSUPP otherwise.
+ */
+static int spi_nor_spimem_check_pp(struct spi_nor *nor,
+				   const struct spi_nor_pp_command *pp)
+{
+	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(pp->opcode, 1),
+					  SPI_MEM_OP_ADDR(3, 0, 1),
+					  SPI_MEM_OP_NO_DUMMY,
+					  SPI_MEM_OP_DATA_OUT(0, NULL, 1));
+
+	op.cmd.buswidth = spi_nor_get_protocol_inst_nbits(pp->proto);
+	op.addr.buswidth = spi_nor_get_protocol_addr_nbits(pp->proto);
+	op.data.buswidth = spi_nor_get_protocol_data_nbits(pp->proto);
+
+	return spi_nor_spimem_check_op(nor, &op);
+}
+
+/**
+ * spi_nor_spimem_adjust_hwcaps - Find optimal Read/Write protocol
+ *                                based on SPI controller capabilities
+ * @nor:        pointer to a 'struct spi_nor'
+ * @hwcaps:     pointer to resulting capabilities after adjusting
+ *              according to controller and flash's capability
+ */
+static void
+spi_nor_spimem_adjust_hwcaps(struct spi_nor *nor, u32 *hwcaps)
+{
+	struct spi_nor_flash_parameter *params = &nor->params;
+	unsigned int cap;
+
+	/* DTR modes are not supported yet, mask them all. */
+	*hwcaps &= ~SNOR_HWCAPS_DTR;
+
+	/* X-X-X modes are not supported yet, mask them all. */
+	*hwcaps &= ~SNOR_HWCAPS_X_X_X;
+
+	for (cap = 0; cap < sizeof(*hwcaps) * BITS_PER_BYTE; cap++) {
+		int rdidx, ppidx;
+
+		if (!(*hwcaps & BIT(cap)))
+			continue;
+
+		rdidx = spi_nor_hwcaps_read2cmd(BIT(cap));
+		if (rdidx >= 0 &&
+		    spi_nor_spimem_check_readop(nor, &params->reads[rdidx]))
+			*hwcaps &= ~BIT(cap);
+
+		ppidx = spi_nor_hwcaps_pp2cmd(BIT(cap));
+		if (ppidx < 0)
+			continue;
+
+		if (spi_nor_spimem_check_pp(nor,
+					    &params->page_programs[ppidx]))
+			*hwcaps &= ~BIT(cap);
+	}
 }
 
 /**
···
 			      const struct sfdp_parameter_header *bfpt_header,
 			      struct spi_nor_flash_parameter *params)
 {
-	struct spi_nor_erase_map *map = &nor->erase_map;
+	struct spi_nor_erase_map *map = &params->erase_map;
 	struct spi_nor_erase_type *erase_type = map->erase_type;
 	struct sfdp_bfpt bfpt;
 	size_t len;
···
 	 * Erase Types defined in the bfpt table.
 	 */
 	erase_mask = 0;
-	memset(&nor->erase_map, 0, sizeof(nor->erase_map));
+	memset(&params->erase_map, 0, sizeof(params->erase_map));
 	for (i = 0; i < ARRAY_SIZE(sfdp_bfpt_erases); i++) {
 		const struct sfdp_bfpt_erase *er = &sfdp_bfpt_erases[i];
 		u32 erasesize;
···
 /**
  * spi_nor_init_non_uniform_erase_map() - initialize the non-uniform erase map
  * @nor:	pointer to a 'struct spi_nor'
+ * @params:     pointer to a duplicate 'struct spi_nor_flash_parameter' that is
+ *              used for storing SFDP parsed data
  * @smpt:	pointer to the sector map parameter table
  *
  * Return: 0 on success, -errno otherwise.
  */
-static int spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
-					      const u32 *smpt)
+static int
+spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
+				   struct spi_nor_flash_parameter *params,
+				   const u32 *smpt)
 {
-	struct spi_nor_erase_map *map = &nor->erase_map;
+	struct spi_nor_erase_map *map = &params->erase_map;
 	struct spi_nor_erase_type *erase = map->erase_type;
 	struct spi_nor_erase_region *region;
 	u64 offset;
···
 * spi_nor_parse_smpt() - parse Sector Map Parameter Table
  * @nor:		pointer to a 'struct spi_nor'
  * @smpt_header:	sector map parameter table header
+ * @params:		pointer to a duplicate 'struct spi_nor_flash_parameter'
+ *                      that is used for storing SFDP parsed data
  *
  * This table is optional, but when available, we parse it to identify the
  * location and size of sectors within the main data array of the flash memory
···
  * Return: 0 on success, -errno otherwise.
  */
 static int spi_nor_parse_smpt(struct spi_nor *nor,
-			      const struct sfdp_parameter_header *smpt_header)
+			      const struct sfdp_parameter_header *smpt_header,
+			      struct spi_nor_flash_parameter *params)
 {
 	const u32 *sector_map;
 	u32 *smpt;
···
 		goto out;
 	}
 
-	ret = spi_nor_init_non_uniform_erase_map(nor, sector_map);
+	ret = spi_nor_init_non_uniform_erase_map(nor, params, sector_map);
 	if (ret)
 		goto out;
 
-	spi_nor_regions_sort_erase_types(&nor->erase_map);
+	spi_nor_regions_sort_erase_types(&params->erase_map);
 	/* fall through */
 out:
 	kfree(smpt);
···
 		{ 0u /* not used */,  BIT(12) },
 	};
 	struct spi_nor_pp_command *params_pp = params->page_programs;
-	struct spi_nor_erase_map *map = &nor->erase_map;
+	struct spi_nor_erase_map *map = &params->erase_map;
 	struct spi_nor_erase_type *erase_type = map->erase_type;
 	u32 *dwords;
 	size_t len;
···
 	addr = SFDP_PARAM_HEADER_PTP(param_header);
 	ret = spi_nor_read_sfdp(nor, addr, len, dwords);
 	if (ret)
-		return ret;
+		goto out;
 
 	/* Fix endianness of the 4BAIT DWORDs. */
 	for (i = 0; i < SFDP_4BAIT_DWORD_MAX; i++)
···
 
 		switch (SFDP_PARAM_HEADER_ID(param_header)) {
 		case SFDP_SECTOR_MAP_ID:
-			err = spi_nor_parse_smpt(nor, param_header);
+			err = spi_nor_parse_smpt(nor, param_header, params);
 			break;
 
 		case SFDP_4BAIT_ID:
···
 	return err;
 }
 
-static int spi_nor_init_params(struct spi_nor *nor,
-			       struct spi_nor_flash_parameter *params)
-{
-	struct spi_nor_erase_map *map = &nor->erase_map;
-	const struct flash_info *info = nor->info;
-	u8 i, erase_mask;
-
-	/* Set legacy flash parameters as default. */
-	memset(params, 0, sizeof(*params));
-
-	/* Set SPI NOR sizes. */
-	params->size = (u64)info->sector_size * info->n_sectors;
-	params->page_size = info->page_size;
-
-	/* (Fast) Read settings. */
-	params->hwcaps.mask |= SNOR_HWCAPS_READ;
-	spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ],
-				  0, 0, SPINOR_OP_READ,
-				  SNOR_PROTO_1_1_1);
-
-	if (!(info->flags & SPI_NOR_NO_FR)) {
-		params->hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
-		spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_FAST],
-					  0, 8, SPINOR_OP_READ_FAST,
-					  SNOR_PROTO_1_1_1);
-	}
-
-	if (info->flags & SPI_NOR_DUAL_READ) {
-		params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2;
-		spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_2],
-					  0, 8, SPINOR_OP_READ_1_1_2,
-					  SNOR_PROTO_1_1_2);
-	}
-
-	if (info->flags & SPI_NOR_QUAD_READ) {
-		params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4;
-		spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_4],
-					  0, 8, SPINOR_OP_READ_1_1_4,
-					  SNOR_PROTO_1_1_4);
-	}
-
-	if (info->flags & SPI_NOR_OCTAL_READ) {
-		params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_8;
-		spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_8],
-					  0, 8, SPINOR_OP_READ_1_1_8,
-					  SNOR_PROTO_1_1_8);
-	}
-
-	/* Page Program settings. */
-	params->hwcaps.mask |= SNOR_HWCAPS_PP;
-	spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP],
-				SPINOR_OP_PP, SNOR_PROTO_1_1_1);
-
-	/*
-	 * Sector Erase settings. Sort Erase Types in ascending order, with the
-	 * smallest erase size starting at BIT(0).
-	 */
-	erase_mask = 0;
-	i = 0;
-	if (info->flags & SECT_4K_PMC) {
-		erase_mask |= BIT(i);
-		spi_nor_set_erase_type(&map->erase_type[i], 4096u,
-				       SPINOR_OP_BE_4K_PMC);
-		i++;
-	} else if (info->flags & SECT_4K) {
-		erase_mask |= BIT(i);
-		spi_nor_set_erase_type(&map->erase_type[i], 4096u,
-				       SPINOR_OP_BE_4K);
-		i++;
-	}
-	erase_mask |= BIT(i);
-	spi_nor_set_erase_type(&map->erase_type[i], info->sector_size,
-			       SPINOR_OP_SE);
-	spi_nor_init_uniform_erase_map(map, erase_mask, params->size);
-
-	/* Select the procedure to set the Quad Enable bit. */
-	if (params->hwcaps.mask & (SNOR_HWCAPS_READ_QUAD |
-				   SNOR_HWCAPS_PP_QUAD)) {
-		switch (JEDEC_MFR(info)) {
-		case SNOR_MFR_MACRONIX:
-			params->quad_enable = macronix_quad_enable;
-			break;
-
-		case SNOR_MFR_ST:
-		case SNOR_MFR_MICRON:
-			break;
-
-		default:
-			/* Kept only for backward compatibility purpose. */
-			params->quad_enable = spansion_quad_enable;
-			break;
-		}
-
-		/*
-		 * Some manufacturer like GigaDevice may use different
-		 * bit to set QE on different memories, so the MFR can't
-		 * indicate the quad_enable method for this case, we need
-		 * set it in flash info list.
-		 */
-		if (info->quad_enable)
-			params->quad_enable = info->quad_enable;
-	}
-
-	if ((info->flags & (SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ)) &&
-	    !(info->flags & SPI_NOR_SKIP_SFDP)) {
-		struct spi_nor_flash_parameter sfdp_params;
-		struct spi_nor_erase_map prev_map;
-
-		memcpy(&sfdp_params, params, sizeof(sfdp_params));
-		memcpy(&prev_map, &nor->erase_map, sizeof(prev_map));
-
-		if (spi_nor_parse_sfdp(nor, &sfdp_params)) {
-			nor->addr_width = 0;
-			nor->flags &= ~SNOR_F_4B_OPCODES;
-			/* restore previous erase map */
-			memcpy(&nor->erase_map, &prev_map,
-			       sizeof(nor->erase_map));
-		} else {
-			memcpy(params, &sfdp_params, sizeof(*params));
-		}
-	}
-
-	return 0;
-}
-
 static int spi_nor_select_read(struct spi_nor *nor,
-			       const struct spi_nor_flash_parameter *params,
 			       u32 shared_hwcaps)
 {
 	int cmd, best_match = fls(shared_hwcaps & SNOR_HWCAPS_READ_MASK) - 1;
···
 	if (cmd < 0)
 		return -EINVAL;
 
-	read = &params->reads[cmd];
+	read = &nor->params.reads[cmd];
 	nor->read_opcode = read->opcode;
 	nor->read_proto = read->proto;
···
 }
 
 static int spi_nor_select_pp(struct spi_nor *nor,
-			     const struct spi_nor_flash_parameter *params,
 			     u32 shared_hwcaps)
 {
 	int cmd, best_match = fls(shared_hwcaps & SNOR_HWCAPS_PP_MASK) - 1;
···
 	if (cmd < 0)
 		return -EINVAL;
 
-	pp = &params->page_programs[cmd];
+	pp = &nor->params.page_programs[cmd];
 	nor->program_opcode = pp->opcode;
 	nor->write_proto = pp->proto;
 	return 0;
···
 	return erase;
 }
 
-static int spi_nor_select_erase(struct spi_nor *nor, u32 wanted_size)
+static int spi_nor_select_erase(struct spi_nor *nor)
 {
-	struct spi_nor_erase_map *map = &nor->erase_map;
+	struct spi_nor_erase_map *map = &nor->params.erase_map;
 	const struct spi_nor_erase_type *erase = NULL;
 	struct mtd_info *mtd = &nor->mtd;
+	u32 wanted_size = nor->info->sector_size;
 	int i;
 
 	/*
···
 	return 0;
 }
 
-static int spi_nor_setup(struct spi_nor *nor,
-			 const struct spi_nor_flash_parameter *params,
-			 const struct spi_nor_hwcaps *hwcaps)
+static int spi_nor_default_setup(struct spi_nor *nor,
+				 const struct spi_nor_hwcaps *hwcaps)
 {
+	struct spi_nor_flash_parameter *params = &nor->params;
 	u32 ignored_mask, shared_mask;
-	bool enable_quad_io;
 	int err;
 
 	/*
···
 	 */
 	shared_mask = hwcaps->mask & params->hwcaps.mask;
 
-	/* SPI n-n-n protocols are not supported yet. */
-	ignored_mask = (SNOR_HWCAPS_READ_2_2_2 |
-			SNOR_HWCAPS_READ_4_4_4 |
-			SNOR_HWCAPS_READ_8_8_8 |
-			SNOR_HWCAPS_PP_4_4_4 |
-			SNOR_HWCAPS_PP_8_8_8);
-	if (shared_mask & ignored_mask) {
-		dev_dbg(nor->dev,
-			"SPI n-n-n protocols are not supported yet.\n");
-		shared_mask &= ~ignored_mask;
+	if (nor->spimem) {
+		/*
+		 * When called from spi_nor_probe(), all caps are set and we
+		 * need to discard some of them based on what the SPI
+		 * controller actually supports (using spi_mem_supports_op()).
+		 */
+		spi_nor_spimem_adjust_hwcaps(nor, &shared_mask);
+	} else {
+		/*
+		 * SPI n-n-n protocols are not supported when the SPI
+		 * controller directly implements the spi_nor interface.
+		 * Yet another reason to switch to spi-mem.
+		 */
+		ignored_mask = SNOR_HWCAPS_X_X_X;
+		if (shared_mask & ignored_mask) {
+			dev_dbg(nor->dev,
+				"SPI n-n-n protocols are not supported.\n");
+			shared_mask &= ~ignored_mask;
+		}
 	}
 
 	/* Select the (Fast) Read command. */
-	err = spi_nor_select_read(nor, params, shared_mask);
+	err = spi_nor_select_read(nor, shared_mask);
 	if (err) {
 		dev_err(nor->dev,
 			"can't select read settings supported by both the SPI controller and memory.\n");
···
 	}
 
 	/* Select the Page Program command. */
-	err = spi_nor_select_pp(nor, params, shared_mask);
+	err = spi_nor_select_pp(nor, shared_mask);
 	if (err) {
 		dev_err(nor->dev,
 			"can't select write settings supported by both the SPI controller and memory.\n");
···
 	}
 
 	/* Select the Sector Erase command. */
-	err = spi_nor_select_erase(nor, nor->info->sector_size);
+	err = spi_nor_select_erase(nor);
 	if (err) {
 		dev_err(nor->dev,
 			"can't select erase settings supported by both the SPI controller and memory.\n");
		return err;
 	}
 
-	/* Enable Quad I/O if needed.
*/ 4373 - enable_quad_io = (spi_nor_get_protocol_width(nor->read_proto) == 4 || 4374 - spi_nor_get_protocol_width(nor->write_proto) == 4); 4375 - if (enable_quad_io && params->quad_enable) 4376 - nor->quad_enable = params->quad_enable; 4377 - else 4378 - nor->quad_enable = NULL; 4379 - 4380 4020 return 0; 4021 + } 4022 + 4023 + static int spi_nor_setup(struct spi_nor *nor, 4024 + const struct spi_nor_hwcaps *hwcaps) 4025 + { 4026 + if (!nor->params.setup) 4027 + return 0; 4028 + 4029 + return nor->params.setup(nor, hwcaps); 4030 + } 4031 + 4032 + static void macronix_set_default_init(struct spi_nor *nor) 4033 + { 4034 + nor->params.quad_enable = macronix_quad_enable; 4035 + nor->params.set_4byte = macronix_set_4byte; 4036 + } 4037 + 4038 + static void st_micron_set_default_init(struct spi_nor *nor) 4039 + { 4040 + nor->flags |= SNOR_F_HAS_LOCK; 4041 + nor->params.quad_enable = NULL; 4042 + nor->params.set_4byte = st_micron_set_4byte; 4043 + } 4044 + 4045 + static void winbond_set_default_init(struct spi_nor *nor) 4046 + { 4047 + nor->params.set_4byte = winbond_set_4byte; 4048 + } 4049 + 4050 + /** 4051 + * spi_nor_manufacturer_init_params() - Initialize the flash's parameters and 4052 + * settings based on the MFR register and the ->default_init() hook. 4053 + * @nor: pointer to a 'struct spi_nor'. 
4054 + */ 4055 + static void spi_nor_manufacturer_init_params(struct spi_nor *nor) 4056 + { 4057 + /* Init flash parameters based on MFR */ 4058 + switch (JEDEC_MFR(nor->info)) { 4059 + case SNOR_MFR_MACRONIX: 4060 + macronix_set_default_init(nor); 4061 + break; 4062 + 4063 + case SNOR_MFR_ST: 4064 + case SNOR_MFR_MICRON: 4065 + st_micron_set_default_init(nor); 4066 + break; 4067 + 4068 + case SNOR_MFR_WINBOND: 4069 + winbond_set_default_init(nor); 4070 + break; 4071 + 4072 + default: 4073 + break; 4074 + } 4075 + 4076 + if (nor->info->fixups && nor->info->fixups->default_init) 4077 + nor->info->fixups->default_init(nor); 4078 + } 4079 + 4080 + /** 4081 + * spi_nor_sfdp_init_params() - Initialize the flash's parameters and settings 4082 + * based on the JESD216 SFDP standard. 4083 + * @nor: pointer to a 'struct spi_nor'. 4084 + * 4085 + * The method has a roll-back mechanism: in case the SFDP parsing fails, the 4086 + * legacy flash parameters and settings will be restored. 4087 + */ 4088 + static void spi_nor_sfdp_init_params(struct spi_nor *nor) 4089 + { 4090 + struct spi_nor_flash_parameter sfdp_params; 4091 + 4092 + memcpy(&sfdp_params, &nor->params, sizeof(sfdp_params)); 4093 + 4094 + if (spi_nor_parse_sfdp(nor, &sfdp_params)) { 4095 + nor->addr_width = 0; 4096 + nor->flags &= ~SNOR_F_4B_OPCODES; 4097 + } else { 4098 + memcpy(&nor->params, &sfdp_params, sizeof(nor->params)); 4099 + } 4100 + } 4101 + 4102 + /** 4103 + * spi_nor_info_init_params() - Initialize the flash's parameters and settings 4104 + * based on nor->info data. 4105 + * @nor: pointer to a 'struct spi_nor'. 
4106 + */ 4107 + static void spi_nor_info_init_params(struct spi_nor *nor) 4108 + { 4109 + struct spi_nor_flash_parameter *params = &nor->params; 4110 + struct spi_nor_erase_map *map = &params->erase_map; 4111 + const struct flash_info *info = nor->info; 4112 + struct device_node *np = spi_nor_get_flash_node(nor); 4113 + u8 i, erase_mask; 4114 + 4115 + /* Initialize legacy flash parameters and settings. */ 4116 + params->quad_enable = spansion_quad_enable; 4117 + params->set_4byte = spansion_set_4byte; 4118 + params->setup = spi_nor_default_setup; 4119 + 4120 + /* Set SPI NOR sizes. */ 4121 + params->size = (u64)info->sector_size * info->n_sectors; 4122 + params->page_size = info->page_size; 4123 + 4124 + if (!(info->flags & SPI_NOR_NO_FR)) { 4125 + /* Default to Fast Read for DT and non-DT platform devices. */ 4126 + params->hwcaps.mask |= SNOR_HWCAPS_READ_FAST; 4127 + 4128 + /* Mask out Fast Read if not requested at DT instantiation. */ 4129 + if (np && !of_property_read_bool(np, "m25p,fast-read")) 4130 + params->hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST; 4131 + } 4132 + 4133 + /* (Fast) Read settings. 
*/ 4134 + params->hwcaps.mask |= SNOR_HWCAPS_READ; 4135 + spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ], 4136 + 0, 0, SPINOR_OP_READ, 4137 + SNOR_PROTO_1_1_1); 4138 + 4139 + if (params->hwcaps.mask & SNOR_HWCAPS_READ_FAST) 4140 + spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_FAST], 4141 + 0, 8, SPINOR_OP_READ_FAST, 4142 + SNOR_PROTO_1_1_1); 4143 + 4144 + if (info->flags & SPI_NOR_DUAL_READ) { 4145 + params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_2; 4146 + spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_2], 4147 + 0, 8, SPINOR_OP_READ_1_1_2, 4148 + SNOR_PROTO_1_1_2); 4149 + } 4150 + 4151 + if (info->flags & SPI_NOR_QUAD_READ) { 4152 + params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_4; 4153 + spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_4], 4154 + 0, 8, SPINOR_OP_READ_1_1_4, 4155 + SNOR_PROTO_1_1_4); 4156 + } 4157 + 4158 + if (info->flags & SPI_NOR_OCTAL_READ) { 4159 + params->hwcaps.mask |= SNOR_HWCAPS_READ_1_1_8; 4160 + spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ_1_1_8], 4161 + 0, 8, SPINOR_OP_READ_1_1_8, 4162 + SNOR_PROTO_1_1_8); 4163 + } 4164 + 4165 + /* Page Program settings. */ 4166 + params->hwcaps.mask |= SNOR_HWCAPS_PP; 4167 + spi_nor_set_pp_settings(&params->page_programs[SNOR_CMD_PP], 4168 + SPINOR_OP_PP, SNOR_PROTO_1_1_1); 4169 + 4170 + /* 4171 + * Sector Erase settings. Sort Erase Types in ascending order, with the 4172 + * smallest erase size starting at BIT(0). 
4173 + */ 4174 + erase_mask = 0; 4175 + i = 0; 4176 + if (info->flags & SECT_4K_PMC) { 4177 + erase_mask |= BIT(i); 4178 + spi_nor_set_erase_type(&map->erase_type[i], 4096u, 4179 + SPINOR_OP_BE_4K_PMC); 4180 + i++; 4181 + } else if (info->flags & SECT_4K) { 4182 + erase_mask |= BIT(i); 4183 + spi_nor_set_erase_type(&map->erase_type[i], 4096u, 4184 + SPINOR_OP_BE_4K); 4185 + i++; 4186 + } 4187 + erase_mask |= BIT(i); 4188 + spi_nor_set_erase_type(&map->erase_type[i], info->sector_size, 4189 + SPINOR_OP_SE); 4190 + spi_nor_init_uniform_erase_map(map, erase_mask, params->size); 4191 + } 4192 + 4193 + static void spansion_post_sfdp_fixups(struct spi_nor *nor) 4194 + { 4195 + struct mtd_info *mtd = &nor->mtd; 4196 + 4197 + if (mtd->size <= SZ_16M) 4198 + return; 4199 + 4200 + nor->flags |= SNOR_F_4B_OPCODES; 4201 + /* No small sector erase for 4-byte command set */ 4202 + nor->erase_opcode = SPINOR_OP_SE; 4203 + nor->mtd.erasesize = nor->info->sector_size; 4204 + } 4205 + 4206 + static void s3an_post_sfdp_fixups(struct spi_nor *nor) 4207 + { 4208 + nor->params.setup = s3an_nor_setup; 4209 + } 4210 + 4211 + /** 4212 + * spi_nor_post_sfdp_fixups() - Updates the flash's parameters and settings 4213 + * after SFDP has been parsed (is also called for SPI NORs that do not 4214 + * support RDSFDP). 4215 + * @nor: pointer to a 'struct spi_nor' 4216 + * 4217 + * Typically used to tweak various parameters that could not be extracted by 4218 + * other means (e.g. when the information provided by the SFDP/flash_info 4219 + * tables is incomplete or wrong). 
4220 + */ 4221 + static void spi_nor_post_sfdp_fixups(struct spi_nor *nor) 4222 + { 4223 + switch (JEDEC_MFR(nor->info)) { 4224 + case SNOR_MFR_SPANSION: 4225 + spansion_post_sfdp_fixups(nor); 4226 + break; 4227 + 4228 + default: 4229 + break; 4230 + } 4231 + 4232 + if (nor->info->flags & SPI_S3AN) 4233 + s3an_post_sfdp_fixups(nor); 4234 + 4235 + if (nor->info->fixups && nor->info->fixups->post_sfdp) 4236 + nor->info->fixups->post_sfdp(nor); 4237 + } 4238 + 4239 + /** 4240 + * spi_nor_late_init_params() - Late initialization of default flash parameters. 4241 + * @nor: pointer to a 'struct spi_nor' 4242 + * 4243 + * Used to set default flash parameters and settings when the ->default_init() 4244 + * hook or the SFDP parser leave voids. 4245 + */ 4246 + static void spi_nor_late_init_params(struct spi_nor *nor) 4247 + { 4248 + /* 4249 + * NOR protection support. When locking_ops are not provided, we pick 4250 + * the default ones. 4251 + */ 4252 + if (nor->flags & SNOR_F_HAS_LOCK && !nor->params.locking_ops) 4253 + nor->params.locking_ops = &stm_locking_ops; 4254 + } 4255 + 4256 + /** 4257 + * spi_nor_init_params() - Initialize the flash's parameters and settings. 4258 + * @nor: pointer to a 'struct spi_nor'. 4259 + * 4260 + * The flash parameters and settings are initialized based on a sequence of 4261 + * calls that are ordered by priority: 4262 + * 4263 + * 1/ Default flash parameters initialization. The initializations are done 4264 + * based on nor->info data: 4265 + * spi_nor_info_init_params() 4266 + * 4267 + * which can be overwritten by: 4268 + * 2/ Manufacturer flash parameters initialization. The initializations are 4269 + * done based on the MFR register, or when the decisions cannot be done solely 4270 + * based on MFR, by using specific flash_info tweaks, ->default_init(): 4271 + * spi_nor_manufacturer_init_params() 4272 + * 4273 + * which can be overwritten by: 4274 + * 3/ SFDP flash parameters initialization. 
JESD216 SFDP is a standard and 4275 + * should be more accurate than the above. 4276 + * spi_nor_sfdp_init_params() 4277 + * 4278 + * Please note that there is a ->post_bfpt() fixup hook that can overwrite 4279 + * the flash parameters and settings immediately after parsing the Basic 4280 + * Flash Parameter Table. 4281 + * 4282 + * which can be overwritten by: 4283 + * 4/ Post SFDP flash parameters initialization. Used to tweak various 4284 + * parameters that could not be extracted by other means (e.g. when 4285 + * the information provided by the SFDP/flash_info tables is incomplete or 4286 + * wrong). 4287 + * spi_nor_post_sfdp_fixups() 4288 + * 4289 + * 5/ Late default flash parameters initialization, used when the 4290 + * ->default_init() hook or the SFDP parser do not set specific params. 4291 + * spi_nor_late_init_params() 4292 + */ 4293 + static void spi_nor_init_params(struct spi_nor *nor) 4294 + { 4295 + spi_nor_info_init_params(nor); 4296 + 4297 + spi_nor_manufacturer_init_params(nor); 4298 + 4299 + if ((nor->info->flags & (SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ)) && 4300 + !(nor->info->flags & SPI_NOR_SKIP_SFDP)) 4301 + spi_nor_sfdp_init_params(nor); 4302 + 4303 + spi_nor_post_sfdp_fixups(nor); 4304 + 4305 + spi_nor_late_init_params(nor); 4306 + } 4307 + 4308 + /** 4309 + * spi_nor_quad_enable() - enable Quad I/O if needed. 4310 + * @nor: pointer to a 'struct spi_nor' 4311 + * 4312 + * Return: 0 on success, -errno otherwise. 
4313 + */ 4314 + static int spi_nor_quad_enable(struct spi_nor *nor) 4315 + { 4316 + if (!nor->params.quad_enable) 4317 + return 0; 4318 + 4319 + if (!(spi_nor_get_protocol_width(nor->read_proto) == 4 || 4320 + spi_nor_get_protocol_width(nor->write_proto) == 4)) 4321 + return 0; 4322 + 4323 + return nor->params.quad_enable(nor); 4381 4324 } 4382 4325 4383 4326 static int spi_nor_init(struct spi_nor *nor) ··· 4680 4033 int err; 4681 4034 4682 4035 if (nor->clear_sr_bp) { 4683 - if (nor->quad_enable == spansion_quad_enable) 4036 + if (nor->params.quad_enable == spansion_quad_enable) 4684 4037 nor->clear_sr_bp = spi_nor_spansion_clear_sr_bp; 4685 4038 4686 4039 err = nor->clear_sr_bp(nor); ··· 4691 4044 } 4692 4045 } 4693 4046 4694 - if (nor->quad_enable) { 4695 - err = nor->quad_enable(nor); 4696 - if (err) { 4697 - dev_err(nor->dev, "quad mode not supported\n"); 4698 - return err; 4699 - } 4047 + err = spi_nor_quad_enable(nor); 4048 + if (err) { 4049 + dev_err(nor->dev, "quad mode not supported\n"); 4050 + return err; 4700 4051 } 4701 4052 4702 4053 if (nor->addr_width == 4 && !(nor->flags & SNOR_F_4B_OPCODES)) { ··· 4707 4062 */ 4708 4063 WARN_ONCE(nor->flags & SNOR_F_BROKEN_RESET, 4709 4064 "enabling reset hack; may not recover from unexpected reboots\n"); 4710 - set_4byte(nor, true); 4065 + nor->params.set_4byte(nor, true); 4711 4066 } 4712 4067 4713 4068 return 0; ··· 4731 4086 /* restore the addressing mode */ 4732 4087 if (nor->addr_width == 4 && !(nor->flags & SNOR_F_4B_OPCODES) && 4733 4088 nor->flags & SNOR_F_BROKEN_RESET) 4734 - set_4byte(nor, false); 4089 + nor->params.set_4byte(nor, false); 4735 4090 } 4736 4091 EXPORT_SYMBOL_GPL(spi_nor_restore); 4737 4092 ··· 4747 4102 return NULL; 4748 4103 } 4749 4104 4105 + static int spi_nor_set_addr_width(struct spi_nor *nor) 4106 + { 4107 + if (nor->addr_width) { 4108 + /* already configured from SFDP */ 4109 + } else if (nor->info->addr_width) { 4110 + nor->addr_width = nor->info->addr_width; 4111 + } else if 
(nor->mtd.size > 0x1000000) { 4112 + /* enable 4-byte addressing if the device exceeds 16MiB */ 4113 + nor->addr_width = 4; 4114 + } else { 4115 + nor->addr_width = 3; 4116 + } 4117 + 4118 + if (nor->addr_width > SPI_NOR_MAX_ADDR_WIDTH) { 4119 + dev_err(nor->dev, "address width is too large: %u\n", 4120 + nor->addr_width); 4121 + return -EINVAL; 4122 + } 4123 + 4124 + /* Set 4byte opcodes when possible. */ 4125 + if (nor->addr_width == 4 && nor->flags & SNOR_F_4B_OPCODES && 4126 + !(nor->flags & SNOR_F_HAS_4BAIT)) 4127 + spi_nor_set_4byte_opcodes(nor); 4128 + 4129 + return 0; 4130 + } 4131 + 4132 + static void spi_nor_debugfs_init(struct spi_nor *nor, 4133 + const struct flash_info *info) 4134 + { 4135 + struct mtd_info *mtd = &nor->mtd; 4136 + 4137 + mtd->dbg.partname = info->name; 4138 + mtd->dbg.partid = devm_kasprintf(nor->dev, GFP_KERNEL, "spi-nor:%*phN", 4139 + info->id_len, info->id); 4140 + } 4141 + 4142 + static const struct flash_info *spi_nor_get_flash_info(struct spi_nor *nor, 4143 + const char *name) 4144 + { 4145 + const struct flash_info *info = NULL; 4146 + 4147 + if (name) 4148 + info = spi_nor_match_id(name); 4149 + /* Try to auto-detect if chip name wasn't specified or not found */ 4150 + if (!info) 4151 + info = spi_nor_read_id(nor); 4152 + if (IS_ERR_OR_NULL(info)) 4153 + return ERR_PTR(-ENOENT); 4154 + 4155 + /* 4156 + * If caller has specified name of flash model that can normally be 4157 + * detected using JEDEC, let's verify it. 4158 + */ 4159 + if (name && info->id_len) { 4160 + const struct flash_info *jinfo; 4161 + 4162 + jinfo = spi_nor_read_id(nor); 4163 + if (IS_ERR(jinfo)) { 4164 + return jinfo; 4165 + } else if (jinfo != info) { 4166 + /* 4167 + * JEDEC knows better, so overwrite platform ID. We 4168 + * can't trust partitions any longer, but we'll let 4169 + * mtd apply them anyway, since some partitions may be 4170 + * marked read-only, and we don't want to lose that 4171 + * information, even if it's not 100% accurate. 
4172 + */ 4173 + dev_warn(nor->dev, "found %s, expected %s\n", 4174 + jinfo->name, info->name); 4175 + info = jinfo; 4176 + } 4177 + } 4178 + 4179 + return info; 4180 + } 4181 + 4750 4182 int spi_nor_scan(struct spi_nor *nor, const char *name, 4751 4183 const struct spi_nor_hwcaps *hwcaps) 4752 4184 { 4753 - struct spi_nor_flash_parameter params; 4754 - const struct flash_info *info = NULL; 4185 + const struct flash_info *info; 4755 4186 struct device *dev = nor->dev; 4756 4187 struct mtd_info *mtd = &nor->mtd; 4757 4188 struct device_node *np = spi_nor_get_flash_node(nor); 4189 + struct spi_nor_flash_parameter *params = &nor->params; 4758 4190 int ret; 4759 4191 int i; 4760 4192 ··· 4844 4122 nor->read_proto = SNOR_PROTO_1_1_1; 4845 4123 nor->write_proto = SNOR_PROTO_1_1_1; 4846 4124 4847 - if (name) 4848 - info = spi_nor_match_id(name); 4849 - /* Try to auto-detect if chip name wasn't specified or not found */ 4850 - if (!info) 4851 - info = spi_nor_read_id(nor); 4852 - if (IS_ERR_OR_NULL(info)) 4853 - return -ENOENT; 4854 - 4855 4125 /* 4856 - * If caller has specified name of flash model that can normally be 4857 - * detected using JEDEC, let's verify it. 4126 + * We need the bounce buffer early to read/write registers when going 4127 + * through the spi-mem layer (buffers have to be DMA-able). 4128 + * For spi-mem drivers, we'll reallocate a new buffer if 4129 + * nor->page_size turns out to be greater than PAGE_SIZE (which 4130 + * shouldn't happen before long since NOR pages are usually less 4131 + * than 1KB) after spi_nor_scan() returns. 
4858 4132 */ 4859 - if (name && info->id_len) { 4860 - const struct flash_info *jinfo; 4133 + nor->bouncebuf_size = PAGE_SIZE; 4134 + nor->bouncebuf = devm_kmalloc(dev, nor->bouncebuf_size, 4135 + GFP_KERNEL); 4136 + if (!nor->bouncebuf) 4137 + return -ENOMEM; 4861 4138 4862 - jinfo = spi_nor_read_id(nor); 4863 - if (IS_ERR(jinfo)) { 4864 - return PTR_ERR(jinfo); 4865 - } else if (jinfo != info) { 4866 - /* 4867 - * JEDEC knows better, so overwrite platform ID. We 4868 - * can't trust partitions any longer, but we'll let 4869 - * mtd apply them anyway, since some partitions may be 4870 - * marked read-only, and we don't want to lose that 4871 - * information, even if it's not 100% accurate. 4872 - */ 4873 - dev_warn(dev, "found %s, expected %s\n", 4874 - jinfo->name, info->name); 4875 - info = jinfo; 4876 - } 4877 - } 4139 + info = spi_nor_get_flash_info(nor, name); 4140 + if (IS_ERR(info)) 4141 + return PTR_ERR(info); 4878 4142 4879 4143 nor->info = info; 4144 + 4145 + spi_nor_debugfs_init(nor, info); 4880 4146 4881 4147 mutex_init(&nor->lock); 4882 4148 ··· 4873 4163 * spi_nor_wait_till_ready(). Xilinx S3AN share MFR 4874 4164 * with Atmel spi-nor 4875 4165 */ 4876 - if (info->flags & SPI_S3AN) 4166 + if (info->flags & SPI_NOR_XSR_RDY) 4877 4167 nor->flags |= SNOR_F_READY_XSR_RDY; 4168 + 4169 + if (info->flags & SPI_NOR_HAS_LOCK) 4170 + nor->flags |= SNOR_F_HAS_LOCK; 4878 4171 4879 4172 /* 4880 4173 * Atmel, SST, Intel/Numonyx, and others serial NOR tend to power up ··· 4889 4176 nor->info->flags & SPI_NOR_HAS_LOCK) 4890 4177 nor->clear_sr_bp = spi_nor_clear_sr_bp; 4891 4178 4892 - /* Parse the Serial Flash Discoverable Parameters table. 
*/ 4893 - ret = spi_nor_init_params(nor, &params); 4894 - if (ret) 4895 - return ret; 4179 + /* Init flash parameters based on flash_info struct and SFDP */ 4180 + spi_nor_init_params(nor); 4896 4181 4897 4182 if (!mtd->name) 4898 4183 mtd->name = dev_name(dev); ··· 4898 4187 mtd->type = MTD_NORFLASH; 4899 4188 mtd->writesize = 1; 4900 4189 mtd->flags = MTD_CAP_NORFLASH; 4901 - mtd->size = params.size; 4190 + mtd->size = params->size; 4902 4191 mtd->_erase = spi_nor_erase; 4903 4192 mtd->_read = spi_nor_read; 4904 4193 mtd->_resume = spi_nor_resume; 4905 4194 4906 - /* NOR protection support for STmicro/Micron chips and similar */ 4907 - if (JEDEC_MFR(info) == SNOR_MFR_ST || 4908 - JEDEC_MFR(info) == SNOR_MFR_MICRON || 4909 - info->flags & SPI_NOR_HAS_LOCK) { 4910 - nor->flash_lock = stm_lock; 4911 - nor->flash_unlock = stm_unlock; 4912 - nor->flash_is_locked = stm_is_locked; 4913 - } 4914 - 4915 - if (nor->flash_lock && nor->flash_unlock && nor->flash_is_locked) { 4195 + if (nor->params.locking_ops) { 4916 4196 mtd->_lock = spi_nor_lock; 4917 4197 mtd->_unlock = spi_nor_unlock; 4918 4198 mtd->_is_locked = spi_nor_is_locked; ··· 4928 4226 mtd->flags |= MTD_NO_ERASE; 4929 4227 4930 4228 mtd->dev.parent = dev; 4931 - nor->page_size = params.page_size; 4229 + nor->page_size = params->page_size; 4932 4230 mtd->writebufsize = nor->page_size; 4933 - 4934 - if (np) { 4935 - /* If we were instantiated by DT, use it */ 4936 - if (of_property_read_bool(np, "m25p,fast-read")) 4937 - params.hwcaps.mask |= SNOR_HWCAPS_READ_FAST; 4938 - else 4939 - params.hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST; 4940 - } else { 4941 - /* If we weren't instantiated by DT, default to fast-read */ 4942 - params.hwcaps.mask |= SNOR_HWCAPS_READ_FAST; 4943 - } 4944 4231 4945 4232 if (of_property_read_bool(np, "broken-flash-reset")) 4946 4233 nor->flags |= SNOR_F_BROKEN_RESET; 4947 - 4948 - /* Some devices cannot do fast-read, no matter what DT tells us */ 4949 - if (info->flags & SPI_NOR_NO_FR) 4950 - 
params.hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST; 4951 4234 4952 4235 /* 4953 4236 * Configure the SPI memory: 4954 4237 * - select op codes for (Fast) Read, Page Program and Sector Erase. 4955 4238 * - set the number of dummy cycles (mode cycles + wait states). 4956 4239 * - set the SPI protocols for register and memory accesses. 4957 - * - set the Quad Enable bit if needed (required by SPI x-y-4 protos). 4958 4240 */ 4959 - ret = spi_nor_setup(nor, &params, hwcaps); 4241 + ret = spi_nor_setup(nor, hwcaps); 4960 4242 if (ret) 4961 4243 return ret; 4962 4244 4963 - if (nor->addr_width) { 4964 - /* already configured from SFDP */ 4965 - } else if (info->addr_width) { 4966 - nor->addr_width = info->addr_width; 4967 - } else if (mtd->size > 0x1000000) { 4968 - /* enable 4-byte addressing if the device exceeds 16MiB */ 4969 - nor->addr_width = 4; 4970 - } else { 4971 - nor->addr_width = 3; 4972 - } 4973 - 4974 - if (info->flags & SPI_NOR_4B_OPCODES || 4975 - (JEDEC_MFR(info) == SNOR_MFR_SPANSION && mtd->size > SZ_16M)) 4245 + if (info->flags & SPI_NOR_4B_OPCODES) 4976 4246 nor->flags |= SNOR_F_4B_OPCODES; 4977 4247 4978 - if (nor->addr_width == 4 && nor->flags & SNOR_F_4B_OPCODES && 4979 - !(nor->flags & SNOR_F_HAS_4BAIT)) 4980 - spi_nor_set_4byte_opcodes(nor); 4981 - 4982 - if (nor->addr_width > SPI_NOR_MAX_ADDR_WIDTH) { 4983 - dev_err(dev, "address width is too large: %u\n", 4984 - nor->addr_width); 4985 - return -EINVAL; 4986 - } 4987 - 4988 - if (info->flags & SPI_S3AN) { 4989 - ret = s3an_nor_scan(nor); 4990 - if (ret) 4991 - return ret; 4992 - } 4248 + ret = spi_nor_set_addr_width(nor); 4249 + if (ret) 4250 + return ret; 4993 4251 4994 4252 /* Send all the required SPI flash commands to initialize device */ 4995 4253 ret = spi_nor_init(nor); ··· 4978 4316 return 0; 4979 4317 } 4980 4318 EXPORT_SYMBOL_GPL(spi_nor_scan); 4319 + 4320 + static int spi_nor_probe(struct spi_mem *spimem) 4321 + { 4322 + struct spi_device *spi = spimem->spi; 4323 + struct flash_platform_data 
*data = dev_get_platdata(&spi->dev); 4324 + struct spi_nor *nor; 4325 + /* 4326 + * Enable all caps by default. The core will mask them after 4327 + * checking what's really supported using spi_mem_supports_op(). 4328 + */ 4329 + const struct spi_nor_hwcaps hwcaps = { .mask = SNOR_HWCAPS_ALL }; 4330 + char *flash_name; 4331 + int ret; 4332 + 4333 + nor = devm_kzalloc(&spi->dev, sizeof(*nor), GFP_KERNEL); 4334 + if (!nor) 4335 + return -ENOMEM; 4336 + 4337 + nor->spimem = spimem; 4338 + nor->dev = &spi->dev; 4339 + spi_nor_set_flash_node(nor, spi->dev.of_node); 4340 + 4341 + spi_mem_set_drvdata(spimem, nor); 4342 + 4343 + if (data && data->name) 4344 + nor->mtd.name = data->name; 4345 + 4346 + if (!nor->mtd.name) 4347 + nor->mtd.name = spi_mem_get_name(spimem); 4348 + 4349 + /* 4350 + * For some (historical?) reason many platforms provide two different 4351 + * names in flash_platform_data: "name" and "type". Quite often name is 4352 + * set to "m25p80" and then "type" provides a real chip name. 4353 + * If that's the case, respect "type" and ignore a "name". 4354 + */ 4355 + if (data && data->type) 4356 + flash_name = data->type; 4357 + else if (!strcmp(spi->modalias, "spi-nor")) 4358 + flash_name = NULL; /* auto-detect */ 4359 + else 4360 + flash_name = spi->modalias; 4361 + 4362 + ret = spi_nor_scan(nor, flash_name, &hwcaps); 4363 + if (ret) 4364 + return ret; 4365 + 4366 + /* 4367 + * None of the existing parts have > 512B pages, but let's play safe 4368 + * and add this logic so that if anyone ever adds support for such 4369 + * a NOR we don't end up with buffer overflows. 4370 + */ 4371 + if (nor->page_size > PAGE_SIZE) { 4372 + nor->bouncebuf_size = nor->page_size; 4373 + devm_kfree(nor->dev, nor->bouncebuf); 4374 + nor->bouncebuf = devm_kmalloc(nor->dev, 4375 + nor->bouncebuf_size, 4376 + GFP_KERNEL); 4377 + if (!nor->bouncebuf) 4378 + return -ENOMEM; 4379 + } 4380 + 4381 + return mtd_device_register(&nor->mtd, data ? data->parts : NULL, 4382 + data ? 
data->nr_parts : 0); 4383 + } 4384 + 4385 + static int spi_nor_remove(struct spi_mem *spimem) 4386 + { 4387 + struct spi_nor *nor = spi_mem_get_drvdata(spimem); 4388 + 4389 + spi_nor_restore(nor); 4390 + 4391 + /* Clean up MTD stuff. */ 4392 + return mtd_device_unregister(&nor->mtd); 4393 + } 4394 + 4395 + static void spi_nor_shutdown(struct spi_mem *spimem) 4396 + { 4397 + struct spi_nor *nor = spi_mem_get_drvdata(spimem); 4398 + 4399 + spi_nor_restore(nor); 4400 + } 4401 + 4402 + /* 4403 + * Do NOT add to this array without reading the following: 4404 + * 4405 + * Historically, many flash devices are bound to this driver by their name. But 4406 + * since most of these flashes are compatible to some extent, and their 4407 + * differences can often be differentiated by the JEDEC read-ID command, we 4408 + * encourage new users to add support to the spi-nor library, and simply bind 4409 + * against a generic string here (e.g., "jedec,spi-nor"). 4410 + * 4411 + * Many flash names are kept here in this list (as well as in spi-nor.c) to 4412 + * keep them available as module aliases for existing platforms. 4413 + * 4414 + static const struct spi_device_id spi_nor_dev_ids[] = { 4415 + /* 4416 + * Allow non-DT platform devices to bind to the "spi-nor" modalias, and 4417 + * hack around the fact that the SPI core does not provide uevent 4418 + * matching for .of_match_table 4419 + */ 4420 + {"spi-nor"}, 4421 + 4422 + /* 4423 + * Entries not used in DTs that should be safe to drop after replacing 4424 + * them with "spi-nor" in platform data. 4425 + */ 4426 + {"s25sl064a"}, {"w25x16"}, {"m25p10"}, {"m25px64"}, 4427 + 4428 + /* 4429 + * Entries that were used in DTs without "jedec,spi-nor" fallback and 4430 + * should be kept for backward compatibility. 
4431 + */ 4432 + {"at25df321a"}, {"at25df641"}, {"at26df081a"}, 4433 + {"mx25l4005a"}, {"mx25l1606e"}, {"mx25l6405d"}, {"mx25l12805d"}, 4434 + {"mx25l25635e"},{"mx66l51235l"}, 4435 + {"n25q064"}, {"n25q128a11"}, {"n25q128a13"}, {"n25q512a"}, 4436 + {"s25fl256s1"}, {"s25fl512s"}, {"s25sl12801"}, {"s25fl008k"}, 4437 + {"s25fl064k"}, 4438 + {"sst25vf040b"},{"sst25vf016b"},{"sst25vf032b"},{"sst25wf040"}, 4439 + {"m25p40"}, {"m25p80"}, {"m25p16"}, {"m25p32"}, 4440 + {"m25p64"}, {"m25p128"}, 4441 + {"w25x80"}, {"w25x32"}, {"w25q32"}, {"w25q32dw"}, 4442 + {"w25q80bl"}, {"w25q128"}, {"w25q256"}, 4443 + 4444 + /* Flashes that can't be detected using JEDEC */ 4445 + {"m25p05-nonjedec"}, {"m25p10-nonjedec"}, {"m25p20-nonjedec"}, 4446 + {"m25p40-nonjedec"}, {"m25p80-nonjedec"}, {"m25p16-nonjedec"}, 4447 + {"m25p32-nonjedec"}, {"m25p64-nonjedec"}, {"m25p128-nonjedec"}, 4448 + 4449 + /* Everspin MRAMs (non-JEDEC) */ 4450 + { "mr25h128" }, /* 128 Kib, 40 MHz */ 4451 + { "mr25h256" }, /* 256 Kib, 40 MHz */ 4452 + { "mr25h10" }, /* 1 Mib, 40 MHz */ 4453 + { "mr25h40" }, /* 4 Mib, 40 MHz */ 4454 + 4455 + { }, 4456 + }; 4457 + MODULE_DEVICE_TABLE(spi, spi_nor_dev_ids); 4458 + 4459 + static const struct of_device_id spi_nor_of_table[] = { 4460 + /* 4461 + * Generic compatibility for SPI NOR that can be identified by the 4462 + * JEDEC READ ID opcode (0x9F). Use this, if possible. 4463 + */ 4464 + { .compatible = "jedec,spi-nor" }, 4465 + { /* sentinel */ }, 4466 + }; 4467 + MODULE_DEVICE_TABLE(of, spi_nor_of_table); 4468 + 4469 + /* 4470 + * REVISIT: many of these chips have deep power-down modes, which 4471 + * should clearly be entered on suspend() to minimize power use. 4472 + * And also when they're otherwise idle... 
4473 + */ 4474 + static struct spi_mem_driver spi_nor_driver = { 4475 + .spidrv = { 4476 + .driver = { 4477 + .name = "spi-nor", 4478 + .of_match_table = spi_nor_of_table, 4479 + }, 4480 + .id_table = spi_nor_dev_ids, 4481 + }, 4482 + .probe = spi_nor_probe, 4483 + .remove = spi_nor_remove, 4484 + .shutdown = spi_nor_shutdown, 4485 + }; 4486 + module_spi_mem_driver(spi_nor_driver); 4981 4487 4982 4488 MODULE_LICENSE("GPL v2"); 4983 4489 MODULE_AUTHOR("Huang Shijie <shijie8@gmail.com>");
+3
include/linux/mtd/mtd.h
··· 189 189 */ 190 190 struct mtd_debug_info { 191 191 struct dentry *dfs_dir; 192 + 193 + const char *partname; 194 + const char *partid; 192 195 }; 193 196 194 197 struct mtd_info {
+1 -1
include/linux/mtd/nand.h
··· 346 346 } 347 347 348 348 /** 349 - * nanddev_neraseblocks() - Get the total number of erasablocks 349 + * nanddev_neraseblocks() - Get the total number of eraseblocks 350 350 * @nand: NAND device 351 351 * 352 352 * Return: the total number of eraseblocks exposed by @nand.
+5
include/linux/mtd/sharpsl.h
··· 5 5 * Copyright (C) 2008 Dmitry Baryshkov 6 6 */ 7 7 8 + #ifndef _MTD_SHARPSL_H 9 + #define _MTD_SHARPSL_H 10 + 8 11 #include <linux/mtd/rawnand.h> 9 12 #include <linux/mtd/nand_ecc.h> 10 13 #include <linux/mtd/partitions.h> ··· 19 16 unsigned int nr_partitions; 20 17 const char *const *part_parsers; 21 18 }; 19 + 20 + #endif /* _MTD_SHARPSL_H */
include/linux/mtd/spi-nor.h (+258 -131)
···
 #include <linux/bitops.h>
 #include <linux/mtd/cfi.h>
 #include <linux/mtd/mtd.h>
+#include <linux/spi/spi-mem.h>
 
 /*
  * Manufacturer IDs
···
 	return spi_nor_get_protocol_data_nbits(proto);
 }
 
-#define SPI_NOR_MAX_CMD_SIZE	8
 enum spi_nor_ops {
 	SPI_NOR_OPS_READ = 0,
 	SPI_NOR_OPS_WRITE,
···
 	SNOR_F_USE_FSR		= BIT(0),
 	SNOR_F_HAS_SR_TB	= BIT(1),
 	SNOR_F_NO_OP_CHIP_ERASE	= BIT(2),
-	SNOR_F_S3AN_ADDR_DEFAULT = BIT(3),
-	SNOR_F_READY_XSR_RDY	= BIT(4),
-	SNOR_F_USE_CLSR		= BIT(5),
-	SNOR_F_BROKEN_RESET	= BIT(6),
-	SNOR_F_4B_OPCODES	= BIT(7),
-	SNOR_F_HAS_4BAIT	= BIT(8),
+	SNOR_F_READY_XSR_RDY	= BIT(3),
+	SNOR_F_USE_CLSR		= BIT(4),
+	SNOR_F_BROKEN_RESET	= BIT(5),
+	SNOR_F_4B_OPCODES	= BIT(6),
+	SNOR_F_HAS_4BAIT	= BIT(7),
+	SNOR_F_HAS_LOCK		= BIT(8),
 };
 
 /**
···
 };
 
 /**
- * struct flash_info - Forward declaration of a structure used internally by
- *		       spi_nor_scan()
- */
-struct flash_info;
-
-/**
- * struct spi_nor - Structure for defining a the SPI NOR layer
- * @mtd:		point to a mtd_info structure
- * @lock:		the lock for the read/write/erase/lock/unlock operations
- * @dev:		point to a spi device, or a spi nor controller device.
- * @info:		spi-nor part JDEC MFR id and other info
- * @page_size:		the page size of the SPI NOR
- * @addr_width:		number of address bytes
- * @erase_opcode:	the opcode for erasing a sector
- * @read_opcode:	the read opcode
- * @read_dummy:		the dummy needed by the read operation
- * @program_opcode:	the program opcode
- * @sst_write_second:	used by the SST write operation
- * @flags:		flag options for the current SPI-NOR (SNOR_F_*)
- * @read_proto:		the SPI protocol for read operations
- * @write_proto:	the SPI protocol for write operations
- * @reg_proto		the SPI protocol for read_reg/write_reg/erase operations
- * @cmd_buf:		used by the write_reg
- * @erase_map:		the erase map of the SPI NOR
- * @prepare:		[OPTIONAL] do some preparations for the
- *			read/write/erase/lock/unlock operations
- * @unprepare:		[OPTIONAL] do some post work after the
- *			read/write/erase/lock/unlock operations
- * @read_reg:		[DRIVER-SPECIFIC] read out the register
- * @write_reg:		[DRIVER-SPECIFIC] write data to the register
- * @read:		[DRIVER-SPECIFIC] read data from the SPI NOR
- * @write:		[DRIVER-SPECIFIC] write data to the SPI NOR
- * @erase:		[DRIVER-SPECIFIC] erase a sector of the SPI NOR
- *			at the offset @offs; if not provided by the driver,
- *			spi-nor will send the erase opcode via write_reg()
- * @flash_lock:		[FLASH-SPECIFIC] lock a region of the SPI NOR
- * @flash_unlock:	[FLASH-SPECIFIC] unlock a region of the SPI NOR
- * @flash_is_locked:	[FLASH-SPECIFIC] check if a region of the SPI NOR is
- *			completely locked
- * @quad_enable:	[FLASH-SPECIFIC] enables SPI NOR quad mode
- * @clear_sr_bp:	[FLASH-SPECIFIC] clears the Block Protection Bits from
- *			the SPI NOR Status Register.
- * @priv:		the private data
- */
-struct spi_nor {
-	struct mtd_info		mtd;
-	struct mutex		lock;
-	struct device		*dev;
-	const struct flash_info	*info;
-	u32			page_size;
-	u8			addr_width;
-	u8			erase_opcode;
-	u8			read_opcode;
-	u8			read_dummy;
-	u8			program_opcode;
-	enum spi_nor_protocol	read_proto;
-	enum spi_nor_protocol	write_proto;
-	enum spi_nor_protocol	reg_proto;
-	bool			sst_write_second;
-	u32			flags;
-	u8			cmd_buf[SPI_NOR_MAX_CMD_SIZE];
-	struct spi_nor_erase_map	erase_map;
-
-	int (*prepare)(struct spi_nor *nor, enum spi_nor_ops ops);
-	void (*unprepare)(struct spi_nor *nor, enum spi_nor_ops ops);
-	int (*read_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len);
-	int (*write_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len);
-
-	ssize_t (*read)(struct spi_nor *nor, loff_t from,
-			size_t len, u_char *read_buf);
-	ssize_t (*write)(struct spi_nor *nor, loff_t to,
-			size_t len, const u_char *write_buf);
-	int (*erase)(struct spi_nor *nor, loff_t offs);
-
-	int (*flash_lock)(struct spi_nor *nor, loff_t ofs, uint64_t len);
-	int (*flash_unlock)(struct spi_nor *nor, loff_t ofs, uint64_t len);
-	int (*flash_is_locked)(struct spi_nor *nor, loff_t ofs, uint64_t len);
-	int (*quad_enable)(struct spi_nor *nor);
-	int (*clear_sr_bp)(struct spi_nor *nor);
-
-	void *priv;
-};
-
-static u64 __maybe_unused
-spi_nor_region_is_last(const struct spi_nor_erase_region *region)
-{
-	return region->offset & SNOR_LAST_REGION;
-}
-
-static u64 __maybe_unused
-spi_nor_region_end(const struct spi_nor_erase_region *region)
-{
-	return (region->offset & ~SNOR_ERASE_FLAGS_MASK) + region->size;
-}
-
-static void __maybe_unused
-spi_nor_region_mark_end(struct spi_nor_erase_region *region)
-{
-	region->offset |= SNOR_LAST_REGION;
-}
-
-static void __maybe_unused
-spi_nor_region_mark_overlay(struct spi_nor_erase_region *region)
-{
-	region->offset |= SNOR_OVERLAID_REGION;
-}
-
-static bool __maybe_unused spi_nor_has_uniform_erase(const struct spi_nor *nor)
-{
-	return !!nor->erase_map.uniform_erase_type;
-}
-
-static inline void spi_nor_set_flash_node(struct spi_nor *nor,
-					  struct device_node *np)
-{
-	mtd_set_of_node(&nor->mtd, np);
-}
-
-static inline struct device_node *spi_nor_get_flash_node(struct spi_nor *nor)
-{
-	return mtd_get_of_node(&nor->mtd);
-}
-
-/**
  * struct spi_nor_hwcaps - Structure for describing the hardware capabilies
  *			   supported by the SPI controller (bus master).
  * @mask: the bitmask listing all the supported hw capabilies
···
 #define SNOR_HWCAPS_PP_1_1_8	BIT(20)
 #define SNOR_HWCAPS_PP_1_8_8	BIT(21)
 #define SNOR_HWCAPS_PP_8_8_8	BIT(22)
+
+#define SNOR_HWCAPS_X_X_X	(SNOR_HWCAPS_READ_2_2_2 |	\
+				 SNOR_HWCAPS_READ_4_4_4 |	\
+				 SNOR_HWCAPS_READ_8_8_8 |	\
+				 SNOR_HWCAPS_PP_4_4_4 |		\
+				 SNOR_HWCAPS_PP_8_8_8)
+
+#define SNOR_HWCAPS_DTR		(SNOR_HWCAPS_READ_1_1_1_DTR |	\
+				 SNOR_HWCAPS_READ_1_2_2_DTR |	\
+				 SNOR_HWCAPS_READ_1_4_4_DTR |	\
+				 SNOR_HWCAPS_READ_1_8_8_DTR)
+
+#define SNOR_HWCAPS_ALL		(SNOR_HWCAPS_READ_MASK |	\
+				 SNOR_HWCAPS_PP_MASK)
+
+struct spi_nor_read_command {
+	u8			num_mode_clocks;
+	u8			num_wait_states;
+	u8			opcode;
+	enum spi_nor_protocol	proto;
+};
+
+struct spi_nor_pp_command {
+	u8			opcode;
+	enum spi_nor_protocol	proto;
+};
+
+enum spi_nor_read_command_index {
+	SNOR_CMD_READ,
+	SNOR_CMD_READ_FAST,
+	SNOR_CMD_READ_1_1_1_DTR,
+
+	/* Dual SPI */
+	SNOR_CMD_READ_1_1_2,
+	SNOR_CMD_READ_1_2_2,
+	SNOR_CMD_READ_2_2_2,
+	SNOR_CMD_READ_1_2_2_DTR,
+
+	/* Quad SPI */
+	SNOR_CMD_READ_1_1_4,
+	SNOR_CMD_READ_1_4_4,
+	SNOR_CMD_READ_4_4_4,
+	SNOR_CMD_READ_1_4_4_DTR,
+
+	/* Octal SPI */
+	SNOR_CMD_READ_1_1_8,
+	SNOR_CMD_READ_1_8_8,
+	SNOR_CMD_READ_8_8_8,
+	SNOR_CMD_READ_1_8_8_DTR,
+
+	SNOR_CMD_READ_MAX
+};
+
+enum spi_nor_pp_command_index {
+	SNOR_CMD_PP,
+
+	/* Quad SPI */
+	SNOR_CMD_PP_1_1_4,
+	SNOR_CMD_PP_1_4_4,
+	SNOR_CMD_PP_4_4_4,
+
+	/* Octal SPI */
+	SNOR_CMD_PP_1_1_8,
+	SNOR_CMD_PP_1_8_8,
+	SNOR_CMD_PP_8_8_8,
+
+	SNOR_CMD_PP_MAX
+};
+
+/* Forward declaration that will be used in 'struct spi_nor_flash_parameter' */
+struct spi_nor;
+
+/**
+ * struct spi_nor_locking_ops - SPI NOR locking methods
+ * @lock:	lock a region of the SPI NOR.
+ * @unlock:	unlock a region of the SPI NOR.
+ * @is_locked:	check if a region of the SPI NOR is completely locked
+ */
+struct spi_nor_locking_ops {
+	int (*lock)(struct spi_nor *nor, loff_t ofs, uint64_t len);
+	int (*unlock)(struct spi_nor *nor, loff_t ofs, uint64_t len);
+	int (*is_locked)(struct spi_nor *nor, loff_t ofs, uint64_t len);
+};
+
+/**
+ * struct spi_nor_flash_parameter - SPI NOR flash parameters and settings.
+ * Includes legacy flash parameters and settings that can be overwritten
+ * by the spi_nor_fixups hooks, or dynamically when parsing the JESD216
+ * Serial Flash Discoverable Parameters (SFDP) tables.
+ *
+ * @size:		the flash memory density in bytes.
+ * @page_size:		the page size of the SPI NOR flash memory.
+ * @hwcaps:		describes the read and page program hardware
+ *			capabilities.
+ * @reads:		read capabilities ordered by priority: the higher index
+ *			in the array, the higher priority.
+ * @page_programs:	page program capabilities ordered by priority: the
+ *			higher index in the array, the higher priority.
+ * @erase_map:		the erase map parsed from the SFDP Sector Map Parameter
+ *			Table.
+ * @quad_enable:	enables SPI NOR quad mode.
+ * @set_4byte:		puts the SPI NOR in 4 byte addressing mode.
+ * @convert_addr:	converts an absolute address into something the flash
+ *			will understand. Particularly useful when pagesize is
+ *			not a power-of-2.
+ * @setup:		configures the SPI NOR memory. Useful for SPI NOR
+ *			flashes that have peculiarities to the SPI NOR standard
+ *			e.g. different opcodes, specific address calculation,
+ *			page size, etc.
+ * @locking_ops:	SPI NOR locking methods.
+ */
+struct spi_nor_flash_parameter {
+	u64				size;
+	u32				page_size;
+
+	struct spi_nor_hwcaps		hwcaps;
+	struct spi_nor_read_command	reads[SNOR_CMD_READ_MAX];
+	struct spi_nor_pp_command	page_programs[SNOR_CMD_PP_MAX];
+
+	struct spi_nor_erase_map	erase_map;
+
+	int (*quad_enable)(struct spi_nor *nor);
+	int (*set_4byte)(struct spi_nor *nor, bool enable);
+	u32 (*convert_addr)(struct spi_nor *nor, u32 addr);
+	int (*setup)(struct spi_nor *nor, const struct spi_nor_hwcaps *hwcaps);
+
+	const struct spi_nor_locking_ops *locking_ops;
+};
+
+/**
+ * struct flash_info - Forward declaration of a structure used internally by
+ *		       spi_nor_scan()
+ */
+struct flash_info;
+
+/**
+ * struct spi_nor - Structure for defining the SPI NOR layer
+ * @mtd:		point to a mtd_info structure
+ * @lock:		the lock for the read/write/erase/lock/unlock operations
+ * @dev:		point to a spi device, or a spi nor controller device.
+ * @spimem:		point to the spi mem device
+ * @bouncebuf:		bounce buffer used when the buffer passed by the MTD
+ *			layer is not DMA-able
+ * @bouncebuf_size:	size of the bounce buffer
+ * @info:		spi-nor part JDEC MFR id and other info
+ * @page_size:		the page size of the SPI NOR
+ * @addr_width:		number of address bytes
+ * @erase_opcode:	the opcode for erasing a sector
+ * @read_opcode:	the read opcode
+ * @read_dummy:		the dummy needed by the read operation
+ * @program_opcode:	the program opcode
+ * @sst_write_second:	used by the SST write operation
+ * @flags:		flag options for the current SPI-NOR (SNOR_F_*)
+ * @read_proto:		the SPI protocol for read operations
+ * @write_proto:	the SPI protocol for write operations
+ * @reg_proto:		the SPI protocol for read_reg/write_reg/erase operations
+ * @prepare:		[OPTIONAL] do some preparations for the
+ *			read/write/erase/lock/unlock operations
+ * @unprepare:		[OPTIONAL] do some post work after the
+ *			read/write/erase/lock/unlock operations
+ * @read_reg:		[DRIVER-SPECIFIC] read out the register
+ * @write_reg:		[DRIVER-SPECIFIC] write data to the register
+ * @read:		[DRIVER-SPECIFIC] read data from the SPI NOR
+ * @write:		[DRIVER-SPECIFIC] write data to the SPI NOR
+ * @erase:		[DRIVER-SPECIFIC] erase a sector of the SPI NOR
+ *			at the offset @offs; if not provided by the driver,
+ *			spi-nor will send the erase opcode via write_reg()
+ * @clear_sr_bp:	[FLASH-SPECIFIC] clears the Block Protection Bits from
+ *			the SPI NOR Status Register.
+ * @params:		[FLASH-SPECIFIC] SPI-NOR flash parameters and settings.
+ *			The structure includes legacy flash parameters and
+ *			settings that can be overwritten by the spi_nor_fixups
+ *			hooks, or dynamically when parsing the SFDP tables.
+ * @priv:		the private data
+ */
+struct spi_nor {
+	struct mtd_info		mtd;
+	struct mutex		lock;
+	struct device		*dev;
+	struct spi_mem		*spimem;
+	u8			*bouncebuf;
+	size_t			bouncebuf_size;
+	const struct flash_info	*info;
+	u32			page_size;
+	u8			addr_width;
+	u8			erase_opcode;
+	u8			read_opcode;
+	u8			read_dummy;
+	u8			program_opcode;
+	enum spi_nor_protocol	read_proto;
+	enum spi_nor_protocol	write_proto;
+	enum spi_nor_protocol	reg_proto;
+	bool			sst_write_second;
+	u32			flags;
+
+	int (*prepare)(struct spi_nor *nor, enum spi_nor_ops ops);
+	void (*unprepare)(struct spi_nor *nor, enum spi_nor_ops ops);
+	int (*read_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len);
+	int (*write_reg)(struct spi_nor *nor, u8 opcode, u8 *buf, int len);
+
+	ssize_t (*read)(struct spi_nor *nor, loff_t from,
+			size_t len, u_char *read_buf);
+	ssize_t (*write)(struct spi_nor *nor, loff_t to,
+			size_t len, const u_char *write_buf);
+	int (*erase)(struct spi_nor *nor, loff_t offs);
+
+	int (*clear_sr_bp)(struct spi_nor *nor);
+	struct spi_nor_flash_parameter params;
+
+	void *priv;
+};
+
+static u64 __maybe_unused
+spi_nor_region_is_last(const struct spi_nor_erase_region *region)
+{
+	return region->offset & SNOR_LAST_REGION;
+}
+
+static u64 __maybe_unused
+spi_nor_region_end(const struct spi_nor_erase_region *region)
+{
+	return (region->offset & ~SNOR_ERASE_FLAGS_MASK) + region->size;
+}
+
+static void __maybe_unused
+spi_nor_region_mark_end(struct spi_nor_erase_region *region)
+{
+	region->offset |= SNOR_LAST_REGION;
+}
+
+static void __maybe_unused
+spi_nor_region_mark_overlay(struct spi_nor_erase_region *region)
+{
+	region->offset |= SNOR_OVERLAID_REGION;
+}
+
+static bool __maybe_unused
+spi_nor_has_uniform_erase(const struct spi_nor *nor)
+{
+	return !!nor->params.erase_map.uniform_erase_type;
+}
+
+static inline void spi_nor_set_flash_node(struct spi_nor *nor,
+					  struct device_node *np)
+{
+	mtd_set_of_node(&nor->mtd, np);
+}
+
+static inline struct device_node *spi_nor_get_flash_node(struct spi_nor *nor)
+{
+	return mtd_get_of_node(&nor->mtd);
+}
 
 /**
 * spi_nor_scan() - scan the SPI NOR