
Merge tag 'spi-v5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
"A quiet release for SPI, some fixes and a couple of new drivers plus
one small refactoring:

- Move the chip select timing configuration from the controller to
the device to allow a bit more flexibility

- New drivers for Rockchip SFC and Spreadtrum ADI"

* tag 'spi-v5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (47 commits)
spi: spi-zynq-qspi: use wait_for_completion_timeout to make zynq_qspi_exec_mem_op not interruptible
spi: add sprd ADI for sc9863 and ums512
spi: Convert sprd ADI bindings to yaml
spi: sprd: Add ADI r3 support
spi: sprd: Fix the wrong WDG_LOAD_VAL
spi: davinci: invoke chipselect callback
spi: sprd: fill offset only to RD_CMD register for reading from slave device
spi: sprd: Make sure offset not equal to slave address size
spi: sprd: Pass offset instead of physical address to adi_read/_write()
spi: rockchip-sfc: Fix assigned but never used return error codes
spi: rockchip-sfc: Remove redundant IO operations
spi: stm32: fix excluded_middle.cocci warnings
spi: coldfire-qspi: Use clk_disable_unprepare in the remove function
spi: tegra20-slink: remove spi_master_put() in tegra_slink_remove()
spi: rockchip-sfc: add rockchip serial flash controller
spi: rockchip-sfc: Bindings for Rockchip serial flash controller
spi: orion: Prevent incorrect chip select behaviour
spi: mxic: add missing braces
spi: spi-pic32: Fix issue with uninitialized dma_slave_config
spi: spi-fsl-dspi: Fix issue with uninitialized dma_slave_config
...

+1520 -578
-1
Documentation/devicetree/bindings/fsi/ibm,fsi2spi.yaml
···
   compatible:
     enum:
       - ibm,fsi2spi
-      - ibm,fsi2spi-restricted
 
   reg:
     items:
-48
Documentation/devicetree/bindings/spi/omap-spi.txt
···
-OMAP2+ McSPI device
-
-Required properties:
-- compatible :
-  - "ti,am654-mcspi" for AM654.
-  - "ti,omap2-mcspi" for OMAP2 & OMAP3.
-  - "ti,omap4-mcspi" for OMAP4+.
-- ti,spi-num-cs : Number of chipselect supported by the instance.
-- ti,hwmods: Name of the hwmod associated to the McSPI
-- ti,pindir-d0-out-d1-in: Select the D0 pin as output and D1 as
-                          input. The default is D0 as input and
-                          D1 as output.
-
-Optional properties:
-- dmas: List of DMA specifiers with the controller specific format
-  as described in the generic DMA client binding. A tx and rx
-  specifier is required for each chip select.
-- dma-names: List of DMA request names. These strings correspond
-  1:1 with the DMA specifiers listed in dmas. The string naming
-  is to be "rxN" and "txN" for RX and TX requests,
-  respectively, where N equals the chip select number.
-
-Examples:
-
-[hwmod populated DMA resources]
-
-mcspi1: mcspi@1 {
-    #address-cells = <1>;
-    #size-cells = <0>;
-    compatible = "ti,omap4-mcspi";
-    ti,hwmods = "mcspi1";
-    ti,spi-num-cs = <4>;
-};
-
-[generic DMA request binding]
-
-mcspi1: mcspi@1 {
-    #address-cells = <1>;
-    #size-cells = <0>;
-    compatible = "ti,omap4-mcspi";
-    ti,hwmods = "mcspi1";
-    ti,spi-num-cs = <2>;
-    dmas = <&edma 42
-            &edma 43
-            &edma 44
-            &edma 45>;
-    dma-names = "tx0", "rx0", "tx1", "rx1";
-};
+117
Documentation/devicetree/bindings/spi/omap-spi.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/spi/omap-spi.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: SPI controller bindings for OMAP and K3 SoCs
+
+maintainers:
+  - Aswath Govindraju <a-govindraju@ti.com>
+
+allOf:
+  - $ref: spi-controller.yaml#
+
+properties:
+  compatible:
+    oneOf:
+      - items:
+          - enum:
+              - ti,am654-mcspi
+              - ti,am4372-mcspi
+          - const: ti,omap4-mcspi
+      - items:
+          - enum:
+              - ti,omap2-mcspi
+              - ti,omap4-mcspi
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+  ti,spi-num-cs:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: Number of chipselect supported by the instance.
+    minimum: 1
+    maximum: 4
+
+  ti,hwmods:
+    $ref: /schemas/types.yaml#/definitions/string
+    description:
+      Must be "mcspi<n>", n being the instance number (1-based).
+      This property is applicable only on legacy platforms mainly omap2/3
+      and ti81xx and should not be used on other platforms.
+    deprecated: true
+
+  ti,pindir-d0-out-d1-in:
+    description:
+      Select the D0 pin as output and D1 as input. The default is D0
+      as input and D1 as output.
+    type: boolean
+
+  dmas:
+    description:
+      List of DMA specifiers with the controller specific format as
+      described in the generic DMA client binding. A tx and rx
+      specifier is required for each chip select.
+    minItems: 1
+    maxItems: 8
+
+  dma-names:
+    description:
+      List of DMA request names. These strings correspond 1:1 with
+      the DMA sepecifiers listed in dmas. The string names is to be
+      "rxN" and "txN" for RX and TX requests, respectively. Where N
+      is the chip select number.
+    minItems: 1
+    maxItems: 8
+
+required:
+  - compatible
+  - reg
+  - interrupts
+
+unevaluatedProperties: false
+
+if:
+  properties:
+    compatible:
+      oneOf:
+        - const: ti,omap2-mcspi
+        - const: ti,omap4-mcspi
+
+then:
+  properties:
+    ti,hwmods:
+      items:
+        - pattern: "^mcspi([1-9])$"
+
+else:
+  properties:
+    ti,hwmods: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/soc/ti,sci_pm_domain.h>
+
+    spi@2100000 {
+      compatible = "ti,am654-mcspi","ti,omap4-mcspi";
+      reg = <0x2100000 0x400>;
+      interrupts = <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>;
+      clocks = <&k3_clks 137 1>;
+      power-domains = <&k3_pds 137 TI_SCI_PD_EXCLUSIVE>;
+      #address-cells = <1>;
+      #size-cells = <0>;
+      dmas = <&main_udmap 0xc500>, <&main_udmap 0x4500>;
+      dma-names = "tx0", "rx0";
+    };
+91
Documentation/devicetree/bindings/spi/rockchip-sfc.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/spi/rockchip-sfc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Rockchip Serial Flash Controller (SFC)
+
+maintainers:
+  - Heiko Stuebner <heiko@sntech.de>
+  - Chris Morgan <macromorgan@hotmail.com>
+
+allOf:
+  - $ref: spi-controller.yaml#
+
+properties:
+  compatible:
+    const: rockchip,sfc
+    description:
+      The rockchip sfc controller is a standalone IP with version register,
+      and the driver can handle all the feature difference inside the IP
+      depending on the version register.
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: Bus Clock
+      - description: Module Clock
+
+  clock-names:
+    items:
+      - const: clk_sfc
+      - const: hclk_sfc
+
+  power-domains:
+    maxItems: 1
+
+  rockchip,sfc-no-dma:
+    description: Disable DMA and utilize FIFO mode only
+    type: boolean
+
+patternProperties:
+  "^flash@[0-3]$":
+    type: object
+    properties:
+      reg:
+        minimum: 0
+        maximum: 3
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/px30-cru.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/power/px30-power.h>
+
+    sfc: spi@ff3a0000 {
+      compatible = "rockchip,sfc";
+      reg = <0xff3a0000 0x4000>;
+      interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
+      clocks = <&cru SCLK_SFC>, <&cru HCLK_SFC>;
+      clock-names = "clk_sfc", "hclk_sfc";
+      pinctrl-0 = <&sfc_clk &sfc_cs &sfc_bus2>;
+      pinctrl-names = "default";
+      power-domains = <&power PX30_PD_MMC_NAND>;
+      #address-cells = <1>;
+      #size-cells = <0>;
+
+      flash@0 {
+        compatible = "jedec,spi-nor";
+        reg = <0>;
+        spi-max-frequency = <108000000>;
+        spi-rx-bus-width = <2>;
+        spi-tx-bus-width = <2>;
+      };
+    };
+
+...
+1
Documentation/devicetree/bindings/spi/spi-mt65xx.txt
···
 - mediatek,mt8135-spi: for mt8135 platforms
 - mediatek,mt8173-spi: for mt8173 platforms
 - mediatek,mt8183-spi: for mt8183 platforms
+- mediatek,mt6893-spi: for mt6893 platforms
 - "mediatek,mt8192-spi", "mediatek,mt6765-spi": for mt8192 platforms
 - "mediatek,mt8195-spi", "mediatek,mt6765-spi": for mt8195 platforms
 - "mediatek,mt8516-spi", "mediatek,mt2712-spi": for mt8516 platforms
-63
Documentation/devicetree/bindings/spi/spi-sprd-adi.txt
···
-Spreadtrum ADI controller
-
-ADI is the abbreviation of Anolog-Digital interface, which is used to access
-analog chip (such as PMIC) from digital chip. ADI controller follows the SPI
-framework for its hardware implementation is alike to SPI bus and its timing
-is compatile to SPI timing.
-
-ADI controller has 50 channels including 2 software read/write channels and
-48 hardware channels to access analog chip. For 2 software read/write channels,
-users should set ADI registers to access analog chip. For hardware channels,
-we can configure them to allow other hardware components to use it independently,
-which means we can just link one analog chip address to one hardware channel,
-then users can access the mapped analog chip address by this hardware channel
-triggered by hardware components instead of ADI software channels.
-
-Thus we introduce one property named "sprd,hw-channels" to configure hardware
-channels, the first value specifies the hardware channel id which is used to
-transfer data triggered by hardware automatically, and the second value specifies
-the analog chip address where user want to access by hardware components.
-
-Since we have multi-subsystems will use unique ADI to access analog chip, when
-one system is reading/writing data by ADI software channels, that should be under
-one hardware spinlock protection to prevent other systems from reading/writing
-data by ADI software channels at the same time, or two parallel routine of setting
-ADI registers will make ADI controller registers chaos to lead incorrect results.
-Then we need one hardware spinlock to synchronize between the multiple subsystems.
-
-The new version ADI controller supplies multiple master channels for different
-subsystem accessing, that means no need to add hardware spinlock to synchronize,
-thus change the hardware spinlock support to be optional to keep backward
-compatibility.
-
-Required properties:
-- compatible: Should be "sprd,sc9860-adi".
-- reg: Offset and length of ADI-SPI controller register space.
-- #address-cells: Number of cells required to define a chip select address
-  on the ADI-SPI bus. Should be set to 1.
-- #size-cells: Size of cells required to define a chip select address size
-  on the ADI-SPI bus. Should be set to 0.
-
-Optional properties:
-- hwlocks: Reference to a phandle of a hwlock provider node.
-- hwlock-names: Reference to hwlock name strings defined in the same order
-  as the hwlocks, should be "adi".
-- sprd,hw-channels: This is an array of channel values up to 49 channels.
-  The first value specifies the hardware channel id which is used to
-  transfer data triggered by hardware automatically, and the second
-  value specifies the analog chip address where user want to access
-  by hardware components.
-
-SPI slave nodes must be children of the SPI controller node and can contain
-properties described in Documentation/devicetree/bindings/spi/spi-bus.txt.
-
-Example:
-	adi_bus: spi@40030000 {
-		compatible = "sprd,sc9860-adi";
-		reg = <0 0x40030000 0 0x10000>;
-		hwlocks = <&hwlock1 0>;
-		hwlock-names = "adi";
-		#address-cells = <1>;
-		#size-cells = <0>;
-		sprd,hw-channels = <30 0x8c20>;
-	};
+104
Documentation/devicetree/bindings/spi/sprd,spi-adi.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/spi/sprd,spi-adi.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Spreadtrum ADI controller
+
+maintainers:
+  - Orson Zhai <orsonzhai@gmail.com>
+  - Baolin Wang <baolin.wang7@gmail.com>
+  - Chunyan Zhang <zhang.lyra@gmail.com>
+
+description: |
+  ADI is the abbreviation of Anolog-Digital interface, which is used to access
+  analog chip (such as PMIC) from digital chip. ADI controller follows the SPI
+  framework for its hardware implementation is alike to SPI bus and its timing
+  is compatile to SPI timing.
+
+  ADI controller has 50 channels including 2 software read/write channels and
+  48 hardware channels to access analog chip. For 2 software read/write channels,
+  users should set ADI registers to access analog chip. For hardware channels,
+  we can configure them to allow other hardware components to use it independently,
+  which means we can just link one analog chip address to one hardware channel,
+  then users can access the mapped analog chip address by this hardware channel
+  triggered by hardware components instead of ADI software channels.
+
+  Thus we introduce one property named "sprd,hw-channels" to configure hardware
+  channels, the first value specifies the hardware channel id which is used to
+  transfer data triggered by hardware automatically, and the second value specifies
+  the analog chip address where user want to access by hardware components.
+
+  Since we have multi-subsystems will use unique ADI to access analog chip, when
+  one system is reading/writing data by ADI software channels, that should be under
+  one hardware spinlock protection to prevent other systems from reading/writing
+  data by ADI software channels at the same time, or two parallel routine of setting
+  ADI registers will make ADI controller registers chaos to lead incorrect results.
+  Then we need one hardware spinlock to synchronize between the multiple subsystems.
+
+  The new version ADI controller supplies multiple master channels for different
+  subsystem accessing, that means no need to add hardware spinlock to synchronize,
+  thus change the hardware spinlock support to be optional to keep backward
+  compatibility.
+
+allOf:
+  - $ref: /spi/spi-controller.yaml#
+
+properties:
+  compatible:
+    enum:
+      - sprd,sc9860-adi
+      - sprd,sc9863-adi
+      - sprd,ums512-adi
+
+  reg:
+    maxItems: 1
+
+  hwlocks:
+    maxItems: 1
+
+  hwlock-names:
+    const: adi
+
+  sprd,hw-channels:
+    $ref: /schemas/types.yaml#/definitions/uint32-matrix
+    description: A list of hardware channels
+    minItems: 1
+    maxItems: 48
+    items:
+      items:
+        - description: The hardware channel id which is used to transfer data
+            triggered by hardware automatically, channel id 0-1 are for software
+            use, 2-49 are hardware channels.
+          minimum: 2
+          maximum: 49
+        - description: The analog chip address where user want to access by
+            hardware components.
+
+required:
+  - compatible
+  - reg
+  - '#address-cells'
+  - '#size-cells'
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    aon {
+      #address-cells = <2>;
+      #size-cells = <2>;
+
+      adi_bus: spi@40030000 {
+        compatible = "sprd,sc9860-adi";
+        reg = <0 0x40030000 0 0x10000>;
+        hwlocks = <&hwlock1 0>;
+        hwlock-names = "adi";
+        #address-cells = <1>;
+        #size-cells = <0>;
+        sprd,hw-channels = <30 0x8c20>;
+      };
+    };
+...
+12
drivers/spi/Kconfig
···
 	  The main usecase of this controller is to use spi flash as boot
 	  device.
 
+config SPI_ROCKCHIP_SFC
+	tristate "Rockchip Serial Flash Controller (SFC)"
+	depends on ARCH_ROCKCHIP || COMPILE_TEST
+	depends on HAS_IOMEM && HAS_DMA
+	help
+	  This enables support for Rockchip serial flash controller. This
+	  is a specialized controller used to access SPI flash on some
+	  Rockchip SOCs.
+
+	  ROCKCHIP SFC supports DMA and PIO modes. When DMA is not available,
+	  the driver automatically falls back to PIO mode.
+
 config SPI_RB4XX
 	tristate "Mikrotik RB4XX SPI master"
 	depends on SPI_MASTER && ATH79
+1
drivers/spi/Makefile
···
 obj-$(CONFIG_SPI_QCOM_QSPI)		+= spi-qcom-qspi.o
 obj-$(CONFIG_SPI_QUP)			+= spi-qup.o
 obj-$(CONFIG_SPI_ROCKCHIP)		+= spi-rockchip.o
+obj-$(CONFIG_SPI_ROCKCHIP_SFC)		+= spi-rockchip-sfc.o
 obj-$(CONFIG_SPI_RB4XX)			+= spi-rb4xx.o
 obj-$(CONFIG_MACH_REALTEK_RTL)		+= spi-realtek-rtl.o
 obj-$(CONFIG_SPI_RPCIF)			+= spi-rpc-if.o
+2 -2
drivers/spi/spi-bcm2835aux.c
···
 }
 #endif /* CONFIG_DEBUG_FS */
 
-static inline u32 bcm2835aux_rd(struct bcm2835aux_spi *bs, unsigned reg)
+static inline u32 bcm2835aux_rd(struct bcm2835aux_spi *bs, unsigned int reg)
 {
 	return readl(bs->regs + reg);
 }
 
-static inline void bcm2835aux_wr(struct bcm2835aux_spi *bs, unsigned reg,
+static inline void bcm2835aux_wr(struct bcm2835aux_spi *bs, unsigned int reg,
 				 u32 val)
 {
 	writel(val, bs->regs + reg);
+1 -1
drivers/spi/spi-coldfire-qspi.c
···
 	mcfqspi_wr_qmr(mcfqspi, MCFQSPI_QMR_MSTR);
 
 	mcfqspi_cs_teardown(mcfqspi);
-	clk_disable(mcfqspi->clk);
+	clk_disable_unprepare(mcfqspi->clk);
 
 	return 0;
 }
+1 -7
drivers/spi/spi-davinci.c
···
 	 * line for the controller
 	 */
 	if (spi->cs_gpiod) {
-		/*
-		 * FIXME: is this code ever executed? This host does not
-		 * set SPI_MASTER_GPIO_SS so this chipselect callback should
-		 * not get called from the SPI core when we are using
-		 * GPIOs for chip select.
-		 */
 		if (value == BITBANG_CS_ACTIVE)
 			gpiod_set_value(spi->cs_gpiod, 1);
 		else
···
 	master->bus_num = pdev->id;
 	master->num_chipselect = pdata->num_chipselect;
 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 16);
-	master->flags = SPI_MASTER_MUST_RX;
+	master->flags = SPI_MASTER_MUST_RX | SPI_MASTER_GPIO_SS;
 	master->setup = davinci_spi_setup;
 	master->cleanup = davinci_spi_cleanup;
 	master->can_dma = davinci_spi_can_dma;
+2 -2
drivers/spi/spi-ep93xx.c
···
 	u32 val;
 	int ret;
 
-	ret = clk_enable(espi->clk);
+	ret = clk_prepare_enable(espi->clk);
 	if (ret)
 		return ret;
···
 	val &= ~SSPCR1_SSE;
 	writel(val, espi->mmio + SSPCR1);
 
-	clk_disable(espi->clk);
+	clk_disable_unprepare(espi->clk);
 
 	return 0;
 }
+22 -103
drivers/spi/spi-fsi.c
···
 
 #define SPI_FSI_BASE			0x70000
 #define SPI_FSI_INIT_TIMEOUT_MS		1000
-#define SPI_FSI_MAX_XFR_SIZE		2048
-#define SPI_FSI_MAX_XFR_SIZE_RESTRICTED	8
+#define SPI_FSI_MAX_RX_SIZE		8
+#define SPI_FSI_MAX_TX_SIZE		40
 
 #define SPI_FSI_ERROR			0x0
 #define SPI_FSI_COUNTER_CFG		0x1
-#define SPI_FSI_COUNTER_CFG_LOOPS(x)	(((u64)(x) & 0xffULL) << 32)
-#define SPI_FSI_COUNTER_CFG_N2_RX	BIT_ULL(8)
-#define SPI_FSI_COUNTER_CFG_N2_TX	BIT_ULL(9)
-#define SPI_FSI_COUNTER_CFG_N2_IMPLICIT	BIT_ULL(10)
-#define SPI_FSI_COUNTER_CFG_N2_RELOAD	BIT_ULL(11)
 #define SPI_FSI_CFG1			0x2
 #define SPI_FSI_CLOCK_CFG		0x3
 #define SPI_FSI_CLOCK_CFG_MM_ENABLE	BIT_ULL(32)
···
 	struct device *dev;	/* SPI controller device */
 	struct fsi_device *fsi;	/* FSI2SPI CFAM engine device */
 	u32 base;
-	size_t max_xfr_size;
-	bool restricted;
 };
 
 struct fsi_spi_sequence {
···
 	return fsi_spi_write_reg(ctx, SPI_FSI_STATUS, 0ULL);
 }
 
-static int fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
+static void fsi_spi_sequence_add(struct fsi_spi_sequence *seq, u8 val)
 {
 	/*
 	 * Add the next byte of instruction to the 8-byte sequence register.
···
 	 */
 	seq->data |= (u64)val << seq->bit;
 	seq->bit -= 8;
-
-	return ((64 - seq->bit) / 8) - 2;
 }
 
 static void fsi_spi_sequence_init(struct fsi_spi_sequence *seq)
···
 	seq->data = 0ULL;
 }
 
-static int fsi_spi_sequence_transfer(struct fsi_spi *ctx,
-				     struct fsi_spi_sequence *seq,
-				     struct spi_transfer *transfer)
-{
-	int loops;
-	int idx;
-	int rc;
-	u8 val = 0;
-	u8 len = min(transfer->len, 8U);
-	u8 rem = transfer->len % len;
-
-	loops = transfer->len / len;
-
-	if (transfer->tx_buf) {
-		val = SPI_FSI_SEQUENCE_SHIFT_OUT(len);
-		idx = fsi_spi_sequence_add(seq, val);
-
-		if (rem)
-			rem = SPI_FSI_SEQUENCE_SHIFT_OUT(rem);
-	} else if (transfer->rx_buf) {
-		val = SPI_FSI_SEQUENCE_SHIFT_IN(len);
-		idx = fsi_spi_sequence_add(seq, val);
-
-		if (rem)
-			rem = SPI_FSI_SEQUENCE_SHIFT_IN(rem);
-	} else {
-		return -EINVAL;
-	}
-
-	if (ctx->restricted && loops > 1) {
-		dev_warn(ctx->dev,
-			 "Transfer too large; no branches permitted.\n");
-		return -EINVAL;
-	}
-
-	if (loops > 1) {
-		u64 cfg = SPI_FSI_COUNTER_CFG_LOOPS(loops - 1);
-
-		fsi_spi_sequence_add(seq, SPI_FSI_SEQUENCE_BRANCH(idx));
-
-		if (transfer->rx_buf)
-			cfg |= SPI_FSI_COUNTER_CFG_N2_RX |
-				SPI_FSI_COUNTER_CFG_N2_TX |
-				SPI_FSI_COUNTER_CFG_N2_IMPLICIT |
-				SPI_FSI_COUNTER_CFG_N2_RELOAD;
-
-		rc = fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, cfg);
-		if (rc)
-			return rc;
-	} else {
-		fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, 0ULL);
-	}
-
-	if (rem)
-		fsi_spi_sequence_add(seq, rem);
-
-	return 0;
-}
-
 static int fsi_spi_transfer_data(struct fsi_spi *ctx,
 				 struct spi_transfer *transfer)
 {
 	int rc = 0;
 	u64 status = 0ULL;
-	u64 cfg = 0ULL;
 
 	if (transfer->tx_buf) {
 		int nb;
···
 		int recv = 0;
 		u64 in = 0ULL;
 		u8 *rx = transfer->rx_buf;
-
-		rc = fsi_spi_read_reg(ctx, SPI_FSI_COUNTER_CFG, &cfg);
-		if (rc)
-			return rc;
-
-		if (cfg & SPI_FSI_COUNTER_CFG_N2_IMPLICIT) {
-			rc = fsi_spi_write_reg(ctx, SPI_FSI_DATA_TX, 0);
-			if (rc)
-				return rc;
-		}
 
 		while (transfer->len > recv) {
 			do {
···
 		}
 	} while (seq_state && (seq_state != SPI_FSI_STATUS_SEQ_STATE_IDLE));
 
+	rc = fsi_spi_write_reg(ctx, SPI_FSI_COUNTER_CFG, 0ULL);
+	if (rc)
+		return rc;
+
 	rc = fsi_spi_read_reg(ctx, SPI_FSI_CLOCK_CFG, &clock_cfg);
 	if (rc)
 		return rc;
···
 {
 	int rc;
 	u8 seq_slave = SPI_FSI_SEQUENCE_SEL_SLAVE(mesg->spi->chip_select + 1);
+	unsigned int len;
 	struct spi_transfer *transfer;
 	struct fsi_spi *ctx = spi_controller_get_devdata(ctlr);
···
 		struct spi_transfer *next = NULL;
 
 		/* Sequencer must do shift out (tx) first. */
-		if (!transfer->tx_buf ||
-		    transfer->len > (ctx->max_xfr_size + 8)) {
+		if (!transfer->tx_buf || transfer->len > SPI_FSI_MAX_TX_SIZE) {
 			rc = -EINVAL;
 			goto error;
 		}
···
 		fsi_spi_sequence_init(&seq);
 		fsi_spi_sequence_add(&seq, seq_slave);
 
-		rc = fsi_spi_sequence_transfer(ctx, &seq, transfer);
-		if (rc)
-			goto error;
+		len = transfer->len;
+		while (len > 8) {
+			fsi_spi_sequence_add(&seq,
+					     SPI_FSI_SEQUENCE_SHIFT_OUT(8));
+			len -= 8;
+		}
+		fsi_spi_sequence_add(&seq, SPI_FSI_SEQUENCE_SHIFT_OUT(len));
 
 		if (!list_is_last(&transfer->transfer_list,
 				  &mesg->transfers)) {
···
 
 			/* Sequencer can only do shift in (rx) after tx. */
 			if (next->rx_buf) {
-				if (next->len > ctx->max_xfr_size) {
+				u8 shift;
+
+				if (next->len > SPI_FSI_MAX_RX_SIZE) {
 					rc = -EINVAL;
 					goto error;
 				}
···
 				dev_dbg(ctx->dev, "Sequence rx of %d bytes.\n",
 					next->len);
 
-				rc = fsi_spi_sequence_transfer(ctx, &seq,
-							       next);
-				if (rc)
-					goto error;
+				shift = SPI_FSI_SEQUENCE_SHIFT_IN(next->len);
+				fsi_spi_sequence_add(&seq, shift);
 			} else {
 				next = NULL;
 			}
···
 
 static size_t fsi_spi_max_transfer_size(struct spi_device *spi)
 {
-	struct fsi_spi *ctx = spi_controller_get_devdata(spi->controller);
-
-	return ctx->max_xfr_size;
+	return SPI_FSI_MAX_RX_SIZE;
 }
 
 static int fsi_spi_probe(struct device *dev)
···
 	ctx->dev = &ctlr->dev;
 	ctx->fsi = fsi;
 	ctx->base = base + SPI_FSI_BASE;
-
-	if (of_device_is_compatible(np, "ibm,fsi2spi-restricted")) {
-		ctx->restricted = true;
-		ctx->max_xfr_size = SPI_FSI_MAX_XFR_SIZE_RESTRICTED;
-	} else {
-		ctx->restricted = false;
-		ctx->max_xfr_size = SPI_FSI_MAX_XFR_SIZE;
-	}
 
 	rc = devm_spi_register_controller(dev, ctlr);
 	if (rc)
+1
drivers/spi/spi-fsl-dspi.c
···
 		goto err_rx_dma_buf;
 	}
 
+	memset(&cfg, 0, sizeof(cfg));
 	cfg.src_addr = phy_addr + SPI_POPR;
 	cfg.dst_addr = phy_addr + SPI_PUSHR;
 	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
-6
drivers/spi/spi-geni-qcom.c
···
 	 */
 	spin_lock_irq(&mas->lock);
 	geni_se_setup_m_cmd(se, m_cmd, FRAGMENTATION);
-
-	/*
-	 * TX_WATERMARK_REG should be set after SPI configuration and
-	 * setting up GENI SE engine, as driver starts data transfer
-	 * for the watermark interrupt.
-	 */
 	if (m_cmd & SPI_TX_ONLY) {
 		if (geni_spi_handle_tx(mas))
 			writel(mas->tx_wm, se->base + SE_GENI_TX_WATERMARK_REG);
+13 -8
drivers/spi/spi-imx.c
···
 
 static void spi_imx_push(struct spi_imx_data *spi_imx)
 {
-	unsigned int burst_len, fifo_words;
+	unsigned int burst_len;
 
-	if (spi_imx->dynamic_burst)
-		fifo_words = 4;
-	else
-		fifo_words = spi_imx_bytes_per_word(spi_imx->bits_per_word);
 	/*
 	 * Reload the FIFO when the remaining bytes to be transferred in the
 	 * current burst is 0. This only applies when bits_per_word is a
···
 
 		spi_imx->remainder = burst_len;
 	} else {
-		spi_imx->remainder = fifo_words;
+		spi_imx->remainder = spi_imx_bytes_per_word(spi_imx->bits_per_word);
 	}
 }
···
 		if (!spi_imx->count)
 			break;
 		if (spi_imx->dynamic_burst &&
-		    spi_imx->txfifo >= DIV_ROUND_UP(spi_imx->remainder,
-						    fifo_words))
+		    spi_imx->txfifo >= DIV_ROUND_UP(spi_imx->remainder, 4))
 			break;
 		spi_imx->tx(spi_imx);
 		spi_imx->txfifo++;
···
 	 * dynamic_burst in that case.
 	 */
 	if (spi_imx->devtype_data->dynamic_burst && !spi_imx->slave_mode &&
+	    !(spi->mode & SPI_CS_WORD) &&
 	    (spi_imx->bits_per_word == 8 ||
 	     spi_imx->bits_per_word == 16 ||
 	     spi_imx->bits_per_word == 32)) {
···
 	if (is_imx35_cspi(spi_imx) || is_imx51_ecspi(spi_imx) ||
 	    is_imx53_ecspi(spi_imx))
 		spi_imx->bitbang.master->mode_bits |= SPI_LOOP | SPI_READY;
+
+	if (is_imx51_ecspi(spi_imx) &&
+	    device_property_read_u32(&pdev->dev, "cs-gpios", NULL))
+		/*
+		 * When using HW-CS implementing SPI_CS_WORD can be done by just
+		 * setting the burst length to the word size. This is
+		 * considerably faster than manually controlling the CS.
+		 */
+		spi_imx->bitbang.master->mode_bits |= SPI_CS_WORD;
 
 	spi_imx->spi_drctl = spi_drctl;
 
+105 -54
drivers/spi/spi-mt65xx.c
···
 #define SPI_CFG1_CS_IDLE_OFFSET		0
 #define SPI_CFG1_PACKET_LOOP_OFFSET	8
 #define SPI_CFG1_PACKET_LENGTH_OFFSET	16
-#define SPI_CFG1_GET_TICK_DLY_OFFSET	30
+#define SPI_CFG1_GET_TICK_DLY_OFFSET	29
 
+#define SPI_CFG1_GET_TICK_DLY_MASK	0xe0000000
 #define SPI_CFG1_CS_IDLE_MASK		0xff
 #define SPI_CFG1_PACKET_LOOP_MASK	0xff00
 #define SPI_CFG1_PACKET_LENGTH_MASK	0x3ff0000
···
 	bool enhance_timing;
 	/* some IC support DMA addr extension */
 	bool dma_ext;
+	/* some IC no need unprepare SPI clk */
+	bool no_need_unprepare;
 };
 
 struct mtk_spi {
···
 	struct scatterlist *tx_sgl, *rx_sgl;
 	u32 tx_sgl_len, rx_sgl_len;
 	const struct mtk_spi_compatible *dev_comp;
+	u32 spi_clk_hz;
 };
 
 static const struct mtk_spi_compatible mtk_common_compat;
···
 	.enhance_timing = true,
 };
 
+static const struct mtk_spi_compatible mt6893_compat = {
+	.need_pad_sel = true,
+	.must_tx = true,
+	.enhance_timing = true,
+	.dma_ext = true,
+	.no_need_unprepare = true,
+};
+
 /*
  * A piece of default chip info unless the platform
  * supplies it.
  */
 static const struct mtk_chip_config mtk_default_chip_info = {
 	.sample_sel = 0,
+	.tick_delay = 0,
 };
 
 static const struct of_device_id mtk_spi_of_match[] = {
···
 	{ .compatible = "mediatek,mt8192-spi",
 	  .data = (void *)&mt6765_compat,
 	},
+	{ .compatible = "mediatek,mt6893-spi",
+	  .data = (void *)&mt6893_compat,
+	},
 	{}
 };
 MODULE_DEVICE_TABLE(of, mtk_spi_of_match);
···
 	reg_val = readl(mdata->base + SPI_CMD_REG);
 	reg_val &= ~SPI_CMD_RST;
 	writel(reg_val, mdata->base + SPI_CMD_REG);
+}
+
+static int mtk_spi_set_hw_cs_timing(struct spi_device *spi)
+{
+	struct mtk_spi *mdata = spi_master_get_devdata(spi->master);
+	struct spi_delay *cs_setup = &spi->cs_setup;
+	struct spi_delay *cs_hold = &spi->cs_hold;
+	struct spi_delay *cs_inactive = &spi->cs_inactive;
+	u32 setup, hold, inactive;
+	u32 reg_val;
+	int delay;
+
+	delay = spi_delay_to_ns(cs_setup, NULL);
+	if (delay < 0)
+		return delay;
+	setup = (delay * DIV_ROUND_UP(mdata->spi_clk_hz, 1000000)) / 1000;
+
+	delay = spi_delay_to_ns(cs_hold, NULL);
+	if (delay < 0)
+		return delay;
+	hold = (delay * DIV_ROUND_UP(mdata->spi_clk_hz, 1000000)) / 1000;
+
+	delay = spi_delay_to_ns(cs_inactive, NULL);
+	if (delay < 0)
+		return delay;
+	inactive = (delay * DIV_ROUND_UP(mdata->spi_clk_hz, 1000000)) / 1000;
+
+	setup = setup ? setup : 1;
+	hold = hold ? hold : 1;
+	inactive = inactive ? inactive : 1;
+
+	reg_val = readl(mdata->base + SPI_CFG0_REG);
+	if (mdata->dev_comp->enhance_timing) {
+		hold = min_t(u32, hold, 0x10000);
+		setup = min_t(u32, setup, 0x10000);
+		reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
+		reg_val |= (((hold - 1) & 0xffff)
+			   << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
+		reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
+		reg_val |= (((setup - 1) & 0xffff)
+			   << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
+	} else {
+		hold = min_t(u32, hold, 0x100);
+		setup = min_t(u32, setup, 0x100);
+		reg_val &= ~(0xff << SPI_CFG0_CS_HOLD_OFFSET);
+		reg_val |= (((hold - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET);
+		reg_val &= ~(0xff << SPI_CFG0_CS_SETUP_OFFSET);
+		reg_val |= (((setup - 1) & 0xff)
+			   << SPI_CFG0_CS_SETUP_OFFSET);
+	}
+	writel(reg_val, mdata->base + SPI_CFG0_REG);
+
+	inactive = min_t(u32, inactive, 0x100);
+	reg_val = readl(mdata->base + SPI_CFG1_REG);
+	reg_val &= ~SPI_CFG1_CS_IDLE_MASK;
+	reg_val |= (((inactive - 1) & 0xff) << SPI_CFG1_CS_IDLE_OFFSET);
+	writel(reg_val, mdata->base + SPI_CFG1_REG);
+
+	return 0;
 }
 
 static int mtk_spi_prepare_message(struct spi_master *master,
···
 		writel(mdata->pad_sel[spi->chip_select],
 		       mdata->base + SPI_PAD_SEL_REG);
 
+	/* tick delay */
+	reg_val = readl(mdata->base + SPI_CFG1_REG);
+	reg_val &= ~SPI_CFG1_GET_TICK_DLY_MASK;
+	reg_val |= ((chip_config->tick_delay & 0x7)
+		   << SPI_CFG1_GET_TICK_DLY_OFFSET);
+	writel(reg_val, mdata->base + SPI_CFG1_REG);
+
+	/* set hw cs timing */
+	mtk_spi_set_hw_cs_timing(spi);
 	return 0;
 }
···
 static void mtk_spi_prepare_transfer(struct spi_master *master,
 				     struct spi_transfer *xfer)
 {
-	u32 spi_clk_hz, div, sck_time, reg_val;
+	u32 div, sck_time, reg_val;
 	struct mtk_spi *mdata = spi_master_get_devdata(master);
 
-	spi_clk_hz = clk_get_rate(mdata->spi_clk);
-	if (xfer->speed_hz < spi_clk_hz / 2)
-		div = DIV_ROUND_UP(spi_clk_hz, xfer->speed_hz);
+	if (xfer->speed_hz < mdata->spi_clk_hz / 2)
+		div = DIV_ROUND_UP(mdata->spi_clk_hz, xfer->speed_hz);
 	else
 		div = 1;
···
 	return (xfer->len > MTK_SPI_MAX_FIFO_SIZE &&
 		(unsigned long)xfer->tx_buf % 4 == 0 &&
 		(unsigned long)xfer->rx_buf % 4 == 0);
-}
-
-static int mtk_spi_set_hw_cs_timing(struct spi_device *spi,
-				    struct spi_delay *setup,
-				    struct spi_delay *hold,
-				    struct spi_delay *inactive)
-{
-	struct mtk_spi *mdata = spi_master_get_devdata(spi->master);
-	u16 setup_dly, hold_dly, inactive_dly;
-	u32 reg_val;
-
-	if ((setup && setup->unit != SPI_DELAY_UNIT_SCK) ||
-	    (hold && hold->unit != SPI_DELAY_UNIT_SCK) ||
-	    (inactive && inactive->unit != SPI_DELAY_UNIT_SCK)) {
-		dev_err(&spi->dev,
-			"Invalid delay unit, should be SPI_DELAY_UNIT_SCK\n");
-		return -EINVAL;
-	}
-
-	setup_dly = setup ? setup->value : 1;
-	hold_dly = hold ? hold->value : 1;
-	inactive_dly = inactive ? inactive->value : 1;
-
-	reg_val = readl(mdata->base + SPI_CFG0_REG);
-	if (mdata->dev_comp->enhance_timing) {
-		reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
-		reg_val |= (((hold_dly - 1) & 0xffff)
-			   << SPI_ADJUST_CFG0_CS_HOLD_OFFSET);
-		reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
-		reg_val |= (((setup_dly - 1) & 0xffff)
-			   << SPI_ADJUST_CFG0_CS_SETUP_OFFSET);
-	} else {
-		reg_val &= ~(0xff << SPI_CFG0_CS_HOLD_OFFSET);
-		reg_val |= (((hold_dly - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET);
-		reg_val &= ~(0xff << SPI_CFG0_CS_SETUP_OFFSET);
-		reg_val |= (((setup_dly - 1) & 0xff)
-			   << SPI_CFG0_CS_SETUP_OFFSET);
-	}
-	writel(reg_val, mdata->base + SPI_CFG0_REG);
-
-	reg_val = readl(mdata->base + SPI_CFG1_REG);
-	reg_val &= ~SPI_CFG1_CS_IDLE_MASK;
-	reg_val |= (((inactive_dly - 1) & 0xff) << SPI_CFG1_CS_IDLE_OFFSET);
-	writel(reg_val, mdata->base + SPI_CFG1_REG);
-
-	return 0;
 }
 
 static int mtk_spi_setup(struct spi_device *spi)
···
 		goto err_put_master;
 	}
 
-	clk_disable_unprepare(mdata->spi_clk);
+	mdata->spi_clk_hz = clk_get_rate(mdata->spi_clk);
+
+	if (mdata->dev_comp->no_need_unprepare)
+		clk_disable(mdata->spi_clk);
+	else
+		clk_disable_unprepare(mdata->spi_clk);
 
 	pm_runtime_enable(&pdev->dev);
 
···
 
 	mtk_spi_reset(mdata);
 
+	if (mdata->dev_comp->no_need_unprepare)
+		clk_unprepare(mdata->spi_clk);
+
 	return 0;
 }
···
 	struct spi_master *master = dev_get_drvdata(dev);
 	struct mtk_spi *mdata = spi_master_get_devdata(master);
 
-	clk_disable_unprepare(mdata->spi_clk);
+	if (mdata->dev_comp->no_need_unprepare)
+		clk_disable(mdata->spi_clk);
+	else
+		clk_disable_unprepare(mdata->spi_clk);
 
 	return 0;
 }
···
 	struct mtk_spi *mdata =
spi_master_get_devdata(master); 966 918 int ret; 967 919 968 - ret = clk_prepare_enable(mdata->spi_clk); 920 + if (mdata->dev_comp->no_need_unprepare) 921 + ret = clk_enable(mdata->spi_clk); 922 + else 923 + ret = clk_prepare_enable(mdata->spi_clk); 969 924 if (ret < 0) { 970 925 dev_err(dev, "failed to enable spi_clk (%d)\n", ret); 971 926 return ret;
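The new mtk_spi_set_hw_cs_timing() above converts each spi_delay from nanoseconds into source-clock ticks before clamping and programming the (value - 1) register fields. The arithmetic can be sketched in plain C; the helper name is mine, and DIV_ROUND_UP mirrors the kernel macro:

```c
#include <stdint.h>

/* Round-up division, as the kernel's DIV_ROUND_UP() macro */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/*
 * Scale a chip-select delay in nanoseconds to SPI source-clock ticks:
 * multiply by the clock rate in MHz (rounded up), divide by 1000, and
 * clamp a zero result to 1 because the CFG registers encode (ticks - 1).
 */
static uint32_t ns_to_cs_ticks(uint32_t delay_ns, uint32_t spi_clk_hz)
{
	uint32_t ticks = (delay_ns * DIV_ROUND_UP(spi_clk_hz, 1000000)) / 1000;

	return ticks ? ticks : 1;
}
/* e.g. 100 ns at a 109.2 MHz clock -> ceil to 110 MHz -> 11 ticks */
```

Caching the rate in mdata->spi_clk_hz at probe time (as the diff also does in mtk_spi_prepare_transfer) avoids a clk_get_rate() call per transfer.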
+32 -12
drivers/spi/spi-mxic.c
··· 335 335 static bool mxic_spi_mem_supports_op(struct spi_mem *mem, 336 336 const struct spi_mem_op *op) 337 337 { 338 - if (op->data.buswidth > 4 || op->addr.buswidth > 4 || 339 - op->dummy.buswidth > 4 || op->cmd.buswidth > 4) 338 + bool all_false; 339 + 340 + if (op->data.buswidth > 8 || op->addr.buswidth > 8 || 341 + op->dummy.buswidth > 8 || op->cmd.buswidth > 8) 340 342 return false; 341 343 342 344 if (op->data.nbytes && op->dummy.nbytes && ··· 348 346 if (op->addr.nbytes > 7) 349 347 return false; 350 348 351 - return spi_mem_default_supports_op(mem, op); 349 + all_false = !op->cmd.dtr && !op->addr.dtr && !op->dummy.dtr && 350 + !op->data.dtr; 351 + 352 + if (all_false) 353 + return spi_mem_default_supports_op(mem, op); 354 + else 355 + return spi_mem_dtr_supports_op(mem, op); 352 356 } 353 357 354 358 static int mxic_spi_mem_exec_op(struct spi_mem *mem, ··· 363 355 struct mxic_spi *mxic = spi_master_get_devdata(mem->spi->master); 364 356 int nio = 1, i, ret; 365 357 u32 ss_ctrl; 366 - u8 addr[8]; 367 - u8 opcode = op->cmd.opcode; 358 + u8 addr[8], cmd[2]; 368 359 369 360 ret = mxic_spi_set_freq(mxic, mem->spi->max_speed_hz); 370 361 if (ret) 371 362 return ret; 372 363 373 - if (mem->spi->mode & (SPI_TX_QUAD | SPI_RX_QUAD)) 364 + if (mem->spi->mode & (SPI_TX_OCTAL | SPI_RX_OCTAL)) 365 + nio = 8; 366 + else if (mem->spi->mode & (SPI_TX_QUAD | SPI_RX_QUAD)) 374 367 nio = 4; 375 368 else if (mem->spi->mode & (SPI_TX_DUAL | SPI_RX_DUAL)) 376 369 nio = 2; ··· 383 374 mxic->regs + HC_CFG); 384 375 writel(HC_EN_BIT, mxic->regs + HC_EN); 385 376 386 - ss_ctrl = OP_CMD_BYTES(1) | OP_CMD_BUSW(fls(op->cmd.buswidth) - 1); 377 + ss_ctrl = OP_CMD_BYTES(op->cmd.nbytes) | 378 + OP_CMD_BUSW(fls(op->cmd.buswidth) - 1) | 379 + (op->cmd.dtr ? OP_CMD_DDR : 0); 387 380 388 381 if (op->addr.nbytes) 389 382 ss_ctrl |= OP_ADDR_BYTES(op->addr.nbytes) | 390 - OP_ADDR_BUSW(fls(op->addr.buswidth) - 1); 383 + OP_ADDR_BUSW(fls(op->addr.buswidth) - 1) | 384 + (op->addr.dtr ? 
OP_ADDR_DDR : 0); 391 385 392 386 if (op->dummy.nbytes) 393 387 ss_ctrl |= OP_DUMMY_CYC(op->dummy.nbytes); 394 388 395 389 if (op->data.nbytes) { 396 - ss_ctrl |= OP_DATA_BUSW(fls(op->data.buswidth) - 1); 397 - if (op->data.dir == SPI_MEM_DATA_IN) 390 + ss_ctrl |= OP_DATA_BUSW(fls(op->data.buswidth) - 1) | 391 + (op->data.dtr ? OP_DATA_DDR : 0); 392 + if (op->data.dir == SPI_MEM_DATA_IN) { 398 393 ss_ctrl |= OP_READ; 394 + if (op->data.dtr) 395 + ss_ctrl |= OP_DQS_EN; 396 + } 399 397 } 400 398 401 399 writel(ss_ctrl, mxic->regs + SS_CTRL(mem->spi->chip_select)); ··· 410 394 writel(readl(mxic->regs + HC_CFG) | HC_CFG_MAN_CS_ASSERT, 411 395 mxic->regs + HC_CFG); 412 396 413 - ret = mxic_spi_data_xfer(mxic, &opcode, NULL, 1); 397 + for (i = 0; i < op->cmd.nbytes; i++) 398 + cmd[i] = op->cmd.opcode >> (8 * (op->cmd.nbytes - i - 1)); 399 + 400 + ret = mxic_spi_data_xfer(mxic, cmd, NULL, op->cmd.nbytes); 414 401 if (ret) 415 402 goto out; 416 403 ··· 586 567 master->bits_per_word_mask = SPI_BPW_MASK(8); 587 568 master->mode_bits = SPI_CPOL | SPI_CPHA | 588 569 SPI_RX_DUAL | SPI_TX_DUAL | 589 - SPI_RX_QUAD | SPI_TX_QUAD; 570 + SPI_RX_QUAD | SPI_TX_QUAD | 571 + SPI_RX_OCTAL | SPI_TX_OCTAL; 590 572 591 573 mxic_spi_hw_init(mxic); 592 574
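The new cmd[] loop in mxic_spi_mem_exec_op() serializes a one- or two-byte opcode MSB first (two bytes occur for DTR commands, where op->cmd.nbytes is 2). A minimal sketch, with illustrative opcode values:

```c
#include <stdint.h>

/* Emit an SPI-NOR opcode MSB first, as the cmd[] loop in
 * mxic_spi_mem_exec_op() does (nbytes is 1 for SDR, 2 for DTR). */
static void opcode_to_bytes(uint16_t opcode, unsigned int nbytes, uint8_t *out)
{
	for (unsigned int i = 0; i < nbytes; i++)
		out[i] = (uint8_t)(opcode >> (8 * (nbytes - i - 1)));
}
```

For a hypothetical two-byte opcode 0x059f this yields {0x05, 0x9f}; a plain one-byte 0x9f stays 0x9f.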
+16 -6
drivers/spi/spi-orion.c
··· 328 328 static void orion_spi_set_cs(struct spi_device *spi, bool enable) 329 329 { 330 330 struct orion_spi *orion_spi; 331 + void __iomem *ctrl_reg; 332 + u32 val; 331 333 332 334 orion_spi = spi_master_get_devdata(spi->master); 335 + ctrl_reg = spi_reg(orion_spi, ORION_SPI_IF_CTRL_REG); 336 + 337 + val = readl(ctrl_reg); 338 + 339 + /* Clear existing chip-select and assertion state */ 340 + val &= ~(ORION_SPI_CS_MASK | 0x1); 333 341 334 342 /* 335 343 * If this line is using a GPIO to control chip select, this internal ··· 346 338 * as it is handled by a GPIO, but that doesn't matter. What we need 347 339 * is to deassert the old chip select and assert some other chip select. 348 340 */ 349 - orion_spi_clrbits(orion_spi, ORION_SPI_IF_CTRL_REG, ORION_SPI_CS_MASK); 350 - orion_spi_setbits(orion_spi, ORION_SPI_IF_CTRL_REG, 351 - ORION_SPI_CS(spi->chip_select)); 341 + val |= ORION_SPI_CS(spi->chip_select); 352 342 353 343 /* 354 344 * Chip select logic is inverted from spi_set_cs(). For lines using a ··· 356 350 * doesn't matter. 357 351 */ 358 352 if (!enable) 359 - orion_spi_setbits(orion_spi, ORION_SPI_IF_CTRL_REG, 0x1); 360 - else 361 - orion_spi_clrbits(orion_spi, ORION_SPI_IF_CTRL_REG, 0x1); 353 + val |= 0x1; 354 + 355 + /* 356 + * To avoid toggling unwanted chip selects update the register 357 + * with a single write. 358 + */ 359 + writel(val, ctrl_reg); 362 360 } 363 361 364 362 static inline int orion_spi_wait_till_ready(struct orion_spi *orion_spi)
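The orion_spi_set_cs() rework above replaces two clrbits/setbits register accesses with one combined read-modify-write so other chip selects never glitch mid-update. A minimal sketch of the value computation, using illustrative stand-ins for the mask macros (the real ORION_SPI_CS()/ORION_SPI_CS_MASK definitions are not shown in this hunk):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for ORION_SPI_CS_MASK / ORION_SPI_CS(); the
 * actual field layout lives in the driver header, not in this hunk. */
#define EX_CS_MASK	(0x7 << 2)
#define EX_CS(nr)	(((nr) << 2) & EX_CS_MASK)
#define EX_CS_ASSERT	0x1	/* bit 0 asserts the line (inverted sense) */

/* Build the next IF_CTRL value in one pass so it can be committed with
 * a single register write, never toggling an unwanted chip select. */
static uint32_t if_ctrl_next(uint32_t old, unsigned int chip_select,
			     bool enable)
{
	uint32_t val = old & ~(EX_CS_MASK | EX_CS_ASSERT);

	val |= EX_CS(chip_select);
	if (!enable)	/* CS logic is inverted w.r.t. spi_set_cs() */
		val |= EX_CS_ASSERT;
	return val;
}
```

The design point is that the intermediate states produced by the old clear-then-set sequence were visible on the bus; composing the value first and issuing a single writel() makes the transition atomic from the device's point of view.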
+1
drivers/spi/spi-pic32.c
··· 361 361 struct dma_slave_config cfg; 362 362 int ret; 363 363 364 + memset(&cfg, 0, sizeof(cfg)); 364 365 cfg.device_fc = true; 365 366 cfg.src_addr = pic32s->dma_base + buf_offset; 366 367 cfg.dst_addr = pic32s->dma_base + buf_offset;
+17 -18
drivers/spi/spi-pxa2xx.c
··· 594 594 595 595 static void reset_sccr1(struct driver_data *drv_data) 596 596 { 597 - struct chip_data *chip = 598 - spi_get_ctldata(drv_data->controller->cur_msg->spi); 599 - u32 sccr1_reg; 597 + u32 mask = drv_data->int_cr1 | drv_data->dma_cr1, threshold; 598 + struct chip_data *chip; 600 599 601 - sccr1_reg = pxa2xx_spi_read(drv_data, SSCR1) & ~drv_data->int_cr1; 600 + if (drv_data->controller->cur_msg) { 601 + chip = spi_get_ctldata(drv_data->controller->cur_msg->spi); 602 + threshold = chip->threshold; 603 + } else { 604 + threshold = 0; 605 + } 606 + 602 607 switch (drv_data->ssp_type) { 603 608 case QUARK_X1000_SSP: 604 - sccr1_reg &= ~QUARK_X1000_SSCR1_RFT; 609 + mask |= QUARK_X1000_SSCR1_RFT; 605 610 break; 606 611 case CE4100_SSP: 607 - sccr1_reg &= ~CE4100_SSCR1_RFT; 612 + mask |= CE4100_SSCR1_RFT; 608 613 break; 609 614 default: 610 - sccr1_reg &= ~SSCR1_RFT; 615 + mask |= SSCR1_RFT; 611 616 break; 612 617 } 613 - sccr1_reg |= chip->threshold; 614 - pxa2xx_spi_write(drv_data, SSCR1, sccr1_reg); 618 + 619 + pxa2xx_spi_update(drv_data, SSCR1, mask, threshold); 615 620 } 616 621 617 622 static void int_stop_and_reset(struct driver_data *drv_data) ··· 729 724 730 725 static void handle_bad_msg(struct driver_data *drv_data) 731 726 { 727 + int_stop_and_reset(drv_data); 732 728 pxa2xx_spi_off(drv_data); 733 - clear_SSCR1_bits(drv_data, drv_data->int_cr1); 734 - if (!pxa25x_ssp_comp(drv_data)) 735 - pxa2xx_spi_write(drv_data, SSTO, 0); 736 - write_SSSR_CS(drv_data, drv_data->clear_sr); 737 729 738 730 dev_err(drv_data->ssp->dev, "bad message state in interrupt handler\n"); 739 731 } ··· 1158 1156 { 1159 1157 struct driver_data *drv_data = spi_controller_get_devdata(controller); 1160 1158 1159 + int_stop_and_reset(drv_data); 1160 + 1161 1161 /* Disable the SSP */ 1162 1162 pxa2xx_spi_off(drv_data); 1163 - /* Clear and disable interrupts and service requests */ 1164 - write_SSSR_CS(drv_data, drv_data->clear_sr); 1165 - clear_SSCR1_bits(drv_data, 
drv_data->int_cr1 | drv_data->dma_cr1); 1166 - if (!pxa25x_ssp_comp(drv_data)) 1167 - pxa2xx_spi_write(drv_data, SSTO, 0); 1168 1163 1169 1164 /* 1170 1165 * Stop the DMA if running. Note DMA callback handler may have unset
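The reworked reset_sccr1() above funnels everything through a single masked update. The underlying pattern, assumed from the call `pxa2xx_spi_update(drv_data, SSCR1, mask, threshold)` (the helper body is not part of this hunk), is the classic clear-mask-then-set-value step:

```c
#include <stdint.h>

/* Masked register field update: clear every bit in 'mask', then set
 * 'value' restricted to that mask -- the semantics assumed for
 * pxa2xx_spi_update() from how reset_sccr1() calls it. */
static uint32_t reg_field_update(uint32_t reg, uint32_t mask, uint32_t value)
{
	return (reg & ~mask) | (value & mask);
}
```

Folding int_cr1, dma_cr1, and the SSP-specific RFT field into one mask lets the interrupt, DMA, and threshold bits be rewritten in a single register access instead of three.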
+694
drivers/spi/spi-rockchip-sfc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Rockchip Serial Flash Controller Driver 4 + * 5 + * Copyright (c) 2017-2021, Rockchip Inc. 6 + * Author: Shawn Lin <shawn.lin@rock-chips.com> 7 + * Chris Morgan <macroalpha82@gmail.com> 8 + * Jon Lin <Jon.lin@rock-chips.com> 9 + */ 10 + 11 + #include <linux/bitops.h> 12 + #include <linux/clk.h> 13 + #include <linux/completion.h> 14 + #include <linux/dma-mapping.h> 15 + #include <linux/iopoll.h> 16 + #include <linux/mm.h> 17 + #include <linux/module.h> 18 + #include <linux/of.h> 19 + #include <linux/platform_device.h> 20 + #include <linux/slab.h> 21 + #include <linux/interrupt.h> 22 + #include <linux/spi/spi-mem.h> 23 + 24 + /* System control */ 25 + #define SFC_CTRL 0x0 26 + #define SFC_CTRL_PHASE_SEL_NEGETIVE BIT(1) 27 + #define SFC_CTRL_CMD_BITS_SHIFT 8 28 + #define SFC_CTRL_ADDR_BITS_SHIFT 10 29 + #define SFC_CTRL_DATA_BITS_SHIFT 12 30 + 31 + /* Interrupt mask */ 32 + #define SFC_IMR 0x4 33 + #define SFC_IMR_RX_FULL BIT(0) 34 + #define SFC_IMR_RX_UFLOW BIT(1) 35 + #define SFC_IMR_TX_OFLOW BIT(2) 36 + #define SFC_IMR_TX_EMPTY BIT(3) 37 + #define SFC_IMR_TRAN_FINISH BIT(4) 38 + #define SFC_IMR_BUS_ERR BIT(5) 39 + #define SFC_IMR_NSPI_ERR BIT(6) 40 + #define SFC_IMR_DMA BIT(7) 41 + 42 + /* Interrupt clear */ 43 + #define SFC_ICLR 0x8 44 + #define SFC_ICLR_RX_FULL BIT(0) 45 + #define SFC_ICLR_RX_UFLOW BIT(1) 46 + #define SFC_ICLR_TX_OFLOW BIT(2) 47 + #define SFC_ICLR_TX_EMPTY BIT(3) 48 + #define SFC_ICLR_TRAN_FINISH BIT(4) 49 + #define SFC_ICLR_BUS_ERR BIT(5) 50 + #define SFC_ICLR_NSPI_ERR BIT(6) 51 + #define SFC_ICLR_DMA BIT(7) 52 + 53 + /* FIFO threshold level */ 54 + #define SFC_FTLR 0xc 55 + #define SFC_FTLR_TX_SHIFT 0 56 + #define SFC_FTLR_TX_MASK 0x1f 57 + #define SFC_FTLR_RX_SHIFT 8 58 + #define SFC_FTLR_RX_MASK 0x1f 59 + 60 + /* Reset FSM and FIFO */ 61 + #define SFC_RCVR 0x10 62 + #define SFC_RCVR_RESET BIT(0) 63 + 64 + /* Enhanced mode */ 65 + #define SFC_AX 0x14 66 + 67 + /* Address Bit number 
*/ 68 + #define SFC_ABIT 0x18 69 + 70 + /* Interrupt status */ 71 + #define SFC_ISR 0x1c 72 + #define SFC_ISR_RX_FULL_SHIFT BIT(0) 73 + #define SFC_ISR_RX_UFLOW_SHIFT BIT(1) 74 + #define SFC_ISR_TX_OFLOW_SHIFT BIT(2) 75 + #define SFC_ISR_TX_EMPTY_SHIFT BIT(3) 76 + #define SFC_ISR_TX_FINISH_SHIFT BIT(4) 77 + #define SFC_ISR_BUS_ERR_SHIFT BIT(5) 78 + #define SFC_ISR_NSPI_ERR_SHIFT BIT(6) 79 + #define SFC_ISR_DMA_SHIFT BIT(7) 80 + 81 + /* FIFO status */ 82 + #define SFC_FSR 0x20 83 + #define SFC_FSR_TX_IS_FULL BIT(0) 84 + #define SFC_FSR_TX_IS_EMPTY BIT(1) 85 + #define SFC_FSR_RX_IS_EMPTY BIT(2) 86 + #define SFC_FSR_RX_IS_FULL BIT(3) 87 + #define SFC_FSR_TXLV_MASK GENMASK(12, 8) 88 + #define SFC_FSR_TXLV_SHIFT 8 89 + #define SFC_FSR_RXLV_MASK GENMASK(20, 16) 90 + #define SFC_FSR_RXLV_SHIFT 16 91 + 92 + /* FSM status */ 93 + #define SFC_SR 0x24 94 + #define SFC_SR_IS_IDLE 0x0 95 + #define SFC_SR_IS_BUSY 0x1 96 + 97 + /* Raw interrupt status */ 98 + #define SFC_RISR 0x28 99 + #define SFC_RISR_RX_FULL BIT(0) 100 + #define SFC_RISR_RX_UNDERFLOW BIT(1) 101 + #define SFC_RISR_TX_OVERFLOW BIT(2) 102 + #define SFC_RISR_TX_EMPTY BIT(3) 103 + #define SFC_RISR_TRAN_FINISH BIT(4) 104 + #define SFC_RISR_BUS_ERR BIT(5) 105 + #define SFC_RISR_NSPI_ERR BIT(6) 106 + #define SFC_RISR_DMA BIT(7) 107 + 108 + /* Version */ 109 + #define SFC_VER 0x2C 110 + #define SFC_VER_3 0x3 111 + #define SFC_VER_4 0x4 112 + #define SFC_VER_5 0x5 113 + 114 + /* Delay line controller register */ 115 + #define SFC_DLL_CTRL0 0x3C 116 + #define SFC_DLL_CTRL0_SCLK_SMP_DLL BIT(15) 117 + #define SFC_DLL_CTRL0_DLL_MAX_VER4 0xFFU 118 + #define SFC_DLL_CTRL0_DLL_MAX_VER5 0x1FFU 119 + 120 + /* Master trigger */ 121 + #define SFC_DMA_TRIGGER 0x80 122 + #define SFC_DMA_TRIGGER_START 1 123 + 124 + /* Src or Dst addr for master */ 125 + #define SFC_DMA_ADDR 0x84 126 + 127 + /* Length control register extension 32GB */ 128 + #define SFC_LEN_CTRL 0x88 129 + #define SFC_LEN_CTRL_TRB_SEL 1 130 + #define SFC_LEN_EXT 0x8C
131 + 132 + /* Command */ 133 + #define SFC_CMD 0x100 134 + #define SFC_CMD_IDX_SHIFT 0 135 + #define SFC_CMD_DUMMY_SHIFT 8 136 + #define SFC_CMD_DIR_SHIFT 12 137 + #define SFC_CMD_DIR_RD 0 138 + #define SFC_CMD_DIR_WR 1 139 + #define SFC_CMD_ADDR_SHIFT 14 140 + #define SFC_CMD_ADDR_0BITS 0 141 + #define SFC_CMD_ADDR_24BITS 1 142 + #define SFC_CMD_ADDR_32BITS 2 143 + #define SFC_CMD_ADDR_XBITS 3 144 + #define SFC_CMD_TRAN_BYTES_SHIFT 16 145 + #define SFC_CMD_CS_SHIFT 30 146 + 147 + /* Address */ 148 + #define SFC_ADDR 0x104 149 + 150 + /* Data */ 151 + #define SFC_DATA 0x108 152 + 153 + /* The controller and its documentation report support for up to 4 CS 154 + * devices (0-3); however, I have only been able to test a single CS (CS 0) 155 + * due to the configuration of my device. 156 + */ 157 + #define SFC_MAX_CHIPSELECT_NUM 4 158 + 159 + /* The SFC can transfer at most 16KB - 1 at one time, so 160 + * we set it to 15.5KB here for alignment. 161 + */ 162 + #define SFC_MAX_IOSIZE_VER3 (512 * 31) 163 + 164 + /* DMA is only enabled for large data transmission */ 165 + #define SFC_DMA_TRANS_THRETHOLD (0x40) 166 + 167 + /* Maximum clock values from the datasheet suggest keeping the clock 168 + * under 150MHz. No minimum or average value is suggested.
169 + */ 170 + #define SFC_MAX_SPEED (150 * 1000 * 1000) 171 + 172 + struct rockchip_sfc { 173 + struct device *dev; 174 + void __iomem *regbase; 175 + struct clk *hclk; 176 + struct clk *clk; 177 + u32 frequency; 178 + /* virtual mapped addr for dma_buffer */ 179 + void *buffer; 180 + dma_addr_t dma_buffer; 181 + struct completion cp; 182 + bool use_dma; 183 + u32 max_iosize; 184 + u16 version; 185 + }; 186 + 187 + static int rockchip_sfc_reset(struct rockchip_sfc *sfc) 188 + { 189 + int err; 190 + u32 status; 191 + 192 + writel_relaxed(SFC_RCVR_RESET, sfc->regbase + SFC_RCVR); 193 + 194 + err = readl_poll_timeout(sfc->regbase + SFC_RCVR, status, 195 + !(status & SFC_RCVR_RESET), 20, 196 + jiffies_to_usecs(HZ)); 197 + if (err) 198 + dev_err(sfc->dev, "SFC reset never finished\n"); 199 + 200 + /* Still need to clear the masked interrupt from RISR */ 201 + writel_relaxed(0xFFFFFFFF, sfc->regbase + SFC_ICLR); 202 + 203 + dev_dbg(sfc->dev, "reset\n"); 204 + 205 + return err; 206 + } 207 + 208 + static u16 rockchip_sfc_get_version(struct rockchip_sfc *sfc) 209 + { 210 + return (u16)(readl(sfc->regbase + SFC_VER) & 0xffff); 211 + } 212 + 213 + static u32 rockchip_sfc_get_max_iosize(struct rockchip_sfc *sfc) 214 + { 215 + return SFC_MAX_IOSIZE_VER3; 216 + } 217 + 218 + static void rockchip_sfc_irq_unmask(struct rockchip_sfc *sfc, u32 mask) 219 + { 220 + u32 reg; 221 + 222 + /* Enable transfer complete interrupt */ 223 + reg = readl(sfc->regbase + SFC_IMR); 224 + reg &= ~mask; 225 + writel(reg, sfc->regbase + SFC_IMR); 226 + } 227 + 228 + static void rockchip_sfc_irq_mask(struct rockchip_sfc *sfc, u32 mask) 229 + { 230 + u32 reg; 231 + 232 + /* Disable transfer finish interrupt */ 233 + reg = readl(sfc->regbase + SFC_IMR); 234 + reg |= mask; 235 + writel(reg, sfc->regbase + SFC_IMR); 236 + } 237 + 238 + static int rockchip_sfc_init(struct rockchip_sfc *sfc) 239 + { 240 + writel(0, sfc->regbase + SFC_CTRL); 241 + writel(0xFFFFFFFF, sfc->regbase + SFC_ICLR); 242 + 
rockchip_sfc_irq_mask(sfc, 0xFFFFFFFF); 243 + if (rockchip_sfc_get_version(sfc) >= SFC_VER_4) 244 + writel(SFC_LEN_CTRL_TRB_SEL, sfc->regbase + SFC_LEN_CTRL); 245 + 246 + return 0; 247 + } 248 + 249 + static int rockchip_sfc_wait_txfifo_ready(struct rockchip_sfc *sfc, u32 timeout_us) 250 + { 251 + int ret = 0; 252 + u32 status; 253 + 254 + ret = readl_poll_timeout(sfc->regbase + SFC_FSR, status, 255 + status & SFC_FSR_TXLV_MASK, 0, 256 + timeout_us); 257 + if (ret) { 258 + dev_dbg(sfc->dev, "sfc wait tx fifo timeout\n"); 259 + 260 + return -ETIMEDOUT; 261 + } 262 + 263 + return (status & SFC_FSR_TXLV_MASK) >> SFC_FSR_TXLV_SHIFT; 264 + } 265 + 266 + static int rockchip_sfc_wait_rxfifo_ready(struct rockchip_sfc *sfc, u32 timeout_us) 267 + { 268 + int ret = 0; 269 + u32 status; 270 + 271 + ret = readl_poll_timeout(sfc->regbase + SFC_FSR, status, 272 + status & SFC_FSR_RXLV_MASK, 0, 273 + timeout_us); 274 + if (ret) { 275 + dev_dbg(sfc->dev, "sfc wait rx fifo timeout\n"); 276 + 277 + return -ETIMEDOUT; 278 + } 279 + 280 + return (status & SFC_FSR_RXLV_MASK) >> SFC_FSR_RXLV_SHIFT; 281 + } 282 + 283 + static void rockchip_sfc_adjust_op_work(struct spi_mem_op *op) 284 + { 285 + if (unlikely(op->dummy.nbytes && !op->addr.nbytes)) { 286 + /* 287 + * The SFC cannot output DUMMY cycles right after CMD cycles, so 288 + * treat them as ADDR cycles.
289 + */ 290 + op->addr.nbytes = op->dummy.nbytes; 291 + op->addr.buswidth = op->dummy.buswidth; 292 + op->addr.val = 0xFFFFFFFFF; 293 + 294 + op->dummy.nbytes = 0; 295 + } 296 + } 297 + 298 + static int rockchip_sfc_xfer_setup(struct rockchip_sfc *sfc, 299 + struct spi_mem *mem, 300 + const struct spi_mem_op *op, 301 + u32 len) 302 + { 303 + u32 ctrl = 0, cmd = 0; 304 + 305 + /* set CMD */ 306 + cmd = op->cmd.opcode; 307 + ctrl |= ((op->cmd.buswidth >> 1) << SFC_CTRL_CMD_BITS_SHIFT); 308 + 309 + /* set ADDR */ 310 + if (op->addr.nbytes) { 311 + if (op->addr.nbytes == 4) { 312 + cmd |= SFC_CMD_ADDR_32BITS << SFC_CMD_ADDR_SHIFT; 313 + } else if (op->addr.nbytes == 3) { 314 + cmd |= SFC_CMD_ADDR_24BITS << SFC_CMD_ADDR_SHIFT; 315 + } else { 316 + cmd |= SFC_CMD_ADDR_XBITS << SFC_CMD_ADDR_SHIFT; 317 + writel(op->addr.nbytes * 8 - 1, sfc->regbase + SFC_ABIT); 318 + } 319 + 320 + ctrl |= ((op->addr.buswidth >> 1) << SFC_CTRL_ADDR_BITS_SHIFT); 321 + } 322 + 323 + /* set DUMMY */ 324 + if (op->dummy.nbytes) { 325 + if (op->dummy.buswidth == 4) 326 + cmd |= op->dummy.nbytes * 2 << SFC_CMD_DUMMY_SHIFT; 327 + else if (op->dummy.buswidth == 2) 328 + cmd |= op->dummy.nbytes * 4 << SFC_CMD_DUMMY_SHIFT; 329 + else 330 + cmd |= op->dummy.nbytes * 8 << SFC_CMD_DUMMY_SHIFT; 331 + } 332 + 333 + /* set DATA */ 334 + if (sfc->version >= SFC_VER_4) /* Clear it if no data to transfer */ 335 + writel(len, sfc->regbase + SFC_LEN_EXT); 336 + else 337 + cmd |= len << SFC_CMD_TRAN_BYTES_SHIFT; 338 + if (len) { 339 + if (op->data.dir == SPI_MEM_DATA_OUT) 340 + cmd |= SFC_CMD_DIR_WR << SFC_CMD_DIR_SHIFT; 341 + 342 + ctrl |= ((op->data.buswidth >> 1) << SFC_CTRL_DATA_BITS_SHIFT); 343 + } 344 + if (!len && op->addr.nbytes) 345 + cmd |= SFC_CMD_DIR_WR << SFC_CMD_DIR_SHIFT; 346 + 347 + /* set the Controller */ 348 + ctrl |= SFC_CTRL_PHASE_SEL_NEGETIVE; 349 + cmd |= mem->spi->chip_select << SFC_CMD_CS_SHIFT; 350 + 351 + dev_dbg(sfc->dev, "sfc addr.nbytes=%x(x%d) dummy.nbytes=%x(x%d)\n", 352 + 
op->addr.nbytes, op->addr.buswidth, 353 + op->dummy.nbytes, op->dummy.buswidth); 354 + dev_dbg(sfc->dev, "sfc ctrl=%x cmd=%x addr=%llx len=%x\n", 355 + ctrl, cmd, op->addr.val, len); 356 + 357 + writel(ctrl, sfc->regbase + SFC_CTRL); 358 + writel(cmd, sfc->regbase + SFC_CMD); 359 + if (op->addr.nbytes) 360 + writel(op->addr.val, sfc->regbase + SFC_ADDR); 361 + 362 + return 0; 363 + } 364 + 365 + static int rockchip_sfc_write_fifo(struct rockchip_sfc *sfc, const u8 *buf, int len) 366 + { 367 + u8 bytes = len & 0x3; 368 + u32 dwords; 369 + int tx_level; 370 + u32 write_words; 371 + u32 tmp = 0; 372 + 373 + dwords = len >> 2; 374 + while (dwords) { 375 + tx_level = rockchip_sfc_wait_txfifo_ready(sfc, 1000); 376 + if (tx_level < 0) 377 + return tx_level; 378 + write_words = min_t(u32, tx_level, dwords); 379 + iowrite32_rep(sfc->regbase + SFC_DATA, buf, write_words); 380 + buf += write_words << 2; 381 + dwords -= write_words; 382 + } 383 + 384 + /* write the rest non word aligned bytes */ 385 + if (bytes) { 386 + tx_level = rockchip_sfc_wait_txfifo_ready(sfc, 1000); 387 + if (tx_level < 0) 388 + return tx_level; 389 + memcpy(&tmp, buf, bytes); 390 + writel(tmp, sfc->regbase + SFC_DATA); 391 + } 392 + 393 + return len; 394 + } 395 + 396 + static int rockchip_sfc_read_fifo(struct rockchip_sfc *sfc, u8 *buf, int len) 397 + { 398 + u8 bytes = len & 0x3; 399 + u32 dwords; 400 + u8 read_words; 401 + int rx_level; 402 + int tmp; 403 + 404 + /* word aligned access only */ 405 + dwords = len >> 2; 406 + while (dwords) { 407 + rx_level = rockchip_sfc_wait_rxfifo_ready(sfc, 1000); 408 + if (rx_level < 0) 409 + return rx_level; 410 + read_words = min_t(u32, rx_level, dwords); 411 + ioread32_rep(sfc->regbase + SFC_DATA, buf, read_words); 412 + buf += read_words << 2; 413 + dwords -= read_words; 414 + } 415 + 416 + /* read the rest non word aligned bytes */ 417 + if (bytes) { 418 + rx_level = rockchip_sfc_wait_rxfifo_ready(sfc, 1000); 419 + if (rx_level < 0) 420 + return rx_level; 
421 + tmp = readl(sfc->regbase + SFC_DATA); 422 + memcpy(buf, &tmp, bytes); 423 + } 424 + 425 + return len; 426 + } 427 + 428 + static int rockchip_sfc_fifo_transfer_dma(struct rockchip_sfc *sfc, dma_addr_t dma_buf, size_t len) 429 + { 430 + writel(0xFFFFFFFF, sfc->regbase + SFC_ICLR); 431 + writel((u32)dma_buf, sfc->regbase + SFC_DMA_ADDR); 432 + writel(SFC_DMA_TRIGGER_START, sfc->regbase + SFC_DMA_TRIGGER); 433 + 434 + return len; 435 + } 436 + 437 + static int rockchip_sfc_xfer_data_poll(struct rockchip_sfc *sfc, 438 + const struct spi_mem_op *op, u32 len) 439 + { 440 + dev_dbg(sfc->dev, "sfc xfer_poll len=%x\n", len); 441 + 442 + if (op->data.dir == SPI_MEM_DATA_OUT) 443 + return rockchip_sfc_write_fifo(sfc, op->data.buf.out, len); 444 + else 445 + return rockchip_sfc_read_fifo(sfc, op->data.buf.in, len); 446 + } 447 + 448 + static int rockchip_sfc_xfer_data_dma(struct rockchip_sfc *sfc, 449 + const struct spi_mem_op *op, u32 len) 450 + { 451 + int ret; 452 + 453 + dev_dbg(sfc->dev, "sfc xfer_dma len=%x\n", len); 454 + 455 + if (op->data.dir == SPI_MEM_DATA_OUT) 456 + memcpy(sfc->buffer, op->data.buf.out, len); 457 + 458 + ret = rockchip_sfc_fifo_transfer_dma(sfc, sfc->dma_buffer, len); 459 + if (!wait_for_completion_timeout(&sfc->cp, msecs_to_jiffies(2000))) { 460 + dev_err(sfc->dev, "DMA wait for transfer finish timeout\n"); 461 + ret = -ETIMEDOUT; 462 + } 463 + rockchip_sfc_irq_mask(sfc, SFC_IMR_DMA); 464 + if (op->data.dir == SPI_MEM_DATA_IN) 465 + memcpy(op->data.buf.in, sfc->buffer, len); 466 + 467 + return ret; 468 + } 469 + 470 + static int rockchip_sfc_xfer_done(struct rockchip_sfc *sfc, u32 timeout_us) 471 + { 472 + int ret = 0; 473 + u32 status; 474 + 475 + ret = readl_poll_timeout(sfc->regbase + SFC_SR, status, 476 + !(status & SFC_SR_IS_BUSY), 477 + 20, timeout_us); 478 + if (ret) { 479 + dev_err(sfc->dev, "wait sfc idle timeout\n"); 480 + rockchip_sfc_reset(sfc); 481 + 482 + ret = -EIO; 483 + } 484 + 485 + return ret; 486 + } 487 + 488 + static 
int rockchip_sfc_exec_mem_op(struct spi_mem *mem, const struct spi_mem_op *op) 489 + { 490 + struct rockchip_sfc *sfc = spi_master_get_devdata(mem->spi->master); 491 + u32 len = op->data.nbytes; 492 + int ret; 493 + 494 + if (unlikely(mem->spi->max_speed_hz != sfc->frequency)) { 495 + ret = clk_set_rate(sfc->clk, mem->spi->max_speed_hz); 496 + if (ret) 497 + return ret; 498 + sfc->frequency = mem->spi->max_speed_hz; 499 + dev_dbg(sfc->dev, "set_freq=%dHz real_freq=%ldHz\n", 500 + sfc->frequency, clk_get_rate(sfc->clk)); 501 + } 502 + 503 + rockchip_sfc_adjust_op_work((struct spi_mem_op *)op); 504 + rockchip_sfc_xfer_setup(sfc, mem, op, len); 505 + if (len) { 506 + if (likely(sfc->use_dma) && len >= SFC_DMA_TRANS_THRETHOLD) { 507 + init_completion(&sfc->cp); 508 + rockchip_sfc_irq_unmask(sfc, SFC_IMR_DMA); 509 + ret = rockchip_sfc_xfer_data_dma(sfc, op, len); 510 + } else { 511 + ret = rockchip_sfc_xfer_data_poll(sfc, op, len); 512 + } 513 + 514 + if (ret != len) { 515 + dev_err(sfc->dev, "xfer data failed ret %d dir %d\n", ret, op->data.dir); 516 + 517 + return -EIO; 518 + } 519 + } 520 + 521 + return rockchip_sfc_xfer_done(sfc, 100000); 522 + } 523 + 524 + static int rockchip_sfc_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op) 525 + { 526 + struct rockchip_sfc *sfc = spi_master_get_devdata(mem->spi->master); 527 + 528 + op->data.nbytes = min(op->data.nbytes, sfc->max_iosize); 529 + 530 + return 0; 531 + } 532 + 533 + static const struct spi_controller_mem_ops rockchip_sfc_mem_ops = { 534 + .exec_op = rockchip_sfc_exec_mem_op, 535 + .adjust_op_size = rockchip_sfc_adjust_op_size, 536 + }; 537 + 538 + static irqreturn_t rockchip_sfc_irq_handler(int irq, void *dev_id) 539 + { 540 + struct rockchip_sfc *sfc = dev_id; 541 + u32 reg; 542 + 543 + reg = readl(sfc->regbase + SFC_RISR); 544 + 545 + /* Clear interrupt */ 546 + writel_relaxed(reg, sfc->regbase + SFC_ICLR); 547 + 548 + if (reg & SFC_RISR_DMA) { 549 + complete(&sfc->cp); 550 + 551 + return 
IRQ_HANDLED; 552 + } 553 + 554 + return IRQ_NONE; 555 + } 556 + 557 + static int rockchip_sfc_probe(struct platform_device *pdev) 558 + { 559 + struct device *dev = &pdev->dev; 560 + struct spi_master *master; 561 + struct resource *res; 562 + struct rockchip_sfc *sfc; 563 + int ret; 564 + 565 + master = devm_spi_alloc_master(&pdev->dev, sizeof(*sfc)); 566 + if (!master) 567 + return -ENOMEM; 568 + 569 + master->flags = SPI_MASTER_HALF_DUPLEX; 570 + master->mem_ops = &rockchip_sfc_mem_ops; 571 + master->dev.of_node = pdev->dev.of_node; 572 + master->mode_bits = SPI_TX_QUAD | SPI_TX_DUAL | SPI_RX_QUAD | SPI_RX_DUAL; 573 + master->max_speed_hz = SFC_MAX_SPEED; 574 + master->num_chipselect = SFC_MAX_CHIPSELECT_NUM; 575 + 576 + sfc = spi_master_get_devdata(master); 577 + sfc->dev = dev; 578 + 579 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 580 + sfc->regbase = devm_ioremap_resource(dev, res); 581 + if (IS_ERR(sfc->regbase)) 582 + return PTR_ERR(sfc->regbase); 583 + 584 + sfc->clk = devm_clk_get(&pdev->dev, "clk_sfc"); 585 + if (IS_ERR(sfc->clk)) { 586 + dev_err(&pdev->dev, "Failed to get sfc interface clk\n"); 587 + return PTR_ERR(sfc->clk); 588 + } 589 + 590 + sfc->hclk = devm_clk_get(&pdev->dev, "hclk_sfc"); 591 + if (IS_ERR(sfc->hclk)) { 592 + dev_err(&pdev->dev, "Failed to get sfc ahb clk\n"); 593 + return PTR_ERR(sfc->hclk); 594 + } 595 + 596 + sfc->use_dma = !of_property_read_bool(sfc->dev->of_node, 597 + "rockchip,sfc-no-dma"); 598 + 599 + if (sfc->use_dma) { 600 + ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)); 601 + if (ret) { 602 + dev_warn(dev, "Unable to set dma mask\n"); 603 + return ret; 604 + } 605 + 606 + sfc->buffer = dmam_alloc_coherent(dev, SFC_MAX_IOSIZE_VER3, 607 + &sfc->dma_buffer, 608 + GFP_KERNEL); 609 + if (!sfc->buffer) 610 + return -ENOMEM; 611 + } 612 + 613 + ret = clk_prepare_enable(sfc->hclk); 614 + if (ret) { 615 + dev_err(&pdev->dev, "Failed to enable ahb clk\n"); 616 + goto err_hclk; 617 + } 618 + 619 + ret = 
clk_prepare_enable(sfc->clk); 620 + if (ret) { 621 + dev_err(&pdev->dev, "Failed to enable interface clk\n"); 622 + goto err_clk; 623 + } 624 + 625 + /* Find the irq */ 626 + ret = platform_get_irq(pdev, 0); 627 + if (ret < 0) { 628 + dev_err(dev, "Failed to get the irq\n"); 629 + goto err_irq; 630 + } 631 + 632 + ret = devm_request_irq(dev, ret, rockchip_sfc_irq_handler, 633 + 0, pdev->name, sfc); 634 + if (ret) { 635 + dev_err(dev, "Failed to request irq\n"); 636 + 637 + return ret; 638 + } 639 + 640 + ret = rockchip_sfc_init(sfc); 641 + if (ret) 642 + goto err_irq; 643 + 644 + sfc->max_iosize = rockchip_sfc_get_max_iosize(sfc); 645 + sfc->version = rockchip_sfc_get_version(sfc); 646 + 647 + ret = spi_register_master(master); 648 + if (ret) 649 + goto err_irq; 650 + 651 + return 0; 652 + 653 + err_irq: 654 + clk_disable_unprepare(sfc->clk); 655 + err_clk: 656 + clk_disable_unprepare(sfc->hclk); 657 + err_hclk: 658 + return ret; 659 + } 660 + 661 + static int rockchip_sfc_remove(struct platform_device *pdev) 662 + { 663 + struct spi_master *master = platform_get_drvdata(pdev); 664 + struct rockchip_sfc *sfc = platform_get_drvdata(pdev); 665 + 666 + spi_unregister_master(master); 667 + 668 + clk_disable_unprepare(sfc->clk); 669 + clk_disable_unprepare(sfc->hclk); 670 + 671 + return 0; 672 + } 673 + 674 + static const struct of_device_id rockchip_sfc_dt_ids[] = { 675 + { .compatible = "rockchip,sfc"}, 676 + { /* sentinel */ } 677 + }; 678 + MODULE_DEVICE_TABLE(of, rockchip_sfc_dt_ids); 679 + 680 + static struct platform_driver rockchip_sfc_driver = { 681 + .driver = { 682 + .name = "rockchip-sfc", 683 + .of_match_table = rockchip_sfc_dt_ids, 684 + }, 685 + .probe = rockchip_sfc_probe, 686 + .remove = rockchip_sfc_remove, 687 + }; 688 + module_platform_driver(rockchip_sfc_driver); 689 + 690 + MODULE_LICENSE("GPL v2"); 691 + MODULE_DESCRIPTION("Rockchip Serial Flash Controller Driver"); 692 + MODULE_AUTHOR("Shawn Lin <shawn.lin@rock-chips.com>"); 693 + 
MODULE_AUTHOR("Chris Morgan <macromorgan@hotmail.com>"); 694 + MODULE_AUTHOR("Jon Lin <Jon.lin@rock-chips.com>");
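The DUMMY encoding in rockchip_sfc_xfer_setup() above is a byte-to-cycle conversion: each dummy byte is 8 bits, and a wider bus consumes more bits per clock. A sketch of the value written into the SFC_CMD dummy field (the function name is mine):

```c
#include <stdint.h>

/* Dummy bytes -> dummy clock cycles, as placed at SFC_CMD_DUMMY_SHIFT:
 * nbytes * 8 bits spread over 'buswidth' bits per cycle. */
static uint32_t dummy_bytes_to_cycles(uint32_t nbytes, uint32_t buswidth)
{
	if (buswidth == 4)
		return nbytes * 2;
	if (buswidth == 2)
		return nbytes * 4;
	return nbytes * 8;	/* 1-bit bus */
}
```

Note that one "byte" of dummy always means 8 bit-times regardless of width, which is why 4 dummy bytes on a quad bus take only 8 cycles.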
+192 -95
drivers/spi/spi-sprd-adi.c
··· 52 52 53 53 /* 54 54 * ADI slave devices include RTC, ADC, regulator, charger, thermal and so on. 55 - * The slave devices address offset is always 0x8000 and size is 4K. 55 + * ADI supports 12/14bit addresses for r2p0, and an additional 17bit mode for 56 + * r3p0 or later versions. Since bit[1:0] are zero, the spec describes them as 57 + * 10/12/15bit address modes. 58 + * The 10bit mode supports a single slave; the 12/15bit modes support 3 slaves, 59 + * where the high two bits are the slave_id. 60 + * The slave devices address offset is 0x8000 for the 10/12bit address modes, 61 + * and 0x20000 for the 15bit mode. 56 62 */ 57 - #define ADI_SLAVE_ADDR_SIZE SZ_4K 58 - #define ADI_SLAVE_OFFSET 0x8000 63 + #define ADI_10BIT_SLAVE_ADDR_SIZE SZ_4K 64 + #define ADI_10BIT_SLAVE_OFFSET 0x8000 65 + #define ADI_12BIT_SLAVE_ADDR_SIZE SZ_16K 66 + #define ADI_12BIT_SLAVE_OFFSET 0x8000 67 + #define ADI_15BIT_SLAVE_ADDR_SIZE SZ_128K 68 + #define ADI_15BIT_SLAVE_OFFSET 0x20000 59 69 60 70 /* Timeout (ms) for the trylock of hardware spinlocks */ 61 71 #define ADI_HWSPINLOCK_TIMEOUT 5000 ··· 77 67 78 68 #define ADI_FIFO_DRAIN_TIMEOUT 1000 79 69 #define ADI_READ_TIMEOUT 2000 80 - #define REG_ADDR_LOW_MASK GENMASK(11, 0) 70 + 71 + /* 72 + * Read back address from REG_ADI_RD_DATA bit[30:16] which maps to: 73 + * REG_ADI_RD_CMD bit[14:0] for r2p0 74 + * REG_ADI_RD_CMD bit[16:2] for r3p0 75 + */ 76 + #define RDBACK_ADDR_MASK_R2 GENMASK(14, 0) 77 + #define RDBACK_ADDR_MASK_R3 GENMASK(16, 2) 78 + #define RDBACK_ADDR_SHIFT_R3 2 81 79 82 80 /* Registers definitions for PMIC watchdog controller */ 83 - #define REG_WDG_LOAD_LOW 0x80 84 - #define REG_WDG_LOAD_HIGH 0x84 85 - #define REG_WDG_CTRL 0x88 86 - #define REG_WDG_LOCK 0xa0 81 + #define REG_WDG_LOAD_LOW 0x0 82 + #define REG_WDG_LOAD_HIGH 0x4 83 + #define REG_WDG_CTRL 0x8 84 + #define REG_WDG_LOCK 0x20 87 85 88 86 /* Bits definitions for register REG_WDG_CTRL */ 89 87 #define BIT_WDG_RUN BIT(1) 90 88 #define BIT_WDG_NEW BIT(2) 91 89 #define BIT_WDG_RST BIT(3) 92 90 91 + /* Bits definitions for register REG_MODULE_EN */ 92 + #define BIT_WDG_EN BIT(2) 93 + 93 94 /* Registers definitions for PMIC */ 94 95 #define PMIC_RST_STATUS 0xee8 95 96 #define PMIC_MODULE_EN 0xc08 96 97 #define PMIC_CLK_EN 0xc18 97 - #define BIT_WDG_EN BIT(2) 98 + #define PMIC_WDG_BASE 0x80 98 99 99 100 /* Definition of PMIC reset status register */ 100 101 #define HWRST_STATUS_SECURITY 0x02 ··· 124 103 #define HWRST_STATUS_WATCHDOG 0xf0 125 104 126 105 /* Use default timeout 50 ms that converts to watchdog values */ 127 - #define WDG_LOAD_VAL ((50 * 1000) / 32768) 106 + #define WDG_LOAD_VAL ((50 * 32768) / 1000) 128 107 #define WDG_LOAD_MASK GENMASK(15, 0) 129 108 #define WDG_UNLOCK_KEY 0xe551 109 + 110 + struct sprd_adi_wdg { 111 + u32 base; 112 + u32 rst_sts; 113 + u32 wdg_en; 114 + u32 wdg_clk; 115 + }; 116 + 117 + struct sprd_adi_data { 118 + u32 slave_offset; 119 + u32 slave_addr_size; 120 + int (*read_check)(u32 val, u32 reg); 121 + int (*restart)(struct notifier_block *this, 122 + unsigned long mode, void *cmd); 123 + void (*wdg_rst)(void *p); 124 + }; 130 125 131 126 struct sprd_adi { 132 127 struct spi_controller *ctlr; ··· 152 115 unsigned long slave_vbase; 153 116 unsigned long slave_pbase; 154 117 struct notifier_block restart_handler; 118 + const struct sprd_adi_data *data; 155 119 }; 156 120 157 - static int sprd_adi_check_paddr(struct sprd_adi *sadi, u32 paddr) 121 + static int sprd_adi_check_addr(struct sprd_adi *sadi, u32 reg) 158 122 { 159 - if (paddr < sadi->slave_pbase || paddr > 160 - (sadi->slave_pbase + ADI_SLAVE_ADDR_SIZE)) { 123 + if (reg >= sadi->data->slave_addr_size) { 161 124 dev_err(sadi->dev, 162 - "slave physical address is incorrect, addr = 0x%x\n", 163 - paddr); 125 + "slave address offset is incorrect, reg = 0x%x\n", 126 + reg); 164 127 return -EINVAL; 165 128 } 166 129 167 130 return 0; 168 - } 169 - 170 - static unsigned long sprd_adi_to_vaddr(struct sprd_adi *sadi, u32 paddr) 171 - { 172 - return (paddr - sadi->slave_pbase
/* Bits definitions for register REG_MODULE_EN */ 92 + #define BIT_WDG_EN BIT(2) 93 + 93 94 /* Registers definitions for PMIC */ 94 95 #define PMIC_RST_STATUS 0xee8 95 96 #define PMIC_MODULE_EN 0xc08 96 97 #define PMIC_CLK_EN 0xc18 97 - #define BIT_WDG_EN BIT(2) 98 + #define PMIC_WDG_BASE 0x80 98 99 99 100 /* Definition of PMIC reset status register */ 100 101 #define HWRST_STATUS_SECURITY 0x02 ··· 124 103 #define HWRST_STATUS_WATCHDOG 0xf0 125 104 126 105 /* Use default timeout 50 ms that converts to watchdog values */ 127 - #define WDG_LOAD_VAL ((50 * 1000) / 32768) 106 + #define WDG_LOAD_VAL ((50 * 32768) / 1000) 128 107 #define WDG_LOAD_MASK GENMASK(15, 0) 129 108 #define WDG_UNLOCK_KEY 0xe551 109 + 110 + struct sprd_adi_wdg { 111 + u32 base; 112 + u32 rst_sts; 113 + u32 wdg_en; 114 + u32 wdg_clk; 115 + }; 116 + 117 + struct sprd_adi_data { 118 + u32 slave_offset; 119 + u32 slave_addr_size; 120 + int (*read_check)(u32 val, u32 reg); 121 + int (*restart)(struct notifier_block *this, 122 + unsigned long mode, void *cmd); 123 + void (*wdg_rst)(void *p); 124 + }; 130 125 131 126 struct sprd_adi { 132 127 struct spi_controller *ctlr; ··· 152 115 unsigned long slave_vbase; 153 116 unsigned long slave_pbase; 154 117 struct notifier_block restart_handler; 118 + const struct sprd_adi_data *data; 155 119 }; 156 120 157 - static int sprd_adi_check_paddr(struct sprd_adi *sadi, u32 paddr) 121 + static int sprd_adi_check_addr(struct sprd_adi *sadi, u32 reg) 158 122 { 159 - if (paddr < sadi->slave_pbase || paddr > 160 - (sadi->slave_pbase + ADI_SLAVE_ADDR_SIZE)) { 123 + if (reg >= sadi->data->slave_addr_size) { 161 124 dev_err(sadi->dev, 162 - "slave physical address is incorrect, addr = 0x%x\n", 163 - paddr); 125 + "slave address offset is incorrect, reg = 0x%x\n", 126 + reg); 164 127 return -EINVAL; 165 128 } 166 129 167 130 return 0; 168 - } 169 - 170 - static unsigned long sprd_adi_to_vaddr(struct sprd_adi *sadi, u32 paddr) 171 - { 172 - return (paddr - sadi->slave_pbase 
+ sadi->slave_vbase); 173 131 } 174 132 175 133 static int sprd_adi_drain_fifo(struct sprd_adi *sadi) ··· 193 161 return readl_relaxed(sadi->base + REG_ADI_ARM_FIFO_STS) & BIT_FIFO_FULL; 194 162 } 195 163 196 - static int sprd_adi_read(struct sprd_adi *sadi, u32 reg_paddr, u32 *read_val) 164 + static int sprd_adi_read_check(u32 val, u32 addr) 165 + { 166 + u32 rd_addr; 167 + 168 + rd_addr = (val & RD_ADDR_MASK) >> RD_ADDR_SHIFT; 169 + 170 + if (rd_addr != addr) { 171 + pr_err("ADI read error, addr = 0x%x, val = 0x%x\n", addr, val); 172 + return -EIO; 173 + } 174 + 175 + return 0; 176 + } 177 + 178 + static int sprd_adi_read_check_r2(u32 val, u32 reg) 179 + { 180 + return sprd_adi_read_check(val, reg & RDBACK_ADDR_MASK_R2); 181 + } 182 + 183 + static int sprd_adi_read_check_r3(u32 val, u32 reg) 184 + { 185 + return sprd_adi_read_check(val, (reg & RDBACK_ADDR_MASK_R3) >> RDBACK_ADDR_SHIFT_R3); 186 + } 187 + 188 + static int sprd_adi_read(struct sprd_adi *sadi, u32 reg, u32 *read_val) 197 189 { 198 190 int read_timeout = ADI_READ_TIMEOUT; 199 191 unsigned long flags; 200 - u32 val, rd_addr; 192 + u32 val; 201 193 int ret = 0; 202 194 203 195 if (sadi->hwlock) { ··· 234 178 } 235 179 } 236 180 181 + ret = sprd_adi_check_addr(sadi, reg); 182 + if (ret) 183 + goto out; 184 + 237 185 /* 238 - * Set the physical register address need to read into RD_CMD register, 186 + * Set the slave address offset need to read into RD_CMD register, 239 187 * then ADI controller will start to transfer automatically. 240 188 */ 241 - writel_relaxed(reg_paddr, sadi->base + REG_ADI_RD_CMD); 189 + writel_relaxed(reg, sadi->base + REG_ADI_RD_CMD); 242 190 243 191 /* 244 192 * Wait read operation complete, the BIT_RD_CMD_BUSY will be set ··· 265 205 } 266 206 267 207 /* 268 - * The return value includes data and read register address, from bit 0 269 - * to bit 15 are data, and from bit 16 to bit 30 are read register 270 - * address. 
Then we can check the returned register address to validate
271 - * data.
208 + * The return value before adi r5p0 includes data and read register
209 + * address, from bit 0 to bit 15 are data, and from bit 16 to bit 30
210 + * are read register address. Then we can check the returned register
211 + * address to validate data.
272 212 */
273 - rd_addr = (val & RD_ADDR_MASK) >> RD_ADDR_SHIFT;
274 - 
275 - if (rd_addr != (reg_paddr & REG_ADDR_LOW_MASK)) {
276 - dev_err(sadi->dev, "read error, reg addr = 0x%x, val = 0x%x\n",
277 - reg_paddr, val);
278 - ret = -EIO;
279 - goto out;
213 + if (sadi->data->read_check) {
214 + ret = sadi->data->read_check(val, reg);
215 + if (ret < 0)
216 + goto out;
280 217 }
281 218 
282 219 *read_val = val & RD_VALUE_MASK;
··· 284 227 return ret;
285 228 }
286 229 
287 - static int sprd_adi_write(struct sprd_adi *sadi, u32 reg_paddr, u32 val)
230 + static int sprd_adi_write(struct sprd_adi *sadi, u32 reg, u32 val)
288 231 {
289 - unsigned long reg = sprd_adi_to_vaddr(sadi, reg_paddr);
290 232 u32 timeout = ADI_FIFO_DRAIN_TIMEOUT;
291 233 unsigned long flags;
292 234 int ret;
··· 300 244 }
301 245 }
302 246 
247 + ret = sprd_adi_check_addr(sadi, reg);
248 + if (ret)
249 + goto out;
250 + 
303 251 ret = sprd_adi_drain_fifo(sadi);
304 252 if (ret < 0)
305 253 goto out;
··· 314 254 */
315 255 do {
316 256 if (!sprd_adi_fifo_is_full(sadi)) {
317 - writel_relaxed(val, (void __iomem *)reg);
257 + /* we need virtual register address to write. 
*/ 258 + writel_relaxed(val, (void __iomem *)(sadi->slave_vbase + reg)); 318 259 break; 319 260 } 320 261 ··· 338 277 struct spi_transfer *t) 339 278 { 340 279 struct sprd_adi *sadi = spi_controller_get_devdata(ctlr); 341 - u32 phy_reg, val; 280 + u32 reg, val; 342 281 int ret; 343 282 344 283 if (t->rx_buf) { 345 - phy_reg = *(u32 *)t->rx_buf + sadi->slave_pbase; 346 - 347 - ret = sprd_adi_check_paddr(sadi, phy_reg); 348 - if (ret) 349 - return ret; 350 - 351 - ret = sprd_adi_read(sadi, phy_reg, &val); 352 - if (ret) 353 - return ret; 354 - 284 + reg = *(u32 *)t->rx_buf; 285 + ret = sprd_adi_read(sadi, reg, &val); 355 286 *(u32 *)t->rx_buf = val; 356 287 } else if (t->tx_buf) { 357 288 u32 *p = (u32 *)t->tx_buf; 358 - 359 - /* 360 - * Get the physical register address need to write and convert 361 - * the physical address to virtual address. Since we need 362 - * virtual register address to write. 363 - */ 364 - phy_reg = *p++ + sadi->slave_pbase; 365 - ret = sprd_adi_check_paddr(sadi, phy_reg); 366 - if (ret) 367 - return ret; 368 - 289 + reg = *p++; 369 290 val = *p; 370 - ret = sprd_adi_write(sadi, phy_reg, val); 371 - if (ret) 372 - return ret; 291 + ret = sprd_adi_write(sadi, reg, val); 373 292 } else { 374 293 dev_err(sadi->dev, "no buffer for transfer\n"); 375 - return -EINVAL; 294 + ret = -EINVAL; 376 295 } 377 296 378 - return 0; 297 + return ret; 379 298 } 380 299 381 - static void sprd_adi_set_wdt_rst_mode(struct sprd_adi *sadi) 300 + static void sprd_adi_set_wdt_rst_mode(void *p) 382 301 { 383 302 #if IS_ENABLED(CONFIG_SPRD_WATCHDOG) 384 303 u32 val; 304 + struct sprd_adi *sadi = (struct sprd_adi *)p; 385 305 386 - /* Set default watchdog reboot mode */ 387 - sprd_adi_read(sadi, sadi->slave_pbase + PMIC_RST_STATUS, &val); 306 + /* Init watchdog reset mode */ 307 + sprd_adi_read(sadi, PMIC_RST_STATUS, &val); 388 308 val |= HWRST_STATUS_WATCHDOG; 389 - sprd_adi_write(sadi, sadi->slave_pbase + PMIC_RST_STATUS, val); 309 + sprd_adi_write(sadi, 
PMIC_RST_STATUS, val); 390 310 #endif 391 311 } 392 312 393 - static int sprd_adi_restart_handler(struct notifier_block *this, 394 - unsigned long mode, void *cmd) 313 + static int sprd_adi_restart(struct notifier_block *this, unsigned long mode, 314 + void *cmd, struct sprd_adi_wdg *wdg) 395 315 { 396 316 struct sprd_adi *sadi = container_of(this, struct sprd_adi, 397 317 restart_handler); ··· 408 366 reboot_mode = HWRST_STATUS_NORMAL; 409 367 410 368 /* Record the reboot mode */ 411 - sprd_adi_read(sadi, sadi->slave_pbase + PMIC_RST_STATUS, &val); 369 + sprd_adi_read(sadi, wdg->rst_sts, &val); 412 370 val &= ~HWRST_STATUS_WATCHDOG; 413 371 val |= reboot_mode; 414 - sprd_adi_write(sadi, sadi->slave_pbase + PMIC_RST_STATUS, val); 372 + sprd_adi_write(sadi, wdg->rst_sts, val); 415 373 416 374 /* Enable the interface clock of the watchdog */ 417 - sprd_adi_read(sadi, sadi->slave_pbase + PMIC_MODULE_EN, &val); 375 + sprd_adi_read(sadi, wdg->wdg_en, &val); 418 376 val |= BIT_WDG_EN; 419 - sprd_adi_write(sadi, sadi->slave_pbase + PMIC_MODULE_EN, val); 377 + sprd_adi_write(sadi, wdg->wdg_en, val); 420 378 421 379 /* Enable the work clock of the watchdog */ 422 - sprd_adi_read(sadi, sadi->slave_pbase + PMIC_CLK_EN, &val); 380 + sprd_adi_read(sadi, wdg->wdg_clk, &val); 423 381 val |= BIT_WDG_EN; 424 - sprd_adi_write(sadi, sadi->slave_pbase + PMIC_CLK_EN, val); 382 + sprd_adi_write(sadi, wdg->wdg_clk, val); 425 383 426 384 /* Unlock the watchdog */ 427 - sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_LOCK, WDG_UNLOCK_KEY); 385 + sprd_adi_write(sadi, wdg->base + REG_WDG_LOCK, WDG_UNLOCK_KEY); 428 386 429 - sprd_adi_read(sadi, sadi->slave_pbase + REG_WDG_CTRL, &val); 387 + sprd_adi_read(sadi, wdg->base + REG_WDG_CTRL, &val); 430 388 val |= BIT_WDG_NEW; 431 - sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_CTRL, val); 389 + sprd_adi_write(sadi, wdg->base + REG_WDG_CTRL, val); 432 390 433 391 /* Load the watchdog timeout value, 50ms is always enough. 
*/ 434 - sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_LOAD_HIGH, 0); 435 - sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_LOAD_LOW, 392 + sprd_adi_write(sadi, wdg->base + REG_WDG_LOAD_HIGH, 0); 393 + sprd_adi_write(sadi, wdg->base + REG_WDG_LOAD_LOW, 436 394 WDG_LOAD_VAL & WDG_LOAD_MASK); 437 395 438 396 /* Start the watchdog to reset system */ 439 - sprd_adi_read(sadi, sadi->slave_pbase + REG_WDG_CTRL, &val); 397 + sprd_adi_read(sadi, wdg->base + REG_WDG_CTRL, &val); 440 398 val |= BIT_WDG_RUN | BIT_WDG_RST; 441 - sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_CTRL, val); 399 + sprd_adi_write(sadi, wdg->base + REG_WDG_CTRL, val); 442 400 443 401 /* Lock the watchdog */ 444 - sprd_adi_write(sadi, sadi->slave_pbase + REG_WDG_LOCK, ~WDG_UNLOCK_KEY); 402 + sprd_adi_write(sadi, wdg->base + REG_WDG_LOCK, ~WDG_UNLOCK_KEY); 445 403 446 404 mdelay(1000); 447 405 448 406 dev_emerg(sadi->dev, "Unable to restart system\n"); 449 407 return NOTIFY_DONE; 408 + } 409 + 410 + static int sprd_adi_restart_sc9860(struct notifier_block *this, 411 + unsigned long mode, void *cmd) 412 + { 413 + struct sprd_adi_wdg wdg = { 414 + .base = PMIC_WDG_BASE, 415 + .rst_sts = PMIC_RST_STATUS, 416 + .wdg_en = PMIC_MODULE_EN, 417 + .wdg_clk = PMIC_CLK_EN, 418 + }; 419 + 420 + return sprd_adi_restart(this, mode, cmd, &wdg); 450 421 } 451 422 452 423 static void sprd_adi_hw_init(struct sprd_adi *sadi) ··· 513 458 static int sprd_adi_probe(struct platform_device *pdev) 514 459 { 515 460 struct device_node *np = pdev->dev.of_node; 461 + const struct sprd_adi_data *data; 516 462 struct spi_controller *ctlr; 517 463 struct sprd_adi *sadi; 518 464 struct resource *res; 519 - u32 num_chipselect; 465 + u16 num_chipselect; 520 466 int ret; 521 467 522 468 if (!np) { 523 469 dev_err(&pdev->dev, "can not find the adi bus node\n"); 524 470 return -ENODEV; 471 + } 472 + 473 + data = of_device_get_match_data(&pdev->dev); 474 + if (!data) { 475 + dev_err(&pdev->dev, "no matching driver data found\n"); 476 
+ return -EINVAL; 525 477 } 526 478 527 479 pdev->id = of_alias_get_id(np, "spi"); ··· 548 486 goto put_ctlr; 549 487 } 550 488 551 - sadi->slave_vbase = (unsigned long)sadi->base + ADI_SLAVE_OFFSET; 552 - sadi->slave_pbase = res->start + ADI_SLAVE_OFFSET; 489 + sadi->slave_vbase = (unsigned long)sadi->base + 490 + data->slave_offset; 491 + sadi->slave_pbase = res->start + data->slave_offset; 553 492 sadi->ctlr = ctlr; 554 493 sadi->dev = &pdev->dev; 494 + sadi->data = data; 555 495 ret = of_hwspin_lock_get_id(np, 0); 556 496 if (ret > 0 || (IS_ENABLED(CONFIG_HWSPINLOCK) && ret == 0)) { 557 497 sadi->hwlock = ··· 574 510 } 575 511 576 512 sprd_adi_hw_init(sadi); 577 - sprd_adi_set_wdt_rst_mode(sadi); 513 + 514 + if (sadi->data->wdg_rst) 515 + sadi->data->wdg_rst(sadi); 578 516 579 517 ctlr->dev.of_node = pdev->dev.of_node; 580 518 ctlr->bus_num = pdev->id; ··· 591 525 goto put_ctlr; 592 526 } 593 527 594 - sadi->restart_handler.notifier_call = sprd_adi_restart_handler; 595 - sadi->restart_handler.priority = 128; 596 - ret = register_restart_handler(&sadi->restart_handler); 597 - if (ret) { 598 - dev_err(&pdev->dev, "can not register restart handler\n"); 599 - goto put_ctlr; 528 + if (sadi->data->restart) { 529 + sadi->restart_handler.notifier_call = sadi->data->restart; 530 + sadi->restart_handler.priority = 128; 531 + ret = register_restart_handler(&sadi->restart_handler); 532 + if (ret) { 533 + dev_err(&pdev->dev, "can not register restart handler\n"); 534 + goto put_ctlr; 535 + } 600 536 } 601 537 602 538 return 0; ··· 617 549 return 0; 618 550 } 619 551 552 + static struct sprd_adi_data sc9860_data = { 553 + .slave_offset = ADI_10BIT_SLAVE_OFFSET, 554 + .slave_addr_size = ADI_10BIT_SLAVE_ADDR_SIZE, 555 + .read_check = sprd_adi_read_check_r2, 556 + .restart = sprd_adi_restart_sc9860, 557 + .wdg_rst = sprd_adi_set_wdt_rst_mode, 558 + }; 559 + 560 + static struct sprd_adi_data sc9863_data = { 561 + .slave_offset = ADI_12BIT_SLAVE_OFFSET, 562 + .slave_addr_size = 
ADI_12BIT_SLAVE_ADDR_SIZE, 563 + .read_check = sprd_adi_read_check_r3, 564 + }; 565 + 566 + static struct sprd_adi_data ums512_data = { 567 + .slave_offset = ADI_15BIT_SLAVE_OFFSET, 568 + .slave_addr_size = ADI_15BIT_SLAVE_ADDR_SIZE, 569 + .read_check = sprd_adi_read_check_r3, 570 + }; 571 + 620 572 static const struct of_device_id sprd_adi_of_match[] = { 621 573 { 622 574 .compatible = "sprd,sc9860-adi", 575 + .data = &sc9860_data, 576 + }, 577 + { 578 + .compatible = "sprd,sc9863-adi", 579 + .data = &sc9863_data, 580 + }, 581 + { 582 + .compatible = "sprd,ums512-adi", 583 + .data = &ums512_data, 623 584 }, 624 585 { }, 625 586 };
+43 -78
drivers/spi/spi-stm32.c
··· 162 162 #define SPI_3WIRE_TX 3 163 163 #define SPI_3WIRE_RX 4 164 164 165 + #define STM32_SPI_AUTOSUSPEND_DELAY 1 /* 1 ms */ 166 + 165 167 /* 166 168 * use PIO for small transfers, avoiding DMA setup/teardown overhead for drivers 167 169 * without fifo buffers. ··· 570 568 /** 571 569 * stm32h7_spi_read_rxfifo - Read bytes in Receive Data Register 572 570 * @spi: pointer to the spi controller data structure 573 - * @flush: boolean indicating that FIFO should be flushed 574 571 * 575 572 * Write in rx_buf depends on remaining bytes to avoid to write beyond 576 573 * rx_buf end. 577 574 */ 578 - static void stm32h7_spi_read_rxfifo(struct stm32_spi *spi, bool flush) 575 + static void stm32h7_spi_read_rxfifo(struct stm32_spi *spi) 579 576 { 580 577 u32 sr = readl_relaxed(spi->base + STM32H7_SPI_SR); 581 578 u32 rxplvl = FIELD_GET(STM32H7_SPI_SR_RXPLVL, sr); 582 579 583 580 while ((spi->rx_len > 0) && 584 581 ((sr & STM32H7_SPI_SR_RXP) || 585 - (flush && ((sr & STM32H7_SPI_SR_RXWNE) || (rxplvl > 0))))) { 582 + ((sr & STM32H7_SPI_SR_EOT) && 583 + ((sr & STM32H7_SPI_SR_RXWNE) || (rxplvl > 0))))) { 586 584 u32 offs = spi->cur_xferlen - spi->rx_len; 587 585 588 586 if ((spi->rx_len >= sizeof(u32)) || 589 - (flush && (sr & STM32H7_SPI_SR_RXWNE))) { 587 + (sr & STM32H7_SPI_SR_RXWNE)) { 590 588 u32 *rx_buf32 = (u32 *)(spi->rx_buf + offs); 591 589 592 590 *rx_buf32 = readl_relaxed(spi->base + STM32H7_SPI_RXDR); 593 591 spi->rx_len -= sizeof(u32); 594 592 } else if ((spi->rx_len >= sizeof(u16)) || 595 - (flush && (rxplvl >= 2 || spi->cur_bpw > 8))) { 593 + (!(sr & STM32H7_SPI_SR_RXWNE) && 594 + (rxplvl >= 2 || spi->cur_bpw > 8))) { 596 595 u16 *rx_buf16 = (u16 *)(spi->rx_buf + offs); 597 596 598 597 *rx_buf16 = readw_relaxed(spi->base + STM32H7_SPI_RXDR); ··· 609 606 rxplvl = FIELD_GET(STM32H7_SPI_SR_RXPLVL, sr); 610 607 } 611 608 612 - dev_dbg(spi->dev, "%s%s: %d bytes left\n", __func__, 613 - flush ? 
"(flush)" : "", spi->rx_len); 609 + dev_dbg(spi->dev, "%s: %d bytes left (sr=%08x)\n", 610 + __func__, spi->rx_len, sr); 614 611 } 615 612 616 613 /** ··· 677 674 * stm32h7_spi_disable - Disable SPI controller 678 675 * @spi: pointer to the spi controller data structure 679 676 * 680 - * RX-Fifo is flushed when SPI controller is disabled. To prevent any data 681 - * loss, use stm32h7_spi_read_rxfifo(flush) to read the remaining bytes in 682 - * RX-Fifo. 683 - * Normally, if TSIZE has been configured, we should relax the hardware at the 684 - * reception of the EOT interrupt. But in case of error, EOT will not be 685 - * raised. So the subsystem unprepare_message call allows us to properly 686 - * complete the transfer from an hardware point of view. 677 + * RX-Fifo is flushed when SPI controller is disabled. 687 678 */ 688 679 static void stm32h7_spi_disable(struct stm32_spi *spi) 689 680 { 690 681 unsigned long flags; 691 - u32 cr1, sr; 682 + u32 cr1; 692 683 693 684 dev_dbg(spi->dev, "disable controller\n"); 694 685 ··· 694 697 spin_unlock_irqrestore(&spi->lock, flags); 695 698 return; 696 699 } 697 - 698 - /* Wait on EOT or suspend the flow */ 699 - if (readl_relaxed_poll_timeout_atomic(spi->base + STM32H7_SPI_SR, 700 - sr, !(sr & STM32H7_SPI_SR_EOT), 701 - 10, 100000) < 0) { 702 - if (cr1 & STM32H7_SPI_CR1_CSTART) { 703 - writel_relaxed(cr1 | STM32H7_SPI_CR1_CSUSP, 704 - spi->base + STM32H7_SPI_CR1); 705 - if (readl_relaxed_poll_timeout_atomic( 706 - spi->base + STM32H7_SPI_SR, 707 - sr, !(sr & STM32H7_SPI_SR_SUSP), 708 - 10, 100000) < 0) 709 - dev_warn(spi->dev, 710 - "Suspend request timeout\n"); 711 - } 712 - } 713 - 714 - if (!spi->cur_usedma && spi->rx_buf && (spi->rx_len > 0)) 715 - stm32h7_spi_read_rxfifo(spi, true); 716 700 717 701 if (spi->cur_usedma && spi->dma_tx) 718 702 dmaengine_terminate_all(spi->dma_tx); ··· 889 911 if (__ratelimit(&rs)) 890 912 dev_dbg_ratelimited(spi->dev, "Communication suspended\n"); 891 913 if (!spi->cur_usedma && 
(spi->rx_buf && (spi->rx_len > 0))) 892 - stm32h7_spi_read_rxfifo(spi, false); 914 + stm32h7_spi_read_rxfifo(spi); 893 915 /* 894 916 * If communication is suspended while using DMA, it means 895 917 * that something went wrong, so stop the current transfer ··· 910 932 911 933 if (sr & STM32H7_SPI_SR_EOT) { 912 934 if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0))) 913 - stm32h7_spi_read_rxfifo(spi, true); 914 - end = true; 935 + stm32h7_spi_read_rxfifo(spi); 936 + if (!spi->cur_usedma || 937 + (spi->cur_comm == SPI_SIMPLEX_TX || spi->cur_comm == SPI_3WIRE_TX)) 938 + end = true; 915 939 } 916 940 917 941 if (sr & STM32H7_SPI_SR_TXP) ··· 922 942 923 943 if (sr & STM32H7_SPI_SR_RXP) 924 944 if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0))) 925 - stm32h7_spi_read_rxfifo(spi, false); 945 + stm32h7_spi_read_rxfifo(spi); 926 946 927 947 writel_relaxed(sr & mask, spi->base + STM32H7_SPI_IFCR); 928 948 ··· 1021 1041 } 1022 1042 1023 1043 /** 1024 - * stm32f4_spi_dma_rx_cb - dma callback 1044 + * stm32_spi_dma_rx_cb - dma callback 1025 1045 * @data: pointer to the spi controller data structure 1026 1046 * 1027 1047 * DMA callback is called when the transfer is complete for DMA RX channel. 1028 1048 */ 1029 - static void stm32f4_spi_dma_rx_cb(void *data) 1049 + static void stm32_spi_dma_rx_cb(void *data) 1030 1050 { 1031 1051 struct stm32_spi *spi = data; 1032 1052 1033 1053 spi_finalize_current_transfer(spi->master); 1034 - stm32f4_spi_disable(spi); 1035 - } 1036 - 1037 - /** 1038 - * stm32h7_spi_dma_cb - dma callback 1039 - * @data: pointer to the spi controller data structure 1040 - * 1041 - * DMA callback is called when the transfer is complete or when an error 1042 - * occurs. If the transfer is complete, EOT flag is raised. 
1043 - */ 1044 - static void stm32h7_spi_dma_cb(void *data) 1045 - { 1046 - struct stm32_spi *spi = data; 1047 - unsigned long flags; 1048 - u32 sr; 1049 - 1050 - spin_lock_irqsave(&spi->lock, flags); 1051 - 1052 - sr = readl_relaxed(spi->base + STM32H7_SPI_SR); 1053 - 1054 - spin_unlock_irqrestore(&spi->lock, flags); 1055 - 1056 - if (!(sr & STM32H7_SPI_SR_EOT)) 1057 - dev_warn(spi->dev, "DMA error (sr=0x%08x)\n", sr); 1058 - 1059 - /* Now wait for EOT, or SUSP or OVR in case of error */ 1054 + spi->cfg->disable(spi); 1060 1055 } 1061 1056 1062 1057 /** ··· 1197 1242 */ 1198 1243 static void stm32h7_spi_transfer_one_dma_start(struct stm32_spi *spi) 1199 1244 { 1200 - /* Enable the interrupts relative to the end of transfer */ 1201 - stm32_spi_set_bits(spi, STM32H7_SPI_IER, STM32H7_SPI_IER_EOTIE | 1202 - STM32H7_SPI_IER_TXTFIE | 1203 - STM32H7_SPI_IER_OVRIE | 1204 - STM32H7_SPI_IER_MODFIE); 1245 + uint32_t ier = STM32H7_SPI_IER_OVRIE | STM32H7_SPI_IER_MODFIE; 1246 + 1247 + /* Enable the interrupts */ 1248 + if (spi->cur_comm == SPI_SIMPLEX_TX || spi->cur_comm == SPI_3WIRE_TX) 1249 + ier |= STM32H7_SPI_IER_EOTIE | STM32H7_SPI_IER_TXTFIE; 1250 + 1251 + stm32_spi_set_bits(spi, STM32H7_SPI_IER, ier); 1205 1252 1206 1253 stm32_spi_enable(spi); 1207 1254 ··· 1602 1645 struct stm32_spi *spi = spi_master_get_devdata(master); 1603 1646 int ret; 1604 1647 1605 - /* Don't do anything on 0 bytes transfers */ 1606 - if (transfer->len == 0) 1607 - return 0; 1608 - 1609 1648 spi->tx_buf = transfer->tx_buf; 1610 1649 spi->rx_buf = transfer->rx_buf; 1611 1650 spi->tx_len = spi->tx_buf ? 
transfer->len : 0; ··· 1715 1762 .set_mode = stm32f4_spi_set_mode, 1716 1763 .transfer_one_dma_start = stm32f4_spi_transfer_one_dma_start, 1717 1764 .dma_tx_cb = stm32f4_spi_dma_tx_cb, 1718 - .dma_rx_cb = stm32f4_spi_dma_rx_cb, 1765 + .dma_rx_cb = stm32_spi_dma_rx_cb, 1719 1766 .transfer_one_irq = stm32f4_spi_transfer_one_irq, 1720 1767 .irq_handler_event = stm32f4_spi_irq_event, 1721 1768 .irq_handler_thread = stm32f4_spi_irq_thread, ··· 1735 1782 .set_data_idleness = stm32h7_spi_data_idleness, 1736 1783 .set_number_of_data = stm32h7_spi_number_of_data, 1737 1784 .transfer_one_dma_start = stm32h7_spi_transfer_one_dma_start, 1738 - .dma_rx_cb = stm32h7_spi_dma_cb, 1739 - .dma_tx_cb = stm32h7_spi_dma_cb, 1785 + .dma_rx_cb = stm32_spi_dma_rx_cb, 1786 + /* 1787 + * dma_tx_cb is not necessary since in case of TX, dma is followed by 1788 + * SPI access hence handling is performed within the SPI interrupt 1789 + */ 1740 1790 .transfer_one_irq = stm32h7_spi_transfer_one_irq, 1741 1791 .irq_handler_thread = stm32h7_spi_irq_thread, 1742 1792 .baud_rate_div_min = STM32H7_SPI_MBR_DIV_MIN, ··· 1883 1927 if (spi->dma_tx || spi->dma_rx) 1884 1928 master->can_dma = stm32_spi_can_dma; 1885 1929 1930 + pm_runtime_set_autosuspend_delay(&pdev->dev, 1931 + STM32_SPI_AUTOSUSPEND_DELAY); 1932 + pm_runtime_use_autosuspend(&pdev->dev); 1886 1933 pm_runtime_set_active(&pdev->dev); 1887 1934 pm_runtime_get_noresume(&pdev->dev); 1888 1935 pm_runtime_enable(&pdev->dev); ··· 1897 1938 goto err_pm_disable; 1898 1939 } 1899 1940 1941 + pm_runtime_mark_last_busy(&pdev->dev); 1942 + pm_runtime_put_autosuspend(&pdev->dev); 1943 + 1900 1944 dev_info(&pdev->dev, "driver initialized\n"); 1901 1945 1902 1946 return 0; ··· 1908 1946 pm_runtime_disable(&pdev->dev); 1909 1947 pm_runtime_put_noidle(&pdev->dev); 1910 1948 pm_runtime_set_suspended(&pdev->dev); 1949 + pm_runtime_dont_use_autosuspend(&pdev->dev); 1911 1950 err_dma_release: 1912 1951 if (spi->dma_tx) 1913 1952 dma_release_channel(spi->dma_tx); 
··· 1933 1970 pm_runtime_disable(&pdev->dev); 1934 1971 pm_runtime_put_noidle(&pdev->dev); 1935 1972 pm_runtime_set_suspended(&pdev->dev); 1973 + pm_runtime_dont_use_autosuspend(&pdev->dev); 1974 + 1936 1975 if (master->dma_tx) 1937 1976 dma_release_channel(master->dma_tx); 1938 1977 if (master->dma_rx)
+4 -4
drivers/spi/spi-tegra114.c
··· 717 717 dma_release_channel(dma_chan); 718 718 } 719 719 720 - static int tegra_spi_set_hw_cs_timing(struct spi_device *spi, 721 - struct spi_delay *setup, 722 - struct spi_delay *hold, 723 - struct spi_delay *inactive) 720 + static int tegra_spi_set_hw_cs_timing(struct spi_device *spi) 724 721 { 725 722 struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master); 723 + struct spi_delay *setup = &spi->cs_setup; 724 + struct spi_delay *hold = &spi->cs_hold; 725 + struct spi_delay *inactive = &spi->cs_inactive; 726 726 u8 setup_dly, hold_dly, inactive_dly; 727 727 u32 setup_hold; 728 728 u32 spi_cs_timing;
+28 -49
drivers/spi/spi-tegra20-slink.c
··· 1061 1061 dev_err(&pdev->dev, "Can not get clock %d\n", ret); 1062 1062 goto exit_free_master; 1063 1063 } 1064 - ret = clk_prepare(tspi->clk); 1065 - if (ret < 0) { 1066 - dev_err(&pdev->dev, "Clock prepare failed %d\n", ret); 1067 - goto exit_free_master; 1068 - } 1069 - ret = clk_enable(tspi->clk); 1070 - if (ret < 0) { 1071 - dev_err(&pdev->dev, "Clock enable failed %d\n", ret); 1072 - goto exit_clk_unprepare; 1073 - } 1074 - 1075 - spi_irq = platform_get_irq(pdev, 0); 1076 - tspi->irq = spi_irq; 1077 - ret = request_threaded_irq(tspi->irq, tegra_slink_isr, 1078 - tegra_slink_isr_thread, IRQF_ONESHOT, 1079 - dev_name(&pdev->dev), tspi); 1080 - if (ret < 0) { 1081 - dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n", 1082 - tspi->irq); 1083 - goto exit_clk_disable; 1084 - } 1085 1064 1086 1065 tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi"); 1087 1066 if (IS_ERR(tspi->rst)) { 1088 1067 dev_err(&pdev->dev, "can not get reset\n"); 1089 1068 ret = PTR_ERR(tspi->rst); 1090 - goto exit_free_irq; 1069 + goto exit_free_master; 1091 1070 } 1092 1071 1093 1072 tspi->max_buf_size = SLINK_FIFO_DEPTH << 2; ··· 1074 1095 1075 1096 ret = tegra_slink_init_dma_param(tspi, true); 1076 1097 if (ret < 0) 1077 - goto exit_free_irq; 1098 + goto exit_free_master; 1078 1099 ret = tegra_slink_init_dma_param(tspi, false); 1079 1100 if (ret < 0) 1080 1101 goto exit_rx_dma_free; ··· 1085 1106 init_completion(&tspi->xfer_completion); 1086 1107 1087 1108 pm_runtime_enable(&pdev->dev); 1088 - if (!pm_runtime_enabled(&pdev->dev)) { 1089 - ret = tegra_slink_runtime_resume(&pdev->dev); 1090 - if (ret) 1091 - goto exit_pm_disable; 1092 - } 1093 - 1094 - ret = pm_runtime_get_sync(&pdev->dev); 1095 - if (ret < 0) { 1109 + ret = pm_runtime_resume_and_get(&pdev->dev); 1110 + if (ret) { 1096 1111 dev_err(&pdev->dev, "pm runtime get failed, e = %d\n", ret); 1097 - pm_runtime_put_noidle(&pdev->dev); 1098 1112 goto exit_pm_disable; 1099 1113 } 1100 1114 ··· 1095 1123 
udelay(2); 1096 1124 reset_control_deassert(tspi->rst); 1097 1125 1126 + spi_irq = platform_get_irq(pdev, 0); 1127 + tspi->irq = spi_irq; 1128 + ret = request_threaded_irq(tspi->irq, tegra_slink_isr, 1129 + tegra_slink_isr_thread, IRQF_ONESHOT, 1130 + dev_name(&pdev->dev), tspi); 1131 + if (ret < 0) { 1132 + dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n", 1133 + tspi->irq); 1134 + goto exit_pm_put; 1135 + } 1136 + 1098 1137 tspi->def_command_reg = SLINK_M_S; 1099 1138 tspi->def_command2_reg = SLINK_CS_ACTIVE_BETWEEN; 1100 1139 tegra_slink_writel(tspi, tspi->def_command_reg, SLINK_COMMAND); 1101 1140 tegra_slink_writel(tspi, tspi->def_command2_reg, SLINK_COMMAND2); 1102 - pm_runtime_put(&pdev->dev); 1103 1141 1104 1142 master->dev.of_node = pdev->dev.of_node; 1105 - ret = devm_spi_register_master(&pdev->dev, master); 1143 + ret = spi_register_master(master); 1106 1144 if (ret < 0) { 1107 1145 dev_err(&pdev->dev, "can not register to master err %d\n", ret); 1108 - goto exit_pm_disable; 1146 + goto exit_free_irq; 1109 1147 } 1148 + 1149 + pm_runtime_put(&pdev->dev); 1150 + 1110 1151 return ret; 1111 1152 1153 + exit_free_irq: 1154 + free_irq(spi_irq, tspi); 1155 + exit_pm_put: 1156 + pm_runtime_put(&pdev->dev); 1112 1157 exit_pm_disable: 1113 1158 pm_runtime_disable(&pdev->dev); 1114 - if (!pm_runtime_status_suspended(&pdev->dev)) 1115 - tegra_slink_runtime_suspend(&pdev->dev); 1159 + 1116 1160 tegra_slink_deinit_dma_param(tspi, false); 1117 1161 exit_rx_dma_free: 1118 1162 tegra_slink_deinit_dma_param(tspi, true); 1119 - exit_free_irq: 1120 - free_irq(spi_irq, tspi); 1121 - exit_clk_disable: 1122 - clk_disable(tspi->clk); 1123 - exit_clk_unprepare: 1124 - clk_unprepare(tspi->clk); 1125 1163 exit_free_master: 1126 1164 spi_master_put(master); 1127 1165 return ret; ··· 1142 1160 struct spi_master *master = platform_get_drvdata(pdev); 1143 1161 struct tegra_slink_data *tspi = spi_master_get_devdata(master); 1144 1162 1163 + spi_unregister_master(master); 1164 
+ 1145 1165 free_irq(tspi->irq, tspi); 1146 1166 1147 - clk_disable(tspi->clk); 1148 - clk_unprepare(tspi->clk); 1167 + pm_runtime_disable(&pdev->dev); 1149 1168 1150 1169 if (tspi->tx_dma_chan) 1151 1170 tegra_slink_deinit_dma_param(tspi, false); 1152 1171 1153 1172 if (tspi->rx_dma_chan) 1154 1173 tegra_slink_deinit_dma_param(tspi, true); 1155 - 1156 - pm_runtime_disable(&pdev->dev); 1157 - if (!pm_runtime_status_suspended(&pdev->dev)) 1158 - tegra_slink_runtime_suspend(&pdev->dev); 1159 1174 1160 1175 return 0; 1161 1176 }
+4 -4
drivers/spi/spi-zynq-qspi.c
··· 545 545 zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true); 546 546 zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET, 547 547 ZYNQ_QSPI_IXR_RXTX_MASK); 548 - if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion, 548 + if (!wait_for_completion_timeout(&xqspi->data_completion, 549 549 msecs_to_jiffies(1000))) 550 550 err = -ETIMEDOUT; 551 551 } ··· 563 563 zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true); 564 564 zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET, 565 565 ZYNQ_QSPI_IXR_RXTX_MASK); 566 - if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion, 566 + if (!wait_for_completion_timeout(&xqspi->data_completion, 567 567 msecs_to_jiffies(1000))) 568 568 err = -ETIMEDOUT; 569 569 } ··· 579 579 zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true); 580 580 zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET, 581 581 ZYNQ_QSPI_IXR_RXTX_MASK); 582 - if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion, 582 + if (!wait_for_completion_timeout(&xqspi->data_completion, 583 583 msecs_to_jiffies(1000))) 584 584 err = -ETIMEDOUT; 585 585 ··· 603 603 zynq_qspi_write_op(xqspi, ZYNQ_QSPI_FIFO_DEPTH, true); 604 604 zynq_qspi_write(xqspi, ZYNQ_QSPI_IEN_OFFSET, 605 605 ZYNQ_QSPI_IXR_RXTX_MASK); 606 - if (!wait_for_completion_interruptible_timeout(&xqspi->data_completion, 606 + if (!wait_for_completion_timeout(&xqspi->data_completion, 607 607 msecs_to_jiffies(1000))) 608 608 err = -ETIMEDOUT; 609 609 }
+3 -3
drivers/spi/spi.c
··· 846 846 if (spi->cs_gpiod || gpio_is_valid(spi->cs_gpio) || 847 847 !spi->controller->set_cs_timing) { 848 848 if (activate) 849 - spi_delay_exec(&spi->controller->cs_setup, NULL); 849 + spi_delay_exec(&spi->cs_setup, NULL); 850 850 else 851 - spi_delay_exec(&spi->controller->cs_hold, NULL); 851 + spi_delay_exec(&spi->cs_hold, NULL); 852 852 } 853 853 854 854 if (spi->mode & SPI_CS_HIGH) ··· 891 891 if (spi->cs_gpiod || gpio_is_valid(spi->cs_gpio) || 892 892 !spi->controller->set_cs_timing) { 893 893 if (!activate) 894 - spi_delay_exec(&spi->controller->cs_inactive, NULL); 894 + spi_delay_exec(&spi->cs_inactive, NULL); 895 895 } 896 896 } 897 897
+1
include/linux/platform_data/spi-mt65xx.h
··· 12 12 /* Board specific platform_data */ 13 13 struct mtk_chip_config { 14 14 u32 sample_sel; 15 + u32 tick_delay; 15 16 }; 16 17 #endif
+12 -14
include/linux/spi/spi.h
··· 147 147 * not using a GPIO line) 148 148 * @word_delay: delay to be inserted between consecutive 149 149 * words of a transfer 150 - * 150 + * @cs_setup: delay to be introduced by the controller after CS is asserted 151 + * @cs_hold: delay to be introduced by the controller before CS is deasserted 152 + * @cs_inactive: delay to be introduced by the controller after CS is 153 + * deasserted. If @cs_change_delay is used from @spi_transfer, then the 154 + * two delays will be added up. 151 155 * @statistics: statistics for the spi_device 152 156 * 153 157 * A @spi_device is used to interchange data between an SPI slave ··· 192 188 int cs_gpio; /* LEGACY: chip select gpio */ 193 189 struct gpio_desc *cs_gpiod; /* chip select gpio desc */ 194 190 struct spi_delay word_delay; /* inter-word delay */ 191 + /* CS delays */ 192 + struct spi_delay cs_setup; 193 + struct spi_delay cs_hold; 194 + struct spi_delay cs_inactive; 195 195 196 196 /* the statistics */ 197 197 struct spi_statistics statistics; ··· 347 339 * @max_speed_hz: Highest supported transfer speed 348 340 * @flags: other constraints relevant to this driver 349 341 * @slave: indicates that this is an SPI slave controller 342 + * @devm_allocated: whether the allocation of this struct is devres-managed 350 343 * @max_transfer_size: function that returns the max transfer size for 351 344 * a &spi_device; may be %NULL, so the default %SIZE_MAX will be used. 352 345 * @max_message_size: function that returns the max message size for ··· 421 412 * controller has native support for memory like operations. 422 413 * @unprepare_message: undo any work done by prepare_message(). 423 414 * @slave_abort: abort the ongoing transfer request on an SPI slave controller 424 - * @cs_setup: delay to be introduced by the controller after CS is asserted 425 - * @cs_hold: delay to be introduced by the controller before CS is deasserted 426 - * @cs_inactive: delay to be introduced by the controller after CS is 427 - * deasserted. 
If @cs_change_delay is used from @spi_transfer, then the 428 - * two delays will be added up. 429 415 * @cs_gpios: LEGACY: array of GPIO descs to use as chip select lines; one per 430 416 * CS number. Any individual value may be -ENOENT for CS lines that 431 417 * are not GPIOs (driven by the SPI controller itself). Use the cs_gpiods ··· 515 511 516 512 #define SPI_MASTER_GPIO_SS BIT(5) /* GPIO CS must select slave */ 517 513 518 - /* flag indicating this is a non-devres managed controller */ 514 + /* flag indicating if the allocation of this struct is devres-managed */ 519 515 bool devm_allocated; 520 516 521 517 /* flag indicating this is an SPI slave controller */ ··· 554 550 * to configure specific CS timing through spi_set_cs_timing() after 555 551 * spi_setup(). 556 552 */ 557 - int (*set_cs_timing)(struct spi_device *spi, struct spi_delay *setup, 558 - struct spi_delay *hold, struct spi_delay *inactive); 553 + int (*set_cs_timing)(struct spi_device *spi); 559 554 560 555 /* bidirectional bulk transfers 561 556 * ··· 640 637 641 638 /* Optimized handlers for SPI memory-like operations. */ 642 639 const struct spi_controller_mem_ops *mem_ops; 643 - 644 - /* CS delays */ 645 - struct spi_delay cs_setup; 646 - struct spi_delay cs_hold; 647 - struct spi_delay cs_inactive; 648 640 649 641 /* gpio chip select */ 650 642 int *cs_gpios;