Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'spi-v6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
"This release is almost entirely new drivers, with a couple of small
changes in generic code.

The biggest individual update is a rename of the existing Microchip
driver and the addition of a new driver for the silicon SPI controller
in their PolarFire SoCs. The overlap between the soft IP supported by
the current driver and this new one is regrettably all in the IP and
not in the register interface offered to software.

- Add a time offset parameter for offloads, allowing them to be
defined in relation to each other. This is useful for IIO type
applications where you trigger an operation then read the result
after a delay.

- Add a tracepoint for flash exec_ops, bringing the flash support
more in line with the debuggability of vanilla SPI.

- Support for Airoha EN7523, Arduino MCUs, Aspeed AST2700, Microchip
PolarFire SPI controllers, NXP i.MX51 ECSPI target mode, Qualcomm
IPQ5424 and IPQ5332, Renesas RZ/T2H, RZ/V2N and RZ/N2H, and SpacemiT
K1 QuadSPI.

There's also a small set of ASoC cleanups that I mistakenly applied to
the SPI tree and then put more stuff on top of before it was brought
to my attention, sorry about that"

* tag 'spi-v6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (80 commits)
spi: microchip-core: Refactor FIFO read and write handlers
spi: ch341: fix out-of-bounds memory access in ch341_transfer_one
spi: microchip-core: Remove unneeded PM related macro
spi: microchip-core: Use SPI_MODE_X_MASK
spi: microchip-core: Utilise temporary variable for struct device
spi: microchip-core: Replace dead code (-ENOMEM error message)
spi: microchip-core: use min() instead of min_t()
spi: dt-bindings: airoha: add compatible for EN7523
spi: airoha-snfi: en7523: workaround flash damaging if UART_TXD was short to GND
spi: dt-bindings: renesas,rzv2h-rspi: Document RZ/V2N SoC support
spi: microchip: Enable compile-testing for FPGA SPI controllers
spi: Fix potential uninitialized variable in probe()
spi: rzv2h-rspi: add support for RZ/T2H and RZ/N2H
spi: dt-bindings: renesas,rzv2h-rspi: document RZ/T2H and RZ/N2H
spi: rzv2h-rspi: add support for loopback mode
spi: rzv2h-rspi: add support for variable transfer clock
spi: rzv2h-rspi: add support for using PCLK for transfer clock
spi: rzv2h-rspi: make transfer clock rate finding chip-specific
spi: rzv2h-rspi: avoid recomputing transfer frequency
...

+2496 -908
+6 -1
Documentation/devicetree/bindings/spi/airoha,en7581-snand.yaml
···
 
 properties:
   compatible:
-    const: airoha,en7581-snand
+    oneOf:
+      - const: airoha,en7581-snand
+      - items:
+          - enum:
+              - airoha,en7523-snand
+          - const: airoha,en7581-snand
 
   reg:
     items:
+3 -1
Documentation/devicetree/bindings/spi/aspeed,ast2600-fmc.yaml
···
 
 description: |
   This binding describes the Aspeed Static Memory Controllers (FMC and
-  SPI) of the AST2400, AST2500 and AST2600 SOCs.
+  SPI) of the AST2400, AST2500, AST2600 and AST2700 SOCs.
 
 allOf:
   - $ref: spi-controller.yaml#
···
 properties:
   compatible:
     enum:
+      - aspeed,ast2700-fmc
+      - aspeed,ast2700-spi
       - aspeed,ast2600-fmc
       - aspeed,ast2600-spi
      - aspeed,ast2500-fmc
+18 -3
Documentation/devicetree/bindings/spi/fsl,spi-fsl-qspi.yaml
···
 maintainers:
   - Han Xu <han.xu@nxp.com>
 
-allOf:
-  - $ref: spi-controller.yaml#
-
 properties:
   compatible:
     oneOf:
···
           - fsl,imx6ul-qspi
           - fsl,ls1021a-qspi
           - fsl,ls2080a-qspi
+          - spacemit,k1-qspi
       - items:
           - enum:
               - fsl,ls1043a-qspi
···
       - const: qspi_en
       - const: qspi
 
+  resets:
+    items:
+      - description: SoC QSPI reset
+      - description: SoC QSPI bus reset
+
 required:
   - compatible
   - reg
···
   - interrupts
   - clocks
   - clock-names
+
+allOf:
+  - $ref: spi-controller.yaml#
+  - if:
+      properties:
+        compatible:
+          not:
+            contains:
+              const: spacemit,k1-qspi
+    then:
+      properties:
+        resets: false
 
 unevaluatedProperties: false
+68 -2
Documentation/devicetree/bindings/spi/microchip,mpfs-spi.yaml
···
           - microchip,mpfs-qspi
           - microchip,pic64gx-qspi
       - const: microchip,coreqspi-rtl-v2
-      - const: microchip,coreqspi-rtl-v2 # FPGA QSPI
+      - enum:
+          - microchip,coreqspi-rtl-v2 # FPGA QSPI
+          - microchip,corespi-rtl-v5 # FPGA CoreSPI
+          - microchip,mpfs-spi
       - items:
           - const: microchip,pic64gx-spi
           - const: microchip,mpfs-spi
-      - const: microchip,mpfs-spi
 
   reg:
     maxItems: 1
···
 
   clocks:
     maxItems: 1
+
+  microchip,apb-datawidth:
+    description: APB bus data width in bits.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    enum: [8, 16, 32]
+    default: 8
+
+  microchip,frame-size:
+    description: |
+      Number of bits per SPI frame, as configured in Libero.
+      In Motorola and TI modes, this corresponds directly
+      to the requested frame size. For NSC mode this is set
+      to 9 + the required data frame size.
+    $ref: /schemas/types.yaml#/definitions/uint32
+    minimum: 4
+    maximum: 32
+    default: 8
+
+  microchip,protocol-configuration:
+    description: CoreSPI protocol selection. Determines operating mode
+    $ref: /schemas/types.yaml#/definitions/string
+    enum:
+      - motorola
+      - ti
+      - nsc
+    default: motorola
+
+  microchip,motorola-mode:
+    description: Motorola SPI mode selection
+    $ref: /schemas/types.yaml#/definitions/uint32
+    enum: [0, 1, 2, 3]
+    default: 3
+
+  microchip,ssel-active:
+    description: |
+      Keep SSEL asserted between frames when using the Motorola protocol.
+      When present, the controller keeps SSEL active across contiguous
+      transfers and deasserts only when the overall transfer completes.
+    type: boolean
 
 required:
   - compatible
···
       properties:
         num-cs:
           maximum: 1
 
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: microchip,corespi-rtl-v5
+    then:
+      properties:
+        num-cs:
+          minimum: 1
+          maximum: 8
+          default: 8
+
+        fifo-depth:
+          minimum: 1
+          maximum: 32
+          default: 4
+
+    else:
+      properties:
+        microchip,apb-datawidth: false
+        microchip,frame-size: false
+        microchip,protocol-configuration: false
+        microchip,motorola-mode: false
+        microchip,ssel-active: false
 
 unevaluatedProperties: false
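Based on the binding additions above, a node for the soft CoreSPI controller might look like the following sketch (the unit address, interrupt number and clock phandle are placeholders for illustration, not values taken from the patch):

```dts
spi@60010000 {
	compatible = "microchip,corespi-rtl-v5";
	reg = <0x60010000 0x1000>;
	interrupts = <118>;
	clocks = <&fabric_clk>;
	num-cs = <1>;
	fifo-depth = <16>;
	microchip,apb-datawidth = <32>;
	microchip,frame-size = <8>;
	microchip,protocol-configuration = "motorola";
	microchip,motorola-mode = <3>;
	#address-cells = <1>;
	#size-cells = <0>;
};
```

Per the conditional schema, the `microchip,*` CoreSPI properties are only valid with the `microchip,corespi-rtl-v5` compatible, and all of them have defaults matching the Libero IP defaults, so a minimal node may omit them.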
-36
Documentation/devicetree/bindings/spi/nuvoton,npcm-pspi.txt
···
-Nuvoton NPCM Peripheral Serial Peripheral Interface(PSPI) controller driver
-
-Nuvoton NPCM7xx SOC support two PSPI channels.
-
-Required properties:
- - compatible : "nuvoton,npcm750-pspi" for Poleg NPCM7XX.
-                "nuvoton,npcm845-pspi" for Arbel NPCM8XX.
- - #address-cells : should be 1. see spi-bus.txt
- - #size-cells : should be 0. see spi-bus.txt
- - reg : specifies physical base address and size of the register.
- - interrupts : contain PSPI interrupt.
- - clocks : phandle of PSPI reference clock.
- - clock-names: Should be "clk_apb5".
- - pinctrl-names : a pinctrl state named "default" must be defined.
- - pinctrl-0 : phandle referencing pin configuration of the device.
- - resets : phandle to the reset control for this device.
- - cs-gpios: Specifies the gpio pins to be used for chipselects.
-             See: Documentation/devicetree/bindings/spi/spi-bus.txt
-
-Optional properties:
-- clock-frequency : Input clock frequency to the PSPI block in Hz.
-  Default is 25000000 Hz.
-
-spi0: spi@f0200000 {
-	compatible = "nuvoton,npcm750-pspi";
-	reg = <0xf0200000 0x1000>;
-	pinctrl-names = "default";
-	pinctrl-0 = <&pspi1_pins>;
-	#address-cells = <1>;
-	#size-cells = <0>;
-	interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
-	clocks = <&clk NPCM7XX_CLK_APB5>;
-	clock-names = "clk_apb5";
-	resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_PSPI1>
-	cs-gpios = <&gpio6 11 GPIO_ACTIVE_LOW>;
-};
+72
Documentation/devicetree/bindings/spi/nuvoton,npcm-pspi.yaml
···
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/spi/nuvoton,npcm-pspi.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Nuvoton NPCM Peripheral SPI (PSPI) Controller
+
+maintainers:
+  - Tomer Maimon <tmaimon77@gmail.com>
+
+allOf:
+  - $ref: spi-controller.yaml#
+
+description:
+  Nuvoton NPCM Peripheral Serial Peripheral Interface (PSPI) controller.
+  Nuvoton NPCM7xx SOC supports two PSPI channels.
+  Nuvoton NPCM8xx SOC support one PSPI channel.
+
+properties:
+  compatible:
+    enum:
+      - nuvoton,npcm750-pspi # Poleg NPCM7XX
+      - nuvoton,npcm845-pspi # Arbel NPCM8XX
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+    description: PSPI reference clock.
+
+  clock-names:
+    items:
+      - const: clk_apb5
+
+  resets:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/nuvoton,npcm7xx-clock.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/reset/nuvoton,npcm7xx-reset.h>
+    #include "dt-bindings/gpio/gpio.h"
+    spi0: spi@f0200000 {
+        compatible = "nuvoton,npcm750-pspi";
+        reg = <0xf0200000 0x1000>;
+        pinctrl-names = "default";
+        pinctrl-0 = <&pspi1_pins>;
+        #address-cells = <1>;
+        #size-cells = <0>;
+        interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+        clocks = <&clk NPCM7XX_CLK_APB5>;
+        clock-names = "clk_apb5";
+        resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_PSPI1>;
+        cs-gpios = <&gpio6 11 GPIO_ACTIVE_LOW>;
+    };
+
+2
Documentation/devicetree/bindings/spi/qcom,spi-qpic-snand.yaml
···
       - items:
           - enum:
               - qcom,ipq5018-snand
+              - qcom,ipq5332-snand
+              - qcom,ipq5424-snand
           - const: qcom,ipq9574-snand
       - const: qcom,ipq9574-snand
 
+55 -10
Documentation/devicetree/bindings/spi/renesas,rzv2h-rspi.yaml
···
 maintainers:
   - Fabrizio Castro <fabrizio.castro.jz@renesas.com>
 
-allOf:
-  - $ref: spi-controller.yaml#
-
 properties:
   compatible:
-    const: renesas,r9a09g057-rspi # RZ/V2H(P)
+    oneOf:
+      - enum:
+          - renesas,r9a09g057-rspi # RZ/V2H(P)
+          - renesas,r9a09g077-rspi # RZ/T2H
+      - items:
+          - const: renesas,r9a09g056-rspi # RZ/V2N
+          - const: renesas,r9a09g057-rspi
+      - items:
+          - const: renesas,r9a09g087-rspi # RZ/N2H
+          - const: renesas,r9a09g077-rspi # RZ/T2H
 
   reg:
     maxItems: 1
···
       - const: tx
 
   clocks:
+    minItems: 2
     maxItems: 3
 
   clock-names:
-    items:
-      - const: pclk
-      - const: pclk_sfr
-      - const: tclk
+    minItems: 2
+    maxItems: 3
 
   resets:
     maxItems: 2
···
   - interrupt-names
   - clocks
   - clock-names
-  - resets
-  - reset-names
   - power-domains
   - '#address-cells'
   - '#size-cells'
+
+allOf:
+  - $ref: spi-controller.yaml#
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - renesas,r9a09g057-rspi
+    then:
+      properties:
+        clocks:
+          minItems: 3
+
+        clock-names:
+          items:
+            - const: pclk
+            - const: pclk_sfr
+            - const: tclk
+
+      required:
+        - resets
+        - reset-names
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - renesas,r9a09g077-rspi
+    then:
+      properties:
+        clocks:
+          maxItems: 2
+
+        clock-names:
+          items:
+            - const: pclk
+            - const: pclkspi
+
+        resets: false
+        reset-names: false
 
 unevaluatedProperties: false
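Going by the conditional schema above, an RZ/T2H node uses two clocks named pclk/pclkspi and must not carry resets. A sketch of such a node (the unit address, interrupt numbers and clock specifiers are illustrative placeholders, not taken from the patch):

```dts
rspi0: spi@80286000 {
	compatible = "renesas,r9a09g077-rspi";
	reg = <0x80286000 0x400>;
	interrupts = <GIC_SPI 276 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 277 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 278 IRQ_TYPE_LEVEL_HIGH>;
	interrupt-names = "error", "rx", "tx";
	clocks = <&cpg 100>, <&cpg 101>;
	clock-names = "pclk", "pclkspi";
	power-domains = <&cpg>;
	#address-cells = <1>;
	#size-cells = <0>;
};
```

An RZ/V2H(P) node, by contrast, needs all three clocks (pclk, pclk_sfr, tclk) plus the resets/reset-names pair that the first if/then branch makes mandatory.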
+1 -1
Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
···
     provides an interface to override the native DWC SSI CS control.
 
 patternProperties:
-  "^.*@[0-9a-f]+$":
+  "@[0-9a-f]+$":
     type: object
     additionalProperties: true
 
+1
Documentation/devicetree/bindings/spi/spi-cadence.yaml
···
       - enum:
           - xlnx,zynqmp-spi-r1p6
           - xlnx,versal-net-spi-r1p6
+          - cix,sky1-spi-r1p6
       - const: cdns,spi-r1p6
 
   reg:
+1 -1
Documentation/devicetree/bindings/spi/spi-controller.yaml
···
   - compatible
 
 patternProperties:
-  "^.*@[0-9a-f]+$":
+  "@[0-9a-f]+$":
     type: object
     $ref: spi-peripheral-props.yaml
     additionalProperties: true
+2
Documentation/devicetree/bindings/trivial-devices.yaml
···
       - adi,lt7182s
       # AMS iAQ-Core VOC Sensor
       - ams,iaq-core
+      # Arduino microcontroller interface over SPI on UnoQ board
+      - arduino,unoq-mcu
       # Temperature monitoring of Astera Labs PT5161L PCIe retimer
       - asteralabs,pt5161l
       # i2c h/w elliptic curve crypto module
+2 -1
MAINTAINERS
···
 F:	drivers/rtc/rtc-mpfs.c
 F:	drivers/soc/microchip/mpfs-sys-controller.c
 F:	drivers/spi/spi-microchip-core-qspi.c
-F:	drivers/spi/spi-microchip-core.c
+F:	drivers/spi/spi-mpfs.c
 F:	drivers/usb/musb/mpfs.c
 F:	include/soc/microchip/mpfs.h
···
 F:	Documentation/devicetree/bindings/spi/
 F:	Documentation/spi/
 F:	drivers/spi/
+F:	include/trace/events/spi*
 F:	include/linux/spi/
 F:	include/uapi/linux/spi/
 F:	tools/spi/
+21 -10
drivers/spi/Kconfig
···
 config SPI_FSL_QUADSPI
 	tristate "Freescale QSPI controller"
-	depends on ARCH_MXC || SOC_LS1021A || ARCH_LAYERSCAPE || COMPILE_TEST
+	depends on ARCH_MXC || SOC_LS1021A || ARCH_LAYERSCAPE || \
+		   ARCH_SPACEMIT || COMPILE_TEST
 	depends on HAS_IOMEM
 	help
 	  This enables support for the Quad SPI controller in master mode.
···
 	  This enables master mode support for the SPIFC (SPI flash
 	  controller) available in Amlogic Meson SoCs.
 
-config SPI_MICROCHIP_CORE
-	tristate "Microchip FPGA SPI controllers"
-	depends on SPI_MASTER
-	help
-	  This enables the SPI driver for Microchip FPGA SPI controllers.
-	  Say Y or M here if you want to use the "hard" controllers on
-	  PolarFire SoC.
-	  If built as a module, it will be called spi-microchip-core.
-
 config SPI_MICROCHIP_CORE_QSPI
 	tristate "Microchip FPGA QSPI controllers"
 	depends on SPI_MASTER
···
 	  Say Y or M here if you want to use the QSPI controllers on
 	  PolarFire SoC.
 	  If built as a module, it will be called spi-microchip-core-qspi.
+
+config SPI_MICROCHIP_CORE_SPI
+	tristate "Microchip FPGA CoreSPI controller"
+	depends on SPI_MASTER
+	help
+	  This enables the SPI driver for Microchip FPGA CoreSPI controller.
+	  Say Y or M here if you want to use the "soft" controllers on
+	  PolarFire SoC.
+	  If built as a module, it will be called spi-microchip-core-spi.
 
 config SPI_MT65XX
 	tristate "MediaTek SPI controller"
···
 	  This selects the ARM(R) AMBA(R) PrimeCell PL022 SSP
 	  controller. If you have an embedded system with an AMBA(R)
 	  bus and a PL022 controller, say Y or M here.
+
+config SPI_POLARFIRE_SOC
+	tristate "Microchip FPGA SPI controllers"
+	depends on SPI_MASTER
+	depends on ARCH_MICROCHIP || COMPILE_TEST
+	help
+	  This enables the SPI driver for Microchip FPGA SPI controllers.
+	  Say Y or M here if you want to use the "hard" controllers on
+	  PolarFire SoC.
+	  If built as a module, it will be called spi-mpfs.
 
 config SPI_PPC4xx
 	tristate "PPC4xx SPI Controller"
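Given the renamed and new symbols above, a .config fragment that builds all three PolarFire SPI drivers as modules would look like this (a sketch; the option names come from the Kconfig diff, and SPI_MICROCHIP_CORE no longer exists after the rename):

```text
CONFIG_SPI_POLARFIRE_SOC=m
CONFIG_SPI_MICROCHIP_CORE_SPI=m
CONFIG_SPI_MICROCHIP_CORE_QSPI=m
```

Note the resulting module names also differ: the "hard" controller driver is now spi-mpfs.ko, while the soft CoreSPI driver builds as spi-microchip-core-spi.ko.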
+2 -1
drivers/spi/Makefile
···
 obj-$(CONFIG_SPI_LP8841_RTC)		+= spi-lp8841-rtc.o
 obj-$(CONFIG_SPI_MESON_SPICC)		+= spi-meson-spicc.o
 obj-$(CONFIG_SPI_MESON_SPIFC)		+= spi-meson-spifc.o
-obj-$(CONFIG_SPI_MICROCHIP_CORE)	+= spi-microchip-core.o
 obj-$(CONFIG_SPI_MICROCHIP_CORE_QSPI)	+= spi-microchip-core-qspi.o
+obj-$(CONFIG_SPI_MICROCHIP_CORE_SPI)	+= spi-microchip-core-spi.o
 obj-$(CONFIG_SPI_MPC512x_PSC)		+= spi-mpc512x-psc.o
 obj-$(CONFIG_SPI_MPC52xx_PSC)		+= spi-mpc52xx-psc.o
 obj-$(CONFIG_SPI_MPC52xx)		+= spi-mpc52xx.o
···
 obj-$(CONFIG_SPI_MTK_SNFI)		+= spi-mtk-snfi.o
 obj-$(CONFIG_SPI_MXIC)			+= spi-mxic.o
 obj-$(CONFIG_SPI_MXS)			+= spi-mxs.o
+obj-$(CONFIG_SPI_POLARFIRE_SOC)		+= spi-mpfs.o
 obj-$(CONFIG_SPI_WPCM_FIU)		+= spi-wpcm-fiu.o
 obj-$(CONFIG_SPI_NPCM_FIU)		+= spi-npcm-fiu.o
 obj-$(CONFIG_SPI_NPCM_PSPI)		+= spi-npcm-pspi.o
+194 -218
drivers/spi/spi-airoha-snfi.c
··· 147 147 #define SPI_NFI_CUS_SEC_SIZE_EN BIT(16) 148 148 149 149 #define REG_SPI_NFI_RD_CTL2 0x0510 150 + #define SPI_NFI_DATA_READ_CMD GENMASK(7, 0) 151 + 150 152 #define REG_SPI_NFI_RD_CTL3 0x0514 151 153 152 154 #define REG_SPI_NFI_PG_CTL1 0x0524 ··· 181 179 #define SPI_NAND_OP_READ_FROM_CACHE_SINGLE 0x03 182 180 #define SPI_NAND_OP_READ_FROM_CACHE_SINGLE_FAST 0x0b 183 181 #define SPI_NAND_OP_READ_FROM_CACHE_DUAL 0x3b 182 + #define SPI_NAND_OP_READ_FROM_CACHE_DUALIO 0xbb 184 183 #define SPI_NAND_OP_READ_FROM_CACHE_QUAD 0x6b 184 + #define SPI_NAND_OP_READ_FROM_CACHE_QUADIO 0xeb 185 185 #define SPI_NAND_OP_WRITE_ENABLE 0x06 186 186 #define SPI_NAND_OP_WRITE_DISABLE 0x04 187 187 #define SPI_NAND_OP_PROGRAM_LOAD_SINGLE 0x02 ··· 223 219 struct regmap *regmap_ctrl; 224 220 struct regmap *regmap_nfi; 225 221 struct clk *spi_clk; 226 - 227 - struct { 228 - size_t page_size; 229 - size_t sec_size; 230 - u8 sec_num; 231 - u8 spare_size; 232 - } nfi_cfg; 233 222 }; 234 223 235 224 static int airoha_snand_set_fifo_op(struct airoha_snand_ctrl *as_ctrl, ··· 483 486 SPI_NFI_ALL_IRQ_EN, SPI_NFI_AHB_DONE_EN); 484 487 } 485 488 486 - static int airoha_snand_nfi_config(struct airoha_snand_ctrl *as_ctrl) 487 - { 488 - int err; 489 - u32 val; 490 - 491 - err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_CON, 492 - SPI_NFI_FIFO_FLUSH | SPI_NFI_RST); 493 - if (err) 494 - return err; 495 - 496 - /* auto FDM */ 497 - err = regmap_clear_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 498 - SPI_NFI_AUTO_FDM_EN); 499 - if (err) 500 - return err; 501 - 502 - /* HW ECC */ 503 - err = regmap_clear_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 504 - SPI_NFI_HW_ECC_EN); 505 - if (err) 506 - return err; 507 - 508 - /* DMA Burst */ 509 - err = regmap_set_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 510 - SPI_NFI_DMA_BURST_EN); 511 - if (err) 512 - return err; 513 - 514 - /* page format */ 515 - switch (as_ctrl->nfi_cfg.spare_size) { 516 - case 26: 517 - val = FIELD_PREP(SPI_NFI_SPARE_SIZE, 0x1); 
518 - break; 519 - case 27: 520 - val = FIELD_PREP(SPI_NFI_SPARE_SIZE, 0x2); 521 - break; 522 - case 28: 523 - val = FIELD_PREP(SPI_NFI_SPARE_SIZE, 0x3); 524 - break; 525 - default: 526 - val = FIELD_PREP(SPI_NFI_SPARE_SIZE, 0x0); 527 - break; 528 - } 529 - 530 - err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_PAGEFMT, 531 - SPI_NFI_SPARE_SIZE, val); 532 - if (err) 533 - return err; 534 - 535 - switch (as_ctrl->nfi_cfg.page_size) { 536 - case 2048: 537 - val = FIELD_PREP(SPI_NFI_PAGE_SIZE, 0x1); 538 - break; 539 - case 4096: 540 - val = FIELD_PREP(SPI_NFI_PAGE_SIZE, 0x2); 541 - break; 542 - default: 543 - val = FIELD_PREP(SPI_NFI_PAGE_SIZE, 0x0); 544 - break; 545 - } 546 - 547 - err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_PAGEFMT, 548 - SPI_NFI_PAGE_SIZE, val); 549 - if (err) 550 - return err; 551 - 552 - /* sec num */ 553 - val = FIELD_PREP(SPI_NFI_SEC_NUM, as_ctrl->nfi_cfg.sec_num); 554 - err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CON, 555 - SPI_NFI_SEC_NUM, val); 556 - if (err) 557 - return err; 558 - 559 - /* enable cust sec size */ 560 - err = regmap_set_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SECCUS_SIZE, 561 - SPI_NFI_CUS_SEC_SIZE_EN); 562 - if (err) 563 - return err; 564 - 565 - /* set cust sec size */ 566 - val = FIELD_PREP(SPI_NFI_CUS_SEC_SIZE, as_ctrl->nfi_cfg.sec_size); 567 - return regmap_update_bits(as_ctrl->regmap_nfi, 568 - REG_SPI_NFI_SECCUS_SIZE, 569 - SPI_NFI_CUS_SEC_SIZE, val); 570 - } 571 - 572 489 static bool airoha_snand_is_page_ops(const struct spi_mem_op *op) 573 490 { 574 491 if (op->addr.nbytes != 2) ··· 513 602 default: 514 603 return false; 515 604 } 516 - } 517 - 518 - static int airoha_snand_adjust_op_size(struct spi_mem *mem, 519 - struct spi_mem_op *op) 520 - { 521 - size_t max_len; 522 - 523 - if (airoha_snand_is_page_ops(op)) { 524 - struct airoha_snand_ctrl *as_ctrl; 525 - 526 - as_ctrl = spi_controller_get_devdata(mem->spi->controller); 527 - max_len = as_ctrl->nfi_cfg.sec_size; 528 - max_len 
+= as_ctrl->nfi_cfg.spare_size; 529 - max_len *= as_ctrl->nfi_cfg.sec_num; 530 - 531 - if (op->data.nbytes > max_len) 532 - op->data.nbytes = max_len; 533 - } else { 534 - max_len = 1 + op->addr.nbytes + op->dummy.nbytes; 535 - if (max_len >= 160) 536 - return -EOPNOTSUPP; 537 - 538 - if (op->data.nbytes > 160 - max_len) 539 - op->data.nbytes = 160 - max_len; 540 - } 541 - 542 - return 0; 543 605 } 544 606 545 607 static bool airoha_snand_supports_op(struct spi_mem *mem, ··· 555 671 static ssize_t airoha_snand_dirmap_read(struct spi_mem_dirmap_desc *desc, 556 672 u64 offs, size_t len, void *buf) 557 673 { 558 - struct spi_mem_op *op = &desc->info.op_tmpl; 559 674 struct spi_device *spi = desc->mem->spi; 560 675 struct airoha_snand_ctrl *as_ctrl; 561 676 u8 *txrx_buf = spi_get_ctldata(spi); 562 677 dma_addr_t dma_addr; 563 - u32 val, rd_mode; 678 + u32 val, rd_mode, opcode; 679 + size_t bytes; 564 680 int err; 565 681 566 - switch (op->cmd.opcode) { 682 + as_ctrl = spi_controller_get_devdata(spi->controller); 683 + 684 + /* minimum oob size is 64 */ 685 + bytes = round_up(offs + len, 64); 686 + 687 + /* 688 + * DUALIO and QUADIO opcodes are not supported by the spi controller, 689 + * replace them with supported opcodes. 
690 + */ 691 + opcode = desc->info.op_tmpl.cmd.opcode; 692 + switch (opcode) { 693 + case SPI_NAND_OP_READ_FROM_CACHE_SINGLE: 694 + case SPI_NAND_OP_READ_FROM_CACHE_SINGLE_FAST: 695 + rd_mode = 0; 696 + break; 567 697 case SPI_NAND_OP_READ_FROM_CACHE_DUAL: 698 + case SPI_NAND_OP_READ_FROM_CACHE_DUALIO: 699 + opcode = SPI_NAND_OP_READ_FROM_CACHE_DUAL; 568 700 rd_mode = 1; 569 701 break; 570 702 case SPI_NAND_OP_READ_FROM_CACHE_QUAD: 703 + case SPI_NAND_OP_READ_FROM_CACHE_QUADIO: 704 + opcode = SPI_NAND_OP_READ_FROM_CACHE_QUAD; 571 705 rd_mode = 2; 572 706 break; 573 707 default: 574 - rd_mode = 0; 575 - break; 708 + /* unknown opcode */ 709 + return -EOPNOTSUPP; 576 710 } 577 711 578 - as_ctrl = spi_controller_get_devdata(spi->controller); 579 712 err = airoha_snand_set_mode(as_ctrl, SPI_MODE_DMA); 580 713 if (err < 0) 581 714 return err; 582 715 583 - err = airoha_snand_nfi_config(as_ctrl); 716 + /* NFI reset */ 717 + err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_CON, 718 + SPI_NFI_FIFO_FLUSH | SPI_NFI_RST); 719 + if (err) 720 + goto error_dma_mode_off; 721 + 722 + /* NFI configure: 723 + * - No AutoFDM (custom sector size (SECCUS) register will be used) 724 + * - No SoC's hardware ECC (flash internal ECC will be used) 725 + * - Use burst mode (faster, but requires 16 byte alignment for addresses) 726 + * - Setup for reading (SPI_NFI_READ_MODE) 727 + * - Setup reading command: FIELD_PREP(SPI_NFI_OPMODE, 6) 728 + * - Use DMA instead of PIO for data reading 729 + */ 730 + err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 731 + SPI_NFI_DMA_MODE | 732 + SPI_NFI_READ_MODE | 733 + SPI_NFI_DMA_BURST_EN | 734 + SPI_NFI_HW_ECC_EN | 735 + SPI_NFI_AUTO_FDM_EN | 736 + SPI_NFI_OPMODE, 737 + SPI_NFI_DMA_MODE | 738 + SPI_NFI_READ_MODE | 739 + SPI_NFI_DMA_BURST_EN | 740 + FIELD_PREP(SPI_NFI_OPMODE, 6)); 741 + if (err) 742 + goto error_dma_mode_off; 743 + 744 + /* Set number of sector will be read */ 745 + err = regmap_update_bits(as_ctrl->regmap_nfi, 
REG_SPI_NFI_CON, 746 + SPI_NFI_SEC_NUM, 747 + FIELD_PREP(SPI_NFI_SEC_NUM, 1)); 748 + if (err) 749 + goto error_dma_mode_off; 750 + 751 + /* Set custom sector size */ 752 + err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SECCUS_SIZE, 753 + SPI_NFI_CUS_SEC_SIZE | 754 + SPI_NFI_CUS_SEC_SIZE_EN, 755 + FIELD_PREP(SPI_NFI_CUS_SEC_SIZE, bytes) | 756 + SPI_NFI_CUS_SEC_SIZE_EN); 584 757 if (err) 585 758 goto error_dma_mode_off; 586 759 ··· 653 712 if (err) 654 713 goto error_dma_unmap; 655 714 656 - /* set cust sec size */ 657 - val = as_ctrl->nfi_cfg.sec_size * as_ctrl->nfi_cfg.sec_num; 658 - val = FIELD_PREP(SPI_NFI_READ_DATA_BYTE_NUM, val); 715 + /* 716 + * Setup transfer length 717 + * --------------------- 718 + * The following rule MUST be met: 719 + * transfer_length = 720 + * = NFI_SNF_MISC_CTL2.read_data_byte_number = 721 + * = NFI_CON.sector_number * NFI_SECCUS.custom_sector_size 722 + */ 659 723 err = regmap_update_bits(as_ctrl->regmap_nfi, 660 724 REG_SPI_NFI_SNF_MISC_CTL2, 661 - SPI_NFI_READ_DATA_BYTE_NUM, val); 725 + SPI_NFI_READ_DATA_BYTE_NUM, 726 + FIELD_PREP(SPI_NFI_READ_DATA_BYTE_NUM, bytes)); 662 727 if (err) 663 728 goto error_dma_unmap; 664 729 665 730 /* set read command */ 666 731 err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_RD_CTL2, 667 - op->cmd.opcode); 732 + FIELD_PREP(SPI_NFI_DATA_READ_CMD, opcode)); 668 733 if (err) 669 734 goto error_dma_unmap; 670 735 ··· 686 739 if (err) 687 740 goto error_dma_unmap; 688 741 689 - /* set nfi read */ 690 - err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 691 - SPI_NFI_OPMODE, 692 - FIELD_PREP(SPI_NFI_OPMODE, 6)); 693 - if (err) 694 - goto error_dma_unmap; 695 - 696 - err = regmap_set_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 697 - SPI_NFI_READ_MODE | SPI_NFI_DMA_MODE); 698 - if (err) 699 - goto error_dma_unmap; 700 - 701 742 err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_CMD, 0x0); 702 743 if (err) 703 744 goto error_dma_unmap; 704 745 705 - /* trigger dma start read */ 
746 + /* trigger dma reading */ 706 747 err = regmap_clear_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CON, 707 748 SPI_NFI_RD_TRIG); 708 749 if (err) ··· 748 813 static ssize_t airoha_snand_dirmap_write(struct spi_mem_dirmap_desc *desc, 749 814 u64 offs, size_t len, const void *buf) 750 815 { 751 - struct spi_mem_op *op = &desc->info.op_tmpl; 752 816 struct spi_device *spi = desc->mem->spi; 753 817 u8 *txrx_buf = spi_get_ctldata(spi); 754 818 struct airoha_snand_ctrl *as_ctrl; 755 819 dma_addr_t dma_addr; 756 - u32 wr_mode, val; 820 + u32 wr_mode, val, opcode; 821 + size_t bytes; 757 822 int err; 758 823 759 824 as_ctrl = spi_controller_get_devdata(spi->controller); 760 - err = airoha_snand_set_mode(as_ctrl, SPI_MODE_MANUAL); 825 + 826 + /* minimum oob size is 64 */ 827 + bytes = round_up(offs + len, 64); 828 + 829 + opcode = desc->info.op_tmpl.cmd.opcode; 830 + switch (opcode) { 831 + case SPI_NAND_OP_PROGRAM_LOAD_SINGLE: 832 + case SPI_NAND_OP_PROGRAM_LOAD_RAMDOM_SINGLE: 833 + wr_mode = 0; 834 + break; 835 + case SPI_NAND_OP_PROGRAM_LOAD_QUAD: 836 + case SPI_NAND_OP_PROGRAM_LOAD_RAMDON_QUAD: 837 + wr_mode = 2; 838 + break; 839 + default: 840 + /* unknown opcode */ 841 + return -EOPNOTSUPP; 842 + } 843 + 844 + if (offs > 0) 845 + memset(txrx_buf, 0xff, offs); 846 + memcpy(txrx_buf + offs, buf, len); 847 + if (bytes > offs + len) 848 + memset(txrx_buf + offs + len, 0xff, bytes - offs - len); 849 + 850 + err = airoha_snand_set_mode(as_ctrl, SPI_MODE_DMA); 761 851 if (err < 0) 762 852 return err; 763 853 764 - memcpy(txrx_buf + offs, buf, len); 854 + /* NFI reset */ 855 + err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_CON, 856 + SPI_NFI_FIFO_FLUSH | SPI_NFI_RST); 857 + if (err) 858 + goto error_dma_mode_off; 859 + 860 + /* 861 + * NFI configure: 862 + * - No AutoFDM (custom sector size (SECCUS) register will be used) 863 + * - No SoC's hardware ECC (flash internal ECC will be used) 864 + * - Use burst mode (faster, but requires 16 byte alignment for addresses) 865 + 
* - Setup for writing (SPI_NFI_READ_MODE bit is cleared) 866 + * - Setup writing command: FIELD_PREP(SPI_NFI_OPMODE, 3) 867 + * - Use DMA instead of PIO for data writing 868 + */ 869 + err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 870 + SPI_NFI_DMA_MODE | 871 + SPI_NFI_READ_MODE | 872 + SPI_NFI_DMA_BURST_EN | 873 + SPI_NFI_HW_ECC_EN | 874 + SPI_NFI_AUTO_FDM_EN | 875 + SPI_NFI_OPMODE, 876 + SPI_NFI_DMA_MODE | 877 + SPI_NFI_DMA_BURST_EN | 878 + FIELD_PREP(SPI_NFI_OPMODE, 3)); 879 + if (err) 880 + goto error_dma_mode_off; 881 + 882 + /* Set number of sector will be written */ 883 + err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CON, 884 + SPI_NFI_SEC_NUM, 885 + FIELD_PREP(SPI_NFI_SEC_NUM, 1)); 886 + if (err) 887 + goto error_dma_mode_off; 888 + 889 + /* Set custom sector size */ 890 + err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_SECCUS_SIZE, 891 + SPI_NFI_CUS_SEC_SIZE | 892 + SPI_NFI_CUS_SEC_SIZE_EN, 893 + FIELD_PREP(SPI_NFI_CUS_SEC_SIZE, bytes) | 894 + SPI_NFI_CUS_SEC_SIZE_EN); 895 + if (err) 896 + goto error_dma_mode_off; 897 + 765 898 dma_addr = dma_map_single(as_ctrl->dev, txrx_buf, SPI_NAND_CACHE_SIZE, 766 899 DMA_TO_DEVICE); 767 900 err = dma_mapping_error(as_ctrl->dev, dma_addr); 768 901 if (err) 769 - return err; 902 + goto error_dma_mode_off; 770 903 771 - err = airoha_snand_set_mode(as_ctrl, SPI_MODE_DMA); 772 - if (err < 0) 773 - goto error_dma_unmap; 774 - 775 - err = airoha_snand_nfi_config(as_ctrl); 776 - if (err) 777 - goto error_dma_unmap; 778 - 779 - if (op->cmd.opcode == SPI_NAND_OP_PROGRAM_LOAD_QUAD || 780 - op->cmd.opcode == SPI_NAND_OP_PROGRAM_LOAD_RAMDON_QUAD) 781 - wr_mode = BIT(1); 782 - else 783 - wr_mode = 0; 784 - 904 + /* set dma addr */ 785 905 err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_STRADDR, 786 906 dma_addr); 787 907 if (err) 788 908 goto error_dma_unmap; 789 909 790 - val = FIELD_PREP(SPI_NFI_PROG_LOAD_BYTE_NUM, 791 - as_ctrl->nfi_cfg.sec_size * as_ctrl->nfi_cfg.sec_num); 910 + /* 
911 + * Setup transfer length 912 + * --------------------- 913 + * The following rule MUST be met: 914 + * transfer_length = 915 + * = NFI_SNF_MISC_CTL2.write_data_byte_number = 916 + * = NFI_CON.sector_number * NFI_SECCUS.custom_sector_size 917 + */ 792 918 err = regmap_update_bits(as_ctrl->regmap_nfi, 793 919 REG_SPI_NFI_SNF_MISC_CTL2, 794 - SPI_NFI_PROG_LOAD_BYTE_NUM, val); 920 + SPI_NFI_PROG_LOAD_BYTE_NUM, 921 + FIELD_PREP(SPI_NFI_PROG_LOAD_BYTE_NUM, bytes)); 795 922 if (err) 796 923 goto error_dma_unmap; 797 924 925 + /* set write command */ 798 926 err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_PG_CTL1, 799 - FIELD_PREP(SPI_NFI_PG_LOAD_CMD, 800 - op->cmd.opcode)); 927 + FIELD_PREP(SPI_NFI_PG_LOAD_CMD, opcode)); 801 928 if (err) 802 929 goto error_dma_unmap; 803 930 931 + /* set write mode */ 804 932 err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_SNF_MISC_CTL, 805 933 FIELD_PREP(SPI_NFI_DATA_READ_WR_MODE, wr_mode)); 806 934 if (err) ··· 875 877 if (err) 876 878 goto error_dma_unmap; 877 879 878 - err = regmap_clear_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 879 - SPI_NFI_READ_MODE); 880 - if (err) 881 - goto error_dma_unmap; 882 - 883 - err = regmap_update_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 884 - SPI_NFI_OPMODE, 885 - FIELD_PREP(SPI_NFI_OPMODE, 3)); 886 - if (err) 887 - goto error_dma_unmap; 888 - 889 - err = regmap_set_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CNFG, 890 - SPI_NFI_DMA_MODE); 891 - if (err) 892 - goto error_dma_unmap; 893 - 894 880 err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_CMD, 0x80); 895 881 if (err) 896 882 goto error_dma_unmap; 897 883 884 + /* trigger dma writing */ 898 885 err = regmap_clear_bits(as_ctrl->regmap_nfi, REG_SPI_NFI_CON, 899 886 SPI_NFI_WR_TRIG); 900 887 if (err) ··· 924 941 error_dma_unmap: 925 942 dma_unmap_single(as_ctrl->dev, dma_addr, SPI_NAND_CACHE_SIZE, 926 943 DMA_TO_DEVICE); 944 + error_dma_mode_off: 927 945 airoha_snand_set_mode(as_ctrl, SPI_MODE_MANUAL); 928 946 return err; 929 947 } ··· 
1006 1022 } 1007 1023 1008 1024 static const struct spi_controller_mem_ops airoha_snand_mem_ops = { 1009 - .adjust_op_size = airoha_snand_adjust_op_size, 1010 1025 .supports_op = airoha_snand_supports_op, 1011 1026 .exec_op = airoha_snand_exec_op, 1012 1027 .dirmap_create = airoha_snand_dirmap_create, 1013 1028 .dirmap_read = airoha_snand_dirmap_read, 1014 1029 .dirmap_write = airoha_snand_dirmap_write, 1030 + }; 1031 + 1032 + static const struct spi_controller_mem_ops airoha_snand_nodma_mem_ops = { 1033 + .supports_op = airoha_snand_supports_op, 1034 + .exec_op = airoha_snand_exec_op, 1015 1035 }; 1016 1036 1017 1037 static int airoha_snand_setup(struct spi_device *spi) ··· 1033 1045 spi_set_ctldata(spi, txrx_buf); 1034 1046 1035 1047 return 0; 1036 - } 1037 - 1038 - static int airoha_snand_nfi_setup(struct airoha_snand_ctrl *as_ctrl) 1039 - { 1040 - u32 val, sec_size, sec_num; 1041 - int err; 1042 - 1043 - err = regmap_read(as_ctrl->regmap_nfi, REG_SPI_NFI_CON, &val); 1044 - if (err) 1045 - return err; 1046 - 1047 - sec_num = FIELD_GET(SPI_NFI_SEC_NUM, val); 1048 - 1049 - err = regmap_read(as_ctrl->regmap_nfi, REG_SPI_NFI_SECCUS_SIZE, &val); 1050 - if (err) 1051 - return err; 1052 - 1053 - sec_size = FIELD_GET(SPI_NFI_CUS_SEC_SIZE, val); 1054 - 1055 - /* init default value */ 1056 - as_ctrl->nfi_cfg.sec_size = sec_size; 1057 - as_ctrl->nfi_cfg.sec_num = sec_num; 1058 - as_ctrl->nfi_cfg.page_size = round_down(sec_size * sec_num, 1024); 1059 - as_ctrl->nfi_cfg.spare_size = 16; 1060 - 1061 - err = airoha_snand_nfi_init(as_ctrl); 1062 - if (err) 1063 - return err; 1064 - 1065 - return airoha_snand_nfi_config(as_ctrl); 1066 1048 } 1067 1049 1068 1050 static const struct regmap_config spi_ctrl_regmap_config = { ··· 1062 1104 struct airoha_snand_ctrl *as_ctrl; 1063 1105 struct device *dev = &pdev->dev; 1064 1106 struct spi_controller *ctrl; 1107 + bool dma_enable = true; 1065 1108 void __iomem *base; 1109 + u32 sfc_strap; 1066 1110 int err; 1067 1111 1068 1112 ctrl = 
devm_spi_alloc_host(dev, sizeof(*as_ctrl)); ··· 1099 1139 return dev_err_probe(dev, PTR_ERR(as_ctrl->spi_clk), 1100 1140 "unable to get spi clk\n"); 1101 1141 1142 + if (device_is_compatible(dev, "airoha,en7523-snand")) { 1143 + err = regmap_read(as_ctrl->regmap_ctrl, 1144 + REG_SPI_CTRL_SFC_STRAP, &sfc_strap); 1145 + if (err) 1146 + return err; 1147 + 1148 + if (!(sfc_strap & 0x04)) { 1149 + dma_enable = false; 1150 + dev_warn(dev, "Detected booting in RESERVED mode (UART_TXD was short to GND).\n"); 1151 + dev_warn(dev, "This mode is known for incorrect DMA reading of some flashes.\n"); 1152 + dev_warn(dev, "Much slower PIO mode will be used to prevent flash data damage.\n"); 1153 + dev_warn(dev, "Unplug UART cable and power cycle board to get full performance.\n"); 1154 + } 1155 + } 1156 + 1102 1157 err = dma_set_mask(as_ctrl->dev, DMA_BIT_MASK(32)); 1103 1158 if (err) 1104 1159 return err; 1105 1160 1106 1161 ctrl->num_chipselect = 2; 1107 - ctrl->mem_ops = &airoha_snand_mem_ops; 1162 + ctrl->mem_ops = dma_enable ? &airoha_snand_mem_ops 1163 + : &airoha_snand_nodma_mem_ops; 1108 1164 ctrl->bits_per_word_mask = SPI_BPW_MASK(8); 1109 1165 ctrl->mode_bits = SPI_RX_DUAL; 1110 1166 ctrl->setup = airoha_snand_setup; 1111 1167 device_set_node(&ctrl->dev, dev_fwnode(dev)); 1112 1168 1113 - err = airoha_snand_nfi_setup(as_ctrl); 1169 + err = airoha_snand_nfi_init(as_ctrl); 1114 1170 if (err) 1115 1171 return err; 1116 1172
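The airoha-snfi write path above programs the same byte count into three places, per the rule stated in the driver comment: transfer_length = NFI_SNF_MISC_CTL2.write_data_byte_number = NFI_CON.sector_number * NFI_SECCUS.custom_sector_size (the driver uses a sector number of 1). A minimal standalone sketch of that invariant; the struct and helper names here are hypothetical, not part of the driver:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical mirror of the three register fields the driver programs. */
struct nfi_write_cfg {
	unsigned int sec_num;       /* SPI_NFI_SEC_NUM in REG_SPI_NFI_CON */
	unsigned int cus_sec_size;  /* SPI_NFI_CUS_SEC_SIZE in REG_SPI_NFI_SECCUS_SIZE */
	unsigned int prog_load_len; /* SPI_NFI_PROG_LOAD_BYTE_NUM in SNF_MISC_CTL2 */
};

/*
 * Build a config for writing 'bytes' bytes, following the driver's rule:
 * transfer_length == sector_number * custom_sector_size
 *                 == write_data_byte_number.
 * With sec_num fixed at 1, the custom sector size carries the whole length.
 */
static struct nfi_write_cfg nfi_cfg_for_write(size_t bytes)
{
	struct nfi_write_cfg cfg = {
		.sec_num = 1,
		.cus_sec_size = (unsigned int)bytes,
		.prog_load_len = (unsigned int)bytes,
	};
	return cfg;
}

/* Check the consistency rule the hardware requires. */
static int nfi_cfg_is_consistent(const struct nfi_write_cfg *cfg)
{
	return cfg->prog_load_len == cfg->sec_num * cfg->cus_sec_size;
}
```

If the three fields ever disagree, the NFI engine transfers the wrong number of bytes, which is why the driver sets all three from the same `bytes` value.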
drivers/spi/spi-aspeed-smc.c | +582 -171
··· 7 7 */ 8 8 9 9 #include <linux/clk.h> 10 + #include <linux/io.h> 10 11 #include <linux/module.h> 11 12 #include <linux/of.h> 12 13 #include <linux/of_platform.h> ··· 68 67 u32 ahb_window_size; 69 68 u32 ctl_val[ASPEED_SPI_MAX]; 70 69 u32 clk_freq; 70 + bool force_user_mode; 71 71 }; 72 72 73 73 struct aspeed_spi_data { ··· 80 78 u32 timing; 81 79 u32 hclk_mask; 82 80 u32 hdiv_max; 81 + u32 min_window_size; 83 82 84 - u32 (*segment_start)(struct aspeed_spi *aspi, u32 reg); 85 - u32 (*segment_end)(struct aspeed_spi *aspi, u32 reg); 86 - u32 (*segment_reg)(struct aspeed_spi *aspi, u32 start, u32 end); 83 + phys_addr_t (*segment_start)(struct aspeed_spi *aspi, u32 reg); 84 + phys_addr_t (*segment_end)(struct aspeed_spi *aspi, u32 reg); 85 + u32 (*segment_reg)(struct aspeed_spi *aspi, phys_addr_t start, 86 + phys_addr_t end); 87 + int (*adjust_window)(struct aspeed_spi *aspi); 88 + u32 (*get_clk_div)(struct aspeed_spi_chip *chip, u32 hz); 87 89 int (*calibrate)(struct aspeed_spi_chip *chip, u32 hdiv, 88 90 const u8 *golden_buf, u8 *test_buf); 89 91 }; ··· 98 92 const struct aspeed_spi_data *data; 99 93 100 94 void __iomem *regs; 101 - void __iomem *ahb_base; 102 - u32 ahb_base_phy; 95 + phys_addr_t ahb_base_phy; 103 96 u32 ahb_window_size; 97 + u32 num_cs; 104 98 struct device *dev; 105 99 106 100 struct clk *clk; ··· 264 258 const struct spi_mem_op *op) 265 259 { 266 260 int ret; 261 + int io_mode = aspeed_spi_get_io_mode(op); 267 262 268 263 aspeed_spi_start_user(chip); 269 264 ret = aspeed_spi_send_cmd_addr(chip, op->addr.nbytes, op->addr.val, op->cmd.opcode); 270 265 if (ret < 0) 271 266 goto stop_user; 267 + 268 + aspeed_spi_set_io_mode(chip, io_mode); 269 + 272 270 aspeed_spi_write_to_ahb(chip->ahb_base, op->data.buf.out, op->data.nbytes); 273 271 stop_user: 274 272 aspeed_spi_stop_user(chip); ··· 386 376 spi_get_chipselect(mem->spi, 0)); 387 377 } 388 378 389 - struct aspeed_spi_window { 390 - u32 cs; 391 - u32 offset; 392 - u32 size; 393 - }; 394 - 395 - 
static void aspeed_spi_get_windows(struct aspeed_spi *aspi, 396 - struct aspeed_spi_window windows[ASPEED_SPI_MAX_NUM_CS]) 379 + static int aspeed_spi_set_window(struct aspeed_spi *aspi) 397 380 { 398 - const struct aspeed_spi_data *data = aspi->data; 399 - u32 reg_val; 381 + struct device *dev = aspi->dev; 382 + off_t offset = 0; 383 + phys_addr_t start; 384 + phys_addr_t end; 385 + void __iomem *seg_reg_base = aspi->regs + CE0_SEGMENT_ADDR_REG; 386 + void __iomem *seg_reg; 387 + u32 seg_val_backup; 388 + u32 seg_val; 400 389 u32 cs; 390 + size_t window_size; 401 391 402 392 for (cs = 0; cs < aspi->data->max_cs; cs++) { 403 - reg_val = readl(aspi->regs + CE0_SEGMENT_ADDR_REG + cs * 4); 404 - windows[cs].cs = cs; 405 - windows[cs].size = data->segment_end(aspi, reg_val) - 406 - data->segment_start(aspi, reg_val); 407 - windows[cs].offset = data->segment_start(aspi, reg_val) - aspi->ahb_base_phy; 408 - dev_vdbg(aspi->dev, "CE%d offset=0x%.8x size=0x%x\n", cs, 409 - windows[cs].offset, windows[cs].size); 393 + if (aspi->chips[cs].ahb_base) { 394 + devm_iounmap(dev, aspi->chips[cs].ahb_base); 395 + aspi->chips[cs].ahb_base = NULL; 396 + } 410 397 } 398 + 399 + for (cs = 0; cs < aspi->data->max_cs; cs++) { 400 + seg_reg = seg_reg_base + cs * 4; 401 + seg_val_backup = readl(seg_reg); 402 + 403 + start = aspi->ahb_base_phy + offset; 404 + window_size = aspi->chips[cs].ahb_window_size; 405 + end = start + window_size; 406 + 407 + seg_val = aspi->data->segment_reg(aspi, start, end); 408 + writel(seg_val, seg_reg); 409 + 410 + /* 411 + * Restore initial value if something goes wrong or the segment 412 + * register is write protected.
413 + */ 414 + if (seg_val != readl(seg_reg)) { 415 + dev_warn(dev, "CE%d expected window [ 0x%.9llx - 0x%.9llx ] %zdMB\n", 416 + cs, (u64)start, (u64)end - 1, window_size >> 20); 417 + writel(seg_val_backup, seg_reg); 418 + window_size = aspi->data->segment_end(aspi, seg_val_backup) - 419 + aspi->data->segment_start(aspi, seg_val_backup); 420 + aspi->chips[cs].ahb_window_size = window_size; 421 + end = start + window_size; 422 + } 423 + 424 + if (window_size != 0) 425 + dev_dbg(dev, "CE%d window [ 0x%.9llx - 0x%.9llx ] %zdMB\n", 426 + cs, (u64)start, (u64)end - 1, window_size >> 20); 427 + else 428 + dev_dbg(dev, "CE%d window closed\n", cs); 429 + 430 + offset += window_size; 431 + if (offset > aspi->ahb_window_size) { 432 + dev_err(dev, "CE%d offset value 0x%llx is too large.\n", 433 + cs, (u64)offset); 434 + return -ENOSPC; 435 + } 436 + 437 + /* 438 + * No need to map the address decoding range when 439 + * - window size is 0. 440 + * - the CS is unused. 441 + */ 442 + if (window_size == 0 || cs >= aspi->num_cs) 443 + continue; 444 + 445 + aspi->chips[cs].ahb_base = 446 + devm_ioremap(aspi->dev, start, window_size); 447 + if (!aspi->chips[cs].ahb_base) { 448 + dev_err(aspi->dev, 449 + "Failed to remap window [0x%.9llx - 0x%.9llx]\n", 450 + (u64)start, (u64)end - 1); 451 + return -ENOMEM; 452 + } 453 + } 454 + 455 + return 0; 411 456 } 412 457 413 - /* 414 - * On the AST2600, some CE windows are closed by default at reset but 415 - * U-Boot should open all.
416 - */ 417 - static int aspeed_spi_chip_set_default_window(struct aspeed_spi_chip *chip) 458 + static const struct aspeed_spi_data ast2500_spi_data; 459 + static const struct aspeed_spi_data ast2600_spi_data; 460 + static const struct aspeed_spi_data ast2600_fmc_data; 461 + 462 + static int aspeed_spi_chip_set_default_window(struct aspeed_spi *aspi) 418 463 { 419 - struct aspeed_spi *aspi = chip->aspi; 420 - struct aspeed_spi_window windows[ASPEED_SPI_MAX_NUM_CS] = { 0 }; 421 - struct aspeed_spi_window *win = &windows[chip->cs]; 464 + u32 cs; 422 465 423 466 /* No segment registers for the AST2400 SPI controller */ 424 467 if (aspi->data == &ast2400_spi_data) { 425 - win->offset = 0; 426 - win->size = aspi->ahb_window_size; 427 - } else { 428 - aspeed_spi_get_windows(aspi, windows); 468 + aspi->chips[0].ahb_base = devm_ioremap(aspi->dev, 469 + aspi->ahb_base_phy, 470 + aspi->ahb_window_size); 471 + aspi->chips[0].ahb_window_size = aspi->ahb_window_size; 472 + return 0; 429 473 } 430 474 431 - chip->ahb_base = aspi->ahb_base + win->offset; 432 - chip->ahb_window_size = win->size; 475 + /* Assign the minimum window size to each CS */ 476 + for (cs = 0; cs < aspi->num_cs; cs++) { 477 + aspi->chips[cs].ahb_window_size = aspi->data->min_window_size; 478 + dev_dbg(aspi->dev, "CE%d default window [ 0x%.9llx - 0x%.9llx ]", 479 + cs, (u64)(aspi->ahb_base_phy + aspi->data->min_window_size * cs), 480 + (u64)(aspi->ahb_base_phy + aspi->data->min_window_size * cs - 1)); 481 + } 433 482 434 - dev_dbg(aspi->dev, "CE%d default window [ 0x%.8x - 0x%.8x ] %dMB", 435 - chip->cs, aspi->ahb_base_phy + win->offset, 436 - aspi->ahb_base_phy + win->offset + win->size - 1, 437 - win->size >> 20); 483 + /* Close unused CS */ 484 + for (cs = aspi->num_cs; cs < aspi->data->max_cs; cs++) 485 + aspi->chips[cs].ahb_window_size = 0; 438 486 439 - return chip->ahb_window_size ? 
0 : -1; 487 + if (aspi->data->adjust_window) 488 + aspi->data->adjust_window(aspi); 489 + 490 + return aspeed_spi_set_window(aspi); 440 491 } 441 492 442 - static int aspeed_spi_set_window(struct aspeed_spi *aspi, 443 - const struct aspeed_spi_window *win) 493 + /* 494 + * As flash sizes grow, we may need to trim some decoding 495 + * sizes to stay within the maximum decoding size. We trim 496 + * the decoding size from the rear CS to avoid affecting the 497 + * default boot sequence, which usually starts from CS0. 498 + * Note that if a CS decoding size is trimmed, command mode 499 + * may not work perfectly on that CS, but this only affects 500 + * performance and debugging. 501 + */ 502 + static int aspeed_spi_trim_window_size(struct aspeed_spi *aspi) 444 503 { 445 - u32 start = aspi->ahb_base_phy + win->offset; 446 - u32 end = start + win->size; 447 - void __iomem *seg_reg = aspi->regs + CE0_SEGMENT_ADDR_REG + win->cs * 4; 448 - u32 seg_val_backup = readl(seg_reg); 449 - u32 seg_val = aspi->data->segment_reg(aspi, start, end); 504 + struct aspeed_spi_chip *chips = aspi->chips; 505 + size_t total_sz; 506 + int cs = aspi->data->max_cs - 1; 507 + u32 i; 508 + bool trimmed = false; 450 509 451 - if (seg_val == seg_val_backup) 452 - return 0; 510 + do { 511 + total_sz = 0; 512 + for (i = 0; i < aspi->data->max_cs; i++) 513 + total_sz += chips[i].ahb_window_size; 453 514 454 - writel(seg_val, seg_reg); 515 + if (cs < 0) 516 + return -ENOMEM; 455 517 456 - /* 457 - * Restore initial value if something goes wrong else we could 458 - * loose access to the chip.
459 - */ 460 - if (seg_val != readl(seg_reg)) { 461 - dev_err(aspi->dev, "CE%d invalid window [ 0x%.8x - 0x%.8x ] %dMB", 462 - win->cs, start, end - 1, win->size >> 20); 463 - writel(seg_val_backup, seg_reg); 464 - return -EIO; 518 + if (chips[cs].ahb_window_size <= aspi->data->min_window_size) { 519 + cs--; 520 + continue; 521 + } 522 + 523 + if (total_sz > aspi->ahb_window_size) { 524 + chips[cs].ahb_window_size -= 525 + aspi->data->min_window_size; 526 + total_sz -= aspi->data->min_window_size; 527 + /* 528 + * If the ahb window size is ever trimmed, only user 529 + * mode can be adopted to access the whole flash. 530 + */ 531 + chips[cs].force_user_mode = true; 532 + trimmed = true; 533 + } 534 + } while (total_sz > aspi->ahb_window_size); 535 + 536 + if (trimmed) { 537 + dev_warn(aspi->dev, "Window size after trimming:\n"); 538 + for (cs = 0; cs < aspi->data->max_cs; cs++) { 539 + dev_warn(aspi->dev, "CE%d: 0x%08x\n", 540 + cs, chips[cs].ahb_window_size); 541 + } 465 542 } 466 543 467 - if (win->size) 468 - dev_dbg(aspi->dev, "CE%d new window [ 0x%.8x - 0x%.8x ] %dMB", 469 - win->cs, start, end - 1, win->size >> 20); 470 - else 471 - dev_dbg(aspi->dev, "CE%d window closed", win->cs); 544 + return 0; 545 + } 546 + 547 + static int aspeed_adjust_window_ast2400(struct aspeed_spi *aspi) 548 + { 549 + int ret; 550 + int cs; 551 + struct aspeed_spi_chip *chips = aspi->chips; 552 + 553 + /* Close unused CS. */ 554 + for (cs = aspi->num_cs; cs < aspi->data->max_cs; cs++) 555 + chips[cs].ahb_window_size = 0; 556 + 557 + ret = aspeed_spi_trim_window_size(aspi); 558 + if (ret != 0) 559 + return ret; 560 + 561 + return 0; 562 + } 563 + 564 + /* 565 + * For AST2500, the minimum address decoding size for each CS 566 + * is 8MB. This address decoding size is mandatory for each 567 + * CS no matter whether it will be used. This is a HW limitation. 
568 + */ 569 + static int aspeed_adjust_window_ast2500(struct aspeed_spi *aspi) 570 + { 571 + int ret; 572 + int cs, i; 573 + u32 cum_size, rem_size; 574 + struct aspeed_spi_chip *chips = aspi->chips; 575 + 576 + /* Assign min_window_sz to unused CS. */ 577 + for (cs = aspi->num_cs; cs < aspi->data->max_cs; cs++) { 578 + if (chips[cs].ahb_window_size < aspi->data->min_window_size) 579 + chips[cs].ahb_window_size = 580 + aspi->data->min_window_size; 581 + } 582 + 583 + /* 584 + * If command mode or normal mode is used by dirmap read, the start 585 + * address of a window should be multiple of its related flash size. 586 + * Namely, the total windows size from flash 0 to flash N should 587 + * be multiple of the size of flash (N + 1). 588 + */ 589 + for (cs = aspi->num_cs - 1; cs >= 0; cs--) { 590 + cum_size = 0; 591 + for (i = 0; i < cs; i++) 592 + cum_size += chips[i].ahb_window_size; 593 + 594 + rem_size = cum_size % chips[cs].ahb_window_size; 595 + if (chips[cs].ahb_window_size != 0 && rem_size != 0) 596 + chips[0].ahb_window_size += 597 + chips[cs].ahb_window_size - rem_size; 598 + } 599 + 600 + ret = aspeed_spi_trim_window_size(aspi); 601 + if (ret != 0) 602 + return ret; 603 + 604 + /* The total window size of AST2500 SPI1 CS0 and CS1 must be 128MB */ 605 + if (aspi->data == &ast2500_spi_data) 606 + chips[1].ahb_window_size = 607 + 0x08000000 - chips[0].ahb_window_size; 608 + 609 + return 0; 610 + } 611 + 612 + static int aspeed_adjust_window_ast2600(struct aspeed_spi *aspi) 613 + { 614 + int ret; 615 + int cs, i; 616 + u32 cum_size, rem_size; 617 + struct aspeed_spi_chip *chips = aspi->chips; 618 + 619 + /* Close unused CS. */ 620 + for (cs = aspi->num_cs; cs < aspi->data->max_cs; cs++) 621 + chips[cs].ahb_window_size = 0; 622 + 623 + /* 624 + * If command mode or normal mode is used by dirmap read, the start 625 + * address of a window should be multiple of its related flash size. 
626 + * Namely, the total windows size from flash 0 to flash N should 627 + * be multiple of the size of flash (N + 1). 628 + */ 629 + for (cs = aspi->num_cs - 1; cs >= 0; cs--) { 630 + cum_size = 0; 631 + for (i = 0; i < cs; i++) 632 + cum_size += chips[i].ahb_window_size; 633 + 634 + rem_size = cum_size % chips[cs].ahb_window_size; 635 + if (chips[cs].ahb_window_size != 0 && rem_size != 0) 636 + chips[0].ahb_window_size += 637 + chips[cs].ahb_window_size - rem_size; 638 + } 639 + 640 + ret = aspeed_spi_trim_window_size(aspi); 641 + if (ret != 0) 642 + return ret; 472 643 473 644 return 0; 474 645 } ··· 660 469 * - ioremap each window, not strictly necessary since the overall window 661 470 * is correct. 662 471 */ 663 - static const struct aspeed_spi_data ast2500_spi_data; 664 - static const struct aspeed_spi_data ast2600_spi_data; 665 - static const struct aspeed_spi_data ast2600_fmc_data; 666 - 667 472 static int aspeed_spi_chip_adjust_window(struct aspeed_spi_chip *chip, 668 473 u32 local_offset, u32 size) 669 474 { 670 475 struct aspeed_spi *aspi = chip->aspi; 671 - struct aspeed_spi_window windows[ASPEED_SPI_MAX_NUM_CS] = { 0 }; 672 - struct aspeed_spi_window *win = &windows[chip->cs]; 673 476 int ret; 674 477 675 478 /* No segment registers for the AST2400 SPI controller */ 676 479 if (aspi->data == &ast2400_spi_data) 677 480 return 0; 678 481 679 - /* 680 - * Due to an HW issue on the AST2500 SPI controller, the CE0 681 - * window size should be smaller than the maximum 128MB. 682 - */ 683 - if (aspi->data == &ast2500_spi_data && chip->cs == 0 && size == SZ_128M) { 684 - size = 120 << 20; 685 - dev_info(aspi->dev, "CE%d window resized to %dMB (AST2500 HW quirk)", 686 - chip->cs, size >> 20); 687 - } 688 - 689 - /* 690 - * The decoding size of AST2600 SPI controller should set at 691 - * least 2MB. 
692 - */ 693 - if ((aspi->data == &ast2600_spi_data || aspi->data == &ast2600_fmc_data) && 694 - size < SZ_2M) { 695 - size = SZ_2M; 696 - dev_info(aspi->dev, "CE%d window resized to %dMB (AST2600 Decoding)", 697 - chip->cs, size >> 20); 698 - } 699 - 700 - aspeed_spi_get_windows(aspi, windows); 701 - 702 482 /* Adjust this chip window */ 703 - win->offset += local_offset; 704 - win->size = size; 483 + aspi->chips[chip->cs].ahb_window_size = size; 705 484 706 - if (win->offset + win->size > aspi->ahb_window_size) { 707 - win->size = aspi->ahb_window_size - win->offset; 708 - dev_warn(aspi->dev, "CE%d window resized to %dMB", chip->cs, win->size >> 20); 709 - } 485 + /* Adjust the overall windows size regarding each platform */ 486 + if (aspi->data->adjust_window) 487 + aspi->data->adjust_window(aspi); 710 488 711 - ret = aspeed_spi_set_window(aspi, win); 489 + ret = aspeed_spi_set_window(aspi); 712 490 if (ret) 713 491 return ret; 714 492 715 - /* Update chip mapping info */ 716 - chip->ahb_base = aspi->ahb_base + win->offset; 717 - chip->ahb_window_size = win->size; 718 - 719 - /* 720 - * Also adjust next chip window to make sure that it does not 721 - * overlap with the current window. 
722 - */ 723 - if (chip->cs < aspi->data->max_cs - 1) { 724 - struct aspeed_spi_window *next = &windows[chip->cs + 1]; 725 - 726 - /* Change offset and size to keep the same end address */ 727 - if ((next->offset + next->size) > (win->offset + win->size)) 728 - next->size = (next->offset + next->size) - (win->offset + win->size); 729 - else 730 - next->size = 0; 731 - next->offset = win->offset + win->size; 732 - 733 - aspeed_spi_set_window(aspi, next); 734 - } 735 493 return 0; 736 494 } 737 495 ··· 759 619 struct aspeed_spi_chip *chip = &aspi->chips[spi_get_chipselect(desc->mem->spi, 0)]; 760 620 761 621 /* Switch to USER command mode if mapping window is too small */ 762 - if (chip->ahb_window_size < offset + len) { 622 + if (chip->ahb_window_size < offset + len || chip->force_user_mode) { 763 623 int ret; 764 624 765 625 ret = aspeed_spi_read_user(chip, &desc->info.op_tmpl, offset, len, buf); ··· 817 677 if (data->hastype) 818 678 aspeed_spi_chip_set_type(aspi, cs, CONFIG_TYPE_SPI); 819 679 820 - if (aspeed_spi_chip_set_default_window(chip) < 0) { 821 - dev_warn(aspi->dev, "CE%d window invalid", cs); 822 - return -EINVAL; 823 - } 824 - 825 680 aspeed_spi_chip_enable(aspi, cs, true); 826 681 827 682 chip->ctl_val[ASPEED_SPI_BASE] = CTRL_CE_STOP_ACTIVE | CTRL_IO_MODE_USER; ··· 869 734 if (IS_ERR(aspi->regs)) 870 735 return PTR_ERR(aspi->regs); 871 736 872 - aspi->ahb_base = devm_platform_get_and_ioremap_resource(pdev, 1, &res); 873 - if (IS_ERR(aspi->ahb_base)) { 874 - dev_err(dev, "missing AHB mapping window\n"); 875 - return PTR_ERR(aspi->ahb_base); 737 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 738 + if (!res) { 739 + dev_err(dev, "missing AHB memory\n"); 740 + return -EINVAL; 876 741 } 877 742 878 743 aspi->ahb_window_size = resource_size(res); ··· 897 762 ctlr->mem_ops = &aspeed_spi_mem_ops; 898 763 ctlr->setup = aspeed_spi_setup; 899 764 ctlr->cleanup = aspeed_spi_cleanup; 900 - ctlr->num_chipselect = data->max_cs; 765 + ctlr->num_chipselect = 
of_get_available_child_count(dev->of_node); 901 766 ctlr->dev.of_node = dev->of_node; 767 + 768 + aspi->num_cs = ctlr->num_chipselect; 769 + 770 + ret = aspeed_spi_chip_set_default_window(aspi); 771 + if (ret) { 772 + dev_err(&pdev->dev, "fail to set default window\n"); 773 + return ret; 774 + } 902 775 903 776 ret = devm_spi_register_controller(dev, ctlr); 904 777 if (ret) ··· 931 788 * The address range is encoded with absolute addresses in the overall 932 789 * mapping window. 933 790 */ 934 - static u32 aspeed_spi_segment_start(struct aspeed_spi *aspi, u32 reg) 791 + static phys_addr_t aspeed_spi_segment_start(struct aspeed_spi *aspi, u32 reg) 935 792 { 936 793 return ((reg >> 16) & 0xFF) << 23; 937 794 } 938 795 939 - static u32 aspeed_spi_segment_end(struct aspeed_spi *aspi, u32 reg) 796 + static phys_addr_t aspeed_spi_segment_end(struct aspeed_spi *aspi, u32 reg) 940 797 { 941 798 return ((reg >> 24) & 0xFF) << 23; 942 799 } 943 800 944 - static u32 aspeed_spi_segment_reg(struct aspeed_spi *aspi, u32 start, u32 end) 801 + static u32 aspeed_spi_segment_reg(struct aspeed_spi *aspi, 802 + phys_addr_t start, phys_addr_t end) 945 803 { 946 804 return (((start >> 23) & 0xFF) << 16) | (((end >> 23) & 0xFF) << 24); 947 805 } ··· 954 810 955 811 #define AST2600_SEG_ADDR_MASK 0x0ff00000 956 812 957 - static u32 aspeed_spi_segment_ast2600_start(struct aspeed_spi *aspi, 958 - u32 reg) 813 + static phys_addr_t aspeed_spi_segment_ast2600_start(struct aspeed_spi *aspi, 814 + u32 reg) 959 815 { 960 816 u32 start_offset = (reg << 16) & AST2600_SEG_ADDR_MASK; 961 817 962 818 return aspi->ahb_base_phy + start_offset; 963 819 } 964 820 965 - static u32 aspeed_spi_segment_ast2600_end(struct aspeed_spi *aspi, 966 - u32 reg) 821 + static phys_addr_t aspeed_spi_segment_ast2600_end(struct aspeed_spi *aspi, 822 + u32 reg) 967 823 { 968 824 u32 end_offset = reg & AST2600_SEG_ADDR_MASK; 969 825 ··· 975 831 } 976 832 977 833 static u32 aspeed_spi_segment_ast2600_reg(struct aspeed_spi 
*aspi, 978 - u32 start, u32 end) 834 + phys_addr_t start, phys_addr_t end) 979 835 { 980 836 /* disable zero size segments */ 981 837 if (start == end) ··· 983 839 984 840 return ((start & AST2600_SEG_ADDR_MASK) >> 16) | 985 841 ((end - 1) & AST2600_SEG_ADDR_MASK); 842 + } 843 + 844 + /* The Segment Registers of the AST2700 use a 64KB unit. */ 845 + #define AST2700_SEG_ADDR_MASK 0x7fff0000 846 + 847 + static phys_addr_t aspeed_spi_segment_ast2700_start(struct aspeed_spi *aspi, 848 + u32 reg) 849 + { 850 + u64 start_offset = (reg << 16) & AST2700_SEG_ADDR_MASK; 851 + 852 + if (!start_offset) 853 + return aspi->ahb_base_phy; 854 + 855 + return aspi->ahb_base_phy + start_offset; 856 + } 857 + 858 + static phys_addr_t aspeed_spi_segment_ast2700_end(struct aspeed_spi *aspi, 859 + u32 reg) 860 + { 861 + u64 end_offset = reg & AST2700_SEG_ADDR_MASK; 862 + 863 + if (!end_offset) 864 + return aspi->ahb_base_phy; 865 + 866 + return aspi->ahb_base_phy + end_offset; 867 + } 868 + 869 + static u32 aspeed_spi_segment_ast2700_reg(struct aspeed_spi *aspi, 870 + phys_addr_t start, phys_addr_t end) 871 + { 872 + if (start == end) 873 + return 0; 874 + 875 + return (u32)(((start & AST2700_SEG_ADDR_MASK) >> 16) | 876 + (end & AST2700_SEG_ADDR_MASK)); 986 877 } 987 878 988 879 /* ··· 1121 942 } 1122 943 1123 944 static const u32 aspeed_spi_hclk_divs[] = { 1124 - 0xf, /* HCLK */ 1125 - 0x7, /* HCLK/2 */ 1126 - 0xe, /* HCLK/3 */ 1127 - 0x6, /* HCLK/4 */ 1128 - 0xd, /* HCLK/5 */ 945 + /* HCLK, HCLK/2, HCLK/3, HCLK/4, HCLK/5, ..., HCLK/16 */ 946 + 0xf, 0x7, 0xe, 0x6, 0xd, 947 + 0x5, 0xc, 0x4, 0xb, 0x3, 948 + 0xa, 0x2, 0x9, 0x1, 0x8, 949 + 0x0 1129 950 }; 1130 951 1131 952 #define ASPEED_SPI_HCLK_DIV(i) \ 1132 953 (aspeed_spi_hclk_divs[(i) - 1] << CTRL_FREQ_SEL_SHIFT) 954 + 955 + /* Transfer maximum clock frequency to register setting */ 956 + static u32 aspeed_get_clk_div_ast2400(struct aspeed_spi_chip *chip, 957 + u32 max_hz) 958 + { 959 + struct device *dev = chip->aspi->dev; 960 + u32 
hclk_clk = chip->aspi->clk_freq; 961 + u32 div_ctl = 0; 962 + u32 i; 963 + bool found = false; 964 + 965 + /* FMC/SPIR10[11:8] */ 966 + for (i = 1; i <= ARRAY_SIZE(aspeed_spi_hclk_divs); i++) { 967 + if (hclk_clk / i <= max_hz) { 968 + found = true; 969 + break; 970 + } 971 + } 972 + 973 + if (found) { 974 + div_ctl = ASPEED_SPI_HCLK_DIV(i); 975 + chip->clk_freq = hclk_clk / i; 976 + } 977 + 978 + dev_dbg(dev, "found: %s, hclk: %d, max_clk: %d\n", 979 + found ? "yes" : "no", hclk_clk, max_hz); 980 + 981 + if (found) { 982 + dev_dbg(dev, "h_div: 0x%08x, speed: %d\n", 983 + div_ctl, chip->clk_freq); 984 + } 985 + 986 + return div_ctl; 987 + } 988 + 989 + static u32 aspeed_get_clk_div_ast2500(struct aspeed_spi_chip *chip, 990 + u32 max_hz) 991 + { 992 + struct device *dev = chip->aspi->dev; 993 + u32 hclk_clk = chip->aspi->clk_freq; 994 + u32 div_ctl = 0; 995 + u32 i; 996 + bool found = false; 997 + 998 + /* FMC/SPIR10[11:8] */ 999 + for (i = 1; i <= ARRAY_SIZE(aspeed_spi_hclk_divs); i++) { 1000 + if (hclk_clk / i <= max_hz) { 1001 + found = true; 1002 + chip->clk_freq = hclk_clk / i; 1003 + break; 1004 + } 1005 + } 1006 + 1007 + if (found) { 1008 + div_ctl = ASPEED_SPI_HCLK_DIV(i); 1009 + goto end; 1010 + } 1011 + 1012 + for (i = 1; i <= ARRAY_SIZE(aspeed_spi_hclk_divs); i++) { 1013 + if (hclk_clk / (i * 4) <= max_hz) { 1014 + found = true; 1015 + chip->clk_freq = hclk_clk / (i * 4); 1016 + break; 1017 + } 1018 + } 1019 + 1020 + if (found) 1021 + div_ctl = BIT(13) | ASPEED_SPI_HCLK_DIV(i); 1022 + 1023 + end: 1024 + dev_dbg(dev, "found: %s, hclk: %d, max_clk: %d\n", 1025 + found ? 
"yes" : "no", hclk_clk, max_hz); 1026 + 1027 + if (found) { 1028 + dev_dbg(dev, "h_div: 0x%08x, speed: %d\n", 1029 + div_ctl, chip->clk_freq); 1030 + } 1031 + 1032 + return div_ctl; 1033 + } 1034 + 1035 + static u32 aspeed_get_clk_div_ast2600(struct aspeed_spi_chip *chip, 1036 + u32 max_hz) 1037 + { 1038 + struct device *dev = chip->aspi->dev; 1039 + u32 hclk_clk = chip->aspi->clk_freq; 1040 + u32 div_ctl = 0; 1041 + u32 i, j; 1042 + bool found = false; 1043 + 1044 + /* FMC/SPIR10[27:24] */ 1045 + for (j = 0; j < 16; j++) { 1046 + /* FMC/SPIR10[11:8] */ 1047 + for (i = 1; i <= ARRAY_SIZE(aspeed_spi_hclk_divs); i++) { 1048 + if (j == 0 && i == 1) 1049 + continue; 1050 + 1051 + if (hclk_clk / (j * 16 + i) <= max_hz) { 1052 + found = true; 1053 + break; 1054 + } 1055 + } 1056 + 1057 + if (found) { 1058 + div_ctl = ((j << 24) | ASPEED_SPI_HCLK_DIV(i)); 1059 + chip->clk_freq = hclk_clk / (j * 16 + i); 1060 + break; 1061 + } 1062 + } 1063 + 1064 + dev_dbg(dev, "found: %s, hclk: %d, max_clk: %d\n", 1065 + found ? 
"yes" : "no", hclk_clk, max_hz); 1066 + 1067 + if (found) { 1068 + dev_dbg(dev, "h_div: 0x%08x, speed: %d\n", 1069 + div_ctl, chip->clk_freq); 1070 + } 1071 + 1072 + return div_ctl; 1073 + } 1133 1074 1134 1075 static int aspeed_spi_do_calibration(struct aspeed_spi_chip *chip) 1135 1076 { ··· 1257 958 const struct aspeed_spi_data *data = aspi->data; 1258 959 u32 ahb_freq = aspi->clk_freq; 1259 960 u32 max_freq = chip->clk_freq; 961 + bool exec_calib = false; 962 + u32 best_freq = 0; 1260 963 u32 ctl_val; 1261 964 u8 *golden_buf = NULL; 1262 965 u8 *test_buf = NULL; 1263 - int i, rc, best_div = -1; 966 + int i, rc; 967 + u32 div_ctl; 1264 968 1265 969 dev_dbg(aspi->dev, "calculate timing compensation - AHB freq: %d MHz", 1266 970 ahb_freq / 1000000); ··· 1284 982 memcpy_fromio(golden_buf, chip->ahb_base, CALIBRATE_BUF_SIZE); 1285 983 if (!aspeed_spi_check_calib_data(golden_buf, CALIBRATE_BUF_SIZE)) { 1286 984 dev_info(aspi->dev, "Calibration area too uniform, using low speed"); 1287 - goto no_calib; 985 + goto end_calib; 1288 986 } 1289 987 1290 988 #if defined(VERBOSE_DEBUG) ··· 1293 991 #endif 1294 992 1295 993 /* Now we iterate the HCLK dividers until we find our breaking point */ 1296 - for (i = ARRAY_SIZE(aspeed_spi_hclk_divs); i > data->hdiv_max - 1; i--) { 994 + for (i = 5; i > data->hdiv_max - 1; i--) { 1297 995 u32 tv, freq; 1298 996 1299 997 freq = ahb_freq / i; ··· 1306 1004 dev_dbg(aspi->dev, "Trying HCLK/%d [%08x] ...", i, tv); 1307 1005 rc = data->calibrate(chip, i, golden_buf, test_buf); 1308 1006 if (rc == 0) 1309 - best_div = i; 1007 + best_freq = freq; 1008 + 1009 + exec_calib = true; 1310 1010 } 1311 1011 1312 - /* Nothing found ? 
*/ 1313 - if (best_div < 0) { 1314 - dev_warn(aspi->dev, "No good frequency, using dumb slow"); 1012 + end_calib: 1013 + if (!exec_calib) { 1014 + /* calibration process is not executed */ 1015 + dev_warn(aspi->dev, "Force to dts configuration %dkHz.\n", 1016 + max_freq / 1000); 1017 + div_ctl = data->get_clk_div(chip, max_freq); 1018 + } else if (best_freq == 0) { 1019 + /* calibration process is executed, but no good frequency */ 1020 + dev_warn(aspi->dev, "No good frequency, using dumb slow\n"); 1021 + div_ctl = 0; 1315 1022 } else { 1316 - dev_dbg(aspi->dev, "Found good read timings at HCLK/%d", best_div); 1317 - 1318 - /* Record the freq */ 1319 - for (i = 0; i < ASPEED_SPI_MAX; i++) 1320 - chip->ctl_val[i] = (chip->ctl_val[i] & data->hclk_mask) | 1321 - ASPEED_SPI_HCLK_DIV(best_div); 1023 + dev_dbg(aspi->dev, "Found good read timings at %dMHz.\n", 1024 + best_freq / 1000000); 1025 + div_ctl = data->get_clk_div(chip, best_freq); 1322 1026 } 1323 1027 1324 - no_calib: 1028 + /* Record the freq */ 1029 + for (i = 0; i < ASPEED_SPI_MAX; i++) { 1030 + chip->ctl_val[i] = (chip->ctl_val[i] & data->hclk_mask) | 1031 + div_ctl; 1032 + } 1033 + 1325 1034 writel(chip->ctl_val[ASPEED_SPI_READ], chip->ctl); 1326 1035 kfree(test_buf); 1327 1036 return 0; ··· 1340 1027 1341 1028 #define TIMING_DELAY_DI BIT(3) 1342 1029 #define TIMING_DELAY_HCYCLE_MAX 5 1030 + #define TIMING_DELAY_INPUT_MAX 16 1343 1031 #define TIMING_REG_AST2600(chip) \ 1344 1032 ((chip)->aspi->regs + (chip)->aspi->data->timing + \ 1345 1033 (chip)->cs * 4) 1034 + 1035 + /* 1036 + * This function returns the center point of the longest 1037 + * continuous "pass" interval within the buffer. The interval 1038 + * must contain the highest number of consecutive "pass" 1039 + * results and not span across multiple rows.
1040 + */ 1041 + static u32 aspeed_spi_ast2600_optimized_timing(u32 rows, u32 cols, 1042 + u8 buf[rows][cols]) 1043 + { 1044 + int r = 0, c = 0; 1045 + int max = 0; 1046 + int i, j; 1047 + 1048 + for (i = 0; i < rows; i++) { 1049 + for (j = 0; j < cols;) { 1050 + int k = j; 1051 + 1052 + while (k < cols && buf[i][k]) 1053 + k++; 1054 + 1055 + if (k - j > max) { 1056 + max = k - j; 1057 + r = i; 1058 + c = j + (k - j) / 2; 1059 + } 1060 + 1061 + j = k + 1; 1062 + } 1063 + } 1064 + 1065 + return max > 4 ? r * cols + c : 0; 1066 + } 1346 1067 1347 1068 static int aspeed_spi_ast2600_calibrate(struct aspeed_spi_chip *chip, u32 hdiv, 1348 1069 const u8 *golden_buf, u8 *test_buf) 1349 1070 { 1350 1071 struct aspeed_spi *aspi = chip->aspi; 1351 1072 int hcycle; 1073 + int delay_ns; 1352 1074 u32 shift = (hdiv - 2) << 3; 1353 - u32 mask = ~(0xfu << shift); 1075 + u32 mask = ~(0xffu << shift); 1354 1076 u32 fread_timing_val = 0; 1077 + u8 calib_res[6][17] = {0}; 1078 + u32 calib_point; 1355 1079 1356 1080 for (hcycle = 0; hcycle <= TIMING_DELAY_HCYCLE_MAX; hcycle++) { 1357 - int delay_ns; 1358 1081 bool pass = false; 1359 1082 1360 1083 fread_timing_val &= mask; ··· 1403 1054 " * [%08x] %d HCLK delay, DI delay none : %s", 1404 1055 fread_timing_val, hcycle, pass ? 
"PASS" : "FAIL"); 1405 1056 if (pass) 1406 - return 0; 1057 + calib_res[hcycle][0] = 1; 1407 1058 1408 1059 /* Add DI input delays */ 1409 1060 fread_timing_val &= mask; 1410 1061 fread_timing_val |= (TIMING_DELAY_DI | hcycle) << shift; 1411 1062 1412 - for (delay_ns = 0; delay_ns < 0x10; delay_ns++) { 1413 - fread_timing_val &= ~(0xf << (4 + shift)); 1063 + for (delay_ns = 0; delay_ns < TIMING_DELAY_INPUT_MAX; delay_ns++) { 1064 + fread_timing_val &= ~(0xfu << (4 + shift)); 1414 1065 fread_timing_val |= delay_ns << (4 + shift); 1415 1066 1416 1067 writel(fread_timing_val, TIMING_REG_AST2600(chip)); ··· 1419 1070 " * [%08x] %d HCLK delay, DI delay %d.%dns : %s", 1420 1071 fread_timing_val, hcycle, (delay_ns + 1) / 2, 1421 1072 (delay_ns + 1) & 1 ? 5 : 5, pass ? "PASS" : "FAIL"); 1422 - /* 1423 - * TODO: This is optimistic. We should look 1424 - * for a working interval and save the middle 1425 - * value in the read timing register. 1426 - */ 1073 + 1427 1074 if (pass) 1428 - return 0; 1075 + calib_res[hcycle][delay_ns + 1] = 1; 1429 1076 } 1430 1077 } 1431 1078 1079 + calib_point = aspeed_spi_ast2600_optimized_timing(6, 17, calib_res); 1432 1080 /* No good setting for this frequency */ 1433 - return -1; 1081 + if (calib_point == 0) 1082 + return -1; 1083 + 1084 + hcycle = calib_point / 17; 1085 + delay_ns = calib_point % 17; 1086 + 1087 + fread_timing_val = (TIMING_DELAY_DI | hcycle | (delay_ns << 4)) << shift; 1088 + 1089 + dev_dbg(aspi->dev, "timing val: %08x, final hcycle: %d, delay_ns: %d\n", 1090 + fread_timing_val, hcycle, delay_ns); 1091 + 1092 + writel(fread_timing_val, TIMING_REG_AST2600(chip)); 1093 + 1094 + return 0; 1434 1095 } 1435 1096 1436 1097 /* ··· 1454 1095 .timing = CE0_TIMING_COMPENSATION_REG, 1455 1096 .hclk_mask = 0xfffff0ff, 1456 1097 .hdiv_max = 1, 1098 + .min_window_size = 0x800000, 1457 1099 .calibrate = aspeed_spi_calibrate, 1100 + .get_clk_div = aspeed_get_clk_div_ast2400, 1458 1101 .segment_start = aspeed_spi_segment_start, 1459 1102 
.segment_end = aspeed_spi_segment_end, 1460 1103 .segment_reg = aspeed_spi_segment_reg, 1104 + .adjust_window = aspeed_adjust_window_ast2400, 1461 1105 }; 1462 1106 1463 1107 static const struct aspeed_spi_data ast2400_spi_data = { ··· 1471 1109 .timing = 0x14, 1472 1110 .hclk_mask = 0xfffff0ff, 1473 1111 .hdiv_max = 1, 1112 + .get_clk_div = aspeed_get_clk_div_ast2400, 1474 1113 .calibrate = aspeed_spi_calibrate, 1475 1114 /* No segment registers */ 1476 1115 }; ··· 1484 1121 .timing = CE0_TIMING_COMPENSATION_REG, 1485 1122 .hclk_mask = 0xffffd0ff, 1486 1123 .hdiv_max = 1, 1124 + .min_window_size = 0x800000, 1125 + .get_clk_div = aspeed_get_clk_div_ast2500, 1487 1126 .calibrate = aspeed_spi_calibrate, 1488 1127 .segment_start = aspeed_spi_segment_start, 1489 1128 .segment_end = aspeed_spi_segment_end, 1490 1129 .segment_reg = aspeed_spi_segment_reg, 1130 + .adjust_window = aspeed_adjust_window_ast2500, 1491 1131 }; 1492 1132 1493 1133 static const struct aspeed_spi_data ast2500_spi_data = { ··· 1501 1135 .timing = CE0_TIMING_COMPENSATION_REG, 1502 1136 .hclk_mask = 0xffffd0ff, 1503 1137 .hdiv_max = 1, 1138 + .min_window_size = 0x800000, 1139 + .get_clk_div = aspeed_get_clk_div_ast2500, 1504 1140 .calibrate = aspeed_spi_calibrate, 1505 1141 .segment_start = aspeed_spi_segment_start, 1506 1142 .segment_end = aspeed_spi_segment_end, 1507 1143 .segment_reg = aspeed_spi_segment_reg, 1144 + .adjust_window = aspeed_adjust_window_ast2500, 1508 1145 }; 1509 1146 1510 1147 static const struct aspeed_spi_data ast2600_fmc_data = { ··· 1519 1150 .timing = CE0_TIMING_COMPENSATION_REG, 1520 1151 .hclk_mask = 0xf0fff0ff, 1521 1152 .hdiv_max = 2, 1153 + .min_window_size = 0x200000, 1154 + .get_clk_div = aspeed_get_clk_div_ast2600, 1522 1155 .calibrate = aspeed_spi_ast2600_calibrate, 1523 1156 .segment_start = aspeed_spi_segment_ast2600_start, 1524 1157 .segment_end = aspeed_spi_segment_ast2600_end, 1525 1158 .segment_reg = aspeed_spi_segment_ast2600_reg, 1159 + .adjust_window = 
aspeed_adjust_window_ast2600, 1526 1160 }; 1527 1161 1528 1162 static const struct aspeed_spi_data ast2600_spi_data = { ··· 1537 1165 .timing = CE0_TIMING_COMPENSATION_REG, 1538 1166 .hclk_mask = 0xf0fff0ff, 1539 1167 .hdiv_max = 2, 1168 + .min_window_size = 0x200000, 1169 + .get_clk_div = aspeed_get_clk_div_ast2600, 1540 1170 .calibrate = aspeed_spi_ast2600_calibrate, 1541 1171 .segment_start = aspeed_spi_segment_ast2600_start, 1542 1172 .segment_end = aspeed_spi_segment_ast2600_end, 1543 1173 .segment_reg = aspeed_spi_segment_ast2600_reg, 1174 + .adjust_window = aspeed_adjust_window_ast2600, 1175 + }; 1176 + 1177 + static const struct aspeed_spi_data ast2700_fmc_data = { 1178 + .max_cs = 3, 1179 + .hastype = false, 1180 + .mode_bits = SPI_RX_QUAD | SPI_TX_QUAD, 1181 + .we0 = 16, 1182 + .ctl0 = CE0_CTRL_REG, 1183 + .timing = CE0_TIMING_COMPENSATION_REG, 1184 + .hclk_mask = 0xf0fff0ff, 1185 + .hdiv_max = 2, 1186 + .min_window_size = 0x10000, 1187 + .get_clk_div = aspeed_get_clk_div_ast2600, 1188 + .calibrate = aspeed_spi_ast2600_calibrate, 1189 + .segment_start = aspeed_spi_segment_ast2700_start, 1190 + .segment_end = aspeed_spi_segment_ast2700_end, 1191 + .segment_reg = aspeed_spi_segment_ast2700_reg, 1192 + }; 1193 + 1194 + static const struct aspeed_spi_data ast2700_spi_data = { 1195 + .max_cs = 2, 1196 + .hastype = false, 1197 + .mode_bits = SPI_RX_QUAD | SPI_TX_QUAD, 1198 + .we0 = 16, 1199 + .ctl0 = CE0_CTRL_REG, 1200 + .timing = CE0_TIMING_COMPENSATION_REG, 1201 + .hclk_mask = 0xf0fff0ff, 1202 + .hdiv_max = 2, 1203 + .min_window_size = 0x10000, 1204 + .get_clk_div = aspeed_get_clk_div_ast2600, 1205 + .calibrate = aspeed_spi_ast2600_calibrate, 1206 + .segment_start = aspeed_spi_segment_ast2700_start, 1207 + .segment_end = aspeed_spi_segment_ast2700_end, 1208 + .segment_reg = aspeed_spi_segment_ast2700_reg, 1544 1209 }; 1545 1210 1546 1211 static const struct of_device_id aspeed_spi_matches[] = { ··· 1587 1178 { .compatible = "aspeed,ast2500-spi", .data = 
&ast2500_spi_data }, 1588 1179 { .compatible = "aspeed,ast2600-fmc", .data = &ast2600_fmc_data }, 1589 1180 { .compatible = "aspeed,ast2600-spi", .data = &ast2600_spi_data }, 1181 + { .compatible = "aspeed,ast2700-fmc", .data = &ast2700_fmc_data }, 1182 + { .compatible = "aspeed,ast2700-spi", .data = &ast2700_spi_data }, 1590 1183 { } 1591 1184 }; 1592 1185 MODULE_DEVICE_TABLE(of, aspeed_spi_matches);
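The hunk above replaces the AST2600 driver's old first-pass-wins calibration with `aspeed_spi_ast2600_optimized_timing()`: pass/fail results for each hcycle/DI-delay combination are collected into a matrix, and the midpoint of the longest run of consecutive passes within a single row is chosen as the operating point. A standalone sketch of that selection, with illustrative names (not the driver's symbols):

```c
#include <assert.h>

/*
 * Scan each row of a pass/fail matrix, find the longest run of
 * consecutive passes that stays within one row, and return the flat
 * index of that run's midpoint (row * cols + centre column). Runs of
 * four or fewer passes are rejected as too narrow to be a safe
 * operating point, in which case 0 is returned.
 */
unsigned int centre_of_longest_run(unsigned int rows, unsigned int cols,
				   unsigned char buf[rows][cols])
{
	unsigned int r = 0, c = 0, max = 0;
	unsigned int i, j;

	for (i = 0; i < rows; i++) {
		for (j = 0; j < cols;) {
			unsigned int k = j;

			/* extend the current run of passes */
			while (k < cols && buf[i][k])
				k++;

			if (k - j > max) {
				max = k - j;
				r = i;
				c = j + (k - j) / 2;	/* midpoint of the run */
			}

			j = k + 1;	/* skip the failing entry */
		}
	}

	return max > 4 ? r * cols + c : 0;
}
```

With a 6 x 17 matrix as in the driver, the returned index decomposes back into the chosen settings as `hcycle = idx / 17` and `delay = idx % 17`.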
+2 -2
drivers/spi/spi-bcm63xx.c
··· 582 582 host->auto_runtime_pm = true; 583 583 bs->msg_type_shift = bs->reg_offsets[SPI_MSG_TYPE_SHIFT]; 584 584 bs->msg_ctl_width = bs->reg_offsets[SPI_MSG_CTL_WIDTH]; 585 - bs->tx_io = (u8 *)(bs->regs + bs->reg_offsets[SPI_MSG_DATA]); 586 - bs->rx_io = (const u8 *)(bs->regs + bs->reg_offsets[SPI_RX_DATA]); 585 + bs->tx_io = bs->regs + bs->reg_offsets[SPI_MSG_DATA]; 586 + bs->rx_io = bs->regs + bs->reg_offsets[SPI_RX_DATA]; 587 587 588 588 /* Initialize hardware */ 589 589 ret = clk_prepare_enable(bs->clk);
+93 -13
drivers/spi/spi-cadence.c
··· 109 109 * @rxbuf: Pointer to the RX buffer 110 110 * @tx_bytes: Number of bytes left to transfer 111 111 * @rx_bytes: Number of bytes requested 112 + * @n_bytes: Number of bytes per word 112 113 * @dev_busy: Device busy flag 113 114 * @is_decoded_cs: Flag for decoder property set or not 114 115 * @tx_fifo_depth: Depth of the TX FIFO ··· 121 120 struct clk *pclk; 122 121 unsigned int clk_rate; 123 122 u32 speed_hz; 124 - const u8 *txbuf; 125 - u8 *rxbuf; 123 + const void *txbuf; 124 + void *rxbuf; 126 125 int tx_bytes; 127 126 int rx_bytes; 127 + u8 n_bytes; 128 128 u8 dev_busy; 129 129 u32 is_decoded_cs; 130 130 unsigned int tx_fifo_depth; 131 131 struct reset_control *rstc; 132 + }; 133 + 134 + enum cdns_spi_frame_n_bytes { 135 + CDNS_SPI_N_BYTES_NULL = 0, 136 + CDNS_SPI_N_BYTES_U8 = 1, 137 + CDNS_SPI_N_BYTES_U16 = 2, 138 + CDNS_SPI_N_BYTES_U32 = 4 132 139 }; 133 140 134 141 /* Macros for the SPI controller read/write */ ··· 314 305 return 0; 315 306 } 316 307 308 + static u8 cdns_spi_n_bytes(struct spi_transfer *transfer) 309 + { 310 + if (transfer->bits_per_word <= 8) 311 + return CDNS_SPI_N_BYTES_U8; 312 + else if (transfer->bits_per_word <= 16) 313 + return CDNS_SPI_N_BYTES_U16; 314 + else 315 + return CDNS_SPI_N_BYTES_U32; 316 + } 317 + 318 + static inline void cdns_spi_reader(struct cdns_spi *xspi) 319 + { 320 + u32 rxw = 0; 321 + 322 + if (xspi->rxbuf && !IS_ALIGNED((uintptr_t)xspi->rxbuf, xspi->n_bytes)) { 323 + pr_err("%s: rxbuf address is not aligned for %d bytes\n", 324 + __func__, xspi->n_bytes); 325 + return; 326 + } 327 + 328 + rxw = cdns_spi_read(xspi, CDNS_SPI_RXD); 329 + if (xspi->rxbuf) { 330 + switch (xspi->n_bytes) { 331 + case CDNS_SPI_N_BYTES_U8: 332 + *(u8 *)xspi->rxbuf = rxw; 333 + break; 334 + case CDNS_SPI_N_BYTES_U16: 335 + *(u16 *)xspi->rxbuf = rxw; 336 + break; 337 + case CDNS_SPI_N_BYTES_U32: 338 + *(u32 *)xspi->rxbuf = rxw; 339 + break; 340 + default: 341 + pr_err("%s invalid n_bytes %d\n", __func__, 342 + xspi->n_bytes); 343 + 
return; 344 + } 345 + xspi->rxbuf = (u8 *)xspi->rxbuf + xspi->n_bytes; 346 + } 347 + } 348 + 349 + static inline void cdns_spi_writer(struct cdns_spi *xspi) 350 + { 351 + u32 txw = 0; 352 + 353 + if (xspi->txbuf && !IS_ALIGNED((uintptr_t)xspi->txbuf, xspi->n_bytes)) { 354 + pr_err("%s: txbuf address is not aligned for %d bytes\n", 355 + __func__, xspi->n_bytes); 356 + return; 357 + } 358 + 359 + if (xspi->txbuf) { 360 + switch (xspi->n_bytes) { 361 + case CDNS_SPI_N_BYTES_U8: 362 + txw = *(u8 *)xspi->txbuf; 363 + break; 364 + case CDNS_SPI_N_BYTES_U16: 365 + txw = *(u16 *)xspi->txbuf; 366 + break; 367 + case CDNS_SPI_N_BYTES_U32: 368 + txw = *(u32 *)xspi->txbuf; 369 + break; 370 + default: 371 + pr_err("%s invalid n_bytes %d\n", __func__, 372 + xspi->n_bytes); 373 + return; 374 + } 375 + cdns_spi_write(xspi, CDNS_SPI_TXD, txw); 376 + xspi->txbuf = (u8 *)xspi->txbuf + xspi->n_bytes; 377 + } 378 + } 379 + 317 380 /** 318 381 * cdns_spi_process_fifo - Fills the TX FIFO, and drain the RX FIFO 319 382 * @xspi: Pointer to the cdns_spi structure ··· 402 321 403 322 while (ntx || nrx) { 404 323 if (nrx) { 405 - u8 data = cdns_spi_read(xspi, CDNS_SPI_RXD); 406 - 407 - if (xspi->rxbuf) 408 - *xspi->rxbuf++ = data; 409 - 324 + cdns_spi_reader(xspi); 410 325 nrx--; 411 326 } 412 327 413 328 if (ntx) { 414 - if (xspi->txbuf) 415 - cdns_spi_write(xspi, CDNS_SPI_TXD, *xspi->txbuf++); 416 - else 417 - cdns_spi_write(xspi, CDNS_SPI_TXD, 0); 418 - 329 + cdns_spi_writer(xspi); 419 330 ntx--; 420 331 } 421 - 422 332 } 423 333 } 424 334 ··· 525 453 */ 526 454 if (cdns_spi_read(xspi, CDNS_SPI_ISR) & CDNS_SPI_IXR_TXFULL) 527 455 udelay(10); 456 + 457 + xspi->n_bytes = cdns_spi_n_bytes(transfer); 458 + xspi->tx_bytes = DIV_ROUND_UP(xspi->tx_bytes, xspi->n_bytes); 459 + xspi->rx_bytes = DIV_ROUND_UP(xspi->rx_bytes, xspi->n_bytes); 528 460 529 461 cdns_spi_process_fifo(xspi, xspi->tx_fifo_depth, 0); 530 462 ··· 730 654 ctlr->mode_bits = SPI_CPOL | SPI_CPHA; 731 655 ctlr->bits_per_word_mask 
= SPI_BPW_MASK(8); 732 656 657 + if (of_device_is_compatible(pdev->dev.of_node, "cix,sky1-spi-r1p6")) 658 + ctlr->bits_per_word_mask |= SPI_BPW_MASK(16) | SPI_BPW_MASK(32); 659 + 733 660 if (!spi_controller_is_target(ctlr)) { 734 661 ctlr->mode_bits |= SPI_CS_HIGH; 735 662 ctlr->set_cs = cdns_spi_chipselect; ··· 876 797 877 798 static const struct of_device_id cdns_spi_of_match[] = { 878 799 { .compatible = "xlnx,zynq-spi-r1p6" }, 800 + { .compatible = "cix,sky1-spi-r1p6" }, 879 801 { .compatible = "cdns,spi-r1p6" }, 880 802 { /* end of table */ } 881 803 };
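The Cadence changes above size FIFO accesses by `bits_per_word`: transfers of up to 8 bits move one byte per FIFO word, up to 16 bits two bytes, anything wider four, and the remaining tx/rx byte counts are converted to word counts with a round-up division. A minimal model of that mapping (function names are illustrative, not the driver's):

```c
#include <assert.h>

/* Map a transfer's bits_per_word to bytes per FIFO word, as
 * cdns_spi_n_bytes() does in the patch above. */
unsigned int spi_word_bytes(unsigned int bits_per_word)
{
	if (bits_per_word <= 8)
		return 1;
	else if (bits_per_word <= 16)
		return 2;
	return 4;
}

/* Convert a remaining byte count to a word count, rounding up so a
 * partial final word still gets transferred. */
unsigned int spi_words_left(unsigned int bytes, unsigned int n_bytes)
{
	return (bytes + n_bytes - 1) / n_bytes;	/* DIV_ROUND_UP(bytes, n_bytes) */
}
```

The round-up matters when the byte count is not a multiple of the word size: 5 bytes at 16 bits per word still needs 3 FIFO accesses, not 2.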
+1 -1
drivers/spi/spi-ch341.c
··· 78 78 79 79 ch341->tx_buf[0] = CH341A_CMD_SPI_STREAM; 80 80 81 - memcpy(ch341->tx_buf + 1, trans->tx_buf, len); 81 + memcpy(ch341->tx_buf + 1, trans->tx_buf, len - 1); 82 82 83 83 ret = usb_bulk_msg(ch341->udev, ch341->write_pipe, ch341->tx_buf, len, 84 84 NULL, CH341_DEFAULT_TIMEOUT);
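The one-line ch341 fix above addresses an off-by-one: slot 0 of the bulk buffer holds the stream command byte, so when `len` counts the whole buffer only `len - 1` payload bytes fit after it, and the pre-fix `memcpy()` of `len` bytes copied one byte too many. A minimal sketch of the corrected bounds (the helper name and the hard-coded opcode value here are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CH341A_CMD_SPI_STREAM 0xA8	/* illustrative opcode value */

/*
 * Fill a bulk-out buffer of `len` bytes: one command byte, then at
 * most len - 1 payload bytes from the caller. Copying `len` bytes
 * here, as before the fix, would overrun by one.
 */
void ch341_fill_urb(unsigned char *urb_buf, const unsigned char *tx_buf,
		    size_t len)
{
	urb_buf[0] = CH341A_CMD_SPI_STREAM;	/* command occupies slot 0 */
	memcpy(urb_buf + 1, tx_buf, len - 1);	/* payload fits in len - 1 */
}
```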
+62 -2
drivers/spi/spi-davinci.c
··· 9 9 #include <linux/gpio/consumer.h> 10 10 #include <linux/module.h> 11 11 #include <linux/delay.h> 12 + #include <linux/platform_data/edma.h> 12 13 #include <linux/platform_device.h> 13 14 #include <linux/err.h> 14 15 #include <linux/clk.h> ··· 19 18 #include <linux/spi/spi.h> 20 19 #include <linux/spi/spi_bitbang.h> 21 20 #include <linux/slab.h> 22 - 23 - #include <linux/platform_data/spi-davinci.h> 24 21 25 22 #define CS_DEFAULT 0xFF 26 23 ··· 97 98 #define SPIDEF 0x4c 98 99 #define SPIFMT0 0x50 99 100 101 + #define SPI_IO_TYPE_POLL 1 102 + #define SPI_IO_TYPE_DMA 2 103 + 100 104 #define DMA_MIN_BYTES 16 105 + 106 + enum { 107 + SPI_VERSION_1, /* For DM355/DM365/DM6467 */ 108 + SPI_VERSION_2, /* For DA8xx */ 109 + }; 110 + 111 + /** 112 + * struct davinci_spi_platform_data - Platform data for SPI master device on DaVinci 113 + * 114 + * @version: version of the SPI IP. Different DaVinci devices have slightly 115 + * varying versions of the same IP. 116 + * @num_chipselect: number of chipselects supported by this SPI master 117 + * @intr_line: interrupt line used to connect the SPI IP to the ARM interrupt 118 + * controller within the SoC. Possible values are 0 and 1. 119 + * @prescaler_limit: max clock prescaler value 120 + * @cshold_bug: set this to true if the SPI controller on your chip requires 121 + * a write to CSHOLD bit in between transfers (like in DM355). 122 + * @dma_event_q: DMA event queue to use if SPI_IO_TYPE_DMA is used for any 123 + * device on the bus. 124 + */ 125 + struct davinci_spi_platform_data { 126 + u8 version; 127 + u8 num_chipselect; 128 + u8 intr_line; 129 + u8 prescaler_limit; 130 + bool cshold_bug; 131 + enum dma_event_q dma_event_q; 132 + }; 133 + 134 + /** 135 + * struct davinci_spi_config - Per-chip-select configuration for SPI slave devices 136 + * 137 + * @wdelay: amount of delay between transmissions. Measured in number of 138 + * SPI module clocks. 
139 + * @odd_parity: polarity of parity flag at the end of transmit data stream. 140 + * 0 - odd parity, 1 - even parity. 141 + * @parity_enable: enable transmission of parity at end of each transmit 142 + * data stream. 143 + * @io_type: type of IO transfer. Choose between polled, interrupt and DMA. 144 + * @timer_disable: disable chip-select timers (setup and hold) 145 + * @c2tdelay: chip-select setup time. Measured in number of SPI module clocks. 146 + * @t2cdelay: chip-select hold time. Measured in number of SPI module clocks. 147 + * @t2edelay: transmit data finished to SPI ENAn pin inactive time. Measured 148 + * in number of SPI clocks. 149 + * @c2edelay: chip-select active to SPI ENAn signal active time. Measured in 150 + * number of SPI clocks. 151 + */ 152 + struct davinci_spi_config { 153 + u8 wdelay; 154 + u8 odd_parity; 155 + u8 parity_enable; 156 + u8 io_type; 157 + u8 timer_disable; 158 + u8 c2tdelay; 159 + u8 t2cdelay; 160 + u8 t2edelay; 161 + u8 c2edelay; 162 + }; 101 163 102 164 /* SPI Controller driver's private data. */ 103 165 struct davinci_spi {
+2 -2
drivers/spi/spi-dw-bt1.c
··· 288 288 289 289 pm_runtime_enable(&pdev->dev); 290 290 291 - ret = dw_spi_add_host(&pdev->dev, dws); 291 + ret = dw_spi_add_controller(&pdev->dev, dws); 292 292 if (ret) { 293 293 pm_runtime_disable(&pdev->dev); 294 294 return ret; ··· 303 303 { 304 304 struct dw_spi_bt1 *dwsbt1 = platform_get_drvdata(pdev); 305 305 306 - dw_spi_remove_host(&dwsbt1->dws); 306 + dw_spi_remove_controller(&dwsbt1->dws); 307 307 308 308 pm_runtime_disable(&pdev->dev); 309 309 }
+110 -78
drivers/spi/spi-dw-core.c
··· 63 63 { 64 64 char name[32]; 65 65 66 - snprintf(name, 32, "dw_spi%d", dws->host->bus_num); 66 + snprintf(name, 32, "dw_spi%d", dws->ctlr->bus_num); 67 67 dws->debugfs = debugfs_create_dir(name, NULL); 68 68 69 69 dws->regset.regs = dw_spi_dbgfs_regs; ··· 185 185 irq_status = dw_readl(dws, DW_SPI_ISR); 186 186 187 187 if (irq_status & DW_SPI_INT_RXOI) { 188 - dev_err(&dws->host->dev, "RX FIFO overflow detected\n"); 188 + dev_err(&dws->ctlr->dev, "RX FIFO overflow detected\n"); 189 189 ret = -EIO; 190 190 } 191 191 192 192 if (irq_status & DW_SPI_INT_RXUI) { 193 - dev_err(&dws->host->dev, "RX FIFO underflow detected\n"); 193 + dev_err(&dws->ctlr->dev, "RX FIFO underflow detected\n"); 194 194 ret = -EIO; 195 195 } 196 196 197 197 if (irq_status & DW_SPI_INT_TXOI) { 198 - dev_err(&dws->host->dev, "TX FIFO overflow detected\n"); 198 + dev_err(&dws->ctlr->dev, "TX FIFO overflow detected\n"); 199 199 ret = -EIO; 200 200 } 201 201 202 202 /* Generically handle the erroneous situation */ 203 203 if (ret) { 204 204 dw_spi_reset_chip(dws); 205 - if (dws->host->cur_msg) 206 - dws->host->cur_msg->status = ret; 205 + if (dws->ctlr->cur_msg) 206 + dws->ctlr->cur_msg->status = ret; 207 207 } 208 208 209 209 return ret; ··· 215 215 u16 irq_status = dw_readl(dws, DW_SPI_ISR); 216 216 217 217 if (dw_spi_check_status(dws, false)) { 218 - spi_finalize_current_transfer(dws->host); 218 + spi_finalize_current_transfer(dws->ctlr); 219 219 return IRQ_HANDLED; 220 220 } 221 221 ··· 229 229 dw_reader(dws); 230 230 if (!dws->rx_len) { 231 231 dw_spi_mask_intr(dws, 0xff); 232 - spi_finalize_current_transfer(dws->host); 232 + spi_finalize_current_transfer(dws->ctlr); 233 233 } else if (dws->rx_len <= dw_readl(dws, DW_SPI_RXFTLR)) { 234 234 dw_writel(dws, DW_SPI_RXFTLR, dws->rx_len - 1); 235 235 } ··· 250 250 251 251 static irqreturn_t dw_spi_irq(int irq, void *dev_id) 252 252 { 253 - struct spi_controller *host = dev_id; 254 - struct dw_spi *dws = spi_controller_get_devdata(host); 253 + 
struct spi_controller *ctlr = dev_id; 254 + struct dw_spi *dws = spi_controller_get_devdata(ctlr); 255 255 u16 irq_status = dw_readl(dws, DW_SPI_ISR) & DW_SPI_INT_MASK; 256 256 257 257 if (!irq_status) 258 258 return IRQ_NONE; 259 259 260 - if (!host->cur_msg) { 260 + if (!ctlr->cur_msg) { 261 261 dw_spi_mask_intr(dws, 0xff); 262 262 return IRQ_HANDLED; 263 263 } ··· 331 331 cr0 |= FIELD_PREP(DW_HSSI_CTRLR0_TMOD_MASK, cfg->tmode); 332 332 333 333 dw_writel(dws, DW_SPI_CTRLR0, cr0); 334 + 335 + if (spi_controller_is_target(dws->ctlr)) 336 + return; 334 337 335 338 if (cfg->tmode == DW_SPI_CTRLR0_TMOD_EPROMREAD || 336 339 cfg->tmode == DW_SPI_CTRLR0_TMOD_RO) ··· 413 410 return 0; 414 411 } 415 412 416 - static int dw_spi_transfer_one(struct spi_controller *host, 413 + static int dw_spi_transfer_one(struct spi_controller *ctlr, 417 414 struct spi_device *spi, 418 415 struct spi_transfer *transfer) 419 416 { 420 - struct dw_spi *dws = spi_controller_get_devdata(host); 417 + struct dw_spi *dws = spi_controller_get_devdata(ctlr); 421 418 struct dw_spi_cfg cfg = { 422 419 .tmode = DW_SPI_CTRLR0_TMOD_TR, 423 420 .dfs = transfer->bits_per_word, ··· 442 439 transfer->effective_speed_hz = dws->current_freq; 443 440 444 441 /* Check if current transfer is a DMA transaction */ 445 - dws->dma_mapped = spi_xfer_is_dma_mapped(host, spi, transfer); 442 + dws->dma_mapped = spi_xfer_is_dma_mapped(ctlr, spi, transfer); 446 443 447 444 /* For poll mode just disable all interrupts */ 448 445 dw_spi_mask_intr(dws, 0xff); ··· 465 462 return 1; 466 463 } 467 464 468 - static void dw_spi_handle_err(struct spi_controller *host, 469 - struct spi_message *msg) 465 + static inline void dw_spi_abort(struct spi_controller *ctlr) 470 466 { 471 - struct dw_spi *dws = spi_controller_get_devdata(host); 467 + struct dw_spi *dws = spi_controller_get_devdata(ctlr); 472 468 473 469 if (dws->dma_mapped) 474 470 dws->dma_ops->dma_stop(dws); 475 471 476 472 dw_spi_reset_chip(dws); 473 + } 474 + 475 + static 
void dw_spi_handle_err(struct spi_controller *ctlr, 476 + struct spi_message *msg) 477 + { 478 + dw_spi_abort(ctlr); 479 + } 480 + 481 + static int dw_spi_target_abort(struct spi_controller *ctlr) 482 + { 483 + dw_spi_abort(ctlr); 484 + 485 + return 0; 477 486 } 478 487 479 488 static int dw_spi_adjust_mem_op_size(struct spi_mem *mem, struct spi_mem_op *op) ··· 589 574 while (len) { 590 575 entries = readl_relaxed(dws->regs + DW_SPI_TXFLR); 591 576 if (!entries) { 592 - dev_err(&dws->host->dev, "CS de-assertion on Tx\n"); 577 + dev_err(&dws->ctlr->dev, "CS de-assertion on Tx\n"); 593 578 return -EIO; 594 579 } 595 580 room = min(dws->fifo_len - entries, len); ··· 609 594 if (!entries) { 610 595 sts = readl_relaxed(dws->regs + DW_SPI_RISR); 611 596 if (sts & DW_SPI_INT_RXOI) { 612 - dev_err(&dws->host->dev, "FIFO overflow on Rx\n"); 597 + dev_err(&dws->ctlr->dev, "FIFO overflow on Rx\n"); 613 598 return -EIO; 614 599 } 615 600 continue; ··· 650 635 spi_delay_exec(&delay, NULL); 651 636 652 637 if (retry < 0) { 653 - dev_err(&dws->host->dev, "Mem op hanged up\n"); 638 + dev_err(&dws->ctlr->dev, "Mem op hanged up\n"); 654 639 return -EIO; 655 640 } 656 641 ··· 849 834 DW_SPI_GET_BYTE(dws->ver, 1)); 850 835 } 851 836 852 - /* 853 - * Try to detect the number of native chip-selects if the platform 854 - * driver didn't set it up. There can be up to 16 lines configured. 855 - */ 856 - if (!dws->num_cs) { 857 - u32 ser; 837 + if (spi_controller_is_target(dws->ctlr)) { 838 + /* There is only one CS input signal in target mode */ 839 + dws->num_cs = 1; 840 + } else { 841 + /* 842 + * Try to detect the number of native chip-selects if the platform 843 + * driver didn't set it up. There can be up to 16 lines configured. 
844 + */ 845 + if (!dws->num_cs) { 846 + u32 ser; 858 847 859 - dw_writel(dws, DW_SPI_SER, 0xffff); 860 - ser = dw_readl(dws, DW_SPI_SER); 861 - dw_writel(dws, DW_SPI_SER, 0); 848 + dw_writel(dws, DW_SPI_SER, 0xffff); 849 + ser = dw_readl(dws, DW_SPI_SER); 850 + dw_writel(dws, DW_SPI_SER, 0); 862 851 863 - dws->num_cs = hweight16(ser); 852 + dws->num_cs = hweight16(ser); 853 + } 864 854 } 865 855 866 856 /* ··· 918 898 .per_op_freq = true, 919 899 }; 920 900 921 - int dw_spi_add_host(struct device *dev, struct dw_spi *dws) 901 + int dw_spi_add_controller(struct device *dev, struct dw_spi *dws) 922 902 { 923 - struct spi_controller *host; 903 + struct spi_controller *ctlr; 904 + bool target; 924 905 int ret; 925 906 926 907 if (!dws) 927 908 return -EINVAL; 928 909 929 - host = spi_alloc_host(dev, 0); 930 - if (!host) 910 + target = device_property_read_bool(dev, "spi-slave"); 911 + if (target) 912 + ctlr = spi_alloc_target(dev, 0); 913 + else 914 + ctlr = spi_alloc_host(dev, 0); 915 + 916 + if (!ctlr) 931 917 return -ENOMEM; 932 918 933 - device_set_node(&host->dev, dev_fwnode(dev)); 919 + device_set_node(&ctlr->dev, dev_fwnode(dev)); 934 920 935 - dws->host = host; 921 + dws->ctlr = ctlr; 936 922 dws->dma_addr = (dma_addr_t)(dws->paddr + DW_SPI_DR); 937 923 938 - spi_controller_set_devdata(host, dws); 924 + spi_controller_set_devdata(ctlr, dws); 939 925 940 926 /* Basic HW init */ 941 927 dw_spi_hw_init(dev, dws); 942 928 943 929 ret = request_irq(dws->irq, dw_spi_irq, IRQF_SHARED, dev_name(dev), 944 - host); 930 + ctlr); 945 931 if (ret < 0 && ret != -ENOTCONN) { 946 932 dev_err(dev, "can not get IRQ\n"); 947 - goto err_free_host; 933 + goto err_free_ctlr; 948 934 } 949 935 950 936 dw_spi_init_mem_ops(dws); 951 937 952 - host->use_gpio_descriptors = true; 953 - host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LOOP; 938 + ctlr->mode_bits = SPI_CPOL | SPI_CPHA; 954 939 if (dws->caps & DW_SPI_CAP_DFS32) 955 - host->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 940 + 
ctlr->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 956 941 else 957 - host->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16); 958 - host->bus_num = dws->bus_num; 959 - host->num_chipselect = dws->num_cs; 960 - host->setup = dw_spi_setup; 961 - host->cleanup = dw_spi_cleanup; 962 - if (dws->set_cs) 963 - host->set_cs = dws->set_cs; 964 - else 965 - host->set_cs = dw_spi_set_cs; 966 - host->transfer_one = dw_spi_transfer_one; 967 - host->handle_err = dw_spi_handle_err; 968 - if (dws->mem_ops.exec_op) { 969 - host->mem_ops = &dws->mem_ops; 970 - host->mem_caps = &dw_spi_mem_caps; 942 + ctlr->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16); 943 + ctlr->bus_num = dws->bus_num; 944 + ctlr->num_chipselect = dws->num_cs; 945 + ctlr->setup = dw_spi_setup; 946 + ctlr->cleanup = dw_spi_cleanup; 947 + ctlr->transfer_one = dw_spi_transfer_one; 948 + ctlr->handle_err = dw_spi_handle_err; 949 + ctlr->auto_runtime_pm = true; 950 + 951 + if (!target) { 952 + ctlr->use_gpio_descriptors = true; 953 + ctlr->mode_bits |= SPI_LOOP; 954 + if (dws->set_cs) 955 + ctlr->set_cs = dws->set_cs; 956 + else 957 + ctlr->set_cs = dw_spi_set_cs; 958 + if (dws->mem_ops.exec_op) { 959 + ctlr->mem_ops = &dws->mem_ops; 960 + ctlr->mem_caps = &dw_spi_mem_caps; 961 + } 962 + ctlr->max_speed_hz = dws->max_freq; 963 + ctlr->flags = SPI_CONTROLLER_GPIO_SS; 964 + } else { 965 + ctlr->target_abort = dw_spi_target_abort; 971 966 } 972 - host->max_speed_hz = dws->max_freq; 973 - host->flags = SPI_CONTROLLER_GPIO_SS; 974 - host->auto_runtime_pm = true; 975 967 976 968 /* Get default rx sample delay */ 977 969 device_property_read_u32(dev, "rx-sample-delay-ns", ··· 996 964 } else if (ret) { 997 965 dev_warn(dev, "DMA init failed\n"); 998 966 } else { 999 - host->can_dma = dws->dma_ops->can_dma; 1000 - host->flags |= SPI_CONTROLLER_MUST_TX; 967 + ctlr->can_dma = dws->dma_ops->can_dma; 968 + ctlr->flags |= SPI_CONTROLLER_MUST_TX; 1001 969 } 1002 970 } 1003 971 1004 - ret = spi_register_controller(host); 972 + ret = 
spi_register_controller(ctlr); 1005 973 if (ret) { 1006 - dev_err_probe(dev, ret, "problem registering spi host\n"); 974 + dev_err_probe(dev, ret, "problem registering spi controller\n"); 1007 975 goto err_dma_exit; 1008 976 } 1009 977 ··· 1015 983 dws->dma_ops->dma_exit(dws); 1016 984 dw_spi_enable_chip(dws, 0); 1017 985 err_free_irq: 1018 - free_irq(dws->irq, host); 1019 - err_free_host: 1020 - spi_controller_put(host); 986 + free_irq(dws->irq, ctlr); 987 + err_free_ctlr: 988 + spi_controller_put(ctlr); 1021 989 return ret; 1022 990 } 1023 - EXPORT_SYMBOL_NS_GPL(dw_spi_add_host, "SPI_DW_CORE"); 991 + EXPORT_SYMBOL_NS_GPL(dw_spi_add_controller, "SPI_DW_CORE"); 1024 992 1025 - void dw_spi_remove_host(struct dw_spi *dws) 993 + void dw_spi_remove_controller(struct dw_spi *dws) 1026 994 { 1027 995 dw_spi_debugfs_remove(dws); 1028 996 1029 - spi_unregister_controller(dws->host); 997 + spi_unregister_controller(dws->ctlr); 1030 998 1031 999 if (dws->dma_ops && dws->dma_ops->dma_exit) 1032 1000 dws->dma_ops->dma_exit(dws); 1033 1001 1034 1002 dw_spi_shutdown_chip(dws); 1035 1003 1036 - free_irq(dws->irq, dws->host); 1004 + free_irq(dws->irq, dws->ctlr); 1037 1005 } 1038 - EXPORT_SYMBOL_NS_GPL(dw_spi_remove_host, "SPI_DW_CORE"); 1006 + EXPORT_SYMBOL_NS_GPL(dw_spi_remove_controller, "SPI_DW_CORE"); 1039 1007 1040 - int dw_spi_suspend_host(struct dw_spi *dws) 1008 + int dw_spi_suspend_controller(struct dw_spi *dws) 1041 1009 { 1042 1010 int ret; 1043 1011 1044 - ret = spi_controller_suspend(dws->host); 1012 + ret = spi_controller_suspend(dws->ctlr); 1045 1013 if (ret) 1046 1014 return ret; 1047 1015 1048 1016 dw_spi_shutdown_chip(dws); 1049 1017 return 0; 1050 1018 } 1051 - EXPORT_SYMBOL_NS_GPL(dw_spi_suspend_host, "SPI_DW_CORE"); 1019 + EXPORT_SYMBOL_NS_GPL(dw_spi_suspend_controller, "SPI_DW_CORE"); 1052 1020 1053 - int dw_spi_resume_host(struct dw_spi *dws) 1021 + int dw_spi_resume_controller(struct dw_spi *dws) 1054 1022 { 1055 - dw_spi_hw_init(&dws->host->dev, dws); 
1056 - return spi_controller_resume(dws->host); 1023 + dw_spi_hw_init(&dws->ctlr->dev, dws); 1024 + return spi_controller_resume(dws->ctlr); 1057 1025 } 1058 - EXPORT_SYMBOL_NS_GPL(dw_spi_resume_host, "SPI_DW_CORE"); 1026 + EXPORT_SYMBOL_NS_GPL(dw_spi_resume_controller, "SPI_DW_CORE"); 1059 1027 1060 1028 MODULE_AUTHOR("Feng Tang <feng.tang@intel.com>"); 1061 1029 MODULE_DESCRIPTION("Driver for DesignWare SPI controller core");
+11 -11
drivers/spi/spi-dw-dma.c
··· 139 139 if (!dws->txchan) 140 140 goto free_rxchan; 141 141 142 - dws->host->dma_rx = dws->rxchan; 143 - dws->host->dma_tx = dws->txchan; 142 + dws->ctlr->dma_rx = dws->rxchan; 143 + dws->ctlr->dma_tx = dws->txchan; 144 144 145 145 init_completion(&dws->dma_completion); 146 146 ··· 183 183 goto free_rxchan; 184 184 } 185 185 186 - dws->host->dma_rx = dws->rxchan; 187 - dws->host->dma_tx = dws->txchan; 186 + dws->ctlr->dma_rx = dws->rxchan; 187 + dws->ctlr->dma_tx = dws->txchan; 188 188 189 189 init_completion(&dws->dma_completion); 190 190 ··· 242 242 } 243 243 } 244 244 245 - static bool dw_spi_can_dma(struct spi_controller *host, 245 + static bool dw_spi_can_dma(struct spi_controller *ctlr, 246 246 struct spi_device *spi, struct spi_transfer *xfer) 247 247 { 248 - struct dw_spi *dws = spi_controller_get_devdata(host); 248 + struct dw_spi *dws = spi_controller_get_devdata(ctlr); 249 249 enum dma_slave_buswidth dma_bus_width; 250 250 251 251 if (xfer->len <= dws->fifo_len) ··· 271 271 msecs_to_jiffies(ms)); 272 272 273 273 if (ms == 0) { 274 - dev_err(&dws->host->cur_msg->spi->dev, 274 + dev_err(&dws->ctlr->cur_msg->spi->dev, 275 275 "DMA transaction timed out\n"); 276 276 return -ETIMEDOUT; 277 277 } ··· 299 299 spi_delay_exec(&delay, xfer); 300 300 301 301 if (retry < 0) { 302 - dev_err(&dws->host->dev, "Tx hanged up\n"); 302 + dev_err(&dws->ctlr->dev, "Tx hanged up\n"); 303 303 return -EIO; 304 304 } 305 305 ··· 400 400 spi_delay_exec(&delay, NULL); 401 401 402 402 if (retry < 0) { 403 - dev_err(&dws->host->dev, "Rx hanged up\n"); 403 + dev_err(&dws->ctlr->dev, "Rx hanged up\n"); 404 404 return -EIO; 405 405 } 406 406 ··· 656 656 if (ret) 657 657 return ret; 658 658 659 - if (dws->host->cur_msg->status == -EINPROGRESS) { 659 + if (dws->ctlr->cur_msg->status == -EINPROGRESS) { 660 660 ret = dw_spi_dma_wait_tx_done(dws, xfer); 661 661 if (ret) 662 662 return ret; 663 663 } 664 664 665 - if (xfer->rx_buf && dws->host->cur_msg->status == -EINPROGRESS) 665 + if 
(xfer->rx_buf && dws->ctlr->cur_msg->status == -EINPROGRESS) 666 666 ret = dw_spi_dma_wait_rx_done(dws); 667 667 668 668 return ret;
+2 -7
drivers/spi/spi-dw-mmio.c
··· 321 321 struct dw_spi *dws; 322 322 int ret; 323 323 324 - if (device_property_read_bool(&pdev->dev, "spi-slave")) { 325 - dev_warn(&pdev->dev, "spi-slave is not yet supported\n"); 326 - return -ENODEV; 327 - } 328 - 329 324 dwsmmio = devm_kzalloc(&pdev->dev, sizeof(struct dw_spi_mmio), 330 325 GFP_KERNEL); 331 326 if (!dwsmmio) ··· 377 382 378 383 pm_runtime_enable(&pdev->dev); 379 384 380 - ret = dw_spi_add_host(&pdev->dev, dws); 385 + ret = dw_spi_add_controller(&pdev->dev, dws); 381 386 if (ret) 382 387 goto out; 383 388 ··· 396 401 { 397 402 struct dw_spi_mmio *dwsmmio = platform_get_drvdata(pdev); 398 403 399 - dw_spi_remove_host(&dwsmmio->dws); 404 + dw_spi_remove_controller(&dwsmmio->dws); 400 405 pm_runtime_disable(&pdev->dev); 401 406 reset_control_assert(dwsmmio->rstc); 402 407 }
+4 -4
drivers/spi/spi-dw-pci.c
··· 127 127 goto err_free_irq_vectors; 128 128 } 129 129 130 - ret = dw_spi_add_host(&pdev->dev, dws); 130 + ret = dw_spi_add_controller(&pdev->dev, dws); 131 131 if (ret) 132 132 goto err_free_irq_vectors; 133 133 ··· 156 156 pm_runtime_forbid(&pdev->dev); 157 157 pm_runtime_get_noresume(&pdev->dev); 158 158 159 - dw_spi_remove_host(dws); 159 + dw_spi_remove_controller(dws); 160 160 pci_free_irq_vectors(pdev); 161 161 } 162 162 ··· 165 165 { 166 166 struct dw_spi *dws = dev_get_drvdata(dev); 167 167 168 - return dw_spi_suspend_host(dws); 168 + return dw_spi_suspend_controller(dws); 169 169 } 170 170 171 171 static int dw_spi_pci_resume(struct device *dev) 172 172 { 173 173 struct dw_spi *dws = dev_get_drvdata(dev); 174 174 175 - return dw_spi_resume_host(dws); 175 + return dw_spi_resume_controller(dws); 176 176 } 177 177 #endif 178 178
+6 -6
drivers/spi/spi-dw.h
··· 142 142 int (*dma_init)(struct device *dev, struct dw_spi *dws); 143 143 void (*dma_exit)(struct dw_spi *dws); 144 144 int (*dma_setup)(struct dw_spi *dws, struct spi_transfer *xfer); 145 - bool (*can_dma)(struct spi_controller *host, struct spi_device *spi, 145 + bool (*can_dma)(struct spi_controller *ctlr, struct spi_device *spi, 146 146 struct spi_transfer *xfer); 147 147 int (*dma_transfer)(struct dw_spi *dws, struct spi_transfer *xfer); 148 148 void (*dma_stop)(struct dw_spi *dws); 149 149 }; 150 150 151 151 struct dw_spi { 152 - struct spi_controller *host; 152 + struct spi_controller *ctlr; 153 153 154 154 u32 ip; /* Synopsys DW SSI IP-core ID */ 155 155 u32 ver; /* Synopsys component version */ ··· 288 288 extern void dw_spi_update_config(struct dw_spi *dws, struct spi_device *spi, 289 289 struct dw_spi_cfg *cfg); 290 290 extern int dw_spi_check_status(struct dw_spi *dws, bool raw); 291 - extern int dw_spi_add_host(struct device *dev, struct dw_spi *dws); 292 - extern void dw_spi_remove_host(struct dw_spi *dws); 293 - extern int dw_spi_suspend_host(struct dw_spi *dws); 294 - extern int dw_spi_resume_host(struct dw_spi *dws); 291 + extern int dw_spi_add_controller(struct device *dev, struct dw_spi *dws); 292 + extern void dw_spi_remove_controller(struct dw_spi *dws); 293 + extern int dw_spi_suspend_controller(struct dw_spi *dws); 294 + extern int dw_spi_resume_controller(struct dw_spi *dws); 295 295 296 296 #ifdef CONFIG_SPI_DW_DMA 297 297
+63 -25
drivers/spi/spi-fsl-qspi.c
··· 36 36 #include <linux/of.h> 37 37 #include <linux/platform_device.h> 38 38 #include <linux/pm_qos.h> 39 + #include <linux/reset.h> 39 40 #include <linux/sizes.h> 40 41 41 42 #include <linux/spi/spi.h> ··· 197 196 */ 198 197 #define QUADSPI_QUIRK_USE_TDH_SETTING BIT(5) 199 198 199 + /* 200 + * Do not disable the "qspi" clock when changing its rate. 201 + */ 202 + #define QUADSPI_QUIRK_SKIP_CLK_DISABLE BIT(6) 203 + 200 204 struct fsl_qspi_devtype_data { 201 205 unsigned int rxfifo; 202 206 unsigned int txfifo; 203 207 int invalid_mstrid; 204 208 unsigned int ahb_buf_size; 209 + unsigned int sfa_size; 205 210 unsigned int quirks; 206 211 bool little_endian; 207 212 }; ··· 268 261 .little_endian = true, 269 262 }; 270 263 264 + static const struct fsl_qspi_devtype_data spacemit_k1_data = { 265 + .rxfifo = SZ_128, 266 + .txfifo = SZ_256, 267 + .ahb_buf_size = SZ_512, 268 + .sfa_size = SZ_1K, 269 + .invalid_mstrid = QUADSPI_BUFXCR_INVALID_MSTRID, 270 + .quirks = QUADSPI_QUIRK_TKT253890 | QUADSPI_QUIRK_SKIP_CLK_DISABLE, 271 + .little_endian = true, 272 + }; 273 + 271 274 struct fsl_qspi { 272 275 void __iomem *iobase; 273 276 void __iomem *ahb_addr; 274 277 const struct fsl_qspi_devtype_data *devtype_data; 275 278 struct mutex lock; 276 279 struct completion c; 280 + struct reset_control *resets; 277 281 struct clk *clk, *clk_en; 278 282 struct pm_qos_request pm_qos_req; 279 283 struct device *dev; ··· 292 274 u32 memmap_phy; 293 275 }; 294 276 295 - static inline int needs_swap_endian(struct fsl_qspi *q) 277 + static bool needs_swap_endian(struct fsl_qspi *q) 296 278 { 297 - return q->devtype_data->quirks & QUADSPI_QUIRK_SWAP_ENDIAN; 279 + return !!(q->devtype_data->quirks & QUADSPI_QUIRK_SWAP_ENDIAN); 298 280 } 299 281 300 - static inline int needs_4x_clock(struct fsl_qspi *q) 282 + static bool needs_4x_clock(struct fsl_qspi *q) 301 283 { 302 - return q->devtype_data->quirks & QUADSPI_QUIRK_4X_INT_CLK; 284 + return !!(q->devtype_data->quirks & 
QUADSPI_QUIRK_4X_INT_CLK); 303 285 } 304 286 305 - static inline int needs_fill_txfifo(struct fsl_qspi *q) 287 + static bool needs_fill_txfifo(struct fsl_qspi *q) 306 288 { 307 - return q->devtype_data->quirks & QUADSPI_QUIRK_TKT253890; 289 + return !!(q->devtype_data->quirks & QUADSPI_QUIRK_TKT253890); 308 290 } 309 291 310 - static inline int needs_wakeup_wait_mode(struct fsl_qspi *q) 292 + static bool needs_wakeup_wait_mode(struct fsl_qspi *q) 311 293 { 312 - return q->devtype_data->quirks & QUADSPI_QUIRK_TKT245618; 294 + return !!(q->devtype_data->quirks & QUADSPI_QUIRK_TKT245618); 313 295 } 314 296 315 - static inline int needs_amba_base_offset(struct fsl_qspi *q) 297 + static bool needs_amba_base_offset(struct fsl_qspi *q) 316 298 { 317 299 return !(q->devtype_data->quirks & QUADSPI_QUIRK_BASE_INTERNAL); 318 300 } 319 301 320 - static inline int needs_tdh_setting(struct fsl_qspi *q) 302 + static bool needs_tdh_setting(struct fsl_qspi *q) 321 303 { 322 - return q->devtype_data->quirks & QUADSPI_QUIRK_USE_TDH_SETTING; 304 + return !!(q->devtype_data->quirks & QUADSPI_QUIRK_USE_TDH_SETTING); 305 + } 306 + 307 + static bool needs_clk_disable(struct fsl_qspi *q) 308 + { 309 + return !(q->devtype_data->quirks & QUADSPI_QUIRK_SKIP_CLK_DISABLE); 323 310 } 324 311 325 312 /* ··· 557 534 if (needs_4x_clock(q)) 558 535 rate *= 4; 559 536 560 - fsl_qspi_clk_disable_unprep(q); 537 + if (needs_clk_disable(q)) 538 + fsl_qspi_clk_disable_unprep(q); 561 539 562 540 ret = clk_set_rate(q->clk, rate); 563 541 if (ret) 564 542 return; 565 543 566 - ret = fsl_qspi_clk_prep_enable(q); 567 - if (ret) 568 - return; 544 + if (needs_clk_disable(q)) { 545 + ret = fsl_qspi_clk_prep_enable(q); 546 + if (ret) 547 + return; 548 + } 569 549 570 550 q->selected = spi_get_chipselect(spi, 0); 571 551 ··· 748 722 { 749 723 void __iomem *base = q->iobase; 750 724 u32 reg, addr_offset = 0; 725 + u32 sfa_size; 751 726 int ret; 752 727 753 728 /* disable and unprepare clock to avoid glitch pass to 
controller */ ··· 807 780 * In HW there can be a maximum of four chips on two buses with 808 781 * two chip selects on each bus. We use four chip selects in SW 809 782 * to differentiate between the four chips. 810 - * We use ahb_buf_size for each chip and set SFA1AD, SFA2AD, SFB1AD, 811 - * SFB2AD accordingly. 783 + * 784 + * By default we write the AHB buffer size to each chip, but 785 + * a different size can be specified with devtype_data->sfa_size. 786 + * The SFA1AD, SFA2AD, SFB1AD, and SFB2AD registers define the 787 + * top (end) of these four regions. 812 788 */ 813 - qspi_writel(q, q->devtype_data->ahb_buf_size + addr_offset, 814 - base + QUADSPI_SFA1AD); 815 - qspi_writel(q, q->devtype_data->ahb_buf_size * 2 + addr_offset, 816 - base + QUADSPI_SFA2AD); 817 - qspi_writel(q, q->devtype_data->ahb_buf_size * 3 + addr_offset, 818 - base + QUADSPI_SFB1AD); 819 - qspi_writel(q, q->devtype_data->ahb_buf_size * 4 + addr_offset, 820 - base + QUADSPI_SFB2AD); 789 + sfa_size = q->devtype_data->sfa_size ? 
: q->devtype_data->ahb_buf_size; 790 + qspi_writel(q, addr_offset + 1 * sfa_size, base + QUADSPI_SFA1AD); 791 + qspi_writel(q, addr_offset + 2 * sfa_size, base + QUADSPI_SFA2AD); 792 + qspi_writel(q, addr_offset + 3 * sfa_size, base + QUADSPI_SFB1AD); 793 + qspi_writel(q, addr_offset + 4 * sfa_size, base + QUADSPI_SFB2AD); 821 794 822 795 q->selected = -1; 823 796 ··· 884 857 { 885 858 struct fsl_qspi *q = data; 886 859 860 + reset_control_assert(q->resets); 861 + 887 862 fsl_qspi_clk_disable_unprep(q); 888 863 889 864 mutex_destroy(&q->lock); ··· 931 902 if (!q->ahb_addr) 932 903 return -ENOMEM; 933 904 905 + q->resets = devm_reset_control_array_get_optional_exclusive(dev); 906 + if (IS_ERR(q->resets)) 907 + return PTR_ERR(q->resets); 908 + 934 909 /* find the clocks */ 935 910 q->clk_en = devm_clk_get(dev, "qspi_en"); 936 911 if (IS_ERR(q->clk_en)) ··· 953 920 } 954 921 955 922 ret = devm_add_action_or_reset(dev, fsl_qspi_cleanup, q); 923 + if (ret) 924 + return ret; 925 + 926 + ret = reset_control_deassert(q->resets); 956 927 if (ret) 957 928 return ret; 958 929 ··· 1013 976 { .compatible = "fsl,imx6ul-qspi", .data = &imx6ul_data, }, 1014 977 { .compatible = "fsl,ls1021a-qspi", .data = &ls1021a_data, }, 1015 978 { .compatible = "fsl,ls2080a-qspi", .data = &ls2080a_data, }, 979 + { .compatible = "spacemit,k1-qspi", .data = &spacemit_k1_data, }, 1016 980 { /* sentinel */ } 1017 981 }; 1018 982 MODULE_DEVICE_TABLE(of, fsl_qspi_dt_ids);
+41 -17
drivers/spi/spi-imx.c
··· 42 42 "time in us to run a transfer in polling mode\n"); 43 43 44 44 #define MXC_RPM_TIMEOUT 2000 /* 2000ms */ 45 + #define MXC_SPI_DEFAULT_SPEED 500000 /* 500KHz */ 45 46 46 47 #define MXC_CSPIRXDATA 0x00 47 48 #define MXC_CSPITXDATA 0x04 ··· 425 424 426 425 static void mx53_ecspi_rx_target(struct spi_imx_data *spi_imx) 427 426 { 428 - u32 val = ioread32be(spi_imx->base + MXC_CSPIRXDATA); 427 + u32 val = readl(spi_imx->base + MXC_CSPIRXDATA); 428 + #ifdef __LITTLE_ENDIAN 429 + unsigned int bytes_per_word = spi_imx_bytes_per_word(spi_imx->bits_per_word); 429 430 431 + if (bytes_per_word == 1) 432 + swab32s(&val); 433 + else if (bytes_per_word == 2) 434 + swahw32s(&val); 435 + #endif 430 436 if (spi_imx->rx_buf) { 431 437 int n_bytes = spi_imx->target_burst % sizeof(val); 432 438 ··· 454 446 { 455 447 u32 val = 0; 456 448 int n_bytes = spi_imx->count % sizeof(val); 449 + #ifdef __LITTLE_ENDIAN 450 + unsigned int bytes_per_word; 451 + #endif 457 452 458 453 if (!n_bytes) 459 454 n_bytes = sizeof(val); ··· 469 458 470 459 spi_imx->count -= n_bytes; 471 460 472 - iowrite32be(val, spi_imx->base + MXC_CSPITXDATA); 461 + #ifdef __LITTLE_ENDIAN 462 + bytes_per_word = spi_imx_bytes_per_word(spi_imx->bits_per_word); 463 + if (bytes_per_word == 1) 464 + swab32s(&val); 465 + else if (bytes_per_word == 2) 466 + swahw32s(&val); 467 + #endif 468 + writel(val, spi_imx->base + MXC_CSPITXDATA); 473 469 } 474 470 475 471 /* MX51 eCSPI */ ··· 609 591 * is not functional for imx53 Soc, config SPI burst completed when 610 592 * BURST_LENGTH + 1 bits are received 611 593 */ 612 - if (spi_imx->target_mode && is_imx53_ecspi(spi_imx)) 594 + if (spi_imx->target_mode) 613 595 cfg &= ~MX51_ECSPI_CONFIG_SBBCTRL(channel); 614 596 else 615 597 cfg |= MX51_ECSPI_CONFIG_SBBCTRL(channel); ··· 697 679 698 680 /* Clear BL field and set the right value */ 699 681 ctrl &= ~MX51_ECSPI_CTRL_BL_MASK; 700 - if (spi_imx->target_mode && is_imx53_ecspi(spi_imx)) 682 + if (spi_imx->target_mode) 701 683 ctrl 
|= (spi_imx->target_burst * 8 - 1) 702 684 << MX51_ECSPI_CTRL_BL_OFFSET; 703 685 else { ··· 708 690 /* set clock speed */ 709 691 ctrl &= ~(0xf << MX51_ECSPI_CTRL_POSTDIV_OFFSET | 710 692 0xf << MX51_ECSPI_CTRL_PREDIV_OFFSET); 711 - ctrl |= mx51_ecspi_clkdiv(spi_imx, spi_imx->spi_bus_clk, &clk); 712 - spi_imx->spi_bus_clk = clk; 693 + 694 + if (!spi_imx->target_mode) { 695 + ctrl |= mx51_ecspi_clkdiv(spi_imx, spi_imx->spi_bus_clk, &clk); 696 + spi_imx->spi_bus_clk = clk; 697 + } 713 698 714 699 mx51_configure_cpha(spi_imx, spi); 715 700 ··· 1334 1313 if (!t) 1335 1314 return 0; 1336 1315 1337 - if (!t->speed_hz) { 1338 - if (!spi->max_speed_hz) { 1339 - dev_err(&spi->dev, "no speed_hz provided!\n"); 1340 - return -EINVAL; 1316 + if (!spi_imx->target_mode) { 1317 + if (!t->speed_hz) { 1318 + if (!spi->max_speed_hz) { 1319 + dev_err(&spi->dev, "no speed_hz provided!\n"); 1320 + return -EINVAL; 1321 + } 1322 + dev_dbg(&spi->dev, "using spi->max_speed_hz!\n"); 1323 + spi_imx->spi_bus_clk = spi->max_speed_hz; 1324 + } else { 1325 + spi_imx->spi_bus_clk = t->speed_hz; 1341 1326 } 1342 - dev_dbg(&spi->dev, "using spi->max_speed_hz!\n"); 1343 - spi_imx->spi_bus_clk = spi->max_speed_hz; 1344 - } else 1345 - spi_imx->spi_bus_clk = t->speed_hz; 1327 + } 1346 1328 1347 1329 spi_imx->bits_per_word = t->bits_per_word; 1348 1330 spi_imx->count = t->len; ··· 1389 1365 spi_imx->rx_only = ((t->tx_buf == NULL) 1390 1366 || (t->tx_buf == spi->controller->dummy_tx)); 1391 1367 1392 - if (is_imx53_ecspi(spi_imx) && spi_imx->target_mode) { 1368 + if (spi_imx->target_mode) { 1393 1369 spi_imx->rx = mx53_ecspi_rx_target; 1394 1370 spi_imx->tx = mx53_ecspi_tx_target; 1395 1371 spi_imx->target_burst = t->len; ··· 1665 1641 struct spi_imx_data *spi_imx = spi_controller_get_devdata(spi->controller); 1666 1642 int ret = 0; 1667 1643 1668 - if (is_imx53_ecspi(spi_imx) && 1669 - transfer->len > MX53_MAX_TRANSFER_BYTES) { 1644 + if (transfer->len > MX53_MAX_TRANSFER_BYTES) { 1670 1645 
dev_err(&spi->dev, "Transaction too big, max size is %d bytes\n", 1671 1646 MX53_MAX_TRANSFER_BYTES); 1672 1647 return -EMSGSIZE; ··· 1861 1838 controller->prepare_message = spi_imx_prepare_message; 1862 1839 controller->unprepare_message = spi_imx_unprepare_message; 1863 1840 controller->target_abort = spi_imx_target_abort; 1841 + spi_imx->spi_bus_clk = MXC_SPI_DEFAULT_SPEED; 1864 1842 controller->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_NO_CS | 1865 1843 SPI_MOSI_IDLE_LOW; 1866 1844
+5
drivers/spi/spi-mem.c
··· 12 12 #include <linux/spi/spi-mem.h> 13 13 #include <linux/sched/task_stack.h> 14 14 15 + #define CREATE_TRACE_POINTS 16 + #include <trace/events/spi-mem.h> 17 + 15 18 #include "internals.h" 16 19 17 20 #define SPI_MEM_MAX_BUSWIDTH 8 ··· 406 403 if (ret) 407 404 return ret; 408 405 406 + trace_spi_mem_start_op(mem, op); 409 407 ret = ctlr->mem_ops->exec_op(mem, op); 408 + trace_spi_mem_stop_op(mem, op); 410 409 411 410 spi_mem_access_end(mem); 412 411
+429
drivers/spi/spi-microchip-core-spi.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0) 2 + // 3 + // Microchip CoreSPI controller driver 4 + // 5 + // Copyright (c) 2025 Microchip Technology Inc. and its subsidiaries 6 + // 7 + // Author: Prajna Rajendra Kumar <prajna.rajendrakumar@microchip.com> 8 + 9 + #include <linux/clk.h> 10 + #include <linux/delay.h> 11 + #include <linux/err.h> 12 + #include <linux/init.h> 13 + #include <linux/interrupt.h> 14 + #include <linux/io.h> 15 + #include <linux/module.h> 16 + #include <linux/of.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/spi/spi.h> 19 + 20 + #define MCHP_CORESPI_MAX_CS (8) 21 + #define MCHP_CORESPI_DEFAULT_FIFO_DEPTH (4) 22 + #define MCHP_CORESPI_DEFAULT_MOTOROLA_MODE (3) 23 + 24 + #define MCHP_CORESPI_CONTROL_ENABLE BIT(0) 25 + #define MCHP_CORESPI_CONTROL_MASTER BIT(1) 26 + #define MCHP_CORESPI_CONTROL_TX_DATA_INT BIT(3) 27 + #define MCHP_CORESPI_CONTROL_RX_OVER_INT BIT(4) 28 + #define MCHP_CORESPI_CONTROL_TX_UNDER_INT BIT(5) 29 + #define MCHP_CORESPI_CONTROL_FRAMEURUN BIT(6) 30 + #define MCHP_CORESPI_CONTROL_OENOFF BIT(7) 31 + 32 + #define MCHP_CORESPI_STATUS_ACTIVE BIT(7) 33 + #define MCHP_CORESPI_STATUS_SSEL BIT(6) 34 + #define MCHP_CORESPI_STATUS_TXFIFO_UNDERFLOW BIT(5) 35 + #define MCHP_CORESPI_STATUS_RXFIFO_FULL BIT(4) 36 + #define MCHP_CORESPI_STATUS_TXFIFO_FULL BIT(3) 37 + #define MCHP_CORESPI_STATUS_RXFIFO_EMPTY BIT(2) 38 + #define MCHP_CORESPI_STATUS_DONE BIT(1) 39 + #define MCHP_CORESPI_STATUS_FIRSTFRAME BIT(0) 40 + 41 + #define MCHP_CORESPI_INT_TXDONE BIT(0) 42 + #define MCHP_CORESPI_INT_RX_CHANNEL_OVERFLOW BIT(2) 43 + #define MCHP_CORESPI_INT_TX_CHANNEL_UNDERRUN BIT(3) 44 + #define MCHP_CORESPI_INT_CMDINT BIT(4) 45 + #define MCHP_CORESPI_INT_SSEND BIT(5) 46 + #define MCHP_CORESPI_INT_DATA_RX BIT(6) 47 + #define MCHP_CORESPI_INT_TXRFM BIT(7) 48 + 49 + #define MCHP_CORESPI_CONTROL2_INTEN_TXRFMT BIT(7) 50 + #define MCHP_CORESPI_CONTROL2_INTEN_DATA_RX BIT(6) 51 + #define MCHP_CORESPI_CONTROL2_INTEN_SSEND BIT(5) 52 + #define 
MCHP_CORESPI_CONTROL2_INTEN_CMD BIT(4) 53 + 54 + #define INT_ENABLE_MASK (MCHP_CORESPI_CONTROL_TX_DATA_INT | MCHP_CORESPI_CONTROL_RX_OVER_INT | \ 55 + MCHP_CORESPI_CONTROL_TX_UNDER_INT) 56 + 57 + #define MCHP_CORESPI_REG_CONTROL (0x00) 58 + #define MCHP_CORESPI_REG_INTCLEAR (0x04) 59 + #define MCHP_CORESPI_REG_RXDATA (0x08) 60 + #define MCHP_CORESPI_REG_TXDATA (0x0c) 61 + #define MCHP_CORESPI_REG_INTMASK (0X10) 62 + #define MCHP_CORESPI_REG_INTRAW (0X14) 63 + #define MCHP_CORESPI_REG_CONTROL2 (0x18) 64 + #define MCHP_CORESPI_REG_COMMAND (0x1c) 65 + #define MCHP_CORESPI_REG_STAT (0x20) 66 + #define MCHP_CORESPI_REG_SSEL (0x24) 67 + #define MCHP_CORESPI_REG_TXDATA_LAST (0X28) 68 + #define MCHP_CORESPI_REG_CLK_DIV (0x2c) 69 + 70 + struct mchp_corespi { 71 + void __iomem *regs; 72 + struct clk *clk; 73 + const u8 *tx_buf; 74 + u8 *rx_buf; 75 + u32 clk_gen; 76 + int irq; 77 + unsigned int tx_len; 78 + unsigned int rx_len; 79 + u32 fifo_depth; 80 + }; 81 + 82 + static inline void mchp_corespi_disable(struct mchp_corespi *spi) 83 + { 84 + u8 control = readb(spi->regs + MCHP_CORESPI_REG_CONTROL); 85 + 86 + control &= ~MCHP_CORESPI_CONTROL_ENABLE; 87 + 88 + writeb(control, spi->regs + MCHP_CORESPI_REG_CONTROL); 89 + } 90 + 91 + static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi, u32 fifo_max) 92 + { 93 + for (int i = 0; i < fifo_max; i++) { 94 + u32 data; 95 + 96 + while (readb(spi->regs + MCHP_CORESPI_REG_STAT) & 97 + MCHP_CORESPI_STATUS_RXFIFO_EMPTY) 98 + ; 99 + 100 + /* On TX-only transfers always perform a dummy read */ 101 + data = readb(spi->regs + MCHP_CORESPI_REG_RXDATA); 102 + if (spi->rx_buf) 103 + *spi->rx_buf++ = data; 104 + 105 + spi->rx_len--; 106 + } 107 + } 108 + 109 + static void mchp_corespi_enable_ints(struct mchp_corespi *spi) 110 + { 111 + u8 control = readb(spi->regs + MCHP_CORESPI_REG_CONTROL); 112 + 113 + control |= INT_ENABLE_MASK; 114 + writeb(control, spi->regs + MCHP_CORESPI_REG_CONTROL); 115 + } 116 + 117 + static void 
mchp_corespi_disable_ints(struct mchp_corespi *spi) 118 + { 119 + u8 control = readb(spi->regs + MCHP_CORESPI_REG_CONTROL); 120 + 121 + control &= ~INT_ENABLE_MASK; 122 + writeb(control, spi->regs + MCHP_CORESPI_REG_CONTROL); 123 + } 124 + 125 + static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi, u32 fifo_max) 126 + { 127 + for (int i = 0; i < fifo_max; i++) { 128 + if (readb(spi->regs + MCHP_CORESPI_REG_STAT) & 129 + MCHP_CORESPI_STATUS_TXFIFO_FULL) 130 + break; 131 + 132 + /* On RX-only transfers always perform a dummy write */ 133 + if (spi->tx_buf) 134 + writeb(*spi->tx_buf++, spi->regs + MCHP_CORESPI_REG_TXDATA); 135 + else 136 + writeb(0xaa, spi->regs + MCHP_CORESPI_REG_TXDATA); 137 + 138 + spi->tx_len--; 139 + } 140 + } 141 + 142 + static void mchp_corespi_set_cs(struct spi_device *spi, bool disable) 143 + { 144 + struct mchp_corespi *corespi = spi_controller_get_devdata(spi->controller); 145 + u32 reg; 146 + 147 + reg = readb(corespi->regs + MCHP_CORESPI_REG_SSEL); 148 + reg &= ~BIT(spi_get_chipselect(spi, 0)); 149 + reg |= !disable << spi_get_chipselect(spi, 0); 150 + 151 + writeb(reg, corespi->regs + MCHP_CORESPI_REG_SSEL); 152 + } 153 + 154 + static int mchp_corespi_setup(struct spi_device *spi) 155 + { 156 + if (spi_get_csgpiod(spi, 0)) 157 + return 0; 158 + 159 + if (spi->mode & (SPI_CS_HIGH)) { 160 + dev_err(&spi->dev, "unable to support active-high CS in Motorola mode\n"); 161 + return -EOPNOTSUPP; 162 + } 163 + 164 + if (spi->mode & SPI_MODE_X_MASK & ~spi->controller->mode_bits) { 165 + dev_err(&spi->dev, "incompatible CPOL/CPHA, must match controller's Motorola mode\n"); 166 + return -EINVAL; 167 + } 168 + 169 + return 0; 170 + } 171 + 172 + static void mchp_corespi_init(struct spi_controller *host, struct mchp_corespi *spi) 173 + { 174 + u8 control = readb(spi->regs + MCHP_CORESPI_REG_CONTROL); 175 + 176 + /* Master mode changes require core to be disabled.*/ 177 + control = (control & ~MCHP_CORESPI_CONTROL_ENABLE) | 
MCHP_CORESPI_CONTROL_MASTER; 178 + 179 + writeb(control, spi->regs + MCHP_CORESPI_REG_CONTROL); 180 + 181 + mchp_corespi_enable_ints(spi); 182 + 183 + control = readb(spi->regs + MCHP_CORESPI_REG_CONTROL); 184 + control |= MCHP_CORESPI_CONTROL_ENABLE; 185 + 186 + writeb(control, spi->regs + MCHP_CORESPI_REG_CONTROL); 187 + } 188 + 189 + static irqreturn_t mchp_corespi_interrupt(int irq, void *dev_id) 190 + { 191 + struct spi_controller *host = dev_id; 192 + struct mchp_corespi *spi = spi_controller_get_devdata(host); 193 + u8 intfield = readb(spi->regs + MCHP_CORESPI_REG_INTMASK) & 0xff; 194 + bool finalise = false; 195 + 196 + /* Interrupt line may be shared and not for us at all */ 197 + if (intfield == 0) 198 + return IRQ_NONE; 199 + 200 + if (intfield & MCHP_CORESPI_INT_TXDONE) 201 + writeb(MCHP_CORESPI_INT_TXDONE, spi->regs + MCHP_CORESPI_REG_INTCLEAR); 202 + 203 + if (intfield & MCHP_CORESPI_INT_RX_CHANNEL_OVERFLOW) { 204 + writeb(MCHP_CORESPI_INT_RX_CHANNEL_OVERFLOW, 205 + spi->regs + MCHP_CORESPI_REG_INTCLEAR); 206 + finalise = true; 207 + dev_err(&host->dev, 208 + "RX OVERFLOW: rxlen: %u, txlen: %u\n", 209 + spi->rx_len, spi->tx_len); 210 + } 211 + 212 + if (intfield & MCHP_CORESPI_INT_TX_CHANNEL_UNDERRUN) { 213 + writeb(MCHP_CORESPI_INT_TX_CHANNEL_UNDERRUN, 214 + spi->regs + MCHP_CORESPI_REG_INTCLEAR); 215 + finalise = true; 216 + dev_err(&host->dev, 217 + "TX UNDERFLOW: rxlen: %u, txlen: %u\n", 218 + spi->rx_len, spi->tx_len); 219 + } 220 + 221 + if (finalise) 222 + spi_finalize_current_transfer(host); 223 + 224 + return IRQ_HANDLED; 225 + } 226 + 227 + static int mchp_corespi_set_clk_div(struct mchp_corespi *spi, 228 + unsigned long target_hz) 229 + { 230 + unsigned long pclk_hz, spi_hz; 231 + u32 clk_div; 232 + 233 + /* Get peripheral clock rate */ 234 + pclk_hz = clk_get_rate(spi->clk); 235 + if (!pclk_hz) 236 + return -EINVAL; 237 + 238 + /* 239 + * Calculate clock rate generated by SPI master 240 + * Formula: SPICLK = PCLK / (2 * (CLK_DIV + 1)) 241 
+ */ 242 + clk_div = DIV_ROUND_UP(pclk_hz, 2 * target_hz) - 1; 243 + 244 + if (clk_div > 0xFF) 245 + return -EINVAL; 246 + 247 + spi_hz = pclk_hz / (2 * (clk_div + 1)); 248 + 249 + if (spi_hz > target_hz) 250 + return -EINVAL; 251 + 252 + writeb(clk_div, spi->regs + MCHP_CORESPI_REG_CLK_DIV); 253 + 254 + return 0; 255 + } 256 + 257 + static int mchp_corespi_transfer_one(struct spi_controller *host, 258 + struct spi_device *spi_dev, 259 + struct spi_transfer *xfer) 260 + { 261 + struct mchp_corespi *spi = spi_controller_get_devdata(host); 262 + int ret; 263 + 264 + ret = mchp_corespi_set_clk_div(spi, (unsigned long)xfer->speed_hz); 265 + if (ret) { 266 + dev_err(&host->dev, "failed to set clock divider for target %u Hz\n", 267 + xfer->speed_hz); 268 + return ret; 269 + } 270 + 271 + spi->tx_buf = xfer->tx_buf; 272 + spi->rx_buf = xfer->rx_buf; 273 + spi->tx_len = xfer->len; 274 + spi->rx_len = xfer->len; 275 + 276 + while (spi->tx_len) { 277 + unsigned int fifo_max = min(spi->tx_len, spi->fifo_depth); 278 + 279 + mchp_corespi_write_fifo(spi, fifo_max); 280 + mchp_corespi_read_fifo(spi, fifo_max); 281 + } 282 + 283 + spi_finalize_current_transfer(host); 284 + return 1; 285 + } 286 + 287 + static int mchp_corespi_probe(struct platform_device *pdev) 288 + { 289 + const char *protocol = "motorola"; 290 + struct device *dev = &pdev->dev; 291 + struct spi_controller *host; 292 + struct mchp_corespi *spi; 293 + struct resource *res; 294 + u32 num_cs, mode, frame_size; 295 + bool assert_ssel; 296 + int ret = 0; 297 + 298 + host = devm_spi_alloc_host(dev, sizeof(*spi)); 299 + if (!host) 300 + return -ENOMEM; 301 + 302 + platform_set_drvdata(pdev, host); 303 + 304 + if (of_property_read_u32(dev->of_node, "num-cs", &num_cs)) 305 + num_cs = MCHP_CORESPI_MAX_CS; 306 + 307 + /* 308 + * Protocol: CFG_MODE 309 + * CoreSPI can be configured for Motorola, TI or NSC. 310 + * The current driver supports only Motorola mode. 
311 + */ 312 + ret = of_property_read_string(dev->of_node, "microchip,protocol-configuration", 313 + &protocol); 314 + if (ret && ret != -EINVAL) 315 + return dev_err_probe(dev, ret, "Error reading protocol-configuration\n"); 316 + if (strcmp(protocol, "motorola") != 0) 317 + return dev_err_probe(dev, -EINVAL, 318 + "CoreSPI: protocol '%s' not supported by this driver\n", 319 + protocol); 320 + 321 + /* 322 + * Motorola mode (0-3): CFG_MOT_MODE 323 + * Mode is fixed in the IP configurator. 324 + */ 325 + ret = of_property_read_u32(dev->of_node, "microchip,motorola-mode", &mode); 326 + if (ret) 327 + mode = MCHP_CORESPI_DEFAULT_MOTOROLA_MODE; 328 + else if (mode > 3) 329 + return dev_err_probe(dev, -EINVAL, 330 + "invalid 'microchip,motorola-mode' value %u\n", mode); 331 + 332 + /* 333 + * Frame size: CFG_FRAME_SIZE 334 + * The hardware allows frame sizes <= APB data width. 335 + * However, this driver currently only supports 8-bit frames. 336 + */ 337 + ret = of_property_read_u32(dev->of_node, "microchip,frame-size", &frame_size); 338 + if (!ret && frame_size != 8) 339 + return dev_err_probe(dev, -EINVAL, 340 + "CoreSPI: frame size %u not supported by this driver\n", 341 + frame_size); 342 + 343 + /* 344 + * SSEL: CFG_MOT_SSEL 345 + * CoreSPI deasserts SSEL when the TX FIFO empties. 346 + * To prevent CS deassertion when TX FIFO drains, the ssel-active property 347 + * keeps CS asserted for the full SPI transfer. 
348 + */ 349 + assert_ssel = of_property_read_bool(dev->of_node, "microchip,ssel-active"); 350 + if (!assert_ssel) 351 + return dev_err_probe(dev, -EINVAL, 352 + "hardware must enable 'microchip,ssel-active' to keep CS asserted for the SPI transfer\n"); 353 + 354 + spi = spi_controller_get_devdata(host); 355 + 356 + host->num_chipselect = num_cs; 357 + host->mode_bits = mode; 358 + host->setup = mchp_corespi_setup; 359 + host->use_gpio_descriptors = true; 360 + host->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 361 + host->transfer_one = mchp_corespi_transfer_one; 362 + host->set_cs = mchp_corespi_set_cs; 363 + host->dev.of_node = dev->of_node; 364 + 365 + ret = of_property_read_u32(dev->of_node, "fifo-depth", &spi->fifo_depth); 366 + if (ret) 367 + spi->fifo_depth = MCHP_CORESPI_DEFAULT_FIFO_DEPTH; 368 + 369 + spi->regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 370 + if (IS_ERR(spi->regs)) 371 + return PTR_ERR(spi->regs); 372 + 373 + spi->irq = platform_get_irq(pdev, 0); 374 + if (spi->irq < 0) 375 + return spi->irq; 376 + 377 + ret = devm_request_irq(dev, spi->irq, mchp_corespi_interrupt, IRQF_SHARED, 378 + dev_name(dev), host); 379 + if (ret) 380 + return dev_err_probe(dev, ret, "could not request irq\n"); 381 + 382 + spi->clk = devm_clk_get_enabled(dev, NULL); 383 + if (IS_ERR(spi->clk)) 384 + return dev_err_probe(dev, PTR_ERR(spi->clk), "could not get clk\n"); 385 + 386 + mchp_corespi_init(host, spi); 387 + 388 + ret = devm_spi_register_controller(dev, host); 389 + if (ret) { 390 + mchp_corespi_disable(spi); 391 + return dev_err_probe(dev, ret, "unable to register host for CoreSPI controller\n"); 392 + } 393 + 394 + return 0; 395 + } 396 + 397 + static void mchp_corespi_remove(struct platform_device *pdev) 398 + { 399 + struct spi_controller *host = platform_get_drvdata(pdev); 400 + struct mchp_corespi *spi = spi_controller_get_devdata(host); 401 + 402 + mchp_corespi_disable_ints(spi); 403 + mchp_corespi_disable(spi); 404 + } 405 + 406 + /* 
407 + * Platform driver data structure 408 + */ 409 + 410 + #if defined(CONFIG_OF) 411 + static const struct of_device_id mchp_corespi_dt_ids[] = { 412 + { .compatible = "microchip,corespi-rtl-v5" }, 413 + { /* sentinel */ } 414 + }; 415 + MODULE_DEVICE_TABLE(of, mchp_corespi_dt_ids); 416 + #endif 417 + 418 + static struct platform_driver mchp_corespi_driver = { 419 + .probe = mchp_corespi_probe, 420 + .driver = { 421 + .name = "microchip-corespi", 422 + .of_match_table = of_match_ptr(mchp_corespi_dt_ids), 423 + }, 424 + .remove = mchp_corespi_remove, 425 + }; 426 + module_platform_driver(mchp_corespi_driver); 427 + MODULE_DESCRIPTION("Microchip CoreSPI controller driver"); 428 + MODULE_AUTHOR("Prajna Rajendra Kumar <prajna.rajendrakumar@microchip.com>"); 429 + MODULE_LICENSE("GPL");
+104 -103
drivers/spi/spi-microchip-core.c → drivers/spi/spi-mpfs.c
··· 99 99 #define REG_CTRL2 (0x48) 100 100 #define REG_FRAMESUP (0x50) 101 101 102 - struct mchp_corespi { 102 + struct mpfs_spi { 103 103 void __iomem *regs; 104 104 struct clk *clk; 105 105 const u8 *tx_buf; ··· 113 113 int n_bytes; 114 114 }; 115 115 116 - static inline u32 mchp_corespi_read(struct mchp_corespi *spi, unsigned int reg) 116 + static inline u32 mpfs_spi_read(struct mpfs_spi *spi, unsigned int reg) 117 117 { 118 118 return readl(spi->regs + reg); 119 119 } 120 120 121 - static inline void mchp_corespi_write(struct mchp_corespi *spi, unsigned int reg, u32 val) 121 + static inline void mpfs_spi_write(struct mpfs_spi *spi, unsigned int reg, u32 val) 122 122 { 123 123 writel(val, spi->regs + reg); 124 124 } 125 125 126 - static inline void mchp_corespi_disable(struct mchp_corespi *spi) 126 + static inline void mpfs_spi_disable(struct mpfs_spi *spi) 127 127 { 128 - u32 control = mchp_corespi_read(spi, REG_CONTROL); 128 + u32 control = mpfs_spi_read(spi, REG_CONTROL); 129 129 130 130 control &= ~CONTROL_ENABLE; 131 131 132 - mchp_corespi_write(spi, REG_CONTROL, control); 132 + mpfs_spi_write(spi, REG_CONTROL, control); 133 133 } 134 134 135 - static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi, int fifo_max) 135 + static inline void mpfs_spi_read_fifo(struct mpfs_spi *spi, int fifo_max) 136 136 { 137 137 for (int i = 0; i < fifo_max; i++) { 138 138 u32 data; 139 139 140 - while (mchp_corespi_read(spi, REG_STATUS) & STATUS_RXFIFO_EMPTY) 140 + while (mpfs_spi_read(spi, REG_STATUS) & STATUS_RXFIFO_EMPTY) 141 141 ; 142 142 143 - data = mchp_corespi_read(spi, REG_RX_DATA); 143 + data = mpfs_spi_read(spi, REG_RX_DATA); 144 144 145 145 spi->rx_len -= spi->n_bytes; 146 146 ··· 158 158 } 159 159 } 160 160 161 - static void mchp_corespi_enable_ints(struct mchp_corespi *spi) 161 + static void mpfs_spi_enable_ints(struct mpfs_spi *spi) 162 162 { 163 - u32 control = mchp_corespi_read(spi, REG_CONTROL); 163 + u32 control = mpfs_spi_read(spi, 
REG_CONTROL); 164 164 165 165 control |= INT_ENABLE_MASK; 166 - mchp_corespi_write(spi, REG_CONTROL, control); 166 + mpfs_spi_write(spi, REG_CONTROL, control); 167 167 } 168 168 169 - static void mchp_corespi_disable_ints(struct mchp_corespi *spi) 169 + static void mpfs_spi_disable_ints(struct mpfs_spi *spi) 170 170 { 171 - u32 control = mchp_corespi_read(spi, REG_CONTROL); 171 + u32 control = mpfs_spi_read(spi, REG_CONTROL); 172 172 173 173 control &= ~INT_ENABLE_MASK; 174 - mchp_corespi_write(spi, REG_CONTROL, control); 174 + mpfs_spi_write(spi, REG_CONTROL, control); 175 175 } 176 176 177 - static inline void mchp_corespi_set_xfer_size(struct mchp_corespi *spi, int len) 177 + static inline void mpfs_spi_set_xfer_size(struct mpfs_spi *spi, int len) 178 178 { 179 179 u32 control; 180 180 u32 lenpart; 181 - u32 frames = mchp_corespi_read(spi, REG_FRAMESUP); 181 + u32 frames = mpfs_spi_read(spi, REG_FRAMESUP); 182 182 183 183 /* 184 184 * Writing to FRAMECNT in REG_CONTROL will reset the frame count, taking 185 185 * a shortcut requires an explicit clear. 186 186 */ 187 187 if (frames == len) { 188 - mchp_corespi_write(spi, REG_COMMAND, COMMAND_CLRFRAMECNT); 188 + mpfs_spi_write(spi, REG_COMMAND, COMMAND_CLRFRAMECNT); 189 189 return; 190 190 } 191 191 ··· 208 208 * that matches the documentation. 
209 209 */ 210 210 lenpart = len & 0xffff; 211 - control = mchp_corespi_read(spi, REG_CONTROL); 211 + control = mpfs_spi_read(spi, REG_CONTROL); 212 212 control &= ~CONTROL_FRAMECNT_MASK; 213 213 control |= lenpart << CONTROL_FRAMECNT_SHIFT; 214 - mchp_corespi_write(spi, REG_CONTROL, control); 215 - mchp_corespi_write(spi, REG_FRAMESUP, len); 214 + mpfs_spi_write(spi, REG_CONTROL, control); 215 + mpfs_spi_write(spi, REG_FRAMESUP, len); 216 216 } 217 217 218 - static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi, int fifo_max) 218 + static inline void mpfs_spi_write_fifo(struct mpfs_spi *spi, int fifo_max) 219 219 { 220 220 int i = 0; 221 221 222 - mchp_corespi_set_xfer_size(spi, fifo_max); 222 + mpfs_spi_set_xfer_size(spi, fifo_max); 223 223 224 - while ((i < fifo_max) && !(mchp_corespi_read(spi, REG_STATUS) & STATUS_TXFIFO_FULL)) { 224 + while ((i < fifo_max) && !(mpfs_spi_read(spi, REG_STATUS) & STATUS_TXFIFO_FULL)) { 225 225 u32 word; 226 226 227 227 if (spi->n_bytes == 4) ··· 231 231 else 232 232 word = spi->tx_buf ? *spi->tx_buf : 0xaa; 233 233 234 - mchp_corespi_write(spi, REG_TX_DATA, word); 234 + mpfs_spi_write(spi, REG_TX_DATA, word); 235 235 if (spi->tx_buf) 236 236 spi->tx_buf += spi->n_bytes; 237 237 i++; ··· 240 240 spi->tx_len -= i * spi->n_bytes; 241 241 } 242 242 243 - static inline void mchp_corespi_set_framesize(struct mchp_corespi *spi, int bt) 243 + static inline void mpfs_spi_set_framesize(struct mpfs_spi *spi, int bt) 244 244 { 245 - u32 frame_size = mchp_corespi_read(spi, REG_FRAME_SIZE); 245 + u32 frame_size = mpfs_spi_read(spi, REG_FRAME_SIZE); 246 246 u32 control; 247 247 248 248 if ((frame_size & FRAME_SIZE_MASK) == bt) ··· 252 252 * Disable the SPI controller. Writes to the frame size have 253 253 * no effect when the controller is enabled. 
254 254 */ 255 - control = mchp_corespi_read(spi, REG_CONTROL); 255 + control = mpfs_spi_read(spi, REG_CONTROL); 256 256 control &= ~CONTROL_ENABLE; 257 - mchp_corespi_write(spi, REG_CONTROL, control); 257 + mpfs_spi_write(spi, REG_CONTROL, control); 258 258 259 - mchp_corespi_write(spi, REG_FRAME_SIZE, bt); 259 + mpfs_spi_write(spi, REG_FRAME_SIZE, bt); 260 260 261 261 control |= CONTROL_ENABLE; 262 - mchp_corespi_write(spi, REG_CONTROL, control); 262 + mpfs_spi_write(spi, REG_CONTROL, control); 263 263 } 264 264 265 - static void mchp_corespi_set_cs(struct spi_device *spi, bool disable) 265 + static void mpfs_spi_set_cs(struct spi_device *spi, bool disable) 266 266 { 267 267 u32 reg; 268 - struct mchp_corespi *corespi = spi_controller_get_devdata(spi->controller); 268 + struct mpfs_spi *mspi = spi_controller_get_devdata(spi->controller); 269 269 270 - reg = mchp_corespi_read(corespi, REG_SLAVE_SELECT); 270 + reg = mpfs_spi_read(mspi, REG_SLAVE_SELECT); 271 271 reg &= ~BIT(spi_get_chipselect(spi, 0)); 272 272 reg |= !disable << spi_get_chipselect(spi, 0); 273 - corespi->pending_slave_select = reg; 273 + mspi->pending_slave_select = reg; 274 274 275 275 /* 276 276 * Only deassert chip select immediately. Writing to some registers ··· 281 281 * doesn't see any spurious clock transitions whilst CS is enabled. 282 282 */ 283 283 if (((spi->mode & SPI_CS_HIGH) == 0) == disable) 284 - mchp_corespi_write(corespi, REG_SLAVE_SELECT, reg); 284 + mpfs_spi_write(mspi, REG_SLAVE_SELECT, reg); 285 285 } 286 286 287 - static int mchp_corespi_setup(struct spi_device *spi) 287 + static int mpfs_spi_setup(struct spi_device *spi) 288 288 { 289 - struct mchp_corespi *corespi = spi_controller_get_devdata(spi->controller); 289 + struct mpfs_spi *mspi = spi_controller_get_devdata(spi->controller); 290 290 u32 reg; 291 291 292 292 if (spi_is_csgpiod(spi)) ··· 298 298 * driving their select line low. 
299 299 */ 300 300 if (spi->mode & SPI_CS_HIGH) { 301 - reg = mchp_corespi_read(corespi, REG_SLAVE_SELECT); 301 + reg = mpfs_spi_read(mspi, REG_SLAVE_SELECT); 302 302 reg |= BIT(spi_get_chipselect(spi, 0)); 303 - corespi->pending_slave_select = reg; 304 - mchp_corespi_write(corespi, REG_SLAVE_SELECT, reg); 303 + mspi->pending_slave_select = reg; 304 + mpfs_spi_write(mspi, REG_SLAVE_SELECT, reg); 305 305 } 306 306 return 0; 307 307 } 308 308 309 - static void mchp_corespi_init(struct spi_controller *host, struct mchp_corespi *spi) 309 + static void mpfs_spi_init(struct spi_controller *host, struct mpfs_spi *spi) 310 310 { 311 311 unsigned long clk_hz; 312 - u32 control = mchp_corespi_read(spi, REG_CONTROL); 312 + u32 control = mpfs_spi_read(spi, REG_CONTROL); 313 313 314 314 control &= ~CONTROL_ENABLE; 315 - mchp_corespi_write(spi, REG_CONTROL, control); 315 + mpfs_spi_write(spi, REG_CONTROL, control); 316 316 317 317 control |= CONTROL_MASTER; 318 318 control &= ~CONTROL_MODE_MASK; ··· 328 328 */ 329 329 control |= CONTROL_SPS | CONTROL_BIGFIFO; 330 330 331 - mchp_corespi_write(spi, REG_CONTROL, control); 331 + mpfs_spi_write(spi, REG_CONTROL, control); 332 332 333 - mchp_corespi_set_framesize(spi, DEFAULT_FRAMESIZE); 333 + mpfs_spi_set_framesize(spi, DEFAULT_FRAMESIZE); 334 334 335 335 /* max. possible spi clock rate is the apb clock rate */ 336 336 clk_hz = clk_get_rate(spi->clk); 337 337 host->max_speed_hz = clk_hz; 338 338 339 - mchp_corespi_enable_ints(spi); 339 + mpfs_spi_enable_ints(spi); 340 340 341 341 /* 342 342 * It is required to enable direct mode, otherwise control over the chip ··· 344 344 * can deal with active high targets. 
345 345 */ 346 346 spi->pending_slave_select = SSELOUT | SSEL_DIRECT; 347 - mchp_corespi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select); 347 + mpfs_spi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select); 348 348 349 - control = mchp_corespi_read(spi, REG_CONTROL); 349 + control = mpfs_spi_read(spi, REG_CONTROL); 350 350 351 351 control &= ~CONTROL_RESET; 352 352 control |= CONTROL_ENABLE; 353 353 354 - mchp_corespi_write(spi, REG_CONTROL, control); 354 + mpfs_spi_write(spi, REG_CONTROL, control); 355 355 } 356 356 357 - static inline void mchp_corespi_set_clk_gen(struct mchp_corespi *spi) 357 + static inline void mpfs_spi_set_clk_gen(struct mpfs_spi *spi) 358 358 { 359 359 u32 control; 360 360 361 - control = mchp_corespi_read(spi, REG_CONTROL); 361 + control = mpfs_spi_read(spi, REG_CONTROL); 362 362 if (spi->clk_mode) 363 363 control |= CONTROL_CLKMODE; 364 364 else 365 365 control &= ~CONTROL_CLKMODE; 366 366 367 - mchp_corespi_write(spi, REG_CLK_GEN, spi->clk_gen); 368 - mchp_corespi_write(spi, REG_CONTROL, control); 367 + mpfs_spi_write(spi, REG_CLK_GEN, spi->clk_gen); 368 + mpfs_spi_write(spi, REG_CONTROL, control); 369 369 } 370 370 371 - static inline void mchp_corespi_set_mode(struct mchp_corespi *spi, unsigned int mode) 371 + static inline void mpfs_spi_set_mode(struct mpfs_spi *spi, unsigned int mode) 372 372 { 373 373 u32 mode_val; 374 - u32 control = mchp_corespi_read(spi, REG_CONTROL); 374 + u32 control = mpfs_spi_read(spi, REG_CONTROL); 375 375 376 376 switch (mode & SPI_MODE_X_MASK) { 377 377 case SPI_MODE_0: ··· 394 394 */ 395 395 396 396 control &= ~CONTROL_ENABLE; 397 - mchp_corespi_write(spi, REG_CONTROL, control); 397 + mpfs_spi_write(spi, REG_CONTROL, control); 398 398 399 399 control &= ~(SPI_MODE_X_MASK << MODE_X_MASK_SHIFT); 400 400 control |= mode_val; 401 401 402 - mchp_corespi_write(spi, REG_CONTROL, control); 402 + mpfs_spi_write(spi, REG_CONTROL, control); 403 403 404 404 control |= CONTROL_ENABLE; 405 - 
mchp_corespi_write(spi, REG_CONTROL, control); 405 + mpfs_spi_write(spi, REG_CONTROL, control); 406 406 } 407 407 408 - static irqreturn_t mchp_corespi_interrupt(int irq, void *dev_id) 408 + static irqreturn_t mpfs_spi_interrupt(int irq, void *dev_id) 409 409 { 410 410 struct spi_controller *host = dev_id; 411 - struct mchp_corespi *spi = spi_controller_get_devdata(host); 412 - u32 intfield = mchp_corespi_read(spi, REG_MIS) & 0xf; 411 + struct mpfs_spi *spi = spi_controller_get_devdata(host); 412 + u32 intfield = mpfs_spi_read(spi, REG_MIS) & 0xf; 413 413 bool finalise = false; 414 414 415 415 /* Interrupt line may be shared and not for us at all */ ··· 417 417 return IRQ_NONE; 418 418 419 419 if (intfield & INT_RX_CHANNEL_OVERFLOW) { 420 - mchp_corespi_write(spi, REG_INT_CLEAR, INT_RX_CHANNEL_OVERFLOW); 420 + mpfs_spi_write(spi, REG_INT_CLEAR, INT_RX_CHANNEL_OVERFLOW); 421 421 finalise = true; 422 422 dev_err(&host->dev, 423 423 "%s: RX OVERFLOW: rxlen: %d, txlen: %d\n", __func__, ··· 425 425 } 426 426 427 427 if (intfield & INT_TX_CHANNEL_UNDERRUN) { 428 - mchp_corespi_write(spi, REG_INT_CLEAR, INT_TX_CHANNEL_UNDERRUN); 428 + mpfs_spi_write(spi, REG_INT_CLEAR, INT_TX_CHANNEL_UNDERRUN); 429 429 finalise = true; 430 430 dev_err(&host->dev, 431 431 "%s: TX UNDERFLOW: rxlen: %d, txlen: %d\n", __func__, ··· 438 438 return IRQ_HANDLED; 439 439 } 440 440 441 - static int mchp_corespi_calculate_clkgen(struct mchp_corespi *spi, 442 - unsigned long target_hz) 441 + static int mpfs_spi_calculate_clkgen(struct mpfs_spi *spi, 442 + unsigned long target_hz) 443 443 { 444 444 unsigned long clk_hz, spi_hz, clk_gen; 445 445 ··· 475 475 return 0; 476 476 } 477 477 478 - static int mchp_corespi_transfer_one(struct spi_controller *host, 479 - struct spi_device *spi_dev, 480 - struct spi_transfer *xfer) 478 + static int mpfs_spi_transfer_one(struct spi_controller *host, 479 + struct spi_device *spi_dev, 480 + struct spi_transfer *xfer) 481 481 { 482 - struct mchp_corespi *spi = 
spi_controller_get_devdata(host); 482 + struct mpfs_spi *spi = spi_controller_get_devdata(host); 483 483 int ret; 484 484 485 - ret = mchp_corespi_calculate_clkgen(spi, (unsigned long)xfer->speed_hz); 485 + ret = mpfs_spi_calculate_clkgen(spi, (unsigned long)xfer->speed_hz); 486 486 if (ret) { 487 487 dev_err(&host->dev, "failed to set clk_gen for target %u Hz\n", xfer->speed_hz); 488 488 return ret; 489 489 } 490 490 491 - mchp_corespi_set_clk_gen(spi); 491 + mpfs_spi_set_clk_gen(spi); 492 492 493 493 spi->tx_buf = xfer->tx_buf; 494 494 spi->rx_buf = xfer->rx_buf; ··· 496 496 spi->rx_len = xfer->len; 497 497 spi->n_bytes = roundup_pow_of_two(DIV_ROUND_UP(xfer->bits_per_word, BITS_PER_BYTE)); 498 498 499 - mchp_corespi_set_framesize(spi, xfer->bits_per_word); 499 + mpfs_spi_set_framesize(spi, xfer->bits_per_word); 500 500 501 - mchp_corespi_write(spi, REG_COMMAND, COMMAND_RXFIFORST | COMMAND_TXFIFORST); 501 + mpfs_spi_write(spi, REG_COMMAND, COMMAND_RXFIFORST | COMMAND_TXFIFORST); 502 502 503 - mchp_corespi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select); 503 + mpfs_spi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select); 504 504 505 505 while (spi->tx_len) { 506 506 int fifo_max = DIV_ROUND_UP(min(spi->tx_len, FIFO_DEPTH), spi->n_bytes); 507 507 508 - mchp_corespi_write_fifo(spi, fifo_max); 509 - mchp_corespi_read_fifo(spi, fifo_max); 508 + mpfs_spi_write_fifo(spi, fifo_max); 509 + mpfs_spi_read_fifo(spi, fifo_max); 510 510 } 511 511 512 512 spi_finalize_current_transfer(host); 513 513 return 1; 514 514 } 515 515 516 - static int mchp_corespi_prepare_message(struct spi_controller *host, 517 - struct spi_message *msg) 516 + static int mpfs_spi_prepare_message(struct spi_controller *host, 517 + struct spi_message *msg) 518 518 { 519 519 struct spi_device *spi_dev = msg->spi; 520 - struct mchp_corespi *spi = spi_controller_get_devdata(host); 520 + struct mpfs_spi *spi = spi_controller_get_devdata(host); 521 521 522 - mchp_corespi_set_mode(spi, 
spi_dev->mode); 522 + mpfs_spi_set_mode(spi, spi_dev->mode); 523 523 524 524 return 0; 525 525 } 526 526 527 - static int mchp_corespi_probe(struct platform_device *pdev) 527 + static int mpfs_spi_probe(struct platform_device *pdev) 528 528 { 529 529 struct spi_controller *host; 530 - struct mchp_corespi *spi; 530 + struct mpfs_spi *spi; 531 531 struct resource *res; 532 532 u32 num_cs; 533 533 int ret = 0; 534 534 535 535 host = devm_spi_alloc_host(&pdev->dev, sizeof(*spi)); 536 536 if (!host) 537 - return -ENOMEM; 537 + return dev_err_probe(&pdev->dev, -ENOMEM, 538 + "unable to allocate host for SPI controller\n"); 538 539 539 540 platform_set_drvdata(pdev, host); 540 541 ··· 545 544 host->num_chipselect = num_cs; 546 545 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH; 547 546 host->use_gpio_descriptors = true; 548 - host->setup = mchp_corespi_setup; 547 + host->setup = mpfs_spi_setup; 549 548 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32); 550 - host->transfer_one = mchp_corespi_transfer_one; 551 - host->prepare_message = mchp_corespi_prepare_message; 552 - host->set_cs = mchp_corespi_set_cs; 549 + host->transfer_one = mpfs_spi_transfer_one; 550 + host->prepare_message = mpfs_spi_prepare_message; 551 + host->set_cs = mpfs_spi_set_cs; 553 552 host->dev.of_node = pdev->dev.of_node; 554 553 555 554 spi = spi_controller_get_devdata(host); ··· 562 561 if (spi->irq < 0) 563 562 return spi->irq; 564 563 565 - ret = devm_request_irq(&pdev->dev, spi->irq, mchp_corespi_interrupt, 564 + ret = devm_request_irq(&pdev->dev, spi->irq, mpfs_spi_interrupt, 566 565 IRQF_SHARED, dev_name(&pdev->dev), host); 567 566 if (ret) 568 567 return dev_err_probe(&pdev->dev, ret, ··· 573 572 return dev_err_probe(&pdev->dev, PTR_ERR(spi->clk), 574 573 "could not get clk\n"); 575 574 576 - mchp_corespi_init(host, spi); 575 + mpfs_spi_init(host, spi); 577 576 578 577 ret = devm_spi_register_controller(&pdev->dev, host); 579 578 if (ret) { 580 - mchp_corespi_disable(spi); 579 + 
mpfs_spi_disable(spi); 581 580 return dev_err_probe(&pdev->dev, ret, 582 581 "unable to register host for SPI controller\n"); 583 582 } ··· 587 586 return 0; 588 587 } 589 588 590 - static void mchp_corespi_remove(struct platform_device *pdev) 589 + static void mpfs_spi_remove(struct platform_device *pdev) 591 590 { 592 591 struct spi_controller *host = platform_get_drvdata(pdev); 593 - struct mchp_corespi *spi = spi_controller_get_devdata(host); 592 + struct mpfs_spi *spi = spi_controller_get_devdata(host); 594 593 595 - mchp_corespi_disable_ints(spi); 596 - mchp_corespi_disable(spi); 594 + mpfs_spi_disable_ints(spi); 595 + mpfs_spi_disable(spi); 597 596 } 598 597 599 598 #define MICROCHIP_SPI_PM_OPS (NULL) ··· 603 602 */ 604 603 605 604 #if defined(CONFIG_OF) 606 - static const struct of_device_id mchp_corespi_dt_ids[] = { 605 + static const struct of_device_id mpfs_spi_dt_ids[] = { 607 606 { .compatible = "microchip,mpfs-spi" }, 608 607 { /* sentinel */ } 609 608 }; 610 - MODULE_DEVICE_TABLE(of, mchp_corespi_dt_ids); 609 + MODULE_DEVICE_TABLE(of, mpfs_spi_dt_ids); 611 610 #endif 612 611 613 - static struct platform_driver mchp_corespi_driver = { 614 - .probe = mchp_corespi_probe, 612 + static struct platform_driver mpfs_spi_driver = { 613 + .probe = mpfs_spi_probe, 615 614 .driver = { 616 - .name = "microchip-corespi", 615 + .name = "microchip-spi", 617 616 .pm = MICROCHIP_SPI_PM_OPS, 618 - .of_match_table = of_match_ptr(mchp_corespi_dt_ids), 617 + .of_match_table = of_match_ptr(mpfs_spi_dt_ids), 619 618 }, 620 - .remove = mchp_corespi_remove, 619 + .remove = mpfs_spi_remove, 621 620 }; 622 - module_platform_driver(mchp_corespi_driver); 621 + module_platform_driver(mpfs_spi_driver); 623 622 MODULE_DESCRIPTION("Microchip coreSPI SPI controller driver"); 624 623 MODULE_AUTHOR("Daire McNamara <daire.mcnamara@microchip.com>"); 625 624 MODULE_AUTHOR("Conor Dooley <conor.dooley@microchip.com>");
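For reference, the transfer loop in mpfs_spi_transfer_one above sizes each FIFO fill from the word width: n_bytes is bits_per_word rounded up to a whole power-of-two number of bytes, and fifo_max is how many frames fit in one fill. A minimal sketch of that arithmetic follows; the FIFO_DEPTH value of 32 bytes is an assumption, as the excerpt does not show the define:

```c
/* Standalone mirror of the driver's FIFO sizing math. FIFO_DEPTH (in bytes)
 * is a placeholder value, not taken from the driver. */
#define FIFO_DEPTH 32
#define BITS_PER_BYTE 8

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Round a small value up to the next power of two (local helper, not the
 * kernel's roundup_pow_of_two()). */
static unsigned int roundup_pow_of_two_u(unsigned int x)
{
	unsigned int p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

/* Bytes one frame occupies in the FIFO: 1, 2 or 4. */
static unsigned int frame_bytes(unsigned int bits_per_word)
{
	return roundup_pow_of_two_u(DIV_ROUND_UP(bits_per_word, BITS_PER_BYTE));
}

/* Frames written per FIFO fill when tx_len bytes remain in the transfer. */
static unsigned int fifo_frames(unsigned int tx_len, unsigned int bits_per_word)
{
	unsigned int chunk = tx_len < FIFO_DEPTH ? tx_len : FIFO_DEPTH;

	return DIV_ROUND_UP(chunk, frame_bytes(bits_per_word));
}
```

So a 24-bit word occupies 4 bytes per frame, and a long transfer at 8 bits per word moves FIFO_DEPTH frames per fill.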
drivers/spi/spi-offload-trigger-pwm.c (+3)
··· 51 51 wf.period_length_ns = DIV_ROUND_UP_ULL(NSEC_PER_SEC, periodic->frequency_hz); 52 52 /* REVISIT: 50% duty-cycle for now - may add config parameter later */ 53 53 wf.duty_length_ns = wf.period_length_ns / 2; 54 + wf.duty_offset_ns = periodic->offset_ns; 54 55 55 56 ret = pwm_round_waveform_might_sleep(st->pwm, &wf); 56 57 if (ret < 0) 57 58 return ret; 58 59 59 60 periodic->frequency_hz = DIV_ROUND_UP_ULL(NSEC_PER_SEC, wf.period_length_ns); 61 + periodic->offset_ns = wf.duty_offset_ns; 60 62 61 63 return 0; 62 64 } ··· 79 77 wf.period_length_ns = DIV_ROUND_UP_ULL(NSEC_PER_SEC, periodic->frequency_hz); 80 78 /* REVISIT: 50% duty-cycle for now - may add config parameter later */ 81 79 wf.duty_length_ns = wf.period_length_ns / 2; 80 + wf.duty_offset_ns = periodic->offset_ns; 82 81 83 82 return pwm_set_waveform_might_sleep(st->pwm, &wf, false); 84 83 }
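The periodic-trigger hunks above convert a requested frequency into a PWM period with DIV_ROUND_UP_ULL, hand it to the PWM core for rounding, then report back the frequency the rounded period actually delivers. A small sketch of that round trip, with local names standing in for the driver's (and without the pwm_round_waveform_might_sleep step, which may quantize further):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* 64-bit ceiling division, like the kernel's DIV_ROUND_UP_ULL. */
static uint64_t div_round_up_u64(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

/* Frequency -> period -> frequency round trip: the period is rounded up to a
 * whole nanosecond, so the effective rate can only be at or below the request,
 * and re-deriving it with another round-up reports what the caller will get. */
static uint64_t effective_hz(uint64_t requested_hz)
{
	uint64_t period_ns = div_round_up_u64(NSEC_PER_SEC, requested_hz);
	uint64_t duty_ns = period_ns / 2;	/* 50% duty, as in the driver */

	(void)duty_ns;
	return div_round_up_u64(NSEC_PER_SEC, period_ns);
}
```

Frequencies that divide NSEC_PER_SEC round-trip exactly; others (like 3 Hz, period 333333334 ns) still report back the requested value because both conversions round up.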
drivers/spi/spi-qpic-snand.c (+1 -1)
··· 448 448 return snandc->qspi->ecc_stats.bitflips; 449 449 } 450 450 451 - static struct nand_ecc_engine_ops qcom_spi_ecc_engine_ops_pipelined = { 451 + static const struct nand_ecc_engine_ops qcom_spi_ecc_engine_ops_pipelined = { 452 452 .init_ctx = qcom_spi_ecc_init_ctx_pipelined, 453 453 .cleanup_ctx = qcom_spi_ecc_cleanup_ctx_pipelined, 454 454 .prepare_io_req = qcom_spi_ecc_prepare_io_req_pipelined,
drivers/spi/spi-rzv2h-rspi.c (+280 -59)
··· 24 24 /* Registers */ 25 25 #define RSPI_SPDR 0x00 26 26 #define RSPI_SPCR 0x08 27 + #define RSPI_SPPCR 0x0e 27 28 #define RSPI_SSLP 0x10 28 29 #define RSPI_SPBR 0x11 29 30 #define RSPI_SPSCR 0x13 ··· 35 34 #define RSPI_SPFCR 0x6c 36 35 37 36 /* Register SPCR */ 37 + #define RSPI_SPCR_BPEN BIT(31) 38 38 #define RSPI_SPCR_MSTR BIT(30) 39 39 #define RSPI_SPCR_SPRIE BIT(17) 40 40 #define RSPI_SPCR_SCKASE BIT(12) 41 41 #define RSPI_SPCR_SPE BIT(0) 42 42 43 + /* Register SPPCR */ 44 + #define RSPI_SPPCR_SPLP2 BIT(1) 45 + 43 46 /* Register SPBR */ 44 47 #define RSPI_SPBR_SPR_MIN 0 48 + #define RSPI_SPBR_SPR_PCLK_MIN 1 45 49 #define RSPI_SPBR_SPR_MAX 255 46 50 47 51 /* Register SPCMD */ ··· 64 58 /* Register SPDCR2 */ 65 59 #define RSPI_SPDCR2_TTRG GENMASK(11, 8) 66 60 #define RSPI_SPDCR2_RTRG GENMASK(3, 0) 67 - #define RSPI_FIFO_SIZE 16 68 61 69 62 /* Register SPSR */ 70 63 #define RSPI_SPSR_SPRF BIT(15) ··· 72 67 #define RSPI_SPSRC_CLEAR 0xfd80 73 68 74 69 #define RSPI_RESET_NUM 2 75 - #define RSPI_CLK_NUM 3 70 + 71 + struct rzv2h_rspi_best_clock { 72 + struct clk *clk; 73 + unsigned long clk_rate; 74 + unsigned long error; 75 + u32 actual_hz; 76 + u8 brdv; 77 + u8 spr; 78 + }; 79 + 80 + struct rzv2h_rspi_info { 81 + void (*find_tclk_rate)(struct clk *clk, u32 hz, u8 spr_min, u8 spr_max, 82 + struct rzv2h_rspi_best_clock *best_clk); 83 + void (*find_pclk_rate)(struct clk *clk, u32 hz, u8 spr_low, u8 spr_high, 84 + struct rzv2h_rspi_best_clock *best_clk); 85 + const char *tclk_name; 86 + unsigned int fifo_size; 87 + unsigned int num_clks; 88 + }; 76 89 77 90 struct rzv2h_rspi_priv { 78 91 struct reset_control_bulk_data resets[RSPI_RESET_NUM]; 79 92 struct spi_controller *controller; 93 + const struct rzv2h_rspi_info *info; 80 94 void __iomem *base; 81 95 struct clk *tclk; 96 + struct clk *pclk; 82 97 wait_queue_head_t wait; 83 98 unsigned int bytes_per_word; 99 + u32 last_speed_hz; 84 100 u32 freq; 85 101 u16 status; 102 + u8 spr; 103 + u8 brdv; 104 + bool use_pclk; 
86 105 }; 87 106 88 107 #define RZV2H_RSPI_TX(func, type) \ ··· 261 232 return DIV_ROUND_UP(tclk_rate, (2 * (spr + 1) * (1 << brdv))); 262 233 } 263 234 264 - static u32 rzv2h_rspi_setup_clock(struct rzv2h_rspi_priv *rspi, u32 hz) 235 + static void rzv2h_rspi_find_rate_variable(struct clk *clk, u32 hz, 236 + u8 spr_min, u8 spr_max, 237 + struct rzv2h_rspi_best_clock *best) 265 238 { 266 - unsigned long tclk_rate; 239 + long clk_rate, clk_min_rate, clk_max_rate; 240 + int min_rate_spr, max_rate_spr; 241 + unsigned long error; 242 + u32 actual_hz; 243 + u8 brdv; 244 + int spr; 245 + 246 + /* 247 + * On T2H / N2H, the source for the SPI clock is PCLKSPIn, which is a 248 + * 1/32, 1/30, 1/25 or 1/24 divider of PLL4, which is 2400MHz, 249 + * resulting in either 75MHz, 80MHz, 96MHz or 100MHz. 250 + */ 251 + clk_min_rate = clk_round_rate(clk, 0); 252 + if (clk_min_rate < 0) 253 + return; 254 + 255 + clk_max_rate = clk_round_rate(clk, ULONG_MAX); 256 + if (clk_max_rate < 0) 257 + return; 258 + 259 + /* 260 + * From the manual: 261 + * Bit rate = f(PCLKSPIn) / (2 * (n + 1) * 2^N) 262 + * 263 + * If we adapt it to the current context, we get the following: 264 + * hz = rate / ((spr + 1) * (1 << (brdv + 1))) 265 + * 266 + * This can be written in multiple forms depending on what we want to 267 + * determine. 268 + * 269 + * To find the rate, having hz, spr and brdv: 270 + * rate = hz * (spr + 1) * (1 << (brdv + 1)) 271 + * 272 + * To find the spr, having rate, hz and brdv: 273 + * spr = rate / (hz * (1 << (brdv + 1))) - 1 274 + */ 275 + 276 + for (brdv = RSPI_SPCMD_BRDV_MIN; brdv <= RSPI_SPCMD_BRDV_MAX; brdv++) { 277 + /* Calculate the divisor needed to find the SPR from a rate. */ 278 + u32 rate_div = hz * (1 << (brdv + 1)); 279 + 280 + /* 281 + * If the SPR for the minimum rate is greater than the maximum 282 + * allowed value, skip this BRDV.
The divisor increases with each 283 + * BRDV iteration, so the following BRDV might result in a 284 + * minimum SPR that is in the valid range. 285 + */ 286 + min_rate_spr = DIV_ROUND_CLOSEST(clk_min_rate, rate_div) - 1; 287 + if (min_rate_spr > spr_max) 288 + continue; 289 + 290 + /* 291 + * If the SPR for the maximum rate is less than the minimum 292 + * allowed value, exit. The divisor only increases with each 293 + * BRDV iteration, so the following BRDV cannot result in a 294 + * maximum SPR that is in the valid range. 295 + */ 296 + max_rate_spr = DIV_ROUND_CLOSEST(clk_max_rate, rate_div) - 1; 297 + if (max_rate_spr < spr_min) 298 + break; 299 + 300 + if (min_rate_spr < spr_min) 301 + min_rate_spr = spr_min; 302 + 303 + if (max_rate_spr > spr_max) 304 + max_rate_spr = spr_max; 305 + 306 + for (spr = min_rate_spr; spr <= max_rate_spr; spr++) { 307 + clk_rate = (spr + 1) * rate_div; 308 + 309 + clk_rate = clk_round_rate(clk, clk_rate); 310 + if (clk_rate <= 0) 311 + continue; 312 + 313 + actual_hz = rzv2h_rspi_calc_bitrate(clk_rate, spr, brdv); 314 + error = abs((long)hz - (long)actual_hz); 315 + 316 + if (error >= best->error) 317 + continue; 318 + 319 + *best = (struct rzv2h_rspi_best_clock) { 320 + .clk = clk, 321 + .clk_rate = clk_rate, 322 + .error = error, 323 + .actual_hz = actual_hz, 324 + .brdv = brdv, 325 + .spr = spr, 326 + }; 327 + 328 + if (!error) 329 + return; 330 + } 331 + } 332 + } 333 + 334 + static void rzv2h_rspi_find_rate_fixed(struct clk *clk, u32 hz, 335 + u8 spr_min, u8 spr_max, 336 + struct rzv2h_rspi_best_clock *best) 337 + { 338 + unsigned long clk_rate; 339 + unsigned long error; 340 + u32 actual_hz; 267 341 int spr; 268 342 u8 brdv; 269 343 ··· 379 247 * * n = SPR - is RSPI_SPBR.SPR (from 0 to 255) 380 248 * * N = BRDV - is RSPI_SPCMD.BRDV (from 0 to 3) 381 249 */ 382 - tclk_rate = clk_get_rate(rspi->tclk); 250 + clk_rate = clk_get_rate(clk); 383 251 for (brdv = RSPI_SPCMD_BRDV_MIN; brdv <= RSPI_SPCMD_BRDV_MAX; brdv++) { 384 - spr = 
DIV_ROUND_UP(tclk_rate, hz * (1 << (brdv + 1))); 252 + spr = DIV_ROUND_UP(clk_rate, hz * (1 << (brdv + 1))); 385 253 spr--; 386 - if (spr >= RSPI_SPBR_SPR_MIN && spr <= RSPI_SPBR_SPR_MAX) 254 + if (spr >= spr_min && spr <= spr_max) 387 255 goto clock_found; 388 256 } 389 257 390 - return 0; 258 + return; 391 259 392 260 clock_found: 393 - rzv2h_rspi_reg_rmw(rspi, RSPI_SPCMD, RSPI_SPCMD_BRDV, brdv); 394 - writeb(spr, rspi->base + RSPI_SPBR); 261 + actual_hz = rzv2h_rspi_calc_bitrate(clk_rate, spr, brdv); 262 + error = abs((long)hz - (long)actual_hz); 395 263 396 - return rzv2h_rspi_calc_bitrate(tclk_rate, spr, brdv); 264 + if (error >= best->error) 265 + return; 266 + 267 + *best = (struct rzv2h_rspi_best_clock) { 268 + .clk = clk, 269 + .clk_rate = clk_rate, 270 + .error = error, 271 + .actual_hz = actual_hz, 272 + .brdv = brdv, 273 + .spr = spr, 274 + }; 275 + } 276 + 277 + static u32 rzv2h_rspi_setup_clock(struct rzv2h_rspi_priv *rspi, u32 hz) 278 + { 279 + struct rzv2h_rspi_best_clock best_clock = { 280 + .error = ULONG_MAX, 281 + }; 282 + int ret; 283 + 284 + rspi->info->find_tclk_rate(rspi->tclk, hz, RSPI_SPBR_SPR_MIN, 285 + RSPI_SPBR_SPR_MAX, &best_clock); 286 + 287 + /* 288 + * T2H and N2H can also use PCLK as a source, which is 125MHz, but not 289 + * when both SPR and BRDV are 0. 
290 + */ 291 + if (best_clock.error && rspi->info->find_pclk_rate) 292 + rspi->info->find_pclk_rate(rspi->pclk, hz, RSPI_SPBR_SPR_PCLK_MIN, 293 + RSPI_SPBR_SPR_MAX, &best_clock); 294 + 295 + if (!best_clock.clk_rate) 296 + return -EINVAL; 297 + 298 + ret = clk_set_rate(best_clock.clk, best_clock.clk_rate); 299 + if (ret) 300 + return 0; 301 + 302 + rspi->use_pclk = best_clock.clk == rspi->pclk; 303 + rspi->spr = best_clock.spr; 304 + rspi->brdv = best_clock.brdv; 305 + 306 + return best_clock.actual_hz; 397 307 } 398 308 399 309 static int rzv2h_rspi_prepare_message(struct spi_controller *ctlr, ··· 448 274 u8 bits_per_word; 449 275 u32 conf32; 450 276 u16 conf16; 277 + u8 conf8; 451 278 452 279 /* Make sure SPCR.SPE is 0 before amending the configuration */ 453 280 rzv2h_rspi_spe_disable(rspi); 454 - 455 - /* Configure the device to work in "host" mode */ 456 - conf32 = RSPI_SPCR_MSTR; 457 - 458 - /* Auto-stop function */ 459 - conf32 |= RSPI_SPCR_SCKASE; 460 - 461 - /* SPI receive buffer full interrupt enable */ 462 - conf32 |= RSPI_SPCR_SPRIE; 463 - 464 - writel(conf32, rspi->base + RSPI_SPCR); 465 - 466 - /* Use SPCMD0 only */ 467 - writeb(0x0, rspi->base + RSPI_SPSCR); 468 - 469 - /* Setup mode */ 470 - conf32 = FIELD_PREP(RSPI_SPCMD_CPOL, !!(spi->mode & SPI_CPOL)); 471 - conf32 |= FIELD_PREP(RSPI_SPCMD_CPHA, !!(spi->mode & SPI_CPHA)); 472 - conf32 |= FIELD_PREP(RSPI_SPCMD_LSBF, !!(spi->mode & SPI_LSB_FIRST)); 473 - conf32 |= FIELD_PREP(RSPI_SPCMD_SSLKP, 1); 474 - conf32 |= FIELD_PREP(RSPI_SPCMD_SSLA, spi_get_chipselect(spi, 0)); 475 - writel(conf32, rspi->base + RSPI_SPCMD); 476 - if (spi->mode & SPI_CS_HIGH) 477 - writeb(BIT(spi_get_chipselect(spi, 0)), rspi->base + RSPI_SSLP); 478 - else 479 - writeb(0, rspi->base + RSPI_SSLP); 480 - 481 - /* Setup FIFO thresholds */ 482 - conf16 = FIELD_PREP(RSPI_SPDCR2_TTRG, RSPI_FIFO_SIZE - 1); 483 - conf16 |= FIELD_PREP(RSPI_SPDCR2_RTRG, 0); 484 - writew(conf16, rspi->base + RSPI_SPDCR2); 485 - 486 - 
rzv2h_rspi_clear_fifos(rspi); 487 281 488 282 list_for_each_entry(xfer, &message->transfers, transfer_list) { 489 283 if (!xfer->speed_hz) ··· 465 323 return -EINVAL; 466 324 467 325 rspi->bytes_per_word = roundup_pow_of_two(BITS_TO_BYTES(bits_per_word)); 468 - rzv2h_rspi_reg_rmw(rspi, RSPI_SPCMD, RSPI_SPCMD_SPB, bits_per_word - 1); 469 326 470 - rspi->freq = rzv2h_rspi_setup_clock(rspi, speed_hz); 471 - if (!rspi->freq) 472 - return -EINVAL; 327 + if (speed_hz != rspi->last_speed_hz) { 328 + rspi->freq = rzv2h_rspi_setup_clock(rspi, speed_hz); 329 + if (!rspi->freq) 330 + return -EINVAL; 331 + 332 + rspi->last_speed_hz = speed_hz; 333 + } 334 + 335 + writeb(rspi->spr, rspi->base + RSPI_SPBR); 336 + 337 + /* Configure the device to work in "host" mode */ 338 + conf32 = RSPI_SPCR_MSTR; 339 + 340 + /* Auto-stop function */ 341 + conf32 |= RSPI_SPCR_SCKASE; 342 + 343 + /* SPI receive buffer full interrupt enable */ 344 + conf32 |= RSPI_SPCR_SPRIE; 345 + 346 + /* Bypass synchronization circuit */ 347 + conf32 |= FIELD_PREP(RSPI_SPCR_BPEN, rspi->use_pclk); 348 + 349 + writel(conf32, rspi->base + RSPI_SPCR); 350 + 351 + /* Use SPCMD0 only */ 352 + writeb(0x0, rspi->base + RSPI_SPSCR); 353 + 354 + /* Setup loopback */ 355 + conf8 = FIELD_PREP(RSPI_SPPCR_SPLP2, !!(spi->mode & SPI_LOOP)); 356 + writeb(conf8, rspi->base + RSPI_SPPCR); 357 + 358 + /* Setup mode */ 359 + conf32 = FIELD_PREP(RSPI_SPCMD_CPOL, !!(spi->mode & SPI_CPOL)); 360 + conf32 |= FIELD_PREP(RSPI_SPCMD_CPHA, !!(spi->mode & SPI_CPHA)); 361 + conf32 |= FIELD_PREP(RSPI_SPCMD_LSBF, !!(spi->mode & SPI_LSB_FIRST)); 362 + conf32 |= FIELD_PREP(RSPI_SPCMD_SPB, bits_per_word - 1); 363 + conf32 |= FIELD_PREP(RSPI_SPCMD_BRDV, rspi->brdv); 364 + conf32 |= FIELD_PREP(RSPI_SPCMD_SSLKP, 1); 365 + conf32 |= FIELD_PREP(RSPI_SPCMD_SSLA, spi_get_chipselect(spi, 0)); 366 + writel(conf32, rspi->base + RSPI_SPCMD); 367 + if (spi->mode & SPI_CS_HIGH) 368 + writeb(BIT(spi_get_chipselect(spi, 0)), rspi->base + RSPI_SSLP); 369 + else 
370 + writeb(0, rspi->base + RSPI_SSLP); 371 + 372 + /* Setup FIFO thresholds */ 373 + conf16 = FIELD_PREP(RSPI_SPDCR2_TTRG, rspi->info->fifo_size - 1); 374 + conf16 |= FIELD_PREP(RSPI_SPDCR2_RTRG, 0); 375 + writew(conf16, rspi->base + RSPI_SPDCR2); 376 + 377 + rzv2h_rspi_clear_fifos(rspi); 473 378 474 379 rzv2h_rspi_spe_enable(rspi); 475 380 ··· 539 350 struct device *dev = &pdev->dev; 540 351 struct rzv2h_rspi_priv *rspi; 541 352 struct clk_bulk_data *clks; 542 - unsigned long tclk_rate; 543 353 int irq_rx, ret, i; 354 + long tclk_rate; 544 355 545 356 controller = devm_spi_alloc_host(dev, sizeof(*rspi)); 546 357 if (!controller) ··· 551 362 552 363 rspi->controller = controller; 553 364 365 + rspi->info = device_get_match_data(dev); 366 + 554 367 rspi->base = devm_platform_ioremap_resource(pdev, 0); 555 368 if (IS_ERR(rspi->base)) 556 369 return PTR_ERR(rspi->base); 557 370 558 371 ret = devm_clk_bulk_get_all_enabled(dev, &clks); 559 - if (ret != RSPI_CLK_NUM) 372 + if (ret != rspi->info->num_clks) 560 373 return dev_err_probe(dev, ret >= 0 ? 
-EINVAL : ret, 561 374 "cannot get clocks\n"); 562 - for (i = 0; i < RSPI_CLK_NUM; i++) { 563 - if (!strcmp(clks[i].id, "tclk")) { 375 + for (i = 0; i < rspi->info->num_clks; i++) { 376 + if (!strcmp(clks[i].id, rspi->info->tclk_name)) { 564 377 rspi->tclk = clks[i].clk; 565 - break; 378 + } else if (rspi->info->find_pclk_rate && 379 + !strcmp(clks[i].id, "pclk")) { 380 + rspi->pclk = clks[i].clk; 566 381 } 567 382 } 568 383 569 384 if (!rspi->tclk) 570 385 return dev_err_probe(dev, -EINVAL, "Failed to get tclk\n"); 571 386 572 - tclk_rate = clk_get_rate(rspi->tclk); 573 - 574 387 rspi->resets[0].id = "presetn"; 575 388 rspi->resets[1].id = "tresetn"; 576 - ret = devm_reset_control_bulk_get_exclusive(dev, RSPI_RESET_NUM, 577 - rspi->resets); 389 + ret = devm_reset_control_bulk_get_optional_exclusive(dev, RSPI_RESET_NUM, 390 + rspi->resets); 578 391 if (ret) 579 392 return dev_err_probe(dev, ret, "cannot get resets\n"); 580 393 ··· 598 407 } 599 408 600 409 controller->mode_bits = SPI_CPHA | SPI_CPOL | SPI_CS_HIGH | 601 - SPI_LSB_FIRST; 410 + SPI_LSB_FIRST | SPI_LOOP; 602 411 controller->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 603 412 controller->prepare_message = rzv2h_rspi_prepare_message; 604 413 controller->unprepare_message = rzv2h_rspi_unprepare_message; 605 414 controller->num_chipselect = 4; 606 415 controller->transfer_one = rzv2h_rspi_transfer_one; 416 + 417 + tclk_rate = clk_round_rate(rspi->tclk, 0); 418 + if (tclk_rate < 0) { 419 + ret = tclk_rate; 420 + goto quit_resets; 421 + } 422 + 607 423 controller->min_speed_hz = rzv2h_rspi_calc_bitrate(tclk_rate, 608 424 RSPI_SPBR_SPR_MAX, 609 425 RSPI_SPCMD_BRDV_MAX); 426 + 427 + tclk_rate = clk_round_rate(rspi->tclk, ULONG_MAX); 428 + if (tclk_rate < 0) { 429 + ret = tclk_rate; 430 + goto quit_resets; 431 + } 432 + 610 433 controller->max_speed_hz = rzv2h_rspi_calc_bitrate(tclk_rate, 611 434 RSPI_SPBR_SPR_MIN, 612 435 RSPI_SPCMD_BRDV_MIN); ··· 650 445 reset_control_bulk_assert(RSPI_RESET_NUM, 
rspi->resets); 651 446 } 652 447 448 + static const struct rzv2h_rspi_info rzv2h_info = { 449 + .find_tclk_rate = rzv2h_rspi_find_rate_fixed, 450 + .tclk_name = "tclk", 451 + .fifo_size = 16, 452 + .num_clks = 3, 453 + }; 454 + 455 + static const struct rzv2h_rspi_info rzt2h_info = { 456 + .find_tclk_rate = rzv2h_rspi_find_rate_variable, 457 + .find_pclk_rate = rzv2h_rspi_find_rate_fixed, 458 + .tclk_name = "pclkspi", 459 + .fifo_size = 4, 460 + .num_clks = 2, 461 + }; 462 + 653 463 static const struct of_device_id rzv2h_rspi_match[] = { 654 - { .compatible = "renesas,r9a09g057-rspi" }, 464 + { .compatible = "renesas,r9a09g057-rspi", &rzv2h_info }, 465 + { .compatible = "renesas,r9a09g077-rspi", &rzt2h_info }, 655 466 { /* sentinel */ } 656 467 }; 657 468 MODULE_DEVICE_TABLE(of, rzv2h_rspi_match);
drivers/spi/spi-sg2044-nor.c (+2 -2)
··· 42 42 #define SPIFMC_TRAN_CSR_TRAN_MODE_RX BIT(0) 43 43 #define SPIFMC_TRAN_CSR_TRAN_MODE_TX BIT(1) 44 44 #define SPIFMC_TRAN_CSR_FAST_MODE BIT(3) 45 + #define SPIFMC_TRAN_CSR_BUS_WIDTH_MASK GENMASK(5, 4) 45 46 #define SPIFMC_TRAN_CSR_BUS_WIDTH_1_BIT (0x00 << 4) 46 47 #define SPIFMC_TRAN_CSR_BUS_WIDTH_2_BIT (0x01 << 4) 47 48 #define SPIFMC_TRAN_CSR_BUS_WIDTH_4_BIT (0x02 << 4) ··· 123 122 reg = readl(spifmc->io_base + SPIFMC_TRAN_CSR); 124 123 reg &= ~(SPIFMC_TRAN_CSR_TRAN_MODE_MASK | 125 124 SPIFMC_TRAN_CSR_FAST_MODE | 126 - SPIFMC_TRAN_CSR_BUS_WIDTH_2_BIT | 127 - SPIFMC_TRAN_CSR_BUS_WIDTH_4_BIT | 125 + SPIFMC_TRAN_CSR_BUS_WIDTH_MASK | 128 126 SPIFMC_TRAN_CSR_DMA_EN | 129 127 SPIFMC_TRAN_CSR_ADDR_BYTES_MASK | 130 128 SPIFMC_TRAN_CSR_WITH_CMD |
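The sg2044-nor change swaps clearing two individual bus-width encodings for a proper field mask. The old pair (0x01 << 4) | (0x02 << 4) happens to equal GENMASK(5, 4) = 0x30, so behavior is unchanged, but the named mask states the intent and clears every possible encoding of the 2-bit field. A sketch of the resulting read-modify-write, with GENMASK_U32 as a local stand-in for the kernel macro:

```c
#include <stdint.h>

/* Local equivalent of the kernel's GENMASK(h, l): bits h..l set. */
#define GENMASK_U32(h, l) (((~0u) >> (31 - (h))) & ((~0u) << (l)))

#define BUS_WIDTH_MASK  GENMASK_U32(5, 4)
#define BUS_WIDTH_1_BIT (0x00u << 4)
#define BUS_WIDTH_4_BIT (0x02u << 4)

/* Clear the whole 2-bit field with the mask, then OR in the new encoding,
 * regardless of which width was previously programmed. */
static uint32_t set_bus_width(uint32_t reg, uint32_t width_bits)
{
	reg &= ~BUS_WIDTH_MASK;
	reg |= width_bits;
	return reg;
}
```

Clearing by mask rather than by enumerated values also stays correct if a new encoding (say a hypothetical 0x03 << 4) is added later.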
drivers/spi/spi-tegra210-quad.c (+127 -45)
··· 1019 1019 tegra_qspi_readl(tqspi, QSPI_FIFO_STATUS)); 1020 1020 } 1021 1021 1022 + static void tegra_qspi_reset(struct tegra_qspi *tqspi) 1023 + { 1024 + if (device_reset(tqspi->dev) < 0) { 1025 + dev_warn_once(tqspi->dev, "device reset failed\n"); 1026 + tegra_qspi_mask_clear_irq(tqspi); 1027 + } 1028 + } 1029 + 1022 1030 static void tegra_qspi_handle_error(struct tegra_qspi *tqspi) 1023 1031 { 1024 1032 dev_err(tqspi->dev, "error in transfer, fifo status 0x%08x\n", tqspi->status_reg); 1025 1033 tegra_qspi_dump_regs(tqspi); 1026 1034 tegra_qspi_flush_fifos(tqspi, true); 1027 - if (device_reset(tqspi->dev) < 0) 1028 - dev_warn_once(tqspi->dev, "device reset failed\n"); 1035 + tegra_qspi_reset(tqspi); 1029 1036 } 1030 1037 1031 1038 static void tegra_qspi_transfer_end(struct spi_device *spi) ··· 1046 1039 tqspi->command1_reg &= ~QSPI_CS_SW_VAL; 1047 1040 tegra_qspi_writel(tqspi, tqspi->command1_reg, QSPI_COMMAND1); 1048 1041 tegra_qspi_writel(tqspi, tqspi->def_command1_reg, QSPI_COMMAND1); 1042 + } 1043 + 1044 + static irqreturn_t handle_cpu_based_xfer(struct tegra_qspi *tqspi); 1045 + static irqreturn_t handle_dma_based_xfer(struct tegra_qspi *tqspi); 1046 + 1047 + /** 1048 + * tegra_qspi_handle_timeout - Handle transfer timeout with hardware check 1049 + * @tqspi: QSPI controller instance 1050 + * 1051 + * When a timeout occurs but hardware has completed the transfer (interrupt 1052 + * was lost or delayed), manually trigger transfer completion processing. 1053 + * This avoids failing transfers that actually succeeded. 
+ *
+ * Returns: 0 if transfer was completed, -ETIMEDOUT if real timeout
+ */
+static int tegra_qspi_handle_timeout(struct tegra_qspi *tqspi)
+{
+	irqreturn_t ret;
+	u32 status;
+
+	/* Check if hardware actually completed the transfer */
+	status = tegra_qspi_readl(tqspi, QSPI_TRANS_STATUS);
+	if (!(status & QSPI_RDY))
+		return -ETIMEDOUT;
+
+	/*
+	 * Hardware completed but interrupt was lost/delayed. Manually
+	 * process the completion by calling the appropriate handler.
+	 */
+	dev_warn_ratelimited(tqspi->dev,
+			     "QSPI interrupt timeout, but transfer complete\n");
+
+	/* Clear the transfer status */
+	status = tegra_qspi_readl(tqspi, QSPI_TRANS_STATUS);
+	tegra_qspi_writel(tqspi, status, QSPI_TRANS_STATUS);
+
+	/* Manually trigger completion handler */
+	if (!tqspi->is_curr_dma_xfer)
+		ret = handle_cpu_based_xfer(tqspi);
+	else
+		ret = handle_dma_based_xfer(tqspi);
+
+	return (ret == IRQ_HANDLED) ? 0 : -EIO;
 }
 
 static u32 tegra_qspi_cmd_config(bool is_ddr, u8 bus_width, u8 len)
···
 	return addr_config;
 }
 
+static void tegra_qspi_dma_stop(struct tegra_qspi *tqspi)
+{
+	u32 value;
+
+	if ((tqspi->cur_direction & DATA_DIR_TX) && tqspi->tx_dma_chan)
+		dmaengine_terminate_all(tqspi->tx_dma_chan);
+
+	if ((tqspi->cur_direction & DATA_DIR_RX) && tqspi->rx_dma_chan)
+		dmaengine_terminate_all(tqspi->rx_dma_chan);
+
+	value = tegra_qspi_readl(tqspi, QSPI_DMA_CTL);
+	value &= ~QSPI_DMA_EN;
+	tegra_qspi_writel(tqspi, value, QSPI_DMA_CTL);
+}
+
+static void tegra_qspi_pio_stop(struct tegra_qspi *tqspi)
+{
+	u32 value;
+
+	value = tegra_qspi_readl(tqspi, QSPI_COMMAND1);
+	value &= ~QSPI_PIO;
+	tegra_qspi_writel(tqspi, value, QSPI_COMMAND1);
+}
+
 static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
 					struct spi_message *msg)
 {
···
 	struct spi_transfer *xfer;
 	struct spi_device *spi = msg->spi;
 	u8 transfer_phase = 0;
-	u32 cmd1 = 0, dma_ctl = 0;
+	u32 cmd1 = 0;
 	int ret = 0;
 	u32 address_value = 0;
 	u32 cmd_config = 0, addr_config = 0;
···
 						  QSPI_DMA_TIMEOUT);
 
 		if (WARN_ON_ONCE(ret == 0)) {
-			dev_err_ratelimited(tqspi->dev,
-					    "QSPI Transfer failed with timeout\n");
-			if (tqspi->is_curr_dma_xfer) {
-				if ((tqspi->cur_direction & DATA_DIR_TX) &&
-				    tqspi->tx_dma_chan)
-					dmaengine_terminate_all(tqspi->tx_dma_chan);
-				if ((tqspi->cur_direction & DATA_DIR_RX) &&
-				    tqspi->rx_dma_chan)
-					dmaengine_terminate_all(tqspi->rx_dma_chan);
-			}
+			/*
+			 * Check if hardware completed the transfer
+			 * even though interrupt was lost or delayed.
+			 * If so, process the completion and continue.
+			 */
+			ret = tegra_qspi_handle_timeout(tqspi);
+			if (ret < 0) {
+				/* Real timeout - clean up and fail */
+				dev_err(tqspi->dev, "transfer timeout\n");
 
-			/* Abort transfer by resetting pio/dma bit */
-			if (!tqspi->is_curr_dma_xfer) {
-				cmd1 = tegra_qspi_readl
-					(tqspi,
-					 QSPI_COMMAND1);
-				cmd1 &= ~QSPI_PIO;
-				tegra_qspi_writel
-					(tqspi, cmd1,
-					 QSPI_COMMAND1);
-			} else {
-				dma_ctl = tegra_qspi_readl
-					(tqspi,
-					 QSPI_DMA_CTL);
-				dma_ctl &= ~QSPI_DMA_EN;
-				tegra_qspi_writel(tqspi, dma_ctl,
-						  QSPI_DMA_CTL);
-			}
+				/* Abort transfer by resetting pio/dma bit */
+				if (tqspi->is_curr_dma_xfer)
+					tegra_qspi_dma_stop(tqspi);
+				else
+					tegra_qspi_pio_stop(tqspi);
 
-			/* Reset controller if timeout happens */
-			if (device_reset(tqspi->dev) < 0)
-				dev_warn_once(tqspi->dev,
-					      "device reset failed\n");
-			ret = -EIO;
-			goto exit;
+				/* Reset controller if timeout happens */
+				tegra_qspi_reset(tqspi);
+
+				ret = -EIO;
+				goto exit;
+			}
 		}
 
 		if (tqspi->tx_status || tqspi->rx_status) {
···
 			tegra_qspi_transfer_end(spi);
 			spi_transfer_delay_exec(xfer);
 		}
+		tqspi->curr_xfer = NULL;
 		transfer_phase++;
 	}
 	ret = 0;
 
 exit:
+	tqspi->curr_xfer = NULL;
 	msg->status = ret;
 
 	return ret;
···
 		ret = wait_for_completion_timeout(&tqspi->xfer_completion,
 						  QSPI_DMA_TIMEOUT);
 		if (WARN_ON(ret == 0)) {
-			dev_err(tqspi->dev, "transfer timeout\n");
-			if (tqspi->is_curr_dma_xfer) {
-				if ((tqspi->cur_direction & DATA_DIR_TX) && tqspi->tx_dma_chan)
-					dmaengine_terminate_all(tqspi->tx_dma_chan);
-				if ((tqspi->cur_direction & DATA_DIR_RX) && tqspi->rx_dma_chan)
-					dmaengine_terminate_all(tqspi->rx_dma_chan);
+			/*
+			 * Check if hardware completed the transfer even though
+			 * interrupt was lost or delayed. If so, process the
+			 * completion and continue.
+			 */
+			ret = tegra_qspi_handle_timeout(tqspi);
+			if (ret < 0) {
+				/* Real timeout - clean up and fail */
+				dev_err(tqspi->dev, "transfer timeout\n");
+
+				if (tqspi->is_curr_dma_xfer)
+					tegra_qspi_dma_stop(tqspi);
+
+				tegra_qspi_handle_error(tqspi);
+				ret = -EIO;
+				goto complete_xfer;
 			}
-			tegra_qspi_handle_error(tqspi);
-			ret = -EIO;
-			goto complete_xfer;
 		}
 
 		if (tqspi->tx_status || tqspi->rx_status) {
···
 		msg->actual_length += xfer->len + dummy_bytes;
 
 complete_xfer:
+		tqspi->curr_xfer = NULL;
+
 		if (ret < 0) {
 			tegra_qspi_transfer_end(spi);
 			spi_transfer_delay_exec(xfer);
···
 		tegra_qspi_calculate_curr_xfer_param(tqspi, t);
 		tegra_qspi_start_cpu_based_transfer(tqspi, t);
 exit:
+	tqspi->curr_xfer = NULL;
 	spin_unlock_irqrestore(&tqspi->lock, flags);
 	return IRQ_HANDLED;
 }
···
 static irqreturn_t tegra_qspi_isr_thread(int irq, void *context_data)
 {
 	struct tegra_qspi *tqspi = context_data;
+
+	/*
+	 * Occasionally the IRQ thread takes a long time to wake up (usually
+	 * when the CPU that it's running on is excessively busy) and we have
+	 * already reached the timeout before and cleaned up the timed out
+	 * transfer. Avoid any processing in that case and bail out early.
+	 */
+	if (!tqspi->curr_xfer)
+		return IRQ_NONE;
 
 	tqspi->status_reg = tegra_qspi_readl(tqspi, QSPI_FIFO_STATUS);
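The change above follows a common pattern: when a wait-for-completion times out, re-read the transfer status register before declaring failure, since the hardware may have finished and only the interrupt was lost. A minimal user-space sketch of that decision logic (names and register layout are illustrative, not the driver's real symbols):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical "transfer ready" bit, standing in for QSPI_RDY */
#define XFER_RDY (1u << 30)

enum xfer_result {
	XFER_DONE,	/* interrupt arrived, normal completion */
	XFER_RECOVERED,	/* HW finished but IRQ was lost: handle manually */
	XFER_TIMEDOUT,	/* real timeout: abort and reset the controller */
};

/*
 * Decide how to finish a transfer after the completion wait returns.
 * On timeout, consult the (snapshotted) status register before
 * treating it as a hard failure.
 */
static enum xfer_result finish_xfer(bool irq_seen, uint32_t trans_status)
{
	if (irq_seen)
		return XFER_DONE;
	if (trans_status & XFER_RDY)
		return XFER_RECOVERED;
	return XFER_TIMEDOUT;
}
```

In the driver, the `XFER_RECOVERED` branch corresponds to `tegra_qspi_handle_timeout()` returning 0 and the message continuing, while `XFER_TIMEDOUT` maps to the abort-and-reset path.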
+1 -1
drivers/spi/spi-tle62x0.c
···
 	value = (st->gpio_state >> gpio_num) & 1;
 	mutex_unlock(&st->lock);
 
-	return sysfs_emit(buf, "%d", value);
+	return sysfs_emit(buf, "%d\n", value);
 }
 
 static ssize_t tle62x0_gpio_store(struct device *dev,
+2
drivers/spi/spidev.c
···
  */
 static const struct spi_device_id spidev_spi_ids[] = {
 	{ .name = /* abb */ "spi-sensor" },
+	{ .name = /* arduino */ "unoq-mcu" },
 	{ .name = /* cisco */ "spi-petra" },
 	{ .name = /* dh */ "dhcom-board" },
 	{ .name = /* elgin */ "jg10309-01" },
···
 
 static const struct of_device_id spidev_dt_ids[] = {
 	{ .compatible = "abb,spi-sensor", .data = &spidev_of_check },
+	{ .compatible = "arduino,unoq-mcu", .data = &spidev_of_check },
 	{ .compatible = "cisco,spi-petra", .data = &spidev_of_check },
 	{ .compatible = "dh,dhcom-board", .data = &spidev_of_check },
 	{ .compatible = "elgin,jg10309-01", .data = &spidev_of_check },
-73
include/linux/platform_data/spi-davinci.h
···
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * Copyright 2009 Texas Instruments.
- */
-
-#ifndef __ARCH_ARM_DAVINCI_SPI_H
-#define __ARCH_ARM_DAVINCI_SPI_H
-
-#include <linux/platform_data/edma.h>
-
-#define SPI_INTERN_CS	0xFF
-
-enum {
-	SPI_VERSION_1, /* For DM355/DM365/DM6467 */
-	SPI_VERSION_2, /* For DA8xx */
-};
-
-/**
- * davinci_spi_platform_data - Platform data for SPI master device on DaVinci
- *
- * @version:	version of the SPI IP. Different DaVinci devices have slightly
- *		varying versions of the same IP.
- * @num_chipselect: number of chipselects supported by this SPI master
- * @intr_line:	interrupt line used to connect the SPI IP to the ARM interrupt
- *		controller withn the SoC. Possible values are 0 and 1.
- * @cshold_bug:	set this to true if the SPI controller on your chip requires
- *		a write to CSHOLD bit in between transfers (like in DM355).
- * @dma_event_q: DMA event queue to use if SPI_IO_TYPE_DMA is used for any
- *		device on the bus.
- */
-struct davinci_spi_platform_data {
-	u8	version;
-	u8	num_chipselect;
-	u8	intr_line;
-	u8	prescaler_limit;
-	bool	cshold_bug;
-	enum dma_event_q dma_event_q;
-};
-
-/**
- * davinci_spi_config - Per-chip-select configuration for SPI slave devices
- *
- * @wdelay:	amount of delay between transmissions. Measured in number of
- *		SPI module clocks.
- * @odd_parity:	polarity of parity flag at the end of transmit data stream.
- *		0 - odd parity, 1 - even parity.
- * @parity_enable: enable transmission of parity at end of each transmit
- *		data stream.
- * @io_type:	type of IO transfer. Choose between polled, interrupt and DMA.
- * @timer_disable: disable chip-select timers (setup and hold)
- * @c2tdelay:	chip-select setup time. Measured in number of SPI module clocks.
- * @t2cdelay:	chip-select hold time. Measured in number of SPI module clocks.
- * @t2edelay:	transmit data finished to SPI ENAn pin inactive time. Measured
- *		in number of SPI clocks.
- * @c2edelay:	chip-select active to SPI ENAn signal active time. Measured in
- *		number of SPI clocks.
- */
-struct davinci_spi_config {
-	u8	wdelay;
-	u8	odd_parity;
-	u8	parity_enable;
-#define SPI_IO_TYPE_INTR	0
-#define SPI_IO_TYPE_POLL	1
-#define SPI_IO_TYPE_DMA		2
-	u8	io_type;
-	u8	timer_disable;
-	u8	c2tdelay;
-	u8	t2cdelay;
-	u8	t2edelay;
-	u8	c2edelay;
-};
-
-#endif /* __ARCH_ARM_DAVINCI_SPI_H */
+9
include/linux/spi/offload/types.h
···
 	SPI_OFFLOAD_TRIGGER_PERIODIC,
 };
 
+/**
+ * spi_offload_trigger_periodic - configuration parameters for periodic triggers
+ * @frequency_hz: The rate that the trigger should fire in Hz.
+ * @offset_ns: A delay in nanoseconds between when this trigger fires
+ *	compared to another trigger. This requires specialized hardware
+ *	that supports such synchronization with a delay between two or
+ *	more triggers. Set to 0 when not needed.
+ */
 struct spi_offload_trigger_periodic {
 	u64 frequency_hz;
+	u64 offset_ns;
 };
 
 struct spi_offload_trigger_config {
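The intended use of the new `offset_ns` field (per the pull request description) is to define one periodic trigger relative to another, e.g. firing a read a fixed delay after a conversion trigger. The arithmetic it implies can be sketched standalone; the helper name below is made up for illustration and is not part of the offload API:

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/*
 * Time of the n-th firing of a periodic trigger, in nanoseconds:
 * n * period + offset. Two triggers sharing a frequency but with
 * different offset_ns values fire a constant offset apart, which is
 * the IIO-style "trigger an operation, read the result after a
 * delay" pattern the changelog describes.
 */
static uint64_t trigger_fire_ns(uint64_t frequency_hz, uint64_t offset_ns,
				uint64_t n)
{
	return n * (NSEC_PER_SEC / frequency_hz) + offset_ns;
}
```

For example, a 1 MHz convert trigger (`offset_ns = 0`) and a 1 MHz read trigger with `offset_ns = 250` fire 250 ns apart on every cycle; actually honoring that spacing requires hardware support, as the kernel-doc notes.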
+106
include/trace/events/spi-mem.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM spi-mem
+
+#undef TRACE_SYSTEM_VAR
+#define TRACE_SYSTEM_VAR spi_mem
+
+#if !defined(_TRACE_SPI_MEM_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_SPI_MEM_H
+
+#include <linux/tracepoint.h>
+#include <linux/spi/spi-mem.h>
+
+#define decode_dtr(dtr)			\
+	__print_symbolic(dtr,		\
+			 { 0, "S" },	\
+			 { 1, "D" })
+
+TRACE_EVENT(spi_mem_start_op,
+	TP_PROTO(struct spi_mem *mem, const struct spi_mem_op *op),
+	TP_ARGS(mem, op),
+
+	TP_STRUCT__entry(
+		__string(name, mem->name)
+		__dynamic_array(u8, op, 1 + op->addr.nbytes + op->dummy.nbytes)
+		__dynamic_array(u8, data, op->data.dir == SPI_MEM_DATA_OUT ?
+					  min(op->data.nbytes, 64) : 0)
+		__field(u32, data_len)
+		__field(u32, max_freq)
+		__field(u8, cmd_buswidth)
+		__field(bool, cmd_dtr)
+		__field(u8, addr_buswidth)
+		__field(bool, addr_dtr)
+		__field(u8, dummy_nbytes)
+		__field(u8, data_buswidth)
+		__field(bool, data_dtr)
+	),
+
+	TP_fast_assign(
+		int i;
+
+		__assign_str(name);
+		__entry->max_freq = op->max_freq ?: mem->spi->max_speed_hz;
+
+		__entry->cmd_buswidth = op->cmd.buswidth;
+		__entry->cmd_dtr = op->cmd.dtr;
+		*((u8 *)__get_dynamic_array(op)) = op->cmd.opcode;
+
+		__entry->addr_buswidth = op->addr.buswidth;
+		__entry->addr_dtr = op->addr.dtr;
+		for (i = 0; i < op->addr.nbytes; i++)
+			((u8 *)__get_dynamic_array(op))[i + 1] =
+				op->addr.val >> (8 * (op->addr.nbytes - i - 1));
+
+		memset(((u8 *)__get_dynamic_array(op)) + op->addr.nbytes + 1,
+		       0xff, op->dummy.nbytes);
+
+		__entry->data_len = op->data.nbytes;
+		__entry->data_buswidth = op->data.buswidth;
+		__entry->data_dtr = op->data.dtr;
+		if (op->data.dir == SPI_MEM_DATA_OUT)
+			memcpy(__get_dynamic_array(data), op->data.buf.out,
+			       __get_dynamic_array_len(data));
+	),
+
+	TP_printk("%s %u%s-%u%s-%u%s @%u Hz op=[%*phD] len=%u tx=[%*phD]",
+		  __get_str(name),
+		  __entry->cmd_buswidth, decode_dtr(__entry->cmd_dtr),
+		  __entry->addr_buswidth, decode_dtr(__entry->addr_dtr),
+		  __entry->data_buswidth, decode_dtr(__entry->data_dtr),
+		  __entry->max_freq,
+		  __get_dynamic_array_len(op), __get_dynamic_array(op),
+		  __entry->data_len,
+		  __get_dynamic_array_len(data), __get_dynamic_array(data))
+);
+
+TRACE_EVENT(spi_mem_stop_op,
+	TP_PROTO(struct spi_mem *mem, const struct spi_mem_op *op),
+	TP_ARGS(mem, op),
+
+	TP_STRUCT__entry(
+		__string(name, mem->name)
+		__dynamic_array(u8, data, op->data.dir == SPI_MEM_DATA_IN ?
+					  min(op->data.nbytes, 64) : 0)
+		__field(u32, data_len)
+	),
+
+	TP_fast_assign(
+		__assign_str(name);
+		__entry->data_len = op->data.nbytes;
+		if (op->data.dir == SPI_MEM_DATA_IN)
+			memcpy(__get_dynamic_array(data), op->data.buf.in,
+			       __get_dynamic_array_len(data));
+	),
+
+	TP_printk("%s len=%u rx=[%*phD]",
+		  __get_str(name),
+		  __entry->data_len,
+		  __get_dynamic_array_len(data), __get_dynamic_array(data))
+);
+
+#endif /* _TRACE_SPI_MEM_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
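The `TP_fast_assign()` loop above serializes `op->addr.val` into the trace record most-significant byte first, so the logged `op=[…]` bytes match the order the opcode and address go out on the wire. The same loop, lifted into a standalone user-space helper (the name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Write the low `nbytes` bytes of `val` into `buf` in big-endian
 * order, mirroring the address-serialization loop in the
 * spi_mem_start_op tracepoint: byte i gets the byte that is
 * (nbytes - i - 1) positions above the least-significant byte.
 */
static void addr_to_bytes(uint64_t val, unsigned int nbytes, uint8_t *buf)
{
	for (unsigned int i = 0; i < nbytes; i++)
		buf[i] = val >> (8 * (nbytes - i - 1));
}
```

For a 3-byte flash address 0x123456 this yields {0x12, 0x34, 0x56}, which is what a reader of the trace expects to see right after the opcode byte.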