
Merge tag 'spi-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
"With the exception of some refactoring to fix a long-standing issue
where cache syncs were not handled correctly for messages that have
PIO and DMA transfers going to the same page, there has been no work
on the core this time around, and it's also been quite a quiet
release for the drivers:

- Fix cache syncs for cases where we have DMA and PIO transfers in
the same message going to the same page

- Update the fsl_spi driver to use transfer_one() rather than a
custom transfer function

- Support for configuring transfer speeds with the AMD SPI controller

- Support for a second chip select and 64K erase on Intel SPI

- Support for Microchip coreQSPI, Nuvoton NPCM845, NXP i.MX93, and
Rockchip RK3128 and RK3588"

* tag 'spi-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (73 commits)
spi: Ensure that sg_table won't be used after being freed
spi: spi-gxp: Use devm_platform_ioremap_resource()
spi: s3c64xx: Fix large transfers with DMA
spi: Split transfers larger than max size
spi: Fix cache corruption due to DMA/PIO overlap
spi: Save current RX and TX DMA devices
spi: mt65xx: Add dma max segment size declaration
spi: migrate mt7621 text bindings to YAML
spi: renesas,sh-msiof: Add r8a779g0 support
spi: spi-fsl-qspi: Use devm_platform_ioremap_resource_byname()
spi: spi-fsl-lpspi: Use devm_platform_get_and_ioremap_resource()
spi: spi-fsl-dspi: Use devm_platform_get_and_ioremap_resource()
spi/omap100k:Fix PM disable depth imbalance in omap1_spi100k_probe
spi: dw: Fix PM disable depth imbalance in dw_spi_bt1_probe
spi: cadence-quadspi: Fix PM disable depth imbalance in cqspi_probe
spi: s3c24xx: Switch to use devm_spi_alloc_master()
spi: xilinx: Switch to use devm_spi_alloc_master()
spi: img-spfi: using pm_runtime_resume_and_get instead of pm_runtime_get_sync
spi: aspeed: Remove redundant dev_err call
spi: spi-mpc52xx: switch to using gpiod API
...

+1434 -420
+2
Documentation/devicetree/bindings/display/panel/kingdisplay,kd035g6-54nt.yaml
···
   reg: true
   reset-gpios: true
 
+  spi-3wire: true
+
 required:
   - compatible
   - power-supply
+2
Documentation/devicetree/bindings/display/panel/leadtek,ltk035c5444t.yaml
···
   reg: true
   reset-gpios: true
 
+  spi-3wire: true
+
 required:
   - compatible
   - power-supply
+4
Documentation/devicetree/bindings/display/panel/samsung,s6e63m0.yaml
···
   default-brightness: true
   max-brightness: true
 
+  spi-3wire: true
+  spi-cpha: true
+  spi-cpol: true
+
   vdd3-supply:
     description: VDD regulator
 
+11 -4
Documentation/devicetree/bindings/spi/microchip,mpfs-spi.yaml
···
 $id: http://devicetree.org/schemas/spi/microchip,mpfs-spi.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
 
-title: Microchip MPFS {Q,}SPI Controller Device Tree Bindings
+title: Microchip FPGA {Q,}SPI Controllers
+
+description:
+  SPI and QSPI controllers on Microchip PolarFire SoC and the "soft"/
+  fabric IP cores they are based on
 
 maintainers:
   - Conor Dooley <conor.dooley@microchip.com>
···
 
 properties:
   compatible:
-    enum:
-      - microchip,mpfs-spi
-      - microchip,mpfs-qspi
+    oneOf:
+      - items:
+          - const: microchip,mpfs-qspi
+          - const: microchip,coreqspi-rtl-v2
+      - const: microchip,coreqspi-rtl-v2 #FPGA QSPI
+      - const: microchip,mpfs-spi
 
   reg:
     maxItems: 1
+2 -1
Documentation/devicetree/bindings/spi/nuvoton,npcm-pspi.txt
···
 Nuvoton NPCM7xx SOC support two PSPI channels.
 
 Required properties:
- - compatible : "nuvoton,npcm750-pspi" for NPCM7XX BMC
+ - compatible : "nuvoton,npcm750-pspi" for Poleg NPCM7XX.
+                "nuvoton,npcm845-pspi" for Arbel NPCM8XX.
  - #address-cells : should be 1. see spi-bus.txt
  - #size-cells : should be 0. see spi-bus.txt
  - specifies physical base address and size of the register.
+1 -2
Documentation/devicetree/bindings/spi/nvidia,tegra210-quad-peripheral-props.yaml
···
     minimum: 0
     maximum: 255
 
-unevaluatedProperties: true
-
+additionalProperties: true
+61
Documentation/devicetree/bindings/spi/ralink,mt7621-spi.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/spi/ralink,mt7621-spi.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+maintainers:
+  - Sergio Paracuellos <sergio.paracuellos@gmail.com>
+
+title: Mediatek MT7621/MT7628 SPI controller
+
+allOf:
+  - $ref: /schemas/spi/spi-controller.yaml#
+
+properties:
+  compatible:
+    const: ralink,mt7621-spi
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  clock-names:
+    const: spi
+
+  resets:
+    maxItems: 1
+
+  reset-names:
+    const: spi
+
+required:
+  - compatible
+  - reg
+  - resets
+  - "#address-cells"
+  - "#size-cells"
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/mt7621-clk.h>
+    #include <dt-bindings/reset/mt7621-reset.h>
+
+    spi@b00 {
+        compatible = "ralink,mt7621-spi";
+        reg = <0xb00 0x100>;
+        clocks = <&sysc MT7621_CLK_SPI>;
+        clock-names = "spi";
+        resets = <&sysc MT7621_RST_SPI>;
+        reset-names = "spi";
+
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        pinctrl-names = "default";
+        pinctrl-0 = <&spi_pins>;
+    };
+13 -1
Documentation/devicetree/bindings/spi/renesas,sh-msiof.yaml
···
               - renesas,msiof-r8a77980  # R-Car V3H
               - renesas,msiof-r8a77990  # R-Car E3
               - renesas,msiof-r8a77995  # R-Car D3
-              - renesas,msiof-r8a779a0  # R-Car V3U
           - const: renesas,rcar-gen3-msiof # generic R-Car Gen3 and RZ/G2
                                            # compatible device
+      - items:
+          - enum:
+              - renesas,msiof-r8a779a0  # R-Car V3U
+              - renesas,msiof-r8a779f0  # R-Car S4-8
+              - renesas,msiof-r8a779g0  # R-Car V4H
+          - const: renesas,rcar-gen4-msiof # generic R-Car Gen4
+                                           # compatible device
       - items:
           - const: renesas,sh-msiof     # deprecated
···
     maxItems: 1
 
   clocks:
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+  resets:
     maxItems: 1
 
   num-cs:
-1
Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
···
     const: spi
 
   reg-io-width:
-    $ref: /schemas/types.yaml#/definitions/uint32
     description: I/O register width (in bytes) implemented by this device
     default: 4
     enum: [ 2, 4 ]
+5
Documentation/devicetree/bindings/spi/spi-controller.yaml
···
       $ref: spi-peripheral-props.yaml
 
       properties:
+        spi-3wire:
+          $ref: /schemas/types.yaml#/definitions/flag
+          description:
+            The device requires 3-wire mode.
+
         spi-cpha:
           $ref: /schemas/types.yaml#/definitions/flag
           description:
+13 -1
Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml
···
           - fsl,imx7ulp-spi
           - fsl,imx8qxp-spi
       - items:
-          - const: fsl,imx8ulp-spi
+          - enum:
+              - fsl,imx8ulp-spi
+              - fsl,imx93-spi
           - const: fsl,imx7ulp-spi
   reg:
     maxItems: 1
···
     items:
       - const: per
       - const: ipg
+
+  dmas:
+    items:
+      - description: TX DMA Channel
+      - description: RX DMA Channel
+
+  dma-names:
+    items:
+      - const: tx
+      - const: rx
 
   fsl,spi-only-use-cs1-sel:
     description:
-26
Documentation/devicetree/bindings/spi/spi-mt7621.txt
···
-Binding for MTK SPI controller (MT7621 MIPS)
-
-Required properties:
-- compatible: Should be one of the following:
-  - "ralink,mt7621-spi": for mt7621/mt7628/mt7688 platforms
-- #address-cells: should be 1.
-- #size-cells: should be 0.
-- reg: Address and length of the register set for the device
-- resets: phandle to the reset controller asserting this device in
-  reset
-  See ../reset/reset.txt for details.
-
-Optional properties:
-- cs-gpios: see spi-bus.txt.
-
-Example:
-
-- SoC Specific Portion:
-	spi0: spi@b00 {
-		compatible = "ralink,mt7621-spi";
-		reg = <0xb00 0x100>;
-		#address-cells = <1>;
-		#size-cells = <0>;
-		resets = <&rstctrl 18>;
-		reset-names = "spi";
-	};
-5
Documentation/devicetree/bindings/spi/spi-peripheral-props.yaml
···
     description:
       Chip select used by the device.
 
-  spi-3wire:
-    $ref: /schemas/types.yaml#/definitions/flag
-    description:
-      The device requires 3-wire mode.
-
   spi-cs-high:
     $ref: /schemas/types.yaml#/definitions/flag
     description:
+5
Documentation/devicetree/bindings/spi/spi-rockchip.yaml
···
       - items:
           - enum:
               - rockchip,px30-spi
+              - rockchip,rk3128-spi
               - rockchip,rk3188-spi
               - rockchip,rk3288-spi
               - rockchip,rk3308-spi
···
               - rockchip,rk3368-spi
               - rockchip,rk3399-spi
               - rockchip,rk3568-spi
+              - rockchip,rk3588-spi
               - rockchip,rv1126-spi
           - const: rockchip,rk3066-spi
···
       Names for the pin configuration(s); may be "default" or "sleep",
       where the "sleep" configuration may describe the state
       the pins should be in during system suspend.
+
+  power-domains:
+    maxItems: 1
 
 required:
   - compatible
+1
MAINTAINERS
···
 F:	drivers/pci/controller/pcie-microchip-host.c
 F:	drivers/rtc/rtc-mpfs.c
 F:	drivers/soc/microchip/
+F:	drivers/spi/spi-microchip-core-qspi.c
 F:	drivers/spi/spi-microchip-core.c
 F:	drivers/usb/musb/mpfs.c
 F:	include/soc/microchip/mpfs.h
+9
drivers/spi/Kconfig
···
 	  PolarFire SoC.
 	  If built as a module, it will be called spi-microchip-core.
 
+config SPI_MICROCHIP_CORE_QSPI
+	tristate "Microchip FPGA QSPI controllers"
+	depends on SPI_MASTER
+	help
+	  This enables the QSPI driver for Microchip FPGA QSPI controllers.
+	  Say Y or M here if you want to use the QSPI controllers on
+	  PolarFire SoC.
+	  If built as a module, it will be called spi-microchip-core-qspi.
+
 config SPI_MT65XX
 	tristate "MediaTek SPI controller"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
+1
drivers/spi/Makefile
···
 obj-$(CONFIG_SPI_MESON_SPICC)		+= spi-meson-spicc.o
 obj-$(CONFIG_SPI_MESON_SPIFC)		+= spi-meson-spifc.o
 obj-$(CONFIG_SPI_MICROCHIP_CORE)	+= spi-microchip-core.o
+obj-$(CONFIG_SPI_MICROCHIP_CORE_QSPI)	+= spi-microchip-core-qspi.o
 obj-$(CONFIG_SPI_MPC512x_PSC)		+= spi-mpc512x-psc.o
 obj-$(CONFIG_SPI_MPC52xx_PSC)		+= spi-mpc52xx-psc.o
 obj-$(CONFIG_SPI_MPC52xx)		+= spi-mpc52xx.o
+139 -44
drivers/spi/spi-amd.c
···
 #define AMD_SPI_FIFO_SIZE	70
 #define AMD_SPI_MEM_SIZE	200
 
-/* M_CMD OP codes for SPI */
-#define AMD_SPI_XFER_TX		1
-#define AMD_SPI_XFER_RX		2
+#define AMD_SPI_ENA_REG		0x20
+#define AMD_SPI_ALT_SPD_SHIFT	20
+#define AMD_SPI_ALT_SPD_MASK	GENMASK(23, AMD_SPI_ALT_SPD_SHIFT)
+#define AMD_SPI_SPI100_SHIFT	0
+#define AMD_SPI_SPI100_MASK	GENMASK(AMD_SPI_SPI100_SHIFT, AMD_SPI_SPI100_SHIFT)
+#define AMD_SPI_SPEED_REG	0x6C
+#define AMD_SPI_SPD7_SHIFT	8
+#define AMD_SPI_SPD7_MASK	GENMASK(13, AMD_SPI_SPD7_SHIFT)
+
+#define AMD_SPI_MAX_HZ		100000000
+#define AMD_SPI_MIN_HZ		800000
 
 /**
  * enum amd_spi_versions - SPI controller versions
···
 	AMD_SPI_V2,
 };
 
+enum amd_spi_speed {
+	F_66_66MHz,
+	F_33_33MHz,
+	F_22_22MHz,
+	F_16_66MHz,
+	F_100MHz,
+	F_800KHz,
+	SPI_SPD7,
+	F_50MHz = 0x4,
+	F_4MHz = 0x32,
+	F_3_17MHz = 0x3F
+};
+
+/**
+ * struct amd_spi_freq - Matches device speed with values to write in regs
+ * @speed_hz: Device frequency
+ * @enable_val: Value to be written to "enable register"
+ * @spd7_val: Some frequencies requires to have a value written at SPISPEED register
+ */
+struct amd_spi_freq {
+	u32 speed_hz;
+	u32 enable_val;
+	u32 spd7_val;
+};
+
 /**
  * struct amd_spi - SPI driver instance
  * @io_remap_addr: Start address of the SPI controller registers
  * @version: SPI controller hardware version
+ * @speed_hz: Device frequency
  */
 struct amd_spi {
 	void __iomem *io_remap_addr;
 	enum amd_spi_versions version;
+	unsigned int speed_hz;
 };
 
 static inline u8 amd_spi_readreg8(struct amd_spi *amd_spi, int idx)
···
 	return 0;
 }
 
+static const struct amd_spi_freq amd_spi_freq[] = {
+	{ AMD_SPI_MAX_HZ, F_100MHz, 0},
+	{       66660000, F_66_66MHz, 0},
+	{       50000000, SPI_SPD7, F_50MHz},
+	{       33330000, F_33_33MHz, 0},
+	{       22220000, F_22_22MHz, 0},
+	{       16660000, F_16_66MHz, 0},
+	{        4000000, SPI_SPD7, F_4MHz},
+	{        3170000, SPI_SPD7, F_3_17MHz},
+	{ AMD_SPI_MIN_HZ, F_800KHz, 0},
+};
+
+static int amd_set_spi_freq(struct amd_spi *amd_spi, u32 speed_hz)
+{
+	unsigned int i, spd7_val, alt_spd;
+
+	if (speed_hz < AMD_SPI_MIN_HZ)
+		return -EINVAL;
+
+	for (i = 0; i < ARRAY_SIZE(amd_spi_freq); i++)
+		if (speed_hz >= amd_spi_freq[i].speed_hz)
+			break;
+
+	if (amd_spi->speed_hz == amd_spi_freq[i].speed_hz)
+		return 0;
+
+	amd_spi->speed_hz = amd_spi_freq[i].speed_hz;
+
+	alt_spd = (amd_spi_freq[i].enable_val << AMD_SPI_ALT_SPD_SHIFT)
+		   & AMD_SPI_ALT_SPD_MASK;
+	amd_spi_setclear_reg32(amd_spi, AMD_SPI_ENA_REG, alt_spd,
+			       AMD_SPI_ALT_SPD_MASK);
+
+	if (amd_spi->speed_hz == AMD_SPI_MAX_HZ)
+		amd_spi_setclear_reg32(amd_spi, AMD_SPI_ENA_REG, 1,
+				       AMD_SPI_SPI100_MASK);
+
+	if (amd_spi_freq[i].spd7_val) {
+		spd7_val = (amd_spi_freq[i].spd7_val << AMD_SPI_SPD7_SHIFT)
+			    & AMD_SPI_SPD7_MASK;
+		amd_spi_setclear_reg32(amd_spi, AMD_SPI_SPEED_REG, spd7_val,
+				       AMD_SPI_SPD7_MASK);
+	}
+
+	return 0;
+}
+
 static inline int amd_spi_fifo_xfer(struct amd_spi *amd_spi,
 				    struct spi_master *master,
 				    struct spi_message *message)
 {
 	struct spi_transfer *xfer = NULL;
-	u8 cmd_opcode;
+	struct spi_device *spi = message->spi;
+	u8 cmd_opcode = 0, fifo_pos = AMD_SPI_FIFO_BASE;
 	u8 *buf = NULL;
-	u32 m_cmd = 0;
 	u32 i = 0;
 	u32 tx_len = 0, rx_len = 0;
 
 	list_for_each_entry(xfer, &message->transfers,
 			    transfer_list) {
-		if (xfer->rx_buf)
-			m_cmd = AMD_SPI_XFER_RX;
-		if (xfer->tx_buf)
-			m_cmd = AMD_SPI_XFER_TX;
+		if (xfer->speed_hz)
+			amd_set_spi_freq(amd_spi, xfer->speed_hz);
+		else
+			amd_set_spi_freq(amd_spi, spi->max_speed_hz);
 
-		if (m_cmd & AMD_SPI_XFER_TX) {
+		if (xfer->tx_buf) {
 			buf = (u8 *)xfer->tx_buf;
-			tx_len = xfer->len - 1;
-			cmd_opcode = *(u8 *)xfer->tx_buf;
-			buf++;
-			amd_spi_set_opcode(amd_spi, cmd_opcode);
+			if (!tx_len) {
+				cmd_opcode = *(u8 *)xfer->tx_buf;
+				buf++;
+				xfer->len--;
+			}
+			tx_len += xfer->len;
 
 			/* Write data into the FIFO. */
-			for (i = 0; i < tx_len; i++) {
-				iowrite8(buf[i], ((u8 __iomem *)amd_spi->io_remap_addr +
-					 AMD_SPI_FIFO_BASE + i));
-			}
+			for (i = 0; i < xfer->len; i++)
+				amd_spi_writereg8(amd_spi, fifo_pos + i, buf[i]);
 
-			amd_spi_set_tx_count(amd_spi, tx_len);
-			amd_spi_clear_fifo_ptr(amd_spi);
-			/* Execute command */
-			amd_spi_execute_opcode(amd_spi);
+			fifo_pos += xfer->len;
 		}
-		if (m_cmd & AMD_SPI_XFER_RX) {
-			/*
-			 * Store no. of bytes to be received from
-			 * FIFO
-			 */
-			rx_len = xfer->len;
-			buf = (u8 *)xfer->rx_buf;
-			amd_spi_set_rx_count(amd_spi, rx_len);
-			amd_spi_clear_fifo_ptr(amd_spi);
-			/* Execute command */
-			amd_spi_execute_opcode(amd_spi);
-			amd_spi_busy_wait(amd_spi);
-			/* Read data from FIFO to receive buffer */
-			for (i = 0; i < rx_len; i++)
-				buf[i] = amd_spi_readreg8(amd_spi, AMD_SPI_FIFO_BASE + tx_len + i);
-		}
+
+		/* Store no. of bytes to be received from FIFO */
+		if (xfer->rx_buf)
+			rx_len += xfer->len;
+	}
+
+	if (!buf) {
+		message->status = -EINVAL;
+		goto fin_msg;
+	}
+
+	amd_spi_set_opcode(amd_spi, cmd_opcode);
+	amd_spi_set_tx_count(amd_spi, tx_len);
+	amd_spi_set_rx_count(amd_spi, rx_len);
+
+	/* Execute command */
+	message->status = amd_spi_execute_opcode(amd_spi);
+	if (message->status)
+		goto fin_msg;
+
+	if (rx_len) {
+		message->status = amd_spi_busy_wait(amd_spi);
+		if (message->status)
+			goto fin_msg;
+
+		list_for_each_entry(xfer, &message->transfers, transfer_list)
+			if (xfer->rx_buf) {
+				buf = (u8 *)xfer->rx_buf;
+				/* Read data from FIFO to receive buffer */
+				for (i = 0; i < xfer->len; i++)
+					buf[i] = amd_spi_readreg8(amd_spi, fifo_pos + i);
+				fifo_pos += xfer->len;
+			}
 	}
 
 	/* Update statistics */
 	message->actual_length = tx_len + rx_len + 1;
-	/* complete the transaction */
-	message->status = 0;
 
+fin_msg:
 	switch (amd_spi->version) {
 	case AMD_SPI_V1:
 		break;
···
 
 	spi_finalize_current_message(master);
 
-	return 0;
+	return message->status;
 }
 
 static int amd_spi_master_transfer(struct spi_master *master,
···
 	 * Extract spi_transfers from the spi message and
 	 * program the controller.
 	 */
-	amd_spi_fifo_xfer(amd_spi, master, msg);
-
-	return 0;
+	return amd_spi_fifo_xfer(amd_spi, master, msg);
 }
 
 static size_t amd_spi_max_transfer_size(struct spi_device *spi)
···
 	master->num_chipselect = 4;
 	master->mode_bits = 0;
 	master->flags = SPI_MASTER_HALF_DUPLEX;
+	master->max_speed_hz = AMD_SPI_MAX_HZ;
+	master->min_speed_hz = AMD_SPI_MIN_HZ;
 	master->setup = amd_spi_master_setup;
 	master->transfer_one_message = amd_spi_master_transfer;
 	master->max_transfer_size = amd_spi_max_transfer_size;
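The amd_set_spi_freq() change above picks a clock by scanning a descending table and rounding the requested rate down to the nearest supported frequency. A self-contained sketch of just that rounding step (plain C; the table values mirror amd_spi_freq[] from the diff, but the function and names here are illustrative, not the driver's):

```c
#include <stddef.h>

#define MAX_HZ 100000000u
#define MIN_HZ 800000u

/* Descending table of supported rates, mirroring amd_spi_freq[]. */
static const unsigned int rates[] = {
	MAX_HZ, 66660000u, 50000000u, 33330000u, 22220000u,
	16660000u, 4000000u, 3170000u, MIN_HZ,
};

/* Round speed_hz down to the nearest supported rate.
 * Returns -1 below the minimum (the driver returns -EINVAL). */
static long pick_rate(unsigned int speed_hz)
{
	size_t i;

	if (speed_hz < MIN_HZ)
		return -1;

	/* First table entry <= the request wins: a round-down. */
	for (i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
		if (speed_hz >= rates[i])
			return (long)rates[i];

	return -1; /* unreachable: MIN_HZ is the last entry */
}
```

A request of 60 MHz, for example, lands on the 50 MHz entry, since 66.66 MHz would overshoot the device's limit.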
+1 -3
drivers/spi/spi-aspeed-smc.c
···
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	aspi->regs = devm_ioremap_resource(dev, res);
-	if (IS_ERR(aspi->regs)) {
-		dev_err(dev, "missing AHB register window\n");
+	if (IS_ERR(aspi->regs))
 		return PTR_ERR(aspi->regs);
-	}
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 	aspi->ahb_base = devm_ioremap_resource(dev, res);
+2 -1
drivers/spi/spi-cadence-quadspi.c
···
 	pm_runtime_enable(dev);
 	ret = pm_runtime_resume_and_get(dev);
 	if (ret < 0)
-		return ret;
+		goto probe_pm_failed;
 
 	ret = clk_prepare_enable(cqspi->clk);
 	if (ret) {
···
 	clk_disable_unprepare(cqspi->clk);
 probe_clk_failed:
 	pm_runtime_put_sync(dev);
+probe_pm_failed:
 	pm_runtime_disable(dev);
 	return ret;
 }
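Several commits in this series fix the same "PM disable depth imbalance" pattern: pm_runtime_enable() decrements the device's disable depth, so every probe() error path after it must call pm_runtime_disable() to restore the balance before returning. A toy model of the invariant, with a plain counter standing in for the kernel's disable_depth (the names and behavior here are deliberately simplified, not the real runtime-PM API):

```c
/* Toy model of runtime-PM enable/disable pairing. In the kernel,
 * pm_runtime_enable() decrements dev->power.disable_depth and
 * pm_runtime_disable() increments it; a probe() that fails must
 * leave the depth exactly where it found it. */
static int disable_depth = 1; /* devices start with runtime PM disabled */

static void pm_enable(void)  { disable_depth--; }
static void pm_disable(void) { disable_depth++; }

/* A probe() that can fail after enabling runtime PM. Returning the
 * error without the matching pm_disable() is the bug being fixed. */
static int probe(int resume_ok)
{
	pm_enable();
	if (!resume_ok) {
		pm_disable(); /* the fix: unwind before returning */
		return -1;
	}
	return 0; /* success: PM stays enabled until remove() */
}
```

Without the unwind, a failed probe leaves the depth one too low, so a later pm_runtime_disable() in remove or a re-probe underflows the count and trips the "Unbalanced pm_runtime_enable!" warning.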
+1 -3
drivers/spi/spi-cadence-xspi.c
···
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "sdma");
 	cdns_xspi->sdmabase = devm_ioremap_resource(dev, res);
-	if (IS_ERR(cdns_xspi->sdmabase)) {
-		dev_err(dev, "Failed to remap SDMA address\n");
+	if (IS_ERR(cdns_xspi->sdmabase))
 		return PTR_ERR(cdns_xspi->sdmabase);
-	}
 	cdns_xspi->sdmasize = resource_size(res);
 
 	cdns_xspi->auxbase = devm_platform_ioremap_resource_byname(pdev, "aux");
+3 -1
drivers/spi/spi-dw-bt1.c
···
 	pm_runtime_enable(&pdev->dev);
 
 	ret = dw_spi_add_host(&pdev->dev, dws);
-	if (ret)
+	if (ret) {
+		pm_runtime_disable(&pdev->dev);
 		goto err_disable_clk;
+	}
 
 	platform_set_drvdata(pdev, dwsbt1);
 
+1 -1
drivers/spi/spi-dw-core.c
···
 
 	ret = spi_register_controller(master);
 	if (ret) {
-		dev_err(&master->dev, "problem registering spi master\n");
+		dev_err_probe(dev, ret, "problem registering spi master\n");
 		goto err_dma_exit;
 	}
 
+1 -2
drivers/spi/spi-fsl-dspi.c
···
 	else
 		ctlr->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16);
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	base = devm_ioremap_resource(&pdev->dev, res);
+	base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
 	if (IS_ERR(base)) {
 		ret = PTR_ERR(base);
 		goto out_ctlr_put;
+3 -7
drivers/spi/spi-fsl-lpspi.c
···
 
 	init_completion(&fsl_lpspi->xfer_done);
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	fsl_lpspi->base = devm_ioremap_resource(&pdev->dev, res);
+	fsl_lpspi->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
 	if (IS_ERR(fsl_lpspi->base)) {
 		ret = PTR_ERR(fsl_lpspi->base);
 		goto out_controller_put;
···
 
 	ret = devm_spi_register_controller(&pdev->dev, controller);
 	if (ret < 0) {
-		dev_err_probe(&pdev->dev, ret, "spi_register_controller error: %i\n", ret);
+		dev_err_probe(&pdev->dev, ret, "spi_register_controller error\n");
 		goto free_dma;
 	}
···
 
 static int __maybe_unused fsl_lpspi_suspend(struct device *dev)
 {
-	int ret;
-
 	pinctrl_pm_select_sleep_state(dev);
-	ret = pm_runtime_force_suspend(dev);
-	return ret;
+	return pm_runtime_force_suspend(dev);
 }
 
 static int __maybe_unused fsl_lpspi_resume(struct device *dev)
+1 -2
drivers/spi/spi-fsl-qspi.c
···
 	platform_set_drvdata(pdev, q);
 
 	/* find the resources */
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "QuadSPI");
-	q->iobase = devm_ioremap_resource(dev, res);
+	q->iobase = devm_platform_ioremap_resource_byname(pdev, "QuadSPI");
 	if (IS_ERR(q->iobase)) {
 		ret = PTR_ERR(q->iobase);
 		goto err_put_ctrl;
+46 -117
drivers/spi/spi-fsl-spi.c
···
 	local_irq_restore(flags);
 }
 
-static void fsl_spi_chipselect(struct spi_device *spi, int value)
-{
-	struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(spi->master);
-	struct fsl_spi_platform_data *pdata;
-	struct spi_mpc8xxx_cs *cs = spi->controller_state;
-
-	pdata = spi->dev.parent->parent->platform_data;
-
-	if (value == BITBANG_CS_INACTIVE) {
-		if (pdata->cs_control)
-			pdata->cs_control(spi, false);
-	}
-
-	if (value == BITBANG_CS_ACTIVE) {
-		mpc8xxx_spi->rx_shift = cs->rx_shift;
-		mpc8xxx_spi->tx_shift = cs->tx_shift;
-		mpc8xxx_spi->get_rx = cs->get_rx;
-		mpc8xxx_spi->get_tx = cs->get_tx;
-
-		fsl_spi_change_mode(spi);
-
-		if (pdata->cs_control)
-			pdata->cs_control(spi, true);
-	}
-}
-
 static void fsl_spi_qe_cpu_set_shifts(u32 *rx_shift, u32 *tx_shift,
 				      int bits_per_word, int msb_first)
 {
···
 	return mpc8xxx_spi->count;
 }
 
-static int fsl_spi_do_one_msg(struct spi_master *master,
-			      struct spi_message *m)
+static int fsl_spi_prepare_message(struct spi_controller *ctlr,
+				   struct spi_message *m)
 {
-	struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(master);
-	struct spi_device *spi = m->spi;
-	struct spi_transfer *t, *first;
-	unsigned int cs_change;
-	const int nsecs = 50;
-	int status, last_bpw;
+	struct mpc8xxx_spi *mpc8xxx_spi = spi_controller_get_devdata(ctlr);
+	struct spi_transfer *t;
 
 	/*
 	 * In CPU mode, optimize large byte transfers to use larger
···
 			t->bits_per_word = 16;
 		}
 	}
-
-	/* Don't allow changes if CS is active */
-	cs_change = 1;
-	list_for_each_entry(t, &m->transfers, transfer_list) {
-		if (cs_change)
-			first = t;
-		cs_change = t->cs_change;
-		if (first->speed_hz != t->speed_hz) {
-			dev_err(&spi->dev,
-				"speed_hz cannot change while CS is active\n");
-			return -EINVAL;
-		}
-	}
-
-	last_bpw = -1;
-	cs_change = 1;
-	status = -EINVAL;
-	list_for_each_entry(t, &m->transfers, transfer_list) {
-		if (cs_change || last_bpw != t->bits_per_word)
-			status = fsl_spi_setup_transfer(spi, t);
-		if (status < 0)
-			break;
-		last_bpw = t->bits_per_word;
-
-		if (cs_change) {
-			fsl_spi_chipselect(spi, BITBANG_CS_ACTIVE);
-			ndelay(nsecs);
-		}
-		cs_change = t->cs_change;
-		if (t->len)
-			status = fsl_spi_bufs(spi, t, m->is_dma_mapped);
-		if (status) {
-			status = -EMSGSIZE;
-			break;
-		}
-		m->actual_length += t->len;
-
-		spi_transfer_delay_exec(t);
-
-		if (cs_change) {
-			ndelay(nsecs);
-			fsl_spi_chipselect(spi, BITBANG_CS_INACTIVE);
-			ndelay(nsecs);
-		}
-	}
-
-	m->status = status;
-
-	if (status || !cs_change) {
-		ndelay(nsecs);
-		fsl_spi_chipselect(spi, BITBANG_CS_INACTIVE);
-	}
-
-	fsl_spi_setup_transfer(spi, NULL);
-	spi_finalize_current_message(master);
 	return 0;
+}
+
+static int fsl_spi_transfer_one(struct spi_controller *controller,
+				struct spi_device *spi,
+				struct spi_transfer *t)
+{
+	int status;
+
+	status = fsl_spi_setup_transfer(spi, t);
+	if (status < 0)
+		return status;
+	if (t->len)
+		status = fsl_spi_bufs(spi, t, !!t->tx_dma || !!t->rx_dma);
+	if (status > 0)
+		return -EMSGSIZE;
+
+	return status;
+}
+
+static int fsl_spi_unprepare_message(struct spi_controller *controller,
+				     struct spi_message *msg)
+{
+	return fsl_spi_setup_transfer(msg->spi, NULL);
 }
 
 static int fsl_spi_setup(struct spi_device *spi)
···
 		kfree(cs);
 		return retval;
 	}
-
-	/* Initialize chipselect - might be active for SPI_CS_HIGH mode */
-	fsl_spi_chipselect(spi, BITBANG_CS_INACTIVE);
 
 	return 0;
 }
···
 	u32 slvsel;
 	u16 cs = spi->chip_select;
 
-	if (spi->cs_gpiod) {
-		gpiod_set_value(spi->cs_gpiod, on);
-	} else if (cs < mpc8xxx_spi->native_chipselects) {
+	if (cs < mpc8xxx_spi->native_chipselects) {
 		slvsel = mpc8xxx_spi_read_reg(&reg_base->slvsel);
 		slvsel = on ? (slvsel | (1 << cs)) : (slvsel & ~(1 << cs));
 		mpc8xxx_spi_write_reg(&reg_base->slvsel, slvsel);
···
 
 static void fsl_spi_grlib_probe(struct device *dev)
 {
-	struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
 	struct spi_master *master = dev_get_drvdata(dev);
 	struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(master);
 	struct fsl_spi_reg __iomem *reg_base = mpc8xxx_spi->reg_base;
···
 		mpc8xxx_spi_write_reg(&reg_base->slvsel, 0xffffffff);
 	}
 	master->num_chipselect = mpc8xxx_spi->native_chipselects;
-	pdata->cs_control = fsl_spi_grlib_cs_control;
+	master->set_cs = fsl_spi_grlib_cs_control;
+}
+
+static void fsl_spi_cs_control(struct spi_device *spi, bool on)
+{
+	struct device *dev = spi->dev.parent->parent;
+	struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
+	struct mpc8xxx_spi_probe_info *pinfo = to_of_pinfo(pdata);
+
+	if (WARN_ON_ONCE(!pinfo->immr_spi_cs))
+		return;
+	iowrite32be(on ? 0 : SPI_BOOT_SEL_BIT, pinfo->immr_spi_cs);
 }
 
 static struct spi_master *fsl_spi_probe(struct device *dev,
···
 
 	master->setup = fsl_spi_setup;
 	master->cleanup = fsl_spi_cleanup;
-	master->transfer_one_message = fsl_spi_do_one_msg;
+	master->prepare_message = fsl_spi_prepare_message;
+	master->transfer_one = fsl_spi_transfer_one;
+	master->unprepare_message = fsl_spi_unprepare_message;
 	master->use_gpio_descriptors = true;
+	master->set_cs = fsl_spi_cs_control;
 
 	mpc8xxx_spi = spi_master_get_devdata(master);
 	mpc8xxx_spi->max_bits_per_word = 32;
···
 	return ERR_PTR(ret);
 }
 
-static void fsl_spi_cs_control(struct spi_device *spi, bool on)
-{
-	if (spi->cs_gpiod) {
-		gpiod_set_value(spi->cs_gpiod, on);
-	} else {
-		struct device *dev = spi->dev.parent->parent;
-		struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
-		struct mpc8xxx_spi_probe_info *pinfo = to_of_pinfo(pdata);
-
-		if (WARN_ON_ONCE(!pinfo->immr_spi_cs))
-			return;
-		iowrite32be(on ? 0 : SPI_BOOT_SEL_BIT, pinfo->immr_spi_cs);
-	}
-}
-
 static int of_fsl_spi_probe(struct platform_device *ofdev)
 {
 	struct device *dev = &ofdev->dev;
···
 	ret = gpiod_count(dev, "cs");
 	if (ret < 0)
 		ret = 0;
-	if (ret == 0 && !spisel_boot) {
+	if (ret == 0 && !spisel_boot)
 		pdata->max_chipselect = 1;
-	} else {
+	else
 		pdata->max_chipselect = ret + spisel_boot;
-		pdata->cs_control = fsl_spi_cs_control;
-	}
 	}
 
 	ret = of_address_to_resource(np, 0, &mem);
+3 -7
drivers/spi/spi-gxp.c
···
 	const struct gxp_spi_data *data;
 	struct spi_controller *ctlr;
 	struct gxp_spi *spifi;
-	struct resource *res;
 	int ret;
 
 	data = of_device_get_match_data(&pdev->dev);
···
 	spifi->data = data;
 	spifi->dev = dev;
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	spifi->reg_base = devm_ioremap_resource(&pdev->dev, res);
+	spifi->reg_base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(spifi->reg_base))
 		return PTR_ERR(spifi->reg_base);
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-	spifi->dat_base = devm_ioremap_resource(&pdev->dev, res);
+	spifi->dat_base = devm_platform_ioremap_resource(pdev, 1);
 	if (IS_ERR(spifi->dat_base))
 		return PTR_ERR(spifi->dat_base);
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
-	spifi->dir_base = devm_ioremap_resource(&pdev->dev, res);
+	spifi->dir_base = devm_platform_ioremap_resource(pdev, 2);
 	if (IS_ERR(spifi->dir_base))
 		return PTR_ERR(spifi->dir_base);
 
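spi-gxp and several other drivers in this series collapse the platform_get_resource() + devm_ioremap_resource() pair into a single devm_platform_ioremap_resource() call. A rough sketch of what such a helper folds together, using stand-in types and a fake "ioremap" rather than the real kernel API (the real helper lives in drivers/base/platform.c):

```c
#include <stddef.h>

/* Minimal stand-ins for the kernel structures involved; these are
 * illustrative types, not the real kernel API. */
struct resource { unsigned long start; unsigned long size; };
struct platform_device { struct resource res[3]; };

/* Stand-in for devm_ioremap_resource(): "maps" a looked-up resource.
 * Returns 0 for an empty slot, where the kernel returns an ERR_PTR. */
static unsigned long stub_ioremap(const struct resource *res)
{
	return res->size ? res->start : 0;
}

/* What devm_platform_ioremap_resource(pdev, index) folds together:
 * resource lookup by index plus the ioremap, in one call. */
static unsigned long ioremap_by_index(struct platform_device *pdev,
				      unsigned int index)
{
	if (index >= 3)
		return 0; /* no such resource */
	return stub_ioremap(&pdev->res[index]);
}
```

The conversion removes a local `struct resource *res` and one call site per region, and centralizes the missing-resource error handling, which is exactly the shape of the spi-gxp diff above.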
+2 -4
drivers/spi/spi-img-spfi.c
···
 	struct img_spfi *spfi = spi_master_get_devdata(master);
 	int ret;
 
-	ret = pm_runtime_get_sync(dev);
-	if (ret < 0) {
-		pm_runtime_put_noidle(dev);
+	ret = pm_runtime_resume_and_get(dev);
+	if (ret < 0)
 		return ret;
-	}
 	spfi_reset(spfi);
 	pm_runtime_put(dev);
 
+148 -16
drivers/spi/spi-intel.c
···
 #define ERASE_64K_OPCODE_SHIFT		16
 #define ERASE_64K_OPCODE_MASK		(0xff << ERASE_OPCODE_SHIFT)
 
+/* Flash descriptor fields */
+#define FLVALSIG_MAGIC			0x0ff0a55a
+#define FLMAP0_NC_MASK			GENMASK(9, 8)
+#define FLMAP0_NC_SHIFT			8
+#define FLMAP0_FCBA_MASK		GENMASK(7, 0)
+
+#define FLCOMP_C0DEN_MASK		GENMASK(3, 0)
+#define FLCOMP_C0DEN_512K		0x00
+#define FLCOMP_C0DEN_1M			0x01
+#define FLCOMP_C0DEN_2M			0x02
+#define FLCOMP_C0DEN_4M			0x03
+#define FLCOMP_C0DEN_8M			0x04
+#define FLCOMP_C0DEN_16M		0x05
+#define FLCOMP_C0DEN_32M		0x06
+#define FLCOMP_C0DEN_64M		0x07
+
 #define INTEL_SPI_TIMEOUT		5000 /* ms */
 #define INTEL_SPI_FIFO_SZ		64
···
 * @master: Pointer to the SPI controller structure
 * @nregions: Maximum number of regions
 * @pr_num: Maximum number of protected range registers
+ * @chip0_size: Size of the first flash chip in bytes
 * @locked: Is SPI setting locked
 * @swseq_reg: Use SW sequencer in register reads/writes
 * @swseq_erase: Use SW sequencer in erase operation
···
 	struct spi_controller *master;
 	size_t nregions;
 	size_t pr_num;
+	size_t chip0_size;
 	bool locked;
 	bool swseq_reg;
 	bool swseq_erase;
···
 	struct spi_mem_op mem_op;
 	u32 replacement_op;
 	int (*exec_op)(struct intel_spi *ispi,
+		       const struct spi_mem *mem,
 		       const struct intel_spi_mem_op *iop,
 		       const struct spi_mem_op *op);
 };
···
 	return 0;
 }
 
-static int intel_spi_read_reg(struct intel_spi *ispi,
+static u32 intel_spi_chip_addr(const struct intel_spi *ispi,
+			       const struct spi_mem *mem)
+{
+	/* Pick up the correct start address */
+	if (!mem)
+		return 0;
+	return mem->spi->chip_select == 1 ? ispi->chip0_size : 0;
+}
+
+static int intel_spi_read_reg(struct intel_spi *ispi, const struct spi_mem *mem,
 			      const struct intel_spi_mem_op *iop,
 			      const struct spi_mem_op *op)
 {
···
 	u8 opcode = op->cmd.opcode;
 	int ret;
 
-	/* Address of the first chip */
-	writel(0, ispi->base + FADDR);
+	writel(intel_spi_chip_addr(ispi, mem), ispi->base + FADDR);
 
 	if (ispi->swseq_reg)
 		ret = intel_spi_sw_cycle(ispi, opcode, nbytes,
···
 	return intel_spi_read_block(ispi, op->data.buf.in, nbytes);
 }
 
-static int intel_spi_write_reg(struct intel_spi *ispi,
+static int intel_spi_write_reg(struct intel_spi *ispi, const struct spi_mem *mem,
 			       const struct intel_spi_mem_op *iop,
 			       const struct spi_mem_op *op)
 {
···
 	if (opcode == SPINOR_OP_WRDI)
 		return 0;
 
-	writel(0, ispi->base + FADDR);
+	writel(intel_spi_chip_addr(ispi, mem), ispi->base + FADDR);
 
 	/* Write the value beforehand */
 	ret = intel_spi_write_block(ispi, op->data.buf.out, nbytes);
···
 	return intel_spi_hw_cycle(ispi, opcode, nbytes);
 }
 
-static int intel_spi_read(struct intel_spi *ispi,
+static int intel_spi_read(struct intel_spi *ispi, const struct spi_mem *mem,
 			  const struct intel_spi_mem_op *iop,
 			  const struct spi_mem_op *op)
 {
-	void *read_buf = op->data.buf.in;
+	u32 addr = intel_spi_chip_addr(ispi, mem) + op->addr.val;
 	size_t block_size, nbytes = op->data.nbytes;
-	u32 addr = op->addr.val;
+	void *read_buf = op->data.buf.in;
 	u32 val, status;
 	int ret;
···
 	return 0;
 }
 
-static int intel_spi_write(struct intel_spi *ispi,
+static int intel_spi_write(struct intel_spi *ispi, const struct spi_mem *mem,
 			   const struct intel_spi_mem_op *iop,
 			   const struct spi_mem_op *op)
 {
+	u32
addr = intel_spi_chip_addr(ispi, mem) + op->addr.val; 619 593 size_t block_size, nbytes = op->data.nbytes; 620 594 const void *write_buf = op->data.buf.out; 621 - u32 addr = op->addr.val; 622 595 u32 val, status; 623 596 int ret; 624 597 ··· 675 648 return 0; 676 649 } 677 650 678 - static int intel_spi_erase(struct intel_spi *ispi, 651 + static int intel_spi_erase(struct intel_spi *ispi, const struct spi_mem *mem, 679 652 const struct intel_spi_mem_op *iop, 680 653 const struct spi_mem_op *op) 681 654 { 655 + u32 addr = intel_spi_chip_addr(ispi, mem) + op->addr.val; 682 656 u8 opcode = op->cmd.opcode; 683 - u32 addr = op->addr.val; 684 657 u32 val, status; 685 658 int ret; 686 659 ··· 792 765 if (!iop) 793 766 return -EOPNOTSUPP; 794 767 795 - return iop->exec_op(ispi, iop, op); 768 + return iop->exec_op(ispi, mem, iop, op); 796 769 } 797 770 798 771 static const char *intel_spi_get_name(struct spi_mem *mem) ··· 832 805 op.data.nbytes = len; 833 806 op.data.buf.in = buf; 834 807 835 - ret = iop->exec_op(ispi, iop, &op); 808 + ret = iop->exec_op(ispi, desc->mem, iop, &op); 836 809 return ret ? ret : len; 837 810 } 838 811 ··· 848 821 op.data.nbytes = len; 849 822 op.data.buf.out = buf; 850 823 851 - ret = iop->exec_op(ispi, iop, &op); 824 + ret = iop->exec_op(ispi, desc->mem, iop, &op); 852 825 return ret ? 
ret : len; 853 826 } 854 827 ··· 1100 1073 ispi->pregs = ispi->base + CNL_PR; 1101 1074 ispi->nregions = CNL_FREG_NUM; 1102 1075 ispi->pr_num = CNL_PR_NUM; 1076 + erase_64k = true; 1103 1077 break; 1104 1078 1105 1079 default: ··· 1254 1226 } 1255 1227 } 1256 1228 1229 + static int intel_spi_read_desc(struct intel_spi *ispi) 1230 + { 1231 + struct spi_mem_op op = 1232 + SPI_MEM_OP(SPI_MEM_OP_CMD(SPINOR_OP_READ, 0), 1233 + SPI_MEM_OP_ADDR(3, 0, 0), 1234 + SPI_MEM_OP_NO_DUMMY, 1235 + SPI_MEM_OP_DATA_IN(0, NULL, 0)); 1236 + u32 buf[2], nc, fcba, flcomp; 1237 + ssize_t ret; 1238 + 1239 + op.addr.val = 0x10; 1240 + op.data.buf.in = buf; 1241 + op.data.nbytes = sizeof(buf); 1242 + 1243 + ret = intel_spi_read(ispi, NULL, NULL, &op); 1244 + if (ret) { 1245 + dev_warn(ispi->dev, "failed to read descriptor\n"); 1246 + return ret; 1247 + } 1248 + 1249 + dev_dbg(ispi->dev, "FLVALSIG=0x%08x\n", buf[0]); 1250 + dev_dbg(ispi->dev, "FLMAP0=0x%08x\n", buf[1]); 1251 + 1252 + if (buf[0] != FLVALSIG_MAGIC) { 1253 + dev_warn(ispi->dev, "descriptor signature not valid\n"); 1254 + return -ENODEV; 1255 + } 1256 + 1257 + fcba = (buf[1] & FLMAP0_FCBA_MASK) << 4; 1258 + dev_dbg(ispi->dev, "FCBA=%#x\n", fcba); 1259 + 1260 + op.addr.val = fcba; 1261 + op.data.buf.in = &flcomp; 1262 + op.data.nbytes = sizeof(flcomp); 1263 + 1264 + ret = intel_spi_read(ispi, NULL, NULL, &op); 1265 + if (ret) { 1266 + dev_warn(ispi->dev, "failed to read FLCOMP\n"); 1267 + return -ENODEV; 1268 + } 1269 + 1270 + dev_dbg(ispi->dev, "FLCOMP=0x%08x\n", flcomp); 1271 + 1272 + switch (flcomp & FLCOMP_C0DEN_MASK) { 1273 + case FLCOMP_C0DEN_512K: 1274 + ispi->chip0_size = SZ_512K; 1275 + break; 1276 + case FLCOMP_C0DEN_1M: 1277 + ispi->chip0_size = SZ_1M; 1278 + break; 1279 + case FLCOMP_C0DEN_2M: 1280 + ispi->chip0_size = SZ_2M; 1281 + break; 1282 + case FLCOMP_C0DEN_4M: 1283 + ispi->chip0_size = SZ_4M; 1284 + break; 1285 + case FLCOMP_C0DEN_8M: 1286 + ispi->chip0_size = SZ_8M; 1287 + break; 1288 + case FLCOMP_C0DEN_16M: 
1289 + ispi->chip0_size = SZ_16M; 1290 + break; 1291 + case FLCOMP_C0DEN_32M: 1292 + ispi->chip0_size = SZ_32M; 1293 + break; 1294 + case FLCOMP_C0DEN_64M: 1295 + ispi->chip0_size = SZ_64M; 1296 + break; 1297 + default: 1298 + return -EINVAL; 1299 + } 1300 + 1301 + dev_dbg(ispi->dev, "chip0 size %zd KB\n", ispi->chip0_size / SZ_1K); 1302 + 1303 + nc = (buf[1] & FLMAP0_NC_MASK) >> FLMAP0_NC_SHIFT; 1304 + if (!nc) 1305 + ispi->master->num_chipselect = 1; 1306 + else if (nc == 1) 1307 + ispi->master->num_chipselect = 2; 1308 + else 1309 + return -EINVAL; 1310 + 1311 + dev_dbg(ispi->dev, "%u flash components found\n", 1312 + ispi->master->num_chipselect); 1313 + return 0; 1314 + } 1315 + 1257 1316 static int intel_spi_populate_chip(struct intel_spi *ispi) 1258 1317 { 1259 1318 struct flash_platform_data *pdata; 1260 1319 struct spi_board_info chip; 1320 + int ret; 1261 1321 1262 1322 pdata = devm_kzalloc(ispi->dev, sizeof(*pdata), GFP_KERNEL); 1263 1323 if (!pdata) ··· 1363 1247 snprintf(chip.modalias, 8, "spi-nor"); 1364 1248 chip.platform_data = pdata; 1365 1249 1366 - return spi_new_device(ispi->master, &chip) ? 0 : -ENODEV; 1250 + if (!spi_new_device(ispi->master, &chip)) 1251 + return -ENODEV; 1252 + 1253 + /* Add the second chip if present */ 1254 + if (ispi->master->num_chipselect < 2) 1255 + return 0; 1256 + 1257 + ret = intel_spi_read_desc(ispi); 1258 + if (ret) 1259 + return ret; 1260 + 1261 + chip.platform_data = NULL; 1262 + chip.chip_select = 1; 1263 + 1264 + if (!spi_new_device(ispi->master, &chip)) 1265 + return -ENODEV; 1266 + return 0; 1367 1267 } 1368 1268 1369 1269 /**
+27
drivers/spi/spi-loopback-test.c
··· 313 313 }, 314 314 }, 315 315 }, 316 + { 317 + .description = "three tx+rx transfers with overlapping cache lines", 318 + .fill_option = FILL_COUNT_8, 319 + /* 320 + * This should be large enough for the controller driver to 321 + * choose to transfer it with DMA. 322 + */ 323 + .iterate_len = { 512, -1 }, 324 + .iterate_transfer_mask = BIT(1), 325 + .transfer_count = 3, 326 + .transfers = { 327 + { 328 + .len = 1, 329 + .tx_buf = TX(0), 330 + .rx_buf = RX(0), 331 + }, 332 + { 333 + .tx_buf = TX(1), 334 + .rx_buf = RX(1), 335 + }, 336 + { 337 + .len = 1, 338 + .tx_buf = TX(513), 339 + .rx_buf = RX(513), 340 + }, 341 + }, 342 + }, 316 343 317 344 { /* end of tests sequence */ } 318 345 };
+4 -4
drivers/spi/spi-meson-spicc.c
··· 537 537 struct clk_divider *divider = to_clk_divider(hw); 538 538 struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider); 539 539 540 - if (!spicc->master->cur_msg || !spicc->master->busy) 540 + if (!spicc->master->cur_msg) 541 541 return 0; 542 542 543 543 return clk_divider_ops.recalc_rate(hw, parent_rate); ··· 549 549 struct clk_divider *divider = to_clk_divider(hw); 550 550 struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider); 551 551 552 - if (!spicc->master->cur_msg || !spicc->master->busy) 552 + if (!spicc->master->cur_msg) 553 553 return -EINVAL; 554 554 555 555 return clk_divider_ops.determine_rate(hw, req); ··· 561 561 struct clk_divider *divider = to_clk_divider(hw); 562 562 struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider); 563 563 564 - if (!spicc->master->cur_msg || !spicc->master->busy) 564 + if (!spicc->master->cur_msg) 565 565 return -EINVAL; 566 566 567 567 return clk_divider_ops.set_rate(hw, rate, parent_rate); 568 568 } 569 569 570 - const struct clk_ops meson_spicc_pow2_clk_ops = { 570 + static const struct clk_ops meson_spicc_pow2_clk_ops = { 571 571 .recalc_rate = meson_spicc_pow2_recalc_rate, 572 572 .determine_rate = meson_spicc_pow2_determine_rate, 573 573 .set_rate = meson_spicc_pow2_set_rate,
+600
drivers/spi/spi-microchip-core-qspi.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0) 2 + /* 3 + * Microchip coreQSPI QSPI controller driver 4 + * 5 + * Copyright (C) 2018-2022 Microchip Technology Inc. and its subsidiaries 6 + * 7 + * Author: Naga Sureshkumar Relli <nagasuresh.relli@microchip.com> 8 + * 9 + */ 10 + 11 + #include <linux/clk.h> 12 + #include <linux/err.h> 13 + #include <linux/init.h> 14 + #include <linux/interrupt.h> 15 + #include <linux/io.h> 16 + #include <linux/iopoll.h> 17 + #include <linux/module.h> 18 + #include <linux/of.h> 19 + #include <linux/of_irq.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/spi/spi.h> 22 + #include <linux/spi/spi-mem.h> 23 + 24 + /* 25 + * QSPI Control register mask defines 26 + */ 27 + #define CONTROL_ENABLE BIT(0) 28 + #define CONTROL_MASTER BIT(1) 29 + #define CONTROL_XIP BIT(2) 30 + #define CONTROL_XIPADDR BIT(3) 31 + #define CONTROL_CLKIDLE BIT(10) 32 + #define CONTROL_SAMPLE_MASK GENMASK(12, 11) 33 + #define CONTROL_MODE0 BIT(13) 34 + #define CONTROL_MODE12_MASK GENMASK(15, 14) 35 + #define CONTROL_MODE12_EX_RO BIT(14) 36 + #define CONTROL_MODE12_EX_RW BIT(15) 37 + #define CONTROL_MODE12_FULL GENMASK(15, 14) 38 + #define CONTROL_FLAGSX4 BIT(16) 39 + #define CONTROL_CLKRATE_MASK GENMASK(27, 24) 40 + #define CONTROL_CLKRATE_SHIFT 24 41 + 42 + /* 43 + * QSPI Frames register mask defines 44 + */ 45 + #define FRAMES_TOTALBYTES_MASK GENMASK(15, 0) 46 + #define FRAMES_CMDBYTES_MASK GENMASK(24, 16) 47 + #define FRAMES_CMDBYTES_SHIFT 16 48 + #define FRAMES_SHIFT 25 49 + #define FRAMES_IDLE_MASK GENMASK(29, 26) 50 + #define FRAMES_IDLE_SHIFT 26 51 + #define FRAMES_FLAGBYTE BIT(30) 52 + #define FRAMES_FLAGWORD BIT(31) 53 + 54 + /* 55 + * QSPI Interrupt Enable register mask defines 56 + */ 57 + #define IEN_TXDONE BIT(0) 58 + #define IEN_RXDONE BIT(1) 59 + #define IEN_RXAVAILABLE BIT(2) 60 + #define IEN_TXAVAILABLE BIT(3) 61 + #define IEN_RXFIFOEMPTY BIT(4) 62 + #define IEN_TXFIFOFULL BIT(5) 63 + 64 + /* 65 + * QSPI Status register mask defines 66 
+ */ 67 + #define STATUS_TXDONE BIT(0) 68 + #define STATUS_RXDONE BIT(1) 69 + #define STATUS_RXAVAILABLE BIT(2) 70 + #define STATUS_TXAVAILABLE BIT(3) 71 + #define STATUS_RXFIFOEMPTY BIT(4) 72 + #define STATUS_TXFIFOFULL BIT(5) 73 + #define STATUS_READY BIT(7) 74 + #define STATUS_FLAGSX4 BIT(8) 75 + #define STATUS_MASK GENMASK(8, 0) 76 + 77 + #define BYTESUPPER_MASK GENMASK(31, 16) 78 + #define BYTESLOWER_MASK GENMASK(15, 0) 79 + 80 + #define MAX_DIVIDER 16 81 + #define MIN_DIVIDER 0 82 + #define MAX_DATA_CMD_LEN 256 83 + 84 + /* QSPI ready time out value */ 85 + #define TIMEOUT_MS 500 86 + 87 + /* 88 + * QSPI Register offsets. 89 + */ 90 + #define REG_CONTROL (0x00) 91 + #define REG_FRAMES (0x04) 92 + #define REG_IEN (0x0c) 93 + #define REG_STATUS (0x10) 94 + #define REG_DIRECT_ACCESS (0x14) 95 + #define REG_UPPER_ACCESS (0x18) 96 + #define REG_RX_DATA (0x40) 97 + #define REG_TX_DATA (0x44) 98 + #define REG_X4_RX_DATA (0x48) 99 + #define REG_X4_TX_DATA (0x4c) 100 + #define REG_FRAMESUP (0x50) 101 + 102 + /** 103 + * struct mchp_coreqspi - Defines qspi driver instance 104 + * @regs: Virtual address of the QSPI controller registers 105 + * @clk: QSPI Operating clock 106 + * @data_completion: completion structure 107 + * @op_lock: lock access to the device 108 + * @txbuf: TX buffer 109 + * @rxbuf: RX buffer 110 + * @irq: IRQ number 111 + * @tx_len: Number of bytes left to transfer 112 + * @rx_len: Number of bytes left to receive 113 + */ 114 + struct mchp_coreqspi { 115 + void __iomem *regs; 116 + struct clk *clk; 117 + struct completion data_completion; 118 + struct mutex op_lock; /* lock access to the device */ 119 + u8 *txbuf; 120 + u8 *rxbuf; 121 + int irq; 122 + int tx_len; 123 + int rx_len; 124 + }; 125 + 126 + static int mchp_coreqspi_set_mode(struct mchp_coreqspi *qspi, const struct spi_mem_op *op) 127 + { 128 + u32 control = readl_relaxed(qspi->regs + REG_CONTROL); 129 + 130 + /* 131 + * The operating mode can be configured based on the command that needs to 
be sent. 132 + * bits[15:14]: Sets whether multiple bit SPI operates in normal, extended or full modes. 133 + * 00: Normal (single DQ0 TX and single DQ1 RX lines) 134 + * 01: Extended RO (command and address bytes on DQ0 only) 135 + * 10: Extended RW (command byte on DQ0 only) 136 + * 11: Full. (command and address are on all DQ lines) 137 + * bit[13]: Sets whether multiple bit SPI uses 2 or 4 bits of data 138 + * 0: 2-bits (BSPI) 139 + * 1: 4-bits (QSPI) 140 + */ 141 + if (op->data.buswidth == 4 || op->data.buswidth == 2) { 142 + control &= ~CONTROL_MODE12_MASK; 143 + if (op->cmd.buswidth == 1 && (op->addr.buswidth == 1 || op->addr.buswidth == 0)) 144 + control |= CONTROL_MODE12_EX_RO; 145 + else if (op->cmd.buswidth == 1) 146 + control |= CONTROL_MODE12_EX_RW; 147 + else 148 + control |= CONTROL_MODE12_FULL; 149 + 150 + control |= CONTROL_MODE0; 151 + } else { 152 + control &= ~(CONTROL_MODE12_MASK | 153 + CONTROL_MODE0); 154 + } 155 + 156 + writel_relaxed(control, qspi->regs + REG_CONTROL); 157 + 158 + return 0; 159 + } 160 + 161 + static inline void mchp_coreqspi_read_op(struct mchp_coreqspi *qspi) 162 + { 163 + u32 control, data; 164 + 165 + if (!qspi->rx_len) 166 + return; 167 + 168 + control = readl_relaxed(qspi->regs + REG_CONTROL); 169 + 170 + /* 171 + * Read 4-bytes from the SPI FIFO in single transaction and then read 172 + * the remaining data byte wise.
173 + */ 174 + control |= CONTROL_FLAGSX4; 175 + writel_relaxed(control, qspi->regs + REG_CONTROL); 176 + 177 + while (qspi->rx_len >= 4) { 178 + while (readl_relaxed(qspi->regs + REG_STATUS) & STATUS_RXFIFOEMPTY) 179 + ; 180 + data = readl_relaxed(qspi->regs + REG_X4_RX_DATA); 181 + *(u32 *)qspi->rxbuf = data; 182 + qspi->rxbuf += 4; 183 + qspi->rx_len -= 4; 184 + } 185 + 186 + control &= ~CONTROL_FLAGSX4; 187 + writel_relaxed(control, qspi->regs + REG_CONTROL); 188 + 189 + while (qspi->rx_len--) { 190 + while (readl_relaxed(qspi->regs + REG_STATUS) & STATUS_RXFIFOEMPTY) 191 + ; 192 + data = readl_relaxed(qspi->regs + REG_RX_DATA); 193 + *qspi->rxbuf++ = (data & 0xFF); 194 + } 195 + } 196 + 197 + static inline void mchp_coreqspi_write_op(struct mchp_coreqspi *qspi, bool word) 198 + { 199 + u32 control, data; 200 + 201 + control = readl_relaxed(qspi->regs + REG_CONTROL); 202 + control |= CONTROL_FLAGSX4; 203 + writel_relaxed(control, qspi->regs + REG_CONTROL); 204 + 205 + while (qspi->tx_len >= 4) { 206 + while (readl_relaxed(qspi->regs + REG_STATUS) & STATUS_TXFIFOFULL) 207 + ; 208 + data = *(u32 *)qspi->txbuf; 209 + qspi->txbuf += 4; 210 + qspi->tx_len -= 4; 211 + writel_relaxed(data, qspi->regs + REG_X4_TX_DATA); 212 + } 213 + 214 + control &= ~CONTROL_FLAGSX4; 215 + writel_relaxed(control, qspi->regs + REG_CONTROL); 216 + 217 + while (qspi->tx_len--) { 218 + while (readl_relaxed(qspi->regs + REG_STATUS) & STATUS_TXFIFOFULL) 219 + ; 220 + data = *qspi->txbuf++; 221 + writel_relaxed(data, qspi->regs + REG_TX_DATA); 222 + } 223 + } 224 + 225 + static void mchp_coreqspi_enable_ints(struct mchp_coreqspi *qspi) 226 + { 227 + u32 mask = IEN_TXDONE | 228 + IEN_RXDONE | 229 + IEN_RXAVAILABLE; 230 + 231 + writel_relaxed(mask, qspi->regs + REG_IEN); 232 + } 233 + 234 + static void mchp_coreqspi_disable_ints(struct mchp_coreqspi *qspi) 235 + { 236 + writel_relaxed(0, qspi->regs + REG_IEN); 237 + } 238 + 239 + static irqreturn_t mchp_coreqspi_isr(int irq, void *dev_id) 240 
+ { 241 + struct mchp_coreqspi *qspi = (struct mchp_coreqspi *)dev_id; 242 + irqreturn_t ret = IRQ_NONE; 243 + int intfield = readl_relaxed(qspi->regs + REG_STATUS) & STATUS_MASK; 244 + 245 + if (intfield == 0) 246 + return ret; 247 + 248 + if (intfield & IEN_TXDONE) { 249 + writel_relaxed(IEN_TXDONE, qspi->regs + REG_STATUS); 250 + ret = IRQ_HANDLED; 251 + } 252 + 253 + if (intfield & IEN_RXAVAILABLE) { 254 + writel_relaxed(IEN_RXAVAILABLE, qspi->regs + REG_STATUS); 255 + mchp_coreqspi_read_op(qspi); 256 + ret = IRQ_HANDLED; 257 + } 258 + 259 + if (intfield & IEN_RXDONE) { 260 + writel_relaxed(IEN_RXDONE, qspi->regs + REG_STATUS); 261 + complete(&qspi->data_completion); 262 + ret = IRQ_HANDLED; 263 + } 264 + 265 + return ret; 266 + } 267 + 268 + static int mchp_coreqspi_setup_clock(struct mchp_coreqspi *qspi, struct spi_device *spi) 269 + { 270 + unsigned long clk_hz; 271 + u32 control, baud_rate_val = 0; 272 + 273 + clk_hz = clk_get_rate(qspi->clk); 274 + if (!clk_hz) 275 + return -EINVAL; 276 + 277 + baud_rate_val = DIV_ROUND_UP(clk_hz, 2 * spi->max_speed_hz); 278 + if (baud_rate_val > MAX_DIVIDER || baud_rate_val < MIN_DIVIDER) { 279 + dev_err(&spi->dev, 280 + "could not configure the clock for spi clock %d Hz & system clock %ld Hz\n", 281 + spi->max_speed_hz, clk_hz); 282 + return -EINVAL; 283 + } 284 + 285 + control = readl_relaxed(qspi->regs + REG_CONTROL); 286 + control |= baud_rate_val << CONTROL_CLKRATE_SHIFT; 287 + writel_relaxed(control, qspi->regs + REG_CONTROL); 288 + control = readl_relaxed(qspi->regs + REG_CONTROL); 289 + 290 + if ((spi->mode & SPI_CPOL) && (spi->mode & SPI_CPHA)) 291 + control |= CONTROL_CLKIDLE; 292 + else 293 + control &= ~CONTROL_CLKIDLE; 294 + 295 + writel_relaxed(control, qspi->regs + REG_CONTROL); 296 + 297 + return 0; 298 + } 299 + 300 + static int mchp_coreqspi_setup_op(struct spi_device *spi_dev) 301 + { 302 + struct spi_controller *ctlr = spi_dev->master; 303 + struct mchp_coreqspi *qspi = 
spi_controller_get_devdata(ctlr); 304 + u32 control = readl_relaxed(qspi->regs + REG_CONTROL); 305 + 306 + control |= (CONTROL_MASTER | CONTROL_ENABLE); 307 + control &= ~CONTROL_CLKIDLE; 308 + writel_relaxed(control, qspi->regs + REG_CONTROL); 309 + 310 + return 0; 311 + } 312 + 313 + static inline void mchp_coreqspi_config_op(struct mchp_coreqspi *qspi, const struct spi_mem_op *op) 314 + { 315 + u32 idle_cycles = 0; 316 + int total_bytes, cmd_bytes, frames, ctrl; 317 + 318 + cmd_bytes = op->cmd.nbytes + op->addr.nbytes; 319 + total_bytes = cmd_bytes + op->data.nbytes; 320 + 321 + /* 322 + * As per the coreQSPI IP spec,the number of command and data bytes are 323 + * controlled by the frames register for each SPI sequence. This supports 324 + * the SPI flash memory read and writes sequences as below. so configure 325 + * the cmd and total bytes accordingly. 326 + * --------------------------------------------------------------------- 327 + * TOTAL BYTES | CMD BYTES | What happens | 328 + * ______________________________________________________________________ 329 + * | | | 330 + * 1 | 1 | The SPI core will transmit a single byte | 331 + * | | and receive data is discarded | 332 + * | | | 333 + * 1 | 0 | The SPI core will transmit a single byte | 334 + * | | and return a single byte | 335 + * | | | 336 + * 10 | 4 | The SPI core will transmit 4 command | 337 + * | | bytes discarding the receive data and | 338 + * | | transmits 6 dummy bytes returning the 6 | 339 + * | | received bytes and return a single byte | 340 + * | | | 341 + * 10 | 10 | The SPI core will transmit 10 command | 342 + * | | | 343 + * 10 | 0 | The SPI core will transmit 10 command | 344 + * | | bytes and returning 10 received bytes | 345 + * ______________________________________________________________________ 346 + */ 347 + if (!(op->data.dir == SPI_MEM_DATA_IN)) 348 + cmd_bytes = total_bytes; 349 + 350 + frames = total_bytes & BYTESUPPER_MASK; 351 + writel_relaxed(frames, qspi->regs + 
REG_FRAMESUP); 352 + frames = total_bytes & BYTESLOWER_MASK; 353 + frames |= cmd_bytes << FRAMES_CMDBYTES_SHIFT; 354 + 355 + if (op->dummy.buswidth) 356 + idle_cycles = op->dummy.nbytes * 8 / op->dummy.buswidth; 357 + 358 + frames |= idle_cycles << FRAMES_IDLE_SHIFT; 359 + ctrl = readl_relaxed(qspi->regs + REG_CONTROL); 360 + 361 + if (ctrl & CONTROL_MODE12_MASK) 362 + frames |= (1 << FRAMES_SHIFT); 363 + 364 + frames |= FRAMES_FLAGWORD; 365 + writel_relaxed(frames, qspi->regs + REG_FRAMES); 366 + } 367 + 368 + static int mchp_qspi_wait_for_ready(struct spi_mem *mem) 369 + { 370 + struct mchp_coreqspi *qspi = spi_controller_get_devdata 371 + (mem->spi->master); 372 + u32 status; 373 + int ret; 374 + 375 + ret = readl_poll_timeout(qspi->regs + REG_STATUS, status, 376 + (status & STATUS_READY), 0, 377 + TIMEOUT_MS); 378 + if (ret) { 379 + dev_err(&mem->spi->dev, 380 + "Timeout waiting on QSPI ready.\n"); 381 + return -ETIMEDOUT; 382 + } 383 + 384 + return ret; 385 + } 386 + 387 + static int mchp_coreqspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op) 388 + { 389 + struct mchp_coreqspi *qspi = spi_controller_get_devdata 390 + (mem->spi->master); 391 + u32 address = op->addr.val; 392 + u8 opcode = op->cmd.opcode; 393 + u8 opaddr[5]; 394 + int err, i; 395 + 396 + mutex_lock(&qspi->op_lock); 397 + err = mchp_qspi_wait_for_ready(mem); 398 + if (err) 399 + goto error; 400 + 401 + err = mchp_coreqspi_setup_clock(qspi, mem->spi); 402 + if (err) 403 + goto error; 404 + 405 + err = mchp_coreqspi_set_mode(qspi, op); 406 + if (err) 407 + goto error; 408 + 409 + reinit_completion(&qspi->data_completion); 410 + mchp_coreqspi_config_op(qspi, op); 411 + if (op->cmd.opcode) { 412 + qspi->txbuf = &opcode; 413 + qspi->rxbuf = NULL; 414 + qspi->tx_len = op->cmd.nbytes; 415 + qspi->rx_len = 0; 416 + mchp_coreqspi_write_op(qspi, false); 417 + } 418 + 419 + qspi->txbuf = &opaddr[0]; 420 + if (op->addr.nbytes) { 421 + for (i = 0; i < op->addr.nbytes; i++) 422 + qspi->txbuf[i] = 
address >> (8 * (op->addr.nbytes - i - 1)); 423 + 424 + qspi->rxbuf = NULL; 425 + qspi->tx_len = op->addr.nbytes; 426 + qspi->rx_len = 0; 427 + mchp_coreqspi_write_op(qspi, false); 428 + } 429 + 430 + if (op->data.nbytes) { 431 + if (op->data.dir == SPI_MEM_DATA_OUT) { 432 + qspi->txbuf = (u8 *)op->data.buf.out; 433 + qspi->rxbuf = NULL; 434 + qspi->rx_len = 0; 435 + qspi->tx_len = op->data.nbytes; 436 + mchp_coreqspi_write_op(qspi, true); 437 + } else { 438 + qspi->txbuf = NULL; 439 + qspi->rxbuf = (u8 *)op->data.buf.in; 440 + qspi->rx_len = op->data.nbytes; 441 + qspi->tx_len = 0; 442 + } 443 + } 444 + 445 + mchp_coreqspi_enable_ints(qspi); 446 + 447 + if (!wait_for_completion_timeout(&qspi->data_completion, msecs_to_jiffies(1000))) 448 + err = -ETIMEDOUT; 449 + 450 + error: 451 + mutex_unlock(&qspi->op_lock); 452 + mchp_coreqspi_disable_ints(qspi); 453 + 454 + return err; 455 + } 456 + 457 + static bool mchp_coreqspi_supports_op(struct spi_mem *mem, const struct spi_mem_op *op) 458 + { 459 + if (!spi_mem_default_supports_op(mem, op)) 460 + return false; 461 + 462 + if ((op->data.buswidth == 4 || op->data.buswidth == 2) && 463 + (op->cmd.buswidth == 1 && (op->addr.buswidth == 1 || op->addr.buswidth == 0))) { 464 + /* 465 + * If the command and address are on DQ0 only, then this 466 + * controller doesn't support sending data on dual and 467 + * quad lines, but it supports reading data on dual and 468 + * quad lines with same configuration as command and 469 + * address on DQ0. 470 + * i.e. The control register[15:13] :EX_RO(read only) is 471 + * meant only for the command and address are on DQ0 but 472 + * not to write data, it is just to read. 473 + * Ex: 0x34h is Quad Load Program Data which is not 474 + * supported. Then the spi-mem layer will iterate over 475 + * each command and it will choose the supported one.
476 + */ 477 + if (op->data.dir == SPI_MEM_DATA_OUT) 478 + return false; 479 + } 480 + 481 + return true; 482 + } 483 + 484 + static int mchp_coreqspi_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op) 485 + { 486 + if (op->data.dir == SPI_MEM_DATA_OUT || op->data.dir == SPI_MEM_DATA_IN) { 487 + if (op->data.nbytes > MAX_DATA_CMD_LEN) 488 + op->data.nbytes = MAX_DATA_CMD_LEN; 489 + } 490 + 491 + return 0; 492 + } 493 + 494 + static const struct spi_controller_mem_ops mchp_coreqspi_mem_ops = { 495 + .adjust_op_size = mchp_coreqspi_adjust_op_size, 496 + .supports_op = mchp_coreqspi_supports_op, 497 + .exec_op = mchp_coreqspi_exec_op, 498 + }; 499 + 500 + static int mchp_coreqspi_probe(struct platform_device *pdev) 501 + { 502 + struct spi_controller *ctlr; 503 + struct mchp_coreqspi *qspi; 504 + struct device *dev = &pdev->dev; 505 + struct device_node *np = dev->of_node; 506 + int ret; 507 + 508 + ctlr = devm_spi_alloc_master(&pdev->dev, sizeof(*qspi)); 509 + if (!ctlr) 510 + return dev_err_probe(&pdev->dev, -ENOMEM, 511 + "unable to allocate master for QSPI controller\n"); 512 + 513 + qspi = spi_controller_get_devdata(ctlr); 514 + platform_set_drvdata(pdev, qspi); 515 + 516 + qspi->regs = devm_platform_ioremap_resource(pdev, 0); 517 + if (IS_ERR(qspi->regs)) 518 + return dev_err_probe(&pdev->dev, PTR_ERR(qspi->regs), 519 + "failed to map registers\n"); 520 + 521 + qspi->clk = devm_clk_get(&pdev->dev, NULL); 522 + if (IS_ERR(qspi->clk)) 523 + return dev_err_probe(&pdev->dev, PTR_ERR(qspi->clk), 524 + "could not get clock\n"); 525 + 526 + ret = clk_prepare_enable(qspi->clk); 527 + if (ret) 528 + return dev_err_probe(&pdev->dev, ret, 529 + "failed to enable clock\n"); 530 + 531 + init_completion(&qspi->data_completion); 532 + mutex_init(&qspi->op_lock); 533 + 534 + qspi->irq = platform_get_irq(pdev, 0); 535 + if (qspi->irq < 0) { 536 + ret = qspi->irq; 537 + goto out; 538 + } 539 + 540 + ret = devm_request_irq(&pdev->dev, qspi->irq, mchp_coreqspi_isr, 541 + 
IRQF_SHARED, pdev->name, qspi); 542 + if (ret) { 543 + dev_err(&pdev->dev, "request_irq failed %d\n", ret); 544 + goto out; 545 + } 546 + 547 + ctlr->bits_per_word_mask = SPI_BPW_MASK(8); 548 + ctlr->mem_ops = &mchp_coreqspi_mem_ops; 549 + ctlr->setup = mchp_coreqspi_setup_op; 550 + ctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_RX_DUAL | SPI_RX_QUAD | 551 + SPI_TX_DUAL | SPI_TX_QUAD; 552 + ctlr->dev.of_node = np; 553 + 554 + ret = devm_spi_register_controller(&pdev->dev, ctlr); 555 + if (ret) { 556 + dev_err_probe(&pdev->dev, ret, 557 + "spi_register_controller failed\n"); 558 + goto out; 559 + } 560 + 561 + return 0; 562 + 563 + out: 564 + clk_disable_unprepare(qspi->clk); 565 + 566 + return ret; 567 + } 568 + 569 + static int mchp_coreqspi_remove(struct platform_device *pdev) 570 + { 571 + struct mchp_coreqspi *qspi = platform_get_drvdata(pdev); 572 + u32 control = readl_relaxed(qspi->regs + REG_CONTROL); 573 + 574 + mchp_coreqspi_disable_ints(qspi); 575 + control &= ~CONTROL_ENABLE; 576 + writel_relaxed(control, qspi->regs + REG_CONTROL); 577 + clk_disable_unprepare(qspi->clk); 578 + 579 + return 0; 580 + } 581 + 582 + static const struct of_device_id mchp_coreqspi_of_match[] = { 583 + { .compatible = "microchip,coreqspi-rtl-v2" }, 584 + { /* sentinel */ } 585 + }; 586 + MODULE_DEVICE_TABLE(of, mchp_coreqspi_of_match); 587 + 588 + static struct platform_driver mchp_coreqspi_driver = { 589 + .probe = mchp_coreqspi_probe, 590 + .driver = { 591 + .name = "microchip,coreqspi", 592 + .of_match_table = mchp_coreqspi_of_match, 593 + }, 594 + .remove = mchp_coreqspi_remove, 595 + }; 596 + module_platform_driver(mchp_coreqspi_driver); 597 + 598 + MODULE_AUTHOR("Naga Sureshkumar Relli <nagasuresh.relli@microchip.com>"); 599 + MODULE_DESCRIPTION("Microchip coreQSPI QSPI controller driver"); 600 + MODULE_LICENSE("GPL");
+2 -2
drivers/spi/spi-microchip-core.c
··· 548 548 IRQF_SHARED, dev_name(&pdev->dev), master); 549 549 if (ret) 550 550 return dev_err_probe(&pdev->dev, ret, 551 - "could not request irq: %d\n", ret); 551 + "could not request irq\n"); 552 552 553 553 spi->clk = devm_clk_get(&pdev->dev, NULL); 554 554 if (IS_ERR(spi->clk)) 555 555 return dev_err_probe(&pdev->dev, PTR_ERR(spi->clk), 556 - "could not get clk: %d\n", ret); 556 + "could not get clk\n"); 557 557 558 558 ret = clk_prepare_enable(spi->clk); 559 559 if (ret)
+15 -20
drivers/spi/spi-mpc52xx.c
··· 11 11 */ 12 12 13 13 #include <linux/module.h> 14 + #include <linux/err.h> 14 15 #include <linux/errno.h> 15 16 #include <linux/of_platform.h> 16 17 #include <linux/interrupt.h> 17 18 #include <linux/delay.h> 19 + #include <linux/gpio/consumer.h> 18 20 #include <linux/spi/spi.h> 19 21 #include <linux/io.h> 20 - #include <linux/of_gpio.h> 21 22 #include <linux/slab.h> 22 23 #include <linux/of_address.h> 23 24 #include <linux/of_irq.h> ··· 90 89 const u8 *tx_buf; 91 90 int cs_change; 92 91 int gpio_cs_count; 93 - unsigned int *gpio_cs; 92 + struct gpio_desc **gpio_cs; 94 93 }; 95 94 96 95 /* ··· 102 101 103 102 if (ms->gpio_cs_count > 0) { 104 103 cs = ms->message->spi->chip_select; 105 - gpio_set_value(ms->gpio_cs[cs], value ? 0 : 1); 106 - } else 104 + gpiod_set_value(ms->gpio_cs[cs], value); 105 + } else { 107 106 out_8(ms->regs + SPI_PORTDATA, value ? 0 : 0x08); 107 + } 108 108 } 109 109 110 110 /* ··· 387 385 { 388 386 struct spi_master *master; 389 387 struct mpc52xx_spi *ms; 388 + struct gpio_desc *gpio_cs; 390 389 void __iomem *regs; 391 390 u8 ctrl1; 392 391 int rc, i = 0; 393 - int gpio_cs; 394 392 395 393 /* MMIO registers */ 396 394 dev_dbg(&op->dev, "probing mpc5200 SPI device\n"); ··· 440 438 ms->irq1 = irq_of_parse_and_map(op->dev.of_node, 1); 441 439 ms->state = mpc52xx_spi_fsmstate_idle; 442 440 ms->ipb_freq = mpc5xxx_get_bus_frequency(&op->dev); 443 - ms->gpio_cs_count = of_gpio_count(op->dev.of_node); 441 + ms->gpio_cs_count = gpiod_count(&op->dev, NULL); 444 442 if (ms->gpio_cs_count > 0) { 445 443 master->num_chipselect = ms->gpio_cs_count; 446 444 ms->gpio_cs = kmalloc_array(ms->gpio_cs_count, ··· 452 450 } 453 451 454 452 for (i = 0; i < ms->gpio_cs_count; i++) { 455 - gpio_cs = of_get_gpio(op->dev.of_node, i); 456 - if (!gpio_is_valid(gpio_cs)) { 457 - dev_err(&op->dev, 458 - "could not parse the gpio field in oftree\n"); 459 - rc = -ENODEV; 460 - goto err_gpio; 461 - } 462 - 463 - rc = gpio_request(gpio_cs, dev_name(&op->dev)); 453 + 
gpio_cs = gpiod_get_index(&op->dev, 454 + NULL, i, GPIOD_OUT_LOW); 455 + rc = PTR_ERR_OR_ZERO(gpio_cs); 464 456 if (rc) { 465 457 dev_err(&op->dev, 466 - "can't request spi cs gpio #%d on gpio line %d\n", 467 - i, gpio_cs); 458 + "failed to get spi cs gpio #%d: %d\n", 459 + i, rc); 468 460 goto err_gpio; 469 461 } 470 462 471 - gpio_direction_output(gpio_cs, 1); 472 463 ms->gpio_cs[i] = gpio_cs; 473 464 } 474 465 } ··· 502 507 dev_err(&ms->master->dev, "initialization failed\n"); 503 508 err_gpio: 504 509 while (i-- > 0) 505 - gpio_free(ms->gpio_cs[i]); 510 + gpiod_put(ms->gpio_cs[i]); 506 511 507 512 kfree(ms->gpio_cs); 508 513 err_alloc_gpio: ··· 523 528 free_irq(ms->irq1, ms); 524 529 525 530 for (i = 0; i < ms->gpio_cs_count; i++) 526 - gpio_free(ms->gpio_cs[i]); 531 + gpiod_put(ms->gpio_cs[i]); 527 532 528 533 kfree(ms->gpio_cs); 529 534 spi_unregister_master(master);
+5
drivers/spi/spi-mt65xx.c
··· 1184 1184 if (!dev->dma_mask) 1185 1185 dev->dma_mask = &dev->coherent_dma_mask; 1186 1186 1187 + if (mdata->dev_comp->ipm_design) 1188 + dma_set_max_seg_size(dev, SZ_16M); 1189 + else 1190 + dma_set_max_seg_size(dev, SZ_256K); 1191 + 1187 1192 ret = devm_request_irq(dev, irq, mtk_spi_interrupt, 1188 1193 IRQF_TRIGGER_NONE, dev_name(dev), master); 1189 1194 if (ret)
+6 -36
drivers/spi/spi-mt7621.c
···
         void __iomem *base;
         unsigned int sys_freq;
         unsigned int speed;
-        struct clk *clk;
         int pending_write;
 };
···
         struct spi_controller *master;
         struct mt7621_spi *rs;
         void __iomem *base;
-        int status = 0;
         struct clk *clk;
         int ret;
···
         if (IS_ERR(base))
                 return PTR_ERR(base);

-        clk = devm_clk_get(&pdev->dev, NULL);
-        if (IS_ERR(clk)) {
-                dev_err(&pdev->dev, "unable to get SYS clock, err=%d\n",
-                        status);
-                return PTR_ERR(clk);
-        }
-
-        status = clk_prepare_enable(clk);
-        if (status)
-                return status;
+        clk = devm_clk_get_enabled(&pdev->dev, NULL);
+        if (IS_ERR(clk))
+                return dev_err_probe(&pdev->dev, PTR_ERR(clk),
+                                     "unable to get SYS clock\n");

         master = devm_spi_alloc_master(&pdev->dev, sizeof(*rs));
         if (!master) {
                 dev_info(&pdev->dev, "master allocation failed\n");
-                clk_disable_unprepare(clk);
                 return -ENOMEM;
         }
···
         rs = spi_controller_get_devdata(master);
         rs->base = base;
-        rs->clk = clk;
         rs->master = master;
-        rs->sys_freq = clk_get_rate(rs->clk);
+        rs->sys_freq = clk_get_rate(clk);
         rs->pending_write = 0;
         dev_info(&pdev->dev, "sys_freq: %u\n", rs->sys_freq);

         ret = device_reset(&pdev->dev);
         if (ret) {
                 dev_err(&pdev->dev, "SPI reset failed!\n");
-                clk_disable_unprepare(clk);
                 return ret;
         }

-        ret = spi_register_controller(master);
-        if (ret)
-                clk_disable_unprepare(clk);
-
-        return ret;
-}
-
-static int mt7621_spi_remove(struct platform_device *pdev)
-{
-        struct spi_controller *master;
-        struct mt7621_spi *rs;
-
-        master = dev_get_drvdata(&pdev->dev);
-        rs = spi_controller_get_devdata(master);
-
-        spi_unregister_controller(master);
-
-        clk_disable_unprepare(rs->clk);
-
-        return 0;
+        return devm_spi_register_controller(&pdev->dev, master);
 }

 MODULE_ALIAS("platform:" DRIVER_NAME);
···
                 .of_match_table = mt7621_spi_match,
         },
         .probe = mt7621_spi_probe,
-        .remove = mt7621_spi_remove,
 };

 module_platform_driver(mt7621_spi_driver);
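The mt7621 conversion works because devm-managed resources are released automatically, in reverse acquisition order, when probe fails or the device is unbound; that is what lets every explicit clk_disable_unprepare() call disappear from the error paths and from remove(). A minimal userspace sketch of that idea follows; the devres_* names and release_named() here are illustrative stand-ins, not the real kernel API:

```c
#include <string.h>

#define MAX_ACTIONS 8

/* Log of release calls, used to observe the teardown order. */
static char devres_log[64];

struct devres {
        void (*release[MAX_ACTIONS])(void *);
        void *data[MAX_ACTIONS];
        int count;
};

/* Record a cleanup action when a resource is acquired. */
static int devres_add(struct devres *dr, void (*release)(void *), void *data)
{
        if (dr->count >= MAX_ACTIONS)
                return -1;
        dr->release[dr->count] = release;
        dr->data[dr->count] = data;
        dr->count++;
        return 0;
}

/* Run all recorded actions, most recently acquired first. */
static void devres_release_all(struct devres *dr)
{
        while (dr->count > 0) {
                dr->count--;
                dr->release[dr->count](dr->data[dr->count]);
        }
}

static void release_named(void *name)
{
        strcat(devres_log, (const char *)name);
        strcat(devres_log, ";");
}

/* Acquire "clk" then "irq"; teardown must release "irq" before "clk". */
static const char *demo_probe_failure(void)
{
        struct devres dr = { .count = 0 };

        devres_log[0] = '\0';
        devres_add(&dr, release_named, "clk");
        devres_add(&dr, release_named, "irq");
        devres_release_all(&dr);
        return devres_log;
}
```

Because cleanup is attached to the resource at acquisition time, a probe function can simply `return` on any error, as the new mt7621 code does.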
drivers/spi/spi-npcm-pspi.c (+1)
···

 static const struct of_device_id npcm_pspi_match[] = {
         { .compatible = "nuvoton,npcm750-pspi", .data = NULL },
+        { .compatible = "nuvoton,npcm845-pspi", .data = NULL },
         {}
 };
 MODULE_DEVICE_TABLE(of, npcm_pspi_match);
drivers/spi/spi-nxp-fspi.c (+4 -4)
···
 {
         int ret;

-        if (is_acpi_node(f->dev->fwnode))
+        if (is_acpi_node(dev_fwnode(f->dev)))
                 return 0;

         ret = clk_prepare_enable(f->clk_en);
···

 static int nxp_fspi_clk_disable_unprep(struct nxp_fspi *f)
 {
-        if (is_acpi_node(f->dev->fwnode))
+        if (is_acpi_node(dev_fwnode(f->dev)))
                 return 0;

         clk_disable_unprepare(f->clk);
···
         platform_set_drvdata(pdev, f);

         /* find the resources - configuration register address space */
-        if (is_acpi_node(f->dev->fwnode))
+        if (is_acpi_node(dev_fwnode(f->dev)))
                 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
         else
                 res = platform_get_resource_byname(pdev,
···
         }

         /* find the resources - controller memory mapped space */
-        if (is_acpi_node(f->dev->fwnode))
+        if (is_acpi_node(dev_fwnode(f->dev)))
                 res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
         else
                 res = platform_get_resource_byname(pdev,
drivers/spi/spi-omap-100k.c (+1)
···
         return status;

 err_fck:
+        pm_runtime_disable(&pdev->dev);
         clk_disable_unprepare(spi100k->fck);
 err_ick:
         clk_disable_unprepare(spi100k->ick);
drivers/spi/spi-omap2-mcspi.c (+1 -3)
···
         }

         status = platform_get_irq(pdev, 0);
-        if (status == -EPROBE_DEFER)
-                goto free_master;
         if (status < 0) {
-                dev_err(&pdev->dev, "no irq resource found\n");
+                dev_err_probe(&pdev->dev, status, "no irq resource found\n");
                 goto free_master;
         }
         init_completion(&mcspi->txdone);
drivers/spi/spi-pxa2xx.c (+1 -3)
···
 static int pxa2xx_spi_runtime_resume(struct device *dev)
 {
         struct driver_data *drv_data = dev_get_drvdata(dev);
-        int status;

-        status = clk_prepare_enable(drv_data->ssp->clk);
-        return status;
+        return clk_prepare_enable(drv_data->ssp->clk);
 }
 #endif
drivers/spi/spi-qup.c (+17 -4)
···
                 return ret;

         ret = clk_prepare_enable(controller->cclk);
-        if (ret)
+        if (ret) {
+                clk_disable_unprepare(controller->iclk);
                 return ret;
+        }

         /* Disable clocks auto gaiting */
         config = readl_relaxed(controller->base + QUP_CONFIG);
···
                 return ret;

         ret = clk_prepare_enable(controller->cclk);
-        if (ret)
+        if (ret) {
+                clk_disable_unprepare(controller->iclk);
                 return ret;
+        }

         ret = spi_qup_set_state(controller, QUP_STATE_RESET);
         if (ret)
-                return ret;
+                goto disable_clk;

-        return spi_master_resume(master);
+        ret = spi_master_resume(master);
+        if (ret)
+                goto disable_clk;
+
+        return 0;
+
+disable_clk:
+        clk_disable_unprepare(controller->cclk);
+        clk_disable_unprepare(controller->iclk);
+        return ret;
 }
 #endif /* CONFIG_PM_SLEEP */
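The spi-qup fix restores the usual kernel unwinding discipline: every clock that was successfully enabled must be disabled again, in reverse order, on any later failure. A hedged userspace sketch of that goto-unwinding shape; all function names here are hypothetical stand-ins, and the restore step is injected so both paths can be exercised:

```c
/* Flags standing in for the hardware state of the two clocks. */
static int iclk_on, cclk_on;

static int enable_iclk(void)  { iclk_on = 1; return 0; }
static int enable_cclk(void)  { cclk_on = 1; return 0; }
static void disable_iclk(void) { iclk_on = 0; }
static void disable_cclk(void) { cclk_on = 0; }

/* Resume: enable iclk, then cclk, then restore controller state.
 * Any failure unwinds exactly what was enabled, most recent first. */
static int resume(int (*restore_state)(void))
{
        int ret;

        ret = enable_iclk();
        if (ret)
                return ret;

        ret = enable_cclk();
        if (ret)
                goto err_iclk;

        ret = restore_state();
        if (ret)
                goto err_cclk;

        return 0;

err_cclk:
        disable_cclk();
err_iclk:
        disable_iclk();
        return ret;
}

static int fail_restore(void) { return -19; /* mimics -ENODEV */ }
static int ok_restore(void)   { return 0; }
```

The original bug was the asymmetric variant: returning early after a cclk failure left iclk enabled, unbalancing the enable counts across suspend/resume cycles.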
drivers/spi/spi-s3c24xx.c (+8 -16)
···
         struct spi_master *master;
         int err = 0;

-        master = spi_alloc_master(&pdev->dev, sizeof(struct s3c24xx_spi));
+        master = devm_spi_alloc_master(&pdev->dev, sizeof(struct s3c24xx_spi));
         if (master == NULL) {
                 dev_err(&pdev->dev, "No memory for spi_master\n");
                 return -ENOMEM;
···

         if (pdata == NULL) {
                 dev_err(&pdev->dev, "No platform data supplied\n");
-                err = -ENOENT;
-                goto err_no_pdata;
+                return -ENOENT;
         }

         platform_set_drvdata(pdev, hw);
···

         /* find and map our resources */
         hw->regs = devm_platform_ioremap_resource(pdev, 0);
-        if (IS_ERR(hw->regs)) {
-                err = PTR_ERR(hw->regs);
-                goto err_no_pdata;
-        }
+        if (IS_ERR(hw->regs))
+                return PTR_ERR(hw->regs);

         hw->irq = platform_get_irq(pdev, 0);
-        if (hw->irq < 0) {
-                err = -ENOENT;
-                goto err_no_pdata;
-        }
+        if (hw->irq < 0)
+                return -ENOENT;

         err = devm_request_irq(&pdev->dev, hw->irq, s3c24xx_spi_irq, 0,
                                pdev->name, hw);
         if (err) {
                 dev_err(&pdev->dev, "Cannot claim IRQ\n");
-                goto err_no_pdata;
+                return err;
         }

         hw->clk = devm_clk_get(&pdev->dev, "spi");
         if (IS_ERR(hw->clk)) {
                 dev_err(&pdev->dev, "No clock for device\n");
-                err = PTR_ERR(hw->clk);
-                goto err_no_pdata;
+                return PTR_ERR(hw->clk);
         }

         s3c24xx_spi_initialsetup(hw);
···
 err_register:
         clk_disable(hw->clk);

-err_no_pdata:
-        spi_master_put(hw->master);
         return err;
 }
drivers/spi/spi-s3c64xx.c (+11 -2)
···
 #define S3C64XX_SPI_ST_TX_FIFORDY              (1<<0)

 #define S3C64XX_SPI_PACKET_CNT_EN              (1<<16)
+#define S3C64XX_SPI_PACKET_CNT_MASK            GENMASK(15, 0)

 #define S3C64XX_SPI_PND_TX_UNDERRUN_CLR        (1<<4)
 #define S3C64XX_SPI_PND_TX_OVERRUN_CLR         (1<<3)
···
         if (sdd->rx_dma.ch && sdd->tx_dma.ch) {
                 dma_release_channel(sdd->rx_dma.ch);
                 dma_release_channel(sdd->tx_dma.ch);
-                sdd->rx_dma.ch = 0;
-                sdd->tx_dma.ch = 0;
+                sdd->rx_dma.ch = NULL;
+                sdd->tx_dma.ch = NULL;
         }

         return 0;
···
         writel(cs->fb_delay & 0x3, sdd->regs + S3C64XX_SPI_FB_CLK);

         return 0;
+}
+
+static size_t s3c64xx_spi_max_transfer_size(struct spi_device *spi)
+{
+        struct spi_controller *ctlr = spi->controller;
+
+        return ctlr->can_dma ? S3C64XX_SPI_PACKET_CNT_MASK : SIZE_MAX;
 }

 static int s3c64xx_spi_transfer_one(struct spi_master *master,
···
         master->unprepare_transfer_hardware = s3c64xx_spi_unprepare_transfer;
         master->prepare_message = s3c64xx_spi_prepare_message;
         master->transfer_one = s3c64xx_spi_transfer_one;
+        master->max_transfer_size = s3c64xx_spi_max_transfer_size;
         master->num_chipselect = sci->num_cs;
         master->use_gpio_descriptors = true;
         master->dma_alignment = 8;
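With max_transfer_size reporting the 16-bit packet-count limit, the SPI core can split an oversized transfer into chunks no larger than that limit (the companion change in spi.c calls spi_split_transfers_maxsize() before each message). A rough userspace model of the chunking arithmetic, with a hypothetical split_transfer() helper:

```c
#include <stddef.h>

/* Split a transfer of len bytes into chunks of at most max bytes.
 * Writes the chunk lengths into chunks[] and returns how many were
 * produced (at most nchunks). */
static size_t split_transfer(size_t len, size_t max,
                             size_t *chunks, size_t nchunks)
{
        size_t n = 0;

        while (len && n < nchunks) {
                size_t c = len < max ? len : max;

                chunks[n++] = c;
                len -= c;
        }
        return n;
}

/* A 100000-byte transfer against the s3c64xx 0xFFFF packet-count
 * limit becomes two chunks: 65535 + 34465 bytes. */
static int demo_split(void)
{
        size_t chunks[4];
        size_t n = split_transfer(100000, 65535, chunks, 4);

        if (n != 2)
                return 1;
        if (chunks[0] != 65535 || chunks[1] != 34465)
                return 2;
        return 0;
}
```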
drivers/spi/spi-sh-msiof.c (+1)
···
         { .compatible = "renesas,rcar-gen2-msiof", .data = &rcar_gen2_data },
         { .compatible = "renesas,msiof-r8a7796", .data = &rcar_gen3_data },
         { .compatible = "renesas,rcar-gen3-msiof", .data = &rcar_gen3_data },
+        { .compatible = "renesas,rcar-gen4-msiof", .data = &rcar_gen3_data },
         { .compatible = "renesas,sh-msiof", .data = &sh_data }, /* Deprecated */
         {},
 };
drivers/spi/spi-stm32-qspi.c (+116 -9)
···
 #include <linux/mutex.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/of_gpio.h>
 #include <linux/pinctrl/consumer.h>
 #include <linux/pm_runtime.h>
 #include <linux/platform_device.h>
···
         return buswidth;
 }

-static int stm32_qspi_send(struct spi_mem *mem, const struct spi_mem_op *op)
+static int stm32_qspi_send(struct spi_device *spi, const struct spi_mem_op *op)
 {
-        struct stm32_qspi *qspi = spi_controller_get_devdata(mem->spi->master);
-        struct stm32_qspi_flash *flash = &qspi->flash[mem->spi->chip_select];
+        struct stm32_qspi *qspi = spi_controller_get_devdata(spi->master);
+        struct stm32_qspi_flash *flash = &qspi->flash[spi->chip_select];
         u32 ccr, cr;
         int timeout, err = 0, err_poll_status = 0;
···
         qspi->fmode = CCR_FMODE_APM;
         qspi->status_timeout = timeout_ms;

-        ret = stm32_qspi_send(mem, op);
+        ret = stm32_qspi_send(mem->spi, op);
         mutex_unlock(&qspi->lock);

         pm_runtime_mark_last_busy(qspi->dev);
···
         else
                 qspi->fmode = CCR_FMODE_INDW;

-        ret = stm32_qspi_send(mem, op);
+        ret = stm32_qspi_send(mem->spi, op);
         mutex_unlock(&qspi->lock);

         pm_runtime_mark_last_busy(qspi->dev);
···
         else
                 qspi->fmode = CCR_FMODE_INDR;

-        ret = stm32_qspi_send(desc->mem, &op);
+        ret = stm32_qspi_send(desc->mem->spi, &op);
         mutex_unlock(&qspi->lock);

         pm_runtime_mark_last_busy(qspi->dev);
···
         return ret ?: len;
 }

+static int stm32_qspi_transfer_one_message(struct spi_controller *ctrl,
+                                           struct spi_message *msg)
+{
+        struct stm32_qspi *qspi = spi_controller_get_devdata(ctrl);
+        struct spi_transfer *transfer;
+        struct spi_device *spi = msg->spi;
+        struct spi_mem_op op;
+        int ret = 0;
+
+        if (!spi->cs_gpiod)
+                return -EOPNOTSUPP;
+
+        ret = pm_runtime_resume_and_get(qspi->dev);
+        if (ret < 0)
+                return ret;
+
+        mutex_lock(&qspi->lock);
+
+        gpiod_set_value_cansleep(spi->cs_gpiod, true);
+
+        list_for_each_entry(transfer, &msg->transfers, transfer_list) {
+                u8 dummy_bytes = 0;
+
+                memset(&op, 0, sizeof(op));
+
+                dev_dbg(qspi->dev, "tx_buf:%p tx_nbits:%d rx_buf:%p rx_nbits:%d len:%d dummy_data:%d\n",
+                        transfer->tx_buf, transfer->tx_nbits,
+                        transfer->rx_buf, transfer->rx_nbits,
+                        transfer->len, transfer->dummy_data);
+
+                /*
+                 * QSPI hardware supports dummy bytes transfer.
+                 * If current transfer is dummy byte, merge it with the next
+                 * transfer in order to take into account QSPI block constraint
+                 */
+                if (transfer->dummy_data) {
+                        op.dummy.buswidth = transfer->tx_nbits;
+                        op.dummy.nbytes = transfer->len;
+                        dummy_bytes = transfer->len;
+
+                        /* if happens, means that message is not correctly built */
+                        if (list_is_last(&transfer->transfer_list, &msg->transfers)) {
+                                ret = -EINVAL;
+                                goto end_of_transfer;
+                        }
+
+                        transfer = list_next_entry(transfer, transfer_list);
+                }
+
+                op.data.nbytes = transfer->len;
+
+                if (transfer->rx_buf) {
+                        qspi->fmode = CCR_FMODE_INDR;
+                        op.data.buswidth = transfer->rx_nbits;
+                        op.data.dir = SPI_MEM_DATA_IN;
+                        op.data.buf.in = transfer->rx_buf;
+                } else {
+                        qspi->fmode = CCR_FMODE_INDW;
+                        op.data.buswidth = transfer->tx_nbits;
+                        op.data.dir = SPI_MEM_DATA_OUT;
+                        op.data.buf.out = transfer->tx_buf;
+                }
+
+                ret = stm32_qspi_send(spi, &op);
+                if (ret)
+                        goto end_of_transfer;
+
+                msg->actual_length += transfer->len + dummy_bytes;
+        }
+
+end_of_transfer:
+        gpiod_set_value_cansleep(spi->cs_gpiod, false);
+
+        mutex_unlock(&qspi->lock);
+
+        msg->status = ret;
+        spi_finalize_current_message(ctrl);
+
+        pm_runtime_mark_last_busy(qspi->dev);
+        pm_runtime_put_autosuspend(qspi->dev);
+
+        return ret;
+}
+
 static int stm32_qspi_setup(struct spi_device *spi)
 {
         struct spi_controller *ctrl = spi->master;
         struct stm32_qspi *qspi = spi_controller_get_devdata(ctrl);
         struct stm32_qspi_flash *flash;
-        u32 presc;
+        u32 presc, mode;
         int ret;

         if (ctrl->busy)
···

         if (!spi->max_speed_hz)
                 return -EINVAL;
+
+        mode = spi->mode & (SPI_TX_OCTAL | SPI_RX_OCTAL);
+        if ((mode == SPI_TX_OCTAL || mode == SPI_RX_OCTAL) ||
+            ((mode == (SPI_TX_OCTAL | SPI_RX_OCTAL)) &&
+              gpiod_count(qspi->dev, "cs") == -ENOENT)) {
+                dev_err(qspi->dev, "spi-rx-bus-width\\/spi-tx-bus-width\\/cs-gpios\n");
+                dev_err(qspi->dev, "configuration not supported\n");
+
+                return -EINVAL;
+        }

         ret = pm_runtime_resume_and_get(qspi->dev);
         if (ret < 0)
···

         mutex_lock(&qspi->lock);
         qspi->cr_reg = CR_APMS | 3 << CR_FTHRES_SHIFT | CR_SSHIFT | CR_EN;
+
+        /*
+         * Dual flash mode is only enabled when SPI_TX_OCTAL and SPI_RX_OCTAL
+         * are both set in spi->mode and a "cs-gpios" property is found in DT
+         */
+        if (mode == (SPI_TX_OCTAL | SPI_RX_OCTAL)) {
+                qspi->cr_reg |= CR_DFM;
+                dev_dbg(qspi->dev, "Dual flash mode enable");
+        }
+
         writel_relaxed(qspi->cr_reg, qspi->io_base + QSPI_CR);

         /* set dcr fsize to max address */
···

         mutex_init(&qspi->lock);

-        ctrl->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD
-                | SPI_TX_DUAL | SPI_TX_QUAD;
+        ctrl->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_OCTAL
+                | SPI_TX_DUAL | SPI_TX_QUAD | SPI_RX_OCTAL;
         ctrl->setup = stm32_qspi_setup;
         ctrl->bus_num = -1;
         ctrl->mem_ops = &stm32_qspi_mem_ops;
+        ctrl->use_gpio_descriptors = true;
+        ctrl->transfer_one_message = stm32_qspi_transfer_one_message;
         ctrl->num_chipselect = STM32_QSPI_MAX_NORCHIP;
         ctrl->dev.of_node = dev->of_node;
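The new stm32-qspi transfer_one_message() folds a dummy-data transfer into the spi_mem_op built from the *following* transfer, because the QSPI block expresses dummy cycles as part of one operation; a trailing dummy transfer is therefore malformed. A userspace sketch of just that merge rule (struct names and build_ops() are illustrative, not the driver's types):

```c
#include <stddef.h>

struct xfer { size_t len; int dummy; };
struct op   { size_t dummy_nbytes; size_t data_nbytes; };

/* Build one op per data transfer, folding a preceding dummy transfer
 * into it.  Returns the number of ops built, or -1 if a dummy transfer
 * has no following data transfer (a malformed message). */
static int build_ops(const struct xfer *x, size_t n, struct op *out)
{
        size_t i, nops = 0;

        for (i = 0; i < n; i++) {
                size_t dummy = 0;

                if (x[i].dummy) {
                        if (i + 1 == n)
                                return -1;  /* dummy must merge with next */
                        dummy = x[i].len;
                        i++;                /* consume the next transfer too */
                }
                out[nops].dummy_nbytes = dummy;
                out[nops].data_nbytes = x[i].len;
                nops++;
        }
        return (int)nops;
}

/* {cmd 4B} {dummy 2B} {read 8B} collapses to two ops; a message that
 * ends on a dummy transfer is rejected. */
static int demo_merge(void)
{
        const struct xfer good[] = { { 4, 0 }, { 2, 1 }, { 8, 0 } };
        const struct xfer bad[]  = { { 4, 0 }, { 2, 1 } };
        struct op ops[3];

        if (build_ops(good, 3, ops) != 2)
                return 1;
        if (ops[0].dummy_nbytes != 0 || ops[0].data_nbytes != 4)
                return 2;
        if (ops[1].dummy_nbytes != 2 || ops[1].data_nbytes != 8)
                return 3;
        if (build_ops(bad, 2, ops) != -1)
                return 4;
        return 0;
}
```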
drivers/spi/spi-xilinx.c (+6 -14)
···
                 return -EINVAL;
         }

-        master = spi_alloc_master(&pdev->dev, sizeof(struct xilinx_spi));
+        master = devm_spi_alloc_master(&pdev->dev, sizeof(struct xilinx_spi));
         if (!master)
                 return -ENODEV;
···

         res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
         xspi->regs = devm_ioremap_resource(&pdev->dev, res);
-        if (IS_ERR(xspi->regs)) {
-                ret = PTR_ERR(xspi->regs);
-                goto put_master;
-        }
+        if (IS_ERR(xspi->regs))
+                return PTR_ERR(xspi->regs);

         master->bus_num = pdev->id;
         master->num_chipselect = num_cs;
···

         xspi->irq = platform_get_irq(pdev, 0);
         if (xspi->irq < 0 && xspi->irq != -ENXIO) {
-                ret = xspi->irq;
-                goto put_master;
+                return xspi->irq;
         } else if (xspi->irq >= 0) {
                 /* Register for SPI Interrupt */
                 ret = devm_request_irq(&pdev->dev, xspi->irq, xilinx_spi_irq, 0,
                                        dev_name(&pdev->dev), xspi);
                 if (ret)
-                        goto put_master;
+                        return ret;
         }

         /* SPI controller initializations */
···
         ret = spi_bitbang_start(&xspi->bitbang);
         if (ret) {
                 dev_err(&pdev->dev, "spi_bitbang_start FAILED\n");
-                goto put_master;
+                return ret;
         }

         dev_info(&pdev->dev, "at %pR, irq=%d\n", res, xspi->irq);
···

         platform_set_drvdata(pdev, master);
         return 0;
-
-put_master:
-        spi_master_put(master);
-
-        return ret;
 }

 static int xilinx_spi_remove(struct platform_device *pdev)
drivers/spi/spi-xtensa-xtfpga.c (+5 -11)
···
         int ret;
         struct spi_master *master;

-        master = spi_alloc_master(&pdev->dev, sizeof(struct xtfpga_spi));
+        master = devm_spi_alloc_master(&pdev->dev, sizeof(struct xtfpga_spi));
         if (!master)
                 return -ENOMEM;
···
         xspi->bitbang.chipselect = xtfpga_spi_chipselect;
         xspi->bitbang.txrx_word[SPI_MODE_0] = xtfpga_spi_txrx_word;
         xspi->regs = devm_platform_ioremap_resource(pdev, 0);
-        if (IS_ERR(xspi->regs)) {
-                ret = PTR_ERR(xspi->regs);
-                goto err;
-        }
+        if (IS_ERR(xspi->regs))
+                return PTR_ERR(xspi->regs);

         xtfpga_spi_write32(xspi, XTFPGA_SPI_START, 0);
         usleep_range(1000, 2000);
         if (xtfpga_spi_read32(xspi, XTFPGA_SPI_BUSY)) {
                 dev_err(&pdev->dev, "Device stuck in busy state\n");
-                ret = -EBUSY;
-                goto err;
+                return -EBUSY;
         }

         ret = spi_bitbang_start(&xspi->bitbang);
         if (ret < 0) {
                 dev_err(&pdev->dev, "spi_bitbang_start failed\n");
-                goto err;
+                return ret;
         }

         platform_set_drvdata(pdev, master);
         return 0;
-err:
-        spi_master_put(master);
-        return ret;
 }

 static int xtfpga_spi_remove(struct platform_device *pdev)
drivers/spi/spi.c (+116 -43)
···
         proxy->max_speed_hz = chip->max_speed_hz;
         proxy->mode = chip->mode;
         proxy->irq = chip->irq;
-        strlcpy(proxy->modalias, chip->modalias, sizeof(proxy->modalias));
+        strscpy(proxy->modalias, chip->modalias, sizeof(proxy->modalias));
         proxy->dev.platform_data = (void *) chip->platform_data;
         proxy->controller_data = chip->controller_data;
         proxy->controller_state = NULL;
···
 }

 #ifdef CONFIG_HAS_DMA
-int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
-                struct sg_table *sgt, void *buf, size_t len,
-                enum dma_data_direction dir)
+static int spi_map_buf_attrs(struct spi_controller *ctlr, struct device *dev,
+                             struct sg_table *sgt, void *buf, size_t len,
+                             enum dma_data_direction dir, unsigned long attrs)
 {
         const bool vmalloced_buf = is_vmalloc_addr(buf);
         unsigned int max_seg_size = dma_get_max_seg_size(dev);
···
                 sg = sg_next(sg);
         }

-        ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
-        if (!ret)
-                ret = -ENOMEM;
+        ret = dma_map_sgtable(dev, sgt, dir, attrs);
         if (ret < 0) {
                 sg_free_table(sgt);
                 return ret;
         }

-        sgt->nents = ret;
-
         return 0;
+}
+
+int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
+                struct sg_table *sgt, void *buf, size_t len,
+                enum dma_data_direction dir)
+{
+        return spi_map_buf_attrs(ctlr, dev, sgt, buf, len, dir, 0);
+}
+
+static void spi_unmap_buf_attrs(struct spi_controller *ctlr,
+                                struct device *dev, struct sg_table *sgt,
+                                enum dma_data_direction dir,
+                                unsigned long attrs)
+{
+        if (sgt->orig_nents) {
+                dma_unmap_sgtable(dev, sgt, dir, attrs);
+                sg_free_table(sgt);
+                sgt->orig_nents = 0;
+                sgt->nents = 0;
+        }
 }

 void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
                    struct sg_table *sgt, enum dma_data_direction dir)
 {
-        if (sgt->orig_nents) {
-                dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
-                sg_free_table(sgt);
-        }
+        spi_unmap_buf_attrs(ctlr, dev, sgt, dir, 0);
 }

 static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
···
                 rx_dev = ctlr->dev.parent;

         list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+                /* The sync is done before each transfer. */
+                unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
+
                 if (!ctlr->can_dma(ctlr, msg->spi, xfer))
                         continue;

                 if (xfer->tx_buf != NULL) {
-                        ret = spi_map_buf(ctlr, tx_dev, &xfer->tx_sg,
-                                          (void *)xfer->tx_buf, xfer->len,
-                                          DMA_TO_DEVICE);
+                        ret = spi_map_buf_attrs(ctlr, tx_dev, &xfer->tx_sg,
+                                                (void *)xfer->tx_buf,
+                                                xfer->len, DMA_TO_DEVICE,
+                                                attrs);
                         if (ret != 0)
                                 return ret;
                 }

                 if (xfer->rx_buf != NULL) {
-                        ret = spi_map_buf(ctlr, rx_dev, &xfer->rx_sg,
-                                          xfer->rx_buf, xfer->len,
-                                          DMA_FROM_DEVICE);
+                        ret = spi_map_buf_attrs(ctlr, rx_dev, &xfer->rx_sg,
+                                                xfer->rx_buf, xfer->len,
+                                                DMA_FROM_DEVICE, attrs);
                         if (ret != 0) {
-                                spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg,
-                                              DMA_TO_DEVICE);
+                                spi_unmap_buf_attrs(ctlr, tx_dev,
+                                                    &xfer->tx_sg, DMA_TO_DEVICE,
+                                                    attrs);
+
                                 return ret;
                         }
                 }
         }

+        ctlr->cur_rx_dma_dev = rx_dev;
+        ctlr->cur_tx_dma_dev = tx_dev;
         ctlr->cur_msg_mapped = true;

         return 0;
···

 static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
+        struct device *rx_dev = ctlr->cur_rx_dma_dev;
+        struct device *tx_dev = ctlr->cur_tx_dma_dev;
         struct spi_transfer *xfer;
-        struct device *tx_dev, *rx_dev;

         if (!ctlr->cur_msg_mapped || !ctlr->can_dma)
                 return 0;

-        if (ctlr->dma_tx)
-                tx_dev = ctlr->dma_tx->device->dev;
-        else if (ctlr->dma_map_dev)
-                tx_dev = ctlr->dma_map_dev;
-        else
-                tx_dev = ctlr->dev.parent;
-
-        if (ctlr->dma_rx)
-                rx_dev = ctlr->dma_rx->device->dev;
-        else if (ctlr->dma_map_dev)
-                rx_dev = ctlr->dma_map_dev;
-        else
-                rx_dev = ctlr->dev.parent;
-
         list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+                /* The sync has already been done after each transfer. */
+                unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
+
                 if (!ctlr->can_dma(ctlr, msg->spi, xfer))
                         continue;

-                spi_unmap_buf(ctlr, rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
-                spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+                spi_unmap_buf_attrs(ctlr, rx_dev, &xfer->rx_sg,
+                                    DMA_FROM_DEVICE, attrs);
+                spi_unmap_buf_attrs(ctlr, tx_dev, &xfer->tx_sg,
+                                    DMA_TO_DEVICE, attrs);
         }

         ctlr->cur_msg_mapped = false;

         return 0;
+}
+
+static void spi_dma_sync_for_device(struct spi_controller *ctlr,
+                                    struct spi_transfer *xfer)
+{
+        struct device *rx_dev = ctlr->cur_rx_dma_dev;
+        struct device *tx_dev = ctlr->cur_tx_dma_dev;
+
+        if (!ctlr->cur_msg_mapped)
+                return;
+
+        if (xfer->tx_sg.orig_nents)
+                dma_sync_sgtable_for_device(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+        if (xfer->rx_sg.orig_nents)
+                dma_sync_sgtable_for_device(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+}
+
+static void spi_dma_sync_for_cpu(struct spi_controller *ctlr,
+                                 struct spi_transfer *xfer)
+{
+        struct device *rx_dev = ctlr->cur_rx_dma_dev;
+        struct device *tx_dev = ctlr->cur_tx_dma_dev;
+
+        if (!ctlr->cur_msg_mapped)
+                return;
+
+        if (xfer->rx_sg.orig_nents)
+                dma_sync_sgtable_for_cpu(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+        if (xfer->tx_sg.orig_nents)
+                dma_sync_sgtable_for_cpu(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
 }
 #else /* !CONFIG_HAS_DMA */
 static inline int __spi_map_msg(struct spi_controller *ctlr,
···
                                 struct spi_message *msg)
 {
         return 0;
+}
+
+static void spi_dma_sync_for_device(struct spi_controller *ctrl,
+                                    struct spi_transfer *xfer)
+{
+}
+
+static void spi_dma_sync_for_cpu(struct spi_controller *ctrl,
+                                 struct spi_transfer *xfer)
+{
 }
 #endif /* !CONFIG_HAS_DMA */
···
         struct spi_statistics __percpu *statm = ctlr->pcpu_statistics;
         struct spi_statistics __percpu *stats = msg->spi->pcpu_statistics;

-        spi_set_cs(msg->spi, true, false);
+        xfer = list_first_entry(&msg->transfers, struct spi_transfer, transfer_list);
+        spi_set_cs(msg->spi, !xfer->cs_off, false);

         SPI_STATISTICS_INCREMENT_FIELD(statm, messages);
         SPI_STATISTICS_INCREMENT_FIELD(stats, messages);
···
                 reinit_completion(&ctlr->xfer_completion);

 fallback_pio:
+                spi_dma_sync_for_device(ctlr, xfer);
                 ret = ctlr->transfer_one(ctlr, msg->spi, xfer);
                 if (ret < 0) {
+                        spi_dma_sync_for_cpu(ctlr, xfer);
+
                         if (ctlr->cur_msg_mapped &&
                            (xfer->error & SPI_TRANS_FAIL_NO_START)) {
                                 __spi_unmap_msg(ctlr, msg);
···
                         if (ret < 0)
                                 msg->status = ret;
                 }
+
+                spi_dma_sync_for_cpu(ctlr, xfer);
         } else {
                 if (xfer->len)
                         dev_err(&msg->spi->dev,
···
                             &msg->transfers)) {
                                 keep_cs = true;
                         } else {
-                                spi_set_cs(msg->spi, false, false);
+                                if (!xfer->cs_off)
+                                        spi_set_cs(msg->spi, false, false);
                                 _spi_transfer_cs_change_delay(msg, xfer);
-                                spi_set_cs(msg->spi, true, false);
+                                if (!list_next_entry(xfer, transfer_list)->cs_off)
+                                        spi_set_cs(msg->spi, true, false);
                         }
+                } else if (!list_is_last(&xfer->transfer_list, &msg->transfers) &&
+                           xfer->cs_off != list_next_entry(xfer, transfer_list)->cs_off) {
+                        spi_set_cs(msg->spi, xfer->cs_off, false);
                 }

                 msg->actual_length += xfer->len;
···
         }

         trace_spi_message_start(msg);
+
+        ret = spi_split_transfers_maxsize(ctlr, msg,
+                                          spi_max_transfer_size(msg->spi),
+                                          GFP_KERNEL | GFP_DMA);
+        if (ret) {
+                msg->status = ret;
+                spi_finalize_current_message(ctlr);
+                return ret;
+        }

         if (ctlr->prepare_message) {
                 ret = ctlr->prepare_message(ctlr, msg);
···
                 goto err_out;
         }

-        strlcpy(ancillary->modalias, "dummy", sizeof(ancillary->modalias));
+        strscpy(ancillary->modalias, "dummy", sizeof(ancillary->modalias));

         /* Use provided chip-select for ancillary device */
         ancillary->chip_select = chip_select;
···
         if (!spi)
                 return -ENOMEM;

-        strlcpy(spi->modalias, name, sizeof(spi->modalias));
+        strscpy(spi->modalias, name, sizeof(spi->modalias));

         rc = spi_add_device(spi);
         if (rc) {
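The cache-corruption fix above changes the DMA lifecycle: buffers are still mapped once per message, but with cache maintenance skipped (DMA_ATTR_SKIP_CPU_SYNC), and each individual transfer is bracketed by sync-for-device/sync-for-cpu, so a PIO transfer touching the same page between DMA transfers always sees a coherent CPU view. A toy userspace model of just the call ordering; every function here is a stand-in that logs its name, nothing performs real DMA:

```c
#include <string.h>

static char seq[128];

static void log_step(const char *s)
{
        strcat(seq, s);
        strcat(seq, " ");
}

static void map_msg(void)         { log_step("map(skip_sync)"); }
static void sync_for_device(void) { log_step("sync_dev"); }
static void do_dma(int i)         { (void)i; log_step("dma"); }
static void sync_for_cpu(void)    { log_step("sync_cpu"); }
static void unmap_msg(void)       { log_step("unmap(skip_sync)"); }

/* Transfer loop shaped like __spi_transfer_one_message() after the fix:
 * map once, then sync around each transfer, then unmap once. */
static const char *run_message(int ntransfers)
{
        int i;

        seq[0] = '\0';
        map_msg();
        for (i = 0; i < ntransfers; i++) {
                sync_for_device();
                do_dma(i);
                sync_for_cpu();  /* CPU view valid before any PIO follow-up */
        }
        unmap_msg();
        return seq;
}

static int demo_order(void)
{
        return strcmp(run_message(2),
                      "map(skip_sync) sync_dev dma sync_cpu "
                      "sync_dev dma sync_cpu unmap(skip_sync) ");
}
```

The buggy sequence this replaces did the cache maintenance only at map and unmap time, so a PIO write between two DMA transfers on the same page could be undone by a later invalidation.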
include/linux/spi/spi.h (+6)
···
  * @cleanup: frees controller-specific state
  * @can_dma: determine whether this controller supports DMA
  * @dma_map_dev: device which can be used for DMA mapping
+ * @cur_rx_dma_dev: device which is currently used for RX DMA mapping
+ * @cur_tx_dma_dev: device which is currently used for TX DMA mapping
  * @queued: whether this controller is providing an internal message queue
  * @kworker: pointer to thread struct for message pump
  * @pump_messages: work struct for scheduling work to the message pump
···
                                 struct spi_device *spi,
                                 struct spi_transfer *xfer);
         struct device *dma_map_dev;
+        struct device *cur_rx_dma_dev;
+        struct device *cur_tx_dma_dev;

         /*
          * These hooks are for drivers that want to use the generic
···
  * @bits_per_word: select a bits_per_word other than the device default
  *      for this transfer. If 0 the default (from @spi_device) is used.
  * @dummy_data: indicates transfer is dummy bytes transfer.
+ * @cs_off: performs the transfer with chipselect off.
  * @cs_change: affects chipselect after this transfer completes
  * @cs_change_delay: delay between cs deassert and assert when
  *      @cs_change is set and @spi_transfer is not the last in @spi_message
···
         struct sg_table rx_sg;

         unsigned        dummy_data:1;
+        unsigned        cs_off:1;
         unsigned        cs_change:1;
         unsigned        tx_nbits:3;
         unsigned        rx_nbits:3;