Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'spi-v3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
"A quiet release, more bug fixes than anything else. A few things do
stand out though:

- updates to several drivers to move towards the standard GPIO chip
select handling in the core.
- DMA support for the SH MSIOF driver.
- support for Rockchip SPI controllers (their first mainline
submission)"

* tag 'spi-v3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (64 commits)
spi: davinci: use spi_device.cs_gpio to store gpio cs per spi device
spi: davinci: add support to configure gpio cs through dt
spi/pl022: Explicitly truncate large bitmask
spi/atmel: Fix pointer to int conversion warnings on 64 bit builds
spi: davinci: fix to support more than 2 chip selects
spi: topcliff-pch: don't hardcode PCI slot to get DMA device
spi: orion: fix incorrect handling of cell-index DT property
spi: orion: Fix error return code in orion_spi_probe()
spi/rockchip: fix error return code in rockchip_spi_probe()
spi/rockchip: remove redundant dev_err call in rockchip_spi_probe()
spi/rockchip: remove duplicated include from spi-rockchip.c
ARM: dts: fix the chip select gpios definition in the SPI nodes
spi: s3c64xx: Update binding documentation
spi: s3c64xx: use the generic SPI "cs-gpios" property
spi: s3c64xx: Revert "spi: s3c64xx: Added provision for dedicated cs pin"
spi: atmel: Use dmaengine_prep_slave_sg() API
spi: topcliff-pch: Update error messages for dmaengine_prep_slave_sg() API
spi: sh-msiof: Use correct device for DMA mapping with IOMMU
spi: sh-msiof: Handle dmaengine_prep_slave_single() failures gracefully
spi: rspi: Handle dmaengine_prep_slave_sg() failures gracefully
...

+1724 -244
+7 -6
Documentation/devicetree/bindings/spi/efm32-spi.txt
··· 10 10 - cs-gpios: see spi-bus.txt 11 11 12 12 Recommended properties : 13 - - efm32,location: Value to write to the ROUTE register's LOCATION bitfield to 14 - configure the pinmux for the device, see datasheet for values. 15 - If "efm32,location" property is not provided, keeping what is 16 - already configured in the hardware, so its either the reset 17 - default 0 or whatever the bootloader did. 13 + - energymicro,location: Value to write to the ROUTE register's LOCATION 14 + bitfield to configure the pinmux for the device, see 15 + datasheet for values. 16 + If this property is not provided, keeping what is 17 + already configured in the hardware, so its either the 18 + reset default 0 or whatever the bootloader did. 18 19 19 20 Example: 20 21 ··· 27 26 interrupts = <15 16>; 28 27 clocks = <&cmu 20>; 29 28 cs-gpios = <&gpio 51 1>; // D3 30 - efm32,location = <1>; 29 + energymicro,location = <1>; 31 30 status = "ok"; 32 31 33 32 ks8851@0 {
+5 -1
Documentation/devicetree/bindings/spi/qcom,spi-qup.txt
··· 7 7 data path from 4 bits to 32 bits and numerous protocol variants. 8 8 9 9 Required properties: 10 - - compatible: Should contain "qcom,spi-qup-v2.1.1" or "qcom,spi-qup-v2.2.1" 10 + - compatible: Should contain: 11 + "qcom,spi-qup-v1.1.1" for 8660, 8960 and 8064. 12 + "qcom,spi-qup-v2.1.1" for 8974 and later 13 + "qcom,spi-qup-v2.2.1" for 8974 v2 and later. 14 + 11 15 - reg: Should contain base register location and length 12 16 - interrupts: Interrupt number used by this controller 13 17
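With the v1 compatible documented, an 8960-class node could look like the following sketch. This is an illustration only: the register address, interrupt specifier, and clock phandles are placeholders, not taken from a real board file.

```dts
spi@16080000 {
	compatible = "qcom,spi-qup-v1.1.1";
	reg = <0x16080000 0x1000>;
	interrupts = <0 147 0>;
	clocks = <&gcc 21>, <&gcc 22>;
	clock-names = "core", "iface";
	#address-cells = <1>;
	#size-cells = <0>;
};
```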
+28
Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.txt
··· 1 + Synopsys DesignWare AMBA 2.0 Synchronous Serial Interface. 2 + 3 + Required properties: 4 + - compatible : "snps,dw-apb-ssi" 5 + - reg : The register base for the controller. 6 + - interrupts : One interrupt, used by the controller. 7 + - #address-cells : <1>, as required by generic SPI binding. 8 + - #size-cells : <0>, also as required by generic SPI binding. 9 + 10 + Optional properties: 11 + - cs-gpios : Specifies the gpio pis to be used for chipselects. 12 + - num-cs : The number of chipselects. If omitted, this will default to 4. 13 + 14 + Child nodes as per the generic SPI binding. 15 + 16 + Example: 17 + 18 + spi@fff00000 { 19 + compatible = "snps,dw-apb-ssi"; 20 + reg = <0xfff00000 0x1000>; 21 + interrupts = <0 154 4>; 22 + #address-cells = <1>; 23 + #size-cells = <0>; 24 + num-cs = <2>; 25 + cs-gpios = <&gpio0 13 0>, 26 + <&gpio0 14 0>; 27 + }; 28 +
+8 -1
Documentation/devicetree/bindings/spi/spi-davinci.txt
··· 8 8 - "ti,dm6441-spi" for SPI used similar to that on DM644x SoC family 9 9 - "ti,da830-spi" for SPI used similar to that on DA8xx SoC family 10 10 - reg: Offset and length of SPI controller register space 11 - - num-cs: Number of chip selects 11 + - num-cs: Number of chip selects. This includes internal as well as 12 + GPIO chip selects. 12 13 - ti,davinci-spi-intr-line: interrupt line used to connect the SPI 13 14 IP to the interrupt controller within the SoC. Possible values 14 15 are 0 and 1. Manual says one of the two possible interrupt ··· 17 16 based on a specifc SoC configuration. 18 17 - interrupts: interrupt number mapped to CPU. 19 18 - clocks: spi clk phandle 19 + 20 + Optional: 21 + - cs-gpios: gpio chip selects 22 + For example to have 3 internal CS and 2 GPIO CS, user could define 23 + cs-gpios = <0>, <0>, <0>, <&gpio1 30 0>, <&gpio1 31 0>; 24 + where first three are internal CS and last two are GPIO CS. 20 25 21 26 Example of a NOR flash slave device (n25q032) connected to DaVinci 22 27 SPI controller device over the SPI bus.
+37
Documentation/devicetree/bindings/spi/spi-rockchip.txt
··· 1 + * Rockchip SPI Controller 2 + 3 + The Rockchip SPI controller is used to interface with various devices such as flash 4 + and display controllers using the SPI communication interface. 5 + 6 + Required Properties: 7 + 8 + - compatible: should be one of the following. 9 + "rockchip,rk3066-spi" for rk3066. 10 + "rockchip,rk3188-spi", "rockchip,rk3066-spi" for rk3188. 11 + "rockchip,rk3288-spi", "rockchip,rk3066-spi" for rk3288. 12 + - reg: physical base address of the controller and length of memory mapped 13 + region. 14 + - interrupts: The interrupt number to the cpu. The interrupt specifier format 15 + depends on the interrupt controller. 16 + - clocks: Must contain an entry for each entry in clock-names. 17 + - clock-names: Shall be "spiclk" for the transfer-clock, and "apb_pclk" for 18 + the peripheral clock. 19 + - dmas: DMA specifiers for tx and rx dma. See the DMA client binding, 20 + Documentation/devicetree/bindings/dma/dma.txt 21 + - dma-names: DMA request names should include "tx" and "rx" if present. 22 + - #address-cells: should be 1. 23 + - #size-cells: should be 0. 24 + 25 + Example: 26 + 27 + spi0: spi@ff110000 { 28 + compatible = "rockchip,rk3066-spi"; 29 + reg = <0xff110000 0x1000>; 30 + dmas = <&pdma1 11>, <&pdma1 12>; 31 + dma-names = "tx", "rx"; 32 + #address-cells = <1>; 33 + #size-cells = <0>; 34 + interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>; 35 + clocks = <&cru SCLK_SPI0>, <&cru PCLK_SPI0>; 36 + clock-names = "spiclk", "apb_pclk"; 37 + };
+12 -14
Documentation/devicetree/bindings/spi/spi-samsung.txt
··· 18 18 - interrupts: The interrupt number to the cpu. The interrupt specifier format 19 19 depends on the interrupt controller. 20 20 21 - [PRELIMINARY: the dma channel allocation will change once there are 22 - official DMA bindings] 21 + - dmas : Two or more DMA channel specifiers following the convention outlined 22 + in bindings/dma/dma.txt 23 23 24 - - tx-dma-channel: The dma channel specifier for tx operations. The format of 25 - the dma specifier depends on the dma controller. 26 - 27 - - rx-dma-channel: The dma channel specifier for rx operations. The format of 28 - the dma specifier depends on the dma controller. 24 + - dma-names: Names for the dma channels. There must be at least one channel 25 + named "tx" for transmit and named "rx" for receive. 29 26 30 27 Required Board Specific Properties: 31 28 ··· 39 42 - num-cs: Specifies the number of chip select lines supported. If 40 43 not specified, the default number of chip select lines is set to 1. 41 44 45 + - cs-gpios: should specify GPIOs used for chipselects (see spi-bus.txt) 46 + 42 47 SPI Controller specific data in SPI slave nodes: 43 48 44 49 - The spi slave nodes should provide the following information which is required 45 50 by the spi controller. 46 - 47 - - cs-gpio: A gpio specifier that specifies the gpio line used as 48 - the slave select line by the spi controller. The format of the gpio 49 - specifier depends on the gpio controller. 50 51 51 52 - samsung,spi-feedback-delay: The sampling phase shift to be applied on the 52 53 miso line (to account for any lag in the miso line). 
The following are the ··· 69 74 compatible = "samsung,exynos4210-spi"; 70 75 reg = <0x12d20000 0x100>; 71 76 interrupts = <0 66 0>; 72 - tx-dma-channel = <&pdma0 5>; 73 - rx-dma-channel = <&pdma0 4>; 77 + dmas = <&pdma0 5 78 + &pdma0 4>; 79 + dma-names = "tx", "rx"; 80 + #address-cells = <1>; 81 + #size-cells = <0>; 74 82 }; 75 83 76 84 - Board Specific Portion: ··· 83 85 #size-cells = <0>; 84 86 pinctrl-names = "default"; 85 87 pinctrl-0 = <&spi0_bus>; 88 + cs-gpios = <&gpa2 5 0>; 86 89 87 90 w25q80bw@0 { 88 91 #address-cells = <1>; ··· 93 94 spi-max-frequency = <10000>; 94 95 95 96 controller-data { 96 - cs-gpio = <&gpa2 5 1 0 3>; 97 97 samsung,spi-feedback-delay = <0>; 98 98 }; 99 99
+1 -1
arch/arm/boot/dts/exynos4210-smdkv310.dts
··· 168 168 }; 169 169 170 170 spi_2: spi@13940000 { 171 + cs-gpios = <&gpc1 2 0>; 171 172 status = "okay"; 172 173 173 174 w25x80@0 { ··· 179 178 spi-max-frequency = <1000000>; 180 179 181 180 controller-data { 182 - cs-gpio = <&gpc1 2 0>; 183 181 samsung,spi-feedback-delay = <0>; 184 182 }; 185 183
+1 -1
arch/arm/boot/dts/exynos4412-trats2.dts
··· 589 589 spi_1: spi@13930000 { 590 590 pinctrl-names = "default"; 591 591 pinctrl-0 = <&spi1_bus>; 592 + cs-gpios = <&gpb 5 0>; 592 593 status = "okay"; 593 594 594 595 s5c73m3_spi: s5c73m3 { ··· 597 596 spi-max-frequency = <50000000>; 598 597 reg = <0>; 599 598 controller-data { 600 - cs-gpio = <&gpb 5 0>; 601 599 samsung,spi-feedback-delay = <2>; 602 600 }; 603 601 };
+1 -1
arch/arm/boot/dts/exynos5250-smdk5250.dts
··· 316 316 }; 317 317 318 318 spi_1: spi@12d30000 { 319 + cs-gpios = <&gpa2 5 0>; 319 320 status = "okay"; 320 321 321 322 w25q80bw@0 { ··· 327 326 spi-max-frequency = <1000000>; 328 327 329 328 controller-data { 330 - cs-gpio = <&gpa2 5 0>; 331 329 samsung,spi-feedback-delay = <0>; 332 330 }; 333 331
+14 -2
drivers/spi/Kconfig
··· 382 382 config SPI_PXA2XX_PCI 383 383 def_tristate SPI_PXA2XX && PCI 384 384 385 + config SPI_ROCKCHIP 386 + tristate "Rockchip SPI controller driver" 387 + depends on ARM || ARM64 || AVR32 || HEXAGON || MIPS || SUPERH 388 + help 389 + This selects a driver for Rockchip SPI controller. 390 + 391 + If you say yes to this option, support will be included for 392 + RK3066, RK3188 and RK3288 families of SPI controller. 393 + Rockchip SPI controller support DMA transport and PIO mode. 394 + The main usecase of this controller is to use spi flash as boot 395 + device. 396 + 385 397 config SPI_RSPI 386 398 tristate "Renesas RSPI/QSPI controller" 387 - depends on (SUPERH && SH_DMAE_BASE) || ARCH_SHMOBILE 399 + depends on SUPERH || ARCH_SHMOBILE || COMPILE_TEST 388 400 help 389 401 SPI driver for Renesas RSPI and QSPI blocks. 390 402 ··· 446 434 447 435 config SPI_SH_MSIOF 448 436 tristate "SuperH MSIOF SPI controller" 449 - depends on HAVE_CLK 437 + depends on HAVE_CLK && HAS_DMA 450 438 depends on SUPERH || ARCH_SHMOBILE || COMPILE_TEST 451 439 help 452 440 SPI driver for SuperH and SH Mobile MSIOF blocks.
+1
drivers/spi/Makefile
··· 61 61 obj-$(CONFIG_SPI_PXA2XX) += spi-pxa2xx-platform.o 62 62 obj-$(CONFIG_SPI_PXA2XX_PCI) += spi-pxa2xx-pci.o 63 63 obj-$(CONFIG_SPI_QUP) += spi-qup.o 64 + obj-$(CONFIG_SPI_ROCKCHIP) += spi-rockchip.o 64 65 obj-$(CONFIG_SPI_RSPI) += spi-rspi.o 65 66 obj-$(CONFIG_SPI_S3C24XX) += spi-s3c24xx-hw.o 66 67 spi-s3c24xx-hw-y := spi-s3c24xx.o
+2 -3
drivers/spi/spi-adi-v3.c
··· 660 660 struct adi_spi3_chip *chip_info = spi->controller_data; 661 661 662 662 chip = kzalloc(sizeof(*chip), GFP_KERNEL); 663 - if (!chip) { 664 - dev_err(&spi->dev, "can not allocate chip data\n"); 663 + if (!chip) 665 664 return -ENOMEM; 666 - } 665 + 667 666 if (chip_info) { 668 667 if (chip_info->control & ~ctl_reg) { 669 668 dev_err(&spi->dev,
+8 -14
drivers/spi/spi-atmel.c
··· 597 597 goto err_exit; 598 598 599 599 /* Send both scatterlists */ 600 - rxdesc = rxchan->device->device_prep_slave_sg(rxchan, 601 - &as->dma.sgrx, 602 - 1, 603 - DMA_FROM_DEVICE, 604 - DMA_PREP_INTERRUPT | DMA_CTRL_ACK, 605 - NULL); 600 + rxdesc = dmaengine_prep_slave_sg(rxchan, &as->dma.sgrx, 1, 601 + DMA_FROM_DEVICE, 602 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 606 603 if (!rxdesc) 607 604 goto err_dma; 608 605 609 - txdesc = txchan->device->device_prep_slave_sg(txchan, 610 - &as->dma.sgtx, 611 - 1, 612 - DMA_TO_DEVICE, 613 - DMA_PREP_INTERRUPT | DMA_CTRL_ACK, 614 - NULL); 606 + txdesc = dmaengine_prep_slave_sg(txchan, &as->dma.sgtx, 1, 607 + DMA_TO_DEVICE, 608 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 615 609 if (!txdesc) 616 610 goto err_dma; 617 611 ··· 1012 1018 csr |= SPI_BF(DLYBCT, 0); 1013 1019 1014 1020 /* chipselect must have been muxed as GPIO (e.g. in board setup) */ 1015 - npcs_pin = (unsigned int)spi->controller_data; 1021 + npcs_pin = (unsigned long)spi->controller_data; 1016 1022 1017 1023 if (gpio_is_valid(spi->cs_gpio)) 1018 1024 npcs_pin = spi->cs_gpio; ··· 1247 1253 static void atmel_spi_cleanup(struct spi_device *spi) 1248 1254 { 1249 1255 struct atmel_spi_device *asd = spi->controller_state; 1250 - unsigned gpio = (unsigned) spi->controller_data; 1256 + unsigned gpio = (unsigned long) spi->controller_data; 1251 1257 1252 1258 if (!asd) 1253 1259 return;
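The atmel conversion above is mechanical: `dmaengine_prep_slave_sg()` is a forwarding wrapper around the raw `device_prep_slave_sg` callback. A user-space sketch of the equivalence, with stand-in types (the real kernel structs are far larger than shown here):

```c
#include <stddef.h>

/* Stand-in types sketching the shape of the dmaengine API. */
struct dma_async_tx_descriptor { int id; };
struct scatterlist { void *buf; size_t len; };
enum dma_transfer_direction { DMA_MEM_TO_DEV, DMA_DEV_TO_MEM };

struct dma_chan;
struct dma_device {
	struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_transfer_direction dir,
		unsigned long flags, void *context);
};
struct dma_chan { struct dma_device *device; };

/* What the helper amounts to: forward to the provider callback,
 * passing NULL for the rarely used context argument. */
static struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
	struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
	enum dma_transfer_direction dir, unsigned long flags)
{
	return chan->device->device_prep_slave_sg(chan, sgl, sg_len, dir,
						  flags, NULL);
}

/* A fake provider so the sketch can be exercised. */
static struct dma_async_tx_descriptor fake_desc = { 42 };
static struct dma_async_tx_descriptor *fake_prep(
	struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
	enum dma_transfer_direction dir, unsigned long flags, void *context)
{
	(void)chan; (void)sgl; (void)sg_len; (void)dir; (void)flags; (void)context;
	return &fake_desc;
}
static struct dma_device fake_dev = { fake_prep };
static struct dma_chan fake_chan = { &fake_dev };

static int demo(void)
{
	struct scatterlist sg = { NULL, 0 };
	struct dma_async_tx_descriptor *d =
		dmaengine_prep_slave_sg(&fake_chan, &sg, 1, DMA_DEV_TO_MEM, 0);
	return d ? d->id : -1;
}
```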
+2 -4
drivers/spi/spi-au1550.c
··· 925 925 iounmap((void __iomem *)hw->regs); 926 926 927 927 err_ioremap: 928 - release_resource(hw->ioarea); 929 - kfree(hw->ioarea); 928 + release_mem_region(r->start, sizeof(psc_spi_t)); 930 929 931 930 err_no_iores: 932 931 err_no_pdata: ··· 945 946 spi_bitbang_stop(&hw->bitbang); 946 947 free_irq(hw->irq, hw); 947 948 iounmap((void __iomem *)hw->regs); 948 - release_resource(hw->ioarea); 949 - kfree(hw->ioarea); 949 + release_mem_region(r->start, sizeof(psc_spi_t)); 950 950 951 951 if (hw->usedma) { 952 952 au1550_spi_dma_rxtmp_free(hw);
+26 -9
drivers/spi/spi-cadence.c
··· 205 205 static void cdns_spi_config_clock_mode(struct spi_device *spi) 206 206 { 207 207 struct cdns_spi *xspi = spi_master_get_devdata(spi->master); 208 - u32 ctrl_reg; 208 + u32 ctrl_reg, new_ctrl_reg; 209 209 210 - ctrl_reg = cdns_spi_read(xspi, CDNS_SPI_CR_OFFSET); 210 + new_ctrl_reg = ctrl_reg = cdns_spi_read(xspi, CDNS_SPI_CR_OFFSET); 211 211 212 212 /* Set the SPI clock phase and clock polarity */ 213 - ctrl_reg &= ~(CDNS_SPI_CR_CPHA_MASK | CDNS_SPI_CR_CPOL_MASK); 213 + new_ctrl_reg &= ~(CDNS_SPI_CR_CPHA_MASK | CDNS_SPI_CR_CPOL_MASK); 214 214 if (spi->mode & SPI_CPHA) 215 - ctrl_reg |= CDNS_SPI_CR_CPHA_MASK; 215 + new_ctrl_reg |= CDNS_SPI_CR_CPHA_MASK; 216 216 if (spi->mode & SPI_CPOL) 217 - ctrl_reg |= CDNS_SPI_CR_CPOL_MASK; 217 + new_ctrl_reg |= CDNS_SPI_CR_CPOL_MASK; 218 218 219 - cdns_spi_write(xspi, CDNS_SPI_CR_OFFSET, ctrl_reg); 219 + if (new_ctrl_reg != ctrl_reg) { 220 + /* 221 + * Just writing the CR register does not seem to apply the clock 222 + * setting changes. This is problematic when changing the clock 223 + * polarity as it will cause the SPI slave to see spurious clock 224 + * transitions. To workaround the issue toggle the ER register. 
225 + */ 226 + cdns_spi_write(xspi, CDNS_SPI_ER_OFFSET, 227 + CDNS_SPI_ER_DISABLE_MASK); 228 + cdns_spi_write(xspi, CDNS_SPI_CR_OFFSET, new_ctrl_reg); 229 + cdns_spi_write(xspi, CDNS_SPI_ER_OFFSET, 230 + CDNS_SPI_ER_ENABLE_MASK); 231 + } 220 232 } 221 233 222 234 /** ··· 382 370 383 371 return status; 384 372 } 373 + static int cdns_prepare_message(struct spi_master *master, 374 + struct spi_message *msg) 375 + { 376 + cdns_spi_config_clock_mode(msg->spi); 377 + return 0; 378 + } 385 379 386 380 /** 387 381 * cdns_transfer_one - Initiates the SPI transfer ··· 433 415 static int cdns_prepare_transfer_hardware(struct spi_master *master) 434 416 { 435 417 struct cdns_spi *xspi = spi_master_get_devdata(master); 436 - 437 - cdns_spi_config_clock_mode(master->cur_msg->spi); 438 418 439 419 cdns_spi_write(xspi, CDNS_SPI_ER_OFFSET, 440 420 CDNS_SPI_ER_ENABLE_MASK); ··· 548 532 xspi->is_decoded_cs = 0; 549 533 550 534 master->prepare_transfer_hardware = cdns_prepare_transfer_hardware; 535 + master->prepare_message = cdns_prepare_message; 551 536 master->transfer_one = cdns_transfer_one; 552 537 master->unprepare_transfer_hardware = cdns_unprepare_transfer_hardware; 553 538 master->set_cs = cdns_spi_chipselect; ··· 664 647 static SIMPLE_DEV_PM_OPS(cdns_spi_dev_pm_ops, cdns_spi_suspend, 665 648 cdns_spi_resume); 666 649 667 - static struct of_device_id cdns_spi_of_match[] = { 650 + static const struct of_device_id cdns_spi_of_match[] = { 668 651 { .compatible = "xlnx,zynq-spi-r1p6" }, 669 652 { .compatible = "cdns,spi-r1p6" }, 670 653 { /* end of table */ }
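The cadence change above only rewrites the CR register when the mode bits actually differ, because a redundant write (with the ER disable/enable workaround) would put spurious transitions on the clock line. The read-modify-compare pattern in isolation, sketched in plain C with placeholder bit positions:

```c
#include <stdint.h>
#include <stdbool.h>

#define CR_CPHA (1u << 2)	/* placeholder bit positions, not the real ones */
#define CR_CPOL (1u << 1)

/* Returns true when the register had to be rewritten (and, in the real
 * driver, the controller toggled off/on via ER to latch the new mode). */
static bool config_clock_mode(uint32_t *cr, bool cpha, bool cpol)
{
	uint32_t new_cr = *cr & ~(CR_CPHA | CR_CPOL);

	if (cpha)
		new_cr |= CR_CPHA;
	if (cpol)
		new_cr |= CR_CPOL;
	if (new_cr == *cr)
		return false;	/* no redundant write, no glitch on the bus */
	*cr = new_cr;
	return true;
}
```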
-2
drivers/spi/spi-clps711x.c
··· 184 184 } 185 185 master->max_speed_hz = clk_get_rate(hw->spi_clk); 186 186 187 - platform_set_drvdata(pdev, master); 188 - 189 187 hw->syscon = syscon_regmap_lookup_by_pdevname("syscon.3"); 190 188 if (IS_ERR(hw->syscon)) { 191 189 ret = PTR_ERR(hw->syscon);
+54 -20
drivers/spi/spi-davinci.c
··· 30 30 #include <linux/edma.h> 31 31 #include <linux/of.h> 32 32 #include <linux/of_device.h> 33 + #include <linux/of_gpio.h> 33 34 #include <linux/spi/spi.h> 34 35 #include <linux/spi/spi_bitbang.h> 35 36 #include <linux/slab.h> ··· 38 37 #include <linux/platform_data/spi-davinci.h> 39 38 40 39 #define SPI_NO_RESOURCE ((resource_size_t)-1) 41 - 42 - #define SPI_MAX_CHIPSELECT 2 43 40 44 41 #define CS_DEFAULT 0xFF 45 42 ··· 141 142 void (*get_rx)(u32 rx_data, struct davinci_spi *); 142 143 u32 (*get_tx)(struct davinci_spi *); 143 144 144 - u8 bytes_per_word[SPI_MAX_CHIPSELECT]; 145 + u8 *bytes_per_word; 145 146 }; 146 147 147 148 static struct davinci_spi_config davinci_spi_default_cfg; ··· 212 213 u8 chip_sel = spi->chip_select; 213 214 u16 spidat1 = CS_DEFAULT; 214 215 bool gpio_chipsel = false; 216 + int gpio; 215 217 216 218 dspi = spi_master_get_devdata(spi->master); 217 219 pdata = &dspi->pdata; 218 220 219 - if (pdata->chip_sel && chip_sel < pdata->num_chipselect && 220 - pdata->chip_sel[chip_sel] != SPI_INTERN_CS) 221 + if (spi->cs_gpio >= 0) { 222 + /* SPI core parse and update master->cs_gpio */ 221 223 gpio_chipsel = true; 224 + gpio = spi->cs_gpio; 225 + } 222 226 223 227 /* 224 228 * Board specific chip select logic decides the polarity and cs ··· 229 227 */ 230 228 if (gpio_chipsel) { 231 229 if (value == BITBANG_CS_ACTIVE) 232 - gpio_set_value(pdata->chip_sel[chip_sel], 0); 230 + gpio_set_value(gpio, spi->mode & SPI_CS_HIGH); 233 231 else 234 - gpio_set_value(pdata->chip_sel[chip_sel], 1); 232 + gpio_set_value(gpio, !(spi->mode & SPI_CS_HIGH)); 235 233 } else { 236 234 if (value == BITBANG_CS_ACTIVE) { 237 235 spidat1 |= SPIDAT1_CSHOLD_MASK; ··· 394 392 int retval = 0; 395 393 struct davinci_spi *dspi; 396 394 struct davinci_spi_platform_data *pdata; 395 + struct spi_master *master = spi->master; 396 + struct device_node *np = spi->dev.of_node; 397 + bool internal_cs = true; 398 + unsigned long flags = GPIOF_DIR_OUT; 397 399 398 400 dspi = 
spi_master_get_devdata(spi->master); 399 401 pdata = &dspi->pdata; 400 402 401 - if (!(spi->mode & SPI_NO_CS)) { 402 - if ((pdata->chip_sel == NULL) || 403 - (pdata->chip_sel[spi->chip_select] == SPI_INTERN_CS)) 404 - set_io_bits(dspi->base + SPIPC0, 1 << spi->chip_select); 403 + flags |= (spi->mode & SPI_CS_HIGH) ? GPIOF_INIT_LOW : GPIOF_INIT_HIGH; 405 404 405 + if (!(spi->mode & SPI_NO_CS)) { 406 + if (np && (master->cs_gpios != NULL) && (spi->cs_gpio >= 0)) { 407 + retval = gpio_request_one(spi->cs_gpio, 408 + flags, dev_name(&spi->dev)); 409 + internal_cs = false; 410 + } else if (pdata->chip_sel && 411 + spi->chip_select < pdata->num_chipselect && 412 + pdata->chip_sel[spi->chip_select] != SPI_INTERN_CS) { 413 + spi->cs_gpio = pdata->chip_sel[spi->chip_select]; 414 + retval = gpio_request_one(spi->cs_gpio, 415 + flags, dev_name(&spi->dev)); 416 + internal_cs = false; 417 + } 406 418 } 419 + 420 + if (retval) { 421 + dev_err(&spi->dev, "GPIO %d setup failed (%d)\n", 422 + spi->cs_gpio, retval); 423 + return retval; 424 + } 425 + 426 + if (internal_cs) 427 + set_io_bits(dspi->base + SPIPC0, 1 << spi->chip_select); 407 428 408 429 if (spi->mode & SPI_READY) 409 430 set_io_bits(dspi->base + SPIPC0, SPIPC0_SPIENA_MASK); ··· 437 412 clear_io_bits(dspi->base + SPIGCR1, SPIGCR1_LOOPBACK_MASK); 438 413 439 414 return retval; 415 + } 416 + 417 + static void davinci_spi_cleanup(struct spi_device *spi) 418 + { 419 + if (spi->cs_gpio >= 0) 420 + gpio_free(spi->cs_gpio); 440 421 } 441 422 442 423 static int davinci_spi_check_error(struct davinci_spi *dspi, int int_status) ··· 843 812 844 813 /* 845 814 * default num_cs is 1 and all chipsel are internal to the chip 815 + * indicated by chip_sel being NULL or cs_gpios being NULL or 816 + * set to -ENOENT. num-cs includes internal as well as gpios. 846 817 * indicated by chip_sel being NULL. GPIO based CS is not 847 818 * supported yet in DT bindings. 
848 819 */ ··· 883 850 struct resource *r; 884 851 resource_size_t dma_rx_chan = SPI_NO_RESOURCE; 885 852 resource_size_t dma_tx_chan = SPI_NO_RESOURCE; 886 - int i = 0, ret = 0; 853 + int ret = 0; 887 854 u32 spipc0; 888 855 889 856 master = spi_alloc_master(&pdev->dev, sizeof(struct davinci_spi)); ··· 908 875 909 876 /* pdata in dspi is now updated and point pdata to that */ 910 877 pdata = &dspi->pdata; 878 + 879 + dspi->bytes_per_word = devm_kzalloc(&pdev->dev, 880 + sizeof(*dspi->bytes_per_word) * 881 + pdata->num_chipselect, GFP_KERNEL); 882 + if (dspi->bytes_per_word == NULL) { 883 + ret = -ENOMEM; 884 + goto free_master; 885 + } 911 886 912 887 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 913 888 if (r == NULL) { ··· 956 915 master->num_chipselect = pdata->num_chipselect; 957 916 master->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 16); 958 917 master->setup = davinci_spi_setup; 918 + master->cleanup = davinci_spi_cleanup; 959 919 960 920 dspi->bitbang.chipselect = davinci_spi_chipselect; 961 921 dspi->bitbang.setup_transfer = davinci_spi_setup_transfer; ··· 1003 961 /* Set up SPIPC0. CS and ENA init is done in davinci_spi_setup */ 1004 962 spipc0 = SPIPC0_DIFUN_MASK | SPIPC0_DOFUN_MASK | SPIPC0_CLKFUN_MASK; 1005 963 iowrite32(spipc0, dspi->base + SPIPC0); 1006 - 1007 - /* initialize chip selects */ 1008 - if (pdata->chip_sel) { 1009 - for (i = 0; i < pdata->num_chipselect; i++) { 1010 - if (pdata->chip_sel[i] != SPI_INTERN_CS) 1011 - gpio_direction_output(pdata->chip_sel[i], 1); 1012 - } 1013 - } 1014 964 1015 965 if (pdata->intr_line) 1016 966 iowrite32(SPI_INTLVL_1, dspi->base + SPILVL);
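The davinci chipselect handler above now derives the GPIO level from the slave's SPI_CS_HIGH flag instead of assuming active-low. The polarity logic in isolation (SPI_CS_HIGH uses the same value as in linux/spi/spi.h):

```c
#include <stdbool.h>

#define SPI_CS_HIGH 0x04	/* same value as linux/spi/spi.h */

/* Level to drive on a GPIO chip select: active-low by default,
 * inverted when the slave declares SPI_CS_HIGH in its mode. */
static int cs_gpio_level(unsigned int mode, bool activate)
{
	bool cs_high = mode & SPI_CS_HIGH;

	return activate ? cs_high : !cs_high;
}
```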
+18 -1
drivers/spi/spi-dw-mmio.c
··· 16 16 #include <linux/spi/spi.h> 17 17 #include <linux/scatterlist.h> 18 18 #include <linux/module.h> 19 + #include <linux/of.h> 19 20 #include <linux/of_gpio.h> 21 + #include <linux/of_platform.h> 20 22 21 23 #include "spi-dw.h" 22 24 ··· 35 33 struct dw_spi *dws; 36 34 struct resource *mem; 37 35 int ret; 36 + int num_cs; 38 37 39 38 dwsmmio = devm_kzalloc(&pdev->dev, sizeof(struct dw_spi_mmio), 40 39 GFP_KERNEL); ··· 71 68 return ret; 72 69 73 70 dws->bus_num = pdev->id; 74 - dws->num_cs = 4; 71 + 75 72 dws->max_freq = clk_get_rate(dwsmmio->clk); 73 + 74 + num_cs = 4; 75 + 76 + if (pdev->dev.of_node) 77 + of_property_read_u32(pdev->dev.of_node, "num-cs", &num_cs); 78 + 79 + dws->num_cs = num_cs; 76 80 77 81 if (pdev->dev.of_node) { 78 82 int i; ··· 124 114 return 0; 125 115 } 126 116 117 + static const struct of_device_id dw_spi_mmio_of_match[] = { 118 + { .compatible = "snps,dw-apb-ssi", }, 119 + { /* end of table */} 120 + }; 121 + MODULE_DEVICE_TABLE(of, dw_spi_mmio_of_match); 122 + 127 123 static struct platform_driver dw_spi_mmio_driver = { 128 124 .probe = dw_spi_mmio_probe, 129 125 .remove = dw_spi_mmio_remove, 130 126 .driver = { 131 127 .name = DRIVER_NAME, 132 128 .owner = THIS_MODULE, 129 + .of_match_table = dw_spi_mmio_of_match, 133 130 }, 134 131 }; 135 132 module_platform_driver(dw_spi_mmio_driver);
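The dw-mmio probe above seeds `num_cs` with the default of 4 and lets an optional "num-cs" property override it; since `of_property_read_u32()` leaves the output untouched on failure, its return value can safely be ignored. A user-space sketch of the pattern (`read_u32()` is a stand-in, not the real OF helper):

```c
#include <stdbool.h>

/* Stand-in for of_property_read_u32(): writes the value and returns 0
 * when the property exists, returns an error and leaves *out untouched
 * otherwise -- which is exactly why the caller seeds the default first. */
static int read_u32(bool present, unsigned int dt_value, unsigned int *out)
{
	if (!present)
		return -1;
	*out = dt_value;
	return 0;
}

static unsigned int probe_num_cs(bool has_prop, unsigned int dt_value)
{
	unsigned int num_cs = 4;	/* default, as in dw_spi_mmio_probe */

	read_u32(has_prop, dt_value, &num_cs);	/* error deliberately ignored */
	return num_cs;
}
```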
+7 -1
drivers/spi/spi-efm32.c
··· 294 294 u32 location; 295 295 int ret; 296 296 297 - ret = of_property_read_u32(np, "efm32,location", &location); 297 + ret = of_property_read_u32(np, "energymicro,location", &location); 298 + 299 + if (ret) 300 + /* fall back to wrongly namespaced property */ 301 + ret = of_property_read_u32(np, "efm32,location", &location); 302 + 298 303 if (ret) 299 304 /* fall back to old and (wrongly) generic property "location" */ 300 305 ret = of_property_read_u32(np, "location", &location); 306 + 301 307 if (!ret) { 302 308 dev_dbg(&pdev->dev, "using location %u\n", location); 303 309 } else {
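The efm32 probe above now tries the properties in preference order: the correctly vendor-prefixed name first, then the two historic misnamed variants. A sketch of the lookup chain against a toy property table (the property names are the real ones from the patch; the table type is a stand-in for a DT node):

```c
#include <stddef.h>
#include <string.h>

/* Tiny property table standing in for a device tree node. */
struct prop { const char *name; unsigned int value; };

/* Try each candidate name in preference order, mirroring the efm32
 * probe: "energymicro,location", then the wrongly namespaced
 * "efm32,location", then the wrongly generic "location". */
static const char *lookup_location(const struct prop *props, size_t n,
				   unsigned int *out)
{
	static const char *const order[] = {
		"energymicro,location", "efm32,location", "location",
	};

	for (size_t i = 0; i < sizeof(order) / sizeof(order[0]); i++)
		for (size_t j = 0; j < n; j++)
			if (strcmp(props[j].name, order[i]) == 0) {
				*out = props[j].value;
				return order[i];
			}
	return NULL;	/* keep whatever the bootloader configured */
}
```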
-2
drivers/spi/spi-falcon.c
··· 425 425 master->unprepare_transfer_hardware = falcon_sflash_unprepare_xfer; 426 426 master->dev.of_node = pdev->dev.of_node; 427 427 428 - platform_set_drvdata(pdev, priv); 429 - 430 428 ret = devm_spi_register_master(&pdev->dev, master); 431 429 if (ret) 432 430 spi_master_put(master);
+1 -1
drivers/spi/spi-fsl-lib.c
··· 196 196 197 197 pinfo = devm_kzalloc(&ofdev->dev, sizeof(*pinfo), GFP_KERNEL); 198 198 if (!pinfo) 199 - return -ENOMEM; 199 + return ret; 200 200 201 201 pdata = &pinfo->pdata; 202 202 dev->platform_data = pdata;
+1 -1
drivers/spi/spi-fsl-spi.c
··· 58 58 .type = TYPE_GRLIB, 59 59 }; 60 60 61 - static struct of_device_id of_fsl_spi_match[] = { 61 + static const struct of_device_id of_fsl_spi_match[] = { 62 62 { 63 63 .compatible = "fsl,spi", 64 64 .data = &of_fsl_spi_fsl_config,
-2
drivers/spi/spi-omap-100k.c
··· 420 420 master->min_speed_hz = OMAP1_SPI100K_MAX_FREQ/(1<<16); 421 421 master->max_speed_hz = OMAP1_SPI100K_MAX_FREQ; 422 422 423 - platform_set_drvdata(pdev, master); 424 - 425 423 spi100k = spi_master_get_devdata(master); 426 424 427 425 /*
+4 -7
drivers/spi/spi-omap-uwire.c
··· 41 41 #include <linux/err.h> 42 42 #include <linux/clk.h> 43 43 #include <linux/slab.h> 44 + #include <linux/device.h> 44 45 45 46 #include <linux/spi/spi.h> 46 47 #include <linux/spi/spi_bitbang.h> 47 48 #include <linux/module.h> 49 + #include <linux/io.h> 48 50 49 51 #include <asm/irq.h> 50 52 #include <mach/hardware.h> 51 - #include <asm/io.h> 52 53 #include <asm/mach-types.h> 53 54 54 55 #include <mach/mux.h> ··· 448 447 { 449 448 uwire_write_reg(UWIRE_SR3, 0); 450 449 clk_disable(uwire->ck); 451 - clk_put(uwire->ck); 452 450 spi_master_put(uwire->bitbang.master); 453 451 } 454 452 ··· 463 463 464 464 uwire = spi_master_get_devdata(master); 465 465 466 - uwire_base = ioremap(UWIRE_BASE_PHYS, UWIRE_IO_SIZE); 466 + uwire_base = devm_ioremap(&pdev->dev, UWIRE_BASE_PHYS, UWIRE_IO_SIZE); 467 467 if (!uwire_base) { 468 468 dev_dbg(&pdev->dev, "can't ioremap UWIRE\n"); 469 469 spi_master_put(master); ··· 472 472 473 473 platform_set_drvdata(pdev, uwire); 474 474 475 - uwire->ck = clk_get(&pdev->dev, "fck"); 475 + uwire->ck = devm_clk_get(&pdev->dev, "fck"); 476 476 if (IS_ERR(uwire->ck)) { 477 477 status = PTR_ERR(uwire->ck); 478 478 dev_dbg(&pdev->dev, "no functional clock?\n"); 479 479 spi_master_put(master); 480 - iounmap(uwire_base); 481 480 return status; 482 481 } 483 482 clk_enable(uwire->ck); ··· 506 507 status = spi_bitbang_start(&uwire->bitbang); 507 508 if (status < 0) { 508 509 uwire_off(uwire); 509 - iounmap(uwire_base); 510 510 } 511 511 return status; 512 512 } ··· 518 520 519 521 spi_bitbang_stop(&uwire->bitbang); 520 522 uwire_off(uwire); 521 - iounmap(uwire_base); 522 523 return 0; 523 524 } 524 525
+14
drivers/spi/spi-omap2-mcspi.c
··· 149 149 void __iomem *base; 150 150 unsigned long phys; 151 151 int word_len; 152 + u16 mode; 152 153 struct list_head node; 153 154 /* Context save and restore shadow register */ 154 155 u32 chconf0, chctrl0; ··· 927 926 928 927 mcspi_write_chconf0(spi, l); 929 928 929 + cs->mode = spi->mode; 930 + 930 931 dev_dbg(&spi->dev, "setup: speed %d, sample %s edge, clk %s\n", 931 932 speed_hz, 932 933 (spi->mode & SPI_CPHA) ? "trailing" : "leading", ··· 1001 998 return -ENOMEM; 1002 999 cs->base = mcspi->base + spi->chip_select * 0x14; 1003 1000 cs->phys = mcspi->phys + spi->chip_select * 0x14; 1001 + cs->mode = 0; 1004 1002 cs->chconf0 = 0; 1005 1003 cs->chctrl0 = 0; 1006 1004 spi->controller_state = cs; ··· 1082 1078 mcspi_dma = mcspi->dma_channels + spi->chip_select; 1083 1079 cs = spi->controller_state; 1084 1080 cd = spi->controller_data; 1081 + 1082 + /* 1083 + * The slave driver could have changed spi->mode in which case 1084 + * it will be different from cs->mode (the current hardware setup). 1085 + * If so, set par_override (even though its not a parity issue) so 1086 + * omap2_mcspi_setup_transfer will be called to configure the hardware 1087 + * with the correct mode on the first iteration of the loop below. 1088 + */ 1089 + if (spi->mode != cs->mode) 1090 + par_override = 1; 1085 1091 1086 1092 omap2_mcspi_set_enable(spi, 0); 1087 1093 list_for_each_entry(t, &m->transfers, transfer_list) {
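The mcspi change above caches the mode last programmed for each chip select and forces a reconfiguration only when the slave driver has changed `spi->mode` since the previous transfer. The check in isolation:

```c
#include <stdbool.h>

struct cs_state {
	unsigned int mode;	/* mode currently programmed in hardware */
	bool configured;
};

/* Returns true when the hardware must be reprogrammed because the
 * requested mode differs from what was last written (or nothing has
 * been written yet). */
static bool needs_reconfig(struct cs_state *cs, unsigned int spi_mode)
{
	if (cs->configured && cs->mode == spi_mode)
		return false;
	cs->mode = spi_mode;
	cs->configured = true;
	return true;
}
```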
+60 -18
drivers/spi/spi-orion.c
··· 16 16 #include <linux/io.h> 17 17 #include <linux/spi/spi.h> 18 18 #include <linux/module.h> 19 + #include <linux/pm_runtime.h> 19 20 #include <linux/of.h> 20 21 #include <linux/clk.h> 21 22 #include <linux/sizes.h> 22 23 #include <asm/unaligned.h> 23 24 24 25 #define DRIVER_NAME "orion_spi" 26 + 27 + /* Runtime PM autosuspend timeout: PM is fairly light on this driver */ 28 + #define SPI_AUTOSUSPEND_TIMEOUT 200 25 29 26 30 #define ORION_NUM_CHIPSELECTS 1 /* only one slave is supported*/ 27 31 #define ORION_SPI_WAIT_RDY_MAX_LOOP 2000 /* in usec */ ··· 281 277 return xfer->len - count; 282 278 } 283 279 284 - 285 280 static int orion_spi_transfer_one_message(struct spi_master *master, 286 281 struct spi_message *m) 287 282 { ··· 349 346 struct resource *r; 350 347 unsigned long tclk_hz; 351 348 int status = 0; 352 - const u32 *iprop; 353 - int size; 354 349 355 350 master = spi_alloc_master(&pdev->dev, sizeof(*spi)); 356 351 if (master == NULL) { ··· 359 358 if (pdev->id != -1) 360 359 master->bus_num = pdev->id; 361 360 if (pdev->dev.of_node) { 362 - iprop = of_get_property(pdev->dev.of_node, "cell-index", 363 - &size); 364 - if (iprop && size == sizeof(*iprop)) 365 - master->bus_num = *iprop; 361 + u32 cell_index; 362 + if (!of_property_read_u32(pdev->dev.of_node, "cell-index", 363 + &cell_index)) 364 + master->bus_num = cell_index; 366 365 } 367 366 368 367 /* we support only mode 0, and no options */ ··· 371 370 master->transfer_one_message = orion_spi_transfer_one_message; 372 371 master->num_chipselect = ORION_NUM_CHIPSELECTS; 373 372 master->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16); 373 + master->auto_runtime_pm = true; 374 374 375 375 platform_set_drvdata(pdev, master); 376 376 ··· 384 382 goto out; 385 383 } 386 384 387 - clk_prepare(spi->clk); 388 - clk_enable(spi->clk); 385 + status = clk_prepare_enable(spi->clk); 386 + if (status) 387 + goto out; 388 + 389 389 tclk_hz = clk_get_rate(spi->clk); 390 390 master->max_speed_hz = 
DIV_ROUND_UP(tclk_hz, 4); 391 391 master->min_speed_hz = DIV_ROUND_UP(tclk_hz, 30); ··· 399 395 goto out_rel_clk; 400 396 } 401 397 402 - if (orion_spi_reset(spi) < 0) 403 - goto out_rel_clk; 398 + pm_runtime_set_active(&pdev->dev); 399 + pm_runtime_use_autosuspend(&pdev->dev); 400 + pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT); 401 + pm_runtime_enable(&pdev->dev); 402 + 403 + status = orion_spi_reset(spi); 404 + if (status < 0) 405 + goto out_rel_pm; 406 + 407 + pm_runtime_mark_last_busy(&pdev->dev); 408 + pm_runtime_put_autosuspend(&pdev->dev); 404 409 405 410 master->dev.of_node = pdev->dev.of_node; 406 - status = devm_spi_register_master(&pdev->dev, master); 411 + status = spi_register_master(master); 407 412 if (status < 0) 408 - goto out_rel_clk; 413 + goto out_rel_pm; 409 414 410 415 return status; 411 416 417 + out_rel_pm: 418 + pm_runtime_disable(&pdev->dev); 412 419 out_rel_clk: 413 420 clk_disable_unprepare(spi->clk); 414 421 out: ··· 430 415 431 416 static int orion_spi_remove(struct platform_device *pdev) 432 417 { 433 - struct spi_master *master; 434 - struct orion_spi *spi; 418 + struct spi_master *master = platform_get_drvdata(pdev); 419 + struct orion_spi *spi = spi_master_get_devdata(master); 435 420 436 - master = platform_get_drvdata(pdev); 437 - spi = spi_master_get_devdata(master); 438 - 421 + pm_runtime_get_sync(&pdev->dev); 439 422 clk_disable_unprepare(spi->clk); 423 + 424 + spi_unregister_master(master); 425 + pm_runtime_disable(&pdev->dev); 440 426 441 427 return 0; 442 428 } 443 429 444 430 MODULE_ALIAS("platform:" DRIVER_NAME); 431 + 432 + #ifdef CONFIG_PM_RUNTIME 433 + static int orion_spi_runtime_suspend(struct device *dev) 434 + { 435 + struct spi_master *master = dev_get_drvdata(dev); 436 + struct orion_spi *spi = spi_master_get_devdata(master); 437 + 438 + clk_disable_unprepare(spi->clk); 439 + return 0; 440 + } 441 + 442 + static int orion_spi_runtime_resume(struct device *dev) 443 + { 444 + struct 
spi_master *master = dev_get_drvdata(dev); 445 + struct orion_spi *spi = spi_master_get_devdata(master); 446 + 447 + return clk_prepare_enable(spi->clk); 448 + } 449 + #endif 450 + 451 + static const struct dev_pm_ops orion_spi_pm_ops = { 452 + SET_RUNTIME_PM_OPS(orion_spi_runtime_suspend, 453 + orion_spi_runtime_resume, 454 + NULL) 455 + }; 445 456 446 457 static const struct of_device_id orion_spi_of_match_table[] = { 447 458 { .compatible = "marvell,orion-spi", }, ··· 479 438 .driver = { 480 439 .name = DRIVER_NAME, 481 440 .owner = THIS_MODULE, 441 + .pm = &orion_spi_pm_ops, 482 442 .of_match_table = of_match_ptr(orion_spi_of_match_table), 483 443 }, 484 444 .probe = orion_spi_probe,
+1 -1
drivers/spi/spi-pl022.c
··· 1417 1417 * Default is to enable all interrupts except RX - 1418 1418 * this will be enabled once TX is complete 1419 1419 */ 1420 - u32 irqflags = ENABLE_ALL_INTERRUPTS & ~SSP_IMSC_MASK_RXIM; 1420 + u32 irqflags = (u32)(ENABLE_ALL_INTERRUPTS & ~SSP_IMSC_MASK_RXIM); 1421 1421 1422 1422 /* Enable target chip, if not already active */ 1423 1423 if (!pl022->next_msg_cs_active)
+22 -14
drivers/spi/spi-qup.c
··· 142 142 int w_size; /* bytes per SPI word */ 143 143 int tx_bytes; 144 144 int rx_bytes; 145 + int qup_v1; 145 146 }; 146 147 147 148 ··· 421 420 config |= QUP_CONFIG_SPI_MODE; 422 421 writel_relaxed(config, controller->base + QUP_CONFIG); 423 422 424 - writel_relaxed(0, controller->base + QUP_OPERATIONAL_MASK); 423 + /* only write to OPERATIONAL_MASK when register is present */ 424 + if (!controller->qup_v1) 425 + writel_relaxed(0, controller->base + QUP_OPERATIONAL_MASK); 425 426 return 0; 426 427 } 427 428 ··· 489 486 struct resource *res; 490 487 struct device *dev; 491 488 void __iomem *base; 492 - u32 data, max_freq, iomode; 489 + u32 max_freq, iomode; 493 490 int ret, irq, size; 494 491 495 492 dev = &pdev->dev; ··· 532 529 return ret; 533 530 } 534 531 535 - data = readl_relaxed(base + QUP_HW_VERSION); 536 - 537 - if (data < QUP_HW_VERSION_2_1_1) { 538 - clk_disable_unprepare(cclk); 539 - clk_disable_unprepare(iclk); 540 - dev_err(dev, "v.%08x is not supported\n", data); 541 - return -ENXIO; 542 - } 543 - 544 532 master = spi_alloc_master(dev, sizeof(struct spi_qup)); 545 533 if (!master) { 546 534 clk_disable_unprepare(cclk); ··· 564 570 controller->cclk = cclk; 565 571 controller->irq = irq; 566 572 573 + /* set v1 flag if device is version 1 */ 574 + if (of_device_is_compatible(dev->of_node, "qcom,spi-qup-v1.1.1")) 575 + controller->qup_v1 = 1; 576 + 567 577 spin_lock_init(&controller->lock); 568 578 init_completion(&controller->done); 569 579 ··· 591 593 size = QUP_IO_M_INPUT_FIFO_SIZE(iomode); 592 594 controller->in_fifo_sz = controller->in_blk_sz * (2 << size); 593 595 594 - dev_info(dev, "v.%08x IN:block:%d, fifo:%d, OUT:block:%d, fifo:%d\n", 595 - data, controller->in_blk_sz, controller->in_fifo_sz, 596 + dev_info(dev, "IN:block:%d, fifo:%d, OUT:block:%d, fifo:%d\n", 597 + controller->in_blk_sz, controller->in_fifo_sz, 596 598 controller->out_blk_sz, controller->out_fifo_sz); 597 599 598 600 writel_relaxed(1, base + QUP_SW_RESET); ··· 605 607 
606 608 writel_relaxed(0, base + QUP_OPERATIONAL); 607 609 writel_relaxed(0, base + QUP_IO_M_MODES); 608 - writel_relaxed(0, base + QUP_OPERATIONAL_MASK); 610 + 611 + if (!controller->qup_v1) 612 + writel_relaxed(0, base + QUP_OPERATIONAL_MASK); 613 + 609 614 writel_relaxed(SPI_ERROR_CLK_UNDER_RUN | SPI_ERROR_CLK_OVER_RUN, 610 615 base + SPI_ERROR_FLAGS_EN); 616 + 617 + /* if earlier version of the QUP, disable INPUT_OVERRUN */ 618 + if (controller->qup_v1) 619 + writel_relaxed(QUP_ERROR_OUTPUT_OVER_RUN | 620 + QUP_ERROR_INPUT_UNDER_RUN | QUP_ERROR_OUTPUT_UNDER_RUN, 621 + base + QUP_ERROR_FLAGS_EN); 611 622 612 623 writel_relaxed(0, base + SPI_CONFIG); 613 624 writel_relaxed(SPI_IO_C_NO_TRI_STATE, base + SPI_IO_CONTROL); ··· 739 732 } 740 733 741 734 static const struct of_device_id spi_qup_dt_match[] = { 735 + { .compatible = "qcom,spi-qup-v1.1.1", }, 742 736 { .compatible = "qcom,spi-qup-v2.1.1", }, 743 737 { .compatible = "qcom,spi-qup-v2.2.1", }, 744 738 { }
+837
drivers/spi/spi-rockchip.c
··· 1 + /* 2 + * Copyright (c) 2014, Fuzhou Rockchip Electronics Co., Ltd 3 + * Author: Addy Ke <addy.ke@rock-chips.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms and conditions of the GNU General Public License, 7 + * version 2, as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + */ 15 + 16 + #include <linux/init.h> 17 + #include <linux/module.h> 18 + #include <linux/clk.h> 19 + #include <linux/err.h> 20 + #include <linux/delay.h> 21 + #include <linux/interrupt.h> 22 + #include <linux/platform_device.h> 23 + #include <linux/slab.h> 24 + #include <linux/spi/spi.h> 25 + #include <linux/scatterlist.h> 26 + #include <linux/of.h> 27 + #include <linux/pm_runtime.h> 28 + #include <linux/io.h> 29 + #include <linux/dmaengine.h> 30 + 31 + #define DRIVER_NAME "rockchip-spi" 32 + 33 + /* SPI register offsets */ 34 + #define ROCKCHIP_SPI_CTRLR0 0x0000 35 + #define ROCKCHIP_SPI_CTRLR1 0x0004 36 + #define ROCKCHIP_SPI_SSIENR 0x0008 37 + #define ROCKCHIP_SPI_SER 0x000c 38 + #define ROCKCHIP_SPI_BAUDR 0x0010 39 + #define ROCKCHIP_SPI_TXFTLR 0x0014 40 + #define ROCKCHIP_SPI_RXFTLR 0x0018 41 + #define ROCKCHIP_SPI_TXFLR 0x001c 42 + #define ROCKCHIP_SPI_RXFLR 0x0020 43 + #define ROCKCHIP_SPI_SR 0x0024 44 + #define ROCKCHIP_SPI_IPR 0x0028 45 + #define ROCKCHIP_SPI_IMR 0x002c 46 + #define ROCKCHIP_SPI_ISR 0x0030 47 + #define ROCKCHIP_SPI_RISR 0x0034 48 + #define ROCKCHIP_SPI_ICR 0x0038 49 + #define ROCKCHIP_SPI_DMACR 0x003c 50 + #define ROCKCHIP_SPI_DMATDLR 0x0040 51 + #define ROCKCHIP_SPI_DMARDLR 0x0044 52 + #define ROCKCHIP_SPI_TXDR 0x0400 53 + #define ROCKCHIP_SPI_RXDR 0x0800 54 + 55 + /* Bit fields in CTRLR0 */ 56 + #define CR0_DFS_OFFSET 0 57 + 58 + 
#define CR0_CFS_OFFSET 2 59 +
60 + #define CR0_SCPH_OFFSET 6 61 +
62 + #define CR0_SCPOL_OFFSET 7 63 +
64 + #define CR0_CSM_OFFSET 8
65 + #define CR0_CSM_KEEP 0x0
66 + /* ss_n be high for half sclk_out cycles */
67 + #define CR0_CSM_HALF 0x1
68 + /* ss_n be high for one sclk_out cycle */
69 + #define CR0_CSM_ONE 0x2 70 +
71 + /* ss_n to sclk_out delay */
72 + #define CR0_SSD_OFFSET 10
73 + /*
74 + * The period between ss_n active and
75 + * sclk_out active is half sclk_out cycles
76 + */
77 + #define CR0_SSD_HALF 0x0
78 + /*
79 + * The period between ss_n active and
80 + * sclk_out active is one sclk_out cycle
81 + */
82 + #define CR0_SSD_ONE 0x1 83 +
84 + #define CR0_EM_OFFSET 11
85 + #define CR0_EM_LITTLE 0x0
86 + #define CR0_EM_BIG 0x1 87 +
88 + #define CR0_FBM_OFFSET 12
89 + #define CR0_FBM_MSB 0x0
90 + #define CR0_FBM_LSB 0x1 91 +
92 + #define CR0_BHT_OFFSET 13
93 + #define CR0_BHT_16BIT 0x0
94 + #define CR0_BHT_8BIT 0x1 95 +
96 + #define CR0_RSD_OFFSET 14 97 +
98 + #define CR0_FRF_OFFSET 16
99 + #define CR0_FRF_SPI 0x0
100 + #define CR0_FRF_SSP 0x1
101 + #define CR0_FRF_MICROWIRE 0x2 102 +
103 + #define CR0_XFM_OFFSET 18
104 + #define CR0_XFM_MASK (0x03 << CR0_XFM_OFFSET)
105 + #define CR0_XFM_TR 0x0
106 + #define CR0_XFM_TO 0x1
107 + #define CR0_XFM_RO 0x2 108 +
109 + #define CR0_OPM_OFFSET 20
110 + #define CR0_OPM_MASTER 0x0
111 + #define CR0_OPM_SLAVE 0x1 112 +
113 + #define CR0_MTM_OFFSET 0x21 114 +
115 + /* Bit fields in SER, 2bit */
116 + #define SER_MASK 0x3 117 +
118 + /* Bit fields in SR, 5bit */
119 + #define SR_MASK 0x1f
120 + #define SR_BUSY (1 << 0)
121 + #define SR_TF_FULL (1 << 1)
122 + #define SR_TF_EMPTY (1 << 2)
123 + #define SR_RF_EMPTY (1 << 3)
124 + #define SR_RF_FULL (1 << 4) 125 +
126 + /* Bit fields in ISR, IMR, ISR, RISR, 5bit */
127 + #define INT_MASK 0x1f
128 + #define INT_TF_EMPTY (1 << 0)
129 + #define INT_TF_OVERFLOW (1 << 1)
130 + #define INT_RF_UNDERFLOW (1 << 2)
131 + #define INT_RF_OVERFLOW (1 << 3)
132 + #define INT_RF_FULL
(1 << 4) 133 + 134 + /* Bit fields in ICR, 4bit */ 135 + #define ICR_MASK 0x0f 136 + #define ICR_ALL (1 << 0) 137 + #define ICR_RF_UNDERFLOW (1 << 1) 138 + #define ICR_RF_OVERFLOW (1 << 2) 139 + #define ICR_TF_OVERFLOW (1 << 3) 140 + 141 + /* Bit fields in DMACR */ 142 + #define RF_DMA_EN (1 << 0) 143 + #define TF_DMA_EN (1 << 1) 144 + 145 + #define RXBUSY (1 << 0) 146 + #define TXBUSY (1 << 1) 147 + 148 + enum rockchip_ssi_type { 149 + SSI_MOTO_SPI = 0, 150 + SSI_TI_SSP, 151 + SSI_NS_MICROWIRE, 152 + }; 153 + 154 + struct rockchip_spi_dma_data { 155 + struct dma_chan *ch; 156 + enum dma_transfer_direction direction; 157 + dma_addr_t addr; 158 + }; 159 + 160 + struct rockchip_spi { 161 + struct device *dev; 162 + struct spi_master *master; 163 + 164 + struct clk *spiclk; 165 + struct clk *apb_pclk; 166 + 167 + void __iomem *regs; 168 + /*depth of the FIFO buffer */ 169 + u32 fifo_len; 170 + /* max bus freq supported */ 171 + u32 max_freq; 172 + /* supported slave numbers */ 173 + enum rockchip_ssi_type type; 174 + 175 + u16 mode; 176 + u8 tmode; 177 + u8 bpw; 178 + u8 n_bytes; 179 + unsigned len; 180 + u32 speed; 181 + 182 + const void *tx; 183 + const void *tx_end; 184 + void *rx; 185 + void *rx_end; 186 + 187 + u32 state; 188 + /* protect state */ 189 + spinlock_t lock; 190 + 191 + struct completion xfer_completion; 192 + 193 + u32 use_dma; 194 + struct sg_table tx_sg; 195 + struct sg_table rx_sg; 196 + struct rockchip_spi_dma_data dma_rx; 197 + struct rockchip_spi_dma_data dma_tx; 198 + }; 199 + 200 + static inline void spi_enable_chip(struct rockchip_spi *rs, int enable) 201 + { 202 + writel_relaxed((enable ? 
1 : 0), rs->regs + ROCKCHIP_SPI_SSIENR); 203 + } 204 + 205 + static inline void spi_set_clk(struct rockchip_spi *rs, u16 div) 206 + { 207 + writel_relaxed(div, rs->regs + ROCKCHIP_SPI_BAUDR); 208 + } 209 + 210 + static inline void flush_fifo(struct rockchip_spi *rs) 211 + { 212 + while (readl_relaxed(rs->regs + ROCKCHIP_SPI_RXFLR)) 213 + readl_relaxed(rs->regs + ROCKCHIP_SPI_RXDR); 214 + } 215 + 216 + static inline void wait_for_idle(struct rockchip_spi *rs) 217 + { 218 + unsigned long timeout = jiffies + msecs_to_jiffies(5); 219 + 220 + do { 221 + if (!(readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY)) 222 + return; 223 + } while (time_before(jiffies, timeout)); 224 + 225 + dev_warn(rs->dev, "spi controller is in busy state!\n"); 226 + } 227 + 228 + static u32 get_fifo_len(struct rockchip_spi *rs) 229 + { 230 + u32 fifo; 231 + 232 + for (fifo = 2; fifo < 32; fifo++) { 233 + writel_relaxed(fifo, rs->regs + ROCKCHIP_SPI_TXFTLR); 234 + if (fifo != readl_relaxed(rs->regs + ROCKCHIP_SPI_TXFTLR)) 235 + break; 236 + } 237 + 238 + writel_relaxed(0, rs->regs + ROCKCHIP_SPI_TXFTLR); 239 + 240 + return (fifo == 31) ? 
0 : fifo; 241 + } 242 + 243 + static inline u32 tx_max(struct rockchip_spi *rs) 244 + { 245 + u32 tx_left, tx_room; 246 + 247 + tx_left = (rs->tx_end - rs->tx) / rs->n_bytes; 248 + tx_room = rs->fifo_len - readl_relaxed(rs->regs + ROCKCHIP_SPI_TXFLR); 249 + 250 + return min(tx_left, tx_room); 251 + } 252 + 253 + static inline u32 rx_max(struct rockchip_spi *rs) 254 + { 255 + u32 rx_left = (rs->rx_end - rs->rx) / rs->n_bytes; 256 + u32 rx_room = (u32)readl_relaxed(rs->regs + ROCKCHIP_SPI_RXFLR); 257 + 258 + return min(rx_left, rx_room); 259 + } 260 + 261 + static void rockchip_spi_set_cs(struct spi_device *spi, bool enable) 262 + { 263 + u32 ser; 264 + struct rockchip_spi *rs = spi_master_get_devdata(spi->master); 265 + 266 + ser = readl_relaxed(rs->regs + ROCKCHIP_SPI_SER) & SER_MASK; 267 + 268 + /* 269 + * drivers/spi/spi.c: 270 + * static void spi_set_cs(struct spi_device *spi, bool enable) 271 + * { 272 + * if (spi->mode & SPI_CS_HIGH) 273 + * enable = !enable; 274 + * 275 + * if (spi->cs_gpio >= 0) 276 + * gpio_set_value(spi->cs_gpio, !enable); 277 + * else if (spi->master->set_cs) 278 + * spi->master->set_cs(spi, !enable); 279 + * } 280 + * 281 + * Note: enable(rockchip_spi_set_cs) = !enable(spi_set_cs) 282 + */ 283 + if (!enable) 284 + ser |= 1 << spi->chip_select; 285 + else 286 + ser &= ~(1 << spi->chip_select); 287 + 288 + writel_relaxed(ser, rs->regs + ROCKCHIP_SPI_SER); 289 + } 290 + 291 + static int rockchip_spi_prepare_message(struct spi_master *master, 292 + struct spi_message *msg) 293 + { 294 + struct rockchip_spi *rs = spi_master_get_devdata(master); 295 + struct spi_device *spi = msg->spi; 296 + 297 + rs->mode = spi->mode; 298 + 299 + return 0; 300 + } 301 + 302 + static int rockchip_spi_unprepare_message(struct spi_master *master, 303 + struct spi_message *msg) 304 + { 305 + unsigned long flags; 306 + struct rockchip_spi *rs = spi_master_get_devdata(master); 307 + 308 + spin_lock_irqsave(&rs->lock, flags); 309 + 310 + /* 311 + * For DMA mode, we 
need to terminate the DMA channel and flush the
312 + FIFO for the next transfer if the DMA transfer timed out.
313 + unprepare_message() is called by the core when the transfer
314 + completes or times out, so error handling here is reasonable.
315 + */
316 + if (rs->use_dma) {
317 + if (rs->state & RXBUSY) {
318 + dmaengine_terminate_all(rs->dma_rx.ch);
319 + flush_fifo(rs);
320 + }
321 +
322 + if (rs->state & TXBUSY)
323 + dmaengine_terminate_all(rs->dma_tx.ch);
324 + }
325 +
326 + spin_unlock_irqrestore(&rs->lock, flags);
327 +
328 + return 0;
329 + }
330 +
331 + static void rockchip_spi_pio_writer(struct rockchip_spi *rs)
332 + {
333 + u32 max = tx_max(rs);
334 + u32 txw = 0;
335 +
336 + while (max--) {
337 + if (rs->n_bytes == 1)
338 + txw = *(u8 *)(rs->tx);
339 + else
340 + txw = *(u16 *)(rs->tx);
341 +
342 + writel_relaxed(txw, rs->regs + ROCKCHIP_SPI_TXDR);
343 + rs->tx += rs->n_bytes;
344 + }
345 + }
346 +
347 + static void rockchip_spi_pio_reader(struct rockchip_spi *rs)
348 + {
349 + u32 max = rx_max(rs);
350 + u32 rxw;
351 +
352 + while (max--) {
353 + rxw = readl_relaxed(rs->regs + ROCKCHIP_SPI_RXDR);
354 + if (rs->n_bytes == 1)
355 + *(u8 *)(rs->rx) = (u8)rxw;
356 + else
357 + *(u16 *)(rs->rx) = (u16)rxw;
358 + rs->rx += rs->n_bytes;
359 + }
360 + }
361 +
362 + static int rockchip_spi_pio_transfer(struct rockchip_spi *rs)
363 + {
364 + int remain = 0;
365 +
366 + do {
367 + if (rs->tx) {
368 + remain = rs->tx_end - rs->tx;
369 + rockchip_spi_pio_writer(rs);
370 + }
371 +
372 + if (rs->rx) {
373 + remain = rs->rx_end - rs->rx;
374 + rockchip_spi_pio_reader(rs);
375 + }
376 +
377 + cpu_relax();
378 + } while (remain);
379 +
380 + /* If tx, wait until the FIFO is completely drained.
*/ 381 + if (rs->tx) 382 + wait_for_idle(rs); 383 + 384 + return 0; 385 + } 386 + 387 + static void rockchip_spi_dma_rxcb(void *data) 388 + { 389 + unsigned long flags; 390 + struct rockchip_spi *rs = data; 391 + 392 + spin_lock_irqsave(&rs->lock, flags); 393 + 394 + rs->state &= ~RXBUSY; 395 + if (!(rs->state & TXBUSY)) 396 + spi_finalize_current_transfer(rs->master); 397 + 398 + spin_unlock_irqrestore(&rs->lock, flags); 399 + } 400 + 401 + static void rockchip_spi_dma_txcb(void *data) 402 + { 403 + unsigned long flags; 404 + struct rockchip_spi *rs = data; 405 + 406 + /* Wait until the FIFO data completely. */ 407 + wait_for_idle(rs); 408 + 409 + spin_lock_irqsave(&rs->lock, flags); 410 + 411 + rs->state &= ~TXBUSY; 412 + if (!(rs->state & RXBUSY)) 413 + spi_finalize_current_transfer(rs->master); 414 + 415 + spin_unlock_irqrestore(&rs->lock, flags); 416 + } 417 + 418 + static int rockchip_spi_dma_transfer(struct rockchip_spi *rs) 419 + { 420 + unsigned long flags; 421 + struct dma_slave_config rxconf, txconf; 422 + struct dma_async_tx_descriptor *rxdesc, *txdesc; 423 + 424 + spin_lock_irqsave(&rs->lock, flags); 425 + rs->state &= ~RXBUSY; 426 + rs->state &= ~TXBUSY; 427 + spin_unlock_irqrestore(&rs->lock, flags); 428 + 429 + if (rs->rx) { 430 + rxconf.direction = rs->dma_rx.direction; 431 + rxconf.src_addr = rs->dma_rx.addr; 432 + rxconf.src_addr_width = rs->n_bytes; 433 + rxconf.src_maxburst = rs->n_bytes; 434 + dmaengine_slave_config(rs->dma_rx.ch, &rxconf); 435 + 436 + rxdesc = dmaengine_prep_slave_sg( 437 + rs->dma_rx.ch, 438 + rs->rx_sg.sgl, rs->rx_sg.nents, 439 + rs->dma_rx.direction, DMA_PREP_INTERRUPT); 440 + 441 + rxdesc->callback = rockchip_spi_dma_rxcb; 442 + rxdesc->callback_param = rs; 443 + } 444 + 445 + if (rs->tx) { 446 + txconf.direction = rs->dma_tx.direction; 447 + txconf.dst_addr = rs->dma_tx.addr; 448 + txconf.dst_addr_width = rs->n_bytes; 449 + txconf.dst_maxburst = rs->n_bytes; 450 + dmaengine_slave_config(rs->dma_tx.ch, &txconf); 451 + 452 
+ txdesc = dmaengine_prep_slave_sg(
453 + rs->dma_tx.ch,
454 + rs->tx_sg.sgl, rs->tx_sg.nents,
455 + rs->dma_tx.direction, DMA_PREP_INTERRUPT);
456 +
457 + txdesc->callback = rockchip_spi_dma_txcb;
458 + txdesc->callback_param = rs;
459 + }
460 +
461 + /* rx must be started before tx because SPI is inherently full-duplex */
462 + if (rs->rx) {
463 + spin_lock_irqsave(&rs->lock, flags);
464 + rs->state |= RXBUSY;
465 + spin_unlock_irqrestore(&rs->lock, flags);
466 + dmaengine_submit(rxdesc);
467 + dma_async_issue_pending(rs->dma_rx.ch);
468 + }
469 +
470 + if (rs->tx) {
471 + spin_lock_irqsave(&rs->lock, flags);
472 + rs->state |= TXBUSY;
473 + spin_unlock_irqrestore(&rs->lock, flags);
474 + dmaengine_submit(txdesc);
475 + dma_async_issue_pending(rs->dma_tx.ch);
476 + }
477 +
478 + return 1;
479 + }
480 +
481 + static void rockchip_spi_config(struct rockchip_spi *rs)
482 + {
483 + u32 div = 0;
484 + u32 dmacr = 0;
485 +
486 + u32 cr0 = (CR0_BHT_8BIT << CR0_BHT_OFFSET)
487 + | (CR0_SSD_ONE << CR0_SSD_OFFSET);
488 +
489 + cr0 |= (rs->n_bytes << CR0_DFS_OFFSET);
490 + cr0 |= ((rs->mode & 0x3) << CR0_SCPH_OFFSET);
491 + cr0 |= (rs->tmode << CR0_XFM_OFFSET);
492 + cr0 |= (rs->type << CR0_FRF_OFFSET);
493 +
494 + if (rs->use_dma) {
495 + if (rs->tx)
496 + dmacr |= TF_DMA_EN;
497 + if (rs->rx)
498 + dmacr |= RF_DMA_EN;
499 + }
500 +
501 + /* the divider does not support odd numbers */
502 + div = rs->max_freq / rs->speed;
503 + div = (div + 1) & 0xfffe;
504 +
505 + spi_enable_chip(rs, 0);
506 +
507 + writel_relaxed(cr0, rs->regs + ROCKCHIP_SPI_CTRLR0);
508 +
509 + writel_relaxed(rs->len - 1, rs->regs + ROCKCHIP_SPI_CTRLR1);
510 + writel_relaxed(rs->fifo_len / 2 - 1, rs->regs + ROCKCHIP_SPI_TXFTLR);
511 + writel_relaxed(rs->fifo_len / 2 - 1, rs->regs + ROCKCHIP_SPI_RXFTLR);
512 +
513 + writel_relaxed(0, rs->regs + ROCKCHIP_SPI_DMATDLR);
514 + writel_relaxed(0, rs->regs + ROCKCHIP_SPI_DMARDLR);
515 + writel_relaxed(dmacr, rs->regs + ROCKCHIP_SPI_DMACR);
516 +
517 + spi_set_clk(rs, div);
518 +
519 +
dev_dbg(rs->dev, "cr0 0x%x, div %d\n", cr0, div); 520 + 521 + spi_enable_chip(rs, 1); 522 + } 523 + 524 + static int rockchip_spi_transfer_one( 525 + struct spi_master *master, 526 + struct spi_device *spi, 527 + struct spi_transfer *xfer) 528 + { 529 + int ret = 0; 530 + struct rockchip_spi *rs = spi_master_get_devdata(master); 531 + 532 + WARN_ON((readl_relaxed(rs->regs + ROCKCHIP_SPI_SR) & SR_BUSY)); 533 + 534 + if (!xfer->tx_buf && !xfer->rx_buf) { 535 + dev_err(rs->dev, "No buffer for transfer\n"); 536 + return -EINVAL; 537 + } 538 + 539 + rs->speed = xfer->speed_hz; 540 + rs->bpw = xfer->bits_per_word; 541 + rs->n_bytes = rs->bpw >> 3; 542 + 543 + rs->tx = xfer->tx_buf; 544 + rs->tx_end = rs->tx + xfer->len; 545 + rs->rx = xfer->rx_buf; 546 + rs->rx_end = rs->rx + xfer->len; 547 + rs->len = xfer->len; 548 + 549 + rs->tx_sg = xfer->tx_sg; 550 + rs->rx_sg = xfer->rx_sg; 551 + 552 + if (rs->tx && rs->rx) 553 + rs->tmode = CR0_XFM_TR; 554 + else if (rs->tx) 555 + rs->tmode = CR0_XFM_TO; 556 + else if (rs->rx) 557 + rs->tmode = CR0_XFM_RO; 558 + 559 + if (master->can_dma && master->can_dma(master, spi, xfer)) 560 + rs->use_dma = 1; 561 + else 562 + rs->use_dma = 0; 563 + 564 + rockchip_spi_config(rs); 565 + 566 + if (rs->use_dma) 567 + ret = rockchip_spi_dma_transfer(rs); 568 + else 569 + ret = rockchip_spi_pio_transfer(rs); 570 + 571 + return ret; 572 + } 573 + 574 + static bool rockchip_spi_can_dma(struct spi_master *master, 575 + struct spi_device *spi, 576 + struct spi_transfer *xfer) 577 + { 578 + struct rockchip_spi *rs = spi_master_get_devdata(master); 579 + 580 + return (xfer->len > rs->fifo_len); 581 + } 582 + 583 + static int rockchip_spi_probe(struct platform_device *pdev) 584 + { 585 + int ret = 0; 586 + struct rockchip_spi *rs; 587 + struct spi_master *master; 588 + struct resource *mem; 589 + 590 + master = spi_alloc_master(&pdev->dev, sizeof(struct rockchip_spi)); 591 + if (!master) 592 + return -ENOMEM; 593 + 594 + platform_set_drvdata(pdev, 
master); 595 + 596 + rs = spi_master_get_devdata(master); 597 + memset(rs, 0, sizeof(struct rockchip_spi)); 598 + 599 + /* Get basic io resource and map it */ 600 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 601 + rs->regs = devm_ioremap_resource(&pdev->dev, mem); 602 + if (IS_ERR(rs->regs)) { 603 + ret = PTR_ERR(rs->regs); 604 + goto err_ioremap_resource; 605 + } 606 + 607 + rs->apb_pclk = devm_clk_get(&pdev->dev, "apb_pclk"); 608 + if (IS_ERR(rs->apb_pclk)) { 609 + dev_err(&pdev->dev, "Failed to get apb_pclk\n"); 610 + ret = PTR_ERR(rs->apb_pclk); 611 + goto err_ioremap_resource; 612 + } 613 + 614 + rs->spiclk = devm_clk_get(&pdev->dev, "spiclk"); 615 + if (IS_ERR(rs->spiclk)) { 616 + dev_err(&pdev->dev, "Failed to get spi_pclk\n"); 617 + ret = PTR_ERR(rs->spiclk); 618 + goto err_ioremap_resource; 619 + } 620 + 621 + ret = clk_prepare_enable(rs->apb_pclk); 622 + if (ret) { 623 + dev_err(&pdev->dev, "Failed to enable apb_pclk\n"); 624 + goto err_ioremap_resource; 625 + } 626 + 627 + ret = clk_prepare_enable(rs->spiclk); 628 + if (ret) { 629 + dev_err(&pdev->dev, "Failed to enable spi_clk\n"); 630 + goto err_spiclk_enable; 631 + } 632 + 633 + spi_enable_chip(rs, 0); 634 + 635 + rs->type = SSI_MOTO_SPI; 636 + rs->master = master; 637 + rs->dev = &pdev->dev; 638 + rs->max_freq = clk_get_rate(rs->spiclk); 639 + 640 + rs->fifo_len = get_fifo_len(rs); 641 + if (!rs->fifo_len) { 642 + dev_err(&pdev->dev, "Failed to get fifo length\n"); 643 + ret = -EINVAL; 644 + goto err_get_fifo_len; 645 + } 646 + 647 + spin_lock_init(&rs->lock); 648 + 649 + pm_runtime_set_active(&pdev->dev); 650 + pm_runtime_enable(&pdev->dev); 651 + 652 + master->auto_runtime_pm = true; 653 + master->bus_num = pdev->id; 654 + master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LOOP; 655 + master->num_chipselect = 2; 656 + master->dev.of_node = pdev->dev.of_node; 657 + master->bits_per_word_mask = SPI_BPW_MASK(16) | SPI_BPW_MASK(8); 658 + 659 + master->set_cs = rockchip_spi_set_cs; 660 + 
master->prepare_message = rockchip_spi_prepare_message;
661 + master->unprepare_message = rockchip_spi_unprepare_message;
662 + master->transfer_one = rockchip_spi_transfer_one;
663 +
664 + rs->dma_tx.ch = dma_request_slave_channel(rs->dev, "tx");
665 + if (!rs->dma_tx.ch)
666 + dev_warn(rs->dev, "Failed to request TX DMA channel\n");
667 +
668 + rs->dma_rx.ch = dma_request_slave_channel(rs->dev, "rx");
669 + if (!rs->dma_rx.ch) {
670 + if (rs->dma_tx.ch) {
671 + dma_release_channel(rs->dma_tx.ch);
672 + rs->dma_tx.ch = NULL;
673 + }
674 + dev_warn(rs->dev, "Failed to request RX DMA channel\n");
675 + }
676 +
677 + if (rs->dma_tx.ch && rs->dma_rx.ch) {
678 + rs->dma_tx.addr = (dma_addr_t)(mem->start + ROCKCHIP_SPI_TXDR);
679 + rs->dma_rx.addr = (dma_addr_t)(mem->start + ROCKCHIP_SPI_RXDR);
680 + rs->dma_tx.direction = DMA_MEM_TO_DEV;
681 + rs->dma_rx.direction = DMA_DEV_TO_MEM;
682 +
683 + master->can_dma = rockchip_spi_can_dma;
684 + master->dma_tx = rs->dma_tx.ch;
685 + master->dma_rx = rs->dma_rx.ch;
686 + }
687 +
688 + ret = devm_spi_register_master(&pdev->dev, master);
689 + if (ret) {
690 + dev_err(&pdev->dev, "Failed to register master\n");
691 + goto err_register_master;
692 + }
693 +
694 + return 0;
695 +
696 + err_register_master:
697 + if (rs->dma_tx.ch)
698 + dma_release_channel(rs->dma_tx.ch);
699 + if (rs->dma_rx.ch)
700 + dma_release_channel(rs->dma_rx.ch);
701 + err_get_fifo_len:
702 + clk_disable_unprepare(rs->spiclk);
703 + err_spiclk_enable:
704 + clk_disable_unprepare(rs->apb_pclk);
705 + err_ioremap_resource:
706 + spi_master_put(master);
707 +
708 + return ret;
709 + }
710 +
711 + static int rockchip_spi_remove(struct platform_device *pdev)
712 + {
713 + struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
714 + struct rockchip_spi *rs = spi_master_get_devdata(master);
715 +
716 + pm_runtime_disable(&pdev->dev);
717 +
718 + clk_disable_unprepare(rs->spiclk);
719 + clk_disable_unprepare(rs->apb_pclk);
720 +
721 + if
(rs->dma_tx.ch) 722 + dma_release_channel(rs->dma_tx.ch); 723 + if (rs->dma_rx.ch) 724 + dma_release_channel(rs->dma_rx.ch); 725 + 726 + spi_master_put(master); 727 + 728 + return 0; 729 + } 730 + 731 + #ifdef CONFIG_PM_SLEEP 732 + static int rockchip_spi_suspend(struct device *dev) 733 + { 734 + int ret = 0; 735 + struct spi_master *master = dev_get_drvdata(dev); 736 + struct rockchip_spi *rs = spi_master_get_devdata(master); 737 + 738 + ret = spi_master_suspend(rs->master); 739 + if (ret) 740 + return ret; 741 + 742 + if (!pm_runtime_suspended(dev)) { 743 + clk_disable_unprepare(rs->spiclk); 744 + clk_disable_unprepare(rs->apb_pclk); 745 + } 746 + 747 + return ret; 748 + } 749 + 750 + static int rockchip_spi_resume(struct device *dev) 751 + { 752 + int ret = 0; 753 + struct spi_master *master = dev_get_drvdata(dev); 754 + struct rockchip_spi *rs = spi_master_get_devdata(master); 755 + 756 + if (!pm_runtime_suspended(dev)) { 757 + ret = clk_prepare_enable(rs->apb_pclk); 758 + if (ret < 0) 759 + return ret; 760 + 761 + ret = clk_prepare_enable(rs->spiclk); 762 + if (ret < 0) { 763 + clk_disable_unprepare(rs->apb_pclk); 764 + return ret; 765 + } 766 + } 767 + 768 + ret = spi_master_resume(rs->master); 769 + if (ret < 0) { 770 + clk_disable_unprepare(rs->spiclk); 771 + clk_disable_unprepare(rs->apb_pclk); 772 + } 773 + 774 + return ret; 775 + } 776 + #endif /* CONFIG_PM_SLEEP */ 777 + 778 + #ifdef CONFIG_PM_RUNTIME 779 + static int rockchip_spi_runtime_suspend(struct device *dev) 780 + { 781 + struct spi_master *master = dev_get_drvdata(dev); 782 + struct rockchip_spi *rs = spi_master_get_devdata(master); 783 + 784 + clk_disable_unprepare(rs->spiclk); 785 + clk_disable_unprepare(rs->apb_pclk); 786 + 787 + return 0; 788 + } 789 + 790 + static int rockchip_spi_runtime_resume(struct device *dev) 791 + { 792 + int ret; 793 + struct spi_master *master = dev_get_drvdata(dev); 794 + struct rockchip_spi *rs = spi_master_get_devdata(master); 795 + 796 + ret = 
clk_prepare_enable(rs->apb_pclk); 797 + if (ret) 798 + return ret; 799 + 800 + ret = clk_prepare_enable(rs->spiclk); 801 + if (ret) 802 + clk_disable_unprepare(rs->apb_pclk); 803 + 804 + return ret; 805 + } 806 + #endif /* CONFIG_PM_RUNTIME */ 807 + 808 + static const struct dev_pm_ops rockchip_spi_pm = { 809 + SET_SYSTEM_SLEEP_PM_OPS(rockchip_spi_suspend, rockchip_spi_resume) 810 + SET_RUNTIME_PM_OPS(rockchip_spi_runtime_suspend, 811 + rockchip_spi_runtime_resume, NULL) 812 + }; 813 + 814 + static const struct of_device_id rockchip_spi_dt_match[] = { 815 + { .compatible = "rockchip,rk3066-spi", }, 816 + { .compatible = "rockchip,rk3188-spi", }, 817 + { .compatible = "rockchip,rk3288-spi", }, 818 + { }, 819 + }; 820 + MODULE_DEVICE_TABLE(of, rockchip_spi_dt_match); 821 + 822 + static struct platform_driver rockchip_spi_driver = { 823 + .driver = { 824 + .name = DRIVER_NAME, 825 + .owner = THIS_MODULE, 826 + .pm = &rockchip_spi_pm, 827 + .of_match_table = of_match_ptr(rockchip_spi_dt_match), 828 + }, 829 + .probe = rockchip_spi_probe, 830 + .remove = rockchip_spi_remove, 831 + }; 832 + 833 + module_platform_driver(rockchip_spi_driver); 834 + 835 + MODULE_AUTHOR("Addy Ke <addy.ke@rock-chips.com>"); 836 + MODULE_DESCRIPTION("ROCKCHIP SPI Controller Driver"); 837 + MODULE_LICENSE("GPL v2");
+29 -16
drivers/spi/spi-rspi.c
··· 477 477 tx->sgl, tx->nents, DMA_TO_DEVICE, 478 478 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 479 479 if (!desc_tx) 480 - return -EIO; 480 + goto no_dma; 481 481 482 482 irq_mask |= SPCR_SPTIE; 483 483 } ··· 486 486 rx->sgl, rx->nents, DMA_FROM_DEVICE, 487 487 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 488 488 if (!desc_rx) 489 - return -EIO; 489 + goto no_dma; 490 490 491 491 irq_mask |= SPCR_SPRIE; 492 492 } ··· 540 540 enable_irq(rspi->rx_irq); 541 541 542 542 return ret; 543 + 544 + no_dma: 545 + pr_warn_once("%s %s: DMA not available, falling back to PIO\n", 546 + dev_driver_string(&rspi->master->dev), 547 + dev_name(&rspi->master->dev)); 548 + return -EAGAIN; 543 549 } 544 550 545 551 static void rspi_receive_init(const struct rspi_data *rspi) ··· 599 593 600 594 if (rspi->master->can_dma && __rspi_can_dma(rspi, xfer)) { 601 595 /* rx_buf can be NULL on RSPI on SH in TX-only Mode */ 602 - return rspi_dma_transfer(rspi, &xfer->tx_sg, 603 - xfer->rx_buf ? &xfer->rx_sg : NULL); 596 + ret = rspi_dma_transfer(rspi, &xfer->tx_sg, 597 + xfer->rx_buf ? 
&xfer->rx_sg : NULL); 598 + if (ret != -EAGAIN) 599 + return ret; 604 600 } 605 601 606 602 ret = rspi_pio_transfer(rspi, xfer->tx_buf, xfer->rx_buf, xfer->len); ··· 638 630 struct spi_transfer *xfer) 639 631 { 640 632 struct rspi_data *rspi = spi_master_get_devdata(master); 641 - int ret; 642 633 643 634 rspi_rz_receive_init(rspi); 644 635 ··· 656 649 { 657 650 int ret; 658 651 659 - if (rspi->master->can_dma && __rspi_can_dma(rspi, xfer)) 660 - return rspi_dma_transfer(rspi, &xfer->tx_sg, NULL); 652 + if (rspi->master->can_dma && __rspi_can_dma(rspi, xfer)) { 653 + ret = rspi_dma_transfer(rspi, &xfer->tx_sg, NULL); 654 + if (ret != -EAGAIN) 655 + return ret; 656 + } 661 657 662 658 ret = rspi_pio_transfer(rspi, xfer->tx_buf, NULL, xfer->len); 663 659 if (ret < 0) ··· 674 664 675 665 static int qspi_transfer_in(struct rspi_data *rspi, struct spi_transfer *xfer) 676 666 { 677 - if (rspi->master->can_dma && __rspi_can_dma(rspi, xfer)) 678 - return rspi_dma_transfer(rspi, NULL, &xfer->rx_sg); 667 + if (rspi->master->can_dma && __rspi_can_dma(rspi, xfer)) { 668 + int ret = rspi_dma_transfer(rspi, NULL, &xfer->rx_sg); 669 + if (ret != -EAGAIN) 670 + return ret; 671 + } 679 672 680 673 return rspi_pio_transfer(rspi, NULL, xfer->rx_buf, xfer->len); 681 674 } ··· 940 927 return 0; 941 928 } 942 929 943 - static void rspi_release_dma(struct rspi_data *rspi) 930 + static void rspi_release_dma(struct spi_master *master) 944 931 { 945 - if (rspi->master->dma_tx) 946 - dma_release_channel(rspi->master->dma_tx); 947 - if (rspi->master->dma_rx) 948 - dma_release_channel(rspi->master->dma_rx); 932 + if (master->dma_tx) 933 + dma_release_channel(master->dma_tx); 934 + if (master->dma_rx) 935 + dma_release_channel(master->dma_rx); 949 936 } 950 937 951 938 static int rspi_remove(struct platform_device *pdev) 952 939 { 953 940 struct rspi_data *rspi = platform_get_drvdata(pdev); 954 941 955 - rspi_release_dma(rspi); 942 + rspi_release_dma(rspi->master); 956 943 
pm_runtime_disable(&pdev->dev); 957 944 958 945 return 0; ··· 1154 1141 return 0; 1155 1142 1156 1143 error3: 1157 - rspi_release_dma(rspi); 1144 + rspi_release_dma(master); 1158 1145 error2: 1159 1146 pm_runtime_disable(&pdev->dev); 1160 1147 error1:
+24 -30
drivers/spi/spi-s3c64xx.c
··· 197 197 struct s3c64xx_spi_dma_data tx_dma;
198 198 struct s3c64xx_spi_port_config *port_conf;
199 199 unsigned int port_id;
200 - bool cs_gpio;
201 200 };
202 201
203 202 static void flush_fifo(struct s3c64xx_spi_driver_data *sdd)
··· 753 754 {
754 755 struct s3c64xx_spi_csinfo *cs;
755 756 struct device_node *slave_np, *data_np = NULL;
756 - struct s3c64xx_spi_driver_data *sdd;
757 757 u32 fb_delay = 0;
758 758
759 - sdd = spi_master_get_devdata(spi->master);
760 759 slave_np = spi->dev.of_node;
761 760 if (!slave_np) {
762 761 dev_err(&spi->dev, "device node not found\n");
··· 771 774 if (!cs) {
772 775 of_node_put(data_np);
773 776 return ERR_PTR(-ENOMEM);
774 - }
775 -
776 - /* The CS line is asserted/deasserted by the gpio pin */
777 - if (sdd->cs_gpio)
778 - cs->line = of_get_named_gpio(data_np, "cs-gpio", 0);
779 -
780 - if (!gpio_is_valid(cs->line)) {
781 - dev_err(&spi->dev, "chip select gpio is not specified or invalid\n");
782 - kfree(cs);
783 - of_node_put(data_np);
784 - return ERR_PTR(-EINVAL);
785 777 }
786 778
787 779 of_property_read_u32(data_np, "samsung,spi-feedback-delay", &fb_delay);
··· 793 807 int err;
794 808
795 809 sdd = spi_master_get_devdata(spi->master);
796 - if (!cs && spi->dev.of_node) {
810 + if (spi->dev.of_node) {
797 811 cs = s3c64xx_get_slave_ctrldata(spi);
798 812 spi->controller_data = cs;
813 + } else if (cs) {
814 + /* On non-DT platforms the SPI core will set spi->cs_gpio
815 + * to -ENOENT. The GPIO pin used to drive the chip select
816 + * is defined via platform data, so the spi->cs_gpio value
817 + * has to be overridden with the proper GPIO pin number.
818 + */ 819 + spi->cs_gpio = cs->line; 799 820 } 800 821 801 822 if (IS_ERR_OR_NULL(cs)) { ··· 811 818 } 812 819 813 820 if (!spi_get_ctldata(spi)) { 814 - /* Request gpio only if cs line is asserted by gpio pins */ 815 - if (sdd->cs_gpio) { 816 - err = gpio_request_one(cs->line, GPIOF_OUT_INIT_HIGH, 817 - dev_name(&spi->dev)); 821 + if (gpio_is_valid(spi->cs_gpio)) { 822 + err = gpio_request_one(spi->cs_gpio, GPIOF_OUT_INIT_HIGH, 823 + dev_name(&spi->dev)); 818 824 if (err) { 819 825 dev_err(&spi->dev, 820 826 "Failed to get /CS gpio [%d]: %d\n", 821 - cs->line, err); 827 + spi->cs_gpio, err); 822 828 goto err_gpio_req; 823 829 } 824 - 825 - spi->cs_gpio = cs->line; 826 830 } 827 831 828 832 spi_set_ctldata(spi, cs); ··· 874 884 /* setup() returns with device de-selected */ 875 885 writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL); 876 886 877 - gpio_free(cs->line); 887 + if (gpio_is_valid(spi->cs_gpio)) 888 + gpio_free(spi->cs_gpio); 878 889 spi_set_ctldata(spi, NULL); 879 890 880 891 err_gpio_req: ··· 888 897 static void s3c64xx_spi_cleanup(struct spi_device *spi) 889 898 { 890 899 struct s3c64xx_spi_csinfo *cs = spi_get_ctldata(spi); 891 - struct s3c64xx_spi_driver_data *sdd; 892 900 893 - sdd = spi_master_get_devdata(spi->master); 894 - if (spi->cs_gpio) { 901 + if (gpio_is_valid(spi->cs_gpio)) { 895 902 gpio_free(spi->cs_gpio); 896 903 if (spi->dev.of_node) 897 904 kfree(cs); 905 + else { 906 + /* On non-DT platforms, the SPI core sets 907 + * spi->cs_gpio to -ENOENT and .setup() 908 + * overrides it with the GPIO pin value 909 + * passed using platform data. 
910 + */ 911 + spi->cs_gpio = -ENOENT; 912 + } 898 913 } 914 + 899 915 spi_set_ctldata(spi, NULL); 900 916 } 901 917 ··· 1073 1075 sdd->cntrlr_info = sci; 1074 1076 sdd->pdev = pdev; 1075 1077 sdd->sfr_start = mem_res->start; 1076 - sdd->cs_gpio = true; 1077 1078 if (pdev->dev.of_node) { 1078 - if (!of_find_property(pdev->dev.of_node, "cs-gpio", NULL)) 1079 - sdd->cs_gpio = false; 1080 - 1081 1079 ret = of_alias_get_id(pdev->dev.of_node, "spi"); 1082 1080 if (ret < 0) { 1083 1081 dev_err(&pdev->dev, "failed to get alias id, errno %d\n",
+1 -1
drivers/spi/spi-sh-hspi.c
···
 304 304 	return 0;
 305 305 }
 306 306 
 307 - static struct of_device_id hspi_of_match[] = {
 307 + static const struct of_device_id hspi_of_match[] = {
 308 308 	{ .compatible = "renesas,hspi", },
 309 309 	{ /* sentinel */ }
 310 310 };
+477 -48
drivers/spi/spi-sh-msiof.c
··· 2 2 * SuperH MSIOF SPI Master Interface 3 3 * 4 4 * Copyright (c) 2009 Magnus Damm 5 + * Copyright (C) 2014 Glider bvba 5 6 * 6 7 * This program is free software; you can redistribute it and/or modify 7 8 * it under the terms of the GNU General Public License version 2 as ··· 14 13 #include <linux/clk.h> 15 14 #include <linux/completion.h> 16 15 #include <linux/delay.h> 16 + #include <linux/dma-mapping.h> 17 + #include <linux/dmaengine.h> 17 18 #include <linux/err.h> 18 19 #include <linux/gpio.h> 19 20 #include <linux/interrupt.h> ··· 26 23 #include <linux/of_device.h> 27 24 #include <linux/platform_device.h> 28 25 #include <linux/pm_runtime.h> 26 + #include <linux/sh_dma.h> 29 27 30 28 #include <linux/spi/sh_msiof.h> 31 29 #include <linux/spi/spi.h> ··· 41 37 }; 42 38 43 39 struct sh_msiof_spi_priv { 40 + struct spi_master *master; 44 41 void __iomem *mapbase; 45 42 struct clk *clk; 46 43 struct platform_device *pdev; ··· 50 45 struct completion done; 51 46 int tx_fifo_size; 52 47 int rx_fifo_size; 48 + void *tx_dma_page; 49 + void *rx_dma_page; 50 + dma_addr_t tx_dma_addr; 51 + dma_addr_t rx_dma_addr; 53 52 }; 54 53 55 54 #define TMDR1 0x00 /* Transmit Mode Register 1 */ ··· 93 84 #define MDR2_WDLEN1(i) (((i) - 1) << 16) /* Word Count (1-64/256 (SH, A1))) */ 94 85 #define MDR2_GRPMASK1 0x00000001 /* Group Output Mask 1 (SH, A1) */ 95 86 87 + #define MAX_WDLEN 256U 88 + 96 89 /* TSCR and RSCR */ 97 90 #define SCR_BRPS_MASK 0x1f00 /* Prescaler Setting (1-32) */ 98 91 #define SCR_BRPS(i) (((i) - 1) << 8) ··· 124 113 #define CTR_TXE 0x00000200 /* Transmit Enable */ 125 114 #define CTR_RXE 0x00000100 /* Receive Enable */ 126 115 127 - /* STR and IER */ 116 + /* FCTR */ 117 + #define FCTR_TFWM_MASK 0xe0000000 /* Transmit FIFO Watermark */ 118 + #define FCTR_TFWM_64 0x00000000 /* Transfer Request when 64 empty stages */ 119 + #define FCTR_TFWM_32 0x20000000 /* Transfer Request when 32 empty stages */ 120 + #define FCTR_TFWM_24 0x40000000 /* Transfer Request when 24 
empty stages */ 121 + #define FCTR_TFWM_16 0x60000000 /* Transfer Request when 16 empty stages */ 122 + #define FCTR_TFWM_12 0x80000000 /* Transfer Request when 12 empty stages */ 123 + #define FCTR_TFWM_8 0xa0000000 /* Transfer Request when 8 empty stages */ 124 + #define FCTR_TFWM_4 0xc0000000 /* Transfer Request when 4 empty stages */ 125 + #define FCTR_TFWM_1 0xe0000000 /* Transfer Request when 1 empty stage */ 126 + #define FCTR_TFUA_MASK 0x07f00000 /* Transmit FIFO Usable Area */ 127 + #define FCTR_TFUA_SHIFT 20 128 + #define FCTR_TFUA(i) ((i) << FCTR_TFUA_SHIFT) 129 + #define FCTR_RFWM_MASK 0x0000e000 /* Receive FIFO Watermark */ 130 + #define FCTR_RFWM_1 0x00000000 /* Transfer Request when 1 valid stages */ 131 + #define FCTR_RFWM_4 0x00002000 /* Transfer Request when 4 valid stages */ 132 + #define FCTR_RFWM_8 0x00004000 /* Transfer Request when 8 valid stages */ 133 + #define FCTR_RFWM_16 0x00006000 /* Transfer Request when 16 valid stages */ 134 + #define FCTR_RFWM_32 0x00008000 /* Transfer Request when 32 valid stages */ 135 + #define FCTR_RFWM_64 0x0000a000 /* Transfer Request when 64 valid stages */ 136 + #define FCTR_RFWM_128 0x0000c000 /* Transfer Request when 128 valid stages */ 137 + #define FCTR_RFWM_256 0x0000e000 /* Transfer Request when 256 valid stages */ 138 + #define FCTR_RFUA_MASK 0x00001ff0 /* Receive FIFO Usable Area (0x40 = full) */ 139 + #define FCTR_RFUA_SHIFT 4 140 + #define FCTR_RFUA(i) ((i) << FCTR_RFUA_SHIFT) 141 + 142 + /* STR */ 143 + #define STR_TFEMP 0x20000000 /* Transmit FIFO Empty */ 144 + #define STR_TDREQ 0x10000000 /* Transmit Data Transfer Request */ 128 145 #define STR_TEOF 0x00800000 /* Frame Transmission End */ 146 + #define STR_TFSERR 0x00200000 /* Transmit Frame Synchronization Error */ 147 + #define STR_TFOVF 0x00100000 /* Transmit FIFO Overflow */ 148 + #define STR_TFUDF 0x00080000 /* Transmit FIFO Underflow */ 149 + #define STR_RFFUL 0x00002000 /* Receive FIFO Full */ 150 + #define STR_RDREQ 0x00001000 /* 
Receive Data Transfer Request */ 129 151 #define STR_REOF 0x00000080 /* Frame Reception End */ 152 + #define STR_RFSERR 0x00000020 /* Receive Frame Synchronization Error */ 153 + #define STR_RFUDF 0x00000010 /* Receive FIFO Underflow */ 154 + #define STR_RFOVF 0x00000008 /* Receive FIFO Overflow */ 155 + 156 + /* IER */ 157 + #define IER_TDMAE 0x80000000 /* Transmit Data DMA Transfer Req. Enable */ 158 + #define IER_TFEMPE 0x20000000 /* Transmit FIFO Empty Enable */ 159 + #define IER_TDREQE 0x10000000 /* Transmit Data Transfer Request Enable */ 160 + #define IER_TEOFE 0x00800000 /* Frame Transmission End Enable */ 161 + #define IER_TFSERRE 0x00200000 /* Transmit Frame Sync Error Enable */ 162 + #define IER_TFOVFE 0x00100000 /* Transmit FIFO Overflow Enable */ 163 + #define IER_TFUDFE 0x00080000 /* Transmit FIFO Underflow Enable */ 164 + #define IER_RDMAE 0x00008000 /* Receive Data DMA Transfer Req. Enable */ 165 + #define IER_RFFULE 0x00002000 /* Receive FIFO Full Enable */ 166 + #define IER_RDREQE 0x00001000 /* Receive Data Transfer Request Enable */ 167 + #define IER_REOFE 0x00000080 /* Frame Reception End Enable */ 168 + #define IER_RFSERRE 0x00000020 /* Receive Frame Sync Error Enable */ 169 + #define IER_RFUDFE 0x00000010 /* Receive FIFO Underflow Enable */ 170 + #define IER_RFOVFE 0x00000008 /* Receive FIFO Overflow Enable */ 130 171 131 172 132 173 static u32 sh_msiof_read(struct sh_msiof_spi_priv *p, int reg_offs) ··· 293 230 * 1 0 11 11 0 0 294 231 * 1 1 11 11 1 1 295 232 */ 296 - sh_msiof_write(p, FCTR, 0); 297 - 298 233 tmp = MDR1_SYNCMD_SPI | 1 << MDR1_FLD_SHIFT | MDR1_XXSTP; 299 234 tmp |= !cs_high << MDR1_SYNCAC_SHIFT; 300 235 tmp |= lsb_first << MDR1_BITLSB_SHIFT; ··· 328 267 329 268 if (rx_buf) 330 269 sh_msiof_write(p, RMDR2, dr2); 331 - 332 - sh_msiof_write(p, IER, STR_TEOF | STR_REOF); 333 270 } 334 271 335 272 static void sh_msiof_reset_str(struct sh_msiof_spi_priv *p) ··· 516 457 return 0; 517 458 } 518 459 460 + static int 
sh_msiof_spi_start(struct sh_msiof_spi_priv *p, void *rx_buf) 461 + { 462 + int ret; 463 + 464 + /* setup clock and rx/tx signals */ 465 + ret = sh_msiof_modify_ctr_wait(p, 0, CTR_TSCKE); 466 + if (rx_buf && !ret) 467 + ret = sh_msiof_modify_ctr_wait(p, 0, CTR_RXE); 468 + if (!ret) 469 + ret = sh_msiof_modify_ctr_wait(p, 0, CTR_TXE); 470 + 471 + /* start by setting frame bit */ 472 + if (!ret) 473 + ret = sh_msiof_modify_ctr_wait(p, 0, CTR_TFSE); 474 + 475 + return ret; 476 + } 477 + 478 + static int sh_msiof_spi_stop(struct sh_msiof_spi_priv *p, void *rx_buf) 479 + { 480 + int ret; 481 + 482 + /* shut down frame, rx/tx and clock signals */ 483 + ret = sh_msiof_modify_ctr_wait(p, CTR_TFSE, 0); 484 + if (!ret) 485 + ret = sh_msiof_modify_ctr_wait(p, CTR_TXE, 0); 486 + if (rx_buf && !ret) 487 + ret = sh_msiof_modify_ctr_wait(p, CTR_RXE, 0); 488 + if (!ret) 489 + ret = sh_msiof_modify_ctr_wait(p, CTR_TSCKE, 0); 490 + 491 + return ret; 492 + } 493 + 519 494 static int sh_msiof_spi_txrx_once(struct sh_msiof_spi_priv *p, 520 495 void (*tx_fifo)(struct sh_msiof_spi_priv *, 521 496 const void *, int, int), ··· 570 477 /* the fifo contents need shifting */ 571 478 fifo_shift = 32 - bits; 572 479 480 + /* default FIFO watermarks for PIO */ 481 + sh_msiof_write(p, FCTR, 0); 482 + 573 483 /* setup msiof transfer mode registers */ 574 484 sh_msiof_spi_set_mode_regs(p, tx_buf, rx_buf, bits, words); 485 + sh_msiof_write(p, IER, IER_TEOFE | IER_REOFE); 575 486 576 487 /* write tx fifo */ 577 488 if (tx_buf) 578 489 tx_fifo(p, tx_buf, words, fifo_shift); 579 490 580 - /* setup clock and rx/tx signals */ 581 - ret = sh_msiof_modify_ctr_wait(p, 0, CTR_TSCKE); 582 - if (rx_buf) 583 - ret = ret ? ret : sh_msiof_modify_ctr_wait(p, 0, CTR_RXE); 584 - ret = ret ? ret : sh_msiof_modify_ctr_wait(p, 0, CTR_TXE); 585 - 586 - /* start by setting frame bit */ 587 491 reinit_completion(&p->done); 588 - ret = ret ? 
ret : sh_msiof_modify_ctr_wait(p, 0, CTR_TFSE); 492 + 493 + ret = sh_msiof_spi_start(p, rx_buf); 589 494 if (ret) { 590 495 dev_err(&p->pdev->dev, "failed to start hardware\n"); 591 - goto err; 496 + goto stop_ier; 592 497 } 593 498 594 499 /* wait for tx fifo to be emptied / rx fifo to be filled */ 595 - wait_for_completion(&p->done); 500 + ret = wait_for_completion_timeout(&p->done, HZ); 501 + if (!ret) { 502 + dev_err(&p->pdev->dev, "PIO timeout\n"); 503 + ret = -ETIMEDOUT; 504 + goto stop_reset; 505 + } 596 506 597 507 /* read rx fifo */ 598 508 if (rx_buf) ··· 604 508 /* clear status bits */ 605 509 sh_msiof_reset_str(p); 606 510 607 - /* shut down frame, rx/tx and clock signals */ 608 - ret = sh_msiof_modify_ctr_wait(p, CTR_TFSE, 0); 609 - ret = ret ? ret : sh_msiof_modify_ctr_wait(p, CTR_TXE, 0); 610 - if (rx_buf) 611 - ret = ret ? ret : sh_msiof_modify_ctr_wait(p, CTR_RXE, 0); 612 - ret = ret ? ret : sh_msiof_modify_ctr_wait(p, CTR_TSCKE, 0); 511 + ret = sh_msiof_spi_stop(p, rx_buf); 613 512 if (ret) { 614 513 dev_err(&p->pdev->dev, "failed to shut down hardware\n"); 615 - goto err; 514 + return ret; 616 515 } 617 516 618 517 return words; 619 518 620 - err: 519 + stop_reset: 520 + sh_msiof_reset_str(p); 521 + sh_msiof_spi_stop(p, rx_buf); 522 + stop_ier: 621 523 sh_msiof_write(p, IER, 0); 622 524 return ret; 525 + } 526 + 527 + static void sh_msiof_dma_complete(void *arg) 528 + { 529 + struct sh_msiof_spi_priv *p = arg; 530 + 531 + sh_msiof_write(p, IER, 0); 532 + complete(&p->done); 533 + } 534 + 535 + static int sh_msiof_dma_once(struct sh_msiof_spi_priv *p, const void *tx, 536 + void *rx, unsigned int len) 537 + { 538 + u32 ier_bits = 0; 539 + struct dma_async_tx_descriptor *desc_tx = NULL, *desc_rx = NULL; 540 + dma_cookie_t cookie; 541 + int ret; 542 + 543 + if (tx) { 544 + ier_bits |= IER_TDREQE | IER_TDMAE; 545 + dma_sync_single_for_device(p->master->dma_tx->device->dev, 546 + p->tx_dma_addr, len, DMA_TO_DEVICE); 547 + desc_tx = 
dmaengine_prep_slave_single(p->master->dma_tx, 548 + p->tx_dma_addr, len, DMA_TO_DEVICE, 549 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 550 + if (!desc_tx) 551 + return -EAGAIN; 552 + } 553 + 554 + if (rx) { 555 + ier_bits |= IER_RDREQE | IER_RDMAE; 556 + desc_rx = dmaengine_prep_slave_single(p->master->dma_rx, 557 + p->rx_dma_addr, len, DMA_FROM_DEVICE, 558 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 559 + if (!desc_rx) 560 + return -EAGAIN; 561 + } 562 + 563 + /* 1 stage FIFO watermarks for DMA */ 564 + sh_msiof_write(p, FCTR, FCTR_TFWM_1 | FCTR_RFWM_1); 565 + 566 + /* setup msiof transfer mode registers (32-bit words) */ 567 + sh_msiof_spi_set_mode_regs(p, tx, rx, 32, len / 4); 568 + 569 + sh_msiof_write(p, IER, ier_bits); 570 + 571 + reinit_completion(&p->done); 572 + 573 + if (rx) { 574 + desc_rx->callback = sh_msiof_dma_complete; 575 + desc_rx->callback_param = p; 576 + cookie = dmaengine_submit(desc_rx); 577 + if (dma_submit_error(cookie)) { 578 + ret = cookie; 579 + goto stop_ier; 580 + } 581 + dma_async_issue_pending(p->master->dma_rx); 582 + } 583 + 584 + if (tx) { 585 + if (rx) { 586 + /* No callback */ 587 + desc_tx->callback = NULL; 588 + } else { 589 + desc_tx->callback = sh_msiof_dma_complete; 590 + desc_tx->callback_param = p; 591 + } 592 + cookie = dmaengine_submit(desc_tx); 593 + if (dma_submit_error(cookie)) { 594 + ret = cookie; 595 + goto stop_rx; 596 + } 597 + dma_async_issue_pending(p->master->dma_tx); 598 + } 599 + 600 + ret = sh_msiof_spi_start(p, rx); 601 + if (ret) { 602 + dev_err(&p->pdev->dev, "failed to start hardware\n"); 603 + goto stop_tx; 604 + } 605 + 606 + /* wait for tx fifo to be emptied / rx fifo to be filled */ 607 + ret = wait_for_completion_timeout(&p->done, HZ); 608 + if (!ret) { 609 + dev_err(&p->pdev->dev, "DMA timeout\n"); 610 + ret = -ETIMEDOUT; 611 + goto stop_reset; 612 + } 613 + 614 + /* clear status bits */ 615 + sh_msiof_reset_str(p); 616 + 617 + ret = sh_msiof_spi_stop(p, rx); 618 + if (ret) { 619 + dev_err(&p->pdev->dev, 
"failed to shut down hardware\n"); 620 + return ret; 621 + } 622 + 623 + if (rx) 624 + dma_sync_single_for_cpu(p->master->dma_rx->device->dev, 625 + p->rx_dma_addr, len, 626 + DMA_FROM_DEVICE); 627 + 628 + return 0; 629 + 630 + stop_reset: 631 + sh_msiof_reset_str(p); 632 + sh_msiof_spi_stop(p, rx); 633 + stop_tx: 634 + if (tx) 635 + dmaengine_terminate_all(p->master->dma_tx); 636 + stop_rx: 637 + if (rx) 638 + dmaengine_terminate_all(p->master->dma_rx); 639 + stop_ier: 640 + sh_msiof_write(p, IER, 0); 641 + return ret; 642 + } 643 + 644 + static void copy_bswap32(u32 *dst, const u32 *src, unsigned int words) 645 + { 646 + /* src or dst can be unaligned, but not both */ 647 + if ((unsigned long)src & 3) { 648 + while (words--) { 649 + *dst++ = swab32(get_unaligned(src)); 650 + src++; 651 + } 652 + } else if ((unsigned long)dst & 3) { 653 + while (words--) { 654 + put_unaligned(swab32(*src++), dst); 655 + dst++; 656 + } 657 + } else { 658 + while (words--) 659 + *dst++ = swab32(*src++); 660 + } 661 + } 662 + 663 + static void copy_wswap32(u32 *dst, const u32 *src, unsigned int words) 664 + { 665 + /* src or dst can be unaligned, but not both */ 666 + if ((unsigned long)src & 3) { 667 + while (words--) { 668 + *dst++ = swahw32(get_unaligned(src)); 669 + src++; 670 + } 671 + } else if ((unsigned long)dst & 3) { 672 + while (words--) { 673 + put_unaligned(swahw32(*src++), dst); 674 + dst++; 675 + } 676 + } else { 677 + while (words--) 678 + *dst++ = swahw32(*src++); 679 + } 680 + } 681 + 682 + static void copy_plain32(u32 *dst, const u32 *src, unsigned int words) 683 + { 684 + memcpy(dst, src, words * 4); 623 685 } 624 686 625 687 static int sh_msiof_transfer_one(struct spi_master *master, ··· 785 531 struct spi_transfer *t) 786 532 { 787 533 struct sh_msiof_spi_priv *p = spi_master_get_devdata(master); 534 + void (*copy32)(u32 *, const u32 *, unsigned int); 788 535 void (*tx_fifo)(struct sh_msiof_spi_priv *, const void *, int, int); 789 536 void (*rx_fifo)(struct 
sh_msiof_spi_priv *, void *, int, int); 790 - int bits; 791 - int bytes_per_word; 792 - int bytes_done; 793 - int words; 537 + const void *tx_buf = t->tx_buf; 538 + void *rx_buf = t->rx_buf; 539 + unsigned int len = t->len; 540 + unsigned int bits = t->bits_per_word; 541 + unsigned int bytes_per_word; 542 + unsigned int words; 794 543 int n; 795 544 bool swab; 545 + int ret; 796 546 797 - bits = t->bits_per_word; 547 + /* setup clocks (clock already enabled in chipselect()) */ 548 + sh_msiof_spi_set_clk_regs(p, clk_get_rate(p->clk), t->speed_hz); 798 549 799 - if (bits <= 8 && t->len > 15 && !(t->len & 3)) { 550 + while (master->dma_tx && len > 15) { 551 + /* 552 + * DMA supports 32-bit words only, hence pack 8-bit and 16-bit 553 + * words, with byte resp. word swapping. 554 + */ 555 + unsigned int l = min(len, MAX_WDLEN * 4); 556 + 557 + if (bits <= 8) { 558 + if (l & 3) 559 + break; 560 + copy32 = copy_bswap32; 561 + } else if (bits <= 16) { 562 + if (l & 1) 563 + break; 564 + copy32 = copy_wswap32; 565 + } else { 566 + copy32 = copy_plain32; 567 + } 568 + 569 + if (tx_buf) 570 + copy32(p->tx_dma_page, tx_buf, l / 4); 571 + 572 + ret = sh_msiof_dma_once(p, tx_buf, rx_buf, l); 573 + if (ret == -EAGAIN) { 574 + pr_warn_once("%s %s: DMA not available, falling back to PIO\n", 575 + dev_driver_string(&p->pdev->dev), 576 + dev_name(&p->pdev->dev)); 577 + break; 578 + } 579 + if (ret) 580 + return ret; 581 + 582 + if (rx_buf) { 583 + copy32(rx_buf, p->rx_dma_page, l / 4); 584 + rx_buf += l; 585 + } 586 + if (tx_buf) 587 + tx_buf += l; 588 + 589 + len -= l; 590 + if (!len) 591 + return 0; 592 + } 593 + 594 + if (bits <= 8 && len > 15 && !(len & 3)) { 800 595 bits = 32; 801 596 swab = true; 802 597 } else { ··· 859 556 rx_fifo = sh_msiof_spi_read_fifo_8; 860 557 } else if (bits <= 16) { 861 558 bytes_per_word = 2; 862 - if ((unsigned long)t->tx_buf & 0x01) 559 + if ((unsigned long)tx_buf & 0x01) 863 560 tx_fifo = sh_msiof_spi_write_fifo_16u; 864 561 else 865 562 tx_fifo = 
sh_msiof_spi_write_fifo_16; 866 563 867 - if ((unsigned long)t->rx_buf & 0x01) 564 + if ((unsigned long)rx_buf & 0x01) 868 565 rx_fifo = sh_msiof_spi_read_fifo_16u; 869 566 else 870 567 rx_fifo = sh_msiof_spi_read_fifo_16; 871 568 } else if (swab) { 872 569 bytes_per_word = 4; 873 - if ((unsigned long)t->tx_buf & 0x03) 570 + if ((unsigned long)tx_buf & 0x03) 874 571 tx_fifo = sh_msiof_spi_write_fifo_s32u; 875 572 else 876 573 tx_fifo = sh_msiof_spi_write_fifo_s32; 877 574 878 - if ((unsigned long)t->rx_buf & 0x03) 575 + if ((unsigned long)rx_buf & 0x03) 879 576 rx_fifo = sh_msiof_spi_read_fifo_s32u; 880 577 else 881 578 rx_fifo = sh_msiof_spi_read_fifo_s32; 882 579 } else { 883 580 bytes_per_word = 4; 884 - if ((unsigned long)t->tx_buf & 0x03) 581 + if ((unsigned long)tx_buf & 0x03) 885 582 tx_fifo = sh_msiof_spi_write_fifo_32u; 886 583 else 887 584 tx_fifo = sh_msiof_spi_write_fifo_32; 888 585 889 - if ((unsigned long)t->rx_buf & 0x03) 586 + if ((unsigned long)rx_buf & 0x03) 890 587 rx_fifo = sh_msiof_spi_read_fifo_32u; 891 588 else 892 589 rx_fifo = sh_msiof_spi_read_fifo_32; 893 590 } 894 591 895 - /* setup clocks (clock already enabled in chipselect()) */ 896 - sh_msiof_spi_set_clk_regs(p, clk_get_rate(p->clk), t->speed_hz); 897 - 898 592 /* transfer in fifo sized chunks */ 899 - words = t->len / bytes_per_word; 900 - bytes_done = 0; 593 + words = len / bytes_per_word; 901 594 902 - while (bytes_done < t->len) { 903 - void *rx_buf = t->rx_buf ? t->rx_buf + bytes_done : NULL; 904 - const void *tx_buf = t->tx_buf ? 
t->tx_buf + bytes_done : NULL; 905 - n = sh_msiof_spi_txrx_once(p, tx_fifo, rx_fifo, 906 - tx_buf, 907 - rx_buf, 595 + while (words > 0) { 596 + n = sh_msiof_spi_txrx_once(p, tx_fifo, rx_fifo, tx_buf, rx_buf, 908 597 words, bits); 909 598 if (n < 0) 910 - break; 599 + return n; 911 600 912 - bytes_done += n * bytes_per_word; 601 + if (tx_buf) 602 + tx_buf += n * bytes_per_word; 603 + if (rx_buf) 604 + rx_buf += n * bytes_per_word; 913 605 words -= n; 914 606 } 915 607 ··· 961 663 } 962 664 #endif 963 665 666 + static struct dma_chan *sh_msiof_request_dma_chan(struct device *dev, 667 + enum dma_transfer_direction dir, unsigned int id, dma_addr_t port_addr) 668 + { 669 + dma_cap_mask_t mask; 670 + struct dma_chan *chan; 671 + struct dma_slave_config cfg; 672 + int ret; 673 + 674 + dma_cap_zero(mask); 675 + dma_cap_set(DMA_SLAVE, mask); 676 + 677 + chan = dma_request_channel(mask, shdma_chan_filter, 678 + (void *)(unsigned long)id); 679 + if (!chan) { 680 + dev_warn(dev, "dma_request_channel failed\n"); 681 + return NULL; 682 + } 683 + 684 + memset(&cfg, 0, sizeof(cfg)); 685 + cfg.slave_id = id; 686 + cfg.direction = dir; 687 + if (dir == DMA_MEM_TO_DEV) 688 + cfg.dst_addr = port_addr; 689 + else 690 + cfg.src_addr = port_addr; 691 + 692 + ret = dmaengine_slave_config(chan, &cfg); 693 + if (ret) { 694 + dev_warn(dev, "dmaengine_slave_config failed %d\n", ret); 695 + dma_release_channel(chan); 696 + return NULL; 697 + } 698 + 699 + return chan; 700 + } 701 + 702 + static int sh_msiof_request_dma(struct sh_msiof_spi_priv *p) 703 + { 704 + struct platform_device *pdev = p->pdev; 705 + struct device *dev = &pdev->dev; 706 + const struct sh_msiof_spi_info *info = dev_get_platdata(dev); 707 + const struct resource *res; 708 + struct spi_master *master; 709 + struct device *tx_dev, *rx_dev; 710 + 711 + if (!info || !info->dma_tx_id || !info->dma_rx_id) 712 + return 0; /* The driver assumes no error */ 713 + 714 + /* The DMA engine uses the second register set, if present */ 
715 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 716 + if (!res) 717 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 718 + 719 + master = p->master; 720 + master->dma_tx = sh_msiof_request_dma_chan(dev, DMA_MEM_TO_DEV, 721 + info->dma_tx_id, 722 + res->start + TFDR); 723 + if (!master->dma_tx) 724 + return -ENODEV; 725 + 726 + master->dma_rx = sh_msiof_request_dma_chan(dev, DMA_DEV_TO_MEM, 727 + info->dma_rx_id, 728 + res->start + RFDR); 729 + if (!master->dma_rx) 730 + goto free_tx_chan; 731 + 732 + p->tx_dma_page = (void *)__get_free_page(GFP_KERNEL | GFP_DMA); 733 + if (!p->tx_dma_page) 734 + goto free_rx_chan; 735 + 736 + p->rx_dma_page = (void *)__get_free_page(GFP_KERNEL | GFP_DMA); 737 + if (!p->rx_dma_page) 738 + goto free_tx_page; 739 + 740 + tx_dev = master->dma_tx->device->dev; 741 + p->tx_dma_addr = dma_map_single(tx_dev, p->tx_dma_page, PAGE_SIZE, 742 + DMA_TO_DEVICE); 743 + if (dma_mapping_error(tx_dev, p->tx_dma_addr)) 744 + goto free_rx_page; 745 + 746 + rx_dev = master->dma_rx->device->dev; 747 + p->rx_dma_addr = dma_map_single(rx_dev, p->rx_dma_page, PAGE_SIZE, 748 + DMA_FROM_DEVICE); 749 + if (dma_mapping_error(rx_dev, p->rx_dma_addr)) 750 + goto unmap_tx_page; 751 + 752 + dev_info(dev, "DMA available"); 753 + return 0; 754 + 755 + unmap_tx_page: 756 + dma_unmap_single(tx_dev, p->tx_dma_addr, PAGE_SIZE, DMA_TO_DEVICE); 757 + free_rx_page: 758 + free_page((unsigned long)p->rx_dma_page); 759 + free_tx_page: 760 + free_page((unsigned long)p->tx_dma_page); 761 + free_rx_chan: 762 + dma_release_channel(master->dma_rx); 763 + free_tx_chan: 764 + dma_release_channel(master->dma_tx); 765 + master->dma_tx = NULL; 766 + return -ENODEV; 767 + } 768 + 769 + static void sh_msiof_release_dma(struct sh_msiof_spi_priv *p) 770 + { 771 + struct spi_master *master = p->master; 772 + struct device *dev; 773 + 774 + if (!master->dma_tx) 775 + return; 776 + 777 + dev = &p->pdev->dev; 778 + dma_unmap_single(master->dma_rx->device->dev, 
p->rx_dma_addr, 779 + PAGE_SIZE, DMA_FROM_DEVICE); 780 + dma_unmap_single(master->dma_tx->device->dev, p->tx_dma_addr, 781 + PAGE_SIZE, DMA_TO_DEVICE); 782 + free_page((unsigned long)p->rx_dma_page); 783 + free_page((unsigned long)p->tx_dma_page); 784 + dma_release_channel(master->dma_rx); 785 + dma_release_channel(master->dma_tx); 786 + } 787 + 964 788 static int sh_msiof_spi_probe(struct platform_device *pdev) 965 789 { 966 790 struct resource *r; ··· 1101 681 p = spi_master_get_devdata(master); 1102 682 1103 683 platform_set_drvdata(pdev, p); 684 + p->master = master; 1104 685 1105 686 of_id = of_match_device(sh_msiof_match, &pdev->dev); 1106 687 if (of_id) { ··· 1172 751 master->auto_runtime_pm = true; 1173 752 master->transfer_one = sh_msiof_transfer_one; 1174 753 754 + ret = sh_msiof_request_dma(p); 755 + if (ret < 0) 756 + dev_warn(&pdev->dev, "DMA not available, using PIO\n"); 757 + 1175 758 ret = devm_spi_register_master(&pdev->dev, master); 1176 759 if (ret < 0) { 1177 760 dev_err(&pdev->dev, "spi_register_master error.\n"); ··· 1185 760 return 0; 1186 761 1187 762 err2: 763 + sh_msiof_release_dma(p); 1188 764 pm_runtime_disable(&pdev->dev); 1189 765 err1: 1190 766 spi_master_put(master); ··· 1194 768 1195 769 static int sh_msiof_spi_remove(struct platform_device *pdev) 1196 770 { 771 + struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev); 772 + 773 + sh_msiof_release_dma(p); 1197 774 pm_runtime_disable(&pdev->dev); 1198 775 return 0; 1199 776 }
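The sh-msiof DMA path above only moves 32-bit words, so `copy_bswap32()` byte-swaps 8-bit data while staging it into the DMA bounce page. A userspace sketch of that copy (`my_swab32`/`copy_bswap32_sketch` are stand-ins; the kernel version additionally branches on src/dst alignment for speed, while this sketch handles every alignment through memcpy):

```c
#include <stdint.h>
#include <string.h>

/* Userspace stand-in for the kernel's swab32(). */
static uint32_t my_swab32(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00) |
	       ((x << 8) & 0x00ff0000) | (x << 24);
}

/* Byte-swap 32-bit words while copying, so packed 8-bit data can
 * ride a controller that only does 32-bit DMA. memcpy-based loads
 * and stores are safe at any alignment. */
static void copy_bswap32_sketch(void *dst, const void *src,
				unsigned int words)
{
	unsigned char *d = dst;
	const unsigned char *s = src;
	uint32_t v;

	while (words--) {
		memcpy(&v, s, 4);
		v = my_swab32(v);
		memcpy(d, &v, 4);
		s += 4;
		d += 4;
	}
}
```

Because the swap is expressed byte-wise, the result is the same on little- and big-endian hosts: each group of four bytes is simply reversed.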
+6 -9
drivers/spi/spi-sh.c
···
 432 432 	spi_unregister_master(ss->master);
 433 433 	destroy_workqueue(ss->workqueue);
 434 434 	free_irq(ss->irq, ss);
 435 - 	iounmap(ss->addr);
 436 435 
 437 436 	return 0;
 438 437 }
···
 479 480 	}
 480 481 	ss->irq = irq;
 481 482 	ss->master = master;
 482 - 	ss->addr = ioremap(res->start, resource_size(res));
 483 + 	ss->addr = devm_ioremap(&pdev->dev, res->start, resource_size(res));
 483 484 	if (ss->addr == NULL) {
 484 485 		dev_err(&pdev->dev, "ioremap error.\n");
 485 486 		ret = -ENOMEM;
···
 494 495 	if (ss->workqueue == NULL) {
 495 496 		dev_err(&pdev->dev, "create workqueue error\n");
 496 497 		ret = -EBUSY;
 497 - 		goto error2;
 498 + 		goto error1;
 498 499 	}
 499 500 
 500 501 	ret = request_irq(irq, spi_sh_irq, 0, "spi_sh", ss);
 501 502 	if (ret < 0) {
 502 503 		dev_err(&pdev->dev, "request_irq error\n");
 503 - 		goto error3;
 504 + 		goto error2;
 504 505 	}
 505 506 
 506 507 	master->num_chipselect = 2;
···
 512 513 	ret = spi_register_master(master);
 513 514 	if (ret < 0) {
 514 515 		printk(KERN_ERR "spi_register_master error.\n");
 515 - 		goto error4;
 516 + 		goto error3;
 516 517 	}
 517 518 
 518 519 	return 0;
 519 520 
 520 - error4:
 521 - 	free_irq(irq, ss);
 522 521 error3:
 523 - 	destroy_workqueue(ss->workqueue);
 522 + 	free_irq(irq, ss);
 524 523 error2:
 525 - 	iounmap(ss->addr);
 524 + 	destroy_workqueue(ss->workqueue);
 526 525 error1:
 527 526 	spi_master_put(master);
 528 527 
+6 -6
drivers/spi/spi-topcliff-pch.c
···
 874 874 	dma_cap_set(DMA_SLAVE, mask);
 875 875 
 876 876 	/* Get DMA's dev information */
 877 - 	dma_dev = pci_get_bus_and_slot(data->board_dat->pdev->bus->number,
 878 - 				       PCI_DEVFN(12, 0));
 877 + 	dma_dev = pci_get_slot(data->board_dat->pdev->bus,
 878 + 		PCI_DEVFN(PCI_SLOT(data->board_dat->pdev->devfn), 0));
 879 879 
 880 880 	/* Set Tx DMA */
 881 881 	param = &dma->param_tx;
···
 1047 1047 					num, DMA_DEV_TO_MEM,
 1048 1048 					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 1049 1049 	if (!desc_rx) {
 1050 - 		dev_err(&data->master->dev, "%s:device_prep_slave_sg Failed\n",
 1051 - 			__func__);
 1050 + 		dev_err(&data->master->dev,
 1051 + 			"%s:dmaengine_prep_slave_sg Failed\n", __func__);
 1052 1052 		return;
 1053 1053 	}
 1054 1054 	dma_sync_sg_for_device(&data->master->dev, sg, num, DMA_FROM_DEVICE);
···
 1106 1106 					sg, num, DMA_MEM_TO_DEV,
 1107 1107 					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 1108 1108 	if (!desc_tx) {
 1109 - 		dev_err(&data->master->dev, "%s:device_prep_slave_sg Failed\n",
 1110 - 			__func__);
 1109 + 		dev_err(&data->master->dev,
 1110 + 			"%s:dmaengine_prep_slave_sg Failed\n", __func__);
 1111 1111 		return;
 1112 1112 	}
 1113 1113 	dma_sync_sg_for_device(&data->master->dev, sg, num, DMA_TO_DEVICE);
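The topcliff-pch fix above stops hard-coding PCI slot 12 and instead derives the slot from the SPI device's own devfn, then asks for function 0 in that slot. The devfn encoding it relies on: slot in bits 7..3, function in bits 2..0. These macros mirror the kernel's PCI_DEVFN()/PCI_SLOT()/PCI_FUNC() (renamed here to keep the sketch self-contained):

```c
/* A PCI devfn packs the slot (device) number into bits 7..3 and the
 * function number into bits 2..0. */
#define MY_PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define MY_PCI_SLOT(devfn)		(((devfn) >> 3) & 0x1f)
#define MY_PCI_FUNC(devfn)		((devfn) & 0x07)
```

So `PCI_DEVFN(PCI_SLOT(pdev->devfn), 0)` means "function 0 of whatever slot this device sits in", which keeps working when the chip is not in slot 12.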
+1 -1
drivers/spi/spi-xilinx.c
···
 369 369 		goto put_master;
 370 370 	}
 371 371 
 372 - 	master->bus_num = pdev->dev.id;
 372 + 	master->bus_num = pdev->id;
 373 373 	master->num_chipselect = num_cs;
 374 374 	master->dev.of_node = pdev->dev.of_node;
 375 375 
+6 -6
drivers/spi/spi.c
···
 350 350 struct spi_device *spi_alloc_device(struct spi_master *master)
 351 351 {
 352 352 	struct spi_device *spi;
 353 - 	struct device *dev = master->dev.parent;
 354 353 
 355 354 	if (!spi_master_get(master))
 356 355 		return NULL;
 357 356 
 358 357 	spi = kzalloc(sizeof(*spi), GFP_KERNEL);
 359 358 	if (!spi) {
 360 - 		dev_err(dev, "cannot alloc spi_device\n");
 361 359 		spi_master_put(master);
 362 360 		return NULL;
 363 361 	}
···
 622 624 	}
 623 625 
 624 626 	ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
 627 + 	if (!ret)
 628 + 		ret = -ENOMEM;
 625 629 	if (ret < 0) {
 626 630 		sg_free_table(sgt);
 627 631 		return ret;
···
 652 652 	if (!master->can_dma)
 653 653 		return 0;
 654 654 
 655 - 	tx_dev = &master->dma_tx->dev->device;
 656 - 	rx_dev = &master->dma_rx->dev->device;
 655 + 	tx_dev = master->dma_tx->device->dev;
 656 + 	rx_dev = master->dma_rx->device->dev;
 657 657 
 658 658 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
 659 659 		if (!master->can_dma(master, msg->spi, xfer))
···
 692 692 	if (!master->cur_msg_mapped || !master->can_dma)
 693 693 		return 0;
 694 694 
 695 - 	tx_dev = &master->dma_tx->dev->device;
 696 - 	rx_dev = &master->dma_rx->dev->device;
 695 + 	tx_dev = master->dma_tx->device->dev;
 696 + 	rx_dev = master->dma_rx->device->dev;
 697 697 
 698 698 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
 699 699 		if (!master->can_dma(master, msg->spi, xfer))
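The spi.c hunk above accounts for dma_map_sg()'s return convention: it returns the number of mapped entries and 0 on failure, never a negative errno, so a bare `if (ret < 0)` check could miss a failed mapping. A stand-alone sketch of the normalization (`fake_map_sg`/`map_buf_sketch` are stand-ins, not the DMA API):

```c
#include <errno.h>

/* Stand-in for dma_map_sg(): number of mapped entries, 0 on failure,
 * never negative. */
static int fake_map_sg(int nents)
{
	return nents;		/* 0 models a mapping failure */
}

/* Turn the 0-means-failure convention into a normal errno so the
 * existing "if (ret < 0)" error path catches it. */
static int map_buf_sketch(int nents)
{
	int ret = fake_map_sg(nents);

	if (!ret)
		ret = -ENOMEM;
	if (ret < 0)
		return ret;	/* the real code frees the sg table here */

	return 0;
}
```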
+2
include/linux/spi/sh_msiof.h
···
 5 5 	int tx_fifo_override;
 6 6 	int rx_fifo_override;
 7 7 	u16 num_chipselect;
 8 + 	unsigned int dma_tx_id;
 9 + 	unsigned int dma_rx_id;
 8 10 };
 9 11 
 10 12 #endif /* __SPI_SH_MSIOF_H__ */