Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'spi-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
"Quite a lot of cleanup and maintainence work going on this release in
various drivers, and also a fix for a nasty locking issue in the core:

- A fix for locking issues when external drivers explicitly locked
the bus with spi_bus_lock() - we were using the same lock to both
control access to the physical bus in multi-threaded I/O operations
and exclude multiple callers.

Confusion between these two caused us to have scenarios where we
were dropping locks. These are fixed by splitting into two
separate locks, as should have been done originally, making
everything much clearer and correct.

- Support for DMA in spi_flash_read().

- Support for instantiating spidev on ACPI systems, including some
test devices used in Windows validation.

- Use of the core DMA mapping functionality in the McSPI driver.

- Start of support for ThunderX SPI controllers, involving a very big
set of changes to the Cavium driver.

- Support for Braswell, Exynos 5433, Kaby Lake, Merrifield, RK3036,
RK3228, RK3368 controllers"

* tag 'spi-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (64 commits)
spi: Split bus and I/O locking
spi: octeon: Split driver into Octeon specific and common parts
spi: octeon: Move include file from arch/mips to drivers/spi
spi: octeon: Put register offsets into a struct
spi: octeon: Store system clock frequency in struct octeon_spi
spi: octeon: Convert driver to use readq()/writeq() functions
spi: pic32-sqi: fixup wait_for_completion_timeout return handling
spi: pic32: fixup wait_for_completion_timeout return handling
spi: rockchip: limit transfers to (64K - 1) bytes
spi: xilinx: Return IRQ_NONE if no interrupts were detected
spi: xilinx: Handle errors from platform_get_irq()
spi: s3c64xx: restore removed comments
spi: s3c64xx: add Exynos5433 compatible for ioclk handling
spi: s3c64xx: use error code from clk_prepare_enable()
spi: s3c64xx: rename goto labels to meaningful names
spi: s3c64xx: document the clocks and the clock-name property
spi: s3c64xx: add exynos5433 spi compatible
spi: s3c64xx: fix reference leak to master in s3c64xx_spi_remove()
spi: spi-sh: Remove deprecated create_singlethread_workqueue
spi: spi-topcliff-pch: Remove deprecated create_singlethread_workqueue
...

+1183 -983
+3 -2
Documentation/devicetree/bindings/spi/fsl-imx-cspi.txt
··· 11 11 - "fsl,imx51-ecspi" for SPI compatible with the one integrated on i.MX51 12 12 - reg : Offset and length of the register set for the device 13 13 - interrupts : Should contain CSPI/eCSPI interrupt 14 - - fsl,spi-num-chipselects : Contains the number of the chipselect 15 14 - cs-gpios : Specifies the gpio pins to be used for chipselects. 16 15 - clocks : Clock specifiers for both ipg and per clocks. 17 16 - clock-names : Clock names should include both "ipg" and "per" ··· 20 21 Documentation/devicetree/bindings/dma/dma.txt 21 22 - dma-names: DMA request names should include "tx" and "rx" if present. 22 23 24 + Obsolete properties: 25 + - fsl,spi-num-chipselects : Contains the number of the chipselect 26 + 23 27 Example: 24 28 25 29 ecspi@70010000 { ··· 31 29 compatible = "fsl,imx51-ecspi"; 32 30 reg = <0x70010000 0x4000>; 33 31 interrupts = <36>; 34 - fsl,spi-num-chipselects = <2>; 35 32 cs-gpios = <&gpio3 24 0>, /* GPIO3_24 */ 36 33 <&gpio3 25 0>; /* GPIO3_25 */ 37 34 dmas = <&sdma 3 7 1>, <&sdma 4 7 2>;
+18 -17
Documentation/devicetree/bindings/spi/spi-bus.txt
··· 8 8 9 9 The SPI master node requires the following properties: 10 10 - #address-cells - number of cells required to define a chip select 11 - address on the SPI bus. 11 + address on the SPI bus. 12 12 - #size-cells - should be zero. 13 13 - compatible - name of SPI bus controller following generic names 14 - recommended practice. 15 - - cs-gpios - (optional) gpios chip select. 14 + recommended practice. 16 15 No other properties are required in the SPI bus node. It is assumed 17 16 that a driver for an SPI bus device will understand that it is an SPI bus. 18 17 However, the binding does not attempt to define the specific method for ··· 21 22 chip selects. Individual drivers can define additional properties to 22 23 support describing the chip select layout. 23 24 24 - Optional property: 25 - - num-cs : total number of chipselects 25 + Optional properties: 26 + - cs-gpios - gpios chip select. 27 + - num-cs - total number of chipselects. 26 28 27 - If cs-gpios is used the number of chip select will automatically increased 28 - with max(cs-gpios > hw cs) 29 + If cs-gpios is used the number of chip selects will be increased automatically 30 + with max(cs-gpios > hw cs). 29 31 30 32 So if for example the controller has 2 CS lines, and the cs-gpios 31 33 property looks like this: ··· 45 45 contain the following properties. 46 46 - reg - (required) chip select address of device. 47 47 - compatible - (required) name of SPI device following generic names 48 - recommended practice 49 - - spi-max-frequency - (required) Maximum SPI clocking speed of device in Hz 48 + recommended practice. 49 + - spi-max-frequency - (required) Maximum SPI clocking speed of device in Hz. 50 50 - spi-cpol - (optional) Empty property indicating device requires 51 - inverse clock polarity (CPOL) mode 51 + inverse clock polarity (CPOL) mode. 52 52 - spi-cpha - (optional) Empty property indicating device requires 53 - shifted clock phase (CPHA) mode 53 + shifted clock phase (CPHA) mode. 
54 54 - spi-cs-high - (optional) Empty property indicating device requires 55 - chip select active high 55 + chip select active high. 56 56 - spi-3wire - (optional) Empty property indicating device requires 57 - 3-wire mode. 57 + 3-wire mode. 58 58 - spi-lsb-first - (optional) Empty property indicating device requires 59 59 LSB first mode. 60 - - spi-tx-bus-width - (optional) The bus width(number of data wires) that 60 + - spi-tx-bus-width - (optional) The bus width (number of data wires) that is 61 61 used for MOSI. Defaults to 1 if not present. 62 - - spi-rx-bus-width - (optional) The bus width(number of data wires) that 62 + - spi-rx-bus-width - (optional) The bus width (number of data wires) that is 63 63 used for MISO. Defaults to 1 if not present. 64 64 - spi-rx-delay-us - (optional) Microsecond delay after a read transfer. 65 65 - spi-tx-delay-us - (optional) Microsecond delay after a write transfer. 66 66 67 67 Some SPI controllers and devices support Dual and Quad SPI transfer mode. 68 - It allows data in the SPI system to be transferred in 2 wires(DUAL) or 4 wires(QUAD). 68 + It allows data in the SPI system to be transferred using 2 wires (DUAL) or 4 69 + wires (QUAD). 69 70 Now the value that spi-tx-bus-width and spi-rx-bus-width can receive is 70 - only 1(SINGLE), 2(DUAL) and 4(QUAD). 71 + only 1 (SINGLE), 2 (DUAL) and 4 (QUAD). 71 72 Dual/Quad mode is not allowed when 3-wire mode is used. 72 73 73 74 If a gpio chipselect is used for the SPI slave the gpio number will be passed
+33
Documentation/devicetree/bindings/spi/spi-clps711x.txt
··· 1 + Serial Peripheral Interface on Cirrus Logic CL-PS71xx, EP72xx, EP73xx 2 + 3 + Required properties 4 + - #address-cells: must be <1> 5 + - #size-cells: must be <0> 6 + - compatible: should include "cirrus,ep7209-spi" 7 + - reg: Address and length of one register range 8 + - interrupts: one interrupt line 9 + - clocks: One entry, refers to the SPI bus clock 10 + - cs-gpios: Specifies the gpio pins to be used for chipselects. 11 + See: Documentation/devicetree/bindings/spi/spi-bus.txt 12 + 13 + An additional register is present in the system controller, 14 + which is assumed to be in the same device tree, with and marked 15 + as compatible with "cirrus,ep7209-syscon3". 16 + 17 + Example: 18 + 19 + spi@80000500 { 20 + #address-cells = <1>; 21 + #size-cells = <0>; 22 + compatible = "cirrus,ep7209-spi"; 23 + reg = <0x80000500 0x4>; 24 + interrupts = <15>; 25 + clocks = <&clks CLPS711X_CLK_SPI>; 26 + status = "disabled"; 27 + }; 28 + 29 + syscon3: syscon@80002200 { 30 + compatible = "cirrus,ep7209-syscon3", "syscon"; 31 + reg = <0x80002200 0x40>; 32 + }; 33 +
+1 -1
Documentation/devicetree/bindings/spi/spi-davinci.txt
··· 21 21 IP to the interrupt controller within the SoC. Possible values 22 22 are 0 and 1. Manual says one of the two possible interrupt 23 23 lines can be tied to the interrupt controller. Set this 24 - based on a specifc SoC configuration. 24 + based on a specific SoC configuration. 25 25 - interrupts: interrupt number mapped to CPU. 26 26 - clocks: spi clk phandle 27 27
+48 -1
Documentation/devicetree/bindings/spi/spi-orion.txt
··· 8 8 - "marvell,armada-380-spi", for the Armada 38x SoCs 9 9 - "marvell,armada-390-spi", for the Armada 39x SoCs 10 10 - "marvell,armada-xp-spi", for the Armada XP SoCs 11 - - reg : offset and length of the register set for the device 11 + - reg : offset and length of the register set for the device. 12 + This property can optionally have additional entries to configure 13 + the SPI direct access mode that some of the Marvell SoCs support 14 + additionally to the normal indirect access (PIO) mode. The values 15 + for the MBus "target" and "attribute" are defined in the Marvell 16 + SoC "Functional Specifications" Manual in the chapter "Marvell 17 + Core Processor Address Decoding". 18 + The eight register sets following the control registers refer to 19 + chip-select lines 0 through 7 respectively. 12 20 - cell-index : Which of multiple SPI controllers is this. 13 21 Optional properties: 14 22 - interrupts : Is currently not used. ··· 31 23 interrupts = <23>; 32 24 status = "disabled"; 33 25 }; 26 + 27 + Example with SPI direct mode support (optionally): 28 + spi0: spi@10600 { 29 + compatible = "marvell,orion-spi"; 30 + #address-cells = <1>; 31 + #size-cells = <0>; 32 + cell-index = <0>; 33 + reg = <MBUS_ID(0xf0, 0x01) 0x10600 0x28>, /* control */ 34 + <MBUS_ID(0x01, 0x1e) 0 0xffffffff>, /* CS0 */ 35 + <MBUS_ID(0x01, 0x5e) 0 0xffffffff>, /* CS1 */ 36 + <MBUS_ID(0x01, 0x9e) 0 0xffffffff>, /* CS2 */ 37 + <MBUS_ID(0x01, 0xde) 0 0xffffffff>, /* CS3 */ 38 + <MBUS_ID(0x01, 0x1f) 0 0xffffffff>, /* CS4 */ 39 + <MBUS_ID(0x01, 0x5f) 0 0xffffffff>, /* CS5 */ 40 + <MBUS_ID(0x01, 0x9f) 0 0xffffffff>, /* CS6 */ 41 + <MBUS_ID(0x01, 0xdf) 0 0xffffffff>; /* CS7 */ 42 + interrupts = <23>; 43 + status = "disabled"; 44 + }; 45 + 46 + To enable the direct mode, the board specific 'ranges' property in the 47 + 'soc' node needs to add the entries for the desired SPI controllers 48 + and its chip-selects that are used in the direct mode instead of PIO 49 + mode. 
Here an example for this (SPI controller 0, device 1 and SPI 50 + controller 1, device 2 are used in direct mode. All other SPI device 51 + are used in the default indirect (PIO) mode): 52 + soc { 53 + /* 54 + * Enable the SPI direct access by configuring an entry 55 + * here in the board-specific ranges property 56 + */ 57 + ranges = <MBUS_ID(0xf0, 0x01) 0 0 0xf1000000 0x100000>, /* internal regs */ 58 + <MBUS_ID(0x01, 0x1d) 0 0 0xfff00000 0x100000>, /* BootROM */ 59 + <MBUS_ID(0x01, 0x5e) 0 0 0xf1100000 0x10000>, /* SPI0-DEV1 */ 60 + <MBUS_ID(0x01, 0x9a) 0 0 0xf1110000 0x10000>; /* SPI1-DEV2 */ 61 + 62 + For further information on the MBus bindings, please see the MBus 63 + DT documentation: 64 + Documentation/devicetree/bindings/bus/mvebu-mbus.txt
+7 -4
Documentation/devicetree/bindings/spi/spi-rockchip.txt
··· 6 6 Required Properties: 7 7 8 8 - compatible: should be one of the following. 9 - "rockchip,rk3066-spi" for rk3066. 10 - "rockchip,rk3188-spi", "rockchip,rk3066-spi" for rk3188. 11 - "rockchip,rk3288-spi", "rockchip,rk3066-spi" for rk3288. 12 - "rockchip,rk3399-spi", "rockchip,rk3066-spi" for rk3399. 9 + "rockchip,rk3036-spi" for rk3036 SoCS. 10 + "rockchip,rk3066-spi" for rk3066 SoCs. 11 + "rockchip,rk3188-spi" for rk3188 SoCs. 12 + "rockchip,rk3228-spi" for rk3228 SoCS. 13 + "rockchip,rk3288-spi" for rk3288 SoCs. 14 + "rockchip,rk3368-spi" for rk3368 SoCs. 15 + "rockchip,rk3399-spi" for rk3399 SoCs. 13 16 - reg: physical base address of the controller and length of memory mapped 14 17 region. 15 18 - interrupts: The interrupt number to the cpu. The interrupt specifier format
+14 -1
Documentation/devicetree/bindings/spi/spi-samsung.txt
··· 9 9 - samsung,s3c2443-spi: for s3c2443, s3c2416 and s3c2450 platforms 10 10 - samsung,s3c6410-spi: for s3c6410 platforms 11 11 - samsung,s5pv210-spi: for s5pv210 and s5pc110 platforms 12 - - samsung,exynos7-spi: for exynos7 platforms 12 + - samsung,exynos5433-spi: for exynos5433 compatible controllers 13 + - samsung,exynos7-spi: for exynos7 platforms <DEPRECATED> 13 14 14 15 - reg: physical base address of the controller and length of memory mapped 15 16 region. ··· 23 22 24 23 - dma-names: Names for the dma channels. There must be at least one channel 25 24 named "tx" for transmit and named "rx" for receive. 25 + 26 + - clocks: specifies the clock IDs provided to the SPI controller; they are 27 + required for interacting with the controller itself, for synchronizing the bus 28 + and as I/O clock (the latter is required by exynos5433 and exynos7). 29 + 30 + - clock-names: string names of the clocks in the 'clocks' property; for all the 31 + the devices the names must be "spi", "spi_busclkN" (where N is determined by 32 + "samsung,spi-src-clk"), while Exynos5433 should specify a third clock 33 + "spi_ioclk" for the I/O clock. 26 34 27 35 Required Board Specific Properties: 28 36 ··· 49 39 not specified, the default number of chip select lines is set to 1. 50 40 51 41 - cs-gpios: should specify GPIOs used for chipselects (see spi-bus.txt) 42 + 43 + - no-cs-readback: the CS line is disconnected, therefore the device should not 44 + operate based on CS signalling. 52 45 53 46 SPI Controller specific data in SPI slave nodes: 54 47
+1 -1
Documentation/devicetree/bindings/spi/ti_qspi.txt
··· 20 20 chipselect register and offset of that register. 21 21 22 22 NOTE: TI QSPI controller requires different pinmux and IODelay 23 - paramaters for Mode-0 and Mode-3 operations, which needs to be set up by 23 + parameters for Mode-0 and Mode-3 operations, which needs to be set up by 24 24 the bootloader (U-Boot). Default configuration only supports Mode-0 25 25 operation. Hence, "spi-cpol" and "spi-cpha" DT properties cannot be 26 26 specified in the slave nodes of TI QSPI controller without appropriate
+1
MAINTAINERS
··· 10921 10921 T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git 10922 10922 Q: http://patchwork.kernel.org/project/spi-devel-general/list/ 10923 10923 S: Maintained 10924 + F: Documentation/devicetree/bindings/spi/ 10924 10925 F: Documentation/spi/ 10925 10926 F: drivers/spi/ 10926 10927 F: include/linux/spi/
+30 -29
arch/mips/include/asm/octeon/cvmx-mpi-defs.h drivers/spi/spi-cavium.h
··· 1 - /***********************license start*************** 2 - * Author: Cavium Networks 3 - * 4 - * Contact: support@caviumnetworks.com 5 - * This file is part of the OCTEON SDK 6 - * 7 - * Copyright (c) 2003-2012 Cavium Networks 8 - * 9 - * This file is free software; you can redistribute it and/or modify 10 - * it under the terms of the GNU General Public License, Version 2, as 11 - * published by the Free Software Foundation. 12 - * 13 - * This file is distributed in the hope that it will be useful, but 14 - * AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty 15 - * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or 16 - * NONINFRINGEMENT. See the GNU General Public License for more 17 - * details. 18 - * 19 - * You should have received a copy of the GNU General Public License 20 - * along with this file; if not, write to the Free Software 21 - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 22 - * or visit http://www.gnu.org/licenses/. 23 - * 24 - * This file may also be available under a different license from Cavium. 
25 - * Contact Cavium Networks for more information 26 - ***********************license end**************************************/ 1 + #ifndef __SPI_CAVIUM_H 2 + #define __SPI_CAVIUM_H 27 3 28 - #ifndef __CVMX_MPI_DEFS_H__ 29 - #define __CVMX_MPI_DEFS_H__ 4 + #define OCTEON_SPI_MAX_BYTES 9 5 + #define OCTEON_SPI_MAX_CLOCK_HZ 16000000 6 + 7 + struct octeon_spi_regs { 8 + int config; 9 + int status; 10 + int tx; 11 + int data; 12 + }; 13 + 14 + struct octeon_spi { 15 + void __iomem *register_base; 16 + u64 last_cfg; 17 + u64 cs_enax; 18 + int sys_freq; 19 + struct octeon_spi_regs regs; 20 + }; 21 + 22 + #define OCTEON_SPI_CFG(x) (x->regs.config) 23 + #define OCTEON_SPI_STS(x) (x->regs.status) 24 + #define OCTEON_SPI_TX(x) (x->regs.tx) 25 + #define OCTEON_SPI_DAT0(x) (x->regs.data) 26 + 27 + int octeon_spi_transfer_one_message(struct spi_master *master, 28 + struct spi_message *msg); 29 + 30 + /* MPI register descriptions */ 30 31 31 32 #define CVMX_MPI_CFG (CVMX_ADD_IO_SEG(0x0001070000001000ull)) 32 33 #define CVMX_MPI_DATX(offset) (CVMX_ADD_IO_SEG(0x0001070000001080ull) + ((offset) & 15) * 8) ··· 326 325 struct cvmx_mpi_tx_cn61xx cnf71xx; 327 326 }; 328 327 329 - #endif 328 + #endif /* __SPI_CAVIUM_H */
+1
drivers/spi/Kconfig
··· 411 411 tristate "McSPI driver for OMAP" 412 412 depends on HAS_DMA 413 413 depends on ARCH_OMAP2PLUS || COMPILE_TEST 414 + select SG_SPLIT 414 415 help 415 416 SPI master controller for OMAP24XX and later Multichannel SPI 416 417 (McSPI) modules.
+1
drivers/spi/Makefile
··· 56 56 obj-$(CONFIG_SPI_MXS) += spi-mxs.o 57 57 obj-$(CONFIG_SPI_NUC900) += spi-nuc900.o 58 58 obj-$(CONFIG_SPI_OC_TINY) += spi-oc-tiny.o 59 + spi-octeon-objs := spi-cavium.o spi-cavium-octeon.o 59 60 obj-$(CONFIG_SPI_OCTEON) += spi-octeon.o 60 61 obj-$(CONFIG_SPI_OMAP_UWIRE) += spi-omap-uwire.o 61 62 obj-$(CONFIG_SPI_OMAP_100K) += spi-omap-100k.o
+4 -11
drivers/spi/spi-bfin-sport.c
··· 64 64 /* Pin request list */ 65 65 u16 *pin_req; 66 66 67 - /* Driver message queue */ 68 - struct workqueue_struct *workqueue; 69 67 struct work_struct pump_messages; 70 68 spinlock_t lock; 71 69 struct list_head queue; ··· 298 300 drv_data->cur_msg = NULL; 299 301 drv_data->cur_transfer = NULL; 300 302 drv_data->cur_chip = NULL; 301 - queue_work(drv_data->workqueue, &drv_data->pump_messages); 303 + schedule_work(&drv_data->pump_messages); 302 304 spin_unlock_irqrestore(&drv_data->lock, flags); 303 305 304 306 if (!drv_data->cs_change) ··· 554 556 list_add_tail(&msg->queue, &drv_data->queue); 555 557 556 558 if (drv_data->run && !drv_data->busy) 557 - queue_work(drv_data->workqueue, &drv_data->pump_messages); 559 + schedule_work(&drv_data->pump_messages); 558 560 559 561 spin_unlock_irqrestore(&drv_data->lock, flags); 560 562 ··· 664 666 tasklet_init(&drv_data->pump_transfers, 665 667 bfin_sport_spi_pump_transfers, (unsigned long)drv_data); 666 668 667 - /* init messages workqueue */ 668 669 INIT_WORK(&drv_data->pump_messages, bfin_sport_spi_pump_messages); 669 - drv_data->workqueue = 670 - create_singlethread_workqueue(dev_name(drv_data->master->dev.parent)); 671 - if (drv_data->workqueue == NULL) 672 - return -EBUSY; 673 670 674 671 return 0; 675 672 } ··· 687 694 drv_data->cur_chip = NULL; 688 695 spin_unlock_irqrestore(&drv_data->lock, flags); 689 696 690 - queue_work(drv_data->workqueue, &drv_data->pump_messages); 697 + schedule_work(&drv_data->pump_messages); 691 698 692 699 return 0; 693 700 } ··· 731 738 if (status) 732 739 return status; 733 740 734 - destroy_workqueue(drv_data->workqueue); 741 + flush_work(&drv_data->pump_messages); 735 742 736 743 return 0; 737 744 }
+4 -11
drivers/spi/spi-bfin5xx.c
··· 67 67 /* BFIN hookup */ 68 68 struct bfin5xx_spi_master *master_info; 69 69 70 - /* Driver message queue */ 71 - struct workqueue_struct *workqueue; 72 70 struct work_struct pump_messages; 73 71 spinlock_t lock; 74 72 struct list_head queue; ··· 357 359 drv_data->cur_msg = NULL; 358 360 drv_data->cur_transfer = NULL; 359 361 drv_data->cur_chip = NULL; 360 - queue_work(drv_data->workqueue, &drv_data->pump_messages); 362 + schedule_work(&drv_data->pump_messages); 361 363 spin_unlock_irqrestore(&drv_data->lock, flags); 362 364 363 365 msg->state = NULL; ··· 944 946 list_add_tail(&msg->queue, &drv_data->queue); 945 947 946 948 if (drv_data->running && !drv_data->busy) 947 - queue_work(drv_data->workqueue, &drv_data->pump_messages); 949 + schedule_work(&drv_data->pump_messages); 948 950 949 951 spin_unlock_irqrestore(&drv_data->lock, flags); 950 952 ··· 1175 1177 tasklet_init(&drv_data->pump_transfers, 1176 1178 bfin_spi_pump_transfers, (unsigned long)drv_data); 1177 1179 1178 - /* init messages workqueue */ 1179 1180 INIT_WORK(&drv_data->pump_messages, bfin_spi_pump_messages); 1180 - drv_data->workqueue = create_singlethread_workqueue( 1181 - dev_name(drv_data->master->dev.parent)); 1182 - if (drv_data->workqueue == NULL) 1183 - return -EBUSY; 1184 1181 1185 1182 return 0; 1186 1183 } ··· 1197 1204 drv_data->cur_chip = NULL; 1198 1205 spin_unlock_irqrestore(&drv_data->lock, flags); 1199 1206 1200 - queue_work(drv_data->workqueue, &drv_data->pump_messages); 1207 + schedule_work(&drv_data->pump_messages); 1201 1208 1202 1209 return 0; 1203 1210 } ··· 1239 1246 if (status != 0) 1240 1247 return status; 1241 1248 1242 - destroy_workqueue(drv_data->workqueue); 1249 + flush_work(&drv_data->pump_messages); 1243 1250 1244 1251 return 0; 1245 1252 }
+104
drivers/spi/spi-cavium-octeon.c
··· 1 + /* 2 + * This file is subject to the terms and conditions of the GNU General Public 3 + * License. See the file "COPYING" in the main directory of this archive 4 + * for more details. 5 + * 6 + * Copyright (C) 2011, 2012 Cavium, Inc. 7 + */ 8 + 9 + #include <linux/platform_device.h> 10 + #include <linux/spi/spi.h> 11 + #include <linux/module.h> 12 + #include <linux/io.h> 13 + #include <linux/of.h> 14 + 15 + #include <asm/octeon/octeon.h> 16 + 17 + #include "spi-cavium.h" 18 + 19 + static int octeon_spi_probe(struct platform_device *pdev) 20 + { 21 + struct resource *res_mem; 22 + void __iomem *reg_base; 23 + struct spi_master *master; 24 + struct octeon_spi *p; 25 + int err = -ENOENT; 26 + 27 + master = spi_alloc_master(&pdev->dev, sizeof(struct octeon_spi)); 28 + if (!master) 29 + return -ENOMEM; 30 + p = spi_master_get_devdata(master); 31 + platform_set_drvdata(pdev, master); 32 + 33 + res_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 34 + reg_base = devm_ioremap_resource(&pdev->dev, res_mem); 35 + if (IS_ERR(reg_base)) { 36 + err = PTR_ERR(reg_base); 37 + goto fail; 38 + } 39 + 40 + p->register_base = reg_base; 41 + p->sys_freq = octeon_get_io_clock_rate(); 42 + 43 + p->regs.config = 0; 44 + p->regs.status = 0x08; 45 + p->regs.tx = 0x10; 46 + p->regs.data = 0x80; 47 + 48 + master->num_chipselect = 4; 49 + master->mode_bits = SPI_CPHA | 50 + SPI_CPOL | 51 + SPI_CS_HIGH | 52 + SPI_LSB_FIRST | 53 + SPI_3WIRE; 54 + 55 + master->transfer_one_message = octeon_spi_transfer_one_message; 56 + master->bits_per_word_mask = SPI_BPW_MASK(8); 57 + master->max_speed_hz = OCTEON_SPI_MAX_CLOCK_HZ; 58 + 59 + master->dev.of_node = pdev->dev.of_node; 60 + err = devm_spi_register_master(&pdev->dev, master); 61 + if (err) { 62 + dev_err(&pdev->dev, "register master failed: %d\n", err); 63 + goto fail; 64 + } 65 + 66 + dev_info(&pdev->dev, "OCTEON SPI bus driver\n"); 67 + 68 + return 0; 69 + fail: 70 + spi_master_put(master); 71 + return err; 72 + } 73 + 74 + static 
int octeon_spi_remove(struct platform_device *pdev) 75 + { 76 + struct spi_master *master = platform_get_drvdata(pdev); 77 + struct octeon_spi *p = spi_master_get_devdata(master); 78 + 79 + /* Clear the CSENA* and put everything in a known state. */ 80 + writeq(0, p->register_base + OCTEON_SPI_CFG(p)); 81 + 82 + return 0; 83 + } 84 + 85 + static const struct of_device_id octeon_spi_match[] = { 86 + { .compatible = "cavium,octeon-3010-spi", }, 87 + {}, 88 + }; 89 + MODULE_DEVICE_TABLE(of, octeon_spi_match); 90 + 91 + static struct platform_driver octeon_spi_driver = { 92 + .driver = { 93 + .name = "spi-octeon", 94 + .of_match_table = octeon_spi_match, 95 + }, 96 + .probe = octeon_spi_probe, 97 + .remove = octeon_spi_remove, 98 + }; 99 + 100 + module_platform_driver(octeon_spi_driver); 101 + 102 + MODULE_DESCRIPTION("Cavium, Inc. OCTEON SPI bus driver"); 103 + MODULE_AUTHOR("David Daney"); 104 + MODULE_LICENSE("GPL");
+151
drivers/spi/spi-cavium.c
··· 1 + /* 2 + * This file is subject to the terms and conditions of the GNU General Public 3 + * License. See the file "COPYING" in the main directory of this archive 4 + * for more details. 5 + * 6 + * Copyright (C) 2011, 2012 Cavium, Inc. 7 + */ 8 + 9 + #include <linux/spi/spi.h> 10 + #include <linux/module.h> 11 + #include <linux/delay.h> 12 + #include <linux/io.h> 13 + 14 + #include "spi-cavium.h" 15 + 16 + static void octeon_spi_wait_ready(struct octeon_spi *p) 17 + { 18 + union cvmx_mpi_sts mpi_sts; 19 + unsigned int loops = 0; 20 + 21 + do { 22 + if (loops++) 23 + __delay(500); 24 + mpi_sts.u64 = readq(p->register_base + OCTEON_SPI_STS(p)); 25 + } while (mpi_sts.s.busy); 26 + } 27 + 28 + static int octeon_spi_do_transfer(struct octeon_spi *p, 29 + struct spi_message *msg, 30 + struct spi_transfer *xfer, 31 + bool last_xfer) 32 + { 33 + struct spi_device *spi = msg->spi; 34 + union cvmx_mpi_cfg mpi_cfg; 35 + union cvmx_mpi_tx mpi_tx; 36 + unsigned int clkdiv; 37 + int mode; 38 + bool cpha, cpol; 39 + const u8 *tx_buf; 40 + u8 *rx_buf; 41 + int len; 42 + int i; 43 + 44 + mode = spi->mode; 45 + cpha = mode & SPI_CPHA; 46 + cpol = mode & SPI_CPOL; 47 + 48 + clkdiv = p->sys_freq / (2 * xfer->speed_hz); 49 + 50 + mpi_cfg.u64 = 0; 51 + 52 + mpi_cfg.s.clkdiv = clkdiv; 53 + mpi_cfg.s.cshi = (mode & SPI_CS_HIGH) ? 1 : 0; 54 + mpi_cfg.s.lsbfirst = (mode & SPI_LSB_FIRST) ? 1 : 0; 55 + mpi_cfg.s.wireor = (mode & SPI_3WIRE) ? 1 : 0; 56 + mpi_cfg.s.idlelo = cpha != cpol; 57 + mpi_cfg.s.cslate = cpha ? 
1 : 0; 58 + mpi_cfg.s.enable = 1; 59 + 60 + if (spi->chip_select < 4) 61 + p->cs_enax |= 1ull << (12 + spi->chip_select); 62 + mpi_cfg.u64 |= p->cs_enax; 63 + 64 + if (mpi_cfg.u64 != p->last_cfg) { 65 + p->last_cfg = mpi_cfg.u64; 66 + writeq(mpi_cfg.u64, p->register_base + OCTEON_SPI_CFG(p)); 67 + } 68 + tx_buf = xfer->tx_buf; 69 + rx_buf = xfer->rx_buf; 70 + len = xfer->len; 71 + while (len > OCTEON_SPI_MAX_BYTES) { 72 + for (i = 0; i < OCTEON_SPI_MAX_BYTES; i++) { 73 + u8 d; 74 + if (tx_buf) 75 + d = *tx_buf++; 76 + else 77 + d = 0; 78 + writeq(d, p->register_base + OCTEON_SPI_DAT0(p) + (8 * i)); 79 + } 80 + mpi_tx.u64 = 0; 81 + mpi_tx.s.csid = spi->chip_select; 82 + mpi_tx.s.leavecs = 1; 83 + mpi_tx.s.txnum = tx_buf ? OCTEON_SPI_MAX_BYTES : 0; 84 + mpi_tx.s.totnum = OCTEON_SPI_MAX_BYTES; 85 + writeq(mpi_tx.u64, p->register_base + OCTEON_SPI_TX(p)); 86 + 87 + octeon_spi_wait_ready(p); 88 + if (rx_buf) 89 + for (i = 0; i < OCTEON_SPI_MAX_BYTES; i++) { 90 + u64 v = readq(p->register_base + OCTEON_SPI_DAT0(p) + (8 * i)); 91 + *rx_buf++ = (u8)v; 92 + } 93 + len -= OCTEON_SPI_MAX_BYTES; 94 + } 95 + 96 + for (i = 0; i < len; i++) { 97 + u8 d; 98 + if (tx_buf) 99 + d = *tx_buf++; 100 + else 101 + d = 0; 102 + writeq(d, p->register_base + OCTEON_SPI_DAT0(p) + (8 * i)); 103 + } 104 + 105 + mpi_tx.u64 = 0; 106 + mpi_tx.s.csid = spi->chip_select; 107 + if (last_xfer) 108 + mpi_tx.s.leavecs = xfer->cs_change; 109 + else 110 + mpi_tx.s.leavecs = !xfer->cs_change; 111 + mpi_tx.s.txnum = tx_buf ? 
len : 0; 112 + mpi_tx.s.totnum = len; 113 + writeq(mpi_tx.u64, p->register_base + OCTEON_SPI_TX(p)); 114 + 115 + octeon_spi_wait_ready(p); 116 + if (rx_buf) 117 + for (i = 0; i < len; i++) { 118 + u64 v = readq(p->register_base + OCTEON_SPI_DAT0(p) + (8 * i)); 119 + *rx_buf++ = (u8)v; 120 + } 121 + 122 + if (xfer->delay_usecs) 123 + udelay(xfer->delay_usecs); 124 + 125 + return xfer->len; 126 + } 127 + 128 + int octeon_spi_transfer_one_message(struct spi_master *master, 129 + struct spi_message *msg) 130 + { 131 + struct octeon_spi *p = spi_master_get_devdata(master); 132 + unsigned int total_len = 0; 133 + int status = 0; 134 + struct spi_transfer *xfer; 135 + 136 + list_for_each_entry(xfer, &msg->transfers, transfer_list) { 137 + bool last_xfer = list_is_last(&xfer->transfer_list, 138 + &msg->transfers); 139 + int r = octeon_spi_do_transfer(p, msg, xfer, last_xfer); 140 + if (r < 0) { 141 + status = r; 142 + goto err; 143 + } 144 + total_len += r; 145 + } 146 + err: 147 + msg->status = status; 148 + msg->actual_length = total_len; 149 + spi_finalize_current_message(master); 150 + return status; 151 + }
+26 -43
drivers/spi/spi-clps711x.c
··· 1 1 /* 2 2 * CLPS711X SPI bus driver 3 3 * 4 - * Copyright (C) 2012-2014 Alexander Shiyan <shc_work@mail.ru> 4 + * Copyright (C) 2012-2016 Alexander Shiyan <shc_work@mail.ru> 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License as published by ··· 12 12 #include <linux/io.h> 13 13 #include <linux/clk.h> 14 14 #include <linux/gpio.h> 15 - #include <linux/delay.h> 16 15 #include <linux/module.h> 17 16 #include <linux/interrupt.h> 18 17 #include <linux/platform_device.h> ··· 19 20 #include <linux/mfd/syscon.h> 20 21 #include <linux/mfd/syscon/clps711x.h> 21 22 #include <linux/spi/spi.h> 22 - #include <linux/platform_data/spi-clps711x.h> 23 23 24 - #define DRIVER_NAME "spi-clps711x" 24 + #define DRIVER_NAME "clps711x-spi" 25 25 26 26 #define SYNCIO_FRMLEN(x) ((x) << 8) 27 27 #define SYNCIO_TXFRMEN (1 << 14) ··· 38 40 39 41 static int spi_clps711x_setup(struct spi_device *spi) 40 42 { 43 + if (!spi->controller_state) { 44 + int ret; 45 + 46 + ret = devm_gpio_request(&spi->master->dev, spi->cs_gpio, 47 + dev_name(&spi->master->dev)); 48 + if (ret) 49 + return ret; 50 + 51 + spi->controller_state = spi; 52 + } 53 + 41 54 /* We are expect that SPI-device is not selected */ 42 55 gpio_direction_output(spi->cs_gpio, !(spi->mode & SPI_CS_HIGH)); 43 56 ··· 113 104 static int spi_clps711x_probe(struct platform_device *pdev) 114 105 { 115 106 struct spi_clps711x_data *hw; 116 - struct spi_clps711x_pdata *pdata = dev_get_platdata(&pdev->dev); 117 107 struct spi_master *master; 118 108 struct resource *res; 119 - int i, irq, ret; 120 - 121 - if (!pdata) { 122 - dev_err(&pdev->dev, "No platform data supplied\n"); 123 - return -EINVAL; 124 - } 125 - 126 - if (pdata->num_chipselect < 1) { 127 - dev_err(&pdev->dev, "At least one CS must be defined\n"); 128 - return -EINVAL; 129 - } 109 + int irq, ret; 130 110 131 111 irq = platform_get_irq(pdev, 0); 132 112 if (irq < 0) ··· 125 127 if (!master) 
126 128 return -ENOMEM; 127 129 128 - master->cs_gpios = devm_kzalloc(&pdev->dev, sizeof(int) * 129 - pdata->num_chipselect, GFP_KERNEL); 130 - if (!master->cs_gpios) { 131 - ret = -ENOMEM; 132 - goto err_out; 133 - } 134 - 135 - master->bus_num = pdev->id; 130 + master->bus_num = -1; 136 131 master->mode_bits = SPI_CPHA | SPI_CS_HIGH; 137 132 master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 8); 138 - master->num_chipselect = pdata->num_chipselect; 133 + master->dev.of_node = pdev->dev.of_node; 139 134 master->setup = spi_clps711x_setup; 140 135 master->prepare_message = spi_clps711x_prepare_message; 141 136 master->transfer_one = spi_clps711x_transfer_one; 142 137 143 138 hw = spi_master_get_devdata(master); 144 - 145 - for (i = 0; i < master->num_chipselect; i++) { 146 - master->cs_gpios[i] = pdata->chipselect[i]; 147 - ret = devm_gpio_request(&pdev->dev, master->cs_gpios[i], 148 - DRIVER_NAME); 149 - if (ret) { 150 - dev_err(&pdev->dev, "Can't get CS GPIO %i\n", i); 151 - goto err_out; 152 - } 153 - } 154 139 155 140 hw->spi_clk = devm_clk_get(&pdev->dev, NULL); 156 141 if (IS_ERR(hw->spi_clk)) { ··· 141 160 goto err_out; 142 161 } 143 162 144 - hw->syscon = syscon_regmap_lookup_by_pdevname("syscon.3"); 163 + hw->syscon = 164 + syscon_regmap_lookup_by_compatible("cirrus,ep7209-syscon3"); 145 165 if (IS_ERR(hw->syscon)) { 146 166 ret = PTR_ERR(hw->syscon); 147 167 goto err_out; ··· 167 185 goto err_out; 168 186 169 187 ret = devm_spi_register_master(&pdev->dev, master); 170 - if (!ret) { 171 - dev_info(&pdev->dev, 172 - "SPI bus driver initialized. 
Master clock %u Hz\n", 173 - master->max_speed_hz); 188 + if (!ret) 174 189 return 0; 175 - } 176 - 177 - dev_err(&pdev->dev, "Failed to register master\n"); 178 190 179 191 err_out: 180 192 spi_master_put(master); ··· 176 200 return ret; 177 201 } 178 202 203 + static const struct of_device_id clps711x_spi_dt_ids[] = { 204 + { .compatible = "cirrus,ep7209-spi", }, 205 + { } 206 + }; 207 + MODULE_DEVICE_TABLE(of, clps711x_spi_dt_ids); 208 + 179 209 static struct platform_driver clps711x_spi_driver = { 180 210 .driver = { 181 211 .name = DRIVER_NAME, 212 + .of_match_table = clps711x_spi_dt_ids, 182 213 }, 183 214 .probe = spi_clps711x_probe, 184 215 };
+94 -97
drivers/spi/spi-imx.c
··· 59 59 struct spi_imx_config { 60 60 unsigned int speed_hz; 61 61 unsigned int bpw; 62 - unsigned int mode; 63 - u8 cs; 64 62 }; 65 63 66 64 enum spi_imx_devtype { ··· 74 76 75 77 struct spi_imx_devtype_data { 76 78 void (*intctrl)(struct spi_imx_data *, int); 77 - int (*config)(struct spi_imx_data *, struct spi_imx_config *); 79 + int (*config)(struct spi_device *, struct spi_imx_config *); 78 80 void (*trigger)(struct spi_imx_data *); 79 81 int (*rx_available)(struct spi_imx_data *); 80 82 void (*reset)(struct spi_imx_data *); ··· 110 112 struct completion dma_tx_completion; 111 113 112 114 const struct spi_imx_devtype_data *devtype_data; 113 - int chipselect[0]; 114 115 }; 115 116 116 117 static inline int is_imx27_cspi(struct spi_imx_data *d) ··· 309 312 (post << MX51_ECSPI_CTRL_POSTDIV_OFFSET); 310 313 } 311 314 312 - static void __maybe_unused mx51_ecspi_intctrl(struct spi_imx_data *spi_imx, int enable) 315 + static void mx51_ecspi_intctrl(struct spi_imx_data *spi_imx, int enable) 313 316 { 314 317 unsigned val = 0; 315 318 ··· 322 325 writel(val, spi_imx->base + MX51_ECSPI_INT); 323 326 } 324 327 325 - static void __maybe_unused mx51_ecspi_trigger(struct spi_imx_data *spi_imx) 328 + static void mx51_ecspi_trigger(struct spi_imx_data *spi_imx) 326 329 { 327 330 u32 reg; 328 331 ··· 331 334 writel(reg, spi_imx->base + MX51_ECSPI_CTRL); 332 335 } 333 336 334 - static int __maybe_unused mx51_ecspi_config(struct spi_imx_data *spi_imx, 335 - struct spi_imx_config *config) 337 + static int mx51_ecspi_config(struct spi_device *spi, 338 + struct spi_imx_config *config) 336 339 { 340 + struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master); 337 341 u32 ctrl = MX51_ECSPI_CTRL_ENABLE; 338 342 u32 clk = config->speed_hz, delay, reg; 339 343 u32 cfg = readl(spi_imx->base + MX51_ECSPI_CONFIG); ··· 353 355 spi_imx->spi_bus_clk = clk; 354 356 355 357 /* set chip select to use */ 356 - ctrl |= MX51_ECSPI_CTRL_CS(config->cs); 358 + ctrl |= 
MX51_ECSPI_CTRL_CS(spi->chip_select); 357 359 358 360 ctrl |= (config->bpw - 1) << MX51_ECSPI_CTRL_BL_OFFSET; 359 361 360 - cfg |= MX51_ECSPI_CONFIG_SBBCTRL(config->cs); 362 + cfg |= MX51_ECSPI_CONFIG_SBBCTRL(spi->chip_select); 361 363 362 - if (config->mode & SPI_CPHA) 363 - cfg |= MX51_ECSPI_CONFIG_SCLKPHA(config->cs); 364 + if (spi->mode & SPI_CPHA) 365 + cfg |= MX51_ECSPI_CONFIG_SCLKPHA(spi->chip_select); 364 366 else 365 - cfg &= ~MX51_ECSPI_CONFIG_SCLKPHA(config->cs); 367 + cfg &= ~MX51_ECSPI_CONFIG_SCLKPHA(spi->chip_select); 366 368 367 - if (config->mode & SPI_CPOL) { 368 - cfg |= MX51_ECSPI_CONFIG_SCLKPOL(config->cs); 369 - cfg |= MX51_ECSPI_CONFIG_SCLKCTL(config->cs); 369 + if (spi->mode & SPI_CPOL) { 370 + cfg |= MX51_ECSPI_CONFIG_SCLKPOL(spi->chip_select); 371 + cfg |= MX51_ECSPI_CONFIG_SCLKCTL(spi->chip_select); 370 372 } else { 371 - cfg &= ~MX51_ECSPI_CONFIG_SCLKPOL(config->cs); 372 - cfg &= ~MX51_ECSPI_CONFIG_SCLKCTL(config->cs); 373 + cfg &= ~MX51_ECSPI_CONFIG_SCLKPOL(spi->chip_select); 374 + cfg &= ~MX51_ECSPI_CONFIG_SCLKCTL(spi->chip_select); 373 375 } 374 - if (config->mode & SPI_CS_HIGH) 375 - cfg |= MX51_ECSPI_CONFIG_SSBPOL(config->cs); 376 + if (spi->mode & SPI_CS_HIGH) 377 + cfg |= MX51_ECSPI_CONFIG_SSBPOL(spi->chip_select); 376 378 else 377 - cfg &= ~MX51_ECSPI_CONFIG_SSBPOL(config->cs); 379 + cfg &= ~MX51_ECSPI_CONFIG_SSBPOL(spi->chip_select); 378 380 379 381 if (spi_imx->usedma) 380 382 ctrl |= MX51_ECSPI_CTRL_SMC; ··· 383 385 writel(ctrl, spi_imx->base + MX51_ECSPI_CTRL); 384 386 385 387 reg = readl(spi_imx->base + MX51_ECSPI_TESTREG); 386 - if (config->mode & SPI_LOOP) 388 + if (spi->mode & SPI_LOOP) 387 389 reg |= MX51_ECSPI_TESTREG_LBC; 388 390 else 389 391 reg &= ~MX51_ECSPI_TESTREG_LBC; ··· 422 424 return 0; 423 425 } 424 426 425 - static int __maybe_unused mx51_ecspi_rx_available(struct spi_imx_data *spi_imx) 427 + static int mx51_ecspi_rx_available(struct spi_imx_data *spi_imx) 426 428 { 427 429 return readl(spi_imx->base + 
MX51_ECSPI_STAT) & MX51_ECSPI_STAT_RR; 428 430 } 429 431 430 - static void __maybe_unused mx51_ecspi_reset(struct spi_imx_data *spi_imx) 432 + static void mx51_ecspi_reset(struct spi_imx_data *spi_imx) 431 433 { 432 434 /* drain receive buffer */ 433 435 while (mx51_ecspi_rx_available(spi_imx)) ··· 457 459 * the i.MX35 has a slightly different register layout for bits 458 460 * we do not use here. 459 461 */ 460 - static void __maybe_unused mx31_intctrl(struct spi_imx_data *spi_imx, int enable) 462 + static void mx31_intctrl(struct spi_imx_data *spi_imx, int enable) 461 463 { 462 464 unsigned int val = 0; 463 465 ··· 469 471 writel(val, spi_imx->base + MXC_CSPIINT); 470 472 } 471 473 472 - static void __maybe_unused mx31_trigger(struct spi_imx_data *spi_imx) 474 + static void mx31_trigger(struct spi_imx_data *spi_imx) 473 475 { 474 476 unsigned int reg; 475 477 ··· 478 480 writel(reg, spi_imx->base + MXC_CSPICTRL); 479 481 } 480 482 481 - static int __maybe_unused mx31_config(struct spi_imx_data *spi_imx, 482 - struct spi_imx_config *config) 483 + static int mx31_config(struct spi_device *spi, struct spi_imx_config *config) 483 484 { 485 + struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master); 484 486 unsigned int reg = MX31_CSPICTRL_ENABLE | MX31_CSPICTRL_MASTER; 485 - int cs = spi_imx->chipselect[config->cs]; 486 487 487 488 reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, config->speed_hz) << 488 489 MX31_CSPICTRL_DR_SHIFT; ··· 493 496 reg |= (config->bpw - 1) << MX31_CSPICTRL_BC_SHIFT; 494 497 } 495 498 496 - if (config->mode & SPI_CPHA) 499 + if (spi->mode & SPI_CPHA) 497 500 reg |= MX31_CSPICTRL_PHA; 498 - if (config->mode & SPI_CPOL) 501 + if (spi->mode & SPI_CPOL) 499 502 reg |= MX31_CSPICTRL_POL; 500 - if (config->mode & SPI_CS_HIGH) 503 + if (spi->mode & SPI_CS_HIGH) 501 504 reg |= MX31_CSPICTRL_SSPOL; 502 - if (cs < 0) 503 - reg |= (cs + 32) << 505 + if (spi->cs_gpio < 0) 506 + reg |= (spi->cs_gpio + 32) << 504 507 (is_imx35_cspi(spi_imx) ? 
MX35_CSPICTRL_CS_SHIFT : 505 508 MX31_CSPICTRL_CS_SHIFT); 506 509 ··· 509 512 return 0; 510 513 } 511 514 512 - static int __maybe_unused mx31_rx_available(struct spi_imx_data *spi_imx) 515 + static int mx31_rx_available(struct spi_imx_data *spi_imx) 513 516 { 514 517 return readl(spi_imx->base + MX31_CSPISTATUS) & MX31_STATUS_RR; 515 518 } 516 519 517 - static void __maybe_unused mx31_reset(struct spi_imx_data *spi_imx) 520 + static void mx31_reset(struct spi_imx_data *spi_imx) 518 521 { 519 522 /* drain receive buffer */ 520 523 while (readl(spi_imx->base + MX31_CSPISTATUS) & MX31_STATUS_RR) ··· 534 537 #define MX21_CSPICTRL_DR_SHIFT 14 535 538 #define MX21_CSPICTRL_CS_SHIFT 19 536 539 537 - static void __maybe_unused mx21_intctrl(struct spi_imx_data *spi_imx, int enable) 540 + static void mx21_intctrl(struct spi_imx_data *spi_imx, int enable) 538 541 { 539 542 unsigned int val = 0; 540 543 ··· 546 549 writel(val, spi_imx->base + MXC_CSPIINT); 547 550 } 548 551 549 - static void __maybe_unused mx21_trigger(struct spi_imx_data *spi_imx) 552 + static void mx21_trigger(struct spi_imx_data *spi_imx) 550 553 { 551 554 unsigned int reg; 552 555 ··· 555 558 writel(reg, spi_imx->base + MXC_CSPICTRL); 556 559 } 557 560 558 - static int __maybe_unused mx21_config(struct spi_imx_data *spi_imx, 559 - struct spi_imx_config *config) 561 + static int mx21_config(struct spi_device *spi, struct spi_imx_config *config) 560 562 { 563 + struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master); 561 564 unsigned int reg = MX21_CSPICTRL_ENABLE | MX21_CSPICTRL_MASTER; 562 - int cs = spi_imx->chipselect[config->cs]; 563 565 unsigned int max = is_imx27_cspi(spi_imx) ? 
16 : 18; 564 566 565 567 reg |= spi_imx_clkdiv_1(spi_imx->spi_clk, config->speed_hz, max) << 566 568 MX21_CSPICTRL_DR_SHIFT; 567 569 reg |= config->bpw - 1; 568 570 569 - if (config->mode & SPI_CPHA) 571 + if (spi->mode & SPI_CPHA) 570 572 reg |= MX21_CSPICTRL_PHA; 571 - if (config->mode & SPI_CPOL) 573 + if (spi->mode & SPI_CPOL) 572 574 reg |= MX21_CSPICTRL_POL; 573 - if (config->mode & SPI_CS_HIGH) 575 + if (spi->mode & SPI_CS_HIGH) 574 576 reg |= MX21_CSPICTRL_SSPOL; 575 - if (cs < 0) 576 - reg |= (cs + 32) << MX21_CSPICTRL_CS_SHIFT; 577 + if (spi->cs_gpio < 0) 578 + reg |= (spi->cs_gpio + 32) << MX21_CSPICTRL_CS_SHIFT; 577 579 578 580 writel(reg, spi_imx->base + MXC_CSPICTRL); 579 581 580 582 return 0; 581 583 } 582 584 583 - static int __maybe_unused mx21_rx_available(struct spi_imx_data *spi_imx) 585 + static int mx21_rx_available(struct spi_imx_data *spi_imx) 584 586 { 585 587 return readl(spi_imx->base + MXC_CSPIINT) & MX21_INTREG_RR; 586 588 } 587 589 588 - static void __maybe_unused mx21_reset(struct spi_imx_data *spi_imx) 590 + static void mx21_reset(struct spi_imx_data *spi_imx) 589 591 { 590 592 writel(1, spi_imx->base + MXC_RESET); 591 593 } ··· 600 604 #define MX1_CSPICTRL_MASTER (1 << 10) 601 605 #define MX1_CSPICTRL_DR_SHIFT 13 602 606 603 - static void __maybe_unused mx1_intctrl(struct spi_imx_data *spi_imx, int enable) 607 + static void mx1_intctrl(struct spi_imx_data *spi_imx, int enable) 604 608 { 605 609 unsigned int val = 0; 606 610 ··· 612 616 writel(val, spi_imx->base + MXC_CSPIINT); 613 617 } 614 618 615 - static void __maybe_unused mx1_trigger(struct spi_imx_data *spi_imx) 619 + static void mx1_trigger(struct spi_imx_data *spi_imx) 616 620 { 617 621 unsigned int reg; 618 622 ··· 621 625 writel(reg, spi_imx->base + MXC_CSPICTRL); 622 626 } 623 627 624 - static int __maybe_unused mx1_config(struct spi_imx_data *spi_imx, 625 - struct spi_imx_config *config) 628 + static int mx1_config(struct spi_device *spi, struct spi_imx_config *config) 
626 629 { 630 + struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master); 627 631 unsigned int reg = MX1_CSPICTRL_ENABLE | MX1_CSPICTRL_MASTER; 628 632 629 633 reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, config->speed_hz) << 630 634 MX1_CSPICTRL_DR_SHIFT; 631 635 reg |= config->bpw - 1; 632 636 633 - if (config->mode & SPI_CPHA) 637 + if (spi->mode & SPI_CPHA) 634 638 reg |= MX1_CSPICTRL_PHA; 635 - if (config->mode & SPI_CPOL) 639 + if (spi->mode & SPI_CPOL) 636 640 reg |= MX1_CSPICTRL_POL; 637 641 638 642 writel(reg, spi_imx->base + MXC_CSPICTRL); ··· 640 644 return 0; 641 645 } 642 646 643 - static int __maybe_unused mx1_rx_available(struct spi_imx_data *spi_imx) 647 + static int mx1_rx_available(struct spi_imx_data *spi_imx) 644 648 { 645 649 return readl(spi_imx->base + MXC_CSPIINT) & MX1_INTREG_RR; 646 650 } 647 651 648 - static void __maybe_unused mx1_reset(struct spi_imx_data *spi_imx) 652 + static void mx1_reset(struct spi_imx_data *spi_imx) 649 653 { 650 654 writel(1, spi_imx->base + MXC_RESET); 651 655 } ··· 743 747 744 748 static void spi_imx_chipselect(struct spi_device *spi, int is_active) 745 749 { 746 - struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master); 747 - int gpio = spi_imx->chipselect[spi->chip_select]; 748 750 int active = is_active != BITBANG_CS_INACTIVE; 749 751 int dev_is_lowactive = !(spi->mode & SPI_CS_HIGH); 750 752 751 - if (!gpio_is_valid(gpio)) 753 + if (!gpio_is_valid(spi->cs_gpio)) 752 754 return; 753 755 754 - gpio_set_value(gpio, dev_is_lowactive ^ active); 756 + gpio_set_value(spi->cs_gpio, dev_is_lowactive ^ active); 755 757 } 756 758 757 759 static void spi_imx_push(struct spi_imx_data *spi_imx) ··· 853 859 854 860 config.bpw = t ? t->bits_per_word : spi->bits_per_word; 855 861 config.speed_hz = t ? 
t->speed_hz : spi->max_speed_hz; 856 - config.mode = spi->mode; 857 - config.cs = spi->chip_select; 858 862 859 863 if (!config.speed_hz) 860 864 config.speed_hz = spi->max_speed_hz; ··· 883 891 return ret; 884 892 } 885 893 886 - spi_imx->devtype_data->config(spi_imx, &config); 894 + spi_imx->devtype_data->config(spi, &config); 887 895 888 896 return 0; 889 897 } ··· 1042 1050 struct spi_transfer *transfer) 1043 1051 { 1044 1052 struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master); 1053 + unsigned long transfer_timeout; 1054 + unsigned long timeout; 1045 1055 1046 1056 spi_imx->tx_buf = transfer->tx_buf; 1047 1057 spi_imx->rx_buf = transfer->rx_buf; ··· 1056 1062 1057 1063 spi_imx->devtype_data->intctrl(spi_imx, MXC_INT_TE); 1058 1064 1059 - wait_for_completion(&spi_imx->xfer_done); 1065 + transfer_timeout = spi_imx_calculate_timeout(spi_imx, transfer->len); 1066 + 1067 + timeout = wait_for_completion_timeout(&spi_imx->xfer_done, 1068 + transfer_timeout); 1069 + if (!timeout) { 1070 + dev_err(&spi->dev, "I/O Error in PIO\n"); 1071 + spi_imx->devtype_data->reset(spi_imx); 1072 + return -ETIMEDOUT; 1073 + } 1060 1074 1061 1075 return transfer->len; 1062 1076 } ··· 1082 1080 1083 1081 static int spi_imx_setup(struct spi_device *spi) 1084 1082 { 1085 - struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master); 1086 - int gpio = spi_imx->chipselect[spi->chip_select]; 1087 - 1088 1083 dev_dbg(&spi->dev, "%s: mode %d, %u bpw, %d hz\n", __func__, 1089 1084 spi->mode, spi->bits_per_word, spi->max_speed_hz); 1090 1085 1091 - if (gpio_is_valid(gpio)) 1092 - gpio_direction_output(gpio, spi->mode & SPI_CS_HIGH ? 0 : 1); 1086 + if (gpio_is_valid(spi->cs_gpio)) 1087 + gpio_direction_output(spi->cs_gpio, 1088 + spi->mode & SPI_CS_HIGH ? 
0 : 1); 1093 1089 1094 1090 spi_imx_chipselect(spi, BITBANG_CS_INACTIVE); 1095 1091 ··· 1137 1137 struct spi_master *master; 1138 1138 struct spi_imx_data *spi_imx; 1139 1139 struct resource *res; 1140 - int i, ret, num_cs, irq; 1140 + int i, ret, irq; 1141 1141 1142 1142 if (!np && !mxc_platform_info) { 1143 1143 dev_err(&pdev->dev, "can't get the platform data\n"); 1144 1144 return -EINVAL; 1145 1145 } 1146 1146 1147 - ret = of_property_read_u32(np, "fsl,spi-num-chipselects", &num_cs); 1148 - if (ret < 0) { 1149 - if (mxc_platform_info) 1150 - num_cs = mxc_platform_info->num_chipselect; 1151 - else 1152 - return ret; 1153 - } 1154 - 1155 - master = spi_alloc_master(&pdev->dev, 1156 - sizeof(struct spi_imx_data) + sizeof(int) * num_cs); 1147 + master = spi_alloc_master(&pdev->dev, sizeof(struct spi_imx_data)); 1157 1148 if (!master) 1158 1149 return -ENOMEM; 1159 1150 1160 1151 platform_set_drvdata(pdev, master); 1161 1152 1162 1153 master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32); 1163 - master->bus_num = pdev->id; 1164 - master->num_chipselect = num_cs; 1154 + master->bus_num = np ? -1 : pdev->id; 1165 1155 1166 1156 spi_imx = spi_master_get_devdata(master); 1167 1157 spi_imx->bitbang.master = master; ··· 1160 1170 spi_imx->devtype_data = of_id ? 
of_id->data : 1161 1171 (struct spi_imx_devtype_data *)pdev->id_entry->driver_data; 1162 1172 1163 - for (i = 0; i < master->num_chipselect; i++) { 1164 - int cs_gpio = of_get_named_gpio(np, "cs-gpios", i); 1165 - if (!gpio_is_valid(cs_gpio) && mxc_platform_info) 1166 - cs_gpio = mxc_platform_info->chipselect[i]; 1173 + if (mxc_platform_info) { 1174 + master->num_chipselect = mxc_platform_info->num_chipselect; 1175 + master->cs_gpios = devm_kzalloc(&master->dev, 1176 + sizeof(int) * master->num_chipselect, GFP_KERNEL); 1177 + if (!master->cs_gpios) 1178 + return -ENOMEM; 1167 1179 1168 - spi_imx->chipselect[i] = cs_gpio; 1169 - if (!gpio_is_valid(cs_gpio)) 1170 - continue; 1171 - 1172 - ret = devm_gpio_request(&pdev->dev, spi_imx->chipselect[i], 1173 - DRIVER_NAME); 1174 - if (ret) { 1175 - dev_err(&pdev->dev, "can't get cs gpios\n"); 1176 - goto out_master_put; 1177 - } 1178 - } 1180 + for (i = 0; i < master->num_chipselect; i++) 1181 + master->cs_gpios[i] = mxc_platform_info->chipselect[i]; 1182 + } 1179 1183 1180 1184 spi_imx->bitbang.chipselect = spi_imx_chipselect; 1181 1185 spi_imx->bitbang.setup_transfer = spi_imx_setupxfer; ··· 1249 1265 if (ret) { 1250 1266 dev_err(&pdev->dev, "bitbang start failed with %d\n", ret); 1251 1267 goto out_clk_put; 1268 + } 1269 + 1270 + for (i = 0; i < master->num_chipselect; i++) { 1271 + if (!gpio_is_valid(master->cs_gpios[i])) 1272 + continue; 1273 + 1274 + ret = devm_gpio_request(&pdev->dev, master->cs_gpios[i], 1275 + DRIVER_NAME); 1276 + if (ret) { 1277 + dev_err(&pdev->dev, "Can't get CS GPIO %i\n", 1278 + master->cs_gpios[i]); 1279 + goto out_clk_put; 1280 + } 1252 1281 } 1253 1282 1254 1283 dev_info(&pdev->dev, "probed\n");
+1 -1
drivers/spi/spi-loopback-test.c
··· 536 536 537 537 mismatch_error: 538 538 dev_err(&spi->dev, 539 - "loopback strangeness - transfer missmatch on byte %04zx - expected 0x%02x, but got 0x%02x\n", 539 + "loopback strangeness - transfer mismatch on byte %04zx - expected 0x%02x, but got 0x%02x\n", 540 540 i, txb, rxb); 541 541 542 542 return -EINVAL;
+3 -14
drivers/spi/spi-mpc52xx-psc.c
··· 42 42 u8 bits_per_word; 43 43 u8 busy; 44 44 45 - struct workqueue_struct *workqueue; 46 45 struct work_struct work; 47 46 48 47 struct list_head queue; ··· 298 299 299 300 spin_lock_irqsave(&mps->lock, flags); 300 301 list_add_tail(&m->queue, &mps->queue); 301 - queue_work(mps->workqueue, &mps->work); 302 + schedule_work(&mps->work); 302 303 spin_unlock_irqrestore(&mps->lock, flags); 303 304 304 305 return 0; ··· 424 425 INIT_WORK(&mps->work, mpc52xx_psc_spi_work); 425 426 INIT_LIST_HEAD(&mps->queue); 426 427 427 - mps->workqueue = create_singlethread_workqueue( 428 - dev_name(master->dev.parent)); 429 - if (mps->workqueue == NULL) { 430 - ret = -EBUSY; 431 - goto free_irq; 432 - } 433 - 434 428 ret = spi_register_master(master); 435 429 if (ret < 0) 436 - goto unreg_master; 430 + goto free_irq; 437 431 438 432 return ret; 439 433 440 - unreg_master: 441 - destroy_workqueue(mps->workqueue); 442 434 free_irq: 443 435 free_irq(mps->irq, mps); 444 436 free_master: ··· 474 484 struct spi_master *master = spi_master_get(platform_get_drvdata(op)); 475 485 struct mpc52xx_psc_spi *mps = spi_master_get_devdata(master); 476 486 477 - flush_workqueue(mps->workqueue); 478 - destroy_workqueue(mps->workqueue); 487 + flush_work(&mps->work); 479 488 spi_unregister_master(master); 480 489 free_irq(mps->irq, mps); 481 490 if (mps->psc)
-255
drivers/spi/spi-octeon.c
··· 1 - /* 2 - * This file is subject to the terms and conditions of the GNU General Public 3 - * License. See the file "COPYING" in the main directory of this archive 4 - * for more details. 5 - * 6 - * Copyright (C) 2011, 2012 Cavium, Inc. 7 - */ 8 - 9 - #include <linux/platform_device.h> 10 - #include <linux/interrupt.h> 11 - #include <linux/spi/spi.h> 12 - #include <linux/module.h> 13 - #include <linux/delay.h> 14 - #include <linux/io.h> 15 - #include <linux/of.h> 16 - 17 - #include <asm/octeon/octeon.h> 18 - #include <asm/octeon/cvmx-mpi-defs.h> 19 - 20 - #define OCTEON_SPI_CFG 0 21 - #define OCTEON_SPI_STS 0x08 22 - #define OCTEON_SPI_TX 0x10 23 - #define OCTEON_SPI_DAT0 0x80 24 - 25 - #define OCTEON_SPI_MAX_BYTES 9 26 - 27 - #define OCTEON_SPI_MAX_CLOCK_HZ 16000000 28 - 29 - struct octeon_spi { 30 - u64 register_base; 31 - u64 last_cfg; 32 - u64 cs_enax; 33 - }; 34 - 35 - static void octeon_spi_wait_ready(struct octeon_spi *p) 36 - { 37 - union cvmx_mpi_sts mpi_sts; 38 - unsigned int loops = 0; 39 - 40 - do { 41 - if (loops++) 42 - __delay(500); 43 - mpi_sts.u64 = cvmx_read_csr(p->register_base + OCTEON_SPI_STS); 44 - } while (mpi_sts.s.busy); 45 - } 46 - 47 - static int octeon_spi_do_transfer(struct octeon_spi *p, 48 - struct spi_message *msg, 49 - struct spi_transfer *xfer, 50 - bool last_xfer) 51 - { 52 - struct spi_device *spi = msg->spi; 53 - union cvmx_mpi_cfg mpi_cfg; 54 - union cvmx_mpi_tx mpi_tx; 55 - unsigned int clkdiv; 56 - unsigned int speed_hz; 57 - int mode; 58 - bool cpha, cpol; 59 - const u8 *tx_buf; 60 - u8 *rx_buf; 61 - int len; 62 - int i; 63 - 64 - mode = spi->mode; 65 - cpha = mode & SPI_CPHA; 66 - cpol = mode & SPI_CPOL; 67 - 68 - speed_hz = xfer->speed_hz; 69 - 70 - clkdiv = octeon_get_io_clock_rate() / (2 * speed_hz); 71 - 72 - mpi_cfg.u64 = 0; 73 - 74 - mpi_cfg.s.clkdiv = clkdiv; 75 - mpi_cfg.s.cshi = (mode & SPI_CS_HIGH) ? 1 : 0; 76 - mpi_cfg.s.lsbfirst = (mode & SPI_LSB_FIRST) ? 1 : 0; 77 - mpi_cfg.s.wireor = (mode & SPI_3WIRE) ? 
1 : 0; 78 - mpi_cfg.s.idlelo = cpha != cpol; 79 - mpi_cfg.s.cslate = cpha ? 1 : 0; 80 - mpi_cfg.s.enable = 1; 81 - 82 - if (spi->chip_select < 4) 83 - p->cs_enax |= 1ull << (12 + spi->chip_select); 84 - mpi_cfg.u64 |= p->cs_enax; 85 - 86 - if (mpi_cfg.u64 != p->last_cfg) { 87 - p->last_cfg = mpi_cfg.u64; 88 - cvmx_write_csr(p->register_base + OCTEON_SPI_CFG, mpi_cfg.u64); 89 - } 90 - tx_buf = xfer->tx_buf; 91 - rx_buf = xfer->rx_buf; 92 - len = xfer->len; 93 - while (len > OCTEON_SPI_MAX_BYTES) { 94 - for (i = 0; i < OCTEON_SPI_MAX_BYTES; i++) { 95 - u8 d; 96 - if (tx_buf) 97 - d = *tx_buf++; 98 - else 99 - d = 0; 100 - cvmx_write_csr(p->register_base + OCTEON_SPI_DAT0 + (8 * i), d); 101 - } 102 - mpi_tx.u64 = 0; 103 - mpi_tx.s.csid = spi->chip_select; 104 - mpi_tx.s.leavecs = 1; 105 - mpi_tx.s.txnum = tx_buf ? OCTEON_SPI_MAX_BYTES : 0; 106 - mpi_tx.s.totnum = OCTEON_SPI_MAX_BYTES; 107 - cvmx_write_csr(p->register_base + OCTEON_SPI_TX, mpi_tx.u64); 108 - 109 - octeon_spi_wait_ready(p); 110 - if (rx_buf) 111 - for (i = 0; i < OCTEON_SPI_MAX_BYTES; i++) { 112 - u64 v = cvmx_read_csr(p->register_base + OCTEON_SPI_DAT0 + (8 * i)); 113 - *rx_buf++ = (u8)v; 114 - } 115 - len -= OCTEON_SPI_MAX_BYTES; 116 - } 117 - 118 - for (i = 0; i < len; i++) { 119 - u8 d; 120 - if (tx_buf) 121 - d = *tx_buf++; 122 - else 123 - d = 0; 124 - cvmx_write_csr(p->register_base + OCTEON_SPI_DAT0 + (8 * i), d); 125 - } 126 - 127 - mpi_tx.u64 = 0; 128 - mpi_tx.s.csid = spi->chip_select; 129 - if (last_xfer) 130 - mpi_tx.s.leavecs = xfer->cs_change; 131 - else 132 - mpi_tx.s.leavecs = !xfer->cs_change; 133 - mpi_tx.s.txnum = tx_buf ? 
len : 0; 134 - mpi_tx.s.totnum = len; 135 - cvmx_write_csr(p->register_base + OCTEON_SPI_TX, mpi_tx.u64); 136 - 137 - octeon_spi_wait_ready(p); 138 - if (rx_buf) 139 - for (i = 0; i < len; i++) { 140 - u64 v = cvmx_read_csr(p->register_base + OCTEON_SPI_DAT0 + (8 * i)); 141 - *rx_buf++ = (u8)v; 142 - } 143 - 144 - if (xfer->delay_usecs) 145 - udelay(xfer->delay_usecs); 146 - 147 - return xfer->len; 148 - } 149 - 150 - static int octeon_spi_transfer_one_message(struct spi_master *master, 151 - struct spi_message *msg) 152 - { 153 - struct octeon_spi *p = spi_master_get_devdata(master); 154 - unsigned int total_len = 0; 155 - int status = 0; 156 - struct spi_transfer *xfer; 157 - 158 - list_for_each_entry(xfer, &msg->transfers, transfer_list) { 159 - bool last_xfer = list_is_last(&xfer->transfer_list, 160 - &msg->transfers); 161 - int r = octeon_spi_do_transfer(p, msg, xfer, last_xfer); 162 - if (r < 0) { 163 - status = r; 164 - goto err; 165 - } 166 - total_len += r; 167 - } 168 - err: 169 - msg->status = status; 170 - msg->actual_length = total_len; 171 - spi_finalize_current_message(master); 172 - return status; 173 - } 174 - 175 - static int octeon_spi_probe(struct platform_device *pdev) 176 - { 177 - struct resource *res_mem; 178 - void __iomem *reg_base; 179 - struct spi_master *master; 180 - struct octeon_spi *p; 181 - int err = -ENOENT; 182 - 183 - master = spi_alloc_master(&pdev->dev, sizeof(struct octeon_spi)); 184 - if (!master) 185 - return -ENOMEM; 186 - p = spi_master_get_devdata(master); 187 - platform_set_drvdata(pdev, master); 188 - 189 - res_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 190 - reg_base = devm_ioremap_resource(&pdev->dev, res_mem); 191 - if (IS_ERR(reg_base)) { 192 - err = PTR_ERR(reg_base); 193 - goto fail; 194 - } 195 - 196 - p->register_base = (u64)reg_base; 197 - 198 - master->num_chipselect = 4; 199 - master->mode_bits = SPI_CPHA | 200 - SPI_CPOL | 201 - SPI_CS_HIGH | 202 - SPI_LSB_FIRST | 203 - SPI_3WIRE; 204 - 205 - 
master->transfer_one_message = octeon_spi_transfer_one_message; 206 - master->bits_per_word_mask = SPI_BPW_MASK(8); 207 - master->max_speed_hz = OCTEON_SPI_MAX_CLOCK_HZ; 208 - 209 - master->dev.of_node = pdev->dev.of_node; 210 - err = devm_spi_register_master(&pdev->dev, master); 211 - if (err) { 212 - dev_err(&pdev->dev, "register master failed: %d\n", err); 213 - goto fail; 214 - } 215 - 216 - dev_info(&pdev->dev, "OCTEON SPI bus driver\n"); 217 - 218 - return 0; 219 - fail: 220 - spi_master_put(master); 221 - return err; 222 - } 223 - 224 - static int octeon_spi_remove(struct platform_device *pdev) 225 - { 226 - struct spi_master *master = platform_get_drvdata(pdev); 227 - struct octeon_spi *p = spi_master_get_devdata(master); 228 - u64 register_base = p->register_base; 229 - 230 - /* Clear the CSENA* and put everything in a known state. */ 231 - cvmx_write_csr(register_base + OCTEON_SPI_CFG, 0); 232 - 233 - return 0; 234 - } 235 - 236 - static const struct of_device_id octeon_spi_match[] = { 237 - { .compatible = "cavium,octeon-3010-spi", }, 238 - {}, 239 - }; 240 - MODULE_DEVICE_TABLE(of, octeon_spi_match); 241 - 242 - static struct platform_driver octeon_spi_driver = { 243 - .driver = { 244 - .name = "spi-octeon", 245 - .of_match_table = octeon_spi_match, 246 - }, 247 - .probe = octeon_spi_probe, 248 - .remove = octeon_spi_remove, 249 - }; 250 - 251 - module_platform_driver(octeon_spi_driver); 252 - 253 - MODULE_DESCRIPTION("Cavium, Inc. OCTEON SPI bus driver"); 254 - MODULE_AUTHOR("David Daney"); 255 - MODULE_LICENSE("GPL");
+69 -76
drivers/spi/spi-omap2-mcspi.c
··· 419 419 420 420 if (mcspi_dma->dma_tx) { 421 421 struct dma_async_tx_descriptor *tx; 422 - struct scatterlist sg; 423 422 424 423 dmaengine_slave_config(mcspi_dma->dma_tx, &cfg); 425 424 426 - sg_init_table(&sg, 1); 427 - sg_dma_address(&sg) = xfer->tx_dma; 428 - sg_dma_len(&sg) = xfer->len; 429 - 430 - tx = dmaengine_prep_slave_sg(mcspi_dma->dma_tx, &sg, 1, 431 - DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 425 + tx = dmaengine_prep_slave_sg(mcspi_dma->dma_tx, xfer->tx_sg.sgl, 426 + xfer->tx_sg.nents, 427 + DMA_MEM_TO_DEV, 428 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 432 429 if (tx) { 433 430 tx->callback = omap2_mcspi_tx_callback; 434 431 tx->callback_param = spi; ··· 446 449 { 447 450 struct omap2_mcspi *mcspi; 448 451 struct omap2_mcspi_dma *mcspi_dma; 449 - unsigned int count, dma_count; 452 + unsigned int count, transfer_reduction = 0; 453 + struct scatterlist *sg_out[2]; 454 + int nb_sizes = 0, out_mapped_nents[2], ret, x; 455 + size_t sizes[2]; 450 456 u32 l; 451 457 int elements = 0; 452 458 int word_len, element_count; ··· 457 457 mcspi = spi_master_get_devdata(spi->master); 458 458 mcspi_dma = &mcspi->dma_channels[spi->chip_select]; 459 459 count = xfer->len; 460 - dma_count = xfer->len; 461 460 461 + /* 462 + * In the "End-of-Transfer Procedure" section for DMA RX in OMAP35x TRM 463 + * it mentions reducing DMA transfer length by one element in master 464 + * normal mode. 465 + */ 462 466 if (mcspi->fifo_depth == 0) 463 - dma_count -= es; 467 + transfer_reduction = es; 464 468 465 469 word_len = cs->word_len; 466 470 l = mcspi_cached_chconf0(spi); ··· 478 474 479 475 if (mcspi_dma->dma_rx) { 480 476 struct dma_async_tx_descriptor *tx; 481 - struct scatterlist sg; 482 477 483 478 dmaengine_slave_config(mcspi_dma->dma_rx, &cfg); 484 479 480 + /* 481 + * Reduce DMA transfer length by one more if McSPI is 482 + * configured in turbo mode. 
483 + */ 485 484 if ((l & OMAP2_MCSPI_CHCONF_TURBO) && mcspi->fifo_depth == 0) 486 - dma_count -= es; 485 + transfer_reduction += es; 487 486 488 - sg_init_table(&sg, 1); 489 - sg_dma_address(&sg) = xfer->rx_dma; 490 - sg_dma_len(&sg) = dma_count; 487 + if (transfer_reduction) { 488 + /* Split sgl into two. The second sgl won't be used. */ 489 + sizes[0] = count - transfer_reduction; 490 + sizes[1] = transfer_reduction; 491 + nb_sizes = 2; 492 + } else { 493 + /* 494 + * Don't bother splitting the sgl. This essentially 495 + * clones the original sgl. 496 + */ 497 + sizes[0] = count; 498 + nb_sizes = 1; 499 + } 491 500 492 - tx = dmaengine_prep_slave_sg(mcspi_dma->dma_rx, &sg, 1, 493 - DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT | 494 - DMA_CTRL_ACK); 501 + ret = sg_split(xfer->rx_sg.sgl, xfer->rx_sg.nents, 502 + 0, nb_sizes, 503 + sizes, 504 + sg_out, out_mapped_nents, 505 + GFP_KERNEL); 506 + 507 + if (ret < 0) { 508 + dev_err(&spi->dev, "sg_split failed\n"); 509 + return 0; 510 + } 511 + 512 + tx = dmaengine_prep_slave_sg(mcspi_dma->dma_rx, 513 + sg_out[0], 514 + out_mapped_nents[0], 515 + DMA_DEV_TO_MEM, 516 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 495 517 if (tx) { 496 518 tx->callback = omap2_mcspi_rx_callback; 497 519 tx->callback_param = spi; ··· 531 501 omap2_mcspi_set_dma_req(spi, 1, 1); 532 502 533 503 wait_for_completion(&mcspi_dma->dma_rx_completion); 534 - dma_unmap_single(mcspi->dev, xfer->rx_dma, count, 535 - DMA_FROM_DEVICE); 504 + 505 + for (x = 0; x < nb_sizes; x++) 506 + kfree(sg_out[x]); 536 507 537 508 if (mcspi->fifo_depth > 0) 538 509 return count; 539 510 511 + /* 512 + * Due to the DMA transfer length reduction the missing bytes must 513 + * be read manually to receive all of the expected data. 
+ */
 	omap2_mcspi_set_enable(spi, 0);
 
 	elements = element_count - 1;
···
 
 	if (tx != NULL) {
 		wait_for_completion(&mcspi_dma->dma_tx_completion);
-		dma_unmap_single(mcspi->dev, xfer->tx_dma, xfer->len,
-				 DMA_TO_DEVICE);
 
 		if (mcspi->fifo_depth > 0) {
 			irqstat_reg = mcspi->base + OMAP2_MCSPI_IRQSTATUS;
···
 		gpio_free(spi->cs_gpio);
 }
 
-static int omap2_mcspi_work_one(struct omap2_mcspi *mcspi,
-		struct spi_device *spi, struct spi_transfer *t)
+static int omap2_mcspi_transfer_one(struct spi_master *master,
+				    struct spi_device *spi,
+				    struct spi_transfer *t)
 {
 
 	/* We only enable one channel at a time -- the one whose message is
···
 	 * chipselect with the FORCE bit ... CS != channel enable.
 	 */
 
-	struct spi_master *master;
+	struct omap2_mcspi *mcspi;
 	struct omap2_mcspi_dma *mcspi_dma;
 	struct omap2_mcspi_cs *cs;
 	struct omap2_mcspi_device_config *cd;
···
 	int status = 0;
 	u32 chconf;
 
-	master = spi->master;
+	mcspi = spi_master_get_devdata(master);
 	mcspi_dma = mcspi->dma_channels + spi->chip_select;
 	cs = spi->controller_state;
 	cd = spi->controller_data;
···
 		unsigned count;
 
 		if ((mcspi_dma->dma_rx && mcspi_dma->dma_tx) &&
-		    (t->len >= DMA_MIN_BYTES))
+		    master->cur_msg_mapped &&
+		    master->can_dma(master, spi, t))
 			omap2_mcspi_set_fifo(spi, t, 1);
 
 		omap2_mcspi_set_enable(spi, 1);
···
 					+ OMAP2_MCSPI_TX0);
 
 		if ((mcspi_dma->dma_rx && mcspi_dma->dma_tx) &&
-		    (t->len >= DMA_MIN_BYTES))
+		    master->cur_msg_mapped &&
+		    master->can_dma(master, spi, t))
 			count = omap2_mcspi_txrx_dma(spi, t);
 		else
 			count = omap2_mcspi_txrx_pio(spi, t);
···
 	return 0;
 }
 
-static int omap2_mcspi_transfer_one(struct spi_master *master,
-		struct spi_device *spi, struct spi_transfer *t)
+static bool omap2_mcspi_can_dma(struct spi_master *master,
+				struct spi_device *spi,
+				struct spi_transfer *xfer)
 {
-	struct omap2_mcspi *mcspi;
-	struct omap2_mcspi_dma *mcspi_dma;
-	const void *tx_buf = t->tx_buf;
-	void *rx_buf = t->rx_buf;
-	unsigned len = t->len;
-
-	mcspi = spi_master_get_devdata(master);
-	mcspi_dma = mcspi->dma_channels + spi->chip_select;
-
-	if ((len && !(rx_buf || tx_buf))) {
-		dev_dbg(mcspi->dev, "transfer: %d Hz, %d %s%s, %d bpw\n",
-			t->speed_hz,
-			len,
-			tx_buf ? "tx" : "",
-			rx_buf ? "rx" : "",
-			t->bits_per_word);
-		return -EINVAL;
-	}
-
-	if (len < DMA_MIN_BYTES)
-		goto skip_dma_map;
-
-	if (mcspi_dma->dma_tx && tx_buf != NULL) {
-		t->tx_dma = dma_map_single(mcspi->dev, (void *) tx_buf,
-				len, DMA_TO_DEVICE);
-		if (dma_mapping_error(mcspi->dev, t->tx_dma)) {
-			dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
-					'T', len);
-			return -EINVAL;
-		}
-	}
-	if (mcspi_dma->dma_rx && rx_buf != NULL) {
-		t->rx_dma = dma_map_single(mcspi->dev, rx_buf, t->len,
-				DMA_FROM_DEVICE);
-		if (dma_mapping_error(mcspi->dev, t->rx_dma)) {
-			dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
-					'R', len);
-			if (tx_buf != NULL)
-				dma_unmap_single(mcspi->dev, t->tx_dma,
-						len, DMA_TO_DEVICE);
-			return -EINVAL;
-		}
-	}
-
-skip_dma_map:
-	return omap2_mcspi_work_one(mcspi, spi, t);
+	return (xfer->len >= DMA_MIN_BYTES);
 }
 
 static int omap2_mcspi_master_setup(struct omap2_mcspi *mcspi)
···
 	master->setup = omap2_mcspi_setup;
 	master->auto_runtime_pm = true;
 	master->prepare_message = omap2_mcspi_prepare_message;
+	master->can_dma = omap2_mcspi_can_dma;
 	master->transfer_one = omap2_mcspi_transfer_one;
 	master->set_cs = omap2_mcspi_set_cs;
 	master->cleanup = omap2_mcspi_cleanup;
drivers/spi/spi-orion.c (+88)
···
 #include <linux/module.h>
 #include <linux/pm_runtime.h>
 #include <linux/of.h>
+#include <linux/of_address.h>
 #include <linux/of_device.h>
 #include <linux/clk.h>
 #include <linux/sizes.h>
···
 #define ORION_SPI_DATA_IN_REG		0x0c
 #define ORION_SPI_INT_CAUSE_REG		0x10
 #define ORION_SPI_TIMING_PARAMS_REG	0x18
+
+/* Register for the "Direct Mode" */
+#define SPI_DIRECT_WRITE_CONFIG_REG	0x20
 
 #define ORION_SPI_TMISO_SAMPLE_MASK	(0x3 << 6)
 #define ORION_SPI_TMISO_SAMPLE_1	(1 << 6)
···
 	bool is_errata_50mhz_ac;
 };
 
+struct orion_direct_acc {
+	void __iomem *vaddr;
+	u32 size;
+};
+
 struct orion_spi {
 	struct spi_master *master;
 	void __iomem *base;
 	struct clk *clk;
 	const struct orion_spi_dev *devdata;
+
+	struct orion_direct_acc direct_access[ORION_NUM_CHIPSELECTS];
 };
 
 static inline void __iomem *spi_reg(struct orion_spi *orion_spi, u32 reg)
···
 {
 	unsigned int count;
 	int word_len;
+	struct orion_spi *orion_spi;
+	int cs = spi->chip_select;
 
 	word_len = spi->bits_per_word;
 	count = xfer->len;
+
+	orion_spi = spi_master_get_devdata(spi->master);
+
+	/*
+	 * Use SPI direct write mode if base address is available. Otherwise
+	 * fall back to PIO mode for this transfer.
+	 */
+	if ((orion_spi->direct_access[cs].vaddr) && (xfer->tx_buf) &&
+	    (word_len == 8)) {
+		unsigned int cnt = count / 4;
+		unsigned int rem = count % 4;
+
+		/*
+		 * Send the TX-data to the SPI device via the direct
+		 * mapped address window
+		 */
+		iowrite32_rep(orion_spi->direct_access[cs].vaddr,
+			      xfer->tx_buf, cnt);
+		if (rem) {
+			u32 *buf = (u32 *)xfer->tx_buf;
+
+			iowrite8_rep(orion_spi->direct_access[cs].vaddr,
+				     &buf[cnt], rem);
+		}
+
+		return count;
+	}
 
 	if (word_len == 8) {
 		const u8 *tx = xfer->tx_buf;
···
 {
 	/* Verify that the CS is deasserted */
 	orion_spi_clrbits(orion_spi, ORION_SPI_IF_CTRL_REG, 0x1);
+
+	/* Don't deassert CS between the direct mapped SPI transfers */
+	writel(0, spi_reg(orion_spi, SPI_DIRECT_WRITE_CONFIG_REG));
+
 	return 0;
 }
···
 	struct resource *r;
 	unsigned long tclk_hz;
 	int status = 0;
+	struct device_node *np;
 
 	master = spi_alloc_master(&pdev->dev, sizeof(*spi));
 	if (master == NULL) {
···
 	if (IS_ERR(spi->base)) {
 		status = PTR_ERR(spi->base);
 		goto out_rel_clk;
+	}
+
+	/* Scan all SPI devices of this controller for direct mapped devices */
+	for_each_available_child_of_node(pdev->dev.of_node, np) {
+		u32 cs;
+
+		/* Get chip-select number from the "reg" property */
+		status = of_property_read_u32(np, "reg", &cs);
+		if (status) {
+			dev_err(&pdev->dev,
+				"%s has no valid 'reg' property (%d)\n",
+				np->full_name, status);
+			status = 0;
+			continue;
+		}
+
+		/*
+		 * Check if an address is configured for this SPI device. If
+		 * not, the MBus mapping via the 'ranges' property in the 'soc'
+		 * node is not configured and this device should not use the
+		 * direct mode. In this case, just continue with the next
+		 * device.
+		 */
+		status = of_address_to_resource(pdev->dev.of_node, cs + 1, r);
+		if (status)
+			continue;
+
+		/*
+		 * Only map one page for direct access. This is enough for the
+		 * simple TX transfer which only writes to the first word.
+		 * This needs to get extended for the direct SPI-NOR / SPI-NAND
+		 * support, once this gets implemented.
+		 */
+		spi->direct_access[cs].vaddr = devm_ioremap(&pdev->dev,
+							    r->start,
+							    PAGE_SIZE);
+		if (!spi->direct_access[cs].vaddr) {
+			status = -ENOMEM;
+			goto out_rel_clk;
+		}
+		spi->direct_access[cs].size = PAGE_SIZE;
+
+		dev_info(&pdev->dev, "CS%d configured for direct access\n", cs);
 	}
 
 	pm_runtime_set_active(&pdev->dev);
drivers/spi/spi-pic32-sqi.c (+4 -3)
···
 	struct spi_transfer *xfer;
 	struct pic32_sqi *sqi;
 	int ret = 0, mode;
+	unsigned long timeout;
 	u32 val;
 
 	sqi = spi_master_get_devdata(master);
···
 	writel(val, sqi->regs + PESQI_BD_CTRL_REG);
 
 	/* wait for xfer completion */
-	ret = wait_for_completion_timeout(&sqi->xfer_done, 5 * HZ);
-	if (ret <= 0) {
+	timeout = wait_for_completion_timeout(&sqi->xfer_done, 5 * HZ);
+	if (timeout == 0) {
 		dev_err(&sqi->master->dev, "wait timedout/interrupted\n");
-		ret = -EIO;
+		ret = -ETIMEDOUT;
 		msg->status = ret;
 	} else {
 		/* success */
drivers/spi/spi-pic32.c (+3 -2)
···
 {
 	struct pic32_spi *pic32s;
 	bool dma_issued = false;
+	unsigned long timeout;
 	int ret;
 
 	pic32s = spi_master_get_devdata(master);
···
 	}
 
 	/* wait for completion */
-	ret = wait_for_completion_timeout(&pic32s->xfer_done, 2 * HZ);
-	if (ret <= 0) {
+	timeout = wait_for_completion_timeout(&pic32s->xfer_done, 2 * HZ);
+	if (timeout == 0) {
 		dev_err(&spi->dev, "wait error/timedout\n");
 		if (dma_issued) {
 			dmaengine_terminate_all(master->dma_rx);
drivers/spi/spi-pxa2xx-dma.c (+29 -141)
···
 
 #include "spi-pxa2xx.h"
 
-static int pxa2xx_spi_map_dma_buffer(struct driver_data *drv_data,
-				     enum dma_data_direction dir)
-{
-	int i, nents, len = drv_data->len;
-	struct scatterlist *sg;
-	struct device *dmadev;
-	struct sg_table *sgt;
-	void *buf, *pbuf;
-
-	if (dir == DMA_TO_DEVICE) {
-		dmadev = drv_data->tx_chan->device->dev;
-		sgt = &drv_data->tx_sgt;
-		buf = drv_data->tx;
-	} else {
-		dmadev = drv_data->rx_chan->device->dev;
-		sgt = &drv_data->rx_sgt;
-		buf = drv_data->rx;
-	}
-
-	nents = DIV_ROUND_UP(len, SZ_2K);
-	if (nents != sgt->nents) {
-		int ret;
-
-		sg_free_table(sgt);
-		ret = sg_alloc_table(sgt, nents, GFP_ATOMIC);
-		if (ret)
-			return ret;
-	}
-
-	pbuf = buf;
-	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
-		size_t bytes = min_t(size_t, len, SZ_2K);
-
-		sg_set_buf(sg, pbuf, bytes);
-		pbuf += bytes;
-		len -= bytes;
-	}
-
-	nents = dma_map_sg(dmadev, sgt->sgl, sgt->nents, dir);
-	if (!nents)
-		return -ENOMEM;
-
-	return nents;
-}
-
-static void pxa2xx_spi_unmap_dma_buffer(struct driver_data *drv_data,
-					enum dma_data_direction dir)
-{
-	struct device *dmadev;
-	struct sg_table *sgt;
-
-	if (dir == DMA_TO_DEVICE) {
-		dmadev = drv_data->tx_chan->device->dev;
-		sgt = &drv_data->tx_sgt;
-	} else {
-		dmadev = drv_data->rx_chan->device->dev;
-		sgt = &drv_data->rx_sgt;
-	}
-
-	dma_unmap_sg(dmadev, sgt->sgl, sgt->nents, dir);
-}
-
-static void pxa2xx_spi_unmap_dma_buffers(struct driver_data *drv_data)
-{
-	if (!drv_data->dma_mapped)
-		return;
-
-	pxa2xx_spi_unmap_dma_buffer(drv_data, DMA_FROM_DEVICE);
-	pxa2xx_spi_unmap_dma_buffer(drv_data, DMA_TO_DEVICE);
-
-	drv_data->dma_mapped = 0;
-}
-
 static void pxa2xx_spi_dma_transfer_complete(struct driver_data *drv_data,
 					     bool error)
 {
···
 		pxa2xx_spi_write(drv_data, SSTO, 0);
 
 	if (!error) {
-		pxa2xx_spi_unmap_dma_buffers(drv_data);
-
 		msg->actual_length += drv_data->len;
 		msg->state = pxa2xx_spi_next_transfer(drv_data);
 	} else {
···
 			   enum dma_transfer_direction dir)
 {
 	struct chip_data *chip = drv_data->cur_chip;
+	struct spi_transfer *xfer = drv_data->cur_transfer;
 	enum dma_slave_buswidth width;
 	struct dma_slave_config cfg;
 	struct dma_chan *chan;
 	struct sg_table *sgt;
-	int nents, ret;
+	int ret;
 
 	switch (drv_data->n_bytes) {
 	case 1:
···
 		cfg.dst_addr_width = width;
 		cfg.dst_maxburst = chip->dma_burst_size;
 
-		sgt = &drv_data->tx_sgt;
-		nents = drv_data->tx_nents;
-		chan = drv_data->tx_chan;
+		sgt = &xfer->tx_sg;
+		chan = drv_data->master->dma_tx;
 	} else {
 		cfg.src_addr = drv_data->ssdr_physical;
 		cfg.src_addr_width = width;
 		cfg.src_maxburst = chip->dma_burst_size;
 
-		sgt = &drv_data->rx_sgt;
-		nents = drv_data->rx_nents;
-		chan = drv_data->rx_chan;
+		sgt = &xfer->rx_sg;
+		chan = drv_data->master->dma_rx;
 	}
 
 	ret = dmaengine_slave_config(chan, &cfg);
···
 		return NULL;
 	}
 
-	return dmaengine_prep_slave_sg(chan, sgt->sgl, nents, dir,
+	return dmaengine_prep_slave_sg(chan, sgt->sgl, sgt->nents, dir,
 				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-}
-
-bool pxa2xx_spi_dma_is_possible(size_t len)
-{
-	return len <= MAX_DMA_LEN;
-}
-
-int pxa2xx_spi_map_dma_buffers(struct driver_data *drv_data)
-{
-	const struct chip_data *chip = drv_data->cur_chip;
-	int ret;
-
-	if (!chip->enable_dma)
-		return 0;
-
-	/* Don't bother with DMA if we can't do even a single burst */
-	if (drv_data->len < chip->dma_burst_size)
-		return 0;
-
-	ret = pxa2xx_spi_map_dma_buffer(drv_data, DMA_TO_DEVICE);
-	if (ret <= 0) {
-		dev_warn(&drv_data->pdev->dev, "failed to DMA map TX\n");
-		return 0;
-	}
-
-	drv_data->tx_nents = ret;
-
-	ret = pxa2xx_spi_map_dma_buffer(drv_data, DMA_FROM_DEVICE);
-	if (ret <= 0) {
-		pxa2xx_spi_unmap_dma_buffer(drv_data, DMA_TO_DEVICE);
-		dev_warn(&drv_data->pdev->dev, "failed to DMA map RX\n");
-		return 0;
-	}
-
-	drv_data->rx_nents = ret;
-	return 1;
 }
 
 irqreturn_t pxa2xx_spi_dma_transfer(struct driver_data *drv_data)
···
 	if (status & SSSR_ROR) {
 		dev_err(&drv_data->pdev->dev, "FIFO overrun\n");
 
-		dmaengine_terminate_async(drv_data->rx_chan);
-		dmaengine_terminate_async(drv_data->tx_chan);
+		dmaengine_terminate_async(drv_data->master->dma_rx);
+		dmaengine_terminate_async(drv_data->master->dma_tx);
 
 		pxa2xx_spi_dma_transfer_complete(drv_data, true);
 		return IRQ_HANDLED;
···
 	return 0;
 
 err_rx:
-	dmaengine_terminate_async(drv_data->tx_chan);
+	dmaengine_terminate_async(drv_data->master->dma_tx);
 err_tx:
-	pxa2xx_spi_unmap_dma_buffers(drv_data);
 	return err;
 }
 
 void pxa2xx_spi_dma_start(struct driver_data *drv_data)
 {
-	dma_async_issue_pending(drv_data->rx_chan);
-	dma_async_issue_pending(drv_data->tx_chan);
+	dma_async_issue_pending(drv_data->master->dma_rx);
+	dma_async_issue_pending(drv_data->master->dma_tx);
 
 	atomic_set(&drv_data->dma_running, 1);
 }
···
 {
 	struct pxa2xx_spi_master *pdata = drv_data->master_info;
 	struct device *dev = &drv_data->pdev->dev;
+	struct spi_master *master = drv_data->master;
 	dma_cap_mask_t mask;
 
 	dma_cap_zero(mask);
 	dma_cap_set(DMA_SLAVE, mask);
 
-	drv_data->tx_chan = dma_request_slave_channel_compat(mask,
+	master->dma_tx = dma_request_slave_channel_compat(mask,
 				pdata->dma_filter, pdata->tx_param, dev, "tx");
-	if (!drv_data->tx_chan)
+	if (!master->dma_tx)
 		return -ENODEV;
 
-	drv_data->rx_chan = dma_request_slave_channel_compat(mask,
+	master->dma_rx = dma_request_slave_channel_compat(mask,
 				pdata->dma_filter, pdata->rx_param, dev, "rx");
-	if (!drv_data->rx_chan) {
-		dma_release_channel(drv_data->tx_chan);
-		drv_data->tx_chan = NULL;
+	if (!master->dma_rx) {
+		dma_release_channel(master->dma_tx);
+		master->dma_tx = NULL;
 		return -ENODEV;
 	}
···
 
 void pxa2xx_spi_dma_release(struct driver_data *drv_data)
 {
-	if (drv_data->rx_chan) {
-		dmaengine_terminate_sync(drv_data->rx_chan);
-		dma_release_channel(drv_data->rx_chan);
-		sg_free_table(&drv_data->rx_sgt);
-		drv_data->rx_chan = NULL;
+	struct spi_master *master = drv_data->master;
+
+	if (master->dma_rx) {
+		dmaengine_terminate_sync(master->dma_rx);
+		dma_release_channel(master->dma_rx);
+		master->dma_rx = NULL;
 	}
-	if (drv_data->tx_chan) {
-		dmaengine_terminate_sync(drv_data->tx_chan);
-		dma_release_channel(drv_data->tx_chan);
-		sg_free_table(&drv_data->tx_sgt);
-		drv_data->tx_chan = NULL;
+	if (master->dma_tx) {
+		dmaengine_terminate_sync(master->dma_tx);
+		dma_release_channel(master->dma_tx);
+		master->dma_tx = NULL;
 	}
 }
drivers/spi/spi-pxa2xx-pci.c (+124 -88)
···
 /*
  * CE4100's SPI device is more or less the same one as found on PXA
  *
+ * Copyright (C) 2016, Intel Corporation
  */
+#include <linux/clk-provider.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
 #include <linux/pci.h>
 #include <linux/platform_device.h>
-#include <linux/of_device.h>
-#include <linux/module.h>
 #include <linux/spi/pxa2xx_spi.h>
-#include <linux/clk-provider.h>
 
 #include <linux/dmaengine.h>
 #include <linux/platform_data/dma-dw.h>
 
 enum {
-	PORT_CE4100,
+	PORT_QUARK_X1000,
 	PORT_BYT,
+	PORT_MRFLD,
 	PORT_BSW0,
 	PORT_BSW1,
 	PORT_BSW2,
-	PORT_QUARK_X1000,
+	PORT_CE4100,
 	PORT_LPT,
 };
···
 	unsigned long max_clk_rate;
 
 	/* DMA channel request parameters */
+	bool (*dma_filter)(struct dma_chan *chan, void *param);
 	void *tx_param;
 	void *rx_param;
+
+	int (*setup)(struct pci_dev *pdev, struct pxa_spi_info *c);
 };
 
 static struct dw_dma_slave byt_tx_param = { .dst_id = 0 };
···
 	return true;
 }
 
-static struct pxa_spi_info spi_info_configs[] = {
-	[PORT_CE4100] = {
-		.type = PXA25x_SSP,
-		.port_id = -1,
-		.num_chipselect = -1,
-		.max_clk_rate = 3686400,
-	},
-	[PORT_BYT] = {
-		.type = LPSS_BYT_SSP,
-		.port_id = 0,
-		.num_chipselect = 1,
-		.max_clk_rate = 50000000,
-		.tx_param = &byt_tx_param,
-		.rx_param = &byt_rx_param,
-	},
-	[PORT_BSW0] = {
-		.type = LPSS_BYT_SSP,
-		.port_id = 0,
-		.num_chipselect = 1,
-		.max_clk_rate = 50000000,
-		.tx_param = &bsw0_tx_param,
-		.rx_param = &bsw0_rx_param,
-	},
-	[PORT_BSW1] = {
-		.type = LPSS_BYT_SSP,
-		.port_id = 1,
-		.num_chipselect = 1,
-		.max_clk_rate = 50000000,
-		.tx_param = &bsw1_tx_param,
-		.rx_param = &bsw1_rx_param,
-	},
-	[PORT_BSW2] = {
-		.type = LPSS_BYT_SSP,
-		.port_id = 2,
-		.num_chipselect = 1,
-		.max_clk_rate = 50000000,
-		.tx_param = &bsw2_tx_param,
-		.rx_param = &bsw2_rx_param,
-	},
-	[PORT_QUARK_X1000] = {
-		.type = QUARK_X1000_SSP,
-		.port_id = -1,
-		.num_chipselect = 1,
-		.max_clk_rate = 50000000,
-	},
-	[PORT_LPT] = {
-		.type = LPSS_LPT_SSP,
-		.port_id = 0,
-		.num_chipselect = 1,
-		.max_clk_rate = 50000000,
-		.tx_param = &lpt_tx_param,
-		.rx_param = &lpt_rx_param,
-	},
-};
-
-static int pxa2xx_spi_pci_probe(struct pci_dev *dev,
-		const struct pci_device_id *ent)
+static int lpss_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
 {
-	struct platform_device_info pi;
-	int ret;
-	struct platform_device *pdev;
-	struct pxa2xx_spi_master spi_pdata;
-	struct ssp_device *ssp;
-	struct pxa_spi_info *c;
-	char buf[40];
 	struct pci_dev *dma_dev;
 
-	ret = pcim_enable_device(dev);
-	if (ret)
-		return ret;
-
-	ret = pcim_iomap_regions(dev, 1 << 0, "PXA2xx SPI");
-	if (ret)
-		return ret;
-
-	c = &spi_info_configs[ent->driver_data];
-
-	memset(&spi_pdata, 0, sizeof(spi_pdata));
-	spi_pdata.num_chipselect = (c->num_chipselect > 0) ?
-					c->num_chipselect : dev->devfn;
+	c->num_chipselect = 1;
+	c->max_clk_rate = 50000000;
 
 	dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
···
 		slave->p_master = 1;
 	}
 
-	spi_pdata.dma_filter = lpss_dma_filter;
+	c->dma_filter = lpss_dma_filter;
+	return 0;
+}
+
+static int mrfld_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+{
+	switch (PCI_FUNC(dev->devfn)) {
+	case 0:
+		c->port_id = 3;
+		c->num_chipselect = 1;
+		break;
+	case 1:
+		c->port_id = 5;
+		c->num_chipselect = 4;
+		break;
+	case 2:
+		c->port_id = 6;
+		c->num_chipselect = 1;
+		break;
+	default:
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static struct pxa_spi_info spi_info_configs[] = {
+	[PORT_CE4100] = {
+		.type = PXA25x_SSP,
+		.port_id = -1,
+		.num_chipselect = -1,
+		.max_clk_rate = 3686400,
+	},
+	[PORT_BYT] = {
+		.type = LPSS_BYT_SSP,
+		.port_id = 0,
+		.setup = lpss_spi_setup,
+		.tx_param = &byt_tx_param,
+		.rx_param = &byt_rx_param,
+	},
+	[PORT_BSW0] = {
+		.type = LPSS_BSW_SSP,
+		.port_id = 0,
+		.setup = lpss_spi_setup,
+		.tx_param = &bsw0_tx_param,
+		.rx_param = &bsw0_rx_param,
+	},
+	[PORT_BSW1] = {
+		.type = LPSS_BSW_SSP,
+		.port_id = 1,
+		.setup = lpss_spi_setup,
+		.tx_param = &bsw1_tx_param,
+		.rx_param = &bsw1_rx_param,
+	},
+	[PORT_BSW2] = {
+		.type = LPSS_BSW_SSP,
+		.port_id = 2,
+		.setup = lpss_spi_setup,
+		.tx_param = &bsw2_tx_param,
+		.rx_param = &bsw2_rx_param,
+	},
+	[PORT_MRFLD] = {
+		.type = PXA27x_SSP,
+		.max_clk_rate = 25000000,
+		.setup = mrfld_spi_setup,
+	},
+	[PORT_QUARK_X1000] = {
+		.type = QUARK_X1000_SSP,
+		.port_id = -1,
+		.num_chipselect = 1,
+		.max_clk_rate = 50000000,
+	},
+	[PORT_LPT] = {
+		.type = LPSS_LPT_SSP,
+		.port_id = 0,
+		.setup = lpss_spi_setup,
+		.tx_param = &lpt_tx_param,
+		.rx_param = &lpt_rx_param,
+	},
+};
+
+static int pxa2xx_spi_pci_probe(struct pci_dev *dev,
+		const struct pci_device_id *ent)
+{
+	struct platform_device_info pi;
+	int ret;
+	struct platform_device *pdev;
+	struct pxa2xx_spi_master spi_pdata;
+	struct ssp_device *ssp;
+	struct pxa_spi_info *c;
+	char buf[40];
+
+	ret = pcim_enable_device(dev);
+	if (ret)
+		return ret;
+
+	ret = pcim_iomap_regions(dev, 1 << 0, "PXA2xx SPI");
+	if (ret)
+		return ret;
+
+	c = &spi_info_configs[ent->driver_data];
+	if (c->setup) {
+		ret = c->setup(dev, c);
+		if (ret)
+			return ret;
+	}
+
+	memset(&spi_pdata, 0, sizeof(spi_pdata));
+	spi_pdata.num_chipselect = (c->num_chipselect > 0) ? c->num_chipselect : dev->devfn;
+	spi_pdata.dma_filter = c->dma_filter;
 	spi_pdata.tx_param = c->tx_param;
 	spi_pdata.rx_param = c->rx_param;
 	spi_pdata.enable_dma = c->rx_param && c->tx_param;
···
 	ssp = &spi_pdata.ssp;
 	ssp->phys_base = pci_resource_start(dev, 0);
 	ssp->mmio_base = pcim_iomap_table(dev)[0];
-	if (!ssp->mmio_base) {
-		dev_err(&dev->dev, "failed to ioremap() registers\n");
-		return -EIO;
-	}
 	ssp->irq = dev->irq;
 	ssp->port_id = (c->port_id >= 0) ? c->port_id : dev->devfn;
 	ssp->type = c->type;
···
 }
 
 static const struct pci_device_id pxa2xx_spi_pci_devices[] = {
-	{ PCI_VDEVICE(INTEL, 0x2e6a), PORT_CE4100 },
 	{ PCI_VDEVICE(INTEL, 0x0935), PORT_QUARK_X1000 },
 	{ PCI_VDEVICE(INTEL, 0x0f0e), PORT_BYT },
+	{ PCI_VDEVICE(INTEL, 0x1194), PORT_MRFLD },
 	{ PCI_VDEVICE(INTEL, 0x228e), PORT_BSW0 },
 	{ PCI_VDEVICE(INTEL, 0x2290), PORT_BSW1 },
 	{ PCI_VDEVICE(INTEL, 0x22ac), PORT_BSW2 },
+	{ PCI_VDEVICE(INTEL, 0x2e6a), PORT_CE4100 },
 	{ PCI_VDEVICE(INTEL, 0x9ce6), PORT_LPT },
 	{ },
 };
drivers/spi/spi-pxa2xx.c (+36 -19)
···
 	u32 sccr1_reg;
 
 	sccr1_reg = pxa2xx_spi_read(drv_data, SSCR1) & ~drv_data->int_cr1;
-	sccr1_reg &= ~SSCR1_RFT;
+	switch (drv_data->ssp_type) {
+	case QUARK_X1000_SSP:
+		sccr1_reg &= ~QUARK_X1000_SSCR1_RFT;
+		break;
+	default:
+		sccr1_reg &= ~SSCR1_RFT;
+		break;
+	}
 	sccr1_reg |= chip->threshold;
 	pxa2xx_spi_write(drv_data, SSCR1, sccr1_reg);
 }
···
 	return clk_div << 8;
 }
 
+static bool pxa2xx_spi_can_dma(struct spi_master *master,
+			       struct spi_device *spi,
+			       struct spi_transfer *xfer)
+{
+	struct chip_data *chip = spi_get_ctldata(spi);
+
+	return chip->enable_dma &&
+	       xfer->len <= MAX_DMA_LEN &&
+	       xfer->len >= chip->dma_burst_size;
+}
+
 static void pump_transfers(unsigned long data)
 {
 	struct driver_data *drv_data = (struct driver_data *)data;
+	struct spi_master *master = drv_data->master;
 	struct spi_message *message = NULL;
 	struct spi_transfer *transfer = NULL;
 	struct spi_transfer *previous = NULL;
···
 	u32 dma_burst = drv_data->cur_chip->dma_burst_size;
 	u32 change_mask = pxa2xx_spi_get_ssrc1_change_mask(drv_data);
 	int err;
+	int dma_mapped;
 
 	/* Get current state information */
 	message = drv_data->cur_msg;
···
 	}
 
 	/* Check if we can DMA this transfer */
-	if (!pxa2xx_spi_dma_is_possible(transfer->len) && chip->enable_dma) {
+	if (transfer->len > MAX_DMA_LEN && chip->enable_dma) {
 
 		/* reject already-mapped transfers; PIO won't always work */
 		if (message->is_dma_mapped
···
 
 	message->state = RUNNING_STATE;
 
-	drv_data->dma_mapped = 0;
-	if (pxa2xx_spi_dma_is_possible(drv_data->len))
-		drv_data->dma_mapped = pxa2xx_spi_map_dma_buffers(drv_data);
-	if (drv_data->dma_mapped) {
+	dma_mapped = master->can_dma &&
+		     master->can_dma(master, message->spi, transfer) &&
+		     master->cur_msg_mapped;
+	if (dma_mapped) {
 
 		/* Ensure we have the correct interrupt handler */
 		drv_data->transfer_handler = pxa2xx_spi_dma_transfer;
···
 	cr0 = pxa2xx_configure_sscr0(drv_data, clk_div, bits);
 	if (!pxa25x_ssp_comp(drv_data))
 		dev_dbg(&message->spi->dev, "%u Hz actual, %s\n",
-			drv_data->master->max_speed_hz
+			master->max_speed_hz
 			/ (1 + ((cr0 & SSCR0_SCR(0xfff)) >> 8)),
-			drv_data->dma_mapped ? "DMA" : "PIO");
+			dma_mapped ? "DMA" : "PIO");
 	else
 		dev_dbg(&message->spi->dev, "%u Hz actual, %s\n",
-			drv_data->master->max_speed_hz / 2
+			master->max_speed_hz / 2
 			/ (1 + ((cr0 & SSCR0_SCR(0x0ff)) >> 8)),
-			drv_data->dma_mapped ? "DMA" : "PIO");
+			dma_mapped ? "DMA" : "PIO");
 
 	if (is_lpss_ssp(drv_data)) {
 		if ((pxa2xx_spi_read(drv_data, SSIRF) & 0xff)
···
 		chip->frm = spi->chip_select;
 	} else
 		chip->gpio_cs = -1;
-	chip->enable_dma = 0;
+	chip->enable_dma = drv_data->master_info->enable_dma;
 	chip->timeout = TIMOUT_DFLT;
 }
···
 			tx_hi_thres = chip_info->tx_hi_threshold;
 		if (chip_info->rx_threshold)
 			rx_thres = chip_info->rx_threshold;
-		chip->enable_dma = drv_data->master_info->enable_dma;
 		chip->dma_threshold = 0;
 		if (chip_info->enable_loopback)
 			chip->cr1 = SSCR1_LBM;
-	} else if (ACPI_HANDLE(&spi->dev)) {
-		/*
-		 * Slave devices enumerated from ACPI namespace don't
-		 * usually have chip_info but we still might want to use
-		 * DMA with them.
-		 */
-		chip->enable_dma = drv_data->master_info->enable_dma;
 	}
 
 	chip->lpss_rx_threshold = SSIRF_RxThresh(rx_thres);
···
 	/* SPT-H */
 	{ PCI_VDEVICE(INTEL, 0xa129), LPSS_SPT_SSP },
 	{ PCI_VDEVICE(INTEL, 0xa12a), LPSS_SPT_SSP },
+	/* KBL-H */
+	{ PCI_VDEVICE(INTEL, 0xa2a9), LPSS_SPT_SSP },
+	{ PCI_VDEVICE(INTEL, 0xa2aa), LPSS_SPT_SSP },
 	/* BXT A-Step */
 	{ PCI_VDEVICE(INTEL, 0x0ac2), LPSS_BXT_SSP },
 	{ PCI_VDEVICE(INTEL, 0x0ac4), LPSS_BXT_SSP },
···
 		if (status) {
 			dev_dbg(dev, "no DMA channels available, using PIO\n");
 			platform_info->enable_dma = false;
+		} else {
+			master->can_dma = pxa2xx_spi_can_dma;
 		}
 	}
drivers/spi/spi-pxa2xx.h (-9)
···
 	struct tasklet_struct pump_transfers;
 
 	/* DMA engine support */
-	struct dma_chan *rx_chan;
-	struct dma_chan *tx_chan;
-	struct sg_table rx_sgt;
-	struct sg_table tx_sgt;
-	int rx_nents;
-	int tx_nents;
 	atomic_t dma_running;
 
 	/* Current message transfer state info */
···
 	void *tx_end;
 	void *rx;
 	void *rx_end;
-	int dma_mapped;
 	u8 n_bytes;
 	int (*write)(struct driver_data *drv_data);
 	int (*read)(struct driver_data *drv_data);
···
 #define MAX_DMA_LEN		SZ_64K
 #define DEFAULT_DMA_CR1		(SSCR1_TSRE | SSCR1_RSRE | SSCR1_TRAIL)
 
-extern bool pxa2xx_spi_dma_is_possible(size_t len);
-extern int pxa2xx_spi_map_dma_buffers(struct driver_data *drv_data);
 extern irqreturn_t pxa2xx_spi_dma_transfer(struct driver_data *drv_data);
 extern int pxa2xx_spi_dma_prepare(struct driver_data *drv_data, u32 dma_burst);
 extern void pxa2xx_spi_dma_start(struct driver_data *drv_data);
drivers/spi/spi-rockchip.c (+20)
···
 /* sclk_out: spi master internal logic in rk3x can support 50Mhz */
 #define MAX_SCLK_OUT		50000000
 
+/*
+ * SPI_CTRLR1 is 16-bits, so we should support lengths of 0xffff + 1. However,
+ * the controller seems to hang when given 0x10000, so stick with this for now.
+ */
+#define ROCKCHIP_SPI_MAX_TRANLEN		0xffff
+
 enum rockchip_ssi_type {
 	SSI_MOTO_SPI = 0,
 	SSI_TI_SSP,
···
 	dev_dbg(rs->dev, "cr0 0x%x, div %d\n", cr0, div);
 }
 
+static size_t rockchip_spi_max_transfer_size(struct spi_device *spi)
+{
+	return ROCKCHIP_SPI_MAX_TRANLEN;
+}
+
 static int rockchip_spi_transfer_one(
 		struct spi_master *master,
 		struct spi_device *spi,
···
 
 	if (!xfer->tx_buf && !xfer->rx_buf) {
 		dev_err(rs->dev, "No buffer for transfer\n");
+		return -EINVAL;
+	}
+
+	if (xfer->len > ROCKCHIP_SPI_MAX_TRANLEN) {
+		dev_err(rs->dev, "Transfer is too long (%d)\n", xfer->len);
 		return -EINVAL;
 	}
 
···
 	master->prepare_message = rockchip_spi_prepare_message;
 	master->unprepare_message = rockchip_spi_unprepare_message;
 	master->transfer_one = rockchip_spi_transfer_one;
+	master->max_transfer_size = rockchip_spi_max_transfer_size;
 	master->handle_err = rockchip_spi_handle_err;
 
 	rs->dma_tx.ch = dma_request_chan(rs->dev, "tx");
···
 };
 
 static const struct of_device_id rockchip_spi_dt_match[] = {
+	{ .compatible = "rockchip,rk3036-spi", },
 	{ .compatible = "rockchip,rk3066-spi", },
 	{ .compatible = "rockchip,rk3188-spi", },
+	{ .compatible = "rockchip,rk3228-spi", },
 	{ .compatible = "rockchip,rk3288-spi", },
+	{ .compatible = "rockchip,rk3368-spi", },
 	{ .compatible = "rockchip,rk3399-spi", },
 	{ },
 };
drivers/spi/spi-s3c64xx.c (+129 -79)
···
 	int quirks;
 	bool high_speed;
 	bool clk_from_cmu;
+	bool clk_ioclk;
 };
 
 /**
  * struct s3c64xx_spi_driver_data - Runtime info holder for SPI driver.
  * @clk: Pointer to the spi clock.
  * @src_clk: Pointer to the clock used to generate SPI signals.
+ * @ioclk: Pointer to the i/o clock between master and slave
  * @master: Pointer to the SPI Protocol master.
  * @cntrlr_info: Platform specific data for the controller this driver manages.
  * @tgl_spi: Pointer to the last CS left untoggled by the cs_change hint.
···
 	void __iomem *regs;
 	struct clk *clk;
 	struct clk *src_clk;
+	struct clk *ioclk;
 	struct platform_device *pdev;
 	struct spi_master *master;
 	struct s3c64xx_spi_info *cntrlr_info;
···
 	dma_async_issue_pending(dma->ch);
 }
 
+static void s3c64xx_spi_set_cs(struct spi_device *spi, bool enable)
+{
+	struct s3c64xx_spi_driver_data *sdd =
+		spi_master_get_devdata(spi->master);
+
+	if (sdd->cntrlr_info->no_cs)
+		return;
+
+	if (enable) {
+		if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO)) {
+			writel(0, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+		} else {
+			u32 ssel = readl(sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+
+			ssel |= (S3C64XX_SPI_SLAVE_AUTO |
+				 S3C64XX_SPI_SLAVE_NSC_CNT_2);
+			writel(ssel, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+		}
+	} else {
+		if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
+			writel(S3C64XX_SPI_SLAVE_SIG_INACT,
+			       sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+	}
+}
+
 static int s3c64xx_spi_prepare_transfer(struct spi_master *spi)
 {
 	struct s3c64xx_spi_driver_data *sdd = spi_master_get_devdata(spi);
 	dma_filter_fn filter = sdd->cntrlr_info->filter;
 	struct device *dev = &sdd->pdev->dev;
 	dma_cap_mask_t mask;
-	int ret;
 
-	if (!is_polling(sdd)) {
-		dma_cap_zero(mask);
-		dma_cap_set(DMA_SLAVE, mask);
+	if (is_polling(sdd))
+		return 0;
 
-		/* Acquire DMA channels */
-		sdd->rx_dma.ch = dma_request_slave_channel_compat(mask, filter,
-				   sdd->cntrlr_info->dma_rx, dev, "rx");
-		if (!sdd->rx_dma.ch) {
-			dev_err(dev, "Failed to get RX DMA channel\n");
-			ret = -EBUSY;
-			goto out;
-		}
-		spi->dma_rx = sdd->rx_dma.ch;
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_SLAVE, mask);
 
-		sdd->tx_dma.ch = dma_request_slave_channel_compat(mask, filter,
-				   sdd->cntrlr_info->dma_tx, dev, "tx");
-		if (!sdd->tx_dma.ch) {
-			dev_err(dev, "Failed to get TX DMA channel\n");
-			ret = -EBUSY;
-			goto out_rx;
-		}
-		spi->dma_tx = sdd->tx_dma.ch;
+	/* Acquire DMA channels */
+	sdd->rx_dma.ch = dma_request_slave_channel_compat(mask, filter,
+			   sdd->cntrlr_info->dma_rx, dev, "rx");
+	if (!sdd->rx_dma.ch) {
+		dev_err(dev, "Failed to get RX DMA channel\n");
+		return -EBUSY;
 	}
+	spi->dma_rx = sdd->rx_dma.ch;
+
+	sdd->tx_dma.ch = dma_request_slave_channel_compat(mask, filter,
+			   sdd->cntrlr_info->dma_tx, dev, "tx");
+	if (!sdd->tx_dma.ch) {
+		dev_err(dev, "Failed to get TX DMA channel\n");
+		dma_release_channel(sdd->rx_dma.ch);
+		return -EBUSY;
+	}
+	spi->dma_tx = sdd->tx_dma.ch;
 
 	return 0;
-
-out_rx:
-	dma_release_channel(sdd->rx_dma.ch);
-out:
-	return ret;
 }
 
 static int s3c64xx_spi_unprepare_transfer(struct spi_master *spi)
···
 	u32 val;
 
 	/* Disable Clock */
-	if (sdd->port_conf->clk_from_cmu) {
-		clk_disable_unprepare(sdd->src_clk);
-	} else {
+	if (!sdd->port_conf->clk_from_cmu) {
 		val = readl(regs + S3C64XX_SPI_CLK_CFG);
 		val &= ~S3C64XX_SPI_ENCLK_ENABLE;
 		writel(val, regs + S3C64XX_SPI_CLK_CFG);
···
 	writel(val, regs + S3C64XX_SPI_MODE_CFG);
 
 	if (sdd->port_conf->clk_from_cmu) {
-		/* Configure Clock */
-		/* There is half-multiplier before the SPI */
+		/* The src_clk clock is divided internally by 2 */
 		clk_set_rate(sdd->src_clk, sdd->cur_speed * 2);
-		/* Enable Clock */
-		clk_prepare_enable(sdd->src_clk);
 	} else {
 		/* Configure Clock */
 		val = readl(regs + S3C64XX_SPI_CLK_CFG);
···
 	struct s3c64xx_spi_driver_data *sdd = spi_master_get_devdata(master);
 	struct spi_device *spi = msg->spi;
 	struct s3c64xx_spi_csinfo *cs = spi->controller_data;
-
-	/* If Master's(controller) state differs from that needed by Slave */
-	if (sdd->cur_speed != spi->max_speed_hz
-			|| sdd->cur_mode != spi->mode
-			|| sdd->cur_bpw != spi->bits_per_word) {
-		sdd->cur_bpw = spi->bits_per_word;
-		sdd->cur_speed = spi->max_speed_hz;
-		sdd->cur_mode = spi->mode;
-		s3c64xx_spi_config(sdd);
-	}
 
 	/* Configure feedback delay */
 	writel(cs->fb_delay & 0x3, sdd->regs + S3C64XX_SPI_FB_CLK);
···
 	if (bpw != sdd->cur_bpw || speed != sdd->cur_speed) {
 		sdd->cur_bpw = bpw;
 		sdd->cur_speed = speed;
+		sdd->cur_mode = spi->mode;
 		s3c64xx_spi_config(sdd);
 	}
···
 		enable_datapath(sdd, spi, xfer, use_dma);
 
 		/* Start the signals */
-		if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
-			writel(0, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
-		else
-			writel(readl(sdd->regs + S3C64XX_SPI_SLAVE_SEL)
-				| S3C64XX_SPI_SLAVE_AUTO | S3C64XX_SPI_SLAVE_NSC_CNT_2,
-				sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+		s3c64xx_spi_set_cs(spi, true);
 
 		spin_unlock_irqrestore(&sdd->lock, flags);
···
 
 	pm_runtime_mark_last_busy(&sdd->pdev->dev);
 	pm_runtime_put_autosuspend(&sdd->pdev->dev);
-	if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
-		writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+	s3c64xx_spi_set_cs(spi, false);
+
 	return 0;
 
 setup_exit:
 	pm_runtime_mark_last_busy(&sdd->pdev->dev);
 	pm_runtime_put_autosuspend(&sdd->pdev->dev);
 	/* setup() returns with device de-selected */
-	if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
-		writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+	s3c64xx_spi_set_cs(spi, false);
 
 	if (gpio_is_valid(spi->cs_gpio))
 		gpio_free(spi->cs_gpio);
···
 
 	sdd->cur_speed = 0;
 
-	if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
+	if (sci->no_cs)
+		writel(0, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+	else if (!(sdd->port_conf->quirks & S3C64XX_SPI_QUIRK_CS_AUTO))
 		writel(S3C64XX_SPI_SLAVE_SIG_INACT, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
 
 	/* Disable Interrupts - we use Polling if not DMA mode */
···
 	} else {
 		sci->num_cs = temp;
 	}
+
+	sci->no_cs = of_property_read_bool(dev->of_node, "broken-cs");
 
 	return sci;
 }
···
 		if (ret < 0) {
 			dev_err(&pdev->dev, "failed to get alias id, errno %d\n",
 				ret);
-			goto err0;
+			goto err_deref_master;
 		}
 		sdd->port_id = ret;
 	} else {
···
 	sdd->regs = devm_ioremap_resource(&pdev->dev, mem_res);
 	if (IS_ERR(sdd->regs)) {
 		ret = PTR_ERR(sdd->regs);
-		goto err0;
+		goto err_deref_master;
 	}
 
 	if (sci->cfg_gpio && sci->cfg_gpio()) {
 		dev_err(&pdev->dev, "Unable to config gpio\n");
 		ret = -EBUSY;
-		goto err0;
+		goto err_deref_master;
 	}
 
 	/* Setup clocks */
···
 	if (IS_ERR(sdd->clk)) {
 		dev_err(&pdev->dev, "Unable to acquire clock 'spi'\n");
 		ret = PTR_ERR(sdd->clk);
-		goto err0;
1131 + goto err_deref_master; 1138 1132 } 1139 1133 1140 - if (clk_prepare_enable(sdd->clk)) { 1134 + ret = clk_prepare_enable(sdd->clk); 1135 + if (ret) { 1141 1136 dev_err(&pdev->dev, "Couldn't enable clock 'spi'\n"); 1142 - ret = -EBUSY; 1143 - goto err0; 1137 + goto err_deref_master; 1144 1138 } 1145 1139 1146 1140 sprintf(clk_name, "spi_busclk%d", sci->src_clk_nr); ··· 1149 1143 dev_err(&pdev->dev, 1150 1144 "Unable to acquire clock '%s'\n", clk_name); 1151 1145 ret = PTR_ERR(sdd->src_clk); 1152 - goto err2; 1146 + goto err_disable_clk; 1153 1147 } 1154 1148 1155 - if (clk_prepare_enable(sdd->src_clk)) { 1149 + ret = clk_prepare_enable(sdd->src_clk); 1150 + if (ret) { 1156 1151 dev_err(&pdev->dev, "Couldn't enable clock '%s'\n", clk_name); 1157 - ret = -EBUSY; 1158 - goto err2; 1152 + goto err_disable_clk; 1153 + } 1154 + 1155 + if (sdd->port_conf->clk_ioclk) { 1156 + sdd->ioclk = devm_clk_get(&pdev->dev, "spi_ioclk"); 1157 + if (IS_ERR(sdd->ioclk)) { 1158 + dev_err(&pdev->dev, "Unable to acquire 'ioclk'\n"); 1159 + ret = PTR_ERR(sdd->ioclk); 1160 + goto err_disable_src_clk; 1161 + } 1162 + 1163 + ret = clk_prepare_enable(sdd->ioclk); 1164 + if (ret) { 1165 + dev_err(&pdev->dev, "Couldn't enable clock 'ioclk'\n"); 1166 + goto err_disable_src_clk; 1167 + } 1159 1168 } 1160 1169 1161 1170 pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT); ··· 1190 1169 if (ret != 0) { 1191 1170 dev_err(&pdev->dev, "Failed to request IRQ %d: %d\n", 1192 1171 irq, ret); 1193 - goto err3; 1172 + goto err_pm_put; 1194 1173 } 1195 1174 1196 1175 writel(S3C64XX_SPI_INT_RX_OVERRUN_EN | S3C64XX_SPI_INT_RX_UNDERRUN_EN | ··· 1200 1179 ret = devm_spi_register_master(&pdev->dev, master); 1201 1180 if (ret != 0) { 1202 1181 dev_err(&pdev->dev, "cannot register SPI master: %d\n", ret); 1203 - goto err3; 1182 + goto err_pm_put; 1204 1183 } 1205 1184 1206 1185 dev_dbg(&pdev->dev, "Samsung SoC SPI Driver loaded for Bus SPI-%d with %d Slaves attached\n", ··· 1214 1193 1215 1194 
return 0; 1216 1195 1217 - err3: 1196 + err_pm_put: 1218 1197 pm_runtime_put_noidle(&pdev->dev); 1219 1198 pm_runtime_disable(&pdev->dev); 1220 1199 pm_runtime_set_suspended(&pdev->dev); 1221 1200 1201 + clk_disable_unprepare(sdd->ioclk); 1202 + err_disable_src_clk: 1222 1203 clk_disable_unprepare(sdd->src_clk); 1223 - err2: 1204 + err_disable_clk: 1224 1205 clk_disable_unprepare(sdd->clk); 1225 - err0: 1206 + err_deref_master: 1226 1207 spi_master_put(master); 1227 1208 1228 1209 return ret; ··· 1232 1209 1233 1210 static int s3c64xx_spi_remove(struct platform_device *pdev) 1234 1211 { 1235 - struct spi_master *master = spi_master_get(platform_get_drvdata(pdev)); 1212 + struct spi_master *master = platform_get_drvdata(pdev); 1236 1213 struct s3c64xx_spi_driver_data *sdd = spi_master_get_devdata(master); 1237 1214 1238 1215 pm_runtime_get_sync(&pdev->dev); 1239 1216 1240 1217 writel(0, sdd->regs + S3C64XX_SPI_INT_EN); 1218 + 1219 + clk_disable_unprepare(sdd->ioclk); 1241 1220 1242 1221 clk_disable_unprepare(sdd->src_clk); 1243 1222 ··· 1299 1274 1300 1275 clk_disable_unprepare(sdd->clk); 1301 1276 clk_disable_unprepare(sdd->src_clk); 1277 + clk_disable_unprepare(sdd->ioclk); 1302 1278 1303 1279 return 0; 1304 1280 } ··· 1310 1284 struct s3c64xx_spi_driver_data *sdd = spi_master_get_devdata(master); 1311 1285 int ret; 1312 1286 1313 - ret = clk_prepare_enable(sdd->src_clk); 1314 - if (ret != 0) 1315 - return ret; 1316 - 1317 - ret = clk_prepare_enable(sdd->clk); 1318 - if (ret != 0) { 1319 - clk_disable_unprepare(sdd->src_clk); 1320 - return ret; 1287 + if (sdd->port_conf->clk_ioclk) { 1288 + ret = clk_prepare_enable(sdd->ioclk); 1289 + if (ret != 0) 1290 + return ret; 1321 1291 } 1322 1292 1293 + ret = clk_prepare_enable(sdd->src_clk); 1294 + if (ret != 0) 1295 + goto err_disable_ioclk; 1296 + 1297 + ret = clk_prepare_enable(sdd->clk); 1298 + if (ret != 0) 1299 + goto err_disable_src_clk; 1300 + 1323 1301 return 0; 1302 + 1303 + err_disable_src_clk: 1304 + 
clk_disable_unprepare(sdd->src_clk); 1305 + err_disable_ioclk: 1306 + clk_disable_unprepare(sdd->ioclk); 1307 + 1308 + return ret; 1324 1309 } 1325 1310 #endif /* CONFIG_PM */ 1326 1311 ··· 1387 1350 .quirks = S3C64XX_SPI_QUIRK_CS_AUTO, 1388 1351 }; 1389 1352 1353 + static struct s3c64xx_spi_port_config exynos5433_spi_port_config = { 1354 + .fifo_lvl_mask = { 0x1ff, 0x7f, 0x7f, 0x7f, 0x7f, 0x1ff}, 1355 + .rx_lvl_offset = 15, 1356 + .tx_st_done = 25, 1357 + .high_speed = true, 1358 + .clk_from_cmu = true, 1359 + .clk_ioclk = true, 1360 + .quirks = S3C64XX_SPI_QUIRK_CS_AUTO, 1361 + }; 1362 + 1390 1363 static const struct platform_device_id s3c64xx_spi_driver_ids[] = { 1391 1364 { 1392 1365 .name = "s3c2443-spi", ··· 1426 1379 }, 1427 1380 { .compatible = "samsung,exynos7-spi", 1428 1381 .data = (void *)&exynos7_spi_port_config, 1382 + }, 1383 + { .compatible = "samsung,exynos5433-spi", 1384 + .data = (void *)&exynos5433_spi_port_config, 1429 1385 }, 1430 1386 { }, 1431 1387 };
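The s3c64xx probe() rework above replaces the opaque `err0`/`err2`/`err3` labels with descriptive ones (`err_deref_master`, `err_disable_clk`, `err_disable_src_clk`, `err_pm_put`) and unwinds each resource in reverse acquisition order, including the newly added `ioclk`. A minimal userspace sketch of that idiom, with hypothetical `enable`/`disable` helpers standing in for `clk_prepare_enable()`/`clk_disable_unprepare()`:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the descriptive-goto unwind pattern: resources are
 * released in exact reverse order of acquisition. The names mirror the
 * driver's clocks; the helpers are illustrative stand-ins. */
static bool clk_on, src_clk_on, ioclk_on;

static int enable(bool *res, bool fail)
{
	if (fail)
		return -1;
	*res = true;
	return 0;
}

static void disable(bool *res)
{
	*res = false;
}

static int probe(bool fail_ioclk)
{
	int ret;

	ret = enable(&clk_on, false);
	if (ret)
		goto err_out;

	ret = enable(&src_clk_on, false);
	if (ret)
		goto err_disable_clk;

	ret = enable(&ioclk_on, fail_ioclk);
	if (ret)
		goto err_disable_src_clk;

	return 0;

err_disable_src_clk:
	disable(&src_clk_on);
err_disable_clk:
	disable(&clk_on);
err_out:
	return ret;
}
```

The payoff of the descriptive labels is that inserting a new resource (here, `ioclk`) only needs one new label in the right position, rather than renumbering every jump target.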
+9 -9
drivers/spi/spi-sh-msiof.c
··· 45 45 void __iomem *mapbase; 46 46 struct clk *clk; 47 47 struct platform_device *pdev; 48 - const struct sh_msiof_chipdata *chipdata; 49 48 struct sh_msiof_spi_info *info; 50 49 struct completion done; 51 50 unsigned int tx_fifo_size; ··· 270 271 271 272 scr = sh_msiof_spi_div_table[k].brdv | SCR_BRPS(brps); 272 273 sh_msiof_write(p, TSCR, scr); 273 - if (!(p->chipdata->master_flags & SPI_MASTER_MUST_TX)) 274 + if (!(p->master->flags & SPI_MASTER_MUST_TX)) 274 275 sh_msiof_write(p, RSCR, scr); 275 276 } 276 277 ··· 335 336 tmp |= lsb_first << MDR1_BITLSB_SHIFT; 336 337 tmp |= sh_msiof_spi_get_dtdl_and_syncdl(p); 337 338 sh_msiof_write(p, TMDR1, tmp | MDR1_TRMD | TMDR1_PCON); 338 - if (p->chipdata->master_flags & SPI_MASTER_MUST_TX) { 339 + if (p->master->flags & SPI_MASTER_MUST_TX) { 339 340 /* These bits are reserved if RX needs TX */ 340 341 tmp &= ~0x0000ffff; 341 342 } ··· 359 360 { 360 361 u32 dr2 = MDR2_BITLEN1(bits) | MDR2_WDLEN1(words); 361 362 362 - if (tx_buf || (p->chipdata->master_flags & SPI_MASTER_MUST_TX)) 363 + if (tx_buf || (p->master->flags & SPI_MASTER_MUST_TX)) 363 364 sh_msiof_write(p, TMDR2, dr2); 364 365 else 365 366 sh_msiof_write(p, TMDR2, dr2 | MDR2_GRPMASK1); ··· 1151 1152 { 1152 1153 struct resource *r; 1153 1154 struct spi_master *master; 1155 + const struct sh_msiof_chipdata *chipdata; 1154 1156 const struct of_device_id *of_id; 1155 1157 struct sh_msiof_spi_priv *p; 1156 1158 int i; ··· 1170 1170 1171 1171 of_id = of_match_device(sh_msiof_match, &pdev->dev); 1172 1172 if (of_id) { 1173 - p->chipdata = of_id->data; 1173 + chipdata = of_id->data; 1174 1174 p->info = sh_msiof_spi_parse_dt(&pdev->dev); 1175 1175 } else { 1176 - p->chipdata = (const void *)pdev->id_entry->driver_data; 1176 + chipdata = (const void *)pdev->id_entry->driver_data; 1177 1177 p->info = dev_get_platdata(&pdev->dev); 1178 1178 } 1179 1179 ··· 1217 1217 pm_runtime_enable(&pdev->dev); 1218 1218 1219 1219 /* Platform data may override FIFO sizes */ 1220 - 
p->tx_fifo_size = p->chipdata->tx_fifo_size; 1221 - p->rx_fifo_size = p->chipdata->rx_fifo_size; 1220 + p->tx_fifo_size = chipdata->tx_fifo_size; 1221 + p->rx_fifo_size = chipdata->rx_fifo_size; 1222 1222 if (p->info->tx_fifo_override) 1223 1223 p->tx_fifo_size = p->info->tx_fifo_override; 1224 1224 if (p->info->rx_fifo_override) ··· 1227 1227 /* init master code */ 1228 1228 master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH; 1229 1229 master->mode_bits |= SPI_LSB_FIRST | SPI_3WIRE; 1230 - master->flags = p->chipdata->master_flags; 1230 + master->flags = chipdata->master_flags; 1231 1231 master->bus_num = pdev->id; 1232 1232 master->dev.of_node = pdev->dev.of_node; 1233 1233 master->num_chipselect = p->info->num_chipselect;
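The sh-msiof change above drops the cached `p->chipdata` pointer from the private struct: the chip data is only needed during probe, and the one field consulted at runtime (`master_flags`) is already copied into `master->flags`. A simplified model of that single-source-of-truth refactor (struct and field names here are illustrative, not the real driver types):

```c
#include <assert.h>

#define SPI_MASTER_MUST_TX 0x1

/* Per-SoC data, consulted only at probe time. */
struct chipdata {
	unsigned int master_flags;
	int tx_fifo_size;
};

/* Shared core structure that runtime paths read instead. */
struct master {
	unsigned int flags;
};

static void probe(struct master *m, const struct chipdata *cd)
{
	m->flags = cd->master_flags;	/* copy once; drop the pointer */
}

static int needs_dummy_tx(const struct master *m)
{
	return !!(m->flags & SPI_MASTER_MUST_TX);
}
```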
+3 -13
drivers/spi/spi-sh.c
··· 82 82 int irq; 83 83 struct spi_master *master; 84 84 struct list_head queue; 85 - struct workqueue_struct *workqueue; 86 85 struct work_struct ws; 87 86 unsigned long cr1; 88 87 wait_queue_head_t wait; ··· 379 380 spi_sh_clear_bit(ss, SPI_SH_SSA, SPI_SH_CR1); 380 381 381 382 list_add_tail(&mesg->queue, &ss->queue); 382 - queue_work(ss->workqueue, &ss->ws); 383 + schedule_work(&ss->ws); 383 384 384 385 spin_unlock_irqrestore(&ss->lock, flags); 385 386 ··· 424 425 struct spi_sh_data *ss = platform_get_drvdata(pdev); 425 426 426 427 spi_unregister_master(ss->master); 427 - destroy_workqueue(ss->workqueue); 428 + flush_work(&ss->ws); 428 429 free_irq(ss->irq, ss); 429 430 430 431 return 0; ··· 483 484 spin_lock_init(&ss->lock); 484 485 INIT_WORK(&ss->ws, spi_sh_work); 485 486 init_waitqueue_head(&ss->wait); 486 - ss->workqueue = create_singlethread_workqueue( 487 - dev_name(master->dev.parent)); 488 - if (ss->workqueue == NULL) { 489 - dev_err(&pdev->dev, "create workqueue error\n"); 490 - ret = -EBUSY; 491 - goto error1; 492 - } 493 487 494 488 ret = request_irq(irq, spi_sh_irq, 0, "spi_sh", ss); 495 489 if (ret < 0) { 496 490 dev_err(&pdev->dev, "request_irq error\n"); 497 - goto error2; 491 + goto error1; 498 492 } 499 493 500 494 master->num_chipselect = 2; ··· 506 514 507 515 error3: 508 516 free_irq(irq, ss); 509 - error2: 510 - destroy_workqueue(ss->workqueue); 511 517 error1: 512 518 spi_master_put(master); 513 519
+8
drivers/spi/spi-sun4i.c
··· 167 167 sun4i_spi_write(sspi, SUN4I_CTL_REG, reg); 168 168 } 169 169 170 + static size_t sun4i_spi_max_transfer_size(struct spi_device *spi) 171 + { 172 + return SUN4I_FIFO_DEPTH - 1; 173 + } 174 + 170 175 static int sun4i_spi_transfer_one(struct spi_master *master, 171 176 struct spi_device *spi, 172 177 struct spi_transfer *tfr) ··· 407 402 } 408 403 409 404 sspi->master = master; 405 + master->max_speed_hz = 100 * 1000 * 1000; 406 + master->min_speed_hz = 3 * 1000; 410 407 master->set_cs = sun4i_spi_set_cs; 411 408 master->transfer_one = sun4i_spi_transfer_one; 412 409 master->num_chipselect = 4; ··· 416 409 master->bits_per_word_mask = SPI_BPW_MASK(8); 417 410 master->dev.of_node = pdev->dev.of_node; 418 411 master->auto_runtime_pm = true; 412 + master->max_transfer_size = sun4i_spi_max_transfer_size; 419 413 420 414 sspi->hclk = devm_clk_get(&pdev->dev, "ahb"); 421 415 if (IS_ERR(sspi->hclk)) {
+7
drivers/spi/spi-sun6i.c
··· 153 153 sun6i_spi_write(sspi, SUN6I_TFR_CTL_REG, reg); 154 154 } 155 155 156 + static size_t sun6i_spi_max_transfer_size(struct spi_device *spi) 157 + { 158 + return SUN6I_FIFO_DEPTH - 1; 159 + } 156 160 157 161 static int sun6i_spi_transfer_one(struct spi_master *master, 158 162 struct spi_device *spi, ··· 398 394 } 399 395 400 396 sspi->master = master; 397 + master->max_speed_hz = 100 * 1000 * 1000; 398 + master->min_speed_hz = 3 * 1000; 401 399 master->set_cs = sun6i_spi_set_cs; 402 400 master->transfer_one = sun6i_spi_transfer_one; 403 401 master->num_chipselect = 4; ··· 407 401 master->bits_per_word_mask = SPI_BPW_MASK(8); 408 402 master->dev.of_node = pdev->dev.of_node; 409 403 master->auto_runtime_pm = true; 404 + master->max_transfer_size = sun6i_spi_max_transfer_size; 410 405 411 406 sspi->hclk = devm_clk_get(&pdev->dev, "ahb"); 412 407 if (IS_ERR(sspi->hclk)) {
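The sun4i and sun6i changes above both advertise the controller's hard limit by wiring up `master->max_transfer_size`, returning one less than the FIFO depth. Clients such as the core or flash layers query this callback and split requests to fit. A sketch of that clamping, under the assumption of a 64-entry FIFO as in sun4i (`clamp_transfer` is an illustrative caller, not a kernel function):

```c
#include <assert.h>
#include <stddef.h>

#define FIFO_DEPTH 64	/* stands in for SUN4I_FIFO_DEPTH */

/* The controller can only move FIFO_DEPTH - 1 bytes per transfer. */
static size_t max_transfer_size(void)
{
	return FIFO_DEPTH - 1;
}

/* A caller splits or clamps its request against the advertised limit. */
static size_t clamp_transfer(size_t requested)
{
	size_t max = max_transfer_size();

	return requested < max ? requested : max;
}
```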
+1 -1
drivers/spi/spi-ti-qspi.c
··· 141 141 u32 clk_ctrl_reg, clk_rate, clk_mask; 142 142 143 143 if (spi->master->busy) { 144 - dev_dbg(qspi->dev, "master busy doing other trasnfers\n"); 144 + dev_dbg(qspi->dev, "master busy doing other transfers\n"); 145 145 return -EBUSY; 146 146 } 147 147
+3 -23
drivers/spi/spi-topcliff-pch.c
··· 133 133 * @io_remap_addr: The remapped PCI base address 134 134 * @master: Pointer to the SPI master structure 135 135 * @work: Reference to work queue handler 136 - * @wk: Workqueue for carrying out execution of the 137 - * requests 138 136 * @wait: Wait queue for waking up upon receiving an 139 137 * interrupt. 140 138 * @transfer_complete: Status of SPI Transfer ··· 167 169 unsigned long io_base_addr; 168 170 struct spi_master *master; 169 171 struct work_struct work; 170 - struct workqueue_struct *wk; 171 172 wait_queue_head_t wait; 172 173 u8 transfer_complete; 173 174 u8 bcurrent_msg_processing; ··· 514 517 515 518 dev_dbg(&pspi->dev, "%s - Invoked list_add_tail\n", __func__); 516 519 517 - /* schedule work queue to run */ 518 - queue_work(data->wk, &data->work); 520 + schedule_work(&data->work); 519 521 dev_dbg(&pspi->dev, "%s - Invoked queue work\n", __func__); 520 522 521 523 retval = 0; ··· 670 674 *more messages) 671 675 */ 672 676 dev_dbg(&data->master->dev, "%s:Invoke queue_work\n", __func__); 673 - queue_work(data->wk, &data->work); 677 + schedule_work(&data->work); 674 678 } else if (data->board_dat->suspend_sts || 675 679 data->status == STATUS_EXITING) { 676 680 dev_dbg(&data->master->dev, ··· 1262 1266 { 1263 1267 dev_dbg(&board_dat->pdev->dev, "%s ENTRY\n", __func__); 1264 1268 1265 - /* free workqueue */ 1266 - if (data->wk != NULL) { 1267 - destroy_workqueue(data->wk); 1268 - data->wk = NULL; 1269 - dev_dbg(&board_dat->pdev->dev, 1270 - "%s destroy_workqueue invoked successfully\n", 1271 - __func__); 1272 - } 1269 + flush_work(&data->work); 1273 1270 } 1274 1271 1275 1272 static int pch_spi_get_resources(struct pch_spi_board_data *board_dat, ··· 1272 1283 1273 1284 dev_dbg(&board_dat->pdev->dev, "%s ENTRY\n", __func__); 1274 1285 1275 - /* create workqueue */ 1276 - data->wk = create_singlethread_workqueue(KBUILD_MODNAME); 1277 - if (!data->wk) { 1278 - dev_err(&board_dat->pdev->dev, 1279 - "%s create_singlet hread_workqueue failed\n", 
__func__); 1280 - retval = -EBUSY; 1281 - goto err_return; 1282 - } 1283 1286 1284 1287 /* reset PCH SPI h/w */ 1285 1288 pch_spi_reset(data->master); ··· 1280 1299 1281 1300 dev_dbg(&board_dat->pdev->dev, "%s data->irq_reg_sts=true\n", __func__); 1282 1301 1283 - err_return: 1284 1302 if (retval != 0) { 1285 1303 dev_err(&board_dat->pdev->dev, 1286 1304 "%s FAIL:invoking pch_spi_free_resources\n", __func__);
+2 -9
drivers/spi/spi-txx9.c
··· 72 72 73 73 74 74 struct txx9spi { 75 - struct workqueue_struct *workqueue; 76 75 struct work_struct work; 77 76 spinlock_t lock; /* protect 'queue' */ 78 77 struct list_head queue; ··· 314 315 315 316 spin_lock_irqsave(&c->lock, flags); 316 317 list_add_tail(&m->queue, &c->queue); 317 - queue_work(c->workqueue, &c->work); 318 + schedule_work(&c->work); 318 319 spin_unlock_irqrestore(&c->lock, flags); 319 320 320 321 return 0; ··· 373 374 if (ret) 374 375 goto exit; 375 376 376 - c->workqueue = create_singlethread_workqueue( 377 - dev_name(master->dev.parent)); 378 - if (!c->workqueue) 379 - goto exit_busy; 380 377 c->last_chipselect = -1; 381 378 382 379 dev_info(&dev->dev, "at %#llx, irq %d, %dMHz\n", ··· 395 400 exit_busy: 396 401 ret = -EBUSY; 397 402 exit: 398 - if (c->workqueue) 399 - destroy_workqueue(c->workqueue); 400 403 clk_disable(c->clk); 401 404 spi_master_put(master); 402 405 return ret; ··· 405 412 struct spi_master *master = platform_get_drvdata(dev); 406 413 struct txx9spi *c = spi_master_get_devdata(master); 407 414 408 - destroy_workqueue(c->workqueue); 415 + flush_work(&c->work); 409 416 clk_disable(c->clk); 410 417 return 0; 411 418 }
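spi-sh, spi-topcliff-pch, and spi-txx9 above all drop their private single-threaded workqueues in favour of the shared system workqueue: `queue_work(private_wq, ...)` becomes `schedule_work()`, and `destroy_workqueue()` at teardown becomes `flush_work()`, which guarantees the queued item has finished before resources go away. A single-threaded toy model of the semantics those conversions rely on (real work items run asynchronously; here the pending item only runs when flushed, which is the worst case remove() must handle):

```c
#include <assert.h>
#include <stdbool.h>

struct work_struct {
	void (*func)(struct work_struct *);
	bool pending;
};

/* Model of schedule_work(): queue on the shared system workqueue.
 * Re-scheduling an already-pending item coalesces, as in the real API. */
static void schedule_work_model(struct work_struct *w)
{
	w->pending = true;
}

/* Model of flush_work(): ensure the item has run before returning. */
static void flush_work_model(struct work_struct *w)
{
	if (w->pending) {
		w->pending = false;
		w->func(w);
	}
}

static int messages_processed;

static void pump_messages(struct work_struct *w)
{
	(void)w;
	messages_processed++;
}
```

Because `flush_work()` gives the same teardown guarantee a dedicated queue's `destroy_workqueue()` did, each driver sheds a kernel thread for free.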
+6 -2
drivers/spi/spi-xilinx.c
··· 341 341 342 342 if (ipif_isr & XSPI_INTR_TX_EMPTY) { /* Transmission completed */ 343 343 complete(&xspi->done); 344 + return IRQ_HANDLED; 344 345 } 345 346 346 - return IRQ_HANDLED; 347 + return IRQ_NONE; 347 348 } 348 349 349 350 static int xilinx_spi_find_buffer_size(struct xilinx_spi *xspi) ··· 456 455 xspi->buffer_size = xilinx_spi_find_buffer_size(xspi); 457 456 458 457 xspi->irq = platform_get_irq(pdev, 0); 459 - if (xspi->irq >= 0) { 458 + if (xspi->irq < 0 && xspi->irq != -ENXIO) { 459 + ret = xspi->irq; 460 + goto put_master; 461 + } else if (xspi->irq >= 0) { 460 462 /* Register for SPI Interrupt */ 461 463 ret = devm_request_irq(&pdev->dev, xspi->irq, xilinx_spi_irq, 0, 462 464 dev_name(&pdev->dev), xspi);
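The xilinx fix above corrects a handler that unconditionally returned `IRQ_HANDLED`: a handler must report `IRQ_NONE` when it finds no work, so the kernel's spurious-interrupt detection and shared-line accounting stay accurate. A simplified model of the corrected control flow (the enum and mask are local stand-ins for the kernel definitions):

```c
#include <assert.h>

enum irqreturn { IRQ_NONE, IRQ_HANDLED };

#define XSPI_INTR_TX_EMPTY 0x4

static int completions;

static enum irqreturn spi_irq_model(unsigned int ipif_isr)
{
	if (ipif_isr & XSPI_INTR_TX_EMPTY) {	/* transmission completed */
		completions++;
		return IRQ_HANDLED;
	}

	return IRQ_NONE;	/* no recognized status bit: not our interrupt */
}
```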
+46 -20
drivers/spi/spi.c
··· 851 851 return 0; 852 852 } 853 853 #else /* !CONFIG_HAS_DMA */ 854 + static inline int spi_map_buf(struct spi_master *master, 855 + struct device *dev, struct sg_table *sgt, 856 + void *buf, size_t len, 857 + enum dma_data_direction dir) 858 + { 859 + return -EINVAL; 860 + } 861 + 862 + static inline void spi_unmap_buf(struct spi_master *master, 863 + struct device *dev, struct sg_table *sgt, 864 + enum dma_data_direction dir) 865 + { 866 + } 867 + 854 868 static inline int __spi_map_msg(struct spi_master *master, 855 869 struct spi_message *msg) 856 870 { ··· 1071 1057 * __spi_pump_messages - function which processes spi message queue 1072 1058 * @master: master to process queue for 1073 1059 * @in_kthread: true if we are in the context of the message pump thread 1074 - * @bus_locked: true if the bus mutex is held when calling this function 1075 1060 * 1076 1061 * This function checks if there is any spi message in the queue that 1077 1062 * needs processing and if so call out to the driver to initialize hardware ··· 1080 1067 * inside spi_sync(); the queue extraction handling at the top of the 1081 1068 * function should deal with this safely. 
1082 1069 */ 1083 - static void __spi_pump_messages(struct spi_master *master, bool in_kthread, 1084 - bool bus_locked) 1070 + static void __spi_pump_messages(struct spi_master *master, bool in_kthread) 1085 1071 { 1086 1072 unsigned long flags; 1087 1073 bool was_busy = false; ··· 1152 1140 master->busy = true; 1153 1141 spin_unlock_irqrestore(&master->queue_lock, flags); 1154 1142 1143 + mutex_lock(&master->io_mutex); 1144 + 1155 1145 if (!was_busy && master->auto_runtime_pm) { 1156 1146 ret = pm_runtime_get_sync(master->dev.parent); 1157 1147 if (ret < 0) { ··· 1177 1163 return; 1178 1164 } 1179 1165 } 1180 - 1181 - if (!bus_locked) 1182 - mutex_lock(&master->bus_lock_mutex); 1183 1166 1184 1167 trace_spi_message_start(master->cur_msg); 1185 1168 ··· 1207 1196 } 1208 1197 1209 1198 out: 1210 - if (!bus_locked) 1211 - mutex_unlock(&master->bus_lock_mutex); 1199 + mutex_unlock(&master->io_mutex); 1212 1200 1213 1201 /* Prod the scheduler in case transfer_one() was busy waiting */ 1214 1202 if (!ret) ··· 1223 1213 struct spi_master *master = 1224 1214 container_of(work, struct spi_master, pump_messages); 1225 1215 1226 - __spi_pump_messages(master, true, master->bus_lock_flag); 1216 + __spi_pump_messages(master, true); 1227 1217 } 1228 1218 1229 1219 static int spi_init_queue(struct spi_master *master) ··· 1896 1886 spin_lock_init(&master->queue_lock); 1897 1887 spin_lock_init(&master->bus_lock_spinlock); 1898 1888 mutex_init(&master->bus_lock_mutex); 1889 + mutex_init(&master->io_mutex); 1899 1890 master->bus_lock_flag = 0; 1900 1891 init_completion(&master->xfer_completion); 1901 1892 if (!master->max_dma_len) ··· 2749 2738 2750 2739 { 2751 2740 struct spi_master *master = spi->master; 2741 + struct device *rx_dev = NULL; 2752 2742 int ret; 2753 2743 2754 2744 if ((msg->opcode_nbits == SPI_NBITS_DUAL || ··· 2775 2763 return ret; 2776 2764 } 2777 2765 } 2766 + 2778 2767 mutex_lock(&master->bus_lock_mutex); 2768 + mutex_lock(&master->io_mutex); 2769 + if 
(master->dma_rx) { 2770 + rx_dev = master->dma_rx->device->dev; 2771 + ret = spi_map_buf(master, rx_dev, &msg->rx_sg, 2772 + msg->buf, msg->len, 2773 + DMA_FROM_DEVICE); 2774 + if (!ret) 2775 + msg->cur_msg_mapped = true; 2776 + } 2779 2777 ret = master->spi_flash_read(spi, msg); 2778 + if (msg->cur_msg_mapped) 2779 + spi_unmap_buf(master, rx_dev, &msg->rx_sg, 2780 + DMA_FROM_DEVICE); 2781 + mutex_unlock(&master->io_mutex); 2780 2782 mutex_unlock(&master->bus_lock_mutex); 2783 + 2781 2784 if (master->auto_runtime_pm) 2782 2785 pm_runtime_put(master->dev.parent); 2783 2786 ··· 2812 2785 complete(arg); 2813 2786 } 2814 2787 2815 - static int __spi_sync(struct spi_device *spi, struct spi_message *message, 2816 - int bus_locked) 2788 + static int __spi_sync(struct spi_device *spi, struct spi_message *message) 2817 2789 { 2818 2790 DECLARE_COMPLETION_ONSTACK(done); 2819 2791 int status; ··· 2829 2803 2830 2804 SPI_STATISTICS_INCREMENT_FIELD(&master->statistics, spi_sync); 2831 2805 SPI_STATISTICS_INCREMENT_FIELD(&spi->statistics, spi_sync); 2832 - 2833 - if (!bus_locked) 2834 - mutex_lock(&master->bus_lock_mutex); 2835 2806 2836 2807 /* If we're not using the legacy transfer method then we will 2837 2808 * try to transfer in the calling context so special case. ··· 2847 2824 status = spi_async_locked(spi, message); 2848 2825 } 2849 2826 2850 - if (!bus_locked) 2851 - mutex_unlock(&master->bus_lock_mutex); 2852 - 2853 2827 if (status == 0) { 2854 2828 /* Push out the messages in the calling context if we 2855 2829 * can. 
··· 2856 2836 spi_sync_immediate); 2857 2837 SPI_STATISTICS_INCREMENT_FIELD(&spi->statistics, 2858 2838 spi_sync_immediate); 2859 - __spi_pump_messages(master, false, bus_locked); 2839 + __spi_pump_messages(master, false); 2860 2840 } 2861 2841 2862 2842 wait_for_completion(&done); ··· 2889 2869 */ 2890 2870 int spi_sync(struct spi_device *spi, struct spi_message *message) 2891 2871 { 2892 - return __spi_sync(spi, message, spi->master->bus_lock_flag); 2872 + int ret; 2873 + 2874 + mutex_lock(&spi->master->bus_lock_mutex); 2875 + ret = __spi_sync(spi, message); 2876 + mutex_unlock(&spi->master->bus_lock_mutex); 2877 + 2878 + return ret; 2893 2879 } 2894 2880 EXPORT_SYMBOL_GPL(spi_sync); 2895 2881 ··· 2917 2891 */ 2918 2892 int spi_sync_locked(struct spi_device *spi, struct spi_message *message) 2919 2893 { 2920 - return __spi_sync(spi, message, 1); 2894 + return __spi_sync(spi, message); 2921 2895 } 2922 2896 EXPORT_SYMBOL_GPL(spi_sync_locked); 2923 2897
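The spi.c changes above implement the lock split called out in the pull message: previously `bus_lock_mutex` did double duty, both serializing physical bus access and letting `spi_bus_lock()` callers exclude other clients, which forced `__spi_sync()` and `__spi_pump_messages()` to conditionally skip locking. Now the new `io_mutex` guards each hardware access while `bus_lock_mutex` only excludes other callers. A single-threaded model of the resulting API contract, with integer flags standing in for the mutexes so misuse trips an assert:

```c
#include <assert.h>

static int bus_locked, io_locked, transfers;

static void lock(int *l)   { assert(*l == 0); *l = 1; }
static void unlock(int *l) { assert(*l == 1); *l = 0; }

/* Every physical transfer happens under io_lock (the new io_mutex). */
static void do_transfer(void)
{
	lock(&io_locked);
	transfers++;
	unlock(&io_locked);
}

/* spi_sync(): take the bus lock itself, then do I/O. */
static void spi_sync_model(void)
{
	lock(&bus_locked);
	do_transfer();
	unlock(&bus_locked);
}

/* spi_bus_lock()/spi_sync_locked(): the caller holds the bus across a
 * whole sequence; each transfer still serializes on io_lock only. */
static void spi_bus_lock_model(void)    { lock(&bus_locked); }
static void spi_bus_unlock_model(void)  { unlock(&bus_locked); }
static void spi_sync_locked_model(void) { do_transfer(); }
```

With one lock playing both roles, `spi_sync_locked()` inside a `spi_bus_lock()` section could not safely take the bus mutex again, which is exactly the "dropping locks" scenario the split removes.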
+41
drivers/spi/spidev.c
··· 29 29 #include <linux/compat.h> 30 30 #include <linux/of.h> 31 31 #include <linux/of_device.h> 32 + #include <linux/acpi.h> 32 33 33 34 #include <linux/spi/spi.h> 34 35 #include <linux/spi/spidev.h> ··· 701 700 MODULE_DEVICE_TABLE(of, spidev_dt_ids); 702 701 #endif 703 702 703 + #ifdef CONFIG_ACPI 704 + 705 + /* Dummy SPI devices not to be used in production systems */ 706 + #define SPIDEV_ACPI_DUMMY 1 707 + 708 + static const struct acpi_device_id spidev_acpi_ids[] = { 709 + /* 710 + * The ACPI SPT000* devices are only meant for development and 711 + * testing. Systems used in production should have a proper ACPI 712 + * description of the connected peripheral and they should also use 713 + * a proper driver instead of poking directly to the SPI bus. 714 + */ 715 + { "SPT0001", SPIDEV_ACPI_DUMMY }, 716 + { "SPT0002", SPIDEV_ACPI_DUMMY }, 717 + { "SPT0003", SPIDEV_ACPI_DUMMY }, 718 + {}, 719 + }; 720 + MODULE_DEVICE_TABLE(acpi, spidev_acpi_ids); 721 + 722 + static void spidev_probe_acpi(struct spi_device *spi) 723 + { 724 + const struct acpi_device_id *id; 725 + 726 + if (!has_acpi_companion(&spi->dev)) 727 + return; 728 + 729 + id = acpi_match_device(spidev_acpi_ids, &spi->dev); 730 + if (WARN_ON(!id)) 731 + return; 732 + 733 + if (id->driver_data == SPIDEV_ACPI_DUMMY) 734 + dev_warn(&spi->dev, "do not use this driver in production systems!\n"); 735 + } 736 + #else 737 + static inline void spidev_probe_acpi(struct spi_device *spi) {} 738 + #endif 739 + 704 740 /*-------------------------------------------------------------------------*/ 705 741 706 742 static int spidev_probe(struct spi_device *spi) ··· 756 718 WARN_ON(spi->dev.of_node && 757 719 !of_match_device(spidev_dt_ids, &spi->dev)); 758 720 } 721 + 722 + spidev_probe_acpi(spi); 759 723 760 724 /* Allocate driver data */ 761 725 spidev = kzalloc(sizeof(*spidev), GFP_KERNEL); ··· 829 789 .driver = { 830 790 .name = "spidev", 831 791 .of_match_table = of_match_ptr(spidev_dt_ids), 792 + .acpi_match_table = 
ACPI_PTR(spidev_acpi_ids), 832 793 }, 833 794 .probe = spidev_probe, 834 795 .remove = spidev_remove,
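The spidev change above adds an ACPI ID table for the SPT000* Windows-validation test devices and uses `driver_data` to flag them so probe can warn. A minimal model of the `acpi_match_device()` lookup it performs (the struct here is a simplified stand-in for the kernel's `struct acpi_device_id`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define SPIDEV_ACPI_DUMMY 1

struct acpi_device_id_model {
	const char *id;
	unsigned long driver_data;
};

/* Test-only devices; driver_data marks them as not for production. */
static const struct acpi_device_id_model spidev_acpi_ids[] = {
	{ "SPT0001", SPIDEV_ACPI_DUMMY },
	{ "SPT0002", SPIDEV_ACPI_DUMMY },
	{ "SPT0003", SPIDEV_ACPI_DUMMY },
	{ NULL, 0 },
};

/* Walk the table until the NULL sentinel, matching on the HID string. */
static const struct acpi_device_id_model *match(const char *hid)
{
	const struct acpi_device_id_model *id;

	for (id = spidev_acpi_ids; id->id; id++)
		if (!strcmp(id->id, hid))
			return id;
	return NULL;
}
```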
+1
include/linux/platform_data/spi-s3c64xx.h
··· 38 38 struct s3c64xx_spi_info { 39 39 int src_clk_nr; 40 40 int num_cs; 41 + bool no_cs; 41 42 int (*cfg_gpio)(void); 42 43 dma_filter_fn filter; 43 44 void *dma_tx;
+9 -1
include/linux/spi/spi.h
··· 312 312 * @flags: other constraints relevant to this driver 313 313 * @max_transfer_size: function that returns the max transfer size for 314 314 * a &spi_device; may be %NULL, so the default %SIZE_MAX will be used. 315 + * @io_mutex: mutex for physical bus access 315 316 * @bus_lock_spinlock: spinlock for SPI bus locking 316 - * @bus_lock_mutex: mutex for SPI bus locking 317 + * @bus_lock_mutex: mutex for exclusion of multiple callers 317 318 * @bus_lock_flag: indicates that the SPI bus is locked for exclusive use 318 319 * @setup: updates the device mode and clocking records used by a 319 320 * device's SPI controller; protocol code may call this. This ··· 446 445 * the limit may depend on device transfer settings 447 446 */ 448 447 size_t (*max_transfer_size)(struct spi_device *spi); 448 + 449 + /* I/O mutex */ 450 + struct mutex io_mutex; 449 451 450 452 /* lock and mutex for SPI bus locking */ 451 453 spinlock_t bus_lock_spinlock; ··· 1147 1143 * @opcode_nbits: number of lines to send opcode 1148 1144 * @addr_nbits: number of lines to send address 1149 1145 * @data_nbits: number of lines for data 1146 + * @rx_sg: Scatterlist for receive data read from flash 1147 + * @cur_msg_mapped: message has been mapped for DMA 1150 1148 */ 1151 1149 struct spi_flash_read_message { 1152 1150 void *buf; ··· 1161 1155 u8 opcode_nbits; 1162 1156 u8 addr_nbits; 1163 1157 u8 data_nbits; 1158 + struct sg_table rx_sg; 1159 + bool cur_msg_mapped; 1164 1160 }; 1165 1161 1166 1162 /* SPI core interface for flash read support */