Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'spi-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
"A respun version of the merges for the pull request previously sent
with a few additional fixes. The last two merges were fixed up by
hand since the branches have moved on and currently have the prior
merge in them.

Quite a busy release for the SPI subsystem, mostly in cleanups big and
small scattered through the stack rather than anything else:

- New driver for the Broadcom BCM63xx HSSPI controller
- Fix duplicate device registration for ACPI
- Conversion of s3c64xx to DMAEngine (this pulls in platform and DMA
changes upon which the transition depends)
- Some small optimisations to reduce the amount of time we hold locks
in the datapath, eliminate some redundant checks and reduce the size
of a spi_transfer
- Lots of fixes, cleanups and general enhancements to drivers,
especially the rspi and Atmel drivers"

* tag 'spi-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (112 commits)
spi: core: Fix transfer failure when master->transfer_one returns positive value
spi: Correct set_cs() documentation
spi: Clarify transfer_one() w.r.t. spi_finalize_current_transfer()
spi: Spelling s/finised/finished/
spi: sc18is602: Convert to use bits_per_word_mask
spi: Remove duplicate code to set default bits_per_word setting
spi/pxa2xx: fix compilation warning when !CONFIG_PM_SLEEP
spi: clps711x: Add MODULE_ALIAS to support module auto-loading
spi: rspi: Add missing clk_disable() calls in error and cleanup paths
spi: rspi: Spelling s/transmition/transmission/
spi: rspi: Add support for specifying CPHA/CPOL
spi/pxa2xx: initialize DMA channels to -1 to prevent inadvertent match
spi: rspi: Add more QSPI register documentation
spi: rspi: Add more RSPI register documentation
spi: rspi: Remove dependency on DMAE for SHMOBILE
spi/s3c64xx: Correct indentation
spi: sh: Use spi_sh_clear_bit() instead of open-coded
spi: bitbang: Grammar s/make to make/to make/
spi: sh-hspi: Spelling s/recive/receive/
spi: core: Improve tx/rx_nbits check comments
...

+1495 -1523
+1 -1
Documentation/devicetree/bindings/spi/spi-bus.txt
@@ -67,7 +67,7 @@
 Dual/Quad mode is not allowed when 3-wire mode is used.
 
 If a gpio chipselect is used for the SPI slave the gpio number will be passed
-via the cs_gpio
+via the SPI master node cs-gpios property.
 
 SPI example for an MPC5200 SPI bus:
 	spi@f00 {
+7 -1
Documentation/devicetree/bindings/spi/ti_qspi.txt
@@ -3,6 +3,11 @@
 Required properties:
 - compatible : should be "ti,dra7xxx-qspi" or "ti,am4372-qspi".
 - reg: Should contain QSPI registers location and length.
+- reg-names: Should contain the resource reg names.
+	- qspi_base: Qspi configuration register Address space
+	- qspi_mmap: Memory mapped Address space
+	- (optional) qspi_ctrlmod: Control module Address space
+- interrupts: should contain the qspi interrupt number.
 - #address-cells, #size-cells : Must be present if the device has sub-nodes
 - ti,hwmods: Name of the hwmod associated to the QSPI
@@ -14,7 +19,8 @@
 
 qspi: qspi@4b300000 {
 	compatible = "ti,dra7xxx-qspi";
-	reg = <0x4b300000 0x100>;
+	reg = <0x47900000 0x100>, <0x30000000 0x3ffffff>;
+	reg-names = "qspi_base", "qspi_mmap";
 	#address-cells = <1>;
 	#size-cells = <0>;
 	spi-max-frequency = <25000000>;
+4 -4
Documentation/spi/spi-summary
@@ -34,7 +34,7 @@
 - It may also be used to stream data in either direction (half duplex),
   or both of them at the same time (full duplex).
 
-- Some devices may use eight bit words.  Others may different word
+- Some devices may use eight bit words.  Others may use different word
   lengths, such as streams of 12-bit or 20-bit digital samples.
 
 - Words are usually sent with their most significant bit (MSB) first,
@@ -121,7 +121,7 @@
 a slave, and the slave can tell the chosen polarity by sampling the
 clock level when its select line goes active.  That's why many devices
 support for example both modes 0 and 3:  they don't care about polarity,
-and alway clock data in/out on rising clock edges.
+and always clock data in/out on rising clock edges.
@@ -139,7 +139,7 @@
 
 There are two types of SPI driver, here called:
 
-  Controller drivers ... controllers may be built in to System-On-Chip
+  Controller drivers ... controllers may be built into System-On-Chip
 	processors, and often support both Master and Slave roles.
 	These drivers touch hardware registers and may use DMA.
 	Or they can be PIO bitbangers, needing just GPIO pins.
@@ -548,7 +548,7 @@
 DEPRECATED METHODS
 
     master->transfer(struct spi_device *spi, struct spi_message *message)
-	This must not sleep. Its responsibility is arrange that the
+	This must not sleep. Its responsibility is to arrange that the
 	transfer happens and its complete() callback is issued. The two
 	will normally happen later, after other transfers complete, and
 	if the controller is idle it will need to be kickstarted. This
-13
arch/arm/plat-samsung/include/plat/fiq.h
@@ -1,13 +0,0 @@
-/* linux/arch/arm/plat-samsung/include/plat/fiq.h
- *
- * Copyright (c) 2009 Simtec Electronics
- *	Ben Dooks <ben@simtec.co.uk>
- *
- * Header file for S3C24XX CPU FIQ support
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-extern int s3c24xx_set_fiq(unsigned int irq, bool on);
+11 -3
drivers/spi/Kconfig
@@ -118,6 +118,13 @@
 	help
 	  Enable support for the SPI controller on the Broadcom BCM63xx SoCs.
 
+config SPI_BCM63XX_HSSPI
+	tristate "Broadcom BCM63XX HS SPI controller driver"
+	depends on BCM63XX || COMPILE_TEST
+	help
+	  This enables support for the High Speed SPI controller present on
+	  newer Broadcom BCM63XX SoCs.
+
 config SPI_BITBANG
 	tristate "Utilities for Bitbanging SPI masters"
 	help
@@ -159,6 +166,5 @@
 	tristate "Texas Instruments DaVinci/DA8x/OMAP-L/AM1x SoC SPI controller"
 	depends on ARCH_DAVINCI || ARCH_KEYSTONE
 	select SPI_BITBANG
-	select TI_EDMA
 	help
 	  SPI master controller for DaVinci/DA8x/OMAP-L/AM1x SPI modules.
@@ -301,6 +307,7 @@
 
 config SPI_OMAP24XX
 	tristate "McSPI driver for OMAP"
+	depends on ARM || ARM64 || AVR32 || HEXAGON || MIPS || SH
 	depends on ARCH_OMAP2PLUS || COMPILE_TEST
 	help
 	  SPI master controller for OMAP24XX and later Multichannel SPI
@@ -370,6 +377,6 @@
 
 config SPI_RSPI
 	tristate "Renesas RSPI controller"
-	depends on (SUPERH || ARCH_SHMOBILE) && SH_DMAE_BASE
+	depends on (SUPERH && SH_DMAE_BASE) || ARCH_SHMOBILE
 	help
 	  SPI driver for Renesas RSPI blocks.
@@ -407,7 +414,8 @@
 
 config SPI_SH_MSIOF
 	tristate "SuperH MSIOF SPI controller"
-	depends on (SUPERH || ARCH_SHMOBILE) && HAVE_CLK
+	depends on HAVE_CLK
+	depends on SUPERH || ARCH_SHMOBILE || COMPILE_TEST
 	select SPI_BITBANG
 	help
 	  SPI driver for SuperH and SH Mobile MSIOF blocks.
+1
drivers/spi/Makefile
@@ -16,6 +16,7 @@
 obj-$(CONFIG_SPI_AU1550)		+= spi-au1550.o
 obj-$(CONFIG_SPI_BCM2835)		+= spi-bcm2835.o
 obj-$(CONFIG_SPI_BCM63XX)		+= spi-bcm63xx.o
+obj-$(CONFIG_SPI_BCM63XX_HSSPI)		+= spi-bcm63xx-hsspi.o
 obj-$(CONFIG_SPI_BFIN5XX)		+= spi-bfin5xx.o
 obj-$(CONFIG_SPI_BFIN_V3)		+= spi-bfin-v3.o
 obj-$(CONFIG_SPI_BFIN_SPORT)		+= spi-bfin-sport.o
-2
drivers/spi/spi-altera.c
@@ -220,8 +220,6 @@
 
 	/* setup the state for the bitbang driver */
 	hw->bitbang.master = master;
-	if (!hw->bitbang.master)
-		return err;
 	hw->bitbang.chipselect = altera_spi_chipsel;
 	hw->bitbang.txrx_bufs = altera_spi_txrx;
 
+4 -10
drivers/spi/spi-ath79.c
@@ -243,21 +243,21 @@
 		goto err_put_master;
 	}
 
-	sp->base = ioremap(r->start, resource_size(r));
+	sp->base = devm_ioremap(&pdev->dev, r->start, resource_size(r));
 	if (!sp->base) {
 		ret = -ENXIO;
 		goto err_put_master;
 	}
 
-	sp->clk = clk_get(&pdev->dev, "ahb");
+	sp->clk = devm_clk_get(&pdev->dev, "ahb");
 	if (IS_ERR(sp->clk)) {
 		ret = PTR_ERR(sp->clk);
-		goto err_unmap;
+		goto err_put_master;
 	}
 
 	ret = clk_enable(sp->clk);
 	if (ret)
-		goto err_clk_put;
+		goto err_put_master;
 
 	rate = DIV_ROUND_UP(clk_get_rate(sp->clk), MHZ);
 	if (!rate) {
@@ -280,10 +280,6 @@
 	ath79_spi_disable(sp);
 err_clk_disable:
 	clk_disable(sp->clk);
-err_clk_put:
-	clk_put(sp->clk);
-err_unmap:
-	iounmap(sp->base);
 err_put_master:
 	spi_master_put(sp->bitbang.master);
 
@@ -297,8 +293,6 @@
 	spi_bitbang_stop(&sp->bitbang);
 	ath79_spi_disable(sp);
 	clk_disable(sp->clk);
-	clk_put(sp->clk);
-	iounmap(sp->base);
 	spi_master_put(sp->bitbang.master);
 
 	return 0;
+287 -513
drivers/spi/spi-atmel.c
···
  */
 #define DMA_MIN_BYTES	16
 
+#define SPI_DMA_TIMEOUT		(msecs_to_jiffies(1000))
+
 struct atmel_spi_dma {
 	struct dma_chan			*chan_rx;
 	struct dma_chan			*chan_tx;
···
 	int			irq;
 	struct clk		*clk;
 	struct platform_device	*pdev;
-	struct spi_device	*stay;
 
-	u8			stopping;
-	struct list_head	queue;
-	struct tasklet_struct	tasklet;
 	struct spi_transfer	*current_transfer;
 	unsigned long		current_remaining_bytes;
-	struct spi_transfer	*next_transfer;
-	unsigned long		next_remaining_bytes;
 	int			done_status;
+
+	struct completion	xfer_completion;
 
 	/* scratch buffer */
 	void			*buffer;
···
 	bool			use_pdc;
 	/* dmaengine data */
 	struct atmel_spi_dma	dma;
+
+	bool			keep_cs;
+	bool			cs_active;
 };
 
 /* Controller-specific per-slave state */
···
 	return as->use_dma && xfer->len >= DMA_MIN_BYTES;
 }
 
-static inline int atmel_spi_xfer_is_last(struct spi_message *msg,
-					struct spi_transfer *xfer)
-{
-	return msg->transfers.prev == &xfer->transfer_list;
-}
-
-static inline int atmel_spi_xfer_can_be_chained(struct spi_transfer *xfer)
-{
-	return xfer->delay_usecs == 0 && !xfer->cs_change;
-}
-
 static int atmel_spi_dma_slave_config(struct atmel_spi *as,
 			struct dma_slave_config *slave_config,
 			u8 bits_per_word)
···
 	struct spi_master *master = data;
 	struct atmel_spi *as = spi_master_get_devdata(master);
 
-	/* trigger SPI tasklet */
-	tasklet_schedule(&as->tasklet);
+	complete(&as->xfer_completion);
 }
 
 /*
  * Next transfer using PIO.
- * lock is held, spi tasklet is blocked
  */
 static void atmel_spi_next_xfer_pio(struct spi_master *master,
 					struct spi_transfer *xfer)
 {
 	struct atmel_spi *as = spi_master_get_devdata(master);
+	unsigned long xfer_pos = xfer->len - as->current_remaining_bytes;
 
 	dev_vdbg(master->dev.parent, "atmel_spi_next_xfer_pio\n");
-
-	as->current_remaining_bytes = xfer->len;
 
 	/* Make sure data is not remaining in RDR */
 	spi_readl(as, RDR);
···
 		cpu_relax();
 	}
 
-	if (xfer->tx_buf)
+	if (xfer->tx_buf) {
 		if (xfer->bits_per_word > 8)
-			spi_writel(as, TDR, *(u16 *)(xfer->tx_buf));
+			spi_writel(as, TDR, *(u16 *)(xfer->tx_buf + xfer_pos));
 		else
-			spi_writel(as, TDR, *(u8 *)(xfer->tx_buf));
-	else
+			spi_writel(as, TDR, *(u8 *)(xfer->tx_buf + xfer_pos));
+	} else {
 		spi_writel(as, TDR, 0);
+	}
 
 	dev_dbg(master->dev.parent,
 		"  start pio xfer %p: len %u tx %p rx %p bitpw %d\n",
···
 
 /*
  * Submit next transfer for DMA.
- * lock is held, spi tasklet is blocked
  */
 static int atmel_spi_next_xfer_dma_submit(struct spi_master *master,
 				struct spi_transfer *xfer,
···
 	*plen = len;
 }
 
+static int atmel_spi_set_xfer_speed(struct atmel_spi *as,
+				    struct spi_device *spi,
+				    struct spi_transfer *xfer)
+{
+	u32 scbr, csr;
+	unsigned long bus_hz;
+
+	/* v1 chips start out at half the peripheral bus speed. */
+	bus_hz = clk_get_rate(as->clk);
+	if (!atmel_spi_is_v2(as))
+		bus_hz /= 2;
+
+	/*
+	 * Calculate the lowest divider that satisfies the
+	 * constraint, assuming div32/fdiv/mbz == 0.
+	 */
+	if (xfer->speed_hz)
+		scbr = DIV_ROUND_UP(bus_hz, xfer->speed_hz);
+	else
+		/*
+		 * This can happend if max_speed is null.
+		 * In this case, we set the lowest possible speed
+		 */
+		scbr = 0xff;
+
+	/*
+	 * If the resulting divider doesn't fit into the
+	 * register bitfield, we can't satisfy the constraint.
+	 */
+	if (scbr >= (1 << SPI_SCBR_SIZE)) {
+		dev_err(&spi->dev,
+			"setup: %d Hz too slow, scbr %u; min %ld Hz\n",
+			xfer->speed_hz, scbr, bus_hz/255);
+		return -EINVAL;
+	}
+	if (scbr == 0) {
+		dev_err(&spi->dev,
+			"setup: %d Hz too high, scbr %u; max %ld Hz\n",
+			xfer->speed_hz, scbr, bus_hz);
+		return -EINVAL;
+	}
+	csr = spi_readl(as, CSR0 + 4 * spi->chip_select);
+	csr = SPI_BFINS(SCBR, scbr, csr);
+	spi_writel(as, CSR0 + 4 * spi->chip_select, csr);
+
+	return 0;
+}
+
 /*
  * Submit next transfer for PDC.
  * lock is held, spi irq is blocked
  */
 static void atmel_spi_pdc_next_xfer(struct spi_master *master,
-					struct spi_message *msg)
+					struct spi_message *msg,
+					struct spi_transfer *xfer)
 {
 	struct atmel_spi	*as = spi_master_get_devdata(master);
-	struct spi_transfer	*xfer;
-	u32			len, remaining;
-	u32			ieval;
+	u32			len;
 	dma_addr_t		tx_dma, rx_dma;
 
-	if (!as->current_transfer)
-		xfer = list_entry(msg->transfers.next,
-				struct spi_transfer, transfer_list);
-	else if (!as->next_transfer)
-		xfer = list_entry(as->current_transfer->transfer_list.next,
-				struct spi_transfer, transfer_list);
-	else
-		xfer = NULL;
+	spi_writel(as, PTCR, SPI_BIT(RXTDIS) | SPI_BIT(TXTDIS));
 
-	if (xfer) {
-		spi_writel(as, PTCR, SPI_BIT(RXTDIS) | SPI_BIT(TXTDIS));
+	len = as->current_remaining_bytes;
+	atmel_spi_next_xfer_data(master, xfer, &tx_dma, &rx_dma, &len);
+	as->current_remaining_bytes -= len;
 
-		len = xfer->len;
+	spi_writel(as, RPR, rx_dma);
+	spi_writel(as, TPR, tx_dma);
+
+	if (msg->spi->bits_per_word > 8)
+		len >>= 1;
+	spi_writel(as, RCR, len);
+	spi_writel(as, TCR, len);
+
+	dev_dbg(&msg->spi->dev,
+		"  start xfer %p: len %u tx %p/%08llx rx %p/%08llx\n",
+		xfer, xfer->len, xfer->tx_buf,
+		(unsigned long long)xfer->tx_dma, xfer->rx_buf,
+		(unsigned long long)xfer->rx_dma);
+
+	if (as->current_remaining_bytes) {
+		len = as->current_remaining_bytes;
 		atmel_spi_next_xfer_data(master, xfer, &tx_dma, &rx_dma, &len);
-		remaining = xfer->len - len;
-
-		spi_writel(as, RPR, rx_dma);
-		spi_writel(as, TPR, tx_dma);
-
-		if (msg->spi->bits_per_word > 8)
-			len >>= 1;
-		spi_writel(as, RCR, len);
-		spi_writel(as, TCR, len);
-
-		dev_dbg(&msg->spi->dev,
-			"  start xfer %p: len %u tx %p/%08llx rx %p/%08llx\n",
-			xfer, xfer->len, xfer->tx_buf,
-			(unsigned long long)xfer->tx_dma, xfer->rx_buf,
-			(unsigned long long)xfer->rx_dma);
-	} else {
-		xfer = as->next_transfer;
-		remaining = as->next_remaining_bytes;
-	}
-
-	as->current_transfer = xfer;
-	as->current_remaining_bytes = remaining;
-
-	if (remaining > 0)
-		len = remaining;
-	else if (!atmel_spi_xfer_is_last(msg, xfer)
-			&& atmel_spi_xfer_can_be_chained(xfer)) {
-		xfer = list_entry(xfer->transfer_list.next,
-				struct spi_transfer, transfer_list);
-		len = xfer->len;
-	} else
-		xfer = NULL;
-
-	as->next_transfer = xfer;
-
-	if (xfer) {
-		u32	total;
-
-		total = len;
-		atmel_spi_next_xfer_data(master, xfer, &tx_dma, &rx_dma, &len);
-		as->next_remaining_bytes = total - len;
+		as->current_remaining_bytes -= len;
 
 		spi_writel(as, RNPR, rx_dma);
 		spi_writel(as, TNPR, tx_dma);
···
 			xfer, xfer->len, xfer->tx_buf,
 			(unsigned long long)xfer->tx_dma, xfer->rx_buf,
 			(unsigned long long)xfer->rx_dma);
-		ieval = SPI_BIT(ENDRX) | SPI_BIT(OVRES);
-	} else {
-		spi_writel(as, RNCR, 0);
-		spi_writel(as, TNCR, 0);
-		ieval = SPI_BIT(RXBUFF) | SPI_BIT(ENDRX) | SPI_BIT(OVRES);
 	}
 
 	/* REVISIT: We're waiting for ENDRX before we start the next
···
 	 *
 	 * It should be doable, though. Just not now...
 	 */
-	spi_writel(as, IER, ieval);
+	spi_writel(as, IER, SPI_BIT(ENDRX) | SPI_BIT(OVRES));
 	spi_writel(as, PTCR, SPI_BIT(TXTEN) | SPI_BIT(RXTEN));
-}
-
-/*
- * Choose way to submit next transfer and start it.
- * lock is held, spi tasklet is blocked
- */
-static void atmel_spi_dma_next_xfer(struct spi_master *master,
-					struct spi_message *msg)
-{
-	struct atmel_spi	*as = spi_master_get_devdata(master);
-	struct spi_transfer	*xfer;
-	u32	remaining, len;
-
-	remaining = as->current_remaining_bytes;
-	if (remaining) {
-		xfer = as->current_transfer;
-		len = remaining;
-	} else {
-		if (!as->current_transfer)
-			xfer = list_entry(msg->transfers.next,
-				struct spi_transfer, transfer_list);
-		else
-			xfer = list_entry(
-				as->current_transfer->transfer_list.next,
-					struct spi_transfer, transfer_list);
-
-		as->current_transfer = xfer;
-		len = xfer->len;
-	}
-
-	if (atmel_spi_use_dma(as, xfer)) {
-		u32 total = len;
-		if (!atmel_spi_next_xfer_dma_submit(master, xfer, &len)) {
-			as->current_remaining_bytes = total - len;
-			return;
-		} else {
-			dev_err(&msg->spi->dev, "unable to use DMA, fallback to PIO\n");
-		}
-	}
-
-	/* use PIO if error appened using DMA */
-	atmel_spi_next_xfer_pio(master, xfer);
-}
-
-static void atmel_spi_next_message(struct spi_master *master)
-{
-	struct atmel_spi	*as = spi_master_get_devdata(master);
-	struct spi_message	*msg;
-	struct spi_device	*spi;
-
-	BUG_ON(as->current_transfer);
-
-	msg = list_entry(as->queue.next, struct spi_message, queue);
-	spi = msg->spi;
-
-	dev_dbg(master->dev.parent, "start message %p for %s\n",
-			msg, dev_name(&spi->dev));
-
-	/* select chip if it's not still active */
-	if (as->stay) {
-		if (as->stay != spi) {
-			cs_deactivate(as, as->stay);
-			cs_activate(as, spi);
-		}
-		as->stay = NULL;
-	} else
-		cs_activate(as, spi);
-
-	if (as->use_pdc)
-		atmel_spi_pdc_next_xfer(master, msg);
-	else
-		atmel_spi_dma_next_xfer(master, msg);
 }
 
 /*
···
 	spi_writel(as, PTCR, SPI_BIT(RXTDIS) | SPI_BIT(TXTDIS));
 }
 
-static void
-atmel_spi_msg_done(struct spi_master *master, struct atmel_spi *as,
-		struct spi_message *msg, int stay)
-{
-	if (!stay || as->done_status < 0)
-		cs_deactivate(as, msg->spi);
-	else
-		as->stay = msg->spi;
-
-	list_del(&msg->queue);
-	msg->status = as->done_status;
-
-	dev_dbg(master->dev.parent,
-		"xfer complete: %u bytes transferred\n",
-		msg->actual_length);
-
-	atmel_spi_unlock(as);
-	msg->complete(msg->context);
-	atmel_spi_lock(as);
-
-	as->current_transfer = NULL;
-	as->next_transfer = NULL;
-	as->done_status = 0;
-
-	/* continue if needed */
-	if (list_empty(&as->queue) || as->stopping) {
-		if (as->use_pdc)
-			atmel_spi_disable_pdc_transfer(as);
-	} else {
-		atmel_spi_next_message(master);
-	}
-}
-
 /* Called from IRQ
- * lock is held
  *
  * Must update "current_remaining_bytes" to keep track of data
  * to transfer.
···
 static void
 atmel_spi_pump_pio_data(struct atmel_spi *as, struct spi_transfer *xfer)
 {
-	u8		*txp;
 	u8		*rxp;
-	u16		*txp16;
 	u16		*rxp16;
 	unsigned long	xfer_pos = xfer->len - as->current_remaining_bytes;
 
···
 	} else {
 		as->current_remaining_bytes--;
 	}
-
-	if (as->current_remaining_bytes) {
-		if (xfer->tx_buf) {
-			if (xfer->bits_per_word > 8) {
-				txp16 = (u16 *)(((u8 *)xfer->tx_buf)
-							+ xfer_pos + 2);
-				spi_writel(as, TDR, *txp16);
-			} else {
-				txp = ((u8 *)xfer->tx_buf) + xfer_pos + 1;
-				spi_writel(as, TDR, *txp);
-			}
-		} else {
-			spi_writel(as, TDR, 0);
-		}
-	}
-}
-
-/* Tasklet
- * Called from DMA callback + pio transfer and overrun IRQ.
- */
-static void atmel_spi_tasklet_func(unsigned long data)
-{
-	struct spi_master *master = (struct spi_master *)data;
-	struct atmel_spi *as = spi_master_get_devdata(master);
-	struct spi_message *msg;
-	struct spi_transfer *xfer;
-
-	dev_vdbg(master->dev.parent, "atmel_spi_tasklet_func\n");
-
-	atmel_spi_lock(as);
-
-	xfer = as->current_transfer;
-
-	if (xfer == NULL)
-		/* already been there */
-		goto tasklet_out;
-
-	msg = list_entry(as->queue.next, struct spi_message, queue);
-
-	if (as->current_remaining_bytes == 0) {
-		if (as->done_status < 0) {
-			/* error happened (overrun) */
-			if (atmel_spi_use_dma(as, xfer))
-				atmel_spi_stop_dma(as);
-		} else {
-			/* only update length if no error */
-			msg->actual_length += xfer->len;
-		}
-
-		if (atmel_spi_use_dma(as, xfer))
-			if (!msg->is_dma_mapped)
-				atmel_spi_dma_unmap_xfer(master, xfer);
-
-		if (xfer->delay_usecs)
-			udelay(xfer->delay_usecs);
-
-		if (atmel_spi_xfer_is_last(msg, xfer) || as->done_status < 0) {
-			/* report completed (or erroneous) message */
-			atmel_spi_msg_done(master, as, msg, xfer->cs_change);
-		} else {
-			if (xfer->cs_change) {
-				cs_deactivate(as, msg->spi);
-				udelay(1);
-				cs_activate(as, msg->spi);
-			}
-
-			/*
-			 * Not done yet. Submit the next transfer.
-			 *
-			 * FIXME handle protocol options for xfer
-			 */
-			atmel_spi_dma_next_xfer(master, msg);
-		}
-	} else {
-		/*
-		 * Keep going, we still have data to send in
-		 * the current transfer.
-		 */
-		atmel_spi_dma_next_xfer(master, msg);
-	}
-
-tasklet_out:
-	atmel_spi_unlock(as);
 }
 
 /* Interrupt
  *
  * No need for locking in this Interrupt handler: done_status is the
- * only information modified. What we need is the update of this field
- * before tasklet runs. This is ensured by using barrier.
+ * only information modified.
  */
 static irqreturn_t
 atmel_spi_pio_interrupt(int irq, void *dev_id)
···
 		 *
 		 * We will also not process any remaning transfers in
 		 * the message.
-		 *
-		 * All actions are done in tasklet with done_status indication
 		 */
 		as->done_status = -EIO;
 		smp_wmb();
···
 		/* Clear any overrun happening while cleaning up */
 		spi_readl(as, SR);
 
-		tasklet_schedule(&as->tasklet);
+		complete(&as->xfer_completion);
 
 	} else if (pending & SPI_BIT(RDRF)) {
 		atmel_spi_lock(as);
···
 		ret = IRQ_HANDLED;
 		xfer = as->current_transfer;
 		atmel_spi_pump_pio_data(as, xfer);
-		if (!as->current_remaining_bytes) {
-			/* no more data to xfer, kick tasklet */
+		if (!as->current_remaining_bytes)
 			spi_writel(as, IDR, pending);
-			tasklet_schedule(&as->tasklet);
-		}
+
+		complete(&as->xfer_completion);
 
 		atmel_spi_unlock(as);
···
 {
 	struct spi_master	*master = dev_id;
 	struct atmel_spi	*as = spi_master_get_devdata(master);
-	struct spi_message	*msg;
-	struct spi_transfer	*xfer;
 	u32			status, pending, imr;
 	int			ret = IRQ_NONE;
-
-	atmel_spi_lock(as);
-
-	xfer = as->current_transfer;
-	msg = list_entry(as->queue.next, struct spi_message, queue);
 
 	imr = spi_readl(as, IMR);
 	status = spi_readl(as, SR);
 	pending = status & imr;
 
 	if (pending & SPI_BIT(OVRES)) {
-		int timeout;
 
 		ret = IRQ_HANDLED;
 
 		spi_writel(as, IDR, (SPI_BIT(RXBUFF) | SPI_BIT(ENDRX)
 				     | SPI_BIT(OVRES)));
 
-		/*
-		 * When we get an overrun, we disregard the current
-		 * transfer. Data will not be copied back from any
-		 * bounce buffer and msg->actual_len will not be
-		 * updated with the last xfer.
-		 *
-		 * We will also not process any remaning transfers in
-		 * the message.
-		 *
-		 * First, stop the transfer and unmap the DMA buffers.
-		 */
-		spi_writel(as, PTCR, SPI_BIT(RXTDIS) | SPI_BIT(TXTDIS));
-		if (!msg->is_dma_mapped)
-			atmel_spi_dma_unmap_xfer(master, xfer);
-
-		/* REVISIT: udelay in irq is unfriendly */
-		if (xfer->delay_usecs)
-			udelay(xfer->delay_usecs);
-
-		dev_warn(master->dev.parent, "overrun (%u/%u remaining)\n",
-			 spi_readl(as, TCR), spi_readl(as, RCR));
-
-		/*
-		 * Clean up DMA registers and make sure the data
-		 * registers are empty.
-		 */
-		spi_writel(as, RNCR, 0);
-		spi_writel(as, TNCR, 0);
-		spi_writel(as, RCR, 0);
-		spi_writel(as, TCR, 0);
-		for (timeout = 1000; timeout; timeout--)
-			if (spi_readl(as, SR) & SPI_BIT(TXEMPTY))
-				break;
-		if (!timeout)
-			dev_warn(master->dev.parent,
-				 "timeout waiting for TXEMPTY");
-		while (spi_readl(as, SR) & SPI_BIT(RDRF))
-			spi_readl(as, RDR);
-
 		/* Clear any overrun happening while cleaning up */
 		spi_readl(as, SR);
 
 		as->done_status = -EIO;
-		atmel_spi_msg_done(master, as, msg, 0);
+
+		complete(&as->xfer_completion);
+
 	} else if (pending & (SPI_BIT(RXBUFF) | SPI_BIT(ENDRX))) {
 		ret = IRQ_HANDLED;
 
 		spi_writel(as, IDR, pending);
 
-		if (as->current_remaining_bytes == 0) {
-			msg->actual_length += xfer->len;
-
-			if (!msg->is_dma_mapped)
-				atmel_spi_dma_unmap_xfer(master, xfer);
-
-			/* REVISIT: udelay in irq is unfriendly */
-			if (xfer->delay_usecs)
-				udelay(xfer->delay_usecs);
-
-			if (atmel_spi_xfer_is_last(msg, xfer)) {
-				/* report completed message */
-				atmel_spi_msg_done(master, as, msg,
-						xfer->cs_change);
-			} else {
-				if (xfer->cs_change) {
-					cs_deactivate(as, msg->spi);
-					udelay(1);
-					cs_activate(as, msg->spi);
-				}
-
-				/*
-				 * Not done yet. Submit the next transfer.
-				 *
-				 * FIXME handle protocol options for xfer
-				 */
-				atmel_spi_pdc_next_xfer(master, msg);
-			}
-		} else {
-			/*
-			 * Keep going, we still have data to send in
-			 * the current transfer.
-			 */
-			atmel_spi_pdc_next_xfer(master, msg);
-		}
+		complete(&as->xfer_completion);
 	}
-
-	atmel_spi_unlock(as);
 
 	return ret;
 }
···
 {
 	struct atmel_spi	*as;
 	struct atmel_spi_device	*asd;
-	u32			scbr, csr;
+	u32			csr;
 	unsigned int		bits = spi->bits_per_word;
-	unsigned long		bus_hz;
 	unsigned int		npcs_pin;
 	int			ret;
 
 	as = spi_master_get_devdata(spi->master);
-
-	if (as->stopping)
-		return -ESHUTDOWN;
 
 	if (spi->chip_select > spi->master->num_chipselect) {
 		dev_dbg(&spi->dev,
···
 		return -EINVAL;
 	}
 
-	/* v1 chips start out at half the peripheral bus speed. */
-	bus_hz = clk_get_rate(as->clk);
-	if (!atmel_spi_is_v2(as))
-		bus_hz /= 2;
-
-	if (spi->max_speed_hz) {
-		/*
-		 * Calculate the lowest divider that satisfies the
-		 * constraint, assuming div32/fdiv/mbz == 0.
-		 */
-		scbr = DIV_ROUND_UP(bus_hz, spi->max_speed_hz);
-
-		/*
-		 * If the resulting divider doesn't fit into the
-		 * register bitfield, we can't satisfy the constraint.
-		 */
-		if (scbr >= (1 << SPI_SCBR_SIZE)) {
-			dev_dbg(&spi->dev,
-				"setup: %d Hz too slow, scbr %u; min %ld Hz\n",
-				spi->max_speed_hz, scbr, bus_hz/255);
-			return -EINVAL;
-		}
-	} else
-		/* speed zero means "as slow as possible" */
-		scbr = 0xff;
-
-	csr = SPI_BF(SCBR, scbr) | SPI_BF(BITS, bits - 8);
+	csr = SPI_BF(BITS, bits - 8);
 	if (spi->mode & SPI_CPOL)
 		csr |= SPI_BIT(CPOL);
 	if (!(spi->mode & SPI_CPHA))
···
 		asd->npcs_pin = npcs_pin;
 		spi->controller_state = asd;
 		gpio_direction_output(npcs_pin, !(spi->mode & SPI_CS_HIGH));
-	} else {
-		atmel_spi_lock(as);
-		if (as->stay == spi)
-			as->stay = NULL;
-		cs_deactivate(as, spi);
-		atmel_spi_unlock(as);
 	}
 
 	asd->csr = csr;
 
 	dev_dbg(&spi->dev,
-		"setup: %lu Hz bpw %u mode 0x%x -> csr%d %08x\n",
-		bus_hz / scbr, bits, spi->mode, spi->chip_select, csr);
+		"setup: bpw %u mode 0x%x -> csr%d %08x\n",
+		bits, spi->mode, spi->chip_select, csr);
 
 	if (!atmel_spi_is_v2(as))
 		spi_writel(as, CSR0 + 4 * spi->chip_select, csr);
···
 	return 0;
 }
 
-static int atmel_spi_transfer(struct spi_device *spi, struct spi_message *msg)
+static int atmel_spi_one_transfer(struct spi_master *master,
+					struct spi_message *msg,
+					struct spi_transfer *xfer)
 {
 	struct atmel_spi	*as;
-	struct spi_transfer	*xfer;
-	struct device		*controller = spi->master->dev.parent;
+	struct spi_device	*spi = msg->spi;
 	u8			bits;
+	u32			len;
 	struct atmel_spi_device	*asd;
+	int			timeout;
+	int			ret;
 
-	as = spi_master_get_devdata(spi->master);
+	as = spi_master_get_devdata(master);
 
-	dev_dbg(controller, "new message %p submitted for %s\n",
-			msg, dev_name(&spi->dev));
+	if (!(xfer->tx_buf || xfer->rx_buf) && xfer->len) {
+		dev_dbg(&spi->dev, "missing rx or tx buf\n");
+		return -EINVAL;
+	}
+
+	if (xfer->bits_per_word) {
+		asd = spi->controller_state;
+		bits = (asd->csr >> 4) & 0xf;
+		if (bits != xfer->bits_per_word - 8) {
+			dev_dbg(&spi->dev,
+			"you can't yet change bits_per_word in transfers\n");
+			return -ENOPROTOOPT;
+		}
+	}
+
+	if (xfer->bits_per_word > 8) {
+		if (xfer->len % 2) {
+			dev_dbg(&spi->dev,
+				"buffer len should be 16 bits aligned\n");
+			return -EINVAL;
+		}
+	}
+
+	/*
+	 * DMA map early, for performance (empties dcache ASAP) and
+	 * better fault reporting.
+	 */
+	if ((!msg->is_dma_mapped)
+		&& (atmel_spi_use_dma(as, xfer)	|| as->use_pdc)) {
+		if (atmel_spi_dma_map_xfer(as, xfer) < 0)
+			return -ENOMEM;
+	}
+
+	atmel_spi_set_xfer_speed(as, msg->spi, xfer);
+
+	as->done_status = 0;
+	as->current_transfer = xfer;
+	as->current_remaining_bytes = xfer->len;
+	while (as->current_remaining_bytes) {
+		reinit_completion(&as->xfer_completion);
+
+		if (as->use_pdc) {
+			atmel_spi_pdc_next_xfer(master, msg, xfer);
+		} else if (atmel_spi_use_dma(as, xfer)) {
+			len = as->current_remaining_bytes;
+			ret = atmel_spi_next_xfer_dma_submit(master,
+						xfer, &len);
+			if (ret) {
+				dev_err(&spi->dev,
+					"unable to use DMA, fallback to PIO\n");
+				atmel_spi_next_xfer_pio(master, xfer);
+			} else {
+				as->current_remaining_bytes -= len;
+			}
+		} else {
+			atmel_spi_next_xfer_pio(master, xfer);
+		}
+
+		ret = wait_for_completion_timeout(&as->xfer_completion,
+							SPI_DMA_TIMEOUT);
+		if (WARN_ON(ret == 0)) {
+			dev_err(&spi->dev,
+				"spi trasfer timeout, err %d\n", ret);
+			as->done_status = -EIO;
+		} else {
+			ret = 0;
+		}
+
+		if (as->done_status)
+			break;
+	}
+
+	if (as->done_status) {
+		if (as->use_pdc) {
+			dev_warn(master->dev.parent,
+				"overrun (%u/%u remaining)\n",
+				spi_readl(as, TCR), spi_readl(as, RCR));
+
+			/*
+			 * Clean up DMA registers and make sure the data
+			 * registers are empty.
+			 */
+			spi_writel(as, RNCR, 0);
+			spi_writel(as, TNCR, 0);
+			spi_writel(as, RCR, 0);
+			spi_writel(as, TCR, 0);
+			for (timeout = 1000; timeout; timeout--)
+				if (spi_readl(as, SR) & SPI_BIT(TXEMPTY))
+					break;
+			if (!timeout)
+				dev_warn(master->dev.parent,
+					"timeout waiting for TXEMPTY");
+			while (spi_readl(as, SR) & SPI_BIT(RDRF))
+				spi_readl(as, RDR);
+
+			/* Clear any overrun happening while cleaning up */
+			spi_readl(as, SR);
+
+		} else if (atmel_spi_use_dma(as, xfer)) {
+			atmel_spi_stop_dma(as);
+		}
+
+		if (!msg->is_dma_mapped
+			&& (atmel_spi_use_dma(as, xfer) || as->use_pdc))
+			atmel_spi_dma_unmap_xfer(master, xfer);
+
+		return 0;
+
+	} else {
+		/* only update length if no error */
+		msg->actual_length += xfer->len;
+	}
+
+	if (!msg->is_dma_mapped
+		&& (atmel_spi_use_dma(as, xfer) || as->use_pdc))
+		atmel_spi_dma_unmap_xfer(master, xfer);
+
+	if (xfer->delay_usecs)
+		udelay(xfer->delay_usecs);
+
+	if (xfer->cs_change) {
+		if (list_is_last(&xfer->transfer_list,
+				 &msg->transfers)) {
+			as->keep_cs = true;
+		} else {
+			as->cs_active = !as->cs_active;
+			if (as->cs_active)
+				cs_activate(as, msg->spi);
+			else
+				cs_deactivate(as, msg->spi);
+		}
+	}
+
+	return 0;
+}
+
+static int atmel_spi_transfer_one_message(struct spi_master *master,
+						struct spi_message *msg)
+{
+	struct atmel_spi *as;
+	struct spi_transfer *xfer;
+	struct spi_device *spi = msg->spi;
+	int ret = 0;
+
+	as = spi_master_get_devdata(master);
+
+	dev_dbg(&spi->dev, "new message %p submitted for %s\n",
+					msg, dev_name(&spi->dev));
 
 	if (unlikely(list_empty(&msg->transfers)))
 		return -EINVAL;
 
-	if (as->stopping)
-		return -ESHUTDOWN;
+	atmel_spi_lock(as);
+	cs_activate(as, spi);
+
+	as->cs_active = true;
+	as->keep_cs = false;
+
+	msg->status = 0;
+	msg->actual_length = 0;
 
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
-		if (!(xfer->tx_buf || xfer->rx_buf) && xfer->len) {
-			dev_dbg(&spi->dev, "missing rx or tx buf\n");
-			return -EINVAL;
-		}
-
-		if (xfer->bits_per_word) {
-			asd = spi->controller_state;
-			bits = (asd->csr >> 4) & 0xf;
-			if (bits != xfer->bits_per_word - 8) {
-				dev_dbg(&spi->dev,
-					"you can't yet change bits_per_word in transfers\n");
-				return -ENOPROTOOPT;
-			}
-		}
-
-		if (xfer->bits_per_word > 8) {
-			if (xfer->len % 2) {
-				dev_dbg(&spi->dev, "buffer len should be 16 bits aligned\n");
-				return -EINVAL;
-			}
-		}
-
-		/* FIXME implement these protocol options!! */
-		if (xfer->speed_hz < spi->max_speed_hz) {
-			dev_dbg(&spi->dev, "can't change speed in transfer\n");
-			return -ENOPROTOOPT;
-		}
-
-		/*
-		 * DMA map early, for performance (empties dcache ASAP) and
-		 * better fault reporting.
1112 - */ 1113 - if ((!msg->is_dma_mapped) && (atmel_spi_use_dma(as, xfer) 1114 - || as->use_pdc)) { 1115 - if (atmel_spi_dma_map_xfer(as, xfer) < 0) 1116 - return -ENOMEM; 1117 - } 1551 + ret = atmel_spi_one_transfer(master, msg, xfer); 1552 + if (ret) 1553 + goto msg_done; 1118 1554 } 1119 1555 1120 - #ifdef VERBOSE 1556 + if (as->use_pdc) 1557 + atmel_spi_disable_pdc_transfer(as); 1558 + 1121 1559 list_for_each_entry(xfer, &msg->transfers, transfer_list) { 1122 - dev_dbg(controller, 1560 + dev_dbg(&spi->dev, 1123 1561 " xfer %p: len %u tx %p/%08x rx %p/%08x\n", 1124 1562 xfer, xfer->len, 1125 1563 xfer->tx_buf, xfer->tx_dma, 1126 1564 xfer->rx_buf, xfer->rx_dma); 1127 1565 } 1128 - #endif 1129 1566 1130 - msg->status = -EINPROGRESS; 1131 - msg->actual_length = 0; 1567 + msg_done: 1568 + if (!as->keep_cs) 1569 + cs_deactivate(as, msg->spi); 1132 1570 1133 - atmel_spi_lock(as); 1134 - list_add_tail(&msg->queue, &as->queue); 1135 - if (!as->current_transfer) 1136 - atmel_spi_next_message(spi->master); 1137 1571 atmel_spi_unlock(as); 1138 1572 1139 - return 0; 1573 + msg->status = as->done_status; 1574 + spi_finalize_current_message(spi->master); 1575 + 1576 + return ret; 1140 1577 } 1141 1578 1142 1579 static void atmel_spi_cleanup(struct spi_device *spi) 1143 1580 { 1144 - struct atmel_spi *as = spi_master_get_devdata(spi->master); 1145 1581 struct atmel_spi_device *asd = spi->controller_state; 1146 1582 unsigned gpio = (unsigned) spi->controller_data; 1147 1583 1148 1584 if (!asd) 1149 1585 return; 1150 - 1151 - atmel_spi_lock(as); 1152 - if (as->stay == spi) { 1153 - as->stay = NULL; 1154 - cs_deactivate(as, spi); 1155 - } 1156 - atmel_spi_unlock(as); 1157 1586 1158 1587 spi->controller_state = NULL; 1159 1588 gpio_free(gpio); ··· 1311 1510 if (irq < 0) 1312 1511 return irq; 1313 1512 1314 - clk = clk_get(&pdev->dev, "spi_clk"); 1513 + clk = devm_clk_get(&pdev->dev, "spi_clk"); 1315 1514 if (IS_ERR(clk)) 1316 1515 return PTR_ERR(clk); 1317 1516 ··· 1328 1527 
master->bus_num = pdev->id; 1329 1528 master->num_chipselect = master->dev.of_node ? 0 : 4; 1330 1529 master->setup = atmel_spi_setup; 1331 - master->transfer = atmel_spi_transfer; 1530 + master->transfer_one_message = atmel_spi_transfer_one_message; 1332 1531 master->cleanup = atmel_spi_cleanup; 1333 1532 platform_set_drvdata(pdev, master); 1334 1533 ··· 1344 1543 goto out_free; 1345 1544 1346 1545 spin_lock_init(&as->lock); 1347 - INIT_LIST_HEAD(&as->queue); 1348 1546 1349 1547 as->pdev = pdev; 1350 1548 as->regs = devm_ioremap_resource(&pdev->dev, regs); ··· 1354 1554 as->phybase = regs->start; 1355 1555 as->irq = irq; 1356 1556 as->clk = clk; 1557 + 1558 + init_completion(&as->xfer_completion); 1357 1559 1358 1560 atmel_get_caps(as); 1359 1561 ··· 1372 1570 dev_info(&pdev->dev, "Atmel SPI Controller using PIO only\n"); 1373 1571 1374 1572 if (as->use_pdc) { 1375 - ret = request_irq(irq, atmel_spi_pdc_interrupt, 0, 1376 - dev_name(&pdev->dev), master); 1573 + ret = devm_request_irq(&pdev->dev, irq, atmel_spi_pdc_interrupt, 1574 + 0, dev_name(&pdev->dev), master); 1377 1575 } else { 1378 - tasklet_init(&as->tasklet, atmel_spi_tasklet_func, 1379 - (unsigned long)master); 1380 - 1381 - ret = request_irq(irq, atmel_spi_pio_interrupt, 0, 1382 - dev_name(&pdev->dev), master); 1576 + ret = devm_request_irq(&pdev->dev, irq, atmel_spi_pio_interrupt, 1577 + 0, dev_name(&pdev->dev), master); 1383 1578 } 1384 1579 if (ret) 1385 1580 goto out_unmap_regs; ··· 1402 1603 dev_info(&pdev->dev, "Atmel SPI Controller at 0x%08lx (irq %d)\n", 1403 1604 (unsigned long)regs->start, irq); 1404 1605 1405 - ret = spi_register_master(master); 1606 + ret = devm_spi_register_master(&pdev->dev, master); 1406 1607 if (ret) 1407 1608 goto out_free_dma; 1408 1609 ··· 1416 1617 spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */ 1417 1618 clk_disable_unprepare(clk); 1418 1619 out_free_irq: 1419 - free_irq(irq, master); 1420 1620 out_unmap_regs: 1421 1621 out_free_buffer: 1422 - 
if (!as->use_pdc) 1423 - tasklet_kill(&as->tasklet); 1424 1622 dma_free_coherent(&pdev->dev, BUFFER_SIZE, as->buffer, 1425 1623 as->buffer_dma); 1426 1624 out_free: 1427 - clk_put(clk); 1428 1625 spi_master_put(master); 1429 1626 return ret; 1430 1627 } ··· 1429 1634 { 1430 1635 struct spi_master *master = platform_get_drvdata(pdev); 1431 1636 struct atmel_spi *as = spi_master_get_devdata(master); 1432 - struct spi_message *msg; 1433 - struct spi_transfer *xfer; 1434 1637 1435 1638 /* reset the hardware and block queue progress */ 1436 1639 spin_lock_irq(&as->lock); 1437 - as->stopping = 1; 1438 1640 if (as->use_dma) { 1439 1641 atmel_spi_stop_dma(as); 1440 1642 atmel_spi_release_dma(as); ··· 1442 1650 spi_readl(as, SR); 1443 1651 spin_unlock_irq(&as->lock); 1444 1652 1445 - /* Terminate remaining queued transfers */ 1446 - list_for_each_entry(msg, &as->queue, queue) { 1447 - list_for_each_entry(xfer, &msg->transfers, transfer_list) { 1448 - if (!msg->is_dma_mapped 1449 - && (atmel_spi_use_dma(as, xfer) 1450 - || as->use_pdc)) 1451 - atmel_spi_dma_unmap_xfer(master, xfer); 1452 - } 1453 - msg->status = -ESHUTDOWN; 1454 - msg->complete(msg->context); 1455 - } 1456 - 1457 - if (!as->use_pdc) 1458 - tasklet_kill(&as->tasklet); 1459 1653 dma_free_coherent(&pdev->dev, BUFFER_SIZE, as->buffer, 1460 1654 as->buffer_dma); 1461 1655 1462 1656 clk_disable_unprepare(as->clk); 1463 - clk_put(as->clk); 1464 - free_irq(as->irq, master); 1465 - 1466 - spi_unregister_master(master); 1467 1657 1468 1658 return 0; 1469 1659 }
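The new atmel_spi_one_transfer() above begins with two sanity checks before touching the hardware: a non-empty transfer must supply at least one buffer, and transfers wider than 8 bits must have an even byte length. A hedged, standalone illustration of just that check logic (simplified types and a hypothetical check_xfer() helper, not the driver's real code):

```c
#include <errno.h>
#include <stddef.h>

/* Simplified model of the transfer sanity checks at the top of
 * atmel_spi_one_transfer() in the diff above. Names here are
 * hypothetical; the real function also validates bits_per_word
 * against the chip-select register (CSR). */
struct xfer {
	const void *tx_buf;
	void *rx_buf;
	unsigned int len;
	unsigned int bits_per_word;	/* 0 means "use the device default" */
};

static int check_xfer(const struct xfer *x)
{
	/* A non-empty transfer must supply at least one buffer. */
	if (!(x->tx_buf || x->rx_buf) && x->len)
		return -EINVAL;
	/* Words wider than 8 bits occupy two bytes each, so the
	 * buffer length must be 16-bit aligned. */
	if (x->bits_per_word > 8 && (x->len % 2))
		return -EINVAL;
	return 0;
}
```

Doing these checks first means the function can bail out with a plain error before any DMA mapping or register writes need unwinding.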
+3 -7
drivers/spi/spi-bcm2835.c
··· 347 347 348 348 clk_prepare_enable(bs->clk); 349 349 350 - err = request_irq(bs->irq, bcm2835_spi_interrupt, 0, 351 - dev_name(&pdev->dev), master); 350 + err = devm_request_irq(&pdev->dev, bs->irq, bcm2835_spi_interrupt, 0, 351 + dev_name(&pdev->dev), master); 352 352 if (err) { 353 353 dev_err(&pdev->dev, "could not request IRQ: %d\n", err); 354 354 goto out_clk_disable; ··· 361 361 err = devm_spi_register_master(&pdev->dev, master); 362 362 if (err) { 363 363 dev_err(&pdev->dev, "could not register SPI master: %d\n", err); 364 - goto out_free_irq; 364 + goto out_clk_disable; 365 365 } 366 366 367 367 return 0; 368 368 369 - out_free_irq: 370 - free_irq(bs->irq, master); 371 369 out_clk_disable: 372 370 clk_disable_unprepare(bs->clk); 373 371 out_master_put: ··· 377 379 { 378 380 struct spi_master *master = platform_get_drvdata(pdev); 379 381 struct bcm2835_spi *bs = spi_master_get_devdata(master); 380 - 381 - free_irq(bs->irq, master); 382 382 383 383 /* Clear FIFOs, and disable the HW block */ 384 384 bcm2835_wr(bs, BCM2835_SPI_CS,
+475
drivers/spi/spi-bcm63xx-hsspi.c
··· 1 + /* 2 + * Broadcom BCM63XX High Speed SPI Controller driver 3 + * 4 + * Copyright 2000-2010 Broadcom Corporation 5 + * Copyright 2012-2013 Jonas Gorski <jogo@openwrt.org> 6 + * 7 + * Licensed under the GNU/GPL. See COPYING for details. 8 + */ 9 + 10 + #include <linux/kernel.h> 11 + #include <linux/init.h> 12 + #include <linux/io.h> 13 + #include <linux/clk.h> 14 + #include <linux/module.h> 15 + #include <linux/platform_device.h> 16 + #include <linux/delay.h> 17 + #include <linux/dma-mapping.h> 18 + #include <linux/err.h> 19 + #include <linux/interrupt.h> 20 + #include <linux/spi/spi.h> 21 + #include <linux/workqueue.h> 22 + #include <linux/mutex.h> 23 + 24 + #define HSSPI_GLOBAL_CTRL_REG 0x0 25 + #define GLOBAL_CTRL_CS_POLARITY_SHIFT 0 26 + #define GLOBAL_CTRL_CS_POLARITY_MASK 0x000000ff 27 + #define GLOBAL_CTRL_PLL_CLK_CTRL_SHIFT 8 28 + #define GLOBAL_CTRL_PLL_CLK_CTRL_MASK 0x0000ff00 29 + #define GLOBAL_CTRL_CLK_GATE_SSOFF BIT(16) 30 + #define GLOBAL_CTRL_CLK_POLARITY BIT(17) 31 + #define GLOBAL_CTRL_MOSI_IDLE BIT(18) 32 + 33 + #define HSSPI_GLOBAL_EXT_TRIGGER_REG 0x4 34 + 35 + #define HSSPI_INT_STATUS_REG 0x8 36 + #define HSSPI_INT_STATUS_MASKED_REG 0xc 37 + #define HSSPI_INT_MASK_REG 0x10 38 + 39 + #define HSSPI_PINGx_CMD_DONE(i) BIT((i * 8) + 0) 40 + #define HSSPI_PINGx_RX_OVER(i) BIT((i * 8) + 1) 41 + #define HSSPI_PINGx_TX_UNDER(i) BIT((i * 8) + 2) 42 + #define HSSPI_PINGx_POLL_TIMEOUT(i) BIT((i * 8) + 3) 43 + #define HSSPI_PINGx_CTRL_INVAL(i) BIT((i * 8) + 4) 44 + 45 + #define HSSPI_INT_CLEAR_ALL 0xff001f1f 46 + 47 + #define HSSPI_PINGPONG_COMMAND_REG(x) (0x80 + (x) * 0x40) 48 + #define PINGPONG_CMD_COMMAND_MASK 0xf 49 + #define PINGPONG_COMMAND_NOOP 0 50 + #define PINGPONG_COMMAND_START_NOW 1 51 + #define PINGPONG_COMMAND_START_TRIGGER 2 52 + #define PINGPONG_COMMAND_HALT 3 53 + #define PINGPONG_COMMAND_FLUSH 4 54 + #define PINGPONG_CMD_PROFILE_SHIFT 8 55 + #define PINGPONG_CMD_SS_SHIFT 12 56 + 57 + #define HSSPI_PINGPONG_STATUS_REG(x) (0x84 + (x) * 
0x40) 58 + 59 + #define HSSPI_PROFILE_CLK_CTRL_REG(x) (0x100 + (x) * 0x20) 60 + #define CLK_CTRL_FREQ_CTRL_MASK 0x0000ffff 61 + #define CLK_CTRL_SPI_CLK_2X_SEL BIT(14) 62 + #define CLK_CTRL_ACCUM_RST_ON_LOOP BIT(15) 63 + 64 + #define HSSPI_PROFILE_SIGNAL_CTRL_REG(x) (0x104 + (x) * 0x20) 65 + #define SIGNAL_CTRL_LATCH_RISING BIT(12) 66 + #define SIGNAL_CTRL_LAUNCH_RISING BIT(13) 67 + #define SIGNAL_CTRL_ASYNC_INPUT_PATH BIT(16) 68 + 69 + #define HSSPI_PROFILE_MODE_CTRL_REG(x) (0x108 + (x) * 0x20) 70 + #define MODE_CTRL_MULTIDATA_RD_STRT_SHIFT 8 71 + #define MODE_CTRL_MULTIDATA_WR_STRT_SHIFT 12 72 + #define MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT 16 73 + #define MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT 18 74 + #define MODE_CTRL_MODE_3WIRE BIT(20) 75 + #define MODE_CTRL_PREPENDBYTE_CNT_SHIFT 24 76 + 77 + #define HSSPI_FIFO_REG(x) (0x200 + (x) * 0x200) 78 + 79 + 80 + #define HSSPI_OP_CODE_SHIFT 13 81 + #define HSSPI_OP_SLEEP (0 << HSSPI_OP_CODE_SHIFT) 82 + #define HSSPI_OP_READ_WRITE (1 << HSSPI_OP_CODE_SHIFT) 83 + #define HSSPI_OP_WRITE (2 << HSSPI_OP_CODE_SHIFT) 84 + #define HSSPI_OP_READ (3 << HSSPI_OP_CODE_SHIFT) 85 + #define HSSPI_OP_SETIRQ (4 << HSSPI_OP_CODE_SHIFT) 86 + 87 + #define HSSPI_BUFFER_LEN 512 88 + #define HSSPI_OPCODE_LEN 2 89 + 90 + #define HSSPI_MAX_PREPEND_LEN 15 91 + 92 + #define HSSPI_MAX_SYNC_CLOCK 30000000 93 + 94 + #define HSSPI_BUS_NUM 1 /* 0 is legacy SPI */ 95 + 96 + struct bcm63xx_hsspi { 97 + struct completion done; 98 + struct mutex bus_mutex; 99 + 100 + struct platform_device *pdev; 101 + struct clk *clk; 102 + void __iomem *regs; 103 + u8 __iomem *fifo; 104 + 105 + u32 speed_hz; 106 + u8 cs_polarity; 107 + }; 108 + 109 + static void bcm63xx_hsspi_set_cs(struct bcm63xx_hsspi *bs, unsigned cs, 110 + bool active) 111 + { 112 + u32 reg; 113 + 114 + mutex_lock(&bs->bus_mutex); 115 + reg = __raw_readl(bs->regs + HSSPI_GLOBAL_CTRL_REG); 116 + 117 + reg &= ~BIT(cs); 118 + if (active == !(bs->cs_polarity & BIT(cs))) 119 + reg |= BIT(cs); 120 + 121 + 
__raw_writel(reg, bs->regs + HSSPI_GLOBAL_CTRL_REG); 122 + mutex_unlock(&bs->bus_mutex); 123 + } 124 + 125 + static void bcm63xx_hsspi_set_clk(struct bcm63xx_hsspi *bs, 126 + struct spi_device *spi, int hz) 127 + { 128 + unsigned profile = spi->chip_select; 129 + u32 reg; 130 + 131 + reg = DIV_ROUND_UP(2048, DIV_ROUND_UP(bs->speed_hz, hz)); 132 + __raw_writel(CLK_CTRL_ACCUM_RST_ON_LOOP | reg, 133 + bs->regs + HSSPI_PROFILE_CLK_CTRL_REG(profile)); 134 + 135 + reg = __raw_readl(bs->regs + HSSPI_PROFILE_SIGNAL_CTRL_REG(profile)); 136 + if (hz > HSSPI_MAX_SYNC_CLOCK) 137 + reg |= SIGNAL_CTRL_ASYNC_INPUT_PATH; 138 + else 139 + reg &= ~SIGNAL_CTRL_ASYNC_INPUT_PATH; 140 + __raw_writel(reg, bs->regs + HSSPI_PROFILE_SIGNAL_CTRL_REG(profile)); 141 + 142 + mutex_lock(&bs->bus_mutex); 143 + /* setup clock polarity */ 144 + reg = __raw_readl(bs->regs + HSSPI_GLOBAL_CTRL_REG); 145 + reg &= ~GLOBAL_CTRL_CLK_POLARITY; 146 + if (spi->mode & SPI_CPOL) 147 + reg |= GLOBAL_CTRL_CLK_POLARITY; 148 + __raw_writel(reg, bs->regs + HSSPI_GLOBAL_CTRL_REG); 149 + mutex_unlock(&bs->bus_mutex); 150 + } 151 + 152 + static int bcm63xx_hsspi_do_txrx(struct spi_device *spi, struct spi_transfer *t) 153 + { 154 + struct bcm63xx_hsspi *bs = spi_master_get_devdata(spi->master); 155 + unsigned chip_select = spi->chip_select; 156 + u16 opcode = 0; 157 + int pending = t->len; 158 + int step_size = HSSPI_BUFFER_LEN; 159 + const u8 *tx = t->tx_buf; 160 + u8 *rx = t->rx_buf; 161 + 162 + bcm63xx_hsspi_set_clk(bs, spi, t->speed_hz); 163 + bcm63xx_hsspi_set_cs(bs, spi->chip_select, true); 164 + 165 + if (tx && rx) 166 + opcode = HSSPI_OP_READ_WRITE; 167 + else if (tx) 168 + opcode = HSSPI_OP_WRITE; 169 + else if (rx) 170 + opcode = HSSPI_OP_READ; 171 + 172 + if (opcode != HSSPI_OP_READ) 173 + step_size -= HSSPI_OPCODE_LEN; 174 + 175 + __raw_writel(0 << MODE_CTRL_PREPENDBYTE_CNT_SHIFT | 176 + 2 << MODE_CTRL_MULTIDATA_WR_STRT_SHIFT | 177 + 2 << MODE_CTRL_MULTIDATA_RD_STRT_SHIFT | 0xff, 178 + bs->regs + 
HSSPI_PROFILE_MODE_CTRL_REG(chip_select)); 179 + 180 + while (pending > 0) { 181 + int curr_step = min_t(int, step_size, pending); 182 + 183 + init_completion(&bs->done); 184 + if (tx) { 185 + memcpy_toio(bs->fifo + HSSPI_OPCODE_LEN, tx, curr_step); 186 + tx += curr_step; 187 + } 188 + 189 + __raw_writew(opcode | curr_step, bs->fifo); 190 + 191 + /* enable interrupt */ 192 + __raw_writel(HSSPI_PINGx_CMD_DONE(0), 193 + bs->regs + HSSPI_INT_MASK_REG); 194 + 195 + /* start the transfer */ 196 + __raw_writel(!chip_select << PINGPONG_CMD_SS_SHIFT | 197 + chip_select << PINGPONG_CMD_PROFILE_SHIFT | 198 + PINGPONG_COMMAND_START_NOW, 199 + bs->regs + HSSPI_PINGPONG_COMMAND_REG(0)); 200 + 201 + if (wait_for_completion_timeout(&bs->done, HZ) == 0) { 202 + dev_err(&bs->pdev->dev, "transfer timed out!\n"); 203 + return -ETIMEDOUT; 204 + } 205 + 206 + if (rx) { 207 + memcpy_fromio(rx, bs->fifo, curr_step); 208 + rx += curr_step; 209 + } 210 + 211 + pending -= curr_step; 212 + } 213 + 214 + return 0; 215 + } 216 + 217 + static int bcm63xx_hsspi_setup(struct spi_device *spi) 218 + { 219 + struct bcm63xx_hsspi *bs = spi_master_get_devdata(spi->master); 220 + u32 reg; 221 + 222 + reg = __raw_readl(bs->regs + 223 + HSSPI_PROFILE_SIGNAL_CTRL_REG(spi->chip_select)); 224 + reg &= ~(SIGNAL_CTRL_LAUNCH_RISING | SIGNAL_CTRL_LATCH_RISING); 225 + if (spi->mode & SPI_CPHA) 226 + reg |= SIGNAL_CTRL_LAUNCH_RISING; 227 + else 228 + reg |= SIGNAL_CTRL_LATCH_RISING; 229 + __raw_writel(reg, bs->regs + 230 + HSSPI_PROFILE_SIGNAL_CTRL_REG(spi->chip_select)); 231 + 232 + mutex_lock(&bs->bus_mutex); 233 + reg = __raw_readl(bs->regs + HSSPI_GLOBAL_CTRL_REG); 234 + 235 + /* only change actual polarities if there is no transfer */ 236 + if ((reg & GLOBAL_CTRL_CS_POLARITY_MASK) == bs->cs_polarity) { 237 + if (spi->mode & SPI_CS_HIGH) 238 + reg |= BIT(spi->chip_select); 239 + else 240 + reg &= ~BIT(spi->chip_select); 241 + __raw_writel(reg, bs->regs + HSSPI_GLOBAL_CTRL_REG); 242 + } 243 + 244 + if 
(spi->mode & SPI_CS_HIGH) 245 + bs->cs_polarity |= BIT(spi->chip_select); 246 + else 247 + bs->cs_polarity &= ~BIT(spi->chip_select); 248 + 249 + mutex_unlock(&bs->bus_mutex); 250 + 251 + return 0; 252 + } 253 + 254 + static int bcm63xx_hsspi_transfer_one(struct spi_master *master, 255 + struct spi_message *msg) 256 + { 257 + struct bcm63xx_hsspi *bs = spi_master_get_devdata(master); 258 + struct spi_transfer *t; 259 + struct spi_device *spi = msg->spi; 260 + int status = -EINVAL; 261 + int dummy_cs; 262 + u32 reg; 263 + 264 + /* This controller does not support keeping CS active during idle. 265 + * To work around this, we use the following ugly hack: 266 + * 267 + * a. Invert the target chip select's polarity so it will be active. 268 + * b. Select a "dummy" chip select to use as the hardware target. 269 + * c. Invert the dummy chip select's polarity so it will be inactive 270 + * during the actual transfers. 271 + * d. Tell the hardware to send to the dummy chip select. Thanks to 272 + * the multiplexed nature of SPI the actual target will receive 273 + * the transfer and we see its response. 274 + * 275 + * e. At the end restore the polarities again to their default values. 
276 + */ 277 + 278 + dummy_cs = !spi->chip_select; 279 + bcm63xx_hsspi_set_cs(bs, dummy_cs, true); 280 + 281 + list_for_each_entry(t, &msg->transfers, transfer_list) { 282 + status = bcm63xx_hsspi_do_txrx(spi, t); 283 + if (status) 284 + break; 285 + 286 + msg->actual_length += t->len; 287 + 288 + if (t->delay_usecs) 289 + udelay(t->delay_usecs); 290 + 291 + if (t->cs_change) 292 + bcm63xx_hsspi_set_cs(bs, spi->chip_select, false); 293 + } 294 + 295 + mutex_lock(&bs->bus_mutex); 296 + reg = __raw_readl(bs->regs + HSSPI_GLOBAL_CTRL_REG); 297 + reg &= ~GLOBAL_CTRL_CS_POLARITY_MASK; 298 + reg |= bs->cs_polarity; 299 + __raw_writel(reg, bs->regs + HSSPI_GLOBAL_CTRL_REG); 300 + mutex_unlock(&bs->bus_mutex); 301 + 302 + msg->status = status; 303 + spi_finalize_current_message(master); 304 + 305 + return 0; 306 + } 307 + 308 + static irqreturn_t bcm63xx_hsspi_interrupt(int irq, void *dev_id) 309 + { 310 + struct bcm63xx_hsspi *bs = (struct bcm63xx_hsspi *)dev_id; 311 + 312 + if (__raw_readl(bs->regs + HSSPI_INT_STATUS_MASKED_REG) == 0) 313 + return IRQ_NONE; 314 + 315 + __raw_writel(HSSPI_INT_CLEAR_ALL, bs->regs + HSSPI_INT_STATUS_REG); 316 + __raw_writel(0, bs->regs + HSSPI_INT_MASK_REG); 317 + 318 + complete(&bs->done); 319 + 320 + return IRQ_HANDLED; 321 + } 322 + 323 + static int bcm63xx_hsspi_probe(struct platform_device *pdev) 324 + { 325 + struct spi_master *master; 326 + struct bcm63xx_hsspi *bs; 327 + struct resource *res_mem; 328 + void __iomem *regs; 329 + struct device *dev = &pdev->dev; 330 + struct clk *clk; 331 + int irq, ret; 332 + u32 reg, rate; 333 + 334 + irq = platform_get_irq(pdev, 0); 335 + if (irq < 0) { 336 + dev_err(dev, "no irq\n"); 337 + return -ENXIO; 338 + } 339 + 340 + res_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 341 + regs = devm_ioremap_resource(dev, res_mem); 342 + if (IS_ERR(regs)) 343 + return PTR_ERR(regs); 344 + 345 + clk = devm_clk_get(dev, "hsspi"); 346 + 347 + if (IS_ERR(clk)) 348 + return PTR_ERR(clk); 349 + 350 + rate 
= clk_get_rate(clk); 351 + if (!rate) 352 + return -EINVAL; 353 + 354 + ret = clk_prepare_enable(clk); 355 + if (ret) 356 + return ret; 357 + 358 + master = spi_alloc_master(&pdev->dev, sizeof(*bs)); 359 + if (!master) { 360 + ret = -ENOMEM; 361 + goto out_disable_clk; 362 + } 363 + 364 + bs = spi_master_get_devdata(master); 365 + bs->pdev = pdev; 366 + bs->clk = clk; 367 + bs->regs = regs; 368 + bs->speed_hz = rate; 369 + bs->fifo = (u8 __iomem *)(bs->regs + HSSPI_FIFO_REG(0)); 370 + 371 + mutex_init(&bs->bus_mutex); 372 + 373 + master->bus_num = HSSPI_BUS_NUM; 374 + master->num_chipselect = 8; 375 + master->setup = bcm63xx_hsspi_setup; 376 + master->transfer_one_message = bcm63xx_hsspi_transfer_one; 377 + master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH; 378 + master->bits_per_word_mask = SPI_BPW_MASK(8); 379 + master->auto_runtime_pm = true; 380 + 381 + platform_set_drvdata(pdev, master); 382 + 383 + /* Initialize the hardware */ 384 + __raw_writel(0, bs->regs + HSSPI_INT_MASK_REG); 385 + 386 + /* clean up any pending interrupts */ 387 + __raw_writel(HSSPI_INT_CLEAR_ALL, bs->regs + HSSPI_INT_STATUS_REG); 388 + 389 + /* read out default CS polarities */ 390 + reg = __raw_readl(bs->regs + HSSPI_GLOBAL_CTRL_REG); 391 + bs->cs_polarity = reg & GLOBAL_CTRL_CS_POLARITY_MASK; 392 + __raw_writel(reg | GLOBAL_CTRL_CLK_GATE_SSOFF, 393 + bs->regs + HSSPI_GLOBAL_CTRL_REG); 394 + 395 + ret = devm_request_irq(dev, irq, bcm63xx_hsspi_interrupt, IRQF_SHARED, 396 + pdev->name, bs); 397 + 398 + if (ret) 399 + goto out_put_master; 400 + 401 + /* register and we are done */ 402 + ret = devm_spi_register_master(dev, master); 403 + if (ret) 404 + goto out_put_master; 405 + 406 + return 0; 407 + 408 + out_put_master: 409 + spi_master_put(master); 410 + out_disable_clk: 411 + clk_disable_unprepare(clk); 412 + return ret; 413 + } 414 + 415 + 416 + static int bcm63xx_hsspi_remove(struct platform_device *pdev) 417 + { 418 + struct spi_master *master = platform_get_drvdata(pdev); 419 
+ struct bcm63xx_hsspi *bs = spi_master_get_devdata(master); 420 + 421 + /* reset the hardware and block queue progress */ 422 + __raw_writel(0, bs->regs + HSSPI_INT_MASK_REG); 423 + clk_disable_unprepare(bs->clk); 424 + 425 + return 0; 426 + } 427 + 428 + #ifdef CONFIG_PM_SLEEP 429 + static int bcm63xx_hsspi_suspend(struct device *dev) 430 + { 431 + struct spi_master *master = dev_get_drvdata(dev); 432 + struct bcm63xx_hsspi *bs = spi_master_get_devdata(master); 433 + 434 + spi_master_suspend(master); 435 + clk_disable_unprepare(bs->clk); 436 + 437 + return 0; 438 + } 439 + 440 + static int bcm63xx_hsspi_resume(struct device *dev) 441 + { 442 + struct spi_master *master = dev_get_drvdata(dev); 443 + struct bcm63xx_hsspi *bs = spi_master_get_devdata(master); 444 + int ret; 445 + 446 + ret = clk_prepare_enable(bs->clk); 447 + if (ret) 448 + return ret; 449 + 450 + spi_master_resume(master); 451 + 452 + return 0; 453 + } 454 + #endif 455 + 456 + static const struct dev_pm_ops bcm63xx_hsspi_pm_ops = { 457 + SET_SYSTEM_SLEEP_PM_OPS(bcm63xx_hsspi_suspend, bcm63xx_hsspi_resume) 458 + }; 459 + 460 + static struct platform_driver bcm63xx_hsspi_driver = { 461 + .driver = { 462 + .name = "bcm63xx-hsspi", 463 + .owner = THIS_MODULE, 464 + .pm = &bcm63xx_hsspi_pm_ops, 465 + }, 466 + .probe = bcm63xx_hsspi_probe, 467 + .remove = bcm63xx_hsspi_remove, 468 + }; 469 + 470 + module_platform_driver(bcm63xx_hsspi_driver); 471 + 472 + MODULE_ALIAS("platform:bcm63xx_hsspi"); 473 + MODULE_DESCRIPTION("Broadcom BCM63xx High Speed SPI Controller driver"); 474 + MODULE_AUTHOR("Jonas Gorski <jogo@openwrt.org>"); 475 + MODULE_LICENSE("GPL");
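In the new driver above, bcm63xx_hsspi_set_clk() derives the profile clock value written to HSSPI_PROFILE_CLK_CTRL_REG as DIV_ROUND_UP(2048, DIV_ROUND_UP(bs->speed_hz, hz)). A standalone sketch of just that arithmetic (hsspi_clk_ctrl() is a hypothetical name, not part of the driver):

```c
/* DIV_ROUND_UP as defined in the kernel's <linux/kernel.h>. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Model of the FREQ_CTRL value bcm63xx_hsspi_set_clk() programs:
 * the 2048-step clock accumulator is divided by the (rounded-up)
 * ratio of the PLL rate to the requested SCK rate, so a slower
 * requested clock yields a larger step value. */
static unsigned int hsspi_clk_ctrl(unsigned int pll_hz, unsigned int req_hz)
{
	return DIV_ROUND_UP(2048, DIV_ROUND_UP(pll_hz, req_hz));
}
```

For example, with a 100 MHz PLL and a 20 MHz transfer, the inner ratio is 5 and the programmed value is DIV_ROUND_UP(2048, 5).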
+17 -31
drivers/spi/spi-bcm63xx.c
··· 169 169 transfer_list); 170 170 } 171 171 172 - len -= prepend_len; 173 - 174 172 init_completion(&bs->done); 175 173 176 174 /* Fill in the Message control register */ ··· 203 205 if (!timeout) 204 206 return -ETIMEDOUT; 205 207 206 - /* read out all data */ 207 - rx_tail = bcm_spi_readb(bs, SPI_RX_TAIL); 208 - 209 - if (do_rx && rx_tail != len) 210 - return -EIO; 211 - 212 - if (!rx_tail) 208 + if (!do_rx) 213 209 return 0; 214 210 215 211 len = 0; ··· 337 345 irq = platform_get_irq(pdev, 0); 338 346 if (irq < 0) { 339 347 dev_err(dev, "no irq\n"); 340 - ret = -ENXIO; 341 - goto out; 348 + return -ENXIO; 342 349 } 343 350 344 - clk = clk_get(dev, "spi"); 351 + clk = devm_clk_get(dev, "spi"); 345 352 if (IS_ERR(clk)) { 346 353 dev_err(dev, "no clock for device\n"); 347 - ret = PTR_ERR(clk); 348 - goto out; 354 + return PTR_ERR(clk); 349 355 } 350 356 351 357 master = spi_alloc_master(dev, sizeof(*bs)); 352 358 if (!master) { 353 359 dev_err(dev, "out of memory\n"); 354 - ret = -ENOMEM; 355 - goto out_clk; 360 + return -ENOMEM; 356 361 } 357 362 358 363 bs = spi_master_get_devdata(master); ··· 397 408 } 398 409 399 410 /* Initialize hardware */ 400 - clk_prepare_enable(bs->clk); 411 + ret = clk_prepare_enable(bs->clk); 412 + if (ret) 413 + goto out_err; 414 + 401 415 bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS); 402 416 403 417 /* register and we are done */ ··· 419 427 clk_disable_unprepare(clk); 420 428 out_err: 421 429 spi_master_put(master); 422 - out_clk: 423 - clk_put(clk); 424 - out: 425 430 return ret; 426 431 } 427 432 ··· 432 443 433 444 /* HW shutdown */ 434 445 clk_disable_unprepare(bs->clk); 435 - clk_put(bs->clk); 436 446 437 447 return 0; 438 448 } 439 449 440 - #ifdef CONFIG_PM 450 + #ifdef CONFIG_PM_SLEEP 441 451 static int bcm63xx_spi_suspend(struct device *dev) 442 452 { 443 453 struct spi_master *master = dev_get_drvdata(dev); ··· 453 465 { 454 466 struct spi_master *master = dev_get_drvdata(dev); 455 467 struct bcm63xx_spi *bs = 
spi_master_get_devdata(master); 468 + int ret; 456 469 457 - clk_prepare_enable(bs->clk); 470 + ret = clk_prepare_enable(bs->clk); 471 + if (ret) 472 + return ret; 458 473 459 474 spi_master_resume(master); 460 475 461 476 return 0; 462 477 } 478 + #endif 463 479 464 480 static const struct dev_pm_ops bcm63xx_spi_pm_ops = { 465 - .suspend = bcm63xx_spi_suspend, 466 - .resume = bcm63xx_spi_resume, 481 + SET_SYSTEM_SLEEP_PM_OPS(bcm63xx_spi_suspend, bcm63xx_spi_resume) 467 482 }; 468 - 469 - #define BCM63XX_SPI_PM_OPS (&bcm63xx_spi_pm_ops) 470 - #else 471 - #define BCM63XX_SPI_PM_OPS NULL 472 - #endif 473 483 474 484 static struct platform_driver bcm63xx_spi_driver = { 475 485 .driver = { 476 486 .name = "bcm63xx-spi", 477 487 .owner = THIS_MODULE, 478 - .pm = BCM63XX_SPI_PM_OPS, 488 + .pm = &bcm63xx_spi_pm_ops, 479 489 }, 480 490 .probe = bcm63xx_spi_probe, 481 491 .remove = bcm63xx_spi_remove,
+1 -1
drivers/spi/spi-bitbang-txrx.h
··· 38 38 * 39 39 * Since this is software, the timings may not be exactly what your board's 40 40 * chips need ... there may be several reasons you'd need to tweak timings 41 - * in these routines, not just make to make it faster or slower to match a 41 + * in these routines, not just to make it faster or slower to match a 42 42 * particular CPU clock rate. 43 43 */ 44 44
+3 -20
drivers/spi/spi-clps711x.c
··· 1 1 /* 2 2 * CLPS711X SPI bus driver 3 3 * 4 - * Copyright (C) 2012 Alexander Shiyan <shc_work@mail.ru> 4 + * Copyright (C) 2012-2014 Alexander Shiyan <shc_work@mail.ru> 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License as published by ··· 198 198 ret = -EINVAL; 199 199 goto err_out; 200 200 } 201 - if (gpio_request(hw->chipselect[i], DRIVER_NAME)) { 201 + if (devm_gpio_request(&pdev->dev, hw->chipselect[i], NULL)) { 202 202 dev_err(&pdev->dev, "Can't get CS GPIO %i\n", i); 203 203 ret = -EINVAL; 204 204 goto err_out; ··· 240 240 dev_err(&pdev->dev, "Failed to register master\n"); 241 241 242 242 err_out: 243 - while (--i >= 0) 244 - if (gpio_is_valid(hw->chipselect[i])) 245 - gpio_free(hw->chipselect[i]); 246 - 247 243 spi_master_put(master); 248 244 249 245 return ret; 250 - } 251 - 252 - static int spi_clps711x_remove(struct platform_device *pdev) 253 - { 254 - int i; 255 - struct spi_master *master = platform_get_drvdata(pdev); 256 - struct spi_clps711x_data *hw = spi_master_get_devdata(master); 257 - 258 - for (i = 0; i < master->num_chipselect; i++) 259 - if (gpio_is_valid(hw->chipselect[i])) 260 - gpio_free(hw->chipselect[i]); 261 - 262 - return 0; 263 246 } 264 247 265 248 static struct platform_driver clps711x_spi_driver = { ··· 251 268 .owner = THIS_MODULE, 252 269 }, 253 270 .probe = spi_clps711x_probe, 254 - .remove = spi_clps711x_remove, 255 271 }; 256 272 module_platform_driver(clps711x_spi_driver); 257 273 258 274 MODULE_LICENSE("GPL"); 259 275 MODULE_AUTHOR("Alexander Shiyan <shc_work@mail.ru>"); 260 276 MODULE_DESCRIPTION("CLPS711X SPI bus driver"); 277 + MODULE_ALIAS("platform:" DRIVER_NAME);
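The clps711x changes above are a typical devm conversion: resources acquired through devm_* helpers are released automatically, in reverse registration order, when probe fails or the device goes away, which is why spi_clps711x_remove() could be deleted outright. A toy userspace model of that ownership rule (all names hypothetical, not the kernel's actual devres implementation):

```c
#include <stddef.h>

/* Toy model of the devm (managed resource) pattern: each devm_*-style
 * call appends a release action, and teardown runs the list in reverse
 * registration order, so probe() needs no manual unwind labels and
 * remove() needs no per-resource cleanup. */
#define MAX_ACTIONS 8

struct devres_list {
	void (*release[MAX_ACTIONS])(void *);
	void *data[MAX_ACTIONS];
	int count;
};

static void devres_add(struct devres_list *l, void (*rel)(void *), void *data)
{
	l->release[l->count] = rel;
	l->data[l->count] = data;
	l->count++;
}

static void devres_release_all(struct devres_list *l)
{
	/* Release in reverse order of registration, like real devres. */
	while (l->count > 0) {
		l->count--;
		l->release[l->count](l->data[l->count]);
	}
}

/* Helpers for observing the release order in the usage check below. */
static int order_log[MAX_ACTIONS];
static int order_n;

static void log_release(void *data)
{
	order_log[order_n++] = *(int *)data;
}
```

The same reasoning explains the coldfire-qspi and davinci hunks below: once IRQs, clocks, and MMIO mappings are devm-managed, most error-path labels and remove() bodies simply disappear.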
+14 -39
drivers/spi/spi-coldfire-qspi.c
···
 	mcfqspi = spi_master_get_devdata(master);

 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!res) {
-		dev_dbg(&pdev->dev, "platform_get_resource failed\n");
-		status = -ENXIO;
+	mcfqspi->iobase = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(mcfqspi->iobase)) {
+		status = PTR_ERR(mcfqspi->iobase);
 		goto fail0;
-	}
-
-	if (!request_mem_region(res->start, resource_size(res), pdev->name)) {
-		dev_dbg(&pdev->dev, "request_mem_region failed\n");
-		status = -EBUSY;
-		goto fail0;
-	}
-
-	mcfqspi->iobase = ioremap(res->start, resource_size(res));
-	if (!mcfqspi->iobase) {
-		dev_dbg(&pdev->dev, "ioremap failed\n");
-		status = -ENOMEM;
-		goto fail1;
 	}

 	mcfqspi->irq = platform_get_irq(pdev, 0);
 	if (mcfqspi->irq < 0) {
 		dev_dbg(&pdev->dev, "platform_get_irq failed\n");
 		status = -ENXIO;
-		goto fail2;
+		goto fail0;
 	}

-	status = request_irq(mcfqspi->irq, mcfqspi_irq_handler, 0,
-			pdev->name, mcfqspi);
+	status = devm_request_irq(&pdev->dev, mcfqspi->irq, mcfqspi_irq_handler,
+			0, pdev->name, mcfqspi);
 	if (status) {
 		dev_dbg(&pdev->dev, "request_irq failed\n");
-		goto fail2;
+		goto fail0;
 	}

-	mcfqspi->clk = clk_get(&pdev->dev, "qspi_clk");
+	mcfqspi->clk = devm_clk_get(&pdev->dev, "qspi_clk");
 	if (IS_ERR(mcfqspi->clk)) {
 		dev_dbg(&pdev->dev, "clk_get failed\n");
 		status = PTR_ERR(mcfqspi->clk);
-		goto fail3;
+		goto fail0;
 	}
 	clk_enable(mcfqspi->clk);
···
 	status = mcfqspi_cs_setup(mcfqspi);
 	if (status) {
 		dev_dbg(&pdev->dev, "error initializing cs_control\n");
-		goto fail4;
+		goto fail1;
 	}

 	init_waitqueue_head(&mcfqspi->waitq);
···
 	platform_set_drvdata(pdev, master);

-	status = spi_register_master(master);
+	status = devm_spi_register_master(&pdev->dev, master);
 	if (status) {
 		dev_dbg(&pdev->dev, "spi_register_master failed\n");
-		goto fail5;
+		goto fail2;
 	}
 	pm_runtime_enable(mcfqspi->dev);
···
 	return 0;

-fail5:
-	mcfqspi_cs_teardown(mcfqspi);
-fail4:
-	clk_disable(mcfqspi->clk);
-	clk_put(mcfqspi->clk);
-fail3:
-	free_irq(mcfqspi->irq, mcfqspi);
 fail2:
-	iounmap(mcfqspi->iobase);
+	mcfqspi_cs_teardown(mcfqspi);
 fail1:
-	release_mem_region(res->start, resource_size(res));
+	clk_disable(mcfqspi->clk);
 fail0:
 	spi_master_put(master);
···
 	mcfqspi_cs_teardown(mcfqspi);
 	clk_disable(mcfqspi->clk);
-	clk_put(mcfqspi->clk);
-	free_irq(mcfqspi->irq, mcfqspi);
-	iounmap(mcfqspi->iobase);
-	release_mem_region(res->start, resource_size(res));
-	spi_unregister_master(master);

 	return 0;
 }
+13 -36
drivers/spi/spi-davinci.c
···
 	dspi = spi_master_get_devdata(spi->master);
 	pdata = &dspi->pdata;

-	/* if bits per word length is zero then set it default 8 */
-	if (!spi->bits_per_word)
-		spi->bits_per_word = 8;
-
 	if (!(spi->mode & SPI_NO_CS)) {
 		if ((pdata->chip_sel == NULL) ||
 		    (pdata->chip_sel[spi->chip_select] == SPI_INTERN_CS))
···
 	struct spi_master *master;
 	struct davinci_spi *dspi;
 	struct davinci_spi_platform_data *pdata;
-	struct resource *r, *mem;
+	struct resource *r;
 	resource_size_t dma_rx_chan = SPI_NO_RESOURCE;
 	resource_size_t dma_tx_chan = SPI_NO_RESOURCE;
 	int i = 0, ret = 0;
···
 	dspi->pbase = r->start;

-	mem = request_mem_region(r->start, resource_size(r), pdev->name);
-	if (mem == NULL) {
-		ret = -EBUSY;
+	dspi->base = devm_ioremap_resource(&pdev->dev, r);
+	if (IS_ERR(dspi->base)) {
+		ret = PTR_ERR(dspi->base);
 		goto free_master;
-	}
-
-	dspi->base = ioremap(r->start, resource_size(r));
-	if (dspi->base == NULL) {
-		ret = -ENOMEM;
-		goto release_region;
 	}

 	dspi->irq = platform_get_irq(pdev, 0);
 	if (dspi->irq <= 0) {
 		ret = -EINVAL;
-		goto unmap_io;
+		goto free_master;
 	}

-	ret = request_threaded_irq(dspi->irq, davinci_spi_irq, dummy_thread_fn,
-				0, dev_name(&pdev->dev), dspi);
+	ret = devm_request_threaded_irq(&pdev->dev, dspi->irq, davinci_spi_irq,
+				dummy_thread_fn, 0, dev_name(&pdev->dev), dspi);
 	if (ret)
-		goto unmap_io;
+		goto free_master;

 	dspi->bitbang.master = master;
 	if (dspi->bitbang.master == NULL) {
 		ret = -ENODEV;
-		goto irq_free;
+		goto free_master;
 	}

-	dspi->clk = clk_get(&pdev->dev, NULL);
+	dspi->clk = devm_clk_get(&pdev->dev, NULL);
 	if (IS_ERR(dspi->clk)) {
 		ret = -ENODEV;
-		goto irq_free;
+		goto free_master;
 	}
 	clk_prepare_enable(dspi->clk);
···
 		goto free_clk;

 	dev_info(&pdev->dev, "DMA: supported\n");
-	dev_info(&pdev->dev, "DMA: RX channel: %d, TX channel: %d, "
-			"event queue: %d\n", dma_rx_chan, dma_tx_chan,
+	dev_info(&pdev->dev, "DMA: RX channel: %pa, TX channel: %pa, "
+			"event queue: %d\n", &dma_rx_chan, &dma_tx_chan,
 			pdata->dma_event_q);
 	}
···
 		dma_release_channel(dspi->dma_tx);
 free_clk:
 	clk_disable_unprepare(dspi->clk);
-	clk_put(dspi->clk);
-irq_free:
-	free_irq(dspi->irq, dspi);
-unmap_io:
-	iounmap(dspi->base);
-release_region:
-	release_mem_region(dspi->pbase, resource_size(r));
 free_master:
 	spi_master_put(master);
 err:
···
 {
 	struct davinci_spi *dspi;
 	struct spi_master *master;
-	struct resource *r;

 	master = platform_get_drvdata(pdev);
 	dspi = spi_master_get_devdata(master);
···
 	spi_bitbang_stop(&dspi->bitbang);

 	clk_disable_unprepare(dspi->clk);
-	clk_put(dspi->clk);
-	free_irq(dspi->irq, dspi);
-	iounmap(dspi->base);
-	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	release_mem_region(dspi->pbase, resource_size(r));
 	spi_master_put(master);

 	return 0;
+22 -52
drivers/spi/spi-dw-mmio.c
···
 {
 	struct dw_spi_mmio *dwsmmio;
 	struct dw_spi *dws;
-	struct resource *mem, *ioarea;
+	struct resource *mem;
 	int ret;

-	dwsmmio = kzalloc(sizeof(struct dw_spi_mmio), GFP_KERNEL);
-	if (!dwsmmio) {
-		ret = -ENOMEM;
-		goto err_end;
-	}
+	dwsmmio = devm_kzalloc(&pdev->dev, sizeof(struct dw_spi_mmio),
+			GFP_KERNEL);
+	if (!dwsmmio)
+		return -ENOMEM;

 	dws = &dwsmmio->dws;
···
 	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	if (!mem) {
 		dev_err(&pdev->dev, "no mem resource?\n");
-		ret = -EINVAL;
-		goto err_kfree;
+		return -EINVAL;
 	}

-	ioarea = request_mem_region(mem->start, resource_size(mem),
-			pdev->name);
-	if (!ioarea) {
-		dev_err(&pdev->dev, "SPI region already claimed\n");
-		ret = -EBUSY;
-		goto err_kfree;
-	}
-
-	dws->regs = ioremap_nocache(mem->start, resource_size(mem));
-	if (!dws->regs) {
-		dev_err(&pdev->dev, "SPI region already mapped\n");
-		ret = -ENOMEM;
-		goto err_release_reg;
+	dws->regs = devm_ioremap_resource(&pdev->dev, mem);
+	if (IS_ERR(dws->regs)) {
+		dev_err(&pdev->dev, "SPI region map failed\n");
+		return PTR_ERR(dws->regs);
 	}

 	dws->irq = platform_get_irq(pdev, 0);
 	if (dws->irq < 0) {
 		dev_err(&pdev->dev, "no irq resource?\n");
-		ret = dws->irq; /* -ENXIO */
-		goto err_unmap;
+		return dws->irq; /* -ENXIO */
 	}

-	dwsmmio->clk = clk_get(&pdev->dev, NULL);
-	if (IS_ERR(dwsmmio->clk)) {
-		ret = PTR_ERR(dwsmmio->clk);
-		goto err_unmap;
-	}
-	clk_enable(dwsmmio->clk);
+	dwsmmio->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(dwsmmio->clk))
+		return PTR_ERR(dwsmmio->clk);
+	ret = clk_prepare_enable(dwsmmio->clk);
+	if (ret)
+		return ret;

-	dws->parent_dev = &pdev->dev;
 	dws->bus_num = 0;
 	dws->num_cs = 4;
 	dws->max_freq = clk_get_rate(dwsmmio->clk);

-	ret = dw_spi_add_host(dws);
+	ret = dw_spi_add_host(&pdev->dev, dws);
 	if (ret)
-		goto err_clk;
+		goto out;

 	platform_set_drvdata(pdev, dwsmmio);
 	return 0;

-err_clk:
-	clk_disable(dwsmmio->clk);
-	clk_put(dwsmmio->clk);
-	dwsmmio->clk = NULL;
-err_unmap:
-	iounmap(dws->regs);
-err_release_reg:
-	release_mem_region(mem->start, resource_size(mem));
-err_kfree:
-	kfree(dwsmmio);
-err_end:
+out:
+	clk_disable_unprepare(dwsmmio->clk);
 	return ret;
 }

 static int dw_spi_mmio_remove(struct platform_device *pdev)
 {
 	struct dw_spi_mmio *dwsmmio = platform_get_drvdata(pdev);
-	struct resource *mem;

-	clk_disable(dwsmmio->clk);
-	clk_put(dwsmmio->clk);
-	dwsmmio->clk = NULL;
-
+	clk_disable_unprepare(dwsmmio->clk);
 	dw_spi_remove_host(&dwsmmio->dws);
-	iounmap(dwsmmio->dws.regs);
-	kfree(dwsmmio);

-	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	release_mem_region(mem->start, resource_size(mem));
 	return 0;
 }
+12 -35
drivers/spi/spi-dw-pci.c
···
 	dev_info(&pdev->dev, "found PCI SPI controller(ID: %04x:%04x)\n",
 		pdev->vendor, pdev->device);

-	ret = pci_enable_device(pdev);
+	ret = pcim_enable_device(pdev);
 	if (ret)
 		return ret;

-	dwpci = kzalloc(sizeof(struct dw_spi_pci), GFP_KERNEL);
-	if (!dwpci) {
-		ret = -ENOMEM;
-		goto err_disable;
-	}
+	dwpci = devm_kzalloc(&pdev->dev, sizeof(struct dw_spi_pci),
+			GFP_KERNEL);
+	if (!dwpci)
+		return -ENOMEM;

 	dwpci->pdev = pdev;
 	dws = &dwpci->dws;

 	/* Get basic io resource and map it */
 	dws->paddr = pci_resource_start(pdev, pci_bar);
-	dws->iolen = pci_resource_len(pdev, pci_bar);

-	ret = pci_request_region(pdev, pci_bar, dev_name(&pdev->dev));
+	ret = pcim_iomap_regions(pdev, 1, dev_name(&pdev->dev));
 	if (ret)
-		goto err_kfree;
+		return ret;

-	dws->regs = ioremap_nocache((unsigned long)dws->paddr,
-				pci_resource_len(pdev, pci_bar));
-	if (!dws->regs) {
-		ret = -ENOMEM;
-		goto err_release_reg;
-	}
-
-	dws->parent_dev = &pdev->dev;
 	dws->bus_num = 0;
 	dws->num_cs = 4;
 	dws->irq = pdev->irq;
···
 	if (pdev->device == 0x0800) {
 		ret = dw_spi_mid_init(dws);
 		if (ret)
-			goto err_unmap;
+			return ret;
 	}

-	ret = dw_spi_add_host(dws);
+	ret = dw_spi_add_host(&pdev->dev, dws);
 	if (ret)
-		goto err_unmap;
+		return ret;

 	/* PCI hook and SPI hook use the same drv data */
 	pci_set_drvdata(pdev, dwpci);
-	return 0;

-err_unmap:
-	iounmap(dws->regs);
-err_release_reg:
-	pci_release_region(pdev, pci_bar);
-err_kfree:
-	kfree(dwpci);
-err_disable:
-	pci_disable_device(pdev);
-	return ret;
+	return 0;
 }

 static void spi_pci_remove(struct pci_dev *pdev)
···
 	struct dw_spi_pci *dwpci = pci_get_drvdata(pdev);

 	dw_spi_remove_host(&dwpci->dws);
-	iounmap(dwpci->dws.regs);
-	pci_release_region(pdev, 0);
-	kfree(dwpci);
-	pci_disable_device(pdev);
 }

 #ifdef CONFIG_PM
···
 #define spi_resume	NULL
 #endif

-static DEFINE_PCI_DEVICE_TABLE(pci_ids) = {
+static const struct pci_device_id pci_ids[] = {
 	/* Intel MID platform SPI controller 0 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0800) },
 	{},
+9 -17
drivers/spi/spi-dw.c
···
 	dws->tx_end = dws->tx + transfer->len;
 	dws->rx = transfer->rx_buf;
 	dws->rx_end = dws->rx + transfer->len;
-	dws->cs_change = transfer->cs_change;
 	dws->len = dws->cur_transfer->len;
 	if (chip != dws->prev_chip)
 		cs_change = 1;
···
 	/* Only alloc on first setup */
 	chip = spi_get_ctldata(spi);
 	if (!chip) {
-		chip = kzalloc(sizeof(struct chip_data), GFP_KERNEL);
+		chip = devm_kzalloc(&spi->dev, sizeof(struct chip_data),
+				GFP_KERNEL);
 		if (!chip)
 			return -ENOMEM;
+		spi_set_ctldata(spi, chip);
 	}

 	/*
···
 		| (spi->mode << SPI_MODE_OFFSET)
 		| (chip->tmode << SPI_TMOD_OFFSET);

-	spi_set_ctldata(spi, chip);
 	return 0;
 }
···
 	}
 }

-int dw_spi_add_host(struct dw_spi *dws)
+int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
 {
 	struct spi_master *master;
 	int ret;

 	BUG_ON(dws == NULL);

-	master = spi_alloc_master(dws->parent_dev, 0);
-	if (!master) {
-		ret = -ENOMEM;
-		goto exit;
-	}
+	master = spi_alloc_master(dev, 0);
+	if (!master)
+		return -ENOMEM;

 	dws->master = master;
 	dws->type = SSI_MOTO_SPI;
···
 	snprintf(dws->name, sizeof(dws->name), "dw_spi%d",
 			dws->bus_num);

-	ret = request_irq(dws->irq, dw_spi_irq, IRQF_SHARED,
+	ret = devm_request_irq(dev, dws->irq, dw_spi_irq, IRQF_SHARED,
 			dws->name, dws);
 	if (ret < 0) {
 		dev_err(&master->dev, "can not get IRQ\n");
···
 	}

 	spi_master_set_devdata(master, dws);
-	ret = spi_register_master(master);
+	ret = devm_spi_register_master(dev, master);
 	if (ret) {
 		dev_err(&master->dev, "problem registering spi master\n");
 		goto err_queue_alloc;
···
 		dws->dma_ops->dma_exit(dws);
 err_diable_hw:
 	spi_enable_chip(dws, 0);
-	free_irq(dws->irq, dws);
 err_free_master:
 	spi_master_put(master);
-exit:
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dw_spi_add_host);
···
 	spi_enable_chip(dws, 0);
 	/* Disable clk */
 	spi_set_clk(dws, 0);
-	free_irq(dws->irq, dws);
-
-	/* Disconnect from the SPI framework */
-	spi_unregister_master(dws->master);
 }
 EXPORT_SYMBOL_GPL(dw_spi_remove_host);
+1 -4
drivers/spi/spi-dw.h
···
 struct dw_spi {
 	struct spi_master	*master;
 	struct spi_device	*cur_dev;
-	struct device		*parent_dev;
 	enum dw_ssi_type	type;
 	char			name[16];

 	void __iomem		*regs;
 	unsigned long		paddr;
-	u32			iolen;
 	int			irq;
 	u32			fifo_len;	/* depth of the FIFO buffer */
 	u32			max_freq;	/* max bus freq supported */
···
 	u8			n_bytes;	/* current is a 1/2 bytes op */
 	u8			max_bits_per_word;	/* maxim is 16b */
 	u32			dma_width;
-	int			cs_change;
 	irqreturn_t		(*transfer_handler)(struct dw_spi *dws);
 	void			(*cs_control)(u32 command);
···
 	void (*cs_control)(u32 command);
 };

-extern int dw_spi_add_host(struct dw_spi *dws);
+extern int dw_spi_add_host(struct device *dev, struct dw_spi *dws);
 extern void dw_spi_remove_host(struct dw_spi *dws);
 extern int dw_spi_suspend_host(struct dw_spi *dws);
 extern int dw_spi_resume_host(struct dw_spi *dws);
+1 -11
drivers/spi/spi-falcon.c
···
 	platform_set_drvdata(pdev, priv);

-	ret = spi_register_master(master);
+	ret = devm_spi_register_master(&pdev->dev, master);
 	if (ret)
 		spi_master_put(master);
 	return ret;
-}
-
-static int falcon_sflash_remove(struct platform_device *pdev)
-{
-	struct falcon_sflash *priv = platform_get_drvdata(pdev);
-
-	spi_unregister_master(priv->master);
-
-	return 0;
 }

 static const struct of_device_id falcon_sflash_match[] = {
···
 static struct platform_driver falcon_sflash_driver = {
 	.probe	= falcon_sflash_probe,
-	.remove	= falcon_sflash_remove,
 	.driver	= {
 		.name	= DRV_NAME,
 		.owner	= THIS_MODULE,
+2 -3
drivers/spi/spi-fsl-dspi.c
···
 	switch (value) {
 	case BITBANG_CS_ACTIVE:
 		pushr |= SPI_PUSHR_CONT;
+		break;
 	case BITBANG_CS_INACTIVE:
 		pushr &= ~SPI_PUSHR_CONT;
+		break;
 	}

 	writel(pushr, dspi->base + SPI_PUSHR);
···
 {
 	if (!spi->max_speed_hz)
 		return -EINVAL;
-
-	if (!spi->bits_per_word)
-		spi->bits_per_word = 8;

 	return dspi_setup_transfer(spi, NULL);
 }
+62 -1
drivers/spi/spi-fsl-espi.c
···
 		goto err;

 	irq = irq_of_parse_and_map(np, 0);
-	if (!ret) {
+	if (!irq) {
 		ret = -EINVAL;
 		goto err;
 	}
···
 	return mpc8xxx_spi_remove(&dev->dev);
 }

+#ifdef CONFIG_PM_SLEEP
+static int of_fsl_espi_suspend(struct device *dev)
+{
+	struct spi_master *master = dev_get_drvdata(dev);
+	struct mpc8xxx_spi *mpc8xxx_spi;
+	struct fsl_espi_reg *reg_base;
+	u32 regval;
+	int ret;
+
+	mpc8xxx_spi = spi_master_get_devdata(master);
+	reg_base = mpc8xxx_spi->reg_base;
+
+	ret = spi_master_suspend(master);
+	if (ret) {
+		dev_warn(dev, "cannot suspend master\n");
+		return ret;
+	}
+
+	regval = mpc8xxx_spi_read_reg(&reg_base->mode);
+	regval &= ~SPMODE_ENABLE;
+	mpc8xxx_spi_write_reg(&reg_base->mode, regval);
+
+	return 0;
+}
+
+static int of_fsl_espi_resume(struct device *dev)
+{
+	struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
+	struct spi_master *master = dev_get_drvdata(dev);
+	struct mpc8xxx_spi *mpc8xxx_spi;
+	struct fsl_espi_reg *reg_base;
+	u32 regval;
+	int i;
+
+	mpc8xxx_spi = spi_master_get_devdata(master);
+	reg_base = mpc8xxx_spi->reg_base;
+
+	/* SPI controller initializations */
+	mpc8xxx_spi_write_reg(&reg_base->mode, 0);
+	mpc8xxx_spi_write_reg(&reg_base->mask, 0);
+	mpc8xxx_spi_write_reg(&reg_base->command, 0);
+	mpc8xxx_spi_write_reg(&reg_base->event, 0xffffffff);
+
+	/* Init eSPI CS mode register */
+	for (i = 0; i < pdata->max_chipselect; i++)
+		mpc8xxx_spi_write_reg(&reg_base->csmode[i], CSMODE_INIT_VAL);
+
+	/* Enable SPI interface */
+	regval = pdata->initial_spmode | SPMODE_INIT_VAL | SPMODE_ENABLE;
+
+	mpc8xxx_spi_write_reg(&reg_base->mode, regval);
+
+	return spi_master_resume(master);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+static const struct dev_pm_ops espi_pm = {
+	SET_SYSTEM_SLEEP_PM_OPS(of_fsl_espi_suspend, of_fsl_espi_resume)
+};
+
 static const struct of_device_id of_fsl_espi_match[] = {
 	{ .compatible = "fsl,mpc8536-espi" },
 	{}
···
 		.name = "fsl_espi",
 		.owner = THIS_MODULE,
 		.of_match_table = of_fsl_espi_match,
+		.pm = &espi_pm,
 	},
 	.probe		= of_fsl_espi_probe,
 	.remove		= of_fsl_espi_remove,
+4 -4
drivers/spi/spi-gpio.c
···
 static inline void setsck(const struct spi_device *spi, int is_on)
 {
-	gpio_set_value(SPI_SCK_GPIO, is_on);
+	gpio_set_value_cansleep(SPI_SCK_GPIO, is_on);
 }

 static inline void setmosi(const struct spi_device *spi, int is_on)
 {
-	gpio_set_value(SPI_MOSI_GPIO, is_on);
+	gpio_set_value_cansleep(SPI_MOSI_GPIO, is_on);
 }

 static inline int getmiso(const struct spi_device *spi)
 {
-	return !!gpio_get_value(SPI_MISO_GPIO);
+	return !!gpio_get_value_cansleep(SPI_MISO_GPIO);
 }

 #undef pdata
···

 	if (cs != SPI_GPIO_NO_CHIPSELECT) {
 		/* SPI is normally active-low */
-		gpio_set_value(cs, (spi->mode & SPI_CS_HIGH) ? is_active : !is_active);
+		gpio_set_value_cansleep(cs, (spi->mode & SPI_CS_HIGH) ? is_active : !is_active);
 	}
 }
+25 -2
drivers/spi/spi-imx.c
···
 #define MX51_ECSPI_STAT_RR		(1 <<  3)

 /* MX51 eCSPI */
-static unsigned int mx51_ecspi_clkdiv(unsigned int fin, unsigned int fspi)
+static unsigned int mx51_ecspi_clkdiv(unsigned int fin, unsigned int fspi,
+				      unsigned int *fres)
 {
 	/*
 	 * there are two 4-bit dividers, the pre-divider divides by
···

 	pr_debug("%s: fin: %u, fspi: %u, post: %u, pre: %u\n",
 			__func__, fin, fspi, post, pre);
+
+	/* Resulting frequency for the SCLK line. */
+	*fres = (fin / (pre + 1)) >> post;
+
 	return (pre << MX51_ECSPI_CTRL_PREDIV_OFFSET) |
 		(post << MX51_ECSPI_CTRL_POSTDIV_OFFSET);
 }
···
 		struct spi_imx_config *config)
 {
 	u32 ctrl = MX51_ECSPI_CTRL_ENABLE, cfg = 0;
+	u32 clk = config->speed_hz, delay;

 	/*
 	 * The hardware seems to have a race condition when changing modes. The
···
 	ctrl |= MX51_ECSPI_CTRL_MODE_MASK;

 	/* set clock speed */
-	ctrl |= mx51_ecspi_clkdiv(spi_imx->spi_clk, config->speed_hz);
+	ctrl |= mx51_ecspi_clkdiv(spi_imx->spi_clk, config->speed_hz, &clk);

 	/* set chip select to use */
 	ctrl |= MX51_ECSPI_CTRL_CS(config->cs);
···
 	writel(ctrl, spi_imx->base + MX51_ECSPI_CTRL);
 	writel(cfg, spi_imx->base + MX51_ECSPI_CONFIG);
+
+	/*
+	 * Wait until the changes in the configuration register CONFIGREG
+	 * propagate into the hardware. It takes exactly one tick of the
+	 * SCLK clock, but we will wait two SCLK clock just to be sure. The
+	 * effect of the delay it takes for the hardware to apply changes
+	 * is noticable if the SCLK clock run very slow. In such a case, if
+	 * the polarity of SCLK should be inverted, the GPIO ChipSelect might
+	 * be asserted before the SCLK polarity changes, which would disrupt
+	 * the SPI communication as the device on the other end would consider
+	 * the change of SCLK polarity as a clock tick already.
+	 */
+	delay = (2 * 1000000) / clk;
+	if (likely(delay < 10)) /* SCLK is faster than 100 kHz */
+		udelay(delay);
+	else			/* SCLK is _very_ slow */
+		usleep_range(delay, delay + 10);

 	return 0;
 }
+5 -13
drivers/spi/spi-mpc512x-psc.c
···
 	master->cleanup = mpc512x_psc_spi_cleanup;
 	master->dev.of_node = dev->of_node;

-	tempp = ioremap(regaddr, size);
+	tempp = devm_ioremap(dev, regaddr, size);
 	if (!tempp) {
 		dev_err(dev, "could not ioremap I/O port range\n");
 		ret = -EFAULT;
···
 	mps->psc = tempp;
 	mps->fifo =
 		(struct mpc512x_psc_fifo *)(tempp + sizeof(struct mpc52xx_psc));
-
-	ret = request_irq(mps->irq, mpc512x_psc_spi_isr, IRQF_SHARED,
-			  "mpc512x-psc-spi", mps);
+	ret = devm_request_irq(dev, mps->irq, mpc512x_psc_spi_isr, IRQF_SHARED,
+				"mpc512x-psc-spi", mps);
 	if (ret)
 		goto free_master;
 	init_completion(&mps->txisrdone);
···
 	clk = devm_clk_get(dev, clk_name);
 	if (IS_ERR(clk)) {
 		ret = PTR_ERR(clk);
-		goto free_irq;
+		goto free_master;
 	}
 	ret = clk_prepare_enable(clk);
 	if (ret)
-		goto free_irq;
+		goto free_master;
 	mps->clk_mclk = clk;
 	mps->mclk_rate = clk_get_rate(clk);
···
 free_clock:
 	clk_disable_unprepare(mps->clk_mclk);
-free_irq:
-	free_irq(mps->irq, mps);
 free_master:
-	if (mps->psc)
-		iounmap(mps->psc);
 	spi_master_put(master);

 	return ret;
···
 	struct mpc512x_psc_spi *mps = spi_master_get_devdata(master);

 	clk_disable_unprepare(mps->clk_mclk);
-	free_irq(mps->irq, mps);
-	if (mps->psc)
-		iounmap(mps->psc);

 	return 0;
 }
-9
drivers/spi/spi-mxs.c
···
 	return 0;
 }

-static int mxs_spi_setup(struct spi_device *dev)
-{
-	if (!dev->bits_per_word)
-		dev->bits_per_word = 8;
-
-	return 0;
-}
-
 static u32 mxs_spi_cs_to_reg(unsigned cs)
 {
 	u32 select = 0;
···
 		return -ENOMEM;

 	master->transfer_one_message = mxs_spi_transfer_one;
-	master->setup = mxs_spi_setup;
 	master->bits_per_word_mask = SPI_BPW_MASK(8);
 	master->mode_bits = SPI_CPOL | SPI_CPHA;
 	master->num_chipselect = 3;
+10 -46
drivers/spi/spi-nuc900.c
···
 	const unsigned char	*tx;
 	unsigned char		*rx;
 	struct clk		*clk;
-	struct resource		*ioarea;
 	struct spi_master	*master;
 	struct spi_device	*curdev;
 	struct device		*dev;
···
 	master = spi_alloc_master(&pdev->dev, sizeof(struct nuc900_spi));
 	if (master == NULL) {
 		dev_err(&pdev->dev, "No memory for spi_master\n");
-		err = -ENOMEM;
-		goto err_nomem;
+		return -ENOMEM;
 	}

 	hw = spi_master_get_devdata(master);
···
 	hw->bitbang.txrx_bufs  = nuc900_spi_txrx;

 	hw->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (hw->res == NULL) {
-		dev_err(&pdev->dev, "Cannot get IORESOURCE_MEM\n");
-		err = -ENOENT;
+	hw->regs = devm_ioremap_resource(&pdev->dev, hw->res);
+	if (IS_ERR(hw->regs)) {
+		err = PTR_ERR(hw->regs);
 		goto err_pdata;
-	}
-
-	hw->ioarea = request_mem_region(hw->res->start,
-					resource_size(hw->res), pdev->name);
-
-	if (hw->ioarea == NULL) {
-		dev_err(&pdev->dev, "Cannot reserve region\n");
-		err = -ENXIO;
-		goto err_pdata;
-	}
-
-	hw->regs = ioremap(hw->res->start, resource_size(hw->res));
-	if (hw->regs == NULL) {
-		dev_err(&pdev->dev, "Cannot map IO\n");
-		err = -ENXIO;
-		goto err_iomap;
 	}

 	hw->irq = platform_get_irq(pdev, 0);
 	if (hw->irq < 0) {
 		dev_err(&pdev->dev, "No IRQ specified\n");
 		err = -ENOENT;
-		goto err_irq;
+		goto err_pdata;
 	}

-	err = request_irq(hw->irq, nuc900_spi_irq, 0, pdev->name, hw);
+	err = devm_request_irq(&pdev->dev, hw->irq, nuc900_spi_irq, 0,
+				pdev->name, hw);
 	if (err) {
 		dev_err(&pdev->dev, "Cannot claim IRQ\n");
-		goto err_irq;
+		goto err_pdata;
 	}

-	hw->clk = clk_get(&pdev->dev, "spi");
+	hw->clk = devm_clk_get(&pdev->dev, "spi");
 	if (IS_ERR(hw->clk)) {
 		dev_err(&pdev->dev, "No clock for device\n");
 		err = PTR_ERR(hw->clk);
-		goto err_clk;
+		goto err_pdata;
 	}

 	mfp_set_groupg(&pdev->dev, NULL);
···
 err_register:
 	clk_disable(hw->clk);
-	clk_put(hw->clk);
-err_clk:
-	free_irq(hw->irq, hw);
-err_irq:
-	iounmap(hw->regs);
-err_iomap:
-	release_mem_region(hw->res->start, resource_size(hw->res));
-	kfree(hw->ioarea);
 err_pdata:
 	spi_master_put(hw->master);
-err_nomem:
 	return err;
 }
···
 {
 	struct nuc900_spi *hw = platform_get_drvdata(dev);

-	free_irq(hw->irq, hw);
-
 	spi_bitbang_stop(&hw->bitbang);
-
 	clk_disable(hw->clk);
-	clk_put(hw->clk);
-
-	iounmap(hw->regs);
-
-	release_mem_region(hw->res->start, resource_size(hw->res));
-	kfree(hw->ioarea);
-
 	spi_master_put(hw->master);
 	return 0;
 }
+11 -51
drivers/spi/spi-oc-tiny.c
···
 		}

 		wait_for_completion(&hw->done);
-	} else if (txp && rxp) {
-		/* we need to tighten the transfer loop */
-		writeb(*txp++, hw->base + TINY_SPI_TXDATA);
-		if (t->len > 1) {
-			writeb(*txp++, hw->base + TINY_SPI_TXDATA);
-			for (i = 2; i < t->len; i++) {
-				u8 rx, tx = *txp++;
-				tiny_spi_wait_txr(hw);
-				rx = readb(hw->base + TINY_SPI_TXDATA);
-				writeb(tx, hw->base + TINY_SPI_TXDATA);
-				*rxp++ = rx;
-			}
-			tiny_spi_wait_txr(hw);
-			*rxp++ = readb(hw->base + TINY_SPI_TXDATA);
-		}
-		tiny_spi_wait_txe(hw);
-		*rxp++ = readb(hw->base + TINY_SPI_RXDATA);
-	} else if (rxp) {
-		writeb(0, hw->base + TINY_SPI_TXDATA);
-		if (t->len > 1) {
-			writeb(0,
-			       hw->base + TINY_SPI_TXDATA);
-			for (i = 2; i < t->len; i++) {
-				u8 rx;
-				tiny_spi_wait_txr(hw);
-				rx = readb(hw->base + TINY_SPI_TXDATA);
-				writeb(0, hw->base + TINY_SPI_TXDATA);
-				*rxp++ = rx;
-			}
-			tiny_spi_wait_txr(hw);
-			*rxp++ = readb(hw->base + TINY_SPI_TXDATA);
-		}
-		tiny_spi_wait_txe(hw);
-		*rxp++ = readb(hw->base + TINY_SPI_RXDATA);
-	} else if (txp) {
-		writeb(*txp++, hw->base + TINY_SPI_TXDATA);
-		if (t->len > 1) {
-			writeb(*txp++, hw->base + TINY_SPI_TXDATA);
-			for (i = 2; i < t->len; i++) {
-				u8 tx = *txp++;
-				tiny_spi_wait_txr(hw);
-				writeb(tx, hw->base + TINY_SPI_TXDATA);
-			}
-		}
-		tiny_spi_wait_txe(hw);
 	} else {
-		writeb(0, hw->base + TINY_SPI_TXDATA);
-		if (t->len > 1) {
-			writeb(0, hw->base + TINY_SPI_TXDATA);
-			for (i = 2; i < t->len; i++) {
+		/* we need to tighten the transfer loop */
+		writeb(txp ? *txp++ : 0, hw->base + TINY_SPI_TXDATA);
+		for (i = 1; i < t->len; i++) {
+			writeb(txp ? *txp++ : 0, hw->base + TINY_SPI_TXDATA);
+
+			if (rxp || (i != t->len - 1))
 				tiny_spi_wait_txr(hw);
-				writeb(0, hw->base + TINY_SPI_TXDATA);
-			}
+			if (rxp)
+				*rxp++ = readb(hw->base + TINY_SPI_TXDATA);
 		}
 		tiny_spi_wait_txe(hw);
+		if (rxp)
+			*rxp++ = readb(hw->base + TINY_SPI_RXDATA);
 	}
+
 	return t->len;
 }
-20
drivers/spi/spi-omap-100k.c
···
 	return status;
 }

-static int omap1_spi100k_remove(struct platform_device *pdev)
-{
-	struct spi_master *master;
-	struct omap1_spi100k *spi100k;
-	struct resource *r;
-	int status = 0;
-
-	master = platform_get_drvdata(pdev);
-	spi100k = spi_master_get_devdata(master);
-
-	if (status != 0)
-		return status;
-
-	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-
-	return 0;
-}
-
 static struct platform_driver omap1_spi100k_driver = {
 	.driver = {
 		.name		= "omap1_spi100k",
 		.owner		= THIS_MODULE,
 	},
 	.probe		= omap1_spi100k_probe,
-	.remove		= omap1_spi100k_remove,
 };

 module_platform_driver(omap1_spi100k_driver);
···
 MODULE_DESCRIPTION("OMAP7xx SPI 100k controller driver");
 MODULE_AUTHOR("Fabrice Crohas <fcrohas@gmail.com>");
 MODULE_LICENSE("GPL");
-
+19 -19
drivers/spi/spi-omap2-mcspi.c
···
 {
 	struct omap2_mcspi *mcspi = spi_master_get_devdata(master);

-	__raw_writel(val, mcspi->base + idx);
+	writel_relaxed(val, mcspi->base + idx);
 }

 static inline u32 mcspi_read_reg(struct spi_master *master, int idx)
 {
 	struct omap2_mcspi *mcspi = spi_master_get_devdata(master);

-	return __raw_readl(mcspi->base + idx);
+	return readl_relaxed(mcspi->base + idx);
 }

 static inline void mcspi_write_cs_reg(const struct spi_device *spi,
···
 {
 	struct omap2_mcspi_cs *cs = spi->controller_state;

-	__raw_writel(val, cs->base + idx);
+	writel_relaxed(val, cs->base + idx);
 }

 static inline u32 mcspi_read_cs_reg(const struct spi_device *spi, int idx)
 {
 	struct omap2_mcspi_cs *cs = spi->controller_state;

-	return __raw_readl(cs->base + idx);
+	return readl_relaxed(cs->base + idx);
 }

 static inline u32 mcspi_cached_chconf0(const struct spi_device *spi)
···
 	mcspi_write_reg(spi_cntrl, OMAP2_MCSPI_WAKEUPENABLE, ctx->wakeupenable);

 	list_for_each_entry(cs, &ctx->cs, node)
-		__raw_writel(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
+		writel_relaxed(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
 }

 static int mcspi_wait_for_reg_bit(void __iomem *reg, unsigned long bit)
···
 	unsigned long timeout;

 	timeout = jiffies + msecs_to_jiffies(1000);
-	while (!(__raw_readl(reg) & bit)) {
+	while (!(readl_relaxed(reg) & bit)) {
 		if (time_after(jiffies, timeout)) {
-			if (!(__raw_readl(reg) & bit))
+			if (!(readl_relaxed(reg) & bit))
 				return -ETIMEDOUT;
 			else
 				return 0;
···
 			}
 			dev_vdbg(&spi->dev, "write-%d %02x\n",
 					word_len, *tx);
-			__raw_writel(*tx++, tx_reg);
+			writel_relaxed(*tx++, tx_reg);
 		}
 		if (rx != NULL) {
 			if (mcspi_wait_for_reg_bit(chstat_reg,
···
 			if (c == 1 && tx == NULL &&
 			    (l & OMAP2_MCSPI_CHCONF_TURBO)) {
 				omap2_mcspi_set_enable(spi, 0);
-				*rx++ = __raw_readl(rx_reg);
+				*rx++ = readl_relaxed(rx_reg);
 				dev_vdbg(&spi->dev, "read-%d %02x\n",
 						word_len, *(rx - 1));
 				if (mcspi_wait_for_reg_bit(chstat_reg,
···
 				omap2_mcspi_set_enable(spi, 0);
 			}

-			*rx++ = __raw_readl(rx_reg);
+			*rx++ = readl_relaxed(rx_reg);
 			dev_vdbg(&spi->dev, "read-%d %02x\n",
 					word_len, *(rx - 1));
 		}
···
 			}
 			dev_vdbg(&spi->dev, "write-%d %04x\n",
 					word_len, *tx);
-			__raw_writel(*tx++, tx_reg);
+			writel_relaxed(*tx++, tx_reg);
 		}
 		if (rx != NULL) {
 			if (mcspi_wait_for_reg_bit(chstat_reg,
···
 			if (c == 2 && tx == NULL &&
 			    (l & OMAP2_MCSPI_CHCONF_TURBO)) {
 				omap2_mcspi_set_enable(spi, 0);
-				*rx++ = __raw_readl(rx_reg);
+				*rx++ = readl_relaxed(rx_reg);
 				dev_vdbg(&spi->dev, "read-%d %04x\n",
 						word_len, *(rx - 1));
 				if (mcspi_wait_for_reg_bit(chstat_reg,
···
 				omap2_mcspi_set_enable(spi, 0);
 			}

-			*rx++ = __raw_readl(rx_reg);
+			*rx++ = readl_relaxed(rx_reg);
 			dev_vdbg(&spi->dev, "read-%d %04x\n",
 					word_len, *(rx - 1));
 		}
···
 			}
 			dev_vdbg(&spi->dev, "write-%d %08x\n",
 					word_len, *tx);
-			__raw_writel(*tx++, tx_reg);
+			writel_relaxed(*tx++, tx_reg);
 		}
 		if (rx != NULL) {
 			if (mcspi_wait_for_reg_bit(chstat_reg,
···
 			if (c == 4 && tx == NULL &&
 			    (l & OMAP2_MCSPI_CHCONF_TURBO)) {
 				omap2_mcspi_set_enable(spi, 0);
-				*rx++ = __raw_readl(rx_reg);
+				*rx++ = readl_relaxed(rx_reg);
 				dev_vdbg(&spi->dev, "read-%d %08x\n",
 						word_len, *(rx - 1));
 				if (mcspi_wait_for_reg_bit(chstat_reg,
···
 				omap2_mcspi_set_enable(spi, 0);
 			}

-			*rx++ = __raw_readl(rx_reg);
+			*rx++ = readl_relaxed(rx_reg);
 			dev_vdbg(&spi->dev, "read-%d %08x\n",
 					word_len, *(rx - 1));
 		}
···
 		/* RX_ONLY mode needs dummy data in TX reg */
 		if (t->tx_buf == NULL)
-			__raw_writel(0, cs->base
+			writel_relaxed(0, cs->base
 					+ OMAP2_MCSPI_TX0);

 		if ((mcspi_dma->dma_rx && mcspi_dma->dma_tx) &&
···
 		 * change in account.
 		 */
 		cs->chconf0 |= OMAP2_MCSPI_CHCONF_FORCE;
-		__raw_writel(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
+		writel_relaxed(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
 		cs->chconf0 &= ~OMAP2_MCSPI_CHCONF_FORCE;
-		__raw_writel(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
+		writel_relaxed(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
 		}
 	}
 	pm_runtime_mark_last_busy(mcspi->dev);
+1 -3
drivers/spi/spi-orion.c
···
434 434     spi = spi_master_get_devdata(master);
435 435     spi->master = master;
436 436
437     -   spi->clk = clk_get(&pdev->dev, NULL);
    437 +  spi->clk = devm_clk_get(&pdev->dev, NULL);
438 438     if (IS_ERR(spi->clk)) {
439 439         status = PTR_ERR(spi->clk);
440 440         goto out;
···
465 465
466 466 out_rel_clk:
467 467     clk_disable_unprepare(spi->clk);
468     -   clk_put(spi->clk);
469 468 out:
470 469     spi_master_put(master);
471 470     return status;
···
480 481     spi = spi_master_get_devdata(master);
481 482
482 483     clk_disable_unprepare(spi->clk);
483     -   clk_put(spi->clk);
484 484
485 485     return 0;
486 486 }
+1 -1
drivers/spi/spi-pxa2xx-pci.c
···
 62  62     platform_device_unregister(pdev);
 63  63 }
 64  64
 65     -  static DEFINE_PCI_DEVICE_TABLE(ce4100_spi_devices) = {
     65 +  static const struct pci_device_id ce4100_spi_devices[] = {
 66  66     { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2e6a) },
 67  67     { },
 68  68 };
+3 -1
drivers/spi/spi-pxa2xx.c
···
1066 1066
1067 1067     pdata->num_chipselect = 1;
1068 1068     pdata->enable_dma = true;
     1069 +   pdata->tx_chan_id = -1;
     1070 +   pdata->rx_chan_id = -1;
1069 1071     return pdata;
1071 1073 }
···
1268 1266         dev_err(&pdev->dev, "shutdown failed with %d\n", status);
1269 1267 }
1270 1268
1271      - #ifdef CONFIG_PM
     1269 + #ifdef CONFIG_PM_SLEEP
1272 1270 static int pxa2xx_spi_suspend(struct device *dev)
1273 1271 {
1274 1272     struct driver_data *drv_data = dev_get_drvdata(dev);
+187 -160
drivers/spi/spi-rspi.c
··· 37 37 #include <linux/spi/spi.h> 38 38 #include <linux/spi/rspi.h> 39 39 40 - #define RSPI_SPCR 0x00 41 - #define RSPI_SSLP 0x01 42 - #define RSPI_SPPCR 0x02 43 - #define RSPI_SPSR 0x03 44 - #define RSPI_SPDR 0x04 45 - #define RSPI_SPSCR 0x08 46 - #define RSPI_SPSSR 0x09 47 - #define RSPI_SPBR 0x0a 48 - #define RSPI_SPDCR 0x0b 49 - #define RSPI_SPCKD 0x0c 50 - #define RSPI_SSLND 0x0d 51 - #define RSPI_SPND 0x0e 52 - #define RSPI_SPCR2 0x0f 53 - #define RSPI_SPCMD0 0x10 54 - #define RSPI_SPCMD1 0x12 55 - #define RSPI_SPCMD2 0x14 56 - #define RSPI_SPCMD3 0x16 57 - #define RSPI_SPCMD4 0x18 58 - #define RSPI_SPCMD5 0x1a 59 - #define RSPI_SPCMD6 0x1c 60 - #define RSPI_SPCMD7 0x1e 40 + #define RSPI_SPCR 0x00 /* Control Register */ 41 + #define RSPI_SSLP 0x01 /* Slave Select Polarity Register */ 42 + #define RSPI_SPPCR 0x02 /* Pin Control Register */ 43 + #define RSPI_SPSR 0x03 /* Status Register */ 44 + #define RSPI_SPDR 0x04 /* Data Register */ 45 + #define RSPI_SPSCR 0x08 /* Sequence Control Register */ 46 + #define RSPI_SPSSR 0x09 /* Sequence Status Register */ 47 + #define RSPI_SPBR 0x0a /* Bit Rate Register */ 48 + #define RSPI_SPDCR 0x0b /* Data Control Register */ 49 + #define RSPI_SPCKD 0x0c /* Clock Delay Register */ 50 + #define RSPI_SSLND 0x0d /* Slave Select Negation Delay Register */ 51 + #define RSPI_SPND 0x0e /* Next-Access Delay Register */ 52 + #define RSPI_SPCR2 0x0f /* Control Register 2 */ 53 + #define RSPI_SPCMD0 0x10 /* Command Register 0 */ 54 + #define RSPI_SPCMD1 0x12 /* Command Register 1 */ 55 + #define RSPI_SPCMD2 0x14 /* Command Register 2 */ 56 + #define RSPI_SPCMD3 0x16 /* Command Register 3 */ 57 + #define RSPI_SPCMD4 0x18 /* Command Register 4 */ 58 + #define RSPI_SPCMD5 0x1a /* Command Register 5 */ 59 + #define RSPI_SPCMD6 0x1c /* Command Register 6 */ 60 + #define RSPI_SPCMD7 0x1e /* Command Register 7 */ 61 + #define RSPI_SPBFCR 0x20 /* Buffer Control Register */ 62 + #define RSPI_SPBFDR 0x22 /* Buffer Data Count Setting Register 
*/ 61 63 62 64 /*qspi only */ 63 - #define QSPI_SPBFCR 0x18 64 - #define QSPI_SPBDCR 0x1a 65 - #define QSPI_SPBMUL0 0x1c 66 - #define QSPI_SPBMUL1 0x20 67 - #define QSPI_SPBMUL2 0x24 68 - #define QSPI_SPBMUL3 0x28 65 + #define QSPI_SPBFCR 0x18 /* Buffer Control Register */ 66 + #define QSPI_SPBDCR 0x1a /* Buffer Data Count Register */ 67 + #define QSPI_SPBMUL0 0x1c /* Transfer Data Length Multiplier Setting Register 0 */ 68 + #define QSPI_SPBMUL1 0x20 /* Transfer Data Length Multiplier Setting Register 1 */ 69 + #define QSPI_SPBMUL2 0x24 /* Transfer Data Length Multiplier Setting Register 2 */ 70 + #define QSPI_SPBMUL3 0x28 /* Transfer Data Length Multiplier Setting Register 3 */ 69 71 70 - /* SPCR */ 71 - #define SPCR_SPRIE 0x80 72 - #define SPCR_SPE 0x40 73 - #define SPCR_SPTIE 0x20 74 - #define SPCR_SPEIE 0x10 75 - #define SPCR_MSTR 0x08 76 - #define SPCR_MODFEN 0x04 77 - #define SPCR_TXMD 0x02 78 - #define SPCR_SPMS 0x01 72 + /* SPCR - Control Register */ 73 + #define SPCR_SPRIE 0x80 /* Receive Interrupt Enable */ 74 + #define SPCR_SPE 0x40 /* Function Enable */ 75 + #define SPCR_SPTIE 0x20 /* Transmit Interrupt Enable */ 76 + #define SPCR_SPEIE 0x10 /* Error Interrupt Enable */ 77 + #define SPCR_MSTR 0x08 /* Master/Slave Mode Select */ 78 + #define SPCR_MODFEN 0x04 /* Mode Fault Error Detection Enable */ 79 + /* RSPI on SH only */ 80 + #define SPCR_TXMD 0x02 /* TX Only Mode (vs. Full Duplex) */ 81 + #define SPCR_SPMS 0x01 /* 3-wire Mode (vs. 
4-wire) */ 82 + /* QSPI on R-Car M2 only */ 83 + #define SPCR_WSWAP 0x02 /* Word Swap of read-data for DMAC */ 84 + #define SPCR_BSWAP 0x01 /* Byte Swap of read-data for DMAC */ 79 85 80 - /* SSLP */ 81 - #define SSLP_SSL1P 0x02 82 - #define SSLP_SSL0P 0x01 86 + /* SSLP - Slave Select Polarity Register */ 87 + #define SSLP_SSL1P 0x02 /* SSL1 Signal Polarity Setting */ 88 + #define SSLP_SSL0P 0x01 /* SSL0 Signal Polarity Setting */ 83 89 84 - /* SPPCR */ 85 - #define SPPCR_MOIFE 0x20 86 - #define SPPCR_MOIFV 0x10 90 + /* SPPCR - Pin Control Register */ 91 + #define SPPCR_MOIFE 0x20 /* MOSI Idle Value Fixing Enable */ 92 + #define SPPCR_MOIFV 0x10 /* MOSI Idle Fixed Value */ 87 93 #define SPPCR_SPOM 0x04 88 - #define SPPCR_SPLP2 0x02 89 - #define SPPCR_SPLP 0x01 94 + #define SPPCR_SPLP2 0x02 /* Loopback Mode 2 (non-inverting) */ 95 + #define SPPCR_SPLP 0x01 /* Loopback Mode (inverting) */ 90 96 91 - /* SPSR */ 92 - #define SPSR_SPRF 0x80 93 - #define SPSR_SPTEF 0x20 94 - #define SPSR_PERF 0x08 95 - #define SPSR_MODF 0x04 96 - #define SPSR_IDLNF 0x02 97 - #define SPSR_OVRF 0x01 97 + #define SPPCR_IO3FV 0x04 /* Single-/Dual-SPI Mode IO3 Output Fixed Value */ 98 + #define SPPCR_IO2FV 0x04 /* Single-/Dual-SPI Mode IO2 Output Fixed Value */ 98 99 99 - /* SPSCR */ 100 - #define SPSCR_SPSLN_MASK 0x07 100 + /* SPSR - Status Register */ 101 + #define SPSR_SPRF 0x80 /* Receive Buffer Full Flag */ 102 + #define SPSR_TEND 0x40 /* Transmit End */ 103 + #define SPSR_SPTEF 0x20 /* Transmit Buffer Empty Flag */ 104 + #define SPSR_PERF 0x08 /* Parity Error Flag */ 105 + #define SPSR_MODF 0x04 /* Mode Fault Error Flag */ 106 + #define SPSR_IDLNF 0x02 /* RSPI Idle Flag */ 107 + #define SPSR_OVRF 0x01 /* Overrun Error Flag */ 101 108 102 - /* SPSSR */ 103 - #define SPSSR_SPECM_MASK 0x70 104 - #define SPSSR_SPCP_MASK 0x07 109 + /* SPSCR - Sequence Control Register */ 110 + #define SPSCR_SPSLN_MASK 0x07 /* Sequence Length Specification */ 105 111 106 - /* SPDCR */ 107 - #define SPDCR_SPLW 
0x20 108 - #define SPDCR_SPRDTD 0x10 112 + /* SPSSR - Sequence Status Register */ 113 + #define SPSSR_SPECM_MASK 0x70 /* Command Error Mask */ 114 + #define SPSSR_SPCP_MASK 0x07 /* Command Pointer Mask */ 115 + 116 + /* SPDCR - Data Control Register */ 117 + #define SPDCR_TXDMY 0x80 /* Dummy Data Transmission Enable */ 118 + #define SPDCR_SPLW1 0x40 /* Access Width Specification (RZ) */ 119 + #define SPDCR_SPLW0 0x20 /* Access Width Specification (RZ) */ 120 + #define SPDCR_SPLLWORD (SPDCR_SPLW1 | SPDCR_SPLW0) 121 + #define SPDCR_SPLWORD SPDCR_SPLW1 122 + #define SPDCR_SPLBYTE SPDCR_SPLW0 123 + #define SPDCR_SPLW 0x20 /* Access Width Specification (SH) */ 124 + #define SPDCR_SPRDTD 0x10 /* Receive Transmit Data Select */ 109 125 #define SPDCR_SLSEL1 0x08 110 126 #define SPDCR_SLSEL0 0x04 111 - #define SPDCR_SLSEL_MASK 0x0c 127 + #define SPDCR_SLSEL_MASK 0x0c /* SSL1 Output Select */ 112 128 #define SPDCR_SPFC1 0x02 113 129 #define SPDCR_SPFC0 0x01 130 + #define SPDCR_SPFC_MASK 0x03 /* Frame Count Setting (1-4) */ 114 131 115 - /* SPCKD */ 116 - #define SPCKD_SCKDL_MASK 0x07 132 + /* SPCKD - Clock Delay Register */ 133 + #define SPCKD_SCKDL_MASK 0x07 /* Clock Delay Setting (1-8) */ 117 134 118 - /* SSLND */ 119 - #define SSLND_SLNDL_MASK 0x07 135 + /* SSLND - Slave Select Negation Delay Register */ 136 + #define SSLND_SLNDL_MASK 0x07 /* SSL Negation Delay Setting (1-8) */ 120 137 121 - /* SPND */ 122 - #define SPND_SPNDL_MASK 0x07 138 + /* SPND - Next-Access Delay Register */ 139 + #define SPND_SPNDL_MASK 0x07 /* Next-Access Delay Setting (1-8) */ 123 140 124 - /* SPCR2 */ 125 - #define SPCR2_PTE 0x08 126 - #define SPCR2_SPIE 0x04 127 - #define SPCR2_SPOE 0x02 128 - #define SPCR2_SPPE 0x01 141 + /* SPCR2 - Control Register 2 */ 142 + #define SPCR2_PTE 0x08 /* Parity Self-Test Enable */ 143 + #define SPCR2_SPIE 0x04 /* Idle Interrupt Enable */ 144 + #define SPCR2_SPOE 0x02 /* Odd Parity Enable (vs. 
Even) */ 145 + #define SPCR2_SPPE 0x01 /* Parity Enable */ 129 146 130 - /* SPCMDn */ 131 - #define SPCMD_SCKDEN 0x8000 132 - #define SPCMD_SLNDEN 0x4000 133 - #define SPCMD_SPNDEN 0x2000 134 - #define SPCMD_LSBF 0x1000 135 - #define SPCMD_SPB_MASK 0x0f00 147 + /* SPCMDn - Command Registers */ 148 + #define SPCMD_SCKDEN 0x8000 /* Clock Delay Setting Enable */ 149 + #define SPCMD_SLNDEN 0x4000 /* SSL Negation Delay Setting Enable */ 150 + #define SPCMD_SPNDEN 0x2000 /* Next-Access Delay Enable */ 151 + #define SPCMD_LSBF 0x1000 /* LSB First */ 152 + #define SPCMD_SPB_MASK 0x0f00 /* Data Length Setting */ 136 153 #define SPCMD_SPB_8_TO_16(bit) (((bit - 1) << 8) & SPCMD_SPB_MASK) 137 154 #define SPCMD_SPB_8BIT 0x0000 /* qspi only */ 138 155 #define SPCMD_SPB_16BIT 0x0100 139 156 #define SPCMD_SPB_20BIT 0x0000 140 157 #define SPCMD_SPB_24BIT 0x0100 141 158 #define SPCMD_SPB_32BIT 0x0200 142 - #define SPCMD_SSLKP 0x0080 143 - #define SPCMD_SSLA_MASK 0x0030 144 - #define SPCMD_BRDV_MASK 0x000c 145 - #define SPCMD_CPOL 0x0002 146 - #define SPCMD_CPHA 0x0001 159 + #define SPCMD_SSLKP 0x0080 /* SSL Signal Level Keeping */ 160 + #define SPCMD_SPIMOD_MASK 0x0060 /* SPI Operating Mode (QSPI only) */ 161 + #define SPCMD_SPIMOD1 0x0040 162 + #define SPCMD_SPIMOD0 0x0020 163 + #define SPCMD_SPIMOD_SINGLE 0 164 + #define SPCMD_SPIMOD_DUAL SPCMD_SPIMOD0 165 + #define SPCMD_SPIMOD_QUAD SPCMD_SPIMOD1 166 + #define SPCMD_SPRW 0x0010 /* SPI Read/Write Access (Dual/Quad) */ 167 + #define SPCMD_SSLA_MASK 0x0030 /* SSL Assert Signal Setting (RSPI) */ 168 + #define SPCMD_BRDV_MASK 0x000c /* Bit Rate Division Setting */ 169 + #define SPCMD_CPOL 0x0002 /* Clock Polarity Setting */ 170 + #define SPCMD_CPHA 0x0001 /* Clock Phase Setting */ 147 171 148 - /* SPBFCR */ 149 - #define SPBFCR_TXRST 0x80 /* qspi only */ 150 - #define SPBFCR_RXRST 0x40 /* qspi only */ 172 + /* SPBFCR - Buffer Control Register */ 173 + #define SPBFCR_TXRST 0x80 /* Transmit Buffer Data Reset (qspi only) */ 174 + #define 
SPBFCR_RXRST 0x40 /* Receive Buffer Data Reset (qspi only) */ 175 + #define SPBFCR_TXTRG_MASK 0x30 /* Transmit Buffer Data Triggering Number */ 176 + #define SPBFCR_RXTRG_MASK 0x07 /* Receive Buffer Data Triggering Number */ 177 + 178 + #define DUMMY_DATA 0x00 151 179 152 180 struct rspi_data { 153 181 void __iomem *addr; ··· 186 158 wait_queue_head_t wait; 187 159 spinlock_t lock; 188 160 struct clk *clk; 189 - unsigned char spsr; 161 + u8 spsr; 162 + u16 spcmd; 190 163 const struct spi_ops *ops; 191 164 192 165 /* for dmaengine */ ··· 199 170 unsigned dma_callbacked:1; 200 171 }; 201 172 202 - static void rspi_write8(struct rspi_data *rspi, u8 data, u16 offset) 173 + static void rspi_write8(const struct rspi_data *rspi, u8 data, u16 offset) 203 174 { 204 175 iowrite8(data, rspi->addr + offset); 205 176 } 206 177 207 - static void rspi_write16(struct rspi_data *rspi, u16 data, u16 offset) 178 + static void rspi_write16(const struct rspi_data *rspi, u16 data, u16 offset) 208 179 { 209 180 iowrite16(data, rspi->addr + offset); 210 181 } 211 182 212 - static void rspi_write32(struct rspi_data *rspi, u32 data, u16 offset) 183 + static void rspi_write32(const struct rspi_data *rspi, u32 data, u16 offset) 213 184 { 214 185 iowrite32(data, rspi->addr + offset); 215 186 } 216 187 217 - static u8 rspi_read8(struct rspi_data *rspi, u16 offset) 188 + static u8 rspi_read8(const struct rspi_data *rspi, u16 offset) 218 189 { 219 190 return ioread8(rspi->addr + offset); 220 191 } 221 192 222 - static u16 rspi_read16(struct rspi_data *rspi, u16 offset) 193 + static u16 rspi_read16(const struct rspi_data *rspi, u16 offset) 223 194 { 224 195 return ioread16(rspi->addr + offset); 225 196 } 226 197 227 198 /* optional functions */ 228 199 struct spi_ops { 229 - int (*set_config_register)(struct rspi_data *rspi, int access_size); 200 + int (*set_config_register)(const struct rspi_data *rspi, 201 + int access_size); 230 202 int (*send_pio)(struct rspi_data *rspi, struct spi_message 
*mesg, 231 203 struct spi_transfer *t); 232 204 int (*receive_pio)(struct rspi_data *rspi, struct spi_message *mesg, ··· 238 208 /* 239 209 * functions for RSPI 240 210 */ 241 - static int rspi_set_config_register(struct rspi_data *rspi, int access_size) 211 + static int rspi_set_config_register(const struct rspi_data *rspi, 212 + int access_size) 242 213 { 243 214 int spbr; 244 215 ··· 262 231 rspi_write8(rspi, 0x00, RSPI_SPCR2); 263 232 264 233 /* Sets SPCMD */ 265 - rspi_write16(rspi, SPCMD_SPB_8_TO_16(access_size) | SPCMD_SSLKP, 234 + rspi_write16(rspi, SPCMD_SPB_8_TO_16(access_size) | rspi->spcmd, 266 235 RSPI_SPCMD0); 267 236 268 237 /* Sets RSPI mode */ ··· 274 243 /* 275 244 * functions for QSPI 276 245 */ 277 - static int qspi_set_config_register(struct rspi_data *rspi, int access_size) 246 + static int qspi_set_config_register(const struct rspi_data *rspi, 247 + int access_size) 278 248 { 279 249 u16 spcmd; 280 250 int spbr; ··· 300 268 spcmd = SPCMD_SPB_8BIT; 301 269 else if (access_size == 16) 302 270 spcmd = SPCMD_SPB_16BIT; 303 - else if (access_size == 32) 271 + else 304 272 spcmd = SPCMD_SPB_32BIT; 305 273 306 - spcmd |= SPCMD_SCKDEN | SPCMD_SLNDEN | SPCMD_SSLKP | SPCMD_SPNDEN; 274 + spcmd |= SPCMD_SCKDEN | SPCMD_SLNDEN | rspi->spcmd | SPCMD_SPNDEN; 307 275 308 276 /* Resets transfer data length */ 309 277 rspi_write32(rspi, 0, QSPI_SPBMUL0); ··· 324 292 325 293 #define set_config_register(spi, n) spi->ops->set_config_register(spi, n) 326 294 327 - static void rspi_enable_irq(struct rspi_data *rspi, u8 enable) 295 + static void rspi_enable_irq(const struct rspi_data *rspi, u8 enable) 328 296 { 329 297 rspi_write8(rspi, rspi_read8(rspi, RSPI_SPCR) | enable, RSPI_SPCR); 330 298 } 331 299 332 - static void rspi_disable_irq(struct rspi_data *rspi, u8 disable) 300 + static void rspi_disable_irq(const struct rspi_data *rspi, u8 disable) 333 301 { 334 302 rspi_write8(rspi, rspi_read8(rspi, RSPI_SPCR) & ~disable, RSPI_SPCR); 335 303 } ··· 348 316 return 0; 
349 317 } 350 318 351 - static void rspi_assert_ssl(struct rspi_data *rspi) 319 + static void rspi_assert_ssl(const struct rspi_data *rspi) 352 320 { 353 321 rspi_write8(rspi, rspi_read8(rspi, RSPI_SPCR) | SPCR_SPE, RSPI_SPCR); 354 322 } 355 323 356 - static void rspi_negate_ssl(struct rspi_data *rspi) 324 + static void rspi_negate_ssl(const struct rspi_data *rspi) 357 325 { 358 326 rspi_write8(rspi, rspi_read8(rspi, RSPI_SPCR) & ~SPCR_SPE, RSPI_SPCR); 359 327 } ··· 362 330 struct spi_transfer *t) 363 331 { 364 332 int remain = t->len; 365 - u8 *data; 366 - 367 - data = (u8 *)t->tx_buf; 333 + const u8 *data = t->tx_buf; 368 334 while (remain > 0) { 369 335 rspi_write8(rspi, rspi_read8(rspi, RSPI_SPCR) | SPCR_TXMD, 370 336 RSPI_SPCR); ··· 378 348 remain--; 379 349 } 380 350 381 - /* Waiting for the last transmition */ 351 + /* Waiting for the last transmission */ 382 352 rspi_wait_for_interrupt(rspi, SPSR_SPTEF, SPCR_SPTIE); 383 353 384 354 return 0; ··· 388 358 struct spi_transfer *t) 389 359 { 390 360 int remain = t->len; 391 - u8 *data; 361 + const u8 *data = t->tx_buf; 392 362 393 363 rspi_write8(rspi, SPBFCR_TXRST, QSPI_SPBFCR); 394 364 rspi_write8(rspi, 0x00, QSPI_SPBFCR); 395 365 396 - data = (u8 *)t->tx_buf; 397 366 while (remain > 0) { 398 367 399 368 if (rspi_wait_for_interrupt(rspi, SPSR_SPTEF, SPCR_SPTIE) < 0) { ··· 412 383 remain--; 413 384 } 414 385 415 - /* Waiting for the last transmition */ 386 + /* Waiting for the last transmission */ 416 387 rspi_wait_for_interrupt(rspi, SPSR_SPTEF, SPCR_SPTIE); 417 388 418 389 return 0; ··· 428 399 wake_up_interruptible(&rspi->wait); 429 400 } 430 401 431 - static int rspi_dma_map_sg(struct scatterlist *sg, void *buf, unsigned len, 432 - struct dma_chan *chan, 402 + static int rspi_dma_map_sg(struct scatterlist *sg, const void *buf, 403 + unsigned len, struct dma_chan *chan, 433 404 enum dma_transfer_direction dir) 434 405 { 435 406 sg_init_table(sg, 1); ··· 469 440 static int rspi_send_dma(struct rspi_data 
*rspi, struct spi_transfer *t) 470 441 { 471 442 struct scatterlist sg; 472 - void *buf = NULL; 443 + const void *buf = NULL; 473 444 struct dma_async_tx_descriptor *desc; 474 445 unsigned len; 475 446 int ret = 0; 476 447 477 448 if (rspi->dma_width_16bit) { 449 + void *tmp; 478 450 /* 479 451 * If DMAC bus width is 16-bit, the driver allocates a dummy 480 452 * buffer. And, the driver converts original data into the ··· 484 454 * DMAC data: 1st byte, dummy, 2nd byte, dummy ... 485 455 */ 486 456 len = t->len * 2; 487 - buf = kmalloc(len, GFP_KERNEL); 488 - if (!buf) 457 + tmp = kmalloc(len, GFP_KERNEL); 458 + if (!tmp) 489 459 return -ENOMEM; 490 - rspi_memory_to_8bit(buf, t->tx_buf, t->len); 460 + rspi_memory_to_8bit(tmp, t->tx_buf, t->len); 461 + buf = tmp; 491 462 } else { 492 463 len = t->len; 493 - buf = (void *)t->tx_buf; 464 + buf = t->tx_buf; 494 465 } 495 466 496 467 if (!rspi_dma_map_sg(&sg, buf, len, rspi->chan_tx, DMA_TO_DEVICE)) { ··· 539 508 return ret; 540 509 } 541 510 542 - static void rspi_receive_init(struct rspi_data *rspi) 511 + static void rspi_receive_init(const struct rspi_data *rspi) 543 512 { 544 - unsigned char spsr; 513 + u8 spsr; 545 514 546 515 spsr = rspi_read8(rspi, RSPI_SPSR); 547 516 if (spsr & SPSR_SPRF) 548 517 rspi_read16(rspi, RSPI_SPDR); /* dummy read */ 549 518 if (spsr & SPSR_OVRF) 550 519 rspi_write8(rspi, rspi_read8(rspi, RSPI_SPSR) & ~SPSR_OVRF, 551 - RSPI_SPCR); 520 + RSPI_SPSR); 552 521 } 553 522 554 523 static int rspi_receive_pio(struct rspi_data *rspi, struct spi_message *mesg, ··· 559 528 560 529 rspi_receive_init(rspi); 561 530 562 - data = (u8 *)t->rx_buf; 531 + data = t->rx_buf; 563 532 while (remain > 0) { 564 533 rspi_write8(rspi, rspi_read8(rspi, RSPI_SPCR) & ~SPCR_TXMD, 565 534 RSPI_SPCR); ··· 570 539 return -ETIMEDOUT; 571 540 } 572 541 /* dummy write for generate clock */ 573 - rspi_write16(rspi, 0x00, RSPI_SPDR); 542 + rspi_write16(rspi, DUMMY_DATA, RSPI_SPDR); 574 543 575 544 if 
(rspi_wait_for_interrupt(rspi, SPSR_SPRF, SPCR_SPRIE) < 0) { 576 545 dev_err(&rspi->master->dev, ··· 587 556 return 0; 588 557 } 589 558 590 - static void qspi_receive_init(struct rspi_data *rspi) 559 + static void qspi_receive_init(const struct rspi_data *rspi) 591 560 { 592 - unsigned char spsr; 561 + u8 spsr; 593 562 594 563 spsr = rspi_read8(rspi, RSPI_SPSR); 595 564 if (spsr & SPSR_SPRF) ··· 606 575 607 576 qspi_receive_init(rspi); 608 577 609 - data = (u8 *)t->rx_buf; 578 + data = t->rx_buf; 610 579 while (remain > 0) { 611 580 612 581 if (rspi_wait_for_interrupt(rspi, SPSR_SPTEF, SPCR_SPTIE) < 0) { ··· 615 584 return -ETIMEDOUT; 616 585 } 617 586 /* dummy write for generate clock */ 618 - rspi_write8(rspi, 0x00, RSPI_SPDR); 587 + rspi_write8(rspi, DUMMY_DATA, RSPI_SPDR); 619 588 620 589 if (rspi_wait_for_interrupt(rspi, SPSR_SPRF, SPCR_SPRIE) < 0) { 621 590 dev_err(&rspi->master->dev, ··· 735 704 return ret; 736 705 } 737 706 738 - static int rspi_is_dma(struct rspi_data *rspi, struct spi_transfer *t) 707 + static int rspi_is_dma(const struct rspi_data *rspi, struct spi_transfer *t) 739 708 { 740 709 if (t->tx_buf && rspi->chan_tx) 741 710 return 1; ··· 802 771 { 803 772 struct rspi_data *rspi = spi_master_get_devdata(spi->master); 804 773 805 - if (!spi->bits_per_word) 806 - spi->bits_per_word = 8; 807 774 rspi->max_speed_hz = spi->max_speed_hz; 775 + 776 + rspi->spcmd = SPCMD_SSLKP; 777 + if (spi->mode & SPI_CPOL) 778 + rspi->spcmd |= SPCMD_CPOL; 779 + if (spi->mode & SPI_CPHA) 780 + rspi->spcmd |= SPCMD_CPHA; 808 781 809 782 set_config_register(rspi, 8); 810 783 ··· 837 802 838 803 static irqreturn_t rspi_irq(int irq, void *_sr) 839 804 { 840 - struct rspi_data *rspi = (struct rspi_data *)_sr; 841 - unsigned long spsr; 805 + struct rspi_data *rspi = _sr; 806 + u8 spsr; 842 807 irqreturn_t ret = IRQ_NONE; 843 - unsigned char disable_irq = 0; 808 + u8 disable_irq = 0; 844 809 845 810 rspi->spsr = spsr = rspi_read8(rspi, RSPI_SPSR); 846 811 if (spsr & 
SPSR_SPRF) ··· 860 825 static int rspi_request_dma(struct rspi_data *rspi, 861 826 struct platform_device *pdev) 862 827 { 863 - struct rspi_plat_data *rspi_pd = dev_get_platdata(&pdev->dev); 828 + const struct rspi_plat_data *rspi_pd = dev_get_platdata(&pdev->dev); 864 829 struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 865 830 dma_cap_mask_t mask; 866 831 struct dma_slave_config cfg; ··· 922 887 { 923 888 struct rspi_data *rspi = platform_get_drvdata(pdev); 924 889 925 - spi_unregister_master(rspi->master); 926 890 rspi_release_dma(rspi); 927 - free_irq(platform_get_irq(pdev, 0), rspi); 928 - clk_put(rspi->clk); 929 - iounmap(rspi->addr); 891 + clk_disable(rspi->clk); 930 892 931 893 return 0; 932 894 } ··· 935 903 struct rspi_data *rspi; 936 904 int ret, irq; 937 905 char clk_name[16]; 938 - struct rspi_plat_data *rspi_pd = pdev->dev.platform_data; 906 + const struct rspi_plat_data *rspi_pd = dev_get_platdata(&pdev->dev); 939 907 const struct spi_ops *ops; 940 908 const struct platform_device_id *id_entry = pdev->id_entry; 941 909 ··· 944 912 if (!ops->set_config_register) { 945 913 dev_err(&pdev->dev, "there is no set_config_register\n"); 946 914 return -ENODEV; 947 - } 948 - /* get base addr */ 949 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 950 - if (unlikely(res == NULL)) { 951 - dev_err(&pdev->dev, "invalid resource\n"); 952 - return -EINVAL; 953 915 } 954 916 955 917 irq = platform_get_irq(pdev, 0); ··· 962 936 platform_set_drvdata(pdev, rspi); 963 937 rspi->ops = ops; 964 938 rspi->master = master; 965 - rspi->addr = ioremap(res->start, resource_size(res)); 966 - if (rspi->addr == NULL) { 967 - dev_err(&pdev->dev, "ioremap error.\n"); 968 - ret = -ENOMEM; 939 + 940 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 941 + rspi->addr = devm_ioremap_resource(&pdev->dev, res); 942 + if (IS_ERR(rspi->addr)) { 943 + ret = PTR_ERR(rspi->addr); 969 944 goto error1; 970 945 } 971 946 972 947 snprintf(clk_name, 
sizeof(clk_name), "%s%d", id_entry->name, pdev->id); 973 - rspi->clk = clk_get(&pdev->dev, clk_name); 948 + rspi->clk = devm_clk_get(&pdev->dev, clk_name); 974 949 if (IS_ERR(rspi->clk)) { 975 950 dev_err(&pdev->dev, "cannot get clock\n"); 976 951 ret = PTR_ERR(rspi->clk); 977 - goto error2; 952 + goto error1; 978 953 } 979 954 clk_enable(rspi->clk); 980 955 ··· 984 957 INIT_WORK(&rspi->ws, rspi_work); 985 958 init_waitqueue_head(&rspi->wait); 986 959 987 - master->num_chipselect = rspi_pd->num_chipselect; 988 - if (!master->num_chipselect) 960 + if (rspi_pd && rspi_pd->num_chipselect) 961 + master->num_chipselect = rspi_pd->num_chipselect; 962 + else 989 963 master->num_chipselect = 2; /* default */ 990 964 991 965 master->bus_num = pdev->id; 992 966 master->setup = rspi_setup; 993 967 master->transfer = rspi_transfer; 994 968 master->cleanup = rspi_cleanup; 969 + master->mode_bits = SPI_CPHA | SPI_CPOL; 995 970 996 - ret = request_irq(irq, rspi_irq, 0, dev_name(&pdev->dev), rspi); 971 + ret = devm_request_irq(&pdev->dev, irq, rspi_irq, 0, 972 + dev_name(&pdev->dev), rspi); 997 973 if (ret < 0) { 998 974 dev_err(&pdev->dev, "request_irq error\n"); 999 - goto error3; 975 + goto error2; 1000 976 } 1001 977 1002 978 rspi->irq = irq; 1003 979 ret = rspi_request_dma(rspi, pdev); 1004 980 if (ret < 0) { 1005 981 dev_err(&pdev->dev, "rspi_request_dma failed.\n"); 1006 - goto error4; 982 + goto error3; 1007 983 } 1008 984 1009 - ret = spi_register_master(master); 985 + ret = devm_spi_register_master(&pdev->dev, master); 1010 986 if (ret < 0) { 1011 987 dev_err(&pdev->dev, "spi_register_master error.\n"); 1012 - goto error4; 988 + goto error3; 1013 989 } 1014 990 1015 991 dev_info(&pdev->dev, "probed\n"); 1016 992 1017 993 return 0; 1018 994 1019 - error4: 1020 - rspi_release_dma(rspi); 1021 - free_irq(irq, rspi); 1022 995 error3: 1023 - clk_put(rspi->clk); 996 + rspi_release_dma(rspi); 1024 997 error2: 1025 - iounmap(rspi->addr); 998 + clk_disable(rspi->clk); 1026 999 
error1: 1027 1000 spi_master_put(master); 1028 1001
+13 -61
drivers/spi/spi-s3c24xx.c
··· 29 29 30 30 #include <plat/regs-spi.h> 31 31 32 - #include <plat/fiq.h> 33 32 #include <asm/fiq.h> 34 33 35 34 #include "spi-s3c24xx-fiq.h" ··· 77 78 unsigned char *rx; 78 79 79 80 struct clk *clk; 80 - struct resource *ioarea; 81 81 struct spi_master *master; 82 82 struct spi_device *curdev; 83 83 struct device *dev; 84 84 struct s3c2410_spi_info *pdata; 85 85 }; 86 - 87 86 88 87 #define SPCON_DEFAULT (S3C2410_SPCON_MSTR | S3C2410_SPCON_SMOD_INT) 89 88 #define SPPIN_DEFAULT (S3C2410_SPPIN_KEEP) ··· 514 517 master = spi_alloc_master(&pdev->dev, sizeof(struct s3c24xx_spi)); 515 518 if (master == NULL) { 516 519 dev_err(&pdev->dev, "No memory for spi_master\n"); 517 - err = -ENOMEM; 518 - goto err_nomem; 520 + return -ENOMEM; 519 521 } 520 522 521 523 hw = spi_master_get_devdata(master); ··· 558 562 dev_dbg(hw->dev, "bitbang at %p\n", &hw->bitbang); 559 563 560 564 /* find and map our resources */ 561 - 562 565 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 563 - if (res == NULL) { 564 - dev_err(&pdev->dev, "Cannot get IORESOURCE_MEM\n"); 565 - err = -ENOENT; 566 - goto err_no_iores; 567 - } 568 - 569 - hw->ioarea = request_mem_region(res->start, resource_size(res), 570 - pdev->name); 571 - 572 - if (hw->ioarea == NULL) { 573 - dev_err(&pdev->dev, "Cannot reserve region\n"); 574 - err = -ENXIO; 575 - goto err_no_iores; 576 - } 577 - 578 - hw->regs = ioremap(res->start, resource_size(res)); 579 - if (hw->regs == NULL) { 580 - dev_err(&pdev->dev, "Cannot map IO\n"); 581 - err = -ENXIO; 582 - goto err_no_iomap; 566 + hw->regs = devm_ioremap_resource(&pdev->dev, res); 567 + if (IS_ERR(hw->regs)) { 568 + err = PTR_ERR(hw->regs); 569 + goto err_no_pdata; 583 570 } 584 571 585 572 hw->irq = platform_get_irq(pdev, 0); 586 573 if (hw->irq < 0) { 587 574 dev_err(&pdev->dev, "No IRQ specified\n"); 588 575 err = -ENOENT; 589 - goto err_no_irq; 576 + goto err_no_pdata; 590 577 } 591 578 592 - err = request_irq(hw->irq, s3c24xx_spi_irq, 0, pdev->name, hw); 579 + err = 
devm_request_irq(&pdev->dev, hw->irq, s3c24xx_spi_irq, 0, 580 + pdev->name, hw); 593 581 if (err) { 594 582 dev_err(&pdev->dev, "Cannot claim IRQ\n"); 595 - goto err_no_irq; 583 + goto err_no_pdata; 596 584 } 597 585 598 - hw->clk = clk_get(&pdev->dev, "spi"); 586 + hw->clk = devm_clk_get(&pdev->dev, "spi"); 599 587 if (IS_ERR(hw->clk)) { 600 588 dev_err(&pdev->dev, "No clock for device\n"); 601 589 err = PTR_ERR(hw->clk); 602 - goto err_no_clk; 590 + goto err_no_pdata; 603 591 } 604 592 605 593 /* setup any gpio we can */ ··· 595 615 goto err_register; 596 616 } 597 617 598 - err = gpio_request(pdata->pin_cs, dev_name(&pdev->dev)); 618 + err = devm_gpio_request(&pdev->dev, pdata->pin_cs, 619 + dev_name(&pdev->dev)); 599 620 if (err) { 600 621 dev_err(&pdev->dev, "Failed to get gpio for cs\n"); 601 622 goto err_register; ··· 620 639 return 0; 621 640 622 641 err_register: 623 - if (hw->set_cs == s3c24xx_spi_gpiocs) 624 - gpio_free(pdata->pin_cs); 625 - 626 642 clk_disable(hw->clk); 627 - clk_put(hw->clk); 628 643 629 - err_no_clk: 630 - free_irq(hw->irq, hw); 631 - 632 - err_no_irq: 633 - iounmap(hw->regs); 634 - 635 - err_no_iomap: 636 - release_resource(hw->ioarea); 637 - kfree(hw->ioarea); 638 - 639 - err_no_iores: 640 644 err_no_pdata: 641 645 spi_master_put(hw->master); 642 - 643 - err_nomem: 644 646 return err; 645 647 } 646 648 ··· 632 668 struct s3c24xx_spi *hw = platform_get_drvdata(dev); 633 669 634 670 spi_bitbang_stop(&hw->bitbang); 635 - 636 671 clk_disable(hw->clk); 637 - clk_put(hw->clk); 638 - 639 - free_irq(hw->irq, hw); 640 - iounmap(hw->regs); 641 - 642 - if (hw->set_cs == s3c24xx_spi_gpiocs) 643 - gpio_free(hw->pdata->pin_cs); 644 - 645 - release_resource(hw->ioarea); 646 - kfree(hw->ioarea); 647 - 648 672 spi_master_put(hw->master); 649 673 return 0; 650 674 }
+1 -4
drivers/spi/spi-s3c64xx.c
···
890 890     unsigned long flags;
891 891     int use_dma;
892 892
893     -   reinit_completion(&sdd->xfer_completion);
    893 +   reinit_completion(&sdd->xfer_completion);
894 894
895 895     /* Only BPW and Speed may change across transfers */
896 896     bpw = xfer->bits_per_word;
···
923 923     sdd->state &= ~TXBUSY;
924 924
925 925     enable_datapath(sdd, spi, xfer, use_dma);
926     -
927     -   /* Start the signals */
928     -   writel(0, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
929 926
930 927     /* Start the signals */
931 928     writel(0, sdd->regs + S3C64XX_SPI_SLAVE_SEL);
+2 -22
drivers/spi/spi-sc18is602.c
···
183 183 static int sc18is602_check_transfer(struct spi_device *spi,
184 184                                     struct spi_transfer *t, int tlen)
185 185 {
186     -   int bpw;
187 186     uint32_t hz;
188 187
189 188     if (t && t->len + tlen > SC18IS602_BUFSIZ)
190     -       return -EINVAL;
191     -
192     -   bpw = spi->bits_per_word;
193     -   if (t && t->bits_per_word)
194     -       bpw = t->bits_per_word;
195     -   if (bpw != 8)
196 189         return -EINVAL;
197 190
198 191     hz = spi->max_speed_hz;
···
247 254
248 255 static int sc18is602_setup(struct spi_device *spi)
249 256 {
250     -   if (!spi->bits_per_word)
251     -       spi->bits_per_word = 8;
252     -
253 257     if (spi->mode & ~(SPI_CPHA | SPI_CPOL | SPI_LSB_FIRST))
254 258         return -EINVAL;
255 259
···
305 315     }
306 316     master->bus_num = client->adapter->nr;
307 317     master->mode_bits = SPI_CPHA | SPI_CPOL | SPI_LSB_FIRST;
    318 +   master->bits_per_word_mask = SPI_BPW_MASK(8);
308 319     master->setup = sc18is602_setup;
309 320     master->transfer_one_message = sc18is602_transfer_one;
310 321     master->dev.of_node = np;
311 322
312     -   error = spi_register_master(master);
    323 +   error = devm_spi_register_master(dev, master);
313 324     if (error)
314 325         goto error_reg;
315 326
···
319 328 error_reg:
320 329     spi_master_put(master);
321 330     return error;
322     - }
323     -
324     - static int sc18is602_remove(struct i2c_client *client)
325     - {
326     -   struct sc18is602 *hw = i2c_get_clientdata(client);
327     -   struct spi_master *master = hw->master;
328     -
329     -   spi_unregister_master(master);
330     -
331     -   return 0;
332 331 }
333 332
334 333 static const struct i2c_device_id sc18is602_id[] = {
···
334 353         .name = "sc18is602",
335 354     },
336 355     .probe = sc18is602_probe,
337     -   .remove = sc18is602_remove,
338 356     .id_table = sc18is602_id,
339 357 };
+2 -2
drivers/spi/spi-sh-hspi.c
···
197 197
198 198         hspi_write(hspi, SPTBR, tx);
199 199
200     -       /* wait recive */
    200 +       /* wait receive */
201 201         ret = hspi_status_check_timeout(hspi, 0x4, 0x4);
202 202         if (ret < 0)
203 203             break;
···
353 353 MODULE_DESCRIPTION("SuperH HSPI bus driver");
354 354 MODULE_LICENSE("GPL");
355 355 MODULE_AUTHOR("Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>");
356     - MODULE_ALIAS("platform:sh_spi");
    356 + MODULE_ALIAS("platform:sh-hspi");
+34 -30
drivers/spi/spi-sh-msiof.c
··· 152 152 size_t k; 153 153 154 154 if (!WARN_ON(!spi_hz || !parent_rate)) 155 - div = parent_rate / spi_hz; 155 + div = DIV_ROUND_UP(parent_rate, spi_hz); 156 156 157 157 /* TODO: make more fine grained */ 158 158 ··· 169 169 170 170 static void sh_msiof_spi_set_pin_regs(struct sh_msiof_spi_priv *p, 171 171 u32 cpol, u32 cpha, 172 - u32 tx_hi_z, u32 lsb_first) 172 + u32 tx_hi_z, u32 lsb_first, u32 cs_high) 173 173 { 174 174 u32 tmp; 175 175 int edge; ··· 182 182 * 1 1 11 11 1 1 183 183 */ 184 184 sh_msiof_write(p, FCTR, 0); 185 - sh_msiof_write(p, TMDR1, 0xe2000005 | (lsb_first << 24)); 186 - sh_msiof_write(p, RMDR1, 0x22000005 | (lsb_first << 24)); 185 + 186 + tmp = 0; 187 + tmp |= !cs_high << 25; 188 + tmp |= lsb_first << 24; 189 + sh_msiof_write(p, TMDR1, 0xe0000005 | tmp); 190 + sh_msiof_write(p, RMDR1, 0x20000005 | tmp); 187 191 188 192 tmp = 0xa0000000; 189 193 tmp |= cpol << 30; /* TSCKIZ */ ··· 421 417 sh_msiof_spi_set_pin_regs(p, !!(spi->mode & SPI_CPOL), 422 418 !!(spi->mode & SPI_CPHA), 423 419 !!(spi->mode & SPI_3WIRE), 424 - !!(spi->mode & SPI_LSB_FIRST)); 420 + !!(spi->mode & SPI_LSB_FIRST), 421 + !!(spi->mode & SPI_CS_HIGH)); 425 422 } 426 423 427 424 /* use spi->controller data for CS (same strategy as spi_gpio) */ 428 - gpio_set_value((unsigned)spi->controller_data, value); 425 + gpio_set_value((uintptr_t)spi->controller_data, value); 429 426 430 427 if (is_on == BITBANG_CS_INACTIVE) { 431 428 if (test_and_clear_bit(0, &p->flags)) { ··· 640 635 master = spi_alloc_master(&pdev->dev, sizeof(struct sh_msiof_spi_priv)); 641 636 if (master == NULL) { 642 637 dev_err(&pdev->dev, "failed to allocate spi master\n"); 643 - ret = -ENOMEM; 644 - goto err0; 638 + return -ENOMEM; 645 639 } 646 640 647 641 p = spi_master_get_devdata(master); ··· 659 655 660 656 init_completion(&p->done); 661 657 662 - p->clk = clk_get(&pdev->dev, NULL); 658 + p->clk = devm_clk_get(&pdev->dev, NULL); 663 659 if (IS_ERR(p->clk)) { 664 660 dev_err(&pdev->dev, "cannot get 
clock\n"); 665 661 ret = PTR_ERR(p->clk); 666 662 goto err1; 667 663 } 668 664 669 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 670 665 i = platform_get_irq(pdev, 0); 671 - if (!r || i < 0) { 672 - dev_err(&pdev->dev, "cannot get platform resources\n"); 666 + if (i < 0) { 667 + dev_err(&pdev->dev, "cannot get platform IRQ\n"); 673 668 ret = -ENOENT; 674 - goto err2; 675 - } 676 - p->mapbase = ioremap_nocache(r->start, resource_size(r)); 677 - if (!p->mapbase) { 678 - dev_err(&pdev->dev, "unable to ioremap\n"); 679 - ret = -ENXIO; 680 - goto err2; 669 + goto err1; 681 670 } 682 671 683 - ret = request_irq(i, sh_msiof_spi_irq, 0, 684 - dev_name(&pdev->dev), p); 672 + r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 673 + p->mapbase = devm_ioremap_resource(&pdev->dev, r); 674 + if (IS_ERR(p->mapbase)) { 675 + ret = PTR_ERR(p->mapbase); 676 + goto err1; 677 + } 678 + 679 + ret = devm_request_irq(&pdev->dev, i, sh_msiof_spi_irq, 0, 680 + dev_name(&pdev->dev), p); 685 681 if (ret) { 686 682 dev_err(&pdev->dev, "unable to request irq\n"); 687 - goto err3; 683 + goto err1; 684 + } 685 + 686 + ret = clk_prepare(p->clk); 687 + if (ret < 0) { 688 + dev_err(&pdev->dev, "unable to prepare clock\n"); 689 + goto err1; 688 690 } 689 691 690 692 p->pdev = pdev; ··· 729 719 return 0; 730 720 731 721 pm_runtime_disable(&pdev->dev); 732 - err3: 733 - iounmap(p->mapbase); 734 - err2: 735 - clk_put(p->clk); 722 + clk_unprepare(p->clk); 736 723 err1: 737 724 spi_master_put(master); 738 - err0: 739 725 return ret; 740 726 } 741 727 ··· 743 737 ret = spi_bitbang_stop(&p->bitbang); 744 738 if (!ret) { 745 739 pm_runtime_disable(&pdev->dev); 746 - free_irq(platform_get_irq(pdev, 0), p); 747 - iounmap(p->mapbase); 748 - clk_put(p->clk); 740 + clk_unprepare(p->clk); 749 741 spi_master_put(p->bitbang.master); 750 742 } 751 743 return ret;
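The sh-msiof divisor change above replaces truncating division with `DIV_ROUND_UP(parent_rate, spi_hz)`: rounding the divider up keeps the generated SPI clock at or below the requested rate, whereas truncation could overclock the slave. A small sketch using the same formula as the kernel's `DIV_ROUND_UP`:

```c
#include <assert.h>

/* Same formula as the kernel's DIV_ROUND_UP in <linux/kernel.h>. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Rounding up means parent_rate / divisor never exceeds spi_hz. */
static unsigned long spi_clk_div(unsigned long parent_rate,
				 unsigned long spi_hz)
{
	return DIV_ROUND_UP(parent_rate, spi_hz);
}
```

For example, a 48 MHz parent and a 10 MHz request yield a divisor of 5 (9.6 MHz); plain truncation would pick 4 and drive the bus at 12 MHz.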
+2 -11
drivers/spi/spi-sh.c
··· 171 171 int remain = t->len; 172 172 int cur_len; 173 173 unsigned char *data; 174 - unsigned long tmp; 175 174 long ret; 176 175 177 176 if (t->len) ··· 212 213 } 213 214 214 215 if (list_is_last(&t->transfer_list, &mesg->transfers)) { 215 - tmp = spi_sh_read(ss, SPI_SH_CR1); 216 - tmp = tmp & ~(SPI_SH_SSD | SPI_SH_SSDB); 217 - spi_sh_write(ss, tmp, SPI_SH_CR1); 216 + spi_sh_clear_bit(ss, SPI_SH_SSD | SPI_SH_SSDB, SPI_SH_CR1); 218 217 spi_sh_set_bit(ss, SPI_SH_SSA, SPI_SH_CR1); 219 218 220 219 ss->cr1 &= ~SPI_SH_TBE; ··· 236 239 int remain = t->len; 237 240 int cur_len; 238 241 unsigned char *data; 239 - unsigned long tmp; 240 242 long ret; 241 243 242 244 if (t->len > SPI_SH_MAX_BYTE) ··· 243 247 else 244 248 spi_sh_write(ss, t->len, SPI_SH_CR3); 245 249 246 - tmp = spi_sh_read(ss, SPI_SH_CR1); 247 - tmp = tmp & ~(SPI_SH_SSD | SPI_SH_SSDB); 248 - spi_sh_write(ss, tmp, SPI_SH_CR1); 250 + spi_sh_clear_bit(ss, SPI_SH_SSD | SPI_SH_SSDB, SPI_SH_CR1); 249 251 spi_sh_set_bit(ss, SPI_SH_SSA, SPI_SH_CR1); 250 252 251 253 spi_sh_wait_write_buffer_empty(ss); ··· 351 357 static int spi_sh_setup(struct spi_device *spi) 352 358 { 353 359 struct spi_sh_data *ss = spi_master_get_devdata(spi->master); 354 - 355 - if (!spi->bits_per_word) 356 - spi->bits_per_word = 8; 357 360 358 361 pr_debug("%s: enter\n", __func__); 359 362
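The spi-sh hunks above collapse the open-coded read/mask/write sequence on `SPI_SH_CR1` into the driver's existing `spi_sh_clear_bit()` helper. A sketch of that read-modify-write pattern, with a plain variable standing in for the memory-mapped register (the mask values below are illustrative, not the real `SPI_SH_SSD`/`SPI_SH_SSDB` bits):

```c
#include <assert.h>
#include <stdint.h>

/* A plain variable stands in for the memory-mapped register; the real
 * helpers go through spi_sh_read()/spi_sh_write(). */
static uint32_t fake_reg;

static void reg_set_bit(uint32_t mask)
{
	fake_reg |= mask;	/* read-modify-write: set bits in mask */
}

static void reg_clear_bit(uint32_t mask)
{
	fake_reg &= ~mask;	/* read-modify-write: clear bits in mask */
}
```

Funnelling every caller through one helper removes the temporary `tmp` variable and keeps the masking logic in a single place.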
-7
drivers/spi/spi-sirf.c
··· 536 536 537 537 static int spi_sirfsoc_setup(struct spi_device *spi) 538 538 { 539 - struct sirfsoc_spi *sspi; 540 - 541 539 if (!spi->max_speed_hz) 542 540 return -EINVAL; 543 - 544 - sspi = spi_master_get_devdata(spi->master); 545 - 546 - if (!spi->bits_per_word) 547 - spi->bits_per_word = 8; 548 541 549 542 return spi_sirfsoc_setup_transfer(spi, NULL); 550 543 }
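Several drivers in this series (sc18is602, spi-sh, spi-sirf above, and spi-topcliff-pch below) drop their private `if (!spi->bits_per_word) spi->bits_per_word = 8;` because the SPI core now applies that default once during setup. A minimal stand-in for that core behaviour (`fake_spi_device` and `core_apply_defaults` are hypothetical names, not the kernel API):

```c
#include <assert.h>

/* Hypothetical stand-in for struct spi_device's bits_per_word field. */
struct fake_spi_device {
	unsigned char bits_per_word;
};

/* The core applies the 8-bit default once, so every driver's copy of
 * this check becomes dead code. */
static void core_apply_defaults(struct fake_spi_device *spi)
{
	if (!spi->bits_per_word)
		spi->bits_per_word = 8;
}
```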
+34 -64
drivers/spi/spi-tegra114.c
··· 54 54 #define SPI_CS_SS_VAL (1 << 20) 55 55 #define SPI_CS_SW_HW (1 << 21) 56 56 /* SPI_CS_POL_INACTIVE bits are default high */ 57 - #define SPI_CS_POL_INACTIVE 22 58 - #define SPI_CS_POL_INACTIVE_0 (1 << 22) 59 - #define SPI_CS_POL_INACTIVE_1 (1 << 23) 60 - #define SPI_CS_POL_INACTIVE_2 (1 << 24) 61 - #define SPI_CS_POL_INACTIVE_3 (1 << 25) 57 + /* n from 0 to 3 */ 58 + #define SPI_CS_POL_INACTIVE(n) (1 << (22 + (n))) 62 59 #define SPI_CS_POL_INACTIVE_MASK (0xF << 22) 63 60 64 61 #define SPI_CS_SEL_0 (0 << 26) ··· 162 165 #define MAX_HOLD_CYCLES 16 163 166 #define SPI_DEFAULT_SPEED 25000000 164 167 165 - #define MAX_CHIP_SELECT 4 166 - #define SPI_FIFO_DEPTH 64 167 - 168 168 struct tegra_spi_data { 169 169 struct device *dev; 170 170 struct spi_master *master; ··· 178 184 struct spi_device *cur_spi; 179 185 struct spi_device *cs_control; 180 186 unsigned cur_pos; 181 - unsigned cur_len; 182 187 unsigned words_per_32bit; 183 188 unsigned bytes_per_word; 184 189 unsigned curr_dma_words; ··· 197 204 u32 rx_status; 198 205 u32 status_reg; 199 206 bool is_packed; 200 - unsigned long packed_size; 201 207 202 208 u32 command1_reg; 203 209 u32 dma_control_reg; 204 210 u32 def_command1_reg; 205 - u32 spi_cs_timing; 206 211 207 212 struct completion xfer_completion; 208 213 struct spi_transfer *curr_xfer; ··· 218 227 static int tegra_spi_runtime_suspend(struct device *dev); 219 228 static int tegra_spi_runtime_resume(struct device *dev); 220 229 221 - static inline unsigned long tegra_spi_readl(struct tegra_spi_data *tspi, 230 + static inline u32 tegra_spi_readl(struct tegra_spi_data *tspi, 222 231 unsigned long reg) 223 232 { 224 233 return readl(tspi->base + reg); 225 234 } 226 235 227 236 static inline void tegra_spi_writel(struct tegra_spi_data *tspi, 228 - unsigned long val, unsigned long reg) 237 + u32 val, unsigned long reg) 229 238 { 230 239 writel(val, tspi->base + reg); 231 240 ··· 236 245 237 246 static void tegra_spi_clear_status(struct tegra_spi_data 
*tspi) 238 247 { 239 - unsigned long val; 248 + u32 val; 240 249 241 250 /* Write 1 to clear status register */ 242 251 val = tegra_spi_readl(tspi, SPI_TRANS_STATUS); ··· 287 296 { 288 297 unsigned nbytes; 289 298 unsigned tx_empty_count; 290 - unsigned long fifo_status; 299 + u32 fifo_status; 291 300 unsigned max_n_32bit; 292 301 unsigned i, count; 293 - unsigned long x; 294 302 unsigned int written_words; 295 303 unsigned fifo_words_left; 296 304 u8 *tx_buf = (u8 *)t->tx_buf + tspi->cur_tx_pos; ··· 303 313 nbytes = written_words * tspi->bytes_per_word; 304 314 max_n_32bit = DIV_ROUND_UP(nbytes, 4); 305 315 for (count = 0; count < max_n_32bit; count++) { 306 - x = 0; 316 + u32 x = 0; 307 317 for (i = 0; (i < 4) && nbytes; i++, nbytes--) 308 - x |= (*tx_buf++) << (i*8); 318 + x |= (u32)(*tx_buf++) << (i * 8); 309 319 tegra_spi_writel(tspi, x, SPI_TX_FIFO); 310 320 } 311 321 } else { ··· 313 323 written_words = max_n_32bit; 314 324 nbytes = written_words * tspi->bytes_per_word; 315 325 for (count = 0; count < max_n_32bit; count++) { 316 - x = 0; 326 + u32 x = 0; 317 327 for (i = 0; nbytes && (i < tspi->bytes_per_word); 318 328 i++, nbytes--) 319 - x |= ((*tx_buf++) << i*8); 329 + x |= (u32)(*tx_buf++) << (i * 8); 320 330 tegra_spi_writel(tspi, x, SPI_TX_FIFO); 321 331 } 322 332 } ··· 328 338 struct tegra_spi_data *tspi, struct spi_transfer *t) 329 339 { 330 340 unsigned rx_full_count; 331 - unsigned long fifo_status; 341 + u32 fifo_status; 332 342 unsigned i, count; 333 - unsigned long x; 334 343 unsigned int read_words = 0; 335 344 unsigned len; 336 345 u8 *rx_buf = (u8 *)t->rx_buf + tspi->cur_rx_pos; ··· 339 350 if (tspi->is_packed) { 340 351 len = tspi->curr_dma_words * tspi->bytes_per_word; 341 352 for (count = 0; count < rx_full_count; count++) { 342 - x = tegra_spi_readl(tspi, SPI_RX_FIFO); 353 + u32 x = tegra_spi_readl(tspi, SPI_RX_FIFO); 343 354 for (i = 0; len && (i < 4); i++, len--) 344 355 *rx_buf++ = (x >> i*8) & 0xFF; 345 356 } 346 357 tspi->cur_rx_pos 
+= tspi->curr_dma_words * tspi->bytes_per_word; 347 358 read_words += tspi->curr_dma_words; 348 359 } else { 349 - unsigned int rx_mask; 350 - unsigned int bits_per_word = t->bits_per_word; 351 - 352 - rx_mask = (1 << bits_per_word) - 1; 360 + u32 rx_mask = ((u32)1 << t->bits_per_word) - 1; 353 361 for (count = 0; count < rx_full_count; count++) { 354 - x = tegra_spi_readl(tspi, SPI_RX_FIFO); 355 - x &= rx_mask; 362 + u32 x = tegra_spi_readl(tspi, SPI_RX_FIFO) & rx_mask; 356 363 for (i = 0; (i < tspi->bytes_per_word); i++) 357 364 *rx_buf++ = (x >> (i*8)) & 0xFF; 358 365 } ··· 361 376 static void tegra_spi_copy_client_txbuf_to_spi_txbuf( 362 377 struct tegra_spi_data *tspi, struct spi_transfer *t) 363 378 { 364 - unsigned len; 365 - 366 379 /* Make the dma buffer to read by cpu */ 367 380 dma_sync_single_for_cpu(tspi->dev, tspi->tx_dma_phys, 368 381 tspi->dma_buf_size, DMA_TO_DEVICE); 369 382 370 383 if (tspi->is_packed) { 371 - len = tspi->curr_dma_words * tspi->bytes_per_word; 384 + unsigned len = tspi->curr_dma_words * tspi->bytes_per_word; 372 385 memcpy(tspi->tx_dma_buf, t->tx_buf + tspi->cur_pos, len); 373 386 } else { 374 387 unsigned int i; 375 388 unsigned int count; 376 389 u8 *tx_buf = (u8 *)t->tx_buf + tspi->cur_tx_pos; 377 390 unsigned consume = tspi->curr_dma_words * tspi->bytes_per_word; 378 - unsigned int x; 379 391 380 392 for (count = 0; count < tspi->curr_dma_words; count++) { 381 - x = 0; 393 + u32 x = 0; 382 394 for (i = 0; consume && (i < tspi->bytes_per_word); 383 395 i++, consume--) 384 - x |= ((*tx_buf++) << i * 8); 396 + x |= (u32)(*tx_buf++) << (i * 8); 385 397 tspi->tx_dma_buf[count] = x; 386 398 } 387 399 } ··· 392 410 static void tegra_spi_copy_spi_rxbuf_to_client_rxbuf( 393 411 struct tegra_spi_data *tspi, struct spi_transfer *t) 394 412 { 395 - unsigned len; 396 - 397 413 /* Make the dma buffer to read by cpu */ 398 414 dma_sync_single_for_cpu(tspi->dev, tspi->rx_dma_phys, 399 415 tspi->dma_buf_size, DMA_FROM_DEVICE); 400 416 401 417 
if (tspi->is_packed) { 402 - len = tspi->curr_dma_words * tspi->bytes_per_word; 418 + unsigned len = tspi->curr_dma_words * tspi->bytes_per_word; 403 419 memcpy(t->rx_buf + tspi->cur_rx_pos, tspi->rx_dma_buf, len); 404 420 } else { 405 421 unsigned int i; 406 422 unsigned int count; 407 423 unsigned char *rx_buf = t->rx_buf + tspi->cur_rx_pos; 408 - unsigned int x; 409 - unsigned int rx_mask; 410 - unsigned int bits_per_word = t->bits_per_word; 424 + u32 rx_mask = ((u32)1 << t->bits_per_word) - 1; 411 425 412 - rx_mask = (1 << bits_per_word) - 1; 413 426 for (count = 0; count < tspi->curr_dma_words; count++) { 414 - x = tspi->rx_dma_buf[count]; 415 - x &= rx_mask; 427 + u32 x = tspi->rx_dma_buf[count] & rx_mask; 416 428 for (i = 0; (i < tspi->bytes_per_word); i++) 417 429 *rx_buf++ = (x >> (i*8)) & 0xFF; 418 430 } ··· 466 490 static int tegra_spi_start_dma_based_transfer( 467 491 struct tegra_spi_data *tspi, struct spi_transfer *t) 468 492 { 469 - unsigned long val; 493 + u32 val; 470 494 unsigned int len; 471 495 int ret = 0; 472 - unsigned long status; 496 + u32 status; 473 497 474 498 /* Make sure that Rx and Tx fifo are empty */ 475 499 status = tegra_spi_readl(tspi, SPI_FIFO_STATUS); 476 500 if ((status & SPI_FIFO_EMPTY) != SPI_FIFO_EMPTY) { 477 - dev_err(tspi->dev, 478 - "Rx/Tx fifo are not empty status 0x%08lx\n", status); 501 + dev_err(tspi->dev, "Rx/Tx fifo are not empty status 0x%08x\n", 502 + (unsigned)status); 479 503 return -EIO; 480 504 } 481 505 ··· 540 564 static int tegra_spi_start_cpu_based_transfer( 541 565 struct tegra_spi_data *tspi, struct spi_transfer *t) 542 566 { 543 - unsigned long val; 567 + u32 val; 544 568 unsigned cur_words; 545 569 546 570 if (tspi->cur_direction & DATA_DIR_TX) ··· 652 676 dma_release_channel(dma_chan); 653 677 } 654 678 655 - static unsigned long tegra_spi_setup_transfer_one(struct spi_device *spi, 679 + static u32 tegra_spi_setup_transfer_one(struct spi_device *spi, 656 680 struct spi_transfer *t, bool 
is_first_of_msg) 657 681 { 658 682 struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master); 659 683 u32 speed = t->speed_hz; 660 684 u8 bits_per_word = t->bits_per_word; 661 - unsigned long command1; 685 + u32 command1; 662 686 int req_mode; 663 687 664 688 if (speed != tspi->cur_speed) { ··· 713 737 } 714 738 715 739 static int tegra_spi_start_transfer_one(struct spi_device *spi, 716 - struct spi_transfer *t, unsigned long command1) 740 + struct spi_transfer *t, u32 command1) 717 741 { 718 742 struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master); 719 743 unsigned total_fifo_words; ··· 738 762 tegra_spi_writel(tspi, command1, SPI_COMMAND1); 739 763 tspi->command1_reg = command1; 740 764 741 - dev_dbg(tspi->dev, "The def 0x%x and written 0x%lx\n", 742 - tspi->def_command1_reg, command1); 765 + dev_dbg(tspi->dev, "The def 0x%x and written 0x%x\n", 766 + tspi->def_command1_reg, (unsigned)command1); 743 767 744 768 if (total_fifo_words > SPI_FIFO_DEPTH) 745 769 ret = tegra_spi_start_dma_based_transfer(tspi, t); ··· 751 775 static int tegra_spi_setup(struct spi_device *spi) 752 776 { 753 777 struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master); 754 - unsigned long val; 778 + u32 val; 755 779 unsigned long flags; 756 780 int ret; 757 - unsigned int cs_pol_bit[MAX_CHIP_SELECT] = { 758 - SPI_CS_POL_INACTIVE_0, 759 - SPI_CS_POL_INACTIVE_1, 760 - SPI_CS_POL_INACTIVE_2, 761 - SPI_CS_POL_INACTIVE_3, 762 - }; 763 781 764 782 dev_dbg(&spi->dev, "setup %d bpw, %scpol, %scpha, %dHz\n", 765 783 spi->bits_per_word, ··· 775 805 spin_lock_irqsave(&tspi->lock, flags); 776 806 val = tspi->def_command1_reg; 777 807 if (spi->mode & SPI_CS_HIGH) 778 - val &= ~cs_pol_bit[spi->chip_select]; 808 + val &= ~SPI_CS_POL_INACTIVE(spi->chip_select); 779 809 else 780 - val |= cs_pol_bit[spi->chip_select]; 810 + val |= SPI_CS_POL_INACTIVE(spi->chip_select); 781 811 tspi->def_command1_reg = val; 782 812 tegra_spi_writel(tspi, tspi->def_command1_reg, SPI_COMMAND1); 
783 813 spin_unlock_irqrestore(&tspi->lock, flags); ··· 811 841 msg->actual_length = 0; 812 842 813 843 list_for_each_entry(xfer, &msg->transfers, transfer_list) { 814 - unsigned long cmd1; 844 + u32 cmd1; 815 845 816 846 reinit_completion(&tspi->xfer_completion); 817 847
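The tegra114 FIFO loops above gain an explicit `(u32)` cast in `x |= (u32)(*tx_buf++) << (i * 8)`. The cast matters for the fourth byte: without it, the `u8` is promoted to a signed `int`, and left-shifting a value with the high bit set by 24 into the sign bit is undefined behaviour in C. A self-contained sketch of the little-endian packing those loops perform:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Pack up to four bytes little-endian into one 32-bit FIFO word,
 * mirroring the tegra tx-FIFO fill loops. The uint32_t cast avoids
 * shifting a promoted signed int into the sign bit for byte 3. */
static uint32_t pack_fifo_word(const uint8_t *buf, size_t nbytes)
{
	uint32_t x = 0;
	size_t i;

	for (i = 0; i < 4 && i < nbytes; i++)
		x |= (uint32_t)buf[i] << (i * 8);
	return x;
}
```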
+10 -12
drivers/spi/spi-tegra20-sflash.c
··· 149 149 static int tegra_sflash_runtime_suspend(struct device *dev); 150 150 static int tegra_sflash_runtime_resume(struct device *dev); 151 151 152 - static inline unsigned long tegra_sflash_readl(struct tegra_sflash_data *tsd, 152 + static inline u32 tegra_sflash_readl(struct tegra_sflash_data *tsd, 153 153 unsigned long reg) 154 154 { 155 155 return readl(tsd->base + reg); 156 156 } 157 157 158 158 static inline void tegra_sflash_writel(struct tegra_sflash_data *tsd, 159 - unsigned long val, unsigned long reg) 159 + u32 val, unsigned long reg) 160 160 { 161 161 writel(val, tsd->base + reg); 162 162 } ··· 186 186 struct tegra_sflash_data *tsd, struct spi_transfer *t) 187 187 { 188 188 unsigned nbytes; 189 - unsigned long status; 189 + u32 status; 190 190 unsigned max_n_32bit = tsd->curr_xfer_words; 191 191 u8 *tx_buf = (u8 *)t->tx_buf + tsd->cur_tx_pos; 192 192 ··· 197 197 status = tegra_sflash_readl(tsd, SPI_STATUS); 198 198 while (!(status & SPI_TXF_FULL)) { 199 199 int i; 200 - unsigned int x = 0; 200 + u32 x = 0; 201 201 202 202 for (i = 0; nbytes && (i < tsd->bytes_per_word); 203 203 i++, nbytes--) 204 - x |= ((*tx_buf++) << i*8); 204 + x |= (u32)(*tx_buf++) << (i * 8); 205 205 tegra_sflash_writel(tsd, x, SPI_TX_FIFO); 206 206 if (!nbytes) 207 207 break; ··· 215 215 static int tegra_sflash_read_rx_fifo_to_client_rxbuf( 216 216 struct tegra_sflash_data *tsd, struct spi_transfer *t) 217 217 { 218 - unsigned long status; 218 + u32 status; 219 219 unsigned int read_words = 0; 220 220 u8 *rx_buf = (u8 *)t->rx_buf + tsd->cur_rx_pos; 221 221 222 222 status = tegra_sflash_readl(tsd, SPI_STATUS); 223 223 while (!(status & SPI_RXF_EMPTY)) { 224 224 int i; 225 - unsigned long x; 226 - 227 - x = tegra_sflash_readl(tsd, SPI_RX_FIFO); 225 + u32 x = tegra_sflash_readl(tsd, SPI_RX_FIFO); 228 226 for (i = 0; (i < tsd->bytes_per_word); i++) 229 227 *rx_buf++ = (x >> (i*8)) & 0xFF; 230 228 read_words++; ··· 235 237 static int tegra_sflash_start_cpu_based_transfer( 236 238 
struct tegra_sflash_data *tsd, struct spi_transfer *t) 237 239 { 238 - unsigned long val = 0; 240 + u32 val = 0; 239 241 unsigned cur_words; 240 242 241 243 if (tsd->cur_direction & DATA_DIR_TX) ··· 265 267 { 266 268 struct tegra_sflash_data *tsd = spi_master_get_devdata(spi->master); 267 269 u32 speed; 268 - unsigned long command; 270 + u32 command; 269 271 270 272 speed = t->speed_hz; 271 273 if (speed != tsd->cur_speed) { ··· 312 314 tegra_sflash_writel(tsd, command, SPI_COMMAND); 313 315 tsd->command_reg = command; 314 316 315 - return tegra_sflash_start_cpu_based_transfer(tsd, t); 317 + return tegra_sflash_start_cpu_based_transfer(tsd, t); 316 318 } 317 319 318 320 static int tegra_sflash_setup(struct spi_device *spi)
+40 -57
drivers/spi/spi-tegra20-slink.c
··· 196 196 u32 rx_status; 197 197 u32 status_reg; 198 198 bool is_packed; 199 - unsigned long packed_size; 199 + u32 packed_size; 200 200 201 201 u32 command_reg; 202 202 u32 command2_reg; ··· 220 220 static int tegra_slink_runtime_suspend(struct device *dev); 221 221 static int tegra_slink_runtime_resume(struct device *dev); 222 222 223 - static inline unsigned long tegra_slink_readl(struct tegra_slink_data *tspi, 223 + static inline u32 tegra_slink_readl(struct tegra_slink_data *tspi, 224 224 unsigned long reg) 225 225 { 226 226 return readl(tspi->base + reg); 227 227 } 228 228 229 229 static inline void tegra_slink_writel(struct tegra_slink_data *tspi, 230 - unsigned long val, unsigned long reg) 230 + u32 val, unsigned long reg) 231 231 { 232 232 writel(val, tspi->base + reg); 233 233 ··· 238 238 239 239 static void tegra_slink_clear_status(struct tegra_slink_data *tspi) 240 240 { 241 - unsigned long val; 242 - unsigned long val_write = 0; 241 + u32 val_write; 243 242 244 - val = tegra_slink_readl(tspi, SLINK_STATUS); 243 + tegra_slink_readl(tspi, SLINK_STATUS); 245 244 246 245 /* Write 1 to clear status register */ 247 246 val_write = SLINK_RDY | SLINK_FIFO_ERROR; 248 247 tegra_slink_writel(tspi, val_write, SLINK_STATUS); 249 248 } 250 249 251 - static unsigned long tegra_slink_get_packed_size(struct tegra_slink_data *tspi, 250 + static u32 tegra_slink_get_packed_size(struct tegra_slink_data *tspi, 252 251 struct spi_transfer *t) 253 252 { 254 - unsigned long val; 255 - 256 253 switch (tspi->bytes_per_word) { 257 254 case 0: 258 - val = SLINK_PACK_SIZE_4; 259 - break; 255 + return SLINK_PACK_SIZE_4; 260 256 case 1: 261 - val = SLINK_PACK_SIZE_8; 262 - break; 257 + return SLINK_PACK_SIZE_8; 263 258 case 2: 264 - val = SLINK_PACK_SIZE_16; 265 - break; 259 + return SLINK_PACK_SIZE_16; 266 260 case 4: 267 - val = SLINK_PACK_SIZE_32; 268 - break; 261 + return SLINK_PACK_SIZE_32; 269 262 default: 270 - val = 0; 263 + return 0; 271 264 } 272 - return val; 273 265 } 
274 266 275 267 static unsigned tegra_slink_calculate_curr_xfer_param( ··· 304 312 { 305 313 unsigned nbytes; 306 314 unsigned tx_empty_count; 307 - unsigned long fifo_status; 315 + u32 fifo_status; 308 316 unsigned max_n_32bit; 309 317 unsigned i, count; 310 - unsigned long x; 311 318 unsigned int written_words; 312 319 unsigned fifo_words_left; 313 320 u8 *tx_buf = (u8 *)t->tx_buf + tspi->cur_tx_pos; ··· 320 329 nbytes = written_words * tspi->bytes_per_word; 321 330 max_n_32bit = DIV_ROUND_UP(nbytes, 4); 322 331 for (count = 0; count < max_n_32bit; count++) { 323 - x = 0; 332 + u32 x = 0; 324 333 for (i = 0; (i < 4) && nbytes; i++, nbytes--) 325 - x |= (*tx_buf++) << (i*8); 334 + x |= (u32)(*tx_buf++) << (i * 8); 326 335 tegra_slink_writel(tspi, x, SLINK_TX_FIFO); 327 336 } 328 337 } else { ··· 330 339 written_words = max_n_32bit; 331 340 nbytes = written_words * tspi->bytes_per_word; 332 341 for (count = 0; count < max_n_32bit; count++) { 333 - x = 0; 342 + u32 x = 0; 334 343 for (i = 0; nbytes && (i < tspi->bytes_per_word); 335 344 i++, nbytes--) 336 - x |= ((*tx_buf++) << i*8); 345 + x |= (u32)(*tx_buf++) << (i * 8); 337 346 tegra_slink_writel(tspi, x, SLINK_TX_FIFO); 338 347 } 339 348 } ··· 345 354 struct tegra_slink_data *tspi, struct spi_transfer *t) 346 355 { 347 356 unsigned rx_full_count; 348 - unsigned long fifo_status; 357 + u32 fifo_status; 349 358 unsigned i, count; 350 - unsigned long x; 351 359 unsigned int read_words = 0; 352 360 unsigned len; 353 361 u8 *rx_buf = (u8 *)t->rx_buf + tspi->cur_rx_pos; ··· 356 366 if (tspi->is_packed) { 357 367 len = tspi->curr_dma_words * tspi->bytes_per_word; 358 368 for (count = 0; count < rx_full_count; count++) { 359 - x = tegra_slink_readl(tspi, SLINK_RX_FIFO); 369 + u32 x = tegra_slink_readl(tspi, SLINK_RX_FIFO); 360 370 for (i = 0; len && (i < 4); i++, len--) 361 371 *rx_buf++ = (x >> i*8) & 0xFF; 362 372 } ··· 364 374 read_words += tspi->curr_dma_words; 365 375 } else { 366 376 for (count = 0; count < 
rx_full_count; count++) { 367 - x = tegra_slink_readl(tspi, SLINK_RX_FIFO); 377 + u32 x = tegra_slink_readl(tspi, SLINK_RX_FIFO); 368 378 for (i = 0; (i < tspi->bytes_per_word); i++) 369 379 *rx_buf++ = (x >> (i*8)) & 0xFF; 370 380 } ··· 377 387 static void tegra_slink_copy_client_txbuf_to_spi_txbuf( 378 388 struct tegra_slink_data *tspi, struct spi_transfer *t) 379 389 { 380 - unsigned len; 381 - 382 390 /* Make the dma buffer to read by cpu */ 383 391 dma_sync_single_for_cpu(tspi->dev, tspi->tx_dma_phys, 384 392 tspi->dma_buf_size, DMA_TO_DEVICE); 385 393 386 394 if (tspi->is_packed) { 387 - len = tspi->curr_dma_words * tspi->bytes_per_word; 395 + unsigned len = tspi->curr_dma_words * tspi->bytes_per_word; 388 396 memcpy(tspi->tx_dma_buf, t->tx_buf + tspi->cur_pos, len); 389 397 } else { 390 398 unsigned int i; 391 399 unsigned int count; 392 400 u8 *tx_buf = (u8 *)t->tx_buf + tspi->cur_tx_pos; 393 401 unsigned consume = tspi->curr_dma_words * tspi->bytes_per_word; 394 - unsigned int x; 395 402 396 403 for (count = 0; count < tspi->curr_dma_words; count++) { 397 - x = 0; 404 + u32 x = 0; 398 405 for (i = 0; consume && (i < tspi->bytes_per_word); 399 406 i++, consume--) 400 - x |= ((*tx_buf++) << i * 8); 407 + x |= (u32)(*tx_buf++) << (i * 8); 401 408 tspi->tx_dma_buf[count] = x; 402 409 } 403 410 } ··· 421 434 unsigned int i; 422 435 unsigned int count; 423 436 unsigned char *rx_buf = t->rx_buf + tspi->cur_rx_pos; 424 - unsigned int x; 425 - unsigned int rx_mask, bits_per_word; 437 + u32 rx_mask = ((u32)1 << t->bits_per_word) - 1; 426 438 427 - bits_per_word = t->bits_per_word; 428 - rx_mask = (1 << bits_per_word) - 1; 429 439 for (count = 0; count < tspi->curr_dma_words; count++) { 430 - x = tspi->rx_dma_buf[count]; 431 - x &= rx_mask; 440 + u32 x = tspi->rx_dma_buf[count] & rx_mask; 432 441 for (i = 0; (i < tspi->bytes_per_word); i++) 433 442 *rx_buf++ = (x >> (i*8)) & 0xFF; 434 443 } ··· 484 501 static int tegra_slink_start_dma_based_transfer( 485 502 struct 
tegra_slink_data *tspi, struct spi_transfer *t) 486 503 { 487 - unsigned long val; 488 - unsigned long test_val; 504 + u32 val; 489 505 unsigned int len; 490 506 int ret = 0; 491 - unsigned long status; 507 + u32 status; 492 508 493 509 /* Make sure that Rx and Tx fifo are empty */ 494 510 status = tegra_slink_readl(tspi, SLINK_STATUS); 495 511 if ((status & SLINK_FIFO_EMPTY) != SLINK_FIFO_EMPTY) { 496 - dev_err(tspi->dev, 497 - "Rx/Tx fifo are not empty status 0x%08lx\n", status); 512 + dev_err(tspi->dev, "Rx/Tx fifo are not empty status 0x%08x\n", 513 + (unsigned)status); 498 514 return -EIO; 499 515 } 500 516 ··· 533 551 } 534 552 535 553 /* Wait for tx fifo to be fill before starting slink */ 536 - test_val = tegra_slink_readl(tspi, SLINK_STATUS); 537 - while (!(test_val & SLINK_TX_FULL)) 538 - test_val = tegra_slink_readl(tspi, SLINK_STATUS); 554 + status = tegra_slink_readl(tspi, SLINK_STATUS); 555 + while (!(status & SLINK_TX_FULL)) 556 + status = tegra_slink_readl(tspi, SLINK_STATUS); 539 557 } 540 558 541 559 if (tspi->cur_direction & DATA_DIR_RX) { ··· 569 587 static int tegra_slink_start_cpu_based_transfer( 570 588 struct tegra_slink_data *tspi, struct spi_transfer *t) 571 589 { 572 - unsigned long val; 590 + u32 val; 573 591 unsigned cur_words; 574 592 575 593 val = tspi->packed_size; ··· 695 713 u8 bits_per_word; 696 714 unsigned total_fifo_words; 697 715 int ret; 698 - unsigned long command; 699 - unsigned long command2; 716 + u32 command; 717 + u32 command2; 700 718 701 719 bits_per_word = t->bits_per_word; 702 720 speed = t->speed_hz; ··· 743 761 744 762 static int tegra_slink_setup(struct spi_device *spi) 745 763 { 746 - struct tegra_slink_data *tspi = spi_master_get_devdata(spi->master); 747 - unsigned long val; 748 - unsigned long flags; 749 - int ret; 750 - unsigned int cs_pol_bit[MAX_CHIP_SELECT] = { 764 + static const u32 cs_pol_bit[MAX_CHIP_SELECT] = { 751 765 SLINK_CS_POLARITY, 752 766 SLINK_CS_POLARITY1, 753 767 SLINK_CS_POLARITY2, 754 768 
SLINK_CS_POLARITY3, 755 769 }; 770 + 771 + struct tegra_slink_data *tspi = spi_master_get_devdata(spi->master); 772 + u32 val; 773 + unsigned long flags; 774 + int ret; 756 775 757 776 dev_dbg(&spi->dev, "setup %d bpw, %scpol, %scpha, %dHz\n", 758 777 spi->bits_per_word,
+75 -54
drivers/spi/spi-ti-qspi.c
··· 46 46 47 47 struct spi_master *master; 48 48 void __iomem *base; 49 + void __iomem *ctrl_base; 50 + void __iomem *mmap_base; 49 51 struct clk *fclk; 50 52 struct device *dev; 51 53 ··· 56 54 u32 spi_max_frequency; 57 55 u32 cmd; 58 56 u32 dc; 57 + 58 + bool ctrl_mod; 59 59 }; 60 60 61 61 #define QSPI_PID (0x0) ··· 208 204 txbuf = t->tx_buf; 209 205 cmd = qspi->cmd | QSPI_WR_SNGL; 210 206 count = t->len; 211 - wlen = t->bits_per_word; 207 + wlen = t->bits_per_word >> 3; /* in bytes */ 212 208 213 209 while (count) { 214 210 switch (wlen) { 215 - case 8: 211 + case 1: 216 212 dev_dbg(qspi->dev, "tx cmd %08x dc %08x data %02x\n", 217 213 cmd, qspi->dc, *txbuf); 218 214 writeb(*txbuf, qspi->base + QSPI_SPI_DATA_REG); 219 - ti_qspi_write(qspi, cmd, QSPI_SPI_CMD_REG); 220 - ret = wait_for_completion_timeout(&qspi->transfer_complete, 221 - QSPI_COMPLETION_TIMEOUT); 222 - if (ret == 0) { 223 - dev_err(qspi->dev, "write timed out\n"); 224 - return -ETIMEDOUT; 225 - } 226 - txbuf += 1; 227 - count -= 1; 228 215 break; 229 - case 16: 216 + case 2: 230 217 dev_dbg(qspi->dev, "tx cmd %08x dc %08x data %04x\n", 231 218 cmd, qspi->dc, *txbuf); 232 219 writew(*((u16 *)txbuf), qspi->base + QSPI_SPI_DATA_REG); 233 - ti_qspi_write(qspi, cmd, QSPI_SPI_CMD_REG); 234 - ret = wait_for_completion_timeout(&qspi->transfer_complete, 235 - QSPI_COMPLETION_TIMEOUT); 236 - if (ret == 0) { 237 - dev_err(qspi->dev, "write timed out\n"); 238 - return -ETIMEDOUT; 239 - } 240 - txbuf += 2; 241 - count -= 2; 242 220 break; 243 - case 32: 221 + case 4: 244 222 dev_dbg(qspi->dev, "tx cmd %08x dc %08x data %08x\n", 245 223 cmd, qspi->dc, *txbuf); 246 224 writel(*((u32 *)txbuf), qspi->base + QSPI_SPI_DATA_REG); 247 - ti_qspi_write(qspi, cmd, QSPI_SPI_CMD_REG); 248 - ret = wait_for_completion_timeout(&qspi->transfer_complete, 249 - QSPI_COMPLETION_TIMEOUT); 250 - if (ret == 0) { 251 - dev_err(qspi->dev, "write timed out\n"); 252 - return -ETIMEDOUT; 253 - } 254 - txbuf += 4; 255 - count -= 4; 256 225 
break; 257 226 } 227 + 228 + ti_qspi_write(qspi, cmd, QSPI_SPI_CMD_REG); 229 + ret = wait_for_completion_timeout(&qspi->transfer_complete, 230 + QSPI_COMPLETION_TIMEOUT); 231 + if (ret == 0) { 232 + dev_err(qspi->dev, "write timed out\n"); 233 + return -ETIMEDOUT; 234 + } 235 + txbuf += wlen; 236 + count -= wlen; 258 237 } 259 238 260 239 return 0; ··· 263 276 break; 264 277 } 265 278 count = t->len; 266 - wlen = t->bits_per_word; 279 + wlen = t->bits_per_word >> 3; /* in bytes */ 267 280 268 281 while (count) { 269 282 dev_dbg(qspi->dev, "rx cmd %08x dc %08x\n", cmd, qspi->dc); ··· 275 288 return -ETIMEDOUT; 276 289 } 277 290 switch (wlen) { 278 - case 8: 291 + case 1: 279 292 *rxbuf = readb(qspi->base + QSPI_SPI_DATA_REG); 280 - rxbuf += 1; 281 - count -= 1; 282 293 break; 283 - case 16: 294 + case 2: 284 295 *((u16 *)rxbuf) = readw(qspi->base + QSPI_SPI_DATA_REG); 285 - rxbuf += 2; 286 - count -= 2; 287 296 break; 288 - case 32: 297 + case 4: 289 298 *((u32 *)rxbuf) = readl(qspi->base + QSPI_SPI_DATA_REG); 290 - rxbuf += 4; 291 - count -= 4; 292 299 break; 293 300 } 301 + rxbuf += wlen; 302 + count -= wlen; 294 303 } 295 304 296 305 return 0; ··· 400 417 static int ti_qspi_runtime_resume(struct device *dev) 401 418 { 402 419 struct ti_qspi *qspi; 403 - struct spi_master *master; 404 420 405 - master = dev_get_drvdata(dev); 406 - qspi = spi_master_get_devdata(master); 421 + qspi = dev_get_drvdata(dev); 407 422 ti_qspi_restore_ctx(qspi); 408 423 409 424 return 0; ··· 418 437 { 419 438 struct ti_qspi *qspi; 420 439 struct spi_master *master; 421 - struct resource *r; 440 + struct resource *r, *res_ctrl, *res_mmap; 422 441 struct device_node *np = pdev->dev.of_node; 423 442 u32 max_freq; 424 443 int ret = 0, num_cs, irq; ··· 445 464 qspi->dev = &pdev->dev; 446 465 platform_set_drvdata(pdev, qspi); 447 466 448 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 467 + r = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_base"); 468 + if (r == NULL) { 469 + r 
= platform_get_resource(pdev, IORESOURCE_MEM, 0);
+		if (r == NULL) {
+			dev_err(&pdev->dev, "missing platform data\n");
+			return -ENODEV;
+		}
+	}
+
+	res_mmap = platform_get_resource_byname(pdev,
+			IORESOURCE_MEM, "qspi_mmap");
+	if (res_mmap == NULL) {
+		res_mmap = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+		if (res_mmap == NULL) {
+			dev_err(&pdev->dev,
+				"memory mapped resource not required\n");
+			return -ENODEV;
+		}
+	}
+
+	res_ctrl = platform_get_resource_byname(pdev,
+			IORESOURCE_MEM, "qspi_ctrlmod");
+	if (res_ctrl == NULL) {
+		res_ctrl = platform_get_resource(pdev, IORESOURCE_MEM, 2);
+		if (res_ctrl == NULL) {
+			dev_dbg(&pdev->dev,
+				"control module resources not required\n");
+		}
+	}

 	irq = platform_get_irq(pdev, 0);
 	if (irq < 0) {
···
 	if (IS_ERR(qspi->base)) {
 		ret = PTR_ERR(qspi->base);
 		goto free_master;
+	}
+
+	if (res_ctrl) {
+		qspi->ctrl_mod = true;
+		qspi->ctrl_base = devm_ioremap_resource(&pdev->dev, res_ctrl);
+		if (IS_ERR(qspi->ctrl_base)) {
+			ret = PTR_ERR(qspi->ctrl_base);
+			goto free_master;
+		}
+	}
+
+	if (res_mmap) {
+		qspi->mmap_base = devm_ioremap_resource(&pdev->dev, res_mmap);
+		if (IS_ERR(qspi->mmap_base)) {
+			ret = PTR_ERR(qspi->mmap_base);
+			goto free_master;
+		}
 	}

 	ret = devm_request_irq(&pdev->dev, irq, ti_qspi_isr, 0,
···

 static int ti_qspi_remove(struct platform_device *pdev)
 {
-	struct spi_master *master;
-	struct ti_qspi *qspi;
+	struct ti_qspi *qspi = platform_get_drvdata(pdev);
 	int ret;
-
-	master = platform_get_drvdata(pdev);
-	qspi = spi_master_get_devdata(master);

 	ret = pm_runtime_get_sync(qspi->dev);
 	if (ret < 0) {
···
 	pm_runtime_put(qspi->dev);
 	pm_runtime_disable(&pdev->dev);

-	spi_unregister_master(master);
-
 	return 0;
 }
···
 	.probe = ti_qspi_probe,
 	.remove = ti_qspi_remove,
 	.driver = {
-		.name = "ti,dra7xxx-qspi",
+		.name = "ti-qspi",
 		.owner = THIS_MODULE,
 		.pm = &ti_qspi_pm_ops,
 		.of_match_table = ti_qspi_match,
···
 MODULE_AUTHOR("Sourav Poddar <sourav.poddar@ti.com>");
 MODULE_LICENSE("GPL v2");
 MODULE_DESCRIPTION("TI QSPI controller driver");
+MODULE_ALIAS("platform:ti-qspi");
+1 -7
drivers/spi/spi-topcliff-pch.c
···
 	struct pch_spi_board_data *board_dat;
 };

-static DEFINE_PCI_DEVICE_TABLE(pch_spi_pcidev_id) = {
+static const struct pci_device_id pch_spi_pcidev_id[] = {
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_GE_SPI), 1, },
 	{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_SPI), 2, },
 	{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7223_SPI), 1, },
···

 static int pch_spi_setup(struct spi_device *pspi)
 {
-	/* check bits per word */
-	if (pspi->bits_per_word == 0) {
-		pspi->bits_per_word = 8;
-		dev_dbg(&pspi->dev, "%s 8 bits per word\n", __func__);
-	}
-
 	/* Check baud rate setting */
 	/* if baud rate of chip is greater than
 	   max we can support,return error */
+2 -6
drivers/spi/spi-txx9.c
···
 	INIT_LIST_HEAD(&c->queue);
 	init_waitqueue_head(&c->waitq);

-	c->clk = clk_get(&dev->dev, "spi-baseclk");
+	c->clk = devm_clk_get(&dev->dev, "spi-baseclk");
 	if (IS_ERR(c->clk)) {
 		ret = PTR_ERR(c->clk);
 		c->clk = NULL;
···
 	}
 	ret = clk_enable(c->clk);
 	if (ret) {
-		clk_put(c->clk);
 		c->clk = NULL;
 		goto exit;
 	}
···
 exit:
 	if (c->workqueue)
 		destroy_workqueue(c->workqueue);
-	if (c->clk) {
+	if (c->clk)
 		clk_disable(c->clk);
-		clk_put(c->clk);
-	}
 	spi_master_put(master);
 	return ret;
 }
···

 	destroy_workqueue(c->workqueue);
 	clk_disable(c->clk);
-	clk_put(c->clk);
 	return 0;
 }
+1 -11
drivers/spi/spi-xcomm.c
···
 	master->dev.of_node = i2c->dev.of_node;
 	i2c_set_clientdata(i2c, master);

-	ret = spi_register_master(master);
+	ret = devm_spi_register_master(&i2c->dev, master);
 	if (ret < 0)
 		spi_master_put(master);

 	return ret;
-}
-
-static int spi_xcomm_remove(struct i2c_client *i2c)
-{
-	struct spi_master *master = i2c_get_clientdata(i2c);
-
-	spi_unregister_master(master);
-
-	return 0;
 }

 static const struct i2c_device_id spi_xcomm_ids[] = {
···
 	},
 	.id_table = spi_xcomm_ids,
 	.probe = spi_xcomm_probe,
-	.remove = spi_xcomm_remove,
 };
 module_i2c_driver(spi_xcomm_driver);
+46 -28
drivers/spi/spi.c
···
 			spi->chip_select);
 }

+static int spi_dev_check(struct device *dev, void *data)
+{
+	struct spi_device *spi = to_spi_device(dev);
+	struct spi_device *new_spi = data;
+
+	if (spi->master == new_spi->master &&
+	    spi->chip_select == new_spi->chip_select)
+		return -EBUSY;
+	return 0;
+}
+
 /**
  * spi_add_device - Add spi_device allocated with spi_alloc_device
  * @spi: spi_device to register
···
 	static DEFINE_MUTEX(spi_add_lock);
 	struct spi_master *master = spi->master;
 	struct device *dev = master->dev.parent;
-	struct device *d;
 	int status;

 	/* Chipselects are numbered 0..max; validate. */
···
 	 */
 	mutex_lock(&spi_add_lock);

-	d = bus_find_device_by_name(&spi_bus_type, NULL, dev_name(&spi->dev));
-	if (d != NULL) {
+	status = bus_for_each_dev(&spi_bus_type, NULL, spi, spi_dev_check);
+	if (status) {
 		dev_err(dev, "chipselect %d already in use\n",
 				spi->chip_select);
-		put_device(d);
-		status = -EBUSY;
 		goto done;
 	}
···
 		goto out;
 	}

-	if (ret > 0)
+	if (ret > 0) {
+		ret = 0;
 		wait_for_completion(&master->xfer_completion);
+	}

 	trace_spi_transfer_stop(msg, xfer);
···
 *
 * Called by SPI drivers using the core transfer_one_message()
 * implementation to notify it that the current interrupt driven
- * transfer has finised and the next one may be scheduled.
+ * transfer has finished and the next one may be scheduled.
 */
 void spi_finalize_current_transfer(struct spi_master *master)
 {
···
 	}
 	/* Extract head of queue */
 	master->cur_msg =
-		list_entry(master->queue.next, struct spi_message, queue);
+		list_first_entry(&master->queue, struct spi_message, queue);

 	list_del_init(&master->cur_msg->queue);
 	if (master->busy)
···
 	ret = master->transfer_one_message(master, master->cur_msg);
 	if (ret) {
 		dev_err(&master->dev,
-			"failed to transfer one message from queue\n");
+			"failed to transfer one message from queue: %d\n", ret);
+		master->cur_msg->status = ret;
+		spi_finalize_current_message(master);
 		return;
 	}
 }
···

 	/* get a pointer to the next message, if any */
 	spin_lock_irqsave(&master->queue_lock, flags);
-	if (list_empty(&master->queue))
-		next = NULL;
-	else
-		next = list_entry(master->queue.next,
-				  struct spi_message, queue);
+	next = list_first_entry_or_null(&master->queue, struct spi_message,
+					queue);
 	spin_unlock_irqrestore(&master->queue_lock, flags);

 	return next;
···
 }
 EXPORT_SYMBOL_GPL(spi_setup);

-static int __spi_async(struct spi_device *spi, struct spi_message *message)
+static int __spi_validate(struct spi_device *spi, struct spi_message *message)
 {
 	struct spi_master *master = spi->master;
 	struct spi_transfer *xfer;
-
-	message->spi = spi;
-
-	trace_spi_message_submit(message);

 	if (list_empty(&message->transfers))
 		return -EINVAL;
···
 		if (xfer->rx_buf && !xfer->rx_nbits)
 			xfer->rx_nbits = SPI_NBITS_SINGLE;
 		/* check transfer tx/rx_nbits:
-		 * 1. keep the value is not out of single, dual and quad
-		 * 2. keep tx/rx_nbits is contained by mode in spi_device
-		 * 3. if SPI_3WIRE, tx/rx_nbits should be in single
+		 * 1. check the value matches one of single, dual and quad
+		 * 2. check tx/rx_nbits match the mode in spi_device
 		 */
 		if (xfer->tx_buf) {
 			if (xfer->tx_nbits != SPI_NBITS_SINGLE &&
···
 				return -EINVAL;
 			if ((xfer->tx_nbits == SPI_NBITS_QUAD) &&
 				!(spi->mode & SPI_TX_QUAD))
-				return -EINVAL;
-			if ((spi->mode & SPI_3WIRE) &&
-				(xfer->tx_nbits != SPI_NBITS_SINGLE))
 				return -EINVAL;
 		}
 		/* check transfer rx_nbits */
···
 			if ((xfer->rx_nbits == SPI_NBITS_QUAD) &&
 				!(spi->mode & SPI_RX_QUAD))
 				return -EINVAL;
-			if ((spi->mode & SPI_3WIRE) &&
-				(xfer->rx_nbits != SPI_NBITS_SINGLE))
-				return -EINVAL;
 		}
 	}

 	message->status = -EINPROGRESS;
+
+	return 0;
+}
+
+static int __spi_async(struct spi_device *spi, struct spi_message *message)
+{
+	struct spi_master *master = spi->master;
+
+	message->spi = spi;
+
+	trace_spi_message_submit(message);
+
 	return master->transfer(spi, message);
 }
···
 	struct spi_master *master = spi->master;
 	int ret;
 	unsigned long flags;
+
+	ret = __spi_validate(spi, message);
+	if (ret != 0)
+		return ret;

 	spin_lock_irqsave(&master->bus_lock_spinlock, flags);
···
 	struct spi_master *master = spi->master;
 	int ret;
 	unsigned long flags;
+
+	ret = __spi_validate(spi, message);
+	if (ret != 0)
+		return ret;

 	spin_lock_irqsave(&master->bus_lock_spinlock, flags);
+3 -5
include/linux/platform_data/spi-nuc900.h
···
 /*
- * arch/arm/mach-w90x900/include/mach/nuc900_spi.h
- *
  * Copyright (c) 2009 Nuvoton technology corporation.
 *
 * Wan ZongShun <mcuos.com@gmail.com>
···
 *
 */

-#ifndef __ASM_ARCH_SPI_H
-#define __ASM_ARCH_SPI_H
+#ifndef __SPI_NUC900_H
+#define __SPI_NUC900_H

 extern void mfp_set_groupg(struct device *dev, const char *subname);
···
 	unsigned char bits_per_word;
 };

-#endif /* __ASM_ARCH_SPI_H */
+#endif /* __SPI_NUC900_H */
+2
include/linux/spi/s3c24xx.h
···
 	void (*set_cs)(struct s3c2410_spi_info *spi, int cs, int pol);
 };

+extern int s3c24xx_set_fiq(unsigned int irq, bool on);
+
 #endif /* __LINUX_SPI_S3C24XX_H */
+11 -9
include/linux/spi/spi.h
···
 	struct spi_master	*master;
 	u32			max_speed_hz;
 	u8			chip_select;
+	u8			bits_per_word;
 	u16			mode;
 #define	SPI_CPHA	0x01			/* clock phase */
 #define	SPI_CPOL	0x02			/* clock polarity */
···
 #define	SPI_TX_QUAD	0x200			/* transmit with 4 wires */
 #define	SPI_RX_DUAL	0x400			/* receive with 2 wires */
 #define	SPI_RX_QUAD	0x800			/* receive with 4 wires */
-	u8			bits_per_word;
 	int			irq;
 	void			*controller_state;
 	void			*controller_data;
···
 * @unprepare_transfer_hardware: there are currently no more messages on the
 *	queue so the subsystem notifies the driver that it may relax the
 *	hardware by issuing this call
- * @set_cs: assert or deassert chip select, true to assert. May be called
+ * @set_cs: set the logic level of the chip select line. May be called
 *	from interrupt context.
 * @prepare_message: set up the controller to transfer a single message,
 *	for example doing DMA mapping. Called from threaded
 *	context.
- * @transfer_one: transfer a single spi_transfer. When the
- *	driver is finished with this transfer it must call
- *	spi_finalize_current_transfer() so the subsystem can issue
- *	the next transfer
+ * @transfer_one: transfer a single spi_transfer.
+ *	- return 0 if the transfer is finished,
+ *	- return 1 if the transfer is still in progress. When
+ *	  the driver is finished with this transfer it must
+ *	  call spi_finalize_current_transfer() so the subsystem
+ *	  can issue the next transfer
 * @unprepare_message: undo any work done by prepare_message().
 * @cs_gpios: Array of GPIOs to use as chip select lines; one per CS
 *	number. Any individual value may be -ENOENT for CS lines that
···
 	dma_addr_t	rx_dma;

 	unsigned	cs_change:1;
-	u8		tx_nbits;
-	u8		rx_nbits;
+	unsigned	tx_nbits:3;
+	unsigned	rx_nbits:3;
 #define	SPI_NBITS_SINGLE	0x01 /* 1bit transfer */
 #define	SPI_NBITS_DUAL		0x02 /* 2bits transfer */
 #define	SPI_NBITS_QUAD		0x04 /* 4bits transfer */
···
 	ssize_t			status;
 	u16			result;

-	status = spi_write_then_read(spi, &cmd, 1, (u8 *) &result, 2);
+	status = spi_write_then_read(spi, &cmd, 1, &result, 2);

 	/* return negative errno or unsigned value */
 	return (status < 0) ? status : result;