Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'tty-4.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty

Pull tty/serial driver updates from Greg KH:
"Here is the big tty and serial driver update for 4.4-rc1.

Lots of serial driver updates and a few small tty core changes. Full
details in the shortlog.

All of these have been in linux-next for a while"

* tag 'tty-4.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (148 commits)
tty: Use unbound workqueue for all input workers
tty: Abstract tty buffer work
tty: Prevent tty teardown during tty_write_message()
tty: core: Use correct spinlock flavor in tiocspgrp()
tty: Combine SIGTTOU/SIGTTIN handling
serial: amba-pl011: fix incorrect integer size in pl011_fifo_to_tty()
ttyFDC: Fix build problems due to use of module_{init,exit}
tty: remove unneeded return statement
serial: 8250_mid: add support for DMA engine handling from UART MMIO
dmaengine: hsu: remove platform data
dmaengine: hsu: introduce stubs for the exported functions
dmaengine: hsu: make the UART driver in control of selecting this driver
serial: fix mctrl helper functions
serial: 8250_pci: Intel MID UART support to its own driver
serial: fsl_lpuart: add earlycon support
tty: disable unbind for old 74xx based serial/mpsc console port
serial: pl011: Spelling s/clocks-names/clock-names/
n_tty: Remove reader wakeups for TTY_BREAK/TTY_PARITY chars
tty: synclink, fix indentation
serial: at91, fix rs485 properties
...

+2603 -1810
+2 -1
Documentation/devicetree/bindings/serial/ingenic,uart.txt
···
 * Ingenic SoC UART

 Required properties:
-- compatible : "ingenic,jz4740-uart" or "ingenic,jz4780-uart"
+- compatible : "ingenic,jz4740-uart", "ingenic,jz4760-uart",
+  "ingenic,jz4775-uart" or "ingenic,jz4780-uart"
 - reg : offset and length of the register set for the device.
 - interrupts : should contain uart interrupt.
 - clocks : phandles to the module & baud clocks.
+1 -1
Documentation/devicetree/bindings/serial/pl011.txt
···
   must correspond to the PCLK clocking the internal logic
   of the block. Just listing one clock (the first one) is
   deprecated.
-- clocks-names:
+- clock-names:
   When present, the first clock listed must be named
   "uartclk" and the second clock listed must be named
   "apb_pclk"
+6
Documentation/devicetree/bindings/serial/qcom,msm-uartdm.txt
···
 Optional properties:
 - dmas: Should contain dma specifiers for transmit and receive channels
 - dma-names: Should contain "tx" for transmit and "rx" for receive channels
+- qcom,tx-crci: Identificator <u32> for Client Rate Control Interface to be
+  used with TX DMA channel. Required when using DMA for transmission
+  with UARTDM v1.3 and bellow.
+- qcom,rx-crci: Identificator <u32> for Client Rate Control Interface to be
+  used with RX DMA channel. Required when using DMA for reception
+  with UARTDM v1.3 and bellow.

 Note: Aliases may be defined to ensure the correct ordering of the UARTs.
 The alias serialN will result in the UART being assigned port N. If any
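The new CRCI properties slot into a UARTDM node alongside the existing `dmas`/`dma-names` properties. A purely illustrative fragment (node name, addresses, interrupt/clock phandles, and CRCI numbers are hypothetical, not taken from any real board file):

```dts
serial@f991e000 {
	compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm";
	reg = <0xf991e000 0x1000>;
	interrupts = <0 108 0>;
	dmas = <&adm_dma 6>, <&adm_dma 7>;
	dma-names = "tx", "rx";
	/* CRCI identifiers, required for DMA on UARTDM v1.3 and below */
	qcom,tx-crci = <5>;
	qcom,rx-crci = <6>;
};
```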
+2
Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
···
 - "renesas,scifa-r8a7794" for R8A7794 (R-Car E2) SCIFA compatible UART.
 - "renesas,scifb-r8a7794" for R8A7794 (R-Car E2) SCIFB compatible UART.
 - "renesas,hscif-r8a7794" for R8A7794 (R-Car E2) HSCIF compatible UART.
+- "renesas,scif-r8a7795" for R8A7795 (R-Car H3) SCIF compatible UART.
+- "renesas,hscif-r8a7795" for R8A7795 (R-Car H3) HSCIF compatible UART.
 - "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART.
 - "renesas,scifb-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFB compatible UART.
 - "renesas,scif" for generic SCIF compatible UART.
+3
Documentation/devicetree/bindings/serial/snps-dw-apb-uart.txt
···
   Required elements: "baudclk", "apb_pclk"

 Optional properties:
+- snps,uart-16550-compatible : reflects the value of UART_16550_COMPATIBLE
+  configuration parameter. Define this if your UART does not implement the busy
+  functionality.
 - resets : phandle to the parent reset controller.
 - reg-shift : quantity to shift the register offsets by. If this property is
   not present then the register offsets are not shifted.
+7
Documentation/kernel-parameters.txt
···
 		serial port must already be setup and configured.
 		Options are not yet supported.

+	lpuart,<addr>
+	lpuart32,<addr>
+		Use early console provided by Freescale LP UART driver
+		found on Freescale Vybrid and QorIQ LS1021A processors.
+		A valid base address must be provided, and the serial
+		port must already be setup and configured.
+
 earlyprintk=	[X86,SH,BLACKFIN,ARM,M68k]
 		earlyprintk=vga
 		earlyprintk=efi
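Since these are options of the `earlycon=` parameter, the documented form translates to a kernel command-line entry like the following (the MMIO base address is board-specific and purely illustrative):

```
earlycon=lpuart32,0x40027000
```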
+9 -1
Documentation/serial/driver
···

 Some helpers are provided in order to set/get modem control lines via GPIO.

-mctrl_gpio_init(dev, idx):
+mctrl_gpio_init(port, idx):
 	This will get the {cts,rts,...}-gpios from device tree if they are
 	present and request them, set direction etc, and return an
 	allocated structure. devm_* functions are used, so there's no need
 	to call mctrl_gpio_free().
+	As this sets up the irq handling make sure to not handle changes to the
+	gpio input lines in your driver, too.

 mctrl_gpio_free(dev, gpios):
 	This will free the requested gpios in mctrl_gpio_init().
···
 mctrl_gpio_get(gpios, mctrl):
 	This will update mctrl with the gpios values.
+
+mctrl_gpio_enable_ms(gpios):
+	Enables irqs and handling of changes to the ms lines.
+
+mctrl_gpio_disable_ms(gpios):
+	Disables irqs and handling of changes to the ms lines.
+39 -21
Documentation/serial/tty.txt
···
 open() - Called when the line discipline is attached to
 	the terminal. No other call into the line
 	discipline for this tty will occur until it
-	completes successfully. Returning an error will
-	prevent the ldisc from being attached. Can sleep.
+	completes successfully. Should initialize any
+	state needed by the ldisc, and set receive_room
+	in the tty_struct to the maximum amount of data
+	the line discipline is willing to accept from the
+	driver with a single call to receive_buf().
+	Returning an error will prevent the ldisc from
+	being attached. Can sleep.

 close() - This is called on a terminal when the line
 	discipline is being unplugged. At the point of
···
 	No further calls into the ldisc code will occur.
 	The return value is ignored. Can sleep.

-write() - A process is writing data through the line
-	discipline. Multiple write calls are serialized
-	by the tty layer for the ldisc. May sleep.
+read() - (optional) A process requests reading data from
+	the line. Multiple read calls may occur in parallel
+	and the ldisc must deal with serialization issues.
+	If not defined, the process will receive an EIO
+	error. May sleep.
+
+write() - (optional) A process requests writing data to the
+	line. Multiple write calls are serialized by the
+	tty layer for the ldisc. If not defined, the
+	process will receive an EIO error. May sleep.

 flush_buffer() - (optional) May be called at any point between
 	open and close, and instructs the line discipline
···
 	termios semaphore so allowed to sleep. Serialized
 	against itself only.

-read() - Move data from the line discipline to the user.
-	Multiple read calls may occur in parallel and the
-	ldisc must deal with serialization issues. May
-	sleep.
+poll() - (optional) Check the status for the poll/select
+	calls. Multiple poll calls may occur in parallel.
+	May sleep.

-poll() - Check the status for the poll/select calls. Multiple
-	poll calls may occur in parallel. May sleep.
+ioctl() - (optional) Called when an ioctl is handed to the
+	tty layer that might be for the ldisc. Multiple
+	ioctl calls may occur in parallel. May sleep.

-ioctl() - Called when an ioctl is handed to the tty layer
-	that might be for the ldisc. Multiple ioctl calls
-	may occur in parallel. May sleep.
-
-compat_ioctl() - Called when a 32 bit ioctl is handed to the tty layer
-	that might be for the ldisc. Multiple ioctl calls
-	may occur in parallel. May sleep.
+compat_ioctl() - (optional) Called when a 32 bit ioctl is handed
+	to the tty layer that might be for the ldisc.
+	Multiple ioctl calls may occur in parallel.
+	May sleep.

 Driver Side Interfaces:

-receive_buf() - Hand buffers of bytes from the driver to the ldisc
-	for processing. Semantics currently rather
-	mysterious 8(
+receive_buf() - (optional) Called by the low-level driver to hand
+	a buffer of received bytes to the ldisc for
+	processing. The number of bytes is guaranteed not
+	to exceed the current value of tty->receive_room.
+	All bytes must be processed.
+
+receive_buf2() - (optional) Called by the low-level driver to hand
+	a buffer of received bytes to the ldisc for
+	processing. Returns the number of bytes processed.
+
+	If both receive_buf() and receive_buf2() are
+	defined, receive_buf2() should be preferred.

 write_wakeup() - May be called at any point between open and close.
 	The TTY_DO_WRITE_WAKEUP flag indicates if a call
+55
arch/arm64/include/asm/dcc.h
···
+/* Copyright (c) 2014-2015 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * A call to __dcc_getchar() or __dcc_putchar() is typically followed by
+ * a call to __dcc_getstatus(). We want to make sure that the CPU does
+ * not speculative read the DCC status before executing the read or write
+ * instruction. That's what the ISBs are for.
+ *
+ * The 'volatile' ensures that the compiler does not cache the status bits,
+ * and instead reads the DCC register every time.
+ */
+#ifndef __ASM_DCC_H
+#define __ASM_DCC_H
+
+#include <asm/barrier.h>
+
+static inline u32 __dcc_getstatus(void)
+{
+	u32 ret;
+
+	asm volatile("mrs %0, mdccsr_el0" : "=r" (ret));
+
+	return ret;
+}
+
+static inline char __dcc_getchar(void)
+{
+	char c;
+
+	asm volatile("mrs %0, dbgdtrrx_el0" : "=r" (c));
+	isb();
+
+	return c;
+}
+
+static inline void __dcc_putchar(char c)
+{
+	/*
+	 * The typecast is to make absolutely certain that 'c' is
+	 * zero-extended.
+	 */
+	asm volatile("msr dbgdtrtx_el0, %0"
+			: : "r" ((unsigned long)(unsigned char)c));
+	isb();
+}
+
+#endif
+2 -7
drivers/dma/hsu/Kconfig
···
 	select DMA_VIRTUAL_CHANNELS

 config HSU_DMA_PCI
-	tristate "High Speed UART DMA PCI driver"
-	depends on PCI
-	select HSU_DMA
-	help
-	  Support the High Speed UART DMA on the platfroms that
-	  enumerate it as a PCI device. For example, Intel Medfield
-	  has integrated this HSU DMA controller.
+	tristate
+	depends on HSU_DMA && PCI
+7 -17
drivers/dma/hsu/hsu.c
···
 	u32 sr;

 	/* Sanity check */
-	if (nr >= chip->pdata->nr_channels)
+	if (nr >= chip->hsu->nr_channels)
 		return IRQ_NONE;

 	hsuc = &chip->hsu->chan[nr];
···
 int hsu_dma_probe(struct hsu_dma_chip *chip)
 {
 	struct hsu_dma *hsu;
-	struct hsu_dma_platform_data *pdata = chip->pdata;
 	void __iomem *addr = chip->regs + chip->offset;
 	unsigned short i;
 	int ret;
···
 	chip->hsu = hsu;

-	if (!pdata) {
-		pdata = devm_kzalloc(chip->dev, sizeof(*pdata), GFP_KERNEL);
-		if (!pdata)
-			return -ENOMEM;
+	/* Calculate nr_channels from the IO space length */
+	hsu->nr_channels = (chip->length - chip->offset) / HSU_DMA_CHAN_LENGTH;

-		chip->pdata = pdata;
-
-		/* Guess nr_channels from the IO space length */
-		pdata->nr_channels = (chip->length - chip->offset) /
-				     HSU_DMA_CHAN_LENGTH;
-	}
-
-	hsu->chan = devm_kcalloc(chip->dev, pdata->nr_channels,
+	hsu->chan = devm_kcalloc(chip->dev, hsu->nr_channels,
 				 sizeof(*hsu->chan), GFP_KERNEL);
 	if (!hsu->chan)
 		return -ENOMEM;

 	INIT_LIST_HEAD(&hsu->dma.channels);
-	for (i = 0; i < pdata->nr_channels; i++) {
+	for (i = 0; i < hsu->nr_channels; i++) {
 		struct hsu_dma_chan *hsuc = &hsu->chan[i];

 		hsuc->vchan.desc_free = hsu_dma_desc_free;
···
 	if (ret)
 		return ret;

-	dev_info(chip->dev, "Found HSU DMA, %d channels\n", pdata->nr_channels);
+	dev_info(chip->dev, "Found HSU DMA, %d channels\n", hsu->nr_channels);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(hsu_dma_probe);
···
 	dma_async_device_unregister(&hsu->dma);

-	for (i = 0; i < chip->pdata->nr_channels; i++) {
+	for (i = 0; i < hsu->nr_channels; i++) {
 		struct hsu_dma_chan *hsuc = &hsu->chan[i];

 		tasklet_kill(&hsuc->vchan.task);
+1
drivers/dma/hsu/hsu.h
···

 	/* channels */
 	struct hsu_dma_chan *chan;
+	unsigned short nr_channels;
 };

 static inline struct hsu_dma *to_hsu_dma(struct dma_device *ddev)
+1 -1
drivers/dma/hsu/pci.c
···
 	irqreturn_t ret = IRQ_NONE;

 	dmaisr = readl(chip->regs + HSU_PCI_DMAISR);
-	for (i = 0; i < chip->pdata->nr_channels; i++) {
+	for (i = 0; i < chip->hsu->nr_channels; i++) {
 		if (dmaisr & 0x1)
 			ret |= hsu_dma_irq(chip, i);
 		dmaisr >>= 1;
+1 -1
drivers/isdn/i4l/isdn_tty.c
···
 	 * line status register.
 	 */
 	if (port->flags & ASYNC_INITIALIZED) {
-		tty_wait_until_sent_from_close(tty, 3000); /* 30 seconds timeout */
+		tty_wait_until_sent(tty, 3000); /* 30 seconds timeout */
 		/*
 		 * Before we drop DTR, make sure the UART transmitter
 		 * has completely drained; this is especially
-9
drivers/tty/cyclades.c
···
 #endif

 	/*
-	 * If the port is the middle of closing, bail out now
-	 */
-	if (info->port.flags & ASYNC_CLOSING) {
-		wait_event_interruptible_tty(tty, info->port.close_wait,
-				!(info->port.flags & ASYNC_CLOSING));
-		return (info->port.flags & ASYNC_HUP_NOTIFY) ? -EAGAIN: -ERESTARTSYS;
-	}
-
-	/*
 	 * Start up serial port
 	 */
 	retval = cy_startup(info, tty);
+1 -1
drivers/tty/hvc/Kconfig
···

 config HVC_DCC
 	bool "ARM JTAG DCC console"
-	depends on ARM
+	depends on ARM || ARM64
 	select HVC_DRIVER
 	help
 	  This console uses the JTAG DCC on ARM to create a console under the HVC
+2 -18
drivers/tty/hvc/hvc_console.c
···
 #include <linux/kernel.h>
 #include <linux/kthread.h>
 #include <linux/list.h>
-#include <linux/module.h>
+#include <linux/init.h>
 #include <linux/major.h>
 #include <linux/atomic.h>
 #include <linux/sysrq.h>
···
 		 * there is no buffered data otherwise sleeps on a wait queue
 		 * waking periodically to check chars_in_buffer().
 		 */
-		tty_wait_until_sent_from_close(tty, HVC_CLOSE_WAIT);
+		tty_wait_until_sent(tty, HVC_CLOSE_WAIT);
 	} else {
 		if (hp->port.count < 0)
 			printk(KERN_ERR "hvc_close %X: oops, count is %d\n",
···
 out:
 	return err;
 }
-
-/* This isn't particularly necessary due to this being a console driver
- * but it is nice to be thorough.
- */
-static void __exit hvc_exit(void)
-{
-	if (hvc_driver) {
-		kthread_stop(hvc_task);
-
-		tty_unregister_driver(hvc_driver);
-		/* return tty_struct instances allocated in hvc_init(). */
-		put_tty_driver(hvc_driver);
-		unregister_console(&hvc_console);
-	}
-}
-module_exit(hvc_exit);
+11 -4
drivers/tty/hvc/hvc_dcc.c
···

 static int __init hvc_dcc_console_init(void)
 {
+	int ret;
+
 	if (!hvc_dcc_check())
 		return -ENODEV;

-	hvc_instantiate(0, 0, &hvc_dcc_get_put_ops);
-	return 0;
+	/* Returns -1 if error */
+	ret = hvc_instantiate(0, 0, &hvc_dcc_get_put_ops);
+
+	return ret < 0 ? -ENODEV : 0;
 }
 console_initcall(hvc_dcc_console_init);

 static int __init hvc_dcc_init(void)
 {
+	struct hvc_struct *p;
+
 	if (!hvc_dcc_check())
 		return -ENODEV;

-	hvc_alloc(0, 0, &hvc_dcc_get_put_ops, 128);
-	return 0;
+	p = hvc_alloc(0, 0, &hvc_dcc_get_put_ops, 128);
+
+	return PTR_ERR_OR_ZERO(p);
 }
 device_initcall(hvc_dcc_init);
+1 -1
drivers/tty/hvc/hvcs.c
···
 	irq = hvcsd->vdev->irq;
 	spin_unlock_irqrestore(&hvcsd->lock, flags);

-	tty_wait_until_sent_from_close(tty, HVCS_CLOSE_WAIT);
+	tty_wait_until_sent(tty, HVCS_CLOSE_WAIT);

 	/*
 	 * This line is important because it tells hvcs_open that this
+3 -36
drivers/tty/mips_ejtag_fdc.c
···
 	/* Try requesting the IRQ */
 	if (priv->irq >= 0) {
 		/*
-		 * IRQF_SHARED, IRQF_NO_SUSPEND: The FDC IRQ may be shared with
+		 * IRQF_SHARED, IRQF_COND_SUSPEND: The FDC IRQ may be shared with
 		 * other local interrupts such as the timer which sets
 		 * IRQF_TIMER (including IRQF_NO_SUSPEND).
 		 *
···
 		 */
 		ret = devm_request_irq(priv->dev, priv->irq, mips_ejtag_fdc_isr,
 				       IRQF_PERCPU | IRQF_SHARED |
-				       IRQF_NO_THREAD | IRQF_NO_SUSPEND,
+				       IRQF_NO_THREAD | IRQF_COND_SUSPEND,
 				       priv->fdc_name, priv);
 		if (ret)
 			priv->irq = -1;
···
 	}
 	put_tty_driver(priv->driver);
 	return ret;
-}
-
-static int mips_ejtag_fdc_tty_remove(struct mips_cdmm_device *dev)
-{
-	struct mips_ejtag_fdc_tty *priv = mips_cdmm_get_drvdata(dev);
-	struct mips_ejtag_fdc_tty_port *dport;
-	int nport;
-	unsigned int cfg;
-
-	if (priv->irq >= 0) {
-		raw_spin_lock_irq(&priv->lock);
-		cfg = mips_ejtag_fdc_read(priv, REG_FDCFG);
-		/* Disable interrupts */
-		cfg &= ~(REG_FDCFG_TXINTTHRES | REG_FDCFG_RXINTTHRES);
-		cfg |= REG_FDCFG_TXINTTHRES_DISABLED;
-		cfg |= REG_FDCFG_RXINTTHRES_DISABLED;
-		mips_ejtag_fdc_write(priv, REG_FDCFG, cfg);
-		raw_spin_unlock_irq(&priv->lock);
-	} else {
-		priv->removing = true;
-		del_timer_sync(&priv->poll_timer);
-	}
-	kthread_stop(priv->thread);
-	if (dev->cpu == 0)
-		mips_ejtag_fdc_con.tty_drv = NULL;
-	tty_unregister_driver(priv->driver);
-	for (nport = 0; nport < NUM_TTY_CHANNELS; nport++) {
-		dport = &priv->ports[nport];
-		tty_port_destroy(&dport->port);
-	}
-	put_tty_driver(priv->driver);
-	return 0;
 }

 static int mips_ejtag_fdc_tty_cpu_down(struct mips_cdmm_device *dev)
···
 		.name = "mips_ejtag_fdc",
 	},
 	.probe = mips_ejtag_fdc_tty_probe,
-	.remove = mips_ejtag_fdc_tty_remove,
 	.cpu_down = mips_ejtag_fdc_tty_cpu_down,
 	.cpu_up = mips_ejtag_fdc_tty_cpu_up,
 	.id_table = mips_ejtag_fdc_tty_ids,
 };
-module_mips_cdmm_driver(mips_ejtag_fdc_tty_driver);
+builtin_mips_cdmm_driver(mips_ejtag_fdc_tty_driver);

 static int __init mips_ejtag_fdc_init_console(void)
 {
+16 -12
drivers/tty/n_r3964.c
···
 			add_msg(pHeader->owner, R3964_MSG_ACK, pHeader->length,
 				error_code, NULL);
 		}
-		wake_up_interruptible(&pInfo->read_wait);
+		wake_up_interruptible(&pInfo->tty->read_wait);
 	}

 	spin_lock_irqsave(&pInfo->lock, flags);
···
 				pBlock);
 		}
 	}
-	wake_up_interruptible(&pInfo->read_wait);
+	wake_up_interruptible(&pInfo->tty->read_wait);

 	pInfo->state = R3964_IDLE;

···
 	}

 	spin_lock_init(&pInfo->lock);
+	mutex_init(&pInfo->read_lock);
 	pInfo->tty = tty;
-	init_waitqueue_head(&pInfo->read_wait);
 	pInfo->priority = R3964_MASTER;
 	pInfo->rx_first = pInfo->rx_last = NULL;
 	pInfo->tx_first = pInfo->tx_last = NULL;
···
 	}

 	/* Free buffers: */
-	wake_up_interruptible(&pInfo->read_wait);
 	kfree(pInfo->rx_buf);
 	TRACE_M("r3964_close - rx_buf kfree %p", pInfo->rx_buf);
 	kfree(pInfo->tx_buf);
···

 	TRACE_L("read()");

-	tty_lock(tty);
+	/*
+	 * Internal serialization of reads.
+	 */
+	if (file->f_flags & O_NONBLOCK) {
+		if (!mutex_trylock(&pInfo->read_lock))
+			return -EAGAIN;
+	} else {
+		if (mutex_lock_interruptible(&pInfo->read_lock))
+			return -ERESTARTSYS;
+	}

 	pClient = findClient(pInfo, task_pid(current));
 	if (pClient) {
···
 			goto unlock;
 		}
 		/* block until there is a message: */
-		wait_event_interruptible_tty(tty, pInfo->read_wait,
+		wait_event_interruptible(tty->read_wait,
 				(pMsg = remove_msg(pInfo, pClient)));
 	}
···
 	}
 	ret = -EPERM;
 unlock:
-	tty_unlock(tty);
+	mutex_unlock(&pInfo->read_lock);
 	return ret;
 }
···
 	pHeader->locks = 0;
 	pHeader->owner = NULL;

-	tty_lock(tty);
-
 	pClient = findClient(pInfo, task_pid(current));
 	if (pClient) {
 		pHeader->owner = pClient;
···
 	 */
 	add_tx_queue(pInfo, pHeader);
 	trigger_transmit(pInfo);
-
-	tty_unlock(tty);

 	return 0;
 }
···

 	pClient = findClient(pInfo, task_pid(current));
 	if (pClient) {
-		poll_wait(file, &pInfo->read_wait, wait);
+		poll_wait(file, &tty->read_wait, wait);
 		spin_lock_irqsave(&pInfo->lock, flags);
 		pMsg = pClient->first_msg;
 		spin_unlock_irqrestore(&pInfo->lock, flags);
+3 -29
drivers/tty/n_tty.c
···
 	 */
 	WARN_RATELIMIT(test_bit(TTY_LDISC_HALTED, &tty->flags),
 		       "scheduling buffer work for halted ldisc\n");
-		queue_work(system_unbound_wq, &tty->port->buf.work);
+		tty_buffer_restart_work(tty->port);
 	}
 }
···
 			put_tty_queue('\0', ldata);
 	}
 	put_tty_queue('\0', ldata);
-	if (waitqueue_active(&tty->read_wait))
-		wake_up_interruptible_poll(&tty->read_wait, POLLIN);
 }

 /**
···
 			put_tty_queue('\0', ldata);
 	} else
 		put_tty_queue(c, ldata);
-	if (waitqueue_active(&tty->read_wait))
-		wake_up_interruptible_poll(&tty->read_wait, POLLIN);
 }

 static void
···

 static int job_control(struct tty_struct *tty, struct file *file)
 {
-	struct pid *pgrp;
-
 	/* Job control check -- must be done at start and after
 	   every sleep (POSIX.1 7.1.1.4). */
 	/* NOTE: not yet done after every sleep pending a thorough
 	   check of the logic of this change. -- jlc */
 	/* don't stop on /dev/console */
-	if (file->f_op->write == redirected_tty_write ||
-	    current->signal->tty != tty)
+	if (file->f_op->write == redirected_tty_write)
 		return 0;

-	rcu_read_lock();
-	pgrp = task_pgrp(current);
-
-	spin_lock_irq(&tty->ctrl_lock);
-	if (!tty->pgrp)
-		printk(KERN_ERR "n_tty_read: no tty->pgrp!\n");
-	else if (pgrp != tty->pgrp) {
-		spin_unlock_irq(&tty->ctrl_lock);
-		if (is_ignored(SIGTTIN) || is_current_pgrp_orphaned()) {
-			rcu_read_unlock();
-			return -EIO;
-		}
-		kill_pgrp(pgrp, SIGTTIN, 1);
-		rcu_read_unlock();
-		set_thread_flag(TIF_SIGPENDING);
-		return -ERESTARTSYS;
-	}
-	spin_unlock_irq(&tty->ctrl_lock);
-	rcu_read_unlock();
-	return 0;
+	return __tty_check_change(tty, SIGTTIN);
 }
+5 -2
drivers/tty/pty.c
···
  */

 #include <linux/module.h>
-
 #include <linux/errno.h>
 #include <linux/interrupt.h>
 #include <linux/tty.h>
···
 }

 static int legacy_count = CONFIG_LEGACY_PTY_COUNT;
+/*
+ * not really modular, but the easiest way to keep compat with existing
+ * bootargs behaviour is to continue using module_param here.
+ */
 module_param(legacy_count, int, 0);

 /*
···
 	unix98_pty_init();
 	return 0;
 }
-module_init(pty_init);
+device_initcall(pty_init);
-13
drivers/tty/rocket.c
···
 	if (!page)
 		return -ENOMEM;

-	if (port->flags & ASYNC_CLOSING) {
-		retval = wait_for_completion_interruptible(&info->close_wait);
-		free_page(page);
-		if (retval)
-			return retval;
-		return ((port->flags & ASYNC_HUP_NOTIFY) ? -EAGAIN : -ERESTARTSYS);
-	}
-
 	/*
 	 * We must not sleep from here until the port is marked fully in use.
 	 */
···
 	mutex_unlock(&port->mutex);
 	tty_port_tty_set(port, NULL);

-	wake_up_interruptible(&port->close_wait);
 	complete_all(&info->close_wait);
 	atomic_dec(&rp_num_ports_open);
···
 #endif
 	rp_flush_buffer(tty);
 	spin_lock_irqsave(&info->port.lock, flags);
-	if (info->port.flags & ASYNC_CLOSING) {
-		spin_unlock_irqrestore(&info->port.lock, flags);
-		return;
-	}
 	if (info->port.count)
 		atomic_dec(&rp_num_ports_open);
 	clear_bit((info->aiop * 8) + info->chan, (void *) &xmit_flags[info->board]);
+2 -3
drivers/tty/serial/68328serial.c
···
 	struct m68k_serial *info = &m68k_soft[0];
 	char c;

-	if (info == 0) return;
-	if (info->xmit_buf == 0) return;
+	if (info == NULL) return;
+	if (info->xmit_buf == NULL) return;

 	local_irq_save(flags);
 	left = info->xmit_cnt;
···
 		wake_up_interruptible(&port->open_wait);
 	}
 	port->flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CLOSING);
-	wake_up_interruptible(&port->close_wait);
 	local_irq_restore(flags);
 }
+20 -6
drivers/tty/serial/8250/8250_core.c
···
 	for (i = 0; i < nr_uarts; i++) {
 		struct uart_8250_port *up = &serial8250_ports[i];

+		if (up->port.type == PORT_8250_CIR)
+			continue;
+
 		if (up->port.dev)
 			continue;
···
 		if (up->dl_write)
 			uart->dl_write = up->dl_write;

-		if (serial8250_isa_config != NULL)
-			serial8250_isa_config(0, &uart->port,
-					&uart->capabilities);
+		if (uart->port.type != PORT_8250_CIR) {
+			if (serial8250_isa_config != NULL)
+				serial8250_isa_config(0, &uart->port,
+						&uart->capabilities);

-		ret = uart_add_one_port(&serial8250_reg, &uart->port);
-		if (ret == 0)
-			ret = uart->port.line;
+			ret = uart_add_one_port(&serial8250_reg,
+						&uart->port);
+			if (ret == 0)
+				ret = uart->port.line;
+		} else {
+			dev_info(uart->port.dev,
+				"skipping CIR port at 0x%lx / 0x%llx, IRQ %d\n",
+				uart->port.iobase,
+				(unsigned long long)uart->port.mapbase,
+				uart->port.irq);
+
+			ret = 0;
+		}
 	}
 	mutex_unlock(&serial_mutex);
-6
drivers/tty/serial/8250/8250_dma.c
···
 	struct dma_tx_state state;
 	int count;

-	dma_sync_single_for_cpu(dma->rxchan->device->dev, dma->rx_addr,
-				dma->rx_size, DMA_FROM_DEVICE);
-
 	dma->rx_running = 0;
 	dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
···
 	desc->callback_param = p;

 	dma->rx_cookie = dmaengine_submit(desc);
-
-	dma_sync_single_for_device(dma->rxchan->device->dev, dma->rx_addr,
-				   dma->rx_size, DMA_FROM_DEVICE);

 	dma_async_issue_pending(dma->rxchan);
+133 -157
drivers/tty/serial/8250/8250_dw.c
···
 	struct clk *pclk;
 	struct reset_control *rst;
 	struct uart_8250_dma dma;
+
+	unsigned int skip_autocfg:1;
+	unsigned int uart_16550_compatible:1;
 };

 #define BYT_PRV_CLK 0x800
···
 	serial8250_do_set_termios(p, termios, old);
 }

-static bool dw8250_dma_filter(struct dma_chan *chan, void *param)
+/*
+ * dw8250_fallback_dma_filter will prevent the UART from getting just any free
+ * channel on platforms that have DMA engines, but don't have any channels
+ * assigned to the UART.
+ *
+ * REVISIT: This is a work around for limitation in the DMA Engine API. Once the
+ * core problem is fixed, this function is no longer needed.
+ */
+static bool dw8250_fallback_dma_filter(struct dma_chan *chan, void *param)
 {
 	return false;
 }

-static void dw8250_setup_port(struct uart_8250_port *up)
+static bool dw8250_idma_filter(struct dma_chan *chan, void *param)
 {
-	struct uart_port *p = &up->port;
-	u32 reg = readl(p->membase + DW_UART_UCV);
+	return param == chan->device->dev->parent;
+}
+
+static void dw8250_quirks(struct uart_port *p, struct dw8250_data *data)
+{
+	if (p->dev->of_node) {
+		struct device_node *np = p->dev->of_node;
+		int id;
+
+		/* get index of serial line, if found in DT aliases */
+		id = of_alias_get_id(np, "serial");
+		if (id >= 0)
+			p->line = id;
+#ifdef CONFIG_64BIT
+		if (of_device_is_compatible(np, "cavium,octeon-3860-uart")) {
+			p->serial_in = dw8250_serial_inq;
+			p->serial_out = dw8250_serial_outq;
+			p->flags = UPF_SKIP_TEST | UPF_SHARE_IRQ | UPF_FIXED_TYPE;
+			p->type = PORT_OCTEON;
+			data->usr_reg = 0x27;
+			data->skip_autocfg = true;
+		}
+#endif
+	} else if (has_acpi_companion(p->dev)) {
+		p->iotype = UPIO_MEM32;
+		p->regshift = 2;
+		p->serial_in = dw8250_serial_in32;
+		p->set_termios = dw8250_set_termios;
+		/* So far none of there implement the Busy Functionality */
+		data->uart_16550_compatible = true;
+	}
+
+	/* Platforms with iDMA */
+	if (platform_get_resource_byname(to_platform_device(p->dev),
+					 IORESOURCE_MEM, "lpss_priv")) {
+		p->set_termios = dw8250_set_termios;
+		data->dma.rx_param = p->dev->parent;
+		data->dma.tx_param = p->dev->parent;
+		data->dma.fn = dw8250_idma_filter;
+	}
+}
+
+static void dw8250_setup_port(struct uart_port *p)
+{
+	struct uart_8250_port *up = up_to_u8250p(p);
+	u32 reg;

 	/*
 	 * If the Component Version Register returns zero, we know that
 	 * ADDITIONAL_FEATURES are not enabled. No need to go any further.
 	 */
+	reg = readl(p->membase + DW_UART_UCV);
 	if (!reg)
 		return;

-	dev_dbg_ratelimited(p->dev, "Designware UART version %c.%c%c\n",
+	dev_dbg(p->dev, "Designware UART version %c.%c%c\n",
 		(reg >> 24) & 0xff, (reg >> 16) & 0xff, (reg >> 8) & 0xff);

 	reg = readl(p->membase + DW_UART_CPR);
···
 	p->type = PORT_16550A;
 	p->flags |= UPF_FIXED_TYPE;
 	p->fifosize = DW_UART_CPR_FIFO_SIZE(reg);
-	up->tx_loadsz = p->fifosize;
 	up->capabilities = UART_CAP_FIFO;
 }
···
 	up->capabilities |= UART_CAP_AFE;
 }

-static int dw8250_probe_of(struct uart_port *p,
-			   struct dw8250_data *data)
-{
-	struct device_node *np = p->dev->of_node;
-	struct uart_8250_port *up = up_to_u8250p(p);
-	u32 val;
-	bool has_ucv = true;
-	int id;
-
-#ifdef CONFIG_64BIT
-	if (of_device_is_compatible(np, "cavium,octeon-3860-uart")) {
-		p->serial_in = dw8250_serial_inq;
-		p->serial_out = dw8250_serial_outq;
-		p->flags = UPF_SKIP_TEST | UPF_SHARE_IRQ | UPF_FIXED_TYPE;
-		p->type = PORT_OCTEON;
-		data->usr_reg = 0x27;
-		has_ucv = false;
-	} else
-#endif
-	if (!of_property_read_u32(np, "reg-io-width", &val)) {
-		switch (val) {
-		case 1:
-			break;
-		case 4:
-			p->iotype = UPIO_MEM32;
-			p->serial_in = dw8250_serial_in32;
-			p->serial_out = dw8250_serial_out32;
-			break;
-		default:
-			dev_err(p->dev, "unsupported reg-io-width (%u)\n", val);
-			return -EINVAL;
-		}
-	}
-	if (has_ucv)
-		dw8250_setup_port(up);
-
-	/* if we have a valid fifosize, try hooking up DMA here */
-	if (p->fifosize) {
-		up->dma = &data->dma;
-
-		up->dma->rxconf.src_maxburst = p->fifosize / 4;
-		up->dma->txconf.dst_maxburst = p->fifosize / 4;
-	}
-
-	if (!of_property_read_u32(np, "reg-shift", &val))
-		p->regshift = val;
-
-	/* get index of serial line, if found in DT aliases */
-	id = of_alias_get_id(np, "serial");
-	if (id >= 0)
-		p->line = id;
-
-	if (of_property_read_bool(np, "dcd-override")) {
-		/* Always report DCD as active */
-		data->msr_mask_on |= UART_MSR_DCD;
-		data->msr_mask_off |= UART_MSR_DDCD;
-	}
-
-	if (of_property_read_bool(np, "dsr-override")) {
-		/* Always report DSR as active */
-		data->msr_mask_on |= UART_MSR_DSR;
-		data->msr_mask_off |= UART_MSR_DDSR;
-	}
-
-	if (of_property_read_bool(np, "cts-override")) {
-		/* Always report CTS as active */
-		data->msr_mask_on |= UART_MSR_CTS;
-		data->msr_mask_off |= UART_MSR_DCTS;
-	}
-
-	if (of_property_read_bool(np, "ri-override")) {
-		/* Always report Ring indicator as inactive */
-		data->msr_mask_off |= UART_MSR_RI;
-		data->msr_mask_off |= UART_MSR_TERI;
-	}
-
-	return 0;
-}
-
-static bool dw8250_idma_filter(struct dma_chan *chan, void *param)
-{
-	struct device *dev = param;
-
-	if (dev != chan->device->dev->parent)
-		return false;
-
-	return true;
-}
-
-static int dw8250_probe_acpi(struct
uart_8250_port *up, 429 - struct dw8250_data *data) 430 - { 431 - struct uart_port *p = &up->port; 432 - 433 - dw8250_setup_port(up); 434 - 435 - p->iotype = UPIO_MEM32; 436 - p->serial_in = dw8250_serial_in32; 437 - p->serial_out = dw8250_serial_out32; 438 - p->regshift = 2; 439 - 440 - /* Platforms with iDMA */ 441 - if (platform_get_resource_byname(to_platform_device(up->port.dev), 442 - IORESOURCE_MEM, "lpss_priv")) { 443 - data->dma.rx_param = up->port.dev->parent; 444 - data->dma.tx_param = up->port.dev->parent; 445 - data->dma.fn = dw8250_idma_filter; 446 - } 447 - 448 - up->dma = &data->dma; 449 - up->dma->rxconf.src_maxburst = p->fifosize / 4; 450 - up->dma->txconf.dst_maxburst = p->fifosize / 4; 451 - 452 - up->port.set_termios = dw8250_set_termios; 453 - 454 - return 0; 455 - } 456 - 457 284 static int dw8250_probe(struct platform_device *pdev) 458 285 { 459 286 struct uart_8250_port uart = {}; 460 287 struct resource *regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 461 288 int irq = platform_get_irq(pdev, 0); 289 + struct uart_port *p = &uart.port; 462 290 struct dw8250_data *data; 463 291 int err; 292 + u32 val; 464 293 465 294 if (!regs) { 466 295 dev_err(&pdev->dev, "no registers defined\n"); ··· 357 418 return irq; 358 419 } 359 420 360 - spin_lock_init(&uart.port.lock); 361 - uart.port.mapbase = regs->start; 362 - uart.port.irq = irq; 363 - uart.port.handle_irq = dw8250_handle_irq; 364 - uart.port.pm = dw8250_do_pm; 365 - uart.port.type = PORT_8250; 366 - uart.port.flags = UPF_SHARE_IRQ | UPF_BOOT_AUTOCONF | UPF_FIXED_PORT; 367 - uart.port.dev = &pdev->dev; 421 + spin_lock_init(&p->lock); 422 + p->mapbase = regs->start; 423 + p->irq = irq; 424 + p->handle_irq = dw8250_handle_irq; 425 + p->pm = dw8250_do_pm; 426 + p->type = PORT_8250; 427 + p->flags = UPF_SHARE_IRQ | UPF_FIXED_PORT; 428 + p->dev = &pdev->dev; 429 + p->iotype = UPIO_MEM; 430 + p->serial_in = dw8250_serial_in; 431 + p->serial_out = dw8250_serial_out; 368 432 369 - 
uart.port.membase = devm_ioremap(&pdev->dev, regs->start, 370 - resource_size(regs)); 371 - if (!uart.port.membase) 433 + p->membase = devm_ioremap(&pdev->dev, regs->start, resource_size(regs)); 434 + if (!p->membase) 372 435 return -ENOMEM; 373 436 374 437 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 375 438 if (!data) 376 439 return -ENOMEM; 377 440 441 + data->dma.fn = dw8250_fallback_dma_filter; 378 442 data->usr_reg = DW_UART_USR; 443 + p->private_data = data; 444 + 445 + data->uart_16550_compatible = device_property_read_bool(p->dev, 446 + "snps,uart-16550-compatible"); 447 + 448 + err = device_property_read_u32(p->dev, "reg-shift", &val); 449 + if (!err) 450 + p->regshift = val; 451 + 452 + err = device_property_read_u32(p->dev, "reg-io-width", &val); 453 + if (!err && val == 4) { 454 + p->iotype = UPIO_MEM32; 455 + p->serial_in = dw8250_serial_in32; 456 + p->serial_out = dw8250_serial_out32; 457 + } 458 + 459 + if (device_property_read_bool(p->dev, "dcd-override")) { 460 + /* Always report DCD as active */ 461 + data->msr_mask_on |= UART_MSR_DCD; 462 + data->msr_mask_off |= UART_MSR_DDCD; 463 + } 464 + 465 + if (device_property_read_bool(p->dev, "dsr-override")) { 466 + /* Always report DSR as active */ 467 + data->msr_mask_on |= UART_MSR_DSR; 468 + data->msr_mask_off |= UART_MSR_DDSR; 469 + } 470 + 471 + if (device_property_read_bool(p->dev, "cts-override")) { 472 + /* Always report CTS as active */ 473 + data->msr_mask_on |= UART_MSR_CTS; 474 + data->msr_mask_off |= UART_MSR_DCTS; 475 + } 476 + 477 + if (device_property_read_bool(p->dev, "ri-override")) { 478 + /* Always report Ring indicator as inactive */ 479 + data->msr_mask_off |= UART_MSR_RI; 480 + data->msr_mask_off |= UART_MSR_TERI; 481 + } 379 482 380 483 /* Always ask for fixed clock rate from a property. 
*/ 381 - device_property_read_u32(&pdev->dev, "clock-frequency", 382 - &uart.port.uartclk); 484 + device_property_read_u32(p->dev, "clock-frequency", &p->uartclk); 383 485 384 486 /* If there is separate baudclk, get the rate from it. */ 385 487 data->clk = devm_clk_get(&pdev->dev, "baudclk"); ··· 434 454 dev_warn(&pdev->dev, "could not enable optional baudclk: %d\n", 435 455 err); 436 456 else 437 - uart.port.uartclk = clk_get_rate(data->clk); 457 + p->uartclk = clk_get_rate(data->clk); 438 458 } 439 459 440 460 /* If no clock rate is defined, fail. */ 441 - if (!uart.port.uartclk) { 461 + if (!p->uartclk) { 442 462 dev_err(&pdev->dev, "clock rate not defined\n"); 443 463 return -EINVAL; 444 464 } ··· 464 484 if (!IS_ERR(data->rst)) 465 485 reset_control_deassert(data->rst); 466 486 467 - data->dma.rx_param = data; 468 - data->dma.tx_param = data; 469 - data->dma.fn = dw8250_dma_filter; 487 + dw8250_quirks(p, data); 470 488 471 - uart.port.iotype = UPIO_MEM; 472 - uart.port.serial_in = dw8250_serial_in; 473 - uart.port.serial_out = dw8250_serial_out; 474 - uart.port.private_data = data; 489 + /* If the Busy Functionality is not implemented, don't handle it */ 490 + if (data->uart_16550_compatible) { 491 + p->serial_out = NULL; 492 + p->handle_irq = NULL; 493 + } 475 494 476 - if (pdev->dev.of_node) { 477 - err = dw8250_probe_of(&uart.port, data); 478 - if (err) 479 - goto err_reset; 480 - } else if (ACPI_HANDLE(&pdev->dev)) { 481 - err = dw8250_probe_acpi(&uart, data); 482 - if (err) 483 - goto err_reset; 484 - } else { 485 - err = -ENODEV; 486 - goto err_reset; 495 + if (!data->skip_autocfg) 496 + dw8250_setup_port(p); 497 + 498 + /* If we have a valid fifosize, try hooking up DMA */ 499 + if (p->fifosize) { 500 + data->dma.rxconf.src_maxburst = p->fifosize / 4; 501 + data->dma.txconf.dst_maxburst = p->fifosize / 4; 502 + uart.dma = &data->dma; 487 503 } 488 504 489 505 data->line = serial8250_register_8250_port(&uart);
+4
drivers/tty/serial/8250/8250_early.c
··· 29 29 #include <linux/tty.h> 30 30 #include <linux/init.h> 31 31 #include <linux/console.h> 32 + #include <linux/of.h> 33 + #include <linux/of_device.h> 32 34 #include <linux/serial_reg.h> 33 35 #include <linux/serial.h> 34 36 #include <linux/serial_8250.h> ··· 154 152 } 155 153 EARLYCON_DECLARE(uart8250, early_serial8250_setup); 156 154 EARLYCON_DECLARE(uart, early_serial8250_setup); 155 + OF_EARLYCON_DECLARE(ns16550, "ns16550", early_serial8250_setup); 156 + OF_EARLYCON_DECLARE(ns16550a, "ns16550a", early_serial8250_setup);
+84 -4
drivers/tty/serial/8250/8250_ingenic.c
··· 21 21 #include <linux/module.h> 22 22 #include <linux/of.h> 23 23 #include <linux/of_fdt.h> 24 + #include <linux/of_device.h> 24 25 #include <linux/platform_device.h> 25 26 #include <linux/serial_8250.h> 26 27 #include <linux/serial_core.h> 27 28 #include <linux/serial_reg.h> 29 + 30 + #include "8250.h" 31 + 32 + /** ingenic_uart_config: SOC specific config data. */ 33 + struct ingenic_uart_config { 34 + int tx_loadsz; 35 + int fifosize; 36 + }; 28 37 29 38 struct ingenic_uart_data { 30 39 struct clk *clk_module; ··· 41 32 int line; 42 33 }; 43 34 35 + static const struct of_device_id of_match[]; 36 + 44 37 #define UART_FCR_UME BIT(4) 38 + 39 + #define UART_MCR_MDCE BIT(7) 40 + #define UART_MCR_FCM BIT(6) 45 41 46 42 static struct earlycon_device *early_device; 47 43 ··· 143 129 144 130 static void ingenic_uart_serial_out(struct uart_port *p, int offset, int value) 145 131 { 132 + int ier; 133 + 146 134 switch (offset) { 147 135 case UART_FCR: 148 136 /* UART module enable */ ··· 152 136 break; 153 137 154 138 case UART_IER: 139 + /* Enable receive timeout interrupt with the 140 + * receive line status interrupt */ 155 141 value |= (value & 0x4) << 2; 142 + break; 143 + 144 + case UART_MCR: 145 + /* If we have enabled modem status IRQs we should enable modem 146 + * mode. 
*/ 147 + ier = p->serial_in(p, UART_IER); 148 + 149 + if (ier & UART_IER_MSI) 150 + value |= UART_MCR_MDCE | UART_MCR_FCM; 151 + else 152 + value &= ~(UART_MCR_MDCE | UART_MCR_FCM); 156 153 break; 157 154 158 155 default: ··· 175 146 writeb(value, p->membase + (offset << p->regshift)); 176 147 } 177 148 149 + static unsigned int ingenic_uart_serial_in(struct uart_port *p, int offset) 150 + { 151 + unsigned int value; 152 + 153 + value = readb(p->membase + (offset << p->regshift)); 154 + 155 + /* Hide non-16550 compliant bits from higher levels */ 156 + switch (offset) { 157 + case UART_FCR: 158 + value &= ~UART_FCR_UME; 159 + break; 160 + 161 + case UART_MCR: 162 + value &= ~(UART_MCR_MDCE | UART_MCR_FCM); 163 + break; 164 + 165 + default: 166 + break; 167 + } 168 + return value; 169 + } 170 + 178 171 static int ingenic_uart_probe(struct platform_device *pdev) 179 172 { 180 173 struct uart_8250_port uart = {}; 181 174 struct resource *regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 182 175 struct resource *irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 183 176 struct ingenic_uart_data *data; 177 + const struct ingenic_uart_config *cdata; 178 + const struct of_device_id *match; 184 179 int err, line; 180 + 181 + match = of_match_device(of_match, &pdev->dev); 182 + if (!match) { 183 + dev_err(&pdev->dev, "Error: No device match found\n"); 184 + return -ENODEV; 185 + } 186 + cdata = match->data; 185 187 186 188 if (!regs || !irq) { 187 189 dev_err(&pdev->dev, "no registers/irq defined\n"); ··· 224 164 return -ENOMEM; 225 165 226 166 spin_lock_init(&uart.port.lock); 227 - uart.port.type = PORT_16550; 167 + uart.port.type = PORT_16550A; 228 168 uart.port.flags = UPF_SKIP_TEST | UPF_IOREMAP | UPF_FIXED_TYPE; 229 169 uart.port.iotype = UPIO_MEM; 230 170 uart.port.mapbase = regs->start; 231 171 uart.port.regshift = 2; 232 172 uart.port.serial_out = ingenic_uart_serial_out; 173 + uart.port.serial_in = ingenic_uart_serial_in; 233 174 uart.port.irq = irq->start; 
234 175 uart.port.dev = &pdev->dev; 176 + uart.port.fifosize = cdata->fifosize; 177 + uart.tx_loadsz = cdata->tx_loadsz; 178 + uart.capabilities = UART_CAP_FIFO | UART_CAP_RTOIE; 235 179 236 180 /* Check for a fixed line number */ 237 181 line = of_alias_get_id(pdev->dev.of_node, "serial"); ··· 305 241 return 0; 306 242 } 307 243 244 + static const struct ingenic_uart_config jz4740_uart_config = { 245 + .tx_loadsz = 8, 246 + .fifosize = 16, 247 + }; 248 + 249 + static const struct ingenic_uart_config jz4760_uart_config = { 250 + .tx_loadsz = 16, 251 + .fifosize = 32, 252 + }; 253 + 254 + static const struct ingenic_uart_config jz4780_uart_config = { 255 + .tx_loadsz = 32, 256 + .fifosize = 64, 257 + }; 258 + 308 259 static const struct of_device_id of_match[] = { 309 - { .compatible = "ingenic,jz4740-uart" }, 310 - { .compatible = "ingenic,jz4775-uart" }, 311 - { .compatible = "ingenic,jz4780-uart" }, 260 + { .compatible = "ingenic,jz4740-uart", .data = &jz4740_uart_config }, 261 + { .compatible = "ingenic,jz4760-uart", .data = &jz4760_uart_config }, 262 + { .compatible = "ingenic,jz4775-uart", .data = &jz4760_uart_config }, 263 + { .compatible = "ingenic,jz4780-uart", .data = &jz4780_uart_config }, 312 264 { /* sentinel */ } 313 265 }; 314 266 MODULE_DEVICE_TABLE(of, of_match);
+326
drivers/tty/serial/8250/8250_mid.c
··· 1 + /* 2 + * 8250_mid.c - Driver for UART on Intel Penwell and various other Intel SOCs 3 + * 4 + * Copyright (C) 2015 Intel Corporation 5 + * Author: Heikki Krogerus <heikki.krogerus@linux.intel.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/rational.h> 13 + #include <linux/module.h> 14 + #include <linux/pci.h> 15 + 16 + #include <linux/dma/hsu.h> 17 + 18 + #include "8250.h" 19 + 20 + #define PCI_DEVICE_ID_INTEL_PNW_UART1 0x081b 21 + #define PCI_DEVICE_ID_INTEL_PNW_UART2 0x081c 22 + #define PCI_DEVICE_ID_INTEL_PNW_UART3 0x081d 23 + #define PCI_DEVICE_ID_INTEL_TNG_UART 0x1191 24 + #define PCI_DEVICE_ID_INTEL_DNV_UART 0x19d8 25 + 26 + /* Intel MID Specific registers */ 27 + #define INTEL_MID_UART_PS 0x30 28 + #define INTEL_MID_UART_MUL 0x34 29 + #define INTEL_MID_UART_DIV 0x38 30 + 31 + struct mid8250; 32 + 33 + struct mid8250_board { 34 + unsigned long freq; 35 + unsigned int base_baud; 36 + int (*setup)(struct mid8250 *, struct uart_port *p); 37 + void (*exit)(struct mid8250 *); 38 + }; 39 + 40 + struct mid8250 { 41 + int line; 42 + int dma_index; 43 + struct pci_dev *dma_dev; 44 + struct uart_8250_dma dma; 45 + struct mid8250_board *board; 46 + struct hsu_dma_chip dma_chip; 47 + }; 48 + 49 + /*****************************************************************************/ 50 + 51 + static int pnw_setup(struct mid8250 *mid, struct uart_port *p) 52 + { 53 + struct pci_dev *pdev = to_pci_dev(p->dev); 54 + 55 + switch (pdev->device) { 56 + case PCI_DEVICE_ID_INTEL_PNW_UART1: 57 + mid->dma_index = 0; 58 + break; 59 + case PCI_DEVICE_ID_INTEL_PNW_UART2: 60 + mid->dma_index = 1; 61 + break; 62 + case PCI_DEVICE_ID_INTEL_PNW_UART3: 63 + mid->dma_index = 2; 64 + break; 65 + default: 66 + return -EINVAL; 67 + } 68 + 69 + mid->dma_dev = pci_get_slot(pdev->bus, 70 + 
PCI_DEVFN(PCI_SLOT(pdev->devfn), 3)); 71 + return 0; 72 + } 73 + 74 + static int tng_setup(struct mid8250 *mid, struct uart_port *p) 75 + { 76 + struct pci_dev *pdev = to_pci_dev(p->dev); 77 + int index = PCI_FUNC(pdev->devfn); 78 + 79 + /* Currently no support for HSU port0 */ 80 + if (index-- == 0) 81 + return -ENODEV; 82 + 83 + mid->dma_index = index; 84 + mid->dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(5, 0)); 85 + return 0; 86 + } 87 + 88 + static int dnv_handle_irq(struct uart_port *p) 89 + { 90 + struct mid8250 *mid = p->private_data; 91 + int ret; 92 + 93 + ret = hsu_dma_irq(&mid->dma_chip, 0); 94 + ret |= hsu_dma_irq(&mid->dma_chip, 1); 95 + 96 + /* For now, letting the HW generate separate interrupt for the UART */ 97 + if (ret) 98 + return ret; 99 + 100 + return serial8250_handle_irq(p, serial_port_in(p, UART_IIR)); 101 + } 102 + 103 + #define DNV_DMA_CHAN_OFFSET 0x80 104 + 105 + static int dnv_setup(struct mid8250 *mid, struct uart_port *p) 106 + { 107 + struct hsu_dma_chip *chip = &mid->dma_chip; 108 + struct pci_dev *pdev = to_pci_dev(p->dev); 109 + int ret; 110 + 111 + chip->dev = &pdev->dev; 112 + chip->irq = pdev->irq; 113 + chip->regs = p->membase; 114 + chip->length = pci_resource_len(pdev, 0); 115 + chip->offset = DNV_DMA_CHAN_OFFSET; 116 + 117 + /* Falling back to PIO mode if DMA probing fails */ 118 + ret = hsu_dma_probe(chip); 119 + if (ret) 120 + return 0; 121 + 122 + mid->dma_dev = pdev; 123 + 124 + p->handle_irq = dnv_handle_irq; 125 + return 0; 126 + } 127 + 128 + static void dnv_exit(struct mid8250 *mid) 129 + { 130 + if (!mid->dma_dev) 131 + return; 132 + hsu_dma_remove(&mid->dma_chip); 133 + } 134 + 135 + /*****************************************************************************/ 136 + 137 + static void mid8250_set_termios(struct uart_port *p, 138 + struct ktermios *termios, 139 + struct ktermios *old) 140 + { 141 + unsigned int baud = tty_termios_baud_rate(termios); 142 + struct mid8250 *mid = p->private_data; 143 + unsigned 
short ps = 16; 144 + unsigned long fuart = baud * ps; 145 + unsigned long w = BIT(24) - 1; 146 + unsigned long mul, div; 147 + 148 + if (mid->board->freq < fuart) { 149 + /* Find prescaler value that satisfies Fuart < Fref */ 150 + if (mid->board->freq > baud) 151 + ps = mid->board->freq / baud; /* baud rate too high */ 152 + else 153 + ps = 1; /* PLL case */ 154 + fuart = baud * ps; 155 + } else { 156 + /* Get Fuart closer to Fref */ 157 + fuart *= rounddown_pow_of_two(mid->board->freq / fuart); 158 + } 159 + 160 + rational_best_approximation(fuart, mid->board->freq, w, w, &mul, &div); 161 + p->uartclk = fuart * 16 / ps; /* core uses ps = 16 always */ 162 + 163 + writel(ps, p->membase + INTEL_MID_UART_PS); /* set PS */ 164 + writel(mul, p->membase + INTEL_MID_UART_MUL); /* set MUL */ 165 + writel(div, p->membase + INTEL_MID_UART_DIV); 166 + 167 + serial8250_do_set_termios(p, termios, old); 168 + } 169 + 170 + static bool mid8250_dma_filter(struct dma_chan *chan, void *param) 171 + { 172 + struct hsu_dma_slave *s = param; 173 + 174 + if (s->dma_dev != chan->device->dev || s->chan_id != chan->chan_id) 175 + return false; 176 + 177 + chan->private = s; 178 + return true; 179 + } 180 + 181 + static int mid8250_dma_setup(struct mid8250 *mid, struct uart_8250_port *port) 182 + { 183 + struct uart_8250_dma *dma = &mid->dma; 184 + struct device *dev = port->port.dev; 185 + struct hsu_dma_slave *rx_param; 186 + struct hsu_dma_slave *tx_param; 187 + 188 + if (!mid->dma_dev) 189 + return 0; 190 + 191 + rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL); 192 + if (!rx_param) 193 + return -ENOMEM; 194 + 195 + tx_param = devm_kzalloc(dev, sizeof(*tx_param), GFP_KERNEL); 196 + if (!tx_param) 197 + return -ENOMEM; 198 + 199 + rx_param->chan_id = mid->dma_index * 2 + 1; 200 + tx_param->chan_id = mid->dma_index * 2; 201 + 202 + dma->rxconf.src_maxburst = 64; 203 + dma->txconf.dst_maxburst = 64; 204 + 205 + rx_param->dma_dev = &mid->dma_dev->dev; 206 + tx_param->dma_dev = 
&mid->dma_dev->dev; 207 + 208 + dma->fn = mid8250_dma_filter; 209 + dma->rx_param = rx_param; 210 + dma->tx_param = tx_param; 211 + 212 + port->dma = dma; 213 + return 0; 214 + } 215 + 216 + static int mid8250_probe(struct pci_dev *pdev, const struct pci_device_id *id) 217 + { 218 + struct uart_8250_port uart; 219 + struct mid8250 *mid; 220 + int ret; 221 + 222 + ret = pcim_enable_device(pdev); 223 + if (ret) 224 + return ret; 225 + 226 + pci_set_master(pdev); 227 + 228 + mid = devm_kzalloc(&pdev->dev, sizeof(*mid), GFP_KERNEL); 229 + if (!mid) 230 + return -ENOMEM; 231 + 232 + mid->board = (struct mid8250_board *)id->driver_data; 233 + 234 + memset(&uart, 0, sizeof(struct uart_8250_port)); 235 + 236 + uart.port.dev = &pdev->dev; 237 + uart.port.irq = pdev->irq; 238 + uart.port.private_data = mid; 239 + uart.port.type = PORT_16750; 240 + uart.port.iotype = UPIO_MEM; 241 + uart.port.uartclk = mid->board->base_baud * 16; 242 + uart.port.flags = UPF_SHARE_IRQ | UPF_FIXED_PORT | UPF_FIXED_TYPE; 243 + uart.port.set_termios = mid8250_set_termios; 244 + 245 + uart.port.mapbase = pci_resource_start(pdev, 0); 246 + uart.port.membase = pcim_iomap(pdev, 0, 0); 247 + if (!uart.port.membase) 248 + return -ENOMEM; 249 + 250 + if (mid->board->setup) { 251 + ret = mid->board->setup(mid, &uart.port); 252 + if (ret) 253 + return ret; 254 + } 255 + 256 + ret = mid8250_dma_setup(mid, &uart); 257 + if (ret) 258 + goto err; 259 + 260 + ret = serial8250_register_8250_port(&uart); 261 + if (ret < 0) 262 + goto err; 263 + 264 + mid->line = ret; 265 + 266 + pci_set_drvdata(pdev, mid); 267 + return 0; 268 + err: 269 + if (mid->board->exit) 270 + mid->board->exit(mid); 271 + return ret; 272 + } 273 + 274 + static void mid8250_remove(struct pci_dev *pdev) 275 + { 276 + struct mid8250 *mid = pci_get_drvdata(pdev); 277 + 278 + if (mid->board->exit) 279 + mid->board->exit(mid); 280 + 281 + serial8250_unregister_port(mid->line); 282 + } 283 + 284 + static const struct mid8250_board pnw_board = { 
285 + .freq = 50000000, 286 + .base_baud = 115200, 287 + .setup = pnw_setup, 288 + }; 289 + 290 + static const struct mid8250_board tng_board = { 291 + .freq = 38400000, 292 + .base_baud = 1843200, 293 + .setup = tng_setup, 294 + }; 295 + 296 + static const struct mid8250_board dnv_board = { 297 + .freq = 133333333, 298 + .base_baud = 115200, 299 + .setup = dnv_setup, 300 + .exit = dnv_exit, 301 + }; 302 + 303 + #define MID_DEVICE(id, board) { PCI_VDEVICE(INTEL, id), (kernel_ulong_t)&board } 304 + 305 + static const struct pci_device_id pci_ids[] = { 306 + MID_DEVICE(PCI_DEVICE_ID_INTEL_PNW_UART1, pnw_board), 307 + MID_DEVICE(PCI_DEVICE_ID_INTEL_PNW_UART2, pnw_board), 308 + MID_DEVICE(PCI_DEVICE_ID_INTEL_PNW_UART3, pnw_board), 309 + MID_DEVICE(PCI_DEVICE_ID_INTEL_TNG_UART, tng_board), 310 + MID_DEVICE(PCI_DEVICE_ID_INTEL_DNV_UART, dnv_board), 311 + { }, 312 + }; 313 + MODULE_DEVICE_TABLE(pci, pci_ids); 314 + 315 + static struct pci_driver mid8250_pci_driver = { 316 + .name = "8250_mid", 317 + .id_table = pci_ids, 318 + .probe = mid8250_probe, 319 + .remove = mid8250_remove, 320 + }; 321 + 322 + module_pci_driver(mid8250_pci_driver); 323 + 324 + MODULE_AUTHOR("Intel Corporation"); 325 + MODULE_LICENSE("GPL v2"); 326 + MODULE_DESCRIPTION("Intel MID UART driver");
+5 -3
drivers/tty/serial/8250/8250_omap.c
··· 439 439 priv->xoff = termios->c_cc[VSTOP]; 440 440 441 441 priv->efr = 0; 442 - up->mcr &= ~(UART_MCR_RTS | UART_MCR_XONANY); 443 442 up->port.status &= ~(UPSTAT_AUTOCTS | UPSTAT_AUTORTS | UPSTAT_AUTOXOFF); 444 443 445 444 if (termios->c_cflag & CRTSCTS && up->port.flags & UPF_HARD_FLOW) { ··· 725 726 struct dma_tx_state state; 726 727 int count; 727 728 unsigned long flags; 729 + int ret; 728 730 729 731 dma_sync_single_for_cpu(dma->rxchan->device->dev, dma->rx_addr, 730 732 dma->rx_size, DMA_FROM_DEVICE); ··· 741 741 742 742 count = dma->rx_size - state.residue; 743 743 744 - tty_insert_flip_string(tty_port, dma->rx_buf, count); 745 - p->port.icount.rx += count; 744 + ret = tty_insert_flip_string(tty_port, dma->rx_buf, count); 745 + 746 + p->port.icount.rx += ret; 747 + p->port.icount.buf_overrun += count - ret; 746 748 unlock: 747 749 spin_unlock_irqrestore(&priv->rx_dma_lock, flags); 748 750
+7 -222
drivers/tty/serial/8250/8250_pci.c
··· 28 28 29 29 #include <linux/dmaengine.h> 30 30 #include <linux/platform_data/dma-dw.h> 31 - #include <linux/platform_data/dma-hsu.h> 32 31 33 32 #include "8250.h" 34 33 ··· 1507 1508 return ret; 1508 1509 } 1509 1510 1510 - #define INTEL_MID_UART_PS 0x30 1511 - #define INTEL_MID_UART_MUL 0x34 1512 - #define INTEL_MID_UART_DIV 0x38 1513 - 1514 - static void intel_mid_set_termios(struct uart_port *p, 1515 - struct ktermios *termios, 1516 - struct ktermios *old, 1517 - unsigned long fref) 1518 - { 1519 - unsigned int baud = tty_termios_baud_rate(termios); 1520 - unsigned short ps = 16; 1521 - unsigned long fuart = baud * ps; 1522 - unsigned long w = BIT(24) - 1; 1523 - unsigned long mul, div; 1524 - 1525 - if (fref < fuart) { 1526 - /* Find prescaler value that satisfies Fuart < Fref */ 1527 - if (fref > baud) 1528 - ps = fref / baud; /* baud rate too high */ 1529 - else 1530 - ps = 1; /* PLL case */ 1531 - fuart = baud * ps; 1532 - } else { 1533 - /* Get Fuart closer to Fref */ 1534 - fuart *= rounddown_pow_of_two(fref / fuart); 1535 - } 1536 - 1537 - rational_best_approximation(fuart, fref, w, w, &mul, &div); 1538 - p->uartclk = fuart * 16 / ps; /* core uses ps = 16 always */ 1539 - 1540 - writel(ps, p->membase + INTEL_MID_UART_PS); /* set PS */ 1541 - writel(mul, p->membase + INTEL_MID_UART_MUL); /* set MUL */ 1542 - writel(div, p->membase + INTEL_MID_UART_DIV); 1543 - 1544 - serial8250_do_set_termios(p, termios, old); 1545 - } 1546 - 1547 - static void intel_mid_set_termios_38_4M(struct uart_port *p, 1548 - struct ktermios *termios, 1549 - struct ktermios *old) 1550 - { 1551 - intel_mid_set_termios(p, termios, old, 38400000); 1552 - } 1553 - 1554 - static void intel_mid_set_termios_50M(struct uart_port *p, 1555 - struct ktermios *termios, 1556 - struct ktermios *old) 1557 - { 1558 - /* 1559 - * The uart clk is 50Mhz, and the baud rate come from: 1560 - * baud = 50M * MUL / (DIV * PS * DLAB) 1561 - */ 1562 - intel_mid_set_termios(p, termios, old, 50000000); 
1563 - } 1564 - 1565 - static bool intel_mid_dma_filter(struct dma_chan *chan, void *param) 1566 - { 1567 - struct hsu_dma_slave *s = param; 1568 - 1569 - if (s->dma_dev != chan->device->dev || s->chan_id != chan->chan_id) 1570 - return false; 1571 - 1572 - chan->private = s; 1573 - return true; 1574 - } 1575 - 1576 - static int intel_mid_serial_setup(struct serial_private *priv, 1577 - const struct pciserial_board *board, 1578 - struct uart_8250_port *port, int idx, 1579 - int index, struct pci_dev *dma_dev) 1580 - { 1581 - struct device *dev = port->port.dev; 1582 - struct uart_8250_dma *dma; 1583 - struct hsu_dma_slave *tx_param, *rx_param; 1584 - 1585 - dma = devm_kzalloc(dev, sizeof(*dma), GFP_KERNEL); 1586 - if (!dma) 1587 - return -ENOMEM; 1588 - 1589 - tx_param = devm_kzalloc(dev, sizeof(*tx_param), GFP_KERNEL); 1590 - if (!tx_param) 1591 - return -ENOMEM; 1592 - 1593 - rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL); 1594 - if (!rx_param) 1595 - return -ENOMEM; 1596 - 1597 - rx_param->chan_id = index * 2 + 1; 1598 - tx_param->chan_id = index * 2; 1599 - 1600 - dma->rxconf.src_maxburst = 64; 1601 - dma->txconf.dst_maxburst = 64; 1602 - 1603 - rx_param->dma_dev = &dma_dev->dev; 1604 - tx_param->dma_dev = &dma_dev->dev; 1605 - 1606 - dma->fn = intel_mid_dma_filter; 1607 - dma->rx_param = rx_param; 1608 - dma->tx_param = tx_param; 1609 - 1610 - port->port.type = PORT_16750; 1611 - port->port.flags |= UPF_FIXED_PORT | UPF_FIXED_TYPE; 1612 - port->dma = dma; 1613 - 1614 - return pci_default_setup(priv, board, port, idx); 1615 - } 1616 - 1617 - #define PCI_DEVICE_ID_INTEL_PNW_UART1 0x081b 1618 - #define PCI_DEVICE_ID_INTEL_PNW_UART2 0x081c 1619 - #define PCI_DEVICE_ID_INTEL_PNW_UART3 0x081d 1620 - 1621 - static int pnw_serial_setup(struct serial_private *priv, 1622 - const struct pciserial_board *board, 1623 - struct uart_8250_port *port, int idx) 1624 - { 1625 - struct pci_dev *pdev = priv->dev; 1626 - struct pci_dev *dma_dev; 1627 - int index; 1628 
- 1629 - switch (pdev->device) { 1630 - case PCI_DEVICE_ID_INTEL_PNW_UART1: 1631 - index = 0; 1632 - break; 1633 - case PCI_DEVICE_ID_INTEL_PNW_UART2: 1634 - index = 1; 1635 - break; 1636 - case PCI_DEVICE_ID_INTEL_PNW_UART3: 1637 - index = 2; 1638 - break; 1639 - default: 1640 - return -EINVAL; 1641 - } 1642 - 1643 - dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), 3)); 1644 - 1645 - port->port.set_termios = intel_mid_set_termios_50M; 1646 - 1647 - return intel_mid_serial_setup(priv, board, port, idx, index, dma_dev); 1648 - } 1649 - 1650 - #define PCI_DEVICE_ID_INTEL_TNG_UART 0x1191 1651 - 1652 - static int tng_serial_setup(struct serial_private *priv, 1653 - const struct pciserial_board *board, 1654 - struct uart_8250_port *port, int idx) 1655 - { 1656 - struct pci_dev *pdev = priv->dev; 1657 - struct pci_dev *dma_dev; 1658 - int index = PCI_FUNC(pdev->devfn); 1659 - 1660 - /* Currently no support for HSU port0 */ 1661 - if (index-- == 0) 1662 - return -ENODEV; 1663 - 1664 - dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(5, 0)); 1665 - 1666 - port->port.set_termios = intel_mid_set_termios_38_4M; 1667 - 1668 - return intel_mid_serial_setup(priv, board, port, idx, index, dma_dev); 1669 - } 1670 - 1671 1511 static int 1672 1512 pci_omegapci_setup(struct serial_private *priv, 1673 1513 const struct pciserial_board *board, ··· 2047 2209 .subvendor = PCI_ANY_ID, 2048 2210 .subdevice = PCI_ANY_ID, 2049 2211 .setup = byt_serial_setup, 2050 - }, 2051 - { 2052 - .vendor = PCI_VENDOR_ID_INTEL, 2053 - .device = PCI_DEVICE_ID_INTEL_PNW_UART1, 2054 - .subvendor = PCI_ANY_ID, 2055 - .subdevice = PCI_ANY_ID, 2056 - .setup = pnw_serial_setup, 2057 - }, 2058 - { 2059 - .vendor = PCI_VENDOR_ID_INTEL, 2060 - .device = PCI_DEVICE_ID_INTEL_PNW_UART2, 2061 - .subvendor = PCI_ANY_ID, 2062 - .subdevice = PCI_ANY_ID, 2063 - .setup = pnw_serial_setup, 2064 - }, 2065 - { 2066 - .vendor = PCI_VENDOR_ID_INTEL, 2067 - .device = PCI_DEVICE_ID_INTEL_PNW_UART3, 2068 - .subvendor = 
PCI_ANY_ID, 2069 - .subdevice = PCI_ANY_ID, 2070 - .setup = pnw_serial_setup, 2071 - }, 2072 - { 2073 - .vendor = PCI_VENDOR_ID_INTEL, 2074 - .device = PCI_DEVICE_ID_INTEL_TNG_UART, 2075 - .subvendor = PCI_ANY_ID, 2076 - .subdevice = PCI_ANY_ID, 2077 - .setup = tng_serial_setup, 2078 2212 }, 2079 2213 { 2080 2214 .vendor = PCI_VENDOR_ID_INTEL, ··· 2929 3119 pbn_ADDIDATA_PCIe_8_3906250, 2930 3120 pbn_ce4100_1_115200, 2931 3121 pbn_byt, 2932 - pbn_pnw, 2933 - pbn_tng, 2934 3122 pbn_qrk, 2935 3123 pbn_omegapci, 2936 3124 pbn_NETMOS9900_2s_115200, ··· 3715 3907 .uart_offset = 0x80, 3716 3908 .reg_shift = 2, 3717 3909 }, 3718 - [pbn_pnw] = { 3719 - .flags = FL_BASE0, 3720 - .num_ports = 1, 3721 - .base_baud = 115200, 3722 - }, 3723 - [pbn_tng] = { 3724 - .flags = FL_BASE0, 3725 - .num_ports = 1, 3726 - .base_baud = 1843200, 3727 - }, 3728 3910 [pbn_qrk] = { 3729 3911 .flags = FL_BASE0, 3730 3912 .num_ports = 1, ··· 3803 4005 { PCI_DEVICE(0x4348, 0x5053), }, /* WCH CH353 1S1P */ 3804 4006 { PCI_DEVICE(0x1c00, 0x3250), }, /* WCH CH382 2S1P */ 3805 4007 { PCI_DEVICE(0x1c00, 0x3470), }, /* WCH CH384 4S */ 4008 + 4009 + /* Intel platforms with MID UART */ 4010 + { PCI_VDEVICE(INTEL, 0x081b), }, 4011 + { PCI_VDEVICE(INTEL, 0x081c), }, 4012 + { PCI_VDEVICE(INTEL, 0x081d), }, 4013 + { PCI_VDEVICE(INTEL, 0x1191), }, 4014 + { PCI_VDEVICE(INTEL, 0x19d8), }, 3806 4015 }; 3807 4016 3808 4017 /* ··· 5505 5700 PCI_ANY_ID, PCI_ANY_ID, 5506 5701 PCI_CLASS_COMMUNICATION_SERIAL << 8, 0xff0000, 5507 5702 pbn_byt }, 5508 - 5509 - /* 5510 - * Intel Penwell 5511 - */ 5512 - { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PNW_UART1, 5513 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5514 - pbn_pnw}, 5515 - { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PNW_UART2, 5516 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5517 - pbn_pnw}, 5518 - { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PNW_UART3, 5519 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5520 - pbn_pnw}, 5521 - 5522 - /* 5523 - * Intel Tangier 5524 - */ 5525 - { PCI_VENDOR_ID_INTEL, 
PCI_DEVICE_ID_INTEL_TNG_UART, 5526 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5527 - pbn_tng}, 5528 5703 5529 5704 /* 5530 5705 * Intel Quark x1000
+49 -37
drivers/tty/serial/8250/8250_port.c
··· 284 284 serial_out(up, UART_DLM, value >> 8 & 0xff); 285 285 } 286 286 287 - #if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X) 287 + #ifdef CONFIG_SERIAL_8250_RT288X 288 288 289 289 /* Au1x00/RT288x UART hardware has a weird register layout */ 290 290 static const s8 au_io_in_map[8] = { ··· 435 435 p->serial_out = mem32be_serial_out; 436 436 break; 437 437 438 - #if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X) 438 + #ifdef CONFIG_SERIAL_8250_RT288X 439 439 case UPIO_AU: 440 440 p->serial_in = au_serial_in; 441 441 p->serial_out = au_serial_out; ··· 1246 1246 inb_p(ICP); 1247 1247 } 1248 1248 1249 + if (uart_console(port)) 1250 + console_lock(); 1251 + 1249 1252 /* forget possible initially masked and pending IRQ */ 1250 1253 probe_irq_off(probe_irq_on()); 1251 1254 save_mcr = serial_in(up, UART_MCR); ··· 1279 1276 1280 1277 if (port->flags & UPF_FOURPORT) 1281 1278 outb_p(save_ICP, ICP); 1279 + 1280 + if (uart_console(port)) 1281 + console_unlock(); 1282 1282 1283 1283 port->irq = (irq > 0) ? irq : 0; 1284 1284 } ··· 1813 1807 unsigned char lsr, iir; 1814 1808 int retval; 1815 1809 1816 - if (port->type == PORT_8250_CIR) 1817 - return -ENODEV; 1818 - 1819 1810 if (!port->fifosize) 1820 1811 port->fifosize = uart_config[port->type].fifo_size; 1821 1812 if (!up->tx_loadsz) ··· 2233 2230 serial_port_out(port, 0x2, quot_frac); 2234 2231 } 2235 2232 2233 + static unsigned int 2234 + serial8250_get_baud_rate(struct uart_port *port, struct ktermios *termios, 2235 + struct ktermios *old) 2236 + { 2237 + unsigned int tolerance = port->uartclk / 100; 2238 + 2239 + /* 2240 + * Ask the core to calculate the divisor for us. 2241 + * Allow 1% tolerance at the upper limit so uart clks marginally 2242 + * slower than nominal still match standard baud rates without 2243 + * causing transmission errors. 
2244 + */ 2245 + return uart_get_baud_rate(port, termios, old, 2246 + port->uartclk / 16 / 0xffff, 2247 + (port->uartclk + tolerance) / 16); 2248 + } 2249 + 2236 2250 void 2237 2251 serial8250_do_set_termios(struct uart_port *port, struct ktermios *termios, 2238 2252 struct ktermios *old) ··· 2261 2241 2262 2242 cval = serial8250_compute_lcr(up, termios->c_cflag); 2263 2243 2264 - /* 2265 - * Ask the core to calculate the divisor for us. 2266 - */ 2267 - baud = uart_get_baud_rate(port, termios, old, 2268 - port->uartclk / 16 / 0xffff, 2269 - port->uartclk / 16); 2244 + baud = serial8250_get_baud_rate(port, termios, old); 2270 2245 quot = serial8250_get_divisor(up, baud, &frac); 2271 2246 2272 2247 /* ··· 2528 2513 static int serial8250_request_port(struct uart_port *port) 2529 2514 { 2530 2515 struct uart_8250_port *up = up_to_u8250p(port); 2531 - int ret; 2532 2516 2533 - if (port->type == PORT_8250_CIR) 2534 - return -ENODEV; 2535 - 2536 - ret = serial8250_request_std_resource(up); 2537 - 2538 - return ret; 2517 + return serial8250_request_std_resource(up); 2539 2518 } 2540 2519 2541 2520 static int fcr_get_rxtrig_bytes(struct uart_8250_port *up) ··· 2677 2668 struct uart_8250_port *up = up_to_u8250p(port); 2678 2669 int ret; 2679 2670 2680 - if (port->type == PORT_8250_CIR) 2681 - return; 2682 - 2683 2671 /* 2684 2672 * Find the region that we can probe for. This in turn 2685 2673 * tells us whether we can probe for the type of port. 
··· 2811 2805 } 2812 2806 2813 2807 /* 2808 + * Restore serial console when h/w power-off detected 2809 + */ 2810 + static void serial8250_console_restore(struct uart_8250_port *up) 2811 + { 2812 + struct uart_port *port = &up->port; 2813 + struct ktermios termios; 2814 + unsigned int baud, quot, frac = 0; 2815 + 2816 + termios.c_cflag = port->cons->cflag; 2817 + if (port->state->port.tty && termios.c_cflag == 0) 2818 + termios.c_cflag = port->state->port.tty->termios.c_cflag; 2819 + 2820 + baud = serial8250_get_baud_rate(port, &termios, NULL); 2821 + quot = serial8250_get_divisor(up, baud, &frac); 2822 + 2823 + serial8250_set_divisor(port, baud, quot, frac); 2824 + serial_port_out(port, UART_LCR, up->lcr); 2825 + serial_port_out(port, UART_MCR, UART_MCR_DTR | UART_MCR_RTS); 2826 + } 2827 + 2828 + /* 2814 2829 * Print a string to the serial port trying not to disturb 2815 2830 * any possible real use of the port... 2816 2831 * ··· 2868 2841 2869 2842 /* check scratch reg to see if port powered off during system sleep */ 2870 2843 if (up->canary && (up->canary != serial_port_in(port, UART_SCR))) { 2871 - struct ktermios termios; 2872 - unsigned int baud, quot, frac = 0; 2873 - 2874 - termios.c_cflag = port->cons->cflag; 2875 - if (port->state->port.tty && termios.c_cflag == 0) 2876 - termios.c_cflag = port->state->port.tty->termios.c_cflag; 2877 - 2878 - baud = uart_get_baud_rate(port, &termios, NULL, 2879 - port->uartclk / 16 / 0xffff, 2880 - port->uartclk / 16); 2881 - quot = serial8250_get_divisor(up, baud, &frac); 2882 - 2883 - serial8250_set_divisor(port, baud, quot, frac); 2884 - serial_port_out(port, UART_LCR, up->lcr); 2885 - serial_port_out(port, UART_MCR, UART_MCR_DTR | UART_MCR_RTS); 2886 - 2844 + serial8250_console_restore(up); 2887 2845 up->canary = 0; 2888 2846 } 2889 2847
+18 -7
drivers/tty/serial/8250/Kconfig
··· 274 274 275 275 config SERIAL_8250_FSL 276 276 bool 277 - depends on SERIAL_8250_CONSOLE && PPC_UDBG_16550 278 - default PPC 277 + depends on SERIAL_8250_CONSOLE 278 + default PPC || ARM || ARM64 279 279 280 280 config SERIAL_8250_DW 281 281 tristate "Support for Synopsys DesignWare 8250 quirks" ··· 294 294 295 295 config SERIAL_8250_RT288X 296 296 bool "Ralink RT288x/RT305x/RT3662/RT3883 serial port support" 297 - depends on SERIAL_8250 && (SOC_RT288X || SOC_RT305X || SOC_RT3883 || SOC_MT7620) 297 + depends on SERIAL_8250 298 + default y if MIPS_ALCHEMY || SOC_RT288X || SOC_RT305X || SOC_RT3883 || SOC_MT7620 298 299 help 299 - If you have a Ralink RT288x/RT305x SoC based board and want to use the 300 - serial port, say Y to this option. The driver can handle up to 2 serial 301 - ports. If unsure, say N. 300 + Selecting this option will add support for the alternate register 301 + layout used by Ralink RT288x/RT305x, Alchemy Au1xxx, and some others. 302 + If unsure, say N. 302 303 303 304 config SERIAL_8250_OMAP 304 305 tristate "Support for OMAP internal UART (8250 based driver)" ··· 338 337 through the PNP driver. If unsure, say N. 339 338 340 339 config SERIAL_8250_LPC18XX 341 - bool "NXP LPC18xx/43xx serial port support" 340 + tristate "NXP LPC18xx/43xx serial port support" 342 341 depends on SERIAL_8250 && OF && (ARCH_LPC18XX || COMPILE_TEST) 343 342 default ARCH_LPC18XX 344 343 help ··· 367 366 help 368 367 If you have a system using an Ingenic SoC and wish to make use of 369 368 its UARTs, say Y to this option. If unsure, say N. 369 + 370 + config SERIAL_8250_MID 371 + tristate "Support for serial ports on Intel MID platforms" 372 + depends on SERIAL_8250 && PCI 373 + select HSU_DMA if SERIAL_8250_DMA 374 + select HSU_DMA_PCI if X86_INTEL_MID 375 + help 376 + Selecting this option will enable handling of the extra features 377 + present on the UART found on Intel Medfield SOC and various other 378 + Intel platforms.
+1
drivers/tty/serial/8250/Makefile
··· 27 27 obj-$(CONFIG_SERIAL_8250_MT6577) += 8250_mtk.o 28 28 obj-$(CONFIG_SERIAL_8250_UNIPHIER) += 8250_uniphier.o 29 29 obj-$(CONFIG_SERIAL_8250_INGENIC) += 8250_ingenic.o 30 + obj-$(CONFIG_SERIAL_8250_MID) += 8250_mid.o 30 31 31 32 CFLAGS_8250_ingenic.o += -I$(srctree)/scripts/dtc/libfdt
+7 -5
drivers/tty/serial/Kconfig
··· 115 115 116 116 config SERIAL_ATMEL 117 117 bool "AT91 / AT32 on-chip serial port support" 118 - depends on ARCH_AT91 || AVR32 118 + depends on ARCH_AT91 || AVR32 || COMPILE_TEST 119 119 select SERIAL_CORE 120 - select SERIAL_MCTRL_GPIO 120 + select SERIAL_MCTRL_GPIO if GPIOLIB 121 121 help 122 122 This enables the driver for the on-chip UARTs of the Atmel 123 123 AT91 and AT32 processors. ··· 571 571 572 572 config SERIAL_IMX 573 573 tristate "IMX serial port support" 574 - depends on ARCH_MXC 574 + depends on ARCH_MXC || COMPILE_TEST 575 575 select SERIAL_CORE 576 576 select RATIONAL 577 577 help ··· 582 582 bool "Console on IMX serial port" 583 583 depends on SERIAL_IMX=y 584 584 select SERIAL_CORE_CONSOLE 585 + select SERIAL_EARLYCON if OF 585 586 help 586 587 If you have enabled the serial port on the Freescale IMX 587 588 CPU you can make it the console by answering Y to this option. ··· 744 743 745 744 config SERIAL_SH_SCI_DMA 746 745 bool "DMA support" 747 - depends on SERIAL_SH_SCI && SH_DMAE 746 + depends on SERIAL_SH_SCI && DMA_ENGINE 748 747 749 748 config SERIAL_PNX8XXX 750 749 bool "Enable PNX8XXX SoCs' UART Support" ··· 1409 1408 warnings and which allows logins in single user mode). 1410 1409 1411 1410 config SERIAL_MXS_AUART 1412 - depends on ARCH_MXS 1411 + depends on ARCH_MXS || COMPILE_TEST 1413 1412 tristate "MXS AUART support" 1414 1413 select SERIAL_CORE 1415 1414 select SERIAL_MCTRL_GPIO if GPIOLIB ··· 1539 1538 tristate "Freescale lpuart serial port support" 1540 1539 depends on HAS_DMA 1541 1540 select SERIAL_CORE 1541 + select SERIAL_EARLYCON 1542 1542 help 1543 1543 Support for the on-chip lpuart on some Freescale SOCs. 1544 1544
+2 -24
drivers/tty/serial/altera_uart.c
··· 508 508 .cons = ALTERA_UART_CONSOLE, 509 509 }; 510 510 511 - #ifdef CONFIG_OF 512 - static int altera_uart_get_of_uartclk(struct platform_device *pdev, 513 - struct uart_port *port) 514 - { 515 - int len; 516 - const __be32 *clk; 517 - 518 - clk = of_get_property(pdev->dev.of_node, "clock-frequency", &len); 519 - if (!clk || len < sizeof(__be32)) 520 - return -ENODEV; 521 - 522 - port->uartclk = be32_to_cpup(clk); 523 - 524 - return 0; 525 - } 526 - #else 527 - static int altera_uart_get_of_uartclk(struct platform_device *pdev, 528 - struct uart_port *port) 529 - { 530 - return -ENODEV; 531 - } 532 - #endif /* CONFIG_OF */ 533 - 534 511 static int altera_uart_probe(struct platform_device *pdev) 535 512 { 536 513 struct altera_uart_platform_uart *platp = dev_get_platdata(&pdev->dev); ··· 547 570 if (platp) 548 571 port->uartclk = platp->uartclk; 549 572 else { 550 - ret = altera_uart_get_of_uartclk(pdev, port); 573 + ret = of_property_read_u32(pdev->dev.of_node, "clock-frequency", 574 + &port->uartclk); 551 575 if (ret) 552 576 return ret; 553 577 }
+2 -2
drivers/tty/serial/amba-pl011.c
··· 191 191 */ 192 192 static int pl011_fifo_to_tty(struct uart_amba_port *uap) 193 193 { 194 - u16 status, ch; 195 - unsigned int flag, max_count = 256; 194 + u16 status; 195 + unsigned int ch, flag, max_count = 256; 196 196 int fifotaken = 0; 197 197 198 198 while (max_count--) {
+1
drivers/tty/serial/apbuart.c
··· 581 581 }, 582 582 {}, 583 583 }; 584 + MODULE_DEVICE_TABLE(of, apbuart_match); 584 585 585 586 static struct platform_driver grlib_apbuart_of_driver = { 586 587 .probe = apbuart_probe,
+16 -11
drivers/tty/serial/atmel_serial.c
··· 112 112 #define ATMEL_SERIAL_RINGSIZE 1024 113 113 114 114 /* 115 + * at91: 6 USARTs and one DBGU port (SAM9260) 116 + * avr32: 4 117 + */ 118 + #define ATMEL_MAX_UART 7 119 + 120 + /* 115 121 * We wrap our port structure around the generic uart_port. 116 122 */ 117 123 struct atmel_uart_port { ··· 927 921 sg_set_page(&atmel_port->sg_tx, 928 922 virt_to_page(port->state->xmit.buf), 929 923 UART_XMIT_SIZE, 930 - (int)port->state->xmit.buf & ~PAGE_MASK); 924 + (unsigned long)port->state->xmit.buf & ~PAGE_MASK); 931 925 nent = dma_map_sg(port->dev, 932 926 &atmel_port->sg_tx, 933 927 1, ··· 937 931 dev_dbg(port->dev, "need to release resource of dma\n"); 938 932 goto chan_err; 939 933 } else { 940 - dev_dbg(port->dev, "%s: mapped %d@%p to %x\n", __func__, 934 + dev_dbg(port->dev, "%s: mapped %d@%p to %pad\n", __func__, 941 935 sg_dma_len(&atmel_port->sg_tx), 942 936 port->state->xmit.buf, 943 - sg_dma_address(&atmel_port->sg_tx)); 937 + &sg_dma_address(&atmel_port->sg_tx)); 944 938 } 945 939 946 940 /* Configure the slave DMA */ ··· 1109 1103 sg_set_page(&atmel_port->sg_rx, 1110 1104 virt_to_page(ring->buf), 1111 1105 sizeof(struct atmel_uart_char) * ATMEL_SERIAL_RINGSIZE, 1112 - (int)ring->buf & ~PAGE_MASK); 1106 + (unsigned long)ring->buf & ~PAGE_MASK); 1113 1107 nent = dma_map_sg(port->dev, 1114 1108 &atmel_port->sg_rx, 1115 1109 1, ··· 1119 1113 dev_dbg(port->dev, "need to release resource of dma\n"); 1120 1114 goto chan_err; 1121 1115 } else { 1122 - dev_dbg(port->dev, "%s: mapped %d@%p to %x\n", __func__, 1116 + dev_dbg(port->dev, "%s: mapped %d@%p to %pad\n", __func__, 1123 1117 sg_dma_len(&atmel_port->sg_rx), 1124 1118 ring->buf, 1125 - sg_dma_address(&atmel_port->sg_rx)); 1119 + &sg_dma_address(&atmel_port->sg_rx)); 1126 1120 } 1127 1121 1128 1122 /* Configure the slave DMA */ ··· 1682 1676 struct atmel_uart_data *pdata = dev_get_platdata(&pdev->dev); 1683 1677 1684 1678 if (np) { 1679 + struct serial_rs485 *rs485conf = &port->rs485; 1685 1680 u32 
rs485_delay[2]; 1686 1681 /* rs485 properties */ 1687 1682 if (of_property_read_u32_array(np, "rs485-rts-delay", 1688 1683 rs485_delay, 2) == 0) { 1689 - struct serial_rs485 *rs485conf = &port->rs485; 1690 - 1691 1684 rs485conf->delay_rts_before_send = rs485_delay[0]; 1692 1685 rs485conf->delay_rts_after_send = rs485_delay[1]; 1693 1686 rs485conf->flags = 0; 1687 + } 1694 1688 1695 1689 if (of_get_property(np, "rs485-rx-during-tx", NULL)) 1696 1690 rs485conf->flags |= SER_RS485_RX_DURING_TX; ··· 1698 1692 if (of_get_property(np, "linux,rs485-enabled-at-boot-time", 1699 1693 NULL)) 1700 1694 rs485conf->flags |= SER_RS485_ENABLED; 1701 - } 1702 1695 } else { 1703 1696 port->rs485 = pdata->rs485; 1704 1697 } ··· 2301 2296 ret = -EINVAL; 2302 2297 if (port->uartclk / 16 != ser->baud_base) 2303 2298 ret = -EINVAL; 2304 - if ((void *)port->mapbase != ser->iomem_base) 2299 + if (port->mapbase != (unsigned long)ser->iomem_base) 2305 2300 ret = -EINVAL; 2306 2301 if (port->iobase != ser->port) 2307 2302 ret = -EINVAL; ··· 2691 2686 enum mctrl_gpio_idx i; 2692 2687 struct gpio_desc *gpiod; 2693 2688 2694 - p->gpios = mctrl_gpio_init(dev, 0); 2689 + p->gpios = mctrl_gpio_init_noauto(dev, 0); 2695 2690 if (IS_ERR(p->gpios)) 2696 2691 return PTR_ERR(p->gpios); 2697 2692
+1 -1
drivers/tty/serial/clps711x.c
··· 500 500 501 501 platform_set_drvdata(pdev, s); 502 502 503 - s->gpios = mctrl_gpio_init(&pdev->dev, 0); 503 + s->gpios = mctrl_gpio_init_noauto(&pdev->dev, 0); 504 504 if (IS_ERR(s->gpios)) 505 505 return PTR_ERR(s->gpios); 506 506
+1
drivers/tty/serial/cpm_uart/cpm_uart_core.c
··· 1450 1450 }, 1451 1451 {} 1452 1452 }; 1453 + MODULE_DEVICE_TABLE(of, cpm_uart_match); 1453 1454 1454 1455 static struct platform_driver cpm_uart_driver = { 1455 1456 .driver = {
+1 -33
drivers/tty/serial/crisv10.c
··· 3655 3655 wake_up_interruptible(&info->port.open_wait); 3656 3656 } 3657 3657 info->port.flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CLOSING); 3658 - wake_up_interruptible(&info->port.close_wait); 3659 3658 local_irq_restore(flags); 3660 3659 3661 3660 /* port closed */ ··· 3758 3759 int do_clocal = 0; 3759 3760 3760 3761 /* 3761 - * If the device is in the middle of being closed, then block 3762 - * until it's done, and then try again. 3763 - */ 3764 - if (info->port.flags & ASYNC_CLOSING) { 3765 - wait_event_interruptible_tty(tty, info->port.close_wait, 3766 - !(info->port.flags & ASYNC_CLOSING)); 3767 - #ifdef SERIAL_DO_RESTART 3768 - if (info->port.flags & ASYNC_HUP_NOTIFY) 3769 - return -EAGAIN; 3770 - else 3771 - return -ERESTARTSYS; 3772 - #else 3773 - return -EAGAIN; 3774 - #endif 3775 - } 3776 - 3777 - /* 3778 3762 * If non-blocking mode is set, or the port is not enabled, 3779 3763 * then make the check up front and then exit. 3780 3764 */ ··· 3807 3825 #endif 3808 3826 break; 3809 3827 } 3810 - if (!(info->port.flags & ASYNC_CLOSING) && do_clocal) 3828 + if (do_clocal) 3811 3829 /* && (do_clocal || DCD_IS_ASSERTED) */ 3812 3830 break; 3813 3831 if (signal_pending(current)) { ··· 3875 3893 info->port.tty = tty; 3876 3894 3877 3895 info->port.low_latency = !!(info->port.flags & ASYNC_LOW_LATENCY); 3878 - 3879 - /* 3880 - * If the port is in the middle of closing, bail out now 3881 - */ 3882 - if (info->port.flags & ASYNC_CLOSING) { 3883 - wait_event_interruptible_tty(tty, info->port.close_wait, 3884 - !(info->port.flags & ASYNC_CLOSING)); 3885 - #ifdef SERIAL_DO_RESTART 3886 - return ((info->port.flags & ASYNC_HUP_NOTIFY) ? 3887 - -EAGAIN : -ERESTARTSYS); 3888 - #else 3889 - return -EAGAIN; 3890 - #endif 3891 - } 3892 3896 3893 3897 /* 3894 3898 * If DMA is enabled try to allocate the irq's.
+39
drivers/tty/serial/fsl_lpuart.c
··· 1746 1746 .data = &lpuart_reg, 1747 1747 }; 1748 1748 1749 + static void lpuart_early_write(struct console *con, const char *s, unsigned n) 1750 + { 1751 + struct earlycon_device *dev = con->data; 1752 + 1753 + uart_console_write(&dev->port, s, n, lpuart_console_putchar); 1754 + } 1755 + 1756 + static void lpuart32_early_write(struct console *con, const char *s, unsigned n) 1757 + { 1758 + struct earlycon_device *dev = con->data; 1759 + 1760 + uart_console_write(&dev->port, s, n, lpuart32_console_putchar); 1761 + } 1762 + 1763 + static int __init lpuart_early_console_setup(struct earlycon_device *device, 1764 + const char *opt) 1765 + { 1766 + if (!device->port.membase) 1767 + return -ENODEV; 1768 + 1769 + device->con->write = lpuart_early_write; 1770 + return 0; 1771 + } 1772 + 1773 + static int __init lpuart32_early_console_setup(struct earlycon_device *device, 1774 + const char *opt) 1775 + { 1776 + if (!device->port.membase) 1777 + return -ENODEV; 1778 + 1779 + device->con->write = lpuart32_early_write; 1780 + return 0; 1781 + } 1782 + 1783 + OF_EARLYCON_DECLARE(lpuart, "fsl,vf610-lpuart", lpuart_early_console_setup); 1784 + OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1021a-lpuart", lpuart32_early_console_setup); 1785 + EARLYCON_DECLARE(lpuart, lpuart_early_console_setup); 1786 + EARLYCON_DECLARE(lpuart32, lpuart32_early_console_setup); 1787 + 1749 1788 #define LPUART_CONSOLE (&lpuart_console) 1750 1789 #define LPUART32_CONSOLE (&lpuart32_console) 1751 1790 #else
+105 -74
drivers/tty/serial/imx.c
··· 139 139 #define USR1_ESCF (1<<11) /* Escape seq interrupt flag */ 140 140 #define USR1_FRAMERR (1<<10) /* Frame error interrupt flag */ 141 141 #define USR1_RRDY (1<<9) /* Receiver ready interrupt/dma flag */ 142 + #define USR1_AGTIM (1<<8) /* Ageing timer interrupt flag */ 142 143 #define USR1_TIMEOUT (1<<7) /* Receive timeout interrupt status */ 143 144 #define USR1_RXDS (1<<6) /* Receiver idle interrupt flag */ 144 145 #define USR1_AIRINT (1<<5) /* Async IR wake interrupt flag */ ··· 729 728 if ((temp & USR2_RDR) && !sport->dma_is_rxing) { 730 729 sport->dma_is_rxing = 1; 731 730 732 - /* disable the `Recerver Ready Interrrupt` */ 731 + /* disable the receiver ready and aging timer interrupts */ 733 732 temp = readl(sport->port.membase + UCR1); 734 733 temp &= ~(UCR1_RRDYEN); 735 734 writel(temp, sport->port.membase + UCR1); 735 + 736 + temp = readl(sport->port.membase + UCR2); 737 + temp &= ~(UCR2_ATEN); 738 + writel(temp, sport->port.membase + UCR2); 736 739 737 740 /* tell the DMA to receive the data. 
*/ 738 741 start_rx_dma(sport); ··· 754 749 sts = readl(sport->port.membase + USR1); 755 750 sts2 = readl(sport->port.membase + USR2); 756 751 757 - if (sts & USR1_RRDY) { 752 + if (sts & (USR1_RRDY | USR1_AGTIM)) { 758 753 if (sport->dma_is_enabled) 759 754 imx_dma_rxint(sport); 760 755 else ··· 857 852 spin_unlock_irqrestore(&sport->port.lock, flags); 858 853 } 859 854 860 - #define TXTL 2 /* reset default */ 861 - #define RXTL 1 /* reset default */ 862 - 863 - static void imx_setup_ufcr(struct imx_port *sport, unsigned int mode) 864 - { 865 - unsigned int val; 866 - 867 - /* set receiver / transmitter trigger level */ 868 - val = readl(sport->port.membase + UFCR) & (UFCR_RFDIV | UFCR_DCEDTE); 869 - val |= TXTL << UFCR_TXTL_SHF | RXTL; 870 - writel(val, sport->port.membase + UFCR); 871 - } 872 - 873 855 #define RX_BUF_SIZE (PAGE_SIZE) 874 856 static void imx_rx_dma_done(struct imx_port *sport) 875 857 { ··· 865 873 866 874 spin_lock_irqsave(&sport->port.lock, flags); 867 875 868 - /* Enable this interrupt when the RXFIFO is empty. */ 876 + /* re-enable interrupts to get notified when new symbols are incoming */ 869 877 temp = readl(sport->port.membase + UCR1); 870 878 temp |= UCR1_RRDYEN; 871 879 writel(temp, sport->port.membase + UCR1); 880 + 881 + temp = readl(sport->port.membase + UCR2); 882 + temp |= UCR2_ATEN; 883 + writel(temp, sport->port.membase + UCR2); 872 884 873 885 sport->dma_is_rxing = 0; 874 886 ··· 884 888 } 885 889 886 890 /* 887 - * There are three kinds of RX DMA interrupts(such as in the MX6Q): 891 + * There are two kinds of RX DMA interrupts(such as in the MX6Q): 888 892 * [1] the RX DMA buffer is full. 889 - * [2] the Aging timer expires(wait for 8 bytes long) 890 - * [3] the Idle Condition Detect(enabled the UCR4_IDDMAEN). 
893 + * [2] the aging timer expires 891 894 * 892 - * The [2] is trigger when a character was been sitting in the FIFO 893 - * meanwhile [3] can wait for 32 bytes long when the RX line is 894 - * on IDLE state and RxFIFO is empty. 895 + * Condition [2] is triggered when a character has been sitting in the FIFO 896 + * for at least 8 byte durations. 895 897 */ 896 898 static void dma_rx_callback(void *data) 897 899 { ··· 907 913 status = dmaengine_tx_status(chan, (dma_cookie_t)0, &state); 908 914 count = RX_BUF_SIZE - state.residue; 909 915 910 - if (readl(sport->port.membase + USR2) & USR2_IDLE) { 911 - /* In condition [3] the SDMA counted up too early */ 912 - count--; 913 - 914 - writel(USR2_IDLE, sport->port.membase + USR2); 915 - } 916 - 917 916 dev_dbg(sport->port.dev, "We get %d bytes.\n", count); 918 917 919 918 if (count) { ··· 918 931 sport->port.icount.buf_overrun++; 919 932 } 920 933 tty_flip_buffer_push(port); 921 - 922 - start_rx_dma(sport); 923 - } else if (readl(sport->port.membase + USR2) & USR2_RDR) { 924 - /* 925 - * start rx_dma directly once data in RXFIFO, more efficient 926 - * than before: 927 - * 1. call imx_rx_dma_done to stop dma if no data received 928 - * 2. wait next RDR interrupt to start dma transfer. 929 - */ 930 - start_rx_dma(sport); 931 - } else { 932 - /* 933 - * stop dma to prevent too many IDLE event trigged if no data 934 - * in RXFIFO 935 - */ 936 - imx_rx_dma_done(sport); 934 + sport->port.icount.rx += count; 937 935 } 936 + 937 + /* 938 + * Restart RX DMA directly if more data is available in order to skip 939 + * the roundtrip through the IRQ handler. If there is some data already 940 + * in the FIFO, DMA needs to be restarted soon anyways. 941 + * 942 + * Otherwise stop the DMA and reactivate FIFO IRQs to restart DMA once 943 + * data starts to arrive again. 
944 + */ 945 + if (readl(sport->port.membase + USR2) & USR2_RDR) 946 + start_rx_dma(sport); 947 + else 948 + imx_rx_dma_done(sport); 938 949 } 939 950 940 951 static int start_rx_dma(struct imx_port *sport) ··· 963 978 dmaengine_submit(desc); 964 979 dma_async_issue_pending(chan); 965 980 return 0; 981 + } 982 + 983 + #define TXTL_DEFAULT 2 /* reset default */ 984 + #define RXTL_DEFAULT 1 /* reset default */ 985 + #define TXTL_DMA 8 /* DMA burst setting */ 986 + #define RXTL_DMA 9 /* DMA burst setting */ 987 + 988 + static void imx_setup_ufcr(struct imx_port *sport, 989 + unsigned char txwl, unsigned char rxwl) 990 + { 991 + unsigned int val; 992 + 993 + /* set receiver / transmitter trigger level */ 994 + val = readl(sport->port.membase + UFCR) & (UFCR_RFDIV | UFCR_DCEDTE); 995 + val |= txwl << UFCR_TXTL_SHF | rxwl; 996 + writel(val, sport->port.membase + UFCR); 966 997 } 967 998 968 999 static void imx_uart_dma_exit(struct imx_port *sport) ··· 1016 1015 slave_config.direction = DMA_DEV_TO_MEM; 1017 1016 slave_config.src_addr = sport->port.mapbase + URXD0; 1018 1017 slave_config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 1019 - slave_config.src_maxburst = RXTL; 1018 + /* one byte less than the watermark level to enable the aging timer */ 1019 + slave_config.src_maxburst = RXTL_DMA - 1; 1020 1020 ret = dmaengine_slave_config(sport->dma_chan_rx, &slave_config); 1021 1021 if (ret) { 1022 1022 dev_err(dev, "error in RX dma configuration.\n"); ··· 1041 1039 slave_config.direction = DMA_MEM_TO_DEV; 1042 1040 slave_config.dst_addr = sport->port.mapbase + URTX0; 1043 1041 slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 1044 - slave_config.dst_maxburst = TXTL; 1042 + slave_config.dst_maxburst = TXTL_DMA; 1045 1043 ret = dmaengine_slave_config(sport->dma_chan_tx, &slave_config); 1046 1044 if (ret) { 1047 1045 dev_err(dev, "error in TX dma configuration."); ··· 1064 1062 1065 1063 /* set UCR1 */ 1066 1064 temp = readl(sport->port.membase + UCR1); 1067 - temp |= 
UCR1_RDMAEN | UCR1_TDMAEN | UCR1_ATDMAEN | 1068 - /* wait for 32 idle frames for IDDMA interrupt */ 1069 - UCR1_ICD_REG(3); 1065 + temp |= UCR1_RDMAEN | UCR1_TDMAEN | UCR1_ATDMAEN; 1070 1066 writel(temp, sport->port.membase + UCR1); 1071 1067 1072 - /* set UCR4 */ 1073 - temp = readl(sport->port.membase + UCR4); 1074 - temp |= UCR4_IDDMAEN; 1075 - writel(temp, sport->port.membase + UCR4); 1068 + temp = readl(sport->port.membase + UCR2); 1069 + temp |= UCR2_ATEN; 1070 + writel(temp, sport->port.membase + UCR2); 1071 + 1072 + imx_setup_ufcr(sport, TXTL_DMA, RXTL_DMA); 1076 1073 1077 1074 sport->dma_is_enabled = 1; 1078 1075 } ··· 1087 1086 1088 1087 /* clear UCR2 */ 1089 1088 temp = readl(sport->port.membase + UCR2); 1090 - temp &= ~(UCR2_CTSC | UCR2_CTS); 1089 + temp &= ~(UCR2_CTSC | UCR2_CTS | UCR2_ATEN); 1091 1090 writel(temp, sport->port.membase + UCR2); 1092 1091 1093 - /* clear UCR4 */ 1094 - temp = readl(sport->port.membase + UCR4); 1095 - temp &= ~UCR4_IDDMAEN; 1096 - writel(temp, sport->port.membase + UCR4); 1092 + imx_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1097 1093 1098 1094 sport->dma_is_enabled = 0; 1099 1095 } ··· 1113 1115 return retval; 1114 1116 } 1115 1117 1116 - imx_setup_ufcr(sport, 0); 1118 + imx_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1117 1119 1118 1120 /* disable the DREN bit (Data Ready interrupt enable) before 1119 1121 * requesting IRQs ··· 1125 1127 temp |= CTSTL << UCR4_CTSTL_SHF; 1126 1128 1127 1129 writel(temp & ~UCR4_DREN, sport->port.membase + UCR4); 1130 + 1131 + /* Can we enable the DMA support? 
*/ 1132 + if (is_imx6q_uart(sport) && !uart_console(port) && 1133 + !sport->dma_is_inited) 1134 + imx_uart_dma_init(sport); 1128 1135 1129 1136 spin_lock_irqsave(&sport->port.lock, flags); 1130 1137 /* Reset fifo's and state machines */ ··· 1147 1144 */ 1148 1145 writel(USR1_RTSD, sport->port.membase + USR1); 1149 1146 writel(USR2_ORE, sport->port.membase + USR2); 1147 + 1148 + if (sport->dma_is_inited && !sport->dma_is_enabled) 1149 + imx_enable_dma(sport); 1150 1150 1151 1151 temp = readl(sport->port.membase + UCR1); 1152 1152 temp |= UCR1_RRDYEN | UCR1_RTSDEN | UCR1_UARTEN; ··· 1284 1278 { 1285 1279 struct imx_port *sport = (struct imx_port *)port; 1286 1280 unsigned long flags; 1287 - unsigned int ucr2, old_ucr1, old_txrxen, baud, quot; 1281 + unsigned int ucr2, old_ucr1, old_ucr2, baud, quot; 1288 1282 unsigned int old_csize = old ? old->c_cflag & CSIZE : CS8; 1289 1283 unsigned int div, ufcr; 1290 1284 unsigned long num, denom; ··· 1321 1315 } else { 1322 1316 ucr2 |= UCR2_CTSC; 1323 1317 } 1324 - 1325 - /* Can we enable the DMA support? 
*/ 1326 - if (is_imx6q_uart(sport) && !uart_console(port) 1327 - && !sport->dma_is_inited) 1328 - imx_uart_dma_init(sport); 1329 1318 } else { 1330 1319 termios->c_cflag &= ~CRTSCTS; 1331 1320 } ··· 1388 1387 barrier(); 1389 1388 1390 1389 /* then, disable everything */ 1391 - old_txrxen = readl(sport->port.membase + UCR2); 1392 - writel(old_txrxen & ~(UCR2_TXEN | UCR2_RXEN), 1390 + old_ucr2 = readl(sport->port.membase + UCR2); 1391 + writel(old_ucr2 & ~(UCR2_TXEN | UCR2_RXEN), 1393 1392 sport->port.membase + UCR2); 1394 - old_txrxen &= (UCR2_TXEN | UCR2_RXEN); 1393 + old_ucr2 &= (UCR2_TXEN | UCR2_RXEN | UCR2_ATEN); 1395 1394 1396 1395 /* custom-baudrate handling */ 1397 1396 div = sport->port.uartclk / (baud * 16); ··· 1432 1431 writel(old_ucr1, sport->port.membase + UCR1); 1433 1432 1434 1433 /* set the parity, stop bits and data size */ 1435 - writel(ucr2 | old_txrxen, sport->port.membase + UCR2); 1434 + writel(ucr2 | old_ucr2, sport->port.membase + UCR2); 1436 1435 1437 1436 if (UART_ENABLE_MS(&sport->port, termios->c_cflag)) 1438 1437 imx_enable_ms(&sport->port); 1439 1438 1440 - if (sport->dma_is_inited && !sport->dma_is_enabled) 1441 - imx_enable_dma(sport); 1442 1439 spin_unlock_irqrestore(&sport->port.lock, flags); 1443 1440 } 1444 1441 ··· 1502 1503 if (retval) 1503 1504 clk_disable_unprepare(sport->clk_ipg); 1504 1505 1505 - imx_setup_ufcr(sport, 0); 1506 + imx_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1506 1507 1507 1508 spin_lock_irqsave(&sport->port.lock, flags); 1508 1509 ··· 1772 1773 else 1773 1774 imx_console_get_options(sport, &baud, &parity, &bits); 1774 1775 1775 - imx_setup_ufcr(sport, 0); 1776 + imx_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1776 1777 1777 1778 retval = uart_set_options(&sport->port, co, baud, parity, bits, flow); 1778 1779 ··· 1802 1803 }; 1803 1804 1804 1805 #define IMX_CONSOLE &imx_console 1806 + 1807 + #ifdef CONFIG_OF 1808 + static void imx_console_early_putchar(struct uart_port *port, int ch) 1809 + { 1810 + 
while (readl_relaxed(port->membase + IMX21_UTS) & UTS_TXFULL) 1811 + cpu_relax(); 1812 + 1813 + writel_relaxed(ch, port->membase + URTX0); 1814 + } 1815 + 1816 + static void imx_console_early_write(struct console *con, const char *s, 1817 + unsigned count) 1818 + { 1819 + struct earlycon_device *dev = con->data; 1820 + 1821 + uart_console_write(&dev->port, s, count, imx_console_early_putchar); 1822 + } 1823 + 1824 + static int __init 1825 + imx_console_early_setup(struct earlycon_device *dev, const char *opt) 1826 + { 1827 + if (!dev->port.membase) 1828 + return -ENODEV; 1829 + 1830 + dev->con->write = imx_console_early_write; 1831 + 1832 + return 0; 1833 + } 1834 + OF_EARLYCON_DECLARE(ec_imx6q, "fsl,imx6q-uart", imx_console_early_setup); 1835 + OF_EARLYCON_DECLARE(ec_imx21, "fsl,imx21-uart", imx_console_early_setup); 1836 + #endif 1837 + 1805 1838 #else 1806 1839 #define IMX_CONSOLE NULL 1807 1840 #endif
+4 -3
drivers/tty/serial/lpc32xx_hs.c
··· 691 691 p->port.mapbase = res->start; 692 692 p->port.membase = NULL; 693 693 694 - p->port.irq = platform_get_irq(pdev, 0); 695 - if (p->port.irq < 0) { 694 + ret = platform_get_irq(pdev, 0); 695 + if (ret < 0) { 696 696 dev_err(&pdev->dev, "Error getting irq for HS UART port %d\n", 697 697 uarts_registered); 698 - return p->port.irq; 698 + return ret; 699 699 } 700 + p->port.irq = ret; 700 701 701 702 p->port.iotype = UPIO_MEM32; 702 703 p->port.uartclk = LPC32XX_MAIN_OSC_FREQ;
+15 -8
drivers/tty/serial/men_z135_uart.c
··· 35 35 #define MEN_Z135_BAUD_REG 0x810 36 36 #define MEN_Z135_TIMEOUT 0x814 37 37 38 - #define MEN_Z135_MEM_SIZE 0x818 39 - 40 38 #define IRQ_ID(x) ((x) & 0x1f) 41 39 42 40 #define MEN_Z135_IER_RXCIEN BIT(0) /* RX Space IRQ */ ··· 122 124 struct men_z135_port { 123 125 struct uart_port port; 124 126 struct mcb_device *mdev; 127 + struct resource *mem; 125 128 unsigned char *rxbuf; 126 129 u32 stat_reg; 127 130 spinlock_t lock; ··· 733 734 734 735 static void men_z135_release_port(struct uart_port *port) 735 736 { 737 + struct men_z135_port *uart = to_men_z135(port); 738 + 736 739 iounmap(port->membase); 737 740 port->membase = NULL; 738 741 739 - release_mem_region(port->mapbase, MEN_Z135_MEM_SIZE); 742 + mcb_release_mem(uart->mem); 740 743 } 741 744 742 745 static int men_z135_request_port(struct uart_port *port) 743 746 { 744 - int size = MEN_Z135_MEM_SIZE; 747 + struct men_z135_port *uart = to_men_z135(port); 748 + struct mcb_device *mdev = uart->mdev; 749 + struct resource *mem; 745 750 746 - if (!request_mem_region(port->mapbase, size, "men_z135_port")) 747 - return -EBUSY; 751 + mem = mcb_request_mem(uart->mdev, dev_name(&mdev->dev)); 752 + if (IS_ERR(mem)) 753 + return PTR_ERR(mem); 748 754 749 - port->membase = ioremap(port->mapbase, MEN_Z135_MEM_SIZE); 755 + port->mapbase = mem->start; 756 + uart->mem = mem; 757 + 758 + port->membase = ioremap(mem->start, resource_size(mem)); 750 759 if (port->membase == NULL) { 751 - release_mem_region(port->mapbase, MEN_Z135_MEM_SIZE); 760 + mcb_release_mem(mem); 752 761 return -ENOMEM; 753 762 } 754 763
+7
drivers/tty/serial/mpc52xx_uart.c
··· 1135 1135 psc_ops->command(port, MPC52xx_PSC_RST_RX); 1136 1136 psc_ops->command(port, MPC52xx_PSC_RST_TX); 1137 1137 1138 + /* 1139 + * According to Freescale's support the RST_TX command can produce a 1140 + * spike on the TX pin. So they recommend to delay "for one character". 1141 + * One millisecond should be enough for everyone. 1142 + */ 1143 + msleep(1); 1144 + 1138 1145 psc_ops->set_sicr(port, 0); /* UART mode DCD ignored */ 1139 1146 1140 1147 psc_ops->fifo_init(port);
+6 -35
drivers/tty/serial/mpsc.c
··· 55 55 #define SUPPORT_SYSRQ 56 56 #endif 57 57 58 - #include <linux/module.h> 59 - #include <linux/moduleparam.h> 60 58 #include <linux/tty.h> 61 59 #include <linux/tty_flip.h> 62 60 #include <linux/ioport.h> ··· 753 755 pi->port.line); 754 756 755 757 if (!pi->dma_region) { 756 - if (!dma_supported(pi->port.dev, 0xffffffff)) { 758 + if (!dma_set_mask(pi->port.dev, 0xffffffff)) { 757 759 printk(KERN_ERR "MPSC: Inadequate DMA support\n"); 758 760 rc = -ENXIO; 759 761 } else if ((pi->dma_region = dma_alloc_noncoherent(pi->port.dev, ··· 2106 2108 return rc; 2107 2109 } 2108 2110 2109 - static int mpsc_drv_remove(struct platform_device *dev) 2110 - { 2111 - pr_debug("mpsc_drv_exit: Removing MPSC %d\n", dev->id); 2112 - 2113 - if (dev->id < MPSC_NUM_CTLRS) { 2114 - uart_remove_one_port(&mpsc_reg, &mpsc_ports[dev->id].port); 2115 - mpsc_release_port((struct uart_port *) 2116 - &mpsc_ports[dev->id].port); 2117 - mpsc_drv_unmap_regs(&mpsc_ports[dev->id]); 2118 - return 0; 2119 - } else { 2120 - return -ENODEV; 2121 - } 2122 - } 2123 - 2124 2111 static struct platform_driver mpsc_driver = { 2125 2112 .probe = mpsc_drv_probe, 2126 - .remove = mpsc_drv_remove, 2127 2113 .driver = { 2128 - .name = MPSC_CTLR_NAME, 2114 + .name = MPSC_CTLR_NAME, 2115 + .suppress_bind_attrs = true, 2129 2116 }, 2130 2117 }; 2131 2118 ··· 2139 2156 2140 2157 return rc; 2141 2158 } 2159 + device_initcall(mpsc_drv_init); 2142 2160 2143 - static void __exit mpsc_drv_exit(void) 2144 - { 2145 - platform_driver_unregister(&mpsc_driver); 2146 - platform_driver_unregister(&mpsc_shared_driver); 2147 - uart_unregister_driver(&mpsc_reg); 2148 - memset(mpsc_ports, 0, sizeof(mpsc_ports)); 2149 - memset(&mpsc_shared_regs, 0, sizeof(mpsc_shared_regs)); 2150 - } 2151 - 2152 - module_init(mpsc_drv_init); 2153 - module_exit(mpsc_drv_exit); 2154 - 2161 + /* 2155 2162 MODULE_AUTHOR("Mark A. 
Greer <mgreer@mvista.com>"); 2156 2163 MODULE_DESCRIPTION("Generic Marvell MPSC serial/UART driver"); 2157 - MODULE_VERSION(MPSC_VERSION); 2158 2164 MODULE_LICENSE("GPL"); 2159 - MODULE_ALIAS_CHARDEV_MAJOR(MPSC_MAJOR); 2160 - MODULE_ALIAS("platform:" MPSC_CTLR_NAME); 2165 + */
+570 -52
drivers/tty/serial/msm_serial.c
··· 20 20 #endif 21 21 22 22 #include <linux/atomic.h> 23 + #include <linux/dma-mapping.h> 24 + #include <linux/dmaengine.h> 23 25 #include <linux/hrtimer.h> 24 26 #include <linux/module.h> 25 27 #include <linux/io.h> ··· 33 31 #include <linux/tty_flip.h> 34 32 #include <linux/serial_core.h> 35 33 #include <linux/serial.h> 34 + #include <linux/slab.h> 36 35 #include <linux/clk.h> 37 36 #include <linux/platform_device.h> 38 37 #include <linux/delay.h> ··· 42 39 43 40 #include "msm_serial.h" 44 41 42 + #define UARTDM_BURST_SIZE 16 /* in bytes */ 43 + #define UARTDM_TX_AIGN(x) ((x) & ~0x3) /* valid for > 1p3 */ 44 + #define UARTDM_TX_MAX 256 /* in bytes, valid for <= 1p3 */ 45 + #define UARTDM_RX_SIZE (UART_XMIT_SIZE / 4) 46 + 45 47 enum { 46 48 UARTDM_1P1 = 1, 47 49 UARTDM_1P2, 48 50 UARTDM_1P3, 49 51 UARTDM_1P4, 52 + }; 53 + 54 + struct msm_dma { 55 + struct dma_chan *chan; 56 + enum dma_data_direction dir; 57 + dma_addr_t phys; 58 + unsigned char *virt; 59 + dma_cookie_t cookie; 60 + u32 enable_bit; 61 + unsigned int count; 62 + struct dma_async_tx_descriptor *desc; 50 63 }; 51 64 52 65 struct msm_port { ··· 74 55 int is_uartdm; 75 56 unsigned int old_snap_state; 76 57 bool break_detected; 58 + struct msm_dma tx_dma; 59 + struct msm_dma rx_dma; 77 60 }; 78 61 79 - static inline void wait_for_xmitr(struct uart_port *port) 62 + static void msm_handle_tx(struct uart_port *port); 63 + static void msm_start_rx_dma(struct msm_port *msm_port); 64 + 65 + void msm_stop_dma(struct uart_port *port, struct msm_dma *dma) 66 + { 67 + struct device *dev = port->dev; 68 + unsigned int mapped; 69 + u32 val; 70 + 71 + mapped = dma->count; 72 + dma->count = 0; 73 + 74 + dmaengine_terminate_all(dma->chan); 75 + 76 + /* 77 + * DMA Stall happens if enqueue and flush command happens concurrently. 78 + * For example before changing the baud rate/protocol configuration and 79 + * sending flush command to ADM, disable the channel of UARTDM. 
80 + * Note: should not reset the receiver here immediately as it is not 81 + * suggested to do disable/reset or reset/disable at the same time. 82 + */ 83 + val = msm_read(port, UARTDM_DMEN); 84 + val &= ~dma->enable_bit; 85 + msm_write(port, val, UARTDM_DMEN); 86 + 87 + if (mapped) 88 + dma_unmap_single(dev, dma->phys, mapped, dma->dir); 89 + } 90 + 91 + static void msm_release_dma(struct msm_port *msm_port) 92 + { 93 + struct msm_dma *dma; 94 + 95 + dma = &msm_port->tx_dma; 96 + if (dma->chan) { 97 + msm_stop_dma(&msm_port->uart, dma); 98 + dma_release_channel(dma->chan); 99 + } 100 + 101 + memset(dma, 0, sizeof(*dma)); 102 + 103 + dma = &msm_port->rx_dma; 104 + if (dma->chan) { 105 + msm_stop_dma(&msm_port->uart, dma); 106 + dma_release_channel(dma->chan); 107 + kfree(dma->virt); 108 + } 109 + 110 + memset(dma, 0, sizeof(*dma)); 111 + } 112 + 113 + static void msm_request_tx_dma(struct msm_port *msm_port, resource_size_t base) 114 + { 115 + struct device *dev = msm_port->uart.dev; 116 + struct dma_slave_config conf; 117 + struct msm_dma *dma; 118 + u32 crci = 0; 119 + int ret; 120 + 121 + dma = &msm_port->tx_dma; 122 + 123 + /* allocate DMA resources, if available */ 124 + dma->chan = dma_request_slave_channel_reason(dev, "tx"); 125 + if (IS_ERR(dma->chan)) 126 + goto no_tx; 127 + 128 + of_property_read_u32(dev->of_node, "qcom,tx-crci", &crci); 129 + 130 + memset(&conf, 0, sizeof(conf)); 131 + conf.direction = DMA_MEM_TO_DEV; 132 + conf.device_fc = true; 133 + conf.dst_addr = base + UARTDM_TF; 134 + conf.dst_maxburst = UARTDM_BURST_SIZE; 135 + conf.slave_id = crci; 136 + 137 + ret = dmaengine_slave_config(dma->chan, &conf); 138 + if (ret) 139 + goto rel_tx; 140 + 141 + dma->dir = DMA_TO_DEVICE; 142 + 143 + if (msm_port->is_uartdm < UARTDM_1P4) 144 + dma->enable_bit = UARTDM_DMEN_TX_DM_ENABLE; 145 + else 146 + dma->enable_bit = UARTDM_DMEN_TX_BAM_ENABLE; 147 + 148 + return; 149 + 150 + rel_tx: 151 + dma_release_channel(dma->chan); 152 + no_tx: 153 + memset(dma, 
0, sizeof(*dma)); 154 + } 155 + 156 + static void msm_request_rx_dma(struct msm_port *msm_port, resource_size_t base) 157 + { 158 + struct device *dev = msm_port->uart.dev; 159 + struct dma_slave_config conf; 160 + struct msm_dma *dma; 161 + u32 crci = 0; 162 + int ret; 163 + 164 + dma = &msm_port->rx_dma; 165 + 166 + /* allocate DMA resources, if available */ 167 + dma->chan = dma_request_slave_channel_reason(dev, "rx"); 168 + if (IS_ERR(dma->chan)) 169 + goto no_rx; 170 + 171 + of_property_read_u32(dev->of_node, "qcom,rx-crci", &crci); 172 + 173 + dma->virt = kzalloc(UARTDM_RX_SIZE, GFP_KERNEL); 174 + if (!dma->virt) 175 + goto rel_rx; 176 + 177 + memset(&conf, 0, sizeof(conf)); 178 + conf.direction = DMA_DEV_TO_MEM; 179 + conf.device_fc = true; 180 + conf.src_addr = base + UARTDM_RF; 181 + conf.src_maxburst = UARTDM_BURST_SIZE; 182 + conf.slave_id = crci; 183 + 184 + ret = dmaengine_slave_config(dma->chan, &conf); 185 + if (ret) 186 + goto err; 187 + 188 + dma->dir = DMA_FROM_DEVICE; 189 + 190 + if (msm_port->is_uartdm < UARTDM_1P4) 191 + dma->enable_bit = UARTDM_DMEN_RX_DM_ENABLE; 192 + else 193 + dma->enable_bit = UARTDM_DMEN_RX_BAM_ENABLE; 194 + 195 + return; 196 + err: 197 + kfree(dma->virt); 198 + rel_rx: 199 + dma_release_channel(dma->chan); 200 + no_rx: 201 + memset(dma, 0, sizeof(*dma)); 202 + } 203 + 204 + static inline void msm_wait_for_xmitr(struct uart_port *port) 80 205 { 81 206 while (!(msm_read(port, UART_SR) & UART_SR_TX_EMPTY)) { 82 207 if (msm_read(port, UART_ISR) & UART_ISR_TX_READY) ··· 241 78 static void msm_start_tx(struct uart_port *port) 242 79 { 243 80 struct msm_port *msm_port = UART_TO_MSM(port); 81 + struct msm_dma *dma = &msm_port->tx_dma; 82 + 83 + /* Already started in DMA mode */ 84 + if (dma->count) 85 + return; 244 86 245 87 msm_port->imr |= UART_IMR_TXLEV; 246 88 msm_write(port, msm_port->imr, UART_IMR); 247 89 } 248 90 91 + static void msm_reset_dm_count(struct uart_port *port, int count) 92 + { 93 + msm_wait_for_xmitr(port); 
94 + msm_write(port, count, UARTDM_NCF_TX); 95 + msm_read(port, UARTDM_NCF_TX); 96 + } 97 + 98 + static void msm_complete_tx_dma(void *args) 99 + { 100 + struct msm_port *msm_port = args; 101 + struct uart_port *port = &msm_port->uart; 102 + struct circ_buf *xmit = &port->state->xmit; 103 + struct msm_dma *dma = &msm_port->tx_dma; 104 + struct dma_tx_state state; 105 + enum dma_status status; 106 + unsigned long flags; 107 + unsigned int count; 108 + u32 val; 109 + 110 + spin_lock_irqsave(&port->lock, flags); 111 + 112 + /* Already stopped */ 113 + if (!dma->count) 114 + goto done; 115 + 116 + status = dmaengine_tx_status(dma->chan, dma->cookie, &state); 117 + 118 + dma_unmap_single(port->dev, dma->phys, dma->count, dma->dir); 119 + 120 + val = msm_read(port, UARTDM_DMEN); 121 + val &= ~dma->enable_bit; 122 + msm_write(port, val, UARTDM_DMEN); 123 + 124 + if (msm_port->is_uartdm > UARTDM_1P3) { 125 + msm_write(port, UART_CR_CMD_RESET_TX, UART_CR); 126 + msm_write(port, UART_CR_TX_ENABLE, UART_CR); 127 + } 128 + 129 + count = dma->count - state.residue; 130 + port->icount.tx += count; 131 + dma->count = 0; 132 + 133 + xmit->tail += count; 134 + xmit->tail &= UART_XMIT_SIZE - 1; 135 + 136 + /* Restore "Tx FIFO below watermark" interrupt */ 137 + msm_port->imr |= UART_IMR_TXLEV; 138 + msm_write(port, msm_port->imr, UART_IMR); 139 + 140 + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 141 + uart_write_wakeup(port); 142 + 143 + msm_handle_tx(port); 144 + done: 145 + spin_unlock_irqrestore(&port->lock, flags); 146 + } 147 + 148 + static int msm_handle_tx_dma(struct msm_port *msm_port, unsigned int count) 149 + { 150 + struct circ_buf *xmit = &msm_port->uart.state->xmit; 151 + struct uart_port *port = &msm_port->uart; 152 + struct msm_dma *dma = &msm_port->tx_dma; 153 + void *cpu_addr; 154 + int ret; 155 + u32 val; 156 + 157 + cpu_addr = &xmit->buf[xmit->tail]; 158 + 159 + dma->phys = dma_map_single(port->dev, cpu_addr, count, dma->dir); 160 + ret = 
dma_mapping_error(port->dev, dma->phys); 161 + if (ret) 162 + return ret; 163 + 164 + dma->desc = dmaengine_prep_slave_single(dma->chan, dma->phys, 165 + count, DMA_MEM_TO_DEV, 166 + DMA_PREP_INTERRUPT | 167 + DMA_PREP_FENCE); 168 + if (!dma->desc) { 169 + ret = -EIO; 170 + goto unmap; 171 + } 172 + 173 + dma->desc->callback = msm_complete_tx_dma; 174 + dma->desc->callback_param = msm_port; 175 + 176 + dma->cookie = dmaengine_submit(dma->desc); 177 + ret = dma_submit_error(dma->cookie); 178 + if (ret) 179 + goto unmap; 180 + 181 + /* 182 + * Using DMA complete for Tx FIFO reload, no need for 183 + * "Tx FIFO below watermark" one, disable it 184 + */ 185 + msm_port->imr &= ~UART_IMR_TXLEV; 186 + msm_write(port, msm_port->imr, UART_IMR); 187 + 188 + dma->count = count; 189 + 190 + val = msm_read(port, UARTDM_DMEN); 191 + val |= dma->enable_bit; 192 + 193 + if (msm_port->is_uartdm < UARTDM_1P4) 194 + msm_write(port, val, UARTDM_DMEN); 195 + 196 + msm_reset_dm_count(port, count); 197 + 198 + if (msm_port->is_uartdm > UARTDM_1P3) 199 + msm_write(port, val, UARTDM_DMEN); 200 + 201 + dma_async_issue_pending(dma->chan); 202 + return 0; 203 + unmap: 204 + dma_unmap_single(port->dev, dma->phys, count, dma->dir); 205 + return ret; 206 + } 207 + 208 + static void msm_complete_rx_dma(void *args) 209 + { 210 + struct msm_port *msm_port = args; 211 + struct uart_port *port = &msm_port->uart; 212 + struct tty_port *tport = &port->state->port; 213 + struct msm_dma *dma = &msm_port->rx_dma; 214 + int count = 0, i, sysrq; 215 + unsigned long flags; 216 + u32 val; 217 + 218 + spin_lock_irqsave(&port->lock, flags); 219 + 220 + /* Already stopped */ 221 + if (!dma->count) 222 + goto done; 223 + 224 + val = msm_read(port, UARTDM_DMEN); 225 + val &= ~dma->enable_bit; 226 + msm_write(port, val, UARTDM_DMEN); 227 + 228 + /* Restore interrupts */ 229 + msm_port->imr |= UART_IMR_RXLEV | UART_IMR_RXSTALE; 230 + msm_write(port, msm_port->imr, UART_IMR); 231 + 232 + if (msm_read(port, UART_SR) & 
UART_SR_OVERRUN) { 233 + port->icount.overrun++; 234 + tty_insert_flip_char(tport, 0, TTY_OVERRUN); 235 + msm_write(port, UART_CR_CMD_RESET_ERR, UART_CR); 236 + } 237 + 238 + count = msm_read(port, UARTDM_RX_TOTAL_SNAP); 239 + 240 + port->icount.rx += count; 241 + 242 + dma->count = 0; 243 + 244 + dma_unmap_single(port->dev, dma->phys, UARTDM_RX_SIZE, dma->dir); 245 + 246 + for (i = 0; i < count; i++) { 247 + char flag = TTY_NORMAL; 248 + 249 + if (msm_port->break_detected && dma->virt[i] == 0) { 250 + port->icount.brk++; 251 + flag = TTY_BREAK; 252 + msm_port->break_detected = false; 253 + if (uart_handle_break(port)) 254 + continue; 255 + } 256 + 257 + if (!(port->read_status_mask & UART_SR_RX_BREAK)) 258 + flag = TTY_NORMAL; 259 + 260 + spin_unlock_irqrestore(&port->lock, flags); 261 + sysrq = uart_handle_sysrq_char(port, dma->virt[i]); 262 + spin_lock_irqsave(&port->lock, flags); 263 + if (!sysrq) 264 + tty_insert_flip_char(tport, dma->virt[i], flag); 265 + } 266 + 267 + msm_start_rx_dma(msm_port); 268 + done: 269 + spin_unlock_irqrestore(&port->lock, flags); 270 + 271 + if (count) 272 + tty_flip_buffer_push(tport); 273 + } 274 + 275 + static void msm_start_rx_dma(struct msm_port *msm_port) 276 + { 277 + struct msm_dma *dma = &msm_port->rx_dma; 278 + struct uart_port *uart = &msm_port->uart; 279 + u32 val; 280 + int ret; 281 + 282 + if (!dma->chan) 283 + return; 284 + 285 + dma->phys = dma_map_single(uart->dev, dma->virt, 286 + UARTDM_RX_SIZE, dma->dir); 287 + ret = dma_mapping_error(uart->dev, dma->phys); 288 + if (ret) 289 + return; 290 + 291 + dma->desc = dmaengine_prep_slave_single(dma->chan, dma->phys, 292 + UARTDM_RX_SIZE, DMA_DEV_TO_MEM, 293 + DMA_PREP_INTERRUPT); 294 + if (!dma->desc) 295 + goto unmap; 296 + 297 + dma->desc->callback = msm_complete_rx_dma; 298 + dma->desc->callback_param = msm_port; 299 + 300 + dma->cookie = dmaengine_submit(dma->desc); 301 + ret = dma_submit_error(dma->cookie); 302 + if (ret) 303 + goto unmap; 304 + /* 305 + * Using 
DMA for FIFO off-load, no need for "Rx FIFO over 306 + * watermark" or "stale" interrupts, disable them 307 + */ 308 + msm_port->imr &= ~(UART_IMR_RXLEV | UART_IMR_RXSTALE); 309 + 310 + /* 311 + * Well, when DMA is ADM3 engine(implied by <= UARTDM v1.3), 312 + * we need RXSTALE to flush input DMA fifo to memory 313 + */ 314 + if (msm_port->is_uartdm < UARTDM_1P4) 315 + msm_port->imr |= UART_IMR_RXSTALE; 316 + 317 + msm_write(uart, msm_port->imr, UART_IMR); 318 + 319 + dma->count = UARTDM_RX_SIZE; 320 + 321 + dma_async_issue_pending(dma->chan); 322 + 323 + msm_write(uart, UART_CR_CMD_RESET_STALE_INT, UART_CR); 324 + msm_write(uart, UART_CR_CMD_STALE_EVENT_ENABLE, UART_CR); 325 + 326 + val = msm_read(uart, UARTDM_DMEN); 327 + val |= dma->enable_bit; 328 + 329 + if (msm_port->is_uartdm < UARTDM_1P4) 330 + msm_write(uart, val, UARTDM_DMEN); 331 + 332 + msm_write(uart, UARTDM_RX_SIZE, UARTDM_DMRX); 333 + 334 + if (msm_port->is_uartdm > UARTDM_1P3) 335 + msm_write(uart, val, UARTDM_DMEN); 336 + 337 + return; 338 + unmap: 339 + dma_unmap_single(uart->dev, dma->phys, UARTDM_RX_SIZE, dma->dir); 340 + } 341 + 249 342 static void msm_stop_rx(struct uart_port *port) 250 343 { 251 344 struct msm_port *msm_port = UART_TO_MSM(port); 345 + struct msm_dma *dma = &msm_port->rx_dma; 252 346 253 347 msm_port->imr &= ~(UART_IMR_RXLEV | UART_IMR_RXSTALE); 254 348 msm_write(port, msm_port->imr, UART_IMR); 349 + 350 + if (dma->chan) 351 + msm_stop_dma(port, dma); 255 352 } 256 353 257 354 static void msm_enable_ms(struct uart_port *port) ··· 522 99 msm_write(port, msm_port->imr, UART_IMR); 523 100 } 524 101 525 - static void handle_rx_dm(struct uart_port *port, unsigned int misr) 102 + static void msm_handle_rx_dm(struct uart_port *port, unsigned int misr) 526 103 { 527 104 struct tty_port *tport = &port->state->port; 528 105 unsigned int sr; ··· 592 169 msm_write(port, UART_CR_CMD_RESET_STALE_INT, UART_CR); 593 170 msm_write(port, 0xFFFFFF, UARTDM_DMRX); 594 171 msm_write(port, 
UART_CR_CMD_STALE_EVENT_ENABLE, UART_CR); 172 + 173 + /* Try to use DMA */ 174 + msm_start_rx_dma(msm_port); 595 175 } 596 176 597 - static void handle_rx(struct uart_port *port) 177 + static void msm_handle_rx(struct uart_port *port) 598 178 { 599 179 struct tty_port *tport = &port->state->port; 600 180 unsigned int sr; ··· 650 224 spin_lock(&port->lock); 651 225 } 652 226 653 - static void reset_dm_count(struct uart_port *port, int count) 654 - { 655 - wait_for_xmitr(port); 656 - msm_write(port, count, UARTDM_NCF_TX); 657 - msm_read(port, UARTDM_NCF_TX); 658 - } 659 - 660 - static void handle_tx(struct uart_port *port) 227 + static void msm_handle_tx_pio(struct uart_port *port, unsigned int tx_count) 661 228 { 662 229 struct circ_buf *xmit = &port->state->xmit; 663 230 struct msm_port *msm_port = UART_TO_MSM(port); 664 - unsigned int tx_count, num_chars; 231 + unsigned int num_chars; 665 232 unsigned int tf_pointer = 0; 666 233 void __iomem *tf; 667 234 ··· 663 244 else 664 245 tf = port->membase + UART_TF; 665 246 666 - tx_count = uart_circ_chars_pending(xmit); 667 - tx_count = min3(tx_count, (unsigned int)UART_XMIT_SIZE - xmit->tail, 668 - port->fifosize); 669 - 670 - if (port->x_char) { 671 - if (msm_port->is_uartdm) 672 - reset_dm_count(port, tx_count + 1); 673 - 674 - iowrite8_rep(tf, &port->x_char, 1); 675 - port->icount.tx++; 676 - port->x_char = 0; 677 - } else if (tx_count && msm_port->is_uartdm) { 678 - reset_dm_count(port, tx_count); 679 - } 247 + if (tx_count && msm_port->is_uartdm) 248 + msm_reset_dm_count(port, tx_count); 680 249 681 250 while (tf_pointer < tx_count) { 682 251 int i; ··· 697 290 uart_write_wakeup(port); 698 291 } 699 292 700 - static void handle_delta_cts(struct uart_port *port) 293 + static void msm_handle_tx(struct uart_port *port) 294 + { 295 + struct msm_port *msm_port = UART_TO_MSM(port); 296 + struct circ_buf *xmit = &msm_port->uart.state->xmit; 297 + struct msm_dma *dma = &msm_port->tx_dma; 298 + unsigned int pio_count, 
dma_count, dma_min; 299 + void __iomem *tf; 300 + int err = 0; 301 + 302 + if (port->x_char) { 303 + if (msm_port->is_uartdm) 304 + tf = port->membase + UARTDM_TF; 305 + else 306 + tf = port->membase + UART_TF; 307 + 308 + if (msm_port->is_uartdm) 309 + msm_reset_dm_count(port, 1); 310 + 311 + iowrite8_rep(tf, &port->x_char, 1); 312 + port->icount.tx++; 313 + port->x_char = 0; 314 + return; 315 + } 316 + 317 + if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 318 + msm_stop_tx(port); 319 + return; 320 + } 321 + 322 + pio_count = CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE); 323 + dma_count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 324 + 325 + dma_min = 1; /* Always DMA */ 326 + if (msm_port->is_uartdm > UARTDM_1P3) { 327 + dma_count = UARTDM_TX_AIGN(dma_count); 328 + dma_min = UARTDM_BURST_SIZE; 329 + } else { 330 + if (dma_count > UARTDM_TX_MAX) 331 + dma_count = UARTDM_TX_MAX; 332 + } 333 + 334 + if (pio_count > port->fifosize) 335 + pio_count = port->fifosize; 336 + 337 + if (!dma->chan || dma_count < dma_min) 338 + msm_handle_tx_pio(port, pio_count); 339 + else 340 + err = msm_handle_tx_dma(msm_port, dma_count); 341 + 342 + if (err) /* fall back to PIO mode */ 343 + msm_handle_tx_pio(port, pio_count); 344 + } 345 + 346 + static void msm_handle_delta_cts(struct uart_port *port) 701 347 { 702 348 msm_write(port, UART_CR_CMD_RESET_CTS, UART_CR); 703 349 port->icount.cts++; 704 350 wake_up_interruptible(&port->state->port.delta_msr_wait); 705 351 } 706 352 707 - static irqreturn_t msm_irq(int irq, void *dev_id) 353 + static irqreturn_t msm_uart_irq(int irq, void *dev_id) 708 354 { 709 355 struct uart_port *port = dev_id; 710 356 struct msm_port *msm_port = UART_TO_MSM(port); 357 + struct msm_dma *dma = &msm_port->rx_dma; 358 + unsigned long flags; 711 359 unsigned int misr; 360 + u32 val; 712 361 713 - spin_lock(&port->lock); 362 + spin_lock_irqsave(&port->lock, flags); 714 363 misr = msm_read(port, UART_MISR); 715 364 msm_write(port, 0, 
UART_IMR); /* disable interrupt */ 716 365 ··· 776 313 } 777 314 778 315 if (misr & (UART_IMR_RXLEV | UART_IMR_RXSTALE)) { 779 - if (msm_port->is_uartdm) 780 - handle_rx_dm(port, misr); 781 - else 782 - handle_rx(port); 316 + if (dma->count) { 317 + val = UART_CR_CMD_STALE_EVENT_DISABLE; 318 + msm_write(port, val, UART_CR); 319 + val = UART_CR_CMD_RESET_STALE_INT; 320 + msm_write(port, val, UART_CR); 321 + /* 322 + * Flush DMA input fifo to memory, this will also 323 + * trigger DMA RX completion 324 + */ 325 + dmaengine_terminate_all(dma->chan); 326 + } else if (msm_port->is_uartdm) { 327 + msm_handle_rx_dm(port, misr); 328 + } else { 329 + msm_handle_rx(port); 330 + } 783 331 } 784 332 if (misr & UART_IMR_TXLEV) 785 - handle_tx(port); 333 + msm_handle_tx(port); 786 334 if (misr & UART_IMR_DELTA_CTS) 787 - handle_delta_cts(port); 335 + msm_handle_delta_cts(port); 788 336 789 337 msm_write(port, msm_port->imr, UART_IMR); /* restore interrupt */ 790 - spin_unlock(&port->lock); 338 + spin_unlock_irqrestore(&port->lock, flags); 791 339 792 340 return IRQ_HANDLED; 793 341 } ··· 882 408 { 3, 0xdd, 8 }, 883 409 { 2, 0xee, 16 }, 884 410 { 1, 0xff, 31 }, 411 + { 0, 0xff, 31 }, 885 412 }; 886 413 887 414 divisor = uart_get_divisor(port, baud); ··· 894 419 return entry; /* Default to smallest divider */ 895 420 } 896 421 897 - static int msm_set_baud_rate(struct uart_port *port, unsigned int baud) 422 + static int msm_set_baud_rate(struct uart_port *port, unsigned int baud, 423 + unsigned long *saved_flags) 898 424 { 899 - unsigned int rxstale, watermark; 425 + unsigned int rxstale, watermark, mask; 900 426 struct msm_port *msm_port = UART_TO_MSM(port); 901 427 const struct msm_baud_map *entry; 428 + unsigned long flags; 902 429 903 430 entry = msm_find_best_baud(port, baud); 904 431 905 432 msm_write(port, entry->code, UART_CSR); 906 433 434 + if (baud > 460800) 435 + port->uartclk = baud * 16; 436 + 437 + flags = *saved_flags; 438 + spin_unlock_irqrestore(&port->lock, 
flags); 439 + 440 + clk_set_rate(msm_port->clk, port->uartclk); 441 + 442 + spin_lock_irqsave(&port->lock, flags); 443 + *saved_flags = flags; 444 + 907 445 /* RX stale watermark */ 908 446 rxstale = entry->rxstale; 909 447 watermark = UART_IPR_STALE_LSB & rxstale; 910 - watermark |= UART_IPR_RXSTALE_LAST; 911 - watermark |= UART_IPR_STALE_TIMEOUT_MSB & (rxstale << 2); 448 + if (msm_port->is_uartdm) { 449 + mask = UART_DM_IPR_STALE_TIMEOUT_MSB; 450 + } else { 451 + watermark |= UART_IPR_RXSTALE_LAST; 452 + mask = UART_IPR_STALE_TIMEOUT_MSB; 453 + } 454 + 455 + watermark |= mask & (rxstale << 2); 456 + 912 457 msm_write(port, watermark, UART_IPR); 913 458 914 459 /* set RX watermark */ ··· 971 476 static int msm_startup(struct uart_port *port) 972 477 { 973 478 struct msm_port *msm_port = UART_TO_MSM(port); 974 - unsigned int data, rfr_level; 479 + unsigned int data, rfr_level, mask; 975 480 int ret; 976 481 977 482 snprintf(msm_port->name, sizeof(msm_port->name), 978 483 "msm_serial%d", port->line); 979 484 980 - ret = request_irq(port->irq, msm_irq, IRQF_TRIGGER_HIGH, 485 + ret = request_irq(port->irq, msm_uart_irq, IRQF_TRIGGER_HIGH, 981 486 msm_port->name, port); 982 487 if (unlikely(ret)) 983 488 return ret; ··· 991 496 992 497 /* set automatic RFR level */ 993 498 data = msm_read(port, UART_MR1); 994 - data &= ~UART_MR1_AUTO_RFR_LEVEL1; 499 + 500 + if (msm_port->is_uartdm) 501 + mask = UART_DM_MR1_AUTO_RFR_LEVEL1; 502 + else 503 + mask = UART_MR1_AUTO_RFR_LEVEL1; 504 + 505 + data &= ~mask; 995 506 data &= ~UART_MR1_AUTO_RFR_LEVEL0; 996 - data |= UART_MR1_AUTO_RFR_LEVEL1 & (rfr_level << 2); 507 + data |= mask & (rfr_level << 2); 997 508 data |= UART_MR1_AUTO_RFR_LEVEL0 & rfr_level; 998 509 msm_write(port, data, UART_MR1); 510 + 511 + if (msm_port->is_uartdm) { 512 + msm_request_tx_dma(msm_port, msm_port->uart.mapbase); 513 + msm_request_rx_dma(msm_port, msm_port->uart.mapbase); 514 + } 515 + 999 516 return 0; 1000 517 } 1001 518 ··· 1018 511 msm_port->imr = 0; 
1019 512 msm_write(port, 0, UART_IMR); /* disable interrupts */ 1020 513 514 + if (msm_port->is_uartdm) 515 + msm_release_dma(msm_port); 516 + 1021 517 clk_disable_unprepare(msm_port->clk); 1022 518 1023 519 free_irq(port->irq, port); ··· 1029 519 static void msm_set_termios(struct uart_port *port, struct ktermios *termios, 1030 520 struct ktermios *old) 1031 521 { 522 + struct msm_port *msm_port = UART_TO_MSM(port); 523 + struct msm_dma *dma = &msm_port->rx_dma; 1032 524 unsigned long flags; 1033 525 unsigned int baud, mr; 1034 526 1035 527 spin_lock_irqsave(&port->lock, flags); 1036 528 529 + if (dma->chan) /* Terminate if any */ 530 + msm_stop_dma(port, dma); 531 + 1037 532 /* calculate and set baud rate */ 1038 - baud = uart_get_baud_rate(port, termios, old, 300, 115200); 1039 - baud = msm_set_baud_rate(port, baud); 533 + baud = uart_get_baud_rate(port, termios, old, 300, 4000000); 534 + baud = msm_set_baud_rate(port, baud, &flags); 1040 535 if (tty_termios_baud_rate(termios)) 1041 536 tty_termios_encode_baud_rate(termios, baud, baud); 1042 537 ··· 1102 587 port->read_status_mask |= UART_SR_RX_BREAK; 1103 588 1104 589 uart_update_timeout(port, termios->c_cflag, baud); 590 + 591 + /* Try to use DMA */ 592 + msm_start_rx_dma(msm_port); 1105 593 1106 594 spin_unlock_irqrestore(&port->lock, flags); 1107 595 } ··· 1283 765 msm_write(port, 0, UART_IMR); 1284 766 1285 767 if (msm_port->is_uartdm) 1286 - reset_dm_count(port, 1); 768 + msm_reset_dm_count(port, 1); 1287 769 1288 770 /* Wait until FIFO is empty */ 1289 771 while (!(msm_read(port, UART_SR) & UART_SR_TX_READY)) ··· 1357 839 1358 840 #define UART_NR ARRAY_SIZE(msm_uart_ports) 1359 841 1360 - static inline struct uart_port *get_port_from_line(unsigned int line) 842 + static inline struct uart_port *msm_get_port_from_line(unsigned int line) 1361 843 { 1362 844 return &msm_uart_ports[line].uart; 1363 845 } ··· 1384 866 1385 867 spin_lock(&port->lock); 1386 868 if (is_uartdm) 1387 - reset_dm_count(port, count); 
869 + msm_reset_dm_count(port, count); 1388 870 1389 871 i = 0; 1390 872 while (i < count) { ··· 1429 911 1430 912 BUG_ON(co->index < 0 || co->index >= UART_NR); 1431 913 1432 - port = get_port_from_line(co->index); 914 + port = msm_get_port_from_line(co->index); 1433 915 msm_port = UART_TO_MSM(port); 1434 916 1435 917 __msm_console_write(port, s, count, msm_port->is_uartdm); ··· 1446 928 if (unlikely(co->index >= UART_NR || co->index < 0)) 1447 929 return -ENXIO; 1448 930 1449 - port = get_port_from_line(co->index); 931 + port = msm_get_port_from_line(co->index); 1450 932 1451 933 if (unlikely(!port->membase)) 1452 934 return -ENXIO; ··· 1561 1043 1562 1044 dev_info(&pdev->dev, "msm_serial: detected port #%d\n", line); 1563 1045 1564 - port = get_port_from_line(line); 1046 + port = msm_get_port_from_line(line); 1565 1047 port->dev = &pdev->dev; 1566 1048 msm_port = UART_TO_MSM(port); 1567 1049
+31 -22
drivers/tty/serial/msm_serial.h
··· 20 20 21 21 #define UART_MR1_AUTO_RFR_LEVEL0 0x3F 22 22 #define UART_MR1_AUTO_RFR_LEVEL1 0x3FF00 23 - #define UART_MR1_RX_RDY_CTL (1 << 7) 24 - #define UART_MR1_CTS_CTL (1 << 6) 23 + #define UART_DM_MR1_AUTO_RFR_LEVEL1 0xFFFFFF00 24 + #define UART_MR1_RX_RDY_CTL BIT(7) 25 + #define UART_MR1_CTS_CTL BIT(6) 25 26 26 27 #define UART_MR2 0x0004 27 - #define UART_MR2_ERROR_MODE (1 << 6) 28 + #define UART_MR2_ERROR_MODE BIT(6) 28 29 #define UART_MR2_BITS_PER_CHAR 0x30 29 30 #define UART_MR2_BITS_PER_CHAR_5 (0x0 << 4) 30 31 #define UART_MR2_BITS_PER_CHAR_6 (0x1 << 4) ··· 59 58 #define UART_CR_CMD_SET_RFR (13 << 4) 60 59 #define UART_CR_CMD_RESET_RFR (14 << 4) 61 60 #define UART_CR_CMD_PROTECTION_EN (16 << 4) 61 + #define UART_CR_CMD_STALE_EVENT_DISABLE (6 << 8) 62 62 #define UART_CR_CMD_STALE_EVENT_ENABLE (80 << 4) 63 63 #define UART_CR_CMD_FORCE_STALE (4 << 8) 64 64 #define UART_CR_CMD_RESET_TX_READY (3 << 8) 65 - #define UART_CR_TX_DISABLE (1 << 3) 66 - #define UART_CR_TX_ENABLE (1 << 2) 67 - #define UART_CR_RX_DISABLE (1 << 1) 68 - #define UART_CR_RX_ENABLE (1 << 0) 65 + #define UART_CR_TX_DISABLE BIT(3) 66 + #define UART_CR_TX_ENABLE BIT(2) 67 + #define UART_CR_RX_DISABLE BIT(1) 68 + #define UART_CR_RX_ENABLE BIT(0) 69 69 #define UART_CR_CMD_RESET_RXBREAK_START ((1 << 11) | (2 << 4)) 70 70 71 71 #define UART_IMR 0x0014 72 - #define UART_IMR_TXLEV (1 << 0) 73 - #define UART_IMR_RXSTALE (1 << 3) 74 - #define UART_IMR_RXLEV (1 << 4) 75 - #define UART_IMR_DELTA_CTS (1 << 5) 76 - #define UART_IMR_CURRENT_CTS (1 << 6) 77 - #define UART_IMR_RXBREAK_START (1 << 10) 72 + #define UART_IMR_TXLEV BIT(0) 73 + #define UART_IMR_RXSTALE BIT(3) 74 + #define UART_IMR_RXLEV BIT(4) 75 + #define UART_IMR_DELTA_CTS BIT(5) 76 + #define UART_IMR_CURRENT_CTS BIT(6) 77 + #define UART_IMR_RXBREAK_START BIT(10) 78 78 79 79 #define UART_IPR_RXSTALE_LAST 0x20 80 80 #define UART_IPR_STALE_LSB 0x1F 81 81 #define UART_IPR_STALE_TIMEOUT_MSB 0x3FF80 82 + #define UART_DM_IPR_STALE_TIMEOUT_MSB 
0xFFFFFF80 82 83 83 84 #define UART_IPR 0x0018 84 85 #define UART_TFWR 0x001C ··· 99 96 #define UART_TEST_CTRL 0x0050 100 97 101 98 #define UART_SR 0x0008 102 - #define UART_SR_HUNT_CHAR (1 << 7) 103 - #define UART_SR_RX_BREAK (1 << 6) 104 - #define UART_SR_PAR_FRAME_ERR (1 << 5) 105 - #define UART_SR_OVERRUN (1 << 4) 106 - #define UART_SR_TX_EMPTY (1 << 3) 107 - #define UART_SR_TX_READY (1 << 2) 108 - #define UART_SR_RX_FULL (1 << 1) 109 - #define UART_SR_RX_READY (1 << 0) 99 + #define UART_SR_HUNT_CHAR BIT(7) 100 + #define UART_SR_RX_BREAK BIT(6) 101 + #define UART_SR_PAR_FRAME_ERR BIT(5) 102 + #define UART_SR_OVERRUN BIT(4) 103 + #define UART_SR_TX_EMPTY BIT(3) 104 + #define UART_SR_TX_READY BIT(2) 105 + #define UART_SR_RX_FULL BIT(1) 106 + #define UART_SR_RX_READY BIT(0) 110 107 111 108 #define UART_RF 0x000C 112 109 #define UARTDM_RF 0x0070 113 110 #define UART_MISR 0x0010 114 111 #define UART_ISR 0x0014 115 - #define UART_ISR_TX_READY (1 << 7) 112 + #define UART_ISR_TX_READY BIT(7) 116 113 117 114 #define UARTDM_RXFS 0x50 118 115 #define UARTDM_RXFS_BUF_SHIFT 0x7 ··· 121 118 #define UARTDM_DMEN 0x3C 122 119 #define UARTDM_DMEN_RX_SC_ENABLE BIT(5) 123 120 #define UARTDM_DMEN_TX_SC_ENABLE BIT(4) 121 + 122 + #define UARTDM_DMEN_TX_BAM_ENABLE BIT(2) /* UARTDM_1P4 */ 123 + #define UARTDM_DMEN_TX_DM_ENABLE BIT(0) /* < UARTDM_1P4 */ 124 + 125 + #define UARTDM_DMEN_RX_BAM_ENABLE BIT(3) /* UARTDM_1P4 */ 126 + #define UARTDM_DMEN_RX_DM_ENABLE BIT(1) /* < UARTDM_1P4 */ 124 127 125 128 #define UARTDM_DMRX 0x34 126 129 #define UARTDM_NCF_TX 0x40
+1 -1
drivers/tty/serial/mxs-auart.c
··· 1196 1196 enum mctrl_gpio_idx i; 1197 1197 struct gpio_desc *gpiod; 1198 1198 1199 - s->gpios = mctrl_gpio_init(dev, 0); 1199 + s->gpios = mctrl_gpio_init_noauto(dev, 0); 1200 1200 if (IS_ERR(s->gpios)) 1201 1201 return PTR_ERR(s->gpios); 1202 1202
+10
drivers/tty/serial/of_serial.c
··· 21 21 #include <linux/nwpserial.h> 22 22 #include <linux/clk.h> 23 23 24 + #ifdef CONFIG_SERIAL_8250_MODULE 25 + #define CONFIG_SERIAL_8250 CONFIG_SERIAL_8250_MODULE 26 + #endif 27 + 24 28 #include "8250/8250.h" 25 29 26 30 struct of_serial_info { ··· 153 149 port->iotype = UPIO_AU; 154 150 break; 155 151 } 152 + 153 + if (IS_ENABLED(CONFIG_SERIAL_8250_FSL) && 154 + (of_device_is_compatible(np, "fsl,ns16550") || 155 + of_device_is_compatible(np, "fsl,16550-FIFO64"))) 156 + port->handle_irq = fsl8250_handle_irq; 156 157 157 158 return 0; 158 159 out: ··· 359 350 #endif 360 351 { /* end of list */ }, 361 352 }; 353 + MODULE_DEVICE_TABLE(of, of_platform_serial_table); 362 354 363 355 static struct platform_driver of_platform_serial_driver = { 364 356 .driver = {
+2
drivers/tty/serial/omap-serial.c
··· 199 199 serial_out(up, UART_FCR, 0); 200 200 } 201 201 202 + #ifdef CONFIG_PM 202 203 static int serial_omap_get_context_loss_count(struct uart_omap_port *up) 203 204 { 204 205 struct omap_uart_port_info *pdata = dev_get_platdata(up->dev); ··· 220 219 221 220 pdata->enable_wakeup(up->dev, enable); 222 221 } 222 + #endif /* CONFIG_PM */ 223 223 224 224 /* 225 225 * Calculate the absolute difference between the desired and actual baud
+21 -45
drivers/tty/serial/samsung.c
··· 385 385 } 386 386 } 387 387 388 - static int s3c24xx_serial_rx_fifocnt(struct s3c24xx_uart_port *ourport, 389 - unsigned long ufstat); 390 - 391 - static void uart_rx_drain_fifo(struct s3c24xx_uart_port *ourport) 392 - { 393 - struct uart_port *port = &ourport->port; 394 - struct tty_port *tty = &port->state->port; 395 - unsigned int ch, ufstat; 396 - unsigned int count; 397 - 398 - ufstat = rd_regl(port, S3C2410_UFSTAT); 399 - count = s3c24xx_serial_rx_fifocnt(ourport, ufstat); 400 - 401 - if (!count) 402 - return; 403 - 404 - while (count-- > 0) { 405 - ch = rd_regb(port, S3C2410_URXH); 406 - 407 - ourport->port.icount.rx++; 408 - tty_insert_flip_char(tty, ch, TTY_NORMAL); 409 - } 410 - 411 - tty_flip_buffer_push(tty); 412 - } 413 - 414 388 static void s3c24xx_serial_stop_rx(struct uart_port *port) 415 389 { 416 390 struct s3c24xx_uart_port *ourport = to_ourport(port); ··· 547 573 ourport->rx_mode = S3C24XX_RX_PIO; 548 574 } 549 575 550 - static irqreturn_t s3c24xx_serial_rx_chars_dma(int irq, void *dev_id) 576 + static void s3c24xx_serial_rx_drain_fifo(struct s3c24xx_uart_port *ourport); 577 + 578 + static irqreturn_t s3c24xx_serial_rx_chars_dma(void *dev_id) 551 579 { 552 580 unsigned int utrstat, ufstat, received; 553 581 struct s3c24xx_uart_port *ourport = dev_id; ··· 582 606 enable_rx_pio(ourport); 583 607 } 584 608 585 - uart_rx_drain_fifo(ourport); 609 + s3c24xx_serial_rx_drain_fifo(ourport); 586 610 587 611 if (tty) { 588 612 tty_flip_buffer_push(t); ··· 597 621 return IRQ_HANDLED; 598 622 } 599 623 600 - static irqreturn_t s3c24xx_serial_rx_chars_pio(int irq, void *dev_id) 624 + static void s3c24xx_serial_rx_drain_fifo(struct s3c24xx_uart_port *ourport) 601 625 { 602 - struct s3c24xx_uart_port *ourport = dev_id; 603 626 struct uart_port *port = &ourport->port; 604 627 unsigned int ufcon, ch, flag, ufstat, uerstat; 605 - unsigned long flags; 606 628 int max_count = port->fifosize; 607 - 608 - spin_lock_irqsave(&port->lock, flags); 609 629 610 630 
while (max_count-- > 0) { 611 631 ufcon = rd_regl(port, S3C2410_UFCON); ··· 626 654 ufcon |= S3C2410_UFCON_RESETRX; 627 655 wr_regl(port, S3C2410_UFCON, ufcon); 628 656 rx_enabled(port) = 1; 629 - spin_unlock_irqrestore(&port->lock, 630 - flags); 631 - goto out; 657 + return; 632 658 } 633 659 continue; 634 660 } ··· 646 676 dbg("break!\n"); 647 677 port->icount.brk++; 648 678 if (uart_handle_break(port)) 649 - goto ignore_char; 679 + continue; /* Ignore character */ 650 680 } 651 681 652 682 if (uerstat & S3C2410_UERSTAT_FRAME) ··· 666 696 } 667 697 668 698 if (uart_handle_sysrq_char(port, ch)) 669 - goto ignore_char; 699 + continue; /* Ignore character */ 670 700 671 701 uart_insert_char(port, uerstat, S3C2410_UERSTAT_OVERRUN, 672 702 ch, flag); 673 - 674 - ignore_char: 675 - continue; 676 703 } 677 704 678 - spin_unlock_irqrestore(&port->lock, flags); 679 705 tty_flip_buffer_push(&port->state->port); 706 + } 680 707 681 - out: 708 + static irqreturn_t s3c24xx_serial_rx_chars_pio(void *dev_id) 709 + { 710 + struct s3c24xx_uart_port *ourport = dev_id; 711 + struct uart_port *port = &ourport->port; 712 + unsigned long flags; 713 + 714 + spin_lock_irqsave(&port->lock, flags); 715 + s3c24xx_serial_rx_drain_fifo(ourport); 716 + spin_unlock_irqrestore(&port->lock, flags); 717 + 682 718 return IRQ_HANDLED; 683 719 } 684 720 ··· 694 718 struct s3c24xx_uart_port *ourport = dev_id; 695 719 696 720 if (ourport->dma && ourport->dma->rx_chan) 697 - return s3c24xx_serial_rx_chars_dma(irq, dev_id); 698 - return s3c24xx_serial_rx_chars_pio(irq, dev_id); 721 + return s3c24xx_serial_rx_chars_dma(dev_id); 722 + return s3c24xx_serial_rx_chars_pio(dev_id); 699 723 } 700 724 701 725 static irqreturn_t s3c24xx_serial_tx_chars(int irq, void *id)
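The samsung.c change above extracts the FIFO-drain loop into a shared `s3c24xx_serial_rx_drain_fifo()` used by both the PIO and DMA interrupt paths, with locking moved into the thin PIO wrapper. A minimal sketch of that shared-drain pattern, using hypothetical `mini_*` names (not the driver's real types):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of the shared-drain pattern: one drain routine,
 * called from both a locked PIO wrapper and the DMA completion path. */
struct mini_port {
    const unsigned char *fifo; /* pretend hardware FIFO contents */
    size_t fifo_count;         /* characters currently in the FIFO */
    unsigned char sink[64];    /* stand-in for the tty flip buffer */
    size_t rx_count;           /* like port->icount.rx */
};

/* Drain whatever the FIFO holds into the sink; callers handle locking. */
static void mini_rx_drain_fifo(struct mini_port *p)
{
    while (p->fifo_count > 0 && p->rx_count < sizeof(p->sink)) {
        p->sink[p->rx_count++] = *p->fifo++;
        p->fifo_count--;
    }
}

/* PIO wrapper: in the real driver this takes the port spinlock around
 * the drain; the DMA path calls the drain directly. */
static size_t mini_rx_chars_pio(struct mini_port *p)
{
    /* spin_lock_irqsave(&port->lock, flags); */
    mini_rx_drain_fifo(p);
    /* spin_unlock_irqrestore(&port->lock, flags); */
    return p->rx_count;
}
```

Factoring the loop out also lets the error-handling paths use plain `continue`/`return` instead of the old `goto ignore_char`/`goto out` labels, as the diff shows.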
+6 -1
drivers/tty/serial/sc16is7xx.c
··· 1321 1321 const struct of_device_id *of_id = 1322 1322 of_match_device(sc16is7xx_dt_ids, &spi->dev); 1323 1323 1324 + if (!of_id) 1325 + return -ENODEV; 1326 + 1324 1327 devtype = (struct sc16is7xx_devtype *)of_id->data; 1325 1328 } else { 1326 1329 const struct spi_device_id *id_entry = spi_get_device_id(spi); ··· 1383 1380 const struct of_device_id *of_id = 1384 1381 of_match_device(sc16is7xx_dt_ids, &i2c->dev); 1385 1382 1383 + if (!of_id) 1384 + return -ENODEV; 1385 + 1386 1386 devtype = (struct sc16is7xx_devtype *)of_id->data; 1387 1387 } else { 1388 1388 devtype = (struct sc16is7xx_devtype *)id->driver_data; ··· 1426 1420 .id_table = sc16is7xx_i2c_id_table, 1427 1421 }; 1428 1422 1429 - MODULE_ALIAS("i2c:sc16is7xx"); 1430 1423 #endif 1431 1424 1432 1425 static int __init sc16is7xx_init(void)
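The sc16is7xx fix guards both probe paths against `of_match_device()` returning NULL before `of_id->data` is dereferenced. A hedged sketch of the same defensive pattern with stand-in `mini_*` types (ENODEV is 19 on Linux):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the OF-match structures. */
struct mini_devtype { int nr_uart; };
struct mini_of_id  { const struct mini_devtype *data; };

#define MINI_ENODEV 19

/* Mirror of the fix: bail out with -ENODEV instead of dereferencing a
 * NULL match result. */
static int mini_probe(const struct mini_of_id *of_id,
                      const struct mini_devtype **devtype)
{
    if (!of_id)
        return -MINI_ENODEV;
    *devtype = of_id->data;
    return 0;
}
```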
+28 -66
drivers/tty/serial/serial-tegra.c
··· 186 186 tegra_uart_write(tup, mcr, UART_MCR); 187 187 tup->mcr_shadow = mcr; 188 188 } 189 - return; 190 189 } 191 190 192 191 static void set_dtr(struct tegra_uart_port *tup, bool active) ··· 201 202 tegra_uart_write(tup, mcr, UART_MCR); 202 203 tup->mcr_shadow = mcr; 203 204 } 204 - return; 205 205 } 206 206 207 207 static void tegra_uart_set_mctrl(struct uart_port *u, unsigned int mctrl) ··· 215 217 216 218 dtr_enable = !!(mctrl & TIOCM_DTR); 217 219 set_dtr(tup, dtr_enable); 218 - return; 219 220 } 220 221 221 222 static void tegra_uart_break_ctl(struct uart_port *u, int break_ctl) ··· 508 511 async_tx_ack(tup->tx_dma_desc); 509 512 xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1); 510 513 tup->tx_in_progress = 0; 511 - return; 512 514 } 513 515 514 516 static void tegra_uart_handle_tx_pio(struct tegra_uart_port *tup) ··· 519 523 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 520 524 uart_write_wakeup(&tup->uport); 521 525 tegra_uart_start_next_tx(tup); 522 - return; 523 526 } 524 527 525 528 static void tegra_uart_handle_rx_pio(struct tegra_uart_port *tup, ··· 540 545 if (!uart_handle_sysrq_char(&tup->uport, ch) && tty) 541 546 tty_insert_flip_char(tty, ch, flag); 542 547 } while (1); 543 - 544 - return; 545 548 } 546 549 547 550 static void tegra_uart_copy_rx_to_tty(struct tegra_uart_port *tup, ··· 569 576 TEGRA_UART_RX_DMA_BUFFER_SIZE, DMA_TO_DEVICE); 570 577 } 571 578 579 + static void tegra_uart_rx_buffer_push(struct tegra_uart_port *tup, 580 + unsigned int residue) 581 + { 582 + struct tty_port *port = &tup->uport.state->port; 583 + struct tty_struct *tty = tty_port_tty_get(port); 584 + unsigned int count; 585 + 586 + async_tx_ack(tup->rx_dma_desc); 587 + count = tup->rx_bytes_requested - residue; 588 + 589 + /* If we are here, DMA is stopped */ 590 + tegra_uart_copy_rx_to_tty(tup, port, count); 591 + 592 + tegra_uart_handle_rx_pio(tup, port); 593 + if (tty) { 594 + tty_flip_buffer_push(port); 595 + tty_kref_put(tty); 596 + } 597 + } 598 + 
572 599 static void tegra_uart_rx_dma_complete(void *args) 573 600 { 574 601 struct tegra_uart_port *tup = args; 575 602 struct uart_port *u = &tup->uport; 576 - unsigned int count = tup->rx_bytes_requested; 577 - struct tty_struct *tty = tty_port_tty_get(&tup->uport.state->port); 578 - struct tty_port *port = &u->state->port; 579 603 unsigned long flags; 580 604 struct dma_tx_state state; 581 605 enum dma_status status; ··· 606 596 goto done; 607 597 } 608 598 609 - async_tx_ack(tup->rx_dma_desc); 610 - 611 599 /* Deactivate flow control to stop sender */ 612 600 if (tup->rts_active) 613 601 set_rts(tup, false); 614 602 615 - /* If we are here, DMA is stopped */ 616 - tegra_uart_copy_rx_to_tty(tup, port, count); 617 - 618 - tegra_uart_handle_rx_pio(tup, port); 619 - if (tty) { 620 - spin_unlock_irqrestore(&u->lock, flags); 621 - tty_flip_buffer_push(port); 622 - spin_lock_irqsave(&u->lock, flags); 623 - tty_kref_put(tty); 624 - } 603 + tegra_uart_rx_buffer_push(tup, 0); 625 604 tegra_uart_start_rx_dma(tup); 626 605 627 606 /* Activate flow control to start transfer */ ··· 621 622 spin_unlock_irqrestore(&u->lock, flags); 622 623 } 623 624 624 - static void tegra_uart_handle_rx_dma(struct tegra_uart_port *tup, 625 - unsigned long *flags) 625 + static void tegra_uart_handle_rx_dma(struct tegra_uart_port *tup) 626 626 { 627 627 struct dma_tx_state state; 628 - struct tty_struct *tty = tty_port_tty_get(&tup->uport.state->port); 629 - struct tty_port *port = &tup->uport.state->port; 630 - struct uart_port *u = &tup->uport; 631 - unsigned int count; 632 628 633 629 /* Deactivate flow control to stop sender */ 634 630 if (tup->rts_active) 635 631 set_rts(tup, false); 636 632 637 633 dmaengine_terminate_all(tup->rx_dma_chan); 638 - dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state); 639 - async_tx_ack(tup->rx_dma_desc); 640 - count = tup->rx_bytes_requested - state.residue; 641 - 642 - /* If we are here, DMA is stopped */ 643 - tegra_uart_copy_rx_to_tty(tup, 
port, count); 644 - 645 - tegra_uart_handle_rx_pio(tup, port); 646 - if (tty) { 647 - spin_unlock_irqrestore(&u->lock, *flags); 648 - tty_flip_buffer_push(port); 649 - spin_lock_irqsave(&u->lock, *flags); 650 - tty_kref_put(tty); 651 - } 634 + dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state); 635 + tegra_uart_rx_buffer_push(tup, state.residue); 652 636 tegra_uart_start_rx_dma(tup); 653 637 654 638 if (tup->rts_active) ··· 679 697 /* Will start/stop_tx accordingly */ 680 698 if (msr & UART_MSR_DCTS) 681 699 uart_handle_cts_change(&tup->uport, msr & UART_MSR_CTS); 682 - return; 683 700 } 684 701 685 702 static irqreturn_t tegra_uart_isr(int irq, void *data) ··· 695 714 iir = tegra_uart_read(tup, UART_IIR); 696 715 if (iir & UART_IIR_NO_INT) { 697 716 if (is_rx_int) { 698 - tegra_uart_handle_rx_dma(tup, &flags); 717 + tegra_uart_handle_rx_dma(tup); 699 718 if (tup->rx_in_progress) { 700 719 ier = tup->ier_shadow; 701 720 ier |= (UART_IER_RLSI | UART_IER_RTOIE | ··· 750 769 static void tegra_uart_stop_rx(struct uart_port *u) 751 770 { 752 771 struct tegra_uart_port *tup = to_tegra_uport(u); 753 - struct tty_struct *tty; 754 - struct tty_port *port = &u->state->port; 755 772 struct dma_tx_state state; 756 773 unsigned long ier; 757 - int count; 758 774 759 775 if (tup->rts_active) 760 776 set_rts(tup, false); 761 777 762 778 if (!tup->rx_in_progress) 763 779 return; 764 - 765 - tty = tty_port_tty_get(&tup->uport.state->port); 766 780 767 781 tegra_uart_wait_sym_time(tup, 1); /* wait a character interval */ 768 782 ··· 767 791 tup->ier_shadow = ier; 768 792 tegra_uart_write(tup, ier, UART_IER); 769 793 tup->rx_in_progress = 0; 770 - if (tup->rx_dma_chan) { 771 - dmaengine_terminate_all(tup->rx_dma_chan); 772 - dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state); 773 - async_tx_ack(tup->rx_dma_desc); 774 - count = tup->rx_bytes_requested - state.residue; 775 - tegra_uart_copy_rx_to_tty(tup, port, count); 776 - tegra_uart_handle_rx_pio(tup, port); 
777 - } else { 778 - tegra_uart_handle_rx_pio(tup, port); 779 - } 780 - if (tty) { 781 - tty_flip_buffer_push(port); 782 - tty_kref_put(tty); 783 - } 784 - return; 794 + dmaengine_terminate_all(tup->rx_dma_chan); 795 + dmaengine_tx_status(tup->rx_dma_chan, tup->rx_cookie, &state); 796 + tegra_uart_rx_buffer_push(tup, state.residue); 785 797 } 786 798 787 799 static void tegra_uart_hw_deinit(struct tegra_uart_port *tup) ··· 1047 1083 tup->tx_bytes = 0; 1048 1084 if (tup->tx_dma_chan) 1049 1085 dmaengine_terminate_all(tup->tx_dma_chan); 1050 - return; 1051 1086 } 1052 1087 1053 1088 static void tegra_uart_shutdown(struct uart_port *u) ··· 1186 1223 tegra_uart_read(tup, UART_IER); 1187 1224 1188 1225 spin_unlock_irqrestore(&u->lock, flags); 1189 - return; 1190 1226 } 1191 1227 1192 1228 static const char *tegra_uart_type(struct uart_port *u)
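The serial-tegra refactor folds three duplicated "terminate DMA, read residue, push bytes to the tty" sequences into one `tegra_uart_rx_buffer_push(tup, residue)` helper, where the received count is the requested length minus the DMA residue. A minimal sketch of that calculation, assuming hypothetical `mini_*` names:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical miniature of the residue-to-count logic: the DMA engine
 * reports a residue (bytes it did NOT transfer), so the bytes actually
 * received are the requested length minus that residue. */
struct mini_rx {
    unsigned char dma_buf[32];    /* DMA bounce buffer */
    unsigned int bytes_requested; /* like tup->rx_bytes_requested */
    unsigned char tty_buf[32];    /* stand-in for the tty flip buffer */
    unsigned int pushed;
};

static unsigned int mini_rx_buffer_push(struct mini_rx *rx,
                                        unsigned int residue)
{
    unsigned int count = rx->bytes_requested - residue;

    /* In the driver: copy to the tty flip buffer and push it. */
    memcpy(rx->tty_buf, rx->dma_buf, count);
    rx->pushed = count;
    return count;
}
```

The completion callback passes residue 0 (the whole request finished); the timeout and stop-rx paths pass `state.residue` from `dmaengine_tx_status()`.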
+6 -3
drivers/tty/serial/serial_core.c
··· 1437 1437 clear_bit(ASYNCB_CLOSING, &port->flags); 1438 1438 spin_unlock_irq(&port->lock); 1439 1439 wake_up_interruptible(&port->open_wait); 1440 - wake_up_interruptible(&port->close_wait); 1441 1440 1442 1441 mutex_unlock(&port->mutex); 1443 1442 ··· 1818 1819 * @options: ptr for <options> field; NULL if not present (out) 1819 1820 * 1820 1821 * Decodes earlycon kernel command line parameters of the form 1821 - * earlycon=<name>,io|mmio|mmio32|mmio32be,<addr>,<options> 1822 - * console=<name>,io|mmio|mmio32|mmio32be,<addr>,<options> 1822 + * earlycon=<name>,io|mmio|mmio32|mmio32be|mmio32native,<addr>,<options> 1823 + * console=<name>,io|mmio|mmio32|mmio32be|mmio32native,<addr>,<options> 1823 1824 * 1824 1825 * The optional form 1825 1826 * earlycon=<name>,0x<addr>,<options> ··· 1840 1841 } else if (strncmp(p, "mmio32be,", 9) == 0) { 1841 1842 *iotype = UPIO_MEM32BE; 1842 1843 p += 9; 1844 + } else if (strncmp(p, "mmio32native,", 13) == 0) { 1845 + *iotype = IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) ? 1846 + UPIO_MEM32BE : UPIO_MEM32; 1847 + p += 13; 1843 1848 } else if (strncmp(p, "io,", 3) == 0) { 1844 1849 *iotype = UPIO_PORT; 1845 1850 p += 3;
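The serial_core change teaches the earlycon parser a `mmio32native` keyword that resolves to big- or little-endian 32-bit MMIO at build time. A hedged sketch of the prefix matching, with `MINI_*` constants standing in for the `UPIO_*` iotypes; note that the trailing comma in each keyword keeps a shorter prefix like `mmio,` from falsely matching `mmio32,…`:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical iotype codes standing in for the UPIO_* constants. */
enum mini_iotype { MINI_PORT, MINI_MEM, MINI_MEM32, MINI_MEM32BE };

#ifdef MINI_BIG_ENDIAN /* stand-in for CONFIG_CPU_BIG_ENDIAN */
#define MINI_MEM32NATIVE MINI_MEM32BE
#else
#define MINI_MEM32NATIVE MINI_MEM32
#endif

/* Parse the leading io,/mmio,/mmio32,/mmio32be,/mmio32native, keyword of
 * an earlycon-style option string, advancing *p past the keyword. */
static enum mini_iotype mini_parse_iotype(const char **p)
{
    if (strncmp(*p, "mmio32native,", 13) == 0) {
        *p += 13;
        return MINI_MEM32NATIVE;
    } else if (strncmp(*p, "mmio32be,", 9) == 0) {
        *p += 9;
        return MINI_MEM32BE;
    } else if (strncmp(*p, "mmio32,", 7) == 0) {
        *p += 7;
        return MINI_MEM32;
    } else if (strncmp(*p, "mmio,", 5) == 0) {
        *p += 5;
        return MINI_MEM;
    } else if (strncmp(*p, "io,", 3) == 0) {
        *p += 3;
        return MINI_PORT;
    }
    return MINI_MEM; /* bare 0x<addr> form defaults to mmio */
}
```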
+129 -4
drivers/tty/serial/serial_mctrl_gpio.c
··· 12 12 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 * GNU General Public License for more details. 15 - * 16 15 */ 17 16 18 17 #include <linux/err.h> 19 18 #include <linux/device.h> 19 + #include <linux/irq.h> 20 20 #include <linux/gpio/consumer.h> 21 21 #include <linux/termios.h> 22 + #include <linux/serial_core.h> 22 23 23 24 #include "serial_mctrl_gpio.h" 24 25 25 26 struct mctrl_gpios { 27 + struct uart_port *port; 26 28 struct gpio_desc *gpio[UART_GPIO_MAX]; 29 + int irq[UART_GPIO_MAX]; 30 + unsigned int mctrl_prev; 31 + bool mctrl_on; 27 32 }; 28 33 29 34 static const struct { ··· 87 82 } 88 83 EXPORT_SYMBOL_GPL(mctrl_gpio_get); 89 84 90 - struct mctrl_gpios *mctrl_gpio_init(struct device *dev, unsigned int idx) 85 + struct mctrl_gpios *mctrl_gpio_init_noauto(struct device *dev, unsigned int idx) 91 86 { 92 87 struct mctrl_gpios *gpios; 93 88 enum mctrl_gpio_idx i; ··· 115 110 116 111 return gpios; 117 112 } 118 - EXPORT_SYMBOL_GPL(mctrl_gpio_init); 113 + EXPORT_SYMBOL_GPL(mctrl_gpio_init_noauto); 114 + 115 + #define MCTRL_ANY_DELTA (TIOCM_RI | TIOCM_DSR | TIOCM_CD | TIOCM_CTS) 116 + static irqreturn_t mctrl_gpio_irq_handle(int irq, void *context) 117 + { 118 + struct mctrl_gpios *gpios = context; 119 + struct uart_port *port = gpios->port; 120 + u32 mctrl = gpios->mctrl_prev; 121 + u32 mctrl_diff; 122 + 123 + mctrl_gpio_get(gpios, &mctrl); 124 + 125 + mctrl_diff = mctrl ^ gpios->mctrl_prev; 126 + gpios->mctrl_prev = mctrl; 127 + 128 + if (mctrl_diff & MCTRL_ANY_DELTA && port->state != NULL) { 129 + if ((mctrl_diff & mctrl) & TIOCM_RI) 130 + port->icount.rng++; 131 + 132 + if ((mctrl_diff & mctrl) & TIOCM_DSR) 133 + port->icount.dsr++; 134 + 135 + if (mctrl_diff & TIOCM_CD) 136 + uart_handle_dcd_change(port, mctrl & TIOCM_CD); 137 + 138 + if (mctrl_diff & TIOCM_CTS) 139 + uart_handle_cts_change(port, mctrl & TIOCM_CTS); 140 + 141 + 
wake_up_interruptible(&port->state->port.delta_msr_wait); 142 + } 143 + 144 + return IRQ_HANDLED; 145 + } 146 + 147 + struct mctrl_gpios *mctrl_gpio_init(struct uart_port *port, unsigned int idx) 148 + { 149 + struct mctrl_gpios *gpios; 150 + enum mctrl_gpio_idx i; 151 + 152 + gpios = mctrl_gpio_init_noauto(port->dev, idx); 153 + if (IS_ERR(gpios)) 154 + return gpios; 155 + 156 + gpios->port = port; 157 + 158 + for (i = 0; i < UART_GPIO_MAX; ++i) { 159 + int ret; 160 + 161 + if (!gpios->gpio[i] || mctrl_gpios_desc[i].dir_out) 162 + continue; 163 + 164 + ret = gpiod_to_irq(gpios->gpio[i]); 165 + if (ret <= 0) { 166 + dev_err(port->dev, 167 + "failed to find corresponding irq for %s (idx=%d, err=%d)\n", 168 + mctrl_gpios_desc[i].name, idx, ret); 169 + return ERR_PTR(ret); 170 + } 171 + gpios->irq[i] = ret; 172 + 173 + /* irqs should only be enabled in .enable_ms */ 174 + irq_set_status_flags(gpios->irq[i], IRQ_NOAUTOEN); 175 + 176 + ret = devm_request_irq(port->dev, gpios->irq[i], 177 + mctrl_gpio_irq_handle, 178 + IRQ_TYPE_EDGE_BOTH, dev_name(port->dev), 179 + gpios); 180 + if (ret) { 181 + /* alternatively implement polling */ 182 + dev_err(port->dev, 183 + "failed to request irq for %s (idx=%d, err=%d)\n", 184 + mctrl_gpios_desc[i].name, idx, ret); 185 + return ERR_PTR(ret); 186 + } 187 + } 188 + 189 + return gpios; 190 + } 119 191 120 192 void mctrl_gpio_free(struct device *dev, struct mctrl_gpios *gpios) 121 193 { 122 194 enum mctrl_gpio_idx i; 123 195 124 - for (i = 0; i < UART_GPIO_MAX; i++) 196 + for (i = 0; i < UART_GPIO_MAX; i++) { 197 + if (gpios->irq[i]) 198 + devm_free_irq(gpios->port->dev, gpios->irq[i], gpios); 199 + 125 200 if (gpios->gpio[i]) 126 201 devm_gpiod_put(dev, gpios->gpio[i]); 202 + } 127 203 devm_kfree(dev, gpios); 128 204 } 129 205 EXPORT_SYMBOL_GPL(mctrl_gpio_free); 206 + 207 + void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios) 208 + { 209 + enum mctrl_gpio_idx i; 210 + 211 + /* .enable_ms may be called multiple times */ 212 + if 
(gpios->mctrl_on) 213 + return; 214 + 215 + gpios->mctrl_on = true; 216 + 217 + /* get initial status of modem lines GPIOs */ 218 + mctrl_gpio_get(gpios, &gpios->mctrl_prev); 219 + 220 + for (i = 0; i < UART_GPIO_MAX; ++i) { 221 + if (!gpios->irq[i]) 222 + continue; 223 + 224 + enable_irq(gpios->irq[i]); 225 + } 226 + } 227 + EXPORT_SYMBOL_GPL(mctrl_gpio_enable_ms); 228 + 229 + void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios) 230 + { 231 + enum mctrl_gpio_idx i; 232 + 233 + if (!gpios->mctrl_on) 234 + return; 235 + 236 + gpios->mctrl_on = false; 237 + 238 + for (i = 0; i < UART_GPIO_MAX; ++i) { 239 + if (!gpios->irq[i]) 240 + continue; 241 + 242 + disable_irq(gpios->irq[i]); 243 + } 244 + }
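The new `mctrl_gpio_irq_handle()` above finds which modem-status lines changed by XORing the current and previous mctrl snapshots, then keeps only rising edges (delta AND new state) for RI and DSR while counting any edge for CD and CTS. A small sketch of that delta logic with stand-in bit masks (the real code uses the `TIOCM_*` values):

```c
#include <assert.h>

/* Bit masks standing in for the TIOCM_* modem-status bits. */
#define MINI_CTS 0x020
#define MINI_CD  0x040
#define MINI_RI  0x080
#define MINI_DSR 0x100

struct mini_counts { unsigned rng, dsr, dcd, cts; };

/* XOR finds the changed lines; ANDing the delta with the new state keeps
 * only rising edges (RI, DSR), while CD/CTS react to any edge.  Returns
 * the new state to store as mctrl_prev. */
static unsigned mini_handle_delta(unsigned prev, unsigned cur,
                                  struct mini_counts *c)
{
    unsigned diff = prev ^ cur;

    if ((diff & cur) & MINI_RI)
        c->rng++;
    if ((diff & cur) & MINI_DSR)
        c->dsr++;
    if (diff & MINI_CD)
        c->dcd++;
    if (diff & MINI_CTS)
        c->cts++;

    return cur;
}
```

Storing the returned state as the next `prev` means a line that stays asserted across two interrupts produces no delta and no spurious count.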
+38 -2
drivers/tty/serial/serial_mctrl_gpio.h
··· 22 22 #include <linux/device.h> 23 23 #include <linux/gpio/consumer.h> 24 24 25 + struct uart_port; 26 + 25 27 enum mctrl_gpio_idx { 26 28 UART_GPIO_CTS, 27 29 UART_GPIO_DSR, ··· 62 60 enum mctrl_gpio_idx gidx); 63 61 64 62 /* 63 + * Request and set direction of modem control lines GPIOs and sets up irq 64 + * handling. 65 + * devm_* functions are used, so there's no need to call mctrl_gpio_free(). 66 + * Returns a pointer to the allocated mctrl structure if ok, -ENOMEM on 67 + * allocation error. 68 + */ 69 + struct mctrl_gpios *mctrl_gpio_init(struct uart_port *port, unsigned int idx); 70 + 71 + /* 65 72 * Request and set direction of modem control lines GPIOs. 66 73 * devm_* functions are used, so there's no need to call mctrl_gpio_free(). 67 74 * Returns a pointer to the allocated mctrl structure if ok, -ENOMEM on 68 75 * allocation error. 69 76 */ 70 - struct mctrl_gpios *mctrl_gpio_init(struct device *dev, unsigned int idx); 77 + struct mctrl_gpios *mctrl_gpio_init_noauto(struct device *dev, 78 + unsigned int idx); 71 79 72 80 /* 73 81 * Free the mctrl_gpios structure. ··· 85 73 * be disposed of by the resource management code. 86 74 */ 87 75 void mctrl_gpio_free(struct device *dev, struct mctrl_gpios *gpios); 76 + 77 + /* 78 + * Enable gpio interrupts to report status line changes. 79 + */ 80 + void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios); 81 + 82 + /* 83 + * Disable gpio interrupts to report status line changes. 
84 + */ 85 + void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios); 88 86 89 87 #else /* GPIOLIB */ 90 88 ··· 117 95 } 118 96 119 97 static inline 120 - struct mctrl_gpios *mctrl_gpio_init(struct device *dev, unsigned int idx) 98 + struct mctrl_gpios *mctrl_gpio_init(struct uart_port *port, unsigned int idx) 99 + { 100 + return ERR_PTR(-ENOSYS); 101 + } 102 + 103 + static inline 104 + struct mctrl_gpios *mctrl_gpio_init_noauto(struct device *dev, unsigned int idx) 121 105 { 122 106 return ERR_PTR(-ENOSYS); 123 107 } 124 108 125 109 static inline 126 110 void mctrl_gpio_free(struct device *dev, struct mctrl_gpios *gpios) 111 + { 112 + } 113 + 114 + static inline void mctrl_gpio_enable_ms(struct mctrl_gpios *gpios) 115 + { 116 + } 117 + 118 + static inline void mctrl_gpio_disable_ms(struct mctrl_gpios *gpios) 127 119 { 128 120 } 129 121
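The header's `!GPIOLIB` branch returns `ERR_PTR(-ENOSYS)` from the init stubs, following the kernel convention of encoding a small negative errno directly in a pointer return value. A simplified sketch of that encoding (the kernel's real helpers live in `linux/err.h`; `mini_*` names here are illustrative):

```c
#include <assert.h>

/* Simplified ERR_PTR helpers: errnos in (-MAX_ERRNO, 0) map to pointers
 * in the top page of the address space, so one pointer return can carry
 * either a valid object or an error code. */
#define MINI_MAX_ERRNO 4095
#define MINI_ENOSYS    38

static void *mini_err_ptr(long error)
{
    return (void *)error;
}

static long mini_ptr_err(const void *ptr)
{
    return (long)ptr;
}

static int mini_is_err(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MINI_MAX_ERRNO;
}
```

Callers of `mctrl_gpio_init()` can therefore use a single `IS_ERR()` check whether GPIOLIB is built in or not.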
+628 -564
drivers/tty/serial/sh-sci.c
··· 84 84 unsigned int overrun_reg; 85 85 unsigned int overrun_mask; 86 86 unsigned int error_mask; 87 + unsigned int error_clear; 87 88 unsigned int sampling_rate; 88 89 resource_size_t reg_size; 89 90 ··· 104 103 struct dma_chan *chan_rx; 105 104 106 105 #ifdef CONFIG_SERIAL_SH_SCI_DMA 107 - struct dma_async_tx_descriptor *desc_tx; 108 - struct dma_async_tx_descriptor *desc_rx[2]; 109 106 dma_cookie_t cookie_tx; 110 107 dma_cookie_t cookie_rx[2]; 111 108 dma_cookie_t active_rx; 112 - struct scatterlist sg_tx; 113 - unsigned int sg_len_tx; 109 + dma_addr_t tx_dma_addr; 110 + unsigned int tx_dma_len; 114 111 struct scatterlist sg_rx[2]; 112 + void *rx_buf[2]; 115 113 size_t buf_len_rx; 116 - struct sh_dmae_slave param_tx; 117 - struct sh_dmae_slave param_rx; 118 114 struct work_struct work_tx; 119 - struct work_struct work_rx; 120 115 struct timer_list rx_timer; 121 116 unsigned int rx_timeout; 122 117 #endif 123 118 124 119 struct notifier_block freq_transition; 125 120 }; 126 - 127 - /* Function prototypes */ 128 - static void sci_start_tx(struct uart_port *port); 129 - static void sci_stop_tx(struct uart_port *port); 130 - static void sci_start_rx(struct uart_port *port); 131 121 132 122 #define SCI_NPORTS CONFIG_SERIAL_SH_SCI_NR_UARTS 133 123 ··· 138 146 /* Helper for invalidating specific entries of an inherited map. */ 139 147 #define sci_reg_invalid { .offset = 0, .size = 0 } 140 148 141 - static struct plat_sci_reg sci_regmap[SCIx_NR_REGTYPES][SCIx_NR_REGS] = { 149 + static const struct plat_sci_reg sci_regmap[SCIx_NR_REGTYPES][SCIx_NR_REGS] = { 142 150 [SCIx_PROBE_REGTYPE] = { 143 151 [0 ... 
SCIx_NR_REGS - 1] = sci_reg_invalid, 144 152 }, ··· 391 399 */ 392 400 static unsigned int sci_serial_in(struct uart_port *p, int offset) 393 401 { 394 - struct plat_sci_reg *reg = sci_getreg(p, offset); 402 + const struct plat_sci_reg *reg = sci_getreg(p, offset); 395 403 396 404 if (reg->size == 8) 397 405 return ioread8(p->membase + (reg->offset << p->regshift)); ··· 405 413 406 414 static void sci_serial_out(struct uart_port *p, int offset, int value) 407 415 { 408 - struct plat_sci_reg *reg = sci_getreg(p, offset); 416 + const struct plat_sci_reg *reg = sci_getreg(p, offset); 409 417 410 418 if (reg->size == 8) 411 419 iowrite8(value, p->membase + (reg->offset << p->regshift)); ··· 481 489 pm_runtime_put_sync(sci_port->port.dev); 482 490 } 483 491 492 + static inline unsigned long port_rx_irq_mask(struct uart_port *port) 493 + { 494 + /* 495 + * Not all ports (such as SCIFA) will support REIE. Rather than 496 + * special-casing the port type, we check the port initialization 497 + * IRQ enable mask to see whether the IRQ is desired at all. If 498 + * it's unset, it's logically inferred that there's no point in 499 + * testing for it. 
500 + */ 501 + return SCSCR_RIE | (to_sci_port(port)->cfg->scscr & SCSCR_REIE); 502 + } 503 + 504 + static void sci_start_tx(struct uart_port *port) 505 + { 506 + struct sci_port *s = to_sci_port(port); 507 + unsigned short ctrl; 508 + 509 + #ifdef CONFIG_SERIAL_SH_SCI_DMA 510 + if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) { 511 + u16 new, scr = serial_port_in(port, SCSCR); 512 + if (s->chan_tx) 513 + new = scr | SCSCR_TDRQE; 514 + else 515 + new = scr & ~SCSCR_TDRQE; 516 + if (new != scr) 517 + serial_port_out(port, SCSCR, new); 518 + } 519 + 520 + if (s->chan_tx && !uart_circ_empty(&s->port.state->xmit) && 521 + dma_submit_error(s->cookie_tx)) { 522 + s->cookie_tx = 0; 523 + schedule_work(&s->work_tx); 524 + } 525 + #endif 526 + 527 + if (!s->chan_tx || port->type == PORT_SCIFA || port->type == PORT_SCIFB) { 528 + /* Set TIE (Transmit Interrupt Enable) bit in SCSCR */ 529 + ctrl = serial_port_in(port, SCSCR); 530 + serial_port_out(port, SCSCR, ctrl | SCSCR_TIE); 531 + } 532 + } 533 + 534 + static void sci_stop_tx(struct uart_port *port) 535 + { 536 + unsigned short ctrl; 537 + 538 + /* Clear TIE (Transmit Interrupt Enable) bit in SCSCR */ 539 + ctrl = serial_port_in(port, SCSCR); 540 + 541 + if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 542 + ctrl &= ~SCSCR_TDRQE; 543 + 544 + ctrl &= ~SCSCR_TIE; 545 + 546 + serial_port_out(port, SCSCR, ctrl); 547 + } 548 + 549 + static void sci_start_rx(struct uart_port *port) 550 + { 551 + unsigned short ctrl; 552 + 553 + ctrl = serial_port_in(port, SCSCR) | port_rx_irq_mask(port); 554 + 555 + if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 556 + ctrl &= ~SCSCR_RDRQE; 557 + 558 + serial_port_out(port, SCSCR, ctrl); 559 + } 560 + 561 + static void sci_stop_rx(struct uart_port *port) 562 + { 563 + unsigned short ctrl; 564 + 565 + ctrl = serial_port_in(port, SCSCR); 566 + 567 + if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 568 + ctrl &= ~SCSCR_RDRQE; 569 + 570 + ctrl &= 
~port_rx_irq_mask(port); 571 + 572 + serial_port_out(port, SCSCR, ctrl); 573 + } 574 + 575 + static void sci_clear_SCxSR(struct uart_port *port, unsigned int mask) 576 + { 577 + if (port->type == PORT_SCI) { 578 + /* Just store the mask */ 579 + serial_port_out(port, SCxSR, mask); 580 + } else if (to_sci_port(port)->overrun_mask == SCIFA_ORER) { 581 + /* SCIFA/SCIFB and SCIF on SH7705/SH7720/SH7721 */ 582 + /* Only clear the status bits we want to clear */ 583 + serial_port_out(port, SCxSR, 584 + serial_port_in(port, SCxSR) & mask); 585 + } else { 586 + /* Store the mask, clear parity/framing errors */ 587 + serial_port_out(port, SCxSR, mask & ~(SCIF_FERC | SCIF_PERC)); 588 + } 589 + } 590 + 484 591 #if defined(CONFIG_CONSOLE_POLL) || defined(CONFIG_SERIAL_SH_SCI_CONSOLE) 485 592 486 593 #ifdef CONFIG_CONSOLE_POLL ··· 591 500 do { 592 501 status = serial_port_in(port, SCxSR); 593 502 if (status & SCxSR_ERRORS(port)) { 594 - serial_port_out(port, SCxSR, SCxSR_ERROR_CLEAR(port)); 503 + sci_clear_SCxSR(port, SCxSR_ERROR_CLEAR(port)); 595 504 continue; 596 505 } 597 506 break; ··· 604 513 605 514 /* Dummy read */ 606 515 serial_port_in(port, SCxSR); 607 - serial_port_out(port, SCxSR, SCxSR_RDxF_CLEAR(port)); 516 + sci_clear_SCxSR(port, SCxSR_RDxF_CLEAR(port)); 608 517 609 518 return c; 610 519 } ··· 619 528 } while (!(status & SCxSR_TDxE(port))); 620 529 621 530 serial_port_out(port, SCxTDR, c); 622 - serial_port_out(port, SCxSR, SCxSR_TDxE_CLEAR(port) & ~SCxSR_TEND(port)); 531 + sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port) & ~SCxSR_TEND(port)); 623 532 } 624 533 #endif /* CONFIG_CONSOLE_POLL || CONFIG_SERIAL_SH_SCI_CONSOLE */ 625 534 626 535 static void sci_init_pins(struct uart_port *port, unsigned int cflag) 627 536 { 628 537 struct sci_port *s = to_sci_port(port); 629 - struct plat_sci_reg *reg = sci_regmap[s->cfg->regtype] + SCSPTR; 538 + const struct plat_sci_reg *reg = sci_regmap[s->cfg->regtype] + SCSPTR; 630 539 631 540 /* 632 541 * Use port-specific handler 
if provided. ··· 656 565 657 566 static int sci_txfill(struct uart_port *port) 658 567 { 659 - struct plat_sci_reg *reg; 568 + const struct plat_sci_reg *reg; 660 569 661 570 reg = sci_getreg(port, SCTFDR); 662 571 if (reg->size) ··· 676 585 677 586 static int sci_rxfill(struct uart_port *port) 678 587 { 679 - struct plat_sci_reg *reg; 588 + const struct plat_sci_reg *reg; 680 589 681 590 reg = sci_getreg(port, SCRFDR); 682 591 if (reg->size) ··· 746 655 port->icount.tx++; 747 656 } while (--count > 0); 748 657 749 - serial_port_out(port, SCxSR, SCxSR_TDxE_CLEAR(port)); 658 + sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port)); 750 659 751 660 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 752 661 uart_write_wakeup(port); ··· 757 666 758 667 if (port->type != PORT_SCI) { 759 668 serial_port_in(port, SCxSR); /* Dummy read */ 760 - serial_port_out(port, SCxSR, SCxSR_TDxE_CLEAR(port)); 669 + sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port)); 761 670 } 762 671 763 672 ctrl |= SCSCR_TIE; ··· 841 750 } 842 751 843 752 serial_port_in(port, SCxSR); /* dummy read */ 844 - serial_port_out(port, SCxSR, SCxSR_RDxF_CLEAR(port)); 753 + sci_clear_SCxSR(port, SCxSR_RDxF_CLEAR(port)); 845 754 846 755 copied += count; 847 756 port->icount.rx += count; ··· 852 761 tty_flip_buffer_push(tport); 853 762 } else { 854 763 serial_port_in(port, SCxSR); /* dummy read */ 855 - serial_port_out(port, SCxSR, SCxSR_RDxF_CLEAR(port)); 764 + sci_clear_SCxSR(port, SCxSR_RDxF_CLEAR(port)); 856 765 } 857 766 } 858 767 ··· 957 866 { 958 867 struct tty_port *tport = &port->state->port; 959 868 struct sci_port *s = to_sci_port(port); 960 - struct plat_sci_reg *reg; 869 + const struct plat_sci_reg *reg; 961 870 int copied = 0; 962 871 u16 status; 963 872 ··· 1015 924 return copied; 1016 925 } 1017 926 927 + #ifdef CONFIG_SERIAL_SH_SCI_DMA 928 + static void sci_dma_tx_complete(void *arg) 929 + { 930 + struct sci_port *s = arg; 931 + struct uart_port *port = &s->port; 932 + struct circ_buf *xmit = 
&port->state->xmit; 933 + unsigned long flags; 934 + 935 + dev_dbg(port->dev, "%s(%d)\n", __func__, port->line); 936 + 937 + spin_lock_irqsave(&port->lock, flags); 938 + 939 + xmit->tail += s->tx_dma_len; 940 + xmit->tail &= UART_XMIT_SIZE - 1; 941 + 942 + port->icount.tx += s->tx_dma_len; 943 + 944 + if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 945 + uart_write_wakeup(port); 946 + 947 + if (!uart_circ_empty(xmit)) { 948 + s->cookie_tx = 0; 949 + schedule_work(&s->work_tx); 950 + } else { 951 + s->cookie_tx = -EINVAL; 952 + if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) { 953 + u16 ctrl = serial_port_in(port, SCSCR); 954 + serial_port_out(port, SCSCR, ctrl & ~SCSCR_TIE); 955 + } 956 + } 957 + 958 + spin_unlock_irqrestore(&port->lock, flags); 959 + } 960 + 961 + /* Locking: called with port lock held */ 962 + static int sci_dma_rx_push(struct sci_port *s, void *buf, size_t count) 963 + { 964 + struct uart_port *port = &s->port; 965 + struct tty_port *tport = &port->state->port; 966 + int copied; 967 + 968 + copied = tty_insert_flip_string(tport, buf, count); 969 + if (copied < count) { 970 + dev_warn(port->dev, "Rx overrun: dropping %zu bytes\n", 971 + count - copied); 972 + port->icount.buf_overrun++; 973 + } 974 + 975 + port->icount.rx += copied; 976 + 977 + return copied; 978 + } 979 + 980 + static int sci_dma_rx_find_active(struct sci_port *s) 981 + { 982 + unsigned int i; 983 + 984 + for (i = 0; i < ARRAY_SIZE(s->cookie_rx); i++) 985 + if (s->active_rx == s->cookie_rx[i]) 986 + return i; 987 + 988 + dev_err(s->port.dev, "%s: Rx cookie %d not found!\n", __func__, 989 + s->active_rx); 990 + return -1; 991 + } 992 + 993 + static void sci_rx_dma_release(struct sci_port *s, bool enable_pio) 994 + { 995 + struct dma_chan *chan = s->chan_rx; 996 + struct uart_port *port = &s->port; 997 + unsigned long flags; 998 + 999 + spin_lock_irqsave(&port->lock, flags); 1000 + s->chan_rx = NULL; 1001 + s->cookie_rx[0] = s->cookie_rx[1] = -EINVAL; 1002 + 
spin_unlock_irqrestore(&port->lock, flags); 1003 + dmaengine_terminate_all(chan); 1004 + dma_free_coherent(chan->device->dev, s->buf_len_rx * 2, s->rx_buf[0], 1005 + sg_dma_address(&s->sg_rx[0])); 1006 + dma_release_channel(chan); 1007 + if (enable_pio) 1008 + sci_start_rx(port); 1009 + } 1010 + 1011 + static void sci_dma_rx_complete(void *arg) 1012 + { 1013 + struct sci_port *s = arg; 1014 + struct dma_chan *chan = s->chan_rx; 1015 + struct uart_port *port = &s->port; 1016 + struct dma_async_tx_descriptor *desc; 1017 + unsigned long flags; 1018 + int active, count = 0; 1019 + 1020 + dev_dbg(port->dev, "%s(%d) active cookie %d\n", __func__, port->line, 1021 + s->active_rx); 1022 + 1023 + spin_lock_irqsave(&port->lock, flags); 1024 + 1025 + active = sci_dma_rx_find_active(s); 1026 + if (active >= 0) 1027 + count = sci_dma_rx_push(s, s->rx_buf[active], s->buf_len_rx); 1028 + 1029 + mod_timer(&s->rx_timer, jiffies + s->rx_timeout); 1030 + 1031 + if (count) 1032 + tty_flip_buffer_push(&port->state->port); 1033 + 1034 + desc = dmaengine_prep_slave_sg(s->chan_rx, &s->sg_rx[active], 1, 1035 + DMA_DEV_TO_MEM, 1036 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1037 + if (!desc) 1038 + goto fail; 1039 + 1040 + desc->callback = sci_dma_rx_complete; 1041 + desc->callback_param = s; 1042 + s->cookie_rx[active] = dmaengine_submit(desc); 1043 + if (dma_submit_error(s->cookie_rx[active])) 1044 + goto fail; 1045 + 1046 + s->active_rx = s->cookie_rx[!active]; 1047 + 1048 + dma_async_issue_pending(chan); 1049 + 1050 + dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n", 1051 + __func__, s->cookie_rx[active], active, s->active_rx); 1052 + spin_unlock_irqrestore(&port->lock, flags); 1053 + return; 1054 + 1055 + fail: 1056 + spin_unlock_irqrestore(&port->lock, flags); 1057 + dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n"); 1058 + sci_rx_dma_release(s, true); 1059 + } 1060 + 1061 + static void sci_tx_dma_release(struct sci_port *s, bool enable_pio) 1062 + { 1063 + struct 
dma_chan *chan = s->chan_tx; 1064 + struct uart_port *port = &s->port; 1065 + unsigned long flags; 1066 + 1067 + spin_lock_irqsave(&port->lock, flags); 1068 + s->chan_tx = NULL; 1069 + s->cookie_tx = -EINVAL; 1070 + spin_unlock_irqrestore(&port->lock, flags); 1071 + dmaengine_terminate_all(chan); 1072 + dma_unmap_single(chan->device->dev, s->tx_dma_addr, UART_XMIT_SIZE, 1073 + DMA_TO_DEVICE); 1074 + dma_release_channel(chan); 1075 + if (enable_pio) 1076 + sci_start_tx(port); 1077 + } 1078 + 1079 + static void sci_submit_rx(struct sci_port *s) 1080 + { 1081 + struct dma_chan *chan = s->chan_rx; 1082 + int i; 1083 + 1084 + for (i = 0; i < 2; i++) { 1085 + struct scatterlist *sg = &s->sg_rx[i]; 1086 + struct dma_async_tx_descriptor *desc; 1087 + 1088 + desc = dmaengine_prep_slave_sg(chan, 1089 + sg, 1, DMA_DEV_TO_MEM, 1090 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1091 + if (!desc) 1092 + goto fail; 1093 + 1094 + desc->callback = sci_dma_rx_complete; 1095 + desc->callback_param = s; 1096 + s->cookie_rx[i] = dmaengine_submit(desc); 1097 + if (dma_submit_error(s->cookie_rx[i])) 1098 + goto fail; 1099 + 1100 + dev_dbg(s->port.dev, "%s(): cookie %d to #%d\n", __func__, 1101 + s->cookie_rx[i], i); 1102 + } 1103 + 1104 + s->active_rx = s->cookie_rx[0]; 1105 + 1106 + dma_async_issue_pending(chan); 1107 + return; 1108 + 1109 + fail: 1110 + if (i) 1111 + dmaengine_terminate_all(chan); 1112 + for (i = 0; i < 2; i++) 1113 + s->cookie_rx[i] = -EINVAL; 1114 + s->active_rx = -EINVAL; 1115 + dev_warn(s->port.dev, "Failed to re-start Rx DMA, using PIO\n"); 1116 + sci_rx_dma_release(s, true); 1117 + } 1118 + 1119 + static void work_fn_tx(struct work_struct *work) 1120 + { 1121 + struct sci_port *s = container_of(work, struct sci_port, work_tx); 1122 + struct dma_async_tx_descriptor *desc; 1123 + struct dma_chan *chan = s->chan_tx; 1124 + struct uart_port *port = &s->port; 1125 + struct circ_buf *xmit = &port->state->xmit; 1126 + dma_addr_t buf; 1127 + 1128 + /* 1129 + * DMA is idle now. 
1130 + * Port xmit buffer is already mapped, and it is one page... Just adjust 1131 + * offsets and lengths. Since it is a circular buffer, we have to 1132 + * transmit till the end, and then the rest. Take the port lock to get a 1133 + * consistent xmit buffer state. 1134 + */ 1135 + spin_lock_irq(&port->lock); 1136 + buf = s->tx_dma_addr + (xmit->tail & (UART_XMIT_SIZE - 1)); 1137 + s->tx_dma_len = min_t(unsigned int, 1138 + CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE), 1139 + CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE)); 1140 + spin_unlock_irq(&port->lock); 1141 + 1142 + desc = dmaengine_prep_slave_single(chan, buf, s->tx_dma_len, 1143 + DMA_MEM_TO_DEV, 1144 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1145 + if (!desc) { 1146 + dev_warn(port->dev, "Failed preparing Tx DMA descriptor\n"); 1147 + /* switch to PIO */ 1148 + sci_tx_dma_release(s, true); 1149 + return; 1150 + } 1151 + 1152 + dma_sync_single_for_device(chan->device->dev, buf, s->tx_dma_len, 1153 + DMA_TO_DEVICE); 1154 + 1155 + spin_lock_irq(&port->lock); 1156 + desc->callback = sci_dma_tx_complete; 1157 + desc->callback_param = s; 1158 + spin_unlock_irq(&port->lock); 1159 + s->cookie_tx = dmaengine_submit(desc); 1160 + if (dma_submit_error(s->cookie_tx)) { 1161 + dev_warn(port->dev, "Failed submitting Tx DMA descriptor\n"); 1162 + /* switch to PIO */ 1163 + sci_tx_dma_release(s, true); 1164 + return; 1165 + } 1166 + 1167 + dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n", 1168 + __func__, xmit->buf, xmit->tail, xmit->head, s->cookie_tx); 1169 + 1170 + dma_async_issue_pending(chan); 1171 + } 1172 + 1173 + static void rx_timer_fn(unsigned long arg) 1174 + { 1175 + struct sci_port *s = (struct sci_port *)arg; 1176 + struct dma_chan *chan = s->chan_rx; 1177 + struct uart_port *port = &s->port; 1178 + struct dma_tx_state state; 1179 + enum dma_status status; 1180 + unsigned long flags; 1181 + unsigned int read; 1182 + int active, count; 1183 + u16 scr; 1184 + 1185 + 
spin_lock_irqsave(&port->lock, flags); 1186 + 1187 + dev_dbg(port->dev, "DMA Rx timed out\n"); 1188 + 1189 + active = sci_dma_rx_find_active(s); 1190 + if (active < 0) { 1191 + spin_unlock_irqrestore(&port->lock, flags); 1192 + return; 1193 + } 1194 + 1195 + status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state); 1196 + if (status == DMA_COMPLETE) { 1197 + dev_dbg(port->dev, "Cookie %d #%d has already completed\n", 1198 + s->active_rx, active); 1199 + spin_unlock_irqrestore(&port->lock, flags); 1200 + 1201 + /* Let packet complete handler take care of the packet */ 1202 + return; 1203 + } 1204 + 1205 + dmaengine_pause(chan); 1206 + 1207 + /* 1208 + * sometimes DMA transfer doesn't stop even if it is stopped and 1209 + * data keeps on coming until transaction is complete so check 1210 + * for DMA_COMPLETE again 1211 + * Let packet complete handler take care of the packet 1212 + */ 1213 + status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state); 1214 + if (status == DMA_COMPLETE) { 1215 + spin_unlock_irqrestore(&port->lock, flags); 1216 + dev_dbg(port->dev, "Transaction complete after DMA engine was stopped"); 1217 + return; 1218 + } 1219 + 1220 + /* Handle incomplete DMA receive */ 1221 + dmaengine_terminate_all(s->chan_rx); 1222 + read = sg_dma_len(&s->sg_rx[active]) - state.residue; 1223 + dev_dbg(port->dev, "Read %u bytes with cookie %d\n", read, 1224 + s->active_rx); 1225 + 1226 + if (read) { 1227 + count = sci_dma_rx_push(s, s->rx_buf[active], read); 1228 + if (count) 1229 + tty_flip_buffer_push(&port->state->port); 1230 + } 1231 + 1232 + if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 1233 + sci_submit_rx(s); 1234 + 1235 + /* Direct new serial port interrupts back to CPU */ 1236 + scr = serial_port_in(port, SCSCR); 1237 + if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) { 1238 + scr &= ~SCSCR_RDRQE; 1239 + enable_irq(s->irqs[SCIx_RXI_IRQ]); 1240 + } 1241 + serial_port_out(port, SCSCR, scr | SCSCR_RIE); 1242 + 1243 + 
spin_unlock_irqrestore(&port->lock, flags); 1244 + } 1245 + 1246 + static struct dma_chan *sci_request_dma_chan(struct uart_port *port, 1247 + enum dma_transfer_direction dir, 1248 + unsigned int id) 1249 + { 1250 + dma_cap_mask_t mask; 1251 + struct dma_chan *chan; 1252 + struct dma_slave_config cfg; 1253 + int ret; 1254 + 1255 + dma_cap_zero(mask); 1256 + dma_cap_set(DMA_SLAVE, mask); 1257 + 1258 + chan = dma_request_slave_channel_compat(mask, shdma_chan_filter, 1259 + (void *)(unsigned long)id, port->dev, 1260 + dir == DMA_MEM_TO_DEV ? "tx" : "rx"); 1261 + if (!chan) { 1262 + dev_warn(port->dev, 1263 + "dma_request_slave_channel_compat failed\n"); 1264 + return NULL; 1265 + } 1266 + 1267 + memset(&cfg, 0, sizeof(cfg)); 1268 + cfg.direction = dir; 1269 + if (dir == DMA_MEM_TO_DEV) { 1270 + cfg.dst_addr = port->mapbase + 1271 + (sci_getreg(port, SCxTDR)->offset << port->regshift); 1272 + cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 1273 + } else { 1274 + cfg.src_addr = port->mapbase + 1275 + (sci_getreg(port, SCxRDR)->offset << port->regshift); 1276 + cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 1277 + } 1278 + 1279 + ret = dmaengine_slave_config(chan, &cfg); 1280 + if (ret) { 1281 + dev_warn(port->dev, "dmaengine_slave_config failed %d\n", ret); 1282 + dma_release_channel(chan); 1283 + return NULL; 1284 + } 1285 + 1286 + return chan; 1287 + } 1288 + 1289 + static void sci_request_dma(struct uart_port *port) 1290 + { 1291 + struct sci_port *s = to_sci_port(port); 1292 + struct dma_chan *chan; 1293 + 1294 + dev_dbg(port->dev, "%s: port %d\n", __func__, port->line); 1295 + 1296 + if (!port->dev->of_node && 1297 + (s->cfg->dma_slave_tx <= 0 || s->cfg->dma_slave_rx <= 0)) 1298 + return; 1299 + 1300 + s->cookie_tx = -EINVAL; 1301 + chan = sci_request_dma_chan(port, DMA_MEM_TO_DEV, s->cfg->dma_slave_tx); 1302 + dev_dbg(port->dev, "%s: TX: got channel %p\n", __func__, chan); 1303 + if (chan) { 1304 + s->chan_tx = chan; 1305 + /* UART circular tx buffer is an 
aligned page. */ 1306 + s->tx_dma_addr = dma_map_single(chan->device->dev, 1307 + port->state->xmit.buf, 1308 + UART_XMIT_SIZE, 1309 + DMA_TO_DEVICE); 1310 + if (dma_mapping_error(chan->device->dev, s->tx_dma_addr)) { 1311 + dev_warn(port->dev, "Failed mapping Tx DMA descriptor\n"); 1312 + dma_release_channel(chan); 1313 + s->chan_tx = NULL; 1314 + } else { 1315 + dev_dbg(port->dev, "%s: mapped %lu@%p to %pad\n", 1316 + __func__, UART_XMIT_SIZE, 1317 + port->state->xmit.buf, &s->tx_dma_addr); 1318 + } 1319 + 1320 + INIT_WORK(&s->work_tx, work_fn_tx); 1321 + } 1322 + 1323 + chan = sci_request_dma_chan(port, DMA_DEV_TO_MEM, s->cfg->dma_slave_rx); 1324 + dev_dbg(port->dev, "%s: RX: got channel %p\n", __func__, chan); 1325 + if (chan) { 1326 + unsigned int i; 1327 + dma_addr_t dma; 1328 + void *buf; 1329 + 1330 + s->chan_rx = chan; 1331 + 1332 + s->buf_len_rx = 2 * max_t(size_t, 16, port->fifosize); 1333 + buf = dma_alloc_coherent(chan->device->dev, s->buf_len_rx * 2, 1334 + &dma, GFP_KERNEL); 1335 + if (!buf) { 1336 + dev_warn(port->dev, 1337 + "Failed to allocate Rx dma buffer, using PIO\n"); 1338 + dma_release_channel(chan); 1339 + s->chan_rx = NULL; 1340 + return; 1341 + } 1342 + 1343 + for (i = 0; i < 2; i++) { 1344 + struct scatterlist *sg = &s->sg_rx[i]; 1345 + 1346 + sg_init_table(sg, 1); 1347 + s->rx_buf[i] = buf; 1348 + sg_dma_address(sg) = dma; 1349 + sg->length = s->buf_len_rx; 1350 + 1351 + buf += s->buf_len_rx; 1352 + dma += s->buf_len_rx; 1353 + } 1354 + 1355 + setup_timer(&s->rx_timer, rx_timer_fn, (unsigned long)s); 1356 + 1357 + if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 1358 + sci_submit_rx(s); 1359 + } 1360 + } 1361 + 1362 + static void sci_free_dma(struct uart_port *port) 1363 + { 1364 + struct sci_port *s = to_sci_port(port); 1365 + 1366 + if (s->chan_tx) 1367 + sci_tx_dma_release(s, false); 1368 + if (s->chan_rx) 1369 + sci_rx_dma_release(s, false); 1370 + } 1371 + #else 1372 + static inline void sci_request_dma(struct uart_port 
*port) 1373 + { 1374 + } 1375 + 1376 + static inline void sci_free_dma(struct uart_port *port) 1377 + { 1378 + } 1379 + #endif 1380 + 1018 1381 static irqreturn_t sci_rx_interrupt(int irq, void *ptr) 1019 1382 { 1020 1383 #ifdef CONFIG_SERIAL_SH_SCI_DMA ··· 1485 940 scr |= SCSCR_RDRQE; 1486 941 } else { 1487 942 scr &= ~SCSCR_RIE; 943 + sci_submit_rx(s); 1488 944 } 1489 945 serial_port_out(port, SCSCR, scr); 1490 946 /* Clear current interrupt */ 1491 - serial_port_out(port, SCxSR, ssr & ~(1 | SCxSR_RDxF(port))); 947 + serial_port_out(port, SCxSR, 948 + ssr & ~(SCIF_DR | SCxSR_RDxF(port))); 1492 949 dev_dbg(port->dev, "Rx IRQ %lu: setup t-out in %u jiffies\n", 1493 950 jiffies, s->rx_timeout); 1494 951 mod_timer(&s->rx_timer, jiffies + s->rx_timeout); ··· 1523 976 static irqreturn_t sci_er_interrupt(int irq, void *ptr) 1524 977 { 1525 978 struct uart_port *port = ptr; 979 + struct sci_port *s = to_sci_port(port); 1526 980 1527 981 /* Handle errors */ 1528 982 if (port->type == PORT_SCI) { 1529 983 if (sci_handle_errors(port)) { 1530 984 /* discard character in rx buffer */ 1531 985 serial_port_in(port, SCxSR); 1532 - serial_port_out(port, SCxSR, SCxSR_RDxF_CLEAR(port)); 986 + sci_clear_SCxSR(port, SCxSR_RDxF_CLEAR(port)); 1533 987 } 1534 988 } else { 1535 989 sci_handle_fifo_overrun(port); 1536 - sci_rx_interrupt(irq, ptr); 990 + if (!s->chan_rx) 991 + sci_receive_chars(ptr); 1537 992 } 1538 993 1539 - serial_port_out(port, SCxSR, SCxSR_ERROR_CLEAR(port)); 994 + sci_clear_SCxSR(port, SCxSR_ERROR_CLEAR(port)); 1540 995 1541 996 /* Kick the transmission */ 1542 - sci_tx_interrupt(irq, ptr); 997 + if (!s->chan_tx) 998 + sci_tx_interrupt(irq, ptr); 1543 999 1544 1000 return IRQ_HANDLED; 1545 1001 } ··· 1553 1003 1554 1004 /* Handle BREAKs */ 1555 1005 sci_handle_breaks(port); 1556 - serial_port_out(port, SCxSR, SCxSR_BREAK_CLEAR(port)); 1006 + sci_clear_SCxSR(port, SCxSR_BREAK_CLEAR(port)); 1557 1007 1558 1008 return IRQ_HANDLED; 1559 - } 1560 - 1561 - static inline 
unsigned long port_rx_irq_mask(struct uart_port *port) 1562 - { 1563 - /* 1564 - * Not all ports (such as SCIFA) will support REIE. Rather than 1565 - * special-casing the port type, we check the port initialization 1566 - * IRQ enable mask to see whether the IRQ is desired at all. If 1567 - * it's unset, it's logically inferred that there's no point in 1568 - * testing for it. 1569 - */ 1570 - return SCSCR_RIE | (to_sci_port(port)->cfg->scscr & SCSCR_REIE); 1571 1009 } 1572 1010 1573 1011 static irqreturn_t sci_mpxed_interrupt(int irq, void *ptr) ··· 1586 1048 * DR flags 1587 1049 */ 1588 1050 if (((ssr_status & SCxSR_RDxF(port)) || s->chan_rx) && 1589 - (scr_status & SCSCR_RIE)) { 1590 - if (port->type == PORT_SCIF || port->type == PORT_HSCIF) 1591 - sci_handle_fifo_overrun(port); 1051 + (scr_status & SCSCR_RIE)) 1592 1052 ret = sci_rx_interrupt(irq, ptr); 1593 - } 1594 1053 1595 1054 /* Error Interrupt */ 1596 1055 if ((ssr_status & SCxSR_ERRORS(port)) && err_enabled) ··· 1598 1063 ret = sci_br_interrupt(irq, ptr); 1599 1064 1600 1065 /* Overrun Interrupt */ 1601 - if (orer_status & s->overrun_mask) 1066 + if (orer_status & s->overrun_mask) { 1602 1067 sci_handle_fifo_overrun(port); 1068 + ret = IRQ_HANDLED; 1069 + } 1603 1070 1604 1071 return ret; 1605 1072 } ··· 1629 1092 return NOTIFY_OK; 1630 1093 } 1631 1094 1632 - static struct sci_irq_desc { 1095 + static const struct sci_irq_desc { 1633 1096 const char *desc; 1634 1097 irq_handler_t handler; 1635 1098 } sci_irq_desc[] = { ··· 1671 1134 int i, j, ret = 0; 1672 1135 1673 1136 for (i = j = 0; i < SCIx_NR_IRQS; i++, j++) { 1674 - struct sci_irq_desc *desc; 1137 + const struct sci_irq_desc *desc; 1675 1138 int irq; 1676 1139 1677 1140 if (SCIx_IRQ_IS_MUXED(port)) { ··· 1691 1154 desc = sci_irq_desc + i; 1692 1155 port->irqstr[j] = kasprintf(GFP_KERNEL, "%s:%s", 1693 1156 dev_name(up->dev), desc->desc); 1694 - if (!port->irqstr[j]) { 1695 - dev_err(up->dev, "Failed to allocate %s IRQ string\n", 1696 - 
desc->desc); 1157 + if (!port->irqstr[j]) 1697 1158 goto out_nomem; 1698 - } 1699 1159 1700 1160 ret = request_irq(irq, desc->handler, up->irqflags, 1701 1161 port->irqstr[j], port); ··· 1766 1232 static void sci_set_mctrl(struct uart_port *port, unsigned int mctrl) 1767 1233 { 1768 1234 if (mctrl & TIOCM_LOOP) { 1769 - struct plat_sci_reg *reg; 1235 + const struct plat_sci_reg *reg; 1770 1236 1771 1237 /* 1772 1238 * Standard loopback mode for SCFCR ports. ··· 1788 1254 return TIOCM_DSR | TIOCM_CAR; 1789 1255 } 1790 1256 1791 - #ifdef CONFIG_SERIAL_SH_SCI_DMA 1792 - static void sci_dma_tx_complete(void *arg) 1793 - { 1794 - struct sci_port *s = arg; 1795 - struct uart_port *port = &s->port; 1796 - struct circ_buf *xmit = &port->state->xmit; 1797 - unsigned long flags; 1798 - 1799 - dev_dbg(port->dev, "%s(%d)\n", __func__, port->line); 1800 - 1801 - spin_lock_irqsave(&port->lock, flags); 1802 - 1803 - xmit->tail += sg_dma_len(&s->sg_tx); 1804 - xmit->tail &= UART_XMIT_SIZE - 1; 1805 - 1806 - port->icount.tx += sg_dma_len(&s->sg_tx); 1807 - 1808 - async_tx_ack(s->desc_tx); 1809 - s->desc_tx = NULL; 1810 - 1811 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 1812 - uart_write_wakeup(port); 1813 - 1814 - if (!uart_circ_empty(xmit)) { 1815 - s->cookie_tx = 0; 1816 - schedule_work(&s->work_tx); 1817 - } else { 1818 - s->cookie_tx = -EINVAL; 1819 - if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) { 1820 - u16 ctrl = serial_port_in(port, SCSCR); 1821 - serial_port_out(port, SCSCR, ctrl & ~SCSCR_TIE); 1822 - } 1823 - } 1824 - 1825 - spin_unlock_irqrestore(&port->lock, flags); 1826 - } 1827 - 1828 - /* Locking: called with port lock held */ 1829 - static int sci_dma_rx_push(struct sci_port *s, size_t count) 1830 - { 1831 - struct uart_port *port = &s->port; 1832 - struct tty_port *tport = &port->state->port; 1833 - int i, active, room; 1834 - 1835 - room = tty_buffer_request_room(tport, count); 1836 - 1837 - if (s->active_rx == s->cookie_rx[0]) { 1838 - active 
= 0; 1839 - } else if (s->active_rx == s->cookie_rx[1]) { 1840 - active = 1; 1841 - } else { 1842 - dev_err(port->dev, "cookie %d not found!\n", s->active_rx); 1843 - return 0; 1844 - } 1845 - 1846 - if (room < count) 1847 - dev_warn(port->dev, "Rx overrun: dropping %zu bytes\n", 1848 - count - room); 1849 - if (!room) 1850 - return room; 1851 - 1852 - for (i = 0; i < room; i++) 1853 - tty_insert_flip_char(tport, ((u8 *)sg_virt(&s->sg_rx[active]))[i], 1854 - TTY_NORMAL); 1855 - 1856 - port->icount.rx += room; 1857 - 1858 - return room; 1859 - } 1860 - 1861 - static void sci_dma_rx_complete(void *arg) 1862 - { 1863 - struct sci_port *s = arg; 1864 - struct uart_port *port = &s->port; 1865 - unsigned long flags; 1866 - int count; 1867 - 1868 - dev_dbg(port->dev, "%s(%d) active #%d\n", 1869 - __func__, port->line, s->active_rx); 1870 - 1871 - spin_lock_irqsave(&port->lock, flags); 1872 - 1873 - count = sci_dma_rx_push(s, s->buf_len_rx); 1874 - 1875 - mod_timer(&s->rx_timer, jiffies + s->rx_timeout); 1876 - 1877 - spin_unlock_irqrestore(&port->lock, flags); 1878 - 1879 - if (count) 1880 - tty_flip_buffer_push(&port->state->port); 1881 - 1882 - schedule_work(&s->work_rx); 1883 - } 1884 - 1885 - static void sci_rx_dma_release(struct sci_port *s, bool enable_pio) 1886 - { 1887 - struct dma_chan *chan = s->chan_rx; 1888 - struct uart_port *port = &s->port; 1889 - 1890 - s->chan_rx = NULL; 1891 - s->cookie_rx[0] = s->cookie_rx[1] = -EINVAL; 1892 - dma_release_channel(chan); 1893 - if (sg_dma_address(&s->sg_rx[0])) 1894 - dma_free_coherent(port->dev, s->buf_len_rx * 2, 1895 - sg_virt(&s->sg_rx[0]), sg_dma_address(&s->sg_rx[0])); 1896 - if (enable_pio) 1897 - sci_start_rx(port); 1898 - } 1899 - 1900 - static void sci_tx_dma_release(struct sci_port *s, bool enable_pio) 1901 - { 1902 - struct dma_chan *chan = s->chan_tx; 1903 - struct uart_port *port = &s->port; 1904 - 1905 - s->chan_tx = NULL; 1906 - s->cookie_tx = -EINVAL; 1907 - dma_release_channel(chan); 1908 - if 
(enable_pio) 1909 - sci_start_tx(port); 1910 - } 1911 - 1912 - static void sci_submit_rx(struct sci_port *s) 1913 - { 1914 - struct dma_chan *chan = s->chan_rx; 1915 - int i; 1916 - 1917 - for (i = 0; i < 2; i++) { 1918 - struct scatterlist *sg = &s->sg_rx[i]; 1919 - struct dma_async_tx_descriptor *desc; 1920 - 1921 - desc = dmaengine_prep_slave_sg(chan, 1922 - sg, 1, DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT); 1923 - 1924 - if (desc) { 1925 - s->desc_rx[i] = desc; 1926 - desc->callback = sci_dma_rx_complete; 1927 - desc->callback_param = s; 1928 - s->cookie_rx[i] = desc->tx_submit(desc); 1929 - } 1930 - 1931 - if (!desc || s->cookie_rx[i] < 0) { 1932 - if (i) { 1933 - async_tx_ack(s->desc_rx[0]); 1934 - s->cookie_rx[0] = -EINVAL; 1935 - } 1936 - if (desc) { 1937 - async_tx_ack(desc); 1938 - s->cookie_rx[i] = -EINVAL; 1939 - } 1940 - dev_warn(s->port.dev, 1941 - "failed to re-start DMA, using PIO\n"); 1942 - sci_rx_dma_release(s, true); 1943 - return; 1944 - } 1945 - dev_dbg(s->port.dev, "%s(): cookie %d to #%d\n", 1946 - __func__, s->cookie_rx[i], i); 1947 - } 1948 - 1949 - s->active_rx = s->cookie_rx[0]; 1950 - 1951 - dma_async_issue_pending(chan); 1952 - } 1953 - 1954 - static void work_fn_rx(struct work_struct *work) 1955 - { 1956 - struct sci_port *s = container_of(work, struct sci_port, work_rx); 1957 - struct uart_port *port = &s->port; 1958 - struct dma_async_tx_descriptor *desc; 1959 - int new; 1960 - 1961 - if (s->active_rx == s->cookie_rx[0]) { 1962 - new = 0; 1963 - } else if (s->active_rx == s->cookie_rx[1]) { 1964 - new = 1; 1965 - } else { 1966 - dev_err(port->dev, "cookie %d not found!\n", s->active_rx); 1967 - return; 1968 - } 1969 - desc = s->desc_rx[new]; 1970 - 1971 - if (dma_async_is_tx_complete(s->chan_rx, s->active_rx, NULL, NULL) != 1972 - DMA_COMPLETE) { 1973 - /* Handle incomplete DMA receive */ 1974 - struct dma_chan *chan = s->chan_rx; 1975 - struct shdma_desc *sh_desc = container_of(desc, 1976 - struct shdma_desc, async_tx); 1977 - unsigned 
long flags; 1978 - int count; 1979 - 1980 - dmaengine_terminate_all(chan); 1981 - dev_dbg(port->dev, "Read %zu bytes with cookie %d\n", 1982 - sh_desc->partial, sh_desc->cookie); 1983 - 1984 - spin_lock_irqsave(&port->lock, flags); 1985 - count = sci_dma_rx_push(s, sh_desc->partial); 1986 - spin_unlock_irqrestore(&port->lock, flags); 1987 - 1988 - if (count) 1989 - tty_flip_buffer_push(&port->state->port); 1990 - 1991 - sci_submit_rx(s); 1992 - 1993 - return; 1994 - } 1995 - 1996 - s->cookie_rx[new] = desc->tx_submit(desc); 1997 - if (s->cookie_rx[new] < 0) { 1998 - dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n"); 1999 - sci_rx_dma_release(s, true); 2000 - return; 2001 - } 2002 - 2003 - s->active_rx = s->cookie_rx[!new]; 2004 - 2005 - dev_dbg(port->dev, "%s: cookie %d #%d, new active #%d\n", 2006 - __func__, s->cookie_rx[new], new, s->active_rx); 2007 - } 2008 - 2009 - static void work_fn_tx(struct work_struct *work) 2010 - { 2011 - struct sci_port *s = container_of(work, struct sci_port, work_tx); 2012 - struct dma_async_tx_descriptor *desc; 2013 - struct dma_chan *chan = s->chan_tx; 2014 - struct uart_port *port = &s->port; 2015 - struct circ_buf *xmit = &port->state->xmit; 2016 - struct scatterlist *sg = &s->sg_tx; 2017 - 2018 - /* 2019 - * DMA is idle now. 2020 - * Port xmit buffer is already mapped, and it is one page... Just adjust 2021 - * offsets and lengths. Since it is a circular buffer, we have to 2022 - * transmit till the end, and then the rest. Take the port lock to get a 2023 - * consistent xmit buffer state. 
2024 - */ 2025 - spin_lock_irq(&port->lock); 2026 - sg->offset = xmit->tail & (UART_XMIT_SIZE - 1); 2027 - sg_dma_address(sg) = (sg_dma_address(sg) & ~(UART_XMIT_SIZE - 1)) + 2028 - sg->offset; 2029 - sg_dma_len(sg) = min((int)CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE), 2030 - CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE)); 2031 - spin_unlock_irq(&port->lock); 2032 - 2033 - BUG_ON(!sg_dma_len(sg)); 2034 - 2035 - desc = dmaengine_prep_slave_sg(chan, 2036 - sg, s->sg_len_tx, DMA_MEM_TO_DEV, 2037 - DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 2038 - if (!desc) { 2039 - /* switch to PIO */ 2040 - sci_tx_dma_release(s, true); 2041 - return; 2042 - } 2043 - 2044 - dma_sync_sg_for_device(port->dev, sg, 1, DMA_TO_DEVICE); 2045 - 2046 - spin_lock_irq(&port->lock); 2047 - s->desc_tx = desc; 2048 - desc->callback = sci_dma_tx_complete; 2049 - desc->callback_param = s; 2050 - spin_unlock_irq(&port->lock); 2051 - s->cookie_tx = desc->tx_submit(desc); 2052 - if (s->cookie_tx < 0) { 2053 - dev_warn(port->dev, "Failed submitting Tx DMA descriptor\n"); 2054 - /* switch to PIO */ 2055 - sci_tx_dma_release(s, true); 2056 - return; 2057 - } 2058 - 2059 - dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n", 2060 - __func__, xmit->buf, xmit->tail, xmit->head, s->cookie_tx); 2061 - 2062 - dma_async_issue_pending(chan); 2063 - } 2064 - #endif 2065 - 2066 - static void sci_start_tx(struct uart_port *port) 2067 - { 2068 - struct sci_port *s = to_sci_port(port); 2069 - unsigned short ctrl; 2070 - 2071 - #ifdef CONFIG_SERIAL_SH_SCI_DMA 2072 - if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) { 2073 - u16 new, scr = serial_port_in(port, SCSCR); 2074 - if (s->chan_tx) 2075 - new = scr | SCSCR_TDRQE; 2076 - else 2077 - new = scr & ~SCSCR_TDRQE; 2078 - if (new != scr) 2079 - serial_port_out(port, SCSCR, new); 2080 - } 2081 - 2082 - if (s->chan_tx && !uart_circ_empty(&s->port.state->xmit) && 2083 - s->cookie_tx < 0) { 2084 - s->cookie_tx = 0; 2085 - schedule_work(&s->work_tx); 2086 - 
} 2087 - #endif 2088 - 2089 - if (!s->chan_tx || port->type == PORT_SCIFA || port->type == PORT_SCIFB) { 2090 - /* Set TIE (Transmit Interrupt Enable) bit in SCSCR */ 2091 - ctrl = serial_port_in(port, SCSCR); 2092 - serial_port_out(port, SCSCR, ctrl | SCSCR_TIE); 2093 - } 2094 - } 2095 - 2096 - static void sci_stop_tx(struct uart_port *port) 2097 - { 2098 - unsigned short ctrl; 2099 - 2100 - /* Clear TIE (Transmit Interrupt Enable) bit in SCSCR */ 2101 - ctrl = serial_port_in(port, SCSCR); 2102 - 2103 - if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 2104 - ctrl &= ~SCSCR_TDRQE; 2105 - 2106 - ctrl &= ~SCSCR_TIE; 2107 - 2108 - serial_port_out(port, SCSCR, ctrl); 2109 - } 2110 - 2111 - static void sci_start_rx(struct uart_port *port) 2112 - { 2113 - unsigned short ctrl; 2114 - 2115 - ctrl = serial_port_in(port, SCSCR) | port_rx_irq_mask(port); 2116 - 2117 - if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 2118 - ctrl &= ~SCSCR_RDRQE; 2119 - 2120 - serial_port_out(port, SCSCR, ctrl); 2121 - } 2122 - 2123 - static void sci_stop_rx(struct uart_port *port) 2124 - { 2125 - unsigned short ctrl; 2126 - 2127 - ctrl = serial_port_in(port, SCSCR); 2128 - 2129 - if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) 2130 - ctrl &= ~SCSCR_RDRQE; 2131 - 2132 - ctrl &= ~port_rx_irq_mask(port); 2133 - 2134 - serial_port_out(port, SCSCR, ctrl); 2135 - } 2136 - 2137 1257 static void sci_break_ctl(struct uart_port *port, int break_state) 2138 1258 { 2139 1259 struct sci_port *s = to_sci_port(port); 2140 - struct plat_sci_reg *reg = sci_regmap[s->cfg->regtype] + SCSPTR; 1260 + const struct plat_sci_reg *reg = sci_regmap[s->cfg->regtype] + SCSPTR; 2141 1261 unsigned short scscr, scsptr; 2142 1262 2143 1263 /* check wheter the port has SCSPTR */ ··· 1817 1629 serial_port_out(port, SCSPTR, scsptr); 1818 1630 serial_port_out(port, SCSCR, scscr); 1819 1631 } 1820 - 1821 - #ifdef CONFIG_SERIAL_SH_SCI_DMA 1822 - static bool filter(struct dma_chan *chan, void *slave) 
1823 - { 1824 - struct sh_dmae_slave *param = slave; 1825 - 1826 - dev_dbg(chan->device->dev, "%s: slave ID %d\n", 1827 - __func__, param->shdma_slave.slave_id); 1828 - 1829 - chan->private = &param->shdma_slave; 1830 - return true; 1831 - } 1832 - 1833 - static void rx_timer_fn(unsigned long arg) 1834 - { 1835 - struct sci_port *s = (struct sci_port *)arg; 1836 - struct uart_port *port = &s->port; 1837 - u16 scr = serial_port_in(port, SCSCR); 1838 - 1839 - if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) { 1840 - scr &= ~SCSCR_RDRQE; 1841 - enable_irq(s->irqs[SCIx_RXI_IRQ]); 1842 - } 1843 - serial_port_out(port, SCSCR, scr | SCSCR_RIE); 1844 - dev_dbg(port->dev, "DMA Rx timed out\n"); 1845 - schedule_work(&s->work_rx); 1846 - } 1847 - 1848 - static void sci_request_dma(struct uart_port *port) 1849 - { 1850 - struct sci_port *s = to_sci_port(port); 1851 - struct sh_dmae_slave *param; 1852 - struct dma_chan *chan; 1853 - dma_cap_mask_t mask; 1854 - int nent; 1855 - 1856 - dev_dbg(port->dev, "%s: port %d\n", __func__, port->line); 1857 - 1858 - if (s->cfg->dma_slave_tx <= 0 || s->cfg->dma_slave_rx <= 0) 1859 - return; 1860 - 1861 - dma_cap_zero(mask); 1862 - dma_cap_set(DMA_SLAVE, mask); 1863 - 1864 - param = &s->param_tx; 1865 - 1866 - /* Slave ID, e.g., SHDMA_SLAVE_SCIF0_TX */ 1867 - param->shdma_slave.slave_id = s->cfg->dma_slave_tx; 1868 - 1869 - s->cookie_tx = -EINVAL; 1870 - chan = dma_request_channel(mask, filter, param); 1871 - dev_dbg(port->dev, "%s: TX: got channel %p\n", __func__, chan); 1872 - if (chan) { 1873 - s->chan_tx = chan; 1874 - sg_init_table(&s->sg_tx, 1); 1875 - /* UART circular tx buffer is an aligned page. 
*/ 1876 - BUG_ON((uintptr_t)port->state->xmit.buf & ~PAGE_MASK); 1877 - sg_set_page(&s->sg_tx, virt_to_page(port->state->xmit.buf), 1878 - UART_XMIT_SIZE, 1879 - (uintptr_t)port->state->xmit.buf & ~PAGE_MASK); 1880 - nent = dma_map_sg(port->dev, &s->sg_tx, 1, DMA_TO_DEVICE); 1881 - if (!nent) 1882 - sci_tx_dma_release(s, false); 1883 - else 1884 - dev_dbg(port->dev, "%s: mapped %d@%p to %pad\n", 1885 - __func__, 1886 - sg_dma_len(&s->sg_tx), port->state->xmit.buf, 1887 - &sg_dma_address(&s->sg_tx)); 1888 - 1889 - s->sg_len_tx = nent; 1890 - 1891 - INIT_WORK(&s->work_tx, work_fn_tx); 1892 - } 1893 - 1894 - param = &s->param_rx; 1895 - 1896 - /* Slave ID, e.g., SHDMA_SLAVE_SCIF0_RX */ 1897 - param->shdma_slave.slave_id = s->cfg->dma_slave_rx; 1898 - 1899 - chan = dma_request_channel(mask, filter, param); 1900 - dev_dbg(port->dev, "%s: RX: got channel %p\n", __func__, chan); 1901 - if (chan) { 1902 - dma_addr_t dma[2]; 1903 - void *buf[2]; 1904 - int i; 1905 - 1906 - s->chan_rx = chan; 1907 - 1908 - s->buf_len_rx = 2 * max(16, (int)port->fifosize); 1909 - buf[0] = dma_alloc_coherent(port->dev, s->buf_len_rx * 2, 1910 - &dma[0], GFP_KERNEL); 1911 - 1912 - if (!buf[0]) { 1913 - dev_warn(port->dev, 1914 - "failed to allocate dma buffer, using PIO\n"); 1915 - sci_rx_dma_release(s, true); 1916 - return; 1917 - } 1918 - 1919 - buf[1] = buf[0] + s->buf_len_rx; 1920 - dma[1] = dma[0] + s->buf_len_rx; 1921 - 1922 - for (i = 0; i < 2; i++) { 1923 - struct scatterlist *sg = &s->sg_rx[i]; 1924 - 1925 - sg_init_table(sg, 1); 1926 - sg_set_page(sg, virt_to_page(buf[i]), s->buf_len_rx, 1927 - (uintptr_t)buf[i] & ~PAGE_MASK); 1928 - sg_dma_address(sg) = dma[i]; 1929 - } 1930 - 1931 - INIT_WORK(&s->work_rx, work_fn_rx); 1932 - setup_timer(&s->rx_timer, rx_timer_fn, (unsigned long)s); 1933 - 1934 - sci_submit_rx(s); 1935 - } 1936 - } 1937 - 1938 - static void sci_free_dma(struct uart_port *port) 1939 - { 1940 - struct sci_port *s = to_sci_port(port); 1941 - 1942 - if (s->chan_tx) 1943 
- sci_tx_dma_release(s, false); 1944 - if (s->chan_rx) 1945 - sci_rx_dma_release(s, false); 1946 - } 1947 - #else 1948 - static inline void sci_request_dma(struct uart_port *port) 1949 - { 1950 - } 1951 - 1952 - static inline void sci_free_dma(struct uart_port *port) 1953 - { 1954 - } 1955 - #endif 1956 1632 1957 1633 static int sci_startup(struct uart_port *port) 1958 1634 { ··· 1851 1799 sci_stop_rx(port); 1852 1800 sci_stop_tx(port); 1853 1801 spin_unlock_irqrestore(&port->lock, flags); 1802 + 1803 + #ifdef CONFIG_SERIAL_SH_SCI_DMA 1804 + if (s->chan_rx) { 1805 + dev_dbg(port->dev, "%s(%d) deleting rx_timer\n", __func__, 1806 + port->line); 1807 + del_timer_sync(&s->rx_timer); 1808 + } 1809 + #endif 1854 1810 1855 1811 sci_free_dma(port); 1856 1812 sci_free_irq(s); ··· 1952 1892 1953 1893 static void sci_reset(struct uart_port *port) 1954 1894 { 1955 - struct plat_sci_reg *reg; 1895 + const struct plat_sci_reg *reg; 1956 1896 unsigned int status; 1957 1897 1958 1898 do { ··· 1970 1910 struct ktermios *old) 1971 1911 { 1972 1912 struct sci_port *s = to_sci_port(port); 1973 - struct plat_sci_reg *reg; 1913 + const struct plat_sci_reg *reg; 1974 1914 unsigned int baud, smr_val = 0, max_baud, cks = 0; 1975 1915 int t = -1; 1976 1916 unsigned int srr = 15; ··· 2011 1951 2012 1952 sci_reset(port); 2013 1953 2014 - smr_val |= serial_port_in(port, SCSMR) & 3; 1954 + smr_val |= serial_port_in(port, SCSMR) & SCSMR_CKS; 2015 1955 2016 1956 uart_update_timeout(port, termios->c_cflag, baud); 2017 1957 ··· 2056 1996 #ifdef CONFIG_SERIAL_SH_SCI_DMA 2057 1997 /* 2058 1998 * Calculate delay for 2 DMA buffers (4 FIFO). 2059 - * See drivers/serial/serial_core.c::uart_update_timeout(). With 10 2060 - * bits (CS8), 250Hz, 115200 baud and 64 bytes FIFO, the above function 2061 - * calculates 1 jiffie for the data plus 5 jiffies for the "slop(e)." 
··· 2062 - * Then below we calculate 5 jiffies (20ms) for 2 DMA buffers (4 FIFO
2063 - * sizes), but when performing a faster transfer, value obtained by
2064 - * this formula is may not enough. Therefore, if value is smaller than
2065 - * 20msec, this sets 20msec as timeout of DMA.
1999 + * See serial_core.c::uart_update_timeout().
2000 + * With 10 bits (CS8), 250Hz, 115200 baud and 64 bytes FIFO, the above
2001 + * function calculates 1 jiffie for the data plus 5 jiffies for the
2002 + * "slop(e)." Then below we calculate 5 jiffies (20ms) for 2 DMA
2003 + * buffers (4 FIFO sizes), but when performing a faster transfer, the
2004 + * value obtained by this formula is too small. Therefore, if the value
2005 + * is smaller than 20ms, use 20ms as the timeout value for DMA.
2066 2006 */
2067 2007 if (s->chan_rx) {
2068 2008 unsigned int bits;
··· 2247 2187 {
2248 2188 struct uart_port *port = &sci_port->port;
2249 2189 const struct resource *res;
2250 - unsigned int sampling_rate;
2251 2190 unsigned int i;
2252 2191 int ret;
··· 2291 2232 port->fifosize = 256;
2292 2233 sci_port->overrun_reg = SCxSR;
2293 2234 sci_port->overrun_mask = SCIFA_ORER;
2294 - sampling_rate = 16;
2235 + sci_port->sampling_rate = 16;
2295 2236 break;
2296 2237 case PORT_HSCIF:
2297 2238 port->fifosize = 128;
2298 - sampling_rate = 0;
2299 2239 sci_port->overrun_reg = SCLSR;
2300 2240 sci_port->overrun_mask = SCLSR_ORER;
2241 + sci_port->sampling_rate = 0;
2301 2242 break;
2302 2243 case PORT_SCIFA:
2303 2244 port->fifosize = 64;
2304 2245 sci_port->overrun_reg = SCxSR;
2305 2246 sci_port->overrun_mask = SCIFA_ORER;
2306 - sampling_rate = 16;
2247 + sci_port->sampling_rate = 16;
2307 2248 break;
2308 2249 case PORT_SCIF:
2309 2250 port->fifosize = 16;
2310 2251 if (p->regtype == SCIx_SH7705_SCIF_REGTYPE) {
2311 2252 sci_port->overrun_reg = SCxSR;
2312 2253 sci_port->overrun_mask = SCIFA_ORER;
2313 - sampling_rate = 16;
2254 + sci_port->sampling_rate = 16;
2314 2255 } else {
2315 2256 sci_port->overrun_reg = SCLSR;
2316 2257 sci_port->overrun_mask = SCLSR_ORER;
2317 - sampling_rate = 32;
2258 + sci_port->sampling_rate = 32;
2318 2259 }
2319 2260 break;
2320 2261 default:
2321 2262 port->fifosize = 1;
2322 2263 sci_port->overrun_reg = SCxSR;
2323 2264 sci_port->overrun_mask = SCI_ORER;
2324 - sampling_rate = 32;
2265 + sci_port->sampling_rate = 32;
2325 2266 break;
2326 2267 }
··· 2329 2270 * match the SoC datasheet, this should be investigated. Let platform
2330 2271 * data override the sampling rate for now.
2331 2272 */
2332 - sci_port->sampling_rate = p->sampling_rate ? p->sampling_rate
2333 - : sampling_rate;
2273 + if (p->sampling_rate)
2274 + sci_port->sampling_rate = p->sampling_rate;
2334 2275 
2335 2276 if (!early) {
2336 2277 sci_port->iclk = clk_get(&dev->dev, "sci_ick");
··· 2362 2303 /*
2363 2304 * Establish some sensible defaults for the error detection.
2364 2305 */
2365 - sci_port->error_mask = (p->type == PORT_SCI) ?
2366 - SCI_DEFAULT_ERROR_MASK : SCIF_DEFAULT_ERROR_MASK;
2306 + if (p->type == PORT_SCI) {
2307 + sci_port->error_mask = SCI_DEFAULT_ERROR_MASK;
2308 + sci_port->error_clear = SCI_ERROR_CLEAR;
2309 + } else {
2310 + sci_port->error_mask = SCIF_DEFAULT_ERROR_MASK;
2311 + sci_port->error_clear = SCIF_ERROR_CLEAR;
2312 + }
2367 2313 
2368 2314 /*
2369 2315 * Make the error mask inclusive of overrun detection, if
2370 2316 * supported.
2371 2317 */
2372 - if (sci_port->overrun_reg == SCxSR)
2318 + if (sci_port->overrun_reg == SCxSR) {
2373 2319 sci_port->error_mask |= sci_port->overrun_mask;
2320 + sci_port->error_clear &= ~sci_port->overrun_mask;
2321 + }
2374 2322 
2375 2323 port->type = p->type;
2376 2324 port->flags = UPF_FIXED_PORT | p->flags;
··· 2630 2564 info = match->data;
2631 2565 
2632 2566 p = devm_kzalloc(&pdev->dev, sizeof(struct plat_sci_port), GFP_KERNEL);
2633 - if (!p) {
2634 - dev_err(&pdev->dev, "failed to allocate DT config data\n");
2567 + if (!p)
2635 2568 return NULL;
2636 - }
2637 2569 
2638 2570 /* Get the line number for the aliases node. */
2639 2571 id = of_alias_get_id(np, "serial");
+16 -33
drivers/tty/serial/sh-sci.h
··· 54 54 
55 55 #define SCI_DEFAULT_ERROR_MASK (SCI_PER | SCI_FER)
56 56 
57 - #define SCI_RDxF_CLEAR ~(SCI_RESERVED | SCI_RDRF)
58 - #define SCI_ERROR_CLEAR ~(SCI_RESERVED | SCI_PER | SCI_FER | SCI_ORER)
59 - #define SCI_TDxE_CLEAR ~(SCI_RESERVED | SCI_TEND | SCI_TDRE)
60 - #define SCI_BREAK_CLEAR ~(SCI_RESERVED | SCI_PER | SCI_FER | SCI_ORER)
57 + #define SCI_RDxF_CLEAR (u32)(~(SCI_RESERVED | SCI_RDRF))
58 + #define SCI_ERROR_CLEAR (u32)(~(SCI_RESERVED | SCI_PER | SCI_FER | SCI_ORER))
59 + #define SCI_TDxE_CLEAR (u32)(~(SCI_RESERVED | SCI_TEND | SCI_TDRE))
60 + #define SCI_BREAK_CLEAR (u32)(~(SCI_RESERVED | SCI_PER | SCI_FER | SCI_ORER))
61 61 
62 62 /* SCxSR (Serial Status Register) on SCIF, SCIFA, SCIFB, HSCIF */
63 63 #define SCIF_ER BIT(7) /* Receive Error */
··· 76 76 
77 77 #define SCIF_DEFAULT_ERROR_MASK (SCIF_PER | SCIF_FER | SCIF_BRK | SCIF_ER)
78 78 
79 - #define SCIF_RDxF_CLEAR ~(SCIF_DR | SCIF_RDF)
80 - #define SCIF_ERROR_CLEAR ~(SCIFA_ORER | SCIF_PER | SCIF_FER | SCIF_ER)
81 - #define SCIF_TDxE_CLEAR ~(SCIF_TDFE)
82 - #define SCIF_BREAK_CLEAR ~(SCIF_PER | SCIF_FER | SCIF_BRK)
79 + #define SCIF_RDxF_CLEAR (u32)(~(SCIF_DR | SCIF_RDF))
80 + #define SCIF_ERROR_CLEAR (u32)(~(SCIF_PER | SCIF_FER | SCIF_ER))
81 + #define SCIF_TDxE_CLEAR (u32)(~(SCIF_TDFE))
82 + #define SCIF_BREAK_CLEAR (u32)(~(SCIF_PER | SCIF_FER | SCIF_BRK))
83 83 
84 84 /* SCFCR (FIFO Control Register) */
85 85 #define SCFCR_MCE BIT(3) /* Modem Control Enable */
··· 119 119 
120 120 #define SCxSR_ERRORS(port) (to_sci_port(port)->error_mask)
121 121 
122 - #if defined(CONFIG_CPU_SUBTYPE_SH7705) || \
123 - defined(CONFIG_CPU_SUBTYPE_SH7720) || \
124 - defined(CONFIG_CPU_SUBTYPE_SH7721) || \
125 - defined(CONFIG_ARCH_SH73A0) || \
126 - defined(CONFIG_ARCH_R8A7740)
127 - 
128 - # define SCxSR_RDxF_CLEAR(port) \
129 - (serial_port_in(port, SCxSR) & SCIF_RDxF_CLEAR)
130 - # define SCxSR_ERROR_CLEAR(port) \
131 - (serial_port_in(port, SCxSR) & SCIF_ERROR_CLEAR)
132 - # define SCxSR_TDxE_CLEAR(port) \
133 - (serial_port_in(port, SCxSR) & SCIF_TDxE_CLEAR)
134 - # define SCxSR_BREAK_CLEAR(port) \
135 - (serial_port_in(port, SCxSR) & SCIF_BREAK_CLEAR)
136 - #else
137 - # define SCxSR_RDxF_CLEAR(port) \
138 - ((((port)->type == PORT_SCI) ? SCI_RDxF_CLEAR : SCIF_RDxF_CLEAR) & 0xff)
139 - # define SCxSR_ERROR_CLEAR(port) \
140 - ((((port)->type == PORT_SCI) ? SCI_ERROR_CLEAR : SCIF_ERROR_CLEAR) & 0xff)
141 - # define SCxSR_TDxE_CLEAR(port) \
142 - ((((port)->type == PORT_SCI) ? SCI_TDxE_CLEAR : SCIF_TDxE_CLEAR) & 0xff)
143 - # define SCxSR_BREAK_CLEAR(port) \
144 - ((((port)->type == PORT_SCI) ? SCI_BREAK_CLEAR : SCIF_BREAK_CLEAR) & 0xff)
145 - #endif
146 - 
122 + #define SCxSR_RDxF_CLEAR(port) \
123 + (((port)->type == PORT_SCI) ? SCI_RDxF_CLEAR : SCIF_RDxF_CLEAR)
124 + #define SCxSR_ERROR_CLEAR(port) \
125 + (to_sci_port(port)->error_clear)
126 + #define SCxSR_TDxE_CLEAR(port) \
127 + (((port)->type == PORT_SCI) ? SCI_TDxE_CLEAR : SCIF_TDxE_CLEAR)
128 + #define SCxSR_BREAK_CLEAR(port) \
129 + (((port)->type == PORT_SCI) ? SCI_BREAK_CLEAR : SCIF_BREAK_CLEAR)
+1
drivers/tty/serial/sprd_serial.c
··· 782 782 {.compatible = "sprd,sc9836-uart",},
783 783 {}
784 784 };
785 + MODULE_DEVICE_TABLE(of, serial_ids);
785 786 
786 787 static struct platform_driver sprd_platform_driver = {
787 788 .probe = sprd_probe,
+1 -1
drivers/tty/serial/st-asc.c
··· 430 430 */
431 431 static int asc_startup(struct uart_port *port)
432 432 {
433 - if (request_irq(port->irq, asc_interrupt, IRQF_NO_SUSPEND,
433 + if (request_irq(port->irq, asc_interrupt, 0,
434 434 asc_port_name(port), port)) {
435 435 dev_err(port->dev, "cannot allocate irq.\n");
436 436 return -ENODEV;
+1 -2
drivers/tty/serial/stm32-usart.c
··· 322 322 u32 val;
323 323 int ret;
324 324 
325 - ret = request_irq(port->irq, stm32_interrupt, IRQF_NO_SUSPEND,
326 - name, port);
325 + ret = request_irq(port->irq, stm32_interrupt, 0, name, port);
327 326 if (ret)
328 327 return ret;
329 328 
+5 -15
drivers/tty/synclink.c
··· 3314 3314 -EAGAIN : -ERESTARTSYS;
3315 3315 break;
3316 3316 }
3317 - 
3317 + 
3318 3318 dcd = tty_port_carrier_raised(&info->port);
3319 - 
3320 - if (!(port->flags & ASYNC_CLOSING) && (do_clocal || dcd))
3321 - break;
3322 - 
3319 + if (do_clocal || dcd)
3320 + break;
3321 + 
3323 3322 if (signal_pending(current)) {
3324 3323 retval = -ERESTARTSYS;
3325 3324 break;
··· 3397 3398 printk("%s(%d):mgsl_open(%s), old ref count = %d\n",
3398 3399 __FILE__,__LINE__,tty->driver->name, info->port.count);
3399 3400 
3400 - /* If port is closing, signal caller to try again */
3401 - if (info->port.flags & ASYNC_CLOSING){
3402 - wait_event_interruptible_tty(tty, info->port.close_wait,
3403 - !(info->port.flags & ASYNC_CLOSING));
3404 - retval = ((info->port.flags & ASYNC_HUP_NOTIFY) ?
3405 - -EAGAIN : -ERESTARTSYS);
3406 - goto cleanup;
3407 - }
3408 - 
3409 3401 info->port.low_latency = (info->port.flags & ASYNC_LOW_LATENCY) ? 1 : 0;
3410 3402 
3411 3403 spin_lock_irqsave(&info->netlock, flags);
··· 6625 6635 unsigned char *ptmp = info->intermediate_rxbuffer;
6626 6636 
6627 6637 if ( !(status & RXSTATUS_CRC_ERROR))
6628 - info->icount.rxok++;
6638 + info->icount.rxok++;
6629 6639 
6630 6640 while(copy_count) {
6631 6641 int partial_count;
+2 -12
drivers/tty/synclinkmp.c
··· 752 752 printk("%s(%d):%s open(), old ref count = %d\n",
753 753 __FILE__,__LINE__,tty->driver->name, info->port.count);
754 754 
755 - /* If port is closing, signal caller to try again */
756 - if (info->port.flags & ASYNC_CLOSING){
757 - wait_event_interruptible_tty(tty, info->port.close_wait,
758 - !(info->port.flags & ASYNC_CLOSING));
759 - retval = ((info->port.flags & ASYNC_HUP_NOTIFY) ?
760 - -EAGAIN : -ERESTARTSYS);
761 - goto cleanup;
762 - }
763 - 
764 755 info->port.low_latency = (info->port.flags & ASYNC_LOW_LATENCY) ? 1 : 0;
765 756 
766 757 spin_lock_irqsave(&info->netlock, flags);
··· 3332 3341 }
3333 3342 
3334 3343 cd = tty_port_carrier_raised(port);
3335 - 
3336 - if (!(port->flags & ASYNC_CLOSING) && (do_clocal || cd))
3337 - break;
3344 + if (do_clocal || cd)
3345 + break;
3338 3346 
3339 3347 if (signal_pending(current)) {
3340 3348 retval = -ERESTARTSYS;
+5 -1
drivers/tty/sysrq.c
··· 1003 1003 #define param_check_sysrq_reset_seq(name, p) \
1004 1004 __param_check(name, p, unsigned short)
1005 1005 
1006 + /*
1007 + * not really modular, but the easiest way to keep compat with existing
1008 + * bootargs behaviour is to continue using module_param here.
1009 + */
1006 1010 module_param_array_named(reset_seq, sysrq_reset_seq, sysrq_reset_seq,
1007 1011 &sysrq_reset_seq_len, 0644);
1008 1012 
··· 1123 1119 
1124 1120 return 0;
1125 1121 }
1126 - module_init(sysrq_init);
1122 + device_initcall(sysrq_init);
+11 -1
drivers/tty/tty_buffer.c
··· 403 403 * flush_to_ldisc() sees buffer data.
404 404 */
405 405 smp_store_release(&buf->tail->commit, buf->tail->used);
406 - schedule_work(&buf->work);
406 + queue_work(system_unbound_wq, &buf->work);
407 407 }
408 408 EXPORT_SYMBOL(tty_schedule_flip);
409 409 
··· 586 586 void tty_buffer_set_lock_subclass(struct tty_port *port)
587 587 {
588 588 lockdep_set_subclass(&port->buf.lock, TTY_LOCK_SLAVE);
589 + }
590 + 
591 + bool tty_buffer_restart_work(struct tty_port *port)
592 + {
593 + return queue_work(system_unbound_wq, &port->buf.work);
594 + }
595 + 
596 + bool tty_buffer_cancel_work(struct tty_port *port)
597 + {
598 + return cancel_work_sync(&port->buf.work);
589 599 }
+29 -30
drivers/tty/tty_io.c
··· 390 390 * Locking: ctrl_lock
391 391 */
392 392 
393 - int tty_check_change(struct tty_struct *tty)
393 + int __tty_check_change(struct tty_struct *tty, int sig)
394 394 {
395 395 unsigned long flags;
396 - struct pid *pgrp;
396 + struct pid *pgrp, *tty_pgrp;
397 397 int ret = 0;
398 398 
399 399 if (current->signal->tty != tty)
··· 403 403 pgrp = task_pgrp(current);
404 404 
405 405 spin_lock_irqsave(&tty->ctrl_lock, flags);
406 - 
407 - if (!tty->pgrp) {
408 - printk(KERN_WARNING "tty_check_change: tty->pgrp == NULL!\n");
409 - goto out_unlock;
410 - }
411 - if (pgrp == tty->pgrp)
412 - goto out_unlock;
406 + tty_pgrp = tty->pgrp;
413 407 spin_unlock_irqrestore(&tty->ctrl_lock, flags);
414 408 
415 - if (is_ignored(SIGTTOU))
416 - goto out_rcuunlock;
417 - if (is_current_pgrp_orphaned()) {
418 - ret = -EIO;
419 - goto out_rcuunlock;
409 + if (tty_pgrp && pgrp != tty->pgrp) {
410 + if (is_ignored(sig)) {
411 + if (sig == SIGTTIN)
412 + ret = -EIO;
413 + } else if (is_current_pgrp_orphaned())
414 + ret = -EIO;
415 + else {
416 + kill_pgrp(pgrp, sig, 1);
417 + set_thread_flag(TIF_SIGPENDING);
418 + ret = -ERESTARTSYS;
419 + }
420 420 }
421 - kill_pgrp(pgrp, SIGTTOU, 1);
422 421 rcu_read_unlock();
423 - set_thread_flag(TIF_SIGPENDING);
424 - ret = -ERESTARTSYS;
425 - return ret;
426 - out_unlock:
427 - spin_unlock_irqrestore(&tty->ctrl_lock, flags);
428 - out_rcuunlock:
429 - rcu_read_unlock();
422 + 
423 + if (!tty_pgrp) {
424 + pr_warn("%s: tty_check_change: sig=%d, tty->pgrp == NULL!\n",
425 + tty_name(tty), sig);
426 + }
427 + 
430 428 return ret;
431 429 }
432 430 
431 + int tty_check_change(struct tty_struct *tty)
432 + {
433 + return __tty_check_change(tty, SIGTTOU);
434 + }
433 435 EXPORT_SYMBOL(tty_check_change);
434 436 
435 437 static ssize_t hung_up_tty_read(struct file *file, char __user *buf,
··· 1200 1198 if (tty) {
1201 1199 mutex_lock(&tty->atomic_write_lock);
1202 1200 tty_lock(tty);
1203 - if (tty->ops->write && tty->count > 0) {
1204 - tty_unlock(tty);
1201 + if (tty->ops->write && tty->count > 0)
1205 1202 tty->ops->write(tty, msg, strlen(msg));
1206 - } else
1207 - tty_unlock(tty);
1203 + tty_unlock(tty);
1208 1204 tty_write_unlock(tty);
1209 1205 }
1210 1206 return;
··· 1689 1689 tty->port->itty = NULL;
1690 1690 if (tty->link)
1691 1691 tty->link->port->itty = NULL;
1692 - cancel_work_sync(&tty->port->buf.work);
1692 + tty_buffer_cancel_work(tty->port);
1693 1693 
1694 1694 tty_kref_put(tty->link);
1695 1695 tty_kref_put(tty);
··· 2569 2569 struct pid *pgrp;
2570 2570 pid_t pgrp_nr;
2571 2571 int retval = tty_check_change(real_tty);
2572 - unsigned long flags;
2573 2572 
2574 2573 if (retval == -EIO)
2575 2574 return -ENOTTY;
··· 2591 2592 if (session_of_pgrp(pgrp) != task_session(current))
2592 2593 goto out_unlock;
2593 2594 retval = 0;
2594 - spin_lock_irqsave(&tty->ctrl_lock, flags);
2595 + spin_lock_irq(&tty->ctrl_lock);
2595 2596 put_pid(real_tty->pgrp);
2596 2597 real_tty->pgrp = get_pid(pgrp);
2597 - spin_unlock_irqrestore(&tty->ctrl_lock, flags);
2598 + spin_unlock_irq(&tty->ctrl_lock);
2598 2599 out_unlock:
2599 2600 rcu_read_unlock();
2600 2601 return retval;
+1 -1
drivers/tty/tty_ldisc.c
··· 319 319 
320 320 static inline void __tty_ldisc_unlock(struct tty_struct *tty)
321 321 {
322 - return ldsem_up_write(&tty->ldisc_sem);
322 + ldsem_up_write(&tty->ldisc_sem);
323 323 }
324 324 
325 325 static int __lockfunc
+4 -24
drivers/tty/tty_port.c
··· 22 22 memset(port, 0, sizeof(*port));
23 23 tty_buffer_init(port);
24 24 init_waitqueue_head(&port->open_wait);
25 - init_waitqueue_head(&port->close_wait);
26 25 init_waitqueue_head(&port->delta_msr_wait);
27 26 mutex_init(&port->mutex);
28 27 mutex_init(&port->buf_mutex);
··· 130 131 */
131 132 void tty_port_destroy(struct tty_port *port)
132 133 {
133 - cancel_work_sync(&port->buf.work);
134 + tty_buffer_cancel_work(port);
134 135 tty_buffer_free_all(port);
135 136 }
136 137 EXPORT_SYMBOL(tty_port_destroy);
··· 362 363 unsigned long flags;
363 364 DEFINE_WAIT(wait);
364 365 
365 - /* block if port is in the process of being closed */
366 - if (port->flags & ASYNC_CLOSING) {
367 - wait_event_interruptible_tty(tty, port->close_wait,
368 - !(port->flags & ASYNC_CLOSING));
369 - if (port->flags & ASYNC_HUP_NOTIFY)
370 - return -EAGAIN;
371 - else
372 - return -ERESTARTSYS;
373 - }
374 - 
375 366 /* if non-blocking mode is set we can pass directly to open unless
376 367 the port has just hung up or is in another error state */
377 368 if (tty->flags & (1 << TTY_IO_ERROR)) {
··· 412 423 * Never ask drivers if CLOCAL is set, this causes troubles
413 424 * on some hardware.
414 425 */
415 - if (!(port->flags & ASYNC_CLOSING) &&
416 - (do_clocal || tty_port_carrier_raised(port))
426 + if (do_clocal || tty_port_carrier_raised(port))
417 427 break;
418 428 if (signal_pending(current)) {
419 429 retval = -ERESTARTSYS;
··· 451 463 schedule_timeout_interruptible(timeout);
452 464 }
453 465 
454 - /* Caller holds tty lock.
455 - * NB: may drop and reacquire tty lock (in tty_wait_until_sent_from_close())
456 - * so tty and tty port may have changed state (but not hung up or reopened).
457 - */
466 + /* Caller holds tty lock. */
458 467 int tty_port_close_start(struct tty_port *port,
459 468 struct tty_struct *tty, struct file *filp)
460 469 {
··· 487 502 if (tty->flow_stopped)
488 503 tty_driver_flush_buffer(tty);
489 504 if (port->closing_wait != ASYNC_CLOSING_WAIT_NONE)
490 - tty_wait_until_sent_from_close(tty, port->closing_wait);
505 + tty_wait_until_sent(tty, port->closing_wait);
491 506 if (port->drain_delay)
492 507 tty_port_drain_delay(port, tty);
493 508 }
··· 519 534 wake_up_interruptible(&port->open_wait);
520 535 }
521 536 port->flags &= ~(ASYNC_NORMAL_ACTIVE | ASYNC_CLOSING);
522 - wake_up_interruptible(&port->close_wait);
523 537 spin_unlock_irqrestore(&port->lock, flags);
524 538 }
525 539 EXPORT_SYMBOL(tty_port_close_end);
··· 527 543 * tty_port_close
528 544 *
529 545 * Caller holds tty lock
530 - *
531 - * NB: may drop and reacquire tty lock (in tty_port_close_start()->
532 - * tty_wait_until_sent_from_close()) so tty and tty_port may have changed
533 - * state (but not hung up or reopened).
534 546 */
535 547 void tty_port_close(struct tty_port *port, struct tty_struct *tty,
536 548 struct file *filp)
+4 -2
drivers/usb/gadget/function/u_serial.c
··· 114 114 struct gs_buf port_write_buf;
115 115 wait_queue_head_t drain_wait; /* wait while writes drain */
116 116 bool write_busy;
117 + wait_queue_head_t close_wait;
117 118 
118 119 /* REVISIT this state ... */
119 120 struct usb_cdc_line_coding port_line_coding; /* 8-N-1 etc */
··· 884 883 pr_debug("gs_close: ttyGS%d (%p,%p) done!\n",
885 884 port->port_num, tty, file);
886 885 
887 - wake_up(&port->port.close_wait);
886 + wake_up(&port->close_wait);
888 887 exit:
889 888 spin_unlock_irq(&port->port_lock);
890 889 }
··· 1044 1043 tty_port_init(&port->port);
1045 1044 spin_lock_init(&port->port_lock);
1046 1045 init_waitqueue_head(&port->drain_wait);
1046 + init_waitqueue_head(&port->close_wait);
1047 1047 
1048 1048 tasklet_init(&port->push, gs_rx_push, (unsigned long) port);
1049 1049 
··· 1075 1073 {
1076 1074 tasklet_kill(&port->push);
1077 1075 /* wait for old opens to finish */
1078 - wait_event(port->port.close_wait, gs_closed(port));
1076 + wait_event(port->close_wait, gs_closed(port));
1079 1077 WARN_ON(port->port_usb != NULL);
1080 1078 tty_port_destroy(&port->port);
1081 1079 kfree(port);
+10 -1
include/linux/dma/hsu.h
··· 35 35 unsigned int length;
36 36 unsigned int offset;
37 37 struct hsu_dma *hsu;
38 - struct hsu_dma_platform_data *pdata;
39 38 };
40 39 
40 + #if IS_ENABLED(CONFIG_HSU_DMA)
41 41 /* Export to the internal users */
42 42 irqreturn_t hsu_dma_irq(struct hsu_dma_chip *chip, unsigned short nr);
43 43 
44 44 /* Export to the platform drivers */
45 45 int hsu_dma_probe(struct hsu_dma_chip *chip);
46 46 int hsu_dma_remove(struct hsu_dma_chip *chip);
47 + #else
48 + static inline irqreturn_t hsu_dma_irq(struct hsu_dma_chip *chip,
49 + unsigned short nr)
50 + {
51 + return IRQ_NONE;
52 + }
53 + static inline int hsu_dma_probe(struct hsu_dma_chip *chip) { return -ENODEV; }
54 + static inline int hsu_dma_remove(struct hsu_dma_chip *chip) { return 0; }
55 + #endif /* CONFIG_HSU_DMA */
47 56 
48 57 #endif /* _DMA_HSU_H */
+3 -5
include/linux/n_r3964.h
··· 152 152 unsigned char *rx_buf; /* ring buffer */
153 153 unsigned char *tx_buf;
154 154 
155 - wait_queue_head_t read_wait;
156 - //struct wait_queue *read_wait;
157 - 
158 155 struct r3964_block_header *rx_first;
159 156 struct r3964_block_header *rx_last;
160 157 struct r3964_block_header *tx_first;
··· 161 164 unsigned char last_rx;
162 165 unsigned char bcc;
163 166 unsigned int blocks_in_rx_queue;
164 - 
165 - 
167 + 
168 + struct mutex read_lock; /* serialize r3964_read */
169 + 
166 170 struct r3964_client_info *firstClient;
167 171 unsigned int state;
168 172 unsigned int flags;
-6
include/linux/platform_data/atmel.h
··· 19 19 #include <linux/serial.h>
20 20 #include <linux/platform_data/macb.h>
21 21 
22 - /*
23 - * at91: 6 USARTs and one DBGU port (SAM9260)
24 - * avr32: 4
25 - */
26 - #define ATMEL_MAX_UART 7
27 - 
28 22 /* Compact Flash */
29 23 struct at91_cf_data {
30 24 int irq_pin; /* I/O IRQ */
-4
include/linux/platform_data/dma-hsu.h
··· 18 18 int chan_id;
19 19 };
20 20 
21 - struct hsu_dma_platform_data {
22 - unsigned short nr_channels;
23 - };
24 - 
25 21 #endif /* _PLATFORM_DATA_DMA_HSU_H */
+3 -45
include/linux/tty.h
··· 227 227 int blocked_open; /* Waiting to open */
228 228 int count; /* Usage count */
229 229 wait_queue_head_t open_wait; /* Open waiters */
230 - wait_queue_head_t close_wait; /* Close waiters */
231 230 wait_queue_head_t delta_msr_wait; /* Modem status change */
232 231 unsigned long flags; /* TTY flags ASY_*/
233 232 unsigned char console:1, /* port is a console */
··· 423 424 const char *routine);
424 425 extern const char *tty_name(const struct tty_struct *tty);
425 426 extern void tty_wait_until_sent(struct tty_struct *tty, long timeout);
427 + extern int __tty_check_change(struct tty_struct *tty, int sig);
426 428 extern int tty_check_change(struct tty_struct *tty);
427 429 extern void __stop_tty(struct tty_struct *tty);
428 430 extern void stop_tty(struct tty_struct *tty);
··· 467 467 extern void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
468 468 extern void tty_buffer_init(struct tty_port *port);
469 469 extern void tty_buffer_set_lock_subclass(struct tty_port *port);
470 + extern bool tty_buffer_restart_work(struct tty_port *port);
471 + extern bool tty_buffer_cancel_work(struct tty_port *port);
470 472 extern speed_t tty_termios_baud_rate(struct ktermios *termios);
471 473 extern speed_t tty_termios_input_baud_rate(struct ktermios *termios);
472 474 extern void tty_termios_encode_baud_rate(struct ktermios *termios,
··· 658 656 extern void __lockfunc tty_lock_slave(struct tty_struct *tty);
659 657 extern void __lockfunc tty_unlock_slave(struct tty_struct *tty);
660 658 extern void tty_set_lock_subclass(struct tty_struct *tty);
661 - /*
662 - * this shall be called only from where BTM is held (like close)
663 - *
664 - * We need this to ensure nobody waits for us to finish while we are waiting.
665 - * Without this we were encountering system stalls.
666 - *
667 - * This should be indeed removed with BTM removal later.
668 - *
669 - * Locking: BTM required. Nobody is allowed to hold port->mutex.
670 - */
671 - static inline void tty_wait_until_sent_from_close(struct tty_struct *tty,
672 - long timeout)
673 - {
674 - tty_unlock(tty); /* tty->ops->close holds the BTM, drop it while waiting */
675 - tty_wait_until_sent(tty, timeout);
676 - tty_lock(tty);
677 - }
678 - 
679 - /*
680 - * wait_event_interruptible_tty -- wait for a condition with the tty lock held
681 - *
682 - * The condition we are waiting for might take a long time to
683 - * become true, or might depend on another thread taking the
684 - * BTM. In either case, we need to drop the BTM to guarantee
685 - * forward progress. This is a leftover from the conversion
686 - * from the BKL and should eventually get removed as the BTM
687 - * falls out of use.
688 - *
689 - * Do not use in new code.
690 - */
691 - #define wait_event_interruptible_tty(tty, wq, condition) \
692 - ({ \
693 - int __ret = 0; \
694 - if (!(condition)) \
695 - __ret = __wait_event_interruptible_tty(tty, wq, \
696 - condition); \
697 - __ret; \
698 - })
699 - 
700 - #define __wait_event_interruptible_tty(tty, wq, condition) \
701 - ___wait_event(wq, condition, TASK_INTERRUPTIBLE, 0, 0, \
702 - tty_unlock(tty); \
703 - schedule(); \
704 - tty_lock(tty))
705 659 
706 660 #ifdef CONFIG_PROC_FS
707 661 extern void proc_tty_register_driver(struct tty_driver *);
+1 -30
net/irda/ircomm/ircomm_tty.c
··· 335 335 * specified, we cannot return before the IrCOMM link is
336 336 * ready
337 337 */
338 - if (!test_bit(ASYNCB_CLOSING, &port->flags) &&
339 - (do_clocal || tty_port_carrier_raised(port)) &&
338 + if ((do_clocal || tty_port_carrier_raised(port)) &&
340 339 self->state == IRCOMM_TTY_READY)
341 340 {
342 341 break;
··· 441 442 
442 443 /* Not really used by us, but lets do it anyway */
443 444 self->port.low_latency = (self->port.flags & ASYNC_LOW_LATENCY) ? 1 : 0;
444 - 
445 - /*
446 - * If the port is the middle of closing, bail out now
447 - */
448 - if (test_bit(ASYNCB_CLOSING, &self->port.flags)) {
449 - 
450 - /* Hm, why are we blocking on ASYNC_CLOSING if we
451 - * do return -EAGAIN/-ERESTARTSYS below anyway?
452 - * IMHO it's either not needed in the first place
453 - * or for some reason we need to make sure the async
454 - * closing has been finished - if so, wouldn't we
455 - * probably better sleep uninterruptible?
456 - */
457 - 
458 - if (wait_event_interruptible(self->port.close_wait,
459 - !test_bit(ASYNCB_CLOSING, &self->port.flags))) {
460 - net_warn_ratelimited("%s - got signal while blocking on ASYNC_CLOSING!\n",
461 - __func__);
462 - return -ERESTARTSYS;
463 - }
464 - 
465 - #ifdef SERIAL_DO_RESTART
466 - return (self->port.flags & ASYNC_HUP_NOTIFY) ?
467 - -EAGAIN : -ERESTARTSYS;
468 - #else
469 - return -EAGAIN;
470 - #endif
471 - }
472 445 
473 446 /* Check if this is a "normal" ircomm device, or an irlpt device */
474 447 if (self->line < 0x10) {