Merge tag 'tty-6.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty

Pull tty / serial updates from Greg KH:
"Here is the big set of tty/serial driver changes for 6.10-rc1.
Included in here are:

- The usual good set of API cleanups and evolution by Jiri Slaby to
move the serial interfaces out of the 1990s by using kfifos instead
of hand-rolling their own buffer logic (a minimal sketch of the new
pattern follows the quoted message below).

- 8250_exar driver updates

- max3100 driver updates

- sc16is7xx driver updates

- exar driver updates

- sh-sci driver updates

- tty ldisc API addition to allow a tty to refuse a new ldisc binding

- other smaller serial driver updates

All of these have been in linux-next for a while with no reported
issues"

* tag 'tty-6.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (113 commits)
serial: Clear UPF_DEAD before calling tty_port_register_device_attr_serdev()
serial: imx: Raise TX trigger level to 8
serial: 8250_pnp: Simplify "line" related code
serial: sh-sci: simplify locking when re-issuing RXDMA fails
serial: sh-sci: let timeout timer only run when DMA is scheduled
serial: sh-sci: describe locking requirements for invalidating RXDMA
serial: sh-sci: protect invalidating RXDMA on shutdown
tty: add the option to have a tty reject a new ldisc
serial: core: Call device_set_awake_path() for console port
dt-bindings: serial: brcm,bcm2835-aux-uart: convert to dtschema
tty: serial: uartps: Add support for uartps controller reset
arm64: zynqmp: Add resets property for UART nodes
dt-bindings: serial: cdns,uart: Add optional reset property
serial: 8250_pnp: Switch to DEFINE_SIMPLE_DEV_PM_OPS()
serial: 8250_exar: Keep the includes sorted
serial: 8250_exar: Make type of bit the same in exar_ee_*_bit()
serial: 8250_exar: Use BIT() in exar_ee_read()
serial: 8250_exar: Switch to use dev_err_probe()
serial: 8250_exar: Return directly from switch-cases
serial: 8250_exar: Decrease indentation level
...

+3141 -1808
+19
Documentation/admin-guide/kernel-parameters.txt
··· 788 Documentation/networking/netconsole.rst for an 789 alternative. 790 791 uart[8250],io,<addr>[,options] 792 uart[8250],mmio,<addr>[,options] 793 uart[8250],mmio16,<addr>[,options]
··· 788 Documentation/networking/netconsole.rst for an 789 alternative. 790 791 + <DEVNAME>:<n>.<n>[,options] 792 + Use the specified serial port on the serial core bus. 793 + The addressing uses DEVNAME of the physical serial port 794 + device, followed by the serial core controller instance, 795 + and the serial port instance. The options are the same 796 + as documented for the ttyS addressing above. 797 + 798 + The mapping of the serial ports to the tty instances 799 + can be viewed with: 800 + 801 + $ ls -d /sys/bus/serial-base/devices/*:*.*/tty/* 802 + /sys/bus/serial-base/devices/00:04:0.0/tty/ttyS0 803 + 804 + In the above example, the console can be addressed with 805 + console=00:04:0.0. Note that a console addressed this 806 + way will only get added when the related device driver 807 + is ready. The use of an earlycon parameter in addition to 808 + the console may be desired for console output early on. 809 + 810 uart[8250],io,<addr>[,options] 811 uart[8250],mmio,<addr>[,options] 812 uart[8250],mmio16,<addr>[,options]
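
Combined with the example in the added text, a boot command line using
the new addressing might look like this (illustrative values only; the
00:04:0.0 name is the one from the documentation hunk above):

    console=00:04:0.0,115200n8 earlycon

where the bare earlycon parameter provides output before the
serial-core console named by DEVNAME:<n>.<n> is registered, as the
added paragraph recommends.
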
+9
Documentation/admin-guide/sysrq.rst
··· 161 will be printed to your console. (``0``, for example would make 162 it so that only emergency messages like PANICs or OOPSes would 163 make it to your console.) 164 =========== =================================================================== 165 166 Okay, so what can I use them for? ··· 212 213 "just thaw ``it(j)``" is useful if your system becomes unresponsive due to a 214 frozen (probably root) filesystem via the FIFREEZE ioctl. 215 216 Sometimes SysRq seems to get 'stuck' after using it, what can I do? 217 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
··· 161 will be printed to your console. (``0``, for example would make 162 it so that only emergency messages like PANICs or OOPSes would 163 make it to your console.) 164 + 165 + ``R`` Replay the kernel log messages on consoles. 166 =========== =================================================================== 167 168 Okay, so what can I use them for? ··· 210 211 "just thaw ``it(j)``" is useful if your system becomes unresponsive due to a 212 frozen (probably root) filesystem via the FIFREEZE ioctl. 213 + 214 + ``Replay logs(R)`` is useful to view the kernel log messages when system is hung 215 + or you are not able to use dmesg command to view the messages in printk buffer. 216 + User may have to press the key combination multiple times if console system is 217 + busy. If it is completely locked up, then messages won't be printed. Output 218 + messages depend on current console loglevel, which can be modified using 219 + sysrq[0-9] (see above). 220 221 Sometimes SysRq seems to get 'stuck' after using it, what can I do? 222 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
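
For completeness, the new key can also be exercised without a keyboard
through the usual trigger file, assuming SysRq is enabled and the
trigger interface accepts the capital-letter key:

    # echo R > /proc/sysrq-trigger

which replays the printk ring buffer on the registered consoles at the
current console loglevel.
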
-16
Documentation/devicetree/bindings/serial/actions,owl-uart.txt
··· 1 - Actions Semi Owl UART 2 - 3 - Required properties: 4 - - compatible : "actions,s500-uart", "actions,owl-uart" for S500 5 - "actions,s900-uart", "actions,owl-uart" for S900 6 - - reg : Offset and length of the register set for the device. 7 - - interrupts : Should contain UART interrupt. 8 - 9 - 10 - Example: 11 - 12 - uart3: serial@b0126000 { 13 - compatible = "actions,s500-uart", "actions,owl-uart"; 14 - reg = <0xb0126000 0x1000>; 15 - interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>; 16 - };
···
+48
Documentation/devicetree/bindings/serial/actions,owl-uart.yaml
···
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/serial/actions,owl-uart.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Actions Semi Owl UART 8 + 9 + maintainers: 10 + - Kanak Shilledar <kanakshilledar111@protonmail.com> 11 + 12 + allOf: 13 + - $ref: serial.yaml 14 + 15 + properties: 16 + compatible: 17 + items: 18 + - enum: 19 + - actions,s500-uart 20 + - actions,s900-uart 21 + - const: actions,owl-uart 22 + 23 + reg: 24 + maxItems: 1 25 + 26 + interrupts: 27 + maxItems: 1 28 + 29 + clocks: 30 + maxItems: 1 31 + 32 + required: 33 + - compatible 34 + - reg 35 + - interrupts 36 + 37 + unevaluatedProperties: false 38 + 39 + examples: 40 + - | 41 + #include <dt-bindings/clock/actions,s500-cmu.h> 42 + #include <dt-bindings/interrupt-controller/arm-gic.h> 43 + uart0: serial@b0126000 { 44 + compatible = "actions,s500-uart", "actions,owl-uart"; 45 + reg = <0xb0126000 0x1000>; 46 + clocks = <&cmu CLK_UART0>; 47 + interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>; 48 + };
-18
Documentation/devicetree/bindings/serial/brcm,bcm2835-aux-uart.txt
··· 1 - * BCM2835 AUXILIAR UART 2 - 3 - Required properties: 4 - 5 - - compatible: "brcm,bcm2835-aux-uart" 6 - - reg: The base address of the UART register bank. 7 - - interrupts: A single interrupt specifier. 8 - - clocks: Clock driving the hardware; used to figure out the baud rate 9 - divisor. 10 - 11 - Example: 12 - 13 - uart1: serial@7e215040 { 14 - compatible = "brcm,bcm2835-aux-uart"; 15 - reg = <0x7e215040 0x40>; 16 - interrupts = <1 29>; 17 - clocks = <&aux BCM2835_AUX_CLOCK_UART>; 18 - };
···
+46
Documentation/devicetree/bindings/serial/brcm,bcm2835-aux-uart.yaml
···
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/serial/brcm,bcm2835-aux-uart.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: BCM2835 AUXILIARY UART 8 + 9 + maintainers: 10 + - Pratik Farkase <pratikfarkase94@gmail.com> 11 + - Florian Fainelli <florian.fainelli@broadcom.com> 12 + - Stefan Wahren <wahrenst@gmx.net> 13 + 14 + allOf: 15 + - $ref: serial.yaml 16 + 17 + properties: 18 + compatible: 19 + const: brcm,bcm2835-aux-uart 20 + 21 + reg: 22 + maxItems: 1 23 + 24 + interrupts: 25 + maxItems: 1 26 + 27 + clocks: 28 + maxItems: 1 29 + 30 + required: 31 + - compatible 32 + - reg 33 + - interrupts 34 + - clocks 35 + 36 + unevaluatedProperties: false 37 + 38 + examples: 39 + - | 40 + #include <dt-bindings/clock/bcm2835-aux.h> 41 + serial@7e215040 { 42 + compatible = "brcm,bcm2835-aux-uart"; 43 + reg = <0x7e215040 0x40>; 44 + interrupts = <1 29>; 45 + clocks = <&aux BCM2835_AUX_CLOCK_UART>; 46 + };
+3
Documentation/devicetree/bindings/serial/cdns,uart.yaml
··· 46 power-domains: 47 maxItems: 1 48 49 required: 50 - compatible 51 - reg
··· 46 power-domains: 47 maxItems: 1 48 49 + resets: 50 + maxItems: 1 51 + 52 required: 53 - compatible 54 - reg
+1
Documentation/devicetree/bindings/serial/renesas,scif.yaml
··· 68 - renesas,scif-r8a779a0 # R-Car V3U 69 - renesas,scif-r8a779f0 # R-Car S4-8 70 - renesas,scif-r8a779g0 # R-Car V4H 71 - const: renesas,rcar-gen4-scif # R-Car Gen4 72 - const: renesas,scif # generic SCIF compatible UART 73
··· 68 - renesas,scif-r8a779a0 # R-Car V3U 69 - renesas,scif-r8a779f0 # R-Car S4-8 70 - renesas,scif-r8a779g0 # R-Car V4H 71 + - renesas,scif-r8a779h0 # R-Car V4M 72 - const: renesas,rcar-gen4-scif # R-Car Gen4 73 - const: renesas,scif # generic SCIF compatible UART 74
+2
arch/arm64/boot/dts/xilinx/zynqmp.dtsi
··· 906 reg = <0x0 0xff000000 0x0 0x1000>; 907 clock-names = "uart_clk", "pclk"; 908 power-domains = <&zynqmp_firmware PD_UART_0>; 909 }; 910 911 uart1: serial@ff010000 { ··· 918 reg = <0x0 0xff010000 0x0 0x1000>; 919 clock-names = "uart_clk", "pclk"; 920 power-domains = <&zynqmp_firmware PD_UART_1>; 921 }; 922 923 usb0: usb@ff9d0000 {
··· 906 reg = <0x0 0xff000000 0x0 0x1000>; 907 clock-names = "uart_clk", "pclk"; 908 power-domains = <&zynqmp_firmware PD_UART_0>; 909 + resets = <&zynqmp_reset ZYNQMP_RESET_UART0>; 910 }; 911 912 uart1: serial@ff010000 { ··· 917 reg = <0x0 0xff010000 0x0 0x1000>; 918 clock-names = "uart_clk", "pclk"; 919 power-domains = <&zynqmp_firmware PD_UART_1>; 920 + resets = <&zynqmp_reset ZYNQMP_RESET_UART1>; 921 }; 922 923 usb0: usb@ff9d0000 {
+7 -1
drivers/tty/amiserial.c
··· 1578 free_irq(IRQ_AMIGA_RBF, state); 1579 } 1580 1581 - static struct platform_driver amiga_serial_driver = { 1582 .remove_new = __exit_p(amiga_serial_remove), 1583 .driver = { 1584 .name = "amiga-serial",
··· 1578 free_irq(IRQ_AMIGA_RBF, state); 1579 } 1580 1581 + /* 1582 + * amiga_serial_remove() lives in .exit.text. For drivers registered via 1583 + * module_platform_driver_probe() this is ok because they cannot get unbound at 1584 + * runtime. So mark the driver struct with __refdata to prevent modpost 1585 + * triggering a section mismatch warning. 1586 + */ 1587 + static struct platform_driver amiga_serial_driver __refdata = { 1588 .remove_new = __exit_p(amiga_serial_remove), 1589 .driver = { 1590 .name = "amiga-serial",
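
The pattern the new comment refers to looks roughly like this (generic
foo_* names, not the driver's actual symbols):

    static int __init foo_probe(struct platform_device *pdev) { return 0; }
    static void __exit foo_remove(struct platform_device *pdev) { }

    static struct platform_driver foo_driver __refdata = {
            .remove_new = __exit_p(foo_remove),
            .driver = { .name = "foo" },
    };
    module_platform_driver_probe(foo_driver, foo_probe);

Because platform_driver_probe() registration means the device can never
be unbound at runtime, the driver struct's reference into .exit.text is
never followed, and __refdata merely silences the otherwise legitimate
modpost section-mismatch warning.
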
+1 -1
drivers/tty/hvc/hvc_xen.c
··· 558 break; 559 fallthrough; /* Missed the backend's CLOSING state */ 560 case XenbusStateClosing: { 561 - struct xencons_info *info = dev_get_drvdata(&dev->dev);; 562 563 /* 564 * Don't tear down the evtchn and grant ref before the other
··· 558 break; 559 fallthrough; /* Missed the backend's CLOSING state */ 560 case XenbusStateClosing: { 561 + struct xencons_info *info = dev_get_drvdata(&dev->dev); 562 563 /* 564 * Don't tear down the evtchn and grant ref before the other
+1 -1
drivers/tty/n_gsm.c
··· 4010 mux_net = netdev_priv(net); 4011 mux_net->dlci = dlci; 4012 kref_init(&mux_net->ref); 4013 - strncpy(nc->if_name, net->name, IFNAMSIZ); /* return net name */ 4014 4015 /* reconfigure dlci for network */ 4016 dlci->prev_adaption = dlci->adaption;
··· 4010 mux_net = netdev_priv(net); 4011 mux_net->dlci = dlci; 4012 kref_init(&mux_net->ref); 4013 + strscpy(nc->if_name, net->name); /* return net name */ 4014 4015 /* reconfigure dlci for network */ 4016 dlci->prev_adaption = dlci->adaption;
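
Unlike strncpy(), strscpy() always NUL-terminates and does not zero-pad
the rest of the destination. The two-argument form used here infers the
bound from the destination array, i.e. it is equivalent to:

    strscpy(nc->if_name, net->name, sizeof(nc->if_name));

so the explicit IFNAMSIZ bound becomes unnecessary (if_name is a
fixed-size array in the gsm netconfig structure).
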
+6 -8
drivers/tty/serial/8250/8250_bcm7271.c
··· 413 static int brcmuart_tx_dma(struct uart_8250_port *p) 414 { 415 struct brcmuart_priv *priv = p->port.private_data; 416 - struct circ_buf *xmit = &p->port.state->xmit; 417 u32 tx_size; 418 419 if (uart_tx_stopped(&p->port) || priv->tx_running || 420 - uart_circ_empty(xmit)) { 421 return 0; 422 } 423 - tx_size = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 424 425 priv->dma.tx_err = 0; 426 - memcpy(priv->tx_buf, &xmit->buf[xmit->tail], tx_size); 427 - uart_xmit_advance(&p->port, tx_size); 428 429 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 430 uart_write_wakeup(&p->port); 431 432 udma_writel(priv, REGS_DMA_TX, UDMA_TX_TRANSFER_LEN, tx_size); ··· 538 struct brcmuart_priv *priv = up->private_data; 539 struct device *dev = up->dev; 540 struct uart_8250_port *port_8250 = up_to_u8250p(up); 541 - struct circ_buf *xmit = &port_8250->port.state->xmit; 542 543 if (isr & UDMA_INTR_TX_ABORT) { 544 if (priv->tx_running) ··· 546 return; 547 } 548 priv->tx_running = false; 549 - if (!uart_circ_empty(xmit) && !uart_tx_stopped(up)) 550 brcmuart_tx_dma(port_8250); 551 } 552
··· 413 static int brcmuart_tx_dma(struct uart_8250_port *p) 414 { 415 struct brcmuart_priv *priv = p->port.private_data; 416 + struct tty_port *tport = &p->port.state->port; 417 u32 tx_size; 418 419 if (uart_tx_stopped(&p->port) || priv->tx_running || 420 + kfifo_is_empty(&tport->xmit_fifo)) { 421 return 0; 422 } 423 424 priv->dma.tx_err = 0; 425 + tx_size = uart_fifo_out(&p->port, priv->tx_buf, UART_XMIT_SIZE); 426 427 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 428 uart_write_wakeup(&p->port); 429 430 udma_writel(priv, REGS_DMA_TX, UDMA_TX_TRANSFER_LEN, tx_size); ··· 540 struct brcmuart_priv *priv = up->private_data; 541 struct device *dev = up->dev; 542 struct uart_8250_port *port_8250 = up_to_u8250p(up); 543 + struct tty_port *tport = &port_8250->port.state->port; 544 545 if (isr & UDMA_INTR_TX_ABORT) { 546 if (priv->tx_running) ··· 548 return; 549 } 550 priv->tx_running = false; 551 + if (!kfifo_is_empty(&tport->xmit_fifo) && !uart_tx_stopped(up)) 552 brcmuart_tx_dma(port_8250); 553 } 554
+7 -1
drivers/tty/serial/8250/8250_core.c
··· 15 */ 16 17 #include <linux/acpi.h> 18 #include <linux/module.h> 19 #include <linux/moduleparam.h> 20 #include <linux/ioport.h> ··· 41 #endif 42 43 #include <asm/irq.h> 44 45 #include "8250.h" 46 ··· 283 */ 284 lsr = serial_lsr_in(up); 285 if ((iir & UART_IIR_NO_INT) && (up->ier & UART_IER_THRI) && 286 - (!uart_circ_empty(&up->port.state->xmit) || up->port.x_char) && 287 (lsr & UART_LSR_THRE)) { 288 iir &= ~(UART_IIR_ID | UART_IIR_NO_INT); 289 iir |= UART_IIR_THRI; ··· 563 port->irqflags |= irqflag; 564 if (serial8250_isa_config != NULL) 565 serial8250_isa_config(i, &up->port, &up->capabilities); 566 } 567 } 568
··· 15 */ 16 17 #include <linux/acpi.h> 18 + #include <linux/cleanup.h> 19 #include <linux/module.h> 20 #include <linux/moduleparam.h> 21 #include <linux/ioport.h> ··· 40 #endif 41 42 #include <asm/irq.h> 43 + 44 + #include "../serial_base.h" /* For serial_base_add_isa_preferred_console() */ 45 46 #include "8250.h" 47 ··· 280 */ 281 lsr = serial_lsr_in(up); 282 if ((iir & UART_IIR_NO_INT) && (up->ier & UART_IER_THRI) && 283 + (!kfifo_is_empty(&up->port.state->port.xmit_fifo) || 284 + up->port.x_char) && 285 (lsr & UART_LSR_THRE)) { 286 iir &= ~(UART_IIR_ID | UART_IIR_NO_INT); 287 iir |= UART_IIR_THRI; ··· 559 port->irqflags |= irqflag; 560 if (serial8250_isa_config != NULL) 561 serial8250_isa_config(i, &up->port, &up->capabilities); 562 + 563 + serial_base_add_isa_preferred_console(serial8250_reg.dev_name, i); 564 } 565 } 566
+20 -11
drivers/tty/serial/8250/8250_dma.c
··· 15 { 16 struct uart_8250_port *p = param; 17 struct uart_8250_dma *dma = p->dma; 18 - struct circ_buf *xmit = &p->port.state->xmit; 19 unsigned long flags; 20 int ret; 21 ··· 28 29 uart_xmit_advance(&p->port, dma->tx_size); 30 31 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 32 uart_write_wakeup(&p->port); 33 34 ret = serial8250_tx_dma(p); ··· 86 int serial8250_tx_dma(struct uart_8250_port *p) 87 { 88 struct uart_8250_dma *dma = p->dma; 89 - struct circ_buf *xmit = &p->port.state->xmit; 90 struct dma_async_tx_descriptor *desc; 91 struct uart_port *up = &p->port; 92 int ret; 93 94 if (dma->tx_running) { ··· 103 uart_xchar_out(up, UART_TX); 104 } 105 106 - if (uart_tx_stopped(&p->port) || uart_circ_empty(xmit)) { 107 /* We have been called from __dma_tx_complete() */ 108 return 0; 109 } 110 111 - dma->tx_size = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 112 - 113 serial8250_do_prepare_tx_dma(p); 114 115 - desc = dmaengine_prep_slave_single(dma->txchan, 116 - dma->tx_addr + xmit->tail, 117 - dma->tx_size, DMA_MEM_TO_DEV, 118 - DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 119 if (!desc) { 120 ret = -EBUSY; 121 goto err; ··· 262 263 /* TX buffer */ 264 dma->tx_addr = dma_map_single(dma->txchan->device->dev, 265 - p->port.state->xmit.buf, 266 UART_XMIT_SIZE, 267 DMA_TO_DEVICE); 268 if (dma_mapping_error(dma->txchan->device->dev, dma->tx_addr)) {
··· 15 { 16 struct uart_8250_port *p = param; 17 struct uart_8250_dma *dma = p->dma; 18 + struct tty_port *tport = &p->port.state->port; 19 unsigned long flags; 20 int ret; 21 ··· 28 29 uart_xmit_advance(&p->port, dma->tx_size); 30 31 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 32 uart_write_wakeup(&p->port); 33 34 ret = serial8250_tx_dma(p); ··· 86 int serial8250_tx_dma(struct uart_8250_port *p) 87 { 88 struct uart_8250_dma *dma = p->dma; 89 + struct tty_port *tport = &p->port.state->port; 90 struct dma_async_tx_descriptor *desc; 91 struct uart_port *up = &p->port; 92 + struct scatterlist sg; 93 int ret; 94 95 if (dma->tx_running) { ··· 102 uart_xchar_out(up, UART_TX); 103 } 104 105 + if (uart_tx_stopped(&p->port) || kfifo_is_empty(&tport->xmit_fifo)) { 106 /* We have been called from __dma_tx_complete() */ 107 return 0; 108 } 109 110 serial8250_do_prepare_tx_dma(p); 111 112 + sg_init_table(&sg, 1); 113 + /* kfifo can do more than one sg, we don't (quite yet) */ 114 + ret = kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1, 115 + UART_XMIT_SIZE, dma->tx_addr); 116 + 117 + /* we already checked empty fifo above, so there should be something */ 118 + if (WARN_ON_ONCE(ret != 1)) 119 + return 0; 120 + 121 + dma->tx_size = sg_dma_len(&sg); 122 + 123 + desc = dmaengine_prep_slave_sg(dma->txchan, &sg, 1, 124 + DMA_MEM_TO_DEV, 125 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 126 if (!desc) { 127 ret = -EBUSY; 128 goto err; ··· 253 254 /* TX buffer */ 255 dma->tx_addr = dma_map_single(dma->txchan->device->dev, 256 + p->port.state->port.xmit_buf, 257 UART_XMIT_SIZE, 258 DMA_TO_DEVICE); 259 if (dma_mapping_error(dma->txchan->device->dev, dma->tx_addr)) {
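
Worth noting in the DMA conversion above: instead of copying out of the
fifo, kfifo_dma_out_prepare_mapped() describes up to UART_XMIT_SIZE
bytes of the fifo contents in a single scatterlist entry against the
already-mapped dma->tx_addr, the descriptor is built with
dmaengine_prep_slave_sg(), and the unchanged uart_xmit_advance() call
in __dma_tx_complete() consumes dma->tx_size bytes once the transfer
completes. The WARN_ON_ONCE() covers the in-principle impossible case
of the fifo being empty despite the emptiness check a few lines
earlier.
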
+16 -29
drivers/tty/serial/8250/8250_dw.c
··· 100 (void)p->serial_in(p, UART_RX); 101 } 102 103 - static void dw8250_check_lcr(struct uart_port *p, int value) 104 { 105 - void __iomem *offset = p->membase + (UART_LCR << p->regshift); 106 int tries = 1000; 107 108 /* Make sure LCR write wasn't ignored */ 109 while (tries--) { 110 - unsigned int lcr = p->serial_in(p, UART_LCR); 111 112 if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 113 return; ··· 120 121 #ifdef CONFIG_64BIT 122 if (p->type == PORT_OCTEON) 123 - __raw_writeq(value & 0xff, offset); 124 else 125 #endif 126 if (p->iotype == UPIO_MEM32) 127 - writel(value, offset); 128 else if (p->iotype == UPIO_MEM32BE) 129 - iowrite32be(value, offset); 130 else 131 - writeb(value, offset); 132 } 133 /* 134 * FIXME: this deadlocks if port->lock is already held ··· 162 163 static void dw8250_serial_out(struct uart_port *p, int offset, int value) 164 { 165 - struct dw8250_data *d = to_dw8250_data(p->private_data); 166 - 167 writeb(value, p->membase + (offset << p->regshift)); 168 - 169 - if (offset == UART_LCR && !d->uart_16550_compatible) 170 - dw8250_check_lcr(p, value); 171 } 172 173 static void dw8250_serial_out38x(struct uart_port *p, int offset, int value) ··· 185 #ifdef CONFIG_64BIT 186 static unsigned int dw8250_serial_inq(struct uart_port *p, int offset) 187 { 188 - unsigned int value; 189 - 190 - value = (u8)__raw_readq(p->membase + (offset << p->regshift)); 191 192 return dw8250_modify_msr(p, offset, value); 193 } 194 195 static void dw8250_serial_outq(struct uart_port *p, int offset, int value) 196 { 197 - struct dw8250_data *d = to_dw8250_data(p->private_data); 198 - 199 value &= 0xff; 200 __raw_writeq(value, p->membase + (offset << p->regshift)); 201 /* Read back to ensure register write ordering. */ 202 __raw_readq(p->membase + (UART_LCR << p->regshift)); 203 204 - if (offset == UART_LCR && !d->uart_16550_compatible) 205 - dw8250_check_lcr(p, value); 206 } 207 #endif /* CONFIG_64BIT */ 208 209 static void dw8250_serial_out32(struct uart_port *p, int offset, int value) 210 { 211 - struct dw8250_data *d = to_dw8250_data(p->private_data); 212 - 213 writel(value, p->membase + (offset << p->regshift)); 214 - 215 - if (offset == UART_LCR && !d->uart_16550_compatible) 216 - dw8250_check_lcr(p, value); 217 } 218 219 static unsigned int dw8250_serial_in32(struct uart_port *p, int offset) ··· 216 217 static void dw8250_serial_out32be(struct uart_port *p, int offset, int value) 218 { 219 - struct dw8250_data *d = to_dw8250_data(p->private_data); 220 - 221 iowrite32be(value, p->membase + (offset << p->regshift)); 222 - 223 - if (offset == UART_LCR && !d->uart_16550_compatible) 224 - dw8250_check_lcr(p, value); 225 } 226 227 static unsigned int dw8250_serial_in32be(struct uart_port *p, int offset)
··· 100 (void)p->serial_in(p, UART_RX); 101 } 102 103 + static void dw8250_check_lcr(struct uart_port *p, int offset, int value) 104 { 105 + struct dw8250_data *d = to_dw8250_data(p->private_data); 106 + void __iomem *addr = p->membase + (offset << p->regshift); 107 int tries = 1000; 108 + 109 + if (offset != UART_LCR || d->uart_16550_compatible) 110 + return; 111 112 /* Make sure LCR write wasn't ignored */ 113 while (tries--) { 114 + unsigned int lcr = p->serial_in(p, offset); 115 116 if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 117 return; ··· 116 117 #ifdef CONFIG_64BIT 118 if (p->type == PORT_OCTEON) 119 + __raw_writeq(value & 0xff, addr); 120 else 121 #endif 122 if (p->iotype == UPIO_MEM32) 123 + writel(value, addr); 124 else if (p->iotype == UPIO_MEM32BE) 125 + iowrite32be(value, addr); 126 else 127 + writeb(value, addr); 128 } 129 /* 130 * FIXME: this deadlocks if port->lock is already held ··· 158 159 static void dw8250_serial_out(struct uart_port *p, int offset, int value) 160 { 161 writeb(value, p->membase + (offset << p->regshift)); 162 + dw8250_check_lcr(p, offset, value); 163 } 164 165 static void dw8250_serial_out38x(struct uart_port *p, int offset, int value) ··· 185 #ifdef CONFIG_64BIT 186 static unsigned int dw8250_serial_inq(struct uart_port *p, int offset) 187 { 188 + u8 value = __raw_readq(p->membase + (offset << p->regshift)); 189 190 return dw8250_modify_msr(p, offset, value); 191 } 192 193 static void dw8250_serial_outq(struct uart_port *p, int offset, int value) 194 { 195 value &= 0xff; 196 __raw_writeq(value, p->membase + (offset << p->regshift)); 197 /* Read back to ensure register write ordering. */ 198 __raw_readq(p->membase + (UART_LCR << p->regshift)); 199 200 + dw8250_check_lcr(p, offset, value); 201 } 202 #endif /* CONFIG_64BIT */ 203 204 static void dw8250_serial_out32(struct uart_port *p, int offset, int value) 205 { 206 writel(value, p->membase + (offset << p->regshift)); 207 + dw8250_check_lcr(p, offset, value); 208 } 209 210 static unsigned int dw8250_serial_in32(struct uart_port *p, int offset) ··· 225 226 static void dw8250_serial_out32be(struct uart_port *p, int offset, int value) 227 { 228 iowrite32be(value, p->membase + (offset << p->regshift)); 229 + dw8250_check_lcr(p, offset, value); 230 } 231 232 static unsigned int dw8250_serial_in32be(struct uart_port *p, int offset)
+950 -109
drivers/tty/serial/8250/8250_exar.c
··· 6 * 7 * Copyright (C) 2017 Sudip Mukherjee, All Rights Reserved. 8 */ 9 #include <linux/bits.h> 10 #include <linux/delay.h> 11 #include <linux/device.h> ··· 47 #define PCI_DEVICE_ID_COMMTECH_4228PCIE 0x0021 48 #define PCI_DEVICE_ID_COMMTECH_4222PCIE 0x0022 49 50 #define PCI_DEVICE_ID_EXAR_XR17V4358 0x4358 51 #define PCI_DEVICE_ID_EXAR_XR17V8358 0x8358 52 53 #define PCI_SUBDEVICE_ID_USR_2980 0x0128 54 #define PCI_SUBDEVICE_ID_USR_2981 0x0129 ··· 129 #define UART_EXAR_DLD 0x02 /* Divisor Fractional */ 130 #define UART_EXAR_DLD_485_POLARITY 0x80 /* RS-485 Enable Signal Polarity */ 131 132 /* 133 * IOT2040 MPIO wiring semantics: 134 * 135 * MPIO Port Function 136 * ---- ---- -------- 137 - * 0 2 Mode bit 0 138 * 1 2 Mode bit 1 139 * 2 2 Terminate bus 140 * 3 - <reserved> ··· 177 #define IOT2040_UARTS_ENABLE 0x03 178 #define IOT2040_UARTS_GPIO_HI_MODE 0xF8 /* enable & LED as outputs */ 179 180 struct exar8250; 181 182 struct exar8250_platform { 183 int (*rs485_config)(struct uart_port *port, struct ktermios *termios, 184 struct serial_rs485 *rs485); 185 const struct serial_rs485 *rs485_supported; 186 - int (*register_gpio)(struct pci_dev *, struct uart_8250_port *); 187 - void (*unregister_gpio)(struct uart_8250_port *); 188 }; 189 190 /** 191 * struct exar8250_board - board information 192 * @num_ports: number of serial ports 193 * @reg_shift: describes UART register mapping in PCI memory 194 - * @setup: quirk run at ->probe() stage 195 * @exit: quirk run at ->remove() stage 196 */ 197 struct exar8250_board { 198 unsigned int num_ports; 199 unsigned int reg_shift; 200 - int (*setup)(struct exar8250 *, struct pci_dev *, 201 - struct uart_8250_port *, int); 202 void (*exit)(struct pci_dev *pcidev); 203 }; 204 205 struct exar8250 { 206 unsigned int nr; 207 struct exar8250_board *board; 208 void __iomem *virt; 209 int line[]; 210 }; 211 212 static void exar_pm(struct uart_port *port, unsigned int state, unsigned int old) ··· 532 { 533 bool tx_complete = false; 534 struct uart_8250_port *up = up_to_u8250p(port); 535 - struct circ_buf *xmit = &port->state->xmit; 536 int i = 0; 537 u16 lsr; 538 ··· 543 else 544 tx_complete = false; 545 usleep_range(1000, 1100); 546 - } while (!uart_circ_empty(xmit) && !tx_complete && i++ < 1000); 547 548 serial8250_do_shutdown(port); 549 } ··· 608 writeb(32, p + UART_EXAR_TXTRG); 609 writeb(32, p + UART_EXAR_RXTRG); 610 611 /* 612 * Setup Multipurpose Input/Output pins. 
613 */ 614 - if (idx == 0) { 615 - switch (pcidev->device) { 616 - case PCI_DEVICE_ID_COMMTECH_4222PCI335: 617 - case PCI_DEVICE_ID_COMMTECH_4224PCI335: 618 - writeb(0x78, p + UART_EXAR_MPIOLVL_7_0); 619 - writeb(0x00, p + UART_EXAR_MPIOINV_7_0); 620 - writeb(0x00, p + UART_EXAR_MPIOSEL_7_0); 621 - break; 622 - case PCI_DEVICE_ID_COMMTECH_2324PCI335: 623 - case PCI_DEVICE_ID_COMMTECH_2328PCI335: 624 - writeb(0x00, p + UART_EXAR_MPIOLVL_7_0); 625 - writeb(0xc0, p + UART_EXAR_MPIOINV_7_0); 626 - writeb(0xc0, p + UART_EXAR_MPIOSEL_7_0); 627 - break; 628 - } 629 - writeb(0x00, p + UART_EXAR_MPIOINT_7_0); 630 - writeb(0x00, p + UART_EXAR_MPIO3T_7_0); 631 - writeb(0x00, p + UART_EXAR_MPIOOD_7_0); 632 } 633 634 return 0; 635 } 636 637 - static int 638 - pci_connect_tech_setup(struct exar8250 *priv, struct pci_dev *pcidev, 639 - struct uart_8250_port *port, int idx) 640 { 641 - unsigned int offset = idx * 0x200; 642 - unsigned int baud = 1843200; 643 644 - port->port.uartclk = baud * 16; 645 - return default_setup(priv, pcidev, idx, offset, port); 646 } 647 648 static int ··· 1168 * devices will export them as GPIOs, so we pre-configure them safely 1169 * as inputs. 1170 */ 1171 - 1172 u8 dir = 0x00; 1173 1174 if ((pcidev->vendor == PCI_VENDOR_ID_EXAR) && 1175 - (pcidev->subsystem_vendor != PCI_VENDOR_ID_SEALEVEL)) { 1176 // Configure GPIO as inputs for Commtech adapters 1177 dir = 0xff; 1178 } else { ··· 1248 port->port.private_data = NULL; 1249 } 1250 1251 - static int generic_rs485_config(struct uart_port *port, struct ktermios *termios, 1252 - struct serial_rs485 *rs485) 1253 - { 1254 - bool is_rs485 = !!(rs485->flags & SER_RS485_ENABLED); 1255 - u8 __iomem *p = port->membase; 1256 - u8 value; 1257 - 1258 - value = readb(p + UART_EXAR_FCTR); 1259 - if (is_rs485) 1260 - value |= UART_FCTR_EXAR_485; 1261 - else 1262 - value &= ~UART_FCTR_EXAR_485; 1263 - 1264 - writeb(value, p + UART_EXAR_FCTR); 1265 - 1266 - if (is_rs485) 1267 - writeb(UART_EXAR_RS485_DLY(4), p + UART_MSR); 1268 - 1269 - return 0; 1270 - } 1271 - 1272 static int sealevel_rs485_config(struct uart_port *port, struct ktermios *termios, 1273 struct serial_rs485 *rs485) 1274 { ··· 1261 if (ret) 1262 return ret; 1263 1264 - if (rs485->flags & SER_RS485_ENABLED) { 1265 - old_lcr = readb(p + UART_LCR); 1266 1267 - /* Set EFR[4]=1 to enable enhanced feature registers */ 1268 - efr = readb(p + UART_XR_EFR); 1269 - efr |= UART_EFR_ECB; 1270 - writeb(efr, p + UART_XR_EFR); 1271 1272 - /* Set MCR to use DTR as Auto-RS485 Enable signal */ 1273 - writeb(UART_MCR_OUT1, p + UART_MCR); 1274 1275 - /* Set LCR[7]=1 to enable access to DLD register */ 1276 - writeb(old_lcr | UART_LCR_DLAB, p + UART_LCR); 1277 1278 - /* Set DLD[7]=1 for inverted RS485 Enable logic */ 1279 - dld = readb(p + UART_EXAR_DLD); 1280 - dld |= UART_EXAR_DLD_485_POLARITY; 1281 - writeb(dld, p + UART_EXAR_DLD); 1282 1283 - writeb(old_lcr, p + UART_LCR); 1284 - } 1285 1286 return 0; 1287 } 1288 - 1289 - static const struct serial_rs485 generic_rs485_supported = { 1290 - .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND, 1291 - }; 1292 1293 static const struct exar8250_platform exar8250_default_platform = { 1294 .register_gpio = xr17v35x_register_gpio, ··· 1471 return IRQ_HANDLED; 1472 } 1473 1474 static int 1475 exar_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *ent) 1476 { ··· 1519 1520 maxnr = pci_resource_len(pcidev, bar) >> (board->reg_shift + 3); 1521 1522 - if (pcidev->vendor == PCI_VENDOR_ID_ACCESSIO) 1523 - nr_ports = BIT(((pcidev->device & 0x38) >> 3) 
- 1); 1524 - else if (board->num_ports) 1525 - nr_ports = board->num_ports; 1526 - else 1527 - nr_ports = pcidev->device & 0x0f; 1528 1529 priv = devm_kzalloc(&pcidev->dev, struct_size(priv, line, nr_ports), GFP_KERNEL); 1530 if (!priv) ··· 1554 for (i = 0; i < nr_ports && i < maxnr; i++) { 1555 rc = board->setup(priv, pcidev, &uart, i); 1556 if (rc) { 1557 - dev_err(&pcidev->dev, "Failed to setup port %u\n", i); 1558 break; 1559 } 1560 ··· 1563 1564 priv->line[i] = serial8250_register_8250_port(&uart); 1565 if (priv->line[i] < 0) { 1566 - dev_err(&pcidev->dev, 1567 - "Couldn't register serial port %lx, irq %d, type %d, error %d\n", 1568 - uart.port.iobase, uart.port.irq, 1569 - uart.port.iotype, priv->line[i]); 1570 break; 1571 } 1572 } ··· 1630 .setup = pci_fastcom335_setup, 1631 }; 1632 1633 - static const struct exar8250_board pbn_connect = { 1634 - .setup = pci_connect_tech_setup, 1635 }; 1636 1637 static const struct exar8250_board pbn_exar_ibm_saturn = { ··· 1690 .exit = pci_xr17v35x_exit, 1691 }; 1692 1693 - #define CONNECT_DEVICE(devid, sdevid, bd) { \ 1694 - PCI_DEVICE_SUB( \ 1695 - PCI_VENDOR_ID_EXAR, \ 1696 - PCI_DEVICE_ID_EXAR_##devid, \ 1697 - PCI_SUBVENDOR_ID_CONNECT_TECH, \ 1698 - PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_##sdevid), 0, 0, \ 1699 - (kernel_ulong_t)&bd \ 1700 } 1701 1702 #define EXAR_DEVICE(vend, devid, bd) { PCI_DEVICE_DATA(vend, devid, &bd) } ··· 1705 PCI_DEVICE_SUB( \ 1706 PCI_VENDOR_ID_EXAR, \ 1707 PCI_DEVICE_ID_EXAR_##devid, \ 1708 - PCI_VENDOR_ID_IBM, \ 1709 PCI_SUBDEVICE_ID_IBM_##sdevid), 0, 0, \ 1710 (kernel_ulong_t)&bd \ 1711 } ··· 1728 EXAR_DEVICE(ACCESSIO, COM_4SM, pbn_exar_XR17C15x), 1729 EXAR_DEVICE(ACCESSIO, COM_8SM, pbn_exar_XR17C15x), 1730 1731 - CONNECT_DEVICE(XR17C152, UART_2_232, pbn_connect), 1732 - CONNECT_DEVICE(XR17C154, UART_4_232, pbn_connect), 1733 - CONNECT_DEVICE(XR17C158, UART_8_232, pbn_connect), 1734 - CONNECT_DEVICE(XR17C152, UART_1_1, pbn_connect), 1735 - CONNECT_DEVICE(XR17C154, UART_2_2, pbn_connect), 1736 - CONNECT_DEVICE(XR17C158, UART_4_4, pbn_connect), 1737 - CONNECT_DEVICE(XR17C152, UART_2, pbn_connect), 1738 - CONNECT_DEVICE(XR17C154, UART_4, pbn_connect), 1739 - CONNECT_DEVICE(XR17C158, UART_8, pbn_connect), 1740 - CONNECT_DEVICE(XR17C152, UART_2_485, pbn_connect), 1741 - CONNECT_DEVICE(XR17C154, UART_4_485, pbn_connect), 1742 - CONNECT_DEVICE(XR17C158, UART_8_485, pbn_connect), 1743 1744 IBM_DEVICE(XR17C152, SATURN_SERIAL_ONE_PORT, pbn_exar_ibm_saturn), 1745
··· 6 * 7 * Copyright (C) 2017 Sudip Mukherjee, All Rights Reserved. 8 */ 9 + #include <linux/bitfield.h> 10 #include <linux/bits.h> 11 #include <linux/delay.h> 12 #include <linux/device.h> ··· 46 #define PCI_DEVICE_ID_COMMTECH_4228PCIE 0x0021 47 #define PCI_DEVICE_ID_COMMTECH_4222PCIE 0x0022 48 49 + #define PCI_VENDOR_ID_CONNECT_TECH 0x12c4 50 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_SP_OPTO 0x0340 51 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_SP_OPTO_A 0x0341 52 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_SP_OPTO_B 0x0342 53 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XPRS 0x0350 54 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_A 0x0351 55 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_B 0x0352 56 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS 0x0353 57 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_16_XPRS_A 0x0354 58 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_16_XPRS_B 0x0355 59 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XPRS_OPTO 0x0360 60 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_OPTO_A 0x0361 61 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_OPTO_B 0x0362 62 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP 0x0370 63 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_232 0x0371 64 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_485 0x0372 65 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_SP 0x0373 66 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_6_2_SP 0x0374 67 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_6_SP 0x0375 68 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_232_NS 0x0376 69 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_LEFT 0x0380 70 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_RIGHT 0x0381 71 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XP_OPTO 0x0382 72 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_XPRS_OPTO 0x0392 73 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP 0x03A0 74 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_232 0x03A1 75 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_485 0x03A2 76 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_232_NS 0x03A3 77 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XEG001 0x0602 78 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_BASE 0x1000 79 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_2 0x1002 80 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_4 0x1004 81 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_8 0x1008 82 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_12 0x100C 83 + #define PCI_SUBDEVICE_ID_CONNECT_TECH_PCIE_XR35X_16 0x1010 84 + #define PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_12_XIG00X 0x110c 85 + #define PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_12_XIG01X 0x110d 86 + #define PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_16 0x1110 87 + 88 #define PCI_DEVICE_ID_EXAR_XR17V4358 0x4358 89 #define PCI_DEVICE_ID_EXAR_XR17V8358 0x8358 90 + #define PCI_DEVICE_ID_EXAR_XR17V252 0x0252 91 + #define PCI_DEVICE_ID_EXAR_XR17V254 0x0254 92 + #define PCI_DEVICE_ID_EXAR_XR17V258 0x0258 93 94 #define PCI_SUBDEVICE_ID_USR_2980 0x0128 95 #define PCI_SUBDEVICE_ID_USR_2981 0x0129 ··· 86 #define UART_EXAR_DLD 0x02 /* Divisor Fractional */ 87 #define UART_EXAR_DLD_485_POLARITY 0x80 /* RS-485 Enable Signal Polarity */ 88 89 + /* EEPROM registers */ 90 + #define UART_EXAR_REGB 0x8e 91 + #define UART_EXAR_REGB_EECK BIT(4) 92 + #define UART_EXAR_REGB_EECS BIT(5) 93 + #define UART_EXAR_REGB_EEDI BIT(6) 94 + #define 
UART_EXAR_REGB_EEDO BIT(7) 95 + #define UART_EXAR_REGB_EE_ADDR_SIZE 6 96 + #define UART_EXAR_REGB_EE_DATA_SIZE 16 97 + 98 + #define UART_EXAR_XR17C15X_PORT_OFFSET 0x200 99 + #define UART_EXAR_XR17V25X_PORT_OFFSET 0x200 100 + #define UART_EXAR_XR17V35X_PORT_OFFSET 0x400 101 + 102 /* 103 * IOT2040 MPIO wiring semantics: 104 * 105 * MPIO Port Function 106 * ---- ---- -------- 107 + * 0 2 Mode bit 0 108 * 1 2 Mode bit 1 109 * 2 2 Terminate bus 110 * 3 - <reserved> ··· 121 #define IOT2040_UARTS_ENABLE 0x03 122 #define IOT2040_UARTS_GPIO_HI_MODE 0xF8 /* enable & LED as outputs */ 123 124 + /* CTI EEPROM offsets */ 125 + #define CTI_EE_OFF_XR17C15X_OSC_FREQ 0x04 /* 2 words */ 126 + #define CTI_EE_OFF_XR17V25X_OSC_FREQ 0x08 /* 2 words */ 127 + #define CTI_EE_OFF_XR17C15X_PART_NUM 0x0A /* 4 words */ 128 + #define CTI_EE_OFF_XR17V25X_PART_NUM 0x0E /* 4 words */ 129 + #define CTI_EE_OFF_XR17C15X_SERIAL_NUM 0x0E /* 1 word */ 130 + #define CTI_EE_OFF_XR17V25X_SERIAL_NUM 0x12 /* 1 word */ 131 + #define CTI_EE_OFF_XR17V35X_SERIAL_NUM 0x11 /* 2 word */ 132 + #define CTI_EE_OFF_XR17V35X_BRD_FLAGS 0x13 /* 1 word */ 133 + #define CTI_EE_OFF_XR17V35X_PORT_FLAGS 0x14 /* 1 word */ 134 + 135 + #define CTI_EE_MASK_PORT_FLAGS_TYPE GENMASK(7, 0) 136 + #define CTI_EE_MASK_OSC_FREQ_LOWER GENMASK(15, 0) 137 + #define CTI_EE_MASK_OSC_FREQ_UPPER GENMASK(31, 16) 138 + 139 + #define CTI_FPGA_RS485_IO_REG 0x2008 140 + #define CTI_FPGA_CFG_INT_EN_REG 0x48 141 + #define CTI_FPGA_CFG_INT_EN_EXT_BIT BIT(15) /* External int enable bit */ 142 + 143 + #define CTI_DEFAULT_PCI_OSC_FREQ 29491200 144 + #define CTI_DEFAULT_PCIE_OSC_FREQ 125000000 145 + #define CTI_DEFAULT_FPGA_OSC_FREQ 33333333 146 + 147 + /* 148 + * CTI Serial port line types. These match the values stored in the first 149 + * nibble of the CTI EEPROM port_flags word. 150 + */ 151 + enum cti_port_type { 152 + CTI_PORT_TYPE_NONE = 0, 153 + CTI_PORT_TYPE_RS232, // RS232 ONLY 154 + CTI_PORT_TYPE_RS422_485, // RS422/RS485 ONLY 155 + CTI_PORT_TYPE_RS232_422_485_HW, // RS232/422/485 HW ONLY Switchable 156 + CTI_PORT_TYPE_RS232_422_485_SW, // RS232/422/485 SW ONLY Switchable 157 + CTI_PORT_TYPE_RS232_422_485_4B, // RS232/422/485 HW/SW (4bit ex. BCG004) 158 + CTI_PORT_TYPE_RS232_422_485_2B, // RS232/422/485 HW/SW (2bit ex. 
BBG008) 159 + CTI_PORT_TYPE_MAX, 160 + }; 161 + 162 + #define CTI_PORT_TYPE_VALID(_port_type) \ 163 + (((_port_type) > CTI_PORT_TYPE_NONE) && \ 164 + ((_port_type) < CTI_PORT_TYPE_MAX)) 165 + 166 + #define CTI_PORT_TYPE_RS485(_port_type) \ 167 + (((_port_type) > CTI_PORT_TYPE_RS232) && \ 168 + ((_port_type) < CTI_PORT_TYPE_MAX)) 169 + 170 struct exar8250; 171 172 struct exar8250_platform { 173 int (*rs485_config)(struct uart_port *port, struct ktermios *termios, 174 struct serial_rs485 *rs485); 175 const struct serial_rs485 *rs485_supported; 176 + int (*register_gpio)(struct pci_dev *pcidev, struct uart_8250_port *port); 177 + void (*unregister_gpio)(struct uart_8250_port *port); 178 }; 179 180 /** 181 * struct exar8250_board - board information 182 * @num_ports: number of serial ports 183 * @reg_shift: describes UART register mapping in PCI memory 184 + * @setup: quirk run at ->probe() stage for each port 185 * @exit: quirk run at ->remove() stage 186 */ 187 struct exar8250_board { 188 unsigned int num_ports; 189 unsigned int reg_shift; 190 + int (*setup)(struct exar8250 *priv, struct pci_dev *pcidev, 191 + struct uart_8250_port *port, int idx); 192 void (*exit)(struct pci_dev *pcidev); 193 }; 194 195 struct exar8250 { 196 unsigned int nr; 197 + unsigned int osc_freq; 198 struct exar8250_board *board; 199 void __iomem *virt; 200 int line[]; 201 + }; 202 + 203 + static inline void exar_write_reg(struct exar8250 *priv, 204 + unsigned int reg, u8 value) 205 + { 206 + writeb(value, priv->virt + reg); 207 + } 208 + 209 + static inline u8 exar_read_reg(struct exar8250 *priv, unsigned int reg) 210 + { 211 + return readb(priv->virt + reg); 212 + } 213 + 214 + static inline void exar_ee_select(struct exar8250 *priv) 215 + { 216 + // Set chip select pin high to enable EEPROM reads/writes 217 + exar_write_reg(priv, UART_EXAR_REGB, UART_EXAR_REGB_EECS); 218 + // Min ~500ns delay needed between CS assert and EEPROM access 219 + udelay(1); 220 + } 221 + 222 + static inline void exar_ee_deselect(struct exar8250 *priv) 223 + { 224 + exar_write_reg(priv, UART_EXAR_REGB, 0x00); 225 + } 226 + 227 + static inline void exar_ee_write_bit(struct exar8250 *priv, u8 bit) 228 + { 229 + u8 value = UART_EXAR_REGB_EECS; 230 + 231 + if (bit) 232 + value |= UART_EXAR_REGB_EEDI; 233 + 234 + // Clock out the bit on the EEPROM interface 235 + exar_write_reg(priv, UART_EXAR_REGB, value); 236 + // 2us delay = ~500khz clock speed 237 + udelay(2); 238 + 239 + value |= UART_EXAR_REGB_EECK; 240 + 241 + exar_write_reg(priv, UART_EXAR_REGB, value); 242 + udelay(2); 243 + } 244 + 245 + static inline u8 exar_ee_read_bit(struct exar8250 *priv) 246 + { 247 + u8 regb; 248 + u8 value = UART_EXAR_REGB_EECS; 249 + 250 + // Clock in the bit on the EEPROM interface 251 + exar_write_reg(priv, UART_EXAR_REGB, value); 252 + // 2us delay = ~500khz clock speed 253 + udelay(2); 254 + 255 + value |= UART_EXAR_REGB_EECK; 256 + 257 + exar_write_reg(priv, UART_EXAR_REGB, value); 258 + udelay(2); 259 + 260 + regb = exar_read_reg(priv, UART_EXAR_REGB); 261 + 262 + return (regb & UART_EXAR_REGB_EEDO ? 1 : 0); 263 + } 264 + 265 + /** 266 + * exar_ee_read() - Read a word from the EEPROM 267 + * @priv: Device's private structure 268 + * @ee_addr: Offset of EEPROM to read word from 269 + * 270 + * Read a single 16bit word from an Exar UART's EEPROM. 271 + * The type of the EEPROM is AT93C46D. 
272 + * 273 + * Return: EEPROM word 274 + */ 275 + static u16 exar_ee_read(struct exar8250 *priv, u8 ee_addr) 276 + { 277 + int i; 278 + u16 data = 0; 279 + 280 + exar_ee_select(priv); 281 + 282 + // Send read command (opcode 110) 283 + exar_ee_write_bit(priv, 1); 284 + exar_ee_write_bit(priv, 1); 285 + exar_ee_write_bit(priv, 0); 286 + 287 + // Send address to read from 288 + for (i = UART_EXAR_REGB_EE_ADDR_SIZE - 1; i >= 0; i--) 289 + exar_ee_write_bit(priv, ee_addr & BIT(i)); 290 + 291 + // Read data 1 bit at a time starting with a dummy bit 292 + for (i = UART_EXAR_REGB_EE_DATA_SIZE; i >= 0; i--) { 293 + if (exar_ee_read_bit(priv)) 294 + data |= BIT(i); 295 + } 296 + 297 + exar_ee_deselect(priv); 298 + 299 + return data; 300 + } 301 + 302 + /** 303 + * exar_mpio_config_output() - Configure an Exar MPIO as an output 304 + * @priv: Device's private structure 305 + * @mpio_num: MPIO number/offset to configure 306 + * 307 + * Configure a single MPIO as an output and disable tristate. It is reccomended 308 + * to set the level with exar_mpio_set_high()/exar_mpio_set_low() prior to 309 + * calling this function to ensure default MPIO pin state. 310 + * 311 + * Return: 0 on success, negative error code on failure 312 + */ 313 + static int exar_mpio_config_output(struct exar8250 *priv, 314 + unsigned int mpio_num) 315 + { 316 + unsigned int mpio_offset; 317 + u8 sel_reg; // MPIO Select register (input/output) 318 + u8 tri_reg; // MPIO Tristate register 319 + u8 value; 320 + 321 + if (mpio_num < 8) { 322 + sel_reg = UART_EXAR_MPIOSEL_7_0; 323 + tri_reg = UART_EXAR_MPIO3T_7_0; 324 + mpio_offset = mpio_num; 325 + } else if (mpio_num >= 8 && mpio_num < 16) { 326 + sel_reg = UART_EXAR_MPIOSEL_15_8; 327 + tri_reg = UART_EXAR_MPIO3T_15_8; 328 + mpio_offset = mpio_num - 8; 329 + } else { 330 + return -EINVAL; 331 + } 332 + 333 + // Disable MPIO pin tri-state 334 + value = exar_read_reg(priv, tri_reg); 335 + value &= ~BIT(mpio_offset); 336 + exar_write_reg(priv, tri_reg, value); 337 + 338 + value = exar_read_reg(priv, sel_reg); 339 + value &= ~BIT(mpio_offset); 340 + exar_write_reg(priv, sel_reg, value); 341 + 342 + return 0; 343 + } 344 + 345 + /** 346 + * _exar_mpio_set() - Set an Exar MPIO output high or low 347 + * @priv: Device's private structure 348 + * @mpio_num: MPIO number/offset to set 349 + * @high: Set MPIO high if true, low if false 350 + * 351 + * Set a single MPIO high or low. exar_mpio_config_output() must also be called 352 + * to configure the pin as an output. 
353 + * 354 + * Return: 0 on success, negative error code on failure 355 + */ 356 + static int _exar_mpio_set(struct exar8250 *priv, 357 + unsigned int mpio_num, bool high) 358 + { 359 + unsigned int mpio_offset; 360 + u8 lvl_reg; 361 + u8 value; 362 + 363 + if (mpio_num < 8) { 364 + lvl_reg = UART_EXAR_MPIOLVL_7_0; 365 + mpio_offset = mpio_num; 366 + } else if (mpio_num >= 8 && mpio_num < 16) { 367 + lvl_reg = UART_EXAR_MPIOLVL_15_8; 368 + mpio_offset = mpio_num - 8; 369 + } else { 370 + return -EINVAL; 371 + } 372 + 373 + value = exar_read_reg(priv, lvl_reg); 374 + if (high) 375 + value |= BIT(mpio_offset); 376 + else 377 + value &= ~BIT(mpio_offset); 378 + exar_write_reg(priv, lvl_reg, value); 379 + 380 + return 0; 381 + } 382 + 383 + static int exar_mpio_set_low(struct exar8250 *priv, unsigned int mpio_num) 384 + { 385 + return _exar_mpio_set(priv, mpio_num, false); 386 + } 387 + 388 + static int exar_mpio_set_high(struct exar8250 *priv, unsigned int mpio_num) 389 + { 390 + return _exar_mpio_set(priv, mpio_num, true); 391 + } 392 + 393 + static int generic_rs485_config(struct uart_port *port, struct ktermios *termios, 394 + struct serial_rs485 *rs485) 395 + { 396 + bool is_rs485 = !!(rs485->flags & SER_RS485_ENABLED); 397 + u8 __iomem *p = port->membase; 398 + u8 value; 399 + 400 + value = readb(p + UART_EXAR_FCTR); 401 + if (is_rs485) 402 + value |= UART_FCTR_EXAR_485; 403 + else 404 + value &= ~UART_FCTR_EXAR_485; 405 + 406 + writeb(value, p + UART_EXAR_FCTR); 407 + 408 + if (is_rs485) 409 + writeb(UART_EXAR_RS485_DLY(4), p + UART_MSR); 410 + 411 + return 0; 412 + } 413 + 414 + static const struct serial_rs485 generic_rs485_supported = { 415 + .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND, 416 }; 417 418 static void exar_pm(struct uart_port *port, unsigned int state, unsigned int old) ··· 214 { 215 bool tx_complete = false; 216 struct uart_8250_port *up = up_to_u8250p(port); 217 + struct tty_port *tport = &port->state->port; 218 int i = 0; 219 u16 lsr; 220 ··· 225 else 226 tx_complete = false; 227 usleep_range(1000, 1100); 228 + } while (!kfifo_is_empty(&tport->xmit_fifo) && 229 + !tx_complete && i++ < 1000); 230 231 serial8250_do_shutdown(port); 232 } ··· 289 writeb(32, p + UART_EXAR_TXTRG); 290 writeb(32, p + UART_EXAR_RXTRG); 291 292 + /* Skip the initial (per device) setup */ 293 + if (idx) 294 + return 0; 295 + 296 /* 297 * Setup Multipurpose Input/Output pins. 298 */ 299 + switch (pcidev->device) { 300 + case PCI_DEVICE_ID_COMMTECH_4222PCI335: 301 + case PCI_DEVICE_ID_COMMTECH_4224PCI335: 302 + writeb(0x78, p + UART_EXAR_MPIOLVL_7_0); 303 + writeb(0x00, p + UART_EXAR_MPIOINV_7_0); 304 + writeb(0x00, p + UART_EXAR_MPIOSEL_7_0); 305 + break; 306 + case PCI_DEVICE_ID_COMMTECH_2324PCI335: 307 + case PCI_DEVICE_ID_COMMTECH_2328PCI335: 308 + writeb(0x00, p + UART_EXAR_MPIOLVL_7_0); 309 + writeb(0xc0, p + UART_EXAR_MPIOINV_7_0); 310 + writeb(0xc0, p + UART_EXAR_MPIOSEL_7_0); 311 + break; 312 + default: 313 + break; 314 } 315 + writeb(0x00, p + UART_EXAR_MPIOINT_7_0); 316 + writeb(0x00, p + UART_EXAR_MPIO3T_7_0); 317 + writeb(0x00, p + UART_EXAR_MPIOOD_7_0); 318 319 return 0; 320 } 321 322 + /** 323 + * cti_tristate_disable() - Disable RS485 transciever tristate 324 + * @priv: Device's private structure 325 + * @port_num: Port number to set tristate off 326 + * 327 + * Most RS485 capable cards have a power on tristate jumper/switch that ensures 328 + * the RS422/RS485 transceiver does not drive a multi-drop RS485 bus when it is 329 + * not the master. 
When this jumper is installed the user must set the RS485 330 + * mode to Full or Half duplex to disable tristate prior to using the port. 331 + * 332 + * Some Exar UARTs have an auto-tristate feature while others require setting 333 + * an MPIO to disable the tristate. 334 + * 335 + * Return: 0 on success, negative error code on failure 336 + */ 337 + static int cti_tristate_disable(struct exar8250 *priv, unsigned int port_num) 338 { 339 + int ret; 340 341 + ret = exar_mpio_set_high(priv, port_num); 342 + if (ret) 343 + return ret; 344 + 345 + return exar_mpio_config_output(priv, port_num); 346 + } 347 + 348 + /** 349 + * cti_plx_int_enable() - Enable UART interrupts to PLX bridge 350 + * @priv: Device's private structure 351 + * 352 + * Some older CTI cards require MPIO_0 to be set low to enable the 353 + * interrupts from the UART to the PLX PCI->PCIe bridge. 354 + * 355 + * Return: 0 on success, negative error code on failure 356 + */ 357 + static int cti_plx_int_enable(struct exar8250 *priv) 358 + { 359 + int ret; 360 + 361 + ret = exar_mpio_set_low(priv, 0); 362 + if (ret) 363 + return ret; 364 + 365 + return exar_mpio_config_output(priv, 0); 366 + } 367 + 368 + /** 369 + * cti_read_osc_freq() - Read the UART oscillator frequency from EEPROM 370 + * @priv: Device's private structure 371 + * @eeprom_offset: Offset where the oscillator frequency is stored 372 + * 373 + * CTI XR17x15X and XR17V25X cards have the serial boards oscillator frequency 374 + * stored in the EEPROM. FPGA and XR17V35X based cards use the PCI/PCIe clock. 375 + * 376 + * Return: frequency on success, negative error code on failure 377 + */ 378 + static int cti_read_osc_freq(struct exar8250 *priv, u8 eeprom_offset) 379 + { 380 + u16 lower_word; 381 + u16 upper_word; 382 + 383 + lower_word = exar_ee_read(priv, eeprom_offset); 384 + // Check if EEPROM word was blank 385 + if (lower_word == 0xFFFF) 386 + return -EIO; 387 + 388 + upper_word = exar_ee_read(priv, (eeprom_offset + 1)); 389 + if (upper_word == 0xFFFF) 390 + return -EIO; 391 + 392 + return FIELD_PREP(CTI_EE_MASK_OSC_FREQ_LOWER, lower_word) | 393 + FIELD_PREP(CTI_EE_MASK_OSC_FREQ_UPPER, upper_word); 394 + } 395 + 396 + /** 397 + * cti_get_port_type_xr17c15x_xr17v25x() - Get port type of xr17c15x/xr17v25x 398 + * @priv: Device's private structure 399 + * @pcidev: Pointer to the PCI device for this port 400 + * @port_num: Port to get type of 401 + * 402 + * CTI xr17c15x and xr17v25x based cards port types are based on PCI IDs. 403 + * 404 + * Return: port type on success, CTI_PORT_TYPE_NONE on failure 405 + */ 406 + static enum cti_port_type cti_get_port_type_xr17c15x_xr17v25x(struct exar8250 *priv, 407 + struct pci_dev *pcidev, 408 + unsigned int port_num) 409 + { 410 + switch (pcidev->subsystem_device) { 411 + // RS232 only cards 412 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_232: 413 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_232: 414 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_232: 415 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_232: 416 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_232_NS: 417 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_232: 418 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_232_NS: 419 + return CTI_PORT_TYPE_RS232; 420 + // 1x RS232, 1x RS422/RS485 421 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_1_1: 422 + return (port_num == 0) ? 
CTI_PORT_TYPE_RS232 : CTI_PORT_TYPE_RS422_485; 423 + // 2x RS232, 2x RS422/RS485 424 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_2: 425 + return (port_num < 2) ? CTI_PORT_TYPE_RS232 : CTI_PORT_TYPE_RS422_485; 426 + // 4x RS232, 4x RS422/RS485 427 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4: 428 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_SP: 429 + return (port_num < 4) ? CTI_PORT_TYPE_RS232 : CTI_PORT_TYPE_RS422_485; 430 + // RS232/RS422/RS485 HW (jumper) selectable 431 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2: 432 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4: 433 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8: 434 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_SP_OPTO: 435 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_SP_OPTO_A: 436 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_SP_OPTO_B: 437 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XPRS: 438 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_A: 439 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_B: 440 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS: 441 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_16_XPRS_A: 442 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_16_XPRS_B: 443 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XPRS_OPTO: 444 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_OPTO_A: 445 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_OPTO_B: 446 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP: 447 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_LEFT: 448 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_RIGHT: 449 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XP_OPTO: 450 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_XPRS_OPTO: 451 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP: 452 + return CTI_PORT_TYPE_RS232_422_485_HW; 453 + // RS422/RS485 HW (jumper) selectable 454 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_485: 455 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_485: 456 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_485: 457 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_485: 458 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_485: 459 + return CTI_PORT_TYPE_RS422_485; 460 + // 6x RS232, 2x RS422/RS485 461 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_6_2_SP: 462 + return (port_num < 6) ? CTI_PORT_TYPE_RS232 : CTI_PORT_TYPE_RS422_485; 463 + // 2x RS232, 6x RS422/RS485 464 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_6_SP: 465 + return (port_num < 2) ? CTI_PORT_TYPE_RS232 : CTI_PORT_TYPE_RS422_485; 466 + default: 467 + dev_err(&pcidev->dev, "unknown/unsupported device\n"); 468 + return CTI_PORT_TYPE_NONE; 469 + } 470 + } 471 + 472 + /** 473 + * cti_get_port_type_fpga() - Get the port type of a CTI FPGA card 474 + * @priv: Device's private structure 475 + * @pcidev: Pointer to the PCI device for this port 476 + * @port_num: Port to get type of 477 + * 478 + * FPGA based cards port types are based on PCI IDs. 
479 + * 480 + * Return: port type on success, CTI_PORT_TYPE_NONE on failure 481 + */ 482 + static enum cti_port_type cti_get_port_type_fpga(struct exar8250 *priv, 483 + struct pci_dev *pcidev, 484 + unsigned int port_num) 485 + { 486 + switch (pcidev->device) { 487 + case PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_12_XIG00X: 488 + case PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_12_XIG01X: 489 + case PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_16: 490 + return CTI_PORT_TYPE_RS232_422_485_HW; 491 + default: 492 + dev_err(&pcidev->dev, "unknown/unsupported device\n"); 493 + return CTI_PORT_TYPE_NONE; 494 + } 495 + } 496 + 497 + /** 498 + * cti_get_port_type_xr17v35x() - Read port type from the EEPROM 499 + * @priv: Device's private structure 500 + * @pcidev: Pointer to the PCI device for this port 501 + * @port_num: port offset 502 + * 503 + * CTI XR17V35X based cards have the port types stored in the EEPROM. 504 + * This function reads the port type for a single port. 505 + * 506 + * Return: port type on success, CTI_PORT_TYPE_NONE on failure 507 + */ 508 + static enum cti_port_type cti_get_port_type_xr17v35x(struct exar8250 *priv, 509 + struct pci_dev *pcidev, 510 + unsigned int port_num) 511 + { 512 + enum cti_port_type port_type; 513 + u16 port_flags; 514 + u8 offset; 515 + 516 + offset = CTI_EE_OFF_XR17V35X_PORT_FLAGS + port_num; 517 + port_flags = exar_ee_read(priv, offset); 518 + 519 + port_type = FIELD_GET(CTI_EE_MASK_PORT_FLAGS_TYPE, port_flags); 520 + if (CTI_PORT_TYPE_VALID(port_type)) 521 + return port_type; 522 + 523 + /* 524 + * If the port type is missing the card assume it is a 525 + * RS232/RS422/RS485 card to be safe. 526 + * 527 + * There is one known board (BEG013) that only has 3 of 4 port types 528 + * written to the EEPROM so this acts as a work around. 
529 + */ 530 + dev_warn(&pcidev->dev, "failed to get port %d type from EEPROM\n", port_num); 531 + 532 + return CTI_PORT_TYPE_RS232_422_485_HW; 533 + } 534 + 535 + static int cti_rs485_config_mpio_tristate(struct uart_port *port, 536 + struct ktermios *termios, 537 + struct serial_rs485 *rs485) 538 + { 539 + struct exar8250 *priv = (struct exar8250 *)port->private_data; 540 + int ret; 541 + 542 + ret = generic_rs485_config(port, termios, rs485); 543 + if (ret) 544 + return ret; 545 + 546 + // Disable power-on RS485 tri-state via MPIO 547 + return cti_tristate_disable(priv, port->port_id); 548 + } 549 + 550 + static void cti_board_init_osc_freq(struct exar8250 *priv, struct pci_dev *pcidev, u8 eeprom_offset) 551 + { 552 + int osc_freq; 553 + 554 + osc_freq = cti_read_osc_freq(priv, eeprom_offset); 555 + if (osc_freq <= 0) { 556 + dev_warn(&pcidev->dev, "failed to read OSC freq from EEPROM, using default\n"); 557 + osc_freq = CTI_DEFAULT_PCI_OSC_FREQ; 558 + } 559 + 560 + priv->osc_freq = osc_freq; 561 + } 562 + 563 + static int cti_port_setup_common(struct exar8250 *priv, 564 + struct pci_dev *pcidev, 565 + int idx, unsigned int offset, 566 + struct uart_8250_port *port) 567 + { 568 + int ret; 569 + 570 + port->port.port_id = idx; 571 + port->port.uartclk = priv->osc_freq; 572 + 573 + ret = serial8250_pci_setup_port(pcidev, port, 0, offset, 0); 574 + if (ret) 575 + return ret; 576 + 577 + port->port.private_data = (void *)priv; 578 + port->port.pm = exar_pm; 579 + port->port.shutdown = exar_shutdown; 580 + 581 + return 0; 582 + } 583 + 584 + static int cti_board_init_fpga(struct exar8250 *priv, struct pci_dev *pcidev) 585 + { 586 + int ret; 587 + u16 cfg_val; 588 + 589 + // FPGA OSC is fixed to the 33MHz PCI clock 590 + priv->osc_freq = CTI_DEFAULT_FPGA_OSC_FREQ; 591 + 592 + // Enable external interrupts in special cfg space register 593 + ret = pci_read_config_word(pcidev, CTI_FPGA_CFG_INT_EN_REG, &cfg_val); 594 + if (ret) 595 + return pcibios_err_to_errno(ret); 596 + 597 + cfg_val |= CTI_FPGA_CFG_INT_EN_EXT_BIT; 598 + ret = pci_write_config_word(pcidev, CTI_FPGA_CFG_INT_EN_REG, cfg_val); 599 + if (ret) 600 + return pcibios_err_to_errno(ret); 601 + 602 + // RS485 gate needs to be enabled; otherwise RTS/CTS will not work 603 + exar_write_reg(priv, CTI_FPGA_RS485_IO_REG, 0x01); 604 + 605 + return 0; 606 + } 607 + 608 + static int cti_port_setup_fpga(struct exar8250 *priv, 609 + struct pci_dev *pcidev, 610 + struct uart_8250_port *port, 611 + int idx) 612 + { 613 + enum cti_port_type port_type; 614 + unsigned int offset; 615 + int ret; 616 + 617 + if (idx == 0) { 618 + ret = cti_board_init_fpga(priv, pcidev); 619 + if (ret) 620 + return ret; 621 + } 622 + 623 + port_type = cti_get_port_type_fpga(priv, pcidev, idx); 624 + 625 + // FPGA shares port offsets with XR17C15X 626 + offset = idx * UART_EXAR_XR17C15X_PORT_OFFSET; 627 + port->port.type = PORT_XR17D15X; 628 + 629 + port->port.get_divisor = xr17v35x_get_divisor; 630 + port->port.set_divisor = xr17v35x_set_divisor; 631 + port->port.startup = xr17v35x_startup; 632 + 633 + if (CTI_PORT_TYPE_RS485(port_type)) { 634 + port->port.rs485_config = generic_rs485_config; 635 + port->port.rs485_supported = generic_rs485_supported; 636 + } 637 + 638 + return cti_port_setup_common(priv, pcidev, idx, offset, port); 639 + } 640 + 641 + static void cti_board_init_xr17v35x(struct exar8250 *priv, struct pci_dev *pcidev) 642 + { 643 + // XR17V35X uses the PCIe clock rather than an oscillator 644 + priv->osc_freq = CTI_DEFAULT_PCIE_OSC_FREQ; 645 + } 646 + 647 
+ static int cti_port_setup_xr17v35x(struct exar8250 *priv, 648 + struct pci_dev *pcidev, 649 + struct uart_8250_port *port, 650 + int idx) 651 + { 652 + enum cti_port_type port_type; 653 + unsigned int offset; 654 + int ret; 655 + 656 + if (idx == 0) 657 + cti_board_init_xr17v35x(priv, pcidev); 658 + 659 + port_type = cti_get_port_type_xr17v35x(priv, pcidev, idx); 660 + 661 + offset = idx * UART_EXAR_XR17V35X_PORT_OFFSET; 662 + port->port.type = PORT_XR17V35X; 663 + 664 + port->port.get_divisor = xr17v35x_get_divisor; 665 + port->port.set_divisor = xr17v35x_set_divisor; 666 + port->port.startup = xr17v35x_startup; 667 + 668 + switch (port_type) { 669 + case CTI_PORT_TYPE_RS422_485: 670 + case CTI_PORT_TYPE_RS232_422_485_HW: 671 + port->port.rs485_config = cti_rs485_config_mpio_tristate; 672 + port->port.rs485_supported = generic_rs485_supported; 673 + break; 674 + case CTI_PORT_TYPE_RS232_422_485_SW: 675 + case CTI_PORT_TYPE_RS232_422_485_4B: 676 + case CTI_PORT_TYPE_RS232_422_485_2B: 677 + port->port.rs485_config = generic_rs485_config; 678 + port->port.rs485_supported = generic_rs485_supported; 679 + break; 680 + default: 681 + break; 682 + } 683 + 684 + ret = cti_port_setup_common(priv, pcidev, idx, offset, port); 685 + if (ret) 686 + return ret; 687 + 688 + exar_write_reg(priv, (offset + UART_EXAR_8XMODE), 0x00); 689 + exar_write_reg(priv, (offset + UART_EXAR_FCTR), UART_FCTR_EXAR_TRGD); 690 + exar_write_reg(priv, (offset + UART_EXAR_TXTRG), 128); 691 + exar_write_reg(priv, (offset + UART_EXAR_RXTRG), 128); 692 + 693 + return 0; 694 + } 695 + 696 + static void cti_board_init_xr17v25x(struct exar8250 *priv, struct pci_dev *pcidev) 697 + { 698 + cti_board_init_osc_freq(priv, pcidev, CTI_EE_OFF_XR17V25X_OSC_FREQ); 699 + 700 + /* enable interrupts on cards that need the "PLX fix" */ 701 + switch (pcidev->subsystem_device) { 702 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS: 703 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_16_XPRS_A: 704 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_16_XPRS_B: 705 + cti_plx_int_enable(priv); 706 + break; 707 + default: 708 + break; 709 + } 710 + } 711 + 712 + static int cti_port_setup_xr17v25x(struct exar8250 *priv, 713 + struct pci_dev *pcidev, 714 + struct uart_8250_port *port, 715 + int idx) 716 + { 717 + enum cti_port_type port_type; 718 + unsigned int offset; 719 + int ret; 720 + 721 + if (idx == 0) 722 + cti_board_init_xr17v25x(priv, pcidev); 723 + 724 + port_type = cti_get_port_type_xr17c15x_xr17v25x(priv, pcidev, idx); 725 + 726 + offset = idx * UART_EXAR_XR17V25X_PORT_OFFSET; 727 + port->port.type = PORT_XR17D15X; 728 + 729 + // XR17V25X supports fractional baudrates 730 + port->port.get_divisor = xr17v35x_get_divisor; 731 + port->port.set_divisor = xr17v35x_set_divisor; 732 + port->port.startup = xr17v35x_startup; 733 + 734 + if (CTI_PORT_TYPE_RS485(port_type)) { 735 + switch (pcidev->subsystem_device) { 736 + // These cards support power on 485 tri-state via MPIO 737 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP: 738 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_485: 739 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_SP: 740 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_6_2_SP: 741 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_6_SP: 742 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_LEFT: 743 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_RIGHT: 744 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XP_OPTO: 745 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_XPRS_OPTO: 746 + case 
PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP: 747 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_485: 748 + port->port.rs485_config = cti_rs485_config_mpio_tristate; 749 + break; 750 + // Otherwise auto or no power on 485 tri-state support 751 + default: 752 + port->port.rs485_config = generic_rs485_config; 753 + break; 754 + } 755 + 756 + port->port.rs485_supported = generic_rs485_supported; 757 + } 758 + 759 + ret = cti_port_setup_common(priv, pcidev, idx, offset, port); 760 + if (ret) 761 + return ret; 762 + 763 + exar_write_reg(priv, (offset + UART_EXAR_8XMODE), 0x00); 764 + exar_write_reg(priv, (offset + UART_EXAR_FCTR), UART_FCTR_EXAR_TRGD); 765 + exar_write_reg(priv, (offset + UART_EXAR_TXTRG), 32); 766 + exar_write_reg(priv, (offset + UART_EXAR_RXTRG), 32); 767 + 768 + return 0; 769 + } 770 + 771 + static void cti_board_init_xr17c15x(struct exar8250 *priv, struct pci_dev *pcidev) 772 + { 773 + cti_board_init_osc_freq(priv, pcidev, CTI_EE_OFF_XR17C15X_OSC_FREQ); 774 + 775 + /* enable interrupts on cards that need the "PLX fix" */ 776 + switch (pcidev->subsystem_device) { 777 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XPRS: 778 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_A: 779 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_B: 780 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XPRS_OPTO: 781 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_OPTO_A: 782 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XPRS_OPTO_B: 783 + cti_plx_int_enable(priv); 784 + break; 785 + default: 786 + break; 787 + } 788 + } 789 + 790 + static int cti_port_setup_xr17c15x(struct exar8250 *priv, 791 + struct pci_dev *pcidev, 792 + struct uart_8250_port *port, 793 + int idx) 794 + { 795 + enum cti_port_type port_type; 796 + unsigned int offset; 797 + 798 + if (idx == 0) 799 + cti_board_init_xr17c15x(priv, pcidev); 800 + 801 + port_type = cti_get_port_type_xr17c15x_xr17v25x(priv, pcidev, idx); 802 + 803 + offset = idx * UART_EXAR_XR17C15X_PORT_OFFSET; 804 + port->port.type = PORT_XR17D15X; 805 + 806 + if (CTI_PORT_TYPE_RS485(port_type)) { 807 + switch (pcidev->subsystem_device) { 808 + // These cards support power on 485 tri-state via MPIO 809 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP: 810 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_SP_485: 811 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_SP: 812 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_6_2_SP: 813 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_6_SP: 814 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_LEFT: 815 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_2_XP_OPTO_RIGHT: 816 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_XP_OPTO: 817 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_4_4_XPRS_OPTO: 818 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP: 819 + case PCI_SUBDEVICE_ID_CONNECT_TECH_PCI_UART_8_XPRS_LP_485: 820 + port->port.rs485_config = cti_rs485_config_mpio_tristate; 821 + break; 822 + // Otherwise auto or no power on 485 tri-state support 823 + default: 824 + port->port.rs485_config = generic_rs485_config; 825 + break; 826 + } 827 + 828 + port->port.rs485_supported = generic_rs485_supported; 829 + } 830 + 831 + return cti_port_setup_common(priv, pcidev, idx, offset, port); 832 } 833 834 static int ··· 344 * devices will export them as GPIOs, so we pre-configure them safely 345 * as inputs. 
346 */ 347 u8 dir = 0x00; 348 349 if ((pcidev->vendor == PCI_VENDOR_ID_EXAR) && 350 + (pcidev->subsystem_vendor != PCI_VENDOR_ID_SEALEVEL)) { 351 // Configure GPIO as inputs for Commtech adapters 352 dir = 0xff; 353 } else { ··· 425 port->port.private_data = NULL; 426 } 427 428 static int sealevel_rs485_config(struct uart_port *port, struct ktermios *termios, 429 struct serial_rs485 *rs485) 430 { ··· 459 if (ret) 460 return ret; 461 462 + if (!(rs485->flags & SER_RS485_ENABLED)) 463 + return 0; 464 465 + old_lcr = readb(p + UART_LCR); 466 467 + /* Set EFR[4]=1 to enable enhanced feature registers */ 468 + efr = readb(p + UART_XR_EFR); 469 + efr |= UART_EFR_ECB; 470 + writeb(efr, p + UART_XR_EFR); 471 472 + /* Set MCR to use DTR as Auto-RS485 Enable signal */ 473 + writeb(UART_MCR_OUT1, p + UART_MCR); 474 475 + /* Set LCR[7]=1 to enable access to DLD register */ 476 + writeb(old_lcr | UART_LCR_DLAB, p + UART_LCR); 477 478 + /* Set DLD[7]=1 for inverted RS485 Enable logic */ 479 + dld = readb(p + UART_EXAR_DLD); 480 + dld |= UART_EXAR_DLD_485_POLARITY; 481 + writeb(dld, p + UART_EXAR_DLD); 482 + 483 + writeb(old_lcr, p + UART_LCR); 484 485 return 0; 486 } 487 488 static const struct exar8250_platform exar8250_default_platform = { 489 .register_gpio = xr17v35x_register_gpio, ··· 672 return IRQ_HANDLED; 673 } 674 675 + static unsigned int exar_get_nr_ports(struct exar8250_board *board, struct pci_dev *pcidev) 676 + { 677 + if (pcidev->vendor == PCI_VENDOR_ID_ACCESSIO) 678 + return BIT(((pcidev->device & 0x38) >> 3) - 1); 679 + 680 + // Check if board struct overrides number of ports 681 + if (board->num_ports > 0) 682 + return board->num_ports; 683 + 684 + // Exar encodes # ports in last nibble of PCI Device ID ex. 0358 685 + if (pcidev->vendor == PCI_VENDOR_ID_EXAR) 686 + return pcidev->device & 0x0f; 687 + 688 + // Handle CTI FPGA cards 689 + if (pcidev->vendor == PCI_VENDOR_ID_CONNECT_TECH) { 690 + switch (pcidev->device) { 691 + case PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_12_XIG00X: 692 + case PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_12_XIG01X: 693 + return 12; 694 + case PCI_DEVICE_ID_CONNECT_TECH_PCI_XR79X_16: 695 + return 16; 696 + default: 697 + return 0; 698 + } 699 + } 700 + 701 + return 0; 702 + } 703 + 704 static int 705 exar_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *ent) 706 { ··· 691 692 maxnr = pci_resource_len(pcidev, bar) >> (board->reg_shift + 3); 693 694 + nr_ports = exar_get_nr_ports(board, pcidev); 695 + if (nr_ports == 0) 696 + return dev_err_probe(&pcidev->dev, -ENODEV, "failed to get number of ports\n"); 697 698 priv = devm_kzalloc(&pcidev->dev, struct_size(priv, line, nr_ports), GFP_KERNEL); 699 if (!priv) ··· 729 for (i = 0; i < nr_ports && i < maxnr; i++) { 730 rc = board->setup(priv, pcidev, &uart, i); 731 if (rc) { 732 + dev_err_probe(&pcidev->dev, rc, "Failed to setup port %u\n", i); 733 break; 734 } 735 ··· 738 739 priv->line[i] = serial8250_register_8250_port(&uart); 740 if (priv->line[i] < 0) { 741 + dev_err_probe(&pcidev->dev, priv->line[i], 742 + "Couldn't register serial port %lx, type %d, irq %d\n", 743 + uart.port.iobase, uart.port.iotype, uart.port.irq); 744 break; 745 } 746 } ··· 806 .setup = pci_fastcom335_setup, 807 }; 808 809 + static const struct exar8250_board pbn_cti_xr17c15x = { 810 + .setup = cti_port_setup_xr17c15x, 811 + }; 812 + 813 + static const struct exar8250_board pbn_cti_xr17v25x = { 814 + .setup = cti_port_setup_xr17v25x, 815 + }; 816 + 817 + static const struct exar8250_board pbn_cti_xr17v35x = { 818 + .setup = 
cti_port_setup_xr17v35x, 819 + }; 820 + 821 + static const struct exar8250_board pbn_cti_fpga = { 822 + .setup = cti_port_setup_fpga, 823 }; 824 825 static const struct exar8250_board pbn_exar_ibm_saturn = { ··· 854 .exit = pci_xr17v35x_exit, 855 }; 856 857 + #define CTI_EXAR_DEVICE(devid, bd) { \ 858 + PCI_DEVICE_SUB( \ 859 + PCI_VENDOR_ID_EXAR, \ 860 + PCI_DEVICE_ID_EXAR_##devid, \ 861 + PCI_SUBVENDOR_ID_CONNECT_TECH, \ 862 + PCI_ANY_ID), 0, 0, \ 863 + (kernel_ulong_t)&bd \ 864 } 865 866 #define EXAR_DEVICE(vend, devid, bd) { PCI_DEVICE_DATA(vend, devid, &bd) } ··· 869 PCI_DEVICE_SUB( \ 870 PCI_VENDOR_ID_EXAR, \ 871 PCI_DEVICE_ID_EXAR_##devid, \ 872 + PCI_SUBVENDOR_ID_IBM, \ 873 PCI_SUBDEVICE_ID_IBM_##sdevid), 0, 0, \ 874 (kernel_ulong_t)&bd \ 875 } ··· 892 EXAR_DEVICE(ACCESSIO, COM_4SM, pbn_exar_XR17C15x), 893 EXAR_DEVICE(ACCESSIO, COM_8SM, pbn_exar_XR17C15x), 894 895 + /* Connect Tech cards with Exar vendor/device PCI IDs */ 896 + CTI_EXAR_DEVICE(XR17C152, pbn_cti_xr17c15x), 897 + CTI_EXAR_DEVICE(XR17C154, pbn_cti_xr17c15x), 898 + CTI_EXAR_DEVICE(XR17C158, pbn_cti_xr17c15x), 899 + 900 + CTI_EXAR_DEVICE(XR17V252, pbn_cti_xr17v25x), 901 + CTI_EXAR_DEVICE(XR17V254, pbn_cti_xr17v25x), 902 + CTI_EXAR_DEVICE(XR17V258, pbn_cti_xr17v25x), 903 + 904 + CTI_EXAR_DEVICE(XR17V352, pbn_cti_xr17v35x), 905 + CTI_EXAR_DEVICE(XR17V354, pbn_cti_xr17v35x), 906 + CTI_EXAR_DEVICE(XR17V358, pbn_cti_xr17v35x), 907 + 908 + /* Connect Tech cards with Connect Tech vendor/device PCI IDs (FPGA based) */ 909 + EXAR_DEVICE(CONNECT_TECH, PCI_XR79X_12_XIG00X, pbn_cti_fpga), 910 + EXAR_DEVICE(CONNECT_TECH, PCI_XR79X_12_XIG01X, pbn_cti_fpga), 911 + EXAR_DEVICE(CONNECT_TECH, PCI_XR79X_16, pbn_cti_fpga), 912 913 IBM_DEVICE(XR17C152, SATURN_SERIAL_ONE_PORT, pbn_exar_ibm_saturn), 914
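The exar_get_nr_ports() helper added above derives the port count from the PCI device ID whenever the board struct does not override it; for plain Exar parts the last nibble of the device ID is the number of ports. A tiny standalone illustration of that decode (not part of the patch; the device IDs are the usual Exar values):

/*
 * Illustration only: the last nibble of an Exar PCI device ID encodes
 * the port count, which is what exar_get_nr_ports() relies on above.
 */
#include <stdio.h>

static unsigned int exar_nr_ports(unsigned int pci_device_id)
{
	return pci_device_id & 0x0f;
}

int main(void)
{
	printf("0x0352 -> %u ports\n", exar_nr_ports(0x0352));	/* XR17V352 */
	printf("0x0354 -> %u ports\n", exar_nr_ports(0x0354));	/* XR17V354 */
	printf("0x0358 -> %u ports\n", exar_nr_ports(0x0358));	/* XR17V358 */
	return 0;
}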
+1 -1
drivers/tty/serial/8250/8250_mtk.c
··· 199 200 if (up->dma) { 201 data->rx_status = DMA_RX_START; 202 - uart_circ_clear(&port->state->xmit); 203 } 204 #endif 205 memset(&port->icount, 0, sizeof(port->icount));
··· 199 200 if (up->dma) { 201 data->rx_status = DMA_RX_START; 202 + kfifo_reset(&port->state->port.xmit_fifo); 203 } 204 #endif 205 memset(&port->icount, 0, sizeof(port->icount));
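This one-liner is the simplest instance of the tree-wide switch from the old struct circ_buf transmit ring to the kfifo embedded in the tty_port: uart_circ_clear() becomes kfifo_reset() on port->state->port.xmit_fifo. A minimal sketch of the same idiom with a made-up helper name; it takes the port lock itself before touching the fifo, though whether that is needed depends on the call site:

/* Sketch only: flush the TX kfifo the way the hunk above does. */
#include <linux/kfifo.h>
#include <linux/serial_core.h>

static void example_flush_tx(struct uart_port *port)
{
	unsigned long flags;

	uart_port_lock_irqsave(port, &flags);
	kfifo_reset(&port->state->port.xmit_fifo);	/* was uart_circ_clear() */
	uart_port_unlock_irqrestore(port, flags);
}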
+37
drivers/tty/serial/8250/8250_of.c
··· 18 #include <linux/pm_runtime.h> 19 #include <linux/clk.h> 20 #include <linux/reset.h> 21 22 #include "8250.h" 23 ··· 27 struct reset_control *rst; 28 int type; 29 int line; 30 }; 31 32 /* Nuvoton NPCM timeout register */ ··· 58 port->get_divisor = npcm_get_divisor; 59 port->startup = npcm_startup; 60 return 0; 61 } 62 63 /* ··· 240 info->type = port_type; 241 info->line = ret; 242 platform_set_drvdata(ofdev, info); 243 return 0; 244 err_dispose: 245 pm_runtime_put_sync(&ofdev->dev); 246 pm_runtime_disable(&ofdev->dev); ··· 267 static void of_platform_serial_remove(struct platform_device *ofdev) 268 { 269 struct of_serial_info *info = platform_get_drvdata(ofdev); 270 271 serial8250_unregister_port(info->line); 272
··· 18 #include <linux/pm_runtime.h> 19 #include <linux/clk.h> 20 #include <linux/reset.h> 21 + #include <linux/notifier.h> 22 23 #include "8250.h" 24 ··· 26 struct reset_control *rst; 27 int type; 28 int line; 29 + struct notifier_block clk_notifier; 30 }; 31 32 /* Nuvoton NPCM timeout register */ ··· 56 port->get_divisor = npcm_get_divisor; 57 port->startup = npcm_startup; 58 return 0; 59 + } 60 + 61 + static inline struct of_serial_info *clk_nb_to_info(struct notifier_block *nb) 62 + { 63 + return container_of(nb, struct of_serial_info, clk_notifier); 64 + } 65 + 66 + static int of_platform_serial_clk_notifier_cb(struct notifier_block *nb, unsigned long event, 67 + void *data) 68 + { 69 + struct of_serial_info *info = clk_nb_to_info(nb); 70 + struct uart_8250_port *port8250 = serial8250_get_port(info->line); 71 + struct clk_notifier_data *ndata = data; 72 + 73 + if (event == POST_RATE_CHANGE) { 74 + serial8250_update_uartclk(&port8250->port, ndata->new_rate); 75 + return NOTIFY_OK; 76 + } 77 + 78 + return NOTIFY_DONE; 79 } 80 81 /* ··· 218 info->type = port_type; 219 info->line = ret; 220 platform_set_drvdata(ofdev, info); 221 + 222 + if (info->clk) { 223 + info->clk_notifier.notifier_call = of_platform_serial_clk_notifier_cb; 224 + ret = clk_notifier_register(info->clk, &info->clk_notifier); 225 + if (ret) { 226 + dev_err_probe(port8250.port.dev, ret, "Failed to set the clock notifier\n"); 227 + goto err_unregister; 228 + } 229 + } 230 + 231 return 0; 232 + err_unregister: 233 + serial8250_unregister_port(info->line); 234 err_dispose: 235 pm_runtime_put_sync(&ofdev->dev); 236 pm_runtime_disable(&ofdev->dev); ··· 233 static void of_platform_serial_remove(struct platform_device *ofdev) 234 { 235 struct of_serial_info *info = platform_get_drvdata(ofdev); 236 + 237 + if (info->clk) 238 + clk_notifier_unregister(info->clk, &info->clk_notifier); 239 240 serial8250_unregister_port(info->line); 241
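8250_of now registers a common-clock notifier so that a rate change on the baud clock is pushed into the UART via serial8250_update_uartclk(). For context, a rough sketch (not taken from the patch, callback name made up) of the three events such a notifier can receive; only POST_RATE_CHANGE carries the rate the driver should adopt:

/* Sketch only: a clk rate-change notifier callback skeleton. */
#include <linux/clk.h>
#include <linux/notifier.h>
#include <linux/printk.h>

static int example_clk_rate_cb(struct notifier_block *nb, unsigned long event,
			       void *data)
{
	struct clk_notifier_data *ndata = data;

	switch (event) {
	case PRE_RATE_CHANGE:
		/* May veto the upcoming change by returning NOTIFY_STOP. */
		return NOTIFY_OK;
	case ABORT_RATE_CHANGE:
		/* Another subscriber vetoed; the clock stays at ndata->old_rate. */
		return NOTIFY_OK;
	case POST_RATE_CHANGE:
		pr_debug("clk now runs at %lu Hz\n", ndata->new_rate);
		return NOTIFY_OK;
	default:
		return NOTIFY_DONE;
	}
}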
+33 -16
drivers/tty/serial/8250/8250_omap.c
··· 19 #include <linux/platform_device.h> 20 #include <linux/slab.h> 21 #include <linux/of.h> 22 - #include <linux/of_gpio.h> 23 #include <linux/of_irq.h> 24 #include <linux/delay.h> 25 #include <linux/pm_runtime.h> ··· 1093 { 1094 struct uart_8250_port *p = param; 1095 struct uart_8250_dma *dma = p->dma; 1096 - struct circ_buf *xmit = &p->port.state->xmit; 1097 unsigned long flags; 1098 bool en_thri = false; 1099 struct omap8250_priv *priv = p->port.private_data; ··· 1112 omap8250_restore_regs(p); 1113 } 1114 1115 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 1116 uart_write_wakeup(&p->port); 1117 1118 - if (!uart_circ_empty(xmit) && !uart_tx_stopped(&p->port)) { 1119 int ret; 1120 1121 ret = omap_8250_tx_dma(p); ··· 1137 { 1138 struct uart_8250_dma *dma = p->dma; 1139 struct omap8250_priv *priv = p->port.private_data; 1140 - struct circ_buf *xmit = &p->port.state->xmit; 1141 struct dma_async_tx_descriptor *desc; 1142 - unsigned int skip_byte = 0; 1143 int ret; 1144 1145 if (dma->tx_running) 1146 return 0; 1147 - if (uart_tx_stopped(&p->port) || uart_circ_empty(xmit)) { 1148 1149 /* 1150 * Even if no data, we need to return an error for the two cases ··· 1160 return 0; 1161 } 1162 1163 - dma->tx_size = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 1164 if (priv->habit & OMAP_DMA_TX_KICK) { 1165 u8 tx_lvl; 1166 1167 /* ··· 1198 ret = -EINVAL; 1199 goto err; 1200 } 1201 - skip_byte = 1; 1202 } 1203 1204 - desc = dmaengine_prep_slave_single(dma->txchan, 1205 - dma->tx_addr + xmit->tail + skip_byte, 1206 - dma->tx_size - skip_byte, DMA_MEM_TO_DEV, 1207 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1208 if (!desc) { 1209 ret = -EBUSY; ··· 1230 dma->tx_err = 0; 1231 1232 serial8250_clear_THRI(p); 1233 - if (skip_byte) 1234 - serial_out(p, UART_TX, xmit->buf[xmit->tail]); 1235 - return 0; 1236 err: 1237 dma->tx_err = 1; 1238 return ret; 1239 } 1240 ··· 1325 serial8250_modem_status(up); 1326 if (status & UART_LSR_THRE && up->dma->tx_err) { 1327 if (uart_tx_stopped(&up->port) || 1328 - uart_circ_empty(&up->port.state->xmit)) { 1329 up->dma->tx_err = 0; 1330 serial8250_tx_chars(up); 1331 } else {
··· 19 #include <linux/platform_device.h> 20 #include <linux/slab.h> 21 #include <linux/of.h> 22 #include <linux/of_irq.h> 23 #include <linux/delay.h> 24 #include <linux/pm_runtime.h> ··· 1094 { 1095 struct uart_8250_port *p = param; 1096 struct uart_8250_dma *dma = p->dma; 1097 + struct tty_port *tport = &p->port.state->port; 1098 unsigned long flags; 1099 bool en_thri = false; 1100 struct omap8250_priv *priv = p->port.private_data; ··· 1113 omap8250_restore_regs(p); 1114 } 1115 1116 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 1117 uart_write_wakeup(&p->port); 1118 1119 + if (!kfifo_is_empty(&tport->xmit_fifo) && !uart_tx_stopped(&p->port)) { 1120 int ret; 1121 1122 ret = omap_8250_tx_dma(p); ··· 1138 { 1139 struct uart_8250_dma *dma = p->dma; 1140 struct omap8250_priv *priv = p->port.private_data; 1141 + struct tty_port *tport = &p->port.state->port; 1142 struct dma_async_tx_descriptor *desc; 1143 + struct scatterlist sg; 1144 + int skip_byte = -1; 1145 int ret; 1146 1147 if (dma->tx_running) 1148 return 0; 1149 + if (uart_tx_stopped(&p->port) || kfifo_is_empty(&tport->xmit_fifo)) { 1150 1151 /* 1152 * Even if no data, we need to return an error for the two cases ··· 1160 return 0; 1161 } 1162 1163 + sg_init_table(&sg, 1); 1164 + ret = kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1, 1165 + UART_XMIT_SIZE, dma->tx_addr); 1166 + if (ret != 1) { 1167 + serial8250_clear_THRI(p); 1168 + return 0; 1169 + } 1170 + 1171 + dma->tx_size = sg_dma_len(&sg); 1172 + 1173 if (priv->habit & OMAP_DMA_TX_KICK) { 1174 + unsigned char c; 1175 u8 tx_lvl; 1176 1177 /* ··· 1188 ret = -EINVAL; 1189 goto err; 1190 } 1191 + if (!kfifo_get(&tport->xmit_fifo, &c)) { 1192 + ret = -EINVAL; 1193 + goto err; 1194 + } 1195 + skip_byte = c; 1196 + /* now we need to recompute due to kfifo_get */ 1197 + kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1, 1198 + UART_XMIT_SIZE, dma->tx_addr); 1199 } 1200 1201 + desc = dmaengine_prep_slave_sg(dma->txchan, &sg, 1, DMA_MEM_TO_DEV, 1202 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1203 if (!desc) { 1204 ret = -EBUSY; ··· 1215 dma->tx_err = 0; 1216 1217 serial8250_clear_THRI(p); 1218 + ret = 0; 1219 + goto out_skip; 1220 err: 1221 dma->tx_err = 1; 1222 + out_skip: 1223 + if (skip_byte >= 0) 1224 + serial_out(p, UART_TX, skip_byte); 1225 return ret; 1226 } 1227 ··· 1308 serial8250_modem_status(up); 1309 if (status & UART_LSR_THRE && up->dma->tx_err) { 1310 if (uart_tx_stopped(&up->port) || 1311 + kfifo_is_empty(&up->port.state->port.xmit_fifo)) { 1312 up->dma->tx_err = 0; 1313 serial8250_tx_chars(up); 1314 } else {
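The OMAP DMA TX path no longer computes CIRC_CNT_TO_END() by hand: kfifo_dma_out_prepare_mapped() builds a scatterlist over the queued bytes in place, and the bytes are consumed from the fifo separately, once the hardware is done with them. A condensed sketch of that prepare/advance pairing (function names are illustrative, the dmaengine plumbing is omitted):

/*
 * Sketch only: map queued TX bytes for DMA without dequeuing them, then
 * drop them from the fifo after the hardware has sent them.
 */
#include <linux/kfifo.h>
#include <linux/scatterlist.h>
#include <linux/serial_core.h>

static unsigned int example_dma_tx_map(struct uart_port *port,
				       struct scatterlist *sg, dma_addr_t tx_addr)
{
	struct tty_port *tport = &port->state->port;

	sg_init_table(sg, 1);
	/* Returns the number of sg entries filled; 0 means nothing is queued. */
	if (kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, sg, 1,
					 UART_XMIT_SIZE, tx_addr) != 1)
		return 0;

	return sg_dma_len(sg);
}

static void example_dma_tx_complete(struct uart_port *port, unsigned int sent)
{
	/* Removes 'sent' bytes from the fifo and accounts them in icount.tx. */
	uart_xmit_advance(port, sent);
}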
+22 -28
drivers/tty/serial/8250/8250_pci1xxxx.c
··· 382 } 383 384 static void pci1xxxx_process_write_data(struct uart_port *port, 385 - struct circ_buf *xmit, 386 int *data_empty_count, 387 u32 *valid_byte_count) 388 { 389 u32 valid_burst_count = *valid_byte_count / UART_BURST_SIZE; 390 391 /* ··· 395 * one byte at a time. 396 */ 397 while (valid_burst_count) { 398 if (*data_empty_count - UART_BURST_SIZE < 0) 399 break; 400 - if (xmit->tail > (UART_XMIT_SIZE - UART_BURST_SIZE)) 401 break; 402 - writel(*(unsigned int *)&xmit->buf[xmit->tail], 403 - port->membase + UART_TX_BURST_FIFO); 404 *valid_byte_count -= UART_BURST_SIZE; 405 *data_empty_count -= UART_BURST_SIZE; 406 valid_burst_count -= UART_BYTE_SIZE; 407 - 408 - xmit->tail = (xmit->tail + UART_BURST_SIZE) & 409 - (UART_XMIT_SIZE - 1); 410 } 411 412 while (*valid_byte_count) { 413 - if (*data_empty_count - UART_BYTE_SIZE < 0) 414 break; 415 - writeb(xmit->buf[xmit->tail], port->membase + 416 - UART_TX_BYTE_FIFO); 417 *data_empty_count -= UART_BYTE_SIZE; 418 *valid_byte_count -= UART_BYTE_SIZE; 419 - 420 - /* 421 - * When the tail of the circular buffer is reached, the next 422 - * byte is transferred to the beginning of the buffer. 423 - */ 424 - xmit->tail = (xmit->tail + UART_BYTE_SIZE) & 425 - (UART_XMIT_SIZE - 1); 426 427 /* 428 * If there are any pending burst count, data is handled by 429 * transmitting DWORDs at a time. 430 */ 431 - if (valid_burst_count && (xmit->tail < 432 - (UART_XMIT_SIZE - UART_BURST_SIZE))) 433 break; 434 } 435 } ··· 432 static void pci1xxxx_tx_burst(struct uart_port *port, u32 uart_status) 433 { 434 struct uart_8250_port *up = up_to_u8250p(port); 435 u32 valid_byte_count; 436 int data_empty_count; 437 - struct circ_buf *xmit; 438 - 439 - xmit = &port->state->xmit; 440 441 if (port->x_char) { 442 writeb(port->x_char, port->membase + UART_TX); ··· 443 return; 444 } 445 446 - if ((uart_tx_stopped(port)) || (uart_circ_empty(xmit))) { 447 port->ops->stop_tx(port); 448 } else { 449 data_empty_count = (pci1xxxx_read_burst_status(port) & 450 UART_BST_STAT_TX_COUNT_MASK) >> 8; 451 do { 452 - valid_byte_count = uart_circ_chars_pending(xmit); 453 454 - pci1xxxx_process_write_data(port, xmit, 455 &data_empty_count, 456 &valid_byte_count); 457 458 port->icount.tx++; 459 - if (uart_circ_empty(xmit)) 460 break; 461 } while (data_empty_count && valid_byte_count); 462 } 463 464 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 465 uart_write_wakeup(port); 466 467 /* ··· 469 * the HW can go idle. So we get here once again with empty FIFO and 470 * disable the interrupt and RPM in __stop_tx() 471 */ 472 - if (uart_circ_empty(xmit) && !(up->capabilities & UART_CAP_RPM)) 473 port->ops->stop_tx(port); 474 } 475
··· 382 } 383 384 static void pci1xxxx_process_write_data(struct uart_port *port, 385 int *data_empty_count, 386 u32 *valid_byte_count) 387 { 388 + struct tty_port *tport = &port->state->port; 389 u32 valid_burst_count = *valid_byte_count / UART_BURST_SIZE; 390 391 /* ··· 395 * one byte at a time. 396 */ 397 while (valid_burst_count) { 398 + u32 c; 399 + 400 if (*data_empty_count - UART_BURST_SIZE < 0) 401 break; 402 + if (kfifo_len(&tport->xmit_fifo) < UART_BURST_SIZE) 403 break; 404 + if (WARN_ON(kfifo_out(&tport->xmit_fifo, (u8 *)&c, sizeof(c)) != 405 + sizeof(c))) 406 + break; 407 + writel(c, port->membase + UART_TX_BURST_FIFO); 408 *valid_byte_count -= UART_BURST_SIZE; 409 *data_empty_count -= UART_BURST_SIZE; 410 valid_burst_count -= UART_BYTE_SIZE; 411 } 412 413 while (*valid_byte_count) { 414 + u8 c; 415 + 416 + if (!kfifo_get(&tport->xmit_fifo, &c)) 417 break; 418 + writeb(c, port->membase + UART_TX_BYTE_FIFO); 419 *data_empty_count -= UART_BYTE_SIZE; 420 *valid_byte_count -= UART_BYTE_SIZE; 421 422 /* 423 * If there are any pending burst count, data is handled by 424 * transmitting DWORDs at a time. 425 */ 426 + if (valid_burst_count && 427 + kfifo_len(&tport->xmit_fifo) >= UART_BURST_SIZE) 428 break; 429 } 430 } ··· 437 static void pci1xxxx_tx_burst(struct uart_port *port, u32 uart_status) 438 { 439 struct uart_8250_port *up = up_to_u8250p(port); 440 + struct tty_port *tport = &port->state->port; 441 u32 valid_byte_count; 442 int data_empty_count; 443 444 if (port->x_char) { 445 writeb(port->x_char, port->membase + UART_TX); ··· 450 return; 451 } 452 453 + if ((uart_tx_stopped(port)) || kfifo_is_empty(&tport->xmit_fifo)) { 454 port->ops->stop_tx(port); 455 } else { 456 data_empty_count = (pci1xxxx_read_burst_status(port) & 457 UART_BST_STAT_TX_COUNT_MASK) >> 8; 458 do { 459 + valid_byte_count = kfifo_len(&tport->xmit_fifo); 460 461 + pci1xxxx_process_write_data(port, 462 &data_empty_count, 463 &valid_byte_count); 464 465 port->icount.tx++; 466 + if (kfifo_is_empty(&tport->xmit_fifo)) 467 break; 468 } while (data_empty_count && valid_byte_count); 469 } 470 471 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 472 uart_write_wakeup(port); 473 474 /* ··· 476 * the HW can go idle. So we get here once again with empty FIFO and 477 * disable the interrupt and RPM in __stop_tx() 478 */ 479 + if (kfifo_is_empty(&tport->xmit_fifo) && 480 + !(up->capabilities & UART_CAP_RPM)) 481 port->ops->stop_tx(port); 482 } 483
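pci1xxxx now feeds its burst FIFO by popping whole DWORDs straight out of the byte-wide xmit kfifo and falls back to single bytes for the remainder. A minimal sketch of the DWORD pop with a hypothetical helper name; anything shorter than a full word stays queued for the byte path:

/* Sketch only: pop one 32-bit burst word from the byte-wide TX kfifo. */
#include <linux/kfifo.h>
#include <linux/serial_core.h>

static bool example_pop_burst_word(struct tty_port *tport, u32 *word)
{
	if (kfifo_len(&tport->xmit_fifo) < sizeof(*word))
		return false;

	/* kfifo_out() returns how many bytes were actually copied out. */
	return kfifo_out(&tport->xmit_fifo, (u8 *)word, sizeof(*word)) == sizeof(*word);
}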
+36 -29
drivers/tty/serial/8250/8250_pnp.c
··· 10 */ 11 #include <linux/module.h> 12 #include <linux/pci.h> 13 #include <linux/pnp.h> 14 #include <linux/string.h> 15 #include <linux/kernel.h> ··· 435 serial_pnp_probe(struct pnp_dev *dev, const struct pnp_device_id *dev_id) 436 { 437 struct uart_8250_port uart, *port; 438 - int ret, line, flags = dev_id->driver_data; 439 440 if (flags & UNKNOWN_DEV) { 441 ret = serial_pnp_guess_board(dev); ··· 446 } 447 448 memset(&uart, 0, sizeof(uart)); 449 - if (pnp_irq_valid(dev, 0)) 450 - uart.port.irq = pnp_irq(dev, 0); 451 if ((flags & CIR_PORT) && pnp_port_valid(dev, 2)) { 452 uart.port.iobase = pnp_port_start(dev, 2); 453 - uart.port.iotype = UPIO_PORT; 454 } else if (pnp_port_valid(dev, 0)) { 455 uart.port.iobase = pnp_port_start(dev, 0); 456 - uart.port.iotype = UPIO_PORT; 457 } else if (pnp_mem_valid(dev, 0)) { 458 uart.port.mapbase = pnp_mem_start(dev, 0); 459 - uart.port.iotype = UPIO_MEM; 460 uart.port.flags = UPF_IOREMAP; 461 } else 462 return -ENODEV; 463 464 - dev_dbg(&dev->dev, 465 - "Setup PNP port: port %#lx, mem %#llx, irq %u, type %u\n", 466 - uart.port.iobase, (unsigned long long)uart.port.mapbase, 467 - uart.port.irq, uart.port.iotype); 468 469 if (flags & CIR_PORT) { 470 uart.port.flags |= UPF_FIXED_PORT | UPF_FIXED_TYPE; 471 uart.port.type = PORT_8250_CIR; 472 } 473 474 - uart.port.flags |= UPF_SKIP_TEST | UPF_BOOT_AUTOCONF; 475 - if (pnp_irq_flags(dev, 0) & IORESOURCE_IRQ_SHAREABLE) 476 - uart.port.flags |= UPF_SHARE_IRQ; 477 - uart.port.uartclk = 1843200; 478 - device_property_read_u32(&dev->dev, "clock-frequency", &uart.port.uartclk); 479 - uart.port.dev = &dev->dev; 480 481 line = serial8250_register_8250_port(&uart); 482 if (line < 0 || (flags & CIR_PORT)) ··· 495 if (uart_console(&port->port)) 496 dev->capabilities |= PNP_CONSOLE; 497 498 - pnp_set_drvdata(dev, (void *)((long)line + 1)); 499 return 0; 500 } 501 ··· 504 long line = (long)pnp_get_drvdata(dev); 505 506 dev->capabilities &= ~PNP_CONSOLE; 507 - if (line) 508 - serial8250_unregister_port(line - 1); 509 } 510 511 - static int __maybe_unused serial_pnp_suspend(struct device *dev) 512 { 513 long line = (long)dev_get_drvdata(dev); 514 515 - if (!line) 516 - return -ENODEV; 517 - serial8250_suspend_port(line - 1); 518 return 0; 519 } 520 521 - static int __maybe_unused serial_pnp_resume(struct device *dev) 522 { 523 long line = (long)dev_get_drvdata(dev); 524 525 - if (!line) 526 - return -ENODEV; 527 - serial8250_resume_port(line - 1); 528 return 0; 529 } 530 531 - static SIMPLE_DEV_PM_OPS(serial_pnp_pm_ops, serial_pnp_suspend, serial_pnp_resume); 532 533 static struct pnp_driver serial_pnp_driver = { 534 .name = "serial", 535 .probe = serial_pnp_probe, 536 .remove = serial_pnp_remove, 537 .driver = { 538 - .pm = &serial_pnp_pm_ops, 539 }, 540 .id_table = pnp_dev_table, 541 };
··· 10 */ 11 #include <linux/module.h> 12 #include <linux/pci.h> 13 + #include <linux/pm.h> 14 #include <linux/pnp.h> 15 #include <linux/string.h> 16 #include <linux/kernel.h> ··· 434 serial_pnp_probe(struct pnp_dev *dev, const struct pnp_device_id *dev_id) 435 { 436 struct uart_8250_port uart, *port; 437 + int ret, flags = dev_id->driver_data; 438 + unsigned char iotype; 439 + long line; 440 441 if (flags & UNKNOWN_DEV) { 442 ret = serial_pnp_guess_board(dev); ··· 443 } 444 445 memset(&uart, 0, sizeof(uart)); 446 if ((flags & CIR_PORT) && pnp_port_valid(dev, 2)) { 447 uart.port.iobase = pnp_port_start(dev, 2); 448 + iotype = UPIO_PORT; 449 } else if (pnp_port_valid(dev, 0)) { 450 uart.port.iobase = pnp_port_start(dev, 0); 451 + iotype = UPIO_PORT; 452 } else if (pnp_mem_valid(dev, 0)) { 453 uart.port.mapbase = pnp_mem_start(dev, 0); 454 + uart.port.mapsize = pnp_mem_len(dev, 0); 455 + iotype = UPIO_MEM; 456 uart.port.flags = UPF_IOREMAP; 457 } else 458 return -ENODEV; 459 460 + uart.port.uartclk = 1843200; 461 + uart.port.dev = &dev->dev; 462 + uart.port.flags |= UPF_SKIP_TEST | UPF_BOOT_AUTOCONF; 463 + 464 + ret = uart_read_port_properties(&uart.port); 465 + /* no interrupt -> fall back to polling */ 466 + if (ret == -ENXIO) 467 + ret = 0; 468 + if (ret) 469 + return ret; 470 + 471 + /* 472 + * The previous call may not set iotype correctly when reg-io-width 473 + * property is absent and it doesn't support IO port resource. 474 + */ 475 + uart.port.iotype = iotype; 476 477 if (flags & CIR_PORT) { 478 uart.port.flags |= UPF_FIXED_PORT | UPF_FIXED_TYPE; 479 uart.port.type = PORT_8250_CIR; 480 } 481 482 + dev_dbg(&dev->dev, 483 + "Setup PNP port: port %#lx, mem %#llx, size %#llx, irq %u, type %u\n", 484 + uart.port.iobase, (unsigned long long)uart.port.mapbase, 485 + (unsigned long long)uart.port.mapsize, uart.port.irq, uart.port.iotype); 486 487 line = serial8250_register_8250_port(&uart); 488 if (line < 0 || (flags & CIR_PORT)) ··· 483 if (uart_console(&port->port)) 484 dev->capabilities |= PNP_CONSOLE; 485 486 + pnp_set_drvdata(dev, (void *)line); 487 return 0; 488 } 489 ··· 492 long line = (long)pnp_get_drvdata(dev); 493 494 dev->capabilities &= ~PNP_CONSOLE; 495 + serial8250_unregister_port(line); 496 } 497 498 + static int serial_pnp_suspend(struct device *dev) 499 { 500 long line = (long)dev_get_drvdata(dev); 501 502 + serial8250_suspend_port(line); 503 return 0; 504 } 505 506 + static int serial_pnp_resume(struct device *dev) 507 { 508 long line = (long)dev_get_drvdata(dev); 509 510 + serial8250_resume_port(line); 511 return 0; 512 } 513 514 + static DEFINE_SIMPLE_DEV_PM_OPS(serial_pnp_pm_ops, serial_pnp_suspend, serial_pnp_resume); 515 516 static struct pnp_driver serial_pnp_driver = { 517 .name = "serial", 518 .probe = serial_pnp_probe, 519 .remove = serial_pnp_remove, 520 .driver = { 521 + .pm = pm_sleep_ptr(&serial_pnp_pm_ops), 522 }, 523 .id_table = pnp_dev_table, 524 };
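Besides switching to uart_read_port_properties() and dropping the old "line + 1" encoding in drvdata, 8250_pnp moves from SIMPLE_DEV_PM_OPS() with __maybe_unused callbacks to the DEFINE_SIMPLE_DEV_PM_OPS()/pm_sleep_ptr() pairing. A bare-bones sketch of that idiom with placeholder names: the callbacks stay referenced by the ops table, so no __maybe_unused is needed, and the compiler can discard the whole lot when CONFIG_PM_SLEEP is disabled:

/* Sketch only: the current dev PM ops idiom, names are made up. */
#include <linux/device.h>
#include <linux/pm.h>

static int example_suspend(struct device *dev)
{
	/* quiesce the device */
	return 0;
}

static int example_resume(struct device *dev)
{
	/* bring the device back up */
	return 0;
}

static DEFINE_SIMPLE_DEV_PM_OPS(example_pm_ops, example_suspend, example_resume);

/* In the driver definition:  .driver = { .pm = pm_sleep_ptr(&example_pm_ops) }, */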
+13 -16
drivers/tty/serial/8250/8250_port.c
··· 612 { 613 struct uart_8250_port *up = up_to_u8250p(port); 614 615 - /* pick sane settings if the user hasn't */ 616 - if (!!(rs485->flags & SER_RS485_RTS_ON_SEND) == 617 - !!(rs485->flags & SER_RS485_RTS_AFTER_SEND)) { 618 - rs485->flags |= SER_RS485_RTS_ON_SEND; 619 - rs485->flags &= ~SER_RS485_RTS_AFTER_SEND; 620 - } 621 - 622 /* 623 * Both serial8250_em485_init() and serial8250_em485_destroy() 624 * are idempotent. ··· 1623 /* Port locked to synchronize UART_IER access against the console. */ 1624 lockdep_assert_held_once(&port->lock); 1625 1626 - if (!port->x_char && uart_circ_empty(&port->state->xmit)) 1627 return; 1628 1629 serial8250_rpm_get_tx(up); ··· 1771 void serial8250_tx_chars(struct uart_8250_port *up) 1772 { 1773 struct uart_port *port = &up->port; 1774 - struct circ_buf *xmit = &port->state->xmit; 1775 int count; 1776 1777 if (port->x_char) { ··· 1782 serial8250_stop_tx(port); 1783 return; 1784 } 1785 - if (uart_circ_empty(xmit)) { 1786 __stop_tx(up); 1787 return; 1788 } 1789 1790 count = up->tx_loadsz; 1791 do { 1792 - serial_out(up, UART_TX, xmit->buf[xmit->tail]); 1793 if (up->bugs & UART_BUG_TXRACE) { 1794 /* 1795 * The Aspeed BMC virtual UARTs have a bug where data ··· 1807 */ 1808 serial_in(up, UART_SCR); 1809 } 1810 - uart_xmit_advance(port, 1); 1811 - if (uart_circ_empty(xmit)) 1812 - break; 1813 if ((up->capabilities & UART_CAP_HFIFO) && 1814 !uart_lsr_tx_empty(serial_in(up, UART_LSR))) 1815 break; ··· 1817 break; 1818 } while (--count > 0); 1819 1820 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 1821 uart_write_wakeup(port); 1822 1823 /* ··· 1825 * HW can go idle. So we get here once again with empty FIFO and disable 1826 * the interrupt and RPM in __stop_tx() 1827 */ 1828 - if (uart_circ_empty(xmit) && !(up->capabilities & UART_CAP_RPM)) 1829 __stop_tx(up); 1830 } 1831 EXPORT_SYMBOL_GPL(serial8250_tx_chars);
··· 612 { 613 struct uart_8250_port *up = up_to_u8250p(port); 614 615 /* 616 * Both serial8250_em485_init() and serial8250_em485_destroy() 617 * are idempotent. ··· 1630 /* Port locked to synchronize UART_IER access against the console. */ 1631 lockdep_assert_held_once(&port->lock); 1632 1633 + if (!port->x_char && kfifo_is_empty(&port->state->port.xmit_fifo)) 1634 return; 1635 1636 serial8250_rpm_get_tx(up); ··· 1778 void serial8250_tx_chars(struct uart_8250_port *up) 1779 { 1780 struct uart_port *port = &up->port; 1781 + struct tty_port *tport = &port->state->port; 1782 int count; 1783 1784 if (port->x_char) { ··· 1789 serial8250_stop_tx(port); 1790 return; 1791 } 1792 + if (kfifo_is_empty(&tport->xmit_fifo)) { 1793 __stop_tx(up); 1794 return; 1795 } 1796 1797 count = up->tx_loadsz; 1798 do { 1799 + unsigned char c; 1800 + 1801 + if (!uart_fifo_get(port, &c)) 1802 + break; 1803 + 1804 + serial_out(up, UART_TX, c); 1805 if (up->bugs & UART_BUG_TXRACE) { 1806 /* 1807 * The Aspeed BMC virtual UARTs have a bug where data ··· 1809 */ 1810 serial_in(up, UART_SCR); 1811 } 1812 + 1813 if ((up->capabilities & UART_CAP_HFIFO) && 1814 !uart_lsr_tx_empty(serial_in(up, UART_LSR))) 1815 break; ··· 1821 break; 1822 } while (--count > 0); 1823 1824 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 1825 uart_write_wakeup(port); 1826 1827 /* ··· 1829 * HW can go idle. So we get here once again with empty FIFO and disable 1830 * the interrupt and RPM in __stop_tx() 1831 */ 1832 + if (kfifo_is_empty(&tport->xmit_fifo) && 1833 + !(up->capabilities & UART_CAP_RPM)) 1834 __stop_tx(up); 1835 } 1836 EXPORT_SYMBOL_GPL(serial8250_tx_chars);
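The core 8250 transmit loop now pulls bytes with uart_fifo_get(), a serial-core helper that dequeues one character from the tty_port kfifo and bumps icount.tx in the same step. A stripped-down sketch of a PIO TX loop built on it (names are illustrative, the hardware access is factored out into a callback):

/* Sketch only: a generic PIO TX loop over the tty_port kfifo. */
#include <linux/kfifo.h>
#include <linux/serial_core.h>

static void example_tx_chars(struct uart_port *port, unsigned int room,
			     void (*tx_byte)(struct uart_port *port, unsigned char c))
{
	struct tty_port *tport = &port->state->port;
	unsigned char c;

	/* uart_fifo_get() returns 0 once the fifo is empty. */
	while (room-- && uart_fifo_get(port, &c))
		tx_byte(port, c);

	if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
		uart_write_wakeup(port);
}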
+23 -31
drivers/tty/serial/Kconfig
··· 307 If unsure, say Y. 308 309 config SERIAL_MAX3100 310 - tristate "MAX3100 support" 311 depends on SPI 312 select SERIAL_CORE 313 help 314 - MAX3100 chip support 315 316 config SERIAL_MAX310X 317 tristate "MAX310X support" ··· 1024 Support for console on SCCNXP serial ports. 1025 1026 config SERIAL_SC16IS7XX_CORE 1027 - tristate 1028 - 1029 - config SERIAL_SC16IS7XX 1030 - tristate "SC16IS7xx serial support" 1031 select SERIAL_CORE 1032 - depends on (SPI_MASTER && !I2C) || I2C 1033 help 1034 - This selects support for SC16IS7xx serial ports. 1035 - Supported ICs are SC16IS740, SC16IS741, SC16IS750, SC16IS752, 1036 - SC16IS760 and SC16IS762. Select supported buses using options below. 1037 1038 config SERIAL_SC16IS7XX_I2C 1039 - bool "SC16IS7xx for I2C interface" 1040 - depends on SERIAL_SC16IS7XX 1041 - depends on I2C 1042 - select SERIAL_SC16IS7XX_CORE if SERIAL_SC16IS7XX 1043 - select REGMAP_I2C if I2C 1044 - default y 1045 - help 1046 - Enable SC16IS7xx driver on I2C bus, 1047 - If required say y, and say n to i2c if not required, 1048 - Enabled by default to support oldconfig. 1049 - You must select at least one bus for the driver to be built. 1050 1051 config SERIAL_SC16IS7XX_SPI 1052 - bool "SC16IS7xx for spi interface" 1053 - depends on SERIAL_SC16IS7XX 1054 - depends on SPI_MASTER 1055 - select SERIAL_SC16IS7XX_CORE if SERIAL_SC16IS7XX 1056 - select REGMAP_SPI if SPI_MASTER 1057 - help 1058 - Enable SC16IS7xx driver on SPI bus, 1059 - If required say y, and say n to spi if not required, 1060 - This is additional support to existing driver. 1061 - You must select at least one bus for the driver to be built. 1062 1063 config SERIAL_TIMBERDALE 1064 tristate "Support for timberdale UART"
··· 307 If unsure, say Y. 308 309 config SERIAL_MAX3100 310 + tristate "MAX3100/3110/3111/3222 support" 311 depends on SPI 312 select SERIAL_CORE 313 help 314 + This selects support for an advanced UART from Maxim. 315 + Supported ICs are MAX3100, MAX3110, MAX3111, MAX3222. 316 + 317 + Say Y here if you want to support these ICs. 318 319 config SERIAL_MAX310X 320 tristate "MAX310X support" ··· 1021 Support for console on SCCNXP serial ports. 1022 1023 config SERIAL_SC16IS7XX_CORE 1024 + tristate "NXP SC16IS7xx UART support" 1025 select SERIAL_CORE 1026 + select SERIAL_SC16IS7XX_SPI if SPI_MASTER 1027 + select SERIAL_SC16IS7XX_I2C if I2C 1028 help 1029 + Core driver for NXP SC16IS7xx UARTs. 1030 + Supported ICs are: 1031 + 1032 + SC16IS740 1033 + SC16IS741 1034 + SC16IS750 1035 + SC16IS752 1036 + SC16IS760 1037 + SC16IS762 1038 + 1039 + The driver supports both I2C and SPI interfaces. 1040 1041 config SERIAL_SC16IS7XX_I2C 1042 + tristate 1043 + select REGMAP_I2C 1044 1045 config SERIAL_SC16IS7XX_SPI 1046 + tristate 1047 + select REGMAP_SPI 1048 1049 config SERIAL_TIMBERDALE 1050 tristate "Support for timberdale UART"
+2
drivers/tty/serial/Makefile
··· 76 obj-$(CONFIG_SERIAL_SB1250_DUART) += sb1250-duart.o 77 obj-$(CONFIG_SERIAL_SCCNXP) += sccnxp.o 78 obj-$(CONFIG_SERIAL_SC16IS7XX_CORE) += sc16is7xx.o 79 obj-$(CONFIG_SERIAL_SH_SCI) += sh-sci.o 80 obj-$(CONFIG_SERIAL_SIFIVE) += sifive.o 81 obj-$(CONFIG_SERIAL_SPRD) += sprd_serial.o
··· 76 obj-$(CONFIG_SERIAL_SB1250_DUART) += sb1250-duart.o 77 obj-$(CONFIG_SERIAL_SCCNXP) += sccnxp.o 78 obj-$(CONFIG_SERIAL_SC16IS7XX_CORE) += sc16is7xx.o 79 + obj-$(CONFIG_SERIAL_SC16IS7XX_SPI) += sc16is7xx_spi.o 80 + obj-$(CONFIG_SERIAL_SC16IS7XX_I2C) += sc16is7xx_i2c.o 81 obj-$(CONFIG_SERIAL_SH_SCI) += sh-sci.o 82 obj-$(CONFIG_SERIAL_SIFIVE) += sifive.o 83 obj-$(CONFIG_SERIAL_SPRD) += sprd_serial.o
+21 -41
drivers/tty/serial/amba-pl011.c
··· 256 const u16 *reg_offset; 257 struct clk *clk; 258 const struct vendor_data *vendor; 259 - unsigned int dmacr; /* dma control reg */ 260 unsigned int im; /* interrupt mask */ 261 unsigned int old_status; 262 unsigned int fifosize; /* vendor-specific */ ··· 265 unsigned int rs485_tx_drain_interval; /* usecs */ 266 #ifdef CONFIG_DMA_ENGINE 267 /* DMA stuff */ 268 bool using_tx_dma; 269 bool using_rx_dma; 270 struct pl011_dmarx_data dmarx; ··· 535 static void pl011_dma_tx_callback(void *data) 536 { 537 struct uart_amba_port *uap = data; 538 struct pl011_dmatx_data *dmatx = &uap->dmatx; 539 unsigned long flags; 540 u16 dmacr; ··· 559 * get further refills (hence we check dmacr). 560 */ 561 if (!(dmacr & UART011_TXDMAE) || uart_tx_stopped(&uap->port) || 562 - uart_circ_empty(&uap->port.state->xmit)) { 563 uap->dmatx.queued = false; 564 uart_port_unlock_irqrestore(&uap->port, flags); 565 return; ··· 589 struct dma_chan *chan = dmatx->chan; 590 struct dma_device *dma_dev = chan->device; 591 struct dma_async_tx_descriptor *desc; 592 - struct circ_buf *xmit = &uap->port.state->xmit; 593 unsigned int count; 594 595 /* ··· 598 * the standard interrupt handling. This ensures that we 599 * issue a uart_write_wakeup() at the appropriate time. 600 */ 601 - count = uart_circ_chars_pending(xmit); 602 if (count < (uap->fifosize >> 1)) { 603 uap->dmatx.queued = false; 604 return 0; ··· 614 if (count > PL011_DMA_BUFFER_SIZE) 615 count = PL011_DMA_BUFFER_SIZE; 616 617 - if (xmit->tail < xmit->head) { 618 - memcpy(&dmatx->buf[0], &xmit->buf[xmit->tail], count); 619 - } else { 620 - size_t first = UART_XMIT_SIZE - xmit->tail; 621 - size_t second; 622 - 623 - if (first > count) 624 - first = count; 625 - second = count - first; 626 - 627 - memcpy(&dmatx->buf[0], &xmit->buf[xmit->tail], first); 628 - if (second) 629 - memcpy(&dmatx->buf[first], &xmit->buf[0], second); 630 - } 631 - 632 dmatx->len = count; 633 dmatx->dma = dma_map_single(dma_dev->dev, dmatx->buf, count, 634 DMA_TO_DEVICE); ··· 657 */ 658 uart_xmit_advance(&uap->port, count); 659 660 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 661 uart_write_wakeup(&uap->port); 662 663 return 1; ··· 1441 /* Returns true if tx interrupts have to be (kept) enabled */ 1442 static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq) 1443 { 1444 - struct circ_buf *xmit = &uap->port.state->xmit; 1445 int count = uap->fifosize >> 1; 1446 1447 if (uap->port.x_char) { ··· 1450 uap->port.x_char = 0; 1451 --count; 1452 } 1453 - if (uart_circ_empty(xmit) || uart_tx_stopped(&uap->port)) { 1454 pl011_stop_tx(&uap->port); 1455 return false; 1456 } ··· 1459 if (pl011_dma_tx_irq(uap)) 1460 return true; 1461 1462 - do { 1463 if (likely(from_irq) && count-- == 0) 1464 break; 1465 1466 - if (!pl011_tx_char(uap, xmit->buf[xmit->tail], from_irq)) 1467 break; 1468 1469 - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 1470 - } while (!uart_circ_empty(xmit)); 1471 1472 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 1473 uart_write_wakeup(&uap->port); 1474 1475 - if (uart_circ_empty(xmit)) { 1476 pl011_stop_tx(&uap->port); 1477 return false; 1478 } ··· 2692 return -EBUSY; 2693 } 2694 2695 - static int pl011_get_rs485_mode(struct uart_amba_port *uap) 2696 - { 2697 - struct uart_port *port = &uap->port; 2698 - int ret; 2699 - 2700 - ret = uart_get_rs485_mode(port); 2701 - if (ret) 2702 - return ret; 2703 - 2704 - return 0; 2705 - } 2706 - 2707 static int pl011_setup_port(struct device *dev, struct uart_amba_port *uap, 2708 struct resource *mmiobase, int 
index) 2709 { ··· 2712 uap->port.flags = UPF_BOOT_AUTOCONF; 2713 uap->port.line = index; 2714 2715 - ret = pl011_get_rs485_mode(uap); 2716 if (ret) 2717 return ret; 2718
··· 256 const u16 *reg_offset; 257 struct clk *clk; 258 const struct vendor_data *vendor; 259 unsigned int im; /* interrupt mask */ 260 unsigned int old_status; 261 unsigned int fifosize; /* vendor-specific */ ··· 266 unsigned int rs485_tx_drain_interval; /* usecs */ 267 #ifdef CONFIG_DMA_ENGINE 268 /* DMA stuff */ 269 + unsigned int dmacr; /* dma control reg */ 270 bool using_tx_dma; 271 bool using_rx_dma; 272 struct pl011_dmarx_data dmarx; ··· 535 static void pl011_dma_tx_callback(void *data) 536 { 537 struct uart_amba_port *uap = data; 538 + struct tty_port *tport = &uap->port.state->port; 539 struct pl011_dmatx_data *dmatx = &uap->dmatx; 540 unsigned long flags; 541 u16 dmacr; ··· 558 * get further refills (hence we check dmacr). 559 */ 560 if (!(dmacr & UART011_TXDMAE) || uart_tx_stopped(&uap->port) || 561 + kfifo_is_empty(&tport->xmit_fifo)) { 562 uap->dmatx.queued = false; 563 uart_port_unlock_irqrestore(&uap->port, flags); 564 return; ··· 588 struct dma_chan *chan = dmatx->chan; 589 struct dma_device *dma_dev = chan->device; 590 struct dma_async_tx_descriptor *desc; 591 + struct tty_port *tport = &uap->port.state->port; 592 unsigned int count; 593 594 /* ··· 597 * the standard interrupt handling. This ensures that we 598 * issue a uart_write_wakeup() at the appropriate time. 599 */ 600 + count = kfifo_len(&tport->xmit_fifo); 601 if (count < (uap->fifosize >> 1)) { 602 uap->dmatx.queued = false; 603 return 0; ··· 613 if (count > PL011_DMA_BUFFER_SIZE) 614 count = PL011_DMA_BUFFER_SIZE; 615 616 + count = kfifo_out_peek(&tport->xmit_fifo, dmatx->buf, count); 617 dmatx->len = count; 618 dmatx->dma = dma_map_single(dma_dev->dev, dmatx->buf, count, 619 DMA_TO_DEVICE); ··· 670 */ 671 uart_xmit_advance(&uap->port, count); 672 673 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 674 uart_write_wakeup(&uap->port); 675 676 return 1; ··· 1454 /* Returns true if tx interrupts have to be (kept) enabled */ 1455 static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq) 1456 { 1457 + struct tty_port *tport = &uap->port.state->port; 1458 int count = uap->fifosize >> 1; 1459 1460 if (uap->port.x_char) { ··· 1463 uap->port.x_char = 0; 1464 --count; 1465 } 1466 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(&uap->port)) { 1467 pl011_stop_tx(&uap->port); 1468 return false; 1469 } ··· 1472 if (pl011_dma_tx_irq(uap)) 1473 return true; 1474 1475 + while (1) { 1476 + unsigned char c; 1477 + 1478 if (likely(from_irq) && count-- == 0) 1479 break; 1480 1481 + if (!kfifo_peek(&tport->xmit_fifo, &c)) 1482 break; 1483 1484 + if (!pl011_tx_char(uap, c, from_irq)) 1485 + break; 1486 1487 + kfifo_skip(&tport->xmit_fifo); 1488 + } 1489 + 1490 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 1491 uart_write_wakeup(&uap->port); 1492 1493 + if (kfifo_is_empty(&tport->xmit_fifo)) { 1494 pl011_stop_tx(&uap->port); 1495 return false; 1496 } ··· 2700 return -EBUSY; 2701 } 2702 2703 static int pl011_setup_port(struct device *dev, struct uart_amba_port *uap, 2704 struct resource *mmiobase, int index) 2705 { ··· 2732 uap->port.flags = UPF_BOOT_AUTOCONF; 2733 uap->port.line = index; 2734 2735 + ret = uart_get_rs485_mode(&uap->port); 2736 if (ret) 2737 return ret; 2738
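PL011 shows the two non-destructive variants of the conversion: the DMA path copies data into its bounce buffer with kfifo_out_peek() and only advances the fifo via uart_xmit_advance() once the descriptor has been issued, while the PIO path peeks a byte, tries to push it into the hardware FIFO, and kfifo_skip()s it only if the write succeeded. A compact sketch of that peek-then-skip pattern (illustrative names):

/* Sketch only: consume a byte from the kfifo only after the HW accepted it. */
#include <linux/kfifo.h>
#include <linux/serial_core.h>

static void example_tx_peek_skip(struct uart_port *port,
				 bool (*tx_try)(struct uart_port *port, unsigned char c))
{
	struct tty_port *tport = &port->state->port;
	unsigned char c;

	while (kfifo_peek(&tport->xmit_fifo, &c)) {
		if (!tx_try(port, c))		/* HW FIFO full, try again later */
			break;
		kfifo_skip(&tport->xmit_fifo);	/* byte really went out, drop it */
		port->icount.tx++;
	}
}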
+8 -10
drivers/tty/serial/ar933x_uart.c
··· 390 391 static void ar933x_uart_tx_chars(struct ar933x_uart_port *up) 392 { 393 - struct circ_buf *xmit = &up->port.state->xmit; 394 struct serial_rs485 *rs485conf = &up->port.rs485; 395 int count; 396 bool half_duplex_send = false; ··· 399 return; 400 401 if ((rs485conf->flags & SER_RS485_ENABLED) && 402 - (up->port.x_char || !uart_circ_empty(xmit))) { 403 ar933x_uart_stop_rx_interrupt(up); 404 gpiod_set_value(up->rts_gpiod, !!(rs485conf->flags & SER_RS485_RTS_ON_SEND)); 405 half_duplex_send = true; ··· 408 count = up->port.fifosize; 409 do { 410 unsigned int rdata; 411 412 rdata = ar933x_uart_read(up, AR933X_UART_DATA_REG); 413 if ((rdata & AR933X_UART_DATA_TX_CSR) == 0) ··· 421 continue; 422 } 423 424 - if (uart_circ_empty(xmit)) 425 break; 426 427 - ar933x_uart_putc(up, xmit->buf[xmit->tail]); 428 - 429 - uart_xmit_advance(&up->port, 1); 430 } while (--count > 0); 431 432 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 433 uart_write_wakeup(&up->port); 434 435 - if (!uart_circ_empty(xmit)) { 436 ar933x_uart_start_tx_interrupt(up); 437 } else if (half_duplex_send) { 438 ar933x_uart_wait_tx_complete(up); ··· 692 .cons = NULL, /* filled in runtime */ 693 }; 694 695 - static const struct serial_rs485 ar933x_no_rs485 = {}; 696 static const struct serial_rs485 ar933x_rs485_supported = { 697 .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RTS_AFTER_SEND, 698 }; ··· 787 up->rts_gpiod = mctrl_gpio_to_gpiod(up->gpios, UART_GPIO_RTS); 788 789 if (!up->rts_gpiod) { 790 - port->rs485_supported = ar933x_no_rs485; 791 if (port->rs485.flags & SER_RS485_ENABLED) { 792 dev_err(&pdev->dev, "lacking rts-gpio, disabling RS485\n"); 793 port->rs485.flags &= ~SER_RS485_ENABLED;
··· 390 391 static void ar933x_uart_tx_chars(struct ar933x_uart_port *up) 392 { 393 + struct tty_port *tport = &up->port.state->port; 394 struct serial_rs485 *rs485conf = &up->port.rs485; 395 int count; 396 bool half_duplex_send = false; ··· 399 return; 400 401 if ((rs485conf->flags & SER_RS485_ENABLED) && 402 + (up->port.x_char || !kfifo_is_empty(&tport->xmit_fifo))) { 403 ar933x_uart_stop_rx_interrupt(up); 404 gpiod_set_value(up->rts_gpiod, !!(rs485conf->flags & SER_RS485_RTS_ON_SEND)); 405 half_duplex_send = true; ··· 408 count = up->port.fifosize; 409 do { 410 unsigned int rdata; 411 + unsigned char c; 412 413 rdata = ar933x_uart_read(up, AR933X_UART_DATA_REG); 414 if ((rdata & AR933X_UART_DATA_TX_CSR) == 0) ··· 420 continue; 421 } 422 423 + if (!uart_fifo_get(&up->port, &c)) 424 break; 425 426 + ar933x_uart_putc(up, c); 427 } while (--count > 0); 428 429 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 430 uart_write_wakeup(&up->port); 431 432 + if (!kfifo_is_empty(&tport->xmit_fifo)) { 433 ar933x_uart_start_tx_interrupt(up); 434 } else if (half_duplex_send) { 435 ar933x_uart_wait_tx_complete(up); ··· 693 .cons = NULL, /* filled in runtime */ 694 }; 695 696 static const struct serial_rs485 ar933x_rs485_supported = { 697 .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RTS_AFTER_SEND, 698 }; ··· 789 up->rts_gpiod = mctrl_gpio_to_gpiod(up->gpios, UART_GPIO_RTS); 790 791 if (!up->rts_gpiod) { 792 + port->rs485_supported.flags &= ~SER_RS485_ENABLED; 793 if (port->rs485.flags & SER_RS485_ENABLED) { 794 dev_err(&pdev->dev, "lacking rts-gpio, disabling RS485\n"); 795 port->rs485.flags &= ~SER_RS485_ENABLED;
+3 -5
drivers/tty/serial/arc_uart.c
··· 155 */ 156 static void arc_serial_tx_chars(struct uart_port *port) 157 { 158 - struct circ_buf *xmit = &port->state->xmit; 159 int sent = 0; 160 unsigned char ch; 161 ··· 164 port->icount.tx++; 165 port->x_char = 0; 166 sent = 1; 167 - } else if (!uart_circ_empty(xmit)) { 168 - ch = xmit->buf[xmit->tail]; 169 - uart_xmit_advance(port, 1); 170 while (!(UART_GET_STATUS(port) & TXEMPTY)) 171 cpu_relax(); 172 UART_SET_DATA(port, ch); ··· 175 * If num chars in xmit buffer are too few, ask tty layer for more. 176 * By Hard ISR to schedule processing in software interrupt part 177 */ 178 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 179 uart_write_wakeup(port); 180 181 if (sent)
··· 155 */ 156 static void arc_serial_tx_chars(struct uart_port *port) 157 { 158 + struct tty_port *tport = &port->state->port; 159 int sent = 0; 160 unsigned char ch; 161 ··· 164 port->icount.tx++; 165 port->x_char = 0; 166 sent = 1; 167 + } else if (uart_fifo_get(port, &ch)) { 168 while (!(UART_GET_STATUS(port) & TXEMPTY)) 169 cpu_relax(); 170 UART_SET_DATA(port, ch); ··· 177 * If num chars in xmit buffer are too few, ask tty layer for more. 178 * By Hard ISR to schedule processing in software interrupt part 179 */ 180 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 181 uart_write_wakeup(port); 182 183 if (sent)
+64 -82
drivers/tty/serial/atmel_serial.c
··· 96 * can contain up to 1024 characters in PIO mode and up to 4096 characters in 97 * DMA mode. 98 */ 99 - #define ATMEL_SERIAL_RINGSIZE 1024 100 101 /* 102 * at91: 6 USARTs and one DBGU port (SAM9260) ··· 134 struct dma_async_tx_descriptor *desc_rx; 135 dma_cookie_t cookie_tx; 136 dma_cookie_t cookie_rx; 137 - struct scatterlist sg_tx; 138 - struct scatterlist sg_rx; 139 struct tasklet_struct tasklet_rx; 140 struct tasklet_struct tasklet_tx; 141 atomic_t tasklet_shutdown; ··· 859 { 860 struct atmel_uart_port *atmel_port = arg; 861 struct uart_port *port = &atmel_port->uart; 862 - struct circ_buf *xmit = &port->state->xmit; 863 struct dma_chan *chan = atmel_port->chan_tx; 864 unsigned long flags; 865 ··· 875 atmel_port->desc_tx = NULL; 876 spin_unlock(&atmel_port->lock_tx); 877 878 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 879 uart_write_wakeup(port); 880 881 /* 882 - * xmit is a circular buffer so, if we have just send data from 883 - * xmit->tail to the end of xmit->buf, now we have to transmit the 884 - * remaining data from the beginning of xmit->buf to xmit->head. 885 */ 886 - if (!uart_circ_empty(xmit)) 887 atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx); 888 else if (atmel_uart_is_half_duplex(port)) { 889 /* ··· 906 if (chan) { 907 dmaengine_terminate_all(chan); 908 dma_release_channel(chan); 909 - dma_unmap_sg(port->dev, &atmel_port->sg_tx, 1, 910 - DMA_TO_DEVICE); 911 } 912 913 atmel_port->desc_tx = NULL; ··· 921 static void atmel_tx_dma(struct uart_port *port) 922 { 923 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 924 - struct circ_buf *xmit = &port->state->xmit; 925 struct dma_chan *chan = atmel_port->chan_tx; 926 struct dma_async_tx_descriptor *desc; 927 - struct scatterlist sgl[2], *sg, *sg_tx = &atmel_port->sg_tx; 928 - unsigned int tx_len, part1_len, part2_len, sg_len; 929 dma_addr_t phys_addr; 930 931 /* Make sure we have an idle channel */ 932 if (atmel_port->desc_tx != NULL) 933 return; 934 935 - if (!uart_circ_empty(xmit) && !uart_tx_stopped(port)) { 936 /* 937 * DMA is idle now. 938 * Port xmit buffer is already mapped, ··· 942 * Take the port lock to get a 943 * consistent xmit buffer state. 
944 */ 945 - tx_len = CIRC_CNT_TO_END(xmit->head, 946 - xmit->tail, 947 - UART_XMIT_SIZE); 948 949 if (atmel_port->fifo_size) { 950 /* multi data mode */ ··· 957 958 sg_init_table(sgl, 2); 959 sg_len = 0; 960 - phys_addr = sg_dma_address(sg_tx) + xmit->tail; 961 if (part1_len) { 962 sg = &sgl[sg_len++]; 963 sg_dma_address(sg) = phys_addr; ··· 974 975 /* 976 * save tx_len so atmel_complete_tx_dma() will increase 977 - * xmit->tail correctly 978 */ 979 atmel_port->tx_len = tx_len; 980 ··· 989 return; 990 } 991 992 - dma_sync_sg_for_device(port->dev, sg_tx, 1, DMA_TO_DEVICE); 993 994 atmel_port->desc_tx = desc; 995 desc->callback = atmel_complete_tx_dma; ··· 1005 dma_async_issue_pending(chan); 1006 } 1007 1008 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 1009 uart_write_wakeup(port); 1010 } 1011 1012 static int atmel_prepare_tx_dma(struct uart_port *port) 1013 { 1014 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1015 struct device *mfd_dev = port->dev->parent; 1016 dma_cap_mask_t mask; 1017 struct dma_slave_config config; 1018 struct dma_chan *chan; 1019 - int ret, nent; 1020 1021 dma_cap_zero(mask); 1022 dma_cap_set(DMA_SLAVE, mask); ··· 1032 dma_chan_name(atmel_port->chan_tx)); 1033 1034 spin_lock_init(&atmel_port->lock_tx); 1035 - sg_init_table(&atmel_port->sg_tx, 1); 1036 /* UART circular tx buffer is an aligned page. */ 1037 - BUG_ON(!PAGE_ALIGNED(port->state->xmit.buf)); 1038 - sg_set_page(&atmel_port->sg_tx, 1039 - virt_to_page(port->state->xmit.buf), 1040 - UART_XMIT_SIZE, 1041 - offset_in_page(port->state->xmit.buf)); 1042 - nent = dma_map_sg(port->dev, 1043 - &atmel_port->sg_tx, 1044 - 1, 1045 - DMA_TO_DEVICE); 1046 1047 - if (!nent) { 1048 dev_dbg(port->dev, "need to release resource of dma\n"); 1049 goto chan_err; 1050 } else { 1051 - dev_dbg(port->dev, "%s: mapped %d@%p to %pad\n", __func__, 1052 - sg_dma_len(&atmel_port->sg_tx), 1053 - port->state->xmit.buf, 1054 - &sg_dma_address(&atmel_port->sg_tx)); 1055 } 1056 1057 /* Configure the slave DMA */ ··· 1088 if (chan) { 1089 dmaengine_terminate_all(chan); 1090 dma_release_channel(chan); 1091 - dma_unmap_sg(port->dev, &atmel_port->sg_rx, 1, 1092 - DMA_FROM_DEVICE); 1093 } 1094 1095 atmel_port->desc_rx = NULL; ··· 1122 } 1123 1124 /* CPU claims ownership of RX DMA buffer */ 1125 - dma_sync_sg_for_cpu(port->dev, 1126 - &atmel_port->sg_rx, 1127 - 1, 1128 - DMA_FROM_DEVICE); 1129 1130 /* 1131 * ring->head points to the end of data already written by the DMA. ··· 1132 * The current transfer size should not be larger than the dma buffer 1133 * length. 1134 */ 1135 - ring->head = sg_dma_len(&atmel_port->sg_rx) - state.residue; 1136 - BUG_ON(ring->head > sg_dma_len(&atmel_port->sg_rx)); 1137 /* 1138 * At this point ring->head may point to the first byte right after the 1139 * last byte of the dma buffer: ··· 1147 * tail to the end of the buffer then reset tail. 
1148 */ 1149 if (ring->head < ring->tail) { 1150 - count = sg_dma_len(&atmel_port->sg_rx) - ring->tail; 1151 1152 tty_insert_flip_string(tport, ring->buf + ring->tail, count); 1153 ring->tail = 0; ··· 1160 1161 tty_insert_flip_string(tport, ring->buf + ring->tail, count); 1162 /* Wrap ring->head if needed */ 1163 - if (ring->head >= sg_dma_len(&atmel_port->sg_rx)) 1164 ring->head = 0; 1165 ring->tail = ring->head; 1166 port->icount.rx += count; 1167 } 1168 1169 /* USART retreives ownership of RX DMA buffer */ 1170 - dma_sync_sg_for_device(port->dev, 1171 - &atmel_port->sg_rx, 1172 - 1, 1173 - DMA_FROM_DEVICE); 1174 1175 tty_flip_buffer_push(tport); 1176 ··· 1184 struct dma_slave_config config; 1185 struct circ_buf *ring; 1186 struct dma_chan *chan; 1187 - int ret, nent; 1188 1189 ring = &atmel_port->rx_ring; 1190 ··· 1201 dma_chan_name(atmel_port->chan_rx)); 1202 1203 spin_lock_init(&atmel_port->lock_rx); 1204 - sg_init_table(&atmel_port->sg_rx, 1); 1205 /* UART circular rx buffer is an aligned page. */ 1206 BUG_ON(!PAGE_ALIGNED(ring->buf)); 1207 - sg_set_page(&atmel_port->sg_rx, 1208 - virt_to_page(ring->buf), 1209 - sizeof(struct atmel_uart_char) * ATMEL_SERIAL_RINGSIZE, 1210 - offset_in_page(ring->buf)); 1211 - nent = dma_map_sg(port->dev, 1212 - &atmel_port->sg_rx, 1213 - 1, 1214 - DMA_FROM_DEVICE); 1215 1216 - if (!nent) { 1217 dev_dbg(port->dev, "need to release resource of dma\n"); 1218 goto chan_err; 1219 } else { 1220 - dev_dbg(port->dev, "%s: mapped %d@%p to %pad\n", __func__, 1221 - sg_dma_len(&atmel_port->sg_rx), 1222 - ring->buf, 1223 - &sg_dma_address(&atmel_port->sg_rx)); 1224 } 1225 1226 /* Configure the slave DMA */ ··· 1233 * each one is half ring buffer size 1234 */ 1235 desc = dmaengine_prep_dma_cyclic(atmel_port->chan_rx, 1236 - sg_dma_address(&atmel_port->sg_rx), 1237 - sg_dma_len(&atmel_port->sg_rx), 1238 - sg_dma_len(&atmel_port->sg_rx)/2, 1239 DMA_DEV_TO_MEM, 1240 DMA_PREP_INTERRUPT); 1241 if (!desc) { ··· 1442 static void atmel_tx_pdc(struct uart_port *port) 1443 { 1444 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1445 - struct circ_buf *xmit = &port->state->xmit; 1446 struct atmel_dma_buffer *pdc = &atmel_port->pdc_tx; 1447 - int count; 1448 1449 /* nothing left to transmit? 
*/ 1450 if (atmel_uart_readl(port, ATMEL_PDC_TCR)) ··· 1456 /* disable PDC transmit */ 1457 atmel_uart_writel(port, ATMEL_PDC_PTCR, ATMEL_PDC_TXTDIS); 1458 1459 - if (!uart_circ_empty(xmit) && !uart_tx_stopped(port)) { 1460 dma_sync_single_for_device(port->dev, 1461 pdc->dma_addr, 1462 pdc->dma_size, 1463 DMA_TO_DEVICE); 1464 1465 - count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 1466 pdc->ofs = count; 1467 1468 - atmel_uart_writel(port, ATMEL_PDC_TPR, 1469 - pdc->dma_addr + xmit->tail); 1470 atmel_uart_writel(port, ATMEL_PDC_TCR, count); 1471 /* re-enable PDC transmit */ 1472 atmel_uart_writel(port, ATMEL_PDC_PTCR, ATMEL_PDC_TXTEN); ··· 1482 } 1483 } 1484 1485 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 1486 uart_write_wakeup(port); 1487 } 1488 ··· 1490 { 1491 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1492 struct atmel_dma_buffer *pdc = &atmel_port->pdc_tx; 1493 - struct circ_buf *xmit = &port->state->xmit; 1494 1495 - pdc->buf = xmit->buf; 1496 pdc->dma_addr = dma_map_single(port->dev, 1497 pdc->buf, 1498 UART_XMIT_SIZE, ··· 2937 2938 if (!atmel_use_pdc_rx(&atmel_port->uart)) { 2939 ret = -ENOMEM; 2940 - data = kmalloc_array(ATMEL_SERIAL_RINGSIZE, 2941 - sizeof(struct atmel_uart_char), 2942 - GFP_KERNEL); 2943 if (!data) 2944 goto err_clk_disable_unprepare; 2945 atmel_port->rx_ring.buf = data;
··· 96 * can contain up to 1024 characters in PIO mode and up to 4096 characters in 97 * DMA mode. 98 */ 99 + #define ATMEL_SERIAL_RINGSIZE 1024 100 + #define ATMEL_SERIAL_RX_SIZE array_size(sizeof(struct atmel_uart_char), \ 101 + ATMEL_SERIAL_RINGSIZE) 102 103 /* 104 * at91: 6 USARTs and one DBGU port (SAM9260) ··· 132 struct dma_async_tx_descriptor *desc_rx; 133 dma_cookie_t cookie_tx; 134 dma_cookie_t cookie_rx; 135 + dma_addr_t tx_phys; 136 + dma_addr_t rx_phys; 137 struct tasklet_struct tasklet_rx; 138 struct tasklet_struct tasklet_tx; 139 atomic_t tasklet_shutdown; ··· 857 { 858 struct atmel_uart_port *atmel_port = arg; 859 struct uart_port *port = &atmel_port->uart; 860 + struct tty_port *tport = &port->state->port; 861 struct dma_chan *chan = atmel_port->chan_tx; 862 unsigned long flags; 863 ··· 873 atmel_port->desc_tx = NULL; 874 spin_unlock(&atmel_port->lock_tx); 875 876 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 877 uart_write_wakeup(port); 878 879 /* 880 + * xmit is a circular buffer so, if we have just send data from the 881 + * tail to the end, now we have to transmit the remaining data from the 882 + * beginning to the head. 883 */ 884 + if (!kfifo_is_empty(&tport->xmit_fifo)) 885 atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx); 886 else if (atmel_uart_is_half_duplex(port)) { 887 /* ··· 904 if (chan) { 905 dmaengine_terminate_all(chan); 906 dma_release_channel(chan); 907 + dma_unmap_single(port->dev, atmel_port->tx_phys, 908 + UART_XMIT_SIZE, DMA_TO_DEVICE); 909 } 910 911 atmel_port->desc_tx = NULL; ··· 919 static void atmel_tx_dma(struct uart_port *port) 920 { 921 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 922 + struct tty_port *tport = &port->state->port; 923 struct dma_chan *chan = atmel_port->chan_tx; 924 struct dma_async_tx_descriptor *desc; 925 + struct scatterlist sgl[2], *sg; 926 + unsigned int tx_len, tail, part1_len, part2_len, sg_len; 927 dma_addr_t phys_addr; 928 929 /* Make sure we have an idle channel */ 930 if (atmel_port->desc_tx != NULL) 931 return; 932 933 + if (!kfifo_is_empty(&tport->xmit_fifo) && !uart_tx_stopped(port)) { 934 /* 935 * DMA is idle now. 936 * Port xmit buffer is already mapped, ··· 940 * Take the port lock to get a 941 * consistent xmit buffer state. 
942 */ 943 + tx_len = kfifo_out_linear(&tport->xmit_fifo, &tail, 944 + UART_XMIT_SIZE); 945 946 if (atmel_port->fifo_size) { 947 /* multi data mode */ ··· 956 957 sg_init_table(sgl, 2); 958 sg_len = 0; 959 + phys_addr = atmel_port->tx_phys + tail; 960 if (part1_len) { 961 sg = &sgl[sg_len++]; 962 sg_dma_address(sg) = phys_addr; ··· 973 974 /* 975 * save tx_len so atmel_complete_tx_dma() will increase 976 + * tail correctly 977 */ 978 atmel_port->tx_len = tx_len; 979 ··· 988 return; 989 } 990 991 + dma_sync_single_for_device(port->dev, atmel_port->tx_phys, 992 + UART_XMIT_SIZE, DMA_TO_DEVICE); 993 994 atmel_port->desc_tx = desc; 995 desc->callback = atmel_complete_tx_dma; ··· 1003 dma_async_issue_pending(chan); 1004 } 1005 1006 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 1007 uart_write_wakeup(port); 1008 } 1009 1010 static int atmel_prepare_tx_dma(struct uart_port *port) 1011 { 1012 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1013 + struct tty_port *tport = &port->state->port; 1014 struct device *mfd_dev = port->dev->parent; 1015 dma_cap_mask_t mask; 1016 struct dma_slave_config config; 1017 struct dma_chan *chan; 1018 + int ret; 1019 1020 dma_cap_zero(mask); 1021 dma_cap_set(DMA_SLAVE, mask); ··· 1029 dma_chan_name(atmel_port->chan_tx)); 1030 1031 spin_lock_init(&atmel_port->lock_tx); 1032 /* UART circular tx buffer is an aligned page. */ 1033 + BUG_ON(!PAGE_ALIGNED(tport->xmit_buf)); 1034 + atmel_port->tx_phys = dma_map_single(port->dev, tport->xmit_buf, 1035 + UART_XMIT_SIZE, DMA_TO_DEVICE); 1036 1037 + if (dma_mapping_error(port->dev, atmel_port->tx_phys)) { 1038 dev_dbg(port->dev, "need to release resource of dma\n"); 1039 goto chan_err; 1040 } else { 1041 + dev_dbg(port->dev, "%s: mapped %lu@%p to %pad\n", __func__, 1042 + UART_XMIT_SIZE, tport->xmit_buf, 1043 + &atmel_port->tx_phys); 1044 } 1045 1046 /* Configure the slave DMA */ ··· 1093 if (chan) { 1094 dmaengine_terminate_all(chan); 1095 dma_release_channel(chan); 1096 + dma_unmap_single(port->dev, atmel_port->rx_phys, 1097 + ATMEL_SERIAL_RX_SIZE, DMA_FROM_DEVICE); 1098 } 1099 1100 atmel_port->desc_rx = NULL; ··· 1127 } 1128 1129 /* CPU claims ownership of RX DMA buffer */ 1130 + dma_sync_single_for_cpu(port->dev, atmel_port->rx_phys, 1131 + ATMEL_SERIAL_RX_SIZE, DMA_FROM_DEVICE); 1132 1133 /* 1134 * ring->head points to the end of data already written by the DMA. ··· 1139 * The current transfer size should not be larger than the dma buffer 1140 * length. 1141 */ 1142 + ring->head = ATMEL_SERIAL_RX_SIZE - state.residue; 1143 + BUG_ON(ring->head > ATMEL_SERIAL_RX_SIZE); 1144 /* 1145 * At this point ring->head may point to the first byte right after the 1146 * last byte of the dma buffer: ··· 1154 * tail to the end of the buffer then reset tail. 
1155 */ 1156 if (ring->head < ring->tail) { 1157 + count = ATMEL_SERIAL_RX_SIZE - ring->tail; 1158 1159 tty_insert_flip_string(tport, ring->buf + ring->tail, count); 1160 ring->tail = 0; ··· 1167 1168 tty_insert_flip_string(tport, ring->buf + ring->tail, count); 1169 /* Wrap ring->head if needed */ 1170 + if (ring->head >= ATMEL_SERIAL_RX_SIZE) 1171 ring->head = 0; 1172 ring->tail = ring->head; 1173 port->icount.rx += count; 1174 } 1175 1176 /* USART retreives ownership of RX DMA buffer */ 1177 + dma_sync_single_for_device(port->dev, atmel_port->rx_phys, 1178 + ATMEL_SERIAL_RX_SIZE, DMA_FROM_DEVICE); 1179 1180 tty_flip_buffer_push(tport); 1181 ··· 1193 struct dma_slave_config config; 1194 struct circ_buf *ring; 1195 struct dma_chan *chan; 1196 + int ret; 1197 1198 ring = &atmel_port->rx_ring; 1199 ··· 1210 dma_chan_name(atmel_port->chan_rx)); 1211 1212 spin_lock_init(&atmel_port->lock_rx); 1213 /* UART circular rx buffer is an aligned page. */ 1214 BUG_ON(!PAGE_ALIGNED(ring->buf)); 1215 + atmel_port->rx_phys = dma_map_single(port->dev, ring->buf, 1216 + ATMEL_SERIAL_RX_SIZE, 1217 + DMA_FROM_DEVICE); 1218 1219 + if (dma_mapping_error(port->dev, atmel_port->rx_phys)) { 1220 dev_dbg(port->dev, "need to release resource of dma\n"); 1221 goto chan_err; 1222 } else { 1223 + dev_dbg(port->dev, "%s: mapped %zu@%p to %pad\n", __func__, 1224 + ATMEL_SERIAL_RX_SIZE, ring->buf, &atmel_port->rx_phys); 1225 } 1226 1227 /* Configure the slave DMA */ ··· 1250 * each one is half ring buffer size 1251 */ 1252 desc = dmaengine_prep_dma_cyclic(atmel_port->chan_rx, 1253 + atmel_port->rx_phys, 1254 + ATMEL_SERIAL_RX_SIZE, 1255 + ATMEL_SERIAL_RX_SIZE / 2, 1256 DMA_DEV_TO_MEM, 1257 DMA_PREP_INTERRUPT); 1258 if (!desc) { ··· 1459 static void atmel_tx_pdc(struct uart_port *port) 1460 { 1461 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1462 + struct tty_port *tport = &port->state->port; 1463 struct atmel_dma_buffer *pdc = &atmel_port->pdc_tx; 1464 1465 /* nothing left to transmit? */ 1466 if (atmel_uart_readl(port, ATMEL_PDC_TCR)) ··· 1474 /* disable PDC transmit */ 1475 atmel_uart_writel(port, ATMEL_PDC_PTCR, ATMEL_PDC_TXTDIS); 1476 1477 + if (!kfifo_is_empty(&tport->xmit_fifo) && !uart_tx_stopped(port)) { 1478 + unsigned int count, tail; 1479 + 1480 dma_sync_single_for_device(port->dev, 1481 pdc->dma_addr, 1482 pdc->dma_size, 1483 DMA_TO_DEVICE); 1484 1485 + count = kfifo_out_linear(&tport->xmit_fifo, &tail, 1486 + UART_XMIT_SIZE); 1487 pdc->ofs = count; 1488 1489 + atmel_uart_writel(port, ATMEL_PDC_TPR, pdc->dma_addr + tail); 1490 atmel_uart_writel(port, ATMEL_PDC_TCR, count); 1491 /* re-enable PDC transmit */ 1492 atmel_uart_writel(port, ATMEL_PDC_PTCR, ATMEL_PDC_TXTEN); ··· 1498 } 1499 } 1500 1501 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 1502 uart_write_wakeup(port); 1503 } 1504 ··· 1506 { 1507 struct atmel_uart_port *atmel_port = to_atmel_uart_port(port); 1508 struct atmel_dma_buffer *pdc = &atmel_port->pdc_tx; 1509 + struct tty_port *tport = &port->state->port; 1510 1511 + pdc->buf = tport->xmit_buf; 1512 pdc->dma_addr = dma_map_single(port->dev, 1513 pdc->buf, 1514 UART_XMIT_SIZE, ··· 2953 2954 if (!atmel_use_pdc_rx(&atmel_port->uart)) { 2955 ret = -ENOMEM; 2956 + data = kmalloc(ATMEL_SERIAL_RX_SIZE, GFP_KERNEL); 2957 if (!data) 2958 goto err_clk_disable_unprepare; 2959 atmel_port->rx_ring.buf = data;
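The atmel conversion above maps the kfifo's backing page once (dma_map_single() of tport->xmit_buf into tx_phys) and then, per transfer, only asks the fifo how much contiguous data is available and at which offset. Below is a minimal sketch of that pattern, assuming kfifo_out_linear() reports the length of the next linear run and its tail index without consuming anything; my_uart_port, my_issue_dma() and my_dma_tx_complete() are hypothetical names, not the driver's.

#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>
#include <linux/types.h>

struct my_uart_port {
        struct uart_port port;
        dma_addr_t tx_phys;     /* dma_map_single() of tport->xmit_buf */
};

/* Hypothetical helper, provided elsewhere: program one linear DMA chunk. */
void my_issue_dma(struct my_uart_port *s, dma_addr_t src, unsigned int len);

static void my_dma_tx(struct my_uart_port *s)
{
        struct tty_port *tport = &s->port.state->port;
        unsigned int tail, len;

        /* Length and start offset of the next contiguous run; nothing is consumed yet. */
        len = kfifo_out_linear(&tport->xmit_fifo, &tail, UART_XMIT_SIZE);
        if (!len)
                return;

        my_issue_dma(s, s->tx_phys + tail, len);
}

static void my_dma_tx_complete(struct my_uart_port *s, unsigned int len)
{
        struct tty_port *tport = &s->port.state->port;

        /* Only now drop the transmitted bytes and account them in icount.tx. */
        uart_xmit_advance(&s->port, len);

        if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
                uart_write_wakeup(&s->port);
}

The point of the split is that the data stays in the fifo until completion, so a failed or cancelled descriptor loses nothing.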
+6 -6
drivers/tty/serial/clps711x.c
··· 146 { 147 struct uart_port *port = dev_id; 148 struct clps711x_port *s = dev_get_drvdata(port->dev); 149 - struct circ_buf *xmit = &port->state->xmit; 150 151 if (port->x_char) { 152 writew(port->x_char, port->membase + UARTDR_OFFSET); ··· 156 return IRQ_HANDLED; 157 } 158 159 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 160 if (s->tx_enabled) { 161 disable_irq_nosync(port->irq); 162 s->tx_enabled = 0; ··· 164 return IRQ_HANDLED; 165 } 166 167 - while (!uart_circ_empty(xmit)) { 168 u32 sysflg = 0; 169 170 - writew(xmit->buf[xmit->tail], port->membase + UARTDR_OFFSET); 171 - uart_xmit_advance(port, 1); 172 173 regmap_read(s->syscon, SYSFLG_OFFSET, &sysflg); 174 if (sysflg & SYSFLG_UTXFF) 175 break; 176 } 177 178 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 179 uart_write_wakeup(port); 180 181 return IRQ_HANDLED;
··· 146 { 147 struct uart_port *port = dev_id; 148 struct clps711x_port *s = dev_get_drvdata(port->dev); 149 + struct tty_port *tport = &port->state->port; 150 + unsigned char c; 151 152 if (port->x_char) { 153 writew(port->x_char, port->membase + UARTDR_OFFSET); ··· 155 return IRQ_HANDLED; 156 } 157 158 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 159 if (s->tx_enabled) { 160 disable_irq_nosync(port->irq); 161 s->tx_enabled = 0; ··· 163 return IRQ_HANDLED; 164 } 165 166 + while (uart_fifo_get(port, &c)) { 167 u32 sysflg = 0; 168 169 + writew(c, port->membase + UARTDR_OFFSET); 170 171 regmap_read(s->syscon, SYSFLG_OFFSET, &sysflg); 172 if (sysflg & SYSFLG_UTXFF) 173 break; 174 } 175 176 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 177 uart_write_wakeup(port); 178 179 return IRQ_HANDLED;
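For the PIO drivers converted in this series (clps711x here, digicolor, dz, linflexuart and ip22zilog below) the common shape is the same: pop bytes with uart_fifo_get() until the device fills up, then wake writers once fewer than WAKEUP_CHARS remain queued. A minimal sketch, assuming uart_fifo_get() pops one byte from port->state->port.xmit_fifo and takes care of the icount.tx accounting; my_tx_full() and my_write_byte() are hypothetical hardware accessors.

#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>
#include <linux/types.h>

/* Hypothetical hardware accessors, provided elsewhere. */
bool my_tx_full(struct uart_port *port);
void my_write_byte(struct uart_port *port, unsigned char c);

static void my_transmit_chars(struct uart_port *port)
{
        struct tty_port *tport = &port->state->port;
        unsigned char c;

        if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port))
                return;

        /* Push bytes until the device FIFO fills or the kfifo runs dry. */
        while (!my_tx_full(port) && uart_fifo_get(port, &c))
                my_write_byte(port, c);

        /* Let writers blocked on a full xmit buffer make progress again. */
        if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
                uart_write_wakeup(port);
}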
+7 -13
drivers/tty/serial/cpm_uart.c
··· 648 int count; 649 struct uart_cpm_port *pinfo = 650 container_of(port, struct uart_cpm_port, port); 651 - struct circ_buf *xmit = &port->state->xmit; 652 653 /* Handle xon/xoff */ 654 if (port->x_char) { ··· 673 return 1; 674 } 675 676 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 677 cpm_uart_stop_tx(port); 678 return 0; 679 } ··· 681 /* Pick next descriptor and fill from buffer */ 682 bdp = pinfo->tx_cur; 683 684 - while (!(in_be16(&bdp->cbd_sc) & BD_SC_READY) && !uart_circ_empty(xmit)) { 685 - count = 0; 686 p = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr), pinfo); 687 - while (count < pinfo->tx_fifosize) { 688 - *p++ = xmit->buf[xmit->tail]; 689 - uart_xmit_advance(port, 1); 690 - count++; 691 - if (uart_circ_empty(xmit)) 692 - break; 693 - } 694 out_be16(&bdp->cbd_datlen, count); 695 setbits16(&bdp->cbd_sc, BD_SC_READY); 696 /* Get next BD. */ ··· 695 } 696 pinfo->tx_cur = bdp; 697 698 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 699 uart_write_wakeup(port); 700 701 - if (uart_circ_empty(xmit)) { 702 cpm_uart_stop_tx(port); 703 return 0; 704 }
··· 648 int count; 649 struct uart_cpm_port *pinfo = 650 container_of(port, struct uart_cpm_port, port); 651 + struct tty_port *tport = &port->state->port; 652 653 /* Handle xon/xoff */ 654 if (port->x_char) { ··· 673 return 1; 674 } 675 676 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 677 cpm_uart_stop_tx(port); 678 return 0; 679 } ··· 681 /* Pick next descriptor and fill from buffer */ 682 bdp = pinfo->tx_cur; 683 684 + while (!(in_be16(&bdp->cbd_sc) & BD_SC_READY) && 685 + !kfifo_is_empty(&tport->xmit_fifo)) { 686 p = cpm2cpu_addr(in_be32(&bdp->cbd_bufaddr), pinfo); 687 + count = uart_fifo_out(port, p, pinfo->tx_fifosize); 688 out_be16(&bdp->cbd_datlen, count); 689 setbits16(&bdp->cbd_sc, BD_SC_READY); 690 /* Get next BD. */ ··· 701 } 702 pinfo->tx_cur = bdp; 703 704 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 705 uart_write_wakeup(port); 706 707 + if (kfifo_is_empty(&tport->xmit_fifo)) { 708 cpm_uart_stop_tx(port); 709 return 0; 710 }
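Where the hardware wants a whole buffer rather than single bytes, the cpm_uart hunk above collapses its per-character copy loop into one uart_fifo_out() call. A sketch under the assumption that uart_fifo_out() copies up to the requested number of bytes out of the xmit kfifo and accounts them; struct my_bd is a hypothetical stand-in for the real buffer descriptor.

#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>
#include <linux/types.h>

struct my_bd {
        u8 *buf;        /* CPU-visible data area of the descriptor */
        u16 len;        /* number of valid bytes handed to the hardware */
};

static bool my_fill_descriptor(struct uart_port *port, struct my_bd *bd,
                               unsigned int fifosize)
{
        struct tty_port *tport = &port->state->port;
        unsigned int count;

        if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port))
                return false;

        /* One call replaces the old copy-one-byte-and-advance loop. */
        count = uart_fifo_out(port, bd->buf, fifosize);
        bd->len = count;

        return count != 0;
}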
+6 -6
drivers/tty/serial/digicolor-usart.c
··· 179 180 static void digicolor_uart_tx(struct uart_port *port) 181 { 182 - struct circ_buf *xmit = &port->state->xmit; 183 unsigned long flags; 184 185 if (digicolor_uart_tx_full(port)) 186 return; ··· 195 goto out; 196 } 197 198 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 199 digicolor_uart_stop_tx(port); 200 goto out; 201 } 202 203 - while (!uart_circ_empty(xmit)) { 204 - writeb(xmit->buf[xmit->tail], port->membase + UA_EMI_REC); 205 - uart_xmit_advance(port, 1); 206 207 if (digicolor_uart_tx_full(port)) 208 break; 209 } 210 211 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 212 uart_write_wakeup(port); 213 214 out:
··· 179 180 static void digicolor_uart_tx(struct uart_port *port) 181 { 182 + struct tty_port *tport = &port->state->port; 183 unsigned long flags; 184 + unsigned char c; 185 186 if (digicolor_uart_tx_full(port)) 187 return; ··· 194 goto out; 195 } 196 197 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 198 digicolor_uart_stop_tx(port); 199 goto out; 200 } 201 202 + while (uart_fifo_get(port, &c)) { 203 + writeb(c, port->membase + UA_EMI_REC); 204 205 if (digicolor_uart_tx_full(port)) 206 break; 207 } 208 209 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 210 uart_write_wakeup(port); 211 212 out:
+6 -7
drivers/tty/serial/dz.c
··· 252 static inline void dz_transmit_chars(struct dz_mux *mux) 253 { 254 struct dz_port *dport = &mux->dport[0]; 255 - struct circ_buf *xmit; 256 unsigned char tmp; 257 u16 status; 258 259 status = dz_in(dport, DZ_CSR); 260 dport = &mux->dport[LINE(status)]; 261 - xmit = &dport->port.state->xmit; 262 263 if (dport->port.x_char) { /* XON/XOFF chars */ 264 dz_out(dport, DZ_TDR, dport->port.x_char); ··· 267 return; 268 } 269 /* If nothing to do or stopped or hardware stopped. */ 270 - if (uart_circ_empty(xmit) || uart_tx_stopped(&dport->port)) { 271 uart_port_lock(&dport->port); 272 dz_stop_tx(&dport->port); 273 uart_port_unlock(&dport->port); ··· 279 * If something to do... (remember the dz has no output fifo, 280 * so we go one char at a time) :-< 281 */ 282 - tmp = xmit->buf[xmit->tail]; 283 dz_out(dport, DZ_TDR, tmp); 284 - uart_xmit_advance(&dport->port, 1); 285 286 - if (uart_circ_chars_pending(xmit) < DZ_WAKEUP_CHARS) 287 uart_write_wakeup(&dport->port); 288 289 /* Are we are done. */ 290 - if (uart_circ_empty(xmit)) { 291 uart_port_lock(&dport->port); 292 dz_stop_tx(&dport->port); 293 uart_port_unlock(&dport->port);
··· 252 static inline void dz_transmit_chars(struct dz_mux *mux) 253 { 254 struct dz_port *dport = &mux->dport[0]; 255 + struct tty_port *tport; 256 unsigned char tmp; 257 u16 status; 258 259 status = dz_in(dport, DZ_CSR); 260 dport = &mux->dport[LINE(status)]; 261 + tport = &dport->port.state->port; 262 263 if (dport->port.x_char) { /* XON/XOFF chars */ 264 dz_out(dport, DZ_TDR, dport->port.x_char); ··· 267 return; 268 } 269 /* If nothing to do or stopped or hardware stopped. */ 270 + if (uart_tx_stopped(&dport->port) || 271 + !uart_fifo_get(&dport->port, &tmp)) { 272 uart_port_lock(&dport->port); 273 dz_stop_tx(&dport->port); 274 uart_port_unlock(&dport->port); ··· 278 * If something to do... (remember the dz has no output fifo, 279 * so we go one char at a time) :-< 280 */ 281 dz_out(dport, DZ_TDR, tmp); 282 283 + if (kfifo_len(&tport->xmit_fifo) < DZ_WAKEUP_CHARS) 284 uart_write_wakeup(&dport->port); 285 286 /* Are we are done. */ 287 + if (kfifo_is_empty(&tport->xmit_fifo)) { 288 uart_port_lock(&dport->port); 289 dz_stop_tx(&dport->port); 290 uart_port_unlock(&dport->port);
+9 -8
drivers/tty/serial/fsl_linflexuart.c
··· 174 175 static inline void linflex_transmit_buffer(struct uart_port *sport) 176 { 177 - struct circ_buf *xmit = &sport->state->xmit; 178 179 - while (!uart_circ_empty(xmit)) { 180 - linflex_put_char(sport, xmit->buf[xmit->tail]); 181 - uart_xmit_advance(sport, 1); 182 } 183 184 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 185 uart_write_wakeup(sport); 186 187 - if (uart_circ_empty(xmit)) 188 linflex_stop_tx(sport); 189 } 190 ··· 201 static irqreturn_t linflex_txint(int irq, void *dev_id) 202 { 203 struct uart_port *sport = dev_id; 204 - struct circ_buf *xmit = &sport->state->xmit; 205 unsigned long flags; 206 207 uart_port_lock_irqsave(sport, &flags); ··· 211 goto out; 212 } 213 214 - if (uart_circ_empty(xmit) || uart_tx_stopped(sport)) { 215 linflex_stop_tx(sport); 216 goto out; 217 }
··· 174 175 static inline void linflex_transmit_buffer(struct uart_port *sport) 176 { 177 + struct tty_port *tport = &sport->state->port; 178 + unsigned char c; 179 180 + while (uart_fifo_get(sport, &c)) { 181 + linflex_put_char(sport, c); 182 + sport->icount.tx++; 183 } 184 185 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 186 uart_write_wakeup(sport); 187 188 + if (kfifo_is_empty(&tport->xmit_fifo)) 189 linflex_stop_tx(sport); 190 } 191 ··· 200 static irqreturn_t linflex_txint(int irq, void *dev_id) 201 { 202 struct uart_port *sport = dev_id; 203 + struct tty_port *tport = &sport->state->port; 204 unsigned long flags; 205 206 uart_port_lock_irqsave(sport, &flags); ··· 210 goto out; 211 } 212 213 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(sport)) { 214 linflex_stop_tx(sport); 215 goto out; 216 }
+19 -26
drivers/tty/serial/fsl_lpuart.c
··· 7 8 #include <linux/bitfield.h> 9 #include <linux/bits.h> 10 #include <linux/clk.h> 11 #include <linux/console.h> 12 #include <linux/delay.h> ··· 474 475 static void lpuart_dma_tx(struct lpuart_port *sport) 476 { 477 - struct circ_buf *xmit = &sport->port.state->xmit; 478 struct scatterlist *sgl = sport->tx_sgl; 479 struct device *dev = sport->port.dev; 480 struct dma_chan *chan = sport->dma_tx_chan; ··· 483 if (sport->dma_tx_in_progress) 484 return; 485 486 - sport->dma_tx_bytes = uart_circ_chars_pending(xmit); 487 - 488 - if (xmit->tail < xmit->head || xmit->head == 0) { 489 - sport->dma_tx_nents = 1; 490 - sg_init_one(sgl, xmit->buf + xmit->tail, sport->dma_tx_bytes); 491 - } else { 492 - sport->dma_tx_nents = 2; 493 - sg_init_table(sgl, 2); 494 - sg_set_buf(sgl, xmit->buf + xmit->tail, 495 - UART_XMIT_SIZE - xmit->tail); 496 - sg_set_buf(sgl + 1, xmit->buf, xmit->head); 497 - } 498 499 ret = dma_map_sg(chan->device->dev, sgl, sport->dma_tx_nents, 500 DMA_TO_DEVICE); ··· 514 515 static bool lpuart_stopped_or_empty(struct uart_port *port) 516 { 517 - return uart_circ_empty(&port->state->xmit) || uart_tx_stopped(port); 518 } 519 520 static void lpuart_dma_tx_complete(void *arg) 521 { 522 struct lpuart_port *sport = arg; 523 struct scatterlist *sgl = &sport->tx_sgl[0]; 524 - struct circ_buf *xmit = &sport->port.state->xmit; 525 struct dma_chan *chan = sport->dma_tx_chan; 526 unsigned long flags; 527 ··· 539 sport->dma_tx_in_progress = false; 540 uart_port_unlock_irqrestore(&sport->port, flags); 541 542 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 543 uart_write_wakeup(&sport->port); 544 545 if (waitqueue_active(&sport->dma_wait)) { ··· 750 751 static inline void lpuart32_transmit_buffer(struct lpuart_port *sport) 752 { 753 - struct circ_buf *xmit = &sport->port.state->xmit; 754 unsigned long txcnt; 755 756 if (sport->port.x_char) { 757 lpuart32_write(&sport->port, sport->port.x_char, UARTDATA); ··· 769 txcnt = lpuart32_read(&sport->port, UARTWATER); 770 txcnt = txcnt >> UARTWATER_TXCNT_OFF; 771 txcnt &= UARTWATER_COUNT_MASK; 772 - while (!uart_circ_empty(xmit) && (txcnt < sport->txfifo_size)) { 773 - lpuart32_write(&sport->port, xmit->buf[xmit->tail], UARTDATA); 774 - uart_xmit_advance(&sport->port, 1); 775 txcnt = lpuart32_read(&sport->port, UARTWATER); 776 txcnt = txcnt >> UARTWATER_TXCNT_OFF; 777 txcnt &= UARTWATER_COUNT_MASK; 778 } 779 780 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 781 uart_write_wakeup(&sport->port); 782 783 - if (uart_circ_empty(xmit)) 784 lpuart32_stop_tx(&sport->port); 785 } 786 ··· 2879 sport->ipg_clk = devm_clk_get(&pdev->dev, "ipg"); 2880 if (IS_ERR(sport->ipg_clk)) { 2881 ret = PTR_ERR(sport->ipg_clk); 2882 - dev_err(&pdev->dev, "failed to get uart ipg clk: %d\n", ret); 2883 - return ret; 2884 } 2885 2886 sport->baud_clk = NULL; ··· 2887 sport->baud_clk = devm_clk_get(&pdev->dev, "baud"); 2888 if (IS_ERR(sport->baud_clk)) { 2889 ret = PTR_ERR(sport->baud_clk); 2890 - dev_err(&pdev->dev, "failed to get uart baud clk: %d\n", ret); 2891 - return ret; 2892 } 2893 } 2894
··· 7 8 #include <linux/bitfield.h> 9 #include <linux/bits.h> 10 + #include <linux/circ_buf.h> 11 #include <linux/clk.h> 12 #include <linux/console.h> 13 #include <linux/delay.h> ··· 473 474 static void lpuart_dma_tx(struct lpuart_port *sport) 475 { 476 + struct tty_port *tport = &sport->port.state->port; 477 struct scatterlist *sgl = sport->tx_sgl; 478 struct device *dev = sport->port.dev; 479 struct dma_chan *chan = sport->dma_tx_chan; ··· 482 if (sport->dma_tx_in_progress) 483 return; 484 485 + sg_init_table(sgl, ARRAY_SIZE(sport->tx_sgl)); 486 + sport->dma_tx_bytes = kfifo_len(&tport->xmit_fifo); 487 + sport->dma_tx_nents = kfifo_dma_out_prepare(&tport->xmit_fifo, sgl, 488 + ARRAY_SIZE(sport->tx_sgl), sport->dma_tx_bytes); 489 490 ret = dma_map_sg(chan->device->dev, sgl, sport->dma_tx_nents, 491 DMA_TO_DEVICE); ··· 521 522 static bool lpuart_stopped_or_empty(struct uart_port *port) 523 { 524 + return kfifo_is_empty(&port->state->port.xmit_fifo) || 525 + uart_tx_stopped(port); 526 } 527 528 static void lpuart_dma_tx_complete(void *arg) 529 { 530 struct lpuart_port *sport = arg; 531 struct scatterlist *sgl = &sport->tx_sgl[0]; 532 + struct tty_port *tport = &sport->port.state->port; 533 struct dma_chan *chan = sport->dma_tx_chan; 534 unsigned long flags; 535 ··· 545 sport->dma_tx_in_progress = false; 546 uart_port_unlock_irqrestore(&sport->port, flags); 547 548 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 549 uart_write_wakeup(&sport->port); 550 551 if (waitqueue_active(&sport->dma_wait)) { ··· 756 757 static inline void lpuart32_transmit_buffer(struct lpuart_port *sport) 758 { 759 + struct tty_port *tport = &sport->port.state->port; 760 unsigned long txcnt; 761 + unsigned char c; 762 763 if (sport->port.x_char) { 764 lpuart32_write(&sport->port, sport->port.x_char, UARTDATA); ··· 774 txcnt = lpuart32_read(&sport->port, UARTWATER); 775 txcnt = txcnt >> UARTWATER_TXCNT_OFF; 776 txcnt &= UARTWATER_COUNT_MASK; 777 + while (txcnt < sport->txfifo_size && 778 + uart_fifo_get(&sport->port, &c)) { 779 + lpuart32_write(&sport->port, c, UARTDATA); 780 txcnt = lpuart32_read(&sport->port, UARTWATER); 781 txcnt = txcnt >> UARTWATER_TXCNT_OFF; 782 txcnt &= UARTWATER_COUNT_MASK; 783 } 784 785 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 786 uart_write_wakeup(&sport->port); 787 788 + if (kfifo_is_empty(&tport->xmit_fifo)) 789 lpuart32_stop_tx(&sport->port); 790 } 791 ··· 2884 sport->ipg_clk = devm_clk_get(&pdev->dev, "ipg"); 2885 if (IS_ERR(sport->ipg_clk)) { 2886 ret = PTR_ERR(sport->ipg_clk); 2887 + return dev_err_probe(&pdev->dev, ret, "failed to get uart ipg clk\n"); 2888 } 2889 2890 sport->baud_clk = NULL; ··· 2893 sport->baud_clk = devm_clk_get(&pdev->dev, "baud"); 2894 if (IS_ERR(sport->baud_clk)) { 2895 ret = PTR_ERR(sport->baud_clk); 2896 + return dev_err_probe(&pdev->dev, ret, "failed to get uart baud clk\n"); 2897 } 2898 } 2899
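For the DMA-engine drivers (lpuart here, imx below) the hand-rolled one-or-two-segment scatterlist setup is replaced by kfifo_dma_out_prepare(), which builds the sg entries straight from the fifo contents without consuming them. A sketch of that preparation step, assuming at most two entries are needed (one per contiguous run); my_prepare_tx_sg() and its callers are hypothetical.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/kfifo.h>
#include <linux/scatterlist.h>
#include <linux/serial_core.h>
#include <linux/tty.h>

static int my_prepare_tx_sg(struct uart_port *port, struct device *dma_dev,
                            struct scatterlist *sgl, unsigned int max_nents,
                            unsigned int *bytes)
{
        struct tty_port *tport = &port->state->port;
        unsigned int nents;

        sg_init_table(sgl, max_nents);
        *bytes = kfifo_len(&tport->xmit_fifo);

        /* A wrapped fifo yields two entries, a linear one yields one. */
        nents = kfifo_dma_out_prepare(&tport->xmit_fifo, sgl, max_nents, *bytes);
        if (!nents)
                return -ENODATA;

        if (!dma_map_sg(dma_dev, sgl, nents, DMA_TO_DEVICE))
                return -EIO;

        return nents;
}

On completion the driver presumably unmaps the scatterlist and drops the transmitted bytes from the fifo (uart_xmit_advance() would do both the drop and the icount.tx bookkeeping), then applies the usual WAKEUP_CHARS check.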
+6 -19
drivers/tty/serial/icom.c
··· 877 static int icom_write(struct uart_port *port) 878 { 879 struct icom_port *icom_port = to_icom_port(port); 880 unsigned long data_count; 881 unsigned char cmdReg; 882 unsigned long offset; 883 - int temp_tail = port->state->xmit.tail; 884 885 trace(icom_port, "WRITE", 0); 886 ··· 890 return 0; 891 } 892 893 - data_count = 0; 894 - while ((port->state->xmit.head != temp_tail) && 895 - (data_count <= XMIT_BUFF_SZ)) { 896 - 897 - icom_port->xmit_buf[data_count++] = 898 - port->state->xmit.buf[temp_tail]; 899 - 900 - temp_tail++; 901 - temp_tail &= (UART_XMIT_SIZE - 1); 902 - } 903 904 if (data_count) { 905 icom_port->statStg->xmit[0].flags = ··· 948 949 static void xmit_interrupt(u16 port_int_reg, struct icom_port *icom_port) 950 { 951 - u16 count, i; 952 953 if (port_int_reg & (INT_XMIT_COMPLETED)) { 954 trace(icom_port, "XMIT_COMPLETE", 0); ··· 961 count = le16_to_cpu(icom_port->statStg->xmit[0].leLength); 962 icom_port->uart_port.icount.tx += count; 963 964 - for (i=0; i<count && 965 - !uart_circ_empty(&icom_port->uart_port.state->xmit); i++) { 966 - 967 - icom_port->uart_port.state->xmit.tail++; 968 - icom_port->uart_port.state->xmit.tail &= 969 - (UART_XMIT_SIZE - 1); 970 - } 971 972 if (!icom_write(&icom_port->uart_port)) 973 /* activate write queue */
··· 877 static int icom_write(struct uart_port *port) 878 { 879 struct icom_port *icom_port = to_icom_port(port); 880 + struct tty_port *tport = &port->state->port; 881 unsigned long data_count; 882 unsigned char cmdReg; 883 unsigned long offset; 884 885 trace(icom_port, "WRITE", 0); 886 ··· 890 return 0; 891 } 892 893 + data_count = kfifo_out_peek(&tport->xmit_fifo, icom_port->xmit_buf, 894 + XMIT_BUFF_SZ); 895 896 if (data_count) { 897 icom_port->statStg->xmit[0].flags = ··· 956 957 static void xmit_interrupt(u16 port_int_reg, struct icom_port *icom_port) 958 { 959 + struct tty_port *tport = &icom_port->uart_port.state->port; 960 + u16 count; 961 962 if (port_int_reg & (INT_XMIT_COMPLETED)) { 963 trace(icom_port, "XMIT_COMPLETE", 0); ··· 968 count = le16_to_cpu(icom_port->statStg->xmit[0].leLength); 969 icom_port->uart_port.icount.tx += count; 970 971 + kfifo_skip_count(&tport->xmit_fifo, count); 972 973 if (!icom_write(&icom_port->uart_port)) 974 /* activate write queue */
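The icom change above is the peek-then-commit variant: data is staged for the adapter with kfifo_out_peek(), which copies without consuming, and only the amount the hardware later reports as sent is discarded with kfifo_skip_count(). A small sketch of that split; MY_XMIT_SZ and the my_* helpers are hypothetical.

#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>
#include <linux/types.h>

#define MY_XMIT_SZ      128     /* hypothetical size of one hardware buffer */

static unsigned int my_stage_tx(struct uart_port *port, u8 *hw_buf)
{
        struct tty_port *tport = &port->state->port;

        /* Copy up to one hardware buffer; the kfifo is left untouched. */
        return kfifo_out_peek(&tport->xmit_fifo, hw_buf, MY_XMIT_SZ);
}

static void my_tx_done(struct uart_port *port, unsigned int sent)
{
        struct tty_port *tport = &port->state->port;

        /* Commit only what really went out; anything else stays queued. */
        kfifo_skip_count(&tport->xmit_fifo, sent);
        port->icount.tx += sent;

        if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
                uart_write_wakeup(port);
}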
+28 -35
drivers/tty/serial/imx.c
··· 8 * Copyright (C) 2004 Pengutronix 9 */ 10 11 #include <linux/module.h> 12 #include <linux/ioport.h> 13 #include <linux/init.h> ··· 27 #include <linux/slab.h> 28 #include <linux/of.h> 29 #include <linux/io.h> 30 #include <linux/dma-mapping.h> 31 32 #include <asm/irq.h> ··· 523 /* called with port.lock taken and irqs off */ 524 static inline void imx_uart_transmit_buffer(struct imx_port *sport) 525 { 526 - struct circ_buf *xmit = &sport->port.state->xmit; 527 528 if (sport->port.x_char) { 529 /* Send next char */ ··· 534 return; 535 } 536 537 - if (uart_circ_empty(xmit) || uart_tx_stopped(&sport->port)) { 538 imx_uart_stop_tx(&sport->port); 539 return; 540 } ··· 559 return; 560 } 561 562 - while (!uart_circ_empty(xmit) && 563 - !(imx_uart_readl(sport, imx_uart_uts_reg(sport)) & UTS_TXFULL)) { 564 - /* send xmit->buf[xmit->tail] 565 - * out the port here */ 566 - imx_uart_writel(sport, xmit->buf[xmit->tail], URTX0); 567 - uart_xmit_advance(&sport->port, 1); 568 - } 569 570 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 571 uart_write_wakeup(&sport->port); 572 573 - if (uart_circ_empty(xmit)) 574 imx_uart_stop_tx(&sport->port); 575 } 576 577 static void imx_uart_dma_tx_callback(void *data) 578 { 579 struct imx_port *sport = data; 580 struct scatterlist *sgl = &sport->tx_sgl[0]; 581 - struct circ_buf *xmit = &sport->port.state->xmit; 582 unsigned long flags; 583 u32 ucr1; 584 ··· 592 593 sport->dma_is_txing = 0; 594 595 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 596 uart_write_wakeup(&sport->port); 597 598 - if (!uart_circ_empty(xmit) && !uart_tx_stopped(&sport->port)) 599 imx_uart_dma_tx(sport); 600 else if (sport->port.rs485.flags & SER_RS485_ENABLED) { 601 u32 ucr4 = imx_uart_readl(sport, UCR4); ··· 610 /* called with port.lock taken and irqs off */ 611 static void imx_uart_dma_tx(struct imx_port *sport) 612 { 613 - struct circ_buf *xmit = &sport->port.state->xmit; 614 struct scatterlist *sgl = sport->tx_sgl; 615 struct dma_async_tx_descriptor *desc; 616 struct dma_chan *chan = sport->dma_chan_tx; ··· 625 ucr4 &= ~UCR4_TCEN; 626 imx_uart_writel(sport, ucr4, UCR4); 627 628 - sport->tx_bytes = uart_circ_chars_pending(xmit); 629 - 630 - if (xmit->tail < xmit->head || xmit->head == 0) { 631 - sport->dma_tx_nents = 1; 632 - sg_init_one(sgl, xmit->buf + xmit->tail, sport->tx_bytes); 633 - } else { 634 - sport->dma_tx_nents = 2; 635 - sg_init_table(sgl, 2); 636 - sg_set_buf(sgl, xmit->buf + xmit->tail, 637 - UART_XMIT_SIZE - xmit->tail); 638 - sg_set_buf(sgl + 1, xmit->buf, xmit->head); 639 - } 640 641 ret = dma_map_sg(dev, sgl, sport->dma_tx_nents, DMA_TO_DEVICE); 642 if (ret == 0) { ··· 646 desc->callback = imx_uart_dma_tx_callback; 647 desc->callback_param = sport; 648 649 - dev_dbg(dev, "TX: prepare to send %lu bytes by DMA.\n", 650 - uart_circ_chars_pending(xmit)); 651 652 ucr1 = imx_uart_readl(sport, UCR1); 653 ucr1 |= UCR1_TXDMAEN; ··· 663 static void imx_uart_start_tx(struct uart_port *port) 664 { 665 struct imx_port *sport = (struct imx_port *)port; 666 u32 ucr1; 667 668 - if (!sport->port.x_char && uart_circ_empty(&port->state->xmit)) 669 return; 670 671 /* ··· 742 return; 743 } 744 745 - if (!uart_circ_empty(&port->state->xmit) && 746 !uart_tx_stopped(port)) 747 imx_uart_dma_tx(sport); 748 return; ··· 1305 1306 } 1307 1308 - #define TXTL_DEFAULT 2 /* reset default */ 1309 #define RXTL_DEFAULT 8 /* 8 characters or aging timer */ 1310 #define TXTL_DMA 8 /* DMA burst setting */ 1311 #define RXTL_DMA 9 /* DMA burst setting */ ··· 2003 struct imx_port *sport = 
imx_uart_ports[co->index]; 2004 struct imx_port_ucrs old_ucr; 2005 unsigned long flags; 2006 - unsigned int ucr1; 2007 int locked = 1; 2008 2009 if (sport->port.sysrq) ··· 2034 * Finally, wait for transmitter to become empty 2035 * and restore UCR1/2/3 2036 */ 2037 - while (!(imx_uart_readl(sport, USR2) & USR2_TXDC)); 2038 - 2039 imx_uart_ucrs_restore(sport, &old_ucr); 2040 2041 if (locked)
··· 8 * Copyright (C) 2004 Pengutronix 9 */ 10 11 + #include <linux/circ_buf.h> 12 #include <linux/module.h> 13 #include <linux/ioport.h> 14 #include <linux/init.h> ··· 26 #include <linux/slab.h> 27 #include <linux/of.h> 28 #include <linux/io.h> 29 + #include <linux/iopoll.h> 30 #include <linux/dma-mapping.h> 31 32 #include <asm/irq.h> ··· 521 /* called with port.lock taken and irqs off */ 522 static inline void imx_uart_transmit_buffer(struct imx_port *sport) 523 { 524 + struct tty_port *tport = &sport->port.state->port; 525 + unsigned char c; 526 527 if (sport->port.x_char) { 528 /* Send next char */ ··· 531 return; 532 } 533 534 + if (kfifo_is_empty(&tport->xmit_fifo) || 535 + uart_tx_stopped(&sport->port)) { 536 imx_uart_stop_tx(&sport->port); 537 return; 538 } ··· 555 return; 556 } 557 558 + while (!(imx_uart_readl(sport, imx_uart_uts_reg(sport)) & UTS_TXFULL) && 559 + uart_fifo_get(&sport->port, &c)) 560 + imx_uart_writel(sport, c, URTX0); 561 562 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 563 uart_write_wakeup(&sport->port); 564 565 + if (kfifo_is_empty(&tport->xmit_fifo)) 566 imx_uart_stop_tx(&sport->port); 567 } 568 569 static void imx_uart_dma_tx_callback(void *data) 570 { 571 struct imx_port *sport = data; 572 + struct tty_port *tport = &sport->port.state->port; 573 struct scatterlist *sgl = &sport->tx_sgl[0]; 574 unsigned long flags; 575 u32 ucr1; 576 ··· 592 593 sport->dma_is_txing = 0; 594 595 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 596 uart_write_wakeup(&sport->port); 597 598 + if (!kfifo_is_empty(&tport->xmit_fifo) && 599 + !uart_tx_stopped(&sport->port)) 600 imx_uart_dma_tx(sport); 601 else if (sport->port.rs485.flags & SER_RS485_ENABLED) { 602 u32 ucr4 = imx_uart_readl(sport, UCR4); ··· 609 /* called with port.lock taken and irqs off */ 610 static void imx_uart_dma_tx(struct imx_port *sport) 611 { 612 + struct tty_port *tport = &sport->port.state->port; 613 struct scatterlist *sgl = sport->tx_sgl; 614 struct dma_async_tx_descriptor *desc; 615 struct dma_chan *chan = sport->dma_chan_tx; ··· 624 ucr4 &= ~UCR4_TCEN; 625 imx_uart_writel(sport, ucr4, UCR4); 626 627 + sg_init_table(sgl, ARRAY_SIZE(sport->tx_sgl)); 628 + sport->tx_bytes = kfifo_len(&tport->xmit_fifo); 629 + sport->dma_tx_nents = kfifo_dma_out_prepare(&tport->xmit_fifo, sgl, 630 + ARRAY_SIZE(sport->tx_sgl), sport->tx_bytes); 631 632 ret = dma_map_sg(dev, sgl, sport->dma_tx_nents, DMA_TO_DEVICE); 633 if (ret == 0) { ··· 653 desc->callback = imx_uart_dma_tx_callback; 654 desc->callback_param = sport; 655 656 + dev_dbg(dev, "TX: prepare to send %u bytes by DMA.\n", sport->tx_bytes); 657 658 ucr1 = imx_uart_readl(sport, UCR1); 659 ucr1 |= UCR1_TXDMAEN; ··· 671 static void imx_uart_start_tx(struct uart_port *port) 672 { 673 struct imx_port *sport = (struct imx_port *)port; 674 + struct tty_port *tport = &sport->port.state->port; 675 u32 ucr1; 676 677 + if (!sport->port.x_char && kfifo_is_empty(&tport->xmit_fifo)) 678 return; 679 680 /* ··· 749 return; 750 } 751 752 + if (!kfifo_is_empty(&tport->xmit_fifo) && 753 !uart_tx_stopped(port)) 754 imx_uart_dma_tx(sport); 755 return; ··· 1312 1313 } 1314 1315 + #define TXTL_DEFAULT 8 1316 #define RXTL_DEFAULT 8 /* 8 characters or aging timer */ 1317 #define TXTL_DMA 8 /* DMA burst setting */ 1318 #define RXTL_DMA 9 /* DMA burst setting */ ··· 2010 struct imx_port *sport = imx_uart_ports[co->index]; 2011 struct imx_port_ucrs old_ucr; 2012 unsigned long flags; 2013 + unsigned int ucr1, usr2; 2014 int locked = 1; 2015 2016 if (sport->port.sysrq) ··· 2041 * 
Finally, wait for transmitter to become empty 2042 * and restore UCR1/2/3 2043 */ 2044 + read_poll_timeout_atomic(imx_uart_readl, usr2, usr2 & USR2_TXDC, 2045 + 0, USEC_PER_SEC, false, sport, USR2); 2046 imx_uart_ucrs_restore(sport, &old_ucr); 2047 2048 if (locked)
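Besides the kfifo conversion, the imx hunk replaces an unbounded "while (!(read() & USR2_TXDC));" spin in the console path with read_poll_timeout_atomic(). A sketch of the same idea using the plain readl variant; MY_STAT and MY_STAT_TXDONE are hypothetical register names, not imx ones.

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/types.h>

#define MY_STAT         0x98            /* hypothetical status register offset */
#define MY_STAT_TXDONE  BIT(3)          /* hypothetical "transmitter done" flag */

static int my_wait_tx_done(void __iomem *base)
{
        u32 stat;

        /*
         * Poll with no delay between reads, but give up after one second
         * instead of spinning forever if the transmitter never drains.
         * Returns 0 on success or -ETIMEDOUT.
         */
        return readl_poll_timeout_atomic(base + MY_STAT, stat,
                                         stat & MY_STAT_TXDONE, 0, USEC_PER_SEC);
}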
+12 -14
drivers/tty/serial/ip22zilog.c
··· 355 static void ip22zilog_transmit_chars(struct uart_ip22zilog_port *up, 356 struct zilog_channel *channel) 357 { 358 - struct circ_buf *xmit; 359 360 if (ZS_IS_CONS(up)) { 361 unsigned char status = readb(&channel->control); ··· 399 400 if (up->port.state == NULL) 401 goto ack_tx_int; 402 - xmit = &up->port.state->xmit; 403 - if (uart_circ_empty(xmit)) 404 - goto ack_tx_int; 405 if (uart_tx_stopped(&up->port)) 406 goto ack_tx_int; 407 408 up->flags |= IP22ZILOG_FLAG_TX_ACTIVE; 409 - writeb(xmit->buf[xmit->tail], &channel->data); 410 ZSDELAY(); 411 ZS_WSYNC(channel); 412 413 - uart_xmit_advance(&up->port, 1); 414 - 415 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 416 uart_write_wakeup(&up->port); 417 418 return; ··· 599 port->icount.tx++; 600 port->x_char = 0; 601 } else { 602 - struct circ_buf *xmit = &port->state->xmit; 603 604 - if (uart_circ_empty(xmit)) 605 return; 606 - writeb(xmit->buf[xmit->tail], &channel->data); 607 ZSDELAY(); 608 ZS_WSYNC(channel); 609 610 - uart_xmit_advance(port, 1); 611 - 612 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 613 uart_write_wakeup(&up->port); 614 } 615 }
··· 355 static void ip22zilog_transmit_chars(struct uart_ip22zilog_port *up, 356 struct zilog_channel *channel) 357 { 358 + struct tty_port *tport; 359 + unsigned char c; 360 361 if (ZS_IS_CONS(up)) { 362 unsigned char status = readb(&channel->control); ··· 398 399 if (up->port.state == NULL) 400 goto ack_tx_int; 401 + tport = &up->port.state->port; 402 if (uart_tx_stopped(&up->port)) 403 + goto ack_tx_int; 404 + if (!uart_fifo_get(&up->port, &c)) 405 goto ack_tx_int; 406 407 up->flags |= IP22ZILOG_FLAG_TX_ACTIVE; 408 + writeb(c, &channel->data); 409 ZSDELAY(); 410 ZS_WSYNC(channel); 411 412 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 413 uart_write_wakeup(&up->port); 414 415 return; ··· 600 port->icount.tx++; 601 port->x_char = 0; 602 } else { 603 + struct tty_port *tport = &port->state->port; 604 + unsigned char c; 605 606 + if (!uart_fifo_get(port, &c)) 607 return; 608 + writeb(c, &channel->data); 609 ZSDELAY(); 610 ZS_WSYNC(channel); 611 612 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 613 uart_write_wakeup(&up->port); 614 } 615 }
+9 -22
drivers/tty/serial/jsm/jsm_cls.c
··· 443 444 static void cls_copy_data_from_queue_to_uart(struct jsm_channel *ch) 445 { 446 - u16 tail; 447 int n; 448 - int qlen; 449 u32 len_written = 0; 450 - struct circ_buf *circ; 451 452 if (!ch) 453 return; 454 455 - circ = &ch->uart_port.state->xmit; 456 - 457 - /* No data to write to the UART */ 458 - if (uart_circ_empty(circ)) 459 - return; 460 461 /* If port is "stopped", don't send any data to the UART */ 462 if ((ch->ch_flags & CH_STOP) || (ch->ch_flags & CH_BREAK_SENDING)) ··· 461 return; 462 463 n = 32; 464 - 465 - /* cache tail of queue */ 466 - tail = circ->tail & (UART_XMIT_SIZE - 1); 467 - qlen = uart_circ_chars_pending(circ); 468 - 469 - /* Find minimum of the FIFO space, versus queue length */ 470 - n = min(n, qlen); 471 - 472 while (n > 0) { 473 - writeb(circ->buf[tail], &ch->ch_cls_uart->txrx); 474 - tail = (tail + 1) & (UART_XMIT_SIZE - 1); 475 n--; 476 ch->ch_txcount++; 477 len_written++; 478 } 479 480 - /* Update the final tail */ 481 - circ->tail = tail & (UART_XMIT_SIZE - 1); 482 - 483 if (len_written > ch->ch_t_tlevel) 484 ch->ch_flags &= ~(CH_TX_FIFO_EMPTY | CH_TX_FIFO_LWM); 485 486 - if (uart_circ_empty(circ)) 487 uart_write_wakeup(&ch->uart_port); 488 } 489
··· 443 444 static void cls_copy_data_from_queue_to_uart(struct jsm_channel *ch) 445 { 446 + struct tty_port *tport; 447 int n; 448 u32 len_written = 0; 449 450 if (!ch) 451 return; 452 453 + tport = &ch->uart_port.state->port; 454 455 /* If port is "stopped", don't send any data to the UART */ 456 if ((ch->ch_flags & CH_STOP) || (ch->ch_flags & CH_BREAK_SENDING)) ··· 467 return; 468 469 n = 32; 470 while (n > 0) { 471 + unsigned char c; 472 + 473 + if (!kfifo_get(&tport->xmit_fifo, &c)) 474 + break; 475 + 476 + writeb(c, &ch->ch_cls_uart->txrx); 477 n--; 478 ch->ch_txcount++; 479 len_written++; 480 } 481 482 if (len_written > ch->ch_t_tlevel) 483 ch->ch_flags &= ~(CH_TX_FIFO_EMPTY | CH_TX_FIFO_LWM); 484 485 + if (kfifo_is_empty(&tport->xmit_fifo)) 486 uart_write_wakeup(&ch->uart_port); 487 } 488
+13 -25
drivers/tty/serial/jsm/jsm_neo.c
··· 474 475 static void neo_copy_data_from_queue_to_uart(struct jsm_channel *ch) 476 { 477 - u16 head; 478 - u16 tail; 479 int n; 480 int s; 481 int qlen; 482 u32 len_written = 0; 483 - struct circ_buf *circ; 484 485 if (!ch) 486 return; 487 488 - circ = &ch->uart_port.state->xmit; 489 490 /* No data to write to the UART */ 491 - if (uart_circ_empty(circ)) 492 return; 493 494 /* If port is "stopped", don't send any data to the UART */ ··· 504 if (ch->ch_cached_lsr & UART_LSR_THRE) { 505 ch->ch_cached_lsr &= ~(UART_LSR_THRE); 506 507 - writeb(circ->buf[circ->tail], &ch->ch_neo_uart->txrx); 508 - jsm_dbg(WRITE, &ch->ch_bd->pci_dev, 509 - "Tx data: %x\n", circ->buf[circ->tail]); 510 - circ->tail = (circ->tail + 1) & (UART_XMIT_SIZE - 1); 511 ch->ch_txcount++; 512 } 513 return; ··· 519 return; 520 521 n = UART_17158_TX_FIFOSIZE - ch->ch_t_tlevel; 522 - 523 - /* cache head and tail of queue */ 524 - head = circ->head & (UART_XMIT_SIZE - 1); 525 - tail = circ->tail & (UART_XMIT_SIZE - 1); 526 - qlen = uart_circ_chars_pending(circ); 527 528 /* Find minimum of the FIFO space, versus queue length */ 529 n = min(n, qlen); 530 531 while (n > 0) { 532 - 533 - s = ((head >= tail) ? head : UART_XMIT_SIZE) - tail; 534 - s = min(s, n); 535 - 536 if (s <= 0) 537 break; 538 539 - memcpy_toio(&ch->ch_neo_uart->txrxburst, circ->buf + tail, s); 540 - /* Add and flip queue if needed */ 541 - tail = (tail + s) & (UART_XMIT_SIZE - 1); 542 n -= s; 543 ch->ch_txcount += s; 544 len_written += s; 545 } 546 547 - /* Update the final tail */ 548 - circ->tail = tail & (UART_XMIT_SIZE - 1); 549 - 550 if (len_written >= ch->ch_t_tlevel) 551 ch->ch_flags &= ~(CH_TX_FIFO_EMPTY | CH_TX_FIFO_LWM); 552 553 - if (uart_circ_empty(circ)) 554 uart_write_wakeup(&ch->uart_port); 555 } 556
··· 474 475 static void neo_copy_data_from_queue_to_uart(struct jsm_channel *ch) 476 { 477 + struct tty_port *tport; 478 + unsigned char *tail; 479 + unsigned char c; 480 int n; 481 int s; 482 int qlen; 483 u32 len_written = 0; 484 485 if (!ch) 486 return; 487 488 + tport = &ch->uart_port.state->port; 489 490 /* No data to write to the UART */ 491 + if (kfifo_is_empty(&tport->xmit_fifo)) 492 return; 493 494 /* If port is "stopped", don't send any data to the UART */ ··· 504 if (ch->ch_cached_lsr & UART_LSR_THRE) { 505 ch->ch_cached_lsr &= ~(UART_LSR_THRE); 506 507 + WARN_ON_ONCE(!kfifo_get(&tport->xmit_fifo, &c)); 508 + writeb(c, &ch->ch_neo_uart->txrx); 509 + jsm_dbg(WRITE, &ch->ch_bd->pci_dev, "Tx data: %x\n", c); 510 ch->ch_txcount++; 511 } 512 return; ··· 520 return; 521 522 n = UART_17158_TX_FIFOSIZE - ch->ch_t_tlevel; 523 + qlen = kfifo_len(&tport->xmit_fifo); 524 525 /* Find minimum of the FIFO space, versus queue length */ 526 n = min(n, qlen); 527 528 while (n > 0) { 529 + s = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, n); 530 if (s <= 0) 531 break; 532 533 + memcpy_toio(&ch->ch_neo_uart->txrxburst, tail, s); 534 + kfifo_skip_count(&tport->xmit_fifo, s); 535 n -= s; 536 ch->ch_txcount += s; 537 len_written += s; 538 } 539 540 if (len_written >= ch->ch_t_tlevel) 541 ch->ch_flags &= ~(CH_TX_FIFO_EMPTY | CH_TX_FIFO_LWM); 542 543 + if (kfifo_is_empty(&tport->xmit_fifo)) 544 uart_write_wakeup(&ch->uart_port); 545 } 546
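jsm_neo goes one step further and avoids the bounce copy entirely: kfifo_out_linear_ptr() hands back a pointer into the fifo's own storage, the run is pushed to the device FIFO with memcpy_toio(), and kfifo_skip_count() discards it afterwards. A rough sketch of that zero-copy burst, assuming kfifo_out_linear_ptr() never returns more than the requested length; txfifo_win, room and my_burst_tx() are hypothetical.

#include <linux/io.h>
#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>
#include <linux/types.h>

static void my_burst_tx(struct uart_port *port, u8 __iomem *txfifo_win,
                        unsigned int room)
{
        struct tty_port *tport = &port->state->port;
        unsigned char *chunk;
        unsigned int len;

        while (room) {
                /* Longest contiguous run we are allowed to copy in one go. */
                len = kfifo_out_linear_ptr(&tport->xmit_fifo, &chunk, room);
                if (!len)
                        break;

                memcpy_toio(txfifo_win, chunk, len);
                /* Only now is the data really gone from the queue. */
                kfifo_skip_count(&tport->xmit_fifo, len);

                room -= len;
                port->icount.tx += len;
        }

        if (kfifo_is_empty(&tport->xmit_fifo))
                uart_write_wakeup(port);
}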
+138 -202
drivers/tty/serial/max3100.c
··· 1 // SPDX-License-Identifier: GPL-2.0+ 2 /* 3 - * 4 * Copyright (C) 2008 Christian Pellegrin <chripell@evolware.org> 5 * 6 * Notes: the MAX3100 doesn't provide an interrupt on CTS so we have ··· 7 * writing conf clears FIFO buffer and we cannot have this interrupt 8 * always asking us for attention. 9 * 10 - * Example platform data: 11 - 12 - static struct plat_max3100 max3100_plat_data = { 13 - .loopback = 0, 14 - .crystal = 0, 15 - .poll_time = 100, 16 - }; 17 - 18 - static struct spi_board_info spi_board_info[] = { 19 - { 20 - .modalias = "max3100", 21 - .platform_data = &max3100_plat_data, 22 - .irq = IRQ_EINT12, 23 - .max_speed_hz = 5*1000*1000, 24 - .chip_select = 0, 25 - }, 26 - }; 27 - 28 * The initial minor number is 209 in the low-density serial port: 29 * mknod /dev/ttyMAX0 c 204 209 30 */ ··· 16 /* 4 MAX3100s should be enough for everyone */ 17 #define MAX_MAX3100 4 18 19 #include <linux/delay.h> 20 - #include <linux/slab.h> 21 #include <linux/device.h> 22 #include <linux/module.h> 23 #include <linux/serial_core.h> 24 #include <linux/serial.h> 25 #include <linux/spi/spi.h> 26 - #include <linux/freezer.h> 27 - #include <linux/tty.h> 28 #include <linux/tty_flip.h> 29 30 - #include <linux/serial_max3100.h> 31 32 #define MAX3100_C (1<<14) 33 #define MAX3100_D (0<<14) ··· 96 #define MAX3100_7BIT 4 97 int rx_enabled; /* if we should rx chars */ 98 99 - int irq; /* irq assigned to the max3100 */ 100 - 101 int minor; /* minor number */ 102 - int crystal; /* 1 if 3.6864Mhz crystal 0 for 1.8432 */ 103 int loopback; /* 1 if we are in loopback mode */ 104 105 /* for handling irqs: need workqueue since we do spi_sync */ ··· 108 /* need to know we are suspending to avoid deadlock on workqueue */ 109 int suspending; 110 111 - /* hook for suspending MAX3100 via dedicated pin */ 112 - void (*max3100_hw_suspend) (int suspend); 113 - 114 - /* poll time (in ms) for ctrl lines */ 115 - int poll_time; 116 - /* and its timer */ 117 struct timer_list timer; 118 }; 119 120 static struct max3100_port *max3100s[MAX_MAX3100]; /* the chips */ 121 static DEFINE_MUTEX(max3100s_lock); /* race on probe */ ··· 153 *c |= max3100_do_parity(s, *c) << 8; 154 } 155 156 - static void max3100_work(struct work_struct *w); 157 - 158 - static void max3100_dowork(struct max3100_port *s) 159 - { 160 - if (!s->force_end_work && !freezing(current) && !s->suspending) 161 - queue_work(s->workqueue, &s->work); 162 - } 163 - 164 - static void max3100_timeout(struct timer_list *t) 165 - { 166 - struct max3100_port *s = from_timer(s, t, timer); 167 - 168 - if (s->port.state) { 169 - max3100_dowork(s); 170 - mod_timer(&s->timer, jiffies + s->poll_time); 171 - } 172 - } 173 - 174 static int max3100_sr(struct max3100_port *s, u16 tx, u16 *rx) 175 { 176 struct spi_message message; 177 - u16 etx, erx; 178 int status; 179 struct spi_transfer tran = { 180 .tx_buf = &etx, ··· 178 return 0; 179 } 180 181 - static int max3100_handlerx(struct max3100_port *s, u16 rx) 182 { 183 unsigned int status = 0; 184 int ret = 0, cts; ··· 219 return ret; 220 } 221 222 static void max3100_work(struct work_struct *w) 223 { 224 struct max3100_port *s = container_of(w, struct max3100_port, work); 225 int rxchars; 226 u16 tx, rx; 227 - int conf, cconf, crts; 228 - struct circ_buf *xmit = &s->port.state->xmit; 229 230 dev_dbg(&s->spi->dev, "%s\n", __func__); 231 ··· 247 conf = s->conf; 248 cconf = s->conf_commit; 249 s->conf_commit = 0; 250 crts = s->rts_commit; 251 s->rts_commit = 0; 252 spin_unlock(&s->conf_lock); 253 if (cconf) 254 max3100_sr(s, 
MAX3100_WC | conf, &rx); 255 if (crts) { 256 max3100_sr(s, MAX3100_WD | MAX3100_TE | 257 (s->rts ? MAX3100_RTS : 0), &rx); ··· 271 tx = s->port.x_char; 272 s->port.icount.tx++; 273 s->port.x_char = 0; 274 - } else if (!uart_circ_empty(xmit) && 275 - !uart_tx_stopped(&s->port)) { 276 - tx = xmit->buf[xmit->tail]; 277 - uart_xmit_advance(&s->port, 1); 278 } 279 if (tx != 0xffff) { 280 max3100_calc_parity(s, &tx); ··· 287 tty_flip_buffer_push(&s->port.state->port); 288 rxchars = 0; 289 } 290 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 291 uart_write_wakeup(&s->port); 292 293 } while (!s->force_end_work && 294 !freezing(current) && 295 ((rx & MAX3100_R) || 296 - (!uart_circ_empty(xmit) && 297 !uart_tx_stopped(&s->port)))); 298 299 if (rxchars > 0) 300 tty_flip_buffer_push(&s->port.state->port); 301 } 302 303 static irqreturn_t max3100_irq(int irqno, void *dev_id) ··· 326 327 static void max3100_enable_ms(struct uart_port *port) 328 { 329 - struct max3100_port *s = container_of(port, 330 - struct max3100_port, 331 - port); 332 333 - if (s->poll_time > 0) 334 - mod_timer(&s->timer, jiffies); 335 dev_dbg(&s->spi->dev, "%s\n", __func__); 336 } 337 338 static void max3100_start_tx(struct uart_port *port) 339 { 340 - struct max3100_port *s = container_of(port, 341 - struct max3100_port, 342 - port); 343 344 dev_dbg(&s->spi->dev, "%s\n", __func__); 345 ··· 343 344 static void max3100_stop_rx(struct uart_port *port) 345 { 346 - struct max3100_port *s = container_of(port, 347 - struct max3100_port, 348 - port); 349 350 dev_dbg(&s->spi->dev, "%s\n", __func__); 351 ··· 357 358 static unsigned int max3100_tx_empty(struct uart_port *port) 359 { 360 - struct max3100_port *s = container_of(port, 361 - struct max3100_port, 362 - port); 363 364 dev_dbg(&s->spi->dev, "%s\n", __func__); 365 ··· 368 369 static unsigned int max3100_get_mctrl(struct uart_port *port) 370 { 371 - struct max3100_port *s = container_of(port, 372 - struct max3100_port, 373 - port); 374 375 dev_dbg(&s->spi->dev, "%s\n", __func__); 376 ··· 380 381 static void max3100_set_mctrl(struct uart_port *port, unsigned int mctrl) 382 { 383 - struct max3100_port *s = container_of(port, 384 - struct max3100_port, 385 - port); 386 - int rts; 387 388 dev_dbg(&s->spi->dev, "%s\n", __func__); 389 390 rts = (mctrl & TIOCM_RTS) > 0; 391 392 spin_lock(&s->conf_lock); 393 if (s->rts != rts) { 394 s->rts = rts; 395 s->rts_commit = 1; 396 - max3100_dowork(s); 397 } 398 spin_unlock(&s->conf_lock); 399 } 400 ··· 406 max3100_set_termios(struct uart_port *port, struct ktermios *termios, 407 const struct ktermios *old) 408 { 409 - struct max3100_port *s = container_of(port, 410 - struct max3100_port, 411 - port); 412 - int baud = 0; 413 unsigned cflag; 414 u32 param_new, param_mask, parity = 0; 415 ··· 421 param_new = s->conf & MAX3100_BAUD; 422 switch (baud) { 423 case 300: 424 - if (s->crystal) 425 baud = s->baud; 426 else 427 param_new = 15; 428 break; 429 case 600: 430 - param_new = 14 + s->crystal; 431 break; 432 case 1200: 433 - param_new = 13 + s->crystal; 434 break; 435 case 2400: 436 - param_new = 12 + s->crystal; 437 break; 438 case 4800: 439 - param_new = 11 + s->crystal; 440 break; 441 case 9600: 442 - param_new = 10 + s->crystal; 443 break; 444 case 19200: 445 - param_new = 9 + s->crystal; 446 break; 447 case 38400: 448 - param_new = 8 + s->crystal; 449 break; 450 case 57600: 451 - param_new = 1 + s->crystal; 452 break; 453 case 115200: 454 - param_new = 0 + s->crystal; 455 break; 456 case 230400: 457 - if (s->crystal) 458 param_new = 0; 459 
else 460 baud = s->baud; ··· 506 MAX3100_STATUS_PE | MAX3100_STATUS_FE | 507 MAX3100_STATUS_OE; 508 509 - if (s->poll_time > 0) 510 - del_timer_sync(&s->timer); 511 - 512 uart_update_timeout(port, termios->c_cflag, baud); 513 514 spin_lock(&s->conf_lock); ··· 522 523 static void max3100_shutdown(struct uart_port *port) 524 { 525 - struct max3100_port *s = container_of(port, 526 - struct max3100_port, 527 - port); 528 529 dev_dbg(&s->spi->dev, "%s\n", __func__); 530 ··· 532 533 s->force_end_work = 1; 534 535 - if (s->poll_time > 0) 536 - del_timer_sync(&s->timer); 537 538 if (s->workqueue) { 539 destroy_workqueue(s->workqueue); 540 s->workqueue = NULL; 541 } 542 - if (s->irq) 543 - free_irq(s->irq, s); 544 545 /* set shutdown mode to save power */ 546 - if (s->max3100_hw_suspend) 547 - s->max3100_hw_suspend(1); 548 - else { 549 - u16 tx, rx; 550 - 551 - tx = MAX3100_WC | MAX3100_SHDN; 552 - max3100_sr(s, tx, &rx); 553 - } 554 } 555 556 static int max3100_startup(struct uart_port *port) 557 { 558 - struct max3100_port *s = container_of(port, 559 - struct max3100_port, 560 - port); 561 char b[12]; 562 563 dev_dbg(&s->spi->dev, "%s\n", __func__); 564 565 s->conf = MAX3100_RM; 566 - s->baud = s->crystal ? 230400 : 115200; 567 s->rx_enabled = 1; 568 569 if (s->suspending) ··· 572 } 573 INIT_WORK(&s->work, max3100_work); 574 575 - if (request_irq(s->irq, max3100_irq, 576 - IRQF_TRIGGER_FALLING, "max3100", s) < 0) { 577 - dev_warn(&s->spi->dev, "cannot allocate irq %d\n", s->irq); 578 - s->irq = 0; 579 destroy_workqueue(s->workqueue); 580 s->workqueue = NULL; 581 return -EBUSY; 582 } 583 584 - if (s->loopback) { 585 - u16 tx, rx; 586 - tx = 0x4001; 587 - max3100_sr(s, tx, &rx); 588 - } 589 - 590 - if (s->max3100_hw_suspend) 591 - s->max3100_hw_suspend(0); 592 s->conf_commit = 1; 593 max3100_dowork(s); 594 /* wait for clock to settle */ ··· 593 594 static const char *max3100_type(struct uart_port *port) 595 { 596 - struct max3100_port *s = container_of(port, 597 - struct max3100_port, 598 - port); 599 600 dev_dbg(&s->spi->dev, "%s\n", __func__); 601 ··· 602 603 static void max3100_release_port(struct uart_port *port) 604 { 605 - struct max3100_port *s = container_of(port, 606 - struct max3100_port, 607 - port); 608 609 dev_dbg(&s->spi->dev, "%s\n", __func__); 610 } 611 612 static void max3100_config_port(struct uart_port *port, int flags) 613 { 614 - struct max3100_port *s = container_of(port, 615 - struct max3100_port, 616 - port); 617 618 dev_dbg(&s->spi->dev, "%s\n", __func__); 619 ··· 620 static int max3100_verify_port(struct uart_port *port, 621 struct serial_struct *ser) 622 { 623 - struct max3100_port *s = container_of(port, 624 - struct max3100_port, 625 - port); 626 int ret = -EINVAL; 627 628 dev_dbg(&s->spi->dev, "%s\n", __func__); ··· 632 633 static void max3100_stop_tx(struct uart_port *port) 634 { 635 - struct max3100_port *s = container_of(port, 636 - struct max3100_port, 637 - port); 638 639 dev_dbg(&s->spi->dev, "%s\n", __func__); 640 } 641 642 static int max3100_request_port(struct uart_port *port) 643 { 644 - struct max3100_port *s = container_of(port, 645 - struct max3100_port, 646 - port); 647 648 dev_dbg(&s->spi->dev, "%s\n", __func__); 649 return 0; ··· 647 648 static void max3100_break_ctl(struct uart_port *port, int break_state) 649 { 650 - struct max3100_port *s = container_of(port, 651 - struct max3100_port, 652 - port); 653 654 dev_dbg(&s->spi->dev, "%s\n", __func__); 655 } ··· 683 684 static int max3100_probe(struct spi_device *spi) 685 { 686 int i, retval; 687 - struct 
plat_max3100 *pdata; 688 - u16 tx, rx; 689 690 mutex_lock(&max3100s_lock); 691 692 if (!uart_driver_registered) { 693 - uart_driver_registered = 1; 694 retval = uart_register_driver(&max3100_uart_driver); 695 if (retval) { 696 - printk(KERN_ERR "Couldn't register max3100 uart driver\n"); 697 mutex_unlock(&max3100s_lock); 698 - return retval; 699 } 700 } 701 702 for (i = 0; i < MAX_MAX3100; i++) 703 if (!max3100s[i]) 704 break; 705 if (i == MAX_MAX3100) { 706 - dev_warn(&spi->dev, "too many MAX3100 chips\n"); 707 mutex_unlock(&max3100s_lock); 708 - return -ENOMEM; 709 } 710 711 max3100s[i] = kzalloc(sizeof(struct max3100_port), GFP_KERNEL); 712 if (!max3100s[i]) { 713 - dev_warn(&spi->dev, 714 - "kmalloc for max3100 structure %d failed!\n", i); 715 mutex_unlock(&max3100s_lock); 716 return -ENOMEM; 717 } 718 max3100s[i]->spi = spi; 719 - max3100s[i]->irq = spi->irq; 720 spin_lock_init(&max3100s[i]->conf_lock); 721 spi_set_drvdata(spi, max3100s[i]); 722 - pdata = dev_get_platdata(&spi->dev); 723 - max3100s[i]->crystal = pdata->crystal; 724 - max3100s[i]->loopback = pdata->loopback; 725 - max3100s[i]->poll_time = msecs_to_jiffies(pdata->poll_time); 726 - if (pdata->poll_time > 0 && max3100s[i]->poll_time == 0) 727 - max3100s[i]->poll_time = 1; 728 - max3100s[i]->max3100_hw_suspend = pdata->max3100_hw_suspend; 729 max3100s[i]->minor = i; 730 timer_setup(&max3100s[i]->timer, max3100_timeout, 0); 731 732 dev_dbg(&spi->dev, "%s: adding port %d\n", __func__, i); 733 - max3100s[i]->port.irq = max3100s[i]->irq; 734 - max3100s[i]->port.uartclk = max3100s[i]->crystal ? 3686400 : 1843200; 735 max3100s[i]->port.fifosize = 16; 736 max3100s[i]->port.ops = &max3100_ops; 737 max3100s[i]->port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF; 738 max3100s[i]->port.line = i; 739 max3100s[i]->port.type = PORT_MAX3100; 740 max3100s[i]->port.dev = &spi->dev; 741 retval = uart_add_one_port(&max3100_uart_driver, &max3100s[i]->port); 742 if (retval < 0) 743 - dev_warn(&spi->dev, 744 - "uart_add_one_port failed for line %d with error %d\n", 745 - i, retval); 746 747 /* set shutdown mode to save power. 
Will be woken-up on open */ 748 - if (max3100s[i]->max3100_hw_suspend) 749 - max3100s[i]->max3100_hw_suspend(1); 750 - else { 751 - tx = MAX3100_WC | MAX3100_SHDN; 752 - max3100_sr(max3100s[i], tx, &rx); 753 - } 754 mutex_unlock(&max3100s_lock); 755 return 0; 756 } ··· 767 } 768 pr_debug("removing max3100 driver\n"); 769 uart_unregister_driver(&max3100_uart_driver); 770 771 mutex_unlock(&max3100s_lock); 772 } 773 774 - #ifdef CONFIG_PM_SLEEP 775 - 776 static int max3100_suspend(struct device *dev) 777 { 778 struct max3100_port *s = dev_get_drvdata(dev); 779 780 dev_dbg(&s->spi->dev, "%s\n", __func__); 781 782 - disable_irq(s->irq); 783 784 s->suspending = 1; 785 uart_suspend_port(&max3100_uart_driver, &s->port); 786 787 - if (s->max3100_hw_suspend) 788 - s->max3100_hw_suspend(1); 789 - else { 790 - /* no HW suspend, so do SW one */ 791 - u16 tx, rx; 792 - 793 - tx = MAX3100_WC | MAX3100_SHDN; 794 - max3100_sr(s, tx, &rx); 795 - } 796 return 0; 797 } 798 ··· 795 796 dev_dbg(&s->spi->dev, "%s\n", __func__); 797 798 - if (s->max3100_hw_suspend) 799 - s->max3100_hw_suspend(0); 800 uart_resume_port(&max3100_uart_driver, &s->port); 801 s->suspending = 0; 802 803 - enable_irq(s->irq); 804 805 s->conf_commit = 1; 806 if (s->workqueue) ··· 807 return 0; 808 } 809 810 - static SIMPLE_DEV_PM_OPS(max3100_pm_ops, max3100_suspend, max3100_resume); 811 - #define MAX3100_PM_OPS (&max3100_pm_ops) 812 813 - #else 814 - #define MAX3100_PM_OPS NULL 815 - #endif 816 817 static struct spi_driver max3100_driver = { 818 .driver = { 819 .name = "max3100", 820 - .pm = MAX3100_PM_OPS, 821 }, 822 .probe = max3100_probe, 823 .remove = max3100_remove, 824 }; 825 826 module_spi_driver(max3100_driver); ··· 837 MODULE_DESCRIPTION("MAX3100 driver"); 838 MODULE_AUTHOR("Christian Pellegrin <chripell@evolware.org>"); 839 MODULE_LICENSE("GPL"); 840 - MODULE_ALIAS("spi:max3100");
··· 1 // SPDX-License-Identifier: GPL-2.0+ 2 /* 3 * Copyright (C) 2008 Christian Pellegrin <chripell@evolware.org> 4 * 5 * Notes: the MAX3100 doesn't provide an interrupt on CTS so we have ··· 8 * writing conf clears FIFO buffer and we cannot have this interrupt 9 * always asking us for attention. 10 * 11 * The initial minor number is 209 in the low-density serial port: 12 * mknod /dev/ttyMAX0 c 204 209 13 */ ··· 35 /* 4 MAX3100s should be enough for everyone */ 36 #define MAX_MAX3100 4 37 38 + #include <linux/container_of.h> 39 #include <linux/delay.h> 40 #include <linux/device.h> 41 + #include <linux/freezer.h> 42 + #include <linux/mod_devicetable.h> 43 #include <linux/module.h> 44 + #include <linux/pm.h> 45 + #include <linux/property.h> 46 #include <linux/serial_core.h> 47 #include <linux/serial.h> 48 + #include <linux/slab.h> 49 #include <linux/spi/spi.h> 50 #include <linux/tty_flip.h> 51 + #include <linux/tty.h> 52 + #include <linux/types.h> 53 54 + #include <asm/unaligned.h> 55 56 #define MAX3100_C (1<<14) 57 #define MAX3100_D (0<<14) ··· 110 #define MAX3100_7BIT 4 111 int rx_enabled; /* if we should rx chars */ 112 113 int minor; /* minor number */ 114 + int loopback_commit; /* need to change loopback */ 115 int loopback; /* 1 if we are in loopback mode */ 116 117 /* for handling irqs: need workqueue since we do spi_sync */ ··· 124 /* need to know we are suspending to avoid deadlock on workqueue */ 125 int suspending; 126 127 struct timer_list timer; 128 }; 129 + 130 + static inline struct max3100_port *to_max3100_port(struct uart_port *port) 131 + { 132 + return container_of(port, struct max3100_port, port); 133 + } 134 135 static struct max3100_port *max3100s[MAX_MAX3100]; /* the chips */ 136 static DEFINE_MUTEX(max3100s_lock); /* race on probe */ ··· 170 *c |= max3100_do_parity(s, *c) << 8; 171 } 172 173 static int max3100_sr(struct max3100_port *s, u16 tx, u16 *rx) 174 { 175 struct spi_message message; 176 + __be16 etx, erx; 177 int status; 178 struct spi_transfer tran = { 179 .tx_buf = &etx, ··· 213 return 0; 214 } 215 216 + static int max3100_handlerx_unlocked(struct max3100_port *s, u16 rx) 217 { 218 unsigned int status = 0; 219 int ret = 0, cts; ··· 254 return ret; 255 } 256 257 + static int max3100_handlerx(struct max3100_port *s, u16 rx) 258 + { 259 + unsigned long flags; 260 + int ret; 261 + 262 + uart_port_lock_irqsave(&s->port, &flags); 263 + ret = max3100_handlerx_unlocked(s, rx); 264 + uart_port_unlock_irqrestore(&s->port, flags); 265 + return ret; 266 + } 267 + 268 static void max3100_work(struct work_struct *w) 269 { 270 struct max3100_port *s = container_of(w, struct max3100_port, work); 271 + struct tty_port *tport = &s->port.state->port; 272 + unsigned char ch; 273 + int conf, cconf, cloopback, crts; 274 int rxchars; 275 u16 tx, rx; 276 277 dev_dbg(&s->spi->dev, "%s\n", __func__); 278 ··· 270 conf = s->conf; 271 cconf = s->conf_commit; 272 s->conf_commit = 0; 273 + cloopback = s->loopback_commit; 274 + s->loopback_commit = 0; 275 crts = s->rts_commit; 276 s->rts_commit = 0; 277 spin_unlock(&s->conf_lock); 278 if (cconf) 279 max3100_sr(s, MAX3100_WC | conf, &rx); 280 + if (cloopback) 281 + max3100_sr(s, 0x4001, &rx); 282 if (crts) { 283 max3100_sr(s, MAX3100_WD | MAX3100_TE | 284 (s->rts ? 
MAX3100_RTS : 0), &rx); ··· 290 tx = s->port.x_char; 291 s->port.icount.tx++; 292 s->port.x_char = 0; 293 + } else if (!uart_tx_stopped(&s->port) && 294 + uart_fifo_get(&s->port, &ch)) { 295 + tx = ch; 296 } 297 if (tx != 0xffff) { 298 max3100_calc_parity(s, &tx); ··· 307 tty_flip_buffer_push(&s->port.state->port); 308 rxchars = 0; 309 } 310 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 311 uart_write_wakeup(&s->port); 312 313 } while (!s->force_end_work && 314 !freezing(current) && 315 ((rx & MAX3100_R) || 316 + (!kfifo_is_empty(&tport->xmit_fifo) && 317 !uart_tx_stopped(&s->port)))); 318 319 if (rxchars > 0) 320 tty_flip_buffer_push(&s->port.state->port); 321 + } 322 + 323 + static void max3100_dowork(struct max3100_port *s) 324 + { 325 + if (!s->force_end_work && !freezing(current) && !s->suspending) 326 + queue_work(s->workqueue, &s->work); 327 + } 328 + 329 + static void max3100_timeout(struct timer_list *t) 330 + { 331 + struct max3100_port *s = from_timer(s, t, timer); 332 + 333 + max3100_dowork(s); 334 + mod_timer(&s->timer, jiffies + uart_poll_timeout(&s->port)); 335 } 336 337 static irqreturn_t max3100_irq(int irqno, void *dev_id) ··· 332 333 static void max3100_enable_ms(struct uart_port *port) 334 { 335 + struct max3100_port *s = to_max3100_port(port); 336 337 + mod_timer(&s->timer, jiffies); 338 dev_dbg(&s->spi->dev, "%s\n", __func__); 339 } 340 341 static void max3100_start_tx(struct uart_port *port) 342 { 343 + struct max3100_port *s = to_max3100_port(port); 344 345 dev_dbg(&s->spi->dev, "%s\n", __func__); 346 ··· 354 355 static void max3100_stop_rx(struct uart_port *port) 356 { 357 + struct max3100_port *s = to_max3100_port(port); 358 359 dev_dbg(&s->spi->dev, "%s\n", __func__); 360 ··· 370 371 static unsigned int max3100_tx_empty(struct uart_port *port) 372 { 373 + struct max3100_port *s = to_max3100_port(port); 374 375 dev_dbg(&s->spi->dev, "%s\n", __func__); 376 ··· 383 384 static unsigned int max3100_get_mctrl(struct uart_port *port) 385 { 386 + struct max3100_port *s = to_max3100_port(port); 387 388 dev_dbg(&s->spi->dev, "%s\n", __func__); 389 ··· 397 398 static void max3100_set_mctrl(struct uart_port *port, unsigned int mctrl) 399 { 400 + struct max3100_port *s = to_max3100_port(port); 401 + int loopback, rts; 402 403 dev_dbg(&s->spi->dev, "%s\n", __func__); 404 405 + loopback = (mctrl & TIOCM_LOOP) > 0; 406 rts = (mctrl & TIOCM_RTS) > 0; 407 408 spin_lock(&s->conf_lock); 409 + if (s->loopback != loopback) { 410 + s->loopback = loopback; 411 + s->loopback_commit = 1; 412 + } 413 if (s->rts != rts) { 414 s->rts = rts; 415 s->rts_commit = 1; 416 } 417 + if (s->loopback_commit || s->rts_commit) 418 + max3100_dowork(s); 419 spin_unlock(&s->conf_lock); 420 } 421 ··· 419 max3100_set_termios(struct uart_port *port, struct ktermios *termios, 420 const struct ktermios *old) 421 { 422 + struct max3100_port *s = to_max3100_port(port); 423 + unsigned int baud = port->uartclk / 16; 424 + unsigned int baud230400 = (baud == 230400) ? 
1 : 0; 425 unsigned cflag; 426 u32 param_new, param_mask, parity = 0; 427 ··· 435 param_new = s->conf & MAX3100_BAUD; 436 switch (baud) { 437 case 300: 438 + if (baud230400) 439 baud = s->baud; 440 else 441 param_new = 15; 442 break; 443 case 600: 444 + param_new = 14 + baud230400; 445 break; 446 case 1200: 447 + param_new = 13 + baud230400; 448 break; 449 case 2400: 450 + param_new = 12 + baud230400; 451 break; 452 case 4800: 453 + param_new = 11 + baud230400; 454 break; 455 case 9600: 456 + param_new = 10 + baud230400; 457 break; 458 case 19200: 459 + param_new = 9 + baud230400; 460 break; 461 case 38400: 462 + param_new = 8 + baud230400; 463 break; 464 case 57600: 465 + param_new = 1 + baud230400; 466 break; 467 case 115200: 468 + param_new = 0 + baud230400; 469 break; 470 case 230400: 471 + if (baud230400) 472 param_new = 0; 473 else 474 baud = s->baud; ··· 520 MAX3100_STATUS_PE | MAX3100_STATUS_FE | 521 MAX3100_STATUS_OE; 522 523 + del_timer_sync(&s->timer); 524 uart_update_timeout(port, termios->c_cflag, baud); 525 526 spin_lock(&s->conf_lock); ··· 538 539 static void max3100_shutdown(struct uart_port *port) 540 { 541 + struct max3100_port *s = to_max3100_port(port); 542 + u16 rx; 543 544 dev_dbg(&s->spi->dev, "%s\n", __func__); 545 ··· 549 550 s->force_end_work = 1; 551 552 + del_timer_sync(&s->timer); 553 554 if (s->workqueue) { 555 destroy_workqueue(s->workqueue); 556 s->workqueue = NULL; 557 } 558 + if (port->irq) 559 + free_irq(port->irq, s); 560 561 /* set shutdown mode to save power */ 562 + max3100_sr(s, MAX3100_WC | MAX3100_SHDN, &rx); 563 } 564 565 static int max3100_startup(struct uart_port *port) 566 { 567 + struct max3100_port *s = to_max3100_port(port); 568 char b[12]; 569 + int ret; 570 571 dev_dbg(&s->spi->dev, "%s\n", __func__); 572 573 s->conf = MAX3100_RM; 574 + s->baud = port->uartclk / 16; 575 s->rx_enabled = 1; 576 577 if (s->suspending) ··· 598 } 599 INIT_WORK(&s->work, max3100_work); 600 601 + ret = request_irq(port->irq, max3100_irq, IRQF_TRIGGER_FALLING, "max3100", s); 602 + if (ret < 0) { 603 + dev_warn(&s->spi->dev, "cannot allocate irq %d\n", port->irq); 604 + port->irq = 0; 605 destroy_workqueue(s->workqueue); 606 s->workqueue = NULL; 607 return -EBUSY; 608 } 609 610 s->conf_commit = 1; 611 max3100_dowork(s); 612 /* wait for clock to settle */ ··· 627 628 static const char *max3100_type(struct uart_port *port) 629 { 630 + struct max3100_port *s = to_max3100_port(port); 631 632 dev_dbg(&s->spi->dev, "%s\n", __func__); 633 ··· 638 639 static void max3100_release_port(struct uart_port *port) 640 { 641 + struct max3100_port *s = to_max3100_port(port); 642 643 dev_dbg(&s->spi->dev, "%s\n", __func__); 644 } 645 646 static void max3100_config_port(struct uart_port *port, int flags) 647 { 648 + struct max3100_port *s = to_max3100_port(port); 649 650 dev_dbg(&s->spi->dev, "%s\n", __func__); 651 ··· 660 static int max3100_verify_port(struct uart_port *port, 661 struct serial_struct *ser) 662 { 663 + struct max3100_port *s = to_max3100_port(port); 664 int ret = -EINVAL; 665 666 dev_dbg(&s->spi->dev, "%s\n", __func__); ··· 674 675 static void max3100_stop_tx(struct uart_port *port) 676 { 677 + struct max3100_port *s = to_max3100_port(port); 678 679 dev_dbg(&s->spi->dev, "%s\n", __func__); 680 } 681 682 static int max3100_request_port(struct uart_port *port) 683 { 684 + struct max3100_port *s = to_max3100_port(port); 685 686 dev_dbg(&s->spi->dev, "%s\n", __func__); 687 return 0; ··· 693 694 static void max3100_break_ctl(struct uart_port *port, int break_state) 695 { 
696 + struct max3100_port *s = to_max3100_port(port); 697 698 dev_dbg(&s->spi->dev, "%s\n", __func__); 699 } ··· 731 732 static int max3100_probe(struct spi_device *spi) 733 { 734 + struct device *dev = &spi->dev; 735 int i, retval; 736 + u16 rx; 737 738 mutex_lock(&max3100s_lock); 739 740 if (!uart_driver_registered) { 741 retval = uart_register_driver(&max3100_uart_driver); 742 if (retval) { 743 mutex_unlock(&max3100s_lock); 744 + return dev_err_probe(dev, retval, "Couldn't register max3100 uart driver\n"); 745 } 746 + 747 + uart_driver_registered = 1; 748 } 749 750 for (i = 0; i < MAX_MAX3100; i++) 751 if (!max3100s[i]) 752 break; 753 if (i == MAX_MAX3100) { 754 mutex_unlock(&max3100s_lock); 755 + return dev_err_probe(dev, -ENOMEM, "too many MAX3100 chips\n"); 756 } 757 758 max3100s[i] = kzalloc(sizeof(struct max3100_port), GFP_KERNEL); 759 if (!max3100s[i]) { 760 mutex_unlock(&max3100s_lock); 761 return -ENOMEM; 762 } 763 max3100s[i]->spi = spi; 764 spin_lock_init(&max3100s[i]->conf_lock); 765 spi_set_drvdata(spi, max3100s[i]); 766 max3100s[i]->minor = i; 767 timer_setup(&max3100s[i]->timer, max3100_timeout, 0); 768 769 dev_dbg(&spi->dev, "%s: adding port %d\n", __func__, i); 770 + max3100s[i]->port.irq = spi->irq; 771 max3100s[i]->port.fifosize = 16; 772 max3100s[i]->port.ops = &max3100_ops; 773 max3100s[i]->port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF; 774 max3100s[i]->port.line = i; 775 max3100s[i]->port.type = PORT_MAX3100; 776 max3100s[i]->port.dev = &spi->dev; 777 + 778 + /* Read clock frequency from a property, uart_add_one_port() will fail if it's not set */ 779 + device_property_read_u32(dev, "clock-frequency", &max3100s[i]->port.uartclk); 780 + 781 retval = uart_add_one_port(&max3100_uart_driver, &max3100s[i]->port); 782 if (retval < 0) 783 + dev_err_probe(dev, retval, "uart_add_one_port failed for line %d\n", i); 784 785 /* set shutdown mode to save power. 
Will be woken-up on open */ 786 + max3100_sr(max3100s[i], MAX3100_WC | MAX3100_SHDN, &rx); 787 mutex_unlock(&max3100s_lock); 788 return 0; 789 } ··· 830 } 831 pr_debug("removing max3100 driver\n"); 832 uart_unregister_driver(&max3100_uart_driver); 833 + uart_driver_registered = 0; 834 835 mutex_unlock(&max3100s_lock); 836 } 837 838 static int max3100_suspend(struct device *dev) 839 { 840 struct max3100_port *s = dev_get_drvdata(dev); 841 + u16 rx; 842 843 dev_dbg(&s->spi->dev, "%s\n", __func__); 844 845 + disable_irq(s->port.irq); 846 847 s->suspending = 1; 848 uart_suspend_port(&max3100_uart_driver, &s->port); 849 850 + /* no HW suspend, so do SW one */ 851 + max3100_sr(s, MAX3100_WC | MAX3100_SHDN, &rx); 852 return 0; 853 } 854 ··· 865 866 dev_dbg(&s->spi->dev, "%s\n", __func__); 867 868 uart_resume_port(&max3100_uart_driver, &s->port); 869 s->suspending = 0; 870 871 + enable_irq(s->port.irq); 872 873 s->conf_commit = 1; 874 if (s->workqueue) ··· 879 return 0; 880 } 881 882 + static DEFINE_SIMPLE_DEV_PM_OPS(max3100_pm_ops, max3100_suspend, max3100_resume); 883 884 + static const struct spi_device_id max3100_spi_id[] = { 885 + { "max3100" }, 886 + { } 887 + }; 888 + MODULE_DEVICE_TABLE(spi, max3100_spi_id); 889 + 890 + static const struct of_device_id max3100_of_match[] = { 891 + { .compatible = "maxim,max3100" }, 892 + { } 893 + }; 894 + MODULE_DEVICE_TABLE(of, max3100_of_match); 895 896 static struct spi_driver max3100_driver = { 897 .driver = { 898 .name = "max3100", 899 + .of_match_table = max3100_of_match, 900 + .pm = pm_sleep_ptr(&max3100_pm_ops), 901 }, 902 .probe = max3100_probe, 903 .remove = max3100_remove, 904 + .id_table = max3100_spi_id, 905 }; 906 907 module_spi_driver(max3100_driver); ··· 900 MODULE_DESCRIPTION("MAX3100 driver"); 901 MODULE_AUTHOR("Christian Pellegrin <chripell@evolware.org>"); 902 MODULE_LICENSE("GPL");
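The max3100 rework above is representative of the smaller cleanups in this pull: board-file platform data gives way to device properties (device_property_read_u32() for "clock-frequency"), probe errors go through dev_err_probe(), and the hand-rolled #ifdef CONFIG_PM_SLEEP / MAX3100_PM_OPS dance is replaced by DEFINE_SIMPLE_DEV_PM_OPS() plus pm_sleep_ptr(). A minimal sketch of that boilerplate follows; it is not the driver code itself, and the example_* names are hypothetical.

/* Sketch only: "example_*" names are made up for illustration. */
#include <linux/device.h>
#include <linux/module.h>
#include <linux/pm.h>
#include <linux/property.h>
#include <linux/spi/spi.h>

static int example_suspend(struct device *dev)
{
	return 0;	/* quiesce the hardware here */
}

static int example_resume(struct device *dev)
{
	return 0;	/* bring the hardware back up */
}

/* One macro instead of an #ifdef CONFIG_PM_SLEEP block around the ops table. */
static DEFINE_SIMPLE_DEV_PM_OPS(example_pm_ops, example_suspend, example_resume);

static int example_probe(struct spi_device *spi)
{
	u32 clk = 0;

	/* Clock rate comes from DT/ACPI properties, not platform data. */
	device_property_read_u32(&spi->dev, "clock-frequency", &clk);

	/* dev_err_probe() logs and returns the error code in one step. */
	if (!clk)
		return dev_err_probe(&spi->dev, -EINVAL,
				     "clock-frequency property is required here\n");

	return 0;
}

static struct spi_driver example_driver = {
	.driver = {
		.name = "example-uart",
		/* Resolves to NULL when CONFIG_PM_SLEEP is disabled. */
		.pm = pm_sleep_ptr(&example_pm_ops),
	},
	.probe = example_probe,
};
module_spi_driver(example_driver);
MODULE_DESCRIPTION("illustrative sketch");
MODULE_LICENSE("GPL");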
+16 -21
drivers/tty/serial/max310x.c
··· 747 748 static void max310x_handle_tx(struct uart_port *port) 749 { 750 - struct circ_buf *xmit = &port->state->xmit; 751 - unsigned int txlen, to_send, until_end; 752 753 if (unlikely(port->x_char)) { 754 max310x_port_write(port, MAX310X_THR_REG, port->x_char); ··· 758 return; 759 } 760 761 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) 762 return; 763 764 - /* Get length of data pending in circular buffer */ 765 - to_send = uart_circ_chars_pending(xmit); 766 - until_end = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 767 - if (likely(to_send)) { 768 /* Limit to space available in TX FIFO */ 769 txlen = max310x_port_read(port, MAX310X_TXFIFOLVL_REG); 770 txlen = port->fifosize - txlen; 771 - to_send = (to_send > txlen) ? txlen : to_send; 772 773 - if (until_end < to_send) { 774 - /* 775 - * It's a circ buffer -- wrap around. 776 - * We could do that in one SPI transaction, but meh. 777 - */ 778 - max310x_batch_write(port, xmit->buf + xmit->tail, until_end); 779 - max310x_batch_write(port, xmit->buf, to_send - until_end); 780 - } else { 781 - max310x_batch_write(port, xmit->buf + xmit->tail, to_send); 782 - } 783 uart_xmit_advance(port, to_send); 784 } 785 786 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 787 uart_write_wakeup(port); 788 } 789 ··· 1473 .reg_bits = 8, 1474 .val_bits = 8, 1475 .write_flag_mask = MAX310X_WRITE_BIT, 1476 - .cache_type = REGCACHE_RBTREE, 1477 .max_register = MAX310X_REG_1F, 1478 .writeable_reg = max310x_reg_writeable, 1479 .volatile_reg = max310x_reg_volatile, ··· 1577 static struct regmap_config regcfg_i2c = { 1578 .reg_bits = 8, 1579 .val_bits = 8, 1580 - .cache_type = REGCACHE_RBTREE, 1581 .writeable_reg = max310x_reg_writeable, 1582 .volatile_reg = max310x_reg_volatile, 1583 .precious_reg = max310x_reg_precious,
··· 747 748 static void max310x_handle_tx(struct uart_port *port) 749 { 750 + struct tty_port *tport = &port->state->port; 751 + unsigned int txlen, to_send; 752 + unsigned char *tail; 753 754 if (unlikely(port->x_char)) { 755 max310x_port_write(port, MAX310X_THR_REG, port->x_char); ··· 757 return; 758 } 759 760 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) 761 return; 762 763 + /* 764 + * It's a circ buffer -- wrap around. 765 + * We could do that in one SPI transaction, but meh. 766 + */ 767 + while (!kfifo_is_empty(&tport->xmit_fifo)) { 768 /* Limit to space available in TX FIFO */ 769 txlen = max310x_port_read(port, MAX310X_TXFIFOLVL_REG); 770 txlen = port->fifosize - txlen; 771 + if (!txlen) 772 + break; 773 774 + to_send = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, txlen); 775 + max310x_batch_write(port, tail, to_send); 776 uart_xmit_advance(port, to_send); 777 } 778 779 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 780 uart_write_wakeup(port); 781 } 782 ··· 1478 .reg_bits = 8, 1479 .val_bits = 8, 1480 .write_flag_mask = MAX310X_WRITE_BIT, 1481 + .cache_type = REGCACHE_MAPLE, 1482 .max_register = MAX310X_REG_1F, 1483 .writeable_reg = max310x_reg_writeable, 1484 .volatile_reg = max310x_reg_volatile, ··· 1582 static struct regmap_config regcfg_i2c = { 1583 .reg_bits = 8, 1584 .val_bits = 8, 1585 + .cache_type = REGCACHE_MAPLE, 1586 .writeable_reg = max310x_reg_writeable, 1587 .volatile_reg = max310x_reg_volatile, 1588 .precious_reg = max310x_reg_precious,
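The max310x hunk above shows the core pattern of the circ_buf-to-kfifo conversion that runs through this series: instead of computing CIRC_CNT_TO_END() and issuing up to two writes to cover buffer wrap-around, kfifo_out_linear_ptr() returns a pointer to the longest linear run in the tty port's xmit_fifo, and uart_xmit_advance() consumes it afterwards (men_z135 below feeds the same helper into memcpy_toio(), and the regmap caches also move from REGCACHE_RBTREE to REGCACHE_MAPLE while at it). A condensed sketch of that TX loop, with a hypothetical example_hw_write() standing in for the hardware access:

#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>

/* Hypothetical stand-in for a hardware FIFO write (e.g. max310x_batch_write()). */
static void example_hw_write(struct uart_port *port, const unsigned char *buf,
			     unsigned int len)
{
}

static void example_handle_tx(struct uart_port *port, unsigned int fifo_space)
{
	struct tty_port *tport = &port->state->port;
	unsigned char *tail;
	unsigned int to_send;

	while (fifo_space && !kfifo_is_empty(&tport->xmit_fifo)) {
		/* Longest chunk available without wrapping; data stays in the fifo. */
		to_send = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, fifo_space);
		example_hw_write(port, tail, to_send);
		/* Now drop it from the fifo and account port->icount.tx. */
		uart_xmit_advance(port, to_send);
		fifo_space -= to_send;
	}

	if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
		uart_write_wakeup(port);
}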
+10 -16
drivers/tty/serial/men_z135_uart.c
··· 293 static void men_z135_handle_tx(struct men_z135_port *uart) 294 { 295 struct uart_port *port = &uart->port; 296 - struct circ_buf *xmit = &port->state->xmit; 297 u32 txc; 298 u32 wptr; 299 int qlen; 300 - int n; 301 - int txfree; 302 - int head; 303 - int tail; 304 - int s; 305 306 - if (uart_circ_empty(xmit)) 307 goto out; 308 309 if (uart_tx_stopped(port)) ··· 310 goto out; 311 312 /* calculate bytes to copy */ 313 - qlen = uart_circ_chars_pending(xmit); 314 if (qlen <= 0) 315 goto out; 316 ··· 342 if (n <= 0) 343 goto irq_en; 344 345 - head = xmit->head & (UART_XMIT_SIZE - 1); 346 - tail = xmit->tail & (UART_XMIT_SIZE - 1); 347 348 - s = ((head >= tail) ? head : UART_XMIT_SIZE) - tail; 349 - n = min(n, s); 350 - 351 - memcpy_toio(port->membase + MEN_Z135_TX_RAM, &xmit->buf[xmit->tail], n); 352 iowrite32(n & 0x3ff, port->membase + MEN_Z135_TX_CTRL); 353 uart_xmit_advance(port, n); 354 355 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 356 uart_write_wakeup(port); 357 358 irq_en: 359 - if (!uart_circ_empty(xmit)) 360 men_z135_reg_set(uart, MEN_Z135_CONF_REG, MEN_Z135_IER_TXCIEN); 361 else 362 men_z135_reg_clr(uart, MEN_Z135_CONF_REG, MEN_Z135_IER_TXCIEN);
··· 293 static void men_z135_handle_tx(struct men_z135_port *uart) 294 { 295 struct uart_port *port = &uart->port; 296 + struct tty_port *tport = &port->state->port; 297 + unsigned char *tail; 298 + unsigned int n, txfree; 299 u32 txc; 300 u32 wptr; 301 int qlen; 302 303 + if (kfifo_is_empty(&tport->xmit_fifo)) 304 goto out; 305 306 if (uart_tx_stopped(port)) ··· 313 goto out; 314 315 /* calculate bytes to copy */ 316 + qlen = kfifo_len(&tport->xmit_fifo); 317 if (qlen <= 0) 318 goto out; 319 ··· 345 if (n <= 0) 346 goto irq_en; 347 348 + n = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, 349 + min_t(unsigned int, UART_XMIT_SIZE, n)); 350 + memcpy_toio(port->membase + MEN_Z135_TX_RAM, tail, n); 351 352 iowrite32(n & 0x3ff, port->membase + MEN_Z135_TX_CTRL); 353 uart_xmit_advance(port, n); 354 355 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 356 uart_write_wakeup(port); 357 358 irq_en: 359 + if (!kfifo_is_empty(&tport->xmit_fifo)) 360 men_z135_reg_set(uart, MEN_Z135_CONF_REG, MEN_Z135_IER_TXCIEN); 361 else 362 men_z135_reg_clr(uart, MEN_Z135_CONF_REG, MEN_Z135_IER_TXCIEN);
+5 -7
drivers/tty/serial/meson_uart.c
··· 141 142 static void meson_uart_start_tx(struct uart_port *port) 143 { 144 - struct circ_buf *xmit = &port->state->xmit; 145 - unsigned int ch; 146 u32 val; 147 148 if (uart_tx_stopped(port)) { ··· 158 continue; 159 } 160 161 - if (uart_circ_empty(xmit)) 162 break; 163 164 - ch = xmit->buf[xmit->tail]; 165 writel(ch, port->membase + AML_UART_WFIFO); 166 - uart_xmit_advance(port, 1); 167 } 168 169 - if (!uart_circ_empty(xmit)) { 170 val = readl(port->membase + AML_UART_CONTROL); 171 val |= AML_UART_TX_INT_EN; 172 writel(val, port->membase + AML_UART_CONTROL); 173 } 174 175 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 176 uart_write_wakeup(port); 177 } 178
··· 141 142 static void meson_uart_start_tx(struct uart_port *port) 143 { 144 + struct tty_port *tport = &port->state->port; 145 + unsigned char ch; 146 u32 val; 147 148 if (uart_tx_stopped(port)) { ··· 158 continue; 159 } 160 161 + if (!uart_fifo_get(port, &ch)) 162 break; 163 164 writel(ch, port->membase + AML_UART_WFIFO); 165 } 166 167 + if (!kfifo_is_empty(&tport->xmit_fifo)) { 168 val = readl(port->membase + AML_UART_CONTROL); 169 val |= AML_UART_TX_INT_EN; 170 writel(val, port->membase + AML_UART_CONTROL); 171 } 172 173 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 174 uart_write_wakeup(port); 175 } 176
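meson is the simplest shape of the conversion, shared by mvebu, milbeaut, rda, pic32 and pmac_zilog further down: uart_fifo_get() pops a single byte from the xmit kfifo and bumps port->icount.tx in one call, so the old xmit->buf[xmit->tail] / uart_xmit_advance(port, 1) pair disappears. Roughly, with hypothetical example_* names and register:

#include <linux/io.h>
#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>

static void example_tx_pio(struct uart_port *port, void __iomem *tx_reg,
			   unsigned int budget)
{
	struct tty_port *tport = &port->state->port;
	unsigned char ch;

	/* uart_fifo_get() returns false once the xmit kfifo is drained. */
	while (budget-- && uart_fifo_get(port, &ch))
		writel(ch, tx_reg);

	if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
		uart_write_wakeup(port);	/* ask the line discipline for more data */
}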
+8 -7
drivers/tty/serial/milbeaut_usio.c
··· 72 73 static void mlb_usio_tx_chars(struct uart_port *port) 74 { 75 - struct circ_buf *xmit = &port->state->xmit; 76 int count; 77 78 writew(readw(port->membase + MLB_USIO_REG_FCR) & ~MLB_USIO_FCR_FTIE, ··· 87 port->x_char = 0; 88 return; 89 } 90 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 91 mlb_usio_stop_tx(port); 92 return; 93 } ··· 96 (readw(port->membase + MLB_USIO_REG_FBYTE) & 0xff); 97 98 do { 99 - writew(xmit->buf[xmit->tail], port->membase + MLB_USIO_REG_DR); 100 101 - uart_xmit_advance(port, 1); 102 - if (uart_circ_empty(xmit)) 103 break; 104 105 } while (--count > 0); 106 107 writew(readw(port->membase + MLB_USIO_REG_FCR) & ~MLB_USIO_FCR_FDRQ, ··· 111 writeb(readb(port->membase + MLB_USIO_REG_SCR) | MLB_USIO_SCR_TBIE, 112 port->membase + MLB_USIO_REG_SCR); 113 114 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 115 uart_write_wakeup(port); 116 117 - if (uart_circ_empty(xmit)) 118 mlb_usio_stop_tx(port); 119 } 120
··· 72 73 static void mlb_usio_tx_chars(struct uart_port *port) 74 { 75 + struct tty_port *tport = &port->state->port; 76 int count; 77 78 writew(readw(port->membase + MLB_USIO_REG_FCR) & ~MLB_USIO_FCR_FTIE, ··· 87 port->x_char = 0; 88 return; 89 } 90 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 91 mlb_usio_stop_tx(port); 92 return; 93 } ··· 96 (readw(port->membase + MLB_USIO_REG_FBYTE) & 0xff); 97 98 do { 99 + unsigned char ch; 100 101 + if (!uart_fifo_get(port, &ch)) 102 break; 103 104 + writew(ch, port->membase + MLB_USIO_REG_DR); 105 + port->icount.tx++; 106 } while (--count > 0); 107 108 writew(readw(port->membase + MLB_USIO_REG_FCR) & ~MLB_USIO_FCR_FDRQ, ··· 110 writeb(readb(port->membase + MLB_USIO_REG_SCR) | MLB_USIO_SCR_TBIE, 111 port->membase + MLB_USIO_REG_SCR); 112 113 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 114 uart_write_wakeup(port); 115 116 + if (kfifo_is_empty(&tport->xmit_fifo)) 117 mlb_usio_stop_tx(port); 118 } 119
+67 -55
drivers/tty/serial/msm_serial.c
··· 161 struct msm_dma { 162 struct dma_chan *chan; 163 enum dma_data_direction dir; 164 - dma_addr_t phys; 165 - unsigned char *virt; 166 dma_cookie_t cookie; 167 u32 enable_bit; 168 - unsigned int count; 169 struct dma_async_tx_descriptor *desc; 170 }; 171 ··· 254 unsigned int mapped; 255 u32 val; 256 257 - mapped = dma->count; 258 - dma->count = 0; 259 260 dmaengine_terminate_all(dma->chan); 261 ··· 274 val &= ~dma->enable_bit; 275 msm_write(port, val, UARTDM_DMEN); 276 277 - if (mapped) 278 - dma_unmap_single(dev, dma->phys, mapped, dma->dir); 279 } 280 281 static void msm_release_dma(struct msm_port *msm_port) ··· 299 if (dma->chan) { 300 msm_stop_dma(&msm_port->uart, dma); 301 dma_release_channel(dma->chan); 302 - kfree(dma->virt); 303 } 304 305 memset(dma, 0, sizeof(*dma)); ··· 371 372 of_property_read_u32(dev->of_node, "qcom,rx-crci", &crci); 373 374 - dma->virt = kzalloc(UARTDM_RX_SIZE, GFP_KERNEL); 375 - if (!dma->virt) 376 goto rel_rx; 377 378 memset(&conf, 0, sizeof(conf)); ··· 399 400 return; 401 err: 402 - kfree(dma->virt); 403 rel_rx: 404 dma_release_channel(dma->chan); 405 no_rx: ··· 434 struct msm_dma *dma = &msm_port->tx_dma; 435 436 /* Already started in DMA mode */ 437 - if (dma->count) 438 return; 439 440 msm_port->imr |= MSM_UART_IMR_TXLEV; ··· 452 { 453 struct msm_port *msm_port = args; 454 struct uart_port *port = &msm_port->uart; 455 - struct circ_buf *xmit = &port->state->xmit; 456 struct msm_dma *dma = &msm_port->tx_dma; 457 struct dma_tx_state state; 458 unsigned long flags; ··· 462 uart_port_lock_irqsave(port, &flags); 463 464 /* Already stopped */ 465 - if (!dma->count) 466 goto done; 467 468 dmaengine_tx_status(dma->chan, dma->cookie, &state); 469 470 - dma_unmap_single(port->dev, dma->phys, dma->count, dma->dir); 471 472 val = msm_read(port, UARTDM_DMEN); 473 val &= ~dma->enable_bit; ··· 478 msm_write(port, MSM_UART_CR_TX_ENABLE, MSM_UART_CR); 479 } 480 481 - count = dma->count - state.residue; 482 uart_xmit_advance(port, count); 483 - dma->count = 0; 484 485 /* Restore "Tx FIFO below watermark" interrupt */ 486 msm_port->imr |= MSM_UART_IMR_TXLEV; 487 msm_write(port, msm_port->imr, MSM_UART_IMR); 488 489 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 490 uart_write_wakeup(port); 491 492 msm_handle_tx(port); ··· 496 497 static int msm_handle_tx_dma(struct msm_port *msm_port, unsigned int count) 498 { 499 - struct circ_buf *xmit = &msm_port->uart.state->xmit; 500 struct uart_port *port = &msm_port->uart; 501 struct msm_dma *dma = &msm_port->tx_dma; 502 - void *cpu_addr; 503 int ret; 504 u32 val; 505 506 - cpu_addr = &xmit->buf[xmit->tail]; 507 508 - dma->phys = dma_map_single(port->dev, cpu_addr, count, dma->dir); 509 - ret = dma_mapping_error(port->dev, dma->phys); 510 - if (ret) 511 - return ret; 512 513 - dma->desc = dmaengine_prep_slave_single(dma->chan, dma->phys, 514 - count, DMA_MEM_TO_DEV, 515 DMA_PREP_INTERRUPT | 516 DMA_PREP_FENCE); 517 if (!dma->desc) { ··· 536 msm_port->imr &= ~MSM_UART_IMR_TXLEV; 537 msm_write(port, msm_port->imr, MSM_UART_IMR); 538 539 - dma->count = count; 540 - 541 val = msm_read(port, UARTDM_DMEN); 542 val |= dma->enable_bit; 543 ··· 550 dma_async_issue_pending(dma->chan); 551 return 0; 552 unmap: 553 - dma_unmap_single(port->dev, dma->phys, count, dma->dir); 554 return ret; 555 } 556 ··· 569 uart_port_lock_irqsave(port, &flags); 570 571 /* Already stopped */ 572 - if (!dma->count) 573 goto done; 574 575 val = msm_read(port, UARTDM_DMEN); ··· 586 587 port->icount.rx += count; 588 589 - dma->count = 0; 590 591 - 
dma_unmap_single(port->dev, dma->phys, UARTDM_RX_SIZE, dma->dir); 592 593 for (i = 0; i < count; i++) { 594 char flag = TTY_NORMAL; 595 596 - if (msm_port->break_detected && dma->virt[i] == 0) { 597 port->icount.brk++; 598 flag = TTY_BREAK; 599 msm_port->break_detected = false; ··· 604 if (!(port->read_status_mask & MSM_UART_SR_RX_BREAK)) 605 flag = TTY_NORMAL; 606 607 - sysrq = uart_prepare_sysrq_char(port, dma->virt[i]); 608 if (!sysrq) 609 - tty_insert_flip_char(tport, dma->virt[i], flag); 610 } 611 612 msm_start_rx_dma(msm_port); ··· 630 if (!dma->chan) 631 return; 632 633 - dma->phys = dma_map_single(uart->dev, dma->virt, 634 UARTDM_RX_SIZE, dma->dir); 635 - ret = dma_mapping_error(uart->dev, dma->phys); 636 if (ret) 637 goto sw_mode; 638 639 - dma->desc = dmaengine_prep_slave_single(dma->chan, dma->phys, 640 UARTDM_RX_SIZE, DMA_DEV_TO_MEM, 641 DMA_PREP_INTERRUPT); 642 if (!dma->desc) ··· 664 665 msm_write(uart, msm_port->imr, MSM_UART_IMR); 666 667 - dma->count = UARTDM_RX_SIZE; 668 669 dma_async_issue_pending(dma->chan); 670 ··· 684 685 return; 686 unmap: 687 - dma_unmap_single(uart->dev, dma->phys, UARTDM_RX_SIZE, dma->dir); 688 689 sw_mode: 690 /* ··· 847 848 static void msm_handle_tx_pio(struct uart_port *port, unsigned int tx_count) 849 { 850 - struct circ_buf *xmit = &port->state->xmit; 851 struct msm_port *msm_port = to_msm_port(port); 852 unsigned int num_chars; 853 unsigned int tf_pointer = 0; 854 void __iomem *tf; ··· 862 msm_reset_dm_count(port, tx_count); 863 864 while (tf_pointer < tx_count) { 865 - int i; 866 - char buf[4] = { 0 }; 867 868 if (!(msm_read(port, MSM_UART_SR) & MSM_UART_SR_TX_READY)) 869 break; ··· 873 else 874 num_chars = 1; 875 876 - for (i = 0; i < num_chars; i++) 877 - buf[i] = xmit->buf[xmit->tail + i]; 878 - 879 iowrite32_rep(tf, buf, 1); 880 - uart_xmit_advance(port, num_chars); 881 tf_pointer += num_chars; 882 } 883 884 /* disable tx interrupts if nothing more to send */ 885 - if (uart_circ_empty(xmit)) 886 msm_stop_tx(port); 887 888 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 889 uart_write_wakeup(port); 890 } 891 892 static void msm_handle_tx(struct uart_port *port) 893 { 894 struct msm_port *msm_port = to_msm_port(port); 895 - struct circ_buf *xmit = &msm_port->uart.state->xmit; 896 struct msm_dma *dma = &msm_port->tx_dma; 897 unsigned int pio_count, dma_count, dma_min; 898 char buf[4] = { 0 }; ··· 913 return; 914 } 915 916 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 917 msm_stop_tx(port); 918 return; 919 } 920 921 - pio_count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 922 - dma_count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 923 924 dma_min = 1; /* Always DMA */ 925 if (msm_port->is_uartdm > UARTDM_1P3) { ··· 967 } 968 969 if (misr & (MSM_UART_IMR_RXLEV | MSM_UART_IMR_RXSTALE)) { 970 - if (dma->count) { 971 val = MSM_UART_CR_CMD_STALE_EVENT_DISABLE; 972 msm_write(port, val, MSM_UART_CR); 973 val = MSM_UART_CR_CMD_RESET_STALE_INT;
··· 161 struct msm_dma { 162 struct dma_chan *chan; 163 enum dma_data_direction dir; 164 + union { 165 + struct { 166 + dma_addr_t phys; 167 + unsigned char *virt; 168 + unsigned int count; 169 + } rx; 170 + struct scatterlist tx_sg; 171 + }; 172 dma_cookie_t cookie; 173 u32 enable_bit; 174 struct dma_async_tx_descriptor *desc; 175 }; 176 ··· 249 unsigned int mapped; 250 u32 val; 251 252 + if (dma->dir == DMA_TO_DEVICE) { 253 + mapped = sg_dma_len(&dma->tx_sg); 254 + } else { 255 + mapped = dma->rx.count; 256 + dma->rx.count = 0; 257 + } 258 259 dmaengine_terminate_all(dma->chan); 260 ··· 265 val &= ~dma->enable_bit; 266 msm_write(port, val, UARTDM_DMEN); 267 268 + if (mapped) { 269 + if (dma->dir == DMA_TO_DEVICE) { 270 + dma_unmap_sg(dev, &dma->tx_sg, 1, dma->dir); 271 + sg_init_table(&dma->tx_sg, 1); 272 + } else 273 + dma_unmap_single(dev, dma->rx.phys, mapped, dma->dir); 274 + } 275 } 276 277 static void msm_release_dma(struct msm_port *msm_port) ··· 285 if (dma->chan) { 286 msm_stop_dma(&msm_port->uart, dma); 287 dma_release_channel(dma->chan); 288 + kfree(dma->rx.virt); 289 } 290 291 memset(dma, 0, sizeof(*dma)); ··· 357 358 of_property_read_u32(dev->of_node, "qcom,rx-crci", &crci); 359 360 + dma->rx.virt = kzalloc(UARTDM_RX_SIZE, GFP_KERNEL); 361 + if (!dma->rx.virt) 362 goto rel_rx; 363 364 memset(&conf, 0, sizeof(conf)); ··· 385 386 return; 387 err: 388 + kfree(dma->rx.virt); 389 rel_rx: 390 dma_release_channel(dma->chan); 391 no_rx: ··· 420 struct msm_dma *dma = &msm_port->tx_dma; 421 422 /* Already started in DMA mode */ 423 + if (sg_dma_len(&dma->tx_sg)) 424 return; 425 426 msm_port->imr |= MSM_UART_IMR_TXLEV; ··· 438 { 439 struct msm_port *msm_port = args; 440 struct uart_port *port = &msm_port->uart; 441 + struct tty_port *tport = &port->state->port; 442 struct msm_dma *dma = &msm_port->tx_dma; 443 struct dma_tx_state state; 444 unsigned long flags; ··· 448 uart_port_lock_irqsave(port, &flags); 449 450 /* Already stopped */ 451 + if (!sg_dma_len(&dma->tx_sg)) 452 goto done; 453 454 dmaengine_tx_status(dma->chan, dma->cookie, &state); 455 456 + dma_unmap_sg(port->dev, &dma->tx_sg, 1, dma->dir); 457 458 val = msm_read(port, UARTDM_DMEN); 459 val &= ~dma->enable_bit; ··· 464 msm_write(port, MSM_UART_CR_TX_ENABLE, MSM_UART_CR); 465 } 466 467 + count = sg_dma_len(&dma->tx_sg) - state.residue; 468 uart_xmit_advance(port, count); 469 + sg_init_table(&dma->tx_sg, 1); 470 471 /* Restore "Tx FIFO below watermark" interrupt */ 472 msm_port->imr |= MSM_UART_IMR_TXLEV; 473 msm_write(port, msm_port->imr, MSM_UART_IMR); 474 475 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 476 uart_write_wakeup(port); 477 478 msm_handle_tx(port); ··· 482 483 static int msm_handle_tx_dma(struct msm_port *msm_port, unsigned int count) 484 { 485 struct uart_port *port = &msm_port->uart; 486 + struct tty_port *tport = &port->state->port; 487 struct msm_dma *dma = &msm_port->tx_dma; 488 + unsigned int mapped; 489 int ret; 490 u32 val; 491 492 + sg_init_table(&dma->tx_sg, 1); 493 + kfifo_dma_out_prepare(&tport->xmit_fifo, &dma->tx_sg, 1, count); 494 495 + mapped = dma_map_sg(port->dev, &dma->tx_sg, 1, dma->dir); 496 + if (!mapped) { 497 + ret = -EIO; 498 + goto zero_sg; 499 + } 500 501 + dma->desc = dmaengine_prep_slave_sg(dma->chan, &dma->tx_sg, 1, 502 + DMA_MEM_TO_DEV, 503 DMA_PREP_INTERRUPT | 504 DMA_PREP_FENCE); 505 if (!dma->desc) { ··· 520 msm_port->imr &= ~MSM_UART_IMR_TXLEV; 521 msm_write(port, msm_port->imr, MSM_UART_IMR); 522 523 val = msm_read(port, UARTDM_DMEN); 524 val |= dma->enable_bit; 525 
··· 536 dma_async_issue_pending(dma->chan); 537 return 0; 538 unmap: 539 + dma_unmap_sg(port->dev, &dma->tx_sg, 1, dma->dir); 540 + zero_sg: 541 + sg_init_table(&dma->tx_sg, 1); 542 return ret; 543 } 544 ··· 553 uart_port_lock_irqsave(port, &flags); 554 555 /* Already stopped */ 556 + if (!dma->rx.count) 557 goto done; 558 559 val = msm_read(port, UARTDM_DMEN); ··· 570 571 port->icount.rx += count; 572 573 + dma->rx.count = 0; 574 575 + dma_unmap_single(port->dev, dma->rx.phys, UARTDM_RX_SIZE, dma->dir); 576 577 for (i = 0; i < count; i++) { 578 char flag = TTY_NORMAL; 579 580 + if (msm_port->break_detected && dma->rx.virt[i] == 0) { 581 port->icount.brk++; 582 flag = TTY_BREAK; 583 msm_port->break_detected = false; ··· 588 if (!(port->read_status_mask & MSM_UART_SR_RX_BREAK)) 589 flag = TTY_NORMAL; 590 591 + sysrq = uart_prepare_sysrq_char(port, dma->rx.virt[i]); 592 if (!sysrq) 593 + tty_insert_flip_char(tport, dma->rx.virt[i], flag); 594 } 595 596 msm_start_rx_dma(msm_port); ··· 614 if (!dma->chan) 615 return; 616 617 + dma->rx.phys = dma_map_single(uart->dev, dma->rx.virt, 618 UARTDM_RX_SIZE, dma->dir); 619 + ret = dma_mapping_error(uart->dev, dma->rx.phys); 620 if (ret) 621 goto sw_mode; 622 623 + dma->desc = dmaengine_prep_slave_single(dma->chan, dma->rx.phys, 624 UARTDM_RX_SIZE, DMA_DEV_TO_MEM, 625 DMA_PREP_INTERRUPT); 626 if (!dma->desc) ··· 648 649 msm_write(uart, msm_port->imr, MSM_UART_IMR); 650 651 + dma->rx.count = UARTDM_RX_SIZE; 652 653 dma_async_issue_pending(dma->chan); 654 ··· 668 669 return; 670 unmap: 671 + dma_unmap_single(uart->dev, dma->rx.phys, UARTDM_RX_SIZE, dma->dir); 672 673 sw_mode: 674 /* ··· 831 832 static void msm_handle_tx_pio(struct uart_port *port, unsigned int tx_count) 833 { 834 struct msm_port *msm_port = to_msm_port(port); 835 + struct tty_port *tport = &port->state->port; 836 unsigned int num_chars; 837 unsigned int tf_pointer = 0; 838 void __iomem *tf; ··· 846 msm_reset_dm_count(port, tx_count); 847 848 while (tf_pointer < tx_count) { 849 + unsigned char buf[4] = { 0 }; 850 851 if (!(msm_read(port, MSM_UART_SR) & MSM_UART_SR_TX_READY)) 852 break; ··· 858 else 859 num_chars = 1; 860 861 + num_chars = uart_fifo_out(port, buf, num_chars); 862 iowrite32_rep(tf, buf, 1); 863 tf_pointer += num_chars; 864 } 865 866 /* disable tx interrupts if nothing more to send */ 867 + if (kfifo_is_empty(&tport->xmit_fifo)) 868 msm_stop_tx(port); 869 870 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 871 uart_write_wakeup(port); 872 } 873 874 static void msm_handle_tx(struct uart_port *port) 875 { 876 struct msm_port *msm_port = to_msm_port(port); 877 + struct tty_port *tport = &port->state->port; 878 struct msm_dma *dma = &msm_port->tx_dma; 879 unsigned int pio_count, dma_count, dma_min; 880 char buf[4] = { 0 }; ··· 901 return; 902 } 903 904 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 905 msm_stop_tx(port); 906 return; 907 } 908 909 + dma_count = pio_count = kfifo_out_linear(&tport->xmit_fifo, NULL, 910 + UART_XMIT_SIZE); 911 912 dma_min = 1; /* Always DMA */ 913 if (msm_port->is_uartdm > UARTDM_1P3) { ··· 955 } 956 957 if (misr & (MSM_UART_IMR_RXLEV | MSM_UART_IMR_RXSTALE)) { 958 + if (dma->rx.count) { 959 val = MSM_UART_CR_CMD_STALE_EVENT_DISABLE; 960 msm_write(port, val, MSM_UART_CR); 961 val = MSM_UART_CR_CMD_RESET_STALE_INT;
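msm_serial is the most invasive conversion in the set: the per-channel DMA bookkeeping becomes a union (the RX side keeps its bounce buffer, physical address and count, while the TX side carries a scatterlist), and TX DMA now maps the kfifo contents directly via kfifo_dma_out_prepare() plus dma_map_sg() instead of dma_map_single() on the circ_buf tail. A stripped-down sketch of that preparation step, with hypothetical example_* names and most error handling omitted:

#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/kfifo.h>
#include <linux/scatterlist.h>
#include <linux/serial_core.h>
#include <linux/tty.h>

static int example_prep_tx_dma(struct uart_port *port, struct dma_chan *chan,
			       struct scatterlist *sg, unsigned int count)
{
	struct tty_port *tport = &port->state->port;
	struct dma_async_tx_descriptor *desc;

	sg_init_table(sg, 1);
	/* Describe the linear part of the xmit kfifo; nothing is copied. */
	kfifo_dma_out_prepare(&tport->xmit_fifo, sg, 1, count);

	if (!dma_map_sg(port->dev, sg, 1, DMA_TO_DEVICE))
		return -EIO;

	desc = dmaengine_prep_slave_sg(chan, sg, 1, DMA_MEM_TO_DEV,
				       DMA_PREP_INTERRUPT | DMA_PREP_FENCE);
	if (!desc) {
		dma_unmap_sg(port->dev, sg, 1, DMA_TO_DEVICE);
		return -EIO;
	}

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}

On completion the driver still advances the fifo for the bytes actually transferred, as the callback in the hunk above does: sg_dma_len() minus the DMA residue, fed to uart_xmit_advance().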
+3 -5
drivers/tty/serial/mvebu-uart.c
··· 219 static void mvebu_uart_start_tx(struct uart_port *port) 220 { 221 unsigned int ctl; 222 - struct circ_buf *xmit = &port->state->xmit; 223 224 - if (IS_EXTENDED(port) && !uart_circ_empty(xmit)) { 225 - writel(xmit->buf[xmit->tail], port->membase + UART_TSH(port)); 226 - uart_xmit_advance(port, 1); 227 - } 228 229 ctl = readl(port->membase + UART_INTR(port)); 230 ctl |= CTRL_TX_RDY_INT(port);
··· 219 static void mvebu_uart_start_tx(struct uart_port *port) 220 { 221 unsigned int ctl; 222 + unsigned char c; 223 224 + if (IS_EXTENDED(port) && uart_fifo_get(port, &c)) 225 + writel(c, port->membase + UART_TSH(port)); 226 227 ctl = readl(port->membase + UART_INTR(port)); 228 ctl |= CTRL_TX_RDY_INT(port);
+6 -17
drivers/tty/serial/mxs-auart.c
··· 517 static void dma_tx_callback(void *param) 518 { 519 struct mxs_auart_port *s = param; 520 - struct circ_buf *xmit = &s->port.state->xmit; 521 522 dma_unmap_sg(s->dev, &s->tx_sgl, 1, DMA_TO_DEVICE); 523 ··· 526 smp_mb__after_atomic(); 527 528 /* wake up the possible processes. */ 529 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 530 uart_write_wakeup(&s->port); 531 532 mxs_auart_tx_chars(s); ··· 568 569 static void mxs_auart_tx_chars(struct mxs_auart_port *s) 570 { 571 - struct circ_buf *xmit = &s->port.state->xmit; 572 bool pending; 573 u8 ch; 574 575 if (auart_dma_enabled(s)) { 576 u32 i = 0; 577 - int size; 578 void *buffer = s->tx_dma_buf; 579 580 if (test_and_set_bit(MXS_AUART_DMA_TX_SYNC, &s->flags)) 581 return; 582 583 - while (!uart_circ_empty(xmit) && !uart_tx_stopped(&s->port)) { 584 - size = min_t(u32, UART_XMIT_SIZE - i, 585 - CIRC_CNT_TO_END(xmit->head, 586 - xmit->tail, 587 - UART_XMIT_SIZE)); 588 - memcpy(buffer + i, xmit->buf + xmit->tail, size); 589 - xmit->tail = (xmit->tail + size) & (UART_XMIT_SIZE - 1); 590 - 591 - i += size; 592 - if (i >= UART_XMIT_SIZE) 593 - break; 594 - } 595 - 596 if (uart_tx_stopped(&s->port)) 597 mxs_auart_stop_tx(&s->port); 598 599 if (i) { 600 mxs_auart_dma_tx(s, i);
··· 517 static void dma_tx_callback(void *param) 518 { 519 struct mxs_auart_port *s = param; 520 + struct tty_port *tport = &s->port.state->port; 521 522 dma_unmap_sg(s->dev, &s->tx_sgl, 1, DMA_TO_DEVICE); 523 ··· 526 smp_mb__after_atomic(); 527 528 /* wake up the possible processes. */ 529 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 530 uart_write_wakeup(&s->port); 531 532 mxs_auart_tx_chars(s); ··· 568 569 static void mxs_auart_tx_chars(struct mxs_auart_port *s) 570 { 571 + struct tty_port *tport = &s->port.state->port; 572 bool pending; 573 u8 ch; 574 575 if (auart_dma_enabled(s)) { 576 u32 i = 0; 577 void *buffer = s->tx_dma_buf; 578 579 if (test_and_set_bit(MXS_AUART_DMA_TX_SYNC, &s->flags)) 580 return; 581 582 if (uart_tx_stopped(&s->port)) 583 mxs_auart_stop_tx(&s->port); 584 + else 585 + i = kfifo_out(&tport->xmit_fifo, buffer, 586 + UART_XMIT_SIZE); 587 588 if (i) { 589 mxs_auart_dma_tx(s, i);
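mxs-auart had its own wrap-aware copy loop into a driver-owned DMA bounce buffer; with the kfifo that collapses to a single kfifo_out(), which both copies and consumes up to UART_XMIT_SIZE bytes. A sketch of the idea (the example name is hypothetical; the returned length is what gets handed to the DMA engine):

#include <linux/kfifo.h>
#include <linux/serial.h>
#include <linux/serial_core.h>
#include <linux/tty.h>

/* Fill a driver-owned DMA bounce buffer from the xmit kfifo in one call. */
static unsigned int example_fill_bounce(struct uart_port *port, void *buf)
{
	struct tty_port *tport = &port->state->port;

	if (uart_tx_stopped(port))
		return 0;

	/* Copies and removes the bytes; no CIRC_CNT_TO_END() wrap handling. */
	return kfifo_out(&tport->xmit_fifo, buf, UART_XMIT_SIZE);
}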
-1
drivers/tty/serial/omap-serial.c
··· 1093 1094 /* Wait up to 1s for flow control if necessary */ 1095 if (up->port.flags & UPF_CONS_FLOW) { 1096 - tmout = 1000000; 1097 for (tmout = 1000000; tmout; tmout--) { 1098 unsigned int msr = serial_in(up, UART_MSR); 1099
··· 1093 1094 /* Wait up to 1s for flow control if necessary */ 1095 if (up->port.flags & UPF_CONS_FLOW) { 1096 for (tmout = 1000000; tmout; tmout--) { 1097 unsigned int msr = serial_in(up, UART_MSR); 1098
+10 -11
drivers/tty/serial/pch_uart.c
··· 808 static unsigned int handle_tx(struct eg20t_port *priv) 809 { 810 struct uart_port *port = &priv->port; 811 - struct circ_buf *xmit = &port->state->xmit; 812 int fifo_size; 813 int tx_empty; 814 ··· 830 fifo_size--; 831 } 832 833 - while (!uart_tx_stopped(port) && !uart_circ_empty(xmit) && fifo_size) { 834 - iowrite8(xmit->buf[xmit->tail], priv->membase + PCH_UART_THR); 835 - uart_xmit_advance(port, 1); 836 fifo_size--; 837 tx_empty = 0; 838 } ··· 850 static unsigned int dma_handle_tx(struct eg20t_port *priv) 851 { 852 struct uart_port *port = &priv->port; 853 - struct circ_buf *xmit = &port->state->xmit; 854 struct scatterlist *sg; 855 int nent; 856 int fifo_size; 857 struct dma_async_tx_descriptor *desc; 858 int num; 859 int i; 860 - int bytes; 861 int size; 862 int rem; 863 ··· 886 fifo_size--; 887 } 888 889 - bytes = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 890 if (!bytes) { 891 dev_dbg(priv->port.dev, "%s 0 bytes return\n", __func__); 892 pch_uart_hal_disable_interrupt(priv, PCH_UART_HAL_TX_INT); ··· 920 921 for (i = 0; i < num; i++, sg++) { 922 if (i == (num - 1)) 923 - sg_set_page(sg, virt_to_page(xmit->buf), 924 rem, fifo_size * i); 925 else 926 - sg_set_page(sg, virt_to_page(xmit->buf), 927 size, fifo_size * i); 928 } 929 ··· 937 priv->nent = nent; 938 939 for (i = 0; i < nent; i++, sg++) { 940 - sg->offset = (xmit->tail & (UART_XMIT_SIZE - 1)) + 941 - fifo_size * i; 942 sg_dma_address(sg) = (sg_dma_address(sg) & 943 ~(UART_XMIT_SIZE - 1)) + sg->offset; 944 if (i == (nent - 1))
··· 808 static unsigned int handle_tx(struct eg20t_port *priv) 809 { 810 struct uart_port *port = &priv->port; 811 + unsigned char ch; 812 int fifo_size; 813 int tx_empty; 814 ··· 830 fifo_size--; 831 } 832 833 + while (!uart_tx_stopped(port) && fifo_size && 834 + uart_fifo_get(port, &ch)) { 835 + iowrite8(ch, priv->membase + PCH_UART_THR); 836 fifo_size--; 837 tx_empty = 0; 838 } ··· 850 static unsigned int dma_handle_tx(struct eg20t_port *priv) 851 { 852 struct uart_port *port = &priv->port; 853 + struct tty_port *tport = &port->state->port; 854 struct scatterlist *sg; 855 int nent; 856 int fifo_size; 857 struct dma_async_tx_descriptor *desc; 858 + unsigned int bytes, tail; 859 int num; 860 int i; 861 int size; 862 int rem; 863 ··· 886 fifo_size--; 887 } 888 889 + bytes = kfifo_out_linear(&tport->xmit_fifo, &tail, UART_XMIT_SIZE); 890 if (!bytes) { 891 dev_dbg(priv->port.dev, "%s 0 bytes return\n", __func__); 892 pch_uart_hal_disable_interrupt(priv, PCH_UART_HAL_TX_INT); ··· 920 921 for (i = 0; i < num; i++, sg++) { 922 if (i == (num - 1)) 923 + sg_set_page(sg, virt_to_page(tport->xmit_buf), 924 rem, fifo_size * i); 925 else 926 + sg_set_page(sg, virt_to_page(tport->xmit_buf), 927 size, fifo_size * i); 928 } 929 ··· 937 priv->nent = nent; 938 939 for (i = 0; i < nent; i++, sg++) { 940 + sg->offset = tail + fifo_size * i; 941 sg_dma_address(sg) = (sg_dma_address(sg) & 942 ~(UART_XMIT_SIZE - 1)) + sg->offset; 943 if (i == (nent - 1))
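pch_uart demonstrates the third helper, kfifo_out_linear(): rather than a pointer it returns the length of the linear run plus its offset ("tail") within the fifo storage, and because that storage is the tty port's xmit_buf page, the driver can keep building its scatterlist with virt_to_page() exactly as before, minus the xmit->tail masking. Roughly (example_* names hypothetical):

#include <linux/kfifo.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/serial.h>
#include <linux/serial_core.h>
#include <linux/tty.h>

static unsigned int example_tx_sg_from_fifo(struct uart_port *port,
					    struct scatterlist *sg)
{
	struct tty_port *tport = &port->state->port;
	unsigned int bytes, tail;

	/* Length of the linear chunk and its byte offset into the fifo buffer. */
	bytes = kfifo_out_linear(&tport->xmit_fifo, &tail, UART_XMIT_SIZE);
	if (!bytes)
		return 0;

	sg_init_table(sg, 1);
	/* The kfifo storage is the tty_port's xmit_buf page. */
	sg_set_page(sg, virt_to_page(tport->xmit_buf), bytes, tail);
	return bytes;
}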
+8 -9
drivers/tty/serial/pic32_uart.c
··· 8 * Sorin-Andrei Pistirica <andrei.pistirica@microchip.com> 9 */ 10 11 #include <linux/kernel.h> 12 #include <linux/platform_device.h> 13 #include <linux/of.h> 14 #include <linux/of_irq.h> 15 - #include <linux/of_gpio.h> 16 #include <linux/init.h> 17 #include <linux/module.h> 18 #include <linux/slab.h> ··· 342 static void pic32_uart_do_tx(struct uart_port *port) 343 { 344 struct pic32_sport *sport = to_pic32_sport(port); 345 - struct circ_buf *xmit = &port->state->xmit; 346 unsigned int max_count = PIC32_UART_TX_FIFO_DEPTH; 347 348 if (port->x_char) { ··· 357 return; 358 } 359 360 - if (uart_circ_empty(xmit)) 361 goto txq_empty; 362 363 /* keep stuffing chars into uart tx buffer ··· 371 */ 372 while (!(PIC32_UART_STA_UTXBF & 373 pic32_uart_readl(sport, PIC32_UART_STA))) { 374 - unsigned int c = xmit->buf[xmit->tail]; 375 376 pic32_uart_writel(sport, PIC32_UART_TX, c); 377 378 - uart_xmit_advance(port, 1); 379 - if (uart_circ_empty(xmit)) 380 - break; 381 if (--max_count == 0) 382 break; 383 } 384 385 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 386 uart_write_wakeup(port); 387 388 - if (uart_circ_empty(xmit)) 389 goto txq_empty; 390 391 return;
··· 8 * Sorin-Andrei Pistirica <andrei.pistirica@microchip.com> 9 */ 10 11 + #include <linux/gpio/consumer.h> 12 #include <linux/kernel.h> 13 #include <linux/platform_device.h> 14 #include <linux/of.h> 15 #include <linux/of_irq.h> 16 #include <linux/init.h> 17 #include <linux/module.h> 18 #include <linux/slab.h> ··· 342 static void pic32_uart_do_tx(struct uart_port *port) 343 { 344 struct pic32_sport *sport = to_pic32_sport(port); 345 + struct tty_port *tport = &port->state->port; 346 unsigned int max_count = PIC32_UART_TX_FIFO_DEPTH; 347 348 if (port->x_char) { ··· 357 return; 358 } 359 360 + if (kfifo_is_empty(&tport->xmit_fifo)) 361 goto txq_empty; 362 363 /* keep stuffing chars into uart tx buffer ··· 371 */ 372 while (!(PIC32_UART_STA_UTXBF & 373 pic32_uart_readl(sport, PIC32_UART_STA))) { 374 + unsigned char c; 375 376 + if (!uart_fifo_get(port, &c)) 377 + break; 378 pic32_uart_writel(sport, PIC32_UART_TX, c); 379 380 if (--max_count == 0) 381 break; 382 } 383 384 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 385 uart_write_wakeup(port); 386 387 + if (kfifo_is_empty(&tport->xmit_fifo)) 388 goto txq_empty; 389 390 return;
+17 -16
drivers/tty/serial/pmac_zilog.c
··· 333 334 static void pmz_transmit_chars(struct uart_pmac_port *uap) 335 { 336 - struct circ_buf *xmit; 337 338 if (ZS_IS_CONS(uap)) { 339 unsigned char status = read_zsreg(uap, R0); ··· 385 386 if (uap->port.state == NULL) 387 goto ack_tx_int; 388 - xmit = &uap->port.state->xmit; 389 - if (uart_circ_empty(xmit)) { 390 uart_write_wakeup(&uap->port); 391 goto ack_tx_int; 392 } ··· 394 goto ack_tx_int; 395 396 uap->flags |= PMACZILOG_FLAG_TX_ACTIVE; 397 - write_zsdata(uap, xmit->buf[xmit->tail]); 398 zssync(uap); 399 400 - uart_xmit_advance(&uap->port, 1); 401 - 402 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 403 uart_write_wakeup(&uap->port); 404 405 return; ··· 606 port->icount.tx++; 607 port->x_char = 0; 608 } else { 609 - struct circ_buf *xmit = &port->state->xmit; 610 611 - if (uart_circ_empty(xmit)) 612 return; 613 - write_zsdata(uap, xmit->buf[xmit->tail]); 614 zssync(uap); 615 - uart_xmit_advance(port, 1); 616 617 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 618 uart_write_wakeup(&uap->port); 619 } 620 } ··· 1681 memset(uap, 0, sizeof(struct uart_pmac_port)); 1682 } 1683 1684 - static int __init pmz_attach(struct platform_device *pdev) 1685 { 1686 struct uart_pmac_port *uap; 1687 int i; ··· 1700 return uart_add_one_port(&pmz_uart_reg, &uap->port); 1701 } 1702 1703 - static void __exit pmz_detach(struct platform_device *pdev) 1704 { 1705 struct uart_pmac_port *uap = platform_get_drvdata(pdev); 1706 ··· 1775 #else 1776 1777 static struct platform_driver pmz_driver = { 1778 - .remove_new = __exit_p(pmz_detach), 1779 .driver = { 1780 .name = "scc", 1781 }, ··· 1824 #ifdef CONFIG_PPC_PMAC 1825 return macio_register_driver(&pmz_driver); 1826 #else 1827 - return platform_driver_probe(&pmz_driver, pmz_attach); 1828 #endif 1829 } 1830
··· 333 334 static void pmz_transmit_chars(struct uart_pmac_port *uap) 335 { 336 + struct tty_port *tport; 337 + unsigned char ch; 338 339 if (ZS_IS_CONS(uap)) { 340 unsigned char status = read_zsreg(uap, R0); ··· 384 385 if (uap->port.state == NULL) 386 goto ack_tx_int; 387 + tport = &uap->port.state->port; 388 + if (kfifo_is_empty(&tport->xmit_fifo)) { 389 uart_write_wakeup(&uap->port); 390 goto ack_tx_int; 391 } ··· 393 goto ack_tx_int; 394 395 uap->flags |= PMACZILOG_FLAG_TX_ACTIVE; 396 + WARN_ON(!uart_fifo_get(&uap->port, &ch)); 397 + write_zsdata(uap, ch); 398 zssync(uap); 399 400 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 401 uart_write_wakeup(&uap->port); 402 403 return; ··· 606 port->icount.tx++; 607 port->x_char = 0; 608 } else { 609 + struct tty_port *tport = &port->state->port; 610 + unsigned char ch; 611 612 + if (!uart_fifo_get(&uap->port, &ch)) 613 return; 614 + write_zsdata(uap, ch); 615 zssync(uap); 616 617 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 618 uart_write_wakeup(&uap->port); 619 } 620 } ··· 1681 memset(uap, 0, sizeof(struct uart_pmac_port)); 1682 } 1683 1684 + static int pmz_attach(struct platform_device *pdev) 1685 { 1686 struct uart_pmac_port *uap; 1687 int i; ··· 1700 return uart_add_one_port(&pmz_uart_reg, &uap->port); 1701 } 1702 1703 + static void pmz_detach(struct platform_device *pdev) 1704 { 1705 struct uart_pmac_port *uap = platform_get_drvdata(pdev); 1706 ··· 1775 #else 1776 1777 static struct platform_driver pmz_driver = { 1778 + .probe = pmz_attach, 1779 + .remove_new = pmz_detach, 1780 .driver = { 1781 .name = "scc", 1782 }, ··· 1823 #ifdef CONFIG_PPC_PMAC 1824 return macio_register_driver(&pmz_driver); 1825 #else 1826 + return platform_driver_register(&pmz_driver); 1827 #endif 1828 } 1829
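Besides the uart_fifo_get() conversion, pmac_zilog drops the __init/__exit annotations from pmz_attach()/pmz_detach() so the non-PPC_PMAC path can register an ordinary .probe/.remove_new pair with platform_driver_register() instead of the one-shot platform_driver_probe(). The general shape, sketched with hypothetical names:

#include <linux/module.h>
#include <linux/platform_device.h>

static int example_attach(struct platform_device *pdev)
{
	return 0;	/* no __init: the device may bind after boot */
}

static void example_detach(struct platform_device *pdev)
{
}

static struct platform_driver example_platform_driver = {
	.probe = example_attach,
	.remove_new = example_detach,	/* void-returning remove callback */
	.driver = {
		.name = "example-scc",
	},
};
module_platform_driver(example_platform_driver);
MODULE_DESCRIPTION("illustrative sketch");
MODULE_LICENSE("GPL");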
+16 -18
drivers/tty/serial/qcom_geni_serial.c
··· 505 */ 506 qcom_geni_serial_poll_tx_done(uport); 507 508 - if (!uart_circ_empty(&uport->state->xmit)) { 509 irq_en = readl(uport->membase + SE_GENI_M_IRQ_EN); 510 writel(irq_en | M_TX_FIFO_WATERMARK_EN, 511 uport->membase + SE_GENI_M_IRQ_EN); ··· 620 static void qcom_geni_serial_start_tx_dma(struct uart_port *uport) 621 { 622 struct qcom_geni_serial_port *port = to_dev_port(uport); 623 - struct circ_buf *xmit = &uport->state->xmit; 624 unsigned int xmit_size; 625 int ret; 626 627 if (port->tx_dma_addr) 628 return; 629 630 - if (uart_circ_empty(xmit)) 631 return; 632 633 - xmit_size = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 634 635 qcom_geni_serial_setup_tx(uport, xmit_size); 636 637 - ret = geni_se_tx_dma_prep(&port->se, &xmit->buf[xmit->tail], 638 - xmit_size, &port->tx_dma_addr); 639 if (ret) { 640 dev_err(uport->dev, "unable to start TX SE DMA: %d\n", ret); 641 qcom_geni_serial_stop_tx_dma(uport); ··· 855 unsigned int chunk) 856 { 857 struct qcom_geni_serial_port *port = to_dev_port(uport); 858 - struct circ_buf *xmit = &uport->state->xmit; 859 - unsigned int tx_bytes, c, remaining = chunk; 860 u8 buf[BYTES_PER_FIFO_WORD]; 861 862 while (remaining) { 863 memset(buf, 0, sizeof(buf)); 864 tx_bytes = min(remaining, BYTES_PER_FIFO_WORD); 865 866 - for (c = 0; c < tx_bytes ; c++) { 867 - buf[c] = xmit->buf[xmit->tail]; 868 - uart_xmit_advance(uport, 1); 869 - } 870 871 iowrite32_rep(uport->membase + SE_GENI_TX_FIFOn, buf, 1); 872 ··· 875 bool done, bool active) 876 { 877 struct qcom_geni_serial_port *port = to_dev_port(uport); 878 - struct circ_buf *xmit = &uport->state->xmit; 879 size_t avail; 880 size_t pending; 881 u32 status; ··· 888 if (active) 889 pending = port->tx_remaining; 890 else 891 - pending = uart_circ_chars_pending(xmit); 892 893 /* All data has been transmitted and acknowledged as received */ 894 if (!pending && !status && done) { ··· 931 uport->membase + SE_GENI_M_IRQ_EN); 932 } 933 934 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 935 uart_write_wakeup(uport); 936 } 937 938 static void qcom_geni_serial_handle_tx_dma(struct uart_port *uport) 939 { 940 struct qcom_geni_serial_port *port = to_dev_port(uport); 941 - struct circ_buf *xmit = &uport->state->xmit; 942 943 uart_xmit_advance(uport, port->tx_remaining); 944 geni_se_tx_dma_unprep(&port->se, port->tx_dma_addr, port->tx_remaining); 945 port->tx_dma_addr = 0; 946 port->tx_remaining = 0; 947 948 - if (!uart_circ_empty(xmit)) 949 qcom_geni_serial_start_tx_dma(uport); 950 951 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 952 uart_write_wakeup(uport); 953 } 954
··· 505 */ 506 qcom_geni_serial_poll_tx_done(uport); 507 508 + if (!kfifo_is_empty(&uport->state->port.xmit_fifo)) { 509 irq_en = readl(uport->membase + SE_GENI_M_IRQ_EN); 510 writel(irq_en | M_TX_FIFO_WATERMARK_EN, 511 uport->membase + SE_GENI_M_IRQ_EN); ··· 620 static void qcom_geni_serial_start_tx_dma(struct uart_port *uport) 621 { 622 struct qcom_geni_serial_port *port = to_dev_port(uport); 623 + struct tty_port *tport = &uport->state->port; 624 unsigned int xmit_size; 625 + u8 *tail; 626 int ret; 627 628 if (port->tx_dma_addr) 629 return; 630 631 + if (kfifo_is_empty(&tport->xmit_fifo)) 632 return; 633 634 + xmit_size = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, 635 + UART_XMIT_SIZE); 636 637 qcom_geni_serial_setup_tx(uport, xmit_size); 638 639 + ret = geni_se_tx_dma_prep(&port->se, tail, xmit_size, 640 + &port->tx_dma_addr); 641 if (ret) { 642 dev_err(uport->dev, "unable to start TX SE DMA: %d\n", ret); 643 qcom_geni_serial_stop_tx_dma(uport); ··· 853 unsigned int chunk) 854 { 855 struct qcom_geni_serial_port *port = to_dev_port(uport); 856 + unsigned int tx_bytes, remaining = chunk; 857 u8 buf[BYTES_PER_FIFO_WORD]; 858 859 while (remaining) { 860 memset(buf, 0, sizeof(buf)); 861 tx_bytes = min(remaining, BYTES_PER_FIFO_WORD); 862 863 + tx_bytes = uart_fifo_out(uport, buf, tx_bytes); 864 865 iowrite32_rep(uport->membase + SE_GENI_TX_FIFOn, buf, 1); 866 ··· 877 bool done, bool active) 878 { 879 struct qcom_geni_serial_port *port = to_dev_port(uport); 880 + struct tty_port *tport = &uport->state->port; 881 size_t avail; 882 size_t pending; 883 u32 status; ··· 890 if (active) 891 pending = port->tx_remaining; 892 else 893 + pending = kfifo_len(&tport->xmit_fifo); 894 895 /* All data has been transmitted and acknowledged as received */ 896 if (!pending && !status && done) { ··· 933 uport->membase + SE_GENI_M_IRQ_EN); 934 } 935 936 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 937 uart_write_wakeup(uport); 938 } 939 940 static void qcom_geni_serial_handle_tx_dma(struct uart_port *uport) 941 { 942 struct qcom_geni_serial_port *port = to_dev_port(uport); 943 + struct tty_port *tport = &uport->state->port; 944 945 uart_xmit_advance(uport, port->tx_remaining); 946 geni_se_tx_dma_unprep(&port->se, port->tx_dma_addr, port->tx_remaining); 947 port->tx_dma_addr = 0; 948 port->tx_remaining = 0; 949 950 + if (!kfifo_is_empty(&tport->xmit_fifo)) 951 qcom_geni_serial_start_tx_dma(uport); 952 953 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 954 uart_write_wakeup(uport); 955 } 956
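qcom_geni uses both flavours of the new API: the PIO path packs bytes into 32-bit FIFO words with uart_fifo_out(), which copies out of the kfifo and updates icount.tx, while the DMA path hands geni_se_tx_dma_prep() a pointer obtained from kfifo_out_linear_ptr(). A sketch of the word-packing side (example names are hypothetical; 4-byte FIFO words are assumed, matching BYTES_PER_FIFO_WORD above):

#include <linux/io.h>
#include <linux/kfifo.h>
#include <linux/minmax.h>
#include <linux/serial_core.h>
#include <linux/string.h>
#include <linux/tty.h>
#include <linux/types.h>

#define EXAMPLE_BYTES_PER_WORD 4	/* assumed 32-bit TX FIFO word */

static void example_tx_words(struct uart_port *port, void __iomem *tx_fifo,
			     unsigned int chunk)
{
	u8 buf[EXAMPLE_BYTES_PER_WORD];
	unsigned int tx_bytes;

	while (chunk) {
		memset(buf, 0, sizeof(buf));
		tx_bytes = min_t(unsigned int, chunk, sizeof(buf));

		/* Copies out of the xmit kfifo and accounts port->icount.tx. */
		tx_bytes = uart_fifo_out(port, buf, tx_bytes);
		if (!tx_bytes)
			break;

		iowrite32_rep(tx_fifo, buf, 1);	/* push one FIFO word */
		chunk -= tx_bytes;
	}
}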
+6 -11
drivers/tty/serial/rda-uart.c
··· 330 331 static void rda_uart_send_chars(struct uart_port *port) 332 { 333 - struct circ_buf *xmit = &port->state->xmit; 334 - unsigned int ch; 335 u32 val; 336 337 if (uart_tx_stopped(port)) ··· 347 port->x_char = 0; 348 } 349 350 - while (rda_uart_read(port, RDA_UART_STATUS) & RDA_UART_TX_FIFO_MASK) { 351 - if (uart_circ_empty(xmit)) 352 - break; 353 - 354 - ch = xmit->buf[xmit->tail]; 355 rda_uart_write(port, ch, RDA_UART_RXTX_BUFFER); 356 - uart_xmit_advance(port, 1); 357 - } 358 359 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 360 uart_write_wakeup(port); 361 362 - if (!uart_circ_empty(xmit)) { 363 /* Re-enable Tx FIFO interrupt */ 364 val = rda_uart_read(port, RDA_UART_IRQ_MASK); 365 val |= RDA_UART_TX_DATA_NEEDED;
··· 330 331 static void rda_uart_send_chars(struct uart_port *port) 332 { 333 + struct tty_port *tport = &port->state->port; 334 + unsigned char ch; 335 u32 val; 336 337 if (uart_tx_stopped(port)) ··· 347 port->x_char = 0; 348 } 349 350 + while ((rda_uart_read(port, RDA_UART_STATUS) & RDA_UART_TX_FIFO_MASK) && 351 + uart_fifo_get(port, &ch)) 352 rda_uart_write(port, ch, RDA_UART_RXTX_BUFFER); 353 354 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 355 uart_write_wakeup(port); 356 357 + if (!kfifo_is_empty(&tport->xmit_fifo)) { 358 /* Re-enable Tx FIFO interrupt */ 359 val = rda_uart_read(port, RDA_UART_IRQ_MASK); 360 val |= RDA_UART_TX_DATA_NEEDED;
+28 -26
drivers/tty/serial/samsung_tty.c
··· 329 { 330 struct s3c24xx_uart_port *ourport = args; 331 struct uart_port *port = &ourport->port; 332 - struct circ_buf *xmit = &port->state->xmit; 333 struct s3c24xx_uart_dma *dma = ourport->dma; 334 struct dma_tx_state state; 335 unsigned long flags; ··· 348 uart_xmit_advance(port, count); 349 ourport->tx_in_progress = 0; 350 351 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 352 uart_write_wakeup(port); 353 354 s3c24xx_serial_start_next_tx(ourport); ··· 431 } 432 433 static int s3c24xx_serial_start_tx_dma(struct s3c24xx_uart_port *ourport, 434 - unsigned int count) 435 { 436 - struct uart_port *port = &ourport->port; 437 - struct circ_buf *xmit = &port->state->xmit; 438 struct s3c24xx_uart_dma *dma = ourport->dma; 439 440 if (ourport->tx_mode != S3C24XX_TX_DMA) 441 enable_tx_dma(ourport); 442 443 dma->tx_size = count & ~(dma_get_cache_alignment() - 1); 444 - dma->tx_transfer_addr = dma->tx_addr + xmit->tail; 445 446 dma_sync_single_for_device(dma->tx_chan->device->dev, 447 dma->tx_transfer_addr, dma->tx_size, ··· 466 static void s3c24xx_serial_start_next_tx(struct s3c24xx_uart_port *ourport) 467 { 468 struct uart_port *port = &ourport->port; 469 - struct circ_buf *xmit = &port->state->xmit; 470 - unsigned long count; 471 472 /* Get data size up to the end of buffer */ 473 - count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 474 475 if (!count) { 476 s3c24xx_serial_stop_tx(port); ··· 479 480 if (!ourport->dma || !ourport->dma->tx_chan || 481 count < ourport->min_dma_size || 482 - xmit->tail & (dma_get_cache_alignment() - 1)) 483 s3c24xx_serial_start_tx_pio(ourport); 484 else 485 - s3c24xx_serial_start_tx_dma(ourport, count); 486 } 487 488 static void s3c24xx_serial_start_tx(struct uart_port *port) 489 { 490 struct s3c24xx_uart_port *ourport = to_ourport(port); 491 - struct circ_buf *xmit = &port->state->xmit; 492 493 if (!ourport->tx_enabled) { 494 if (port->flags & UPF_CONS_FLOW) ··· 500 } 501 502 if (ourport->dma && ourport->dma->tx_chan) { 503 - if (!uart_circ_empty(xmit) && !ourport->tx_in_progress) 504 s3c24xx_serial_start_next_tx(ourport); 505 } 506 } ··· 867 static void s3c24xx_serial_tx_chars(struct s3c24xx_uart_port *ourport) 868 { 869 struct uart_port *port = &ourport->port; 870 - struct circ_buf *xmit = &port->state->xmit; 871 - int count, dma_count = 0; 872 873 - count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 874 875 if (ourport->dma && ourport->dma->tx_chan && 876 count >= ourport->min_dma_size) { 877 int align = dma_get_cache_alignment() - 878 - (xmit->tail & (dma_get_cache_alignment() - 1)); 879 if (count - align >= ourport->min_dma_size) { 880 dma_count = count - align; 881 count = align; 882 } 883 } 884 ··· 894 * stopped, disable the uart and exit 895 */ 896 897 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 898 s3c24xx_serial_stop_tx(port); 899 return; 900 } ··· 906 dma_count = 0; 907 } 908 909 - while (!uart_circ_empty(xmit) && count > 0) { 910 - if (rd_regl(port, S3C2410_UFSTAT) & ourport->info->tx_fifofull) 911 break; 912 913 - wr_reg(port, S3C2410_UTXH, xmit->buf[xmit->tail]); 914 - uart_xmit_advance(port, 1); 915 count--; 916 } 917 918 if (!count && dma_count) { 919 - s3c24xx_serial_start_tx_dma(ourport, dma_count); 920 return; 921 } 922 923 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 924 uart_write_wakeup(port); 925 926 - if (uart_circ_empty(xmit)) 927 s3c24xx_serial_stop_tx(port); 928 } 929 ··· 1119 1120 /* TX buffer */ 1121 dma->tx_addr = dma_map_single(dma->tx_chan->device->dev, 1122 - 
p->port.state->xmit.buf, UART_XMIT_SIZE, 1123 DMA_TO_DEVICE); 1124 if (dma_mapping_error(dma->tx_chan->device->dev, dma->tx_addr)) { 1125 reason = "DMA mapping error for TX buffer";
··· 329 { 330 struct s3c24xx_uart_port *ourport = args; 331 struct uart_port *port = &ourport->port; 332 + struct tty_port *tport = &port->state->port; 333 struct s3c24xx_uart_dma *dma = ourport->dma; 334 struct dma_tx_state state; 335 unsigned long flags; ··· 348 uart_xmit_advance(port, count); 349 ourport->tx_in_progress = 0; 350 351 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 352 uart_write_wakeup(port); 353 354 s3c24xx_serial_start_next_tx(ourport); ··· 431 } 432 433 static int s3c24xx_serial_start_tx_dma(struct s3c24xx_uart_port *ourport, 434 + unsigned int count, unsigned int tail) 435 { 436 struct s3c24xx_uart_dma *dma = ourport->dma; 437 438 if (ourport->tx_mode != S3C24XX_TX_DMA) 439 enable_tx_dma(ourport); 440 441 dma->tx_size = count & ~(dma_get_cache_alignment() - 1); 442 + dma->tx_transfer_addr = dma->tx_addr + tail; 443 444 dma_sync_single_for_device(dma->tx_chan->device->dev, 445 dma->tx_transfer_addr, dma->tx_size, ··· 468 static void s3c24xx_serial_start_next_tx(struct s3c24xx_uart_port *ourport) 469 { 470 struct uart_port *port = &ourport->port; 471 + struct tty_port *tport = &port->state->port; 472 + unsigned int count, tail; 473 474 /* Get data size up to the end of buffer */ 475 + count = kfifo_out_linear(&tport->xmit_fifo, &tail, UART_XMIT_SIZE); 476 477 if (!count) { 478 s3c24xx_serial_stop_tx(port); ··· 481 482 if (!ourport->dma || !ourport->dma->tx_chan || 483 count < ourport->min_dma_size || 484 + tail & (dma_get_cache_alignment() - 1)) 485 s3c24xx_serial_start_tx_pio(ourport); 486 else 487 + s3c24xx_serial_start_tx_dma(ourport, count, tail); 488 } 489 490 static void s3c24xx_serial_start_tx(struct uart_port *port) 491 { 492 struct s3c24xx_uart_port *ourport = to_ourport(port); 493 + struct tty_port *tport = &port->state->port; 494 495 if (!ourport->tx_enabled) { 496 if (port->flags & UPF_CONS_FLOW) ··· 502 } 503 504 if (ourport->dma && ourport->dma->tx_chan) { 505 + if (!kfifo_is_empty(&tport->xmit_fifo) && 506 + !ourport->tx_in_progress) 507 s3c24xx_serial_start_next_tx(ourport); 508 } 509 } ··· 868 static void s3c24xx_serial_tx_chars(struct s3c24xx_uart_port *ourport) 869 { 870 struct uart_port *port = &ourport->port; 871 + struct tty_port *tport = &port->state->port; 872 + unsigned int count, dma_count = 0, tail; 873 874 + count = kfifo_out_linear(&tport->xmit_fifo, &tail, UART_XMIT_SIZE); 875 876 if (ourport->dma && ourport->dma->tx_chan && 877 count >= ourport->min_dma_size) { 878 int align = dma_get_cache_alignment() - 879 + (tail & (dma_get_cache_alignment() - 1)); 880 if (count - align >= ourport->min_dma_size) { 881 dma_count = count - align; 882 count = align; 883 + tail += align; 884 } 885 } 886 ··· 894 * stopped, disable the uart and exit 895 */ 896 897 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 898 s3c24xx_serial_stop_tx(port); 899 return; 900 } ··· 906 dma_count = 0; 907 } 908 909 + while (!(rd_regl(port, S3C2410_UFSTAT) & ourport->info->tx_fifofull)) { 910 + unsigned char ch; 911 + 912 + if (!uart_fifo_get(port, &ch)) 913 break; 914 915 + wr_reg(port, S3C2410_UTXH, ch); 916 count--; 917 } 918 919 if (!count && dma_count) { 920 + s3c24xx_serial_start_tx_dma(ourport, dma_count, tail); 921 return; 922 } 923 924 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 925 uart_write_wakeup(port); 926 927 + if (kfifo_is_empty(&tport->xmit_fifo)) 928 s3c24xx_serial_stop_tx(port); 929 } 930 ··· 1118 1119 /* TX buffer */ 1120 dma->tx_addr = dma_map_single(dma->tx_chan->device->dev, 1121 + p->port.state->port.xmit_buf, 1122 + 
UART_XMIT_SIZE, 1123 DMA_TO_DEVICE); 1124 if (dma_mapping_error(dma->tx_chan->device->dev, dma->tx_addr)) { 1125 reason = "DMA mapping error for TX buffer";
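The samsung_tty.c hunks above are one instance of the tree-wide switch from the driver-private circ_buf to the kfifo now kept in struct tty_port. A minimal sketch of the same TX pattern, assuming the 6.10 helpers kfifo_out_linear(), uart_fifo_get() and uart_xmit_advance(), with hypothetical my_hw_*() accessors standing in for the real driver callbacks:

#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>
#include <linux/types.h>

/* Hypothetical hardware accessors, stand-ins for the real driver's ops. */
bool my_hw_tx_full(struct uart_port *port);
void my_hw_write_char(struct uart_port *port, unsigned char ch);
void my_hw_submit_dma(struct uart_port *port, unsigned int tail, unsigned int count);

/* PIO path: uart_fifo_get() dequeues one byte and accounts it in icount.tx,
 * which is why the explicit uart_xmit_advance(port, 1) calls go away above.
 */
void my_tx_pio(struct uart_port *port)
{
	struct tty_port *tport = &port->state->port;
	unsigned char ch;

	while (!my_hw_tx_full(port) && uart_fifo_get(port, &ch))
		my_hw_write_char(port, ch);

	if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
		uart_write_wakeup(port);
}

/* DMA path: peek at the linear (non-wrapping) part of the fifo; the data is
 * only consumed later, via uart_xmit_advance(), in the completion handler.
 */
void my_tx_dma(struct uart_port *port)
{
	struct tty_port *tport = &port->state->port;
	unsigned int count, tail;

	count = kfifo_out_linear(&tport->xmit_fifo, &tail, UART_XMIT_SIZE);
	if (count)
		my_hw_submit_dma(port, tail, count);
}

The key difference from the circ_buf days is that nothing advances the tail implicitly: PIO advances through uart_fifo_get(), DMA through uart_xmit_advance() in the completion callback, exactly as the converted handlers above do.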
+7 -6
drivers/tty/serial/sb1250-duart.c
··· 382 static void sbd_transmit_chars(struct sbd_port *sport) 383 { 384 struct uart_port *uport = &sport->port; 385 - struct circ_buf *xmit = &sport->port.state->xmit; 386 unsigned int mask; 387 int stop_tx; 388 ··· 396 } 397 398 /* If nothing to do or stopped or hardware stopped. */ 399 - stop_tx = (uart_circ_empty(xmit) || uart_tx_stopped(&sport->port)); 400 401 /* Send char. */ 402 if (!stop_tx) { 403 - write_sbdchn(sport, R_DUART_TX_HOLD, xmit->buf[xmit->tail]); 404 - uart_xmit_advance(&sport->port, 1); 405 406 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 407 uart_write_wakeup(&sport->port); 408 } 409 410 /* Are we are done? */ 411 - if (stop_tx || uart_circ_empty(xmit)) { 412 /* Disable tx interrupts. */ 413 mask = read_sbdshr(sport, R_DUART_IMRREG((uport->line) % 2)); 414 mask &= ~M_DUART_IMR_TX;
··· 382 static void sbd_transmit_chars(struct sbd_port *sport) 383 { 384 struct uart_port *uport = &sport->port; 385 + struct tty_port *tport = &sport->port.state->port; 386 + unsigned char ch; 387 unsigned int mask; 388 int stop_tx; 389 ··· 395 } 396 397 /* If nothing to do or stopped or hardware stopped. */ 398 + stop_tx = uart_tx_stopped(&sport->port) || 399 + !uart_fifo_get(&sport->port, &ch); 400 401 /* Send char. */ 402 if (!stop_tx) { 403 + write_sbdchn(sport, R_DUART_TX_HOLD, ch); 404 405 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 406 uart_write_wakeup(&sport->port); 407 } 408 409 /* Are we are done? */ 410 + if (stop_tx || kfifo_is_empty(&tport->xmit_fifo)) { 411 /* Disable tx interrupts. */ 412 mask = read_sbdshr(sport, R_DUART_IMRREG((uport->line) % 2)); 413 mask &= ~M_DUART_IMR_TX;
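The sb1250-duart hunk above leans on the fact that uart_fifo_get() folds the "anything pending?" test and the dequeue into one call. Roughly, and only as a sketch rather than the verbatim serial_core.h helper, it behaves like:

#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>

unsigned int my_fifo_get(struct uart_port *up, unsigned char *ch)
{
	struct tty_port *tport = &up->state->port;
	unsigned int taken = kfifo_get(&tport->xmit_fifo, ch);

	if (taken)		/* one byte dequeued, account for it */
		up->icount.tx++;

	return taken;		/* 0 when the xmit fifo is empty */
}

So deriving stop_tx from its return value both detects an empty fifo and fetches the byte that is then written to R_DUART_TX_HOLD.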
+75 -228
drivers/tty/serial/sc16is7xx.c
··· 1 // SPDX-License-Identifier: GPL-2.0+ 2 /* 3 - * SC16IS7xx tty serial driver - Copyright (C) 2014 GridPoint 4 - * Author: Jon Ringle <jringle@gridpoint.com> 5 * 6 - * Based on max310x.c, by Alexander Shiyan <shc_work@mail.ru> 7 */ 8 9 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 10 11 - #include <linux/bitops.h> 12 #include <linux/clk.h> 13 #include <linux/delay.h> 14 #include <linux/device.h> 15 #include <linux/gpio/driver.h> 16 - #include <linux/i2c.h> 17 #include <linux/mod_devicetable.h> 18 #include <linux/module.h> 19 #include <linux/property.h> 20 #include <linux/regmap.h> 21 #include <linux/serial_core.h> 22 #include <linux/serial.h> 23 #include <linux/tty.h> 24 #include <linux/tty_flip.h> 25 - #include <linux/spi/spi.h> 26 #include <linux/uaccess.h> 27 #include <linux/units.h> 28 - #include <uapi/linux/sched/types.h> 29 30 - #define SC16IS7XX_NAME "sc16is7xx" 31 #define SC16IS7XX_MAX_DEVS 8 32 - #define SC16IS7XX_MAX_PORTS 2 /* Maximum number of UART ports per IC. */ 33 34 /* SC16IS7XX register definitions */ 35 #define SC16IS7XX_RHR_REG (0x00) /* RX FIFO */ ··· 305 306 307 /* Misc definitions */ 308 - #define SC16IS7XX_SPI_READ_BIT BIT(7) 309 #define SC16IS7XX_FIFO_SIZE (64) 310 #define SC16IS7XX_GPIOS_PER_BANK 4 311 - 312 - struct sc16is7xx_devtype { 313 - char name[10]; 314 - int nr_gpio; 315 - int nr_uart; 316 - }; 317 318 #define SC16IS7XX_RECONF_MD (1 << 0) 319 #define SC16IS7XX_RECONF_IER (1 << 1) ··· 345 struct sc16is7xx_one p[]; 346 }; 347 348 - static DECLARE_BITMAP(sc16is7xx_lines, SC16IS7XX_MAX_DEVS); 349 350 static struct uart_driver sc16is7xx_uart = { 351 .owner = THIS_MODULE, ··· 488 sc16is7xx_ier_clear(port, SC16IS7XX_IER_RDI_BIT); 489 } 490 491 - static const struct sc16is7xx_devtype sc16is74x_devtype = { 492 .name = "SC16IS74X", 493 .nr_gpio = 0, 494 .nr_uart = 1, 495 }; 496 497 - static const struct sc16is7xx_devtype sc16is750_devtype = { 498 .name = "SC16IS750", 499 .nr_gpio = 8, 500 .nr_uart = 1, 501 }; 502 503 - static const struct sc16is7xx_devtype sc16is752_devtype = { 504 .name = "SC16IS752", 505 .nr_gpio = 8, 506 .nr_uart = 2, 507 }; 508 509 - static const struct sc16is7xx_devtype sc16is760_devtype = { 510 .name = "SC16IS760", 511 .nr_gpio = 8, 512 .nr_uart = 1, 513 }; 514 515 - static const struct sc16is7xx_devtype sc16is762_devtype = { 516 .name = "SC16IS762", 517 .nr_gpio = 8, 518 .nr_uart = 2, 519 }; 520 521 static bool sc16is7xx_regmap_volatile(struct device *dev, unsigned int reg) 522 { ··· 677 static void sc16is7xx_handle_tx(struct uart_port *port) 678 { 679 struct sc16is7xx_port *s = dev_get_drvdata(port->dev); 680 - struct circ_buf *xmit = &port->state->xmit; 681 - unsigned int txlen, to_send, i; 682 unsigned long flags; 683 684 if (unlikely(port->x_char)) { 685 sc16is7xx_port_write(port, SC16IS7XX_THR_REG, port->x_char); ··· 688 return; 689 } 690 691 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 692 uart_port_lock_irqsave(port, &flags); 693 sc16is7xx_stop_tx(port); 694 uart_port_unlock_irqrestore(port, flags); 695 return; 696 } 697 698 - /* Get length of data pending in circular buffer */ 699 - to_send = uart_circ_chars_pending(xmit); 700 - if (likely(to_send)) { 701 - /* Limit to space available in TX FIFO */ 702 - txlen = sc16is7xx_port_read(port, SC16IS7XX_TXLVL_REG); 703 - if (txlen > SC16IS7XX_FIFO_SIZE) { 704 - dev_err_ratelimited(port->dev, 705 - "chip reports %d free bytes in TX fifo, but it only has %d", 706 - txlen, SC16IS7XX_FIFO_SIZE); 707 - txlen = 0; 708 - } 709 - to_send = (to_send > txlen) ? 
txlen : to_send; 710 - 711 - /* Convert to linear buffer */ 712 - for (i = 0; i < to_send; ++i) { 713 - s->buf[i] = xmit->buf[xmit->tail]; 714 - uart_xmit_advance(port, 1); 715 - } 716 - 717 - sc16is7xx_fifo_write(port, s->buf, to_send); 718 } 719 720 uart_port_lock_irqsave(port, &flags); 721 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 722 uart_write_wakeup(port); 723 724 - if (uart_circ_empty(xmit)) 725 sc16is7xx_stop_tx(port); 726 else 727 sc16is7xx_ier_set(port, SC16IS7XX_IER_THRI_BIT); ··· 1454 .delay_rts_after_send = 1, /* Not supported but keep returning -EINVAL */ 1455 }; 1456 1457 - static int sc16is7xx_probe(struct device *dev, 1458 - const struct sc16is7xx_devtype *devtype, 1459 - struct regmap *regmaps[], int irq) 1460 { 1461 unsigned long freq = 0, *pfreq = dev_get_platdata(dev); 1462 unsigned int val; 1463 u32 uartclk = 0; 1464 int i, ret; 1465 struct sc16is7xx_port *s; 1466 1467 for (i = 0; i < devtype->nr_uart; i++) 1468 if (IS_ERR(regmaps[i])) ··· 1527 regmap_write(regmaps[0], SC16IS7XX_IOCONTROL_REG, 1528 SC16IS7XX_IOCONTROL_SRESET_BIT); 1529 1530 for (i = 0; i < devtype->nr_uart; ++i) { 1531 - s->p[i].port.line = find_first_zero_bit(sc16is7xx_lines, 1532 - SC16IS7XX_MAX_DEVS); 1533 - if (s->p[i].port.line >= SC16IS7XX_MAX_DEVS) { 1534 - ret = -ERANGE; 1535 goto out_ports; 1536 - } 1537 1538 /* Initialize port data */ 1539 s->p[i].port.dev = dev; ··· 1585 if (ret) 1586 goto out_ports; 1587 1588 - set_bit(s->p[i].port.line, sc16is7xx_lines); 1589 1590 /* Enable EFR */ 1591 sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_LCR_REG, ··· 1643 #endif 1644 1645 out_ports: 1646 - for (i = 0; i < devtype->nr_uart; i++) 1647 - if (test_and_clear_bit(s->p[i].port.line, sc16is7xx_lines)) 1648 uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port); 1649 1650 kthread_stop(s->kworker_task); 1651 ··· 1657 1658 return ret; 1659 } 1660 1661 - static void sc16is7xx_remove(struct device *dev) 1662 { 1663 struct sc16is7xx_port *s = dev_get_drvdata(dev); 1664 int i; ··· 1671 1672 for (i = 0; i < s->devtype->nr_uart; i++) { 1673 kthread_cancel_delayed_work_sync(&s->p[i].ms_work); 1674 - if (test_and_clear_bit(s->p[i].port.line, sc16is7xx_lines)) 1675 - uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port); 1676 sc16is7xx_power(&s->p[i].port, 0); 1677 } 1678 ··· 1681 1682 clk_disable_unprepare(s->clk); 1683 } 1684 1685 - static const struct of_device_id __maybe_unused sc16is7xx_dt_ids[] = { 1686 { .compatible = "nxp,sc16is740", .data = &sc16is74x_devtype, }, 1687 { .compatible = "nxp,sc16is741", .data = &sc16is74x_devtype, }, 1688 { .compatible = "nxp,sc16is750", .data = &sc16is750_devtype, }, ··· 1692 { .compatible = "nxp,sc16is762", .data = &sc16is762_devtype, }, 1693 { } 1694 }; 1695 MODULE_DEVICE_TABLE(of, sc16is7xx_dt_ids); 1696 1697 - static struct regmap_config regcfg = { 1698 .reg_bits = 5, 1699 .pad_bits = 3, 1700 .val_bits = 8, 1701 - .cache_type = REGCACHE_RBTREE, 1702 .volatile_reg = sc16is7xx_regmap_volatile, 1703 .precious_reg = sc16is7xx_regmap_precious, 1704 .writeable_noinc_reg = sc16is7xx_regmap_noinc, ··· 1708 .max_raw_write = SC16IS7XX_FIFO_SIZE, 1709 .max_register = SC16IS7XX_EFCR_REG, 1710 }; 1711 1712 - static const char *sc16is7xx_regmap_name(u8 port_id) 1713 { 1714 switch (port_id) { 1715 case 0: return "port0"; ··· 1720 return NULL; 1721 } 1722 } 1723 1724 - static unsigned int sc16is7xx_regmap_port_mask(unsigned int port_id) 1725 { 1726 /* CH1,CH0 are at bits 2:1. 
*/ 1727 return port_id << 1; 1728 } 1729 - 1730 - #ifdef CONFIG_SERIAL_SC16IS7XX_SPI 1731 - static int sc16is7xx_spi_probe(struct spi_device *spi) 1732 - { 1733 - const struct sc16is7xx_devtype *devtype; 1734 - struct regmap *regmaps[SC16IS7XX_MAX_PORTS]; 1735 - unsigned int i; 1736 - int ret; 1737 - 1738 - /* Setup SPI bus */ 1739 - spi->bits_per_word = 8; 1740 - /* For all variants, only mode 0 is supported */ 1741 - if ((spi->mode & SPI_MODE_X_MASK) != SPI_MODE_0) 1742 - return dev_err_probe(&spi->dev, -EINVAL, "Unsupported SPI mode\n"); 1743 - 1744 - spi->mode = spi->mode ? : SPI_MODE_0; 1745 - spi->max_speed_hz = spi->max_speed_hz ? : 4 * HZ_PER_MHZ; 1746 - ret = spi_setup(spi); 1747 - if (ret) 1748 - return ret; 1749 - 1750 - devtype = spi_get_device_match_data(spi); 1751 - if (!devtype) 1752 - return dev_err_probe(&spi->dev, -ENODEV, "Failed to match device\n"); 1753 - 1754 - for (i = 0; i < devtype->nr_uart; i++) { 1755 - regcfg.name = sc16is7xx_regmap_name(i); 1756 - /* 1757 - * If read_flag_mask is 0, the regmap code sets it to a default 1758 - * of 0x80. Since we specify our own mask, we must add the READ 1759 - * bit ourselves: 1760 - */ 1761 - regcfg.read_flag_mask = sc16is7xx_regmap_port_mask(i) | 1762 - SC16IS7XX_SPI_READ_BIT; 1763 - regcfg.write_flag_mask = sc16is7xx_regmap_port_mask(i); 1764 - regmaps[i] = devm_regmap_init_spi(spi, &regcfg); 1765 - } 1766 - 1767 - return sc16is7xx_probe(&spi->dev, devtype, regmaps, spi->irq); 1768 - } 1769 - 1770 - static void sc16is7xx_spi_remove(struct spi_device *spi) 1771 - { 1772 - sc16is7xx_remove(&spi->dev); 1773 - } 1774 - 1775 - static const struct spi_device_id sc16is7xx_spi_id_table[] = { 1776 - { "sc16is74x", (kernel_ulong_t)&sc16is74x_devtype, }, 1777 - { "sc16is740", (kernel_ulong_t)&sc16is74x_devtype, }, 1778 - { "sc16is741", (kernel_ulong_t)&sc16is74x_devtype, }, 1779 - { "sc16is750", (kernel_ulong_t)&sc16is750_devtype, }, 1780 - { "sc16is752", (kernel_ulong_t)&sc16is752_devtype, }, 1781 - { "sc16is760", (kernel_ulong_t)&sc16is760_devtype, }, 1782 - { "sc16is762", (kernel_ulong_t)&sc16is762_devtype, }, 1783 - { } 1784 - }; 1785 - 1786 - MODULE_DEVICE_TABLE(spi, sc16is7xx_spi_id_table); 1787 - 1788 - static struct spi_driver sc16is7xx_spi_uart_driver = { 1789 - .driver = { 1790 - .name = SC16IS7XX_NAME, 1791 - .of_match_table = sc16is7xx_dt_ids, 1792 - }, 1793 - .probe = sc16is7xx_spi_probe, 1794 - .remove = sc16is7xx_spi_remove, 1795 - .id_table = sc16is7xx_spi_id_table, 1796 - }; 1797 - #endif 1798 - 1799 - #ifdef CONFIG_SERIAL_SC16IS7XX_I2C 1800 - static int sc16is7xx_i2c_probe(struct i2c_client *i2c) 1801 - { 1802 - const struct sc16is7xx_devtype *devtype; 1803 - struct regmap *regmaps[SC16IS7XX_MAX_PORTS]; 1804 - unsigned int i; 1805 - 1806 - devtype = i2c_get_match_data(i2c); 1807 - if (!devtype) 1808 - return dev_err_probe(&i2c->dev, -ENODEV, "Failed to match device\n"); 1809 - 1810 - for (i = 0; i < devtype->nr_uart; i++) { 1811 - regcfg.name = sc16is7xx_regmap_name(i); 1812 - regcfg.read_flag_mask = sc16is7xx_regmap_port_mask(i); 1813 - regcfg.write_flag_mask = sc16is7xx_regmap_port_mask(i); 1814 - regmaps[i] = devm_regmap_init_i2c(i2c, &regcfg); 1815 - } 1816 - 1817 - return sc16is7xx_probe(&i2c->dev, devtype, regmaps, i2c->irq); 1818 - } 1819 - 1820 - static void sc16is7xx_i2c_remove(struct i2c_client *client) 1821 - { 1822 - sc16is7xx_remove(&client->dev); 1823 - } 1824 - 1825 - static const struct i2c_device_id sc16is7xx_i2c_id_table[] = { 1826 - { "sc16is74x", (kernel_ulong_t)&sc16is74x_devtype, }, 1827 - { 
"sc16is740", (kernel_ulong_t)&sc16is74x_devtype, }, 1828 - { "sc16is741", (kernel_ulong_t)&sc16is74x_devtype, }, 1829 - { "sc16is750", (kernel_ulong_t)&sc16is750_devtype, }, 1830 - { "sc16is752", (kernel_ulong_t)&sc16is752_devtype, }, 1831 - { "sc16is760", (kernel_ulong_t)&sc16is760_devtype, }, 1832 - { "sc16is762", (kernel_ulong_t)&sc16is762_devtype, }, 1833 - { } 1834 - }; 1835 - MODULE_DEVICE_TABLE(i2c, sc16is7xx_i2c_id_table); 1836 - 1837 - static struct i2c_driver sc16is7xx_i2c_uart_driver = { 1838 - .driver = { 1839 - .name = SC16IS7XX_NAME, 1840 - .of_match_table = sc16is7xx_dt_ids, 1841 - }, 1842 - .probe = sc16is7xx_i2c_probe, 1843 - .remove = sc16is7xx_i2c_remove, 1844 - .id_table = sc16is7xx_i2c_id_table, 1845 - }; 1846 - 1847 - #endif 1848 1849 static int __init sc16is7xx_init(void) 1850 { 1851 - int ret; 1852 - 1853 - ret = uart_register_driver(&sc16is7xx_uart); 1854 - if (ret) { 1855 - pr_err("Registering UART driver failed\n"); 1856 - return ret; 1857 - } 1858 - 1859 - #ifdef CONFIG_SERIAL_SC16IS7XX_I2C 1860 - ret = i2c_add_driver(&sc16is7xx_i2c_uart_driver); 1861 - if (ret < 0) { 1862 - pr_err("failed to init sc16is7xx i2c --> %d\n", ret); 1863 - goto err_i2c; 1864 - } 1865 - #endif 1866 - 1867 - #ifdef CONFIG_SERIAL_SC16IS7XX_SPI 1868 - ret = spi_register_driver(&sc16is7xx_spi_uart_driver); 1869 - if (ret < 0) { 1870 - pr_err("failed to init sc16is7xx spi --> %d\n", ret); 1871 - goto err_spi; 1872 - } 1873 - #endif 1874 - return ret; 1875 - 1876 - #ifdef CONFIG_SERIAL_SC16IS7XX_SPI 1877 - err_spi: 1878 - #endif 1879 - #ifdef CONFIG_SERIAL_SC16IS7XX_I2C 1880 - i2c_del_driver(&sc16is7xx_i2c_uart_driver); 1881 - err_i2c: 1882 - #endif 1883 - uart_unregister_driver(&sc16is7xx_uart); 1884 - return ret; 1885 } 1886 module_init(sc16is7xx_init); 1887 1888 static void __exit sc16is7xx_exit(void) 1889 { 1890 - #ifdef CONFIG_SERIAL_SC16IS7XX_I2C 1891 - i2c_del_driver(&sc16is7xx_i2c_uart_driver); 1892 - #endif 1893 - 1894 - #ifdef CONFIG_SERIAL_SC16IS7XX_SPI 1895 - spi_unregister_driver(&sc16is7xx_spi_uart_driver); 1896 - #endif 1897 uart_unregister_driver(&sc16is7xx_uart); 1898 } 1899 module_exit(sc16is7xx_exit); 1900 1901 MODULE_LICENSE("GPL"); 1902 MODULE_AUTHOR("Jon Ringle <jringle@gridpoint.com>"); 1903 - MODULE_DESCRIPTION("SC16IS7XX serial driver");
··· 1 // SPDX-License-Identifier: GPL-2.0+ 2 /* 3 + * SC16IS7xx tty serial driver - common code 4 * 5 + * Copyright (C) 2014 GridPoint 6 + * Author: Jon Ringle <jringle@gridpoint.com> 7 + * Based on max310x.c, by Alexander Shiyan <shc_work@mail.ru> 8 */ 9 10 + #undef DEFAULT_SYMBOL_NAMESPACE 11 + #define DEFAULT_SYMBOL_NAMESPACE SERIAL_NXP_SC16IS7XX 12 13 #include <linux/clk.h> 14 #include <linux/delay.h> 15 #include <linux/device.h> 16 + #include <linux/export.h> 17 #include <linux/gpio/driver.h> 18 + #include <linux/idr.h> 19 + #include <linux/kthread.h> 20 #include <linux/mod_devicetable.h> 21 #include <linux/module.h> 22 #include <linux/property.h> 23 #include <linux/regmap.h> 24 + #include <linux/sched.h> 25 #include <linux/serial_core.h> 26 #include <linux/serial.h> 27 + #include <linux/string.h> 28 #include <linux/tty.h> 29 #include <linux/tty_flip.h> 30 #include <linux/uaccess.h> 31 #include <linux/units.h> 32 33 + #include "sc16is7xx.h" 34 + 35 #define SC16IS7XX_MAX_DEVS 8 36 37 /* SC16IS7XX register definitions */ 38 #define SC16IS7XX_RHR_REG (0x00) /* RX FIFO */ ··· 302 303 304 /* Misc definitions */ 305 #define SC16IS7XX_FIFO_SIZE (64) 306 #define SC16IS7XX_GPIOS_PER_BANK 4 307 308 #define SC16IS7XX_RECONF_MD (1 << 0) 309 #define SC16IS7XX_RECONF_IER (1 << 1) ··· 349 struct sc16is7xx_one p[]; 350 }; 351 352 + static DEFINE_IDA(sc16is7xx_lines); 353 354 static struct uart_driver sc16is7xx_uart = { 355 .owner = THIS_MODULE, ··· 492 sc16is7xx_ier_clear(port, SC16IS7XX_IER_RDI_BIT); 493 } 494 495 + const struct sc16is7xx_devtype sc16is74x_devtype = { 496 .name = "SC16IS74X", 497 .nr_gpio = 0, 498 .nr_uart = 1, 499 }; 500 + EXPORT_SYMBOL_GPL(sc16is74x_devtype); 501 502 + const struct sc16is7xx_devtype sc16is750_devtype = { 503 .name = "SC16IS750", 504 .nr_gpio = 8, 505 .nr_uart = 1, 506 }; 507 + EXPORT_SYMBOL_GPL(sc16is750_devtype); 508 509 + const struct sc16is7xx_devtype sc16is752_devtype = { 510 .name = "SC16IS752", 511 .nr_gpio = 8, 512 .nr_uart = 2, 513 }; 514 + EXPORT_SYMBOL_GPL(sc16is752_devtype); 515 516 + const struct sc16is7xx_devtype sc16is760_devtype = { 517 .name = "SC16IS760", 518 .nr_gpio = 8, 519 .nr_uart = 1, 520 }; 521 + EXPORT_SYMBOL_GPL(sc16is760_devtype); 522 523 + const struct sc16is7xx_devtype sc16is762_devtype = { 524 .name = "SC16IS762", 525 .nr_gpio = 8, 526 .nr_uart = 2, 527 }; 528 + EXPORT_SYMBOL_GPL(sc16is762_devtype); 529 530 static bool sc16is7xx_regmap_volatile(struct device *dev, unsigned int reg) 531 { ··· 676 static void sc16is7xx_handle_tx(struct uart_port *port) 677 { 678 struct sc16is7xx_port *s = dev_get_drvdata(port->dev); 679 + struct tty_port *tport = &port->state->port; 680 unsigned long flags; 681 + unsigned int txlen; 682 683 if (unlikely(port->x_char)) { 684 sc16is7xx_port_write(port, SC16IS7XX_THR_REG, port->x_char); ··· 687 return; 688 } 689 690 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 691 uart_port_lock_irqsave(port, &flags); 692 sc16is7xx_stop_tx(port); 693 uart_port_unlock_irqrestore(port, flags); 694 return; 695 } 696 697 + /* Limit to space available in TX FIFO */ 698 + txlen = sc16is7xx_port_read(port, SC16IS7XX_TXLVL_REG); 699 + if (txlen > SC16IS7XX_FIFO_SIZE) { 700 + dev_err_ratelimited(port->dev, 701 + "chip reports %d free bytes in TX fifo, but it only has %d", 702 + txlen, SC16IS7XX_FIFO_SIZE); 703 + txlen = 0; 704 } 705 706 + txlen = uart_fifo_out(port, s->buf, txlen); 707 + sc16is7xx_fifo_write(port, s->buf, txlen); 708 + 709 uart_port_lock_irqsave(port, &flags); 710 + if 
(kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 711 uart_write_wakeup(port); 712 713 + if (kfifo_is_empty(&tport->xmit_fifo)) 714 sc16is7xx_stop_tx(port); 715 else 716 sc16is7xx_ier_set(port, SC16IS7XX_IER_THRI_BIT); ··· 1463 .delay_rts_after_send = 1, /* Not supported but keep returning -EINVAL */ 1464 }; 1465 1466 + int sc16is7xx_probe(struct device *dev, const struct sc16is7xx_devtype *devtype, 1467 + struct regmap *regmaps[], int irq) 1468 { 1469 unsigned long freq = 0, *pfreq = dev_get_platdata(dev); 1470 unsigned int val; 1471 u32 uartclk = 0; 1472 int i, ret; 1473 struct sc16is7xx_port *s; 1474 + bool port_registered[SC16IS7XX_MAX_PORTS]; 1475 1476 for (i = 0; i < devtype->nr_uart; i++) 1477 if (IS_ERR(regmaps[i])) ··· 1536 regmap_write(regmaps[0], SC16IS7XX_IOCONTROL_REG, 1537 SC16IS7XX_IOCONTROL_SRESET_BIT); 1538 1539 + /* Mark each port line and status as uninitialised. */ 1540 for (i = 0; i < devtype->nr_uart; ++i) { 1541 + s->p[i].port.line = SC16IS7XX_MAX_DEVS; 1542 + port_registered[i] = false; 1543 + } 1544 + 1545 + for (i = 0; i < devtype->nr_uart; ++i) { 1546 + ret = ida_alloc_max(&sc16is7xx_lines, 1547 + SC16IS7XX_MAX_DEVS - 1, GFP_KERNEL); 1548 + if (ret < 0) 1549 goto out_ports; 1550 + 1551 + s->p[i].port.line = ret; 1552 1553 /* Initialize port data */ 1554 s->p[i].port.dev = dev; ··· 1588 if (ret) 1589 goto out_ports; 1590 1591 + port_registered[i] = true; 1592 1593 /* Enable EFR */ 1594 sc16is7xx_port_write(&s->p[i].port, SC16IS7XX_LCR_REG, ··· 1646 #endif 1647 1648 out_ports: 1649 + for (i = 0; i < devtype->nr_uart; i++) { 1650 + if (s->p[i].port.line < SC16IS7XX_MAX_DEVS) 1651 + ida_free(&sc16is7xx_lines, s->p[i].port.line); 1652 + if (port_registered[i]) 1653 uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port); 1654 + } 1655 1656 kthread_stop(s->kworker_task); 1657 ··· 1657 1658 return ret; 1659 } 1660 + EXPORT_SYMBOL_GPL(sc16is7xx_probe); 1661 1662 + void sc16is7xx_remove(struct device *dev) 1663 { 1664 struct sc16is7xx_port *s = dev_get_drvdata(dev); 1665 int i; ··· 1670 1671 for (i = 0; i < s->devtype->nr_uart; i++) { 1672 kthread_cancel_delayed_work_sync(&s->p[i].ms_work); 1673 + ida_free(&sc16is7xx_lines, s->p[i].port.line); 1674 + uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port); 1675 sc16is7xx_power(&s->p[i].port, 0); 1676 } 1677 ··· 1680 1681 clk_disable_unprepare(s->clk); 1682 } 1683 + EXPORT_SYMBOL_GPL(sc16is7xx_remove); 1684 1685 + const struct of_device_id __maybe_unused sc16is7xx_dt_ids[] = { 1686 { .compatible = "nxp,sc16is740", .data = &sc16is74x_devtype, }, 1687 { .compatible = "nxp,sc16is741", .data = &sc16is74x_devtype, }, 1688 { .compatible = "nxp,sc16is750", .data = &sc16is750_devtype, }, ··· 1690 { .compatible = "nxp,sc16is762", .data = &sc16is762_devtype, }, 1691 { } 1692 }; 1693 + EXPORT_SYMBOL_GPL(sc16is7xx_dt_ids); 1694 MODULE_DEVICE_TABLE(of, sc16is7xx_dt_ids); 1695 1696 + const struct regmap_config sc16is7xx_regcfg = { 1697 .reg_bits = 5, 1698 .pad_bits = 3, 1699 .val_bits = 8, 1700 + .cache_type = REGCACHE_MAPLE, 1701 .volatile_reg = sc16is7xx_regmap_volatile, 1702 .precious_reg = sc16is7xx_regmap_precious, 1703 .writeable_noinc_reg = sc16is7xx_regmap_noinc, ··· 1705 .max_raw_write = SC16IS7XX_FIFO_SIZE, 1706 .max_register = SC16IS7XX_EFCR_REG, 1707 }; 1708 + EXPORT_SYMBOL_GPL(sc16is7xx_regcfg); 1709 1710 + const char *sc16is7xx_regmap_name(u8 port_id) 1711 { 1712 switch (port_id) { 1713 case 0: return "port0"; ··· 1716 return NULL; 1717 } 1718 } 1719 + EXPORT_SYMBOL_GPL(sc16is7xx_regmap_name); 1720 1721 + unsigned int 
sc16is7xx_regmap_port_mask(unsigned int port_id) 1722 { 1723 /* CH1,CH0 are at bits 2:1. */ 1724 return port_id << 1; 1725 } 1726 + EXPORT_SYMBOL_GPL(sc16is7xx_regmap_port_mask); 1727 1728 static int __init sc16is7xx_init(void) 1729 { 1730 + return uart_register_driver(&sc16is7xx_uart); 1731 } 1732 module_init(sc16is7xx_init); 1733 1734 static void __exit sc16is7xx_exit(void) 1735 { 1736 uart_unregister_driver(&sc16is7xx_uart); 1737 } 1738 module_exit(sc16is7xx_exit); 1739 1740 MODULE_LICENSE("GPL"); 1741 MODULE_AUTHOR("Jon Ringle <jringle@gridpoint.com>"); 1742 + MODULE_DESCRIPTION("SC16IS7xx tty serial core driver");
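Besides splitting the bus glue out (the shared symbols are exported in the SERIAL_NXP_SC16IS7XX namespace and pulled back in by the I2C and SPI modules below via MODULE_IMPORT_NS()), the rework above swaps the static line bitmap for an IDA. A small sketch of that allocation pattern, using a hypothetical my_lines in place of sc16is7xx_lines:

#include <linux/gfp.h>
#include <linux/idr.h>

static DEFINE_IDA(my_lines);

/* Smallest free line number below SC16IS7XX_MAX_DEVS, or a negative errno;
 * this replaces the old find_first_zero_bit()/set_bit() bookkeeping.
 */
int my_alloc_line(void)
{
	return ida_alloc_max(&my_lines, SC16IS7XX_MAX_DEVS - 1, GFP_KERNEL);
}

void my_release_line(unsigned int line)
{
	ida_free(&my_lines, line);	/* replaces test_and_clear_bit() */
}

Unlike the bitmap, the IDA pairs every successful allocation with exactly one ida_free(), which is why the probe error path above tracks port_registered[] and the provisional SC16IS7XX_MAX_DEVS line value separately.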
+41
drivers/tty/serial/sc16is7xx.h
···
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ */ 2 + /* SC16IS7xx SPI/I2C tty serial driver */ 3 + 4 + #ifndef _SC16IS7XX_H_ 5 + #define _SC16IS7XX_H_ 6 + 7 + #include <linux/mod_devicetable.h> 8 + #include <linux/regmap.h> 9 + #include <linux/types.h> 10 + 11 + #define SC16IS7XX_NAME "sc16is7xx" 12 + #define SC16IS7XX_MAX_PORTS 2 /* Maximum number of UART ports per IC. */ 13 + 14 + struct device; 15 + 16 + struct sc16is7xx_devtype { 17 + char name[10]; 18 + int nr_gpio; 19 + int nr_uart; 20 + }; 21 + 22 + extern const struct regmap_config sc16is7xx_regcfg; 23 + 24 + extern const struct of_device_id sc16is7xx_dt_ids[]; 25 + 26 + extern const struct sc16is7xx_devtype sc16is74x_devtype; 27 + extern const struct sc16is7xx_devtype sc16is750_devtype; 28 + extern const struct sc16is7xx_devtype sc16is752_devtype; 29 + extern const struct sc16is7xx_devtype sc16is760_devtype; 30 + extern const struct sc16is7xx_devtype sc16is762_devtype; 31 + 32 + const char *sc16is7xx_regmap_name(u8 port_id); 33 + 34 + unsigned int sc16is7xx_regmap_port_mask(unsigned int port_id); 35 + 36 + int sc16is7xx_probe(struct device *dev, const struct sc16is7xx_devtype *devtype, 37 + struct regmap *regmaps[], int irq); 38 + 39 + void sc16is7xx_remove(struct device *dev); 40 + 41 + #endif /* _SC16IS7XX_H_ */
+67
drivers/tty/serial/sc16is7xx_i2c.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* SC16IS7xx I2C interface driver */ 3 + 4 + #include <linux/dev_printk.h> 5 + #include <linux/i2c.h> 6 + #include <linux/mod_devicetable.h> 7 + #include <linux/module.h> 8 + #include <linux/regmap.h> 9 + #include <linux/string.h> 10 + 11 + #include "sc16is7xx.h" 12 + 13 + static int sc16is7xx_i2c_probe(struct i2c_client *i2c) 14 + { 15 + const struct sc16is7xx_devtype *devtype; 16 + struct regmap *regmaps[SC16IS7XX_MAX_PORTS]; 17 + struct regmap_config regcfg; 18 + unsigned int i; 19 + 20 + devtype = i2c_get_match_data(i2c); 21 + if (!devtype) 22 + return dev_err_probe(&i2c->dev, -ENODEV, "Failed to match device\n"); 23 + 24 + memcpy(&regcfg, &sc16is7xx_regcfg, sizeof(struct regmap_config)); 25 + 26 + for (i = 0; i < devtype->nr_uart; i++) { 27 + regcfg.name = sc16is7xx_regmap_name(i); 28 + regcfg.read_flag_mask = sc16is7xx_regmap_port_mask(i); 29 + regcfg.write_flag_mask = sc16is7xx_regmap_port_mask(i); 30 + regmaps[i] = devm_regmap_init_i2c(i2c, &regcfg); 31 + } 32 + 33 + return sc16is7xx_probe(&i2c->dev, devtype, regmaps, i2c->irq); 34 + } 35 + 36 + static void sc16is7xx_i2c_remove(struct i2c_client *client) 37 + { 38 + sc16is7xx_remove(&client->dev); 39 + } 40 + 41 + static const struct i2c_device_id sc16is7xx_i2c_id_table[] = { 42 + { "sc16is74x", (kernel_ulong_t)&sc16is74x_devtype, }, 43 + { "sc16is740", (kernel_ulong_t)&sc16is74x_devtype, }, 44 + { "sc16is741", (kernel_ulong_t)&sc16is74x_devtype, }, 45 + { "sc16is750", (kernel_ulong_t)&sc16is750_devtype, }, 46 + { "sc16is752", (kernel_ulong_t)&sc16is752_devtype, }, 47 + { "sc16is760", (kernel_ulong_t)&sc16is760_devtype, }, 48 + { "sc16is762", (kernel_ulong_t)&sc16is762_devtype, }, 49 + { } 50 + }; 51 + MODULE_DEVICE_TABLE(i2c, sc16is7xx_i2c_id_table); 52 + 53 + static struct i2c_driver sc16is7xx_i2c_driver = { 54 + .driver = { 55 + .name = SC16IS7XX_NAME, 56 + .of_match_table = sc16is7xx_dt_ids, 57 + }, 58 + .probe = sc16is7xx_i2c_probe, 59 + .remove = sc16is7xx_i2c_remove, 60 + .id_table = sc16is7xx_i2c_id_table, 61 + }; 62 + 63 + module_i2c_driver(sc16is7xx_i2c_driver); 64 + 65 + MODULE_LICENSE("GPL"); 66 + MODULE_DESCRIPTION("SC16IS7xx I2C interface driver"); 67 + MODULE_IMPORT_NS(SERIAL_NXP_SC16IS7XX);
+90
drivers/tty/serial/sc16is7xx_spi.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* SC16IS7xx SPI interface driver */ 3 + 4 + #include <linux/dev_printk.h> 5 + #include <linux/mod_devicetable.h> 6 + #include <linux/module.h> 7 + #include <linux/regmap.h> 8 + #include <linux/spi/spi.h> 9 + #include <linux/string.h> 10 + #include <linux/units.h> 11 + 12 + #include "sc16is7xx.h" 13 + 14 + /* SPI definitions */ 15 + #define SC16IS7XX_SPI_READ_BIT BIT(7) 16 + 17 + static int sc16is7xx_spi_probe(struct spi_device *spi) 18 + { 19 + const struct sc16is7xx_devtype *devtype; 20 + struct regmap *regmaps[SC16IS7XX_MAX_PORTS]; 21 + struct regmap_config regcfg; 22 + unsigned int i; 23 + int ret; 24 + 25 + /* Setup SPI bus */ 26 + spi->bits_per_word = 8; 27 + /* For all variants, only mode 0 is supported */ 28 + if ((spi->mode & SPI_MODE_X_MASK) != SPI_MODE_0) 29 + return dev_err_probe(&spi->dev, -EINVAL, "Unsupported SPI mode\n"); 30 + 31 + spi->mode = spi->mode ? : SPI_MODE_0; 32 + spi->max_speed_hz = spi->max_speed_hz ? : 4 * HZ_PER_MHZ; 33 + ret = spi_setup(spi); 34 + if (ret) 35 + return ret; 36 + 37 + devtype = spi_get_device_match_data(spi); 38 + if (!devtype) 39 + return dev_err_probe(&spi->dev, -ENODEV, "Failed to match device\n"); 40 + 41 + memcpy(&regcfg, &sc16is7xx_regcfg, sizeof(struct regmap_config)); 42 + 43 + for (i = 0; i < devtype->nr_uart; i++) { 44 + regcfg.name = sc16is7xx_regmap_name(i); 45 + /* 46 + * If read_flag_mask is 0, the regmap code sets it to a default 47 + * of 0x80. Since we specify our own mask, we must add the READ 48 + * bit ourselves: 49 + */ 50 + regcfg.read_flag_mask = sc16is7xx_regmap_port_mask(i) | 51 + SC16IS7XX_SPI_READ_BIT; 52 + regcfg.write_flag_mask = sc16is7xx_regmap_port_mask(i); 53 + regmaps[i] = devm_regmap_init_spi(spi, &regcfg); 54 + } 55 + 56 + return sc16is7xx_probe(&spi->dev, devtype, regmaps, spi->irq); 57 + } 58 + 59 + static void sc16is7xx_spi_remove(struct spi_device *spi) 60 + { 61 + sc16is7xx_remove(&spi->dev); 62 + } 63 + 64 + static const struct spi_device_id sc16is7xx_spi_id_table[] = { 65 + { "sc16is74x", (kernel_ulong_t)&sc16is74x_devtype, }, 66 + { "sc16is740", (kernel_ulong_t)&sc16is74x_devtype, }, 67 + { "sc16is741", (kernel_ulong_t)&sc16is74x_devtype, }, 68 + { "sc16is750", (kernel_ulong_t)&sc16is750_devtype, }, 69 + { "sc16is752", (kernel_ulong_t)&sc16is752_devtype, }, 70 + { "sc16is760", (kernel_ulong_t)&sc16is760_devtype, }, 71 + { "sc16is762", (kernel_ulong_t)&sc16is762_devtype, }, 72 + { } 73 + }; 74 + MODULE_DEVICE_TABLE(spi, sc16is7xx_spi_id_table); 75 + 76 + static struct spi_driver sc16is7xx_spi_driver = { 77 + .driver = { 78 + .name = SC16IS7XX_NAME, 79 + .of_match_table = sc16is7xx_dt_ids, 80 + }, 81 + .probe = sc16is7xx_spi_probe, 82 + .remove = sc16is7xx_spi_remove, 83 + .id_table = sc16is7xx_spi_id_table, 84 + }; 85 + 86 + module_spi_driver(sc16is7xx_spi_driver); 87 + 88 + MODULE_LICENSE("GPL"); 89 + MODULE_DESCRIPTION("SC16IS7xx SPI interface driver"); 90 + MODULE_IMPORT_NS(SERIAL_NXP_SC16IS7XX);
+10 -6
drivers/tty/serial/sccnxp.c
··· 439 static void sccnxp_handle_tx(struct uart_port *port) 440 { 441 u8 sr; 442 - struct circ_buf *xmit = &port->state->xmit; 443 struct sccnxp_port *s = dev_get_drvdata(port->dev); 444 445 if (unlikely(port->x_char)) { ··· 449 return; 450 } 451 452 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 453 /* Disable TX if FIFO is empty */ 454 if (sccnxp_port_read(port, SCCNXP_SR_REG) & SR_TXEMT) { 455 sccnxp_disable_irq(port, IMR_TXRDY); ··· 461 return; 462 } 463 464 - while (!uart_circ_empty(xmit)) { 465 sr = sccnxp_port_read(port, SCCNXP_SR_REG); 466 if (!(sr & SR_TXRDY)) 467 break; 468 469 - sccnxp_port_write(port, SCCNXP_THR_REG, xmit->buf[xmit->tail]); 470 - uart_xmit_advance(port, 1); 471 } 472 473 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 474 uart_write_wakeup(port); 475 } 476
··· 439 static void sccnxp_handle_tx(struct uart_port *port) 440 { 441 u8 sr; 442 + struct tty_port *tport = &port->state->port; 443 struct sccnxp_port *s = dev_get_drvdata(port->dev); 444 445 if (unlikely(port->x_char)) { ··· 449 return; 450 } 451 452 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 453 /* Disable TX if FIFO is empty */ 454 if (sccnxp_port_read(port, SCCNXP_SR_REG) & SR_TXEMT) { 455 sccnxp_disable_irq(port, IMR_TXRDY); ··· 461 return; 462 } 463 464 + while (1) { 465 + unsigned char ch; 466 + 467 sr = sccnxp_port_read(port, SCCNXP_SR_REG); 468 if (!(sr & SR_TXRDY)) 469 break; 470 471 + if (!uart_fifo_get(port, &ch)) 472 + break; 473 + 474 + sccnxp_port_write(port, SCCNXP_THR_REG, ch); 475 } 476 477 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 478 uart_write_wakeup(port); 479 } 480
+24 -19
drivers/tty/serial/serial-tegra.c
··· 484 485 static void tegra_uart_fill_tx_fifo(struct tegra_uart_port *tup, int max_bytes) 486 { 487 - struct circ_buf *xmit = &tup->uport.state->xmit; 488 int i; 489 490 for (i = 0; i < max_bytes; i++) { 491 - BUG_ON(uart_circ_empty(xmit)); 492 if (tup->cdata->tx_fifo_full_status) { 493 unsigned long lsr = tegra_uart_read(tup, UART_LSR); 494 if ((lsr & TEGRA_UART_LSR_TXFIFO_FULL)) 495 break; 496 } 497 - tegra_uart_write(tup, xmit->buf[xmit->tail], UART_TX); 498 - uart_xmit_advance(&tup->uport, 1); 499 } 500 } 501 ··· 514 static void tegra_uart_tx_dma_complete(void *args) 515 { 516 struct tegra_uart_port *tup = args; 517 - struct circ_buf *xmit = &tup->uport.state->xmit; 518 struct dma_tx_state state; 519 unsigned long flags; 520 unsigned int count; ··· 525 uart_port_lock_irqsave(&tup->uport, &flags); 526 uart_xmit_advance(&tup->uport, count); 527 tup->tx_in_progress = 0; 528 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 529 uart_write_wakeup(&tup->uport); 530 tegra_uart_start_next_tx(tup); 531 uart_port_unlock_irqrestore(&tup->uport, flags); ··· 534 static int tegra_uart_start_tx_dma(struct tegra_uart_port *tup, 535 unsigned long count) 536 { 537 - struct circ_buf *xmit = &tup->uport.state->xmit; 538 dma_addr_t tx_phys_addr; 539 540 tup->tx_bytes = count & ~(0xF); 541 - tx_phys_addr = tup->tx_dma_buf_phys + xmit->tail; 542 543 dma_sync_single_for_device(tup->uport.dev, tx_phys_addr, 544 tup->tx_bytes, DMA_TO_DEVICE); ··· 565 566 static void tegra_uart_start_next_tx(struct tegra_uart_port *tup) 567 { 568 unsigned long tail; 569 - unsigned long count; 570 - struct circ_buf *xmit = &tup->uport.state->xmit; 571 572 if (!tup->current_baud) 573 return; 574 575 - tail = (unsigned long)&xmit->buf[xmit->tail]; 576 - count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 577 if (!count) 578 return; 579 580 if (tup->use_tx_pio || count < TEGRA_UART_MIN_DMA) 581 tegra_uart_start_pio_tx(tup, count); ··· 592 static void tegra_uart_start_tx(struct uart_port *u) 593 { 594 struct tegra_uart_port *tup = to_tegra_uport(u); 595 - struct circ_buf *xmit = &u->state->xmit; 596 597 - if (!uart_circ_empty(xmit) && !tup->tx_in_progress) 598 tegra_uart_start_next_tx(tup); 599 } 600 ··· 634 635 static void tegra_uart_handle_tx_pio(struct tegra_uart_port *tup) 636 { 637 - struct circ_buf *xmit = &tup->uport.state->xmit; 638 639 tegra_uart_fill_tx_fifo(tup, tup->tx_bytes); 640 tup->tx_in_progress = 0; 641 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 642 uart_write_wakeup(&tup->uport); 643 tegra_uart_start_next_tx(tup); 644 } ··· 1175 tup->rx_dma_buf_virt = dma_buf; 1176 tup->rx_dma_buf_phys = dma_phys; 1177 } else { 1178 dma_phys = dma_map_single(tup->uport.dev, 1179 - tup->uport.state->xmit.buf, UART_XMIT_SIZE, 1180 - DMA_TO_DEVICE); 1181 if (dma_mapping_error(tup->uport.dev, dma_phys)) { 1182 dev_err(tup->uport.dev, "dma_map_single tx failed\n"); 1183 dma_release_channel(dma_chan); 1184 return -ENOMEM; 1185 } 1186 - dma_buf = tup->uport.state->xmit.buf; 1187 dma_sconfig.dst_addr = tup->uport.mapbase; 1188 dma_sconfig.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 1189 dma_sconfig.dst_maxburst = 16;
··· 484 485 static void tegra_uart_fill_tx_fifo(struct tegra_uart_port *tup, int max_bytes) 486 { 487 + unsigned char ch; 488 int i; 489 490 for (i = 0; i < max_bytes; i++) { 491 if (tup->cdata->tx_fifo_full_status) { 492 unsigned long lsr = tegra_uart_read(tup, UART_LSR); 493 if ((lsr & TEGRA_UART_LSR_TXFIFO_FULL)) 494 break; 495 } 496 + if (WARN_ON_ONCE(!uart_fifo_get(&tup->uport, &ch))) 497 + break; 498 + tegra_uart_write(tup, ch, UART_TX); 499 } 500 } 501 ··· 514 static void tegra_uart_tx_dma_complete(void *args) 515 { 516 struct tegra_uart_port *tup = args; 517 + struct tty_port *tport = &tup->uport.state->port; 518 struct dma_tx_state state; 519 unsigned long flags; 520 unsigned int count; ··· 525 uart_port_lock_irqsave(&tup->uport, &flags); 526 uart_xmit_advance(&tup->uport, count); 527 tup->tx_in_progress = 0; 528 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 529 uart_write_wakeup(&tup->uport); 530 tegra_uart_start_next_tx(tup); 531 uart_port_unlock_irqrestore(&tup->uport, flags); ··· 534 static int tegra_uart_start_tx_dma(struct tegra_uart_port *tup, 535 unsigned long count) 536 { 537 + struct tty_port *tport = &tup->uport.state->port; 538 dma_addr_t tx_phys_addr; 539 + unsigned int tail; 540 541 tup->tx_bytes = count & ~(0xF); 542 + WARN_ON_ONCE(kfifo_out_linear(&tport->xmit_fifo, &tail, 543 + UART_XMIT_SIZE) < count); 544 + tx_phys_addr = tup->tx_dma_buf_phys + tail; 545 546 dma_sync_single_for_device(tup->uport.dev, tx_phys_addr, 547 tup->tx_bytes, DMA_TO_DEVICE); ··· 562 563 static void tegra_uart_start_next_tx(struct tegra_uart_port *tup) 564 { 565 + struct tty_port *tport = &tup->uport.state->port; 566 + unsigned char *tail_ptr; 567 unsigned long tail; 568 + unsigned int count; 569 570 if (!tup->current_baud) 571 return; 572 573 + count = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail_ptr, 574 + UART_XMIT_SIZE); 575 if (!count) 576 return; 577 + 578 + tail = (unsigned long)tail_ptr; 579 580 if (tup->use_tx_pio || count < TEGRA_UART_MIN_DMA) 581 tegra_uart_start_pio_tx(tup, count); ··· 586 static void tegra_uart_start_tx(struct uart_port *u) 587 { 588 struct tegra_uart_port *tup = to_tegra_uport(u); 589 + struct tty_port *tport = &u->state->port; 590 591 + if (!kfifo_is_empty(&tport->xmit_fifo) && !tup->tx_in_progress) 592 tegra_uart_start_next_tx(tup); 593 } 594 ··· 628 629 static void tegra_uart_handle_tx_pio(struct tegra_uart_port *tup) 630 { 631 + struct tty_port *tport = &tup->uport.state->port; 632 633 tegra_uart_fill_tx_fifo(tup, tup->tx_bytes); 634 tup->tx_in_progress = 0; 635 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 636 uart_write_wakeup(&tup->uport); 637 tegra_uart_start_next_tx(tup); 638 } ··· 1169 tup->rx_dma_buf_virt = dma_buf; 1170 tup->rx_dma_buf_phys = dma_phys; 1171 } else { 1172 + dma_buf = tup->uport.state->port.xmit_buf; 1173 dma_phys = dma_map_single(tup->uport.dev, 1174 + dma_buf, UART_XMIT_SIZE, DMA_TO_DEVICE); 1175 if (dma_mapping_error(tup->uport.dev, dma_phys)) { 1176 dev_err(tup->uport.dev, "dma_map_single tx failed\n"); 1177 dma_release_channel(dma_chan); 1178 return -ENOMEM; 1179 } 1180 dma_sconfig.dst_addr = tup->uport.mapbase; 1181 dma_sconfig.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 1182 dma_sconfig.dst_maxburst = 16;
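The serial-tegra conversion uses both linear-peek flavours: kfifo_out_linear() reports the tail as an index, which suits this driver because tx_dma_buf_phys maps the whole xmit_buf page, while kfifo_out_linear_ptr() hands back a pointer to the same region. A short sketch of the distinction, with a hypothetical my_linear_region() and xmit_dma_base assumed to be the DMA mapping of port->state->port.xmit_buf:

#include <linux/kfifo.h>
#include <linux/serial_core.h>
#include <linux/tty.h>
#include <linux/types.h>

unsigned int my_linear_region(struct uart_port *port, dma_addr_t xmit_dma_base,
			      dma_addr_t *dma_addr)
{
	struct tty_port *tport = &port->state->port;
	unsigned int count, tail;

	/* Index form: tail is an offset into port->state->port.xmit_buf. */
	count = kfifo_out_linear(&tport->xmit_fifo, &tail, UART_XMIT_SIZE);
	*dma_addr = xmit_dma_base + tail;

	/* Pointer form, as in tegra_uart_start_next_tx() above:
	 *	kfifo_out_linear_ptr(&tport->xmit_fifo, &tail_ptr, UART_XMIT_SIZE);
	 * Neither call consumes data; uart_xmit_advance() skips the bytes only
	 * once the hardware has actually sent them.
	 */
	return count;
}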
+30
drivers/tty/serial/serial_base.h
··· 49 50 int serial_core_register_port(struct uart_driver *drv, struct uart_port *port); 51 void serial_core_unregister_port(struct uart_driver *drv, struct uart_port *port);
··· 49 50 int serial_core_register_port(struct uart_driver *drv, struct uart_port *port); 51 void serial_core_unregister_port(struct uart_driver *drv, struct uart_port *port); 52 + 53 + #ifdef CONFIG_SERIAL_CORE_CONSOLE 54 + 55 + int serial_base_add_preferred_console(struct uart_driver *drv, 56 + struct uart_port *port); 57 + 58 + #else 59 + 60 + static inline 61 + int serial_base_add_preferred_console(struct uart_driver *drv, 62 + struct uart_port *port) 63 + { 64 + return 0; 65 + } 66 + 67 + #endif 68 + 69 + #ifdef CONFIG_SERIAL_8250_CONSOLE 70 + 71 + int serial_base_add_isa_preferred_console(const char *name, int idx); 72 + 73 + #else 74 + 75 + static inline 76 + int serial_base_add_isa_preferred_console(const char *name, int idx) 77 + { 78 + return 0; 79 + } 80 + 81 + #endif
+129
drivers/tty/serial/serial_base_bus.c
··· 8 * The serial core bus manages the serial core controller instances. 9 */ 10 11 #include <linux/container_of.h> 12 #include <linux/device.h> 13 #include <linux/idr.h> ··· 204 ida_free(&ctrl_dev->port_ida, port_dev->port->port_id); 205 put_device(&port_dev->dev); 206 } 207 208 static int serial_base_init(void) 209 {
··· 8 * The serial core bus manages the serial core controller instances. 9 */ 10 11 + #include <linux/cleanup.h> 12 #include <linux/container_of.h> 13 #include <linux/device.h> 14 #include <linux/idr.h> ··· 203 ida_free(&ctrl_dev->port_ida, port_dev->port->port_id); 204 put_device(&port_dev->dev); 205 } 206 + 207 + #ifdef CONFIG_SERIAL_CORE_CONSOLE 208 + 209 + static int serial_base_add_one_prefcon(const char *match, const char *dev_name, 210 + int port_id) 211 + { 212 + int ret; 213 + 214 + ret = add_preferred_console_match(match, dev_name, port_id); 215 + if (ret == -ENOENT) 216 + return 0; 217 + 218 + return ret; 219 + } 220 + 221 + #ifdef __sparc__ 222 + 223 + /* Handle Sparc ttya and ttyb options as done in console_setup() */ 224 + static int serial_base_add_sparc_console(const char *dev_name, int idx) 225 + { 226 + const char *name; 227 + 228 + switch (idx) { 229 + case 0: 230 + name = "ttya"; 231 + break; 232 + case 1: 233 + name = "ttyb"; 234 + break; 235 + default: 236 + return 0; 237 + } 238 + 239 + return serial_base_add_one_prefcon(name, dev_name, idx); 240 + } 241 + 242 + #else 243 + 244 + static inline int serial_base_add_sparc_console(const char *dev_name, int idx) 245 + { 246 + return 0; 247 + } 248 + 249 + #endif 250 + 251 + static int serial_base_add_prefcon(const char *name, int idx) 252 + { 253 + const char *char_match __free(kfree) = NULL; 254 + const char *nmbr_match __free(kfree) = NULL; 255 + int ret; 256 + 257 + /* Handle ttyS specific options */ 258 + if (strstarts(name, "ttyS")) { 259 + /* No name, just a number */ 260 + nmbr_match = kasprintf(GFP_KERNEL, "%i", idx); 261 + if (!nmbr_match) 262 + return -ENODEV; 263 + 264 + ret = serial_base_add_one_prefcon(nmbr_match, name, idx); 265 + if (ret) 266 + return ret; 267 + 268 + /* Sparc ttya and ttyb */ 269 + ret = serial_base_add_sparc_console(name, idx); 270 + if (ret) 271 + return ret; 272 + } 273 + 274 + /* Handle the traditional character device name style console=ttyS0 */ 275 + char_match = kasprintf(GFP_KERNEL, "%s%i", name, idx); 276 + if (!char_match) 277 + return -ENOMEM; 278 + 279 + return serial_base_add_one_prefcon(char_match, name, idx); 280 + } 281 + 282 + /** 283 + * serial_base_add_preferred_console - Adds a preferred console 284 + * @drv: Serial port device driver 285 + * @port: Serial port instance 286 + * 287 + * Tries to add a preferred console for a serial port if specified in the 288 + * kernel command line. Supports both the traditional character device such 289 + * as console=ttyS0, and a hardware addressing based console=DEVNAME:0.0 290 + * style name. 291 + * 292 + * Translates the kernel command line option using a hardware based addressing 293 + * console=DEVNAME:0.0 to the serial port character device such as ttyS0. 294 + * Cannot be called early for ISA ports, depends on struct device. 295 + * 296 + * Note that duplicates are ignored by add_preferred_console(). 297 + * 298 + * Return: 0 on success, negative error code on failure. 
299 + */ 300 + int serial_base_add_preferred_console(struct uart_driver *drv, 301 + struct uart_port *port) 302 + { 303 + const char *port_match __free(kfree) = NULL; 304 + int ret; 305 + 306 + ret = serial_base_add_prefcon(drv->dev_name, port->line); 307 + if (ret) 308 + return ret; 309 + 310 + port_match = kasprintf(GFP_KERNEL, "%s:%i.%i", dev_name(port->dev), 311 + port->ctrl_id, port->port_id); 312 + if (!port_match) 313 + return -ENOMEM; 314 + 315 + /* Translate a hardware addressing style console=DEVNAME:0.0 */ 316 + return serial_base_add_one_prefcon(port_match, drv->dev_name, port->line); 317 + } 318 + 319 + #endif 320 + 321 + #ifdef CONFIG_SERIAL_8250_CONSOLE 322 + 323 + /* 324 + * Early ISA ports initialize the console before there is no struct device. 325 + * This should be only called from serial8250_isa_init_preferred_console(), 326 + * other callers are likely wrong and should rely on earlycon instead. 327 + */ 328 + int serial_base_add_isa_preferred_console(const char *name, int idx) 329 + { 330 + return serial_base_add_prefcon(name, idx); 331 + } 332 + 333 + #endif 334 335 static int serial_base_init(void) 336 {
+78 -76
drivers/tty/serial/serial_core.c
··· 243 uart_port_unlock_irq(uport); 244 } 245 246 - /* 247 - * Startup the port. This will be called once per open. All calls 248 - * will be serialised by the per-port mutex. 249 - */ 250 - static int uart_port_startup(struct tty_struct *tty, struct uart_state *state, 251 - bool init_hw) 252 { 253 - struct uart_port *uport = uart_port_check(state); 254 unsigned long flags; 255 unsigned long page; 256 - int retval = 0; 257 - 258 - if (uport->type == PORT_UNKNOWN) 259 - return 1; 260 - 261 - /* 262 - * Make sure the device is in D0 state. 263 - */ 264 - uart_change_pm(state, UART_PM_STATE_ON); 265 266 /* 267 * Initialise and allocate the transmit and temporary ··· 258 if (!page) 259 return -ENOMEM; 260 261 - uart_port_lock(state, flags); 262 - if (!state->xmit.buf) { 263 - state->xmit.buf = (unsigned char *) page; 264 - uart_circ_clear(&state->xmit); 265 uart_port_unlock(uport, flags); 266 } else { 267 uart_port_unlock(uport, flags); 268 /* 269 * Do not free() the page under the port lock, see 270 - * uart_shutdown(). 271 */ 272 free_page(page); 273 } 274 275 retval = uport->ops->startup(uport); 276 if (retval == 0) { ··· 391 { 392 struct uart_port *uport = uart_port_check(state); 393 struct tty_port *port = &state->port; 394 - unsigned long flags; 395 - char *xmit_buf = NULL; 396 397 /* 398 * Set the TTY IO error marker ··· 426 */ 427 tty_port_set_suspended(port, false); 428 429 - /* 430 - * Do not free() the transmit buffer page under the port lock since 431 - * this can create various circular locking scenarios. For instance, 432 - * console driver may need to allocate/free a debug object, which 433 - * can endup in printk() recursion. 434 - */ 435 - uart_port_lock(state, flags); 436 - xmit_buf = state->xmit.buf; 437 - state->xmit.buf = NULL; 438 - uart_port_unlock(uport, flags); 439 - 440 - free_page((unsigned long)xmit_buf); 441 } 442 443 /** ··· 587 { 588 struct uart_state *state = tty->driver_data; 589 struct uart_port *port; 590 - struct circ_buf *circ; 591 unsigned long flags; 592 int ret = 0; 593 594 - circ = &state->xmit; 595 port = uart_port_lock(state, flags); 596 - if (!circ->buf) { 597 uart_port_unlock(port, flags); 598 return 0; 599 } 600 601 - if (port && uart_circ_chars_free(circ) != 0) { 602 - circ->buf[circ->head] = c; 603 - circ->head = (circ->head + 1) & (UART_XMIT_SIZE - 1); 604 - ret = 1; 605 - } 606 uart_port_unlock(port, flags); 607 return ret; 608 } ··· 611 { 612 struct uart_state *state = tty->driver_data; 613 struct uart_port *port; 614 - struct circ_buf *circ; 615 unsigned long flags; 616 - int c, ret = 0; 617 618 /* 619 * This means you called this function _after_ the port was ··· 622 return -EL3HLT; 623 624 port = uart_port_lock(state, flags); 625 - circ = &state->xmit; 626 - if (!circ->buf) { 627 uart_port_unlock(port, flags); 628 return 0; 629 } 630 631 - while (port) { 632 - c = CIRC_SPACE_TO_END(circ->head, circ->tail, UART_XMIT_SIZE); 633 - if (count < c) 634 - c = count; 635 - if (c <= 0) 636 - break; 637 - memcpy(circ->buf + circ->head, buf, c); 638 - circ->head = (circ->head + c) & (UART_XMIT_SIZE - 1); 639 - buf += c; 640 - count -= c; 641 - ret += c; 642 - } 643 644 __uart_start(state); 645 uart_port_unlock(port, flags); ··· 643 unsigned int ret; 644 645 port = uart_port_lock(state, flags); 646 - ret = uart_circ_chars_free(&state->xmit); 647 uart_port_unlock(port, flags); 648 return ret; 649 } ··· 656 unsigned int ret; 657 658 port = uart_port_lock(state, flags); 659 - ret = uart_circ_chars_pending(&state->xmit); 660 uart_port_unlock(port, flags); 
661 return ret; 662 } ··· 679 port = uart_port_lock(state, flags); 680 if (!port) 681 return; 682 - uart_circ_clear(&state->xmit); 683 if (port->ops->flush_buffer) 684 port->ops->flush_buffer(port); 685 uart_port_unlock(port, flags); ··· 1082 * interrupt happens). 1083 */ 1084 if (uport->x_char || 1085 - ((uart_circ_chars_pending(&state->xmit) > 0) && 1086 !uart_tx_stopped(uport))) 1087 result &= ~TIOCSER_TEMT; 1088 ··· 1780 { 1781 struct uart_state *state = container_of(port, struct uart_state, port); 1782 struct uart_port *uport = uart_port_check(state); 1783 - char *buf; 1784 1785 /* 1786 * At this point, we stop accepting input. To do this, we ··· 1802 */ 1803 tty_port_set_suspended(port, false); 1804 1805 - /* 1806 - * Free the transmit buffer. 1807 - */ 1808 - uart_port_lock_irq(uport); 1809 - uart_circ_clear(&state->xmit); 1810 - buf = state->xmit.buf; 1811 - state->xmit.buf = NULL; 1812 - uart_port_unlock_irq(uport); 1813 - 1814 - free_page((unsigned long)buf); 1815 1816 uart_change_pm(state, UART_PM_STATE_OFF); 1817 } ··· 2408 uport->ops->stop_rx(uport); 2409 uart_port_unlock_irq(uport); 2410 } 2411 goto unlock; 2412 } 2413 ··· 3207 if (uport->attr_group) 3208 uport->tty_groups[1] = uport->attr_group; 3209 3210 /* 3211 * Register the port whether it's detected or not. This allows 3212 * setserial to be used to alter this port's parameters. ··· 3220 if (!IS_ERR(tty_dev)) { 3221 device_set_wakeup_capable(tty_dev, 1); 3222 } else { 3223 dev_err(uport->dev, "Cannot register tty device on line %d\n", 3224 uport->line); 3225 } ··· 3422 if (ret) 3423 goto err_unregister_ctrl_dev; 3424 3425 - ret = serial_core_add_one_port(drv, port); 3426 if (ret) 3427 goto err_unregister_port_dev; 3428 3429 - port->flags &= ~UPF_DEAD; 3430 3431 mutex_unlock(&port_mutex); 3432
··· 243 uart_port_unlock_irq(uport); 244 } 245 246 + static int uart_alloc_xmit_buf(struct tty_port *port) 247 { 248 + struct uart_state *state = container_of(port, struct uart_state, port); 249 + struct uart_port *uport; 250 unsigned long flags; 251 unsigned long page; 252 253 /* 254 * Initialise and allocate the transmit and temporary ··· 271 if (!page) 272 return -ENOMEM; 273 274 + uport = uart_port_lock(state, flags); 275 + if (!state->port.xmit_buf) { 276 + state->port.xmit_buf = (unsigned char *)page; 277 + kfifo_init(&state->port.xmit_fifo, state->port.xmit_buf, 278 + PAGE_SIZE); 279 uart_port_unlock(uport, flags); 280 } else { 281 uart_port_unlock(uport, flags); 282 /* 283 * Do not free() the page under the port lock, see 284 + * uart_free_xmit_buf(). 285 */ 286 free_page(page); 287 } 288 + 289 + return 0; 290 + } 291 + 292 + static void uart_free_xmit_buf(struct tty_port *port) 293 + { 294 + struct uart_state *state = container_of(port, struct uart_state, port); 295 + struct uart_port *uport; 296 + unsigned long flags; 297 + char *xmit_buf; 298 + 299 + /* 300 + * Do not free() the transmit buffer page under the port lock since 301 + * this can create various circular locking scenarios. For instance, 302 + * console driver may need to allocate/free a debug object, which 303 + * can end up in printk() recursion. 304 + */ 305 + uport = uart_port_lock(state, flags); 306 + xmit_buf = port->xmit_buf; 307 + port->xmit_buf = NULL; 308 + INIT_KFIFO(port->xmit_fifo); 309 + uart_port_unlock(uport, flags); 310 + 311 + free_page((unsigned long)xmit_buf); 312 + } 313 + 314 + /* 315 + * Startup the port. This will be called once per open. All calls 316 + * will be serialised by the per-port mutex. 317 + */ 318 + static int uart_port_startup(struct tty_struct *tty, struct uart_state *state, 319 + bool init_hw) 320 + { 321 + struct uart_port *uport = uart_port_check(state); 322 + int retval; 323 + 324 + if (uport->type == PORT_UNKNOWN) 325 + return 1; 326 + 327 + /* 328 + * Make sure the device is in D0 state. 
329 + */ 330 + uart_change_pm(state, UART_PM_STATE_ON); 331 + 332 + retval = uart_alloc_xmit_buf(&state->port); 333 + if (retval) 334 + return retval; 335 336 retval = uport->ops->startup(uport); 337 if (retval == 0) { ··· 356 { 357 struct uart_port *uport = uart_port_check(state); 358 struct tty_port *port = &state->port; 359 360 /* 361 * Set the TTY IO error marker ··· 393 */ 394 tty_port_set_suspended(port, false); 395 396 + uart_free_xmit_buf(port); 397 } 398 399 /** ··· 565 { 566 struct uart_state *state = tty->driver_data; 567 struct uart_port *port; 568 unsigned long flags; 569 int ret = 0; 570 571 port = uart_port_lock(state, flags); 572 + if (!state->port.xmit_buf) { 573 uart_port_unlock(port, flags); 574 return 0; 575 } 576 577 + if (port) 578 + ret = kfifo_put(&state->port.xmit_fifo, c); 579 uart_port_unlock(port, flags); 580 return ret; 581 } ··· 594 { 595 struct uart_state *state = tty->driver_data; 596 struct uart_port *port; 597 unsigned long flags; 598 + int ret = 0; 599 600 /* 601 * This means you called this function _after_ the port was ··· 606 return -EL3HLT; 607 608 port = uart_port_lock(state, flags); 609 + if (WARN_ON_ONCE(!state->port.xmit_buf)) { 610 uart_port_unlock(port, flags); 611 return 0; 612 } 613 614 + if (port) 615 + ret = kfifo_in(&state->port.xmit_fifo, buf, count); 616 617 __uart_start(state); 618 uart_port_unlock(port, flags); ··· 638 unsigned int ret; 639 640 port = uart_port_lock(state, flags); 641 + ret = kfifo_avail(&state->port.xmit_fifo); 642 uart_port_unlock(port, flags); 643 return ret; 644 } ··· 651 unsigned int ret; 652 653 port = uart_port_lock(state, flags); 654 + ret = kfifo_len(&state->port.xmit_fifo); 655 uart_port_unlock(port, flags); 656 return ret; 657 } ··· 674 port = uart_port_lock(state, flags); 675 if (!port) 676 return; 677 + kfifo_reset(&state->port.xmit_fifo); 678 if (port->ops->flush_buffer) 679 port->ops->flush_buffer(port); 680 uart_port_unlock(port, flags); ··· 1077 * interrupt happens). 1078 */ 1079 if (uport->x_char || 1080 + (!kfifo_is_empty(&state->port.xmit_fifo) && 1081 !uart_tx_stopped(uport))) 1082 result &= ~TIOCSER_TEMT; 1083 ··· 1775 { 1776 struct uart_state *state = container_of(port, struct uart_state, port); 1777 struct uart_port *uport = uart_port_check(state); 1778 1779 /* 1780 * At this point, we stop accepting input. To do this, we ··· 1798 */ 1799 tty_port_set_suspended(port, false); 1800 1801 + uart_free_xmit_buf(port); 1802 1803 uart_change_pm(state, UART_PM_STATE_OFF); 1804 } ··· 2413 uport->ops->stop_rx(uport); 2414 uart_port_unlock_irq(uport); 2415 } 2416 + device_set_awake_path(uport->dev); 2417 goto unlock; 2418 } 2419 ··· 3211 if (uport->attr_group) 3212 uport->tty_groups[1] = uport->attr_group; 3213 3214 + /* Ensure serdev drivers can call serdev_device_open() right away */ 3215 + uport->flags &= ~UPF_DEAD; 3216 + 3217 /* 3218 * Register the port whether it's detected or not. This allows 3219 * setserial to be used to alter this port's parameters. ··· 3221 if (!IS_ERR(tty_dev)) { 3222 device_set_wakeup_capable(tty_dev, 1); 3223 } else { 3224 + uport->flags |= UPF_DEAD; 3225 dev_err(uport->dev, "Cannot register tty device on line %d\n", 3226 uport->line); 3227 } ··· 3422 if (ret) 3423 goto err_unregister_ctrl_dev; 3424 3425 + ret = serial_base_add_preferred_console(drv, port); 3426 if (ret) 3427 goto err_unregister_port_dev; 3428 3429 + ret = serial_core_add_one_port(drv, port); 3430 + if (ret) 3431 + goto err_unregister_port_dev; 3432 3433 mutex_unlock(&port_mutex); 3434
+7 -2
drivers/tty/serial/serial_port.c
··· 11 #include <linux/of.h> 12 #include <linux/platform_device.h> 13 #include <linux/pm_runtime.h> 14 #include <linux/property.h> 15 #include <linux/serial_core.h> 16 #include <linux/spinlock.h> ··· 24 static int __serial_port_busy(struct uart_port *port) 25 { 26 return !uart_tx_stopped(port) && 27 - uart_circ_chars_pending(&port->state->xmit); 28 } 29 30 static int serial_port_runtime_resume(struct device *dev) ··· 256 257 if (dev_is_platform(dev)) 258 ret = platform_get_irq(to_platform_device(dev), 0); 259 - else 260 ret = fwnode_irq_get(dev_fwnode(dev), 0); 261 if (ret == -EPROBE_DEFER) 262 return ret;
··· 11 #include <linux/of.h> 12 #include <linux/platform_device.h> 13 #include <linux/pm_runtime.h> 14 + #include <linux/pnp.h> 15 #include <linux/property.h> 16 #include <linux/serial_core.h> 17 #include <linux/spinlock.h> ··· 23 static int __serial_port_busy(struct uart_port *port) 24 { 25 return !uart_tx_stopped(port) && 26 + !kfifo_is_empty(&port->state->port.xmit_fifo); 27 } 28 29 static int serial_port_runtime_resume(struct device *dev) ··· 255 256 if (dev_is_platform(dev)) 257 ret = platform_get_irq(to_platform_device(dev), 0); 258 + else if (dev_is_pnp(dev)) { 259 + ret = pnp_irq(to_pnp_dev(dev), 0); 260 + if (ret < 0) 261 + ret = -ENXIO; 262 + } else 263 ret = fwnode_irq_get(dev_fwnode(dev), 0); 264 if (ret == -EPROBE_DEFER) 265 return ret;
+37 -31
drivers/tty/serial/sh-sci.c
··· 585 sci_serial_out(port, SCSCR, new); 586 } 587 588 - if (s->chan_tx && !uart_circ_empty(&s->port.state->xmit) && 589 dma_submit_error(s->cookie_tx)) { 590 if (s->cfg->regtype == SCIx_RZ_SCIFA_REGTYPE) 591 /* Switch irq from SCIF to DMA */ ··· 817 818 static void sci_transmit_chars(struct uart_port *port) 819 { 820 - struct circ_buf *xmit = &port->state->xmit; 821 unsigned int stopped = uart_tx_stopped(port); 822 unsigned short status; 823 unsigned short ctrl; ··· 826 status = sci_serial_in(port, SCxSR); 827 if (!(status & SCxSR_TDxE(port))) { 828 ctrl = sci_serial_in(port, SCSCR); 829 - if (uart_circ_empty(xmit)) 830 ctrl &= ~SCSCR_TIE; 831 else 832 ctrl |= SCSCR_TIE; ··· 842 if (port->x_char) { 843 c = port->x_char; 844 port->x_char = 0; 845 - } else if (!uart_circ_empty(xmit) && !stopped) { 846 - c = xmit->buf[xmit->tail]; 847 - xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1); 848 - } else if (port->type == PORT_SCI && uart_circ_empty(xmit)) { 849 - ctrl = sci_serial_in(port, SCSCR); 850 - ctrl &= ~SCSCR_TE; 851 - sci_serial_out(port, SCSCR, ctrl); 852 - return; 853 - } else { 854 break; 855 } 856 ··· 860 861 sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port)); 862 863 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 864 uart_write_wakeup(port); 865 - if (uart_circ_empty(xmit)) { 866 if (port->type == PORT_SCI) { 867 ctrl = sci_serial_in(port, SCSCR); 868 ctrl &= ~SCSCR_TIE; ··· 1198 { 1199 struct sci_port *s = arg; 1200 struct uart_port *port = &s->port; 1201 - struct circ_buf *xmit = &port->state->xmit; 1202 unsigned long flags; 1203 1204 dev_dbg(port->dev, "%s(%d)\n", __func__, port->line); ··· 1207 1208 uart_xmit_advance(port, s->tx_dma_len); 1209 1210 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 1211 uart_write_wakeup(port); 1212 1213 - if (!uart_circ_empty(xmit)) { 1214 s->cookie_tx = 0; 1215 schedule_work(&s->work_tx); 1216 } else { ··· 1257 return -1; 1258 } 1259 1260 static void sci_dma_rx_chan_invalidate(struct sci_port *s) 1261 { 1262 unsigned int i; ··· 1271 static void sci_dma_rx_release(struct sci_port *s) 1272 { 1273 struct dma_chan *chan = s->chan_rx_saved; 1274 1275 s->chan_rx_saved = NULL; 1276 sci_dma_rx_chan_invalidate(s); 1277 dmaengine_terminate_sync(chan); 1278 dma_free_coherent(chan->device->dev, s->buf_len_rx * 2, s->rx_buf[0], 1279 sg_dma_address(&s->sg_rx[0])); ··· 1324 dev_dbg(port->dev, "%s(%d) active cookie %d\n", __func__, port->line, 1325 s->active_rx); 1326 1327 uart_port_lock_irqsave(port, &flags); 1328 1329 active = sci_dma_rx_find_active(s); 1330 if (active >= 0) 1331 count = sci_dma_rx_push(s, s->rx_buf[active], s->buf_len_rx); 1332 - 1333 - start_hrtimer_us(&s->rx_timer, s->rx_timeout); 1334 1335 if (count) 1336 tty_flip_buffer_push(&port->state->port); ··· 1354 uart_port_unlock_irqrestore(port, flags); 1355 dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n", 1356 __func__, s->cookie_rx[active], active, s->active_rx); 1357 return; 1358 1359 fail: 1360 - uart_port_unlock_irqrestore(port, flags); 1361 - dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n"); 1362 /* Switch to PIO */ 1363 - uart_port_lock_irqsave(port, &flags); 1364 dmaengine_terminate_async(chan); 1365 sci_dma_rx_chan_invalidate(s); 1366 sci_dma_rx_reenable_irq(s); 1367 uart_port_unlock_irqrestore(port, flags); 1368 } 1369 1370 static void sci_dma_tx_release(struct sci_port *s) ··· 1430 struct dma_async_tx_descriptor *desc; 1431 struct dma_chan *chan = s->chan_tx; 1432 struct uart_port *port = &s->port; 1433 - struct circ_buf *xmit = &port->state->xmit; 
1434 unsigned long flags; 1435 dma_addr_t buf; 1436 - int head, tail; 1437 1438 /* 1439 * DMA is idle now. ··· 1443 * consistent xmit buffer state. 1444 */ 1445 uart_port_lock_irq(port); 1446 - head = xmit->head; 1447 - tail = xmit->tail; 1448 buf = s->tx_dma_addr + tail; 1449 - s->tx_dma_len = CIRC_CNT_TO_END(head, tail, UART_XMIT_SIZE); 1450 if (!s->tx_dma_len) { 1451 /* Transmit buffer has been flushed */ 1452 uart_port_unlock_irq(port); ··· 1474 } 1475 1476 uart_port_unlock_irq(port); 1477 - dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n", 1478 - __func__, xmit->buf, tail, head, s->cookie_tx); 1479 1480 dma_async_issue_pending(chan); 1481 return; ··· 1590 static void sci_request_dma(struct uart_port *port) 1591 { 1592 struct sci_port *s = to_sci_port(port); 1593 struct dma_chan *chan; 1594 1595 dev_dbg(port->dev, "%s: port %d\n", __func__, port->line); ··· 1619 if (chan) { 1620 /* UART circular tx buffer is an aligned page. */ 1621 s->tx_dma_addr = dma_map_single(chan->device->dev, 1622 - port->state->xmit.buf, 1623 UART_XMIT_SIZE, 1624 DMA_TO_DEVICE); 1625 if (dma_mapping_error(chan->device->dev, s->tx_dma_addr)) { ··· 1628 } else { 1629 dev_dbg(port->dev, "%s: mapped %lu@%p to %pad\n", 1630 __func__, UART_XMIT_SIZE, 1631 - port->state->xmit.buf, &s->tx_dma_addr); 1632 1633 INIT_WORK(&s->work_tx, sci_dma_tx_work_fn); 1634 s->chan_tx_saved = s->chan_tx = chan;
··· 585 sci_serial_out(port, SCSCR, new); 586 } 587 588 + if (s->chan_tx && !kfifo_is_empty(&port->state->port.xmit_fifo) && 589 dma_submit_error(s->cookie_tx)) { 590 if (s->cfg->regtype == SCIx_RZ_SCIFA_REGTYPE) 591 /* Switch irq from SCIF to DMA */ ··· 817 818 static void sci_transmit_chars(struct uart_port *port) 819 { 820 + struct tty_port *tport = &port->state->port; 821 unsigned int stopped = uart_tx_stopped(port); 822 unsigned short status; 823 unsigned short ctrl; ··· 826 status = sci_serial_in(port, SCxSR); 827 if (!(status & SCxSR_TDxE(port))) { 828 ctrl = sci_serial_in(port, SCSCR); 829 + if (kfifo_is_empty(&tport->xmit_fifo)) 830 ctrl &= ~SCSCR_TIE; 831 else 832 ctrl |= SCSCR_TIE; ··· 842 if (port->x_char) { 843 c = port->x_char; 844 port->x_char = 0; 845 + } else if (stopped || !kfifo_get(&tport->xmit_fifo, &c)) { 846 + if (port->type == PORT_SCI && 847 + kfifo_is_empty(&tport->xmit_fifo)) { 848 + ctrl = sci_serial_in(port, SCSCR); 849 + ctrl &= ~SCSCR_TE; 850 + sci_serial_out(port, SCSCR, ctrl); 851 + return; 852 + } 853 break; 854 } 855 ··· 861 862 sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port)); 863 864 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 865 uart_write_wakeup(port); 866 + if (kfifo_is_empty(&tport->xmit_fifo)) { 867 if (port->type == PORT_SCI) { 868 ctrl = sci_serial_in(port, SCSCR); 869 ctrl &= ~SCSCR_TIE; ··· 1199 { 1200 struct sci_port *s = arg; 1201 struct uart_port *port = &s->port; 1202 + struct tty_port *tport = &port->state->port; 1203 unsigned long flags; 1204 1205 dev_dbg(port->dev, "%s(%d)\n", __func__, port->line); ··· 1208 1209 uart_xmit_advance(port, s->tx_dma_len); 1210 1211 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 1212 uart_write_wakeup(port); 1213 1214 + if (!kfifo_is_empty(&tport->xmit_fifo)) { 1215 s->cookie_tx = 0; 1216 schedule_work(&s->work_tx); 1217 } else { ··· 1258 return -1; 1259 } 1260 1261 + /* Must only be called with uart_port_lock taken */ 1262 static void sci_dma_rx_chan_invalidate(struct sci_port *s) 1263 { 1264 unsigned int i; ··· 1271 static void sci_dma_rx_release(struct sci_port *s) 1272 { 1273 struct dma_chan *chan = s->chan_rx_saved; 1274 + struct uart_port *port = &s->port; 1275 + unsigned long flags; 1276 1277 + uart_port_lock_irqsave(port, &flags); 1278 s->chan_rx_saved = NULL; 1279 sci_dma_rx_chan_invalidate(s); 1280 + uart_port_unlock_irqrestore(port, flags); 1281 + 1282 dmaengine_terminate_sync(chan); 1283 dma_free_coherent(chan->device->dev, s->buf_len_rx * 2, s->rx_buf[0], 1284 sg_dma_address(&s->sg_rx[0])); ··· 1319 dev_dbg(port->dev, "%s(%d) active cookie %d\n", __func__, port->line, 1320 s->active_rx); 1321 1322 + hrtimer_cancel(&s->rx_timer); 1323 + 1324 uart_port_lock_irqsave(port, &flags); 1325 1326 active = sci_dma_rx_find_active(s); 1327 if (active >= 0) 1328 count = sci_dma_rx_push(s, s->rx_buf[active], s->buf_len_rx); 1329 1330 if (count) 1331 tty_flip_buffer_push(&port->state->port); ··· 1349 uart_port_unlock_irqrestore(port, flags); 1350 dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n", 1351 __func__, s->cookie_rx[active], active, s->active_rx); 1352 + 1353 + start_hrtimer_us(&s->rx_timer, s->rx_timeout); 1354 + 1355 return; 1356 1357 fail: 1358 /* Switch to PIO */ 1359 dmaengine_terminate_async(chan); 1360 sci_dma_rx_chan_invalidate(s); 1361 sci_dma_rx_reenable_irq(s); 1362 uart_port_unlock_irqrestore(port, flags); 1363 + dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n"); 1364 } 1365 1366 static void sci_dma_tx_release(struct sci_port *s) ··· 1424 struct 
dma_async_tx_descriptor *desc; 1425 struct dma_chan *chan = s->chan_tx; 1426 struct uart_port *port = &s->port; 1427 + struct tty_port *tport = &port->state->port; 1428 unsigned long flags; 1429 + unsigned int tail; 1430 dma_addr_t buf; 1431 1432 /* 1433 * DMA is idle now. ··· 1437 * consistent xmit buffer state. 1438 */ 1439 uart_port_lock_irq(port); 1440 + s->tx_dma_len = kfifo_out_linear(&tport->xmit_fifo, &tail, 1441 + UART_XMIT_SIZE); 1442 buf = s->tx_dma_addr + tail; 1443 if (!s->tx_dma_len) { 1444 /* Transmit buffer has been flushed */ 1445 uart_port_unlock_irq(port); ··· 1469 } 1470 1471 uart_port_unlock_irq(port); 1472 + dev_dbg(port->dev, "%s: %p: %u, cookie %d\n", 1473 + __func__, tport->xmit_buf, tail, s->cookie_tx); 1474 1475 dma_async_issue_pending(chan); 1476 return; ··· 1585 static void sci_request_dma(struct uart_port *port) 1586 { 1587 struct sci_port *s = to_sci_port(port); 1588 + struct tty_port *tport = &port->state->port; 1589 struct dma_chan *chan; 1590 1591 dev_dbg(port->dev, "%s: port %d\n", __func__, port->line); ··· 1613 if (chan) { 1614 /* UART circular tx buffer is an aligned page. */ 1615 s->tx_dma_addr = dma_map_single(chan->device->dev, 1616 + tport->xmit_buf, 1617 UART_XMIT_SIZE, 1618 DMA_TO_DEVICE); 1619 if (dma_mapping_error(chan->device->dev, s->tx_dma_addr)) { ··· 1622 } else { 1623 dev_dbg(port->dev, "%s: mapped %lu@%p to %pad\n", 1624 __func__, UART_XMIT_SIZE, 1625 + tport->xmit_buf, &s->tx_dma_addr); 1626 1627 INIT_WORK(&s->work_tx, sci_dma_tx_work_fn); 1628 s->chan_tx_saved = s->chan_tx = chan;
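The sh-sci DMA path above shows the new linear-view pattern: instead of CIRC_CNT_TO_END() on a circ_buf, kfifo_out_linear() returns how many bytes sit contiguously at the fifo tail plus the tail offset, which is added to the DMA address the driver obtained by mapping tport->xmit_buf once at request time; the data is only consumed (uart_xmit_advance()) when the transfer completes. A rough sketch under those assumptions, with foo_port, foo->tx_dma_addr and foo_submit_tx_dma() hypothetical:

    /* Sketch only: foo_* names are hypothetical placeholders. */
    static void foo_dma_tx_start(struct foo_port *foo)
    {
        struct uart_port *port = &foo->port;
        struct tty_port *tport = &port->state->port;
        unsigned int tail, len;

        uart_port_lock_irq(port);
        /* Largest contiguous run starting at the fifo tail, and its offset. */
        len = kfifo_out_linear(&tport->xmit_fifo, &tail, UART_XMIT_SIZE);
        if (len) {
            foo->tx_dma_len = len;
            foo_submit_tx_dma(foo, foo->tx_dma_addr + tail, len);
        }
        uart_port_unlock_irq(port);
    }

    /* DMA completion: only now are the bytes dropped from the kfifo. */
    static void foo_dma_tx_complete(struct foo_port *foo)
    {
        uart_port_lock_irq(&foo->port);
        uart_xmit_advance(&foo->port, foo->tx_dma_len);
        uart_port_unlock_irq(&foo->port);
    }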
+2 -2
drivers/tty/serial/sifive.c
··· 761 } 762 763 OF_EARLYCON_DECLARE(sifive, "sifive,uart0", early_sifive_serial_setup); 764 - OF_EARLYCON_DECLARE(sifive, "sifive,fu540-c000-uart0", 765 early_sifive_serial_setup); 766 #endif /* CONFIG_SERIAL_EARLYCON */ 767 ··· 1032 sifive_serial_resume); 1033 1034 static const struct of_device_id sifive_serial_of_match[] = { 1035 - { .compatible = "sifive,fu540-c000-uart0" }, 1036 { .compatible = "sifive,uart0" }, 1037 {}, 1038 };
··· 761 } 762 763 OF_EARLYCON_DECLARE(sifive, "sifive,uart0", early_sifive_serial_setup); 764 + OF_EARLYCON_DECLARE(sifive, "sifive,fu540-c000-uart", 765 early_sifive_serial_setup); 766 #endif /* CONFIG_SERIAL_EARLYCON */ 767 ··· 1032 sifive_serial_resume); 1033 1034 static const struct of_device_id sifive_serial_of_match[] = { 1035 + { .compatible = "sifive,fu540-c000-uart" }, 1036 { .compatible = "sifive,uart0" }, 1037 {}, 1038 };
+10 -10
drivers/tty/serial/sprd_serial.c
··· 227 { 228 struct sprd_uart_port *sp = 229 container_of(port, struct sprd_uart_port, port); 230 - struct circ_buf *xmit = &port->state->xmit; 231 232 - sp->tx_dma.trans_len = 233 - CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 234 235 - sp->tx_dma.phys_addr = dma_map_single(port->dev, 236 - (void *)&(xmit->buf[xmit->tail]), 237 sp->tx_dma.trans_len, 238 DMA_TO_DEVICE); 239 return dma_mapping_error(port->dev, sp->tx_dma.phys_addr); ··· 244 struct uart_port *port = (struct uart_port *)data; 245 struct sprd_uart_port *sp = 246 container_of(port, struct sprd_uart_port, port); 247 - struct circ_buf *xmit = &port->state->xmit; 248 unsigned long flags; 249 250 uart_port_lock_irqsave(port, &flags); ··· 253 254 uart_xmit_advance(port, sp->tx_dma.trans_len); 255 256 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 257 uart_write_wakeup(port); 258 259 - if (uart_circ_empty(xmit) || sprd_tx_buf_remap(port) || 260 sprd_tx_dma_config(port)) 261 sp->tx_dma.trans_len = 0; 262 ··· 319 { 320 struct sprd_uart_port *sp = 321 container_of(port, struct sprd_uart_port, port); 322 - struct circ_buf *xmit = &port->state->xmit; 323 324 if (port->x_char) { 325 serial_out(port, SPRD_TXD, port->x_char); ··· 328 return; 329 } 330 331 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 332 sprd_stop_tx_dma(port); 333 return; 334 }
··· 227 { 228 struct sprd_uart_port *sp = 229 container_of(port, struct sprd_uart_port, port); 230 + struct tty_port *tport = &port->state->port; 231 + unsigned char *tail; 232 233 + sp->tx_dma.trans_len = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, 234 + UART_XMIT_SIZE); 235 236 + sp->tx_dma.phys_addr = dma_map_single(port->dev, tail, 237 sp->tx_dma.trans_len, 238 DMA_TO_DEVICE); 239 return dma_mapping_error(port->dev, sp->tx_dma.phys_addr); ··· 244 struct uart_port *port = (struct uart_port *)data; 245 struct sprd_uart_port *sp = 246 container_of(port, struct sprd_uart_port, port); 247 + struct tty_port *tport = &port->state->port; 248 unsigned long flags; 249 250 uart_port_lock_irqsave(port, &flags); ··· 253 254 uart_xmit_advance(port, sp->tx_dma.trans_len); 255 256 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 257 uart_write_wakeup(port); 258 259 + if (kfifo_is_empty(&tport->xmit_fifo) || sprd_tx_buf_remap(port) || 260 sprd_tx_dma_config(port)) 261 sp->tx_dma.trans_len = 0; 262 ··· 319 { 320 struct sprd_uart_port *sp = 321 container_of(port, struct sprd_uart_port, port); 322 + struct tty_port *tport = &port->state->port; 323 324 if (port->x_char) { 325 serial_out(port, SPRD_TXD, port->x_char); ··· 328 return; 329 } 330 331 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 332 sprd_stop_tx_dma(port); 333 return; 334 }
+2 -2
drivers/tty/serial/st-asc.c
··· 387 /* There are probably characters waiting to be transmitted. */ 388 static void asc_start_tx(struct uart_port *port) 389 { 390 - struct circ_buf *xmit = &port->state->xmit; 391 392 - if (!uart_circ_empty(xmit)) 393 asc_enable_tx_interrupts(port); 394 } 395
··· 387 /* There are probably characters waiting to be transmitted. */ 388 static void asc_start_tx(struct uart_port *port) 389 { 390 + struct tty_port *tport = &port->state->port; 391 392 + if (!kfifo_is_empty(&tport->xmit_fifo)) 393 asc_enable_tx_interrupts(port); 394 } 395
+20 -32
drivers/tty/serial/stm32-usart.c
··· 696 { 697 struct stm32_port *stm32_port = to_stm32_port(port); 698 const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs; 699 - struct circ_buf *xmit = &port->state->xmit; 700 701 - while (!uart_circ_empty(xmit)) { 702 /* Check that TDR is empty before filling FIFO */ 703 if (!(readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE)) 704 break; 705 - writel_relaxed(xmit->buf[xmit->tail], port->membase + ofs->tdr); 706 - uart_xmit_advance(port, 1); 707 } 708 709 /* rely on TXE irq (mask or unmask) for sending remaining data */ 710 - if (uart_circ_empty(xmit)) 711 stm32_usart_tx_interrupt_disable(port); 712 else 713 stm32_usart_tx_interrupt_enable(port); ··· 721 static void stm32_usart_transmit_chars_dma(struct uart_port *port) 722 { 723 struct stm32_port *stm32port = to_stm32_port(port); 724 - struct circ_buf *xmit = &port->state->xmit; 725 struct dma_async_tx_descriptor *desc = NULL; 726 unsigned int count; 727 int ret; ··· 733 return; 734 } 735 736 - count = uart_circ_chars_pending(xmit); 737 - 738 - if (count > TX_BUF_L) 739 - count = TX_BUF_L; 740 - 741 - if (xmit->tail < xmit->head) { 742 - memcpy(&stm32port->tx_buf[0], &xmit->buf[xmit->tail], count); 743 - } else { 744 - size_t one = UART_XMIT_SIZE - xmit->tail; 745 - size_t two; 746 - 747 - if (one > count) 748 - one = count; 749 - two = count - one; 750 - 751 - memcpy(&stm32port->tx_buf[0], &xmit->buf[xmit->tail], one); 752 - if (two) 753 - memcpy(&stm32port->tx_buf[one], &xmit->buf[0], two); 754 - } 755 756 desc = dmaengine_prep_slave_single(stm32port->tx_ch, 757 stm32port->tx_dma_buf, ··· 780 { 781 struct stm32_port *stm32_port = to_stm32_port(port); 782 const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs; 783 - struct circ_buf *xmit = &port->state->xmit; 784 u32 isr; 785 int ret; 786 787 if (!stm32_port->hw_flow_control && 788 port->rs485.flags & SER_RS485_ENABLED && 789 (port->x_char || 790 - !(uart_circ_empty(xmit) || uart_tx_stopped(port)))) { 791 stm32_usart_tc_interrupt_disable(port); 792 stm32_usart_rs485_rts_enable(port); 793 } ··· 814 return; 815 } 816 817 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 818 stm32_usart_tx_interrupt_disable(port); 819 return; 820 } ··· 829 else 830 stm32_usart_transmit_chars_pio(port); 831 832 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 833 uart_write_wakeup(port); 834 835 - if (uart_circ_empty(xmit)) { 836 stm32_usart_tx_interrupt_disable(port); 837 if (!stm32_port->hw_flow_control && 838 port->rs485.flags & SER_RS485_ENABLED) { ··· 963 /* There are probably characters waiting to be transmitted. */ 964 static void stm32_usart_start_tx(struct uart_port *port) 965 { 966 - struct circ_buf *xmit = &port->state->xmit; 967 968 - if (uart_circ_empty(xmit) && !port->x_char) { 969 stm32_usart_rs485_rts_disable(port); 970 return; 971 }
··· 696 { 697 struct stm32_port *stm32_port = to_stm32_port(port); 698 const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs; 699 + struct tty_port *tport = &port->state->port; 700 701 + while (1) { 702 + unsigned char ch; 703 + 704 /* Check that TDR is empty before filling FIFO */ 705 if (!(readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE)) 706 break; 707 + 708 + if (!uart_fifo_get(port, &ch)) 709 + break; 710 + 711 + writel_relaxed(ch, port->membase + ofs->tdr); 712 } 713 714 /* rely on TXE irq (mask or unmask) for sending remaining data */ 715 + if (kfifo_is_empty(&tport->xmit_fifo)) 716 stm32_usart_tx_interrupt_disable(port); 717 else 718 stm32_usart_tx_interrupt_enable(port); ··· 716 static void stm32_usart_transmit_chars_dma(struct uart_port *port) 717 { 718 struct stm32_port *stm32port = to_stm32_port(port); 719 + struct tty_port *tport = &port->state->port; 720 struct dma_async_tx_descriptor *desc = NULL; 721 unsigned int count; 722 int ret; ··· 728 return; 729 } 730 731 + count = kfifo_out_peek(&tport->xmit_fifo, &stm32port->tx_buf[0], 732 + TX_BUF_L); 733 734 desc = dmaengine_prep_slave_single(stm32port->tx_ch, 735 stm32port->tx_dma_buf, ··· 792 { 793 struct stm32_port *stm32_port = to_stm32_port(port); 794 const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs; 795 + struct tty_port *tport = &port->state->port; 796 u32 isr; 797 int ret; 798 799 if (!stm32_port->hw_flow_control && 800 port->rs485.flags & SER_RS485_ENABLED && 801 (port->x_char || 802 + !(kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)))) { 803 stm32_usart_tc_interrupt_disable(port); 804 stm32_usart_rs485_rts_enable(port); 805 } ··· 826 return; 827 } 828 829 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 830 stm32_usart_tx_interrupt_disable(port); 831 return; 832 } ··· 841 else 842 stm32_usart_transmit_chars_pio(port); 843 844 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 845 uart_write_wakeup(port); 846 847 + if (kfifo_is_empty(&tport->xmit_fifo)) { 848 stm32_usart_tx_interrupt_disable(port); 849 if (!stm32_port->hw_flow_control && 850 port->rs485.flags & SER_RS485_ENABLED) { ··· 975 /* There are probably characters waiting to be transmitted. */ 976 static void stm32_usart_start_tx(struct uart_port *port) 977 { 978 + struct tty_port *tport = &port->state->port; 979 980 + if (kfifo_is_empty(&tport->xmit_fifo) && !port->x_char) { 981 stm32_usart_rs485_rts_disable(port); 982 return; 983 }
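stm32-usart takes the other DMA route: kfifo_out_peek() copies up to TX_BUF_L bytes into a coherent bounce buffer without consuming them, so the fifo still owns the data until the transfer is known to have gone out and uart_xmit_advance() drops it. A condensed sketch of that peek-then-advance pattern; foo_port, foo->tx_buf/tx_dma_buf, FOO_TX_BUF_LEN and foo_issue_dma() are hypothetical:

    /* Sketch only: foo_* names are hypothetical placeholders. */
    static void foo_start_tx_dma(struct foo_port *foo)
    {
        struct tty_port *tport = &foo->port.state->port;
        unsigned int count;

        /* Copy out without consuming; the fifo keeps the bytes for now. */
        count = kfifo_out_peek(&tport->xmit_fifo, foo->tx_buf, FOO_TX_BUF_LEN);
        if (!count)
            return;

        foo_issue_dma(foo, foo->tx_dma_buf, count);
    }

    static void foo_tx_dma_complete(struct foo_port *foo, unsigned int sent)
    {
        /* Now the transmitted bytes can be dropped and writers woken. */
        uart_xmit_advance(&foo->port, sent);
        if (kfifo_len(&foo->port.state->port.xmit_fifo) < WAKEUP_CHARS)
            uart_write_wakeup(&foo->port);
    }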
+20 -15
drivers/tty/serial/sunhv.c
··· 39 40 static int hung_up = 0; 41 42 - static void transmit_chars_putchar(struct uart_port *port, struct circ_buf *xmit) 43 { 44 - while (!uart_circ_empty(xmit)) { 45 - long status = sun4v_con_putchar(xmit->buf[xmit->tail]); 46 47 if (status != HV_EOK) 48 break; ··· 54 } 55 } 56 57 - static void transmit_chars_write(struct uart_port *port, struct circ_buf *xmit) 58 { 59 - while (!uart_circ_empty(xmit)) { 60 - unsigned long ra = __pa(xmit->buf + xmit->tail); 61 - unsigned long len, status, sent; 62 63 - len = CIRC_CNT_TO_END(xmit->head, xmit->tail, 64 - UART_XMIT_SIZE); 65 status = sun4v_con_write(ra, len, &sent); 66 if (status != HV_EOK) 67 break; ··· 170 } 171 172 struct sunhv_ops { 173 - void (*transmit_chars)(struct uart_port *port, struct circ_buf *xmit); 174 int (*receive_chars)(struct uart_port *port); 175 }; 176 ··· 201 202 static void transmit_chars(struct uart_port *port) 203 { 204 - struct circ_buf *xmit; 205 206 if (!port->state) 207 return; 208 209 - xmit = &port->state->xmit; 210 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) 211 return; 212 213 - sunhv_ops->transmit_chars(port, xmit); 214 215 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 216 uart_write_wakeup(port); 217 } 218
··· 39 40 static int hung_up = 0; 41 42 + static void transmit_chars_putchar(struct uart_port *port, 43 + struct tty_port *tport) 44 { 45 + unsigned char ch; 46 + 47 + while (kfifo_peek(&tport->xmit_fifo, &ch)) { 48 + long status = sun4v_con_putchar(ch); 49 50 if (status != HV_EOK) 51 break; ··· 51 } 52 } 53 54 + static void transmit_chars_write(struct uart_port *port, struct tty_port *tport) 55 { 56 + while (!kfifo_is_empty(&tport->xmit_fifo)) { 57 + unsigned long len, ra, status, sent; 58 + unsigned char *tail; 59 60 + len = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, 61 + UART_XMIT_SIZE); 62 + ra = __pa(tail); 63 + 64 status = sun4v_con_write(ra, len, &sent); 65 if (status != HV_EOK) 66 break; ··· 165 } 166 167 struct sunhv_ops { 168 + void (*transmit_chars)(struct uart_port *port, struct tty_port *tport); 169 int (*receive_chars)(struct uart_port *port); 170 }; 171 ··· 196 197 static void transmit_chars(struct uart_port *port) 198 { 199 + struct tty_port *tport; 200 201 if (!port->state) 202 return; 203 204 + tport = &port->state->port; 205 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) 206 return; 207 208 + sunhv_ops->transmit_chars(port, tport); 209 210 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 211 uart_write_wakeup(port); 212 } 213
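sunhv's putchar path can be refused by the hypervisor, so the converted code only peeks at the next byte and consumes it once the write has been accepted. A sketch of that speculative-write pattern, with foo_hw_putchar() a hypothetical accessor returning 0 on success:

    /* Sketch only: foo_hw_putchar() is a hypothetical placeholder. */
    static void foo_tx_putchar(struct uart_port *port)
    {
        struct tty_port *tport = &port->state->port;
        unsigned char ch;

        while (kfifo_peek(&tport->xmit_fifo, &ch)) {
            if (foo_hw_putchar(port, ch))
                break;          /* hardware busy, leave the byte queued */
            /* Accepted: drop it from the fifo and account it. */
            uart_xmit_advance(port, 1);
        }
    }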
+9 -7
drivers/tty/serial/sunplus-uart.c
··· 200 201 static void transmit_chars(struct uart_port *port) 202 { 203 - struct circ_buf *xmit = &port->state->xmit; 204 205 if (port->x_char) { 206 sp_uart_put_char(port, port->x_char); ··· 209 return; 210 } 211 212 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 213 sunplus_stop_tx(port); 214 return; 215 } 216 217 do { 218 - sp_uart_put_char(port, xmit->buf[xmit->tail]); 219 - uart_xmit_advance(port, 1); 220 - if (uart_circ_empty(xmit)) 221 break; 222 } while (sunplus_tx_buf_not_full(port)); 223 224 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 225 uart_write_wakeup(port); 226 227 - if (uart_circ_empty(xmit)) 228 sunplus_stop_tx(port); 229 } 230
··· 200 201 static void transmit_chars(struct uart_port *port) 202 { 203 + struct tty_port *tport = &port->state->port; 204 205 if (port->x_char) { 206 sp_uart_put_char(port, port->x_char); ··· 209 return; 210 } 211 212 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 213 sunplus_stop_tx(port); 214 return; 215 } 216 217 do { 218 + unsigned char ch; 219 + 220 + if (!uart_fifo_get(port, &ch)) 221 break; 222 + 223 + sp_uart_put_char(port, ch); 224 } while (sunplus_tx_buf_not_full(port)); 225 226 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 227 uart_write_wakeup(port); 228 229 + if (kfifo_is_empty(&tport->xmit_fifo)) 230 sunplus_stop_tx(port); 231 } 232
+16 -14
drivers/tty/serial/sunsab.c
··· 232 static void transmit_chars(struct uart_sunsab_port *up, 233 union sab82532_irq_status *stat) 234 { 235 - struct circ_buf *xmit = &up->port.state->xmit; 236 int i; 237 238 if (stat->sreg.isr1 & SAB82532_ISR1_ALLS) { ··· 252 set_bit(SAB82532_XPR, &up->irqflags); 253 sunsab_tx_idle(up); 254 255 - if (uart_circ_empty(xmit) || uart_tx_stopped(&up->port)) { 256 up->interrupt_mask1 |= SAB82532_IMR1_XPR; 257 writeb(up->interrupt_mask1, &up->regs->w.imr1); 258 return; ··· 265 /* Stuff 32 bytes into Transmit FIFO. */ 266 clear_bit(SAB82532_XPR, &up->irqflags); 267 for (i = 0; i < up->port.fifosize; i++) { 268 - writeb(xmit->buf[xmit->tail], 269 - &up->regs->w.xfifo[i]); 270 - uart_xmit_advance(&up->port, 1); 271 - if (uart_circ_empty(xmit)) 272 break; 273 } 274 275 /* Issue a Transmit Frame command. */ 276 sunsab_cec_wait(up); 277 writeb(SAB82532_CMDR_XF, &up->regs->w.cmdr); 278 279 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 280 uart_write_wakeup(&up->port); 281 282 - if (uart_circ_empty(xmit)) 283 sunsab_stop_tx(&up->port); 284 } 285 ··· 436 { 437 struct uart_sunsab_port *up = 438 container_of(port, struct uart_sunsab_port, port); 439 - struct circ_buf *xmit = &up->port.state->xmit; 440 int i; 441 442 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) 443 return; 444 445 up->interrupt_mask1 &= ~(SAB82532_IMR1_ALLS|SAB82532_IMR1_XPR); ··· 452 clear_bit(SAB82532_XPR, &up->irqflags); 453 454 for (i = 0; i < up->port.fifosize; i++) { 455 - writeb(xmit->buf[xmit->tail], 456 - &up->regs->w.xfifo[i]); 457 - uart_xmit_advance(&up->port, 1); 458 - if (uart_circ_empty(xmit)) 459 break; 460 } 461 462 /* Issue a Transmit Frame command. */
··· 232 static void transmit_chars(struct uart_sunsab_port *up, 233 union sab82532_irq_status *stat) 234 { 235 + struct tty_port *tport = &up->port.state->port; 236 int i; 237 238 if (stat->sreg.isr1 & SAB82532_ISR1_ALLS) { ··· 252 set_bit(SAB82532_XPR, &up->irqflags); 253 sunsab_tx_idle(up); 254 255 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(&up->port)) { 256 up->interrupt_mask1 |= SAB82532_IMR1_XPR; 257 writeb(up->interrupt_mask1, &up->regs->w.imr1); 258 return; ··· 265 /* Stuff 32 bytes into Transmit FIFO. */ 266 clear_bit(SAB82532_XPR, &up->irqflags); 267 for (i = 0; i < up->port.fifosize; i++) { 268 + unsigned char ch; 269 + 270 + if (!uart_fifo_get(&up->port, &ch)) 271 break; 272 + 273 + writeb(ch, &up->regs->w.xfifo[i]); 274 } 275 276 /* Issue a Transmit Frame command. */ 277 sunsab_cec_wait(up); 278 writeb(SAB82532_CMDR_XF, &up->regs->w.cmdr); 279 280 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 281 uart_write_wakeup(&up->port); 282 283 + if (kfifo_is_empty(&tport->xmit_fifo)) 284 sunsab_stop_tx(&up->port); 285 } 286 ··· 435 { 436 struct uart_sunsab_port *up = 437 container_of(port, struct uart_sunsab_port, port); 438 + struct tty_port *tport = &up->port.state->port; 439 int i; 440 441 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) 442 return; 443 444 up->interrupt_mask1 &= ~(SAB82532_IMR1_ALLS|SAB82532_IMR1_XPR); ··· 451 clear_bit(SAB82532_XPR, &up->irqflags); 452 453 for (i = 0; i < up->port.fifosize; i++) { 454 + unsigned char ch; 455 + 456 + if (!uart_fifo_get(&up->port, &ch)) 457 break; 458 + 459 + writeb(ch, &up->regs->w.xfifo[i]); 460 } 461 462 /* Issue a Transmit Frame command. */
+8 -7
drivers/tty/serial/sunsu.c
··· 396 397 static void transmit_chars(struct uart_sunsu_port *up) 398 { 399 - struct circ_buf *xmit = &up->port.state->xmit; 400 int count; 401 402 if (up->port.x_char) { ··· 410 sunsu_stop_tx(&up->port); 411 return; 412 } 413 - if (uart_circ_empty(xmit)) { 414 __stop_tx(up); 415 return; 416 } 417 418 count = up->port.fifosize; 419 do { 420 - serial_out(up, UART_TX, xmit->buf[xmit->tail]); 421 - uart_xmit_advance(&up->port, 1); 422 - if (uart_circ_empty(xmit)) 423 break; 424 } while (--count > 0); 425 426 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 427 uart_write_wakeup(&up->port); 428 429 - if (uart_circ_empty(xmit)) 430 __stop_tx(up); 431 } 432
··· 396 397 static void transmit_chars(struct uart_sunsu_port *up) 398 { 399 + struct tty_port *tport = &up->port.state->port; 400 + unsigned char ch; 401 int count; 402 403 if (up->port.x_char) { ··· 409 sunsu_stop_tx(&up->port); 410 return; 411 } 412 + if (kfifo_is_empty(&tport->xmit_fifo)) { 413 __stop_tx(up); 414 return; 415 } 416 417 count = up->port.fifosize; 418 do { 419 + if (!uart_fifo_get(&up->port, &ch)) 420 break; 421 + 422 + serial_out(up, UART_TX, ch); 423 } while (--count > 0); 424 425 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 426 uart_write_wakeup(&up->port); 427 428 + if (kfifo_is_empty(&tport->xmit_fifo)) 429 __stop_tx(up); 430 } 431
+13 -14
drivers/tty/serial/sunzilog.c
··· 453 static void sunzilog_transmit_chars(struct uart_sunzilog_port *up, 454 struct zilog_channel __iomem *channel) 455 { 456 - struct circ_buf *xmit; 457 458 if (ZS_IS_CONS(up)) { 459 unsigned char status = readb(&channel->control); ··· 497 498 if (up->port.state == NULL) 499 goto ack_tx_int; 500 - xmit = &up->port.state->xmit; 501 - if (uart_circ_empty(xmit)) 502 - goto ack_tx_int; 503 504 if (uart_tx_stopped(&up->port)) 505 goto ack_tx_int; 506 507 up->flags |= SUNZILOG_FLAG_TX_ACTIVE; 508 - writeb(xmit->buf[xmit->tail], &channel->data); 509 ZSDELAY(); 510 ZS_WSYNC(channel); 511 512 - uart_xmit_advance(&up->port, 1); 513 - 514 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 515 uart_write_wakeup(&up->port); 516 517 return; ··· 700 port->icount.tx++; 701 port->x_char = 0; 702 } else { 703 - struct circ_buf *xmit = &port->state->xmit; 704 705 - if (uart_circ_empty(xmit)) 706 return; 707 - writeb(xmit->buf[xmit->tail], &channel->data); 708 ZSDELAY(); 709 ZS_WSYNC(channel); 710 711 - uart_xmit_advance(port, 1); 712 - 713 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 714 uart_write_wakeup(&up->port); 715 } 716 }
··· 453 static void sunzilog_transmit_chars(struct uart_sunzilog_port *up, 454 struct zilog_channel __iomem *channel) 455 { 456 + struct tty_port *tport; 457 + unsigned char ch; 458 459 if (ZS_IS_CONS(up)) { 460 unsigned char status = readb(&channel->control); ··· 496 497 if (up->port.state == NULL) 498 goto ack_tx_int; 499 + tport = &up->port.state->port; 500 501 if (uart_tx_stopped(&up->port)) 502 goto ack_tx_int; 503 504 + if (!uart_fifo_get(&up->port, &ch)) 505 + goto ack_tx_int; 506 + 507 up->flags |= SUNZILOG_FLAG_TX_ACTIVE; 508 + writeb(ch, &channel->data); 509 ZSDELAY(); 510 ZS_WSYNC(channel); 511 512 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 513 uart_write_wakeup(&up->port); 514 515 return; ··· 700 port->icount.tx++; 701 port->x_char = 0; 702 } else { 703 + struct tty_port *tport = &port->state->port; 704 + unsigned char ch; 705 706 + if (!uart_fifo_get(&up->port, &ch)) 707 return; 708 + writeb(ch, &channel->data); 709 ZSDELAY(); 710 ZS_WSYNC(channel); 711 712 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 713 uart_write_wakeup(&up->port); 714 } 715 }
+6 -4
drivers/tty/serial/tegra-tcu.c
··· 91 static void tegra_tcu_uart_start_tx(struct uart_port *port) 92 { 93 struct tegra_tcu *tcu = port->private_data; 94 - struct circ_buf *xmit = &port->state->xmit; 95 - unsigned long count; 96 97 for (;;) { 98 - count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE); 99 if (!count) 100 break; 101 102 - tegra_tcu_write(tcu, &xmit->buf[xmit->tail], count); 103 uart_xmit_advance(port, count); 104 } 105
··· 91 static void tegra_tcu_uart_start_tx(struct uart_port *port) 92 { 93 struct tegra_tcu *tcu = port->private_data; 94 + struct tty_port *tport = &port->state->port; 95 + unsigned char *tail; 96 + unsigned int count; 97 98 for (;;) { 99 + count = kfifo_out_linear_ptr(&tport->xmit_fifo, &tail, 100 + UART_XMIT_SIZE); 101 if (!count) 102 break; 103 104 + tegra_tcu_write(tcu, tail, count); 105 uart_xmit_advance(port, count); 106 } 107
+7 -10
drivers/tty/serial/timbuart.c
··· 95 96 static void timbuart_tx_chars(struct uart_port *port) 97 { 98 - struct circ_buf *xmit = &port->state->xmit; 99 100 while (!(ioread32(port->membase + TIMBUART_ISR) & TXBF) && 101 - !uart_circ_empty(xmit)) { 102 - iowrite8(xmit->buf[xmit->tail], 103 - port->membase + TIMBUART_TXFIFO); 104 - uart_xmit_advance(port, 1); 105 - } 106 107 dev_dbg(port->dev, 108 "%s - total written %d bytes, CTL: %x, RTS: %x, baud: %x\n", ··· 114 { 115 struct timbuart_port *uart = 116 container_of(port, struct timbuart_port, port); 117 - struct circ_buf *xmit = &port->state->xmit; 118 119 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) 120 return; 121 122 if (port->x_char) ··· 127 /* clear all TX interrupts */ 128 iowrite32(TXFLAGS, port->membase + TIMBUART_ISR); 129 130 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 131 uart_write_wakeup(port); 132 } else 133 /* Re-enable any tx interrupt */ ··· 138 * we wake up the upper layer later when we got the interrupt 139 * to give it some time to go out... 140 */ 141 - if (!uart_circ_empty(xmit)) 142 *ier |= TXBAE; 143 144 dev_dbg(port->dev, "%s - leaving\n", __func__);
··· 95 96 static void timbuart_tx_chars(struct uart_port *port) 97 { 98 + unsigned char ch; 99 100 while (!(ioread32(port->membase + TIMBUART_ISR) & TXBF) && 101 + uart_fifo_get(port, &ch)) 102 + iowrite8(ch, port->membase + TIMBUART_TXFIFO); 103 104 dev_dbg(port->dev, 105 "%s - total written %d bytes, CTL: %x, RTS: %x, baud: %x\n", ··· 117 { 118 struct timbuart_port *uart = 119 container_of(port, struct timbuart_port, port); 120 + struct tty_port *tport = &port->state->port; 121 122 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) 123 return; 124 125 if (port->x_char) ··· 130 /* clear all TX interrupts */ 131 iowrite32(TXFLAGS, port->membase + TIMBUART_ISR); 132 133 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 134 uart_write_wakeup(port); 135 } else 136 /* Re-enable any tx interrupt */ ··· 141 * we wake up the upper layer later when we got the interrupt 142 * to give it some time to go out... 143 */ 144 + if (!kfifo_is_empty(&tport->xmit_fifo)) 145 *ier |= TXBAE; 146 147 dev_dbg(port->dev, "%s - leaving\n", __func__);
+8 -5
drivers/tty/serial/uartlite.c
··· 189 190 static int ulite_transmit(struct uart_port *port, int stat) 191 { 192 - struct circ_buf *xmit = &port->state->xmit; 193 194 if (stat & ULITE_STATUS_TXFULL) 195 return 0; ··· 202 return 1; 203 } 204 205 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) 206 return 0; 207 208 - uart_out32(xmit->buf[xmit->tail], ULITE_TX, port); 209 - uart_xmit_advance(port, 1); 210 211 /* wake up */ 212 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 213 uart_write_wakeup(port); 214 215 return 1;
··· 189 190 static int ulite_transmit(struct uart_port *port, int stat) 191 { 192 + struct tty_port *tport = &port->state->port; 193 + unsigned char ch; 194 195 if (stat & ULITE_STATUS_TXFULL) 196 return 0; ··· 201 return 1; 202 } 203 204 + if (uart_tx_stopped(port)) 205 return 0; 206 207 + if (!uart_fifo_get(port, &ch)) 208 + return 0; 209 + 210 + uart_out32(ch, ULITE_TX, port); 211 212 /* wake up */ 213 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 214 uart_write_wakeup(port); 215 216 return 1;
+7 -13
drivers/tty/serial/ucc_uart.c
··· 334 unsigned char *p; 335 unsigned int count; 336 struct uart_port *port = &qe_port->port; 337 - struct circ_buf *xmit = &port->state->xmit; 338 339 /* Handle xon/xoff */ 340 if (port->x_char) { ··· 358 return 1; 359 } 360 361 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 362 qe_uart_stop_tx(port); 363 return 0; 364 } ··· 366 /* Pick next descriptor and fill from buffer */ 367 bdp = qe_port->tx_cur; 368 369 - while (!(ioread16be(&bdp->status) & BD_SC_READY) && !uart_circ_empty(xmit)) { 370 - count = 0; 371 p = qe2cpu_addr(ioread32be(&bdp->buf), qe_port); 372 - while (count < qe_port->tx_fifosize) { 373 - *p++ = xmit->buf[xmit->tail]; 374 - uart_xmit_advance(port, 1); 375 - count++; 376 - if (uart_circ_empty(xmit)) 377 - break; 378 - } 379 380 iowrite16be(count, &bdp->length); 381 qe_setbits_be16(&bdp->status, BD_SC_READY); ··· 382 } 383 qe_port->tx_cur = bdp; 384 385 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 386 uart_write_wakeup(port); 387 388 - if (uart_circ_empty(xmit)) { 389 /* The kernel buffer is empty, so turn off TX interrupts. We 390 don't need to be told when the QE is finished transmitting 391 the data. */
··· 334 unsigned char *p; 335 unsigned int count; 336 struct uart_port *port = &qe_port->port; 337 + struct tty_port *tport = &port->state->port; 338 339 /* Handle xon/xoff */ 340 if (port->x_char) { ··· 358 return 1; 359 } 360 361 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 362 qe_uart_stop_tx(port); 363 return 0; 364 } ··· 366 /* Pick next descriptor and fill from buffer */ 367 bdp = qe_port->tx_cur; 368 369 + while (!(ioread16be(&bdp->status) & BD_SC_READY) && 370 + !kfifo_is_empty(&tport->xmit_fifo)) { 371 p = qe2cpu_addr(ioread32be(&bdp->buf), qe_port); 372 + count = uart_fifo_out(port, p, qe_port->tx_fifosize); 373 374 iowrite16be(count, &bdp->length); 375 qe_setbits_be16(&bdp->status, BD_SC_READY); ··· 388 } 389 qe_port->tx_cur = bdp; 390 391 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 392 uart_write_wakeup(port); 393 394 + if (kfifo_is_empty(&tport->xmit_fifo)) { 395 /* The kernel buffer is empty, so turn off TX interrupts. We 396 don't need to be told when the QE is finished transmitting 397 the data. */
+25 -10
drivers/tty/serial/xilinx_uartps.c
··· 25 #include <linux/gpio.h> 26 #include <linux/gpio/consumer.h> 27 #include <linux/delay.h> 28 29 #define CDNS_UART_TTY_NAME "ttyPS" 30 #define CDNS_UART_NAME "xuartps" ··· 199 * @gpiod_rts: Pointer to the gpio descriptor 200 * @rs485_tx_started: RS485 tx state 201 * @tx_timer: Timer for tx 202 */ 203 struct cdns_uart { 204 struct uart_port *port; ··· 213 struct gpio_desc *gpiod_rts; 214 bool rs485_tx_started; 215 struct hrtimer tx_timer; 216 }; 217 struct cdns_platform_data { 218 u32 quirks; ··· 428 { 429 struct uart_port *port = (struct uart_port *)dev_id; 430 struct cdns_uart *cdns_uart = port->private_data; 431 - struct circ_buf *xmit = &port->state->xmit; 432 unsigned int numbytes; 433 434 - if (uart_circ_empty(xmit) || uart_tx_stopped(port)) { 435 /* Disable the TX Empty interrupt */ 436 writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_IDR); 437 return; 438 } 439 440 numbytes = port->fifosize; 441 - while (numbytes && !uart_circ_empty(xmit) && 442 - !(readl(port->membase + CDNS_UART_SR) & CDNS_UART_SR_TXFULL)) { 443 - 444 - writel(xmit->buf[xmit->tail], port->membase + CDNS_UART_FIFO); 445 - uart_xmit_advance(port, 1); 446 numbytes--; 447 } 448 449 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 450 uart_write_wakeup(port); 451 452 /* Enable the TX Empty interrupt */ 453 writel(CDNS_UART_IXR_TXEMPTY, cdns_uart->port->membase + CDNS_UART_IER); 454 455 if (cdns_uart->port->rs485.flags & SER_RS485_ENABLED && 456 - (uart_circ_empty(xmit) || uart_tx_stopped(port))) { 457 cdns_uart->tx_timer.function = &cdns_rs485_rx_callback; 458 hrtimer_start(&cdns_uart->tx_timer, 459 ns_to_ktime(cdns_calc_after_tx_delay(cdns_uart)), HRTIMER_MODE_REL); ··· 726 status |= CDNS_UART_CR_TX_EN; 727 writel(status, port->membase + CDNS_UART_CR); 728 729 - if (uart_circ_empty(&port->state->xmit)) 730 return; 731 732 /* Clear the TX Empty interrupt */ ··· 950 unsigned int status = 0; 951 952 is_brk_support = cdns_uart->quirks & CDNS_UART_RXBS_SUPPORT; 953 954 uart_port_lock_irqsave(port, &flags); 955 ··· 1728 dev_err(&pdev->dev, "clock name 'ref_clk' is deprecated.\n"); 1729 } 1730 1731 rc = clk_prepare_enable(cdns_uart_data->pclk); 1732 if (rc) { 1733 dev_err(&pdev->dev, "Unable to enable pclk clock.\n"); ··· 1895 if (console_port == port) 1896 console_port = NULL; 1897 #endif 1898 1899 if (!--instances) 1900 uart_unregister_driver(cdns_uart_data->cdns_uart_driver);
··· 25 #include <linux/gpio.h> 26 #include <linux/gpio/consumer.h> 27 #include <linux/delay.h> 28 + #include <linux/reset.h> 29 30 #define CDNS_UART_TTY_NAME "ttyPS" 31 #define CDNS_UART_NAME "xuartps" ··· 198 * @gpiod_rts: Pointer to the gpio descriptor 199 * @rs485_tx_started: RS485 tx state 200 * @tx_timer: Timer for tx 201 + * @rstc: Pointer to the reset control 202 */ 203 struct cdns_uart { 204 struct uart_port *port; ··· 211 struct gpio_desc *gpiod_rts; 212 bool rs485_tx_started; 213 struct hrtimer tx_timer; 214 + struct reset_control *rstc; 215 }; 216 struct cdns_platform_data { 217 u32 quirks; ··· 425 { 426 struct uart_port *port = (struct uart_port *)dev_id; 427 struct cdns_uart *cdns_uart = port->private_data; 428 + struct tty_port *tport = &port->state->port; 429 unsigned int numbytes; 430 + unsigned char ch; 431 432 + if (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port)) { 433 /* Disable the TX Empty interrupt */ 434 writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_IDR); 435 return; 436 } 437 438 numbytes = port->fifosize; 439 + while (numbytes && 440 + !(readl(port->membase + CDNS_UART_SR) & CDNS_UART_SR_TXFULL) && 441 + uart_fifo_get(port, &ch)) { 442 + writel(ch, port->membase + CDNS_UART_FIFO); 443 numbytes--; 444 } 445 446 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 447 uart_write_wakeup(port); 448 449 /* Enable the TX Empty interrupt */ 450 writel(CDNS_UART_IXR_TXEMPTY, cdns_uart->port->membase + CDNS_UART_IER); 451 452 if (cdns_uart->port->rs485.flags & SER_RS485_ENABLED && 453 + (kfifo_is_empty(&tport->xmit_fifo) || uart_tx_stopped(port))) { 454 cdns_uart->tx_timer.function = &cdns_rs485_rx_callback; 455 hrtimer_start(&cdns_uart->tx_timer, 456 ns_to_ktime(cdns_calc_after_tx_delay(cdns_uart)), HRTIMER_MODE_REL); ··· 723 status |= CDNS_UART_CR_TX_EN; 724 writel(status, port->membase + CDNS_UART_CR); 725 726 + if (kfifo_is_empty(&port->state->port.xmit_fifo)) 727 return; 728 729 /* Clear the TX Empty interrupt */ ··· 947 unsigned int status = 0; 948 949 is_brk_support = cdns_uart->quirks & CDNS_UART_RXBS_SUPPORT; 950 + 951 + ret = reset_control_deassert(cdns_uart->rstc); 952 + if (ret) 953 + return ret; 954 955 uart_port_lock_irqsave(port, &flags); 956 ··· 1721 dev_err(&pdev->dev, "clock name 'ref_clk' is deprecated.\n"); 1722 } 1723 1724 + cdns_uart_data->rstc = devm_reset_control_get_optional_exclusive(&pdev->dev, NULL); 1725 + if (IS_ERR(cdns_uart_data->rstc)) { 1726 + rc = PTR_ERR(cdns_uart_data->rstc); 1727 + dev_err_probe(&pdev->dev, rc, "Cannot get UART reset\n"); 1728 + goto err_out_unregister_driver; 1729 + } 1730 + 1731 rc = clk_prepare_enable(cdns_uart_data->pclk); 1732 if (rc) { 1733 dev_err(&pdev->dev, "Unable to enable pclk clock.\n"); ··· 1881 if (console_port == port) 1882 console_port = NULL; 1883 #endif 1884 + reset_control_assert(cdns_uart_data->rstc); 1885 1886 if (!--instances) 1887 uart_unregister_driver(cdns_uart_data->cdns_uart_driver);
+7 -6
drivers/tty/serial/zs.c
··· 606 607 static void zs_raw_transmit_chars(struct zs_port *zport) 608 { 609 - struct circ_buf *xmit = &zport->port.state->xmit; 610 611 /* XON/XOFF chars. */ 612 if (zport->port.x_char) { ··· 618 } 619 620 /* If nothing to do or stopped or hardware stopped. */ 621 - if (uart_circ_empty(xmit) || uart_tx_stopped(&zport->port)) { 622 zs_raw_stop_tx(zport); 623 return; 624 } 625 626 /* Send char. */ 627 - write_zsdata(zport, xmit->buf[xmit->tail]); 628 - uart_xmit_advance(&zport->port, 1); 629 630 - if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 631 uart_write_wakeup(&zport->port); 632 633 /* Are we are done? */ 634 - if (uart_circ_empty(xmit)) 635 zs_raw_stop_tx(zport); 636 } 637
··· 606 607 static void zs_raw_transmit_chars(struct zs_port *zport) 608 { 609 + struct tty_port *tport = &zport->port.state->port; 610 + unsigned char ch; 611 612 /* XON/XOFF chars. */ 613 if (zport->port.x_char) { ··· 617 } 618 619 /* If nothing to do or stopped or hardware stopped. */ 620 + if (uart_tx_stopped(&zport->port) || 621 + !uart_fifo_get(&zport->port, &ch)) { 622 zs_raw_stop_tx(zport); 623 return; 624 } 625 626 /* Send char. */ 627 + write_zsdata(zport, ch); 628 629 + if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS) 630 uart_write_wakeup(&zport->port); 631 632 /* Are we are done? */ 633 + if (kfifo_is_empty(&tport->xmit_fifo)) 634 zs_raw_stop_tx(zport); 635 } 636
+12 -1
drivers/tty/sysrq.c
··· 450 .enable_mask = SYSRQ_ENABLE_RTNICE, 451 }; 452 453 /* Key Operations table and lock */ 454 static DEFINE_SPINLOCK(sysrq_key_table_lock); 455 ··· 530 NULL, /* O */ 531 NULL, /* P */ 532 NULL, /* Q */ 533 - NULL, /* R */ 534 NULL, /* S */ 535 NULL, /* T */ 536 NULL, /* U */
··· 450 .enable_mask = SYSRQ_ENABLE_RTNICE, 451 }; 452 453 + static void sysrq_handle_replay_logs(u8 key) 454 + { 455 + console_replay_all(); 456 + } 457 + static struct sysrq_key_op sysrq_replay_logs_op = { 458 + .handler = sysrq_handle_replay_logs, 459 + .help_msg = "replay-kernel-logs(R)", 460 + .action_msg = "Replay kernel logs on consoles", 461 + .enable_mask = SYSRQ_ENABLE_DUMP, 462 + }; 463 + 464 /* Key Operations table and lock */ 465 static DEFINE_SPINLOCK(sysrq_key_table_lock); 466 ··· 519 NULL, /* O */ 520 NULL, /* P */ 521 NULL, /* Q */ 522 + &sysrq_replay_logs_op, /* R */ 523 NULL, /* S */ 524 NULL, /* T */ 525 NULL, /* U */
+6
drivers/tty/tty_ldisc.c
··· 545 goto out; 546 } 547 548 old_ldisc = tty->ldisc; 549 550 /* Shutdown the old discipline. */
··· 545 goto out; 546 } 547 548 + if (tty->ops->ldisc_ok) { 549 + retval = tty->ops->ldisc_ok(tty, disc); 550 + if (retval) 551 + goto out; 552 + } 553 + 554 old_ldisc = tty->ldisc; 555 556 /* Shutdown the old discipline. */
+13 -2
drivers/tty/vt/conmakehash.c
··· 76 int main(int argc, char *argv[]) 77 { 78 FILE *ctbl; 79 - char *tblname; 80 char buffer[65536]; 81 int fontlen; 82 int i, nuni, nent; ··· 101 exit(EX_NOINPUT); 102 } 103 } 104 105 /* For now we assume the default font is always 256 characters. */ 106 fontlen = 256; ··· 264 #include <linux/types.h>\n\ 265 \n\ 266 u8 dfont_unicount[%d] = \n\ 267 - {\n\t", argv[1], fontlen); 268 269 for ( i = 0 ; i < fontlen ; i++ ) 270 {
··· 76 int main(int argc, char *argv[]) 77 { 78 FILE *ctbl; 79 + const char *tblname, *rel_tblname; 80 + const char *abs_srctree; 81 char buffer[65536]; 82 int fontlen; 83 int i, nuni, nent; ··· 100 exit(EX_NOINPUT); 101 } 102 } 103 + 104 + abs_srctree = getenv("abs_srctree"); 105 + if (abs_srctree && !strncmp(abs_srctree, tblname, strlen(abs_srctree))) 106 + { 107 + rel_tblname = tblname + strlen(abs_srctree); 108 + while (*rel_tblname == '/') 109 + ++rel_tblname; 110 + } 111 + else 112 + rel_tblname = tblname; 113 114 /* For now we assume the default font is always 256 characters. */ 115 fontlen = 256; ··· 253 #include <linux/types.h>\n\ 254 \n\ 255 u8 dfont_unicount[%d] = \n\ 256 + {\n\t", rel_tblname, fontlen); 257 258 for ( i = 0 ; i < fontlen ; i++ ) 259 {
+10
drivers/tty/vt/vt.c
··· 3576 tty_port_put(&vc->port); 3577 } 3578 3579 static int default_color = 7; /* white */ 3580 static int default_italic_color = 2; // green (ASCII) 3581 static int default_underline_color = 3; // cyan (ASCII) ··· 3704 .resize = vt_resize, 3705 .shutdown = con_shutdown, 3706 .cleanup = con_cleanup, 3707 }; 3708 3709 static struct cdev vc0_cdev;
··· 3576 tty_port_put(&vc->port); 3577 } 3578 3579 + /* 3580 + * We can't deal with anything but the N_TTY ldisc, 3581 + * because we can sleep in our write() routine. 3582 + */ 3583 + static int con_ldisc_ok(struct tty_struct *tty, int ldisc) 3584 + { 3585 + return ldisc == N_TTY ? 0 : -EINVAL; 3586 + } 3587 + 3588 static int default_color = 7; /* white */ 3589 static int default_italic_color = 2; // green (ASCII) 3590 static int default_underline_color = 3; // cyan (ASCII) ··· 3695 .resize = vt_resize, 3696 .shutdown = con_shutdown, 3697 .cleanup = con_cleanup, 3698 + .ldisc_ok = con_ldisc_ok, 3699 }; 3700 3701 static struct cdev vc0_cdev;
+108 -35
include/linux/kfifo.h
··· 37 */ 38 39 #include <linux/array_size.h> 40 #include <linux/spinlock.h> 41 #include <linux/stddef.h> 42 #include <linux/types.h> ··· 310 ) 311 312 /** 313 - * kfifo_skip - skip output data 314 * @fifo: address of the fifo to be used 315 */ 316 - #define kfifo_skip(fifo) \ 317 - (void)({ \ 318 typeof((fifo) + 1) __tmp = (fifo); \ 319 const size_t __recsize = sizeof(*__tmp->rectype); \ 320 struct __kfifo *__kfifo = &__tmp->kfifo; \ 321 if (__recsize) \ 322 __kfifo_skip_r(__kfifo, __recsize); \ 323 else \ 324 - __kfifo->out++; \ 325 - }) 326 327 /** 328 * kfifo_peek_len - gets the size of the next fifo record ··· 590 * @buf: pointer to the storage buffer 591 * @n: max. number of elements to get 592 * 593 - * This macro get some data from the fifo and return the numbers of elements 594 * copied. 595 * 596 * Note that with only one concurrent reader and one concurrent ··· 617 * @n: max. number of elements to get 618 * @lock: pointer to the spinlock to use for locking 619 * 620 - * This macro get the data from the fifo and return the numbers of elements 621 * copied. 622 */ 623 #define kfifo_out_spinlocked(fifo, buf, n, lock) \ ··· 715 ) 716 717 /** 718 - * kfifo_dma_in_prepare - setup a scatterlist for DMA input 719 * @fifo: address of the fifo to be used 720 * @sgl: pointer to the scatterlist array 721 * @nents: number of entries in the scatterlist array 722 * @len: number of elements to transfer 723 * 724 * This macro fills a scatterlist for DMA input. 725 * It returns the number entries in the scatterlist array. ··· 728 * Note that with only one concurrent reader and one concurrent 729 * writer, you don't need extra locking to use these macros. 730 */ 731 - #define kfifo_dma_in_prepare(fifo, sgl, nents, len) \ 732 ({ \ 733 typeof((fifo) + 1) __tmp = (fifo); \ 734 struct scatterlist *__sgl = (sgl); \ ··· 737 const size_t __recsize = sizeof(*__tmp->rectype); \ 738 struct __kfifo *__kfifo = &__tmp->kfifo; \ 739 (__recsize) ? \ 740 - __kfifo_dma_in_prepare_r(__kfifo, __sgl, __nents, __len, __recsize) : \ 741 - __kfifo_dma_in_prepare(__kfifo, __sgl, __nents, __len); \ 742 }) 743 744 /** 745 * kfifo_dma_in_finish - finish a DMA IN operation 746 * @fifo: address of the fifo to be used 747 * @len: number of bytes to received 748 * 749 - * This macro finish a DMA IN operation. The in counter will be updated by 750 * the len parameter. No error checking will be done. 751 * 752 * Note that with only one concurrent reader and one concurrent ··· 769 }) 770 771 /** 772 - * kfifo_dma_out_prepare - setup a scatterlist for DMA output 773 * @fifo: address of the fifo to be used 774 * @sgl: pointer to the scatterlist array 775 * @nents: number of entries in the scatterlist array 776 * @len: number of elements to transfer 777 * 778 * This macro fills a scatterlist for DMA output which at most @len bytes 779 * to transfer. ··· 784 * Note that with only one concurrent reader and one concurrent 785 * writer, you don't need extra locking to use these macros. 786 */ 787 - #define kfifo_dma_out_prepare(fifo, sgl, nents, len) \ 788 ({ \ 789 typeof((fifo) + 1) __tmp = (fifo); \ 790 struct scatterlist *__sgl = (sgl); \ ··· 793 const size_t __recsize = sizeof(*__tmp->rectype); \ 794 struct __kfifo *__kfifo = &__tmp->kfifo; \ 795 (__recsize) ? 
\ 796 - __kfifo_dma_out_prepare_r(__kfifo, __sgl, __nents, __len, __recsize) : \ 797 - __kfifo_dma_out_prepare(__kfifo, __sgl, __nents, __len); \ 798 }) 799 800 /** 801 * kfifo_dma_out_finish - finish a DMA OUT operation 802 * @fifo: address of the fifo to be used 803 * @len: number of bytes transferred 804 * 805 - * This macro finish a DMA OUT operation. The out counter will be updated by 806 * the len parameter. No error checking will be done. 807 * 808 * Note that with only one concurrent reader and one concurrent 809 * writer, you don't need extra locking to use these macros. 810 */ 811 - #define kfifo_dma_out_finish(fifo, len) \ 812 - (void)({ \ 813 - typeof((fifo) + 1) __tmp = (fifo); \ 814 - unsigned int __len = (len); \ 815 - const size_t __recsize = sizeof(*__tmp->rectype); \ 816 - struct __kfifo *__kfifo = &__tmp->kfifo; \ 817 - if (__recsize) \ 818 - __kfifo_dma_out_finish_r(__kfifo, __recsize); \ 819 - else \ 820 - __kfifo->out += __len / sizeof(*__tmp->type); \ 821 - }) 822 823 /** 824 * kfifo_out_peek - gets some data from the fifo ··· 823 * @buf: pointer to the storage buffer 824 * @n: max. number of elements to get 825 * 826 - * This macro get the data from the fifo and return the numbers of elements 827 * copied. The data is not removed from the fifo. 828 * 829 * Note that with only one concurrent reader and one concurrent ··· 842 __kfifo_out_peek(__kfifo, __buf, __n); \ 843 }) \ 844 ) 845 846 extern int __kfifo_alloc(struct __kfifo *fifo, unsigned int size, 847 size_t esize, gfp_t gfp_mask); ··· 921 void __user *to, unsigned long len, unsigned int *copied); 922 923 extern unsigned int __kfifo_dma_in_prepare(struct __kfifo *fifo, 924 - struct scatterlist *sgl, int nents, unsigned int len); 925 926 extern unsigned int __kfifo_dma_out_prepare(struct __kfifo *fifo, 927 - struct scatterlist *sgl, int nents, unsigned int len); 928 929 extern unsigned int __kfifo_out_peek(struct __kfifo *fifo, 930 void *buf, unsigned int len); 931 932 extern unsigned int __kfifo_in_r(struct __kfifo *fifo, 933 const void *buf, unsigned int len, size_t recsize); ··· 946 unsigned long len, unsigned int *copied, size_t recsize); 947 948 extern unsigned int __kfifo_dma_in_prepare_r(struct __kfifo *fifo, 949 - struct scatterlist *sgl, int nents, unsigned int len, size_t recsize); 950 951 extern void __kfifo_dma_in_finish_r(struct __kfifo *fifo, 952 unsigned int len, size_t recsize); 953 954 extern unsigned int __kfifo_dma_out_prepare_r(struct __kfifo *fifo, 955 - struct scatterlist *sgl, int nents, unsigned int len, size_t recsize); 956 - 957 - extern void __kfifo_dma_out_finish_r(struct __kfifo *fifo, size_t recsize); 958 959 extern unsigned int __kfifo_len_r(struct __kfifo *fifo, size_t recsize); 960 ··· 962 963 extern unsigned int __kfifo_out_peek_r(struct __kfifo *fifo, 964 void *buf, unsigned int len, size_t recsize); 965 966 extern unsigned int __kfifo_max_r(unsigned int len, size_t recsize); 967
··· 37 */ 38 39 #include <linux/array_size.h> 40 + #include <linux/dma-mapping.h> 41 #include <linux/spinlock.h> 42 #include <linux/stddef.h> 43 #include <linux/types.h> ··· 309 ) 310 311 /** 312 + * kfifo_skip_count - skip output data 313 * @fifo: address of the fifo to be used 314 + * @count: count of data to skip 315 */ 316 + #define kfifo_skip_count(fifo, count) do { \ 317 typeof((fifo) + 1) __tmp = (fifo); \ 318 const size_t __recsize = sizeof(*__tmp->rectype); \ 319 struct __kfifo *__kfifo = &__tmp->kfifo; \ 320 if (__recsize) \ 321 __kfifo_skip_r(__kfifo, __recsize); \ 322 else \ 323 + __kfifo->out += (count); \ 324 + } while(0) 325 + 326 + /** 327 + * kfifo_skip - skip output data 328 + * @fifo: address of the fifo to be used 329 + */ 330 + #define kfifo_skip(fifo) kfifo_skip_count(fifo, 1) 331 332 /** 333 * kfifo_peek_len - gets the size of the next fifo record ··· 583 * @buf: pointer to the storage buffer 584 * @n: max. number of elements to get 585 * 586 + * This macro gets some data from the fifo and returns the numbers of elements 587 * copied. 588 * 589 * Note that with only one concurrent reader and one concurrent ··· 610 * @n: max. number of elements to get 611 * @lock: pointer to the spinlock to use for locking 612 * 613 + * This macro gets the data from the fifo and returns the numbers of elements 614 * copied. 615 */ 616 #define kfifo_out_spinlocked(fifo, buf, n, lock) \ ··· 708 ) 709 710 /** 711 + * kfifo_dma_in_prepare_mapped - setup a scatterlist for DMA input 712 * @fifo: address of the fifo to be used 713 * @sgl: pointer to the scatterlist array 714 * @nents: number of entries in the scatterlist array 715 * @len: number of elements to transfer 716 + * @dma: mapped dma address to fill into @sgl 717 * 718 * This macro fills a scatterlist for DMA input. 719 * It returns the number entries in the scatterlist array. ··· 720 * Note that with only one concurrent reader and one concurrent 721 * writer, you don't need extra locking to use these macros. 722 */ 723 + #define kfifo_dma_in_prepare_mapped(fifo, sgl, nents, len, dma) \ 724 ({ \ 725 typeof((fifo) + 1) __tmp = (fifo); \ 726 struct scatterlist *__sgl = (sgl); \ ··· 729 const size_t __recsize = sizeof(*__tmp->rectype); \ 730 struct __kfifo *__kfifo = &__tmp->kfifo; \ 731 (__recsize) ? \ 732 + __kfifo_dma_in_prepare_r(__kfifo, __sgl, __nents, __len, __recsize, \ 733 + dma) : \ 734 + __kfifo_dma_in_prepare(__kfifo, __sgl, __nents, __len, dma); \ 735 }) 736 + 737 + #define kfifo_dma_in_prepare(fifo, sgl, nents, len) \ 738 + kfifo_dma_in_prepare_mapped(fifo, sgl, nents, len, DMA_MAPPING_ERROR) 739 740 /** 741 * kfifo_dma_in_finish - finish a DMA IN operation 742 * @fifo: address of the fifo to be used 743 * @len: number of bytes to received 744 * 745 + * This macro finishes a DMA IN operation. The in counter will be updated by 746 * the len parameter. No error checking will be done. 747 * 748 * Note that with only one concurrent reader and one concurrent ··· 757 }) 758 759 /** 760 + * kfifo_dma_out_prepare_mapped - setup a scatterlist for DMA output 761 * @fifo: address of the fifo to be used 762 * @sgl: pointer to the scatterlist array 763 * @nents: number of entries in the scatterlist array 764 * @len: number of elements to transfer 765 + * @dma: mapped dma address to fill into @sgl 766 * 767 * This macro fills a scatterlist for DMA output which at most @len bytes 768 * to transfer. ··· 771 * Note that with only one concurrent reader and one concurrent 772 * writer, you don't need extra locking to use these macros. 
773 */ 774 + #define kfifo_dma_out_prepare_mapped(fifo, sgl, nents, len, dma) \ 775 ({ \ 776 typeof((fifo) + 1) __tmp = (fifo); \ 777 struct scatterlist *__sgl = (sgl); \ ··· 780 const size_t __recsize = sizeof(*__tmp->rectype); \ 781 struct __kfifo *__kfifo = &__tmp->kfifo; \ 782 (__recsize) ? \ 783 + __kfifo_dma_out_prepare_r(__kfifo, __sgl, __nents, __len, __recsize, \ 784 + dma) : \ 785 + __kfifo_dma_out_prepare(__kfifo, __sgl, __nents, __len, dma); \ 786 }) 787 + 788 + #define kfifo_dma_out_prepare(fifo, sgl, nents, len) \ 789 + kfifo_dma_out_prepare_mapped(fifo, sgl, nents, len, DMA_MAPPING_ERROR) 790 791 /** 792 * kfifo_dma_out_finish - finish a DMA OUT operation 793 * @fifo: address of the fifo to be used 794 * @len: number of bytes transferred 795 * 796 + * This macro finishes a DMA OUT operation. The out counter will be updated by 797 * the len parameter. No error checking will be done. 798 * 799 * Note that with only one concurrent reader and one concurrent 800 * writer, you don't need extra locking to use these macros. 801 */ 802 + #define kfifo_dma_out_finish(fifo, len) do { \ 803 + typeof((fifo) + 1) ___tmp = (fifo); \ 804 + kfifo_skip_count(___tmp, (len) / sizeof(*___tmp->type)); \ 805 + } while (0) 806 807 /** 808 * kfifo_out_peek - gets some data from the fifo ··· 813 * @buf: pointer to the storage buffer 814 * @n: max. number of elements to get 815 * 816 + * This macro gets the data from the fifo and returns the numbers of elements 817 * copied. The data is not removed from the fifo. 818 * 819 * Note that with only one concurrent reader and one concurrent ··· 832 __kfifo_out_peek(__kfifo, __buf, __n); \ 833 }) \ 834 ) 835 + 836 + /** 837 + * kfifo_out_linear - gets a tail of/offset to available data 838 + * @fifo: address of the fifo to be used 839 + * @tail: pointer to an unsigned int to store the value of tail 840 + * @n: max. number of elements to point at 841 + * 842 + * This macro obtains the offset (tail) to the available data in the fifo 843 + * buffer and returns the 844 + * numbers of elements available. It returns the available count till the end 845 + * of data or till the end of the buffer. So that it can be used for linear 846 + * data processing (like memcpy() of (@fifo->data + @tail) with count 847 + * returned). 848 + * 849 + * Note that with only one concurrent reader and one concurrent 850 + * writer, you don't need extra locking to use these macro. 851 + */ 852 + #define kfifo_out_linear(fifo, tail, n) \ 853 + __kfifo_uint_must_check_helper( \ 854 + ({ \ 855 + typeof((fifo) + 1) __tmp = (fifo); \ 856 + unsigned int *__tail = (tail); \ 857 + unsigned long __n = (n); \ 858 + const size_t __recsize = sizeof(*__tmp->rectype); \ 859 + struct __kfifo *__kfifo = &__tmp->kfifo; \ 860 + (__recsize) ? \ 861 + __kfifo_out_linear_r(__kfifo, __tail, __n, __recsize) : \ 862 + __kfifo_out_linear(__kfifo, __tail, __n); \ 863 + }) \ 864 + ) 865 + 866 + /** 867 + * kfifo_out_linear_ptr - gets a pointer to the available data 868 + * @fifo: address of the fifo to be used 869 + * @ptr: pointer to data to store the pointer to tail 870 + * @n: max. number of elements to point at 871 + * 872 + * Similarly to kfifo_out_linear(), this macro obtains the pointer to the 873 + * available data in the fifo buffer and returns the numbers of elements 874 + * available. It returns the available count till the end of available data or 875 + * till the end of the buffer. So that it can be used for linear data 876 + * processing (like memcpy() of @ptr with count returned). 
877 + * 878 + * Note that with only one concurrent reader and one concurrent 879 + * writer, you don't need extra locking to use these macro. 880 + */ 881 + #define kfifo_out_linear_ptr(fifo, ptr, n) \ 882 + __kfifo_uint_must_check_helper( \ 883 + ({ \ 884 + typeof((fifo) + 1) ___tmp = (fifo); \ 885 + unsigned int ___tail; \ 886 + unsigned int ___n = kfifo_out_linear(___tmp, &___tail, (n)); \ 887 + *(ptr) = ___tmp->kfifo.data + ___tail * kfifo_esize(___tmp); \ 888 + ___n; \ 889 + }) \ 890 + ) 891 + 892 893 extern int __kfifo_alloc(struct __kfifo *fifo, unsigned int size, 894 size_t esize, gfp_t gfp_mask); ··· 854 void __user *to, unsigned long len, unsigned int *copied); 855 856 extern unsigned int __kfifo_dma_in_prepare(struct __kfifo *fifo, 857 + struct scatterlist *sgl, int nents, unsigned int len, dma_addr_t dma); 858 859 extern unsigned int __kfifo_dma_out_prepare(struct __kfifo *fifo, 860 + struct scatterlist *sgl, int nents, unsigned int len, dma_addr_t dma); 861 862 extern unsigned int __kfifo_out_peek(struct __kfifo *fifo, 863 void *buf, unsigned int len); 864 + 865 + extern unsigned int __kfifo_out_linear(struct __kfifo *fifo, 866 + unsigned int *tail, unsigned int n); 867 868 extern unsigned int __kfifo_in_r(struct __kfifo *fifo, 869 const void *buf, unsigned int len, size_t recsize); ··· 876 unsigned long len, unsigned int *copied, size_t recsize); 877 878 extern unsigned int __kfifo_dma_in_prepare_r(struct __kfifo *fifo, 879 + struct scatterlist *sgl, int nents, unsigned int len, size_t recsize, 880 + dma_addr_t dma); 881 882 extern void __kfifo_dma_in_finish_r(struct __kfifo *fifo, 883 unsigned int len, size_t recsize); 884 885 extern unsigned int __kfifo_dma_out_prepare_r(struct __kfifo *fifo, 886 + struct scatterlist *sgl, int nents, unsigned int len, size_t recsize, 887 + dma_addr_t dma); 888 889 extern unsigned int __kfifo_len_r(struct __kfifo *fifo, size_t recsize); 890 ··· 892 893 extern unsigned int __kfifo_out_peek_r(struct __kfifo *fifo, 894 void *buf, unsigned int len, size_t recsize); 895 + 896 + extern unsigned int __kfifo_out_linear_r(struct __kfifo *fifo, 897 + unsigned int *tail, unsigned int n, size_t recsize); 898 899 extern unsigned int __kfifo_max_r(unsigned int len, size_t recsize); 900
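The linear-access helpers above are what the serial kfifo conversion leans on: a driver can borrow a pointer to the longest contiguous run of buffered data, push it to the hardware without an intermediate copy, and only consume it afterwards with kfifo_skip_count(). A minimal sketch of that pattern follows; the fifo, its size and the MMIO register are illustrative, not taken from any in-tree driver.

#include <linux/io.h>
#include <linux/kfifo.h>

static DEFINE_KFIFO(tx_fifo, unsigned char, 128);

/* Write the largest contiguous chunk straight from the fifo buffer,
 * then consume only what was actually pushed to the hardware. */
static void example_tx_linear(void __iomem *hw_fifo, unsigned int hw_room)
{
        unsigned char *data;
        unsigned int count, i;

        count = kfifo_out_linear_ptr(&tx_fifo, &data, hw_room);
        for (i = 0; i < count; i++)
                writeb(data[i], hw_fifo);

        kfifo_skip_count(&tx_fifo, count);
}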
+4
include/linux/pnp.h
··· 469 int pnp_register_driver(struct pnp_driver *drv); 470 void pnp_unregister_driver(struct pnp_driver *drv); 471 472 #else 473 474 /* device management */ ··· 501 static inline int compare_pnp_id(struct pnp_id *pos, const char *id) { return -ENODEV; } 502 static inline int pnp_register_driver(struct pnp_driver *drv) { return -ENODEV; } 503 static inline void pnp_unregister_driver(struct pnp_driver *drv) { } 504 505 #endif /* CONFIG_PNP */ 506
··· 469 int pnp_register_driver(struct pnp_driver *drv); 470 void pnp_unregister_driver(struct pnp_driver *drv); 471 472 + #define dev_is_pnp(d) ((d)->bus == &pnp_bus_type) 473 + 474 #else 475 476 /* device management */ ··· 499 static inline int compare_pnp_id(struct pnp_id *pos, const char *id) { return -ENODEV; } 500 static inline int pnp_register_driver(struct pnp_driver *drv) { return -ENODEV; } 501 static inline void pnp_unregister_driver(struct pnp_driver *drv) { } 502 + 503 + #define dev_is_pnp(d) false 504 505 #endif /* CONFIG_PNP */ 506
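dev_is_pnp() gives shared probe code a bus-type test that compiles down to false when CONFIG_PNP is disabled, so the PNP branch needs no #ifdef. A rough sketch; example_setup() and the log message are illustrative only.

#include <linux/device.h>
#include <linux/pnp.h>

static int example_setup(struct device *dev)
{
        if (dev_is_pnp(dev)) {
                struct pnp_dev *pdev = to_pnp_dev(dev);

                /* PNP path: resources would come from pnp_port_start(),
                 * pnp_irq() and friends. */
                dev_info(&pdev->dev, "bound via PNP\n");
                return 0;
        }

        /* Otherwise fall back to e.g. platform bus resources. */
        return 0;
}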
+7
include/linux/printk.h
··· 60 #define CONSOLE_LOGLEVEL_DEFAULT CONFIG_CONSOLE_LOGLEVEL_DEFAULT 61 #define CONSOLE_LOGLEVEL_QUIET CONFIG_CONSOLE_LOGLEVEL_QUIET 62 63 extern int console_printk[]; 64 65 #define console_loglevel (console_printk[0]) ··· 195 extern asmlinkage void dump_stack_lvl(const char *log_lvl) __cold; 196 extern asmlinkage void dump_stack(void) __cold; 197 void printk_trigger_flush(void); 198 #else 199 static inline __printf(1, 0) 200 int vprintk(const char *s, va_list args) ··· 273 { 274 } 275 static inline void printk_trigger_flush(void) 276 { 277 } 278 #endif
··· 60 #define CONSOLE_LOGLEVEL_DEFAULT CONFIG_CONSOLE_LOGLEVEL_DEFAULT 61 #define CONSOLE_LOGLEVEL_QUIET CONFIG_CONSOLE_LOGLEVEL_QUIET 62 63 + int add_preferred_console_match(const char *match, const char *name, 64 + const short idx); 65 + 66 extern int console_printk[]; 67 68 #define console_loglevel (console_printk[0]) ··· 192 extern asmlinkage void dump_stack_lvl(const char *log_lvl) __cold; 193 extern asmlinkage void dump_stack(void) __cold; 194 void printk_trigger_flush(void); 195 + void console_replay_all(void); 196 #else 197 static inline __printf(1, 0) 198 int vprintk(const char *s, va_list args) ··· 269 { 270 } 271 static inline void printk_trigger_flush(void) 272 + { 273 + } 274 + static inline void console_replay_all(void) 275 { 276 } 277 #endif
+31 -18
include/linux/serial_core.h
··· 11 #include <linux/compiler.h> 12 #include <linux/console.h> 13 #include <linux/interrupt.h> 14 - #include <linux/circ_buf.h> 15 #include <linux/spinlock.h> 16 #include <linux/sched.h> 17 #include <linux/tty.h> ··· 698 struct tty_port port; 699 700 enum uart_pm_state pm_state; 701 - struct circ_buf xmit; 702 703 atomic_t refcount; 704 wait_queue_head_t remove_wait; ··· 721 */ 722 static inline void uart_xmit_advance(struct uart_port *up, unsigned int chars) 723 { 724 - struct circ_buf *xmit = &up->state->xmit; 725 726 - xmit->tail = (xmit->tail + chars) & (UART_XMIT_SIZE - 1); 727 up->icount.tx += chars; 728 } 729 730 struct module; ··· 785 for_test, for_post) \ 786 ({ \ 787 struct uart_port *__port = (uport); \ 788 - struct circ_buf *xmit = &__port->state->xmit; \ 789 unsigned int pending; \ 790 \ 791 for (; (for_test) && (tx_ready); (for_post), __port->icount.tx++) { \ ··· 796 continue; \ 797 } \ 798 \ 799 - if (uart_circ_empty(xmit) || uart_tx_stopped(__port)) \ 800 break; \ 801 \ 802 - (ch) = xmit->buf[xmit->tail]; \ 803 (put_char); \ 804 - xmit->tail = (xmit->tail + 1) % UART_XMIT_SIZE; \ 805 } \ 806 \ 807 (tx_done); \ 808 \ 809 - pending = uart_circ_chars_pending(xmit); \ 810 if (pending < WAKEUP_CHARS) { \ 811 uart_write_wakeup(__port); \ 812 \ ··· 995 */ 996 int uart_suspend_port(struct uart_driver *reg, struct uart_port *port); 997 int uart_resume_port(struct uart_driver *reg, struct uart_port *port); 998 - 999 - #define uart_circ_empty(circ) ((circ)->head == (circ)->tail) 1000 - #define uart_circ_clear(circ) ((circ)->head = (circ)->tail = 0) 1001 - 1002 - #define uart_circ_chars_pending(circ) \ 1003 - (CIRC_CNT((circ)->head, (circ)->tail, UART_XMIT_SIZE)) 1004 - 1005 - #define uart_circ_chars_free(circ) \ 1006 - (CIRC_SPACE((circ)->head, (circ)->tail, UART_XMIT_SIZE)) 1007 1008 static inline int uart_tx_stopped(struct uart_port *port) 1009 {
··· 11 #include <linux/compiler.h> 12 #include <linux/console.h> 13 #include <linux/interrupt.h> 14 #include <linux/spinlock.h> 15 #include <linux/sched.h> 16 #include <linux/tty.h> ··· 699 struct tty_port port; 700 701 enum uart_pm_state pm_state; 702 703 atomic_t refcount; 704 wait_queue_head_t remove_wait; ··· 723 */ 724 static inline void uart_xmit_advance(struct uart_port *up, unsigned int chars) 725 { 726 + struct tty_port *tport = &up->state->port; 727 728 + kfifo_skip_count(&tport->xmit_fifo, chars); 729 up->icount.tx += chars; 730 + } 731 + 732 + static inline unsigned int uart_fifo_out(struct uart_port *up, 733 + unsigned char *buf, unsigned int chars) 734 + { 735 + struct tty_port *tport = &up->state->port; 736 + 737 + chars = kfifo_out(&tport->xmit_fifo, buf, chars); 738 + up->icount.tx += chars; 739 + 740 + return chars; 741 + } 742 + 743 + static inline unsigned int uart_fifo_get(struct uart_port *up, 744 + unsigned char *ch) 745 + { 746 + struct tty_port *tport = &up->state->port; 747 + unsigned int chars; 748 + 749 + chars = kfifo_get(&tport->xmit_fifo, ch); 750 + up->icount.tx += chars; 751 + 752 + return chars; 753 } 754 755 struct module; ··· 764 for_test, for_post) \ 765 ({ \ 766 struct uart_port *__port = (uport); \ 767 + struct tty_port *__tport = &__port->state->port; \ 768 unsigned int pending; \ 769 \ 770 for (; (for_test) && (tx_ready); (for_post), __port->icount.tx++) { \ ··· 775 continue; \ 776 } \ 777 \ 778 + if (uart_tx_stopped(__port)) \ 779 break; \ 780 \ 781 + if (!kfifo_get(&__tport->xmit_fifo, &(ch))) \ 782 + break; \ 783 + \ 784 (put_char); \ 785 } \ 786 \ 787 (tx_done); \ 788 \ 789 + pending = kfifo_len(&__tport->xmit_fifo); \ 790 if (pending < WAKEUP_CHARS) { \ 791 uart_write_wakeup(__port); \ 792 \ ··· 973 */ 974 int uart_suspend_port(struct uart_driver *reg, struct uart_port *port); 975 int uart_resume_port(struct uart_driver *reg, struct uart_port *port); 976 977 static inline int uart_tx_stopped(struct uart_port *port) 978 {
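With the circ_buf gone from uart_state, a driver's TX path reads straight from the tty_port's xmit_fifo through the helpers above. A sketch of how such a path might look; the writeb() on port->membase stands in for a real driver's TX-register access and is purely illustrative.

#include <linux/io.h>
#include <linux/kfifo.h>
#include <linux/serial_core.h>

static void example_tx_chars(struct uart_port *port)
{
        unsigned char ch;

        if (port->x_char) {
                writeb(port->x_char, port->membase);
                port->icount.tx++;
                port->x_char = 0;
                return;
        }

        /* uart_fifo_get() pops one byte and bumps icount.tx itself. */
        while (!uart_tx_stopped(port) && uart_fifo_get(port, &ch))
                writeb(ch, port->membase);

        if (kfifo_len(&port->state->port.xmit_fifo) < WAKEUP_CHARS)
                uart_write_wakeup(port);
}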
-48
include/linux/serial_max3100.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * 4 - * Copyright (C) 2007 Christian Pellegrin 5 - */ 6 - 7 - 8 - #ifndef _LINUX_SERIAL_MAX3100_H 9 - #define _LINUX_SERIAL_MAX3100_H 1 10 - 11 - 12 - /** 13 - * struct plat_max3100 - MAX3100 SPI UART platform data 14 - * @loopback: force MAX3100 in loopback 15 - * @crystal: 1 for 3.6864 Mhz, 0 for 1.8432 16 - * @max3100_hw_suspend: MAX3100 has a shutdown pin. This is a hook 17 - * called on suspend and resume to activate it. 18 - * @poll_time: poll time for CTS signal in ms, 0 disables (so no hw 19 - * flow ctrl is possible but you have less CPU usage) 20 - * 21 - * You should use this structure in your machine description to specify 22 - * how the MAX3100 is connected. Example: 23 - * 24 - * static struct plat_max3100 max3100_plat_data = { 25 - * .loopback = 0, 26 - * .crystal = 0, 27 - * .poll_time = 100, 28 - * }; 29 - * 30 - * static struct spi_board_info spi_board_info[] = { 31 - * { 32 - * .modalias = "max3100", 33 - * .platform_data = &max3100_plat_data, 34 - * .irq = IRQ_EINT12, 35 - * .max_speed_hz = 5*1000*1000, 36 - * .chip_select = 0, 37 - * }, 38 - * }; 39 - * 40 - **/ 41 - struct plat_max3100 { 42 - int loopback; 43 - int crystal; 44 - void (*max3100_hw_suspend) (int suspend); 45 - int poll_time; 46 - }; 47 - 48 - #endif
···
+8
include/linux/tty_driver.h
··· 154 * 155 * Optional. Called under the @tty->termios_rwsem. May sleep. 156 * 157 * @set_ldisc: ``void ()(struct tty_struct *tty)`` 158 * 159 * This routine allows the @tty driver to be notified when the device's ··· 379 void (*hangup)(struct tty_struct *tty); 380 int (*break_ctl)(struct tty_struct *tty, int state); 381 void (*flush_buffer)(struct tty_struct *tty); 382 void (*set_ldisc)(struct tty_struct *tty); 383 void (*wait_until_sent)(struct tty_struct *tty, int timeout); 384 void (*send_xchar)(struct tty_struct *tty, u8 ch);
··· 154 * 155 * Optional. Called under the @tty->termios_rwsem. May sleep. 156 * 157 + * @ldisc_ok: ``int ()(struct tty_struct *tty, int ldisc)`` 158 + * 159 + * This routine allows the @tty driver to decide if it can deal 160 + * with a particular @ldisc. 161 + * 162 + * Optional. Called under the @tty->ldisc_sem and @tty->termios_rwsem. 163 + * 164 * @set_ldisc: ``void ()(struct tty_struct *tty)`` 165 * 166 * This routine allows the @tty driver to be notified when the device's ··· 372 void (*hangup)(struct tty_struct *tty); 373 int (*break_ctl)(struct tty_struct *tty, int state); 374 void (*flush_buffer)(struct tty_struct *tty); 375 + int (*ldisc_ok)(struct tty_struct *tty, int ldisc); 376 void (*set_ldisc)(struct tty_struct *tty); 377 void (*wait_until_sent)(struct tty_struct *tty, int timeout); 378 void (*send_xchar)(struct tty_struct *tty, u8 ch);
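The new hook lets a driver veto a line-discipline switch up front instead of misbehaving after the fact. A sketch, assuming the convention that returning 0 accepts the requested ldisc and a negative errno refuses it; the operations table is abridged.

#include <linux/errno.h>
#include <linux/tty.h>
#include <linux/tty_driver.h>

/* Only the default line discipline makes sense for this driver. */
static int example_ldisc_ok(struct tty_struct *tty, int ldisc)
{
        return ldisc == N_TTY ? 0 : -EINVAL;
}

static const struct tty_operations example_ops = {
        /* ...open/close/write/... */
        .ldisc_ok       = example_ldisc_ok,
};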
+49 -47
include/uapi/linux/kd.h
··· 5 #include <linux/compiler.h> 6 7 /* 0x4B is 'K', to avoid collision with termios and vt */ 8 9 - #define GIO_FONT 0x4B60 /* gets font in expanded form */ 10 - #define PIO_FONT 0x4B61 /* use font in expanded form */ 11 12 - #define GIO_FONTX 0x4B6B /* get font using struct consolefontdesc */ 13 - #define PIO_FONTX 0x4B6C /* set font using struct consolefontdesc */ 14 struct consolefontdesc { 15 unsigned short charcount; /* characters in font (256 or 512) */ 16 unsigned short charheight; /* scan lines per character (1-32) */ 17 char __user *chardata; /* font data in expanded form */ 18 }; 19 20 - #define PIO_FONTRESET 0x4B6D /* reset to default font */ 21 22 - #define GIO_CMAP 0x4B70 /* gets colour palette on VGA+ */ 23 - #define PIO_CMAP 0x4B71 /* sets colour palette on VGA+ */ 24 25 - #define KIOCSOUND 0x4B2F /* start sound generation (0 for off) */ 26 - #define KDMKTONE 0x4B30 /* generate tone */ 27 28 - #define KDGETLED 0x4B31 /* return current led state */ 29 - #define KDSETLED 0x4B32 /* set led state [lights, not flags] */ 30 #define LED_SCR 0x01 /* scroll lock led */ 31 #define LED_NUM 0x02 /* num lock led */ 32 #define LED_CAP 0x04 /* caps lock led */ 33 34 - #define KDGKBTYPE 0x4B33 /* get keyboard type */ 35 #define KB_84 0x01 36 #define KB_101 0x02 /* this is what we always answer */ 37 #define KB_OTHER 0x03 38 39 - #define KDADDIO 0x4B34 /* add i/o port as valid */ 40 - #define KDDELIO 0x4B35 /* del i/o port as valid */ 41 - #define KDENABIO 0x4B36 /* enable i/o to video board */ 42 - #define KDDISABIO 0x4B37 /* disable i/o to video board */ 43 44 - #define KDSETMODE 0x4B3A /* set text/graphics mode */ 45 #define KD_TEXT 0x00 46 #define KD_GRAPHICS 0x01 47 #define KD_TEXT0 0x02 /* obsolete */ 48 #define KD_TEXT1 0x03 /* obsolete */ 49 - #define KDGETMODE 0x4B3B /* get current mode */ 50 51 - #define KDMAPDISP 0x4B3C /* map display into address space */ 52 - #define KDUNMAPDISP 0x4B3D /* unmap display from address space */ 53 54 typedef char scrnmap_t; 55 #define E_TABSZ 256 56 - #define GIO_SCRNMAP 0x4B40 /* get screen mapping from kernel */ 57 - #define PIO_SCRNMAP 0x4B41 /* put screen mapping table in kernel */ 58 - #define GIO_UNISCRNMAP 0x4B69 /* get full Unicode screen mapping */ 59 - #define PIO_UNISCRNMAP 0x4B6A /* set full Unicode screen mapping */ 60 61 - #define GIO_UNIMAP 0x4B66 /* get unicode-to-font mapping from kernel */ 62 struct unipair { 63 unsigned short unicode; 64 unsigned short fontpos; ··· 68 unsigned short entry_ct; 69 struct unipair __user *entries; 70 }; 71 - #define PIO_UNIMAP 0x4B67 /* put unicode-to-font mapping in kernel */ 72 - #define PIO_UNIMAPCLR 0x4B68 /* clear table, possibly advise hash algorithm */ 73 struct unimapinit { 74 unsigned short advised_hashsize; /* 0 if no opinion */ 75 unsigned short advised_hashstep; /* 0 if no opinion */ ··· 84 #define K_MEDIUMRAW 0x02 85 #define K_UNICODE 0x03 86 #define K_OFF 0x04 87 - #define KDGKBMODE 0x4B44 /* gets current keyboard mode */ 88 - #define KDSKBMODE 0x4B45 /* sets current keyboard mode */ 89 90 #define K_METABIT 0x03 91 #define K_ESCPREFIX 0x04 92 - #define KDGKBMETA 0x4B62 /* gets meta key handling mode */ 93 - #define KDSKBMETA 0x4B63 /* sets meta key handling mode */ 94 95 #define K_SCROLLLOCK 0x01 96 #define K_NUMLOCK 0x02 97 #define K_CAPSLOCK 0x04 98 - #define KDGKBLED 0x4B64 /* get led flags (not lights) */ 99 - #define KDSKBLED 0x4B65 /* set led flags (not lights) */ 100 101 struct kbentry { 102 unsigned char kb_table; ··· 108 #define K_ALTTAB 0x02 109 #define K_ALTSHIFTTAB 0x03 110 
111 - #define KDGKBENT 0x4B46 /* gets one entry in translation table */ 112 - #define KDSKBENT 0x4B47 /* sets one entry in translation table */ 113 114 struct kbsentry { 115 unsigned char kb_func; 116 unsigned char kb_string[512]; 117 }; 118 - #define KDGKBSENT 0x4B48 /* gets one function key string entry */ 119 - #define KDSKBSENT 0x4B49 /* sets one function key string entry */ 120 121 struct kbdiacr { 122 unsigned char diacr, base, result; ··· 125 unsigned int kb_cnt; /* number of entries in following array */ 126 struct kbdiacr kbdiacr[256]; /* MAX_DIACR from keyboard.h */ 127 }; 128 - #define KDGKBDIACR 0x4B4A /* read kernel accent table */ 129 - #define KDSKBDIACR 0x4B4B /* write kernel accent table */ 130 131 struct kbdiacruc { 132 unsigned int diacr, base, result; ··· 135 unsigned int kb_cnt; /* number of entries in following array */ 136 struct kbdiacruc kbdiacruc[256]; /* MAX_DIACR from keyboard.h */ 137 }; 138 - #define KDGKBDIACRUC 0x4BFA /* read kernel accent table - UCS */ 139 - #define KDSKBDIACRUC 0x4BFB /* write kernel accent table - UCS */ 140 141 struct kbkeycode { 142 unsigned int scancode, keycode; 143 }; 144 - #define KDGETKEYCODE 0x4B4C /* read kernel keycode table entry */ 145 - #define KDSETKEYCODE 0x4B4D /* write kernel keycode table entry */ 146 147 - #define KDSIGACCEPT 0x4B4E /* accept kbd generated signals */ 148 149 struct kbd_repeat { 150 int delay; /* in msec; <= 0: don't change */ ··· 152 /* earlier this field was misnamed "rate" */ 153 }; 154 155 - #define KDKBDREP 0x4B52 /* set keyboard delay/repeat rate; 156 - * actually used values are returned */ 157 158 - #define KDFONTOP 0x4B72 /* font operations */ 159 160 struct console_font_op { 161 unsigned int op; /* operation code KD_FONT_OP_* */
··· 5 #include <linux/compiler.h> 6 7 /* 0x4B is 'K', to avoid collision with termios and vt */ 8 + #define KD_IOCTL_BASE 'K' 9 10 + #define GIO_FONT _IO(KD_IOCTL_BASE, 0x60) /* gets font in expanded form */ 11 + #define PIO_FONT _IO(KD_IOCTL_BASE, 0x61) /* use font in expanded form */ 12 13 + #define GIO_FONTX _IO(KD_IOCTL_BASE, 0x6B) /* get font using struct consolefontdesc */ 14 + #define PIO_FONTX _IO(KD_IOCTL_BASE, 0x6C) /* set font using struct consolefontdesc */ 15 struct consolefontdesc { 16 unsigned short charcount; /* characters in font (256 or 512) */ 17 unsigned short charheight; /* scan lines per character (1-32) */ 18 char __user *chardata; /* font data in expanded form */ 19 }; 20 21 + #define PIO_FONTRESET _IO(KD_IOCTL_BASE, 0x6D) /* reset to default font */ 22 23 + #define GIO_CMAP _IO(KD_IOCTL_BASE, 0x70) /* gets colour palette on VGA+ */ 24 + #define PIO_CMAP _IO(KD_IOCTL_BASE, 0x71) /* sets colour palette on VGA+ */ 25 26 + #define KIOCSOUND _IO(KD_IOCTL_BASE, 0x2F) /* start sound generation (0 for off) */ 27 + #define KDMKTONE _IO(KD_IOCTL_BASE, 0x30) /* generate tone */ 28 29 + #define KDGETLED _IO(KD_IOCTL_BASE, 0x31) /* return current led state */ 30 + #define KDSETLED _IO(KD_IOCTL_BASE, 0x32) /* set led state [lights, not flags] */ 31 #define LED_SCR 0x01 /* scroll lock led */ 32 #define LED_NUM 0x02 /* num lock led */ 33 #define LED_CAP 0x04 /* caps lock led */ 34 35 + #define KDGKBTYPE _IO(KD_IOCTL_BASE, 0x33) /* get keyboard type */ 36 #define KB_84 0x01 37 #define KB_101 0x02 /* this is what we always answer */ 38 #define KB_OTHER 0x03 39 40 + #define KDADDIO _IO(KD_IOCTL_BASE, 0x34) /* add i/o port as valid */ 41 + #define KDDELIO _IO(KD_IOCTL_BASE, 0x35) /* del i/o port as valid */ 42 + #define KDENABIO _IO(KD_IOCTL_BASE, 0x36) /* enable i/o to video board */ 43 + #define KDDISABIO _IO(KD_IOCTL_BASE, 0x37) /* disable i/o to video board */ 44 45 + #define KDSETMODE _IO(KD_IOCTL_BASE, 0x3A) /* set text/graphics mode */ 46 #define KD_TEXT 0x00 47 #define KD_GRAPHICS 0x01 48 #define KD_TEXT0 0x02 /* obsolete */ 49 #define KD_TEXT1 0x03 /* obsolete */ 50 + #define KDGETMODE _IO(KD_IOCTL_BASE, 0x3B) /* get current mode */ 51 52 + #define KDMAPDISP _IO(KD_IOCTL_BASE, 0x3C) /* map display into address space */ 53 + #define KDUNMAPDISP _IO(KD_IOCTL_BASE, 0x3D) /* unmap display from address space */ 54 55 typedef char scrnmap_t; 56 #define E_TABSZ 256 57 + #define GIO_SCRNMAP _IO(KD_IOCTL_BASE, 0x40) /* get screen mapping from kernel */ 58 + #define PIO_SCRNMAP _IO(KD_IOCTL_BASE, 0x41) /* put screen mapping table in kernel */ 59 + #define GIO_UNISCRNMAP _IO(KD_IOCTL_BASE, 0x69) /* get full Unicode screen mapping */ 60 + #define PIO_UNISCRNMAP _IO(KD_IOCTL_BASE, 0x6A) /* set full Unicode screen mapping */ 61 62 + #define GIO_UNIMAP _IO(KD_IOCTL_BASE, 0x66) /* get unicode-to-font mapping from kernel */ 63 struct unipair { 64 unsigned short unicode; 65 unsigned short fontpos; ··· 67 unsigned short entry_ct; 68 struct unipair __user *entries; 69 }; 70 + #define PIO_UNIMAP _IO(KD_IOCTL_BASE, 0x67) /* put unicode-to-font mapping in kernel */ 71 + #define PIO_UNIMAPCLR _IO(KD_IOCTL_BASE, 0x68) /* clear table, possibly advise hash algorithm */ 72 struct unimapinit { 73 unsigned short advised_hashsize; /* 0 if no opinion */ 74 unsigned short advised_hashstep; /* 0 if no opinion */ ··· 83 #define K_MEDIUMRAW 0x02 84 #define K_UNICODE 0x03 85 #define K_OFF 0x04 86 + #define KDGKBMODE _IO(KD_IOCTL_BASE, 0x44) /* gets current keyboard mode */ 87 + #define KDSKBMODE 
_IO(KD_IOCTL_BASE, 0x45) /* sets current keyboard mode */ 88 89 #define K_METABIT 0x03 90 #define K_ESCPREFIX 0x04 91 + #define KDGKBMETA _IO(KD_IOCTL_BASE, 0x62) /* gets meta key handling mode */ 92 + #define KDSKBMETA _IO(KD_IOCTL_BASE, 0x63) /* sets meta key handling mode */ 93 94 #define K_SCROLLLOCK 0x01 95 #define K_NUMLOCK 0x02 96 #define K_CAPSLOCK 0x04 97 + #define KDGKBLED _IO(KD_IOCTL_BASE, 0x64) /* get led flags (not lights) */ 98 + #define KDSKBLED _IO(KD_IOCTL_BASE, 0x65) /* set led flags (not lights) */ 99 100 struct kbentry { 101 unsigned char kb_table; ··· 107 #define K_ALTTAB 0x02 108 #define K_ALTSHIFTTAB 0x03 109 110 + #define KDGKBENT _IO(KD_IOCTL_BASE, 0x46) /* gets one entry in translation table */ 111 + #define KDSKBENT _IO(KD_IOCTL_BASE, 0x47) /* sets one entry in translation table */ 112 113 struct kbsentry { 114 unsigned char kb_func; 115 unsigned char kb_string[512]; 116 }; 117 + #define KDGKBSENT _IO(KD_IOCTL_BASE, 0x48) /* gets one function key string entry */ 118 + #define KDSKBSENT _IO(KD_IOCTL_BASE, 0x49) /* sets one function key string entry */ 119 120 struct kbdiacr { 121 unsigned char diacr, base, result; ··· 124 unsigned int kb_cnt; /* number of entries in following array */ 125 struct kbdiacr kbdiacr[256]; /* MAX_DIACR from keyboard.h */ 126 }; 127 + #define KDGKBDIACR _IO(KD_IOCTL_BASE, 0x4A) /* read kernel accent table */ 128 + #define KDSKBDIACR _IO(KD_IOCTL_BASE, 0x4B) /* write kernel accent table */ 129 130 struct kbdiacruc { 131 unsigned int diacr, base, result; ··· 134 unsigned int kb_cnt; /* number of entries in following array */ 135 struct kbdiacruc kbdiacruc[256]; /* MAX_DIACR from keyboard.h */ 136 }; 137 + #define KDGKBDIACRUC _IO(KD_IOCTL_BASE, 0xFA) /* read kernel accent table - UCS */ 138 + #define KDSKBDIACRUC _IO(KD_IOCTL_BASE, 0xFB) /* write kernel accent table - UCS */ 139 140 struct kbkeycode { 141 unsigned int scancode, keycode; 142 }; 143 + #define KDGETKEYCODE _IO(KD_IOCTL_BASE, 0x4C) /* read kernel keycode table entry */ 144 + #define KDSETKEYCODE _IO(KD_IOCTL_BASE, 0x4D) /* write kernel keycode table entry */ 145 146 + #define KDSIGACCEPT _IO(KD_IOCTL_BASE, 0x4E) /* accept kbd generated signals */ 147 148 struct kbd_repeat { 149 int delay; /* in msec; <= 0: don't change */ ··· 151 /* earlier this field was misnamed "rate" */ 152 }; 153 154 + #define KDKBDREP _IO(KD_IOCTL_BASE, 0x52) /* set keyboard delay/repeat rate; 155 + * actually used values are returned 156 + */ 157 158 + #define KDFONTOP _IO(KD_IOCTL_BASE, 0x72) /* font operations */ 159 160 struct console_font_op { 161 unsigned int op; /* operation code KD_FONT_OP_* */
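The switch to _IO() is a notational cleanup: with the asm-generic ioctl encoding (_IOC_NONE == 0, as on x86 and arm64) the generated numbers are identical to the old hard-coded constants, e.g. _IO('K', 0x31) still evaluates to 0x4b31, so existing userspace keeps working unmodified. A small userspace check; /dev/tty0 is just an illustrative console node and may need appropriate privileges.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kd.h>

int main(void)
{
        char leds;
        int fd;

        /* 0x4b31 with the asm-generic ioctl encoding */
        printf("KDGETLED = %#lx\n", (unsigned long)KDGETLED);

        fd = open("/dev/tty0", O_RDONLY);
        if (fd >= 0 && ioctl(fd, KDGETLED, &leds) == 0)
                printf("led state: %#x\n", leds);
        if (fd >= 0)
                close(fd);

        return 0;
}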
+1 -1
kernel/printk/Makefile
··· 1 # SPDX-License-Identifier: GPL-2.0-only 2 - obj-y = printk.o 3 obj-$(CONFIG_PRINTK) += printk_safe.o nbcon.o 4 obj-$(CONFIG_A11Y_BRAILLE_CONSOLE) += braille.o 5 obj-$(CONFIG_PRINTK_INDEX) += index.o
··· 1 # SPDX-License-Identifier: GPL-2.0-only 2 + obj-y = printk.o conopt.o 3 obj-$(CONFIG_PRINTK) += printk_safe.o nbcon.o 4 obj-$(CONFIG_A11Y_BRAILLE_CONSOLE) += braille.o 5 obj-$(CONFIG_PRINTK_INDEX) += index.o
+146
kernel/printk/conopt.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Kernel command line console options for hardware based addressing 4 + * 5 + * Copyright (C) 2023 Texas Instruments Incorporated - https://www.ti.com/ 6 + * Author: Tony Lindgren <tony@atomide.com> 7 + */ 8 + 9 + #include <linux/console.h> 10 + #include <linux/init.h> 11 + #include <linux/string.h> 12 + #include <linux/types.h> 13 + 14 + #include <asm/errno.h> 15 + 16 + #include "console_cmdline.h" 17 + 18 + /* 19 + * Allow longer DEVNAME:0.0 style console naming such as abcd0000.serial:0.0 20 + * in addition to the legacy ttyS0 style naming. 21 + */ 22 + #define CONSOLE_NAME_MAX 32 23 + 24 + #define CONSOLE_OPT_MAX 16 25 + #define CONSOLE_BRL_OPT_MAX 16 26 + 27 + struct console_option { 28 + char name[CONSOLE_NAME_MAX]; 29 + char opt[CONSOLE_OPT_MAX]; 30 + char brl_opt[CONSOLE_BRL_OPT_MAX]; 31 + u8 has_brl_opt:1; 32 + }; 33 + 34 + /* Updated only at console_setup() time, no locking needed */ 35 + static struct console_option conopt[MAX_CMDLINECONSOLES]; 36 + 37 + /** 38 + * console_opt_save - Saves kernel command line console option for driver use 39 + * @str: Kernel command line console name and option 40 + * @brl_opt: Braille console options 41 + * 42 + * Saves a kernel command line console option for driver subsystems to use for 43 + * adding a preferred console during init. Called from console_setup() only. 44 + * 45 + * Return: 0 on success, negative error code on failure. 46 + */ 47 + int __init console_opt_save(const char *str, const char *brl_opt) 48 + { 49 + struct console_option *con; 50 + size_t namelen, optlen; 51 + const char *opt; 52 + int i; 53 + 54 + namelen = strcspn(str, ","); 55 + if (namelen == 0 || namelen >= CONSOLE_NAME_MAX) 56 + return -EINVAL; 57 + 58 + opt = str + namelen; 59 + if (*opt == ',') 60 + opt++; 61 + 62 + optlen = strlen(opt); 63 + if (optlen >= CONSOLE_OPT_MAX) 64 + return -EINVAL; 65 + 66 + for (i = 0; i < MAX_CMDLINECONSOLES; i++) { 67 + con = &conopt[i]; 68 + 69 + if (con->name[0]) { 70 + if (!strncmp(str, con->name, namelen)) 71 + return 0; 72 + continue; 73 + } 74 + 75 + /* 76 + * The name isn't terminated, only opt is. Empty opt is fine, 77 + * but brl_opt can be either empty or NULL. For more info, see 78 + * _braille_console_setup(). 79 + */ 80 + strscpy(con->name, str, namelen + 1); 81 + strscpy(con->opt, opt, CONSOLE_OPT_MAX); 82 + if (brl_opt) { 83 + strscpy(con->brl_opt, brl_opt, CONSOLE_BRL_OPT_MAX); 84 + con->has_brl_opt = 1; 85 + } 86 + 87 + return 0; 88 + } 89 + 90 + return -ENOMEM; 91 + } 92 + 93 + static struct console_option *console_opt_find(const char *name) 94 + { 95 + struct console_option *con; 96 + int i; 97 + 98 + for (i = 0; i < MAX_CMDLINECONSOLES; i++) { 99 + con = &conopt[i]; 100 + if (!strcmp(name, con->name)) 101 + return con; 102 + } 103 + 104 + return NULL; 105 + } 106 + 107 + /** 108 + * add_preferred_console_match - Adds a preferred console if a match is found 109 + * @match: Expected console on kernel command line, such as console=DEVNAME:0.0 110 + * @name: Name of the console character device to add such as ttyS 111 + * @idx: Index for the console 112 + * 113 + * Allows driver subsystems to add a console after translating the command 114 + * line name to the character device name used for the console. Options are 115 + * added automatically based on the kernel command line. Duplicate preferred 116 + * consoles are ignored by __add_preferred_console(). 117 + * 118 + * Return: 0 on success, negative error code on failure. 
119 + */ 120 + int add_preferred_console_match(const char *match, const char *name, 121 + const short idx) 122 + { 123 + struct console_option *con; 124 + char *brl_opt = NULL; 125 + 126 + if (!match || !strlen(match) || !name || !strlen(name) || 127 + idx < 0) 128 + return -EINVAL; 129 + 130 + con = console_opt_find(match); 131 + if (!con) 132 + return -ENOENT; 133 + 134 + /* 135 + * See __add_preferred_console(). It checks for NULL brl_options to set 136 + * the preferred_console flag. Empty brl_opt instead of NULL leads into 137 + * the preferred_console flag not set, and CON_CONSDEV not being set, 138 + * and the boot console won't get disabled at the end of console_setup(). 139 + */ 140 + if (con->has_brl_opt) 141 + brl_opt = con->brl_opt; 142 + 143 + console_opt_add_preferred_console(name, idx, con->opt, brl_opt); 144 + 145 + return 0; 146 + }
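The intended caller is a driver subsystem that knows both the hardware-based console name and the character device it registers. A rough sketch of that translation; example_add_console(), the match-string format built from dev_name() and the ttyS mapping are assumptions for illustration, not code from this series.

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/printk.h>

static int example_add_console(struct device *dev, int ctrl_id, int port_id,
                               int line)
{
        char match[64];

        /* Build the DEVNAME:<n>.<n> form used on the command line,
         * e.g. console=2800000.serial:0.0, and map it to ttyS<line>. */
        snprintf(match, sizeof(match), "%s:%d.%d", dev_name(dev),
                 ctrl_id, port_id);

        return add_preferred_console_match(match, "ttyS", line);
}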
+6
kernel/printk/console_cmdline.h
··· 2 #ifndef _CONSOLE_CMDLINE_H 3 #define _CONSOLE_CMDLINE_H 4 5 struct console_cmdline 6 { 7 char name[16]; /* Name of the driver */
··· 2 #ifndef _CONSOLE_CMDLINE_H 3 #define _CONSOLE_CMDLINE_H 4 5 + #define MAX_CMDLINECONSOLES 8 6 + 7 + int console_opt_save(const char *str, const char *brl_opt); 8 + int console_opt_add_preferred_console(const char *name, const short idx, 9 + char *options, char *brl_options); 10 + 11 struct console_cmdline 12 { 13 char name[16]; /* Name of the driver */
+72 -28
kernel/printk/printk.c
··· 383 /* 384 * Array of consoles built from command line options (console=) 385 */ 386 - 387 - #define MAX_CMDLINECONSOLES 8 388 - 389 static struct console_cmdline console_cmdline[MAX_CMDLINECONSOLES]; 390 391 static int preferred_console = -1; ··· 2500 if (_braille_console_setup(&str, &brl_options)) 2501 return 1; 2502 2503 /* 2504 * Decode str into name, index, options. 2505 */ ··· 2540 return 1; 2541 } 2542 __setup("console=", console_setup); 2543 2544 /** 2545 * add_preferred_console - add a device to the list of preferred consoles. ··· 3161 pr_flush(1000, true); 3162 } 3163 3164 /** 3165 * console_flush_on_panic - flush console content on panic 3166 * @mode: flush all messages in buffer or just the pending ones ··· 3223 */ 3224 console_may_schedule = 0; 3225 3226 - if (mode == CONSOLE_REPLAY_ALL) { 3227 - struct console *c; 3228 - short flags; 3229 - int cookie; 3230 - u64 seq; 3231 - 3232 - seq = prb_first_valid_seq(prb); 3233 - 3234 - cookie = console_srcu_read_lock(); 3235 - for_each_console_srcu(c) { 3236 - flags = console_srcu_read_flags(c); 3237 - 3238 - if (flags & CON_NBCON) { 3239 - nbcon_seq_force(c, seq); 3240 - } else { 3241 - /* 3242 - * This is an unsynchronized assignment. On 3243 - * panic legacy consoles are only best effort. 3244 - */ 3245 - c->seq = seq; 3246 - } 3247 - } 3248 - console_srcu_read_unlock(cookie); 3249 - } 3250 3251 console_flush_all(false, &next_seq, &handover); 3252 } ··· 3522 * Note that a console with tty binding will have CON_CONSDEV 3523 * flag set and will be first in the list. 3524 */ 3525 - if (preferred_console < 0) { 3526 if (hlist_empty(&console_list) || !console_first()->device || 3527 console_first()->flags & CON_BOOT) { 3528 try_enable_default_console(newcon); ··· 4313 } 4314 EXPORT_SYMBOL_GPL(kmsg_dump_rewind); 4315 4316 #endif 4317 4318 #ifdef CONFIG_SMP
··· 383 /* 384 * Array of consoles built from command line options (console=) 385 */ 386 static struct console_cmdline console_cmdline[MAX_CMDLINECONSOLES]; 387 388 static int preferred_console = -1; ··· 2503 if (_braille_console_setup(&str, &brl_options)) 2504 return 1; 2505 2506 + /* Save the console for driver subsystem use */ 2507 + if (console_opt_save(str, brl_options)) 2508 + return 1; 2509 + 2510 + /* Flag register_console() to not call try_enable_default_console() */ 2511 + console_set_on_cmdline = 1; 2512 + 2513 + /* Don't attempt to parse a DEVNAME:0.0 style console */ 2514 + if (strchr(str, ':')) 2515 + return 1; 2516 + 2517 /* 2518 * Decode str into name, index, options. 2519 */ ··· 2532 return 1; 2533 } 2534 __setup("console=", console_setup); 2535 + 2536 + /* Only called from add_preferred_console_match() */ 2537 + int console_opt_add_preferred_console(const char *name, const short idx, 2538 + char *options, char *brl_options) 2539 + { 2540 + return __add_preferred_console(name, idx, options, brl_options, true); 2541 + } 2542 2543 /** 2544 * add_preferred_console - add a device to the list of preferred consoles. ··· 3146 pr_flush(1000, true); 3147 } 3148 3149 + /* 3150 + * Rewind all consoles to the oldest available record. 3151 + * 3152 + * IMPORTANT: The function is safe only when called under 3153 + * console_lock(). It is not enforced because 3154 + * it is used as a best effort in panic(). 3155 + */ 3156 + static void __console_rewind_all(void) 3157 + { 3158 + struct console *c; 3159 + short flags; 3160 + int cookie; 3161 + u64 seq; 3162 + 3163 + seq = prb_first_valid_seq(prb); 3164 + 3165 + cookie = console_srcu_read_lock(); 3166 + for_each_console_srcu(c) { 3167 + flags = console_srcu_read_flags(c); 3168 + 3169 + if (flags & CON_NBCON) { 3170 + nbcon_seq_force(c, seq); 3171 + } else { 3172 + /* 3173 + * This assignment is safe only when called under 3174 + * console_lock(). On panic, legacy consoles are 3175 + * only best effort. 3176 + */ 3177 + c->seq = seq; 3178 + } 3179 + } 3180 + console_srcu_read_unlock(cookie); 3181 + } 3182 + 3183 /** 3184 * console_flush_on_panic - flush console content on panic 3185 * @mode: flush all messages in buffer or just the pending ones ··· 3174 */ 3175 console_may_schedule = 0; 3176 3177 + if (mode == CONSOLE_REPLAY_ALL) 3178 + __console_rewind_all(); 3179 3180 console_flush_all(false, &next_seq, &handover); 3181 } ··· 3495 * Note that a console with tty binding will have CON_CONSDEV 3496 * flag set and will be first in the list. 3497 */ 3498 + if (preferred_console < 0 && !console_set_on_cmdline) { 3499 if (hlist_empty(&console_list) || !console_first()->device || 3500 console_first()->flags & CON_BOOT) { 3501 try_enable_default_console(newcon); ··· 4286 } 4287 EXPORT_SYMBOL_GPL(kmsg_dump_rewind); 4288 4289 + /** 4290 + * console_replay_all - replay kernel log on consoles 4291 + * 4292 + * Try to obtain lock on console subsystem and replay all 4293 + * available records in printk buffer on the consoles. 4294 + * Does nothing if lock is not obtained. 4295 + * 4296 + * Context: Any context. 4297 + */ 4298 + void console_replay_all(void) 4299 + { 4300 + if (console_trylock()) { 4301 + __console_rewind_all(); 4302 + /* Consoles are flushed as part of console_unlock(). */ 4303 + console_unlock(); 4304 + } 4305 + } 4306 #endif 4307 4308 #ifdef CONFIG_SMP
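console_replay_all() is aimed at contexts that want to re-emit the whole log buffer on demand, for instance a sysrq-style hook; since it only trylocks the console lock, it is safe to call from most contexts and quietly does nothing if the lock is contended. A hypothetical hook, assuming the current sysrq_key_op handler signature; the key and messages are made up for illustration.

#include <linux/printk.h>
#include <linux/sysrq.h>

static void example_sysrq_replay(u8 key)
{
        console_replay_all();
}

static const struct sysrq_key_op example_replay_op = {
        .handler        = example_sysrq_replay,
        .help_msg       = "replay-kernel-logs(x)",
        .action_msg     = "Replaying kernel logs on consoles",
};

/* Wired up with something like register_sysrq_key('x', &example_replay_op). */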
+55 -52
lib/kfifo.c
··· 5 * Copyright (C) 2009/2010 Stefani Seibold <stefani@seibold.net> 6 */ 7 8 #include <linux/err.h> 9 #include <linux/export.h> 10 #include <linux/kfifo.h> ··· 164 } 165 EXPORT_SYMBOL(__kfifo_out_peek); 166 167 unsigned int __kfifo_out(struct __kfifo *fifo, 168 void *buf, unsigned int len) 169 { ··· 306 } 307 EXPORT_SYMBOL(__kfifo_to_user); 308 309 - static int setup_sgl_buf(struct scatterlist *sgl, void *buf, 310 - int nents, unsigned int len) 311 { 312 - int n; 313 - unsigned int l; 314 - unsigned int off; 315 - struct page *page; 316 317 - if (!nents) 318 return 0; 319 320 - if (!len) 321 - return 0; 322 323 - n = 0; 324 - page = virt_to_page(buf); 325 - off = offset_in_page(buf); 326 - l = 0; 327 - 328 - while (len >= l + PAGE_SIZE - off) { 329 - struct page *npage; 330 - 331 - l += PAGE_SIZE; 332 - buf += PAGE_SIZE; 333 - npage = virt_to_page(buf); 334 - if (page_to_phys(page) != page_to_phys(npage) - l) { 335 - sg_set_page(sgl, page, l - off, off); 336 - sgl = sg_next(sgl); 337 - if (++n == nents || sgl == NULL) 338 - return n; 339 - page = npage; 340 - len -= l - off; 341 - l = off = 0; 342 - } 343 } 344 - sg_set_page(sgl, page, len, off); 345 - return n + 1; 346 } 347 348 static unsigned int setup_sgl(struct __kfifo *fifo, struct scatterlist *sgl, 349 - int nents, unsigned int len, unsigned int off) 350 { 351 unsigned int size = fifo->mask + 1; 352 unsigned int esize = fifo->esize; 353 - unsigned int l; 354 unsigned int n; 355 356 off &= fifo->mask; ··· 339 size *= esize; 340 len *= esize; 341 } 342 - l = min(len, size - off); 343 344 - n = setup_sgl_buf(sgl, fifo->data + off, nents, l); 345 - n += setup_sgl_buf(sgl + n, fifo->data, nents - n, len - l); 346 347 return n; 348 } 349 350 unsigned int __kfifo_dma_in_prepare(struct __kfifo *fifo, 351 - struct scatterlist *sgl, int nents, unsigned int len) 352 { 353 unsigned int l; 354 ··· 357 if (len > l) 358 len = l; 359 360 - return setup_sgl(fifo, sgl, nents, len, fifo->in); 361 } 362 EXPORT_SYMBOL(__kfifo_dma_in_prepare); 363 364 unsigned int __kfifo_dma_out_prepare(struct __kfifo *fifo, 365 - struct scatterlist *sgl, int nents, unsigned int len) 366 { 367 unsigned int l; 368 ··· 371 if (len > l) 372 len = l; 373 374 - return setup_sgl(fifo, sgl, nents, len, fifo->out); 375 } 376 EXPORT_SYMBOL(__kfifo_dma_out_prepare); 377 ··· 469 } 470 EXPORT_SYMBOL(__kfifo_out_peek_r); 471 472 unsigned int __kfifo_out_r(struct __kfifo *fifo, void *buf, 473 unsigned int len, size_t recsize) 474 { ··· 555 EXPORT_SYMBOL(__kfifo_to_user_r); 556 557 unsigned int __kfifo_dma_in_prepare_r(struct __kfifo *fifo, 558 - struct scatterlist *sgl, int nents, unsigned int len, size_t recsize) 559 { 560 BUG_ON(!nents); 561 ··· 565 if (len + recsize > kfifo_unused(fifo)) 566 return 0; 567 568 - return setup_sgl(fifo, sgl, nents, len, fifo->in + recsize); 569 } 570 EXPORT_SYMBOL(__kfifo_dma_in_prepare_r); 571 ··· 579 EXPORT_SYMBOL(__kfifo_dma_in_finish_r); 580 581 unsigned int __kfifo_dma_out_prepare_r(struct __kfifo *fifo, 582 - struct scatterlist *sgl, int nents, unsigned int len, size_t recsize) 583 { 584 BUG_ON(!nents); 585 ··· 589 if (len + recsize > fifo->in - fifo->out) 590 return 0; 591 592 - return setup_sgl(fifo, sgl, nents, len, fifo->out + recsize); 593 } 594 EXPORT_SYMBOL(__kfifo_dma_out_prepare_r); 595 596 - void __kfifo_dma_out_finish_r(struct __kfifo *fifo, size_t recsize) 597 - { 598 - unsigned int len; 599 - 600 - len = __kfifo_peek_n(fifo, recsize); 601 - fifo->out += len + recsize; 602 - } 603 - EXPORT_SYMBOL(__kfifo_dma_out_finish_r);
··· 5 * Copyright (C) 2009/2010 Stefani Seibold <stefani@seibold.net> 6 */ 7 8 + #include <linux/dma-mapping.h> 9 #include <linux/err.h> 10 #include <linux/export.h> 11 #include <linux/kfifo.h> ··· 163 } 164 EXPORT_SYMBOL(__kfifo_out_peek); 165 166 + unsigned int __kfifo_out_linear(struct __kfifo *fifo, 167 + unsigned int *tail, unsigned int n) 168 + { 169 + unsigned int size = fifo->mask + 1; 170 + unsigned int off = fifo->out & fifo->mask; 171 + 172 + if (tail) 173 + *tail = off; 174 + 175 + return min3(n, fifo->in - fifo->out, size - off); 176 + } 177 + EXPORT_SYMBOL(__kfifo_out_linear); 178 + 179 unsigned int __kfifo_out(struct __kfifo *fifo, 180 void *buf, unsigned int len) 181 { ··· 292 } 293 EXPORT_SYMBOL(__kfifo_to_user); 294 295 + static unsigned int setup_sgl_buf(struct __kfifo *fifo, struct scatterlist *sgl, 296 + unsigned int data_offset, int nents, 297 + unsigned int len, dma_addr_t dma) 298 { 299 + const void *buf = fifo->data + data_offset; 300 301 + if (!nents || !len) 302 return 0; 303 304 + sg_set_buf(sgl, buf, len); 305 306 + if (dma != DMA_MAPPING_ERROR) { 307 + sg_dma_address(sgl) = dma + data_offset; 308 + sg_dma_len(sgl) = len; 309 } 310 + 311 + return 1; 312 } 313 314 static unsigned int setup_sgl(struct __kfifo *fifo, struct scatterlist *sgl, 315 + int nents, unsigned int len, unsigned int off, dma_addr_t dma) 316 { 317 unsigned int size = fifo->mask + 1; 318 unsigned int esize = fifo->esize; 319 + unsigned int len_to_end; 320 unsigned int n; 321 322 off &= fifo->mask; ··· 345 size *= esize; 346 len *= esize; 347 } 348 + len_to_end = min(len, size - off); 349 350 + n = setup_sgl_buf(fifo, sgl, off, nents, len_to_end, dma); 351 + n += setup_sgl_buf(fifo, sgl + n, 0, nents - n, len - len_to_end, dma); 352 353 return n; 354 } 355 356 unsigned int __kfifo_dma_in_prepare(struct __kfifo *fifo, 357 + struct scatterlist *sgl, int nents, unsigned int len, 358 + dma_addr_t dma) 359 { 360 unsigned int l; 361 ··· 362 if (len > l) 363 len = l; 364 365 + return setup_sgl(fifo, sgl, nents, len, fifo->in, dma); 366 } 367 EXPORT_SYMBOL(__kfifo_dma_in_prepare); 368 369 unsigned int __kfifo_dma_out_prepare(struct __kfifo *fifo, 370 + struct scatterlist *sgl, int nents, unsigned int len, 371 + dma_addr_t dma) 372 { 373 unsigned int l; 374 ··· 375 if (len > l) 376 len = l; 377 378 + return setup_sgl(fifo, sgl, nents, len, fifo->out, dma); 379 } 380 EXPORT_SYMBOL(__kfifo_dma_out_prepare); 381 ··· 473 } 474 EXPORT_SYMBOL(__kfifo_out_peek_r); 475 476 + unsigned int __kfifo_out_linear_r(struct __kfifo *fifo, 477 + unsigned int *tail, unsigned int n, size_t recsize) 478 + { 479 + if (fifo->in == fifo->out) 480 + return 0; 481 + 482 + if (tail) 483 + *tail = fifo->out + recsize; 484 + 485 + return min(n, __kfifo_peek_n(fifo, recsize)); 486 + } 487 + EXPORT_SYMBOL(__kfifo_out_linear_r); 488 + 489 unsigned int __kfifo_out_r(struct __kfifo *fifo, void *buf, 490 unsigned int len, size_t recsize) 491 { ··· 546 EXPORT_SYMBOL(__kfifo_to_user_r); 547 548 unsigned int __kfifo_dma_in_prepare_r(struct __kfifo *fifo, 549 + struct scatterlist *sgl, int nents, unsigned int len, size_t recsize, 550 + dma_addr_t dma) 551 { 552 BUG_ON(!nents); 553 ··· 555 if (len + recsize > kfifo_unused(fifo)) 556 return 0; 557 558 + return setup_sgl(fifo, sgl, nents, len, fifo->in + recsize, dma); 559 } 560 EXPORT_SYMBOL(__kfifo_dma_in_prepare_r); 561 ··· 569 EXPORT_SYMBOL(__kfifo_dma_in_finish_r); 570 571 unsigned int __kfifo_dma_out_prepare_r(struct __kfifo *fifo, 572 + struct scatterlist *sgl, int nents, unsigned int 
len, size_t recsize, 573 + dma_addr_t dma) 574 { 575 BUG_ON(!nents); 576 ··· 578 if (len + recsize > fifo->in - fifo->out) 579 return 0; 580 581 + return setup_sgl(fifo, sgl, nents, len, fifo->out + recsize, dma); 582 } 583 EXPORT_SYMBOL(__kfifo_dma_out_prepare_r); 584
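The dma_addr_t threaded through setup_sgl() is what the *_mapped variants are for: when the whole fifo buffer has been mapped once up front, setup_sgl_buf() fills sg_dma_address()/sg_dma_len() from that base plus the in-buffer offset, so no per-transfer dma_map_sg() is needed. A sketch under those assumptions; the fifo, its size and tx_dma (the address an earlier dma_map_single() of the buffer returned) are illustrative.

#include <linux/array_size.h>
#include <linux/dma-mapping.h>
#include <linux/kfifo.h>
#include <linux/minmax.h>
#include <linux/scatterlist.h>

static DEFINE_KFIFO(dma_tx_fifo, unsigned char, 4096);

static void example_start_tx_dma(dma_addr_t tx_dma, unsigned int len)
{
        struct scatterlist sg[2];
        unsigned int nents;

        len = min_t(unsigned int, len, kfifo_len(&dma_tx_fifo));

        nents = kfifo_dma_out_prepare_mapped(&dma_tx_fifo, sg,
                                             ARRAY_SIZE(sg), len, tx_dma);
        if (!nents)
                return;         /* fifo empty, nothing to transmit */

        /* sg[0..nents-1] now carries DMA addresses and lengths; submit
         * it to the DMA engine here, and from the completion callback: */
        kfifo_dma_out_finish(&dma_tx_fifo, len);
}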